
What Can We Learn About a Generated Image Corrupting Its Latent Representation?

  • Conference paper
  • First Online:
Medical Image Computing and Computer Assisted Intervention – MICCAI 2022 (MICCAI 2022)

Abstract

Generative adversarial networks (GANs) offer an effective solution to the image-to-image translation problem and thereby open new possibilities in medical imaging: they can translate images from one imaging modality to another at low cost. For unpaired datasets, they rely mostly on a cycle-consistency loss which, despite its effectiveness in learning the underlying data distribution, can lead to discrepancies between input and output data. The purpose of this work is to investigate the hypothesis that we can predict image quality from its latent representation in the GAN's bottleneck. We do so by corrupting the latent representation with noise and generating multiple outputs. The degree of difference between them is interpreted as the strength of the representation: the more robust the latent representation, the fewer changes the corruption causes in the output image. Our results demonstrate that the proposed method can (i) predict uncertain parts of synthesized images, and (ii) identify samples that may not be reliable for downstream tasks such as liver segmentation.
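The corruption procedure described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: `encode` and `decode` are hypothetical callables standing in for the translator's bottleneck encoder and decoder, and the Gaussian noise scale `sigma` and sample count `n_samples` are assumed hyperparameters. The per-pixel standard deviation across the noisy decodings serves as the uncertainty map.

```python
import numpy as np

def uncertainty_map(encode, decode, image, n_samples=8, sigma=0.1, seed=None):
    """Estimate per-pixel uncertainty by corrupting the latent code.

    encode : callable mapping an image to its latent (bottleneck) code
    decode : callable mapping a latent code back to image space
    Pixels whose values vary strongly across the noisy decodings are
    flagged as uncertain (a weak latent representation); pixels that
    stay stable indicate a robust representation.
    """
    rng = np.random.default_rng(seed)
    z = encode(image)
    # Decode several noise-corrupted copies of the same latent code.
    outputs = np.stack([
        decode(z + rng.normal(0.0, sigma, size=z.shape))
        for _ in range(n_samples)
    ])
    # Spread across decodings = per-pixel uncertainty estimate.
    return outputs.std(axis=0)
```

With `sigma = 0` the map is identically zero, since every decoding sees the same latent code; in practice the map would be thresholded or averaged per sample to rank translated images for downstream use.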



Author information

Corresponding author

Correspondence to Agnieszka Tomczak.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (pdf 830 KB)

Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Tomczak, A., Gupta, A., Ilic, S., Navab, N., Albarqouni, S. (2022). What Can We Learn About a Generated Image Corrupting Its Latent Representation? In: Wang, L., Dou, Q., Fletcher, P.T., Speidel, S., Li, S. (eds) Medical Image Computing and Computer Assisted Intervention – MICCAI 2022. MICCAI 2022. Lecture Notes in Computer Science, vol 13436. Springer, Cham. https://doi.org/10.1007/978-3-031-16446-0_48

  • DOI: https://doi.org/10.1007/978-3-031-16446-0_48

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-16445-3

  • Online ISBN: 978-3-031-16446-0

  • eBook Packages: Computer Science (R0)
