Abstract
Magnetic Resonance Imaging (MRI) is a widely used non-invasive medical imaging technique that provides excellent soft-tissue contrast, making it invaluable for diagnosis and intervention. Acquiring multiple contrast images is often desirable for comprehensive evaluation and precise disease diagnosis. However, due to technical limitations, patient-related issues, and medical conditions, obtaining all desired MRI contrasts is not always feasible. Cross-contrast MRI synthesis can potentially address this challenge by generating target contrasts from existing source contrasts. In this work, we propose Contrast Representation Learning (CRL), which explores how MRI contrast changes as MR sequence parameters are modified. Unlike generative models that treat image generation as an end-to-end cross-domain mapping, CRL aims to uncover the complex relationships between contrasts by modeling the interplay of imaging parameters within the contrast space. By doing so, CRL enhances the fidelity and realism of synthesized MR images, providing a more accurate representation of intricate details. Experimental results on the Fast Spin Echo (FSE) sequence demonstrate the promising performance and generalization capability of CRL, even with limited training data. Moreover, CRL introduces the perspective of treating imaging parameters as implicit coordinates, shedding light on the underlying structure governing contrast variation in MR images. Our code is available at https://github.com/xionghonglin/CRL_MICCAI_2024.
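To make the role of imaging parameters concrete, the sketch below illustrates the standard spin-echo signal model, in which repetition time (TR) and echo time (TE) jointly determine image contrast from the underlying tissue properties (proton density, T1, T2). This is textbook MR physics, not the CRL method described in the abstract; CRL learns the mapping between contrasts from data, with imaging parameters acting as implicit coordinates. All numeric values and names here are illustrative assumptions.

```python
import numpy as np

def spin_echo_signal(pd, t1, t2, tr, te):
    """Standard spin-echo signal model: contrast is governed by TR and TE.

    pd, t1, t2 : tissue parameters (proton density, T1 and T2 in ms).
    tr, te     : sequence parameters (repetition time and echo time in ms)
                 that act as the 'coordinates' controlling image contrast.
    """
    return pd * (1.0 - np.exp(-tr / t1)) * np.exp(-te / t2)

# Illustrative tissue values, roughly white matter at 3T (for demonstration only).
pd, t1, t2 = 0.7, 800.0, 80.0

# The same tissue yields different intensities under different (TR, TE) settings,
# i.e. different contrasts of the same anatomy.
t1_weighted = spin_echo_signal(pd, t1, t2, tr=500.0, te=15.0)
t2_weighted = spin_echo_signal(pd, t1, t2, tr=4000.0, te=100.0)
print(t1_weighted, t2_weighted)
```

Under this view, varying (TR, TE) traces out a family of contrasts of the same anatomy, which is the structure the abstract describes CRL as learning implicitly from data.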
H. Xiong and Y. Fang contributed equally to this work.
Acknowledgments
This work was partially supported by National Natural Science Foundation of China (62131015) and Shanghai Municipal Central Guided Local Science and Technology Development Fund (YDZX20233100001001).
Ethics declarations
Disclosure of Interests.
The authors declare no competing interests.
Copyright information
© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Xiong, H. et al. (2024). Contrast Representation Learning from Imaging Parameters for Magnetic Resonance Image Synthesis. In: Linguraru, M.G., et al. Medical Image Computing and Computer Assisted Intervention – MICCAI 2024. MICCAI 2024. Lecture Notes in Computer Science, vol 15007. Springer, Cham. https://doi.org/10.1007/978-3-031-72104-5_18
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-72103-8
Online ISBN: 978-3-031-72104-5