
A foundation model for enhancing magnetic resonance images and downstream segmentation, registration and diagnostic tasks

Abstract

In structural magnetic resonance (MR) imaging, motion artefacts, low resolution, imaging noise and variability in acquisition protocols frequently degrade image quality and confound downstream analyses. Here we report a foundation model for the motion correction, resolution enhancement, denoising and harmonization of MR images. Specifically, we trained a tissue-classification neural network to predict tissue labels, which are then leveraged by a ‘tissue-aware’ enhancement network to generate high-quality MR images. We validated the model’s effectiveness on a large and diverse dataset comprising 2,448 deliberately corrupted images and 10,963 images spanning a wide age range (from foetuses to elderly individuals) acquired using a variety of clinical scanners across 19 public datasets. The model consistently outperformed state-of-the-art algorithms in improving the quality of MR images, handling pathological brains with multiple sclerosis or gliomas, generating 7-T-like images from 3 T scans and harmonizing images acquired from different scanners. The high-quality, high-resolution and harmonized images generated by the model can be used to enhance the performance of models for tissue segmentation, registration, diagnosis and other downstream tasks.
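To make the two-stage design concrete, the sketch below illustrates the cascade described in the abstract: a tissue-classification network predicts a tissue-label map, which is then concatenated with the degraded input to condition a 'tissue-aware' enhancement network. This is a minimal, hypothetical PyTorch sketch for illustration only; the module names, layer counts and channel sizes are placeholders and do not correspond to the released Caffe implementation.

import torch
import torch.nn as nn

class TissueClassifier(nn.Module):
    # Predicts a soft tissue-label map (e.g., background/CSF/GM/WM) from a 3D MR patch.
    def __init__(self, n_tissues=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv3d(16, n_tissues, kernel_size=3, padding=1),
        )

    def forward(self, x):
        return torch.softmax(self.net(x), dim=1)

class TissueAwareEnhancer(nn.Module):
    # Maps the degraded image, concatenated with tissue probabilities, to an enhanced image.
    def __init__(self, n_tissues=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1 + n_tissues, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv3d(32, 1, kernel_size=3, padding=1),
        )

    def forward(self, image, tissue_probs):
        return self.net(torch.cat([image, tissue_probs], dim=1))

# Usage on a dummy 3D patch: classify tissues first, then enhance conditioned on the labels.
patch = torch.randn(1, 1, 32, 32, 32)
tissue_probs = TissueClassifier()(patch)
enhanced = TissueAwareEnhancer()(patch, tissue_probs)
print(enhanced.shape)  # torch.Size([1, 1, 32, 32, 32])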

Fig. 1: Overview of the BME-X model.
Fig. 2: Visual comparison of the enhanced results for in vivo T1w images at 24 months old.
Fig. 3: Enhanced results for 180 synthesized corrupted T1w images from BCP at 24 months old, generated by four competing methods and the foundation model.
Fig. 4: Enhanced results on 1,908 synthesized corrupted images from five datasets.
Fig. 5: Enhanced results of the BME-X model for 10,963 in vivo low-quality images across the whole human lifespan, collected from 19 datasets.
Fig. 6: Enhancement results and the bias quantification for 280 in vivo corrupted T1w images from the MR-ART dataset, generated by competing methods and the BME-X model.
Fig. 7: Ultrasuper-resolution reconstruction by the BME-X model.
Fig. 8: Enhanced results for the abnormal brain images with different brain conditions.

Data availability

The raw data generated in this study are available from dHCP52,53 (https://biomedia.github.io/dHCP-release-notes), NDAR54 (https://nda.nih.gov/edit_collection.html?id=19), BCP39 (https://nda.nih.gov/edit_collection.html?id=2848), SALD48 (http://fcon_1000.projects.nitrc.org/indi/retro/sald.html), CCNP49,50 (https://ccnp.scidb.cn/en/detail?dataSetId=826407529641672704&version=V3&code=o00133), DLBS51 (https://fcon_1000.projects.nitrc.org/indi/retro/dlbs.html), IXI (http://brain-development.org/ixi-dataset/), Chinese Adult Brain85 (https://www.nitrc.org/projects/adultatlas), ABIDE86 (https://fcon_1000.projects.nitrc.org/indi/abide/), ABVIB87 (https://ida.loni.usc.edu/home/projectPage.jsp?project=ABVIB), ADNI58 (https://ida.loni.usc.edu/home/projectPage.jsp?project=ADNI), AIBL88 (https://ida.loni.usc.edu/home/projectPage.jsp?project=AIBL), HBN55 (https://data.healthybrainnetwork.org/main.php), HCP56 (http://www.humanconnectomeproject.org/data), ICBM89 (https://ida.loni.usc.edu/home/projectPage.jsp?project=ICBM), OASIS357 (https://sites.wustl.edu/oasisbrains/), SLIM90 (https://fcon_1000.projects.nitrc.org/indi/retro/southwestuni_qiu_index.html) and MR-ART63 (https://openneuro.org/datasets/ds004173/versions/1.0.2). Source data are provided with this paper.

Code availability

The source code and trained models are available via GitHub at https://github.com/DBC-Lab/Brain_MRI_Enhancement.git. The network was trained using the Caffe deep learning framework (Caffe 1.0.0-rc3), and deployment was implemented with custom Python code (Python 2.7.17). The source code of the competing methods is available as follows: DUNCAN (version 3.0) via Zenodo at https://doi.org/10.5281/zenodo.3742351 (ref. 38); Pix2Pix/CycleGAN via GitHub at https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix; DU-Net at https://liwang.web.unc.edu/wp-content/uploads/sites/11006/2020/04/Anatomy_Guided_Densely_Connected_U_Net.txt; NLUP (version 2.0) at https://personales.upv.es/jmanjon/upsampling.htm; FAST at https://fsl.fmrib.ox.ac.uk/fsl/docs/#/structural/fast; the multiresolution non-local means filter (version 1.0) at https://personales.upv.es/jmanjon/res_denoising_NLM3D.htm; and the Demons algorithm at https://simpleitk.readthedocs.io/en/master/link_DemonsRegistration2_docs.html. Image pre-processing steps, including skull stripping and cerebellum removal, were performed using a public cerebrum-dedicated pipeline (iBEAT V2.0, http://www.ibeat.cloud). The motion simulation tool is available via GitHub at https://github.com/Yonsei-MILab/MRI-Motion-Artifact-Simulation-Tool, and the artefact simulator is available at https://ieeexplore.ieee.org/abstract/document/8759167. To assess the significance of the results, we conducted statistical analyses using two-sided t-tests to obtain P values. Cohen's d was used to quantify the magnitude of the observed effects, with calculations performed using the effect size calculators at https://lbecker.uccs.edu.
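The statistical analysis described above can be reproduced with standard scientific Python tooling. The following is a minimal sketch, assuming two arrays of per-image quality scores (for example, PSNR values for the BME-X model and for one competing method); the sample sizes, score values and variable names are illustrative and are not taken from the released code. When the same images are scored under both methods, a paired test (scipy.stats.ttest_rel) may be more appropriate than the independent-samples test shown here.

import numpy as np
from scipy import stats

def cohens_d(x, y):
    # Cohen's d for two samples, using the pooled standard deviation.
    nx, ny = len(x), len(y)
    pooled_var = ((nx - 1) * np.var(x, ddof=1) + (ny - 1) * np.var(y, ddof=1)) / (nx + ny - 2)
    return (np.mean(x) - np.mean(y)) / np.sqrt(pooled_var)

# Illustrative per-image quality scores for two methods (hypothetical values).
rng = np.random.default_rng(0)
scores_bmex = rng.normal(loc=30.0, scale=2.0, size=180)
scores_competing = rng.normal(loc=28.5, scale=2.0, size=180)

# Two-sided independent-samples t-test (two-sided is the scipy default).
t_stat, p_value = stats.ttest_ind(scores_bmex, scores_competing)
d = cohens_d(scores_bmex, scores_competing)
print(f"t = {t_stat:.3f}, P = {p_value:.2e}, Cohen's d = {d:.3f}")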

References

  1. Frisoni, G., Fox, N., Jack, C., Scheltens, P. & Thompson, P. The clinical use of structural MRI in Alzheimer’s disease. Nat. Rev. Neurol. 6, 67–77 (2010).

  2. Copeland, A. et al. Infant and child MRI: a review of scanning procedures. Front. Neurosci. 15, 666020 (2021).

  3. Thieba, C. et al. Factors associated with successful MRI scanning in unsedated young children. Front. Pediatr. 6, 146 (2018).

  4. Khan, J. et al. A program to decrease the need for pediatric sedation for CT and MRI. Appl. Radiol. 36, 30–33 (2007).

  5. Li, G. et al. Mapping longitudinal development of local cortical gyrification in infants from birth to 2 years of age. J. Neurosci. 34, 4228–4238 (2014).

  6. Havsteen, I. et al. Are movement artifacts in magnetic resonance imaging a real problem?—A narrative review. Front. Neurol. 8, 232 (2017).

  7. Zaitsev, M., Maclaren, J. & Herbst, M. Motion artefacts in MRI: a complex problem with many partial solutions. J. Magn. Reson. Imaging 42, 887–901 (2015).

  8. Gallichan, D., Marques, J. & Gruetter, R. Retrospective correction of involuntary microscopic head movement using highly accelerated fat image navigators (3D FatNavs) at 7T: 3D FatNavs for high-resolution retrospective motion correction. Magn. Reson. Med. 75, 1030–1039 (2015).

  9. Liu, S., Thung, K., Qu, L., Lin, W. & Yap, P.-T. Learning MRI artefact removal with unpaired data. Nat. Mach. Intell. 3, 60–67 (2021).

  10. Sommer, K. et al. Correction of motion artifacts using a multiscale fully convolutional neural network. Am. J. Neuroradiol. 41, 416–423 (2020).

  11. Cordero-Grande, L. et al. Motion‐corrected MRI with DISORDER: distributed and incoherent sample orders for reconstruction deblurring using encoding redundancy. Magn. Reson. Med. 84, 713–726 (2020).

  12. Kecskemeti, S. et al. Robust motion correction strategy for structural MRI in unsedated children demonstrated with three-dimensional radial MPnRAGE. Radiology 289, 509–516 (2018).

  13. Duffy, B. et al. Retrospective motion artifact correction of structural MRI images using deep learning improves the quality of cortical surface reconstructions. NeuroImage 230, 117756 (2021).

  14. Tisdall, M. et al. Volumetric navigators for prospective motion correction and selective reacquisition in neuroanatomical MRI. Magn. Reson. Med. 68, 389–399 (2011).

  15. Stucht, D. et al. Highest resolution in vivo human brain MRI using prospective motion correction. PLoS ONE 10, e0133921 (2015).

  16. Pipe, J. Motion correction with PROPELLER MRI: application to head motion and free‐breathing cardiac imaging. Magn. Reson. Med. 42, 963–969 (1999).

  17. Korin, H., Felmlee, J., Riederer, S. & Ehman, R. Spatial-frequency-tuned markers and adaptive correction for rotational motion. Magn. Reson. Med. 33, 663–669 (1995).

  18. Medley, M., Yan, H. & Rosenfeld, D. An improved algorithm for 2-D translational motion artifact correction. IEEE Trans. Med. Imaging 10, 548–553 (1992).

  19. Atkinson, D., Hill, D., Stoyle, P., Summers, P. & Keevil, S. Automatic correction of motion artifacts in magnetic resonance images using an entropy focus criterion. IEEE Trans. Med. Imaging 16, 903–910 (1997).

  20. Haskell, M., Cauley, S. & Wald, L. TArgeted Motion Estimation and Reduction (TAMER): data consistency based motion mitigation for MRI using a reduced model joint optimization. IEEE Trans. Med. Imaging 37, 1253–1265 (2018).

  21. Cordero-Grande, L., Hughes, E., Hutter, J., Price, A. & Hajnal, J. Three-dimensional motion corrected sensitivity encoding reconstruction for multi-shot multi-slice MRI: application to neonatal brain imaging. Magn. Reson. Med. 79, 1365–1376 (2017).

  22. Haskell, M. et al. Network Accelerated Motion Estimation and Reduction (NAMER): convolutional neural network guided retrospective motion correction using a separable motion model. Magn. Reson. Med. 82, 1452–1461 (2019).

  23. Jin, K., McCann, M., Froustey, E. & Unser, M. Deep convolutional neural network for inverse problems in imaging. IEEE Trans. Image Process. 26, 4509–4522 (2016).

  24. Ahishakiye, E., Van Gijzen, M. B., Tumwiine, J., Wario, R. & Obungoloch, J. A survey on deep learning in medical image reconstruction. Intell. Med. 1, 118–127 (2021).

  25. Ravishankar, S., Ye, J. C. & Fessler, J. Image reconstruction: from sparsity to data-adaptive methods and machine learning. Proc. IEEE 108, 86–109 (2019).

  26. Lee, J., Kim, B. & Park, H. MC2‐Net: motion correction network for multi‐contrast brain MRI. Magn. Reson. Med. 86, 1077–1092 (2021).

  27. Polak, D. et al. Motion guidance lines for robust data consistency-based retrospective motion correction in 2D and 3D MRI. Magn. Reson. Med. 89, 1777–1790 (2023).

  28. Wang, G., Shi, H., Chen, Y. & Wu, B. Unsupervised image-to-image translation via long-short cycle-consistent adversarial networks. Appl. Intell. 53, 17243–17259 (2022).

  29. Isola, P., Zhu, J.-Y., Zhou, T. & Efros, A. A. Image-to-image translation with conditional adversarial networks. In 2017 IEEE Conference on Computer Vision and Pattern Recognition 5967–5976 (IEEE Computer Society, 2017); https://doi.org/10.1109/CVPR.2017.632

  30. Manjon, J. et al. Non-local MRI upsampling. Med. Image Anal. 14, 784–792 (2010).

  31. Iglesias, J. et al. SynthSR: a public AI tool to turn heterogeneous clinical brain scans into high-resolution T1-weighted images for 3D morphometry. Sci. Adv. 9, eadd3607 (2023).

  32. Pham, C.-H. et al. Multiscale brain MRI super-resolution using deep 3D convolutional networks. Comput. Med. Imaging Graph. 77, 101647 (2019).

  33. Mohan, J., Krishnaveni, V. & Guo, Y. A survey on the magnetic resonance image denoising methods. Biomed. Signal Process. Control 9, 56–69 (2014).

  34. Liu, M. et al. Style transfer using generative adversarial networks for multi-site MRI harmonization. In Medical Image Computing and Computer Assisted Intervention – MICCAI 2021 (eds de Bruijne, M. et al.) 313–322 (Springer, 2021); https://doi.org/10.1007/978-3-030-87199-4_30

  35. Johnson, W. E., Li, C. & Rabinovic, A. Adjusting batch effects in microarray expression data using empirical Bayes methods. Biostatistics 8, 118–127 (2006).

  36. Kemenczky, P. et al. Effect of head motion-induced artefacts on the reliability of deep learning-based whole-brain segmentation. Sci. Rep. 12, 1618 (2022).

  37. Wang, L. et al. Volume-based analysis of 6-month-old infant brain MRI for autism biomarker identification and early diagnosis. Med. Image Comput. Comput. Assist. Interv. https://doi.org/10.1007/978-3-030-00931-1_47 (2018).

  38. Liu, S. et al. Code used in article “Learning MRI artefact removal with unpaired data”. Zenodo https://doi.org/10.5281/zenodo.3742351 (2020).

  39. Howell, B. et al. The UNC/UMN Baby Connectome Project (BCP): an overview of the study design and protocol development. NeuroImage 185, 891–905 (2018).

  40. Coupé, P., Manjon, J., Robles, M. & Collins, L. Adaptive multiresolution non-local means filter for three-dimensional magnetic resonance image denoising. IET Image Process. 6, 558–568 (2012).

  41. Wang, Z., Bovik, A., Sheikh, H. R. & Simoncelli, E. Image quality assessment: from error visibility to structural similarity. IEEE Trans. Image Process. 13, 600–612 (2004).

  42. Wang, Z., Simoncelli, E. P. & Bovik, A. C. Multiscale structural similarity for image quality assessment. In The Thirty-Seventh Asilomar Conference on Signals, Systems & Computers 1398–1402 (IEEE, 2003); https://doi.org/10.1109/ACSSC.2003.1292216

  43. Wang, Z. & Bovik, A. A universal image quality index. IEEE Signal Process. Lett. 9, 81–84 (2002).

  44. Sheikh, H. R. & Bovik, A. C. Image information and visual quality. IEEE Trans. Image Process. 15, 430–444 (2006).

  45. Mason, A. et al. Comparison of objective image quality metrics to expert radiologists’ scoring of diagnostic quality of MR images. IEEE Trans. Med. Imaging 39, 1064–1072 (2020).

  46. Duffy, B. A. et al. Retrospective correction of motion artifact affected structural MRI images using deep learning of simulated motion. In Medical Imaging with Deep Learning (2018); https://openreview.net/forum?id=H1hWfZnjM

  47. Wang, L. et al. iBEAT V2.0: a multisite-applicable, deep learning-based pipeline for infant cerebral cortical surface reconstruction. Nat. Protoc. 18, 1488–1509 (2023).

  48. Wei, D. et al. Structural and functional brain scans from the cross-sectional Southwest University adult lifespan dataset. Sci. Data 5, 180134 (2018).

  49. Liu, S. et al. Chinese Color Nest Project: an accelerated longitudinal brain–mind cohort. Dev. Cogn. Neurosci. 52, 101020 (2021).

  50. Gao, P. et al. A Chinese multi-modal neuroimaging data release for increasing diversity of human brain mapping. Sci. Data 9, 286 (2022).

  51. Park, D. C. & Festini, S. B. in Cognitive Neuroscience of Aging: Linking Cognitive and Cerebral Aging 363–388 (Oxford Univ. Press, 2016); https://doi.org/10.1093/acprof:oso/9780199372935.003.0015

  52. Hughes, E. et al. A dedicated neonatal brain imaging system. Magn. Reson. Med. 78, 794–804 (2017).

  53. Cordero-Grande, L. et al. Sensitivity encoding for aligned multishot magnetic resonance reconstruction. IEEE Trans. Comput. Imaging 2, 266–280 (2016).

  54. Payakachat, N., Tilford, J. M. & Ungar, W. National Database for Autism Research (NDAR): big data opportunities for health services research and health technology assessment. PharmacoEconomics 34, 127–138 (2015).

  55. Alexander, L. et al. An open resource for transdiagnostic research in pediatric mental health and learning disorders. Sci. Data 4, 170181 (2017).

  56. Elam, J. S. et al. The Human Connectome Project: a retrospective. NeuroImage 244, 118543 (2021).

  57. LaMontagne, P. J. et al. OASIS-3: longitudinal neuroimaging, clinical, and cognitive dataset for normal aging and Alzheimer disease. Preprint at medRxiv https://doi.org/10.1101/2019.12.13.19014902 (2019).

  58. Weiner, M. W. et al. The Alzheimer’s Disease Neuroimaging Initiative 3: continued innovation for clinical trial improvement. Alzheimer’s Dement. 13, 561–571 (2017).

  59. Wang, L. et al. Benchmark on automatic six-month-old infant brain segmentation algorithms: the iSeg-2017 Challenge. IEEE Trans. Med. Imaging 38, 2219–2230 (2019).

  60. Lee, S., Jung, S., Jung, K.-J. & Kim, D.-H. Deep learning in MR motion correction: a brief review and a new motion simulation tool (view2Dmotion). Invest. Magn. Reson. Imaging 24, 196 (2020).

  61. Coupé, P. et al. Robust Rician noise estimation for MR images. Med. Image Anal. 14, 483–493 (2010).

  62. Nowak, R. D. Wavelet-based Rician noise removal for magnetic resonance imaging. IEEE Trans. Image Process. 8, 1408–1419 (1999).

  63. Nárai, Á. et al. Movement-related artefacts (MR-ART) dataset of matched motion-corrupted and clean structural MRI brain scans. Sci. Data 9, 630 (2022).

  64. Plenge, E. et al. Super‐resolution methods in MRI: can they improve the trade‐off between resolution, signal‐to‐noise ratio, and acquisition time? Magn. Reson. Med. 68, 1983–1993 (2012).

  65. Wang, J., Chen, Y., Wu, Y., Shi, J. & Gee, J. Enhanced generative adversarial network for 3D brain MRI super-resolution. In 2020 IEEE Winter Conference on Applications of Computer Vision 3616–3625 (IEEE, 2020); https://doi.org/10.1109/WACV45572.2020.9093603

  66. Tardif, C. et al. Open Science CBS Neuroimaging Repository: sharing ultra-high-field MR images of the brain. NeuroImage 124, 1143–1148 (2015).

  67. Zhang, Y. Y., Brady, M. & Smith, S. A. Segmentation of brain MR images through a hidden Markov random field model and the expectation maximization algorithm. IEEE Trans. Med. Imaging 20, 45–57 (2001).

  68. Styner, M. et al. 3D segmentation in the clinic: A Grand Challenge II: MS lesion segmentation. MIDAS J. https://doi.org/10.54294/lmkqvm (2007).

  69. Sayah, A. et al. Enhancing the REMBRANDT MRI collection with expert segmentation labels and quantitative radiomic features. Sci. Data 9, 338 (2022).

  70. Menze, B. H. et al. The Multimodal Brain Tumor Image Segmentation Benchmark (BRATS). IEEE Trans. Med. Imaging 34, 1993–2024 (2015).

  71. Roca, V. et al. A three-dimensional deep learning model for inter-site harmonization of structural MR images of the brain: extensive validation with a multicenter dataset. Heliyon 9, e22647 (2023).

  72. Thirion, J.-P. Image matching as a diffusion process: an analogy with Maxwell’s demons. Med. Image Anal. 2, 243–260 (1998).

  73. Deoni, S. et al. Accessible pediatric neuroimaging using a low field strength MRI scanner. NeuroImage 238, 118273 (2021).

  74. Billot, B. et al. SynthSeg: segmentation of brain MRI scans of any contrast and resolution without retraining. Med. Image Anal. 86, 102789 (2023).

  75. Ronneberger, O., Fischer, P. & Brox, T. U-Net: convolutional networks for biomedical image segmentation. In Medical Image Computing and Computer-Assisted Intervention – MICCAI 2015 (eds Navab, N., Hornegger, J., Wells, W. & Frangi, A.) 234–241 (Springer, 2015); https://doi.org/10.1007/978-3-319-24574-4_28

  76. Huang, G., Liu, Z., Maaten, L. V. D. & Weinberger, K. Q. Densely connected convolutional networks. In 2017 IEEE Conference on Computer Vision and Pattern Recognition 2261–2269 (IEEE Computer Society, 2017); https://doi.org/10.1109/CVPR.2017.243

  77. Isensee, F., Jaeger, P. F., Kohl, S. A. A., Petersen, J. & Maier-Hein, K. nnU-Net: a self-configuring method for deep learning-based biomedical image segmentation. Nat. Methods 18, 203–211 (2020).

  78. Vaswani, A. et al. Attention is all you need. In Proc. 31st International Conference on Neural Information Processing Systems 6000–6010 (Curran Associates, 2017); https://api.semanticscholar.org/CorpusID:13756489

  79. Zhang, Q. et al. Frnet: flattened residual network for infant MRI skull stripping. In 2019 IEEE 16th International Symposium on Biomedical Imaging 999–1002 (IEEE, 2019); https://doi.org/10.1109/ISBI.2019.8759167

  80. Jia, Y. et al. Caffe: convolutional architecture for fast feature embedding. In Proc. 22nd ACM International Conference on Multimedia 675–678 (Association for Computing Machinery, 2014); https://doi.org/10.1145/2647868.2654889

  81. Nie, D. et al. 3-D fully convolutional networks for multimodal isointense infant brain image segmentation. IEEE Trans. Cybern. 49, 1123–1136 (2018).

  82. Makropoulos, A., Counsell, S. & Rueckert, D. A review on automatic fetal and neonatal brain MRI segmentation. NeuroImage 170, 231–248 (2017).

  83. Wang, L. et al. LINKS: learning-based multi-source integration framework for segmentation of infant brain images. NeuroImage 108, 160–172 (2014).

  84. Li, G. et al. Computational neuroanatomy of baby brains: a review. NeuroImage 185, 906–925 (2018).

  85. Zhu, J. & Qiu, A. Chinese adult brain atlas with functional and white matter parcellation. Sci. Data 9, 352–362 (2022).

  86. Di Martino, A. et al. Enhancing studies of the connectome in autism using the Autism Brain Imaging Data Exchange II. Sci. Data 4, 170010 (2017).

  87. Rodriguez, F., Zheng, L. & Chui, H. C. Psychometric characteristics of cognitive reserve: how high education might improve certain cognitive abilities in aging. Dement. Geriatr. Cogn. Disord. 47, 1–10 (2019).

  88. Lai, M. et al. Relationship of established cardiovascular risk factors and peripheral biomarkers on cognitive function in adults at risk of cognitive deterioration. J. Alzheimer’s Dis. 74, 1–9 (2020).

  89. Mazziotta, J. C. et al. A probabilistic atlas and reference system for the human brain: International Consortium for Brain Mapping (ICBM). Philos. Trans. R. Soc. Lond. Ser. B 356, 1293–1322 (2001).

  90. Wei, L. et al. Longitudinal test-retest neuroimaging data from healthy young adults in southwest China. Sci. Data 4, 170017 (2017).

  91. Sun, Y. et al. Multi-site infant brain segmentation algorithms: The iSeg-2019 Challenge. IEEE Trans. Med. Imaging 40, 1363–1376 (2021).

Download references

Acknowledgements

Y.S., Limei Wang and Li Wang were supported by the National Institute of Mental Health under award numbers MH133845, MH117943, MH123202 and MH116225. G.L. was supported by the National Institutes of Health (NIH) under award numbers MH133845, MH117943, MH123202, MH116225, AG075582 and NS128534. The content is solely the responsibility of the authors and does not necessarily represent the official views of the NIH. This work also uses approaches developed by NIH grants (U01MH110274 and R01MH104324) and the efforts of the UNC/UMN Baby Connectome Project Consortium. We acknowledge M. M. Pangelinan for her valuable contribution in providing the in vivo low-resolution data used for super-resolution validation. We express our sincere gratitude to all those who have supported us in the validation: J. Bernal, J. Kim, K. A. Vaughn, J. Tuulari, K. Oishi, A. Tapp, Y. Chen, X. Geng, T. F. Vaz and Z. Zariry. We also deeply appreciate all participants who contributed to the datasets involved in this work.

Author information

Authors and Affiliations

Contributions

Y.S. and Li Wang designed and implemented the pipeline. Y.S. and Li Wang carried out the application, performed the experiments and analysed the data. Y.S. and Li Wang performed result validation. Y.S. wrote the paper. Limei Wang, W.L., G.L. and Li Wang revised the paper.

Corresponding author

Correspondence to Li Wang.

Ethics declarations

Competing interests

The authors declare no competing interests.

Peer review

Peer review information

Nature Biomedical Engineering thanks the anonymous reviewers for their contribution to the peer review of this work. Peer reviewer reports are available.

Additional information

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary information

Supplementary Information

Supplementary figures, tables and notes.

Reporting Summary

Peer Review File

Supplementary Data

A blind inspection conducted by ten independent experts to assess the segmentation accuracy of iBEAT.

Source data

Source Data For Fig. 3

Source data and statistics.

Source Data For Fig. 4

Source data and statistics.

Source Data For Fig. 5

Source data and statistics.

Source Data For Fig. 6

Source data and statistics.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

Reprints and permissions

About this article

Cite this article

Sun, Y., Wang, L., Li, G. et al. A foundation model for enhancing magnetic resonance images and downstream segmentation, registration and diagnostic tasks. Nat. Biomed. Eng. (2024). https://doi.org/10.1038/s41551-024-01283-7

  • Received:

  • Accepted:

  • Published:

  • DOI: https://doi.org/10.1038/s41551-024-01283-7
