
Image synthesis in contrast MRI based on super resolution reconstruction with multi-refinement cycle-consistent generative adversarial networks

Published in: Journal of Intelligent Manufacturing

Abstract

In medical image processing, and magnetic resonance imaging (MRI) in particular, synthesizing a complementary target contrast for a patient from an existing contrast has clear clinical value in assisting doctors with diagnosis. To address the image-translation problem between MRI contrasts (T1 and T2), a generative adversarial network is proposed that works end to end at the image level. The low-frequency and high-frequency information of the image is preserved through multi-stage optimization guided by an adversarial loss, a perceptual-consistency loss, and a cycle-consistency loss; this preserves the anatomical structure of the source contrast in a supervised manner while the pixel distribution of the target contrast is learned. To integrate the different penalties (L1 and L2) organically, adaptive weights are assigned to the error sensitivity of each penalty in the total loss function, so that each stage of high-resolution image generation is optimized adaptively. In addition, a new network structure, the multi-skip-connection residual net, is proposed to refine medical image details step by step across the optimization stages. Compared with existing methods, the proposed approach performs better. Contrast conversion between T1 and T2 in MRI is validated, which can help shorten imaging time, improve imaging quality, and effectively assist doctors with diagnosis.
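The loss design described above can be illustrated with a minimal NumPy sketch. The blending weight `alpha`, the toy generator mappings, and the synthetic slice below are illustrative assumptions for exposition, not the paper's actual weighting schedule or architecture.

```python
import numpy as np

def pixel_penalty(pred, target, alpha):
    """Blend L1 and L2 pixel penalties with an adaptive weight alpha in [0, 1].

    alpha -> 1 favours the L1 term (robust to outliers, sharper edges);
    alpha -> 0 favours the L2 term (smooth, strongly penalises large errors).
    """
    err = pred - target
    l1 = np.mean(np.abs(err))
    l2 = np.mean(err ** 2)
    return alpha * l1 + (1.0 - alpha) * l2

def cycle_consistency(x, g_ab, g_ba):
    """Cycle loss ||G_BA(G_AB(x)) - x||_1 for mappings between contrasts A and B."""
    return np.mean(np.abs(g_ba(g_ab(x)) - x))

# Toy example on a synthetic "T1 slice"; the scalings stand in for generators.
rng = np.random.default_rng(0)
t1 = rng.random((64, 64))
fake_t2 = t1 * 0.9  # stand-in generator output G_AB(t1)

blended = pixel_penalty(fake_t2, t1, alpha=0.5)
cyc = cycle_consistency(t1, lambda x: x * 0.9, lambda x: x / 0.9)
```

In a full training loop the adaptive weight would be updated per stage; here it is fixed only to show how the two penalties combine into one scalar objective alongside the cycle term.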


[Figures 1–6 appear in the full article.]



Acknowledgements

This work was funded in part by the National Natural Science Foundation of China (Grant No. 61872261) and the Natural Science Foundation of Shanxi Province, China (Grant No. 201801D121139). The authors thank their partners in these projects for their contributions.

Author information

Corresponding author: Yan Qiang.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

About this article

Cite this article

Wu, K., Qiang, Y., Song, K. et al. Image synthesis in contrast MRI based on super resolution reconstruction with multi-refinement cycle-consistent generative adversarial networks. J Intell Manuf 31, 1215–1228 (2020). https://doi.org/10.1007/s10845-019-01507-7

