Abstract
In this article, a novel multimodal medical image fusion (MIF) method based on the non-subsampled contourlet transform (NSCT) and the pulse-coupled neural network (PCNN) is presented. The proposed MIF scheme exploits the advantages of both the NSCT and the PCNN to obtain better fusion results. The source medical images are first decomposed by the NSCT. The low-frequency subbands (LFSs) are fused using the 'max selection' rule. For fusing the high-frequency subbands (HFSs), a PCNN model is utilized: the modified spatial frequency (MSF) computed in the NSCT domain serves as the stimulus that motivates the PCNN, and the NSCT coefficients whose neurons exhibit larger firing times are selected as the coefficients of the fused image. Finally, the inverse NSCT (INSCT) is applied to obtain the fused image. Subjective as well as objective analysis of the results, and comparisons with state-of-the-art MIF techniques, show the effectiveness of the proposed scheme in fusing multimodal medical images.
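The high-frequency fusion rule hinges on a spatial-frequency measure extended with diagonal gradient terms. As a rough illustration only (not the authors' implementation), the NumPy sketch below computes one common form of such a modified spatial frequency over a coefficient block; the block size, normalization, and the exact MSF definition used in the paper are assumptions here.

```python
# Minimal sketch (assumed form, not the authors' code): spatial frequency
# extended with main- and secondary-diagonal gradients, one plausible reading
# of the "modified spatial frequency" used to motivate the PCNN.
import numpy as np

def modified_spatial_frequency(block: np.ndarray) -> float:
    """Spatial frequency of a coefficient block, including diagonal terms."""
    b = block.astype(np.float64)
    rf = np.sqrt(np.mean((b[:, 1:] - b[:, :-1]) ** 2))       # row frequency
    cf = np.sqrt(np.mean((b[1:, :] - b[:-1, :]) ** 2))       # column frequency
    mdf = np.sqrt(np.mean((b[1:, 1:] - b[:-1, :-1]) ** 2))   # main-diagonal frequency
    sdf = np.sqrt(np.mean((b[1:, :-1] - b[:-1, 1:]) ** 2))   # secondary-diagonal frequency
    return float(np.sqrt(rf**2 + cf**2 + mdf**2 + sdf**2))

# Example: compare two high-frequency subband patches; the one with the larger
# MSF would provide the stronger stimulus to the PCNN at that location.
patch_a = np.random.rand(8, 8)
patch_b = np.random.rand(8, 8)
print(modified_spatial_frequency(patch_a), modified_spatial_frequency(patch_b))
```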
Acknowledgments
We would like to thank the editor, associate editor and the anonymous reviewers for their invaluable suggestions. We are grateful to Dr. Pradip Kumar Das (Medicare Images, Asansol-4, West Bengal) for the subjective evaluation of the fused images. We would also like to thank http://www.imagefusion.org/ and http://www.med.harvard.edu/aanlib/home.html for providing the source medical images.
Additional information
This work was supported by the Machine Intelligence Unit, Indian Statistical Institute, Kolkata-108 (Internal Academic Project).
Cite this article
Das, S., Kundu, M.K. NSCT-based multimodal medical image fusion using pulse-coupled neural network and modified spatial frequency. Med Biol Eng Comput 50, 1105–1114 (2012). https://doi.org/10.1007/s11517-012-0943-3