Abstract
Infrared and visible image fusion merges the salient details of an infrared image and its corresponding visible image into a single composite better suited for surveillance, image enhancement, object detection, and remote sensing. This paper presents a multi-scale transform-based infrared and visible image fusion method operating in the non-subsampled contourlet transform (NSCT) domain. The proposed method employs a new fast unit-linking dual-channel pulse coupled neural network (FUDPCNN) model. The low-pass sub-bands are fused by a new gravitational force operator-based rule, while the internal activities of the FUDPCNN are used to obtain the fused high-pass sub-bands. The effectiveness of the FUDPCNN is demonstrated through comparison with established PCNN models, and the competitiveness of the gravitational force operator-based rule is shown against other low-pass fusion rules. The practicality of the proposed method is validated by comparing its fusion results on well-known infrared-visible image pairs against nine existing approaches using eight objective metrics.
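The pipeline described above can be sketched in outline: decompose each source image into a low-pass band and high-pass detail bands, fuse each band with its own rule, and invert the decomposition. The sketch below is a toy stand-in only: it uses a simple box-blur residual pyramid in place of the NSCT, a plain average in place of the paper's gravitational force operator, and a choose-max rule in place of the FUDPCNN activity maps, none of which are reproduced here.

```python
import numpy as np


def box_blur(img):
    """3x3 box filter with edge padding (toy smoothing kernel)."""
    h, w = img.shape
    padded = np.pad(img, 1, mode="edge")
    out = np.zeros((h, w), dtype=float)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out += padded[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
    return out / 9.0


def decompose(img, levels=2):
    """Toy multi-scale split: repeated low-pass filtering, high-pass
    residuals.  A stand-in for the NSCT, which needs a dedicated library."""
    low = img.astype(float)
    highs = []
    for _ in range(levels):
        blurred = box_blur(low)
        highs.append(low - blurred)  # high-pass detail at this scale
        low = blurred
    return low, highs


def fuse(ir, vis, levels=2):
    """Fuse two registered single-channel images of equal size."""
    low_a, highs_a = decompose(ir, levels)
    low_b, highs_b = decompose(vis, levels)
    # Low-pass rule: plain average (NOT the gravitational force operator).
    low_f = 0.5 * (low_a + low_b)
    # High-pass rule: keep the larger-magnitude coefficient
    # (NOT the FUDPCNN internal-activity comparison).
    highs_f = [np.where(np.abs(ha) >= np.abs(hb), ha, hb)
               for ha, hb in zip(highs_a, highs_b)]
    # Inverse of the toy decomposition: low-pass plus all residuals.
    fused = low_f
    for h in highs_f:
        fused = fused + h
    return fused
```

Because the toy decomposition is exactly invertible (low-pass plus residuals reconstructs the input), `fuse(x, x)` returns `x` up to floating-point error, which is a useful sanity check for any band-wise fusion skeleton.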
Data availability
Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.
Funding
This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.
Ethics declarations
Conflict of interest
All authors have no conflicts of interest relevant to this study to disclose.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
About this article
Cite this article
Bansal, K., Kumar, V., Agrawal, C. et al. Infrared and visible image fusion based on FUDPCNN and gravitational force operator. SIViP 18, 6973–6986 (2024). https://doi.org/10.1007/s11760-024-03367-y