
Infrared and visible image fusion based on FUDPCNN and gravitational force operator

  • Original Paper, published in Signal, Image and Video Processing

Abstract

Infrared and visible image fusion merges the salient details of an infrared image and its corresponding visible image into a single image better suited to surveillance, image enhancement, object detection, and remote sensing. This paper presents a multi-scale transform-based infrared and visible image fusion method in the non-subsampled contourlet transform (NSCT) domain. The proposed method employs a new fast unit-linking dual-channel pulse coupled neural network (FUDPCNN) model. The low-pass sub-bands are fused by a new gravitational force operator-based mechanism, while the internal activities of the proposed FUDPCNN are used to obtain the fused high-pass sub-bands. The effectiveness of the FUDPCNN is demonstrated by comparing it with established PCNN models, and the competitiveness of the gravitational force operator-based rule is assessed against other low-pass fusion rules. Finally, the proposed method is evaluated on well-known infrared-visible image pairs against nine existing approaches using eight objective metrics.
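The abstract describes fusing sub-band pairs with a dual-channel unit-linking PCNN, where each pixel's fused coefficient comes from the channel whose neuron accumulates the larger internal activity. The exact FUDPCNN and gravitational force operator formulations are defined in the full paper; the sketch below is only a minimal, simplified unit-linking dual-channel PCNN in NumPy to illustrate the general idea. All parameter names and values (`alpha`, `beta`, `v_theta`, the iteration count) are illustrative assumptions, not the paper's settings, and the NSCT decomposition step is omitted.

```python
import numpy as np

def _any_neighbour_fired(y):
    # 1 where at least one 8-neighbour fired in the previous iteration
    p = np.pad(y, 1)
    s = sum(p[i:i + y.shape[0], j:j + y.shape[1]]
            for i in range(3) for j in range(3)) - y
    return (s > 0).astype(float)

def fuse_subbands_pcnn(a, b, iters=30, alpha=0.1, beta=0.2, v_theta=20.0):
    """Fuse two same-sized sub-bands: per pixel, keep the coefficient whose
    channel accumulates the larger internal activity over the iterations."""
    fa, fb = np.abs(a).astype(float), np.abs(b).astype(float)
    u_a = np.zeros_like(fa)          # internal activity, channel A
    u_b = np.zeros_like(fb)          # internal activity, channel B
    act_a = np.zeros_like(fa)        # accumulated activity, channel A
    act_b = np.zeros_like(fb)        # accumulated activity, channel B
    theta = np.ones_like(fa)         # dynamic firing threshold
    y = np.zeros_like(fa)            # firing map
    decay = np.exp(-alpha)
    for _ in range(iters):
        link = _any_neighbour_fired(y)          # unit-linking term (0/1)
        u_a = decay * u_a + fa * (1.0 + beta * link)
        u_b = decay * u_b + fb * (1.0 + beta * link)
        u = np.maximum(u_a, u_b)                # combined internal activity
        y = (u > theta).astype(float)           # neurons that fire this step
        theta = decay * theta + v_theta * y     # raise threshold where fired
        act_a += u_a * y                        # accumulate activity of
        act_b += u_b * y                        # firing neurons per channel
    return np.where(act_a >= act_b, a, b)
```

In a full pipeline, a rule like this would be applied to each high-pass sub-band pair produced by the NSCT, with the low-pass pair fused by a separate weighting rule (the paper's gravitational force operator), before the inverse transform reconstructs the fused image.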



Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.


Funding

This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.

Author information


Corresponding author

Correspondence to Chinmaya Panigrahy.

Ethics declarations

Conflict of interest

The authors have no conflicts of interest relevant to this study to disclose.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Bansal, K., Kumar, V., Agrawal, C. et al. Infrared and visible image fusion based on FUDPCNN and gravitational force operator. SIViP 18, 6973–6986 (2024). https://doi.org/10.1007/s11760-024-03367-y


