An image fusion framework using morphology and sparse representation

Published in Multimedia Tools and Applications

Abstract

Image fusion aims to integrate the relevant and complementary information from a set of images into a single comprehensive image. Sparse representation (SR) is a powerful technique used in a wide variety of applications such as denoising, compression and fusion. Building a compact and informative dictionary is the principal challenge in these applications. Hence, we propose a supervised-classification-based learning technique for the fusion algorithm. As an initial step, each patch of the training data set is pre-classified according to its dominant gradient direction. Then, a dictionary is learned using the K-SVD algorithm. With this universal dictionary, sparse coefficients are estimated using the greedy orthogonal matching pursuit (OMP) algorithm to represent the given set of source images in the dominant direction. Finally, the Euclidean norm is used as a distance measure to reconstruct the fused image. Experimental results on different types of source images demonstrate the effectiveness of the proposed algorithm compared with conventional methods in terms of both visual and quantitative evaluations.
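
The pipeline described above can be outlined in a few lines of code. The sketch below is a minimal illustration, not the authors' implementation: it classifies a patch by its dominant gradient direction, learns a dictionary (scikit-learn's DictionaryLearning stands in for K-SVD), sparse-codes corresponding source patches with OMP, and keeps the patch whose coefficient vector has the larger Euclidean norm. All function names, parameter values and the patch-selection rule are assumptions for illustration only.

```python
# Minimal, illustrative sketch of the fusion pipeline described in the abstract.
# DictionaryLearning stands in for K-SVD; names and parameters are assumptions.
import numpy as np
from sklearn.decomposition import DictionaryLearning
from sklearn.linear_model import OrthogonalMatchingPursuit

def dominant_gradient_direction(patch, n_bins=4):
    """Pre-classify a patch by the dominant orientation of its gradients."""
    gy, gx = np.gradient(patch.astype(float))
    angles = np.arctan2(gy, gx) % np.pi                  # orientations in [0, pi)
    weights = np.hypot(gx, gy)                           # gradient magnitudes
    hist, _ = np.histogram(angles, bins=n_bins, range=(0, np.pi), weights=weights)
    return int(np.argmax(hist))                          # index of the dominant bin

def learn_dictionary(training_patches, n_atoms=64):
    """Learn an over-complete dictionary from vectorised training patches
    (the full method pre-classifies the patches before learning this
    universal dictionary with K-SVD)."""
    dl = DictionaryLearning(n_components=n_atoms, transform_algorithm='omp')
    dl.fit(training_patches)                             # rows are patch vectors
    return dl.components_                                # rows are dictionary atoms

def fuse_patch(patch_a, patch_b, dictionary, n_nonzero=8):
    """Sparse-code both source patches with OMP and keep the patch whose
    coefficient vector has the larger Euclidean norm (activity level)."""
    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_nonzero, fit_intercept=False)
    coef_a = omp.fit(dictionary.T, patch_a.ravel()).coef_
    coef_b = omp.fit(dictionary.T, patch_b.ravel()).coef_
    return patch_a if np.linalg.norm(coef_a) >= np.linalg.norm(coef_b) else patch_b
```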

References

  1. http://home.ustc.edu.cn/~liuyu1/. Accessed Oct 2016

  2. http://www.med.harvard.edu/AANLIB/home.html. Accessed Oct 2016

  3. http://decsai.ugr.es/cvg/CG/base.htm. Accessed Oct 2016

  4. Aharon M, Elad M, Bruckstein A (2006) K-SVD: an algorithm for designing overcomplete dictionaries for sparse representation. IEEE Trans Signal Process 54(11):4311–4322

  5. Aishwarya N, Asnath Victy Phamila Y, Amutha R (2013) Multi-focus image fusion using multi-structure top hat transform and image variance. In: Proceedings of IEEE International Conference on Communication and Signal Processing: 686–689

  6. Bai X, Zhang Y, Zhou F, Xue B (2015) Quad tree-based multi-focus image fusion using a weighted focus-measure. Inf Fusion 22:105–118

  7. Bruckstein AM, Donoho DL, Elad M (2007) From sparse solutions of systems of equations to sparse modeling of signals and images. SIAM Rev 51(1):34–81

  8. Chen Y, Blum R (2009) A new automated quality assessment algorithm for image fusion. Image Vis Comput 27(10):1421–1432

  9. Dong L, Yang Q, Wu H, Xiao H, Xu M (2015) High quality multi-spectral and panchromatic image fusion technologies based on curvelet transform. Neurocomputing 159:268–274

  10. Elad M, Figueiredo M, Ma Y (2010) On the role of sparse and redundant representations in image processing. Proc IEEE 98(6):972–982

  11. Goshtasby A, Nikolov S (2007) Image fusion: advances in the state of the art, Guest editorial. Inf Fusion 8:114–118

  12. Huang W, Jing ZL (2007) Evaluation of focus measures in multi-focus image fusion. Pattern Recogn Lett 28(4):493–500

  13. James AP, Dasarathy B (2014) Medical image fusion: a survey of the state of the art. Inf Fusion 19:14–19

  14. Kim M, Han DK, Ko H (2016) Joint patch clustering-based dictionary learning for multimodal image fusion. Inf Fusion 27:198–214

  15. Li S, Yang B, Hu J (2011) Performance comparison of different multi-resolution transforms for image fusion. Inf Fusion 12(2):74–84

  16. Li S, Kang X, Hu J (2013) Image fusion with guided filtering. IEEE Trans Image Process 22(7):2864–2875

  17. Liu Y, Wang Z (2015) Simultaneous image fusion and denoising with adaptive sparse representation. IET Image Process 9(5):347–357

  18. Liu Y, Liu S, Wang Z (2015) A general framework for image fusion based on multi-scale transform and sparse representation. Inf Fusion 24:147–164

  19. Nejati M, Samavi S, Shirani S (2015) Multi-focus image fusion using dictionary-based sparse representation. Inf Fusion 25:72–84

  20. Phamila YAV, Amutha R (2014) Discrete Cosine Transform based fusion of multi-focus images for visual sensor networks. Signal Process 95:161–170

  21. Piella G, Heijmans H (2003) A new quality metric for image fusion. In: Proceedings of 10th International Conference on Image Processing: 173–176

  22. Qu G, Zhang D, Yan P (2002) Information measure for performance of image fusion. Electron Lett 38(7):313–315

  23. Qu X-B, Yan J-W, Xiao H-Z, Zhu Z-Q (2008) Image fusion based on spatial frequency motivated pulse coupled neural networks in nonsubsampled contourlet transform domain. Acta Automat Sin 34:1508–1514

  24. Rama Bai M (2010) A new approach for border extraction using morphological methods. Int J Eng Sci Technol 2:3832–3837

  25. Soille P (2003) Morphological principles and applications. Springer-Verlag, Berlin

  26. Tian J, Chen L, Ma L, Yu W (2011) Multi-focus image fusion using a bilateral gradient-based sharpness criterion. Opt Commun 284(1):80–87

  27. Xydeas C, Petrovic V (2000) Objective image fusion performance measure. Electron Lett 36(4):308–309

  28. Yang B, Li S (2010) Multifocus image fusion and restoration with sparse representation. IEEE Trans Instrum Meas 59(4):884–892

  29. Yang B, Li S (2012) Pixel-level image fusion with simultaneous orthogonal matching pursuit. Inf Fusion 13(1):10–19

  30. Yang C, Zhang J, Wang X, Liu X (2008) A novel similarity based quality metric for image fusion. Inf Fusion 9(2):156–160

  31. Yin H, Li S (2011) Multimodal image fusion with joint sparsity model. Opt Eng 50(6):067007

  32. Yin H, Li Y, Chai Y, Liu Z, Zhu Z (2016) A novel sparse-representation-based multi-focus image fusion approach. Neurocomputing 216:216–229

  33. Yu N, Qiu T, Bi F, Wang A (2011) Image feature extraction and fusion based on joint sparse representation. IEEE J Sel Top Signal Process 5(5):1074–1082

  34. Zhang Q, Guo B (2009) Multi-focus image fusion using the nonsubsampled contourlet transform. Signal Process 89(7):1334–1346

  35. Zhu S (2011) Edge Detection Based on Multi-structure Elements Morphology and Image Fusion. In: Proceedings of IEEE International Conference on Computing, Control and Industrial Engineering, 406–411

Acknowledgment

We express our sincere thanks to Yu Liu [33] for sharing the MST-SR toolbox. The multi-focus data set and the multi-modal CT/MR image pairs used in our experiments are available at [1, 2]. The artificially blurred multi-focus image pairs used in the proposed method are generated by convolving each ground-truth image (available at [3]) with a 7 × 7 averaging filter applied to the left part and the right part, respectively.
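
A minimal sketch of this pair-generation step is given below, assuming a NumPy/SciPy environment; the function name, the column-wise half split and the variable names are illustrative and not taken from the paper.

```python
# Sketch of the artificial multi-focus pair generation described above:
# each ground-truth image is blurred with a 7 x 7 averaging filter on its
# left half for one source image and on its right half for the other.
import numpy as np
from scipy.ndimage import uniform_filter

def make_multifocus_pair(ground_truth, kernel_size=7):
    blurred = uniform_filter(ground_truth.astype(float), size=kernel_size)
    half = ground_truth.shape[1] // 2                  # split the columns in two
    left_defocused = ground_truth.astype(float).copy()
    right_defocused = ground_truth.astype(float).copy()
    left_defocused[:, :half] = blurred[:, :half]       # source A: left part blurred
    right_defocused[:, half:] = blurred[:, half:]      # source B: right part blurred
    return left_defocused, right_defocused
```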

Author information

Corresponding author

Correspondence to N. Aishwarya.

About this article

Cite this article

Aishwarya, N., Bennila Thangammal, C. An image fusion framework using morphology and sparse representation. Multimed Tools Appl 77, 9719–9736 (2018). https://doi.org/10.1007/s11042-017-5562-4


Keywords