Abstract
In the cloud computing environment, big-data video frames carry a large amount of information, and a single scene is often captured in many images, none of which describes the scene completely. Traditional image fusion algorithms suffer from defects such as poor quality, low resolution, and information loss in the fused image. We therefore propose a very deep fully convolutional encoder–decoder network based on the wavelet transform for art image fusion in the cloud computing environment. The network builds on VGG-Net and comprises an encoder sub-network and a decoder sub-network. The images to be fused are decomposed by the wavelet transform into low-frequency and high-frequency sub-images at different scales, and separate fusion schemes are applied to the low-frequency and high-frequency sub-band coefficients. Taking the structural similarity between the images before and after fusion as the optimization objective, and introducing a weight factor that reflects local image information, we design a loss function tailored to the final fusion, so that the fused image retains the effective information of all input images. Compared with other state-of-the-art image fusion methods, the proposed method achieves significant improvements in both subjective visual quality and objective quantitative indexes.
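The sub-band fusion step described in the abstract can be sketched with a single-level Haar transform standing in for the paper's wavelet decomposition. This is a minimal illustration, not the authors' implementation: the function names and the specific rules (averaging the low-frequency band, keeping the larger-magnitude coefficient in each high-frequency band) are our assumptions.

```python
import numpy as np

def haar2d(img):
    """Single-level 2D Haar decomposition (image height/width must be even)."""
    a = img[0::2, 0::2]; b = img[0::2, 1::2]
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    ll = (a + b + c + d) / 4.0   # low-frequency approximation
    lh = (a + b - c - d) / 4.0   # horizontal detail
    hl = (a - b + c - d) / 4.0   # vertical detail
    hh = (a - b - c + d) / 4.0   # diagonal detail
    return ll, lh, hl, hh

def ihaar2d(ll, lh, hl, hh):
    """Exact inverse of haar2d."""
    h, w = ll.shape
    img = np.empty((2 * h, 2 * w))
    img[0::2, 0::2] = ll + lh + hl + hh
    img[0::2, 1::2] = ll + lh - hl - hh
    img[1::2, 0::2] = ll - lh + hl - hh
    img[1::2, 1::2] = ll - lh - hl + hh
    return img

def wavelet_fuse(img1, img2):
    """Fuse two images in the wavelet domain: average the low-frequency
    sub-band, keep the larger-magnitude coefficient in each detail band."""
    c1, c2 = haar2d(img1), haar2d(img2)
    ll = (c1[0] + c2[0]) / 2.0                              # low-frequency rule
    details = [np.where(np.abs(d1) >= np.abs(d2), d1, d2)   # high-frequency rule
               for d1, d2 in zip(c1[1:], c2[1:])]
    return ihaar2d(ll, *details)
```

In practice a multi-level transform with a smoother wavelet basis would be used, but the structure (decompose, fuse per sub-band, reconstruct) is the same.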
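The structural-similarity objective with a local-information weight factor might look like the following NumPy sketch. The patch size, the variance-based weighting, and all names here are illustrative assumptions rather than the authors' exact formulation.

```python
import numpy as np

def patch_ssim(x, y, c1=1e-4, c2=9e-4):
    """SSIM of two patches (intensities assumed in [0, 1])."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

def weighted_ssim_loss(fused, ref, patch=8, eps=1e-8):
    """1 minus a locally weighted mean of per-patch SSIM; patches with
    more local information (higher variance) contribute more weight."""
    h, w = ref.shape
    scores, weights = [], []
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            pf = fused[i:i + patch, j:j + patch]
            pr = ref[i:i + patch, j:j + patch]
            scores.append(patch_ssim(pf, pr))
            weights.append(pr.var() + eps)   # local-information weight factor
    scores, weights = np.array(scores), np.array(weights)
    return 1.0 - float((scores * weights).sum() / weights.sum())
```

For network training this computation would be expressed with differentiable tensor operations so the loss can be backpropagated through the decoder.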
Availability of data and materials
The data are available from the corresponding author on reasonable request.
Funding
No funding was received for this work.
Ethics declarations
Conflict of interest
The authors declare that they have no conflict of interest regarding this paper.
Cite this article
Chen, T., Yang, J. Very deep fully convolutional encoder–decoder network based on wavelet transform for art image fusion in cloud computing environment. Evolving Systems 14, 281–293 (2023). https://doi.org/10.1007/s12530-022-09457-x