Low-Light Image Enhancement Network Using Informative Feature Stretch and Attention
Figures

- Figure 1: Low-light images and their respective ground-truth counterparts, along with their accompanying histograms.
- Figure 2: Examples of enhanced image results using different machine-learning-based methods.
- Figure 3: The overall structure of the proposed LIE network.
- Figure 4: Conventional image normalization examples with different percentile maximum and minimum values (max percentile, min percentile).
- Figure 5: Configuration of the DRS block.
- Figure 6: Restored low-light images and corresponding histograms using only the DRS block.
- Figure 7: Configuration of the IFA block.
- Figure 8: Configuration of the FS block.
- Figure 9: Configuration of the DR block.
- Figure 10: Enhanced image results for the paired synthetic LOL dataset.
- Figure 11: Visual comparison of enhancement results on the VV dataset (24 images).
- Figure 12: Visual comparison of enhancement results on the DICM dataset (44 images).
- Figure 13: Visual comparison of enhancement results on the LIME dataset (10 images).
- Figure 14: Visual comparison of enhancement results on the MEF dataset (17 images).
- Figure 15: Visual comparison of results using three ablation studies.
Abstract
1. Introduction
- To enhance the quality of low-light images, we previously introduced an adaptive standardization block and an adaptive normalization block [14], and achieved successful underwater image enhancement by carefully and repeatedly combining these blocks. Building on these findings, we design the dynamic range stretch (DRS) block by sequentially connecting a single adaptive standardization block and a single adaptive normalization block. However, when the DRS block is applied directly to low-light images, it primarily enhances brightness without adequately restoring color and detail.
- We introduce an informative feature attention (IFA) block designed to complement the output of the DRS block, focusing specifically on restoring image colors and details. A histogram-equalized version of the low-light image is generated and fed into the IFA block together with the original low-light image. The histogram-equalized image serves as a roughly reconstructed image that provides basic color and detail information. The IFA block comprises a feature stretch (FS) block and a detail-recovery (DR) block. The FS block combines the DRS block with a squeeze-and-excitation residual (SE-Res) block [15], while the DR block combines a modified multiscale block [16] with the SE-Res block.
- Finally, the feature outputs of the DRS and IFA blocks are multiplied to generate the restored low-light image; the features from the IFA block act as attention weights on the image features produced by the DRS block. After a final convolution on the weighted features, the proposed network outputs the restored low-light image.
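The first contribution describes the DRS block as a sequential stretch: adaptive standardization followed by adaptive normalization. The paper's blocks are learned and adaptive; the sketch below is a fixed-statistics stand-in using per-channel moments and hypothetical percentile bounds (the function names and the 1%/99% clip values are illustrative assumptions, not the paper's learned parameters):

```python
import numpy as np

def standardize(img, eps=1e-6):
    """Per-channel zero-mean, unit-variance standardization (fixed-statistics stand-in)."""
    mean = img.mean(axis=(0, 1), keepdims=True)
    std = img.std(axis=(0, 1), keepdims=True)
    return (img - mean) / (std + eps)

def normalize(img, lo_pct=1.0, hi_pct=99.0, eps=1e-6):
    """Percentile-clipped min-max normalization to [0, 1] (cf. Figure 4's
    max/min percentile examples); 1%/99% bounds are assumed for illustration."""
    lo = np.percentile(img, lo_pct, axis=(0, 1), keepdims=True)
    hi = np.percentile(img, hi_pct, axis=(0, 1), keepdims=True)
    return np.clip((img - lo) / (hi - lo + eps), 0.0, 1.0)

def drs_like(img):
    """Sequential standardization -> normalization, mimicking the DRS ordering."""
    return normalize(standardize(img))
```

Applied to a dark image, this stretch expands the compressed dynamic range toward [0, 1], which matches the observation that the DRS block alone mainly brightens without restoring color or detail.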
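The second contribution uses a histogram-equalized image as a rough guide and SE-Res blocks inside the IFA branch. As a minimal sketch (not the paper's trained network), the two key operations are global histogram equalization of a channel and squeeze-and-excitation channel gating; the weight shapes and reduction ratio here are illustrative assumptions:

```python
import numpy as np

def hist_equalize(channel, bins=256):
    """Histogram equalization of one channel in [0, 1]: map intensities through the CDF."""
    hist, edges = np.histogram(channel, bins=bins, range=(0.0, 1.0))
    cdf = hist.cumsum().astype(np.float64)
    cdf /= cdf[-1]
    return np.interp(channel.ravel(), edges[:-1], cdf).reshape(channel.shape)

def se_gate(features, w1, w2):
    """Squeeze-and-excitation [15]: global average pool -> FC-ReLU -> FC-sigmoid,
    then reweight channels. features: (H, W, C); w1: (C, C//r); w2: (C//r, C)."""
    squeezed = features.mean(axis=(0, 1))        # squeeze: (C,)
    excite = np.maximum(squeezed @ w1, 0.0)      # excitation, ReLU
    gate = 1.0 / (1.0 + np.exp(-(excite @ w2)))  # per-channel gate in (0, 1)
    return features * gate                       # channel-wise reweighting
```

In the paper, the equalized image only provides coarse color/detail cues; the SE-Res blocks then learn which feature channels to emphasize, whereas here the gate weights are arbitrary placeholders.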
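The third contribution fuses the two branches: IFA features weight the DRS features, and a convolution produces the output. A minimal sketch of that fusion, assuming element-wise weighting and using a 1×1 convolution as a stand-in for the paper's final convolution layer:

```python
import numpy as np

def fuse(drs_feat, ifa_weight, kernel):
    """Weight DRS features by IFA output, then apply a 1x1 convolution.
    drs_feat, ifa_weight: (H, W, C); kernel: (C, C_out)."""
    weighted = drs_feat * ifa_weight  # IFA features act as attention weights
    # A 1x1 convolution is a per-pixel linear map over channels.
    return np.tensordot(weighted, kernel, axes=([2], [0]))
```

If the IFA weights are zero everywhere, the fused output is zero, which makes explicit that the IFA branch gates what the DRS branch contributes.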
2. Related Works
2.1. Image-Processing-Based Method
2.2. Model-Based Method
2.3. Machine-Learning-Based Method
3. Proposed Network
3.1. Configuration of Proposed Network
3.2. Dynamic Range Stretch Block
3.3. Informative Feature Attention Block
3.3.1. Feature Stretch Block
3.3.2. Detail-Recovery Block
3.4. Loss Function
4. Experimental Results
4.1. Implementation Details
4.2. Datasets and Comparison Methods
4.3. Complexity
4.4. Results on Synthetic Dataset
4.5. Results on Non-Reference Datasets
4.6. Ablation Experiments
5. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
- Kim, W. Low-light image enhancement: A comparative review and prospects. IEEE Access 2022, 10, 84535–84557. [Google Scholar] [CrossRef]
- Rasheed, M.T.; Shi, D.; Khan, H. A comprehensive experiment-based review of low-light image enhancement methods and benchmarking low-light image quality assessment. Signal Process. 2023, 204, 108821. [Google Scholar] [CrossRef]
- Li, C.; Guo, C.; Han, L.; Jiang, J.; Cheng, M.M.; Gu, J.; Loy, C.C. Low-light image and video enhancement using deep learning: A survey. IEEE Trans. Pattern Anal. Mach. Intell. 2022, 44, 9396–9416. [Google Scholar] [CrossRef]
- Singh, K.; Kapoor, R. Image enhancement using exposure based subimage histogram equalization. Pattern Recogn. Lett. 2014, 36, 10–14. [Google Scholar] [CrossRef]
- Kim, S.E.; Jeon, J.J.; Eom, I.K. Image contrast enhancement using entropy scaling in wavelet domain. Signal Process. 2016, 127, 1–11. [Google Scholar] [CrossRef]
- Guo, X.; Li, Y.; Ling, H. LIME: Low-light image enhancement via illumination map estimation. IEEE Trans. Image Process. 2017, 26, 982–993. [Google Scholar] [CrossRef] [PubMed]
- Jeon, J.J.; Eom, I.K. Low-light image enhancement using inverted image normalized by atmospheric light. Signal Process. 2022, 196, 108523. [Google Scholar] [CrossRef]
- Lore, K.G.; Akintayo, A.; Sarkar, S. LLNet: A deep autoencoder approach to natural low-light image enhancement. Pattern Recognit. 2017, 61, 650–662. [Google Scholar] [CrossRef]
- Park, J.Y.; Park, C.W.; Eom, I.K. ULBPNet: Low-light image enhancement using U-shaped lightening back-projection. Knowl.-Based Syst. 2023, 281, 111099. [Google Scholar] [CrossRef]
- Land, E.H. The Retinex theory of color vision. Sci. Am. 1977, 237, 108–128. [Google Scholar] [CrossRef] [PubMed]
- Narasimhan, S.G.; Nayar, S.K. Contrast restoration of weather degraded images. IEEE Trans. Pattern Anal. Mach. Intell. 2003, 25, 713–724. [Google Scholar] [CrossRef]
- Wang, Y.; Wan, R.; Li, H.; Chau, L.P.; Kot, A. Low-light image enhancement with normalizing flow. In Proceedings of the AAAI Conference on Artificial Intelligence, Palo Alto, CA, USA, 22 February–1 March 2022; pp. 2604–2612. [Google Scholar] [CrossRef]
- Cui, H.; Li, J.; Hua, Z.; Fan, L. TPET: Two-stage perceptual enhancement transformer network for low-light image enhancement. Eng. Appl. Artif. Intell. 2022, 116, 105411. [Google Scholar] [CrossRef]
- Park, C.W.; Eom, I.K. Underwater image enhancement using adaptive standardization and normalization networks. Eng. Appl. Artif. Intell. 2024, 127, 107445. [Google Scholar] [CrossRef]
- Hu, J.; Shen, L.; Albanie, S.; Sun, G.; Wu, E. Squeeze-and-excitation networks. IEEE Trans. Pattern Anal. Mach. Intell. 2020, 42, 2011–2023. [Google Scholar] [CrossRef] [PubMed]
- Zhang, Y.; Guo, X.; Ma, J.; Liu, W.; Zhang, J. Beyond brightening low-light images. Int. J. Comput. Vis. 2021, 129, 1013–1037. [Google Scholar] [CrossRef]
- Jiang, Y.; Gong, X.; Liu, D.; Cheng, Y.; Shen, X.; Yang, J.; Zhou, P.; Wang, Z. EnlightenGAN: Deep light enhancement without paired supervision. IEEE Trans. Image Process. 2021, 30, 2340–2349. [Google Scholar] [CrossRef]
- Lv, F.; Li, Y.; Lu, F. Attention guided low-light image enhancement with a large scale low-light simulation dataset. Int. J. Comput. Vis. 2021, 129, 2175–2193. [Google Scholar] [CrossRef]
- Lu, K.; Zhang, L. TBEFN: A two-branch exposure-fusion network for low-light image enhancement. IEEE Trans. Multimed. 2021, 23, 4093–4105. [Google Scholar] [CrossRef]
- Wu, W.; Weng, J.; Zhang, P.; Wang, X.; Yang, W.; Jiang, J. URetinex-Net: Retinex-based deep unfolding network for low-light image enhancement. In Proceedings of the 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), New Orleans, LA, USA, 18–24 June 2022; pp. 5891–5900. [Google Scholar] [CrossRef]
- Wang, T.; Zhang, K.; Shen, T.; Luo, W.; Stenger, B.; Lu, T. Ultra-high-definition low-light image enhancement: A benchmark and transformer-based method. In Proceedings of the 37th AAAI Conference on Artificial Intelligence, Washington, DC, USA, 7–14 February 2023; pp. 2654–2662. [Google Scholar] [CrossRef]
- Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process. 2004, 13, 600–612. [Google Scholar] [CrossRef]
- Liu, S.C.; Liu, S.; Wu, H.; Rahman, M.A.; Lin, S.C.-F.; Wong, C.Y.; Kwok, N.; Shi, H. Enhancement of low illumination images based on an optimal hyperbolic tangent profile. Comput. Electr. Eng. 2018, 70, 538–550. [Google Scholar] [CrossRef]
- Srinivas, K.; Bhandari, A.K. Low light image enhancement with adaptive sigmoid transfer function. IET Image Process. 2020, 14, 668–678. [Google Scholar] [CrossRef]
- Celik, T.; Tjahjadi, T. Contextual and variational contrast enhancement. IEEE Trans. Image Process. 2011, 20, 3431–3441. [Google Scholar] [CrossRef] [PubMed]
- Fu, X.; Zeng, D.; Huang, Y.; Liao, Y.; Ding, X.; Paisley, J. A fusion-based enhancing method for weakly illuminated images. Signal Process. 2016, 129, 82–96. [Google Scholar] [CrossRef]
- Ying, Z.; Li, G.; Gao, W. A bio-inspired multi-exposure fusion framework for low-light image enhancement. arXiv 2017, arXiv:1711.00591. [Google Scholar] [CrossRef]
- Li, M.; Liu, J.; Yang, W.; Sun, X.; Guo, Z. Structure-revealing low-light image enhancement via robust Retinex model. IEEE Trans. Image Process. 2018, 27, 2828–2841. [Google Scholar] [CrossRef]
- Hao, S.; Han, X.; Guo, Y.; Xu, X.; Wang, M. Low-light image enhancement with semi-decoupled decomposition. IEEE Trans. Multimed. 2020, 22, 3025–3038. [Google Scholar] [CrossRef]
- Ren, Y.; Ying, Z.; Li, T.H.; Li, G. LECARM: Low-light image enhancement using the camera response model. IEEE Trans. Circuits Syst. Video Technol. 2019, 29, 968–981. [Google Scholar] [CrossRef]
- He, K.M.; Sun, J.; Tang, X.O. Single image haze removal using dark channel prior. IEEE Trans. Pattern Anal. Mach. Intell. 2011, 33, 2341–2353. [Google Scholar] [CrossRef]
- Shi, Z.; Zhu, M.; Guo, B.; Zhao, M. A photographic negative imaging inspired method for low illumination night-time image enhancement. Multimed. Tools Appl. 2017, 76, 15027–15048. [Google Scholar] [CrossRef]
- Gu, Z.; Chen, C.; Zhang, D. A low-light image enhancement method based on image degradation model and pure pixel ratio prior. Math. Probl. Eng. 2018, 2018, 8178109. [Google Scholar] [CrossRef]
- Jeon, J.J.; Park, J.Y.; Eom, I.K. Low-light image enhancement using gamma correction prior in mixed color spaces. Pattern Recognit. 2024, 146, 110001. [Google Scholar] [CrossRef]
- Lv, F.; Lu, F.; Wu, J.; Lim, C. MBLLEN: Low-light image/video enhancement using CNNs. In Proceedings of the 29th British Machine Vision Conference (BMVC), Newcastle, UK, 3–6 September 2018; pp. 1–13. [Google Scholar]
- Wang, L.W.; Liu, J.S.; Siu, W.C.; Lun, D.P.K. Lightening network for low-light image enhancement. IEEE Trans. Image Process. 2020, 29, 7984–7996. [Google Scholar] [CrossRef]
- Lim, S.; Kim, W. DSLR: Deep stacked Laplacian restorer for low-light image enhancement. IEEE Trans. Multimed. 2021, 23, 4272–4284. [Google Scholar] [CrossRef]
- Wei, C.; Wang, W.; Yang, W.; Liu, J. Deep Retinex decomposition for low-light enhancement. arXiv 2018, arXiv:1808.04560. [Google Scholar] [CrossRef]
- Zhang, Y.; Zhang, J.; Guo, X. Kindling the darkness: A practical low-light image enhancer. In Proceedings of the 27th ACM International Conference on Multimedia, Nice, France, 21–25 October 2019; pp. 1632–1640. [Google Scholar] [CrossRef]
- Zhu, A.; Zhang, L.; Shen, Y.; Ma, Y.; Zhao, S.; Zhou, Y. Zero-shot restoration of underexposed images via robust Retinex decomposition. In Proceedings of the 2020 IEEE International Conference on Multimedia and Expo (ICME), London, UK, 6–10 July 2020; pp. 1–6. [Google Scholar] [CrossRef]
- Zhao, Z.; Xiong, B.; Wang, L.; Ou, Q.; Yu, L.; Kuang, F. RetinexDIP: A unified deep framework for low-light image enhancement. IEEE Trans. Circuits Syst. Video Technol. 2022, 32, 1076–1088. [Google Scholar] [CrossRef]
- Guo, C.; Li, C.; Guo, J.; Loy, C.C.; Hou, J.; Kwong, S.; Cong, R. Zero-reference deep curve estimation for low-light image enhancement. In Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 13–19 June 2020; pp. 1780–1789. [Google Scholar] [CrossRef]
- Xu, X.; Wang, R.; Fu, C.W.; Jia, J. SNR-aware low-light image enhancement. In Proceedings of the 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), New Orleans, LA, USA, 18–24 June 2022; pp. 17693–17703. [Google Scholar] [CrossRef]
- Yang, S.; Zhou, D.; Cao, J.; Guo, Y. LightingNet: An integrated learning method for low-light image enhancement. IEEE Trans. Comput. Imaging 2023, 9, 29–42. [Google Scholar] [CrossRef]
- Ioffe, S.; Szegedy, C. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In Proceedings of the 32nd International Conference on International Conference on Machine Learning, Lille, France, 6–11 July 2015; pp. 448–456. [Google Scholar]
- Kingma, D.P.; Ba, J. Adam: A method for stochastic optimization. In Proceedings of the International Conference on Learning Representations (ICLR), San Diego, CA, USA, 7–9 May 2015; pp. 1–15. [Google Scholar] [CrossRef]
- Liu, J.; Xu, D.; Yang, W.; Fan, M.; Huang, H. Benchmarking low-light image enhancement and beyond. Int. J. Comput. Vis. 2021, 129, 1153–1184. [Google Scholar] [CrossRef]
- Lee, C.; Lee, C.; Kim, C.S. Contrast enhancement based on layered difference representation. In Proceedings of the IEEE International Conference on Image Processing (ICIP), Orlando, FL, USA, 30 September–3 October 2012; pp. 965–968. [Google Scholar] [CrossRef]
- Wang, Q.; Fu, X.; Zhang, X.P.; Ding, X. A fusion-based method for single backlit image enhancement. In Proceedings of the IEEE International Conference on Image Processing (ICIP), Phoenix, AZ, USA, 25–28 September 2016; pp. 4077–4081. [Google Scholar] [CrossRef]
- Vonikakis, V. Dataset. Available online: https://sites.google.com/site/vonikakis/datasets (accessed on 14 June 2024).
- Ma, K.; Zeng, K.; Wang, Z. Perceptual quality assessment for multi-exposure image fusion. IEEE Trans. Image Process. 2015, 24, 3345–3356. [Google Scholar] [CrossRef] [PubMed]
- Sharma, G.; Wu, W.; Dalal, E.N. The CIEDE2000 color-difference formula: Implementation notes, supplementary test data, and mathematical observations. Color Res. Appl. 2005, 30, 21–30. [Google Scholar] [CrossRef]
- Zhang, R.; Isola, P.; Efros, A.A.; Shechtman, E.; Wang, O. The unreasonable effectiveness of deep features as a perceptual metric. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA, 18–23 June 2018; pp. 586–595. [Google Scholar] [CrossRef]
- Zhang, L.; Zhang, L.; Mou, X.; Zhang, D. FSIM: A feature similarity index for image quality assessment. IEEE Trans. Image Process. 2011, 20, 2378–2386. [Google Scholar] [CrossRef] [PubMed]
- Mittal, A.; Soundararajan, R.; Bovik, A.C. Making a ‘completely blind’ image quality analyzer. IEEE Signal Process. Lett. 2013, 20, 209–212. [Google Scholar] [CrossRef]
- Yang, K.F.; Cheng, C.; Zhao, S.X.; Yan, H.M.; Zhang, X.S.; Li, Y.J. Learning to adapt to light. Int. J. Comput. Vis. 2023, 131, 1022–1041. [Google Scholar] [CrossRef]
| Network | Number of Parameters (×10⁶) | Runtime (s) |
|---|---|---|
| RetinexNet [38] | 0.444 | 9.033 |
| DLN [36] | 0.701 | 0.400 |
| TBEFN [19] | 0.486 | 0.909 |
| Zero-DCE [42] | 0.079 | 0.016 |
| EnlightenGAN [17] | 8.637 | 0.031 |
| KinD++ [16] | 8.275 | 23.45 |
| AGLLNet [18] | 0.926 | 2.800 |
| DSLR [37] | 14.931 | 4.025 |
| URetinex-Net [20] | 0.361 | 0.456 |
| SNRA [43] | 39.120 | 0.382 |
| LLFormer [21] | 24.549 | 0.454 |
| Proposed network | 1.532 | 0.388 |
| Method | PSNR↑ | SSIM↑ | NIQE↓ | CIEDE2000↓ | LPIPS↓ | FSIM↑ |
|---|---|---|---|---|---|---|
| LIME [6] | 16.42 | 0.551 | 7.582 | 16.600 | 0.4035 | 0.879 |
| SDD [29] | 13.34 | 0.609 | 2.863 | 21.835 | 0.2609 | 0.903 |
| GCPMC [34] | 17.39 | 0.540 | 2.522 | 14.631 | 0.4289 | 0.876 |
| RetinexNet [38] | 14.98 | 0.405 | 8.091 | 20.474 | 0.4649 | 0.782 |
| DLN [36] | 19.26 | 0.728 | 6.577 | 13.295 | 0.2960 | 0.940 |
| TBEFN [19] | 15.84 | 0.581 | 2.623 | 17.510 | 0.2107 | 0.839 |
| Zero-DCE [42] | 14.80 | 0.583 | 6.925 | 18.920 | 0.3299 | 0.927 |
| EnlightenGAN [17] | 15.31 | 0.498 | 5.056 | 19.128 | 0.3192 | 0.823 |
| KinD++ [16] | 15.72 | 0.597 | 2.824 | 17.260 | 0.1977 | 0.780 |
| AGLLNet [18] | 17.53 | 0.659 | 3.157 | 13.945 | 0.2181 | 0.919 |
| DSLR [37] | 14.08 | 0.514 | 2.968 | 19.671 | 0.3275 | 0.833 |
| URetinex-Net [20] | 17.28 | 0.618 | 2.838 | 15.232 | 0.1275 | 0.849 |
| SNRA [43] | 24.61 | 0.843 | 2.516 | 6.849 | 0.1610 | 0.962 |
| LLFormer [21] | 23.65 | 0.810 | 2.664 | 7.938 | 0.1640 | 0.959 |
| Proposed network | 24.31 | 0.820 | 2.304 | 7.509 | 0.1380 | 0.965 |
| Method | DICM | LIME | Fusion | VV | MEF | Average |
|---|---|---|---|---|---|---|
| LIME [6] | 3.338 | 3.581 | 2.764 | 2.458 | 2.997 | 3.028 |
| SDD [29] | 2.321 | 2.856 | 2.392 | 2.019 | 2.227 | 2.363 |
| GCPMC [34] | 3.308 | 3.630 | 2.728 | 2.306 | 3.002 | 2.995 |
| RetinexNet [38] | 4.076 | 4.143 | 3.081 | 2.907 | 3.832 | 3.608 |
| DLN [36] | 2.023 | 3.213 | 2.463 | 1.894 | 2.552 | 2.429 |
| TBEFN [19] | 2.341 | 3.215 | 2.208 | 1.916 | 2.167 | 2.369 |
| Zero-DCE [42] | 2.606 | 3.393 | 2.596 | 2.160 | 2.846 | 2.720 |
| EnlightenGAN [17] | 2.731 | 2.957 | 2.246 | 2.884 | 2.370 | 2.638 |
| KinD++ [16] | 2.257 | 3.563 | 2.414 | 1.999 | 2.280 | 2.503 |
| AGLLNet [18] | 2.789 | 3.630 | 2.652 | 2.534 | 2.609 | 2.843 |
| DSLR [37] | 2.579 | 2.853 | 2.332 | 1.760 | 2.411 | 2.387 |
| SNRA [43] | 2.501 | 3.523 | 2.935 | 4.225 | 2.373 | 3.111 |
| LLFormer [21] | 2.967 | 3.417 | 3.240 | 2.389 | 2.562 | 2.915 |
| Proposed network | 2.204 | 3.034 | 2.407 | 1.854 | 2.404 | 2.331 |
| DRS | IFA (FS) | IFA (DR) | PSNR↑ | SSIM↑ | FSIM↑ |
|---|---|---|---|---|---|
| √ | √ | √ | 24.31 | 0.820 | 0.965 |
| √ |  |  | 17.34 | 0.537 | 0.914 |
| √ | √ |  | 22.41 | 0.776 | 0.953 |
| √ |  | √ | 22.16 | 0.786 | 0.946 |
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Chun, S.M.; Park, J.Y.; Eom, I.K. Low-Light Image Enhancement Network Using Informative Feature Stretch and Attention. Electronics 2024, 13, 3883. https://doi.org/10.3390/electronics13193883