
Invertible network for unpaired low-light image enhancement

Original article, published in The Visual Computer

Abstract

Existing unpaired image enhancement approaches typically employ the traditional two-way generative adversarial network (GAN) framework, in which two convolutional neural network (CNN) generators are deployed separately for enhancement and degradation. However, such data-driven models ignore the inherent characteristics of the transformation between low-light and normal-light images, leading to unstable training and artifacts. Here, we propose to leverage an invertible neural network that enhances low-light images in the forward process and degrades unpaired normal-light photographs in the inverse process. The generated and real images are then fed into discriminators for adversarial learning. In addition to the adversarial loss, we design a transformation-consistent loss to ensure training stability, a detail-preserving loss to retain more image details, and a reversibility loss to alleviate the over-exposure problem. Moreover, we present a progressive self-guided enhancement process at inference time and achieve favorable performance against state-of-the-art methods.
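To make the forward-enhancement / inverse-degradation idea concrete, below is a minimal PyTorch sketch of an invertible network built from affine coupling layers in the RealNVP/Glow style. The class names (AffineCoupling, InvertibleEnhancer), the channel squeeze via nn.PixelUnshuffle, and the block count are illustrative assumptions; the sketch does not reproduce the authors' architecture, losses, or the progressive self-guided inference scheme.

import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    # One invertible block: half of the channels predict an affine
    # transform (scale, shift) that is applied to the other half.
    def __init__(self, channels, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels // 2, hidden, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(hidden, channels, 3, padding=1),  # outputs scale and shift
        )

    def forward(self, x, inverse=False):
        x1, x2 = x.chunk(2, dim=1)
        scale, shift = self.net(x1).chunk(2, dim=1)
        scale = torch.tanh(scale)   # bound the scaling for numerical stability
        if not inverse:             # forward direction (e.g., enhancement)
            y2 = x2 * torch.exp(scale) + shift
        else:                       # exact inverse (e.g., degradation)
            y2 = (x2 - shift) * torch.exp(-scale)
        return torch.cat([x1, y2], dim=1)

class InvertibleEnhancer(nn.Module):
    # A stack of coupling blocks shared by both directions:
    # forward() maps low-light -> normal-light, inverse() maps back exactly.
    def __init__(self, channels=12, num_blocks=4):
        super().__init__()
        self.blocks = nn.ModuleList([AffineCoupling(channels) for _ in range(num_blocks)])

    @staticmethod
    def _swap(t):
        # Swap the channel halves so both halves get transformed over the stack.
        a, b = t.chunk(2, dim=1)
        return torch.cat([b, a], dim=1)

    def forward(self, x):
        for blk in self.blocks:
            x = self._swap(blk(x))
        return x

    def inverse(self, y):
        for blk in reversed(self.blocks):
            y = blk(self._swap(y), inverse=True)
        return y

# Usage sketch: squeeze RGB to 12 channels so the channel split is even.
squeeze, unsqueeze = nn.PixelUnshuffle(2), nn.PixelShuffle(2)
net = InvertibleEnhancer(channels=12)
low = torch.rand(1, 3, 64, 64)                        # stand-in low-light image
enhanced = unsqueeze(net(squeeze(low)))               # forward pass
restored = unsqueeze(net.inverse(squeeze(enhanced)))  # recovers `low` up to float error

In the setting described above, a single invertible generator of this kind would replace the two separate CNN generators of a CycleGAN-style pipeline, and the enhanced and degraded outputs would then be scored by the discriminators and the additional losses listed in the abstract.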




Data Availability

All data analyzed during this study are included in this published article. The data generated during the current study are available from the corresponding author on reasonable request.

Notes

  1. https://sites.google.com/site/vonikakis/datasets.


Acknowledgements

This work was supported by National Natural Science Foundation of China under Grants 62006064 and U19A2073.

Author information

Corresponding author

Correspondence to Xiaohe Wu.

Ethics declarations

Conflict of interest

All authors declare no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Zhang, J., Wang, H., Wu, X. et al. Invertible network for unpaired low-light image enhancement. Vis Comput 40, 109–120 (2024). https://doi.org/10.1007/s00371-023-02769-2
