Abstract
Existing unpaired image enhancement approaches typically adopt the two-way generative adversarial network (GAN) framework, in which two convolutional neural network (CNN) generators are deployed separately for enhancement and degradation. However, such purely data-driven models ignore the inherent reciprocity of the transformation between low-light and normal-light images, leading to unstable training and artifacts. Here, we propose to leverage an invertible neural network that enhances low-light images in the forward pass and degrades unpaired normal-light photographs in the inverse pass. The generated and real images are then fed into discriminators for adversarial learning. In addition to the adversarial loss, we design a transformation-consistent loss to stabilize training, a detail-preserving loss to retain more image details, and a reversibility loss to alleviate the over-exposure problem. Moreover, we present a progressive self-guided enhancement process at inference time and achieve favorable performance against state-of-the-art methods.
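To illustrate the core idea of a single invertible network serving both directions, the following minimal sketch shows an affine coupling block whose forward pass could play the role of enhancement (low-light to normal-light) and whose exact inverse could play the role of degradation. This is an illustrative assumption based on standard coupling-layer designs, not the authors' exact architecture; all module and variable names are hypothetical.

# Minimal sketch of an invertible affine-coupling block (illustrative only,
# not the authors' exact architecture). One set of weights defines both the
# forward mapping and its exact inverse.
import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        # Small CNN predicting per-pixel log-scale and shift from half of the channels.
        self.net = nn.Sequential(
            nn.Conv2d(channels // 2, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, channels, 3, padding=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Forward direction, e.g. low-light -> normal-light.
        x1, x2 = x.chunk(2, dim=1)
        log_s, t = self.net(x1).chunk(2, dim=1)
        y2 = x2 * torch.exp(log_s) + t  # affine transform of the second half
        return torch.cat([x1, y2], dim=1)

    def inverse(self, y: torch.Tensor) -> torch.Tensor:
        # Inverse direction, e.g. normal-light -> low-light, exactly undoing forward().
        y1, y2 = y.chunk(2, dim=1)
        log_s, t = self.net(y1).chunk(2, dim=1)
        x2 = (y2 - t) * torch.exp(-log_s)
        return torch.cat([y1, x2], dim=1)

# Usage: both directions share parameters, so the mapping is bijective by construction.
block = AffineCoupling(channels=4)
x = torch.randn(1, 4, 64, 64)
assert torch.allclose(block.inverse(block(x)), x, atol=1e-5)

Because the two directions are tied by construction, no separate degradation generator is needed; this is the property the abstract attributes to the invertible design.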
Data Availability
All data analyzed during this study are included in this published article. The data generated during the current study are available from the corresponding author on reasonable request.
Acknowledgements
This work was supported by the National Natural Science Foundation of China under Grants 62006064 and U19A2073.
Author information
Authors and Affiliations
Corresponding author
Ethics declarations
Conflict of interest
All authors declare no conflict of interest.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
About this article
Cite this article
Zhang, J., Wang, H., Wu, X. et al. Invertible network for unpaired low-light image enhancement. Vis Comput 40, 109–120 (2024). https://doi.org/10.1007/s00371-023-02769-2
DOI: https://doi.org/10.1007/s00371-023-02769-2