Abstract
Video recording under low-light conditions suffers from noise, artifacts, low illumination, and weak contrast. Given these degradations, it is challenging to recover a high-quality video from its low-light counterpart. Previous works have shown that convolutional neural networks perform well on low-light image tasks, and these methods have been extended to video processing. However, existing video recovery methods fail to exploit long-range spatial and temporal dependencies simultaneously. In this paper, we propose a 3D deformable network based on a U-Net-like architecture (\(\mathrm 3D^2Unet\)) for low-light video enhancement, which recovers RGB videos from RAW sensor data. Specifically, we adopt a spatio-temporal adaptive block with 3D deformable convolutions to better adapt to the varying features of videos along the spatial and temporal dimensions. In addition, a global residual projection is employed to further boost learning efficiency. Experimental results demonstrate that our method outperforms state-of-the-art low-light video enhancement methods.
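To make the architecture description above concrete, the following is a minimal PyTorch sketch of a 3D U-Net-like encoder-decoder with a global residual projection. It is an illustration only, not the authors' implementation: the class names, channel widths, and the 4-channel packed-RAW input assumption are hypothetical, and plain Conv3d layers stand in for the paper's 3D deformable convolutions, since standard torchvision only provides a 2D DeformConv2d.

```python
import torch
import torch.nn as nn


class SpatioTemporalBlock(nn.Module):
    """Stand-in for the paper's spatio-temporal adaptive block.

    The paper uses 3D deformable convolutions; a plain Conv3d is used
    here purely for illustration.
    """
    def __init__(self, channels):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv3d(channels, channels, kernel_size=3, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv3d(channels, channels, kernel_size=3, padding=1),
        )

    def forward(self, x):
        # Local residual keeps gradients flowing through the block.
        return x + self.conv(x)


class Toy3DUNet(nn.Module):
    """Minimal 3D encoder-decoder with a global residual projection.

    Input:  packed RAW frames, shape (N, C_in, T, H, W).
    Output: RGB frames,        shape (N, 3,    T, H, W).
    """
    def __init__(self, in_ch=4, base_ch=32):
        super().__init__()
        self.head = nn.Conv3d(in_ch, base_ch, kernel_size=3, padding=1)
        self.enc = SpatioTemporalBlock(base_ch)
        # Downsample spatially only, keeping the temporal length intact.
        self.down = nn.Conv3d(base_ch, base_ch * 2, kernel_size=3,
                              stride=(1, 2, 2), padding=1)
        self.bottleneck = SpatioTemporalBlock(base_ch * 2)
        self.up = nn.ConvTranspose3d(base_ch * 2, base_ch,
                                     kernel_size=(1, 2, 2),
                                     stride=(1, 2, 2))
        self.dec = SpatioTemporalBlock(base_ch)
        self.tail = nn.Conv3d(base_ch, 3, kernel_size=3, padding=1)
        # Global residual projection: maps the RAW input directly to RGB
        # space so the network only needs to learn the residual.
        self.global_proj = nn.Conv3d(in_ch, 3, kernel_size=1)

    def forward(self, x):
        f = self.head(x)
        e = self.enc(f)
        b = self.bottleneck(self.down(e))
        d = self.dec(self.up(b) + e)   # skip connection, U-Net style
        return self.tail(d) + self.global_proj(x)


if __name__ == "__main__":
    # 4-channel packed Bayer RAW, 8 frames, 64x64 crops (toy sizes).
    raw = torch.randn(1, 4, 8, 64, 64)
    out = Toy3DUNet()(raw)
    print(out.shape)  # torch.Size([1, 3, 8, 64, 64])
```

The global residual projection makes the network predict a correction on top of a learned linear mapping of the input, which is one common way such a shortcut can speed up convergence; the actual paper should be consulted for the exact design.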
Y. Fu—Student.
Acknowledgements
This work was supported by the National Natural Science Foundation of China under Grant No. 61827901 and No. 62088101.
Copyright information
© 2021 Springer Nature Switzerland AG
About this paper
Cite this paper
Zeng, Y., Zou, Y., Fu, Y. (2021). \(\mathrm 3D^2Unet\): 3D Deformable Unet for Low-Light Video Enhancement. In: Ma, H., et al. Pattern Recognition and Computer Vision. PRCV 2021. Lecture Notes in Computer Science(), vol 13021. Springer, Cham. https://doi.org/10.1007/978-3-030-88010-1_6
DOI: https://doi.org/10.1007/978-3-030-88010-1_6
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-88009-5
Online ISBN: 978-3-030-88010-1
eBook Packages: Computer Science (R0)