
RUN: Rethinking the UNet Architecture for Efficient Image Restoration

Published: 30 May 2024

Abstract

Recent advanced image restoration (IR) methods typically stack homogeneous operators hierarchically within the UNet architecture. To achieve higher accuracy, these models have grown deeper and more complex, making them resource-intensive. After comprehensively reviewing the operators used in modern networks and analyzing their individual strengths, we design a novel, efficient IR network by redesigning the UNet architecture (RUN) with heterogeneous operators. Specifically, we propose three heterogeneous operators, each matched to the distinct character of the hierarchical features at a given UNet level. First, the spatial self-attention block (SSA Block) processes high-resolution top-level features by modeling pixel interactions along the spatial dimension. Second, the channel self-attention block (CSA Block) performs channel recalibration and information transmission for the channel-rich bottom-level features. Finally, a simple and efficient convolution block (Conv Block) facilitates information propagation at the middle levels, complementing the self-attention mechanisms to achieve local-global coupling. Based on these designs, RUN enables comprehensive information dissemination and interaction regardless of topological distance, achieving superior performance within a desirable computational budget. Extensive experiments show that RUN achieves state-of-the-art results on a variety of IR tasks, including image deblurring, image denoising, image deraining, and low-light image enhancement.
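
To make the division of labor concrete, here is a minimal PyTorch sketch of what the three operator types could look like. This is an illustrative reading of the abstract, not the authors' implementation: all module names, the residual wiring, and the attention parameterizations are assumptions.

```python
# Minimal sketch (an assumption, not the authors' released code) of the three
# operator types the abstract describes: spatial self-attention for the
# high-resolution top levels, channel self-attention for the channel-rich
# bottom level, and plain convolutions for the middle levels.
import torch
import torch.nn as nn


class SpatialSA(nn.Module):
    """'SSA Block' reading: every pixel attends to every other pixel."""
    def __init__(self, dim: int, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:   # x: (B, C, H, W)
        b, c, h, w = x.shape
        tokens = x.flatten(2).transpose(1, 2)              # (B, H*W, C)
        out, _ = self.attn(tokens, tokens, tokens)
        return x + out.transpose(1, 2).reshape(b, c, h, w)


class ChannelSA(nn.Module):
    """'CSA Block' reading: attention across channels, cheap at low resolution."""
    def __init__(self, dim: int):
        super().__init__()
        self.qkv = nn.Conv2d(dim, dim * 3, kernel_size=1)
        self.proj = nn.Conv2d(dim, dim, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:   # x: (B, C, H, W)
        b, c, h, w = x.shape
        q, k, v = self.qkv(x).flatten(2).chunk(3, dim=1)   # each (B, C, H*W)
        attn = (q @ k.transpose(1, 2)) * (h * w) ** -0.5   # (B, C, C)
        out = (attn.softmax(dim=-1) @ v).reshape(b, c, h, w)
        return x + self.proj(out)


class ConvBlock(nn.Module):
    """'Conv Block' reading: local convolutions for mid-level propagation."""
    def __init__(self, dim: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(dim, dim, kernel_size=3, padding=1),
            nn.GELU(),
            nn.Conv2d(dim, dim, kernel_size=3, padding=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.body(x)
```

Under this reading, a U-shaped encoder-decoder would place SpatialSA at its finest scales, ChannelSA at the bottleneck, and ConvBlock at the scales in between; the per-level choice of operator, rather than the operators themselves, is what the abstract presents as the key design.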



Published In

IEEE Transactions on Multimedia, Volume 26, 2024
Publisher: IEEE Press
