research-article

Toward Interactive Self-Supervised Denoising

Published: 06 March 2023

Abstract

Self-supervised denoising frameworks have recently been proposed to learn denoising models without noisy-clean image pairs, showing great potential in various applications. A denoising model is expected to produce visually pleasant images free of noise patterns. However, achieving this goal with self-supervised methods is non-trivial because 1) without clean supervision, it is difficult for a self-supervised model to restore perceptual information, and 2) perceptual quality is subjective and varies with users’ preferences. In this paper, we make the first attempt to build an interactive self-supervised denoising model to tackle these problems. Specifically, we propose an interactive two-branch network that effectively restores perceptual information. The network consists of a denoising branch and an interactive branch: the former focuses on efficient denoising, while the latter modulates the denoising branch. Thanks to this architecture, the network can produce a range of denoising outputs, allowing the user to easily select the most appealing result for their perceptual requirements. Moreover, to optimize the network with only noisy images, we propose a novel two-stage training strategy in a self-supervised manner. Once trained, the network can be interactively adjusted between noise reduction and texture restoration, offering users a wider range of denoising choices. Existing self-supervised denoising methods can also be integrated into our framework to gain this interactive capability. Extensive experiments and comprehensive analyses validate the effectiveness of the proposed method.
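The two-branch idea in the abstract — an interactive branch that turns a user control into a modulation of the denoising branch's features — can be sketched in a few lines. This is our own toy illustration, not the paper's actual architecture: the tiny random weights, the single scalar control `alpha`, and the per-channel scale/shift form of the modulation are all assumptions made for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy weights (random, for illustration only).
W_feat = rng.standard_normal((8, 1)) * 0.1   # "denoising branch" feature extractor
W_out  = rng.standard_normal((1, 8)) * 0.1   # projection back to signal space
W_mod  = rng.standard_normal((16, 1)) * 0.1  # "interactive branch": alpha -> (gamma, beta)

def denoise(x, alpha):
    """Toy two-branch forward pass: the interactive branch maps the user
    control `alpha` to a per-channel scale (gamma) and shift (beta) that
    modulate the denoising branch's features before the residual output."""
    feats = np.maximum(W_feat @ x[None, :], 0.0)            # (8, N) ReLU features
    gamma, beta = np.split(W_mod @ np.array([[alpha]]), 2)  # two (8, 1) vectors
    feats = (1.0 + gamma) * feats + beta                    # feature modulation
    return (W_out @ feats)[0] + x                           # residual connection

x = rng.standard_normal(4)       # stand-in for a noisy signal
out0 = denoise(x, 0.0)           # one end of the control range
out1 = denoise(x, 1.0)           # the other end
```

Because `alpha` only enters through the modulation parameters, sweeping it moves the output continuously between the two ends of the control range without retraining, which is the mechanism that lets a user pick their preferred trade-off between noise reduction and texture restoration.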


Cited By

  • (2024) “Neural Degradation Representation Learning for All-in-One Image Restoration,” IEEE Transactions on Image Processing, vol. 33, pp. 5408–5423, Jan. 2024, doi: 10.1109/TIP.2024.3456583.


          Published In

IEEE Transactions on Circuits and Systems for Video Technology, Volume 33, Issue 10
          Oct. 2023
          866 pages

          Publisher

          IEEE Press


          Qualifiers

          • Research-article
