
A deep image prior-based three-stage denoising method using generative and fusion strategies

  • Original Paper
Signal, Image and Video Processing

Abstract

In this work, by analyzing the modeling capability of the unsupervised deep image prior (DIP) network and its uncertainty in recovering lost details, we aim to significantly boost its denoising performance by jointly exploiting generative and fusion strategies, resulting in a highly effective unsupervised three-stage recovery process. More specifically, for a given noisy image, we first apply two representative image denoisers, belonging respectively to the internal and external prior-based denoising methods, to produce two initial denoised images. From these two initial denoised images, we can randomly generate a sufficient number of target images with a novel spatially random mixer. Then, we follow the standard DIP denoising routine, but with different random inputs and target images, to generate multiple complementary samples over separate runs. For more randomness and stability, some of the generated samples are dropped out. Finally, the remaining samples are fused pixel-wise with weight maps produced by an unsupervised generative network, yielding a final denoised image of significantly improved quality. Extensive experiments demonstrate that, with our boosting strategy, the proposed method remarkably outperforms the original DIP and previous leading unsupervised networks in terms of peak signal-to-noise ratio and structural similarity, and is competitive with state-of-the-art supervised methods on both synthetic and real-world noisy image denoising.
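The mixing and fusion steps described in the abstract can be sketched in a few lines of NumPy. This is a minimal illustration under stated assumptions, not the authors' implementation: the function names, the uniform per-pixel mixing probability, and the normalized-weight fusion rule are all assumptions made for the sketch.

```python
import numpy as np

def spatially_random_mix(img_a, img_b, p=0.5, rng=None):
    """Build a target image by mixing two initial denoised images
    with a per-pixel random binary mask (a 'spatially random mixer')."""
    rng = np.random.default_rng() if rng is None else rng
    mask = rng.random(img_a.shape[:2]) < p          # True -> take pixel from img_a
    if img_a.ndim == 3:                             # broadcast mask over channels
        mask = mask[..., None]
    return np.where(mask, img_a, img_b)

def pixelwise_fuse(samples, weight_maps):
    """Fuse complementary denoised samples with per-pixel weight maps.
    Weights are normalized so they sum to 1 at every pixel."""
    samples = np.stack(samples)                     # (K, H, W[, C])
    weights = np.stack(weight_maps).astype(float)
    weights = weights / weights.sum(axis=0, keepdims=True)
    return (weights * samples).sum(axis=0)
```

With equal weight maps the fusion reduces to a plain per-pixel average of the retained samples; in the paper the weight maps are instead produced by an unsupervised generative network.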



Availability of data and materials

The data and materials will be made publicly available online.


Author information

Contributions

SX contributed to the conception of the study; XC wrote the main manuscript text; JL and XC contributed significantly to the analysis and manuscript preparation; NX conducted the experiments. All authors reviewed the manuscript.

Corresponding author

Correspondence to Shaoping Xu.

Ethics declarations

Ethics approval

Not applicable.

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This work was supported by the National Natural Science Foundation of China under Grants 62162043, 61662044, and 61902168.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

About this article

Cite this article

Xu, S., Chen, X., Luo, J. et al. A deep image prior-based three-stage denoising method using generative and fusion strategies. SIViP 17, 2385–2393 (2023). https://doi.org/10.1007/s11760-022-02455-1
