Blind Restoration of a Single Real Turbulence-Degraded Image Based on Self-Supervised Learning
Figure 1. Architecture of the Autoencoder.
Figure 2. Overall structure of the proposed restoration network.
Figure 3. Structure of the F_K network.
Figure 4. Composition parameters of the F_x network.
Figure 5. Cassegrain optical observation system.
Figure 6. (a) Loss function curve of the iterative process. (b) Output result of the iterative process.
Figure 7. Point spread function, power-spectrum phase of atmospheric turbulence, and degraded images at different turbulence intensities.
Figure 8. Restored results of real near-ground turbulence-degraded images. (a) Degraded image; (b) CLEAR algorithm; (c) SGL algorithm; (d) IBD algorithm; (e) DNCNN algorithm; (f) ours.
Figure 9. Restored results of a turbulence-degraded astronomical object. (a) Degraded image; (b) CLEAR algorithm; (c) SGL algorithm; (d) IBD algorithm; (e) DNCNN algorithm; (f) ours.
Figure 10. Restored results of motion-blurred images. The blurry images in the first and second rows are from the GoPro dataset; the real motion-blurred image in the last row is from the Internet. (a) Degraded image; (b) CLEAR algorithm; (c) SGL algorithm; (d) IBD algorithm; (e) DNCNN algorithm; (f) ours.
Figure 11. Examples of simulated test images. (a) Jupiter; (b) Nebula1; (c) Galaxy; (d) Nebula2; (e) Mars.
Figure 12. PSNR and SSIM scores for the DNCNN network and the proposed approach, where D/r0 denotes turbulence intensity. (a) PSNR; (b) SSIM.
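The PSNR reported in the evaluation follows directly from the mean squared error between a reference and a restored image. A minimal NumPy sketch is given below; the function name and toy images are illustrative, not from the paper.

```python
import numpy as np

def psnr(ref, img, data_range=255.0):
    """Peak signal-to-noise ratio in dB between a reference and a test image."""
    mse = np.mean((ref.astype(np.float64) - img.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(data_range ** 2 / mse)

# Toy check: a uniform error of 16 grey levels gives MSE = 256,
# so PSNR = 10 * log10(255^2 / 256) ≈ 24.05 dB.
ref = np.full((8, 8), 100.0)
deg = ref + 16.0
print(round(psnr(ref, deg), 2))
```

SSIM is normally computed with a windowed implementation such as `skimage.metrics.structural_similarity` rather than by hand, since it aggregates local luminance, contrast, and structure terms over the image.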
Abstract
1. Introduction
- (1) We propose an effective self-supervised model for this problem; it suppresses fine geometric distortions while requiring only a single turbulent image to produce the restored result;
- (2) Effective regularization by denoising (RED) is introduced into the whole network framework to ensure a better restoration effect;
- (3) We conducted extensive experiments with the proposed approach, which demonstrate that our method surpasses previous state-of-the-art methods both quantitatively and qualitatively (for details, see Section 4.1, Section 4.2, Section 4.3 and Section 4.4).
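The RED regularizer named in contribution (2) penalizes ρ(x) = ½ xᵀ(x − f(x)) for a denoiser f; under RED's conditions (local homogeneity, symmetric denoiser Jacobian) its gradient reduces to x − f(x). The sketch below runs RED gradient descent on a toy denoising problem; the identity forward operator and the 3×3 mean filter standing in for the learned denoiser are both illustrative assumptions, not the paper's network.

```python
import numpy as np

def box_denoise(x):
    """Stand-in denoiser f(x): a 3x3 mean filter built from periodic shifts."""
    acc = np.zeros_like(x)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            acc += np.roll(np.roll(x, dy, axis=0), dx, axis=1)
    return acc / 9.0

def red_gradient_descent(y, denoiser, lam=0.2, step=0.3, n_iter=100):
    """Gradient descent on E(x) = 0.5||x - y||^2 + (lam/2) x^T (x - f(x)).

    With an identity forward operator (pure denoising),
    grad E = (x - y) + lam * (x - f(x)).
    """
    x = y.copy()
    for _ in range(n_iter):
        grad = (x - y) + lam * (x - denoiser(x))
        x = x - step * grad
    return x

rng = np.random.default_rng(0)
clean = np.zeros((32, 32))
clean[8:24, 8:24] = 1.0
noisy = clean + 0.2 * rng.standard_normal(clean.shape)
restored = red_gradient_descent(noisy, box_denoise)
# The RED step should pull the iterate toward the denoiser's fixed points,
# reducing the error with respect to the clean image.
print(np.mean((restored - clean) ** 2) < np.mean((noisy - clean) ** 2))
```

Swapping `box_denoise` for a stronger denoiser (or a trained network) changes only the `denoiser` argument; the update rule itself stays the same, which is the practical appeal of RED.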
2. Proposed Method
2.1. Self-Supervised Learning
2.2. Proposed Network Architecture
2.3. Regularization by Denoising (RED)
2.4. Implementation Details
3. Comparative Experiment Preparation
3.1. Existing Restoration Methods
3.2. Experimental Datasets
3.3. Evaluation Metrics
4. Results and Discussion
4.1. Near-Ground Turbulence-Distorted Image Results
4.2. Turbulence-Degraded Astronomical Object Results
4.3. Motion-Blurred Image Results
4.4. Ablation Study
5. Conclusions
Author Contributions
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
- Chen, E.; Haik, O.; Yitzhaky, Y. Detecting and tracking moving objects in long-distance imaging through turbulent medium. Appl. Opt. 2014, 53, 1181–1190. [Google Scholar] [CrossRef] [PubMed]
- Chen, E.; Haik, O.; Yitzhaky, Y. Online spatio-temporal action detection in long-distance imaging affected by the atmosphere. IEEE Access 2021, 9, 24531–24545. [Google Scholar] [CrossRef]
- Roggemann, M.C.; Welsh, B.M.; Hunt, B.R. Imaging Through Turbulence; CRC Press: Boca Raton, FL, USA, 1996. [Google Scholar]
- Labeyrie, A. Attainment of diffraction limited resolution in large telescopes by Fourier analysing speckle patterns in star images. Astron. Astrophys. 1970, 6, 85–87. [Google Scholar]
- Ayers, G.; Dainty, J.C. Iterative blind deconvolution method and its applications. Opt. Lett. 1988, 13, 547–549. [Google Scholar] [CrossRef]
- Fried, D.L. Probability of getting a lucky short-exposure image through turbulence. JOSA 1978, 68, 1651–1658. [Google Scholar] [CrossRef]
- Carasso, A.S. APEX blind deconvolution of color Hubble space telescope imagery and other astronomical data. Opt. Eng. 2006, 45, 107004. [Google Scholar] [CrossRef]
- Radenović, F.; Tolias, G.; Chum, O. CNN image retrieval learns from BoW: Unsupervised fine-tuning with hard examples. In Proceedings of the European Conference on Computer Vision, Amsterdam, The Netherlands, 11–14 October 2016; pp. 3–20. [Google Scholar]
- Liu, P.; Zhang, H.; Zhang, K.; Lin, L.; Zuo, W. Multi-level wavelet-CNN for image restoration. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, Salt Lake City, UT, USA, 18–22 June 2018; pp. 773–782. [Google Scholar]
- Boulila, W.; Sellami, M.; Driss, M.; Al-Sarem, M.; Safaei, M.; Ghaleb, F.A. RS-DCNN: A novel distributed convolutional-neural-networks based-approach for big remote-sensing image classification. Comput. Electron. Agric. 2021, 182, 106014. [Google Scholar] [CrossRef]
- Quan, Y.; Chen, Y.; Shao, Y.; Teng, H.; Xu, Y.; Ji, H. Image denoising using complex-valued deep CNN. Pattern Recogn. 2021, 111, 107639. [Google Scholar] [CrossRef]
- Gao, Z.; Shen, C.; Xie, C. Stacked convolutional auto-encoders for single space target image blind deconvolution. Neurocomputing 2018, 313, 295–305. [Google Scholar] [CrossRef]
- Chen, G.; Gao, Z.; Wang, Q.; Luo, Q. Blind de-convolution of images degraded by atmospheric turbulence. Appl. Soft Comput. 2020, 89, 106131. [Google Scholar] [CrossRef]
- Chen, G.; Gao, Z.; Wang, Q.; Luo, Q. U-net like deep autoencoders for deblurring atmospheric turbulence. J. Electron. Imaging 2019, 28, 053024. [Google Scholar] [CrossRef]
- Bai, X.; Liu, M.; He, C.; Dong, L.; Zhao, Y.; Liu, X. Restoration of turbulence-degraded images based on deep convolutional network. In Proceedings of the Applications of Machine Learning, Boca Raton, FL, USA, 16–19 December 2019; p. 111390B. [Google Scholar]
- Ulyanov, D.; Vedaldi, A.; Lempitsky, V. Deep image prior. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–22 June 2018; pp. 9446–9454. [Google Scholar]
- Chen, B.; Rouditchenko, A.; Duarte, K.; Kuehne, H.; Thomas, S.; Boggust, A.; Panda, R.; Kingsbury, B.; Feris, R.; Harwath, D. Multimodal clustering networks for self-supervised learning from unlabeled videos. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Montreal, BC, Canada, 11–17 October 2021; pp. 8012–8021. [Google Scholar]
- Yaman, B.; Hosseini, S.A.H.; Akcakaya, M. Zero-Shot Self-Supervised Learning for MRI Reconstruction. In Proceedings of the International Conference on Learning Representations, Vienna, Austria, 3–7 May 2021. [Google Scholar]
- Mahapatra, D.; Ge, Z.; Reyes, M. Self-supervised generalized zero shot learning for medical image classification using novel interpretable saliency maps. IEEE Trans. Med. Imaging 2022, 41, 2443–2456. [Google Scholar] [CrossRef] [PubMed]
- Kolesnikov, A.; Zhai, X.; Beyer, L. Revisiting Self-Supervised visual representation learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 1920–1929. [Google Scholar]
- Zhai, X.; Oliver, A.; Kolesnikov, A.; Beyer, L. S4l: Self-supervised semi-supervised learning. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Seoul, Republic of Korea, 27 October–2 November 2019; pp. 1476–1485. [Google Scholar]
- Lee, H.; Hwang, S.J.; Shin, J. Self-supervised label augmentation via input transformations. In Proceedings of the International Conference on Machine Learning, Online, 13–18 July 2020; pp. 5714–5724. [Google Scholar]
- Liu, X.; Zhang, F.; Hou, Z.; Mian, L.; Wang, Z.; Zhang, J.; Tang, J. Self-supervised learning: Generative or contrastive. IEEE Trans. Knowl. Data Eng. 2021, 35, 857–876. [Google Scholar] [CrossRef]
- Hua, T.; Wang, W.; Xue, Z.; Ren, S.; Wang, Y.; Zhao, H. On feature decorrelation in self-supervised learning. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Montreal, BC, Canada, 11–17 October 2021; pp. 9598–9608. [Google Scholar]
- Wu, J.; Zhang, T.; Zha, Z.-J.; Luo, J.; Zhang, Y.; Wu, F. Self-supervised domain-aware generative network for generalized zero-shot learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 12767–12776. [Google Scholar]
- Eckart, B.; Yuan, W.; Liu, C.; Kautz, J. Self-supervised learning on 3d point clouds by learning discrete generative models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 8248–8257. [Google Scholar]
- Sermanet, P.; Lynch, C.; Chebotar, Y.; Hsu, J.; Jang, E.; Schaal, S.; Levine, S.; Brain, G. Time-contrastive networks: Self-supervised learning from video. In Proceedings of the 2018 IEEE International Conference on Robotics and Automation (ICRA), Brisbane, Australia, 21–25 June 2018; pp. 1134–1141. [Google Scholar]
- Wang, X.; Zhang, R.; Shen, C.; Kong, T.; Li, L. Dense contrastive learning for self-supervised visual pre-training. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 3024–3033. [Google Scholar]
- Donahue, J.; Krähenbühl, P.; Darrell, T. Adversarial feature learning. arXiv 2016, arXiv:1605.09782. [Google Scholar]
- Dumoulin, V.; Belghazi, I.; Poole, B.; Mastropietro, O.; Lamb, A.; Arjovsky, M.; Courville, A. Adversarially learned inference. arXiv 2016, arXiv:1606.00704. [Google Scholar]
- Ballard, D.H. Modular learning in neural networks. In Proceedings of the Aaai, Seattle, WA, USA, 13–17 July 1987; pp. 279–284. [Google Scholar]
- Zhu, X.; Milanfar, P. Removing atmospheric turbulence via space-invariant deconvolution. IEEE Trans. Pattern Anal. Mach. Intell. 2012, 35, 157–170. [Google Scholar] [CrossRef]
- Furhad, M.H.; Tahtali, M.; Lambert, A. Restoring atmospheric-turbulence-degraded images. Appl. Opt. 2016, 55, 5082–5090. [Google Scholar] [CrossRef]
- Romano, Y.; Elad, M.; Milanfar, P. The little engine that could: Regularization by denoising (RED). SIAM J. Imag. Sci. 2017, 10, 1804–1844. [Google Scholar] [CrossRef]
- Afonso, M.V.; Bioucas-Dias, J.M.; Figueiredo, M.A. Fast image recovery using variable splitting and constrained optimization. IEEE Trans. Image Process. 2010, 19, 2345–2356. [Google Scholar] [CrossRef]
- Paszke, A.; Gross, S.; Chintala, S.; Chanan, G.; Yang, E.; DeVito, Z.; Lin, Z.; Desmaison, A.; Antiga, L.; Lerer, A. Automatic differentiation in pytorch. In Proceedings of the Conference and Workshop on Neural Information Processing Systems, Long Beach, CA, USA, 4–9 December 2017; pp. 1–4. [Google Scholar]
- Girshick, R. Fast r-cnn. In Proceedings of the IEEE International Conference on Computer Vision, Santiago, Chile, 7–13 December 2015; pp. 1440–1448. [Google Scholar]
- Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process. 2004, 13, 600–612. [Google Scholar] [CrossRef]
- Anantrasirichai, N.; Achim, A.; Kingsbury, N.G.; Bull, D.R. Atmospheric turbulence mitigation using complex wavelet-based fusion. IEEE Trans. Image Process. 2013, 22, 2398–2408. [Google Scholar] [CrossRef] [PubMed]
- Lou, Y.; Kang, S.H.; Soatto, S.; Bertozzi, A.L. Video stabilization of atmospheric turbulence distortion. Inverse Probl. Imaging 2013, 7, 839. [Google Scholar] [CrossRef]
- Li, D.; Mersereau, R.M.; Simske, S. Atmospheric turbulence-degraded image restoration using principal components analysis. IEEE Geosci. Remote Sens. Lett. 2007, 4, 340–344. [Google Scholar] [CrossRef]
- Zhang, K.; Zuo, W.; Chen, Y.; Meng, D.; Zhang, L. Beyond a gaussian denoiser: Residual learning of deep cnn for image denoising. IEEE Trans. Image Process. 2017, 26, 3142–3155. [Google Scholar] [CrossRef]
- Calder, J.; Mansouri, A.; Yezzi, A. Image sharpening via Sobolev gradient flows. SIAM J. Imag. Sci. 2010, 3, 981–1014. [Google Scholar] [CrossRef]
- Gao, J.; Anantrasirichai, N.; Bull, D. Atmospheric turbulence removal using convolutional neural network. arXiv 2019, arXiv:1912.11350. [Google Scholar]
- Johansson, E.M.; Gavel, D.T. Simulation of stellar speckle imaging. In Proceedings of the Amplitude and Intensity Spatial Interferometry II, Kona, HI, USA, 15–16 March 1994; pp. 372–383. [Google Scholar]
- Hirsch, M.; Sra, S.; Schölkopf, B.; Harmeling, S. Efficient filter flow for space-variant multiframe blind deconvolution. In Proceedings of the 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, San Francisco, CA, USA, 13–18 June 2010; pp. 607–614. [Google Scholar]
- Gilles, J.; Ferrante, N.B. Open turbulent image set (OTIS). Pattern Recognit. Lett. 2017, 86, 38–41. [Google Scholar] [CrossRef]
- Mittal, A.; Soundararajan, R.; Bovik, A.C. Making a “completely blind” image quality analyzer. IEEE Signal Process Lett. 2012, 20, 209–212. [Google Scholar] [CrossRef]
- Mittal, A.; Moorthy, A.K.; Bovik, A.C. No-reference image quality assessment in the spatial domain. IEEE Trans. Image Process. 2012, 21, 4695–4708. [Google Scholar] [CrossRef] [PubMed]
- Nah, S.; Hyun Kim, T.; Mu Lee, K. Deep multi-scale convolutional neural network for dynamic scene deblurring. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 3883–3891. [Google Scholar]
| Instrument | Hardware System Parameters |
|---|---|
| Optical System | RC 12 Telescope Tube |
| Automatic Tracking System | CELESTRON CGX-L German Equatorial Mount |
| Imaging Camera | ASI071MC Pro Cooled Camera |
| PC System | CPU: i7-9750H; RAM: 16 GB; GPU: NVIDIA RTX 2070 |
| Parameters | Simulation Value |
|---|---|
| Aperture | D = 1 m |
| Inner scale of turbulence | l0 = 0.01 m |
| Outer scale of turbulence | L0 = 50 m |
| Number of phase screens | n = 1 |
| Picture size | P = 256 × 256 |
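The simulation settings above can be realized as a single von Kármán phase screen with the standard FFT synthesis method (cf. Johansson and Gavel). The sketch below mirrors the table (l0 = 0.01 m, L0 = 50 m, one 256 × 256 screen spanning D = 1 m); the Fried parameter r0 is an assumed value, since the table does not list one.

```python
import numpy as np

def vonkarman_phase_screen(n=256, dx=1.0 / 256, r0=0.1, l0=0.01, L0=50.0, seed=0):
    """Single von Karman phase screen [rad] by the FFT method.

    n: grid size; dx: sample spacing [m]; r0: Fried parameter [m] (assumed);
    l0 / L0: inner / outer scale of turbulence [m].
    """
    rng = np.random.default_rng(seed)
    df = 1.0 / (n * dx)                        # frequency spacing [1/m]
    fx = np.fft.fftfreq(n, d=dx)               # frequencies in FFT order
    fxx, fyy = np.meshgrid(fx, fx)
    f2 = fxx ** 2 + fyy ** 2
    f0 = 1.0 / L0                              # outer-scale cutoff frequency
    fm = 5.92 / (2.0 * np.pi * l0)             # inner-scale cutoff frequency
    # von Karman phase power spectral density
    psd = 0.023 * r0 ** (-5.0 / 3.0) * np.exp(-f2 / fm ** 2) / (f2 + f0 ** 2) ** (11.0 / 6.0)
    psd[0, 0] = 0.0                            # drop the undefined piston term
    cn = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) * np.sqrt(psd) * df
    return np.real(np.fft.ifft2(cn)) * n * n   # undo ifft2's 1/n^2 normalization

screen = vonkarman_phase_screen()
print(screen.shape)
```

Feeding such a screen through a pupil function and Fourier optics yields the point spread functions and degraded images of the kind shown in the figures; low-frequency (subharmonic) compensation is often added on top of this basic method.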
| Method | Entropy ↑ | Average Gradient ↑ | NIQE ↓ | BRISQUE ↓ |
|---|---|---|---|---|
| Degraded image | 6.3879 | 4.0198 | 7.8462 | 43.2354 |
| CLEAR [39] | 6.9210 | 6.3797 | 7.6126 | 39.2766 |
| SGL [40] | 6.4044 | 4.1000 | 7.8020 | 43.2147 |
| IBD [41] | 6.4785 | 9.4909 | 7.6866 | 36.3150 |
| DNCNN [44] | 6.2785 | 6.6840 | 9.5055 | 54.4397 |
| Ours | 6.6310 | 7.3636 | 7.2665 | 26.7663 |
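Entropy and average gradient in the table above are simple no-reference statistics; minimal NumPy versions are sketched below (the exact windowing conventions used by the authors are not specified, so these are plausible definitions, not their code). NIQE and BRISQUE, by contrast, rely on pretrained natural-scene-statistics models and are omitted here.

```python
import numpy as np

def entropy(img):
    """Shannon entropy (bits) of the 8-bit grey-level histogram."""
    hist = np.bincount(img.astype(np.uint8).ravel(), minlength=256)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def average_gradient(img):
    """Mean magnitude of forward differences; larger values mean sharper edges."""
    img = img.astype(np.float64)
    gx = img[:, 1:] - img[:, :-1]
    gy = img[1:, :] - img[:-1, :]
    g = np.sqrt((gx[:-1, :] ** 2 + gy[:, :-1] ** 2) / 2.0)
    return float(g.mean())

# A two-level half-dark, half-bright image: entropy is exactly 1.0 bit,
# and the single vertical edge gives a nonzero average gradient.
img = np.zeros((8, 8))
img[:, 4:] = 255
print(entropy(img), average_gradient(img))
```

Both metrics reward detail indiscriminately, which is why they are paired with NIQE and BRISQUE in the table: a restoration that amplifies noise also raises entropy and average gradient, but it degrades the perceptual scores.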
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Guo, Y.; Wu, X.; Qing, C.; Liu, L.; Yang, Q.; Hu, X.; Qian, X.; Shao, S. Blind Restoration of a Single Real Turbulence-Degraded Image Based on Self-Supervised Learning. Remote Sens. 2023, 15, 4076. https://doi.org/10.3390/rs15164076