Generative Adversarial Learning in YUV Color Space for Thin Cloud Removal on Satellite Imagery
"> Figure 1
<p>Overall framework of our method.</p> "> Figure 2
<p>The failures of RSC-Net when predicting cloud-free images with some bright and dark pixels. Bright pixels are predicted to be turquoise and dark pixels are predicted to be red.</p> "> Figure 3
<p>Architecture of the residual block.</p> "> Figure 4
<p>Architecture of the generator.</p> "> Figure 5
<p>Architecture of discriminator.</p> "> Figure 6
<p>Synthesis of cloudy images: (<b>a</b>) reference cloud-free images, (<b>b</b>) simulated clouds using Perlin Fractal noise, and (<b>c</b>) cloudy images synthesized by alpha blending.</p> "> Figure 7
<p>The thin cloud removal results of two RICE1 samples reconstructed in different color spaces: (<b>a1</b>,<b>a2</b>) input cloudy image, (<b>b1</b>,<b>b2</b>) output images reconstructed in RGB color space, (<b>c1</b>,<b>c2</b>) output images reconstructed in YUV color space, and (<b>d1</b>,<b>d2</b>) reference cloud-free image.</p> "> Figure 8
<p>The thin cloud removal results of two Sentinel-2A samples reconstructed in different color spaces: (<b>a1</b>,<b>a2</b>) input cloudy image, (<b>b1</b>,<b>b2</b>) output images reconstructed in YUV color space, (<b>c1</b>,<b>c2</b>) output images reconstructed in YUV color space, and (<b>d1</b>,<b>d2</b>) reference cloud-free image.</p> "> Figure 9
<p>The thin cloud removal results of a RICE1 sample reconstructed with different fidelity losses: (<b>a</b>) input cloudy image, (<b>b</b>) reconstructed image with the <math display="inline"><semantics> <msub> <mo>ℓ</mo> <mn>2</mn> </msub> </semantics></math> loss, (<b>c</b>) reconstructed image with the <math display="inline"><semantics> <msub> <mo>ℓ</mo> <mn>1</mn> </msub> </semantics></math> loss, and (<b>d</b>) reference cloud-free image.</p> "> Figure 10
<p>The thin cloud removal results of a Sentinel-2A sample reconstructed with different fidelity losses: (<b>a</b>) input cloudy image, (<b>b</b>) reconstructed image with the <math display="inline"><semantics> <msub> <mo>ℓ</mo> <mn>2</mn> </msub> </semantics></math> loss, (<b>c</b>) reconstructed image with the <math display="inline"><semantics> <msub> <mo>ℓ</mo> <mn>1</mn> </msub> </semantics></math> loss, and (<b>d</b>) reference cloud-free image.</p> "> Figure 11
<p>The thin cloud removal results under the influence of adversarial training: (<b>a1</b>,<b>a2</b>) input cloudy image, (<b>b1</b>,<b>b2</b>) reconstructed images without adversarial training, (<b>c1</b>,<b>c2</b>) reconstructed images with adversarial training, and (<b>d1</b>,<b>d2</b>) reference cloud-free image.</p> "> Figure 12
<p>The trend of the PSNR values with increasing training sets.</p> "> Figure 13
<p>Thin cloud removal results of various methods on a cloudy image from the RICE1 test set: (<b>a</b>) input cloudy image, (<b>b</b>) result of DCP, (<b>c</b>) result of McGAN, (<b>d</b>) result of RSC-Net, (<b>e</b>) result of our method, and (<b>f</b>) reference cloud-free image.</p> "> Figure 14
<p>Thin cloud removal results of various methods on a cloudy image from the Sentinel-2A test set: (<b>a</b>) input cloudy image, (<b>b</b>) result of DCP, (<b>c</b>) result of McGAN, (<b>d</b>) result of RSC-Net, (<b>e</b>) result of our method, and (<b>f</b>) reference cloud-free image.</p> "> Figure 15
<p>The result of removing heavy clouds and cloud shadows with poor performance: (<b>a</b>) input cloudy image, (<b>b</b>) reconstructed image by YUV-GAN, and (<b>c</b>) reference cloud-free image.</p> "> Figure 16
<p>A failure case: smooth output for an image with overly heavy clouds: (<b>a</b>) input cloudy image, (<b>b</b>) reconstructed image by YUV-GAN, and (<b>c</b>) reference cloud-free image.</p> ">
Abstract
1. Introduction
- The reconstruction of cloud-free images was conducted in YUV color space, which effectively reduces the number of unrecoverable bright and dark pixels without increasing the complexity of the algorithm (a minimal color-conversion sketch follows this list).
- A residual symmetrical encoding–decoding architecture, without down-sampling and up-sampling layers, was used as the generator to recover detailed information. A mixed loss function combining the ℓ1 fidelity loss and the adversarial loss in YUV color space was employed to guide model training, which further improved the effectiveness of detailed reconstruction and the accuracy of scene recognition (a loss-function sketch follows this list).
- We conducted the first study of transfer learning from simulated to real data for thin cloud removal. Our results show that a network initialized with simulated data and then optimized on real data has an advantage over a network trained only with scarce real data (a two-stage training sketch follows this list).
- Both the public thin cloud removal benchmark RICE1 and a self-constructed Sentinel-2A dataset were used to verify the effectiveness of the proposed method in an ablation study. Moreover, we demonstrate that the proposed method outperforms one traditional and two deep learning-based approaches in terms of quantitative indexes and qualitative effects.
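For illustration, the sketch below shows how an RGB image can be converted to YUV for reconstruction and back to RGB afterwards. It assumes the standard BT.601 coefficients; the exact transformation used by the method is defined in Section 3.1 and may differ.

```python
import numpy as np

# BT.601 RGB -> YUV coefficients (an assumption for illustration; Section 3.1
# defines the exact transformation used by the method).
RGB2YUV = np.array([[ 0.299,    0.587,    0.114  ],
                    [-0.14713, -0.28886,  0.436  ],
                    [ 0.615,   -0.51499, -0.10001]])
YUV2RGB = np.linalg.inv(RGB2YUV)

def rgb_to_yuv(img_rgb: np.ndarray) -> np.ndarray:
    """Convert an H x W x 3 RGB image scaled to [0, 1] into YUV."""
    return img_rgb @ RGB2YUV.T

def yuv_to_rgb(img_yuv: np.ndarray) -> np.ndarray:
    """Inverse transform back to RGB; clip to the valid range."""
    return np.clip(img_yuv @ YUV2RGB.T, 0.0, 1.0)
```

Working in YUV decouples luminance (Y) from chrominance (U, V), which is the property the method exploits to reduce unrecoverable bright and dark pixels when reconstructing cloud-free images.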
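The mixed objective described above can be sketched as follows. This is a hypothetical PyTorch formulation assuming an ℓ1 fidelity term plus a standard non-saturating adversarial term; the paper's exact weighting and adversarial formulation (it references the Wasserstein GAN) may differ.

```python
import torch
import torch.nn.functional as F

def generator_loss(fake_yuv: torch.Tensor,
                   real_yuv: torch.Tensor,
                   d_fake_logits: torch.Tensor,
                   lambda_adv: float = 1e-3) -> torch.Tensor:
    """Mixed generator loss in YUV space: L1 fidelity + adversarial term.

    lambda_adv is a placeholder weight, not the value used in the paper.
    """
    fidelity = F.l1_loss(fake_yuv, real_yuv)
    # Non-saturating GAN loss: push the discriminator's logits for the
    # generated (fake) images towards the "real" label.
    adversarial = F.binary_cross_entropy_with_logits(
        d_fake_logits, torch.ones_like(d_fake_logits))
    return fidelity + lambda_adv * adversarial
```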
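The transfer-learning strategy amounts to two passes of the same training loop: initialize on abundant simulated cloudy/cloud-free pairs, then fine-tune on the scarce real pairs. The sketch below is a simplified, assumed setup; the learning rates, epoch counts, and the plain ℓ1 objective are placeholders, not the paper's settings.

```python
import torch
import torch.nn.functional as F

def train_stage(model, loader, epochs, lr, device="cuda"):
    """One training stage; run first on simulated pairs, then on real pairs."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    model.train()
    for _ in range(epochs):
        for cloudy, clear in loader:
            cloudy, clear = cloudy.to(device), clear.to(device)
            loss = F.l1_loss(model(cloudy), clear)  # fidelity-only stand-in
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()

# Stage 1: learn a general de-clouding mapping from simulated data.
# train_stage(generator, simulated_loader, epochs=100, lr=1e-4)
# Stage 2: adapt the pretrained weights to the scarce real Sentinel-2A pairs,
# typically with a lower learning rate.
# train_stage(generator, real_loader, epochs=50, lr=1e-5)
```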
2. Related Works
2.1. Typical Thin Cloud Removal Methods
2.2. Image Reconstruction in Different Color Spaces
2.3. Transferring Knowledge to Scarce Target Data
3. Method
3.1. Color Space Transformation and Inverse Transformation
3.2. Architecture of YUV-GAN
3.3. Transferring Knowledge from Simulated to Real Data
4. Experiments and Results
4.1. Description of Datasets
4.2. Experimental Settings and Evaluation Indexes
4.3. Ablation Study of the Proposed Method
4.3.1. Network Architecture Analysis
4.3.2. Training Strategy Analysis
4.4. Comparison of Different Methods
5. Discussion
6. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Vogelmann, J.E.; Tolk, B.; Zhu, Z. Monitoring forest changes in the southwestern United States using multitemporal Landsat data. Remote Sens. Environ. 2009, 113, 1739–1748. [Google Scholar] [CrossRef]
- Huang, C.; Thomas, N.; Goward, S.N.; Masek, J.G.; Zhu, Z.; Townshend, J.R.; Vogelmann, J.E. Automated masking of cloud and cloud shadow for forest change analysis using Landsat images. Int. J. Remote Sens. 2010, 31, 5449–5464. [Google Scholar] [CrossRef]
- King, M.D.; Platnick, S.; Menzel, W.P.; Ackerman, S.A.; Hubanks, P.A. Spatial and temporal distribution of clouds observed by MODIS onboard the Terra and Aqua satellites. IEEE Trans. Geosci. Remote Sens. 2013, 51, 3826–3852. [Google Scholar] [CrossRef]
- Ronneberger, O.; Fischer, P.; Brox, T. U-net: Convolutional networks for biomedical image segmentation. In International Conference on Medical Image Computing and Computer-Assisted Intervention; Springer: Berlin/Heidelberg, Germany, 2015; pp. 234–241. [Google Scholar]
- Badrinarayanan, V.; Kendall, A.; Cipolla, R. Segnet: A deep convolutional encoder-decoder architecture for image segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 2481–2495. [Google Scholar] [CrossRef] [PubMed]
- Chen, L.C.; Papandreou, G.; Schroff, F.; Adam, H. Rethinking atrous convolution for semantic image segmentation. arXiv 2017, arXiv:1706.05587. [Google Scholar]
- Chen, L.C.; Zhu, Y.; Papandreou, G.; Schroff, F.; Adam, H. Encoder-decoder with atrous separable convolution for semantic image segmentation. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 801–818. [Google Scholar]
- Jeppesen, J.H.; Jacobsen, R.H.; Inceoglu, F.; Toftegaard, T.S. A cloud detection algorithm for satellite imagery based on deep learning. Remote Sens. Environ. 2019, 229, 247–259. [Google Scholar] [CrossRef]
- Chai, D.; Newsam, S.; Zhang, H.K.; Qiu, Y.; Huang, J. Cloud and cloud shadow detection in Landsat imagery based on deep convolutional neural networks. Remote Sens. Environ. 2019, 225, 307–316. [Google Scholar] [CrossRef]
- Li, Z.; Shen, H.; Cheng, Q.; Liu, Y.; You, S.; He, Z. Deep learning based cloud detection for medium and high resolution remote sensing images of different sensors. ISPRS J. Photogramm. Remote Sens. 2019, 150, 197–212. [Google Scholar] [CrossRef] [Green Version]
- Yang, J.; Guo, J.; Yue, H.; Liu, Z.; Hu, H.; Li, K. CDnet: CNN-based cloud detection for remote sensing imagery. IEEE Trans. Geosci. Remote Sens. 2019, 57, 6195–6211. [Google Scholar] [CrossRef]
- Delac, K.; Grgic, M.; Kos, T. Sub-image homomorphic filtering technique for improving facial identification under difficult illumination conditions. In Proceedings of the International Conference on Systems, Signals and Image Processing, Citeseer, Osijek, Croatia, 5–7 June 2006; Volume 1, pp. 21–23. [Google Scholar]
- Rasti, B.; Sveinsson, J.R.; Ulfarsson, M.O. Wavelet-based sparse reduced-rank regression for hyperspectral image restoration. IEEE Trans. Geosci. Remote Sens. 2014, 52, 6688–6698. [Google Scholar] [CrossRef]
- Liang, S.; Fang, H.; Chen, M. Atmospheric correction of Landsat ETM+ land surface imagery. I. Methods. IEEE Trans. Geosci. Remote Sens. 2001, 39, 2490–2498. [Google Scholar] [CrossRef]
- Tan, R.T. Visibility in bad weather from a single image. In Proceedings of the 2008 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Anchorage, AK, USA, 23–28 June 2008; pp. 1–8. [Google Scholar]
- He, K.; Sun, J.; Tang, X. Single image haze removal using dark channel prior. IEEE Trans. Pattern Anal. Mach. Intell. 2010, 33, 2341–2353. [Google Scholar] [PubMed]
- Zhang, Y.; Guindon, B.; Cihlar, J. An image transform to characterize and compensate for spatial variations in thin cloud contamination of Landsat images. Remote Sens. Environ. 2002, 82, 173–187. [Google Scholar] [CrossRef]
- Lv, H.; Wang, Y.; Shen, Y. An empirical and radiative transfer model based algorithm to remove thin clouds in visible bands. Remote Sens. Environ. 2016, 179, 183–195. [Google Scholar] [CrossRef]
- Kauth, R.J.; Thomas, G. The tasselled cap—A graphic description of the spectral-temporal development of agricultural crops as seen by Landsat. In LARS Symposia; Laboratory for Applications of Remote Sensing: West Lafayette, IN, USA, 1976; p. 159. [Google Scholar]
- Richter, R. Atmospheric correction of satellite data with haze removal including a haze/clear transition region. Comput. Geosci. 1996, 22, 675–681. [Google Scholar] [CrossRef]
- Shen, Y.; Wang, Y.; Lv, H.; Qian, J. Removal of thin clouds in Landsat-8 OLI data with independent component analysis. Remote Sens. 2015, 7, 11481–11500. [Google Scholar] [CrossRef] [Green Version]
- Lv, H.; Wang, Y.; Gao, Y. Using Independent Component Analysis and Estimated Thin-Cloud Reflectance to Remove Cloud Effect on Landsat-8 Oli Band Data. In Proceedings of the International Geoscience and Remote Sensing Symposium (IGARSS), Valencia, Spain, 22–27 July 2018. [Google Scholar]
- Li, X.; Shen, H.; Zhang, L.; Zhang, H.; Yuan, Q.; Yang, G. Recovering quantitative remote sensing products contaminated by thick clouds and shadows using multitemporal dictionary learning. IEEE Trans. Geosci. Remote Sens. 2014, 52, 7086–7098. [Google Scholar]
- Ji, T.Y.; Yokoya, N.; Zhu, X.X.; Huang, T.Z. Nonlocal tensor completion for multitemporal remotely sensed images’ inpainting. IEEE Trans. Geosci. Remote Sens. 2018, 56, 3047–3061. [Google Scholar] [CrossRef]
- Li, J.; Hu, Q.; Ai, M. Haze and thin cloud removal via sphere model improved dark channel prior. IEEE Geosci. Remote Sens. Lett. 2018, 16, 472–476. [Google Scholar] [CrossRef]
- Isola, P.; Zhu, J.Y.; Zhou, T.; Efros, A.A. Image-to-image translation with conditional adversarial networks. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 1125–1134. [Google Scholar]
- Wang, X.; Xu, G.; Wang, Y.; Lin, D.; Li, P.; Lin, X. Thin and Thick Cloud Removal on Remote Sensing Image by Conditional Generative Adversarial Network. In Proceedings of the International Geoscience and Remote Sensing Symposium (IGARSS), Yokohama, Japan, 28 July–2 August 2019. [Google Scholar]
- Lin, D.; Xu, G.; Wang, X.; Wang, Y.; Sun, X.; Fu, K. A remote sensing image dataset for cloud removal. arXiv 2019, arXiv:1901.00600. [Google Scholar]
- Enomoto, K.; Sakurada, K.; Wang, W.; Fukui, H.; Matsuoka, M.; Nakamura, R.; Kawaguchi, N. Filmy cloud removal on satellite imagery with multispectral conditional generative adversarial nets. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Honolulu, HI, USA, 21–26 July 2017; pp. 48–56. [Google Scholar]
- Grohnfeldt, C.; Schmitt, M.; Zhu, X. A conditional generative adversarial network to fuse sar and multispectral optical data for cloud removal from sentinel-2 images. In Proceedings of the International Geoscience and Remote Sensing Symposium (IGARSS), Valencia, Spain, 22–27 July 2018. [Google Scholar]
- Meraner, A.; Ebel, P.; Zhu, X.X.; Schmitt, M. Cloud removal in Sentinel-2 imagery using a deep residual neural network and SAR-optical data fusion. ISPRS J. Photogramm. Remote Sens. 2020, 166, 333–346. [Google Scholar] [CrossRef]
- Singh, P.; Komodakis, N. Cloud-gan: Cloud removal for sentinel-2 imagery using a cyclic consistent generative adversarial networks. In Proceedings of the International Geoscience and Remote Sensing Symposium (IGARSS), Valencia, Spain, 22–27 July 2018. [Google Scholar]
- Zou, Z.; Li, W.; Shi, T.; Shi, Z.; Ye, J. Generative adversarial training for weakly supervised cloud matting. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Seoul, Korea, 27–28 October 2019; pp. 201–210. [Google Scholar]
- Li, J.; Wu, Z.; Hu, Z.; Zhang, J.; Li, M.; Mo, L.; Molinier, M. Thin cloud removal in optical remote sensing images based on generative adversarial networks and physical model of cloud distortion. ISPRS J. Photogramm. Remote Sens. 2020, 166, 373–389. [Google Scholar] [CrossRef]
- Li, W.; Li, Y.; Chen, D.; Chan, J.C.W. Thin cloud removal with residual symmetrical concatenation network. ISPRS J. Photogramm. Remote Sens. 2019, 153, 137–150. [Google Scholar] [CrossRef]
- Wang, X.; Yu, K.; Wu, S.; Gu, J.; Liu, Y.; Dong, C.; Qiao, Y.; Change Loy, C. Esrgan: Enhanced super-resolution generative adversarial networks. arXiv 2018, arXiv:1809.00219. [Google Scholar]
- Wan, Z.; Zhang, B.; Chen, D.; Zhang, P.; Chen, D.; Liao, J.; Wen, F. Bringing old photos back to life. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 14–19 June 2020; pp. 2747–2757. [Google Scholar]
- Narasimhan, S.G.; Nayar, S.K. Vision and the atmosphere. Int. J. Comput. Vis. 2002, 48, 233–254. [Google Scholar] [CrossRef]
- Jang, H.; Bang, K.; Jang, J.; Hwang, D. Inverse tone mapping operator using sequential deep neural networks based on the human visual system. IEEE Access. 2018, 6, 52058–52072. [Google Scholar] [CrossRef]
- Markchom, T.; Lipikorn, R. Thin cloud removal using local minimization and logarithm image transformation in HSI color space. In Proceedings of the 2018 4th International Conference on Frontiers of Signal Processing (ICFSP), Poitiers, France, 24–27 September 2018; pp. 100–104. [Google Scholar]
- Wu, M.; Jin, X.; Jiang, Q.; Lee, S.j.; Liang, W.; Lin, G.; Yao, S. Remote sensing image colorization using symmetrical multi-scale DCGAN in YUV color space. In The Visual Computer; Springer: Berlin, Germany, 2020; pp. 1–23. [Google Scholar]
- Chen, Z.; Zhang, T.; Ouyang, C. End-to-end airplane detection using transfer learning in remote sensing images. Remote Sens. 2018, 10, 139. [Google Scholar] [CrossRef] [Green Version]
- Marmanis, D.; Datcu, M.; Esch, T.; Stilla, U. Deep learning earth observation classification using ImageNet pretrained networks. IEEE Geosci. Remote Sens. Lett. 2015, 13, 105–109. [Google Scholar] [CrossRef] [Green Version]
- Yuan, Y.; Zheng, X.; Lu, X. Hyperspectral image superresolution by transfer learning. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2017, 10, 1963–1974. [Google Scholar] [CrossRef]
- Huang, Z.; Pan, Z.; Lei, B. Transfer learning with deep convolutional neural network for SAR target classification with limited labeled data. Remote Sens. 2017, 9, 907. [Google Scholar] [CrossRef] [Green Version]
- Huang, Z.; Pan, Z.; Lei, B. What, where, and how to transfer in SAR target recognition based on deep CNNs. IEEE Trans. Geosci. Remote Sens. 2019, 58, 2324–2336. [Google Scholar] [CrossRef] [Green Version]
- Yosinski, J.; Clune, J.; Bengio, Y.; Lipson, H. How transferable are features in deep neural networks? arXiv 2014, arXiv:1411.1792. [Google Scholar]
- Malmgren-Hansen, D.; Kusk, A.; Dall, J.; Nielsen, A.A.; Engholm, R.; Skriver, H. Improving SAR automatic target recognition models with transfer learning from simulated data. IEEE Geosci. Remote Sens. Lett. 2017, 14, 1484–1488. [Google Scholar] [CrossRef] [Green Version]
- Arjovsky, M.; Chintala, S.; Bottou, L. Wasserstein generative adversarial networks. In Proceedings of the International Conference on Machine Learning (ICML), Sydney, Australia, 6–11 August 2017; PMLR; pp. 214–223. [Google Scholar]
- He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778. [Google Scholar]
| Band Number | Band Name | Central Wavelength (µm) | Bandwidth (nm) | Spatial Resolution (m) |
|---|---|---|---|---|
| Band 2 | Blue | 0.490 | 98 | 10 |
| Band 3 | Green | 0.560 | 45 | 10 |
| Band 4 | Red | 0.665 | 38 | 10 |
| Dataset | Training Set | Test Set |
|---|---|---|
| RICE1 | 700 real pairs | 140 real pairs |
| Sentinel-2A | 100 real pairs + 880 × 700 simulated pairs | 140 real pairs |
| Method | Color Space | Fidelity Loss | Adv-T | PSNR (dB) | SSIM |
|---|---|---|---|---|---|
| 1 | RGB | ℓ2 | No | 22.973169 | 0.888430 |
| 2 | YUV | ℓ2 | No | 23.579019 | 0.905411 |
| 3 | YUV | ℓ1 | No | 24.506355 | 0.917344 |
| 4 | YUV | ℓ1 | Yes | 25.130979 | 0.918523 |
| Method | Color Space | Fidelity Loss | Adv-T | PSNR (dB) | SSIM |
|---|---|---|---|---|---|
| 1 | RGB | ℓ2 | No | 19.377462 | 0.626397 |
| 2 | YUV | ℓ2 | No | 19.476388 | 0.629305 |
| 3 | YUV | ℓ1 | No | 19.527454 | 0.630686 |
| 4 | YUV | ℓ1 | Yes | 19.584991 | 0.638989 |
| Method | PSNR (dB) | SSIM |
|---|---|---|
| YUV-GAN without TL | 25.130979 | 0.918523 |
| YUV-GAN with TL | 25.076177 | 0.914382 |
| Method | PSNR (dB) | SSIM |
|---|---|---|
| YUV-GAN without TL | 22.136661 | 0.677236 |
| YUV-GAN with TL | 22.457308 | 0.694812 |
| Method | PSNR (dB) | SSIM |
|---|---|---|
| DCP [16] | 20.64042 | 0.797252 |
| McGAN [29] | 21.162399 | 0.873094 |
| RSC-Net [35] | 22.973169 | 0.888430 |
| Ours | 25.130979 | 0.918523 |