Towards Streamlined Single-Image Super-Resolution: Demonstration with 10 m Sentinel-2 Colour and 10–60 m Multi-Spectral VNIR and SWIR Bands
Figure 1. An example of the 10 m/pixel Sentinel-2 “true” colour image and the 3.44 m/pixel TARSGAN SRR result over a geo-calibration site at Baotou, China (Sentinel-2 image ID: S2A_MSIL1C_20171031T032851_N0206_R018_T49TCF_20171031T032851_TCI).

Figure 2. An example of the MARSGAN training LR image (a 4-times down-sampled version of the training HR image, similar to using the 4 m MS band image), the TARSGAN training LR image (created via 4-times down-sampling, 4-times up-sampling, and Gaussian blurring), and the training HR image (the same for both MARSGAN and TARSGAN). Image dimensions: 256 m × 256 m. Deimos-2 image courtesy of Deimos Imaging, S.L. 2021.

Figure 3. Network architecture of the TARSGAN generator.

Figure 4. Flow diagram of the proposed ELF automated image effective resolution assessment system.

Figure 5. Examples of the Exp-1 ELF measurements for two detected slanted-edge ROIs of an 8 km × 8 km image crop at Site-1. 1st column: Sentinel-2 image crops of the 10 m/pixel B04 band and 20 m/pixel B05 band images (pre-upsampled to 10 m/pixel for comparison), showing two examples of the detected slanted edges in the green boxes; 2nd column: zoomed-in views of the exemplar slanted edges within the automatically extracted ROIs; 3rd column: plots of the ESFs (blue curves), LSFs (orange curves), and FWHMs (red lines) for the exemplar slanted edges. For all Exp-1 ELF measurements of all detected slanted edges within the 8 km × 8 km image crop at Site-1, please refer to the Supplementary Material. N.B. The units of the x and y axes of the 1st and 2nd columns, and of the x axes of the 3rd-column figures, are pixels; the y axes of the 3rd-column figures are normalised intensity values: [0, 1] for the ESF and [−0.1, 0.1] for the LSF. The 1st and 2nd columns show images at different sizes of 8 km × 8 km and 250 m × 300 m, respectively.

Figure 6. Examples of the Exp-2 ELF measurements for two detected slanted-edge ROIs of an 8 km × 8 km image crop at Site-1. 1st row: Sentinel-2 image crops of the 10 m/pixel B08 band, 20 m/pixel B8A band, and 60 m/pixel B09 band images, showing two examples of the detected slanted edges in the green boxes; 2nd row: zoomed-in views of the exemplar slanted edges within the automatically extracted ROIs; 3rd row: plots of the ESFs (blue curves), LSFs (orange curves), and FWHMs (red lines) for the exemplar slanted edges. For all Exp-2 ELF measurements of all detected slanted edges within the 8 km × 8 km image crop at Site-1, please refer to the Supplementary Material. N.B. The units of the x and y axes of the 1st and 2nd rows, and of the x axes of the 3rd-row figures, are pixels; the y axes of the 3rd-row figures are normalised intensity values: [0, 1] for the ESF and [−0.1, 0.1] for the LSF. The 1st and 2nd rows show images at different sizes of 8 km × 8 km and 250 m × 300 m, respectively.

Figure 7. Cropped examples (625 m × 625 m) of the 10 m/pixel Sentinel-2 “true” colour images (L1C) and the TARSGAN SRR result over Baotou, China (Site-1). Please refer to the Supplementary Material for the full-size SRR (produced from Sentinel-2 S2A_MSIL1C_20171031T032851_N0206_R018_T49TCF_20171031T070327).

Figure 8. Cropped examples (625 m × 625 m) of the 10 m/pixel Sentinel-2 “true” colour images (L1C) and the TARSGAN SRR result over Dubai, United Arab Emirates (Site-2). Please refer to the Supplementary Material for the full-size SRR (produced from Sentinel-2 S2B_MSIL1C_20210528T064629_N0300_R020_T40RCN_20210528T084809).

Figure 9. Cropped examples (625 m × 625 m) of the 10 m/pixel Sentinel-2 “true” colour images (L1C) and the TARSGAN SRR result over Hainich, Germany (Site-3). Please refer to the Supplementary Material for the full-size SRR (produced from Sentinel-2 S2A_MSIL1C_20200921T103031_N0209_R108_T32UNB_20200921T142406).

Figure 10. Cropped examples (625 m × 625 m) of the 10 m/pixel Sentinel-2 “true” colour images (L2A) and the TARSGAN SRR result over Hainich, Germany (Site-3). Please refer to the Supplementary Material for the full-size SRR (produced from Sentinel-2 S2B_MSIL2A_20210531T101559_N0300_R065_T32UNB_20210531T140040).

Figure 11. Cropped examples (625 m × 625 m) of the 10 m/pixel Sentinel-2 “true” colour images (L1C) and the TARSGAN SRR result over London, UK (Site-4). Please refer to the Supplementary Material for the full-size SRR (produced from Sentinel-2 S2B_MSIL1C_20201217T111359_N0209_R137_T30UXC_20201217T132006).

Figure 12. Cropped examples (625 m × 625 m) of the 10 m/pixel Sentinel-2 “true” colour images (L2A) and the TARSGAN SRR result over London, UK (Site-4). Please refer to the Supplementary Material for the full-size SRR (produced from Sentinel-2 S2A_MSIL2A_20210309T105901_N0214_R094_T30UXC_20210309T135358).

Figure 13. Cropped examples (625 m × 625 m) of the 10 m/pixel Sentinel-2 “true” colour images (L1C) and the TARSGAN SRR result over Desert Rock, near Flagstaff, AZ, U.S. (Site-5). Please refer to the Supplementary Material for the full-size SRR (produced from Sentinel-2 S2A_MSIL1C_20210507T180921_N0300_R084_T12SUD_20210507T215833).

Figure 14. Cropped examples (625 m × 625 m) of the 10 m/pixel Sentinel-2 “true” colour images (L1C) and the TARSGAN SRR result over Lincoln Sea, Greenland (Site-6). Please refer to the Supplementary Material for the full-size SRR (produced from Sentinel-2 S2A_MSIL1C_20200729T190921_N0209_R056_T21XWM_20200729T222945).

Figure 15. Site-2 intercomparisons of all spectral bands of the SRR product against the original Sentinel-2 L2A surface reflectance product (S2B_MSIL2A_20210528T064629_N0300_R020_T40RCN_20210528T091914).

Figure 16. Site-3 intercomparisons of all spectral bands of the SRR product against the original Sentinel-2 L2A surface reflectance product (S2B_MSIL2A_20210531T101559_N0300_R065_T32UNB_20210531T140040).

Figure 17. (A–D) Cropped examples at Site-1 of the original 10 m/pixel Sentinel-2 image, the MARSGAN SRR as described in [27] (MARSGANv0), the MARSGAN SRR trained with upsampled and blurred LR using the perceptual loss (MARSGANv1), and the MARSGAN SRR trained with upsampled and blurred LR using the SSIM loss (TARSGAN). All subfigures are contrast-stretched independently and have sizes of 625 m × 625 m.

Figure 18. Proposed future streamlined SRR processing system based on automated SRR and quality assessments.
Abstract
1. Introduction
2. Materials and Methods
2.1. Datasets for Testing and Training
2.2. Key Modifications of MARSGAN
2.3. The TARSGAN System
2.4. The ELF System
1. Create a binary image from the input SRR image using the Otsu adaptive thresholding method [39].
2. Use a Canny edge detector [40] to extract all potential edges.
3. Use a Hough transform [41] to detect potential lines from the output of step (2), filtering with the given thresholds on line lengths, gaps, and intersections.
4. Crop any number of regions of interest (ROIs) centred on the filtered lines, and apply the same crops (same areas and sizes) to the corresponding LR image.
5. Normalise the image intensities within each crop, for both the SRR crops and the LR crops.
6. Calculate and plot the ESF for each slanted edge within each normalised crop from step (5).
7. Filter each continuous ESF, keeping only the peak ESF for each slanted edge.
8. Calculate and plot the LSF for each ESF from step (7).
9. Calculate the FWHM for each LSF from step (8), then compute the mean FWHM for the SRR and LR images.
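As a minimal illustration of the final measurement steps above (not the authors' implementation), the sketch below derives the LSF from a 1-D ESF profile and measures its FWHM with sub-pixel interpolation; `esf_to_fwhm` and the synthetic Gaussian-blurred edge are hypothetical stand-ins for a profile sampled across a detected slanted edge.

```python
import numpy as np
from math import erf

def esf_to_fwhm(esf):
    """Differentiate a 1-D edge spread function (ESF) to obtain the line
    spread function (LSF), then measure the LSF's full width at half
    maximum (FWHM) using linear interpolation at the half-max crossings."""
    lsf = np.gradient(np.asarray(esf, dtype=float))  # LSF = d(ESF)/dx
    lsf = lsf / lsf.max()                            # normalise peak to 1
    above = np.where(lsf >= 0.5)[0]                  # samples at/above half max
    l, r = above[0], above[-1]
    # sub-pixel positions of the left and right half-maximum crossings
    xl = (l - 1) + (0.5 - lsf[l - 1]) / (lsf[l] - lsf[l - 1])
    xr = r + (0.5 - lsf[r]) / (lsf[r + 1] - lsf[r])
    return xr - xl                                   # FWHM in pixels

# Synthetic check: an ideal step edge blurred by a Gaussian PSF with
# sigma = 3 px has a Gaussian LSF, whose theoretical FWHM is
# 2*sqrt(2*ln 2)*sigma, i.e. about 7.06 px.
x = np.arange(100)
sigma = 3.0
esf = np.array([0.5 * (1.0 + erf((xi - 50) / (sigma * np.sqrt(2)))) for xi in x])
fwhm = esf_to_fwhm(esf)
```

In ELF this measurement is repeated for every detected slanted-edge ROI in both the SRR and LR crops; the mean FWHMs are then compared to quantify the effective resolution enhancement.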
3. Results
3.1. Estimation of Image Effective Resolution through ELF
3.2. Demonstration of TARSGAN SRR Results and Subsequent ELF Assessment
3.3. Results from Multispectral Bands
4. Discussion
4.1. From MARSGAN to TARSGAN
4.2. Potential Improvements to TARSGAN and ELF
4.3. A Future Streamlined SRR System
5. Conclusions
Supplementary Materials
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
- Van Ouwerkerk, J.D. Image super-resolution survey. Image Vis. Comput. 2006, 24, 1039–1052.
- Shah, A.J.; Gupta, S.B. Image super resolution-a survey. In Proceedings of the 1st International Conference on Emerging Technology Trends in Electronics, Communication & Networking 2012, Surat, India, 19–21 December 2012; pp. 1–6.
- Ha, V.K.; Ren, J.; Xu, X.; Zhao, S.; Xie, G.; Vargas, V.M. Deep Learning Based Single Image Super-Resolution: A Survey. In Proceedings of the International Conference on Brain Inspired Cognitive Systems, Xi’an, China, 7–8 July 2018; Springer: Cham, Switzerland, 2018; pp. 106–119.
- Wang, Z.; Chen, J.; Hoi, S.C. Deep learning for image super-resolution: A survey. IEEE Trans. Pattern Anal. Mach. Intell. 2020.
- Tsai, R.Y.; Huang, T.S. Multiframe Image Restoration and Registration. In Advances in Computer Vision and Image Processing; JAI Press Inc.: New York, NY, USA, 1984; pp. 317–339.
- Keren, D.; Peleg, S.; Brada, R. Image sequence enhancement using subpixel displacements. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Ann Arbor, MI, USA, 5–9 June 1988; pp. 742–746.
- Kim, S.P.; Bose, N.K.; Valenzuela, H.M. Recursive reconstruction of high resolution image from noisy undersampled multiframes. IEEE Trans. Acoust. Speech Signal Process. 1990, 38, 1013–1027.
- Bose, N.K.; Kim, H.C.; Valenzuela, H.M. Recursive implementation of total least squares algorithm for image reconstruction from noisy, undersampled multiframes. In Proceedings of the IEEE Conference on Acoustics, Speech and Signal Processing, Minneapolis, MN, USA, 27–30 April 1993; Volume 5, pp. 269–272.
- Rhee, S.; Kang, M.G. Discrete cosine transform based regularized high-resolution image reconstruction algorithm. Opt. Eng. 1999, 38, 1348–1356.
- Hardie, R.C.; Barnard, K.J.; Armstrong, E.E. Joint MAP registration and high resolution image estimation using a sequence of undersampled images. IEEE Trans. Image Process. 1997, 6, 1621–1633.
- Farsiu, S.; Robinson, D.; Elad, M.; Milanfar, P. Fast and robust multi-frame super-resolution. IEEE Trans. Image Process. 2004, 13, 1327–1344.
- Yuan, Q.; Zhang, L.; Shen, H. Multiframe super-resolution employing a spatially weighted total variation model. IEEE Trans. Circuits Syst. Video Technol. 2012, 22, 379–392.
- Tao, Y.; Muller, J.-P. A novel method for surface exploration: Super-resolution restoration of Mars repeat-pass orbital imagery. Planet. Space Sci. 2016, 121, 103–114.
- Tao, Y.; Muller, J.-P. Super-Resolution Restoration of Spaceborne HD Videos Using the UCL MAGiGAN System. In Image and Signal Processing for Remote Sensing XXV; SPIE: Strasbourg, France, 2019; pp. 1115508-1–1115508-7.
- Tao, Y.; Muller, J.-P. Super-resolution restoration of MISR images using the UCL MAGiGAN system. Remote Sens. 2019, 11, 52.
- Lim, B.; Son, S.; Kim, H.; Nah, S.; Mu Lee, K. Enhanced deep residual networks for single image super-resolution. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops 2017, Honolulu, HI, USA, 21–26 July 2017; pp. 136–144.
- Yu, J.; Fan, Y.; Yang, J.; Xu, N.; Wang, Z.; Wang, X.; Huang, T. Wide activation for efficient and accurate image super-resolution. arXiv 2018, arXiv:1808.08718.
- Ahn, N.; Kang, B.; Sohn, K.A. Fast, accurate, and lightweight super-resolution with cascading residual network. In Proceedings of the European Conference on Computer Vision (ECCV) 2018, Munich, Germany, 8–14 September 2018; pp. 252–268.
- Kim, J.; Lee, J.K.; Lee, K.M. Deeply-recursive convolutional network for image super-resolution. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition 2016, Las Vegas, NV, USA, 26 June–1 July 2016; pp. 1637–1645.
- Tai, Y.; Yang, J.; Liu, X. Image super-resolution via deep recursive residual network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition 2017, Honolulu, HI, USA, 21–26 July 2017; pp. 3147–3155.
- Wang, C.; Li, Z.; Shi, J. Lightweight image super-resolution with adaptive weighted learning network. arXiv 2019, arXiv:1904.02358.
- Zhang, Y.; Li, K.; Li, K.; Wang, L.; Zhong, B.; Fu, Y. Image super-resolution using very deep residual channel attention networks. In Proceedings of the European Conference on Computer Vision (ECCV) 2018, Munich, Germany, 8–14 September 2018; pp. 286–301.
- Ledig, C.; Theis, L.; Huszár, F.; Caballero, J.; Cunningham, A.; Acosta, A.; Aitken, A.; Tejani, A.; Totz, J.; Wang, Z.; et al. Photo-realistic single image super-resolution using a generative adversarial network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition 2017, Honolulu, HI, USA, 21–26 July 2017; pp. 4681–4690.
- Sajjadi, M.S.; Scholkopf, B.; Hirsch, M. EnhanceNet: Single image super-resolution through automated texture synthesis. In Proceedings of the IEEE International Conference on Computer Vision 2017, Venice, Italy, 22–29 October 2017; pp. 4491–4500.
- Wang, X.; Yu, K.; Wu, S.; Gu, J.; Liu, Y.; Dong, C.; Qiao, Y.; Change Loy, C. ESRGAN: Enhanced super-resolution generative adversarial networks. In Proceedings of the European Conference on Computer Vision (ECCV) Workshops 2018, Munich, Germany, 8–14 September 2018.
- Rakotonirina, N.C.; Rasoanaivo, A. ESRGAN+: Further improving enhanced super-resolution generative adversarial network. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) 2020, Barcelona, Spain, 4–8 May 2020; pp. 3637–3641.
- Tao, Y.; Conway, S.J.; Muller, J.-P.; Putri, A.R.D.; Thomas, N.; Cremonese, G. Single Image Super-Resolution Restoration of TGO CaSSIS Colour Images: Demonstration with Perseverance Rover Landing Site and Mars Science Targets. Remote Sens. 2021, 13, 1777.
- Sun, W.; Chen, Z. Learned image downscaling for upscaling using content adaptive resampler. IEEE Trans. Image Process. 2020, 29, 4027–4040.
- Cai, J.; Zeng, H.; Yong, H.; Cao, Z.; Zhang, L. Toward real-world single image super-resolution: A new benchmark and a new model. In Proceedings of the IEEE/CVF International Conference on Computer Vision 2019, Seoul, Korea, 27 October–2 November 2019; pp. 3086–3095.
- Dong, C.; Loy, C.C.; Tang, X. Accelerating the super-resolution convolutional neural network. In Proceedings of the European Conference on Computer Vision, Amsterdam, The Netherlands, 8–16 October 2016; Springer: Cham, Switzerland, 2016; pp. 391–407.
- Shi, W.; Caballero, J.; Huszár, F.; Totz, J.; Aitken, A.P.; Bishop, R.; Rueckert, D.; Wang, Z. Real-time single image and video super-resolution using an efficient sub-pixel convolutional neural network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition 2016, Las Vegas, NV, USA, 26 June–1 July 2016; pp. 1874–1883.
- Goodfellow, I.J.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; Bengio, Y. Generative adversarial networks. arXiv 2014, arXiv:1406.2661.
- Arjovsky, M.; Chintala, S.; Bottou, L. Wasserstein GAN. arXiv 2017, arXiv:1701.07875.
- Jolicoeur-Martineau, A. The relativistic discriminator: A key element missing from standard GAN. arXiv 2018, arXiv:1807.00734.
- Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process. 2004, 13, 600–612.
- Godard, C.; Mac Aodha, O.; Firman, M.; Brostow, G.J. Digging into self-supervised monocular depth estimation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Seoul, Korea, 27 October–2 November 2019; pp. 3828–3838.
- Alhashim, I.; Wonka, P. High quality monocular depth estimation via transfer learning. arXiv 2018, arXiv:1812.11941.
- Kingma, D.P.; Ba, J. Adam: A method for stochastic optimization. arXiv 2014, arXiv:1412.6980.
- Otsu, N. A threshold selection method from gray-level histograms. IEEE Trans. Syst. Man Cybern. 1979, 9, 62–66.
- Canny, J. A computational approach to edge detection. IEEE Trans. Pattern Anal. Mach. Intell. 1986, PAMI-8, 679–698.
- Hart, P.E.; Duda, R.O. Use of the Hough transformation to detect lines and curves in pictures. Commun. ACM 1972, 15, 11–15.
- Li, C.R.; Tang, L.L.; Ma, L.L.; Zhou, Y.S.; Gao, C.X.; Wang, N.; Li, X.H.; Zhou, X.H. A comprehensive calibration and validation site for information remote sensing. ISPRS-IAPRSIS 2015, XL-7/W3, 1233–1240.
- Zhou, Y.; Li, C.; Tang, L.; Wang, Q.; Liu, Q. A Permanent Bar Pattern Distributed Target for Microwave Image Resolution Analysis. IEEE Geosci. Remote Sens. Lett. 2017, 14, 164–168.
- Mittal, A.; Moorthy, A.K.; Bovik, A.C. No-Reference Image Quality Assessment in the Spatial Domain. IEEE Trans. Image Process. 2012, 21, 4695–4708.
- Venkatanath, N.; Praneeth, D.; Chandrasekhar, B.M.; Channappayya, S.S.; Medasani, S.S. Blind Image Quality Evaluation Using Perception Based Features. In Proceedings of the 21st National Conference on Communications (NCC) 2015, Mumbai, India, 27 February–1 March 2015.
| Band Name | Band No. | Spatial Resolution (m) | Central Wavelength (nm) 2A/2B | Bandwidth (nm) 2A/2B |
|---|---|---|---|---|
| VNIR | B01 | 60 | 442.7/442.2 | 20/21 |
| | B02 | 10 | 492.4/492.1 | 66/66 |
| | B03 | 10 | 559.8/559.0 | 36/36 |
| | B04 | 10 | 664.6/664.9 | 31/31 |
| | B05 | 20 | 704.1/703.8 | 15/16 |
| | B06 | 20 | 740.5/739.1 | 15/15 |
| | B07 | 20 | 782.8/779.7 | 20/20 |
| | B08 | 10 | 832.8/832.9 | 106/106 |
| | B8A | 20 | 864.7/864.0 | 21/22 |
| | B09 | 60 | 945.1/943.2 | 20/21 |
| SWIR | B10 | 60 | 1373.5/1376.9 | 31/30 |
| | B11 | 20 | 1613.7/1610.4 | 91/94 |
| | B12 | 20 | 2202.4/2185.7 | 175/185 |
The Area-1 to Area-4 columns list the type of features or targets in each cropped area.

| Site Name | Image ID (Product Level) | Area-1 | Area-2 | Area-3 | Area-4 |
|---|---|---|---|---|---|
| Site-1 Baotou, China | S2A_MSIL1C_20171031T032851_N0206_R018_ T49TCF_20171031T070327 (L1C) | Geo-calibration targets and buildings | Buildings and roads | Farms and roads | Industrial building blocks |
| Site-2 Dubai, AE | S2B_MSIL1C_20210528T064629_N0300_R020_ T40RCN_20210528T084809 (L1C) | Tower buildings | Ships and sea waves | Artificial island | Airport, airplane, and roads |
| Site-3 Hainich, Germany | S2A_MSIL1C_20200921T103031_N0209_R108_ T32UNB_20200921T142406 (L1C) | Farms, houses, and roads | Forest | Farms with structures | Farms and hills |
| | S2B_MSIL2A_20210531T101559_N0300_R065_ T32UNB_20210531T140040 (L2A) | Agriculture site | Agriculture site | Agriculture site and village | Agriculture site |
| Site-4 London, UK | S2B_MSIL1C_20201217T111359_N0209_R137_ T30UXC_20201217T132006 (L1C) | Train stations and buildings | Small building blocks and bridges | Building blocks | Bridges and ships |
| | S2A_MSIL2A_20210309T105901_N0214_R094_ T30UXC_20210309T135358 (L2A) | London millennium wheel (with thin clouds) | London Stadium | Canary Wharf | Kensington Gardens (with thin clouds) |
| Site-5 Desert Rock, U.S. | S2A_MSIL1C_20210507T180921_N0300_R084_ T12SUD_20210507T215833 (L1C) | Mountain and road | Trees in desert | Desert and river | Desert and trees |
| Site-6 Lincoln Sea, Greenland | S2A_MSIL1C_20200729T190921_N0209_R056_ T21XWM_20200729T222945 (L1C) | Sea-ice and leads | Isolated sea-ice | Snow on mountain surface | Sea-ice and open water |
Each α group lists the number of edges used (out of those detected), the mean FWHM (in pixels) of the finer- and coarser-resolution bands, and their ratio.

| Crops | Edges (α = 2) | B08 | B8A | Ratio | Edges (α = 3) | B8A | B09 | Ratio | Edges (α = 6) | B08 | B09 | Ratio |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 75/166 | 2.76 | 2.85 | 1.033 | 20/104 | 3.47 | 4.15 | 1.196 | 12/166 | 3.24 | 4.53 | 1.398 |
| 2 | 22/43 | 2.92 | 3.14 | 1.075 | 2/35 | 3.20 | 4.40 | 1.375 | 9/43 | 3.36 | 5.44 | 1.619 |
| 3 | 32/65 | 2.74 | 2.90 | 1.058 | 7/49 | 2.94 | 4.67 | 1.588 | 10/65 | 3.30 | 5.06 | 1.533 |
| Avg. | - | - | - | 1.055 | - | - | - | 1.386 | - | - | - | 1.517 |
Each area cell gives the mean LR FWHM divided by the mean SRR FWHM (both in pixels).

| Site# | Area-1 | Area-2 | Area-3 | Area-4 | Avg. Ratio | Enhancement |
|---|---|---|---|---|---|---|
| Site-1 | 4.37/2.73 = 1.60 | - | 4.24/3.49 = 1.21 | 3.63/2.83 = 1.28 | 1.363 | 2.695 times |
| Site-2 | 4.7/3.95 = 1.19 | - | 3.80/3.08 = 1.23 | 4.30/3.40 = 1.26 | 1.227 | 2.520 times |
| Site-3 (L1C) | 4.67/3.53 = 1.32 | - | - | 5.00/3.30 = 1.52 | 1.42 | 3.779 times |
| Site-3 (L2A) | 3.35/2.60 = 1.29 | 4.27/2.93 = 1.46 | 5.08/3.78 = 1.34 | 4.15/2.80 = 1.48 | 1.393 | 3.160 times |
| Site-4 (L1C) | - | - | 5.27/4.24 = 1.24 | 4.07/3.58 = 1.14 | 1.19 | 2.408 times |
| Site-4 (L2A) | - | - | 4.75/3.86 = 1.23 | 3.79/3.44 = 1.10 | 1.165 | 2.332 times |
| Site-5 | - | 4.58/3.22 = 1.42 | 4.27/3.93 = 1.09 | - | 1.255 | 2.604 times |
| Site-6 | 3.68/3.12 = 1.18 | - | - | 4.88/2.94 = 1.66 | 1.42 | 3.779 times |
| Total Avg. | - | - | - | - | 1.304 | 2.910 times |
© 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Tao, Y.; Xiong, S.; Song, R.; Muller, J.-P. Towards Streamlined Single-Image Super-Resolution: Demonstration with 10 m Sentinel-2 Colour and 10–60 m Multi-Spectral VNIR and SWIR Bands. Remote Sens. 2021, 13, 2614. https://doi.org/10.3390/rs13132614