Super-Resolution Land Cover Mapping Based on the Convolutional Neural Network
"> Figure 1
<p>Three stages of flowchart.</p> "> Figure 2
<p>The proposed <math display="inline"><semantics> <mrow> <msub> <mrow> <mi>SRM</mi> </mrow> <mrow> <mi>CNN</mi> </mrow> </msub> </mrow> </semantics></math> model.</p> "> Figure 3
<p>Results of Vahingen Dataset. (<b>a</b>) Coarse image; (<b>b</b>) SASPM result; (<b>c</b>) VBSPM result; (<b>d</b>) <math display="inline"><semantics> <mrow> <msub> <mrow> <mi>SRM</mi> </mrow> <mrow> <mi>CNN</mi> </mrow> </msub> </mrow> </semantics></math> result; and (<b>e</b>) reference map. The second and third row were zoom-in areas from the first row.</p> "> Figure 4
<p>Results of Potsdam dataset. (<b>a</b>) Coarse image; (<b>b</b>) SASPM result; (<b>c</b>) VBSPM result; (<b>d</b>) <math display="inline"><semantics> <mrow> <msub> <mrow> <mi>SRM</mi> </mrow> <mrow> <mi>CNN</mi> </mrow> </msub> </mrow> </semantics></math> result; and (<b>e</b>) reference. The second and third row were zoom-in areas from the first row.</p> "> Figure 5
<p>Features visualization. (<b>a</b>) Input coarse remote sensing images; (<b>b</b>) simulated result; (<b>c</b>) reference result; (<b>d</b>) visualization of first output features; (<b>e</b>) visualization of ninth output features; (<b>f</b>) visualization of 18th output features.</p> "> Figure 6
<p>Examples of the performance of <math display="inline"><semantics> <mrow> <msub> <mrow> <mi>SRM</mi> </mrow> <mrow> <mi>CNN</mi> </mrow> </msub> </mrow> </semantics></math> for different geo-objects: 1, 2, and 3 represent the patch number; and a, b, and c are the input coarse remote sensing images, predicted result, and reference, respectively.</p> ">
Abstract
1. Introduction
2. Methods
2.1. Basic Theory of Super-Resolution Mapping
2.2. Background of the Convolutional Neural Network
2.3. Three Stages of the CNN for Super-Resolution Mapping
2.4. Proposed SRM Network
3. Experiments
3.1. Datasets
3.2. Training and Prediction Procedure
3.3. Baselines and Evaluation Metrics
4. Results
4.1. Results for the Vaihingen Dataset
4.2. Results for the Potsdam Dataset
5. Discussion
5.1. The Advantage of the CNN for Super-Resolution Mapping
5.2. Reasons for Outperforming the Baselines
5.3. Implications for Future Work
6. Conclusions
Author Contributions
Funding
Acknowledgments
Conflicts of Interest
References
| Layer | Filter Size | Filter Number | Stride | Pooling Window |
|---|---|---|---|---|
| Conv 1–3 | 3 × 3 | 64 | 1 | - |
| Conv 4–6 | 3 × 3 | 64 | 1 | - |
| Conv 7–8 | 3 × 3 | 32 | 1 | - |
| TransConv 1–3 | 3 × 3 | 64 | 2 | - |
| TransConv 4 | 3 × 3 | 32 | 2 | - |
| Pooling 1–2 | - | - | - | 2 |
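The table above lists only the per-layer hyperparameters; the exact ordering of the convolution, pooling, and transposed-convolution layers is defined by the SRM_CNN architecture in Figure 2, which is not reproduced here. The Keras sketch below shows one plausible encoder–decoder arrangement of these layers; the ReLU activations, the 4-band coarse input, the four output classes, and the softmax head are assumptions, not specifications from the table.

```python
# A minimal Keras sketch of a layer stack consistent with the table above.
# The true ordering, skip connections, and output head follow Figure 2 of the
# paper and are not given here; this arrangement is an assumption.
from tensorflow.keras import layers, models

def build_srm_cnn(input_shape=(32, 32, 4), n_classes=4):
    x_in = layers.Input(shape=input_shape)
    x = x_in

    # Conv 1-3: 3 x 3, 64 filters, stride 1
    for _ in range(3):
        x = layers.Conv2D(64, 3, strides=1, padding="same", activation="relu")(x)
    x = layers.MaxPooling2D(pool_size=2)(x)   # Pooling 1, window 2

    # Conv 4-6: 3 x 3, 64 filters, stride 1
    for _ in range(3):
        x = layers.Conv2D(64, 3, strides=1, padding="same", activation="relu")(x)
    x = layers.MaxPooling2D(pool_size=2)(x)   # Pooling 2, window 2

    # Conv 7-8: 3 x 3, 32 filters, stride 1
    for _ in range(2):
        x = layers.Conv2D(32, 3, strides=1, padding="same", activation="relu")(x)

    # TransConv 1-3: 3 x 3, 64 filters, stride 2 (each doubles the spatial size)
    for _ in range(3):
        x = layers.Conv2DTranspose(64, 3, strides=2, padding="same", activation="relu")(x)
    # TransConv 4: 3 x 3, 32 filters, stride 2
    x = layers.Conv2DTranspose(32, 3, strides=2, padding="same", activation="relu")(x)

    # Hypothetical per-subpixel classification head (not listed in the table)
    out = layers.Conv2D(n_classes, 1, activation="softmax")(x)
    return models.Model(x_in, out)

model = build_srm_cnn()
model.summary()   # 2 poolings vs. 4 stride-2 transposed convs -> net 4x zoom factor
```

With this particular arrangement, two pooling layers and four stride-2 transposed convolutions yield a net upsampling factor of 4 from the coarse input to the fine output map.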
| Dataset | Bands | Spatial Resolution | Number of Images | Average Size |
|---|---|---|---|---|
| Vaihingen | Red, Green, Near Infrared, DSM | 9 cm | 16 | |
| Potsdam | Red, Green, Blue, Near Infrared | 5 cm | 25 | |
| Dataset | Subset | Image Number | Number of Patches |
|---|---|---|---|
| Vaihingen | Training | 15 | 4560 |
| Vaihingen | Test | 1 | - |
| Potsdam | Training | 24 | 50,784 |
| Potsdam | Test | 1 | - |
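The two tables above report only image and patch counts; the patch size, zoom factor, and degradation procedure are not given here. As one illustration of how coarse/fine training pairs for super-resolution mapping are commonly simulated, the sketch below one-hot encodes a fine-resolution reference map, block-averages it into coarse class-fraction images, and tiles both into matching patches. The scale factor of 4, the patch size of 32, and this exact procedure are assumptions, not details taken from the tables.

```python
# A hedged sketch of simulating coarse/fine training pairs for SRM.
import numpy as np

def fine_to_coarse_fractions(fine_labels: np.ndarray, scale: int, n_classes: int) -> np.ndarray:
    """Block-average a (H, W) fine label map into (H/scale, W/scale, n_classes) fractions."""
    h, w = fine_labels.shape
    onehot = np.eye(n_classes, dtype=np.float32)[fine_labels]          # (H, W, C)
    blocks = onehot.reshape(h // scale, scale, w // scale, scale, n_classes)
    return blocks.mean(axis=(1, 3))                                    # (H/s, W/s, C)

def tile_pairs(fine_labels, fractions, scale, patch=32, stride=16):
    """Cut matching coarse/fine patches; 'patch' is the coarse patch size."""
    coarse_patches, fine_patches = [], []
    ch, cw = fractions.shape[:2]
    for i in range(0, ch - patch + 1, stride):
        for j in range(0, cw - patch + 1, stride):
            coarse_patches.append(fractions[i:i + patch, j:j + patch])
            fi, fj = i * scale, j * scale
            fine_patches.append(fine_labels[fi:fi + patch * scale, fj:fj + patch * scale])
    return np.stack(coarse_patches), np.stack(fine_patches)

# Example with a random 4-class map and an assumed zoom factor of 4
fine = np.random.randint(0, 4, size=(512, 512))
frac = fine_to_coarse_fractions(fine, scale=4, n_classes=4)
X, y = tile_pairs(fine, frac, scale=4)
print(X.shape, y.shape)   # (n, 32, 32, 4) coarse fractions, (n, 128, 128) fine labels
```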
| Result \ Reference | Background | Building | Low Vegetation | Tree | PA |
|---|---|---|---|---|---|
| Background | 593,604 | 12,242 | 18,390 | 4320 | 0.92 |
| Building | 48,490 | 862,340 | 50,310 | 86,752 | 0.82 |
| Low Vegetation | 143,482 | 18,525 | 586,454 | 204,855 | 0.61 |
| Tree | 4651 | 9644 | 175,597 | 696,139 | 0.79 |
| UA | 0.75 | 0.96 | 0.71 | 0.70 | OA = 0.77 |
| Result \ Reference | Background | Building | Low Vegetation | Tree | PA |
|---|---|---|---|---|---|
| Background | 590,096 | 10,945 | 14,484 | 3015 | 0.93 |
| Building | 40,376 | 865,377 | 46,417 | 81,770 | 0.84 |
| Low Vegetation | 156,955 | 17,597 | 597,862 | 207,619 | 0.60 |
| Tree | 1977 | 8832 | 171,934 | 699,657 | 0.79 |
| UA | 0.75 | 0.96 | 0.72 | 0.71 | OA = 0.78 |
| Result \ Reference | Background | Building | Low Vegetation | Tree | PA |
|---|---|---|---|---|---|
| Background | 689,600 | 17,100 | 40,429 | 19,528 | 0.89 |
| Building | 12,303 | 840,268 | 13,162 | 19,483 | 0.95 |
| Low Vegetation | 68,967 | 29,091 | 728,624 | 320,821 | 0.63 |
| Tree | 3852 | 5846 | 32,842 | 623,789 | 0.94 |
| UA | 0.87 | 0.93 | 0.88 | 0.63 | OA = 0.83 |
| Result \ Reference | Background | Building | Vegetation | PA |
|---|---|---|---|---|
| Background | 13,920,870 | 1,741,689 | 1,711,104 | 0.80 |
| Building | 558,257 | 5,412,002 | 117,220 | 0.89 |
| Vegetation | 1,829,913 | 704,592 | 10,004,353 | 0.80 |
| UA | 0.85 | 0.69 | 0.85 | OA = 0.81 |
| Result \ Reference | Background | Building | Vegetation | PA |
|---|---|---|---|---|
| Background | 13,940,867 | 1,748,038 | 1,678,244 | 0.80 |
| Building | 559,091 | 5,423,543 | 111,263 | 0.89 |
| Vegetation | 1,809,082 | 686,702 | 10,043,170 | 0.80 |
| UA | 0.85 | 0.69 | 0.85 | OA = 0.82 |
| Result \ Reference | Background | Building | Vegetation | PA |
|---|---|---|---|---|
| Background | 13,800,985 | 1,760,361 | 706,099 | 0.85 |
| Building | 236,141 | 5,832,048 | 33,361 | 0.96 |
| Vegetation | 2,271,914 | 265,874 | 11,093,217 | 0.81 |
| UA | 0.85 | 0.74 | 0.94 | OA = 0.85 |
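The confusion-matrix tables above report per-class producer's accuracy (PA), user's accuracy (UA), and overall accuracy (OA). The sketch below shows these metrics under their standard definitions (rows = mapped result, columns = reference); the 3 × 3 matrix is purely illustrative and not taken from the tables, and the sketch does not attempt to reproduce the tables' exact rounding.

```python
# A small sketch of overall, user's, and producer's accuracy from a confusion matrix,
# assuming rows index the mapped result and columns index the reference.
import numpy as np

def accuracy_metrics(cm: np.ndarray):
    """cm[i, j] = number of sub-pixels mapped as class i whose reference class is j."""
    diag = np.diag(cm).astype(float)
    oa = diag.sum() / cm.sum()     # overall accuracy
    ua = diag / cm.sum(axis=1)     # user's accuracy: correct / row (map) total
    pa = diag / cm.sum(axis=0)     # producer's accuracy: correct / column (reference) total
    return oa, ua, pa

# Illustrative 3-class example (values are made up, not from the tables above)
cm = np.array([[90, 5, 5],
               [10, 80, 10],
               [5, 15, 80]])
oa, ua, pa = accuracy_metrics(cm)
print(f"OA = {oa:.2f}")
print("UA per class:", np.round(ua, 2))
print("PA per class:", np.round(pa, 2))
```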
© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
Citation: Jia, Y.; Ge, Y.; Chen, Y.; Li, S.; Heuvelink, G.B.M.; Ling, F. Super-Resolution Land Cover Mapping Based on the Convolutional Neural Network. Remote Sens. 2019, 11, 1815. https://doi.org/10.3390/rs11151815