A Framework for Evaluating Land Use and Land Cover Classification Using Convolutional Neural Networks
Figure 1. Indian Pines dataset. (a) False color composite image (bands 29, 19, and 9); (b) ground truth.
Figure 2. Pavia University dataset. (a) False color composite image (bands 45, 27, and 11); (b) ground truth.
Figure 3. Salinas dataset. (a) False color composite image (bands 29, 19, and 9); (b) ground truth.
Figure 4. Flevoland dataset.
Figure 5. San Francisco dataset.
Figure 6. Patch extraction process to generate the instances fed to the ML models.
Figure 7. CNN architecture. The *conv* layers refer to convolutional layers, *pool* to max-pooling layers, *fc* to the fully connected layer, and *sm* to the softmax layer. Dropout and batch normalization are omitted to simplify the visualization.
Figure 8. Experimental setup. For each dataset, we obtain five independent subsamples and perform repeated stratified cross-validation. The collected results of all datasets are analyzed with a statistical test (Friedman and post hoc).
Figure 9. Cross-validation overall accuracy of all methods over each dataset, ordered by increasing size of training set.
Figure 10. Indian Pines GRSS competition. (a) Training set; (b) classification map (OA = 95.53%).
Figure 11. Pavia University GRSS competition. (a) Training set; (b) classification map (OA = 84.79%).
Figure 12. Flevoland GRSS competition. (a) Training set; (b) classification map (OA = 99.05%).
Figure 13. San Francisco GRSS competition. (a) Training set; (b) classification map (OA = 99.37%).
Figure 14. Influence of patch size on classification accuracy over four datasets.
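Figure 6 describes the patch extraction process: a square spatial window is cut around each labeled pixel and used as one training instance. A minimal NumPy sketch of that idea follows; the `extract_patches` helper and the reflect-padding at image borders are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def extract_patches(image, labels, patch_size=5):
    """Cut a patch_size x patch_size spatial window around every
    labeled pixel; each patch keeps all spectral bands."""
    half = patch_size // 2
    # Pad spatial dimensions so border pixels also get full patches
    # (reflect padding is an assumption, not the paper's stated choice).
    padded = np.pad(image, ((half, half), (half, half), (0, 0)), mode="reflect")
    patches, targets = [], []
    for r, c in zip(*np.nonzero(labels)):  # 0 = unlabeled background
        window = padded[r:r + patch_size, c:c + patch_size, :]
        patches.append(window)
        targets.append(labels[r, c])
    return np.stack(patches), np.array(targets)

# Toy example: a 10x10 image with 4 bands and three labeled pixels.
img = np.random.rand(10, 10, 4)
lab = np.zeros((10, 10), dtype=int)
lab[0, 0], lab[5, 5], lab[9, 9] = 1, 2, 1
X, y = extract_patches(img, lab, patch_size=5)
print(X.shape)  # (3, 5, 5, 4)
```

Each patch is centered on its labeled pixel, so `X[i, half, half]` always equals the original pixel's spectral vector.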
Abstract
1. Introduction
- A general 2D CNN, with a fixed architecture and parametrization, that achieves high accuracy in LULC classification over remote sensing imagery from different sources, specifically radar and hyperspectral images.
- A validation methodology, based on cross-validation and statistical analysis, that enables a rigorous experimental comparison between our proposed DL architecture and traditional ML models.
2. Materials and Methods
2.1. Description of Datasets
2.2. Data Generation
2.3. 2D Convolutional Neural Network
2.3.1. CNN Architecture
2.3.2. Regularization Techniques
2.3.3. Parameter Selection
2.4. Experimental Setup
2.4.1. Stratified Cross-Validation
2.4.2. Statistical Analysis
2.4.3. GRSS DASE Website Competition
3. Results and Discussion
3.1. Cross-Validation Results
3.1.1. Computation Time
3.2. Statistical Analysis
3.3. GRSS DASE Competition
3.3.1. Influence of Patch Size on Classification Accuracy
4. Conclusions and Future Work
Author Contributions
Funding
Acknowledgments
Conflicts of Interest
Abbreviations
| Abbreviation | Meaning |
|---|---|
| LULC | Land Use and Land Cover |
| RS | Remote Sensing |
| DL | Deep Learning |
| ML | Machine Learning |
| CNN | Convolutional Neural Network |
| SVM | Support Vector Machine |
| RF | Random Forest |
| kNN | k-Nearest Neighbours |
Dataset | Type (Sensor) | Size | # Bands | Spatial Resolution | # Classes |
---|---|---|---|---|---|
Indian Pines | Hyperspectral (AVIRIS) | 145 × 145 | 220 | 20 m | 16 |
Salinas | Hyperspectral (AVIRIS) | 512 × 217 | 204 | 3.7 m | 16 |
Pavia | Hyperspectral (ROSIS) | 610 × 340 | 103 | 1.3 m | 9 |
San Francisco | Radar (AirSAR) | 1168 × 2531 | 27 (P, L & C) | 10 m | 3 |
Flevoland | Radar (AirSAR) | 1279 × 1274 | 27 (P, L & C) | 10 m | 12 |
**CNN Architecture**

| Layer | Type | Neurons & # Maps | Kernel |
|---|---|---|---|
| 0 | Input | | |
| 1 | Batch normalization | | |
| 2 | Convolutional | | |
| 3 | ReLU | | |
| 4 | Batch normalization | | |
| 5 | Max-Pooling | | |
| 6 | Convolutional | | |
| 7 | ReLU | | |
| 8 | Batch normalization | | |
| 9 | Max-Pooling | | |
| 10 | Fully connected | 1024 neurons | |
| 11 | Dropout | 1024 neurons | |
| 12 | Softmax | C neurons | |
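The architecture table lists the layer sequence, but the Neurons & # Maps and Kernel columns are mostly blank in this extraction. The pure-Python sketch below traces tensor shapes through that sequence under illustrative assumptions ('same'-padded convolutions, 32 and 64 feature maps, non-overlapping 2 × 2 max-pooling); none of those values are confirmed by the table.

```python
def cnn_output_shapes(patch, bands, n_classes, n_filters=32, pool=2):
    """Trace tensor shapes through the layer sequence in the table.
    Filter counts and kernel sizes are NOT given in the extracted table,
    so n_filters=32 (then 64) and 'same' convolutions are assumptions."""
    shapes = [("input", (patch, patch, bands))]
    h = w = patch
    # conv1 -> ReLU -> batch norm ('same' padding keeps spatial size)
    c = n_filters
    shapes.append(("conv1", (h, w, c)))
    # pool1 halves spatial dimensions (floor division)
    h, w = h // pool, w // pool
    shapes.append(("pool1", (h, w, c)))
    # conv2 -> ReLU -> batch norm, doubling the feature maps
    c = n_filters * 2
    shapes.append(("conv2", (h, w, c)))
    h, w = h // pool, w // pool
    shapes.append(("pool2", (h, w, c)))
    shapes.append(("fc", (1024,)))       # fully connected, then dropout
    shapes.append(("softmax", (n_classes,)))
    return shapes

# Example: 5x5 patches of the 220-band Indian Pines scene, 16 classes.
for name, shape in cnn_output_shapes(patch=5, bands=220, n_classes=16):
    print(name, shape)
```

With a 5 × 5 patch, two pooling stages collapse the spatial dimensions to 1 × 1 before the fully connected layer.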
**CNN Parameter Selection**

| Parameter | Grid Search | Selected Value |
|---|---|---|
| Dropout rate | {0.2, 0.5} | 0.2 |
| Learning rate | {0.1, 0.01, 0.001} | 0.01 |
| Decaying learning rate | {True, False} | True |
| Number of epochs | {20, 50, 80, 100} | 50 |
| Batch size | {16, 32, 64, 128} | 16 |
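The grid search selects a decaying learning rate starting at 0.01, but the decay schedule itself is not specified here. A common exponential step-decay rule, with a purely illustrative 0.96 factor per 100 steps, would look like this:

```python
def decayed_learning_rate(step, initial_lr=0.01, decay_rate=0.96,
                          decay_steps=100):
    """Exponential step decay. Only 'decaying learning rate = True' and
    the initial value 0.01 come from the table; the 0.96 factor and
    100-step interval are illustrative assumptions."""
    return initial_lr * decay_rate ** (step / decay_steps)

print(decayed_learning_rate(0))    # 0.01
print(decayed_learning_rate(100))  # 0.01 * 0.96 = 0.0096
```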
**Indian Pines Cross-Validation**

| # | Class | Samples | 1NN | 3NN | 5NN | SVM | RF | CNN |
|---|---|---|---|---|---|---|---|---|
| 1 | Corn-notill | 286 | 76.19 | 72.26 | 66.08 | 81.75 | 68.21 | 95.78 |
| 2 | Corn-min | 166 | 69.28 | 59.02 | 54.05 | 74.60 | 62.03 | 93.32 |
| 3 | Grass/Pasture | 97 | 93.83 | 91.35 | 88.38 | 92.59 | 88.45 | 96.56 |
| 4 | Grass/Trees | 146 | 98.33 | 97.36 | 97.20 | 97.94 | 98.79 | 99.54 |
| 5 | Hay-windrowed | 96 | 99.37 | 99.24 | 99.16 | 99.20 | 99.58 | 99.92 |
| 6 | Soybeans-notill | 195 | 86.97 | 79.54 | 77.69 | 81.71 | 67.21 | 95.33 |
| 7 | Soybeans-min | 491 | 86.83 | 81.21 | 79.79 | 92.40 | 91.40 | 97.22 |
| 8 | Soybean-clean | 119 | 64.02 | 42.03 | 32.77 | 72.99 | 46.53 | 96.70 |
| 9 | Woods | 253 | 89.23 | 86.46 | 85.50 | 97.00 | 95.99 | 98.46 |
| 10 | Alfalfa | 10 | 62.67 | 35.89 | 13.44 | 31.89 | 25.22 | 94.22 |
| 11 | Corn | 48 | 48.05 | 22.18 | 14.25 | 61.40 | 32.34 | 95.35 |
| 12 | Grass/pasture-mowed | 6 | 70.00 | 51.33 | 26.00 | 35.33 | 11.33 | 94.67 |
| 13 | Oats | 4 | 69.33 | 31.33 | 10.67 | 44.67 | 22.00 | 76.02 |
| 14 | Wheat | 41 | 98.61 | 97.13 | 95.39 | 98.40 | 97.63 | 99.90 |
| 15 | Bldg-Grass-Tree-Drives | 78 | 42.43 | 23.78 | 14.09 | 73.35 | 58.25 | 94.75 |
| 16 | Stone-steel towers | 19 | 93.27 | 79.05 | 74.16 | 95.14 | 79.81 | 99.59 |
| | **Total: 2055** | **OA** | 82.18 | 75.24 | 71.81 | 86.80 | 78.98 | 96.78 |
| | | **AA** | 78.03 | 65.57 | 58.04 | 76.90 | 65.30 | 95.78 |
**Pavia University Cross-Validation**

| # | Class | Samples | 1NN | 3NN | 5NN | SVM | RF | CNN |
|---|---|---|---|---|---|---|---|---|
| 1 | Self-Blocking Bricks | 737 | 92.48 | 92.09 | 91.47 | 97.72 | 95.62 | 96.14 |
| 2 | Meadows | 3730 | 99.18 | 99.43 | 99.52 | 99.85 | 98.83 | 99.97 |
| 3 | Gravel | 420 | 87.62 | 85.18 | 83.52 | 88.67 | 81.71 | 91.25 |
| 4 | Shadow | 190 | 99.85 | 99.79 | 99.83 | 99.87 | 100 | 99.64 |
| 5 | Bitumen | 266 | 92.60 | 91.47 | 90.92 | 90.03 | 87.67 | 93.43 |
| 6 | Bare Soil | 1006 | 75.67 | 66.28 | 59.89 | 95.65 | 79.80 | 99.14 |
| 7 | Metal sheets | 269 | 100 | 100 | 100 | 100 | 99.66 | 100 |
| 8 | Asphalt | 1327 | 95.21 | 93.75 | 92.69 | 96.33 | 97.96 | 98.24 |
| 9 | Trees | 613 | 89.45 | 86.19 | 83.52 | 97.92 | 95.68 | 98.56 |
| | **Total: 8558** | **OA** | 93.80 | 92.15 | 90.94 | 97.65 | 94.82 | 98.45 |
| | | **AA** | 92.45 | 90.46 | 89.04 | 96.26 | 92.99 | 97.35 |
**Salinas Cross-Validation**

| # | Class | Samples | 1NN | 3NN | 5NN | SVM | RF | CNN |
|---|---|---|---|---|---|---|---|---|
| 1 | Brocoli_green_weeds_1 | 402 | 99.26 | 98.94 | 98.75 | 99.84 | 99.99 | 99.99 |
| 2 | Brocoli_green_weeds_2 | 746 | 99.63 | 99.53 | 99.36 | 99.90 | 100 | 99.28 |
| 3 | Fallow | 396 | 99.72 | 99.74 | 99.77 | 99.86 | 99.36 | 99.59 |
| 4 | Fallow_rough_plow | 279 | 99.76 | 99.66 | 99.58 | 99.76 | 99.83 | 99.44 |
| 5 | Fallow_smooth | 536 | 98.99 | 98.81 | 98.48 | 99.76 | 99.42 | 98.48 |
| 6 | Stubble | 792 | 100 | 100 | 100 | 100 | 100 | 100 |
| 7 | Celery | 716 | 99.90 | 99.83 | 99.77 | 99.94 | 99.97 | 99.95 |
| 8 | Grapes_untrained | 2255 | 82.64 | 83.15 | 83.31 | 92.16 | 91.43 | 92.81 |
| 9 | Soil_vinyard_develop | 1241 | 99.77 | 99.51 | 99.23 | 99.93 | 99.91 | 99.91 |
| 10 | Corn_green_weeds | 656 | 97.08 | 95.48 | 94.41 | 98.70 | 97.62 | 98.47 |
| 11 | Lettuce_romaine_4wk | 214 | 99.23 | 98.37 | 98.03 | 99.03 | 98.95 | 99.44 |
| 12 | Lettuce_romaine_5wk | 386 | 100 | 99.98 | 100 | 100 | 100 | 100 |
| 13 | Lettuce_romaine_6wk | 184 | 99.98 | 100 | 100 | 99.98 | 99.96 | 99.52 |
| 14 | Lettuce_romaine_7wk | 214 | 98.43 | 97.44 | 96.96 | 99.24 | 97.70 | 99.20 |
| 15 | Vinyard_untrained | 1454 | 81.28 | 81.15 | 80.84 | 80.34 | 77.01 | 91.34 |
| 16 | Vinyard_vertical_trellis | 362 | 98.94 | 98.29 | 97.90 | 99.26 | 98.90 | 99.58 |
| | **Total: 10833** | **OA** | 93.46 | 93.33 | 93.16 | 95.54 | 94.81 | 97.03 |
| | | **AA** | 97.16 | 96.87 | 96.65 | 97.98 | 97.50 | 98.56 |
**San Francisco Cross-Validation**

| # | Class | Samples | 1NN | 3NN | 5NN | SVM | RF | CNN |
|---|---|---|---|---|---|---|---|---|
| 1 | Ocean | 3383 | 100 | 100 | 100 | 100 | 100 | 100 |
| 2 | Urban | 3594 | 91.50 | 94.92 | 96.04 | 94.27 | 99.38 | 96.97 |
| 3 | Mixed trees/grass | 770 | 35.83 | 28.93 | 25.48 | 71.88 | 43.93 | 85.97 |
| | **Total: 7747** | **OA** | 89.68 | 90.58 | 90.76 | 94.55 | 94.14 | 97.20 |
| | | **AA** | 75.78 | 74.62 | 73.84 | 88.72 | 81.10 | 94.31 |
**Flevoland Cross-Validation**

| # | Class | Samples | 1NN | 3NN | 5NN | SVM | RF | CNN |
|---|---|---|---|---|---|---|---|---|
| 1 | Rapeseed | 3525 | 51.23 | 54.09 | 54.83 | 99.63 | 100 | 100 |
| 2 | Potato | 6571 | 82.96 | 88.02 | 88.06 | 98.47 | 99.32 | 99.23 |
| 3 | Barley | 3295 | 68.89 | 67.09 | 65.00 | 95.84 | 99.52 | 99.44 |
| 4 | Maize | 8004 | 96.27 | 96.83 | 97.24 | 98.00 | 99.80 | 99.62 |
| 5 | Lucerne | 468 | 12.20 | 6.10 | 6.56 | 96.29 | 86.55 | 98.73 |
| 6 | Peas | 482 | 60.23 | 59.45 | 58.77 | 86.81 | 99.99 | 99.99 |
| 7 | Fruit | 832 | 4.36 | 0.92 | 1.08 | 88.96 | 99.45 | 99.47 |
| 8 | Beans | 252 | 18.30 | 7.63 | 7.98 | 82.15 | 93.29 | 95.02 |
| 9 | Wheat | 24 | 50.50 | 27.83 | 17.01 | 6.83 | 13.33 | 77.33 |
| 10 | Beet | 112 | 7.77 | 1.04 | 0.21 | 58.11 | 59.23 | 98.43 |
| 11 | Grass | 1160 | 73.50 | 72.45 | 71.42 | 89.78 | 96.18 | 99.20 |
| 12 | Oats | 72 | 36.00 | 24.78 | 20.78 | 36.89 | 12.72 | 74.39 |
| | **Total: 24797** | **OA** | 74.86 | 76.06 | 75.96 | 96.53 | 98.65 | 99.36 |
| | | **AA** | 46.85 | 42.19 | 40.74 | 78.15 | 79.95 | 95.06 |
**Computation Time (s)**

| Dataset | SVM (Train) | RF (Train) | CNN CPU (Train) | CNN GPU (Train) | SVM (Test) | RF (Test) | CNN CPU (Test) | CNN GPU (Test) |
|---|---|---|---|---|---|---|---|---|
| Indian Pines | 90.2 | 36.4 | 84.2 | 16.1 | 15.1 | 0.11 | 0.05 | 0.03 |
| Pavia | 150.1 | 140.5 | 186.6 | 71.3 | 33.2 | 0.14 | 0.11 | 0.06 |
| Salinas | 294.1 | 153.4 | 272.3 | 82.1 | 96.9 | 0.31 | 0.24 | 0.15 |
| San Francisco | 24.8 | 23.9 | 95.2 | 49.6 | 4.8 | 0.12 | 0.06 | 0.05 |
| Flevoland | 826.4 | 234.1 | 313.6 | 160.8 | 89.1 | 0.42 | 0.21 | 0.17 |
**Friedman Test Ranking**

| Method | Average Rank |
|---|---|
| CNN | 1.000 |
| SVM | 2.240 |
| RF | 2.960 |
| 1NN | 4.600 |
| 3NN | 4.840 |
| 5NN | 5.360 |
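The Friedman test ranks each method on every cross-validation result (rank 1 = best) and averages those ranks per method, yielding a table like the one above. A minimal NumPy sketch of the ranking step follows; the toy accuracies are illustrative (the paper ranks over 25 subsample results), and ties are not handled in this sketch.

```python
import numpy as np

def friedman_average_ranks(accuracies, methods):
    """accuracies: one row per dataset/result, one column per method.
    Higher accuracy -> better (lower) rank. Ties are ignored here."""
    acc = np.asarray(accuracies, dtype=float)
    ranks = np.zeros_like(acc)
    for i, row in enumerate(acc):
        order = (-row).argsort()               # best method first
        ranks[i, order] = np.arange(1, len(row) + 1)
    return dict(zip(methods, ranks.mean(axis=0)))

# Toy example with three results and three methods.
ranks = friedman_average_ranks(
    [[96.8, 86.8, 79.0],
     [98.5, 97.7, 94.8],
     [97.0, 95.5, 94.8]],
    ["CNN", "SVM", "RF"])
print(ranks)
```

After the average ranks are computed, the Friedman statistic tests whether the methods differ significantly, and the post hoc analysis (next table) compares the best-ranked method against each of the others with Holm-adjusted significance levels.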
**Post Hoc Analysis**

| Method | p | z | Holm |
|---|---|---|---|
| 5NN | 0.0000 | 8.2396 | 0.0100 |
| 3NN | 0.0000 | 7.2569 | 0.0125 |
| 1NN | 0.0000 | 6.8034 | 0.0167 |
| RF | 0.0002 | 3.7041 | 0.0250 |
| SVM | 0.0191 | 2.3434 | 0.0500 |
**Accuracies GRSS DASE Website Competition**

| Dataset | 1NN | 3NN | 5NN | SVM | RF | CNN | CNN-MF |
|---|---|---|---|---|---|---|---|
| Indian Pines | 64.75 | 64.55 | 65.14 | 86.31 | 64.33 | 94.64 | 95.53 |
| San Francisco | 90.50 | 91.86 | 92.40 | 96.28 | 96.81 | 98.70 | 99.37 |
| Pavia | 63.85 | 62.60 | 62.78 | 79.75 | 65.02 | 83.43 | 84.79 |
| Flevoland | 78.55 | 78.96 | 77.54 | 95.51 | 96.22 | 98.51 | 99.05 |
**CNN Accuracy Depending on Patch Size**

| Dataset | 3 × 3 | 5 × 5 | 7 × 7 | 9 × 9 |
|---|---|---|---|---|
| Indian Pines | 88.23 | 94.64 | 90.41 | 89.53 |
| Pavia | 81.48 | 83.43 | 77.75 | 77.40 |
| Flevoland | 97.79 | 98.51 | 98.43 | 97.95 |
| San Francisco | 93.99 | 98.70 | 98.09 | 97.60 |
© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
Share and Cite
Carranza-García, M.; García-Gutiérrez, J.; Riquelme, J.C. A Framework for Evaluating Land Use and Land Cover Classification Using Convolutional Neural Networks. Remote Sens. 2019, 11, 274. https://doi.org/10.3390/rs11030274