Object-Based Multi-Temporal and Multi-Source Land Cover Mapping Leveraging Hierarchical Class Relationships
"> Figure 1
<p>Overview of the acquisition dates of Sentinel-1 (S1) and Sentinel-2 (S2) images over the two study sites. S2 acquisitions were sparsed due to the ubiquitous cloudiness.</p> "> Figure 2
<p>Overview of the taxonomy derived from the Reunion island land cover classes.</p> "> Figure 3
<p>Overview of the taxonomy derived from the Senegalese site land cover classes.</p> "> Figure 4
<p>Some details about the segmentation performed on the Reunion island.</p> "> Figure 5
<p>Some details about the segmentation performed on the <span class="html-italic">Senegalese</span> site.</p> "> Figure 6
<p>Overview of the HOb2sRNN method. The architecture is composed of two branches, one for each source (radar and optical) SITS. Each branch processes the SITS by means of an enriched RNN cell we named FCGRU and an attention mechanism is employed on its outputs to extract the per source features. Furthermore, the same attention mechanism is employed on the concatenation of the per source outputs allowing to extract fused features. Finally, the per-source and fused feature sets are leveraged in order to provide the final classification.</p> "> Figure 7
<p>Visual representation of the GRU and FCGRU cells.</p> "> Figure 8
<p>Overview of the hierarchical pretraining strategy adopted for HOb2sRNN architecture.</p> "> Figure 9
<p>Per-Class F1 score for the Reunion island (average over ten random splits).</p> "> Figure 10
<p>Per-Class F1 score for the Senegalese site (average over ten random splits).</p> "> Figure 11
<p>Confusion matrices of the land cover classification produced by (<b>a</b>) <math display="inline"><semantics> <mrow> <mi>R</mi> <msub> <mi>F</mi> <mrow> <mi>P</mi> <mi>O</mi> <mi>E</mi> </mrow> </msub> </mrow> </semantics></math>, (<b>b</b>) RF, (<b>c</b>) SVM, (<b>d</b>) MLP, (<b>e</b>) TempCNN, (<b>f</b>) OD2RNN and (<b>g</b>) HOb2sRNN on the Reunion island. See <a href="#remotesensing-12-02814-t001" class="html-table">Table 1</a> for corresponding labels.</p> "> Figure 11 Cont.
<p>Confusion matrices of the land cover classification produced by (<b>a</b>) <math display="inline"><semantics> <mrow> <mi>R</mi> <msub> <mi>F</mi> <mrow> <mi>P</mi> <mi>O</mi> <mi>E</mi> </mrow> </msub> </mrow> </semantics></math>, (<b>b</b>) RF, (<b>c</b>) SVM, (<b>d</b>) MLP, (<b>e</b>) TempCNN, (<b>f</b>) OD2RNN and (<b>g</b>) HOb2sRNN on the Reunion island. See <a href="#remotesensing-12-02814-t001" class="html-table">Table 1</a> for corresponding labels.</p> "> Figure 12
<p>Confusion matrices of the land cover classification produced by (<b>a</b>) <math display="inline"><semantics> <mrow> <mi>R</mi> <msub> <mi>F</mi> <mrow> <mi>P</mi> <mi>O</mi> <mi>E</mi> </mrow> </msub> </mrow> </semantics></math>, (<b>b</b>) RF, (<b>c</b>) SVM, (<b>d</b>) MLP, (<b>e</b>) TempCNN, (<b>f</b>) OD2RNN and (<b>g</b>) HOb2sRNN on the <span class="html-italic">Senegalese</span> site. See <a href="#remotesensing-12-02814-t002" class="html-table">Table 2</a> for corresponding labels.</p> "> Figure 12 Cont.
<p>Confusion matrices of the land cover classification produced by (<b>a</b>) <math display="inline"><semantics> <mrow> <mi>R</mi> <msub> <mi>F</mi> <mrow> <mi>P</mi> <mi>O</mi> <mi>E</mi> </mrow> </msub> </mrow> </semantics></math>, (<b>b</b>) RF, (<b>c</b>) SVM, (<b>d</b>) MLP, (<b>e</b>) TempCNN, (<b>f</b>) OD2RNN and (<b>g</b>) HOb2sRNN on the <span class="html-italic">Senegalese</span> site. See <a href="#remotesensing-12-02814-t002" class="html-table">Table 2</a> for corresponding labels.</p> "> Figure 13
<p>Sensitivity analysis of the <math display="inline"><semantics> <mi>α</mi> </semantics></math> weights that are associated to the importance of the auxiliary classifiers in HOb2sRNN regarding the F1 score. Standard deviation is displayed as error bar.</p> "> Figure 14
<p>Qualitative investigation of land cover map details produced on the Reunion island study site over a mixed urban/agricultural area (top) and an agricultural/natural vegetation area (bottom).</p> "> Figure 14 Cont.
<p>Qualitative investigation of land cover map details produced on the Reunion island study site over a mixed urban/agricultural area (top) and an agricultural/natural vegetation area (bottom).</p> "> Figure 15
<p>Qualitative investigation of land cover map details produced on the Senegalese study site over heterogeneous landscapes including buildings, agricultural, and wet areas.</p> "> Figure 15 Cont.
<p>Qualitative investigation of land cover map details produced on the Senegalese study site over heterogeneous landscapes including buildings, agricultural, and wet areas.</p> "> Figure 16
<p>Box plots of the attention weights on cereals and legumes land covers considering the multi-source time series.</p> "> Figure 17
<p>Visualization of end of season agricultural practices in the <span class="html-italic">Senegalese groundnut basin</span> concerning cereals and legumes land covers. Background images come from the Sentinel-2 time series and are displayed in Green–Red–Infrared composite colours.</p> ">
Abstract
1. Introduction
2. Materials
2.1. Sentinel-1 Data
2.2. Sentinel-2 Data
2.3. Ground Truth
3. Method
3.1. Fully Connected Gated Recurrent Unit (FCGRU)
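As a concrete point of reference, below is a minimal PyTorch-style sketch of an FCGRU-like cell, assuming, per the captions of Figures 6 and 7, that the cell enriches a standard GRU by passing each input timestamp through two fully connected layers before the recurrent gating. The layer sizes (64 and 128 units, taken from the hyperparameter table in Section 4.1), the tanh activations, and all names are assumptions rather than the authors' exact implementation.

```python
# Hypothetical sketch of an FCGRU-like cell: a standard GRU whose per-timestamp
# input is first enriched by two fully connected layers. Sizes and activations
# are assumptions, not taken from the paper.
import torch
import torch.nn as nn

class FCGRU(nn.Module):
    def __init__(self, in_dim, fc1_dim=64, fc2_dim=128, hidden_dim=512):
        super().__init__()
        # Two FC layers that enrich the raw input before the recurrent gating.
        self.fc1 = nn.Linear(in_dim, fc1_dim)
        self.fc2 = nn.Linear(fc1_dim, fc2_dim)
        # Standard GRU cell operating on the enriched representation.
        self.gru = nn.GRUCell(fc2_dim, hidden_dim)

    def forward(self, x_seq):
        # x_seq: (batch, timestamps, features) object-level SITS for one source.
        batch, steps, _ = x_seq.shape
        h = x_seq.new_zeros(batch, self.gru.hidden_size)
        outputs = []
        for t in range(steps):
            z = torch.tanh(self.fc1(x_seq[:, t, :]))
            z = torch.tanh(self.fc2(z))
            h = self.gru(z, h)
            outputs.append(h)
        return torch.stack(outputs, dim=1)  # per-timestamp hidden states
```

The per-timestamp hidden states returned here are what the attention mechanism of the next subsection would operate on.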
3.2. Modified Attention Mechanism
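The caption of Figure 6 indicates that the same attention mechanism is applied both to the per-branch FCGRU outputs and to their concatenation. As a baseline for comparison, here is a sketch of a standard score-based temporal attention; the specific modification this subsection introduces is not reproduced here, so this is a generic formulation, not the authors' variant.

```python
# Generic temporal attention over per-timestamp hidden states, as a stand-in
# for the paper's modified mechanism. Weighting and summing the hidden states
# yields one feature vector per object.
import torch
import torch.nn as nn

class TemporalAttention(nn.Module):
    def __init__(self, hidden_dim):
        super().__init__()
        self.score = nn.Linear(hidden_dim, 1)

    def forward(self, states):
        # states: (batch, timestamps, hidden_dim), e.g., FCGRU outputs.
        weights = torch.softmax(self.score(states).squeeze(-1), dim=1)
        # Convex combination of the hidden states across time.
        features = (weights.unsqueeze(-1) * states).sum(dim=1)
        return features, weights  # weights support the analysis in Section 4.5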
3.3. Feature Combination
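Per the caption of Figure 6 and the sensitivity analysis of Figure 13, the fused features feed a main classifier while the per-source features feed auxiliary classifiers whose contributions are weighted by α. Below is a minimal sketch of such a combined training objective, assuming one cross-entropy term per classifier and a single α shared by both sources; the exact weighting scheme is an assumption on our part.

```python
# Hypothetical combined objective: main classifier on fused features plus
# alpha-weighted auxiliary classifiers on the radar and optical branches.
import torch.nn.functional as F

def combined_loss(logits_fused, logits_s1, logits_s2, labels, alpha=0.5):
    main = F.cross_entropy(logits_fused, labels)      # fused features
    aux = (F.cross_entropy(logits_s1, labels)
           + F.cross_entropy(logits_s2, labels))      # per-source features
    return main + alpha * aux
```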
3.4. Hierarchical Pretraining Strategy
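Figure 8 depicts a pretraining strategy that follows the class taxonomies of Figures 2 and 3, and the experimental settings in Section 4.1 report 2000 training epochs per level for HOb2sRNN. One plausible reading, sketched below under those assumptions: the network is trained level by level from the coarsest classes down to the target land cover classes, keeping the learnt recurrent and attention weights across levels and reusing `combined_loss` from the previous sketch. `replace_output_heads` and the data loaders are hypothetical helpers.

```python
# Hypothetical stage-wise training over taxonomy levels. Only the optimizer
# (Adam), learning rate (1e-4), and epochs per level (2000) come from the
# experimental settings; the structure of the loop is illustrative.
import torch

def hierarchical_pretraining(model, loaders_per_level, classes_per_level,
                             epochs_per_level=2000, lr=1e-4):
    for loader, n_classes in zip(loaders_per_level, classes_per_level):
        # Swap the classification heads to the current taxonomy level while
        # keeping the FCGRU and attention weights learnt so far.
        model.replace_output_heads(n_classes)  # hypothetical helper
        optimizer = torch.optim.Adam(model.parameters(), lr=lr)
        for _ in range(epochs_per_level):
            for s1_batch, s2_batch, labels in loader:
                optimizer.zero_grad()
                fused, aux_s1, aux_s2 = model(s1_batch, s2_batch)
                loss = combined_loss(fused, aux_s1, aux_s2, labels)
                loss.backward()
                optimizer.step()
    return model
```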
4. Experiments
- an in-depth evaluation of the quantitative performance of the HOb2sRNN model with respect to several competitors;
- a sensitivity analysis of the hyperparameter that weights the auxiliary classifier contributions, together with an ablation study of the input sources and the main components of the architecture, in order to characterize the interplay among them;
- a qualitative analysis of the land cover maps produced by HOb2sRNN and its competitors; and
- an inspection of the attention parameters learnt by the HOb2sRNN model, with the aim of investigating to what extent such side information contributes to model interpretability.
4.1. Experimental Settings
4.2. Comparative Analysis
4.2.1. General Behaviour
4.2.2. Per-Class Analysis
4.3. Sensitivity and Ablation Analysis
4.3.1. Sensitivity Analysis on the Weights of Per-Source Auxiliary Classifiers
4.3.2. Ablation on the Multi-Source Data
4.3.3. Ablation on the Main Components of the Architecture
4.3.4. Ablation on Optical Information
4.4. Qualitative Analysis of Land Cover Maps
4.5. Attention Parameters Analysis
5. Discussion
6. Conclusions
Author Contributions
Funding
Conflicts of Interest
References
1. Berger, M.; Moreno, J.; Johannessen, J.A.; Levelt, P.F.; Hanssen, R.F. ESA's sentinel missions in support of Earth system science. Remote Sens. Environ. 2012, 120, 84–90.
2. Kolecka, N.; Ginzler, C.; Pazur, R.; Price, B.; Verburg, P.H. Regional Scale Mapping of Grassland Mowing Frequency with Sentinel-2 Time Series. Remote Sens. 2018, 10, 1221.
3. Kussul, N.; Lavreniuk, M.; Skakun, S.; Shelestov, A. Deep Learning Classification of Land Cover and Crop Types Using Remote Sensing Data. IEEE Geosci. Remote Sens. Lett. 2017, 14, 778–782.
4. Inglada, J.; Vincent, A.; Arias, M.; Tardy, B.; Morin, D.; Rodes, I. Operational High Resolution Land Cover Map Production at the Country Scale Using Satellite Image Time Series. Remote Sens. 2017, 9, 95.
5. Guttler, F.; Ienco, D.; Nin, J.; Teisseire, M.; Poncelet, P. A graph-based approach to detect spatiotemporal dynamics in satellite image time series. ISPRS J. Photogramm. Remote Sens. 2017, 130, 92–107.
6. Khiali, L.; Ienco, D.; Teisseire, M. Object-oriented satellite image time series analysis using a graph-based representation. Ecol. Inform. 2018, 43, 52–64.
7. Steinhausen, M.J.; Wagner, P.D.; Narasimhan, B.; Waske, B. Combining Sentinel-1 and Sentinel-2 data for improved land use and land cover mapping of monsoon regions. Int. J. Appl. Earth Obs. Geoinf. 2018, 73, 595–604.
8. Minh, D.H.T.; Ienco, D.; Gaetano, R.; Lalande, N.; Ndikumana, E.; Osman, F.; Maurel, P. Deep Recurrent Neural Networks for Winter Vegetation Quality Mapping via Multitemporal SAR Sentinel-1. IEEE Geosci. Remote Sens. Lett. 2018, 15, 464–468.
9. Gbodjo, Y.J.E.; Ienco, D.; Leroux, L. Toward Spatio-Spectral Analysis of Sentinel-2 Time Series Data for Land Cover Mapping. IEEE Geosci. Remote Sens. Lett. 2019, 17, 307–311.
10. Interdonato, R.; Ienco, D.; Gaetano, R.; Ose, K. DuPLO: A DUal view Point deep Learning architecture for time series classificatiOn. ISPRS J. Photogramm. Remote Sens. 2019, 149, 91–104.
11. Mousavi, S.M.; Roostaei, S.; Rostamzadeh, H. Estimation of flood land use/land cover mapping by regional modelling of flood hazard at sub-basin level case study: Marand basin. Geomat. Nat. Hazards Risk 2019, 10, 1155–1175.
12. Fritz, S.; See, L.; Bayas, J.C.L.; Waldner, F.; Jacques, D.; Becker-Reshef, I.; Whitcraft, A.; Baruth, B.; Bonifacio, R.; Crutchfield, J.; et al. A comparison of global agricultural monitoring systems and current gaps. Agric. Syst. 2019, 168, 258–272.
13. Gao, F.; Masek, J.G.; Schwaller, M.R.; Hall, F.G. On the blending of the Landsat and MODIS surface reflectance: Predicting daily Landsat surface reflectance. IEEE Trans. Geosci. Remote Sens. 2006, 44, 2207–2218.
14. Ienco, D.; Interdonato, R.; Gaetano, R.; Minh, D.H.T. Combining Sentinel-1 and Sentinel-2 Satellite Image Time Series for land cover mapping via a multi-source deep learning architecture. ISPRS J. Photogramm. Remote Sens. 2019, 158, 11–22.
15. Ienco, D.; Gaetano, R.; Interdonato, R.; Ose, K.; Minh, D.H.T. Combining Sentinel-1 and Sentinel-2 Time Series via RNN for Object-Based Land Cover Classification. In Proceedings of the 2019 IEEE International Geoscience and Remote Sensing Symposium (IGARSS 2019), Yokohama, Japan, 28 July–2 August 2019; pp. 4881–4884.
16. Iannelli, G.C.; Gamba, P. Jointly Exploiting Sentinel-1 and Sentinel-2 for Urban Mapping. In Proceedings of the 2018 IEEE International Geoscience and Remote Sensing Symposium (IGARSS 2018), Valencia, Spain, 22–27 July 2018; pp. 8209–8212.
17. Erinjery, J.; Singh, M.; Kent, R. Mapping and assessment of vegetation types in the tropical rainforests of the Western Ghats using multispectral Sentinel-2 and SAR Sentinel-1 satellite imagery. Remote Sens. Environ. 2018, 216, 345–354.
18. Tricht, K.V.; Gobin, A.; Gilliams, S.; Piccard, I. Synergistic Use of Radar Sentinel-1 and Optical Sentinel-2 Imagery for Crop Mapping: A Case Study for Belgium. Remote Sens. 2018, 10, 1642.
19. Denize, J.; Hubert-Moy, L.; Betbeder, J.; Corgne, S.; Baudry, J.; Pottier, E. Evaluation of using Sentinel-1 and -2 time-series to identify winter land use in agricultural landscapes. Remote Sens. 2019, 11, 37.
20. Fernández-Beltran, R.; Haut, J.M.; Paoletti, M.E.; Plaza, J.; Plaza, A.; Pla, F. Multimodal Probabilistic Latent Semantic Analysis for Sentinel-1 and Sentinel-2 Image Fusion. IEEE Geosci. Remote Sens. Lett. 2018, 15, 1347–1351.
21. Di Gregorio, A. Land Cover Classification System: Classification Concepts and User Manual: LCCS; Food & Agriculture Organization: Rome, Italy, 2005; Volume 2.
22. Sulla-Menashe, D.; Friedl, M.A.; Krankina, O.N.; Baccini, A.; Woodcock, C.E.; Sibley, A.; Sun, G.; Kharuk, V.; Elsakov, V. Hierarchical mapping of Northern Eurasian land cover using MODIS data. Remote Sens. Environ. 2011, 115, 392–403.
23. Wu, M.F.; Sun, Z.C.; Yang, B.; Yu, S.S. A Hierarchical Object-oriented Urban Land Cover Classification Using WorldView-2 Imagery and Airborne LiDAR data. IOP Conf. Ser. Earth Environ. Sci. 2016, 46, 012016.
24. Sulla-Menashe, D.; Gray, J.M.; Abercrombie, S.P.; Friedl, M.A. Hierarchical mapping of annual global land cover 2001 to present: The MODIS Collection 6 Land Cover product. Remote Sens. Environ. 2019, 222, 183–194.
25. Blaschke, T. Object based image analysis for remote sensing. ISPRS J. Photogramm. Remote Sens. 2010, 65, 2–16.
26. Lillesand, T.; Kiefer, R.W.; Chipman, J. Remote Sensing and Image Interpretation; John Wiley & Sons: Hoboken, NJ, USA, 2015.
27. Zhu, X.X.; Tuia, D.; Mou, L.; Xia, G.; Zhang, L.; Xu, F.; Fraundorfer, F. Deep Learning in Remote Sensing: A Comprehensive Review and List of Resources. IEEE Geosci. Remote Sens. Mag. 2017, 5, 8–36.
28. Ienco, D.; Gaetano, R.; Dupaquier, C.; Maurel, P. Land Cover Classification via Multitemporal Spatial Data by Deep Recurrent Neural Networks. IEEE Geosci. Remote Sens. Lett. 2017, 14, 1685–1689.
29. Mou, L.; Ghamisi, P.; Zhu, X.X. Unsupervised Spectral-Spatial Feature Learning via Deep Residual Conv-Deconv Network for Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2018, 56, 391–406.
30. Zhong, L.; Hu, L.; Zhou, H. Deep learning based multi-temporal crop classification. Remote Sens. Environ. 2019, 221, 430–443.
31. Rußwurm, M.; Körner, M. Multi-Temporal Land Cover Classification with Sequential Recurrent Encoders. ISPRS Int. J. Geo-Inf. 2018, 7, 129.
32. Rouse, J.W.; Haas, R.H.; Schell, J.; Deering, D. Monitoring vegetation systems in the Great Plains with ERTS. In Proceedings of the Third Earth Resources Technology Satellite (ERTS) Symposium, Washington, DC, USA, 10–14 December 1973; Volume 1, pp. 309–317.
33. Dupuy, S.; Gaetano, R.; Mézo, L.L. Mapping land cover on Reunion Island in 2017 using satellite imagery and geospatial ground data. Data Brief 2020, 28, 104934.
34. Lassalle, P.; Inglada, J.; Michel, J.; Grizonnet, M.; Malik, J. A Scalable Tile-Based Framework for Region-Merging Segmentation. IEEE Trans. Geosci. Remote Sens. 2015, 53, 5473–5485.
35. Cho, K.; Van Merrienboer, B.; Gülçehre, Ç.; Bahdanau, D.; Bougares, F.; Schwenk, H.; Bengio, Y. Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation. arXiv 2014, arXiv:1406.1078.
36. Bahdanau, D.; Cho, K.; Bengio, Y. Neural Machine Translation by Jointly Learning to Align and Translate. arXiv 2014, arXiv:1409.0473.
37. Luong, M.; Pham, H.; Manning, C.D. Effective Approaches to Attention-based Neural Machine Translation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing (EMNLP 2015), Lisbon, Portugal, 17–21 September 2015; pp. 1412–1421.
38. Britz, D.; Guan, M.Y.; Luong, M. Efficient Attention using a Fixed-Size Memory Representation. arXiv 2017, arXiv:1707.00110.
39. Karamanolakis, G.; Hsu, D.; Gravano, L. Weakly Supervised Attention Networks for Fine-Grained Opinion Mining and Public Health. arXiv 2019, arXiv:1910.00054.
40. Hou, S.; Liu, X.; Wang, Z. DualNet: Learn Complementary Features for Image Recognition. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 502–510.
41. Benedetti, P.; Ienco, D.; Gaetano, R.; Ose, K.; Pensa, R.G.; Dupuy, S. M3Fusion: A Deep Learning Architecture for Multiscale Multimodal Multitemporal Satellite Data Fusion. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2018, 11, 4939–4949.
42. Valero, S.; Arnaud, L.; Planells, M.; Ceschia, E.; Dedieu, G. Sentinel's Classifier Fusion System for Seasonal Crop Mapping. In Proceedings of the 2019 IEEE International Geoscience and Remote Sensing Symposium (IGARSS 2019), Yokohama, Japan, 28 July–2 August 2019; pp. 6243–6246.
43. Pelletier, C.; Webb, G.; Petitjean, F. Temporal Convolutional Neural Network for the Classification of Satellite Image Time Series. Remote Sens. 2019, 11, 523.
44. Pedregosa, F.; Varoquaux, G.; Gramfort, A.; Michel, V.; Thirion, B.; Grisel, O.; Blondel, M.; Prettenhofer, P.; Weiss, R.; Dubourg, V.; et al. Scikit-learn: Machine Learning in Python. J. Mach. Learn. Res. 2011, 12, 2825–2830.
45. Kingma, D.P.; Ba, J. Adam: A Method for Stochastic Optimization. arXiv 2014, arXiv:1412.6980.
46. LeCun, Y.; Bengio, Y.; Hinton, G. Deep learning. Nature 2015, 521, 436–444.
47. Ma, L.; Liu, Y.; Zhang, X.; Ye, Y.; Yin, G.; Johnson, B.A. Deep learning in remote sensing applications: A meta-analysis and review. ISPRS J. Photogramm. Remote Sens. 2019, 152, 166–177.
48. Maxwell, A.E.; Warner, T.A.; Fang, F. Implementation of machine-learning classification in remote sensing: An applied review. Int. J. Remote Sens. 2018, 39, 2784–2817.
49. Choi, H.; Cho, K.; Bengio, Y. Fine-grained attention mechanism for neural machine translation. Neurocomputing 2018, 284, 171–176.
50. Ribeiro, M.T.; Singh, S.; Guestrin, C. "Why Should I Trust You?": Explaining the Predictions of Any Classifier. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, San Francisco, CA, USA, 13–17 August 2016; pp. 1135–1144.
51. Chen, G.; Weng, Q.; Hay, G.J.; He, Y. Geographic object-based image analysis (GEOBIA): Emerging trends and future opportunities. GISci. Remote Sens. 2018, 55, 159–182.
52. Boccardo, P.; Tonolo, F.G. Remote sensing role in emergency mapping for disaster response. In Engineering Geology for Society and Territory, Volume 5; Springer: Berlin/Heidelberg, Germany, 2015; pp. 17–24.
53. Belgiu, M.; Drăguţ, L. Random forest in remote sensing: A review of applications and future directions. ISPRS J. Photogramm. Remote Sens. 2016, 114, 24–31.
54. Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, L.; Polosukhin, I. Attention is All you Need. In Proceedings of Advances in Neural Information Processing Systems 30 (NIPS 2017), Long Beach, CA, USA, 4–9 December 2017; pp. 5998–6008.
Table 1. Reunion island ground truth: land cover classes with the corresponding numbers of polygons and segments.

Class | Label | Polygons | Segments
---|---|---|---
0 | Sugarcane | 869 | 1258
1 | Pasture and fodder | 582 | 869
2 | Market gardening | 758 | 912
3 | Greenhouse crops or shadows | 260 | 233
4 | Orchards | 767 | 1014
5 | Wooded areas | 570 | 1106
6 | Moor and Savannah | 506 | 850
7 | Rocks and natural bare soil | 299 | 573
8 | Relief shadows | 81 | 107
9 | Water | 177 | 261
10 | Urbanized areas | 1396 | 725
| Total | 6265 | 7908
Table 2. Senegalese site ground truth: land cover classes with the corresponding numbers of polygons and segments.

Class | Label | Polygons | Segments
---|---|---|---
0 | Bushes | 50 | 100
1 | Fallows and Uncultivated areas | 69 | 322
2 | Ponds | 33 | 59
3 | Banks and bare soils | 35 | 132
4 | Villages | 21 | 767
5 | Wet areas | 22 | 156
6 | Valley | 22 | 56
7 | Cereals | 260 | 816
8 | Legumes | 222 | 676
| Total | 734 | 3084
Model | Trainable Parameters (Reunion) | Trainable Parameters (Senegal)
---|---|---
MLP | 349,195 | 332,809
TempCNN | 465,739 | 268,617
OD2RNN | 2,173,761 | 2,160,667
HOb2sRNN | 4,391,810 | 4,382,576
Method | Hyperparameter | Value or Range
---|---|---
RF | Number of trees | {100, 200, 300, 400, 500}
RF | Maximum depth | {20, 40, 60, 80, 100}
RF | Maximum features | {'sqrt', 'log2', None}
SVM | Kernel | {'linear', 'poly', 'rbf', 'sigmoid'}
SVM | Gamma | {0.25, 0.5, 1, 2}
SVM | Penalty | {0.1, 1, 10}
MLP | Hidden units | 512
MLP | Hidden layers | 2
MLP | Dropout rate | 0.4
HOb2sRNN | FCGRU units | 512 for each hidden state
HOb2sRNN | FC1 units | 64
HOb2sRNN | FC2 units | 128
HOb2sRNN | Main classifier units | 512 for each layer
HOb2sRNN | Dropout rate | 0.4
All neural network models | Batch size | 32
All neural network models | Optimizer | Adam [45]
All neural network models | Learning rate | 1 × 10⁻⁴
All neural network models | Number of epochs | 2000 (per level for HOb2sRNN)
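As an illustration of the RF and SVM model selection implied by the ranges above, here is a short scikit-learn sketch; mapping the "Penalty" row to the SVC parameter `C` and the choice of 5-fold cross-validation are assumptions on our part.

```python
# Sketch of the model selection implied by the hyperparameter ranges above,
# using scikit-learn's GridSearchCV. 'Penalty' is mapped to SVC's C parameter
# (an assumption).
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

rf_grid = {'n_estimators': [100, 200, 300, 400, 500],
           'max_depth': [20, 40, 60, 80, 100],
           'max_features': ['sqrt', 'log2', None]}
svm_grid = {'kernel': ['linear', 'poly', 'rbf', 'sigmoid'],
            'gamma': [0.25, 0.5, 1, 2],
            'C': [0.1, 1, 10]}

rf_search = GridSearchCV(RandomForestClassifier(), rf_grid, cv=5)
svm_search = GridSearchCV(SVC(), svm_grid, cv=5)
# rf_search.fit(X_train, y_train); svm_search.fit(X_train, y_train)
```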
Reunion | F1 Score (%) | Kappa | Accuracy (%)
---|---|---|---
RF_POE | 74.26 ± 0.75 | 0.713 ± 0.009 | 74.72 ± 0.78
RF | 75.62 ± 1.00 | 0.726 ± 0.011 | 75.75 ± 0.98
SVM | 75.34 ± 0.88 | 0.722 ± 0.010 | 75.39 ± 0.89
MLP | 77.96 ± 0.70 | 0.752 ± 0.008 | 78.03 ± 0.66
TempCNN | 77.76 ± 1.06 | 0.749 ± 0.012 | 77.79 ± 1.05
OD2RNN | 74.39 ± 1.14 | 0.712 ± 0.012 | 74.50 ± 1.09
HOb2sRNN | 79.66 ± 0.85 | 0.772 ± 0.009 | 79.78 ± 0.82

Senegal | F1 Score (%) | Kappa | Accuracy (%)
---|---|---|---
RF_POE | 85.31 ± 0.50 | 0.816 ± 0.006 | 85.45 ± 0.48
RF | 86.31 ± 0.91 | 0.828 ± 0.012 | 86.35 ± 0.90
SVM | 89.95 ± 0.85 | 0.875 ± 0.011 | 89.96 ± 0.85
MLP | 90.05 ± 0.56 | 0.876 ± 0.007 | 90.07 ± 0.57
TempCNN | 88.81 ± 0.58 | 0.861 ± 0.007 | 88.83 ± 0.58
OD2RNN | 88.35 ± 0.72 | 0.855 ± 0.009 | 88.34 ± 0.72
HOb2sRNN | 90.78 ± 1.03 | 0.885 ± 0.013 | 90.78 ± 1.03
Sentinel-1 | F1 Score (%) | Kappa | Accuracy (%)
---|---|---|---
RF | 36.77 ± 0.93 | 0.291 ± 0.011 | 37.85 ± 0.95
SVM | 6.56 ± 0.36 | 0.018 ± 0.009 | 16.85 ± 0.53
MLP | 34.93 ± 1.42 | 0.271 ± 0.016 | 36.01 ± 1.39
TempCNN | 32.28 ± 1.19 | 0.239 ± 0.013 | 33.17 ± 1.17
OD2RNN | 31.83 ± 0.98 | 0.234 ± 0.012 | 32.71 ± 1.01
HOb2sRNN | 31.80 ± 1.10 | 0.231 ± 0.011 | 32.39 ± 1.04

Sentinel-2 | F1 Score (%) | Kappa | Accuracy (%)
---|---|---|---
RF | 76.24 ± 0.59 | 0.732 ± 0.007 | 76.32 ± 0.63
SVM | 75.55 ± 0.80 | 0.724 ± 0.009 | 75.60 ± 0.80
MLP | 77.95 ± 0.69 | 0.751 ± 0.008 | 77.98 ± 0.73
TempCNN | 78.25 ± 0.88 | 0.755 ± 0.010 | 78.27 ± 0.90
OD2RNN | 74.55 ± 0.81 | 0.714 ± 0.008 | 74.66 ± 0.72
HOb2sRNN | 78.69 ± 0.95 | 0.761 ± 0.010 | 78.79 ± 0.91

Both Sources | F1 Score (%) | Kappa | Accuracy (%)
---|---|---|---
RF_POE | 74.26 ± 0.75 | 0.713 ± 0.009 | 74.72 ± 0.78
RF | 75.62 ± 1.00 | 0.726 ± 0.011 | 75.75 ± 0.98
SVM | 75.34 ± 0.88 | 0.722 ± 0.010 | 75.39 ± 0.89
MLP | 77.96 ± 0.70 | 0.752 ± 0.008 | 78.03 ± 0.66
TempCNN | 77.76 ± 1.06 | 0.749 ± 0.012 | 77.79 ± 1.05
OD2RNN | 74.39 ± 1.14 | 0.712 ± 0.012 | 74.50 ± 1.09
HOb2sRNN | 79.66 ± 0.85 | 0.772 ± 0.009 | 79.78 ± 0.82
Sentinel-1 | F1 Score (%) | Kappa | Accuracy (%)
---|---|---|---
RF | 75.71 ± 1.03 | 0.703 ± 0.013 | 76.56 ± 1.00
SVM | 71.27 ± 0.82 | 0.653 ± 0.010 | 72.82 ± 0.78
MLP | 78.96 ± 1.28 | 0.738 ± 0.015 | 79.05 ± 1.23
TempCNN | 77.79 ± 0.79 | 0.725 ± 0.010 | 78.01 ± 0.80
OD2RNN | 75.07 ± 1.59 | 0.692 ± 0.019 | 75.34 ± 1.50
HOb2sRNN | 77.42 ± 1.33 | 0.721 ± 0.016 | 77.63 ± 1.27

Sentinel-2 | F1 Score (%) | Kappa | Accuracy (%)
---|---|---|---
RF | 84.51 ± 1.17 | 0.806 ± 0.015 | 84.60 ± 1.17
SVM | 88.64 ± 0.47 | 0.858 ± 0.006 | 88.63 ± 0.45
MLP | 88.38 ± 0.61 | 0.855 ± 0.008 | 88.40 ± 0.62
TempCNN | 87.42 ± 1.02 | 0.843 ± 0.013 | 87.42 ± 1.04
OD2RNN | 86.03 ± 0.75 | 0.826 ± 0.010 | 86.01 ± 0.75
HOb2sRNN | 87.56 ± 1.33 | 0.845 ± 0.017 | 87.55 ± 1.33

Both Sources | F1 Score (%) | Kappa | Accuracy (%)
---|---|---|---
RF_POE | 85.31 ± 0.50 | 0.816 ± 0.006 | 85.45 ± 0.48
RF | 86.31 ± 0.91 | 0.828 ± 0.012 | 86.35 ± 0.90
SVM | 89.95 ± 0.85 | 0.875 ± 0.011 | 89.96 ± 0.85
MLP | 90.05 ± 0.56 | 0.876 ± 0.007 | 90.07 ± 0.57
TempCNN | 88.81 ± 0.58 | 0.861 ± 0.007 | 88.83 ± 0.58
OD2RNN | 88.35 ± 0.72 | 0.855 ± 0.009 | 88.34 ± 0.72
HOb2sRNN | 90.78 ± 1.03 | 0.885 ± 0.013 | 90.78 ± 1.03
Reunion | F1 Score (%) | Kappa | Accuracy (%)
---|---|---|---
| 77.66 ± 0.99 | 0.749 ± 0.011 | 77.74 ± 0.99
| 77.32 ± 1.22 | 0.746 ± 0.013 | 77.47 ± 1.18
| 78.35 ± 0.70 | 0.756 ± 0.007 | 78.43 ± 0.66
| 79.09 ± 0.57 | 0.764 ± 0.006 | 79.10 ± 0.50
HOb2sRNN | 79.66 ± 0.85 | 0.772 ± 0.009 | 79.78 ± 0.82

Senegal | F1 Score (%) | Kappa | Accuracy (%)
---|---|---|---
| 89.86 ± 0.62 | 0.874 ± 0.008 | 89.89 ± 0.63
| 89.91 ± 0.54 | 0.874 ± 0.007 | 89.92 ± 0.52
| 89.25 ± 0.88 | 0.866 ± 0.011 | 89.24 ± 0.87
| 89.12 ± 0.64 | 0.864 ± 0.008 | 89.11 ± 0.64
HOb2sRNN | 90.78 ± 1.03 | 0.885 ± 0.013 | 90.78 ± 1.03
Reunion | F1 Score (%) | Kappa | Accuracy (%)
---|---|---|---
| 79.83 ± 0.70 | 0.774 ± 0.008 | 79.95 ± 0.68
HOb2sRNN | 79.66 ± 0.85 | 0.772 ± 0.009 | 79.78 ± 0.82

Senegal | F1 Score (%) | Kappa | Accuracy (%)
---|---|---|---
| 90.46 ± 0.82 | 0.881 ± 0.010 | 90.46 ± 0.82
HOb2sRNN | 90.78 ± 1.03 | 0.885 ± 0.013 | 90.78 ± 1.03
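All of the tables above report F1 score, Cohen's Kappa, and global accuracy as mean ± standard deviation over ten random splits. Below is a short sketch of how such figures can be computed with scikit-learn [44]; the use of the 'weighted' F1 average is an assumption on our part.

```python
# Sketch of the evaluation metrics reported above, computed with scikit-learn.
# Results in the paper are means and standard deviations over ten random
# train/test splits; the 'weighted' F1 averaging is an assumption.
import numpy as np
from sklearn.metrics import accuracy_score, cohen_kappa_score, f1_score

def evaluate_splits(y_true_splits, y_pred_splits):
    f1s, kappas, accs = [], [], []
    for y_true, y_pred in zip(y_true_splits, y_pred_splits):
        f1s.append(f1_score(y_true, y_pred, average='weighted') * 100)
        kappas.append(cohen_kappa_score(y_true, y_pred))
        accs.append(accuracy_score(y_true, y_pred) * 100)
    return {name: (np.mean(vals), np.std(vals))
            for name, vals in [('F1', f1s), ('Kappa', kappas),
                               ('Accuracy', accs)]}
```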
© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
Gbodjo, Y.J.E.; Ienco, D.; Leroux, L.; Interdonato, R.; Gaetano, R.; Ndao, B. Object-Based Multi-Temporal and Multi-Source Land Cover Mapping Leveraging Hierarchical Class Relationships. Remote Sens. 2020, 12, 2814. https://doi.org/10.3390/rs12172814