Integrating Semi-Supervised Learning with an Expert System for Vegetation Cover Classification Using Sentinel-2 and RapidEye Data
"> Figure 1
<p>Location of Schiermonnikoog national park.</p> "> Figure 2
<p>SSLES flowchart.</p> "> Figure 3
<p>Segmentation result of RapidEye image with <span class="html-italic">h<sub>s</sub></span> = 5 and <span class="html-italic">h<sub>r</sub></span> = 7, on a false colour composite of Sentinel-2 (B08, B04, B03).</p> "> Figure 4
<p>Illustration of label propagation procedure. Coloured circles represent four different labelled samples, and white circles represent unlabelled samples. Values indicate the probability of edges. Propagation direction is shown by the direction of the arrows, i.e., the arrow is always from a labelled sample to an unlabelled sample.</p> "> Figure 5
<p>Example of expert rule weights for ancillary data. Y-axis shows the initial conditional probability, and the X-axis shows the 10 quantiles.</p> "> Figure 6
<p>Classified map of the study area using the Group 4 of band combinations, red-edge spectral bands (as this group achieved the highest accuracy) of Sentinel-2 data, and the SSLES method.</p> "> Figure 7
<p>Example of classification result using (<b>a</b>) SSLES, (<b>b</b>) SSL only, (<b>c</b>) ES only, and (<b>d</b>) RF.</p> "> Figure 8
<p>Producer’s accuracy obtained with SSLES. The colours illustrate the accuracy, where white colour represents 100% accuracy, and black represents 0% accuracy.</p> "> Figure 9
<p>A detailed comparison of two classification methods, the matrix is the result of subtracting the confusion matrix of the SSLES from the SSL. The white colour represents an increase in the cell’s value, black indicates a decrease, and if the value was unchanged, it was coloured cyan.</p> ">
Abstract
1. Introduction
2. Study Area and Materials
2.1. Study Area
2.2. Materials
2.2.1. Sentinel-2 Data
- 10 m resolution bands: blue (490 nm), green (560 nm), red (665 nm), and near-infrared (842 nm).
- 20 m resolution bands: four red-edge/NIR bands with central wavelength at 705 nm, 740 nm, 783 nm, and 865 nm, respectively, and shortwave infrared-1 and -2 (1610 nm and 2190 nm).
- 60 m resolution bands: coastal (443 nm), water vapour (945 nm), and cirrus (1375 nm).
- Group 1: All spectral bands;
- Group 2: Red and infrared bands;
- Group 3: All shortwave infrared bands;
- Group 4: All red-edge bands;
- Group 5: Red, infrared, and red-edge bands;
- Group 6: Red-edge and shortwave infrared bands.
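One way to make these six groups concrete for the experiments is a simple configuration mapping each group to Sentinel-2 band identifiers. The sketch below is illustrative only: the exact band membership of each group (for example, whether the 60 m bands belong to Group 1) is an assumption, not taken from the paper.

```python
# Hypothetical encoding of the six Sentinel-2 band groups (Section 2.2.1).
# Band identifiers follow the Sentinel-2 convention; the exact membership of
# each group is an assumption for illustration, not the authors' definition.
BAND_GROUPS = {
    "group_1": ["B02", "B03", "B04", "B05", "B06", "B07", "B08", "B8A", "B11", "B12"],  # all 10/20 m spectral bands
    "group_2": ["B04", "B08"],                              # red and near-infrared
    "group_3": ["B11", "B12"],                              # shortwave infrared
    "group_4": ["B05", "B06", "B07", "B8A"],                # red-edge
    "group_5": ["B04", "B08", "B05", "B06", "B07", "B8A"],  # red, infrared, and red-edge
    "group_6": ["B05", "B06", "B07", "B8A", "B11", "B12"],  # red-edge and shortwave infrared
}
```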
2.2.2. RapidEye Data
2.2.3. Reference Data and Sampling
2.2.4. Knowledge Sources
- A reference vegetation map of the study area, generated in 2010 [32]. As this map was produced through expert visual interpretation of aerial photographs and extensive fieldwork, it encodes a degree of expert knowledge.
- Ancillary data, including field records of the dominant vegetation type for 30 vegetation plots, an NDVI layer derived from the Sentinel-2 image (computed as sketched below), and a digital elevation model (DEM) of the island (produced from laser altimetry by the Dutch Ministry of Public Works, Rijkswaterstaat), used to derive height, slope, aspect, etc.
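The NDVI layer listed above is the standard normalized difference of the near-infrared and red reflectances. A minimal sketch, assuming the two Sentinel-2 bands have already been exported as arrays (file names are hypothetical):

```python
# NDVI = (NIR - Red) / (NIR + Red), computed per pixel from Sentinel-2
# B08 (842 nm, NIR) and B04 (665 nm, red). File names are assumptions.
import numpy as np

nir = np.load("sentinel2_B08.npy").astype(np.float32)
red = np.load("sentinel2_B04.npy").astype(np.float32)

ndvi = (nir - red) / np.clip(nir + red, 1e-6, None)  # guard against division by zero
```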
3. Methods
3.1. Object-Based Image Analysis (OBIA)
- Spatial radius hs (the radius of the spatial kernel, i.e., how far apart pixels may lie and still be grouped together);
- Range radius hr (the radius of the range kernel, i.e., how different pixel values may be spectrally and still be grouped together). A minimal smoothing sketch using these two radii follows this list.
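The paper does not prescribe a specific implementation here; as a rough illustration of how the two radii act, OpenCV's pyrMeanShiftFiltering exposes the same pair of parameters (a spatial window radius and a colour/range window radius). The sketch below is an assumption-laden stand-in, not the segmentation tool used in the study; the values hs = 5 and hr = 7 are those reported for the RapidEye image (Figure 3).

```python
# Illustrative mean-shift smoothing with a spatial radius (~hs) and a range
# radius (~hr). This is not the exact segmentation workflow of the paper.
import cv2
import numpy as np

img = cv2.imread("rapideye_rgb_subset.png")  # assumed 8-bit, 3-band subset
assert img is not None

hs, hr = 5, 7                                # spatial and range radii (cf. Figure 3)
smoothed = cv2.pyrMeanShiftFiltering(img, hs, hr)

# Pixels that converged to the same colour can then be grouped into candidate
# image objects for the object-based analysis.
flat = smoothed.reshape(-1, 3)
_, labels = np.unique(flat, axis=0, return_inverse=True)
labels = labels.reshape(smoothed.shape[:2])
```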
3.2. Semi-Supervised Learning
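The SSL step relies on graph-based label propagation, in which class labels spread from labelled to unlabelled image objects along weighted edges (cf. the Figure 4 caption). A minimal sketch with scikit-learn's LabelPropagation, assuming object-level feature vectors are available and using the object counts from the tables below; this illustrates the idea only and is not the authors' implementation.

```python
# Graph-based label propagation over image objects. Feature vectors are
# simulated here; in practice they would be per-object spectral statistics.
import numpy as np
from sklearn.semi_supervised import LabelPropagation

rng = np.random.default_rng(0)
X_labelled = rng.normal(size=(650, 4))       # 650 labelled objects, 4 features (assumed)
y_labelled = rng.integers(0, 10, size=650)   # 10 vegetation classes
X_unlabelled = rng.normal(size=(4146, 4))    # 4146 unlabelled objects

X = np.vstack([X_labelled, X_unlabelled])
y = np.concatenate([y_labelled, -np.ones(len(X_unlabelled), dtype=int)])  # -1 = unlabelled

model = LabelPropagation(kernel="rbf", gamma=1.0)
model.fit(X, y)

pseudo_labels = model.transduction_[len(X_labelled):]                   # propagated labels
confidence = model.label_distributions_[len(X_labelled):].max(axis=1)   # per-object certainty
confident_mask = confidence > 0.9   # e.g., keep only confident pseudo-labels
```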
3.3. Expert System
- Generate a histogram population of each feature layer in the knowledge base;
- Divide each histogram into 10 quantiles, representing the frequency of occurrence of each class within each quantile of the feature layer;
- Normalize the frequency values by fitting a normal distribution. A minimal quantile-frequency sketch follows this list.
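As a numpy sketch of the steps above: the snippet tabulates, for one feature layer (e.g., distance to streams), how often each class falls into each of the 10 quantile bins. For brevity it normalizes to simple relative frequencies rather than fitting a normal distribution as the paper does, so treat it as an approximation of the procedure.

```python
# Turn a knowledge-base feature layer into per-class probabilities over 10
# quantiles. Function and variable names are assumptions for illustration.
import numpy as np

def quantile_class_probabilities(feature, labels, n_quantiles=10):
    """feature: 1D array with the feature value of each labelled object.
    labels:  1D array with the class id of each labelled object.
    Returns (classes, probs) with probs of shape (n_classes, n_quantiles)."""
    edges = np.quantile(feature, np.linspace(0.0, 1.0, n_quantiles + 1))
    bins = np.clip(np.digitize(feature, edges[1:-1]), 0, n_quantiles - 1)

    classes = np.unique(labels)
    probs = np.zeros((classes.size, n_quantiles))
    for i, c in enumerate(classes):
        counts = np.bincount(bins[labels == c], minlength=n_quantiles)
        probs[i] = counts / max(counts.sum(), 1)  # relative frequency per class
    return classes, probs
```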
3.4. SSLES Algorithm
Algorithm 1: SSLES
Inputs:
For every oi ∈ OL, i = 1:L
otherwise, let it remain in the unlabelled object set
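Only fragments of Algorithm 1 survive above, so the following is a speculative sketch of one way the two components could be combined per object: multiply the SSL class probabilities by the expert-system prior, renormalize, and promote an object to the labelled set only when the combined probability is confident enough. The combination rule, the threshold, and all names are assumptions, not the authors' exact algorithm.

```python
# Speculative combination of SSL posteriors with expert-system priors.
import numpy as np

def ssles_assign(p_ssl, p_expert, threshold=0.9):
    """p_ssl, p_expert: arrays of shape (n_objects, n_classes).
    Returns (labels, promote): promote marks objects confident enough to be
    moved into the labelled training set; the rest remain unlabelled."""
    combined = p_ssl * p_expert
    combined /= combined.sum(axis=1, keepdims=True) + 1e-12  # renormalize per object
    labels = combined.argmax(axis=1)
    promote = combined.max(axis=1) >= threshold
    return labels, promote
```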
3.5. Classification and Evaluation
- The number of classification trees, i.e., the number of bootstrap iterations (ntree);
- The number of input variables used at each node (mtry).
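A minimal tuning sketch for these two parameters, using scikit-learn's RandomForestClassifier as a stand-in (n_estimators corresponds to ntree and max_features to mtry); the grid values and variable names are assumptions, not the settings reported in the paper.

```python
# Cross-validated grid search over ntree (n_estimators) and mtry (max_features).
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

param_grid = {
    "n_estimators": [100, 250, 500, 1000],  # ntree candidates
    "max_features": [2, 3, 4, "sqrt"],      # mtry candidates
}
search = GridSearchCV(RandomForestClassifier(random_state=0),
                      param_grid, cv=5, scoring="accuracy")

# X_train: per-object features of the (expanded) training set; y_train: class labels
# search.fit(X_train, y_train)
# print(search.best_params_, search.best_score_)
```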
4. Results
4.1. Object-Based Image Analysis
4.2. Semi-Supervised Learning
4.3. Expert System A Priori Probabilities
4.4. SSLES Results
4.5. Classification Results and Evaluation
4.5.1. Parameter Tuning
4.5.2. Classification Evaluation
5. Discussion
6. Conclusions
Author Contributions
Funding
Conflicts of Interest
References
- Egbert, S.L.; Park, S.; Price, K.P.; Lee, R.-Y.; Wu, J.; Duane Nellis, M. Using conservation reserve program maps derived from satellite imagery to characterize landscape structure. Comput. Electron. Agric. 2002, 37, 141–156. [Google Scholar] [CrossRef]
- Xie, Y.; Sha, Z.; Yu, M. Remote sensing imagery in vegetation mapping: A review. J. Plant Ecol. 2008, 1, 9–23. [Google Scholar] [CrossRef]
- Immitzer, M.; Vuolo, F.; Atzberger, C. First experience with Sentinel-2 data for crop and tree species classifications in central Europe. Remote Sens. 2016, 8, 166. [Google Scholar] [CrossRef]
- Mui, A.; He, Y.; Weng, Q. An object-based approach to delineate wetlands across landscapes of varied disturbance with high spatial resolution satellite imagery. ISPRS J. Photogramm. Remote Sens. 2015, 109, 30–46. [Google Scholar] [CrossRef] [Green Version]
- Richards, J.A. Remote Sensing Digital Image Analysis; Springer: Berlin/Heidelberg, Germany, 2013; ISBN 978-3-642-30061-5. [Google Scholar]
- Bhatnagar, S.; Gill, L.; Regan, S.; Naughton, O.; Johnston, P.; Waldren, S.; Ghosh, B. Mapping vegetation communities inside wetlands using Sentinel-2 imagery in Ireland. Int. J. Appl. Earth Obs. Geoinf. 2020, 88, 102083. [Google Scholar] [CrossRef]
- Skidmore, A.K.; Forbes, G.W.; Carpenter, D.J. Technical note non-parametric test of overlap in multispectral classification. Int. J. Remote Sens. 1988, 9, 777–785. [Google Scholar] [CrossRef]
- Mellor, A.; Boukir, S.; Haywood, A.; Jones, S. Exploring issues of training data imbalance and mislabelling on random forest performance for large area land cover classification using the ensemble margin. ISPRS J. Photogramm. Remote Sens. 2015, 105, 155–168. [Google Scholar] [CrossRef]
- Persello, C.; Bruzzone, L. Active and semisupervised learning for the classification of remote sensing images. IEEE Trans. Geosci. Remote Sens. 2014, 52, 6937–6956. [Google Scholar] [CrossRef]
- Board, R.; Pitt, L. Semi-supervised learning. Mach. Learn. 1989, 4, 41–65. [Google Scholar] [CrossRef]
- Chapelle, O.; Schölkopf, B.; Zien, A. Semi-Supervised Learning; The MIT Press: London, UK, 2010; ISBN 9780262255899. [Google Scholar]
- Jackson, Q.; Landgrebe, D.A. An adaptive classifier design for high-dimensional data analysis with a limited training data set. IEEE Trans. Geosci. Remote Sens. 2001, 39, 2664–2679. [Google Scholar] [CrossRef] [Green Version]
- Prabukumar, M.; Shrutika, S. Band clustering using expectation–maximization algorithm and weighted average fusion-based feature extraction for hyperspectral image classification. J. Appl. Remote Sens. 2018, 12, 046015. [Google Scholar] [CrossRef]
- Dalponte, M.; Ene, L.T.; Marconcini, M.; Gobakken, T.; Næsset, E. Semi-supervised SVM for individual tree crown species classification. ISPRS J. Photogramm. Remote Sens. 2015, 110, 77–87. [Google Scholar] [CrossRef]
- Bruzzone, L.; Chi, M.; Marconcini, M. A novel transductive SVM for semisupervised classification of remote-sensing images. IEEE Trans. Geosci. Remote Sens. 2006, 44, 3363–3373. [Google Scholar] [CrossRef] [Green Version]
- Maulik, U.; Chakraborty, D. A self-trained ensemble with semisupervised SVM: An application to pixel classification of remote sensing imagery. Pattern Recognit. 2011, 44, 615–623. [Google Scholar] [CrossRef]
- Dopido, I.; Li, J.; Marpu, P.R.; Plaza, A.; Bioucas Dias, J.M.; Benediktsson, J.A. Semisupervised self-learning for hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2013, 51, 4032–4044. [Google Scholar] [CrossRef] [Green Version]
- Geiß, C.; Aravena Pelizari, P.; Blickensdörfer, L.; Taubenböck, H. Virtual support vector machines with self-learning strategy for classification of multispectral remote sensing imagery. ISPRS J. Photogramm. Remote Sens. 2019, 151, 42–58. [Google Scholar] [CrossRef]
- Lu, X.; Zhang, J.; Li, T.; Zhang, Y. Incorporating diversity into self-learning for synergetic classification of hyperspectral and panchromatic images. Remote Sens. 2016, 8, 804. [Google Scholar] [CrossRef] [Green Version]
- Camps-Valls, G.; Bandos Marsheva, T.V.; Zhou, D. Semi-supervised graph-based hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2007, 45, 3044–3054. [Google Scholar] [CrossRef]
- Gu, Y.; Feng, K. L1-graph semisupervised learning for hyperspectral image classification. In Proceedings of the 2012 IEEE International Geoscience and Remote Sensing Symposium, Munich, Germany, 22–27 July 2012; pp. 1401–1404. [Google Scholar]
- Ma, L.; Crawford, M.M.; Yang, X.; Guo, Y. Local-manifold-learning-based graph construction for semisupervised hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2015, 53, 2832–2844. [Google Scholar] [CrossRef]
- Zhao, Y.; Su, F.; Yan, F. Novel Semi-supervised hyperspectral image classification based on a superpixel graph and discrete potential method. Remote Sens. 2020, 12, 1528. [Google Scholar] [CrossRef]
- Zhu, X.; Ghahramani, Z.; Lafferty, J. Semi-supervised learning using gaussian fields and harmonic functions. In Proceedings of the 20th International Conference on Machine Learning, Washington, DC, USA, 21 August 2003; pp. 912–919. [Google Scholar]
- Kim, K.-H.; Choi, S. Label propagation through minimax paths for scalable semi-supervised learning. Pattern Recognit. Lett. 2014, 45, 17–25. [Google Scholar] [CrossRef]
- Ma, L.; Ma, A.; Ju, C.; Li, X. Graph-based semi-supervised learning for spectral-spatial hyperspectral image classification. Pattern Recognit. Lett. 2016, 83, 133–142. [Google Scholar] [CrossRef]
- Chong, Y.; Ding, Y.; Yan, Q.; Pan, S. Graph-based semi-supervised learning: A review. Neurocomputing 2020, 408, 216–230. [Google Scholar] [CrossRef]
- Skidmore, A.K.; Turner, B.J. Forest mapping accuracies are improved using a supervised nonparametric classifier with SPOT data. Photogramm. Eng. Remote Sens. PERS 1988, 54, 1415–1421. [Google Scholar]
- Hayes-Roth, F.; Waterman, D.; Lenat, D. Building Expert Systems; Addison-Wesley, Reading: Boston, MA, USA, 1983; ISBN 0-201-10686-8. [Google Scholar]
- Booker, J.M.; McNamara, L.A. Solving black box computation problems using expert knowledge theory and methods. Reliab. Eng. Syst. Saf. 2004, 85, 331–340. [Google Scholar] [CrossRef]
- Schmidt, K.S.; Skidmore, A.K. Spectral discrimination of vegetation types in a coastal wetland. Remote Sens. Environ. 2003, 85, 92–108. [Google Scholar] [CrossRef]
- Pranger, D.P.; Tolman, M.E. Toelichting Bij De Vegetatiekartering Schiermonnikoog Op Basis Van False Colour-Luchtfoto’s 1:10.000 [Explanation to the Vegetation Mapping Schiermonnikoog 2010 on the Basis of False Colour Aerial Photographs 1:10.000, in Dutch]; Rijkswaterstaat: Delft, The Netherlands, 2012. [Google Scholar]
- Vrieling, A.; Skidmore, A.K.; Wang, T.; Meroni, M.; Ens, B.J.; Oosterbeek, K.; O’Connor, B.; Darvishzadeh, R.; Heurich, M.; Shepherd, A.; et al. Spatially detailed retrievals of spring phenology from single-season high-resolution image time series. Int. J. Appl. Earth Obs. Geoinf. 2017, 59, 19–30. [Google Scholar] [CrossRef]
- Darvishzadeh, R.; Wang, T.; Skidmore, A.; Vrieling, A.; O’Connor, B.; Gara, T.W.; Ens, B.J.; Paganini, M. Analysis of Sentinel-2 and RapidEye for Retrieval of Leaf Area Index in a Saltmarsh Using a Radiative Transfer Model. Remote Sens. 2019, 11, 671. [Google Scholar] [CrossRef] [Green Version]
- ESA SENTINEL-2 User Handbook. Available online: https://sentinels.copernicus.eu/web/sentinel/user-guides/document-library/-/asset_publisher/xlslt4309D5h/content/sentinel-2-user-handbook (accessed on 24 July 2015).
- Atzberger, C.; Darvishzadeh, R.; Schlerf, M.; le Maire, G. Suitability and adaptation of prosail radiative transfer model for hyperspectral grassland studies. Remote Sens. Lett. 2013, 4, 55–64. [Google Scholar] [CrossRef]
- Tigges, J.; Lakes, T.; Hostert, P. Urban vegetation classification: Benefits of Multitemporal rapideye satellite data. Remote Sens. Environ. 2013, 136, 66–75. [Google Scholar] [CrossRef]
- Darvishzadeh, R.; Atzberger, C.; Skidmore, A.K.; Abkar, A.A. Leaf area index derivation from hyperspectral vegetation indicesand the red edge position. Int. J. Remote Sens. 2009, 30, 6199–6218. [Google Scholar] [CrossRef]
- Gilmore, M.S.; Wilson, E.H.; Barrett, N.; Civco, D.L.; Prisloe, S.; Hurd, J.D.; Chadwick, C. Integrating multi-temporal spectral and structural information to map wetland vegetation in a lower connecticut river tidal marsh. Remote Sens. Environ. 2008, 112, 4048–4060. [Google Scholar] [CrossRef]
- Macintyre, P.; van Niekerk, A.; Mucina, L. Efficacy of multi-season Sentinel-2 imagery for compositional vegetation classification. Int. J. Appl. Earth Obs. Geoinf. 2020, 85, 101980. [Google Scholar] [CrossRef]
- Cochran, W.G. Sampling Techniques; John Wiley and Sons: New York, NY, USA, 1977; p. 428. [Google Scholar]
- Bird, E. Coastal Geomorphology. An Introduction; John Wiley and Sons Ltd.: Chichester, UK, 2008; ISBN 9780874216561. [Google Scholar]
- Rundquist, D.C.; Narumalani, S.; Narayanan, R.M. A Review of Wetlands Remote Sensing and Defining New Considerations. Remote Sens. Rev. 2001, 20, 207–226. [Google Scholar] [CrossRef]
- Schmidt, K.S.; Skidmore, K.; Kloosterman, E.H.; van Oosten, H.; Kumar, L.; Janssen, J.M. Mapping Coastal Vegetation Using an Expert System and Hyperspectral Imagery. Photogramm. Eng. Remote Sens. 2004, 70, 703–715. [Google Scholar] [CrossRef]
- Comaniciu, D.; Meer, P. Mean Shift: A robust approach toward feature space analysis. IEEE Trans. Pattern Anal. Mach. Intell. 2002, 24, 603–619. [Google Scholar] [CrossRef] [Green Version]
- Laurent, V.C.E.; Schaepman, M.E.; Verhoef, W.; Weyermann, J.; Chavez, R.O. Bayesian object-based estimation of LAI and chlorophyll from a simulated Sentinel-2 top-of-atmosphere radiance image. Remote Sens. Environ. 2014, 140, 318–329. [Google Scholar] [CrossRef]
- Clinton, N. An accuracy assessment measure for object based image segmentation an accuracy assessment measure for object based image. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2008, 37, 1189–1194. [Google Scholar]
- Möller, M.; Lymburner, L.; Volk, M. The comparison index: A tool for assessing the accuracy of image segmentation. Int. J. Appl. Earth Obs. Geoinf. 2007, 9, 311–321. [Google Scholar] [CrossRef]
- Haralick, R.M.; Shanmugam, K.; Dinstein, I. Textural features for image classification. IEEE Trans. Syst. Man Cybern. 1973, 3, 610–621. [Google Scholar] [CrossRef] [Green Version]
- Yu, Q.; Gong, P.; Clinton, N.; Biging, G.; Kelly, M.; Schirokauer, D. Object based detailed vegetation classification with airborne high spatial resolution remote sensing imagery. Photogramm. Eng. Remote Sens. 2006, 72, 799–811. [Google Scholar] [CrossRef] [Green Version]
- Mathieu, R.; Aryal, J.; Chong, A.K. Object-based classification of ikonos imagery for mapping large-scale vegetation communities in urban areas. Sensors 2007, 7, 2860–2880. [Google Scholar] [CrossRef] [Green Version]
- Pham, L.T.H.; Brabyn, L.; Ashraf, S. Combining QuickBird, LiDAR, and GIS topography indices to identify a single native tree species in a complex landscape using an object-based classification approach. Int. J. Appl. Earth Obs. Geoinf. 2016, 50, 187–197. [Google Scholar] [CrossRef]
- Fu, B.; Xie, S.; He, H.; Zuo, P.; Sun, J.; Liu, L.; Huang, L.; Fan, D.; Gao, E. Synergy of multi-temporal polarimetric SAR and optical Image satellite for mapping of marsh vegetation using object-based random forest algorithm. Ecol. Indic. 2021, 131, 108173. [Google Scholar] [CrossRef]
- Rohban, M.H.; Rabiee, H.R. Supervised neighborhood graph construction for semi-supervised classification. Pattern Recognit. 2012, 45, 1363–1372. [Google Scholar] [CrossRef]
- Szummer, M.; Jaakkola, T. Partially labelled classification with Markov random walks. Adv. Neural Inf. Processing Syst. 2001, 14, 945–952. [Google Scholar]
- Skidmore, A. An expert system classifies eucalypt forest types using thematic mapper data and a digital terrain model. Photogramm. Eng. Remote Sens. 1989, 55, 1449–1464. [Google Scholar]
- Breiman, L. Random Forests—Random Features; Technical report 567; Statistics department of University of California: Berkeley, CA, USA, 1999. [Google Scholar]
- Breiman, L. Random forests. Mach. Learn. 2001, 45, 5–32. [Google Scholar] [CrossRef] [Green Version]
- Immitzer, M.; Atzberger, C.; Koukal, T. Tree species classification with random forest using very high spatial resolution 8-band WorldView-2 Satellite data. Remote Sens. 2012, 4, 2661–2693. [Google Scholar] [CrossRef] [Green Version]
- Liaw, A.; Wiener, M. Classification and Regression by RandomForest. R News 2002, 2, 18–22. [Google Scholar]
- Gislason, P.O.; Benediktsson, J.A.; Sveinsson, J.R. Random forests for land cover classification. Pattern Recognit. Lett. 2006, 27, 294–300. [Google Scholar] [CrossRef]
- Congalton, R.G.; Green, K. Assessing the Accuracy of Remotely Sensed Data: Principles and Practices, 2nd ed.; CRC Press: Boca Raton, FL, USA, 2008; ISBN 9781420055122-CAT# 55127. [Google Scholar]
- Beukeboom, T.J. The Hydrology of the Frisian Islands; Rodopi bv Editions: Amsterdam, The Netherlands, 1976; ISBN 9062034195. [Google Scholar]
- Wang, L.; Hao, S.; Wang, Q.; Wang, Y. Semi-supervised classification for hyperspectral imagery based on spatial-spectral label propagation. ISPRS J. Photogramm. Remote Sens. 2014, 97, 123–137. [Google Scholar] [CrossRef]
- Song, M.; Civco, D.L.; Hurd, J.D. A competitive pixel-object approach for land cover classification. Int. J. Remote Sens. 2005, 26, 4981–4997. [Google Scholar] [CrossRef]
- Liu, D.; Xia, F. Assessing object-based classification: Advantages and limitations. Remote Sens. Lett. 2010, 1, 187–194. [Google Scholar] [CrossRef]
- Jebara, T.; Wang, J.; Chang, S.-F. Graph construction and b-matching for semi-supervised learning. Int. Conf. Mach. Learn. ICML 2009, 1–8. [Google Scholar] [CrossRef]
- Anderson, J.; Hardy, E.; Roach, J.; Witmer, R. A Land Use and Land Cover Classification Systems for Use with Remote Sensor Data; Geological Survey Professional Paper 964; US Government Printing Office: Washington, DC, USA, 1976.
Class Name | Number of Training Samples | Number of Test Samples |
---|---|---|
High matted grass | 160 | 107 |
Low matted grass | 142 | 95 |
Agriculture | 71 | 47 |
Forest | 58 | 39 |
Green beach | 58 | 39 |
Tussock grass | 45 | 30 |
High shrub | 45 | 30 |
Herbs | 35 | 23 |
Low Salix shrub | 25 | 17 |
Low Hippophae shrub | 11 | 7 |
Sum | 650 | 434 |
Training Objects | Test Objects | Unlabelled Objects | Total Number of Objects |
---|---|---|---|
650 | 434 | 4146 | 5230 |
Class Name | HMG | LMG | Ag | Fr | GB | TG | HS | Hr | LSS | LHS |
---|---|---|---|---|---|---|---|---|---|---|
Probability | 0.28 | 0.24 | 0.1 | 0.08 | 0.08 | 0.06 | 0.06 | 0.05 | 0.03 | 0.01 |
Vegetation Classes | Distance to Streams (Quantile 1) | Distance to Streams (Quantile 2) | Distance to Streams (Quantile 3) | Distance to Residential Area (Quantile 1) | Distance to Residential Area (Quantile 2) | Distance to Residential Area (Quantile 3) |
---|---|---|---|---|---|---|
Ag | 0.11 | 0.31 | 0.58 | 0.90 | 0.10 | 0.00 |
Fr | 0.06 | 0.22 | 0.72 | 0.41 | 0.34 | 0.25 |
HMG | 0.40 | 0.45 | 0.15 | 0.00 | 0.15 | 0.85 |
LMG | 0.22 | 0.31 | 0.47 | 0.23 | 0.46 | 0.31 |
TG | 0.09 | 0.21 | 0.70 | 0.27 | 0.55 | 0.18 |
HS | 0.19 | 0.25 | 0.56 | 0.13 | 0.45 | 0.42 |
LHS | 0.00 | 0.29 | 0.71 | 0.00 | 0.04 | 0.96 |
LSS | 0.29 | 0.10 | 0.61 | 0.24 | 0.76 | 0.00 |
GB | 0.00 | 0.51 | 0.49 | 0.00 | 0.01 | 0.99 |
Hr | 0.61 | 0.38 | 0.01 | 0.01 | 0.00 | 0.99 |
Class Name | Number of Original Training Samples | Number of Newly Labelled Samples | Number of New Training Samples |
---|---|---|---|
HMG | 160 | 494 | 654 |
LMG | 142 | 290 | 432 |
Ag | 71 | 167 | 238 |
Fr | 58 | 121 | 179 |
GB | 58 | 106 | 164 |
TG | 45 | 99 | 144 |
HS | 45 | 89 | 134 |
Hr | 35 | 67 | 102 |
LSS | 25 | 48 | 73 |
LHS | 11 | 32 | 43 |
Sum | 650 | 1513 | 2163 |
Dataset | SSLES OA (%) | SSLES Kappa | RF OA (%) | RF Kappa | SSL Only OA (%) | SSL Only Kappa | ES Only OA (%) | ES Only Kappa |
---|---|---|---|---|---|---|---|---|
Group 1 | 81.1 | 0.67 | 64.6 | 0.52 | 70.9 | 0.60 | 68.1 | 0.55 |
Group 2 | 73.5 | 0.57 | 58.9 | 0.44 | 62.3 | 0.48 | 60.9 | 0.46 |
Group 3 | 74.6 | 0.59 | 60.1 | 0.47 | 63.8 | 0.49 | 63.1 | 0.48 |
Group 4 | 83.6 | 0.70 | 64.9 | 0.56 | 71.8 | 0.61 | 69.5 | 0.57 |
Group 5 | 67.8 | 0.49 | 47.2 | 0.33 | 55.3 | 0.40 | 53.8 | 0.38 |
Group 6 | 79.9 | 0.67 | 64.3 | 0.55 | 58.2 | 0.59 | 68.7 | 0.57 |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Farsad Layegh, N.; Darvishzadeh, R.; Skidmore, A.K.; Persello, C.; Krüger, N. Integrating Semi-Supervised Learning with an Expert System for Vegetation Cover Classification Using Sentinel-2 and RapidEye Data. Remote Sens. 2022, 14, 3605. https://doi.org/10.3390/rs14153605