Feature Selection on Sentinel-2 Multispectral Imagery for Mapping a Landscape Infested by Parthenium Weed
[Figure 1] Location of the study area.
[Figure 2] Mean f-score learning curves of trace ratio (a), ReliefF (b), Gini-index (c), F-score (d), LS_121 (e), LL_121 (f), JMI (g), MIM (h), svm-b (i) and dt-f (j) for different feature subsets (features comprise 75 VIs and 10 Sentinel-2 bands).
[Figure 3] Spatial distribution of Parthenium weed and surrounding land cover with optimal features from ReliefF on the first (a), second (b) and third (c) training sets, and from the full dataset on the first (d), second (e) and third (f) training sets.
Abstract
1. Introduction
2. Material and Method
2.1. Study Area
2.2. Reference Data
2.3. Acquisition of Multi-Temporal Sentinel-2 Images and Pre-Processing
2.4. Data Analysis
2.4.1. Feature Selection Methods
(A) Similarity-Based Feature Selection Methods
(B) Statistical-Based Feature Selection Methods
(C) Sparse Learning Based Methods
(D) Information Theoretical Based Methods
(E) Wrapper
2.4.2. Vegetation Indices Computation
2.4.3. Classification Algorithm: Random Forest (RF)
2.4.4. Model Assessment
2.4.5. Software and Feature Selection
3. Results
3.1. Comparison Among Investigated Feature Selection Algorithms
3.2. Comparison of Performance Between Peak Accuracy and Accuracy Derived From Full Feature Subsets
3.2.1. First Training Set
3.2.2. Second Training Set
3.2.3. Third Training Set
4. Discussion
4.1. Comparison of Feature Selection Methods
4.2. Impact of Training Sizes on Feature Selection Performance
4.3. Implications of Findings in Parthenium Weed Management
5. Conclusions
- (1) Wrapper methods such as svm-b yield higher accuracies when classifying Parthenium weed with the random forest classifier;
- (2) ReliefF was the best-performing feature selection method in terms of f-score and the size of the optimal feature subset selected;
- (3) To achieve better performance with feature selection methods, a 3:1 ratio between training and test set sizes proved better than ratios of 1:1 and 1:3;
- (4) Gini-index, F-score and svm-b were slightly affected by the curse of dimensionality;
- (5) None of the feature selection method groups performed best across all the datasets.
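Finding (2) singles out ReliefF. As an illustration of the underlying idea only (not the authors' implementation), the sketch below scores each feature by contrasting a sample's nearest same-class neighbours ("hits") against its nearest other-class neighbours ("misses"); the function name, Manhattan distance and prior weighting are assumptions of this sketch.

```python
import numpy as np

def relieff_scores(X, y, n_neighbors=10):
    """Minimal ReliefF sketch: a feature gets a high score when nearby
    misses differ on it more than nearby hits do."""
    X = np.asarray(X, dtype=float)
    y = np.asarray(y)
    n, d = X.shape
    # Rescale each feature to [0, 1] so per-feature differences are comparable.
    span = X.max(axis=0) - X.min(axis=0)
    span[span == 0] = 1.0
    Xs = (X - X.min(axis=0)) / span
    classes, counts = np.unique(y, return_counts=True)
    priors = counts / n
    scores = np.zeros(d)
    for i in range(n):
        dist = np.abs(Xs - Xs[i]).sum(axis=1)  # Manhattan distance to all samples
        dist[i] = np.inf                       # exclude the sample itself
        p_own = priors[classes == y[i]][0]
        for c, p in zip(classes, priors):
            members = np.flatnonzero(y == c)
            nearest = members[np.argsort(dist[members])[:n_neighbors]]
            diff = np.abs(Xs[nearest] - Xs[i]).mean(axis=0)
            if c == y[i]:
                scores -= diff                      # hits should be close
            else:
                scores += (p / (1 - p_own)) * diff  # misses, prior-weighted
    return scores / n
```

Features would then be ranked by descending score and the learning curve traced by refitting the classifier on growing top-k subsets, as in the experiments above.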
Author Contributions
Funding
Acknowledgments
Conflicts of Interest
References
Number of reference samples per land-cover class for each training/test split.

Land-Cover Classes | Training Set 3 (70%) | Test Set 3 (30%) | Training Set 2 (50%) | Test Set 2 (50%) | Training Set 1 (30%) | Test Set 1 (70%) | Total
---|---|---|---|---|---|---|---
Forest | 70 | 30 | 50 | 50 | 30 | 70 | 100 |
Water Body | 49 | 21 | 35 | 35 | 21 | 49 | 70 |
Parthenium Weed | 63 | 27 | 45 | 45 | 27 | 63 | 90 |
Grassland | 64 | 28 | 46 | 46 | 28 | 64 | 92 |
Settlement | 66 | 29 | 48 | 48 | 29 | 66 | 95 |
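The per-class counts above follow a stratified split: each class is divided by the same train/test fraction. A minimal sketch is below; the function name and the choice to round test counts up are assumptions of this sketch (that rounding happens to reproduce the table, e.g. Grassland 92 → 64/28 and Settlement 95 → 66/29 at 70/30).

```python
import numpy as np

def stratified_split(y, test_frac, seed=0):
    """Return (train_idx, test_idx) with test_frac applied per class."""
    rng = np.random.default_rng(seed)
    train, test = [], []
    for c in np.unique(y):
        idx = rng.permutation(np.flatnonzero(y == c))
        n_test = int(np.ceil(test_frac * idx.size))  # round test count up
        test.extend(idx[:n_test].tolist())
        train.extend(idx[n_test:].tolist())
    return np.array(train), np.array(test)
```

Running it three times with test fractions of 0.30, 0.50 and 0.70 yields the three training/test sets evaluated in the paper.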
Peak accuracy per feature selection method (first training set).

Feature Selection Method | PA (%) | UA (%) | F-score (%) | Number of Features | Comput. Time (s)
---|---|---|---|---|---
Trace Ratio | 74 | 69.8 | 71.6 | 56 | 0.58
ReliefF | 74.5 | 70.1 | 72 | 6 | 1.05
Gini-Index | 74.2 | 69.3 | 71.3 | 13 | 18.54
F-Score | 74.9 | 69.5 | 72 | 63 | 0.25
LS_121 | 73.4 | 71.2 | 71.8 | 41 | 20.01
LL_121 | 75.2 | 70.2 | 72.4 | 29 | 0.37
JMI | 74.8 | 69.6 | 71.9 | 38 | 38.58
MIM | 74.4 | 70.1 | 72.0 | 39 | 39.45
SVM-b | 74.7 | 71.1 | 72.5 | 26 | 0.50
DT-F | 73.2 | 68.5 | 70.3 | 38 | 573.44
None | 72.7 | 69.1 | 70.4 | 85 | -
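The F-score columns above are, to within rounding, the harmonic mean of producer's accuracy (PA, recall) and user's accuracy (UA, precision). A one-line check (small deviations of a few tenths can occur if the paper averages per-class f-scores rather than the mean PA/UA):

```python
def f_score(pa, ua):
    # Harmonic mean of producer's accuracy (recall) and
    # user's accuracy (precision), both in percent.
    return 2 * pa * ua / (pa + ua)

# ReliefF row: f_score(74.5, 70.1) gives roughly 72.2, vs 72 in the table.
```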
Class-wise producer's (PA) and user's (UA) accuracies at peak performance (first training set).

Feature Selection Method | Forest PA (%) | Forest UA (%) | Water Body PA (%) | Water Body UA (%) | Grassland PA (%) | Grassland UA (%) | Settlements PA (%) | Settlements UA (%) | Kappa Coef.
---|---|---|---|---|---|---|---|---|---
Trace Ratio | 89.4 | 88.8 | 99 | 97 | 58.2 | 64.5 | 85.8 | 81.3 | 0.78
ReliefF | 91.1 | 90 | 100 | 97.6 | 58.5 | 67.6 | 85.6 | 79 | 0.75
Gini-Index | 90.8 | 90.4 | 99.2 | 95.4 | 57.9 | 64.5 | 85.4 | 82.4 | 0.74
F-Score | 90.3 | 88.5 | 100 | 96.8 | 58.6 | 66.6 | 85.9 | 82.5 | 0.76
LS_121 | 91.8 | 88.8 | 100 | 96 | 55.8 | 62.8 | 85 | 81.7 | 0.75
LL_121 | 90.7 | 90.8 | 100 | 97.8 | 58.3 | 64.2 | 83.8 | 80.3 | 0.76
JMI | 91.8 | 90.7 | 99.4 | 96.8 | 59.5 | 67.2 | 85.6 | 81.9 | 0.77
MIM | 93.3 | 90.8 | 99.2 | 95 | 59.7 | 66.6 | 84.6 | 83.1 | 0.78
SVM-b | 90.4 | 90.3 | 97.4 | 100 | 60 | 64.4 | 84.4 | 82.3 | 0.75
DT-F | 91.8 | 89.1 | 100 | 96.8 | 58.7 | 64.8 | 83.3 | 80.8 | 0.73
None | 91.4 | 89.7 | 100 | 97.4 | 58.8 | 66.2 | 87.1 | 81.8 | 0.78
Peak accuracy per feature selection method (second training set).

Feature Selection Method | PA (%) | UA (%) | F-score (%) | Number of Features | Comput. Time (s)
---|---|---|---|---|---
Trace Ratio | 77 | 70.3 | 73 | 16 | 0.60
ReliefF | 77.1 | 69.6 | 73.1 | 4 | 2.14
Gini-Index | 75.6 | 71.1 | 73.1 | 10 | 31.29
F-Score | 75.7 | 68.6 | 72 | 57 | 0.26
LS_121 | 76 | 68.7 | 71.9 | 41 | 20.12
LL_121 | 76.6 | 70 | 73.1 | 9 | 0.43
JMI | 75.5 | 71.5 | 73.3 | 40 | 59.46
MIM | 75.3 | 72.7 | 74.0 | 25 | 61.07
SVM-b | 77.1 | 72 | 74.1 | 41 | 0.54
DT-F | 78.3 | 69.6 | 73.3 | 39 | 934.08
None | 75.9 | 69.7 | 72.6 | 85 | -
Class-wise producer's (PA) and user's (UA) accuracies at peak performance (second training set).

Feature Selection Method | Forest PA (%) | Forest UA (%) | Water Body PA (%) | Water Body UA (%) | Grassland PA (%) | Grassland UA (%) | Settlements PA (%) | Settlements UA (%) | Kappa Coef.
---|---|---|---|---|---|---|---|---|---
Trace Ratio | 91.6 | 90.2 | 99.2 | 95.2 | 61.3 | 72.8 | 86.7 | 81 | 0.76
ReliefF | 89 | 91.2 | 98.5 | 97.6 | 61.6 | 68.9 | 85.6 | 81.4 | 0.78
Gini-Index | 91.5 | 91.6 | 98.9 | 95.5 | 60.3 | 66.8 | 86.8 | 83.1 | 0.79
F-Score | 90.4 | 92.2 | 100 | 96.4 | 63.5 | 71.8 | 87.7 | 84.1 | 0.79
LS_121 | 90.7 | 91.4 | 98.8 | 95 | 59.9 | 69 | 86.5 | 81.7 | 0.78
LL_121 | 90.2 | 91.4 | 98.8 | 96.7 | 60.6 | 68.8 | 85.1 | 80.7 | 0.80
JMI | 91.9 | 91 | 98.8 | 96.1 | 62.3 | 70.3 | 88 | 82.1 | 0.77
MIM | 90 | 91.8 | 98.9 | 95.5 | 64.8 | 69.1 | 86.6 | 82.9 | 0.82
SVM-b | 91.4 | 92.8 | 100 | 97 | 64 | 70.7 | 86.9 | 82.7 | 0.83
DT-F | 91.0 | 86.0 | 100 | 96 | 60 | 68 | 83.3 | 80.8 | 0.78
None | 87.2 | 82.5 | 99.7 | 97.3 | 62.2 | 68.6 | 87.2 | 82.5 | 0.79
Peak accuracy per feature selection method (third training set).

Feature Selection Method | PA (%) | UA (%) | F-score (%) | Number of Features | Comput. Time (s)
---|---|---|---|---|---
Trace Ratio | 77.2 | 74.5 | 75.7 | 11 | 0.70
ReliefF | 80 | 75 | 77.2 | 7 | 3.56
Gini-Index | 78.5 | 74.2 | 75.9 | 12 | 43.97
F-Score | 78.5 | 73.6 | 75.6 | 82 | 0.30
LS_121 | 78.6 | 76.1 | 76.8 | 10 | 20.64
LL_121 | 79.2 | 78.2 | 78.3 | 10 | 2.20
MIM | 76.9 | 72.6 | 74.1 | 50 | 87.71
SVM-b | 82.3 | 75 | 78.1 | 33 | 0.54
DT-F | 76.6 | 72.4 | 74.1 | 26 | 1346.43
None | 75.2 | 71.4 | 72.6 | 85 | -
Class-wise producer's (PA) and user's (UA) accuracies at peak performance (third training set).

Feature Selection Method | Forest PA (%) | Forest UA (%) | Water Body PA (%) | Water Body UA (%) | Grassland PA (%) | Grassland UA (%) | Settlements PA (%) | Settlements UA (%) | Kappa Coef.
---|---|---|---|---|---|---|---|---|---
Trace Ratio | 93.5 | 81 | 98.5 | 81 | 61.8 | 81 | 88.3 | 81 | 0.75
ReliefF | 92 | 81.4 | 100 | 81.4 | 66.2 | 81.4 | 89.9 | 81.4 | 0.77
Gini-Index | 92.1 | 83.1 | 99.5 | 83.1 | 64 | 83.1 | 86.3 | 83.1 | 0.78
F-Score | 92.2 | 95.2 | 99.5 | 98 | 64.4 | 71.2 | 91.4 | 84.4 | 0.79
LS_121 | 92.5 | 81.7 | 99.5 | 81.7 | 62.9 | 81.7 | 89 | 81.7 | 0.82
LL_121 | 92.3 | 68.8 | 98.5 | 68.8 | 63.4 | 68.8 | 90 | 68.8 | 0.77
JMI | 90.6 | 82.1 | 99.5 | 82.1 | 68.4 | 82.1 | 90.9 | 82.1 | 0.80
MIM | 76.9 | 72.6 | 99 | 82.9 | 63.7 | 82.9 | 84.8 | 82.9 | 0.75
SVM-b | 91.6 | 82.7 | 99.5 | 82.7 | 67.2 | 82.7 | 89.7 | 82.7 | 0.83
DT-F | 93.1 | 73 | 100 | 73 | 63.7 | 73 | 87.9 | 73 | 0.82
None | 90.8 | 82.5 | 100 | 82.5 | 64.3 | 82.5 | 89.1 | 82.5 | 0.74
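The kappa coefficients reported in the class-wise tables follow the standard definition: agreement observed on the confusion matrix, corrected for agreement expected by chance. A minimal sketch of that computation (a generic formula, not the paper's code; `cohens_kappa` is a name chosen here):

```python
import numpy as np

def cohens_kappa(cm):
    """Cohen's kappa from a confusion matrix (rows: reference, cols: predicted)."""
    cm = np.asarray(cm, dtype=float)
    n = cm.sum()
    po = np.trace(cm) / n                                # observed agreement
    pe = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n**2  # chance agreement
    return (po - pe) / (1 - pe)
```

For example, a two-class matrix [[40, 10], [5, 45]] has 85% observed agreement, 50% chance agreement, and hence kappa = 0.7.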
© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
Share and Cite
Kiala, Z.; Mutanga, O.; Odindi, J.; Peerbhay, K. Feature Selection on Sentinel-2 Multispectral Imagery for Mapping a Landscape Infested by Parthenium Weed. Remote Sens. 2019, 11, 1892. https://doi.org/10.3390/rs11161892