Effects of Multi-Growth Periods UAV Images on Classifying Karst Wetland Vegetation Communities Using Object-Based Optimization Stacking Algorithm
Figure 1. Geographical location of the study area. (a,b) Photos of field measurements; (c–e) UAV-RGB true-color images in March, July, and October, respectively.
Figure 2. Workflow of this study.
Figure 3. The framework of the stacking ensemble learning model.
Figure 4. Qualitative comparison of the classification results of different classification scenarios based on the four algorithms (HM: Human-made matter; CP: Cephalanthus tetrandrus-Paliurus ramosissimus; CB: Cropland-beach; LT: Lotus; RI: Reed-Imperate; HK: Huakolasa; KR: Karst River; BO: Bamboo; LC: Linden-Camphora; WH: Water Hyacinth; AG: Algae; BG: Bermudagrass; MC: Miscanthus).
Figure 5. (a) Overall accuracy distribution of the four algorithms; (b) statistical tests of the classification results of the four classifiers.
Figure 6. Statistical analysis of the F1 scores of different vegetation communities across all classification scenarios and classifiers (abbreviations as in Figure 4).
Figure 7. Comparison of the mF1 of different vegetation communities based on the four classification models. Bold numbers indicate the highest mF1 for each vegetation community.
Figure 8. Comparison of the classification results of different growth periods based on the Stacking algorithm. (a–d) Four different areas (abbreviations as in Figure 4).
Figure 9. Comparison of the classification results for each vegetation community based on the Stacking algorithm. (a) Global classification results; (b) F1 scores of the different vegetation communities in the different scenarios.
Figure 10. F1 score growth rates and confusion matrices for different vegetation communities. (a) Box plots of the F1 score growth rates; (b,c) confusion matrices of the different vegetation communities in July and in March + July + October, respectively.
Figure 11. Growth rate and distribution of the F1 scores of different vegetation communities in different growth periods. (a) Comparison between July and March + July + October; (b) comparison between October and March + July + October.
Figure 12. Sensitivity analysis of the contribution of feature variables to distinguishing vegetation communities in the summer scenario and the combined spring, summer, and autumn growth-period scenario. (a–c) Feature importance of UAV images in the summer growth period; (d–f) feature importance of UAV images based on the combined three growth periods.
Figure 13. Comparison of feature importance in different growth periods based on the SHAP method. (a–c) Feature importance in March; (d–f) feature importance for the combined three growth periods. The horizontal axis is the SHAP value, i.e., the weight of each feature variable's effect on the classification result; each row represents a feature, and the color encodes the feature value. Red indicates a positive contribution to the classification and blue a negative contribution; the wider the colored region, the greater the effect of that feature on the classification.
Abstract
1. Introduction
- We quantitatively evaluated the differences in classification accuracy of single-growth-period UAV RGB images for vegetation community mapping and explored the appropriate growth-period images and their sensitive feature bands for distinguishing each vegetation community in a karst wetland.
- We built a stacking ensemble learning classification model from three machine learning algorithms (RF, XGBoost, and CatBoost) and examined its stability and generalization ability for vegetation community classification using different growth-period image combinations (a minimal sketch of such a stacking model follows this list).
- We explored the classification performance of different growth-period UAV image combinations and quantitatively evaluated the effect of the different growth-period scenarios on vegetation community mapping.
- We interpreted the local and global contributions of feature variables to the classification of karst wetland vegetation communities by applying the SHapley Additive exPlanations (SHAP) method to the outputs of the black-box stacking model, and extracted the sensitive image features for distinguishing each vegetation community in different growth periods.
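As referenced in the list above, the following is a minimal sketch of an object-based stacking classifier built from RF, XGBoost, and CatBoost base learners. The scikit-learn wrappers, the logistic-regression meta-learner, the hyperparameter values, and the variable names (X_train, y_train) are illustrative assumptions rather than the study's exact configuration.

```python
# Minimal sketch of a stacking ensemble with RF, XGBoost, and CatBoost base
# learners. The meta-learner and hyperparameter values are assumptions for
# illustration, not the study's reported configuration.
from catboost import CatBoostClassifier
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from xgboost import XGBClassifier


def build_stacking_model() -> StackingClassifier:
    base_learners = [
        ("rf", RandomForestClassifier(n_estimators=500, max_features=6)),
        ("xgb", XGBClassifier(max_depth=6, learning_rate=0.2, eval_metric="mlogloss")),
        ("cat", CatBoostClassifier(depth=6, learning_rate=0.1, verbose=0)),
    ]
    # Out-of-fold class probabilities of the base learners serve as
    # meta-features for the final estimator.
    return StackingClassifier(
        estimators=base_learners,
        final_estimator=LogisticRegression(max_iter=1000),
        cv=5,
        stack_method="predict_proba",
    )


# Usage (X_train: object-level feature matrix, y_train: vegetation community labels):
# model = build_stacking_model().fit(X_train, y_train)
# predicted = model.predict(X_test)
```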
2. Materials and Methods
2.1. Study Area
2.2. Multi-Temporal UAV Image Acquisition and Field Measurements
2.3. Methods
2.3.1. Creation of Multi-Temporal Dataset and Data Dimensionality Reduction
2.3.2. Stacking Ensemble Learning Classification Model
2.3.3. Accuracy Metrics
2.3.4. Model Interpretation and Feature Importance Analysis
3. Results
3.1. Classification Results of Base Model vs. Stacking Ensemble Learning Model
3.1.1. Comparison of Classification Accuracy of Different Algorithms
3.1.2. Classification Results of Different Algorithms for Different Vegetation Communities
3.2. Classification Results of Single Growth Period vs. Different Growth Periods
3.2.1. Classification Results of Vegetation Communities during Different Growth Periods
3.2.2. Classification Results of Vegetation Communities Using Combined Growth Periods
3.3. Explanation of Variable Importance for Vegetation Community Mapping
3.3.1. Sensitive Bands for Vegetation Communities Mapping in Different Growth Periods
3.3.2. Global Contribution of Different Feature Bands to Vegetation Communities’ Classification
4. Discussion
5. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
Appendix A. Optimal Parameters for the Three Base Models
Scenario | RF | XGBoost | CatBoost |
---|---|---|---|
March | mtry = 6; ntree = 500 | max_depth = 4; eta = 0.2 | learning_rate = 0.05; max_depth = 5
July | mtry = 6; ntree = 1000 | max_depth = 6; eta = 0.2 | learning_rate = 0.1; max_depth = 6
October | mtry = 8; ntree = 500 | max_depth = 6; eta = 0.1 | learning_rate = 0.1; max_depth = 6
March + July | mtry = 5; ntree = 2000 | max_depth = 8; eta = 0.3 | learning_rate = 0.2; max_depth = 8
July + October | mtry = 6; ntree = 1000 | max_depth = 6; eta = 0.25 | learning_rate = 0.25; max_depth = 5
March + July + October | mtry = 8; ntree = 1000 | max_depth = 8; eta = 0.1 | learning_rate = 0.25; max_depth = 6
Acquisition Times | Growth Periods | Number of Images | Resolution (m) |
---|---|---|---|
15 March 2022 | Spring | 1756 | 0.02 |
2 July 2022 | Summer | 1693 | 0.03 |
23 October 2022 | Autumn | 1535 | 0.02 |
Categories | March | July | October | March + July | July + October | March + July + October |
---|---|---|---|---|---|---|
HK | 43 | 50 | 43 | 46 | 49 | 49 |
BG | 76 | 85 | 92 | 83 | 77 | 77 |
RI | 35 | 33 | 30 | 32 | 30 | 33 |
MC | 42 | 36 | 36 | 41 | 37 | 40 |
WH | 50 | 52 | 50 | 52 | 44 | 44 |
LT | — | 40 | 36 | 47 | 39 | 40 |
AG | 65 | 60 | 49 | 56 | 55 | 55 |
KR | 54 | 52 | 72 | 56 | 58 | 58 |
BO | 70 | 61 | 70 | 66 | 61 | 61 |
CP | 57 | 58 | 52 | 56 | 56 | 54 |
LC | 82 | 75 | 80 | 70 | 80 | 81 |
CB | 87 | 67 | 67 | 63 | 66 | 66 |
HM | 50 | 50 | 50 | 48 | 48 | 48 |
Total | 711 | 719 | 727 | 716 | 700 | 706 |
Vegetation Indices | Formula |
---|---|
Excess green index (ExG) | |
Excess green minus excess red index (ExGR) | |
Vegetation index (VEG) | |
Color index of vegetation (CIVE) | |
Combination index (COM) | |
Combination index 2 (COM2) |
Normalized green-red difference index (NGRDI) | |
Normalized green-blue difference index (NGBDI) | |
Visible-band difference vegetation index (VDVI) |
Red-green ratio index (RGRI) | |
Blue-green ratio index (BGRI) | |
Woebbecke index (WI) | |
Red-green-blue ratio index (RGBRI) | |
Red-green-blue vegetation index (RGBVI) |
Kawashima index (IKAW) | |
Visible atmospherically resistant index (VARI) |
Principal component analysis index (IPCA) |
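As an illustration of how such visible-band indices are derived from the RGB bands, the sketch below computes three of the listed indices (ExG, NGRDI, VDVI) using their commonly cited definitions; these forms are given for illustration and may differ in detail from the exact formulations adopted in the study.

```python
import numpy as np


def visible_band_indices(red, green, blue):
    """Compute a few visible-band vegetation indices from RGB band arrays.

    The definitions below are the commonly cited forms of ExG, NGRDI, and VDVI.
    """
    red, green, blue = (np.asarray(a, dtype=float) for a in (red, green, blue))
    eps = 1e-9  # guards against division by zero on dark pixels/objects
    exg = 2.0 * green - red - blue                              # Excess green index
    ngrdi = (green - red) / (green + red + eps)                 # Normalized green-red difference
    vdvi = (2.0 * green - red - blue) / (2.0 * green + red + blue + eps)  # Visible-band difference
    return {"ExG": exg, "NGRDI": ngrdi, "VDVI": vdvi}


# Example on a dummy 2 x 2 RGB patch:
r = np.array([[0.20, 0.30], [0.25, 0.40]])
g = np.array([[0.50, 0.60], [0.55, 0.50]])
b = np.array([[0.10, 0.20], [0.15, 0.30]])
print(visible_band_indices(r, g, b)["NGRDI"])
```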
Scenarios | Phases | Features | Number |
---|---|---|---|
1 | March | TF_3 + PF_3 + VIs_3 + SF_3 + GF_3 | 80 |
2 | July | TF_7 + PF_7 + VIs_7 + SF_7 + GF_7 | 80 |
3 | October | TF_10 + PF_10 + VIs_10 + SF_10 + GF_10 | 80 |
4 | March + July | TF_3 + PF_3 + VIs_3 + SF_3 + GF_3 + TF_7 + PF_7 + VIs_7 + SF_7 + GF_7 | 139 |
5 | July + October | TF_7 + PF_7 + VIs_7 + SF_7 + GF_7 + TF_10 + PF_10 + VIs_10 + SF_10 + GF_10 | 139 |
6 | March + July + October | TF_3 + PF_3 + VIs_3 + SF_3 + GF_3 + TF_7 + PF_7 + VIs_7 + SF_7 + GF_7 + TF_10 + PF_10 + VIs_10 + SF_10 + GF_10 | 198 |
Phases | Correlation Threshold | Number of Variables (Original → After Correlation Filtering → After RFE) | Training Accuracy of RFE Model | RMSE |
---|---|---|---|---|
March | 0.95 | 80 → 43 → 33 | 0.865 | 1.670 |
July | 0.95 | 80 → 36 → 30 | 0.855 | 1.672 |
October | 0.95 | 80 → 33 → 23 | 0.875 | 1.694 |
March + July | 0.80 | 139 → 55 → 52 | 0.938 | 1.255 |
July + October | 0.80 | 139 → 52 → 31 | 0.935 | 1.402 |
March + July + October | 0.80 | 198 → 64 → 56 | 0.926 | 1.263 |
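A hedged sketch of the kind of two-step dimensionality reduction summarized above: a Pearson-correlation filter followed by recursive feature elimination. The cross-validated RFECV wrapper, the random-forest estimator, and the placeholder names (X_march, y_march) are assumptions; the table reports only the thresholds and the resulting variable counts.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFECV


def drop_highly_correlated(features: pd.DataFrame, threshold: float) -> pd.DataFrame:
    """Drop one feature from every pair whose |Pearson r| exceeds the threshold."""
    corr = features.corr().abs()
    upper = corr.where(np.triu(np.ones(corr.shape, dtype=bool), k=1))
    to_drop = [col for col in upper.columns if (upper[col] > threshold).any()]
    return features.drop(columns=to_drop)


def rfe_select(features: pd.DataFrame, labels, cv: int = 5) -> pd.DataFrame:
    """Keep the feature subset retained by cross-validated RFE on a random forest."""
    selector = RFECV(RandomForestClassifier(n_estimators=500), step=1, cv=cv)
    selector.fit(features, labels)
    return features.loc[:, selector.support_]


# Usage, e.g., for a single-period scenario (threshold 0.95 as in the table above):
# reduced = rfe_select(drop_highly_correlated(X_march, threshold=0.95), y_march)
```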
Models | Parameters | Tuning Range | Tuning Step Size |
---|---|---|---|
RF | mtry | 4–8 | 1
RF | ntree | 500–2000 | 500
XGBoost | max_depth | 4–8 | 1
XGBoost | eta | 0.05–0.3 | 0.05
CatBoost | learning_rate | 0.05–0.3 | 0.05
CatBoost | max_depth | 4–8 | 1
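For illustration, the ranges listed above could be searched with a simple grid search as sketched below. The scikit-learn wrappers, the 5-fold cross-validation, and the accuracy scoring are assumptions (RF's mtry/ntree correspond to scikit-learn's max_features/n_estimators); CatBoost's learning_rate and max_depth could be tuned in the same way.

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV
from xgboost import XGBClassifier

# Grids mirroring the ranges and step sizes listed above.
rf_grid = {"max_features": [4, 5, 6, 7, 8],                        # ~ mtry
           "n_estimators": [500, 1000, 1500, 2000]}                # ~ ntree
xgb_grid = {"max_depth": [4, 5, 6, 7, 8],
            "learning_rate": [0.05, 0.10, 0.15, 0.20, 0.25, 0.30]}  # ~ eta


def tune(estimator, param_grid, X, y):
    """Exhaustive grid search with 5-fold cross-validation (scoring is assumed)."""
    search = GridSearchCV(estimator, param_grid, cv=5, scoring="accuracy", n_jobs=-1)
    search.fit(X, y)
    return search.best_params_, search.best_score_


# Usage:
# best_rf_params, _ = tune(RandomForestClassifier(), rf_grid, X, y)
# best_xgb_params, _ = tune(XGBClassifier(eval_metric="mlogloss"), xgb_grid, X, y)
```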
Classes | (M + J + O) – M | (M + J + O) – J | (M + J + O) – O | (M + J + O) – (M + J) | (M + J + O) – (J + O) |
---|---|---|---|---|---|
BG | 30.43% | 6.25% | 13.89% | 7.94% | 1.59% |
LT | 0 | −11.96% | −3.57% | −0.86% | −15.41% |
KR | 2.13% | 6.25% | 18.37% | 0.00% | 9.09% |
BO | 15.46% | 14.41% | 5.12% | −2.96% | −1.49% |
LC | 9.55% | 14.18% | 16.36% | 3.35% | 4.32% |
CP | 7.12% | 10.45% | 17.52% | 1.08% | 1.49% |
Phases | Features | Top 3 | Top 5 | Top 10 | Top 15 |
---|---|---|---|---|---|
July | Vegetation indices | 55.6% | 46.6% | 46.7% | 40%
July | Texture features | 44.4% | 40% | 30% | 26.7%
July | Spectral features | 0.0% | 6.7% | 10% | 13.3%
July | Geometry features | 0.0% | 6.7% | 10% | 13.3%
July | Position features | 0.0% | 0.0% | 3.3% | 6.7%
March + July + October | Vegetation indices | 55.6% | 46.6% | 43.3% | 42.2%
March + July + October | Texture features | 44.4% | 46.6% | 46.7% | 44.4%
March + July + October | Spectral features | 0.0% | 0.0% | 6.7% | 4.4%
March + July + October | Geometry features | 0.0% | 0.0% | 3.3% | 4.4%
March + July + October | Position features | 0.0% | 6.8% | 0.0% | 2.2%
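Rankings like those summarized above are typically obtained by sorting features by their mean absolute SHAP value. The sketch below shows this standard shap-library pattern for a fitted tree-based learner (e.g., one of the base models); the model object and feature names are placeholders, and only the TreeExplainer/summary_plot calls are assumed from the shap library.

```python
import numpy as np
import shap  # SHapley Additive exPlanations


def shap_feature_ranking(tree_model, X, feature_names):
    """Rank features by mean |SHAP| value for a fitted tree-based classifier."""
    explainer = shap.TreeExplainer(tree_model)
    shap_values = explainer.shap_values(X)
    if isinstance(shap_values, list):               # older shap: one array per class
        shap_values = np.stack(shap_values, axis=-1)
    # Average |SHAP| over every axis except the feature axis (axis 1).
    other_axes = tuple(a for a in range(shap_values.ndim) if a != 1)
    importance = np.abs(shap_values).mean(axis=other_axes)
    order = np.argsort(importance)[::-1]
    return [(feature_names[i], float(importance[i])) for i in order]


# A beeswarm summary of the same values (as in the SHAP panels of Figure 13):
# shap.summary_plot(shap.TreeExplainer(model).shap_values(X), X, feature_names=feature_names)
```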
Zhang, Y.; Fu, B.; Sun, X.; Yao, H.; Zhang, S.; Wu, Y.; Kuang, H.; Deng, T. Effects of Multi-Growth Periods UAV Images on Classifying Karst Wetland Vegetation Communities Using Object-Based Optimization Stacking Algorithm. Remote Sens. 2023, 15, 4003. https://doi.org/10.3390/rs15164003