Recognition of Urbanized Areas in UAV-Derived Very-High-Resolution Visible-Light Imagery
Figure 1. Jerzmanowice dataset: (a) the area covered by UAV photogrammetry missions with the locations of ground control points (GCPs) and check points (CPs) and the boundary of the study area; (b) the study area with the sample area locations.
Figure 2. Wieliczka dataset: (a) the area covered by UAV photogrammetry missions with the locations of GCPs and CPs and the boundary of the study area; (b) the study area with the sample area locations.
Figure 3. Workflow for VI threshold (a) calibration and (b) validation and testing. The optimal threshold value was determined using the JERZ dataset (a), which employed orthomosaics and a manually performed reference classification. Subsequently (b), classification accuracy was assessed for both the calibration (JERZ) and the WIEL datasets using the optimal threshold value determined in step (a). This ensured that the accuracies determined for the WIEL dataset were independent and unbiased.
Figure 4. Workflow for NN (a) training and (b) validation and testing. Training was conducted using the JERZ dataset (a), which employed orthomosaics and a manually performed reference classification. Subsequently (b), classification accuracy was assessed for both the validation part of the JERZ dataset and the WIEL dataset. This ensured that the accuracies determined for the WIEL dataset were independent and unbiased.
Figure 5. Dendrogram of associations between VIs obtained for the 1 − |r| metric and the UPGMA method using the JERZ dataset. Taking into account the 0.1 distance criterion, six clusters were identified: (1) ExG, CIVE, GLI, ExGR, AL (in red); (2) GBdiff, ExB, ExGB, RGBVI (in green); (3) ExR, NGRDI, MGRVI (in blue); (4) MExG; (5) TGI, AI (in violet); and (6) AB.
Figure 6. Histograms of VIs with optimal thresholds for the JERZ dataset. The figure also shows the percentage of pixels classified as urbanized and non-urbanized areas.
Figure 7. Variability in the optimal threshold depending on the survey date, with the horizontal axes displaying dates in the year.month format.
Figure 8. The influence of the adopted threshold value on MCC and accuracy.
Figure 9. Examples of classification results for dark orthomosaic fragments: (a) for a roof and (b) for trees. The figure shows a fragment of an orthomosaic, the reference classification, the classifications predicted by ExB and the NNs (linear, MLP, and CNN), and the classification errors. In the classification images, pixels classified as urbanized areas are red and pixels classified as non-urbanized areas are green. In the error images, blue indicates correctly classified pixels and yellow indicates misclassified pixels.
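The dendrogram caption above (Figure 5) names the clustering recipe: pairwise Pearson correlations between VIs converted to 1 − |r| distances, UPGMA (average) linkage, and a cut at a distance of 0.1. Below is a minimal SciPy sketch of that recipe; it is not the authors' code, and the input `vi_stack` (one row of pixel values per index) and the function name are hypothetical.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster, dendrogram
from scipy.spatial.distance import squareform

def cluster_vegetation_indices(vi_stack: np.ndarray, vi_names: list):
    # vi_stack: hypothetical array of shape (n_indices, n_pixels)
    r = np.corrcoef(vi_stack)                 # pairwise Pearson correlations
    dist = 1.0 - np.abs(r)                    # 1 - |r| dissimilarity
    np.fill_diagonal(dist, 0.0)
    condensed = squareform(dist, checks=False)
    z = linkage(condensed, method="average")  # UPGMA linkage
    clusters = fcluster(z, t=0.1, criterion="distance")  # 0.1 distance criterion
    dendrogram(z, labels=vi_names)            # plot the association dendrogram
    return z, clusters
```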
Abstract
1. Introduction
- The utilized datasets comprise only high-resolution RGB images from two distinct study areas, acquired with two UAV platforms across multiple measurement series conducted at various stages of vegetation cover (different seasons), ensuring comprehensive validation and robustness of the tested classifiers;
- The reference classification used to calibrate/train and test the classification methods was meticulously performed by hand at the full spatial resolution of the RGB data (10 mm and 15 mm for the two datasets, respectively);
- The study comparatively evaluated 16 VIs, computed from high-resolution UAV-derived RGB images and combined with a simple thresholding approach, for the classification task at hand;
- The influence of season and dataset on the optimal VIs’ thresholds and classification accuracy was analyzed;
- The classification accuracy using simple neural networks (NNs), including linear networks, multi-layer perceptron (MLP), and CNNs, trained especially for this purpose, was also assessed and compared with the classification results obtained based on VIs thresholding;
- Because the NN input consisted of a pixel and its surroundings (an image patch), the study also provides insight into the influence of the neighborhood on the classification results.
2. Materials and Methods
2.1. Datasets
2.1.1. Jerzmanowice Dataset
2.1.2. Wieliczka Dataset
2.2. Classification Methods
2.2.1. Vegetation Indices Thresholding
2.2.2. Neural Networks
- regularization was not applied;
- three NNs of the same structure were trained, and the one with the highest MCC for the validation dataset was applied in the further tests;
- a fully connected layer with two linear units followed by a softmax function was used at the network outputs (a minimal sketch follows this list);
- the class for which membership probability exceeded 50% was assigned;
- the classification was conducted only for the central point of a patch;
- the tested patch sizes were 1, 3, 5, 7, and 9 pixels; for CNNs, only patch sizes of 5, 7, and 9 were tested;
- the training dataset was not augmented.
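A minimal PyTorch sketch of the MLP variant configured as listed above is given below: the input is an RGB patch centred on the pixel being classified, the output is a fully connected layer with two linear units followed by a softmax, and the central pixel is assigned the class whose membership probability exceeds 50%. The hidden-layer width, the activation, and the use of PyTorch are assumptions for illustration, not details taken from the paper.

```python
import torch
import torch.nn as nn

class PatchMLP(nn.Module):
    """Classifies the central pixel of a k x k RGB patch (sketch, not the authors' code)."""
    def __init__(self, patch_size: int = 9, hidden: int = 64):  # hidden width is assumed
        super().__init__()
        in_features = 3 * patch_size * patch_size   # R, G, B for every pixel in the patch
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(in_features, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 2),      # fully connected layer with two linear units
            nn.Softmax(dim=1),         # class membership probabilities
        )

    def forward(self, patch: torch.Tensor) -> torch.Tensor:
        # patch: (batch, 3, k, k); the probabilities refer to the central pixel only
        return self.net(patch)

# Assuming class index 1 denotes "urbanized": assign it when its probability exceeds 0.5.
probs = PatchMLP(patch_size=9)(torch.rand(4, 3, 9, 9))
urbanized = probs[:, 1] > 0.5
```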
2.3. Classification Quality Measures
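As a minimal sketch (not taken from the paper), the quality measures reported in the results tables below can be computed from the binary confusion-matrix counts, with "urbanized" treated as the positive class:

```python
import math

def quality_measures(tp: int, fp: int, tn: int, fn: int) -> dict:
    # Standard binary-classification measures; no guards for degenerate (empty-class) cases.
    mcc_den = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return {
        "MCC": (tp * tn - fp * fn) / mcc_den if mcc_den else 0.0,
        "IoU": tp / (tp + fp + fn),
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
        "recall": tp / (tp + fn),
        "precision": tp / (tp + fp),
    }
```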
3. Results
3.1. Vegetation Indices Classification Results
3.2. Neural Networks Classification Results
4. Discussion
5. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
- Arpitha, M.; Ahmed, S.A.; Harishnaika, N. Land use and land cover classification using machine learning algorithms in google earth engine. Earth Sci. Inform. 2023, 16, 3057–3073. [Google Scholar] [CrossRef]
- Moharram, M.A.; Sundaram, D.M. Land use and land cover classification with hyperspectral data: A comprehensive review of methods, challenges and future directions. Neurocomputing 2023, 536, 90–113. [Google Scholar] [CrossRef]
- Wang, Y.; Sun, Y.; Cao, X.; Wang, Y.; Zhang, W.; Cheng, X. A review of regional and Global scale Land Use/Land Cover (LULC) mapping products generated from satellite remote sensing. ISPRS J. Photogramm. Remote Sens. 2023, 206, 311–334. [Google Scholar] [CrossRef]
- Tokarczyk, P.; Leitao, J.P.; Rieckermann, J.; Schindler, K.; Blumensaat, F. High-quality observation of surface imperviousness for urban runoff modelling using UAV imagery. Hydrol. Earth Syst. Sci. 2015, 19, 4215–4228. [Google Scholar] [CrossRef]
- Liao, W.; Deng, Y.; Li, M.; Sun, M.; Yang, J.; Xu, J. Extraction and Analysis of Finer Impervious Surface Classes in Urban Area. Remote Sens. 2021, 13, 459. [Google Scholar] [CrossRef]
- Shao, Z.; Cheng, T.; Fu, H.; Li, D.; Huang, X. Emerging Issues in Mapping Urban Impervious Surfaces Using High-Resolution Remote Sensing Images. Remote Sens. 2023, 15, 2562. [Google Scholar] [CrossRef]
- Alvarez-Vanhard, E.; Houet, T.; Mony, C.; Lecoq, L.; Corpetti, T. Can UAVs fill the gap between in situ surveys and satellites for habitat mapping? Remote Sens. Environ. 2020, 243, 111780. [Google Scholar] [CrossRef]
- Gokool, S.; Mahomed, M.; Brewer, K.; Naiken, V.; Clulow, A.; Sibanda, M.; Mabhaudhi, T. Crop mapping in smallholder farms using unmanned aerial vehicle imagery and geospatial cloud computing infrastructure. Heliyon 2024, 10, e26913. [Google Scholar] [CrossRef]
- Mollick, T.; Azam, M.G.; Karim, S. Geospatial-based machine learning techniques for land use and land cover mapping using a high-resolution unmanned aerial vehicle image. Remote Sens. Appl. Soc. Environ. 2023, 29, 100859. [Google Scholar] [CrossRef]
- Furukawa, F.; Laneng, L.A.; Ando, H.; Yoshimura, N.; Kaneko, M.; Morimoto, J. Comparison of RGB and multispectral unmanned aerial vehicle for monitoring vegetation coverage changes on a landslide area. Drones 2021, 5, 97. [Google Scholar] [CrossRef]
- Ćwiąkała, P.; Gruszczyński, W.; Stoch, T.; Puniach, E.; Mrocheń, D.; Matwij, W.; Matwij, K.; Nędzka, M.; Sopata, P.; Wójcik, A. UAV applications for determination of land deformations caused by underground mining. Remote Sens. 2020, 12, 1733. [Google Scholar] [CrossRef]
- Puniach, E.; Gruszczyński, W.; Ćwiąkała, P.; Matwij, W. Application of UAV-based orthomosaics for determination of horizontal displacement caused by underground mining. ISPRS J. Photogramm. Remote Sens. 2021, 174, 282–303. [Google Scholar] [CrossRef]
- da Silva, V.S.; Salami, G.; da Silva, M.I.O.; Silva, E.A.; Monteiro Junior, J.J.; Alba, E. Methodological evaluation of vegetation indexes in land use and land cover (LULC) classification. Geol. Ecol. Landsc. 2020, 4, 159–169. [Google Scholar] [CrossRef]
- Rouse, J.W.; Haas, R.H.; Schell, J.A.; Deering, D.W. Monitoring vegetation systems in the Great Plains with ERTS. In Proceedings of the Third ERTS Symposium, Greenbelt, MD, USA, 1974; pp. 309–317. [Google Scholar]
- Cho, W.; Iida, M.; Suguri, M.; Masuda, R.; Kurita, H. Vision-based uncut crop edge detection for automated guidance of head-feeding combine. Eng. Agric. Environ. Food 2014, 7, 97–102. [Google Scholar] [CrossRef]
- Maxwell, A.E.; Warner, T.A.; Fang, F. Implementation of machine-learning classification in remote sensing: An applied review. Int. J. Remote Sens. 2018, 39, 2784–2817. [Google Scholar] [CrossRef]
- Talukdar, S.; Singha, P.; Mahato, S.; Pal, S.; Liou, Y.-A.; Rahman, A. Land-Use Land-Cover Classification by Machine Learning Classifiers for Satellite Observations—A Review. Remote Sens. 2020, 12, 1135. [Google Scholar] [CrossRef]
- Memduhoğlu, A. Identifying impervious surfaces for rainwater harvesting feasibility using unmanned aerial vehicle imagery and machine learning classification. Adv. GIS 2023, 3, 1–6. [Google Scholar]
- Lu, T.; Wan, L.; Qi, S.; Gao, M. Land Cover Classification of UAV Remote Sensing Based on Transformer–CNN Hybrid Architecture. Sensors 2023, 23, 5288. [Google Scholar] [CrossRef]
- Zhao, S.; Tu, K.; Ye, S.; Tang, H.; Hu, Y.; Xie, C. Land Use and Land Cover Classification Meets Deep Learning: A Review. Sensors 2023, 23, 8966. [Google Scholar] [CrossRef]
- Aryal, J.; Sitaula, C.; Frery, A.C. Land use and land cover (LULC) performance modeling using machine learning algorithms: A case study of the city of Melbourne, Australia. Sci. Rep. 2023, 13, 13510. [Google Scholar] [CrossRef]
- Chen, J.; Chen, Z.; Huang, R.; You, H.; Han, X.; Yue, T.; Zhou, G. The Effects of Spatial Resolution and Resampling on the Classification Accuracy of Wetland Vegetation Species and Ground Objects: A Study Based on High Spatial Resolution UAV Images. Drones 2023, 7, 61. [Google Scholar] [CrossRef]
- Gibril, M.B.A.; Kalantar, B.; Al-Ruzouq, R.; Ueda, N.; Saeidi, V.; Shanableh, A.; Mansor, S.; Shafri, H.Z.M. Mapping Heterogeneous Urban Landscapes from the Fusion of Digital Surface Model and Unmanned Aerial Vehicle-Based Images Using Adaptive Multiscale Image Segmentation and Classification. Remote Sens. 2020, 12, 1081. [Google Scholar] [CrossRef]
- Li, Y.; Deng, T.; Fu, B.; Lao, Z.; Yang, W.; He, H.; Fan, D.; He, W.; Yao, Y. Evaluation of Decision Fusions for Classifying Karst Wetland Vegetation Using One-Class and Multi-Class CNN Models with High-Resolution UAV Images. Remote Sens. 2022, 14, 5869. [Google Scholar] [CrossRef]
- Park, G.; Park, K.; Song, B.; Lee, H. Analyzing Impact of Types of UAV-Derived Images on the Object-Based Classification of Land Cover in an Urban Area. Drones 2022, 6, 71. [Google Scholar] [CrossRef]
- Öztürk, M.Y.; Çölkesen, I. The impacts of vegetation indices from UAV-based RGB imagery on land cover classification using ensemble learning. Mersin Photogramm. J. 2021, 3, 41–47. [Google Scholar] [CrossRef]
- Al-Najjar, H.A.; Kalantar, B.; Pradhan, B.; Saeidi, V.; Halin, A.A.; Ueda, N.; Mansor, S. Land cover classification from fused DSM and UAV images using convolutional neural networks. Remote Sens. 2019, 11, 1461. [Google Scholar] [CrossRef]
- Elamin, A.; El-Rabbany, A. UAV-Based Multi-Sensor Data Fusion for Urban Land Cover Mapping Using a Deep Convolutional Neural Network. Remote Sens. 2022, 14, 4298. [Google Scholar] [CrossRef]
- Abdolkhani, A.; Attarchi, S.; Alavipanah, S.K. A new classification scheme for urban impervious surface extraction from UAV data. Earth Sci. Inform. 2024. [Google Scholar] [CrossRef]
- Fan, C.L. Ground surface structure classification using UAV remote sensing images and machine learning algorithms. Appl. Geomat. 2023, 15, 919–931. [Google Scholar] [CrossRef]
- Concepcion, R.S.; Lauguico, S.C.; Tobias, R.R.; Dadios, E.P.; Bandala, A.A.; Sybingco, E. Estimation of photosynthetic growth signature at the canopy scale using new genetic algorithm-modified visible band triangular greenness index. In Proceedings of the 2020 International Conference on Advanced Robotics and Intelligent Systems (ARIS), Taipei, Taiwan, 19–21 August 2020; pp. 1–6. [Google Scholar] [CrossRef]
- Louhaichi, M.; Borman, M.M.; Johnson, D.E. Spatially located platform and aerial photography for documentation of grazing impacts on wheat. Geocarto Int. 2001, 16, 65–70. [Google Scholar] [CrossRef]
- Bendig, J.; Yu, K.; Aasen, H.; Bolten, A.; Bennertz, S.; Broscheit, J.; Gnyp, M.L.; Bareth, G. Combining UAV-based plant height from crop surface models, visible, and near infrared vegetation indices for biomass monitoring in barley. Int. J. Appl. Earth Obs. Geoinf. 2015, 39, 79–87. [Google Scholar] [CrossRef]
- Lu, N.; Zhou, J.; Han, Z.; Li, D.; Cao, Q.; Yao, X.; Tian, Y.; Zhu, Y.; Cao, W.; Cheng, T. Improved estimation of aboveground biomass in wheat from RGB imagery and point cloud data acquired with a low-cost unmanned aerial vehicle system. Plant Methods 2019, 15, 17. [Google Scholar] [CrossRef] [PubMed]
- Puniach, E.; Gruszczyński, W.; Stoch, T.; Mrocheń, D.; Ćwiąkała, P.; Sopata, P.; Pastucha, E.; Matwij, W. Determination of the coefficient of proportionality between horizontal displacement and tilt change using UAV photogrammetry. Eng. Geol. 2023, 312, 106939. [Google Scholar] [CrossRef]
- Gruszczyński, W.; Puniach, E.; Ćwiąkała, P.; Matwij, W. Correction of low vegetation impact on UAV-derived point cloud heights with U-Net networks. IEEE Trans. Geosci. Remote Sens. 2022, 60, 3057272. [Google Scholar] [CrossRef]
- Matthews, B.W. Comparison of the predicted and observed secondary structure of T4 phage lysozyme. Biochim. Biophys. Acta (BBA) Protein Struct. 1975, 405, 442–451. [Google Scholar] [CrossRef]
- Kawashima, S.; Nakatani, M. An algorithm for estimating chlorophyll content in leaves using a video camera. Ann. Bot. 1998, 81, 49–54. [Google Scholar] [CrossRef]
- Woebbecke, D.M.; Meyer, G.E.; Von Bargen, K.; Mortensen, D.A. Color indices for weed identification under various soil, residue, and lighting conditions. Trans. ASAE 1995, 38, 259–269. [Google Scholar] [CrossRef]
- Meyer, G.E.; Hindman, T.W.; Laksmi, K. Machine vision detection parameters for plant species identification. Precis. Agric. Biol. Qual. 1999, 3543, 327–335. [Google Scholar] [CrossRef]
- Mao, W.; Wang, Y.; Wang, Y. Real-time detection of between-row weeds using machine vision. In Proceedings of the 2003 ASAE Annual Meeting, Las Vegas, NV, USA, 27–30 July 2003; American Society of Agricultural and Biological Engineers: St. Joseph, MI, USA, 2003. [Google Scholar] [CrossRef]
- Meyer, G.E.; Neto, J.C. Verification of color vegetation indices for automated crop imaging applications. Comput. Electron. Agric. 2008, 63, 282–293. [Google Scholar] [CrossRef]
- Burgos-Artizzu, X.P.; Ribeiro, A.; Guijarro, M.; Pajares, G. Real-time image processing for crop/weed discrimination in maize fields. Comput. Electron. Agric. 2011, 75, 337–346. [Google Scholar] [CrossRef]
- Tucker, C.J. Red and photographic infrared linear combinations for monitoring vegetation. Remote Sens. Environ. 1979, 8, 127–150. [Google Scholar] [CrossRef]
- Tosaka, N.; Hata, S.; Okamoto, H.; Takai, M. Automatic thinning mechanism of sugar beets, 2: Recognition of sugar beets by image color information. J. Jpn. Soc. Agric. Mach. 1998, 60, 75–82. [Google Scholar] [CrossRef]
- Kataoka, T.; Kaneko, T.; Okamoto, H.; Hata, S. Crop growth estimation system using machine vision. In Proceedings of the 2003 IEEE/ASME International Conference on Advanced Intelligent Mechatronics (AIM 2003), Kobe, Japan, 20–24 July 2003; pp. b1079–b1083. [Google Scholar] [CrossRef]
- Wahono, W.; Indradewa, D.; Sunarminto, B.H.; Haryono, E.; Prajitno, D. CIE L* a* b* color space based vegetation indices derived from unmanned aerial vehicle captured images for chlorophyll and nitrogen content estimation of tea (Camellia sinensis L. Kuntze) leaves. Ilmu Pertan. (Agric. Sci.) 2019, 4, 46–51. [Google Scholar] [CrossRef]
- Bishop, C.M. Pattern Recognition and Machine Learning; Springer: New York, NY, USA, 2006. [Google Scholar]
- LeCun, Y.; Bengio, Y.; Hinton, G. Deep learning. Nature 2015, 521, 436–444. [Google Scholar] [CrossRef] [PubMed]
- Nocedal, J. Updating quasi-Newton matrices with limited storage. Math. Comput. 1980, 35, 773–782. [Google Scholar] [CrossRef]
- Liu, D.C.; Nocedal, J. On the limited memory BFGS method for large scale optimization. Math. Program. 1989, 45, 503–528. [Google Scholar] [CrossRef]
- Chicco, D.; Jurman, G. The advantages of the Matthews correlation coefficient (MCC) over F1 score and accuracy in binary classification evaluation. BMC Genom. 2020, 21, 6. [Google Scholar] [CrossRef]
- Sneath, P.H.A.; Sokal, R.R. Numerical Taxonomy: The Principles and Practice of Numerical Classification, 1st ed.; W. H. Freeman: San Francisco, CA, USA, 1973. [Google Scholar]
- Otsu, N. A threshold selection method from gray-level histograms. IEEE Trans. Syst. Man Cybern. 1979, 9, 62–66. [Google Scholar] [CrossRef]
No. | Vegetation Index | Abbr. | Color Space | Equation | Reference |
---|---|---|---|---|---|
1. | - | GBdiff | RGB | (G − B)/(R + G + B) | [38] |
2. | Excess Green | ExG | RGB | 2g − r − b | [39] |
3. | Excess Red | ExR | RGB | 1.4r − g | [40] |
4. | Excess Blue | ExB | RGB | 1.4b − g | [41] |
5. | Excess Green Minus Excess Red | ExGR | RGB | ExG − ExR | [42] |
6. | Excess Green Minus Excess Blue | ExGB | RGB | ExG − ExB | [15] |
7. | Modified Excess Green Index | MExG | RGB | 1.262G − 0.884R − 0.311B | [43] |
8. | Normalized Green Red Difference Index | NGRDI | RGB | (G − R)/(G + R) | [44] |
9. | Green Leaf Index | GLI | RGB | (2 × G − R − B)/(2 × G + R + B) | [32] |
10. | Red Green Blue Vegetation Index | RGBVI | RGB | (G² − B × R)/(G² + B × R) | [33] |
11. | Modified Green Red Vegetation Index | MGRVI | RGB | (G² − R²)/(G² + R²) | [33] |
12. | Color Index of Vegetation Extraction | CIVE | RGB | 0.441r − 0.881g + 0.385b + 18.78745 | [45,46] |
13. | Triangular Greenness Index | TGI | RGB | G − 0.39R − 0.61B | [31] |
14. | AI | AI | CIELab | b* − a* | [47] |
15. | AB | AB | CIELab | a*/L* | [47] |
16. | AL | AL | CIELab | a*/b* | [47] |
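As an illustration of the equations in the table above, a minimal NumPy sketch for a few of the RGB-based indices follows. Lowercase r, g, b are interpreted as chromatic (sum-normalized) coordinates, the usual convention for ExG and ExR; the epsilon guard is an addition for numerical safety, not part of the original definitions.

```python
import numpy as np

def rgb_vegetation_indices(rgb: np.ndarray, eps: float = 1e-9) -> dict:
    # rgb: float array of shape (H, W, 3) holding the R, G, B bands of an orthomosaic
    R, G, B = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    total = R + G + B + eps
    r, g, b = R / total, G / total, B / total    # chromatic coordinates
    exg = 2 * g - r - b                          # Excess Green
    exr = 1.4 * r - g                            # Excess Red
    return {
        "ExG": exg,
        "ExR": exr,
        "ExB": 1.4 * b - g,                      # Excess Blue
        "ExGR": exg - exr,                       # Excess Green minus Excess Red
        "NGRDI": (G - R) / (G + R + eps),        # Normalized Green Red Difference Index
        "TGI": G - 0.39 * R - 0.61 * B,          # Triangular Greenness Index
    }
```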
Vegetation Index | Optimal Threshold | MCC (JERZ) | MCC (WIEL) | IoU (JERZ) | IoU (WIEL) | Accuracy (JERZ) | Accuracy (WIEL) | Recall (JERZ) | Recall (WIEL) | Precision (JERZ) | Precision (WIEL)
---|---|---|---|---|---|---|---|---|---|---|---
ExG | 0.020 | 0.4522 | 0.4744 | 0.3284 | 0.3512 | 0.7647 | 0.7847 | 0.8911 | 0.8812 | 0.3421 | 0.3687
ExB | 0.108 | 0.6583 | 0.6099 | 0.5329 | 0.4774 | 0.9016 | 0.8700 | 0.8701 | 0.8980 | 0.5789 | 0.5048
NGRDI | 0.032 | 0.1608 | 0.2863 | 0.1609 | 0.2288 | 0.3969 | 0.6436 | 0.8957 | 0.7991 | 0.1639 | 0.2427
MExG | 0.078 | 0.1636 | 0.2389 | 0.1569 | 0.1875 | 0.3357 | 0.4507 | 0.9578 | 0.9580 | 0.1580 | 0.1890
AI | 4.087 | 0.5113 | 0.4624 | 0.3910 | 0.3362 | 0.8346 | 0.7627 | 0.8230 | 0.9086 | 0.4269 | 0.3479
AB | 0.665 | 0.2275 | 0.0910 | 0.1618 | 0.1108 | 0.8609 | 0.8063 | 0.2080 | 0.1825 | 0.4216 | 0.2201
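The optimal thresholds in the table above were calibrated on the JERZ dataset against the manual reference classification (see the VI threshold workflow caption above). Below is a minimal sketch of such a calibration by MCC maximization; the threshold grid and the assumption that "VI ≥ threshold" denotes an urbanized pixel are illustrative choices, and the actual comparison direction depends on the index.

```python
import numpy as np
from sklearn.metrics import matthews_corrcoef

def calibrate_threshold(vi: np.ndarray, reference_urbanized: np.ndarray, n_steps: int = 200):
    # vi: vegetation-index raster; reference_urbanized: manual reference mask (True = urbanized)
    vi_flat = vi.ravel()
    ref_flat = reference_urbanized.ravel().astype(bool)
    candidates = np.linspace(vi_flat.min(), vi_flat.max(), n_steps)
    scores = [matthews_corrcoef(ref_flat, vi_flat >= thr) for thr in candidates]
    best = int(np.argmax(scores))
    return candidates[best], scores[best]   # optimal threshold and its MCC
```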
Network Type | Patch Size | MCC (JERZ*) | MCC (WIEL) | IoU (JERZ*) | IoU (WIEL) | Accuracy (JERZ*) | Accuracy (WIEL) | Recall (JERZ*) | Recall (WIEL) | Precision (JERZ*) | Precision (WIEL)
---|---|---|---|---|---|---|---|---|---|---|---
Linear | 1 px | 0.6566 | 0.7344 | 0.5307 | 0.6162 | 0.9301 | 0.9424 | 0.6453 | 0.6967 | 0.7493 | 0.8422
Linear | 3 px | 0.7031 | 0.7926 | 0.5854 | 0.6914 | 0.9375 | 0.9539 | 0.7491 | 0.7774 | 0.7281 | 0.8621
Linear | 5 px | 0.7879 | 0.8075 | 0.6860 | 0.7098 | 0.9546 | 0.9572 | 0.8092 | 0.7868 | 0.8184 | 0.8789
Linear | 7 px | 0.7990 | 0.8163 | 0.7003 | 0.7211 | 0.9567 | 0.9591 | 0.8251 | 0.7934 | 0.8224 | 0.8878
Linear | 9 px | 0.8055 | 0.8208 | 0.7087 | 0.7267 | 0.9579 | 0.9601 | 0.8337 | 0.7953 | 0.8254 | 0.8938
MLP | 1 px | 0.7040 | 0.7558 | 0.5870 | 0.6498 | 0.9370 | 0.9444 | 0.7314 | 0.7768 | 0.7484 | 0.7989
MLP | 3 px | 0.7333 | 0.7976 | 0.6172 | 0.7020 | 0.9376 | 0.9519 | 0.8540 | 0.8517 | 0.6900 | 0.7998
MLP | 5 px | 0.8324 | 0.8087 | 0.7436 | 0.7160 | 0.9623 | 0.9547 | 0.8919 | 0.8586 | 0.8173 | 0.8117
MLP | 7 px | 0.8529 | 0.8060 | 0.7716 | 0.7123 | 0.9675 | 0.9550 | 0.8947 | 0.8360 | 0.8486 | 0.8281
MLP | 9 px | 0.8457 | 0.8121 | 0.7616 | 0.7203 | 0.9655 | 0.9561 | 0.8984 | 0.8476 | 0.8334 | 0.8275
CNN | 5 px | 0.8468 | 0.7927 | 0.7633 | 0.6959 | 0.9665 | 0.9503 | 0.8808 | 0.8550 | 0.8512 | 0.7889
CNN | 7 px | 0.8654 | 0.7811 | 0.7890 | 0.6817 | 0.9708 | 0.9482 | 0.8913 | 0.8332 | 0.8730 | 0.7894
CNN | 9 px | 0.8789 | 0.7808 | 0.8078 | 0.6794 | 0.9740 | 0.9503 | 0.8890 | 0.7896 | 0.8984 | 0.8296
* Validation part of the JERZ dataset.
Vegetation Index | Otsu’s Method-Based Threshold | MCC | IoU | Accuracy | Recall | Precision |
---|---|---|---|---|---|---|
ExG | 0.165 | 0.2097 | 0.1678 | 0.3615 | 0.9972 | 0.1678 |
ExB | −0.031 | 0.2275 | 0.1677 | 0.3916 | 0.9970 | 0.1747 |
NGRDI | 0.035 | 0.1600 | 0.1599 | 0.3854 | 0.9063 | 0.1626 |
MExG | 0.059 | 0.1327 | 0.1551 | 0.4114 | 0.8370 | 0.1599 |
AI | 16.887 | 0.2814 | 0.1985 | 0.4802 | 0.9973 | 0.1986 |
AB | −0.134 | 0.0826 | 0.1402 | 0.6095 | 0.4935 | 0.1638 |
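The thresholds in the table above were derived with Otsu's method. A minimal histogram-based sketch of Otsu thresholding applied to a vegetation-index raster follows; the 256-bin histogram is an assumption, as the binning used in the paper is not stated here.

```python
import numpy as np

def otsu_vi_threshold(vi: np.ndarray, nbins: int = 256) -> float:
    # Choose the bin centre that maximizes the between-class variance of the VI histogram.
    values = vi[np.isfinite(vi)].ravel()
    hist, edges = np.histogram(values, bins=nbins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    w = hist / hist.sum()
    w0 = np.cumsum(w)                  # weight of the class below the candidate threshold
    w1 = 1.0 - w0                      # weight of the class above it
    mu = np.cumsum(w * centers)        # cumulative mean
    mu_t = mu[-1]                      # overall mean
    with np.errstate(divide="ignore", invalid="ignore"):
        between = (mu_t * w0 - mu) ** 2 / (w0 * w1)
    return float(centers[np.nanargmax(between)])
```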
Citation: Puniach, E.; Gruszczyński, W.; Ćwiąkała, P.; Strząbała, K.; Pastucha, E. Recognition of Urbanized Areas in UAV-Derived Very-High-Resolution Visible-Light Imagery. Remote Sens. 2024, 16, 3444. https://doi.org/10.3390/rs16183444