Search Results (756)

Search Parameters:
Keywords = individual tree detection

16 pages, 86590 KiB  
Article
Automated Detection of Araucaria angustifolia (Bertol.) Kuntze in Urban Areas Using Google Earth Images and YOLOv7x
by Mauro Alessandro Karasinski, Ramon de Sousa Leite, Emmanoella Costa Guaraná, Evandro Orfanó Figueiredo, Eben North Broadbent, Carlos Alberto Silva, Erica Kerolaine Mendonça dos Santos, Carlos Roberto Sanquetta and Ana Paula Dalla Corte
Remote Sens. 2025, 17(5), 809; https://doi.org/10.3390/rs17050809 - 25 Feb 2025
Abstract
This study addresses the urgent need for effective methods to monitor and conserve Araucaria angustifolia, a critically endangered species of immense ecological and cultural significance in southern Brazil. Using high-resolution satellite images from Google Earth, we apply the YOLOv7x deep learning model to detect this species in two distinct urban contexts in Curitiba, Paraná: isolated trees across the urban landscape and A. angustifolia individuals within forest remnants. Data augmentation techniques, including image rotation, hue and saturation adjustments, and mosaic augmentation, were employed to increase the model’s accuracy and robustness. Through a 5-fold cross-validation, the model achieved a mean Average Precision (AP) of 90.79% and an F1-score of 88.68%. Results show higher detection accuracy in forest remnants, where the homogeneous background of natural landscapes facilitated the identification of trees, compared to urban areas where complex visual elements like building shadows presented challenges. To reduce false positives, especially misclassifications involving palm species, additional annotations were introduced, significantly enhancing performance in urban environments. These findings highlight the potential of integrating remote sensing with deep learning to automate large-scale forest inventories. Furthermore, the study highlights the broader applicability of the YOLOv7x model for urban forestry planning, offering a cost-effective solution for biodiversity monitoring. The integration of predictive data with urban forest maps reveals a spatial correlation between A. angustifolia density and the presence of forest fragments, suggesting that the preservation of these areas is vital for the species’ sustainability. The model’s scalability also opens the door for future applications in ecological monitoring across larger urban areas. As urban environments continue to expand, understanding and conserving key species like A. 
angustifolia is critical for enhancing biodiversity and resilience and for addressing climate change.
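As a side note on the metrics reported above: the F1-score follows directly from true-positive, false-positive, and false-negative counts. A minimal sketch (the counts below are illustrative, not taken from the paper):

```python
def detection_f1(tp: int, fp: int, fn: int) -> float:
    """F1-score from true positives, false positives, and false negatives."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Illustrative counts chosen to land near the paper's reported F1 of ~88.7%:
f1 = detection_f1(tp=887, fp=113, fn=114)
print(round(f1, 4))
```

The mean Average Precision additionally averages precision over recall thresholds (and here over the 5 folds), which requires per-detection confidence scores rather than raw counts.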
Figure 1
<p>Location of the study area in the city of Curitiba, Paraná, Brazil. The highlighted neighborhoods (Batel, Centro, Jardim Botânico, Jardim das Américas, Rebouças, and Santa Felicidade) were used to train and test the YOLOv7x model. The gray area indicates regions where the available images did not have the same quality as the others and, therefore, were not included in the study.</p>
Figure 2
<p>Components of a bounding box. (bx, by) represent the X and Y coordinates of the center of the bounding box; w represents the width and h the height of the bounding box.</p>
Figure 3
<p>Learning curve performance of YOLOv7x in the detection of <span class="html-italic">A. angustifolia</span> in the city of Curitiba, Paraná, Brazil.</p>
Figure 4
<p>Frequency distribution of individuals classified as forest and isolated individuals.</p>
Figure 5
<p>Overview of <span class="html-italic">A. angustifolia</span> distribution by YOLOv7x in Curitiba, Paraná. (<b>a</b>) Forest areas. (<b>b</b>) Kernel Density Map (trees/ha). (<b>c</b>) Predicted trees. (<b>d</b>) Uncertainty distribution for predicted trees.</p>
Figure 6
<p>Examples of prediction results: (<b>a</b>) Detection in the context of isolated trees. (<b>b</b>) Detection in forest fragments. (<b>c</b>) Example of a false negative caused by building shadows. (<b>d</b>) Example of a false positive due to confusion with palm trees. (<b>e</b>) Example of a false positive caused by confusion with the shadow projection of an <span class="html-italic">A. angustifolia</span>.</p>
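A small aside on the bounding-box parameterization described in Figure 2: converting a center-format box (bx, by, w, h) to corner coordinates, as most IoU and drawing routines expect, is a one-liner. A minimal sketch:

```python
def center_to_corners(bx: float, by: float, w: float, h: float):
    """Convert a YOLO-style center-format box (bx, by, w, h) to
    (x_min, y_min, x_max, y_max) corner format."""
    return (bx - w / 2, by - h / 2, bx + w / 2, by + h / 2)

# A box centered at (50, 40) that is 20 wide and 10 tall:
print(center_to_corners(50, 40, 20, 10))  # (40.0, 35.0, 60.0, 45.0)
```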
18 pages, 5819 KiB  
Article
Analysis of Population Structure and Selective Signatures for Milk Production Traits in Xinjiang Brown Cattle and Chinese Simmental Cattle
by Kailun Ma, Xue Li, Shengchao Ma, Menghua Zhang, Dan Wang, Lei Xu, Hong Chen, Xuguang Wang, Aladaer Qi, Yifan Ren, Xixia Huang and Qiuming Chen
Int. J. Mol. Sci. 2025, 26(5), 2003; https://doi.org/10.3390/ijms26052003 - 25 Feb 2025
Abstract
This study aims to elucidate the population structure and genetic diversity of Xinjiang brown cattle (XJBC) and Chinese Simmental cattle (CSC) while conducting genome-wide selective signatures analyses to identify selected genes associated with milk production traits in both breeds. Based on whole-genome resequencing technology, whole-genome single nucleotide polymorphisms (SNPs) of 83 Xinjiang brown cattle and 80 Chinese Simmental cattle were detected to resolve the genetic diversity and genetic structure of the two populations, whole-genome selective elimination analysis was performed for the two breeds of cattle using the fixation index (Fst) and nucleotide diversity (θπ ratio), and enrichment analysis was performed to explore their biological functions further. Both breeds exhibited relatively rich genetic diversity, with the Chinese Simmental cattle demonstrating higher genetic diversity than Xinjiang brown cattle. The IBS and G matrix results indicated that most individuals in the two populations were farther apart from each other. The PCA and neighbor-joining tree revealed no hybridization between the two breeds, but there was a certain degree of genetic differences among the individuals in the two breeds. Population structure analysis revealed that the optimal number of ancestors was three when K = 3. This resulted in clear genetic differentiation between the two populations, with only a few individuals having one ancestor and the majority having two or three common ancestors. A combined analysis of Fst and θπ was used to screen 112 candidate genes related to milk production traits in Xinjiang brown cattle and Chinese Simmental cattle. This study used genome-wide SNP markers to reveal the genetic diversity, population structure, and selection characteristics of two breeds. 
This study also screened candidate genes related to milk production traits, providing a theoretical basis for conserving genetic resources and improving genetic selection for milk production traits in Xinjiang brown cattle and Chinese Simmental cattle.
(This article belongs to the Section Molecular Genetics and Genomics)
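For context on the Fst statistic used in the selective-sweep scan: a common per-SNP estimator (Hudson's, in the formulation popularized by Bhatia et al.) can be computed from the two populations' allele frequencies and allele sample sizes. A sketch with invented frequencies, not values from the study:

```python
def hudson_fst(p1: float, p2: float, n1: int, n2: int) -> float:
    """Hudson-style per-SNP Fst from allele frequencies p1, p2 and
    allele sample sizes n1, n2 in two populations."""
    num = (p1 - p2) ** 2 - p1 * (1 - p1) / (n1 - 1) - p2 * (1 - p2) / (n2 - 1)
    den = p1 * (1 - p2) + p2 * (1 - p1)
    return num / den

# A strongly differentiated SNP vs. an undifferentiated one (illustrative):
print(hudson_fst(0.9, 0.1, 166, 160))  # high Fst
print(hudson_fst(0.5, 0.5, 166, 160))  # near zero (slightly negative)
```

Genome-wide scans like the one described average such per-SNP values in sliding windows and then apply a top-5% threshold, here combined with the log2 θπ ratio.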
Figure 1
<p>Distribution and functional annotation of SNPs in Xinjiang brown cattle and Chinese Simmental cattle. (<b>A</b>) Distribution of SNPs on autosomes in Xinjiang brown cattle. The horizontal coordinates indicate the density or number of SNPs, whereas the vertical coordinates indicate the positions or positional intervals of the 29 chromosomes. Different colors in the figure represent the number of SNPs per 1 Mb range: a color closer to red indicates a higher density of SNPs, a color closer to green indicates a lower density of SNPs, and gray indicates no SNP distribution at that position. (<b>B</b>) Distribution of SNPs on autosomes in Chinese Simmental cattle. (<b>C</b>) Frequency distribution of SNPs on 29 chromosomes of Xinjiang brown cattle and Chinese Simmental cattle. (<b>D</b>) Functional distribution of SNPs in Xinjiang brown cattle. The pie chart to the left displays the distribution of SNPs, including intergenic, intronic, upstream, downstream, UTR3, UTR5, splicing, ncRNA, and exonic regions, based on their location. The pie chart to the right displays the breakdown of exons, including nonsynonymous, synonymous, stop-gain, and stop-loss exons. (<b>E</b>) Functional distribution of SNPs in Chinese Simmental cattle.</p>
Figure 2
<p>LD attenuation in Xinjiang brown cattle and Chinese Simmental cattle. The horizontal coordinate indicates the distance at which the LD occurred, whereas the vertical coordinate indicates the LD correlation coefficient r<sup>2</sup>.</p>
Figure 3
<p>Population structure of Xinjiang brown cattle and Chinese Simmental cattle. (<b>A</b>) Principal component analysis of the two breeds. (<b>B</b>) Neighbor-joining tree of two breeds. (<b>C</b>) Admixture analysis cross-validation error. The abscissa is the K value (2–6), and the ordinate is the cross-validation error. (<b>D</b>) Analysis of the population structure of the two populations at K = 3. Every vertical line represents a sample, with the horizontal axis showing the sample number and the vertical axis indicating the percentage of subgroups or ancestors present in each sample. Various colors (blue, green, and red) distinguish between different subgroups or ancestors.</p>
Figure 4
<p>Genetic matrices of Xinjiang brown cattle and Chinese Simmental cattle. (<b>A</b>) Genetic distance matrix for Xinjiang brown cattle with IBS. The horizontal and vertical axes represent the individual numbers of the two breeds of cattle, and each small square represents the genetic distance value between two individuals. The closer the color is to orange, the greater the genetic distance; in contrast, the closer the color is to green, the smaller the genetic distance. (<b>B</b>) Genetic distance matrix of Chinese Simmental cattle with IBS. (<b>C</b>) Kinship G matrix of Xinjiang brown cattle. The horizontal and vertical axes represent the individual numbers of two breeds of cattle, and each small square represents the kinship coefficient between two individuals. The closer the color is to orange, the closer the kinship between individuals is. In contrast, the closer the color is to green, the more distant the kinship is. (<b>D</b>) Kinship G matrix of Chinese Simmental cattle.</p>
Figure 5
<p>Analysis of selective signatures in Xinjiang brown cattle. (<b>A</b>) Manhattan plot of <span class="html-italic">F</span><sub>st</sub>. The blue line represents the threshold value for the top 5% based on the Z(<span class="html-italic">F</span><sub>st</sub>) (1.96). (<b>B</b>) PI boxplots of the HY and LY groups. (<b>C</b>) Manhattan plot of the θπ ratio. The blue lines represent the threshold value for the top 5% and the below 5% based on the Log<sub>2</sub> (θπ ratio) (0.37 and −0.46). (<b>D</b>) Selective sweep analysis of <span class="html-italic">F</span><sub>st</sub> and θπ. The Log<sub>2</sub>Pi ratio value is shown on the horizontal axis, whereas the Z(<span class="html-italic">F</span><sub>st</sub>) value is indicated on the vertical axis. The frequency distribution plot is displayed at the top and right, whereas the dot plots in the middle depict the Z(<span class="html-italic">F</span><sub>st</sub>) and Log<sub>2</sub>Pi ratios for various windows. The uppermost blue and green areas represent the top 5% and bottom 5% of the regions selected based on the Log<sub>2</sub>Pi ratio, whereas the orange area on the right indicates the top 5% of the region selected by Z(<span class="html-italic">F</span><sub>st</sub>). The central blue and green areas denote the intersection of Z(<span class="html-italic">F</span><sub>st</sub>) and the Log<sub>2</sub>Pi ratio, which identifies the candidate loci for the HY group and the LY group, respectively. (<b>E</b>) GO enrichment analysis of candidate genes identified through the intersection of the two methods. (<b>F</b>) KEGG enrichment analysis of candidate genes identified through the intersection of the two methods.</p>
Figure 6
<p>Analysis of selective signatures in Chinese Simmental cattle. (<b>A</b>) Manhattan plot of <span class="html-italic">F</span><sub>st</sub>. The blue line represents the threshold value for the top 5% based on the Z(<span class="html-italic">F</span><sub>st</sub>) (1.90). (<b>B</b>) PI boxplots of the HY and LY groups. (<b>C</b>) Manhattan plot of the θπ ratio. The blue lines represent the threshold value for the top 5% and the below 5% based on the Log<sub>2</sub> (θπ ratio) (0.39 and −0.40). (<b>D</b>) Selective sweep analysis of <span class="html-italic">F</span><sub>st</sub> and θπ. (<b>E</b>) GO enrichment analysis of candidate genes identified through the intersection of the two methods. (<b>F</b>) KEGG enrichment analysis of candidate genes identified through the intersection of the two methods.</p>
Figure 7
<p>Venn diagram of candidate genes for Xinjiang brown cattle and Chinese Simmental cattle.</p>
24 pages, 5096 KiB  
Article
Aboveground Biomass and Tree Mortality Revealed Through Multi-Scale LiDAR Analysis
by Inacio T. Bueno, Carlos A. Silva, Kristina Anderson-Teixeira, Lukas Magee, Caiwang Zheng, Eben N. Broadbent, Angélica M. Almeyda Zambrano and Daniel J. Johnson
Remote Sens. 2025, 17(5), 796; https://doi.org/10.3390/rs17050796 - 25 Feb 2025
Abstract
Accurately monitoring aboveground biomass (AGB) and tree mortality is crucial for understanding forest health and carbon dynamics. LiDAR (Light Detection and Ranging) has emerged as a powerful tool for capturing forest structure across different spatial scales. However, the effectiveness of LiDAR for predicting AGB and tree mortality depends on the type of instrument, platform, and the resolution of the point cloud data. We evaluated the effectiveness of three distinct LiDAR-based approaches for predicting AGB and tree mortality in a 25.6 ha North American temperate forest. Specifically, we evaluated the following: GEDI-simulated waveforms from airborne laser scanning (ALS), grid-based structural metrics derived from unmanned aerial vehicle (UAV)-borne LiDAR data, and individual tree detection (ITD) from ALS data. Our results demonstrate varying levels of performance in the approaches, with ITD emerging as the most accurate for AGB modeling with a median R2 value of 0.52, followed by UAV (0.38) and GEDI (0.11). Our findings underscore the strengths of the ITD approach for fine-scale analysis, while grid-based forest metrics used to analyze the GEDI and UAV LiDAR showed promise for broader-scale monitoring, if more uncertainty is acceptable. Moreover, the complementary strengths across scales of each LiDAR method may offer valuable insights for forest management and conservation efforts, particularly in monitoring forest dynamics and informing strategic interventions aimed at preserving forest health and mitigating climate change impacts.
(This article belongs to the Section Forest Remote Sensing)
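The R2 values used to rank the three approaches are ordinary coefficients of determination; together with RMSE they can be computed from paired observed and predicted values. A pure-Python sketch with invented AGB numbers:

```python
import math

def r2_rmse(obs, pred):
    """Coefficient of determination (R2) and RMSE for paired
    observed/predicted values, e.g. plot-level AGB."""
    n = len(obs)
    mean_obs = sum(obs) / n
    ss_res = sum((o - p) ** 2 for o, p in zip(obs, pred))
    ss_tot = sum((o - mean_obs) ** 2 for o in obs)
    return 1 - ss_res / ss_tot, math.sqrt(ss_res / n)

# Illustrative AGB values in Mg/ha (not from the study):
obs = [120.0, 95.0, 150.0, 80.0]
pred = [110.0, 100.0, 140.0, 90.0]
r2, rmse = r2_rmse(obs, pred)
print(r2, rmse)
```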
Figure 1
<p>Study area within the Smithsonian Conservation Biology Institute (SCBI) showing the ForestGEO plot. The ForestGEO plot’s canopy height model provides detailed information on forest structure (<b>a</b>), while the map inset highlights tree mortality locations within the plot (<b>b</b>).</p>
Figure 2
<p>Illustration of overall steps used for AGB and tree mortality modeling and validation. Lidar (<b>a1</b>) and field (<b>a2</b>) data collection; the feature extraction from spaceborne, UAV, and ITD LiDAR datasets (<b>b</b>); and the steps for tree mortality modeling and validation (<b>c</b>).</p>
Figure 3
<p>A comparison of model performance across the GEDI (blue), UAV (green), and ITD (red) LiDAR datasets in terms of (<b>a</b>) bias; (<b>b</b>) relative bias (%bias); (<b>c</b>) root mean squared error (RMSE); (<b>d</b>) relative root mean squared error (%RMSE); and (<b>e</b>) coefficient of determination R<sup>2</sup> for the testing datasets. Strong evidence against the null hypothesis is denoted by *** <span class="html-italic">p</span> &lt; 0.001.</p>
Figure 4
<p>Top 5 variables for AGB estimation according to the mean decrease importance for (<b>a</b>) GEDI, (<b>b</b>) UAV, and (<b>c</b>) ITD.</p>
Figure 5
<p>GEDI (blue), UAV (green), and ITD (orange) relationship between the observed and predicted tree mortality explained by bias, RMSE, and the CCC. In addition, percentages of predicted AGB loss, ranging from 100% (<b>a</b>) down to 30% (<b>h</b>) in 10% increments (<b>b</b>–<b>g</b>), display different assumptions of biomass retention in the results.</p>
Figure 6
<p>Bias (<b>a</b>), RMSE (<b>b</b>), and CCC (<b>c</b>) variation as the percentage of observed tree mortality decreased by remote sensing product.</p>
21 pages, 6718 KiB  
Review
Early Warning Signs in Tree Crowns as a Response to the Impact of Drought
by Goran Češljar, Ilija Đorđević, Saša Eremija, Miroslava Marković, Renata Gagić Serdar, Aleksandar Lučić and Nevena Čule
Forests 2025, 16(3), 405; https://doi.org/10.3390/f16030405 - 24 Feb 2025
Abstract
The interaction between trees’ water needs during drought and the signals that appear in their canopies is not fully understood. The first visually detectable signs, which we describe as early warning signals in tree canopies, are often not noticeable at first glance. When these signs become widely apparent, tree decline is already underway. In this study, we focus on identifying early visible signs of drought stress in the tree crowns, such as very small leaves, premature needle/leaf discolouration and abscission, and defoliation. We provide guidance on recognising initial signs, offer specific examples, and comprehensively analyse each signal. Our focus is on signs in the tree crowns that appear during intense and prolonged droughts, which we confirmed by calculating the Standardised Precipitation Evapotranspiration Index (SPEI). Our findings are based on 20 years (2004–2024) of continuous fieldwork and data collection from permanent sample plots in Serbia, which was conducted as part of the International Co-operative Programme on Assessment and Monitoring of Air Pollution Effects on Forests (ICP Forests). We also conducted a comprehensive review of the literature and key findings related to the early signs we address. This research was further motivated by the signs observed in the tree crowns during the summer of 2024 due to extreme climatic events, which classify this year as one of the hottest recorded in Serbia. However, we still cannot conclusively determine which specific trees will die back based solely on these early warning signals, as some trees manage to withstand severe drought conditions. Nonetheless, the widespread appearance of these indicators is a clear warning of significant ecosystem instability, potentially leading to the decline of individual trees or larger groups.
(This article belongs to the Special Issue Abiotic and Biotic Stress Responses in Trees Species)
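For orientation, the SPEI standardizes the climatic water balance D = precipitation − potential evapotranspiration against a reference distribution (the full method fits a log-logistic distribution to D). The sketch below uses a plain z-score as a simplified stand-in, with invented monthly values:

```python
import statistics

def water_balance_zscore(reference_d, current_d):
    """Simplified drought indicator: z-score of the climatic water
    balance D = P - PET against a reference series. The real SPEI
    fits a log-logistic distribution to D instead of assuming
    normality; this is only an illustrative stand-in."""
    mu = statistics.mean(reference_d)
    sigma = statistics.stdev(reference_d)
    return (current_d - mu) / sigma

# Invented monthly D values (mm); strongly negative scores flag drought
# (SPEI values below about -1.5 are conventionally read as severe).
reference = [-20, 5, 30, -10, 15, 40, -5, 25]
print(water_balance_zscore(reference, -45))
```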
Figure 1
<p>Early warning signs in tree crowns as a response to the impact of drought.</p>
Figure 2
<p>Observation area with permanent sample plots Level I and Level II.</p>
Figure 3
<p>Moisture conditions in the observation area: (<b>A</b>) SPEI-12, (<b>B</b>) SPEI-6.</p>
Figure 4
<p>Individual Beech tree (<span class="html-italic">Fagus sylvatica</span> L.): (<b>A</b>) with small leaves (May 2024), (<b>B</b>) and with the final outcome of desiccation during the drought period of the same year (July 2024), (<b>C</b>) the observed tree on the left with small leaves compared to the tree on the right with normal leaf size (May 2024).</p>
Figure 5
<p>Examples of premature discolouration (ageing) of leaves and needles induced by drought during the growing season. (<b>A</b>) <span class="html-italic">Fagus sylvatica</span> L. (August 2024), (<b>B</b>) <span class="html-italic">Abies alba</span> Mill. (August 2013), (<b>C</b>) <span class="html-italic">Picea abies</span> (L.) H. Karst. (July 2023).</p>
Figure 6
<p>The impact of drought on leaf discolouration at a broader area level (August 2024).</p>
Figure 7
<p>Examples of premature leaf abscission/shedding caused by drought during the vegetative period. <span class="html-italic">Fagus sylvatica</span> L.: (<b>A</b>) (August 2013), (<b>B</b>) (beginning of September 2024), (<b>C</b>) (beginning of September 2024).</p>
Figure 8
<p>The impact of drought on premature leaf abscission at a broader area level (August 2024).</p>
22 pages, 4474 KiB  
Article
Advancing Stem Volume Estimation Using Multi-Platform LiDAR and Taper Model Integration for Precision Forestry
by Yongkyu Lee and Jungsoo Lee
Remote Sens. 2025, 17(5), 785; https://doi.org/10.3390/rs17050785 - 24 Feb 2025
Abstract
Stem volume is a critical factor in managing and evaluating forest resources. At present, stem volume is commonly estimated indirectly by constructing a taper model that utilizes sampling, diameter at breast height (DBH), and tree height. However, these estimates are constrained by errors arising from spatial and stand environment variations as well as uncertainties in height measurements. To address these issues, this study aimed to accurately estimate stem volume using light detection and ranging (LiDAR) technology, a key tool in modern precision forestry. LiDAR data were used to build comprehensive three-dimensional models of forests with multi-platform LiDAR systems. This approach allowed for precise measurements of tree heights and stem diameters at various heights, effectively mitigating the limitations of earlier measurement methods. Based on these data, a Kozak taper curve was developed at the individual tree level using LiDAR-derived tree height and diameter estimates. Integrating this curve with LiDAR data enabled a hybrid approach to estimating stem volume, facilitating the calculation of diameters at points not directly identifiable from LiDAR data alone. The proposed method was implemented and evaluated for two economically significant tree species in Korea: Pinus koraiensis and Larix kaempferi. The RMSE comparison between the taper curve-based approach and the hybrid volume estimation method showed that, for Pinus koraiensis, the RMSE was 0.11 m3 using the taper curve-based approach and 0.07 m3 for the hybrid method, while for Larix kaempferi, the RMSE was 0.13 m3 and 0.05 m3, respectively. The estimation error of the hybrid method was approximately half that of the taper curve-based approach. Consequently, the hybrid volume estimation method, which integrates LiDAR and the taper model, overcomes the limitations of conventional taper curve-based approaches and contributes to improving the accuracy of forest resource monitoring.
(This article belongs to the Special Issue Remote Sensing-Assisted Forest Inventory Planning)
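To make the taper-curve idea concrete: a taper model predicts stem diameter at any height from DBH and total height, and integrating the implied cross-sectional area along the stem gives volume. The sketch below uses a simple quadratic taper in the spirit of the Kozak family (the variable-exponent Kozak form used in studies like this has many more terms); all coefficients are invented, not fitted values:

```python
import math

def taper_diameter(dbh, height, h, b=(1.0, -1.5, 0.5)):
    """Diameter (cm) at height h (m) from a simple quadratic taper
    d^2 / DBH^2 = b0 + b1*(h/H) + b2*(h/H)^2. With the illustrative
    coefficients above, d = DBH at the base and d = 0 at the tip."""
    x = h / height
    ratio = b[0] + b[1] * x + b[2] * x ** 2
    return dbh * math.sqrt(max(ratio, 0.0))

def stem_volume(dbh, height, n=100):
    """Stem volume (m^3) by trapezoidal integration of cross-sectional
    area pi * (d/200)^2 (d in cm -> radius in m) along the stem."""
    hs = [height * i / n for i in range(n + 1)]
    areas = [math.pi * (taper_diameter(dbh, height, h) / 200) ** 2 for h in hs]
    dz = height / n
    return sum((a0 + a1) / 2 * dz for a0, a1 in zip(areas, areas[1:]))

# A 30 cm DBH, 20 m tall tree under the invented taper:
print(round(stem_volume(30, 20), 3))
```

The hybrid approach described above would replace model-predicted diameters with LiDAR-measured ones wherever the point cloud resolves the stem, falling back on the taper curve elsewhere.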
Figure 1
<p>Location of the study area: academic forest of Kangwon National University (Kangwon-do, Republic of Korea).</p>
Figure 2
<p>Selection and spatial distribution of reference trees by species considering diameter at breast height and height.</p>
Figure 3
<p>Flowchart for advancing stem volume estimation using multi-platform LiDAR and taper model integration [<a href="#B13-remotesensing-17-00785" class="html-bibr">13</a>].</p>
Figure 4
<p>Comparison of TLS and ULS LiDAR data collection results for <span class="html-italic">P. koraiensis</span> and <span class="html-italic">L. kaempferi</span>.</p>
Figure 5
<p>Reference tree data collection.</p>
Figure 6
<p>Schematic of stem data construction and diameter estimation at various heights using LiDAR data.</p>
Figure 7
<p>Comparison of accuracy between Vertex-measured height and LiDAR-based height relative to reference tree data.</p>
Figure 8
<p>Distribution of diameter residuals by relative height and predicted diameter for each species: comparison between LiDAR-based taper model (LD) and standard volume table-based taper model (SD).</p>
Figure 9
<p>Evaluation of stem volume estimation accuracy using standard and LiDAR methods.</p>
25 pages, 9167 KiB  
Review
Modeling LiDAR-Derived 3D Structural Metric Estimates of Individual Tree Aboveground Biomass in Urban Forests: A Systematic Review of Empirical Studies
by Ruonan Li, Lei Wang, Yalin Zhai, Zishan Huang, Jia Jia, Hanyu Wang, Mengsi Ding, Jiyuan Fang, Yunlong Yao, Zhiwei Ye, Siqi Hao and Yuwen Fan
Forests 2025, 16(3), 390; https://doi.org/10.3390/f16030390 - 22 Feb 2025
Abstract
The aboveground biomass (AGB) of individual trees is a critical indicator for assessing urban forest productivity and carbon storage. In the context of global warming, it plays a pivotal role in understanding urban forest carbon sequestration and regulating the global carbon cycle. Recent advances in light detection and ranging (LiDAR) have enabled the detailed characterization of three-dimensional (3D) structures, significantly enhancing the accuracy of individual tree AGB estimation. This review examines studies that use LiDAR-derived 3D structural metrics to model and estimate individual tree AGB, identifying key metrics that influence estimation accuracy. A bibliometric analysis of 795 relevant articles from the Web of Science Core Collection was conducted using R Studio (version 4.4.1) and VOSviewer 1.6.20 software, followed by an in-depth review of 80 papers focused on urban forests, published after 2010 and selected from the first and second quartiles of the Chinese Academy of Sciences journal ranking. The results show the following: (1) Dalponte2016 and watershed are more widely used among 2D raster-based algorithms, and 3D point cloud-based segmentation algorithms offer greater potential for innovation; (2) tree height and crown volume are important 3D structural metrics for individual tree AGB estimation, and biomass indices that integrate these parameters can further improve accuracy and applicability; (3) machine learning algorithms such as Random Forest and deep learning consistently outperform parametric methods, delivering stable AGB estimates; (4) LiDAR data sources, point cloud density, and forest types are important factors that significantly affect the accuracy of individual tree AGB estimation. Future research should emphasize deep learning applications for improving point cloud segmentation and 3D structure extraction accuracy in complex forest environments. 
Additionally, optimizing multi-sensor data fusion strategies to address data matching and resolution differences will be crucial for developing more accurate and widely applicable AGB estimation models.
(This article belongs to the Section Urban Forestry)
Figure 1
<p>(<b>a</b>) Annual publications from 2003 to 2024. (<b>b</b>) Top 8 productive journals from 2003 to 2024. The size of the circles represents the number of publications; larger circles indicate higher publication volumes.</p>
Figure 2
<p>(<b>a</b>) Top 20 most productive countries. (<b>b</b>) Country collaboration map. The line thickness represents the strength of collaboration.</p>
Figure 3
<p>(<b>a</b>) Top 10 most productive affiliations from 2003 to 2024. (<b>b</b>) The performances of the top 10 most productive authors from 2003 to 2024. (<b>c</b>) Affiliation co-occurrence network. (<b>d</b>) Author co-occurrence network. The size of the circles represents the publication volume, and edge thickness represents the collaboration strength.</p>
Figure 4
<p>The distribution of reviewed studies categorized by (<b>a</b>) country and (<b>b</b>–<b>g</b>) city. The size of the circles represents the number of studies, with larger circles indicating a higher number of studies.</p>
26 pages, 29509 KiB  
Article
MangiSpectra: A Multivariate Phenological Analysis Framework Leveraging UAV Imagery and LSTM for Tree Health and Yield Estimation in Mango Orchards
by Muhammad Munir Afsar, Muhammad Shahid Iqbal, Asim Dilawar Bakhshi, Ejaz Hussain and Javed Iqbal
Remote Sens. 2025, 17(4), 703; https://doi.org/10.3390/rs17040703 - 19 Feb 2025
Abstract
Mango (Mangifera Indica L.), a key horticultural crop, particularly in Pakistan, has been primarily studied locally using low- to medium-resolution satellite imagery, usually focusing on a particular phenological stage. The large canopy size, complex tree structure, and unique phenology of mango trees further accentuate intrinsic challenges posed by low-spatiotemporal-resolution data. The absence of mango-specific vegetation indices compounds the problem of accurate health classification and yield estimation at the tree level. To overcome these issues, this study utilizes high-resolution multi-spectral UAV imagery collected from two mango orchards in Multan, Pakistan, throughout the annual phenological cycle. It introduces MangiSpectra, an integrated two-staged framework based on Long Short-Term Memory (LSTM) networks. In the first stage, nine conventional and three mango-specific vegetation indices derived from UAV imagery were processed through fine-tuned LSTM networks to classify the health of individual mango trees. In the second stage, associated data such as the trees’ age, variety, canopy volume, height, and weather data were combined with predicted health classes for yield estimation through a decision tree algorithm. Three mango-specific indices, namely the Mango Tree Yellowness Index (MTYI), Weighted Yellowness Index (WYI), and Normalized Automatic Flowering Detection Index (NAFDI), were developed to measure the degree of canopy covered by flowers to enhance the robustness of the framework. In addition, a Cumulative Health Index (CHI) derived from imagery analysis after every flight is also proposed for proactive orchard management. MangiSpectra outperformed the comparative benchmarks of AdaBoost and Random Forest in health classification by achieving 93% accuracy and AUC scores of 0.85, 0.96, and 0.92 for the healthy, moderate and weak classes, respectively. Yield estimation accuracy was reasonable with R2=0.21, and RMSE=50.18. 
Results underscore MangiSpectra’s potential as a scalable precision agriculture tool for sustainable mango orchard management, which can be improved further by fine-tuning algorithms using ground-based spectrometry, IoT-based orchard monitoring systems, computer vision-based counting of fruit on control trees, and smartphone-based data collection and insight dissemination applications. Full article
(This article belongs to the Special Issue Application of Satellite and UAV Data in Precision Agriculture)
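The article defines its flowering indices (MTYI, WYI, NAFDI) itself; as a rough, hypothetical sketch of the band-ratio idea behind such indices, contrasting the yellow (red plus green) response of a flowering canopy against blue, one could write the following. The formula and reflectance values are illustrative assumptions, not the paper's NAFDI:

```python
import numpy as np

def yellowness_index(red, green, blue, eps=1e-9):
    # Hypothetical flowering index: mean red-green ("yellow") response
    # contrasted with blue, normalized to [-1, 1]. This is NOT the paper's
    # MTYI/WYI/NAFDI formula, only an illustration of the band-ratio idea.
    yellow = (red + green) / 2.0
    return (yellow - blue) / (yellow + blue + eps)

# Toy 2 x 2 canopy patch reflectances in [0, 1]: the left column is in flower.
red = np.array([[0.60, 0.20],
                [0.55, 0.15]])
green = np.array([[0.60, 0.30],
                  [0.55, 0.25]])
blue = np.array([[0.10, 0.30],
                 [0.12, 0.25]])
idx = yellowness_index(red, green, blue)
```

Pixels covered by yellow flowers score high, green foliage scores near or below zero, so thresholding such an index per canopy gives a per-tree flowering fraction.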
Graphical abstract
Figure 1
<p>The study area is located in Multan, Punjab, Pakistan, as shown in the inset maps. The main experimental site, Orchard 1 (outlined in red), covers an area of 45 acres and contains 1305 trees. The validation site, Orchard 2 (outlined in yellow), spans 55 acres with 1833 trees. The location of both orchards is indicated in the high-resolution satellite image. The grid overlay provides geospatial reference points, with latitude and longitude markers.</p>
Figure 2
<p>Mango yield estimates across different varieties, age groups, and health conditions, showing variations in productivity based on cultivar type and overall mango tree health.</p>
Figure 3
<p>Overview of the four-staged integrated MangiSpectra framework for tree-level health and yield estimation.</p>
Figure 4
<p>Unsegmented tree canopies of the same group of 12 mango trees over different flight dates in Orchard 1 showing the effect of underlying vegetation. (<b>Left</b>): RGB image; (<b>Right</b>): Normalized GNDVI.</p>
Figure 5
<p>Cumulative trend of key vegetation indices across phenological stages at Orchard 1.</p>
Figure 6
<p>Progression of flowering from March to April 2024 as detected by Mango Tree Yellowness Index (MTYI), Weighted Yellowness Index (WYI), and Normalized Automatic Flowering Detection Index (NAFDI) on the canopy of the same tree. RGB UAV imagery in the top row is for visual reference. The color scale represents the relative density of flowers, with red indicating a lower degree of flowering and green indicating higher.</p>
Figure 7
<p>Sample of per-flight health classification and in-season farming intervention recommendations. The map shows the health categories of trees in Orchard 1 on 24 March 2024 during the flowering stage.</p>
Figure 8
<p>Utilization of LSTM component within the MangiSpectra framework for health classification and yield estimation.</p>
Figure 9
<p>Key performance metrics of the LSTM model for tree health classification: (<b>a</b>) training and test accuracy over epochs, (<b>b</b>) confusion matrix, (<b>c</b>) ROC curves for each class, and (<b>d</b>) F1 score, accuracy, and class distribution.</p>
Figure 10
<p>Analysis of tree health in the orchard using the MangiSpectra framework: (<b>a</b>) age distribution by tree health status, (<b>b</b>) model accuracy comparison, (<b>c</b>) age distribution by model agreement on tree health status, (<b>d</b>) age distribution by tree health status and MangiSpectra prediction.</p>
Figure 11
<p>Spatial distribution of tree health as estimated by MangiSpectra for Orchard 1. Each dot represents an individual tree, categorized into three health classes: healthy (green, 639 trees), moderate (yellow, 405 trees), and weak (red, 261 trees). The background heat map provides an interpolated health estimate, highlighting areas of varying tree conditions with greenish arcs indicating prevalence of healthier trees, yellowish indicating moderate, and reddish areas showing clustering of weak trees.</p>
Figure 12
<p>Comparison of actual yield with model predictions: (<b>a</b>) actual yield compared with MangiSpectra estimate, (<b>b</b>) actual yield compared with Random Forest estimate, (<b>c</b>) estimated yield of MangiSpectra correlated with age, and (<b>d</b>) predicted yields of MangiSpectra and Random Forest.</p>
Figure 13
<p>Spatial distribution of yield estimates in Orchard 1. Individual trees are represented by circles. Circle sizes correspond to the normalized yield, while colors indicate the yield estimate: low (red), moderate (yellow), and high (green). The gradient background depicts yield estimate zones from low (red) to high (green).</p>
Figure 14
<p>Spatial distribution of health classification over Orchard 2.</p>
29 pages, 12160 KiB  
Article
Integration of UAS and Backpack-LiDAR to Estimate Aboveground Biomass of Picea crassifolia Forest in Eastern Qinghai, China
by Junejo Sikandar Ali, Long Chen, Bingzhi Liao, Chongshan Wang, Fen Zhang, Yasir Ali Bhutto, Shafique A. Junejo and Yanyun Nian
Remote Sens. 2025, 17(4), 681; https://doi.org/10.3390/rs17040681 - 17 Feb 2025
Viewed by 365
Abstract
Precise aboveground biomass (AGB) estimation of forests is crucial for sustainable carbon management and ecological monitoring. Traditional methods, such as destructive sampling, field measurements of Diameter at Breast Height with height (DBH and H), and optical remote sensing imagery, often fall short in capturing detailed spatial heterogeneity in AGB estimation and are labor-intensive. Recent advancements in remote sensing technologies, predominantly Light Detection and Ranging (LiDAR), offer potential improvements in accurate AGB estimation and ecological monitoring. Nonetheless, there is limited research on the combined use of UAS (Uncrewed Aerial System) and Backpack-LiDAR technologies for detailed forest biomass estimation. Thus, our study aimed to estimate AGB at the plot level for Picea crassifolia forests in eastern Qinghai, China, by integrating UAS-LiDAR and Backpack-LiDAR data. The Comparative Shortest Path (CSP) algorithm was employed to segment the point clouds from the Backpack-LiDAR, detect seed points, and calculate the DBH of individual trees. After that, using these initial seed point files, we segmented the individual trees from the UAS-LiDAR data by employing the Point Cloud Segmentation (PCS) method and measured individual tree heights, which enabled the calculation of the observed/measured AGB across three specific areas. Furthermore, advanced regression models, such as Random Forest (RF), Multiple Linear Regression (MLR), and Support Vector Regression (SVR), were used to estimate AGB from the integrated data of both sources (UAS and Backpack-LiDAR). Our results show that: (1) DBH extracted from the Backpack-LiDAR agreed closely with field-measured DBH (R2 = 0.88, RMSE = 0.04 m), whereas height extracted from the UAS-LiDAR achieved R2 = 0.91 and RMSE = 1.68 m, which verifies the reliability of the DBH and height obtained from the LiDAR data. 
(2) Individual Tree Segmentation (ITS) of the UAS-LiDAR data, seeded with a file of X and Y coordinates from the Backpack-LiDAR, attained an overall F-score of 0.96. (3) Using the allometric equation, we obtained AGB values ranging from 9.95 to 409 Mg/ha. (4) The RF model demonstrated superior accuracy, with a coefficient of determination (R2) of 89%, a relative Root Mean Square Error (rRMSE) of 29.34%, and a Root Mean Square Error (RMSE) of 33.92 Mg/ha, compared to the MLR and SVR models in AGB prediction. (5) The combination of Backpack-LiDAR and UAS-LiDAR enhanced the ITS accuracy for the AGB estimation of forests. This work highlights the potential of integrating LiDAR technologies to advance ecological monitoring, which can be very important for climate change mitigation and sustainable environmental management in forest monitoring practices. Full article
(This article belongs to the Special Issue Remote Sensing and Lidar Data for Forest Monitoring)
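The per-tree AGB values behind the plot-level estimates come from an allometric equation applied to LiDAR-derived DBH and height. The study's species-specific equation is not reproduced in the abstract, so the sketch below uses a generic power-law form with placeholder coefficients to show the DBH + height to Mg/ha pipeline:

```python
import numpy as np

def agb_allometric(dbh_cm, height_m, a=0.05, b=0.95):
    # Generic power-law allometry AGB = a * (DBH^2 * H)^b, in kg per tree.
    # The coefficients a and b are illustrative placeholders, not the
    # Picea crassifolia coefficients used in the study.
    return a * (dbh_cm ** 2 * height_m) ** b

dbh = np.array([18.0, 25.0, 32.0])      # DBH from Backpack-LiDAR, cm
height = np.array([12.0, 16.0, 20.0])   # heights from UAS-LiDAR, m
agb_kg = agb_allometric(dbh, height)

# Aggregate per-tree kg to Mg/ha for a 30 x 30 m (0.09 ha) sample plot.
plot_agb_mg_ha = agb_kg.sum() / 1000.0 / 0.09
```

Regression models such as RF, MLR, or SVR are then trained to predict these plot-level values from LiDAR metrics.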
Figure 1
<p>Maps of the selected <span class="html-italic">Picea crassifolia</span> forest study area and distribution of sample plots. (<b>a</b>) Location of the Qinghai−Tibetan Plateau in China. (<b>b</b>) The Digital Elevation Model (DEM) represents height above sea level. (<b>c</b>) The Normalized Difference Vegetation Index (NDVI) shows the vegetation distribution ratio map of three <span class="html-italic">Picea crassifolia</span> forest sites, derived from Sentinel satellite images in Google Earth Engine (GEE), with Google Earth images showing the study area. (<b>d</b>) LiDAR point cloud cross-sectional view of alignment; the black dots represent Backpack-LiDAR data, while chromatic dots show UAS-LiDAR data alignment of both platforms over each other.</p>
Figure 2
<p>One plot of 30 × 30 m for the Backpack-LiDAR data acquisition method; the yellow lines are the distance from one end to the other end of the plot, and the red lines represent the data acquisition track.</p>
Figure 3
<p>The schematic diagram for one-plot data segmentation: each 30 × 30 m plot was subdivided into 10 × 10 m subplots.</p>
Figure 4
<p>The workflow overview for estimating forest aboveground biomass using LiDAR data.</p>
Figure 5
<p>Data alignment of both platforms (Backpack and UAS-LiDAR): (<b>a</b>) Marking numbers for the same tree point cloud in the UAS and Backpack-LiDAR point clouds, (<b>b</b>) Normalizing 30 × 30 m LiDAR point clouds, (<b>c</b>) Cross-section line on the point clouds; chromatic color represents the Backpack while RGB is the UAS point clouds, (<b>d</b>) Overlay maps of original Backpack-LiDAR (black) and UAS-LiDAR (chromatic) points, (<b>e</b>) Overlay maps of normalized Backpack-LiDAR and UAS-LiDAR point clouds. A yellow line in (<b>c</b>,<b>d</b>) represents the cross-sectional region on the point clouds from “A” towards “B” at both ends.</p>
Figure 6
<p>LiDAR360 software interface for Backpack-LiDAR data segmentation; the result shows a trunk slice at 1.3 m. (<b>a</b>) Backpack-LiDAR point cloud data for a single tree. (<b>b</b>) Point cloud data fitted to DBH at 1.3 m. (<b>c</b>) Incorrectly classified trees manually corrected. (<b>d</b>) Segmentation results of Backpack-LiDAR point clouds.</p>
Figure 7
<p>Comparisons between field-measured DBH, H, and extracted DBH from Backpack-LiDAR and H from UAS-LiDAR point cloud data. (<b>a</b>) DBH comparison, and (<b>b</b>) Height comparison.</p>
Figure 8
<p>LiDAR variables essential for AGB prediction (<b>a</b>) The importance rank of the variable based on random forest. (<b>b</b>) Pearson correlation between LiDAR variables.</p>
Figure 9
<p>Field-estimated Forest AGB (Mg/ha) versus predicted forest AGB (Mg/ha). (<b>a</b>) MLR model. (<b>b</b>) RF model (<b>c</b>) SVR model. The solid line represents the fitting model and the gray areas show the 95% confidence intervals of the fitting models.</p>
Figure 10
<p>The performance comparison of MLR, RF, and SVR, using R<sup>2</sup>, RMSE, and rRMSE, where R<sup>2</sup> shows a random forest with the highest score of 89%.</p>
Figure 11
<p>Graphical representation of the results for tree segmentation. Where (<b>a</b>) illustrates graphical results of Backpack-LiDAR point cloud data segmentation, (<b>b</b>) shows the results of the UAS-LiDAR data based on the segmentation results using seed points, and (<b>c</b>) demonstrates the segmentation results of the UAS-LiDAR data without seed points. Note that each color in the above figure characterizes a different tree species, whereas the polygon areas overlapped on (<b>b</b>,<b>c</b>) refer to the distinct trees and/or tree crowns resulting from the graphical illustration.</p>
26 pages, 27528 KiB  
Article
A Stereo Visual-Inertial SLAM Algorithm with Point-Line Fusion and Semantic Optimization for Forest Environments
by Bo Liu, Hongwei Liu, Yanqiu Xing, Weishu Gong, Shuhang Yang, Hong Yang, Kai Pan, Yuanxin Li, Yifei Hou and Shiqing Jia
Forests 2025, 16(2), 335; https://doi.org/10.3390/f16020335 - 13 Feb 2025
Viewed by 347
Abstract
Accurately localizing individual trees and identifying species distribution are critical tasks in forestry remote sensing. Visual Simultaneous Localization and Mapping (visual SLAM) algorithms serve as important tools for outdoor spatial positioning and mapping, mitigating signal loss caused by tree canopy obstructions. To address these challenges, a semantic SLAM algorithm called LPD-SLAM (Line-Point-Distance Semantic SLAM) is proposed, which integrates stereo cameras with an inertial measurement unit (IMU), with contributions including dynamic feature removal, an individual tree data structure, and semantic point distance constraints. LPD-SLAM is capable of performing individual tree localization and tree species discrimination tasks in forest environments. In mapping, LPD-SLAM reduces false species detection and filters dynamic objects by leveraging a deep learning model and a novel individual tree data structure. In optimization, LPD-SLAM incorporates point and line feature reprojection error constraints along with semantic point distance constraints, which improve robustness and accuracy by introducing additional geometric constraints. Due to the lack of publicly available forest datasets, we chose to validate the proposed algorithm on eight experimental plots, which were selected to cover different seasons, various tree species, and different data collection paths, ensuring the dataset’s diversity and representativeness. The experimental results indicate that the average root mean square error (RMSE) of the trajectories of LPD-SLAM is reduced by up to 81.2% compared with leading algorithms. Meanwhile, the mean absolute error (MAE) of LPD-SLAM in tree localization is 0.24 m, which verifies its excellent performance in forest environments. Full article
(This article belongs to the Section Forest Inventory, Modeling and Remote Sensing)
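The two evaluation metrics quoted in the abstract, trajectory RMSE and tree-localization MAE, reduce to simple statistics over per-pose (or per-tree) Euclidean position errors. A minimal sketch, assuming the estimated and ground-truth trajectories are already time-aligned and expressed in a common frame:

```python
import numpy as np

def trajectory_rmse(estimated, ground_truth):
    # Root mean square of per-pose Euclidean position errors, the metric
    # used to compare SLAM trajectories against a surveyed ground truth.
    err = np.linalg.norm(estimated - ground_truth, axis=1)
    return float(np.sqrt(np.mean(err ** 2)))

def localization_mae(estimated, ground_truth):
    # Mean absolute (Euclidean) error, as reported for tree localization.
    err = np.linalg.norm(estimated - ground_truth, axis=1)
    return float(np.mean(err))

# Toy 3-pose trajectory with a 0.1 m error at each pose.
gt = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [2.0, 0.0, 0.0]])
est = np.array([[0.1, 0.0, 0.0], [1.0, 0.1, 0.0], [2.0, 0.0, 0.1]])
rmse = trajectory_rmse(est, gt)
mae = localization_mae(est, gt)
```

In practice a rigid (or similarity) alignment step such as Umeyama's method precedes these metrics; it is omitted here for brevity.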
Figure 1
<p>System framework.</p>
Figure 2
<p>Example of real-time system operation.</p>
Figure 3
<p>Data collection equipment.</p>
Figure 4
<p>Generation of the semantic segmentation mask.</p>
Figure 5
<p>Semantic feature extraction.</p>
Figure 6
<p>Stereo vision geometry.</p>
Figure 7
<p>Extraction of stereo point and line features.</p>
Figure 8
<p>Establishment of global individual tree database.</p>
Figure 9
<p>Postex multi-functional tree measurement system.</p>
Figure 10
<p>TSI acquisition of ground truth trajectory data.</p>
Figure 11
<p>Experimental data.</p>
Figure 12
<p>Individual tree localization coordinate comparison on 8 experimental plots.</p>
Figure 13
<p>Trajectory comparison of different algorithms on 8 experimental plots.</p>
20 pages, 2742 KiB  
Article
Impact of Parameters and Tree Stand Features on Accuracy of Watershed-Based Individual Tree Crown Detection Method Using ALS Data in Coniferous Forests from North-Eastern Poland
by Marcin Kozniewski, Łukasz Kolendo, Szymon Chmur and Marek Ksepko
Remote Sens. 2025, 17(4), 575; https://doi.org/10.3390/rs17040575 - 8 Feb 2025
Viewed by 392
Abstract
The accurate detection of individual tree crowns and estimation of tree density is essential for effective forest management, biodiversity assessment, and ecological monitoring. The precision of tree crown detection algorithms plays a critical role in providing reliable data for these applications, where even slight inaccuracies can lead to significant deviations in tree population estimates and ecological indicators. Various algorithmic parameters, such as pixel size and crown segmentation thresholds, can substantially impact tree crown detection accuracy. This study aims to explore the influence of tree stand features and parameters on the effectiveness of the individual tree crown detection method based on a watershed algorithm, leading to identifying optimal configurations that enhance the reliability of forest inventories and support sustainable management practices. Our analysis of the algorithm results shows that the features of the tree stand, such as tree height variance and tree crown size variance, significantly impact the algorithm’s output in precisely estimating tree count. Consequently, adjusting the pixel size of a canopy height model in the context of tree stand features is necessary to minimize error. Additionally, our findings show that there is a need to carefully assess the criterion of membership of a detected tree crown in a circular sample plot, which we based on the point cloud. Full article
(This article belongs to the Section Forest Remote Sensing)
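The watershed-based ITCD pipeline the study parameterizes (CHM smoothing, treetop markers, crown growing) can be sketched with standard SciPy primitives. The smoothing sigma, window size, and height threshold below are illustrative assumptions, not the study's calibrated parameters, and `scipy.ndimage.watershed_ift` stands in for whatever watershed implementation the authors used:

```python
import numpy as np
from scipy import ndimage

def watershed_itcd(chm, smooth_sigma=1.0, min_height=2.0):
    # Watershed ITCD sketch: smooth the canopy height model (CHM), take
    # local maxima above a height threshold as treetop markers, then grow
    # crowns downhill with an image-foresting-transform watershed.
    chm_s = ndimage.gaussian_filter(chm, smooth_sigma)
    maxima = (chm_s == ndimage.maximum_filter(chm_s, size=3)) & (chm_s > min_height)
    markers, n_trees = ndimage.label(maxima)
    # watershed_ift needs an unsigned 8/16-bit cost image; invert the CHM
    # so treetops become basin seeds.
    cost = (255 * (chm_s.max() - chm_s) / max(chm_s.max(), 1e-9)).astype(np.uint8)
    crowns = ndimage.watershed_ift(cost, markers.astype(np.int16))
    return crowns, n_trees

# Toy CHM: two overlapping Gaussian crowns of different heights.
y, x = np.mgrid[0:40, 0:40]
chm = (10 * np.exp(-((x - 12) ** 2 + (y - 20) ** 2) / 30.0)
       + 8 * np.exp(-((x - 28) ** 2 + (y - 20) ** 2) / 30.0))
crowns, n_trees = watershed_itcd(chm)
```

The study's point is that the CHM pixel size and these thresholds interact with stand features (height and crown-size variance), so such parameters should be tuned per stand rather than fixed globally.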
Figure 1
<p>The study area is located in Northeastern Poland—Central Europe. The middle map shows the location of the Zednia Forest District in Poland with the distribution of forested areas in gray. The map on the right presents the distribution of circular ground sample plots in the Zednia Forest District that we used in our study.</p>
Figure 2
<p>The general outline of steps in the ITCD method.</p>
Figure 3
<p>The modeled circular plot with a tree crown modeled as a circle (a theoretical example) (<b>left</b>): most of the area of the tree crown model falls outside the sample plot even though the middle of the tree is located within it. An example of a real sample plot with tree crown segments, where the percentage of each segment’s point cloud inside the sample plot is expressed as the opacity of a red color (<b>right</b>), used to determine tree membership in the circular sample plot. White areas represent gaps in the canopy height model.</p>
Figure 4
<p>A Bayesian network created with PC algorithm based on a complete dataset of results for the ITCD method utilizing the watershed algorithm. The tree stand features (in light blue) are interconnected.</p>
Figure 5
<p>A Bayesian network created with the PC algorithm using a subset of the dataset of results for the ITCD method utilizing the watershed algorithm, where the sample plot tree membership threshold was 0.5. The tree stand features (in light blue) are interconnected.</p>
Figure 6
<p>Visualization of the average strength of influence calculated for all of the connections in the second BN from the GeNIe modeler—the thicker the arrow, the more substantial the influence. Additionally, conditional probability distribution of each variable is calculated given the lowest error, ranging from −0.05 to 0.05, using a membership threshold at a level of 0.5 (assumption of the second Bayesian network). Each conditional probability distribution is presented in the form of a horizontal bar chart. Image prepared with the GeNIe modeler.</p>
Figure 7
<p>Differences in conditional probability distributions of <span class="html-italic">Gaussian Smoothing</span> for tree crown membership thresholds equal to 0.45 (<b>left</b>) and 0.5 (<b>right</b>) given the perfect results with tree crown count estimation errors in the range [−0.05, 0.05] for small SPs. Images prepared with the GeNIe modeler.</p>
Figure 8
<p>A Bayesian network with calculated marginal conditional probability distributions of <span class="html-italic">Pixel Size</span> variable given specific features of the tree stand, and <span class="html-italic">Error</span> is in one of the ranges (−0.15, −0.05], (−0.05, 0.05], or (0.05, 0.15]—suggested use of pixel size of 0.4 m (the dominant probability value).</p>
24 pages, 13025 KiB  
Article
Modelling LiDAR-Based Vegetation Geometry for Computational Fluid Dynamics Heat Transfer Models
by Pirunthan Keerthinathan, Megan Winsen, Thaniroshan Krishnakumar, Anthony Ariyanayagam, Grant Hamilton and Felipe Gonzalez
Remote Sens. 2025, 17(3), 552; https://doi.org/10.3390/rs17030552 - 6 Feb 2025
Viewed by 741
Abstract
Vegetation characteristics significantly influence the impact of wildfires on individual building structures, and these effects can be systematically analyzed using heat transfer modelling software. Close-range light detection and ranging (LiDAR) data obtained from uncrewed aerial systems (UASs) capture detailed vegetation morphology; however, the integration of dense vegetation and merged canopies into three-dimensional (3D) models for fire modelling software poses significant challenges. This study proposes a method for integrating the UAS–LiDAR-derived geometric features of vegetation components—such as bark, wooden core, and foliage—into heat transfer models. The data were collected from the natural woodland surrounding an elevated building in Samford, Queensland, Australia. Aboveground biomass (AGB) was estimated for 21 trees utilizing three 3D tree reconstruction tools, with validation against biomass allometric equations (BAEs) derived from field measurements. The most accurate reconstruction tool produced a tree mesh utilized for modelling vegetation geometry. A proof of concept was established with Eucalyptus siderophloia, incorporating vegetation data into heat transfer models. This non-destructive framework leverages available technologies to create reliable 3D tree reconstructions of complex vegetation in wildland–urban interfaces (WUIs). It facilitates realistic wildfire risk assessments by providing accurate heat flux estimations, which are critical for evaluating building safety during fire events, while addressing the limitations associated with direct measurements. Full article
(This article belongs to the Special Issue LiDAR Remote Sensing for Forest Mapping)
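The paper's geometry pipeline converts reconstructed tree meshes into voxels that a heat transfer model can consume. The core snap-to-grid step can be sketched as follows; the voxel size and sample points are illustrative, and the study's actual pipeline additionally distinguishes bark, wooden core, and foliage components:

```python
import numpy as np

def voxelize(points, voxel_size=0.1):
    # Snap 3-D points onto a regular grid and keep one centre per occupied
    # voxel, the kind of geometric representation passed on to the heat
    # transfer (FDS) model.
    idx = np.floor(points / voxel_size).astype(np.int64)
    occupied = np.unique(idx, axis=0)
    return (occupied + 0.5) * voxel_size

pts = np.array([[0.01, 0.02, 0.03],
                [0.04, 0.01, 0.09],   # falls in the same voxel as point 1
                [0.31, 0.52, 0.74]])
centres = voxelize(pts, voxel_size=0.1)
```

Deduplicating points into voxel centres is what keeps merged canopies and dense branch intersections tractable for the downstream fire simulation.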
Figure 1
<p>Main steps of the proposed methodology.</p>
Figure 2
<p>The WUI study site in Samford, Queensland. (<b>a</b>) The elevated building in the study site. (<b>b</b>) The vegetation surrounding the elevated building. (<b>c</b>) Location of study site in relation to Brisbane, Queensland.</p>
Figure 3
<p>Survey paths (red lines) and point clouds generated from (<b>a</b>) handheld LiDAR survey and (<b>b</b>) UAS–LiDAR survey.</p>
Figure 4
<p>(<b>a</b>) UAS–LiDAR and handheld LiDAR survey and (<b>b</b>) in situ data collection including DBH and height of surrounding vegetation.</p>
Figure 5
<p>Visualization of branch voxelization process, showing (<b>a</b>) triangulated mesh, (<b>b</b>) point-to-mesh face distances in a colour continuum from blue (negative distances) through green (distance = 0) to red (positive distances), and (<b>c</b>) voxelized stem.</p>
Figure 6
<p>Suboptimal voxelization result, showing (<b>a</b>) triangulated mesh of intersecting branches, (<b>b</b>) point distances from mesh faces in a colour continuum from blue (negative distances) through green (distance = 0) to red (positive distances), and (<b>c</b>) inadequate voxelization with voxels excluded.</p>
Figure 7
<p>Non-manifold edge on intersecting faces.</p>
Figure 8
<p>Visualization of <span class="html-italic">V<sub>t</sub></span>, <span class="html-italic">V</span>, <span class="html-italic">V<sub>m</sub></span>, and the normal vector of the inner and outer face pairs for the selected non-manifold edge.</p>
Figure 9
<p>Output of the automated Raycloudtools segmentation showing 21 trees closest to the building.</p>
Figure 10
<p>Volume estimates from 3D meshes reconstructed with three tree reconstruction tools vs. volumes estimated with a biomass allometric equation (BAE) using field-measured height and DBH.</p>
Figure 11
<p>The <span class="html-italic">Eucalyptus siderophloia</span> selected for geometric representation, depicted in (<b>a</b>) a photograph taken during the handheld LiDAR survey and (<b>b</b>) the manually segmented point cloud.</p>
Figure 12
<p>The reconstructed mesh of the <span class="html-italic">Eucalyptus siderophloia</span> produced by (<b>a</b>) TreeQSM, (<b>b</b>) AdTree, and (<b>c</b>) Raycloudtools.</p>
Figure 13
<p>(<b>a</b>) The result of the face differentiation process at a cylinder intersection in which the outer faces (shown in red) have been retained in the mesh, which is a surface representation. (<b>b</b>) The inner faces (shown in yellow) were discarded.</p>
Figure 14
<p>The result of removing the border faces depicted in red in (<b>a</b>) and (<b>b</b>) can be seen in (<b>c</b>), which shows a cylinder intersection from which all border faces have been eliminated.</p>
Figure 15
<p>The end result of our voxelization process showing the successful geometric representation (blue squares) of branches. The red squares denote the voxels that were missed while the inner faces were present.</p>
Figure 16
<p>Geometric representations of (<b>a</b>) a deciduous <span class="html-italic">Eucalyptus siderophloia</span>, and (<b>b</b>) a coniferous <span class="html-italic">Araucaria bidwillii</span>.</p>
Figure 17
<p>FDS model of <span class="html-italic">Eucalyptus siderophloia</span> (tree #20) showing particles identified as (<b>a</b>) foliage, (<b>b</b>) wooden core and bark, and (<b>c</b>) a comprehensive view of the whole tree.</p>
Figure 18
<p>Simulated fire spread in the FDS tree model.</p>
Figure 19
<p>(<b>a</b>) The incomplete point cloud of a <span class="html-italic">Callitris columellaris</span> (tree #9) where the trunk is hidden by foliage, and the 3D mesh reconstructions of this tree produced by (<b>b</b>) AdTree (<b>c</b>) TreeQSM, and (<b>d</b>) Raycloudtools.</p>
17 pages, 4853 KiB  
Article
Extracting Individual Tree Positions in Closed-Canopy Stands Using a Multi-Source Local Maxima Method
by Guozhen Lai, Meng Cao, Chengchuan Zhou, Liting Liu, Xun Zhong, Zhiwen Guo and Xunzhi Ouyang
Forests 2025, 16(2), 262; https://doi.org/10.3390/f16020262 - 1 Feb 2025
Viewed by 479
Abstract
The accurate extraction of individual tree positions is key to forest structure quantification, and Unmanned Aerial Vehicle (UAV) visible light data have become the primary data source for extracting individual tree locations. Compared to deep learning methods, classical detection methods require lower computational resources and have stronger interpretability and applicability. However, in closed-canopy forests, challenges such as crown overlap and uneven light distribution hinder extraction accuracy. To address this, the study improves the existing Revised Local Maxima (RLM) method and proposes a Multi-Source Local Maxima (MSLM) method, based on UAV visible light data, which integrates Canopy Height Models (CHMs) and Digital Orthophoto Mosaics (DOMs). Both the MSLM and RLM methods were used to extract individual tree positions from three different types of closed-canopy stands, and the extraction results of the two methods were compared. The results show that the MSLM method outperforms the RLM in terms of Accuracy Rate (85.59%), Overall Accuracy (99.09%), and F1 score (85.21%), with stable performance across different forest stand types. This demonstrates that the MSLM method can effectively overcome the challenges posed by closed-canopy stands, significantly improving extraction precision. These findings provide a cost-effective and efficient approach for forest resource monitoring and offer valuable insights for forest structure optimization and management. Full article
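The local-maxima family of detectors the MSLM method builds on searches for CHM pixels that dominate a height-dependent window. A minimal single-source sketch is below; the 10%-of-height crown radius is a hypothetical allometry, not the paper's calibration, and this sketch uses only the CHM, whereas MSLM additionally fuses the DOM:

```python
import numpy as np

def variable_window_maxima(chm, pixel_size=0.5, min_height=2.0):
    # Local-maxima treetop detection with a height-dependent window:
    # taller trees are searched with wider windows. A pixel is a treetop
    # if nothing inside its window is taller than it.
    tops = []
    for r, c in zip(*np.nonzero(chm > min_height)):
        h = chm[r, c]
        # hypothetical allometry: search radius ~ 10% of tree height
        radius = max(1, int(round(0.1 * h / pixel_size)))
        window = chm[max(0, r - radius):r + radius + 1,
                     max(0, c - radius):c + radius + 1]
        if h >= window.max():
            tops.append((r, c))
    return tops

# Toy CHM (0.5 m pixels): two overlapping Gaussian crowns.
y, x = np.mgrid[0:40, 0:40]
chm = (10 * np.exp(-((x - 12) ** 2 + (y - 20) ** 2) / 30.0)
       + 8 * np.exp(-((x - 28) ** 2 + (y - 20) ** 2) / 30.0))
tops = variable_window_maxima(chm)
```

On this synthetic stand the detector recovers exactly the two crown apexes; in real closed-canopy imagery the window function and the added DOM evidence are what keep overlapping crowns from merging into one detection.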
Figure 1
<p>Study area and sites locations.</p>
Figure 2
<p>Generated data (<b>a</b>) DOM, (<b>b</b>) DSM, (<b>c</b>) DTM, (<b>d</b>) CHM.</p>
Figure 3
<p>Flowchart of the MSLM method.</p>
Figure 4
<p>Flow of judgment for extracted individual tree positions.</p>
Figure 5
<p>Partial sample extracted results by different methods.</p>
Figure 6
<p>Accuracy analysis of different methods and land types.</p>
21 pages, 10908 KiB  
Article
Canopy Segmentation of Overlapping Fruit Trees Based on Unmanned Aerial Vehicle LiDAR
by Shiji Wang, Jie Ji, Lijun Zhao, Jiacheng Li, Mian Zhang and Shengling Li
Agriculture 2025, 15(3), 295; https://doi.org/10.3390/agriculture15030295 - 29 Jan 2025
Viewed by 463
Abstract
Utilizing LiDAR sensors mounted on unmanned aerial vehicles (UAVs) to acquire three-dimensional data of fruit orchards and extract precise information about individual trees can greatly facilitate unmanned management. To address the issue of low accuracy in traditional watershed segmentation methods based on canopy height models, this paper proposes an enhanced method to extract individual tree crowns in fruit orchards, enabling the improved detection of overlapping crown features. Firstly, a distribution curve of single-row or single-column treetops is fitted based on the treetops detected using a variable window size. Subsequently, a cubic spatial region extending infinitely along the Z-axis is generated with equal width around this curve, and all crown points falling within this region are extracted and then projected onto the central plane. The projected contour of the crowns on the plane is then fitted using Gaussian functions. Treetops are detected by identifying peak points on the curve fitted by Gaussian functions. Finally, the watershed algorithm is applied to segment fruit tree crowns. The results demonstrate that in citrus orchards with pronounced crown overlap, this novel method significantly reduces the number of undetected trees with a recall of 97.04%, and the F1 score representing the detection accuracy for fruit trees reaches 98.01%. Comparisons between the traditional method and the Gaussian fitting–watershed fusion algorithm across orchards exhibiting varying degrees of crown overlap reveal that the fusion algorithm achieves high segmentation accuracy when dealing with overlapping crowns characterized by significant height variations. Full article
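The Gaussian-fitting step described above, fitting the projected crown contour with a sum of Gaussians and taking its peaks as treetops, can be sketched with SciPy. The synthetic two-crown profile, the noise stand-in, and the initial guesses are all illustrative assumptions:

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.signal import find_peaks

def two_gaussians(x, a1, m1, s1, a2, m2, s2):
    # Sum of two Gaussians modelling the projected contour of two
    # overlapping crowns along one planted row.
    return (a1 * np.exp(-(x - m1) ** 2 / (2 * s1 ** 2))
            + a2 * np.exp(-(x - m2) ** 2 / (2 * s2 ** 2)))

# Synthetic crown-profile projection: two overlapping crowns plus a mild,
# deterministic perturbation standing in for measurement noise.
x = np.linspace(0.0, 20.0, 201)
profile = two_gaussians(x, 9.0, 7.0, 1.5, 7.0, 12.0, 1.8) + 0.05 * np.sin(5 * x)

p0 = [8.0, 6.0, 2.0, 6.0, 13.0, 2.0]      # rough initial guesses
params, _ = curve_fit(two_gaussians, x, profile, p0=p0)
fitted = two_gaussians(x, *params)
peak_idx, _ = find_peaks(fitted)
treetop_positions = x[peak_idx]
```

Peaks of the fitted curve serve as treetop markers for the subsequent watershed segmentation, which is why overlapping crowns with distinct heights are separated more reliably than by raw CHM maxima.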
Figure 1
<p>An overview of the study area. (<b>a</b>) The location of the study area and plots. (<b>b1</b>) Plot with non-overlapping trees. (<b>b2</b>) Plot with mixed overlapping trees. (<b>b3</b>) Plot with highly overlapping trees.</p>
Figure 2
<p>Workflow of point cloud data preprocessing and Gaussian fitting–watershed fusion algorithm.</p>
Figure 3
<p>Canopy profile points and their projection points on the corresponding plane.</p>
Figure 4
<p>The graph displays a curve with red markers indicating more than two peaks within a single crest and a green marker representing a valley.</p>
Figure 5
<p>The projection points of the crown profile in the target plane and the peak finding results based on the curve fitted by Gaussian function.</p>
Figure 6
<p>The difference between our algorithm and the traditional watershed algorithm in dealing with overlapping tree crowns.</p>
Figure 7
<p>CHM with treetop markers, red markers represent treetops. (<b>a</b>) Performance of treetop detection by traditional algorithm. (<b>b</b>) Performance of treetop detection by Gaussian fitting algorithm.</p>
Figure 8
<p>Vertical view of individual tree segmentation, the circle areas are the segmentation results comparison of the local region between the two methods. (<b>a</b>) Performance of individual tree segmentation by traditional algorithm. (<b>b</b>) Performance of individual tree segmentation by our algorithm.</p>
Figure 9
<p>CHM of three plots with different degrees of overlap. (<b>a</b>) Plot 1: completely non-overlapping tree crowns. (<b>b</b>) Plot 2: mixed overlapping tree crowns. (<b>c</b>) Plot 3: highly overlapping tree crowns.</p>
Figure 10
<p>Performance of treetop detection by traditional CHM algorithm. (<b>a</b>) Plot 1 with detected treetops. (<b>b</b>) Plot 2 with detected treetops. (<b>c</b>) Plot 3 with detected treetops.</p>
Figure 11
<p>Performance of individual tree segmentation processed by traditional CHM algorithm. (<b>a</b>) Vertical view of individual tree segmentation of Plot 1 processed by traditional CHM algorithm. (<b>b</b>) Vertical view of individual tree segmentation of Plot 2 processed by traditional CHM algorithm. (<b>c</b>) Vertical view of individual tree segmentation of Plot 3 processed by traditional CHM algorithm.</p>
Figure 12
<p>Performance of treetop detection by Gaussian fitting algorithm. (<b>a</b>) Plot 2 with detected treetops. (<b>b</b>) Plot 3 with detected treetops.</p>
Figure 13
<p>Performance of individual tree segmentation processed by Gaussian fitting algorithm. (<b>a</b>) Vertical view of individual tree segmentation of Plot 2 processed by Gaussian fitting algorithm. (<b>b</b>) Vertical view of individual tree segmentation of Plot 3 processed by Gaussian fitting algorithm.</p>
Figure 14
<p>Comparison of treetop detection and crown segmentation results using traditional CHM and Gaussian fitting. (<b>a</b>) Treetop detection accuracy in Plot 2. (<b>b</b>) Treetop detection accuracy in Plot 3. (<b>c</b>) Segmentation accuracy in Plot 2. (<b>d</b>) Segmentation accuracy in Plot 3.</p>
26 pages, 33213 KiB  
Article
From Crown Detection to Boundary Segmentation: Advancing Forest Analytics with Enhanced YOLO Model and Airborne LiDAR Point Clouds
by Yanan Liu, Ai Zhang and Peng Gao
Forests 2025, 16(2), 248; https://doi.org/10.3390/f16020248 - 28 Jan 2025
Viewed by 630
Abstract
Individual tree segmentation is crucial for extracting forest structural parameters, which is vital for forest resource management and ecological monitoring. Airborne LiDAR (ALS), with its ability to rapidly and accurately acquire three-dimensional forest structural information, has become an essential tool for large-scale forest monitoring. However, accurately locating individual trees and mapping canopy boundaries continues to be hindered by the overlapping nature of tree canopies, especially in dense forests. To address these issues, this study introduces CCD-YOLO, a novel deep learning-based network for individual tree segmentation from ALS point clouds. The proposed approach introduces key architectural enhancements to the YOLO framework, including (1) the integration of a cross residual transformer network extended (CReToNeXt) backbone for feature extraction and multi-scale feature fusion, (2) the application of the convolutional block attention module (CBAM) to emphasize tree crown features while suppressing noise, and (3) a dynamic head for adaptive multi-layer feature fusion, enhancing boundary delineation accuracy. The proposed network was trained using a newly generated individual tree segmentation (ITS) dataset collected from a dense forest. A comprehensive evaluation of the experimental results was conducted across varying forest densities, encompassing a variety of both internal and external consistency assessments. The model outperforms the commonly used watershed algorithm and commercial LiDAR360 software, achieving the highest indices (precision, F1, and recall) in both the tree crown detection and boundary segmentation stages. This study highlights the potential of CCD-YOLO as an efficient and scalable solution for addressing the critical challenges of accurate segmentation in complex forests. In the future, we will focus on enhancing the model's performance and application.
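The CBAM component mentioned in the abstract can be illustrated compactly. The NumPy sketch below implements only the channel-attention branch of CBAM (global average- and max-pooling fed through a shared two-layer MLP, summed, then passed through a sigmoid); it is a didactic stand-in, not CCD-YOLO's actual code, and the feature-map sizes and random weights are invented for the demo.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cbam_channel_attention(feat, w1, w2):
    """Channel-attention branch of CBAM on a (C, H, W) feature map.
    w1: (C//r, C) and w2: (C, C//r) form the shared bias-free MLP with
    reduction ratio r. Returns per-channel weights in (0, 1)."""
    avg = feat.mean(axis=(1, 2))                  # global average pool -> (C,)
    mx = feat.max(axis=(1, 2))                    # global max pool -> (C,)
    mlp = lambda v: w2 @ np.maximum(w1 @ v, 0.0)  # shared ReLU-bottleneck MLP
    return sigmoid(mlp(avg) + mlp(mx))

rng = np.random.default_rng(1)
C, H, W, r = 8, 4, 4, 2                           # toy sizes, not the paper's
feat = rng.standard_normal((C, H, W))
w1 = 0.1 * rng.standard_normal((C // r, C))
w2 = 0.1 * rng.standard_normal((C, C // r))
att = cbam_channel_attention(feat, w1, w2)
refined = feat * att[:, None, None]               # reweight (emphasize) channels
print(att.shape, refined.shape)
```

In a real detector the weights are learned end-to-end and a spatial-attention branch follows, but the mechanism is the same: channels that correlate with crown features get weights near 1, while noisy channels are suppressed toward 0.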
Figure 1
<p>Overview of the study area. The true color image (right) of the study area, captured by the drone, highlights the boundaries of Area 1 (in red) and Area 2 (in yellow). The blue region (Epe) in the bottom-left corner indicates the specific study area in Gelderland, while the map in the upper-left corner provides a scaled location map of The Netherlands.</p>
Figure 2
<p>Visualization of ALS point clouds used in the study, colored by height.</p>
Figure 3
<p>Workflow of the proposed approach using ALS point clouds.</p>
Figure 4
<p>The labeled maps derived from the 3D point cloud and the original RGB image. (<b>a</b>) RGB point cloud map and mask result. (<b>b</b>) Multi-feature point cloud map and mask result.</p>
Figure 5
<p>The proposed CCD-YOLO network architecture.</p>
Figure 6
<p>CReToNeXt module structure.</p>
Figure 7
<p>CBAM module structure.</p>
Figure 8
<p>Dynamic head module structure.</p>
Figure 9
<p>Visualization of the ITS dataset (parts of <b><span class="html-italic">A</span>1</b>). (<b>a</b>) Multi-feature point cloud map. (<b>b</b>) ITS dataset with labeled mark (<b><span class="html-italic">A</span>1</b>).</p>
Figure 10
<p>Visualization of the ITS dataset (<b><span class="html-italic">A</span>2</b>), and its corresponding 3D point cloud. (<b>a</b>) ITS dataset with labeled mark (<b><span class="html-italic">A</span>2</b>). (<b>b</b>) 3D tree point cloud.</p>
Figure 11
<p>View of individual tree segmentation results in <b><span class="html-italic">A</span>1</b>. (<b>a</b>) Prediction box result. (<b>b</b>) Mask result. (<b>c</b>) Top view of the segmentation result. (<b>d</b>) Oblique view of segmentation result.</p>
Figure 12
<p>View of individual tree segmentation results in <b><span class="html-italic">A</span>2</b>. (<b>a</b>) Prediction box result. (<b>b</b>) Mask result. (<b>c</b>) Top view of the segmentation result. (<b>d</b>) Oblique view of segmentation result.</p>
Figure 13
<p>Results of individual tree segmentation in low to medium tree density and high tree density regions. Extraction of low to medium tree density and high tree density region in <b><span class="html-italic">A</span>1</b>. (<b>a</b>) Low to medium tree density region. (<b>b</b>) High tree density region.</p>
Figure 14
<p>Visualization of the individual tree segmentation results across various densities. (<b>a</b>) Low to medium tree density region. (<b>b</b>) High tree density region.</p>
Figure 15
<p>Comparison of evaluation metrics at different stages.</p>
Figure 16
<p>Comparison of the segmentation performance of individual trees.</p>
Figure 17
<p>Comparison of individual tree segmentation results in various forest densities. (<b>a</b>–<b>c</b>) The oblique and top views of the individual tree segmentation results obtained by the three methods in regions with low to medium tree density; (<b>d</b>–<b>f</b>) The oblique and top views of the individual tree segmentation results obtained by three methods in regions with high tree density. (<b>a</b>) Results (oblique and top view) in forests with low-medium density using CCD-YOLO. (<b>b</b>) Results (oblique and top view) in forests with low-medium density using watershed. (<b>c</b>) Results (oblique and top view) in forests with low-medium density using LiDAR360 software. (<b>d</b>) Results (oblique and top view) in forests with high density using CCD-YOLO. (<b>e</b>) Results (oblique and top view) in forests with high density using watershed. (<b>f</b>) Results (oblique and top view) in forests with high density using LiDAR360 software.</p>
Figure 18
<p>Results of individual tree segmentation between YOLOv8 and CCD-YOLO. (<b>a</b>) YOLOv8 model. (<b>b</b>) The proposed CCD-YOLO.</p>
Figure 19
<p>Framework for extracting individual tree structural parameters.</p>
Figure 20
<p>Results of the erroneous segmentation. (<b>a</b>) Erroneous segmentation. (<b>b</b>) Missed detection. (<b>c</b>) False detection.</p>
Figure 21
<p>Results of the incomplete and failed segmented trees. (<b>a</b>) Adjacent trees are identified as a single crown object. (<b>b</b>) A single large crown as multiple crowns. (<b>c</b>) Misclassified shrubs (non-trees).</p>
24 pages, 2174 KiB  
Article
Clustering and Machine Learning Models of Skeletal Class I and II Parameters of Arab Orthodontic Patients
by Kareem Midlej, Osayd Zohud, Iqbal M. Lone, Obaida Awadi, Samir Masarwa, Eva Paddenberg-Schubert, Sebastian Krohn, Christian Kirschneck, Peter Proff, Nezar Watted and Fuad A. Iraqi
J. Clin. Med. 2025, 14(3), 792; https://doi.org/10.3390/jcm14030792 - 25 Jan 2025
Viewed by 577
Abstract
Background: Orthodontic problems can affect vital quality of life functions, such as swallowing, speech sound production, and the aesthetic effect. Therefore, it is important to diagnose and treat these patients precisely. The main aim of this study is to introduce new classification methods for skeletal class I occlusion (SCIO) and skeletal class II malocclusion (SCIIMO) among Arab patients in Israel. We conducted hierarchical clustering to detect critical trends within malocclusion classes and applied machine learning (ML) models to predict classification outcomes. Methods: This study is based on assessing the lateral cephalometric parameters from the Center for Dentistry Research and Aesthetics based in Jatt, Israel. The study involved the encoded records of 394 Arab patients with diagnoses of SCIO/SCIIMO, according to the individualized ANB of Panagiotidis and Witt. After clustering analysis, an ML model was established by evaluating the performance of different models. Results: The clustering analysis identified three distinct clusters for each skeletal class (SCIO and SCIIMO). Among SCIO clusters, the results showed that in the second cluster, retrognathism of the mandible was less severe, as represented by a lower ANB angle. In addition, the third cluster had a lower NL-ML angle, gonial angle, SN-Ba angle, and lower ML-NSL angle compared to clusters 1 and 2. Among SCIIMO clusters, the results also showed that the second cluster has less severe retrognathism of the mandible, which is represented by a lower ANB angle and Calculated_ANB and a higher SNB angle (p < 0.05). The general ML model that included all measurements for patient classification showed a classification accuracy of 0.87 in the Random Forest and the Classification and Regression Tree models. Using ANB angle and Wits appraisal only in the ML, an accuracy of 0.78 (sensitivity = 0.80, specificity = 0.76) was achieved to classify patients as SCIO or SCIIMO. 
Conclusions: The clustering analysis revealed distinct patterns within SCIO and SCIIMO patients, which can affect the diagnosis and treatment plan. In addition, the ML model can accurately diagnose SCIO/SCIIMO patients, which should improve diagnostic precision.
(This article belongs to the Topic Advances in Dental Health)
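The two-predictor classification idea from the abstract — separating SCIO from SCIIMO using only the ANB angle and Wits appraisal — can be sketched with a minimal nearest-centroid classifier. Everything below is synthetic: the class means, spreads, and resulting accuracy are invented for illustration and are not the study's clinical values or its actual models (LDA, SVM, KNN, RF, CART).

```python
import numpy as np

# Synthetic (ANB angle, Wits appraisal) pairs; the centroids and spreads
# are hypothetical, chosen only to give two overlapping clusters.
rng = np.random.default_rng(42)
n = 200
sci = rng.normal([2.0, 0.0], [1.5, 2.0], size=(n, 2))   # synthetic class I
scii = rng.normal([6.0, 4.0], [1.5, 2.0], size=(n, 2))  # synthetic class II
X = np.vstack([sci, scii])
y = np.array([0] * n + [1] * n)                          # 0 = SCIO, 1 = SCIIMO

# Fit: per-class feature centroids.
centroids = np.stack([X[y == c].mean(axis=0) for c in (0, 1)])
# Predict: label of the nearest centroid (Euclidean distance).
d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
pred = d.argmin(axis=1)
accuracy = (pred == y).mean()
print(round(accuracy, 3))
```

A real workflow would hold out a validation split (the study used 30%) and report sensitivity and specificity alongside accuracy; the point here is only that two cephalometric parameters can already define a usable decision boundary between the two skeletal classes.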
Figure 1
<p>(<b>A</b>) Hierarchical clustering dendrogram for skeletal class I occlusion (SCIO). <span class="html-italic">x</span>-axis (rows) represents patient clustering, divided into three main clusters with different colors, and <span class="html-italic">y</span>-axis represents distance between clusters. (<b>B</b>) Hierarchical clustering dendrogram for skeletal class II malocclusion (SCIIMO). <span class="html-italic">x</span>-axis (rows) represents patients clustering, divided into three main clusters with different colors, and <span class="html-italic">y</span>-axis represents distance between clusters.</p>
Figure 2
<p>Summary of the general machine learning model, showing the importance of each parameter to the model in predicting classification of skeletal class I or II. X-axis shows the predictive importance of the different assessed variables. Y-axis lists the assessed variables. Parameters are arranged according to their contribution to the model: Calculated_ANB was the most important and contributed more than any other parameter.</p>
Figure 3
<p>(<b>A</b>) Summary of model 1 (one predictor) across the different machine learning models. This figure presents a summary of the five machine learning classification models, including Linear Discriminant Analysis (LDA), Support Vector Machine (SVM), K-Nearest Neighbor (KNN), Random Forest (RF), and Classification and Regression Tree (CART), presented on the Y-axis. X-axis shows the accuracy and kappa scores for each model. The first model includes the ANB angle only; with LDA, we obtained an accuracy of 0.75 and a kappa of 0.47. (<b>B</b>) Confusion matrix of the LDA machine learning model on the validation data (30% of the sample), comparing the classification predicted from the ANB angle alone (Predicted) with the actual classification. X-axis shows skeletal class I and II predictions, and Y-axis shows the number of identified patients in each classification.</p>
Figure 4
<p>(<b>A</b>) Summary of model 2 (two predictors) across the different machine learning models. This figure presents a summary of the five machine learning classification models tested, including Linear Discriminant Analysis (LDA), Support Vector Machine (SVM), K-Nearest Neighbor (KNN), Random Forest (RF), and Classification and Regression Tree (CART), presented on the Y-axis. X-axis shows the accuracy and kappa scores for each model. (<b>B</b>) Confusion matrix of the KNN machine learning model on the validation data, comparing the classification predicted from the ANB angle and Wits appraisal (Predicted) with the actual classification. X-axis shows skeletal class I and II predictions, and Y-axis indicates the number of identified patients in each classification. (<b>C</b>) The second model's tree diagram shows the decision rules of the model. The root node is at the top, and the leaf nodes are at the bottom. Each node is labeled with the cephalometric parameter used to split the data at that node, as well as the split value. Leaf nodes are labeled with the predicted class for data that reaches that node.</p>
Figure 5
<p>(<b>A</b>) Summary of model 3 (all parameters except the ANB angle, ANBind, and Calculated_ANB) across the different machine learning models. This figure presents a summary of the five machine learning classification models tested, including Linear Discriminant Analysis (LDA), Support Vector Machine (SVM), K-Nearest Neighbor (KNN), Random Forest (RF), and Classification and Regression Tree (CART), presented on the Y-axis. X-axis shows the accuracy and kappa scores for each model. (<b>B</b>) Model 3 summary of the importance of each parameter to the model in predicting classification of skeletal classes I and II. X-axis shows the predictive importance of the different assessed variables. Y-axis lists the assessed variables. (<b>C</b>) Confusion matrix of the LDA machine learning model on the validation data, comparing the classification predicted based on the individualized ANB angle (Calculated_ANB) (Predicted) with the actual classification. X-axis shows skeletal class I and II predictions, and Y-axis indicates the number of identified patients in each classification. (<b>D</b>) The model's tree diagram shows the decision rules of the model. The root node is at the top, and the leaf nodes are at the bottom. Each node is labeled with the cephalometric parameter used to split the data at that node and the split value. Leaf nodes are labeled with the predicted class for data that reaches that node.</p>