Search Results (1,708)

Search Parameters:
Keywords = 3D lidars

21 pages, 2641 KiB  
Article
S2*-ODM: Dual-Stage Improved PointPillar Feature-Based 3D Object Detection Method for Autonomous Driving
by Chen Hua, Xiaokun Zheng, Xinkai Kuang, Wencheng Zhang, Chunmao Jiang, Ziyu Chen and Biao Yu
Sensors 2025, 25(5), 1581; https://doi.org/10.3390/s25051581 - 4 Mar 2025
Abstract
Three-dimensional (3D) object detection is crucial for autonomous driving, yet current PointPillar feature-based methods face challenges like under-segmentation, overlapping, and false detection, particularly in occluded scenarios. This paper presents a novel dual-stage improved PointPillar feature-based 3D object detection method (S2*-ODM) specifically designed to address these issues. The first innovation is the introduction of a dual-stage pillar feature encoding (S2-PFE) module, which effectively integrates both inter-pillar and intra-pillar relational features. This enhancement significantly improves the recognition of local structures and global distributions, enabling better differentiation of objects in occluded or overlapping environments. As a result, it reduces problems such as under-segmentation and false positives. The second key improvement is the incorporation of an attention mechanism within the backbone network, which refines feature extraction by emphasizing critical features in pseudo-images and suppressing irrelevant ones. This mechanism strengthens the network’s ability to focus on essential object details. Experimental results on the KITTI dataset show that the proposed method outperforms the baseline, achieving notable improvements in detection accuracy, with average precision for 3D detection of cars, pedestrians, and cyclists increasing by 1.04%, 2.17%, and 3.72%, respectively. These innovations make S2*-ODM a significant advancement in enhancing the accuracy and reliability of 3D object detection for autonomous driving. Full article
(This article belongs to the Section Intelligent Sensors)
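To make the two ideas in the abstract concrete, here is a minimal PyTorch-style sketch of a shared per-pillar MLP (the local stage of an S2-PFE-like encoder) and a squeeze-and-excitation-style channel gate on the pseudo-image (the attention idea in the backbone). This is an illustrative approximation, not the authors' released code; the layer sizes and tensor shapes are assumptions.

```python
# Illustrative sketch only: per-pillar local MLP + SE-style channel attention.
# Not the S2*-ODM implementation; dimensions below are assumed placeholders.
import torch
import torch.nn as nn

class LocalPillarEncoder(nn.Module):
    """Stage I (local): encode the points inside each pillar with a shared MLP, then max-pool."""
    def __init__(self, in_dim=9, out_dim=64):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(in_dim, out_dim), nn.ReLU())

    def forward(self, pillars):            # pillars: (P, N_pts, in_dim)
        feats = self.mlp(pillars)          # (P, N_pts, out_dim)
        return feats.max(dim=1).values     # (P, out_dim): one feature vector per pillar

class ChannelAttention(nn.Module):
    """SE-style gate applied to the pseudo-image produced by pillar scattering."""
    def __init__(self, channels=64, reduction=8):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, x):                  # x: (B, C, H, W) pseudo-image
        w = self.fc(x.mean(dim=(2, 3)))    # squeeze -> (B, C) channel weights
        return x * w[:, :, None, None]     # re-weight feature channels

# Toy usage with assumed sizes: 100 pillars of 32 points, 9 features each,
# and a 496 x 432 pseudo-image (a common KITTI grid size, assumed here).
pillar_feats = LocalPillarEncoder()(torch.randn(100, 32, 9))
pseudo_image = ChannelAttention()(torch.randn(1, 64, 496, 432))
```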
Figure 1. Framework of the dual-stage improved PointPillar feature-based 3D object detection method (S2*-ODM). The overall architecture consists of four key modules: (1) Point Cloud Input Layer: receives raw LiDAR data, including coordinates and reflection intensity. (2) Dual-Stage Feature Encoding (S2-PFE) Module: Stage I (local) extracts geometric features from individual pillars using an MLP; Stage II (global) aggregates neighboring pillar features via sparse convolution and fuses them into enhanced features. (3) SeNet Backbone: applies channel-wise attention to the pseudo-image tensor, emphasizing critical feature channels. (4) Detection Head: predicts 3D bounding box parameters from multi-scale features.
Figure 2. The dual-stage pillar feature encoding (S2-PFE) module framework.
Figure 3. Qualitative analysis of 3D object detection performance based on the KITTI validation dataset.
Figure 4. Error samples of 3D object detection based on the KITTI validation dataset. (a) Missed detections due to dataset annotation limitations; (b) missed detections for small or heavily occluded objects; (c) misclassification of vertically oriented objects; (d) over-segmentation in object detection.
Figure 5. Comparison of the detection method based on a single S2-PFE module with the baseline PointPillars, showing the effects before and after improvements in 3D object detection. (a,c,d) demonstrate improvements in addressing missed detection issues, while (b) highlights improvements in addressing under-segmentation issues.
23 pages, 5994 KiB  
Article
Three-Dimensional Distribution of Arctic Aerosols Based on CALIOP Data
by Yukun Sun and Liang Chang
Remote Sens. 2025, 17(5), 903; https://doi.org/10.3390/rs17050903 - 4 Mar 2025
Abstract
Tropospheric aerosols play an important role in the notable warming phenomenon and climate change occurring in the Arctic. The accuracy of Cloud–Aerosol Lidar with Orthogonal Polarization (CALIOP) aerosol optical depth (AOD) and the distribution of Arctic AOD based on the CALIOP Level 2 aerosol products and the Aerosol Robotic Network (AERONET) AOD data during 2006–2021 were analyzed. The distributions, trends, and three-dimensional (3D) structures of the frequency of occurrences (FoOs) of different aerosol subtypes during 2006–2021 are also discussed. We found that the CALIOP AOD exhibited a high level of agreement with AERONET AOD, with a correlation coefficient of approximately 0.67 and an RMSE of less than 0.1. However, CALIOP usually underestimated AOD over the Arctic, especially in wet conditions during the late spring and early summer. Moreover, the Arctic AOD was typically higher in winter than in autumn, summer, and spring. Specifically, polluted dust (PD), dust, and clean marine (CM) were the dominant aerosol types in spring, autumn, and winter, while in summer, ES (elevated smoke) from frequent wildfires reached the highest FoOs. There were increasing trends in the FoOs of CM and dust, with decreasing trends in the FoOs of PD, PC (polluted continental), and DM (dusty marine) due to Arctic amplification. In general, the vertical distribution patterns of different aerosol types showed little seasonal variation, but their horizontal distribution patterns at various altitudes varied by season. Furthermore, locally sourced aerosols such as dust in Greenland, PD in eastern Siberia, and ES in middle Siberia can spread to surrounding areas and accumulate further north, affecting a broader region in the Arctic. Full article
(This article belongs to the Section Atmospheric Remote Sensing)
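For readers who want to reproduce the kind of validation statistics quoted above (correlation of about 0.67, RMSE below 0.1), a minimal sketch is given below, assuming already collocated CALIOP and AERONET AOD samples. The synthetic arrays and the relative-mean-bias definition are placeholders, not the paper's collocation procedure.

```python
# Sketch of simple CALIOP-vs-AERONET validation statistics on collocated AOD samples.
# The synthetic data below are placeholders, not values from the study.
import numpy as np

def validation_stats(caliop_aod, aeronet_aod):
    r = np.corrcoef(caliop_aod, aeronet_aod)[0, 1]            # correlation coefficient
    rmse = np.sqrt(np.mean((caliop_aod - aeronet_aod) ** 2))  # root-mean-square error
    rmb = np.mean(caliop_aod) / np.mean(aeronet_aod)          # relative mean bias (assumed definition)
    return r, rmse, rmb

rng = np.random.default_rng(0)
truth = rng.uniform(0.02, 0.3, 500)                 # assumed AERONET AOD range
retrieved = truth * 0.9 + rng.normal(0, 0.03, 500)  # example low bias plus noise
print(validation_stats(retrieved, truth))
```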
Figure 1. Spatial distribution of the AERONET stations in the region of interest.
Figure 2. Scatterplot comparing CALIOP and AERONET AOD over the Arctic during 2006–2021. The dashed line represents the 1:1 line, while the red line represents the fitted line.
Figure 3. (a) Monthly mean AOD from CALIOP and AERONET, (b) correlation coefficient between monthly mean AOD from CALIOP and AERONET, (c) monthly RMSE, and (d) RMB of CALIOP against AERONET AOD over the Arctic during 2006–2021.
Figure 4. Spatial distributions of seasonal mean AOD over the Arctic during 2006–2021.
Figure 5. Average proportions of seven different types of aerosols over the Arctic during 2006–2021.
Figure 6. Average proportions of all types of Arctic aerosols in different seasons during 2006–2021.
Figure 7. M–K trend analysis results (including Z value and time series of UF and UB values) of the FoOs of (a) CM, (b) dust, (c) PC, (d) PD, (e) ES, and (f) DM during 2006–2021. Dashed lines represent the confidence intervals for UF values at the 95% significance level or greater.
Figure 8. Vertical profiles of mean FoOs of different types of aerosols over the Arctic in each season during 2006–2021.
Figure 9. Horizontal distribution of the most dominant aerosol types at different altitude ranges over the Arctic in each season during 2006–2021.
Figure 10. Three-dimensional structure of dust in spring over GLZ during 2006–2021, with each latitude–vertical cross-section made along the longitudes at 10° intervals (top panel), and each longitude–vertical cross-section made along the latitudes at 10° intervals (bottom panel).
Figure 11. Same as Figure 10 but for the three-dimensional structure of PD in spring over ESZ during 2006–2021.
Figure 12. Same as Figure 11 but for the three-dimensional structure of ES in summer over MSZ during 2006–2021.
18 pages, 1337 KiB  
Article
Performance Analysis for High-Dimensional Bell-State Quantum Illumination
by Jeffrey H. Shapiro
Physics 2025, 7(1), 7; https://doi.org/10.3390/physics7010007 - 3 Mar 2025
Abstract
Quantum illumination (QI) is an entanglement-based protocol for improving LiDAR/radar detection of unresolved targets beyond what a classical LiDAR/radar of the same average transmitted energy can do. Originally proposed by Seth Lloyd as a discrete-variable quantum LiDAR, it was soon shown that his proposal offered no quantum advantage over its best classical competitor. Continuous-variable, specifically Gaussian-state, QI has been shown to offer a true quantum advantage, both in theory and in table-top experiments. Moreover, despite its considerable drawbacks, the microwave version of Gaussian-state QI continues to attract research attention. A recent QI study by Armanpreet Pannu, Amr Helmy, and Hesham El Gamal (PHE), however, has: (i) combined the entangled state from Lloyd’s QI with the channel models from Gaussian-state QI; (ii) proposed a new positive operator-valued measurement for that composite setup; and (iii) claimed that, unlike Gaussian-state QI, PHE QI achieves the Nair–Gu lower bound on QI target-detection error probability at all noise brightnesses. PHE’s analysis was asymptotic, i.e., it presumed infinite-dimensional entanglement. The current paper works out the finite-dimensional performance of PHE QI. It shows that there is a threshold value for the entangled-state dimensionality below which there is no quantum advantage, and above which the Nair–Gu bound is approached asymptotically. Moreover, with both systems operating with error-probability exponents 1 dB lower than the Nair–Gu bound, PHE QI requires enormously higher entangled-state dimensionality than does Gaussian-state QI to achieve useful error probabilities in both high-brightness (100 photons/mode) and moderate-brightness (1 photon/mode) noise. Furthermore, neither system has an appreciable quantum advantage in low-brightness (much less than 1 photon/mode) noise. Full article
(This article belongs to the Section Atomic Physics)
19 pages, 4281 KiB  
Article
Rapid Target Extraction in LiDAR Sensing and Its Application in Rocket Launch Phase Measurement
by Xiaoqi Liu, Heng Shi, Meitu Ye, Minqi Yan, Fan Wang and Wei Hao
Appl. Sci. 2025, 15(5), 2651; https://doi.org/10.3390/app15052651 - 1 Mar 2025
Abstract
This paper presents a fast method for 3D point cloud target extraction, addressing the challenge of time-consuming processing of LiDAR-based 3D point cloud data. Environmental 3D point cloud data acquired by LiDAR are first projected onto a 2D cylindrical map; the key steps are projection into 2D space, image processing for segmentation, and target extraction. A mapping matrix between the 2D grayscale image and the cylindrical projection is derived through Gaussian elimination. A target backtracking search algorithm maps the extracted target region back to the original 3D point cloud, enabling precise extraction of the 3D target points. Near-field experiments using a hybrid solid-state LiDAR demonstrate the method's effectiveness, requiring only 0.53 s to extract 3D target point clouds from datasets containing hundreds of thousands of points. Further, far-field rocket launch experiments show that the method can extract target point clouds within 158 milliseconds, with measured positional offsets of 0.2159 m and 0.1911 m as the rocket moves away from the launch tower. Full article
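The projection step described in the abstract can be sketched as follows. This is a generic range-image cylindrical projection under assumed angular resolutions, not the paper's mapping matrix or its Gaussian-elimination derivation.

```python
# Sketch of projecting a 3D LiDAR point cloud onto a 2D cylindrical (range) image.
# Resolution values and the grayscale normalization are assumptions, not the paper's parameters.
import numpy as np

def cylindrical_projection(points, h_res_deg=0.2, v_res_deg=0.2):
    """points: (N, 3) array of x, y, z in the sensor frame -> 2D grayscale range image."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points, axis=1)
    azimuth = np.degrees(np.arctan2(y, x))          # horizontal angle
    elevation = np.degrees(np.arcsin(z / r))        # vertical angle
    col = ((azimuth - azimuth.min()) / h_res_deg).astype(int)
    row = ((elevation.max() - elevation) / v_res_deg).astype(int)
    image = np.zeros((row.max() + 1, col.max() + 1))
    image[row, col] = r                              # keep range as pixel intensity
    gray = (255 * image / image.max()).astype(np.uint8)   # normalize to grayscale
    return gray, (row, col)                          # indices allow backtracking to 3D points

pts = np.random.randn(100_000, 3) * 10 + np.array([30.0, 0.0, 0.0])  # synthetic cloud
gray, index_map = cylindrical_projection(pts)
```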
Figure 1. Cylindrical projection schematic.
Figure 2. Schematic diagram of the grayscale map conversion.
Figure 3. (a) Grayscale image; (b) the filtered grayscale image; (c) edge map; (d) surface diagram; (e) connected block diagram; and (f) target section.
Figure 4. Schematic diagram of 3D target extraction.
Figure 5. LiDAR field installation diagram.
Figure 6. Raw point cloud data collected by LiDAR.
Figure 7. (a) Grayscale image (scattergram); (b) grayscale image; (c) the filtered grayscale image; (d) edge map; (e) surface diagram; (f) connected block diagram; and (g) target section.
Figure 8. (a) Grayscale image target; (b) cylindrical projection target; (c) 3D point cloud targets (including edge clutter); and (d) 3D point cloud target.
Figure 9. Target extraction relative error chart.
Figure 10. Rocket point cloud measurement test scene diagram.
Figure 11. (a) Rocket raw point cloud map; (b) cylindrical projection diagram; (c) 2D grayscale image; (d) rocket point cloud target; and (e) rocket 3D point cloud.
Figure 12. Three-dimensional trajectory of the rocket vertical launch phase.
18 pages, 3245 KiB  
Article
Enhanced DetNet: A New Framework for Detecting Small and Occluded 3D Objects
by Baowen Zhang, Chengzhi Su and Guohua Cao
Electronics 2025, 14(5), 979; https://doi.org/10.3390/electronics14050979 - 28 Feb 2025
Abstract
To mitigate the impact on detection performance caused by insufficient input information in 3D object detection based on single LiDAR data, this study designs three innovative modules based on the PointRCNN framework. Firstly, addressing the issue of the Multi-Layer Perceptron (MLP) in PointNet++ failing to effectively capture local features during the feature extraction phase, we propose the Adaptive Multilayer Perceptron (AMLP). Secondly, to prevent the problem of gradient vanishing due to the increased parameter scale and computational complexity of AMLP, we introduce the Channel Aware Residual module (CA-Res) in the feature extraction layer. Finally, in the head layer of the subsequent processing stage, we propose the Dynamic Attention Head (DA-Head) to enhance the representation of key features in the process of target detection. A series of experiments conducted on the KITTI validation set demonstrate that in complex scenarios, for the small target “Pedestrian”, our model achieves performance improvements of 2.08% and 3.46%, respectively, at the “Medium” and “Difficult” detection difficulty levels. To further validate the generalization capability of the Enhanced DetNet network, we deploy the trained model on the KITTI server and conduct a comprehensive evaluation of detection performance for the “Car”, “Pedestrian”, and “Cyclist” categories. Full article
Figure 1. Missed occluded target. The colored 3D boxes represent the different targets detected by the model: green boxes denote "Cyclist", orange boxes represent "Pedestrian", and pink boxes indicate "Car". The red 2D boxes mark the occluded targets missed by the detection.
Figure 2. PointRCNN architecture.
Figure 3. Enhanced DetNet overall network architecture diagram.
Figure 4. Structure comparison between AMLP and PointNet-MLP.
Figure 5. CA-Res structure diagram.
Figure 6. Flowchart of the DA-Head.
Figure 7. Qualitative results of "Car" detection on the KITTI test set.
Figure 8. Qualitative results of "Pedestrian" detection on the KITTI test set.
Figure 9. Qualitative results of "Cyclist" detection on the KITTI test set.
Figure 10. Close-up picture of the missed vehicle.
Figure 11. A visual comparison of the detection results of Figure 8a between the Enhanced DetNet network and the PointRCNN* network.
16 pages, 5587 KiB  
Article
Flat Emission Silicon Nitride Grating Couplers for Lidar Optical Antennas
by Thenia Prousalidi, Georgios Syriopoulos, Evrydiki Kyriazi, Roel Botter, Charalampos Zervos, Giannis Poulopoulos and Dimitrios Apostolopoulos
Photonics 2025, 12(3), 214; https://doi.org/10.3390/photonics12030214 - 28 Feb 2025
Abstract
Light detection and ranging (Lidar) is a key enabling technology for autonomous vehicles and drones. Its emerging implementations are based on photonic integrated circuits (PICs) and optical phased arrays (OPAs). In this work, we introduce a novel approach to the design of OPA Lidar antennas based on Si3N4 grating couplers. The well-established TriPleX platform and the asymmetric double stripe waveguide geometry with full etching are employed, ensuring low complexity and simple fabrication combined with the low-loss advantages of the platform. The design study aims to optimize the performance of the grating coupler-based radiators as well as the OPA, thus enhancing the overall capabilities of Si3N4-based Lidar. Uniform and non-uniform grating structures are considered, achieving θ and φ angle divergences of 0.9° and 32° and 0.54° and 25.41°, respectively. Also, wavelength sensitivity of 7°/100 nm is achieved. Lastly, the fundamental OPA parameters are investigated, and 35 dBi of peak directivity is achieved for an eight-element OPA. Full article
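The reported wavelength sensitivity follows from the standard first-order grating phase-matching condition, sin θ = n_eff − λ/Λ for an air top cladding. The sketch below sweeps this relation over 1.5–1.6 μm for the 926 nm pitch mentioned above; the effective index value is an assumed placeholder, not a value from the paper.

```python
# Standard first-order grating-coupler phase-matching relation (air cladding),
# used only to illustrate emission-angle steering with wavelength.
# n_eff below is an assumed placeholder, not taken from the paper.
import numpy as np

def emission_angle_deg(wavelength_um, pitch_um=0.926, n_eff=1.68):
    s = n_eff - wavelength_um / pitch_um    # sin(theta) from phase matching
    return np.degrees(np.arcsin(s))

wl = np.linspace(1.5, 1.6, 11)
theta = emission_angle_deg(wl)
print(np.round(theta, 2))                              # angle sweep across 1.5-1.6 um
print((theta[-1] - theta[0]) / 0.1, "deg per 100 nm")  # wavelength sensitivity estimate
```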
Figure 1. Schematic of an optical antenna in OPA configuration based on grating couplers. The θ and φ angles and the distance d between adjacent GC elements are noted.
Figure 2. (a) The OPA schematic indicating the cross-sectional planes. (b) Schematic of the yz-plane cross section of the standard TriPleX ADS waveguide. The different regions (Si3N4 waveguide, SiO2 top oxide layer (TOX) and bottom oxide layer (BOX), and air top cladding) are marked with different colors. (c) Schematic of the side view (xz-plane cross section) of a periodic grating structure. The grating pitch is denoted with Λ and the filling factor with FF. The effective index of the etched part is n0 and of the unetched part is n1.
Figure 3. Side view of the uniform GC, showing the constant pitch, FF, and width across the direction of propagation.
Figure 4. Simulated θ and φ angle divergences, (a) varying the grating width for a fixed length of 50 μm and (b) varying the grating length for a fixed width of 2 μm.
Figure 5. (Left) Top view of the emission profile of the uniform grating for a width of 2 μm and a length of 100 μm. The Ez component field distribution is shown with a color scale. (Right) A 1D plot of the emission profile along the dashed line of the left figure.
Figure 6. Calculated emission angle θ of the far-field profile, varying the wavelength in the range of 1.5–1.6 μm for a uniform tooth profile and a pitch of 926 nm. The electric field intensity is shown with a color scale.
Figure 7. (Left) Top view and (Right) side view of the investigated non-uniform grating design with a varying width and FF.
Figure 8. Calculated effective refractive index of the TE0 mode varying the waveguide width.
Figure 9. n_eff−grating of the fundamental supported mode calculated via FDE simulations, varying the waveguide width and FF. The black lines are the contour lines of the plots along which n_eff−grating has a constant value. The selected contour line is marked with pink stars.
Figure 10. The calculated coupling constant k for the different width values across the grating for three of the contour lines.
Figure 11. Simulated θ and φ angle divergences varying the grating length for width–FF pairs calculated from the same contour line.
Figure 12. (Left) Top view of the emission profile of the non-uniform grating for the selected geometrical parameters. The Ez component field distribution is shown with a color scale. (Right) A 1D plot of the emission profile along the dashed line of the left figure.
Figure 13. Calculated emission angle θ of the far-field profile, varying the wavelength in the range of 1.5–1.6 μm, for the non-uniform tooth profile and a pitch of 926 nm. The electric field intensity is shown with a color scale.
Figure 14. 3D directivity plots (in dBi) produced with the Sensor Array Analyzer app, varying the number (NA) of grating elements and the distance (d) between them. The axis information is given in the first subplot and is the same for all subplots. The colorbar shows the directivity in dBi.
Figure 15. Elevation cut for an azimuth angle of 0° for two of the directivity plots of Figure 14. The divergence of the φ angle is affected by the OPA topology, both by the number of elements NA and by their distance d. Increasing NA reduces the φ divergence of the main lobe. Also, increasing the distance between adjacent elements reduces the main lobe φ divergence. It can also be seen in Figure 14 that increasing the number of elements (for a fixed distance) results in the appearance of more side lobes with lower peak directivities, while the width of the main lobe is reduced. A similar effect is observed when the number of elements is kept constant and their distance is increased; in this case, more side lobes appear, and their peak directivity is also increased. Lastly, the peak directivity value can be extracted from the directivity plots: 34.8 dBi for NA = 4 and d = 1.5λ, and 35 dBi for NA = 8 and d = 1.5λ.
Figure 16. Calculated L10 varying the distance d between two adjacent waveguides, with widths of 1 μm, 1.5 μm, and 2 μm.
19 pages, 21047 KiB  
Article
Real-Time Localization for an AMR Based on RTAB-MAP
by Chih-Jer Lin, Chao-Chung Peng and Si-Ying Lu
Actuators 2025, 14(3), 117; https://doi.org/10.3390/act14030117 - 27 Feb 2025
Abstract
This study aimed to develop a real-time localization system for an AMR (autonomous mobile robot), which utilizes the Robot Operating System (ROS) Noetic version on the Ubuntu 20.04 operating system. RTAB-MAP (Real-Time Appearance-Based Mapping) is employed for localization, integrating an RGB-D camera and a 2D LiDAR for real-time localization and mapping. Navigation is performed using the A* algorithm for global path planning combined with the Dynamic Window Approach (DWA) for local path planning, enabling the AMR to receive velocity control commands and complete the navigation task. RTAB-MAP is a graph-based visual SLAM method that combines loop-closure detection with graph optimization; three graph optimization methods, i.e., TORO (Tree-based Network Optimizer), g2o (General Graph Optimization), and GTSAM (Georgia Tech Smoothing and Mapping), were used. The maps built using these three methods were evaluated with RTAB-MAP localization and AMCL (Adaptive Monte Carlo Localization) in a high-similarity long corridor environment. Finally, the TORO, g2o, and GTSAM methods were compared to test the localization accuracy in the long corridor using the RGB-D camera and the 2D LiDAR. Full article
(This article belongs to the Special Issue Actuators in Robotic Control—3rd Edition)
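Global path planning in the described system is handled by A* (within the ROS move_base stack, with DWA for local planning). The standalone sketch below illustrates the A* search itself on a toy occupancy grid; it is not the move_base implementation.

```python
# Minimal A* grid search on a toy occupancy grid (0 = free, 1 = occupied), 4-connected.
# Illustrative only; the actual system uses the ROS move_base global planner.
import heapq

def a_star(grid, start, goal):
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])   # Manhattan heuristic
    open_set = [(h(start), 0, start, None)]                    # (f, g, node, parent)
    parents, g_cost = {}, {start: 0}
    while open_set:
        _, g, node, parent = heapq.heappop(open_set)
        if node in parents:                    # already expanded with a better cost
            continue
        parents[node] = parent
        if node == goal:                       # reconstruct path by walking parents
            path = []
            while node is not None:
                path.append(node)
                node = parents[node]
            return path[::-1]
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + 1
                if ng < g_cost.get((nr, nc), float("inf")):
                    g_cost[(nr, nc)] = ng
                    heapq.heappush(open_set, (ng + h((nr, nc)), ng, (nr, nc), node))
    return None                                # no path found

grid = [[0, 0, 0, 0], [1, 1, 0, 1], [0, 0, 0, 0]]
print(a_star(grid, (0, 0), (2, 0)))
```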
Figure 1. Block diagram of the AMR for this experiment.
Figure 2. (a) Architecture of RTAB-MAP. (b) Flowchart of the RTAB-MAP method.
Figure 3. (a) Experimental location (the AMR moves from A, B, C, D, E to F); (b) graph optimization setup for RTAB-MAP with TORO.
Figure 4. (a) Loop closure detection for time t = 00:10. (b) Loop closure detection for time t = 01:02. (c) Loop closure detection for time t = 02:07. (d) Loop closure detection for time t = 03:21. (e) Loop closure detection for time t = 04:24.
Figure 5. Localization graph for RTAB-MAP with TORO.
Figure 6. Localization graph for RTAB-MAP with g2o.
Figure 7. Localization graph for RTAB-MAP with GTSAM.
Figure 8. Proposed TF tree in ROS.
Figure 9. Move_base node [41].
Figure 10. Recovery behaviors of the move_base node [42].
Figure 11. (a) Obstacle avoidance and loop closure detection; (b) beginning of the task; (c) destination of the obstacle avoidance task.
Figure 12. Navigation results for AMCL with TORO.
Figure 13. Navigation results for RTAB-MAP with TORO.
Figure 14. Navigation photos for the proposed RTAB-MAP with TORO.
Figure 15. (a) Obstacle avoidance trajectories of TORO for RTAB-MAP. (b) Obstacle avoidance trajectories of g2o for RTAB-MAP. (c) Obstacle avoidance trajectories of GTSAM for RTAB-MAP.
24 pages, 8561 KiB  
Review
A Review of Research on SLAM Technology Based on the Fusion of LiDAR and Vision
by Peng Chen, Xinyu Zhao, Lina Zeng, Luxinyu Liu, Shengjie Liu, Li Sun, Zaijin Li, Hao Chen, Guojun Liu, Zhongliang Qiao, Yi Qu, Dongxin Xu, Lianhe Li and Lin Li
Sensors 2025, 25(5), 1447; https://doi.org/10.3390/s25051447 - 27 Feb 2025
Abstract
In recent years, simultaneous localization and mapping (SLAM) based on the fusion of LiDAR and vision has gained extensive attention in the field of autonomous navigation and environment sensing. However, the limitations of single-sensor SLAM in feature-scarce (low-texture, repetitive-structure) and dynamic environments have prompted researchers to combine LiDAR with other sensors, particularly vision sensors. This combination has proven highly effective in handling a variety of situations when coupled with deep learning and adaptive algorithms. LiDAR excels in complex and dynamic environments thanks to its ability to acquire high-precision 3D spatial information with high reliability. This paper analyzes the research status, including the main results and findings, of early single-sensor SLAM technology and of the current stage of LiDAR and vision fusion SLAM. Specific solutions for current problems (complexity of data fusion, computational burden and real-time performance, multi-scenario data processing, etc.) are examined by categorizing and summarizing the existing literature; the trends and limitations of current research are discussed; and future research directions are outlined, including multi-sensor fusion, optimization of algorithms, improvement of real-time performance, and expansion of application scenarios. This review aims to provide guidelines and insights for the development of SLAM technology based on LiDAR and vision fusion, serving as a reference for further SLAM research. Full article
(This article belongs to the Section Navigation and Positioning)
Figure 1. Overview of the SLAM research progress.
Figure 2. The three primary steps of the framework for a traditional SLAM architecture: (a) the eigenvalue determination for estimating the global stage; (b) the original data processing stage; (c) the global map creation and data inconsistencies stage.
Figure 3. Diagram of the flow of multimodal data fusion technology.
Figure 4. (a) The PVL-Cartographer system's misalignment, depicted without closed-loop detection; (b) the outcomes following closed-loop detection. In (a,b), the region where loop closure detection is carried out is enclosed by the white rectangle. In order to merge LiDAR point clouds with panoramic photos, the system combines tilt-mounted LiDAR, panoramic cameras, and IMU sensors. It then uses internal data and algorithms to calculate the actual scale of the environment without the need for extra positional information. Even in surroundings with limited features, it may function effectively and dependably by achieving the seamless integration of data from many sensors, boosting the accuracy and dependability of the positioning and mapping results [30].
Figure 5. Comparison of the Lou et al. scheme and the quality of 3D reconstruction using DynaSLAM techniques: (a) reconstruction of DynaSLAM, (b) reconstruction of Lou et al.'s method [32].
Figure 6. The entire LSD-SLAM algorithm system framework diagram [38].
Figure 7. The maps produced by the original ORB-SLAM system. (a) System-generated map for ORB-SLAM, front view; (b) system-generated map for ORB-SLAM, vertical view; (c) ORB-SLAM with Sun et al.'s approach, front view; (d) ORB-SLAM with Sun et al.'s approach, vertical view [60].
Figure 8. The suggested environment perception system based on LiDAR and vision. (a) The map in top perspective; (b) a navigational two-dimensional grid map; (c) side view of the map; (d) three-dimensional point cloud map; SLAM technology illustration using multi-sensor fusion.
25 pages, 9276 KiB  
Article
Experimental Evaluation of Multi- and Single-Drone Systems with 1D LiDAR Sensors for Stockpile Volume Estimation
by Ahmad Alsayed, Fatemeh Bana, Farshad Arvin, Mark K. Quinn and Mostafa R. A. Nabawy
Aerospace 2025, 12(3), 189; https://doi.org/10.3390/aerospace12030189 - 26 Feb 2025
Abstract
This study examines the application of low-cost 1D LiDAR sensors in drone-based stockpile volume estimation, with a focus on indoor environments. Three approaches were experimentally investigated: (i) a multi-drone system equipped with static, downward-facing 1D LiDAR sensors combined with an adaptive formation control algorithm; (ii) a single drone with a static, downward-facing 1D LiDAR following a zigzag trajectory; and (iii) a single drone with an actuated 1D LiDAR in an oscillatory fashion to enhance scanning coverage while following a shorter trajectory. The adaptive formation control algorithm, newly developed in this study, synchronises the drones’ waypoint arrivals and facilitates smooth transitions between dynamic formation shapes. Real-world experiments conducted in a motion-tracking indoor facility confirmed the effectiveness of all three approaches in accurately completing scanning tasks, as per intended waypoints allocation. A trapezoidal prism stockpile was scanned, and the volume estimation accuracy of each approach was compared. The multi-drone system achieved an average volumetric error of 1.3%, similar to the single drone with a static sensor, but with less than half the flight time. Meanwhile, the actuated LiDAR system required shorter paths but experienced a higher volumetric error of 4.4%, primarily due to surface reconstruction outliers and common LiDAR bias when scanning at non-vertical angles. Full article
(This article belongs to the Special Issue UAV System Modelling Design and Simulation)
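As a rough illustration of how registered (x, y, z) LiDAR returns can be turned into a stockpile volume, the sketch below grids the points on the floor plane and sums mean cell heights. The grid resolution and the synthetic pile are assumptions; the paper's surface reconstruction and outlier handling are not reproduced.

```python
# Sketch of a simple grid-based stockpile volume estimate from registered LiDAR returns.
# Illustrative only; cell size and the synthetic pile below are assumed placeholders.
import numpy as np

def grid_volume(points, cell=0.05):
    """points: (N, 3) array with z measured from the floor; returns volume in m^3."""
    x, y, z = points.T
    ix = ((x - x.min()) / cell).astype(int)
    iy = ((y - y.min()) / cell).astype(int)
    heights = np.zeros((ix.max() + 1, iy.max() + 1))
    counts = np.zeros_like(heights)
    np.add.at(heights, (ix, iy), z)            # accumulate heights per cell
    np.add.at(counts, (ix, iy), 1)             # count points per cell
    mean_h = np.divide(heights, counts, out=np.zeros_like(heights), where=counts > 0)
    return mean_h.sum() * cell ** 2            # sum of (cell area * mean cell height)

# Synthetic triangular-ridge pile, 1 m x 1 m footprint, 0.3 m tall (true volume ~0.15 m^3).
rng = np.random.default_rng(1)
xy = rng.uniform(0, 1, (50_000, 2))
z = np.clip(0.3 - 0.6 * np.abs(xy[:, 0] - 0.5), 0, None)
print(round(grid_volume(np.column_stack([xy, z])), 3))
```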
Figure 1. (a) Paths of the three drones starting from the origin, illustrating formation transitions between waypoints. (b) Position error, P^error, over time for each drone, showing synchronised arrivals at waypoints. (c) Reduction in formation error, F^error, as the drones get nearer to the waypoints. (d) Relative position, d_i^relative, between each drone's current position and the desired position of drone 1 over time, demonstrating the algorithm's ability to maintain consistent formations during transitions.
Figure 2. A schematic illustration of the actuated 1D LiDAR approach.
Figure 3. A schematic illustration of a single drone with a static 1D LiDAR following a zigzag trajectory.
Figure 4. Views of the netted drone flight test enclosure. Highlighted are the advanced VICON motion tracking system components. The inset image also showcases the Crazyflie 2.1 drone in action above the blue stockpile.
Figure 5. Crazyflie 2.1 drones equipped with four markers, one large and three smaller, for accurate tracking during the experiments.
Figure 6. A schematic setup showcasing the connectivity between the VICON cameras, the main PC, and the ground station laptop, facilitating real-time tracking and communication with the drones.
Figure 7. Illustration of the actuated 1D LiDAR setup, showcasing the integration of a Raspberry Pi and a TFMini LiDAR sensor mounted on a servo motor, all attached to a Parrot Bebop 2 drone equipped with markers for motion tracking. The figure includes a schematic representation for clearer visualisation of the actuated 1D LiDAR system.
Figure 8. (a) The reference stockpile used for scanning demonstrations of the proposed approaches, highlighting the corner points hosting the markers used to determine the actual volume. (b) Visualisation of the 3D reconstruction of the reference stockpile shape with the shape's corners represented as black dots. The colour gradient indicates height variations.
Figure 9. Desired waypoints W_k, denoted as red circles, and the corresponding formation shapes H_k, showcasing the cooperation of waypoints and formation shapes in ensuring comprehensive, non-overlapping coverage of the desired area.
Figure 10. Desired trajectories for each applied approach. (a) The desired trajectories for the multi-agent system with two drones, and (b) a comparative trajectory to (a) when using a single drone. (c) The desired trajectories for the multi-agent system with three drones, and (d) a comparative trajectory to (c) when using a single drone. (e) The trajectory for a single drone with an actuated 1D LiDAR. The points A1–A4, B1–B4, and C1–C4 represent the waypoints for Drone 1, Drone 2, and Drone 3, respectively.
Figure 11. Illustration of the 3D desired and actual recorded trajectories for (a) two and (b) three drones in formation, (c) a zigzag trajectory of a single drone loosely similar to the two drones in formation, (d) a finer zigzag trajectory of a single drone analogous to that of three drones in formation, and (e) the recorded simple trajectory for the single drone equipped with the actuated 1D LiDAR. The mission includes phases of take-off, scanning at a 1.5 m altitude, and landing. Moreover, (a,b) demonstrate the predefined formation shapes (H_k) at the desired waypoints (W_k). The distances s in (a,b) are obtained using Equation (18) and then used to design the zigzag paths shown in (c,d).
Figure 12. Performance metrics for the formation system based on experimental measurements, illustrating the simultaneous reduction in both trajectory error (P^error) and formation error (F^error), as shown in (a,b), respectively, and the consistency between different formation shapes, as shown by the metric d^relative in (c).
Figure 13. Performance of the system's ability to minimise trajectory error for the single-drone approaches with (a) a static LiDAR, shown in Figure 11d, and (b) the actuated 1D LiDAR, shown in Figure 11e.
Figure 14. Illustration of the registered point clouds (PC_G), shown as black scatters, and the 3D reconstructed shapes, shown in colour gradient, for the stockpile displayed in Figure 8a, while employing (a) two drones in formation, (b) three drones in formation, (c) a single drone navigating a zigzag path, (d) a single drone following a denser zigzag path, and (e) a solitary drone equipped with the actuated 1D LiDAR system. The colour gradient indicates varying heights.
Figure 15. A schematic representation highlighting potential inaccuracies in point cloud collection owing to a wider FOV sensor, which tends to capture the closest data point. The point cloud shown in this figure was obtained using additional testing with a 1D LiDAR mounted on a stick to collect dense data.
Figure 16. (a) Second reference stockpile used for scanning demonstrations. (b) Visualisation of the 3D reconstruction of the reference stockpile shape with the shape's corners represented as black dots. The colour gradient indicates height variations.
Figure 17. Illustration of the registered point clouds (PC_G), shown as black scatters, and the 3D reconstructed shapes, shown in colour gradient, for the stockpile displayed in Figure 16, while employing (a) a single drone navigating a zigzag path, (b) a single drone following a denser zigzag path, and (c) a solitary drone equipped with the actuated 1D LiDAR system. The colour gradient indicates varying heights.
Figure 18. A schematic representation highlighting potential inaccuracies in point cloud collection due to the use of a wider FOV sensor, which makes it difficult to reconstruct sharp edges, leading to an increase in the estimated volume. The blue dots show the actual shape corners from the side view.
44 pages, 35373 KiB  
Article
Quantitative Rockfall Hazard Assessment of the Norwegian Road Network and Residences at an Indicative Level from Simulated Trajectories
by François Noël and Synnøve Flugekvam Nordang
Remote Sens. 2025, 17(5), 819; https://doi.org/10.3390/rs17050819 - 26 Feb 2025
Abstract
Field observations provide valuable information for rockfall assessments, but estimating physical and statistical quantities related to rockfall propagation directly is challenging. Simulations are commonly used to infer these quantities, but their subjectivity can result in varying hazard land use zonation extents for different projects. This paper focuses on the application of simulated trajectories for rockfall hazard assessments, with an emphasis on reducing subjectivity. A quantitative guiding rockfall hazard methodology based on earlier concepts is presented and put in the context of legislated requirements. It details how the temporal hazard component, related to the likelihood of failure, can be distributed spatially using simulated trajectories. The method can be applied with results from any process-based software and combined with various prediction methods of the temporal aspect, although this aspect is not the primary focus. Applied examples for static objects and moving objects, such as houses and vehicles, are shown to illustrate the important effect of the object size. For that purpose, the methodology was applied at an indicative level over Norway utilizing its 1 m detailed digital terrain model (DTM) acquired from airborne LiDAR. Potential rockfall sources were distributed in 3D where slopes are steeper than 50°, as most rockfall events in the national landslide database (NSDB) occurred in such areas. This threshold considerably shifts toward gentler slopes when repeating the analysis with coarser DTMs. Simulated trajectories were produced with an adapted version of the simulation model stnParabel. Comparing the number of trajectories reaching the road network to the numerous related registered rockfall events of the NSDB, an indicative averaged yearly frequency of released rock fragments of 1/25 per 10,000 m2 of cliff was obtained for Norway. This average frequency can serve as a starting point for hazard assessments and should be adjusted to better match local conditions. Full article
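A minimal sketch of the reach-angle statistic used throughout the runout comparison (the angle of the line of sight from the release point to where a fragment stops) is given below; the coordinates are invented examples, not data from the study.

```python
# Sketch of the reach-angle (source-to-deposit line-of-sight) statistic:
# arctan(vertical drop / horizontal runout). Sample coordinates are invented.
import numpy as np

def reach_angle_deg(source_xyz, deposit_xyz):
    dx = np.linalg.norm(deposit_xyz[:2] - source_xyz[:2])  # horizontal runout distance
    dz = source_xyz[2] - deposit_xyz[2]                    # vertical drop
    return np.degrees(np.arctan2(dz, dx))

src = np.array([0.0, 0.0, 120.0])                          # hypothetical release point
deposits = np.array([[150.0, 20.0, 0.0], [200.0, -10.0, 0.0]])
angles = [reach_angle_deg(src, d) for d in deposits]
print(np.round(angles, 1))    # smaller reach angle = longer runout
```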
Figure 1. As a study area increases, the amount of spatial data to consider increases. The situation is exacerbated by the use of detailed terrain models, since they provide a lot of spatial data with their high fidelity. Still, the tradeoff can be rewarding if it comes with a potential gain in the predictive accuracy of the simulations. Earlier novel studies simply did not have a choice, since detailed DTMs were not available (e.g., Guzzetti et al. [60]; Frattini et al. [45]; or Derron et al. [64], work delivered in 2009). With the advances in remote sensing technology and computing power, it is now possible to push the bar beyond considering 100,000 megapixels. This increases the amount of information to process by several orders of magnitude.
Figure 2. Effect of the loss of precision on the interpretations related to the rockfall sources when using coarser DTMs. At a cell size of 1 m, most mapped sources have a cliff steeper than 50° in their vicinity. With coarse DTMs, the cumulative curves are artificially shifted toward gentler maximal slopes.
Figure 3. Conceptual representation of the possible source distribution spatial bias and correcting strategies. One thousand points are shown in each example square. The squares thus represent about 120 × 120 m of cliff surface for the chosen number of simulated trajectories per cliff surface of this application case.
Figure 4. Overview of the study area and the intermediate states of the methodology, visualized in QGIS. The national 1 m DTM is subdivided into 2033 15 × 15 km tiles (a). An area of about half of one 15 × 15 km tile is shown in (b), subdivided into 35 1.9 × 1.9 km sub-tiles (out of 64 per tile). The sources shown in orange in (c) are distributed proportionally to the cliff surfaces. The 3D-simulated trajectories and their resulting rasterized paths pass seamlessly across sub-tile boundaries. This way, the temporal information is seamlessly distributed spatially following realistic possible rockfall propagations. The rockfall hazard was estimated locally with moving averages using a circular floating window of the size of hypothetical exposed objects. The indicative hazard zones (d) were produced for the boundaries of the hypothetical objects. Elevation and imagery data in (a) are from ©Mapzen (MIT license) and ©OpenStreetMap contributors (ODbL license), respectively. Elevation and imagery data in (b–d) are from ©hoydedata.no and ©norgeskart.no, respectively (both with a Norwegian license for open government data, NLOD). EPSG: 25833.
Figure 5. Qualitative overview of the considered portion of the Norwegian road network (county and national roads). Attributed information such as the speed limits and the regulated P_Hazard_max threshold based on the yearly averaged daily traffic volume (AADT) is shown in (a,b). Since λ is subject to change based on local conditions, the results were also expressed in terms of the local maximal tolerable frequency of block fragments (λ_max). The obtained λ_max values are shown in (c,d) for an exposed object length of 1000 m and for a vehicle length combined with its stopping distance, respectively. Elevation, imagery, and road network data are from ©Mapzen (MIT license), ©OpenStreetMap contributors (ODbL license), and ©vegdata.no (NLOD license), respectively. EPSG: 25833.
Figure 6. Compiled overview of precisely mapped average and maximal runout distances for several rockfall events and experiments with known source heights. Given the wide range of site geometries and scales, the data are shown in terms of reach angles (source-rock line of sight) over a hypothetical slope profile for comparison purposes. Note the wide variety of observed maximal runout distances (a), around a median value of 31.1°, or (b), around a median of 34.6° for the average runout distances (center of mass of the mapped deposits). (Mapped data from: Caviezel et al. [78]; Hibert et al. [79]; Bourrier et al. [13]; Noël et al. [16]; Pettersen [80]; Midtun [81]; Domaas [82]; Erickson [83]; Lévy and Verly [84]; Barstad [85]; Volkwein et al. [86]).
Figure 7. Cumulative curves showing the ability of the simulations to properly reach the mapped deposited rock fragments for different sites. Perfect or over-conservative predictions would lie on the red curve. The further a cumulative curve deviates from the red one (observations), the more mapped rock fragments it misses due to predicting runout distances that are too short compared to the observations. (Mapped data from: Caviezel et al. [78]; Hibert et al. [79]; Bourrier et al. [13]; Noël et al. [16]; Pettersen [80]; Midtun [81]; Domaas [82]; Erickson [83]; Lévy and Verly [84]; Barstad [85]; Volkwein et al. [86]).
Figure 8. Example of the results produced over Norway, shown here in (a,b) in the Lom municipality, Innlandet county, Norway. For comparison with the indicative hazard zones, the existing hazard zones dominated by rockfall processes in this area are shown in (c). The hazard assessment project with less constraining results in (c) supported the chosen zones with RAMMS::ROCKFALL simulations (Andresen et al. [89]). Elevation, imagery, road network, and existing hazard zone data are from ©hoydedata.no, ©norgeskart.no, ©vegdata.no, and ©kartkatalog.nve.no, respectively (all with an NLOD license). EPSG: 25833.
Figure 9. Examples of the results produced over Norway with a focus on their application to linear infrastructures. In (a,b), the results are shown around the Mannheller ferry infrastructure, as well as for a portion of the Lærdal valley in (c,d), Vestland county, Norway. Elevation, imagery, road network, and rockfall record data are from ©hoydedata.no, ©norgeskart.no, ©vegdata.no, and ©kartkatalog.nve.no, respectively (all with an NLOD license). EPSG: 25833.
Figure 10. Conceptual representation of the distinctions between susceptibility and hazard concepts. If an exposed object can be reached by potential rockfalls, it is considered susceptible (e.g., the yellow and orange houses). Despite being susceptible, it can have a tolerable level of hazard if it is not reached frequently (e.g., the yellow houses). In such a situation, the object is constrained by susceptibility zones (any zones within the dashed black limits), but not by restrictive hazard zones (orange zones).
Figure 11. Ratio of residences constrained by the susceptibility zones (black limits) and the indicative hazard zones (orange-red limits). Note that the indicative hazard zones can expand or contract depending on the expected λ; the ratios thus vary depending on the chosen λ. All residences were considered without differentiating their respective safety class. * As for stnParabel's process-based results, the quantitative hazard methodology could also be applied to Rockyfor3D's results.
Figure 12. Ratio of county and national roads constrained by the susceptibility zones (black limits) and the indicative hazard zones (orange-red limits). Note that the indicative hazard zones can expand or contract depending on the expected λ; the ratios thus vary depending on the chosen λ. In (a), all roads were considered without differentiating their respective safety class. In (b,c), the ratio of road segments exceeding their respective P_Hazard_max threshold is given per individualized safety class. * As for stnParabel's process-based results, the quantitative hazard methodology could also be applied to Rockyfor3D's results.
Figure A1. The Mel de la Niva Mountain overlooking the Evolène village in Val d'Hérens, Valais, Switzerland. The mapped rockfall history of the site (Noël et al. [16]) and rock deposits (BEG) overlay the orthophotos and elevation data from ©swisstopo. Also, the coarse contours of the alpine glacier retreat simulation by Seguinot et al. [95] are shown as a rough temporal reference. EPSG: 21781.
Figure A2. Comparison of the simulated results to the mapped rockfall paths, deposited block fragments, and reconstructed trajectories of the 2015 event. The simulated paths and envelopes for the three scenarios are shown in yellow, green, and blue in (a). The simulated number of passing trajectories from the combined scenarios is shown in shades of blue in (b). The simulated translational velocities of the three small sets of five trajectories are shown in (c), without the near-vertical lines associated with the velocity drops at impacts. The cumulative curves of the mapped deposited block fragments from the BEG SA and of the simulated deposited block fragments in relation to their corresponding reach angles are shown in (d). Elevation data are from ©swisstopo. EPSG: 21781.
Full article ">Figure A3
<p>Distinction in between (<b>a</b>) the number of passing trajectories and (<b>b</b>) the number of reaching trajectories for exposed objects with a width of 30 m perpendicular to the rockfall paths. Elevation data from ©swisstopo. EPSG: 21781.</p>
Full article ">Figure A4
<p>Contours of the same constant hazard probabilities (1/300) based on a frequency of 13 bl./40 yr. for the same simulation results but obtained considering different widths around the simulated paths in (<b>a</b>). In (<b>b</b>), the considered object width remains constant but contours of different hazard probabilities are shown. One would obtain similar probability contours in (<b>b</b>) if using the lower average frequency since deglaciation of 1 bl./9 yr., but with the classes shifted by one category, i.e., the 1/100 contour would become the 1/300 contour. The cumulative contours in red consider all mapped blocks (n = 449). Otherwise, the 50, 60, 70, 80 and 90% contours, respectively, become ~70, ~75, ~78, ~80 and ~85% when neglecting the block fragments that could alternatively be glacial erratics. For this site, the values and related zones are expressed for the center of the hypothetical exposed objects for continuity with (Noël et al. [<a href="#B15-remotesensing-17-00819" class="html-bibr">15</a>]). Elevation and imagery data from ©swisstopo. EPSG: 21781.</p>
Full article ">Figure A5
<p>Ratio of mapped deposited rock fragments constrained by the susceptibility zones (black limits) and the indicative hazard zones (orange-red limits). Note that the indicative hazard zones can expand or contract depending on the expected <span class="html-italic">λ</span>. The ratios thus vary depending on the chosen <span class="html-italic">λ</span>. * Like for stnParabel’s process-based results, the quantitative hazard methodology could also be applied with Rockyfor3D’s results.</p>
Full article ">
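">
The reach-angle comparison in Figures 6 and 7 above lends itself to a short worked example. The sketch below is a minimal illustration, not the authors' code: the point coordinates are invented placeholders, and only the 31.1° median mentioned in the caption is taken from the text. It computes the reach angle of a deposited fragment as the inclination of the source-deposit line of sight below the horizontal.

```python
import math

def reach_angle_deg(source_xyz, deposit_xyz):
    """Reach angle: inclination of the source-to-deposit line of sight
    below the horizontal (a larger angle means a shorter relative runout)."""
    dx = deposit_xyz[0] - source_xyz[0]
    dy = deposit_xyz[1] - source_xyz[1]
    dz = source_xyz[2] - deposit_xyz[2]      # height drop, positive downwards
    horizontal = math.hypot(dx, dy)
    return math.degrees(math.atan2(dz, horizontal))

# Hypothetical coordinates in metres (any projected CRS, e.g., EPSG:25833)
source = (0.0, 0.0, 850.0)
deposit = (540.0, 120.0, 520.0)
angle = reach_angle_deg(source, deposit)

# A fragment whose reach angle falls below a site's median (e.g., the 31.1 deg
# median for maximal runouts cited in Figure 6) travelled comparatively far.
print(f"reach angle = {angle:.1f} deg")
```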
18 pages, 10762 KiB  
Article
NRAP-RCNN: A Pseudo Point Cloud 3D Object Detection Method Based on Noise-Reduction Sparse Convolution and Attention Mechanism
by Ziyue Zhou, Yongqing Jia, Tao Zhu and Yaping Wan
Information 2025, 16(3), 176; https://doi.org/10.3390/info16030176 - 26 Feb 2025
Viewed by 149
Abstract
In recent years, pseudo point clouds generated from depth completion of RGB images and LiDAR data have provided a robust foundation for multimodal 3D object detection. However, the generation process often introduces noise, reducing data quality and detection accuracy. Moreover, existing methods fail to effectively capture channel correlations and global contextual information during the 2D feature extraction stage after the 3D backbone network, limiting detection performance. To address these challenges, this paper proposes NRAP-RCNN, a pseudo point cloud-based 3D object detection method with two key innovations: (1) A noise-reduction sparse convolution network (NRConvNet), comprising NRConv (noise-resistant submanifold sparse convolution), SRB (sparse convolution residual block), and MHSA (multi-head self-attention). NRConv suppresses pseudo point cloud noise by jointly encoding 2D and 3D features, SRB enhances feature extraction depth and robustness, and MHSA optimizes global feature representation. (2) An attention fusion module (ECA_GCA) is introduced to enhance the feature representation of the 2D backbone network by combining channel and global contextual information. The experimental results demonstrate that NRAP-RCNN achieves 88.4% car AP (R40) on the KITTI validation set and 85.1% on the test set, significantly outperforming advanced 3D detection methods, showcasing its effectiveness in improving detection performance. Full article
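To make the attention-fusion idea concrete, the following PyTorch sketch shows how an ECA-style channel-attention branch and a simple global-context branch could be combined on a BEV/pseudo-image feature map. It is only an illustration of the general pattern described in the abstract, not the authors' released implementation; the module names, kernel sizes, and the fusion by sequential application are assumptions.

```python
import torch
import torch.nn as nn

class ECABranch(nn.Module):
    """Efficient-channel-attention-style branch: per-channel weights from
    global average pooling followed by a lightweight 1D convolution."""
    def __init__(self, k_size: int = 3):
        super().__init__()
        self.conv = nn.Conv1d(1, 1, kernel_size=k_size, padding=k_size // 2, bias=False)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x):                           # x: (B, C, H, W)
        y = x.mean(dim=(2, 3))                      # (B, C) global average pooling
        y = self.conv(y.unsqueeze(1)).squeeze(1)    # local cross-channel interaction
        w = self.sigmoid(y).unsqueeze(-1).unsqueeze(-1)
        return x * w                                # channel-reweighted features

class GlobalContextBranch(nn.Module):
    """Simple global-context branch: attention-pooled context vector
    broadcast back onto every spatial location."""
    def __init__(self, channels: int):
        super().__init__()
        self.attn = nn.Conv2d(channels, 1, kernel_size=1)
        self.transform = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, x):                           # x: (B, C, H, W)
        b, c, h, w = x.shape
        a = torch.softmax(self.attn(x).view(b, 1, h * w), dim=-1)   # (B, 1, HW)
        ctx = torch.bmm(x.view(b, c, h * w), a.transpose(1, 2))     # (B, C, 1)
        ctx = self.transform(ctx.view(b, c, 1, 1))
        return x + ctx                              # add context to every location

class ChannelGlobalFusion(nn.Module):
    """Hypothetical fusion: channel attention first, then global context."""
    def __init__(self, channels: int):
        super().__init__()
        self.eca = ECABranch()
        self.gca = GlobalContextBranch(channels)

    def forward(self, x):
        return self.gca(self.eca(x))

# Example: a 256-channel BEV pseudo-image of size 200 x 176
feats = torch.randn(2, 256, 200, 176)
print(ChannelGlobalFusion(256)(feats).shape)        # torch.Size([2, 256, 200, 176])
```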
(This article belongs to the Section Artificial Intelligence)
Show Figures
Graphical abstract
Figure 1: The overall architecture of NRAP-RCNN.
Figure 2: The structure of NRConv.
Figure 3: The structure of SRB.
Figure 4: The structure of ECA_GCA.
Figure 5: The structure of GCA.
Figure 6: Qualitative 3D detection results. Panel (a) shows the detections in the image coordinate system, panel (b) shows the detections in the point cloud coordinate system together with the LiDAR returns, and panel (c) shows the detections in the bird's eye view (BEV).
19 pages, 3487 KiB  
Article
Evaluating the Effectiveness of Soil Profile Rehabilitation for Pluvial Flood Mitigation Through Two-Dimensional Hydrodynamic Modeling
by Julia Atayi, Xin Zhou, Christos Iliadis, Vassilis Glenis, Donghee Kang, Zhuping Sheng, Joseph Quansah and James G. Hunter
Hydrology 2025, 12(3), 44; https://doi.org/10.3390/hydrology12030044 - 26 Feb 2025
Viewed by 255
Abstract
Pluvial flooding, driven by increasingly impervious surfaces and intense storm events, presents a growing challenge for urban areas worldwide. In Baltimore City, MD, USA, climate change, rapid urbanization, and aging stormwater infrastructure are exacerbating flooding impacts, resulting in significant socio-economic consequences. This study evaluated the effectiveness of a soil profile rehabilitation scenario using a 2D hydrodynamic modeling approach for the Tiffany Run watershed, Baltimore City. This study utilized different extreme storm events, a high-resolution (1 m) LiDAR Digital Terrain Model (DTM), building footprints, and hydrological soil data. These datasets were integrated into a fully coupled 2D hydrodynamic model, the City Catchment Analysis Tool (CityCAT), to simulate urban flood dynamics. The pre-soil rehabilitation simulation revealed a maximum water depth of 3.00 m in most areas, with hydrologic soil groups C and D, especially downstream of the study area. The post-soil rehabilitation simulation was targeted at vacant lots and public parcels, accounting for 33.20% of the total area of the watershed. This resulted in a reduced water depth of 2.50 m. Additionally, the baseline runoff coefficient of 0.49 decreased to 0.47 following the rehabilitation, and the model consistently recorded a peak runoff reduction rate of 4.10 across varying rainfall intensities. The validation using a contingency matrix demonstrated true-positive rates of 0.75, 0.50, 0.64, and 0 for the selected events, confirming the model’s capability at capturing real-world flood occurrences. Full article
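As a rough illustration of the runoff-coefficient and peak-reduction figures quoted above, the short sketch below shows how a volumetric runoff coefficient and a peak-flow reduction could be computed from rainfall and simulated runoff series. The numbers are invented for demonstration and do not come from the CityCAT runs.

```python
# Hypothetical rainfall depth (mm) and simulated runoff depth (mm) per time step
rainfall = [2.0, 8.0, 15.0, 10.0, 4.0, 1.0]
runoff_pre = [0.4, 3.5, 8.2, 5.6, 1.6, 0.3]    # before soil rehabilitation
runoff_post = [0.4, 3.4, 7.9, 5.4, 1.5, 0.2]   # after soil rehabilitation

def runoff_coefficient(runoff, rain):
    """Volumetric runoff coefficient: total runoff depth / total rainfall depth."""
    return sum(runoff) / sum(rain)

c_pre = runoff_coefficient(runoff_pre, rainfall)
c_post = runoff_coefficient(runoff_post, rainfall)
peak_reduction_pct = 100.0 * (max(runoff_pre) - max(runoff_post)) / max(runoff_pre)

print(f"runoff coefficient: {c_pre:.2f} -> {c_post:.2f}")
print(f"peak runoff reduction: {peak_reduction_pct:.1f}%")
```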
(This article belongs to the Special Issue Runoff Modelling under Climate Change)
Show Figures
Figure 1: Map showing the study area and its geographical features.
Figure 2: A contingency table was applied to validate modeled flood results (Source: [37]).
Figure 3: Water depth (m) changes resulting from different storm intensities.
Figure 4: 311 reports received on these extreme storm events.
Figure 5: Social media and newspaper reports received on the 10 June 2021 storm event.
Figure 6: Social media and newspaper reports received on the 12 September 2023 storm event.
Figure 7: (A) Boundaries of public properties and vacant lots within the study area; (B) overlay of public parcels and vacant lots on the soil profile map, highlighting the areas targeted for soil rehabilitation.
Figure 8: Spatial distribution of flood water depths post-soil rehabilitation.
21 pages, 3894 KiB  
Article
Bounded-Error LiDAR Compression for Bandwidth-Efficient Cloud-Edge In-Vehicle Data Transmission
by Ray-I Chang, Ting-Wei Hsu, Chih Yang and Yen-Ting Chen
Electronics 2025, 14(5), 908; https://doi.org/10.3390/electronics14050908 - 25 Feb 2025
Viewed by 237
Abstract
Recent advances in autonomous driving have led to an increased use of LiDAR (Light Detection and Ranging) sensors for high-frequency 3D perception, resulting in massive data volumes that challenge in-vehicle networks, storage systems, and cloud-edge communications. To address this issue, we propose a bounded-error LiDAR compression framework that enforces a user-defined maximum coordinate deviation (e.g., 2 cm) in real-world space. Our method combines several compression strategies, namely Error-Bounded Huffman Coding (EB-HC), Error-Bounded 3D Compression (EB-3D), and the extended Error-Bounded Huffman Coding with 3D Integration (EB-HC-3D), each operating under either an axis-wise (Axis) or Euclidean (L2) error metric, together with a lossless Huffman coding baseline. By quantizing and grouping point coordinates based on a strict threshold (either axis-wise or Euclidean), our method significantly reduces data size while preserving geometric fidelity. Experiments on the KITTI dataset demonstrate that, under a 2 cm error bound, our single-bin compression reduces the data to 25–35% of their original size, while multi-bin processing can further compress the data to 15–25% of their original volume. An analysis of compression ratios, error metrics, and encoding/decoding speeds shows that our method achieves a substantial data reduction while keeping reconstruction errors within the specified limit. Moreover, runtime profiling indicates that our method is well suited for deployment on in-vehicle edge devices, thereby enabling scalable cloud-edge cooperation. Full article
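The core bounded-error idea, quantizing each coordinate so that the reconstruction can never deviate by more than the user-set tolerance before entropy coding, can be sketched in a few lines. This is a simplified, hypothetical illustration of uniform quantization with an axis-wise bound plus Huffman-style symbol counting, not the EB-HC/EB-3D pipelines themselves.

```python
import numpy as np
from collections import Counter

def quantize_bounded(points: np.ndarray, max_err: float = 0.02) -> np.ndarray:
    """Uniform quantization with an axis-wise error bound.

    With a step of 2*max_err and round-to-nearest, every reconstructed
    coordinate lies within max_err of the original (here 2 cm)."""
    step = 2.0 * max_err
    return np.round(points / step).astype(np.int64)

def dequantize(codes: np.ndarray, max_err: float = 0.02) -> np.ndarray:
    return codes * (2.0 * max_err)

# Hypothetical LiDAR frame: 100k points, coordinates in metres
pts = np.random.uniform(-80.0, 80.0, size=(100_000, 3))

codes = quantize_bounded(pts, max_err=0.02)
recon = dequantize(codes, max_err=0.02)
assert np.abs(recon - pts).max() <= 0.02 + 1e-9   # per-axis bound holds

# Symbol statistics that a Huffman coder would operate on (delta-coded here,
# since neighbouring quantized values share many small residuals).
deltas = np.diff(codes, axis=0).ravel()
freq = Counter(deltas.tolist())
print(f"distinct symbols: {len(freq)}, most common: {freq.most_common(3)}")
```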
(This article belongs to the Special Issue Recent Advances of Cloud, Edge, and Parallel Computing)
Show Figures
Figure 1: Combined CR performance across multiple scenes.
Figure 2: Combined t_enc performance across multiple scenes.
Figure 3: Combined t_dec performance across multiple scenes.
Figure 4: Combined mean reconstruction error analysis across multiple scenes.
Figure 5: Combined maximum reconstruction error analysis across multiple scenes.
Figure 6: Combined Chamfer Distance analysis across multiple scenes.
Figure 7: Combined Occupancy IoU analysis across multiple scenes.
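">
Figures 6 and 7 of this article report Chamfer Distance and occupancy IoU. For readers unfamiliar with these point cloud quality metrics, a minimal NumPy sketch of both is given below; it is a brute-force illustration for small clouds only, and the voxel size is an assumed parameter.

```python
import numpy as np

def chamfer_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Symmetric Chamfer Distance between two point sets (N,3) and (M,3).
    Brute force: O(N*M) memory, fine for small clouds."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return d.min(axis=1).mean() + d.min(axis=0).mean()

def occupancy_iou(a: np.ndarray, b: np.ndarray, voxel: float = 0.1) -> float:
    """IoU of the voxel sets occupied by each cloud (voxel size in metres)."""
    va = {tuple(v) for v in np.floor(a / voxel).astype(int)}
    vb = {tuple(v) for v in np.floor(b / voxel).astype(int)}
    return len(va & vb) / len(va | vb)

orig = np.random.uniform(-5, 5, size=(500, 3))
recon = orig + np.random.uniform(-0.02, 0.02, size=orig.shape)  # bounded distortion

print(f"Chamfer Distance: {chamfer_distance(orig, recon):.4f} m")
print(f"Occupancy IoU:    {occupancy_iou(orig, recon):.3f}")
```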
27 pages, 11161 KiB  
Article
Quantifying Tree Structural Change in an African Savanna by Utilizing Multi-Temporal TLS Data
by Tasiyiwa Priscilla Muumbe, Jussi Baade, Pasi Raumonen, Corli Coetsee, Jenia Singh and Christiane Schmullius
Remote Sens. 2025, 17(5), 757; https://doi.org/10.3390/rs17050757 - 22 Feb 2025
Viewed by 252
Abstract
Structural changes in savanna trees vary spatially and temporally because of both biotic and abiotic drivers, as well as the complex interactions between them. Given this complexity, it is essential to monitor and quantify woody structural changes in savannas efficiently. We implemented a non-destructive approach based on Terrestrial Laser Scanning (TLS) and Quantitative Structure Models (QSMs) that offers the unique advantage of investigating changes in complex tree parameters, such as volume and branch length parameters that have not been previously reported for savanna trees. Leaf-off multi-scan TLS point clouds were acquired during the dry season, using a Riegl VZ1000 TLS, in September 2015 and October 2019 at the Skukuza flux tower in Kruger National Park, South Africa. These three-dimensional (3D) data covered an area of 15.2 ha with an average point density of 4270 points/m2 (0.015°) and 1600 points/m2 (0.025°) for the 2015 and 2019 clouds, respectively. Individual tree segmentation was applied on the two clouds using the comparative shortest-path algorithm in LiDAR 360(v5.4) software. We reconstructed optimized QSMs and assessed tree structural parameters such as Diameter at Breast Height (DBH), tree height, crown area, volume, and branch length at individual tree level. The DBH, tree height, crown area, and trunk volume showed significant positive correlations (R2 > 0.80) between scanning periods regardless of the difference in the number of points of the matched trees. The opposite was observed for total and branch volume, total number of branches, and 1st-order branch length. As the difference in the point densities increased, the difference in the computed parameters also increased (R2 < 0.63) for a high relative difference. A total of 45% of the trees present in 2015 were identified in 2019 as damaged/felled (75 trees), and the volume lost was estimated to be 83.4 m3. The results of our study showed that volume reconstruction algorithms such as TreeQSMs and high-resolution TLS datasets can be used successfully to quantify changes in the structure of savanna trees. The results of this study are key in understanding savanna ecology given its complex and dynamic nature and accurately quantifying the gains and losses that could arise from fire, drought, herbivory, and other abiotic and biotic disturbances. Full article
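For readers who want to reproduce the kind of per-tree comparison reported here (DBH, height, and volume regressed between the two scanning epochs), the sketch below shows one possible way to compare matched-tree tables. The column names and values are hypothetical, not the study's data.

```python
import numpy as np

# Hypothetical matched-tree table: one row per tree, same order in both epochs
trees_2015 = {"dbh_cm": np.array([18.2, 35.4, 52.1, 27.8]),
              "height_m": np.array([6.1, 9.8, 12.4, 8.0]),
              "trunk_vol_m3": np.array([0.08, 0.41, 1.12, 0.22])}
trees_2019 = {"dbh_cm": np.array([18.9, 36.0, 51.5, 28.6]),
              "height_m": np.array([6.3, 9.6, 12.7, 8.2]),
              "trunk_vol_m3": np.array([0.09, 0.43, 1.05, 0.24])}

def r_squared(x, y):
    """Coefficient of determination of the linear fit y ~ x."""
    slope, intercept = np.polyfit(x, y, 1)
    resid = y - (slope * x + intercept)
    return 1.0 - resid.var() / y.var()

for name in trees_2015:
    x, y = trees_2015[name], trees_2019[name]
    change = y - x
    print(f"{name}: mean change = {change.mean():+.3f}, R^2 = {r_squared(x, y):.3f}")
```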
(This article belongs to the Special Issue Remote Sensing of Savannas and Woodlands II)
Show Figures
Figure 1: Study area location (left), showing the scanned area marked in black and the matched tree area marked in yellow, and (right) the map of Kruger National Park showing the location of the Skukuza Flux Tower where scanning was conducted (a), with a cross-section displaying the change in vegetation over the 4-year period in (b) 2015 and (c) 2019.
Figure 2: Number of correctly segmented trees: (a) 178 trees segmented in 2015; (b) 168 trees segmented in 2019, 93 standing and 75 felled or damaged; (c) the 75 trees damaged by 2019 matched to standing trees in 2015.
Figure 3: The location of the 53 matched trees in the study area.
Figure 4: Difference in the number of points per tree between the matched trees: very high (>75%), high (50–75%), medium (25–50%), and low (≤25%) relative difference.
Figure 5: Canopy Height Model differences: (a) 2019 CHM, (b) 2015 CHM, (c) ΔCHM = 2019 CHM − 2015 CHM.
Figure 6: Comparison of tree parameters between the two scanning periods: (a) DBH, (b) tree height, (c) crown area, (d) trunk volume, (e) total volume, (f) branch volume, (g) branch length, and (h) branch length of 1st-order branches. The dashed line is the 1:1 line, the grey area represents the 95% confidence interval, and the black solid line represents the linear regression between the tree structural parameters in 2019 and 2015. n = 53. The error bars indicate the standard deviation of 10 reconstructions.
Figure 7: Mean DBH in DBH classes.
Figure 8: Mean tree height in DBH classes.
Figure 9: Mean crown area in DBH classes.
Figure 10: Mean trunk volume in DBH classes.
Figure 11: Mean total volume in DBH classes.
Figure 12: Mean branch volume in DBH classes.
Figure 13: Mean branch length in DBH classes.
Figure 14: Mean 1st-order branch length in DBH classes.
Figure 15: Comparison between the TLS-derived and the field-measured DBH for 38 trees for both scanning periods. The dashed line is the 1:1 line, the grey area represents the 95% confidence interval, and the blue and red solid lines represent the linear regression between the TLS-measured and field-measured DBH for 2015 and 2019, respectively.
Figure 16: (a) An example of a tree that succumbed to the effects of elephant damage; the image shows the tree standing in 2015 (blue) and on the ground in 2019 (yellow). (b) An example of a tree that succumbed to the effects of drought; the image shows the tree with a crown in 2015 (blue) and having lost most of its crown in 2019 (yellow).
Figure 17: Example of a matched tree with a high relative difference (50–75%) in the number of points: (a) the tree in 2015 and the resulting QSM model (51,200 points); (b) the tree in 2019 and the resulting QSM model (115,700 points).
25 pages, 9167 KiB  
Review
Modeling LiDAR-Derived 3D Structural Metric Estimates of Individual Tree Aboveground Biomass in Urban Forests: A Systematic Review of Empirical Studies
by Ruonan Li, Lei Wang, Yalin Zhai, Zishan Huang, Jia Jia, Hanyu Wang, Mengsi Ding, Jiyuan Fang, Yunlong Yao, Zhiwei Ye, Siqi Hao and Yuwen Fan
Forests 2025, 16(3), 390; https://doi.org/10.3390/f16030390 - 22 Feb 2025
Viewed by 320
Abstract
The aboveground biomass (AGB) of individual trees is a critical indicator for assessing urban forest productivity and carbon storage. In the context of global warming, it plays a pivotal role in understanding urban forest carbon sequestration and regulating the global carbon cycle. Recent advances in light detection and ranging (LiDAR) have enabled the detailed characterization of three-dimensional (3D) structures, significantly enhancing the accuracy of individual tree AGB estimation. This review examines studies that use LiDAR-derived 3D structural metrics to model and estimate individual tree AGB, identifying key metrics that influence estimation accuracy. A bibliometric analysis of 795 relevant articles from the Web of Science Core Collection was conducted using R Studio (version 4.4.1) and VOSviewer 1.6.20 software, followed by an in-depth review of 80 papers focused on urban forests, published after 2010 and selected from the first and second quartiles of the Chinese Academy of Sciences journal ranking. The results show the following: (1) Dalponte2016 and watershed are more widely used among 2D raster-based algorithms, and 3D point cloud-based segmentation algorithms offer greater potential for innovation; (2) tree height and crown volume are important 3D structural metrics for individual tree AGB estimation, and biomass indices that integrate these parameters can further improve accuracy and applicability; (3) machine learning algorithms such as Random Forest and deep learning consistently outperform parametric methods, delivering stable AGB estimates; (4) LiDAR data sources, point cloud density, and forest types are important factors that significantly affect the accuracy of individual tree AGB estimation. Future research should emphasize deep learning applications for improving point cloud segmentation and 3D structure extraction accuracy in complex forest environments. Additionally, optimizing multi-sensor data fusion strategies to address data matching and resolution differences will be crucial for developing more accurate and widely applicable AGB estimation models. Full article
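As an illustration of the modelling strategy the review finds most robust (non-parametric regression of AGB on LiDAR-derived structural metrics such as tree height and crown dimensions), the following scikit-learn sketch fits a Random Forest to a hypothetical per-tree metric table. The feature names and the synthetic allometry used to generate the targets are assumptions for demonstration only.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error, r2_score

rng = np.random.default_rng(42)
n = 500

# Hypothetical LiDAR-derived structural metrics per tree
height = rng.uniform(4.0, 25.0, n)                  # tree height (m)
crown_area = rng.uniform(3.0, 120.0, n)             # crown projection area (m^2)
crown_vol = crown_area * rng.uniform(1.0, 6.0, n)   # crude crown volume (m^3)

# Synthetic "reference" AGB from an invented allometry plus noise (kg)
agb = 0.8 * (height ** 1.5) * (crown_area ** 0.6) + rng.normal(0.0, 20.0, n)

X = np.column_stack([height, crown_area, crown_vol])
X_tr, X_te, y_tr, y_te = train_test_split(X, agb, test_size=0.3, random_state=0)

model = RandomForestRegressor(n_estimators=300, random_state=0)
model.fit(X_tr, y_tr)
pred = model.predict(X_te)

print(f"R^2  = {r2_score(y_te, pred):.3f}")
print(f"RMSE = {np.sqrt(mean_squared_error(y_te, pred)):.1f} kg")
print("feature importances (height, crown area, crown volume):",
      np.round(model.feature_importances_, 3))
```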
(This article belongs to the Section Urban Forestry)
Show Figures
Figure 1: (a) Annual publications from 2003 to 2024. (b) Top 8 productive journals from 2003 to 2024. The size of the circles represents the number of publications; larger circles indicate higher publication volumes.
Figure 2: (a) Top 20 most productive countries. (b) Country collaboration map. The line thickness represents the strength of collaboration.
Figure 3: (a) Top 10 most productive affiliations from 2003 to 2024. (b) The performances of the top 10 most productive authors from 2003 to 2024. (c) Affiliation co-occurrence network. (d) Author co-occurrence network. The size of the circles represents the publication volume, and edge thickness represents the collaboration strength.
Figure 4: The distribution of reviewed studies categorized by (a) country and (b–g) city. The size of the circles represents the number of studies, with larger circles indicating a higher number of studies.