Search Results (500)

Search Parameters:
Keywords = mobile LIDAR

26 pages, 12506 KiB  
Article
Hierarchical Optimization Segmentation and Parameter Extraction of Street Trees Based on Topology Checking and Boundary Analysis from LiDAR Point Clouds
by Yuan Kou, Xianjun Gao, Yue Zhang, Tianqing Liu, Guanxing An, Fen Ye, Yongyu Tian and Yuhan Chen
Sensors 2025, 25(1), 188; https://doi.org/10.3390/s25010188 - 1 Jan 2025
Viewed by 206
Abstract
Roadside tree segmentation and parameter extraction play an essential role in completing the virtual simulation of road scenes. Point cloud data of roadside trees collected by LiDAR provide important data support for achieving assisted autonomous driving. Because mobile laser scanning of street scenes suffers interference from trees and other ground objects, the roadside tree point cloud may contain a small number of missing points, which makes under-segmentation and over-segmentation common during roadside tree segmentation. In addition, existing methods have difficulty meeting measurement requirements for accuracy in individual tree segmentation. In response to these issues, this paper proposes a roadside tree segmentation algorithm that first completes scene pre-segmentation through unsupervised clustering. The over-segmentation and under-segmentation cases that arise during segmentation are then processed and optimized through projection topology checking and tree-adaptive voxel bound analysis. Finally, overall high-precision segmentation of roadside trees is completed, and relevant parameters such as tree height, diameter at breast height, and crown area are extracted. The proposed method was tested on roadside tree scenes. The experimental results show that our method can effectively recognize all trees in the scene, with an average individual tree segmentation accuracy of 99.07% and parameter extraction accuracy greater than 90%.
(This article belongs to the Section Remote Sensors)
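The abstract outlines unsupervised clustering for pre-segmentation and DBH extraction but gives no code. As a rough illustration only, the sketch below uses scikit-learn's DBSCAN to cluster stems in the horizontal plane and an algebraic (Kasa) least-squares circle fit on a breast-height slice; the parameter values, slice height, and function names are assumptions, not the authors' implementation.

```python
# Illustrative sketch only -- not the authors' implementation.
import numpy as np
from sklearn.cluster import DBSCAN

def pre_segment(points, eps=0.5, min_samples=20):
    """Cluster an (N, 3) point cloud into candidate tree clusters using the
    horizontal (x, y) coordinates; label -1 marks noise points."""
    return DBSCAN(eps=eps, min_samples=min_samples).fit_predict(points[:, :2])

def fit_dbh(tree_points, ground_z, slice_height=1.3, slice_thickness=0.1):
    """Estimate diameter at breast height from a thin slice via a Kasa circle fit."""
    z = tree_points[:, 2] - ground_z
    mask = np.abs(z - slice_height) < slice_thickness / 2
    xy = tree_points[mask, :2]
    if len(xy) < 10:
        return np.nan
    # Solve x^2 + y^2 = 2ax + 2cy + d for center (a, c) and d = r^2 - a^2 - c^2.
    A = np.column_stack([2 * xy[:, 0], 2 * xy[:, 1], np.ones(len(xy))])
    b = (xy ** 2).sum(axis=1)
    (a, c, d), *_ = np.linalg.lstsq(A, b, rcond=None)
    r = np.sqrt(d + a ** 2 + c ** 2)
    return 2 * r  # DBH
```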
Figures
Figure 1. Display of street tree test dataset 1.
Figure 2. Display of street tree test dataset 2. (Points colored with blue and green are trees. Points colored with gray are background noise.)
Figure 3. Procession of tree segmentation.
Figure 4. Schematic diagram of the DBSCAN algorithm.
Figure 5. Display of tree segmentation.
Figure 6. Optimization process diagram.
Figure 7. Topology checking and gauge analysis. (a) Topology checking of the growth direction of the trunk; (b) XOY canopy topology relationship analysis; (c) Gauge analysis.
Figure 8. Analysis results of chest diameter point cloud data.
Figure 9. Effect of least squares circle fitting method.
Figure 10. Schematic diagram of crown diameter detection.
Figure 11. Schematic diagram of crown convex hull method.
Figure 12. Effect of individual tree segmentation on the left side of the road in dataset 1.
Figure 13. Effect of individual tree segmentation on the right side of the road in dataset 1.
Figure 14. Tree parameter extraction results on the right side of the road in dataset 1.
Figure 15. Tree parameter extraction results on the left side of the road in dataset 1.
Figure 16. Effect of individual tree segmentation on the right side of the road in dataset 2.
Figure 17. Effect of individual tree segmentation on the left side of the road in dataset 2.
Figure 18. Tree parameter extraction results on the right side of the road in dataset 2.
Figure 19. Tree parameter extraction results on the left side of the road in dataset 2.
Figure 20. Comparison of individual tree segmentation effects on the left side of the road. (a) The segmentation effect of the individual tree algorithm of our method; (b) Comparison algorithm results of the individual tree segmentation; (c) Top view segmentation results of our method; (d) Top view results of comparison algorithm; (e) The convex hull of the tree crown results of our method; (f) The crown convex hull results of comparison algorithm.
Figure 21. Comparison of individual tree segmentation effects on the right side of the road. (a) The segmentation effect of the individual tree algorithm of our method; (b) Comparison algorithm results for individual tree segmentation effect; (c) Top view segmentation results of our method; (d) Top view results of comparison algorithm; (e) The convex hull of the tree crown results of our method; (f) The crown convex hull results of comparison algorithm.
Figure 22. Ablation experiments of the topology checking module. The circle parts are incorrect segmentation.
Figure 23. Dublin dataset of ALS data.
13 pages, 5322 KiB  
Article
Assessment of LiDAR-Based Sensing Technologies in Bird–Drone Collision Scenarios
by Paula Seoane, Enrique Aldao, Fernando Veiga-López and Higinio González-Jorge
Drones 2025, 9(1), 13; https://doi.org/10.3390/drones9010013 - 27 Dec 2024
Viewed by 297
Abstract
The deployment of Advanced Air Mobility requires the continued development of technologies to ensure operational safety. One of the key aspects to consider is the availability of robust solutions to avoid tactical conflicts between drones and other flying elements, such as other drones or birds. Bird detection is a relatively underexplored area, but given the large number of birds, their shared airspace with drones, and the fact that they are non-cooperative elements within an air traffic management system, it is of interest to study how their detection can be improved and how collisions with them can be avoided. This work demonstrates how a LiDAR sensor mounted on a drone can detect birds of various sizes. A LiDAR simulator, previously developed by the Aerolab research group, is employed in this study. Six different collision trajectories and three different bird sizes (pigeon, falcon, and seagull) are tested. The results show that the LiDAR can detect any of these birds at about 30 m; detection improves as the bird gets closer and for larger birds. The detection accuracy is better than 1 m in most of the cases under study, and the errors grow with increasing drone-bird relative speed.
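The paper itself relies on a LiDAR simulator; as a purely back-of-the-envelope illustration of why detection improves for closer and larger birds, the sketch below estimates how many beams of a scanning LiDAR intersect a rectangular target at a given range. The angular resolutions and target dimensions are assumed placeholder values, not the simulator's settings.

```python
# Back-of-the-envelope sketch (not from the paper): expected LiDAR returns on a bird
# as a function of range, given an assumed angular resolution and target cross-section.
import numpy as np

def expected_returns(distance_m, target_width_m, target_height_m,
                     h_res_deg=0.2, v_res_deg=0.2):
    """Approximate number of beams intersecting a rectangular target at a given range."""
    h_spacing = distance_m * np.radians(h_res_deg)  # beam spacing at that range [m]
    v_spacing = distance_m * np.radians(v_res_deg)
    return max(0.0, (target_width_m / h_spacing) * (target_height_m / v_spacing))

# Example: a pigeon-sized target (~0.25 m x 0.10 m) approaching the ~30 m detection
# range reported in the abstract, under the assumed 0.2 deg resolutions.
for d in (10, 20, 30):
    print(d, "m ->", round(expected_returns(d, 0.25, 0.10), 1), "returns")
```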
Figures
Figure 1. Operational scenarios. (a) trajectory 1, (b) trajectory 2, (c) trajectory 3, (d) trajectory 4, (e) trajectory 5, and (f) trajectory 6.
Figure 2. Randomized operational scenarios. (a) r0 = [50, −17, 0] m; v0 = [−10, 10, 1] m/s; a0 = [−0.6, −0.8, 0.2] m/s²; (b) r0 = [20, 17, 0] m; v0 = [11, −8, −1.2] m/s; a0 = [0, −1.2, −0.3] m/s².
Figure 3. Bird 3D models. (a) pigeon, (b) falcon, and (c) seagull.
Figure 4. LiDAR detection algorithm.
Figure 5. LiDAR echoes. (a) pigeon 3D model (left) and point cloud (right), (b) falcon 3D model (left) and point cloud (right), and (c) seagull 3D model (left) and point cloud (right).
Figure 6. LiDAR echoes simulated for each bird and trajectory.
Figure 7. Position detection error depending on the operational scenario: (a) trajectory 1, (b) trajectory 2, (c) trajectory 3, (d) trajectory 4, (e) trajectory 5, and (f) trajectory 6.
Figure 8. Error statistical assessment for the 300 simulated trajectories of pigeon encounters: (a) average error in target position as a function of flight speed, (b) average error in target position as a function of sensing distance, (c) average error in speed estimation as a function of flight speed, and (d) average error in speed estimation as a function of sensing distance.
Figure 9. Error statistical assessment for the 300 simulated trajectories of seagull encounters: (a) average error in target position as a function of flight speed, (b) average error in target position as a function of sensing distance, (c) average error in speed estimation as a function of flight speed, and (d) average error in speed estimation as a function of sensing distance.
Figure 10. Error statistical assessment for the 300 simulated trajectories of falcon encounters: (a) average error in target position as a function of flight speed, (b) average error in target position as a function of sensing distance, (c) average error in speed estimation as a function of flight speed, and (d) average error in speed estimation as a function of sensing distance.
30 pages, 33512 KiB  
Article
Ecological Management Zoning Based on the Supply–Demand Relationship and Synergies of Urban Forest Ecosystem Services: A Case Study from Fuzhou, China
by Mingzhe Li, Nuo Xu, Fan Liu, Huanran Tong, Nayun Ding, Jianwen Dong and Minhua Wang
Forests 2025, 16(1), 17; https://doi.org/10.3390/f16010017 - 25 Dec 2024
Viewed by 286
Abstract
Urban forests, as vital components of green infrastructure, provide essential ecosystem services (ESs) that support urban sustainability. However, rapid urban expansion and increased density threaten these forests, creating significant imbalances between the supply and demand for these services. Understanding the characteristics of ecosystem services and reasonably dividing ecological management zones are crucial for promoting sustainable urban development. This study introduces an innovative ecological management zoning framework based on the matching degree and synergy relationships of ESs. Focusing on the area within Fuzhou's fourth ring road in China, data from 1038 urban forest sample plots were collected using mobile LIDAR. By integrating the i-Tree Eco model and Kriging interpolation, we assessed the spatial distribution of four key ESs—carbon sequestration, avoided runoff, air purification, and heat mitigation—and analyzed their supply–demand relationships and synergies. Based on these ecological characteristics, we employed unsupervised machine learning classification to identify eight distinct ecological management zones, each accompanied by targeted recommendations. Key findings include the following: (1) ecosystem services of urban forests in Fuzhou exhibit pronounced spatial heterogeneity, with clearly identifiable, statistically significant high-value and low-value areas; (2) heat mitigation, avoided runoff, and air purification services all exhibit synergistic effects, while carbon sequestration shows trade-offs with the other three services in high-value areas, necessitating targeted optimization; (3) eight ecological management zones were identified, each with unique ecological characteristics. This study offers precise spatial insights into Fuzhou's urban forests, providing a foundation for sustainable ecological management strategies.
(This article belongs to the Special Issue Assessing, Valuing, and Mapping Ecosystem Services)
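The abstract names "unsupervised machine learning classification" for the eight zones without specifying the method. Purely as a sketch of that step, the snippet below clusters block units into eight zones with scikit-learn's KMeans on standardized ecosystem-service indicators; the choice of KMeans, the feature layout, and the placeholder data are assumptions, not the study's procedure.

```python
# Minimal zoning sketch (illustrative only): cluster block units into eight zones
# from standardized ecosystem-service indicators.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

def zone_blocks(indicators, n_zones=8, seed=0):
    """indicators: (n_blocks, n_features) array, e.g. supply-demand ratios and
    synergy scores for carbon, runoff, air purification, and heat mitigation."""
    X = StandardScaler().fit_transform(indicators)
    km = KMeans(n_clusters=n_zones, n_init=10, random_state=seed).fit(X)
    return km.labels_

# Hypothetical usage with random placeholder data for 1038 sample plots:
labels = zone_blocks(np.random.rand(1038, 8))
print(np.bincount(labels))
```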
Figures
Figure 1. The research site: built-up area within the fourth ring road, Fuzhou, China.
Figure 2. Methodological framework.
Figure 3. Block units delineation and sample plots selection. (a) Block units' division and land-use information; (b) Spatial distribution characteristics of vegetation structure heterogeneity; (c) sample plots selection.
Figure 4. Two-dimensional schematic of EST and ECSI.
Figure 5. Intensity distribution of supply and demand for ESs.
Figure 6. The spatial distribution characteristics of the ESs supply-demand ratio. (a) Supply and demand diagram for air purification services. (b) Supply and demand diagram for heat mitigation services. (c) Supply and demand diagram for carbon sequestration services. (d) Supply and demand diagram for avoided runoff services. (e) Combined supply and demand ratio for ESs.
Figure 7. (a) Intensity of collaborative services between two ESs, ** represents a significant correlation. (b) intensity of integrated collaborative services among the four ESs.
Figure 8. Ecological management zoning map.
Figure 9. Fuzhou urban green space system planning (2016–2020).
Figure 10. The characteristics of ESs under different urban gradients.
Figure 11. Nonlinear relationships among ESs.
Figure 12. Spatial distribution of ecological management zones under different classification numbers.
20 pages, 6270 KiB  
Article
Initial Pose Estimation Method for Robust LiDAR-Inertial Calibration and Mapping
by Eun-Seok Park, Saba Arshad and Tae-Hyoung Park
Sensors 2024, 24(24), 8199; https://doi.org/10.3390/s24248199 - 22 Dec 2024
Viewed by 379
Abstract
Handheld LiDAR scanners, which typically consist of a LiDAR sensor, Inertial Measurement Unit, and processor, enable data capture while moving, offering flexibility for various applications, including indoor and outdoor 3D mapping in fields such as architecture and civil engineering. Unlike fixed LiDAR systems, handheld devices allow data collection from different angles, but this mobility introduces challenges in data quality, particularly when the initial calibration between sensors is not precise. Accurate LiDAR-IMU calibration, essential for mapping accuracy in Simultaneous Localization and Mapping applications, involves precise alignment of the sensors' extrinsic parameters. This research presents a robust initial pose calibration method for LiDAR-IMU systems in handheld devices, specifically designed for indoor environments. The contributions are twofold. First, we present a robust plane detection method for LiDAR data, which removes the noise caused by the mobility of the scanning device and provides accurate planes for precise LiDAR initial pose estimation. Second, we present a robust plane-aided LiDAR calibration method that estimates the initial pose. By employing this LiDAR calibration method, an efficient LiDAR-IMU calibration is achieved for accurate mapping. Experimental results demonstrate that the proposed method achieves lower calibration errors and improved computational efficiency compared to existing methods.
(This article belongs to the Section Sensors and Robotics)
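The abstract (and Figure 4 below) describes scoring voxels by how plane-like their points are so that edges and noise can be rejected before pose estimation. The sketch below is one generic way to compute such a planarity score via the eigenvalues of each voxel's covariance; the voxel size, minimum point count, and scoring formula are assumptions, not the authors' exact method.

```python
# Illustrative voxel planarity scoring (assumed thresholds; not the authors' exact method).
import numpy as np

def voxel_plane_scores(points, voxel_size=0.5, min_points=10):
    """Group points into voxels and score each voxel by how planar it is:
    a smallest eigenvalue near zero means a thin, plane-like distribution."""
    keys = np.floor(points / voxel_size).astype(int)
    scores = {}
    for key in np.unique(keys, axis=0):
        pts = points[(keys == key).all(axis=1)]
        if len(pts) < min_points:
            continue
        cov = np.cov(pts.T)
        eigvals = np.sort(np.linalg.eigvalsh(cov))            # ascending
        scores[tuple(key)] = 1.0 - eigvals[0] / eigvals.sum()  # close to 1 for planes
    return scores

# Voxels with low scores (high variance along the normal direction) can be rejected
# as edges or noise before plane-aided initial pose estimation.
```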
Figures
Figure 1. LiDAR based mapping using (a) LiDAR-IMU calibration method: error-free mapping, and (b) without LiDAR-IMU calibration method: mapping error due to drift, highlighted in yellow circle. The colors in each map represent the intensity of the LiDAR point cloud.
Figure 2. Overall framework of the proposed initial pose estimation method for robust LiDAR-IMU calibration. Different colors in voxelization show the intensity of LiDAR points in each voxel. The extracted planes are represented with yellow and green color while red color points indicate noise.
Figure 3. Robust plane detection method.
Figure 4. Robust plane extraction through refinement. (a) Voxels containing edges and noise have low plane scores due to large distances and high variance, represented as red normal vectors, while those with high plane scores are represented with blue. (b) The refinement process enables the effective separation and removal of areas containing edges and noise.
Figure 5. LiDAR calibration method.
Figure 6. IMU downsampling.
Figure 7. Qualitative comparison of the proposed method with the benchmark plane detection algorithms.
Figure 8. Top view of LiDAR data. (a) LiDAR raw data before calibration. (b) LiDAR data after calibration using the proposed method.
Figure 9. Performance comparison in terms of (a) roll and (b) pitch errors in the VECtor dataset.
Figure 10. Performance comparison in terms of the (a) mapping result using LI-init and (b) mapping result using LI-init+Proposed.
25 pages, 17064 KiB  
Article
An Environment Recognition Algorithm for Staircase Climbing Robots
by Yanjie Liu, Yanlong Wei, Chao Wang and Heng Wu
Remote Sens. 2024, 16(24), 4718; https://doi.org/10.3390/rs16244718 - 17 Dec 2024
Viewed by 429
Abstract
For deformed wheel-based staircase-climbing robots, the accuracy of staircase step geometry perception and scene mapping are critical factors in determining whether the robot can successfully ascend the stairs and continue its task. Existing LiDAR-based algorithms focus either on step geometry detection or on scene mapping, and few comprehensive algorithms address both for staircases. Moreover, significant errors in step geometry estimation and low mapping accuracy can hinder the ability of deformed wheel-based mobile robots to climb stairs, negatively impacting the efficiency and success rate of task execution. To solve these problems, we propose an effective LiDAR-inertial point cloud detection method for staircases. First, we preprocess the staircase point cloud, using the Statistical Outlier Removal algorithm to remove outliers from the staircase scene and combining the LiDAR's vertical angular resolution and spatial geometric relationships to segment the ground. Then, we post-process the point cloud map obtained from LiDAR SLAM, extract and project the staircase point cloud, fit it with the Ceres optimizer, and solve for dimensional information such as step depth and height in combination with mean filtering. Finally, we validate the effectiveness of the proposed method through multiple sets of SLAM and size-detection experiments in different real staircase scenarios.
(This article belongs to the Special Issue Advanced AI Technology in Remote Sensing)
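The preprocessing step names Statistical Outlier Removal (SOR). Below is a minimal, generic SOR sketch built on a SciPy KD-tree, included only to make the step concrete; the neighbor count and standard-deviation ratio are assumed values, not the paper's settings.

```python
# Sketch of Statistical Outlier Removal (SOR); parameters are assumed, not the paper's.
import numpy as np
from scipy.spatial import cKDTree

def statistical_outlier_removal(points, k=20, std_ratio=2.0):
    """Keep points whose mean distance to their k nearest neighbours is within
    (global mean + std_ratio * global std) of all mean distances."""
    tree = cKDTree(points)
    dists, _ = tree.query(points, k=k + 1)   # first column is the point itself
    mean_d = dists[:, 1:].mean(axis=1)
    threshold = mean_d.mean() + std_ratio * mean_d.std()
    return points[mean_d < threshold]
```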
Figures
Figure 1. The overall architecture of the algorithm proposed in this paper. In the figure, the green box indicates the pre-processing phase of the algorithm, the blue box indicates the degradation detection as well as the processing phase, and the grey indicates the post-processing phase.
Figure 2. Schematic representation of point cloud motion distortion. The blue line depicts the actual contour of the physical environment, the yellow arrow indicates the direction of LiDAR movement, the purple line represents the demarcation line, and the green dashed line illustrates the environment contour as measured by the LiDAR.
Figure 3. LiDAR point cloud de-distortion. (a) shows the raw LiDAR point cloud data, and (b) shows the de-distorted LiDAR point cloud data.
Figure 4. Staircase point cloud outlier removal flowchart.
Figure 5. Staircase point cloud outlier ground segmentation.
Figure 6. Outlier removal and ground point segmentation. (a) shows the main view of the staircase point cloud, (b) shows the original point cloud of the staircase, (c) shows outliers and ground points, and (d) shows the filtered and ground segmented point cloud.
Figure 7. Corner and planar point extraction process.
Figure 8. Point cloud feature extraction result. (a) shows the overall extraction results of the point cloud features, (b) shows the point cloud planar feature extraction details, and (c) shows the point cloud line feature extraction details.
Figure 9. Schematic of point-line distance and point-plane distance. (a) shows the process of calculating point-line distances, and (b) shows the process of calculating point-plane distances.
Figure 10. Schematic diagram of the degradation factor solution.
Figure 11. Calculate the interest region based on the horizontal angle.
Figure 12. Region of interest point cloud segmentation. (a) shows the original LiDAR point cloud, and (b) shows the point cloud of the interest region. In Figure 12b, the white line shows the point cloud of the interest region determined using the above method.
Figure 13. Straight line fitting in the XZ plane of the point cloud in the interest region.
Figure 14. Staircase point cloud extracted from the interest region.
Figure 15. Staircase point cloud row number calculation process.
Figure 16. Comparison of trajectory and ground truth results for four algorithms on four recorded staircase datasets. (a) shows the trajectory comparison of the four algorithms with the ground truth on Dataset 1, (b) shows the trajectory comparison on Dataset 2, (c) shows the trajectory comparison on Dataset 3, and (d) shows the trajectory comparison on Dataset 4.
Figure 17. ATE comparison of four algorithms across four datasets. Subfigures (a–d) display the trajectory evaluations of each algorithm on Datasets 1–4, respectively. In each subfigure, the ATE of A-LOAM, LeGO-LOAM, LIO-SAM, and our algorithm's fitted trajectories are shown relative to the ground truth.
Figure 18. Actual view of the staircases corresponding to datasets 1 and 2. (a) shows the staircase corresponding to dataset 1; the depth × height of each step is (0.35 × 0.155) m. (b) shows the staircase corresponding to dataset 2; the depth × height of each step is (0.30 × 0.145) m.
Figure 19. Dataset 1 mapping results. (a) is the mapping result of the A-LOAM algorithm, (b) is the mapping result of the LeGO-LOAM algorithm, (c) is the mapping result of the LIO-SAM algorithm, and (d) is the mapping result of our algorithm.
Figure 20. Dataset 2 mapping results. (a) is the mapping result of the A-LOAM algorithm, (b) is the mapping result of the LeGO-LOAM algorithm, (c) is the mapping result of the LIO-SAM algorithm, and (d) is the mapping result of our algorithm.
Figure 21. Actual view of the staircases corresponding to datasets 3 and 4. (a) shows the staircase corresponding to dataset 3; the depth × height of each step is (0.26 × 0.16) m. (b) shows the staircase corresponding to dataset 4; the depth × height of each step is (0.26 × 0.17) m.
Figure 22. Comparison of tested and true values for different types of staircase dimensions.
19 pages, 2856 KiB  
Article
Efficiency of Mobile Laser Scanning for Digital Marteloscopes for Conifer Forests in the Mediterranean Region
by Francesca Giannetti, Livia Passarino, Gianfrancesco Aleandri, Costanza Borghi, Elia Vangi, Solaria Anzilotti, Sabrina Raddi, Gherardo Chirici, Davide Travaglini, Alberto Maltoni, Barbara Mariotti, Andrés Bravo-Oviedo, Yamuna Giambastiani, Patrizia Rossi and Giovanni D’Amico
Forests 2024, 15(12), 2202; https://doi.org/10.3390/f15122202 - 14 Dec 2024
Viewed by 556
Abstract
This study evaluates the performance of the ZEB Horizon RT portable mobile laser scanner (MLS) in simulating silvicultural thinning operations across three Tuscan forests dominated by Douglas-fir (Pseudotsuga menziesii (Mirb.) Franco), Italian cypress (Cupressus sempervirens L.), and Stone pine (Pinus pinea L.). The aim is to compare the efficiency and accuracy of the MLS with traditional dendrometric methods. The study established three marteloscopes, each covering a 50 m × 50 m plot area (0.25 ha). Traditional dendrometric methods involved a team georeferencing trees using a total station and measuring the diameter at breast height (DBH) and selected tree heights (H) to calculate the growing stock volume (GSV). The MLS survey was carried out by a two-person team, who processed the point cloud data with LiDAR 360 software to automatically identify the tree positions, DBH, and H. The methods were compared based on time, cost, and simulated felling volume. The MLS method was more time-efficient, saving nearly one and a half hours per marteloscope, equivalent to EUR 170. This advantage was most significant in denser stands, especially the Italian cypress forest. Both methods were comparable in terms of accuracy for the Douglas-fir and Stone pine stands, with no significant differences in felling number or volume, although greater differences were noted for the Italian cypress forest.
(This article belongs to the Section Forest Ecology and Management)
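Both survey methods feed DBH and H into a growing stock volume (GSV) estimate. As a generic illustration only, the sketch below computes stem volume from basal area, height, and a cylindrical form factor; the form factor value and function names are placeholders and not the allometric model used in the study.

```python
# Illustrative growing stock volume (GSV) calculation from DBH and height.
# The form factor here is a generic placeholder, not the study's allometric model.
import numpy as np

def stem_volume(dbh_cm, height_m, form_factor=0.45):
    """Single-stem volume [m^3] ~ basal area * height * form factor."""
    basal_area = np.pi * (np.asarray(dbh_cm) / 200.0) ** 2   # DBH in cm -> radius in m
    return basal_area * np.asarray(height_m) * form_factor

def plot_gsv(dbh_cm, height_m, plot_area_ha=0.25):
    """Growing stock volume per hectare for a 50 m x 50 m marteloscope."""
    return stem_volume(dbh_cm, height_m).sum() / plot_area_ha
```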
Figures
Figure 1. Panels (A,B) for the location of the Tuscany Region and the three study areas. Panels (C–E) display the three marteloscopes: (C) Vallombrosa; (D) Monte Morello; (E) San Rossore, including the positions of all trees overlaid on high-resolution orthomosaic imagery.
Figure 2. Graphical representation of the MLS scan acquisition through 6 walking segments in the 50 m × 50 m marteloscope. The larger panel on the right shows the complete acquisition process.
Figure 3. Douglas-fir forest at Vallombrosa. (A,B) Top view. (C,D) Front view. (A,C) Marteloscope acquisition; (B,D) virtual marteloscope after simulated from-below thinning.
Figure 4. Stone pine forest at San Rossore. (A,B) Top view. (C,D) Front view. (A,C) Marteloscope acquisition; (B,D) virtual marteloscope after simulated gap cutting for natural regeneration.
Figure 5. Italian cypress at Monte Morello. (A–D) Top view. (E–H) Front view. (A,E) Marteloscope acquisition; (C,G) virtual marteloscope after simulated geometric thinning; (B,F) virtual marteloscope after simulated from-below thinning; (D,H) virtual marteloscope after selective thinning.
21 pages, 10733 KiB  
Article
CNN-Based Multi-Object Detection and Segmentation in 3D LiDAR Data for Dynamic Industrial Environments
by Danilo Giacomin Schneider and Marcelo Ricardo Stemmer
Robotics 2024, 13(12), 174; https://doi.org/10.3390/robotics13120174 - 9 Dec 2024
Viewed by 578
Abstract
Autonomous navigation in dynamic environments presents a significant challenge for mobile robotic systems. This paper proposes a novel approach utilizing Convolutional Neural Networks (CNNs) for multi-object detection in 3D space and 2D segmentation using bird's eye view (BEV) maps derived from 3D Light Detection and Ranging (LiDAR) data. Our method aims to enable mobile robots to localize movable objects and their occupancy, which is crucial for safe and efficient navigation. To address the scarcity of labeled real-world datasets, a synthetic dataset based on a simulation environment is generated to train and evaluate our model. Additionally, we employ a subset of the NVIDIA r2b dataset for evaluation in the real world. Furthermore, we integrate our CNN-based detection and segmentation model into a Robot Operating System 2 (ROS2) framework, facilitating communication between mobile robots and a centralized node for data aggregation and map creation. Our experimental results demonstrate promising performance, showcasing the potential applicability of our approach in future assembly systems. While further validation with real-world data is warranted, our work contributes to advancing perception systems by proposing a solution for multi-source, multi-object tracking and mapping.
(This article belongs to the Section AI in Robotics)
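The network consumes BEV maps derived from LiDAR. As a minimal, generic illustration of that preprocessing step, the sketch below rasterizes a point cloud into a max-height BEV grid with NumPy; the grid extent, resolution, and encoding choice are assumptions, not the paper's pipeline.

```python
# Rough sketch of building a bird's eye view (BEV) grid from a LiDAR point cloud;
# grid extent, resolution, and the max-height encoding are arbitrary assumptions.
import numpy as np

def lidar_to_bev(points, x_range=(-20, 20), y_range=(-20, 20), res=0.1):
    """points: (N, 3) array. Returns a 2D max-height map usable as CNN input."""
    nx = int((x_range[1] - x_range[0]) / res)
    ny = int((y_range[1] - y_range[0]) / res)
    bev = np.zeros((ny, nx), dtype=np.float32)
    ix = ((points[:, 0] - x_range[0]) / res).astype(int)
    iy = ((points[:, 1] - y_range[0]) / res).astype(int)
    valid = (ix >= 0) & (ix < nx) & (iy >= 0) & (iy < ny)
    np.maximum.at(bev, (iy[valid], ix[valid]), points[valid, 2])
    return bev
```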
Figures
Figure 1. Architecture of Keypoint Feature Pyramid Network and Fully Convolutional Network with a ResNet-50 encoder backbone. To ease output visualization, segmentation is colored and 3D detection is represented with 2D-oriented bounding boxes. Source: Authors.
Figure 2. Design of system nodes for data aggregation and map building. Source: Authors.
Figure 3. First and second frames predictions. RGB image with 3D bounding boxes visualization ((a,b), respectively), Bird's Eye View (BEV) input with oriented bounding boxes ((c,e), respectively), and semantic segmentation with oriented bounding boxes ((d,f), respectively). Different colors represent different classes and orientations are signed with red edges. Source: Authors.
Figure 4. Qualitative inference comparison on oriented bounding boxes predictions, ground truth on the left column, RTMDET-R-l inferences in the middle and our network's inferences on the right column. Different colors represent different classes. Source: Authors.
Figure 5. Qualitative inference comparison on semantic segmentation predictions. Different colors represent different classes. Source: Authors.
Figure 6. First and second frames predictions on r2b dataset. RGB image with 3D bounding boxes visualization ((a,b), respectively), BEV input with oriented bounding boxes ((c,e), respectively), and semantic segmentation with oriented bounding boxes ((d,f), respectively). Yellow color represents the "person" class and orientations are signed with red edges. Source: Authors.
Figure 7. Absolute localization errors for first and second simulated scenarios ((a,b), respectively). Source: Authors.
Figure 8. First and second simulated scenarios ((a,b), respectively). Semantic occupancy map generated by the central node for scene one (c) and two (d). Semantic occupancy map ground truth for scene one (e) and two (f). Different colors represent different classes. Source: Authors.
Figure 9. Synthetic dataset BEV sample and its automatically generated segmentation ground truth ((a,b), respectively). BEV sample of r2b dataset (c), segmentation obtained with the polygon (contour in red) through manually clicked vertices; blind sides were annotated in a non-convex manner. Source: Authors.
21 pages, 17557 KiB  
Article
Lidar Simultaneous Localization and Mapping Algorithm for Dynamic Scenes
by Peng Ji, Qingsong Xu and Yifan Zhao
World Electr. Veh. J. 2024, 15(12), 567; https://doi.org/10.3390/wevj15120567 - 7 Dec 2024
Viewed by 827
Abstract
Numerous dynamic obstacles in the environment cause significant point cloud ghosting when low-speed intelligent mobile vehicles build high-precision point cloud maps, degrading map accuracy. To address this, this paper proposes a LiDAR-based Simultaneous Localization and Mapping (SLAM) algorithm tailored for dynamic scenes. The algorithm employs a tightly coupled SLAM framework integrating LiDAR and an inertial measurement unit (IMU). For dynamic obstacle removal, the point cloud data is first gridded. To represent the point cloud information more comprehensively, the point cloud within the perception area is linearly discretized by height to obtain its distribution at different height layers, which is then encoded to construct a linearly discretized height descriptor for dynamic region extraction. To preserve more static feature points without altering the original point cloud, the Random Sample Consensus (RANSAC) ground fitting algorithm is employed to fit and segment the ground point cloud within the dynamic regions, followed by removal of the dynamic obstacles. Finally, accurate point cloud poses are obtained through static feature matching. The proposed algorithm has been validated using open-source datasets and self-collected campus datasets. The results demonstrate that the algorithm improves dynamic point cloud removal accuracy by 12.3% compared to the ERASOR algorithm and enhances overall mapping and localization accuracy by 8.3% compared to the LIO-SAM algorithm, thereby providing a reliable environmental description for intelligent mobile vehicles.
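The core idea described above is a per-cell encoding of which height layers contain points. The sketch below is a loose, generic illustration of such a height-layer occupancy descriptor; the cell size, height range, layer count, and bit-mask encoding are assumptions, not the paper's exact descriptor.

```python
# Sketch of a height-layer occupancy descriptor per grid cell, loosely following the
# abstract's description; cell size, height range, and layer count are assumptions.
import numpy as np

def height_descriptor(points, cell_size=0.4, z_min=-1.0, z_max=3.0, n_layers=8):
    """Encode each (x, y) grid cell as a bit mask of which height layers contain points."""
    cells = np.floor(points[:, :2] / cell_size).astype(int)
    layers = np.clip(((points[:, 2] - z_min) / (z_max - z_min) * n_layers).astype(int),
                     0, n_layers - 1)
    desc = {}
    for (cx, cy), layer in zip(map(tuple, cells), layers):
        desc[(cx, cy)] = desc.get((cx, cy), 0) | (1 << int(layer))
    return desc  # comparing descriptors across scans can flag candidate dynamic cells
```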
Figures
Figure 1. Algorithm framework.
Figure 2. (a) Line feature association; (b) surface feature association.
Figure 3. Flowchart of dynamic obstacle removal algorithm.
Figure 4. Lidar point cloud scanning image.
Figure 5. The concentric circle radius formed by LiDAR on the horizontal plane.
Figure 6. Schematic diagram of point cloud encoding.
Figure 7. Encoding diagram.
Figure 8. Schematic diagram of each grid encoding.
Figure 9. Correspondence between keyframes and submaps.
Figure 10. Dynamic obstacle recognition point cloud rendering.
Figure 11. Experimental platform.
Figure 12. Real scene map.
Figure 13. Initial point cloud map.
Figure 14. (a) Comparison chart of dynamic obstacle removal effects using the ERASOR algorithm; (b) comparison chart of dynamic obstacle removal effects using the algorithm in this paper.
Figure 15. Trajectory comparison of gate02 sequence on the x-z plane.
Figure 16. Trajectory comparison of street02 sequence on the x-z plane.
Figure 17. (a) ALOAM algorithm's trajectory error chart for the gate02 sequence; (b) LeGO-LOAM algorithm's trajectory error chart for the gate02 sequence; (c) LIO-SAM algorithm's trajectory error chart for the gate02 sequence; (d) our algorithm's trajectory error chart for the gate02 sequence.
Figure 18. (a) ALOAM algorithm's trajectory error chart for the street02 sequence; (b) LeGO-LOAM algorithm's trajectory error chart for the street02 sequence; (c) LIO-SAM algorithm's trajectory error chart for the street02 sequence; (d) our algorithm's trajectory error chart for the street02 sequence.
Figure 19. (a) Scene one from campus self-collected dataset; (b) scene two from campus self-collected dataset.
Figure 20. (a) Trajectory comparison chart for scene one; (b) trajectory comparison chart for scene two.
Figure 21. (a) Distribution chart of ATE in different ranges for experiment scene one on campus; (b) distribution chart of ATE in different ranges for experiment scene two on campus.
24 pages, 3947 KiB  
Article
Learnable Resized and Laplacian-Filtered U-Net: Better Road Marking Extraction and Classification on Sparse-Point-Cloud-Derived Imagery
by Miguel Luis Rivera Lagahit, Xin Liu, Haoyi Xiu, Taehoon Kim, Kyoung-Sook Kim and Masashi Matsuoka
Remote Sens. 2024, 16(23), 4592; https://doi.org/10.3390/rs16234592 - 6 Dec 2024
Viewed by 476
Abstract
High-definition (HD) maps for autonomous driving rely on data from mobile mapping systems (MMS), but the high cost of MMS sensors has led researchers to explore cheaper alternatives such as low-cost LiDAR sensors. While cost effective, these sensors produce sparser point clouds, leading to poor feature representation and degraded performance in deep learning techniques, such as convolutional neural networks (CNN), for tasks like road marking extraction and classification, which are essential for HD map generation. Examining common image segmentation workflows and the structure of U-Net, a CNN, reveals a source of performance loss in the succession of resizing operations, which further diminishes the already poorly represented features. Addressing this, we propose improving U-Net's ability to extract and classify road markings from sparse-point-cloud-derived images by introducing a learnable resizer (LR) at the input stage and learnable resizer blocks (LRBs) throughout the network, thereby mitigating feature and localization degradation from resizing operations in the deep learning framework. Additionally, we incorporate Laplacian filters (LFs) to better manage activations along feature boundaries. Our analysis demonstrates significant improvements, with F1-scores increasing from below 20% to above 75%, showing the effectiveness of our approach in improving road marking extraction and classification from sparse-point-cloud-derived imagery.
(This article belongs to the Special Issue Applications of Laser Scanning in Urban Environment)
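Figure 5 below describes the learnable resizer as a conventional resizing operation combined with a learnable convolution. The PyTorch sketch below follows that general idea (a bilinear resize plus a learned residual) and adds a fixed depthwise Laplacian filter for boundary emphasis; the channel counts, kernel sizes, and layer arrangement are assumptions, not the paper's configuration.

```python
# Hedged PyTorch sketch of a learnable resizer block and a fixed Laplacian filter,
# following the abstract's general idea; hyperparameters are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class LearnableResizer(nn.Module):
    def __init__(self, channels, scale=0.5):
        super().__init__()
        self.scale = scale
        self.refine = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
            nn.LeakyReLU(0.2),
            nn.Conv2d(channels, channels, 3, padding=1),
        )

    def forward(self, x):
        resized = F.interpolate(x, scale_factor=self.scale,
                                mode="bilinear", align_corners=False)
        return resized + self.refine(resized)   # learned residual on the resized map

def laplacian_filter(x):
    """Fixed 3x3 Laplacian applied per channel to emphasize feature boundaries."""
    k = torch.tensor([[0., 1., 0.], [1., -4., 1.], [0., 1., 0.]], device=x.device)
    k = k.view(1, 1, 3, 3).repeat(x.shape[1], 1, 1, 1)
    return F.conv2d(x, k, padding=1, groups=x.shape[1])
```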
Figures
Figure 1. A typical CNN image segmentation workflow includes resizing operations both before and after the network to adhere to computing constraints.
Figure 2. A sparse-point-cloud-derived image shown in its (left) original size and (right) downsampled size, which will serve as the input for the network. Noticeable changes are evident as the image decreases in scale, with portions of target features being visibly missing.
Figure 3. A downsampled input sparse-point-cloud-derived image (left) and its counterpart after four max-pooling operations, similar to those of U-Net's encoder (right). The pink-circled area highlights target feature disappearance, while the blue-circled area shows misrepresentation of non-target feature areas. This effect remains even after training the network, as confirmed by the results presented in this paper.
Figure 4. Structure of the proposed learnable resized and Laplacian-filtered U-Net. The pink box denotes the learnable resizer (LR) at the input phase. The yellow arrow represents the feature map sharpening at the skip connection. The blue and brown arrows indicate the learnable resizing blocks (LRBs) with Laplacian filter(s) (LF) placed in lieu of the resizing operations within the network.
Figure 5. Abstract representation of a learnable resizer: a conventional resizing operation combined with a learnable convolution operation.
Figure 6. Comparing downsampling results from (left) the learnable resizer and (right) a conventional resizer (bilinear interpolation). The pink box showcases zoomed-in features, while the blue box highlights retained edges after passing the downsampled images through a simple edge filter. To enhance clarity, the downsampled outcome from the learnable resizer has been converted to grayscale for better visualization.
Figure 7. The effective receptive field (ERF) of U-Net with a resizer and max pooling (left) versus a learnable resizer (right) at the input phase with respect to the central pixel. The ERFs shown are from an untrained model and were visualized using the method presented in the original paper [27], and then zoomed and scaled for presentation purposes.
Figure 8. Existing resizing operations in U-Net include max pooling in the encoder and transposed (or up-) convolutions in the decoder.
Figure 9. The effective receptive field (ERF) with respect to the central pixel for a trained U-Net with LR (left) versus LR+LRB (right) at the input phase. The ERFs were visualized using the method presented in the original paper [27] and then zoomed and scaled for presentation purposes.
Figure 10. A downsampled sparse-point-cloud-derived image (left) and its Laplacian-filtered counterpart (right). The pink circle highlights a target feature whose boundary has been emphasized by the Laplacian filter.
Figure 11. Sample (left) sparse-point-cloud-derived image and its corresponding (right) labeled ground truth. The images in the dataset have sizes of 2048 × 512 pixels, with a corresponding ground resolution of 1 × 1 cm. Pixel values were obtained from intensity or return signal strength from the low-cost LiDAR scanning. In the ground truth, black, white, green, and red pixels represent no point cloud value, other (non-road marking), lane line, and ped xing features, respectively.
Figure 12. Samples measuring 1 × 1 m of the sparse-point-cloud-derived image, highlighting the widely spaced distribution of features across the image. This image has also been modified to binary, with all pixels corresponding to point cloud values turned white for better visualization.
Figure 13. A sample of the segmentation results is shown. U-Net completely misses the pedestrian crossing and misclassifies the lane line. U-Net with an LR successfully extracts both but presents misclassification through the overextended lane line. U-Net+LRB+LF with an LR achieves the best extraction and classification, closely resembling the ground truth.
Figure 14. Tracking the localization of the pedestrian crossing class using seg-grad-cam [35], shown as a blue-to-red gradient (with red indicating the highest activation), reveals how the model identifies this class from sampled encoder and decoder layers compared to the base model.
Figure 15. Tracking the localization of the lane line class using seg-grad-cam [35], shown as a blue-to-red gradient (with red indicating the highest activation), reveals how the model identifies this class from sampled encoder and decoder layers compared to the base model.
Figure 16. A sample segmentation result of (left) the base model and (right) our proposal, focusing on feature boundaries. The pink pixels highlight misclassifications along the boundaries, indicating overreaching.
Figure 17. Visualized results of our proposal compared to other U-Net variants. For clarity, we retain only target road marking pixels. In the classification task, green pixels represent a lane line, while red pixels represent a pedestrian crossing. For the extraction task, yellow pixels represent those markings classified as either of the road marking types. This distinction is important to highlight the model's ability to identify road markings in general and correctly distinguish between different types.
Figure 18. Visualized results of our proposal compared to other models. For clarity, we retain only target road marking pixels. In the classification task, green pixels represent a lane line, while red pixels represent a pedestrian crossing. For the extraction task, yellow pixels represent those markings classified as either of the road marking types. This distinction is important to highlight the model's ability to identify road markings in general and correctly distinguish between different types.
23 pages, 4830 KiB  
Article
Vertical Profiles of Aerosol Optical Properties (VIS/NIR) over Wetland Environment: POLIMOS-2018 Field Campaign
by Michal T. Chilinski, Krzysztof M. Markowicz, Patryk Poczta, Bogdan H. Chojnicki, Kamila M. Harenda, Przemysław Makuch, Dongxiang Wang and Iwona S. Stachlewska
Remote Sens. 2024, 16(23), 4580; https://doi.org/10.3390/rs16234580 - 6 Dec 2024
Viewed by 509
Abstract
This study aims to present the benefits of unmanned aircraft systems (UAS) in atmospheric aerosol research, specifically for obtaining information on the vertical variability of aerosol single-scattering properties in the lower troposphere. The results discussed in this paper were obtained during the Polish Radar and Lidar Mobile Observation System (POLIMOS) field campaign in 2018 at a wetland and rural site located in Rzecin (Poland). The UAS was equipped with miniaturised devices (a low-cost aerosol optical counter, an AE-51 aethalometer, and an RS41 radiosonde) to measure aerosol properties (scattering and absorption coefficients) and air thermodynamic parameters. Typical UAS vertical profiles were conducted up to approximately 1000 m agl. During nighttime, UAS measurements show a very shallow surface inversion layer up to about 100–200 m agl, with significant enhancement of the aerosol scattering and absorption coefficients. In this case, the Pearson correlation coefficient between aerosol single-scattering properties measured by ground-based equipment and by the UAS devices decreases significantly with altitude; under such conditions, aerosol properties at 200 m agl are independent of the ground-based observation. On the contrary, the ground observations are better correlated with UAS measurements at higher altitudes during daytime and under well-mixed conditions. During long-range transport of biomass burning smoke from fires in North America, the aerosol absorption coefficient increases with altitude, probably due to entrainment of such particles into the PBL.
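The single-scattering albedo and Ångström exponents referenced in the figures below follow directly from the measured scattering and absorption coefficients. The sketch below shows these standard relations; the example numbers are placeholders, not campaign values.

```python
# Standard aerosol optics relations used throughout the figures: single-scattering
# albedo from scattering and absorption, and the Angstrom exponent from two
# wavelengths. Example numbers are placeholders only.
import numpy as np

def single_scattering_albedo(sigma_scat, sigma_abs):
    """SSA = scattering / (scattering + absorption), both in Mm^-1."""
    return sigma_scat / (sigma_scat + sigma_abs)

def angstrom_exponent(sigma_1, sigma_2, wl_1_nm, wl_2_nm):
    """AE = -ln(sigma_1 / sigma_2) / ln(wl_1 / wl_2)."""
    return -np.log(sigma_1 / sigma_2) / np.log(wl_1_nm / wl_2_nm)

# Placeholder example: 40 Mm^-1 scattering and 5 Mm^-1 absorption at 525 nm.
print(single_scattering_albedo(40.0, 5.0))       # SSA ~ 0.89
print(angstrom_exponent(6.0, 4.0, 370, 950))     # absorbing AE for the 370/950 nm pair
```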
Show Figures

Figure 1: Versa X6-Sci hexacopter with equipment mounted on the bottom side of the UAS.
Figure 2: Time variability of AOD at 500 nm (Level 2.0) between May and September 2018, obtained from CIMEL observations at the Rzecin AERONET site. The navy blue, red, green, and black circles correspond to all data, 22–27 May, 28–30 August, and 8–9 September, respectively. The blue line indicates the running mean AOD with a time window of 10 days.
Figure 3: Temporal variability of anthropogenic (blue), mineral dust (orange), smoke (orange), and sea salt (navy blue) AOD at 550 nm obtained from NAAPS reanalysis for the Rzecin region between (a) 22 and 27 May and (b) 29 August and 9 September 2018.
Figure 4: Time variability of (a) aerosol scattering coefficient at 550 nm, (b) eBC concentration [ng/m³], (c) surface SSA at 550 nm (blue line) and columnar SSA from AERONET at 441 nm (red circles), and (d) ratio of AOD to surface aerosol extinction coefficient [Mm].
Figure 5: Vertical profile of the (a) aerosol scattering [Mm⁻¹], (b) absorption coefficient [Mm⁻¹], (c) single-scattering albedo (all at 525 nm), (d) air temperature [°C], and (e) relative humidity [%] obtained on 22 May 2018 at 23:13 UTC. The error bars show the uncertainty for optical properties and the standard deviation for thermodynamic parameters. The black dots show in situ ground-based observations averaged during UAS measurements.
Figure 6: Vertical profile of the (a) aerosol scattering [Mm⁻¹], (b) absorption coefficient [Mm⁻¹], (c) single-scattering albedo (all at 525 nm), (d) air temperature [°C], and (e) relative humidity [%] obtained on 23 May 2018 at 21:02 UTC. The error bars show the uncertainty for optical properties and the standard deviation for thermodynamic parameters. The black dots show in situ ground-based observations averaged during profiling measurements.
Figure 7: Vertical profile of the (a) aerosol scattering [Mm⁻¹], (b) absorption coefficient [Mm⁻¹], (c) single-scattering albedo (all at 525 nm), (d) air temperature [°C], and (e) relative humidity [%] obtained on 23 May 2018 at 08:44 UTC. The error bars show the uncertainty for optical properties and the standard deviation for thermodynamic parameters. The black dots show in situ ground-based observations averaged during profiling measurements.
Figure 8: Vertical profile of the (a) aerosol scattering [Mm⁻¹], (b) absorption coefficient [Mm⁻¹], (c) single-scattering albedo (all at 525 nm), (d) air temperature [°C], and (e) relative humidity [%] obtained on 27 May 2018 at 08:34 UTC. The error bars show the uncertainty for optical properties and the standard deviation for thermodynamic parameters. The black dots show in situ ground-based observations averaged during profiling measurements.
Figure 9: Scatter plot of the aerosol scattering (a) and absorption (b) coefficients measured by miniaturised equipment on the UAS at 525 nm just above the surface and by ground-based PAX devices at 532 nm. The dotted line corresponds to perfect agreement. Data were taken from all flights where the corresponding sensors were mounted.
Figure 10: Pearson correlation coefficient of aerosol single-scattering properties measured at the ground station and by miniaturised devices onboard the UAS as a function of altitude. The dotted red and solid blue lines correspond to the aerosol scattering and absorption coefficients, respectively, while the solid black and orange lines correspond to the aerosol absorption coefficient during inversion and convection conditions, respectively.
Figure 11: AOD at 525 nm during UAS flights in May 2018 obtained from the CIMEL sun-photometer (blue), the Aurora 4000 (ASC) and PAX (AAC) at the ground station (red), and the SPS7003 (ASC) and AE-51 (AAE) on the UAS (orange). In the last two cases, the AOD was estimated in the layer between the surface and 1 km agl.
Figure 12: Mean vertical variability of (a) eBC concentration [ng/m³], (b) air temperature [°C], and (c) relative humidity [%] during day (blue) and night (orange).
Figure 13: Mean hourly surface variability of aerosol (a) scattering coefficient [Mm⁻¹], (b) absorption coefficient [Mm⁻¹], (c) single-scattering albedo at 532 nm, (d) scattering Angstrom exponent (870/532 nm), (e) absorbing Angstrom exponent (950/370 nm), and (f) PBL top height [m]. Optical properties were measured by the PAX and AE-31 (AAE only) devices, while the PBL was obtained from the HYSPLIT model. The means were calculated for the period of the whole field campaign.
Figure 14: The lidar range-corrected signal at 532 nm (arbitrary units) between 28 and 30 August 2018.
Figure 15: Vertical profiles of (a) aerosol absorption coefficient at 525 nm [Mm⁻¹], (b) air temperature, and (c) relative humidity during surface inversion conditions on 29 August at 07:37 UTC (blue), 30 August at 04:40 UTC (red), and 30 August at 05:40 UTC (orange).
Figure 16: Vertical profiles of (a) aerosol absorption coefficient at 532 nm [Mm⁻¹], (b) air temperature, and (c) relative humidity in the well-mixed lower troposphere on 28 August at 13:52 UTC (blue), 28 August at 15:27 UTC (red), 29 August at 10:35 UTC (orange), and 29 August at 12:22 UTC (navy blue).
Figure A1: Histogram of the maximum UAS flight level in [m].
Figure A2: Mean AOD at 500 nm as a function of the distance of the 48 h back-trajectory ending in Rzecin at 1.5 km agl. The error bars correspond to the standard deviation, while the yellow values correspond to the percentage of cases.
Figure A3: Time variability of the organic carbon mixing ratio as a function of pressure obtained from MERRA-II reanalysis during the second half of August 2018 over the Rzecin site.
Figure A4: The 96 h back-trajectories ending on 27 May 2018 at 12 UTC over Rzecin at 0.5, 1.5, and 3.0 km agl.
Figure A5: The 200 h back-trajectories ending on 29 August 2018 at 00 UTC over Rzecin at 7, 8.5, and 10 km agl.
Figure A6: Smoke from burning biomass over Northern California and British Columbia observed by the MODIS detector on 22 August 2018 (marked with black ovals).
15 pages, 77254 KiB  
Technical Note
Accuracy Assessment of Advanced Laser Scanner Technologies for Forest Survey Based on Three-Dimensional Point Cloud Data
by Jin-Soo Kim, Sang-Min Sung, Ki-Suk Back and Yong-Su Lee
Sustainability 2024, 16(23), 10636; https://doi.org/10.3390/su162310636 - 4 Dec 2024
Viewed by 656
Abstract
Forests play a crucial role in carbon sequestration and climate change mitigation, offering ecosystem services, biodiversity conservation, and water resource management. As global efforts to reduce greenhouse gas emissions intensify, the demand for accurate spatial information to monitor forest conditions and assess carbon absorption capacity has grown. LiDAR (Light Detection and Ranging) has emerged as a transformative tool, providing high-resolution 3D spatial data for detailed analysis of forest attributes, including tree height, canopy structure, and biomass distribution. Unlike traditional manpower-intensive forest surveys, which are time-consuming and often limited in accuracy, LiDAR offers a more efficient and reliable solution. This study evaluates the accuracy and applicability of advanced LiDAR technologies—drone-mounted, terrestrial, and mobile scanners—for generating 3D forest spatial data. The results show that the terrestrial LiDAR achieved the highest precision for diameter at breast height (DBH) and tree height measurements, with RMSE values of 0.66 cm and 0.91 m, respectively. Drone-mounted LiDAR demonstrated excellent efficiency for large-scale surveys, while mobile LiDAR offered portability and speed but required further improvement in accuracy (e.g., RMSE: DBH 0.76 cm, tree height 1.83 m). By comparing these technologies, this study identifies their strengths, limitations, and optimal application scenarios, contributing to more accurate forest management practices and carbon absorption assessments. Full article
(This article belongs to the Special Issue Sustainable Forestry Management and Technologies)
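The accuracy comparison above comes down to simple error statistics against the field reference (diameter tape for DBH, total station for tree height). The sketch below shows one way to compute RMSE and mean bias; the example values are invented for illustration and are not taken from the study.

```python
# Sketch only: RMSE and mean bias of LiDAR-derived tree attributes vs. field reference.
import numpy as np

def rmse_and_bias(estimates, reference):
    """Root-mean-square error and mean bias of estimates against reference values."""
    diff = np.asarray(estimates, dtype=float) - np.asarray(reference, dtype=float)
    return float(np.sqrt(np.mean(diff ** 2))), float(np.mean(diff))

# Hypothetical DBH values [cm]: scanner-derived vs. diameter-tape measurements
dbh_scanner = [32.1, 28.4, 41.0, 25.7]
dbh_tape    = [31.8, 28.9, 40.2, 26.3]
rmse_cm, bias_cm = rmse_and_bias(dbh_scanner, dbh_tape)
print(f"DBH RMSE = {rmse_cm:.2f} cm, bias = {bias_cm:+.2f} cm")
```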
Show Figures

Figure 1: The study area. (Left) Geographic location of the site with coordinates. (Top-right) Field view of the forest showing individual trees labeled for measurements. (Bottom-right) 3D LiDAR point cloud model of the area.
Figure 2: Measurement of diameter at breast height (DBH) and tree height using tools within 3D point cloud data.
Figure 3: Measurement of DBH using diameter tape.
Figure 4: Tree height measurement using a total station.
Figure 5: Three-dimensional point cloud representation of forest canopy and ground structure captured by drone LiDAR.
Figure 6: Detailed 3D point cloud of forest captured using terrestrial 3D laser scanner (Leica RTC360).
Figure 7: Road view image captured by terrestrial 3D laser scanner (Leica RTC360) for tree species identification.
Figure 8: Forest 3D point cloud generated by mobile LiDAR scanner (PX-80).
Figure 9: Measurement agreement for DBH and tree height: Bland–Altman plot.
22 pages, 4119 KiB  
Article
Fast Detection of Idler Supports Using Density Histograms in Belt Conveyor Inspection with a Mobile Robot
by Janusz Jakubiak and Jakub Delicat
Appl. Sci. 2024, 14(23), 10774; https://doi.org/10.3390/app142310774 - 21 Nov 2024
Viewed by 491
Abstract
The automatic inspection of belt conveyors is attracting increasing attention in the mining industry. Using mobile robots to perform the inspection makes it possible to increase the frequency and precision of inspection data collection. One of the issues that needs to be solved is locating inspected objects, such as conveyor idlers, in the vicinity of the robot. This paper presents a novel approach to analyzing 3D LIDAR data to detect idler frames in real time with high accuracy. Our method processes a point cloud image to determine the positions of the frames relative to the robot. The detection algorithm utilizes density histograms, Euclidean clustering, and a dimension-based classifier. The proposed data flow processes each single scan independently, to minimize the computational load, as required for real-time performance. The algorithm is verified with data recorded in a raw material processing plant by comparing the results with human-labeled objects. The proposed process is capable of detecting idler frames in a single 3D scan with accuracy above 83%. The average processing time of a single scan is under 22 ms, with a maximum of 75 ms, ensuring that idler frames are detected within the scan acquisition period, allowing continuous operation without delays. These results demonstrate that the algorithm enables fast and accurate detection and localization of idler frames in real-world scenarios. Full article
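The general idea behind the density-histogram step can be sketched as follows (an illustration under assumptions, not the authors' implementation): project a single scan onto the horizontal plane, keep points that fall into sufficiently dense cells, and group the survivors into frame candidates. DBSCAN is used here as a stand-in for the Euclidean clustering named in the abstract; the cell size, density threshold, and clustering parameters are assumed values.

```python
# Sketch only: density-histogram filtering of a single LIDAR scan plus clustering
# of the remaining points into idler-frame candidates.
import numpy as np
from sklearn.cluster import DBSCAN

def detect_candidates(points_xyz, cell=0.1, min_density=20, eps=0.3, min_samples=15):
    xy = points_xyz[:, :2]
    # 2D density histogram over the horizontal (XY) plane
    x_edges = np.arange(xy[:, 0].min(), xy[:, 0].max() + cell, cell)
    y_edges = np.arange(xy[:, 1].min(), xy[:, 1].max() + cell, cell)
    hist, _, _ = np.histogram2d(xy[:, 0], xy[:, 1], bins=[x_edges, y_edges])
    # keep points whose cell is dense enough (vertical structures stack many points per cell)
    ix = np.clip(np.digitize(xy[:, 0], x_edges) - 1, 0, hist.shape[0] - 1)
    iy = np.clip(np.digitize(xy[:, 1], y_edges) - 1, 0, hist.shape[1] - 1)
    candidates = points_xyz[hist[ix, iy] >= min_density]
    if len(candidates) == 0:
        return []
    # Euclidean-style clustering of the dense points into frame candidates
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(candidates)
    return [candidates[labels == k] for k in set(labels) if k != -1]
```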
Show Figures

Figure 1: Idler supports of typical belt conveyors: (a) [4], (b) [5].
Figure 2: Activity diagram of point cloud processing.
Figure 3: A scheme of the experiment location with marked robot path segments A and B.
Figure 4: Images of the experiment location. (a) Path A. (b) Path B. Green rectangles mark the idlers' supports to be detected.
Figure 5: Mobile platform with the sensor module at the experiment site [4].
Figure 6: Transformation of a point cloud in the preprocessing stage. (a) Original image from the LIDAR sensor. The red rectangle indicates the area with conveyors. (b) Point cloud with distant points clipped. The boxes show the location of idler supports. (c) The results of the RANSAC algorithm—the ground points marked in red. (d) Aligned point cloud with ground removal.
Figure 7: Two-dimensional histograms for a single scan. (a) Projection to the horizontal plane H_XY with manually marked support locations. (b) Projection to the front plane H_YZ, with marked elongated objects.
Figure 8: The results of the density-based segmentation (the points of interest marked in blue-green). (a) Points from the XY segmentation. (b) Points from the YZ segmentation. (c) The set difference of points from the XY and YZ segmentations.
Figure 9: Clusters representing idler frame candidates.
Figure 10: Examples of detection.
Figure 11: Spatial distribution of detection results in robot local coordinates and unrestricted range. (a) Along Path A. (b) Along Path B.
Figure 12: Detection results in areas with various theoretical numbers of active LIDAR channels. (a) Along Path A. (b) Along Path B.
Figure 13: Spatial distribution of detection results in robot local coordinates in the region limited to 6 and more LIDAR planes. (a) Along Path A. (b) Along Path B.
Figure 14: Detection of the supports in time—X coordinate of the objects. (a) Path A—in the first row to the left of the robot. (b) Path A—in the first row to the right of the robot. (c) Path B—in the first row to the left of the robot.
Figure 15: Duration of processing stages along Path A. (a) For each scan along the trajectory. (b) Box plots of the duration of stages.
Figure 16: Duration of processing stages along Path B. (a) For each scan along the trajectory. (b) Box plots of the duration of stages.
16 pages, 12606 KiB  
Article
Monitoring and Modeling Urban Temperature Patterns in the State of Iowa, USA, Utilizing Mobile Sensors and Geospatial Data
by Clemir Abbeg Coproski, Bingqing Liang, James T. Dietrich and John DeGroote
Appl. Sci. 2024, 14(22), 10576; https://doi.org/10.3390/app142210576 - 16 Nov 2024
Viewed by 661
Abstract
Thorough investigations into air temperature variation across urban environments are essential to address concerns about city livability. Given the limited research on smaller cities, especially in the American Midwest, the goal of this research was to examine the spatial patterns of air temperature across multiple small to medium-sized cities in Iowa, a relatively rural US state. Extensive fieldwork was conducted using custom-built mobile temperature sensors to collect air temperature data at high temporal and spatial resolution in ten Iowa urban areas during the afternoon, evening, and night on days exceeding 32 °C from June to September 2022. Using the random forest machine-learning algorithm and urban morphological variables estimated at varying neighborhood distances from 1 m² aerial imagery and LiDAR-derived products, we created 24 predicted temperature surface models with R² coefficients ranging from 0.879 to 0.997, the majority exceeding 0.95, all with p-values < 0.001. The normalized difference vegetation index and the 800 m neighborhood distance were found to be the most significant in explaining the collected air temperature values. This study expanded upon previous research by examining cities of different sizes to provide a broader understanding of the impact of urban morphology on air temperature distribution, while also demonstrating the utility of the random forest algorithm across cities ranging from approximately 10,000 to 200,000 inhabitants. These findings can inform policies addressing urban heat island effects and climate resilience. Full article
(This article belongs to the Special Issue Geospatial Technology: Modern Applications and Their Impact)
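To make the modeling step concrete, the sketch below fits a random forest regressor to synthetic data built from hypothetical morphology predictors (NDVI, mean building height, and impervious-surface fraction within an 800 m neighborhood); it mirrors the general approach rather than the authors' actual feature set or data.

```python
# Sketch only: random forest regression of air temperature on urban-morphology predictors.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500
X = np.column_stack([
    rng.uniform(0.0, 0.9, n),    # NDVI within 800 m (assumed predictor)
    rng.uniform(0.0, 15.0, n),   # mean building height within 800 m [m] (assumed predictor)
    rng.uniform(0.0, 1.0, n),    # impervious-surface fraction (assumed predictor)
])
# synthetic air temperature [degC]: cooler with more vegetation, warmer with more impervious cover
y = 32.0 - 3.0 * X[:, 0] + 0.05 * X[:, 1] + 2.0 * X[:, 2] + rng.normal(0.0, 0.3, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_tr, y_tr)
print("R^2 on held-out data:", round(r2_score(y_te, model.predict(X_te)), 3))
print("feature importances:", model.feature_importances_)
```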
Show Figures

Figure 1: Cities for which temperature data were collected.
Figure 2: Temperature sensor devices (Adafruit Sensirion SHT40).
Figure 3: Workflow to derive urban morphometric independent variables and application of the random forest algorithm.
Figure 4: Measured temperature in Waterloo/Cedar Falls during the afternoon.
Figure 5: Measured temperature in Waterloo/Cedar Falls during the evening.
Figure 6: Measured temperature in Waterloo/Cedar Falls during the night.
Figure 7: Modeled raster surface for Waterloo/Cedar Falls afternoon.
Figure 8: Modeled raster surface for Waterloo/Cedar Falls evening.
Figure 9: Modeled raster surface for Waterloo/Cedar Falls night.
16 pages, 4667 KiB  
Article
State Estimation for Quadruped Robots on Non-Stationary Terrain via Invariant Extended Kalman Filter and Disturbance Observer
by Mingfei Wan, Daoguang Liu, Jun Wu, Li Li, Zhangjun Peng and Zhigui Liu
Sensors 2024, 24(22), 7290; https://doi.org/10.3390/s24227290 - 14 Nov 2024
Viewed by 713
Abstract
Quadruped robots possess significant mobility in complex and uneven terrains due to their outstanding stability and flexibility, making them highly suitable for rescue missions, environmental monitoring, and smart agriculture. With the increasing use of quadruped robots in more demanding scenarios, ensuring accurate and stable state estimation in complex environments has become particularly important. Existing state estimation algorithms relying on multi-sensor fusion, such as those using IMU, LiDAR, and visual data, often face challenges on non-stationary terrains due to issues like foot-end slippage or unstable contact, leading to significant state drift. To tackle this problem, this paper introduces a state estimation algorithm that integrates an invariant extended Kalman filter (InEKF) with a disturbance observer, aiming to estimate the motion state of quadruped robots on non-stationary terrains. First, foot-end slippage is modeled as a deviation in body velocity and explicitly included in the state equations, allowing for a more precise representation of how slippage affects the state. Second, the state update process integrates both foot-end velocity and position observations to improve the overall accuracy and comprehensiveness of the estimation. Finally, a foot-end contact probability model, coupled with an adaptive covariance adjustment strategy, is employed to dynamically modulate the influence of the observations. These enhancements significantly improve the filter's robustness and the accuracy of state estimation in non-stationary terrain scenarios. Experiments conducted with the Jueying Mini quadruped robot on various non-stationary terrains show that the enhanced InEKF method offers notable advantages over traditional filters in compensating for foot-end slippage and adapting to different terrains. Full article
(This article belongs to the Section Sensors and Robotics)
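One way to picture the adaptive covariance idea is sketched below (a simplified illustration, not the paper's InEKF formulation): the measurement noise attached to a leg's observation is inflated when that foot's contact probability drops, so slipping or unreliably contacting feet are down-weighted in the Kalman-style update. The scaling rule and matrix dimensions are assumptions.

```python
# Sketch only: contact-probability-driven inflation of measurement noise in a Kalman-style update.
import numpy as np

def adaptive_measurement_covariance(R_nominal, contact_prob, floor=1e-3):
    """Scale the nominal covariance inversely with the estimated contact probability."""
    return R_nominal / max(contact_prob, floor)   # low probability -> large covariance

def kalman_gain(P, H, R):
    """Standard gain K = P H^T (H P H^T + R)^-1 shared by EKF/InEKF-style updates."""
    S = H @ P @ H.T + R
    return P @ H.T @ np.linalg.inv(S)

# Example: 3x3 foot-velocity measurement noise with 80% contact confidence
R0 = np.eye(3) * 1e-3
R = adaptive_measurement_covariance(R0, contact_prob=0.8)
```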
Show Figures

Figure 1: Test environments.
Figure 2: Foot slipping scenarios of a quadruped robot during ground contact.
Figure 3: Estimation of foot contact probability during unstable contact events, with (a) representing right front leg and (b) left rear leg.
Figure 4: The position estimates of the quadruped robot in the X, Y, and Z directions on different terrains, with (a–c) depicting the position estimate for rugged slope terrain, (d–f) for shallow grass terrain, and (g–i) for deep grass terrain.
Figure 5: Pitch and roll angle estimation of the quadruped robot on different terrains, with (a,d) depicting the estimate for rugged slope terrain, (b,e) for shallow grass terrain, and (c,f) for deep grass terrain.
26 pages, 21893 KiB  
Article
An Example of Using Low-Cost LiDAR Technology for 3D Modeling and Assessment of Degradation of Heritage Structures and Buildings
by Piotr Kędziorski, Marcin Jagoda, Paweł Tysiąc and Jacek Katzer
Materials 2024, 17(22), 5445; https://doi.org/10.3390/ma17225445 - 7 Nov 2024
Viewed by 663
Abstract
This article examines the potential of low-cost LiDAR technology for 3D modeling and assessment of the degradation of historic buildings, using a section of the Koszalin city walls in Poland as a case study. Traditional terrestrial laser scanning (TLS) offers high accuracy but is expensive. The study assessed whether more accessible LiDAR options, such as those integrated into mobile devices like the Apple iPad Pro, can serve as viable alternatives. This study was conducted in two phases—first assessing measurement accuracy and then degradation detection—using tools such as the FreeScan Combo scanner and the Z+F 5016 IMAGER TLS. The results show that, while low-cost LiDAR is suitable for small-scale documentation, its accuracy decreases for larger, more complex structures compared to TLS. Despite these limitations, this study suggests that low-cost LiDAR can reduce costs and improve access to heritage conservation, although further development of mobile applications is recommended. Full article
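A common way to quantify such accuracy differences is a cloud-to-cloud comparison against the TLS reference. The sketch below (an illustration, not the authors' workflow) computes nearest-neighbour distances with a k-d tree and summarizes them; both clouds are assumed to be already registered in a common coordinate system.

```python
# Sketch only: cloud-to-cloud distances between a low-cost scan and a TLS reference cloud.
import numpy as np
from scipy.spatial import cKDTree

def cloud_to_cloud_distances(test_cloud, reference_cloud):
    """Distance from every test point to its nearest reference point (same units as input)."""
    tree = cKDTree(np.asarray(reference_cloud, dtype=float))
    distances, _ = tree.query(np.asarray(test_cloud, dtype=float), k=1)
    return distances

def summarize(distances):
    """Summary statistics commonly reported in point cloud accuracy assessments."""
    return {"mean": float(np.mean(distances)),
            "rmse": float(np.sqrt(np.mean(distances ** 2))),
            "p95": float(np.percentile(distances, 95))}
```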
This article examines the potential of low-cost LiDAR technology for 3D modeling and assessment of the degradation of historic buildings, using a section of the Koszalin city walls in Poland as a case study. Traditional terrestrial laser scanning (TLS) offers high accuracy but is expensive. The study assessed whether more accessible LiDAR options, such as those integrated with mobile devices such as the Apple iPad Pro, can serve as viable alternatives. This study was conducted in two phases—first assessing measurement accuracy and then assessing degradation detection—using tools such as the FreeScan Combo scanner and the Z+F 5016 IMAGER TLS. The results show that, while low-cost LiDAR is suitable for small-scale documentation, its accuracy decreases for larger, complex structures compared to TLS. Despite these limitations, this study suggests that low-cost LiDAR can reduce costs and improve access to heritage conservation, although further development of mobile applications is recommended. Full article
Show Figures

Figure 1: Location of the object under study.
Figure 2: City plan with the existing wall sections plotted on a current orthophotomap [17].
Figure 3: Six fragments of walls that survive today, numbered from 1 to 6.
Figure 4: Workflow of the research program.
Figure 5: Dimensions and weights of the equipment used.
Figure 6: Locations of scanner positions.
Figure 7: Point clouds achieved using TLS.
Figure 8: Measurement results from 3DScannerApp for fragments D and M.
Figure 9: Location of selected measurement markers. (a) View of fragment D. (b) View of fragment M.
Figure 10: Cross-section through the acquired point clouds in relation to the reference cloud (green): (a) 3DScannerApp; (b) Pix4DCatch Captured; (c) Pix4DCatch Depth; (d) Pix4DCatch Fused.
Figure 11: Measurement results from the SiteScape application.
Figure 12: Differences between Stages 1 and 2 for city wall fragment D.
Figure 13: Differences between Stages 1 and 2 for city wall fragment M.
Figure 14: Location of selected defects where degradation has occurred.
Figure 15: Defect W1 projected onto the plane.
Figure 16: Cross-sections through defect W1.
Figure 17: Defect W2 projected onto the plane.
Figure 18: Cross-sections through defect W2.
Figure 19: Defect W3 projected onto the plane.
Figure 20: Cross-sections through defect W3.
Figure 21: Defect W4 projected onto the plane.
Figure 22: Cross-sections through defect W4.
Figure 23: Differences between Stages 1 and 2 for measurements taken with a handheld scanner.
Figure 24: Defect W2 projected onto the plane—handheld scanner.
Figure 25: Cross-sections through defect W2—handheld scanner.
Figure 26: Defect W3 projected onto the plane—handheld scanner.
Figure 27: Cross-sections through defect W3—handheld scanner.
Figure 28: Defect W4 projected onto the plane—handheld scanner.
Figure 29: Cross-sections through defect W4—handheld scanner.
Figure 30: Example path of a single measurement with marked sample positions of the device.
Figure 31: Examples of errors created at corners with the device's trajectory marked: (a) SiteScape; (b) 3DScannerApp.