Search Results (17)

Search Parameters:
Keywords = LiDAR-IMU calibration

18 pages, 9657 KiB  
Article
Research on Digital Terrain Construction Based on IMU and LiDAR Fusion Perception
by Chen Huang, Yiqi Wang, Xiaoqiang Sun and Shiyue Yang
Sensors 2025, 25(1), 15; https://doi.org/10.3390/s25010015 - 24 Dec 2024
Viewed by 213
Abstract
To address the shortcomings of light detection and ranging (LiDAR) sensors in extracting road surface elevation information in front of a vehicle, a scheme for digital terrain construction based on the fusion of an Inertial Measurement Unit (IMU) and LiDAR perception is proposed. First, two sets of sensor coordinate systems were configured, and the parameters of LiDAR and IMU were calibrated. Then, a terrain construction system based on the fusion perception of IMU and LiDAR was established, and improvements were made to the state estimation and mapping architecture. Terrain construction experiments were conducted in an academic setting. Finally, based on the output information from the terrain construction system, a moving average-like algorithm was designed to process point cloud data and extract the road surface elevation information at the vehicle’s trajectory position. By comparing the extraction effects of four different sliding window widths, the 4 cm width sliding window, which yielded the best results, was ultimately selected, making the extracted road surface elevation information more accurate and effective.
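The abstract only sketches the elevation extraction step. As a rough illustration of a moving-average-style sliding window over road-surface elevation samples (using the 4 cm window width the authors report as best), here is a minimal Python sketch; the data layout, sample spacing, and function names are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def sliding_window_elevation(x, z, window=0.04):
    """Smooth a road elevation profile z(x) with a moving-average-like window.

    x : 1D array of longitudinal positions along the tire trajectory [m]
    z : 1D array of point-cloud elevations at those positions [m]
    window : window width in metres (0.04 m = 4 cm, the width reported as
             giving the best results)
    """
    x = np.asarray(x, dtype=float)
    z = np.asarray(z, dtype=float)
    order = np.argsort(x)
    x, z = x[order], z[order]

    smoothed = np.empty_like(z)
    half = window / 2.0
    for i, xi in enumerate(x):
        mask = (x >= xi - half) & (x <= xi + half)
        smoothed[i] = z[mask].mean()   # average all samples inside the window
    return x, smoothed

# Example with synthetic data (a speed bump plus sensor noise).
if __name__ == "__main__":
    xs = np.linspace(0.0, 5.0, 500)
    bump = 0.05 * np.exp(-((xs - 2.5) ** 2) / 0.05)
    zs = bump + np.random.normal(0.0, 0.005, xs.shape)
    _, z_smooth = sliding_window_elevation(xs, zs, window=0.04)
    print("max elevation (raw, smoothed):", zs.max(), z_smooth.max())
```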
(This article belongs to the Section Radar Sensors)
Figure 1. LiDAR coordinate system and IMU coordinate system.
Figure 2. LiDAR and IMU calibration diagram.
Figure 3. Overall architecture of state estimation.
Figure 4. Road segment with speed bump on campus.
Figure 5. Point cloud map of road segment with speed bump.
Figure 6. Point cloud map of road segment with speed bump generated using improved architecture.
Figure 7. Uneven road surface used for the formal experiment.
Figure 8. Point cloud map generated using the original architecture.
Figure 9. Point cloud maps generated using the improved architecture.
Figure 10. Effect of the Passthrough algorithm.
Figure 11. Flowchart of the Passthrough algorithm.
Figure 12. Top view of point cloud data for the front tire trajectory position.
Figure 13. Road surface elevation information generated using the moving average-like algorithm.
Figure 14. Road surface elevation information generated using the Gaussian filter algorithm.
20 pages, 6270 KiB  
Article
Initial Pose Estimation Method for Robust LiDAR-Inertial Calibration and Mapping
by Eun-Seok Park, Saba Arshad and Tae-Hyoung Park
Sensors 2024, 24(24), 8199; https://doi.org/10.3390/s24248199 - 22 Dec 2024
Viewed by 373
Abstract
Handheld LiDAR scanners, which typically consist of a LiDAR sensor, an Inertial Measurement Unit, and a processor, enable data capture while moving, offering flexibility for various applications, including indoor and outdoor 3D mapping in fields such as architecture and civil engineering. Unlike fixed LiDAR systems, handheld devices allow data collection from different angles, but this mobility introduces challenges in data quality, particularly when the initial calibration between sensors is not precise. Accurate LiDAR-IMU calibration, essential for mapping accuracy in Simultaneous Localization and Mapping applications, involves precise alignment of the sensors’ extrinsic parameters. This research presents a robust initial pose calibration method for LiDAR-IMU systems in handheld devices, specifically designed for indoor environments. The contributions are twofold. First, we present a robust plane detection method for LiDAR data that removes the noise caused by the mobility of the scanning device and provides accurate planes for precise LiDAR initial pose estimation. Second, we present a robust plane-aided LiDAR calibration method that estimates the initial pose. By employing this LiDAR calibration method, an efficient LiDAR-IMU calibration is achieved for accurate mapping. Experimental results demonstrate that the proposed method achieves lower calibration errors and improved computational efficiency compared to existing methods.
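The plane-detection step is only summarized above. As a hedged sketch of how per-voxel planarity is commonly scored (PCA on each voxel's points, comparing the smallest covariance eigenvalue with the total), the following Python snippet illustrates the general idea; the scoring formula, thresholds, and names are assumptions rather than the authors' exact method.

```python
import numpy as np

def voxel_plane_score(points):
    """Planarity score in [0, 1] for the points that fall in one voxel.

    Runs PCA on the points and compares the smallest eigenvalue of the
    covariance matrix with the sum of eigenvalues: near-planar voxels have
    one eigenvalue close to zero, so the score approaches 1.
    """
    pts = np.asarray(points, dtype=float)
    if pts.shape[0] < 5:                        # too few points to judge planarity
        return 0.0
    centered = pts - pts.mean(axis=0)
    cov = centered.T @ centered / pts.shape[0]
    eigvals = np.sort(np.linalg.eigvalsh(cov))  # ascending
    return 1.0 - eigvals[0] / (eigvals.sum() + 1e-12)

def plane_normal(points):
    """Normal of the best-fit plane: eigenvector of the smallest eigenvalue."""
    pts = np.asarray(points, dtype=float)
    centered = pts - pts.mean(axis=0)
    cov = centered.T @ centered / pts.shape[0]
    _, eigvecs = np.linalg.eigh(cov)
    return eigvecs[:, 0]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # A noisy horizontal patch scores high; a random blob scores low.
    patch = np.c_[rng.uniform(0, 1, (200, 2)), rng.normal(0, 0.002, 200)]
    blob = rng.normal(0, 1, (200, 3))
    print("patch score:", round(voxel_plane_score(patch), 3))
    print("blob  score:", round(voxel_plane_score(blob), 3))
    print("patch normal:", np.round(plane_normal(patch), 3))
```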
(This article belongs to the Section Sensors and Robotics)
Figure 1. LiDAR-based mapping (a) with the LiDAR-IMU calibration method: error-free mapping, and (b) without the LiDAR-IMU calibration method: mapping error due to drift, highlighted by a yellow circle. The colors in each map represent the intensity of the LiDAR point cloud.
Figure 2. Overall framework of the proposed initial pose estimation method for robust LiDAR-IMU calibration. Different colors in the voxelization show the intensity of LiDAR points in each voxel. The extracted planes are shown in yellow and green, while red points indicate noise.
Figure 3. Robust plane detection method.
Figure 4. Robust plane extraction through refinement. (a) Voxels containing edges and noise have low plane scores due to large distances and high variance (red normal vectors), while voxels with high plane scores are shown in blue. (b) The refinement process enables the effective separation and removal of areas containing edges and noise.
Figure 5. LiDAR calibration method.
Figure 6. IMU downsampling.
Figure 7. Qualitative comparison of the proposed method with the benchmark plane detection algorithms.
Figure 8. Top view of LiDAR data. (a) LiDAR raw data before calibration. (b) LiDAR data after calibration using the proposed method.
Figure 9. Performance comparison in terms of (a) roll and (b) pitch errors on the VECtor dataset.
Figure 10. Performance comparison in terms of the mapping result using (a) LI-init and (b) LI-init + Proposed.
27 pages, 3941 KiB  
Article
Precision Inter-Row Relative Positioning Method by Using 3D LiDAR in Planted Forests and Orchards
by Limin Liu, Dong Ji, Fandi Zeng, Zhihuan Zhao and Shubo Wang
Agronomy 2024, 14(6), 1279; https://doi.org/10.3390/agronomy14061279 - 13 Jun 2024
Cited by 1 | Viewed by 994
Abstract
Accurate positioning at the inter-row canopy can provide data support for precision variable-rate spraying. Therefore, there is an urgent need for a reliable positioning method for the inter-row canopy of closed orchards (planted forests). In this study, an Extended Kalman Filter (EKF) fusion positioning method (method C) was first constructed by calibrating the errors of the IMU and encoder. 3D Light Detection and Ranging (LiDAR) observations were then introduced and fused into method C, yielding an EKF fusion positioning method based on 3D LiDAR-corrected detection (method D). Method D starts or stops method C according to the presence or absence of the canopy. The vertically installed 3D LiDAR detects the canopy body centers and provides the vehicle with the inter-row vertical distance and heading, which are obtained from the distance to the body centers and the fixed row spacing. This provides an accurate initial position for method C and corrects the positioning trajectory. Finally, positioning and canopy length measurement experiments were designed using a GPS positioning system. The results show that the proposed method significantly improves the accuracy of length measurement and positioning at the inter-row canopy, and the accuracy does not change significantly with the distance traveled. In the orchard experiment, the average positioning deviations of the lateral and vertical distances at the inter-row canopy were 0.1 m and 0.2 m, respectively, the average heading deviation was 6.75°, and the average relative error of canopy length measurement was 4.35%. The method provides a simple and reliable inter-row positioning approach for current remote-controlled and manned agricultural machinery working in standardized 3D crops, allowing such machinery to be modified to improve its automation level.
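Method C is described above only at a high level. Purely as an illustration of the shape of such an EKF (predict with encoder/IMU motion, correct with a LiDAR-derived inter-row offset and heading), here is a minimal Python sketch; the state vector, measurement model, and all noise values are assumptions and are not taken from the paper.

```python
import numpy as np

def ekf_predict(x, P, v, omega, dt, Q):
    """Prediction step of a planar EKF with state x = [px, py, theta].

    v and omega are the forward speed and yaw rate from the calibrated
    encoder/IMU; the process model here is a simple unicycle model used
    only for illustration.
    """
    px, py, th = x
    x_pred = np.array([px + v * dt * np.cos(th),
                       py + v * dt * np.sin(th),
                       th + omega * dt])
    F = np.array([[1.0, 0.0, -v * dt * np.sin(th)],
                  [0.0, 1.0,  v * dt * np.cos(th)],
                  [0.0, 0.0, 1.0]])
    return x_pred, F @ P @ F.T + Q

def ekf_update(x, P, z, H, R):
    """Standard Kalman update with a linear measurement z = H x + noise."""
    y = z - H @ x
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
    return x + K @ y, (np.eye(len(x)) - K @ H) @ P

if __name__ == "__main__":
    x, P = np.zeros(3), np.eye(3) * 0.1
    Q = np.diag([1e-3, 1e-3, 1e-4])
    # Illustrative LiDAR correction: the detected canopy body centers give the
    # inter-row lateral offset and heading, i.e. a direct observation of py, theta.
    H = np.array([[0.0, 1.0, 0.0],
                  [0.0, 0.0, 1.0]])
    R = np.diag([0.05 ** 2, np.deg2rad(2.0) ** 2])
    for _ in range(50):
        x, P = ekf_predict(x, P, v=0.5, omega=0.02, dt=0.1, Q=Q)
        x, P = ekf_update(x, P, z=np.array([0.0, 0.0]), H=H, R=R)
    print("state [px, py, theta]:", np.round(x, 3))
```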
(This article belongs to the Special Issue Agricultural Unmanned Systems: Empowering Agriculture with Automation)
Figure 1. Composition and electronic hardware system of the ICV. (a) ICV; (b) electronic hardware system design. 1. 3D LiDAR, 2. GPS, 3. Notebook PC, 4. Battery, 5. Chassis walking system. The green line in (b) indicates the power supply, while the others represent information transmission.
Figure 2. Kinematic model. Two coordinate systems are included in the figure: the world coordinate system and the body coordinate system. The heading angle (θ), wheelbase (2L), and body center (O_R) of the vehicle are also labeled.
Figure 3. Encoder measurement model based on the sine theorem. The ICV heading, trajectory, displacement (L_P), displacement changes (Δx and Δy) and heading deviation (2α_c) are shown. α is the chord tangent angle, so α = α_c, which is half of the heading deviation.
Figure 4. Flowchart of method C.
Figure 5. Flowchart of method D.
Figure 6. Principle diagram of method D. The laser beam detects tree canopies or gaps on both sides while the ICV travels between rows.
Figure 7. Situation of the experimental campus and orchard. (a) Ginkgo trees on both sides of the sidewalk; (b) 3D LiDAR installation angle correction; (c) experimental orchard situation; (d) experimental site. The figures include both the campus and orchard experimental site conditions; the two topographies are obviously different.
Figure 8. Schematic diagram of positioning and heading angle measurement methods. (a) Schematic diagram of the positioning test. (b) Calculation method of the heading angle.
Figure 9. Trajectories of the ICV obtained by the four methods. The figure shows the actual trajectory of the ICV and the measured trajectories of the four methods. The X-axis represents the inter-row lateral distance and the Y-axis represents the inter-row vertical distance.
Figure 10. ICV positioning errors obtained by the four methods. (a) Inter-row lateral positioning deviation; (b) inter-row vertical positioning deviation; (c) relative error of inter-row lateral positioning; (d) relative error of inter-row vertical positioning. For convenience of drawing, the values of method D in (d) were enlarged by 400 times; absolute relative errors of methods A, B, and C greater than 2000% are plotted as ±2000%. Different letters indicate significant differences (Duncan test, α = 0.05).
Figure 11. Acquisition of body center locations. Green points represent the canopy and red points indicate the partitioned canopy body centers.
Figure 12. Relative measurement errors of canopy length obtained by the four methods. (a) Group 1; (b) Group 2; (c) Group 3; (d) Group 4. Group 1 is nearest the initial point and Group 4 is nearest the end point. Different letters indicate significant differences (Duncan test, α = 0.05).
Figure 13. Point clouds of the group-four ginkgo tree canopy obtained by the four methods. (a) Method A; (b) Method B; (c) Method C; (d) Method D. The X-axis represents the direction of the ICV, and the Z-axis points vertically upward from the ground. The actual measured maximum canopy of the ginkgo tree was 3.69 m.
Figure A1. Counter schematic for encoder mode 3.
Figure A2. Ellipsoid fitting results. (a) Ellipsoidal fitting results of the accelerometer. (b) Ellipsoidal fitting results of the magnetometer.
16 pages, 12904 KiB  
Article
ZUST Campus: A Lightweight and Practical LiDAR SLAM Dataset for Autonomous Driving Scenarios
by Yuhang He, Bo Li, Jianyuan Ruan, Aihua Yu and Beiping Hou
Electronics 2024, 13(7), 1341; https://doi.org/10.3390/electronics13071341 - 2 Apr 2024
Cited by 1 | Viewed by 1549
Abstract
This research proposes a lightweight and applicable dataset with a precise elevation ground truth and extrinsic calibration for the LiDAR (Light Detection and Ranging) SLAM (Simultaneous Localization and Mapping) task in the field of autonomous driving. Our dataset focuses on more cost-effective platforms with limited computational power and low-resolution three-dimensional LiDAR sensors (16-beam LiDAR), and fills gaps in the existing literature. Our data cover abundant scenarios, including degenerate environments, dynamic objects, and large-slope terrain, to facilitate investigation of the performance of SLAM systems. We provided the ground-truth pose from RTK-GPS, carefully rectified its elevation errors, and designed an extra method to evaluate vertical drift. The module for calibrating the LiDAR and IMU was also enhanced to ensure the precision of the point cloud data. The reliability and applicability of the dataset are fully tested through a series of experiments using several state-of-the-art LiDAR SLAM methods.
Figure 1. Comparison between a typical frame of point cloud obtained using low-resolution LiDAR and one obtained using high-resolution LiDAR. The high-resolution and low-resolution LiDARs are a Velodyne HDL-64 and a Velodyne VLP-16, respectively. The point cloud data are from our dataset and the KITTI dataset [7]. The preprocessing time, the number of points per frame, and the storage consumption for each of the two frames are shown in the bar chart in the lower part of the figure.
Figure 2. Data collection systems.
Figure 3. Strictly delineated areas where the start position of the vehicle is located. The markers are composed of black tape and barricades.
Figure 4. Adjusted incorrect altitude information from GPS. (a) Picture of the real-world environment where a large altitude error occurs. (b) Incorrect altitude measurements obtained by GPS. (c) Rectified altitude information.
Figure 5. The configuration and spatial arrangement of the LiDAR, IMU, and RTK devices. Their body coordinate systems are marked in red, blue, and green, respectively.
Figure 6. The results of motion distortion compensation. The original and rectified point clouds are visualized in white and red, respectively.
Figure 7. The data collection routes overlaid on satellite images.
Figure 8. Pictures of the representative scenarios in the ZUST Campus dataset.
Figure 9. The structure of the provided files.
Figure 10. The experimental results of the three LiDAR SLAM methods. The colored dots represent the point cloud maps; the red lines represent the estimated trajectories; the colors represent the height of the points.
Figure 11. Estimated trajectories of each method compared to the ground truth. Red lines indicate the estimated trajectories output by the SLAM methods; the black line indicates the ground-truth trajectory.
Figure 12. The evaluation results obtained using the evaluation method proposed in this paper, which calculates vertical drift based on the difference in height between the starting and ending points. The positions of the start/end point relative to the whole trajectory in both experiments are shown in (a,b), where green lines indicate the trajectories. The local mapping results of the three methods near the start/end point, highlighted by the yellow boxes, are illustrated in (c1–c3) and (d1–d3) for the two experiments, respectively. The local estimated trajectories of the three methods near the start/end point are shown in (e1–e3) and (f1–f3). The departure and return sections are represented by solid red and blue lines, respectively. The differences in height are depicted using gray lines with the corresponding values.
16 pages, 7583 KiB  
Technical Note
Geometric and Radiometric Quality Assessments of UAV-Borne Multi-Sensor Systems: Can UAVs Replace Terrestrial Surveys?
by Junhwa Chi, Jae-In Kim, Sungjae Lee, Yongsik Jeong, Hyun-Cheol Kim, Joohan Lee and Changhyun Chung
Drones 2023, 7(7), 411; https://doi.org/10.3390/drones7070411 - 22 Jun 2023
Cited by 4 | Viewed by 1933
Abstract
Unmanned aerial vehicles (UAVs), also known as drones, are a cost-effective alternative to traditional surveying methods, and they can be used to collect geospatial data over inaccessible or hard-to-reach locations. UAV-integrated miniaturized remote sensing sensors such as hyperspectral and LiDAR sensors, which formerly operated on airborne and spaceborne platforms, have recently been developed. Their accuracies can still be guaranteed when incorporating pieces of equipment such as ground control points (GCPs) and field spectrometers. This study conducted three experiments for geometric and radiometric accuracy assessments of simultaneously acquired RGB, hyperspectral, and LiDAR data from a single mission. Our RGB and hyperspectral data generated orthorectified images based on direct georeferencing without any GCPs; because of this, a base station is required for the post-processed Global Navigation Satellite System/Inertial Measurement Unit (GNSS/IMU) data. First, we compared the geometric accuracy of orthorectified RGB and hyperspectral images relative to the distance of the base station to determine which base station should be used. Second, point clouds could be generated from overlapped RGB images and a LiDAR sensor; we quantitatively and qualitatively compared RGB and LiDAR point clouds in this experiment. Lastly, we evaluated the radiometric quality of the hyperspectral images, which is the most critical factor of the hyperspectral sensor, using reference spectra that were simultaneously measured by a field spectrometer. Consequently, the distance of the base station for post-processing the GNSS/IMU data was found to have no significant impact on the geometric accuracy, indicating that a dedicated base station is not always necessary. Our experimental results demonstrated geometric errors of less than two hyperspectral pixels without using GCPs, achieving a level of accuracy that is comparable to survey-level standards. Regarding the comparison of RGB- and LiDAR-based point clouds, RGB point clouds exhibited noise and lacked detail; however, through the cleaning process, their vertical accuracy was found to be comparable with LiDAR’s accuracy. Although photogrammetry generated denser point clouds than LiDAR, the overall quality for extracting elevation data relies greatly on the original image quality, including occlusions, shadows, and tie-points for matching. Furthermore, the image spectra derived from the hyperspectral data consistently demonstrated high radiometric quality without the need for in situ field spectrum information. This finding indicates that in situ field spectra are not always required to guarantee the radiometric quality of hyperspectral data, as long as well-calibrated targets are utilized.
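The radiometric comparison described above is typically carried out with the empirical line method (also referenced in the article's figures): a per-band linear fit between image values over calibrated targets and their reference reflectance. The Python sketch below illustrates that standard technique; the array shapes, synthetic target values, and function name are assumptions, not the authors' code.

```python
import numpy as np

def empirical_line(dn_targets, reflectance_targets):
    """Fit per-band gain/offset of the empirical line method.

    dn_targets          : (n_targets, n_bands) mean image values over the
                          radiometric calibration targets
    reflectance_targets : (n_targets, n_bands) reference reflectance of the
                          same targets (e.g. from a field spectrometer or
                          pre-calibrated panels)
    Returns gain and offset arrays of shape (n_bands,) so that
    reflectance ≈ gain * DN + offset.
    """
    dn = np.asarray(dn_targets, dtype=float)
    rf = np.asarray(reflectance_targets, dtype=float)
    gains, offsets = [], []
    for b in range(dn.shape[1]):
        A = np.c_[dn[:, b], np.ones(dn.shape[0])]      # [DN, 1] design matrix
        (g, o), *_ = np.linalg.lstsq(A, rf[:, b], rcond=None)
        gains.append(g)
        offsets.append(o)
    return np.array(gains), np.array(offsets)

if __name__ == "__main__":
    # Three grey targets, two bands, with a roughly linear sensor response.
    dn = np.array([[1000, 900], [2500, 2300], [4000, 3900]], dtype=float)
    rf = np.array([[0.05, 0.06], [0.30, 0.32], [0.55, 0.58]])
    gain, offset = empirical_line(dn, rf)
    print("gain:", gain, "offset:", offset)
    print("calibrated reflectance:", gain * dn + offset)
```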
Figure 1. Integrated multi-sensor system onboard the UAV.
Figure 2. Overview of the multi-sensor data acquisition. Targets for calibration and evaluation were deployed on the soccer field (red marks: ground control points; yellow rectangles: LiDAR targets; blue rectangles: radiometric targets).
Figure 3. Overview of the proposed study. The red, blue, and green rectangles relate to the geometric (X and Y directions) accuracy of the orthomosaic images, the vertical accuracy and qualitative quality of the point clouds, and the radiometric accuracy of the hyperspectral image, respectively.
Figure 4. Location of the base stations used in this study with their distance from the study area (KOPRI).
Figure 5. Example histogram of target point clouds used to determine the bottom/top height of the target.
Figure 6. Targets for radiometric correction and evaluation.
Figure 7. (a) DJI Phantom 4 image acquired on 21 February 2022; (b) DJI Phantom 4 image acquired on 22 February 2022; (c) RGB image acquired from the GNSS/IMU-assisted multi-sensor system on 26 January 2022. Yellow lines are the geometric errors between the surveyed and image-derived coordinates.
Figure 8. Image locations of the surveyed GCPs in the hyperspectral orthomosaic image using the furthest CORS.
Figure 9. Illustration of the positioning errors according to the distance of the CORS. The background image is a georeferenced orthomosaic generated from the RGB images of a frame camera for visual inspection.
Figure 10. Comparison of the LiDAR- and RGB-generated point clouds. (a) Overview of the study area; (b) enlarged point clouds.
Figure 11. Comparison of the field and image spectra of ten radiometric targets. Blue lines represent the field spectra of the targets, and orange lines are the mean spectral reflectance curves of the image pixels acquired using the hyperspectral sensor.
Figure 12. Comparison of the reflectance values for the same targets acquired from different flight trajectories.
Figure 13. Comparison of the correlation coefficients of the empirical line method according to wavelength.
13 pages, 13033 KiB  
Article
Uncontrolled Two-Step Iterative Calibration Algorithm for Lidar–IMU System
by Shilun Yin, Donghai Xie, Yibo Fu, Zhibo Wang and Ruofei Zhong
Sensors 2023, 23(6), 3119; https://doi.org/10.3390/s23063119 - 14 Mar 2023
Cited by 1 | Viewed by 2336
Abstract
Calibration of sensors is critical for the precise functioning of lidar–IMU systems. However, the accuracy of the system can be compromised if motion distortion is not considered. This study proposes a novel uncontrolled two-step iterative calibration algorithm that eliminates motion distortion and improves the accuracy of lidar–IMU systems. Initially, the algorithm corrects the distortion of rotational motion by matching the original inter-frame point clouds. Then, the point cloud is further matched with the IMU after attitude prediction. The algorithm performs iterative motion distortion correction and rotation matrix calculation to obtain high-precision calibration results. In comparison with existing algorithms, the proposed algorithm boasts high accuracy, robustness, and efficiency. This high-precision calibration result can benefit a wide range of acquisition platforms, including handheld, unmanned ground vehicle (UGV), and backpack lidar–IMU systems.
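The first step above, correcting rotational motion distortion, is described only at a high level. One plausible reading is to rotate each point back to the scan-start frame using the rotation accumulated since the start of the sweep; the Python sketch below shows that idea under a constant-angular-rate assumption. The interpolation scheme and all names are assumptions, not the paper's algorithm.

```python
import numpy as np

def rotvec_to_matrix(rv):
    """Rodrigues formula: rotation vector (axis * angle) -> rotation matrix."""
    angle = np.linalg.norm(rv)
    if angle < 1e-12:
        return np.eye(3)
    k = rv / angle
    K = np.array([[0, -k[2], k[1]],
                  [k[2], 0, -k[0]],
                  [-k[1], k[0], 0]])
    return np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * K @ K

def deskew_rotation(points, timestamps, gyro, scan_start):
    """Remove rotational motion distortion within one LiDAR sweep.

    points     : (N, 3) points, each expressed in the LiDAR frame at its own
                 capture time
    timestamps : (N,) capture time of each point [s]
    gyro       : assumed-constant angular rate over the sweep (rad/s), e.g.
                 the mean IMU gyroscope reading
    Returns the points re-expressed in the LiDAR frame at scan_start.
    """
    out = np.empty_like(points, dtype=float)
    for i, (p, t) in enumerate(zip(points, timestamps)):
        R = rotvec_to_matrix(gyro * (t - scan_start))  # rotation since sweep start
        out[i] = R @ p                                 # rotate point back to start frame
    return out

if __name__ == "__main__":
    pts = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
    ts = np.array([0.0, 0.05])
    print(np.round(deskew_rotation(pts, ts, gyro=np.array([0.0, 0.0, 0.5]),
                                   scan_start=0.0), 4))
```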
Figure 1. Calibration schematic for the lidar–IMU system.
Figure 2. Flowchart of the two-step calibration algorithm.
Figure 3. Principle of motion distortion correction.
Figure 4. Results obtained by applying the SLAM algorithm to different data: (a) Park data; (b) Walk data; (c) Rotation data; (d) Campus data; (e) Cnu data.
Figure 5. Calibration error of the three methods.
Figure 6. (a,b) Operation results using ZJU's algorithm.
Figure 7. Calibration results of different matching algorithms for different data: (a) Park data; (b) Walk data; (c) Rotation data; (d) Campus data; (e) Cnu data.
Figure 8. (a) Angle change between adjacent frames of different data; (b) the relationship between the angular variation of different data and the calibration error.
15 pages, 5799 KiB  
Article
Two-Step Self-Calibration of LiDAR-GPS/IMU Based on Hand-Eye Method
by Xin Nie, Jun Gong, Jintao Cheng, Xiaoyu Tang and Yuanfang Zhang
Symmetry 2023, 15(2), 254; https://doi.org/10.3390/sym15020254 - 17 Jan 2023
Cited by 1 | Viewed by 3574
Abstract
Multi-line LiDAR and GPS/IMU are widely used in autonomous driving and robotics, for example in simultaneous localization and mapping (SLAM). Calibrating the extrinsic parameters of each sensor is a necessary condition for multi-sensor fusion, and the calibration of each sensor directly affects the accurate positioning control and perception performance of the vehicle. Through the algorithm, accurate extrinsic parameters and a symmetric covariance matrix of the extrinsic parameters can be obtained as a measure of the confidence of the extrinsic parameters. For LiDAR-GPS/IMU calibration, many methods require specific vehicle motions or manually marked calibration scenes to ensure that the problem is well constrained, resulting in high costs and a low degree of automation. To solve this problem, we propose a new two-step self-calibration method, which includes extrinsic parameter initialization and refinement. The initialization part decouples the extrinsic parameters into rotation and translation parts: it first calculates a reliable initial rotation from the rotation constraints, then calculates the initial translation after obtaining the reliable initial rotation, and eliminates the accumulated drift of the LiDAR odometry by loop closure to complete the map construction. In the refinement part, the LiDAR odometry is obtained through scan-to-map registration and is tightly coupled with the IMU, and the constraints of the absolute pose in the map refine the extrinsic parameters. Our method is validated in simulation and real environments, and the results show that the proposed method has high accuracy and robustness.
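The initialization step solves the extrinsic rotation from rotation constraints, i.e. the classical hand–eye relation between corresponding relative rotations of the two sensors. One common closed-form way to obtain such an initial rotation (not necessarily the authors' exact solver) is to align the rotation vectors of matched relative motions with a Kabsch/SVD fit, as in the hedged Python sketch below; the function names and the synthetic test are assumptions.

```python
import numpy as np

def rotvec(R):
    """Rotation matrix -> rotation vector (axis * angle)."""
    angle = np.arccos(np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0))
    if angle < 1e-9:
        return np.zeros(3)
    axis = np.array([R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]])
    return axis / (2.0 * np.sin(angle)) * angle

def rotmat(rv):
    """Rotation vector -> rotation matrix (Rodrigues formula)."""
    th = np.linalg.norm(rv)
    if th < 1e-12:
        return np.eye(3)
    k = rv / th
    K = np.array([[0, -k[2], k[1]], [k[2], 0, -k[0]], [-k[1], k[0], 0]])
    return np.eye(3) + np.sin(th) * K + (1 - np.cos(th)) * K @ K

def handeye_rotation(imu_rel_rots, lidar_rel_rots):
    """Initial extrinsic rotation R (LiDAR frame -> IMU frame).

    For each matched motion pair the rotation vectors satisfy a_i = R b_i
    (a_i from the IMU relative rotation, b_i from LiDAR odometry), so R is
    recovered with a Kabsch/SVD alignment of the two sets of vectors.
    """
    A = np.array([rotvec(Ra) for Ra in imu_rel_rots])    # N x 3
    B = np.array([rotvec(Rb) for Rb in lidar_rel_rots])  # N x 3
    H = B.T @ A
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    return Vt.T @ D @ U.T

if __name__ == "__main__":
    # Synthetic self-check against a known extrinsic rotation.
    rng = np.random.default_rng(1)
    R_true = rotmat(np.array([0.2, -0.1, 0.3]))
    Bs = [rotmat(rng.normal(0.0, 0.3, 3)) for _ in range(20)]
    As = [R_true @ Rb @ R_true.T for Rb in Bs]
    print("max abs error:", np.abs(handeye_rotation(As, Bs) - R_true).max())
```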
(This article belongs to the Special Issue Recent Progress in Robot Control Systems: Theory and Applications)
Figure 1. The pipeline of the LiDAR-GPS/IMU calibration system presented in this paper. In the parameter initialization part, the feature-based LiDAR odometry and the interpolated GPS/IMU relative poses are used to construct the hand–eye calibration problem, solving the initial extrinsic parameters and constructing the map. In the parameter refinement part, the initial extrinsic parameters are tightly coupled with the LiDAR and IMU, and the extrinsic parameters are refined through the constraints of the absolute pose in the map. When the relative change in the extrinsic parameters falls below a set threshold during iterative refinement, the extrinsic parameters are considered sufficiently converged and the refinement ends.
Figure 2. Pose relationship of hand–eye calibration. {W} is the world coordinate system for mapping. Hand–eye calibration mainly concerns the relationship between the extrinsic parameter T_L^I and the two relative poses T_{I_{k+1}}^{I_k} and T_{L_{k+1}}^{L_k}, which denote the relative pose from {I_{k+1}} to {I_k} and from {L_{k+1}} to {L_k}, respectively.
Figure 3. Pose relationship during the refinement of the extrinsic parameters, where T_{L_k} and T_{I_k}^W are the absolute poses of the LiDAR and GPS/IMU, respectively, and T_L^I is the extrinsic parameter. The coordinate system of the sensor is indicated in red.
Figure 4. Outdoor scene diagram of the Carla simulation platform.
Figure 5. Line chart of the extrinsic parameter error values in the simulation environment. The three broken lines of different colors represent the extrinsic parameter errors under three different scenarios. For the rotation part, the roll and pitch angle errors are within 0.1 degrees; the yaw angle error is larger, within 0.2 degrees for scenes 2 and 3 and about 0.8 degrees for scene 1. For the translation part, all errors are within 0.1 m.
Figure 6. Our car equipped with the Ouster-128 LiDAR and an FDI integrated navigation system.
Figure 7. Line chart of the extrinsic parameter error values in the real environment. The three broken lines of different colors represent the extrinsic parameter errors under three different scenarios. For the rotation part, the pitch angle errors of the three scenes are within 0.2 degrees, the roll angle errors are within 0.45 degrees, and the yaw angle errors are within 0.6 degrees. For the translation part, all errors are within 0.05 m.
18 pages, 2894 KiB  
Article
OMC-SLIO: Online Multiple Calibrations Spinning LiDAR Inertial Odometry
by Shuang Wang, Hua Zhang and Guijin Wang
Sensors 2023, 23(1), 248; https://doi.org/10.3390/s23010248 - 26 Dec 2022
Cited by 5 | Viewed by 2903
Abstract
Light detection and ranging (LiDAR) is often combined with an inertial measurement unit (IMU) to obtain LiDAR inertial odometry (LIO) for robot localization and mapping. In order to apply LIO efficiently and without specialist setup, self-calibrating LIO is a hot research topic in the related community. Spinning LiDAR (SLiDAR), which uses an additional rotating mechanism to spin a common LiDAR and scan the surrounding environment, achieves a large field of view (FoV) at low cost. Unlike common LiDAR, in addition to the calibration between the IMU and the LiDAR, a self-calibrating odometer for SLiDAR must also consider the mechanism calibration between the rotating mechanism and the LiDAR. However, existing self-calibration LIO methods require the LiDAR to be rigidly attached to the IMU and do not take the mechanism calibration into account, so they cannot be applied to SLiDAR. In this paper, we first propose a novel self-calibration odometry scheme for SLiDAR, named the online multiple calibration spinning LiDAR inertial odometry (OMC-SLIO) method, which allows online estimation of multiple extrinsic parameters among the LiDAR, rotating mechanism and IMU, as well as the odometer state. Specifically, considering that the rotating and static parts of the motor encoder inside the SLiDAR are rigidly connected to the LiDAR and IMU, respectively, we formulate the calibration within the SLiDAR as two separate calibrations: the mechanism calibration between the LiDAR and the rotating part of the motor encoder, and the sensor calibration between the static part of the motor encoder and the IMU. Based on this formulation, we can construct a well-defined kinematic model from the LiDAR to the IMU using the angular information from the motor encoder. On top of the kinematic model, a two-stage motion compensation method is presented to eliminate the point cloud distortion resulting from LiDAR spinning and platform motion. Furthermore, the mechanism and sensor calibrations as well as the odometer state are wrapped in a measurement model and estimated via an error-state iterative extended Kalman filter (ESIEKF). Experimental results show that our OMC-SLIO is effective and attains excellent performance.
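The kinematic chain described above (LiDAR → rotating encoder part → static encoder part → IMU) can be pictured as a composition of homogeneous transforms in which only the encoder angle changes over time. The Python sketch below is a hedged illustration of that composition; the frame names, spin axis, and composition order are assumptions, not the paper's model.

```python
import numpy as np

def make_T(R, t):
    """4x4 homogeneous transform from rotation R (3x3) and translation t (3,)."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def rotz(theta):
    """Rotation about the assumed spin axis (z)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def lidar_point_to_imu(p_lidar, T_rot_lidar, encoder_angle, T_imu_static):
    """Map one LiDAR point into the IMU frame through the spinning mechanism.

    T_rot_lidar   : mechanism calibration (LiDAR -> rotating part of the encoder)
    encoder_angle : motor encoder reading [rad], rotating part w.r.t. static part
    T_imu_static  : sensor calibration (static part of the encoder -> IMU)
    Only the encoder angle changes over time; the two calibrations are the
    constants that an OMC-SLIO-style method estimates online.
    """
    T_static_rot = make_T(rotz(encoder_angle), np.zeros(3))
    T_imu_lidar = T_imu_static @ T_static_rot @ T_rot_lidar
    return (T_imu_lidar @ np.append(p_lidar, 1.0))[:3]

if __name__ == "__main__":
    T_rot_lidar = make_T(np.eye(3), np.array([0.0, 0.0, 0.05]))
    T_imu_static = make_T(np.eye(3), np.array([0.10, 0.0, -0.02]))
    for ang in (0.0, np.pi / 2):
        print(np.round(lidar_point_to_imu(np.array([2.0, 0.0, 0.0]),
                                          T_rot_lidar, ang, T_imu_static), 3))
```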
Figure 1. Internal coordinate relations for SLiDAR.
Figure 2. The overall pipeline of the proposed OMC-SLIO.
Figure 3. The two-stage motion compensation. The green and yellow dots represent the sequence of LiDAR scan points before LiDAR spinning and platform moving compensation, respectively. The purple dots represent the measured spinning angle sequence. The brown dots represent the sequence of measured IMU data.
Figure 4. 3D SLiDAR experimental platform.
Figure 5. Lab room with a Nokov Motion Capture System.
Figure 6. The mapping results of the lab room with Calib-SLIO, FAST-SLIO and OMC-SLIO. Note that the green and red dots show the strong and weak laser beam reflection intensities, respectively.
Figure 7. The mapping results of underground parking with the Calib-SLIO, FAST-SLIO and OMC-SLIO methods. Note that the green and red dots show the strong and weak laser beam reflection intensities, respectively.
17 pages, 4078 KiB  
Article
A Spatiotemporal Calibration Algorithm for IMU–LiDAR Navigation System Based on Similarity of Motion Trajectories
by Yunhui Li, Shize Yang, Xianchao Xiu and Zhonghua Miao
Sensors 2022, 22(19), 7637; https://doi.org/10.3390/s22197637 - 9 Oct 2022
Cited by 6 | Viewed by 3237
Abstract
The fusion of light detection and ranging (LiDAR) and inertial measurement unit (IMU) sensing information can effectively improve the environment modeling and localization accuracy of navigation systems. To realize the spatiotemporal unification of data collected by the IMU and the LiDAR, a two-step spatiotemporal calibration method combining coarse and fine calibration is proposed. The method mainly includes two aspects: (1) Continuous-time trajectories of IMU attitude motion are modeled using B-spline basis functions; the motion of the LiDAR is estimated by using the normal distributions transform (NDT) point cloud registration algorithm, taking the Hausdorff distance between the local trajectories as the cost function and combining it with the hand–eye calibration method to solve the initial value of the spatiotemporal relationship between the two sensors' coordinate systems, and then using the measurement data of the IMU to correct the LiDAR distortion. (2) According to the IMU preintegration and the point, line, and plane features of the LiDAR point cloud, the corresponding nonlinear optimization objective function is constructed. Combined with the corrected LiDAR data and the initial value of the spatiotemporal calibration of the coordinate systems, the target is optimized under a nonlinear graph optimization framework. The rationality, accuracy, and robustness of the proposed algorithm are verified by simulation analysis and actual test experiments. The results show that the accuracy of the proposed algorithm in calibrating the spatial coordinate system relationship was better than 0.08° (3δ) and 5 mm (3δ), respectively, and the time deviation calibration accuracy was better than 0.1 ms, with strong environmental adaptability. This can meet the high-precision calibration requirements of multisensor spatiotemporal parameters of field robot navigation systems.
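The coarse step above uses the Hausdorff distance between local IMU and LiDAR trajectory segments as the matching cost. As a small, hedged illustration of that distance (a brute-force version; SciPy's directed_hausdorff is the usual library choice), here is a Python sketch; the trajectory data are synthetic.

```python
import numpy as np

def hausdorff_distance(traj_a, traj_b):
    """Symmetric Hausdorff distance between two trajectories.

    traj_a, traj_b : (N, 3) and (M, 3) arrays of positions.
    Brute-force pairwise version for illustration only; for real use,
    scipy.spatial.distance.directed_hausdorff is the usual choice.
    """
    a = np.asarray(traj_a, dtype=float)
    b = np.asarray(traj_b, dtype=float)
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)  # all pairwise distances
    return max(d.min(axis=1).max(),   # farthest point of A from B
               d.min(axis=0).max())   # farthest point of B from A

if __name__ == "__main__":
    t = np.linspace(0.0, 1.0, 100)
    traj_imu = np.c_[t, np.sin(t), np.zeros_like(t)]
    traj_lidar = traj_imu + np.array([0.0, 0.02, 0.0])  # small constant offset
    print("Hausdorff distance:", round(hausdorff_distance(traj_imu, traj_lidar), 4))
```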
(This article belongs to the Topic Advances in Mobile Robotics Navigation)
Figure 1. Multisensor spatiotemporal calibration problem description.
Figure 2. Schematic diagram of the LiDAR ranging system.
Figure 3. Block diagram of multisensor spatiotemporal calibration.
Figure 4. Trajectory similarity selects the initial calibration trajectory segment. The red dots represent the lidar odometry localization data; the blue dots represent the IMU odometer localization data.
Figure 5. Multisensor spatiotemporal calibration factor graph model.
Figure 6. Schematic diagram of the trajectories of the IMU and LiDAR.
Figure 7. External parameter calibration errors of the IMU and LiDAR over time.
Figure 8. Statistical histogram of the calibration errors of the IMU and LiDAR external parameters.
Figure 9. Physical map of the IMU and LiDAR assembly.
Figure 10. Calibration results of the external parameters of the IMU and LiDAR.
Figure 11. IMU and lidar odometer localization results. (a) LiDAR_IMU_calib; (b) proposed.
Figure 12. LiDAR point cloud mapping results.
16 pages, 3732 KiB  
Article
A Method of Calibration for the Distortion of LiDAR Integrating IMU and Odometer
by Qiuxuan Wu, Qinyuan Meng, Yangyang Tian, Zhongrong Zhou, Cenfeng Luo, Wandeng Mao, Pingliang Zeng, Botao Zhang and Yanbin Luo
Sensors 2022, 22(17), 6716; https://doi.org/10.3390/s22176716 - 5 Sep 2022
Cited by 5 | Viewed by 3246
Abstract
To reduce the motion distortion in LiDAR data at low and medium frame rates when moving, this paper proposes an improved scan-matching algorithm with velocity estimation that combines an IMU and an odometer. First, the information from the IMU and the odometer is fused, and the pose of the LiDAR is obtained using linear interpolation. The ICP method is used to scan-match the LiDAR data, with the fused IMU and odometer data providing the optimal initial value for the ICP. The estimated velocity of the LiDAR is introduced as the termination condition of the ICP iteration to realize the compensation of the LiDAR data. The experimental comparison shows that the algorithm outperforms the ICP algorithm and the VICP algorithm in matching accuracy.
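The core of the distortion compensation above is linear interpolation of the fused IMU/odometer pose across the scan so that each laser point can be re-expressed in a common frame. The Python sketch below shows that idea for a planar (2D LiDAR) case; the pose parameterization and function names are assumptions rather than the paper's implementation.

```python
import numpy as np

def interpolate_pose(t, t0, pose0, t1, pose1):
    """Linearly interpolate a planar pose (x, y, yaw) between two timestamps."""
    s = (t - t0) / (t1 - t0)
    dyaw = np.arctan2(np.sin(pose1[2] - pose0[2]), np.cos(pose1[2] - pose0[2]))
    return np.array([(1 - s) * pose0[0] + s * pose1[0],
                     (1 - s) * pose0[1] + s * pose1[1],
                     pose0[2] + s * dyaw])

def correct_scan(points, point_times, t0, pose0, t1, pose1):
    """Re-express every point of a 2D scan in the scan-start LiDAR frame.

    pose0 and pose1 are the fused IMU/odometer poses at the start (t0) and end
    (t1) of the sweep; each point is first placed in the world using its own
    interpolated pose and then pulled back into the frame at t0.
    """
    x0, y0, yaw0 = pose0
    c0, s0 = np.cos(yaw0), np.sin(yaw0)
    corrected = []
    for p, t in zip(points, point_times):
        x, y, yaw = interpolate_pose(t, t0, pose0, t1, pose1)
        c, s = np.cos(yaw), np.sin(yaw)
        wx = x + c * p[0] - s * p[1]          # point in world coordinates
        wy = y + s * p[0] + c * p[1]
        dx, dy = wx - x0, wy - y0             # back into the scan-start frame
        corrected.append([c0 * dx + s0 * dy, -s0 * dx + c0 * dy])
    return np.array(corrected)

if __name__ == "__main__":
    pts = np.array([[1.0, 0.0], [1.0, 0.0]])  # same return measured at two times
    times = np.array([0.0, 0.1])
    out = correct_scan(pts, times,
                       0.0, np.array([0.0, 0.0, 0.0]),
                       0.1, np.array([0.05, 0.0, 0.1]))
    print(np.round(out, 4))
```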
(This article belongs to the Special Issue Efficient Intelligence with Applications in Embedded Sensing)
Figure 1. The acquisition process of one frame of LiDAR data.
Figure 2. LiDAR motion distortion.
Figure 3. The principle of the ICP algorithm. (a) Frame i; (b) frame i + 1; (c) start matching; (d) find adjacent points; (e) first match; (f) after multiple iterations of matching.
Figure 4. Architecture diagram of the Iao_ICP algorithm.
Figure 5. Linear interpolation of the laser pose.
Figure 6. Pose function graph.
Figure 7. Mapping based on the b0_2014_07_11_11_00_49 sequence.
Figure 8. Comparison of relative trajectory errors of sequence ①. (a) Cartographer, improved scheme, and real trajectory comparison; (b) local trajectory map; (c) absolute trajectory error of Cartographer; (d) absolute trajectory error of the improved scheme.
Figure 9. Comparison of relative trajectory errors of sequence ②. (a) Cartographer, improved scheme, and real trajectory comparison; (b) local trajectory map; (c) absolute trajectory error of Cartographer; (d) absolute trajectory error of the improved scheme.
Figure 10. Comparison of relative trajectory errors of sequence ③. (a) Cartographer, improved scheme, and real trajectory comparison; (b) local trajectory map; (c) absolute trajectory error of Cartographer; (d) absolute trajectory error of the improved scheme.
Figure 11. Mobile experiment platform.
Figure 12. Experimental real scene.
Figure 13. Mapping effect of Cartographer.
Figure 14. Mapping effect of Iao_ICP.
Figure 15. Line chart of the relative error comparison of the two algorithms.
22 pages, 1538 KiB  
Article
A Human Gait Tracking System Using Dual Foot-Mounted IMU and Multiple 2D LiDARs
by Huu Toan Duong and Young Soo Suh
Sensors 2022, 22(17), 6368; https://doi.org/10.3390/s22176368 - 24 Aug 2022
Cited by 3 | Viewed by 2554
Abstract
This paper proposes a human gait tracking system using dual foot-mounted IMUs and multiple 2D LiDARs. The combined system aims to overcome the disadvantages of each single-sensor system (the short tracking range of a single 2D LiDAR and the drift errors of the IMU system). The LiDARs act as anchors to mitigate the errors of the inertial navigation algorithm. In our system, two 2D LiDARs are used: LiDAR 1 is placed around the starting point, and LiDAR 2 is placed at the ending point (in straight walking) or at the turning point (in rectangular path walking). Using LiDAR 1, we can estimate the initial headings and positions of each IMU without any calibration process. We also propose a method to calibrate two LiDARs that are placed far apart. The measurements from the two LiDARs are then combined in a Kalman filter and a smoother algorithm to correct the two estimated foot trajectories. If straight walking is detected, we update the current stride heading and the foot position using the previous stride headings; this is then used as a measurement update in the Kalman filter. In the smoother algorithm, a step-width constraint is used as a measurement update. We evaluate the stride length estimation through a straight walking experiment along a corridor; the root mean square errors compared with an optical tracking system are less than 3 cm. The performance of the proposed method is also verified with a rectangular path walking experiment.
(This article belongs to the Collection Inertial Sensors and Applications)
Figure 1. System overview: a dual foot-mounted IMU and multiple 2D LiDARs.
Figure 2. Block diagram of the proposed method.
Figure 3. Estimation of the human leg trajectories from each LiDAR.
Figure 4. The configuration of the LiDARs calibration.
Figure 5. Two-LiDAR calibration results in 20 m straight walking. The stance-leg positions from LiDAR 2 are transformed into the LiDAR 1 coordinate system.
Figure 6. Initial position of the IMU in LiDAR 1 coordinate estimation.
Figure 7. Straight-walking detection for the current left foot.
Figure 8. The configuration of the 20 m and 50 m walking distance experiments.
Figure 9. An example of measurement availability of the two feet in 50 m walking. The straight-walking-based measurement updates are available for all walking steps outside the LiDAR range in the normal update (top plot) and are removed for 10 s in the modeled update (bottom plot).
Figure 10. The total estimated 20 m walking trajectories of two subjects from the smoother algorithm in the normal and modeled updates.
Figure 11. The total estimated 50 m walking trajectories of two subjects from the smoother algorithm in the normal and modeled updates.
Figure 12. An example of the estimated 50 m walking trajectories from the Kalman filter and the smoother algorithm. The circles represent the estimated LiDAR-based foot positions in the stance phase.
Figure 13. Histogram of all walking stride length estimation errors of the proposed method.
Figure 14. The configuration of the rectangular path walking experiment.
Figure 15. The estimated trajectory of the proposed method in the rectangular path walking experiment. The shaded circular areas indicate the human leg tracking range of each LiDAR.
Figure 16. The estimated trajectory in the rectangular path walking experiment when the LiDAR data are not used.
15 pages, 5738 KiB  
Article
An Onsite Calibration Method for MEMS-IMU in Building Mapping Fields
by Sen Li, Yunchen Niu, Chunyong Feng, Haiqiang Liu, Dan Zhang and Hengjie Qin
Sensors 2019, 19(19), 4150; https://doi.org/10.3390/s19194150 - 25 Sep 2019
Cited by 7 | Viewed by 2783
Abstract
Light detection and ranging (LiDAR) is one of the popular technologies for acquiring critical information for building information modelling. To allow automatic acquisition of building information, the first and most important step of LiDAR technology is to accurately determine the attitude information that micro-electromechanical (MEMS) based inertial measurement unit (IMU) sensors can provide on the moving robot. However, during practical building mapping, serious errors may occur due to inappropriate installation of the MEMS-IMU. In this study, we analyzed the different systematic errors, such as biases, scale errors, and axial installation deviations, that occur during building mapping with a robot equipped with a MEMS-IMU. Based on this, an error calibration model was developed. The problem of the deviation between the calibrated and horizontal planes was solved by a new sampling method: the calibrated plane was rotated twice, the gravitational acceleration on the six faces of the MEMS-IMU was calibrated against the practical values, and the whole calibration process was completed by solving the developed model with the least-squares method. Finally, the building mapping was calibrated based on the error calibration model and the Gmapping algorithm. The experiments indicate that the proposed model is useful for error calibration and can improve the yaw prediction accuracy of the MEMS-IMU by 1–2°; the mapping results are more accurate compared to previous methods. The research outcomes can provide a practical basis for the construction of building information modelling models.
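The calibration model above estimates biases and scale errors by orienting each of the six faces of the MEMS-IMU against gravity and solving with least squares. As a hedged illustration of that classic six-position idea (not the paper's exact model, which also handles axial installation deviation and plane tilt), here is a minimal Python sketch; the synthetic error values are assumptions.

```python
import numpy as np

G = 9.80665  # standard gravity [m/s^2]

def calibrate_accel_six_position(meas_up, meas_down):
    """Per-axis bias and scale factor from a six-position static test.

    meas_up[i]   : mean accelerometer reading of axis i when axis i points up
                   (true specific force +g on that axis)
    meas_down[i] : mean reading of axis i when it points down (true -g)
    Model: measured = scale * true + bias, solved per axis by least squares.
    """
    biases, scales = [], []
    for i in range(3):
        # Two equations per axis: m_up = s*g + b,  m_down = -s*g + b
        A = np.array([[G, 1.0], [-G, 1.0]])
        y = np.array([meas_up[i], meas_down[i]])
        (s, b), *_ = np.linalg.lstsq(A, y, rcond=None)
        scales.append(s)
        biases.append(b)
    return np.array(scales), np.array(biases)

if __name__ == "__main__":
    # Synthetic MEMS readings with a few-percent scale error and a bias.
    true_scale = np.array([1.02, 0.98, 1.01])
    true_bias = np.array([0.15, -0.05, 0.08])
    up = true_scale * G + true_bias
    down = -true_scale * G + true_bias
    scale, bias = calibrate_accel_six_position(up, down)
    print("scale:", np.round(scale, 3), "bias:", np.round(bias, 3))
```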
(This article belongs to the Section Physical Sensors)
Figure 1. Laser simultaneous localization and mapping (SLAM) robot.
Figure 2. Micro-electromechanical inertial measurement unit (MEMS-IMU) product map.
Figure 3. IMU and mobile car installation model.
Figure 4. Schematic diagram of the shaft declination.
Figure 5. Schematic diagram of the deviation of the calibration surface from the direction of gravitational acceleration.
Figure 6. Schematic diagram of two rotations of the same calibration surface.
Figure 7. Rikirobot.
Figure 8. Overview of the hardware design.
Figure 9. Flowchart of the nodes in the software.
Figure 10. Map construction in a looped corridor.
Figure 11. Map of the indoor looped corridor after the calibration.
Figure 12. Map construction for right-angle-turn corridors before (a) and after (b) the calibration.
Figure 13. Ring corridor closed-loop connection for map construction before (a) and after (b) the calibration.
35 pages, 23573 KiB  
Article
Indoor and Outdoor Backpack Mapping with Calibrated Pair of Velodyne LiDARs
by Martin Velas, Michal Spanel, Tomas Sleziak, Jiri Habrovec and Adam Herout
Sensors 2019, 19(18), 3944; https://doi.org/10.3390/s19183944 - 12 Sep 2019
Cited by 24 | Viewed by 7718
Abstract
This paper presents a human-carried mapping backpack based on a pair of Velodyne LiDAR scanners. Our system is a universal solution for both large-scale outdoor and smaller indoor environments. It benefits from a combination of two LiDAR scanners, which makes the odometry estimation more precise. The scanners are mounted under different angles, so a larger space around the backpack is scanned. By fusion with a GNSS/INS sub-system, the mapping of featureless environments and the georeferencing of the resulting point cloud are possible. By deploying state-of-the-art methods for registration and loop closure optimization, it provides sufficient precision for many applications in BIM (Building Information Modeling), inventory checking, construction planning, etc. In our indoor experiments, we evaluated the proposed backpack against the ZEB-1 solution, using a FARO terrestrial scanner as the reference, yielding similar results in terms of precision, while our system provides higher data density, laser intensity readings, and scalability for large environments.
(This article belongs to the Special Issue Mobile Laser Scanning Systems)
Show Figures

Figure 1

Figure 1
<p>The motivation and the results of our work. The reconstruction of indoor environments (<b>a</b>) is beneficial for inspection, inventory checking and automatic floor plans generation. 3D maps of forest environments (<b>b</b>) is useful for quick and precise estimation of the biomass (timber) amount. The other example of 3D LiDAR mapping deployment is preserving cultural heritages or providing models of historical building, e.g., the roof in (<b>c</b>).</p>
Full article ">Figure 2
<p>The example of resulting models of indoor mapping. The office environment (<b>a</b>) and the staircase (<b>b</b>) were captured by a human carrying our 4RECON backpack. The data acquisition process took 3 and 2 min, respectively.</p>
Full article ">Figure 3
<p>“Double walls” error in the reconstruction of Zebedee [<a href="#B5-sensors-19-03944" class="html-bibr">5</a>]. The wall and the ceiling appears twice in the reconstruction, causing an ambiguity. In the solution without loop closure (<b>a</b>), the error is quite visible. Double walls are reduced after global loop closure (<b>b</b>), but they are still present (highlighted by yellow dashed lines).</p>
Full article ">Figure 4
<p>Dataset of an indoor office environment for evaluation of the ZEB-1 scanner [<a href="#B4-sensors-19-03944" class="html-bibr">4</a>]. In the experiment, an average corner-to-corner distance error of 3.8 cm within the rooms was achieved.</p>
Full article ">Figure 5
<p>The dependency of laser intensity readings (weak readings in red, strong in green) on the measurement range (<b>a</b>) and the angle of incidence (<b>b</b>) [<a href="#B37-sensors-19-03944" class="html-bibr">37</a>].</p>
Full article ">Figure 6
<p>Various configurations of LiDAR scanners in worst-case scenarios we have encountered in our experiments: narrow corridor (<b>a</b>,<b>c</b>) and staircase (<b>b</b>). The field of view (30° for the Velodyne Puck) is displayed in color. When only a single LiDAR (<b>a</b>) was used, the scans did not contain 3D information of the floor or the ceiling (red cross). The situation was not improved when the scanner was tilted, because it fails in, e.g., staircases (<b>b</b>). When we added a second LiDAR, our tilted asymmetrical configuration (<b>d</b>) provided better top–bottom and left–right observation than the symmetrical one (<b>c</b>). Moreover, when the LiDARs are aligned in the direction of movement (<b>e</b>), there is no overlap between the current (violet) and future (yellow) frames, leading to lower accuracy. In our solution (<b>f</b>), the LiDARs are aligned perpendicularly to the walking direction, solving all the mentioned issues.</p>
Figure 6 Cont.">
Full article ">Figure 7
<p>The initial (<b>a</b>) and improved (<b>b</b>,<b>c</b>) prototype of our backpack mapping solution for both indoor (<b>b</b>) and outdoor (<b>c</b>) use. The removable dual GNSS antenna provides precise heading information, aiding outdoor odometry estimation and the georeferencing of the resulting 3D point cloud model. It should be noted that the positions of the LiDAR scanners differ between the initial and the later solution. This is elaborated on in the next section.</p>
Full article ">Figure 8
<p>Components of the system and the connections. Each Velodyne scanner is connected via a custom wiring “box” requiring a power supply (red wires), 1PPS and NMEA synchronization (green) and a Fast Ethernet (blue) connection to the computer (a NUC PC in our case).</p>
Full article ">Figure 9
<p>Extrinsic calibration required in our system. The mutual positions between the Velodyne scanners and the GNSS/INS unit are computed. The offsets <math display="inline"><semantics> <mrow> <msub> <mover accent="true"> <mi>o</mi> <mo stretchy="false">→</mo> </mover> <mrow> <mi>A</mi> <mn>1</mn> </mrow> </msub> <mo>,</mo> <msub> <mover accent="true"> <mi>o</mi> <mo stretchy="false">→</mo> </mover> <mrow> <mi>A</mi> <mn>2</mn> </mrow> </msub> </mrow> </semantics></math> of the antennas are tape measured.</p>
Full article ">Figure 10
<p>Two Velodyne LiDAR frames aligned into the single <span class="html-italic">multiframe</span>. This data association requires time synchronization and precise extrinsic calibration of laser scanners.</p>
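To make the multiframe construction concrete, here is a minimal sketch, assuming the extrinsic calibration is already available as a 4 × 4 homogeneous matrix mapping the second scanner's frame into the first and that time synchronization has been handled upstream (the function and variable names are illustrative, not from the paper):

```python
import numpy as np

def merge_multiframe(cloud1, cloud2, T_2_to_1):
    """Fuse two time-synchronized LiDAR frames into one multiframe.

    cloud1, cloud2 : (N,3) arrays of points, each in its own scanner frame.
    T_2_to_1       : 4x4 homogeneous extrinsic calibration matrix that maps
                     points from scanner 2's frame into scanner 1's frame.
    Returns an (N1+N2, 3) array expressed in scanner 1's frame.
    """
    homo = np.hstack([cloud2, np.ones((cloud2.shape[0], 1))])   # homogeneous coordinates
    cloud2_in_1 = (T_2_to_1 @ homo.T).T[:, :3]                  # apply extrinsic calibration
    return np.vstack([cloud1, cloud2_in_1])
```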
Full article ">Figure 11
<p>The sampling of the Velodyne point cloud by the Collar Line Segments (CLS) (<b>a</b>). The segments (purple) are randomly generated within the polar bin (blue polygon) of azimuthal resolution <math display="inline"><semantics> <mi>ϕ</mi> </semantics></math>. The registration process (<b>b</b>–<b>e</b>) transforms the line segments of the target point cloud (red lines) to fit the lines of the source cloud (blue). First, the lines are matched by the Euclidean distance of their midpoints (<b>c</b>); then, the segments are extended into infinite lines and the vectors between closest points are found (<b>d</b>); and, finally, they are used to estimate the transformation that fits the matching lines into common planes (green in (<b>e</b>)).</p>
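The midpoint-matching step (<b>c</b>) can be illustrated with a short sketch; this covers only the nearest-midpoint association, not the full CLS registration, and the array layout is an assumption:

```python
import numpy as np

def match_segments_by_midpoint(src_segments, tgt_segments):
    """Match line segments by the Euclidean distance of their midpoints.

    Each input is an (M, 2, 3) array of segment endpoints in 3D.
    Returns, for every target segment, the index of the closest source segment.
    """
    src_mid = src_segments.mean(axis=1)          # (Ms, 3) midpoints
    tgt_mid = tgt_segments.mean(axis=1)          # (Mt, 3) midpoints
    d = np.linalg.norm(tgt_mid[:, None, :] - src_mid[None, :, :], axis=2)
    return d.argmin(axis=1)                      # nearest source midpoint per target segment
```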
Full article ">Figure 12
<p>The overlap (<b>a</b>) between the source (blue) and the target (purple) LiDAR frame. In this case, approximately 30% of the source points are within the view volume of the target frame. The view volume can be effectively represented by a <span class="html-italic">spherical z-buffer</span> (<b>b</b>), where range information (the minimum in this case) or information regarding empty space within the spherical grid is stored.</p>
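A simplified sketch of an overlap estimate in the spirit of the spherical z-buffer follows; it only checks cell occupancy of the target grid and omits the range/empty-space test described above, and the bin counts are arbitrary assumptions:

```python
import numpy as np

def overlap_ratio(src_points, tgt_points, az_bins=360, el_bins=32):
    """Rough overlap estimate between two LiDAR frames via a spherical grid.

    Both clouds must be expressed in the target sensor frame. The target frame
    marks the spherical cells (azimuth x elevation) it observes; the returned
    value is the fraction of source points whose cell is observed by the target.
    """
    def cell_indices(pts):
        r = np.linalg.norm(pts, axis=1)
        az = np.arctan2(pts[:, 1], pts[:, 0])                        # [-pi, pi]
        el = np.arcsin(np.clip(pts[:, 2] / np.maximum(r, 1e-9), -1.0, 1.0))
        i = ((az + np.pi) / (2.0 * np.pi) * az_bins).astype(int) % az_bins
        j = np.clip(((el + np.pi / 2.0) / np.pi * el_bins).astype(int), 0, el_bins - 1)
        return i, j

    occupied = np.zeros((az_bins, el_bins), dtype=bool)
    ti, tj = cell_indices(tgt_points)
    occupied[ti, tj] = True                                           # cells seen by the target
    si, sj = cell_indices(src_points)
    return float(occupied[si, sj].mean())                             # share of source points inside
```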
Full article ">Figure 13
<p>The error of measurement (Euclidean distance between points <span class="html-italic">p</span> and <math display="inline"><semantics> <msup> <mi>p</mi> <mi>e</mi> </msup> </semantics></math>) can be split into a rotation <math display="inline"><semantics> <msup> <mi>e</mi> <mi>r</mi> </msup> </semantics></math> and a translation <math display="inline"><semantics> <msup> <mi>e</mi> <mi>t</mi> </msup> </semantics></math> part. The impact of rotation error <math display="inline"><semantics> <mrow> <mn>2</mn> <mo>·</mo> <mi>tg</mi> <mo>(</mo> <msub> <mi>e</mi> <mi>r</mi> </msub> <mo>/</mo> <mn>2</mn> <mo>)</mo> </mrow> </semantics></math> can be simplified to <math display="inline"><semantics> <mrow> <mi>tg</mi> <mo>(</mo> <msub> <mi>e</mi> <mi>r</mi> </msub> <mo>)</mo> </mrow> </semantics></math> due to the near-linear behavior of the tangent function for small angles.</p>
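Written out for a measurement at range <span class="html-italic">r</span> (the range symbol is our addition, not part of the caption), the small-angle argument reads:

```latex
\[
  \lVert p - p^{e} \rVert \;\le\; e_t \;+\; 2\,r\,\tan\!\left(\tfrac{e_r}{2}\right)
  \;\approx\; e_t \;+\; r\,\tan(e_r)
  \;\approx\; e_t \;+\; r\,e_r ,
\]
```

since \(2\tan(x/2) \approx \tan(x) \approx x\) for small angles \(x\).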
Full article ">Figure 14
<p>Example of a LiDAR frame distorted by the rolling shutter effect when the operator with mapping backpack was turning around (green) and the corrected frame (purple). This is the top view and the distortion is mostly visible on the “bent” green wall at the bottom of this picture.</p>
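A minimal de-skewing sketch follows, assuming per-point firing timestamps (normalized over the sweep) and the sensor poses at the start and end of the sweep are available; this pose-interpolation scheme is a common approach, not necessarily the exact correction used by the authors:

```python
import numpy as np
from scipy.spatial.transform import Rotation, Slerp

def deskew_frame(points, timestamps, pose_start, pose_end):
    """Correct rolling-shutter distortion of one LiDAR sweep.

    points      : (N,3) points in the sensor frame at their individual firing times.
    timestamps  : (N,) firing times normalized to [0,1] over the sweep.
    pose_start, pose_end : (R, t) sensor poses at the start and end of the sweep,
                           R as a scipy Rotation, t as a (3,) translation.
    Each point is re-expressed in the start-of-sweep frame by interpolating the
    pose (SLERP for rotation, linear for translation) at its firing time.
    """
    R0, t0 = pose_start
    R1, t1 = pose_end
    key_rots = Rotation.from_quat(np.vstack([R0.as_quat(), R1.as_quat()]))
    slerp = Slerp([0.0, 1.0], key_rots)
    R_t = slerp(timestamps)                                    # per-point rotations
    t_t = (1.0 - timestamps)[:, None] * t0 + timestamps[:, None] * t1
    world = R_t.apply(points) + t_t                            # points in the world frame
    return R0.inv().apply(world - t0)                          # back into the start-of-sweep frame
```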
Full article ">Figure 15
<p>Pose graph as the output of point cloud registration and the input of SLAM optimization. The goal is to estimate 6DoF poses <math display="inline"><semantics> <mrow> <msub> <mi mathvariant="bold-italic">P</mi> <mn>1</mn> </msub> <mo>,</mo> <msub> <mi mathvariant="bold-italic">P</mi> <mn>2</mn> </msub> <mo>,</mo> <mo>…</mo> <mo>,</mo> <msub> <mi mathvariant="bold-italic">P</mi> <mi>N</mi> </msub> </mrow> </semantics></math> of graph nodes (vertices) <math display="inline"><semantics> <mrow> <msub> <mi>p</mi> <mn>1</mn> </msub> <mo>,</mo> <msub> <mi>p</mi> <mn>2</mn> </msub> <mo>,</mo> <mo>…</mo> <mo>,</mo> <msub> <mi>p</mi> <mn>15</mn> </msub> </mrow> </semantics></math> in the trajectory. The edges represent the transformations between LiDAR frames for the given nodes, estimated by point cloud registration. Black edges represent transformations between consecutive frames, blue edges are for transformations within a certain neighborhood (maximum distance of three frames in this example) and the green edges (in (<b>a</b>)) represent visual loops of revisited places detected by a significant overlap between the given frames. When the GNSS subsystem is available (<b>b</b>), additional visual loops are introduced as transformations from the origin <math display="inline"><semantics> <mi mathvariant="bold-italic">O</mi> </semantics></math> of some local geodetic (orthogonal NED) coordinate frame.</p>
Full article ">Figure 16
<p>Verification of edge <math display="inline"><semantics> <mrow> <mo>(</mo> <msub> <mi>p</mi> <mi>i</mi> </msub> <mo>,</mo> <msub> <mi>p</mi> <mi>j</mi> </msub> <mo>)</mo> </mrow> </semantics></math> representing transformation <math display="inline"><semantics> <msub> <mi mathvariant="bold-italic">T</mi> <mrow> <mi>i</mi> <mi>j</mi> </mrow> </msub> </semantics></math> is performed by comparison with transformation <math display="inline"><semantics> <mrow> <msub> <mi mathvariant="bold-italic">T</mi> <mn>1</mn> </msub> <mo>·</mo> <msub> <mi mathvariant="bold-italic">T</mi> <mn>2</mn> </msub> <mo>…</mo> <msub> <mi mathvariant="bold-italic">T</mi> <mi>K</mi> </msub> </mrow> </semantics></math> of alternative path (blue) between <span class="html-italic">i</span>th and <span class="html-italic">j</span>th node.</p>
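The comparison of an edge with the composed alternative path can be sketched as follows; the tolerance values and the interface are placeholders, not those used in the paper:

```python
import numpy as np

def edge_consistent(T_ij, path_transforms, rot_tol_deg=2.0, trans_tol=0.05):
    """Verify a pose-graph edge against an alternative path.

    T_ij            : 4x4 transform claimed by the edge (node i -> node j).
    path_transforms : list of 4x4 transforms T_1 ... T_K along an alternative
                      path from node i to node j.
    The edge is accepted when the composed path agrees with T_ij within the
    given rotation (degrees) and translation (metres) tolerances.
    """
    T_path = np.eye(4)
    for T in path_transforms:
        T_path = T_path @ T                          # compose the alternative path
    D = np.linalg.inv(T_ij) @ T_path                 # discrepancy transform
    trans_err = np.linalg.norm(D[:3, 3])
    # rotation angle of the discrepancy from the trace of its rotation part
    cos_a = np.clip((np.trace(D[:3, :3]) - 1.0) / 2.0, -1.0, 1.0)
    rot_err = np.degrees(np.arccos(cos_a))
    return rot_err <= rot_tol_deg and trans_err <= trans_tol
```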
Full article ">Figure 17
<p>The reconstruction built by our SLAM solution before (<b>a</b>) and after (<b>b</b>) the alignment of horizontal planes (floor, ceiling, etc.) with XY plane (blue circle).</p>
Full article ">Figure 18
<p>The dependency of laser return intensity on: the source beam (<b>a</b>); the range of the measurement (<b>b</b>); and the angle of incidence (<b>c</b>). We are using 2 LiDAR scanners with 16 laser beams per scanner, 32 beams in total.</p>
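The figure suggests compensating intensities per beam and per range; a crude lookup-table normalization sketch (not the authors' calibration model, and with illustrative bin counts) could look like this:

```python
import numpy as np

def normalize_intensities(intensity, beam_id, rng, n_beams=32, range_bins=50):
    """Normalize raw laser intensities against beam index and range.

    intensity : (N,) raw returns; beam_id : (N,) laser index; rng : (N,) range [m].
    Each reading is divided by the mean intensity observed for its beam and range
    bin, so a value of 1.0 means 'average reflectivity for that geometry'.
    Angle-of-incidence compensation could be added the same way.
    """
    bins = np.clip((rng / rng.max() * range_bins).astype(int), 0, range_bins - 1)
    norm = np.ones_like(intensity, dtype=float)
    for b in range(n_beams):
        for r in range(range_bins):
            mask = (beam_id == b) & (bins == r)
            if mask.any():
                mean = intensity[mask].mean()
                if mean > 0:
                    norm[mask] = intensity[mask] / mean
    return norm
```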
Figure 18 Cont.">
Full article ">Figure 19
<p>Results of 3D reconstruction without (<b>a</b>,<b>c</b>,<b>e</b>,<b>g</b>,<b>i</b>) and with (<b>b</b>,<b>d</b>,<b>f</b>,<b>h</b>,<b>j</b>) the normalization of laser intensities. One can observe more consistent intensities for the solid-color ceiling (<b>b</b>), reducing the trajectory artifacts while preserving the contrast with the ceiling lights. Besides the consistency, normalization of intensities reduces the noise (<b>d</b>). The most significant improvement is the visibility of important objects, e.g., markers on the electrical towers (<b>f</b>,<b>h</b>) or emergency exit doors (<b>j</b>) in the highway wall. All these objects cannot be found in the original point clouds (<b>e</b>,<b>g</b>,<b>i</b>).</p>
Full article ">Figure 20
<p>Experimental environments Office (<b>a</b>) and Staircase (<b>b</b>), and the highlighted slices that were used for precision evaluation.</p>
Full article ">Figure 21
<p>Error <math display="inline"><semantics> <msub> <mi>e</mi> <mi>r</mi> </msub> </semantics></math> distribution (the amount of the points within certain error) for our system 4RECON and ZEB-1 product. The experiments were performed for all test slices in <a href="#sensors-19-03944-f020" class="html-fig">Figure 20</a> on Office (<b>a</b>) and Staircase (<b>b</b>) dataset. Note that the model built by ZEB-1 was not available and therefore the evaluation is missing.</p>
Full article ">Figure 22
<p>Color-coded errors within the horizontal reference slice of the Office dataset (<b>a</b>)–(<b>d</b>) and the vertical slice of the Staircase dataset (<b>e</b>)–(<b>g</b>). Blue represents zero error, red stands for 10 cm error and higher. The ground truth FARO data are displayed in green. The results are provided for 4RECON-10 (<b>a</b>,<b>e</b>), 4RECON-overlap (<b>b</b>,<b>f</b>), 4RECON-verification (<b>c</b>,<b>g</b>), and ZEB-1 (<b>d</b>). For the Office dataset, there are no ambiguities (double walls) even without visual loop detection, while both loop closure and pose graph verification are necessary for the more challenging Staircase dataset to discard such errors. Moreover, one can observe that the ZEB-1 solution yields a lower-noise reconstruction thanks to the less noisy Hokuyo LiDAR.</p>
Full article ">Figure 23
<p>The comparison of data density provided by the ZEB-1 (<b>a</b>,<b>c</b>) and our (<b>b</b>,<b>d</b>) solution. Since the ZEB-1 solution is based on the Hokuyo scanner, the laser intensity readings are missing and the data density is much lower than in our solution. Multiple objects that can be distinguished in our reconstruction (lamps on the ceiling in the top image, furniture and other equipment in the bottom image) are not visible in the ZEB-1 model.</p>
Full article ">Figure 24
<p>An example of 3D reconstruction of an open field with high-voltage electrical lines (<b>a</b>). The model is height-colored for better visibility. The estimation of the positions and heights of the lines (<b>b</b>), towers (<b>e</b>), etc. was the main goal of this mapping task. The other elements (<b>c</b>,<b>d</b>) in the scene are shown to demonstrate the reconstruction quality.</p>
Full article ">Figure 25
<p>Example of ambiguities caused by reconstruction errors (<b>a</b>), which disqualify the model from being used for practical measurements. We obtained such results when we used only the poses provided by the GNSS/INS subsystem without any refinement by SLAM or point cloud registration. Our solution (including SLAM) provides valid reconstructions (<b>b</b>), where both towers and wires (in this case) can be distinguished.</p>
Full article ">Figure 26
<p>Geodetic survey markers painted on the road (<b>a</b>) are also visible in the point cloud (<b>b</b>) thanks to the coloring by laser intensities.</p>
Full article ">Figure 27
<p>Comparison of reconstructions provided by the dual-LiDAR system—floor plan top view (<b>a</b>) and side view of the corridor (<b>b</b>)—with the reconstruction built using only a single horizontally (<b>c</b>,<b>d</b>) or vertically (<b>e</b>,<b>f</b>) positioned Velodyne LiDAR. The reconstructions are colored red, with the ground truth displayed in blue.</p>
Figure 27 Cont.">
Full article ">
18 pages, 4751 KiB  
Article
Automatic Data Selection and Boresight Adjustment of LiDAR Systems
by Rabine Keyetieu and Nicolas Seube
Remote Sens. 2019, 11(9), 1087; https://doi.org/10.3390/rs11091087 - 7 May 2019
Cited by 20 | Viewed by 3754
Abstract
This paper details a new automatic calibration method for the boresight angles between a LiDAR (Light Detection and Ranging) and an inertial measurement unit (IMU), based on a data selection algorithm followed by the adjustment of the boresight angles. This method, called LiDAR-IMU boresight automatic calibration (LIBAC), takes as input overlapping survey strips following simple line patterns over regular slopes. We first construct a boresight error observability criterion, used to automatically select the points most sensitive to boresight errors. From these points, we adjust the boresight angles. From a statistical analysis of the adjustment results, we derive the boresight angle precision. Results obtained with LIBAC on several LiDAR systems integrated on drones are presented. We also report results on the reproducibility of the method. Full article
Show Figures

Figure 1

Figure 1
<p>A typical LiDAR system mounted on a drone. The lever arm is denoted by <math display="inline"><semantics> <msub> <mi>a</mi> <mrow> <mi>b</mi> <mi>I</mi> </mrow> </msub> </semantics></math>. The LiDAR local frame is denoted by <math display="inline"><semantics> <mrow> <mi>b</mi> <mi>S</mi> <mo>=</mo> <mo>(</mo> <msub> <mi>X</mi> <mrow> <mi>b</mi> <mi>S</mi> </mrow> </msub> <mo>,</mo> <msub> <mi>Y</mi> <mrow> <mi>b</mi> <mi>S</mi> </mrow> </msub> <mo>,</mo> <msub> <mi>Z</mi> <mrow> <mi>b</mi> <mi>S</mi> </mrow> </msub> <mo>)</mo> </mrow> </semantics></math>, and the inertial measurement unit (IMU) frame by <math display="inline"><semantics> <mrow> <mi>b</mi> <mi>I</mi> <mo>=</mo> <mo>(</mo> <msub> <mi>X</mi> <mrow> <mi>b</mi> <mi>I</mi> </mrow> </msub> <mo>,</mo> <msub> <mi>Y</mi> <mrow> <mi>b</mi> <mi>I</mi> </mrow> </msub> <mo>,</mo> <msub> <mi>Z</mi> <mrow> <mi>b</mi> <mi>I</mi> </mrow> </msub> <mo>)</mo> </mrow> </semantics></math>. Boresight angles define the transformation from frame <math display="inline"><semantics> <mrow> <mo>(</mo> <mi>b</mi> <mi>S</mi> <mo>)</mo> </mrow> </semantics></math> to frame <math display="inline"><semantics> <mrow> <mo>(</mo> <mi>b</mi> <mi>I</mi> <mo>)</mo> </mrow> </semantics></math>, denoted by <math display="inline"><semantics> <msubsup> <mi>C</mi> <mrow> <mi>b</mi> <mi>S</mi> </mrow> <mrow> <mi>b</mi> <mi>I</mi> </mrow> </msubsup> </semantics></math>.</p>
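These frames enter the usual direct georeferencing equation; a sketch in the caption's notation (the paper's exact formulation may differ) is:

```latex
\[
  \mathbf{X}^{n}(t) \;=\; \mathbf{P}^{n}(t)
  \;+\; C_{bI}^{\,n}(t)\,\Bigl( C_{bS}^{\,bI}\,\mathbf{x}^{bS}(t) \;+\; \mathbf{a}_{bI} \Bigr),
\]
```

where \(\mathbf{X}^{n}\) is the georeferenced point in the navigation frame \((n)\), \(\mathbf{P}^{n}\) the GNSS/INS position, \(C_{bI}^{\,n}\) the IMU attitude, \(C_{bS}^{\,bI}\) the boresight rotation to be calibrated, \(\mathbf{x}^{bS}\) the raw LiDAR measurement, and \(\mathbf{a}_{bI}\) the lever arm.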
Full article ">Figure 2
<p>The blue and green slopes are subject to a pitch boresight error. The transformation from the blue or green profile to the real one (in red) is more complex than a single rotation.</p>
Full article ">Figure 3
<p>Local geodetic frame (LGF) (navigation frame), denoted by <math display="inline"><semantics> <mrow> <mo>(</mo> <mi>n</mi> <mo>)</mo> </mrow> </semantics></math>.</p>
Full article ">Figure 4
<p>The three modules of LiDAR-IMU boresight automatic calibration (LIBAC): georeferencing, data selection and boresight estimation. Data selection requires the georeferencing to be performed first. Boresight estimation requires the data selection to be done, in order to decompose the point cloud into different planar regions.</p>
Full article ">Figure 5
<p>Potential pattern of survey lines for boresight angle calibration.</p>
Full article ">Figure 6
<p>LIBAC data selection. Each cell represents the quad-tree planar surface element that was found in the overlapping area; its color encodes the angle sensitivity criterion. The top grid is the result of the quad-tree. <b>Above: left</b>: roll (<math display="inline"><semantics> <msub> <mi mathvariant="script">C</mi> <mrow> <mi>δ</mi> <msub> <mi>φ</mi> <mi>p</mi> </msub> </mrow> </msub> </semantics></math>), <b>right</b>: pitch (<math display="inline"><semantics> <msub> <mi mathvariant="script">C</mi> <mrow> <mi>δ</mi> <msub> <mi>θ</mi> <mi>p</mi> </msub> </mrow> </msub> </semantics></math>). <b>Below: left</b>: yaw <math display="inline"><semantics> <msub> <mi mathvariant="script">C</mi> <mrow> <mi>δ</mi> <msub> <mi>ψ</mi> <mi>p</mi> </msub> </mrow> </msub> </semantics></math>, <b>right</b>: combination of all angles. The surface elements most sensitive to roll and yaw boresight correspond to the outer beams. Surface elements sensitive to pitch boresight are close to nadir.</p>
Full article ">Figure 7
<p>Point cloud of a natural surface with the presence of roofs.</p>
Full article ">Figure 8
<p>Results of data selection (quad-tree and sensitivity criterion) on the point cloud presented in <a href="#remotesensing-11-01087-f007" class="html-fig">Figure 7</a>. Yellow patches correspond to selected surface elements. On the top, the green and brown lines represent the calibration lines.</p>
Full article ">Figure 9
<p>LIBAC data selection without propagation of the combined standard measurement uncertainty (CSMU). The point cloud is colored according to its elevation, and surface elements are colored according to their sensitivity function, as defined in the color bar. In this case, the Deming least-squares (DLS) fit behaves like a principal component analysis (PCA) and the plane coefficient uncertainty is uniform.</p>
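For reference, the plain PCA plane fit that the unweighted case reduces to can be sketched as follows; LIBAC's DLS additionally propagates the per-point CSMU as weights, which this sketch omits:

```python
import numpy as np

def fit_plane_pca(points):
    """Fit a plane to a local point patch by PCA.

    points : (N,3) array of 3D points, N >= 3.
    Returns the unit normal, the centroid (a point on the plane) and the smallest
    eigenvalue of the covariance, which indicates how planar the patch is.
    """
    centroid = points.mean(axis=0)
    cov = np.cov((points - centroid).T)
    eigvals, eigvecs = np.linalg.eigh(cov)       # eigenvalues in ascending order
    normal = eigvecs[:, 0]                        # direction of least variance
    return normal, centroid, eigvals[0]
```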
Full article ">Figure 10
<p>LIBAC data selection with propagation of the CSMU. The point cloud is colored according to its elevation, and surface elements are colored according to their sensitivity function, as defined by the color bar.</p>
Full article ">Figure 11
<p>Riegl MiniVUX-DL LiDAR before (<b>left</b>) and after (<b>right</b>) LIBAC. The colors denote the different flight lines.</p>
Full article ">Figure 12
<p>Velodyne VLP16 LiDAR before (<b>left</b>) and after (<b>right</b>) LIBAC.</p>
Full article ">Figure 13
<p>VUX-LR LiDAR before (<b>left</b>) and after (<b>right</b>) LIBAC.</p>
Full article ">Figure 14
<p>Standard deviation along surface normal (SDASN) before (<b>left</b>) and after (<b>right</b>) LIBAC. We can see that LIBAC reduced the main discrepancies between overlapping strips within the calibration area. The areas indicated in yellow correspond to irregularities where the local PCA is unable to fit a correct local plane. In this figure, color represents the point cloud elevation.</p>
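One possible way to compute a per-point SDASN map like the one shown, assuming a fixed neighbourhood radius (the paper's neighbourhood definition may differ):

```python
import numpy as np
from scipy.spatial import cKDTree

def sdasn(points, radius=0.5):
    """Standard deviation along the local surface normal (SDASN) per point.

    For every point, its neighbours within `radius` are gathered, a local plane
    is fitted by PCA, and the standard deviation of the neighbours' distances to
    that plane (i.e. along the normal) is reported. Large values flag strip
    discrepancies or irregular geometry where a plane does not fit.
    """
    tree = cKDTree(points)
    out = np.full(len(points), np.nan)
    for i, p in enumerate(points):
        idx = tree.query_ball_point(p, radius)
        if len(idx) < 5:                                 # too few neighbours for a stable fit
            continue
        nbrs = points[idx]
        c = nbrs.mean(axis=0)
        _, _, vt = np.linalg.svd(nbrs - c, full_matrices=False)
        normal = vt[-1]                                  # least-variance direction
        out[i] = np.std((nbrs - c) @ normal)             # spread along the normal
    return out
```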
Full article ">Figure 15
<p>Boresight angle results of reproducibility tests for ten similar patterns of lines in different conditions, with fluctuation bands (delimited by red squares for each boresight angle).</p>
Full article ">Figure 16
<p>Histogram of normalized residuals. The <span class="html-italic">x</span>-axis is dimensionless. The red line is a standard normal distribution at the 99.7% level.</p>
Full article ">
16 pages, 4718 KiB  
Article
Rigorous Boresight Self-Calibration of Mobile and UAV LiDAR Scanning Systems by Strip Adjustment
by Zhen Li, Junxiang Tan and Hua Liu
Remote Sens. 2019, 11(4), 442; https://doi.org/10.3390/rs11040442 - 20 Feb 2019
Cited by 40 | Viewed by 6857
Abstract
Mobile LiDAR Scanning (MLS) systems and UAV LiDAR Scanning (ULS) systems equipped with precise Global Navigation Satellite System (GNSS)/Inertial Measurement Unit (IMU) positioning units and LiDAR sensors are used at an increasing rate for the acquisition of high-density and high-accuracy point clouds because of their safety and efficiency. Without careful calibration of the boresight angles of the MLS systems and ULS systems, the accuracy of the acquired data degrades severely. This paper proposes an automatic boresight self-calibration method for the MLS systems and ULS systems using acquired multi-strip point clouds. The boresight angles of MLS systems and ULS systems are expressed in the direct geo-referencing equation and corrected by minimizing the misalignments between points scanned from different directions and different strips. Two datasets scanned by MLS systems and two datasets scanned by ULS systems were used to verify the proposed boresight calibration method. The experimental results show that the root mean square errors (RMSE) of misalignments between point correspondences of the four datasets after boresight calibration are 2.1 cm, 3.4 cm, 5.4 cm, and 6.1 cm, respectively, which are reduced by 59.6%, 75.4%, 78.0%, and 94.8% compared with those before boresight calibration. Full article
(This article belongs to the Special Issue Trends in UAV Remote Sensing Applications)
Show Figures

Figure 1

Figure 1
<p>The scenes for acquiring the experimental data where the highlighted red polylines are the trajectories of the MLS and ULS systems.</p>
Full article ">Figure 2
<p>The trajectories of the experimental data, where the highlighted red parts are selected for calibration and the arrows indicate the travel direction of the MLS and ULS systems during data acquisition.</p>
Full article ">Figure 3
<p>Coordinate frames of the MLS and ULS systems. (<b>a</b>,<b>b</b>) the laser scanner frame <span class="html-italic">O</span><sub>L</sub>-<span class="html-italic">X</span><sub>L</sub><span class="html-italic">Y</span><sub>L</sub><span class="html-italic">Z</span><sub>L</sub> and IMU frame <span class="html-italic">O</span><sub>I</sub>-<span class="html-italic">X</span><sub>I</sub><span class="html-italic">Y</span><sub>I</sub><span class="html-italic">Z</span><sub>I</sub>. (<b>c</b>) the local level frame <span class="html-italic">O</span><sub>H</sub>-<span class="html-italic">X</span><sub>H</sub><span class="html-italic">Y</span><sub>H</sub><span class="html-italic">Z</span><sub>H</sub> and WGS84 Earth-centered Earth-fixed (ECEF) frame <span class="html-italic">O</span><sub>W</sub>-<span class="html-italic">X</span><sub>W</sub><span class="html-italic">Y</span><sub>W</sub><span class="html-italic">Z</span><sub>W</sub>.</p>
Full article ">Figure 4
<p>Effect of the measurement errors on multi-strip point clouds from the MLS and ULS systems. (<b>a</b>,<b>b</b>) multi-strip point clouds coincide with each other perfectly when no errors exist; (<b>c</b>,<b>d</b>) there are misalignments among the multi-strip point clouds because of errors. Points scanned from different strips are shown in different colors.</p>
Full article ">Figure 5
<p>Multi-strip point clouds before and after boresight calibration in “MLS1”. (<b>a</b>) raw point clouds; (<b>b</b>,<b>c</b>) point clouds of cross section 1 and cross section 2 respectively before boresight calibration; (<b>d</b>,<b>e</b>) point clouds of cross section 1 and cross section 2 respectively after boresight calibration. Points scanned from different strips have different colors.</p>
Full article ">Figure 6
<p>Multi-strip point clouds before and after boresight angle calibration in “MLS2”. (<b>a</b>) raw point clouds; (<b>b</b>,<b>c</b>) point clouds of cross section 1 and cross section 2 respectively before boresight calibration; (<b>d</b>,<b>e</b>) point clouds of cross section 1 and cross section 2 respectively after boresight calibration. Points scanned from different strips have different colors.</p>
Full article ">Figure 7
<p>Multi-strip point clouds before and after boresight angle calibration in “ULS1”. (<b>a</b>) raw point clouds; (<b>b</b>,<b>c</b>) point clouds of cross section 1 and cross section 2 respectively before boresight calibration; (<b>d</b>,<b>e</b>) point clouds of cross section 1 and cross section 2 respectively after boresight calibration. Points scanned from different strips have different colors.</p>
Full article ">Figure 8
<p>Multi-strip point clouds before and after boresight angle calibration in “ULS2”. (<b>a</b>) raw point clouds; (<b>b</b>,<b>c</b>) point clouds of cross section 1 and cross section 2 respectively before boresight calibration; (<b>d</b>,<b>e</b>) point clouds of cross section 1 and cross section 2 respectively after boresight calibration. Points scanned from different strips have different colors.</p>
Full article ">Figure 9
<p>The histograms of distances between point correspondences. (<b>a</b>–<b>d</b>) are the histograms of distances of “MLS1”, “MLS2”, “ULS1” and “ULS2” before boresight angle calibration. (<b>e</b>–<b>h</b>) are the histograms of distances of “MLS1”, “MLS2”, “ULS1” and “ULS2” after boresight angle calibration.</p>
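Such a histogram and the RMSE figures quoted in the abstract can be approximated with a nearest-neighbour sketch; note that the paper's point correspondences may be established differently, and the distance cutoff is an assumption:

```python
import numpy as np
from scipy.spatial import cKDTree

def correspondence_rmse(strip_a, strip_b, max_dist=0.5):
    """Distances between point correspondences of two overlapping strips.

    strip_a, strip_b : (N,3) point clouds of two overlapping strips.
    Correspondences are taken as nearest neighbours within `max_dist`; the
    function returns the individual distances (for a histogram such as the one
    above) and their RMSE, the metric compared before/after boresight calibration.
    """
    d, _ = cKDTree(strip_b).query(strip_a, distance_upper_bound=max_dist)
    d = d[np.isfinite(d)]                        # drop points with no neighbour in range
    rmse = np.sqrt(np.mean(d ** 2)) if d.size else np.nan
    return d, rmse
```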
Full article ">