Article

Initial Pose Estimation Method for Robust LiDAR-Inertial Calibration and Mapping

1 Department of Intelligent Systems & Robotics, Chungbuk National University, Cheongju 28644, Republic of Korea
2 School of Electrical and Computer Engineering, Chungbuk National University, Cheongju 28644, Republic of Korea
* Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Sensors 2024, 24(24), 8199; https://doi.org/10.3390/s24248199
Submission received: 11 November 2024 / Revised: 9 December 2024 / Accepted: 19 December 2024 / Published: 22 December 2024
(This article belongs to the Section Sensors and Robotics)

Abstract

Handheld LiDAR scanners, which typically consist of a LiDAR sensor, an Inertial Measurement Unit, and a processor, enable data capture while moving, offering flexibility for various applications, including indoor and outdoor 3D mapping in fields such as architecture and civil engineering. Unlike fixed LiDAR systems, handheld devices allow data collection from different angles, but this mobility introduces challenges in data quality, particularly when the initial calibration between sensors is not precise. Accurate LiDAR-IMU calibration, essential for mapping accuracy in Simultaneous Localization and Mapping applications, involves precise alignment of the sensors’ extrinsic parameters. This research presents a robust initial pose calibration method for LiDAR-IMU systems in handheld devices, specifically designed for indoor environments. The research contributions are twofold. Firstly, we present a robust plane detection method for LiDAR data. This plane detection method removes the noise caused by the mobility of the scanning device and provides accurate planes for precise LiDAR initial pose estimation. Secondly, we present a robust plane-aided LiDAR calibration method that estimates the initial pose. By employing this LiDAR calibration method, an efficient LiDAR-IMU calibration is achieved for accurate mapping. Experimental results demonstrate that the proposed method achieves lower calibration errors and improved computational efficiency compared to existing methods.

1. Introduction

As the demand for Light Detection and Ranging (LiDAR) continues to rise, handheld LiDAR devices that allow users to capture data while moving are becoming increasingly popular alongside fixed LiDAR systems mounted on autonomous vehicles. These handheld LiDAR devices are lightweight and enable users to freely acquire data on the go, making them suitable for various applications such as indoor and outdoor 3D scanning in fields like architecture and civil engineering [1]. A typical handheld LiDAR scanner consists of a LiDAR sensor, an Inertial Measurement Unit (IMU), a camera, and a processor. Compared to fixed LiDAR systems, handheld LiDAR offers the advantage of greater flexibility, as it can collect data from various angles and positions. However, this high degree of freedom leads to a decline in data quality if the initial calibration between sensors is not precise [2].
Handheld scanners typically employ Simultaneous Localization and Mapping (SLAM) technology to generate maps and estimate the device’s position in GPS-denied environments [3,4,5]. The generated map allows the system to accurately determine its location within the environment, plan movement paths, and understand its surroundings during navigation. Therefore, environment mapping plays a critical role in the performance of SLAM-based autonomous navigation, making the creation of precise maps a crucial task. To generate such precise maps, accurate calibration of the extrinsic parameters between the LiDAR and the IMU is essential.
LiDAR-IMU calibration refers to the process of precisely aligning the relative position and orientation between the LiDAR and IMU sensors [6,7,8,9,10,11,12,13]. While LiDAR provides precise recognition of the surrounding environment’s structure, its data can become distorted during high-speed movements or rapid motion. The IMU, on the other hand, can accurately track short-term movements, but its errors tend to accumulate over time. By combining the strengths of these two sensors, LiDAR-Inertial SLAM performance can be optimized. However, if the extrinsic parameters between the two sensors are not accurately calibrated, the data from both sensors may not align properly, causing distortions in the mapping process, which ultimately degrades the overall performance of the system, as illustrated in Figure 1.
To improve the performance of LiDAR-IMU calibration, it is crucial to calibrate the initial poses of the LiDAR and IMU. Initial pose calibration refers to the process of calculating and adjusting the actual tilt angles of the sensors with respect to the global coordinate system, rather than their local sensor coordinate frames. If the initial poses of the LiDAR and IMU are not calibrated, the two sensors may have different tilts, which can lead to inaccurate data fusion. This may increase errors in position estimation or mapping. In particular, since IMU rotation data changes rapidly over short periods, failing to calibrate the initial pose can result in significant distortion in rotational measurements. This can negatively affect the overall system performance, making it essential to accurately calibrate the initial pose of both the LiDAR and IMU.
The existing research on LiDAR-IMU calibration used a pre-defined map [7], LiDAR scans [8], or ground information [10] for initial pose estimation. Though these approaches perform well for fixed LiDAR systems, their pose estimation accuracy drops significantly on moving platforms, making them unsuitable for handheld scanning systems.
In this research, we present a robust initial pose estimation method for LiDAR-IMU calibration on a handheld scanning system. The proposed method is designed for accurate mapping in indoor environments. This is achieved by detecting robust planes in indoor environments and calibrating the initial poses of the LiDAR and IMU based on the detected planes. To detect planes, we first divide the input data into voxels and extract plane features from each voxel. Subsequently, we compute a plane score for each voxel using the variance of the normal vectors and the point-to-plane distance. This allows for the robust detection of planes, even in the presence of edges and noise. The initial poses of the LiDAR and IMU are then calibrated based on the relationship between the detected planes and the actual planes. Through evaluation, it is shown that the proposed method achieves lower calibration errors compared to existing methods. Furthermore, the plane detection results used for calibration demonstrate improved performance over conventional plane detection algorithms.
The contributions of this paper are as follows.
  • We propose a plane detection method based on plane scores, which robustly detects planes even in the presence of noise and edges.
  • We present an initial pose estimation method for LiDAR-IMU calibration. Using the relationship between the detected planes and the actual planes, stable calibration is achieved even under various movements.
  • The proposed method demonstrates high calibration performance compared to existing methods on the benchmark dataset. The plane detection approach also proves to have higher accuracy and faster computational speed when compared to conventional algorithms.

2. Related Works

This section presents a critical review of the existing LiDAR-IMU calibration methods proposed for LiDAR-Inertial fusion SLAM systems. Based on the initial pose estimation strategy, existing methods are grouped into two categories: calibration methods with and without initial pose estimation. Each of these categories is explained below. Moreover, Table 1 presents a taxonomy of the existing research, highlighting for each method its modality, platform, environment, initial pose estimation approach, and limitations.

2.1. Methods Without Initial Pose Estimation

Das et al. [11] presented a method that dynamically calibrates the LiDAR sensor based on IMU data. The proposed method first estimates the position and orientation of the IMU relative to the vehicle frame. Then, using the calibrated pose of IMU, it calibrates the pose of the LiDAR relative to the vehicle frame. Though it has the advantage of performing real-time alignment between the two sensors using IMU data, it is sensitive to IMU drift and noise.
Lv et al. [12] estimated initial extrinsic parameters using LiDAR and IMU, extracted planes from LiDAR data, and performed optimization to achieve more accurate extrinsic calibration. This method has the advantage of not requiring a target and being robust against LiDAR motion distortion, yet it requires high computational resources, limiting its applicability in real-world environments. In [13], the authors proposed a method that estimates the initial extrinsic parameters between the two sensors and refines them through iterative alignment. While this method has the advantage of being applicable in various environments and of running online, it does not perform initial pose estimation, like [12], and the iterative alignment process increases computational cost, making real-time processing challenging.

2.2. Methods with Initial Pose Estimation

Several studies have proposed LiDAR-IMU calibration methods to improve calibration accuracy by utilizing LiDAR data to accurately calibrate the LiDAR-IMU pose.
Le Gentil et al. [6] proposed a targetless approach for LiDAR-IMU calibration without the aid of other sensors. It corrects temporal motion distortion in the IMU by upsampling LiDAR data. Although the proposed method uses IMU data to correct distortions in LiDAR data, precise calibration is not achievable due to the use of low-accuracy upsampled LiDAR points. Furthermore, by relying solely on IMU data without the structural information from LiDAR, it addresses only motion distortion.
Li et al. [7] improved accuracy by correcting the IMU’s rotation and displacement based on LiDAR data and a pre-defined map containing structural environmental information. The proposed method allows for relatively easy estimation and correction of orientation by utilizing a pre-defined map. However, its computational complexity is very high due to the time synchronization process performed through continuous trajectory estimation. Also, this approach cannot be used if the predefined map is unavailable or inaccurate.
In [8], the authors presented a method that estimates the initial pose of LiDAR and IMU sensors using LiDAR data, robustly calibrated against disturbances in dynamic environments. This method firstly extracts planes from the first LiDAR scan. Secondly, the point-to-plane distance is calculated with planes extracted from subsequent LiDAR scans to correct the initial orientation of the LiDAR. This method is designed for real-time operation, addressing the computational complexity problem in [7]. However, its performance may vary depending on different environments. Also, this method relies solely on the relative relationship between data estimated within the LiDAR frame and does not consider direct alignment with the global frame. As a result, the accuracy of IMU calibration may degrade in case of significant movement or rotation.
Li et al. [9] proposed a method that performs spatial and temporal calibration of the IMU by utilizing not only LiDAR data but also visual information from a camera. This method calibrates the IMU in two stages, thus enabling more precise estimation. However, the simultaneous use of LiDAR and camera requires a complex processing workflow, and the interdependencies between stages result in inconsistent performance.
Kim et al. [10] performed IMU calibration by incorporating LiDAR data and planar motion constraints, allowing calibration without a target and enabling simpler calibration through these constraints. This method extracts ground data from a LiDAR mounted on a ground robot and calibrates the pose of the LiDAR and IMU by calculating the relationship between the detected and the actual ground plane. Unlike [8], this method aligns directly with the global frame to calibrate the pose of the LiDAR and IMU, demonstrating improved calibration performance. However, for handheld LiDAR, which cannot always observe the ground due to device movement, relying solely on ground observations for calibration degrades accuracy at larger scales.
In summary, existing methods either lack versatility or are unsuitable for use with portable LiDAR due to constraints such as reliance on predefined maps or ground information. While there are methods without such constraints, they have the drawback of not directly calibrating the initial pose between the global and local coordinate systems, making it difficult to ensure consistent performance. This paper aims to overcome the limitations of previous studies and proposes an initial pose estimation method for accurate calibration of the LiDAR-IMU pair embedded in handheld scanning systems. For this purpose, we first extract plane information from the real environment in the global frame, then estimate the orientation of the LiDAR using the relationship between the global and LiDAR coordinates, and lastly use the LiDAR orientation to estimate the IMU orientation, which enables LiDAR-IMU calibration without complex procedures. The proposed method is designed to remain applicable in highly dynamic environments.

3. Method Overview

3.1. System Architecture

This paper focuses on calibrating the initial pose of a handheld LiDAR and IMU. The overall pipeline of the proposed framework is shown in Figure 2. Firstly, plane features are extracted from the input LiDAR data. Then, these plane features are given as input to the Robust Plane Detection module, which detects planes that are resilient to edges and noise. Using these planes, the initial pose of the LiDAR is calibrated. In parallel, noise is removed from the input IMU measurements. Finally, the initial pose of the IMU is calibrated using the filtered IMU measurements and the LiDAR calibration result.

3.2. Notations

Several important notations used in this paper are presented in Table 2. The LiDAR frame and IMU frame refer to the coordinate systems of the LiDAR and IMU, respectively, while the absolute frame represents the global coordinate system.

4. Proposed Method

4.1. Plane Feature Extraction

For plane feature extraction, the input LiDAR data $P$ is divided into $W \times H \times C$ voxels. Then, the normal vector is repeatedly calculated $N$ times for each voxel $v_{w,h,c}$ located at position $(w,h,c)$, and the average normal vector $\bar{n}_{w,h,c} \in \mathbb{R}^3$ is computed using Equation (1):

$$\bar{n}_{w,h,c} = \left( \frac{1}{N}\sum_{j=1}^{N} n_{x_{w,h,c}}^{(j)},\; \frac{1}{N}\sum_{j=1}^{N} n_{y_{w,h,c}}^{(j)},\; \frac{1}{N}\sum_{j=1}^{N} n_{z_{w,h,c}}^{(j)} \right) \tag{1}$$

where $n_{x_{w,h,c}}^{(j)}, n_{y_{w,h,c}}^{(j)}, n_{z_{w,h,c}}^{(j)}$ refer to the $x, y, z$ components of the $j$-th normal vector $n_{w,h,c}^{(j)}$ within voxel $v_{w,h,c}$. Consequently, each voxel $v_{w,h,c}$ contains a total of $N$ normal vectors and one average vector $\bar{n}_{w,h,c}$, expressed as $v_{w,h,c} = \{\bar{n}_{w,h,c}, n_{w,h,c}^{(1)}, n_{w,h,c}^{(2)}, \ldots, n_{w,h,c}^{(N)}\}$.
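The averaging in Equation (1) is straightforward to implement. The following C++/Eigen sketch illustrates it under the assumption that the $N$ per-voxel normals have already been estimated (e.g., by repeated local plane fits); the Voxel struct and function name are illustrative placeholders, not the authors' implementation.

```cpp
#include <Eigen/Dense>
#include <vector>

// Hypothetical per-voxel container: the N sampled normals n^(j) and their
// mean from Equation (1). Names are illustrative placeholders.
struct Voxel {
    std::vector<Eigen::Vector3d> normals;  // n^(1), ..., n^(N)
    Eigen::Vector3d mean_normal;           // average normal of the voxel
};

// Component-wise average of the sampled normals, i.e., Equation (1).
Eigen::Vector3d averageNormal(const std::vector<Eigen::Vector3d>& normals) {
    Eigen::Vector3d sum = Eigen::Vector3d::Zero();
    for (const auto& n : normals) sum += n;
    return sum / static_cast<double>(normals.size());
}
```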

4.2. Robust Plane Detection

This module takes the normal vectors as input and detects robust planes using the $W \times H \times C$ set of voxels $V_{W,H,C} = \{v_{w,h,c} \mid w = 1, 2, \ldots, W;\ h = 1, 2, \ldots, H;\ c = 1, 2, \ldots, C\}$, which includes the $N$ normal vectors and the average vector $\bar{n}_{w,h,c}$ calculated for each voxel. The configuration for plane detection is shown in Figure 3. For robust plane detection, this module computes a plane score utilizing the variance of the normal vectors $n_{w,h,c}^{(1,\ldots,N)}$ and the point-to-plane distance.
The variance of the normal vectors represents the directionality of the normal vectors detected in the current voxel. In general, planes in indoor environments have flat structures. However, near edges or boundaries, where two or more different planes intersect, the normal vectors in this region can point in multiple directions. In other words, while calculating the normal vectors for each voxel, the normal vectors in a planar region tend to have almost the same direction, whereas regions containing edges may have normal vectors pointing in various directions. To assess the consistency of the normal vector directions within a voxel $v_{w,h,c}$, the variance $\sigma_{n_{w,h,c}}^2$ of the normal vectors $n_{w,h,c}^{(1,\ldots,N)}$ is calculated using Equation (2):

$$\sigma_{n_{w,h,c}}^2 = \frac{1}{N}\sum_{j=1}^{N} \left\| r_{w,h,c}^{(j)} \right\|^2 = \frac{1}{N}\sum_{j=1}^{N} \left\| n_{w,h,c}^{(j)} - \bar{n}_{w,h,c} \right\|^2 \tag{2}$$

where $r_{w,h,c}^{(j)} = n_{w,h,c}^{(j)} - \bar{n}_{w,h,c}$ denotes the deviation between the $j$-th normal vector $n_{w,h,c}^{(j)}$ and the average vector $\bar{n}_{w,h,c}$ of the voxel. A planar region with consistent vector directions will have low variance, while an edge region with vectors pointing in multiple directions will exhibit high variance.
Next, the point-to-plane distance is calculated to mitigate the impact of noise. The point-to-plane distance determines how well the given points align with a specific plane. The distance $d_i$ between the $i$-th point in voxel $v_{w,h,c}$ and the plane with normal $\bar{n}_{w,h,c}$ is computed using Equation (3):

$$d_i = \frac{\left| \bar{n}_{x_{w,h,c}} x_i + \bar{n}_{y_{w,h,c}} y_i + \bar{n}_{z_{w,h,c}} z_i + D \right|}{\sqrt{\bar{n}_{x_{w,h,c}}^2 + \bar{n}_{y_{w,h,c}}^2 + \bar{n}_{z_{w,h,c}}^2}} \tag{3}$$
where $D$ represents the distance of the plane from the origin of the LiDAR coordinate system. The larger the point-to-plane distance, the more likely the point is noise. Thus, among all the points $P_{w,h,c}$ in a voxel, we select the points $p_{v_{w,h,c}}^{(i,\ldots,K)}$ with $d_i$ less than a certain threshold $d_{thresh}$, while discarding those with a larger distance, as given in Equation (4):

$$\mathrm{Num}_{pt} = \left\{ p_{v_{w,h,c}}^{(i,\ldots,K)} \in P_{w,h,c} \;\middle|\; d_i < d_{thresh} \right\} \tag{4}$$
where $\mathrm{Num}_{pt}$ represents the set of robust points in voxel $v_{w,h,c}$. These robust points are used to compute the plane score $S_{w,h,c}$ for each voxel, using Equation (5):

$$S_{w,h,c} = \left( 1 - \sigma_{n_{w,h,c}}^2 \right) \times \frac{\left| \mathrm{Num}_{pt} \right|}{\left| \mathrm{Num}_{P_{w,h,c}} \right|} \tag{5}$$

where $S_{w,h,c}$ ranges between 0 and 1 and $\mathrm{Num}_{P_{w,h,c}}$ represents the total number of points contained within $v_{w,h,c}$.
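Putting Equations (2)-(5) together, the per-voxel plane score can be computed as in the sketch below. It assumes the voxel already stores its points, its $N$ sampled normals, their mean, and the plane offset $D$ from Equation (3); the function signature and names are assumptions made for illustration, not the authors' interface.

```cpp
#include <Eigen/Dense>
#include <cmath>
#include <vector>

// Plane score of one voxel following Equations (2)-(5). d_thresh would be
// 0.2, the value selected experimentally in Section 5.3.1.
double planeScore(const std::vector<Eigen::Vector3d>& points,   // points in the voxel
                  const std::vector<Eigen::Vector3d>& normals,  // n^(1..N)
                  const Eigen::Vector3d& mean_normal,           // average normal
                  double D, double d_thresh) {
    // Equation (2): variance of the sampled normals around their mean.
    double variance = 0.0;
    for (const auto& n : normals) variance += (n - mean_normal).squaredNorm();
    variance /= static_cast<double>(normals.size());

    // Equations (3)-(4): count points whose point-to-plane distance is below d_thresh.
    int num_robust = 0;
    const double denom = mean_normal.norm();
    for (const auto& p : points) {
        const double d_i = std::abs(mean_normal.dot(p) + D) / denom;
        if (d_i < d_thresh) ++num_robust;
    }

    // Equation (5): S = (1 - variance) * |robust points| / |all points|.
    return (1.0 - variance) * static_cast<double>(num_robust)
                            / static_cast<double>(points.size());
}
```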
Once the plane scores are computed, plane refinement is performed, which further reduces the noise by eliminating noisy voxels. For each voxel $v_{w,h,c}$, the plane score $S_{w,h,c}$ is compared with a predefined threshold $s_{thresh}$. If $S_{w,h,c} < s_{thresh}$, the plane score for $v_{w,h,c}$ is redefined using the neighboring voxels $V_{neighbor}$. This redefinition is achieved by computing the distance $d_k$ between each point $p_{w,h,c}$ in $v_{w,h,c}$ and the average normal vector $\bar{n}_{w,h,c}$ of each neighboring voxel using Equation (3).
If $d_k < d_{thresh}$, $p_{w,h,c}$ is associated with the relevant neighboring voxel, while if $d_k > d_{thresh}$, $p_{w,h,c}$ is recognized as noise and is discarded. This allows for noise removal and edge correction for voxels with low plane scores $S_{w,h,c}$. Figure 4 depicts the plane refinement process. The final detected robust planes are denoted as $u_q \in U$, with a total of $Q$ planes and $q = 1, 2, \ldots, Q$. Here, $u_q$ represents each plane with plane score $s_q$ and normal vector $n_q$. The complete process of robust plane detection is explained in Algorithm 1.
Algorithm 1 Robust Plane Detection Algorithm
Input: $V_{W,H,C}$: input voxels, $s_{th}$: score threshold, $d_{th}$: refine threshold
Output: $U$: detected planes
 1: for $v_{w,h,c} \in V_{W,H,C}$ do    ▹ Iterate over all voxels
 2:     $\sigma_{n_{w,h,c}}^2 \leftarrow \frac{1}{N}\sum_{j=1}^{N} \| n^{(j)} - \bar{n} \|^2$
 3:     $rate \leftarrow \frac{1}{|P_{w,h,c}|} \left| \{ p \in P_{w,h,c} \mid \mathrm{distance}(p, \bar{n}) < d_{th} \} \right|$
 4:     $S_{w,h,c} \leftarrow (1 - \sigma_{n_{w,h,c}}^2) \times rate$
 5:     if $S_{w,h,c} < s_{th}$ then
 6:         for $i$ in range $[w-1, w+1]$ do
 7:             for $j$ in range $[h-1, h+1]$ do
 8:                 for $k$ in range $[c-1, c+1]$ do
 9:                     $V_{neighbor} \leftarrow V_{neighbor} \cup \{ v_{i,j,k} \}$
10:                 end for
11:             end for
12:         end for
13:         for $p_t \in P_{w,h,c}$ do
14:             for $v_{neighbor} \in V_{neighbor}$ do
15:                 $dist \leftarrow \mathrm{distance}(p_t, v_{neighbor}.\bar{n})$
16:                 if $dist < d_{th}$ then
17:                     $P_{neighbor} \leftarrow P_{neighbor} \cup \{ p_t \}$
18:                 end if
19:             end for
20:         end for
21:     else
22:         $U \leftarrow U \cup P_{w,h,c}$
23:     end if
24: end for
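For completeness, the refinement branch of Algorithm 1 (the case $S_{w,h,c} < s_{thresh}$) can be sketched in C++ as follows. Every point of a low-score voxel is re-associated with the first neighboring voxel whose mean plane it fits; otherwise it is dropped as noise. The PlaneVoxel type and neighbor handling are simplifications assumed for illustration; the authors' data structures are not described at this level of detail.

```cpp
#include <Eigen/Dense>
#include <cmath>
#include <vector>

// Assumed per-voxel plane representation for the refinement sketch.
struct PlaneVoxel {
    std::vector<Eigen::Vector3d> points;
    Eigen::Vector3d mean_normal;  // average normal of the voxel
    double D = 0.0;               // plane offset of the voxel (Equation (3))
};

void refineLowScoreVoxel(PlaneVoxel& voxel,
                         std::vector<PlaneVoxel*>& neighbors,  // up to 26 adjacent voxels
                         double d_thresh) {
    for (const auto& p : voxel.points) {
        for (PlaneVoxel* nb : neighbors) {
            const double d = std::abs(nb->mean_normal.dot(p) + nb->D) / nb->mean_normal.norm();
            if (d < d_thresh) {           // the point fits the neighbor's plane
                nb->points.push_back(p);  // re-associate it with that neighbor
                break;
            }
        }
        // points that fit no neighbor are implicitly discarded as noise
    }
    voxel.points.clear();  // the low-score voxel no longer contributes a plane
}
```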

4.3. LiDAR Calibration

The calibration of the LiDAR’s initial pose refers to the process of calculating a transformation matrix based on the relationship between the LiDAR frame and the global frame. Planes are generally defined with respect to the global coordinate system. However, the LiDAR coordinate system is based on the LiDAR itself. Therefore, if the LiDAR sensor is tilted, even a perfectly flat plane will appear as an inclined plane in the LiDAR coordinate system. As a result, the LiDAR coordinate system is not aligned with the global coordinate system. This paper estimates the initial pose of the LiDAR to address the misalignment between these two coordinate systems. The detected robust planes $u_q$, each with a plane score $s_q$ and normal vector $n_q$, are given as input to the LiDAR calibration method, as depicted in Figure 5.
The initial pose LiDAR calibration is based on the following two assumptions:
  • The indoor spaces have a cuboidal structure and are composed of at least two or more planes, including walls, floors, and ceilings;
  • Each plane is flat and the angles between planes are perpendicular. Unlike outdoor environments, the planes that make up indoor spaces, such as walls, floors, and ceilings, are flat and each plane is oriented perpendicular to its neighboring planes.
Based on the aforementioned assumptions, we define the normal vectors of the planes that constitute indoor spaces in real environments, as given in Equation (6):

$$n_G = \left\{ \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}, \begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix}, \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix}, \begin{bmatrix} 0 \\ 0 \\ -1 \end{bmatrix}, \begin{bmatrix} 0 \\ -1 \\ 0 \end{bmatrix}, \begin{bmatrix} -1 \\ 0 \\ 0 \end{bmatrix} \right\} \tag{6}$$

where $n_G$ is a set of plane normal vectors $n_g$, each representing a plane of the real environment, including walls and ceilings, in an indoor setting. Positive and negative values represent directions: a vector with a negative value indicates the opposite direction of the corresponding vector with a positive value. We define $n_G$ in the global frame and compute the transformation with respect to the detected planes $U$.
Handheld LiDAR has greater degrees of freedom compared to ground robots, which are limited to planar motion. Specifically, while the yaw motion of handheld LiDAR is similar to that of fixed LiDAR mounted on ground robots, it exhibits a larger range of motion in the roll and pitch axes. This paper therefore emphasizes the correction of the roll and pitch axes among the three rotational axes. Additionally, this paper highlights the issue of non-linear tilting of sensors caused by the high degree of freedom in handheld LiDAR systems. This means that while the sensor’s position may be accurate, its rotational pose may not be properly aligned. Therefore, this study focuses solely on the calibration of the sensor orientation and does not consider positional information. The sensor’s position is fixed at the global frame origin $O(0, 0, 0)$, and the rotation matrix between the global frame $n_G$ and the LiDAR frame is calculated through optimization.
To calibrate the LiDAR orientation, the two planes with the highest plane scores are used. After projecting these two planes in the same direction, an average vector $n_{avg}$ is computed. In this paper, the rotation matrix is calculated using vectors, so it is not necessary for the two vectors to be orthogonal to each other. Therefore, when the angle between the two vectors is close to perpendicular, the two vectors are projected in the same direction to make them parallel, and then their average vector is calculated. In this case, the projection rotates 90 degrees based on the vector with the higher plane score. If the two vectors are parallel to each other, the average vector of the two is calculated directly. The average vector $n_{avg}$ is a plane vector used to estimate the current pose of the LiDAR frame. The normal vector between the two frames is computed and is used to form the initial calibration matrix $R_{init}$:

$$n = \underset{n_g \in n_G}{\arg\min}\; \cos^{-1}\!\left( \frac{n_{avg} \cdot n_g}{\| n_{avg} \| \, \| n_g \|} \right) \tag{7}$$

$$R_{init} = \frac{n \cdot n_{avg}^{T}}{n_{avg} \cdot n_{avg}^{T}} \tag{8}$$
where $n$ refers to the vector $n_g$ within $n_G$ that has the smallest angle with $n_{avg}$. Based on this initial rotation matrix $R_{init}$, optimization is performed to align the LiDAR frame with the global frame. The cost function is computed as follows:

$$\mathrm{loss}_q = \left\| R\, n_q - n_g \right\|^2 \tag{9}$$

where $\mathrm{loss}_q$ is the cost for plane $q$, and $R \in SO(3)$ denotes the rotation matrix that changes through optimization, with $R_{init}$ as its initial value. The $\mathrm{loss}_q$ is calculated from the deviation between the robust plane normal $n_q$, rotated by $R$, and $n_g$.
The rotation matrix $R$ is optimized using $\mathrm{loss}_q$ and the score $s_q$ of each plane, as given in Equation (10):

$$R_{L}^{G} = \underset{R}{\arg\min} \sum_{q=1}^{Q} (1 - s_q) \cdot \mathrm{loss}_q \tag{10}$$
The LiDAR calibration is explained in Algorithm 2.
Algorithm 2 LiDAR Calibration Algorithm
Input: $U_Q$: detected planes
Output: $R_{L}^{G}$: optimized rotation matrix
 1: Step 1: Sort and Select Planes
 2:     Sort $U_Q$ in descending order of plane score
 3:     Select the top two planes: $P_1 \leftarrow U_Q[0]$, $P_2 \leftarrow U_Q[1]$
 4: Step 2: Compute Initial Transformation $R_{init}$
 5:     Project $P_2$ onto $P_1$ and compute their mean $n_{avg}$
 6:     Compute the rotation matrix: $R_{init} = n_{avg}^{-1} n_g$
 7: Step 3: Optimize the Transformation Matrix
 8:     Initialize $R \leftarrow R_{init}$
 9:     while not converged do
10:         Compute loss: $\mathrm{loss}_P = \| n_g - R \cdot P_2 \|$
11:         Update transformation matrix: $R_{L}^{G} = \underset{R}{\arg\min} \sum_{P \in U_Q} (1 - S_P)\, \mathrm{loss}_P$
12:     end while
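The initial-rotation step (Equations (6)-(8)) can be sketched in C++/Eigen as shown below. The nearest global normal is selected as in Equation (7); the rotation is then built with Eigen's FromTwoVectors, which is used here as a convenient stand-in for the closed form in Equation (8) rather than as the authors' exact construction.

```cpp
#include <Eigen/Dense>
#include <Eigen/Geometry>
#include <cmath>
#include <limits>
#include <vector>

// Pick the global-frame normal n_g closest in angle to n_avg (Equation (7)),
// then build a rotation that maps n_avg onto it (stand-in for Equation (8)).
Eigen::Matrix3d initialRotation(const Eigen::Vector3d& n_avg) {
    const std::vector<Eigen::Vector3d> n_G = {            // Equation (6)
        Eigen::Vector3d(0, 0, 1),  Eigen::Vector3d(0, 1, 0),  Eigen::Vector3d(1, 0, 0),
        Eigen::Vector3d(0, 0, -1), Eigen::Vector3d(0, -1, 0), Eigen::Vector3d(-1, 0, 0)};

    // Global normal with the smallest angle to n_avg.
    Eigen::Vector3d best = n_G.front();
    double best_angle = std::numeric_limits<double>::max();
    for (const auto& n_g : n_G) {
        const double angle = std::acos(n_avg.normalized().dot(n_g));
        if (angle < best_angle) { best_angle = angle; best = n_g; }
    }

    // R_init: rotation taking n_avg onto the selected global normal.
    return Eigen::Quaterniond::FromTwoVectors(n_avg, best).toRotationMatrix();
}
```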

4.4. IMU Noise Removal, Downsampling, and Calibration

For IMU pose correction and calibration, a method similar to the existing approach [10] is used. The IMU measures acceleration $\hat{a}_i$ and angular velocity $\hat{\omega}_i$ at each time point $i$. The IMU measurement model is given in Equations (11) and (12):

$$\hat{a}_i = a_i + R_i^w g^w + b_a + \epsilon_a \tag{11}$$

$$\hat{\omega}_i = \omega_i + b_g + \epsilon_g \tag{12}$$

where $a_i$ and $\omega_i$ are the ground-truth linear acceleration and angular velocity of the IMU, $R_i^w$ is the matrix that transforms the gravity vector $g^w$ into the IMU frame, $b_a$ and $b_g$ denote the biases for linear acceleration and angular velocity, and $\epsilon_a$ and $\epsilon_g$ represent the Gaussian noise for acceleration and angular velocity, respectively.
Generally, the sampling rate of IMU data is higher than that of LiDAR data. Therefore, downsampling of the IMU data is performed as shown in Figure 6. Equations (13) and (14) are used to compute the downsampled acceleration $a_d$ and angular velocity $\omega_d$:

$$a_d = \left( \frac{1}{C}\sum_{i=1}^{C} \hat{a}_i^x,\; \frac{1}{C}\sum_{i=1}^{C} \hat{a}_i^y,\; \frac{1}{C}\sum_{i=1}^{C} \hat{a}_i^z \right) \tag{13}$$

$$\omega_d = \left( \frac{1}{C}\sum_{i=1}^{C} \hat{\omega}_i^x,\; \frac{1}{C}\sum_{i=1}^{C} \hat{\omega}_i^y,\; \frac{1}{C}\sum_{i=1}^{C} \hat{\omega}_i^z \right) \tag{14}$$
where $C$ denotes the number of IMU measurements accumulated until one LiDAR scan is received. In other words, downsampling is performed by averaging the IMU data accumulated until a single LiDAR scan is acquired. Subsequently, the downsampled data are processed with a Zero Phase Filter (ZPF) [14] to remove the noise components $\epsilon_a$ and $\epsilon_g$. The ZPF applies a linear filter that considers both past and future data points to perform filtering.
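A minimal sketch of the downsampling step in Equations (13) and (14), assuming the IMU samples between two consecutive LiDAR scans have been collected into a buffer; the ImuSample struct is a placeholder, not the authors' data type.

```cpp
#include <Eigen/Dense>
#include <vector>

// One raw IMU measurement: acceleration and angular velocity.
struct ImuSample {
    Eigen::Vector3d accel;  // measured acceleration
    Eigen::Vector3d gyro;   // measured angular velocity
};

// Average the C samples accumulated between two LiDAR scans (Equations (13)-(14)).
ImuSample downsample(const std::vector<ImuSample>& window) {
    ImuSample avg{Eigen::Vector3d::Zero(), Eigen::Vector3d::Zero()};
    for (const auto& s : window) {
        avg.accel += s.accel;
        avg.gyro  += s.gyro;
    }
    avg.accel /= static_cast<double>(window.size());  // a_d, Equation (13)
    avg.gyro  /= static_cast<double>(window.size());  // omega_d, Equation (14)
    return avg;
}
```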
Finally, the IMU acceleration and angular velocity data, with noise removed by the ZPF, are input into the Madgwick filter [15] to estimate the IMU pose at the current time. The Madgwick filter provides the current quaternion $q = [q_w, q_x, q_y, q_z]$, which is then used to construct the rotation matrix $R_I^G$ for calibration of the IMU pose.
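Converting the Madgwick filter's quaternion output into the rotation matrix $R_I^G$ is a standard operation; the following one-line helper with Eigen shows the idea and is not specific to the authors' implementation.

```cpp
#include <Eigen/Geometry>

// Standard conversion of a unit quaternion [q_w, q_x, q_y, q_z] into a rotation matrix.
Eigen::Matrix3d quaternionToRotation(double qw, double qx, double qy, double qz) {
    return Eigen::Quaterniond(qw, qx, qy, qz).normalized().toRotationMatrix();
}
```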

5. Results

5.1. Experimental Setup

The experiments are conducted on a PC equipped with an Intel Core™ i7-12700KF CPU and an Nvidia GeForce RTX 3080 GPU. The proposed method is implemented in an Ubuntu 20.04 environment using C++, OpenCV, Eigen, PCL (Point Cloud Library), and ROS1. In the first experiment, the accuracy of the proposed plane detection method for sensor calibration is evaluated. The evaluation metrics used are Precision, Recall, and F1-score for the detected planes. In the second experiment, the calibration results of the proposed method are compared to the existing methods, LI-init [8] and GRIL-Calib [10], using the Root Mean Squared Error (RMSE).

5.2. Datasets

The proposed method is evaluated on publicly available benchmark datasets, i.e., LiDAR-Net [16], the Hilti dataset [17], and the VECtor dataset [18]. LiDAR-Net [16] contains 3D LiDAR point clouds from various indoor environments, with detailed labeling for multiple objects and planes such as walls, ceilings, and floors. The Hilti [17] and VECtor [18] datasets are designed for SLAM and were acquired using handheld LiDAR and ground robots in diverse indoor and outdoor environments. Each dataset includes both LiDAR and IMU data. Details of the sensor models used in the acquisition of these datasets are listed in Table 3 and Table 4.
As can be seen in Table 4, while the IMUs included in both datasets contain accelerometer, gyroscope, and magnetometer sensors, this paper only utilized the accelerometer and gyroscope data.

5.3. Evaluation of Robust Plane Detection Method

This section presents the performance results of the proposed robust plane detection method.

5.3.1. Impact of Robust Plane Selection

Among all the detected planes, the robust plane detection module selects the robust planes while removing noise and discarding corners. This selection involves computing the point-to-plane distance $d_i$ between each $i$-th point in voxel $v_{w,h,c}$ and the plane $\bar{n}_{w,h,c}$. The robust plane keeps the points with $d_i < d_{thresh}$ and removes the points with $d_i$ higher than $d_{thresh}$.
A higher threshold retains more noisy points along with the robust points, while a lower threshold removes noise but also discards robust points. We performed several experiments to find the optimal value of $d_{thresh}$. Table 5 lists the impact of the threshold $d_{thresh}$ on the overall performance of the proposed method in terms of F1-score and computation time. Moreover, we also analyzed the impact of sensor noise $\sigma$ on the overall performance. Table 5 lists the results for different values of Gaussian noise: $\sigma = 0$ (no noise), $\sigma = 0.03$, and $\sigma = 0.05$.
According to the experimental results, the proposed method shows the best performance for $d_{thresh} = 0.2$. Setting $d_{thresh}$ too high, closer to 1, results in lower performance because noise and corners are not sufficiently removed. Conversely, setting $d_{thresh}$ too low, closer to 0, reduces the influence of noise but also removes robust points, which degrades overall performance. It is also noted that sensor noise affects the performance of the system, yet the highest F1-score is still achieved with $d_{thresh} = 0.2$. Therefore, this paper sets $d_{thresh}$ to 0.2.

5.3.2. Plane Refinement Performance Analysis

Once the robust planes are selected, plane refinement is performed to further remove noise. The plane score $S_{w,h,c}$ for each voxel $v_{w,h,c}$ is computed, and if the score is below a certain threshold $s_{thresh}$, the plane score for $v_{w,h,c}$ is redefined. Planes with a score $S_{w,h,c} > s_{thresh}$ are considered final robust planes. Thus, the lower the threshold, the fewer planes are redefined and the less noise is removed. On the other hand, higher threshold values, closer to 1, not only remove noise but also cause robust voxels to be redefined, distorting robust data. We performed several experiments to find the optimal value of $s_{thresh}$ for plane refinement. Table 6 lists the impact of plane refinement on the overall performance of the proposed method in terms of F1-score and computation time. Moreover, we also analyzed the impact of sensor noise $\sigma$ on the overall performance. Table 6 lists the results for different values of Gaussian noise: $\sigma = 0$ (no noise), $\sigma = 0.03$, and $\sigma = 0.05$.
It can be observed that setting the threshold $s_{thresh} = 0.2$ achieves the best performance. If $s_{thresh}$ is set too low, few or no planes are subject to refinement, which reduces overall accuracy because incorrect planes are not improved. Conversely, if $s_{thresh}$ is set too high, even correctly detected planes undergo refinement, potentially decreasing performance. It is also noted that sensor noise affects the performance of the system, yet the highest F1-score is still achieved with $s_{thresh} = 0.2$. Therefore, this paper sets $s_{thresh}$ to 0.2.

5.3.3. Comparison with Other Plane Detection Methods

The proposed robust plane detection method for initial pose calibration is evaluated and compared with other plane detection algorithms, such as RANSAC [19], Region Growing [20], and 3D-Hough Transform [21]. The experiments are conducted using room and corridor data included in the LiDAR-Net dataset. Although the dataset provides detailed labeling for planes, such as walls, ceilings, and floors, the proposed plane detection method does not classify the detected planes into these categories. Therefore, if the points of the detected plane match the labeled points of the walls, ceilings, or floors in the dataset, the True Positive (TP) count is increased. The results of the experiment are shown in Table 7, in terms of Precision, Recall, and F1-score.
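For reference, the point-level metrics reported in Table 7 follow the usual definitions; the sketch below computes Precision, Recall, and F1 from TP/FP/FN counts obtained with the labeling rule described above (a detected plane point counts as TP if it carries a wall, ceiling, or floor label in LiDAR-Net). This is generic metric code, not the authors' evaluation script.

```cpp
// Point-level Precision, Recall, and F1 from aggregate counts.
struct Prf { double precision, recall, f1; };

Prf pointLevelMetrics(long tp, long fp, long fn) {
    Prf m{};
    m.precision = static_cast<double>(tp) / static_cast<double>(tp + fp);
    m.recall    = static_cast<double>(tp) / static_cast<double>(tp + fn);
    m.f1        = 2.0 * m.precision * m.recall / (m.precision + m.recall);
    return m;
}
```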
The experimental results demonstrate that the proposed method achieves higher plane detection accuracy compared to existing methods. Specifically, the proposed method achieves a noticeable improvement of up to 4.7% and 13.3% in F1-score on the corridor and room sequences, respectively. In terms of Recall, the proposed method achieves 12.7% and 18% improvement on the corridor and room sequences, respectively. Though RANSAC [19] and the 3D-Hough Transform [21] achieved higher precision, they had lower recall, resulting in lower F1-scores. This implies that while these methods had high precision on the data they identified as planes, they struggled to detect all of the planes present in the dataset.
The proposed method also detected planes with significantly faster computational speed compared to existing methods. The proposed method demonstrated approximately 101 times and 147 times faster computation than the Region Growing method [20] in the corridor and room environments, respectively, while being 1.41 times and 2.09 times faster, respectively, than RANSAC. The qualitative comparison of plane detection on the corridor data is given in Figure 7.
For visualization, points detected as planes by the algorithms are compared with the dataset labels, and if they corresponded to a wall, ceiling, or floor, the points are colored green, yellow, and purple, respectively. It can be observed that the proposed method can accurately detect planes compared to existing methods.

5.4. Evaluation of Initial Pose Estimation Methods

In this section, calibration accuracy is evaluated using the Hilti dataset [17] and VECtor dataset [18]. The evaluation method involves calculating odometry using LiDAR and IMU calibrated to the global frame based on the detected planes, followed by computing roll and pitch from the odometry. The roll and pitch errors are then calculated against the ground truth using the Root Mean Squared Error (RMSE) and compared with the existing methods LI-init [8] and GRIL-Calib [10]. The units are in degrees, with the ground truth set to 0°.
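The RMSE used here follows the standard definition; since the ground truth is 0°, each per-frame roll (or pitch) value is itself the error. A minimal sketch, independent of any particular dataset loader:

```cpp
#include <cmath>
#include <vector>

// RMSE of per-frame roll (or pitch) values in degrees against a 0° ground truth.
double rmseDegrees(const std::vector<double>& errors_deg) {
    double sum_sq = 0.0;
    for (double e : errors_deg) sum_sq += e * e;
    return std::sqrt(sum_sq / static_cast<double>(errors_deg.size()));
}
```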
In LI-init [8], a handheld LiDAR is used, while GRIL-Calib [10] performs calibration using LiDAR mounted on a ground robot. Both of these methods are implemented using open-source software provided by the authors. Therefore, in this experiment, a comparative evaluation with existing methods is conducted using the data acquired from a ground robot in the Hilti dataset [17] and from a handheld LiDAR in the VECtor dataset [18]. Data were collected in the Basement and Corridor environments. The results of LiDAR calibration are shown in Figure 8, and the RMSE for roll and pitch are presented in Table 8.
Figure 8 shows the top view of the LiDAR data after calibrating the sensor’s pose using the rotation matrix computed by the proposed method. In the initial input data, the tilt of the sensor causes the wall to appear as multiple overlapping layers. However, after the proposed calibration process, the LiDAR data are accurately aligned, displaying the wall as a single, clean layer without any overlaps. This demonstrates that the sensor’s tilt and distortion have been effectively removed.
The proposed method did not show significant differences compared to existing methods for the Hilti dataset [17], which utilizes a ground robot. In fact, GRIL-Calib [10] demonstrated the best performance, likely due to its calibration approach, specifically designed for ground robots. However, in the case of the VECtor dataset [18], which uses a handheld LiDAR, the proposed method shows significantly lower calibration errors compared to the existing methods. This can be explained by the limitations of GRIL-Calib [10], which relies solely on ground data, resulting in reduced calibration accuracy when ground data are not detected by the LiDAR. On the other hand, LI-init [8] uses planar information for calibration but only considers the relative relationships within the LiDAR frame without considering direct alignment with the global frame, leading to a notable decrease in sensor calibration accuracy.
The proposed method, which uses the normal vectors of the real-environment planes defined in the global frame (Equation (6)) and computes their relationship with the detected planes (Equation (7)), demonstrates better calibration performance than the existing methods. Figure 9 shows the roll and pitch errors of the proposed and existing methods in the corridor environment. It is evident from the results that the proposed method outperforms the existing methods with lower calibration errors.

5.5. Evaluation of Mapping Result

This study focuses on the initial pose calibration of a moving handheld LiDAR-IMU system, rather than a fixed LiDAR. To evaluate the impact of the proposed initial pose estimation method on mapping, a comparison was made with the existing approach: mapping was performed by replacing the initial pose estimation step of LI-init, which performs both initial pose estimation and extrinsic parameter calibration for mapping, with the proposed initial pose estimation method. The results are shown in Figure 10.
Figure 10 shows the mapping results using only LI-init and the mapping results obtained by integrating LI-init with the proposed method. Since LI-init estimates the initial pose based on relative alignment between LiDAR scan data rather than with respect to the global frame, errors in initial pose estimation occur, leading to uncorrected sensor tilt and resulting in distortion in the map. However, in the map generated by integrating the proposed initial pose estimation method, it can be observed that the map is created without any distortion. Additionally, the color of the map represents the z-axis error of each point. In other words, the existing method shows significant z-axis errors due to incorrect initial pose estimation, while the map generated using the proposed method demonstrates relatively smaller z-axis errors.

6. Conclusions

This research addresses the problem of accurate initial pose estimation in handheld LiDAR-IMU systems, which are essential for creating precise maps in indoor environments. While handheld LiDAR scanners offer flexibility and portability, ensuring accurate data alignment between LiDAR and IMU sensors is critical to mitigate distortions and maintain map quality. Our proposed method leverages robust plane detection through voxel-based segmentation and a plane scoring approach, which effectively handles noise and edge disruptions, resulting in reliable plane identification even in complex indoor environments. By calibrating the initial poses based on these detected planes, the proposed method ensures low calibration errors and consistent performance, even with sensor motion. Through comparative evaluations, it is demonstrated that the proposed method outperforms existing calibration approaches in terms of accuracy and computational efficiency, making it well-suited for real-time, handheld applications. Our contributions to plane detection and calibration advance the potential for accurate localization and mapping in applications spanning architecture, engineering, and autonomous navigation in GPS-denied environments. Future work may explore further optimization for dynamic and outdoor settings, where varying lighting and environmental conditions pose additional challenges.

Author Contributions

Conceptualization, E.-S.P., S.A. and T.-H.P.; methodology, E.-S.P., S.A. and T.-H.P.; Software, E.-S.P.; Validation, E.-S.P. and S.A.; writing—original draft preparation, E.-S.P. and S.A.; writing—review and editing, E.-S.P., S.A. and T.-H.P.; visualization, E.-S.P.; supervision, S.A.; project administration, T.-H.P.; funding acquisition, T.-H.P. All authors have read and agreed to the published version of the manuscript.

Funding

This work was partly supported by the Innovative Human Resource Development for Local Intellectualization program through the Institute of Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korean government (MSIT) (IITP-2024-2020-0-01462). Included are results of a study on the “Leaders in INdustry-university Cooperation 3.0” Project, supported by the Ministry of Education and the National Research Foundation of Korea.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Zeybek, M. Indoor mapping and positioning applications of hand-held lidar simultaneous localization and mapping (slam) systems. Türk. Lidar Derg. 2021, 3, 7–16. [Google Scholar] [CrossRef]
  2. Bi, S.; Yuan, C.; Liu, C.; Cheng, J.; Wang, W.; Cai, Y. A survey of low-cost 3D laser scanning technology. Appl. Sci. 2021, 11, 3938. [Google Scholar] [CrossRef]
  3. Zhang, J.; Singh, S. LOAM: Lidar odometry and mapping in real-time. Robot. Sci. Syst. 2014, 9, 1–9. [Google Scholar]
  4. Shan, T.; Englot, B.; Meyers, D.; Wang, W.; Ratti, C.; Rus, D. LIO-SAM: Tightly-coupled Lidar Inertial Odometry via Smoothing and Mapping. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Las Vegas, NV, USA, 24 October 2020–24 January 2021; pp. 5135–5142. [Google Scholar]
  5. Xu, W.; Cai, Y.; He, D.; Lin, J.; Zhang, F. Fast-lio2: Fast direct lidar-inertial odometry. IEEE Trans. Robot. 2022, 9, 2053–2073. [Google Scholar] [CrossRef]
  6. Le Gentil, C.; Vidal-Calleja, T.; Huang, S. 3d lidar-imu calibration based on upsampled preintegrated measurements for motion distortion correction. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), Brisbane, Australia, 21–25 May 2018; pp. 2149–2155. [Google Scholar]
  7. Li, S.; Wang, L.; Li, J.; Tian, B.; Chen, L.; Li, G. 3D LiDAR/IMU calibration based on continuous-time trajectory estimation in structured environments. IEEE Access 2021, 9, 138803–138816. [Google Scholar] [CrossRef]
  8. Zhu, F.; Ren, Y.; Zhang, F. Robust real-time lidar-inertial initialization. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Kyoto, Japan, 23–27 October 2022; pp. 3948–3955. [Google Scholar]
  9. Li, S.; Li, X.; Chen, S.; Zhou, Y.; Wang, S. Two-Step LiDAR/Camera/IMU Spatial and Temporal Calibration Based on Continuous-Time Trajectory Estimation. IEEE Trans. Ind. Electron. 2024, 71, 3182–3191. [Google Scholar] [CrossRef]
  10. Kim, T.; Pak, G.; Kim, E. GRIL-Calib: Targetless Ground Robot IMU-LiDAR Extrinsic Calibration Method using Ground Plane Motion Constraints. IEEE Robot. Autom. Lett. 2024, 9, 5409–5416. [Google Scholar] [CrossRef]
  11. Das, S.; Boberg, B.; Fallon, M.; Chatterjee, S. IMU-based Online Multi-lidar Calibration. In Proceedings of the IEEE Intelligent Vehicles Symposium (IV), Jeju Island, Republic of Korea, 2–5 June 2024; pp. 3227–3234. [Google Scholar]
  12. Lv, J.; Xu, J.; Hu, K. Targetless calibration of lidar-imu system based on continuous-time batch estimation. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Las Vegas, NV, USA, 25–29 October 2020; pp. 9968–9975. [Google Scholar]
  13. Yin, S.; Xie, D.; Fu, Y.; Wang, Z.; Zhong, R. Uncontrolled Two-Step Iterative Calibration Algorithm for Lidar–IMU System. Sensors 2023, 23, 3119. [Google Scholar] [CrossRef] [PubMed]
  14. Gustafsson, F. Determining the initial states in forward-backward filtering. IEEE Trans. Signal Process. 1996, 44, 988–992. [Google Scholar] [CrossRef] [PubMed]
  15. Madgwick, S. An efficient orientation filter for inertial and inertial/magnetic sensor arrays. Sens. Arrays Rep. x-io Univ. Bristol 2010, 25, 113–118. [Google Scholar]
  16. Guo, Y.; Li, Y.; Ren, D.; Zhang, X.; Li, J.; Pu, L.; Ma, C.; Zhan, X.; Guo, J.; Wei, M.; et al. LiDAR-Net: A Real-scanned 3D Point Cloud Dataset for Indoor Scenes. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 16–22 June 2024; pp. 21989–21999. [Google Scholar]
  17. Helmberger, M.; Morin, K.; Scaramuzza, D. The hilti slam challenge dataset. IEEE Robot. Autom. Lett. 2022, 7, 7518–7525. [Google Scholar] [CrossRef]
  18. Gao, L.; Liang, Y.; Kneip, L. Vector: A versatile event-centric benchmark for multi-sensor slam. IEEE Robot. Autom. Lett. 2022, 7, 8217–8224. [Google Scholar] [CrossRef]
  19. Li, L.; Yang, F.; Zhu, H.; Li, D.; Li, Y.; Tang, L. An improved RANSAC for 3D point cloud plane segmentation based on normal distribution transformation cells. Remote Sens. 2017, 9, 433. [Google Scholar] [CrossRef]
  20. Vo, A.V.; Truong-Hong, L.; Laefer, D.F.; Bertolotto, M. Octree-based region growing for point cloud segmentation. ISPRS J. Photogramm. Remote Sens. 2015, 104, 88–100. [Google Scholar] [CrossRef]
  21. Tian, Y.; Song, W.; Chen, L.; Sung, Y.; Kwak, J.; Sun, S. Fast planar detection system using a GPU-based 3D Hough transform for LiDAR point clouds. Appl. Sci. 2020, 10, 1744. [Google Scholar] [CrossRef]
Figure 1. LiDAR-based mapping (a) with the LiDAR-IMU calibration method: error-free mapping; and (b) without the LiDAR-IMU calibration method: mapping error due to drift, highlighted by the yellow circle. The colors in each map represent the intensity of the LiDAR point cloud.
Figure 2. Overall framework of the proposed initial pose estimation method for robust LiDAR-IMU calibration. Different colors in the voxelization show the intensity of the LiDAR points in each voxel. The extracted planes are represented in yellow and green, while red points indicate noise.
Figure 3. Robust plane detection method.
Figure 4. Robust plane extraction through refinement. (a) Voxels containing edges and noise have low plane scores due to large distances and high variance, represented by red normal vectors, while those with high plane scores are represented in blue. (b) The refinement process enables the effective separation and removal of areas containing edges and noise.
Figure 5. LiDAR calibration method.
Figure 6. IMU downsampling.
Figure 7. Qualitative comparison of the proposed method with the benchmark plane detection algorithms.
Figure 8. Top view of LiDAR data. (a) LiDAR raw data before calibration. (b) LiDAR data after calibration using the proposed method.
Figure 9. Performance comparison in terms of (a) roll and (b) pitch errors in the VECtor dataset.
Figure 10. Performance comparison in terms of the mapping result using (a) LI-init and (b) LI-init + Proposed.
Table 1. A taxonomy of existing LiDAR-IMU calibration methods.

Ref. | Year | Modality | Platform | Env. | Approach | Limitations
[6] | 2018 | LiDAR+IMU | Handheld | Indoor | Scan-based | Precise calibration is not achievable due to the use of low-accuracy upsampled LiDAR points
[7] | 2021 | LiDAR+IMU | Handheld | Indoor | Pre-defined map | Requires a pre-defined map
[8] | 2022 | LiDAR+IMU | Handheld | Indoor | Scan-based | Since the first LiDAR scan is defined in the global frame, the pose in the first LiDAR scan affects the accuracy of the calibration
[9] | 2024 | LiDAR+IMU+Camera | Handheld | Indoor/Outdoor | Scan-based | The two-step calibration process involving both the camera and LiDAR requires a complex processing workflow, and the calibration accuracy of each sensor affects the accuracy of the other
[10] | 2024 | LiDAR+IMU | Vehicle | Outdoor | Ground information | Not suitable for handheld LiDAR
[11] | 2024 | Multi LiDAR+IMU | Vehicle | Indoor | IMU-based | Since the calibration is based on IMU data, it is sensitive to IMU drift and noise
[12] | 2020 | LiDAR+IMU | Handheld | Indoor/Outdoor | No initial pose estimation | High computational cost
[13] | 2023 | LiDAR+IMU | Handheld, Vehicle | Indoor/Outdoor | No initial pose estimation | High computational cost
Table 2. Notations and their explanations.

Notation | Explanation
$P$, $p_l$ | The set of LiDAR data and the $l$-th point
$V_{W,H,C}$ | The set of voxels containing $(W \times H \times C)$ elements
$v_{w,h,c}$ | The voxel located at $(w,h,c)$
$P_{w,h,c} \subset \mathbb{R}^3$, $p_{v_{w,h,c}}^i \in \mathbb{R}^3$ | The total LiDAR points in voxel $v_{w,h,c}$ at $(w,h,c)$, and the $i$-th point inside voxel $v_{w,h,c}$
$n^{(j)} \in \mathbb{R}^3$, $\bar{n} \in \mathbb{R}^3$ | The $j$-th normal vector and the mean normal vector within a voxel
$U_Q$ | The final $Q$ detected planes
$u_q$ | The $q$-th plane detected through the proposed method
$R_{init} \in SO(3)$, $n_g \in \mathbb{R}^3$, $n_{avg} \in \mathbb{R}^3$ | The initial rotation matrix, the normal vector in the global frame, and the normal vector in the LiDAR frame
$R_L^G \in SO(3)$ | The rotation matrix between the global and LiDAR frames
$R_I^G \in SO(3)$ | The rotation matrix between the global and IMU frames
$n_G \subset \mathbb{R}^3$ | The set of normal vectors of the actual planes
Table 3. LiDAR sensor details in SLAM datasets.

Dataset | Sensor | Frame Rate | Horizontal FOV | Vertical FOV | Maximum Detection Range | Detection Distance Accuracy
LiDAR-Net [16] | Leica BLK2GO | 5 Hz | 210° | 85° | 10 m | ≤5 mm
Hilti [17] | Ouster OS0-64 | 10 Hz | 360° | 90° | 50 m | ≤5 cm
VECtor [18] | Ouster OS0-128 | 20 Hz | 360° | 90° | 50 m | ≤5 cm
Table 4. IMU sensor details in SLAM datasets.

Dataset | Sensor | Frame Rate | Accelerometer | Gyroscope | Magnetometer
Hilti [17] | InvenSense ICM-20948 | 100 Hz | ✓ | ✓ | ✓
VECtor [18] | XSens MTi-30 AHRS | 200 Hz | ✓ | ✓ | ✓
Table 5. Performance evaluation of the proposed method using different values of d_thresh on the corridor sequence of LiDAR-Net.

d_thresh | F1-score (σ = 0.00) | F1-score (σ = 0.03) | F1-score (σ = 0.05) | Time (s) (σ = 0.00) | Time (s) (σ = 0.03) | Time (s) (σ = 0.05)
0.0 | 83.3 | 79.5 | 78.5 | 0.83 | 1.00 | 1.07
0.1 | 90.6 | 81.2 | 80.3 | 0.85 | 1.00 | 1.07
0.2 | 91.4 | 83.2 | 81.8 | 0.85 | 1.02 | 1.09
0.4 | 89.7 | 79.8 | 79.2 | 0.85 | 1.02 | 1.09
0.6 | 87.0 | 79.6 | 78.5 | 0.87 | 1.02 | 1.09
0.8 | 85.5 | 77.3 | 75.5 | 0.87 | 1.03 | 1.10
1.0 | 79.8 | 76.3 | 74.2 | 0.87 | 1.03 | 1.10
Table 6. Performance evaluation of the proposed method using different values of s_thresh on the corridor sequence of LiDAR-Net.

s_thresh | F1-score (σ = 0.00) | F1-score (σ = 0.03) | F1-score (σ = 0.05) | Time (s) (σ = 0.00) | Time (s) (σ = 0.03) | Time (s) (σ = 0.05)
0.0 | 87.5 | 79.2 | 78.6 | 0.85 | 1.00 | 1.07
0.1 | 91.1 | 81.2 | 79.2 | 0.86 | 1.00 | 1.07
0.2 | 91.4 | 83.2 | 81.8 | 0.85 | 1.02 | 1.08
0.3 | 91.2 | 80.8 | 79.8 | 0.85 | 1.02 | 1.08
0.4 | 90.1 | 78.5 | 78.7 | 0.85 | 1.02 | 1.08
0.5 | 89.3 | 75.8 | 72.4 | 0.86 | 1.03 | 1.09
0.6 | 86.9 | 72.3 | 69.2 | 0.86 | 1.03 | 1.09
0.7 | 79.9 | 71.2 | 60.8 | 0.86 | 1.04 | 1.09
0.8 | 78.4 | 69.5 | 45.9 | 0.86 | 1.04 | 1.10
0.9 | 56.2 | 52.2 | 30.3 | 0.87 | 1.05 | 1.11
1.0 | 43.7 | 28.9 | 24.2 | 0.87 | 1.06 | 1.13
Table 7. Comparison of the proposed method with benchmark plane detection algorithms on the LiDAR-Net dataset.

Method | Corridor Precision (%) | Corridor Recall (%) | Corridor F1-score (%) | Corridor Time (s) | Room Precision (%) | Room Recall (%) | Room F1-score (%) | Room Time (s)
RANSAC [19] | 89.9 | 72.4 | 80.2 | 2.6 | 96.9 | 57.8 | 72.4 | 4.9
3D-Hough [21] | 99.0 | 48.9 | 65.4 | 13.5 | 90.9 | 40.9 | 56.4 | 23.9
Region Growing [20] | 95.1 | 79.6 | 86.7 | 183.7 | 87.4 | 60.8 | 71.7 | 339.6
Ours | 90.5 | 92.3 | 91.4 | 0.85 | 93.9 | 78.8 | 85.7 | 2.34
Table 8. Performance comparison with SOTA methods in terms of RMSE (roll and pitch) on the Hilti and VECtor datasets.

Method | Hilti (Basement) Roll RMSE (deg) | Hilti (Basement) Pitch RMSE (deg) | VECtor (Corridor) Roll RMSE (deg) | VECtor (Corridor) Pitch RMSE (deg)
LI-init [8] | 0.73 | 0.41 | 21.8 | 26.7
GRIL-Calib [10] | 0.21 | 0.07 | 10.5 | 18.3
Proposed method | 0.41 | 0.06 | 3.8 | 1.4