A Survey on Ground Segmentation Methods for Automotive LiDAR Sensors
Figure 1. Multi-sensor perception system.
Figure 2. LiDAR working principle.
Figure 3. LiDAR processing stack.
Figure 4. Basic principle of the ground segmentation task.
Figure 5. Classification and taxonomy of existing ground segmentation methods.
Figure 6. Visual representation of an Elevation Map.
Figure 7. Elevation map techniques with overhangs.
Figure 8. Visual representation of an orthogonal distance classification.
Figure 9. Visual representation of a Polar Grid Map.
Figure 10. Hybrid Regression Method.
Figure 11. Velodyne VLP-16 side view.
Figure 12. Representation of the channel-based method.
Figure 13. Region-growing labeling based on four neighbours window.
Figure 14. Conversion of a point cloud frame to a range image representation.
Figure 15. Graph Cut segmentation method.
Figure 16. Overview of the GndNet algorithm.
Figure 17. BEV sector partition representation.
Abstract
1. Introduction
2. Automotive LiDAR Sensors
2.1. LiDAR Technology
2.1.1. Measurement Techniques
2.1.2. Imaging Techniques
2.2. LiDAR Applications
2.2.1. Object Detection and Classification
2.2.2. SLAM
2.2.3. Drivable Area Detection
3. Ground Segmentation Methods
3.1. 2.5D Grid-Based
- Elevation Maps: This technique is the most widely used in 2.5D grid representations. As depicted in Figure 6 (a top-view representation of the surrounding environment), each cell stores relevant information about all the points it contains. Elevation maps can provide advantages in terms of noise reduction when compared with other methods. However, they still face problems in vertical space representation, as they fail to model the empty space between points, e.g., overhangs and treetops. Nevertheless, due to its simplicity, several algorithms leverage this technique for ground plane segmentation. Douillard et al. [64] use a mean-based elevation map, i.e., each cell contains the average height of all its points, followed by clustering techniques to detect and remove the ground points (see the first sketch after this list). The algorithm follows three simple steps: (1) calculate the surface gradients of each cell and classify protruding objects and ground cells; (2) cluster adjacent ground cells; and (3) correct artifacts wrongly generated by the gradient computations, e.g., if a cell that was classified as an object has an average height close to its neighboring ground cells, it is changed to a ground cell. Despite presenting good performance results, the proposed solution still needs further improvements to meet real-time constraints.
- Occupancy Grid Maps: These algorithms, introduced in the 1980s by Moravec and Elfes, use fine-grained grids to model the occupied and free space in the surrounding environment [71]. The main goal of this technique is to generate a consistent metric map from noisy or incomplete sensor data. This can be achieved by measuring a cell multiple times, so that the information can be integrated using Bayes filters [72]. The method is considered highly robust and easy to implement, which is essential for autonomous driving tasks [73]. Stanley, the robot created by Stanford University that won the Defense Advanced Research Projects Agency (DARPA) challenge in 2005, used occupancy grids for obstacle detection [74]. First, the surroundings are modeled by assuming each cell is in an occupied, free, or unknown state. A cell is considered occupied if the vertical distance between the maximum and minimum height of its detected points exceeds a threshold delta; if this condition is not met, the area is considered drivable, meaning the ground is successfully detected (see the second sketch after this list). In the following editions of the DARPA Challenge, several teams also used occupancy grids to model the drivable area, which helped them finish the challenge [75,76,77]. Similarly, Himmelsbach et al. [78] combined occupancy grids with vehicle position detection mechanisms to accurately segment the ground plane for object classification tasks, achieving an object classification accuracy of around 96% on the used dataset. In turn, Luo et al. [79] use a combination of attributes, such as the average height, the standard deviation of the height, and the number of points in each cell, to compute the probability of a cell belonging to the ground plane. This work achieves a precision ratio of up to 96% with real-time features in light traffic using a Velodyne HDL-32E sensor, on a processing unit composed of an Intel i7-4800 processor running at 2.7 GHz, 16 GB of RAM, and an Nvidia GeForce GTX 1080 graphics card.
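As referenced in the Elevation Maps item above, the following is a minimal sketch of a mean-based elevation map with gradient-based cell classification, in the spirit of step (1) of [64]. It assumes the input is an N × 3 NumPy array of x, y, z coordinates centered on the vehicle; the function names, cell size, grid extent, and gradient threshold are illustrative choices, not the values of the original work.

```python
import numpy as np

def mean_elevation_map(points, cell=0.5, grid=100):
    """Rasterize an (N x 3) cloud into a 2D grid storing the mean height per cell."""
    half = grid * cell / 2.0
    ix = ((points[:, 0] + half) / cell).astype(int)
    iy = ((points[:, 1] + half) / cell).astype(int)
    ok = (ix >= 0) & (ix < grid) & (iy >= 0) & (iy < grid)
    ix, iy, z = ix[ok], iy[ok], points[ok, 2]
    sums = np.zeros((grid, grid))
    counts = np.zeros((grid, grid))
    np.add.at(sums, (ix, iy), z)     # accumulate heights per cell
    np.add.at(counts, (ix, iy), 1)   # count points per cell
    return np.where(counts > 0, sums / np.maximum(counts, 1), np.nan)

def classify_cells(height, cell=0.5, max_grad=0.3):
    """Cells whose local surface gradient exceeds a threshold are marked as
    protruding objects (1); the rest as ground (0); empty cells as -1."""
    gy, gx = np.gradient(height, cell)
    grad = np.hypot(gx, gy)          # gradient magnitude (NaN near empty cells)
    return np.where(np.isnan(height), -1, (grad > max_grad).astype(int))
```

Steps (2) and (3) of [64], clustering adjacent ground cells and correcting gradient artifacts, would then operate on the returned label grid.
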
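Likewise, the Stanley-style drivability test from the Occupancy Grid Maps item reduces to a per-cell check on the vertical spread of the returns. The sketch below assumes the same N × 3 input; `delta` and the grid geometry are illustrative, and the Bayesian multi-frame integration used in practice is omitted.

```python
import numpy as np

def drivable_cells(points, cell=0.5, grid=100, delta=0.15):
    """A cell is occupied when the vertical spread of its points exceeds
    delta; otherwise it is considered drivable ground. Unobserved cells
    remain in the 'unknown' state."""
    half = grid * cell / 2.0
    ix = ((points[:, 0] + half) / cell).astype(int)
    iy = ((points[:, 1] + half) / cell).astype(int)
    ok = (ix >= 0) & (ix < grid) & (iy >= 0) & (iy < grid)
    ix, iy, z = ix[ok], iy[ok], points[ok, 2]
    zmin = np.full((grid, grid), np.inf)
    zmax = np.full((grid, grid), -np.inf)
    np.minimum.at(zmin, (ix, iy), z)   # lowest return per cell
    np.maximum.at(zmax, (ix, iy), z)   # highest return per cell
    observed = np.isfinite(zmin)
    occupied = observed & ((zmax - zmin) > delta)
    drivable = observed & ~occupied
    return drivable, occupied
```
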
3.2. Ground Modelling
- Plane Fitting: The work of Hu et al. [80] identifies the ground plane by fitting the lowest-elevation points of each frame, using the RANdom SAmple Consensus (RANSAC) algorithm to efficiently estimate the unknown parameters of the plane. Ground segmentation is then performed based on the orthogonal distance between the points and the estimated plane (Figure 8): points whose distance to the plane exceeds a maximum point-to-plane threshold are classified as non-ground (see the first sketch after this list). According to the authors, this technique may fail in some situations, reducing the number of ground points and affecting the accuracy of the ground plane estimation. To mitigate this limitation, when the number of ground points is severely reduced, a short-term memory technique creates a forecast of the current road points based on the previous frame, which is effective on flat surfaces and allows the algorithm to achieve an effectiveness of around 81% and a precision of 72%. The evaluation was conducted on the KITTI dataset [81], with an experimental setup consisting of a Linux machine with a Pentium 3200 processor and 3 GB of RAM. Nonetheless, the ground is not always flat; thus, the assumption of a single ground plane may result in incorrect classifications. Moreover, noisy or complex data can degrade RANSAC's performance.
- Line Extraction: To overcome some limitations of Plane Fitting methods, e.g., the processing time, works based on Line Extraction have emerged [87,88]. These approaches divide the area under evaluation into several segments of a polar grid map, as depicted in Figure 9, to search for the ground plane. The work of Himmelsbach et al. [87] models the ground using local line fits, which must satisfy a set of requirements to be considered part of the ground plane. Considering the slope-intercept equation of a straight line, y = mx + b, the slope (m) cannot exceed a threshold, to exclude vertical structures, and for lower slopes, the line's intersection with the y-axis (b) must also not exceed a threshold, to exclude plateaus from the ground plane. After finding the lines that model the ground plane, the distance between the points and these lines determines whether each point is ground or non-ground (see the second sketch after this list). Estimating several local lines/planes allows for greater flexibility in the overall plane fitting; however, when small regions are used, this method can be affected by the lack of local points. This strategy divides the segmentation problem into several smaller and simpler fitting problems; thus, using data from a Velodyne HDL-64 sensor on a computer with an Intel Core 2 quad-core processor running at 2.4 GHz, it can sometimes achieve real-time performance (0.105 s to process one point cloud frame). However, this approach does not perform well in uneven, rugged terrain.
- Gaussian Process Regression (GPR)-Based: These methods can be used to estimate and interpolate (i.e., fill gaps in unknown areas) elevation information across the field. They handle small datasets well and can provide uncertainty measurements for their predictions, demonstrating their applicability to a wide range of autonomous driving features (see the third sketch after this list). These approaches require tuning several parameters, which cannot always be obtained beforehand. Therefore, Vasudevan et al. [89] propose a method in which, for each segment, a supervised learning regression problem is solved to acquire such hyper-parameters. By dividing a complex two-dimensional ground segmentation problem into a large number of one-dimensional regressions, the method achieves good performance ratios while meeting real-time requirements. However, this approach does not handle slowly rising obstacles (e.g., stair steps) well, as the separation of the ground into independent angular sectors does not preserve the general continuity of the ground elevation.
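As referenced in the Plane Fitting item above, the following is a minimal sketch of RANSAC plane fitting with orthogonal-distance classification. Unlike Hu et al. [80], it fits the whole cloud rather than only the lowest-elevation points, assumes a non-degenerate input, and uses illustrative iteration counts and thresholds.

```python
import numpy as np

def ransac_ground_plane(points, iters=100, inlier_dist=0.2, seed=0):
    """Fit a plane n.p + d = 0 with RANSAC; split ground/non-ground by the
    orthogonal point-to-plane distance."""
    rng = np.random.default_rng(seed)
    best_inliers, best_model = None, None
    for _ in range(iters):
        p1, p2, p3 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p2 - p1, p3 - p1)
        norm = np.linalg.norm(n)
        if norm < 1e-9:                    # degenerate (collinear) sample
            continue
        n = n / norm
        d = -n @ p1
        dist = np.abs(points @ n + d)      # orthogonal distances to the plane
        inliers = dist < inlier_dist
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers, best_model = inliers, (n, d)
    return points[best_inliers], points[~best_inliers], best_model
```
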
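The Line Extraction gating described for [87] can be sketched as below, simplified to a single least-squares line per azimuthal sector instead of the multiple local line fits of the original; all thresholds are illustrative.

```python
import numpy as np

def segment_sector(points_rz, m_max=0.15, b_max=0.3, line_dist=0.2):
    """Fit one line z = m*r + b to a polar sector (points as (range, z)) and
    apply the slope/intercept gates: reject steep lines (vertical structures)
    and flat lines with a large intercept (plateaus)."""
    r, z = points_rz[:, 0], points_rz[:, 1]
    m, b = np.polyfit(r, z, 1)                  # least-squares line fit
    if abs(m) > m_max or abs(b) > b_max:
        return np.zeros(len(r), dtype=bool)     # line rejected: nothing is ground
    return np.abs(z - (m * r + b)) < line_dist  # ground = points close to the line

def polar_ground_mask(points, n_sectors=180):
    """Split the cloud into azimuthal sectors and run the per-sector fit."""
    azimuth = np.arctan2(points[:, 1], points[:, 0])
    sector = ((azimuth + np.pi) / (2 * np.pi) * n_sectors).astype(int) % n_sectors
    rz = np.stack([np.hypot(points[:, 0], points[:, 1]), points[:, 2]], axis=1)
    mask = np.zeros(len(points), dtype=bool)
    for s in range(n_sectors):
        idx = np.where(sector == s)[0]
        if len(idx) >= 2:
            mask[idx] = segment_sector(rz[idx])
    return mask  # True = ground
```
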
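Finally, a compact sketch of the one-dimensional GPR idea: the ground elevation of one angular sector is regressed along the radial coordinate with a plain RBF-kernel Gaussian process. The kernel and its hyper-parameters here are fixed, illustrative values, whereas [89] learns them per segment.

```python
import numpy as np

def gp_regress_1d(r_train, z_train, r_test, length=5.0, sigma_f=1.0, sigma_n=0.1):
    """Standard GP posterior: predict ground elevation z (mean and variance)
    along the radial coordinate r of one angular sector."""
    def k(a, b):  # squared-exponential (RBF) kernel
        return sigma_f**2 * np.exp(-0.5 * (a[:, None] - b[None, :])**2 / length**2)
    K = k(r_train, r_train) + sigma_n**2 * np.eye(len(r_train))
    Ks = k(r_test, r_train)
    mean = Ks @ np.linalg.solve(K, z_train)
    var = sigma_f**2 - np.einsum('ij,ji->i', Ks, np.linalg.solve(K, Ks.T))
    return mean, var

# A point at range r with height z can then be labeled ground when z lies
# within a few standard deviations of the predicted elevation:
#   ground = abs(z - mean) < 3 * sqrt(var + sigma_n**2)
```
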
3.3. Adjacent Points and Local Features
- Channel-Based: Channel- or scan-based approaches exploit relationships between points in the point cloud that result from the scanning patterns of the LiDAR sensor [94]. Taking advantage of the vertically aligned channels of multi-layer LiDAR sensors (Figure 11), Chu et al. [95] separate each frame into vertical lines of points for analysis. The method uses a set of geometric conditions, such as heights and height gradients, to detect start ground points and threshold points: the start ground points define where the ground begins, and the threshold points mark where an obstacle starts (see the first sketch after this list).
- Region-Growing: Region-growing methods perform segmentation by expanding a starting seed into a region according to pre-defined criteria, usually based on a similarity principle between points, as depicted in Figure 13 (see the second sketch after this list). Moosmann et al. [101] use a graph-based approach in which a region-growing algorithm randomly selects the initial seed node and then merges neighboring nodes into the same region based on Local Convexity. The algorithm was evaluated with non-optimized code on a Pentium M processor running at 2.1 GHz, using several scans from an HDL-64 sensor mounted on an experimental vehicle in inner-city traffic scenes, and achieved good results in urban environments with slightly curved road surfaces. However, the average segmentation time was nearly 250 ms on the selected hardware setup, so the desired real-time performance cannot be achieved. Na et al. [102] use a similar region-growing approach, followed by a region-merging algorithm to correct overly partitioned ground regions. Additionally, Kim et al. [103] propose a weighted-graph structure followed by a voxelization method to improve the region-growing results. Techniques based on region-growing are simple to implement but are highly reliant on the selection of a good starting seed and criteria, which may penalize performance in complex urban environments [104].
- Clustering: Similarly to region-growing methods, clustering techniques divide point cloud data into smaller groups; however, unlike region-growing algorithms, some of these techniques do not rely entirely on analyzing neighboring points. Douillard et al. [90] start by creating a voxel grid and then cluster the resulting voxels based on height averages and variances; the largest cluster is considered ground and removed from the point cloud. Yang et al. [105] propose a different approach that uses the shape features of adjacent points to assemble points; since road surfaces are mainly planar objects with low elevations, the ground segments are easily identified. Nitsch et al. [106] compute each point's surface normal vector and a quality value based on a weighted average over eight neighboring points (see the third sketch after this list). Every calculated vector is then compared with the vector perpendicular to the surface below the vehicle; if these vectors are relatively similar, meaning the surface properties are identical to those of the ground below the vehicle, and the calculated quality value is high, the corresponding points are considered good ground candidates. Finally, a Euclidean clustering method groups the ground candidates together, and the large clusters are classified as the ground surface. This approach provides a relatively complete feature analysis, detecting 97% of ground points in a simulated urban scenario and 93% in a simulated rural scenario, using the automatically labeled Virtual KITTI dataset [107]. However, it may not perform as desired in regions with low point density [94]. Clustering algorithms are considered robust methods that, contrary to region-growing-based techniques, do not require a good selection of seed points/regions. However, they can be computationally expensive, especially for large datasets containing multi-dimensional features [104].
- Range Images: Typically, LiDAR sensors provide point cloud data in the spherical coordinate system, which is later converted to the Cartesian system by the corresponding manufacturer drivers. However, using the spherical coordinate system allows the detection of trigonometric relationships between adjacent points in a point cloud without the computational costs of performing these calculations in a Cartesian coordinate system [108]. Some approaches take the raw data provided by the sensor and transform it directly into a range image, a common way of projecting 3D LiDAR point cloud data onto a 2D image [109,110,111,112], as demonstrated in Figure 14 (see the fourth sketch after this list). This conversion enables segmentation algorithms to directly exploit the clearly defined neighborhood relations in the 2D image, allowing for real-time segmentation based on the angle relations formed by the LiDAR center and the points in the depth image. Bogoslavskyi et al. [109], running on a single CPU core, were able to perform the ground filtering in less than 4 ms on the KITTI dataset. Nonetheless, the creation of range images is highly sensor-dependent, and the required sampling process may affect accuracy [113].
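As referenced in the Channel-Based item above, an illustrative sketch of walking one vertical line of returns follows. It assumes the returns are sorted by range and the heights are expressed in the sensor frame (ground near −1.5 m for a roof-mounted unit); the exact conditions used by Chu et al. [95] differ, and all thresholds here are placeholders.

```python
import numpy as np

def label_vertical_line(ranges, heights, h_ground=-1.5, grad_max=0.1):
    """Label one vertical line of consecutive channel returns: find a start
    ground point, then keep labeling ground until a threshold point (steep
    height rise) marks the start of an obstacle."""
    labels = np.zeros(len(ranges), dtype=int)  # 0 = unknown, 1 = ground, 2 = obstacle
    grounded = False
    for i in range(len(ranges)):
        if i == 0:
            grad = 0.0
        else:
            dr = max(ranges[i] - ranges[i - 1], 1e-6)
            grad = (heights[i] - heights[i - 1]) / dr  # local height gradient
        if not grounded:
            # start ground point: low height and near-flat gradient
            if heights[i] < h_ground + 0.3 and abs(grad) < grad_max:
                grounded = True
                labels[i] = 1
        else:
            # threshold point: a steep rise marks the start of an obstacle
            if grad > grad_max:
                labels[i:] = 2
                break
            labels[i] = 1
    return labels
```
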
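The Region-Growing item, and the four-neighbour window of Figure 13, can be sketched as a breadth-first expansion over an elevation grid; the seed selection and the height-step criterion are deliberately simplistic compared with the Local Convexity criterion of [101].

```python
import numpy as np
from collections import deque

def grow_ground_region(height, seed, max_step=0.1):
    """4-neighbour region growing over an elevation grid: starting from a
    seed cell, merge neighbours whose height difference stays below max_step."""
    rows, cols = height.shape
    region = np.zeros_like(height, dtype=bool)
    region[seed] = True
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and not region[nr, nc]:
                if np.isfinite(height[nr, nc]) and \
                   abs(height[nr, nc] - height[r, c]) < max_step:
                    region[nr, nc] = True
                    queue.append((nr, nc))
    return region  # True = cells merged into the ground region
```
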
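For the Clustering item, a sketch of the normal-based ground-candidate test described for Nitsch et al. [106] is given below, using a plain PCA normal estimate over the k nearest neighbours; the original weighting scheme, quality value, and the final Euclidean clustering stage are omitted, and k and the angle threshold are illustrative.

```python
import numpy as np
from scipy.spatial import cKDTree

def ground_candidates(points, k=8, max_angle_deg=15.0):
    """Estimate each point's normal from its k nearest neighbours via PCA and
    keep points whose normal is close to the vertical axis; a full pipeline
    would then cluster the candidates and keep the large clusters as ground."""
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k + 1)       # first neighbour is the point itself
    normals = np.empty_like(points)
    for i, nb in enumerate(idx):
        nbh = points[nb] - points[nb].mean(axis=0)
        # the right singular vector of the smallest singular value spans the normal
        _, _, vt = np.linalg.svd(nbh, full_matrices=False)
        normals[i] = vt[-1]
    cos_up = np.abs(normals @ np.array([0.0, 0.0, 1.0]))
    return cos_up > np.cos(np.radians(max_angle_deg))
```
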
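Finally, the spherical projection behind the Range Images item reduces to a few lines; the sketch below assumes an N × 3 Cartesian cloud, and the vertical field of view shown matches a 64-layer sensor purely for illustration.

```python
import numpy as np

def to_range_image(points, h=64, w=1024, fov_up=2.0, fov_down=-24.8):
    """Project an (N x 3) cloud into an h x w range image, the common 2D
    representation used by range-image methods [109,110,111,112]."""
    r = np.linalg.norm(points, axis=1)
    yaw = np.arctan2(points[:, 1], points[:, 0])            # azimuth
    pitch = np.arcsin(points[:, 2] / np.maximum(r, 1e-9))   # elevation
    fov_up, fov_down = np.radians(fov_up), np.radians(fov_down)
    u = ((1.0 - (yaw + np.pi) / (2 * np.pi)) * w).astype(int) % w
    v = np.clip(((fov_up - pitch) / (fov_up - fov_down) * h).astype(int), 0, h - 1)
    image = np.zeros((h, w))
    image[v, u] = r   # keep the last hit per pixel; 0 marks empty pixels
    return image
```

Ground segmentation then operates on the angles formed between vertically adjacent pixels of this image, as in [109].
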
3.4. Higher Order Inference
- Markov Random Field (MRF): An MRF model can be seen as an undirected graph, where nodes represent random variables and edges represent desired local influences between pairs of nodes. In the case of LiDAR data, the points are the nodes of the graph, and the edges can be modeled using the height values (see the first sketch after this list). Guo et al. [115] and Byun et al. [116] combine an MRF with a Belief Propagation (BP) algorithm to identify the drivable area, even at large distances. Nonetheless, these approaches still face detection problems when driving on rough terrain. The work of Zhang et al. [117] improves on these approaches so that they can be used on both rough and uneven terrain. It implements cost functions to generate probabilistic ground height measurements, creating models that compensate for the loss of information due to partial occlusion by closer objects. Based on this information, combined with a BP algorithm, a multi-label Markov network performs the ground segmentation task. The proposed method shows promising results on sparse point cloud distributions, achieving false positive rates as low as 2.12% in complex off-road environments. However, the average processing time in that scenario was above 1 s on an Intel Core processor running at 3.2 GHz, which hinders its use in embedded perception systems with real-time requirements. Other works use similar approaches combined with height histograms to estimate the ground height range, followed by an MRF model to refine the labeling of ground points [118].
- Conditional Random Field (CRF): A CRF is a subset of MRF that labels sequences of nodes given a specific chain of observations, which improves the ability to capture long-range dependencies between them. Rummelhard et al. [120] propose adding spatial and temporal dependencies to a CRF to model the ground surface. Their method divides the environment into different interconnected elevation cells, which are influenced by local observations and spatio-temporal relationships. The temporal constraints are incorporated into the segmentation data using a dynamic Bayesian framework, allowing for a more accurate modeling of ground points (see the second sketch after this list). The proposed method was first tested on an Intel Xeon W3520 processor running at 2.6 GHz with 8 GB of RAM and a Quadro 2000 graphics card with 2 GB of video memory, achieving a frame processing frequency of 6.8 Hz with data from a Velodyne HDL-64E sensor; however, the authors claim that on experimental platforms such as the Tegra X1 and K1 the algorithm achieved real-time performance. Similarly, and following the approach of using CRF methods to segment data from digital cameras [121], Wang et al. [122] model ground data by representing the CRF on a 2D lattice plane and then use RANSAC on the created plane to extract the ground points. Despite presenting good segmentation results, this method requires several iterations to correctly extract the points on uneven terrain, which heavily compromises real-time performance. Additionally, these methods are computationally demanding, which makes them unsuitable for real-time scenarios unless specialized hardware acceleration is used.
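As referenced in the MRF item above, the following is a minimal pairwise MRF over an elevation grid, solved here with iterated conditional modes (ICM) rather than the belief propagation used in [115,116,117]. The unary costs, the expected ground height, and the smoothing weight are all illustrative, and a dense (gap-free) grid is assumed.

```python
import numpy as np

def icm_ground_labels(height, ground_h=0.0, sigma=0.3, beta=1.0, iters=5):
    """Pairwise MRF: the unary term prefers labeling cells near the expected
    ground height as ground, and the pairwise term encourages neighbouring
    cells to share the same label (0 = ground, 1 = obstacle)."""
    unary_g = ((height - ground_h) / sigma) ** 2   # cost of labeling "ground"
    unary_o = np.where(unary_g > 1.0, 0.0, 2.0)    # crude cost of "obstacle"
    labels = (unary_o < unary_g).astype(int)       # initialize from unaries only
    rows, cols = height.shape
    for _ in range(iters):
        for r in range(rows):
            for c in range(cols):
                nbs = [labels[rr, cc] for rr, cc in
                       ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1))
                       if 0 <= rr < rows and 0 <= cc < cols]
                cost_g = unary_g[r, c] + beta * sum(n != 0 for n in nbs)
                cost_o = unary_o[r, c] + beta * sum(n != 1 for n in nbs)
                labels[r, c] = 0 if cost_g <= cost_o else 1
    return labels
```
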
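The temporal side of the spatio-temporal model of [120] can be illustrated, in heavily simplified form, as a recursive Gaussian update of each elevation cell across frames; the noise parameters below are placeholders, and the spatial CRF coupling between cells is omitted.

```python
import numpy as np

def update_ground_elevation(mean, var, obs, obs_var=0.04, process_var=0.01):
    """One temporal step: each cell keeps a Gaussian belief (mean, var) over
    the ground height and fuses the new frame's observation recursively;
    NaN observations (unobserved cells) leave the belief unchanged."""
    var = var + process_var                 # prediction: the ground may change
    gain = var / (var + obs_var)            # per-cell Kalman gain
    new_mean = np.where(np.isnan(obs), mean, mean + gain * (obs - mean))
    new_var = np.where(np.isnan(obs), var, (1.0 - gain) * var)
    return new_mean, new_var
```
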
3.5. Learn-Based
4. Discussion
- Real-time: Within an autonomous driving scenario, it is mandatory for the perception system to process and understand the surrounding environment in real time, which means that, for a given sensor or set of sensors, the steps of the LiDAR processing stack must be performed within a known period of time so that driving decisions can be taken within a safe time frame. A high-resolution LiDAR sensor can generate millions of data points. For instance, the Velodyne VLS-128 can produce up to 9.6 M points per second in dual-return mode, with frame rates varying from 5 Hz to 10 Hz. In a typical operation, this sensor can be configured to produce, on average, 240,384 points per frame at 10 Hz (2,403,840 points per second), which means that such an amount of data must be processed in under 100 ms across all software stack layers (see the sketch after this list). Regarding the ground segmentation tasks, with a few exceptions, both 2.5D Grid-based and Ground Modelling methods can achieve real-time processing. For the algorithms based on the multi-level approach [69,70], this information could not be retrieved. Due to their complexity, most of the GPR-based methods [90,92,93] and one Plane Fitting method [80] are unable to provide the desired real-time features.
- Computational Requirements: When deployed in automotive applications, the computational requirements of ground segmentation methods are a crucial metric to consider, mainly because the perception system is often composed of embedded processing environments with constrained hardware resources. Methods based on Elevation Maps present the lowest computational requirements, as they analyze a 2D tessellated representation of the surroundings instead of the entire 3D representation. Regarding the other 2.5D Grid-based approaches, Multi-level algorithms require extra classification steps, while Occupancy Grids require data interpolation and the use of Bayes filters, which inherently increases the memory and computational needs in both cases. Likewise, GPR-based and Plane Fitting methods (based on RANSAC) rely on iterative approaches, consequently increasing the memory requirements; GPR methods additionally require complex calculations to process the point cloud, further increasing the computational needs. In contrast, Line Extraction approaches feature lower resource consumption than the remaining Ground Modelling methods, since they divide the point cloud into a polar grid map, simplifying the required computations. Concerning the Adjacent Points and Local Features methods, Channel-based and Range Image approaches are not considered very computationally intensive; however, they rely on the analysis of geometric conditions between points, which can require specialized hardware, such as a floating-point unit for trigonometric calculations. Despite their simplicity, Clustering and Region-Growing methods rely on highly iterative operations, which can translate into high memory requirements, especially for large point clouds whose points contain multiple features, e.g., the target's reflectivity. Higher Order Inference methods also feature high computational requirements due to their complex mathematical computations and respective iterative steps, e.g., the BP algorithm, which translates into high memory requirements. Finally, among all methods, Learn-based approaches demand the highest computational resources, mainly due to the extensive complex computations, significant memory utilization, and, sometimes, specialized hardware, e.g., GPUs, generally associated with CNN implementations.
- Segmentation Immunity: Segmentation immunity refers to an algorithm's susceptibility to under- and/or over-segmentation, which correspond to too coarse and too fine a segmentation, respectively. When under-segmentation occurs, points belonging to different objects are merged into the same ground segment; with over-segmentation, a single object can be represented by several clusters. In many applications, under-segmentation is considered the more severe issue, since the wrong classification of the ground plane can lead to safety issues for the autonomous vehicle. 2.5D Grid-based Elevation Map algorithms tend to suffer from both under- and over-segmentation, which can highly affect the overall algorithm's performance and accuracy, especially when the ground is significantly sloped or curbed. Ground Modelling's Line Extraction and Plane Fitting [82] methods tend to suffer slightly from under- or over-segmentation; however, the GPR-based [91] and Plane Fitting [83] methods are able to overcome under-segmentation issues. Adjacent Points and Local Features algorithms are typically susceptible to under- or over-segmentation, except for [103], which is immune to over-segmentation, while [97,105] are immune to both under- and over-segmentation.
- Performance with Rising Obstacles and Regions, Uneven Ground, and Sparse Data: In a real-world driving scenario, the ground is not flat, and rising obstacles, rising regions, and uneven ground usually represent a significant challenge to ground segmentation algorithms. Additionally, dealing with sparse point clouds can lead to performance loss or even the algorithm's inability to solve the segmentation task. Therefore, to assess the versatility and safety of a method in different situations, it is crucial to evaluate its segmentation performance with rising obstacles, sloped or rough terrain, and sparse data. Regarding 2.5D Grid-based methods, since the ground is modeled into a grid where each cell represents a small region of the ground plane, almost all of them can perform well with rising obstacles and uneven ground surfaces [67,68,69,70]; however, they can be unpredictable when dealing with sparse data. The Ground Modelling approaches based on GPR algorithms can suffer from insensitivity to slowly rising obstacles [91]: since the division of the ground into independent angular sectors does not guarantee the general continuity of the ground elevation, some obstacles, such as stair steps, may be classified as ground. Nonetheless, the other approaches, i.e., Line Extraction [88] and Plane Fitting [83,86], can handle these objects properly. Concerning uneven ground regions, some Ground Modelling approaches cannot handle them properly [80,87]; e.g., the Plane Fitting method [80] uses RANSAC to estimate the ground plane, so the assumption of a single ground plane leads to false classifications, and complex data can degrade RANSAC's performance. Nonetheless, most GPR-based algorithms [90,92,93], and one Plane Fitting method [82], can achieve good segmentation results when handling sparse data.
- Future trends of ground segmentation methods: Ground and object segmentation is an important task in autonomous driving applications, where the algorithm's performance and real-time behavior are the most critical requirements when building the perception system of the vehicle. Automotive LiDAR sensors are also becoming mainstream; thus, several algorithms and approaches to process point cloud data are constantly emerging. Given the success of learn-based approaches in terms of accuracy and performance, it is expected that future solutions will adopt CNNs to perform the ground segmentation tasks [123]. With the constant development of technology and research around these topics, the current challenges of applying CNNs to automotive LiDAR data, e.g., the lack of datasets [146], the time-consuming training phases, the requirement for powerful computing systems, and the algorithms' complexity, are slowly being mitigated. Although perception systems are including more sensors, and LiDAR devices are providing higher-resolution data at higher frame rates (which inherently increases the amount of data to be processed), it is expected that, in the near future, learn-based solutions with lower hardware requirements will perform the ground segmentation steps with reduced processing times.
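To make the real-time budget from the first item of this discussion concrete, the following lines reproduce its arithmetic for the VLS-128 configuration mentioned above.

```python
# Worked numbers from the discussion: a VLS-128 configured to output
# 240,384 points per frame at 10 Hz leaves a 100 ms budget per frame
# for the entire LiDAR processing stack.
points_per_frame = 240_384
frame_rate_hz = 10
points_per_second = points_per_frame * frame_rate_hz   # 2,403,840 points/s
budget_ms = 1000 / frame_rate_hz                       # 100.0 ms per frame
print(points_per_second, budget_ms)
```
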
5. Conclusions
Author Contributions
Funding
Conflicts of Interest
References
- Litman, T. Autonomous Vehicle Implementation Predictions; Victoria Transport Policy Institute: Victoria, BC, Canada, 2021. [Google Scholar]
- Society of Automotive Engineers (SAE). Taxonomy and Definitions for Terms Related to Driving Automation Systems for On-Road Motor Vehicles (Surface Vehicle Recommended Practice: Superseding J3016 Jun 2018); SAE International: Warrendale, PA, USA, 2021. [Google Scholar]
- Mercedes-Benz Group. First Internationally Valid System Approval for Conditionally Automated Driving. Mercedes. 2021. Available online: https://group.mercedes-benz.com/innovation/product-innovation/autonomous-driving/system-approval-for-conditionally-automated-driving.html (accessed on 5 September 2022).
- UN Regulation No. 157—Automated Lane Keeping Systems (ALKS); United Nations Economic Commission for Europe: Geneva, Switzerland, 2021; pp. 75–137.
- Goelles, T.; Schlager, B.; Muckenhuber, S. Fault Detection, Isolation, Identification and Recovery (FDIIR) Methods for Automotive Perception Sensors Including a Detailed Literature Survey for Lidar. Sensors 2020, 20, 3662. [Google Scholar] [CrossRef] [PubMed]
- Urmson, C.; Anhalt, J.; Bagnell, D.; Baker, C.; Bittner, R.; Clark, M.N.; Dolan, J.; Duggins, D.; Galatali, T.; Geyer, C.; et al. Autonomous driving in urban environments: Boss and the Urban Challenge. J. Field Robot. 2008, 25, 425–466. [Google Scholar] [CrossRef] [Green Version]
- Marti, E.; de Miguel, M.A.; Garcia, F.; Perez, J. A Review of Sensor Technologies for Perception in Automated Driving. IEEE Intell. Transp. Syst. Mag. 2019, 11, 94–108. [Google Scholar] [CrossRef] [Green Version]
- Shahian Jahromi, B.; Tulabandhula, T.; Cetin, S. Real-Time Hybrid Multi-Sensor Fusion Framework for Perception in Autonomous Vehicles. Sensors 2019, 19, 4537. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Chase, A.F.; Chase, D.Z.; Weishampel, J.F.; Drake, J.B.; Shrestha, R.L.; Slatton, K.C.; Awe, J.J.; Carter, W.E. Airborne LiDAR, archaeology, and the ancient Maya landscape at Caracol, Belize. J. Archaeol. Sci. 2011, 38, 387–398. [Google Scholar] [CrossRef]
- Chase, A.S.Z.; Chase, D.Z.; Chase, A.F. LiDAR for Archaeological Research and the Study of Historical Landscapes. In Sensing the Past: From Artifact to Historical Site; Masini, N., Soldovieri, F., Eds.; Springer International Publishing: Cham, Switzerland, 2017; pp. 89–100. [Google Scholar] [CrossRef]
- Štular, B.; Lozić, E.; Eichert, S. Airborne LiDAR-Derived Digital Elevation Model for Archaeology. Remote Sens. 2021, 13, 1855. [Google Scholar] [CrossRef]
- Jones, L.; Hobbs, P. The Application of Terrestrial LiDAR for Geohazard Mapping, Monitoring and Modelling in the British Geological Survey. Remote Sens. 2021, 13, 395. [Google Scholar] [CrossRef]
- Asner, G.P.; Mascaro, J.; Muller-Landau, H.C.; Vieilledent, G.; Vaudry, R.; Rasamoelina, M.; Hall, J.S.; van Breugel, M. A universal airborne LiDAR approach for tropical forest carbon mapping. Oecologia 2012, 168, 1147–1160. [Google Scholar] [CrossRef]
- Li, X.; Liu, C.; Wang, Z.; Xie, X.; Li, D.; Xu, L. Airborne LiDAR: State-of-the-art of system design, technology and application. Meas. Sci. Technol. 2020, 32, 032002. [Google Scholar] [CrossRef]
- Liu, X. Airborne LiDAR for DEM generation: Some critical issues. Prog. Phys. Geogr. Earth Environ. 2008, 32, 31–49. [Google Scholar] [CrossRef]
- Meng, X.; Currit, N.; Zhao, K. Ground Filtering Algorithms for Airborne LiDAR Data: A Review of Critical Issues. Remote Sens. 2010, 2, 833–860. [Google Scholar] [CrossRef] [Green Version]
- Yan, W.Y.; Shaker, A.; El-Ashmawy, N. Urban land cover classification using airborne LiDAR data: A review. Remote Sens. Environ. 2015, 158, 295–310. [Google Scholar] [CrossRef]
- Chen, Z.; Gao, B.; Devereux, B. State-of-the-Art: DTM Generation Using Airborne LIDAR Data. Sensors 2017, 17, 150. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Herzfeld, U.C.; McDonald, B.W.; Wallin, B.F.; Neumann, T.A.; Markus, T.; Brenner, A.; Field, C. Algorithm for Detection of Ground and Canopy Cover in Micropulse Photon-Counting Lidar Altimeter Data in Preparation for the ICESat-2 Mission. IEEE Trans. Geosci. Remote Sens. 2014, 52, 2109–2125. [Google Scholar] [CrossRef]
- Li, Y.; Ibanez-Guzman, J. Lidar for Autonomous Driving: The Principles, Challenges, and Trends for Automotive Lidar and Perception Systems. IEEE Signal Process. Mag. 2020, 37, 50–61. [Google Scholar] [CrossRef]
- Roriz, R.; Cabral, J.; Gomes, T. Automotive LiDAR Technology: A Survey. IEEE Trans. Intell. Transp. Syst. 2021, 23, 6282–6297. [Google Scholar] [CrossRef]
- Lopac, N.; Jurdana, I.; Brnelić, A.; Krljan, T. Application of Laser Systems for Detection and Ranging in the Modern Road Transportation and Maritime Sector. Sensors 2022, 22, 5946. [Google Scholar] [CrossRef]
- Arnold, E.; Al-Jarrah, O.Y.; Dianati, M.; Fallah, S.; Oxtoby, D.; Mouzakitis, A. A Survey on 3D Object Detection Methods for Autonomous Driving Applications. IEEE Trans. Intell. Transp. Syst. 2019, 20, 3782–3795. [Google Scholar] [CrossRef] [Green Version]
- Shi, S.; Wang, X.; Li, H. PointRCNN: 3D Object Proposal Generation and Detection From Point Cloud. In Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 16–20 June 2019; pp. 770–779. [Google Scholar]
- Wu, J.; Xu, H.; Tian, Y.; Pi, R.; Yue, R. Vehicle Detection under Adverse Weather from Roadside LiDAR Data. Sensors 2020, 20, 3433. [Google Scholar] [CrossRef]
- Li, Y.; Ma, L.; Zhong, Z.; Liu, F.; Chapman, M.A.; Cao, D.; Li, J. Deep Learning for LiDAR Point Clouds in Autonomous Driving: A Review. IEEE Trans. Neural Netw. Learn. Syst. 2021, 32, 3412–3432. [Google Scholar] [CrossRef]
- Wang, H.; Wang, B.; Liu, B.; Meng, X.; Yang, G. Pedestrian recognition and tracking using 3D LiDAR for autonomous vehicle. Robot. Auton. Syst. 2017, 88, 71–78. [Google Scholar] [CrossRef]
- Peng, X.; Shan, J. Detection and Tracking of Pedestrians Using Doppler LiDAR. Remote Sens. 2021, 13, 2952. [Google Scholar] [CrossRef]
- Chen, T.; Dai, B.; Liu, D.; Zhang, B.; Liu, Q. 3D LIDAR-based ground segmentation. In Proceedings of the The First Asian Conference on Pattern Recognition, Beijing, China, 28 November 2011; pp. 446–450. [Google Scholar] [CrossRef]
- Karlsson, R.; Wong, D.R.; Kawabata, K.; Thompson, S.; Sakai, N. Probabilistic Rainfall Estimation from Automotive Lidar. In Proceedings of the 2022 IEEE Intelligent Vehicles Symposium (IV), Aachen, Germany, 4–9 June 2022; pp. 37–44. [Google Scholar] [CrossRef]
- Kim, G.; Eom, J.; Park, Y. An Experiment of Mutual Interference between Automotive LIDAR Scanners. In Proceedings of the 2015 12th International Conference on Information Technology—New Generations, Las Vegas, NV, USA, 13–15 April 2015; pp. 680–685. [Google Scholar]
- Hwang, I.P.; Yun, S.J.; Lee, C.H. Mutual interferences in frequency-modulated continuous-wave (FMCW) LiDARs. Optik 2020, 220, 165109. [Google Scholar] [CrossRef]
- Hwang, I.P.; Yun, S.j.; Lee, C.H. Study on the Frequency-Modulated Continuous-Wave LiDAR Mutual Interference. In Proceedings of the 2019 IEEE 19th International Conference Communication Technology (ICCT), Xi’an, China, 16–19 October 2019; pp. 1053–1056. [Google Scholar]
- Wallace, A.M.; Halimi, A.; Buller, G.S. Full Waveform LiDAR for Adverse Weather Conditions. IEEE Trans. Veh. Technol. 2020, 69, 7064–7077. [Google Scholar] [CrossRef]
- Goodin, C.; Carruth, D.; Doude, M.; Hudson, C. Predicting the Influence of Rain on LIDAR in ADAS. Electronics 2019, 8, 89. [Google Scholar] [CrossRef] [Green Version]
- Heinzler, R.; Schindler, P.; Seekircher, J.; Ritter, W.; Stork, W. Weather Influence and Classification with Automotive Lidar Sensors. In Proceedings of the 2019 IEEE Intelligent Vehicles Symposium (IV), Paris, France, 9–12 June 2019; pp. 1527–1534. [Google Scholar] [CrossRef] [Green Version]
- Linnhoff, C.; Hofrichter, K.; Elster, L.; Rosenberger, P.; Winner, H. Measuring the Influence of Environmental Conditions on Automotive Lidar Sensors. Sensors 2022, 22, 5266. [Google Scholar] [CrossRef] [PubMed]
- Roriz, R.; Campos, A.; Pinto, S.; Gomes, T. DIOR: A Hardware-Assisted Weather Denoising Solution for LiDAR Point Clouds. IEEE Sens. J. 2022, 22, 1621–1628. [Google Scholar] [CrossRef]
- Cunha, L.; Roriz, R.; Pinto, S.; Gomes, T. Hardware-Accelerated Data Decoding and Reconstruction for Automotive LiDAR Sensors. IEEE Trans. Veh. Technol. 2022, 1–10. [Google Scholar] [CrossRef]
- Cao, C.; Preda, M.; Zaharia, T. 3D Point Cloud Compression: A Survey. In Proceedings of the 24th International Conference on 3D Web Technology, Los Angeles, CA, USA, 26–28 July 2019; Association for Computing Machinery: New York, NY, USA, 2019; pp. 1–9. [Google Scholar] [CrossRef] [Green Version]
- Maksymova, I.; Steger, C.; Druml, N. Review of LiDAR Sensor Data Acquisition and Compression for Automotive Applications. Proceedings 2018, 2, 852. [Google Scholar] [CrossRef] [Green Version]
- Bar Hillel, A.; Lerner, R.; Levi, D.; Raz, G. Recent progress in road and lane detection: A survey. Mach. Vis. Appl. 2014, 25, 727–745. [Google Scholar] [CrossRef]
- Fernandes, D.; Silva, A.; Névoa, R.; Simões, C.; Gonzalez, D.; Guevara, M.; Novais, P.; Monteiro, J.; Melo-Pinto, P. Point-cloud based 3D object detection and classification methods for self-driving applications: A survey and taxonomy. Inf. Fusion 2021, 68, 161–191. [Google Scholar] [CrossRef]
- Zimmer, W.; Ercelik, E.; Zhou, X.; Ortiz, X.J.D.; Knoll, A. A Survey of Robust 3D Object Detection Methods in Point Clouds. arXiv 2022, arXiv:2204.00106. [Google Scholar] [CrossRef]
- Ma, X.; Ouyang, W.; Simonelli, A.; Ricci, E. 3D Object Detection from Images for Autonomous Driving: A Survey. arXiv 2022, arXiv:2202.02980. [Google Scholar] [CrossRef]
- Thakurdesai, H.M.; Aghav, J.V. Autonomous Cars: Technical Challenges and a Solution to Blind Spot. In Proceedings of the Advances in Computational Intelligence and Communication Technology, Udaipur, India, 10–12 December 2021; Gao, X.Z., Tiwari, S., Trivedi, M.C., Mishra, K.K., Eds.; Springer: Singapore, 2021; pp. 533–547. [Google Scholar]
- Turcian, D.; Dolga, V.; Turcian, D.; Moldovan, C. Fusion Sensors Experiment for Active Cruise Control. In Proceedings of the Joint International Conference of the International Conference on Mechanisms and Mechanical Transmissions and the International Conference on Robotics, Timișoara, Romania, 14–16 October 2020; Springer: Berlin/Heidelberg, Germany, 2020; pp. 432–443. [Google Scholar]
- Ragesh, N.; Rajesh, R. Pedestrian detection in automotive safety: Understanding state-of-the-art. IEEE Access 2019, 7, 47864–47890. [Google Scholar] [CrossRef]
- Baharuddin, M.; Khamis, N.; Kassim, K.A.; Mansor, M. Autonomous Emergency Brake (AEB) for pedestrian for ASEAN NCAP safety rating consideration: A review. J. Soc. Automot. Eng. Malays. 2019, 3, 63–73. [Google Scholar] [CrossRef]
- Ren, H.; Haipeng, F. Research and development of autonomous emergency brake (AEB) technology. J. Automot. Saf. Energy 2019, 10, 1. [Google Scholar]
- Bialer, O.; Jonas, A.; Tirer, T. Super Resolution Wide Aperture Automotive Radar. IEEE Sens. J. 2021, 21, 17846–17858. [Google Scholar] [CrossRef]
- Schulte-Tigges, J.; Förster, M.; Nikolovski, G.; Reke, M.; Ferrein, A.; Kaszner, D.; Matheis, D.; Walter, T. Benchmarking of Various LiDAR Sensors for Use in Self-Driving Vehicles in Real-World Environments. Sensors 2022, 22, 7146. [Google Scholar] [CrossRef]
- Zhao, F.; Jiang, H.; Liu, Z. Recent development of automotive LiDAR technology, industry and trends. In Proceedings of the Eleventh International Conference on Digital Image Processing (ICDIP 2019), Guangzhou, China, 10–13 May 2019; SPIE: Bellingham, WA, USA, 2019; Volume 11179, pp. 1132–1139. [Google Scholar]
- Royo, S.; Ballesta-Garcia, M. An overview of lidar imaging systems for autonomous vehicles. Appl. Sci. 2019, 9, 4093. [Google Scholar] [CrossRef] [Green Version]
- Warren, M.E. Automotive LIDAR technology. In Proceedings of the 2019 Symposium on VLSI Circuits, Kyoto, Japan, 9–14 June 2019; IEEE: Piscatway, NJ, USA, 2019; pp. C254–C255. [Google Scholar]
- Yoo, H.W.; Druml, N.; Brunner, D.; Schwarzl, C.; Thurner, T.; Hennecke, M.; Schitter, G. MEMS-based lidar for autonomous driving. e i Elektrotechnik Inf. 2018, 135, 408–415. [Google Scholar] [CrossRef] [Green Version]
- Hsu, C.P.; Li, B.; Solano-Rivas, B.; Gohil, A.R.; Chan, P.H.; Moore, A.D.; Donzella, V. A review and perspective on optical phased array for automotive LiDAR. IEEE J. Sel. Top. Quantum Electron. 2020, 27, 1–16. [Google Scholar] [CrossRef]
- Hu, J.; Liu, B.; Ma, R.; Liu, M.; Zhu, Z. A 32 × 32-Pixel Flash LiDAR Sensor With Noise Filtering for High-Background Noise Applications. IEEE Trans. Circuits Syst. I Regul. Pap. 2021, 69, 645–656. [Google Scholar] [CrossRef]
- Jung, M.; Kim, D.Y.; Kim, S. A System Architecture of a Fusion System for Multiple LiDARs Image Processing. Appl. Sci. 2022, 12, 9421. [Google Scholar] [CrossRef]
- Zhang, J.; Singh, S. LOAM: Lidar Odometry and Mapping in Real-time. In Proceedings of the Robotics: Science and Systems, Berkeley, CA, USA, 12–16 July 2014. [Google Scholar] [CrossRef]
- Jung, J.; Bae, S.H. Real-Time Road Lane Detection in Urban Areas Using LiDAR Data. Electronics 2018, 7, 276. [Google Scholar] [CrossRef] [Green Version]
- Rawashdeh, N.A.; Bos, J.P.; Abu-Alrub, N.J. Camera–Lidar sensor fusion for drivable area detection in winter weather using convolutional neural networks. Opt. Eng. 2022, 62, 031202. [Google Scholar] [CrossRef]
- Kato, S.; Tokunaga, S.; Maruyama, Y.; Maeda, S.; Hirabayashi, M.; Kitsukawa, Y.; Monrroy, A.; Ando, T.; Fujii, Y.; Azumi, T. Autoware on Board: Enabling Autonomous Vehicles with Embedded Systems. In Proceedings of the 2018 ACM/IEEE 9th International Conference on Cyber-Physical Systems (ICCPS), Porto, Portugal, 11–13 April 2018; pp. 287–296. [Google Scholar] [CrossRef]
- Douillard, B.; Underwood, J.; Melkumyan, N.; Singh, S.; Vasudevan, S.; Brunner, C.; Quadros, A. Hybrid elevation maps: 3D surface models for segmentation. In Proceedings of the 2010 IEEE/RSJ International Conference on Intelligent Robots and Systems, Taipei, Taiwan, 18–22 October 2010; pp. 1532–1538. [Google Scholar] [CrossRef] [Green Version]
- Asvadi, A.; Peixoto, P.; Nunes, U. Detection and Tracking of Moving Objects Using 2.5D Motion Grids. In Proceedings of the 2015 IEEE 18th International Conference on Intelligent Transportation Systems, Gran Canaria, Spain, 1–15 September 2015; pp. 788–793. [Google Scholar] [CrossRef]
- Li, Q.; Zhang, L.; Mao, Q.; Zou, Q.; Zhang, P.; Feng, S.; Ochieng, W. Motion Field Estimation for a Dynamic Scene Using a 3D LiDAR. Sensors 2014, 14, 16672–16691. [Google Scholar] [CrossRef] [Green Version]
- Meng, X.; Cao, Z.; Liang, S.; Pang, L.; Wang, S.; Zhou, C. A terrain description method for traversability analysis based on elevation grid map. Int. J. Adv. Robot. Syst. 2018, 15, 1–12. [Google Scholar] [CrossRef] [Green Version]
- Tanaka, Y.; Ji, Y.; Yamashita, A.; Asama, H. Fuzzy based traversability analysis for a mobile robot on rough terrain. In Proceedings of the 2015 IEEE International Conference on Robotics and Automation (ICRA), Seattle, WA, USA, 26–30 May 2015; pp. 3965–3970. [Google Scholar] [CrossRef]
- Pfaff, P.; Burgard, W. An efficient extension of elevation maps for outdoor terrain mapping. In Proceedings of the International Conference on Field and Service Robotics (FSR), Port Douglas, QLD, Australia, 29–31 July 2005; pp. 165–176. [Google Scholar]
- Triebel, R.; Pfaff, P.; Burgard, W. Multi-Level Surface Maps for Outdoor Terrain Mapping and Loop Closing. In Proceedings of the 2006 IEEE/RSJ International Conference on Intelligent Robots and Systems, Beijing, China, 9–15 October 2006; pp. 2276–2282. [Google Scholar] [CrossRef]
- Moravec, H.; Elfes, A. High resolution maps from wide angle sonar. In Proceedings of the 1985 IEEE International Conference on Robotics and Automation, St. Louis, MO, USA, 25–28 March 1985; Volume 2, pp. 116–121. [Google Scholar] [CrossRef]
- Siciliano, B.; Khatib, O. Springer Handbook of Robotics; Springer: Berlin/Heidelberg, Germany, 2007. [Google Scholar]
- Thrun, S. Robotic Mapping: A Survey. Science 2002, 298, 1. [Google Scholar]
- Thrun, S.; Montemerlo, M.; Dahlkamp, H.; Stavens, D.; Aron, A.; Diebel, J.; Fong, P.; Gale, J.; Halpenny, M.; Hoffmann, G.; et al. Stanley: The robot that won the DARPA Grand Challenge. J. Field Robot. 2006, 23, 661–692. [Google Scholar] [CrossRef]
- Ferguson, D.; Darms, M.; Urmson, C.; Kolski, S. Detection, prediction, and avoidance of dynamic obstacles in urban environments. In Proceedings of the 2008 IEEE Intelligent Vehicles Symposium, Eindhoven, The Netherlands, 4–6 June 2008; pp. 1149–1154. [Google Scholar] [CrossRef]
- Kammel, S.; Ziegler, J.; Pitzer, B.; Werling, M.; Gindele, T.; Jagzent, D.; Schöder, J.; Thuy, M.; Goebl, M.; von Hundelshausen, F.; et al. Team AnnieWAY’s Autonomous System for the DARPA Urban Challenge 2007. In The DARPA Urban Challenge: Autonomous Vehicles in City Traffic; Springer: Berlin/Heidelberg, Germany, 2009; pp. 359–391. [Google Scholar] [CrossRef] [Green Version]
- Montemerlo, M.; Becker, J.; Bhat, S.; Dahlkamp, H.; Dolgov, D.; Ettinger, S.; Haehnel, D.; Hilden, T.; Hoffmann, G.; Huhnke, B.; et al. Junior: The Stanford Entry in the Urban Challenge. In The DARPA Urban Challenge: Autonomous Vehicles in City Traffic; Springer: Berlin/Heidelberg, Germany, 2009; pp. 91–123. [Google Scholar] [CrossRef] [Green Version]
- Himmelsbach, M.; Luettel, T.; Wuensche, H.J. Real-time object classification in 3D point clouds using point feature histograms. In Proceedings of the 2009 IEEE/RSJ International Conference on Intelligent Robots and Systems, St. Louis, MO, USA, 11–15 October 2009; pp. 994–1000. [Google Scholar] [CrossRef]
- Luo, Z.; Mohrenschildt, M.V.; Habibi, S. A Probability Occupancy Grid Based Approach for Real-Time LiDAR Ground Segmentation. IEEE Trans. Intell. Transp. Syst. 2020, 21, 998–1010. [Google Scholar] [CrossRef]
- Hu, X.; Rodríguez, F.S.A.; Gepperth, A. A multi-modal system for road detection and segmentation. In Proceedings of the 2014 IEEE Intelligent Vehicles Symposium Proceedings, Dearborn, MI, USA, 8–11 June 2014; pp. 1365–1370. [Google Scholar] [CrossRef] [Green Version]
- Geiger, A.; Lenz, P.; Urtasun, R. Are we ready for autonomous driving? The KITTI vision benchmark suite. In Proceedings of the 2012 IEEE Conference on Computer Vision and Pattern Recognition, Providence, RI, USA, 16–21 June 2012; pp. 3354–3361. [Google Scholar] [CrossRef]
- Josyula, A.; Anand, B.; Rajalakshmi, P. Fast object segmentation pipeline for point clouds using robot operating system. In Proceedings of the 2019 IEEE 5th World Forum on Internet of Things (WF-IoT), Limerick, Ireland, 15–18 April 2019; pp. 915–919. [Google Scholar]
- Lim, H.; Oh, M.; Myung, H. Patchwork: Concentric Zone-Based Region-Wise Ground Segmentation With Ground Likelihood Estimation Using a 3D LiDAR Sensor. IEEE Robot. Autom. Lett. 2021, 6, 6458–6465. [Google Scholar] [CrossRef]
- Zhang, Y.; Wang, J.; Wang, X.; Dolan, J.M. Road-segmentation-based curb detection method for self-driving via a 3D-LiDAR sensor. IEEE Trans. Intell. Transp. Syst. 2018, 19, 3981–3991. [Google Scholar] [CrossRef]
- Sun, P.; Zhao, X.; Xu, Z.; Wang, R.; Min, H. A 3D LiDAR Data-Based Dedicated Road Boundary Detection Algorithm for Autonomous Vehicles. IEEE Access 2019, 7, 29623–29638. [Google Scholar] [CrossRef]
- Anand, B.; Senapati, M.; Barsaiyan, V.; Rajalakshmi, P. LiDAR-INS/GNSS-Based Real-Time Ground Removal, Segmentation, and Georeferencing Framework for Smart Transportation. IEEE Trans. Instrum. Meas. 2021, 70, 1–11. [Google Scholar] [CrossRef]
- Himmelsbach, M.; Hundelshausen, F.v.; Wuensche, H.J. Fast segmentation of 3D point clouds for ground vehicles. In Proceedings of the 2010 IEEE Intelligent Vehicles Symposium, La Jolla, CA, USA, 21–24 June 2010; pp. 560–565. [Google Scholar] [CrossRef]
- Stamos, I.; Hadjiliadis, O.; Zhang, H.; Flynn, T. Online algorithms for classification of urban objects in 3d point clouds. In Proceedings of the 2012 Second International Conference on 3D Imaging, Modeling, Processing, Visualization & Transmission, Zurich, Switzerland, 13–15 October 2012; pp. 332–339. [Google Scholar]
- Vasudevan, S.; Ramos, F.; Nettleton, E.; Durrant-Whyte, H.; Blair, A. Gaussian Process modeling of large scale terrain. In Proceedings of the 2009 IEEE International Conference on Robotics and Automation, Kobe, Japan, 12–17 May 2009; pp. 1047–1053. [Google Scholar] [CrossRef]
- Douillard, B.; Underwood, J.; Kuntz, N.; Vlaskine, V.; Quadros, A.; Morton, P.; Frenkel, A. On the segmentation of 3D LIDAR point clouds. In Proceedings of the 2011 IEEE International Conference on Robotics and Automation, Shanghai, China, 9–13 May 2011; pp. 2798–2805. [Google Scholar] [CrossRef]
- Chen, T.; Dai, B.; Wang, R.; Liu, D. Gaussian-Process-Based Real-Time Ground Segmentation for Autonomous Land Vehicles. J. Intell. Robot. Syst. 2013, 76, 563–582. [Google Scholar] [CrossRef]
- Lang, T.; Plagemann, C.; Burgard, W. Adaptive Non-Stationary Kernel Regression for Terrain Modeling. In Proceedings of the Robotics: Science and Systems, Atlanta, GA, USA, 27–30 June 2007; Volume 6. [Google Scholar]
- Liu, K.; Wang, W.; Tharmarasa, R.; Wang, J.; Zuo, Y. Ground Surface Filtering of 3D Point Clouds Based on Hybrid Regression Technique. IEEE Access 2019, 7, 23270–23284. [Google Scholar] [CrossRef]
- Chu, P.; Cho, S.; Sim, S.; Kwak, K.; Cho, K.; Jiménez, V.; Godoy, J.; Artuñedo, A.; Villagra, J. Ground Segmentation Algorithm for Sloped Terrain and Sparse LiDAR Point Cloud. IEEE Access 2021, 9, 132914–132927. [Google Scholar] [CrossRef]
- Chu, P.; Cho, S.; Sim, S.; Kwak, K.; Cho, K. A Fast Ground Segmentation Method for 3D Point Cloud. J. Inf. Process. Syst. 2017, 13, 491–499. [Google Scholar] [CrossRef] [Green Version]
- Leng, Z.; Li, S.; Li, X.; Gao, B. An Improved Fast Ground Segmentation Algorithm for 3D Point Cloud. In Proceedings of the 2020 Chinese Control And Decision Conference (CCDC), Hefei, China, 22–24 August 2020; pp. 5016–5020. [Google Scholar] [CrossRef]
- Chu, P.; Cho, S.; Park, J.; Fong, S.; Cho, K. Enhanced ground segmentation method for Lidar point clouds in human-centric autonomous robot systems. Hum.-Centric Comput. Inf. Sci. 2019, 9, 17. [Google Scholar] [CrossRef] [Green Version]
- Rieken, J.; Matthaei, R.; Maurer, M. Benefits of using explicit ground-plane information for grid-based urban environment modeling. In Proceedings of the 2015 18th International Conference on Information Fusion (Fusion), Washington, DC, USA, 6–9 July 2015; pp. 2049–2056. [Google Scholar]
- Cheng, J.; Xiang, Z.; Cao, T.; Liu, J. Robust vehicle detection using 3D Lidar under complex urban environment. In Proceedings of the 2014 IEEE International Conference on Robotics and Automation (ICRA), Hong Kong, China, 31 May–7 June 2014; pp. 691–696. [Google Scholar] [CrossRef]
- Cheng, Z.; Ren, G.; Zhang, Y. Ground Segmentation Algorithm Based on 3D Lidar Point Cloud. In Proceedings of the 2018 International Conference on Mechanical, Electrical, Electronic Engineering & Science (MEEES 2018), Chongqing, China, 26–27 May 2018; Atlantis Press: Paris, France, 2018; pp. 16–21. [Google Scholar] [CrossRef] [Green Version]
- Moosmann, F.; Pink, O.; Stiller, C. Segmentation of 3D lidar data in non-flat urban environments using a local convexity criterion. In Proceedings of the 2009 IEEE Intelligent Vehicles Symposium, Xi’an, China, 3–5 June 2009; pp. 215–220. [Google Scholar] [CrossRef] [Green Version]
- Na, K.; Byun, J.; Roh, M.; Seo, B. The ground segmentation of 3D LIDAR point cloud with the optimized region merging. In Proceedings of the 2013 International Conference on Connected Vehicles and Expo (ICCVE), Las Vegas, NV, USA, 2–6 December 2013; pp. 445–450. [Google Scholar] [CrossRef]
- Kim, J.S.; Park, J.H. Weighted-graph-based supervoxel segmentation of 3D point clouds in complex urban environment. Electron. Lett. 2015, 51, 1789–1791. [Google Scholar] [CrossRef]
- Vo, A.V.; Truong-Hong, L.; Laefer, D.F.; Bertolotto, M. Octree-based region growing for point cloud segmentation. ISPRS J. Photogramm. Remote Sens. 2015, 104, 88–100. [Google Scholar] [CrossRef]
- Yang, B.; Dong, Z. A shape-based segmentation method for mobile laser scanning point clouds. ISPRS J. Photogramm. Remote Sens. 2013, 81, 19–30. [Google Scholar] [CrossRef]
- Nitsch, J.; Aguilar, J.; Nieto, J.; Siegwart, R.; Schmidt, M.; Cadena, C. 3D Ground Point Classification for Automotive Scenarios. In Proceedings of the 2018 21st International Conference on Intelligent Transportation Systems (ITSC), Maui, HI, USA, 4–7 November 2018; pp. 2603–2608. [Google Scholar] [CrossRef]
- Gaidon, A.; Wang, Q.; Cabon, Y.; Vig, E. VirtualWorlds as Proxy for Multi-object Tracking Analysis. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 4340–4349. [Google Scholar] [CrossRef] [Green Version]
- Yin, H.; Yang, X.; He, C. Spherical Coordinates Based Methods of Ground Extraction and Objects Segmentation Using 3-D LiDAR Sensor. IEEE Intell. Transp. Syst. Mag. 2016, 8, 61–68. [Google Scholar] [CrossRef]
- Bogoslavskyi, I.; Stachniss, C. Fast range image-based segmentation of sparse 3D laser scans for online operation. In Proceedings of the 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Daejeon, Korea, 9–14 October 2016; pp. 163–169. [Google Scholar] [CrossRef]
- Bogoslavskyi, I.; Stachniss, C. Efficient Online Segmentation for Sparse 3D Laser Scans. PFG J. Photogramm. Remote Sens. Geoinf. Sci. 2017, 85, 41–52. [Google Scholar] [CrossRef]
- Moosmann, F. Interlacing Self-Localization, Moving Object Tracking and Mapping for 3d Range Sensors; KIT Scientific Publishing: Karlsruhe, Germany, 2014; Volume 24. [Google Scholar]
- Hasecke, F.; Hahn, L.; Kummert, A. FLIC: Fast Lidar Image Clustering. arXiv 2020, arXiv:2003.00575. [Google Scholar] [CrossRef]
- Wu, T.; Fu, H.; Liu, B.; Xue, H.; Ren, R.; Tu, Z. Detailed Analysis on Generating the Range Image for LiDAR Point Cloud Processing. Electronics 2021, 10, 1224. [Google Scholar] [CrossRef]
- Zhang, J.; Djolonga, J.; Krause, A. Higher-Order Inference for Multi-class Log-Supermodular Models. In Proceedings of the 2015 IEEE International Conference on Computer Vision (ICCV), Santiago, Chile, 7–13 December 2015; pp. 1859–1867. [Google Scholar] [CrossRef] [Green Version]
- Guo, C.; Sato, W.; Han, L.; Mita, S.; McAllester, D. Graph-based 2D road representation of 3D point clouds for intelligent vehicles. In Proceedings of the 2011 IEEE Intelligent Vehicles Symposium (IV), Baden-Baden, Germany, 5–9 June 2011; pp. 715–721. [Google Scholar] [CrossRef]
- Byun, J.; Na, K.I.; Seo, B.S.; Roh, M. Drivable Road Detection with 3D Point Clouds Based on the MRF for Intelligent Vehicle. Springer Tracts Adv. Robot. 2015, 105, 49–60. [Google Scholar] [CrossRef]
- Zhang, M.; Morris, D.D.; Fu, R. Ground Segmentation Based on Loopy Belief Propagation for Sparse 3D Point Clouds. In Proceedings of the 2015 International Conference on 3D Vision, Lyon, France, 19–22 October 2015; pp. 615–622. [Google Scholar] [CrossRef]
- Song, W.; Cho, K.; Um, K.; Won, C.S.; Sim, S. Intuitive terrain reconstruction using height observation-based ground segmentation and 3D object boundary estimation. Sensors 2012, 12, 17186–17207. [Google Scholar] [CrossRef]
- Huang, W.; Liang, H.; Lin, L.; Wang, Z.; Wang, S.; Yu, B.; Niu, R. A Fast Point Cloud Ground Segmentation Approach Based on Coarse-To-Fine Markov Random Field. IEEE Trans. Intell. Transp. Syst. 2022, 23, 7841–7854. [Google Scholar] [CrossRef]
- Rummelhard, L.; Paigwar, A.; Negre, A.; Laugier, C. Ground estimation and point cloud segmentation using SpatioTemporal Conditional Random Field. In Proceedings of the 2017 IEEE Intelligent Vehicles Symposium (IV), Los Angeles, CA, USA, 11–14 June 2017; pp. 1105–1110.
- Wang, Y.; Ji, Q. A dynamic conditional random field model for object segmentation in image sequences. In Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05), San Diego, CA, USA, 20–25 June 2005; Volume 1, pp. 264–270.
- Wang, S.; Huang, H.; Liu, M. Simultaneous clustering classification and tracking on point clouds using Bayesian filter. In Proceedings of the 2017 IEEE International Conference on Robotics and Biomimetics (ROBIO), Macau, Macao, 5–8 December 2017; pp. 2521–2526.
- Guo, Y.; Wang, H.; Hu, Q.; Liu, H.; Liu, L.; Bennamoun, M. Deep Learning for 3D Point Clouds: A Survey. IEEE Trans. Pattern Anal. Mach. Intell. 2020, 43, 1.
- Pomerleau, D. ALVINN: An Autonomous Land Vehicle In a Neural Network. In Proceedings of the Neural Information Processing Systems (NeurIPS), Denver, CO, USA, 27–30 November 1989; Touretzky, D., Ed.; Morgan Kaufmann: Burlington, MA, USA, 1989; pp. 305–313.
- Charles, R.Q.; Su, H.; Kaichun, M.; Guibas, L.J. PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 June 2017; pp. 77–85.
- Qi, C.R.; Yi, L.; Su, H.; Guibas, L.J. PointNet++: Deep hierarchical feature learning on point sets in a metric space. Adv. Neural Inf. Process. Syst. 2017, 30, 5105–5114.
- Hua, B.; Tran, M.; Yeung, S. Pointwise Convolutional Neural Networks. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA, 18–23 June 2018; pp. 984–993.
- Landrieu, L.; Simonovsky, M. Large-Scale Point Cloud Semantic Segmentation with Superpoint Graphs. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 4558–4567.
- Varney, N.; Asari, V.K. Pyramid Point: A Multi-Level Focusing Network for Revisiting Feature Layers. IEEE Geosci. Remote Sens. Lett. 2022, 1.
- Paigwar, A.; Erkent, O.; Sierra-Gonzalez, D.; Laugier, C. GndNet: Fast Ground Plane Estimation and Point Cloud Segmentation for Autonomous Vehicles. In Proceedings of the 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Las Vegas, NV, USA, 24 October 2020–24 January 2021; pp. 2150–2156.
- Behley, J.; Garbade, M.; Milioto, A.; Quenzel, J.; Behnke, S.; Gall, J.; Stachniss, C. Towards 3D LiDAR-based semantic scene understanding of 3D point cloud sequences: The SemanticKITTI Dataset. Int. J. Robot. Res. 2021, 40, 959–967.
- He, D.; Abid, F.; Kim, Y.M.; Kim, J.H. SectorGSnet: Sector Learning for Efficient Ground Segmentation of Outdoor LiDAR Point Clouds. IEEE Access 2022, 10, 11938–11946.
- Zhou, Y.; Tuzel, O. VoxelNet: End-to-End Learning for Point Cloud Based 3D Object Detection. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 4490–4499.
- Su, H.; Jampani, V.; Sun, D.; Maji, S.; Kalogerakis, E.; Yang, M.H.; Kautz, J. SPLATNet: Sparse lattice networks for point cloud processing. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 2530–2539.
- Zhou, H.; Zhu, X.; Song, X.; Ma, Y.; Wang, Z.; Li, H.; Lin, D. Cylinder3D: An Effective 3D Framework for Driving-scene LiDAR Semantic Segmentation. arXiv 2020, arXiv:2008.01550.
- Lang, A.H.; Vora, S.; Caesar, H.; Zhou, L.; Yang, J.; Beijbom, O. PointPillars: Fast encoders for object detection from point clouds. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 12697–12705.
- Wu, B.; Wan, A.; Yue, X.; Keutzer, K. SqueezeSeg: Convolutional Neural Nets with Recurrent CRF for Real-Time Road-Object Segmentation from 3D LiDAR Point Cloud. In Proceedings of the 2018 IEEE International Conference on Robotics and Automation (ICRA), Brisbane, QLD, Australia, 21–25 May 2018; pp. 1887–1893.
- Wu, B.; Zhou, X.; Zhao, S.; Yue, X.; Keutzer, K. SqueezeSegV2: Improved Model Structure and Unsupervised Domain Adaptation for Road-Object Segmentation from a LiDAR Point Cloud. arXiv 2018, arXiv:1809.08495.
- Xu, C.; Wu, B.; Wang, Z.; Zhan, W.; Vajda, P.; Keutzer, K.; Tomizuka, M. SqueezeSegV3: Spatially-adaptive convolution for efficient point-cloud segmentation. In Proceedings of the European Conference on Computer Vision, Glasgow, UK, 23–28 August 2020; Springer: Berlin/Heidelberg, Germany, 2020; pp. 1–19.
- Milioto, A.; Vizzo, I.; Behley, J.; Stachniss, C. RangeNet++: Fast and Accurate LiDAR Semantic Segmentation. In Proceedings of the 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Macau, China, 4–8 November 2019; pp. 4213–4220.
- Zhang, Y.; Zhou, Z.; David, P.; Yue, X.; Xi, Z.; Gong, B.; Foroosh, H. PolarNet: An Improved Grid Representation for Online LiDAR Point Clouds Semantic Segmentation. In Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 13–19 June 2020; pp. 9598–9607.
- Lyu, Y.; Bai, L.; Huang, X. Real-Time Road Segmentation Using LiDAR Data Processing on an FPGA. In Proceedings of the 2018 IEEE International Symposium on Circuits and Systems (ISCAS), Florence, Italy, 27–30 May 2018; pp. 1–5.
- Velas, M.; Spanel, M.; Hradis, M.; Herout, A. CNN for very fast ground segmentation in Velodyne LiDAR data. In Proceedings of the 2018 IEEE International Conference on Autonomous Robot Systems and Competitions (ICARSC), Torres Vedras, Portugal, 25–27 April 2018; pp. 97–103.
- Zhang, Z.; Hua, B.; Yeung, S. ShellNet: Efficient Point Cloud Convolutional Neural Networks Using Concentric Shells Statistics. In Proceedings of the 2019 IEEE/CVF International Conference on Computer Vision (ICCV), Seoul, Korea, 27 October–2 November 2019; pp. 1607–1616.
- Shen, Z.; Liang, H.; Lin, L.; Wang, Z.; Huang, W.; Yu, J. Fast Ground Segmentation for 3D LiDAR Point Cloud Based on Jump-Convolution-Process. Remote Sens. 2021, 13, 3239.
- Gao, B.; Pan, Y.; Li, C.; Geng, S.; Zhao, H. Are We Hungry for 3D LiDAR Data for Semantic Segmentation? A Survey of Datasets and Methods. IEEE Trans. Intell. Transp. Syst. 2022, 23, 6063–6081.
| Class | Method | Real-Time | Computational Requirements | Segmentation Immunity | Performance with Rising Obstacles and Regions | Performance with Uneven Ground | Performance with Sparse Data |
|---|---|---|---|---|---|---|---|
| 2.5D Grid-based | Mean-Based [64,65,66,67,68] | ✓ [66,68]; ✗ [64] | Low | Under-/over-segmentation | Good [67,68] | Good [67,68] | - |
| 2.5D Grid-based | Multi-level [69,70] | - | Medium | - | Good | Good | - |
| 2.5D Grid-based | Occupancy Grid [74,75,76,77,78,79] | ✓ [74,75,76,77,78]; ✗ [79] | High | Under-segmentation [74] | Good [79] | - | - |
| Ground Modelling | GPR-based [90,91,92,93] | ✓ [91]; ✗ [90,92,93] | High | Under-segmentation [91] | Insensitive to slowly rising obstacles [91] | Good [91,93] | Good [90,92,93] |
| Ground Modelling | Line Extraction [87,88] | ✓ [87] | Medium | Slight over-/under-segmentation | Good [88] | Good [88]; Bad [87] | - |
| Ground Modelling | Plane Fitting [80,82,83,84,85,86] | ✓ [82,83,84,85,86]; ✗ [80] | Medium/High | Prone to over-segmentation [82]; under-segmentation [83] | Good [83,86] | Good [83,86]; Bad [80] | Good [82] |
| Adjacent Points and Local Features | Channel-based [94,95,96,97,98,99,100] | ✓ [95,96,97,98,100]; ✗ [94] | Medium | Under-/over-segmentation [94,97] | Good [94,95,96,97,98] | Good [94,97] | Good [94] |
| Adjacent Points and Local Features | Range Images [109,110,111,112] | ✓ [109,110,112]; ✗ [111] | Medium | Prone to over-/under-segmentation; over-segmentation [109,110] | Good [109,110] | - | Good [109,110] |
| Adjacent Points and Local Features | Clustering [90,105,106] | ✗ [90] | Medium/High | Under-/over-segmentation [105] | Good [106] | - | Good [90] |
| Adjacent Points and Local Features | Region Growing [101,102,103,104] | ✗ [101] | Medium/High | Small over-segmentation [101,102,104]; small under-/over-segmentation [103] | Good [104] | - | - |
| Higher Order Inference | MRF [115,116,117,118,119] | ✓ [119]; ✗ [117] | High | - | Good [115,116] | Good [115,117]; Bad [116] | Good [115,116] |
| Higher Order Inference | CRF [120,122] | ✓ [120] (with GPU); ✗ [122] | High | - | - | - | Good [120] |
| Deep Learning | CNN [125,126,127,128,129,130,132,133,134,135,136,137,138,139,140,141,142,143,144] | ✓ [130,133,136,140] (with GPU), [142] (on FPGA); ✗ [143] | High/Very High | - | Good | Good | Good |
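To make the simplest row of the table above concrete, the sketch below prototypes a mean-based 2.5D grid filter: points are binned into x-y cells, each cell stores the mean height of its points, and points close to their cell's mean are labelled ground. This is a minimal illustration, not the implementation from [64,65,66,67,68]; the function name, the NumPy pipeline, and the 0.5 m cell size and 0.15 m height threshold are our assumptions.

```python
import numpy as np

def mean_elevation_ground_mask(points, cell_size=0.5, height_thresh=0.15):
    """Mean-based elevation-map ground filter (illustrative sketch).

    points: (N, 3) float array of x, y, z coordinates from one LiDAR sweep.
    Returns a boolean mask that is True for points labelled as ground.
    """
    # Map each point to a 2D grid cell index on the x-y plane.
    ij = np.floor(points[:, :2] / cell_size).astype(np.int64)
    ij -= ij.min(axis=0)  # shift indices so they start at zero
    # Collapse the 2D cell index into a single key per cell.
    keys = ij[:, 0] * (ij[:, 1].max() + 1) + ij[:, 1]
    # Group points by cell and compute each cell's mean height.
    _, inverse, counts = np.unique(keys, return_inverse=True, return_counts=True)
    cell_mean_z = np.bincount(inverse, weights=points[:, 2]) / counts
    # A point is ground if it deviates little from its cell's mean height.
    return np.abs(points[:, 2] - cell_mean_z[inverse]) < height_thresh
```

Note the weakness this baseline shares with the table's "Segmentation Immunity" entry: a cell filled only by an obstacle has a mean height equal to the obstacle itself, so every point in it passes the test. Practical mean-based methods therefore add checks against neighbouring cells, such as the surface-gradient step used in the surveyed algorithms.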
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).