research-article

Occupancy Grid Mapping Without Ray-Casting for High-Resolution LiDAR Sensors

Published: 16 October 2023

Abstract

Occupancy mapping is a fundamental component of robotic systems for reasoning about the known and unknown regions of the environment. This article presents D-Map, an efficient occupancy mapping framework for high-resolution light detection and ranging (LiDAR) sensors. The framework introduces three main novelties to address the computational efficiency challenges of occupancy mapping. First, we use a depth image to determine the occupancy state of regions instead of the traditional ray-casting method. Second, we introduce an efficient on-tree update strategy on a tree-based map structure. Together, these two techniques avoid redundant visits to small cells, significantly reducing the number of cells to be updated. Third, we remove known cells from the map at each update by leveraging the low false alarm rate of LiDAR sensors. This approach not only enhances update efficiency by reducing map size but also endows the framework with an interesting decremental property, from which the name D-Map derives. To support our design, we provide theoretical analyses of the accuracy of the depth image projection and the time complexity of occupancy updates. Furthermore, we conduct extensive benchmark experiments with various LiDAR sensors on both public and private datasets. Our framework demonstrates superior efficiency compared with other state-of-the-art methods while maintaining comparable mapping accuracy and high memory efficiency. We also demonstrate two real-world applications of D-Map: real-time occupancy mapping on a handheld device and on an aerial platform carrying a high-resolution LiDAR.
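The core idea the abstract describes (replacing per-ray traversal with a depth-image comparison) can be sketched as follows. This is a hypothetical illustration of the general technique, not the authors' implementation or API: a scan is rendered into a spherical depth image, and a cell's state is determined by projecting its center and comparing its range against the stored depth. All function names, the pixel layout, and the tolerance `eps` are illustrative assumptions.

```python
import numpy as np

def make_depth_image(points, n_az=360, n_el=180):
    """Render a point cloud (N x 3, sensor frame) into a spherical depth
    image, keeping the nearest range per pixel."""
    img = np.full((n_el, n_az), np.inf)
    r = np.linalg.norm(points, axis=1)
    az = np.arctan2(points[:, 1], points[:, 0])           # azimuth in [-pi, pi)
    el = np.arcsin(np.clip(points[:, 2] / r, -1.0, 1.0))  # elevation
    u = ((az + np.pi) / (2 * np.pi) * n_az).astype(int) % n_az
    v = np.clip(((el + np.pi / 2) / np.pi * n_el).astype(int), 0, n_el - 1)
    np.minimum.at(img, (v, u), r)  # unbuffered per-pixel minimum
    return img

def classify_cell(center, depth_img, eps=0.2):
    """Classify a cell by comparing its projected range with the depth image:
    'free' if clearly in front of the measured surface, 'occupied' if at the
    surface, 'unknown' if occluded or unobserved."""
    n_el, n_az = depth_img.shape
    r = np.linalg.norm(center)
    az = np.arctan2(center[1], center[0])
    el = np.arcsin(center[2] / r)
    u = int((az + np.pi) / (2 * np.pi) * n_az) % n_az
    v = min(max(int((el + np.pi / 2) / np.pi * n_el), 0), n_el - 1)
    d = depth_img[v, u]
    if np.isinf(d):
        return "unknown"   # pixel received no measurement
    if r < d - eps:
        return "free"      # cell lies on the ray before the hit
    if r <= d + eps:
        return "occupied"  # cell contains the measured surface
    return "unknown"       # cell is occluded by the surface

# Toy scan: a wall of points at x = 5 m in front of the sensor.
yy, zz = np.meshgrid(np.linspace(-2, 2, 50), np.linspace(-2, 2, 50))
wall = np.stack([np.full(yy.size, 5.0), yy.ravel(), zz.ravel()], axis=1)
img = make_depth_image(wall)

print(classify_cell(np.array([2.0, 0.0, 0.0]), img))  # free: in front of wall
print(classify_cell(np.array([5.0, 0.0, 0.0]), img))  # occupied: at the wall
print(classify_cell(np.array([8.0, 0.0, 0.0]), img))  # unknown: behind wall
```

Note that one classification costs a single projection and lookup regardless of range, whereas ray-casting visits every cell along the beam; this is the efficiency gap the abstract's first contribution exploits (the paper additionally handles cells larger than one pixel and the projection's accuracy bounds).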


Cited By

  • (2024) "Sparse Query Dense: Enhancing 3D Object Detection with Pseudo Points," Proceedings of the 32nd ACM International Conference on Multimedia, pp. 409–418, DOI: 10.1145/3664647.3681420. Online publication date: 28-Oct-2024.


Published In

IEEE Transactions on Robotics, Volume 40, 2024, 4290 pages

Publisher

IEEE Press
