DOI: 10.1145/3636534.3690708 · ACM MobiCom Conference Proceedings · Research article

αLiDAR: An Adaptive High-Resolution Panoramic LiDAR System

Published: 04 December 2024

Abstract

LiDAR technology holds vast potential across various sectors, including robotics, autonomous driving, and urban planning. However, the performance of current LiDAR sensors is hindered by a limited field of view (FOV), low resolution, and a lack of flexible focusing capability. We introduce αLiDAR, an innovative LiDAR system that employs controllable actuation to provide a panoramic FOV, high resolution, and adaptable scanning focus. The core concept of αLiDAR is to expand the operational freedom of a LiDAR sensor by incorporating a controllable, active rotational mechanism. This modification allows the sensor to scan previously inaccessible blind spots and focus on specific areas of interest in an adaptive manner. By modeling uncertainties in the LiDAR rotation process and estimating point-wise uncertainty, αLiDAR can correct point cloud distortions resulting from significant rotation. In addition, by optimizing the LiDAR's rotation trajectory, αLiDAR can swiftly adapt to dynamic areas of interest. We developed several prototypes of αLiDAR and conducted comprehensive evaluations in various indoor and outdoor real-world scenarios. Our results demonstrate that αLiDAR achieves centimeter-level pose estimation accuracy, with an average latency of only 37 ms. In two typical LiDAR applications, αLiDAR significantly enhances 3D mapping accuracy, coverage, and density by 8.5×, 2×, and 1.6×, respectively, compared to conventional LiDAR sensors. Additionally, αLiDAR's adaptive rotation improves the effective sensing distance by 1.8× and increases the number of perceived objects by 1.9×. A video demonstration of αLiDAR in action in the real world is available at https://youtu.be/x4zc_I_xTaw. The code is available at https://github.com/HViktorTsoi/alpha_lidar.
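The abstract describes correcting point cloud distortion caused by the sensor's own actuated rotation. The paper's actual method models per-point uncertainty during rotation; as a much simpler illustration of the underlying idea (motion compensation, or "de-skewing"), the sketch below re-expresses each point of a sweep in the sensor frame at sweep end, assuming a single rotation axis and a linearly varying actuator angle. All names and the linear-interpolation assumption are hypothetical, not the authors' implementation.

```python
import numpy as np

def rot_z(theta):
    """Rotation matrix about the z (spin) axis."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

def deskew_sweep(points, timestamps, theta_start, theta_end, t_start, t_end):
    """
    Undistort one LiDAR sweep captured while the sensor rotates.

    points     : (N, 3) raw points in the (rotating) sensor frame
    timestamps : (N,) per-point capture times
    theta_*    : actuator angle at sweep start/end (rad), assumed
                 to vary linearly in between (a simplification)

    Each point is mapped into the frame the sensor occupies at t_end,
    cancelling the rotation that occurred after its capture.
    """
    alpha = (timestamps - t_start) / (t_end - t_start)  # normalized time in [0, 1]
    R_end_inv = rot_z(theta_end).T                      # end frame <- fixed frame
    corrected = np.empty_like(points)
    for i, (p, a) in enumerate(zip(points, alpha)):
        theta_i = theta_start + a * (theta_end - theta_start)
        # fixed-frame position of the point, then back into the end frame
        corrected[i] = R_end_inv @ (rot_z(theta_i) @ p)
    return corrected
```

A point captured at sweep start gets rotated by the full sweep angle, while a point captured at sweep end is left unchanged; real pipelines would interpolate a full 6-DoF pose (e.g. with quaternion slerp) rather than a single angle.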



Published In

ACM MobiCom '24: Proceedings of the 30th Annual International Conference on Mobile Computing and Networking
December 2024, 2476 pages
ISBN: 9798400704895
DOI: 10.1145/3636534
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

Publisher

Association for Computing Machinery, New York, NY, United States


Author Tags

  1. LiDAR
  2. active sensing
  3. pose estimation
  4. state estimation
  5. sensor fusion


Conference

ACM MobiCom '24

Acceptance Rates

Overall Acceptance Rate 440 of 2,972 submissions, 15%

Article Metrics

  • Total Citations: 0
  • Total Downloads: 112
  • Downloads (last 12 months): 112
  • Downloads (last 6 weeks): 112

Reflects downloads up to 21 Dec 2024
