
LAPRNet: Lightweight Airborne Particle Removal Network for LiDAR Point Clouds

  • Conference paper
Image and Video Technology (PSIVT 2023)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 14403)


Abstract

Autonomous vehicles and mobile robots usually rely on LiDAR sensors for outdoor environment perception. Airborne particles, such as fog, rain, and snow, introduce spurious measurement points that cause missed detections and false positives. LiDAR-based perception systems must therefore contend with inclement weather to avoid a significant drop in performance. This paper introduces a lightweight network that identifies these undesired measurement points. It consists mainly of three Wide Multi-Level Residual (WMLR) modules. WMLR is carefully designed to integrate wide activation, multi-level shortcuts, and shuffle attention seamlessly, making it an effective and efficient pre-processing tool for subsequent tasks. We also introduce an enhanced LiDAR data representation that further boosts performance by combining the spatial distribution of the point cloud with the standard intensity and distance inputs. Two models sharing the same network architecture but using the standard and enhanced input representations, namely LAPRNet\(_2\) and LAPRNet\(_3\), are proposed. They are trained and tested in controlled and natural weather environments. Experiments on the WADS and Chamber datasets show that they outperform state-of-the-art deep learning and traditional filtering methods by a significant margin. Considering the limited computing resources on edge devices, both LAPRNet\(_2\) and LAPRNet\(_3\) provide an optimal balance between quality and computation to ensure successful deployment. LAPRNet\(_2\) is the more efficient: compared with WeatherNet's 1.53M parameters and 18.4 GFLOPs, it requires only 0.39M parameters and 4.9 GFLOPs. The source code will be available on GitHub soon.
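The abstract describes feeding the network a range-image representation of the point cloud with distance and intensity channels, optionally enhanced with a spatial-distribution cue. The exact projection parameters and the formulation of the spatial-distribution channel are not given in the abstract, so the sketch below is only illustrative: it performs a standard spherical projection into an (H, W, 3) image with range, intensity, and a simple per-pixel point-count channel standing in for spatial distribution. All sensor parameters (field of view, image size) are assumed values, not the paper's.

```python
import numpy as np

def spherical_projection(points, intensity, H=32, W=1024,
                         fov_up_deg=10.0, fov_down_deg=-30.0):
    """Project a LiDAR point cloud (N, 3) into an (H, W, 3) range image.

    Channels: [range, intensity, local point count]. The last channel is
    a crude stand-in for the paper's spatial-distribution cue, whose
    exact formulation is not specified in the abstract.
    """
    fov_up = np.deg2rad(fov_up_deg)
    fov_down = np.deg2rad(fov_down_deg)
    fov = fov_up - fov_down

    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.maximum(np.linalg.norm(points, axis=1), 1e-6)

    yaw = np.arctan2(y, x)                    # azimuth in [-pi, pi]
    pitch = np.arcsin(np.clip(z / r, -1, 1))  # elevation

    # Map angles to integer pixel coordinates.
    u = ((1.0 - (yaw / np.pi + 1.0) / 2.0) * W).astype(int) % W
    v = np.clip(((1.0 - (pitch - fov_down) / fov) * H).astype(int), 0, H - 1)

    img = np.zeros((H, W, 3), dtype=np.float32)
    # Write far points first so the nearest return wins per pixel.
    order = np.argsort(-r)
    img[v[order], u[order], 0] = r[order]
    img[v[order], u[order], 1] = intensity[order]
    # Count channel: unbuffered add handles repeated pixel indices.
    np.add.at(img[:, :, 2], (v, u), 1.0)
    return img
```

A point-wise particle-removal network would then consume this image and predict a per-pixel mask of weather-induced returns to discard before downstream perception.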




Corresponding author

Correspondence to Junjie Cao.

Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

About this paper


Cite this paper

Ma, Y., Yue, Z., Wang, Y., Liu, R., Su, Z., Cao, J. (2024). LAPRNet: Lightweight Airborne Particle Removal Network for LiDAR Point Clouds. In: Yan, W.Q., Nguyen, M., Nand, P., Li, X. (eds) Image and Video Technology. PSIVT 2023. Lecture Notes in Computer Science, vol 14403. Springer, Singapore. https://doi.org/10.1007/978-981-97-0376-0_22

Download citation

  • DOI: https://doi.org/10.1007/978-981-97-0376-0_22

  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-97-0375-3

  • Online ISBN: 978-981-97-0376-0

  • eBook Packages: Computer Science, Computer Science (R0)
