Abstract
Autonomous vehicles and mobile robots usually rely on LiDAR sensors for outdoor environment perception. Airborne particles such as fog, rain, and snow introduce undesired measurement points, resulting in missed detections and false positives. LiDAR-based perception systems must therefore contend with inclement weather to avoid a significant drop in performance. This paper introduces a lightweight network to identify these undesired measurement points. It mainly consists of three Wide Multi-Level Residual (WMLR) modules. WMLR is carefully designed to seamlessly integrate wide activation, multi-level shortcuts, and shuffle attention, making it an effective and efficient pre-processing tool for subsequent tasks. We also introduce an enhanced LiDAR data representation that further boosts performance by combining point cloud spatial distribution with the standard intensity and distance inputs. Two models sharing the same network architecture but using the standard and enhanced input representations, named LAPRNet\(_2\) and LAPRNet\(_3\) respectively, are proposed. They are trained and tested in both controlled and natural weather environments. Experiments on the WADS and Chamber datasets show that they outperform state-of-the-art deep learning and traditional filtering methods by a significant margin. Considering the limited computing resources on edge devices, both LAPRNet\(_2\) and LAPRNet\(_3\) offer a favorable balance between quality and computation, enabling successful deployment. LAPRNet\(_2\) is particularly efficient: compared with WeatherNet, it reduces parameters from 1.53M to 0.39M and computation from 18.4 GFLOPs to 4.9 GFLOPs. The source code will be available on GitHub soon.
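As a rough illustration of the range-image style of LiDAR input the abstract describes, the sketch below projects a point cloud into an image with distance, intensity, and a simple local spatial-distribution channel. The specific projection parameters (sensor field of view, image size) and the spatial-distribution statistic (distance difference between horizontal neighbors) are assumptions for illustration, not the paper's actual definitions.

```python
import numpy as np

def build_range_image(points, intensity, h=64, w=1024):
    """Project a LiDAR point cloud (N, 3) into an (h, w, 3) range image.

    Channels: distance, intensity, and a local spatial-distribution
    statistic (absolute distance difference to the next column), a
    hypothetical stand-in for the paper's spatial-distribution feature.
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    dist = np.sqrt(x ** 2 + y ** 2 + z ** 2)

    # Spherical projection: azimuth -> columns, elevation -> rows.
    yaw = np.arctan2(y, x)  # in [-pi, pi]
    pitch = np.arcsin(np.clip(z / np.maximum(dist, 1e-6), -1.0, 1.0))
    fov_up, fov_down = np.deg2rad(15.0), np.deg2rad(-25.0)  # assumed sensor FOV

    u = ((yaw + np.pi) / (2 * np.pi) * w).astype(int) % w
    v = (fov_up - pitch) / (fov_up - fov_down) * h
    v = np.clip(v, 0, h - 1).astype(int)

    img = np.zeros((h, w, 3), dtype=np.float32)
    img[v, u, 0] = dist        # channel 0: range
    img[v, u, 1] = intensity   # channel 1: reflectance intensity
    # Channel 2: local spatial distribution, here the range discontinuity
    # between adjacent columns (large values hint at isolated particles).
    img[:, :, 2] = np.abs(img[:, :, 0] - np.roll(img[:, :, 0], -1, axis=1))
    return img
```

A 2D network such as the one described in the abstract would then consume this \(h \times w \times 3\) tensor per scan; with only the first two channels it corresponds to the standard distance-plus-intensity input.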
References
Bijelic, M., et al.: Seeing through fog without seeing fog: deep multimodal sensor fusion in unseen adverse weather. In: CVPR (2020)
Bijelic, M., Gruber, T., Ritter, W.: A benchmark for LiDAR sensors in fog: is detection breaking down? In: 2018 IEEE Intelligent Vehicles Symposium (IV), pp. 760–767 (2018)
Charron, N., Phillips, S., Waslander, S.L.: De-noising of lidar point clouds corrupted by snowfall. In: 2018 15th Conference on Computer and Robot Vision (CRV), pp. 254–261 (2018)
Hahner, M., et al.: LiDAR snowfall simulation for robust 3D object detection. In: IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2022)
Hahner, M., Sakaridis, C., Dai, D., Gool, L.V.: Fog simulation on real LiDAR point clouds for 3D object detection in adverse weather. In: ICCV, pp. 15263–15272 (2021)
Heinzler, R., Piewak, F., Schindler, P., Stork, W.: CNN-based LiDAR point cloud de-noising in adverse weather. IEEE Rob. Autom. Lett. 5(2), 2514–2521 (2020)
Howard, A., et al.: Searching for MobileNetV3. In: 2019 IEEE/CVF International Conference on Computer Vision (ICCV), pp. 1314–1324 (2019). https://doi.org/10.1109/ICCV.2019.00140
Kong, L., et al.: Rethinking range view representation for LiDAR segmentation. arXiv preprint arXiv:2303.05367 (2023)
Kurup, A., Bos, J.: DSOR: a scalable statistical filter for removing falling snow from LiDAR point clouds in severe winter weather. arXiv preprint arXiv:2109.07078 (2021)
Kutila, M., Pyykonen, P., Holzhuter, H., Colomb, M., Duthon, P.: Automotive LiDAR performance verification in fog and rain. In: 2018 21st International Conference on Intelligent Transportation Systems (ITSC), pp. 1695–1701 (2018)
Luo, S., Hu, W.: Score-based point cloud denoising. In: ICCV, pp. 4563–4572 (2021)
Luo, S., Hu, W.: Differentiable manifold reconstruction for point cloud denoising. In: ACM MM (2020)
Milioto, A., Vizzo, I., Behley, J., Stachniss, C.: RangeNet++: fast and accurate LiDAR semantic segmentation. In: 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 4213–4220. IEEE (2019)
Piewak, F., et al.: Boosting LiDAR-based semantic labeling by cross-modal training data generation. In: Leal-Taixé, L., Roth, S. (eds.) ECCV 2018. LNCS, vol. 11134, pp. 497–513. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-11024-6_39
Qian, K., Zhu, S., Zhang, X., Li, L.E.: Robust multimodal vehicle detection in foggy weather using complementary lidar and radar signals. In: CVPR, pp. 444–453 (2021)
Rakotosaona, M.J., La Barbera, V., Guerrero, P., Mitra, N.J., Ovsjanikov, M.: PointCleanNet: learning to denoise and remove outliers from dense point clouds. Comput. Graph. Forum 39, 185–203 (2020)
Rusu, R.B., Cousins, S.: 3D is here: Point Cloud Library (PCL). In: 2011 IEEE International Conference on Robotics and Automation, pp. 1–4 (2011)
Sandler, M., Howard, A., Zhu, M., Zhmoginov, A., Chen, L.C.: MobileNetV2: inverted residuals and linear bottlenecks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2018
Stanislas, L., et al.: Airborne particle classification in LiDAR point clouds using deep learning. In: Ishigami, G., Yoshida, K. (eds.) Field and Service Robotics. SPAR, vol. 16, pp. 395–410. Springer, Singapore (2021). https://doi.org/10.1007/978-981-15-9460-1_28
Tian, C., Fei, L., Zheng, W., Xu, Y., Zuo, W., Lin, C.W.: Deep learning on image denoising: an overview. Neural Netw. 131, 251–275 (2020)
Wang, W., You, X., Chen, L., Tian, J., Tang, F., Zhang, L.: A scalable and accurate de-snowing algorithm for LiDAR point clouds in winter. Remote Sens. 14, 1468 (2022)
Yang, T., Li, Y., Ruichek, Y., Yan, Z.: Performance modeling a near-infrared ToF LiDAR under fog: a data-driven approach. IEEE Trans. Intell. Transp. Syst., 1–10 (2021)
Yu, J., Fan, Y., Huang, T.: Wide activation for efficient image and video super-resolution. In: BMVC (2019)
Zhang, Q.L., Yang, Y.B.: SA-Net: shuffle attention for deep convolutional neural networks. In: ICASSP 2021–2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 2235–2239. IEEE (2021)
Zhang, Y., et al.: PolarNet: an improved grid representation for online LiDAR point clouds semantic segmentation. In: CVPR, pp. 9598–9607 (2020)
Zhou, Y., Tuzel, O.: VoxelNet: end-to-end learning for point cloud based 3D object detection. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4490–4499 (2018)
Zhu, X., et al.: Cylindrical and asymmetrical 3D convolution networks for lidar segmentation. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 9939–9948, June 2021
Copyright information
© 2024 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
Cite this paper
Ma, Y., Yue, Z., Wang, Y., Liu, R., Su, Z., Cao, J. (2024). LAPRNet: Lightweight Airborne Particle Removal Network for LiDAR Point Clouds. In: Yan, W.Q., Nguyen, M., Nand, P., Li, X. (eds) Image and Video Technology. PSIVT 2023. Lecture Notes in Computer Science, vol 14403. Springer, Singapore. https://doi.org/10.1007/978-981-97-0376-0_22
DOI: https://doi.org/10.1007/978-981-97-0376-0_22
Publisher Name: Springer, Singapore
Print ISBN: 978-981-97-0375-3
Online ISBN: 978-981-97-0376-0
eBook Packages: Computer Science (R0)