Abstract
Environmental mapping is a crucial element of Simultaneous Localization and Mapping (SLAM) algorithms, as it provides the accurate representation required for autonomous robot navigation. Current SLAM systems predominantly rely on grid-based map representations, which face challenges such as measurement discretization for cell fitting and grid map interpolation for online pose prediction. Splines offer a promising alternative that can mitigate these issues while remaining computationally efficient. This paper examines the efficiency differences between B-Spline surface mapping and discretized cell-based approaches, such as grid mapping, in indoor environments. B-Spline Online SLAM and FastSLAM, both built on a Rao-Blackwellized Particle Filter (RBPF), are employed to achieve range-based mapping of an unknown 2D environment. The system incorporates deep learning networks into the B-Spline curve estimation process to compute parameterizations and knot vectors. The implementation uses the Intel Research Lab benchmark dataset to conduct a comprehensive qualitative and quantitative analysis of both approaches. The B-Spline surface approach performs significantly better, with an average squared translational error of 0.0016 and an average squared rotational error of 1.137. Additionally, a comparative analysis with the Vision Benchmark Suite demonstrates robustness across different environments, highlighting the effectiveness of B-Spline SLAM for real-world applications.
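The abstract refers to the "parameterizations" and "knot vectors" that drive B-spline curve estimation. The following minimal sketch is not the paper's implementation (which estimates these quantities with deep networks); it only illustrates what those two objects are, using common textbook choices (chord-length parameterization, clamped uniform knots), synthetic placeholder data, and SciPy's least-squares spline fitting.

```python
# Hypothetical sketch: fit a 2D B-spline curve to range-like points.
import numpy as np
from scipy.interpolate import make_lsq_spline

# Synthetic 2D range points along a wall-like contour (placeholder data).
pts = np.array([[0.0, 0.00], [0.3, 0.05], [0.6, 0.20], [0.9, 0.45],
                [1.2, 0.70], [1.5, 0.88], [1.8, 0.97], [2.1, 1.00]])

# 1) Parameterization: assign a parameter u_i to each measured point,
#    here via normalized cumulative chord length.
chords = np.linalg.norm(np.diff(pts, axis=0), axis=1)
u = np.concatenate(([0.0], np.cumsum(chords))) / chords.sum()

# 2) Knot vector: clamped cubic knots (degree+1 repeats at each end) with
#    uniformly spaced interior knots.
k = 3                                        # spline degree
interior = np.linspace(0.0, 1.0, 3)[1:-1]    # single interior knot at 0.5
t = np.concatenate((np.zeros(k + 1), interior, np.ones(k + 1)))

# 3) Least-squares fit of the control points given (u, t), then curve evaluation.
curve = make_lsq_spline(u, pts, t, k=k)      # vector-valued BSpline (5 control points)
samples = curve(np.linspace(0.0, 1.0, 50))   # 50 points on the fitted 2D curve
print(samples.shape)                         # -> (50, 2)
```

In the paper's pipeline the parameter values and knot placement are predicted by learned networks rather than fixed by these heuristics; the fitting and evaluation steps are otherwise analogous.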
Data Availability and Access
The dataset supporting the findings of this study is derived from the open dataset provided by the Intel Research Lab benchmark and is publicly accessible for research purposes [34, 35]. The authors affirm compliance with the dataset’s terms and conditions and emphasize that, due to anonymization and de-identification, individual informed consent is not applicable.
References
Rodrigues RT, Aguiar AP, Pascoal A (2018) A b-spline mapping framework for long-term autonomous operations. In: 2018 IEEE/RSJ International conference on intelligent robots and systems (IROS), pp 3204–3209. https://doi.org/10.1109/IROS.2018.8594456
Liu M, Huang S, Dissanayake G, Kodagoda S (2010) Towards a consistent slam algorithm using b-splines to represent environments. In: 2010 IEEE/RSJ International conference on intelligent robots and systems, pp 2065–2070. https://doi.org/10.1109/IROS.2010.5649703
Schumaker L (2007) Approximation power of splines (free knots). In: Spline functions: basic theory, 3rd edn. Cambridge Mathematical Library, Cambridge University Press, pp 268–296. https://doi.org/10.1017/CBO9780511618994.009
Shreyas Madhav AV, Kanna BR (2021) Parallel fpfh slam for aerial vehicles. In: 2021 IEEE Conference on Norbert Wiener in the 21st Century (21CW), pp 1–4. https://doi.org/10.1109/21CW48944.2021.9532582
Mur-Artal R, Montiel JMM, Tardós JD (2015) Orb-slam: a versatile and accurate monocular slam system. IEEE Trans Rob 31(5):1147–1163. https://doi.org/10.1109/TRO.2015.2463671
Kohlbrecher S, Meyer J, Graber T, Petersen K, Klingauf U, von Stryk O (2014) Hector open source modules for autonomous mapping and navigation with rescue robots. In: Behnke S, Veloso M, Visser A, Xiong R (eds) RoboCup 2013: Robot World Cup XVII. Springer, Berlin Heidelberg, pp 624–631
Engel J, Schöps T, Cremers D (2014) Lsd-slam: large-scale direct monocular slam. In: Fleet D, Pajdla T, Schiele B, Tuytelaars T (eds) Computer vision - ECCV 2014. Springer International Publishing, Cham, pp 834–849
Filipenko M, Afanasyev IM (2018) Comparison of various slam systems for mobile robot in an indoor environment. In: 2018 International conference on intelligent systems (IS), pp 400–407
Xu L, Feng C, Kamat VR, Menassa CC (2019) An occupancy grid mapping enhanced visual slam for real-time locating applications in indoor gps-denied environments. Autom Constr 104:230–245. https://doi.org/10.1016/j.autcon.2019.04.011
Lee K, Ryu S-H, Nam C, Doh NL (2018) A practical 2d/3d slam using directional patterns of an indoor structure. Intell Serv Robot 11(1):1–24. https://doi.org/10.1007/s11370-017-0234-9
Rodrigues RT, Tsiogkas N, Aguiar AP, Pascoal A (2020) B-spline surfaces for range-based environment mapping. In: 2020 IEEE/RSJ international conference on intelligent robots and systems (IROS), pp 10774–10779. https://doi.org/10.1109/IROS45743.2020.9341768
Shreyas Madhav AV, Kanna BR, Pavithra LK (2021) Parallel exploitation of 2d lidar simultaneous localization and mapping. In: 2021 5th International conference on computer, communication and signal processing (ICCCSP), pp 204–208. https://doi.org/10.1109/ICCCSP52374.2021.9465538
Chen L-H, Peng C-C (2019) A robust 2d-slam technology with environmental variation adaptability. IEEE Sens J 19(23):11475–11491. https://doi.org/10.1109/JSEN.2019.2931368
Ma J, Wang X, He Y, Mei X, Zhao J (2019) Line-based visual slam by junction matching and vanishing point alignment. IEEE Access. https://doi.org/10.1109/ACCESS.2019.2960282
Çatal O, Jansen W, Verbelen T, Dhoedt B, Steckel J (2021) Latentslam: unsupervised multi-sensor representation learning for localization and mapping. arXiv:2105.03265
Grisetti G, Stachniss C, Burgard W (2007) Improved techniques for grid mapping with rao-blackwellized particle filters. IEEE Trans Rob 23:34–46. https://doi.org/10.1109/TRO.2006.889486
Vázquez-Martín R, Núñez P, Bandera A, Sandoval F (2009) Curvature-based environment description for robot navigation using laser range sensors. Sensors 9(8):5894–5918. https://doi.org/10.3390/s90805894
Caccavale A, Schwager M (2018) Wireframe mapping for resource-constrained robots. In: 2018 IEEE/RSJ International conference on intelligent robots and systems (IROS), pp 1–9. https://doi.org/10.1109/IROS.2018.8594057
Pedraza L, Dissanayake G, Valls Miro J, Rodriguez-Losada D, Matia F (2008) BS-SLAM: shaping the world. In: Burgard W, Brock O, Stachniss C (eds) Robotics: science and systems III. MIT Press, pp 169–176
Ghaffari Jadidi M, Valls Miro J, Dissanayake G (2018) Gaussian processes autonomous mapping and exploration for range-sensing mobile robots. Auton Robot 42:273–290. https://doi.org/10.1007/s10514-017-9668-3
Li Y, Ruichek Y (2014) Occupancy grid mapping in urban environments from a moving on-board stereo-vision system. Sensors 14(6):10454–10478. https://doi.org/10.3390/s140610454
Shen D, Xu Y, Huang Y (2019) Research on 2d-slam of indoor mobile robot based on laser radar. In: Proceedings of the 2019 4th international conference on automation, control and robotics engineering, association for computing machinery, New York, USA. https://doi.org/10.1145/3351917.3351966
Hempel T, Al-Hamadi A (2022) An online semantic mapping system for extending and enhancing visual slam. Eng Appl Artif Intell 111:104830. https://doi.org/10.1016/j.engappai.2022.104830
Wen S, Liu X, Wang Z, Zhang H, Zhang Z, Tian W (2022) An improved multi-object classification algorithm for visual slam under dynamic environment. Intell Serv Robot 15:39–55. https://doi.org/10.1007/s11370-021-00400-8
Ebadi K, Palieri M, Wood S, Padgett CW, Agha-mohammadi AA (2021) DARE-SLAM: Degeneracy-Aware and Resilient Loop Closing in Perceptually-Degraded Environments. J Intell Robot Syst 102:1–25. https://doi.org/10.1007/s10846-021-01362-w
Li R, Wang S, Gu D (2021) Deepslam: a robust monocular slam system with unsupervised deep learning. IEEE Trans Industr Electron 68(4):3577–3587. https://doi.org/10.1109/TIE.2020.2982096
Bescos B, Fácil JM, Civera J, Neira J (2018) Dynaslam: tracking, mapping, and inpainting in dynamic scenes. IEEE Robot Autom Lett 3(4):4076–4083. https://doi.org/10.1109/LRA.2018.2860039
McCormac J, Handa A, Davison A, Leutenegger S (2017) Semanticfusion: dense 3d semantic mapping with convolutional neural networks. In: 2017 IEEE international conference on robotics and automation (ICRA), pp 4628–4635. https://doi.org/10.1109/ICRA.2017.7989538
Kendall A, Grimes M, Cipolla R (2015) Posenet: a convolutional network for real-time 6-dof camera relocalization. In: Proceedings of the 2015 IEEE international conference on computer vision (ICCV), ICCV ’15, IEEE Computer Society, USA, pp 2938–2946. https://doi.org/10.1109/ICCV.2015.336
Engel J, Schöps T, Cremers D (2014) Lsd-slam: large-scale direct monocular slam. In: Fleet D, Pajdla T, Schiele B, Tuytelaars T (eds) Computer vision - ECCV 2014. Springer International Publishing, Cham, pp 834–849
Ban X, Wang H, Chen T, Wang Y, Xiao Y (2021) Monocular visual odometry based on depth and optical flow using deep learning. IEEE Trans Instrum Meas 70:1–19. https://doi.org/10.1109/TIM.2020.3024011
Günther S, Pazner W, Qi D (2021) Spline parameterization of neural network controls for deep learning. arXiv:2103.00301
Lee J-S, Nam SY, Chung WK (2011) Robust rbpf-slam for indoor mobile robots using sonar sensors in non-static environments. Adv Robot 25(9–10):1227–1248. https://doi.org/10.1163/016918611X574696
Kümmerle R, Steder B, Dornhege C, Ruhnke M, Grisetti G, Stachniss C, Kleiner A (2009) On measuring the accuracy of slam algorithms. Auton Robot 27(4):387–407. https://doi.org/10.1007/s10514-009-9155-6
SLAM Benchmarking Datasets. http://ais.informatik.uni-freiburg.de/slamevaluation/datasets.php. Accessed 27 Nov 2023
Dwijotomo A, Abdul Rahman MA, Mohammed Ariff MH, Zamzuri H, Wan Azree WMH (2020) Cartographer slam method for optimization with an adaptive multi-distance scan scheduler. Appl Sci 10(1). https://doi.org/10.3390/app10010347
Xu B, Liu Z, Fu Y, Zhang C (2017) Research of cartographer laser SLAM algorithm. In: Lv Y, Bao W, Chen W, Shi Z, Su J, Fei J, Gong W, Han S, Jin W, Yang J (eds), LiDAR Imaging Detection and Target Recognition 2017, vol. 10605, International Society for Optics and Photonics, SPIE, p 1060509. https://doi.org/10.1117/12.2292864
Montemerlo M, Thrun S, Koller D, Wegbreit B (2002) Fastslam: a factored solution to the simultaneous localization and mapping problem. In: Proceedings of the eighteenth national conference on artificial intelligence. American Association for Artificial Intelligence, pp 593–598
Acknowledgements
The authors would like to express their gratitude to the Rajiv Gandhi National Institute of Youth Development, Sriperumbudur, India, and the Vellore Institute of Technology, Chennai, India, for providing valuable research facilities. The authors also thank Mr. Ganesh and Mr. Sethuraman T V, both of VIT Chennai, for sharing their time and expertise towards the enhancement of this research work. This work was funded by a seed research grant in engineering, management and sciences from VIT Chennai, under the project "Photonic disinfection to combat COVID-19 using an indoor unmanned aerial vehicle".
Author information
Contributions
The collaborative effort of the authors in this study is outlined as follows: Dr. B. Rajesh Kanna played a crucial role in conceptualizing the study, designing and implementing the methodology, and contributed significantly to both the initial drafting and subsequent revisions of the manuscript. Shreyas Madhav AV actively participated in the development of the research design and methodology, played a key role in data analysis and interpretation, made substantial contributions to the manuscript’s writing, and took primary responsibility for data collection and analysis. Dr. C. Sweetlin Hemalatha provided valuable critical feedback during the revision process, contributed to the conceptualization and design of the study, played a substantial role in drafting and revising the manuscript, and offered insightful perspectives during the interpretation of results. Dr. Manoj Kumar Rajagopal contributed to formulating the research question and study design, actively engaged in data analysis and interpretation, and provided critical input during the revision process.
Ethics declarations
Ethical and Informed Consent for Data Used
This manuscript has not been submitted to more than one journal for simultaneous consideration. The submitted work is original and has not been published elsewhere in any form or language (partially or in full). The data utilized in this study were sourced from the open dataset provided by the Intel Research Lab benchmark. This dataset is publicly accessible and intended for research purposes. The authors acknowledge and comply with the terms and conditions outlined by the dataset provider. As the data in this open dataset are anonymized and de-identified, individual informed consent is not applicable.
Competing Interests
The authors of the manuscript entitled “Enhancing SLAM Efficiency: A Comparative Analysis of B-Spline Surface Mapping and Grid-Based Approaches for Autonomous Robot Navigation” declare that they have no conflict of interest.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
About this article
Cite this article
Kanna, B.R., AV, S.M., Hemalatha, C.S. et al. Enhancing SLAM efficiency: a comparative analysis of B-spline surface mapping and grid-based approaches. Appl Intell 54, 10802–10818 (2024). https://doi.org/10.1007/s10489-024-05776-5