Perception-Aware Planning for Active SLAM in Dynamic Environments
Figure 1. Framework of the system.
Figure 2. Fuzzy surface of the relationship between input 1, input 2, and the output.
Figure 3. Fuzzy surface of the relationship between input 1, input 3, and the output.
Figure 4. Fuzzy surface of the relationship between input 2, input 3, and the output.
Figure 5. Membership function for input 1 of the fuzzy inference system.
Figure 6. Membership functions for inputs 2 and 3 of the fuzzy inference system.
Figure 7. Output of the fuzzy inference system.
Figure 8. Scenarios in different environments: (a) small-scale, (b) medium-scale, (c) large-scale.
Figure 9. MAV in the Gazebo simulator: (a) small-scale and (b) large-scale.
Figure 10. Trajectory of ORB-SLAM3 in the small-scale scenario.
Figure 11. Trajectory of ORB-SLAM3 in the medium-scale scenario.
Figure 12. Exploration results in different environments (small-scale).
Figure 13. Exploration results in different environments (medium-scale).
Figure 14. Exploration results in different environments (large-scale).
Abstract
1. Introduction
- (1) A perception-aware path planning method is proposed, which combines the next-best-view planner with an active loop closing strategy to generate trajectories that enable more accurate localization while performing search and rescue tasks;
- (2) A key-waypoint-based active loop closing strategy is proposed to improve the efficiency of the active SLAM system;
- (3) Fuzzy-RRT-based dynamic obstacle avoidance is integrated into the path planning method to improve system performance in complex dynamic environments.
2. Related Work
3. Method
3.1. System Overview
3.2. Active Loop Closing Planner
3.3. Key-Waypoints-Based Active Loop Closing
Algorithm 1: Exploration Planner — Active Loop Closing
Input: Path planning times; best nodes
Output: Enable active loop closing flag; planned path
1:  select key waypoints according to
2:  save key waypoints to as candidate active loop closing points
3:  if  then
4:
5:  end if
6:  if  and  then
7:
8:
9:      break
10: end if
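Algorithm 1 can be sketched as follows. This is an illustrative reconstruction, not the paper's exact procedure: the key-waypoint selection rule, the spacing and radius parameters, and the function names (`select_key_waypoints`, `should_close_loop`) are all assumptions introduced for the example.

```python
import math

def select_key_waypoints(path, min_spacing=2.0):
    """Keep waypoints at least `min_spacing` apart as loop-closing candidates
    (a simple spacing criterion; the paper's actual rule may differ)."""
    if not path:
        return []
    keys = [path[0]]
    for p in path[1:]:
        if math.dist(p, keys[-1]) >= min_spacing:
            keys.append(p)
    return keys

def should_close_loop(planning_times, check_period, pose, candidates, radius=3.0):
    """Trigger active loop closing when the periodic check fires (every
    `check_period` planning steps) and a saved key waypoint lies within
    `radius` of the current pose; return that waypoint, else None."""
    if planning_times % check_period != 0:
        return None
    for c in candidates:
        if math.dist(pose, c) <= radius:
            return c  # revisit this waypoint to close the loop
    return None
```

The returned candidate would then be appended to the planned path so the MAV revisits a previously mapped region, letting the SLAM back end detect a loop closure.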
3.4. Fuzzy-RRT-Based Dynamic Obstacle Avoidance
Algorithm 2: Fuzzy-RRT-Based Dynamic Obstacle Avoidance
Input: ; obstacle distance threshold
Output: Refined path by fuzzy-RRT
1: generate path by (Algorithm 3)
2: follow the planned path
3: calculate the distance from the obstacle in the path
4: if  <  then
5:     generate refined path by fuzzy inference
6: else
7:      =
8: end if
9: return
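The control flow of Algorithm 2 can be sketched as below. The helper names (`distance_to_obstacle`, `fuzzy_refine`) and the threshold value are illustrative assumptions; in the paper the refinement is produced by the fuzzy inference system described in Section 3.4.

```python
def follow_with_avoidance(path, distance_to_obstacle, fuzzy_refine, d_threshold=1.5):
    """Follow a planned path waypoint by waypoint; when the measured obstacle
    distance drops below d_threshold, replace the remaining path with a
    refined detour produced by fuzzy inference (illustrative sketch)."""
    executed = []
    for i, wp in enumerate(path):
        if distance_to_obstacle(wp) < d_threshold:
            # obstacle too close: refine the rest of the path and stop replanning
            executed.extend(fuzzy_refine(path[i:]))
            return executed
        executed.append(wp)
    return executed  # no obstacle triggered refinement; path unchanged
```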
Algorithm 3: Exploration Planner — Iterative Step
Input: Initial state; enable active loop-closing flag
Output: Best gain; planned path; path planning times
1:  initialize path planning times
2:  initialize T with
3:  initialize best gain:  = 0
4:  initialize root as best node:  =
5:  number of nodes in T:
6:  initialize loop time:  = 0
7:  wait for planner call
8:  while  <  or  = 0 do
9:      add new node to build T incrementally
10:      =  + 1
11:      =  + 1
12:     if  >  then
13:         if  then
14:
15:         else
16:
17:         end if
18:         break
19:     end if
20:     if  then
21:          =
22:          =
23:         break
24:     end if
25: end while
26:
27:  =  + 1
- Input 1: distance to the obstacle in the flight direction;
- Input 2: distance to the obstacle at 45 degrees to the left of the flight direction;
- Input 3: distance to the obstacle at 45 degrees to the right of the flight direction;
- Output: steering response based on the fuzzy inference.
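A toy version of this three-input fuzzy mapping is sketched below. The triangular membership function, the distance ranges, and the output scaling are invented for illustration; they are not the paper's tuned membership functions or rule base (Figures 5 to 7).

```python
def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def steering(d_front, d_left45, d_right45):
    """Map the three obstacle distances to a steering response in [-1, 1]:
    the nearer the frontal obstacle, the harder we steer toward the freer
    45-degree side (+1 = full left, -1 = full right). Parameters are
    illustrative assumptions, not the paper's fuzzy system."""
    near_front = tri(d_front, -0.5, 0.0, 2.0)        # degree the front is 'near'
    direction = 1.0 if d_left45 >= d_right45 else -1.0
    openness = abs(d_left45 - d_right45) / max(d_left45, d_right45, 1e-6)
    return direction * near_front * (0.5 + 0.5 * openness)
```

When the frontal distance exceeds the support of the `near` set, the response is zero and the MAV keeps its planned heading; otherwise it steers away from the closer diagonal obstacle.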
4. Results and Analysis
4.1. Experimental Settings
4.2. Results in Different Scenarios
5. Discussion
5.1. Comparison of Real-Time Performance
5.2. Comparison of Different Active Loop Closing Check Threshold Selection Strategies
5.3. Comparison of the Localization Accuracy in Different Scenarios
6. Conclusions
Author Contributions
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
Rule No. | Input 1 | Input 2 | Input 3 | Output
---|---|---|---|---
1 | | | | (replan)
2 | | | | (replan)
3 | | | | (replan)
4 | | | | (replan)
5 | | | | (replan)
6 | | | | (replan)
7 | | | | (replan)
8 | | | | (replan)
9 | | | | (replan)
10 | | | | (replan)
11 | | | | (replan)
12 | | | |
13 | | | | (replan)
14 | | | | (replan)
15 | | | |
16 | | | |
17 | | | |
18 | | | |
19 | | | | (replan)
20 | | | | (replan)
21 | | | |
22 | | | | (replan)
23 | | | | (replan)
24 | | | |
25 | | | |
26 | | | |
27 | | | |
Sequence | NBVP Planning Time (s) | ALCP Planning Time (s) | NBVP Active Loop Closing Time (s) | ALCP Active Loop Closing Time (s) | NBVP Planning Times | ALCP Planning Times | NBVP Total Time Spent (s) | ALCP Total Time Spent (s)
---|---|---|---|---|---|---|---|---
- | 0.30 | 0.32 | / | 0.35 | 101 | 95 | 361.8 | 385.6
- | 0.31 | 0.35 | / | 0.55 | 465 | 453 | 1161.8 | 1235.6
- | 0.35 | 0.41 | / | 0.69 | 851 | 783 | 1779.8 | 1865.2
Speed | NBVP RMSE (m) | NBVP Mean (m) | ALCP RMSE (m) | ALCP Mean (m)
---|---|---|---|---
m/s | 3.25 | 2.56 | 3.21 | 2.87
m/s | 3.02 | 2.32 | 3.44 | 2.70
m/s | 4.89 | 3.66 | 4.19 | 3.52
m/s | 3.85 | 3.48 | 3.75 | 3.27
m/s | 10.56 | 5.05 | 3.36 | 2.82
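The RMSE and mean columns in tables like the one above are standard absolute-trajectory-error statistics over per-pose Euclidean errors. A minimal sketch, assuming the estimated and ground-truth trajectories are already time-aligned and expressed in the same frame:

```python
import math

def ate_stats(estimated, ground_truth):
    """Absolute trajectory error: RMSE and mean of per-pose Euclidean
    errors between aligned estimated and ground-truth positions."""
    errors = [math.dist(p, q) for p, q in zip(estimated, ground_truth)]
    rmse = math.sqrt(sum(e * e for e in errors) / len(errors))
    mean = sum(errors) / len(errors)
    return rmse, mean
```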
Scenario | NBVP RMSE (m) | NBVP Mean (m) | ALCP RMSE (m) | ALCP Mean (m)
---|---|---|---|---
- | 3.25 | 2.56 | 3.21 | 2.87
- | 2.32 | 1.78 | 1.00 | 0.91
- | 4.89 | 3.66 | 3.45 | 3.49
Check Frequency | ALCP RMSE (m) | ALCP Mean (m) | Planning Times | Total Time Spent (s)
---|---|---|---|---
 | 2.94 | 1.89 | 113 | 484.7
 | 3.01 | 1.94 | 106 | 448.3
 | 3.25 | 2.56 | 95 | 385.6
 | 3.21 | 2.58 | 92 | 376.4
 | 3.86 | 2.73 | 101 | 370.2
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Zhao, Y.; Xiong, Z.; Zhou, S.; Wang, J.; Zhang, L.; Campoy, P. Perception-Aware Planning for Active SLAM in Dynamic Environments. Remote Sens. 2022, 14, 2584. https://doi.org/10.3390/rs14112584