Self-Localization of Anonymous UGVs Using Deep Learning from Periodic Aerial Images for a GPS-Denied Environment
- Figure 1. The considered neural network.
- Figure 2. The four mobile robots and two decoys.
- Figure 3. The four mobile robots with Artoolkit targets.
- Figure 4. Turtlebot3.
- Figure 5. Comparison of the success rates for LDMRA (blue) and DLA (red: simulated; yellow: experimental).
- Figure 6. The red triangles delimit the area where the training coordinates were recorded for a period of 5 s between images. The blue dots represent measurements for a period of 8 s between images. (a): simulation; (b): experimental.
- Figure 7. DLA (red: success rate (%); blue: percentage of points out of learning limits; (a): simulations; (b): experimental).
- Figure 8. Success rates for the non-indexed localization; red: simulated; blue: experimental.
Abstract
1. Introduction
2. State-of-the-Art Works
3. Deep Learning Algorithm (DLA) for Indexed Localization
4. Platform Presentation
4.1. Simulation Platforms
4.2. Experimental Platform
4.3. Algorithms and Programming
5. Results and Analysis
5.1. Indexed Localization of Mobile Robots
5.2. Non-Indexed Localization of Mobile Robots
- The simulated learning phase considers only one mobile robot, while the success rates take into account all the mobile robots. Even if they are perfectly identical, their trajectories can be different.
- The differences between the experimental mobile robots were taken into account in the learning process, which allowed the network to better capture the respective motions of each mobile robot and thus enhanced non-indexed localization.
- This could have been the case if the mobile robots had been operating in a very large environment, where the risk of confusing robots and making errors in non-indexed localization would be low. Here, the surfaces were relatively small, obstacle detections were frequent, and the coordinates could be close to one another, which increased the difficulty of localization.
- Each mobile robot, regardless of its own movements, succeeded in determining the positions of all the mobile robots: its own linear and angular motions were of minor importance in non-indexed localization. Because the learning had integrated the characteristics of each mobile robot, the network was able to anticipate the positions of all the mobile robots without even knowing their linear and angular controls (a minimal sketch of such a network follows this list).
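As an illustration of the kind of network that could perform this prediction, the sketch below builds a small Keras multi-layer perceptron that maps the robot coordinates extracted from one aerial image to the coordinates expected in the next image, for all robots at once. The layer sizes, training-data shapes, and variable names are assumptions for illustration only, not the architecture or data actually used in the paper.

```python
# Minimal sketch (assumed architecture, not the paper's exact network):
# a Keras MLP mapping the coordinates of all robots in one aerial image
# to their expected coordinates in the next image.
import numpy as np
from tensorflow import keras

N_ROBOTS = 4           # four mobile robots, as in the experiments
N_IN = 2 * N_ROBOTS    # (x, y) of every robot in the current image
N_OUT = 2 * N_ROBOTS   # (x, y) of every robot in the next image

model = keras.Sequential([
    keras.layers.Input(shape=(N_IN,)),
    keras.layers.Dense(64, activation="sigmoid"),   # sigmoid activation, as cited in the references
    keras.layers.Dense(64, activation="sigmoid"),
    keras.layers.Dense(N_OUT, activation="linear"), # regressed coordinates
])
model.compile(optimizer="adam", loss="mse")

# X_train / Y_train would hold coordinate pairs recorded with a fixed
# inter-image period during the learning phase (random placeholders here).
X_train = np.random.rand(1000, N_IN)
Y_train = np.random.rand(1000, N_OUT)
model.fit(X_train, Y_train, epochs=10, batch_size=32, verbose=0)

# At run time, the predicted coordinates can be compared with the positions
# detected in the next image to associate anonymous detections with robots.
predicted = model.predict(X_train[:1])
```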
6. Conclusions and Future Works
- The Deep Learning Algorithm (DLA) outperformed the algorithm based on a modeling of the mobile robots’ motions (LDMRA).
- The Deep Learning Algorithm was equally efficient with simulated and experimental measurements.
- The success rate envelope of the DLA formed a “bell curve” centered on the training sampling period: the results were best for measurements taken with the same inter-image period as the learning, and they decreased as the period deviated from it (a minimal evaluation sketch follows this list).
- The Deep Learning Algorithm was able to determine the positions of all the mobile robots between two successive images, without indexing, with a very high success rate.
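The success rates summarized above could be evaluated, for example, as the fraction of robots whose predicted position falls within a fixed tolerance of the position observed in the next image. The sketch below assumes such a distance-threshold criterion; the threshold value and array shapes are illustrative and are not taken from the paper.

```python
# Sketch of a success-rate evaluation (assumed criterion: a prediction is a
# "success" if it lies within `tol` of the observed position).
import numpy as np

def success_rate(predicted, observed, tol=0.1):
    """predicted, observed: arrays of shape (n_samples, n_robots, 2)."""
    dist = np.linalg.norm(predicted - observed, axis=-1)  # (n_samples, n_robots)
    return 100.0 * np.mean(dist < tol)                    # percentage of successes

# Grouping the measurements by inter-image period would reproduce the kind of
# "bell curve" described above, peaking at the training period, e.g.:
# rates = {period: success_rate(pred[period], obs[period]) for period in periods}
```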
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
- Carvalho, J.L.; Farias, P.C.; Simas Filho, E.F. Global Localization of Unmanned Ground Vehicles Using Swarm Intelligence and Evolutionary Algorithms. J. Intell. Robot. Syst. 2023, 107, 45. [Google Scholar] [CrossRef]
- Se, S.; Lowe, D.; Little, J. Local and global localization for mobile robots using visual landmarks. In Proceedings of the 2001 IEEE/RSJ International Conference on Intelligent Robots and Systems, Expanding the Societal Role of Robotics in the Next Millennium (Cat. No. 01CH37180), Maui, HI, USA, 29 October–3 November 2001; Volume 1, pp. 414–420. [Google Scholar]
- Chen, S.; Yin, D.; Niu, Y. A survey of robot swarms’ relative localization method. Sensors 2022, 22, 4424. [Google Scholar] [CrossRef]
- Quan, L.; Yin, L.; Xu, C.; Gao, F. Distributed swarm trajectory optimization for formation flight in dense environments. In Proceedings of the 2022 International Conference on Robotics and Automation (ICRA), Philadelphia, PA, USA, 23–27 May 2022; pp. 4979–4985. [Google Scholar]
- Gao, Y.; Wang, Y.; Zhong, X.; Yang, T.; Wang, M.; Xu, Z.; Wang, Y.; Lin, Y.; Xu, C.; Gao, F. Meeting-merging-mission: A multi-robot coordinate framework for large-scale communication-limited exploration. In Proceedings of the 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Kyoto, Japan, 23–27 October 2022; pp. 13700–13707. [Google Scholar]
- Joubert, N.; Reid, T.G.; Noble, F. Developments in modern GNSS and its impact on autonomous vehicle architectures. In Proceedings of the 2020 IEEE Intelligent Vehicles Symposium (IV), Las Vegas, NV, USA, 19 October–13 November 2020; pp. 2029–2036. [Google Scholar]
- Krasuski, K.; Ciećko, A.; Bakuła, M.; Grunwald, G.; Wierzbicki, D. New methodology of designation the precise aircraft position based on the RTK GPS solution. Sensors 2021, 22, 21. [Google Scholar] [CrossRef] [PubMed]
- Sesyuk, A.; Ioannou, S.; Raspopoulos, M. A survey of 3D indoor localization systems and technologies. Sensors 2022, 22, 9380. [Google Scholar] [CrossRef] [PubMed]
- Flocchini, P.; Prencipe, G.; Santoro, N.; Widmayer, P. Arbitrary pattern formation by asynchronous, anonymous, oblivious robots. Theor. Comput. Sci. 2008, 407, 412–447. [Google Scholar] [CrossRef]
- Di Luna, G.A.; Uehara, R.; Viglietta, G.; Yamauchi, Y. Gathering on a circle with limited visibility by anonymous oblivious robots. arXiv 2020, arXiv:2005.07917. [Google Scholar]
- Yamauchi, Y. Symmetry of anonymous robots. In Distributed Computing by Mobile Entities: Current Research in Moving and Computing; Springer: Berlin/Heidelberg, Germany, 2019; pp. 109–133. [Google Scholar]
- Poulet, O.; Guérin, F.; Guinand, F. Self-localization of anonymous mobile robots from aerial images. In Proceedings of the 2018 European Control Conference (ECC), Limassol, Cyprus, 12–15 June 2018; pp. 1094–1099. [Google Scholar]
- Siva, J.; Poellabauer, C. Robot and drone localization in gps-denied areas. In Mission-Oriented Sensor Networks and Systems: Art and Science; Springer: Berlin/Heidelberg, Germany, 2019; Volume 2, pp. 597–631. [Google Scholar]
- Kim Geok, T.; Zar Aung, K.; Sandar Aung, M.; Thu Soe, M.; Abdaziz, A.; Pao Liew, C.; Hossain, F.; Tso, C.P.; Yong, W.H. Review of indoor positioning: Radio wave technology. Appl. Sci. 2020, 11, 279. [Google Scholar] [CrossRef]
- Gönültaş, E.; Lei, E.; Langerman, J.; Huang, H.; Studer, C. CSI-based multi-antenna and multi-point indoor positioning using probability fusion. IEEE Trans. Wirel. Commun. 2021, 21, 2162–2176. [Google Scholar] [CrossRef]
- Kaune, R. Accuracy studies for TDOA and TOA localization. In Proceedings of the 2012 15th International Conference on Information Fusion, Singapore, 9–12 July 2012; pp. 408–415. [Google Scholar]
- Liu, X.; Zhou, B.; Huang, P.; Xue, W.; Li, Q.; Zhu, J.; Qiu, L. Kalman filter-based data fusion of Wi-Fi RTT and PDR for indoor localization. IEEE Sens. J. 2021, 21, 8479–8490. [Google Scholar] [CrossRef]
- Šoštarić, D.; Mester, G. Drone localization using ultrasonic TDOA and RSS signal: Integration of the inverse method of a particle filter. FME Trans. 2020, 48, 21–30. [Google Scholar] [CrossRef]
- Menta, E.Y.; Malm, N.; Jäntti, R.; Ruttik, K.; Costa, M.; Leppänen, K. On the performance of AoA-based localization in 5G ultra-dense networks. IEEE Access 2019, 7, 33870–33880. [Google Scholar] [CrossRef]
- Thomas, F.; Ros, L. Revisiting trilateration for robot localization. IEEE Trans. Robot. 2005, 21, 93–101. [Google Scholar] [CrossRef]
- Kokkinis, A.; Kanaris, L.; Liotta, A.; Stavrou, S. RSS indoor localization based on a single access point. Sensors 2019, 19, 3711. [Google Scholar] [CrossRef]
- Lian, L.; Xia, S.; Zhang, S.; Wu, Q.; Jing, C. Improved Indoor positioning algorithm using KPCA and ELM. In Proceedings of the 2019 11th International Conference on Wireless Communications and Signal Processing (WCSP), Xi’an, China, 23–25 October 2019; pp. 1–5. [Google Scholar]
- Cebollada, S.; Payá, L.; Flores, M.; Peidró, A.; Reinoso, O. A state-of-the-art review on mobile robotics tasks using artificial intelligence and visual data. Expert Syst. Appl. 2021, 167, 114195. [Google Scholar] [CrossRef]
- Wozniak, P.; Afrisal, H.; Esparza, R.G.; Kwolek, B. Scene recognition for indoor localization of mobile robots using deep CNN. In Proceedings of the Computer Vision and Graphics: International Conference, ICCVG 2018, Warsaw, Poland, 17–19 September 2018; Proceedings. Springer International Publishing: Berlin/Heidelberg, Germany, 2018; pp. 137–147. [Google Scholar]
- Xu, S.; Chou, W.; Dong, H. A robust indoor localization system integrating visual localization aided by CNN-based image retrieval with Monte Carlo localization. Sensors 2019, 19, 249. [Google Scholar] [CrossRef] [PubMed]
- Walch, F.; Hazirbas, C.; Leal-Taixe, L.; Sattler, T.; Hilsenbeck, S.; Cremers, D. Image-based localization using lstms for structured feature correlation. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 627–637. [Google Scholar]
- Li, Y.; Hu, X.; Zhuang, Y.; Gao, Z.; Zhang, P.; El-Sheimy, N. Deep reinforcement learning (DRL): Another perspective for unsupervised wireless localization. IEEE Internet Things J. 2019, 7, 6279–6287. [Google Scholar] [CrossRef]
- Magrin, C.E.; Todt, E. Multi-Sensor Fusion Method Based on Artificial Neural Network for Mobile Robot Self-Localization. In Proceedings of the 2019 Latin American Robotics Symposium (LARS), 2019 Brazilian Symposium on Robotics (SBR) and 2019 Workshop on Robotics in Education (WRE), Rio Grande, Brazil, 23–25 October 2019; pp. 138–143. [Google Scholar] [CrossRef]
- Tang, Q.N.; Truong, X.T.; Nguyen, D.Q. An indoor localization method for mobile robot using ceiling mounted AprilTags. J. Sci. Tech. 2022, 17. [Google Scholar]
- Kalaitzakis, M.; Cain, B.; Carroll, S.; Ambrosi, A.; Whitehead, C.; Vitzilaios, N. Fiducial markers for pose estimation: Overview, applications and experimental comparison of the ARTag, AprilTag, ArUco and STag markers. J. Intell. Robot. Syst. 2021, 101, 71. [Google Scholar] [CrossRef]
- Kato, H.; Billinghurst, M. Marker tracking and hmd calibration for a video-based augmented reality conferencing system. In Proceedings of the 2nd IEEE and ACM International Workshop on Augmented Reality (IWAR’99), San Francisco, CA, USA, 20–21 October 1999; pp. 85–94. [Google Scholar]
- Franchi, A.; Oriolo, G.; Stegagno, P. Mutual localization in multi-robot systems using anonymous relative measurements. Int. J. Robot. Res. 2013, 32, 1302–1322. [Google Scholar] [CrossRef]
- Nguyen, T.; Mohta, K.; Taylor, C.J.; Kumar, V. Vision-based multi-MAV localization with anonymous relative measurements using coupled probabilistic data association filter. In Proceedings of the 2020 IEEE International Conference on Robotics and Automation (ICRA), Paris, France, 31 May–31 August 2020; pp. 3349–3355. [Google Scholar]
- Wang, Y.; Wen, X.; Yin, L.; Xu, C.; Cao, Y.; Gao, F. Certifiably optimal mutual localization with anonymous bearing measurements. IEEE Robot. Autom. Lett. 2022, 7, 9374–9381. [Google Scholar] [CrossRef]
- Poulet, O.; Guérin, F.; Guinand, F. Experimental and Simulation Platforms for Anonymous Robots Self-Localization. In Proceedings of the 2021 29th Mediterranean Conference on Control and Automation (MED), Puglia, Italy, 22–25 June 2021; pp. 949–954. [Google Scholar]
- Kruse, R.; Mostaghim, S.; Borgelt, C.; Braune, C.; Steinbrecher, M. Multi-layer perceptrons. In Computational Intelligence: A Methodological Introduction; Springer International Publishing: Cham, Switzerland, 2022; pp. 53–124. [Google Scholar]
- Stathakis, D. How many hidden layers and nodes? Int. J. Remote. Sens. 2009, 30, 2133–2147. [Google Scholar] [CrossRef]
- Rasheed, F.; Yau, K.L.A.; Noor, R.M.; Wu, C.; Low, Y.C. Deep reinforcement learning for traffic signal control: A review. IEEE Access 2020, 8, 208016–208044. [Google Scholar] [CrossRef]
- Rasamoelina, A.D.; Adjailia, F.; Sinčák, P. A review of activation function for artificial neural network. In Proceedings of the 2020 IEEE 18th World Symposium on Applied Machine Intelligence and Informatics (SAMI), Herlany, Slovakia, 23–25 January 2020; pp. 281–286. [Google Scholar]
- Pratiwi, H.; Windarto, A.P.; Susliansyah, S.; Aria, R.R.; Susilowati, S.; Rahayu, L.K.; Fitriani, Y.; Merdekawati, A.; Rahadjeng, I.R. Sigmoid activation function in selecting the best model of artificial neural networks. J. Phys. Conf. Ser. 2020, 471, 012010. [Google Scholar] [CrossRef]
- Haji, S.H.; Abdulazeez, A.M. Comparison of optimization techniques based on gradient descent algorithm: A review. Palarch’s J. Archaeol. Egypt/Egyptol. 2021, 18, 2715–2743. [Google Scholar]
- Manaswi, N.K. Understanding and working with Keras. In Deep Learning with Applications Using Python: Chatbots and Face, Object, and Speech Recognition with TensorFlow and Keras; Springer: Berlin/Heidelberg, Germany, 2018; pp. 31–43. [Google Scholar]
- Gerkey, B.; Vaughan, R.T.; Howard, A. The player/stage project: Tools for multi-robot and distributed sensor systems. In Proceedings of the 11th International Conference on Advanced Robotics, Coimbra, Portugal, 30 June–3 July 2003; Volume 1, pp. 317–323. [Google Scholar]
- Amsters, R.; Slaets, P. Turtlebot 3 as a robotics education platform. In Robotics in Education: Current Research and Innovations 10; Springer International Publishing: Berlin/Heidelberg, Germany, 2020; pp. 170–181. [Google Scholar]
| K |  |  |  |  |  |  |  |
|---|---|---|---|---|---|---|---|
| 0.1 | 0.2 | 0.2 | 0.4 | 0 | 0.05 | 100 | 30 |

| K |  |  |  |  |  |  |
|---|---|---|---|---|---|---|
| 0.025 | 0.05 | 0.1 | 0.2 | −0.1 | 0.1 | 30 |
| Robot number (initial/final) | Probability |
|---|---|
| 3/3 | 0.970 |
| 2/2 | 0.951 |
| 4/4 | 0.951 |
| 5/5 | 0.939 |
| 1/1 | 0.938 |
| 6/6 | 0.934 |
| 4/1 | 0.001 |
| 1/4 | 0.001 |
| 2/3 | 0.000 |
| 3/2 | 0.000 |
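A probability table of this kind can be turned into a single initial-to-final pairing by selecting the globally most probable association. The sketch below shows one standard way to do this with the Hungarian algorithm from SciPy; it is an illustration only, the probability values are placeholders, and it is not necessarily the procedure used by the authors.

```python
# Illustration only: choosing an initial->final robot pairing from a matrix of
# association probabilities with the Hungarian algorithm (SciPy). The matrix
# values below are placeholders, not those reported in the table above.
import numpy as np
from scipy.optimize import linear_sum_assignment

# prob[i, j] = estimated probability that initial robot i is final robot j
prob = np.array([
    [0.94, 0.02, 0.02, 0.02],
    [0.03, 0.90, 0.04, 0.03],
    [0.02, 0.05, 0.91, 0.02],
    [0.01, 0.03, 0.03, 0.93],
])

# Maximize the total probability of the pairing (minimize its negative).
rows, cols = linear_sum_assignment(-prob)
pairing = {int(i) + 1: int(j) + 1 for i, j in zip(rows, cols)}  # 1-based robot numbers
print(pairing)  # e.g. {1: 1, 2: 2, 3: 3, 4: 4}
```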
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Poulet, O.; Guinand, F.; Guérin, F. Self-Localization of Anonymous UGVs Using Deep Learning from Periodic Aerial Images for a GPS-Denied Environment. Robotics 2024, 13, 148. https://doi.org/10.3390/robotics13100148