Abstract
Practical work scenarios often require robots to repeat specific tasks, including navigating along a desired path. Visual teach and repeat systems are a class of autonomous navigation in which a robot repeats a previously taught path using a camera and dead reckoning. Many teach and repeat methods have been proposed in the literature, but only a few are open-source. In this paper, we compare four recently published open-source methods and the proprietary solution embedded in the Boston Dynamics Spot robot. Each method was designed for a different use case, which shapes its strengths and weaknesses. When choosing a method, factors such as the target environment and the desired precision and speed should be taken into account. For example, in controlled artificial environments, which do not change significantly, navigation precision and speed matter more than robustness to environmental variations. In contrast, the appearance of unstructured natural environments varies over time, making robustness to change a crucial property of outdoor navigation systems. This paper compares the speed, precision, reliability, robustness, and practicality of the available teach and repeat methods, and outlines their strengths and flaws to help choose the most suitable method for a particular application.
This research was funded by the Czech Science Foundation, research project number 20-27034J ‘ToltaTempo’.
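Although the compared systems differ in detail, the following minimal sketch illustrates the repeat phase common to camera-plus-dead-reckoning teach and repeat, in the spirit of bearing-only systems such as BearNav. The Robot interface, MapRecord layout, and match_horizontal_shift function are hypothetical placeholders for illustration, not the API of any evaluated system.

```python
from dataclasses import dataclass
from typing import Any, List

@dataclass
class MapRecord:
    distance: float   # odometric distance along the taught path [m]
    image: Any        # camera image (or extracted features) stored while teaching

def match_horizontal_shift(taught_image: Any, current_image: Any) -> float:
    """Hypothetical image registration step: return the horizontal offset
    (in pixels) of the current view relative to the taught one, e.g. via
    feature matching and a histogram vote over the x-displacements."""
    raise NotImplementedError

def repeat(robot: Any, taught_map: List[MapRecord],
           forward_speed: float = 0.3, steering_gain: float = 0.002) -> None:
    """Replay a taught path: dead reckoning selects the relevant map record,
    while the visual shift steers the robot back toward the taught route."""
    for record in taught_map:
        # Drive forward until odometry says this record's position is reached.
        while robot.travelled_distance() < record.distance:
            shift_px = match_horizontal_shift(record.image, robot.camera_image())
            # Steer proportionally to the image shift; correcting heading
            # while moving forward also reduces the lateral error over time.
            robot.set_velocity(forward_speed, steering_gain * shift_px)
    robot.set_velocity(0.0, 0.0)   # stop at the end of the taught path
```

The key design choice, analysed in the bearing-only teach and repeat literature, is that the heading correction derived from the camera, combined with forward motion along the taught path, gradually cancels the position error accumulated by dead reckoning, so no explicit metric localisation is required.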
Copyright information
© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Simon, M., Broughton, G., Rouček, T., Rozsypálek, Z., Krajník, T. (2023). Performance Comparison of Visual Teach and Repeat Systems for Mobile Robots. In: Mazal, J., et al. Modelling and Simulation for Autonomous Systems. MESAS 2022. Lecture Notes in Computer Science, vol 13866. Springer, Cham. https://doi.org/10.1007/978-3-031-31268-7_1
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-31267-0
Online ISBN: 978-3-031-31268-7