
Evaluation of Field of View Width in Stereo-vision-Based Visual Homing

Published online by Cambridge University Press: 03 July 2019

D. M. Lyons*, B. Barriage and L. Del Signore
Affiliation: Robotics and Computer Vision Lab, Fordham University, Bronx, NY 10458, USA. E-mails: bbarriage@fordham.edu, ldelsignore@fordham.edu
*Corresponding author. E-mail: dlyons@fordham.edu

Summary

Visual homing is a local navigation technique that directs a robot to a previously seen location by comparing the image captured at the original location with the current visual image. Prior work has shown that exploiting depth cues such as image scale or stereo-depth leads to improved homing performance. While it is not unusual to use a panoramic field of view (FOV) camera in visual homing, panoramic FOV stereo-cameras are uncommon. Thus, while the availability of stereo-depth information may improve performance, the restricted FOV that accompanies a common stereo-camera may degrade it, unless specialized stereo hardware is used. In this paper, we investigate the effect of varying FOV width on the performance of a stereo-vision-based visual homing algorithm using a common stereo-camera. We have collected six stereo-vision homing databases: three indoor and three outdoor. Based on over 350,000 homing trials, we show that while a larger FOV yields performance improvements at larger homing offset angles, the relative improvement falls off with increasing FOV and in fact decreases for the widest FOV tested. We conduct additional experiments to identify the cause of this fall-off, which we term the 'blinder' effect, and which we predict should affect other correspondence-based visual homing algorithms.
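To make the correspondence-based homing idea above concrete, the sketch below estimates a 2-D home vector from landmarks matched between the home (snapshot) view and the current view, each with a bearing and a stereo depth. This is a minimal illustration in Python/NumPy under the simplifying assumption that the robot's orientation is the same at both poses; it is not the authors' algorithm, and the function name and toy data are hypothetical.

```python
# Minimal sketch: estimate a 2-D home vector from landmarks matched
# between the home (snapshot) view and the current view. Assumes the
# robot's orientation is identical at both poses; bearings are in
# radians in the robot frame, depths in meters (e.g., from stereo).
# Hypothetical names and toy data -- not the authors' algorithm.
import numpy as np

def home_vector(bearings_home, depths_home, bearings_cur, depths_cur):
    # Landmark positions in the robot's local frame at each pose.
    p_home = np.stack([depths_home * np.cos(bearings_home),
                       depths_home * np.sin(bearings_home)], axis=1)
    p_cur = np.stack([depths_cur * np.cos(bearings_cur),
                      depths_cur * np.sin(bearings_cur)], axis=1)
    # If the robot has translated by t (orientation unchanged), every
    # landmark shifts by -t in the local frame, so the mean difference
    # p_cur - p_home estimates -t: the vector pointing back to home.
    return (p_cur - p_home).mean(axis=0)

# Toy example: three matched landmarks seen from two nearby poses.
b_h = np.array([0.10, 0.80, -0.60]); d_h = np.array([3.0, 2.5, 4.0])
b_c = np.array([0.15, 0.90, -0.50]); d_c = np.array([2.6, 2.2, 3.7])
print(home_vector(b_h, d_h, b_c, d_c))  # drive direction toward home
```

With matching orientations, a landmark appears shifted by the negated robot translation in the current frame, so averaging the per-landmark differences recovers the vector back toward home; the paper's FOV question then amounts to how many such matched landmarks survive as the camera's view narrows.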

Type: Articles
Copyright: © Cambridge University Press 2019

