Authors:
Satyajit Tourani 1; Dhagash Desai 1; Udit Singh Parihar 1; Sourav Garg 2; Ravi Kiran Sarvadevabhatla 3; Michael Milford 2 and K. Madhava Krishna 1
Affiliations:
1 Robotics Research Center, IIIT Hyderabad, India
2 Centre for Robotics, Queensland University of Technology (QUT), Australia
3 Centre for Visual Information Technology, IIIT Hyderabad, India
Keyword(s):
Visual Place Recognition, Homography, Image Representation, Pose Graph Optimization, Correspondences Detection.
Abstract:
Significant recent advances have been made in Visual Place Recognition (VPR), feature correspondence and localization due to deep-learning-based methods. However, existing approaches tend to address, partially or fully, only one of two key challenges: viewpoint change and perceptual aliasing. In this paper, we present novel research that simultaneously addresses both challenges by combining deep-learnt features with geometric transformations based on domain knowledge about navigation on a ground-plane, without specialized hardware (e.g. downwards facing cameras, etc.). In particular, our integration of VPR with SLAM by leveraging the robustness of deep-learnt features and our homography-based extreme viewpoint invariance significantly boosts the performance of VPR, feature correspondence and pose graph sub-modules of the SLAM pipeline. We demonstrate a localization system capable of state-of-the-art performance despite perceptual aliasing and extreme 180-degree-rotated viewpoint change in a range of real-world and simulated experiments. Our system is able to achieve early loop closures that prevent significant drifts in SLAM trajectories.
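The core mechanism the abstract describes, matching deep-learnt local features between two views and verifying the match with a planar homography before accepting a loop closure, can be sketched roughly as follows. This is a minimal illustration under assumptions, not the authors' implementation: the mutual nearest-neighbour matcher, the use of OpenCV's RANSAC homography fit, and the reprojection threshold are all placeholders for whatever the paper actually uses.

```python
# Minimal sketch: homography-based geometric verification of a VPR match.
# Assumes pre-computed keypoints (N x 2 pixel coordinates) and L2-normalised
# local descriptors (N x D) for two images from some deep feature network;
# this is not the authors' pipeline, only an illustrative stand-in.
import numpy as np
import cv2


def mutual_nn_matches(desc_a: np.ndarray, desc_b: np.ndarray) -> np.ndarray:
    """Return index pairs (i, j) that are mutual nearest neighbours."""
    sim = desc_a @ desc_b.T                  # cosine similarity for unit-norm descriptors
    nn_ab = sim.argmax(axis=1)               # best match in B for each descriptor in A
    nn_ba = sim.argmax(axis=0)               # best match in A for each descriptor in B
    idx_a = np.arange(desc_a.shape[0])
    mutual = nn_ba[nn_ab[idx_a]] == idx_a    # keep only cross-checked matches
    return np.stack([idx_a[mutual], nn_ab[idx_a][mutual]], axis=1)


def verify_with_homography(kpts_a, kpts_b, desc_a, desc_b,
                           ransac_thresh_px: float = 5.0):
    """Fit a planar homography to matched keypoints; return it and the inlier count."""
    pairs = mutual_nn_matches(desc_a, desc_b)
    if len(pairs) < 4:                       # a homography needs at least 4 correspondences
        return None, 0
    src = kpts_a[pairs[:, 0]].astype(np.float32)
    dst = kpts_b[pairs[:, 1]].astype(np.float32)
    H, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, ransac_thresh_px)
    n_inliers = int(inlier_mask.sum()) if inlier_mask is not None else 0
    return H, n_inliers
```

In such a scheme, a place-recognition candidate would be accepted as a loop closure only if the inlier count clears a threshold, and the verified relative transform would then be added as a constraint to the pose graph.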