Authors: Satyajit Tourani 1 ; Dhagash Desai 1 ; Udit Singh Parihar 1 ; Sourav Garg 2 ; Ravi Kiran Sarvadevabhatla 3 ; Michael Milford 2 and K. Madhava Krishna 1

Affiliations: 1 Robotics Research Center, IIIT Hyderabad, India ; 2 Centre for Robotics, Queensland University of Technology (QUT), Australia ; 3 Centre for Visual Information Technology, IIIT Hyderabad, India

Keyword(s): Visual Place Recognition, Homography, Image Representation, Pose Graph Optimization, Correspondence Detection.

Abstract: Significant recent advances have been made in Visual Place Recognition (VPR), feature correspondence and localization due to deep-learning-based methods. However, existing approaches tend to address, partially or fully, only one of two key challenges: viewpoint change and perceptual aliasing. In this paper, we present novel research that simultaneously addresses both challenges by combining deep-learnt features with geometric transformations based on domain knowledge about navigation on a ground plane, without specialized hardware (e.g. downward-facing cameras). In particular, our integration of VPR with SLAM, leveraging the robustness of deep-learnt features and our homography-based extreme viewpoint invariance, significantly boosts the performance of the VPR, feature correspondence and pose graph sub-modules of the SLAM pipeline. We demonstrate a localization system capable of state-of-the-art performance despite perceptual aliasing and extreme 180-degree-rotated viewpoint change in a range of real-world and simulated experiments. Our system is able to achieve early loop closures that prevent significant drift in SLAM trajectories.
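To illustrate the geometric idea behind the abstract's homography-based viewpoint invariance: when a camera observes a planar surface (such as the ground plane), views related by a rotation and translation are connected by a plane-induced homography H = K (R - t nᵀ / d) K⁻¹. The sketch below is not the authors' code; the intrinsics, the 180-degree yaw, and the plane parameters are illustrative assumptions, shown only to make the transformation concrete.

```python
# Illustrative sketch (not the paper's implementation): the homography
# induced by a plane with unit normal n at distance d, mapping pixels
# between two camera views related by rotation R and translation t.
import numpy as np

def plane_homography(K, R, t, n, d):
    """H = K (R - t n^T / d) K^-1 for a plane (n, d) in the first camera frame."""
    return K @ (R - np.outer(t, n) / d) @ np.linalg.inv(K)

def warp_point(H, p):
    """Apply homography H to pixel p = (u, v); returns the warped (u', v')."""
    q = H @ np.array([p[0], p[1], 1.0])
    return q[:2] / q[2]

# Assumed intrinsics (focal length 500 px, principal point at image centre).
K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])

# A 180-degree yaw about the vertical axis: the "opposing viewpoint" case.
yaw = np.pi
R = np.array([[ np.cos(yaw), 0.0, np.sin(yaw)],
              [         0.0, 1.0,         0.0],
              [-np.sin(yaw), 0.0, np.cos(yaw)]])
t = np.zeros(3)                  # pure rotation, for simplicity
n = np.array([0.0, 1.0, 0.0])    # ground-plane normal along the camera y-axis

H = plane_homography(K, R, t, n, d=1.0)
print(warp_point(H, (320.0, 240.0)))   # warp the principal point
```

With zero rotation and zero translation the homography reduces to the identity, which is a quick sanity check on the formula; in the paper's setting, warping one view through such a homography is what allows feature correspondence to survive the extreme 180-degree viewpoint change.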

CC BY-NC-ND 4.0


Paper citation in several formats:
Tourani, S., Desai, D., Parihar, U. S., Garg, S., Sarvadevabhatla, R. K., Milford, M. and Krishna, K. M. (2021). Early Bird: Loop Closures from Opposing Viewpoints for Perceptually-aliased Indoor Environments. In Proceedings of the 16th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP 2021) - Volume 5: VISAPP; ISBN 978-989-758-488-6; ISSN 2184-4321, SciTePress, pages 409-416. DOI: 10.5220/0010230804090416

@conference{visapp21,
author={Satyajit Tourani and Dhagash Desai and Udit Singh Parihar and Sourav Garg and Ravi Kiran Sarvadevabhatla and Michael Milford and K. Madhava Krishna},
title={Early Bird: Loop Closures from Opposing Viewpoints for Perceptually-aliased Indoor Environments},
booktitle={Proceedings of the 16th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP 2021) - Volume 5: VISAPP},
year={2021},
pages={409-416},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0010230804090416},
isbn={978-989-758-488-6},
issn={2184-4321},
}

TY - CONF

JO - Proceedings of the 16th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP 2021) - Volume 5: VISAPP
TI - Early Bird: Loop Closures from Opposing Viewpoints for Perceptually-aliased Indoor Environments
SN - 978-989-758-488-6
IS - 2184-4321
AU - Tourani, S.
AU - Desai, D.
AU - Parihar, U.
AU - Garg, S.
AU - Sarvadevabhatla, R.
AU - Milford, M.
AU - Krishna, K.
PY - 2021
SP - 409
EP - 416
DO - 10.5220/0010230804090416
PB - SciTePress
