[CVPR 2025] Reloc3r: Large-Scale Training of Relative Camera Pose Regression for Generalizable, Fast, and Accurate Visual Localization
Siyan Dong* · Shuzhe Wang* · Shaohui Liu · Lulu Cai · Qingnan Fan · Juho Kannala · Yanchao Yang
Reloc3r is a simple yet effective camera pose estimation framework that combines a pre-trained two-view relative camera pose regression network with a multi-view motion averaging module.
Trained on approximately 8 million posed image pairs, Reloc3r achieves surprisingly good performance and generalization, producing high-quality camera pose estimates in real time.
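For intuition, here is a minimal sketch (hypothetical names, not the actual Reloc3r API) of the two-stage idea: a regressed two-view relative pose is chained with a known database pose to give one absolute pose estimate, and motion averaging fuses the estimates from several database views.

# Conceptual sketch only -- function and variable names are hypothetical,
# not the Reloc3r API. Poses are 4x4 camera-to-world matrices.
import numpy as np

def absolute_from_relative(T_world_db, T_db_query):
    # Chain the known database pose with the regressed database->query
    # relative pose to obtain one estimate of the query's absolute pose.
    return T_world_db @ T_db_query

# With N database views, each pair yields one such estimate; a motion
# averaging step fuses them into a single consistent query pose.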
- TODO List
- Installation
- Usage
- Evaluation on Relative Camera Pose Estimation
- Evaluation on Visual Localization
- Training
- Citation
- Acknowledgments
## TODO List
- Release pre-trained weights and inference code.
- Release evaluation code for ScanNet1500 and MegaDepth1500 datasets.
- Release evaluation code for 7Scenes and Cambridge datasets.
- Release sample code for self-captured images and videos.
- Release training code and data.
- Evaluation code for other datasets.
- Accelerated version for visual localization.
- Gradio demo.
## Installation
- Clone Reloc3r
git clone --recursive https://github.com/ffrivera0/reloc3r.git
cd reloc3r
# if you have already cloned reloc3r:
# git submodule update --init --recursive
- Create the environment using conda
conda create -n reloc3r python=3.11 cmake=3.14.0
conda activate reloc3r
conda install pytorch torchvision pytorch-cuda=12.1 -c pytorch -c nvidia # use the correct version of cuda for your system
pip install -r requirements.txt
# optional: you can also install additional packages to:
# - add support for HEIC images
pip install -r requirements_optional.txt
- Optional: Compile the CUDA kernels for RoPE
# Reloc3r relies on RoPE positional embeddings for which you can compile some cuda kernels for faster runtime.
cd croco/models/curope/
python setup.py build_ext --inplace
cd ../../../
- Optional: Download the checkpoints Reloc3r-224/Reloc3r-512. The pre-trained weights are also downloaded automatically when running the evaluation and demo code below.
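If you prefer to cache the weights manually, the standard PyTorch download helper works; the URL below is a placeholder, not the actual hosting location of the checkpoints.

# Hedged sketch: manually fetching and caching a checkpoint with PyTorch.
# CKPT_URL is a PLACEHOLDER -- substitute the real Reloc3r-224/512 link.
import torch

CKPT_URL = "https://example.com/reloc3r-512.pth"  # hypothetical URL
state_dict = torch.hub.load_state_dict_from_url(CKPT_URL, map_location="cpu")
# then: model.load_state_dict(state_dict) once the model is instantiated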
## Usage
Using Reloc3r, you can estimate camera poses for your own images and videos.
For relative pose estimation, try the demo code in wild_relpose.py. We provide some image pairs used in our paper.
# replace the args with your paths
python wild_relpose.py --v1_path ./data/wild_images/zurich0.jpg --v2_path ./data/wild_images/zurich1.jpg --output_folder ./data/wild_images/
Visualize the relative pose
# replace the args with your paths
python visualization.py --mode relpose --pose_path ./data/wild_images/pose2to1.txt
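The saved pose can also be inspected directly. A short sketch, assuming pose2to1.txt holds a 4x4 homogeneous matrix (check the demo output to confirm the convention); note that the translation from a two-view network is meaningful only up to scale.

# Assumes a 4x4 homogeneous pose matrix in the text file.
import numpy as np

T = np.loadtxt("./data/wild_images/pose2to1.txt").reshape(4, 4)
R, t = T[:3, :3], T[:3, 3]
# Rotation angle from the trace identity cos(theta) = (tr(R) - 1) / 2.
theta = np.degrees(np.arccos(np.clip((np.trace(R) - 1) / 2, -1.0, 1.0)))
print(f"rotation: {theta:.1f} deg, translation direction: {t / np.linalg.norm(t)}")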
For visual localization, the demo code in wild_visloc.py estimates absolute camera poses from sampled frames in self-captured videos.
Important
The demo simply uses the first and last frames as the database, which requires overlapping regions among all images. The demo does not support purely linear motion, where all camera centers lie on a single line. We provide some videos as examples.
# replace the args with your paths
python wild_visloc.py --video_path ./data/wild_video/ids.MOV --output_folder ./data/wild_video
Visualize the absolute poses
# replace the args with your paths
python visualization.py --mode visloc --pose_folder ./data/wild_video/ids_poses/
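The note above also hints at why linear motion fails: each database frame contributes a ray toward the query camera center, and the center is recovered as the least-squares intersection of those rays, which degenerates when all camera centers are collinear. A hedged numpy illustration of that intersection step (not the repository's actual solver):

# Least-squares intersection of rays (c_i + s * d_i): solve
# sum_i (I - d_i d_i^T) (x - c_i) = 0 for the point x closest to all rays.
import numpy as np

def intersect_rays(centers, dirs):
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for c, d in zip(centers, dirs):
        d = d / np.linalg.norm(d)
        P = np.eye(3) - np.outer(d, d)  # projector onto the ray's normal plane
        A += P
        b += P @ c
    return np.linalg.solve(A, b)  # singular when all rays are (anti)parallel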
## Evaluation on Relative Camera Pose Estimation
To reproduce our evaluation on ScanNet1500, download the dataset here and unzip it to ./data/scannet1500.
Then run the following script.
bash scripts/eval_scannet1500.sh
To reproduce our evaluation on MegaDepth1500, download the dataset here and unzip it to ./data/megadepth1500.
Then run the following script.
bash scripts/eval_megadepth1500.sh
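For reference, these two benchmarks are commonly scored by the maximum of the rotation and translation-direction angular errors per pair, with AUC reported at 5/10/20 degrees; translation is compared as a direction because two-view geometry leaves scale undetermined. A sketch of the per-pair error:

# Per-pair pose error as used by common two-view benchmarks: the max of
# rotation geodesic error and translation direction error, in degrees.
import numpy as np

def pose_error_deg(R_gt, t_gt, R_est, t_est):
    cos_r = (np.trace(R_est.T @ R_gt) - 1) / 2
    r_err = np.degrees(np.arccos(np.clip(cos_r, -1.0, 1.0)))
    cos_t = t_est @ t_gt / (np.linalg.norm(t_est) * np.linalg.norm(t_gt))
    t_err = np.degrees(np.arccos(np.clip(cos_t, -1.0, 1.0)))
    return max(r_err, t_err)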
Note
To achieve faster inference speed, set --amp=1. This enables fp16 evaluation, which increases speed from 24 FPS to 40 FPS on an RTX 4090 with Reloc3r-512, without any accuracy loss.
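The flag presumably toggles PyTorch's standard mixed-precision inference path; the generic pattern looks like this (dummy model for illustration):

# Generic PyTorch fp16 inference pattern -- what an --amp switch typically
# enables. The model here is a dummy placeholder.
import torch

model = torch.nn.Linear(8, 8).cuda().eval()
x = torch.randn(4, 8, device="cuda")
with torch.no_grad(), torch.autocast(device_type="cuda", dtype=torch.float16):
    y = model(x)
print(y.dtype)  # torch.float16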
## Evaluation on Visual Localization
To reproduce our evaluation on 7Scenes, download the dataset here and unzip it to ./data/7scenes.
Then run the following script.
bash scripts/eval_7scenes.sh
To reproduce our evaluation on Cambridge, download the dataset here and unzip it to ./data/cambridge.
Then run the following script.
bash scripts/eval_cambridge.sh
## Training
We follow DUSt3R to process the training data. Download the datasets: CO3Dv2, ScanNet++, ARKitScenes, BlendedMVS, MegaDepth, DL3DV, RealEstate10K.
For each dataset, we provide a preprocessing script in the datasets_preprocess directory and, when needed, an archive containing the list of pairs. You have to download the datasets yourself from their official sources, agree to their licenses, and run the preprocessing scripts.
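Whatever the dataset, the common ingredient is posed image pairs: the ground-truth relative pose for a pair follows from the two absolute camera poses. A small sketch, assuming 4x4 camera-to-world matrices:

# Ground-truth relative pose from two absolute poses (camera-to-world
# convention assumed): maps camera-2 coordinates into camera-1's frame.
import numpy as np

def relative_pose(T_w_c1, T_w_c2):
    return np.linalg.inv(T_w_c1) @ T_w_c2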
We provide a sample script to train Reloc3r with ScanNet++ on an RTX 3090 GPU:
bash scripts/train_small.sh
To reproduce our training for Reloc3r-512 with 8 H800 GPUs, run the following script:
bash scripts/train.sh
Note
These training scripts are not strictly equivalent to what was used to train the released Reloc3r models, but they should be close enough.
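For intuition about the objective such scripts optimize, here is an illustrative relative pose regression loss: geodesic rotation error plus angular error of the scale-free translation direction. The exact form used by Reloc3r is defined in the paper, so treat this as a representative sketch.

# Illustrative loss only -- not necessarily the exact Reloc3r objective.
# R_*: (B, 3, 3) rotation matrices; t_*: (B, 3) translations.
import torch

def relpose_loss(R_est, t_est, R_gt, t_gt, beta=1.0):
    # Geodesic rotation error: angle of R_est^T R_gt.
    cos_r = ((R_est.transpose(1, 2) @ R_gt).diagonal(dim1=1, dim2=2).sum(-1) - 1) / 2
    rot_err = torch.arccos(cos_r.clamp(-1 + 1e-7, 1 - 1e-7))
    # Angular error of the translation direction (scale is unobservable).
    cos_t = torch.nn.functional.cosine_similarity(t_est, t_gt, dim=-1)
    dir_err = torch.arccos(cos_t.clamp(-1 + 1e-7, 1 - 1e-7))
    return (beta * rot_err + dir_err).mean()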
## Citation
If you find our work helpful in your research, please consider citing:
@article{reloc3r,
  title={Reloc3r: Large-Scale Training of Relative Camera Pose Regression for Generalizable, Fast, and Accurate Visual Localization},
  author={Dong, Siyan and Wang, Shuzhe and Liu, Shaohui and Cai, Lulu and Fan, Qingnan and Kannala, Juho and Yang, Yanchao},
  journal={arXiv preprint arXiv:2412.08376},
  year={2024}
}
## Acknowledgments
Our implementation is based on several awesome repositories, including DUSt3R and CroCo. We thank the respective authors for open-sourcing their code.