
[CVPR 2025 Highlight] SLAM3R: Real-Time Dense Scene Reconstruction from Monocular RGB Videos

Yuzheng Liu* · Siyan Dong* · Shuzhe Wang · Yingda Yin · Yanchao Yang · Qingnan Fan · Baoquan Chen

SLAM3R is a real-time dense scene reconstruction system that regresses 3D points from video frames using feed-forward neural networks, without explicitly estimating camera parameters.

News

  • 2025-04: SLAM3R is covered by 机器之心 (article in Chinese)

  • 2025-04: 🎉 SLAM3R is selected as a highlight paper at CVPR 2025 and the top-1 paper at China3DV 2025.


Installation

  1. Clone SLAM3R
git clone https://github.com/PKU-VCL-3DV/SLAM3R.git
cd SLAM3R
  2. Prepare environment
conda create -n slam3r python=3.11 cmake=3.14.0
conda activate slam3r 
# install torch according to your cuda version
pip install torch==2.5.0 torchvision==0.20.0 torchaudio==2.5.0 --index-url https://download.pytorch.org/whl/cu118
pip install -r requirements.txt
# optional: install additional packages to support visualization and data preprocessing
pip install -r requirements_optional.txt
  3. Optional: Accelerate SLAM3R with XFormers and custom CUDA kernels for RoPE
# install XFormers according to your pytorch version, see https://github.com/facebookresearch/xformers
pip install xformers==0.0.28.post2
# compile cuda kernels for RoPE
# if the compilation fails, try the proposed solution: https://github.com/CUT3R/CUT3R/issues/7.
cd slam3r/pos_embed/curope/
python setup.py build_ext --inplace
cd ../../../
  4. Optional: Download the SLAM3R checkpoints for the Image-to-Points and Local-to-World models through HuggingFace
from slam3r.models import Image2PointsModel, Local2WorldModel
Image2PointsModel.from_pretrained('siyan824/slam3r_i2p')
Local2WorldModel.from_pretrained('siyan824/slam3r_l2w')

The pre-trained model weights will be downloaded automatically when running the demo and evaluation code below.
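
As an optional sanity check, the sketch below loads both checkpoints with the from_pretrained calls shown above and prepares them for inference. Everything beyond those two calls (device handling, the parameter count) is an illustrative assumption on our part, not part of the official instructions.

# Optional sanity check: load both checkpoints and prepare them for inference.
# The from_pretrained() calls mirror the snippet above; the rest (device handling,
# parameter count) is an illustrative assumption, not official SLAM3R usage.
import torch
from slam3r.models import Image2PointsModel, Local2WorldModel

device = "cuda" if torch.cuda.is_available() else "cpu"
print(f"PyTorch {torch.__version__}, running on {device}")

i2p = Image2PointsModel.from_pretrained('siyan824/slam3r_i2p').to(device).eval()
l2w = Local2WorldModel.from_pretrained('siyan824/slam3r_l2w').to(device).eval()

n_params = sum(p.numel() for m in (i2p, l2w) for p in m.parameters())
print(f"Loaded both models ({n_params / 1e6:.1f}M parameters in total)")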

Demo

Replica dataset

To run our demo on the Replica dataset, download the sample scene here and unzip it to ./data/Replica_demo/. Then run the following command to reconstruct the scene from the video frames:

bash scripts/demo_replica.sh

The results will be stored at ./results/ by default.

Self-captured outdoor data

We also provide a set of images extracted from a video captured in the wild. Download it here and unzip it to ./data/wild/.

Set the required parameters in this script, then run SLAM3R with the following command:

bash scripts/demo_wild.sh

When --save_preds is set in the script, the per-frame predictions used for reconstruction are saved to ./results/TEST_NAME/preds/. You can then visualize the incremental reconstruction process with the following command:

bash scripts/demo_vis_wild.sh

An Open3D window will appear after running the script. Adjust the view, press the space key to record it, and close the window. The script will then render the incremental reconstruction from the recorded view.

You can run SLAM3R on your own captured videos by following the steps above.
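
If you prefer to inspect a saved reconstruction outside the provided visualization script, the minimal Open3D sketch below loads an exported point cloud. The .ply path is a placeholder; the exact file layout under ./results/ may differ from run to run.

# Minimal sketch: inspect an exported reconstruction with Open3D.
# The .ply path is a placeholder -- adjust it to whatever your run actually
# wrote under ./results/; this is not part of the official scripts.
import open3d as o3d

pcd = o3d.io.read_point_cloud("results/TEST_NAME/recon.ply")  # placeholder path
print(pcd)  # reports how many points were loaded

# Downsample before rendering to keep the viewer responsive on large scenes.
pcd = pcd.voxel_down_sample(voxel_size=0.02)
o3d.visualization.draw_geometries([pcd])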

Gradio interface

We also provide a Gradio interface, where you can upload a directory, a video, or individual images for reconstruction. After setting the reconstruction parameters, click the 'Run' button to start the process. Adjusting the visualization parameters at the bottom lets you display different visualization results without rerunning the inference.

The interface can be launched with the following command:

python app.py

Here is a demo GIF for the Gradio interface (accelerated).

Evaluation on the Replica dataset

  1. Download the Replica dataset generated by the authors of iMAP:
cd data
wget https://cvg-data.inf.ethz.ch/nice-slam/data/Replica.zip
unzip Replica.zip
rm -rf Replica.zip
  2. Obtain the GT pointmaps and valid masks for each frame by running the following command:
python evaluation/process_gt.py

The processed GT will be saved at ./results/gt/replica.

  3. Evaluate the reconstruction on the Replica dataset with the following command:
bash ./scripts/eval_replica.sh

Both the numerical results and the error heatmaps will be saved in the directory ./results/TEST_NAME/eval/.
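
For intuition, benchmarks of this kind typically report accuracy (predicted-to-GT distance) and completeness (GT-to-predicted distance) after aligning the two point clouds. The sketch below is a generic, simplified illustration of those two quantities; it is not the project's evaluation code.

# Generic illustration of accuracy / completeness for aligned point clouds.
# This is NOT the project's evaluation code, just a simplified reference.
import numpy as np
from scipy.spatial import cKDTree

def accuracy_completeness(pred, gt):
    """pred, gt: (N, 3) arrays of 3D points in the same (aligned) frame."""
    acc = cKDTree(gt).query(pred, k=1)[0].mean()   # predicted -> ground truth
    comp = cKDTree(pred).query(gt, k=1)[0].mean()  # ground truth -> predicted
    return float(acc), float(comp)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    gt = rng.uniform(size=(10000, 3))
    pred = gt + rng.normal(scale=0.01, size=gt.shape)  # toy prediction with noise
    print(accuracy_completeness(pred, gt))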

Note

Different versions of CUDA, PyTorch, and xformers can lead to slight variations in the predicted point cloud. These differences may be amplified during the alignment process in evaluation. Consequently, the numerical results you obtain might differ from those reported in the paper. However, the average values should remain approximately the same.

Training

Datasets

We use ScanNet++, Aria Synthetic Environments, and Co3Dv2 to train our models. For data downloading and pre-processing, please refer to the instructions here.

Pretrained weights

# download the pretrained weights from DUSt3R
mkdir checkpoints 
wget https://download.europe.naverlabs.com/ComputerVision/DUSt3R/DUSt3R_ViTLarge_BaseDecoder_224_linear.pth -P checkpoints/
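
To confirm the download completed correctly before training, the short snippet below simply opens the checkpoint with torch.load and prints its top-level keys. It assumes nothing beyond a standard PyTorch checkpoint file; the exact keys depend on the DUSt3R release itself.

# Quick check that the downloaded DUSt3R checkpoint is readable.
import torch

ckpt = torch.load("checkpoints/DUSt3R_ViTLarge_BaseDecoder_224_linear.pth",
                  map_location="cpu")
print(list(ckpt.keys()) if isinstance(ckpt, dict) else type(ckpt))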

Start training

# train the Image-to-Points model and the retrieval module
bash ./scripts/train_i2p.sh
# train the Local-to-World model
bash ./scripts/train_l2w.sh

Note

The training settings above are not strictly equivalent to those used to train the released SLAM3R models, but they should be close enough.

Citation

If you find our work helpful in your research, please consider citing:

@article{slam3r,
  title={SLAM3R: Real-Time Dense Scene Reconstruction from Monocular RGB Videos},
  author={Liu, Yuzheng and Dong, Siyan and Wang, Shuzhe and Yin, Yingda and Yang, Yanchao and Fan, Qingnan and Chen, Baoquan},
  journal={arXiv preprint arXiv:2412.09401},
  year={2024}
}

Acknowledgments

Our implementation is based on several awesome open-source repositories. We thank the respective authors for open-sourcing their code.
