@misc{rayst3r,
title={RaySt3R: Predicting Novel Depth Maps for Zero-Shot Object Completion},
author={Bardienus P. Duisterhof and Jan Oberst and Bowen Wen and Stan Birchfield and Deva Ramanan and Jeffrey Ichnowski},
year={2025},
eprint={2506.05285},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2506.05285},
}
- Inference code
- Local Gradio demo
- Hugging Face demo
- Docker
- Training code
- Eval code
- ViT-S, No-DINO and Pointmap models
- Dataset release
mamba create -n rayst3r python=3.11 cmake=3.14.0
mamba activate rayst3r
mamba install pytorch torchvision pytorch-cuda=12.4 -c pytorch -c nvidia # change to your version of CUDA
pip install -r requirements.txt
# compile the cuda kernels for RoPE
cd extensions/curope/
python setup.py build_ext --inplace
cd ../../
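As an optional sanity check, you can confirm that PyTorch was installed with CUDA support before running inference:

```python
import torch

# prints the PyTorch version, the CUDA toolkit it was built against,
# and whether a GPU is visible
print(torch.__version__, torch.version.cuda, torch.cuda.is_available())
```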
The expected input for RaySt3R is a folder with the following structure:
📁 data_dir/
├── cam2world.pt # Camera-to-world transformation (PyTorch tensor), 4x4; defaults to eye(4) if not provided
├── depth.png # Depth image, uint16 with max 10 meters
├── intrinsics.pt # Camera intrinsics (PyTorch tensor), 3x3
├── mask.png # Binary mask image
└── rgb.png # RGB image
Note that the depth image needs to be saved as uint16, normalized to a 0-10 meter range. We provide an example directory in example_scene.
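If you are preparing this folder from your own data, a minimal sketch is below. The uint16 scaling (0-10 m mapped onto the full 0-65535 range) is our reading of the convention above, and the dummy depth, mask, intrinsics, and pose stand in for your own arrays:

```python
import os
import numpy as np
import torch
import cv2

os.makedirs("data_dir", exist_ok=True)

H, W = 480, 640
depth_m = np.full((H, W), 0.6, dtype=np.float32)   # your metric depth map, in meters
mask = np.ones((H, W), dtype=np.uint8)             # your binary foreground mask
K = np.array([[600.0, 0.0, W / 2],
              [0.0, 600.0, H / 2],
              [0.0, 0.0, 1.0]], dtype=np.float32)  # your 3x3 intrinsics
c2w = np.eye(4, dtype=np.float32)                  # camera-to-world; eye(4) if unknown

# encode depth as uint16 over a 0-10 m range
depth_u16 = (np.clip(depth_m, 0.0, 10.0) / 10.0 * 65535.0).astype(np.uint16)
cv2.imwrite("data_dir/depth.png", depth_u16)
cv2.imwrite("data_dir/mask.png", mask * 255)
torch.save(torch.from_numpy(K), "data_dir/intrinsics.pt")
torch.save(torch.from_numpy(c2w), "data_dir/cam2world.pt")
# plus data_dir/rgb.png, e.g. cv2.imwrite("data_dir/rgb.png", bgr_image)
```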
Run RaySt3R with:
python3 eval_wrapper/eval.py example_scene/
This writes a colored point cloud back into the input directory.
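To inspect the result programmatically, you can load it with, e.g., Open3D. The filename below is a placeholder; check the input directory for the actual file the script writes:

```python
import open3d as o3d

# placeholder filename: look in example_scene/ for the point cloud RaySt3R wrote
pcd = o3d.io.read_point_cloud("example_scene/pointcloud.ply")
print(pcd)                                # number of points and attributes
o3d.visualization.draw_geometries([pcd])  # interactive viewer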
Optional flags:
--visualize # Spins up a Rerun client to visualize predictions and camera poses
--run_octmae # Novel views sampled with the OctMAE parameters (see paper)
--set_conf N # Sets confidence threshold to N
--n_pred_views N # Number of predicted views along each axis of the sampling grid; e.g., 5 yields 22 views total
--filter_all_masks # Use all predicted masks; a point is rejected if any mask labels it background
--tsdf # Fits a TSDF to the depth maps
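For example, a run combining several of these flags (the threshold value is illustrative, not a recommendation):
python3 eval_wrapper/eval.py example_scene/ --visualize --set_conf 3.0 --n_pred_views 3 --tsdf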
We also provide a Gradio app, which uses MoGe and Rembg to generate 3D from a single image.
Launch it with:
python app.py
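If you instead want to build a mask.png for the folder-based pipeline yourself, Rembg can produce one from rgb.png. The sketch below mirrors the idea behind the app, not its exact code:

```python
from PIL import Image
from rembg import remove

rgb = Image.open("example_scene/rgb.png")
rgba = remove(rgb)                                   # RGBA with background removed
alpha = rgba.getchannel("A")
mask = alpha.point(lambda a: 255 if a > 127 else 0)  # binarize the soft alpha matte
mask.save("example_scene/mask.png")
```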
Certain applications may benefit from different hyperparameters; here we provide guidance on how to select them.
We sample novel views evenly on a cylindrical equal-area projection of the sphere. Customize sampling in sample_poses.py. Use --n_pred_views to reduce the total number of views, which makes inference faster and reduces overlap and artifacts.
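For intuition, the sampling idea can be sketched as follows. This is a generic Lambert cylindrical equal-area grid, while the actual pose construction lives in sample_poses.py:

```python
import numpy as np

def equal_area_directions(n: int) -> np.ndarray:
    """n*n unit view directions, uniform in height z and azimuth phi.
    Uniform-in-z sampling is area-uniform on the sphere (Lambert
    cylindrical equal-area projection)."""
    z = np.linspace(-1 + 1 / n, 1 - 1 / n, n)         # uniform in z <=> equal area
    phi = np.linspace(0.0, 2 * np.pi, n, endpoint=False)
    zz, pp = np.meshgrid(z, phi, indexing="ij")
    r = np.sqrt(1.0 - zz ** 2)
    return np.stack([r * np.cos(pp), r * np.sin(pp), zz], axis=-1).reshape(-1, 3)

dirs = equal_area_directions(5)  # viewing directions toward the object center
```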
You can set the confidence threshold with the --set_conf flag. As shown in the paper, a higher threshold generally improves accuracy and reduces edge bleeding, but it comes at the cost of completeness.
In addition to what was presented in the paper, we also provide the option to consider all predicted masks for each point: if any of the predicted masks classifies a point as background, that point is removed. In our limited testing this led to cleaner predictions, but it occasionally carves out crucial parts of the geometry.
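Schematically, the filter works like this (a sketch of the idea, not the repository's implementation):

```python
import torch

def keep_points_all_masks(pts_w, masks, Ks, cam2worlds):
    """Keep a world-space point only if NO predicted view labels it background.
    pts_w: (N, 3); masks: (V, H, W) bool foreground; Ks: (V, 3, 3);
    cam2worlds: (V, 4, 4)."""
    keep = torch.ones(pts_w.shape[0], dtype=torch.bool)
    for mask, K, c2w in zip(masks, Ks, cam2worlds):
        w2c = torch.linalg.inv(c2w)
        pts_c = pts_w @ w2c[:3, :3].T + w2c[:3, 3]   # world -> camera frame
        uvw = pts_c @ K.T                            # pinhole projection
        u = (uvw[:, 0] / uvw[:, 2]).long()
        v = (uvw[:, 1] / uvw[:, 2]).long()
        H, W = mask.shape
        visible = (pts_c[:, 2] > 0) & (u >= 0) & (u < W) & (v >= 0) & (v < H)
        # a point that projects onto background in ANY view is rejected
        bg = visible & ~mask[v.clamp(0, H - 1), u.clamp(0, W - 1)]
        keep &= ~bg
    return keep
```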
The RaySt3R training command is provided in train_rayst3r.py; documentation will follow later.