😇FatesGS: Fast and Accurate Sparse-View Surface Reconstruction Using Gaussian Splatting with Depth-Feature Consistency
Han Huang*
Yulun Wu*
Chao Deng
Ge Gao†
Ming Gu
Yu-Shen Liu
Tsinghua University
*Equal contribution. †Corresponding author.
We propose FatesGS for sparse-view surface reconstruction, taking full advantage of the Gaussian Splatting pipeline. Compared with previous methods, our approach neither requires long-term per-scene optimization nor costly pre-training.
```shell
conda create -n fatesgs python=3.8
conda activate fatesgs
pip install -r requirements.txt
```
- Download the processed DTU dataset from this link. The data structure should be like:

```
|-- DTU
    |-- <set_name, e.g. set_23_24_33>
        |-- <scan_name, e.g. scan24>
            |-- pair.txt
            |-- images
                |-- 0000.png
                |-- 0001.png
                ...
            |-- sparse  <COLMAP sparse reconstruction>
                |-- 0
                    |-- cameras.txt
                    |-- images.txt
                    |-- points3D.txt
            |-- dense  <COLMAP dense reconstruction>
                |-- fused.ply
                ...
            |-- depth_npy  <monocular depth maps (to be generated)>
                |-- 0000_pred.npy
                |-- 0001_pred.npy
                ...
        ...
    ...
```
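Before training, it can be useful to verify that a scan directory contains all the required entries. The sketch below (a hypothetical helper, not part of this repo) checks one scan folder against the layout above:

```python
from pathlib import Path
import tempfile

# Required entries inside each <scan_name> directory, per the layout above.
REQUIRED = [
    "pair.txt",
    "images",
    "sparse/0/cameras.txt",
    "sparse/0/images.txt",
    "sparse/0/points3D.txt",
    "dense/fused.ply",
    "depth_npy",
]

def missing_entries(scan_dir):
    """Return the required files/folders missing from one scan directory."""
    scan = Path(scan_dir)
    return [rel for rel in REQUIRED if not (scan / rel).exists()]

# Demo on a synthetic scan directory built in a temp folder.
root = Path(tempfile.mkdtemp()) / "DTU" / "set_23_24_33" / "scan24"
for rel in REQUIRED:
    p = root / rel
    if rel.endswith((".txt", ".ply")):
        p.parent.mkdir(parents=True, exist_ok=True)
        p.touch()  # empty placeholder file
    else:
        p.mkdir(parents=True, exist_ok=True)

print(missing_entries(root))  # → []
```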
- Follow Marigold to generate the estimated monocular depth maps. Put the `.npy` depth maps under the `depth_npy` folder. You may also use more advanced depth estimation models for better performance. (P.S. The resolution of the depth maps used as priors must match that of the color images rendered during Gaussian Splatting training.)
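One way to satisfy the resolution constraint above is to resize each depth prediction to the training render size before saving. A minimal NumPy sketch (the function name, target size, and nearest-neighbor resize are illustrative assumptions, not part of the repo):

```python
import tempfile
from pathlib import Path

import numpy as np

def save_depth_prior(depth, out_path, target_hw):
    """Resize a monocular depth map (nearest-neighbor) to the training
    render resolution and save it as a float32 .npy file.

    `depth` is an HxW array from any monocular depth estimator
    (e.g. Marigold); `target_hw` should match the color images
    rendered during training (e.g. the input size downscaled by -r 2).
    """
    h, w = depth.shape
    th, tw = target_hw
    # Nearest-neighbor index maps from target to source resolution.
    rows = np.arange(th) * h // th
    cols = np.arange(tw) * w // tw
    resized = depth[rows[:, None], cols]
    out_path = Path(out_path)
    out_path.parent.mkdir(parents=True, exist_ok=True)
    np.save(out_path, resized.astype(np.float32))
    return resized

# Example: a dummy 768x1024 prediction resized to a 576x768 render size.
pred = np.random.rand(768, 1024).astype(np.float32)
out = Path(tempfile.mkdtemp()) / "scan24" / "depth_npy" / "0000_pred.npy"
resized = save_depth_prior(pred, out, (576, 768))
```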
- Training

```shell
CUDA_VISIBLE_DEVICES=0 python train.py -s <source_path> -m <model_path> -r 2
```
- Mesh extraction

```shell
CUDA_VISIBLE_DEVICES=0 python render.py -s <source_path> -m <model_path> -r 2 --skip_test --skip_train
```
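To process several scans in a set, the two commands above can be generated per scan. A small helper sketch (hypothetical, not part of the repo; it prints the commands so you can inspect them or pipe them to `bash`):

```shell
# Print train + mesh extraction commands for each scan in a set.
# SET_DIR and MODEL_ROOT are placeholder paths; adjust to your layout.
fatesgs_cmds() {
  set_dir="$1"; model_root="$2"; shift 2
  for scan in "$@"; do
    echo "CUDA_VISIBLE_DEVICES=0 python train.py -s $set_dir/$scan -m $model_root/$scan -r 2"
    echo "CUDA_VISIBLE_DEVICES=0 python render.py -s $set_dir/$scan -m $model_root/$scan -r 2 --skip_test --skip_train"
  done
}

fatesgs_cmds DTU/set_23_24_33 output scan24 scan37
```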
If you find our work useful in your research, please consider citing:
```bibtex
@inproceedings{huang2025fatesgs,
  title={FatesGS: Fast and Accurate Sparse-View Surface Reconstruction Using Gaussian Splatting with Depth-Feature Consistency},
  author={Han Huang and Yulun Wu and Chao Deng and Ge Gao and Ming Gu and Yu-Shen Liu},
  booktitle={Proceedings of the AAAI Conference on Artificial Intelligence},
  year={2025}
}
```
This implementation is based on 2DGS, 3DGS and MVSDF. Thanks to the authors for their great work.