
😇 FatesGS: Fast and Accurate Sparse-View Surface Reconstruction Using Gaussian Splatting with Depth-Feature Consistency

AAAI 2025 Oral

Han Huang*  Yulun Wu*  Chao Deng  Ge Gao†  Ming Gu  Yu-Shen Liu
Tsinghua University
*Equal contribution. †Corresponding author.

Overview

We propose FatesGS for sparse-view surface reconstruction, taking full advantage of the Gaussian Splatting pipeline. Compared with previous methods, our approach requires neither long-term per-scene optimization nor costly pre-training.

Installation

conda create -n fatesgs python=3.8
conda activate fatesgs
pip install -r requirements.txt
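
A quick environment check can save debugging time later. A minimal sketch, assuming PyTorch with CUDA support is among the pinned requirements (as in the 2DGS/3DGS codebases this project builds on):

import sys
import torch  # assumed to be installed via requirements.txt

print(sys.version)                    # expect Python 3.8.x
print(torch.__version__)
print(torch.cuda.is_available())      # must print True for GPU training
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))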

Dataset

DTU dataset

  1. Download the processed DTU dataset from this link. The data structure should be as follows:
|-- DTU
    |-- <set_name, e.g. set_23_24_33>
        |-- <scan_name, e.g. scan24>
            |-- pair.txt
            |-- images
                |-- 0000.png
                |-- 0001.png
                ...
            |-- sparse <COLMAP sparse reconstruction>
                |-- 0
                    |-- cameras.txt
                    |-- images.txt
                    |-- points3D.txt
            |-- dense <COLMAP dense reconstruction>
                |-- fused.ply
                ...
            |-- depth_npy <monocular depth maps (to be generated)>
                |-- 0000_pred.npy
                |-- 0001_pred.npy
                ...
        ...
    ...
  2. Follow Marigold to generate the estimated monocular depth maps, and put the .npy-format depth maps under the depth_npy folder. You may also use more advanced depth estimation models for better performance. (Note: the resolution of the depth maps used as priors must match that of the color images rendered during Gaussian Splatting training.) A sketch of this step is given after this list.
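
For illustration, a minimal sketch of step 2 using the Marigold depth pipeline shipped with diffusers. The scan path and checkpoint ID here are assumptions, and the official Marigold repository's inference script is an equivalent alternative:

import os
import numpy as np
import torch
from diffusers import MarigoldDepthPipeline
from diffusers.utils import load_image

scan_dir = "DTU/set_23_24_33/scan24"   # hypothetical scan path
out_dir = os.path.join(scan_dir, "depth_npy")
os.makedirs(out_dir, exist_ok=True)

# Assumed checkpoint ID; any Marigold depth checkpoint should work.
pipe = MarigoldDepthPipeline.from_pretrained(
    "prs-eth/marigold-depth-lcm-v1-0", torch_dtype=torch.float16
).to("cuda")

for name in sorted(os.listdir(os.path.join(scan_dir, "images"))):
    if not name.endswith(".png"):
        continue
    image = load_image(os.path.join(scan_dir, "images", name))
    # prediction is a (1, H, W[, 1]) array of affine-invariant depth.
    depth = np.squeeze(pipe(image).prediction)
    # Save as 0000_pred.npy, 0001_pred.npy, ... to match the layout above.
    # Per the note above, the saved resolution must match that of the
    # color images rendered during training.
    stem = os.path.splitext(name)[0]
    np.save(os.path.join(out_dir, f"{stem}_pred.npy"), depth)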

Running

  • Training
CUDA_VISIBLE_DEVICES=0 python train.py -s <source_path> -m <model_path> -r 2
  • Mesh extraction
CUDA_VISIBLE_DEVICES=0 python render.py -s <source_path> -m <model_path> -r 2 --skip_test --skip_train
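
After extraction, a quick sanity check of the resulting mesh can catch problems early. A minimal sketch using trimesh; the exact output filename under <model_path> is an assumption, so check the render script's output directory:

import trimesh

# Hypothetical output path; substitute the actual mesh file
# produced under <model_path> by render.py.
mesh = trimesh.load("<model_path>/mesh.ply")
print("vertices:", len(mesh.vertices), "faces:", len(mesh.faces))
print("watertight:", mesh.is_watertight, "extents:", mesh.extents)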

Citation

If you find our work useful in your research, please consider citing:

@inproceedings{huang2025fatesgs,
    title={FatesGS: Fast and Accurate Sparse-View Surface Reconstruction Using Gaussian Splatting with Depth-Feature Consistency},
    author={Han Huang and Yulun Wu and Chao Deng and Ge Gao and Ming Gu and Yu-Shen Liu},
    booktitle={Proceedings of the AAAI Conference on Artificial Intelligence},
    year={2025}
}

Acknowledgement

This implementation is based on 2DGS, 3DGS and MVSDF. Thanks to the authors for their great work.
