
ST-4DGS: Spatial-Temporally Consistent 4D Gaussian Splatting for Efficient Dynamic Scene Rendering


SIGGRAPH 2024 | Paper


Deqi Li1, Shi-Sheng Huang1, Zhiyuan Lu1, Xinran Duan1, Hua Huang1✉

1School of Artificial Intelligence, Beijing Normal University; Corresponding Author.



Our method guarantees the compactness of the 4D Gaussians so that they adhere to the surfaces of moving objects. It achieves high-fidelity dynamic rendering quality while maintaining real-time rendering efficiency.


Environmental Setups

Please follow 3D-GS to install the related packages, then install the remaining dependencies listed in requirements.txt.

git clone https://github.com/wanglids/ST-4DGS
cd ST-4DGS
conda create -n ST4DGS python=3.9
conda activate ST4DGS

pip install -r requirements.txt
pip install -e submodules/depth-diff-gaussian-rasterization
pip install -e submodules/simple-knn
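
As an optional sanity check after installation, the minimal sketch below (not part of this repository) verifies that PyTorch sees the GPU and that the compiled extensions import. The module names are assumptions based on the submodule folder names and may differ in your build.

# check_env.py -- hypothetical sanity check, not part of this repository
import importlib
import torch

print("PyTorch:", torch.__version__, "| CUDA available:", torch.cuda.is_available())

# Module names are assumed from the submodule folder names; adjust if needed.
for name in ("simple_knn", "diff_gaussian_rasterization"):
    try:
        importlib.import_module(name)
        print(name, "imports OK")
    except ImportError as err:
        print(name, "failed to import:", err)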

Data Preparation

We evaluate the proposed ST-4DGS on three publicly available dynamic-scene datasets: DyNeRF, ENeRF-Outdoor, and Dynamic Scene. After downloading the datasets from these links, extract the frames of each video (a minimal extraction sketch is given after the directory layout below) and organize your dataset as follows.

|── data
|	|── DyNeRF
|		|── cook_spinach
|			|── cam00
|			|── ...
|			|── cam08
|				|── images
|					|── 0.png
|					|── 1.png
|					|── 2.png
|					|── ...
|				|── flow
|					|── 0.npy
|					|── 1.npy
|					|── 2.npy
|					|── ...
|			|── ...
|			|── cam19
|			|── colmap
|				|── input
|					|── cam00.png
|					|── ...
|					|── cam19.png
|		|── ...
|	|── ENeRF-Outdoor
|		|── ...
|	|── Dynamic Scene
|		|── ...
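
The frame-extraction sketch below is hypothetical (not part of this repository). It assumes the raw download provides one video per camera, e.g. cam00.mp4 inside the scene folder as in DyNeRF, and writes frames named 0.png, 1.png, ... into each camera's images folder shown above; it requires opencv-python.

# extract_frames.py -- hypothetical helper, adapt paths to your download
import argparse
import glob
import os

import cv2

parser = argparse.ArgumentParser()
parser.add_argument("--scene", required=True, help="e.g. data/DyNeRF/cook_spinach")
args = parser.parse_args()

# Assumes one video per camera, e.g. cook_spinach/cam00.mp4
for video in sorted(glob.glob(os.path.join(args.scene, "cam*.mp4"))):
    cam = os.path.splitext(os.path.basename(video))[0]           # "cam00"
    out_dir = os.path.join(args.scene, cam, "images")
    os.makedirs(out_dir, exist_ok=True)

    cap = cv2.VideoCapture(video)
    idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        cv2.imwrite(os.path.join(out_dir, f"{idx}.png"), frame)  # 0.png, 1.png, ...
        idx += 1
    cap.release()
    print(f"{cam}: wrote {idx} frames to {out_dir}")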

The colmap/input folder collects the images from the different cameras at the same timestep. Camera parameters are calculated and the Gaussians are initialized with COLMAP (execute python scripts/convert.py). The optical flow is estimated with RAFT: place scripts/getFlow.py in the installation root directory of RAFT (such as ./submodels/RAFT) and then estimate the optical flow by running

cd $ROOT_PATH/submodels/RAFT
python getFlow.py --source_path rootpath/data/DyNeRF/cook_spinach --win_size timestep
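
For reference, the sketch below shows roughly what a RAFT-based flow script does, modeled on the demo.py interface of the RAFT repository: it runs RAFT on consecutive frames of one camera and saves each flow field as an .npy file. It is only an illustration; the bundled scripts/getFlow.py (including its --win_size handling) may differ, and the argument names here are made up.

# flow_sketch.py -- hypothetical illustration, modeled on RAFT's demo.py
import argparse
import glob
import os

import numpy as np
import torch
from PIL import Image

from core.raft import RAFT                 # from the RAFT repository
from core.utils.utils import InputPadder

DEVICE = "cuda"

def load_image(path):
    img = np.array(Image.open(path)).astype(np.uint8)
    img = torch.from_numpy(img).permute(2, 0, 1).float()
    return img[None].to(DEVICE)

def main():
    parser = argparse.ArgumentParser()
    parser.add_argument("--model", default="models/raft-things.pth")
    parser.add_argument("--images", required=True, help="folder with 0.png, 1.png, ...")
    parser.add_argument("--out", required=True, help="folder for 0.npy, 1.npy, ...")
    parser.add_argument("--small", action="store_true")
    parser.add_argument("--mixed_precision", action="store_true")
    parser.add_argument("--alternate_corr", action="store_true")
    args = parser.parse_args()

    model = torch.nn.DataParallel(RAFT(args))
    model.load_state_dict(torch.load(args.model))
    model = model.module.to(DEVICE).eval()

    frames = sorted(glob.glob(os.path.join(args.images, "*.png")),
                    key=lambda p: int(os.path.splitext(os.path.basename(p))[0]))
    os.makedirs(args.out, exist_ok=True)

    with torch.no_grad():
        for i in range(len(frames) - 1):
            im1, im2 = load_image(frames[i]), load_image(frames[i + 1])
            padder = InputPadder(im1.shape)
            im1, im2 = padder.pad(im1, im2)
            _, flow_up = model(im1, im2, iters=20, test_mode=True)
            flow = padder.unpad(flow_up[0]).permute(1, 2, 0).cpu().numpy()
            np.save(os.path.join(args.out, f"{i}.npy"), flow)

if __name__ == "__main__":
    main()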

Training

For the cook_spinach scene, run

cd $ROOT_PATH/
python train.py --source_path rootpath/data/DyNeRF/cook_spinach --model_path output/test --configs arguments/DyNeRF.py    
#The results will be saved in rootpath/data/cook_spinach/output/test
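
If you want to train several scenes in one go, a small driver script like the following can be used (hypothetical; the scene names are examples and the paths mirror the command above):

# train_all.py -- hypothetical batch driver, not part of this repository
import subprocess

# Example DyNeRF scene names; adjust to the scenes you downloaded.
scenes = ["cook_spinach", "flame_steak", "sear_steak"]

for scene in scenes:
    subprocess.run(
        [
            "python", "train.py",
            "--source_path", f"rootpath/data/DyNeRF/{scene}",
            "--model_path", f"output/{scene}",
            "--configs", "arguments/DyNeRF.py",
        ],
        check=True,
    )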

Rendering

You can download the pre-trained data and models and place them in the output/test folder. Run the following script to render the images.

cd $ROOT_PATH/
python render.py --source_path rootpath/data/DyNeRF/cook_spinach --model_path output/test --configs arguments/DyNeRF.py
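
To inspect the result as a video, the rendered frames can be assembled with a short script like this one (hypothetical; point --frames at whichever folder render.py wrote the images to, and install imageio plus imageio-ffmpeg):

# frames_to_video.py -- hypothetical helper, not part of this repository
import argparse
import glob
import os

import imageio

parser = argparse.ArgumentParser()
parser.add_argument("--frames", required=True, help="folder of rendered .png frames")
parser.add_argument("--out", default="render.mp4")
parser.add_argument("--fps", type=int, default=30)
args = parser.parse_args()

frames = sorted(glob.glob(os.path.join(args.frames, "*.png")))
with imageio.get_writer(args.out, fps=args.fps) as writer:   # needs imageio-ffmpeg
    for path in frames:
        writer.append_data(imageio.imread(path))
print(f"Wrote {len(frames)} frames to {args.out}")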

Citation

If you find this code useful for your research, please consider citing the following paper:

@inproceedings{Li2024ST,
    author = {Li, Deqi and Huang, Shi-Sheng and Lu, Zhiyuan and Duan, Xinran and Huang, Hua},
    title = {ST-4DGS: Spatial-Temporally Consistent 4D Gaussian Splatting for Efficient Dynamic Scene Rendering},
    publisher = {Association for Computing Machinery},
    address = {New York, NY, USA},
    booktitle = {ACM SIGGRAPH 2024 Conference Papers},
    location = {Denver, CO, USA},
    year = {2024},
}

Acknowledgments

Our training code is built upon 3DGS, 4DGS, and D3DGS. We sincerely appreciate these excellent works.
