Paper | Video | Project Page
Tobias Kirschstein, Simon Giebenhain, Jiapeng Tang, Markos Georgopoulos, and Matthias Nießner
SIGGRAPH Asia 2024
- Create conda environment `gghead` with newest PyTorch and CUDA 11.8:
  ```shell
  conda env create -f environment.yml
  ```
- Ensure that `nvcc` is taken from the conda environment and that its includes can be found:
  - [Linux]
    ```shell
    conda activate gghead
    conda env config vars set CUDA_HOME=$CONDA_PREFIX
    conda activate base
    conda activate gghead
    ```
  - [Windows]
    ```shell
    conda activate gghead
    conda env config vars set CUDA_HOME=$Env:CONDA_PREFIX
    conda env config vars set NVCC_PREPEND_FLAGS="-I$Env:CONDA_PREFIX\Library\include"
    conda activate base
    conda activate gghead
    ```
  - [Linux] Check whether the correct `nvcc` can be found on the path via:
    ```shell
    nvcc --version
    ```
    which should say something like `release 11.8`.
- Install Gaussian Splatting (which upon installation will compile CUDA kernels with `nvcc`):
  ```shell
  pip install gaussian_splatting@git+https://github.com/tobias-kirschstein/gaussian-splatting.git
  ```
- [Optional] If you compile the CUDA kernels on a different machine than the one you use for running code, you may need to manually specify the target GPU compute architecture for the compilation process via the `TORCH_CUDA_ARCH_LIST` environment variable:
  ```shell
  TORCH_CUDA_ARCH_LIST="8.0" pip install gaussian_splatting@git+https://github.com/tobias-kirschstein/gaussian-splatting.git
  ```
  Choose the correct compute architecture(s) that match your setup. Consult the [NVIDIA CUDA GPUs list](https://developer.nvidia.com/cuda-gpus) if unsure about the compute architecture of your graphics card, or query it directly as shown below.
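A quick way to look up the compute capability on the machine where the code will actually run (a small convenience snippet, not part of this repo):

```python
import torch

# Prints e.g. "8.0" on an A100, matching TORCH_CUDA_ARCH_LIST="8.0".
major, minor = torch.cuda.get_device_capability()
print(f"{major}.{minor}")
```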
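Once everything is installed, a minimal sanity check (assuming the `gghead` environment is active; the module name `gaussian_splatting` is inferred from the pip package above and may differ if the fork uses another import path):

```python
import torch

print(torch.__version__, torch.version.cuda)  # the CUDA version should read 11.8
print(torch.cuda.is_available())              # should print True on a GPU machine

# Importing the package loads the compiled CUDA kernels;
# an ImportError here typically indicates a failed kernel compilation.
import gaussian_splatting
```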
All paths to data / models / renderings are defined by environment variables.
Please create a file in your home directory at `~/.config/gghead/.env` with the following content:
```shell
GGHEAD_DATA_PATH="..."
GGHEAD_MODELS_PATH="..."
GGHEAD_RENDERS_PATH="..."
```
Replace the `...` with the locations where data / models / renderings should be located on your machine.

- `GGHEAD_DATA_PATH`: Location of the FFHQ dataset and foreground masks. Only needed for training. See Section 2 for how to obtain the datasets.
- `GGHEAD_MODELS_PATH`: During training, model checkpoints and configs will be saved here. See Section 4 for downloading pre-trained models.
- `GGHEAD_RENDERS_PATH`: Video renderings of trained models will be stored here.

If you do not like creating a config file in your home directory, you can instead hard-code the paths in `env.py`.
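For illustration, resolving these paths in Python could look like the following sketch (using the third-party `python-dotenv` package; the repo's actual `env.py` may do this differently):

```python
import os
from pathlib import Path

from dotenv import load_dotenv  # pip install python-dotenv

# Read the user-level config file described above.
load_dotenv(Path.home() / ".config" / "gghead" / ".env")

GGHEAD_DATA_PATH = Path(os.environ["GGHEAD_DATA_PATH"])
GGHEAD_MODELS_PATH = Path(os.environ["GGHEAD_MODELS_PATH"])
GGHEAD_RENDERS_PATH = Path(os.environ["GGHEAD_RENDERS_PATH"])
```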
TODO
TODO
From a trained model `GGHEAD-xxx`, render short videos of randomly sampled 3D heads via:

```shell
python scripts/sample_heads.py GGHEAD-xxx
```

Replace `xxx` with the actual ID of the model.
The generated videos will be placed into `${GGHEAD_RENDERS_PATH}/sampled_heads/`.
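For example, `python scripts/sample_heads.py GGHEAD-1_ffhq512` samples heads from the pre-trained FFHQ-512 model listed in Section 4 (assuming it was placed into `${GGHEAD_MODELS_PATH}/gghead`).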
From a trained model `GGHEAD-xxx`, render interpolation videos that morph between randomly sampled 3D heads via:

```shell
python scripts/render_interpolation.py GGHEAD-xxx
```

Replace `xxx` with the actual ID of the model.
The generated videos will be placed into `${GGHEAD_RENDERS_PATH}/interpolations/`.
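Similarly, `python scripts/render_interpolation.py GGHEAD-1_ffhq512` produces interpolation videos for the pre-trained FFHQ-512 model.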
TODO
The `notebooks` folder contains minimal examples of how to:

- Load a trained model, generate a 3D head, and render it from an arbitrary viewpoint (`inference.ipynb`)
You can start the excellent GUI from EG3D and StyleGAN by running:

```shell
python visualizer.py
```

In the visualizer, you can select any of the checkpoints found in `${GGHEAD_MODELS_PATH}/gghead` and freely explore the generated heads in 3D.
Put pre-trained models into `${GGHEAD_MODELS_PATH}/gghead`.
| Dataset   | GGHead model       |
|-----------|--------------------|
| FFHQ-512  | GGHEAD-1_ffhq512   |
| FFHQ-1024 | GGHEAD-2_ffhq1024  |
| AFHQ-512  | GGHEAD-3-afhq512   |
```bibtex
@article{kirschstein2024gghead,
  title={GGHead: Fast and Generalizable 3D Gaussian Heads},
  author={Kirschstein, Tobias and Giebenhain, Simon and Tang, Jiapeng and Georgopoulos, Markos and Nie{\ss}ner, Matthias},
  journal={arXiv preprint arXiv:2406.09377},
  year={2024}
}
```
Contact Tobias Kirschstein for questions and comments, or open a GitHub issue to report bugs.