Wooseok Jang · Youngjun Hong · Geonho Cha · Seungryong Kim
Build the environment as follows:
conda create -n controlface python=3.8
conda activate controlface
conda install pytorch==2.4.1 torchvision==0.19.1 torchaudio==2.4.1 pytorch-cuda=11.8 -c pytorch -c nvidia
conda install -c fvcore -c iopath -c conda-forge fvcore iopath
conda install pytorch3d -c pytorch3d
conda install mpi4py dlib scikit-learn scikit-image tqdm -c conda-forge
pip install -r requirements.txt
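After the install steps above, a quick sanity check can confirm the interpreter version and that the key packages are importable. This is a hypothetical helper, not part of the repo; the package list mirrors the conda/pip installs above.

```python
# Hypothetical sanity check (not part of the repo): verify the interpreter
# version and that the key packages installed above are importable.
import importlib.util
import sys

def env_report(packages=("torch", "torchvision", "pytorch3d", "dlib")):
    """Map each package name to True if it can be imported in this env."""
    return {name: importlib.util.find_spec(name) is not None for name in packages}

# The conda env above is created with python=3.8.
assert sys.version_info >= (3, 8), "the environment above targets Python 3.8+"
print(env_report())
```

Any False entry in the report means the corresponding install step needs to be rerun before proceeding.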
First, run the following command, which will automatically download the weights and place them under the ./pretrained_weights directory.
python tools/download_weights.py
Then follow the DECA Setup stage described here.
We provide an example script for face editing. Specify the attribute you want to edit (pose, expression, light, or shape) via the --mode flag in the command below.
PATH_TO_REFERENCE="./examples/00013.png"
PATH_TO_TARGET="./examples/00690.png"
python sample.py --ref ${PATH_TO_REFERENCE} \
--tgt ${PATH_TO_TARGET} \
--mode pose
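To apply all four attribute edits to a single reference/target pair, a small wrapper can loop over the modes. This is a hypothetical sketch, not part of the repo; it only reuses the --ref/--tgt/--mode flags shown above, and build_cmd/edit_all are names introduced here for illustration.

```python
# Hypothetical wrapper (not part of the repo): runs sample.py once per
# editable attribute, reusing the flags from the command above.
import subprocess

# The four attributes accepted by --mode, per the example above.
MODES = ("pose", "expression", "light", "shape")

def build_cmd(ref, tgt, mode):
    """Assemble the sample.py invocation for a single attribute edit."""
    if mode not in MODES:
        raise ValueError(f"unknown mode: {mode}")
    return ["python", "sample.py", "--ref", ref, "--tgt", tgt, "--mode", mode]

def edit_all(ref, tgt):
    """Run every attribute edit in turn; results land under ./output."""
    for mode in MODES:
        subprocess.run(build_cmd(ref, tgt, mode), check=True)
```

For example, `edit_all("./examples/00013.png", "./examples/00690.png")` would produce one edited result per attribute.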
The output will be saved under the ./output directory.
Our project builds upon and incorporates elements from DiffusionRig, Moore-AnimateAnyone, and LightningDrag. We would like to thank the authors and maintainers of these projects for their invaluable work and for making their code available to the community.