Open-sourced Implementation of POCA: Post Training Quantization of Codec Avatar (ECCV, 2024)
Post-training quantization for Codec Avatar models with jitter-free rendering.
Jian Meng, Yuecheng Li, Leo (Chenghui) Li, Syed Shakib Sarwar, Dilin Wang, and Jae-sun Seo
Although low-precision quantization has been widely investigated, compressing a Codec-Avatar decoder model (e.g., the Deep Appearance Model or Pixel Codec Avatar) leads to visible and heavily jittered avatars. This motivated POCA, which compresses the decoder without introducing additional filtering or a finer-grained quantization scheme.
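For context, a minimal sketch of the kind of post-training quantization being discussed: generic symmetric uniform fake-quantization of a weight tensor. This is an illustrative example only, not POCA's actual scheme; the function name and values are hypothetical.

```python
# Illustrative only: generic symmetric uniform post-training quantization
# (fake-quant), NOT POCA's method. All names here are hypothetical.

def quantize_dequantize(values, num_bits=8):
    """Round values onto a symmetric integer grid, then map back to float."""
    qmax = 2 ** (num_bits - 1) - 1                # e.g., 127 for 8-bit
    scale = max(abs(v) for v in values) / qmax or 1.0
    quantized = [max(-qmax - 1, min(qmax, round(v / scale))) for v in values]
    return [q * scale for q in quantized]

weights = [0.31, -0.97, 0.04, 0.55]
deq = quantize_dequantize(weights)
# Per-element error is bounded by half the quantization step (scale / 2);
# it is exactly this rounding error, applied frame by frame, that can
# surface as temporal jitter in a rendered avatar.
```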
Follow the requirement.txt file to create the Conda virtual environment:
conda env create -f requirement.txt
Make sure you manually install nvdiffrast (0.3.1) with the following commands:
git clone https://github.com/NVlabs/nvdiffrast
cd nvdiffrast
pip install .
The pre-trained full-precision model can be found in the official repo of MultiFace. By default, the full-precision baseline model should be saved inside the pretrained_model folder.
To start the post-training quantization of POCA, execute the example script for subject ID 002643814:
bash ptq_002643814.sh
The quantized model will be saved inside the path: ./runs/experiment_002643814/PTQ_${arch}_w${wbit}a${wbit}_shape_batch_wise_mask_tau${threshold}_model_calib${model_calib}
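The `model_calib` component of the output path above refers to a calibration step. As a hedged, generic sketch of what PTQ calibration typically does (this is not POCA's code; the class and data are hypothetical), an observer runs over a few calibration batches to record the activation range and derive a quantization scale:

```python
# Hypothetical sketch of generic PTQ calibration: track the min/max of
# activations over calibration batches, then derive a symmetric 8-bit
# scale. Not POCA's actual implementation.

class MinMaxObserver:
    def __init__(self):
        self.min_val = float("inf")
        self.max_val = float("-inf")

    def observe(self, batch):
        # Widen the running range with this batch's extremes.
        self.min_val = min(self.min_val, min(batch))
        self.max_val = max(self.max_val, max(batch))

    def scale(self, num_bits=8):
        # Symmetric scale from the largest observed magnitude.
        qmax = 2 ** (num_bits - 1) - 1
        return max(abs(self.min_val), abs(self.max_val)) / qmax

observer = MinMaxObserver()
for batch in ([0.2, -1.5, 0.7], [2.4, -0.3, 0.9]):  # toy calibration data
    observer.observe(batch)
activation_scale = observer.scale()  # 2.4 / 127
```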
To visualize the output rendered by POCA, execute the following bash file:
bash visualize_002643814_w8a8_proposed.sh
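When inspecting the rendered output, the jitter that POCA targets can be eyeballed with a crude temporal-stability proxy such as the mean absolute difference between consecutive frames. The helper below is a hypothetical illustration (it is not POCA's evaluation metric); frames are flattened pixel lists for simplicity:

```python
# Hypothetical helper, not part of POCA: a crude temporal-jitter proxy
# computed as the mean absolute pixel change between consecutive frames.

def temporal_jitter(frames):
    """Average per-pixel change between each pair of consecutive frames."""
    diffs = [
        sum(abs(a - b) for a, b in zip(prev, cur)) / len(cur)
        for prev, cur in zip(frames, frames[1:])
    ]
    return sum(diffs) / len(diffs)

static = [[0.5, 0.5], [0.5, 0.5], [0.5, 0.5]]   # perfectly stable clip
jittery = [[0.5, 0.5], [0.9, 0.1], [0.5, 0.5]]  # flickering clip
# A stable clip scores 0.0; a flickering one scores higher.
```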
@inproceedings{poca2024meng,
author = {Jian Meng and Yuecheng Li and Leo Chenghui Li and Syed Shakib Sarwar and Dilin Wang and Jae-sun Seo},
title = {POCA: Post-training Quantization with Temporal Alignment for Codec Avatars},
booktitle = {ECCV},
year = {2024},
}