## Clothed avatar reconstruction
For single-view reconstruction, please follow the THuman2.0 Data Processing Instruction from ICON. For avatar reconstruction, please follow the
Models are trained on the THuman2.0 dataset using normal images as input, and they output the human body in the projected space. Users can instead feed RGB images as input by setting the option `-ii rgb`.
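As an illustration of what the `-ii` switch selects, a data loader might pick the network input like this (a minimal sketch; the `select_input` helper and the sample key layout are hypothetical, not the repository's actual code):

```python
def select_input(sample, input_image="normal"):
    """Return the image fed to the network: the normal map (default)
    or the raw RGB image when '-ii rgb' is passed (sketch only)."""
    if input_image not in ("normal", "rgb"):
        raise ValueError(f"unsupported -ii value: {input_image}")
    return sample[input_image]

# dummy sample holding both image types (hypothetical key layout)
sample = {"normal": "normal_map.png", "rgb": "rgb_image.png"}
print(select_input(sample, "rgb"))  # rgb_image.png
```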
Single-view reconstruction:

```shell
# PIFu
python -m apps.train --gpu 0 --data thuman -ii normal -cfg configs/pifu.yaml
# ICON* (*: re-implementation)
python -m apps.train --gpu 0 --data thuman -ii normal -cfg configs/icon.yaml
# Ours
python -m apps.train --gpu 0 --data thuman -ii normal -cfg configs/pifu-sdf.yaml
```

Avatar reconstruction:

```shell
# ARCH* (*: re-implementation)
python -m apps.train --gpu 0 --data mvp -ii normal -cfg configs/arch.yaml
# ARCH++* (*: re-implementation)
# ARWild
python -m apps.train --gpu 0 --data mvp -ii normal -cfg configs/arwild.yaml
# Ours
python -m apps.train --gpu 0 --data mvp -ii normal -cfg configs/ours.yaml
```
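For reference, the flags used above could be parsed by an entry point of roughly this shape (a hypothetical `argparse` sketch, not the repository's actual `apps/train.py`):

```python
import argparse

def build_parser():
    # Mirrors the command-line flags used above (sketch only).
    p = argparse.ArgumentParser("apps.train")
    p.add_argument("--gpu", type=int, default=0, help="GPU id")
    p.add_argument("--data", choices=["thuman", "mvp"], required=True,
                   help="training dataset")
    p.add_argument("-ii", "--input-image", dest="ii",
                   choices=["normal", "rgb"], default="normal",
                   help="network input: normal map or RGB image")
    p.add_argument("-cfg", "--config", dest="cfg", required=True,
                   help="path to the YAML config")
    return p

args = build_parser().parse_args(
    ["--gpu", "0", "--data", "thuman", "-ii", "normal",
     "-cfg", "configs/pifu.yaml"])
print(args.data, args.ii, args.cfg)  # thuman normal configs/pifu.yaml
```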
- Download the pretrained model and put it in `./out/ckpt/ours-normal-1view/`.
- Download the extra data (PyMAF, the ICON normal model, and the SMPL model) and put it in `./data`.
- Test on the example images in `./examples`; results will be saved to `./examples/results`.
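Before launching inference, it can help to confirm that the downloads landed in the expected places, so a run fails fast instead of midway. A minimal sketch, assuming the paths listed above (the `missing_paths` helper is hypothetical):

```python
from pathlib import Path

# Directory layout from the setup steps above (relative to the repo root).
REQUIRED = [
    Path("out/ckpt/ours-normal-1view"),  # pretrained model checkpoint
    Path("data"),                        # PyMAF, ICON normal model, SMPL model
    Path("examples"),                    # input test images
]

def missing_paths(required=REQUIRED):
    """Return the required paths that do not exist yet."""
    return [p for p in required if not p.exists()]

if __name__ == "__main__":
    for p in missing_paths():
        print(f"missing: {p}")
```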
```shell
python -m apps.infer --gpu 0 -cfg configs/ours.yaml
```