The model architectures of InSwapper and SimSwap are extremely similar. This branch is based on the SimSwap repository. Work in progress.
```shell
git clone https://github.com/somanchiu/ReSwapper.git
cd ReSwapper

python -m venv venv
venv\scripts\activate

pip install onnxruntime moviepy tensorboard timm==0.5.4 insightface==0.7.3
pip install torch torchvision --force --index-url https://download.pytorch.org/whl/cu121
pip install onnxruntime-gpu --force --extra-index-url https://aiinfra.pkgs.visualstudio.com/PublicPackages/_packaging/onnxruntime-cuda-12/pypi/simple/
pip install sympy==1.13.1
pip install typing_extensions --upgrade
```
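Before training, it can be worth confirming that the packages installed above are actually importable in the active virtual environment. This is a hypothetical helper, not part of the repo; the package list is taken from the install commands above:

```python
# Hypothetical sanity check (not part of the repo): confirm the
# dependencies installed above can be imported before starting training.
import importlib.util

def missing_packages(names):
    """Return the subset of `names` with no importable module."""
    return [n for n in names if importlib.util.find_spec(n) is None]

if __name__ == "__main__":
    required = ["torch", "torchvision", "onnxruntime", "insightface", "timm", "moviepy"]
    missing = missing_packages(required)
    print("Missing packages:", ", ".join(missing) if missing else "none")
```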
The training now works, but it's unstable. The discriminator losses fluctuate heavily.
- Download `arcface_w600k_r50.pth`, or convert `w600k_r50.onnx` to `arcface_w600k_r50.pth` yourself using `weight_transfer.arcface_onnx_to_pth`
- (Optional) Download `<step>_net_D.pth` and `<step>_net_G.pth` and place them in the folder `checkpoints/reswapper`
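When several `<step>_net_G.pth` / `<step>_net_D.pth` checkpoints accumulate, resuming usually means picking the highest step. A minimal sketch (hypothetical helper, not part of the repo) assuming the `<step>_net_G.pth` naming shown above:

```python
# Hypothetical helper: find the highest <step> among files named like
# "<step>_net_G.pth" in a checkpoint directory (e.g. checkpoints/reswapper).
import re
from pathlib import Path

def latest_step(checkpoint_dir, suffix="_net_G.pth"):
    """Return the largest <step> found for the given suffix, or None."""
    steps = []
    for p in Path(checkpoint_dir).glob(f"*{suffix}"):
        m = re.fullmatch(r"(\d+)" + re.escape(suffix), p.name)
        if m:
            steps.append(int(m.group(1)))
    return max(steps) if steps else None
```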
Download VGGFace2-HQ
Example:
```shell
python train.py --use_tensorboard "True" --dataset "VGGface2_HQ/VGGface2_None_norm_512_true_bygfpgan" --name "reswapper" --load_pretrain "checkpoints/reswapper" --sample_freq "1000" --model_freq "1000" --batchSize "4" --lr_g "0.00005" --lr_d "0.00005" --load_optimizer "False"
```
- `batchSize` must be greater than or equal to 2
- Tested args:
  - Steps 1 to 6500: `--lr_g "0.00005" --lr_d "0.0001" --lambda_feat "1" --batchSize "4"`
  - Steps 6501 to 40000: `--lr_g "0.00005" --lr_d "0.00005" --lambda_feat "1" --batchSize "4"`
  - Steps from 40001 onward: `--lr_g "0.00005" --lr_d "0.00001" --lambda_feat "1" --batchSize "4"`
- Improve the stability of the training process