

EchoMimic: Lifelike Audio-Driven Portrait Animations through Editable Landmark Conditioning

*Equal Contribution.
Terminal Technology Department, Alipay, Ant Group.

🚀 EchoMimic Series

  • EchoMimicV1: Lifelike Audio-Driven Portrait Animations through Editable Landmark Conditioning. GitHub
  • EchoMimicV2: Towards Striking, Simplified, and Semi-Body Human Animation. GitHub

📣 Updates

  • [2024.12.10] 🔥 EchoMimic is accepted by AAAI 2025.
  • [2024.11.21] 🔥🔥🔥 We release our EchoMimicV2 codes and models.
  • [2024.08.02] 🔥 EchoMimic is now available on huggingface with A100 GPU. Thanks Wenmeng Zhou@ModelScope.
  • [2024.07.25] 🔥🔥🔥 Accelerated models and pipeline for Audio-Driven inference are released. Inference speed is improved by 10x (from ~7 min/240 frames to ~50 s/240 frames on a V100 GPU).
  • [2024.07.23] 🔥 EchoMimic gradio demo on modelscope is ready.
  • [2024.07.23] 🔥 EchoMimic gradio demo on huggingface is ready. Thanks Sylvain Filoni@fffiloni.
  • [2024.07.17] 🔥🔥🔥 Accelerated models and pipeline for Audio + Selected Landmarks inference are released. Inference speed is improved by 10x (from ~7 min/240 frames to ~50 s/240 frames on a V100 GPU).
  • [2024.07.14] 🔥 ComfyUI is now available. Thanks @smthemex for the contribution.
  • [2024.07.13] 🔥 Thanks NewGenAI for the video installation tutorial.
  • [2024.07.13] 🔥 We release our pose&audio driven codes and models.
  • [2024.07.12] 🔥 WebUI and GradioUI versions are released. We thank @greengerong @Robin021 and @O-O1024 for their contributions.
  • [2024.07.12] 🔥 Our paper is public on arXiv.
  • [2024.07.09] 🔥 We release our audio driven codes and models.

🌅 Gallery

Audio Driven (Sing)

s_01.mp4
s_02.mp4
s_03.mp4

Audio Driven (English)

en_01.mp4
en_03.mp4
en_05.mp4

Audio Driven (Chinese)

ch_02.mp4
ch_03.mp4
ch_04.mp4

Landmark Driven

po_01.mp4
po_02.mp4
po_03.mp4

Audio + Selected Landmark Driven

ap_04.mp4
ap_05.mp4
ap_06.mp4

(Some demo images above are sourced from image websites. If there is any infringement, we will immediately remove them and apologize.)

⚒️ Installation

Download the Codes

  git clone https://github.com/BadToBest/EchoMimic
  cd EchoMimic

Python Environment Setup

  • Tested System Environment: CentOS 7.2 / Ubuntu 22.04, CUDA >= 11.7
  • Tested GPUs: A100 (80 GB) / RTX 4090D (24 GB) / V100 (16 GB)
  • Tested Python Version: 3.8 / 3.10 / 3.11

Create conda environment (Recommended):

  conda create -n echomimic python=3.8
  conda activate echomimic

Install packages with pip

  pip install -r requirements.txt

Download ffmpeg-static

Download and decompress ffmpeg-static, then

export FFMPEG_PATH=/path/to/ffmpeg-4.4-amd64-static
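
Before moving on, you can sanity-check the export with a few lines of Python (a minimal sketch; it assumes only the FFMPEG_PATH variable set above and that the static build contains a binary named ffmpeg):

  # check_ffmpeg.py -- hypothetical helper, not part of the repo
  import os
  import subprocess

  ffmpeg_dir = os.environ.get("FFMPEG_PATH")
  assert ffmpeg_dir, "FFMPEG_PATH is not set; export it as shown above"

  ffmpeg_bin = os.path.join(ffmpeg_dir, "ffmpeg")
  assert os.path.isfile(ffmpeg_bin), f"ffmpeg binary not found at {ffmpeg_bin}"

  # Print the first line of `ffmpeg -version` to confirm the binary runs on this machine
  out = subprocess.run([ffmpeg_bin, "-version"], capture_output=True, text=True)
  print(out.stdout.splitlines()[0])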

Download pretrained weights

git lfs install
git clone https://huggingface.co/BadToBest/EchoMimic pretrained_weights

The pretrained_weights directory is organized as follows.

./pretrained_weights/
├── denoising_unet.pth
├── reference_unet.pth
├── motion_module.pth
├── face_locator.pth
├── sd-vae-ft-mse
│   └── ...
├── sd-image-variations-diffusers
│   └── ...
└── audio_processor
    └── whisper_tiny.pt

Here denoising_unet.pth / reference_unet.pth / motion_module.pth / face_locator.pth are the main checkpoints of EchoMimic. The other models in this hub can also be downloaded from their original hubs; thanks to the authors for their brilliant work.
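
To confirm the download completed, a quick check against the tree above can help (a sketch; the paths are taken from the listing, nothing else is assumed):

  # check_weights.py -- hypothetical helper, not part of the repo
  from pathlib import Path

  ROOT = Path("./pretrained_weights")
  EXPECTED = [
      "denoising_unet.pth",
      "reference_unet.pth",
      "motion_module.pth",
      "face_locator.pth",
      "sd-vae-ft-mse",
      "sd-image-variations-diffusers",
      "audio_processor/whisper_tiny.pt",
  ]

  # Report anything missing so an interrupted `git lfs` clone is caught early
  missing = [p for p in EXPECTED if not (ROOT / p).exists()]
  if missing:
      raise SystemExit(f"Missing from {ROOT}: {missing}")
  print("All expected checkpoints and folders are present.")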

Audio-Driven Algo Inference

Run the python inference script:

  python -u infer_audio2vid.py
  python -u infer_audio2vid_pose.py

Audio-Driven Algo Inference on Your Own Cases

Edit the inference config file ./configs/prompts/animation.yaml, and add your own case:

test_cases:
  "path/to/your/image":
    - "path/to/your/audio"

Then run the python inference script:

  python -u infer_audio2vid.py
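
If you prefer to add cases programmatically rather than editing the YAML by hand, here is a minimal sketch (it assumes only the test_cases layout shown above and that PyYAML is available; the helper name is hypothetical):

  # add_case.py -- hypothetical helper, not part of the repo
  import yaml  # PyYAML; `pip install pyyaml` if it is not already present

  CONFIG = "./configs/prompts/animation.yaml"

  def add_case(image_path: str, audio_path: str) -> None:
      """Append an image/audio pair to the test_cases section of the config."""
      with open(CONFIG, "r") as f:
          cfg = yaml.safe_load(f) or {}
      cases = cfg.setdefault("test_cases", {})
      cases.setdefault(image_path, []).append(audio_path)
      with open(CONFIG, "w") as f:
          yaml.safe_dump(cfg, f, sort_keys=False)

  if __name__ == "__main__":
      add_case("path/to/your/image", "path/to/your/audio")

Note that round-tripping the file this way preserves the keys but drops any YAML comments, so back up the original config first.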

Motion Alignment between Ref. Img. and Driven Vid.

(First, download the checkpoints with the '_pose.pth' suffix from Hugging Face.)

Edit driver_video and ref_image in demo_motion_sync.py to point to your own paths, then run

  python -u demo_motion_sync.py
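
For orientation, the two variables look roughly like this (illustrative only; the variable names come from the step above, and the exact lines in demo_motion_sync.py may differ):

  # Inside demo_motion_sync.py -- paths are placeholders
  driver_video = "path/to/your/driving_video.mp4"   # video that provides the head motion
  ref_image = "path/to/your/reference_image.png"    # portrait image to be animated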

Audio & Pose-Driven Algo Inference

Edit ./configs/prompts/animation_pose.yaml, then run

  python -u infer_audio2vid_pose.py

Pose-Driven Algo Inference

Set draw_mouse=True in line 135 of infer_audio2vid_pose.py. Edit ./configs/prompts/animation_pose.yaml, then run

  python -u infer_audio2vid_pose.py
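
The flag mentioned above looks roughly like this in the script (illustrative; only the name draw_mouse and the line number come from this README, and the comment is an assumption based on the flag name):

  # infer_audio2vid_pose.py, around line 135
  draw_mouse = True  # assumption: also draw/use mouth landmarks so the pose sequence drives the mouth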

Run the Gradio UI

Thanks to the contribution from @Robin021:

python -u webgui.py --server_port=3000

📝 Release Plans

| Status | Milestone | ETA |
|:------:|-----------|-----|
| ✅ | The inference source code of the Audio-Driven algo meets everyone on GitHub | 9th July, 2024 |
| ✅ | Pretrained models trained on English and Mandarin Chinese to be released | 9th July, 2024 |
| ✅ | The inference source code of the Pose-Driven algo meets everyone on GitHub | 13th July, 2024 |
| ✅ | Pretrained models with better pose control to be released | 13th July, 2024 |
| ✅ | Accelerated models to be released | 17th July, 2024 |
| 🚀 | Pretrained models with better sing performance to be released | TBD |
| 🚀 | Large-Scale and High-resolution Chinese-Based Talking Head Dataset | TBD |

⚖️ Disclaimer

This project is intended for academic research, and we explicitly disclaim any responsibility for user-generated content. Users are solely liable for their actions while using the generative model. The project contributors have no legal affiliation with, nor accountability for, users' behaviors. It is imperative to use the generative model responsibly, adhering to both ethical and legal standards.

🙏🏻 Acknowledgements

We would like to thank the contributors to the AnimateDiff, Moore-AnimateAnyone and MuseTalk repositories, for their open research and exploration.

We are also grateful to V-Express and hallo for their outstanding work in the area of diffusion-based talking heads.

If we have missed any open-source projects or related articles, we will add them to the acknowledgements of this work immediately.

📒 Citation

If you find our work useful for your research, please consider citing the paper:

@misc{chen2024echomimic,
  title={EchoMimic: Lifelike Audio-Driven Portrait Animations through Editable Landmark Conditioning},
  author={Zhiyuan Chen and Jiajiong Cao and Zhiquan Chen and Yuming Li and Chenguang Ma},
  year={2024},
  eprint={2407.08136},
  archivePrefix={arXiv},
  primaryClass={cs.CV}
}

🌟 Star History

Star History Chart
