WanGP


WanGP by DeepBeepMeep: The best Open Source Video Generative Models Accessible to the GPU Poor

WanGP supports the Wan (and derived models), Hunyuan Video and LTX Video models with:

  • Low VRAM requirements (as low as 6 GB of VRAM is sufficient for certain models)
  • Support for old GPUs (RTX 10XX, 20XX, ...)
  • Very Fast on the latest GPUs
  • Easy-to-use, fully web-based interface
  • Automatic download of the required model, adapted to your specific architecture
  • Integrated tools to facilitate video generation: Mask Editor, Prompt Enhancer, Temporal and Spatial Generation
  • Lora support to customize each model
  • Queuing system: make your shopping list of videos to generate and come back later

Discord Server to get Help from Other Users and show your Best Videos: https://discord.gg/g7efUW9jGV

Follow DeepBeepMeep on Twitter/X to get the Latest News: https://x.com/deepbeepmeep

🔥 Latest Updates

June 23 2025: WanGP v6.3, Vace Unleashed. Thought we couldn't squeeze even more out of Vace?

  • Multithreaded preprocessing when possible for faster generations
  • Multithreaded Lanczos frame upsampling as a bonus
  • A new Vace preprocessor: Flow, to extract fluid motion
  • Multi Vace ControlNets: you can now transfer several properties at the same time. This opens new possibilities to explore; for instance, if you transfer Human Movement and Shapes at the same time, the lighting of your character will take its environment into account much more.
  • Injected Frames Outpainting, in case you missed it in WanGP 6.21

Don't know how to use all of the Vace features? Check the Vace Guide embedded in WanGP; it has also been updated.

June 19 2025: WanGP v6.2, Vace even more Powercharged

👋 Have I told you that I am a big fan of Vace? Here are more goodies to unleash its power:

  • If you ever wanted to watch Star Wars in 4:3, just use the new Outpainting feature and it will add the missing bits of image at the top and the bottom of the screen. The best thing is that Outpainting can be combined with all the other Vace modifications; for instance, you can change the main character of your favorite movie at the same time
  • More processes can be combined at the same time (for instance, the depth process can be applied outside the mask)
  • Upgraded the depth extractor to Depth Anything 2, which is much more detailed

As a bonus, I have added two finetunes based on the Self-Forcing technology (which requires only 4 steps to generate a video): Wan 2.1 text2video Self-Forcing and Vace Self-Forcing. I know there is a Lora around, but the quality of the Lora is worse (at least with Vace) compared to the full model. Don't hesitate to share your opinion about this on the Discord server.

June 17 2025: WanGP v6.1, Vace Powercharged

👋 Lots of improvements for Vace, the Mother of all Models:

  • masks can now be combined with on-the-fly processing of a control video; for instance, you can extract the motion of a specific person defined by a mask
  • on-the-fly modification of masks: reversed masks (with the same mask you can modify the background instead of the people covered by the mask), enlarged masks (you can cover more area if, for instance, the person you are trying to inject is larger than the one in the mask), ...
  • view these modified masks directly inside WanGP during video generation to check they are really as expected
  • multiple frame injections: multiple frames can be injected at any location of the video
  • expand past videos in one click: just select a generated video to expand it

Of course, all this new stuff works on all Vace finetunes (including Vace FusioniX).

Thanks also to Reevoy24 for adding a notification sound at the end of a generation and for fixing the background color of the current generation summary.

June 12 2025: WanGP v6.0

👋 Finetune models: you find the 20 models supported by WanGP not sufficient? Too impatient to wait for the next release to get support for a newly released model? Your prayers have been answered: if a new model is compatible with a model architecture supported by WanGP, you can add support for it yourself by just creating a finetune model definition. You can then store the model in the cloud (for instance on Huggingface), and the very light finetune definition file can be easily shared with other users; WanGP will download the finetuned model for them automatically.

To celebrate the new finetunes support, here are a few finetune gifts (directly accessible from the model selection menu):

  • Fast Hunyuan Video: text-to-video generation in only 6 steps
  • Hunyuan Video AccVideo: text-to-video generation in only 5 steps
  • Wan FusioniX: a combo of AccVideo / CausVid and other models that can generate high-quality Wan videos in only 8 steps

One more thing...

The new finetune system can be used to combine complementary models: what happens when you combine FusioniX Text2Video and Vace ControlNet?

You get Vace FusioniX: the Ultimate Vace Model, fast (10 steps, no need for guidance) and with much better video quality than the original, slower model (which was already the best ControlNet out there). Here goes one more finetune...

Check the Finetune Guide to create finetune model definitions and share them on the WanGP Discord server.
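
To give a rough idea of how light such a definition is, here is a minimal Python sketch that writes one out as JSON. The field names below (name, architecture, urls, description) and the architecture identifier are purely illustrative assumptions, not the actual schema; refer to the Finetune Guide for the real format.

import json

# Hypothetical finetune definition; the real field names and structure are
# documented in the WanGP Finetune Guide, these keys are placeholders only.
definition = {
    "name": "My Wan Finetune",
    "architecture": "wan_t2v_14B",  # assumption: must match an architecture WanGP already supports
    "description": "Finetuned weights hosted in the cloud (for instance on Huggingface)",
    "urls": ["https://huggingface.co/<user>/<repo>/resolve/main/<weights>.safetensors"],
}

# The resulting small JSON file is what you share with other users;
# WanGP downloads the referenced weights for them automatically.
with open("my_finetune.json", "w", encoding="utf-8") as f:
    json.dump(definition, f, indent=2)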

June 11 2025: WanGP v5.5

👋 Hunyuan Video Custom Audio: similar to Hunyuan Video Avatar, except there isn't any lower limit on the number of frames and you can use your reference images in a different context than the image itself.
Hunyuan Video Custom Edit: a Hunyuan Video ControlNet; use it to do inpainting and replace a person in a video while still keeping their poses. Similar to Vace, but less restricted than the Wan models in terms of content...

June 6 2025: WanGP v5.41

👋 Bonus release: support for the AccVideo Lora to speed up video generation by 2x in Wan models. Check the Loras documentation for AccVideo usage instructions.
You will need to run pip install -r requirements.txt.

June 6 2025: WanGP v5.4

👋 World Exclusive: Hunyuan Video Avatar support! You won't need 80 GB of VRAM, nor 32 GB of VRAM; just 10 GB of VRAM is sufficient to generate up to 15 s of high-quality speech/song-driven video at high speed with no quality degradation. Support for TeaCache included.
Here is a link to the original repo, where you will find some very interesting documentation and examples: https://github.com/Tencent-Hunyuan/HunyuanVideo-Avatar. Kudos to the Hunyuan Video Avatar team for the best model of its kind.
Also many thanks to Reevoy24 for repackaging and completing the documentation.

May 28 2025: WanGP v5.31

👋 Added Phantom 14B, a model that you can use to transfer objects/people into the video. My preference goes to Vace, which remains the king of ControlNets. Vace improvements: better sliding window transitions, image mask support in Matanyone, a new Extend Video feature, and enhanced background removal options.

May 26, 2025: WanGP v5.3

👋 Settings management revolution! Now you can:

  • Select any generated video and click Use Selected Video Settings to instantly reuse its configuration
  • Drag & drop videos to automatically extract their settings metadata
  • Export/import settings as JSON files for easy sharing and backup
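
Since the exported settings are plain JSON, inspecting them outside WanGP only needs the standard library. A minimal sketch follows; the file name is a placeholder, and the keys inside depend on what was exported.

import json
from pathlib import Path

# Placeholder name for a settings file exported from the WanGP interface.
settings_path = Path("exported_settings.json")

# Load the exported generation settings and list them before sharing
# or re-importing the file.
settings = json.loads(settings_path.read_text(encoding="utf-8"))
for key, value in sorted(settings.items()):
    print(f"{key}: {value}")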

May 20, 2025: WanGP v5.2

👋 CausVid support - generate videos in just 4-12 steps with the new distilled Wan model! Also added experimental MoviiGen for 1080p generation (20 GB+ VRAM required). Check the Loras documentation for CausVid usage instructions.

May 18, 2025: WanGP v5.1

👋 LTX Video 13B Distilled - Generate high-quality videos in less than one minute!

May 17, 2025: WanGP v5.0

👋 One App to Rule Them All! Added Hunyuan Video and LTX Video support, plus Vace 14B and an integrated prompt enhancer.

See full changelog: Changelog

📋 Table of Contents

🚀 Quick Start

One-click installation: Get started instantly with Pinokio App

Manual installation:

git clone https://github.com/deepbeepmeep/Wan2GP.git
cd Wan2GP
conda create -n wan2gp python=3.10.9
conda activate wan2gp
pip install torch==2.7.0 torchvision torchaudio --index-url https://download.pytorch.org/whl/test/cu124
pip install -r requirements.txt
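
Optional sanity check (not part of the official instructions): before launching WanGP, you can confirm that the PyTorch build installed above actually sees your GPU. These are generic PyTorch calls, nothing WanGP-specific.

import torch

# Report the installed PyTorch/CUDA versions and the detected GPU, if any.
print("torch:", torch.__version__)
print("cuda runtime:", torch.version.cuda)
print("cuda available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("device:", torch.cuda.get_device_name(0))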

Run the application:

python wgp.py  # Text-to-video (default)
python wgp.py --i2v  # Image-to-video

Update the application: if you are using Pinokio, use Pinokio to update; otherwise, go to the directory where WanGP is installed and run:

git pull
pip install -r requirements.txt

📦 Installation

For detailed installation instructions for different GPU generations:

🎯 Usage

Basic Usage

Advanced Features

📚 Documentation

🔗 Related Projects

Other Models for the GPU Poor

  • HuanyuanVideoGP - One of the best open source Text to Video generators
  • Hunyuan3D-2GP - Image to 3D and text to 3D tool
  • FluxFillGP - Inpainting/outpainting tools based on Flux
  • Cosmos1GP - Text to world generator and image/video to world
  • OminiControlGP - Flux-derived application for object transfer
  • YuE GP - Song generator with instruments and singer's voice

Made with ❤️ by DeepBeepMeep
