Stars
Built on models such as SparkTTS and OrpheusTTS, providing high-quality Chinese speech synthesis and voice cloning.
No fortress, purely open ground. OpenManus is Coming.
[CVPR 2025] Diffusion Self-Distillation for Zero-Shot Customized Image Generation
Clapper.app, a video synthesizer and sequencer designed for the age of AI cinema
[ACM MM 2024] This is the official code for "AniTalker: Animate Vivid and Diverse Talking Faces through Identity-Decoupled Facial Motion Encoding"
[ECCV 2024] MOFA-Video: Controllable Image Animation via Generative Motion Field Adaptions in Frozen Image-to-Video Diffusion Model.
ControlNet++: All-in-one ControlNet for image generation and editing!
Code for SCIS-2025 Paper "UniAnimate: Taming Unified Video Diffusion Models for Consistent Human Image Animation".
Code and dataset for photorealistic Codec Avatars driven from audio
MusePose: a Pose-Driven Image-to-Video Framework for Virtual Human Generation
MuseV: Infinite-length and High Fidelity Virtual Human Video Generation with Visual Conditioned Parallel Denoising
Enjoy the magic of Diffusion models!
[AAAI 2025] Official implementation of "Follow-Your-Click: Open-domain Regional Image Animation via Short Prompts"
InstantID: Zero-shot Identity-Preserving Generation in Seconds 🔥
Unofficial Implementation of Animate Anyone
One UI for chatgpt web, midjourney, gpts, suno, luma, runway, viggle, flux, ideogram, realtime, pika, and udio; simultaneously supports Web / PWA / Linux / Win / MacOS platforms
A novel image harmonization method based on Implicit Neural Representation.
[CVPR'24 Highlight] Official PyTorch implementation of CoDeF: Content Deformation Fields for Temporally Consistent Video Processing
Automatic Shadow Generation via Exposure Fusion
LVDM: Latent Video Diffusion Models for High-Fidelity Long Video Generation
Visualizer for neural network, deep learning and machine learning models
efficient video representation through neural fields
Use AnimeGANv3 to create your own animations, including turning photos or videos into anime.
Generate image analogies using neural matching and blending.