This repository is the official implementation of AnimateDiff [ICLR 2024 Spotlight]. It is a plug-and-play module that turns most community text-to-image models into animation generators, without the need for additional training.
AnimateDiff: Animate Your Personalized Text-to-Image Diffusion Models without Specific Tuning
Yuwei Guo,
Ceyuan Yang*,
Anyi Rao,
Zhengyang Liang,
Yaohui Wang,
Yu Qiao,
Maneesh Agrawala,
Dahua Lin,
Bo Dai
(*Corresponding Author)
We developed four versions of AnimateDiff: v1, v2, and v3 for Stable Diffusion V1.5; sdxl-beta for Stable Diffusion XL.
- Update to latest diffusers version
- Update Gradio demo
- Release training scripts
- Release AnimateDiff v3 and SparseCtrl
We show some results in the GALLERY. Some of them are contributed by the community.
Note: see ANIMATEDIFF for detailed setup.
git clone https://github.com/guoyww/AnimateDiff.git
cd AnimateDiff
conda env create -f environment.yaml
conda activate animatediff
git lfs install
git clone https://huggingface.co/runwayml/stable-diffusion-v1-5 models/StableDiffusion/
Manually download the community `.safetensors` models from CivitAI and save them to `models/DreamBooth_LoRA`. We recommend RealisticVision V5.1 and ToonYou Beta6.
Manually download the AnimateDiff modules. The download links can be found in each version's model zoo, provided below. Save the modules to `models/Motion_Module`.
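For example, here is a minimal command-line sketch of fetching the v3 motion module. It assumes the checkpoint is hosted under the guoyww/animatediff HuggingFace repository; please verify the exact link in the model zoo below.

```bash
# Hedged sketch: download the AnimateDiff v3 motion module.
# The URL assumes the checkpoint lives in the guoyww/animatediff HuggingFace repo;
# double-check against the model zoo links below.
mkdir -p models/Motion_Module
wget -P models/Motion_Module \
  https://huggingface.co/guoyww/animatediff/resolve/main/v3_sd15_mm.ckpt
```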
In this version, we finetune the image model through a Domain Adapter LoRA for more flexibility at inference time.
Additionally, we implement two SparseCtrl encoders (RGB image and scribble), which can take an arbitrary number of condition maps to control the generation process.
- Explanation: the Domain Adapter is a LoRA module trained on static frames of the training video dataset. This step is done before training the motion module and helps the motion module focus on motion modeling, as shown in the figure below. At inference, by adjusting the LoRA scale of the Domain Adapter, some visual attributes of the training videos, e.g., watermarks, can be removed. To utilize the SparseCtrl encoders, it is necessary to use the full Domain Adapter in the pipeline.
Technical details of SparseCtrl can be found in this research paper:
SparseCtrl: Adding Sparse Controls to Text-to-Video Diffusion Models
Yuwei Guo,
Ceyuan Yang*,
Anyi Rao,
Maneesh Agrawala,
Dahua Lin,
Bo Dai
(*Corresponding Author)
AnimateDiff v3 Model Zoo
Name | HuggingFace | Type | Storage Space | Description |
---|---|---|---|---|
v3_adapter_sd_v15.ckpt | Link | Domain Adapter | 97.4 MB | |
v3_sd15_mm.ckpt | Link | Motion Module | 1.56 GB | |
v3_sd15_sparsectrl_scribble.ckpt | Link | SparseCtrl Encoder | 1.86 GB | scribble condition |
v3_sd15_sparsectrl_rgb.ckpt | Link | SparseCtrl Encoder | 1.85 GB | RGB image condition |
Example results: image animation (inputs by RealisticVision → animations).
Example results: sketch-to-animation (input scribbles → outputs).
Here we provide three demo inference scripts. The corresponding AnimateDiff modules and community models need to be downloaded in advance. Put the motion module in `models/Motion_Module`; put the SparseCtrl encoders in `models/SparseCtrl`.
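Similarly, a hedged sketch of placing the SparseCtrl encoders, again assuming they are hosted under the guoyww/animatediff HuggingFace repository (verify against the model zoo above):

```bash
# Hedged sketch: download the v3 SparseCtrl encoders into models/SparseCtrl.
# URLs assume the guoyww/animatediff HuggingFace repo; confirm via the model zoo above.
mkdir -p models/SparseCtrl
wget -P models/SparseCtrl \
  https://huggingface.co/guoyww/animatediff/resolve/main/v3_sd15_sparsectrl_rgb.ckpt
wget -P models/SparseCtrl \
  https://huggingface.co/guoyww/animatediff/resolve/main/v3_sd15_sparsectrl_scribble.ckpt
```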
# under general T2V setting
python -m scripts.animate --config configs/prompts/v3/v3-1-T2V.yaml
# image animation (on RealisticVision)
python -m scripts.animate --config configs/prompts/v3/v3-2-animation-RealisticVision.yaml
# sketch-to-animation and storyboarding (on RealisticVision)
python -m scripts.animate --config configs/prompts/v3/v3-3-sketch-RealisticVision.yaml
- Slight flickering is noticeable; this will be addressed in future versions.
- To stay compatible with community models, there are no specific optimizations for general T2V, leading to limited visual quality under this setting.
- (Style Alignment) For usages such as image animation/interpolation, it's recommended to use images generated by the same community model.
Release the Motion Module (beta version) on SDXL, available at Google Drive / HuggingFace / CivitAI. High-resolution videos (i.e., 1024x1024x16 frames with various aspect ratios) can be produced with or without personalized models. Inference usually requires ~13GB VRAM and tuned hyperparameters (e.g., number of sampling steps), depending on the chosen personalized model.
Check out the sdxl branch for inference details. More checkpoints with better quality will be available soon. Stay tuned. Examples below are manually downsampled for fast loading.
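As a rough sketch, switching to the SDXL branch looks like the following; the config path shown is only an illustrative placeholder, so check the sdxl branch for the actual inference config:

```bash
# Hedged sketch: run SDXL-beta inference from the sdxl branch.
# The config filename below is a placeholder -- see the sdxl branch for the real one.
git checkout sdxl
python -m scripts.animate --config configs/prompts/<your-sdxl-config>.yaml
```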
AnimateDiff SDXL-Beta Model Zoo
Name | HuggingFace | Type | Storage Space |
---|---|---|---|
mm_sdxl_v10_beta.ckpt | Link | Motion Module | 950 MB |
Example results: original SDXL and community SDXL models.
In this version, the motion module is trained at higher resolution and larger batch size. We observe that this significantly improves the sample quality.
Moreover, we support MotionLoRA for eight basic camera movements.
AnimateDiff v2 Model Zoo
Name | HuggingFace | Type | Parameter | Storage Space |
---|---|---|---|---|
mm_sd_v15_v2.ckpt | Link | Motion Module | 453 M | 1.7 GB |
v2_lora_ZoomIn.ckpt | Link | MotionLoRA | 19 M | 74 MB |
v2_lora_ZoomOut.ckpt | Link | MotionLoRA | 19 M | 74 MB |
v2_lora_PanLeft.ckpt | Link | MotionLoRA | 19 M | 74 MB |
v2_lora_PanRight.ckpt | Link | MotionLoRA | 19 M | 74 MB |
v2_lora_TiltUp.ckpt | Link | MotionLoRA | 19 M | 74 MB |
v2_lora_TiltDown.ckpt | Link | MotionLoRA | 19 M | 74 MB |
v2_lora_RollingClockwise.ckpt | Link | MotionLoRA | 19 M | 74 MB |
v2_lora_RollingAnticlockwise.ckpt | Link | MotionLoRA | 19 M | 74 MB |
- Release MotionLoRA and its model zoo, enabling camera movement controls! Please download the MotionLoRA models (74 MB per model, available at Google Drive / HuggingFace / CivitAI) and save them to the `models/MotionLoRA` folder. Example:

  python -m scripts.animate --config configs/prompts/v2/5-RealisticVision-MotionLoRA.yaml

  Camera movements: Zoom In, Zoom Out, Zoom Pan Left, Zoom Pan Right, Tilt Up, Tilt Down, Rolling Anti-Clockwise, Rolling Clockwise.

- New Motion Module release! `mm_sd_v15_v2.ckpt` was trained on larger resolution & batch size, and gains noticeable quality improvements. Check it out at Google Drive / HuggingFace / CivitAI and use it with `configs/inference/inference-v2.yaml`. Example:

  python -m scripts.animate --config configs/prompts/v2/5-RealisticVision.yaml

  Here is a qualitative comparison between `mm_sd_v15.ckpt` (left) and `mm_sd_v15_v2.ckpt` (right):
AnimateDiff v1 Model Zoo
Name | HuggingFace | Parameter | Storage Space |
---|---|---|---|
mm_sd_v14.ckpt | Link | 417 M | 1.6 GB |
mm_sd_v15.ckpt | Link | 417 M | 1.6 GB |
Model: ToonYou
Model: Realistic Vision V2.0
Here we provide several demo inference scripts. The corresponding AnimateDiff modules and community models need to be downloaded in advance. See ANIMATEDIFF for detailed setup.
python -m scripts.animate --config configs/prompts/1-ToonYou.yaml
python -m scripts.animate --config configs/prompts/3-RcnzCartoon.yaml
User interfaces developed by the community:
- A1111 Extension sd-webui-animatediff (by @continue-revolution)
- ComfyUI Extension ComfyUI-AnimateDiff-Evolved (by @Kosinkadink)
- Google Colab: Colab (by @camenduru)
We created a Gradio demo to make AnimateDiff easier to use. To launch the demo, please run the following commands:
conda activate animatediff
python app.py
By default, the demo will run at `localhost:7860`.
Installation
Please ensure xformers is installed, as it is used to reduce inference memory.
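For instance, a minimal sketch of installing xformers in the conda environment (you may need to pin a version that matches your PyTorch/CUDA build):

```bash
# Install xformers to enable memory-efficient attention during inference.
conda activate animatediff
pip install xformers
```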
Various resolution or number of frames
Currently, we recommend generating animations with 16 frames at 512 resolution, which aligns with our training settings. Note that other resolutions or frame counts may affect the quality to some extent.

How to use it without any coding
- Get LoRA models: train a LoRA model with A1111 based on a collection of your own favorite images (e.g., tutorials: English, Japanese, Chinese), or download LoRA models from Civitai.
- Animate LoRA models: use the Gradio interface or A1111 (e.g., tutorials: English, Japanese, Chinese).
- Be creative together with other techniques, such as super resolution, frame interpolation, music generation, etc.
Animating a given image
We totally agree that animating a given image is an appealing feature, which we will try to support officially in the future. For now, you may enjoy other efforts from talesofai.
Contributions from the community
Contributions are always welcome! The `dev` branch is for community contributions; as for the main branch, we would like to keep it aligned with the original technical report :)
Please refer to ANIMATEDIFF for the detailed setup.
@article{guo2023animatediff,
title={AnimateDiff: Animate Your Personalized Text-to-Image Diffusion Models without Specific Tuning},
author={Guo, Yuwei and Yang, Ceyuan and Rao, Anyi and Liang, Zhengyang and Wang, Yaohui and Qiao, Yu and Agrawala, Maneesh and Lin, Dahua and Dai, Bo},
journal={International Conference on Learning Representations},
year={2024}
}
@article{guo2023sparsectrl,
title={SparseCtrl: Adding Sparse Controls to Text-to-Video Diffusion Models},
author={Guo, Yuwei and Yang, Ceyuan and Rao, Anyi and Agrawala, Maneesh and Lin, Dahua and Dai, Bo},
journal={arXiv preprint arXiv:2311.16933},
year={2023}
}
This project is released for academic use. We disclaim responsibility for user-generated content. Users are solely liable for their actions. The project contributors are not legally affiliated with, nor accountable for, users' behaviors. Use the generative model responsibly, adhering to ethical and legal standards. Please be advised that our only official website is https://github.com/guoyww/AnimateDiff, and all the other websites are NOT associated with us at AnimateDiff.
Yuwei Guo: guoyuwei@pjlab.org.cn
Ceyuan Yang: yangceyuan@pjlab.org.cn
Bo Dai: daibo@pjlab.org.cn
Codebase built upon Tune-a-Video.