* draft design * clean up * clean up * clean up * clean up * clean up * clean up * clean up * clean up * clean up * update pipeline * clean up * clean up * clean up * add tests * change motion block * clean up * clean up * clean up * update * update * update * update * update * update * update * update * clean up * update * update * update model test * update * update * update * update * make style * update * fix embeddings * update * merge upstream * max fix copies * fix bug * fix mistake * add docs * update * clean up * update * clean up * clean up * fix docstrings * fix docstrings * update * update * clean up * update
Showing 18 changed files with 3,322 additions and 1 deletion.
@@ -0,0 +1,13 @@
# UNetMotionModel

The [UNet](https://huggingface.co/papers/1505.04597) model was originally introduced by Ronneberger et al. for biomedical image segmentation, but it is also commonly used in 🤗 Diffusers because it outputs images that are the same size as the input. It is one of the most important components of a diffusion system because it facilitates the actual diffusion process. There are several variants of the UNet model in 🤗 Diffusers, depending on its number of dimensions and whether it is a conditional model or not. This variant extends the 2D UNet with motion modules so it can generate sequences of video frames.
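
A minimal sketch of one way to obtain a `UNetMotionModel` instance, assuming the motion adapter and base checkpoints used in the AnimateDiff usage example later in this commit:

```python
from diffusers import AnimateDiffPipeline, MotionAdapter, UNetMotionModel

# Assumed checkpoints (same as in the AnimateDiff usage example)
adapter = MotionAdapter.from_pretrained("guoyww/animatediff-motion-adapter-v1-5-2")
pipe = AnimateDiffPipeline.from_pretrained(
    "SG161222/Realistic_Vision_V5.1_noVAE", motion_adapter=adapter
)

# The pipeline combines the base 2D UNet and the motion modules into a UNetMotionModel
unet = pipe.unet
assert isinstance(unet, UNetMotionModel)
print(f"parameters: {sum(p.numel() for p in unet.parameters()):,}")
```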

The abstract from the paper is:

*There is large consent that successful training of deep networks requires many thousand annotated training samples. In this paper, we present a network and training strategy that relies on the strong use of data augmentation to use the available annotated samples more efficiently. The architecture consists of a contracting path to capture context and a symmetric expanding path that enables precise localization. We show that such a network can be trained end-to-end from very few images and outperforms the prior best method (a sliding-window convolutional network) on the ISBI challenge for segmentation of neuronal structures in electron microscopic stacks. Using the same network trained on transmitted light microscopy images (phase contrast and DIC) we won the ISBI cell tracking challenge 2015 in these categories by a large margin. Moreover, the network is fast. Segmentation of a 512x512 image takes less than a second on a recent GPU. The full implementation (based on Caffe) and the trained networks are available at http://lmb.informatik.uni-freiburg.de/people/ronneber/u-net.*

## UNetMotionModel
[[autodoc]] UNetMotionModel

## UNet3DConditionOutput
[[autodoc]] models.unet_3d_condition.UNet3DConditionOutput
@@ -0,0 +1,108 @@
<!--Copyright 2023 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->

# Text-to-Video Generation with AnimateDiff

## Overview

[AnimateDiff: Animate Your Personalized Text-to-Image Diffusion Models without Specific Tuning](https://arxiv.org/abs/2307.04725) by Yuwei Guo, Ceyuan Yang*, Anyi Rao, Yaohui Wang, Yu Qiao, Dahua Lin, Bo Dai

The abstract of the paper is the following:

*With the advance of text-to-image models (e.g., Stable Diffusion) and corresponding personalization techniques such as DreamBooth and LoRA, everyone can manifest their imagination into high-quality images at an affordable cost. Subsequently, there is a great demand for image animation techniques to further combine generated static images with motion dynamics. In this report, we propose a practical framework to animate most of the existing personalized text-to-image models once and for all, saving efforts in model-specific tuning. At the core of the proposed framework is to insert a newly initialized motion modeling module into the frozen text-to-image model and train it on video clips to distill reasonable motion priors. Once trained, by simply injecting this motion modeling module, all personalized versions derived from the same base T2I readily become text-driven models that produce diverse and personalized animated images. We conduct our evaluation on several public representative personalized text-to-image models across anime pictures and realistic photographs, and demonstrate that our proposed framework helps these models generate temporally smooth animation clips while preserving the domain and diversity of their outputs. Code and pre-trained weights will be publicly available at this https URL.*

## Available Pipelines:

| Pipeline | Tasks | Demo |
|---|---|:---:|
| [AnimateDiffPipeline](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/animatediff/pipeline_animatediff.py) | *Text-to-Video Generation with AnimateDiff* | |

## Usage example

AnimateDiff works with a MotionAdapter checkpoint and a Stable Diffusion model checkpoint. The MotionAdapter is a collection of Motion Modules that are responsible for adding coherent motion across image frames. These modules are applied after the ResNet and Attention blocks in the Stable Diffusion UNet.

The following example demonstrates how to use a *MotionAdapter* checkpoint with Diffusers for inference based on Stable Diffusion 1.4/1.5.

```python
import torch
from diffusers import MotionAdapter, AnimateDiffPipeline, DDIMScheduler
from diffusers.utils import export_to_gif

# Load the motion adapter
adapter = MotionAdapter.from_pretrained("guoyww/animatediff-motion-adapter-v1-5-2")
# Load a Stable Diffusion 1.5 based finetuned model
model_id = "SG161222/Realistic_Vision_V5.1_noVAE"
pipe = AnimateDiffPipeline.from_pretrained(model_id, motion_adapter=adapter)
scheduler = DDIMScheduler.from_pretrained(
    model_id, subfolder="scheduler", clip_sample=False, timestep_spacing="linspace", steps_offset=1
)
pipe.scheduler = scheduler

# Enable memory savings
pipe.enable_vae_slicing()
pipe.enable_model_cpu_offload()

output = pipe(
    prompt=(
        "masterpiece, bestquality, highlydetailed, ultradetailed, sunset, "
        "orange sky, warm lighting, fishing boats, ocean waves seagulls, "
        "rippling water, wharf, silhouette, serene atmosphere, dusk, evening glow, "
        "golden hour, coastal landscape, seaside scenery"
    ),
    negative_prompt="bad quality, worse quality",
    num_frames=16,
    guidance_scale=7.5,
    num_inference_steps=25,
    generator=torch.Generator("cpu").manual_seed(42),
)
frames = output.frames[0]
export_to_gif(frames, "animation.gif")
```

Here are some sample outputs:

<table>
    <tr>
        <td><center>
        masterpiece, bestquality, sunset.
        <br>
        <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/animatediff-realistic-doc.gif"
            alt="masterpiece, bestquality, sunset"
            style="width: 300px;" />
        </center></td>
    </tr>
</table>

<Tip>

AnimateDiff tends to work better with finetuned Stable Diffusion models. If you plan on using a scheduler that can clip samples, make sure to disable it by setting `clip_sample=False` in the scheduler, as sample clipping can have an adverse effect on the generated frames. One way to do this is sketched below.

</Tip>
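
One possible way to disable clipping, assuming the `pipe` object from the usage example above, is to rebuild the scheduler from its own config with the overriding keyword argument; this is a sketch rather than the only supported approach:

```python
from diffusers import DDIMScheduler

# Recreate the scheduler from the pipeline's existing config,
# overriding clip_sample so intermediate samples are not clipped.
pipe.scheduler = DDIMScheduler.from_config(
    pipe.scheduler.config,
    clip_sample=False,
    timestep_spacing="linspace",
    steps_offset=1,
)
```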

## AnimateDiffPipeline
[[autodoc]] AnimateDiffPipeline
- all
- __call__
- enable_freeu
- disable_freeu
- enable_vae_slicing
- disable_vae_slicing
- enable_vae_tiling
- disable_vae_tiling

## AnimateDiffPipelineOutput

[[autodoc]] pipelines.animatediff.AnimateDiffPipelineOutput

## Available checkpoints

Motion Adapter checkpoints can be found under [guoyww](https://huggingface.co/guoyww/). These checkpoints are meant to work with any model based on Stable Diffusion 1.4/1.5.
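
As a sketch of that compatibility, the adapter from the usage example can be paired with any other Stable Diffusion 1.4/1.5 derived checkpoint by swapping the model id; the id below is a placeholder, not a real checkpoint:

```python
import torch
from diffusers import AnimateDiffPipeline, MotionAdapter

adapter = MotionAdapter.from_pretrained("guoyww/animatediff-motion-adapter-v1-5-2")

# Placeholder id: substitute any Stable Diffusion 1.4/1.5 based finetune
model_id = "your-username/your-sd15-finetune"
pipe = AnimateDiffPipeline.from_pretrained(
    model_id, motion_adapter=adapter, torch_dtype=torch.float16
)
```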