Animatediff Proposal #5413
do we need this?
I still don't understand why we need this. If `cross_attention_dim` is None, then why do we have to manually set `encoder_hidden_states` to None? This looks more like a hacky bug correction. Why do we pass `encoder_hidden_states` in the first place if we don't have cross attention?
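For context, here is a minimal sketch of the pattern being questioned. This is not the actual PR code; the class and parameter names are hypothetical stand-ins for the motion-module transformer block.

```python
import torch
from torch import nn


class TemporalBlockSketch(nn.Module):
    """Hypothetical stand-in for the motion-module block under discussion."""

    def __init__(self, dim: int, cross_attention_dim=None, num_heads: int = 8):
        super().__init__()
        self.cross_attention_dim = cross_attention_dim
        self.norm = nn.LayerNorm(dim)
        # Attention over the temporal axis. A real cross-attention path would
        # build its key/value projections from cross_attention_dim (e.g. via
        # kdim/vdim); this sketch keeps everything at `dim` for simplicity.
        self.attn = nn.MultiheadAttention(dim, num_heads=num_heads, batch_first=True)

    def forward(self, hidden_states, encoder_hidden_states=None):
        # The questioned pattern: encoder_hidden_states is still passed down
        # from the higher-level blocks, but is force-reset to None whenever the
        # block was built without cross attention.
        if self.cross_attention_dim is None:
            encoder_hidden_states = None

        normed = self.norm(hidden_states)
        # With encoder_hidden_states=None this degenerates to self-attention.
        context = encoder_hidden_states if encoder_hidden_states is not None else normed
        out, _ = self.attn(normed, context, context)
        return hidden_states + out
```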
@patrickvonplaten I initially added it so that users could train new motion modules with the option of using cross attention.
We can omit sending the encoder hidden states to this block from the higher-level blocks. It just means the MotionModules in the UNetMotionModel cannot support cross attention at all. We can then remove the `temporal_cross_attention_dim` argument.
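A sketch of what dropping the argument could look like, purely for illustration (hypothetical names, not the actual diff): the motion module becomes a temporal self-attention block with no `temporal_cross_attention_dim` and no `encoder_hidden_states` plumbing at all.

```python
from torch import nn


class MotionModuleSketch(nn.Module):
    """Hypothetical simplified motion module: temporal self-attention only."""

    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, num_heads=num_heads, batch_first=True)

    def forward(self, hidden_states):
        # Temporal self-attention over (batch, frames, channels); the caller no
        # longer forwards encoder_hidden_states to this block.
        normed = self.norm(hidden_states)
        out, _ = self.attn(normed, normed, normed)
        return hidden_states + out
```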
We should only add options that are needed to run the official AnimateDiff checkpoints. Customizations that users might want to try should not be added unless they become necessary.
Here, if the motion modules always have `temporal_cross_attention_dim` set to None, then let's not offer the possibility to customize it, as this unnecessarily bloats the code.