[bug]: Prompt with term weighted 0 results in an all-black image #2832
Comments
I think this is a bug that @damian0815 will be best able to find and fix.

hmm, i can't reproduce this on my m1

What do you need from me?

@JPPhoto maybe can you try with xformers disabled and/or attention slicing disabled? i'll have to log in to a vast.ai instance before i can debug it myself

This happens with both disabled - I still avoid xformers and I've tried with images large and small. It also doesn't matter where the term weighted 0 appears in the prompt.

ok, i'll see if i can take a look tomorrow then
issue was an epsilon that was set too low for fp16 |
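For context, here is a minimal sketch (not the actual code from the fix) of why an epsilon below fp16's resolution can turn a zero-weighted embedding into NaNs, which then decode to an all-black image; the variable names are illustrative only:

```python
import torch

# float16 cannot represent values below its smallest subnormal (~6e-8),
# so an epsilon like 1e-9 silently underflows to exactly 0.0.
eps = torch.tensor(1e-9, dtype=torch.float16)
print(eps)  # tensor(0., dtype=torch.float16)

# If a term weighted 0.0 drives an embedding's norm to zero, dividing by
# (norm + eps) becomes 0 / 0, producing NaNs that propagate through the
# UNet and decode to a black image.
emb = torch.zeros(4, dtype=torch.float16)
print(emb / (emb.norm() + eps))        # tensor([nan, nan, nan, nan], dtype=torch.float16)

# An epsilon that is representable in fp16 keeps the division finite.
safe_eps = torch.tensor(1e-4, dtype=torch.float16)
print(emb / (emb.norm() + safe_eps))   # tensor([0., 0., 0., 0.], dtype=torch.float16)
```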
Update `compel` to 1.0.0 (#2961). This fixes #2832. It also changes the way downweighting is applied; in particular, downweighting should now be much better and more controllable. From the [compel changelog](https://github.com/damian0815/compel#changelog):

> Downweighting now works by applying an attention mask to remove the downweighted tokens, rather than literally removing them from the sequence. This behaviour is the default, but the old behaviour can be re-enabled by passing `downweight_mode=DownweightMode.REMOVE` on init of the `Compel` instance.
>
> Formerly, downweighting a token worked by both multiplying the weighting of the token's embedding, and doing an inverse-weighted blend with a copy of the token sequence that had the downweighted tokens removed. The intuition is that as weight approaches zero, the tokens being downweighted should be actually removed from the sequence. However, removing the tokens resulted in the positioning of all downstream tokens becoming messed up. The blend ended up blending a lot more than just the tokens in question.
>
> As of v1.0.0, taking advice from @keturn and @bonlime (damian0815/compel#7) the procedure is by default different. Downweighting still involves a blend but what is blended is a version of the token sequence with the downweighted tokens masked out, rather than removed. This correctly preserves positioning embeddings of the other tokens.
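To make the masking-versus-removal distinction concrete, here is a rough sketch of the approach the changelog describes. `encode`, `downweighted_positions`, and the blend are placeholder names for illustration, not compel's actual internals:

```python
import torch

def blend_downweighted(encode, token_ids, attention_mask, downweighted_positions, weight):
    """Encode the sequence twice - once normally and once with the downweighted
    tokens hidden via the attention mask - then blend the two conditionings.
    Because no tokens are removed, the positions (and therefore positional
    embeddings) of all other tokens are unchanged."""
    full = encode(token_ids, attention_mask)              # every token visible
    masked_attention = attention_mask.clone()
    masked_attention[:, downweighted_positions] = 0       # hide only the downweighted tokens
    masked = encode(token_ids, masked_attention)          # same length, same positions
    # weight=1.0 reproduces the full conditioning; as weight -> 0 the masked
    # variant dominates, as if the downweighted tokens were not there.
    return torch.lerp(masked, full, weight)
```

As the quoted changelog notes, the previous removal-based behaviour can still be requested by passing `downweight_mode=DownweightMode.REMOVE` when constructing the `Compel` instance.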
Is there an existing issue for this?
OS
Linux
GPU
cuda
VRAM
12GB
What happened?
Generating an image with a zero-weight prompt (`prompt with term weighted++ (zero)0.0`) results in a black image. Tagging @damian0815.
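A hedged, standalone repro sketch (outside InvokeAI, driving compel directly through a diffusers pipeline; the model id, step count, and exact call names are assumptions for illustration, not taken from this issue):

```python
import torch
from diffusers import StableDiffusionPipeline
from compel import Compel

# fp16 matters here: the bug was traced to an epsilon too small for half precision.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16  # assumed model id
).to("cuda")

compel = Compel(tokenizer=pipe.tokenizer, text_encoder=pipe.text_encoder)

# The prompt from this issue: one term upweighted (++), one term weighted 0.0.
conditioning = compel.build_conditioning_tensor("prompt with term weighted++ (zero)0.0")

# Before the fix, the resulting image came out entirely black.
image = pipe(prompt_embeds=conditioning, num_inference_steps=20).images[0]
image.save("out.png")
```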
Screenshots
No response
Additional context
No response
Contact Details
No response