
[bug]: Prompt with term weighted 0 results in an all-black image #2832

Closed
JPPhoto opened this issue Feb 27, 2023 · 7 comments · Fixed by #2961
Labels
bug Something isn't working

Comments

@JPPhoto
Contributor

JPPhoto commented Feb 27, 2023

Is there an existing issue for this?

  • I have searched the existing issues

OS

Linux

GPU

cuda

VRAM

12GB

What happened?

Generating an image with a prompt containing a term weighted to zero (e.g. "prompt with term weighted++ (zero)0.0") results in an all-black image.

>> Image Generation Parameters:

{'prompt': 'prompt with term weighted++ (zero)0.0', 'iterations': 1, 'steps': 100, 'cfg_scale': 7.5, 'threshold': 0, 'perlin': 0, 'height': 1024, 'width': 576, 'sampler_name': 'k_euler_a', 'seed': 2084177759, 'progress_images': False, 'progress_latents': True, 'save_intermediates': 5, 'generation_mode': 'txt2img', 'init_mask': '...', 'hires_fix': True, 'strength': 0.6, 'seamless': False, 'variation_amount': 0.01}

>> ESRGAN Parameters: False
>> Facetool Parameters: False

>> [TOKENLOG] Parsed Prompt: FlattenedPrompt:[Fragment:'prompt with term'@1.0, Fragment:'weighted'@1.2100000000000002, Fragment:'zero'@0.0]

>> [TOKENLOG] Parsed Negative Prompt: FlattenedPrompt:[Fragment:''@1.0]

>> [TOKENLOG] Tokens  (5):
prompt with term weighted zero
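For context on the weights in the token log above: each trailing `+` multiplies a fragment's weight, and an explicit `(term)0.0` pins the fragment's weight to zero. A minimal sketch of the arithmetic, assuming a 1.1 step per `+` (inferred from the parsed value 1.2100000000000002, i.e. 1.1 squared):

```python
# Sketch of the weight arithmetic implied by the token log.
# The 1.1 base per '+' is an assumption inferred from 'weighted'@1.21...
ATTENTION_STEP = 1.1

def plus_weight(n_pluses: int) -> float:
    """Weight of a fragment followed by n '+' characters."""
    return ATTENTION_STEP ** n_pluses

print(plus_weight(2))  # 1.2100000000000002 -- matches 'weighted'@1.21...
```

An explicit parenthesized weight like `(zero)0.0` bypasses this arithmetic entirely and sets the fragment weight to exactly 0.0, which is the case that triggers the bug.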

Tagging @damian0815.

Screenshots

No response

Additional context

No response

Contact Details

No response

@lstein
Collaborator

lstein commented Feb 28, 2023

I think this is a bug that @damian0815 will be best able to find and fix.

@damian0815
Contributor

hmm, i can't reproduce this on my m1

@JPPhoto
Contributor Author

JPPhoto commented Feb 28, 2023

What do you need from me?

@damian0815
Contributor

@JPPhoto maybe can you try with xformers disabled and/or attention slicing disabled? i'll have to log in to a vast.ai instance before i can debug it myself

@JPPhoto
Contributor Author

JPPhoto commented Mar 1, 2023

This happens with both disabled (I still avoid xformers), and I've tried with images both large and small. It also doesn't matter where the zero-weighted term appears in the prompt.

@damian0815
Contributor

ok, i'll see if i can take a look tomorrow then

@damian0815
Contributor

issue was an epsilon that was set too low for fp16
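A minimal illustration of that failure mode (hypothetical values, not the actual code): an epsilon that is comfortably representable in fp32 can underflow to zero in fp16, so a guard like `x / (norm + eps)` turns into 0/0 for an all-zero (fully downweighted) vector, filling the conditioning with NaNs that ultimately decode to a black image.

```python
import numpy as np

# An epsilon chosen for fp32 can flush to zero in fp16: the smallest
# positive float16 subnormal is 2**-24 (~5.96e-8), so 1e-9 underflows.
eps = np.float16(1e-9)
print(eps)  # 0.0

# With the guard gone, normalizing a zero-weighted (all-zero) vector
# divides 0 by 0 and yields NaNs.
v = np.zeros(4, dtype=np.float16)
with np.errstate(invalid="ignore"):
    normed = v / (np.linalg.norm(v) + eps)
print(np.isnan(normed).all())  # True
```

Raising the epsilon above float16's subnormal threshold (or keeping the guard arithmetic in fp32) avoids the division by zero.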

blessedcoolant added a commit that referenced this issue Mar 16, 2023
Update `compel` to 1.0.0 (#2961)

This fixes #2832.

It also changes the way downweighting is applied. In particular,
downweighting should now be much better and more controllable.

From the [compel
changelog](https://github.com/damian0815/compel#changelog):

> Downweighting now works by applying an attention mask to remove the
downweighted tokens, rather than literally removing them from the
sequence. This behaviour is the default, but the old behaviour can be
re-enabled by passing `downweight_mode=DownweightMode.REMOVE` on init of
the `Compel` instance.
>
> Formerly, downweighting a token worked by both multiplying the
weighting of the token's embedding, and doing an inverse-weighted blend
with a copy of the token sequence that had the downweighted tokens
removed. The intuition is that as weight approaches zero, the tokens
being downweighted should be actually removed from the sequence.
However, removing the tokens resulted in the positioning of all
downstream tokens becoming messed up. The blend ended up blending a lot
more than just the tokens in question.
> 
> As of v1.0.0, taking advice from @keturn and @bonlime
(damian0815/compel#7) the procedure is by
default different. Downweighting still involves a blend but what is
blended is a version of the token sequence with the downweighted tokens
masked out, rather than removed. This correctly preserves positioning
embeddings of the other tokens.
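A toy sketch of the difference the changelog describes (illustrative arrays, not compel's actual implementation): removing a zero-weight token shifts every later token's position, while masking hides it in place.

```python
import numpy as np

tokens = np.array([101, 202, 303, 404])    # toy token ids
weights = np.array([1.0, 1.0, 0.0, 1.0])   # third token downweighted to 0

# Old behaviour (DownweightMode.REMOVE): drop the token entirely.
removed = tokens[weights > 0]
print(removed)  # [101 202 404] -- 404 shifted from index 3 to index 2

# New default: keep the sequence intact and mask attention instead,
# so positional embeddings of the other tokens are preserved.
attention_mask = (weights > 0).astype(int)
print(attention_mask)  # [1 1 0 1] -- 303 is hidden in place
```

Because the masked variant has the same length as the original sequence, the blend between the two only affects the downweighted positions rather than every token downstream of them.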