SDXL with Compel: unexpected keyword argument output_hidden_states
#357
Hi @neo, whether the text encoder can output the hidden_states or not is defined during compilation and is not adjustable at inference time. Does your checkpoint config set `output_hidden_states` explicitly?
Also, the traced model can only take positional arguments and outputs a tuple, so in optimum we added a wrapper to control the keyword arguments and ensure they are passed to the model in the same order as when it was traced: optimum-neuron/optimum/neuron/modeling_diffusion.py, lines 681 to 694 in 2d824ba.
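To illustrate the idea, here is a minimal sketch of such a wrapper (not the actual optimum-neuron code; the class and attribute names below are assumptions):

```python
import torch

class TracedTextEncoderWrapper(torch.nn.Module):
    """Hypothetical wrapper: re-orders the accepted keyword arguments into the
    positional order used at trace time and ignores everything else."""

    def __init__(self, traced_model, input_names):
        super().__init__()
        self.model = traced_model          # the traced text encoder
        self.input_names = input_names     # ordered input names used for tracing

    def forward(self, *args, **kwargs):
        ordered = list(args)
        # Append the remaining traced inputs in their original order, if provided.
        for name in self.input_names[len(args):]:
            if name in kwargs:
                ordered.append(kwargs[name])
        # The traced model returns a plain tuple, not a ModelOutput object.
        return self.model(*ordered)
```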
So if you need these extra arguments, the wrapper is the place to handle them. P.S. I did not expect the components to be used outside of diffusers, and it makes sense to me to support these extra arguments natively in optimum-neuron. Would you like to file a PR and make it native in optimum?
Hi @JingyaHuang, thank you for the response! I had the same understanding that these extra options won't really affect the post-compilation runtime, so I guess my question was:
And I'd be more than happy to send in a PR once I get something working and have a generalized solution.
I was able to set up a temporary shim. However, I'm wondering: is there a quick way to do a compilation matching this signature and number of arguments for the text encoders? Thanks!
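For context, a minimal sketch of what such a shim could look like (this is an assumption about the approach, not the commenter's actual code):

```python
def shim_text_encoder(neuron_text_encoder):
    """Hypothetical shim: strip keyword arguments that the traced Neuron text
    encoder cannot accept before forwarding the call."""
    original_forward = neuron_text_encoder.forward

    def forward(input_ids, **kwargs):
        # output_hidden_states is baked in at compile time, and return_dict is
        # not supported by the traced graph, so both are dropped here.
        kwargs.pop("output_hidden_states", None)
        kwargs.pop("return_dict", None)
        return original_forward(input_ids, **kwargs)

    neuron_text_encoder.forward = forward
    return neuron_text_encoder
```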
Hi @neo,
Hi @JingyaHuang! How about the `attention_mask`?
Yes. In general, for optimum, we do not support every possible input by default, as it sometimes comes with extra compute cost. But we try to support different use cases by letting users customize whatever is possible.
That makes total sense, and I do understand it might not be a common enough use case for the library to bake this in. While acknowledging this is not your job or obligation, would you be generous enough to point me to how I could do a compilation accepting these extra arguments for the text encoders?
Update: based on your suggestions, I was able to figure out the compilation part; now there's only one remaining issue. I'll try to see what I can do about that. Let me know if you have any suggestions/recommendations 😊
I inspected that for our use case. I also noticed that in the neuron config of the text encoder there are output names specified, which was very helpful in confirming which output tuple element is which when assembling the final output.
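As an illustration of that mapping, here is a sketch under assumptions (the exact output names and where they live in the neuron config may differ):

```python
def name_text_encoder_outputs(output_tuple, output_names):
    """Pair each element of the traced text encoder's output tuple with the
    output name recorded in its neuron config."""
    return dict(zip(output_names, output_tuple))

# Illustrative usage (names assumed, not taken from the actual config):
# named = name_text_encoder_outputs(
#     text_encoder(input_ids),
#     ["last_hidden_state", "pooler_output", "hidden_states"],
# )
# hidden_states = named["hidden_states"]
```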
It seems like for SD, the attention mask would always be all ones? https://github.com/damian0815/compel/blob/v2.0.2/src/compel/embeddings_provider.py#L47
Hi, can I ask what to do if I want to use SDXL with compel? I'm stuck on this error.
What should I do when I compile the model? I get a segmentation fault error even when I tried with the optimum.neuron Python module. Can you give me some hint about this? ;)
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Thank you!
not stale
Yep.. I think so too.
If compel is a widely used option, I'm happy to support it.
I had a semi-ready fix @JingyaHuang 😆 let me verify #371 tomorrow and I'll try to do a PR.
Does this look good? @JingyaHuang I usually use compel with normal Stable Diffusion like this:
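A minimal sketch of the typical compel + diffusers usage, assuming a standard StableDiffusionPipeline checkpoint (model id and prompt are placeholders):

```python
from diffusers import StableDiffusionPipeline
from compel import Compel

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
compel = Compel(tokenizer=pipe.tokenizer, text_encoder=pipe.text_encoder)

# compel parses the weighted prompt and returns conditioning embeddings
prompt_embeds = compel("a cat playing with a ball++ in the forest")
image = pipe(prompt_embeds=prompt_embeds, num_inference_steps=30).images[0]
image.save("out.png")
```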
Hey folks, I just prepared a PoC supporting the feature in a dev branch here: https://github.com/huggingface/optimum-neuron/tree/lez-support-compel. It's just a rough draft that works with the example sent by @Suprhimp, and it might help @neo with the PR that you would like to submit. Please tell me if it meets what's needed for supporting compel. ty!
I think it depends on the scope of changes you'd like. For the solution I had, the exporting side didn't need to be changed, because `output_hidden_states` already defaults to True there. Edit: so it's up to you whether we'd like to include the option in compilation as well, to potentially open doors to other future integrations.
From looking at the code, I could tell that `output_hidden_states` is always true for SDXL's text encoders; however, compel always passes it explicitly (even though it's also True for SDXL): https://github.com/damian0815/compel/blob/v2.0.2/src/compel/embeddings_provider.py#L392. And it's causing `NeuronModelTextEncoder` to error out with `unexpected keyword argument 'output_hidden_states'`. So I'm wondering, what's the best way to approach this, as well as possibly a similar issue with `return_dict`? Thanks!
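For reference, a sketch of the kind of SDXL + compel usage that runs into this. The Compel arguments follow compel's documented SDXL pattern; using NeuronStableDiffusionXLPipeline with a pre-compiled checkpoint directory is an assumption about the setup, and the path and prompt are placeholders:

```python
from compel import Compel, ReturnedEmbeddingsType
from optimum.neuron import NeuronStableDiffusionXLPipeline

# A previously compiled SDXL checkpoint directory (placeholder path).
pipe = NeuronStableDiffusionXLPipeline.from_pretrained("sdxl_neuron/")

compel = Compel(
    tokenizer=[pipe.tokenizer, pipe.tokenizer_2],
    text_encoder=[pipe.text_encoder, pipe.text_encoder_2],
    returned_embeddings_type=ReturnedEmbeddingsType.PENULTIMATE_HIDDEN_STATES_NON_NORMALIZED,
    requires_pooled=[False, True],
)

# compel calls the text encoders with keyword arguments such as
# output_hidden_states, which the traced NeuronModelTextEncoder forward
# does not accept, hence the error described in this issue.
conditioning, pooled = compel("a majestic lion++ wearing a crown")
image = pipe(
    prompt_embeds=conditioning,
    pooled_prompt_embeds=pooled,
    num_inference_steps=30,
).images[0]
```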