Add support for multiple controlnet #691
Conversation
The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
Hi @JingyaHuang that's great, it works! Thanks a lot for providing the fix 🎉
LGTM, thank you for the pull request!
```python
elif use_auth_token:
    huggingface_token = get_token()
else:
    raise ValueError("You need to provide `use_auth_token` to be able to push to the hub")
api = HfApi(endpoint=endpoint)
```
nit: I actually think we should pass the token here, so that it can be omitted later.
cc @Wauplin
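To illustrate the suggestion, here is a minimal, self-contained sketch of the token-resolution logic (the helper name `resolve_token` is hypothetical, and `get_token` is passed in as a stand-in for `huggingface_hub.get_token`): resolving the token once up front lets it be handed to the Hub client so that later calls can omit it.

```python
def resolve_token(use_auth_token, get_token):
    """Resolve the token used to authenticate against the Hub.

    A string value is treated as an explicit token; a truthy non-string
    value falls back to the locally stored token via get_token().
    """
    if isinstance(use_auth_token, str):
        return use_auth_token
    elif use_auth_token:
        return get_token()
    raise ValueError("You need to provide `use_auth_token` to be able to push to the hub")


# Resolve once, then pass the result to the client (e.g. HfApi(..., token=token))
# so subsequent calls such as create_repo/upload_folder can omit it.
token = resolve_token(True, lambda: "hf_dummy_token")
```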
That sounds good! I just did it this way to pass the CIs; I will leave it to you if you want to change it (I don't want to wait for the CIs again, haha).
LGTM
Co-authored-by: Michael Benayoun <michael@huggingface.co>
What does this PR do?
```shell
# SD 1.5 with two ControlNets (openpose + depth)
optimum-cli export neuron --inline-weights-neff --model jyoung105/stable-diffusion-v1-5 --task stable-diffusion --auto_cast matmul --auto_cast_type bf16 --batch_size 1 --num_images_per_prompt 1 --controlnet_ids lllyasviel/control_v11p_sd15_openpose lllyasviel/control_v11f1p_sd15_depth --height 512 --width 512 sd15-512x512-bf16-openpose-depth

# SDXL with two ControlNets (canny + openpose)
optimum-cli export neuron -m stabilityai/stable-diffusion-xl-base-1.0 --task stable-diffusion-xl --batch_size 1 --height 1024 --width 1024 --controlnet_ids diffusers/controlnet-canny-sdxl-1.0-small thibaud/controlnet-openpose-sdxl-1.0 --num_images_per_prompt 1 sdxl_neuron_canny_openpose/
```
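As a rough illustration of the multi-ControlNet calling convention these exports imply (the helper `pair_controlnet_inputs` is hypothetical, not from this PR): each ControlNet id passed via `--controlnet_ids` is expected to receive one conditioning image, and optionally one conditioning scale, at inference time.

```python
def pair_controlnet_inputs(controlnet_ids, images, scales=None):
    """Pair each ControlNet id with its conditioning image and scale.

    Sketch of the validation a multi-ControlNet pipeline typically performs:
    the lists of ids, images, and scales must all have the same length.
    """
    if scales is None:
        scales = [1.0] * len(controlnet_ids)  # default conditioning scale
    if not (len(controlnet_ids) == len(images) == len(scales)):
        raise ValueError("Each ControlNet id needs exactly one conditioning image and one scale")
    return list(zip(controlnet_ids, images, scales))
```

For the SD 1.5 export above, the two ids (openpose and depth) would be paired with two conditioning images in the same order.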
Theoretically yes, but it easily goes OOM without parallelism, which will be supported by the future NxD integration for inference.
Tests
Documentation
Some changes to fix CIs
It seems that the CIs failed after we bumped the transformers/optimum versions, since recent changes in Optimum deprecated `use_auth_token` (Deprecated `use_auth_token`, optimum#1837).

Before submitting