AutoencoderTiny doesn't work for LCM img2img when passing an image to encode #5619
Comments
This should solve it for you: https://colab.research.google.com/gist/sayakpaul/fa95a41beb5fea6d830324cbf6a8e8f4/scratchpad.ipynb
We have included Latent Consistency Models officially as a part of diffusers. For now, it's needed to specify a …
Hi @sayakpaul, I think the issue is with the image-to-image pipeline. Do you know if we can already use AutoPipelineForImage2Image for LCM?
That example isn't img2img with an image input (as opposed to an input latent, which doesn't need encoding).
We recently added support for an LCM Img2Img pipeline. #5636 enables inference for the major image-to-image pipelines with the tiny Autoencoder. Could you give it a look?
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the contributing guidelines are likely to be ignored.
I've been successfully using TinyVAE now that LCM support is in diffusers. |
That is amazing. Feel free to share your code and results. If you have shared it on social media, feel free to let us know; we can try to amplify it :-)
Describe the bug
A regular VAE's encode output has a 'latent_dist' attribute; the Tiny VAE's output has 'latents' instead.
The custom community pipeline latent_consistency_img2img.py calls 'latent_dist.sample(generator)' on the output of the VAE encode when preparing the image latents, resulting in the error. @vladmandic says that when he uses TAESD directly he doesn't have a problem with the img2img encode. Either he is passing a latent instead of an image to pipe(), or the real TAESD has changes not present in the diffusers Tiny VAE.
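As a minimal sketch of the attribute difference (the checkpoint names here are just illustrative examples, not taken from this issue):

```python
import torch
from diffusers import AutoencoderKL, AutoencoderTiny

# A dummy preprocessed image tensor, shape (batch, channels, height, width), values in [-1, 1]
image = torch.randn(1, 3, 512, 512)
generator = torch.Generator().manual_seed(0)

# Regular VAE (AutoencoderKL): encode() returns an output carrying a .latent_dist
kl_vae = AutoencoderKL.from_pretrained("runwayml/stable-diffusion-v1-5", subfolder="vae")
latents = kl_vae.encode(image).latent_dist.sample(generator)

# Tiny VAE (TAESD): encode() returns an output carrying .latents instead,
# so a pipeline that expects .latent_dist.sample(generator) fails with an AttributeError
tiny_vae = AutoencoderTiny.from_pretrained("madebyollin/taesd")
latents = tiny_vae.encode(image).latents
```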
The Tiny VAE has allowed me to hit 22 LCM 512x512 4-step txt2img images per second and 15 LCM 512x512 4-step img2img images per second. I got this to work by using 'latents' instead of 'latent_dist.sample(generator)'.
I don't know if this is the correct fix, but the images I get are good for LCM, and fast thanks to TAESD.
Reproduction
You need torch, PIL, and diffusers.
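A rough sketch of an LCM img2img run with the Tiny VAE, assuming a diffusers release that ships the LCM img2img pipeline (0.23 or later), the SimianLuo/LCM_Dreamshaper_v7 and madebyollin/taesd checkpoints, and a local input.png (none of these details come from the issue itself):

```python
import torch
from PIL import Image
from diffusers import AutoencoderTiny, AutoPipelineForImage2Image

# Load an LCM checkpoint; on recent diffusers releases AutoPipelineForImage2Image
# should resolve it to the LCM img2img pipeline (otherwise load
# LatentConsistencyModelImg2ImgPipeline directly).
pipe = AutoPipelineForImage2Image.from_pretrained(
    "SimianLuo/LCM_Dreamshaper_v7", torch_dtype=torch.float16
).to("cuda")

# Swap in the Tiny VAE (TAESD) for faster encoding/decoding.
pipe.vae = AutoencoderTiny.from_pretrained(
    "madebyollin/taesd", torch_dtype=torch.float16
).to("cuda")

init_image = Image.open("input.png").convert("RGB").resize((512, 512))

# LCM only needs a few steps; low guidance_scale values are typical.
result = pipe(
    prompt="a fantasy landscape, highly detailed",
    image=init_image,
    num_inference_steps=4,
    strength=0.5,
    guidance_scale=1.0,
).images[0]
result.save("lcm_img2img_taesd.png")
```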
Logs
No response
System Info
diffusers version: 0.21.4
Who can help?
@sayakpaul @patrickvonplaten