Diffusion model benchmark #904
Comments
How about an e2e latency run with TensorRT/AIT for comparison?
Can you share the scripts to run these benchmarks?
@zzpmiracle Where can I find the scripts to run these benchmarks? I've tried on the latest diffusers version, and it can't trace the components...
Maybe we can use the Docker images from this Zhihu article: https://zhuanlan.zhihu.com/p/631461489
@zzpmiracle Hi, do you have an example of Hires.fix?
We added support for diffusers in #867. This issue tracks the performance of all the diffuser pipelines. For performance, we use BlaDNN to tune models at runtime.
The following pipelines will be tested:
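Since the benchmark scripts themselves are not linked in this thread, here is a minimal sketch of how the end-to-end latency of a diffusers pipeline could be timed with plain diffusers + PyTorch. The model ID, prompt, step count, and warmup/repeat counts are illustrative assumptions, and no BladeDISC/BlaDNN tuning is applied in this sketch:

```python
# Minimal e2e latency sketch for a diffusers pipeline (not the official
# benchmark script from this repo). Model ID, prompt, and repeat counts
# are assumptions for illustration only; no BlaDNN tuning is applied.
import time

import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

prompt = "a photo of an astronaut riding a horse on mars"

# Warm up once so one-time initialization does not skew the measurement.
pipe(prompt, num_inference_steps=50)

torch.cuda.synchronize()
latencies = []
for _ in range(5):
    start = time.perf_counter()
    pipe(prompt, num_inference_steps=50)
    torch.cuda.synchronize()  # wait for all GPU work before stopping the clock
    latencies.append(time.perf_counter() - start)

print(f"e2e latency (s): min={min(latencies):.3f}, "
      f"avg={sum(latencies) / len(latencies):.3f}")
```

The same timing loop can be reused after optimizing the pipeline components, which makes it easy to compare the baseline against TensorRT/AIT or BladeDISC-optimized variants.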