Add Align Your Steps to available schedulers #15751
Conversation
* Include both SDXL and SD 1.5 variants (https://research.nvidia.com/labs/toronto-ai/AlignYourSteps/howto.html)
What's the difference between these? I see this pull request uses the quick start guide numbers, while drhead implemented Theorem 3.1 from the paper.
@AG-w I think the main difference is that this implements the schedule as recommended by the authors. My understanding from reading the material is that the provided schedules are the optimized ones, derived using the techniques described in the paper (https://arxiv.org/pdf/2404.14507); the section "B.1. Practical Implementation Details" explains this further. Happy to be corrected if I've misinterpreted or missed anything.
* Consistent with implementations in k-diffusion.
* Makes this compatible with AUTOMATIC1111#15823
Just wanted to put this out there: https://arxiv.org/abs/2405.11326. It's a new method, "GITS", that purports to beat AYS. These are the sigmas I was able to get from model_wrap.sigmas for the recommended timesteps; I'm not sure they're correct, because they didn't change when I loaded an SDXL model.
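For what it's worth, here is a self-contained sketch of where those values come from. `model_wrap` in A1111's k-diffusion path exposes a per-timestep sigma table derived from the model's beta schedule, and SD 1.5 and SDXL base share the same scaled-linear schedule, which would explain why the table didn't change. The timesteps below are placeholders, not the GITS recommendations:

```python
import torch

# Rebuild the sigma table that model_wrap.sigmas holds. Both SD 1.5 and SDXL base
# use the scaled-linear beta schedule (0.00085 -> 0.012 over 1000 steps), so the
# table is identical for both, which would explain the observation above.
betas = torch.linspace(0.00085 ** 0.5, 0.012 ** 0.5, 1000) ** 2
alphas_cumprod = torch.cumprod(1.0 - betas, dim=0)
sigmas = ((1 - alphas_cumprod) / alphas_cumprod) ** 0.5

# Placeholder timesteps; substitute the paper's recommended ones.
timesteps = torch.tensor([999, 850, 700, 550, 400, 250, 100, 0])
print(sigmas[timesteps])
```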
What if you calculate the scale between the SD 1.5 and SDXL sigmas in AYS, then apply that scale to GITS, so you get an SDXL version of those sigmas? Something like the sketch below. I generated a result for SDXL this way; it needs testing, though.
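A rough, untested sketch of that ratio idea (the AYS values are copied from the NVIDIA how-to page; `gits_sd15` is a hypothetical stand-in for an extracted GITS schedule, and the element-wise ratio assumes both schedules have the same length):

```python
import numpy as np

# Published AYS schedules (11 boundary sigmas each) from the how-to page.
AYS_SD15 = np.array([14.615, 6.475, 3.861, 2.697, 1.886, 1.396, 0.963, 0.652, 0.399, 0.152, 0.029])
AYS_SDXL = np.array([14.615, 6.315, 3.771, 2.181, 1.342, 0.862, 0.555, 0.380, 0.234, 0.113, 0.029])

def gits_to_sdxl(gits_sd15):
    """Rescale an SD 1.5 GITS schedule by the per-position AYS SD1.5 -> SDXL ratio."""
    gits_sd15 = np.asarray(gits_sd15)
    assert len(gits_sd15) == len(AYS_SD15), "schedules must align position-for-position"
    return gits_sd15 * (AYS_SDXL / AYS_SD15)
```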
@LoganBooker I had to type in my own sigmas for 32 steps, which leads me to a feature request for this scheduler: can someone better than me modify the code to use script_callbacks.CFGDenoiserParams in a while loop to pull the total_sampling_steps variable from CFGDenoiserParams and automatically scale the schedule down to zero? I can't share my results, but they are amazing.

Edit: I took the time to run some tests and uploaded them to Imgur. The first prompt is from here: https://prompthero.com/prompt/cf5ed5a0881. Here is a link to the 4 grids for side-by-side comparison. I used multiple samplers (DPM++ 2S a, DPM2, Euler, and Heun) in different images so you can see better results. The 11 sigmas only perform really well under Heun with complex prompts. As you can see, the sigmas should be stretched over the number of steps you use for better prompt coherence.

As for the testing, you will not be able to replicate my results: I'm using a lot of custom forked and edited code that I haven't uploaded to a repo yet, along with 2k generation using a 64k resized seed. I'd use a higher resized seed, but 8 GB of VRAM on my 3060 Ti gets maxed out by 64k. I'm also using an SD v1.5-based model for these results; SDXL will have to wait until my setup plays nice with it. You can test the sigmas yourselves.
Don't forget to add the following lines to the bottom:
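The snippet itself is missing above; as a loudly hedged guess, assuming the commenter means the schedulers list at the bottom of A1111's modules/sd_schedulers.py, the registration would look roughly like this (the function name and labels are hypothetical):

```python
# Hypothetical sketch, not the commenter's actual snippet. In A1111's
# modules/sd_schedulers.py, a schedule shows up in the UI once it has a
# Scheduler entry in the module-level `schedulers` list near the bottom
# of the file; registering custom sigmas would look roughly like:
def get_ays_32_sigmas(n, sigma_min, sigma_max, device):
    """Return the custom 32-step sigma table, interpolated to n steps (values omitted)."""
    ...

schedulers.append(Scheduler('ays_32', 'Align Your Steps 32', get_ays_32_sigmas))
```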
Maybe you should explain what you have modified, since all we know is that you changed the sigmas?
It wouldn't matter. I just did a 512x512 comparison with none of the "bells & whistles" I normally use. The difference is still apparent, just difficult to tell at such a small scale; you'll probably have to zoom in a bit. It's most apparent with the ancestral sampler. You can find the image at the bottom of the same Imgur link: https://imgur.com/a/NQLCD4M. I even tossed in the Restart and UniPC samplers (default options for UniPC, since I haven't tested that one much).

Edit: SDXL testing won't be happening. I keep getting garbled images and pixelation, even with the normal samplers. I guess 8 GB of VRAM is enough to get it running, but not running well. And it's happening with and without --medvram-sdxl.

2nd edit: Looks like SDXL testing did happen after all, thanks to Forge and NeverOOM. The 32-step sigmas are more accurate than the 11 sigmas stretched over 32 steps. Forge's samplers and schedulers are a pain to edit because they're not separated like A1111's. I updated the code post above with my new sigmas; I got tired of issues and just used NVIDIA's code to generate them.
Adds AYS GITS, referenced from https://arxiv.org/abs/2405.11326 and AUTOMATIC1111/stable-diffusion-webui#15751 (comment). Adds AYS 11 and 32 steps, from AUTOMATIC1111/stable-diffusion-webui#15751 (comment).
Implements the Align Your Steps noise schedule as described here: https://research.nvidia.com/labs/toronto-ai/AlignYourSteps/howto.html. This includes the sigmas for SDXL and SD 1.5, as well as the recommended interpolation for using larger step sizes.
Description
According to the original work (https://research.nvidia.com/labs/toronto-ai/AlignYourSteps/), AYS can provide better image quality than schedulers such as Karras and Exponential at low step counts (~10). This does appear to bear out in limited testing, as can be seen below, though in some cases (such as the tower) it's debatable. It's certainly not a panacea; you'll still want to use at least 15 steps for more consistent, coherent images.
Note I've used 11 steps in the examples below to account for the zero appended to the sigmas, which is consistent with the other schedulers. The alternative would be to truncate or replace the final sigma with zero, but that doesn't seem correct.
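For reference, here is a condensed sketch of the schedule as the how-to page describes it (sigma values copied from that page; function and variable names are illustrative rather than the PR's exact code), including the appended zero discussed above:

```python
import numpy as np

# Sigmas published on the AYS how-to page (10 steps => 11 boundary values).
AYS_SD15 = [14.615, 6.475, 3.861, 2.697, 1.886, 1.396, 0.963, 0.652, 0.399, 0.152, 0.029]
AYS_SDXL = [14.615, 6.315, 3.771, 2.181, 1.342, 0.862, 0.555, 0.380, 0.234, 0.113, 0.029]

def loglinear_interp(t_steps, num_steps):
    """Log-linearly interpolate a decreasing schedule to a new length,
    the scheme the how-to page recommends for other step counts."""
    xs = np.linspace(0, 1, len(t_steps))
    ys = np.log(np.asarray(t_steps)[::-1])
    new_ys = np.interp(np.linspace(0, 1, num_steps), xs, ys)
    return np.exp(new_ys)[::-1]

def ays_sigmas(n, sdxl=False):
    """Return n sigmas plus a trailing zero, consistent with the other schedulers."""
    sigmas = AYS_SDXL if sdxl else AYS_SD15
    if n != len(sigmas):
        sigmas = loglinear_interp(sigmas, n)
    return np.append(sigmas, 0.0)
```

Called with n equal to the native 11 entries, this returns the published table plus the trailing zero; any other n triggers the log-linear interpolation, which is why 11 steps is the natural count for the comparisons below.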
Screenshots/videos:
Checklist: