[WIP] Upgrade FlashAttention to 2.0 #796
Conversation
Hi @ghunkins! Thank you for your pull request and welcome to our community.

Action Required
In order to merge any pull request (code, docs, etc.), we require contributors to sign our Contributor License Agreement, and we don't seem to have one on file for you.

Process
In order for us to review and merge your suggested changes, please sign at https://code.facebook.com/cla. If you are contributing on behalf of someone else (e.g. your employer), the individual CLA may not be sufficient and your employer may need to sign the corporate CLA.

Once the CLA is signed, our tooling will perform checks and validations. Afterwards, the pull request will be tagged accordingly. If you have received this in error or have any questions, please contact us at cla@meta.com. Thanks!
Thank you for signing our Contributor License Agreement. We can now accept your code for this (and any) Meta Open Source project. Thanks!
Testing using the following script:

```python
import torch
from diffusers import DiffusionPipeline
from xformers.ops import MemoryEfficientAttentionFlashAttentionOp

# load the pipeline
pipe = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
).to("cuda")

# enable xformers with flash attention for the UNet, default op for the VAE
pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp)
pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None)

# run the pipeline
image = pipe(prompt="a cute cat").images[0]
```
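To double-check that the flash kernel is actually usable on the machine (rather than silently falling back), the op can also be exercised directly. A minimal sketch, assuming an fp16-capable CUDA GPU; the tensor shapes here are arbitrary:

```python
# Sanity-check sketch (not part of the PR): call the flash-backed op directly on
# dummy tensors; this raises if FlashAttention is unsupported for the GPU/dtype/head dim.
import torch
from xformers.ops import memory_efficient_attention, MemoryEfficientAttentionFlashAttentionOp

q = torch.randn(1, 1024, 8, 64, dtype=torch.float16, device="cuda")  # (batch, seq_len, heads, head_dim)
k = torch.randn_like(q)
v = torch.randn_like(q)

out = memory_efficient_attention(q, k, v, op=MemoryEfficientAttentionFlashAttentionOp)
print(out.shape)  # torch.Size([1, 1024, 8, 64])
```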
Closing. I think someone with more context needs eyes on this.
@Birch-san The xformers team is tackling this; see #795 for context. I haven't had time to test yet, but based on that conversation it sounds like the pre-release wheels already support FlashAttention 2.0 on certain GPUs:

```bash
pip install xformers --pre
```
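As a quick way to confirm what an installed pre-release wheel actually provides, something like the following could help (a minimal sketch; it assumes the wheel installed cleanly, and `python -m xformers.info` also lists which attention operators are available):

```python
# Sketch: inspect the installed xformers build (assumes the pre-release wheel is installed).
import xformers
import xformers.ops as xops

# Pre-release wheels typically carry a ".dev"-style version string.
print(xformers.__version__)

# The flash-attention backed op is importable regardless of GPU support, so a direct
# call on real tensors (as in the earlier sanity check) is the definitive test.
print(xops.MemoryEfficientAttentionFlashAttentionOp)
```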
What does this PR do?
This PR updates the symlink to FlashAttention 2.0, released in v1.0.9 of flash-attention. All credit goes to Tri Dao for the implementation.
According to reported results, this can enable a 2-4x speed-up.
This PR is a WIP and still needs testing and documentation updates. Additionally, the heuristics for choosing the underlying implementation likely need to be updated per this comment.
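For illustration of what updating those heuristics might involve, here is a purely hypothetical sketch of an op-selection helper; the function name and the thresholds are assumptions, not xformers' actual dispatch logic:

```python
# Hypothetical sketch only: the helper name and thresholds are assumptions and do not
# reflect xformers' real dispatcher, which the referenced comment discusses updating.
import torch
from xformers.ops import (
    MemoryEfficientAttentionCutlassOp,
    MemoryEfficientAttentionFlashAttentionOp,
)


def choose_attention_op(query: torch.Tensor):
    """Prefer the flash op when the input looks supported, otherwise fall back to cutlass."""
    head_dim = query.shape[-1]
    flash_friendly = (
        query.is_cuda
        and query.dtype in (torch.float16, torch.bfloat16)
        and head_dim <= 128  # example threshold only; FlashAttention 2.0 relaxes earlier limits
    )
    return MemoryEfficientAttentionFlashAttentionOp if flash_friendly else MemoryEfficientAttentionCutlassOp
```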
Before submitting
PR review
Anyone in the community is free to review the PR once the tests have passed.
If we didn't discuss your PR in GitHub issues, there's a high chance it will not be merged.