
[Performance, Triton Kernel Args] extend_attention, optimize kern args to _fwd_kernel #1941

Merged
merged 6 commits into sgl-project:main on Nov 8, 2024

Conversation

HaiShaw
Collaborator

@HaiShaw HaiShaw commented Nov 7, 2024

Motivation

Speedup:
Llama-3.1-8B, TP=8, FP8, b32/i1024: prefill throughput +2.82%
Grok-1, TP=8, FP8, b32/i1024: prefill throughput +2.23%

Modifications

Set tuned kernel launch arguments for _fwd_kernel of extend_attention on ROCm.
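The pattern of picking Triton launch arguments per backend can be sketched as below. This is a minimal illustration, not the PR's actual code: the helper names and the specific num_warps/num_stages values are placeholders, since the PR does not list its tuned values in this description.

```python
# Hypothetical sketch of backend-dependent Triton launch-argument selection.
# Names and values are illustrative; the PR's actual tuning may differ.

def is_hip() -> bool:
    """Return True when running on a ROCm (HIP) build of PyTorch.

    Falls back to False when torch is unavailable, so the sketch
    stays runnable without a GPU stack installed.
    """
    try:
        import torch
        return torch.version.hip is not None
    except ImportError:
        return False


def extend_attention_kernel_args() -> dict:
    """Pick launch arguments for a Triton forward kernel.

    ROCm GPUs execute 64-lane wavefronts (vs. 32-lane warps on CUDA),
    so a different num_warps/num_stages pairing is often faster there.
    The exact values below are placeholders, not the PR's numbers.
    """
    if is_hip():
        return {"num_warps": 4, "num_stages": 1}  # illustrative ROCm tuning
    return {"num_warps": 8, "num_stages": 2}      # illustrative CUDA default


# These kwargs would be splatted into the Triton kernel launch, e.g.:
#   _fwd_kernel[grid](..., **extend_attention_kernel_args())
```

Centralizing the selection in one helper keeps the launch site identical on both backends, which is why this kind of change can land without touching the kernel body itself.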

Checklist

  • Format your code according to the Contributor Guide.
  • Add unit tests as outlined in the Contributor Guide.
  • Update documentation as needed, including docstrings or example tutorials.

@merrymercy merrymercy merged commit 67c424c into sgl-project:main Nov 8, 2024
12 of 13 checks passed
@HaiShaw HaiShaw deleted the triton-tune branch November 8, 2024 03:47