
Eliminate 2 gpu ops during sampling when logit_bias is zero #338

Merged — 3 commits merged into sgl-project:main on Apr 3, 2024

Conversation

@Qubitium (Contributor) commented Mar 28, 2024

Reason for PR: eliminate 2 GPU ops during sampling when logit_bias is not used (i.e., all zeros):

  1. Skip the allocation of the logit_bias tensor.
  2. Skip applying logits.add_(self.logit_bias) (see the sketch after this list).

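A minimal sketch of the idea, not sglang's actual sampler code (names like reqs, r.logit_bias, and vocab_size are placeholders): when no request in the batch sets a non-zero bias, neither the zero-filled bias tensor nor the in-place add is ever launched on the GPU.

```python
import torch

def build_logit_bias(reqs, vocab_size, device):
    """Return a (batch, vocab) bias tensor, or None when every bias is zero."""
    # Skip GPU op 1: do not allocate the tensor at all if no request
    # in the batch actually sets a logit bias.
    if not any(r.logit_bias for r in reqs):
        return None
    bias = torch.zeros((len(reqs), vocab_size), device=device)
    for i, r in enumerate(reqs):
        for token_id, value in r.logit_bias.items():
            bias[i, token_id] = value
    return bias

def apply_logit_bias(logits, logit_bias):
    # Skip GPU op 2: only launch the elementwise add when a bias exists.
    if logit_bias is not None:
        logits.add_(logit_bias)
    return logits
```

Since these two ops would otherwise run on every decoding step, the savings are largest for short outputs at small batch sizes, where kernel launch overhead dominates — consistent with the gains reported below.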
In our internal test of Yi-6B quantized with Marlin on an RTX 4090, we see a tangible improvement: up to 63% higher throughput for workloads with small output token counts when batch size > 1.

Concurrency is 10 requests, so sglang will use batch sizes between 1 and 10:

[Benchmark screenshot: IMG_20240328_204927.png]

@Qubitium changed the title from "Eliminate 2 gpu ops during inference (sampling) when logit_bias is zero" to "Eliminate 2 gpu ops during sampling when logit_bias is zero" on Mar 28, 2024
@hnyls2002 (Collaborator) commented Apr 3, 2024

@Qubitium Sorry for the trouble. I have pushed a commit directly onto your branch.
closes #344

@hnyls2002 hnyls2002 merged commit c9de3e1 into sgl-project:main Apr 3, 2024