improve(llama): Faster apply_rotary_pos_emb #22785
Conversation
The documentation is not available anymore as the PR was closed or merged.
Thanks for improving the performance @fpgaminer 🙏
@amyeroberts for context, this comment shows that a) it gets exactly the same numerical output b) it is faster than the previous version
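The numerical-equivalence claim comes down to the fact that a single advanced-index lookup selects exactly the same rows from the cos/sin cache as an elementwise gather. A minimal numpy sketch of that equivalence (illustrative stand-in arrays, not the PR's actual torch code):

```python
import numpy as np

rng = np.random.default_rng(0)
seq, dim = 6, 8
cos_cache = rng.standard_normal((seq, dim))    # stand-in for the cached cos table
position_ids = np.arange(seq)[None, :]         # [batch, seq]

# Gather-style selection: build a full per-element index and look up
# each element individually (analogous to the slower torch.gather path).
idx = np.broadcast_to(position_ids[..., None], (1, seq, dim))
gathered = np.take_along_axis(
    np.broadcast_to(cos_cache, (1, seq, dim)), idx, axis=1
)

# Fancy-indexing selection: one advanced index over the position axis
# (analogous to the faster path), producing the same [batch, seq, dim] result.
indexed = cos_cache[position_ids]

assert np.array_equal(gathered, indexed)
```

Both paths read identical table entries, so the outputs match bit for bit; the speedup comes purely from doing one bulk index instead of a per-element gather.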
🔥 🔥 🔥 - thanks for updating and for the time to validate and benchmark 🙏
Should a similar patch be applied to GPT-NeoX?
@neggert I believe it can be added to GPT-NeoX too - very happy to review a PR if you'd like to add!
What does this PR do?
Faster implementation for `apply_rotary_pos_emb` in `modeling_llama.py`. Please see issue #22683 for code that verifies the correctness of the change.
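For context on what `apply_rotary_pos_emb` computes, here is a rough numpy sketch of the rotary-embedding application (an illustration of the standard formulation, not the PR's actual torch diff; shapes and cache layout are assumptions for the example):

```python
import numpy as np

def rotate_half(x):
    # Split the last dimension in half and swap the halves with a sign
    # flip: (x1, x2) -> (-x2, x1), the standard rotary helper.
    half = x.shape[-1] // 2
    x1, x2 = x[..., :half], x[..., half:]
    return np.concatenate((-x2, x1), axis=-1)

def apply_rotary_pos_emb(q, k, cos, sin, position_ids):
    # Index the cached cos/sin tables by position once, then broadcast
    # over the head dimension: [batch, seq, dim] -> [batch, 1, seq, dim].
    cos = cos[position_ids][:, None, :, :]
    sin = sin[position_ids][:, None, :, :]
    q_embed = (q * cos) + (rotate_half(q) * sin)
    k_embed = (k * cos) + (rotate_half(k) * sin)
    return q_embed, k_embed

# Build the cos/sin caches the usual rotary way (hypothetical small sizes).
dim, seq = 8, 6
inv_freq = 1.0 / (10000.0 ** (np.arange(0, dim, 2) / dim))
freqs = np.outer(np.arange(seq), inv_freq)        # [seq, dim/2]
emb = np.concatenate((freqs, freqs), axis=-1)     # [seq, dim]
cos_cache, sin_cache = np.cos(emb), np.sin(emb)

rng = np.random.default_rng(0)
q = rng.standard_normal((1, 2, seq, dim))         # [batch, heads, seq, dim]
k = rng.standard_normal((1, 2, seq, dim))
position_ids = np.arange(seq)[None, :]            # [batch, seq]

q_embed, k_embed = apply_rotary_pos_emb(q, k, cos_cache, sin_cache, position_ids)
```

Because each feature pair is rotated by a pure rotation, the transform preserves the L2 norm of `q` and `k`, which is a handy sanity check when validating a faster variant against the original.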
NOTE: Not marking as fixing the above issue, as speed is still not as good as before.
Before submitting
- Did you read the contributor guideline, Pull Request section?
- Was this discussed/approved via a GitHub issue or the forum? Please add a link to it if that's the case.
- Did you make sure to update the documentation with your changes? Here are the documentation guidelines, and here are tips on formatting docstrings.
Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@gante