Fixes to alternating SWA layers in Gemma2 #31775
Conversation
Thanks for your PR @turboderp, we're taking a look with @ArthurZucker
Any updates on this? It's likely required to get the proper performance out of the Gemma 2 models.
LGTM, thanks for fixing!
The slow tests are potentially gonna fail. cc @ydshieh, is it alright with you to update them later on? I think a patch will include this!
Thanks @turboderp
* HybridCache: Flip order of alternating global-attn/sliding-attn layers
* HybridCache: Read sliding_window argument from cache_kwargs
* Gemma2Model: Flip order of alternating global-attn/sliding-attn layers
* Code formatting
What does this PR do?
Reverses the order of global and sliding attention layers in Gemma2. This brings it in line with Google's implementation, in which sliding attention is used on layers 0, 2, 4, and so on, whereas the Transformers implementation currently uses sliding attention on layers 1, 3, 5, and so on.
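For illustration, here is a minimal sketch of the parity flip; the helper name is hypothetical and this is not the exact Transformers code:

```python
# Minimal sketch of the parity flip (hypothetical helper, not the exact
# Transformers code). Gemma2 alternates layer types; the question is which
# parity gets sliding-window attention (SWA).

def uses_sliding_window(layer_idx: int) -> bool:
    # Before this PR: sliding attention on layers 1, 3, 5, ...
    #     return bool(layer_idx % 2)
    # After this PR (matching Google's implementation): layers 0, 2, 4, ...
    return not bool(layer_idx % 2)
```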
Also changes HybridCache.update to read the sliding_window argument from cache_kwargs, since it wasn't being parsed otherwise. The cache was created with alternating max sequence lengths of 4k and 8k, but all layers were being updated as if they were 8k, causing out-of-bounds errors and CUDA exceptions.
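As a rough sketch of that fix (helper names like _sliding_update are hypothetical; the real HybridCache.update differs in detail):

```python
# Rough sketch of the HybridCache.update fix; helper names are hypothetical
# and the real method differs in detail.
def update(self, key_states, value_states, layer_idx, cache_kwargs=None):
    cache_kwargs = cache_kwargs or {}
    cache_position = cache_kwargs.get("cache_position")
    # The fix: actually read sliding_window from cache_kwargs. Previously it
    # was ignored, so 4k sliding layers were written to as if they were 8k.
    sliding_window = cache_kwargs.get("sliding_window")
    if sliding_window is not None:
        # Sliding layer: write into its smaller rolling buffer (4k for Gemma2).
        return self._sliding_update(
            layer_idx, key_states, value_states, cache_position, sliding_window
        )
    # Global layer: write into the full-length static buffer (8k for Gemma2).
    return self._static_update(layer_idx, key_states, value_states, cache_position)
```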
Before submitting
* Did you read the contributor guideline, Pull Request section?
* Was this discussed/approved via a GitHub issue or the forum? Please add a link to it if that's the case.
* Did you make sure to update the documentation with your changes? Here are the documentation guidelines, and here are tips on formatting docstrings.
Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR. @ArthurZucker