The PR #57 had a couple of mistakes that needed to be fixed. This is because of two things:

- `flash_attention_forward` was moved out earlier. The strategy now is simple: the presence of `_flash_attention_forward` (with the leading underscore) means the function has been separated out, so we augment `_flash_attention_forward` depending on the transformers version.
- Some redesign is done. Since `_flash_attention_forward` can now be either a method or a standalone function, the previous approach of binding `_flash_attention_forward` by closure no longer holds. So we need to install a method on the backbone to intercept the position ids, then modify `_flash_attention_forward`
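
A minimal sketch of that interception, assuming a recent transformers where `_flash_attention_forward` lives in `transformers.modeling_flash_attention_utils` and accepts a `position_ids` keyword; the helper name `install_position_ids_hook` and the `_captured_position_ids` attribute are illustrative, not the actual implementation in this repo:

```python
import functools
import types

import transformers.modeling_flash_attention_utils as fa_utils

# Keep a handle to the original standalone function so the patch can delegate to it.
_original_fa_forward = fa_utils._flash_attention_forward


def install_position_ids_hook(backbone):
    """Wrap the backbone's forward to record position_ids, and patch
    `_flash_attention_forward` so it can read and bind them."""
    original_forward = backbone.forward

    def forward_with_capture(self, *args, position_ids=None, **kwargs):
        # Stash the position ids on the module so the patched attention
        # function can pick them up later in the same forward pass.
        self._captured_position_ids = position_ids
        return original_forward(*args, position_ids=position_ids, **kwargs)

    backbone.forward = types.MethodType(forward_with_capture, backbone)

    @functools.wraps(_original_fa_forward)
    def patched_fa_forward(*args, **kwargs):
        # Bind the intercepted position ids only when the caller did not
        # already pass them through.
        if kwargs.get("position_ids") is None:
            kwargs["position_ids"] = getattr(backbone, "_captured_position_ids", None)
        return _original_fa_forward(*args, **kwargs)

    fa_utils._flash_attention_forward = patched_fa_forward
```

Note that, depending on the version, the patched function may also need to be rebound in the model's own module namespace, since attention classes typically import `_flash_attention_forward` directly rather than looking it up through `modeling_flash_attention_utils` at call time.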
The bad news is that once this is done properly, the speed dropped. However, we verified that the speed is consistent when we upgrade transformers to the latest main, which means our implementation is correct.