
[Llama FA2] Re-add _expand_attention_mask and clean a couple things #27074

Merged: 12 commits merged into main from clean_llama on Oct 26, 2023

Conversation

@patrickvonplaten (Contributor) commented on Oct 25, 2023

What does this PR do?

This PR cleans up the attention mask converter a bit more, corrects some docstrings, removes outdated comments, and re-adds `_expand_attention_mask` as a deprecated wrapper to fix `optimum`.
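
For background, the converter this PR touches centralizes the standard decoder mask construction: build a 4D causal bias and fold an optional 2D padding mask into it. Below is a minimal, self-contained PyTorch sketch of that general pattern, not the class from this PR; `make_causal_4d_mask` and `merge_padding_mask` are illustrative names.

import torch

def make_causal_4d_mask(batch_size: int, seq_len: int, dtype: torch.dtype) -> torch.Tensor:
    # Additive causal mask of shape [bsz, 1, seq_len, seq_len]:
    # 0.0 where attention is allowed, dtype-min above the diagonal.
    mask = torch.full((seq_len, seq_len), torch.finfo(dtype).min, dtype=dtype)
    mask = torch.triu(mask, diagonal=1)
    return mask[None, None, :, :].expand(batch_size, 1, seq_len, seq_len)

def merge_padding_mask(causal_4d: torch.Tensor, padding_2d: torch.Tensor) -> torch.Tensor:
    # Fold a [bsz, seq_len] padding mask (1 = real token, 0 = padding)
    # into the causal mask by masking out padded key positions.
    pad_positions = (padding_2d == 0)[:, None, None, :]
    return causal_4d.masked_fill(pad_positions, torch.finfo(causal_4d.dtype).min)

# Batch of 2, sequence length 4; the second sequence has one padded token.
causal = make_causal_4d_mask(2, 4, torch.float32)
padding = torch.tensor([[1, 1, 1, 1], [1, 1, 1, 0]])
full_mask = merge_padding_mask(causal, padding)  # added to attention scores before softmax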

@patrickvonplaten changed the title from "clean" to "[Llama FA2] Re-add _expand_attention_mask and clean a couple things" on Oct 25, 2023
@patrickvonplaten (Contributor, Author) commented on Oct 25, 2023

@ArthurZucker could you give this a quick review? It'd make the Bart FA PR much easier to continue, and it should also fix the BetterTransformer problem with optimum.

@HuggingFaceDocBuilderDev commented on Oct 25, 2023

The documentation is not available anymore as the PR was closed or merged.

@ArthurZucker (Collaborator)

Of course!

Review thread on src/transformers/models/llama/modeling_llama.py (outdated, resolved)
Comment on lines +67 to +71
def _expand_mask(mask: torch.Tensor, dtype: torch.dtype, tgt_len: Optional[int] = None):
    warnings.warn(
        "Calling `transformers.models.llama.modeling_llama._expand_mask` is deprecated and will be removed in v4.37. Use `transformers.models.llama.modeling_llama.AttnMaskConverter._expand_mask` instead."
    )
    return AttnMaskConverter._expand_mask(mask=mask, dtype=dtype, tgt_len=tgt_len)
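
For reference, the behavior this wrapper preserves is the long-standing additive expansion of a 2D padding mask into a 4D bias. A standalone sketch in plain PyTorch (`_expand_mask_reference` is an illustrative name, not the class method itself):

import torch
from typing import Optional

def _expand_mask_reference(mask: torch.Tensor, dtype: torch.dtype, tgt_len: Optional[int] = None) -> torch.Tensor:
    # Expand a [bsz, src_len] padding mask (1 = attend, 0 = pad)
    # to an additive bias of shape [bsz, 1, tgt_len, src_len].
    bsz, src_len = mask.size()
    tgt_len = tgt_len if tgt_len is not None else src_len
    expanded = mask[:, None, None, :].expand(bsz, 1, tgt_len, src_len).to(dtype)
    # Invert so kept positions become 0.0 and padded positions become dtype-min.
    inverted = 1.0 - expanded
    return inverted.masked_fill(inverted.to(torch.bool), torch.finfo(dtype).min)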
Collaborator:

Nice! We should probably do the same for falcon and mistral as well

Collaborator:

ok 👍🏻

@patrickvonplaten merged commit d7cb5e1 into main on Oct 26, 2023 (3 checks passed)
@patrickvonplaten deleted the clean_llama branch on October 26, 2023 at 11:06
EduardoPach pushed a commit to EduardoPach/transformers that referenced this pull request on Nov 19, 2023:

[Llama FA2] Re-add _expand_attention_mask and clean a couple things (huggingface#27074)

* clean

* clean llama

* fix more

* make style

* Apply suggestions from code review

* Apply suggestions from code review

* Update src/transformers/models/llama/modeling_llama.py

* Update src/transformers/models/llama/modeling_llama.py

* Apply suggestions from code review

* finish

* make style