RuntimeError: linalg.vector_norm: Expected a floating point or complex tensor as input. Got Long #34573
cc @muellerzr @SunMarc !
Sorry for the late reply. This is my accelerate config.
This is my deepspeed_config.
And I encountered another similar issue.
The error message I get is a dtype mismatch between query_states and attention_bias. To resolve this, I converted my custom attention_mask to bfloat16 to match the llama3.1 model's dtype. After making this change, the previous error disappears, but a new issue arises during the backward pass with accelerator.backward(loss):
I suspect that this issue is related to the activation of the causal_mask in LlamaSdpaAttention. The same error occurs when padding is present in the input and the causal mask is activated.
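For reference, a minimal sketch of the dtype cast described above, assuming a custom additive attention mask built outside the model (shapes and variable names are illustrative, not the exact code from my setup):

```python
import torch

# Illustrative custom additive mask: float32 by default, with padded positions
# set to a large negative value so they are ignored by attention.
batch_size, num_heads, seq_len = 2, 1, 8
custom_attention_mask = torch.zeros(batch_size, num_heads, seq_len, seq_len)
custom_attention_mask[:, :, :, seq_len // 2:] = torch.finfo(torch.float32).min

# Cast to the model's compute dtype (bfloat16 for my Llama 3.1 run) so it matches
# query_states inside LlamaSdpaAttention and avoids the dtype-mismatch error.
custom_attention_mask = custom_attention_mask.to(torch.bfloat16)
print(custom_attention_mask.dtype)  # torch.bfloat16
```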
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the contributing guidelines are likely to be ignored.
System Info
transformers == 4.45
torch == 2.4.1+cu118
accelerate == 1.0.1
Who can help?
No response
Information
Tasks
An officially supported task in the examples folder (such as GLUE/SQuAD, ...)
Reproduction
Expected behavior
I'm using PyTorch 2.4.1+cu118 and transformers 4.45, training with a batch size of 2 on 2 NVIDIA A100-80GB GPUs. When padding appeared in a batch, the attention_mask in LlamaSdpaAttention was activated (i.e., not None at this step).
After performing the torch.nn.functional.scaled_dot_product_attention operation, I encountered the following error at this line:
accelerator.backward(loss)
RuntimeError: linalg.vector_norm: Expected a floating point or complex tensor as input. Got Long
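The error text itself can be reproduced in isolation: torch.linalg.vector_norm rejects integer tensors, so presumably a Long tensor is reaching a norm computation (e.g. a gradient-norm step) somewhere inside accelerator.backward(loss). A standalone sketch:

```python
import torch

# vector_norm only accepts floating point or complex tensors,
# so calling it on a Long tensor raises the same RuntimeError.
x = torch.ones(4, dtype=torch.long)
torch.linalg.vector_norm(x)
# RuntimeError: linalg.vector_norm: Expected a floating point or complex tensor as input. Got Long
```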
For now, I’ve resolved this by skipping batches that include padding, but I would like to understand the root cause and potential solutions for this issue.
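A minimal sketch of that workaround, assuming each batch carries a standard attention_mask (names are illustrative, not the exact code from my training loop):

```python
import torch

def should_skip(batch: dict) -> bool:
    """Workaround: treat any batch whose attention_mask contains zeros as padded."""
    return bool((batch["attention_mask"] == 0).any())

# Toy batch: the second sequence is padded, so it would be skipped
# before calling accelerator.backward(loss).
batch = {"attention_mask": torch.tensor([[1, 1, 1, 1],
                                         [1, 1, 0, 0]])}
print(should_skip(batch))  # True
```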