I think I found the problem, @Gimperion.
Something is wrong with the model and the tokenizer.
The <mask> token has index 50264, while the model config states "vocab_size": 50264.
Since token indices start at 0, an id of 50264 implies 50265 tokens, so the <mask> index is out of bounds for an embedding matrix with only 50264 rows.
If you try to run inference with the <mask> token, it fails.
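For reference, a minimal sketch of how to confirm the mismatch with plain transformers (the checkpoint name is a placeholder, not a specific model from this thread):

```python
# Minimal check: compare the tokenizer's <mask> id against the configured vocab size.
from transformers import AutoConfig, AutoTokenizer

checkpoint = "your-checkpoint"  # placeholder
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
config = AutoConfig.from_pretrained(checkpoint)

mask_id = tokenizer.mask_token_id  # e.g. 50264
vocab_size = config.vocab_size     # e.g. 50264

# Valid embedding rows are 0 .. vocab_size - 1, so mask_id == vocab_size
# means there is no row for <mask> in the embedding matrix.
if mask_id is not None and mask_id >= vocab_size:
    print(f"<mask> id {mask_id} is out of bounds for vocab_size {vocab_size}")
```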
If you really need to convert the model, you have two possibilities:
- expand the token embedding matrix (see the sketch after this list)
- use random_global_init=True or --random_global_init to skip the step that uses the mask token
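Here is a hedged sketch of the first option (expanding the token embedding matrix) with plain transformers; the checkpoint name and output path are placeholders, and you would pick the AutoModel class that matches your architecture:

```python
# Grow the embedding matrix so the <mask> row (index 50264) actually exists,
# then save the resized checkpoint and convert that one instead.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

checkpoint = "your-checkpoint"  # placeholder
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)
tokenizer = AutoTokenizer.from_pretrained(checkpoint)

needed = tokenizer.mask_token_id + 1       # 50265 when the <mask> id is 50264
if model.config.vocab_size < needed:
    model.resize_token_embeddings(needed)  # new rows are randomly initialized
    model.save_pretrained("your-checkpoint-resized")
    tokenizer.save_pretrained("your-checkpoint-resized")
```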
I keep getting `index 50264 is out of bounds for dimension 0 with size 50264` or something similar when converting BART and some other models to LSG. The issue seems to be this line of code in the update_global method:
`positions[1:] += u[mask_id].unsqueeze(0)`
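To illustrate why that line blows up, a small standalone example (the shapes are assumptions for illustration, not taken from the converter's source):

```python
import torch

vocab_size, hidden = 50264, 768
u = torch.randn(vocab_size, hidden)  # stand-in for the token embedding matrix
mask_id = 50264                      # <mask> id reported by the tokenizer

# Rows run from 0 to 50263, so indexing row 50264 raises:
# IndexError: index 50264 is out of bounds for dimension 0 with size 50264
row = u[mask_id]
```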