System Info
For the LlamaTokenizer, I get the correct encoding result when loading directly with LlamaTokenizer, but the results are incorrect when using AutoTokenizer. Another issue is that loading through AutoTokenizer is much slower than loading LlamaTokenizer directly: it takes around 4 minutes to load the tokenizer from the path with AutoTokenizer, while it takes only about one second with LlamaTokenizer.
Who can help?
@ArthurZucker
Information

Tasks

- An officially supported task in the examples folder (such as GLUE/SQuAD, ...)

Reproduction
Python version: 3.8.16
transformers version: 4.28.1
Follow the given example:
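The original code snippet did not survive the page extraction, so here is a minimal sketch of the comparison described above; the tokenizer path and the input sentence are hypothetical placeholders:

```python
from transformers import AutoTokenizer, LlamaTokenizer

path = "path/to/llama-tokenizer"  # hypothetical local path

# Load the same tokenizer files two ways.
slow_tok = LlamaTokenizer.from_pretrained(path)  # loads in about a second
auto_tok = AutoTokenizer.from_pretrained(path)   # reportedly takes ~4 minutes on 4.28.1

text = "some example sentence"  # hypothetical; the exact input was not preserved

ret1 = slow_tok.encode(text, add_special_tokens=False)
ret2 = auto_tok.encode(text, add_special_tokens=False)

print("ret1:", ret1)
print("ret2:", ret2)  # reportedly gains an extra leading token, 31822
```

As a side note, AutoTokenizer.from_pretrained(path, use_fast=False) should hand back the slow LlamaTokenizer, so comparing that against the default (fast) load can help isolate whether the extra token comes from the slow-to-fast conversion.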
Expected behavior
ret1: [322, 2661, 285, 14363, 31844, 906, 23982, 985, 3668, 483, 4309, 562, 266, 13803, 15136, 393, 7732, 31843]
ret2: [31822, 322, 2661, 285, 14363, 31844, 906, 23982, 985, 3668, 483, 4309, 562, 266, 13803, 15136, 393, 7732, 31843]
ret1 is the expected output and ret2 is the erroneous result from AutoTokenizer, which adds an extra token, 31822 (a space token), to the front of the encoding.
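A quick way to check what the extra id corresponds to (a hedged sketch, reusing auto_tok from the reproduction above):

```python
# SentencePiece-based Llama vocabularies usually render a space piece as '▁',
# so inspecting the token string and its decoded form confirms it is a space.
print(auto_tok.convert_ids_to_tokens([31822]))
print(repr(auto_tok.decode([31822])))
```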