
Llama 2 tokenizer: apparition of the token id 29871 #26273

Closed

piegu opened this issue Sep 19, 2023 · 7 comments

piegu commented Sep 19, 2023

System Info

transformers==4.31.0
meta-llama/Llama-2-7b-hf

Who can help?

@ArthurZucker

Information

  • The official example scripts
  • My own modified scripts

Tasks

  • An officially supported task in the examples folder (such as GLUE/SQuAD, ...)
  • My own task or dataset (give details below)

Reproduction

# from https://github.com/philschmid/sagemaker-huggingface-llama-2-samples/blob/master/training/sagemaker-notebook.ipynb
!pip install "transformers==4.31.0"

YOUR_TOKEN = "hf_xxxxx"
!huggingface-cli login --token $YOUR_TOKEN

from transformers import AutoTokenizer

# get tokenizer
model_id = "meta-llama/Llama-2-7b-hf" 
tokenizer = AutoTokenizer.from_pretrained(model_id)
tokenizer.pad_token = tokenizer.eos_token

instruction = "### Instruction\nRepeat the context."
context = "\n\n### Context\n```\nI like Paris.\n```"
answer = "\n\n### Answer\nI like Paris."

prompt = instruction + context + answer

tokens_prompt = tokenizer(prompt, return_tensors="pt").input_ids[0]

tokens_instruction = tokenizer(instruction, return_tensors="pt").input_ids[0]
tokens_context = tokenizer(context, return_tensors="pt").input_ids[0]
tokens_answer = tokenizer(answer, return_tensors="pt").input_ids[0]

# from each part's token count, we subtract 1 for the <s> token,
# then we sum the counts and expect to get the same number of tokens as the prompt
num_tokens_of_sum = (len(tokens_instruction) - 1) + (len(tokens_context) - 1) + (len(tokens_answer) - 1)

# we compare the number of tokens of the prompt (without the <s> token) with the sum calculated before
print((len(tokens_prompt) - 1) - num_tokens_of_sum)

# we get -2, which is wrong: the sum contains 2 tokens that are not in the prompt

tokens_prompt
# tensor([    1,   835,  2799,  4080,    13,  1123, 11666,   278,  3030, 29889,
#            13,    13,  2277, 29937, 15228,    13, 28956,    13, 29902,   763,
#         3681, 29889,    13, 28956,    13,    13,  2277, 29937,   673,    13,
#        29902,   763,  3681, 29889])

tokens_instruction
# tensor([    1,   835,  2799,  4080,    13,  1123, 11666,   278,  3030, 29889])

tokens_context
# tensor([    1, 29871,    13,    13,  2277, 29937, 15228,    13, 28956,    13,
#       29902,   763,  3681, 29889,    13, 28956])

tokens_answer
# tensor([    1, 29871,    13,    13,  2277, 29937,   673,    13, 29902,   763,
#         3681, 29889])

# we can see that token 29871 appears twice (in tokens_context and tokens_answer) but it should not!

Expected behavior

We expect that tokenizing each part of the prompt and concatenating the results gives the same tokens as tokenizing the full prompt, but it does not.

Why does token ID 29871 appear?
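
(For reference, a minimal diagnostic sketch, not part of the original report: printing the token strings instead of the ids makes the extra token visible. It assumes the same meta-llama/Llama-2-7b-hf tokenizer as above.)

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")
context = "\n\n### Context\n```\nI like Paris.\n```"

# look at the first few token strings of the separately tokenized context
ids = tokenizer(context).input_ids
print(tokenizer.convert_ids_to_tokens(ids[:4]))
# expected: ['<s>', '▁', '<0x0A>', '<0x0A>'] -- id 29871 corresponds to the '▁' token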

@ArthurZucker (Collaborator)

Hey, could you try this on transformers == 4.33? Pretty sure the fixes to Llama have been merged


piegu commented Sep 20, 2023

Same (wrong) result with transformers == 4.33.

@ArthurZucker (Collaborator)

Okay, this is actually expected: 29871 is the SPIECE_UNDERLINE token. If you encode each prompt segment individually, an underline (prefix space) is added to that segment in addition to the special tokens. If you encode everything concatenated, the prefix is applied only to the first token.
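
(A small check illustrating this, added as a sketch; it assumes the same meta-llama/Llama-2-7b-hf tokenizer, and the expected ids in the comments are taken from the outputs reported above.)

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")

answer = "\n\n### Answer\nI like Paris."
prompt = "### Instruction\nRepeat the context." + "\n\n### Context\n```\nI like Paris.\n```" + answer

# encoded on its own, the segment gets its own prefix-space token (29871) ...
print(tokenizer(answer).input_ids)
# [1, 29871, 13, 13, 2277, 29937, 673, 13, 29902, 763, 3681, 29889]

# ... while inside the concatenated prompt the prefix is applied only once, at the
# very start, so the same text contributes no standalone 29871
print(tokenizer(prompt).input_ids[-10:])
# [13, 13, 2277, 29937, 673, 13, 29902, 763, 3681, 29889]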


oir commented Oct 9, 2023

Hello! Same issue here. @ArthurZucker, can you clarify your comment? What is SPIECE_UNDERLINE?

you are adding an underline to the prompt

Do you mean the tokenizer() call does this automatically (if @piegu is explicitly doing this, I've missed it)? If so, I would at least expect add_special_tokens=False to fix this, but it does not:

>>> tkzr = AutoTokenizer.from_pretrained("./llama-2")
>>> tkzr.encode("\nhi")
[1, 29871, 13, 2918]
>>> tkzr.encode("\nhi", add_special_tokens=False)
[29871, 13, 2918]
>>> tkzr.decode([29871, 13, 2918])
'\nhi'
>>> tkzr.decode([13, 2918])
'\nhi'
>>> tkzr.decode([29871])
''

@ArthurZucker (Collaborator)

No, the argument you might be looking for is add_prefix_space, which we did not include for Llama. The SPIECE_UNDERLINE is the prefix space added by sentencepiece. We can't deactivate this easily, but we can add more support for it.
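
(For anyone assembling prompts piece by piece on these versions, one possible workaround is sketched below: tokenize each segment without special tokens and drop the standalone prefix-space token from non-first segments. This is not an official API, just an illustration; it reproduces the full-prompt ids for the example in this issue, but re-tokenization at segment boundaries can still differ for other inputs.)

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")
PREFIX_ID = 29871  # the standalone "▁" (prefix space) id in the Llama 2 vocab

def encode_segment(text, first=False):
    # tokenize one prompt segment; for non-first segments, drop the standalone
    # prefix-space token that sentencepiece prepends when the segment starts with
    # a character (such as a newline) it cannot merge the prefix into
    ids = tokenizer(text, add_special_tokens=False).input_ids
    if not first and ids and ids[0] == PREFIX_ID:
        ids = ids[1:]
    return ids

instruction = "### Instruction\nRepeat the context."
context = "\n\n### Context\n```\nI like Paris.\n```"
answer = "\n\n### Answer\nI like Paris."

ids = [tokenizer.bos_token_id]
ids += encode_segment(instruction, first=True)
ids += encode_segment(context)
ids += encode_segment(answer)

print(ids == tokenizer(instruction + context + answer).input_ids)  # expected: True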


oir commented Oct 10, 2023

Got it, I'm following this now. Thanks!


github-actions bot commented Nov 4, 2023

This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.

Please note that issues that do not follow the contributing guidelines are likely to be ignored.
