fix dpo_trainer bug for LLMs without bos_token in config #1885
Conversation
@DZ9 since this snippet is used in a number of other trainers, would it be better to add it as a helper function and then use it in the DPO, CPO, and ORPO trainers?
The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
@kashif Sure. I've moved the bos and eos token handling into helper functions in utils.py and applied them in the DPO, ORPO, and CPO trainers.
thanks @DZ9, can you also run the formatting command?
@kashif Absolutely. Done running the formatting command and committed.
I just ran into this problem today, thanks
In `dpo_trainer.py`, the `bos_token` is automatically added whenever the first token of the tokenized sentence is not equal to `bos_token`. This `bos_token` is read from the tokenizer config file, but some LLMs, like Qwen2, leave `bos_token` as `None` in the config, which results in `None` being prepended to the `input_ids` tensor, and a traceback is then raised when running DPO.
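A minimal sketch of the failure mode (simplified and illustrative, not the trainer's exact code; token ids are made up):

```python
import torch

# Qwen2-style tokenizer config: bos_token is unset, so bos_token_id is None.
bos_token_id = None
prompt_input_ids = [14990, 1879]  # some tokenized prompt (illustrative ids)

# Pre-fix logic: prepend bos unconditionally whenever the first token differs.
if prompt_input_ids[0] != bos_token_id:
    prompt_input_ids = [bos_token_id] + prompt_input_ids

print(prompt_input_ids)  # [None, 14990, 1879]

# Converting the corrupted ids to a tensor then fails, e.g.:
# RuntimeError: Could not infer dtype of NoneType
torch.tensor(prompt_input_ids)
```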
Also, in LLMs like Qwen2, the `bos_token` (`<|endoftext|>`) is not equal to the first template token (`<|im_start|>`). Automatically adding this `bos_token` without the user's awareness will cause unexpected behavior when using the trained DPO model for inference. The `bos_token` should therefore only be added when its value in the tokenizer config is not `None`, as in the sketch below.
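A minimal sketch of the guarded check (the helper name and signature here are illustrative; the actual helpers added by this PR live in `trl/trainer/utils.py`):

```python
from typing import Dict, List, Optional

def add_bos_token_if_needed(
    bos_token_id: Optional[int], tokens: Dict[str, List[int]]
) -> Dict[str, List[int]]:
    """Prepend the bos token only when the tokenizer config actually defines one.

    With Qwen2-style configs (bos_token_id is None), the inputs pass through
    unchanged instead of having None injected into input_ids.
    """
    if bos_token_id is not None and tokens["input_ids"][0] != bos_token_id:
        tokens["input_ids"] = [bos_token_id] + tokens["input_ids"]
        tokens["attention_mask"] = [1] + tokens["attention_mask"]
    return tokens

prompt = {"input_ids": [14990, 1879], "attention_mask": [1, 1]}
print(add_bos_token_if_needed(None, prompt))    # unchanged: bos_token is unset
print(add_bos_token_if_needed(151643, prompt))  # bos id prepended
```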
After this PR, the data is processed normally and the model can be trained normally.
The test command is: