[BUG] When trying to convert llama2-7b model from HF format to megatron format #1348
Comments
I met this question too!
I am going to try NeMo for finetuning the model :)
Met this question too. I don't know why this happened, but it can be ignored by changing the code from [elided] to [elided].
Thanks a lot, I will try it later.
Thank you, I'll have a try.
Describe the bug
The error is

To Reproduce

Stack trace/logs

Environment

Proposed fix

Additional context
When I try to install transformers==4.31 or 4.32, there is an error compiling the tokenizers wheel, so I ran `pip install transformers` (unpinned) instead. I followed the guide at https://github.com/NVIDIA/Megatron-LM/blob/main/docs/llama_mistral.md
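For reference, a minimal sketch of the install step described above. The tokenizers build error usually means pip fell back to compiling the Rust extension from source; upgrading pip first often lets it pick a prebuilt wheel instead. The exact version pin below is an assumption taken from the text, not a verified fix:

```shell
# Sketch of a possible workaround (version pin is an assumption).
# A newer pip is more likely to resolve a prebuilt tokenizers wheel,
# avoiding the from-source Rust compilation that fails here.
pip install --upgrade pip
pip install "transformers==4.31.0"

# If the pinned install still tries to build tokenizers from source,
# the unpinned install the reporter used is the fallback:
# pip install transformers
```

Whether the pinned version works depends on a prebuilt tokenizers wheel existing for your Python version and platform; otherwise a Rust toolchain is needed to build it from source.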