support load model locally #1040

Closed

irasin opened this issue Aug 19, 2024 · 3 comments
Labels
bug Something isn't working

Comments

irasin commented Aug 19, 2024

🚀 The feature, motivation and pitch

I have downloaded the Llama-3-8B model from Hugging Face into a local directory, as shown below:

[screenshot: local directory listing of Meta-Llama-3-8B-Instruct]

But I can't run `python3 torchchat.py generate --checkpoint_dir /home/Meta-Llama-3-8B-Instruct --prompt "It was a dark and stormy night, and"`, since the download is not a single `.pt` file.
How can I run the pre-downloaded model in torchchat?
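
For context, a Hugging Face snapshot of Meta-Llama-3-8B-Instruct typically contains sharded `.safetensors` weights plus config and tokenizer files rather than a single checkpoint file; the layout looks roughly like this (exact file names may vary):

```text
Meta-Llama-3-8B-Instruct/
├── config.json
├── generation_config.json
├── model-00001-of-00004.safetensors
├── ...
├── model-00004-of-00004.safetensors
├── model.safetensors.index.json
├── tokenizer.json
└── tokenizer_config.json
```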


Jack-Khuu (Contributor) commented Aug 19, 2024

Jack-Khuu added the bug (Something isn't working) label on Aug 19, 2024
irasin (Author) commented Aug 20, 2024

Thanks a lot.
Following the code, I found that only `.bin` files are supported in torchchat, so I had to modify the code to support `.safetensors`, and now I can run the local model.
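
For reference, here is a minimal sketch of that kind of change, assuming a local directory of Hugging Face weight shards; the helper name `load_local_state_dict` is illustrative, not torchchat's actual API:

```python
from pathlib import Path

import torch
from safetensors.torch import load_file  # pip install safetensors


def load_local_state_dict(model_dir: str) -> dict:
    """Merge all weight shards found under model_dir into one state dict.

    Illustrative sketch only: prefers .safetensors shards when present,
    and falls back to Hugging Face .bin shards otherwise.
    """
    root = Path(model_dir)
    shards = sorted(root.glob("*.safetensors")) or sorted(root.glob("*.bin"))
    if not shards:
        raise FileNotFoundError(f"no .safetensors or .bin weights in {root}")
    state_dict = {}
    for shard in shards:
        if shard.suffix == ".safetensors":
            # safetensors files are memory-mapped and load without pickle
            state_dict.update(load_file(shard))
        else:
            state_dict.update(
                torch.load(shard, map_location="cpu", weights_only=True)
            )
    return state_dict
```

Note that merging shards this way only covers reading the weights; the resulting keys may still need remapping to match torchchat's model definition.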

irasin closed this as completed Aug 20, 2024
sunshinesfbay commented

@irasin would you be able to submit a PR updating docs/ADVANCED-USERS.md on how to recognize and navigate this issue, along with any other issues, errors, or unclear instructions you encountered?
