How to deploy a new model with torchchat? #1038
Comments
Glad to have you try things out! What file format is the local model you're working with?
Two formats. The model's base model is llama3-8b.
If the model is accessible from Hugging Face, here's an example PR of how you can add it: #947. See specifically the known_model_configs and model.json changes.
For this you can add a known_model_config (based on your params.json, with "use_tiktoken": true for llama3 derivatives) and then point to your .pth. For example:
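For reference, a sketch of what such a params-style config could look like for a llama3-8b derivative. The values below are the stock llama3-8b hyperparameters (an assumption here; copy the numbers from your own params.json instead), with the "use_tiktoken" flag added as the comment above describes:

```json
{
  "dim": 4096,
  "n_layers": 32,
  "n_heads": 32,
  "n_kv_heads": 8,
  "vocab_size": 128256,
  "multiple_of": 1024,
  "ffn_dim_multiplier": 1.3,
  "norm_eps": 1e-05,
  "rope_theta": 500000.0,
  "use_tiktoken": true
}
```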
Similar to this case? #1040
I want to use torchchat to load my trained model directly from local disk. How do I change torchchat/config/data/models.json? Do I need to change download_and_convert in download.py? And what other files might need to be changed?
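To make the question concrete, here is a hedged sketch of the kind of edit being asked about: merging a local-checkpoint entry into a models.json-style registry so the download step can be skipped. The key names ("distribution_channel", "checkpoint_path", etc.) and the model name are hypothetical; the real schema lives in torchchat/config/data/models.json and may differ.

```python
import json
import tempfile
from pathlib import Path

# Hypothetical entry for a locally trained llama3-8b derivative.
# Key names are illustrative only; check torchchat/config/data/models.json
# for the actual schema.
local_entry = {
    "my-org/my-llama3-finetune": {
        "aliases": ["my-finetune"],
        "distribution_channel": "local",  # assumption: marks a no-download model
        "checkpoint_path": "/models/my-finetune/model.pth",
        "tokenizer_path": "/models/my-finetune/tokenizer.model",
    }
}

def register_local_model(models_json: Path, entry: dict) -> dict:
    """Merge a local-model entry into an existing models.json registry."""
    registry = json.loads(models_json.read_text()) if models_json.exists() else {}
    registry.update(entry)
    models_json.write_text(json.dumps(registry, indent=2))
    return registry

# Demo against a temporary copy rather than the real config file.
with tempfile.TemporaryDirectory() as d:
    path = Path(d) / "models.json"
    merged = register_local_model(path, local_entry)
    print(sorted(merged))
```

Whatever the actual schema turns out to be, editing the registry this way only covers model lookup; whether download_and_convert in download.py also needs a bypass for local paths is exactly the open question above.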