Update readme to use mistralai/Mistral-7B-Instruct-v0.2
#723
Conversation
README.md (outdated review thread):

    }, {
        "title": "Assist in a task",
        "prompt": "How do I make a delicious lemon cheesecake?"
    "name": "meta-llama/Llama-2-70b-chat-hf",
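For context, the fragment above is part of a model entry in the README's `MODELS` configuration. A minimal sketch of what the updated entry for `mistralai/Mistral-7B-Instruct-v0.2` might look like follows; the `name`, `title`, and `prompt` fields come from the diff above, while `chatPromptTemplate` (using Mistral's `[INST] … [/INST]` instruct format in place of the deprecated `userMessageEndToken`-style fields) and `parameters` are assumptions about the surrounding config shape, not taken from this PR:

```json
[
  {
    "name": "mistralai/Mistral-7B-Instruct-v0.2",
    "chatPromptTemplate": "<s>{{#each messages}}{{#ifUser}}[INST] {{content}} [/INST]{{/ifUser}}{{#ifAssistant}}{{content}}</s>{{/ifAssistant}}{{/each}}",
    "parameters": {
      "temperature": 0.3,
      "max_new_tokens": 1024
    },
    "promptExamples": [
      {
        "title": "Assist in a task",
        "prompt": "How do I make a delicious lemon cheesecake?"
      }
    ]
  }
]
```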
Chose Llama over Mixtral since the Llama config shows/uses more fields, like `userMessageEndToken`.
Actually, those tokens are deprecated and are mostly there for legacy reasons (we didn't have the prompt template back then), but indeed we should update the readme model!

Could we use something like mistralai/Mistral-7B-Instruct-v0.2, or any other model that works with the free tier of the Inference API? I think you need to be PRO to use Llama 2 70B, right? Not sure.
> Actually those tokens are deprecated, and are mostly there for legacy reasons

Oh, I see. 2d31904 added mistralai/Mistral-7B-Instruct-v0.2 instead.
LGTM! Long term I'd like to make a separate markdown doc specifying all the custom-model variables and endpoint types, since it's quite a lot of options and too detailed for the README I think, but this is already a much better example 😁

^ Agree with the above. With all the endpoint options, etc., the readme is getting a bit long.
meta-llama/Llama-2-70b-chat-hf → mistralai/Mistral-7B-Instruct-v0.2
* Update readme to use `meta-llama/Llama-2-70b-chat-hf`
* add `mistralai/Mistral-7B-Instruct-v0.2` instead
Update readme to use a model that is not deprecated