
Update readme to use mistralai/Mistral-7B-Instruct-v0.2 #723

Merged
mishig25 merged 2 commits into main on Jan 24, 2024

Conversation

@mishig25 (Collaborator) commented on Jan 23, 2024:

Update readme to use a model that is not deprecated

@mishig25 requested a review from nsarrazin on January 23, 2024 at 18:00
README.md (outdated diff excerpt)
}, {
"title": "Assist in a task",
"prompt": "How do I make a delicious lemon cheesecake?"
"name": "meta-llama/Llama-2-70b-chat-hf",
@mishig25 (Collaborator, author) commented on the diff:

Chose llama over mixtral since the llama config shows/uses more fields, like userMessageEndToken.
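
(For context, a MODELS entry in that legacy token style might look roughly like the sketch below. This is an illustration only, not the README's actual contents; apart from userMessageEndToken, which is named in the comment above, the other field names and values are assumptions about chat-ui's legacy options.)

    {
      "name": "meta-llama/Llama-2-70b-chat-hf",
      "userMessageToken": "[INST] ",
      "userMessageEndToken": " [/INST] ",
      "assistantMessageToken": "",
      "promptExamples": [
        { "title": "Assist in a task", "prompt": "How do I make a delicious lemon cheesecake?" }
      ]
    }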

@nsarrazin (Collaborator) replied:

Actually those tokens are deprecated and are mostly here for legacy reasons (we didn't have the prompt template back then), but indeed we should update the readme model!

Could we use something like mistralai/Mistral-7B-Instruct-v0.2, or any other model that works with the free tier of the Inference API? I think you need to be PRO to use Llama 2 70B, right? Not sure.
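
(For reference, a MODELS entry for mistralai/Mistral-7B-Instruct-v0.2 that relies on a prompt template instead of the legacy tokens could look roughly like the sketch below. The chatPromptTemplate string follows Mistral's [INST] chat format; the exact fields and values are illustrative assumptions, not the text that was merged in this PR.)

    {
      "name": "mistralai/Mistral-7B-Instruct-v0.2",
      "chatPromptTemplate": "<s>{{#each messages}}{{#ifUser}}[INST] {{content}} [/INST]{{/ifUser}}{{#ifAssistant}}{{content}}</s>{{/ifAssistant}}{{/each}}",
      "parameters": {
        "temperature": 0.3,
        "max_new_tokens": 1024,
        "stop": ["</s>"]
      },
      "promptExamples": [
        { "title": "Assist in a task", "prompt": "How do I make a delicious lemon cheesecake?" }
      ]
    }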

@mishig25 (Collaborator, author) replied:

> Actually those tokens are deprecated, and are mostly here for legacy reasons

oh I see

2d31904 added mistralai/Mistral-7B-Instruct-v0.2 instead

@nsarrazin (Collaborator) left a review:

LGTM! Long term I'd like to make a separate markdown doc specifying all the custom model variables and endpoint types, since it's quite a lot of options and too detailed for the README, but this is already a much better example 😁

@mishig25 merged commit 22e7bfa into main on Jan 24, 2024
3 checks passed
@mishig25 deleted the update_readme branch on January 24, 2024 at 09:46
@mishig25 (Collaborator, author) commented:

^ Agree with the above. With all the endpoint options, etc., the readme is getting a bit long.

@mishig25 changed the title from "Update readme to use meta-llama/Llama-2-70b-chat-hf" to "Update readme to use mistralai/Mistral-7B-Instruct-v0.2" on Jan 24, 2024
ice91 pushed a commit to ice91/chat-ui that referenced this pull request Oct 30, 2024
* Update readme to use `meta-llama/Llama-2-70b-chat-hf`

* add `mistralai/Mistral-7B-Instruct-v0.2` instead