Add Gemini #2186
Conversation
Should be good to go. The quota increase to 600 requests per minute has been submitted. I updated the metadata and parameters, which I think are correct now.
    # TODO: support VLM such as "gemini-pro-vision"
    model_name: str = ""
    if request.model_engine == "gemini":
        model_name = "gemini-pro"
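The engine-to-model mapping in the diff above could be sketched as a lookup table; this is a minimal illustration, and everything except the `"gemini"` → `"gemini-pro"` pair (which appears in the diff) is a hypothetical extension point.

```python
# Hypothetical sketch of the engine-to-model mapping from the diff above.
# Only "gemini" -> "gemini-pro" comes from the PR; other entries would be
# added later (e.g. when VLM support such as "gemini-pro-vision" lands).
ENGINE_TO_MODEL = {
    "gemini": "gemini-pro",
}


def resolve_model_name(model_engine: str) -> str:
    """Map a request's model_engine to a concrete model name."""
    try:
        return ENGINE_TO_MODEL[model_engine]
    except KeyError:
        raise ValueError(f"Unsupported model engine: {model_engine}")
```

A dict keeps the mapping declarative, so adding future engines is a one-line change rather than another `if` branch.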
How does model versioning work? I believe sending a request with `gemini-pro` will just get the latest version.
We will revisit later when Google releases more versions.
I think it's simpler to just match their name and call it.
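The "just match their name" suggestion could be sketched as a pass-through with an allow-list; the set contents beyond `"gemini-pro"` and any versioned name suffixes are assumptions, not confirmed API model names.

```python
# Hedged sketch of passing the caller's model name through unchanged.
# Future versioned names would be supported by extending the allow-list
# rather than adding new mapping logic.
SUPPORTED_MODELS = {"gemini-pro"}  # extend as Google releases more versions


def passthrough_model_name(requested: str) -> str:
    """Return the caller's model name as-is if it is a known model."""
    if requested not in SUPPORTED_MODELS:
        raise ValueError(f"Unsupported model: {requested}")
    return requested
```

This keeps the client's naming aligned with Google's, so no translation layer needs updating when new versions appear.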
This still needs more testing but so far works on MMLU.