Make `main.py` compatible with OpenAI compatible APIs #189
base: main
Conversation
@loubnabnl, if you have time I'd appreciate a review, thanks!

Seems like there is an issue with
@tshrjn you're going to need to provide more context. In the PR description I explicitly state that I am not using the `chat.completions` endpoint.
I tested this branch and it worked perfectly fine. The only caveat is that it really only works with completion models (i.e. babbage and davinci at OpenAI) and not with chat models. But this is expected due to the format of the benchmark.
It could be adapted to use the chat API fairly easily (see the sketch below). However, I have given up on this PR getting reviewed.
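For anyone attempting that adaptation, a minimal sketch, assuming the `openai` v1 Python client (the URL, key, and model name are placeholders, not part of this PR), would wrap each benchmark prompt in a single user message:

```python
from openai import OpenAI

# Placeholders: point base_url at your server; local servers that skip
# auth often accept a dummy key such as "EMPTY".
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

def chat_complete(prompt: str, model: str = "my-served-model") -> str:
    """Route a plain completion-style prompt through the chat endpoint.

    Hypothetical adaptation, not part of this PR: the benchmark prompt
    is wrapped in a single user message and the reply text is returned.
    """
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        max_tokens=256,
    )
    return response.choices[0].message.content
```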
Solves #161 and #148 and is an alternative to #179.

Employs the DRY principle by only changing the creation of the `Evaluator` class in `main.py` and the `generation.parallel_generations` function. Therefore, won't need to maintain multiple `Evaluator` classes in parallel.
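As a rough sketch of that single point of change (the helper name `build_client` and the flag plumbing are assumptions for illustration, not the literal diff):

```python
from typing import Optional

from openai import OpenAI

def build_client(base_url: Optional[str]) -> Optional[OpenAI]:
    """Hypothetical helper mirroring the PR's single point of change.

    When a base_url is supplied, generations are routed through an
    OpenAI-compatible endpoint; otherwise the harness keeps its existing
    local-model code path. The client reads OPENAI_API_KEY from the
    environment by default.
    """
    if base_url is None:
        return None  # unchanged local-inference path
    return OpenAI(base_url=base_url)
```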
Using the `completions` endpoint instead of `chat.completions` was a design choice because it eliminates errors/confusion from additional chat templating taking place behind the API.
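Concretely, a request through the plain completions endpoint sends the benchmark prompt verbatim, with no chat template applied server-side. A minimal sketch, assuming the `openai` v1 Python client (URL, key, and model name below are placeholders):

```python
from openai import OpenAI

# Placeholders: point base_url at your server; local servers that skip
# auth often accept a dummy key such as "EMPTY".
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.completions.create(
    model="my-served-model",       # the served name of your model
    prompt="def fibonacci(n):\n",  # sent verbatim, no chat template
    max_tokens=128,
    temperature=0.2,
)
print(response.choices[0].text)
```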
If you want to evaluate a model running behind an OpenAI compatible API, then you can use `base_url` to send any generation requests to that URL:

- Set `base_url` to the URL you are hosting with (e.g. `http://localhost:8000/v1`).
- Set `model` to the served name of your model.

To use the OpenAI API itself:

- Set `OPENAI_API_KEY`.
- Set `base_url` to `https://api.openai.com/v1`.
- Set `model` to the name of the OpenAI model you want to use (e.g. `gpt-3.5-turbo-1106`).
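The hosted configuration, as a sketch under the same assumptions (the key is a placeholder, and `gpt-3.5-turbo-instruct` stands in for whichever completion-capable OpenAI model you pick):

```python
import os

from openai import OpenAI

# Placeholder key: export OPENAI_API_KEY in your shell instead of
# hard-coding it in real use.
os.environ["OPENAI_API_KEY"] = "sk-..."

client = OpenAI(base_url="https://api.openai.com/v1")
response = client.completions.create(
    model="gpt-3.5-turbo-instruct",  # assumption: a completion-capable model
    prompt="def fibonacci(n):\n",
    max_tokens=128,
)
print(response.choices[0].text)
```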