[Enhancement] Add LLM API key checks to LLM-based evaluators #1989
Conversation
Thanks @aybruhm !
There is one thing missing though.
See 👇
`success, response = await check_ai_critique_inputs(`

We'd need to update that so that we immediately check for API keys for any LLM-based evaluator, not just AI critique. Does it make sense?
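A generalized up-front check along the lines suggested above could look like the following. This is a minimal sketch, not the project's actual code: the function name `check_llm_api_key`, the `LLM_BASED_EVALUATORS` set, and the config/key shapes are all assumptions for illustration.

```python
# Hypothetical sketch: check provider keys up front for any LLM-based
# evaluator, not just AI critique. All names here are assumptions.

# Evaluators assumed to require an LLM provider key.
LLM_BASED_EVALUATORS = {"auto_ai_critique"}

def check_llm_api_key(evaluator_configs, llm_keys):
    """Return (success, error_message) before running any evaluator."""
    for config in evaluator_configs:
        if config["evaluator_key"] in LLM_BASED_EVALUATORS:
            if not llm_keys.get("OPENAI_API_KEY"):
                return False, (
                    f"Evaluator '{config['evaluator_key']}' requires an "
                    "OpenAI API key, but none was provided."
                )
    return True, None
```

Running this once over all requested evaluator configs means a missing key fails fast, before any evaluation work starts.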
- format llm provider keys and ensure the required llm keys exist in the provided evaluator configs
- properly format llm provider keys and check that the required llm keys exist
Left one suggestion to centralise evaluator info.
- configurable setting to evaluators requiring llm api keys - update fixture to make use of centralized evaluators
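One way to centralise the evaluator info, as suggested in the review, is a single registry that both the backend checks and the test fixtures read from. This is a sketch under assumptions: the `EvaluatorInfo` dataclass, its field names, and the `requires_llm_api_key` flag are illustrative, not the project's actual structures.

```python
# Hypothetical central registry of evaluator metadata. A single
# `requires_llm_api_key` flag replaces scattered per-evaluator checks,
# and test fixtures can consume the same list.
from dataclasses import dataclass

@dataclass(frozen=True)
class EvaluatorInfo:
    key: str
    name: str
    requires_llm_api_key: bool = False

EVALUATORS = [
    EvaluatorInfo("auto_exact_match", "Exact Match"),
    EvaluatorInfo("auto_ai_critique", "AI Critique", requires_llm_api_key=True),
]

def evaluators_requiring_llm_keys():
    """Keys of all evaluators that cannot run without an LLM API key."""
    return [e.key for e in EVALUATORS if e.requires_llm_api_key]
```

With this in place, adding a new LLM-based evaluator only requires setting one flag in the registry; the key check and the fixtures pick it up automatically.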
…s-in-llm-based-evaluators
…loat for ai critique evaluator
…i-to-playground' into feature/age-532-poc-1e-add-llm-api-key-checks-in-llm-based-evaluators
Merged 9212c7b into feature/age-491-poc-1e-expose-running-evaluators-via-api-to-playground
I have QA'd the PR and got the expected result ✅
Description
This PR enhances the backend by adding checks to ensure that an OpenAI API key is present for LLM-based evaluators, raising clear exception messages if the key is missing. Test coverage has also been expanded to include scenarios where the OpenAI API key is required, including a new test case for `auto_ai_critique`.
Related Issue
Closes AGE-532 & AGE-569
Acceptance Tests
Test 1: OpenAI API Key is Required for LLM-Based Evaluators
Attempt to run an LLM-based evaluator (`auto_ai_critique`) without an OpenAI API key.
Test 2: LLM-based Evaluator Runs Successfully with a Valid API Key
Run the LLM-based evaluator (`auto_ai_critique`) with the valid OpenAI API key.
Test 3 (automated): Test for AI Critique (LLM as a Judge) API Key Checks
`auto_ai_critique` should be included and verified as part of the coverage.
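The automated check in Test 3 could be written along these lines. This is a pytest-style sketch, not the PR's actual tests: the `check_llm_api_key` function here is a stand-in for the real backend checker, whose name and signature may differ.

```python
# Hypothetical pytest sketch for the API-key checks. The checker below is
# a stand-in for the real backend function, included so the tests run
# self-contained.

def check_llm_api_key(evaluator_key, llm_keys):
    """Stand-in checker: auto_ai_critique requires an OpenAI key."""
    if evaluator_key == "auto_ai_critique" and not llm_keys.get("OPENAI_API_KEY"):
        raise ValueError("OpenAI API key is required for auto_ai_critique")

def test_ai_critique_without_key_raises():
    # Missing key should surface a clear exception message.
    try:
        check_llm_api_key("auto_ai_critique", {})
    except ValueError as exc:
        assert "OpenAI API key" in str(exc)
    else:
        raise AssertionError("expected ValueError for missing API key")

def test_ai_critique_with_key_passes():
    # A valid key should let the evaluator proceed without raising.
    check_llm_api_key("auto_ai_critique", {"OPENAI_API_KEY": "sk-test"})
```

In the real suite these would run against the backend's checker (and any new LLM-based evaluator added to the coverage), rather than the stand-in defined here.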