fix(backend): AGE-391 Fix different error info in playground and evals #1857
Conversation
An existing try-except mechanism was hiding valuable error information in evaluation mode only, and not in playground mode.
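For illustration, here is a minimal Python sketch of the pattern described above. The function names, the `evaluation_mode` flag, and the response shape are assumptions for this sketch, not the actual agenta code:

```python
"""Minimal sketch of the bug pattern; all names here are hypothetical."""
import asyncio
import traceback


async def call_llm_app(payload: dict) -> dict:
    # Stand-in for the real LLM app call; raises to simulate a failure.
    raise ValueError("Model gpt-3.5-turbo does not support JSON response format")


async def invoke(payload: dict, *, evaluation_mode: bool) -> dict:
    try:
        return await call_llm_app(payload)
    except Exception as exc:
        if evaluation_mode:
            # Before the fix: evaluation mode returned only a generic
            # message, discarding the error detail and traceback that
            # the playground already surfaced.
            return {"error": "LLM app failed"}
        return {
            "detail": {
                "error": str(exc),
                "traceback": traceback.format_exc(),
            }
        }


if __name__ == "__main__":
    # Same failure, two different amounts of information.
    print(asyncio.run(invoke({}, evaluation_mode=True)))
    print(asyncio.run(invoke({}, evaluation_mode=False)))
```

Per the PR title and description, the fix makes the evaluation path return the same detailed error payload that the playground path already did.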
Thanks @jp-agenta!
Awaiting test in oss/cloud.beta to merge
Thanks @jp-agenta
The fix solves the issue for a wrong API key. However, it does not resolve the case where, for instance, the model is called with JSON response format (see jam) but does not support it.
https://jam.dev/c/66077ee8-94cc-4c9c-84e4-690fa73ca584
In such a case, the response is a 500 and actually contains the detail and the traceback from the call. In fact, I would argue that the traceback from llm_app_service.py is not very useful; what is useful is only the traceback from the llm app itself, no?
{
  "detail": {
    "error": "Model gpt-3.5-turbo does not support JSON response format",
    "traceback": "Traceback (most recent call last):\n File \"/var/task/agenta/sdk/decorators/llm_entrypoint.py\", line 194, in execute_function\n result = await func(*args, **func_params[\"params\"])\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/var/task/agenta/sdk/decorators/tracing.py\", line 78, in async_wrapper\n raise e\n File \"/var/task/agenta/sdk/decorators/tracing.py\", line 59, in async_wrapper\n result = await func(*args, **kwargs)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/var/task/_app.py\", line 88, in generate\n raise ValueError(\nValueError: Model gpt-3.5-turbo does not support JSON response format\n"
  }
}
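On the point that only the llm app's own traceback is useful: in the payload above, the llm_entrypoint.py and tracing.py frames are SDK plumbing, and only the /var/task/_app.py frame comes from the app itself. A hedged sketch of how those outer frames could be filtered out, assuming a helper name and a filename-based rule that are not from the actual codebase:

```python
import traceback


def app_only_traceback(exc: BaseException, app_file: str = "_app.py") -> str:
    # Keep only the frames that originate in the user's app file;
    # fall back to the full stack if none match.
    frames = traceback.extract_tb(exc.__traceback__)
    kept = [f for f in frames if f.filename.endswith(app_file)] or list(frames)
    lines = ["Traceback (most recent call last):\n"]
    lines += traceback.format_list(kept)
    lines += traceback.format_exception_only(type(exc), exc)
    return "".join(lines)
```

Applied to the example above, this would keep just the `_app.py` frame and the final ValueError line.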
On it.
Actually, I do get the expected response without any changes.
It does indeed work now. Not sure what happened.
Screenshots:
Playground (correct)
Evaluations (incorrect — before)
Evaluations (correct — after)