
Improve error handling in evaluation #1708

Merged — 10 commits merged into main on Jul 5, 2024

Conversation

@bekossy (Member) commented May 26, 2024

No description provided.

vercel bot commented May 26, 2024

The latest updates on your projects:

| Name | Status | Preview | Comments | Updated (UTC) |
| --- | --- | --- | --- | --- |
| agenta | ✅ Ready | Visit Preview | 💬 Add feedback | Jun 27, 2024 3:05pm |

@bekossy (Member, Author) commented Jun 7, 2024

When the output returns an OpenAI exception, it should be returned as an `error` object with `message` and/or `stacktrace` properties rather than plain text. The frontend will handle the error by displaying the information in a modal: the `message` property will be shown in red text, and the `stacktrace` will be displayed in a code block.
(cc @aybruhm @mmabrouk)

For instance:
[Screenshot: Screen Shot 2024-06-07 at 6 29 01 AM]
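The error contract described above can be sketched as a small Python helper. This is a minimal sketch for illustration only: the function name `format_error` and the exact payload shape are assumptions, not the PR's actual serialization code.

```python
import traceback

def format_error(exc: Exception) -> dict:
    # Hypothetical helper: wrap an exception in the error object the
    # frontend expects. `message` is rendered in red text in the modal;
    # `stacktrace` is rendered in a code block.
    return {
        "error": {
            "message": str(exc),
            "stacktrace": traceback.format_exc(),
        }
    }

# Example: an OpenAI-style authentication failure surfaced as an error object.
try:
    raise ValueError("Incorrect API key provided")
except ValueError as exc:
    payload = format_error(exc)

print(payload["error"]["message"])  # → Incorrect API key provided
```

Returning a structured object instead of raw text lets the frontend distinguish real LLM output from failures without string matching.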

@bekossy bekossy marked this pull request as ready for review June 7, 2024 06:03
@dosubot dosubot bot added size:L This PR changes 100-499 lines, ignoring generated files. Frontend UI labels Jun 7, 2024
@bekossy (Member, Author) commented Jun 9, 2024

Ways to reproduce errors in the Evaluation:
(cc @mmabrouk @aybruhm)

  • Create a testset with a very long context
    [Screenshots: Screen Shot 2024-06-09 at 4 47 59 PM, Screen Shot 2024-06-09 at 4 49 07 PM]
  • Create a Code evaluator with bad code
    [Screenshots: Screen Shot 2024-06-09 at 4 51 27 PM, Screen Shot 2024-06-09 at 4 52 23 PM]
  • Create an App with an invalid/incorrect API key
    [Screenshot: Screen Shot 2024-06-09 at 4 53 16 PM]
  • Enable "Force JSON"
    The backend returned a 500 Internal Server Error.
    Response: {detail: "'NoneType' object is not iterable"}
    [Screenshots: Screen Shot 2024-06-09 at 5 04 40 PM, Screen Shot 2024-06-09 at 5 05 11 PM]
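For the "Code evaluator with bad code" case above, a minimal evaluator that reliably triggers a runtime error might look like the sketch below. The `evaluate` name and signature are hypothetical; the actual code-evaluator interface may differ.

```python
# Hypothetical code evaluator containing a deliberate bug:
# referencing an undefined name raises a NameError when the
# evaluator runs, which should then surface in the evaluation
# view as an error object (message + stacktrace) rather than text.
def evaluate(app_output: str) -> float:
    return undefined_score  # NameError: 'undefined_score' is not defined
```

Pasting something like this into a Code evaluator is a quick way to exercise the new error modal end to end.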

@aakrem (Collaborator) commented Jun 23, 2024

@bekossy can we fix the conflicts, please?

A contributor commented:

[Not blocking]
There is a chance that this could be used for LLM app errors too, not just evaluation errors, right?

@dosubot dosubot bot added the lgtm This PR has been approved by a maintainer label Jul 5, 2024
@mmabrouk mmabrouk merged commit 486ea1e into main Jul 5, 2024
8 checks passed
@mmabrouk mmabrouk deleted the AGE-171/-improve-error-handling-in-evaluation branch July 5, 2024 08:45