[Bug]: structured_predict returns error string during pydantic validation fail #17649
Comments
Hey @alexander-azizi-martin! I'm here to help you out with any bugs, questions, or contributions you have!

To address this, you could modify the error handling logic to raise exceptions for Pydantic validation errors instead of returning them as strings. This would involve checking the `is_error` flag on the `ToolOutput`. This behavior is particularly problematic with the `FunctionCallingProgram`, which casts the response to the Pydantic model without checking for errors. If you're considering fixing this, you would need to adjust the code to ensure that validation errors are properly propagated as exceptions; this would likely involve changes in the `FunctionCallingProgram` class.

To continue talking to Dosu, mention @dosu.
I think the intention is that you (the user) check the output. The reason it is this way right now is that it makes it easier for outer components to handle (putting the output directly into some refine logic in a query engine, or into chat history in an agent).
@logan-markewich Could a flag be added to the call in order to throw on error?
@alexander-azizi-martin that's definitely an option!
Ok will work on a PR for this then. |
Bug Description
When a validation error from Pydantic occurs, `structured_predict` will simply return the error as a string rather than raising it. The following example is for illustrative purposes: if the response from OpenAI does not contain two elements in the `people` list, then a string will be returned from `structured_predict` rather than a `PersonList` object.

This is because within the `FunctionCallingProgram` class, it casts the `AgentChatResponse` to the Pydantic model without checking whether there is an error associated with the `ToolOutput`:

llama_index/llama-index-core/llama_index/core/program/function_program.py
Lines 157 to 168 in c2a323e
llama_index/llama-index-core/llama_index/core/program/function_program.py
Lines 36 to 52 in c2a323e
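The problematic flow can be sketched in plain Python. This is a simplified stand-in for llama_index's `ToolOutput` and the cast performed by `FunctionCallingProgram`, not the real classes; it only illustrates how ignoring `is_error` lets the error string leak through as the "result":

```python
# Simplified stand-ins for llama_index's ToolOutput / FunctionCallingProgram
# cast logic (illustrative only, not the real implementation).
from dataclasses import dataclass


@dataclass
class ToolOutput:
    content: str        # serialized result, or the error message
    raw_output: object  # the parsed object on success
    is_error: bool = False


def cast_to_model(tool_output: ToolOutput):
    # Mirrors the reported bug: the cast never consults is_error, so a
    # failed validation just passes the error string through unchanged.
    return tool_output.raw_output


ok = ToolOutput(content="PersonList(...)",
                raw_output={"people": ["Alice", "Bob"]})
bad = ToolOutput(content="1 validation error for PersonList",
                 raw_output="1 validation error for PersonList",
                 is_error=True)

print(type(cast_to_model(ok)))   # <class 'dict'>
print(type(cast_to_model(bad)))  # <class 'str'> — the error leaked through
```

The caller gets no signal that anything failed; the only way to notice is to type-check the return value.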
When the tool is called and there is an error, it simply serializes the error and sets `is_error` to true on the `ToolOutput`, but then doesn't use this value to raise an error after the fact:

llama_index/llama-index-core/llama_index/core/tools/calling.py
Lines 10 to 33 in c2a323e
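A possible fix would be to honor `is_error` before handing the output back. The sketch below uses the same simplified `ToolOutput` stand-in as above; `raise_on_error` is a hypothetical flag name (matching the suggestion in the comments), not an existing parameter:

```python
# Hedged sketch of a fix: consult is_error before casting, and raise
# instead of returning the serialized error. "raise_on_error" is a
# hypothetical flag, not a real llama_index parameter.
from dataclasses import dataclass


@dataclass
class ToolOutput:
    content: str
    raw_output: object
    is_error: bool = False


def resolve_tool_output(tool_output: ToolOutput, raise_on_error: bool = True):
    # Surface the original pydantic validation failure to the caller
    # instead of silently returning its string form.
    if tool_output.is_error and raise_on_error:
        raise ValueError(tool_output.content)
    return tool_output.raw_output


failed = ToolOutput(content="1 validation error for PersonList",
                    raw_output="1 validation error for PersonList",
                    is_error=True)
try:
    resolve_tool_output(failed)
except ValueError as e:
    print("raised:", e)
```

Keeping the flag optional (defaulting to the current behavior) would preserve backwards compatibility for outer components that expect a string.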
This is unexpected behaviour. I'd assume that if there is a Pydantic error it would be raised. Instead, I have to check for every output whether the returned value is an instance of a string or of the Pydantic class to know whether an error occurred.
I have faced this issue more with `4o-mini` compared to other OpenAI models, as it is not very good at consistently following specific instructions. So I'd often face Pydantic validation errors and be surprised when the error I get is something along the lines of `str object does not have property people`.

Further guidance on this would be appreciated. I can possibly fix this if it is something which needs fixing.
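The confusing downstream error can be reproduced without any LLM call. The sketch below (with `Person`/`PersonList` as example model names) shows the kind of Pydantic validation failure described, and how treating its string form as the model produces the misleading `AttributeError`:

```python
# Illustrative: a pydantic validation failure, and the confusing follow-on
# error when its string form is used as if it were the model.
from typing import List

from pydantic import BaseModel, ValidationError


class Person(BaseModel):
    name: str


class PersonList(BaseModel):
    people: List[Person]


try:
    PersonList(people="not a list")  # invalid payload
    result = None
except ValidationError as e:
    # This string is what structured_predict hands back on failure.
    result = str(e)

# Treating the error string as the model reproduces the confusing error:
try:
    result.people
except AttributeError as e:
    print(e)  # 'str' object has no attribute 'people'
```

Because the string mentions the field name, the eventual `AttributeError` is easy to misread as a problem with the model definition rather than a swallowed validation error.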
Version
0.12.12
Steps to Reproduce
Described above.
Relevant Logs/Tracebacks