Unable To Generate Schema: InvalidLLMOutputType (Response validation failed!) #1265
Comments
Hello, @mayankagarwal19911! I'm here to assist you with any bugs, questions, or contributions you may have. Let's tackle this issue together! To resolve the …
By following these steps, you should be able to resolve the …
#1265 I would like to solve this bug
@pratik9409 Sure, thanks a lot for the availability, I've assigned it to you! :D
Testing with a simple DataFrame:
df = pd.DataFrame(columns=["Empdata"], data=[[1], [2]])
df.head()
try: …
System Info
pandasai 2.2.8
Python 3.9.6
🐛 Describe the bug
Sample code

import pandas as pd
from pandasai.ee.agents.semantic_agent import SemanticAgent  # import path for pandasai 2.x

df = pd.DataFrame(columns=["column1"], data=[["value1"]])
df.head()
semantic_agent = SemanticAgent(dfs=df)
print(semantic_agent._schema)
Error

in BaseAgent.call_llm_with_prompt(self, prompt)
                return result
            else:
                raise InvalidLLMOutputType("Response validation failed!")
        except Exception:
            if (
                not self.context.config.use_error_correction_framework
                or retry_count >= self.context.config.max_retries - 1
            ):
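The traceback excerpt comes from pandasai's error-correction loop: the LLM response is validated, and on failure the call is retried until `max_retries` is reached, after which `InvalidLLMOutputType` propagates to the caller. A minimal sketch of that retry-with-validation pattern (generic Python, not the pandasai source; `call_llm`, `validate`, and the parameter names are assumptions for illustration):

```python
class InvalidLLMOutputType(Exception):
    """Raised when the LLM response fails validation."""

class RetryingCaller:
    # Hypothetical stand-in for BaseAgent.call_llm_with_prompt's retry loop.
    def __init__(self, call_llm, validate, max_retries=3, use_error_correction=True):
        self.call_llm = call_llm                  # function: prompt -> raw LLM output
        self.validate = validate                  # function: output -> bool
        self.max_retries = max_retries
        self.use_error_correction = use_error_correction

    def call(self, prompt):
        retry_count = 0
        while True:
            try:
                result = self.call_llm(prompt)
                if self.validate(result):
                    return result
                raise InvalidLLMOutputType("Response validation failed!")
            except Exception:
                # Give up when correction is disabled or retries are exhausted,
                # mirroring the condition shown in the traceback above.
                if (not self.use_error_correction
                        or retry_count >= self.max_retries - 1):
                    raise
                retry_count += 1
```

Under this pattern, the error in the report means every attempt (including retries) produced output that failed validation, so raising `max_retries` alone will not help unless the LLM eventually returns a parseable schema.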