LLM Call response of JudgeAgent not always returning <Yes> or <No> #1291
Hello, @sschrijver-pon! I'm here to help you with any bugs, questions, or contributions you have. Let's work together to resolve your issue.

To ensure that the LLM responses include the required `<Yes>` or `<No>` tags, you can make the prompt ask for them explicitly. Here is an example of how you might modify the prompt:

```python
prompt = """
Please analyze the following data and respond with either <Yes> or <No> based on your analysis.

<Data>
{data}
</Data>

Your response should be either <Yes> or <No>.
"""

response = pipeline_context.config.llm.call(prompt, pipeline_context)
```

By ensuring that the prompt explicitly asks for `<Yes>` or `<No>`, the LLM is more likely to return a correctly formatted response.

Additionally, you can implement unit tests to verify that the LLM responses are correctly formatted. For example:

```python
import pytest
from unittest.mock import MagicMock

# Import paths are assumed for pandasai 2.x.
from pandasai.ee.agents.judge_agent.pipeline.llm_call import LLMCall
from pandasai.exceptions import InvalidOutputValueMismatch
from pandasai.pipelines.logic_unit_output import LogicUnitOutput
from pandasai.pipelines.pipeline_context import PipelineContext


def test_llm_call(sample_df, context, logger, config):
    input_validator = LLMCall()
    config["llm"].call = MagicMock(return_value="<Yes>")
    context = PipelineContext([sample_df], config)
    result = input_validator.execute(input="test", context=context, logger=logger)
    assert isinstance(result, LogicUnitOutput)
    assert result.output is True


def test_llm_call_no(sample_df, context, logger, config):
    input_validator = LLMCall()
    config["llm"].call = MagicMock(return_value="<No>")
    context = PipelineContext([sample_df], config)
    result = input_validator.execute(input="test", context=context, logger=logger)
    assert isinstance(result, LogicUnitOutput)
    assert result.output is False


def test_llm_call_with_no_tags(sample_df, context, logger, config):
    input_validator = LLMCall()
    config["llm"].call = MagicMock(return_value="yes")
    context = PipelineContext([sample_df], config)
    with pytest.raises(InvalidOutputValueMismatch):
        input_validator.execute(input="test", context=context, logger=logger)
```

These tests ensure that the LLM responses are correctly formatted and help catch any issues early in the development process [1].
Possible solution: change `pandas-ai/pandasai/ee/agents/judge_agent/prompts/templates/judge_agent_prompt.tmpl` (lines 7 to 11 in e011e8f) and `pandas-ai/pandasai/ee/agents/judge_agent/pipeline/llm_call.py` (lines 45 to 48 in e011e8f); see the sketch below for what the `llm_call.py` change could look like.
Edit: I also tried the solution below, but it seems to skip the
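For illustration, here is a minimal sketch of what a more tolerant check in `llm_call.py` could look like. `parse_judge_response` and its regex fallback are hypothetical, not the upstream code at those lines; only `InvalidOutputValueMismatch` is taken from the traceback below.

```python
import re

from pandasai.exceptions import InvalidOutputValueMismatch


def parse_judge_response(response: str) -> bool:
    """Hypothetical tolerant check: prefer the <Yes>/<No> tags, then
    fall back to a bare, case-insensitive yes/no."""
    if "<Yes>" in response:
        return True
    if "<No>" in response:
        return False
    # The verdict is requested at the end of the prompt, so take the
    # last bare yes/no that appears in the answer.
    matches = re.findall(r"\b(yes|no)\b", response, re.IGNORECASE)
    if matches:
        return matches[-1].lower() == "yes"
    raise InvalidOutputValueMismatch("Invalid response of LLM Call")
```

Relaxing the check this way would avoid hard failures when the model answers the three questions but drops the tags.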
System Info
macos = 14.5
python = 3.10.13
pandasai = 2.2.12
🐛 Describe the bug
Using the AzureOpenAI agent in combination with the JudgeAgent.

The logs show that the `<Yes>`/`<No>` instruction from `judge_agent_prompt.tmpl` is added to the prompt by the JudgeAgent. But the actual LLM responses do not contain `<Yes>` or `<No>`; they only answer questions 1, 2 and 3, so `pandas-ai/pandasai/ee/agents/judge_agent/pipeline/llm_call.py` (lines 44 to 50 in e011e8f) raises:

```
pandasai.exceptions.InvalidOutputValueMismatch: Invalid response of LLM Call
```
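Until the validation is relaxed upstream, a caller-side workaround is to retry the call a few times and only fail if no attempt yields a tagged answer. This is a minimal sketch: `judge_with_retries` is a hypothetical helper, and the `llm.call(prompt, context)` signature is assumed to match the example above.

```python
from pandasai.exceptions import InvalidOutputValueMismatch


def judge_with_retries(llm, prompt, context, max_attempts=3):
    """Hypothetical wrapper: re-ask the judge until the response
    contains one of the required tags, then map it to a bool."""
    last_response = None
    for _ in range(max_attempts):
        last_response = llm.call(prompt, context)
        if "<Yes>" in last_response:
            return True
        if "<No>" in last_response:
            return False
    raise InvalidOutputValueMismatch(
        f"No <Yes>/<No> tag after {max_attempts} attempts: {last_response!r}"
    )
```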