
[Bug]: Function calling in groupchats #960

Closed · milioe opened this issue Dec 12, 2023 · 13 comments
milioe commented Dec 12, 2023

Describe the bug

Hi,

With pyautogen==0.2.2 I tried to run the three code samples provided in #274, and previously in #252 and #152, about function calling in group chats, but I got errors from all of them.

Steps to reproduce

This is my current code:

from autogen import config_list_from_json, AssistantAgent, UserProxyAgent, GroupChat, GroupChatManager, Agent, ConversableAgent
import os
import random
from dataclasses import dataclass
from dotenv import load_dotenv
load_dotenv()


@dataclass
class ExecutorGroupchat(GroupChat):
    def select_speaker(
        self, last_speaker: ConversableAgent, selector: ConversableAgent
    ):
        """Select the next speaker."""

        try:
            message = self.messages[-1]
            if "function_call" in message:
                return self.admin
        except Exception as e:
            print(e)
            pass

        selector.update_system_message(self.select_speaker_msg())
        final, name = selector.generate_oai_reply(
            self.messages
            + [
                {
                    "role": "system",
                    "content": f"Read the above conversation. Then select the next role from {self.agent_names} to play. Only return the role.",
                }
            ]
        )
        if not final:
            # i = self._random.randint(0, len(self._agent_names) - 1)  # randomly pick an id
            return self.next_agent(last_speaker)
        try:
            return self.agent_by_name(name)
        except ValueError:
            return self.next_agent(last_speaker)



def say_hello(name):
    return f"Hi, {name}, how are you doing?"

def say_goodbye(name):
    return f"bye, {name}, have a good day"

def write_txt(text):
    with open("output.txt", "w") as f:
        f.write(text)
    return "done"



config_list = config_list_from_json(env_or_file="OAI_CONFIG_LIST.json")

llm_config = {
    "functions": [
        {
            "name": "say_hello",
            "description": "Use this function to say hello to someone",
            "parameters": {
                "type": "object",
                "properties": {
                    "name": {
                        "type": "string",
                        "description": "The name of the person to say hello to",
                    },
                },
                "required": ["name"],
            },
        },
        {
            "name": "say_goodbye",
            "description": "Use this function to say goodbye to someone",
            "parameters": {
                "type": "object",
                "properties": {
                    "name": {
                        "type": "string",
                        "description": "The name of the person to say goodbye to",
                    },
                },
                "required": ["name"],
            },
        },
        {
            "name": "write_txt",
            "description": "Use this function to write content to a file",
            "parameters": {
                "type": "object",
                "properties": {
                    "text": {
                        "type": "string",
                        "description": "The text to write",
                    },
                },
                "required": ["text"],
            },
        },
    ],
    "config_list": config_list,
    "seed": 45,
    "request_timeout": 120
}


user_proxy = UserProxyAgent(
    name="user_proxy",
    system_message="A human that will provide the necessary information to the group chat manager. Execute suggested function calls.",
    function_map={
        "say_hello": say_hello,
        "say_goodbye": say_goodbye,
        "write_txt": write_txt,
    },
    human_input_mode="NEVER",
    code_execution_config={"work_dir": "fileread"})

assistant = AssistantAgent(
    name="assistant",
    system_message="""You are an assistant that proposes the execution of functions to the user proxy""",
    llm_config=llm_config
)

architect = AssistantAgent(
    name="azure_architect",
    system_message="""You are an architect that creates a plan in order for the assistant to execute the functions and complete the task""",
    llm_config={'config_list': config_list, 'seed': 45, 'request_timeout': 120},
)

groupchat = ExecutorGroupchat(agents=[user_proxy, assistant, architect], messages=[], max_round=20)

manager = GroupChatManager(groupchat=groupchat, llm_config=llm_config, system_message="Choose one agent to play the role of the user proxy")

user_proxy.initiate_chat(
    manager,
    message="""say hello to thibault"""
    )

I got this error with all three options:

Traceback (most recent call last):
  File "/Users/emiliosandoval/Documents/gen/Testing/group_w_funct.py", line 137, in <module>
    user_proxy.initiate_chat(
  File "/Users/emiliosandoval/opt/anaconda3/envs/gen/lib/python3.10/site-packages/autogen/agentchat/conversable_agent.py", line 556, in initiate_chat
    self.send(self.generate_init_message(**context), recipient, silent=silent)
  File "/Users/emiliosandoval/opt/anaconda3/envs/gen/lib/python3.10/site-packages/autogen/agentchat/conversable_agent.py", line 354, in send
    recipient.receive(message, self, request_reply, silent)
  File "/Users/emiliosandoval/opt/anaconda3/envs/gen/lib/python3.10/site-packages/autogen/agentchat/conversable_agent.py", line 487, in receive
    reply = self.generate_reply(messages=self.chat_messages[sender], sender=sender)
  File "/Users/emiliosandoval/opt/anaconda3/envs/gen/lib/python3.10/site-packages/autogen/agentchat/conversable_agent.py", line 962, in generate_reply
    final, reply = reply_func(self, messages=messages, sender=sender, config=reply_func_tuple["config"])
  File "/Users/emiliosandoval/opt/anaconda3/envs/gen/lib/python3.10/site-packages/autogen/agentchat/groupchat.py", line 338, in run_chat
    speaker = groupchat.select_speaker(speaker, self)
  File "/Users/emiliosandoval/Documents/gen/Testing/group_w_funct.py", line 24, in select_speaker
    selector.update_system_message(self.select_speaker_msg())
TypeError: GroupChat.select_speaker_msg() missing 1 required positional argument: 'agents'

Expected Behavior

I noticed that if I pass the llm_config argument to the manager (manager = GroupChatManager(groupchat=groupchat, llm_config=llm_config, system_message="Choose one agent to play the role of the user proxy")), this error appears:
ValueError: GroupChatManager is not allowed to make function/tool calls. Please remove the 'functions' or 'tools' config in 'llm_config' you passed in.

Screenshots and logs

No response

Additional Information

I understand that when #152 was raised as an issue, autogen was > 0.1.8, and when #274 was raised it was > v0.1.11.

Is it because of the versions, the code, or one of the prompts? I'd appreciate your help.

milioe added the bug label Dec 12, 2023
afourney (Member) commented Dec 12, 2023

It looks like there are two problems here, and it is the notebook that needs to be fixed.

The first problem is that ExecutorGroupchat subclasses GroupChat, but GroupChat has evolved since this was written. Many of the methods now require an agents parameter, because the list can be dynamic due to the new allow_repeat_speaker options. I can fix this by making agents an optional parameter in GroupChat and setting sane defaults, but we probably also want the original author of this notebook to fix things; I'll track down who that was. A minimal sketch of the adaptation is below.
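For reference, a minimal sketch (not a definitive fix) of how the overridden select_speaker could be adapted, assuming self.agents holds the agent list as in the current GroupChat:

# Inside ExecutorGroupchat.select_speaker, pass the agent list explicitly;
# newer GroupChat.select_speaker_msg() requires it (see the TypeError above).
selector.update_system_message(self.select_speaker_msg(self.agents))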

The second problem is that you are passing an llm_config with functions to the GroupChatManager -- that's not supported behavior (and was the source of several breaking bugs). This is why you are now seeing the exception. Create a new llm_config that you pass to the GroupChatManager. Just set it to:

manager_llm_config = {
    "config_list": config_list,
    "seed": 45,
    "request_timeout": 120
}
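
With that stripped-down config, the manager can be constructed as before while function execution stays with the agents that own the function_map; a minimal sketch reusing the names from the code above:

manager = GroupChatManager(
    groupchat=groupchat,
    llm_config=manager_llm_config,  # no "functions" key, so no ValueError
    system_message="Choose one agent to play the role of the user proxy",
)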

afourney (Member) commented:

@sonichi @kevin666aa for visibility

kbalasu1 commented:

@afourney - I use a different config for the group chat manager that does not include the function definitions. But when the function executor responds with the result of the executed function, the chat manager errors out because the role 'function' is not supported:

Encountered BadRequestError('Error code: 400 - {\'error\': {\'code\': \'BadRequest\', \'message\': "\'function\' is not an allowed role. The allowed roles are [\'system\', \'user\', \'assistant\'].", \'param\': None, \'type\': None}}'). Set callback_exception='verbose' to see the full traceback.

I am not sure where I can set the callback_exception='verbose' to see additional logging here as well.

yoadsn commented Jan 12, 2024

@kbalasu1 I suggest opening a separate case for this - but here is my (uneducated) pointer:

  • The OpenAI API chokes on seeing role="function" when no function definitions are passed in.
  • The code here looks for the "tool" role specifically, to avoid including the raw tool reply in the message list provided to OpenAI. I think it should also compare against the "function" role, since the two serve the same purpose: one under the old API concept of functions, the other under the new tools concept.
  • Since this line of code does not check against "function", it includes the reply in the message list with the "function" role, which does not work.

I may not see the whole picture here - but anyway @afourney maybe this could be helpful.

@kbalasu1 it would be useful if you provided a minimal reproduction (including an explicit llm_config if possible, and without the api_key of course) so we can run it locally and trace the problem.

Btw, the code here and here also needs to generalize to "tool" and not just "function". It may be worth having an is_tool_role helper that internally handles the two possible roles, to reduce bugs like these.
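
A minimal sketch of what such a helper might look like (is_tool_role is a hypothetical name, not an existing library function):

def is_tool_role(message: dict) -> bool:
    # Treat the legacy "function" role and the newer "tool" role the same,
    # since they serve the same purpose under the old and new API concepts.
    return message.get("role") in ("function", "tool")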

sonichi (Contributor) commented Jan 14, 2024

@yoadsn Thanks for the notes. I agree with most of them. Your first point is valid, and the last suggestion is valid too.

The check

if message.get("role") != "tool":

is needed only for tool responses, because we store tool responses in a special way that needs to be processed.

To address the first point, which is the hardest, I'm thinking of the following: for agents that are neither the suggester nor the executor of a function/tool, the function/tool call/response messages need to be processed into plain text by the group chat manager before being sent to them. The same applies to the groupchat's messages used to perform speaker selection.
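
A minimal sketch of that idea, with a hypothetical helper name and message shapes assumed from the OpenAI chat format:

def flatten_tool_message(message: dict) -> dict:
    # Hypothetical: rewrite tool-related messages as plain text for agents
    # that neither suggested nor executed the call.
    if message.get("role") in ("function", "tool"):
        return {"role": "user", "content": f"Tool result: {message.get('content', '')}"}
    if message.get("function_call") or message.get("tool_calls"):
        calls = message.get("function_call") or message.get("tool_calls")
        return {"role": "assistant", "content": f"Suggested tool call(s): {calls}"}
    return message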

sonichi (Contributor) commented Jan 14, 2024

After investigation, the described bug doesn't exist for the sync group chat. There is indeed a bug for the async group chat due to a mismatch with the sync group chat; #1243 fixes that and adds tests.
The last comment, about generalizing to "tool", is also not necessary, as seen in these tests.

selimhanerhan commented:

I'm having a similar issue, and my code is very similar to @milioe's. I checked the code @yoadsn shared from groupchat.py; should "function_map" be used instead of "functions"? If so, can you point me to the right place, please, @sonichi? Thank you.

ithllc commented Mar 25, 2024

If I am not mistaken, in an earlier version of autogen, didn't we have to call pop on the llm_config right before assigning it to the group chat, so that functions and tools were popped out? I remember having to do that but don't remember the version. Can someone please check?

Thanks.
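
A minimal sketch of the workaround being described, assuming an llm_config like the ones earlier in this thread:

# Pop function/tool schemas from a copy before assigning it to the manager.
manager_llm_config = dict(llm_config)
manager_llm_config.pop("functions", None)
manager_llm_config.pop("tools", None)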

ithllc commented Mar 25, 2024

Just a quick update: I found the fix. I'm not sure how you all will implement it, but I went to the AutoGen GitHub website, found an example, added it, and it worked for me; I had this same issue with autogen version 0.2.20. I came across this issue while working on the sample notebook agentchat_groupchat_RAG.ipynb.

https://microsoft.github.io/autogen/docs/notebooks/agentchat_function_call_async

The code is below; notice how I used pop and renamed the group chat manager's llm_config so there is nothing confusing. Again, I remember this being an issue back in version 0.1.8. Please fix this in the next iteration if you can; it's a headache to figure out for such a simple solution.

# Imports added for completeness; config_list is assumed to be defined as
# elsewhere in this thread, e.g. via autogen.config_list_from_json.
import chromadb
from typing_extensions import Annotated

import autogen
from autogen import AssistantAgent
from autogen.agentchat.contrib.retrieve_user_proxy_agent import RetrieveUserProxyAgent

llm_config = {
    "timeout": 600,
    "temperature": 0,
    "config_list": config_list,
}

def termination_msg(x):
    return isinstance(x, dict) and "TERMINATE" == str(x.get("content", ""))[-9:].upper()

boss = autogen.UserProxyAgent(
    name="Boss",
    is_termination_msg=termination_msg,
    human_input_mode="NEVER",
    code_execution_config=False,  # we don't want to execute code in this case.
    default_auto_reply="Reply TERMINATE if the task is done.",
    description="The boss who ask questions and give tasks.",
)

boss_aid = RetrieveUserProxyAgent(
    name="Boss_Assistant",
    is_termination_msg=termination_msg,
    human_input_mode="NEVER",
    max_consecutive_auto_reply=3,
    retrieve_config={
        "task": "code",
        "docs_path": "https://raw.githubusercontent.com/microsoft/FLAML/main/website/docs/Examples/Integrate%20-%20Spark.md",
        "chunk_token_size": 1000,
        "model": config_list[0]["model"],
        "client": chromadb.PersistentClient(path="/tmp/chromadb"),
        "collection_name": "flaml",
        "get_or_create": True,
    },
    code_execution_config=False,  # we don't want to execute code in this case.
    description="Assistant who has extra content retrieval power for solving difficult problems.",
)

coder = AssistantAgent(
    name="Senior_Python_Engineer",
    is_termination_msg=termination_msg,
    system_message="You are a senior python engineer, you provide python code to answer questions. Reply TERMINATE in the end when everything is done.",
    llm_config=llm_config,
    description="Senior Python Engineer who can write code to solve problems and answer questions.",
)

pm = autogen.AssistantAgent(
    name="Product_Manager",
    is_termination_msg=termination_msg,
    system_message="You are a product manager. Reply TERMINATE in the end when everything is done.",
    llm_config=llm_config,
    description="Product Manager who can design and plan the project.",
)

reviewer = autogen.AssistantAgent(
    name="Code_Reviewer",
    is_termination_msg=termination_msg,
    system_message="You are a code reviewer. Reply TERMINATE in the end when everything is done.",
    llm_config=llm_config,
    description="Code Reviewer who can review the code.",
)

PROBLEM = "How to use spark for parallel training in FLAML? Give me sample code."

def _reset_agents():
    boss.reset()
    boss_aid.reset()
    coder.reset()
    pm.reset()
    reviewer.reset()

def rag_chat():
    _reset_agents()
    groupchat = autogen.GroupChat(
        agents=[boss_aid, pm, coder, reviewer], messages=[], max_round=12, speaker_selection_method="round_robin"
    )
    manager = autogen.GroupChatManager(groupchat=groupchat, llm_config=llm_config)

    # Start chatting with boss_aid as this is the user proxy agent.
    boss_aid.initiate_chat(
        manager,
        message=boss_aid.message_generator,
        problem=PROBLEM,
        n_results=3,
    )

def norag_chat():
    _reset_agents()
    groupchat = autogen.GroupChat(
        agents=[boss, pm, coder, reviewer],
        messages=[],
        max_round=12,
        speaker_selection_method="auto",
        allow_repeat_speaker=False,
    )
    manager = autogen.GroupChatManager(groupchat=groupchat, llm_config=llm_config)

    # Start chatting with the boss as this is the user proxy agent.
    boss.initiate_chat(
        manager,
        message=PROBLEM,
    )

def call_rag_chat():
    _reset_agents()

    # In this case, we will have multiple user proxy agents and we don't initiate the chat
    # with the RAG user proxy agent.
    # In order to use the RAG user proxy agent, we need to wrap RAG agents in a function and
    # call it from other agents.
    def retrieve_content(
        message: Annotated[
            str,
            "Refined message which keeps the original meaning and can be used to retrieve content for code generation and question answering.",
        ],
        n_results: Annotated[int, "number of results"] = 3,
    ) -> str:
        boss_aid.n_results = n_results  # Set the number of results to be retrieved.
        # Check if we need to update the context.
        update_context_case1, update_context_case2 = boss_aid._check_update_context(message)
        if (update_context_case1 or update_context_case2) and boss_aid.update_context:
            boss_aid.problem = message if not hasattr(boss_aid, "problem") else boss_aid.problem
            _, ret_msg = boss_aid._generate_retrieve_user_reply(message)
        else:
            _context = {"problem": message, "n_results": n_results}
            ret_msg = boss_aid.message_generator(boss_aid, None, _context)
        return ret_msg if ret_msg else message

    boss_aid.human_input_mode = "NEVER"  # Disable human input for boss_aid since it only retrieves content.

    for caller in [pm, coder, reviewer]:
        d_retrieve_content = caller.register_for_llm(
            description="retrieve content for code generation and question answering.", api_style="function"
        )(retrieve_content)

    for executor in [boss, pm]:
        executor.register_for_execution()(d_retrieve_content)

    groupchat = autogen.GroupChat(
        agents=[boss, pm, coder, reviewer],
        messages=[],
        max_round=12,
        speaker_selection_method="round_robin",
        allow_repeat_speaker=False,
    )

    # The manager gets a copy of llm_config with function/tool schemas popped
    # out, since GroupChatManager is not allowed to make function/tool calls.
    llm_config_manager = llm_config.copy()
    llm_config_manager.pop("functions", None)
    llm_config_manager.pop("tools", None)

    manager = autogen.GroupChatManager(groupchat=groupchat, llm_config=llm_config_manager)

    # Start chatting with the boss as this is the user proxy agent.
    boss.initiate_chat(
        manager,
        message=PROBLEM,
    )

ekzhu (Collaborator) commented Mar 25, 2024

Don't share an llm_config among different agents. You can share the config list, but LLM configs get modified as you register functions.
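
A minimal sketch of the point, with hypothetical agent names (config_list is again assumed to be loaded elsewhere); each agent gets its own copy, so registering a function on one cannot mutate another's config:

import copy

from autogen import AssistantAgent

base_llm_config = {"timeout": 600, "temperature": 0, "config_list": config_list}

# Separate copies: register_for_llm on coder won't add a schema to pm's config.
coder = AssistantAgent(name="coder", llm_config=copy.deepcopy(base_llm_config))
pm = AssistantAgent(name="pm", llm_config=copy.deepcopy(base_llm_config))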

ithllc commented Mar 25, 2024

@ekzhu and @afourney, thanks for explaining that. I was also making it known that this is reproducible directly from Microsoft AutoGen's sample notebooks under the current version; I came across it in agentchat_groupchat_RAG.ipynb. Thankfully I remembered what was done back in autogen==0.1.8 to remedy the issue, but your answers are the correct ones and explain thoroughly why this error happens. It is unfortunate that the sample notebooks don't make this apparent.

Can this be changed in the documentation and sample notebooks?

Thanks in advance

afourney (Member) commented:

Yup. Can you file a documentation bug? We're going through and updating all the documentation now, and I don't want to lose track of this.

-- Adam

ithllc commented Mar 25, 2024

Yes, I will file a documentation bug for this today.

whiskyboy pushed a commit to whiskyboy/autogen that referenced this issue Apr 17, 2024
gagb closed this as completed Aug 27, 2024