[Bug]: Function calling in groupchats #960
Comments
It looks like there are two problems here, and it is the notebook that needs to be fixed.

The first problem is that ExecutorGroupchat subclasses GroupChat, but GroupChat has evolved since this was written. Many of its methods now require an agents parameter, because the agent list can be dynamic.

The second problem is that you are passing an llm_config with functions to the GroupChatManager -- that's not supported behavior (and was the source of several breaking bugs). This is why you are now seeing the exception. Create a new llm_config, without functions, and pass that to the GroupChatManager instead.
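A minimal sketch of the workaround described above, with made-up config values (the thread does not include the actual snippet): build a second config for the manager that carries no `functions`/`tools` entries.

```python
import copy

# Full config used by the assistants (illustrative placeholder values)
llm_config = {
    "config_list": [{"model": "gpt-4", "api_key": "sk-..."}],
    "functions": [
        {"name": "search", "description": "demo tool",
         "parameters": {"type": "object", "properties": {}}},
    ],
}

# GroupChatManager must not carry function/tool schemas, so pass it a
# filtered copy instead of the agents' config.
manager_llm_config = {
    k: copy.deepcopy(v)
    for k, v in llm_config.items()
    if k not in ("functions", "tools")
}

print("functions" in manager_llm_config)  # False: safe for the manager
```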
@sonichi @kevin666aa for visibility
@afourney - When I use a different config for the group chat manager that does not include the function definition, and the function executor responds with the result of the function that was run, the chat manager errors out saying the role 'function' is not supported.

I am also not sure where I can set callback_exception='verbose' to see additional logging here.
@kbalasu1 I suggest opening a separate case for this - but here is my (uneducated) pointer:
I may not see the whole picture here - but anyway @afourney maybe this could be helpful. @kbalasu1, it would be useful if you provided a minimal reproduction (including an explicit llm_config if possible, and without the api_key of course) so we can run it locally and trace the problem. Btw, the code here and here also needs to generalize to "tool" and not just "function". Maybe it's worth having an "is_tool_role" helper function that internally handles the two possible roles, to reduce bugs like these.
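The suggested helper could look something like this (a hypothetical sketch; `is_tool_role` is not an existing autogen function, and autogen may name or place such a helper differently):

```python
# Message roles that carry function/tool results, in both the legacy
# "function" style and the newer "tool" style.
TOOL_ROLES = ("function", "tool")

def is_tool_role(role: str) -> bool:
    """Return True for message roles that carry a function/tool result.

    Hypothetical helper sketched from the suggestion above, so call sites
    don't have to remember both role spellings.
    """
    return role in TOOL_ROLES

print(is_tool_role("function"), is_tool_role("tool"), is_tool_role("user"))
# True True False
```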
@yoadsn Thanks for the notes. I agree with most of them. Your first point is valid, and the last suggestion is valid too. The special handling is needed for tool responses only because we store tool responses in a special way that needs to be processed.

To address the first point, which is the hardest, I'm thinking of the following: for agents who are neither the suggester nor the executor of a function/tool, the function/tool call/response message needs to be processed into plain text by the group chat manager before being sent to them.
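The idea of flattening tool messages into plain text before relaying them could be sketched like this (purely illustrative; `flatten_tool_message` and the message shapes are assumptions, not autogen's actual implementation):

```python
def flatten_tool_message(msg: dict) -> dict:
    """Convert a function/tool call or response message into a plain-text
    assistant message, so agents that neither suggested nor executed the
    call can still consume it. Illustrative sketch only."""
    # Tool/function *responses* become a plain summary line.
    if msg.get("role") in ("function", "tool"):
        name = msg.get("name", "tool")
        return {"role": "assistant",
                "content": f"[{name} returned] {msg.get('content', '')}"}
    # Tool/function *calls* become a plain description of the call.
    if msg.get("function_call"):
        fc = msg["function_call"]
        return {"role": "assistant",
                "content": f"[calling {fc['name']} with {fc.get('arguments', '{}')}]"}
    # Everything else passes through untouched.
    return msg

print(flatten_tool_message({"role": "function", "name": "add", "content": "3"}))
```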
After investigation, the described bug doesn't exist for sync group chat. There is indeed a bug for async group chat, due to a mismatch with the sync group chat. #1243 fixes that and adds tests.
If I am not mistaken, in an earlier version of autogen, didn't we have to call "pop" on the llm_config right before assigning it to the groupchat, so that we pop out functions and tools? I remember having to do that but don't remember the version. Please can someone check. Thanks.
Just a quick update: I found the fix. Not sure how you all will implement it, but I went to the AutoGen GitHub website, found an example (https://microsoft.github.io/autogen/docs/notebooks/agentchat_function_call_async), added it, and it worked for me; I had this same issue with autogen version 0.2.20. I came across this issue while working on the sample notebook agentchat_groupchat_RAG.ipynb.

The code is below; notice how I used the pop function and renamed the group chat manager's llm_config so there is nothing confusing to me. Again, I remember this being an issue back in version 0.1.8. Please, someone fix this for the next iteration if you can. It's definitely a headache to figure out for such a simple solution.

(Only the skeleton of the snippet survived copying; the definitions are elided with `...` below.)

```python
llm_config = {...}

def termination_msg(x):
    ...

boss = autogen.UserProxyAgent(...)
boss_aid = RetrieveUserProxyAgent(...)
coder = AssistantAgent(...)
pm = autogen.AssistantAgent(...)
reviewer = autogen.AssistantAgent(...)

PROBLEM = "How to use spark for parallel training in FLAML? Give me sample code."

def _reset_agents():
    ...

def rag_chat():
    ...

def norag_chat():
    ...

def call_rag_chat():
    ...
```
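The pop workaround the comment describes can be sketched as follows (a minimal sketch with made-up config values, independent of the notebook above):

```python
import copy

# Agents' config, including function schemas (placeholder values)
llm_config = {
    "config_list": [{"model": "gpt-4", "api_key": "sk-..."}],
    "functions": [{"name": "retrieve_content", "description": "demo",
                   "parameters": {}}],
    "timeout": 60,
}

# Give the manager its own copy, then pop the keys it is not allowed to have.
llm_config_manager = copy.deepcopy(llm_config)
llm_config_manager.pop("functions", None)
llm_config_manager.pop("tools", None)

# manager = autogen.GroupChatManager(groupchat=groupchat,
#                                    llm_config=llm_config_manager)
```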
Don't share an llm_config among different agents. You can share the config list, but the LLM config gets modified as you register functions.
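To illustrate why sharing is risky (a hypothetical sketch: `register_function` here is stand-in logic, not autogen's actual code): registering a function mutates the config dict in place, so every agent holding the same dict object sees the change.

```python
config_list = [{"model": "gpt-4"}]  # placeholder; safe to share

shared_cfg = {"config_list": config_list}
agent_a_cfg = shared_cfg  # both agents handed the *same* dict object
agent_b_cfg = shared_cfg

def register_function(cfg, schema):
    """Stand-in for what happens during agent function registration:
    the schema is appended to that agent's llm_config in place."""
    cfg.setdefault("functions", []).append(schema)

register_function(agent_a_cfg, {"name": "search"})
print("functions" in agent_b_cfg)  # True: agent B was silently modified too

# Safe pattern: share only config_list; give each agent a fresh dict.
agent_c_cfg = {"config_list": config_list}
register_function(agent_a_cfg, {"name": "calc"})
print("functions" in agent_c_cfg)  # False: agent C is unaffected
```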
@ekzhu and @afourney Thanks for explaining that. I was also pointing out that this is reproducible directly from Microsoft AutoGen's sample notebooks under the current version; I came across the issue in agentchat_groupchat_RAG.ipynb. Thankfully, I remembered what was done in the old autogen==0.1.8 version to remedy it, but your answers are the correct ones and explain thoroughly why this error happens. Unfortunately, from the sample notebooks alone, this is unknown. Can this be changed in the documentation and sample notebooks? Thanks in advance.
Yup. Can you file a documentation bug? We're going through and updating all the documentation now, and I don't want to lose track of this. -- Adam
Yes, I will file a documentation bug for this today. |
* More tolerant time limit for test_overtime
* Cancel assertion because the GitHub VM is sometimes super slow

Co-authored-by: Li Jiang <lijiang1@microsoft.com>
Describe the bug
Hi,
With pyautogen==0.2.2 I tried to run the three code samples provided in #274, and previously in #252 and #152, about function calling in groupchats, but I got errors from all of them.
Steps to reproduce
This is my current code:
I got this error with all three options:
Expected Behavior
I noticed that if I pass the `llm_config` argument to `manager`:

```python
manager = GroupChatManager(
    groupchat=groupchat,
    llm_config=llm_config,
    system_message="Choose one agent to play the role of the user proxy",
)
```

this error appears:

```
ValueError: GroupChatManager is not allowed to make function/tool calls. Please remove the 'functions' or 'tools' config in 'llm_config' you passed in.
```
Screenshots and logs
No response
Additional Information
I understand that when #152 was raised as an issue, autogen was at > 0.1.8, and for #274 it was at > v0.1.11. Is it because of the versions, the code, or any of the prompts? I'd appreciate your help.