[Bug]: name Field Compatibility Issue in AssistantAgent and UserProxyAgent with fireworks.ai API #1129
Comments
Thanks for the issue. I think this should be a feature request, not a bug. Do you know if there is a reason that fireworks.ai is not compatible with the OpenAI API?
I apologize for choosing the incorrect method to raise an issue; it was an oversight on my part. I have posted the query along with the relevant logs on the fireworks.ai Discord channel and am currently unsure of the cause. However, upon reviewing the documentation for the OpenAI API, I did not find any mention of using the 'name' field within the 'message' field. Thank you for your response, and I am hopeful that a solution can be found quickly. It would be fantastic to have Autogen work seamlessly with APIs provided by most mainstream teams. Each model has its unique strengths, and the prospect of integrating an assistant agent that's already equipped with a knowledge base is particularly exciting – the potential synergy is thrilling to consider.
No worries. Could you see if this problem still exists in the latest version?
So the name field is pretty sparingly documented by OpenAI -- it's easy to miss. I am not surprised there are implementations that lack it. Having said that, we use this field for bookkeeping in GroupChat as well, and I would be nervous about removing it or adding yet another flag. I wonder if there is something we can do that is more general. Perhaps allowing one to provide a validator function in llm_config that could handle last-minute transformations right at the moment where the LLM is called. Or perhaps abstracting the Client, so that people can use custom implementations. TL;DR: I have a strong preference for adding general hooks rather than addressing incompatibilities field by field, or provider by provider.
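The validator-function idea could be sketched as follows. This is a minimal, self-contained sketch: the `message_transform` key in `llm_config` is hypothetical (not an existing autogen option), and the actual provider call is omitted.

```python
from typing import Callable, Dict, List, Optional

Message = Dict[str, str]

def drop_name_field(messages: List[Message]) -> List[Message]:
    """Example transform: strip the 'name' field that some providers reject."""
    return [{k: v for k, v in m.items() if k != "name"} for m in messages]

def call_llm(messages: List[Message], llm_config: Dict) -> List[Message]:
    """Apply an optional user-supplied transform right before the provider
    API would be called (the actual HTTP call is omitted in this sketch)."""
    transform: Optional[Callable] = llm_config.get("message_transform")
    if transform is not None:
        messages = transform(messages)
    return messages

msgs = [{"role": "user", "content": "hi", "name": "user_proxy"}]
cleaned = call_llm(msgs, {"message_transform": drop_name_field})
print(cleaned)  # [{'role': 'user', 'content': 'hi'}]
```

The appeal of this design is that it is provider-agnostic: any last-minute incompatibility can be handled by the user without autogen needing per-provider flags.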
I remember that at one point OpenAI supported the "name" field, perhaps back in the chatml days. @afourney could you point out one example in our code base that uses the "name" field?
Group chat uses it here: https://github.com/microsoft/autogen/blob/6bf33dfede736e929c0764f4c881956cfe74c2e1/autogen/agentchat/groupchat.py#L384C7-L390C1 It was also discussed recently here: #890 (comment)
I see. It is also in here: https://github.com/microsoft/autogen/blob/main/autogen/agentchat/conversable_agent.py#L479 |
Yes, I updated to version 0.2.2, but the issue persists:
There are many different ways to resolve this issue, but all of them require some coding.

Solution 1: Create a custom `_message_to_dict` function

Note: this is a hack, because it overrides a private member function (not good coding practice). The advantage is that you can use it without creating a brand-new class, which is useful if you are in a rush. See Solution 2 for a better design. We need to remove the "name" field in the messages before the request is sent.
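A sketch of the Solution 1 monkey-patch. A minimal stand-in class is defined here so the example runs without autogen installed; in practice you would patch `autogen.agentchat.ConversableAgent._message_to_dict` instead, and its real behavior may differ from this simplified stand-in.

```python
class ConversableAgent:  # stand-in for autogen.agentchat.ConversableAgent
    @staticmethod
    def _message_to_dict(message):
        # Simplified version of the original conversion.
        return dict(message) if isinstance(message, dict) else {"content": message}

def _message_to_dict_no_name(message):
    """Replacement that drops the 'name' field fireworks.ai rejects."""
    d = dict(message) if isinstance(message, dict) else {"content": message}
    d.pop("name", None)
    return d

# Override the private method (a hack, as noted above).
ConversableAgent._message_to_dict = staticmethod(_message_to_dict_no_name)
```

After the patch, every agent instance converts messages without the `name` key, so no new class is needed.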
Solution 2: Create another class `FireworkConversableAgent`

Create a class that inherits from `ConversableAgent` and then overrides the `_message_to_dict` method. You don't need to create a PR either, and you can use the customized agent class for the Firework API.

Solution 3: Use capability registration

There is an ongoing PR #1091; we can wait for that PR and then register a preprocessing function with the agent.
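The Solution 2 subclass could look roughly like this. Again, a stand-in base class replaces autogen's `ConversableAgent` so the sketch is self-contained and runnable without autogen.

```python
class ConversableAgent:  # stand-in for autogen.agentchat.ConversableAgent
    @staticmethod
    def _message_to_dict(message):
        # Simplified version of the original conversion.
        return dict(message) if isinstance(message, dict) else {"content": message}

class FireworkConversableAgent(ConversableAgent):
    """Agent subclass whose messages never carry the 'name' field."""
    @staticmethod
    def _message_to_dict(message):
        # Reuse the parent conversion, then drop the rejected field.
        d = ConversableAgent._message_to_dict(message)
        d.pop("name", None)
        return d
```

Compared to Solution 1, this keeps the patch local: only agents instantiated from `FireworkConversableAgent` are affected, which is the safer design.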
My feeling is that we may not be able to prioritize compatibility with every LLM provider. The current approach of using vllm, and perhaps as a next step the litellm proxy, to achieve OpenAI compatibility should be the goal.
Agreed. But even the discussion about gpt-v is important here. There may be some fields that some agents just don't care about (e.g., image data) or can't handle, and perhaps we should handle that gracefully.
Yes. @rickyloynd-microsoft would the hook methods in #1091 be applicable? |
Unless I'm missing something, it could be done by modifying
where

Users could then make any agent compatible with fireworks.ai with one line of code, which would attach the capability's hook method to the agent's hookable method:
If this works, then future message-format-conversion capabilities could be added to agents without further modifications to |
Describe the bug
Description

When interfacing with the fireworks.ai API using the `AssistantAgent` and `UserProxyAgent` classes, I've encountered a compatibility issue due to the mandatory `name` field within these classes. The fireworks.ai API rejects the request with an `InvalidRequestError` because the `name` field is not expected.

Steps to reproduce
Use `AssistantAgent` or `UserProxyAgent` with the `name` field included in the request payload. The fireworks.ai API responds with an `InvalidRequestError`, indicating the `name` field is extraneous.

Expected Behavior
There should be compatibility with the fireworks.ai API, which means either the `name` field should be accepted, or there should be a way to exclude it from requests depending on the API requirements.

Actual Behavior
The request is rejected by the fireworks.ai API, and the following error message is received:
`openai.error.InvalidRequestError: [{'loc': ('body', 'messages', 1, 'name'), 'msg': 'extra fields not permitted', 'type': 'value_error.extra'}]`
Possible Solutions

- Include or exclude the `name` field in the request payload based on the API being used.
- Update the `AssistantAgent` and `UserProxyAgent` classes to make the `name` field optional, or provide a method to exclude it for certain APIs.
While a temporary fix could involve modifying the request information within the classes, this could lead to additional issues upon future updates. Thus, a permanent solution from the autogen team would be preferable to ensure consistent support for various models.
Additional Context

The models used were `mixtral-8x7b-instruct` and `llama-v2-7b-chat`.

Request
I would appreciate guidance on how this issue might be resolved at an official level. It's not clear whether this is an issue with the settings on fireworks.ai's side or if the model itself does not support the `name` field. I hope that autogen can officially support multiple models to ensure broad compatibility.

Screenshots and logs
Traceback (most recent call last):
File "/home/zyl/chatdev/autogen/sqlFormat/demo.py", line 57, in
user_proxy.initiate_chat(
File "/home/zyl/miniconda3/envs/pyautogen/lib/python3.10/site-packages/autogen/agentchat/conversable_agent.py", line 531, in initiate_chat
self.send(self.generate_init_message(**context), recipient, silent=silent)
File "/home/zyl/miniconda3/envs/pyautogen/lib/python3.10/site-packages/autogen/agentchat/conversable_agent.py", line 334, in send
recipient.receive(message, self, request_reply, silent)
File "/home/zyl/miniconda3/envs/pyautogen/lib/python3.10/site-packages/autogen/agentchat/conversable_agent.py", line 462, in receive
reply = self.generate_reply(messages=self.chat_messages[sender], sender=sender)
File "/home/zyl/miniconda3/envs/pyautogen/lib/python3.10/site-packages/autogen/agentchat/conversable_agent.py", line 781, in generate_reply
final, reply = reply_func(self, messages=messages, sender=sender, config=reply_func_tuple["config"])
File "/home/zyl/miniconda3/envs/pyautogen/lib/python3.10/site-packages/autogen/agentchat/groupchat.py", line 162, in run_chat
speaker = groupchat.select_speaker(speaker, self)
File "/home/zyl/miniconda3/envs/pyautogen/lib/python3.10/site-packages/autogen/agentchat/groupchat.py", line 91, in select_speaker
final, name = selector.generate_oai_reply(
File "/home/zyl/miniconda3/envs/pyautogen/lib/python3.10/site-packages/autogen/agentchat/conversable_agent.py", line 606, in generate_oai_reply
response = oai.ChatCompletion.create(
File "/home/zyl/miniconda3/envs/pyautogen/lib/python3.10/site-packages/autogen/oai/completion.py", line 803, in create
response = cls.create(
File "/home/zyl/miniconda3/envs/pyautogen/lib/python3.10/site-packages/autogen/oai/completion.py", line 834, in create
return cls._get_response(params, raise_on_ratelimit_or_timeout=raise_on_ratelimit_or_timeout)
File "/home/zyl/miniconda3/envs/pyautogen/lib/python3.10/site-packages/autogen/oai/completion.py", line 224, in _get_response
response = openai_completion.create(request_timeout=request_timeout, **config)
File "/home/zyl/miniconda3/envs/pyautogen/lib/python3.10/site-packages/openai/api_resources/chat_completion.py", line 25, in create
return super().create(*args, **kwargs)
File "/home/zyl/miniconda3/envs/pyautogen/lib/python3.10/site-packages/openai/api_resources/abstract/engine_api_resource.py", line 157, in create
response, _, api_key = requestor.request(
File "/home/zyl/miniconda3/envs/pyautogen/lib/python3.10/site-packages/openai/api_requestor.py", line 299, in request
resp, got_stream = self._interpret_response(result, stream)
File "/home/zyl/miniconda3/envs/pyautogen/lib/python3.10/site-packages/openai/api_requestor.py", line 713, in _interpret_response
self._interpret_response_line(
File "/home/zyl/miniconda3/envs/pyautogen/lib/python3.10/site-packages/openai/api_requestor.py", line 779, in _interpret_response_line
raise self.handle_error_response(
openai.error.InvalidRequestError: [{'loc': ('body', 'messages', 1, 'name'), 'msg': 'extra fields not permitted', 'type': 'value_error.extra'}]
Additional Information
AutoGen Version: 0.1.14
Operating System: Linux version 5.15.133.1-microsoft-standard-WSL2 (root@1c602f52c2e4) (gcc (GCC) 11.2.0, GNU ld (GNU Binutils) 2.37) #1 SMP Thu Oct 5 21:02:42 UTC 2023 Ubuntu 22.04.3 LTS
Python Version: Python 3.10.13
Related Issues: not found
Looking forward to your response and a possible solution. Thank you!