[Issue] Function call not working with Non-OpenAI models + LiteLLM proxy. #1150
Does your local LLM support function calling?
Yes, it supports function calling.
Which model specifically are you using? Do you have examples of it successfully making function calls without AutoGen?
I believe this is the same issue we are trying to solve in #1206. @ragesh2000, can you please check if it is still failing for you with the latest version from git? You can install it with pip install git+https://github.com/microsoft/autogen.git@main
Now I am getting an error as soon as I start running.
@ragesh2000 can you post your code? The currency function notebook you linked works for me on the main branch. The error message looks like a misconfiguration of llm_config.
Sure
Also, I am using a llama2 model served via LiteLLM.
I see. It looks like you may have to specify a model in your llm_config.
Can I set it to the model I am using?
You can try it. We are currently relying on the OpenAI client library, but we are working toward a customizable client (#831).
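For reference, a minimal sketch of a config_list that points AutoGen's OpenAI client at a LiteLLM proxy; the endpoint, port, api_key placeholder, and model name below are assumptions for illustration, not values from this thread:

    import autogen

    config_list = [
        {
            "model": "llama2",                    # whatever model name your proxy serves
            "base_url": "http://localhost:8000",  # assumed LiteLLM proxy address
            "api_key": "NULL",                    # placeholder; the proxy ignores it
        }
    ]
    llm_config = {"config_list": config_list, "timeout": 120}

    assistant = autogen.AssistantAgent(name="assistant", llm_config=llm_config)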
It wasn't working.
Did you get a new error message?
I am getting the following error message when I set it to llama2:
[screenshot of the error message]
@ragesh2000 a fix for this was merged yesterday (#1227). Can you please install the latest version from GitHub and try again:
pip install git+https://github.com/microsoft/autogen.git@main
Actually, this error message was coming from the latest version (0.2.7).
Just now I confirmed, by uninstalling and reinstalling from git, that the error is the same. @davorrunje
I think the issue is caused by the model itself not supporting function calling. Have you tried enabling the add-function-call-to-prompt feature offered by LiteLLM? https://litellm.vercel.app/docs/completion/function_call#function-calling-for-non-openai-llms
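A rough sketch of the feature referenced above, as I understand it from the linked LiteLLM docs: the add_function_to_prompt flag asks LiteLLM to inject the function schema into the prompt when the backing model has no native function calling. Treat the flag usage and the model identifier as assumptions to verify against your LiteLLM version (the proxy exposes the same setting through its config file):

    import litellm

    litellm.add_function_to_prompt = True  # fall back to describing functions in the prompt

    response = litellm.completion(
        model="ollama/llama2",  # assumed model identifier for a locally served llama2
        messages=[{"role": "user", "content": "How much is 123.45 USD in EUR?"}],
        functions=[{
            "name": "currency_calculator",
            "description": "Convert an amount between two currencies",
            "parameters": {"type": "object", "properties": {}},
        }],
    )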
Yes, I have enabled it.
Forgot to mention: in a recent release we added backward compatibility for function calling with older OpenAI API versions. I am not actively following LiteLLM's API specs. Do they support tool calls for non-OpenAI models? You can try adding

from typing import Annotated

@agent2.register_for_llm(description="...", api_style="function")
def my_function(a: Annotated[str, "description of a parameter"] = "a", b: int = 2, c: float = 3.14) -> str:
    return a + str(b * c)
Adding api_style="function" helped me get rid of that error message. But now the problem is that my assistant agent is aware of the function to use, while the user proxy is not. Is that an issue with the model I am using? @ekzhu
@ragesh2000 Did you register the function for execution with @user_proxy.register_for_execution()?

@user_proxy.register_for_execution()
@agent2.register_for_llm(description="...", api_style="function")
def my_function(a: Annotated[str, "description of a parameter"] = "a", b: int = 2, c: float = 3.14) -> str:
    return a + str(b * c)
Yes, I did.
Oh, the screenshot above indicates that the model tried to execute Python code, not to call the function. Could you please share the source code of your example?
This is the code, @davorrunje
I assumed you added
Yes, I added
Did you use
No
Is there a way to expose your LiteLLM endpoint to me so I can debug it? You can DM me on Discord with the info, as you obviously don't want to make it public.
Sorry, I can't reveal the endpoint. Is there any other way that you can debug?
Can you set up an endpoint just for debugging and kill it after we are done?
I think I know what is going on. The UserProxyAgent is registered with the function, but this is only handled via the generate_tool_call_reply method, which only comes into effect when the incoming message has a tool_calls/function_call field.

To make this work: first, we need the model to generate a structured field that contains the function call and its parameters, say something like {"function_call": {"name": "calculator", "arguments": [...]}}. The field should be serialized and put inside the "content" part of the message; this might be achieved via Guidance. Second, we need to register a new reply function on the UserProxyAgent that parses the structured field, converts the input parameters into Python objects, calls the registered function, and returns the result. Because the model is not GPT-4, you may also need to add some context to the result, such as "The function ... returns ...". Example of AutoGen + Guidance: https://github.com/microsoft/autogen/blob/main/notebook/agentchat_guidance.ipynb
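A hedged sketch of the second step described above, assuming the first step already makes the model emit a JSON-serialized function_call inside the plain message content. The FUNCTIONS registry, the calculator stand-in, and the assumed argument shape are illustrative, not AutoGen APIs; only register_reply and the agent classes come from AutoGen itself:

    import json
    import autogen

    def calculator(base_amount: float, base_currency: str, quote_currency: str) -> str:
        # Stand-in for the currency function from the notebook (made-up rate).
        return f"{base_amount * 1.1:.2f} {quote_currency}"

    FUNCTIONS = {"calculator": calculator}  # local registry: function name -> callable

    def parse_and_call_function(recipient, messages=None, sender=None, config=None):
        # Custom reply function: look for a JSON-serialized {"function_call": ...}
        # in the last message's content and execute the matching registered function.
        last = (messages[-1].get("content") or "") if messages else ""
        try:
            payload = json.loads(last)
            call = payload["function_call"]
            func = FUNCTIONS[call["name"]]
            args = call.get("arguments", {})  # assumed to be a JSON object of kwargs
        except (json.JSONDecodeError, KeyError, TypeError):
            return False, None  # not a function call; fall through to other reply functions
        result = func(**args)
        # Add context to the result for non-GPT-4 models, as suggested above.
        return True, f"The function {call['name']} returned: {result}"

    user_proxy = autogen.UserProxyAgent(
        name="user_proxy", human_input_mode="NEVER", code_execution_config=False
    )
    user_proxy.register_reply(autogen.Agent, parse_and_call_function, position=1)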
I was following exactly the notebook for function calling
https://github.com/microsoft/autogen/blob/main/notebook/agentchat_function_call_currency_calculator.ipynb
but instead of the output shown there, I keep getting an error.
The only change I made is that instead of an OpenAI model I used an open-source model via LiteLLM.
Can anybody tell me why this is happening?