
[Issue] Function call not working with Non-OpenAI models + LiteLLM proxy. #1150

Closed

ragesh2000 opened this issue Jan 5, 2024 · 32 comments

Labels
models — Pertains to using alternate, non-GPT, models (e.g., local models, llama, etc.)
tool-usage — suggestion and execution of function/tool calls

Comments

@ragesh2000 commented Jan 5, 2024

I was following the function-calling notebook exactly:
https://github.com/microsoft/autogen/blob/main/notebook/agentchat_function_call_currency_calculator.ipynb
but instead of the output shown there, I keep getting an error:
Screenshot from 2024-01-05 17-18-08
The only change I made is that instead of an OpenAI model, I used an open-source model via LiteLLM.
Can anybody tell me why this is happening?

ragesh2000 added the bug label on Jan 5, 2024
@rickyloynd-microsoft (Contributor)

Does your local LLM support function calling?

@kevin666aa

@ragesh2000 (Author)

Yes, it supports function calling.

@qingyun-wu (Contributor)

> Yes, it supports function calling.

Which model specifically are you using? Do you have examples of it successfully making function calls without AutoGen?

davorrunje self-assigned this on Jan 10, 2024
@davorrunje (Collaborator) commented Jan 11, 2024

I believe this is the same issue we are trying to solve in #1206

@ragesh2000 can you please check whether it is still failing for you with the latest version from git? You can install it with:

pip install git+https://github.com/microsoft/autogen.git@main

@ragesh2000 (Author)

Now I am getting an error as soon as I start running:
Screenshot from 2024-01-11 16-03-58
@davorrunje

@ekzhu (Collaborator) commented Jan 11, 2024

@ragesh2000 can you post your code? The currency function notebook you linked works for me on the main branch. The error message looks like a misconfiguration of llm_config.

@ragesh2000 (Author) commented Jan 11, 2024

Sure

import autogen
import pandas as pd
import os
from typing import Literal  # needed for CurrencySymbol below
from typing_extensions import Annotated

config_list = [
    {
        'base_url': "http://0.0.0.0:8000",
        'api_key': "NULL"
    }
]

llm_config = {
    "config_list": config_list,
    "timeout": 120,
}
chatbot = autogen.AssistantAgent(
    name="chatbot",
    system_message="For currency exchange tasks, only use the functions you have been provided with. Reply TERMINATE when the task is done.",
    llm_config=llm_config,
)

# create a UserProxyAgent instance named "user_proxy"
user_proxy = autogen.UserProxyAgent(
    name="user_proxy",
    is_termination_msg=lambda x: x.get("content", "") and x.get("content", "").rstrip().endswith("TERMINATE"),
    human_input_mode="NEVER",
    max_consecutive_auto_reply=10,
)


CurrencySymbol = Literal["USD", "EUR"]


def exchange_rate(base_currency: CurrencySymbol, quote_currency: CurrencySymbol) -> float:
    if base_currency == quote_currency:
        return 1.0
    elif base_currency == "USD" and quote_currency == "EUR":
        return 1 / 1.1
    elif base_currency == "EUR" and quote_currency == "USD":
        return 1.1
    else:
        raise ValueError(f"Unknown currencies {base_currency}, {quote_currency}")


@user_proxy.register_for_execution()
@chatbot.register_for_llm(description="Currency exchange calculator.")
def currency_calculator(
    base_amount: Annotated[float, "Amount of currency in base_currency"],
    base_currency: Annotated[CurrencySymbol, "Base currency"] = "USD",
    quote_currency: Annotated[CurrencySymbol, "Quote currency"] = "EUR",
) -> str:
    quote_amount = exchange_rate(base_currency, quote_currency) * base_amount
    return f"{quote_amount} {quote_currency}"

assert user_proxy.function_map["currency_calculator"]._origin == currency_calculator

# start the conversation
user_proxy.initiate_chat(
    chatbot,
    message="How much is 123.45 USD in EUR?",
)

Also, I am using a llama2 model via LiteLLM.
@ekzhu

@ekzhu (Collaborator) commented Jan 11, 2024

I see. It looks like you may have to specify a model value in the config list entry.
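
For example, a sketch of such an entry ("llama2" here is a placeholder for whatever model name your LiteLLM proxy actually serves):

config_list = [
    {
        "model": "llama2",  # placeholder: the model name the proxy serves
        "base_url": "http://0.0.0.0:8000",
        "api_key": "NULL",
    }
]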

@ragesh2000 (Author) commented Jan 11, 2024

Can I set it to the model I am using, i.e. 'model': 'llama2'?

@ekzhu (Collaborator) commented Jan 12, 2024

You can try it. We currently rely on the openai client library, but we are working toward a customizable client (#831).

@ragesh2000 (Author)

It wasn't working.

@ekzhu (Collaborator) commented Jan 13, 2024

> It wasn't working.

Did you get a new error message?

@ragesh2000 (Author)

I am getting the following error message when I set it to llama2:


Traceback (most recent call last):
  File "/home/gpu/ai/llm/autogen/userproxy_test.py", line 64, in <module>
    user_proxy.initiate_chat(
  File "/home/gpu/miniconda3/envs/autogen2/lib/python3.10/site-packages/autogen/agentchat/conversable_agent.py", line 667, in initiate_chat
    self.send(self.generate_init_message(**context), recipient, silent=silent)
  File "/home/gpu/miniconda3/envs/autogen2/lib/python3.10/site-packages/autogen/agentchat/conversable_agent.py", line 420, in send
    recipient.receive(message, self, request_reply, silent)
  File "/home/gpu/miniconda3/envs/autogen2/lib/python3.10/site-packages/autogen/agentchat/conversable_agent.py", line 573, in receive
    reply = self.generate_reply(messages=self.chat_messages[sender], sender=sender)
  File "/home/gpu/miniconda3/envs/autogen2/lib/python3.10/site-packages/autogen/agentchat/conversable_agent.py", line 1239, in generate_reply
    final, reply = reply_func(self, messages=messages, sender=sender, config=reply_func_tuple["config"])
  File "/home/gpu/miniconda3/envs/autogen2/lib/python3.10/site-packages/autogen/agentchat/conversable_agent.py", line 754, in generate_oai_reply
    response = client.create(
  File "/home/gpu/miniconda3/envs/autogen2/lib/python3.10/site-packages/autogen/oai/client.py", line 278, in create
    response = self._completions_create(client, params)
  File "/home/gpu/miniconda3/envs/autogen2/lib/python3.10/site-packages/autogen/oai/client.py", line 543, in _completions_create
    response = completions.create(**params)
  File "/home/gpu/miniconda3/envs/autogen2/lib/python3.10/site-packages/openai/_utils/_utils.py", line 271, in wrapper
    return func(*args, **kwargs)
  File "/home/gpu/miniconda3/envs/autogen2/lib/python3.10/site-packages/openai/resources/chat/completions.py", line 643, in create
    return self._post(
  File "/home/gpu/miniconda3/envs/autogen2/lib/python3.10/site-packages/openai/_base_client.py", line 1112, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
  File "/home/gpu/miniconda3/envs/autogen2/lib/python3.10/site-packages/openai/_base_client.py", line 859, in request
    return self._request(
  File "/home/gpu/miniconda3/envs/autogen2/lib/python3.10/site-packages/openai/_base_client.py", line 934, in _request
    return self._retry_request(
  File "/home/gpu/miniconda3/envs/autogen2/lib/python3.10/site-packages/openai/_base_client.py", line 982, in _retry_request
    return self._request(
  File "/home/gpu/miniconda3/envs/autogen2/lib/python3.10/site-packages/openai/_base_client.py", line 934, in _request
    return self._retry_request(
  File "/home/gpu/miniconda3/envs/autogen2/lib/python3.10/site-packages/openai/_base_client.py", line 982, in _retry_request
    return self._request(
  File "/home/gpu/miniconda3/envs/autogen2/lib/python3.10/site-packages/openai/_base_client.py", line 949, in _request
    raise self._make_status_error_from_response(err.response) from None
openai.InternalServerError: Error code: 500 - {'detail': "ollama does not support parameters:
{'tools': [{'type': 'function', 'function': {'description': 'Currency exchange calculator.',
  'name': 'currency_calculator', 'parameters': {'type': 'object', 'properties': {
    'base_amount': {'type': 'number', 'description': 'Amount of currency in base_currency'},
    'base_currency': {'enum': ['USD', 'EUR'], 'type': 'string', 'default': 'USD', 'description': 'Base currency'},
    'quote_currency': {'enum': ['USD', 'EUR'], 'type': 'string', 'default': 'EUR', 'description': 'Quote currency'}},
  'required': ['base_amount']}}]}.
To drop these, set `litellm.drop_params=True`."}

[The embedded LiteLLM proxy traceback repeats the same message three times, as
litellm.utils.UnsupportedParamsError and then litellm.exceptions.APIConnectionError
(raised in litellm/main.py completion and acompletion), before the proxy returns the 500.]
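
(For reference, the workaround named in the error is a LiteLLM setting. A minimal sketch using the SDK-level flag from the error message; on the proxy side, the equivalent is a litellm_settings: drop_params: true entry in its config.yaml, assuming the key from LiteLLM's proxy docs.)

import litellm

# Drop request parameters (such as `tools`) that the target backend does not
# support, instead of raising UnsupportedParamsError as in the traceback above.
litellm.drop_params = True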

@davorrunje (Collaborator)

@ragesh2000 a fix for this was merged yesterday (#1227). Can you please install the latest version from GitHub and try again:

pip install git+https://github.com/microsoft/autogen.git@main

@ragesh2000 (Author)

Actually, this error message was coming from the latest version (0.2.7).

@ragesh2000 (Author) commented Jan 15, 2024

I just confirmed, by uninstalling and reinstalling from git, that the error is the same. @davorrunje

ekzhu changed the title from "Function call fail in chat" to "[Issue] Function call not working with Non-OpenAI models + LiteLLM proxy." on Jan 15, 2024
ekzhu added the enhancement and tool-usage labels on Jan 15, 2024
@ekzhu (Collaborator) commented Jan 15, 2024

I think the issue is that the model itself does not support function calling.

Have you tried enabling the "add function to prompt" feature offered by litellm? https://litellm.vercel.app/docs/completion/function_call#function-calling-for-non-openai-llms
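
(A minimal sketch of that feature, using the flags from the linked litellm docs; when enabled, litellm serializes the function schema into the prompt for backends without native function calling.)

import litellm

litellm.add_function_to_prompt = True  # inject function schemas into the prompt text
litellm.drop_params = True             # drop the unsupported `tools` param itself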

@ragesh2000 (Author) commented Jan 15, 2024

Yes, I have enabled the "add function to prompt" feature in litellm.
I also just tried another model that is fine-tuned for function calling, https://huggingface.co/Trelis/Llama-2-7b-chat-hf-function-calling-v2/blob/main/llama-2-7b-function-calling.Q3_K_M.gguf, and the result was the same. @ekzhu

@ekzhu (Collaborator) commented Jan 15, 2024

Forgot to mention: in a recent release we added backward compatibility for function calling with older OpenAI API versions. I am not actively following litellm's API specs. Do they support tool calls for non-OpenAI models? You can try adding api_style="function" to see if this helps.

@agent2.register_for_llm(description="...", api_style="function")
def my_function(a: Annotated[str, "description of a parameter"] = "a", b: int = 2, c: float = 3.14) -> str:
    return a + str(b * c)

@ragesh2000 (Author) commented Jan 15, 2024

Adding api_style="function" helped me get rid of that error message. But now the problem is that my assistant agent is aware of the function to use, but the user proxy is not. Is that an issue with the model I am using? @ekzhu

Screenshot from 2024-01-15 11-47-52

@davorrunje (Collaborator)

@ragesh2000 Did you register the function for execution with user_proxy (see https://microsoft.github.io/autogen/docs/Use-Cases/agent_chat#tool-calling)? Something like this:

@user_proxy.register_for_execution()
@agent2.register_for_llm(description="...", api_style="function")
def my_function(a: Annotated[str, "description of a parameter"] = "a", b: int = 2, c: float = 3.14) -> str:
    return a + str(b * c)

@ragesh2000 (Author)

Yes, I did.

@davorrunje (Collaborator)

Oh, the screenshot above indicates that the model tried to execute Python code, not to call the function. Could you please share the source code of your example?

@ragesh2000 (Author)

Sure

import autogen
import pandas as pd
import os
from typing import Literal  # needed for CurrencySymbol below
from typing_extensions import Annotated

config_list = [
    {
        'base_url': "http://0.0.0.0:8000",
        'api_key': "NULL"
    }
]

llm_config = {
    "config_list": config_list,
    "timeout": 120,
}
chatbot = autogen.AssistantAgent(
    name="chatbot",
    system_message="For currency exchange tasks, only use the functions you have been provided with. Reply TERMINATE when the task is done.",
    llm_config=llm_config,
)

# create a UserProxyAgent instance named "user_proxy"
user_proxy = autogen.UserProxyAgent(
    name="user_proxy",
    is_termination_msg=lambda x: x.get("content", "") and x.get("content", "").rstrip().endswith("TERMINATE"),
    human_input_mode="NEVER",
    max_consecutive_auto_reply=10,
)


CurrencySymbol = Literal["USD", "EUR"]


def exchange_rate(base_currency: CurrencySymbol, quote_currency: CurrencySymbol) -> float:
    if base_currency == quote_currency:
        return 1.0
    elif base_currency == "USD" and quote_currency == "EUR":
        return 1 / 1.1
    elif base_currency == "EUR" and quote_currency == "USD":
        return 1.1
    else:
        raise ValueError(f"Unknown currencies {base_currency}, {quote_currency}")


@user_proxy.register_for_execution()
@chatbot.register_for_llm(description="Currency exchange calculator.")
def currency_calculator(
    base_amount: Annotated[float, "Amount of currency in base_currency"],
    base_currency: Annotated[CurrencySymbol, "Base currency"] = "USD",
    quote_currency: Annotated[CurrencySymbol, "Quote currency"] = "EUR",
) -> str:
    quote_amount = exchange_rate(base_currency, quote_currency) * base_amount
    return f"{quote_amount} {quote_currency}"

assert user_proxy.function_map["currency_calculator"]._origin == currency_calculator

# start the conversation
user_proxy.initiate_chat(
    chatbot,
    message="How much is 123.45 USD in EUR?",
)

Also, I am using a llama2 model via LiteLLM. @ekzhu

This is the code. @davorrunje

@davorrunje (Collaborator) commented Jan 15, 2024

I assume you added api_style="function". Is there any other change? For some reason, code execution is enabled, and it is not enabled in the example you just shared.

@ragesh2000 (Author)

Yes, I added api_style="function". Sorry for not mentioning that in the above code.

@davorrunje (Collaborator)

Did you use code_execution_config in your example?

@ragesh2000 (Author)

No
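
(For context: code execution on a UserProxyAgent is controlled by the code_execution_config constructor argument. A minimal sketch of disabling it explicitly, so that only registered tools can run; the names mirror the example above.)

user_proxy = autogen.UserProxyAgent(
    name="user_proxy",
    human_input_mode="NEVER",
    max_consecutive_auto_reply=10,
    code_execution_config=False,  # turn off the code-execution reply entirely
)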

@davorrunje (Collaborator)

Is there a way to expose your LiteLLM endpoint to me so I can debug it? You can DM me on Discord with the info, as you obviously don't want to make it public.

@ragesh2000 (Author)

Sorry, I can't reveal the endpoint. Is there any other way you can debug?

@davorrunje (Collaborator)

Can you set up an endpoint just for debugging and kill it after we are done?

@ekzhu (Collaborator) commented Jan 15, 2024

I think I know what's going on. The UserProxyAgent is registered with the function, but this is only handled via the generate_tool_call_reply method, which only takes effect when the incoming message has a function_call field. Since the model does not support function calling, litellm adds the function signature to the prompt itself, and the model generates the function call inside the "content" field -- not in a "function_call" field. So the UserProxyAgent goes straight into code-execution mode and tries to execute the function call, but the function was never defined in the code-execution environment.

To make this work: first, we need the model to generate a structured field that contains the function call and its parameters, e.g. something like {"function_call": {"name": "calculator", "arguments": [...]}}. The field should be serialized and put inside the "content" part of the message; this might be achieved via Guidance. Second, we need to register a new reply function with the UserProxyAgent that parses the structured field, converts the input parameters into Python objects, calls the registered function, and returns the result. Because the model is not GPT-4, you may also need to add some context to the result, such as "The function ... returns ...".

Example on AutoGen + Guidance: https://github.com/microsoft/autogen/blob/main/notebook/agentchat_guidance.ipynb
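
(A minimal sketch of that second step, under the assumptions above: the {"function_call": {...}} JSON shape is the hypothetical structure described in the comment, not something AutoGen emits on its own, and the register_reply position may need tuning so this runs before the code-execution reply.)

import json

def execute_inline_function_call(recipient, messages=None, sender=None, config=None):
    # Look for the hypothetical serialized call in the last message's content.
    content = (messages[-1].get("content") or "").strip()
    try:
        call = json.loads(content)["function_call"]
    except (ValueError, KeyError, TypeError):
        return False, None  # not a structured call; fall through to other replies
    func = recipient.function_map.get(call.get("name"))
    if func is None:
        return True, f"Error: unknown function {call.get('name')}"
    args = call.get("arguments", {})
    if isinstance(args, str):  # tolerate arguments serialized as a JSON string
        args = json.loads(args)
    result = func(**args)
    # Spell the result out for weaker models, as suggested above.
    return True, f"The function {call['name']} returned: {result}"

user_proxy.register_reply([autogen.Agent, None], execute_inline_function_call, position=1)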

ekzhu added the models label on Jan 19, 2024
whiskyboy pushed a commit to whiskyboy/autogen that referenced this issue on Apr 17, 2024
whiskyboy pushed a commit to whiskyboy/autogen that referenced this issue on Apr 17, 2024:
… update doc and packaging; capture ipython output; find code blocks with llm when regex fails. (microsoft#1154)
gagb closed this as completed on Aug 27, 2024