[Feature]: convenience Enum for tool_choice #6091

Labels: enhancement (New feature or request)

Comments
Hey @jamesbraza, can you share a code example of what you expect?
Sure, I recently wrote something like this:

```python
from typing import Any

from litellm import acompletion

# `Tool` and `MalformedMessageError` are types from our own codebase
TOOL_CHOICE_REQUIRED = "required"

tool_choice: Tool | str | None = TOOL_CHOICE_REQUIRED
completion_kwargs: dict[str, Any] = {}
# SEE: https://platform.openai.com/docs/guides/function-calling/configuring-function-calling-behavior-using-the-tool_choice-parameter
expected_finish_reason: set[str] = {"tool_calls"}
if isinstance(tool_choice, Tool):
    completion_kwargs["tool_choice"] = {
        "type": "function",
        "function": {"name": tool_choice.info.name},
    }
    expected_finish_reason = {"stop"}  # TODO: should this be .add("stop") too?
elif tool_choice is not None:
    completion_kwargs["tool_choice"] = tool_choice
    if tool_choice == TOOL_CHOICE_REQUIRED:
        # Even though docs say it should be just 'stop',
        # in practice 'tool_calls' shows up too
        expected_finish_reason.add("stop")

model_response = await acompletion(
    "gpt-4o",
    messages=...,
    tools=...,
    **completion_kwargs,
)
if (num_choices := len(model_response.choices)) != 1:
    raise MalformedMessageError(
        f"Expected one choice in LiteLLM model response, got {num_choices}"
        f" choices, full response was {model_response}."
    )
choice = model_response.choices[0]
if choice.finish_reason not in expected_finish_reason:
    raise MalformedMessageError(
        f"Expected a finish reason in {expected_finish_reason} in LiteLLM"
        f" model response, got finish reason {choice.finish_reason!r}, full"
        f" response was {model_response} and tool choice was {tool_choice}."
    )
# Process choice ... Note how it has to:
```
I would like to upstream at least items 1 and 2 into LiteLLM, mainly because LiteLLM handles almost all of our LLM logic besides this at the moment.
The Feature

From https://platform.openai.com/docs/api-reference/chat/create#chat-create-tool_choice, `tool_choice` can be:

- `str` values
- A `dict` specifically naming a tool: `{"type": "function", "function": {"name": "my_tool_name"}}`

From https://platform.openai.com/docs/guides/function-calling/configuring-function-calling-behavior-using-the-tool_choice-parameter, we see the response's `finish_reason` is a function of `tool_choice`.

It would be nice if LiteLLM provided an `Enum` that could handle the logic:

1. A `tool_choice` `Enum`
2. A mapping (e.g. on the `Enum`) that converts `tool_choice` to the expected `finish_reason`, for response validation

Alternately, perhaps LiteLLM can add an opt-in flag to `acompletion` that validates the `finish_reason` matches the input `tool_choice` and `tools`.
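A minimal sketch of what such a convenience `Enum` might look like (the class name, members, and property are hypothetical, not existing LiteLLM API), assuming the string values from the OpenAI docs:

```python
from enum import Enum


class ToolChoice(str, Enum):
    """Hypothetical convenience Enum mirroring OpenAI's tool_choice strings."""

    NONE = "none"
    AUTO = "auto"
    REQUIRED = "required"

    @property
    def expected_finish_reasons(self) -> set[str]:
        """finish_reason values a client could validate the response against."""
        if self is ToolChoice.REQUIRED:
            # Docs suggest just 'stop', but 'tool_calls' is observed in practice
            return {"tool_calls", "stop"}
        return {"tool_calls"}
```

Since it subclasses `str`, such an `Enum` could be passed straight through as the `tool_choice` kwarg while also driving response validation.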
Motivation, pitch

Enabling clients to not have to care about calculating the `finish_reason`, but have a validation confirming it's correct.

Twitter / LinkedIn details

No response