How to do both text output and FunctionCall in the same request? #232
Hi @jaromiru, the OpenAI API returns either a message or tool call(s), while the Anthropic API can return "thoughts" before making a tool call (see related issue #220). Because of this, a return type that allows "thinking" before making a function call would only work with some LLM providers, but it might still be useful to have. One approach you could try is having separate steps for the thinking and the function call. Something like

```python
@prompt(...)
def think_about_doing_something(param: str) -> str: ...


@prompt("Based on thoughts {thoughts}. Do thing")
def do_something(thoughts: str, param: str) -> str: ...


thoughts = think_about_doing_something(param)
function_call = do_something(thoughts, param)
```

Another option, which would reduce this to a single query, would be to set the return type to a pydantic model with an "explanation" field plus fields for the function parameters. You would need to do this for all available functions and union them in the return type. Similar to https://magentic.dev/structured-outputs/#chain-of-thought-prompting

```python
class ExplainedFunctionParams(BaseModel):
    explanation: str = Field(description="reasoning for the following choice of parameters")
    param: str


@prompt(...)  # No `functions` provided here
def do_something(param: str) -> ExplainedFunctionParams: ...


func_params = do_something(param)
my_function(func_params.param)
```

Please let me know if either of these approaches would work for you. Thanks for using magentic.
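To cover several available functions in one query, each function would get its own params model and the return annotation would union them. Below is a minimal pydantic-only sketch of that dispatch pattern; the model and field names (`SearchParams`, `WeatherParams`, `dispatch`) are hypothetical, and in magentic the union would appear as the return annotation of the `@prompt`-decorated function rather than a plain function argument.

```python
from typing import Union

from pydantic import BaseModel, Field


class SearchParams(BaseModel):
    explanation: str = Field(description="reasoning for this choice of parameters")
    query: str


class WeatherParams(BaseModel):
    explanation: str = Field(description="reasoning for this choice of parameters")
    location: str


# Once the LLM has returned one of the union members, dispatch on its type
# to call the corresponding real function.
def dispatch(result: Union[SearchParams, WeatherParams]) -> str:
    if isinstance(result, SearchParams):
        return f"search: {result.query}"
    return f"weather: {result.location}"


print(dispatch(WeatherParams(explanation="user asked about Boston", location="Boston")))
```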
Hi @jackmpcollins, thanks for the answer and the proposed workarounds. Obviously, both of them have their drawbacks. I currently have no solution of my own, so we can close the ticket or keep it open for follow-up discussion.
@jaromiru Let's leave this open because it would be great for magentic to support this for the Anthropic API. Please let me know if/how you solve this for OpenAI using another library or their API directly because that could inform how to solve it generally for magentic. |
@jackmpcollins Maybe it's finally a good time to revive this given the new release of Sonnet 3.5! It doesn't look like Anthropic will be changing this pattern anytime soon. |
GPT-4o also supports thinking before tool use. I'll try to get this added soon. Probably will be a new return type that is an iterable of

```python
from openai import Client

client = Client()
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Say hello, then call the tool for Boston"}],
    stream=True,
    stream_options={"include_usage": True},
    tools=[
        {
            "type": "function",
            "function": {
                "name": "get_current_weather",
                "description": "Get the current weather in a given location",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "location": {
                            "type": "string",
                            "description": "The city and state, e.g. San Francisco, CA",
                        },
                        "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
                    },
                    "required": ["location"],
                },
            },
        },
    ],
)

for chunk in response:
    print(chunk)
```
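Such a stream interleaves text deltas with tool-call deltas, so a consumer has to accumulate both. The sketch below separates them; the chunk shapes are mocked here with dataclasses (the real objects come from the OpenAI SDK, where the deltas nest under `chunk.choices[0].delta`), so the names `Delta`, `ToolCallDelta`, and `collect` are illustrative, not part of any library.

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple


# Simplified stand-ins for the streamed chunk deltas.
@dataclass
class ToolCallDelta:
    name: Optional[str] = None  # only set on the first fragment of a tool call
    arguments: str = ""         # JSON arguments arrive in fragments


@dataclass
class Delta:
    content: Optional[str] = None
    tool_call: Optional[ToolCallDelta] = None


def collect(deltas: List[Delta]) -> Tuple[str, Optional[str], str]:
    """Accumulate interleaved text and tool-call fragments from a stream."""
    text_parts: List[str] = []
    tool_name: Optional[str] = None
    arg_parts: List[str] = []
    for d in deltas:
        if d.content:
            text_parts.append(d.content)
        if d.tool_call:
            if d.tool_call.name:
                tool_name = d.tool_call.name
            arg_parts.append(d.tool_call.arguments)
    return "".join(text_parts), tool_name, "".join(arg_parts)


# The model first streams text ("Hello"), then the tool-call fragments.
text, name, args = collect([
    Delta(content="Hel"),
    Delta(content="lo"),
    Delta(tool_call=ToolCallDelta(name="get_current_weather", arguments='{"loc')),
    Delta(tool_call=ToolCallDelta(arguments='ation": "Boston"}')),
])
print(text, name, args)
```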
Hello, I want the model to reason before making a function call in one request, which is possible in other libraries. However, I haven't found a way to do this in magentic. The usual signature of a method is:

which does not allow any outputs other than the function call.

I'd be happy to do something like:

but this results in a pydantic error. The same applies to `ParallelFunctionCall`.