Patch 4.1.12 - Single assistant message with multiple tool steps should be reverted #4753
Comments
The underlying issue was a bug where message annotations were split between different assistant response messages. This was fixed by combining assistant messages into one. Ideally, the UI message list should also be an alternating sequence of user and assistant messages to make rendering easy, and combining assistant messages achieves that. Many users reported that they needed hacks in place to combine assistant messages. With this in mind, I have introduced message parts. Check out this example of using message parts with tool invocations: https://github.com/vercel/ai/blob/main/examples/next-openai/app/use-chat-tools/page.tsx I hope message parts are the way forward here. Please let me know if that addresses the limitations you were facing, and if there are missing features in the new approach.
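The linked example renders `message.parts` directly in the UI. As a rough sketch of the idea (the part shapes below are my assumption based on that example, not the SDK's exact types), iterating the parts array in order lets the UI interleave text and tool calls within a single assistant message:

```typescript
// Assumed part shapes (illustrative, not the exact SDK types):
type TextPart = { type: 'text'; text: string };
type ToolPart = {
  type: 'tool-invocation';
  toolInvocation: { toolName: string; state: string };
};
type Part = TextPart | ToolPart;

// Turn a parts array into render descriptors, preserving order.
// In a real React component you would switch on `part.type` and
// return the appropriate element instead of a string.
function describeParts(parts: Part[]): string[] {
  return parts.map(part =>
    part.type === 'text'
      ? `text: ${part.text}`
      : `tool: ${part.toolInvocation.toolName} (${part.toolInvocation.state})`,
  );
}
```

Because the parts keep their relative order, text before a tool call and text after it stay distinguishable even though they live in one assistant message.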
@mlshv would you mind opening a separate issue for that bug?
I also ran into this. The move to …
@lgrammel I'll try the parts implementation this weekend and see if it works
@wookiehangover what function would you need to convert old messages to the new format? Here is a helper that you can import from …
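The SDK helper mentioned above is not shown in this thread, so here is a hypothetical stand-alone converter (names and shapes are illustrative, not the SDK's) that merges consecutive assistant messages from the old format into a single message with a `parts` array:

```typescript
type Part =
  | { type: 'text'; text: string }
  | { type: 'tool-invocation'; toolInvocation: unknown };

// Old format: consecutive assistant messages, tool calls in `toolInvocations`.
interface OldMessage {
  role: 'user' | 'assistant';
  content: string;
  toolInvocations?: unknown[];
}

// New format: one assistant message with an ordered `parts` array.
interface NewMessage {
  role: 'user' | 'assistant';
  content: string;
  parts: Part[];
}

function toPartsFormat(messages: OldMessage[]): NewMessage[] {
  const result: NewMessage[] = [];
  for (const msg of messages) {
    // Build the parts for this message: text first, then any tool calls.
    const parts: Part[] = [];
    if (msg.content) parts.push({ type: 'text', text: msg.content });
    for (const ti of msg.toolInvocations ?? []) {
      parts.push({ type: 'tool-invocation', toolInvocation: ti });
    }
    const last = result[result.length - 1];
    if (msg.role === 'assistant' && last?.role === 'assistant') {
      // Merge consecutive assistant messages into the previous one.
      last.parts.push(...parts);
      if (msg.content) {
        last.content += (last.content ? ' ' : '') + msg.content;
      }
    } else {
      result.push({ role: msg.role, content: msg.content, parts });
    }
  }
  return result;
}
```

This is only a sketch of the merging logic; the real helper may handle annotations, reasoning parts, and other fields as well.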
Description
When using tools, the stream used to be very clear about the order of assistant text responses, tool use, and any following text responses.
It used to look like this:
assistant text response: "I will search the web for you"
toolInvocation assistant response: Render tool in UI
assistant text response: "These are the results I found..."
However, with this patch, all the assistant messages and toolInvocations are in a single assistant message. So now the UI doesn't really know how to break up the assistant message from before or after a tool call. It behaves like this now:
assistant text response: "I will search the web for you. These are the results I found"
toolInvocation assistant response: Render tool in UI
I think the previous behavior was better. I'm not sure why the fix was needed, but I don't believe it was the ideal fix. This patch caused a bug in my production UI, and though I have a fix for it, I'm posting this to see if there's a way to get the previous behavior back.
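The workaround mentioned above is not shown in the issue, but one way to recover the old rendering (this is my guess at such a fix, using assumed part shapes, not code from the reporter) is to split a combined assistant message's parts back into segments at each tool invocation:

```typescript
type Part =
  | { type: 'text'; text: string }
  | { type: 'tool-invocation'; toolInvocation: { toolName: string } };

// Split an ordered parts array into segments so the UI can render
// text-before-tool, the tool, and text-after-tool separately,
// approximating the pre-patch message boundaries.
function splitAtToolCalls(parts: Part[]): Part[][] {
  const segments: Part[][] = [];
  for (const part of parts) {
    if (part.type === 'tool-invocation') {
      // Each tool invocation becomes its own segment.
      segments.push([part]);
    } else {
      const last = segments[segments.length - 1];
      if (last && last[0].type === 'text') {
        // Consecutive text parts stay in the same segment.
        last.push(part);
      } else {
        segments.push([part]);
      }
    }
  }
  return segments;
}
```

Each segment can then be rendered as if it were a separate assistant message, restoring the text / tool / text layout described above.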
Another reason I believe the behavior should be reverted is that the LLMs return the responses in separate messages anyway. For example, this is what is saved to my database after a tool call:
```json
[
  {
    "type": "text",
    "text": "I'll search for \"dogs\" using the Google Search tool and provide you with the first result."
  },
  {
    "type": "tool-call",
    "toolName": "tool-f30474a4-9290-4862-8c32-e31c64aee162",
    "toolCallId": "toolu_018qJ9H8FYk53unG5RfCbGoG",
    "args": { "q": "dogs" }
  },
  {
    "type": "tool-result",
    "toolName": "tool-f30474a4-9290-4862-8c32-e31c64aee162",
    "toolCallId": "toolu_018qJ9H8FYk53unG5RfCbGoG",
    "data": { "appId": "web", "actionId": "web_action_google-search" },
    "result": {
      "success": {
        "results": [
          {
            "title": "Dog - Wikipedia",
            "link": "https://en.wikipedia.org/wiki/Dog",
            "snippet": "Dogs have been bred for desired behaviors, sensory capabilities, and physical attributes. Dog breeds vary widely in shape, size, and color. They have the same ..."
          }
        ]
      }
    }
  }
]
```
So it doesn't make sense for the stream to try to put it all together into one message when the LLM's output (onFinish -> result.response.messages) is going to be in separate messages anyway.
This happens with all AI providers now.
I'm using "ai" version 4.1.16
Code example
No response
AI provider
No response
Additional context
No response