
Patch 4.1.12 - Single assistant message with multiple tool steps should be reverted #4753

Open
lecca-io opened this issue Feb 6, 2025 · 6 comments

lecca-io commented Feb 6, 2025

Description

When using tools, the stream used to be very clear about the order of assistant text responses, tool use, and any following text responses.

It used to look like this:

assistant text response: "I will search the web for you"
toolInvocation assistant response: Render tool in UI
assistant text response: "These are the results I found..."

However, with this patch, all of the assistant text and toolInvocations are combined into a single assistant message, so the UI no longer knows how to separate the text that came before a tool call from the text that came after. It behaves like this now:

assistant text response: "I will search the web for you. These are the results I found"
toolInvocation assistant response: Render tool in UI

I think the previous behavior was better. I'm not sure why the fix was needed, but I don't believe this was the ideal fix. The patch caused a bug in my production UI, and although I have a workaround, I'm posting this to see if there's a way to get the previous behavior back.

Another reason I believe the behavior should be reverted is that the LLMs return the responses in separate messages anyway. For example, this is what is saved to my database after a tool call:

[ { "text": "I'll search for \"dogs\" using the Google Search tool and provide you with the first result.", "type": "text" }, { "args": { "q": "dogs" }, "type": "tool-call", "toolName": "tool-f30474a4-9290-4862-8c32-e31c64aee162", "toolCallId": "toolu_018qJ9H8FYk53unG5RfCbGoG" }, { "data": { "appId": "web", "actionId": "web_action_google-search" }, "type": "tool-result", "result": { "success": { "results": [ { "link": "https://en.wikipedia.org/wiki/Dog", "title": "Dog - Wikipedia", "snippet": "Dogs have been bred for desired behaviors, sensory capabilities, and physical attributes. Dog breeds vary widely in shape, size, and color. They have the same ..." } ] } }, "toolName": "tool-f30474a4-9290-4862-8c32-e31c64aee162", "toolCallId": "toolu_018qJ9H8FYk53unG5RfCbGoG" } ]
So it doesn't make sense for the stream to try to put it all together into one message when the LLM's output (onFinish -> result.response.messages) is going to be in separate messages anyway.

This happens with all the AI providers now.

I'm using "ai" version 4.1.16

Code example

No response

AI provider

No response

Additional context

No response

lecca-io added the bug label on Feb 6, 2025
lgrammel (Collaborator) commented Feb 7, 2025

The underlying issue was a bug where message annotations were split across different assistant response messages. That bug was fixed by combining assistant messages into one. Ideally, UI messages should also be a simple user-assistant-user-assistant sequence to make rendering easy, and combining assistant messages achieves that. Many users reported that they had to put hacks in place to combine assistant messages themselves.

With this in mind, I have introduced parts on UI messages: #4670

Check out this example of using message parts with tool invocations: https://github.com/vercel/ai/blob/main/examples/next-openai/app/use-chat-tools/page.tsx

I hope message parts are the way forward here. Please let me know if that addresses the limitations that you were facing, and if there are missing features in the new approach.
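
For reference, here is a minimal sketch of rendering with parts, adapted from the example linked above; treat the exact part shapes as illustrative of the #4670 API rather than canonical:

```tsx
'use client';

import { useChat } from 'ai/react';

export default function Chat() {
  const { messages } = useChat();

  return (
    <div>
      {messages.map(message => (
        <div key={message.id}>
          {/* Each message carries an ordered `parts` array, so text
              before and after a tool invocation stays in sequence. */}
          {message.parts?.map((part, index) => {
            switch (part.type) {
              case 'text':
                return <span key={index}>{part.text}</span>;
              case 'tool-invocation':
                // toolInvocation.state is 'partial-call' | 'call' | 'result'
                return (
                  <pre key={index}>
                    {JSON.stringify(part.toolInvocation, null, 2)}
                  </pre>
                );
              default:
                return null;
            }
          })}
        </div>
      ))}
    </div>
  );
}
```

The key difference from rendering message.content plus message.toolInvocations separately is that ordering is preserved per part, which is exactly the before/after-tool-call split this issue is about.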

lgrammel self-assigned this on Feb 7, 2025
lgrammel added the ai/ui label on Feb 7, 2025
mlshv commented Feb 7, 2025

I found another potential problem that might be related. For some reason, the message id changes while streaming, causing the appear animation to play not only on the newly appended tool calls but on the entire message. You can see in my logs below that while the 5th message is streaming, its id changes.

[Screenshot: logs showing the message id changing while the message streams]

lgrammel (Collaborator) commented Feb 7, 2025

@mlshv would you mind opening a separate issue for that bug?

wookiehangover (Contributor) commented

I ran into this too. The move to message.parts is absolutely backwards-incompatible and should have been a major version bump if this is truly the intended behavior. I have no way to reconcile the order of tool calls and text parts in messages created with earlier versions of the SDK.

lecca-io (Author) commented Feb 7, 2025

@lgrammel I'll try the parts implementation this weekend and see if it works.

lgrammel (Collaborator) commented Feb 8, 2025

@wookiehangover What function would you need to convert old messages to the new format? Here is a helper that you can import from @ai-sdk/ui-utils: https://github.com/vercel/ai/blob/main/packages/ui-utils/src/get-message-parts.ts
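
For what it's worth, a rough migration sketch using that helper (assuming, per the linked source, that getMessageParts accepts a legacy message and derives a parts array from its content and toolInvocations):

```ts
import { getMessageParts } from '@ai-sdk/ui-utils';
import type { Message } from 'ai';

// Backfill `parts` on messages persisted before the parts change.
// Note: the relative order of text vs. tool invocations inside an
// old message may not be fully recoverable, which is the concern
// @wookiehangover raised above.
function migrateMessage(message: Message): Message {
  return {
    ...message,
    parts: message.parts ?? getMessageParts(message),
  };
}
```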
