
Allow to interrupt/stop streaming #965

Closed
krassowski opened this issue Aug 26, 2024 · 0 comments · Fixed by #1022
Assignees
Labels
enhancement New feature or request

Comments

@krassowski
Member

Problem

Streaming in chat is great! It allows me to start reading the response before it is fully generated.

However, when I see that the model is going in the wrong direction, I would like to stop it and ask again.

Proposed Solution

Add a "stop generating" button.

Even though not every provider will support cancelling generation, merely stopping the UI from being flooded with new text would already improve the UX.

Additional context

Most other chat interfaces have a stop button. For example ChatGPT turns its send button into a stop button while generating.

[screenshot: ChatGPT's send button replaced by a stop button while generating]

As does GitHub Copilot:

[screenshot: GitHub Copilot's stop button shown during generation]

While the button might technically belong to an individual message (since multiple messages could be streamed at the same time), that would likely be bad UX: streaming would scroll the window, moving the button, and the user would need to chase it.

This could be implemented by extending the `SendButton` implementation:

```ts
export type SendButtonProps = {
  onSend: (selection?: AiService.Selection) => unknown;
  sendWithShiftEnter: boolean;
  currSlashCommand: string | null;
  inputExists: boolean;
  activeCellHasError: boolean;
};

export function SendButton(props: SendButtonProps): JSX.Element {
```

To short-circuit on the client side, we can add a breaker condition in:

```ts
case 'agent-stream-chunk': {
  const target = newMessage.id;
  const streamMessage = this._messages.find<AiService.AgentStreamMessage>(
    (m): m is AiService.AgentStreamMessage =>
      m.type === 'agent-stream' && m.id === target
  );
  if (!streamMessage) {
    console.error(
      `Received stream chunk with ID ${target}, but no agent-stream message with that ID exists. ` +
        'Ignoring this stream chunk.'
    );
    break;
  }
  streamMessage.body += newMessage.content;
  if (newMessage.stream_complete) {
    streamMessage.complete = true;
  }
  this._messages = [...this._messages];
  break;
}
```

The stopped state could be a property set on the message. We could then send a request to the server to stop generating messages, and add a conditional breaker in:

```python
async for chunk in self.llm_chain.astream(
    {"input": message.body},
    config={"configurable": {"session_id": "static_session"}},
):
    if not received_first_chunk:
        # when receiving the first chunk, close the pending message and
        # start the stream.
        self.close_pending(pending_message)
        stream_id = self._start_stream(human_msg=message)
        received_first_chunk = True
    if isinstance(chunk, AIMessageChunk):
        self._send_stream_chunk(stream_id, chunk.content)
    elif isinstance(chunk, str):
        self._send_stream_chunk(stream_id, chunk)
    else:
        self.log.error(f"Unrecognized type of chunk yielded: {type(chunk)}")
        break
# complete stream after all chunks have been streamed
self._send_stream_chunk(stream_id, "", complete=True)
```
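A minimal sketch of such a server-side breaker, assuming a hypothetical per-handler `asyncio.Event` that a (hypothetical) "stop streaming" request from the client would set; the class and method names here are illustrative, not the actual Jupyter AI API:

```python
import asyncio


class StreamingChatHandler:
    """Illustrative handler showing how a stop flag can break the stream loop."""

    def __init__(self):
        # hypothetical flag; a real implementation would likely need one
        # event per stream_id, since multiple streams can run concurrently
        self._stop_requested = asyncio.Event()

    def request_stop(self):
        # called when the client sends a (hypothetical) stop request
        self._stop_requested.set()

    async def stream(self, chunks, send_chunk):
        self._stop_requested.clear()
        async for chunk in chunks:
            if self._stop_requested.is_set():
                # breaker condition: stop forwarding chunks to the client
                break
            send_chunk(chunk)
        # always close the stream so the UI stops showing it as in-progress
        send_chunk("", complete=True)
```

Note that the stream is always marked complete, even when interrupted, so the client-side UI can settle regardless of whether the provider actually stopped generating.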

I am not sure whether there is a well-supported way to propagate the interruption to the actual models in langchain. There are some related issues in the langchain repository, but it does not appear to be documented or officially supported.

@krassowski krassowski added the enhancement New feature or request label Aug 26, 2024
@krassowski krassowski self-assigned this Sep 23, 2024