ChatGoogleGenerativeAI: not call callbacks (_runManager?.handleLLMNewToken) #3699

Closed
Nanyi-Jiang opened this issue Dec 17, 2023 · 4 comments · Fixed by #3708
Nanyi-Jiang commented Dec 17, 2023

Context

When implementing Gemini with ChatGoogleGenerativeAI, I used the call and stream functions with a handleLLMNewToken callback that writes the streamed response to my client.

  • Expected: the response from Gemini is streamed to the client
  • Actual: nothing was sent, and the handleLLMNewToken callback I passed in was never executed
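
For reference, here is a minimal sketch of the calling pattern I mean (the model name, prompt, and sendToClient are placeholders of mine, and exact option names may vary by package version):

import { ChatGoogleGenerativeAI } from "@langchain/google-genai";

// Stand-in for the real client write (e.g. a server response stream).
const sendToClient = (token: string) => process.stdout.write(token);

const model = new ChatGoogleGenerativeAI({ modelName: "gemini-pro" });

await model.invoke("Tell me a short joke", {
  callbacks: [
    {
      // Expected to fire once per streamed token; in practice it never runs.
      handleLLMNewToken(token: string) {
        sendToClient(token);
      },
    },
  ],
});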

Investigation

I believe the cause is that the callbacks (_runManager?.handleLLMNewToken) are never invoked in _generate and _streamResponseChunks, which is why the actual behavior diverges from the expected one.

Implementation of ChatGoogleGenerativeAI _streamResponseChunks:

for await (const response of stream) {
  const chunk = convertResponseContentToChatGenerationChunk(response);
  if (!chunk) {
    continue;
  }
  yield chunk;
  // Note: runManager?.handleLLMNewToken is never called in this loop.
}

By contrast, the implementation in other models:

const delta = data.completion ?? "";
yield new ChatGenerationChunk({
  message: new AIMessageChunk({
    content: delta,
    additional_kwargs,
  }),
  text: delta,
});
await runManager?.handleLLMNewToken(delta);
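
Applying that same pattern to the Gemini streaming loop would look roughly like this (a sketch only, not the exact patch; the actual fix shipped with #3708):

for await (const response of stream) {
  const chunk = convertResponseContentToChatGenerationChunk(response);
  if (!chunk) {
    continue;
  }
  yield chunk;
  // Forward each streamed token to any registered callback handlers.
  await runManager?.handleLLMNewToken(chunk.text ?? "");
}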


dosubot bot commented Dec 17, 2023

🤖

Hey there, @Nanyi-Jiang! Fancy seeing you here again. Hope your code is behaving better than a caffeinated squirrel today! 🐿️

Based on the information provided, it appears that the _runManager parameter, which is responsible for managing callbacks, is not used in the _generate and _streamResponseChunks methods of LangChain's ChatGoogleGenerativeAI model. This is why the callbacks you pass in are never called. Here is the relevant code:

async _generate(
  messages: BaseMessage[],
  options: this["ParsedCallOptions"],
  _runManager?: CallbackManagerForLLMRun
): Promise<ChatResult> {
  // ... code that generates the response but does not use _runManager ...
}

async *_streamResponseChunks(
  messages: BaseMessage[],
  options: this["ParsedCallOptions"],
  _runManager?: CallbackManagerForLLMRun
): AsyncGenerator<ChatGenerationChunk> {
  // ... code that generates the response but does not use _runManager ...
}

You can find this in the libs/langchain-google-genai/src/chat_models.ts file.

This differs from other models, where the _runManager parameter is used to invoke the callbacks.

There have been similar issues in the past, such as "ChatOpenAI LLMChain handleLLMNewToken constructor chain doesnt trigger" and "Streaming stopped working since the latest version (0.0.30)". The solutions to those issues involved ensuring that the callbacks are attached to the correct object, and not relying on the model's overall verbose flag to trigger handleLLMNewToken.
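
In the meantime, a possible workaround (a sketch, not an official fix, assuming the underlying stream itself works correctly) is to consume the stream directly and forward each chunk yourself instead of relying on handleLLMNewToken:

import { ChatGoogleGenerativeAI } from "@langchain/google-genai";

const model = new ChatGoogleGenerativeAI({ modelName: "gemini-pro" }); // placeholder model name

// Iterate the stream directly; each chunk carries the incremental message content.
const stream = await model.stream("Tell me a short joke");
for await (const chunk of stream) {
  process.stdout.write(String(chunk.content)); // replace with your own client write
}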

I hope this helps! If you have any other questions or need further clarification, feel free to ask.

This response is meant to be useful and save you time. It is not meant to be a precise solution, but rather a starting point for your own research.


dosubot added the auto:bug label (Related to a bug, vulnerability, unexpected error with an existing feature) on Dec 17, 2023
jacoblee93 (Collaborator) commented

CC @bracesproul

jacoblee93 (Collaborator) commented

Thank you for flagging!

bracesproul (Collaborator) commented

Hey @Nanyi-Jiang, this will be patched in the next package release!
