ChatGoogleGenerativeAI: does not call callbacks (_runManager?.handleLLMNewToken) #3699
Comments
🤖 Hey there, @Nanyi-Jiang! Fancy seeing you here again. Hope your code is behaving better than a caffeinated squirrel today! 🐿️

Based on the information provided, it appears that `_generate` and `_streamResponseChunks` receive a run manager but never use it:

```typescript
async _generate(
  messages: BaseMessage[],
  options: this["ParsedCallOptions"],
  _runManager?: CallbackManagerForLLMRun
): Promise<ChatResult> {
  // ... code that generates the response but does not use _runManager ...
}

async *_streamResponseChunks(
  messages: BaseMessage[],
  options: this["ParsedCallOptions"],
  _runManager?: CallbackManagerForLLMRun
): AsyncGenerator<ChatGenerationChunk> {
  // ... code that generates the response but does not use _runManager ...
}
```

You can find this in the libs/langchain-google-genai/src/chat_models.ts file. This behavior is unlike other models, where the run manager is used to surface new tokens through `handleLLMNewToken`. There have been similar issues in the past, such as "ChatOpenAI LLMChain handleLLMNewToken constructor chain doesnt trigger" and "Streaming stopped working since the latest version".

I hope this helps! If you have any other questions or need further clarification, feel free to ask.
This response is meant to be useful and save you time. It is not meant to be a precise solution, but rather a starting point for your own research. Help me be more useful! Please leave a 👍 if this is helpful and 👎 if it is irrelevant. If you want to continue the conversation, start your reply with @dosu-bot.

CC @bracesproul

Thank you for flagging!

hey @Nanyi-Jiang this will be patched in the next package release!
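Until that release lands, one stopgap is to iterate the model's `stream()` output directly instead of relying on `handleLLMNewToken`, since the chunks are still yielded even though the run manager is never invoked. A minimal sketch, assuming the `@langchain/google-genai` package and the `gemini-pro` model name:

```typescript
import { ChatGoogleGenerativeAI } from "@langchain/google-genai";
import { HumanMessage } from "@langchain/core/messages";

const model = new ChatGoogleGenerativeAI({
  modelName: "gemini-pro", // assumed model name
  apiKey: process.env.GOOGLE_API_KEY,
});

// Iterate the async stream directly; each chunk carries the newly
// generated text, independent of the callback system.
const stream = await model.stream([new HumanMessage("Tell me a joke")]);
for await (const chunk of stream) {
  // Forward chunk.content to the client (e.g. over a WebSocket or SSE).
  process.stdout.write(String(chunk.content));
}
```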
Context

When implementing Gemini with `ChatGoogleGenerativeAI`, I tried to use the `call` and `stream` functions with a callback function `handleLLMToken`, which will write the response to my client side. However, the `handleLLMToken` I passed in was not executed.
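For reference, the wiring described above typically looks something like the following. This is a sketch of the reported setup rather than code from the issue, and it assumes the standard `handleLLMNewToken` callback name and the `@langchain/google-genai` package:

```typescript
import { ChatGoogleGenerativeAI } from "@langchain/google-genai";
import { HumanMessage } from "@langchain/core/messages";

const model = new ChatGoogleGenerativeAI({ apiKey: process.env.GOOGLE_API_KEY });

// The expectation: every streamed token triggers handleLLMNewToken,
// which pushes the partial response to the client.
await model.call([new HumanMessage("Stream me a short poem")], {
  callbacks: [
    {
      handleLLMNewToken(token: string) {
        // e.g. websocket.send(token) on the server side
        process.stdout.write(token);
      },
    },
  ],
});
```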
Investigation

I think it didn't call the callbacks (`_runManager?.handleLLMNewToken`) in the functions `_generate` and `_streamResponseChunks`, which caused the actual behavior to diverge from the expected.

Implementation of `ChatGoogleGenerativeAI` `_streamResponseChunks`:

langchainjs/libs/langchain-google-genai/src/chat_models.ts
Lines 298 to 306 in 1794285

The implementation of other models:

langchainjs/langchain/src/chat_models/anthropic.ts
Lines 302 to 310 in 1794285
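For comparison, the pattern those other integrations follow is roughly the one sketched below: each streamed piece is both yielded as a `ChatGenerationChunk` and forwarded to the run manager, which is what actually fires `handleLLMNewToken` on registered callbacks. This is not the langchain-google-genai source, just an illustrative sketch; `streamFromProvider` is a hypothetical stand-in for the provider SDK's streaming call, and the import paths assume `@langchain/core`:

```typescript
import { CallbackManagerForLLMRun } from "@langchain/core/callbacks/manager";
import { ChatGenerationChunk } from "@langchain/core/outputs";
import { AIMessageChunk } from "@langchain/core/messages";

// Hypothetical stand-in for the provider SDK's streaming endpoint.
declare function streamFromProvider(prompt: string): AsyncIterable<{ text: string }>;

async function* streamResponseChunksSketch(
  prompt: string,
  runManager?: CallbackManagerForLLMRun
): AsyncGenerator<ChatGenerationChunk> {
  for await (const part of streamFromProvider(prompt)) {
    const chunk = new ChatGenerationChunk({
      text: part.text,
      message: new AIMessageChunk({ content: part.text }),
    });
    // Yield the chunk to the caller's stream...
    yield chunk;
    // ...and notify the run manager, which invokes handleLLMNewToken
    // on every callback handler attached to the run.
    await runManager?.handleLLMNewToken(part.text);
  }
}
```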