Tool call not available in result, even when the model outputs a tool call in the additional_kwargs of the response, when using ChatVertexAI. #6100
Comments
To resolve the issue where the tool call is not available in the result, even though the model outputs a tool call in the additional kwargs of the response, you need to ensure that the tool is correctly defined and bound to the model. For example:

import { ChatVertexAI } from "@langchain/google-vertexai";
import { type GeminiTool } from "@langchain/google-vertexai/types";
import { zodToGeminiParameters } from "@langchain/google-vertexai/utils";
import { z } from "zod";
const calculatorSchema = z.object({
operation: z
.enum(["add", "subtract", "multiply", "divide"])
.describe("The type of operation to execute"),
number1: z.number().describe("The first number to operate on."),
number2: z.number().describe("The second number to operate on."),
});
const geminiCalculatorTool: GeminiTool = {
functionDeclarations: [
{
name: "calculator",
description: "A simple calculator tool",
parameters: zodToGeminiParameters(calculatorSchema),
},
],
};
const model = new ChatVertexAI({
temperature: 0.7,
model: "gemini-1.5-flash-001",
}).bind({
tools: [geminiCalculatorTool],
});
const response = await model.invoke("What is 1628253239 times 81623836?");
console.log(JSON.stringify(response.additional_kwargs, null, 2));
/*
{
"tool_calls": [
{
"id": "calculator",
"type": "function",
"function": {
"name": "calculator",
"arguments": "{\"number2\":81623836,\"number1\":1628253239,\"operation\":\"multiply\"}"
}
}
]
}
*/

Ensure that your implementation of the model correctly surfaces these tool calls from additional_kwargs. Additionally, you can refer to the relevant LangChain documentation and source code for more details.
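For reference, here is a minimal sketch of pulling the tool call out of additional_kwargs and parsing its arguments, assuming the response shape shown above:

// Minimal sketch: extract the tool call from additional_kwargs and parse
// its arguments. Assumes the OpenAI-style shape shown above, where
// `function.arguments` is a JSON string.
const rawToolCalls = response.additional_kwargs.tool_calls ?? [];
for (const rawCall of rawToolCalls) {
  const name = rawCall.function.name;
  const args = JSON.parse(rawCall.function.arguments);
  console.log(name, args);
  // => calculator { number2: 81623836, number1: 1628253239, operation: "multiply" }
}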
There are two issues with the code snippet you provided. The first is how the tool is defined and bound to the model; use tool() from @langchain/core/tools together with bindTools(). For example:

import { tool } from "@langchain/core/tools";
import { ChatVertexAI } from "@langchain/google-vertexai";
import { z } from "zod";
const calculatorTool = tool(
(_) => {
return "";
},
{
name: "calculator",
description: "A calculator tool that can do basic arithmetic.",
schema: z.object({
a: z.number(),
b: z.number(),
}),
}
);
const model = new ChatVertexAI({
model: "gemini-1.5-flash-001",
temperature: 0,
});
const tools = [calculatorTool];
const modelWithTools = model.bindTools(tools);
const result = await modelWithTools.invoke(
"What is 173262 plus 183612836? Use the calculator tool."
);
console.log(JSON.stringify(result.tool_calls, null, 2));
/*
[
  {
    "name": "calculator",
    "args": {
      "a": 173262,
      "b": 183612836
    },
    "id": "13672eb9bb8a4dc5afe23bddec2bf80b",
    "type": "tool_call"
  }
]
*/

Alternatively, you can also do this:

...
const tools = [calculatorTool];
const model = new ChatVertexAI({
model: "gemini-1.5-flash-001",
temperature: 0,
}).bindTools(tools);
...
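As a side note, once the tool call surfaces in result.tool_calls, the LangChain tool itself can execute it. A minimal sketch, assuming the calculatorTool and result from the example above and a recent @langchain/core version where invoking a tool with a ToolCall returns a ToolMessage:

// Minimal sketch: hand the parsed tool call back to the tool. In recent
// @langchain/core versions, invoking a tool with a ToolCall object
// returns a ToolMessage that can be appended to the chat history.
if (result.tool_calls && result.tool_calls.length > 0) {
  const toolMessage = await calculatorTool.invoke(result.tool_calls[0]);
  console.log(toolMessage.content);
}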
I am sorry for the late response @bracesproul. Here is my code to select the model:

import { env } from "@/env";
import { ChatAnthropic } from "@langchain/anthropic";
import { ChatOpenAI, type ChatOpenAICallOptions } from "@langchain/openai";
import { ChatGroq } from "@langchain/groq";
import { ChatGoogleGenerativeAI } from "@langchain/google-genai";
import { z } from "zod";
import { ChatVertexAI } from "@langchain/google-vertexai-web";
import { OUTPUT_MODEL } from "@/utils/server";
type Model =
| ChatOpenAI<ChatOpenAICallOptions>
| ChatAnthropic
| ChatVertexAI
| ChatGroq
| ChatGoogleGenerativeAI;
export const AvailableModels = z.enum(["gpt", "claude", "gemini", "groq"]);
export type AvailableModels = z.infer<typeof AvailableModels>;
export function modelPicker(
model: z.infer<typeof AvailableModels>,
stream?: boolean,
) {
let modelObject: Model;
switch (model) {
case "gpt": {
modelObject = new ChatOpenAI({
model: "gpt-4o-mini",
apiKey: env.OPENAI_API_KEY,
streaming: stream,
modelKwargs: stream
? {
parallel_tool_calls: false,
}
: undefined,
});
break;
}
case "claude": {
modelObject = new ChatAnthropic({
model: "claude-3-5-sonnet-20240620",
apiKey: env.ANTHROPIC_API_KEY,
streaming: true,
});
break;
}
case "gemini": {
modelObject = new ChatVertexAI({
model: "gemini-1.5-flash-001",
authOptions: {
credentials: {
auth_provider_x509_cert_url: env.GOOGLE_AUTH_PROVIDER_X509_CERT_URL,
auth_uri: env.GOOGLE_AUTH_URI,
client_email: env.GOOGLE_CLIENT_EMAIL,
client_id: env.GOOGLE_VERTEX_CLIENT_ID,
client_x509_cert_url: env.GOOGLE_CLIENT_X509_CERT_URL,
private_key: env.GOOGLE_PRIVATE_KEY,
private_key_id: env.GOOGLE_PRIVATE_KEY_ID,
project_id: env.GOOGLE_PROJECT_ID,
token_uri: env.GOOGLE_TOKEN_URI,
type: "service_account",
},
},
temperature: 0,
});
break;
}
case "groq": {
modelObject = new ChatGroq({
apiKey: env.GROQ_API_KEY,
streaming: stream,
model: "llama3-70b-8192",
temperature: 0.7,
});
break;
}
}
return modelObject;
}

Here is the code to invoke the model:

const invokeModel = async (
state: AgentExecutorState,
config?: RunnableConfig,
): Promise<Partial<AgentExecutorState>> => {
console.log(config);
const initialPrompt =
state.model !== "groq" ? promptWithImage : promptWithoutImages;
const MessageHistoryStore = new UpstashRedisChatMessageHistory({
sessionId: `${state.userId}-chat-${state.chatId}`, // Or some other unique identifier for the conversation
client: redis,
});
const tools = [search_tool, weatherTool, crypto_tool];
const llm = modelPicker(state.model, true)
.bindTools(tools)
.withConfig({ runName: OUTPUT_MODEL });
const chain = initialPrompt.pipe(llm);
let result: AIMessageChunk | undefined = undefined;
result = await chain.invoke(state, config);
await appendRunnableUI(
config?.callbacks as CallbackManager,
<div>Hello there is something that is very weird</div>,
);
// This is the work around that I am using right now.
if (
state.model === "gemini" &&
result.additional_kwargs.tool_calls &&
result.additional_kwargs.tool_calls.length > 0
) {
const tool_call = result.additional_kwargs.tool_calls[0]!;
const toolCall = {
name: tool_call.function.name,
parameters: safeJsonParse(tool_call.function.arguments)!,
id: tool_call.id ?? "",
};
return {
toolCall,
chat_history: [result],
};
}
if (result.tool_calls && result.tool_calls.length > 0) {
const toolCall = {
name: result.tool_calls[0]!.name,
parameters: result.tool_calls[0]!.args,
id: result.tool_calls[0]!.id ?? "",
};
return {
toolCall,
chat_history: [result],
};
}
result.content &&
void MessageHistoryStore.addAIMessage(result.content as string);
const newSummary = await memory.predictNewSummary(
[
new HumanMessage(state.objective),
new AIMessage(result.content as string),
],
state.existingSummary,
);
void redis.set(`${state.userId}-summary-${state.chatId}`, newSummary);
return {
result: result.content as string,
chat_history: [result],
toolCall: undefined,
};
};

Here is the LangSmith trace:
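One note on the snippet above: safeJsonParse is a local helper, not a LangChain export. A minimal sketch of such a helper, hypothetical but matching how it is used above:

// Hypothetical helper matching the usage above: parse a JSON string and
// return undefined instead of throwing on malformed input.
function safeJsonParse<T = Record<string, unknown>>(raw: string): T | undefined {
  try {
    return JSON.parse(raw) as T;
  } catch {
    return undefined;
  }
}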
Checked other resources
Example Code
This is the raw output of the model from LangSmith.
Error Message and Stack Trace (if applicable)
No response
Description
I am trying to build an LLM app with multiple model support, but ChatVertexAI is not working for me.
System Info
pnpm 9.4
Windows 10
Node version
The text was updated successfully, but these errors were encountered: