
Tool call not available in result, even when the model outputs a tool call in the additional_kwargs of the response when using ChatVertexAI #6100

Closed
DevDeepakBhattarai opened this issue Jul 17, 2024 · 3 comments
Labels
auto:bug Related to a bug, vulnerability, unexpected error with an existing feature

Comments

DevDeepakBhattarai commented Jul 17, 2024

Checked other resources

  • I added a very descriptive title to this issue.
  • I searched the LangChain.js documentation with the integrated search.
  • I used the GitHub search to find a similar question and didn't find it.
  • I am sure that this is a bug in LangChain.js rather than my code.
  • The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).

Example Code

const model = new ChatVertexAI({
  model: "gemini-1.5-flash-001",
  authOptions: {
    credentials: {
      auth_provider_x509_cert_url: env.GOOGLE_AUTH_PROVIDER_X509_CERT_URL,
      auth_uri: env.GOOGLE_AUTH_URI,
      client_email: env.GOOGLE_CLIENT_EMAIL,
      client_id: env.GOOGLE_VERTEX_CLIENT_ID,
      client_x509_cert_url: env.GOOGLE_CLIENT_X509_CERT_URL,
      private_key: env.GOOGLE_PRIVATE_KEY,
      private_key_id: env.GOOGLE_PRIVATE_KEY_ID,
      project_id: env.GOOGLE_PROJECT_ID,
      token_uri: env.GOOGLE_TOKEN_URI,
      type: "service_account",
    },
  },
  temperature: 0,
});

const tools = [search_tool, weatherTool, crypto_tool];
model.bindTools(tools, {
  tool_choice: "auto",
});

const result = await model.invoke(input, config); // This input is one where a tool call is necessary

// result.tool_calls is an empty array

This is the raw output of the model from LangSmith:

{
  "llmOutput": {},
  "generations": [
    {
      "text": "",
      "generationInfo": {
        "finishReason": "stop"
      },
      "message": {
        "lc": 1,
        "type": "constructor",
        "id": [
          "langchain_core",
          "messages",
          "AIMessageChunk"
        ],
        "kwargs": {
          "content": "",
          "additional_kwargs": {
            "tool_calls": [
              {
                "id": "ba811aeda038412cb4f2492b180f810e",
                "type": "function",
                "function": {
                  "name": "get_weather",
                  "arguments": "{\"city\":\"sf\",\"state\":\"CA\"}"
                }
              }
            ] // tool call is present
          },
          "response_metadata": {},
          "tool_call_chunks": [],
          "usage_metadata": {
            "input_tokens": 299,
            "output_tokens": 7,
            "total_tokens": 306
          },
          "tool_calls": [], // tool call is empty
          "invalid_tool_calls": []
        }
      }
    }
  ]
}
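
In code, the mismatch shown in this trace looks roughly like the following (a sketch based on the output above; result is the AIMessageChunk returned by the invoke call):

// Sketch: the provider-specific payload in additional_kwargs carries the
// tool call, while the parsed top-level field stays empty.
console.log(result.additional_kwargs.tool_calls?.length); // 1
console.log(result.tool_calls?.length); // 0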

Error Message and Stack Trace (if applicable)

No response

Description

I am trying to build an LLM app with multiple model support, but ChatVertexAI is not working for me.

System Info

pnpm 9.4
Windows 10
Node version

dosubot added the auto:bug label on Jul 17, 2024

dosubot bot commented Jul 17, 2024

To resolve the issue where the tool call is not available in the result even though the model outputs one in the additional_kwargs of the response, you need to ensure that the ChatVertexAI model processes tool calls correctly. Here is an example that binds a Gemini tool and surfaces the call in additional_kwargs:

import { ChatVertexAI } from "@langchain/google-vertexai";
import { type GeminiTool } from "@langchain/google-vertexai/types";
import { zodToGeminiParameters } from "@langchain/google-vertexai/utils";
import { z } from "zod";

const calculatorSchema = z.object({
  operation: z
    .enum(["add", "subtract", "multiply", "divide"])
    .describe("The type of operation to execute"),
  number1: z.number().describe("The first number to operate on."),
  number2: z.number().describe("The second number to operate on."),
});

const geminiCalculatorTool: GeminiTool = {
  functionDeclarations: [
    {
      name: "calculator",
      description: "A simple calculator tool",
      parameters: zodToGeminiParameters(calculatorSchema),
    },
  ],
};

const model = new ChatVertexAI({
  temperature: 0.7,
  model: "gemini-1.5-flash-001",
}).bind({
  tools: [geminiCalculatorTool],
});

const response = await model.invoke("What is 1628253239 times 81623836?");
console.log(JSON.stringify(response.additional_kwargs, null, 2));
/*
{
  "tool_calls": [
    {
      "id": "calculator",
      "type": "function",
      "function": {
        "name": "calculator",
        "arguments": "{\"number2\":81623836,\"number1\":1628253239,\"operation\":\"multiply\"}"
      }
    }
  ],
}
 */

Ensure that your implementation of the ChatVertexAI model follows a similar approach to parse and include tool calls in the response. Specifically, you need to:

  1. Define the tool schema using zod.
  2. Create a GeminiTool object with the tool's function declarations.
  3. Bind the tool to the ChatVertexAI model.
  4. Invoke the model and check the additional_kwargs for the tool call (see the sketch after this list).
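
For step 4, here is a minimal sketch of normalizing tool calls that only appear in additional_kwargs into the standard shape. The helper name and normalization logic are assumptions for illustration, not LangChain.js internals:

import type { AIMessageChunk } from "@langchain/core/messages";

// Hypothetical helper: convert OpenAI-style tool calls found in
// additional_kwargs into the { name, args, id } shape that
// result.tool_calls would normally carry.
function toolCallsFromAdditionalKwargs(result: AIMessageChunk) {
  const raw = result.additional_kwargs.tool_calls ?? [];
  return raw.map((call) => ({
    name: call.function.name,
    args: JSON.parse(call.function.arguments) as Record<string, unknown>,
    id: call.id ?? "",
    type: "tool_call" as const,
  }));
}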

Additionally, you can refer to the AnthropicFunctions class in langchain/src/experimental/chat_models/anthropic_functions.ts for another example of how tool calls are processed and included in the response. This class demonstrates how to handle tool calls using XML parsing and how to structure the response to include the tool call information [1][2][3].

To continue talking to Dosu, mention @dosu.

bracesproul (Member) commented

There are two issues with the code snippet you provided. The first is that ChatVertexAI doesn't support the tool_choice param, so passing it won't do anything (and should give you a type error). The second is that .bindTools returns a new instance of the model as a Runnable and does not modify the chat model in place, so you need to assign the result of .bindTools to a new variable. See the example below:

import { tool } from "@langchain/core/tools";
import { ChatVertexAI } from "@langchain/google-vertexai";
import { z } from "zod";

const calculatorTool = tool(
  (_) => {
    return "";
  },
  {
    name: "calculator",
    description: "A calculator tool that can do basic arithmetic.",
    schema: z.object({
      a: z.number(),
      b: z.number(),
    }),
  }
);

const model = new ChatVertexAI({
  model: "gemini-1.5-flash-001",
  temperature: 0,
});
const tools = [calculatorTool];
const modelWithTools = model.bindTools(tools);

const result = await modelWithTools.invoke(
  "What is 173262 plus 183612836? Use the calculator tool."
);
console.log(JSON.stringify(result.tool_calls, null, 2));

/*
[
  {
    "name": "calculator",
    "args": {
      "a": 173262,
      "b": 183612836
    },
    "id": "13672eb9bb8a4dc5afe23bddec2bf80b",
    "type": "tool_call"
  }
]
*/

Alternatively, you can also do this:

...

const tools = [calculatorTool];
const model = new ChatVertexAI({
  model: "gemini-1.5-flash-001",
  temperature: 0,
}).bindTools(tools);

...

DevDeepakBhattarai (Author) commented Jul 20, 2024

I am sorry for the late response, @bracesproul.
The code above is not my actual code; I just wanted to give an example, since my actual code was really messy.

Here is my code to select the model:

import { env } from "@/env";
import { ChatAnthropic } from "@langchain/anthropic";
import { ChatOpenAI, type ChatOpenAICallOptions } from "@langchain/openai";
import { ChatGroq } from "@langchain/groq";
import { ChatGoogleGenerativeAI } from "@langchain/google-genai";
import { z } from "zod";
import { ChatVertexAI } from "@langchain/google-vertexai-web";
import { OUTPUT_MODEL } from "@/utils/server";
type Model =
  | ChatOpenAI<ChatOpenAICallOptions>
  | ChatAnthropic
  | ChatVertexAI
  | ChatGroq
  | ChatGoogleGenerativeAI;
export const AvailableModels = z.enum(["gpt", "claude", "gemini", "groq"]);
export type AvailableModels = z.infer<typeof AvailableModels>;
export function modelPicker(
  model: z.infer<typeof AvailableModels>,
  stream?: boolean,
) {
  let modelObject: Model;
  switch (model) {
    case "gpt": {
      modelObject = new ChatOpenAI({
        model: "gpt-4o-mini",
        apiKey: env.OPENAI_API_KEY,
        streaming: stream,
        modelKwargs: stream
          ? {
              parallel_tool_calls: false,
            }
          : undefined,
      });

      break;
    }

    case "claude": {
      modelObject = new ChatAnthropic({
        model: "claude-3-5-sonnet-20240620",
        apiKey: env.ANTHROPIC_API_KEY,
        streaming: true,
      });
      break;
    }

    case "gemini": {
      modelObject = new ChatVertexAI({
        model: "gemini-1.5-flash-001",
        authOptions: {
          credentials: {
            auth_provider_x509_cert_url: env.GOOGLE_AUTH_PROVIDER_X509_CERT_URL,
            auth_uri: env.GOOGLE_AUTH_URI,
            client_email: env.GOOGLE_CLIENT_EMAIL,
            client_id: env.GOOGLE_VERTEX_CLIENT_ID,
            client_x509_cert_url: env.GOOGLE_CLIENT_X509_CERT_URL,
            private_key: env.GOOGLE_PRIVATE_KEY,
            private_key_id: env.GOOGLE_PRIVATE_KEY_ID,
            project_id: env.GOOGLE_PROJECT_ID,
            token_uri: env.GOOGLE_TOKEN_URI,
            type: "service_account",
          },
        },
        temperature: 0,
      });
      break;
    }
    case "groq": {
      modelObject = new ChatGroq({
        apiKey: env.GROQ_API_KEY,
        streaming: stream,
        model: "llama3-70b-8192",
        temperature: 0.7,
      });
      break;
    }
  }
  return modelObject;
}

Here is the code to invoke the model:

const invokeModel = async (
  state: AgentExecutorState,
  config?: RunnableConfig,
): Promise<Partial<AgentExecutorState>> => {
  console.log(config);
  const initialPrompt =
    state.model !== "groq" ? promptWithImage : promptWithoutImages;
  const MessageHistoryStore = new UpstashRedisChatMessageHistory({
    sessionId: `${state.userId}-chat-${state.chatId}`, // Or some other unique identifier for the conversation
    client: redis,
  });

  const tools = [search_tool, weatherTool, crypto_tool];
  const llm = modelPicker(state.model, true)
    .bindTools(tools)
    .withConfig({ runName: OUTPUT_MODEL });
  const chain = initialPrompt.pipe(llm);

  let result: AIMessageChunk | undefined = undefined;
  result = await chain.invoke(state, config);

  await appendRunnableUI(
    config?.callbacks as CallbackManager,
    <div>Hello there is something that is very weird</div>,
  );
  // This is the workaround that I am using right now.
  if (
    state.model === "gemini" &&
    result.additional_kwargs.tool_calls &&
    result.additional_kwargs.tool_calls.length > 0
  ) {
    const tool_call = result.additional_kwargs.tool_calls[0]!;
    const toolCall = {
      name: tool_call.function.name,
      parameters: safeJsonParse(tool_call.function.arguments)!,
      id: tool_call.id ?? "",
    };
    return {
      toolCall,
      chat_history: [result],
    };
  }

  if (result.tool_calls && result.tool_calls.length > 0) {
    const toolCall = {
      name: result.tool_calls[0]!.name,
      parameters: result.tool_calls[0]!.args,
      id: result.tool_calls[0]!.id ?? "",
    };
    return {
      toolCall,
      chat_history: [result],
    };
  }

  result.content &&
    void MessageHistoryStore.addAIMessage(result.content as string);

  const newSummary = await memory.predictNewSummary(
    [
      new HumanMessage(state.objective),
      new AIMessage(result.content as string),
    ],
    state.existingSummary,
  );
  void redis.set(`${state.userId}-summary-${state.chatId}`, newSummary);

  return {
    result: result.content as string,
    chat_history: [result],
    toolCall: undefined,
  };
};
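
The snippet above references a safeJsonParse helper that isn't shown; a minimal sketch of such a helper (an assumption, not the author's actual implementation) might look like:

// Hypothetical sketch of the safeJsonParse helper referenced above:
// parse a JSON string, returning undefined on failure instead of throwing.
function safeJsonParse<T = Record<string, unknown>>(text: string): T | undefined {
  try {
    return JSON.parse(text) as T;
  } catch {
    return undefined;
  }
}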

Here is the LangSmith trace:
https://smith.langchain.com/public/a3ca436c-95ea-4e1d-a645-4df379508018/r
