fix(openai): Allow OpenAI streaming with json_schema, fix docs nits #6906

Merged (1 commit) on Sep 30, 2024
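The headline change permits token streaming when an OpenAI model is bound with a `json_schema` response format. A minimal sketch of the call pattern this enables (the model name and schema are illustrative assumptions; the PR's library code changes are not shown in the diff below):

```typescript
import { ChatOpenAI } from "@langchain/openai";

// Bind a json_schema response format; the schema itself is a
// hypothetical example, not taken from the PR.
const llm = new ChatOpenAI({ model: "gpt-4o-mini" }).bind({
  response_format: {
    type: "json_schema",
    json_schema: {
      name: "translation",
      strict: true,
      schema: {
        type: "object",
        properties: { translated: { type: "string" } },
        required: ["translated"],
        additionalProperties: false,
      },
    },
  },
});

// Streaming with json_schema is what this PR allows: chunks now
// arrive incrementally instead of the call being rejected.
const stream = await llm.stream("Translate 'I love programming' into French.");
for await (const chunk of stream) {
  process.stdout.write(String(chunk.content));
}
```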
docs/core_docs/docs/integrations/chat/openai.ipynb: 210 changes (161 additions, 49 deletions)
@@ -97,9 +97,9 @@
"import { ChatOpenAI } from \"@langchain/openai\" \n",
"\n",
"const llm = new ChatOpenAI({\n",
" model: \"gpt-4o\",\n",
" temperature: 0,\n",
" // other params...\n",
" model: \"gpt-4o\",\n",
" temperature: 0,\n",
" // other params...\n",
"})"
]
},
@@ -124,35 +124,39 @@
"output_type": "stream",
"text": [
"AIMessage {\n",
" \"id\": \"chatcmpl-9rB4GvhlRb0x3hxupLBQYOKKmTxvV\",\n",
" \"id\": \"chatcmpl-ADItECqSPuuEuBHHPjeCkh9wIO1H5\",\n",
" \"content\": \"J'adore la programmation.\",\n",
" \"additional_kwargs\": {},\n",
" \"response_metadata\": {\n",
" \"tokenUsage\": {\n",
" \"completionTokens\": 8,\n",
" \"completionTokens\": 5,\n",
" \"promptTokens\": 31,\n",
" \"totalTokens\": 39\n",
" \"totalTokens\": 36\n",
" },\n",
" \"finish_reason\": \"stop\"\n",
" \"finish_reason\": \"stop\",\n",
" \"system_fingerprint\": \"fp_5796ac6771\"\n",
" },\n",
" \"tool_calls\": [],\n",
" \"invalid_tool_calls\": [],\n",
" \"usage_metadata\": {\n",
" \"input_tokens\": 31,\n",
" \"output_tokens\": 8,\n",
" \"total_tokens\": 39\n",
" \"output_tokens\": 5,\n",
" \"total_tokens\": 36\n",
" }\n",
"}\n"
]
}
],
"source": [
"const aiMsg = await llm.invoke([\n",
" [\n",
" \"system\",\n",
" \"You are a helpful assistant that translates English to French. Translate the user sentence.\",\n",
" ],\n",
" [\"human\", \"I love programming.\"],\n",
" {\n",
" role: \"system\",\n",
" content: \"You are a helpful assistant that translates English to French. Translate the user sentence.\",\n",
" },\n",
" {\n",
" role: \"user\",\n",
" content: \"I love programming.\"\n",
" },\n",
"])\n",
"aiMsg"
]
@@ -196,23 +200,24 @@
"output_type": "stream",
"text": [
"AIMessage {\n",
" \"id\": \"chatcmpl-9rB4JD9rVBLzTuMee9AabulowEH0d\",\n",
" \"content\": \"Ich liebe das Programmieren.\",\n",
" \"id\": \"chatcmpl-ADItFaWFNqkSjSmlxeGk6HxcBHzVN\",\n",
" \"content\": \"Ich liebe Programmieren.\",\n",
" \"additional_kwargs\": {},\n",
" \"response_metadata\": {\n",
" \"tokenUsage\": {\n",
" \"completionTokens\": 6,\n",
" \"completionTokens\": 5,\n",
" \"promptTokens\": 26,\n",
" \"totalTokens\": 32\n",
" \"totalTokens\": 31\n",
" },\n",
" \"finish_reason\": \"stop\"\n",
" \"finish_reason\": \"stop\",\n",
" \"system_fingerprint\": \"fp_5796ac6771\"\n",
" },\n",
" \"tool_calls\": [],\n",
" \"invalid_tool_calls\": [],\n",
" \"usage_metadata\": {\n",
" \"input_tokens\": 26,\n",
" \"output_tokens\": 6,\n",
" \"total_tokens\": 32\n",
" \"output_tokens\": 5,\n",
" \"total_tokens\": 31\n",
" }\n",
"}\n"
]
@@ -222,22 +227,22 @@
"import { ChatPromptTemplate } from \"@langchain/core/prompts\"\n",
"\n",
"const prompt = ChatPromptTemplate.fromMessages(\n",
" [\n",
" [\n",
" [\n",
" \"system\",\n",
" \"You are a helpful assistant that translates {input_language} to {output_language}.\",\n",
" ],\n",
" [\"human\", \"{input}\"],\n",
" ]\n",
" \"system\",\n",
" \"You are a helpful assistant that translates {input_language} to {output_language}.\",\n",
" ],\n",
" [\"human\", \"{input}\"],\n",
" ]\n",
")\n",
"\n",
"const chain = prompt.pipe(llm);\n",
"await chain.invoke(\n",
" {\n",
" input_language: \"English\",\n",
" output_language: \"German\",\n",
" input: \"I love programming.\",\n",
" }\n",
" {\n",
" input_language: \"English\",\n",
" output_language: \"German\",\n",
" input: \"I love programming.\",\n",
" }\n",
")"
]
},
@@ -384,7 +389,7 @@
},
{
"cell_type": "code",
"execution_count": 2,
"execution_count": 8,
"id": "2b675330",
"metadata": {},
"outputs": [
@@ -396,7 +401,7 @@
" content: [\n",
" {\n",
" token: 'Hello',\n",
" logprob: -0.0005151443,\n",
" logprob: -0.0004740447,\n",
" bytes: [ 72, 101, 108, 108, 111 ],\n",
" top_logprobs: []\n",
" },\n",
@@ -408,25 +413,25 @@
" },\n",
" {\n",
" token: ' How',\n",
" logprob: -0.000035477897,\n",
" logprob: -0.000030113732,\n",
" bytes: [ 32, 72, 111, 119 ],\n",
" top_logprobs: []\n",
" },\n",
" {\n",
" token: ' can',\n",
" logprob: -0.0006658526,\n",
" logprob: -0.0004797665,\n",
" bytes: [ 32, 99, 97, 110 ],\n",
" top_logprobs: []\n",
" },\n",
" {\n",
" token: ' I',\n",
" logprob: -0.0000010280384,\n",
" logprob: -7.89631e-7,\n",
" bytes: [ 32, 73 ],\n",
" top_logprobs: []\n",
" },\n",
" {\n",
" token: ' assist',\n",
" logprob: -0.10124119,\n",
" logprob: -0.114006,\n",
" bytes: [\n",
" 32, 97, 115,\n",
" 115, 105, 115,\n",
@@ -436,23 +441,24 @@
" },\n",
" {\n",
" token: ' you',\n",
" logprob: -5.5122365e-7,\n",
" logprob: -4.3202e-7,\n",
" bytes: [ 32, 121, 111, 117 ],\n",
" top_logprobs: []\n",
" },\n",
" {\n",
" token: ' today',\n",
" logprob: -0.000052643223,\n",
" logprob: -0.00004501419,\n",
" bytes: [ 32, 116, 111, 100, 97, 121 ],\n",
" top_logprobs: []\n",
" },\n",
" {\n",
" token: '?',\n",
" logprob: -0.000012352386,\n",
" logprob: -0.000010206721,\n",
" bytes: [ 63 ],\n",
" top_logprobs: []\n",
" }\n",
" ]\n",
" ],\n",
" refusal: null\n",
"}\n"
]
}
@@ -489,24 +495,26 @@
"id": "3392390e",
"metadata": {},
"source": [
"### ``strict: true``\n",
"## ``strict: true``\n",
"\n",
"As of Aug 6, 2024, OpenAI supports a `strict` argument when calling tools that will enforce that the tool argument schema is respected by the model. See more here: https://platform.openai.com/docs/guides/function-calling.\n",
"\n",
"```{=mdx}\n",
"\n",
":::info Requires ``@langchain/openai >= 0.2.6``\n",
"\n",
"As of Aug 6, 2024, OpenAI supports a `strict` argument when calling tools that will enforce that the tool argument schema is respected by the model. See more here: https://platform.openai.com/docs/guides/function-calling\n",
"\n",
"**Note**: If ``strict: true`` the tool definition will also be validated, and a subset of JSON schema are accepted. Crucially, schema cannot have optional args (those with default values). Read the full docs on what types of schema are supported here: https://platform.openai.com/docs/guides/structured-outputs/supported-schemas. \n",
":::\n",
"\n",
"\n",
"```"
"```\n",
"\n",
"Here's an example with tool calling. Passing an extra `strict: true` argument to `.bindTools` will pass the param through to all tool definitions:"
]
},
{
"cell_type": "code",
"execution_count": 1,
"execution_count": 9,
"id": "90f0d465",
"metadata": {},
"outputs": [
@@ -517,9 +525,9 @@
"[\n",
" {\n",
" name: 'get_current_weather',\n",
" args: { location: 'Hanoi' },\n",
" args: { location: 'current' },\n",
" type: 'tool_call',\n",
" id: 'call_aB85ybkLCoccpzqHquuJGH3d'\n",
" id: 'call_hVFyYNRwc6CoTgr9AQFQVjm9'\n",
" }\n",
"]\n"
]
@@ -552,6 +560,110 @@
"console.dir(strictTrueResult.tool_calls, { depth: null });"
]
},
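To make the validation note above concrete, here is a hypothetical schema that `strict: true` would reject: the optional `unit` property is omitted from the generated `required` list, which strict mode does not allow.

```typescript
import { z } from "zod";
import { zodToJsonSchema } from "zod-to-json-schema";

// Hypothetical schema with an optional property. Under strict mode,
// every property must be required, so this definition is rejected.
const rejectedSchema = z.object({
  location: z.string(),
  unit: z.string().optional(), // optional args are not allowed with strict: true
});

console.log(JSON.stringify(zodToJsonSchema(rejectedSchema), null, 2));
// => "required": ["location"] only; strict mode needs every key listed.
```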
{
"cell_type": "markdown",
"id": "6c46a668",
"metadata": {},
"source": [
"If you only want to apply this parameter to a select number of tools, you can also pass OpenAI formatted tool schemas directly:"
]
},
{
"cell_type": "code",
"execution_count": 15,
"id": "e2da9ead",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"[\n",
" {\n",
" name: 'get_current_weather',\n",
" args: { location: 'London' },\n",
" type: 'tool_call',\n",
" id: 'call_EOSejtax8aYtqpchY8n8O82l'\n",
" }\n",
"]\n"
]
}
],
"source": [
"import { zodToJsonSchema } from \"zod-to-json-schema\";\n",
"\n",
"const toolSchema = {\n",
" type: \"function\",\n",
" function: {\n",
" name: \"get_current_weather\",\n",
" description: \"Get the current weather\",\n",
" strict: true,\n",
" parameters: zodToJsonSchema(\n",
" z.object({\n",
" location: z.string(),\n",
" })\n",
" ),\n",
" },\n",
"};\n",
"\n",
"const llmWithStrictTrueTools = new ChatOpenAI({\n",
" model: \"gpt-4o\",\n",
"}).bindTools([toolSchema], {\n",
" strict: true,\n",
"});\n",
"\n",
"const weatherToolResult = await llmWithStrictTrueTools.invoke([{\n",
" role: \"user\",\n",
" content: \"What is the current weather in London?\"\n",
"}])\n",
"\n",
"weatherToolResult.tool_calls;"
]
},
{
"cell_type": "markdown",
"id": "045668fe",
"metadata": {},
"source": [
"### Structured output\n",
"\n",
"We can also pass `strict: true` to the [`.withStructuredOutput()`](https://js.langchain.com/docs/how_to/structured_output/#the-.withstructuredoutput-method). Here's an example:"
]
},
{
"cell_type": "code",
"execution_count": 13,
"id": "8e8171a5",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"{ traits: [ `6'5\" tall`, 'love fruit' ] }\n"
]
}
],
"source": [
"import { ChatOpenAI } from \"@langchain/openai\";\n",
"\n",
"const traitSchema = z.object({\n",
" traits: z.array(z.string()).describe(\"A list of traits contained in the input\"),\n",
"});\n",
"\n",
"const structuredLlm = new ChatOpenAI({\n",
" model: \"gpt-4o-mini\",\n",
"}).withStructuredOutput(traitSchema, {\n",
" name: \"extract_traits\",\n",
" strict: true,\n",
"});\n",
"\n",
"await structuredLlm.invoke([{\n",
" role: \"user\",\n",
" content: `I am 6'5\" tall and love fruit.`\n",
"}]);"
]
},
{
"cell_type": "markdown",
"id": "3a5bb5ca-c3ae-4a58-be67-2cd18574b9a3",
@@ -9,7 +9,7 @@ This example goes over how to load data from EPUB files. By default, one document will be created for each chapter.
# Setup

```bash npm2yarn
npm install @langchian/community @langchain/core epub2 html-to-text
npm install @langchain/community @langchain/core epub2 html-to-text
```

# Usage, one document per chapter
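The usage cells are collapsed in this diff; a minimal sketch of the loader this page documents, assuming the `EPubLoader` export and a placeholder file path:

```typescript
import { EPubLoader } from "@langchain/community/document_loaders/fs/epub";

// Placeholder path; by default each chapter becomes its own Document.
// Pass { splitChapters: false } as a second argument to get a single
// Document for the whole file instead.
const loader = new EPubLoader("example.epub");

const docs = await loader.load();
console.log(docs.length);
```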