From d2b46c414e5af6a5d6b7367da7c1246ee154ebf4 Mon Sep 17 00:00:00 2001 From: jacoblee93 Date: Fri, 28 Jun 2024 17:07:44 -0700 Subject: [PATCH 1/4] Update intro, docs nits --- README.md | 13 ++-- docs/core_docs/docs/concepts.mdx | 4 +- .../docs/how_to/agent_executor.ipynb | 4 +- docs/core_docs/docs/how_to/installation.mdx | 2 + .../core_docs/docs/how_to/migrate_agent.ipynb | 36 ++++++++- docs/core_docs/docs/how_to/tool_calling.ipynb | 74 +++++++++++-------- .../integrations/chat/ollama_functions.mdx | 6 ++ docs/core_docs/docs/introduction.mdx | 20 +++-- .../static/svg/langchain_stack_062024.svg | 40 ++++++++++ .../svg/langchain_stack_062024_dark.svg | 47 ++++++++++++ langchain-core/README.md | 2 +- langchain/README.md | 2 +- libs/langchain-community/README.md | 2 +- 13 files changed, 198 insertions(+), 54 deletions(-) create mode 100644 docs/core_docs/static/svg/langchain_stack_062024.svg create mode 100644 docs/core_docs/static/svg/langchain_stack_062024_dark.svg diff --git a/README.md b/README.md index 5f8830dec5df..98e4d6ca6f03 100644 --- a/README.md +++ b/README.md @@ -39,19 +39,20 @@ LangChain is written in TypeScript and can be used in: - **Reason**: rely on a language model to reason (about how to answer based on provided context, what actions to take, etc.) This framework consists of several parts. -- **LangChain Libraries**: The Python and JavaScript libraries. Contains interfaces and integrations for a myriad of components, a basic runtime for combining these components into chains and agents, and off-the-shelf implementations of chains and agents. -- **[LangChain Templates](https://github.com/langchain-ai/langchain/tree/master/templates)**: (currently Python-only) A collection of easily deployable reference architectures for a wide variety of tasks. -- **[LangServe](https://github.com/langchain-ai/langserve)**: (currently Python-only) A library for deploying LangChain chains as a REST API. 
-- **[LangSmith](https://smith.langchain.com)**: A developer platform that lets you debug, test, evaluate, and monitor chains built on any LLM framework and seamlessly integrates with LangChain. +- **Open-source libraries**: Build your applications using LangChain's open-source [building blocks](https://js.langchain.com/v0.2/docs/concepts#langchain-expression-language), [components](https://js.langchain.com/v0.2/docs/concepts), and [third-party integrations](https://js.langchain.com/v0.2/docs/integrations/platforms/). +Use [LangGraph.js](https://js.langchain.com/v0.2/docs/concepts/#langgraphjs) to build stateful agents with first-class streaming and human-in-the-loop support. +- **Productionization**: Use [LangSmith](https://docs.smith.langchain.com/) to inspect, monitor and evaluate your chains, so that you can continuously optimize and deploy with confidence. +- **Deployment**: Turn your LangGraph applications into production-ready APIs and Assistants with [LangGraph Cloud](https://langchain-ai.github.io/langgraph/cloud/) (currently Python-only). The LangChain libraries themselves are made up of several different packages. - **[`@langchain/core`](https://github.com/langchain-ai/langchainjs/blob/main/langchain-core)**: Base abstractions and LangChain Expression Language. - **[`@langchain/community`](https://github.com/langchain-ai/langchainjs/blob/main/libs/langchain-community)**: Third party integrations. - **[`langchain`](https://github.com/langchain-ai/langchainjs/blob/main/langchain)**: Chains, agents, and retrieval strategies that make up an application's cognitive architecture. +- **[LangGraph.js](https://langchain-ai.github.io/langgraphjs/)**: A library for building robust and stateful multi-actor applications with LLMs by modeling steps as edges and nodes in a graph. Integrates smoothly with LangChain, but can be used without it. Integrations may also be split into their own compatible packages. 
-![LangChain Stack](https://github.com/langchain-ai/langchainjs/blob/main/docs/core_docs/static/img/langchain_stack_feb_2024.webp) +![LangChain Stack](https://github.com/langchain-ai/langchainjs/blob/main/docs/core_docs/static/svg/langchain_stack_062024.svg) This library aims to assist in the development of those types of applications. Common examples of these applications include: @@ -86,7 +87,7 @@ Data Augmented Generation involves specific types of chains that first interact **🤖 Agents:** -Agents involve an LLM making decisions about which Actions to take, taking that Action, seeing an Observation, and repeating that until done. LangChain provides a standard interface for agents, a selection of agents to choose from, and examples of end-to-end agents. +Agents allow an LLM autonomy over how a task is accomplished. Agents make decisions about which Actions to take, then take that Action, observe the result, and repeat until the task is complete. LangChain provides a [standard interface for agents](https://js.langchain.com/v0.2/docs/concepts/#agents), along with [LangGraph.js](https://github.com/langchain-ai/langgraphjs/) for building custom agents. ## 📖 Documentation diff --git a/docs/core_docs/docs/concepts.mdx b/docs/core_docs/docs/concepts.mdx index e02b5647e04a..f2162ae0c773 100644 --- a/docs/core_docs/docs/concepts.mdx +++ b/docs/core_docs/docs/concepts.mdx @@ -12,8 +12,8 @@ import useBaseUrl from "@docusaurus/useBaseUrl"; diff --git a/docs/core_docs/docs/how_to/agent_executor.ipynb b/docs/core_docs/docs/how_to/agent_executor.ipynb index 541a7f78cf7f..630c52d8aff1 100644 --- a/docs/core_docs/docs/how_to/agent_executor.ipynb +++ b/docs/core_docs/docs/how_to/agent_executor.ipynb @@ -22,7 +22,7 @@ "In this tutorial we will build an agent that can interact with multiple different tools: one being a local database, the other being a search engine. 
You will be able to ask this agent questions, watch it call tools, and have conversations with it.\n", "\n", ":::{.callout-important}\n", - "This section will cover building with LangChain Agents. LangChain Agents are fine for getting started, but past a certain point you will likely want flexibility and control that they do not offer. For working with more advanced agents, we'd recommend checking out [LangGraph](/docs/concepts/#langgraph).\n", + "This section will cover building with LangChain Agents. LangChain Agents are fine for getting started, but past a certain point you will likely want flexibility and control that they do not offer. For working with more advanced agents, we'd recommend checking out [LangGraph](/docs/concepts/#langgraphjs).\n", ":::\n", "\n", "## Concepts\n", @@ -978,7 +978,7 @@ "That's a wrap! In this quick start we covered how to create a simple agent. Agents are a complex topic, and there's lot to learn! \n", "\n", ":::{.callout-important}\n", - "This section covered building with LangChain Agents. LangChain Agents are fine for getting started, but past a certain point you will likely want flexibility and control that they do not offer. For working with more advanced agents, we'd recommend checking out [LangGraph](/docs/concepts/#langgraph).\n", + "This section covered building with LangChain Agents. LangChain Agents are fine for getting started, but past a certain point you will likely want flexibility and control that they do not offer. 
For working with more advanced agents, we'd recommend checking out [LangGraph](/docs/concepts/#langgraphjs).\n", "\n", "You can also see [this guide to help migrate to LangGraph](/docs/how_to/migrate_agent).\n", ":::" diff --git a/docs/core_docs/docs/how_to/installation.mdx b/docs/core_docs/docs/how_to/installation.mdx index ac6004c758c8..9b97c4b131c6 100644 --- a/docs/core_docs/docs/how_to/installation.mdx +++ b/docs/core_docs/docs/how_to/installation.mdx @@ -73,6 +73,8 @@ npm install @langchain/core ### LangGraph [LangGraph.js](https://langchain-ai.github.io/langgraphjs/) is a library for building stateful, multi-actor applications with LLMs. +It integrates smoothly with LangChain, but can be used without it. + Install with: ```bash npm2yarn diff --git a/docs/core_docs/docs/how_to/migrate_agent.ipynb b/docs/core_docs/docs/how_to/migrate_agent.ipynb index cdebd6c7ce1a..f91cd3fd2454 100644 --- a/docs/core_docs/docs/how_to/migrate_agent.ipynb +++ b/docs/core_docs/docs/how_to/migrate_agent.ipynb @@ -21,7 +21,16 @@ "source": [ "# How to migrate from legacy LangChain agents to LangGraph\n", "\n", - "Here we focus on how to move from legacy LangChain agents to LangGraph agents.\n", + ":::info Prerequisites\n", + "\n", + "This guide assumes familiarity with the following concepts:\n", + "- [Agents](/docs/concepts/#agents)\n", + "- [LangGraph.js](https://langchain-ai.github.io/langgraphjs/)\n", + "- [Tool calling](/docs/how_to/tool_calling/)\n", + "\n", + ":::\n", + "\n", + "Here we focus on how to move from legacy LangChain agents to more flexible [LangGraph](https://langchain-ai.github.io/langgraphjs/) agents.\n", "LangChain agents (the\n", "[`AgentExecutor`](https://api.js.langchain.com/classes/langchain_agents.AgentExecutor.html)\n", "in particular) have multiple configuration parameters. 
In this notebook we will\n",
@@ -605,9 +614,16 @@
 "id": "c54b374d",
 "metadata": {},
 "source": [
- "Now, let's pass a custom system message to\n",
- "[react agent executor](https://langchain-ai.github.io/langgraphjs/reference/functions/prebuilt.createReactAgent.html).\n",
- "This can either be a string or a LangChain `SystemMessage`.\n"
+ "Now, let's pass a custom system message to [react agent executor](https://langchain-ai.github.io/langgraphjs/reference/functions/prebuilt.createReactAgent.html).\n",
+ "\n",
+ "LangGraph's prebuilt `createReactAgent` does not take a prompt template directly as a parameter, but instead takes a `messageModifier` parameter. This modifies messages before they are passed into the model, and can be one of four values:\n",
+ "\n",
+ "- A `SystemMessage`, which is added to the beginning of the list of messages.\n",
+ "- A `string`, which is converted to a `SystemMessage` and added to the beginning of the list of messages.\n",
+ "- A function, which should take in a list of messages. The output is then passed to the language model.\n",
+ "- Or a [`Runnable`](/docs/concepts/#langchain-expression-language), which should take in a list of messages. The output is then passed to the language model.\n",
+ "\n",
+ "Here's how it looks in action:\n"
 ]
},
{
@@ -1615,6 +1631,18 @@
 "}"
 ]
},
+ {
+ "cell_type": "markdown",
+ "id": "e56203e7",
+ "metadata": {},
+ "source": [
+ "## Next steps\n",
+ "\n",
+ "You've now learned how to migrate your LangChain agent executors to LangGraph.\n",
+ "\n",
+ "Next, check out other [LangGraph how-to guides](https://langchain-ai.github.io/langgraphjs/how-tos/)."
+ ]
+ },
{
"cell_type": "code",
"execution_count": null,
diff --git a/docs/core_docs/docs/how_to/tool_calling.ipynb b/docs/core_docs/docs/how_to/tool_calling.ipynb
index 00822e7c03e8..fac402da9af6 100644
--- a/docs/core_docs/docs/how_to/tool_calling.ipynb
+++ b/docs/core_docs/docs/how_to/tool_calling.ipynb
@@ -46,6 +46,12 @@
 "parameters matching the desired schema, then treat the generated output as your final \n",
 "result.\n",
 "\n",
+ ":::note\n",
+ "\n",
+ "If you only need formatted values, try the [.withStructuredOutput()](/docs/how_to/structured_output/#the-.withstructuredoutput-method) chat model method as a simpler entrypoint.\n",
+ "\n",
+ ":::\n",
+ "\n",
 "However, tool calling goes beyond [structured output](/docs/how_to/structured_output/)\n",
 "since you can pass responses to called tools back to the model to create longer interactions.\n",
 "For instance, given a search engine tool, an LLM might handle a \n",
@@ -105,12 +111,14 @@
 },
 {
 "cell_type": "code",
- "execution_count": 2,
+ "execution_count": 1,
 "metadata": {},
 "outputs": [],
 "source": [
 "import { tool } from \"@langchain/core/tools\";\n",
 "import { z } from \"zod\";\n",
+ "import { ChatOpenAI } from \"@langchain/openai\";\n",
+ "const llm = new ChatOpenAI({ model: \"gpt-4o\", temperature: 0, })\n",
 "\n",
 "/**\n",
 " * Note that the descriptions here are crucial, as they will be passed along\n",
@@ -155,7 +163,7 @@
 },
 {
 "cell_type": "code",
- "execution_count": 3,
+ "execution_count": 2,
 "metadata": {},
 "outputs": [
 {
@@ -164,9 +172,9 @@
 "text": [
 "[\n",
 " {\n",
- " name: 'calculator',\n",
- " args: { operation: 'multiply', number1: 3, number2: 12 },\n",
- " id: 'call_5KWEQgV40XVoY1rqDhwyDmli'\n",
+ " name: \"calculator\",\n",
+ " args: { operation: \"multiply\", number1: 3, number2: 12 },\n",
+ " id: \"call_QraczsExVCpWmD8mY34BnyFL\"\n",
 " }\n",
 "]\n"
 ]
@@ -222,7 +230,7 @@
 },
 {
 "cell_type": "code",
- "execution_count": 4,
+ "execution_count": 3,
 "metadata": {},
 "outputs": [
 {
@@ -233,7 +241,7 @@
" {\n", " name: \"calculator\",\n", " args: \"\",\n", - " id: \"call_rGqPR1ivppYUeBb0iSAF8HGP\",\n", + " id: \"call_MzevUrdu5msUvISEP5TWGQYI\",\n", " index: 0\n", " }\n", "]\n", @@ -275,7 +283,7 @@ }, { "cell_type": "code", - "execution_count": 5, + "execution_count": 4, "metadata": {}, "outputs": [ { @@ -286,13 +294,15 @@ " {\n", " name: \"calculator\",\n", " args: { operation: \"subtract\", number1: 32993, number2: 2339 },\n", - " id: \"call_WMhL5X0fMBBZPNeyUZY53Xuw\"\n", + " id: \"call_dDcRfLQ7L27c50eeSCaHEaIo\"\n", " }\n", "]\n" ] } ], "source": [ + "import { concat } from \"@langchain/core/utils/stream\";\n", + "\n", "const streamWithAccumulation = await llmWithTools.stream(\"What is 32993 - 2339\");\n", "\n", "let final;\n", @@ -300,7 +310,7 @@ " if (!final) {\n", " final = chunk;\n", " } else {\n", - " final = final.concat(chunk);\n", + " final = concat(final, chunk);\n", " }\n", "}\n", "\n", @@ -318,15 +328,21 @@ }, { "cell_type": "code", - "execution_count": 9, + "execution_count": 8, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ - "It seems like you've used an emoji (🦜) in your expression, which I'm not familiar with in a mathematical context. Could you clarify what operation you meant by using the parrot emoji? 
For example, did you mean addition, subtraction, multiplication, or division?\n", - "[]\n" + "\n", + "[\n", + " {\n", + " name: \"calculator\",\n", + " args: { operation: \"multiply\", number1: 3, number2: 12 },\n", + " id: \"call_pVPqABsVEJCpLQRSOv8h3N0I\"\n", + " }\n", + "]\n" ] } ], @@ -346,7 +362,7 @@ }, { "cell_type": "code", - "execution_count": 7, + "execution_count": 10, "metadata": {}, "outputs": [ { @@ -357,7 +373,7 @@ " {\n", " name: \"calculator\",\n", " args: { operation: \"divide\", number1: 3, number2: 12 },\n", - " id: \"call_BDuJv8QkDZ7N7Wsd6v5VDeVa\"\n", + " id: \"call_fSqOSwyJYTKpH1Y7x63JBLik\"\n", " }\n", "]\n" ] @@ -369,7 +385,7 @@ "const res = await llmWithTools.invoke([\n", " new HumanMessage(\"What is 333382 🦜 1932?\"),\n", " new AIMessage({\n", - " content: \"\",\n", + " content: \"The 🦜 operator is shorthand for division, so we call the divide tool.\",\n", " tool_calls: [{\n", " id: \"12345\",\n", " name: \"calulator\",\n", @@ -409,7 +425,7 @@ }, { "cell_type": "code", - "execution_count": 1, + "execution_count": 11, "metadata": {}, "outputs": [ { @@ -423,7 +439,7 @@ " {\n", " name: \u001b[32m\"calculator\"\u001b[39m,\n", " args: { operation: \u001b[32m\"multiply\"\u001b[39m, number1: \u001b[33m119\u001b[39m, number2: \u001b[33m8\u001b[39m },\n", - " id: \u001b[32m\"call_pBlKOPNMRN4AAMkPaOKLLcyj\"\u001b[39m\n", + " id: \u001b[32m\"call_OP8F1LP7B3hwPEc2TzGBOYKP\"\u001b[39m\n", " }\n", " ],\n", " invalid_tool_calls: [],\n", @@ -431,7 +447,7 @@ " function_call: \u001b[90mundefined\u001b[39m,\n", " tool_calls: [\n", " {\n", - " id: \u001b[32m\"call_pBlKOPNMRN4AAMkPaOKLLcyj\"\u001b[39m,\n", + " id: \u001b[32m\"call_OP8F1LP7B3hwPEc2TzGBOYKP\"\u001b[39m,\n", " type: \u001b[32m\"function\"\u001b[39m,\n", " function: \u001b[36m[Object]\u001b[39m\n", " }\n", @@ -446,7 +462,7 @@ " function_call: \u001b[90mundefined\u001b[39m,\n", " tool_calls: [\n", " {\n", - " id: \u001b[32m\"call_pBlKOPNMRN4AAMkPaOKLLcyj\"\u001b[39m,\n", + " id: 
\u001b[32m\"call_OP8F1LP7B3hwPEc2TzGBOYKP\"\u001b[39m,\n", " type: \u001b[32m\"function\"\u001b[39m,\n", " function: {\n", " name: \u001b[32m\"calculator\"\u001b[39m,\n", @@ -463,14 +479,15 @@ " {\n", " name: \u001b[32m\"calculator\"\u001b[39m,\n", " args: { operation: \u001b[32m\"multiply\"\u001b[39m, number1: \u001b[33m119\u001b[39m, number2: \u001b[33m8\u001b[39m },\n", - " id: \u001b[32m\"call_pBlKOPNMRN4AAMkPaOKLLcyj\"\u001b[39m\n", + " id: \u001b[32m\"call_OP8F1LP7B3hwPEc2TzGBOYKP\"\u001b[39m\n", " }\n", " ],\n", - " invalid_tool_calls: []\n", + " invalid_tool_calls: [],\n", + " usage_metadata: { input_tokens: \u001b[33m85\u001b[39m, output_tokens: \u001b[33m24\u001b[39m, total_tokens: \u001b[33m109\u001b[39m }\n", "}" ] }, - "execution_count": 1, + "execution_count": 11, "metadata": {}, "output_type": "execute_result" } @@ -533,15 +550,12 @@ "name": "deno" }, "language_info": { - "codemirror_mode": { - "mode": "typescript", - "name": "javascript", - "typescript": true - }, "file_extension": ".ts", - "mimetype": "text/typescript", + "mimetype": "text/x.typescript", "name": "typescript", - "version": "3.7.2" + "nb_converter": "script", + "pygments_lexer": "typescript", + "version": "5.3.3" } }, "nbformat": 4, diff --git a/docs/core_docs/docs/integrations/chat/ollama_functions.mdx b/docs/core_docs/docs/integrations/chat/ollama_functions.mdx index fece11d8de1b..19aa6e78ead5 100644 --- a/docs/core_docs/docs/integrations/chat/ollama_functions.mdx +++ b/docs/core_docs/docs/integrations/chat/ollama_functions.mdx @@ -10,6 +10,12 @@ that gives it the same API as OpenAI Functions. Note that more powerful and capable models will perform better with complex schema and/or multiple functions. The examples below use [Mistral](https://ollama.ai/library/mistral). +:::warning + +This is an experimental wrapper that attempts to bolt-on tool calling support to models that do not natively support it. Use with caution. 
+ +::: + ## Setup Follow [these instructions](https://github.com/jmorganca/ollama) to set up and run a local Ollama instance. diff --git a/docs/core_docs/docs/introduction.mdx b/docs/core_docs/docs/introduction.mdx index 1e5752daa2a7..498a49ad28df 100644 --- a/docs/core_docs/docs/introduction.mdx +++ b/docs/core_docs/docs/introduction.mdx @@ -8,9 +8,10 @@ sidebar_position: 0 LangChain simplifies every stage of the LLM application lifecycle: -- **Development**: Build your applications using LangChain's open-source [building blocks](/docs/how_to/#langchain-expression-language-lcel) and [components](/docs/how_to/). Hit the ground running using [third-party integrations](/docs/integrations/platforms/). -- **Productionization**: Use [LangSmith](https://docs.smith.langchain.com) to inspect, monitor and evaluate your chains, so that you can continuously optimize and deploy with confidence. -- **Deployment**: Turn any chain into an API with [LangServe](https://www.langchain.com/langserve). +- **Development**: Build your applications using LangChain's open-source [building blocks](/docs/concepts#langchain-expression-language), [components](/docs/concepts), and [third-party integrations](/docs/integrations/platforms/). + Use [LangGraph.js](/docs/concepts/#langgraphjs) to build stateful agents with first-class streaming and human-in-the-loop support. +- **Productionization**: Use [LangSmith](https://docs.smith.langchain.com/) to inspect, monitor and evaluate your chains, so that you can continuously optimize and deploy with confidence. +- **Deployment**: Turn your LangGraph applications into production-ready APIs and Assistants with [LangGraph Cloud](https://langchain-ai.github.io/langgraph/cloud/) (currently Python-only). 
import ThemedImage from "@theme/ThemedImage"; import useBaseUrl from "@docusaurus/useBaseUrl"; @@ -18,8 +19,8 @@ import useBaseUrl from "@docusaurus/useBaseUrl"; @@ -30,7 +31,7 @@ Concretely, the framework consists of the following open-source libraries: - **`@langchain/community`**: Third party integrations. - Partner packages (e.g. **`@langchain/openai`**, **`@langchain/anthropic`**, etc.): Some integrations have been further split into their own lightweight packages that only depend on **`@langchain/core`**. - **`langchain`**: Chains, agents, and retrieval strategies that make up an application's cognitive architecture. -- **[langgraph](https://www.langchain.com/langserveh)**: Build robust and stateful multi-actor applications with LLMs by modeling steps as edges and nodes in a graph. +- **[LangGraph.js](https://langchain-ai.github.io/langgraphjs/)**: Build robust and stateful multi-actor applications with LLMs by modeling steps as edges and nodes in a graph. - **[LangSmith](https://docs.smith.langchain.com)**: A developer platform that lets you debug, test, evaluate, and monitor LLM applications. :::note @@ -49,8 +50,9 @@ These are the best ones to get started with: - [Build a Simple LLM Application](/docs/tutorials/llm_chain) - [Build a Chatbot](/docs/tutorials/chatbot) - [Build an Agent](/docs/tutorials/agents) +- [LangGraph.js quickstart](https://langchain-ai.github.io/langgraphjs/tutorials/quickstart/) -Explore the full list of tutorials [here](/docs/tutorials). +Explore the full list of LangChain tutorials [here](/docs/tutorials), and check out other [LangGraph tutorials here](https://langchain-ai.github.io/langgraphjs/tutorials/). ## [How-To Guides](/docs/how_to/) @@ -58,10 +60,14 @@ Explore the full list of tutorials [here](/docs/tutorials). These how-to guides don't cover topics in depth - you'll find that material in the [Tutorials](/docs/tutorials) and the [API Reference](https://v02.api.js.langchain.com). 
However, these guides will help you quickly accomplish common tasks. +Check out [LangGraph-specific how-tos here](https://langchain-ai.github.io/langgraphjs/how-tos/). + ## [Conceptual Guide](/docs/concepts) Introductions to all the key parts of LangChain you'll need to know! [Here](/docs/concepts) you'll find high level explanations of all LangChain concepts. +For a deeper dive into LangGraph concepts, check out [this page](https://langchain-ai.github.io/langgraph/concepts/). + ## [API reference](https://api.js.langchain.com) Head to the reference section for full documentation of all classes and methods in the LangChain Python packages. diff --git a/docs/core_docs/static/svg/langchain_stack_062024.svg b/docs/core_docs/static/svg/langchain_stack_062024.svg new file mode 100644 index 000000000000..f7b01d3e7cb1 --- /dev/null +++ b/docs/core_docs/static/svg/langchain_stack_062024.svg @@ -0,0 +1,40 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + diff --git a/docs/core_docs/static/svg/langchain_stack_062024_dark.svg b/docs/core_docs/static/svg/langchain_stack_062024_dark.svg new file mode 100644 index 000000000000..0571dc3b8579 --- /dev/null +++ b/docs/core_docs/static/svg/langchain_stack_062024_dark.svg @@ -0,0 +1,47 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + diff --git a/langchain-core/README.md b/langchain-core/README.md index f72ec0a52106..250433af9657 100644 --- a/langchain-core/README.md +++ b/langchain-core/README.md @@ -85,7 +85,7 @@ Rather than having to write multiple implementations for all of those, LCEL allo For more check out the [LCEL docs](https://js.langchain.com/v0.2/docs/concepts#langchain-expression-language). 
-![LangChain Stack](../docs/core_docs/static/img/langchain_stack_feb_2024.webp) +![LangChain Stack](../docs/core_docs/static/svg/langchain_stack_062024.svg) ## 📕 Releases & Versioning diff --git a/langchain/README.md b/langchain/README.md index 9551ca4b6dd4..6980c27c318d 100644 --- a/langchain/README.md +++ b/langchain/README.md @@ -51,7 +51,7 @@ The LangChain libraries themselves are made up of several different packages. Integrations may also be split into their own compatible packages. -![LangChain Stack](https://github.com/langchain-ai/langchainjs/blob/main/docs/core_docs/static/img/langchain_stack_feb_2024.webp) +![LangChain Stack](https://github.com/langchain-ai/langchainjs/blob/main/docs/core_docs/static/svg/langchain_stack_062024.svg) This library aims to assist in the development of those types of applications. Common examples of these applications include: diff --git a/libs/langchain-community/README.md b/libs/langchain-community/README.md index 532e2c7febf7..3a44ecc9919e 100644 --- a/libs/langchain-community/README.md +++ b/libs/langchain-community/README.md @@ -40,7 +40,7 @@ The field you need depends on the package manager you're using, but we recommend LangChain Community contains third-party integrations that implement the base interfaces defined in LangChain Core, making them ready-to-use in any LangChain application. 
-![LangChain Stack](../../docs/core_docs/static/img/langchain_stack_feb_2024.webp) +![LangChain Stack](../../docs/core_docs/static/svg/langchain_stack_062024.svg) ## 📕 Releases & Versioning From b0a727b9f63592b57d48be22e83881ab477489a3 Mon Sep 17 00:00:00 2001 From: jacoblee93 Date: Fri, 28 Jun 2024 17:28:09 -0700 Subject: [PATCH 2/4] Bump community core dep --- libs/langchain-community/package.json | 2 +- yarn.lock | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-) diff --git a/libs/langchain-community/package.json b/libs/langchain-community/package.json index 769e989e0d80..0f73ca4ce032 100644 --- a/libs/langchain-community/package.json +++ b/libs/langchain-community/package.json @@ -35,7 +35,7 @@ "author": "LangChain", "license": "MIT", "dependencies": { - "@langchain/core": "~0.2.9", + "@langchain/core": "~0.2.11", "@langchain/openai": "~0.1.0", "binary-extensions": "^2.2.0", "expr-eval": "^2.0.2", diff --git a/yarn.lock b/yarn.lock index 537651c150fe..1442f4022322 100644 --- a/yarn.lock +++ b/yarn.lock @@ -10150,7 +10150,7 @@ __metadata: "@gradientai/nodejs-sdk": ^1.2.0 "@huggingface/inference": ^2.6.4 "@jest/globals": ^29.5.0 - "@langchain/core": ~0.2.9 + "@langchain/core": ~0.2.11 "@langchain/openai": ~0.1.0 "@langchain/scripts": ~0.0.14 "@langchain/standard-tests": 0.0.0 From bc26173344a1758c88cdac9c293bd25aae5be273 Mon Sep 17 00:00:00 2001 From: jacoblee93 Date: Fri, 28 Jun 2024 17:50:17 -0700 Subject: [PATCH 3/4] Update test --- .../src/document_loaders/tests/csv-blob.test.ts | 2 ++ 1 file changed, 2 insertions(+) diff --git a/libs/langchain-community/src/document_loaders/tests/csv-blob.test.ts b/libs/langchain-community/src/document_loaders/tests/csv-blob.test.ts index 5790ce883c66..9e7b9adbb839 100644 --- a/libs/langchain-community/src/document_loaders/tests/csv-blob.test.ts +++ b/libs/langchain-community/src/document_loaders/tests/csv-blob.test.ts @@ -46,6 +46,7 @@ test("Test CSV loader from blob", async () => { expect(docs.length).toBe(2); 
expect(docs[0]).toMatchInlineSnapshot(` Document { + "id": undefined, "metadata": { "blobType": "text/csv", "line": 1, @@ -57,6 +58,7 @@ test("Test CSV loader from blob", async () => { `); expect(docs[1]).toMatchInlineSnapshot(` Document { + "id": undefined, "metadata": { "blobType": "text/csv", "line": 2, From 4fb9df51e4e5271b3847f83da32df8c863bf0a39 Mon Sep 17 00:00:00 2001 From: jacoblee93 Date: Fri, 28 Jun 2024 17:52:23 -0700 Subject: [PATCH 4/4] Adds tracing howto and concepts --- docs/core_docs/docs/concepts.mdx | 11 +++++++++++ docs/core_docs/docs/how_to/index.mdx | 14 +++++++++++++- 2 files changed, 24 insertions(+), 1 deletion(-) diff --git a/docs/core_docs/docs/concepts.mdx b/docs/core_docs/docs/concepts.mdx index f2162ae0c773..f28f6e211de9 100644 --- a/docs/core_docs/docs/concepts.mdx +++ b/docs/core_docs/docs/concepts.mdx @@ -1138,6 +1138,17 @@ This process is vital for building reliable applications. To learn more, check out [this LangSmith guide](https://docs.smith.langchain.com/concepts/evaluation). +### Tracing + + + +A trace is essentially a series of steps that your application takes to go from input to output. +Traces contain individual steps called `runs`. These can be individual calls from a model, retriever, +tool, or sub-chains. +Tracing gives you observability inside your chains and agents, and is vital in diagnosing issues. + +For a deeper dive, check out [this LangSmith conceptual guide](https://docs.smith.langchain.com/concepts/tracing). 
+ ### Generative UI LangChain.js provides a few templates and examples showing off generative UI, diff --git a/docs/core_docs/docs/how_to/index.mdx b/docs/core_docs/docs/how_to/index.mdx index 4027385c504a..fba45f0ce5dc 100644 --- a/docs/core_docs/docs/how_to/index.mdx +++ b/docs/core_docs/docs/how_to/index.mdx @@ -285,7 +285,8 @@ LangSmith allows you to closely trace, monitor and evaluate your LLM application It seamlessly integrates with LangChain and LangGraph.js, and you can use it to inspect and debug individual steps of your chains as you build. LangSmith documentation is hosted on a separate site. -You can peruse [LangSmith how-to guides here](https://docs.smith.langchain.com/how_to_guides/). +You can peruse [LangSmith how-to guides here](https://docs.smith.langchain.com/how_to_guides/), but we'll highlight a few sections that are particularly +relevant to LangChain below: ### Evaluation @@ -295,3 +296,14 @@ Evaluating performance is a vital part of building LLM-powered applications. LangSmith helps with every step of the process from creating a dataset to defining metrics to running evaluators. To learn more, check out the [LangSmith evaluation how-to guides](https://docs.smith.langchain.com/how_to_guides#evaluation). + +### Tracing + + + +Tracing gives you observability inside your chains and agents, and is vital in diagnosing issues. + +- [How to: trace with LangChain](https://docs.smith.langchain.com/how_to_guides/tracing/trace_with_langchain) +- [How to: add metadata and tags to traces](https://docs.smith.langchain.com/how_to_guides/tracing/trace_with_langchain#add-metadata-and-tags-to-traces) + +You can see general tracing-related how-tos [in this section of the LangSmith docs](https://docs.smith.langchain.com/how_to_guides/tracing).