# LangChain Expression Language (LCEL)

In this section:

- [Interface](/expression_language/interface.md): The base interface shared by all LCEL objects.
- Cookbook: Examples of common LCEL usage patterns.

LangChain Expression Language, or LCEL, is a declarative way to easily compose chains together. Any chain constructed this way automatically has full sync, async, and streaming support. LCEL was designed from day 1 to support putting prototypes in production, with no code changes, from the simplest “prompt + LLM” chain to the most complex chains (we’ve seen folks successfully run LCEL chains with hundreds of steps in production). To highlight a few of the reasons you might want to use LCEL:

- **Streaming support:** When you build your chains with LCEL you get the best possible time-to-first-token (the time elapsed until the first chunk of output comes out). For some chains this means, e.g., that we stream tokens straight from an LLM to a streaming output parser, and you get back parsed, incremental chunks of output at the same rate as the LLM provider emits the raw tokens.
- **Optimized parallel execution:** Whenever your LCEL chains have steps that can be executed in parallel (e.g. if you fetch documents from multiple retrievers), we automatically do it for the smallest possible latency.
- **Retries and fallbacks:** Configure retries and fallbacks for any part of your LCEL chain. This is a great way to make your chains more reliable at scale.
- **Access intermediate results:** For more complex chains it’s often very useful to access the results of intermediate steps even before the final output is produced. This can be used to let end users know something is happening, or simply to debug your chain.
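
To make the streaming point concrete, here is a minimal sketch. It reuses the `promptTemplate`, `model`, and `outputParser` objects built in the Get started guide below, and assumes the composed chain exposes the `stream()` method from the `Runnable` interface:

```dart
final chain = promptTemplate.pipe(model).pipe(outputParser);
// Chunks are printed as soon as the model emits them, instead of
// waiting for the full response to complete.
await for (final chunk in chain.stream({'topic': 'ice cream'})) {
  print(chunk);
}
```
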
# Get started

LCEL makes it easy to build complex chains from basic components, and supports out-of-the-box functionality such as streaming, parallelism, and logging.

## Basic example: prompt + model + output parser

The most basic and common use case is chaining a prompt template and a model together. To see how this works, let’s create a chain that takes a topic and generates a joke:

```dart
final openaiApiKey = Platform.environment['OPENAI_API_KEY'];
final promptTemplate = ChatPromptTemplate.fromTemplate(
  'Tell me a joke about {topic}',
);
final model = ChatOpenAI(apiKey: openaiApiKey);
const outputParser = StringOutputParser<AIChatMessage>();
final chain = promptTemplate.pipe(model).pipe(outputParser);
final res = await chain.invoke({'topic': 'ice cream'});
print(res);
// Why did the ice cream truck break down?
// Because it had too many "scoops"!
```

Notice this line of code, where we piece together the different components into a single chain using LCEL:

```dart
final chain = promptTemplate.pipe(model).pipe(outputParser);
```

The `.pipe()` method (or `|` operator) is similar to a Unix pipe operator: it chains together the different components, feeding the output of one component as input into the next.
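
If you prefer operator syntax, the same chain can also be written with `|`. This is a minimal sketch, assuming the `|` operator composes `Runnable`s exactly like `.pipe()` does:

```dart
// Equivalent composition using the | operator instead of .pipe().
final chainWithOperator = promptTemplate | model | outputParser;
final joke = await chainWithOperator.invoke({'topic': 'ice cream'});
print(joke);
```
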
In this chain the user input is passed to the prompt template, then the prompt template output is passed to the model, then the model output is passed to the output parser. Let’s take a look at each component individually to really understand what’s going on.

## 1. Prompt

`promptTemplate` is a `BasePromptTemplate`, which means it takes in a map of template variables and produces a `PromptValue`. A `PromptValue` is a wrapper around a completed prompt that can be passed to either an `LLM` (which takes a string as input) or a `ChatModel` (which takes a sequence of messages as input). It can work with either language model type because it defines logic both for producing `ChatMessage`s and for producing a string.

```dart
final promptValue = await promptTemplate.invoke({'topic': 'ice cream'});
final messages = promptValue.toChatMessages();
print(messages);
// [HumanChatMessage{
//   content: ChatMessageContentText{
//     text: Tell me a joke about ice cream,
//   },
// }]
final string = promptValue.toString();
print(string);
// Human: Tell me a joke about ice cream
```

## 2. Model

The `PromptValue` is then passed to `model`. In this case our `model` is a `ChatModel`, meaning it will output a `ChatMessage`.

```dart
final chatOutput = await model.invoke(promptValue);
print(chatOutput.firstOutput);
// AIChatMessage{
//   content: Why did the ice cream truck break down?
//            Because it couldn't make it over the rocky road!,
// }
```

If our model were an `LLM`, it would output a `String`.

```dart
final llm = OpenAI(apiKey: openaiApiKey);
final llmOutput = await llm.invoke(promptValue);
print(llmOutput.firstOutput);
// Why did the ice cream go to therapy?
// Because it had a rocky road!
```

## 3. Output parser

Lastly, we pass our `model` output to the `outputParser`, which is a `BaseOutputParser`, meaning it takes either a `String` or a `ChatMessage` as input. The `StringOutputParser` simply converts any input into a `String`.

```dart
final parsed = await outputParser.invoke(chatOutput);
print(parsed);
// Why did the ice cream go to therapy?
// Because it had a rocky road!
```

## 4. Entire Pipeline

To follow the steps along:

1. We pass in the user input on the desired topic as `{'topic': 'ice cream'}`.
2. The `promptTemplate` component takes the user input and uses the `topic` to construct a `PromptValue`.
3. The `model` component takes the generated prompt and passes it to the OpenAI chat model for evaluation. The generated output from the model is a `ChatMessage` object (specifically an `AIChatMessage`).
4. Finally, the `outputParser` component takes in a `ChatMessage` and transforms it into a `String`, which is returned from the invoke method.

Note that if you’re curious about the output of any component, you can always test out a smaller version of the chain, such as `promptTemplate` or `promptTemplate.pipe(model)`, to see the intermediate results.
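
For example, invoking just `promptTemplate.pipe(model)` (leaving the output parser off) returns the raw chat result, which you can inspect directly. A minimal sketch, reusing the objects defined above:

```dart
// Inspect the intermediate result before output parsing.
final partialChain = promptTemplate.pipe(model);
final chatResult = await partialChain.invoke({'topic': 'ice cream'});
print(chatResult.firstOutput); // the raw AIChatMessage produced by the model
```
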
## RAG Search Example

For our next example, we want to run a retrieval-augmented generation (RAG) chain to add some context when responding to questions.

```dart
final openaiApiKey = Platform.environment['OPENAI_API_KEY'];

// 1. Create a vector store and add documents to it
final vectorStore = MemoryVectorStore(
  embeddings: OpenAIEmbeddings(apiKey: openaiApiKey),
);
await vectorStore.addDocuments(
  documents: [
    Document(pageContent: 'LangChain was created by Harrison'),
    Document(pageContent: 'David ported LangChain to Dart in LangChain.dart'),
  ],
);

// 2. Construct a RAG prompt template
final promptTemplate = ChatPromptTemplate.fromTemplates([
  (ChatMessageType.system, 'Answer the question based on only the following context:\n{context}'),
  (ChatMessageType.human, '{question}'),
]);

// 3. Create a Runnable that combines the retrieved documents into a single string
final docCombiner = Runnable.fromFunction<List<Document>, String>((docs, _) {
  return docs.map((final d) => d.pageContent).join('\n');
});

// 4. Define the RAG pipeline
final chain = Runnable.fromMap<String>({
  'context': vectorStore.asRetriever().pipe(docCombiner),
  'question': Runnable.passthrough(),
})
    .pipe(promptTemplate)
    .pipe(ChatOpenAI(apiKey: openaiApiKey))
    .pipe(StringOutputParser());

// 5. Run the pipeline
final res = await chain.invoke('Who created LangChain.dart?');
print(res);
// David created LangChain.dart
```

In this chain we add some extra logic around retrieving context from a vector store.

We first instantiate our vector store and add some documents to it. Then we define our prompt, which takes in two input variables:

- `context` -> a string returned from our vector store based on a semantic search over the input.
- `question` -> the question we want to ask.

In our `chain`, we use a `RunnableMap`, a special type of runnable that takes a map of runnables and executes them all in parallel. It then returns a map with the same keys as the input, but with each value replaced by the output of the corresponding runnable.
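
To see this behavior in isolation, here is a minimal standalone sketch. The keys and functions are hypothetical and not part of the RAG chain above; it only assumes the same `Runnable.fromMap`, `Runnable.passthrough`, and `Runnable.fromFunction` constructors used in this guide:

```dart
final demoMap = Runnable.fromMap<String>({
  // Both sub-runnables receive the same input and run in parallel.
  'original': Runnable.passthrough(),
  'shouting': Runnable.fromFunction<String, String>(
    (final input, final _) => input.toUpperCase(),
  ),
});
final result = await demoMap.invoke('hello');
print(result);
// {original: hello, shouting: HELLO}
```
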
In our case, it has two sub-chains that get the data required by our prompt:

- `context` -> the retriever piped into a `RunnableFunction`, which takes the input from the `.invoke()` call, fetches the relevant documents from our vector store, and combines them into a single `String`.
- `question` -> a `RunnablePassthrough`, which simply passes the input through unchanged, so the original question ends up under the `question` key of the resulting map.

Finally, we chain together the prompt, model, and output parser as before.
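
As with the simpler chain, you can invoke just one piece of this pipeline to inspect intermediate results. For instance, running only the `context` sub-chain shows the exact string that will be injected into the prompt. A minimal sketch, reusing the `vectorStore` and `docCombiner` defined above:

```dart
// Retrieval + combining only: returns the page content of the
// retrieved documents, joined by newlines.
final contextChain = vectorStore.asRetriever().pipe(docCombiner);
final context = await contextChain.invoke('Who created LangChain.dart?');
print(context);
```
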
The complete runnable example for this guide lives at `examples/docs_examples/bin/expression_language/get_started.dart`:

```dart
// ignore_for_file: avoid_print
import 'dart:io';

import 'package:langchain/langchain.dart';
import 'package:langchain_openai/langchain_openai.dart';

void main(final List<String> arguments) async {
  await _promptModelOutputParser();
  await _ragSearch();
}

Future<void> _promptModelOutputParser() async {
  final openaiApiKey = Platform.environment['OPENAI_API_KEY'];

  final promptTemplate = ChatPromptTemplate.fromTemplate(
    'Tell me a joke about {topic}',
  );
  final model = ChatOpenAI(apiKey: openaiApiKey);
  const outputParser = StringOutputParser<AIChatMessage>();

  final chain = promptTemplate.pipe(model).pipe(outputParser);

  final res = await chain.invoke({'topic': 'ice cream'});
  print(res);
  // Why did the ice cream truck break down?
  // Because it had too many "scoops"!

  // 1. Prompt

  final promptValue = await promptTemplate.invoke({'topic': 'ice cream'});

  final messages = promptValue.toChatMessages();
  print(messages);
  // [HumanChatMessage{
  //   content: ChatMessageContentText{
  //     text: Tell me a joke about ice cream,
  //   },
  // }]

  final string = promptValue.toString();
  print(string);
  // Human: Tell me a joke about ice cream

  // 2. Model

  final chatOutput = await model.invoke(promptValue);
  print(chatOutput.firstOutput);
  // AIChatMessage{
  //   content: Why did the ice cream truck break down?
  //            Because it couldn't make it over the rocky road!,
  // }

  final llm = OpenAI(apiKey: openaiApiKey);
  final llmOutput = await llm.invoke(promptValue);
  print(llmOutput.firstOutput);
  // Why did the ice cream go to therapy?
  // Because it had a rocky road!

  // 3. Output parser

  final parsed = await outputParser.invoke(chatOutput);
  print(parsed);
  // Why did the ice cream go to therapy?
  // Because it had a rocky road!
}

Future<void> _ragSearch() async {
  final openaiApiKey = Platform.environment['OPENAI_API_KEY'];

  // 1. Create a vector store and add documents to it
  final vectorStore = MemoryVectorStore(
    embeddings: OpenAIEmbeddings(apiKey: openaiApiKey),
  );
  await vectorStore.addDocuments(
    documents: [
      const Document(pageContent: 'LangChain was created by Harrison'),
      const Document(
        pageContent: 'David ported LangChain to Dart in LangChain.dart',
      ),
    ],
  );

  // 2. Construct a RAG prompt template
  final promptTemplate = ChatPromptTemplate.fromTemplates(const [
    (
      ChatMessageType.system,
      'Answer the question based on only the following context:\n{context}',
    ),
    (ChatMessageType.human, '{question}'),
  ]);

  // 3. Create a Runnable that combines the retrieved documents into a single string
  final docCombiner =
      Runnable.fromFunction<List<Document>, String>((final docs, final _) {
    return docs.map((final d) => d.pageContent).join('\n');
  });

  // 4. Define the RAG pipeline
  final chain = Runnable.fromMap<String>({
    'context': vectorStore.asRetriever().pipe(docCombiner),
    'question': Runnable.passthrough(),
  })
      .pipe(promptTemplate)
      .pipe(ChatOpenAI(apiKey: openaiApiKey))
      .pipe(const StringOutputParser());

  // 5. Run the pipeline
  final res = await chain.invoke('Who created LangChain.dart?');
  print(res);
  // David created LangChain.dart
}
```