docs: Add RAG example using OllamaEmbeddings and ChatOllama (#337)
davidmigloz committed Feb 21, 2024
1 parent c55fe50 commit 8bddc6c
Showing 4 changed files with 106 additions and 4 deletions.
49 changes: 49 additions & 0 deletions docs/modules/model_io/models/chat_models/integrations/ollama.md
@@ -128,3 +128,52 @@ final res = await chatModel.invoke(PromptValue.chat([prompt]));
print(res.firstOutputAsString);
// -> 'An Apple'
```

## RAG (Retrieval-Augmented Generation) pipeline

We can easily create a fully local RAG pipeline using `OllamaEmbeddings` and `ChatOllama`.

```dart
// 1. Create a vector store and add documents to it
final vectorStore = MemoryVectorStore(
  embeddings: OllamaEmbeddings(model: 'llama2'),
);
await vectorStore.addDocuments(
  documents: [
    Document(pageContent: 'LangChain was created by Harrison'),
    Document(pageContent: 'David ported LangChain to Dart in LangChain.dart'),
  ],
);

// 2. Construct a RAG prompt template
final promptTemplate = ChatPromptTemplate.fromTemplates([
  (ChatMessageType.system, 'Answer the question based only on the following context:\n{context}'),
  (ChatMessageType.human, '{question}'),
]);

// 3. Define the model to use and the vector store retriever
final chatModel = ChatOllama(
  defaultOptions: ChatOllamaOptions(model: 'llama2'),
);
final retriever = vectorStore.asRetriever(
  defaultOptions: VectorStoreRetrieverOptions(
    searchType: VectorStoreSimilaritySearch(k: 1),
  ),
);

// 4. Create a Runnable that combines the retrieved documents into a single string
final docCombiner = Runnable.fromFunction<List<Document>, String>((docs, _) {
  return docs.map((d) => d.pageContent).join('\n');
});

// 5. Define the RAG pipeline
final chain = Runnable.fromMap<String>({
  'context': retriever.pipe(docCombiner),
  'question': Runnable.passthrough(),
}).pipe(promptTemplate).pipe(chatModel).pipe(StringOutputParser());

// 6. Run the pipeline
final res = await chain.invoke('Who created LangChain.dart?');
print(res);
// -> 'Based on the context provided, David created LangChain.dart.'
```
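Under the hood, `VectorStoreSimilaritySearch(k: 1)` embeds the query and returns the stored document whose embedding is most similar, typically by cosine similarity. The sketch below illustrates that mechanic in Python for clarity only; it is not LangChain.dart code, and its bag-of-words `embed` function is a toy stand-in for a real embedding model such as the one `OllamaEmbeddings` wraps.

```python
import math
import re

def embed(text):
    # Toy "embedding": a bag-of-words count vector. A real vector store
    # would call an embedding model (e.g. via Ollama) here instead.
    counts = {}
    for token in re.findall(r"[a-z0-9]+", text.lower()):
        counts[token] = counts.get(token, 0) + 1
    return counts

def cosine(a, b):
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[t] * b.get(t, 0) for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def similarity_search(query, docs, k=1):
    # Rank every stored document by similarity to the query; keep the top k.
    qv = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(qv, embed(d)), reverse=True)
    return ranked[:k]

docs = [
    'LangChain was created by Harrison',
    'David ported LangChain to Dart in LangChain.dart',
]
top = similarity_search('Who created LangChain.dart?', docs, k=1)
print(top[0])
# -> David ported LangChain to Dart in LangChain.dart
```

Because `k` is 1, only the single best-matching document is handed to the prompt as context; raising `k` trades prompt length for recall.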
@@ -10,6 +10,7 @@ void main(final List<String> arguments) async {
await _chatOllamaStreaming();
await _chatOllamaJsonMode();
await _chatOllamaMultimodal();
await _rag();
}

Future<void> _chatOllama() async {
@@ -108,3 +109,54 @@ Future<void> _chatOllamaMultimodal() async {
print(res.firstOutputAsString);
// -> 'An Apple'
}

Future<void> _rag() async {
  // 1. Create a vector store and add documents to it
  final vectorStore = MemoryVectorStore(
    embeddings: OllamaEmbeddings(model: 'llama2'),
  );
  await vectorStore.addDocuments(
    documents: [
      const Document(pageContent: 'LangChain was created by Harrison'),
      const Document(
        pageContent: 'David ported LangChain to Dart in LangChain.dart',
      ),
    ],
  );

  // 2. Construct a RAG prompt template
  final promptTemplate = ChatPromptTemplate.fromTemplates(const [
    (
      ChatMessageType.system,
      'Answer the question based only on the following context:\n{context}',
    ),
    (ChatMessageType.human, '{question}'),
  ]);

  // 3. Define the model to use and the vector store retriever
  final chatModel = ChatOllama(
    defaultOptions: const ChatOllamaOptions(model: 'llama2'),
  );
  final retriever = vectorStore.asRetriever(
    defaultOptions: const VectorStoreRetrieverOptions(
      searchType: VectorStoreSimilaritySearch(k: 1),
    ),
  );

  // 4. Create a Runnable that combines the retrieved documents into a single string
  final docCombiner =
      Runnable.fromFunction<List<Document>, String>((final docs, final _) {
    return docs.map((final d) => d.pageContent).join('\n');
  });

  // 5. Define the RAG pipeline
  final chain = Runnable.fromMap<String>({
    'context': retriever.pipe(docCombiner),
    'question': Runnable.passthrough(),
  }).pipe(promptTemplate).pipe(chatModel).pipe(const StringOutputParser());

  // 6. Run the pipeline
  final res = await chain.invoke('Who created LangChain.dart?');
  print(res);
  // -> 'Based on the context provided, David created LangChain.dart.'
}
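The composition pattern in step 5 above (fan the input out through a map of sub-chains, then pipe the collected result through the prompt, model, and parser) can be sketched in a few lines of Python. This is an illustration of the pattern only, not the LangChain.dart API; the `retriever` and `prompt` stand-ins below are hypothetical.

```python
class Runnable:
    """Minimal runnable: wraps a function and supports pipe composition."""

    def __init__(self, fn):
        self.fn = fn

    def invoke(self, value):
        return self.fn(value)

    def pipe(self, other):
        # Compose: run self, then feed the result into `other`.
        return Runnable(lambda value: other.invoke(self.invoke(value)))

    @staticmethod
    def from_map(mapping):
        # Fan the same input out to every sub-runnable, collecting a dict.
        return Runnable(
            lambda value: {k: r.invoke(value) for k, r in mapping.items()}
        )

    @staticmethod
    def passthrough():
        return Runnable(lambda value: value)

# Stand-ins for the retriever and prompt template in the Dart code above.
retriever = Runnable(lambda q: 'David ported LangChain to Dart')
prompt = Runnable(
    lambda d: f"context: {d['context']} | question: {d['question']}"
)

chain = Runnable.from_map({
    'context': retriever,
    'question': Runnable.passthrough(),
}).pipe(prompt)

print(chain.invoke('Who created LangChain.dart?'))
# -> context: David ported LangChain to Dart | question: Who created LangChain.dart?
```

The key point is that the map stage receives the raw question once and produces a dict whose keys match the prompt template's variables, so each later stage only needs to know the shape of its immediate input.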
8 changes: 5 additions & 3 deletions packages/langchain/README.md
@@ -85,7 +85,7 @@ dependencies:
...
```

The most basic building block of LangChain.dart is calling an LLM on some prompt. LangChain.dart provides a unified interface for calling different LLMs. For example, we can use `ChatGoogleGenerativeAI` to call Google's Gemini model:

```dart
final model = ChatGoogleGenerativeAI(apiKey: googleApiKey);
@@ -116,7 +116,7 @@ final promptTemplate = ChatPromptTemplate.fromTemplates([
// 3. Create a Runnable that combines the retrieved documents into a single string
final docCombiner = Runnable.fromFunction<List<Document>, String>((docs, _) {
  return docs.map((d) => d.pageContent).join('\n');
});
// 4. Define the RAG pipeline
@@ -143,7 +143,9 @@ print(res);

## Community

Stay up-to-date on the latest news and updates in the field, have great discussions, and get help in the official [LangChain.dart Discord server](https://discord.gg/x4qbhqecVR).

[![LangChain.dart Discord server](https://invidget.switchblade.xyz/x4qbhqecVR?theme=light)](http://discord.gg/x4qbhqecVR)

## Contribute

@@ -32,7 +32,6 @@ void main() {
'mistralai/mistral-small',
];
for (final model in models) {
- print('Testing model: $model');
final res = await chatModel.invoke(
PromptValue.string(
'List the numbers from 1 to 9 in order. '
