A powerful RAG (Retrieval-Augmented Generation) system that enhances AI interactions by providing relevant context from your local files. Built primarily for gptme, but can be used standalone.
RAG systems improve AI responses by retrieving and incorporating relevant information from a knowledge base into the generation process. This leads to more accurate, contextual, and factual responses.
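In sketch form, the retrieve-then-generate loop looks like this (a minimal illustration with placeholder functions, not gptme-rag's API):

```python
# Minimal sketch of the retrieve-then-generate loop behind RAG.
# `search_index` and `call_llm` are stand-in placeholders, not gptme-rag's API.

def search_index(question: str, n_results: int = 5) -> list[str]:
    """Placeholder retriever: a real system queries a vector store."""
    return ["gptme-rag indexes local files into ChromaDB."]

def call_llm(prompt: str) -> str:
    """Placeholder generator: a real system calls a language model."""
    return f"(answer grounded in {len(prompt)} chars of context)"

def answer_with_rag(question: str) -> str:
    documents = search_index(question)   # 1. retrieve relevant documents
    context = "\n\n".join(documents)     # 2. augment the prompt with them
    return call_llm(f"Context:\n{context}\n\nQuestion: {question}")  # 3. generate

print(answer_with_rag("What does gptme-rag index?"))
```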
- 📚 Document indexing with ChromaDB
  - Fast and efficient vector storage
  - Semantic search capabilities
  - Persistent storage
- 🔍 Semantic search with embeddings
  - Relevance scoring
  - Token-aware context assembly
  - Clean output formatting
- 📄 Smart document processing
  - Streaming large file handling
  - Automatic document chunking
  - Configurable chunk size/overlap
  - Document reconstruction
- 🔄 File watching and auto-indexing
  - Real-time index updates
  - Pattern-based file filtering
  - Efficient batch processing
  - Automatic persistence
- 🛠️ CLI interface for testing and development
  - Index management
  - Search functionality
  - Context assembly
  - File watching
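Conceptually, indexing and search rest on a vector store. A minimal ChromaDB sketch of the same flow (an illustration of the idea, not gptme-rag's internal code):

```python
# Conceptual sketch of indexing + semantic search on a ChromaDB collection.
# Shows the underlying idea; this is not gptme-rag's internal code.
import chromadb

client = chromadb.PersistentClient(path="./index")   # persistent vector storage
collection = client.get_or_create_collection("docs")

# Indexing: documents are embedded and stored as vectors.
collection.add(
    ids=["readme", "guide"],
    documents=[
        "gptme-rag indexes local files for retrieval.",
        "Large documents are split into overlapping chunks.",
    ],
)

# Search: the query is embedded and matched against the stored vectors.
results = collection.query(query_texts=["how are large files handled?"], n_results=1)
print(results["documents"])  # nearest document(s) by embedding similarity
```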
```bash
# Install (requires Python 3.10+)
pipx install gptme-rag  # or: pip install gptme-rag

# Index your documents
gptme-rag index **/*.md

# Search
gptme-rag search "What is the architecture of the system?"
```
For development installation:

```bash
git clone https://github.com/ErikBjare/gptme-rag.git
cd gptme-rag
poetry install
```
```bash
# Index markdown files in a directory
gptme-rag index *.md

# Index with a custom persist directory
gptme-rag index *.md --persist-dir ./index
```
```bash
# Basic search
gptme-rag search "your query here"

# Advanced search with options
gptme-rag search "your query" \
  --n-results 5 \
  --persist-dir ./index \
  --max-tokens 4000 \
  --show-context
```
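The `--max-tokens` option caps how much retrieved text is assembled into the final context. A minimal sketch of a token-budgeted assembler, using tiktoken for counting (an illustration of the idea; gptme-rag's actual assembler may differ):

```python
# Sketch of token-budgeted context assembly, using tiktoken for counting.
# Illustrates the idea behind --max-tokens; not gptme-rag's actual assembler.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

def assemble_context(documents: list[str], max_tokens: int = 4000) -> str:
    parts: list[str] = []
    used = 0
    for doc in documents:  # assumed sorted by relevance, best first
        n = len(enc.encode(doc))
        if used + n > max_tokens:
            break  # stop before exceeding the token budget
        parts.append(doc)
        used += n
    return "\n\n".join(parts)

docs_by_relevance = ["most relevant document text...", "next most relevant..."]
print(assemble_context(docs_by_relevance, max_tokens=4000))
```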
The watch command monitors directories for changes and automatically updates the index:
```bash
# Watch a directory with default settings
gptme-rag watch /path/to/documents

# Watch with custom pattern and ignore rules
gptme-rag watch /path/to/documents \
  --pattern "**/*.{md,py}" \
  --ignore-patterns "*.tmp" "*.log" \
  --persist-dir ./index
```
Features:
- 🔄 Real-time index updates
- 🎯 Pattern matching for file types
- 🚫 Configurable ignore patterns
- 📊 Efficient batch processing
- 💾 Automatic persistence
The watcher will:
- Perform initial indexing of existing files
- Monitor for file changes (create/modify/delete/move)
- Update the index automatically
- Handle rapid changes efficiently with debouncing
- Continue running until interrupted (Ctrl+C)
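The debouncing step above can be sketched with the watchdog library: buffer change events and reindex only once a burst of rapid changes has settled (illustrative only, not gptme-rag's implementation):

```python
# Sketch of debounced file watching with the watchdog library.
# Illustrates the debouncing idea only; not gptme-rag's implementation.
import time

from watchdog.events import FileSystemEventHandler
from watchdog.observers import Observer

class DebouncedHandler(FileSystemEventHandler):
    def __init__(self, delay: float = 0.5):
        self.delay = delay            # quiet period before reindexing
        self.pending: set[str] = set()
        self.last_event = 0.0

    def on_any_event(self, event):
        # Runs in the observer thread; a real implementation would
        # guard `pending` with a lock.
        if not event.is_directory:
            self.pending.add(event.src_path)
            self.last_event = time.monotonic()

    def flush_if_quiet(self):
        # Reindex once, after a burst of rapid changes has settled.
        if self.pending and time.monotonic() - self.last_event > self.delay:
            print(f"reindexing {len(self.pending)} changed file(s)")
            self.pending.clear()

handler = DebouncedHandler()
observer = Observer()
observer.schedule(handler, "/path/to/documents", recursive=True)
observer.start()
try:
    while True:
        time.sleep(0.1)
        handler.flush_if_quiet()
except KeyboardInterrupt:  # Ctrl+C stops the watcher
    observer.stop()
observer.join()
```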
The benchmark commands help measure and optimize performance:
```bash
# Benchmark document indexing
gptme-rag benchmark indexing /path/to/documents \
  --pattern "**/*.md" \
  --persist-dir ./benchmark_index

# Benchmark search performance
gptme-rag benchmark search /path/to/documents \
  --queries "python" "documentation" "example" \
  --n-results 10

# Benchmark file watching
gptme-rag benchmark watch-perf /path/to/documents \
  --duration 10 \
  --updates-per-second 5
```
Features:
- 📊 Comprehensive metrics
  - Operation duration
  - Memory usage
  - Throughput
  - Custom metrics per operation
- 🔬 Multiple benchmark types
  - Document indexing
  - Search operations
  - File watching
- 📈 Performance tracking
  - Memory efficiency
  - Processing speed
  - System resource usage
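The core measurement loop is simple. A sketch of how duration, peak memory, and throughput can be collected with the standard library (not gptme-rag's actual benchmark code):

```python
# Sketch: measure duration, peak memory, and throughput for a batch operation.
# Illustrative only; not gptme-rag's benchmark implementation.
import time
import tracemalloc

def benchmark(operation, items):
    tracemalloc.start()
    start = time.perf_counter()
    for item in items:
        operation(item)
    duration = time.perf_counter() - start
    _, peak = tracemalloc.get_traced_memory()  # (current, peak) in bytes
    tracemalloc.stop()
    return {
        "duration_s": round(duration, 3),
        "peak_memory_mb": round(peak / 1024 / 1024, 2),
        "throughput_per_s": round(len(items) / duration, 2),
    }

print(benchmark(lambda s: s.upper(), ["alpha", "beta"] * 1000))
```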
Example benchmark output:
```
┏━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━┳━━━━━━━━━━━━┳━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━┓
┃ Operation     ┃ Duration(s) ┃ Memory(MB) ┃ Throughput ┃ Additional Metrics ┃
┡━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━╇━━━━━━━━━━━━╇━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━┩
│ indexing      │ 0.523       │ 15.42      │ 19.12/s    │ files: 10          │
│ search        │ 0.128       │ 5.67       │ 23.44/s    │ queries: 3         │
│ file_watching │ 5.012       │ 8.91      │ 4.99/s     │ updates: 25        │
└───────────────┴─────────────┴────────────┴────────────┴────────────────────┘
```
The indexer supports automatic document chunking for efficient processing of large files:
```bash
# Index with custom chunk settings
gptme-rag index /path/to/documents \
  --chunk-size 1000 \
  --chunk-overlap 200

# Search with chunk grouping
gptme-rag search "your query" \
  --group-chunks \
  --n-results 5
```
Features:
- 📄 Streaming processing
  - Handles large files efficiently
  - Minimal memory usage
  - Progress reporting
- 🔄 Smart chunking
  - Configurable chunk size
  - Overlapping chunks for context (see the sketch below)
  - Token-aware splitting
- 🔍 Enhanced search
  - Chunk-aware relevance
  - Result grouping by document
  - Full document reconstruction
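A minimal sketch of overlapping chunking (splitting on words for brevity; a token-aware splitter would count model tokens instead):

```python
# Sketch of overlapping chunking; splits on words for brevity.
# A token-aware splitter would count model tokens instead of words.
def chunk(text: str, chunk_size: int = 1000, overlap: int = 200) -> list[str]:
    words = text.split()
    step = chunk_size - overlap  # how far each new chunk advances
    chunks = []
    for start in range(0, len(words), step):
        chunks.append(" ".join(words[start : start + chunk_size]))
        if start + chunk_size >= len(words):
            break  # last chunk reached the end of the document
    return chunks

# Adjacent chunks share `overlap` words, preserving context at boundaries.
parts = chunk("word " * 2500, chunk_size=1000, overlap=200)
print(len(parts))  # 3 chunks covering 2500 words
```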
Example Output:

```
Most Relevant Documents:

1. documentation.md#chunk2 (relevance: 0.85)
   Detailed section about configuration options, including chunk size and overlap settings.
   [Part of: documentation.md]

2. guide.md#chunk5 (relevance: 0.78)
   Example usage showing how to process large documents efficiently.
   [Part of: guide.md]

3. README.md#chunk1 (relevance: 0.72)
   Overview of the chunking system and its benefits for large document processing.
   [Part of: README.md]

Full Context:
Total tokens: 850
Documents included: 3 (from 3 source documents)
Truncated: False
```
The chunking system automatically:
- Splits large documents into manageable pieces
- Maintains context across chunk boundaries
- Groups related chunks in search results
- Provides document reconstruction when needed
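Grouping and reconstruction can be sketched as follows (the hit data is hypothetical, shaped like the `doc#chunkN` ids in the example output above; not gptme-rag's code):

```python
# Sketch: group chunk-level hits by source document and reconstruct text.
# The hit data below is hypothetical, shaped like the example output ids.
from collections import defaultdict

hits = [
    {"id": "guide.md#chunk5", "text": "process large documents efficiently", "score": 0.78},
    {"id": "guide.md#chunk1", "text": "Example usage showing how to", "score": 0.61},
    {"id": "README.md#chunk1", "text": "Overview of the chunking system", "score": 0.72},
]

by_doc: dict[str, list[tuple[int, dict]]] = defaultdict(list)
for hit in hits:
    doc, _, num = hit["id"].partition("#chunk")
    by_doc[doc].append((int(num), hit))

for doc, chunks in by_doc.items():
    best = max(h["score"] for _, h in chunks)
    # Reconstruction: concatenate chunks in order (overlap trimming omitted).
    text = " ".join(h["text"] for _, h in sorted(chunks, key=lambda c: c[0]))
    print(f"{doc} (best relevance: {best}): {text}")
```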
```bash
# Run all tests
poetry run pytest

# Run with coverage
poetry run pytest --cov=gptme_rag
```
```
gptme_rag/
├── __init__.py
├── cli.py                    # CLI interface
├── indexing/                 # Document indexing
│   ├── document.py           # Document model
│   └── indexer.py            # ChromaDB integration
├── query/                    # Search functionality
│   └── context_assembler.py  # Context assembly
└── utils/                    # Utility functions
```
- Fork the repository
- Create a feature branch
- Make your changes
- Add tests for new functionality
- Run tests and linting
- Submit a pull request
Releases are automated through GitHub Actions. The process is:

- Update the version in `pyproject.toml`
- Commit the change: `git commit -am "chore: bump version to x.y.z"`
- Create and push a tag: `git tag vx.y.z && git push origin master vx.y.z`
- Create a GitHub release (can be done with `gh release create vx.y.z`)

The publish workflow will then automatically:

- Run tests
- Build the package
- Publish to PyPI
This package is designed to integrate with gptme to provide AI assistants with relevant context from your local files. When used with gptme, it:
- Automatically indexes your project files
- Enhances AI responses with relevant context
- Provides semantic search across your codebase
- Maintains a persistent knowledge base
- Assembles context intelligently within token limits
To use with gptme, simply install both packages and gptme will automatically detect and use gptme-rag for context management.
MIT License. See LICENSE for details.