- Supports multiple document formats and vector databases.
- Provides a production-ready REST API.
- Customizable splitting/chunking.
- Includes options for encoding data with different encoding models, both proprietary and open source.
- Built-in code interpreter mode for computational question-and-answer scenarios.
- Allows session management through unique IDs for caching purposes.
The easiest way to get started is to use our Cloud API, which is free to use within reasonable limits. To self-host instead, follow the steps below.
- Clone the repository

  ```bash
  git clone https://github.com/superagent-ai/super-rag
  cd super-rag
  ```

- Set up a virtual environment

  ```bash
  # Using virtualenv
  virtualenv env
  source env/bin/activate

  # Or using venv
  python3 -m venv env
  source env/bin/activate
  ```

- Install required packages

  ```bash
  poetry install
  ```

- Rename `.env.example` to `.env` and set your environment variables

- Run the server

  ```bash
  uvicorn main:app --reload
  ```
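With uvicorn's default settings the server listens on http://127.0.0.1:8000, and since the API is built with FastAPI the interactive documentation should be available at http://127.0.0.1:8000/docs.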
Super-Rag has built-in support for running computational Q&A using code interpreters powered by E2B.dev custom runtimes. You can sign up to receive an API key to leverage their sandboxes in a cloud environment, or set up your own by following these instructions.
Super-Rag comes with a built-in REST API powered by FastAPI.
```json
// POST: /api/v1/ingest
// Payload
{
  "files": [
    {
      "name": "My file", // Optional
      "url": "https://path-to-my-file.pdf"
    }
  ],
  "document_processor": { // Optional
    "encoder": {
      "dimensions": 384,
      "model_name": "embed-multilingual-light-v3.0",
      "provider": "cohere"
    },
    "unstructured": {
      "hi_res_model_name": "detectron2_onnx",
      "partition_strategy": "auto",
      "process_tables": false
    },
    "splitter": {
      "max_tokens": 400,
      "min_tokens": 30,
      "name": "semantic",
      "prefix_summary": true,
      "prefix_title": true,
      "rolling_window_size": 1
    }
  },
  "vector_database": {
    "type": "qdrant",
    "config": {
      "api_key": "YOUR API KEY",
      "host": "THE QDRANT HOST"
    }
  },
  "index_name": "my_index",
  "webhook_url": "https://my-webhook-url"
}
```
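As a quick sketch, the ingest endpoint can be called from Python roughly as follows, assuming the server is running locally on the default port and the `requests` package is installed; the file URL, Qdrant credentials, and index name are placeholders:

```python
import requests

# Placeholder values -- substitute your own file URL, Qdrant credentials, and index name.
payload = {
    "files": [{"url": "https://path-to-my-file.pdf"}],
    "vector_database": {
        "type": "qdrant",
        "config": {"api_key": "YOUR API KEY", "host": "THE QDRANT HOST"},
    },
    "index_name": "my_index",
}

response = requests.post("http://127.0.0.1:8000/api/v1/ingest", json=payload)
print(response.status_code, response.json())
```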
```json
// POST: /api/v1/query
// Payload
{
  "input": "What is ReAct",
  "vector_database": {
    "type": "qdrant",
    "config": {
      "api_key": "YOUR API KEY",
      "host": "THE QDRANT HOST"
    }
  },
  "index_name": "YOUR INDEX",
  "encoder": {
    "provider": "cohere",
    "name": "embed-multilingual-light-v3.0",
    "dimensions": 384
  },
  "exclude_fields": ["metadata"], // Exclude specific fields
  "interpreter_mode": false, // Set to true to run computational Q&A with a code interpreter
  "session_id": "my_session_id" // Keeps the micro-VM session alive and enables caching
}
```
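A similar sketch for the query endpoint, under the same assumptions (local server, `requests` installed, placeholder credentials); `interpreter_mode` and `session_id` are only relevant when you want the answer computed by the code interpreter:

```python
import requests

payload = {
    "input": "What is ReAct",
    "vector_database": {
        "type": "qdrant",
        "config": {"api_key": "YOUR API KEY", "host": "THE QDRANT HOST"},
    },
    "index_name": "my_index",
    "encoder": {
        "provider": "cohere",
        "name": "embed-multilingual-light-v3.0",
        "dimensions": 384,
    },
    "interpreter_mode": False,  # set to True for computational Q&A in a sandbox
    "session_id": "my_session_id",  # reuse the same micro-VM between calls
}

response = requests.post("http://127.0.0.1:8000/api/v1/query", json=payload)
print(response.json())
```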
```json
// POST: /api/v1/delete
// Payload
{
  "file_url": "A file URL to delete",
  "vector_database": {
    "type": "qdrant",
    "config": {
      "api_key": "YOUR API KEY",
      "host": "THE QDRANT HOST"
    }
  },
  "index_name": "my_index"
}
```
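Finally, a sketch for removing a previously ingested file, again with placeholder values:

```python
import requests

payload = {
    "file_url": "https://path-to-my-file.pdf",
    "vector_database": {
        "type": "qdrant",
        "config": {"api_key": "YOUR API KEY", "host": "THE QDRANT HOST"},
    },
    "index_name": "my_index",
}

response = requests.post("http://127.0.0.1:8000/api/v1/delete", json=payload)
print(response.status_code)
```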
Supported encoders:

- OpenAI
- Cohere
- HuggingFace
- FastEmbed
- Mistral (coming soon)
- Anthropic (coming soon)
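Any of the encoders above is selected through the `encoder` block shown in the payloads. As an assumed example (the exact provider and model identifiers expected by Super-Rag should be checked against the source), an OpenAI configuration might look like this:

```python
# Assumed values: OpenAI's text-embedding-3-small returns 1536-dimensional vectors.
# Note that the ingest payload uses "model_name" while the query payload uses "name".
encoder = {
    "provider": "openai",
    "model_name": "text-embedding-3-small",
    "dimensions": 1536,
}
```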
Supported vector databases:

- Pinecone
- Qdrant
- Weaviate
- Astra
- Supabase (coming soon)
- Chroma (coming soon)
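The vector database is likewise chosen per request through the `vector_database` block accepted by all three endpoints. Only the Qdrant config keys (`api_key`, `host`) are documented above, so treat the config fields for other backends as provider-specific:

```python
# Placeholder Qdrant configuration; the "type" field selects one of the backends
# listed above, and the config keys depend on the chosen provider.
vector_database = {
    "type": "qdrant",
    "config": {
        "api_key": "YOUR API KEY",
        "host": "THE QDRANT HOST",
    },
}
```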