This repository has been archived by the owner on Sep 15, 2024. It is now read-only.

Commit 3128b62
1 parent e9cb01c

chore: Updated packages, default model for Cohere and default Docker environment variables.

anirbanbasu committed May 7, 2024
Showing 4 changed files with 29 additions and 15 deletions.
16 changes: 8 additions & 8 deletions .env.docker

@@ -42,26 +42,26 @@ OLLAMA_URL = "http://localhost:11434"
 OLLAMA_MODEL = "llama3"

 # Large language model
-LLM_REQUEST_TIMEOUT = 120
-LLM_TEMPERATURE = 0.4
+LLM_REQUEST_TIMEOUT = "120"
+LLM_TEMPERATURE = "0.4"
 # Customise the message as needed
 LLM_SYSTEM_MESSAGE = "You are an intelligent assistant. You provide concise and informative responses to questions from the user using only the information given to you as context. If you are unsure about an answer or if the user question cannot be answered using information in the context then say that you do not know. If you can, quote the actual text from the context as a reference with your answer."
-LLM_CHUNK_SIZE = 512
-LLM_CHUNK_OVERLAP = 64
+LLM_CHUNK_SIZE = "512"
+LLM_CHUNK_OVERLAP = "64"

 # Knowledge graph index and chat engine
-INDEX_MEMORY_TOKEN_LIMIT = 8192
-INDEX_MAX_TRIPLETS_PER_CHUNK = 16
+INDEX_MEMORY_TOKEN_LIMIT = "8192"
+INDEX_MAX_TRIPLETS_PER_CHUNK = "16"
 INDEX_CHAT_MODE = "context"
 INDEX_INCLUDE_EMBEDDINGS = "True"

 # Knowledge graph visualisation
 # This height is in pixels
-KG_VIS_HEIGHT = 800
+KG_VIS_HEIGHT = "800"
 # Acceptable layout options are "circular", "planar", "shell", "spectral", "spring" and "spring" is the default
 KG_VIS_LAYOUT = "spring"
 KG_VIS_PHYSICS_ENABLED = "True"
-KG_VIS_MAX_NODES = 100
+KG_VIS_MAX_NODES = "100"

 # Performance evaluation using Langfuse
 # See: https://langfuse.com/docs/sdk/python
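The `.env.docker` change above quotes every numeric value. Since a process environment only ever carries strings, the quotes change nothing at runtime; they just make the file uniform, and the application still casts values on read. A minimal stdlib sketch of that casting, assuming the variable names from `.env.docker` (the `env_cast` helper is illustrative, not the project's code):

```python
import os

# Values from a .env file or container environment always arrive as strings,
# whether or not they were quoted in the file, so they are cast on read.
# Names mirror .env.docker; the helper itself is a hypothetical example.
os.environ.setdefault("LLM_REQUEST_TIMEOUT", "120")
os.environ.setdefault("LLM_TEMPERATURE", "0.4")
os.environ.setdefault("KG_VIS_MAX_NODES", "100")

def env_cast(name, cast, default):
    """Read an environment variable and cast it, falling back to a default."""
    raw = os.getenv(name)
    if raw is None:
        return default
    return cast(raw)

timeout = env_cast("LLM_REQUEST_TIMEOUT", int, 120)
temperature = env_cast("LLM_TEMPERATURE", float, 0.4)
max_nodes = env_cast("KG_VIS_MAX_NODES", int, 100)
print(timeout, temperature, max_nodes)  # 120 0.4 100
```

Either quoting style in the `.env` file yields the same string on the Python side, which is why the commit can quote everything purely for consistency.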
2 changes: 1 addition & 1 deletion README.md

@@ -49,7 +49,7 @@ You can specify the language model provider to use by using the environment vari

 If using [Ollama](https://ollama.com/), you will also need to install it or, point the chatbot to a remotely hosted Ollama server. You also need to pull the Ollama model that you specify with `OLLAMA_MODEL` environment variable using `ollama pull <model-name>` (replace `<model-name>` with the actual model that you want to use) on your Ollama server. Check the [available Ollama models](https://ollama.com/library).

-Open AI can be used by specifying an `OPENAI_API_KEY`, an `OPENAI_MODEL`, and by choosing `Open AI` as the `LLM_PROVIDER`. Follow [this link](https://platform.openai.com/account/api-keys) to get an Open AI API key. Similarly, Cohere can be used by specifying a `COHERE_API_KEY`, a `COHERE_MODEL` (which defaults to `command-r`), and by choosing `Cohere` as the `LLM_PROVIDER`. Follow [this link](https://cohere.com/pricing) to obtain a Cohere API key.
+Open AI can be used by specifying an `OPENAI_API_KEY`, an `OPENAI_MODEL`, and by choosing `Open AI` as the `LLM_PROVIDER`. Follow [this link](https://platform.openai.com/account/api-keys) to get an Open AI API key. Similarly, Cohere can be used by specifying a `COHERE_API_KEY`, a `COHERE_MODEL` (which defaults to `command-r-plus`), and by choosing `Cohere` as the `LLM_PROVIDER`. Follow [this link](https://cohere.com/pricing) to obtain a Cohere API key.

 See the settings in the `.env.template` file customisation of the LLM settings.
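The provider selection the README describes can be pictured as a small dispatch on the environment. The sketch below is a hypothetical stdlib-only illustration, not the chatbot's actual wiring (which uses llama-index LLM classes); the default model names are taken from this commit's `utils/constants.py`:

```python
# Hypothetical sketch of LLM_PROVIDER-driven selection; defaults mirror
# DEFAULT_SETTING_* values in utils/constants.py from this commit.
DEFAULTS = {
    "Ollama": ("OLLAMA_MODEL", "llama3"),
    "Cohere": ("COHERE_MODEL", "command-r-plus"),
    "Open AI": ("OPENAI_MODEL", "gpt-3.5-turbo-0125"),
}

def resolve_llm(env):
    """Return (provider, model) from an environment-like mapping."""
    provider = env.get("LLM_PROVIDER", "Ollama")
    if provider not in DEFAULTS:
        raise ValueError(f"Unsupported LLM_PROVIDER: {provider}")
    model_var, default_model = DEFAULTS[provider]
    # Hosted providers need an API key (OPENAI_API_KEY / COHERE_API_KEY);
    # a local Ollama server does not.
    key_var = provider.replace(" ", "").upper() + "_API_KEY"
    if provider != "Ollama" and not env.get(key_var):
        raise ValueError(f"{provider} requires {key_var}")
    return provider, env.get(model_var, default_model)

print(resolve_llm({"LLM_PROVIDER": "Cohere", "COHERE_API_KEY": "..."}))
# ('Cohere', 'command-r-plus')
```

With no variables set, the sketch falls back to Ollama with `llama3`, matching the defaults shown in the `utils/constants.py` diff below on this page.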
24 changes: 19 additions & 5 deletions requirements.txt

@@ -1,46 +1,60 @@
altair==5.3.0
-async-timeout==4.0.3
-blinker==1.8.1
+autopep8==2.1.0
+blinker==1.8.2
bs4==0.0.2
cachetools==5.3.3
chevron==0.14.0
exceptiongroup==1.2.1
fqdn==1.5.1
GitPython==3.1.43
ipdb==0.13.13
isoduration==20.11.0
jsonpatch==1.33
jupytext==1.16.2
langcodes==3.4.0
-langfuse==2.29.0
+langfuse==2.29.1
linkpreview==0.9.0
-llama-index==0.10.34
+llama-index==0.10.35
llama-index-embeddings-cohere==0.1.8
llama-index-embeddings-ollama==0.1.2
llama-index-graph-stores-neo4j==0.1.4
llama-index-llms-cohere==0.1.6
llama-index-llms-ollama==0.1.3
llama-index-postprocessor-cohere-rerank==0.1.4
llama-index-readers-papers==0.1.4
-llama-index-readers-web==0.1.12
+llama-index-readers-web==0.1.13
llama-index-readers-wikipedia==0.1.4
llama-index-storage-docstore-redis==0.1.2
llama-index-storage-index-store-redis==0.1.2
matplotlib==3.8.4
notebook==6.5.7
ollama==0.1.9
orjson==3.10.3
papermill==2.6.0
pip==24.0
ploomber-cloud==0.2.6
ploomber-engine==0.0.32
ploomber-scaffold==0.3.1
protobuf==5.26.1
pyarrow==16.0.0
pydeck==0.9.0
pyflakes==3.2.0
pymdown-extensions==10.8.1
PyMuPDF==1.24.2
PySocks==1.7.1
python-dotenv==1.0.1
pyvis==0.3.2
sentence-transformers==2.7.0
solara==1.32.1
sqlparse==0.5.0
starlette==0.37.2
tabulate==0.9.0
toml==0.10.2
trafilatura==1.9.0
uri-template==1.3.0
uvicorn==0.29.0
watchdog==4.0.0
watchfiles==0.21.0
webcolors==1.13
websockets==12.0
wikipedia==1.4.0
2 changes: 1 addition & 1 deletion utils/constants.py

@@ -131,7 +131,7 @@
 DEFAULT_SETTING_LLM_PROVIDER = "Ollama"
 DEFAULT_SETTING_OLLAMA_URL = "http://localhost:11434"
 DEFAULT_SETTING_OLLAMA_MODEL = "llama3"
-DEFAULT_SETTING_COHERE_MODEL = "command-r"
+DEFAULT_SETTING_COHERE_MODEL = "command-r-plus"
 DEFAULT_SETTING_OPENAI_MODEL = "gpt-3.5-turbo-0125"

 DEFAULT_SETTING_LLM_REQUEST_TIMEOUT = "120"
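Constants like `DEFAULT_SETTING_COHERE_MODEL` typically serve as fallbacks for environment lookups, which is why changing the constant here changes the effective default everywhere the variable is unset. A minimal sketch of that pattern, assuming `os.getenv`-style lookups rather than the project's actual wiring:

```python
import os

# Fallback pattern: the constant applies only when the variable is unset.
# The constant's value is from this commit; the lookups are illustrative.
DEFAULT_SETTING_COHERE_MODEL = "command-r-plus"

os.environ.pop("COHERE_MODEL", None)      # unset -> the default applies
assert os.getenv("COHERE_MODEL", DEFAULT_SETTING_COHERE_MODEL) == "command-r-plus"

os.environ["COHERE_MODEL"] = "command-r"  # an explicit value wins
assert os.getenv("COHERE_MODEL", DEFAULT_SETTING_COHERE_MODEL) == "command-r"
```

So users who had pinned `COHERE_MODEL` explicitly are unaffected by this commit; only those relying on the default move to `command-r-plus`.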
