This repository has been archived by the owner on Sep 15, 2024. It is now read-only.

Commit 338d8a3

feat: Simplified UI. Langfuse evaluation enabled.

anirbanbasu committed May 30, 2024 · 1 parent 1b65fc5
Showing 12 changed files with 917 additions and 448 deletions.
8 changes: 5 additions & 3 deletions .env.docker
@@ -15,15 +15,15 @@
 # Neo4j graph store
 # Remote Neo4j AuraDB URLs will look like neo4j+s://NNNNNN.databases.neo4j.io -- note that the protocol is neo4j+s, not bolt
 NEO4J_DISABLE = "True"
-NEO4J_URL = "bolt://localhost:7687"
+NEO4J_URL = "bolt://host.docker.internal:7687"
 NEO4J_USERNAME = "neo4j"
 NEO4J_PASSWORD = "XXXXXX"
 NEO4J_DB_NAME = "neo4j"
 
 # Redis document and index store
 # Remote Redis (on Render) URL will look like rediss://user:password@area-redis.render.com:6379 -- note that the protocol is rediss, not redis
 REDIS_DISABLE = "True"
-REDIS_URL = "redis://localhost:6379"
+REDIS_URL = "redis://host.docker.internal:6379"
 REDIS_NAMESPACE = "tldrlc"
 
 LLM_PROVIDER = "Ollama"
@@ -37,10 +37,12 @@ COHERE_API_KEY = "Set your Cohere API key here."
 COHERE_MODEL = "command-r-plus"
 
 # Ollama
-OLLAMA_URL = "http://localhost:11434"
+OLLAMA_URL = "http://host.docker.internal:11434"
 # The model must be available in the Ollama installation
 OLLAMA_MODEL = "llama3"
 
+LLAMAFILE_URL = "http://host.docker.internal:8080"
+
 # Large language model
 LLM_REQUEST_TIMEOUT = "120"
 LLM_TEMPERATURE = "0.4"
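These container defaults are read by the app as ordinary environment variables. As a minimal, illustrative sketch only (assuming the settings are loaded with `python-dotenv`, which is pinned in requirements.txt below; the variable names and defaults are taken from the file above):

```python
# Illustrative sketch: reading the container-specific settings with python-dotenv.
import os

from dotenv import load_dotenv

load_dotenv()  # inside the container, /.env is copied from .env.docker

neo4j_disabled = os.getenv("NEO4J_DISABLE", "True") == "True"
neo4j_url = os.getenv("NEO4J_URL", "bolt://host.docker.internal:7687")
redis_url = os.getenv("REDIS_URL", "redis://host.docker.internal:6379")
ollama_url = os.getenv("OLLAMA_URL", "http://host.docker.internal:11434")
llamafile_url = os.getenv("LLAMAFILE_URL", "http://host.docker.internal:8080")
```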
2 changes: 2 additions & 0 deletions .env.template
@@ -42,6 +42,8 @@ OLLAMA_URL = "http://localhost:11434"
 # The model must be available in the Ollama installation
 OLLAMA_MODEL = "llama3"
 
+LLAMAFILE_URL = "http://localhost:8080"
+
 # Large language model
 LLM_REQUEST_TIMEOUT = "120"
 LLM_TEMPERATURE = "0.4"
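The new `LLAMAFILE_URL` default of `http://localhost:8080` matches llamafile's own default: a llamafile started in server mode listens on port 8080 unless configured otherwise.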
4 changes: 2 additions & 2 deletions README.md
@@ -83,9 +83,9 @@ docker container start tldrlc-container
 
 Following this, the app will be accessible on your Docker host, for example as [http://localhost:8765](http://localhost:8765) -- assuming that nothing else on the host is blocking port 8765 when the container starts.
 
-If you want to change the settings of the app itself inside the container, login to the container as `root`. You can do this by running `docker exec -it tldrlc-container bash`. Once, you have the shell access in the container, edit the file `/app/.env` using the `nano` editor that is installed for convenience. For example, you can change the default behaviour of the containerised app to use your preferred remote graph, index and document storage. Then, restart the _same_ container, by running `docker container restart tldrlc-container`. Remember that these changes _will not_ propagate to any new container that you spin out of the image.
+<!-- If you want to change the settings of the app itself inside the container, login to the container as `root`. You can do this by running `docker exec -it tldrlc-container bash`. Once you have shell access in the container, edit the file `/app/.env` using the `nano` editor that is installed for convenience. For example, you can change the default behaviour of the containerised app to use your preferred remote graph, index and document storage. Then, restart the _same_ container by running `docker container restart tldrlc-container`. Remember that these changes _will not_ propagate to any new container that you spin out of the image. -->
 
-The Docker container has to depend on external LLM provider, graph database, document and index storage. If any of these, such as `Ollama`, is running on the Docker host then you should change the host name for the service from the default `localhost` to `host.docker.internal`.
+The Docker container depends on an external LLM provider, a graph database, and document and index storage. If any of these, such as `Ollama`, is running on the Docker host, then use `host.docker.internal` or `gateway.docker.internal` as the host name for that service. See [the networking documentation of Docker Desktop](https://docs.docker.com/desktop/networking/) for details.
 
 ### Cloud deployment
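One caveat worth noting (an addition here, not part of the commit): `host.docker.internal` is provided out of the box by Docker Desktop; on a plain Linux Docker Engine it is not defined by default, and you would typically pass `--add-host=host.docker.internal:host-gateway` to `docker run` or `docker create` to make the same name resolve to the host.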
180 changes: 159 additions & 21 deletions app.py
@@ -16,51 +16,189 @@
 
 from typing import Any
 import solara
-from solara.alias import rv
 
-from pages import chatbot, ingest, settings
-from utils import global_state
+import reacton.ipyvuetify as rv
+
+import utils.state_manager as sm
+import utils.constants as constants
+
+import ui.settings as settings_uic
+import ui.ingest as ingest_uic
+import ui.chat as chat_uic
+
+
+from pathlib import Path
+
+CWD = Path(__file__).parent
+extern_style = (CWD / "styles.css").read_text(encoding=constants.CHAR_ENCODING_UTF8)
+
+page_step: solara.Reactive[int] = solara.reactive(1)
 
 
 @solara.component
 def CustomLayout(children: Any = []):
-    global_state.set_theme_colours()
-    global_state.initialise_default_settings()
+    sm.set_theme_colours()
+    sm.initialise_default_settings()
 
     with solara.AppLayout(
         children=children,
-        color=global_state.corrective_background_colour.value,
+        color=None,  # sm.corrective_background_colour.value,
        navigation=True,
         sidebar_open=False,
     ) as app_layout:
         with solara.AppBar():
-            with solara.v.Btn(
+            with rv.Btn(
                 icon=True,
                 tag="a",
                 attributes={
-                    "href": "https://github.com/anirbanbasu/tldrlc",
-                    "title": "TLDRLC GitHub repository",
-                    "target": "_blank",
+                    "href": constants.PROJECT_GIT_REPO_URL,
+                    "title": f"{constants.PROJECT_NAME} {constants.PROJECT_GIT_REPO_LABEL}",
+                    "target": constants.HTTP_TARGET_BLANK,
                 },
             ):
-                solara.v.Icon(children=["mdi-github-circle"])
+                rv.Icon(children=["mdi-github-circle"])
             solara.lab.ThemeToggle()
         with rv.Snackbar(
             bottom=True,
             left=True,
             timeout=0,
             multi_line=True,
-            color=global_state.status_message_colour.value,
-            v_model=global_state.status_message_show.value,
+            color=sm.status_message_colour.value,
+            v_model=sm.status_message_show.value,
         ):
-            solara.Markdown(f"{global_state.status_message.value}")
+            solara.Markdown(f"{sm.status_message.value}")
     return app_layout
 
 
+@solara.component
+def Page():
+    # Remove the "This website runs on Solara" message
+    solara.Style(constants.UI_SOLARA_NOTICE_REMOVE)
+    solara.Style(extern_style)
+
+    step_labels = [1, 2, 3, 4]
+
+    with solara.Sidebar():
+        if page_step.value in step_labels[2:]:
+            settings_uic.AllSettingsCategorical()
+
+    with rv.Stepper(
+        alt_labels=False,
+        vertical=False,
+        non_linear=False,
+        v_model=page_step.value,
+    ):
+        with rv.StepperHeader():
+            for step in step_labels:
+                with rv.StepperStep(step=step, complete=page_step.value > step):
+                    match step:
+                        case 1:
+                            solara.Markdown("Information")
+                        case 2:
+                            solara.Markdown("Language model (LLM)")
+                        case 3:
+                            solara.Markdown("Data")
+                        case 4:
+                            solara.Markdown("Chat")
+                if step != step_labels[-1]:
+                    rv.Divider()
+        with rv.StepperItems():
+            with rv.StepperContent(step=1):
+                with rv.Card(elevation=0):
+                    solara.Markdown(constants.MESSAGE_TLDRLC_WELCOME)
+                    solara.Markdown(
+                        f"**{constants.NOTICE_EU_AI_ACT__TITLE}**: {constants.NOTICE_EU_AI_ACT__MESSAGE}"
+                    )
+                    with rv.CardActions():
+                        solara.Button(
+                            constants.BTN_NOTICE_EU_AI_ACT__MORE,
+                            color="warning",
+                            icon_name="mdi-github-circle",
+                            attributes={
+                                "href": constants.PROJECT_GIT_REPO_URL,
+                                "title": f"{constants.PROJECT_NAME} {constants.PROJECT_GIT_REPO_LABEL}",
+                                "target": constants.HTTP_TARGET_BLANK,
+                            },
+                        )
+                        solara.Button(
+                            constants.BTN_NOTICE_EU_AI_ACT__OK,
+                            color="primary",
+                            icon_name="mdi-thumb-up",
+                            on_click=lambda: page_step.set(2),
+                        )
+            with rv.StepperContent(step=2):
+                with rv.Card(elevation=0):
+                    solara.Markdown(
+                        """
+                        ### Language model settings
+                        _You can configure other settings of the language model along
+                        with indexing and storage from the settings menu, which is available
+                        from the next step on the left sidebar_.
+                        """,
+                    )
+                    settings_uic.LLMSettingsBasicComponent()
+                    with rv.CardActions():
+                        solara.Button(
+                            constants.EMPTY_STRING,
+                            icon_name="mdi-information",
+                            on_click=lambda: page_step.set(1),
+                        )
+                        solara.Button(
+                            "Get data",
+                            icon_name="mdi-page-next",
+                            color="primary",
+                            on_click=lambda: page_step.set(3),
+                        )
+            with rv.StepperContent(step=3):
+                with rv.Card(elevation=0):
+                    solara.Markdown(
+                        """
+                        ### Data ingestion
+                        You must ingest data from one of the following sources in order to chat about it.
+                        """,
+                    )
+                    ingest_uic.IngestSelectiveComponent()
+                    with rv.CardActions():
+                        solara.Button(
+                            "LLM",
+                            icon_name="mdi-cogs",
+                            disabled=(
+                                ingest_uic.ingest_webpage_data.pending
+                                or ingest_uic.ingest_pdfurl_data.pending
+                                or ingest_uic.ingest_wikipedia_data.pending
+                                or ingest_uic.ingest_arxiv_data.pending
+                                or ingest_uic.ingest_pubmed_data.pending
+                            ),
+                            on_click=lambda: page_step.set(2),
+                        )
+                        solara.Button(
+                            "Let's chat!",
+                            icon_name="mdi-chat-processing",
+                            color="primary",
+                            disabled=(
+                                ingest_uic.ingest_webpage_data.pending
+                                or ingest_uic.ingest_pdfurl_data.pending
+                                or ingest_uic.ingest_wikipedia_data.pending
+                                or ingest_uic.ingest_arxiv_data.pending
+                                or ingest_uic.ingest_pubmed_data.pending
+                                or ingest_uic.last_ingested_data_source.value
+                                == constants.EMPTY_STRING
+                            ),
+                            on_click=lambda: page_step.set(4),
+                        )
+            with rv.StepperContent(step=4):
+                with rv.Card(elevation=0):
+                    with rv.CardActions():
+                        solara.Button(
+                            "Go back to get some other data",
+                            color="primary",
+                            outlined=True,
+                            icon_name="mdi-page-previous",
+                            on_click=lambda: page_step.set(3),
+                        )
+                    chat_uic.ChatInterface()
+
+
 routes = [
-    solara.Route(
-        path="/", component=chatbot.Page, label="Chatbot", layout=CustomLayout
-    ),
-    solara.Route(path="ingest", component=ingest.Page, label="Ingest data"),
-    solara.Route(path="settings", component=settings.Page, label="Settings"),
+    solara.Route(path="/", component=Page, label="TLDRLC", layout=CustomLayout),
 ]
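The page flow above rests on one pattern: `page_step` is a module-level `solara.Reactive` value, every `on_click` handler sets it, and components that read `page_step.value` re-render automatically. A minimal self-contained sketch of that pattern (hypothetical component and labels, not code from this commit):

```python
import solara

# Module-level reactive state, analogous to page_step in app.py.
step: solara.Reactive[int] = solara.reactive(1)


@solara.component
def MiniWizard():
    # Reading step.value subscribes this component to changes.
    solara.Markdown(f"**Step {step.value} of 3**")
    solara.Button("Back", disabled=step.value == 1, on_click=lambda: step.set(step.value - 1))
    solara.Button("Next", disabled=step.value == 3, on_click=lambda: step.set(step.value + 1))
```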
8 changes: 3 additions & 5 deletions local.dockerfile
@@ -20,10 +20,10 @@ RUN apt-get update && apt-get -y upgrade && apt-get -y install nano build-essential

 # Create a non-root user
 RUN useradd -m -u 1000 app_user
-USER app_user
 
 ENV HOME="/home/app_user"
 
+USER app_user
 # Set the working directory in the container
 WORKDIR $HOME/app
 
@@ -42,12 +42,10 @@ ENV PATH="$VIRTUAL_ENV/bin:$PATH"
 RUN $VIRTUAL_ENV/bin/pip install --no-cache-dir -r requirements.txt
 
 # Copy the project files
-COPY ./*.md ./LICENSE ./*.py ./*.sh ./
+COPY ./*.md ./LICENSE ./*.py ./*.sh ./*.css ./
 COPY ./.env.docker /.env
-COPY ./pages/*.py ./pages/
+COPY ./ui/*.py ./ui/
 COPY ./utils/*.py ./utils/
-#RUN chown -R app_user:app_user $HOME/app
-#RUN chmod +x $HOME/app/*.sh
 
 # Expose the port to connect
 EXPOSE 8765
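For reference, a plausible build-and-run sequence for this Dockerfile (the image tag `tldrlc` is a hypothetical choice; the container name and port are the ones used in the README): `docker build -f local.dockerfile -t tldrlc .`, then `docker create -p 8765:8765 --name tldrlc-container tldrlc`, and finally `docker container start tldrlc-container`.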
40 changes: 12 additions & 28 deletions requirements.txt
@@ -1,46 +1,30 @@
 GitPython==3.1.43
 PyMuPDF==1.24.4
-PySocks==1.7.1
-altair==5.3.0
-async-timeout==4.0.3
-blinker==1.8.2
 bs4==0.0.2
-cachetools==5.3.3
-chevron==0.14.0
-exceptiongroup==1.2.1
-jsonpatch==1.33
-langcodes==3.4.0
-langfuse==2.32.0
+langfuse==2.33.0
 linkpreview==0.9.0
-llama-index==0.10.37
+llama-index==0.10.40
 llama-index-embeddings-cohere==0.1.8
+llama-index-embeddings-llamafile==0.1.2
 llama-index-embeddings-ollama==0.1.2
-llama-index-graph-stores-neo4j==0.1.4
+llama-index-graph-stores-neo4j==0.2.0
 llama-index-llms-cohere==0.2.0
-llama-index-llms-ollama==0.1.4
+llama-index-llms-llamafile==0.1.2
+llama-index-llms-ollama==0.1.5
 llama-index-postprocessor-cohere-rerank==0.1.6
-llama-index-readers-papers==0.1.5
-llama-index-readers-web==0.1.16
+llama-index-readers-papers==0.1.6
+llama-index-readers-web==0.1.17
 llama-index-readers-wikipedia==0.1.4
 llama-index-storage-docstore-redis==0.1.2
 llama-index-storage-index-store-redis==0.1.2
 llama-index-vector-stores-redis==0.2.0
-matplotlib==3.9.0
-ollama==0.2.0
-orjson==3.10.3
-pip-autoremove==0.10.0
-protobuf==5.26.1
-pyarrow==16.1.0
-pydeck==0.9.1
 pymdown-extensions==10.8.1
 python-dotenv==1.0.1
-sentence-transformers==2.7.0
+sentence-transformers==3.0.0
 solara==1.32.2
 starlette==0.37.2
-toml==0.10.2
-trafilatura==1.9.0
-uvicorn==0.29.0
-watchdog==4.0.0
-watchfiles==0.21.0
+uvicorn==0.30.0
+watchdog==4.0.1
+watchfiles==0.22.0
 websockets==12.0
 wikipedia==1.4.0
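The two newly pinned llamafile packages back the `LLAMAFILE_URL` setting introduced above. A minimal, illustrative sketch of how they are typically wired up in LlamaIndex (the URL and temperature echo the .env defaults; this is not code from the commit):

```python
# Illustrative sketch: the llamafile LLM and embedding integrations for LlamaIndex.
from llama_index.embeddings.llamafile import LlamafileEmbedding
from llama_index.llms.llamafile import Llamafile

# Assumes a llamafile server is already listening at LLAMAFILE_URL.
llm = Llamafile(base_url="http://localhost:8080", temperature=0.4)
embed_model = LlamafileEmbedding(base_url="http://localhost:8080")

print(llm.complete("Summarise this project in one sentence.").text)
```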
(Diffs for the remaining 6 of the 12 changed files are not shown here.)
