Feature/chat sessions (#12)
* added cli option to set ollama ps polling interval
* added cli option to purge all chats
* fixed dupe toast messages
* various bug fixes
* updated deps
* added pre-commit run to makefile
* changed /save to /export
* renamed /clear to /new and added /delete
* updated help with table of contents and workflow sections
* add ability to select, load and delete existing sessions
* clear button renamed to new on chat screen
paulrobello authored Jul 10, 2024
1 parent 508b7aa commit c9d0ea7
Showing 27 changed files with 813 additions and 187 deletions.
4 changes: 2 additions & 2 deletions .pre-commit-config.yaml
@@ -20,7 +20,7 @@ repos:
#- repo: https://github.com/asottile/pyupgrade
# rev: v3.16.0
# hooks:
# - id: pyupgrade

- repo: https://github.com/psf/black
rev: 22.10.0
@@ -38,7 +38,7 @@ repos:
#- repo: https://github.com/pre-commit/mirrors-mypy
# rev: v1.10.1
# hooks:
# - id: mypy
# additional_dependencies: [textual, rich, pydantic, ollama, docker, types-beautifulsoup4, types-requests, types-pytz, types-simplejson, typing-extensions, asyncio, humanize, argparse, python-dotenv]
# exclude: tests(/\w*)*/functional/|tests/input|tests(/.*)+/conftest.py|doc/data/messages|tests(/\w*)*data/

8 changes: 4 additions & 4 deletions Pipfile.lock

Some generated files are not rendered by default.

55 changes: 51 additions & 4 deletions README.md
@@ -1,5 +1,30 @@
# PAR LLAMA

## Table of Contents

- [About](#about)
- [Screenshots](#screenshots)
- [Prerequisites](#prerequisites-for-running)
- [For Running](#prerequisites-for-running)
- [For Development](#prerequisites-for-dev)
- [For Model Quantization](#prerequisites-for-model-quantization)
- [Installation](#installing-from-mypi-using-pipx)
- [Using pipx](#installing-from-mypi-using-pipx)
- [Using pip](#installing-from-mypi-using-pip)
- [For Development](#installing-for-dev-mode)
- [Command Line Arguments](#command-line-arguments)
- [Environment Variables](#environment-variables)
- [Running PAR_LLAMA](#running-par_llama)
- [With pipx installation](#with-pipx-installation)
- [With pip installation](#with-pip-installation)
- [Under Windows WSL](#running-under-windows-wsl)
- [In Development Mode](#dev-mode)
- [Example Workflow](#example-workflow)
- [Themes](#themes)
- [Contributing](#contributing)
- [Roadmap](#roadmap)
- [What's New](#whats-new)

## About
PAR LLAMA is a TUI application designed for easy management and use of Ollama-based LLMs.
The application was built with [Textual](https://textual.textualize.io/) and [Rich](https://github.com/Textualize/rich?tab=readme-ov-file)
@@ -20,6 +45,7 @@ Supports Dark and Light mode as well as custom themes.
## Prerequisites for running
* Install and run [Ollama](https://ollama.com/download)
* Install Python 3.11 or newer
* On Windows, the [Scoop](https://scoop.sh/) tool makes it easy to install and manage tools like Python.

## Prerequisites for dev
* Install pipenv
@@ -66,8 +92,8 @@ make first-setup

## Command line arguments
```
usage: parllama [-h] [-v] [-d DATA_DIR] [-u OLLAMA_URL] [-t THEME_NAME] [-m {dark,light}] [-s {local,site,tools,create,logs}]
[--restore-defaults] [--clear-cache] [--no-save]
usage: parllama [-h] [-v] [-d DATA_DIR] [-u OLLAMA_URL] [-t THEME_NAME] [-m {dark,light}] [-s {local,site,tools,create,chat,logs}] [-p PS_POLL]
[--restore-defaults] [--clear-cache] [--purge-chats] [--no-save]
PAR LLAMA -- Ollama TUI.
@@ -82,10 +108,13 @@ options:
Theme name. Defaults to par
-m {dark,light}, --theme-mode {dark,light}
Dark / Light mode. Defaults to dark
-s {local,site,tools,create,logs}, --starting-screen {local,site,tools,create,logs}
-s {local,site,tools,create,chat,logs}, --starting-screen {local,site,tools,create,chat,logs}
Starting screen. Defaults to local
-p PS_POLL, --ps-poll PS_POLL
Interval in seconds to poll ollama ps command. 0 = disable. Defaults to 3
--restore-defaults Restore default settings and theme
--clear-cache Clear cached data
--purge-chats Purge all chat history
--no-save Prevent saving settings for this session.
```
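
As a quick illustration of the options added in this commit (using only flags documented in the usage text above), one might run:
```
parllama -s chat -p 5     # start on the new Chat screen, poll ollama ps every 5 seconds
parllama --purge-chats    # purge all saved chat history
```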

@@ -141,6 +170,21 @@ From repo root:
make dev
```

## Example workflow
* Start parllama.
* Click the "Site" tab.
* Use ^R to fetch the latest models from Ollama.com.
* Use the "Filter Site models" text box and type "llama3".
* Find the entry with title of "llama3".
* Click the blue tag "8B" to update the search box to read "llama3:8b".
* Press ^P to pull the model from Ollama to your local machine. Depending on the size of the model and your internet connection, this can take a few minutes.
* Click the "Local" tab to see models that have been downloaded locally.
* Select the "llama3:8b" entry and press ^C to jump to the "Chat" tab and auto-select the model.
* Type a message to the model such as "Why is the sky blue?". It will take a few seconds for Ollama to load the model, after which the LLM's answer will stream in.
* Towards the very top of the app you will see which model is loaded and what percentage of it is loaded into the GPU / CPU. If a model can't be loaded 100% on the GPU, it will run slower.
* To export your conversation as a Markdown file, type "/export" in the message input box. This will open an export dialog.
* Type "/help" to see what other slash commands are available; a short summary follows below.

## Themes
Themes are JSON files stored in the themes folder in the data directory, which defaults to **~/.parllama/themes**
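
For illustration only, a theme file might look like the sketch below. The key names are assumptions drawn from Textual's color system, not from this commit; check the bundled "par" theme for the real schema:
```
{
  "dark": {
    "primary": "#e49500",
    "secondary": "#6e4800",
    "background": "#1e1e1e"
  },
  "light": {
    "primary": "#004578",
    "background": "#e0e0e0"
  }
}
```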

@@ -206,16 +250,19 @@ if anything remains to be fixed before the commit is allowed.
**Where we are**
* Initial release - Find, maintain and create new models
* Basic chat with LLM
* Chat history / conversation management

**Where we're going**
* Chat history / conversation management
* Chat with multiple models at same time to compare outputs
* LLM tool use


## What's new

### v0.2.6
* Added chat history panel and management to chat page

### v0.2.51
* Fix missing dependency in package

### v0.2.5
Binary file modified docs/chat_dark_1.png
2 changes: 1 addition & 1 deletion parllama/__init__.py
@@ -6,7 +6,7 @@
__credits__ = ["Paul Robello"]
__maintainer__ = "Paul Robello"
__email__ = "probello@gmail.com"
__version__ = "0.2.51"
__version__ = "0.2.6"
__licence__ = "MIT"
__application_title__ = "PAR LLAMA"
__application_binary__ = "parllama"
60 changes: 47 additions & 13 deletions parllama/app.py
@@ -34,6 +34,7 @@
from parllama.messages.main import AppRequest
from parllama.messages.main import ChangeTab
from parllama.messages.main import CreateModelFromExistingRequested
from parllama.messages.main import DeleteSession
from parllama.messages.main import LocalModelCopied
from parllama.messages.main import LocalModelCopyRequested
from parllama.messages.main import LocalModelDelete
@@ -51,6 +52,8 @@
from parllama.messages.main import NotifyInfoMessage
from parllama.messages.main import PsMessage
from parllama.messages.main import SendToClipboard
from parllama.messages.main import SessionListChanged
from parllama.messages.main import SessionSelected
from parllama.messages.main import SetModelNameLoading
from parllama.messages.main import SiteModelsLoaded
from parllama.messages.main import SiteModelsRefreshRequested
@@ -105,6 +108,7 @@ def __init__(self) -> None:
"""Initialize the application."""
super().__init__()
self.notify_subs = {"*": set[Widget]()}
chat_manager.set_app(self)

self.title = __application_title__
self.dark = settings.theme_mode != "light"
@@ -114,7 +118,6 @@ def __init__(self) -> None:
self.is_refreshing = False
self.last_status = ""
self.main_screen = MainScreen()
chat_manager.set_app(self)

def _watch_dark(self, value: bool) -> None:
"""Watch the dark property."""
@@ -155,6 +158,12 @@ async def on_mount(self) -> None:
self.main_screen.post_message(
StatusMessage(f"Using Ollama server url: {settings.ollama_host}")
)
self.main_screen.post_message(
StatusMessage(
f"Polling Ollama ps every: {settings.ollama_ps_poll_interval} seconds"
)
)

self.main_screen.post_message(
StatusMessage(
f"""Theme: "{settings.theme_name}" in {settings.theme_mode} mode"""
@@ -168,7 +177,8 @@ async def on_mount(self) -> None:
)

self.set_timer(1, self.do_jobs)
self.set_timer(1, self.update_ps)
if settings.ollama_ps_poll_interval > 0:
self.set_timer(1, self.update_ps)

def action_noop(self) -> None:
"""Do nothing"""
@@ -220,7 +230,7 @@ def send_to_clipboard(self, msg: SendToClipboard) -> None:
# works for local sessions
pyperclip.copy(msg.message)
if msg.notify:
self.post_message(NotifyInfoMessage("Value copied to clipboard"))
self.post_message(NotifyInfoMessage("Copied to clipboard"))

@on(ModelPushRequested)
def on_model_push_requested(self, msg: ModelPushRequested) -> None:
@@ -501,6 +511,9 @@ def on_app_request(self, msg: AppRequest) -> None:
"""Add any widget that requests an action to notify_subs"""
if msg.widget:
self.notify_subs["*"].add(msg.widget)
if msg.__class__.__name__ not in self.notify_subs:
self.notify_subs[msg.__class__.__name__] = set()
self.notify_subs[msg.__class__.__name__].add(msg.widget)

@on(LocalModelListRefreshRequested)
def on_model_list_refresh_requested(self) -> None:
@@ -520,7 +533,9 @@ async def refresh_models(self):
)
dm.refresh_models()
self.main_screen.post_message(StatusMessage("Local model list refreshed"))
self.post_message_all(LocalModelListLoaded())
# self.post_message_all(LocalModelListLoaded())
self.main_screen.local_view.post_message(LocalModelListLoaded())
self.main_screen.chat_view.post_message(LocalModelListLoaded())
finally:
self.is_refreshing = False

@@ -554,7 +569,7 @@ async def refresh_site_models(self, msg: SiteModelsRefreshRequested):
)
)
dm.refresh_site_models(msg.ollama_namespace, None, msg.force)
self.post_message_all(
self.main_screen.site_view.post_message(
SiteModelsLoaded(ollama_namespace=msg.ollama_namespace)
)
self.main_screen.post_message(
@@ -571,7 +586,10 @@ async def update_ps(self) -> None:
"""Update ps status bar msg"""
was_blank = False
while self.is_running:
await asyncio.sleep(2)
if settings.ollama_ps_poll_interval < 1:
self.main_screen.post_message(PsMessage(msg=""))
break
await asyncio.sleep(settings.ollama_ps_poll_interval)
ret = dm.model_ps()
if not ret:
if not was_blank:
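
The change above reads the poll interval from settings on every pass, treats anything below 1 as "disabled", and sleeps for the configured interval instead of a hard-coded 2 seconds. A minimal standalone sketch of the same pattern; the names `poll_status`, `fetch`, and `publish` are illustrative, not parllama's API:

```
import asyncio

async def poll_status(interval: float, fetch, publish) -> None:
    """Poll fetch() every `interval` seconds and publish the result.

    An interval below 1 disables polling, mirroring the diff above.
    """
    while True:
        if interval < 1:
            publish("")  # clear the status line once, then stop polling
            break
        await asyncio.sleep(interval)
        publish(fetch() or "")

# Usage sketch (commented out because the loop runs until cancelled):
# asyncio.run(poll_status(3, lambda: "llama3:8b 100%/0% GPU/CPU", print))
```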
@@ -600,14 +618,14 @@ def status_notify(self, msg: str, severity: SeverityLevel = "information") -> None:
self.notify(msg, severity=severity)
self.main_screen.post_message(StatusMessage(msg))

def post_message_all(self, msg: Message) -> None:
def post_message_all(self, msg: Message, sub_name: str = "*") -> None:
"""Post a message to all screens"""
if isinstance(msg, StatusMessage):
self.log(msg.msg)
self.last_status = msg.msg

for w in list(self.notify_subs["*"]):
w.post_message(msg)
if sub_name in self.notify_subs:
for w in list(self.notify_subs[sub_name]):
w.post_message(msg)
if self.main_screen:
self.main_screen.post_message(msg)
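
Taken together, `on_app_request` and the reworked `post_message_all` form a small topic-keyed publish/subscribe registry: each requesting widget is filed under both "*" and its request message's class name, and `post_message_all` now delivers only to the named subscription (defaulting to "*"). A self-contained sketch of that pattern, with simplified types rather than the actual Textual widgets:

```
from collections import defaultdict

class Hub:
    """Topic-keyed pub/sub mirroring the notify_subs dict above."""

    def __init__(self) -> None:
        self.subs: dict[str, set] = defaultdict(set)

    def register(self, topic: str, widget) -> None:
        self.subs["*"].add(widget)    # every widget hears broadcasts
        self.subs[topic].add(widget)  # plus messages for its own topic

    def publish(self, msg, topic: str = "*") -> None:
        for widget in list(self.subs.get(topic, ())):
            widget.post_message(msg)
```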

@@ -622,8 +640,6 @@ def on_create_model_from_existing_requested(
self, msg: CreateModelFromExistingRequested
) -> None:
"""Create model from existing event"""
msg.stop()

self.main_screen.create_view.name_input.value = f"my-{msg.model_name}:latest"
self.main_screen.create_view.text_area.text = msg.model_code
self.main_screen.create_view.quantize_input.value = msg.quantization_level or ""
@@ -633,7 +649,25 @@ def on_model_interact_requested(self, msg: ModelInteractRequested) -> None:
@on(ModelInteractRequested)
def on_model_interact_requested(self, msg: ModelInteractRequested) -> None:
"""Model interact requested event"""
msg.stop()
self.main_screen.change_tab("Chat")
self.main_screen.chat_view.model_select.value = msg.model_name
self.main_screen.chat_view.user_input.focus()

@on(SessionListChanged)
def on_session_list_changed(self) -> None:
"""Session list changed event"""
self.main_screen.chat_view.session_list.post_message(SessionListChanged())

@on(SessionSelected)
def on_session_selected(self, msg: SessionSelected) -> None:
"""Session selected event"""
self.main_screen.chat_view.session_list.post_message(
SessionSelected(session_id=msg.session_id)
)

@on(DeleteSession)
def on_delete_session(self, msg: DeleteSession) -> None:
"""Delete session event"""
self.main_screen.chat_view.post_message(
DeleteSession(session_id=msg.session_id)
)
