This web app lets users co-write a story with an AI language model, taking turns with the AI to build a shared narrative. Users can download their preferred language model from the Hugging Face Hub, and inference is performed with Transformers pipelines.
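The turn-taking described above amounts to keeping a shared story state that both the user and the model append to. A minimal sketch of that idea (the class and method names here are illustrative, not the repo's actual API):

```python
# Hypothetical sketch of the turn-taking story state; the real app's
# data structures may differ.
class Story:
    def __init__(self):
        self.turns = []  # list of (author, text) pairs

    def add_turn(self, author, text):
        self.turns.append((author, text))

    def as_prompt(self):
        # Concatenate all turns into a single prompt for the model.
        return "\n".join(text for _, text in self.turns)

story = Story()
story.add_turn("user", "The lighthouse keeper found a bottle on the shore.")
story.add_turn("ai", "Inside was a map drawn in a hand he almost recognized.")
print(story.as_prompt())
```

On each turn, the backend would render the story with `as_prompt()` and feed it to the model to generate the next contribution.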
- Frontend: Svelte (for building and serving the single-page app), JavaScript, SCSS, and HTML
- Backend: Python (a WebSocket server that hosts the LLM)
- Containerization: Docker (for packaging and deploying the application)
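Docker Compose ties the pieces above together. A hypothetical sketch of how the compose file might wire the services (service names, build paths, and mount points here are assumptions; see the `docker-compose.yml` in the repository for the real configuration):

```yaml
# Illustrative only -- not the repo's actual compose file.
services:
  frontend:
    build: ./frontend
    ports:
      - "80:80"                 # serve the Svelte app on localhost:80
  backend:
    build: ./backend
    environment:
      - AX_MODEL_NAME=${AX_MODEL_NAME}        # forwarded to the container
    volumes:
      - ${HF_HOME}:/root/.cache/huggingface   # share the host's model cache
```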
- Clone the repository:

      git clone https://github.com/akselkristoffersen/llm-story-collab.git
      cd llm-story-collab
- Install `huggingface_hub`:

      pip install -U huggingface_hub
- Set environment variables:
  - `$HF_HOME`: the directory where Hugging Face models will be downloaded and stored. Docker Compose also creates volumes for this directory.
  - `$AX_MODEL_NAME`: the name of the Hugging Face model you want to use. This environment variable is forwarded by Docker Compose to the backend container.

      export HF_HOME=/path/to/your/huggingface/models
      export AX_MODEL_NAME=huggingface-model-name
- Download the model:

      huggingface-cli download $AX_MODEL_NAME
- Install Docker:

  If you haven't installed Docker yet, download and install it from Docker's official website.
- Run Docker Compose:

      docker-compose up
- Open the app in a web browser at http://localhost:80
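Once the model named in `$AX_MODEL_NAME` is downloaded, the backend's inference step can be sketched as a Transformers text-generation pipeline call. This is a minimal illustration, not the repo's actual server code; `build_prompt` is a hypothetical helper:

```python
# Sketch of backend inference with a Transformers text-generation
# pipeline (illustrative; the real server wraps this in a WebSocket).
import os

def build_prompt(turns):
    # Join the story so far into one prompt string.
    return "\n".join(turns)

def main():
    # Heavyweight import kept inside main so the helper stays testable.
    from transformers import pipeline
    generator = pipeline("text-generation", model=os.environ["AX_MODEL_NAME"])
    turns = ["Once upon a time, a ship set sail for the edge of the map."]
    out = generator(build_prompt(turns), max_new_tokens=50, do_sample=True)
    print(out[0]["generated_text"])

if __name__ == "__main__":
    main()
```

Running this inside the backend container (where `AX_MODEL_NAME` is set and the model cache is mounted) would print the prompt followed by the model's continuation.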