
# Running Llama3 on NAS

This is a simple guide to running Llama3 locally on your NAS using Open WebUI and Ollama.

## Hardware

## Setup Instructions

### 1. Create Necessary Directories

Open a terminal and run the following commands to create directories for your AI models and the web interface:

```bash
$ mkdir -p /home/ansonhe/AI/ollama
$ mkdir -p /home/ansonhe/AI/Llama3/open-webui
```

### 2. Deploy Using Docker

Copy the provided `docker-compose.yml` file to your directory, then start the services using Docker:

```bash
$ sudo docker-compose up -d
```
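For reference, a minimal `docker-compose.yml` along these lines should work. This is only a sketch: the image tags, container names, and volume mappings below are assumptions based on the directories created in step 1, so prefer the file provided with this repo.

```yaml
# Sketch of a docker-compose.yml for Ollama + Open WebUI.
# Image tags, container names, and volume paths are assumptions;
# adjust them to match the provided file and your NAS.
version: "3"

services:
  ollama:
    image: ollama/ollama:latest
    container_name: ollama
    ports:
      - "11434:11434"
    volumes:
      - /home/ansonhe/AI/ollama:/root/.ollama                   # model storage from step 1
    restart: unless-stopped

  open-webui:
    image: ghcr.io/open-webui/open-webui:main
    container_name: open-webui
    ports:
      - "8080:8080"
    environment:
      - OLLAMA_BASE_URL=http://ollama:11434                     # point the WebUI at the Ollama service
    volumes:
      - /home/ansonhe/AI/Llama3/open-webui:/app/backend/data    # WebUI data from step 1
    depends_on:
      - ollama
    restart: unless-stopped
```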

### 3. Access the Web Interface

Open a web browser and navigate to `http://YOUR_NAS_IP:8080`.
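If the page does not load, a quick check from the NAS terminal can confirm the containers are running (assuming the default port mapping from the compose file):

```bash
# Both containers should be listed as "Up"
$ sudo docker ps

# Should return an HTTP response header if Open WebUI is listening on port 8080
$ curl -I http://localhost:8080
```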

### 4. Configure Ollama Models

In the web interface, change the Ollama Models address to `0.0.0.0:11434`.

*(Screenshot: Ollama API configuration)*
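Optionally, you can confirm that the Ollama API is reachable on that port from a terminal; `/api/tags` is Ollama's standard endpoint for listing installed models (the list will be empty until step 5):

```bash
# Returns a JSON list of the models Ollama currently has installed
$ curl http://YOUR_NAS_IP:11434/api/tags
```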

### 5. Download the Llama3 Model

Download the Llama3 model with `ollama run llama3:8b`. (Only the 8B version is used here because of performance limitations on the NAS.)

*(Screenshot: model download)*
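If you prefer the command line over the web interface, the same model can be pulled from inside the Ollama container; the container name `ollama` here is an assumption taken from the compose sketch above:

```bash
# Pull the 8B Llama3 model inside the Ollama container (container name assumed)
$ sudo docker exec -it ollama ollama pull llama3:8b
```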

### 6. Set Llama3 as the Default Model

After the model has been downloaded, you can select Llama3 as your default model and host it locally.

*(Screenshot: Llama3 model selection)*

## Performance Considerations

Note that the NAS CPU may still struggle when running large language models (LLMs) because of its limited processing power.