A lightweight Python application hosting the DeepSeek-R1 Large Language Model, run locally.
- Python 3.7+
- Libraries: `customtkinter`, `ollama`, `pillow`
- Ollama (for serving the model)
- DeepSeek-R1 reasoning model
- Clone the repository:

  ```shell
  git clone https://github.com/PETROUNKNOWN/DeepSeek-R1_distilled.git
  ```
- Install the required libraries:

  ```shell
  pip install customtkinter ollama pillow
  ```
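After installing, you can sanity-check that all three libraries are importable with a short script (a minimal sketch; note that the `pillow` package is imported under the name `PIL`):

```python
# Sanity check: confirm the required libraries are importable.
# Note: the pillow package is imported under the name "PIL".
import importlib.util

required = ("customtkinter", "ollama", "PIL")
missing = [name for name in required if importlib.util.find_spec(name) is None]

if missing:
    print(f"Missing libraries: {', '.join(missing)}")
else:
    print("All required libraries are installed.")
```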
- Ollama:
  - Download Ollama from Ollama.com.
  - macOS
    - Open the .dmg file you downloaded.
    - Drag and drop the Ollama app icon into the Applications folder.
    - Open Finder, go to Applications, and double-click the Ollama app to launch it.
  - Windows
    - Double-click the .exe file you downloaded.
    - Follow the on-screen instructions in the installation wizard.
    - Choose the installation location (default is fine for most users).
    - Once the installation is complete, find Ollama in the Start Menu or on the Desktop and open it.
  - Linux
    - Open a terminal.
    - Navigate to the directory where the downloaded file is located.
    - Run the appropriate command based on the file type:
      - For .deb (Debian/Ubuntu-based systems):

        ```shell
        sudo dpkg -i ollama.deb
        sudo apt-get install -f
        ```

      - For .rpm (Fedora/RedHat-based systems):

        ```shell
        sudo rpm -ivh ollama.rpm
        ```

    - Once installed, launch Ollama from the Applications menu or by running `ollama` in the terminal.
- In a terminal:

  ```shell
  ollama serve
  ```
- An Ollama HTTP instance should now be running on port 11434: visiting localhost:11434 will take you to a page that says "Ollama is running".
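You can also verify the endpoint from Python. A minimal sketch, assuming the default port 11434 (the function name is illustrative):

```python
# Minimal health check for a locally running Ollama server.
from urllib.request import urlopen
from urllib.error import URLError


def ollama_is_running(url="http://localhost:11434"):
    """Return True if the Ollama HTTP endpoint responds with its banner."""
    try:
        with urlopen(url, timeout=2) as resp:
            return b"Ollama is running" in resp.read()
    except (URLError, OSError):
        # Server not started, wrong port, or connection refused.
        return False


print(ollama_is_running())
```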
- Model deepseek-r1:
  - In a terminal:

    ```shell
    ollama run deepseek-r1:8b
    ```
  - The model requires a lot of compute, so a dedicated GPU is highly recommended.
  - The model download requires 4.9GB of disk space, although one could go with the 20.0GB variant: instead of `ollama run deepseek-r1:8b`, use `ollama run deepseek-r1:32b`.
- You are encouraged to search online for other things Ollama can do, including other models you can run instead of DeepSeek's deepseek-r1 model.
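Once the model is pulled and `ollama serve` is running, the `ollama` Python client can query it from your own code. A minimal sketch (the helper name and prompt are illustrative; it returns `None` when the library or local server is unavailable):

```python
def ask_model(prompt, model="deepseek-r1:8b"):
    """Send a prompt to a locally served model via the ollama client.

    Returns the reply text, or None if the ollama library or the
    local Ollama server is unavailable.
    """
    try:
        import ollama  # third-party client, installed via `pip install ollama`
        response = ollama.chat(
            model=model,
            messages=[{"role": "user", "content": prompt}],
        )
        return response["message"]["content"]
    except Exception:
        # Library missing, server not running, or model not pulled.
        return None


if __name__ == "__main__":
    print(ask_model("Why is the sky blue?"))
```

Swap the `model` argument (e.g. `"deepseek-r1:32b"`) to target whichever variant you pulled.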