Nosia is a platform that allows you to run an AI model on your own data. It is designed to be easy to install and use.
You can follow this README or go to the Nosia Guides.
(Demo videos: POC-RAG-AI-Rails-8.mp4, POC-Nosia-install.mp4)
The following one-line command installs Docker, Ollama, and Nosia on a macOS, Debian, or Ubuntu machine:
curl -fsSL https://raw.githubusercontent.com/nosia-ai/nosia-install/main/nosia-install.sh | sh
You should see the following output:
✅ Setting up environment
✅ Setting up Docker
✅ Setting up Ollama
✅ Starting Ollama
✅ Starting Nosia
You can now access Nosia at https://nosia.localhost
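Once the installer finishes, a quick sanity check can confirm everything came up. This is a sketch: the compose file path and the `-k` flag (to accept the local self-signed certificate) are assumptions based on the default setup.

```shell
# Post-install sanity check (sketch; assumes the default Docker Compose setup).
# The containers should be up:
#   docker compose -f ./docker-compose.yml ps
# The web UI should answer (local certificate is self-signed, hence -k):
#   curl -fsk -o /dev/null https://nosia.localhost && echo "Nosia is up"
NOSIA_URL=https://nosia.localhost
echo "Nosia UI: $NOSIA_URL"
```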
By default, Nosia sets up Ollama locally. To use a remote Ollama instance, set the OLLAMA_BASE_URL environment variable during configuration.
Example: replace $OLLAMA_HOST_IP with the FQDN or IP address of your Ollama host and run:
curl -fsSL https://raw.githubusercontent.com/nosia-ai/nosia-install/main/nosia-install.sh \
| OLLAMA_BASE_URL=http://$OLLAMA_HOST_IP:11434 sh
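Before installing, you can check that the remote Ollama instance is reachable from the Nosia machine. A sketch with a hypothetical host IP; Ollama answers on its /api/version endpoint when up:

```shell
# Hypothetical host IP; replace with your Ollama host's FQDN or IP.
OLLAMA_HOST_IP=192.168.1.50
OLLAMA_BASE_URL="http://$OLLAMA_HOST_IP:11434"
# A reachable Ollama server answers here with its version:
#   curl -fsS "$OLLAMA_BASE_URL/api/version"
echo "$OLLAMA_BASE_URL"
```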
By default, Nosia uses:
- Completion model: qwen2.5
- Embeddings model: nomic-embed-text
- Checking model: bespoke-minicheck
You can use any completion model available on Ollama by setting the LLM_MODEL environment variable during installation.
Example: to use the mistral model, run:
curl -fsSL https://raw.githubusercontent.com/nosia-ai/nosia-install/main/nosia-install.sh \
| LLM_MODEL=mistral sh
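The installer's environment variables can presumably be combined in one run; that the script honors both at once is an assumption here, but passing several variables to the piped sh is plain shell. A sketch with a hypothetical host IP:

```shell
# Hypothetical host IP; combine a custom model with a remote Ollama (sketch).
OLLAMA_HOST_IP=192.168.1.50
# curl -fsSL https://raw.githubusercontent.com/nosia-ai/nosia-install/main/nosia-install.sh \
#   | LLM_MODEL=mistral OLLAMA_BASE_URL=http://$OLLAMA_HOST_IP:11434 sh
echo "LLM_MODEL=mistral OLLAMA_BASE_URL=http://$OLLAMA_HOST_IP:11434"
```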
At this time, the nomic-embed-text embeddings model is required for Nosia to work.
On macOS, install Homebrew:
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
Then install Ollama with Homebrew, pull the default models, and start the Ollama server. Replace $OLLAMA_HOST_IP with the IP address of the Ollama host machine and run the following commands:
brew install ollama
ollama pull qwen2.5
ollama pull bespoke-minicheck
ollama pull nomic-embed-text
OLLAMA_HOST=$OLLAMA_HOST_IP:11434 OLLAMA_MAX_LOADED_MODELS=3 ollama serve
On the Debian/Ubuntu VM:
Replace $OLLAMA_HOST_IP with the IP address of the host machine and run the following command:
curl -fsSL https://raw.githubusercontent.com/nosia-ai/nosia-install/main/nosia-install.sh \
| OLLAMA_BASE_URL=http://$OLLAMA_HOST_IP:11434 sh
You should see the following output:
✅ Setting up environment
✅ Setting up Docker
✅ Setting up Ollama
✅ Starting Ollama
✅ Starting Nosia
From the VM, you can access Nosia at https://nosia.localhost
If you want to access Nosia from the host machine, you may need to forward the port from the VM to the host machine.
Replace $USER with the username of the VM, $VM_IP with the IP address of the VM, and $LOCAL_PORT with the port you want to use on the host machine (8443, for example), and run the following command:
ssh $USER@$VM_IP -L $LOCAL_PORT:localhost:443
After running the command, you can access Nosia at https://nosia.localhost:$LOCAL_PORT.
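With hypothetical values filled in, the tunnel looks like the sketch below; the username, VM IP, and local port are placeholders to substitute with your own.

```shell
# Placeholder values; substitute your own.
USER=debian
VM_IP=192.168.64.5
LOCAL_PORT=8443
# Forward the VM's HTTPS port (443) to the chosen local port:
CMD="ssh $USER@$VM_IP -L $LOCAL_PORT:localhost:443"
echo "$CMD"
```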
- As a logged-in user, go to https://nosia.localhost/api_tokens
- Generate and copy your token
- Use your favorite OpenAI chat completion API client, configuring the API base to https://nosia.localhost/v1 and the API key to your token.
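As a raw example, a chat completion request with curl might look like the sketch below. The token value is a placeholder (generate a real one at /api_tokens), the model name assumes the default qwen2.5, and -k accepts the local self-signed certificate.

```shell
API_BASE=https://nosia.localhost/v1
API_KEY=your-token-here   # placeholder; generate at https://nosia.localhost/api_tokens
PAYLOAD='{"model":"qwen2.5","messages":[{"role":"user","content":"Hello"}]}'
# curl -ks "$API_BASE/chat/completions" \
#   -H "Authorization: Bearer $API_KEY" \
#   -H "Content-Type: application/json" \
#   -d "$PAYLOAD"
echo "POST $API_BASE/chat/completions"
```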
You can upgrade the services with the following command:
./script/upgrade
You can start the services with the following command:
./script/start
You can stop the services with the following command:
./script/stop
If you encounter any issue:
- during installation, you can check the logs at ./log/production.log
- while waiting for an AI response, you can check the jobs at http://<IP>:3000/jobs
- with Nosia, you can check the logs with docker compose -f ./docker-compose.yml logs -f
- with the Ollama server, you can check the logs at ~/.ollama/logs/server.log
If you need further assistance, please open an issue!