Merge pull request #100 from yankay/Add-Ollama-backends
docs: Add Ollama backends
AlexsJones authored Jul 15, 2024
2 parents abbc431 + 58ea49d commit 4a1fbed
Showing 1 changed file with 22 additions and 0 deletions: docs/reference/providers/backend.md
@@ -14,6 +14,7 @@ Currently, we have a total of 11 backends available:
- [Hugging Face](https://huggingface.co)
- [IBM watsonx.ai](https://www.ibm.com/products/watsonx-ai)
- [LocalAI](https://github.com/go-skynet/LocalAI)
- [Ollama](https://github.com/ollama/ollama)
- FakeAI

## OpenAI
@@ -208,6 +209,27 @@ Ollama is a local model, which has an OpenAI compatible API. It supports the mod
```bash
k8sgpt analyze --explain --backend localai
```
## Ollama
Ollama gets you up and running with large language models locally. It runs Llama 2, Code Llama, and other models.
- To start the Ollama server, follow the instructions in [Ollama](https://github.com/ollama/ollama?tab=readme-ov-file#start-ollama):
```bash
ollama serve
```
It can also run as a Docker image; follow the instructions in the [Ollama blog](https://ollama.com/blog/ollama-is-now-available-as-an-official-docker-image):
```bash
docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
```
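Either way, the server listens on port 11434. The model named in the authentication step must already be present locally before K8sGPT can use it; a minimal sketch, assuming the default `llama2` tag:

```shell
# Pull the model so it is available to Ollama's OpenAI-compatible API
ollama pull llama2

# Confirm it appears in the local model list
ollama list
```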
- Authenticate K8sGPT with Ollama:
```bash
k8sgpt auth add --backend ollama --model llama2 --baseurl http://localhost:11434/v1
```
- Analyze with an Ollama backend:
```bash
k8sgpt analyze --explain --backend ollama
```
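If the analyze step fails, it can help to confirm the OpenAI-compatible endpoint is reachable before involving K8sGPT at all. A sketch using `curl`, assuming the server from the steps above on `localhost:11434` and the `llama2` model:

```shell
# Send a minimal chat completion request to Ollama's OpenAI-compatible API;
# a JSON response (rather than a connection error) confirms the endpoint works
curl -s http://localhost:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "llama2",
        "messages": [{"role": "user", "content": "Hello"}]
      }'
```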
## FakeAI
FakeAI, also known as the NoOpAiProvider, is useful when you need to test a new feature or simulate the behaviour of an AI-based system without actually invoking it. It can help with local development, testing, and troubleshooting.
