
Trying holmesgpt with local ollama3.1 fails w. KeyError: 'name' #129

Open
suukit opened this issue Sep 6, 2024 · 2 comments

@suukit (Contributor) commented Sep 6, 2024

Hi,
I wanted to give this a try and installed ollama locally. I am able to use the ollama API on http://localhost:11434/api/generate with curl.
I ran export OLLAMA_API_BASE=http://localhost:11434, installed holmes with brew as described in the README, and started
holmes ask --model ollama/llama3.1:latest "what issues do I have in my cluster" -v
resulting in

APIConnectionError: litellm.APIConnectionError: 'name'
Traceback (most recent call last):
File "litellm/main.py", line 2422, in completion
File "litellm/llms/ollama.py", line 293, in get_ollama_response
KeyError: 'name'
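The traceback points at the step where litellm parses Ollama's reply as a function call. A minimal sketch of that failure mode (hypothetical dictionary shapes for illustration, not litellm's actual code):

```python
import json

# Hypothetical sketch: litellm expects the model to emit a function-call
# payload shaped like {"name": ..., "arguments": ...}. Smaller local models
# often answer with prose or differently shaped JSON, so the "name" lookup
# raises KeyError.
model_output = '{"answer": "everything looks healthy"}'  # hypothetical model reply

parsed = json.loads(model_output)
try:
    tool_name = parsed["name"]  # the access that fails in get_ollama_response
except KeyError as exc:
    print(f"KeyError: {exc}")  # prints: KeyError: 'name'
```

This matches the shape of the error above: the API call itself succeeds (hence no 404), but the response body is not what the parser expects.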


Any hints on how to debug this? I already tried -v, which gives no further helpful output. I changed OLLAMA_API_BASE to http://localhost:11434/ZZ and then received a 404 as expected, so I assume OLLAMA_API_BASE is working. I also switched to the ollama/llama2 model, with the same result.

Thank you in advance
Max

@juliendf commented:

I have the same error. Did you find a solution or a workaround?

Thanks

@arikalon1 (Contributor) commented:

Hey @juliendf

I think it's this issue in litellm:
BerriAI/litellm#3912
See also ollama/ollama#4398
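The fix discussed in the linked issues amounts to parsing the model output defensively instead of assuming a "name" key is present. A hedged sketch of that idea (my own illustration with a hypothetical parse_tool_call helper, not litellm's actual patch):

```python
import json
import uuid


def parse_tool_call(raw: str) -> dict:
    """Tolerant parsing sketch: fall back to returning the raw text as
    plain content when the model's reply is not a well-formed function
    call with a "name" key."""
    try:
        call = json.loads(raw)
    except json.JSONDecodeError:
        # Model answered in prose, not JSON: treat it as ordinary content.
        return {"content": raw, "tool_calls": []}
    if not isinstance(call, dict) or "name" not in call:
        # JSON, but not the expected {"name": ..., "arguments": ...} shape.
        return {"content": raw, "tool_calls": []}
    return {
        "content": None,
        "tool_calls": [{
            "id": f"call_{uuid.uuid4().hex[:8]}",
            "type": "function",
            "function": {
                "name": call["name"],
                "arguments": json.dumps(call.get("arguments", {})),
            },
        }],
    }
```

With something like this, a prose answer from llama3.1 would surface as normal content rather than crashing with KeyError: 'name'.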

Which version of Holmes are you using?
