Add support for Ollama LLM #1526

Merged: 1 commit into zylon-ai:main from ygalblum:add-ollama on Feb 9, 2024

Conversation

@ygalblum (Contributor)

Allow using Ollama as the LLM

@Fau57 commented Jan 25, 2024

AGREED!

@oatmealm

I'm trying this branch... it seems to still want to build LlamaCPP:

settings-ollama.yaml:

ollama:
  model: llama2:latest

Running PGPT_PROFILES=ollama make run fails with:

  File "/home/user/src/privateGPT/private_gpt/components/llm/llm_component.py", line 38, in __init__
    self.llm = LlamaCPP(
               ^^^^^^^^^
  File "/home/user/.cache/pypoetry/virtualenvs/private-gpt-o6kY6e_Z-py3.11/lib/python3.11/site-packages/llama_index/llms/llama_cpp.py", line 119, in __init__
    raise ValueError(
ValueError: Provided model path does not exist. Please check the path or provide a model_url to download.

@ygalblum (Contributor, Author) commented Feb 2, 2024

@oatmealm
IIUC, according to https://github.com/imartinez/privateGPT/blob/main/private_gpt/settings/settings_loader.py#L36, setting PGPT_PROFILES instructs the code to load additional settings-{profile}.yaml files, but the base settings.yaml is always loaded.

So I think you also need to set the llm mode:

llm:
  mode: ollama
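
Putting the thread's snippets together, a complete settings-ollama.yaml for this branch would look something like the sketch below; both keys come from the snippets in this thread, nothing else is assumed:

# settings-ollama.yaml -- loaded on top of the base settings.yaml when PGPT_PROFILES=ollama
llm:
  mode: ollama    # switch the LLM component to Ollama instead of the default LlamaCPP

ollama:
  model: llama2:latest    # a model already pulled into the local Ollama instance

Then run as before: PGPT_PROFILES=ollama make run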

@imartinez (Collaborator) left a comment

Amazing! Could you update the documentation so people can figure out how to use this?

https://docs.privategpt.dev/manual/advanced-setup/llm-backends

Just update the markdown in fern/docs/pages/manual/llms.mdx

Allow using Ollama as the LLM

Signed-off-by: Ygal Blum <ygal.blum@gmail.com>
@ygalblum (Contributor, Author) commented Feb 8, 2024

> Could you update the documentation so people can figure out how to use this?

Done

@imartinez merged commit 6bbec79 into zylon-ai:main on Feb 9, 2024 (8 checks passed)
@ygalblum deleted the add-ollama branch on February 11, 2024 07:24

@icsy7867 (Contributor)

This is great! Ollama now supports the OpenAI API format. I have just been running privateGPT using the "openailike" mode, which has been working fine.
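
For reference, a sketch of that setup: privateGPT's openailike mode pointed at Ollama's OpenAI-compatible endpoint. The keys under openai: are assumptions based on privateGPT's settings schema, not something stated in this PR, so check them against your installed version.

# settings-ollama-openailike.yaml (sketch; openai: key names assumed, not from this PR)
llm:
  mode: openailike    # reuse the OpenAI-style client instead of the new ollama mode

openai:
  api_base: http://localhost:11434/v1    # Ollama's OpenAI-compatible endpoint
  api_key: ollama                        # Ollama ignores the key; the client just needs a non-empty value
  model: llama2:latest                   # model name as known to Ollama

Run with the matching profile (the profile name here is illustrative), e.g. PGPT_PROFILES=ollama-openailike make run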

@skywalker123p

How can I combine embeddings when using the Ollama LLM? Could you give an example configuration and explain how to run it? Thanks.
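
(A hedged sketch of one possible answer, since this PR only changes the LLM side: embeddings are configured by a separate embedding: section in the same profile file. The embedding keys below are assumptions about privateGPT's settings schema, not anything confirmed in this thread.)

# settings-ollama.yaml (sketch; embedding: keys assumed, not from this PR)
llm:
  mode: ollama

embedding:
  mode: huggingface    # or whichever embedding backend your privateGPT version supports

ollama:
  model: llama2:latest

Run it the same way: PGPT_PROFILES=ollama make run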

simonbermudez pushed a commit to simonbermudez/saimon that referenced this pull request Feb 24, 2024