
[completions and embeddings] Add support for Ollama (local LLMs) #711

Merged · 9 commits · Nov 9, 2023

Conversation

eolivelli (Member) commented Nov 9, 2023

Summary:

  • add support for https://ollama.ai/
  • with this patch you can now connect to local/custom LLMs via the Ollama REST API
  • add a sample application that uses Ollama both for Embeddings and completions

Configuration for Ollama

resources:
    - type: "ollama-configuration"
      name: "ollama"
      configuration:
        url: "${secrets.ollama.url}"

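As a rough illustration of what "connecting via the Ollama REST API" means in practice, the sketch below builds requests against Ollama's two relevant endpoints, `/api/generate` for completions and `/api/embeddings` for embeddings. This is not LangStream's internal implementation, just a minimal client-side sketch; the base URL, model name, and prompts are placeholder assumptions.

```python
import json
import urllib.request

# Default local Ollama endpoint; in LangStream this would come from
# the configured secret, e.g. ${secrets.ollama.url}.
OLLAMA_URL = "http://localhost:11434"

def build_request(path: str, payload: dict) -> urllib.request.Request:
    """Build a JSON POST request against the Ollama REST API."""
    return urllib.request.Request(
        url=OLLAMA_URL + path,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Completion request: POST /api/generate
completion_req = build_request("/api/generate", {
    "model": "llama2",          # placeholder model name
    "prompt": "Why is the sky blue?",
    "stream": False,            # ask for a single JSON response
})

# Embedding request: POST /api/embeddings
embedding_req = build_request("/api/embeddings", {
    "model": "llama2",
    "prompt": "Hello world",
})

# To actually send these, a running Ollama instance is required:
# with urllib.request.urlopen(completion_req) as resp:
#     print(json.loads(resp.read())["response"])
```

The same pattern covers both use cases the PR mentions: the completion response carries a `response` field with generated text, while the embeddings response carries an `embedding` field with a vector of floats.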
@eolivelli eolivelli changed the title [completions] Add support for Ollama (local LLMs) [completions and embeddings] Add support for Ollama (local LLMs) Nov 9, 2023
@eolivelli eolivelli merged commit b03f6d5 into main Nov 9, 2023
10 checks passed
@eolivelli eolivelli deleted the impl/ollama branch November 9, 2023 16:47
benfrank241 pushed a commit to vectorize-io/langstream that referenced this pull request May 2, 2024