
feat: Assistant API #1273

Open · mudler opened this issue Nov 11, 2023 · 9 comments
Labels: enhancement (New feature or request), roadmap, up for grabs (Tickets that no-one is currently working on)

Comments

@mudler (Owner) commented Nov 11, 2023

OpenAI released the Assistant API. https://platform.openai.com/docs/assistants/overview

This thread is to collect thoughts on how to integrate this - it seems we already have all the needed pieces, but we need to stitch things together to expose a similar API.

I haven't had a closer look at the API yet - it's not on my radar until #1126 is completed.

For reference, the OpenAI API spec: https://github.com/openai/openai-openapi

@mudler added the enhancement (New feature or request) label Nov 11, 2023
@mudler self-assigned this Nov 11, 2023
@mudler added the roadmap label Nov 11, 2023
@mudler removed their assignment Nov 11, 2023
@mudler added the up for grabs (Tickets that no-one is currently working on) label Nov 11, 2023
@localai-bot (Contributor) commented

⚠️⚠️⚠️⚠️⚠️

Hi! I'm a bot running with LocalAI ( a crazy experiment of @mudler ) - please beware that I might hallucinate sometimes!

but... I can also be funny or helpful 😸 and, generally speaking, I can provide good tips or point to places to look in the documentation or the code, based on what you wrote in the issue.

Don't engage in conversation with me,
I don't support (yet) replying!

⚠️⚠️⚠️⚠️⚠️

To integrate the OpenAI Assistant API with LocalAGI, we can use the local LLMs that are already loaded in memory to generate completions and edits. We can also use the existing chat interface to interact with the user and receive instructions for generating completions and edits.
Here's a possible approach:

  1. Add a new endpoint to LocalAGI to generate completions and edits using the local LLMs. This endpoint should take in the instruction, input text, and temperature parameters, and return the generated completion or edit as a JSON response.
  2. Modify the chat interface to include buttons for generating completions and edits, and send the appropriate request to the new endpoint when the user clicks these buttons.
  3. Add a new command-line argument to LocalAGI to specify the OpenAI Assistant API key, if needed. This will allow users to use their own OpenAI API key to access the Assistant API directly from LocalAGI.
  4. Test and document the integration thoroughly to ensure it works as expected and is easy to use for users with different levels of technical expertise.

Overall, this integration should provide a seamless experience for users who want to use LocalAGI to generate completions and edits using OpenAI's LLMs, while also allowing them to access the full Assistant API directly from LocalAGI if needed.
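
A minimal sketch of what step 1 could look like in Go - the route, field names, and the generate helper are all hypothetical, not LocalAI's (or LocalAGI's) actual API:

```go
package main

import (
	"encoding/json"
	"net/http"
)

// editRequest mirrors the parameters suggested above: instruction,
// input text, and sampling temperature. (Hypothetical shape.)
type editRequest struct {
	Instruction string  `json:"instruction"`
	Input       string  `json:"input"`
	Temperature float64 `json:"temperature"`
}

type editResponse struct {
	Output string `json:"output"`
}

func editHandler(w http.ResponseWriter, r *http.Request) {
	var req editRequest
	if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
		http.Error(w, err.Error(), http.StatusBadRequest)
		return
	}
	// generate stands in for a call into the locally loaded LLM.
	out := generate(req.Instruction, req.Input, req.Temperature)
	w.Header().Set("Content-Type", "application/json")
	json.NewEncoder(w).Encode(editResponse{Output: out})
}

// generate is a placeholder: a real integration would run inference here.
func generate(instruction, input string, temperature float64) string {
	return ""
}

func main() {
	http.HandleFunc("/v1/edits", editHandler)
	http.ListenAndServe(":8080", nil)
}
```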


@muka (Contributor) commented Dec 28, 2023

In an effort to contribute to this, I started collecting the requirements and functionalities offered by the Assistant API.

Assistant functionalities

  • Assistant API
  • Thread API
  • Run API
  • File upload API
  • Tools support
  • RAG support (also involving a vector DB, embeddings, feature extraction, and search)
  • Code interpreter support
  • Automatic truncation (or other approaches) of long context to fit in a Thread
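
For reference, a rough Go sketch of the core objects these endpoints revolve around, with fields abridged from the OpenAI spec linked above (a partial, illustrative schema, not the full spec):

```go
package assistants

// Abridged shapes of the main Assistants API objects, based on the
// OpenAI OpenAPI spec; the fields here are a subset for illustration.

type Assistant struct {
	ID           string   `json:"id"`
	Object       string   `json:"object"` // "assistant"
	CreatedAt    int64    `json:"created_at"`
	Name         string   `json:"name"`
	Model        string   `json:"model"`
	Instructions string   `json:"instructions"`
	Tools        []Tool   `json:"tools"`
	FileIDs      []string `json:"file_ids"`
}

type Tool struct {
	Type string `json:"type"` // "code_interpreter", "retrieval", "function"
}

type Thread struct {
	ID        string `json:"id"`
	Object    string `json:"object"` // "thread"
	CreatedAt int64  `json:"created_at"`
}

type Message struct {
	ID       string `json:"id"`
	ThreadID string `json:"thread_id"`
	Role     string `json:"role"` // "user" or "assistant"
}

type Run struct {
	ID          string `json:"id"`
	ThreadID    string `json:"thread_id"`
	AssistantID string `json:"assistant_id"`
	Status      string `json:"status"` // "queued", "in_progress", "completed", ...
}
```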


@mudler pinned this issue Feb 11, 2024
@mudler (Owner, Author) commented Feb 13, 2024

First PR in that direction adding File API: #1703

@richiejp (Collaborator) commented

I have a few ideas for vector search in order of my personal preference:

  1. Embedded vector store and search
  2. Host one of the many vector DBs in a backend
  3. Connect to an external store

All three could exist at once. I like the first one for the use-case where someone has a limited number of documents and/or low search volume, and for being the default choice in LocalAI without incurring a lot of maintenance or bloat. It should be relatively simple because:

  • Doing a brute-force vector search in memory is reasonably fast for up to, let's say, 1M small vectors. It's also an exact search, whereas the others are approximate. Both HNSW and brute-force search implementations are included in the link.
  • Embeddings and document chunks could be saved to a flat file and loaded into memory when needed.
  • Alternatively, BadgerDB can be embedded, which allows fast key iteration for comparing the vectors:
    • Even large 4096-dimension embeddings can be stored as keys, and the values can be document segments.
    • The keys can be prefixed with a file ID, so that only particular files are used in a query, matching the OpenAI API.
    • This should handle the use-case where events are being streamed into the database in real time.

So I went ahead and started an experiment external to LocalAI: https://github.com/richiejp/badger-cybertron-vector/blob/main/main.go

It's probably really slow due to copying the keys, the lack of parallel execution, and such, but it works. I expect these things can be optimized. The question is whether to go with BadgerDB or a pure in-memory implementation?
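
For the pure in-memory option, the core of a brute-force search is small enough to sketch directly (illustrative only - no BadgerDB, no persistence, no parallelism):

```go
package vectorstore

import (
	"math"
	"sort"
)

// Entry pairs a document chunk with its embedding.
type Entry struct {
	Chunk     string
	Embedding []float32
}

// cosine computes cosine similarity; a and b must have equal length.
func cosine(a, b []float32) float64 {
	var dot, na, nb float64
	for i := range a {
		dot += float64(a[i]) * float64(b[i])
		na += float64(a[i]) * float64(a[i])
		nb += float64(b[i]) * float64(b[i])
	}
	if na == 0 || nb == 0 {
		return 0
	}
	return dot / (math.Sqrt(na) * math.Sqrt(nb))
}

// Search scans every entry (exact, O(n·d)) and returns the top-k chunks.
func Search(entries []Entry, query []float32, k int) []Entry {
	type scored struct {
		e Entry
		s float64
	}
	scores := make([]scored, 0, len(entries))
	for _, e := range entries {
		scores = append(scores, scored{e, cosine(e.Embedding, query)})
	}
	sort.Slice(scores, func(i, j int) bool { return scores[i].s > scores[j].s })
	if k > len(scores) {
		k = len(scores)
	}
	out := make([]Entry, k)
	for i := 0; i < k; i++ {
		out[i] = scores[i].e
	}
	return out
}
```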

BTW Cybertron is pretty cool, that could be a new backend.

@richiejp (Collaborator) commented

Perhaps instead of, or in addition to, what I have done with the basic vector search, we could have a higher-level API backed by https://github.com/bclavie/ragatouille as a starting point.

The reason is that ColBERT v2 seems far superior to basic cosine similarity search, but it is difficult to unpack it and get it to work with an arbitrary vector database. It's possibly the wrong level of abstraction for LocalAI to be working at, even internally.

Opinions on implementing Ragatouille or ColBERT (https://github.com/stanford-futuredata/ColBERT) as a backend?

@richiejp (Collaborator) commented

I'm mainly thinking of the Ragatouille indexing and retrieval APIs: https://github.com/bclavie/ragatouille?tab=readme-ov-file#%EF%B8%8F-indexing
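
If this were exposed as a backend, the contract LocalAI would need from it is roughly an index/search pair. Here is a hypothetical Go interface mirroring Ragatouille's indexing and retrieval split (names and signatures are mine, not an existing LocalAI or Ragatouille type):

```go
package backend

// Retriever is a hypothetical interface for a RAG backend, such as a
// Ragatouille/ColBERT wrapper; names and signatures are illustrative.
type Retriever interface {
	// Index ingests raw documents and builds (or rebuilds) a named index.
	Index(indexName string, documents []string) error
	// Search returns the top-k passages for a query from a named index.
	Search(indexName, query string, k int) ([]Result, error)
}

type Result struct {
	Passage string
	Score   float64
}
```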

@christ66 (Collaborator) commented

+1 for Ragatouille

@SuperPat45 commented Sep 27, 2024

I just discovered Open Interpreter and OpenCodeInterpreter, which could be used as replacements for OpenAI's Code Interpreter tool in a LocalAI Assistant API implementation.

@SuperPat45 commented Sep 28, 2024

I also discovered MY Local Assistant (myla) and astra-assistants-api, two local implementations of the OpenAI Assistants API.

Maybe these projects can be integrated into LocalAI or, at least, help inform the LocalAI implementation.
