Enhancement: LibreChat Agents #3607
Comments
Regarding the modular system, it would be nice to be able to put all assistant tool logic into a single folder, accompanied by a JSON manifest. This would facilitate the creation of a LibreChat agents store, where each agent/assistant such as |
What I mean is not agents but prompts: could prompt management take inspiration from https://github.com/lobehub/lobe-chat? Lobe Chat does this better and ships with many useful prompts. Also, LibreChat's prompts seem to be sent as chat messages rather than as a system prompt; I hope system prompts will be supported. |
I would greatly prefer a way to use tools that is not tied to the assistant concept. Most LLMs today support tools as part of the interface, so I would prefer the baseline tool support to be in the preset. I really don't need more than a way to provide a JSON manifest for each tool in the preset and then to be able to implement the callback myself, either by supplying a URL to call or the actual server-side implementation. Assistants are fine, but I think they're an additional abstraction on top, and one of the things I love about LibreChat is that it doesn't force me to use abstractions beyond those in the baseline LLM API contracts. |
Thanks for your comment! This will be a successor to the plugins endpoint, which does not have that additional abstraction you mention. I want to allow "tools" to be toggled on and off per "run", on the fly, as before. Though it's worth mentioning, this is in essence an agent without an association to the agent data structure, and Plugins also functions like this under the hood. As soon as the LLM has "agency" over tools, it crosses into that territory. |
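As an aside, here is a minimal sketch of the "JSON manifest plus callback" idea discussed above, loosely following the OpenAI function-calling schema. The tool name, handler, and weather service are purely illustrative assumptions, not an existing LibreChat API:

```typescript
// Hypothetical per-tool manifest, shaped like an OpenAI function-calling tool definition.
const weatherToolManifest = {
  type: "function",
  function: {
    name: "get_weather",
    description: "Return the current weather for a given city",
    parameters: {
      type: "object",
      properties: {
        city: { type: "string", description: "City name, e.g. 'Berlin'" },
      },
      required: ["city"],
    },
  },
} as const;

// Hypothetical server-side callback the user implements themselves.
// In the proposal above, this could equally be a URL that the app calls
// whenever the model requests the tool.
async function getWeather(args: { city: string }): Promise<string> {
  const res = await fetch(`https://wttr.in/${encodeURIComponent(args.city)}?format=j1`);
  if (!res.ok) {
    throw new Error(`Weather lookup failed: ${res.status}`);
  }
  const data = await res.json();
  // Return a compact string for the model to read back to the user.
  return JSON.stringify(data.current_condition?.[0] ?? {});
}

export { weatherToolManifest, getWeather };
```

The point is simply that a JSON schema plus a single handler is enough to describe a tool; whether it is attached to a preset, an agent, or toggled per run is the design question being discussed here.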
LibreChat has let users work with system prompts via presets since day one, right after the OpenAI API launched around March/April 2023. With the upcoming update, we want to make this feature even more visible and easier to use. Also, think of "Agent" as a more straightforward way to describe this idea, since it basically uses "instructions" that act like a system prompt. |
Is there planned integration of agents with locally hosted LLMs via platforms such as Ollama? |
Yes! All Ollama models that support tool calling. |
Would you mind sharing the architecture in a nutshell you're planning to implement? I tried to create a simple WeatherAssistant as a way to experiment with this process: https://www.librechat.ai/docs/development/tools_and_plugins. It worked. However, I found it quite complex, with too many files to edit for such a simple project. As I mentioned, I would really appreciate seeing a solution where all that's needed is to put the assistant's logic and its manifest.json in a single directory. Is that something you're working towards? Thanks! |
Sure, yes. I can't comment on too much, but adding tools will be made much simpler, and you will also have the benefit of adding actions, similar to the ChatGPT/Assistants API. |
@danny-avila eagerly waiting for this feature, as we want to integrate LibreChat with Bedrock agents. Is there any ETA for this? |
Initial release will be this week. Thanks for your patience. |
Hey, is there an initial release yet? Thx |
Currently in beta. You can define this in your .env file: EXPERIMENTAL_AGENTS=true. This will enable "Agents" as an endpoint, currently supporting Azure, Bedrock, OpenAI, and Anthropic; custom endpoints and Google are still in the works. |
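For clarity, this is the flag as it would appear in the .env file, exactly as described in the comment above (nothing beyond that line is implied):

```
# Beta flag from the comment above; enables the "Agents" endpoint option.
EXPERIMENTAL_AGENTS=true
```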
Hi @danny-avila, can you point me to any documentation or examples on how I can try out Bedrock agents? |
Is code interpreter already enabled? I can only see the file retrieval option and would like to test code execution as well. |
Not yet |
There's no documentation yet on agents specifically, but you should follow my other comment to enable "Agents" as a drop-down endpoint option. As long as you have Bedrock set up as usual, which is documented, you should see it as an option when creating an agent. |
Just a quick question: I am running the docker image ghcr.io/danny-avila/librechat-dev:latest, I have pulled the latest changes, and I have set EXPERIMENTAL_AGENTS=true in my .env file, but I do not see any changes in the interface. Am I supposed to see a new "Agents" endpoint somewhere, or does it replace the former Assistants endpoint? Thanks! |
@djuillard after docker restart I have it in the drop-down as a new Agents endpoint: https://github.com/user-attachments/assets/8f70f7b6-4c37-416a-911d-c9f0a8ca8111 |
Hmm, strange. I did it several times, but no Agents endpoint appears on my side... |
I updated my config and removed everything that could possibly be in "conflict" with EXPERIMENTAL_AGENTS, leaving a single OpenAI endpoint. But still no Agents endpoint appears, and there is nothing special in the error or debug logs. Any idea why it does not appear? Am I supposed to set EXPERIMENTAL_AGENTS=true at a specific location in the .env file, or doesn't it matter (which is my guess)? Thanks |
My mistake... of course, I just needed to add 'agents' to the endpoints. It was more than obvious. Sorry! You can all forget my previous message. |
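Putting the two fixes from this sub-thread together, a working .env sketch could look like the following. The ENDPOINTS variable name and its value list are assumptions based on a typical LibreChat setup that restricts the endpoint list; if you don't restrict endpoints at all, the flag alone may be enough:

```
# Enable the beta Agents feature (see the earlier comment).
EXPERIMENTAL_AGENTS=true

# If you restrict the available endpoints, "agents" must be included as well
# (variable name and values here are illustrative; adjust to your own config).
ENDPOINTS=openAI,agents
```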
I was able to try out an agent using Bedrock, this is great work! I know full support for agents is still a work in progress. Meanwhile, I had a couple of questions:
CC: @danny-avila |
I understand how global assistants would work, with admins toggling access, but how would sharing assistants work? Will it be based on email addresses? I think it would be really neat to have a solution similar to Google Docs, where only the author can edit and manage access. |
Even though agents are still experimental, they seem to work well with pre-defined endpoints. However, they do not appear to work with custom (Ollama) endpoints. Is this a known issue, or could it be a misconfiguration on my side? Here is the error that occurs when I select the custom provider and model and then click on the back arrow:
|
They are not yet supported but planned for the official release.
@ravi-katiyar thanks for your questions.
@OskarVismaDev it will work similarly to how prompts work now, as well as allowing the author to manage access. With regards to sharing, this can eventually evolve into a local marketplace within your instance. |
I've experimented with agents and found that they are not very proactive in invoking plugins, which makes it difficult to achieve good coordination. |
@danny-avila Thanks for the response! I really like the local marketplace idea. In bigger organizations like schools or companies, though, I think it’d be helpful to have a third sharing option: private, public, or with specific emails in a custom field. I’ve tested a crude version of this using the assistant’s description field to control access, and it’s been great for group work without cluttering the dropdown for everyone (and of course keeping the work inside the right groups). Any chance we could see something like this in future updates? Thanks! |
Thanks @djuillard for your questions!
> file attachments: I can attach files in the knowledge base (when I tick "Enable File Search" and upload a file), but they do not appear in the agent builder (meaning that we have to ask the agent which files are in the knowledge base). It is not a big issue, but to better manage the KB, maybe it would be interesting to have the list of such files
Sharing functionality is not completely built out yet, and I'm actively working on this. However, you are meant to see the full file list as shown here: https://github.com/user-attachments/assets/2156abfa-81dc-4356-83d1-dcba3814ba68
> so far, with the share button, we can either share our agents with everyone, or keep them private; in the future, will we be able to share an assistant with only selected people in the organization?
Yes, the idea of "workspaces" or "projects" is already present in the app, and will be built out more soon.
> Sometimes, I experience an issue like the one below: no immediate idea of the reason for that error...
This is happening due to some functionality that hasn't been built out yet; I will address it sooner rather than later.
> code interpreter is not available yet. What do you plan to add? Maybe code interpreter would only be available through official OpenAI agents, which means that we would need to opt for an open-source equivalent, like Open Interpreter?
Unfortunately, this is not a simple problem. I'm happy to say that after months of work, I've built something that will be highly compatible with LibreChat, to run code in a safe and secure manner, running many different languages (not just Python or JavaScript), with the ability to work with and generate files, and scalable to many users. On Open Interpreter: it would be one of the least secure ways of doing this, and it's not scalable. It's mainly meant for a single-user environment.
> open-source equivalent
I would like to comment here for the first time that the Code Interpreter functionality won't be open source. I've given a lot to the open-source community through this project and I'm very thankful for what it has given in return. However, with this particular feature, I would like to protect the source code, as it came from months of research, frustration, and trial and error to build something truly effective and seamless across many use cases, and principally for the use case of LibreChat Agents. I believe this will also give LibreChat a "competitive" edge (as the AI space is wildly competitive right now, even across open-source software), and will help sustain the project long-term, for anyone who would like to pay a monthly cost for the ability to safely and effectively run code across the app. My decision was also due to the fact that packaging this solution would significantly bloat the current tech stack, with less compatibility across systems, thereby creating more "core" LibreChat maintenance overhead, not to mention the increased bandwidth. |
Thanks for your clear answers (as always!). |
You might find inspiration in the sponsorware model, as pioneered by Caleb Porzio. Instead of keeping the Code Interpreter permanently closed-source, you could initially release it exclusively to financial supporters, with the promise to open-source it once a funding milestone is reached. This model aligns with the open-source ethos while ensuring financial sustainability and maintaining a competitive edge. Moreover, it could extend beyond the initial release—applying the same approach to fund ongoing maintenance and improvements, ensuring the feature remains robust and effective over time. Caleb’s journey illustrates how this approach fosters community collaboration and rewards innovation: Caleb Porzio on Sponsorware. |
What features would you like to see added?
Open-source alternative to Assistants API, as a successor to the "Plugins" endpoint, with support for Mistral, AWS Bedrock, Anthropic, OpenAI, Azure OpenAI Services, and more.
More details
Tweet announcing this: https://twitter.com/LibreChatAI/status/1821195627830599895
Which components are impacted by your request?
Frontend/backend