
feat: add Galadriel API integration #188

Open
wants to merge 2 commits into main

Conversation

kristjanpeterson1

Add a basic Galadriel LLM integration

@ThierryBleau ThierryBleau requested a review from 0xMochan January 8, 2025 19:44
@0xMochan
Contributor

0xMochan commented Jan 9, 2025

Hello! Welcome to the rig community, we appreciate your PR a ton! It looks like this integration is nearly identical to the openai provider; if the API is based on the OpenAI spec, it might be more fitting to just use the openai provider client instead of maintaining a duplicate provider. I do see that an extra client method was added, so it might be alright to have a single method for instantiating an openai client w/ a custom reqwest client.
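The "single method for instantiating an openai client w/ a custom reqwest client" idea could look roughly like the sketch below. This is a pure-std illustration of the shape, not rig's actual API: the names `Client`, `new`, `from_url`, and the `GALADRIEL_BASE_URL` endpoint are hypothetical stand-ins.

```rust
// Hypothetical sketch: one OpenAI-style client type that also accepts a
// custom base URL, so an OpenAI-compatible API (like Galadriel's) can
// reuse the same client instead of duplicating a whole provider.
const OPENAI_BASE_URL: &str = "https://api.openai.com/v1";
const GALADRIEL_BASE_URL: &str = "https://api.galadriel.com/v1"; // assumed endpoint

struct Client {
    base_url: String,
    #[allow(dead_code)]
    api_key: String,
}

impl Client {
    /// Standard constructor: points at the OpenAI endpoint.
    fn new(api_key: &str) -> Self {
        Self::from_url(api_key, OPENAI_BASE_URL)
    }

    /// Extra constructor: same client, custom base URL.
    fn from_url(api_key: &str, base_url: &str) -> Self {
        Self {
            base_url: base_url.to_string(),
            api_key: api_key.to_string(),
        }
    }

    /// Both "providers" end up hitting the same OpenAI-shaped route.
    fn completions_endpoint(&self) -> String {
        format!("{}/chat/completions", self.base_url)
    }
}

fn main() {
    let openai = Client::new("sk-...");
    let galadriel = Client::from_url("gal-...", GALADRIEL_BASE_URL);
    println!("{}", openai.completions_endpoint());
    println!("{}", galadriel.completions_endpoint());
}
```

The design point: only the constructor differs, so every downstream code path (completion requests, agents) is shared rather than copied.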

Also, diving into the documentation for Galadriel, it seems like the crux of this platform is to provide verifiable / on-chain attestations of LLM output. Providing that would be a much deeper integration if we were to match how the sentience library works on their end, at least in implementing the TEE.

If it's a simple provider integration, there's likely no need to maintain an entire provider when you can reuse the openai client. A larger TEE implementation would require a deeper look at how that platform works. I think it would be best to open an issue so we can explore this in more detail, perhaps discussing with the proper teams, etc. Let me know if this makes sense, or I can follow up with any clarifications!

@kristjanpeterson1
Author

@0xMochan Yep, it is very similar to the OpenAI provider, but the Galadriel one does not support embedding models, which makes it kind of incorrect to use the openai provider?

The main part of the sentience library is still just LLM inference. Users can also fetch their inferences and prove their LLM inputs/outputs, but that would usually happen on some other platform, e.g. a frontend to showcase them, so I don't think it makes sense to think about that part here.

So overall I think it should be just a provider integration, but I am not sure if it could be simplified to remove the duplication. Do you have any suggestions here?

@0xMochan
Contributor

> @0xMochan Yep it is very similar to the OpenAI provider, but the Galadriel one does not support embedding models, making it kind of incorrect to use the openai provider?
>
> The main part of the sentience library is still just LLM inference, users can also fetch their inferences and prove their LLM inputs/outputs, but that would usually happen on some other platform, eg some frontend to showcase these, so I dont think it makes sense to think about that part here.
>
> So overall I think it should be just a provider integration, but I am not sure if it could be simplified to remove duplications. Do you have any suggestions here?

I see. Then I think it would be suitable to make a duplicate client of OpenAI but without any of the embeddings builder API. That way, when you create an Agent, it'll just use the openai code paths!
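The suggested shape, sketched in pure std Rust. The names here (`GaladrielClient`, `CompletionModel`, `completion_model`, the endpoint URL) are illustrative assumptions, not rig's actual API; the point is only that the client exposes a completion model and nothing embeddings-related.

```rust
// Hypothetical sketch: a Galadriel client that mirrors the OpenAI client
// but deliberately omits the embeddings builder API, since the Galadriel
// API does not support embedding models.

struct CompletionModel {
    base_url: String,
    model: String,
}

struct GaladrielClient {
    base_url: String,
}

impl GaladrielClient {
    fn new(_api_key: &str) -> Self {
        Self {
            // Assumed endpoint for illustration.
            base_url: "https://api.galadriel.com/v1".to_string(),
        }
    }

    /// Only a completion model is exposed; there is intentionally no
    /// `embedding_model` / embeddings builder method on this client.
    fn completion_model(&self, model: &str) -> CompletionModel {
        CompletionModel {
            base_url: self.base_url.clone(),
            model: model.to_string(),
        }
    }
}

fn main() {
    let client = GaladrielClient::new("gal-...");
    let model = client.completion_model("gpt-4o");
    println!("{} via {}", model.model, model.base_url);
}
```

An agent built on top of such a client would then run through the same completion code paths as the openai provider, with duplication confined to this thin client layer.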

@kristjanpeterson1
Author

@0xMochan That's kind of what I did, or do you have something else in mind?
