🎉 Support for Anthropic Claude 3 Models 🤖 #450
Conversation
Large parts of the code contain reformats and parenthesis changes, which makes the diff very hard to review, as the surface of the changes is greatly inflated. I'm considering whether to reimplement from scratch based on @slapglif's code or to go through a painful manual verification. In any case, streaming support seems to be missing?
The great part is that it works! Thanks for that, to start with.
@enricoros As to the fidelity of the fix: upon further inspection, in some cases I receive:
○ Compiling /api/trpc-node/[trpc] ...
✓ Compiled /api/trpc-node/[trpc] in 1037ms (700 modules)
Error in stream: [SyntaxError: Expected ',' or '}' after property value in JSON at position 39]
This is likely a parsing error; I'll see to it shortly. As for the lack of streaming support: currently streaming is supported and streams to the console, but I wanted to push a working version before attacking the streaming-to-Next bug. To address the formatting: sorry, yeah, I'm a Python guy. Can you define your standard for me so I can use an LLM to reformat?
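That `SyntaxError` is typical of parsing a server-sent-events (SSE) chunk that was split mid-JSON across network reads. A minimal sketch of a buffering parser that only calls `JSON.parse` on complete events (names like `makeSseParser` are illustrative, not from the actual PR code):

```typescript
// Buffers incoming text and only parses `data:` lines once a full
// `\n\n`-terminated SSE event has arrived, so a JSON object split
// across two chunks never reaches JSON.parse half-finished.
function makeSseParser(onJson: (obj: unknown) => void): (chunk: string) => void {
  let buffer = '';
  return (chunk: string) => {
    buffer += chunk;
    let idx: number;
    while ((idx = buffer.indexOf('\n\n')) !== -1) {
      const event = buffer.slice(0, idx);
      buffer = buffer.slice(idx + 2); // keep any partial event for later
      for (const line of event.split('\n'))
        if (line.startsWith('data: '))
          onJson(JSON.parse(line.slice('data: '.length)));
    }
  };
}
```

Feeding `data: {"a"` followed by `: 1}\n\n` in two separate calls parses one object instead of throwing, which is exactly the failure mode in the log above.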
@slapglif: I like your stream parser and completion payload. Good job with those! I will proceed with a reimplementation to support streaming; I can pair the best of the Stream Transform architecture we modeled with the Vercel AI SDK folks with your changes for the Messages API. Will update in an hour or two. The great part is that you got it working already.
Thanks for your help @slapglif. I reimplemented the feature but used key insights from your implementation, such as message coalescing, the stream decoder, etc. Now it's working REALLY WELL, in streaming mode too.
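For context on the "message coalescing" mentioned above: the Anthropic Messages API rejects conversations with consecutive messages of the same role, so adjacent same-role messages have to be merged before the request is sent. A minimal sketch (the function name and separator are illustrative, not the exact PR code):

```typescript
type Role = 'user' | 'assistant';
interface ChatMessage { role: Role; content: string; }

// Merge runs of consecutive same-role messages into a single message,
// so the resulting list strictly alternates roles as the API expects.
function coalesceMessages(messages: ChatMessage[]): ChatMessage[] {
  const out: ChatMessage[] = [];
  for (const msg of messages) {
    const last = out[out.length - 1];
    if (last && last.role === msg.role)
      last.content += '\n\n' + msg.content; // join with a blank line
    else
      out.push({ ...msg });
  }
  return out;
}
```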
This pull request adds support for the newly released Anthropic Claude 3 models to the big-AGI project. The changes include:
🔧 Updating the backend code to utilize the new Anthropic Messages API, as Claude 3 models only support this API.
🆕 Implementing support for Claude 3 models, allowing users to leverage the latest and greatest from Anthropic.
📝 Updating the documentation and comments to reflect the changes and provide clarity on how to use the new functionality.
🧪 Adding tests to ensure the Claude 3 functionality works as expected and to maintain code quality.
🤔 Why This Matters
By adding support for Anthropic Claude 3 models, we're enabling big-AGI users to take advantage of the latest advancements in AI technology. This will help keep our project on the cutting edge and provide our users with even more powerful tools for their AI needs.
🛠️ How It Works
The changes primarily focus on updating the backend code located in src/modules/llms/server/anthropic/anthropic.router.ts to use the new Anthropic Messages API. This allows seamless integration of Claude 3 models into the existing big-AGI infrastructure.
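For readers unfamiliar with the API switch: unlike the legacy Text Completions API, the Messages API takes a structured list of role-tagged messages and a top-level `system` field. A hedged sketch of the request shape (field names follow Anthropic's public API; the helper function and model choice are illustrative, and the actual wiring in `anthropic.router.ts` may differ):

```typescript
// Minimal Messages API request body, per Anthropic's public API docs.
interface MessagesRequest {
  model: string;
  max_tokens: number;
  messages: { role: 'user' | 'assistant'; content: string }[];
  system?: string;   // system prompt is a top-level field, not a message
  stream?: boolean;  // enables server-sent-events streaming
}

// Illustrative helper: build a streaming Claude 3 request.
function buildClaude3Request(system: string, userText: string): MessagesRequest {
  return {
    model: 'claude-3-opus-20240229',
    max_tokens: 1024,
    system,
    messages: [{ role: 'user', content: userText }],
    stream: true,
  };
}
```

The body would be POSTed to the `/v1/messages` endpoint with the `x-api-key` and `anthropic-version` headers set.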
💡 What's Next
With Claude 3 support in place, we can continue to explore new ways to leverage these powerful models and provide even more value to our users. We welcome feedback and suggestions from the community on how we can further improve and expand our AI capabilities.
🙌 Thank you for your time and consideration. We look forward to your feedback and the opportunity to merge these exciting changes into the main branch!