Add support for Bedrock, Ollama, Palm, Claude-2, Cohere, Replicate Llama2, CodeLlama, Hugging Face (100+LLMs) - using LiteLLM #86
Conversation
@cpacker can you take a look at this PR when possible?
When I was trying to run this with LM Studio, it complained about a missing function role: "[2023-10-21 23:34:06.027] [ERROR] Error: 'messages' array must only contain objects with a 'role' field that is either 'user', 'assistant', or 'system'." Model: Mistral 7B. Mistral doesn't know the function role.
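A common workaround for backends like Mistral that only accept the `user`, `assistant`, and `system` roles is to remap unsupported `function` messages before sending the request. The sketch below is a hypothetical illustration of that idea (the helper name and content prefix are assumptions, not part of this PR):

```python
# Models such as Mistral reject any role outside this set, which is
# what triggers the LM Studio error quoted above.
ALLOWED_ROLES = {"user", "assistant", "system"}

def remap_function_roles(messages):
    """Fold messages with unsupported roles (e.g. 'function') into 'user' turns."""
    remapped = []
    for m in messages:
        if m["role"] not in ALLOWED_ROLES:
            # Preserve the content but present it as a user message,
            # tagged so the model can tell it was a function result.
            remapped.append({
                "role": "user",
                "content": f"[function result] {m.get('content', '')}",
            })
        else:
            remapped.append(m)
    return remapped
```

This loses the semantic distinction the OpenAI API makes between function results and user input, so it is a lossy compatibility shim rather than a full fix.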
Voting this as one of the highest-leverage PRs of all time.
Thank you for taking the time to propose the integration of the LiteLLM package. We've decided that we won't be integrating it.
We hope you understand our decision and wish you all the best with the continued development of LiteLLM.
Hey @vivi, thanks for the feedback:
We believe our users are better off using OpenAI's officially supported package. Please stop spamming GitHub repositories with pull requests, thank you.
Wait @cpacker, LiteLLM isn't a proxy server. We let users spin up an OpenAI-compatible server if they'd like. It's just a Python package for translating LLM API calls. I agree with you, unnecessarily routing things through a proxy would be a bit weird. Is there nothing we can do to help you translate LLM API calls across providers?
This PR adds support for the above-mentioned LLMs using LiteLLM: https://github.com/BerriAI/litellm/
LiteLLM is a lightweight package that simplifies LLM API calls - use any LLM as a drop-in replacement for gpt-3.5-turbo.
Example
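The original example code did not survive in this copy of the page. A minimal sketch of the drop-in pattern the PR description refers to might look like the following (assumes `litellm` is installed; the model strings are illustrative, and `chat` is a hypothetical wrapper, not part of this PR):

```python
# The same messages payload works across providers; only the
# model string changes.
messages = [{"role": "user", "content": "Hello, how are you?"}]

def chat(model: str, messages: list):
    """Route a chat completion through LiteLLM's unified interface."""
    # Imported lazily here so the sketch stays self-contained;
    # litellm.completion mirrors the OpenAI chat-completion call shape.
    from litellm import completion
    return completion(model=model, messages=messages)

# Switching providers is just a different model string, e.g.:
#   chat("gpt-3.5-turbo", messages)   # OpenAI
#   chat("ollama/llama2", messages)   # Ollama
#   chat("claude-2", messages)        # Anthropic
```

This is the "drop-in replacement for gpt-3.5-turbo" idea: callers keep the OpenAI-style `model` + `messages` interface while LiteLLM handles the per-provider translation.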