Hi!

Since I'm already running a LiteLLM proxy server for another project, I'm wondering if I could also use it for llmcord. I use it with virtual keys to track usage, set budgets, apply guardrails, and attach a group of models to a team virtual key. I manage everything from the Admin UI.

Is it possible to set the `base_url` to the LiteLLM OpenAI API-compatible endpoint? It would be something like this: http://myserver.com:4000/v1/chat/completions

Replies: 1 comment
Yes, this should be possible. Don't include the providers:

```yaml
litellm:
  base_url: http://myserver.com:4000/v1
```

And then set …
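For anyone wiring this up, here is a minimal sketch of what the relevant parts of llmcord's `config.yaml` might look like when pointed at a LiteLLM proxy. The `api_key` value (a LiteLLM virtual key) and the `litellm/...` model name are illustrative assumptions, not taken from this discussion, so adapt them to whatever aliases your proxy actually exposes.

```yaml
# Hypothetical excerpt of llmcord's config.yaml (key names assumed from llmcord's
# provider/model layout; values are placeholders).
providers:
  litellm:
    base_url: http://myserver.com:4000/v1   # stop at /v1; the client appends /chat/completions
    api_key: sk-litellm-virtual-key         # a LiteLLM virtual key (assumed), so budgets/guardrails apply

# The model is referenced as <provider>/<model name>; the part after the slash
# must match a model the LiteLLM proxy serves (example name assumed).
model: litellm/gpt-4o
```

Since the proxy is OpenAI API compatible, llmcord should be able to treat it like any other OpenAI-style provider, and using a virtual key keeps usage tracking and budget enforcement on the proxy side.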