Currently I can switch to LangChain when using OpenAI. But if I am hosting my own model behind an OpenAI-compatible API using vLLM or LiteLLM, I would still like to benefit from Langfuse observability.

Replies: 2 comments
- If you're using LiteLLM, this should already work; we're using it ourselves. See https://docs.litellm.ai/docs/proxy/logging#logging-proxy-inputoutput---langfuse
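  The linked docs cover the LiteLLM proxy config; as a minimal sketch, the same Langfuse logging can also be enabled from the LiteLLM Python SDK via its success callback, assuming `LANGFUSE_PUBLIC_KEY` and `LANGFUSE_SECRET_KEY` are set in the environment:

  ```python
  import litellm

  # Assumes LANGFUSE_PUBLIC_KEY, LANGFUSE_SECRET_KEY (and optionally
  # LANGFUSE_HOST) are exported in the environment.
  litellm.success_callback = ["langfuse"]  # log successful calls to Langfuse

  # "gpt-3.5-turbo" is just a placeholder; LiteLLM routes to whatever
  # provider/model you configure (requires the matching API key env var).
  response = litellm.completion(
      model="gpt-3.5-turbo",
      messages=[{"role": "user", "content": "Hello from LiteLLM"}],
  )
  print(response.choices[0].message.content)
  ```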
- Closing this thread, as this already works when changing the base URL of the SDK.
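  For reference, a minimal sketch of what that looks like with Langfuse's OpenAI drop-in wrapper pointed at a self-hosted vLLM endpoint. The URL and model name below are placeholder assumptions; Langfuse credentials are read from the `LANGFUSE_*` environment variables:

  ```python
  from langfuse.openai import OpenAI  # drop-in replacement for openai.OpenAI

  # Point the client at the self-hosted OpenAI-compatible server.
  # "http://localhost:8000/v1" is vLLM's default endpoint; adjust as needed.
  client = OpenAI(
      base_url="http://localhost:8000/v1",
      api_key="unused",  # vLLM ignores the key unless auth is configured
  )

  completion = client.chat.completions.create(
      model="my-local-model",  # hypothetical name; use what your server exposes
      messages=[{"role": "user", "content": "Hello from a self-hosted model"}],
  )
  print(completion.choices[0].message.content)
  ```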