Upcoming architecture changes for Langfuse 3.0 (self-hosted) #1902
Replies: 23 comments 43 replies
-
As requested on Discord, my comment: I really do not want to move off serverless infra to a dedicated VM. A major reason I chose Langfuse was its Cloud Run deployment, which I could couple with my existing AlloyDB, and perhaps AlloyDB is quick enough that it doesn't need help with analytical queries. Cloud Run recently introduced sidecar containers, so perhaps that is an option? There is a managed Redis option too, but it's a bit more pricey. With my serverless deployment, I currently don't pay for Langfuse unless I'm browsing the UI or it's capturing traces, aside from the already sunk cost of the database.
-
The current docs advise against using Docker Compose for production. I'm guessing that will change for v3? It would be great to have a ready-to-deploy Docker Compose file that just needs an env file to get started. I'd use the cloud-hosted service but have relatively strict data privacy requirements.
-
I'm hesitant about a more complicated docker-compose setup. One of the reasons we were open to using Langfuse in the first place was how easy it was to deploy on a serverless platform.
-
It feels weird to deploy using docker-compose. For me, it has always been a good tool during development, but not really for production. My team is currently planning to deploy it. We haven't decided whether we'll deploy to Kubernetes or Cloud Run, though. We'll stay tuned to see what works best for us.
-
I want Redis and the ClickHouse OLAP store to remain opt-in. I am deploying Langfuse on AWS with a simple architecture.
-
Any specific reason for not providing ARM-based images?
-
As for ClickHouse, I think it's great.
-
I'm not a big fan of docker-compose, and we are unable to use docker-compose files for deployments. Our requirements dictate that we first build an image using an ARM template, scan it with an infosec tool like Aqua in the CI/CD pipeline, and then deploy it. It would be great if there were an option to deploy each container as a standalone app.
-
We are currently very happy users of Langfuse 2 (massive internal adoption success).
Thank you for considering our requirements.
-
Will there be a way to migrate existing traces, prompts and datasets from v2 to v3? We have had massive internal success and adoption, and there is a lot of data, prompts and traces stored in the containers. We are really excited about v3 but also do not want to redo these steps. Wishing the entire team good luck for the v3 release. Looking forward to it.
-
Hey team. Thanks for the updates in the newsletter. I agree with the general comments that docker compose isn't a great production solution; however, I generally see docker compose as a great way to document the required setup for self-hosters. It's simple to grok the required services and their interactions/configuration. We will continue to use a k8s deployment, and we are happy to integrate with our existing Redis and ClickHouse services or to spin up new instances. Bringing up ClickHouse and Redis shouldn't be considered a large operational hurdle, IMO, for people wanting to self-host. Services becoming more complex over time as features and scale are added is a reasonable expectation (PostHog is a good example). As long as the migration path is documented and clear, this all sounds great! 👏
-
Update to the above: we plan to release v3 in July (no strict ETA yet) as we are currently going through many optimization steps to make the new setup as performant as possible. We will post an update here once there is documentation and a pre-release version to try.
-
I've received an inquiry about how we'd be able to deploy your changes to Azure. We can set up a Redis connection to Azure Cache for Redis, but I'm not familiar with ClickHouse. Do you already have an idea of how we could get that working on Azure?
-
Would one then need to use managed services from AWS for Postgres, Redis and ClickHouse if hosting Langfuse as ECS containers?
-
Happy to see Langfuse keeps evolving!
-
We've just updated the initial post with more information and guidance on the envisaged changes; it is also attached here in PDF format.
-
Hello, AFAIK Redis is no longer open source. Does this mean we need a production license for Redis to self-host on our VMs, or can we still use their Docker containers freely?
-
Any update on the timeline for the V3 release? We're looking to deploy self-hosted Langfuse but are hesitant to proceed with V2 given that V3 is right around the corner...
-
Hello, if needed, can we keep: and add ClickHouse and Redis as two separate Cloud Run containers? In large organizations, it is a pain (sometimes not even possible) to deploy new services like that, and we don't need high volume yet. Some organizations don't have dedicated teams to manage a K8s cluster, and all new services need to pass through multiple committees (security, ...). VMs are not authorized because they are hard to manage, and serverless solutions are preferred. Thanks in advance!
-
Please extend V2 security updates, for example to a few months after V3 GA.
-
Hi, will caching of LLM outputs, like Helicone offers, be included in v3, or is it planned in any way? Your planned architecture is going to look quite similar to theirs.
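For context on the pattern referenced here: Helicone-style LLM output caching is essentially a keyed lookup in front of the model call. The sketch below illustrates the idea with Redis; it is not a Langfuse feature or roadmap commitment, and the key prefix, TTL, and `call_model` callable are hypothetical.

```python
# Illustrative sketch of LLM output caching (not a Langfuse feature).
import hashlib
import json

import redis  # redis-py

r = redis.Redis.from_url("redis://localhost:6379/0")
TTL_SECONDS = 3600  # hypothetical cache lifetime


def cached_completion(model: str, prompt: str, call_model) -> str:
    """Return a cached completion for an identical (model, prompt) pair, if any."""
    key = "llm-cache:" + hashlib.sha256(
        json.dumps({"model": model, "prompt": prompt}, sort_keys=True).encode()
    ).hexdigest()
    hit = r.get(key)
    if hit is not None:
        return hit.decode()
    result = call_model(model, prompt)  # caller supplies the actual LLM call
    r.set(key, result, ex=TTL_SECONDS)
    return result
```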
-
Hi, I work with a leading US enterprise AI consultancy. We love Langfuse V2 because:
Regarding the upgrade path, we have the following concerns:
Generally, we support the evolution of Langfuse in the direction mentioned. It makes sense to add Redis and ClickHouse for lower latency, improved reporting, and petabyte-scale analytics. We are very excited to see what you have cooked up, but please do keep our requirements in mind. We are really pleased with the small and nimble footprint of V2 for now and haven't run into any scaling issues yet. Keep up the great work. Excited to be part of your user community!
-
Any update on the timeline for the V3 release?
-
Hi all,
Langfuse is growing a lot, both in feature scope and in usage on single instances. Thus, we are planning a couple of changes that will be released in Langfuse v3.
We currently need to mature our architecture as we work on the following challenges:
✅ Building model-based evals, which requires us to run asynchronous tasks, rate limited, with failover capabilities.
🧑🍳 Improve performance as instances scale out.
I wanted to give you a heads-up on upcoming changes that are required to make these features work. Currently, Langfuse consists of a single Docker container that takes care of everything we do. This made Langfuse fast to set up initially, but we need more technical capabilities now. In addition to the existing components (Docker container + Postgres database), we will add the following:
If you self-host Langfuse, this means we will likely advise changing to the following setup to easily benefit from the new infra changes. We are happy to hear your thoughts on this:
Feel free to share your thoughts below on these topics:
Find more context in the last Langfuse Townhall meeting. We will provide an easy-to-follow upgrade path for self-hosters once v3 is generally available. The infrastructure change does not affect public APIs; thus, users of Langfuse Cloud will not be affected by this change. We are currently piloting the async container & queue for the evals feature, which is in public beta on Langfuse Cloud.
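To make the queue/worker split described above concrete, here is a minimal sketch of the general pattern: the web process pushes an eval job onto a Redis list and returns immediately, while a separate worker process pops jobs, retries on failure, and is throttled. This is an illustration only, not Langfuse's actual implementation; the queue name, job shape, retry limit, and `run_model_based_eval` function are all assumptions.

```python
# Sketch of the web -> Redis queue -> worker pattern (illustration only,
# not Langfuse's implementation; queue name and job shape are made up).
import json
import time

import redis  # redis-py

r = redis.Redis.from_url("redis://localhost:6379/0")
QUEUE_KEY = "demo:eval-jobs"  # hypothetical queue name


def enqueue_eval_job(trace_id: str) -> None:
    """Called from the web container: push a job and return immediately."""
    r.lpush(QUEUE_KEY, json.dumps({"trace_id": trace_id, "attempt": 0}))


def run_model_based_eval(trace_id: str) -> None:
    """Placeholder for the actual model-based evaluation."""
    print(f"evaluating trace {trace_id}")


def worker_loop() -> None:
    """Runs in the worker container: no exposed ports, just a blocking pop loop."""
    while True:
        item = r.brpop(QUEUE_KEY, timeout=5)  # blocks until a job arrives
        if item is None:
            continue
        job = json.loads(item[1])
        try:
            run_model_based_eval(job["trace_id"])
        except Exception:
            if job["attempt"] < 3:  # simple failover: bounded re-queue
                job["attempt"] += 1
                r.lpush(QUEUE_KEY, json.dumps(job))
        time.sleep(0.1)  # crude stand-in for rate limiting
```

The point of the split is that slow, rate-limited eval calls never block the request path, and the worker can be scaled or restarted independently of the web container.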
UPDATE JULY 22nd
A more detailed overview of the planned changes:
Architectures v2 vs. v3
- `web` container: hosts the public API and all resources for the user interface
- `worker` container: asynchronous processes, no exposed ports
- `Redis`: used as cache and queue
- `Postgres`: stores transactional data such as projects or API keys
- `Clickhouse`: stores tracing data generated by the SDKs. This database will do most of the processing, as our server will insert all the SDK data and read it for tables and dashboards.

Next to the core application, an application load balancer for TLS termination and routing of requests to the `web` container is necessary. We use nginx, but you can also use e.g. the fully managed AWS load balancer.

Upgrade path from v2 to v3
Thousands of teams run on Langfuse (~400k docker pulls) → we aim to offer the easiest migration experience that is automated and documented.
Application deployment
You will be able to deploy the containers via Kubernetes or your own container deployment service (such as Google Cloud Run), or via docker compose on a virtual machine. In either case, you will also be able to use dockerized databases or provide connection strings for managed databases.
For low-volume/non-production deployments, dockerized DBs + docker compose is a sensible option to keep complexity low. We will publish guidance on when options 3 and 4 are necessary.
DB deployment
Databases (see above): `redis`, `postgres`, `clickhouse`
- Low-volume → dockerized databases
- High-volume / fully-managed → databases external to the application cluster
We will provide guidance on the scale at which high-volume/fully-managed ClickHouse is necessary. On hosted Langfuse, we are currently in the process of migrating to arrive at a scalable architecture. Once we are done with the migration, we will release 3.0. We will keep you posted here on updates regarding the migration.
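If you go the managed-database route and point the containers at external services via connection strings, a small preflight check can confirm that all three stores are reachable before rolling out. The sketch below is only an example; the environment variable names and the client libraries (redis-py, psycopg2, clickhouse-connect) are assumptions, not the variables or drivers Langfuse itself uses.

```python
# Hedged preflight sketch: verify Redis, Postgres, and ClickHouse are reachable
# before deploying the web/worker containers. Env var names are assumptions.
import os

import clickhouse_connect  # pip install clickhouse-connect
import psycopg2            # pip install psycopg2-binary
import redis               # pip install redis


def check_redis(url: str) -> None:
    redis.Redis.from_url(url).ping()


def check_postgres(dsn: str) -> None:
    conn = psycopg2.connect(dsn)
    with conn.cursor() as cur:
        cur.execute("SELECT 1")
    conn.close()


def check_clickhouse(host: str, port: int, user: str, password: str) -> None:
    client = clickhouse_connect.get_client(
        host=host, port=port, username=user, password=password
    )
    client.query("SELECT 1")


if __name__ == "__main__":
    check_redis(os.environ["REDIS_URL"])
    check_postgres(os.environ["POSTGRES_DSN"])
    check_clickhouse(
        host=os.environ["CLICKHOUSE_HOST"],
        port=int(os.environ.get("CLICKHOUSE_PORT", "8123")),
        user=os.environ.get("CLICKHOUSE_USER", "default"),
        password=os.environ.get("CLICKHOUSE_PASSWORD", ""),
    )
    print("all databases reachable")
```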
FAQ
Timeline
We are currently testing many of the v3 infrastructure pieces on Langfuse Cloud. We will release v3 once all of these changes are "battle-tested", so that the transition is smooth and free of uncertainty for everyone self-hosting Langfuse. While there is no strict timeline, we aim for a release in late November.