Add possibility to expose first slave via proxy #489
base: master
Conversation
@epustobaev Thanks for the PR! What's your use case for accessing just one random standby? Some people do this by generating an HAProxy config from the cluster data to balance across all the standbys, since the proxy's main role is to force connections to the elected master. Perhaps a little tool (e.g. a stolonctl command) to generate something similar and keep it updated would be better?
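For illustration only, a minimal sketch of the kind of generated HAProxy configuration the comment above refers to; the listen port and standby addresses are hypothetical placeholders, and the `server` lines would be regenerated from the cluster data whenever the set of standbys changes:

```haproxy
# Hypothetical generated config: balance read-only traffic across the
# current standbys; every address below is a placeholder.
listen stolon-standbys
    bind *:5433
    mode tcp
    balance roundrobin
    option tcp-check
    server standby-1 10.0.0.11:5432 check
    server standby-2 10.0.0.12:5432 check
```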
Thanks for the comment. It was the simplest way to reproduce a concrete project configuration with one read-only instance for certain cases, e.g. admin interfaces. At the same time we're not using any balancing/proxy tools for the database, so we're already using this patch; it just works and solves the problem. But I get your point and it sounds logical. If it doesn't fit your architecture, I'll try to follow your advice.
Hi, I have read issue #132 and I know the decision was to use HAProxy for load balancing instead of the Stolon proxy, which is a reasonable solution when someone has many read-only replicas and complex load-balancing requirements. On the other hand, this pull request is as simple as it can be, and the number of thumbs up in the issue suggests that a lot of people would appreciate the functionality it provides.

Our use case is pretty much the same as described by @epustobaev: we have two powerful machines running the Stolon keeper in order to have high availability. Currently one of them is never used at all, which is a pity (and not only our managers like to see all available resources utilised). Using a template for HAProxy is a possible approach, but it adds yet another point of failure and another component that must be installed, configured, updated and taken care of. If this could be added to the Stolon proxy, it would be nice from an ops point of view as well. I also don't think it is that far from the functionality the Stolon proxy currently provides: choosing the proper master and redirecting connections to it. If it can choose a replica (randomly or round-robin, it doesn't matter in our use case) and redirect connections to it, that would be perfect.

As for closing the read-only connections when a replica becomes master: if I understood it correctly, this is not handled by this pull request? That is good and also matches our use case, because if one server goes down we do not have another read-only replica, and clients should stay connected to the new master.
Yeah, just another voice in this: being able to expose a replica, perhaps just the sync, is something we'd really want to do via stolon-proxy. We want the fencing to break connections whenever the sync moves, and we don't want to insert another tool (like haproxy) in the middle of our connections when we already have a combo of stolon and pgbouncer.
@epustobaev @czernitko @lawrencejones Balancing to standbys, as we can see from the comments, can be implemented in many different ways and every user has different requirements. So first the interested parties should define which features are needed, and then I'd suggest implementing it in another component outside the stolon-proxy (a stolon-standbys-proxy). It could always be merged into the stolon proxy once it's stable. I don't see having different processes as an issue; on the contrary, I'd prefer separate processes managing primary vs standby connections, and in 2020 it's not so difficult to define a new service in systemd, k8s, etc.
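As a rough sketch of the "separate service" idea above, a hypothetical systemd unit for such a standby-balancing component; the `stolon-standbys-proxy` binary does not exist today, and its flags are only modeled on the existing stolon-proxy options:

```ini
# Hypothetical unit file; the binary and its flags are placeholders for a
# future standby-balancing component, not an existing stolon command.
[Unit]
Description=Stolon standby proxy (hypothetical)
After=network-online.target

[Service]
ExecStart=/usr/local/bin/stolon-standbys-proxy \
  --cluster-name=stolon-cluster \
  --store-backend=etcdv3 \
  --listen-address=0.0.0.0 \
  --port=5433
Restart=on-failure

[Install]
WantedBy=multi-user.target
```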
This is what we're running with at the moment: gocardless#33. We run two stolon proxies per postgres node: one on port 7432 for connections to the primary, another on 8432 for the synchronous replica. It's working well for us.
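A sketch of what that two-proxy layout could look like per node; the first invocation uses standard stolon-proxy flags, while `--sync-standby` on the second is purely hypothetical, standing in for whatever the patched/forked proxy exposes (upstream stolon-proxy only routes to the elected master):

```sh
# Proxy for writes: standard stolon-proxy following the elected master.
stolon-proxy --cluster-name=stolon-cluster --store-backend=etcdv3 \
  --listen-address=0.0.0.0 --port=7432

# Second proxy instance for the synchronous replica; --sync-standby is a
# hypothetical placeholder flag, not part of upstream stolon-proxy.
stolon-proxy --cluster-name=stolon-cluster --store-backend=etcdv3 \
  --listen-address=0.0.0.0 --port=8432 --sync-standby
```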