
Allow clients to connect to standby replicas in RO mode #132

Closed
inean opened this issue Apr 4, 2016 · 12 comments · Fixed by #557

Comments

@inean

inean commented Apr 4, 2016

Sometimes apps maintain two pools of connections: one to the master node for write operations only, and another to the replicas for read operations. It would be good if the proxy somehow allowed that.

@sgotti
Member

sgotti commented Apr 5, 2016

@inean Since the proxy was born with the primary need to forcibly close connections to the old master (I tried to explain this in the FAQ), I think this can be a good idea (I thought about it but was waiting for someone to need it before implementing 😄).

I'm thinking of just adding a --ro-port option for connections to the standbys.

Some questions:

  • Should the proxy close "standby" connections when a standby is promoted to master or just leave them open?
  • Will round-robin standby selection be enough?

@inean
Author

inean commented Apr 7, 2016

the --ro-port is perfect 💃

Should the proxy close "standby" connections when a standby is promoted to master or just leave them open?

Yes, IMHO client connections should be closed. In that case client apps react to the broken connections and acquire new RO connections from the proxy. It's consistent with the procedure for RW connections.

Will round-robin standby selection be enough?

Yes; another option might be a (Weighted) Least Connections scheduler; your solution uses etcd/consul, so we could store the usage count there. That said, we need to handle the case of a new standby being added to the cluster.

Typically, client apps maintain a pool of connections to the DB, in this case to the stolon proxy. When a new standby DB is added to the cluster, there's no way for a client app to become aware of this so that it can react and open new RO connections to that standby server, no matter which scheduler is in use.

One possible solution is for the stolon proxy to close Total Connections / Total Standby Servers connections, forcing client apps to request new DB connections from the proxy. Combined with an LC scheduler, this would allow all standby servers to be used.
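The two scheduling policies discussed above can be sketched side by side. This is a minimal illustration, not stolon code; the pool class and its method names are made up for the example:

```python
import itertools


class StandbyPool:
    """Toy selector over a fixed set of standbys, identified here by
    hypothetical (host, port) pairs. Tracks per-standby connection
    counts so both policies can be compared."""

    def __init__(self, standbys):
        self._standbys = list(standbys)
        self._cycle = itertools.cycle(self._standbys)
        self._conns = {s: 0 for s in self._standbys}

    def pick_round_robin(self):
        # Hand out standbys in a fixed rotation, ignoring load.
        standby = next(self._cycle)
        self._conns[standby] += 1
        return standby

    def pick_least_connections(self):
        # Hand out the standby currently holding the fewest connections
        # (ties broken by list order).
        standby = min(self._standbys, key=self._conns.get)
        self._conns[standby] += 1
        return standby
```

In a real deployment the counts would live in etcd/consul as suggested above, and the standby list would have to be refreshed when the cluster membership changes.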

@afiskon

afiskon commented Oct 21, 2016

I would like to describe a workaround that can be used before the v0.4.0 release.

Currently an application can learn the cluster configuration from the stolon/cluster/mycluster/clusterdata key. Consul allows subscribing to updates of this key like this: http://localhost:8500/v1/kv/stolon/cluster/mycluster/clusterdata?wait=0s&index=14379, where 14379 is the ModifyIndex of the key as reported by Consul.
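A minimal sketch of that blocking-query loop, assuming a local Consul agent on the default port (the cluster name `mycluster` and the wait time are illustrative):

```python
import base64
import json
import urllib.request

CONSUL = "http://localhost:8500"  # assumed local Consul agent
KEY = "stolon/cluster/mycluster/clusterdata"


def watch_url(index, wait="30s"):
    """Blocking-query URL: Consul holds the request open until the key's
    ModifyIndex exceeds `index` or `wait` elapses."""
    return f"{CONSUL}/v1/kv/{KEY}?index={index}&wait={wait}"


def parse_kv_response(body):
    """Extract (ModifyIndex, decoded value) from a Consul KV GET body.
    Consul returns a JSON array with the value base64-encoded."""
    entry = json.loads(body)[0]
    return entry["ModifyIndex"], base64.b64decode(entry["Value"])


def watch_loop():
    index = 0
    while True:
        with urllib.request.urlopen(watch_url(index)) as resp:
            index, clusterdata = parse_kv_response(resp.read())
        print("clusterdata changed, ModifyIndex =", index)
```

The application would re-read its standby list from `clusterdata` each time the loop wakes up.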

@testark

testark commented Feb 6, 2017

@sgotti can we expect this to land in stolon anytime soon?

@czernitko

Hi, I would like to add a note regarding PostgreSQL 10. libpq was extended to handle a new connection-string attribute called "target_session_attrs" (http://paquier.xyz/postgresql-2/postgres-10-libpq-read-write/).
It allows clients to specify whether they want to connect to the master or a slave. I am not sure if this setting is relevant to this issue, since I don't know whether Go uses libpq (I suppose not, since last time I checked, Go does not support, for example, the scram-sha-256 auth scheme).
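For reference, this is what such a multi-host connection string looks like for a libpq-based client (psycopg2 uses libpq, so it would accept these). The host names are placeholders; note that PostgreSQL 10 libpq only accepts `any` and `read-write` here, while `read-only`/`standby` were added later (libpq from PostgreSQL 14):

```python
# libpq tries each listed host in turn and checks the session attribute.
hosts = "host=pg1,pg2,pg3 port=5432,5432,5432"

# Always land on the writable primary (supported since PostgreSQL 10).
rw_dsn = f"{hosts} dbname=app user=app target_session_attrs=read-write"

# Prefer a read-only session (requires libpq from PostgreSQL 14+).
ro_dsn = f"{hosts} dbname=app user=app target_session_attrs=read-only"
```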

@sgotti
Member

sgotti commented Jun 13, 2018

Also related to #489.

My final thoughts are to avoid reinventing the wheel and modifying the stolon proxy:

  • Stolon is for postgres HA, having some standbys that could be used is a good "side" effect but not the main purpose.

  • Stolon proxy purpose is to always redirect client to the current master and forcibly close connections to "old" masters. It acts as a "fencer" and avoids wrong and lost transactions (a client successfully committing a transaction to an old master and then "losing" that transaction when reconnecting to the current master).

So, instead of reinventing a load balancer and putting it inside the stolon proxy, my current idea (which someone probably already implements with custom scripts) is to let people use something that already exists, like haproxy, to load balance across the active standbys. For example, we could add a command (to stolonctl or a new one) that populates a template haproxy conf by retrieving the current active standbys from the stolon clusterdata, and executes a reload when this standby list changes. Something similar to how confd or consul-template works.
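The template-rendering half of that idea could look like this. Everything here is hypothetical (the backend name, bind port, and template layout are invented for the sketch); a real tool would pull the standby tuples from the clusterdata and trigger `haproxy -sf` style reloads on change:

```python
# Skeleton of an HAProxy backend rendered from a list of standbys.
HAPROXY_TEMPLATE = """\
listen stolon-standbys
    bind *:25433
    balance roundrobin
    option tcp-check
{servers}"""


def render_haproxy_conf(standbys):
    """standbys: iterable of (name, host, port) tuples taken from the
    stolon clusterdata; emits one `server` line per active standby."""
    servers = "\n".join(
        f"    server {name} {host}:{port} check"
        for name, host, port in standbys
    )
    return HAPROXY_TEMPLATE.format(servers=servers)
```

Writing the rendered text to the haproxy config path and reloading only when the output differs from the previous run gives the confd/consul-template behavior described above.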

If someone is willing to do this work I'll happily review their PRs.

@strangeman
Contributor

I created a simple Python script that talks to stolonctl and generates an HAProxy config for the standby replicas. Hope it may be useful.
Here is the repository: https://github.com/UnitedTraders/stolon-standby-haproxy

@liqweed

liqweed commented Aug 23, 2018

Just linking to a description of pgpool's load-balancing solution:
http://www.pgpool.net/docs/latest/en/html/runtime-config-load-balancing.html

@sgotti
Member

sgotti commented Sep 7, 2018

I created a simple Python script that talks to stolonctl and generates an HAProxy config for the standby replicas. Hope it may be useful.
Here is the repository: https://github.com/UnitedTraders/stolon-standby-haproxy

@strangeman Your solution implements what I meant. Congrats! I'd like to link it from the FAQ. Would you like to open a PR?

@sgotti
Member

sgotti commented Sep 11, 2018

Just linking to a description of pgpool's load-balancing solution:
http://www.pgpool.net/docs/latest/en/html/runtime-config-load-balancing.html

@liqweed The proxy is just a layer 4 proxy and doesn't speak the postgres protocol: #39 (comment). Pgpool is a totally different project with totally different features and can be used on top of stolon if needed.

A good implementation of this feature request is the @strangeman project reported before your comment: #132 (comment)

@strangeman
Contributor

@sgotti sorry for the delay. :(
PR - #575

@viggy28

viggy28 commented Aug 10, 2019

Based on @strangeman's script, if you want to try the setup on a Mac: https://gist.github.com/viggy28/e3de61a2070f6171c966b16dc36a1d32
