Speed up leader search #320

Merged: 23 commits into canonical:master on Oct 18, 2024
Conversation

cole-miller (Contributor)

This PR contains two changes that are intended to address #300 by cutting the number of dqlite connections that need to be opened in typical cases.

The first change is to remember the address of the cluster leader whenever we connect to it successfully, and to try this remembered address first when opening a new leader connection. When the cluster is stable, this means that we open only one connection in Connector.Connect; previously we would open on the order of N (= cluster size) connections. The tradeoff is that Connector.Connect is now slower and less efficient when the cluster is not stable. This optimization applies to both client.FindLeader and the driver implementation.
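
Roughly, the fast path looks like this. This is only a minimal sketch under assumed names (`connector`, `leaderTracker`, and `dial` are stand-ins, not the actual `protocol` package types), and it elides the check that the dialed node really is the leader:

```go
package main

import (
	"context"
	"errors"
	"fmt"
	"net"
)

// leaderTracker remembers the address we last connected to as leader.
type leaderTracker struct {
	addr string
}

// connector is a stand-in for protocol.Connector: a node store (here just a
// list of addresses), a tracker, and a way to dial a single node.
type connector struct {
	store   []string
	tracker *leaderTracker
	dial    func(ctx context.Context, addr string) (net.Conn, error)
}

// connect tries the remembered leader address first and only falls back to
// probing every node in the store if that fails.
func (c *connector) connect(ctx context.Context) (net.Conn, error) {
	if addr := c.tracker.addr; addr != "" {
		if conn, err := c.dial(ctx, addr); err == nil {
			return conn, nil // fast path: a single connection opened
		}
		c.tracker.addr = "" // remembered address is stale, forget it
	}
	// Slow path: O(N) probing, as before this change. Verifying that the
	// node we reach actually is (or can point us to) the leader is elided.
	for _, addr := range c.store {
		conn, err := c.dial(ctx, addr)
		if err != nil {
			continue
		}
		c.tracker.addr = addr // remember the winner for next time
		return conn, nil
	}
	return nil, errors.New("no leader found")
}

func main() {
	c := &connector{
		store:   []string{"127.0.0.1:9001", "127.0.0.1:9002"},
		tracker: &leaderTracker{},
		dial: func(ctx context.Context, addr string) (net.Conn, error) {
			var d net.Dialer
			return d.DialContext(ctx, "tcp", addr)
		},
	}
	if _, err := c.connect(context.Background()); err != nil {
		fmt.Println("connect failed:", err)
	}
}
```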

The second change is to let client.FindLeader reuse an existing connection to the leader instead of opening a new one. This is valid because the operations that the returned Client can perform on the connection do not depend on the connection's logical state (open databases, prepared statements). When the leader is stable, this saves one additional connection per client.FindLeader call on top of the first change. The long-lived connection is checked for validity and leadership before being handed back for reuse.
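
A hedged sketch of that reuse check; `reuseOrDial` and `leaderCheck` are made-up names, not the actual client package API:

```go
// Package reuse sketches the "reuse a cached leader connection" logic.
package reuse

import (
	"context"
	"net"
)

// leaderCheck asks a node, over an existing connection, who the leader is.
// Any error means the connection is no longer usable.
type leaderCheck func(ctx context.Context, conn net.Conn) (leaderAddr string, err error)

// reuseOrDial returns the cached connection if it is still healthy and still
// points at the leader; otherwise it drops it and dials a fresh one. Reuse is
// safe here because the caller performs no operations that depend on the
// connection's logical state (open databases, prepared statements).
func reuseOrDial(ctx context.Context, cached net.Conn, check leaderCheck,
	dial func(context.Context) (net.Conn, error)) (net.Conn, error) {
	if cached != nil {
		addr, err := check(ctx, cached)
		if err == nil && addr == cached.RemoteAddr().String() {
			return cached, nil // saves one new connection per call
		}
		cached.Close() // dead, or no longer the leader
	}
	return dial(ctx)
}
```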

Both of these changes rely on storing some new state in the NodeStore using some fun embedding tricks. I did it this way because the NodeStore is the only object that is passed around to all the right places and lives long enough to persist state between connection attempts.

On my machine, using @masnax's repro script, these two changes combined cut the spikes in CPU usage from 30% to 8-10%, with the first change being responsible for most of that improvement. The remaining spike is due to opening N connections (in parallel) within makeRolesChanges, and could perhaps be dealt with by augmenting the NodeStore further with a pool of connections to all nodes instead of just the last known leader, but I've left that for a possible follow-up.

Signed-off-by: Cole Miller <cole.miller@canonical.com>

This makes it easier to inject and test connection attempt failures.

cole-miller marked this pull request as ready for review on October 1, 2024 at 02:48

@just-now left a comment


+1

client/leader_test.go: review thread resolved (outdated)
internal/protocol/connector_test.go: review thread resolved
@cole-miller (Contributor, Author)

Pushed new commits to update the interface following discussion with @marco6. Instead of attaching leader tracking to the NodeStore, now we attach it to protocol.Connector. This is a more natural separation of concerns and cleans things up nicely within the protocol package. It requires some changes to client and driver to take advantage of leader tracking, since previously these packages would just create one-off protocol.Connectors instead of reusing them.

For the driver package, the speedup from leader tracking remains transparent: we store a *protocol.Connector in the driver.Connector, and the database/sql connection pool will internally reuse a driver.Connector many times.
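
For illustration, this is roughly why the driver side comes for free. `trackingConnector` below is a hypothetical driver.Connector, but sql.OpenDB and the pool behaviour are standard database/sql:

```go
package main

import (
	"context"
	"database/sql"
	"database/sql/driver"
)

// trackingConnector is a hypothetical driver.Connector that would embed a
// long-lived *protocol.Connector. database/sql keeps this single value and
// calls Connect on it whenever the pool needs a new connection, so any
// leader-tracking state stored here persists across connections.
type trackingConnector struct {
	lastLeader string // illustrative tracking state
}

func (c *trackingConnector) Connect(ctx context.Context) (driver.Conn, error) {
	// Real code would dial c.lastLeader first, then fall back to the full
	// leader search; elided in this sketch.
	return nil, nil
}

func (c *trackingConnector) Driver() driver.Driver { return nil }

func main() {
	db := sql.OpenDB(&trackingConnector{}) // the pool reuses this connector
	defer db.Close()
}
```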

For the client package, there's nowhere in the signature of FindLeader that we could smuggle in a persistent *protocol.Connector, so I've just introduced a new LeaderConnector type that takes care of this. App has been updated to keep a LeaderConnector around.

We save slightly less work with the new design because client and driver can't share a LeaderTracker (although in the future I'd love to turn LeaderConnector into a place for sharing common functionality between client and driver, instead of duplicating options).
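
A rough sketch of the resulting shape on the client side; the names below (LeaderConnector, a FindLeader method) are illustrative and may not match the final API exactly:

```go
// Package client sketches the LeaderConnector idea; it is not the real
// go-dqlite client package.
package client

import (
	"context"
	"net"
)

// connector stands in for *protocol.Connector, which now owns the
// leader-tracking state.
type connector struct {
	lastLeader string
}

func (c *connector) connect(ctx context.Context) (net.Conn, error) {
	// Try c.lastLeader first, then fall back to the full search (elided).
	var d net.Dialer
	return d.DialContext(ctx, "tcp", c.lastLeader)
}

// LeaderConnector is a hypothetical stand-in for the new type: callers such
// as App hold one of these for the lifetime of the process, so that leader
// tracking (and the cached leader connection) persists across calls.
type LeaderConnector struct {
	c *connector
}

// FindLeader returns a connection to the current leader, reusing the
// connector's tracked state between calls.
func (lc *LeaderConnector) FindLeader(ctx context.Context) (net.Conn, error) {
	return lc.c.connect(ctx)
}
```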

First, avoid infinite recursion when logging in connectAttemptOne.

Second, fix the tests by logging the error from Semaphore.Acquire. It
seems that before we bumped the version of golang.org/x/sync/semaphore,
this type didn't honor the context deadline, which is why the tests
passed for me locally without merging in the go.mod updates that are on
master.
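
For reference, a minimal standalone example of the Semaphore.Acquire behaviour involved here. With the current golang.org/x/sync/semaphore, Acquire returns the context's error once the deadline expires; logging that error (as this commit does) makes the failure visible in tests:

```go
package main

import (
	"context"
	"log"
	"time"

	"golang.org/x/sync/semaphore"
)

func main() {
	sem := semaphore.NewWeighted(1)

	// Hold the only slot so the next Acquire has to wait.
	if err := sem.Acquire(context.Background(), 1); err != nil {
		log.Fatal(err)
	}

	// With a deadline, Acquire gives up and returns ctx.Err() instead of
	// blocking forever.
	ctx, cancel := context.WithTimeout(context.Background(), 100*time.Millisecond)
	defer cancel()
	if err := sem.Acquire(ctx, 1); err != nil {
		log.Printf("semaphore: %v", err) // context deadline exceeded
	}
}
```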

cole-miller merged commit 6702ded into canonical:master on Oct 18, 2024
9 checks passed