
release-21.1: kv: don't allow node liveness to regress in Gossip network #65357

Merged

Conversation

nvanbenschoten
Member

Backport 1/1 commits from #64032.

/cc @cockroachdb/release


In #64028, we fixed a long-standing flake in `TestLeaderAfterSplit`. However,
the test had actually gotten more flaky recently, which I bisected back to
df826cd. The problem we occasionally see with the test is that all three
replicas of a post-split Range call an election, resulting in a hung vote. Since
the test is configured with `RaftElectionTimeoutTicks=1000000`, a follow-up
election is never called, so the test times out.

After some debugging, I found that the range would occasionally split while the
non-leaseholder nodes (n2 and n3) thought that the leaseholder node (n1) was not
live. This meant that their call to `shouldCampaignOnWake` in the split trigger
considered the RHS's epoch-based lease to be invalid (state = ERROR). So all
three replicas would call an election and the test would get stuck.

The offending commit introduced this new flake because of this change:
df826cd#diff-488a090afc4b6eaf56cd6d13b347bac67cb3313ce11c49df9ee8cd95fd73b3e8R454

Now that the call to `MaybeGossipNodeLiveness` is asynchronous on the
node-liveness range, it was possible for two calls to `MaybeGossipNodeLiveness`
to race, one asynchronously triggered by `leasePostApplyLocked` and one
synchronously triggered by `handleReadWriteLocalEvalResult` due to a node
liveness update. This allowed for the following ordering of events:

```
- async call reads liveness(nid:1 epo:0 exp:0,0)
- sync call writes and then reads liveness(nid:1 epo:1 exp:1619645671.921265300,0)
- sync call adds liveness(nid:1 epo:1 exp:1619645671.921265300,0) to gossip
- async call adds liveness(nid:1 epo:0 exp:0,0) to gossip
```

Once this had occurred, n2 and n3 never again considered n1 live. Gossip never
recovered from this state because the liveness record was never heartbeated
again, due to the test's configuration of `RaftElectionTimeoutTicks=1000000`.

This commit fixes the bug by ensuring that all calls to `MaybeGossipNodeLiveness`
and `MaybeGossipSystemConfig` hold the raft mutex. This provides the necessary
serialization to avoid data races, a requirement that was in fact already
documented on `MaybeGossipSystemConfig`.

@nvanbenschoten nvanbenschoten requested a review from tbg May 17, 2021 22:37

@nvanbenschoten nvanbenschoten merged commit 803e71d into cockroachdb:release-21.1 May 18, 2021
@nvanbenschoten nvanbenschoten deleted the backport21.1-64032 branch June 11, 2021 03:16