kv: don't allow node liveness to regress in Gossip network #64032
Conversation
In cockroachdb#64028, we fixed a long-standing flake in `TestLeaderAfterSplit`. However, the test had actually gotten more flaky recently, which I bisected back to df826cd. The problem we occasionally see with the test is that all three replicas of a post-split Range call an election, resulting in a hung vote. Since the test is configured with `RaftElectionTimeoutTicks=1000000`, a follow-up election is never called, so the test times out.

After some debugging, I found that the range would occasionally split while the non-leaseholder nodes (n2 and n3) thought that the leaseholder node (n1) was not live. This meant that their call to `shouldCampaignOnWake` in the split trigger considered the RHS's epoch-based lease to be invalid (state = ERROR). So all three replicas would call an election and the test would get stuck.

The offending commit introduced this new flake because of this change: cockroachdb@df826cd#diff-488a090afc4b6eaf56cd6d13b347bac67cb3313ce11c49df9ee8cd95fd73b3e8R454

Now that the call to `MaybeGossipNodeLiveness` is asynchronous on the node-liveness range, it was possible for two calls to `MaybeGossipNodeLiveness` to race, one asynchronously triggered by `leasePostApplyLocked` and one synchronously triggered by `handleReadWriteLocalEvalResult` due to a node liveness update. This allowed for the following ordering of events:

```
- async call reads liveness(nid:1 epo:0 exp:0,0)
- sync call writes and then reads liveness(nid:1 epo:1 exp:1619645671.921265300,0)
- sync call adds liveness(nid:1 epo:1 exp:1619645671.921265300,0) to gossip
- async call adds liveness(nid:1 epo:0 exp:0,0) to gossip
```

Once this had occurred, n2 and n3 never again considered n1 live. Gossip never recovered from this state because the liveness record was never heartbeated again, due to the test's configuration of `RaftElectionTimeoutTicks=1000000`.

This commit fixes the bug by ensuring that all calls to `MaybeGossipNodeLiveness` and `MaybeGossipSystemConfig` hold the raft mutex. This provides the necessary serialization to avoid data races, which was actually already documented on `MaybeGossipSystemConfig`.
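To make the interleaving concrete, here is a minimal, self-contained Go sketch of the serialization idea. This is not CockroachDB's actual code; the `replica`, `livenessRecord`, and `maybeGossipNodeLiveness` names here are illustrative stand-ins. The point it demonstrates is that holding one mutex across the "read liveness record, then publish to gossip" step makes the two steps atomic with respect to concurrent callers, so a caller that read a stale record can never publish it after another caller has already published a newer one:

```go
// Illustrative sketch only (not the CockroachDB implementation).
package main

import (
	"fmt"
	"sync"
)

// livenessRecord is a simplified stand-in for the node liveness proto.
type livenessRecord struct {
	nodeID int
	epoch  int64
}

// replica is a hypothetical stand-in for a replica of the liveness range.
type replica struct {
	mu       sync.Mutex             // plays the role of the raft mutex in the fix
	liveness livenessRecord         // current liveness state (simplified)
	gossip   map[int]livenessRecord // stand-in for the gossip network
}

// maybeGossipNodeLiveness reads the liveness record and publishes it to
// gossip. Because the lock is held across both steps, the read and the
// publish cannot interleave with another caller's read and publish.
func (r *replica) maybeGossipNodeLiveness() {
	r.mu.Lock()
	defer r.mu.Unlock()
	rec := r.liveness          // read
	r.gossip[rec.nodeID] = rec // publish
}

// bumpEpochAndGossip models the synchronous path that writes a new
// liveness record and then gossips it.
func (r *replica) bumpEpochAndGossip() {
	r.mu.Lock()
	r.liveness.epoch++
	r.mu.Unlock()
	r.maybeGossipNodeLiveness()
}

func main() {
	r := &replica{
		liveness: livenessRecord{nodeID: 1, epoch: 0},
		gossip:   make(map[int]livenessRecord),
	}
	var wg sync.WaitGroup
	wg.Add(2)
	go func() { defer wg.Done(); r.maybeGossipNodeLiveness() }() // "async" path
	go func() { defer wg.Done(); r.bumpEpochAndGossip() }()      // "sync" path
	wg.Wait()
	// The buggy interleaving (epoch 0 published after epoch 1) is impossible
	// here: a caller that read epoch 0 must have published it before
	// releasing the lock, so the final gossiped record always has epoch 1.
	fmt.Printf("gossiped: %+v\n", r.gossip[1])
}
```

Without the lock held across both steps, the first goroutine could read epoch 0, lose the CPU while the second goroutine wrote and gossiped epoch 1, and then publish the stale epoch 0 record last, which is exactly the regression described above.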
Thank you for tracking this down!
Reviewed 3 of 3 files at r1.
Reviewable status: complete! 0 of 0 LGTMs obtained (waiting on @nvanbenschoten)
TFTR! Do you have thoughts about backporting this to release-21.1? I'm a little hesitant to do so for the v21.1.0 release because we've never seen real issues from this (that I know of), but I think it's a good candidate for v21.1.1.

bors r+
Build succeeded.
21.1.1 sounds good to me. I generally always prefer to let things bake if we can afford it, and here it seems like we can.
On Thu, Apr 22, 2021 at 7:30 PM craig[bot] wrote:
Merged #64032 into master.