From 4e5389a9b802b991e95c9a1d21c8743cf98be451 Mon Sep 17 00:00:00 2001
From: Nathan VanBenschoten
Date: Wed, 21 Apr 2021 19:17:12 -0400
Subject: [PATCH] kv: don't clear raftRequestQueue of right-hand side of Range split

This commit fixes a test flake of `TestLeaderAfterSplit` I observed in CI
and which we've seen at least once in
https://github.com/cockroachdb/cockroach/issues/43564#issuecomment-794543331.

I bisected the flake back to a591707, but that wasn't the real source of the
flakiness - the move from `multiTestContext` to `TestCluster` just changed
the transport mechanism between replicas and revealed an existing bug.

The real issue here was that, upon applying a split, any previously
established `raftRequestQueue` to the RHS replica was discarded. The effect
of this is that we could see the following series of events:
```
1. r1 is created from a split
2. r1 campaigns to establish a leader for the new range
3. r1 sends MsgPreVote msgs to r2 and r3
4. s2 and s3 both receive the messages for the uninitialized r2 and r3,
   respectively
5. raftRequestQueues are established for r2 and r3, and the MsgPreVotes
   are added
6. the split triggers that create r2 and r3 finally fire
7. the raftRequestQueues for r2 and r3 are discarded
8. the election stalls indefinitely, because the test sets
   RaftElectionTimeoutTicks=1000000
```

Of course, in real deployments, `RaftElectionTimeoutTicks` will never be set
so high, so a new election will be called after about 3 seconds. Still, even
in real deployments this could cause about 3s of unavailability immediately
after a split, so it seems worthwhile to fix.

This commit fixes the issue by removing the logic that discarded an
uninitialized replica's `raftRequestQueue` upon applying a split that
initializes the replica. That logic looks quite intentional, but if we look
back at when it was added, we see that it wasn't entirely deliberate. The
code was added in d3b0e73, which extracted everything except the call to
`s.mu.replicas.Delete(int64(rangeID))` from `unlinkReplicaByRangeIDLocked`.
So the change wasn't intentionally discarding the queue; it was just trying
not to change the existing behavior.

This change is safe and does not risk leaking the `raftRequestQueue`,
because we remove from `s.mu.uninitReplicas` but immediately call into
`addReplicaInternalLocked` to add an initialized replica.

Release note (bug fix): Fixed a rare race that could lead to a 3-second
stall before a Raft leader was elected on a Range immediately after it was
split off from its left-hand neighbor.
---
 pkg/kv/kvserver/store_split.go | 6 ++----
 1 file changed, 2 insertions(+), 4 deletions(-)

diff --git a/pkg/kv/kvserver/store_split.go b/pkg/kv/kvserver/store_split.go
index 4c6e9db5b192..081cf6fc3dfd 100644
--- a/pkg/kv/kvserver/store_split.go
+++ b/pkg/kv/kvserver/store_split.go
@@ -311,11 +311,9 @@ func (s *Store) SplitRange(
 		if exRng != rightReplOrNil {
 			log.Fatalf(ctx, "found unexpected uninitialized replica: %s vs %s", exRng, rightReplOrNil)
 		}
-		// NB: We only remove from uninitReplicas and the replicaQueues maps here
-		// so that we don't leave open a window where a replica is temporarily not
-		// present in Store.mu.replicas.
+		// NB: We only remove from uninitReplicas here so that we don't leave open a
+		// window where a replica is temporarily not present in Store.mu.replicas.
 		delete(s.mu.uninitReplicas, rightDesc.RangeID)
-		s.replicaQueues.Delete(int64(rightDesc.RangeID))
 	}
 
 	leftRepl.setDescRaftMuLocked(ctx, newLeftDesc)
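
To make the buffered-message race concrete, here is a minimal, self-contained
Go sketch of the behavior described in the commit message. The `store`,
`receiveRaftMessage`, and `applySplit` names are hypothetical stand-ins, not
CockroachDB's actual types; only the buffer-then-discard handling of a
`raftRequestQueue` for an uninitialized right-hand-side replica is modeled.

```go
package main

import "fmt"

// rangeID identifies a Range; queuedMsg stands in for a Raft message such as
// a MsgPreVote that arrives for a replica that does not exist yet.
type rangeID int
type queuedMsg string

// store models only the two pieces of per-store state relevant to the race:
// a message queue keyed by range ID and the set of uninitialized replicas.
// Field names echo the real ones, but the structure is simplified.
type store struct {
	raftRequestQueues map[rangeID][]queuedMsg
	uninitReplicas    map[rangeID]bool
}

func newStore() *store {
	return &store{
		raftRequestQueues: map[rangeID][]queuedMsg{},
		uninitReplicas:    map[rangeID]bool{},
	}
}

// receiveRaftMessage mirrors steps 4-5 of the event sequence: the store sees
// a message for a range it has no initialized replica for, so it records an
// uninitialized replica and buffers the message.
func (s *store) receiveRaftMessage(id rangeID, m queuedMsg) {
	s.uninitReplicas[id] = true
	s.raftRequestQueues[id] = append(s.raftRequestQueues[id], m)
}

// applySplit mirrors steps 6-7: the split trigger fires and initializes the
// RHS replica. With discardQueue=true it behaves like the old code (the
// buffered MsgPreVote is lost, so the election stalls until the next
// election timeout); with discardQueue=false it behaves like the fixed code
// (the queue survives and its messages reach the now-initialized replica).
func (s *store) applySplit(id rangeID, discardQueue bool) []queuedMsg {
	delete(s.uninitReplicas, id)
	if discardQueue {
		delete(s.raftRequestQueues, id)
	}
	pending := s.raftRequestQueues[id]
	delete(s.raftRequestQueues, id)
	return pending
}

func main() {
	for _, discard := range []bool{true, false} {
		s := newStore()
		// A MsgPreVote from r1 reaches this store before its split trigger fires.
		s.receiveRaftMessage(2, "MsgPreVote from r1")
		// The split trigger finally fires and initializes r2.
		delivered := s.applySplit(2, discard)
		fmt.Printf("discard queue on split: %v -> messages delivered to r2: %v\n",
			discard, delivered)
	}
	// Prints:
	//   discard queue on split: true -> messages delivered to r2: []
	//   discard queue on split: false -> messages delivered to r2: [MsgPreVote from r1]
}
```

In the old behavior the MsgPreVote is dropped, so the new Range has no leader
until another election timeout fires; with the fix the buffered message is
delivered once the replica is initialized and the election can complete
immediately.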