Regression from PR #445 Incorrectly Allows Slot Ownership Updates via Replica #753

Closed
PingXie opened this issue Jul 7, 2024 · 1 comment · Fixed by #754
PingXie (Member) commented Jul 7, 2024

I took a look too and realized it's a regression introduced by my slot migration PR #445. That change started allowing a replica to report its primary's slot states and trigger clusterUpdateSlotsConfigWith.

[Screenshot: PR #445 - Slot Migration Changes]
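For context, here is a minimal sketch of the code path in question. This is my own simplified illustration, not the actual cluster.c code; the struct layout and the helper names (`processPingSlots`, `updateSlotsConfigWith`) are made up, and the real logic lives in clusterProcessPacket / clusterUpdateSlotsConfigWith:

```c
#include <stddef.h>
#include <stdint.h>

/* Illustrative types only; the real definitions live in cluster.h. */
typedef struct clusterNode {
    struct clusterNode *replicaof;    /* NULL if this node is a primary */
    uint64_t configEpoch;
    unsigned char slots[16384 / 8];   /* slot ownership bitmap */
} clusterNode;

static int nodeIsReplica(const clusterNode *n) { return n->replicaof != NULL; }

/* Stand-in for clusterUpdateSlotsConfigWith(); the real function reassigns
 * slot ownership and may demote the receiving node when it loses all of
 * its slots to a node with a higher config epoch. */
static void updateSlotsConfigWith(clusterNode *claimer, uint64_t claimedEpoch,
                                  const unsigned char *claimedSlots) {
    (void)claimer; (void)claimedEpoch; (void)claimedSlots;
}

/* Behavior opened up by #445 (simplified): when a PING arrives from a
 * replica, the claimed slot bitmap is credited to the node it reports as
 * its primary, so the ownership update can still run even though the
 * message itself came from a replica. */
void processPingSlots(clusterNode *msgSender, uint64_t claimedEpoch,
                      const unsigned char *claimedSlots) {
    clusterNode *claimer =
        nodeIsReplica(msgSender) ? msgSender->replicaof : msgSender;
    updateSlotsConfigWith(claimer, claimedEpoch, claimedSlots);
}
```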

Here's what I think happens in these test failures involving a 3-node shard:

[T1] - Node A, B, and C are in the same shard with A as the primary.
[T2] - Node A loses its primaryship to B via a graceful/manual failover.
[T3] - After winning the election, B broadcasts the news to every node in the cluster, including C.
[T4] - C receives B's latest PING message and correctly registers B as its new primary.
[T5] - C then sends a new PING message to A, claiming B is its primary with all the slots.
[T6] - A still hasn't received B's broadcast message from [T3] when C's PING message from [T5] arrives at A.
And this is where things go wrong: a replicaof cycle is created.

At this point, A still thinks it's the primary of the shard, and in A's view B -> replicaof == A. Since C is still a replica (as before), the role-change handling logic doesn't apply. So A enters clusterUpdateSlotsConfigWith using C's slot information (which is up to date with B's). More importantly, the slot claim is credited to B, C's reported primary, and A assumes B -> replicaof == A. The slot ownership update logic correctly gives ownership of the slots to B. Because A loses all its slots to another node in the same shard with a higher config epoch, A is demoted to a replica of the winner, B. We then set A -> replicaof = B, completing the replicaof cycle.
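Continuing the illustrative sketch above, one possible direction (an assumption on my part, not necessarily what #754 ends up doing) is to simply ignore slot claims relayed by a replica and wait for the primary's own PING/PONG:

```c
/* Assumed guard, not the actual #754 change: only accept slot claims from
 * a message whose real sender is a primary. With this in place, C's PING
 * at [T5] cannot demote A, so the replicaof cycle never forms. */
void processPingSlotsGuarded(clusterNode *msgSender, uint64_t claimedEpoch,
                             const unsigned char *claimedSlots) {
    if (nodeIsReplica(msgSender)) {
        /* A replica's view of its primary's slots is second-hand; the
         * primary will advertise the same ownership itself shortly. */
        return;
    }
    updateSlotsConfigWith(msgSender, claimedEpoch, claimedSlots);
}
```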

Originally posted by @PingXie in #573 (comment)

hpatro (Contributor) commented Jul 9, 2024

Thanks for the nice explanation @PingXie!

Before diving into the code fix: is it problematic in general to handle a topology update from a replica with a higher config epoch? My initial thought was to solve this problem by fixing the primary-replica relationship correctly for all the nodes in a given shard.
