Deleting and creating the same changefeed within a short time may cause deadlock and data loss #7657
Labels: affects-5.0, affects-5.1, affects-5.2, affects-5.3, affects-5.4, affects-6.0, affects-6.1, affects-6.2, affects-6.3, affects-6.4, affects-6.5, affects-6.6, area/ticdc, found/automation, severity/major, type/bug
Comments
overvenus added the type/bug and area/ticdc labels on Nov 21, 2022
overvenus added the affects-4.0, affects-5.0, affects-5.1, affects-5.2, affects-5.3, affects-5.4, affects-6.0, affects-6.1, affects-6.2, affects-6.3, and affects-6.4 labels on Nov 21, 2022
/found automation
/severity major
/assign @overvenus
This issue has not been fixed; #7730 is a workaround that only mitigates the issue.
ti-chi-bot added a commit that referenced this issue on Dec 5, 2022
overvenus added a commit to ti-chi-bot/tiflow that referenced this issue on Dec 22, 2022: ref pingcap#7657 (Signed-off-by: Neil Shen <overvenus@gmail.com>)
ti-chi-bot added a commit that referenced this issue on Dec 22, 2022
Found a similar case: restarting a changefeed within a very short period. The owner may see the processor's state from before the changefeed restart. See also #8242
ti-chi-bot pushed a commit that referenced this issue on Feb 19, 2023
This was referenced on Feb 19, 2023
overvenus added a commit to ti-chi-bot/tiflow that referenced this issue on Apr 6, 2023: close pingcap#7657 (Signed-off-by: Neil Shen <overvenus@gmail.com>)
ti-chi-bot pushed a commit that referenced this issue on Apr 11, 2023: close #7657 (Signed-off-by: Neil Shen <overvenus@gmail.com>)
What did you do?
Because model.ChangeFeedID is non-unique, the owner and processor have no way to find out whether they are running the same changefeed. After applying delete and create operations on the same changefeed name, there is a race condition in which the owner and processor end up running two different changefeeds. This may cause deadlock and data loss.
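A minimal sketch of the ambiguity (the type and field names below are illustrative, not the actual TiCDC definitions): when a changefeed is identified only by its name, a processor that still holds state for the deleted changefeed cannot tell it apart from the newly created one.

```go
package main

import "fmt"

// ChangefeedKey mirrors the idea of model.ChangeFeedID: a name-based key
// with no per-creation identity.
type ChangefeedKey struct {
	Namespace string
	Name      string
}

// processorState is whatever the processor keeps for a running changefeed.
type processorState struct {
	key        ChangefeedKey
	checkpoint uint64
}

func main() {
	// State left over from the changefeed that was just deleted.
	stale := processorState{key: ChangefeedKey{"default", "cf-1"}, checkpoint: 100}

	// The owner recreates a changefeed with the same name right away.
	recreated := ChangefeedKey{"default", "cf-1"}

	// Comparing only the key, the processor believes it is still running
	// the same changefeed and keeps reporting the stale state, so the
	// owner and processor disagree about which changefeed exists. That
	// disagreement is what can end in deadlock or data loss.
	if stale.key == recreated {
		fmt.Println("processor reuses stale state, checkpoint =", stale.checkpoint)
	}
}
```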
To fix the issue, we should make sure that:
- Deleting a changefeed only responds after the changefeed is closed completely on every TiCDC node. This prevents data loss.
- A unique identifier is added to model.ChangeFeed. This prevents deadlock and other illegal states (see the sketch after this list).
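A minimal sketch of the second point, assuming a hypothetical Epoch field (the real field name and type in TiCDC may differ): every creation of a changefeed gets an identity that is never reused, so the owner and processor can detect that they refer to different incarnations of the same name.

```go
package main

import (
	"fmt"
	"time"
)

// ChangefeedInfo stands in for model.ChangeFeed; Epoch is the hypothetical
// per-creation identifier discussed above.
type ChangefeedInfo struct {
	Namespace string
	Name      string
	// Epoch is assigned once when the changefeed is created and never
	// reused, so two changefeeds sharing a name remain distinguishable.
	Epoch uint64
}

func newChangefeed(namespace, name string) ChangefeedInfo {
	return ChangefeedInfo{
		Namespace: namespace,
		Name:      name,
		Epoch:     uint64(time.Now().UnixNano()),
	}
}

// sameIncarnation is the check a processor would make before reusing any
// cached state for a changefeed.
func sameIncarnation(a, b ChangefeedInfo) bool {
	return a.Namespace == b.Namespace && a.Name == b.Name && a.Epoch == b.Epoch
}

func main() {
	old := newChangefeed("default", "cf-1")
	recreated := newChangefeed("default", "cf-1")

	// Same name, different epoch: the processor must drop the old state
	// instead of silently carrying it over.
	fmt.Println("same incarnation?", sameIncarnation(old, recreated))
}
```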
What did you expect to see?
No deadlock and no data loss.
What did you see instead?
The changefeed is in a deadlock state, and its lag keeps increasing.
Versions of the cluster
TiCDC version (execute cdc version): all released TiCDC versions