dm/syncer: multiple rows use downstream schema #3308
Conversation
[REVIEW NOTIFICATION] This pull request has been approved by:
To complete the pull request process, please ask the reviewers in the list to review. The full list of commands accepted by this bot can be found here. Reviewers can indicate their review by submitting an approval review.
…tiple-rows-use-downstream-schema
/run-dm-integration-test
/run-dm-compatibility-test
/run-kafka-integration-test
/run-integration-test
Codecov Report
@@ Coverage Diff @@
## master #3308 +/- ##
================================================
+ Coverage 56.5356% 57.0807% +0.5450%
================================================
Files 211 216 +5
Lines 22798 22985 +187
================================================
+ Hits 12889 13120 +231
+ Misses 8598 8523 -75
- Partials 1311 1342 +31
/run-dm-integration-tests
rest LGTM
@@ -90,6 +91,8 @@ func (s *testSyncerSuite) TestCompactJob(c *C) {
		Length: types.UnspecifiedLength,
	}},
}
downTi := schema.GetDownStreamTi(ti, ti)
Can we use a different tableinfo in the test?
How about adding some integration tests in shardddl1 for compact and multiple-rows, either by changing the original tests or by adding new test cases?
I think it needs integration tests for compact and multiple-rows too; I will do it later.
…tiple-rows-use-downstream-schema
…com/WizardXiao/ticdc into multiple-rows-use-downstream-schema
/run-dm-integration-test
/run-dm-compatibility-test
/run-kafka-integration-test
/run-integration-test
rest LGTM
…tiple-rows-use-downstream-schema
/run-dm-compatibility-test
/run-integration-test
/run-kafka-integration-test
/run-leak-test
rest lgtm
/run-dm-integration-test
/merge
This pull request has been accepted and is ready to merge. Commit hash: 37a03c8
/label needs-cherry-pick-release-5.3
/run-cherry-picker
/run-cherry-picker
/cherry-pick release-5.3
/cherry-pick release-5.3
Signed-off-by: ti-chi-bot <ti-community-prow-bot@tidb.io>
@amyangfei: new pull request created: #3953. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the ti-community-infra/tichi repository.
* fix the txn_batch_size metric inaccuracy bug when the sink target is MQ
* address comments
* add comments for exported functions
* fix the compiling problem
* workerpool: limit the rate to output deadlock warning (#3775) (#3795)
* tests(ticdc): set up the sync diff output directory correctly (#3725) (#3741)
* relay(dm): use binlog name comparison (#3710) (#3712)
* dm/load: fix concurrent call Loader.Status (#3459) (#3468)
* cdc/sorter: make unified sorter cgroup aware (#3436) (#3439)
* tz (ticdc): fix timezone error (#3887) (#3906)
* pkg,cdc: do not use log package (#3902) (#3940)
* *: rename repo from pingcap/ticdc to pingcap/tiflow (#3959)
* http_*: add log for http api and refine the err handle logic (#2997) (#3307)
* etcd_worker: batch etcd patch (#3277) (#3389)
* http_api (ticdc): check --cert-allowed-cn before add server common name (#3628) (#3882)
* kvclient(ticdc): fix kvclient takes too long time to recover (#3612) (#3663)
* owner: fix owner tick block http request (#3490) (#3530)
* dm/syncer: use downstream PK/UK to generate DML (#3168) (#3256)
* dep(dm): update go-mysql (#3914) (#3934)
* dm/syncer: multiple rows use downstream schema (#3308) (#3953)
* errorutil,sink,syncer: add errorutil to handle ignorable error (#3264) (#3995)
* dm/worker: don't exit when failed to read checkpoint in relay (#3345) (#4005)
* syncer(dm): use an early location to reset binlog and open safemode (#3860)
* ticdc/owner: Fix ddl special comment syntax error (#3845) (#3978)
* dm/scheduler: fix inconsistent of relay status (#3474) (#4009)
* owner,scheduler(cdc): fix nil pointer panic in owner scheduler (#2980) (#4007) (#4016)
* config(ticdc): Fix old value configuration check for maxwell protocol (#3747) (#3783)
* sink(ticdc): cherry pick sink bug fix to release 5.3 (#4083)
* master(dm): clean and treat invalid load task (#4004) (#4145)
* loader: fix wrong progress in query-status for loader (#4093) (#4143) close #3252
* ticdc/processor: Fix backoff base delay misconfiguration (#3992) (#4028)
* dm: load table structure from dump files (#3295) (#4163)
* compactor: fix duplicate entry in safemode (#3432) (#3434) (#4088)
* kv(ticdc): reduce eventfeed rate limited log (#4072) (#4111) close #4006
* metrics(ticdc): add resolved ts and add changefeed to dataflow (#4038) (#4104)
* This is an automated cherry-pick of #4192 Signed-off-by: ti-chi-bot <ti-community-prow-bot@tidb.io>
* retry(dm): align with tidb latest error message (#4172) (#4254) close #4159, close #4246
* owner(ticdc): Add bootstrap and try to fix the meta information in it (#3838) (#3865)
* redolog: add a precleanup process when s3 enable (#3525) (#3878)
* ddl(dm): make skipped ddl pass `SplitDDL()` (#4176) (#4227) close #4173
* cdc/sink: remove Initialize method from the sink interface (#3682) (#3765) Co-authored-by: Ling Jin <7138436+3AceShowHand@users.noreply.github.com>
* http_api (ticdc): fix http api 'get processor' panic. (#4117) (#4123) close #3840
* sink (ticdc): fix a deadlock due to checkpointTs fall back in sinkNode (#4084) (#4099) close #4055
* cdc/sink: adjust kafka initialization logic (#3192) (#4162)
* try fix conflicts.
* This is an automated cherry-pick of #4192 Signed-off-by: ti-chi-bot <ti-community-prow-bot@tidb.io>
* fix conflicts.
* fix conflicts.
Co-authored-by: zhaoxinyu <zhaoxinyu512@gmail.com>
Co-authored-by: amyangfei <yangfei@pingcap.com>
Co-authored-by: lance6716 <lance6716@gmail.com>
Co-authored-by: sdojjy <sdojjy@qq.com>
Co-authored-by: Ling Jin <7138436+3AceShowHand@users.noreply.github.com>
Co-authored-by: 3AceShowHand <jinl1037@hotmail.com>
What problem does this PR solve?
This PR is a supplement to https://github.com/pingcap/ticdc/pull/3168, which uses the downstream PK/UK only to generate the WHERE condition. An error can occur when the upstream and downstream have different PK/UK, as in the example below. This PR uses the downstream PK/UK in compact, causality, and genSql to fix this error.
DMLs:
DML1: INSERT INTO t1 (c0, c1, c2) VALUES (1, 2, 3);
DML2: INSERT INTO t1(c0, c1, c2) VALUES (2, 2, 4);
DML3: UPDATE t1 SET c1= 3, c2 = 3 WHERE c0= 2;
DML4: DELETE FROM t1 WHERE c0 = 1;
DML5: INSERT INTO t1 (c0, c1, c2) VALUES (1, 2, 4);
Upstream Schema:
create table t1(c0 int primary key, c1 int, c2 int)
Causality Groups (grouped by the upstream PK c0):
DML1: INSERT INTO t1 (c0, c1, c2) VALUES (1, 2, 3); --> group1_sql1
DML2: INSERT INTO t1(c0, c1, c2) VALUES (2, 2, 4); --> group2_sql1
DML3: UPDATE t1 SET c1= 3, c2 = 3 WHERE c0= 2; --> group2_sql2
DML4: DELETE FROM t1 WHERE c0 = 1; --> group1_sql2
DML5: INSERT INTO t1 (c0, c1, c2) VALUES (1, 2, 4); --> group1_sql3
Downstream Schema:
create table t1(c0 int, c1 int, c2 int, unique key(c1, c2))
In this situation, group1_sql3 will conflict with group2_sql1 when the SQL is executed in the downstream: the two inserted rows share the downstream unique key value (c1, c2) = (2, 4), yet they sit in different groups that may run concurrently. The sketch below illustrates the collision.
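For concreteness, here is a minimal, self-contained Go sketch of the problem (illustrative only; the row type and keyBy helper are hypothetical and are not dm/syncer code). It keys the two inserts once by the upstream PK and once by the downstream UK:

```go
// Why grouping by the upstream key is unsafe here: the two inserts look
// independent under the upstream PK (c0) but collide under the downstream
// unique key (c1, c2).
package main

import "fmt"

type row struct{ c0, c1, c2 int }

// keyBy builds a causality key from the chosen key columns of a row.
func keyBy(r row, useDownstreamUK bool) string {
	if useDownstreamUK {
		return fmt.Sprintf("uk(c1,c2)=(%d,%d)", r.c1, r.c2)
	}
	return fmt.Sprintf("pk(c0)=%d", r.c0)
}

func main() {
	dml2 := row{2, 2, 4} // INSERT ... VALUES (2, 2, 4) -> group2 by upstream PK
	dml5 := row{1, 2, 4} // INSERT ... VALUES (1, 2, 4) -> group1 by upstream PK

	// Keyed by the upstream PK, the rows fall into different groups...
	fmt.Println(keyBy(dml2, false), keyBy(dml5, false)) // pk(c0)=2 pk(c0)=1
	// ...but they share the downstream UK value, so running the groups
	// concurrently can hit a duplicate-key error on unique key (c1, c2)
	// if group1_sql3 executes before group2_sql2's UPDATE.
	fmt.Println(keyBy(dml2, true), keyBy(dml5, true)) // both uk(c1,c2)=(2,4)
}
```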
What is changed and how it works?
Causality will use the downstream PK/UK, so there is no conflict in the example: the causality keys are now derived from the downstream unique key (c1, c2), as sketched after the regrouping below.
DML1: INSERT INTO t1 (c0, c1, c2) VALUES (1, 2, 3); --> group1_sql1
DML2: INSERT INTO t1(c0, c1, c2) VALUES (2, 2, 4); --> group2_sql1
DML3: UPDATE t1 SET c1= 3, c2 = 3 WHERE c0= 2; --> group2_sql2
DML4: DELETE FROM t1 WHERE c0 = 1; --> group1_sql2
DML5: INSERT INTO t1 (c0, c1, c2) VALUES (1, 2, 4); --> group2_sql3
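The following is a hedged Go sketch of the idea only; names such as downstreamKeys and causalityKey are hypothetical and do not come from dm/syncer. Each DML contributes one conflict key per downstream PK/UK, and DMLs that share a key are routed to the same group:

```go
// Causality grouping driven by the downstream table's key columns.
package main

import (
	"fmt"
	"strings"
)

// downstreamKeys holds the downstream table's identifying column sets,
// here just the unique key (c1, c2) from the example above.
var downstreamKeys = [][]string{{"c1", "c2"}}

// causalityKey renders one conflict key for a row from one downstream PK/UK.
func causalityKey(row map[string]string, cols []string) string {
	parts := make([]string, 0, len(cols))
	for _, col := range cols {
		parts = append(parts, col+"="+row[col])
	}
	return "t1(" + strings.Join(parts, ",") + ")"
}

func main() {
	groups := map[string]string{} // conflict key -> group name
	assign := func(name string, row map[string]string) {
		for _, cols := range downstreamKeys {
			k := causalityKey(row, cols)
			if _, ok := groups[k]; !ok {
				groups[k] = fmt.Sprintf("group%d", len(groups)+1)
			}
			fmt.Printf("%s key %s -> %s\n", name, k, groups[k])
		}
	}
	assign("DML2", map[string]string{"c0": "2", "c1": "2", "c2": "4"})
	assign("DML5", map[string]string{"c0": "1", "c1": "2", "c2": "4"})
	// Both DMLs share the key t1(c1=2,c2=4), so they land in the same group
	// and can no longer race on the downstream unique key.
}
```

Deriving the keys from the downstream table info is what lets DML2 and DML5 end up together in group2 in the regrouping above.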
Check List
Tests
Code changes
Side effects
Related changes
Release note