We now use an early location to reset upstream replication, so we will not lose data. But we may replicate duplicate data and hit a "duplicate entry" error. For GTID-based replication we use maybeSkipNRowsEvent to skip the duplicate events, but for position-based replication we have no such protection.
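For reference, the skipping idea looks roughly like this (a minimal sketch with made-up names, not DM's actual code): after replication is reset to an earlier location, the syncer counts and discards the row events it knows were already applied.

```go
package main

import "fmt"

// rowsEvent is a toy stand-in for a binlog rows event; in DM the real
// thing is a go-mysql replication event.
type rowsEvent struct {
	gtid string
}

// skipper drops the first n row events after replication is reset to
// an earlier location, mirroring the idea behind maybeSkipNRowsEvent
// (heavily simplified).
type skipper struct {
	toSkip int
}

// maybeSkip reports whether this event was already applied before the
// reset and should be discarded instead of re-executed.
func (s *skipper) maybeSkip(e rowsEvent) bool {
	if s.toSkip > 0 {
		s.toSkip--
		return true
	}
	return false
}

func main() {
	// Suppose 2 row events were already applied when the connection broke.
	s := &skipper{toSkip: 2}
	for _, e := range []rowsEvent{{"uuid:1"}, {"uuid:2"}, {"uuid:3"}} {
		if s.maybeSkip(e) {
			fmt.Println("skip duplicate event", e.gtid)
			continue
		}
		fmt.Println("apply event", e.gtid)
	}
}
```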
I'll port #3860 to the master branch as an ugly fix, which always enables safe mode for one transaction. After the ugly fix lands, I'll convert this issue into a feature request and wait for a better fix.
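Roughly what "safe mode for one transaction" means (a hypothetical sketch; the identifiers are illustrative, not DM's real API): rewrite the first transaction's DML into an idempotent form, e.g. INSERT becomes REPLACE, so replaying already-applied rows cannot raise a duplicate-entry error.

```go
package main

import (
	"fmt"
	"strings"
)

// rewriteForSafeMode turns an INSERT into a REPLACE so that replaying
// an already-applied row cannot fail with a duplicate-entry error.
// (Real safe mode also rewrites UPDATE as DELETE+REPLACE; omitted here.)
func rewriteForSafeMode(sql string) string {
	if strings.HasPrefix(strings.ToUpper(sql), "INSERT ") {
		return "REPLACE " + sql[len("INSERT "):]
	}
	return sql
}

func main() {
	safeModeLeft := 1 // stay in safe mode for exactly one transaction
	txns := [][]string{
		{"INSERT INTO t VALUES (1)"}, // possibly a replayed duplicate
		{"INSERT INTO t VALUES (2)"}, // normal traffic afterwards
	}
	for _, txn := range txns {
		for _, stmt := range txn {
			if safeModeLeft > 0 {
				stmt = rewriteForSafeMode(stmt)
			}
			fmt.Println("exec:", stmt)
		}
		if safeModeLeft > 0 {
			safeModeLeft--
		}
	}
}
```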
What did you do?
Upstream has a sequence of binlog events with GTIDs. After upstream sends event 2, lastLocation becomes the GTID set that already includes this event's GTID:
https://github.com/pingcap/ticdc/blob/20626babf21fc381d4364646c40dd84598533d66/dm/syncer/syncer.go#L2126-L2135
Now upstream raises a network error, so getEvent returns an error:
https://github.com/pingcap/ticdc/blob/20626babf21fc381d4364646c40dd84598533d66/dm/syncer/syncer.go#L1590
DM will now do a finer-grained retry.
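To make the failure mode concrete (a self-contained toy model, not DM code): on reconnect, upstream only resends events whose GTIDs are missing from the set the client resumes with, so a GTID set that was advanced too early silently swallows the in-flight event.

```go
package main

import "fmt"

// gtidSet is a toy stand-in for a MySQL executed-GTID set.
type gtidSet map[string]bool

// serverResend models upstream behavior on reconnect: it only resends
// events whose GTID is NOT already in the client's resume set.
func serverResend(client gtidSet, upstream []string) []string {
	var resent []string
	for _, g := range upstream {
		if !client[g] {
			resent = append(resent, g)
		}
	}
	return resent
}

func main() {
	upstream := []string{"uuid:1", "uuid:2", "uuid:3"}

	// lastLocation was advanced to include uuid:2 before the
	// transaction actually committed downstream.
	lastLocation := gtidSet{"uuid:1": true, "uuid:2": true}
	fmt.Println("resume from lastLocation:", serverResend(lastLocation, upstream))
	// prints [uuid:3] -- uuid:2 is silently lost

	// Resuming from the transaction's start location keeps uuid:2.
	startLocation := gtidSet{"uuid:1": true}
	fmt.Println("resume from startLocation:", serverResend(startLocation, upstream))
	// prints [uuid:2 uuid:3]
}
```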
What did you expect to see?
No data loss.
What did you see instead?
Data is lost because the retry resumes from lastLocation:
https://github.com/pingcap/ticdc/blob/20626babf21fc381d4364646c40dd84598533d66/dm/syncer/syncer.go#L1622-L1624
However, as described above (the code comment there is wrong), lastLocation is already the GTID set that includes this GTID, so resuming from it will never send this GTID again.
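A possible direction for a proper fix (just a sketch under my assumptions; locationForRetry and the shape of Location here are hypothetical, not DM's actual code): the finer-grained retry should resume from the location saved at the start of the current transaction rather than from lastLocation.

```go
package main

import "fmt"

// Location is a toy stand-in for DM's binlog.Location; only the GTID
// set matters for this sketch.
type Location struct {
	GTIDSet string
}

// locationForRetry picks where to restart the binlog stream after a
// transient error. startLocation must be saved when the current
// transaction's first event arrives; resuming from lastLocation is
// exactly the bug in this issue, because the in-flight GTID is already
// inside it and upstream will never resend it.
func locationForRetry(startLocation, lastLocation Location) Location {
	return startLocation
}

func main() {
	start := Location{GTIDSet: "uuid:1"}
	last := Location{GTIDSet: "uuid:1-2"} // already contains the in-flight GTID
	fmt.Println("retry from:", locationForRetry(start, last).GTIDSet)
}
```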
Versions of the cluster

DM version (run dmctl -V or dm-worker -V or dm-master -V): v5.3.0
Upstream MySQL/MariaDB server version: (paste upstream MySQL/MariaDB server version here)
Downstream TiDB cluster version (execute SELECT tidb_version(); in a MySQL client): (paste TiDB cluster version here)
How did you deploy DM: tiup or manually? (leave TiUP or manually here)
Other interesting information (system version, hardware config, etc):
Current status of DM cluster (execute query-status <task-name> in dmctl): (paste current status of DM cluster here)