lightning: fix auto_increment out-of-range error #34146

Merged (18 commits) on Apr 29, 2022
12 changes: 10 additions & 2 deletions br/pkg/lightning/restore/table_restore.go
@@ -683,10 +683,18 @@ func (tr *TableRestore) postProcess(
tblInfo := tr.tableInfo.Core
var err error
if tblInfo.PKIsHandle && tblInfo.ContainsAutoRandomBits() {
err = AlterAutoRandom(ctx, rc.tidbGlue.GetSQLExecutor(), tr.tableName, tr.alloc.Get(autoid.AutoRandomType).Base()+1)
var maxAutoRandom, autoRandomTotalBits uint64
autoRandomTotalBits = 64
autoRandomBits := tblInfo.AutoRandomBits // range from (0, 15]
if !tblInfo.IsAutoRandomBitColUnsigned() {
// if auto_random is signed, leave one extra bit
autoRandomTotalBits = 63
}
maxAutoRandom = 1<<(autoRandomTotalBits-autoRandomBits) - 1
err = AlterAutoRandom(ctx, rc.tidbGlue.GetSQLExecutor(), tr.tableName, uint64(tr.alloc.Get(autoid.AutoRandomType).Base())+1, maxAutoRandom)
} else if common.TableHasAutoRowID(tblInfo) || tblInfo.GetAutoIncrementColInfo() != nil {
// only alter auto increment id iff table contains auto-increment column or generated handle
err = AlterAutoIncrement(ctx, rc.tidbGlue.GetSQLExecutor(), tr.tableName, tr.alloc.Get(autoid.RowIDAllocType).Base()+1)
err = AlterAutoIncrement(ctx, rc.tidbGlue.GetSQLExecutor(), tr.tableName, uint64(tr.alloc.Get(autoid.RowIDAllocType).Base())+1)
}
rc.alterTableLock.Unlock()
saveCpErr := rc.saveStatusCheckpoint(ctx, tr.tableName, checkpoints.WholeTableEngineID, err, checkpoints.CheckpointStatusAlteredAutoInc)
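The hunk above derives the maximum auto_random value from the column's shard bits. As a standalone illustration (a minimal sketch outside the PR; `maxAutoRandom` is an illustrative helper name, not the PR's code), the same bit arithmetic looks like this:

```go
package main

import "fmt"

// maxAutoRandom returns the largest value the auto_random allocator can
// produce for a bigint column with the given number of shard bits; a
// signed column reserves one extra bit for the sign.
func maxAutoRandom(autoRandomBits uint64, unsigned bool) uint64 {
	totalBits := uint64(64)
	if !unsigned {
		// if auto_random is signed, leave one extra bit for the sign
		totalBits = 63
	}
	return uint64(1)<<(totalBits-autoRandomBits) - 1
}

func main() {
	fmt.Println(maxAutoRandom(10, false)) // signed auto_random(10): 2^53-1 = 9007199254740991
	fmt.Println(maxAutoRandom(10, true))  // unsigned auto_random(10): 2^54-1
}
```

With 10 shard bits on a signed bigint this yields 2^53-1, the boundary value the integration tests below are built around.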
27 changes: 22 additions & 5 deletions br/pkg/lightning/restore/tidb.go
@@ -18,6 +18,7 @@ import (
"context"
"database/sql"
"fmt"
"math"
"strconv"
"strings"

@@ -373,9 +374,17 @@ func ObtainNewCollationEnabled(ctx context.Context, g glue.SQLExecutor) (bool, e
// NOTE: since tidb can make sure the auto id is always rebased even if the `incr` value is smaller
// than the auto increment base on the tidb side, we needn't fetch the current auto increment value here.
// See: https://github.com/pingcap/tidb/blob/64698ef9a3358bfd0fdc323996bb7928a56cadca/ddl/ddl_api.go#L2528-L2533
func AlterAutoIncrement(ctx context.Context, g glue.SQLExecutor, tableName string, incr int64) error {
logger := log.With(zap.String("table", tableName), zap.Int64("auto_increment", incr))
query := fmt.Sprintf("ALTER TABLE %s AUTO_INCREMENT=%d", tableName, incr)
func AlterAutoIncrement(ctx context.Context, g glue.SQLExecutor, tableName string, incr uint64) error {
var query string
logger := log.With(zap.String("table", tableName), zap.Uint64("auto_increment", incr))
if incr > math.MaxInt64 {
Contributor:

Since incr > math.MaxInt64 can only happen when one row explicitly set the max value, you should alter the auto increment value to the max value here. The behavior is then the same as importing via SQL.
Please also add a test case to ensure that the Lightning local backend can successfully import data which explicitly sets an auto_increment row value to the max valid value according to TiDB's restriction, and that after the import, any new insert statement that does not explicitly set the auto_increment value results in an error.

Contributor Author (@buchuitoudegou), Apr 21, 2022:

I'll add an integration test later. But it seems alter table xxx auto_increment=math.MaxInt64 will lead to some unexpected errors (I've talked to the guys from sql-infra; it might be a bug in TiDB, see #34142). So I won't set it until they fix this issue (leave a TODO here, or perhaps I can use alter table xxx force ...).

// automatically set max value
logger.Warn("auto_increment out of the maximum value TiDB supports, automatically set to the max", zap.Uint64("auto_increment", incr))
incr = math.MaxInt64
query = fmt.Sprintf("ALTER TABLE %s FORCE AUTO_INCREMENT=%d", tableName, incr)
} else {
query = fmt.Sprintf("ALTER TABLE %s AUTO_INCREMENT=%d", tableName, incr)
}
Contributor (@sleepymole), Apr 21, 2022:

Do we need to set auto_increment to math.MaxInt64 when incr > math.MaxInt64?

Contributor Author (@buchuitoudegou), Apr 21, 2022:

MaxInt64 is a legal value in TiDB (i.e. you can insert (9223372036854775807, ...) into a table) whereas MaxInt64+1 is illegal and will trigger the error ERROR 1467 (HY000): Failed to read auto-increment value from storage engine.

e.g.

mysql> create table test1(
    -> a bigint auto_increment,
    -> b int,
    -> primary key(a));
Query OK, 0 rows affected (0.11 sec)

mysql> alter table test1 auto_increment=9223372036854775807;
Query OK, 0 rows affected (0.12 sec)

mysql> insert into test1(b) values(1);
Query OK, 1 row affected (0.00 sec)

mysql> select * from test1;
+---------------------+------+
| a                   | b    |
+---------------------+------+
| 9223372036854775807 |    1 |
+---------------------+------+
1 row in set (0.00 sec)

Contributor:

Does incr > math.MaxInt64 mean lightning has inserted rows whose id is greater than math.MaxInt64? Will this cause a duplicate entry or data corruption?

Contributor Author:

Mostly yes. But in the case shown in the linked issue, 9223372036854775807 is a valid input; after inserting it, Lightning would try alter table xxx auto_increment=9223372036854775807+1, get a syntax error from TiDB, and fail unexpectedly.

So I'm not trying to legalize auto-incr values that exceed math.MaxInt64, but to skip this syntax error and let Lightning fail with other, clearer errors (such as data corruption, duplicate entry, etc.). As for the case in this issue, because Lightning doesn't try to write entries larger than 9223372036854775807, it will succeed.

Failing with duplicate entries might be more intuitive in terms of how effectively the error is reported. A syntax error is unhelpful...

Contributor:

Could you confirm whether data corruption or any other error is reported when using the local backend and the auto_inc value overflows?

Contributor Author (@buchuitoudegou), Apr 21, 2022:

Yes. If the auto_incr column overflows, the checksum will be mismatched.

e.g. create a table with an auto-increment key:

CREATE TABLE `test1` (
  `id` tinyint(4) NOT NULL AUTO_INCREMENT,
  `a` int(11) NOT NULL,
  PRIMARY KEY (`a`),
  UNIQUE KEY `id` (`id`)
)

insert 256 rows (where the source file contains only column "a"), and get the error:

Error: [Lighting:Restore:ErrChecksumMismatch]checksum mismatched remote vs local => (checksum: 1088876813058384307 vs 11572238353217052498) (total_kvs: 383 vs 512) (total_bytes:11996 vs 16640)
tidb lightning encountered error: [Lighting:Restore:ErrChecksumMismatch]checksum mismatched remote vs local => (checksum: 1088876813058384307 vs 11572238353217052498) (total_kvs: 383 vs 512) (total_bytes:11996 vs 16640)

Contributor:

"checksum mismatch" is hard for user to know what happened. How about checking overflow during encoding and give a clear error to user?

Contributor Author:

"checksum mismatch" is hard for user to know what happened. How about checking overflow during encoding and give a clear error to user?

Yes. Actually I'm looking at it #28776. This PR only solves the unexpected syntax error.

task := logger.Begin(zap.InfoLevel, "alter table auto_increment")
err := g.ExecuteWithLog(ctx, query, "alter table auto_increment", logger)
task.End(zap.ErrorLevel, err)
@@ -388,8 +397,16 @@ func AlterAutoIncrement(ctx context.Context, g glue.SQLExecutor, tableName strin
return errors.Annotatef(err, "%s", query)
}

func AlterAutoRandom(ctx context.Context, g glue.SQLExecutor, tableName string, randomBase int64) error {
logger := log.With(zap.String("table", tableName), zap.Int64("auto_random", randomBase))
func AlterAutoRandom(ctx context.Context, g glue.SQLExecutor, tableName string, randomBase uint64, maxAutoRandom uint64) error {
logger := log.With(zap.String("table", tableName), zap.Uint64("auto_random", randomBase))
if randomBase == maxAutoRandom+1 {
// insert a tuple with key maxAutoRandom
randomBase = maxAutoRandom
} else if randomBase > maxAutoRandom {
// TiDB does nothing when inserting an overflow value
logger.Warn("auto_random out of the maximum value TiDB supports")
return nil
}
query := fmt.Sprintf("ALTER TABLE %s AUTO_RANDOM_BASE=%d", tableName, randomBase)
task := logger.Begin(zap.InfoLevel, "alter table auto_random")
err := g.ExecuteWithLog(ctx, query, "alter table auto_random_base", logger)
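The branch added to AlterAutoRandom implements a three-way decision on the new base. A minimal Go sketch of just that decision (illustrative and outside the PR's types; `rebaseAction` is a hypothetical name):

```go
package main

import "fmt"

// rebaseAction mirrors the decision AlterAutoRandom makes for an incoming
// base: bases within range rebase as-is, a base of exactly max+1 (meaning a
// tuple with key maxAutoRandom was imported) is clamped to the max, and
// anything larger is skipped because TiDB does nothing when rebasing past
// the valid range.
func rebaseAction(randomBase, maxAutoRandom uint64) (base uint64, skip bool) {
	switch {
	case randomBase == maxAutoRandom+1:
		return maxAutoRandom, false // clamp: the max key itself exists
	case randomBase > maxAutoRandom:
		return 0, true // overflow: warn and skip the ALTER
	default:
		return randomBase, false
	}
}

func main() {
	const max = uint64(9007199254740991) // 2^53-1, signed auto_random(10)
	fmt.Println(rebaseAction(12345, max))
	fmt.Println(rebaseAction(max+1, max))
	fmt.Println(rebaseAction(max+2, max))
}
```

The order of the cases matters: the exact-boundary clamp must be checked before the general overflow test, since max+1 also satisfies randomBase > maxAutoRandom.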
19 changes: 18 additions & 1 deletion br/pkg/lightning/restore/tidb_test.go
@@ -17,6 +17,7 @@ package restore
import (
"context"
"database/sql"
"math"
"testing"

"github.com/DATA-DOG/go-sqlmock"
@@ -404,11 +405,17 @@ func TestAlterAutoInc(t *testing.T) {
s.mockDB.
ExpectExec("\\QALTER TABLE `db`.`table` AUTO_INCREMENT=12345\\E").
WillReturnResult(sqlmock.NewResult(1, 1))
s.mockDB.
ExpectExec("\\QALTER TABLE `db`.`table` FORCE AUTO_INCREMENT=9223372036854775807\\E").
WillReturnResult(sqlmock.NewResult(1, 1))
s.mockDB.
ExpectClose()

err := AlterAutoIncrement(ctx, s.tiGlue.GetSQLExecutor(), "`db`.`table`", 12345)
require.NoError(t, err)

err = AlterAutoIncrement(ctx, s.tiGlue.GetSQLExecutor(), "`db`.`table`", uint64(math.MaxInt64)+1)
require.NoError(t, err)
}

func TestAlterAutoRandom(t *testing.T) {
@@ -419,10 +426,20 @@ func TestAlterAutoRandom(t *testing.T) {
s.mockDB.
ExpectExec("\\QALTER TABLE `db`.`table` AUTO_RANDOM_BASE=12345\\E").
WillReturnResult(sqlmock.NewResult(1, 1))
s.mockDB.
ExpectExec("\\QALTER TABLE `db`.`table` AUTO_RANDOM_BASE=288230376151711743\\E").
WillReturnResult(sqlmock.NewResult(1, 1))
s.mockDB.
ExpectClose()

err := AlterAutoRandom(ctx, s.tiGlue.GetSQLExecutor(), "`db`.`table`", 12345)
err := AlterAutoRandom(ctx, s.tiGlue.GetSQLExecutor(), "`db`.`table`", 12345, 288230376151711743)
require.NoError(t, err)

// insert 288230376151711743 and try rebase to 288230376151711744
err = AlterAutoRandom(ctx, s.tiGlue.GetSQLExecutor(), "`db`.`table`", 288230376151711744, 288230376151711743)
require.NoError(t, err)

err = AlterAutoRandom(ctx, s.tiGlue.GetSQLExecutor(), "`db`.`table`", uint64(math.MaxInt64)+1, 288230376151711743)
require.NoError(t, err)
}

2 changes: 2 additions & 0 deletions br/tests/lightning_max_incr/config.toml
@@ -0,0 +1,2 @@
[tikv-importer]
backend = 'local'
1 change: 1 addition & 0 deletions br/tests/lightning_max_incr/data/db-schema-create.sql
@@ -0,0 +1 @@
create database db;
5 changes: 5 additions & 0 deletions br/tests/lightning_max_incr/data/db.test-schema.sql
@@ -0,0 +1,5 @@
create table test(
a bigint auto_increment,
b int,
primary key(a)
);
3 changes: 3 additions & 0 deletions br/tests/lightning_max_incr/data/db.test.000000000.csv
@@ -0,0 +1,3 @@
"a","b"
1,2
9223372036854775805,3
5 changes: 5 additions & 0 deletions br/tests/lightning_max_incr/data/db.test1-schema.sql
@@ -0,0 +1,5 @@
create table test1(
a bigint auto_increment,
b int,
primary key(a)
);
3 changes: 3 additions & 0 deletions br/tests/lightning_max_incr/data/db.test1.000000000.csv
@@ -0,0 +1,3 @@
"a","b"
1,2
9223372036854775807,3
Contributor:

I think 2^53 should be a good value here to show that the max auto value is far smaller than 2^63-1, as expected.

Contributor Author:

Why is 2^53 better? Though 2^63-1 is larger than the max_auto_random_base, it's still a valid value. Perhaps I could add a check to see whether we can keep inserting tuples into test1, to prove auto_random_base is set to the correct value so that the table remains available and fulfills its constraints.

Contributor (@sleepymole), Apr 27, 2022:

Just a reminder: we should compare the increment bits, not the final value. E.g. (2^54)+1 is less than max_auto_random_base.

Contributor:

As 2^63-1 is the max value of int64, it's obvious that any further allocation will overflow. But the available allocation range for auto_random is much smaller, so inserting this value and getting a panic can't fully prove the logic is correct; e.g. an implementation identical to auto increment would also pass this check. BTW, you should use a different test case to show the difference between auto_increment and auto_random.

Contributor Author:

Fix in 17b9c33

52 changes: 52 additions & 0 deletions br/tests/lightning_max_incr/run.sh
@@ -0,0 +1,52 @@
#!/bin/sh
#
# Copyright 2022 PingCAP, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

set -eux

check_cluster_version 4 0 0 'local backend' || exit 0

ENGINE_COUNT=6

check_result() {
run_sql 'SHOW DATABASES;'
check_contains 'Database: db';
run_sql 'SHOW TABLES IN db;'
check_contains 'Tables_in_db: test'
check_contains 'Tables_in_db: test1'
run_sql 'SELECT count(*) FROM db.test;'
check_contains 'count(*): 2'
run_sql 'SELECT count(*) FROM db.test1;'
check_contains 'count(*): 2'
}

cleanup() {
rm -f $TEST_DIR/lightning.log
rm -rf $TEST_DIR/sst
run_sql 'DROP DATABASE IF EXISTS db;'
}

cleanup

# db.test contains a key less than the max int64 value
# while db.test1 contains a key equal to the max int64 value
run_lightning --sorted-kv-dir "$TEST_DIR/sst" --config "tests/$TEST_NAME/config.toml" --log-file "$TEST_DIR/lightning.log"
check_result
# successful insert: db.test's max key has not reached the maximum
run_sql 'INSERT INTO db.test(b) VALUES(11);'
# failed insert: db.test1 already contains the max int64 key
run_sql 'INSERT INTO db.test1(b) VALUES(22);' 2>&1 | tee -a "$TEST_DIR/sql_res.$TEST_NAME.txt"
check_contains 'ERROR'
cleanup
2 changes: 2 additions & 0 deletions br/tests/lightning_max_random/config.toml
@@ -0,0 +1,2 @@
[tikv-importer]
backend = 'local'
1 change: 1 addition & 0 deletions br/tests/lightning_max_random/data/db-schema-create.sql
@@ -0,0 +1 @@
create database db;
5 changes: 5 additions & 0 deletions br/tests/lightning_max_random/data/db.test-schema.sql
@@ -0,0 +1,5 @@
create table test(
a bigint auto_random(10),
b int,
primary key(a)
);
3 changes: 3 additions & 0 deletions br/tests/lightning_max_random/data/db.test.000000000.csv
@@ -0,0 +1,3 @@
"a","b"
1,2
9007199254740990,3
5 changes: 5 additions & 0 deletions br/tests/lightning_max_random/data/db.test1-schema.sql
@@ -0,0 +1,5 @@
create table test1(
a bigint auto_random(10),
b int,
primary key(a)
);
3 changes: 3 additions & 0 deletions br/tests/lightning_max_random/data/db.test1.000000000.csv
@@ -0,0 +1,3 @@
"a","b"
1,2
9007199254740991,3
5 changes: 5 additions & 0 deletions br/tests/lightning_max_random/data/db.test2-schema.sql
@@ -0,0 +1,5 @@
create table test2(
a bigint auto_random(10),
b int,
primary key(a)
);
3 changes: 3 additions & 0 deletions br/tests/lightning_max_random/data/db.test2.000000000.csv
@@ -0,0 +1,3 @@
"a","b"
1,2
9007199254740992,3
65 changes: 65 additions & 0 deletions br/tests/lightning_max_random/run.sh
@@ -0,0 +1,65 @@
#!/bin/sh
#
# Copyright 2022 PingCAP, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

set -eux

check_cluster_version 4 0 0 'local backend' || exit 0

ENGINE_COUNT=6

check_result() {
run_sql 'SHOW DATABASES;'
check_contains 'Database: db';
run_sql 'SHOW TABLES IN db;'
check_contains 'Tables_in_db: test'
check_contains 'Tables_in_db: test1'
check_contains 'Tables_in_db: test2'
run_sql 'SELECT count(*) FROM db.test;'
check_contains 'count(*): 2'
run_sql 'SELECT count(*) FROM db.test1;'
check_contains 'count(*): 2'
run_sql 'SELECT count(*) FROM db.test2;'
check_contains 'count(*): 2'
}

cleanup() {
rm -f $TEST_DIR/lightning.log
rm -rf $TEST_DIR/sst
run_sql 'DROP DATABASE IF EXISTS db;'
}

cleanup

# auto_random_max = 2^{64-1-10}-1
# db.test contains key auto_random_max - 1
# db.test1 contains key auto_random_max
# db.test2 contains key auto_random_max + 1 (overflow)
run_lightning --sorted-kv-dir "$TEST_DIR/sst" --config "tests/$TEST_NAME/config.toml" --log-file "$TEST_DIR/lightning.log"
check_result
# successful insert: db.test's auto_random key has not reached the maximum
run_sql 'INSERT INTO db.test(b) VALUES(11);'
# further insert fails: the allocator is now exhausted
run_sql 'INSERT INTO db.test(b) VALUES(22);' 2>&1 | tee -a "$TEST_DIR/sql_res.$TEST_NAME.txt"
check_contains 'ERROR'
# fail: db.test1 has key auto_random_max
run_sql 'INSERT INTO db.test1(b) VALUES(11);'
run_sql 'INSERT INTO db.test1(b) VALUES(22);' 2>&1 | tee -a "$TEST_DIR/sql_res.$TEST_NAME.txt"
check_contains 'ERROR'
# inserts succeed: db.test2's overflowed key did not rebase the allocator
run_sql 'INSERT INTO db.test2(b) VALUES(33);'
run_sql 'INSERT INTO db.test2(b) VALUES(44);'
run_sql 'INSERT INTO db.test2(b) VALUES(55);'
cleanup
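The CSV fixtures in this test sit exactly on the allocator boundary. A small Go sketch (assuming a signed bigint with 10 shard bits, as in the schemas above) recovers the three values:

```go
package main

import "fmt"

func main() {
	const shardBits = 10
	// signed bigint: 63 usable bits, 10 reserved for the shard
	max := uint64(1)<<(63-shardBits) - 1 // auto_random_max = 2^53-1

	fmt.Println(max - 1) // db.test:  9007199254740990 — rebase succeeds, one id left
	fmt.Println(max)     // db.test1: 9007199254740991 — rebase clamps to the max
	fmt.Println(max + 1) // db.test2: 9007199254740992 — overflow, the ALTER is skipped
}
```

These three values are why the script expects one more successful insert on db.test, an immediate error on db.test1, and unhindered inserts on db.test2.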