---
title: TiDB 6.2.0 Release Notes
---
Release date: August 23, 2022
TiDB version: 6.2.0-DMR
Note:
The TiDB 6.2.0-DMR documentation has been archived. PingCAP encourages you to use the latest LTS version of the TiDB database.
In v6.2.0-DMR, the key new features and improvements are as follows:
- TiDB Dashboard supports visual execution plans, allowing more intuitive display of execution plans.
- Add a Monitoring page in TiDB Dashboard to make performance analysis and tuning more efficient.
- The Lock View feature supports showing the waiting information of optimistic transactions, facilitating quick identification of lock conflicts.
- TiFlash supports a newer version of storage format, enhancing stability and performance.
- The Fine Grained Shuffle feature allows parallel execution of window functions in multiple threads.
- A new concurrent DDL framework: fewer DDL statements are blocked, and execution is more efficient.
- TiKV supports automatically tuning the CPU usage, thus ensuring stable and efficient database operations.
- Point-in-time recovery (PITR) is introduced to restore a snapshot of a TiDB cluster to a new cluster from any given time point in the past.
- TiDB Lightning supports pausing the scheduling on the table level in the physical import mode, instead of on the cluster level.
- BR supports restoring user and privilege data, making backup and restore smoother.
- TiCDC unlocks more data replication scenarios by supporting filtering specific types of DDL events.
- The `SAVEPOINT` mechanism is supported, with which you can flexibly control the rollback points within a transaction.
- TiDB supports adding, dropping, and modifying multiple columns or indexes with only one `ALTER TABLE` statement.
- Cross-cluster RawKV replication is now supported.
**New features**

- **The physical data compaction feature is GA**
The TiFlash backend automatically compacts physical data based on specific conditions to reduce the backlog of useless data and optimize the data storage structure.
There is often a certain amount of useless data in TiFlash tables before data compaction is automatically triggered. This feature lets you choose the right timing and manually execute SQL statements to immediately compact the physical data in TiFlash, thus reducing storage space usage and improving query performance. This feature was experimental in TiDB v6.1.0 and becomes generally available (GA) in TiDB v6.2.0.
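For example, you can trigger compaction manually with the `ALTER TABLE ... COMPACT` statement (a minimal sketch; `employees` is a hypothetical table with a TiFlash replica):

```sql
-- Immediately compact the TiFlash replica of one table,
-- instead of waiting for background compaction to be triggered.
ALTER TABLE employees COMPACT TIFLASH REPLICA;
```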
- **Split TiDB Dashboard from PD**
TiDB Dashboard is moved from PD to the monitoring node. This reduces the impact of TiDB Dashboard on PD and makes PD more stable.
- **TiDB Dashboard adds a Monitoring page**
The new Monitoring page shows key indicators required for performance tuning, based on which you can analyze and tune performance with reference to Performance tuning by database time.
Specifically, you can analyze user response time and database time from a global and top-down perspective, to confirm whether the bottleneck in user response time is caused by database issues. If the bottleneck is in the database, you can use the database time overview and SQL latency breakdowns to identify the bottleneck and tune performance.
- **TiDB Dashboard supports visual execution plans**
TiDB Dashboard provides visual execution plans and basic diagnosis service through the SQL Statements and Monitoring pages. This feature offers a fresh new perspective for you to identify each step of a query plan. Therefore, you can learn all traces of query execution plans more intuitively.
This feature is particularly useful when you are trying to learn the execution of complex and large queries. Meanwhile, for each query execution plan, TiDB Dashboard automatically analyzes the execution details, spots potential problems, and provides optimization suggestions to reduce the time required for executing specific query plans.
- **Lock View supports showing the waiting information of optimistic transactions**
Too many lock conflicts might cause serious performance problems, and detecting lock conflicts is a necessary step in troubleshooting such problems. Before v6.2.0, TiDB supported viewing the lock conflict relationships using the `INFORMATION_SCHEMA.DATA_LOCK_WAITS` system view, but it did not show the waiting information of optimistic transactions. TiDB v6.2.0 extends the `DATA_LOCK_WAITS` view and lists the optimistic transactions blocked by pessimistic locks in the view. This feature helps users detect lock conflicts quickly and provides a basis for improving the application, thus reducing the frequency of lock conflicts and improving overall performance.
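A minimal sketch of inspecting current lock waits, which now include optimistic transactions blocked by pessimistic locks:

```sql
-- Each row describes one waiting transaction and the transaction
-- currently holding the lock on the same key.
SELECT * FROM INFORMATION_SCHEMA.DATA_LOCK_WAITS;
```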
- **Improve the `LEADING` optimizer hint to support outer join ordering**

The optimizer hint `LEADING` was introduced in v6.1.0 to modify the join order of tables, but it was not applicable to queries that contain outer joins (for more information, see the `LEADING` document). In v6.2.0, TiDB lifts this restriction: in a query that contains an outer join, you can now use this hint to specify the join order of tables, to get better SQL execution performance and to avoid sudden changes of execution plans.
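A minimal sketch with hypothetical tables `t1`, `t2`, and `t3`:

```sql
-- Specify the join order in a query that contains an outer join,
-- which the LEADING hint did not support before v6.2.0.
SELECT /*+ LEADING(t1, t2) */ *
FROM t1 LEFT JOIN t2 ON t1.id = t2.t1_id
        LEFT JOIN t3 ON t2.id = t3.t2_id;
```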
- **Add a new optimizer hint `SEMI_JOIN_REWRITE` to improve the performance of `EXISTS` queries**

In some scenarios, queries with `EXISTS` cannot get the optimal execution plan and might run for too long. In v6.2.0, the optimizer adds rewriting rules for such scenarios, and you can use `SEMI_JOIN_REWRITE` in queries to force the optimizer to rewrite the query and get better query performance.
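A minimal sketch with hypothetical tables `t` and `s`; the hint is placed inside the `EXISTS` subquery:

```sql
-- Ask the optimizer to rewrite the semi-join so that a better
-- execution plan (for example, a hash join) can be chosen.
SELECT * FROM t
WHERE EXISTS (SELECT /*+ SEMI_JOIN_REWRITE() */ 1 FROM s WHERE s.a = t.a);
```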
- **Add a new optimizer hint `MERGE` to improve the performance of analytical queries**

Common table expressions (CTEs) are an effective way to simplify query logic and are widely used to write complex queries. Before v6.2.0, CTEs could not be automatically expanded in TiFlash environments, which, to some extent, limited the execution efficiency of MPP. In v6.2.0, the MySQL-compatible optimizer hint `MERGE` is introduced. With this hint, the optimizer allows a CTE to be expanded inline, so that consumers of the CTE query results can execute the query concurrently in TiFlash, which improves the performance of some analytical queries.
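A minimal sketch with a hypothetical table `sales`; the hint is placed inside the CTE definition:

```sql
-- Allow the CTE to be expanded inline so that its consumers can run
-- concurrently in TiFlash instead of materializing the CTE first.
WITH region_totals AS (
    SELECT /*+ MERGE() */ region, SUM(amount) AS total
    FROM sales
    GROUP BY region
)
SELECT region, total FROM region_totals WHERE total > 1000;
```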
- **Optimize the performance of aggregation operations in some analytical scenarios**

When you use TiFlash to perform aggregation on a column in an OLAP scenario, if serious data skew exists due to the uneven distribution of the aggregated column, and the aggregated column has many distinct values, `COUNT(DISTINCT)` queries on that column execute inefficiently. In v6.2.0, new rewriting rules are introduced to improve the performance of `COUNT(DISTINCT)` queries on a single column, as shown in the sketch below.
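This rewrite is controlled by the `tidb_opt_skew_distinct_agg` variable described in the system variable table below. A minimal sketch with a hypothetical table `t`:

```sql
-- Enable the two-level aggregate rewrite for skewed COUNT(DISTINCT).
SET SESSION tidb_opt_skew_distinct_agg = ON;
-- Internally rewritten to something close to:
--   SELECT b, COUNT(a) FROM (SELECT b, a FROM t GROUP BY b, a) t GROUP BY b;
SELECT b, COUNT(DISTINCT a) FROM t GROUP BY b;
```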
- **TiDB supports concurrent DDL operations**
TiDB v6.2.0 introduces a new concurrent DDL framework, which enables DDL statements to be concurrently executed on different table objects and fixes the issue that DDL operations are blocked by DDL operations on other tables. In addition, TiDB supports concurrent DDL execution when adding an index on multiple tables or changing a column type. This improves the efficiency of DDL execution.
- **Optimizer enhances the estimation of string matching**
In string matching scenarios, if the optimizer cannot accurately estimate the number of rows, the generation of the optimal execution plan is affected, for example, when the condition is `like '%xyz'` or uses a regular expression such as `regexp()`. To improve the estimation accuracy in such scenarios, TiDB v6.2.0 enhances the estimation method. The new method combines the TopN information of statistics with system variables to improve accuracy, and makes it possible to modify the match selectivity manually (see the sketch below), thus improving SQL performance.
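A minimal sketch with a hypothetical table `logs`, using the `tidb_default_string_match_selectivity` variable described in the system variable table below:

```sql
-- Lower the assumed selectivity of LIKE '%...%' filters so that the
-- optimizer estimates fewer matching rows for this session.
SET SESSION tidb_default_string_match_selectivity = 0.1;
SELECT * FROM logs WHERE msg LIKE '%timeout%';
```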
- **Window functions pushed down to TiFlash can be executed in multiple threads**
After the Fine Grained Shuffle feature is enabled, window functions can be executed in multiple threads instead of a single thread. This reduces query response time significantly without changing user behavior. You can control the granularity of the shuffle by adjusting the `tiflash_fine_grained_shuffle_stream_count` and `tiflash_fine_grained_shuffle_batch_size` variables described below.
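A minimal sketch with a hypothetical table `sales`:

```sql
-- Run the window function in 8 TiFlash threads per shuffle.
SET SESSION tiflash_fine_grained_shuffle_stream_count = 8;
SELECT region, amount,
       ROW_NUMBER() OVER (PARTITION BY region ORDER BY amount DESC) AS rk
FROM sales;
```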
- **TiFlash supports a newer version of storage format**
The new storage format relieves high CPU usage caused by GC in high-concurrency and heavy workload scenarios. This significantly reduces IO traffic of background tasks, thereby boosting stability under high concurrencies and heavy workloads. At the same time, space amplification and disk waste can be significantly reduced.
In TiDB v6.2.0, data is stored in the new storage format by default. Note that if TiFlash is upgraded from earlier versions to v6.2.0, you cannot perform in-place downgrade on TiFlash, because earlier TiFlash versions cannot recognize the new storage format.
For more information about upgrading TiFlash, see TiFlash Upgrade Guide.
- **TiFlash optimizes data scanning performance in multiple concurrency scenarios (experimental)**
TiFlash merges read operations on the same data to reduce duplicate reads and to optimize resource overhead when multiple concurrent tasks are running, thus improving data scanning performance. This avoids the situation where each task has to read the same data separately, or the same data is read multiple times at the same moment, when multiple concurrent tasks involve the same data.
- **TiFlash adds FastScan for data scanning, increasing read and write speed by sacrificing data consistency (experimental)**
Previously, to ensure data consistency, TiFlash needed to perform consistency checks during data scanning to find the required data among multiple versions. TiDB v6.2.0 introduces FastScan, which skips these consistency checks to increase scanning speed significantly. FastScan is suitable for scenarios that do not require high accuracy and consistency of data, such as offline analysis tasks.
When you upgrade from an earlier version to TiDB v6.2.0, FastScan is not enabled by default for any table, which ensures data consistency. You can enable FastScan for each table independently, as shown in the sketch below. If a table is set to FastScan in TiDB v6.2.0, FastScan is disabled when you downgrade to an earlier version, but this does not affect normal data reads; in that case, reads are equivalent to strongly consistent reads.
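A minimal sketch, assuming the v6.2.0 table-level FastScan syntax and a hypothetical table `events`:

```sql
-- Trade consistency checks for scan speed on this table only.
ALTER TABLE events SET TIFLASH MODE FASTSCAN;
-- Switch the table back to strongly consistent reads.
ALTER TABLE events SET TIFLASH MODE NORMAL;
```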
- **TiKV supports automatically tuning the CPU usage (experimental)**
Databases usually have background processes that perform internal operations, such as collecting statistical information to help identify performance problems, generate better execution plans, and improve the stability and performance of the database. However, how to collect such information efficiently, and how to balance the resource overhead of background operations against foreground operations without affecting daily use, has long been a headache in the database industry.
Starting from v6.2.0, you can set the CPU usage rate of background requests in the TiKV configuration file, which limits the CPU usage ratio of background operations such as automatic statistics collection in TiKV, and prevents background operations from preempting the resources of user operations in extreme cases. This ensures that the database runs stably and efficiently.
In addition, TiKV supports automatically adjusting the CPU usage: it adaptively adjusts the CPU resources occupied by background requests according to the CPU usage of the instance. This feature is disabled by default.
- **TiKV supports listing detailed configuration information using command-line flags**
The TiKV configuration file can be used to manage TiKV instances. However, for instances that run for a long time and are managed by different users, it is difficult to know which configuration items have been modified and what their default values are. This might cause confusion when you upgrade the cluster or migrate data. Since TiDB v6.2.0, tikv-server supports a new command-line flag `--config-info` that lists the default and current values of all TiKV configuration items, which helps users quickly verify the startup parameters of the TiKV process and improves usability.
- **TiDB supports modifying multiple columns or indexes in a single `ALTER TABLE` statement**

Before v6.2.0, TiDB only supported single DDL changes, which led to incompatible DDL operations when migrating heterogeneous databases, and it took extra effort to rewrite a complex DDL statement into multiple TiDB-supported simple DDL statements. In addition, some users rely on ORM frameworks to assemble SQL, which caused SQL incompatibility. Since v6.2.0, TiDB supports modifying multiple schema objects in a single SQL statement, which makes SQL easier to implement and improves usability.
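A minimal sketch with a hypothetical table `t`:

```sql
-- Several schema changes that previously required separate DDL
-- statements can now be combined into one ALTER TABLE.
ALTER TABLE t
    ADD COLUMN created_at DATETIME,
    MODIFY COLUMN name VARCHAR(128),
    ADD INDEX idx_name (name),
    DROP COLUMN obsolete_flag;
```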
- **Support setting savepoints in transactions**
A transaction is a logical collection of consecutive operations for which the database guarantees ACID properties. In some complex application scenarios, you might need to manage many operations in a transaction, and sometimes you need to roll back only some of the operations. "Savepoint" is a nameable mechanism inside transactions: with it, you can flexibly control the rollback points within a transaction, thereby managing more complex transactions and having more freedom in designing diverse applications.
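A minimal sketch with a hypothetical table `t`:

```sql
BEGIN;
INSERT INTO t VALUES (1);
SAVEPOINT sp1;
INSERT INTO t VALUES (2);
ROLLBACK TO SAVEPOINT sp1;  -- undoes only the second INSERT
COMMIT;                     -- the row (1) is committed
```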
- **BR supports restoring user and privilege data**

BR supports restoring user and privilege data when it performs a normal restoration. You do not need any additional restoration plan to restore user and privilege data. To enable this feature, specify the `--with-sys-table` parameter when you use BR to restore data.
- **Support point-in-time recovery (PITR) based on backup and restoration of log and snapshot**
PITR is implemented based on the backup and restoration of log and snapshot. It allows you to restore the snapshots of a cluster at any point in history to a new cluster. This feature satisfies the following needs:
- Reduce the RPO in disaster recovery to less than 20 minutes.
- Handle the cases of incorrect writes from applications by, for example, rolling back data to before the error event.
- Perform historical data auditing to meet the requirements of laws and regulations.
This feature has usage limitations. For details, refer to the user document.
- **DM supports continuous data validation (experimental)**
Continuous data validation is used to continuously compare the upstream binlog with the data written into the downstream during data migration. The validator identifies data exceptions, such as inconsistent data and missing records.
This feature solves the issues of lagging validation and excessive resource consumption in common full data validation schemes.
- **Automatically identify the region of Amazon S3 buckets**
Data migration tasks can automatically identify the region of Amazon S3 buckets. You do not need to explicitly pass the region parameter.
- **Support configuring disk quota for TiDB Lightning (experimental)**
When TiDB Lightning imports data in the physical import mode (`backend='local'`), `sorted-kv-dir` must have enough space to store the source data. Insufficient disk space might cause the import task to fail. Now you can use the new `disk_quota` configuration to limit the total amount of disk space used by TiDB Lightning, so that the import task can be completed normally even when `sorted-kv-dir` does not have enough storage space.
- **TiDB Lightning supports importing data to production clusters in the physical import mode**
Previously, the physical import mode of TiDB Lightning (`backend='local'`) had a significant impact on the target cluster. For example, during the migration, PD global scheduling was paused. Therefore, the previous physical import mode was only suitable for initial data import.
TiDB Lightning improves the existing physical import mode: by pausing scheduling only for the target tables rather than the whole cluster, the impact of an import is reduced from the cluster level to the table level. That is, you can read and write tables that are not being imported.
This feature does not need manual configuration. If your TiDB cluster is v6.1.0 or later versions and TiDB Lightning is v6.2.0 or later versions, the new physical import mode takes effect automatically.
- **Refactor the user documentation of TiDB Lightning** to make its structure more reasonable and clear. The terms for "backend" are also modified to lower the understanding barrier for new users:
- Replace "local backend" with "physical import mode".
- Replace "tidb backend" with "logical import mode".
- **Support cross-cluster RawKV replication (experimental)**
Support subscribing to the data changes of RawKV and replicating them to a downstream TiKV cluster in real time using the new component TiKV-CDC, which makes cross-cluster replication possible.
- **Support filtering DDL and DML events**
In some special scenarios, you might want to set filter rules for incremental data change logs, for example, to filter high-risk DDL events such as `DROP TABLE`. Starting from v6.2.0, TiCDC supports filtering DDL events of specified types and filtering DML events based on SQL expressions. This makes TiCDC applicable to more data replication scenarios.
**System variables**

Variable name | Change type | Description |
---|---|---|
`tidb_enable_new_cost_interface` | Newly added | This variable controls whether to enable the refactored Cost Model implementation. |
`tidb_cost_model_version` | Newly added | TiDB uses a cost model to choose an index and operator during physical optimization. This variable is used to select the cost model version. TiDB v6.2.0 introduces Cost Model Version 2, which is more accurate than the previous version in internal tests. |
`tidb_enable_concurrent_ddl` | Newly added | This variable controls whether to allow TiDB to use concurrent DDL statements. DO NOT modify this variable; the risk of disabling it is unknown, and doing so might corrupt the metadata of the cluster. |
`tiflash_fine_grained_shuffle_stream_count` | Newly added | This variable controls the concurrency level of window function execution when the window function is pushed down to TiFlash. |
`tiflash_fine_grained_shuffle_batch_size` | Newly added | When Fine Grained Shuffle is enabled, the window function pushed down to TiFlash can be executed in parallel. This variable controls the batch size of the data sent by the sender: the sender sends data once the cumulative number of rows exceeds this value. |
`tidb_default_string_match_selectivity` | Newly added | This variable is used to set the default selectivity of the `like`, `rlike`, and `regexp` functions in filter conditions when estimating the number of rows. It also controls whether to enable TopN to help estimate these functions. |
`tidb_enable_analyze_snapshot` | Newly added | This variable controls whether to read historical data or the latest data when performing `ANALYZE`. |
`tidb_generate_binary_plan` | Newly added | This variable controls whether to generate binary-encoded execution plans in slow logs and statement summaries. |
`tidb_opt_skew_distinct_agg` | Newly added | This variable sets whether the optimizer rewrites aggregate functions with `DISTINCT` to two-level aggregate functions, such as rewriting `SELECT b, COUNT(DISTINCT a) FROM t GROUP BY b` to `SELECT b, COUNT(a) FROM (SELECT b, a FROM t GROUP BY b, a) t GROUP BY b`. |
`tidb_enable_noop_variables` | Newly added | This variable controls whether to show noop variables in the result of `SHOW [GLOBAL] VARIABLES`. |
`tidb_min_paging_size` | Newly added | This variable is used to set the minimum number of rows during the coprocessor paging request process. |
`tidb_txn_commit_batch_size` | Newly added | This variable is used to control the batch size of transaction commit requests that TiDB sends to TiKV. |
`tidb_enable_change_multi_schema` | Deleted | This variable was used to control whether multiple columns or indexes could be altered in one `ALTER TABLE` statement. |
`tidb_enable_outer_join_reorder` | Modified | This variable controls whether the Join Reorder algorithm of TiDB supports Outer Join. In v6.1.0, the default value is `ON`, which means the Join Reorder's support for Outer Join is enabled by default. From v6.2.0, the default value is `OFF`, which means the support is disabled by default. |
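For example, a minimal sketch of opting in to the new cost model from the table above:

```sql
-- Check the current cost model version, then switch to Version 2.
SHOW VARIABLES LIKE 'tidb_cost_model_version';
SET GLOBAL tidb_cost_model_version = 2;
```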
**Configuration file parameters**

Configuration file | Configuration | Change type | Description |
---|---|---|---|
TiDB | `feedback-probability` | Deleted | This configuration is no longer effective and is not recommended. |
TiDB | `query-feedback-limit` | Deleted | This configuration is no longer effective and is not recommended. |
TiKV | `server.simplify-metrics` | Newly added | This configuration specifies whether to simplify the returned monitoring metrics. |
TiKV | `quota.background-cpu-time` | Newly added | This configuration specifies the soft limit on the CPU resources used by TiKV background operations to process read and write requests. |
TiKV | `quota.background-write-bandwidth` | Newly added | This configuration specifies the soft limit on the bandwidth with which background transactions write data (not effective currently). |
TiKV | `quota.background-read-bandwidth` | Newly added | This configuration specifies the soft limit on the bandwidth with which background transactions and the Coprocessor read data (not effective currently). |
TiKV | `quota.enable-auto-tune` | Newly added | This configuration specifies whether to enable the auto-tuning of quota. If enabled, TiKV dynamically adjusts the quota for background requests based on the load of TiKV instances. |
TiKV | `rocksdb.enable-pipelined-commit` | Deleted | This configuration is no longer effective. |
TiKV | `gc-merge-rewrite` | Deleted | This configuration is no longer effective. |
TiKV | `log-backup.enable` | Newly added | This configuration controls whether to enable log backup on TiKV. |
TiKV | `log-backup.file-size-limit` | Newly added | This configuration specifies the size limit on log backup data. Once this limit is reached, data is automatically flushed to external storage. |
TiKV | `log-backup.initial-scan-pending-memory-quota` | Newly added | This configuration specifies the quota of cache used for storing incremental scan data. |
TiKV | `log-backup.max-flush-interval` | Newly added | This configuration specifies the maximum interval for writing backup data to external storage in log backup. |
TiKV | `log-backup.initial-scan-rate-limit` | Newly added | This configuration specifies the rate limit on throughput in an incremental data scan in log backup. |
TiKV | `log-backup.num-threads` | Newly added | This configuration specifies the number of threads used in log backup. |
TiKV | `log-backup.temp-path` | Newly added | This configuration specifies the temporary path to which log files are written before being flushed to external storage. |
TiKV | `rocksdb.defaultcf.format-version` | Newly added | The format version of SST files. |
TiKV | `rocksdb.writecf.format-version` | Newly added | The format version of SST files. |
TiKV | `rocksdb.lockcf.format-version` | Newly added | The format version of SST files. |
PD | `replication-mode.dr-auto-sync.wait-async-timeout` | Deleted | This configuration does not take effect and is deleted. |
PD | `replication-mode.dr-auto-sync.wait-sync-timeout` | Deleted | This configuration does not take effect and is deleted. |
TiFlash | `storage.format_version` | Modified | The default value of `format_version` changes to `4`, the default format for v6.2.0 and later versions, which reduces write amplification and background task resource consumption. |
TiFlash | `profiles.default.dt_enable_read_thread` | Newly added | This configuration controls whether to use the thread pool to handle read requests from the storage engine. The default value is `false`. |
TiFlash | `profiles.default.dt_page_gc_threshold` | Newly added | This configuration specifies the minimum ratio of valid data in a PageStorage data file. |
TiCDC | `--overwrite-checkpoint-ts` | Newly added | This parameter is added to the `cdc cli changefeed resume` sub-command. |
TiCDC | `--no-confirm` | Newly added | This parameter is added to the `cdc cli changefeed resume` sub-command. |
DM | `mode` | Newly added | This configuration is a validator parameter. Optional values are `full`, `fast`, and `none`. The default value is `none`, which means data is not validated. |
DM | `worker-count` | Newly added | This configuration is a validator parameter and specifies the number of validation workers in the background. The default value is `4`. |
DM | `row-error-delay` | Newly added | This configuration is a validator parameter. If a row is not validated within the specified time, it is marked as an error row. The default value is `30m`, which means 30 minutes. |
TiDB Lightning | `tikv-importer.store-write-bwlimit` | Newly added | This configuration determines the write bandwidth when TiDB Lightning writes data to each TiKV store. The default value is `0`, which means the bandwidth is not limited. |
TiDB Lightning | `tikv-importer.disk-quota` | Newly added | This configuration limits the storage space used by TiDB Lightning. |
**Others**

- TiFlash `format_version` cannot be downgraded from `4` to `3`. For details, see TiFlash Upgrade Guide.
- In v6.2.0 and later versions, it is strongly recommended to keep the default value `false` of `dt_enable_logical_split` and not to change it to `true`. For details, see known issue #5576.
- If the backup cluster has a TiFlash replica, after you perform PITR, the restored cluster does not contain the data in the TiFlash replica. To restore data from the TiFlash replica, you need to manually configure TiFlash replicas. Executing the `exchange partition` DDL statement might result in a failure of PITR. If the upstream database uses TiDB Lightning's physical import mode to import data, the data cannot be backed up in log backup, so it is recommended to perform a full backup after the data import. For other compatibility issues of PITR, see PITR limitations.
- Since TiDB v6.2.0, you can restore tables in the `mysql` schema by specifying the parameter `--with-sys-table=true` when restoring data.
- When you execute the `ALTER TABLE` statement to add, drop, or modify multiple columns or indexes, TiDB checks table consistency by comparing the table before and after statement execution, regardless of the changes in the same DDL statement. The execution order of the DDLs is not fully compatible with MySQL in some scenarios.
- If the TiDB component is v6.2.0 or later, the TiKV component should not be earlier than v6.2.0.
- TiKV adds a configuration item `split.region-cpu-overload-threshold-ratio` that supports dynamic configuration.
- Slow query logs, `information_schema.statements_summary`, and `information_schema.slow_query` can export `binary_plan`, that is, execution plans encoded in the binary format.
- Two columns are added to the `SHOW TABLE ... REGIONS` statement: `SCHEDULING_CONSTRAINTS` and `SCHEDULING_STATE`, which respectively indicate the Region scheduling constraints in Placement in SQL and the current scheduling state (see the sketch after this list).
- Since TiDB v6.2.0, you can capture data changes of RawKV via TiKV-CDC.
- When `ROLLBACK TO SAVEPOINT` is used to roll back a transaction to a specified savepoint, MySQL releases the locks held only after the specified savepoint, while in a TiDB pessimistic transaction, TiDB does not immediately release the locks held after the specified savepoint. Instead, TiDB releases all locks when the transaction is committed or rolled back.
- Since TiDB v6.2.0, the `SELECT tidb_version()` statement also returns the Store type (`tikv` or `unistore`).
- TiDB no longer has hidden system variables.
- TiDB v6.2.0 introduces two new system tables:
    - `INFORMATION_SCHEMA.VARIABLES_INFO`: used for viewing information about TiDB system variables.
    - `PERFORMANCE_SCHEMA.SESSION_VARIABLES`: used for viewing information about TiDB session-level system variables.
**Deprecated features**

- Since TiDB v6.2.0, backing up and restoring RawKV using BR is deprecated.
**Improvements**

- TiDB

    - Support the `SHOW COUNT(*) WARNINGS` and `SHOW COUNT(*) ERRORS` statements #25068 @likzn
    - Add validation checks for some system variables #35048 @morgo
    - Optimize the error messages for some type conversions #32447 @fanrenhoo
    - Make the output of `SHOW TABLES/DATABASES LIKE …` more MySQL-compatible. The column names in the output contain the `LIKE` value #35116 @likzn
    - Improve the performance of JSON-related functions #35859 @wjhuang2016
    - Improve the verification speed of password login using SHA-2 #35998 @virusdefender
    - Optimize the Coprocessor communication protocol. This can greatly reduce the memory consumption of TiDB processes when reading data, and further alleviate the OOM issue in the scenario of scanning tables and exporting data by Dumpling. The system variable `tidb_enable_paging` is introduced to control whether to enable this communication protocol (with the scope of SESSION or GLOBAL). This protocol is disabled by default. To enable it, set the variable value to `true` #35633 @tiancaiamao @wshwsh12
    - Optimize the accuracy of memory tracking for some operators (HashJoin, HashAgg, Update, Delete) (#35634, #35631, #35635 @wshwsh12) (#34096 @ekexium)
    - The system table `INFORMATION_SCHEMA.DATA_LOCK_WAITS` supports recording the locking information of optimistic transactions #34609 @longfangsong
    - Add some monitoring metrics for transactions #34456 @longfangsong
- TiKV

    - Support compressing the metrics response using gzip to reduce the HTTP body size #12355 @glorv
    - Improve the readability of the TiKV panel in Grafana Dashboard #12007 @kevin-xianliu
    - Optimize the commit pipeline performance of the Apply operator #12898 @ethercflow
    - Support dynamically modifying the number of sub-compaction operations performed concurrently in RocksDB (`rocksdb.max-sub-compactions`) #13145 @ethercflow

- PD

- TiFlash

    - Refine error handling of the TiFlash MPP engine, thereby enhancing stability #5095 @windtalker @yibin87
    - Optimize the comparison and sorting of UTF8_BIN and UTF8MB4_BIN collations #5294 @solotzg

- Tools

    - Backup & Restore (BR)

        - Adjust the backup data directory structure to fix backup failure caused by S3 rate limiting in large cluster backup #30087 @MoCuishle28

    - TiCDC

    - TiDB Lightning

    - TiUP
**Bug fixes**

- TiDB

    - Fix the issue that a partition is incorrectly pruned if a partition key is used in the query condition and the collate is different from the one in the query partition table #32749 @mjonss
    - Fix the issue that `SET ROLE` cannot match the granted role if there are capital letters in the host #33061 @morgo
    - Fix the issue that columns with `auto_increment` cannot be dropped #34891 @Defined2014
    - Fix the issue that `SHOW CONFIG` shows some configuration items that have been removed #34867 @morgo
    - Fix the issue that `SHOW DATABASES LIKE …` is case-sensitive #34766 @e1ijah1
    - Fix the issue that `SHOW TABLE STATUS LIKE ...` is case-sensitive #7518 @likzn
    - Fix the issue that `max-index-length` still reports an error in non-strict mode #34931 @e1ijah1
    - Fix the issue that `ALTER COLUMN ... DROP DEFAULT` does not work #35018 @Defined2014
    - Fix the issue that when you create a table, the default value and the type of a column are not consistent and are not automatically corrected #34881 @Lloyd-Pottiger
    - Fix the issue that data in the `mysql.columns_priv` table is not deleted synchronously after you run `DROP USER` #35059 @lcwangchao
    - Fix the issue of DDL jam by disallowing creating tables within some system schemas #35205 @tangenta
    - Fix the issue that querying partitioned tables might report "index-out-of-range" and "non used index" errors in some cases #35181 @mjonss
    - Fix the issue that `INTERVAL expr unit + expr` might report an error #30253 @mjonss
    - Fix a bug that a temporary table cannot be found after being created in a transaction #35644 @djshow832
    - Fix the panic issue that occurs when setting collation to the `ENUM` column #31637 @wjhuang2016
    - Fix the issue that when one PD node goes down, the query of `information_schema.TIKV_REGION_STATUS` fails due to not retrying other PD nodes #35708 @tangenta
    - Fix the issue that `SHOW CREATE TABLE …` cannot correctly display `SET` or `ENUM` columns after `SET character_set_results = GBK` #31338 @tangenta
    - Fix the incorrect scope of the system variables `tidb_log_file_max_days` and `tidb_config` #35190 @morgo
    - Fix the issue that the output of `SHOW CREATE TABLE` is not compatible with MySQL for the `ENUM` or `SET` column #36317 @Defined2014
    - Fix the issue that when creating a table, the behavior of a `LONG BYTE` column is not compatible with MySQL #36239 @Defined2014
    - Fix the issue that `auto_increment = x` does not take effect on temporary tables #36224 @djshow832
    - Fix the wrong default value when modifying columns concurrently #35846 @wjhuang2016
    - Avoid sending requests to unhealthy TiKV nodes to improve availability #34906 @sticnarf
    - Fix the issue that the column list does not work in the `LOAD DATA` statement #35198 @SpadeA-Tang
    - Fix the issue that in some scenarios the pessimistic lock is incorrectly added to the non-unique secondary index #36235 @ekexium
- TiKV

    - Avoid reporting `WriteConflict` errors in pessimistic transactions #11612 @sticnarf
    - Fix the possible duplicate commit records in pessimistic transactions when async commit is enabled #12615 @sticnarf
    - Fix the issue that TiKV panics when modifying the `storage.api-version` from `1` to `2` #12600 @pingyu
    - Fix the issue of inconsistent Region size configuration between TiKV and PD #12518 @5kbpers
    - Fix the issue that TiKV keeps reconnecting PD clients #12506, #12827 @Connor1996
    - Fix the issue that TiKV panics when performing type conversion for an empty string #12673 @wshwsh12
    - Fix the issue of time parsing errors that occur when the `DATETIME` values contain a fraction and `Z` #12739 @gengliqi
    - Fix the issue that the perf context written by the Apply operator to TiKV RocksDB is coarse-grained #11044 @LykxSassinator
    - Fix the issue that TiKV fails to start when the configuration of backup/import/cdc is invalid #12771 @3pointer
    - Fix the panic issue that might occur when a peer is being split and destroyed at the same time #12825 @BusyJay
    - Fix the panic issue that might occur when the source peer catches up logs by snapshot in the Region merge process #12663 @BusyJay
    - Fix the panic issue caused by analyzing statistics when `max_sample_size` is set to `0` #11192 @LykxSassinator
    - Fix the issue that encryption keys are not cleaned up when Raft Engine is enabled #12890 @tabokie
    - Fix the issue that the `get_valid_int_prefix` function is incompatible with TiDB. For example, the `FLOAT` type was incorrectly converted to `INT` #13045 @guo-shaoge
    - Fix the issue that the Commit Log Duration of a new Region is too high, which causes QPS to drop #13077 @Connor1996
    - Fix the issue that PD does not reconnect to TiKV after the Region heartbeat is interrupted #12934 @bufferflies
- Tools

    - Backup & Restore (BR)

        - Fix the issue that BR does not reset the rate limit after finishing a rate-limited backup task #31722 @MoCuishle28
We would like to thank the following contributors from the TiDB community:
- e1ijah1
- PrajwalBorkar
- likzn
- rahulk789
- virusdefender
- joycse06
- morgo
- ixuh12
- blacktear23
- johnhaxx7
- GoGim1
- renbaoshuo
- Zheaoli
- fanrenhoo
- njuwelkin
- wirybeaver
- hey-kong
- fatelei
- eastfisher: First-time contributor
- Juneezee: First-time contributor