feat(tianmu): merge to Stonedb 5.7 stable #1919
Conversation
…anmu_no_key_error (#1462) In version 1.0.4, "MANDATORY_TIANMU" and "NO_KEY_ERROR" will be dropped from sql_mode. tianmu_mandatory specifies whether the Tianmu engine is mandatory for tables: if yes, set the variable to ON, otherwise set it to OFF. tianmu_no_key_error specifies whether DDL statements that are not supported by the SQL layer are skipped directly instead of reported as errors: if yes, set the variable to ON, otherwise set it to OFF.
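A minimal C++ sketch of the intended behaviour of the two switches. The variable names come from the commit message above; the functions and decision logic are hypothetical simplifications, not StoneDB code:

```cpp
#include <iostream>
#include <string>

// Hypothetical stand-ins for the server variables tianmu_mandatory and
// tianmu_no_key_error; only the names are taken from the commit message.
bool tianmu_mandatory = true;
bool tianmu_no_key_error = true;

std::string resolve_engine(const std::string &requested_engine) {
  // With tianmu_mandatory ON, tables are forced onto the Tianmu engine.
  return tianmu_mandatory ? "TIANMU" : requested_engine;
}

bool run_ddl(bool supported) {
  if (!supported) {
    if (tianmu_no_key_error) {
      std::cout << "unsupported DDL skipped, no error reported\n";
      return true;   // statement is silently skipped
    }
    std::cout << "unsupported DDL rejected\n";
    return false;    // statement fails with an error
  }
  return true;
}

int main() {
  std::cout << resolve_engine("InnoDB") << "\n";  // prints TIANMU
  run_ddl(false);                                 // skipped, not an error
}
```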
…primary/secondary synchronization if UUIDs are used as the primary key (#1464) Cause of the problem: during a primary-key scan under primary/secondary replication, "ha_tianmu::position()" is called first to obtain the primary key value from the "record". In this scenario, however, the call to "key_copy()" clears the "record", so the subsequent "GetKeys()" obtains a null primary key value. Solution: because the value in "handler->ref" is never used afterwards, the call to "key_copy()" can simply be removed.
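A self-contained C++ sketch of that failure mode, using plain byte buffers in place of MySQL's record/ref and a hypothetical copy_key_and_clear() standing in for the key_copy() side effect described above:

```cpp
#include <cstring>
#include <cstdio>
#include <cstddef>

// Hypothetical stand-in for key_copy(): copies the key bytes out of the row
// buffer and, as in the scenario described above, wipes the source afterwards.
void copy_key_and_clear(unsigned char *dst, unsigned char *record, std::size_t len) {
  std::memcpy(dst, record, len);
  std::memset(record, 0, len);  // the side effect that caused the bug
}

int main() {
  unsigned char record[8] = {1, 2, 3, 4, 5, 6, 7, 8};  // row holding the PK value
  unsigned char ref[8];

  copy_key_and_clear(ref, record, sizeof(record));

  // A later "GetKeys()"-style read of the record now sees only zeros,
  // which is why the replicated primary-key lookup returned a null key.
  std::printf("first key byte after copy: %d\n", record[0]);  // prints 0
}
```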
…s and update escape.test(#1196)
… testcases, add date type and std func testcase(#1196)
1. Fix the crash first. 2. Then redesign the entire aggregated data stream.
Bumps [nth-check](https://github.com/fb55/nth-check) to 2.1.1 and updates ancestor dependency [unist-util-select](https://github.com/syntax-tree/unist-util-select). These dependencies need to be updated together.
Updates `nth-check` from 1.0.2 to 2.1.1
- [Release notes](https://github.com/fb55/nth-check/releases)
- [Commits](fb55/nth-check@v1.0.2...v2.1.1)
Updates `unist-util-select` from 2.0.2 to 4.0.1
- [Release notes](https://github.com/syntax-tree/unist-util-select/releases)
- [Commits](syntax-tree/unist-util-select@2.0.2...4.0.1)
---
updated-dependencies:
- dependency-name: nth-check
  dependency-type: indirect
- dependency-name: unist-util-select
  dependency-type: direct:production
...
Signed-off-by: dependabot[bot] <support@github.com>
update the Sponsor button
fix the opencollective link
…on issues(#366) Cause of the problem: Tianmu keeps multiple versions, so for a DML operation it first copies the original pack, modifies the copy, and then writes it back with either an overwrite write (if the DATA file has invalid space that the pack can reuse) or an append write; after the new pack is written, the version chain points to the newly written address. The problem lies in the current TianmuAttr::LoadData logic: every call writes data to disk, so a transaction that inserts many rows produces many copies of the data. Because the transaction has not yet committed, the space from the earlier repeated writes is not released, the overwrite path is never reached, and only append writes happen, which is the root cause of the space blow-up. A transaction that writes a particularly large number of rows therefore explodes disk usage, and performing one disk I/O per loaded row also degrades insert performance. Solution: optimize the TianmuAttr::LoadData logic so that before saving changes it checks whether the pack is full (i.e. has reached 65536 rows); if it has, write it out, and if not, defer the write to the commit phase.
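A minimal sketch of the "flush only when the pack is full or at commit" idea. The 65536-row capacity comes from the commit message; PackBuffer and its methods are hypothetical simplifications, not the real TianmuAttr::LoadData:

```cpp
#include <cstdio>
#include <cstddef>
#include <vector>

constexpr std::size_t kPackCapacity = 65536;  // rows per pack, per the commit message

struct PackBuffer {
  std::vector<int> rows;     // stand-in for the real pack contents
  std::size_t flushes = 0;

  void flush() {             // stand-in for the disk write
    if (!rows.empty()) { ++flushes; rows.clear(); }
  }

  // Called once per loaded row: only hit the disk when the pack is full.
  void load_row(int value) {
    rows.push_back(value);
    if (rows.size() == kPackCapacity) flush();
  }

  // Any remaining partial pack is written once, in the commit phase.
  void commit() { flush(); }
};

int main() {
  PackBuffer pack;
  for (int i = 0; i < 150000; ++i) pack.load_row(i);
  pack.commit();
  std::printf("disk writes: %zu\n", pack.flushes);  // 3 writes instead of 150000
}
```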
…ving direct insert performance.
[summary] case_when.test drop_restric.test empty_string_not_null.test left_right_func.test like_not_like.test multi_join.test order_by.test ssb_small.test union_case.test
To support the `update ignore` statement, the uniqueness-check logic is re-implemented.
Cause: in the function ParsingStrategy::ParseResult ParsingStrategy::GetOneRow, field->val_str(str) cannot distinguish a 0 value from NULL. Solution: check whether the field's default value is NULL.
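A small illustration of why a string conversion alone cannot distinguish NULL from 0, with std::optional standing in for MySQL's Field (an assumed simplification, not the actual ParsingStrategy code):

```cpp
#include <iostream>
#include <optional>
#include <string>

// Converting through a string collapses NULL and 0 to the same text,
// mirroring the problem described in the commit message.
std::string val_str(const std::optional<long> &field) {
  return std::to_string(field.value_or(0));
}

int main() {
  std::optional<long> null_field;      // NULL
  std::optional<long> zero_field = 0;  // a genuine 0

  std::cout << val_str(null_field) << " vs " << val_str(zero_field) << "\n";  // "0 vs 0"

  // The remedy sketched here: consult the field's null-ness explicitly
  // (has_value() in this toy model) instead of relying on the converted string.
  std::cout << (null_field.has_value() ? "not null" : "null") << "\n";
}
```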
…m clause 1: Fix the unsupported case of a UNION or UNION ALL in a SQL statement without a FROM clause. 2: Re-format some code and functions.
1: Remove the unnecessary optimization in the Tianmu compilation stage; it does not help us and may introduce unexpected behaviors. 2: Refine MTR tests: issue848, issue1865, alter_table1, issue1523
In multi-threaded aggregation, ExpressionColumn can hit a double free because the shared value is not protected: thread A executes ValueOrNull::operator==, while thread B tries to free the same value, which crashes the instance.
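An illustrative sketch of this race class, with a hypothetical SharedValue standing in for ValueOrNull; the mutex shows one way the comparison path and the free path can be serialized (the actual fix in the PR may differ):

```cpp
#include <mutex>
#include <string>
#include <thread>

struct SharedValue {
  std::string *buf = nullptr;  // heap-owned payload shared by both threads
  std::mutex m;

  void set(const std::string &s) {
    std::lock_guard<std::mutex> lk(m);
    delete buf;
    buf = new std::string(s);
  }

  // The operator==-style read used by "thread A": must not observe a
  // half-freed buffer.
  bool equals(const std::string &s) {
    std::lock_guard<std::mutex> lk(m);
    return buf != nullptr && *buf == s;
  }

  // The release path used by "thread B": without the lock, this free could
  // race with the read above and lead to use-after-free / double free.
  void release() {
    std::lock_guard<std::mutex> lk(m);
    delete buf;
    buf = nullptr;
  }
};

int main() {
  SharedValue v;
  v.set("abc");
  std::thread a([&] { (void)v.equals("abc"); });
  std::thread b([&] { v.release(); });
  a.join();
  b.join();
}
```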
…_max #1564 [summary] 1. static_cast<int64_t>(18446744073709551601) = -15. 2. Item stores 18446744073709551601 with the unsigned flag set, but when Tianmu transforms it to ValueOrNull the value becomes `-15`. 3. Add an `unsigned flag` to value_or_null, TianmuNum, and the Tianmu expression code.
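The arithmetic behind point 1 can be reproduced in a few lines; the unsigned_flag variable below only illustrates the flag the fix adds, it is not the actual Tianmu code:

```cpp
#include <cstdint>
#include <cstdio>

int main() {
  uint64_t raw = 18446744073709551601ULL;         // the literal from the issue
  int64_t as_signed = static_cast<int64_t>(raw);  // two's-complement view: -15

  std::printf("signed view:   %lld\n", static_cast<long long>(as_signed));

  // Carrying an unsigned flag alongside the stored 64-bit value lets the
  // original magnitude be recovered from the very same bits.
  bool unsigned_flag = true;
  if (unsigned_flag)
    std::printf("unsigned view: %llu\n",
                static_cast<unsigned long long>(as_signed));  // 18446744073709551601
}
```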
…ision loss problem (#1173) When converting TIME/DATETIME to an ulonglong numeric, the Tianmu engine does not go through the TIME_to_ulonglong_time_round process, which makes the results differ from InnoDB. Furthermore, when the tianmu_insert_delayed parameter is turned off and an INSERT statement is executed, TIME/DATETIME/TIMESTAMP data loses precision due to incomplete attribute copying. PR Close #1173
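A toy example of the round-versus-truncate difference for TIME values; it is not MySQL's TIME_to_ulonglong_time_round (the carry into minutes and hours is omitted), only an illustration of why the two engines could disagree:

```cpp
#include <cstdio>

// Convert hh:mm:ss plus fractional seconds to the numeric form hhmmss,
// dropping the fraction outright.
unsigned long long time_to_num_truncate(int h, int m, int s, double frac) {
  (void)frac;  // fractional seconds are discarded
  return h * 10000ULL + m * 100ULL + s;
}

// Same conversion, but rounding the seconds (carry into minutes/hours
// deliberately omitted in this toy version).
unsigned long long time_to_num_round(int h, int m, int s, double frac) {
  if (frac >= 0.5) ++s;
  return h * 10000ULL + m * 100ULL + s;
}

int main() {
  // 12:34:56.7 -> 123456 when truncated, 123457 when rounded.
  std::printf("%llu vs %llu\n",
              time_to_num_truncate(12, 34, 56, 0.7),
              time_to_num_round(12, 34, 56, 0.7));
}
```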
files deleted: storage/tianmu/core/rc_attr_typeinfo.h storage/tianmu/handler/tianmu_handler.cpp storage/tianmu/handler/tianmu_handler_com.cpp storage/tianmu/types/rc_data_types.cpp storage/tianmu/types/rc_num.cpp storage/tianmu/types/rc_num.h storage/tianmu/types/rc_value_object.cpp
…t cannot be recorded(#1876) 1. Tianmu uses its own code to handle LOAD, and that code lacks support for the binlog row format. 2. When Tianmu starts parsing rows, it writes the table map event first. 3. Each time Tianmu constructs a row, it adds the row to the rows log event; when parsing is done the rows log event is also ready and is then written to the binlog.
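A sketch of the event ordering described in points 2 and 3; Binlog and RowsEvent are hypothetical simplifications, not the MySQL log-event classes:

```cpp
#include <cstdio>
#include <string>
#include <vector>

struct Binlog {
  std::vector<std::string> events;           // stand-in for the binlog file
  void write(const std::string &e) { events.push_back(e); }
};

struct RowsEvent {
  std::vector<std::string> rows;             // rows accumulated while parsing
  void add_row(const std::string &r) { rows.push_back(r); }
};

void load_with_row_binlog(Binlog &binlog, const std::vector<std::string> &parsed_rows) {
  binlog.write("table_map_event");           // step 2: table map event first
  RowsEvent rows_event;
  for (const auto &row : parsed_rows)        // step 3: add each row as it is parsed
    rows_event.add_row(row);
  // Parsing is done, so the rows event is complete and can be written.
  binlog.write("rows_event(" + std::to_string(rows_event.rows.size()) + " rows)");
}

int main() {
  Binlog binlog;
  load_with_row_binlog(binlog, {"r1", "r2", "r3"});
  for (const auto &e : binlog.events) std::printf("%s\n", e.c_str());
}
```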
LGTM
LGTM
Thanks for the contribution! Please review the labels and make any necessary changes.
Summary about this PR
Issue Number: close #issue_number_you_created
Tests Check List
Changelog
Documentation