executor: parallel cancel mpp query #36161
Conversation
Signed-off-by: xufei <xufeixw@mail.ustc.edu.cn>
[REVIEW NOTIFICATION] This pull request has been approved by:
To complete the pull request process, please ask the reviewers in the list to review. The full list of commands accepted by this bot can be found here. Reviewers can indicate their review by submitting an approval review.
Please follow the PR Title Format:
Or, if the count of mainly changed packages is more than 3, use
After you have formatted the title, you can leave a comment:
/run-check_title
Code Coverage Details: https://codecov.io/github/pingcap/tidb/commit/d4d968c6b6d7aab79e45c449d656eae45e5956e6
Co-authored-by: HuaiyuXu <xuhuaiyu@pingcap.com>
defer func() {
    wg.Done()
}()
_, err := m.store.GetTiKVClient().SendRequest(context.Background(), storeAddr, wrappedReq, tikv.ReadTimeoutShort)
Do we need to limit the batch number?
Well, the max number will be the number of TiFlash nodes in the cluster. Considering that TiDB already dispatches MPP tasks (dispatchMPPTask) to TiFlash nodes in parallel without a limit, I think it is OK not to limit the batch number here.
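For reference, if a cap were ever desired, a buffered-channel semaphore is a common way to bound the fan-out. The sketch below is hypothetical and not part of this PR; `storeAddrs`, `sendCancel`, and `maxInFlight` are placeholder names standing in for the real request plumbing:

```go
package mppcancel

import "sync"

// cancelAllBounded fans out cancel requests to every store address while
// limiting the number of in-flight requests with a buffered-channel semaphore.
// This is an illustrative sketch, not code from this PR.
func cancelAllBounded(storeAddrs []string, sendCancel func(addr string), maxInFlight int) {
	var wg sync.WaitGroup
	sem := make(chan struct{}, maxInFlight) // bounds concurrent cancel requests

	for _, addr := range storeAddrs {
		wg.Add(1)
		sem <- struct{}{} // acquire a slot before spawning the goroutine
		go func(addr string) {
			defer func() {
				<-sem // release the slot
				wg.Done()
			}()
			sendCancel(addr) // placeholder for the real SendRequest call
		}(addr)
	}
	wg.Wait()
}
```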
Signed-off-by: xufei <xufeixw@mail.ustc.edu.cn>
/merge
@xhebox: In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the ti-community-infra/tichi repository.
build failed
[2022-07-13T06:15:10.514Z] + cat importer.log
[2022-07-13T06:15:10.514Z] cat: importer.log: No such file or directory
/rebuild
Failed to build because of goimports.
/merge
This pull request has been accepted and is ready to merge. Commit hash: 4dc4799
/run-mysql-test
TiDB MergeCI notify 🔴 Bad News! New failing [2] after this PR merged.
* master: (27 commits)
  * executor: parallel cancel mpp query (pingcap#36161)
  * store/copr: adjust the cop cache admission process time for paging (pingcap#36157)
  * log-backup: get can restored global-checkpoint-ts when support v3 checkpoint advance (pingcap#36197)
  * executor: optimize cursor read point get by reading through pessimistic lock cache (pingcap#36149)
  * *: add tidb_min_paging_size system variable (pingcap#36107)
  * planner: handle the expected row count for pushed-down selection in mpp (pingcap#36195)
  * *: support show ddl jobs for sub-jobs (pingcap#36168)
  * table-filter: optimize table pattern message and unit tests (pingcap#36160)
  * domain: fix unstable test TestAbnormalSessionPool (pingcap#36154)
  * executor: check the error returned by `handleNoDelay` (pingcap#36105)
  * log-backup: fix checkpoint display (pingcap#36166)
  * store/mockstore/unistore: fix several issues of coprocessor paging in unistore (pingcap#36147)
  * test: refactor restart test (pingcap#36174)
  * ddl: support rename index and columns for multi-schema change (pingcap#36148)
  * test: remove meaningless test and update bazel (pingcap#36136)
  * planner: Reduce verbosity of logging unknown system variables (pingcap#36013)
  * metrics/grafana: bring back the plan cache miss panel (pingcap#36081)
  * ddl: implement table granularity DDL for SchemaTracker (pingcap#36077)
  * *: bazel use jdk 17 (pingcap#36070)
  * telemetry: add reviewer rule (pingcap#36084)
  * ...
Signed-off-by: xufei <xufeixw@mail.ustc.edu.cn>
What problem does this PR solve?
Issue Number: close #36164, ref pingcap/tiflash#5095
Problem Summary:
Currently, when canceling an MPP query, TiDB sends the cancel request to the TiFlash nodes one by one, so if the cancel request hangs in TiFlash, the total cancel time will be `n * 30` seconds, where n is the number of TiFlash nodes. This PR sends the cancel request to all TiFlash nodes in parallel.

What is changed and how it works?
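A minimal sketch of the idea, assuming a hypothetical `cancelOnNode` helper that sends the cancel RPC and may block for the full read timeout when a TiFlash node hangs (this is an illustration, not the PR's actual code):

```go
package mppcancel

import "sync"

// Before: cancel requests are sent one by one; if every node hangs,
// the total wait is roughly n * timeout.
func cancelSequential(nodes []string, cancelOnNode func(addr string)) {
	for _, addr := range nodes {
		cancelOnNode(addr)
	}
}

// After: cancel requests are sent concurrently; the total wait is roughly
// one timeout regardless of how many nodes hang.
func cancelParallel(nodes []string, cancelOnNode func(addr string)) {
	var wg sync.WaitGroup
	for _, addr := range nodes {
		wg.Add(1)
		go func(addr string) {
			defer wg.Done()
			cancelOnNode(addr) // placeholder for sending the real cancel RPC
		}(addr)
	}
	wg.Wait()
}
```

With the sequential version the worst case grows with the number of nodes, while with the parallel version the slowest single node determines the total cancel time.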
Check List
Tests
Side effects
Documentation
Release note
Please refer to Release Notes Language Style Guide to write a quality release note.