[dst] Improve tail-latency for operations of transactions using wait queues #13580
Labels: area/docdb (YugabyteDB core features), kind/enhancement (This is an enhancement of an existing feature), priority/medium (Medium priority issue)
Comments
yugabyte-ci added the kind/bug (This issue is a bug) and priority/medium (Medium priority issue) labels on Aug 11, 2022.
yugabyte-ci added the kind/enhancement (This is an enhancement of an existing feature) label and removed the kind/bug (This issue is a bug) label on Oct 19, 2022.
robertsami changed the title from "[dst] Improve the continuation of waiters in wait_queue.cc" to "[dst] Improve tail-latency for operations of transactions using wait queues" on Jan 17, 2023.
robertsami added a commit to robertsami/yugabyte-db that referenced this issue on Feb 28, 2023:
Summary: tbd
Test Plan: Jenkins
Subscribers: bogdan
Differential Revision: https://phabricator.dev.yugabyte.com/D22968
robertsami added a commit that referenced this issue on Mar 1, 2023:
Summary: The main contribution of this revision is to drastically improve p99 performance of workloads using wait-on-conflict concurrency control under high contention, without harming p50 or average performance under normal amounts of contention. We achieve this by making the following improvements:

1. Force incoming requests to check the wait queue once for active blockers, to ensure incoming requests cannot starve waiting transactions which are racing to exit the wait queue
2. Assign serial numbers to incoming requests, and whenever a batch of waiters can be resumed at the same time, ensure they are resumed roughly in the order in which they arrived at the tserver

Additional enhancements include:

1. Reduce copying by consolidating on using TransactionData everywhere, which is pulled into a conflict_data.h file with associated data structures
2. Populate granular intent information on a sub-transaction basis for use by the wait queue
3. Piggy-back off transaction status requests in conflict resolution to obtain status tablet info

Test Plan: Performance was tested on a 16-core, 32 GB RAM alma8 server with a full-LTO release build. In both cases we used the following set-up:

```
create table test (k int primary key, v int);
insert into test select generate_series(0, 11), 0;
```

In both cases, we also ran ysql_bench as follows:

```
build/latest/postgres/bin/ysql_bench --transactions=2000 --jobs=16 --client=16 --file=workload.sql --progress=1 --debug=fails
```

= First test: Max contention =

`workload.sql`
```
begin;
select * from test where k=1 for update;
commit;
```

Baseline:
```
latency average = 19.779 ms
latency stddev = 26.684 ms
tps = 792.780284 (including connections establishing)
tps = 793.793930 (excluding connections establishing)
```

With revision:
```
latency average = 22.632 ms
latency stddev = 3.266 ms
tps = 705.108285 (including connections establishing)
tps = 705.914647 (excluding connections establishing)
```

= Second test: Normal contention =

`workload.sql`
```
begin;
with rand as (select floor(random() * 10 + 1)::int as k) select * from test join rand on rand.k=test.k for update;
commit;
```

Baseline:
```
latency average = 7.317 ms
latency stddev = 6.516 ms
tps = 2117.437801 (including connections establishing)
tps = 2126.594897 (excluding connections establishing)
```

With revision:
```
latency average = 7.055 ms
latency stddev = 5.124 ms
tps = 2236.062486 (including connections establishing)
tps = 2244.260708 (excluding connections establishing)
```

==Takeaways==

1. stddev of latency is substantially improved with this revision, at the expense of a roughly 11% drop in throughput and a 14% increase in average latency in the max-contention case
2. Throughput is not significantly changed in the normal contention case

Reviewers: pjain, bkolagani, sergei
Reviewed By: sergei
Subscribers: mbautin, rthallam, bogdan
Differential Revision: https://phabricator.dev.yugabyte.com/D22968
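To make the second fairness mechanism above concrete, here is a minimal C++ sketch of resuming a resumable batch in arrival order. All names here (`WaiterData`, `serial_no`, `ResumeBatch`) are illustrative assumptions, not the actual wait_queue.cc API.

```
#include <algorithm>
#include <cstdint>
#include <functional>
#include <vector>

// Hypothetical stand-in for per-waiter bookkeeping; the real wait queue
// tracks far more (transaction id, blockers, status tablet, callbacks, ...).
struct WaiterData {
  uint64_t serial_no;            // assigned when the request arrives at the tserver
  std::function<void()> resume;  // re-runs conflict resolution for this waiter
};

// When a batch of waiters becomes resumable at the same time, resume them
// roughly in the order they arrived, so a transaction that has waited longest
// is not beaten to the latch by waiters that entered the queue after it.
void ResumeBatch(std::vector<WaiterData>& resumable) {
  std::sort(resumable.begin(), resumable.end(),
            [](const WaiterData& a, const WaiterData& b) {
              return a.serial_no < b.serial_no;
            });
  for (auto& waiter : resumable) {
    waiter.resume();  // in the real code this is asynchronous and may re-block
  }
}
```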
@robertsami, can we close this, or were you planning additional follow-ups?
The main fix for fairness has landed in f69dc2a. Regarding the remaining minor points in the description:
Jira Link: DB-3158
The biggest contributor to high tail latency is the following case of starvation: when there is a high degree of contention, waiting transactions may be starved by incoming operations that contend for the same latch. We currently have no mechanism to prevent this, which can lead to high tail latency in some workloads.
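As a rough illustration of the fix that landed for this (see the commit message above), the following sketch forces an incoming request to consult the wait queue once before contending for the latch. `Request`, `Latch`, `WaitQueue`, and their methods are trivial stand-ins for illustration, not the actual YugabyteDB interfaces.

```
#include <string>

struct Request { std::string key; };

// Trivial stand-ins so the sketch compiles; real implementations track state.
struct Latch {
  bool TryLock(const std::string&) { return true; }
};
struct WaitQueue {
  bool HasActiveBlockers(const std::string&) { return false; }
  void Enqueue(const Request&) {}
};

// Without the blocker check below, a fresh incoming request races resumed
// waiters for the latch and often wins, sending the waiter straight back to
// the queue; under sustained load the waiter may never progress (starvation).
bool AdmitOrWait(const Request& req, WaitQueue& queue, Latch& latch) {
  // The fairness fix: every incoming request checks the wait queue once for
  // active blockers and queues behind them instead of jumping ahead.
  if (queue.HasActiveBlockers(req.key)) {
    queue.Enqueue(req);
    return false;  // will be resumed by the wait queue in arrival order
  }
  return latch.TryLock(req.key);
}
```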
Less critically, our process for determining which waiters can be resumed and subsequently resuming them could be improved in a couple ways:
1. We currently take a write lock on
waiter_status_
before resuming each waiter. We need not re-acquire this write lock for every waiter and can simply acquire it once.
2. When a batch of waiters can be resumed at the same time, we could either (see the sketch below):
a. Resolve the first-in waiter and all non-conflicting other waiters in parallel
b. Resolve the largest set of non-conflicting waiters in parallel, then the second largest, etc.
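For option (b), one plausible shape is a greedy partition of the resumable waiters into mutually non-conflicting groups, resumed one group at a time. This is a hedged sketch under assumed names (`Waiter`, `Conflicts`), not the wait_queue.cc implementation, and a greedy pass yields maximal groups rather than the guaranteed-largest set.

```
#include <utility>
#include <vector>

struct Waiter { /* transaction id, locked keys, ... */ };

// Assumed predicate: whether two waiters touch overlapping intents. Placeholder
// body; a real check would compare the granular per-subtransaction intent
// information mentioned in the commit above.
bool Conflicts(const Waiter&, const Waiter&) { return false; }

// Greedily peel off maximal sets of mutually non-conflicting waiters. Each
// returned group can be resolved in parallel; groups are processed in order,
// with the first pass tending to capture the largest group.
std::vector<std::vector<Waiter*>> PartitionNonConflicting(std::vector<Waiter*> pending) {
  std::vector<std::vector<Waiter*>> groups;
  while (!pending.empty()) {
    std::vector<Waiter*> group, rest;
    for (Waiter* w : pending) {
      bool clashes = false;
      for (const Waiter* g : group) {
        if (Conflicts(*w, *g)) { clashes = true; break; }
      }
      (clashes ? rest : group).push_back(w);
    }
    groups.push_back(std::move(group));
    pending = std::move(rest);
  }
  return groups;
}
```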