Api tables perf #1474 (Draft)
kathirsvn wants to merge 6 commits into main from api_tables_perf
Commits:
- ee71de9: Added API tables kv nb test and fixed docker compose scripts (kathirsvn)
- 9e0065e: API Tables kv workload fixes (kathirsvn)
- ce268a8: CQL key value workload file added temporarily (kathirsvn)
- 338ff04: added scenario for astra (kathirsvn)
- a85a0c9: added readonly and insertonly scenarios (kathirsvn)
- 77631b7: Merge branch 'main' into api_tables_perf (tatu-at-datastax)
@@ -0,0 +1,110 @@
min_version: "5.17.1"

description: |
  A workload with only text keys and text values.
  The CQL Key-Value workload demonstrates the simplest possible schema with payload data. This is useful for
  measuring system capacity most directly in terms of raw operations. As a reference point, it provides some
  insight into the types of workloads that are constrained by messaging, threading, and tasking rather than by
  bulk throughput.
  During preload, all keys are set with an initial value. During the main phase, random keys from the known
  population are upserted with new values that never repeat.

TEMPLATE(batchsize,100)

scenarios:
  default:
    schema: run driver=cql tags==block:schema threads==1 cycles==UNDEF
    rampup: run driver=cql tags==block:rampup cycles===TEMPLATE(rampup-cycles,10000000) threads=auto
    main: run driver=cql tags==block:"main.*" cycles===TEMPLATE(main-cycles,10000000) threads=auto
  batch:
    schema: run driver=cql tags==block:schema threads==1 cycles==UNDEF
    rampup: run driver=cql tags==block:rampup_batch cycles===TEMPLATE(rampup-cycles,10000000) threads=auto
    main: run driver=cql tags==block:"main.*" cycles===TEMPLATE(main-cycles,10000000) threads=auto
  astra:
    schema: run driver=cql tags==block:schema_astra threads==1 cycles==UNDEF
    rampup: run driver=cql tags==block:rampup cycles===TEMPLATE(rampup-cycles,10000000) threads=auto
    main: run driver=cql tags==block:"main.*" cycles===TEMPLATE(main-cycles,10000000) threads=auto
  astra_read_only:
    schema: run driver=cql tags==block:schema_astra threads==1 cycles==UNDEF
    rampup: run driver=cql tags==block:rampup cycles===TEMPLATE(rampup-cycles,10000000) threads=auto
    main: run driver=cql tags==block:main_read cycles===TEMPLATE(main-cycles,10000000) threads=auto
  astra_write_only:
    schema: run driver=cql tags==block:schema_astra threads==1 cycles==UNDEF
    main: run driver=cql tags==block:main_write cycles===TEMPLATE(main-cycles,10000000) threads=auto
  basic_check:
    schema: run driver=cql tags==block:schema threads==1 cycles==UNDEF
    rampup: run driver=cql tags==block:rampup cycles===TEMPLATE(rampup-cycles,10) threads=auto
    main: run driver=cql tags==block:"main.*" cycles===TEMPLATE(main-cycles,10) threads=auto

bindings:
  seq_key: Mod(TEMPLATE(keycount,1000000000)); ToString() -> String
  seq_value: Hash(); Mod(TEMPLATE(valuecount,1000000000)); ToString() -> String
  batch_seq_value: Mul(TEMPLATE(batchsize,100)L); Hash(); Mod(TEMPLATE(valuecount,1000000000)); ToString() -> String
  rw_key: TEMPLATE(keydist,Uniform(0,1000000000)); ToString() -> String
  rw_value: Hash(); TEMPLATE(valdist,Uniform(0,1000000000)); ToString() -> String

blocks:
  schema:
    params:
      prepared: false
    ops:
      create_keyspace: |
        create keyspace if not exists TEMPLATE(keyspace,baselines)
        WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 'TEMPLATE(rf:1)'}
        AND durable_writes = true;
      create_table: |
        create table if not exists TEMPLATE(keyspace,baselines).TEMPLATE(table,keyvalue) (
          key text,
          value text,
          PRIMARY KEY (key)
        );
  schema_astra:
    params:
      prepared: false
    statements:
      create_table: |
        create table if not exists TEMPLATE(keyspace,baselines).TEMPLATE(table,keyvalue) (
          key text,
          value text,
          PRIMARY KEY (key)
        );
  rampup:
    params:
      cl: TEMPLATE(write_cl,LOCAL_QUORUM)
    ops:
      rampup_insert: |
        insert into TEMPLATE(keyspace,baselines).TEMPLATE(table,keyvalue)
        (key, value)
        values ({seq_key},{seq_value});
  rampup_batch:
    params:
      cl: TEMPLATE(write_cl,LOCAL_QUORUM)
    ops:
      rampup_insert:
        batch: testing
        repeat: 100
        op_template:
          prepared: |
            insert into TEMPLATE(keyspace,baselines).TEMPLATE(table,keyvalue)
            (key, value)
            values ({seq_key},{seq_value});
  verify:
    params:
      cl: TEMPLATE(read_cl,LOCAL_QUORUM)
    ops:
      verify_select: |
        select * from TEMPLATE(keyspace,baselines).TEMPLATE(table,keyvalue) where key={seq_key};
        verify-fields: key->seq_key, value->seq_value
  main_read:
    params:
      cl: TEMPLATE(read_cl,LOCAL_QUORUM)
    statements:
      main_select: |
        select * from TEMPLATE(keyspace,baselines).TEMPLATE(table,keyvalue) where key={rw_key};
  main_write:
    params:
      cl: TEMPLATE(write_cl,LOCAL_QUORUM)
    statements:
      main_insert: |
        insert into TEMPLATE(keyspace,baselines).TEMPLATE(table,keyvalue)
        (key, value) values ({rw_key}, {rw_value});
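
For context, the scenarios and TEMPLATE parameters defined above (keycount, valuecount, rampup-cycles, main-cycles, rf, keyspace, table) can all be overridden on the nb5 command line. The sketch below is illustrative only: the workload file name and the connection options are assumptions, not part of this PR, while the scenario names and parameters come from the file itself.

# Hypothetical file name; substitute the actual path of the workload added in this PR.
# Quick smoke test of the schema/rampup/main flow using the basic_check scenario:
nb5 cql-api-tables-keyvalue.yaml basic_check hosts=localhost localdc=datacenter1

# Default scenario with reduced populations and cycle counts:
nb5 cql-api-tables-keyvalue.yaml default hosts=localhost localdc=datacenter1 \
    keycount=1000000 valuecount=1000000 rampup-cycles=1000000 main-cycles=1000000

# Read-only measurement via the astra_read_only scenario (bundle path and credentials are illustrative):
nb5 cql-api-tables-keyvalue.yaml astra_read_only \
    secureconnectbundle=/path/to/scb.zip username=token password="$ASTRA_TOKEN" \
    main-cycles=1000000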
@@ -0,0 +1,173 @@
min_version: "5.17.3"

# Example command line (when Stargate is running on localhost):
# nb5 -v http-dataapi-keyvalue dataapi_host=localhost rowscount=20000 threads=20

description: >2
  This workload emulates a key-value data model and access patterns.
  This should be identical to the cql variant except for:
  - Schema creation with the Data API
  - There is no instrumentation with the http driver.
  - There is no async mode with the http driver.
  Note that dataapi_port should reflect the port where the Data API is exposed (defaults to 8181).

scenarios:
  default:
    schema: run driver=http tags==block:schema* threads==1 cycles==UNDEF
    rampup: run driver=http tags==block:rampup cycles===TEMPLATE(rampup-cycles,TEMPLATE(rowscount,10000000)) threads=auto
    main: run driver=http tags==block:"main.*" cycles===TEMPLATE(main-cycles,TEMPLATE(rowscount,10000000)) threads=auto
  astra:
    schema: run driver=http tags==block:schema threads==1 cycles==UNDEF
    rampup: run driver=http tags==block:rampup cycles===TEMPLATE(rampup-cycles,TEMPLATE(rowscount,10000000)) threads=auto
    main: run driver=http tags==block:"main.*" cycles===TEMPLATE(main-cycles,TEMPLATE(rowscount,10000000)) threads=auto
  astra_findone_only:
    schema: run driver=http tags==block:schema threads==1 cycles==UNDEF
    rampup: run driver=http tags==block:rampup cycles===TEMPLATE(rampup-cycles,TEMPLATE(rowscount,10000000)) threads=auto
    main: run driver=http tags==block:main-findone cycles===TEMPLATE(main-cycles,TEMPLATE(rowscount,10000000)) threads=auto
  astra_insertone_only:
    schema: run driver=http tags==block:schema threads==1 cycles==UNDEF
    main: run driver=http tags==block:main-insertone cycles===TEMPLATE(main-cycles,TEMPLATE(rowscount,10000000)) threads=auto

bindings:
  # To enable an optional weighted set of hosts in place of a load balancer
  # Examples
  #   single host: dataapi_host=host1
  #   multiple hosts: dataapi_host=host1,host2,host3
  #   multiple weighted hosts: dataapi_host=host1:3,host2:7
  weighted_hosts: WeightedStrings('<<dataapi_host:dataapi>>')

  # spread into different spaces to use multiple connections
  space: HashRange(1,<<connections:20>>); ToString();

  # http request id
  request_id: ToHashedUUID(); ToString();

  # autogenerate auth token to use on API calls using configured uri/uid/password, unless one is provided
  token: Discard(); Token('<<auth_token:>>','<<uri:http://localhost:8081/v1/auth>>', '<<uid:cassandra>>', '<<pswd:cassandra>>');

  seq_key: Mod(<<keycount:10000000>>); ToString() -> String
  seq_value: Hash(); Mod(<<valuecount:10000000>>); ToString() -> String
  rw_key: <<keydist:Uniform(0,<<keycount:10000000>>)->int>>; ToString() -> String
  rw_value: Hash(); <<valdist:Uniform(0,<<keycount:10000000>>)->int>>; ToString() -> String

blocks:
  schema_local:
    ops:
      create-namespace:
        method: POST
        uri: <<protocol:http>>://{weighted_hosts}:<<dataapi_port:8181>><<path_prefix:>>/v1
        Accept: "application/json"
        X-Cassandra-Request-Id: "{request_id}"
        Token: "{token}"
        Content-Type: "application/json"
        ok-body: ".*\"ok\":1.*"
        body: >2
          {
            "createNamespace": {
              "name": "<<namespace:dataapi_keyvalue>>"
            }
          }
  schema:
    ops:
      drop-table:
        method: POST
        uri: <<protocol:http>>://{weighted_hosts}:<<dataapi_port:8181>><<path_prefix:>>/v1/<<namespace:dataapi_keyvalue>>
        Accept: "application/json"
        X-Cassandra-Request-Id: "{request_id}"
        Token: "{token}"
        Content-Type: "application/json"
        ok-body: ".*\"ok\":1.*"
        body: >2
          {
            "dropTable": {
              "name": "<<table:api_table>>"
            }
          }

      create-table:
        method: POST
        uri: <<protocol:http>>://{weighted_hosts}:<<dataapi_port:8181>><<path_prefix:>>/v1/<<namespace:dataapi_keyvalue>>
        Accept: "application/json"
        X-Cassandra-Request-Id: "{request_id}"
        Token: "{token}"
        Content-Type: "application/json"
        ok-body: ".*\"ok\":1.*"
        body: >2
          {
            "createTable": {
              "name": "<<table:api_table>>",
              "definition": {
                "columns": {
                  "key": {
                    "type": "text"
                  },
                  "value": {
                    "type": "text"
                  }
                },
                "primaryKey": "key"
              }
            }
          }

  rampup:
    ops:
      rampup-insert:
        space: "{space}"
        method: POST
        uri: <<protocol:http>>://{weighted_hosts}:<<dataapi_port:8181>><<path_prefix:>>/v1/<<namespace:dataapi_keyvalue>>/<<table:api_table>>
        Accept: "application/json"
        X-Cassandra-Request-Id: "{request_id}"
        Token: "{token}"
        Content-Type: "application/json"
        ok-body: '.*\"insertedIds\":\[.*\].*'
        body: >2
          {
            "insertOne" : {
              "document" : {
                "key" : "{seq_key}",
                "value" : "{seq_value}"
              }
            }
          }

  main-findone:
    ops:
      main-select:
        space: "{space}"
        method: POST
        uri: <<protocol:http>>://{weighted_hosts}:<<dataapi_port:8181>><<path_prefix:>>/v1/<<namespace:dataapi_keyvalue>>/<<table:api_table>>
        Accept: "application/json"
        X-Cassandra-Request-Id: "{request_id}"
        Token: "{token}"
        Content-Type: "application/json"
        ok-body: ".*\"data\".*"
        body: >2
          {
            "findOne" : {
              "filter" : {
                "key" : "{rw_key}"
              }
            }
          }
  main-insertone:
    ops:
      main-write:
        space: "{space}"
        method: POST
        uri: <<protocol:http>>://{weighted_hosts}:<<dataapi_port:8181>><<path_prefix:>>/v1/<<namespace:dataapi_keyvalue>>/<<table:api_table>>
        Accept: "application/json"
        X-Cassandra-Request-Id: "{request_id}"
        Token: "{token}"
        Content-Type: "application/json"
        # because this is not an upsert, modified count could be 0 or 1
        ok-body: '.*\"insertedIds\":\[.*\].*'
        body: >2
          {
            "insertOne" : {
              "document" : {
                "key" : "{seq_key}",
                "value" : "{seq_value}"
              }
            }
          }
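
The Data API workload can be driven the same way; the file's own example command line is repeated below, followed by a sketch of a read-only run. Host names, the token value, and cycle counts are illustrative; the workload name (http-dataapi-keyvalue), scenario names, and parameters (dataapi_host, dataapi_port, auth_token, namespace, table, rowscount, connections) are taken from the file above.

# Local run against Stargate / Data API on localhost (from the comment in the workload):
nb5 -v http-dataapi-keyvalue dataapi_host=localhost rowscount=20000 threads=20

# Read-only run of the astra_findone_only scenario with an explicit auth token and
# weighted hosts, as described in the bindings section (values are placeholders):
nb5 http-dataapi-keyvalue astra_findone_only \
    dataapi_host=host1:3,host2:7 dataapi_port=8181 \
    auth_token="$DATA_API_TOKEN" namespace=dataapi_keyvalue table=api_table \
    rowscount=1000000 connections=20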
Review comment: Probably should not change?