add --user-tags as alias for --user-tag #1599

Merged 5 commits on Nov 2, 2022.

19 changes: 10 additions & 9 deletions docs/command_line_reference.rst
@@ -258,7 +258,7 @@ Example JSON file::

All track parameters are recorded for each metrics record in the metrics store. Also, when you run ``esrally list races``, it will show all track parameters::

-Race Timestamp   Track   Track Parameters          Challenge           Car      User Tag
+Race Timestamp   Track   Track Parameters          Challenge           Car      User Tags
 ---------------- ------- ------------------------- ------------------- -------- ---------
 20160518T122341Z pmc     bulk_size=8000            append-no-conflicts defaults
 20160518T112341Z pmc     bulk_size=2000,clients=16 append-no-conflicts defaults
@@ -902,38 +902,39 @@ Rally usually installs and launches an Elasticsearch cluster internally and wipe
.. note::
    This option only affects clusters that are provisioned by Rally. More specifically, if you use the pipeline ``benchmark-only``, this option is ineffective as Rally does not provision a cluster in this case.

-``user-tag``
-~~~~~~~~~~~~
+``user-tags``
+~~~~~~~~~~~~~

This is only relevant when you want to run :doc:`tournaments </tournament>`. You can use this flag to attach arbitrary text to the meta-data of each metric record and also to the corresponding race. This helps you recognize a race when you run ``esrally list races``, as you don't need to remember the concrete timestamp at which a race was run but can instead use your own descriptive names.

-The required format is ``key`` ":" ``value``. You can choose ``key`` and ``value`` freely.
+The required format is ``key`` ":" ``value``. You can choose ``key`` and ``value`` freely. You can also specify multiple tags. They need to be separated by a comma.

**Example**

::

-esrally race --track=pmc --user-tag="intention:github-issue-1234-baseline,gc:cms"

-You can also specify multiple tags. They need to be separated by a comma.
+esrally race --track=pmc --user-tags="intention:github-issue-1234-baseline,gc:cms"

**Example**

::

-esrally race --track=pmc --user-tag="disk:SSD,data_node_count:4"
+esrally race --track=pmc --user-tags="disk:SSD,data_node_count:4"



When you run ``esrally list races``, this will show up again::

-Race Timestamp   Track   Track Parameters   Challenge           Car      User Tag
+Race Timestamp   Track   Track Parameters   Challenge           Car      User Tags
 ---------------- ------- ------------------ ------------------- -------- ------------------------------------
 20160518T122341Z pmc                        append-no-conflicts defaults intention:github-issue-1234-baseline
 20160518T112341Z pmc                        append-no-conflicts defaults disk:SSD,data_node_count:4

This will help you recognize a specific race when running ``esrally compare``.

+.. note::
+    This option used to be named ``--user-tag`` without an "s", which was confusing as multiple tags are supported. While users are now encouraged to use ``--user-tags`` for clarity, Rally will continue to honor ``--user-tag`` in the future to avoid breaking backwards compatibility.
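
Given the note above, both spellings should be interchangeable; for example, the following two invocations are expected to produce identical race metadata (an illustrative pairing, not an example from the original docs)::

    esrally race --track=pmc --user-tag="disk:SSD"
    esrally race --track=pmc --user-tags="disk:SSD"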

``indices``
~~~~~~~~~~~

2 changes: 1 addition & 1 deletion docs/metrics.rst
@@ -108,7 +108,7 @@ Rally also captures some meta information for each metric record:
* Node name: If Rally provisions the cluster, it will choose a unique name for each node.
* Source revision: We always record the git hash of the version of Elasticsearch that is benchmarked. This is even done if you benchmark an official binary release.
* Distribution version: We always record the distribution version of Elasticsearch that is benchmarked. This is even done if you benchmark a source release.
-* Custom tag: You can define one custom tag with the command line flag ``--user-tag``. The tag is prefixed by ``tag_`` in order to avoid accidental clashes with Rally internal tags.
+* Custom tags: You can define custom tags with the command line flag ``--user-tags``. The tags are prefixed by ``tag_`` in order to avoid accidental clashes with Rally internal tags.
* Operation-specific: The optional substructure ``operation`` contains additional information depending on the type of operation. For bulk requests, this may be the number of documents or for searches the number of hits.

Note that depending on the "level" of a metric record, certain meta information might be missing. It makes no sense to record host level meta info for a cluster wide metric record, like a query latency (as it cannot be attributed to a single node).
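
To illustrate the ``tag_`` prefix mentioned above: for ``--user-tags="intention:testing,disk_type:hdd"``, the tags would plausibly land in a record's meta info as sketched below (the surrounding field and its value are hypothetical examples; only the prefixing rule comes from the docs)::

    user_tags = {"intention": "testing", "disk_type": "hdd"}

    meta = {
        "distribution_version": "8.5.0",  # hypothetical example value
        # each user tag is stored under a "tag_"-prefixed key
        **{f"tag_{k}": v for k, v in user_tags.items()},
    }

    assert meta["tag_intention"] == "testing"
    assert meta["tag_disk_type"] == "hdd"
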
8 changes: 4 additions & 4 deletions docs/tournament.rst
@@ -5,13 +5,13 @@ Suppose we want to analyze the impact of a performance improvement.

First, we need a baseline measurement. For example::

-esrally race --track=pmc --revision=latest --user-tag="intention:baseline_github_1234"
+esrally race --track=pmc --revision=latest --user-tags="intention:baseline_github_1234"

-Above we run the baseline measurement based on the latest source code revision of Elasticsearch. We can use the command line parameter ``--user-tag`` to provide a key-value pair to document the intent of a race.
+Above we run the baseline measurement based on the latest source code revision of Elasticsearch. We can use the command line parameter ``--user-tags`` to provide a key-value pair to document the intent of a race.

Then we implement our changes and finally we want to run another benchmark to see the performance impact of the change. In that case, we do not want Rally to change our source tree and thus specify the pseudo-revision ``current``::

-esrally race --track=pmc --revision=current --user-tag="intention:reduce_alloc_1234"
+esrally race --track=pmc --revision=current --user-tags="intention:reduce_alloc_1234"

After we've run both races, we want to know about the performance impact. With Rally we can easily analyze the differences between two given races. First of all, we need to find two races to compare by issuing ``esrally list races``::

@@ -32,7 +32,7 @@ After we've run both races, we want to know about the performance impact. With R
0cfb3576-3025-4c17-b672-d6c9e811b93e 20160518T101957Z pmc append-no-conflicts defaults


-We can see that the user tag helps us to recognize races. We want to compare the two most recent races and have to provide the two race IDs in the next step::
+We can see that the user tags help us to recognize races. We want to compare the two most recent races and have to provide the two race IDs in the next step::

$ esrally compare --baseline=0bfd4542-3821-4c79-81a2-0858636068ce --contender=beb154e4-0a05-4f45-ad9f-e34f9a9e51f7

2 changes: 1 addition & 1 deletion esrally/metrics.py
@@ -310,7 +310,7 @@ def extract_user_tags_from_config(cfg):
:param cfg: The current configuration object.
:return: A dict containing user tags. If no user tags are given, an empty dict is returned.
"""
-user_tags = cfg.opts("race", "user.tag", mandatory=False)
+user_tags = cfg.opts("race", "user.tags", mandatory=False)
return extract_user_tags_from_string(user_tags)
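
``extract_user_tags_from_string`` itself is not shown in this diff. Based on the tests further down (``os:Linux,cpu:ARM`` parses into a dict, a missing ``:`` separator raises ``SystemSetupError``), a minimal sketch of its behavior could look like this; it is an inference from the tests, not the actual implementation::

    from esrally import exceptions

    def extract_user_tags_from_string(user_tags):
        # Sketch inferred from tests/metrics_test.py in this PR.
        tags = {}
        if user_tags and user_tags.strip():
            for user_tag in user_tags.split(","):
                key, sep, value = user_tag.partition(":")
                if not sep:
                    # error text mirrors the assertions in the tests (sic)
                    raise exceptions.SystemSetupError(
                        f"User tag keys and values have to separated by a ':'. Invalid value [{user_tags}]"
                    )
                tags[key] = value
        return tags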


3 changes: 2 additions & 1 deletion esrally/rally.py
@@ -651,6 +651,7 @@ def add_track_source(subparser):
)
race_parser.add_argument(
"--user-tag",
"--user-tags",

Comment on lines 653 to 654 (Contributor Author):

    Didn't change the order, to keep backward compatibility: this way the tag can still be set via the env var USER_TAG.

help="Define a user-specific key-value pair (separated by ':'). It is added to each metric record as meta info. "
"Example: intention:baseline-ticket-12345",
default="",
@@ -1047,7 +1048,7 @@ def dispatch_sub_command(arg_parser, args, cfg):
# use the race id implicitly also as the install id.
cfg.add(config.Scope.applicationOverride, "system", "install.id", args.race_id)
cfg.add(config.Scope.applicationOverride, "race", "pipeline", args.pipeline)
cfg.add(config.Scope.applicationOverride, "race", "user.tag", args.user_tag)
cfg.add(config.Scope.applicationOverride, "race", "user.tags", args.user_tag)
cfg.add(config.Scope.applicationOverride, "driver", "profiling", args.enable_driver_profiling)
cfg.add(config.Scope.applicationOverride, "driver", "assertions", args.enable_assertions)
cfg.add(config.Scope.applicationOverride, "driver", "on.error", args.on_error)
14 changes: 7 additions & 7 deletions tests/metrics_test.py
@@ -116,21 +116,21 @@ def test_no_tags_returns_empty_dict(self):

def test_missing_comma_raises_error(self):
cfg = config.Config()
cfg.add(config.Scope.application, "race", "user.tag", "invalid")
cfg.add(config.Scope.application, "race", "user.tags", "invalid")
with pytest.raises(exceptions.SystemSetupError) as ctx:
metrics.extract_user_tags_from_config(cfg)
assert ctx.value.args[0] == "User tag keys and values have to separated by a ':'. Invalid value [invalid]"

def test_missing_value_raises_error(self):
cfg = config.Config()
cfg.add(config.Scope.application, "race", "user.tag", "invalid1,invalid2")
cfg.add(config.Scope.application, "race", "user.tags", "invalid1,invalid2")
with pytest.raises(exceptions.SystemSetupError) as ctx:
metrics.extract_user_tags_from_config(cfg)
assert ctx.value.args[0] == "User tag keys and values have to separated by a ':'. Invalid value [invalid1,invalid2]"

def test_extracts_proper_user_tags(self):
cfg = config.Config()
cfg.add(config.Scope.application, "race", "user.tag", "os:Linux,cpu:ARM")
cfg.add(config.Scope.application, "race", "user.tags", "os:Linux,cpu:ARM")
assert metrics.extract_user_tags_from_config(cfg) == {"os": "Linux", "cpu": "ARM"}


@@ -364,7 +364,7 @@ def test_put_value_with_explicit_timestamps(self):
def test_put_value_with_meta_info(self):
throughput = 5000
# add a user-defined tag
-self.cfg.add(config.Scope.application, "race", "user.tag", "intention:testing,disk_type:hdd")
+self.cfg.add(config.Scope.application, "race", "user.tags", "intention:testing,disk_type:hdd")
self.metrics_store.open(self.RACE_ID, self.RACE_TIMESTAMP, "test", "append", "defaults", create=True)

# Ensure we also merge in cluster level meta info
@@ -436,7 +436,7 @@ def test_put_doc_no_meta_data(self):

def test_put_doc_with_metadata(self):
# add a user-defined tag
-self.cfg.add(config.Scope.application, "race", "user.tag", "intention:testing,disk_type:hdd")
+self.cfg.add(config.Scope.application, "race", "user.tags", "intention:testing,disk_type:hdd")
self.metrics_store.open(self.RACE_ID, self.RACE_TIMESTAMP, "test", "append", "defaults", create=True)

# Ensure we also merge in cluster level meta info
@@ -1667,7 +1667,7 @@ def test_calculate_global_stats(self):
cfg.add(config.Scope.application, "mechanic", "car.names", ["unittest_car"])
cfg.add(config.Scope.application, "mechanic", "car.params", {})
cfg.add(config.Scope.application, "mechanic", "plugin.params", {})
cfg.add(config.Scope.application, "race", "user.tag", "")
cfg.add(config.Scope.application, "race", "user.tags", "")
cfg.add(config.Scope.application, "race", "pipeline", "from-sources")
cfg.add(config.Scope.application, "track", "params", {})

@@ -1835,7 +1835,7 @@ def test_calculate_system_stats(self):
cfg.add(config.Scope.application, "mechanic", "car.names", ["unittest_car"])
cfg.add(config.Scope.application, "mechanic", "car.params", {})
cfg.add(config.Scope.application, "mechanic", "plugin.params", {})
cfg.add(config.Scope.application, "race", "user.tag", "")
cfg.add(config.Scope.application, "race", "user.tags", "")
cfg.add(config.Scope.application, "race", "pipeline", "from-sources")
cfg.add(config.Scope.application, "track", "params", {})
