During a race, Rally stores information in multiple Elasticsearch indices:
rally-races-YYYY-MM: metadata about the race, including results, in a single doc
rally-results-YYYY-MM: individual results docs, typically around one hundred per race. Results are written to the textual summary report at the end of each race, where we show up to 15 lines per task (error rate; min/mean/median/max throughput; and p50/p90/p99/p99.9/p100 latency/service time). We also have tooling to compare results.
rally-metrics-YYYY-MM: individual metrics docs, typically multiple millions (!) for long-running races. Metrics are... less pleasant to work with. You need access to the metrics store and then have to figure out your own queries or visualizations. There's also no tool to compare metrics between races.
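All three index families above are partitioned by month, so locating a race's documents starts from its timestamp. A minimal sketch of how the monthly index name is derived (the helper name is hypothetical; only the `rally-results-YYYY-MM` pattern comes from the list above):

```python
from datetime import datetime, timezone

def results_index_for(race_ts: datetime) -> str:
    # Results indices are partitioned by month: rally-results-YYYY-MM
    return f"rally-results-{race_ts.strftime('%Y-%m')}"

print(results_index_for(datetime(2022, 3, 15, tzinfo=timezone.utc)))  # rally-results-2022-03
```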
Given this, when working on #1428, @nik9000 decided to store the collected data of his new telemetry devices in the results. That way, it shows up in the summary report, it's easy to compare and he does not have to worry about the metrics store. In his own words: "i'd love an option I think to get all the info to print. in my normal workflow I don't touch the metric store and really just want to print things."
So, what should we do about it?
1. Allow non-default telemetry devices to somehow put their metrics in the report?
2. Add a command to show data for a specific telemetry device in text form? After all, if Rally can write to it, it can read from it.
3. Special-case the disk usage telemetry device and just dump its metrics? @jpountz mentioned this specific device was interesting to him to diagnose our nightly benchmarks.
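A command along the lines of option 2 would essentially query the monthly results index for one race's docs. A minimal sketch of such a query body; the `race-id` field name is an assumption about Rally's mapping, not verified against it:

```python
def results_query(race_id: str, size: int = 100) -> dict:
    # Fetch all results docs for a single race from rally-results-YYYY-MM.
    # NOTE: the "race-id" field name is assumed, not taken from Rally's docs.
    return {
        "query": {"term": {"race-id": race_id}},
        "size": size,  # typically ~100 results docs per race
    }

print(results_query("6ebc6e53-ee29-4b08-a8b2-0a95f9188f49"))
```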
I'm personally more in favor of option 2.
It's important to me at least to be able to compare the results here. Yesterday I put together esrally compare support for the field-disk-usage prototype I'm working on and the output was quite educational:
> Allow non-default telemetry devices to somehow put their metrics in the report?
After thinking about it and discussing it with @danielmitterdorfer, this is fine for useful non-default telemetry devices in general and #1428 in particular. As Nik showed, this is really useful when you care about it. For more exotic cases, #1224 would be the way to go.