Provide a mechanism to compare results from different runs #4
I'm not sure the Go benchmark format is flexible enough for what we want to measure. I'm also still figuring out just what that is. Can we repurpose/rename this issue to "Provide a mechanism to compare results from different runs"?
@ncabatoff sounds good. I have some ideas, but I guess I should first learn what you had in mind. Maybe we can discuss it over chat? Ping me on #prometheus on Freenode (https://prometheus.io/community/).
I've been thinking about this a bit. I currently see the following requirements.
And then I thought: we already have the text-file format, which would even allow us to import multiple results (with carefully set timestamps) and use Prometheus/Grafana to compare test results.
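For illustration, a single run exported that way could look roughly like this in the Prometheus text exposition format, with the benchmark name still baked into the metric name and an explicit timestamp (metric name and numbers are invented for the example):

```
# HELP prombench_foobar_queries_total Queries issued during the foobar benchmark run.
# TYPE prombench_foobar_queries_total counter
prombench_foobar_queries_total 12345 1496275200000
```

The optional trailing timestamp is milliseconds since the Unix epoch; importing one such file per run into Prometheus would let the runs be graphed side by side in Grafana.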
I like it. Why defer the Prometheus import of this data, though, i.e. why go through a text file other than as an optional extra output? I'm thinking these metrics should be published continuously by prombench, and an external Prometheus that's not involved in the tests should capture and store all these result metrics. I think I'll also move 'foobar' (the benchmark name) out of the metric name and into a label, and add a label ("runname"?) to differentiate different executions.
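A rough sketch of that labeling scheme with client_golang might look like the following (metric and label names are placeholders, not actual prombench code):

```go
package main

import (
	"fmt"

	"github.com/prometheus/client_golang/prometheus"
)

// Hypothetical metric: the benchmark name ("foobar") and a run identifier
// live in labels instead of being baked into the metric name.
var queryDuration = prometheus.NewSummaryVec(
	prometheus.SummaryOpts{
		Name: "prombench_query_duration_seconds",
		Help: "Duration of benchmark queries.",
	},
	[]string{"benchmark", "runname"},
)

func main() {
	prometheus.MustRegister(queryDuration)

	// One execution of the "foobar" benchmark records its timings like this;
	// a later execution differs only in the value of the "runname" label.
	queryDuration.WithLabelValues("foobar", "2017-06-01-master").Observe(0.042)
	fmt.Println("recorded one observation")
}
```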
…s Prometheus metrics. Write prombench's own metrics to testdir/metrics.txt on exit. Add a short sleep before exiting to make it more likely that an external Prometheus scraping us has time to get the final result. Partially addresses #4.
I guess I was looking at it from the query performance side again. I imagine scraping prombench directly is super helpful when testing the performance impact while working on some feature, but I believe a textfile output has some advantages as well:
I'm open to moving the benchmark name into a common metric as a label. I usually tend to keep the number of labels low and was looking at it with text-file readability in mind, but it might end up being just the same.
No argument, but as I said in the commit message, the change I made both publishes the metrics to Prometheus and, on exit, scrapes itself and records the results in testdir/metrics.txt. Is that not adequate?
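For reference, a scrape-yourself-and-dump step like that can be sketched with client_golang's default gatherer and the expfmt text encoder (a sketch under those assumptions, not the actual commit):

```go
package main

import (
	"log"
	"os"
	"path/filepath"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/common/expfmt"
)

// writeMetricsFile gathers everything registered in the default registry and
// writes it to <testdir>/metrics.txt in the text exposition format, i.e. the
// same output a scrape of /metrics would produce.
func writeMetricsFile(testdir string) error {
	mfs, err := prometheus.DefaultGatherer.Gather()
	if err != nil {
		return err
	}
	f, err := os.Create(filepath.Join(testdir, "metrics.txt"))
	if err != nil {
		return err
	}
	defer f.Close()

	enc := expfmt.NewEncoder(f, expfmt.FmtText)
	for _, mf := range mfs {
		if err := enc.Encode(mf); err != nil {
			return err
		}
	}
	return nil
}

func main() {
	if err := writeMetricsFile("."); err != nil {
		log.Fatal(err)
	}
}
```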
This is cool and probably enough for the beginning. I can see that at some point we'll have benchmarks which run queries at certain intervals during a benchmark run. For now, working with just the standard text-file output should be fine.
If prombench had the option to write its results in a benchcmp-compatible format, we could easily generate reports on performance changes between versions.
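benchcmp consumes the standard Go benchmark output format, so a benchcmp-compatible export would mean emitting lines like these (benchmark names and numbers are invented for illustration):

```
BenchmarkIngest        100000         12345 ns/op
BenchmarkQueryRange      2000        834512 ns/op
```

Two such files from different versions could then be compared with `benchcmp old.txt new.txt`, which prints the per-benchmark change in ns/op.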