Upcoming changes in benchmark setup #217
rbergen announced in Announcements
In the upcoming period, we'll be making some changes to the way benchmark results are generated, collected and processed. With this announcement, we want to inform the people following and contributing to this project, so that you can factor the upcoming changes into your contributions and interpret benchmark results correctly.
Open pull requests
For a number of upcoming changes, pull requests have been opened in this repo:
The current benchmark tool will be replaced by a new implementation (PR #214).
The current tool is implemented in Python and runs in a Docker container. The new tool is implemented in TypeScript and runs directly on the benchmark host. An important reason for this change is that it allows us to collect some properties of the system the benchmark is run on; a minimal sketch of what such property collection could look like is included at the end of this section. These properties will be used in the consolidated reporting that is also under development (as described below); the new tool as a whole is a step towards that consolidated reporting.
This change does mean that Node.js will be a dependency of the new benchmark tool.
We're extending the drag-race output format with tags (key/value pairs) that can be used to provide information about the characteristics of the implementation (PR #195). One of the sources of input for this change is issue #177.
The tags will be optional in principle, although implementations that do not output them may not be included in some specific future (segmented) reports.
We will retroactively apply the initially defined tags to existing implementations, or at least to those where this can be done by adding literal text to the existing drag-race output. The contributing guidelines will also be modified to document this change.
Although a specific set of initial tag keys and values will be defined, they will be implemented in such a way that additional key/value pairs can be added in a flexible manner; a sketch of what tagged output lines and their parsing could look like is included at the end of this section.
Documentation is being drafted on how the benchmark tool can be used to execute benchmark runs on a number of *nix platforms, with many thanks to @fvbakel for his contribution (PR #213).
The documentation will include information about applicable prerequisites and instructions on how to install those.
It is likely that these changes will be applied in the short term.
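As a first illustration, the sketch below shows the kind of host properties a tool running directly on the benchmark machine can collect through Node.js's built-in os module. The HostProperties shape and field names are assumptions made for this example only; they are not the data model used by the new tool in PR #214.

```typescript
// Minimal sketch: collecting host properties with Node.js's built-in os module.
// The HostProperties shape and field names are illustrative assumptions, not
// the actual data model of the new benchmark tool (PR #214).
import * as os from "os";

interface HostProperties {
  platform: string;         // e.g. "linux", "darwin", "win32"
  arch: string;             // e.g. "x64", "arm64"
  cpuModel: string;         // model string of the first logical CPU
  logicalCpus: number;      // number of logical CPUs
  totalMemoryBytes: number; // total installed memory in bytes
}

function collectHostProperties(): HostProperties {
  const cpus = os.cpus();
  return {
    platform: os.platform(),
    arch: os.arch(),
    cpuModel: cpus.length > 0 ? cpus[0].model : "unknown",
    logicalCpus: cpus.length,
    totalMemoryBytes: os.totalmem(),
  };
}

console.log(JSON.stringify(collectHostProperties(), null, 2));
```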
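The second sketch illustrates how optional tags could extend the drag-race output format. The exact syntax is defined in PR #195; this example merely assumes a semicolon-delimited result line whose optional last field carries comma-separated key=value pairs, and the tag names shown are hypothetical.

```typescript
// Minimal sketch of parsing a drag-race output line with an optional tags field.
// Assumed (not confirmed by PR #195): a line of the form
//   label;passes;duration;threads[;key1=value1,key2=value2,...]
// where the optional last field holds the comma-separated tag pairs.
interface DragRaceResult {
  label: string;
  passes: number;
  duration: number;
  threads: number;
  tags: Record<string, string>; // empty when no tags were output
}

function parseResultLine(line: string): DragRaceResult {
  const [label, passes, duration, threads, tagField] = line.trim().split(";");
  const tags: Record<string, string> = {};
  if (tagField) {
    for (const pair of tagField.split(",")) {
      const [key, value] = pair.split("=");
      if (key && value !== undefined) {
        tags[key] = value;
      }
    }
  }
  return {
    label,
    passes: Number(passes),
    duration: Number(duration),
    threads: Number(threads),
    tags,
  };
}

// Tags are optional in principle, so both of these hypothetical lines parse.
console.log(parseResultLine("example_solution;5000;5.000;1"));
console.log(parseResultLine("example_solution;5000;5.000;1;algorithm=base,faithful=yes"));
```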
Other developments
As mentioned earlier, work is also underway to develop consolidated reporting of benchmark results.
Due to the amount of work involved with this development, it's likely that it will take more time before a first version of it becomes available.
Please let us know if you have any questions or comments about the contents of this message.
Thank you,
@rhbvkleef, @marghidanu, @rbergen