Provide benchmark tests #103
Comments
Update: the HSF Data Analysis WG is considering defining benchmark PWA analyses for comparing different PWA fitter frameworks. Once those benchmarks are defined, they can be addressed in this issue.
Also worth considering: host these benchmark tests under a separate repository; otherwise they slow down the CI of TensorWaves and could clutter the repo with a lot of additional testing code. Alternatively, the tests could be run only upon merging into the stable branch.
@Leongrim this may be a nice way to record performance over time
Would be nice to profile/monitor this in a standardised way, so that we can see whether there are improvements with each PR. The benchmarks should be similar in structure, probably making use of some shared façade functions (a sketch follows below).
What we probably want as input:
The recipe file is generated with the expertsystem based on this input. All the rest (e.g. which amplitude generator to use), should be deduced from the recipe.
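A minimal sketch of what such a shared façade could look like, assuming each benchmark supplies callables that are built from its recipe file. The names (`run_benchmark`, `BenchmarkResult`) are illustrative only and not existing TensorWaves API:

```python
import time
from dataclasses import dataclass
from typing import Any, Callable


@dataclass
class BenchmarkResult:
    """Timings collected for one benchmark analysis (illustrative structure)."""

    data_generation_seconds: float
    fit_seconds: float


def run_benchmark(
    generate_data: Callable[[], Any],
    run_fit: Callable[[Any], Any],
) -> BenchmarkResult:
    """Time the data-generation and fitting stages of one benchmark analysis.

    Each benchmark would supply two callables constructed from its recipe
    file, so that all benchmarks share the same timing and reporting logic.
    """
    start = time.perf_counter()
    data = generate_data()
    data_generation_seconds = time.perf_counter() - start

    start = time.perf_counter()
    run_fit(data)
    fit_seconds = time.perf_counter() - start

    return BenchmarkResult(data_generation_seconds, fit_seconds)
```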
Some potential tools:
- pytest-benchmark (nicely integrated with pytest, though seems to be more for micro-benchmarks)
- pycallgraph (seems rather outdated)
- timeit
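As a rough illustration of how pytest-benchmark could be used here, a minimal sketch; `dummy_fit` is a hypothetical stand-in for an actual recipe-based TensorWaves fit:

```python
import math


def dummy_fit(n_iterations: int) -> float:
    # Placeholder for a real fit built from a recipe file.
    return sum(math.sin(i) for i in range(n_iterations))


def test_fit_performance(benchmark):
    # The `benchmark` fixture is injected by the pytest-benchmark plugin;
    # it calls the function repeatedly and reports timing statistics.
    result = benchmark(dummy_fit, 10_000)
    assert result is not None
```

If results are saved between runs (pytest-benchmark offers options such as `--benchmark-autosave` and `--benchmark-compare` for this), that would also cover the idea above of recording performance over time.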