Benchmarking of hardware acceleration of pyhf
For the time being, until pyhf-benchmark is packaged as an installable library, use requirements.txt both to set up your virtual environment and to provide a reproducible benchmarking environment.
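If you do not already have a virtual environment, one way to create and activate one (assuming a Unix-like shell with a Python 3 interpreter on your PATH) is:
$ python3 -m venv pyhf-benchmark
$ source pyhf-benchmark/bin/activate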
(pyhf-benchmark) $ python -m pip install -r requirements.txt
$ pyhf-benchmark run --help
Usage: pyhf-benchmark run [OPTIONS]
Automatically run and benchmark a pyhf computation.
Usage:
$ pyhf-benchmark run -c [-b] [-p] [-u] [-m] [-n] [-mm]
Examples:
$ pyhf-benchmark run -c mle -b numpy -u https://www.hepdata.net/record/resource/1267798?view=true -m [750,100]
$ pyhf-benchmark run -c mle -u https://www.hepdata.net/record/resource/1267798?view=true -m [750,100]
$ pyhf-benchmark run -c mle -b numpy -p 1Lbb-likelihoods-hepdata -m [750,100]
$ pyhf-benchmark run -c interpolation -b jax -n 0 -mm fast
$ pyhf-benchmark run -c interpolation -b numpy -n 0 -mm slow
More information:
https://github.com/pyhf/pyhf-benchmark
Options:
-c, --computation TEXT Type of computation. [required]
-b, --backend TEXT Name of the pyhf backend to run with.
-p, --path TEXT Local path to the workspace.
-u, --url TEXT URL of the online workspace data.
-m, --model-point TEXT Model point (e.g. [750,100]).
-n, --number TEXT Interpolation code number.
-mm, --mode TEXT Interpolation mode (fast or slow).
-h, --help Show this message and exit.
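As a rough illustration of what the mle computation measures, the sketch below times a maximum likelihood fit with pyhf directly across backends. It is not the tool's internals, just the kind of timing it automates, and it assumes a recent pyhf (>= 0.6.3, for pyhf.simplemodels.uncorrelated_background) on a toy model rather than the 1Lbb workspace used in the examples above:

import time
import pyhf

# Toy statistical model; the real benchmarks use published HEPData workspaces.
model = pyhf.simplemodels.uncorrelated_background(
    signal=[12.0, 11.0], bkg=[50.0, 52.0], bkg_uncertainty=[3.0, 7.0]
)
data = [51, 48] + model.config.auxdata

for backend in ["numpy", "jax"]:  # any installed pyhf backend works here
    pyhf.set_backend(backend)
    start = time.perf_counter()
    pyhf.infer.mle.fit(data, model)  # maximum likelihood fit being timed
    elapsed = time.perf_counter() - start
    print(f"{backend}: MLE fit took {elapsed:.3f} s")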
pyhf-benchmark is openly developed by Bo Zheng and the pyhf dev team.
Please check the contribution statistics for a list of contributors.
Bo Zheng was awarded an IRIS-HEP Fellowship for this work.