
Tool for exploring performance by varying JIT behavior #381

Merged
merged 4 commits into dotnet:main on Oct 3, 2023

Conversation

AndyAyersMS
Member

Initial version of a tool that can run BenchmarkDotNet (BDN) over a set of benchmarks in a feedback loop. The tool can vary JIT behavior, observe the impact of those variations on jitted code or benchmark perf, and then plan and try out further variations in pursuit of some goal (say higher perf, or smaller code, etc.).
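
A rough sketch of that feedback loop, just to show the shape of the thing; the type and method names below (RunBenchmark, PlanNextVariations, etc.) are hypothetical placeholders, not the actual code in this PR:

```csharp
// Illustrative only: hypothetical names, not the tool's actual API.
using System;
using System.Collections.Generic;

class ExperimentLoop
{
    static void Main()
    {
        var pending = new Queue<string>();
        pending.Enqueue("");                       // baseline: default JIT behavior
        var observed = new Dictionary<string, double>();

        while (pending.Count > 0)
        {
            string jitConfig = pending.Dequeue();

            // Run BDN under the chosen JIT configuration and record the result.
            double perf = RunBenchmark(jitConfig);
            observed[jitConfig] = perf;

            // Use everything observed so far to plan further variations.
            foreach (string next in PlanNextVariations(jitConfig, observed))
                pending.Enqueue(next);
        }
    }

    // Placeholder: would launch BDN with DOTNET_* settings and parse its output.
    static double RunBenchmark(string jitConfig) => 0.0;

    // Placeholder: would propose new variations biased toward the goal (perf, size, ...).
    static IEnumerable<string> PlanNextVariations(string current, IReadOnlyDictionary<string, double> seen)
        => Array.Empty<string>();
}
```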

Requires access to InstructionsRetiredExplorer as a helper tool, for parsing the ETW that BDN produces. Also requires a local enlistment of the performance repo. You will need to modify file paths within the source to adapt all this to your local setup. The tool must be run with admin privileges so that BDN can collect ETW.
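
For concreteness, the kind of setting you would need to point at your local setup looks roughly like this (the names and paths below are made up; edit the equivalents in the source):

```csharp
// Illustrative only: the actual names and defaults in the source differ.
static class LocalConfig
{
    // Local build of InstructionsRetiredExplorer, used to parse the ETW that BDN produces.
    public const string InstructionsRetiredExplorer =
        @"C:\repos\InstructionsRetiredExplorer\artifacts\InstructionsRetiredExplorer.exe";

    // Root of a local enlistment of the dotnet/performance repo.
    public const string PerformanceRepo = @"C:\repos\performance";
}
```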

The only supported variation right now is modification of which CSEs we allow the JIT to perform for the hottest Tier-1 method in each benchmark. If a benchmark does not have a sufficiently hot Tier-1 method, then it is effectively left out of the experiment.
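
As a sketch of how a CSE subset might be communicated to the JIT for one benchmark run (the DOTNET_ variable names below are placeholders, not the real config knobs; only the BDN `--filter` option is an actual flag):

```csharp
// Illustrative only: the DOTNET_* names below are placeholders.
using System.Diagnostics;

static class BdnInvocation
{
    public static ProcessStartInfo Build(string benchmarkFilter, string hotMethod, string cseSubset)
    {
        var psi = new ProcessStartInfo("dotnet")
        {
            Arguments = $"run -c Release -- --filter {benchmarkFilter}",
            UseShellExecute = false
        };

        // Restrict the experiment to the hottest Tier-1 method in the benchmark...
        psi.Environment["DOTNET_ExampleJitMethodFilter"] = hotMethod;   // placeholder name
        // ...and tell the JIT which subset of CSE candidates it may perform.
        psi.Environment["DOTNET_ExampleJitCseSelection"] = cseSubset;   // placeholder name

        return psi;
    }
}
```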

The experiments on each benchmark are prioritized to explore variations in performance for subsets of the currently performed CSEs. For methods with many CSEs we can realistically afford to explore only a small fraction of all possibilities, so we try to bias the exploration towards CSEs that have higher performance impact.
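
One way to picture that bias (purely illustrative; the tool's actual planner differs): with N candidate CSEs there are 2^N subsets, so rank the CSEs by the impact seen so far and spend the run budget on subsets built from the highest-impact candidates.

```csharp
// Illustrative only: a trivial ranking-based planner, not the tool's actual logic.
using System;
using System.Collections.Generic;
using System.Linq;

static class SubsetPlanner
{
    public static IEnumerable<int[]> Prioritize(
        int cseCount,
        IReadOnlyDictionary<int, double> observedImpact,   // per-CSE perf delta seen so far
        int budget)                                        // how many runs we can afford
    {
        // Rank CSE indices by the magnitude of their observed impact.
        int[] ranked = Enumerable.Range(0, cseCount)
            .OrderByDescending(i => observedImpact.TryGetValue(i, out double d) ? Math.Abs(d) : 0.0)
            .ToArray();

        // Spend the budget on growing prefixes of that ranking rather than
        // trying all 2^N subsets.
        for (int size = 1; size <= cseCount && budget-- > 0; size++)
            yield return ranked.Take(size).ToArray();
    }
}
```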

Results are locally cached so that rerunning the tool will not rerun experiments.
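
The cache can be as simple as one file per (benchmark, variation) pair, checked before each run; the layout below is illustrative, not the tool's actual on-disk format.

```csharp
// Illustrative only: not the tool's actual on-disk format.
using System;
using System.IO;

static class ResultCache
{
    static readonly string CacheRoot =
        Path.Combine(Environment.CurrentDirectory, "experiment-cache");

    // Note: real benchmark names would need sanitizing before use in a file name.
    static string KeyFile(string benchmark, string variation) =>
        Path.Combine(CacheRoot, $"{benchmark}-{variation}.txt");

    public static bool TryGet(string benchmark, string variation, out string result)
    {
        string path = KeyFile(benchmark, variation);
        if (File.Exists(path))
        {
            result = File.ReadAllText(path);
            return true;
        }
        result = null;
        return false;
    }

    public static void Store(string benchmark, string variation, string result)
    {
        Directory.CreateDirectory(CacheRoot);
        File.WriteAllText(KeyFile(benchmark, variation), result);
    }
}
```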

Experiments are summarized in a CSV file whose schema lists benchmark name, number of CSEs, code size, perf score, and measured perf.
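
The shape of the output is roughly the following; the column names and the placeholder row are illustrative, not actual results:

```
Benchmark,NumCse,CodeSize,PerfScore,Perf
<benchmark name>,<number of CSEs>,<code size in bytes>,<perf score>,<measured perf>
```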

@AndyAyersMS
Member Author

cc @dotnet/jit-contrib

I have lots of ideas for building on this, but wanted to get an initial version checked in somewhere.

Member

@BruceForstall left a comment


Great to see this as a snapshot.

I'd suggest adding a README.md that includes at least the text of the GitHub PR description, plus a short example of how to use/invoke it.

I have lots of ideas for building on this,

... and the README can include a brain dump of ideas for future work.

Member

@jakobbotsch left a comment


LGTM. I do fear that something like this may quickly bit rot in this repo, though, given the specificity. But I wasn't personally quite sure how to make the tool more generic and still remain useful; all my uses of it have been very ad hoc.

@AndyAyersMS merged commit 122ec0c into dotnet:main on Oct 3, 2023
3 participants