
Reduce overhead of sampling profiler by having only one thread do it #6433

Merged: 4 commits into master on Dec 2, 2021

Conversation

@abadams (Member) commented Nov 19, 2021

The current built-in profiler has a lot of overhead for fine-grained compute_at schedules. E.g., for the bgu app, turning on the profiler inflates runtime by about 50%. This happens because all threads write their current state to the same cache line, causing a lot of cross-cache traffic: each of these writes is effectively a cache miss.

This PR changes it so that whenever lots of threads are all doing the same thing (i.e. in a leaf parallel loop body and not inside a fork node), one of them is elected to write to that status field. The election is done by racing to grab a pipeline-scope token using an atomic op; the winner does the reporting. This speeds things up in two ways: first, the threads that don't write don't incur the cache misses; second, the thread that does write can keep that line in cache, with the sampler thread just snooping on the bus traffic when it wants to read instead of invalidating the cache line (assuming I remember how MESI works properly).

@steven-johnson (Contributor) left a comment


LGTM, but the explanatory description in this PR should really be added in code comments somewhere.

@@ -30,6 +72,8 @@ class InjectProfiling : public IRMutator {

string pipeline_name;

bool in_fork = false, in_parallel = false, in_leaf_task = false;
@steven-johnson (Contributor) commented on this line:

ubernit: this is an unusual formatting for our code; we almost always put member var declarations one-per-line.

@abadams (Member, Author) commented Nov 24, 2021

I'm a bit nervous about how hard it is to test this. I'd appreciate it if someone could run an important production pipeline at Google before and after and tell me whether the profile looks reasonable.

@abadams (Member, Author) commented Dec 2, 2021

Actually, a systematic test of the app suite is probably enough. I'll do that and report back.

@abadams (Member, Author) commented Dec 2, 2021

The geomean overhead of the old profiler across the apps is 19% (usually it's zero, but there are a couple of apps with fine-grained compute where the overhead is large). The new profiler has a geomean overhead of 3%. The few profiles I've spot-checked look reasonable. Merging.

@abadams merged commit 5cf9ae5 into master on Dec 2, 2021