Use DefPathHash instead of HirId to break inlining cycles. #85321
r? @jackh726 (rust-highfive has picked a reviewer for you, use r? to override)
I don't feel really comfortable reviewing this, since I'm not familiar with the intricacies here. I also don't know who would be best to review.
    // So don't do it if that is enabled.
    if !self.tcx.dep_graph.is_fully_enabled() && self.hir_id < callee_hir_id {
    // a lower `DefPathHash` than the callee. This ensures that the callee will
    // not inline us. This trick only works even with incremental compilation,
- This trick only works even with
+ This trick works even with
    // since `DefPathHash` is stable.
    if self.tcx.def_path_hash(caller_def_id)
        < self.tcx.def_path_hash(callee_def_id.to_def_id())
    {
        return Ok(());
    }
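The hash comparison in the quoted code can be modeled in isolation. The following sketch is purely illustrative: it uses std's `DefaultHasher` and string paths as stand-ins for the compiler's `DefPathHash` and `DefId`s. It shows the key property the tie break relies on: for any two distinct definitions, at most one direction of the comparison succeeds, so two functions can never both take this fast path into each other.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Stand-in for a stable per-definition hash. The real compiler uses
// `DefPathHash`; `DefaultHasher` over a path string is only a model.
fn def_hash(path: &str) -> u64 {
    let mut hasher = DefaultHasher::new();
    path.hash(&mut hasher);
    hasher.finish()
}

// The fast path: a caller may inline a callee without any reachability
// check only when its hash is strictly smaller than the callee's.
// Strict ordering means the check can never succeed in both directions,
// so mutual inlining through this path alone is impossible.
fn may_inline_without_check(caller: &str, callee: &str) -> bool {
    def_hash(caller) < def_hash(callee)
}

fn main() {
    let (a, b) = ("crate::foo", "crate::bar");
    // At most one direction passes the fast path.
    assert!(!(may_inline_without_check(a, b) && may_inline_without_check(b, a)));
}
```

Note that the losing direction is not rejected outright; it merely falls through to the more expensive cycle check.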
Why does this still exist now that we have `mir_callgraph_reachable`?
Two advantages:
- it halves the number of calls to `mir_callgraph_reachable`,
- it allows for inlining even in the presence of a cycle.
But this completely omits half of all inlining candidates. Also, with `HirId` comparisons this would give a consistent result. Now any rustc version increment would cause the result to change, as the def path hash contains the `-Cmetadata` calculated by cargo based on the compiler version, which would break the tests and seemingly randomly regress or improve performance when enabling MIR inlining.
> But this completely omits half of all inlining candidates.

To the contrary, this branch allows half of all candidates without further examination.
> Also with hir id comparisons this would give a consistent result. Now any rustc version increment would cause the result to change as the def path hash contains the -Cmetadata calculated by cargo based on the compiler version, which would break the tests and seemingly randomly regress/improve performance when enabling mir inlining.

I will change the test to only consider the local part of the `DefPathHash`, removing the influence of `-Cmetadata`.
I would prefer to avoid optimization decisions being nondeterministic or otherwise hard to predict - we already see a lot of pain due to CGU partitioning, for example. This seems likely to be even less explainable, right?
`mir_callgraph_reachable` is relatively expensive. If we can avoid calling it half the time, that seems like a nice win!

The unpredictable nature of the check is a disadvantage, but the condition is effectively `caller_hash < callee_hash || !reachable(callee, caller)`, which means that it becomes unpredictable only in the presence of a cycle, since otherwise the MIR would be available for inlining anyway. I think this is a reasonable trade-off to make.
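That effective condition can be exercised on a toy call graph. Everything in this sketch is made up for illustration: the node names, the arbitrary hash values, and the `reachable` helper, which is a plain depth-first search standing in for `mir_callgraph_reachable`.

```rust
use std::collections::{HashMap, HashSet};

// Toy call graph: node -> direct callees. DFS reachability stands in
// for the compiler's `mir_callgraph_reachable` query.
fn reachable(graph: &HashMap<&str, Vec<&str>>, from: &str, to: &str) -> bool {
    let mut seen = HashSet::new();
    let mut stack = vec![from];
    while let Some(n) = stack.pop() {
        if n == to {
            return true;
        }
        if seen.insert(n) {
            if let Some(next) = graph.get(n) {
                stack.extend(next.iter().copied());
            }
        }
    }
    false
}

// The effective inlining condition from the discussion:
// caller_hash < callee_hash || !reachable(callee, caller).
fn may_inline(
    graph: &HashMap<&str, Vec<&str>>,
    hashes: &HashMap<&str, u64>,
    caller: &str,
    callee: &str,
) -> bool {
    hashes[caller] < hashes[callee] || !reachable(graph, callee, caller)
}

fn main() {
    // A cycle a <-> b, plus c calling a with no back edge.
    let graph =
        HashMap::from([("a", vec!["b"]), ("b", vec!["a"]), ("c", vec!["a"])]);
    let hashes = HashMap::from([("a", 1u64), ("b", 2), ("c", 3)]);
    // Within the cycle, only the lower-hash side may inline.
    assert!(may_inline(&graph, &hashes, "a", "b"));
    assert!(!may_inline(&graph, &hashes, "b", "a"));
    // Off any cycle, the reachability check permits inlining regardless
    // of hash order, so the tie break only matters on cycles.
    assert!(may_inline(&graph, &hashes, "c", "a"));
}
```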
BTW. Does comparing `local_hash` help? It seems to incorporate the complete `DefPathHash` of a parent:

rust/compiler/rustc_hir/src/definitions.rs, lines 143 to 156 in fa72878:
    parent.hash(&mut hasher);
    let DisambiguatedDefPathData { ref data, disambiguator } = self.disambiguated_data;
    std::mem::discriminant(data).hash(&mut hasher);
    if let Some(name) = data.get_opt_name() {
        // Get a stable hash by considering the symbol chars rather than
        // the symbol index.
        name.as_str().hash(&mut hasher);
    }
    disambiguator.hash(&mut hasher);
    let local_hash: u64 = hasher.finish();
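A simplified model of that snippet, with std's `DefaultHasher` in place of the compiler's stable hasher and the `DefPathData` details collapsed into a name plus disambiguator, illustrates the concern: the parent's complete hash is folded in, so anything that perturbs the crate-root hash (such as `-Cmetadata`) propagates into every descendant's `local_hash`.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Hypothetical, simplified `local_hash`: fold in the parent's complete
// hash, then the item's own name and disambiguator (mirroring the
// quoted snippet, minus the `DefPathData` machinery).
fn local_hash(parent_hash: u64, name: &str, disambiguator: u32) -> u64 {
    let mut hasher = DefaultHasher::new();
    parent_hash.hash(&mut hasher);
    name.hash(&mut hasher);
    disambiguator.hash(&mut hasher);
    hasher.finish()
}

fn main() {
    // Same item, two different crate-root hashes (e.g. two -Cmetadata
    // values): the descendant's hash changes with the root.
    assert_ne!(local_hash(0xAAAA, "foo", 0), local_hash(0xBBBB, "foo", 0));
    // The computation itself is deterministic for fixed inputs.
    assert_eq!(local_hash(0xAAAA, "foo", 0), local_hash(0xAAAA, "foo", 0));
}
```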
> which means that it becomes unpredictable only in a presence of a cycle, since otherwise MIR would be available for inlining anyway. I think this is a reasonable trade-off to make.

I see. In that case I think it is fine.
@cjgillot any idea on a better reviewer?
Let's r? @bjorn3 😉
The inliner isn't enabled by default. Any idea how to do a perf run for this?

Generally speaking there's no trivial way - my recommendation is to edit the PR to change the defaults and run a try build with that, then revert that change on the PR and run a try build again. You want to make sure the two try builds have the same parent commit (from master). After that, queue perf runs for each of the try builds and compare the try-build hashes in the UI.
☔ The latest upstream changes (presumably #80522) made this pull request unmergeable. Please resolve the merge conflicts.

@cjgillot I would like to see a perf run first before merging this. #85321 (comment) suggests how to do this.

triage: merge conflicts
☔ The latest upstream changes (presumably #90408) made this pull request unmergeable. Please resolve the merge conflicts.
☔ The latest upstream changes (presumably #94121) made this pull request unmergeable. Please resolve the merge conflicts.
I recall that this PR was blocked on a perf run with the inliner enabled. Enabling the inliner by default is not possible yet, because of two unresolved type normalization bugs. #92233 will remove this inlining tie breaker. I see two options:
I think merging this PR first is best so as to not break MIR inlining even more. r=me with or without the nit fixed.
@bors r+
📌 Commit 297dde9 has been approved by |
☀️ Test successful - checks-actions |
Finished benchmarking commit (ec7b753): comparison url. Summary:
If you disagree with this performance assessment, please file an issue in rust-lang/rustc-perf. @rustbot label: -perf-regression
The `DefPathHash` is stable across incremental compilation sessions, so it provides a total order on `LocalDefId`. Using it instead of `HirId` ensures the MIR inliner has the same behaviour for incremental and non-incremental compilation.

A downside is that the cycle tie break is not as predictable as with `HirId`.