Improve optimization of harmless call cycles #38231
base: master
Conversation
When inference detects a call cycle, one of two things can happen:

1. It determines that, in order for inference to converge, it needs to limit the signatures of some methods to something more general, or
2. The cycle is determined to be harmless at the inference level (because although there is a cycle in the CFG, there is no dependency cycle of type information).

In the first case, we simply disable optimizations, which is sensible, because we're likely to have to recompute some information anyway when we actually get there dynamically.

In the second case, however, we do perform optimizations, but it's a bit of an unusual case. In particular, inference usually delivers methods to inlining in postorder (meaning callees get optimized before their callers), such that a caller can always inline a callee. However, if there is a cycle, there is of course no unique postorder of the functions, since by definition we're looking at a strongly connected component. In this case, we would essentially pick an arbitrary order (which, moreover, could differ depending on the point at which we entered the cycle, and was subsequently cached, leading to potential performance instabilities depending on function order).

However, the arbitrary order is quite possibly suboptimal. For example, in #36414 we have a cycle `randn` -> `_randn` -> `randn_unlikely` -> `randn`. In this cycle, the author of the code expected both `randn` and `_randn` to inline and annotated the functions as such. However, in 1.5+, the order we happened to pick would have inlined `randn_unlikely` into `_randn` (had it not been marked `@noinline`), with a hard call break between `randn` and `_randn`, which is problematic from a performance standpoint.

This PR aims to address this by introducing a heuristic: if some functions in a cycle are marked `@noinline`, we want to make sure to infer these last (since they won't ever be inlined anyway). To ensure this, while restoring postorder if the `@noinline` annotations happen to break the cycle, we perform a DFS traversal rooted at the `@noinline` functions and then optimize the functions in the cycle in DFS-postorder. This may of course still not be a true postorder of the inlining graph (if the `@noinline` functions don't break the cycle), but even in that case it should be no worse than the default order.

Fixes #36414
Closes #37234
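The ordering heuristic can be sketched as follows. This is a minimal Python illustration with hypothetical names (`optimization_order`, the `callees` map), not the actual compiler code: given the call graph of the cycle's members and the set of members marked `@noinline`, root a DFS at the `@noinline` functions and emit the members in DFS-postorder, so callees come before their callers and the `@noinline` functions come out last.

```python
# Sketch (hypothetical names) of the DFS-postorder heuristic described above.
# `callees` maps each function in the cycle to the cycle members it calls;
# `noinline` is the set of members marked @noinline.

def optimization_order(callees, noinline):
    visited = set()
    order = []

    def dfs(f):
        if f in visited:
            return
        visited.add(f)
        for g in callees.get(f, ()):
            dfs(g)
        order.append(f)  # postorder: emitted only after all callees

    # Root the traversal at the @noinline members first, so each of them
    # ends up after everything reachable from it.
    for f in sorted(noinline):
        dfs(f)
    # Pick up any cycle members not reachable from a @noinline root.
    for f in sorted(callees):
        dfs(f)
    return order

# The cycle from #36414: randn -> _randn -> randn_unlikely -> randn,
# with randn_unlikely marked @noinline.
cycle = {
    "randn": ["_randn"],
    "_randn": ["randn_unlikely"],
    "randn_unlikely": ["randn"],
}
print(optimization_order(cycle, {"randn_unlikely"}))
# -> ['_randn', 'randn', 'randn_unlikely']
```

With this order, `_randn` is optimized first (so `randn` can inline it as the author intended), and the `@noinline` function `randn_unlikely` is handled last, where its inability to be inlined costs nothing.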
This seems somewhat complicated, and seems to be trying to work around the design intent of this code without actually updating it. This function was intended to produce stable results by batching the <inference>, <optimization>, and <publishing> steps (thereby preserving them as unordered). While not critical (since `code_typed` will always opt to lie to you about this anyways), it seemed preferable for a predictable run time. This was known to be suboptimal (particularly in the case of …
I think your comment got garbled. I don't really know what you're saying.
The structure of the …
I don't really know what you're saying. This is entirely within the …
Yeah, I don't want it to preserve batching there. If there's a better order, it should define and use it.
Alright, then I'm just confused at this point. Could you elaborate on what you're proposing as an alternative?
@vtjnash and I discussed this on a call, and the primary objection is to the new cycle-caching scheme this PR introduced. The reason for that scheme was that we currently batch inference frames for optimization so that we can intersect their world ages. However, @vtjnash argued that such batching is unnecessary and doesn't make much sense. I'll be submitting a separate pull request to remove the batching, which will make the caching scheme in this PR unnecessary.
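As a toy illustration of the batching mentioned above (hypothetical names; not the actual compiler code): each inferred frame is valid for a range of "world ages", and batching the frames of a cycle lets one intersected validity range be computed and cached for the whole batch.

```python
# Toy sketch of intersecting the world-age validity ranges of a batch of
# inference frames. `frames` is an iterable of (min_world, max_world)
# inclusive ranges; the result is the range in which every frame is valid,
# or None if the ranges don't overlap.

def intersect_world_ages(frames):
    lo = max(f[0] for f in frames)  # latest lower bound
    hi = min(f[1] for f in frames)  # earliest upper bound
    return (lo, hi) if lo <= hi else None

print(intersect_world_ages([(1, 10), (3, 8), (5, 12)]))  # -> (5, 8)
```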
I've tried that, and not batching the world-age computation seems to cause about a 10-15% inference slowdown, presumably because it leads to unnecessary re-inference of some cycles. Given that, should we just go with this instead (which doesn't have the slowdown)?
Bump: @vtjnash?