
leaf-types cache for ml-matches #36166

Merged: 6 commits from jn/ml-matches-leaf-cache into master, Jun 19, 2020
Conversation

vtjnash (Member) commented Jun 5, 2020

When we do a method lookup and get back a single result, it's very easy for us to cache that information, since we essentially already have the data structure for it (TypeMapEntry). This lets us shave off a bit of time on micro-benchmarks by putting this information into a hash table instead of a tree (the tree is still used to handle the general case):

julia> @btime methods(+, (Int, Int))
master:  3.322 μs (21 allocations: 912 bytes)
PR:      2.069 μs (18 allocations: 784 bytes)

julia> @btime Base._methods_by_ftype(Tuple{typeof(+), Int, Int}, -1, typemax(UInt64))
master:  1.496 μs (7 allocations: 464 bytes)
PR:      0.175 μs (4 allocations: 336 bytes)
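
To make the cache idea concrete, here is a toy Julia sketch of the scheme, not the actual C implementation in gf.c: a hash table keyed on the concrete tuple type serves repeat single-result lookups directly, and everything else falls through to the general search. The LEAF_CACHE name and cached_methods helper are made up for illustration, and the sketch ignores world ages and method-table invalidation.

# Illustrative only: the real cache lives in the C runtime and stores
# TypeMapEntry objects.
const LEAF_CACHE = IdDict{Type,Any}()   # hash table keyed on the tuple type

function cached_methods(@nospecialize(tt::Type))
    # Fast path: hash lookup for concrete signatures seen before.
    r = get(LEAF_CACHE, tt, nothing)
    r === nothing || return r
    # Slow path: the general (tree-structured) search.
    ms = Base._methods_by_ftype(tt, -1, typemax(UInt64))
    # Cache only single-result lookups for concrete dispatch tuples, where
    # the query has no proper subtypes that could change the answer.
    if Base.isdispatchtuple(tt) && ms isa Vector && length(ms) == 1
        LEAF_CACHE[tt] = ms
    end
    return ms
end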

vtjnash added 6 commits June 5, 2020 15:20

Since we are doing a subtype search (like at the top of the function), the resulting object from the search does not need to be the same. This lets us put more objects in here without incurring additional search code (just the initial cost of computing the hash for the tuple-type lookup).
vtjnash requested a review from JeffBezanson June 5, 2020 19:32
@@ -390,6 +390,7 @@ end
t = Timer(0) do t
    tc[] += 1
end
Libc.systemsleep(0.005)
Member:
Is there something better we can do here? Tests that depend on system scheduling behavior almost always turn out flaky.

Member Author (vtjnash):

Without this, the test depends on scheduling (it assumes that this statement takes at least a millisecond to complete). The sleep completely removes that flakiness by ensuring the statement always takes at least that long.

Member:
Ah, I see.
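
For reference, the pattern under discussion is roughly the following (condensed, with illustrative assertions around the tc counter from the diff above). Libc.systemsleep blocks the OS thread without yielding to Julia's scheduler, so wall-clock time is guaranteed to advance past the timer's deadline before any task switch can run the callback:

using Test

tc = Ref(0)
t = Timer(0) do t
    tc[] += 1            # runs only once the event loop gets a chance
end
Libc.systemsleep(0.005)  # blocks without yielding: the callback cannot fire yet
@test tc[] == 0          # deterministic: no yield since the Timer was created
sleep(0.1)               # yield; the already-due timer now fires
@test tc[] == 1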

            return env.t;
        }
    }
    if (((jl_datatype_t*)unw)->isdispatchtuple) {
Member:

Isn't this the same condition as the previous if block?

Member Author (vtjnash):

It is, character for character, in this PR right now. But conceptually the two checks don't share a common root, so I've listed them separately; that way, if someone alters one, it won't affect the other.

Member:

Alright, you're the method-cache czar; it just looked odd as-is. Maybe there should be helpers that make clear in which sense this condition is used, even if the implementation of those helpers is the same at the moment?

Member Author (vtjnash):

True, I'll keep that in mind. It won't be the last time (even this month) that I edit the code, haha.
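
For what it's worth, the helpers suggested above might look something like this at the Julia level (hypothetical names; the real check is C code in the runtime). Both reduce to the same predicate today, but each call site would document which property it relies on:

# Hypothetical intent-revealing wrappers around the same predicate.

# "May this signature be used as a key in the leaf-type hash cache?"
is_leafcache_key(@nospecialize(tt::Type)) = Base.isdispatchtuple(tt)

# "Is this signature exact enough for a single-entry lookup?"
is_exact_lookup(@nospecialize(tt::Type)) = Base.isdispatchtuple(tt)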

vtjnash (Member Author) commented Jun 15, 2020

I plan to merge this tomorrow (after #36260). Please let me know if review is incomplete.

vtjnash merged commit 92197a7 into master Jun 19, 2020
vtjnash deleted the jn/ml-matches-leaf-cache branch June 19, 2020 01:32
JeffBezanson (Member):
Looks fine to me. No reason to block merging, but I have a couple of thoughts.

I wonder if we can take this further and memoize more of abstract_call_gf_by_type, i.e. avoid the "double lookup" of ml_matches followed by typeinf_edge.

    JL_TIMING(METHOD_LOOKUP_FAST);
    mt = jl_gf_mtable(F);
    entry = jl_typemap_assoc_exact(mt->cache, F, args, nargs, jl_cachearg_offset(mt), world);
    if (entry == NULL) {
Member:

I suspect we'll sometimes lose a bit of performance from needing two lookups here. It would be nice to be able to combine the tables somehow.

Member Author (vtjnash):

Combining them would be likely to hurt performance and memory usage (I had that situation in an intermediate state of this PR, before I finished separating the tables). What we're doing here is expecting that one of these two tables is most likely empty (functions usually either get specialized on leaf types entirely or they don't), so in most cases we just bypass the first table without even attempting a lookup.
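
A toy sketch of that bypass, with hypothetical field names (the runtime version operates on the C-level method table and a real TypeMap tree):

struct ToyMethodTable
    leafcache::IdDict{Type,Any}    # hash table: concrete signature => entry
    cache::Vector{Pair{Type,Any}}  # stand-in for the tree-structured TypeMap
end

function toy_lookup(mt::ToyMethodTable, @nospecialize(tt::Type))
    # Most method tables populate only one of the two containers, so the
    # empty one costs an isempty check rather than a failed full lookup.
    if !isempty(mt.leafcache) && Base.isdispatchtuple(tt)
        entry = get(mt.leafcache, tt, nothing)
        entry === nothing || return entry
    end
    for (sig, entry) in mt.cache   # general search (a tree in the runtime)
        tt <: sig && return entry
    end
    return nothing
end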

        env.match.ti = mi->specTypes;
    }
    else {
        // TODO: should we use jl_subtype_env instead (since we know that `type <: meth->sig` by transitivity)
Member:

👍, plus we can skip this entirely if there are no static params.

Member Author (vtjnash) commented Jun 19, 2020:

Oh wow, yeah, not sure how I missed that; it seems so obvious. I'm also thinking of removing this value entirely from the results (since it should often be available from the later typeinf_edge cache lookup).
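
In Julia terms, the skip suggested above looks roughly like the following sketch (match_env is a made-up name; the actual change would be in the C intersection path in gf.c):

# Sketch: only recover the static-parameter environment when the method
# signature actually binds type variables (i.e. is a UnionAll).
function match_env(@nospecialize(tt::Type), @nospecialize(methsig::Type))
    methsig isa UnionAll || return Core.svec()  # no static params: nothing to compute
    res = ccall(:jl_type_intersection_with_env, Any, (Any, Any),
                tt, methsig)::Core.SimpleVector
    return res[2]  # res[1] is the intersection type, res[2] the environment
end

# e.g. match_env(Tuple{typeof(first), Vector{Int}},
#                Tuple{typeof(first), Vector{T}} where T)  # -> svec(Int64)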

Labels: compiler:latency (Compiler latency)

3 participants