
I heard you like inferring types but don't like global state... #21677

Merged: vtjnash merged 21 commits into master from jr/types on May 15, 2017
Conversation

@jrevels (Member) commented May 2, 2017

...so @vtjnash and I removed the work queue by refactoring inference to run recursively.

I'm not sure yet how far this will make it through CI. @vtjnash will probably be able to answer any algorithmic questions better than I can - this is my first foray into the dangerous world of type inference. I mainly sat there like a monkey poking at the keyboard.

Besides reducing dependence on global state, this PR provides the following goodies:

  • exact cycle detection
  • more optimal inference state provided to inlining pass
  • Jameson says it might be faster (still need to time this)

This PR enables the following future work:

  • a more optimal inlining order
  • effect-free type inference
  • more optimal/reliable inference recursion detection rules (will allow automatic effect-free computation and constant propagation)

@vtjnash is sad that he has to write a new blog post now.
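
To make the refactoring concrete, here is a minimal, hypothetical sketch in plain Julia (illustrative names only, not the compiler's actual code): instead of pushing frames onto a global work queue, inference recurses directly into callees, and a cycle is detected exactly when the requested method is already on the current parent chain.

```julia
# Hypothetical sketch of recursion-driven inference. `defs` maps each
# "method" to the methods it calls; no global queue or counters are needed.
struct Frame
    id::Symbol
    parent::Union{Frame,Nothing}
end

function infer(defs::Dict{Symbol,Vector{Symbol}}, id::Symbol,
               parent::Union{Frame,Nothing}=nothing)
    # Exact cycle detection: walk the parent chain instead of consulting
    # non-local state.
    f = parent
    while f !== nothing
        f.id === id && return :cycle
        f = f.parent
    end
    frame = Frame(id, parent)
    for callee in defs[id]
        infer(defs, callee, frame)   # recurse directly into the callee
    end
    return :done
end

# Example: :f calls :g and :g calls :f back; the inner request for :f is
# recognized as a cycle by inspecting the parent chain alone.
infer(Dict(:f => [:g], :g => [:f]), :f)
```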

@JeffBezanson (Member):

Cool!

Does this do anything to avoid using a very large amount of stack space?

const nactive = Array{Int,0}()
nactive[] = 0
const workq = Vector{InferenceState}() # set of InferenceState objects that can make immediate progress
# None! There is none! :)
Member:

:-)

end
end

# limit argument type size growth
if mightlimitlength || mightlimitdepth
# TODO: FIXME: this heuristic depends on non-local state making type-inference unpredictable
Member:

Does this change fix this issue?

Member Author:

Unfortunately not, but this PR does enable a code rewrite to fix the issue. This is also relevant to your other question:

> Does this do anything to avoid using a very large amount of stack space?

In short, nope. @vtjnash believes that the future rewrite of the call recursion limit code here will help with stack space usage.

@kshyatt added the compiler:inference (Type inference) label May 2, 2017
return frame
end

function in_egal(item, collection)
Member:

Already exists in this file; called contains_is.
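
For readers unfamiliar with the jargon: "egal" means identity comparison with `===`, so such a helper is a membership test that never calls user-defined `==`. A rough sketch of the idea:

```julia
# Identity-based membership test ("egal" is Julia jargon for ===):
# unlike `in`, which uses ==, this never invokes user-defined equality.
function contains_is(itr, x)
    for y in itr
        y === x && return true
    end
    return false
end

contains_is([0.0], -0.0)   # false: -0.0 !== 0.0, even though -0.0 == 0.0
```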

@jrevels force-pushed the jr/types branch 2 times, most recently from ad3534e to c260e1c on May 2, 2017 20:10
@martinholters (Member):

Looks like

@test Core.Inference.isempty(Core.Inference.active) && Core.Inference.isempty(Core.Inference.workq)
is the only problem hit by CI? That would be pretty awesome!

@jrevels (Member Author) commented May 3, 2017

> Looks like
>
> @test Core.Inference.isempty(Core.Inference.active) && Core.Inference.isempty(Core.Inference.workq)
>
> is the only problem hit by CI? That would be pretty awesome!

That would be crazy! I removed the now-invalid reference to the work queue (and kept the rest of that tiny module), so I guess we'll see what happens.

Let's see if this can make it through a whole BaseBenchmarks run:

@nanosoldier runbenchmarks(ALL, vs = ":master")

@nanosoldier (Collaborator):

Your benchmark job has completed - possible performance regressions were detected. A full report can be found here. cc @jrevels

@martinholters (Member):

This seems to be going so smoothly, it's almost scary.

@jrevels (Member Author) commented May 3, 2017

Could anybody hazard a guess as to what's causing the Base.Test.TestSetException on 32-bit Windows?

@vtjnash (Member) commented May 3, 2017

It seems like something is failing to drop a gc-reference (https://ci.appveyor.com/project/JuliaLang/julia/build/1.0.13610/job/iex595jdaqb6w9og#L2826)

@@ -296,10 +305,7 @@ end

#### current global inference state ####
Member:

Seriously though, it would be nice to just delete this section :)

end
end

function resolve_call_cycle!(linfo::MethodInstance, parent::InferenceState)
Member:

This function could use a comment; it's not obvious to me what "resolve" means in this context.
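
In the meantime, here is a hedged, self-contained sketch of what "resolving" plausibly means here (the types and fields are illustrative stand-ins for MethodInstance/InferenceState, not the compiler's actual data structures): check whether the requested method is already being inferred somewhere up the caller's parent chain, and if so, merge the intervening frames into one cycle so they converge as a group.

```julia
# Illustrative stand-in types; not the compiler's actual data structures.
mutable struct SketchFrame
    linfo::Symbol
    parent::Union{SketchFrame,Nothing}
    cyclemates::Vector{SketchFrame}
end
SketchFrame(linfo, parent) = SketchFrame(linfo, parent, SketchFrame[])

function resolve_call_cycle_sketch!(linfo::Symbol, parent::SketchFrame)
    frame = parent
    while frame !== nothing
        if frame.linfo === linfo
            # Cycle found: fold every frame between `parent` and `frame`
            # into the cycle so the whole group is converged together.
            f = parent
            while f !== frame
                push!(frame.cyclemates, f)
                f = f.parent
            end
            return frame
        end
        frame = frame.parent
    end
    return nothing   # no cycle; the caller builds a fresh frame instead
end
```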

return nothing
function merge_call_chain!(parent::InferenceState, ancestor::InferenceState, child::InferenceState)
# add backedge of parent <- child
# then add all backeges of parent <- parent.parent
Contributor:

backedges

src = ccall(:jl_uncompress_ast, Any, (Any, Any), li.def, li.def.source)
function InferenceState(linfo::MethodInstance,
optimize::Bool, cached::Bool, params::InferenceParams)
# prepare an InferenceState object for infering lambda
Contributor:

inferring

base/sysimg.jl Outdated
@@ -73,6 +72,9 @@ include("refpointer.jl")
include("checked.jl")
importall .Checked

# buggy handling of ispure in type-inference means this should be after re-defining the basic operations that they might try to call
Contributor:

Please line-wrap this comment.

@tkelman (Contributor) commented May 13, 2017

We could still use devdocs explaining what a backedge is

(a general comment and lingering TODO from the last rewrite, not necessary for this PR to solve)

@jrevels (Member Author) commented May 13, 2017

Made the requested clean-ups and added a comment describing resolve_call_cycle (@vtjnash might want to check that my explanation is correct/complete).

> We could still use devdocs explaining what a backedge is

AFAIK, a backedge is a reference from one frame to another (within a call graph) that facilitates the reverse-propagation of type information. I'll leave the actual documentation to Jameson, though I imagine that could be done in a separate PR.
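
A minimal sketch of that description (hypothetical names, not Julia's internal API): each callee records backedges to its callers, and when the callee's inferred result changes, the new information is pushed backwards along them.

```julia
# Hypothetical backedge bookkeeping: callee => callers that depend on it.
const backedges = Dict{Symbol,Vector{Symbol}}()

add_backedge!(callee::Symbol, caller::Symbol) =
    push!(get!(backedges, callee, Symbol[]), caller)

# When `callee`'s inferred result changes, revisit everything pointing at it.
function propagate_backwards!(callee::Symbol, reinfer!::Function)
    for caller in get(backedges, callee, Symbol[])
        reinfer!(caller)
    end
end

add_backedge!(:g, :f)   # :f calls :g, so :g keeps a backedge to :f
propagate_backwards!(:g, c -> println("re-infer ", c))
```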

@vtjnash merged commit 5847317 into master May 15, 2017
@vtjnash (Member) commented May 15, 2017

> We could still use devdocs explaining what a backedge is

There are several different types. An edge is simply a function call. A backedge is the same thing, backwards. At some point, I'll get Jeff to sit down for a couple of days with me (and anyone else who's interested) and try to figure out what they are.

@martinholters deleted the jr/types branch May 15, 2017 18:29
@vtjnash (Member) commented May 15, 2017

https://juliacomputing.com/blog/2017/05/15/inference-converage2.html

@andyferris (Member):

Will this one still make it into v0.6-final? It fixes some intermittent inference problems I'm having on rc3 (unit tests failing @inferred while @code_warntype looks OK at the REPL).
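
For context on that failure mode: `@inferred f(x)` (Base.Test in the v0.6 era; the Test stdlib later) throws unless the inferred return type matches the type of the value actually returned, whereas `@code_warntype` merely displays what inference produced, so flaky inference shows up first in tests. A small self-contained example:

```julia
using Base.Test   # on Julia >= 0.7: `using Test`

unstable(x) = x > 0 ? x : 0.0          # may return Int or Float64
stable(x)   = x > 0 ? float(x) : 0.0   # always returns Float64

@inferred stable(1)                                  # passes, returns 1.0
@test_throws ErrorException @inferred unstable(1)    # inference gives a Union
```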

@tkelman (Contributor) commented Jun 11, 2017

Can you contribute base tests that were fixed by this?

ararslan pushed a commit that referenced this pull request Sep 11, 2017
removes the global work queue, which allows increasing the precision of cycle detection and resolution, and decreases the need for the threading synchronization lock surrounding inference

updates `inInference` flag usage to be merely a hint for `jl_type_infer` not to bother trying to infer a method (this helps avoid accidental infinite recursion over inferring type inference), enabling inference of inference itself

ref #21677
(cherry picked from commit 5847317)
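
A hedged sketch of the "hint" semantics described in that commit message (the struct and function names here are illustrative, not the compiler's): the flag only prevents re-entry rather than signaling an error, so inference can safely be run on the inference code itself.

```julia
# Illustrative stand-in for a method instance with an in-progress marker.
mutable struct SketchMethodInstance
    name::Symbol
    inInference::Bool
end

function type_infer_sketch!(mi::SketchMethodInstance, infer::Function)
    mi.inInference && return nothing   # merely a hint: skip, don't error
    mi.inInference = true
    try
        return infer(mi)               # may recursively reach `mi` again
    finally
        mi.inInference = false
    end
end
```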
ararslan pushed a commit that referenced this pull request Sep 13, 2017
(cherry picked from commit 5847317)
vtjnash pushed a commit that referenced this pull request Sep 14, 2017
(cherry picked from commit 5847317)
ararslan pushed a commit that referenced this pull request Sep 15, 2017
(cherry picked from commit 5847317)