inference: mega lattice implementation overhaul #42596
Conversation
How will you implement …
Force-pushed from 8647107 to 723ed65.
Hm, even if this overhaul helps us improve the correctness/accuracy of inference, we may end up with worse inference performance. I took this simple benchmark that compares the performance of `Const` construction in the current lattice implementation versus the proposed design:
julia> using BenchmarkTools
julia> # simulates a `Const` construction in the current lattice implementation
struct Const
val
Const(@nospecialize val) = new(val)
end
julia> mkconst1(xs) = (for i in 1:length(xs)
xs[i] = Const(xs[i])
end; xs)
mkconst1 (generic function with 1 method)
julia> # actual `Const` construction in the proposed lattice design
mkconst2(xs) = (for i in 1:length(xs)
xs[i] = Core.Compiler.Const(xs[i])
end; xs)
mkconst2 (generic function with 1 method)
julia> @benchmark mkconst1(xs) setup = (xs = Any[i for i in 1:100])
BenchmarkTools.Trial: 10000 samples with 199 evaluations.
Range (min … max): 439.025 ns … 12.916 μs ┊ GC (min … max): 0.00% … 94.56%
Time (median): 492.656 ns ┊ GC (median): 0.00%
Time (mean ± σ): 640.437 ns ± 509.804 ns ┊ GC (mean ± σ): 5.02% ± 6.92%
█▅▃▂▃▃▃▃▂▃▄▄▄▃▂▂▁▁▁▁▁ ▁ ▁
█████████████████████████████▇█▇▇▇▇▆▇▇▆▆▆▅▅▅▅▅▅▄▄▅▄▃▄▂▄▄▄▃▃▄▄ █
439 ns Histogram: log(frequency) by time 1.81 μs <
Memory estimate: 1.56 KiB, allocs estimate: 100.
julia> @benchmark mkconst2(xs) setup = (xs = Any[i for i in 1:100])
BenchmarkTools.Trial: 10000 samples with 9 evaluations.
Range (min … max): 2.717 μs … 213.877 μs ┊ GC (min … max): 0.00% … 95.28%
Time (median): 2.832 μs ┊ GC (median): 0.00%
Time (mean ± σ): 3.435 μs ± 5.473 μs ┊ GC (mean ± σ): 7.37% ± 4.69%
▇█▅▃▃▂ ▂▂ ▁ ▁▁▁▁ ▁
██████▇▇███▇██▇▅▆▆▇████████▆▅▆▃▃▄▆██▇▆▆▅▅▆█▇▇▅▆▅▄▆▃▃▄▅▃▄▂▃▃ █
2.72 μs Histogram: log(frequency) by time 7.58 μs <
Memory estimate: 12.50 KiB, allocs estimate: 200.
(And similarly, we now wrap every native Julia type into a lattice element.) Well, inference is already costly and lattice element constructions might not be a serious bottleneck, so I also measured the end-to-end latency of loading and using Plots:
# master (8985a2d629a525e42b9f12145e51b3aee95906b1)
~/julia2 master aviatesk@amdci2 10s
❯ ./usr/bin/julia -e '@time using Plots; @time plot(rand(10,3))'
7.281519 seconds (12.80 M allocations: 821.300 MiB, 4.65% gc time, 27.71% compilation time)
2.648468 seconds (3.55 M allocations: 198.837 MiB, 1.31% gc time, 99.79% compilation time)
# this PR
~/julia3 remotes/origin/avi/typelattice* aviatesk@amdci2 11s
❯ ./usr/bin/julia -e '@time using Plots; @time plot(rand(10,3))'
7.385233 seconds (13.16 M allocations: 847.765 MiB, 5.15% gc time, 29.00% compilation time)
2.789241 seconds (4.35 M allocations: 270.663 MiB, 1.92% gc time, 99.82% compilation time)
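To illustrate where the extra allocations can come from, here is a hypothetical sketch (made-up types, not this PR's actual `TypeLattice` layout) of a wrapper that packs a `Const`-like payload next to the type. Each element then costs roughly two allocations: one for the payload and one to box the outer wrapper when it lands in an `Any`-typed vector.

using BenchmarkTools

struct MyConst
    val
    MyConst(@nospecialize val) = new(val)
end

# hypothetical wrapper layout; the real TypeLattice carries more fields
struct MyTypeLattice
    typ::Type
    constant::MyConst
end

wrap(@nospecialize x) = MyTypeLattice(typeof(x), MyConst(x))
wrapall!(xs) = (for i in 1:length(xs)
    xs[i] = wrap(xs[i])
end; xs)

@benchmark wrapall!(xs) setup = (xs = Any[i for i in 1:100])
# expect an allocs estimate of about 200: one `MyConst` plus one box per element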
The increase in allocations is interesting, but yes, I was wondering about the additional overheads since …
There are many unnecessary allocations/indirections because of remaining work; e.g. the code at lines 1619 to 1620 in 723ed65 is very hot but not yet optimized.
I hope this sort of remaining TODO is what is contributing to the performance regression.
So I did some profiling.
# setup new AbstractInterpreter
const CC = Core.Compiler
import .CC: MethodInstance, CodeInstance, WorldRange, WorldView
struct CCProfilerCache
dict::IdDict{MethodInstance,CodeInstance}
end
struct CCProfiler <: CC.AbstractInterpreter
interp::CC.NativeInterpreter
cache::CCProfilerCache
CCProfiler(world = Base.get_world_counter();
interp = CC.NativeInterpreter(world),
cache = CCProfilerCache(IdDict{MethodInstance,CodeInstance}())
) = new(interp, cache)
end
CC.InferenceParams(profiler::CCProfiler) = CC.InferenceParams(profiler.interp)
CC.OptimizationParams(profiler::CCProfiler) = CC.OptimizationParams(profiler.interp)
CC.get_world_counter(profiler::CCProfiler) = CC.get_world_counter(profiler.interp)
CC.get_inference_cache(profiler::CCProfiler) = CC.get_inference_cache(profiler.interp)
CC.code_cache(profiler::CCProfiler) = WorldView(profiler.cache, WorldRange(CC.get_world_counter(profiler)))
CC.get(wvc::WorldView{<:CCProfilerCache}, mi::MethodInstance, default) = get(wvc.cache.dict, mi, default)
CC.getindex(wvc::WorldView{<:CCProfilerCache}, mi::MethodInstance) = getindex(wvc.cache.dict, mi)
CC.haskey(wvc::WorldView{<:CCProfilerCache}, mi::MethodInstance) = haskey(wvc.cache.dict, mi)
CC.setindex!(wvc::WorldView{<:CCProfilerCache}, ci::CodeInstance, mi::MethodInstance) = setindex!(wvc.cache.dict, ci, mi)
# compile things, and check that the global cache isn't shared across inferences
@time Base.return_types(println, (QuoteNode,), CCProfiler());
@time Base.return_types(println, (QuoteNode,), CCProfiler());
@time Base.return_types(println, (QuoteNode,), CCProfiler());
# profile !
using Profile
@profile Base.return_types(println, (QuoteNode,), CCProfiler());
Profile.print(; format=:flat, sortedby=:count)
Profile.print(; recur=:flat, mincount=10)
Here are the results:
julia> # compile things, and check that global cache isn't shared across inferences
@time Base.return_types(println, (QuoteNode,), CCProfiler());
4.526008 seconds (9.79 M allocations: 600.839 MiB, 3.07% gc time, 62.95% compilation time)
julia> @time Base.return_types(println, (QuoteNode,), CCProfiler());
0.930251 seconds (6.52 M allocations: 400.996 MiB, 16.07% gc time, 100.00% compilation time)
julia> @time Base.return_types(println, (QuoteNode,), CCProfiler());
0.909979 seconds (6.52 M allocations: 400.996 MiB, 15.17% gc time, 100.00% compilation time)
julia> # profile !
using Profile
julia> @profile Base.return_types(println, (QuoteNode,), CCProfiler());
julia> Profile.print(; format=:flat, sortedby=:count, mincount=10)
Count Overhead File Line Function
===== ======== ==== ==== ========
10 0 @Base/compiler/typeinfer.jl 361 transform_result_for_cache
10 0 @Base/compiler/ssair/ir.jl 1443 finish(compact::Core.Compiler.IncrementalCompact)
10 0 @Base/compiler/ssair/ir.jl 1327 maybe_erase_unused!(extra_worklist::Vector{Int64}, compact::Core.Compiler.Incre...
10 0 @Base/array.jl 411 getindex
10 0 @Base/compiler/ssair/slot2ssa.jl 899 construct_ssa!(ci::Core.CodeInfo, ir::Core.Compiler.IRCode, domtree::Core.Compi...
10 0 @Base/compiler/ssair/legacy.jl 6 inflate_ir(ci::Core.CodeInfo, linfo::MethodInstance)
10 0 @Base/compiler/ssair/slot2ssa.jl 15 scan_entry!(result::Vector{Core.Compiler.SlotInfo}, idx::Int64, stmt::Any)
10 0 @Base/range.jl 870 iterate
10 0 @Base/compiler/inferenceresult.jl 65 most_general_argtypes(method::Method, specTypes::Any, isva::Bool, withfirst::Bool)
10 0 @Base/compiler/ssair/slot2ssa.jl 807 construct_ssa!(ci::Core.CodeInfo, ir::Core.Compiler.IRCode, domtree::Core.Compi...
11 0 @Base/compiler/ssair/inlining.jl 929 call_sig(ir::Core.Compiler.IRCode, stmt::Expr)
11 0 @Base/compiler/ssair/ir.jl 253 setindex!(is::Core.Compiler.InstructionStream, newval::Core.Compiler.Instructio...
11 0 @Base/compiler/tfuncs.jl 1668 builtin_tfunction(interp::Core.Compiler.AbstractInterpreter, f::Any, argtypes::...
11 0 @Base/compiler/compiler.jl 142 anymap(f::typeof(Core.Compiler.widenconst), a::Vector{Union{Core.Compiler._Abst...
11 0 @Base/compiler/ssair/legacy.jl 42 inflate_ir(ci::Core.CodeInfo, sptypes::Vector{Union{Core.Compiler._AbstractLatt...
11 0 @Base/compiler/abstractinterpretation.jl 547 abstract_call_method_with_const_args(interp::CCProfiler, result::Core.Compiler....
11 0 @Base/compiler/typelattice.jl 190 TypeLattice#265
11 0 @Base/compiler/typelattice.jl 187 Type##kw
11 0 @Base/compiler/typelattice.jl 231 Const
12 0 @Base/compiler/ssair/ir.jl 201 Core.Compiler.InstructionStream(len::Int64)
12 0 @Base/compiler/ssair/passes.jl 766 getfield_elim_pass!(ir::Core.Compiler.IRCode)
12 0 @Base/compiler/ssair/ir.jl 1325 maybe_erase_unused!
12 0 @Base/compiler/ssair/queries.jl 87 compact_exprtype(compact::Core.Compiler.IncrementalCompact, value::Any)
12 0 @Base/compiler/ssair/inlining.jl 584 batch_inline!(todo::Vector{Pair{Int64, Any}}, ir::Core.Compiler.IRCode, linetab...
12 0 @Base/compiler/optimize.jl 325 run_passes(ci::Core.CodeInfo, sv::Core.Compiler.OptimizationState)
12 0 @Base/compiler/abstractinterpretation.jl 1485 abstract_eval_value(interp::CCProfiler, e::Any, vtypes::Vector{Core.Compiler.Va...
12 0 @Base/compiler/ssair/ir.jl 293 IRCode
12 0 @Base/compiler/types.jl 35 Type##kw
12 0 @Base/compiler/abstractinterpretation.jl 542 abstract_call_method_with_const_args(interp::CCProfiler, result::Core.Compiler....
12 0 @Base/compiler/inferenceresult.jl 53 most_general_argtypes(method::Method, specTypes::Any, isva::Bool, withfirst::Bool)
13 0 @Base/compiler/ssair/slot2ssa.jl 635 construct_ssa!(ci::Core.CodeInfo, ir::Core.Compiler.IRCode, domtree::Core.Compi...
13 0 @Base/boot.jl 468 Array
13 0 @Base/compiler/abstractinterpretation.jl 1486 abstract_eval_value(interp::CCProfiler, e::Any, vtypes::Vector{Core.Compiler.Va...
14 0 @Base/compiler/optimize.jl 333 run_passes(ci::Core.CodeInfo, sv::Core.Compiler.OptimizationState)
14 0 @Base/compiler/ssair/ir.jl 1306 iterate(compact::Core.Compiler.IncrementalCompact, ::Tuple{Int64, Int64})
14 0 @Base/compiler/abstractinterpretation.jl 1096 abstract_call_builtin(interp::CCProfiler, f::Core.Builtin, fargs::Vector{Any}, ...
15 0 @Base/compiler/ssair/ir.jl 274 NewNodeStream
15 0 @Base/array.jl 921 getindex
15 0 @Base/compiler/ssair/domtree.jl 204 construct_domtree(blocks::Vector{Core.Compiler.BasicBlock})
15 0 @Base/compiler/ssair/passes.jl 594 getfield_elim_pass!(ir::Core.Compiler.IRCode)
15 0 @Base/compiler/inferencestate.jl 253 Core.Compiler.InferenceState(result::Core.Compiler.InferenceResult, cache::Symb...
16 0 @Base/compiler/abstractinterpretation.jl 1265 abstract_call_known(interp::CCProfiler, f::Any, fargs::Vector{Any}, argtypes::V...
16 0 @Base/compiler/ssair/inlining.jl 838 Core.Compiler.InliningTodo(mi::MethodInstance, src::Vector{UInt8})
16 0 @Base/compiler/ssair/ir.jl 1035 process_node!(compact::Core.Compiler.IncrementalCompact, result_idx::Int64, ins...
17 0 @Base/compiler/ssair/inlining.jl 589 batch_inline!(todo::Vector{Pair{Int64, Any}}, ir::Core.Compiler.IRCode, linetab...
17 0 @Base/compiler/typeutils.jl 56 argtypes_to_type
17 0 @Base/compiler/inferenceresult.jl 155 cache_lookup(linfo::MethodInstance, given_argtypes::Vector{Union{Core.Compiler....
17 0 @Base/compiler/typelattice.jl 229 Const
18 0 @Base/compiler/ssair/legacy.jl 10 inflate_ir(ci::Core.CodeInfo, linfo::MethodInstance)
18 0 @Base/compiler/typeinfer.jl 366 transform_result_for_cache
18 0 @Base/compiler/typeinfer.jl 347 maybe_compress_codeinfo(interp::CCProfiler, linfo::MethodInstance, ci::Core.Cod...
19 0 @Base/compiler/ssair/ir.jl 1456 compact!(code::Core.Compiler.IRCode, allow_cfg_transforms::Bool)
19 0 @Base/compiler/utilities.jl 128 retrieve_code_info
20 0 @Base/compiler/ssair/ir.jl 477 iterate(it::Core.Compiler.UseRefIterator, #unused#::Nothing)
20 0 @Base/compiler/ssair/slot2ssa.jl 27 scan_entry!(result::Vector{Core.Compiler.SlotInfo}, idx::Int64, stmt::Any)
20 0 @Base/compiler/types.jl 35 InferenceResult
20 0 @Base/compiler/typeinfer.jl 821 typeinf_edge(interp::CCProfiler, method::Method, atypes::Any, sparams::Core.Sim...
21 0 @Base/array.jl 1055 push!
21 0 @Base/compiler/inferencestate.jl 250 Core.Compiler.InferenceState(result::Core.Compiler.InferenceResult, cache::Symb...
22 0 @Base/compiler/ssair/ir.jl 469 iterate
23 0 @Base/array.jl 533 fill
23 0 @Base/array.jl 531 fill
24 0 @Base/compiler/ssair/ir.jl 1307 iterate(compact::Core.Compiler.IncrementalCompact, ::Tuple{Int64, Int64})
24 0 @Base/compiler/abstractinterpretation.jl 1494 collect_argtypes(interp::CCProfiler, ea::Vector{Any}, vtypes::Vector{Core.Compi...
25 0 @Base/compiler/ssair/inlining.jl 1097 process_simple!(ir::Core.Compiler.IRCode, todo::Vector{Pair{Int64, Any}}, idx::...
26 0 @Base/compiler/ssair/ir.jl 1454 compact!
26 0 @Base/compiler/abstractinterpretation.jl 1518 abstract_eval_statement(interp::CCProfiler, e::Any, vtypes::Vector{Core.Compile...
26 0 @Base/array.jl 1008 _growend!
26 0 @Base/compiler/typeinfer.jl 822 typeinf_edge(interp::CCProfiler, method::Method, atypes::Any, sparams::Core.Sim...
27 0 @Base/compiler/abstractinterpretation.jl 526 abstract_call_method_with_const_args(interp::CCProfiler, result::Core.Compiler....
28 0 @Base/compiler/typeinfer.jl 280 _typeinf(interp::CCProfiler, frame::Core.Compiler.InferenceState)
28 0 @Base/compiler/typeinfer.jl 392 cache_result!(interp::CCProfiler, result::Core.Compiler.InferenceResult)
28 0 @Base/compiler/abstractinterpretation.jl 1667 abstract_eval_global
32 0 @Base/compiler/types.jl 35 InferenceResult#245
33 0 @Base/compiler/utilities.jl 256 argextype(x::Any, src::Core.Compiler.IRCode, sptypes::Vector{Union{Core.Compile...
33 0 @Base/compiler/inferenceresult.jl 52 most_general_argtypes(method::Method, specTypes::Any, isva::Bool)
34 0 @Base/compiler/ssair/slot2ssa.jl 45 scan_slot_def_use(nargs::Int64, ci::Core.CodeInfo, code::Vector{Any})
34 0 @Base/compiler/inferenceresult.jl 139 matching_cache_argtypes(linfo::MethodInstance, #unused#::Nothing, va_override::...
35 0 @Base/boot.jl 458 Array
35 0 @Base/compiler/optimize.jl 423 slot2reg
38 0 @Base/compiler/ssair/queries.jl 101 is_known_call(e::Expr, func::Any, src::Core.Compiler.IncrementalCompact)
38 0 @Base/compiler/ssair/queries.jl 91 compact_exprtype(compact::Core.Compiler.IncrementalCompact, value::Any)
41 0 @Base/compiler/ssair/inlining.jl 1263 assemble_inline_todo!(ir::Core.Compiler.IRCode, state::Core.Compiler.InliningSt...
43 0 @Base/compiler/utilities.jl 234 argextype
49 0 @Base/compiler/ssair/inlining.jl 73 ssa_inlining_pass!(ir::Core.Compiler.IRCode, linetable::Vector{Core.LineInfoNod...
53 0 @Base/iddict.jl 178 get!
53 0 @Base/compiler/methodtable.jl 97 (::Core.Compiler.var"#259#260"{Int64, Core.Compiler.CachedMethodTable{Core.Comp...
53 0 @Base/reflection.jl 908 _methods_by_ftype
53 0 @Base/compiler/methodtable.jl 68 #findall#256
53 0 @Base/compiler/methodtable.jl 65 (::Core.Compiler.var"#findall##kw")(::NamedTuple{(:limit,), Tuple{Int64}}, ::ty...
54 0 @Base/compiler/methodtable.jl 96 #findall#258
54 0 @Base/compiler/methodtable.jl 95 (::Core.Compiler.var"#findall##kw")(::NamedTuple{(:limit,), Tuple{Int64}}, ::ty...
55 0 @Base/compiler/abstractinterpretation.jl 29 abstract_call_gf_by_type(interp::CCProfiler, f::Any, fargs::Vector{Any}, argtyp...
55 0 @Base/compiler/abstractinterpretation.jl 301 find_matching_methods(argtypes::Vector{Union{Core.Compiler._AbstractLattice, Co...
60 0 @Base/compiler/optimize.jl 424 slot2reg
62 0 @Base/compiler/ssair/legacy.jl 4 inflate_ir(ci::Core.CodeInfo, linfo::MethodInstance)
62 0 @Base/compiler/inferencestate.jl 321 sptypes_from_meth_instance(linfo::MethodInstance)
63 0 @Base/compiler/typelattice.jl 210 Core.Compiler.TypeLattice(x::Core.Compiler.TypeLattice; typ::Type, constant::Co...
73 0 @Base/boot.jl 449 Array
75 0 @Base/compiler/typelattice.jl 210 Core.Compiler.TypeLattice(x::Core.Compiler.TypeLattice)
79 0 @Base/compiler/optimize.jl 330 run_passes(ci::Core.CodeInfo, sv::Core.Compiler.OptimizationState)
91 0 @Base/compiler/ssair/inlining.jl 842 Core.Compiler.InliningTodo(mi::MethodInstance, src::Vector{UInt8})
103 0 @Base/compiler/optimize.jl 322 run_passes(ci::Core.CodeInfo, sv::Core.Compiler.OptimizationState)
105 0 @Base/compiler/ssair/inlining.jl 829 analyze_method!(match::Core.MethodMatch, atypes::Vector{Union{Core.Compiler._Ab...
106 0 @Base/compiler/ssair/inlining.jl 1181 analyze_single_call!(ir::Core.Compiler.IRCode, todo::Vector{Pair{Int64, Any}}, ...
108 0 @Base/compiler/ssair/inlining.jl 779 resolve_todo(todo::Core.Compiler.InliningTodo, state::Core.Compiler.InliningSta...
113 0 @Base/compiler/ssair/inlining.jl 1325 assemble_inline_todo!(ir::Core.Compiler.IRCode, state::Core.Compiler.InliningSt...
164 0 @Base/compiler/ssair/inlining.jl 70 ssa_inlining_pass!(ir::Core.Compiler.IRCode, linetable::Vector{Core.LineInfoNod...
174 0 @Base/compiler/abstractinterpretation.jl 550 abstract_call_method_with_const_args(interp::CCProfiler, result::Core.Compiler....
204 0 @Base/compiler/abstractinterpretation.jl 103 abstract_call_gf_by_type(interp::CCProfiler, f::Any, fargs::Vector{Any}, argtyp...
215 0 @Base/compiler/optimize.jl 326 run_passes(ci::Core.CodeInfo, sv::Core.Compiler.OptimizationState)
266 0 @Base/compiler/abstractinterpretation.jl 1896 typeinf_local(interp::CCProfiler, frame::Core.Compiler.InferenceState)
447 0 @Base/compiler/optimize.jl 315 optimize
450 0 @Base/compiler/typeinfer.jl 255 _typeinf(interp::CCProfiler, frame::Core.Compiler.InferenceState)
761 0 @Base/reflection.jl 1246 return_types(f::Any, types::Any, interp::CCProfiler)
761 0 @Base/compiler/typeinfer.jl 8 typeinf
761 0 @Base/compiler/typeinfer.jl 934 typeinf_type(interp::CCProfiler, method::Method, atypes::Any, sparams::Core.Sim...
761 0 @Base/compiler/typeinfer.jl 209 typeinf(interp::CCProfiler, frame::Core.Compiler.InferenceState)
761 0 @Base/compiler/typeinfer.jl 226 _typeinf(interp::CCProfiler, frame::Core.Compiler.InferenceState)
761 0 @Base/compiler/abstractinterpretation.jl 2016 typeinf_nocycle(interp::CCProfiler, frame::Core.Compiler.InferenceState)
761 0 @Base/compiler/abstractinterpretation.jl 1916 typeinf_local(interp::CCProfiler, frame::Core.Compiler.InferenceState)
761 0 @Base/compiler/abstractinterpretation.jl 1522 abstract_eval_statement(interp::CCProfiler, e::Any, vtypes::Vector{Core.Compile...
761 0 @Base/compiler/abstractinterpretation.jl 1382 abstract_call(interp::CCProfiler, fargs::Vector{Any}, argtypes::Vector{Union{Co...
761 0 @Base/compiler/abstractinterpretation.jl 1398 abstract_call(interp::CCProfiler, fargs::Vector{Any}, argtypes::Vector{Union{Co...
761 0 @Base/compiler/abstractinterpretation.jl 1259 abstract_call_known(interp::CCProfiler, f::Any, fargs::Vector{Any}, argtypes::V...
761 0 @Base/compiler/abstractinterpretation.jl 1004 abstract_apply(interp::CCProfiler, argtypes::Vector{Union{Core.Compiler._Abstra...
761 0 @Base/compiler/abstractinterpretation.jl 1344 abstract_call_known(interp::CCProfiler, f::Any, fargs::Nothing, argtypes::Vecto...
761 0 @Base/compiler/abstractinterpretation.jl 95 abstract_call_gf_by_type(interp::CCProfiler, f::Any, fargs::Nothing, argtypes::...
761 0 @Base/compiler/abstractinterpretation.jl 498 abstract_call_method(interp::CCProfiler, method::Method, sig::Any, sparams::Cor...
761 0 @Base/compiler/typeinfer.jl 831 typeinf_edge(interp::CCProfiler, method::Method, atypes::Any, sparams::Core.Sim...
770 770 @Base/client.jl 497 _start()
770 0 @Base/client.jl 309 exec_options(opts::Base.JLOptions)
770 0 @Base/essentials.jl 718 #invokelatest#2
770 0 @Base/essentials.jl 716 invokelatest
770 0 @Base/client.jl 379 run_main_repl(interactive::Bool, quiet::Bool, banner::Bool, history_file::Bool,...
770 0 @Base/client.jl 394 (::Base.var"#932#934"{Bool, Bool, Bool})(REPL::Module)
770 0 @REPL/src/REPL.jl 350 run_repl(repl::AbstractREPL, consumer::Any)
770 0 @REPL/src/REPL.jl 363 run_repl(repl::AbstractREPL, consumer::Any; backend_on_current_task::Bool)
770 0 @REPL/src/REPL.jl 230 start_repl_backend(backend::REPL.REPLBackend, consumer::Any)
770 0 @REPL/src/REPL.jl 245 repl_backend_loop(backend::REPL.REPLBackend)
770 0 @Base/boot.jl 368 eval
770 0 @REPL/src/REPL.jl 151 eval_user_input(ast::Any, backend::REPL.REPLBackend)
Total snapshots: 797 (100% utilization across all threads and tasks. Use the `groupby` kwarg to break down by thread and/or task)
julia> Profile.print(; recur=:flat, mincount=10)
Overhead ╎ [+additional indent] Count File:Line; Function
=========================================================
╎770 @Base/client.jl:497; _start()
╎ 770 @Base/client.jl:309; exec_options(opts::Base.JLOptions)
╎ 770 @Base/client.jl:379; run_main_repl(interactive::Bool, quiet::Bool, banner::Bool, history_file::Bool, col...
╎ 770 @Base/essentials.jl:716; invokelatest
╎ 770 @Base/essentials.jl:718; #invokelatest#2
╎ 770 @Base/client.jl:394; (::Base.var"#932#934"{Bool, Bool, Bool})(REPL::Module)
╎ ╎ 770 @REPL/src/REPL.jl:350; run_repl(repl::AbstractREPL, consumer::Any)
╎ ╎ 770 @REPL/src/REPL.jl:363; run_repl(repl::AbstractREPL, consumer::Any; backend_on_current_task::Bool)
╎ ╎ 770 @REPL/src/REPL.jl:230; start_repl_backend(backend::REPL.REPLBackend, consumer::Any)
╎ ╎ 770 @REPL/src/REPL.jl:245; repl_backend_loop(backend::REPL.REPLBackend)
╎ ╎ 770 @REPL/src/REPL.jl:151; eval_user_input(ast::Any, backend::REPL.REPLBackend)
8╎ ╎ ╎ 770 @Base/boot.jl:368; eval
╎ ╎ ╎ 761 @Base/reflection.jl:1246; return_types(f::Any, types::Any, interp::CCProfiler)
╎ ╎ ╎ 761 @Base/compiler/typeinfer.jl:934; typeinf_type(interp::CCProfiler, method::Method, atypes::Any, sparams::Core....
╎ ╎ ╎ 761 @Base/compiler/typeinfer.jl:8; typeinf
╎ ╎ ╎ 761 @Base/compiler/typeinfer.jl:209; typeinf(interp::CCProfiler, frame::Core.Compiler.InferenceState)
╎ ╎ ╎ ╎ 761 @Base/compiler/typeinfer.jl:226; _typeinf(interp::CCProfiler, frame::Core.Compiler.InferenceState)
╎ ╎ ╎ ╎ 761 @Base/compiler/abstractinterpretation.jl:2016; typeinf_nocycle(interp::CCProfiler, frame::Core.Compiler.InferenceState)
╎ ╎ ╎ ╎ 266 @Base/compiler/abstractinterpretation.jl:1896; typeinf_local(interp::CCProfiler, frame::Core.Compiler.InferenceState)
╎ ╎ ╎ ╎ 761 @Base/compiler/abstractinterpretation.jl:1916; typeinf_local(interp::CCProfiler, frame::Core.Compiler.InferenceState)
╎ ╎ ╎ ╎ 18 @Base/compiler/abstractinterpretation.jl:1518; abstract_eval_statement(interp::CCProfiler, e::Any, vtypes::Vector{Core....
╎ ╎ ╎ ╎ 17 @Base/compiler/abstractinterpretation.jl:1494; collect_argtypes(interp::CCProfiler, ea::Vector{Any}, vtypes::Vector{Cor...
╎ ╎ ╎ ╎ 761 @Base/compiler/abstractinterpretation.jl:1522; abstract_eval_statement(interp::CCProfiler, e::Any, vtypes::Vector{Core....
1╎ ╎ ╎ ╎ 761 @Base/compiler/abstractinterpretation.jl:1382; abstract_call(interp::CCProfiler, fargs::Vector{Any}, argtypes::Vector{U...
2╎ ╎ ╎ ╎ ╎ 761 @Base/compiler/abstractinterpretation.jl:1398; abstract_call(interp::CCProfiler, fargs::Vector{Any}, argtypes::Vector{...
╎ ╎ ╎ ╎ ╎ 761 @Base/compiler/abstractinterpretation.jl:1259; abstract_call_known(interp::CCProfiler, f::Any, fargs::Vector{Any}, ar...
╎ ╎ ╎ ╎ ╎ 761 @Base/compiler/abstractinterpretation.jl:1004; abstract_apply(interp::CCProfiler, argtypes::Vector{Union{Core.Compile...
╎ ╎ ╎ ╎ ╎ 761 @Base/compiler/abstractinterpretation.jl:1398; abstract_call(interp::CCProfiler, fargs::Nothing, argtypes::Vector{Un...
╎ ╎ ╎ ╎ ╎ 761 @Base/compiler/abstractinterpretation.jl:1344; abstract_call_known(interp::CCProfiler, f::Any, fargs::Nothing, argty...
╎ ╎ ╎ ╎ ╎ ╎ 761 @Base/compiler/abstractinterpretation.jl:95; abstract_call_gf_by_type(interp::CCProfiler, f::Any, fargs::Nothing, ...
╎ ╎ ╎ ╎ ╎ ╎ 761 @Base/compiler/abstractinterpretation.jl:498; abstract_call_method(interp::CCProfiler, method::Method, sig::Any, s...
╎ ╎ ╎ ╎ ╎ ╎ 20 @Base/compiler/typeinfer.jl:821; typeinf_edge(interp::CCProfiler, method::Method, atypes::Any, spara...
╎ ╎ ╎ ╎ ╎ ╎ 20 @Base/compiler/types.jl:35; InferenceResult
╎ ╎ ╎ ╎ ╎ ╎ 20 @Base/compiler/types.jl:35; InferenceResult#245
╎ ╎ ╎ ╎ ╎ ╎ ╎ 20 @Base/compiler/inferenceresult.jl:139; matching_cache_argtypes(linfo::MethodInstance, #unused#::Nothing, ...
╎ ╎ ╎ ╎ ╎ ╎ ╎ 20 @Base/compiler/inferenceresult.jl:52; most_general_argtypes(method::Method, specTypes::Any, isva::Bool)
╎ ╎ ╎ ╎ ╎ ╎ ╎ 10 @Base/compiler/inferenceresult.jl:65; most_general_argtypes(method::Method, specTypes::Any, isva::Bool...
10╎ ╎ ╎ ╎ ╎ ╎ ╎ 10 @Base/range.jl:870; iterate
╎ ╎ ╎ ╎ ╎ ╎ 26 @Base/compiler/typeinfer.jl:822; typeinf_edge(interp::CCProfiler, method::Method, atypes::Any, spara...
╎ ╎ ╎ ╎ ╎ ╎ 15 @Base/compiler/inferencestate.jl:250; Core.Compiler.InferenceState(result::Core.Compiler.InferenceResult,...
13╎ ╎ ╎ ╎ ╎ ╎ 13 @Base/compiler/utilities.jl:128; retrieve_code_info
╎ ╎ ╎ ╎ ╎ ╎ 10 @Base/compiler/inferencestate.jl:253; Core.Compiler.InferenceState(result::Core.Compiler.InferenceResult,...
╎ ╎ ╎ ╎ ╎ ╎ 761 @Base/compiler/typeinfer.jl:831; typeinf_edge(interp::CCProfiler, method::Method, atypes::Any, spara...
╎ ╎ ╎ ╎ ╎ 16 @Base/compiler/abstractinterpretation.jl:1265; abstract_call_known(interp::CCProfiler, f::Any, fargs::Vector{Any}, ar...
╎ ╎ ╎ ╎ ╎ 14 @Base/compiler/abstractinterpretation.jl:1096; abstract_call_builtin(interp::CCProfiler, f::Core.Builtin, fargs::Vect...
1╎ ╎ ╎ ╎ ╎ 11 @Base/compiler/tfuncs.jl:1668; builtin_tfunction(interp::Core.Compiler.AbstractInterpreter, f::Any, ...
1╎ ╎ ╎ ╎ ╎ 760 @Base/compiler/abstractinterpretation.jl:1344; abstract_call_known(interp::CCProfiler, f::Any, fargs::Vector{Any}, ar...
╎ ╎ ╎ ╎ ╎ 54 @Base/compiler/abstractinterpretation.jl:29; abstract_call_gf_by_type(interp::CCProfiler, f::Any, fargs::Vector{Any...
1╎ ╎ ╎ ╎ ╎ 54 @Base/compiler/abstractinterpretation.jl:301; find_matching_methods(argtypes::Vector{Union{Core.Compiler._AbstractLa...
╎ ╎ ╎ ╎ ╎ 53 @Base/compiler/methodtable.jl:95; (::Core.Compiler.var"#findall##kw")(::NamedTuple{(:limit,), Tuple{Int6...
╎ ╎ ╎ ╎ ╎ ╎ 53 @Base/compiler/methodtable.jl:96; #findall#258
╎ ╎ ╎ ╎ ╎ ╎ 52 @Base/iddict.jl:178; get!
╎ ╎ ╎ ╎ ╎ ╎ 52 @Base/compiler/methodtable.jl:97; (::Core.Compiler.var"#259#260"{Int64, Core.Compiler.CachedMethodTabl...
╎ ╎ ╎ ╎ ╎ ╎ 52 @Base/compiler/methodtable.jl:65; (::Core.Compiler.var"#findall##kw")(::NamedTuple{(:limit,), Tuple{I...
╎ ╎ ╎ ╎ ╎ ╎ 52 @Base/compiler/methodtable.jl:68; #findall#256
52╎ ╎ ╎ ╎ ╎ ╎ ╎ 52 @Base/reflection.jl:908; _methods_by_ftype
╎ ╎ ╎ ╎ ╎ 759 @Base/compiler/abstractinterpretation.jl:95; abstract_call_gf_by_type(interp::CCProfiler, f::Any, fargs::Vector{Any...
╎ ╎ ╎ ╎ ╎ 204 @Base/compiler/abstractinterpretation.jl:103; abstract_call_gf_by_type(interp::CCProfiler, f::Any, fargs::Vector{Any...
╎ ╎ ╎ ╎ ╎ 27 @Base/compiler/abstractinterpretation.jl:526; abstract_call_method_with_const_args(interp::CCProfiler, result::Core....
1╎ ╎ ╎ ╎ ╎ 17 @Base/compiler/inferenceresult.jl:155; cache_lookup(linfo::MethodInstance, given_argtypes::Vector{Union{Core...
╎ ╎ ╎ ╎ ╎ 12 @Base/compiler/abstractinterpretation.jl:542; abstract_call_method_with_const_args(interp::CCProfiler, result::Core....
╎ ╎ ╎ ╎ ╎ 12 @Base/compiler/types.jl:35; Type##kw
╎ ╎ ╎ ╎ ╎ ╎ 12 @Base/compiler/types.jl:35; InferenceResult#245
╎ ╎ ╎ ╎ ╎ 11 @Base/compiler/abstractinterpretation.jl:547; abstract_call_method_with_const_args(interp::CCProfiler, result::Core....
1╎ ╎ ╎ ╎ ╎ 174 @Base/compiler/abstractinterpretation.jl:550; abstract_call_method_with_const_args(interp::CCProfiler, result::Core....
╎ ╎ ╎ ╎ 450 @Base/compiler/typeinfer.jl:255; _typeinf(interp::CCProfiler, frame::Core.Compiler.InferenceState)
1╎ ╎ ╎ ╎ 447 @Base/compiler/optimize.jl:315; optimize
╎ ╎ ╎ ╎ 103 @Base/compiler/optimize.jl:322; run_passes(ci::Core.CodeInfo, sv::Core.Compiler.OptimizationState)
╎ ╎ ╎ ╎ 35 @Base/compiler/optimize.jl:423; slot2reg
╎ ╎ ╎ ╎ 34 @Base/compiler/ssair/slot2ssa.jl:45; scan_slot_def_use(nargs::Int64, ci::Core.CodeInfo, code::Vector{Any})
╎ ╎ ╎ ╎ ╎ 10 @Base/compiler/ssair/slot2ssa.jl:15; scan_entry!(result::Vector{Core.Compiler.SlotInfo}, idx::Int64, stmt::Any)
╎ ╎ ╎ ╎ ╎ 10 @Base/array.jl:1055; push!
10╎ ╎ ╎ ╎ ╎ 10 @Base/array.jl:1008; _growend!
╎ ╎ ╎ ╎ ╎ 20 @Base/compiler/ssair/slot2ssa.jl:27; scan_entry!(result::Vector{Core.Compiler.SlotInfo}, idx::Int64, stmt::Any)
╎ ╎ ╎ ╎ ╎ 18 @Base/compiler/ssair/ir.jl:469; iterate
18╎ ╎ ╎ ╎ ╎ 18 @Base/compiler/ssair/ir.jl:477; iterate(it::Core.Compiler.UseRefIterator, #unused#::Nothing)
╎ ╎ ╎ ╎ 60 @Base/compiler/optimize.jl:424; slot2reg
╎ ╎ ╎ ╎ 13 @Base/compiler/ssair/slot2ssa.jl:635; construct_ssa!(ci::Core.CodeInfo, ir::Core.Compiler.IRCode, domtree::Cor...
╎ ╎ ╎ ╎ 10 @Base/compiler/ssair/slot2ssa.jl:807; construct_ssa!(ci::Core.CodeInfo, ir::Core.Compiler.IRCode, domtree::Cor...
╎ ╎ ╎ ╎ ╎ 10 @Base/array.jl:531; fill
╎ ╎ ╎ ╎ ╎ 10 @Base/array.jl:533; fill
╎ ╎ ╎ ╎ ╎ 10 @Base/boot.jl:458; Array
10╎ ╎ ╎ ╎ ╎ 10 @Base/boot.jl:449; Array
╎ ╎ ╎ ╎ 10 @Base/compiler/ssair/slot2ssa.jl:899; construct_ssa!(ci::Core.CodeInfo, ir::Core.Compiler.IRCode, domtree::Cor...
╎ ╎ ╎ ╎ 12 @Base/compiler/optimize.jl:325; run_passes(ci::Core.CodeInfo, sv::Core.Compiler.OptimizationState)
╎ ╎ ╎ ╎ 12 @Base/compiler/ssair/ir.jl:1454; compact!
2╎ ╎ ╎ ╎ 215 @Base/compiler/optimize.jl:326; run_passes(ci::Core.CodeInfo, sv::Core.Compiler.OptimizationState)
╎ ╎ ╎ ╎ 164 @Base/compiler/ssair/inlining.jl:70; ssa_inlining_pass!(ir::Core.Compiler.IRCode, linetable::Vector{Core.LineI...
╎ ╎ ╎ ╎ 41 @Base/compiler/ssair/inlining.jl:1263; assemble_inline_todo!(ir::Core.Compiler.IRCode, state::Core.Compiler.Inl...
╎ ╎ ╎ ╎ ╎ 25 @Base/compiler/ssair/inlining.jl:1097; process_simple!(ir::Core.Compiler.IRCode, todo::Vector{Pair{Int64, Any}...
╎ ╎ ╎ ╎ ╎ 11 @Base/compiler/ssair/inlining.jl:929; call_sig(ir::Core.Compiler.IRCode, stmt::Expr)
╎ ╎ ╎ ╎ ╎ 11 @Base/compiler/utilities.jl:234; argextype
4╎ ╎ ╎ ╎ 113 @Base/compiler/ssair/inlining.jl:1325; assemble_inline_todo!(ir::Core.Compiler.IRCode, state::Core.Compiler.Inl...
╎ ╎ ╎ ╎ ╎ 106 @Base/compiler/ssair/inlining.jl:1181; analyze_single_call!(ir::Core.Compiler.IRCode, todo::Vector{Pair{Int64,...
╎ ╎ ╎ ╎ ╎ 105 @Base/compiler/ssair/inlining.jl:829; analyze_method!(match::Core.MethodMatch, atypes::Vector{Union{Core.Comp...
1╎ ╎ ╎ ╎ ╎ 103 @Base/compiler/ssair/inlining.jl:779; resolve_todo(todo::Core.Compiler.InliningTodo, state::Core.Compiler.In...
16╎ ╎ ╎ ╎ ╎ 16 @Base/compiler/ssair/inlining.jl:838; Core.Compiler.InliningTodo(mi::MethodInstance, src::Vector{UInt8})
╎ ╎ ╎ ╎ ╎ 86 @Base/compiler/ssair/inlining.jl:842; Core.Compiler.InliningTodo(mi::MethodInstance, src::Vector{UInt8})
╎ ╎ ╎ ╎ ╎ 62 @Base/compiler/ssair/legacy.jl:4; inflate_ir(ci::Core.CodeInfo, linfo::MethodInstance)
╎ ╎ ╎ ╎ ╎ ╎ 62 @Base/compiler/inferencestate.jl:321; sptypes_from_meth_instance(linfo::MethodInstance)
1╎ ╎ ╎ ╎ ╎ ╎ 62 @Base/compiler/typelattice.jl:210; Core.Compiler.TypeLattice(x::Core.Compiler.TypeLattice)
61╎ ╎ ╎ ╎ ╎ ╎ 61 @Base/compiler/typelattice.jl:210; Core.Compiler.TypeLattice(x::Core.Compiler.TypeLattice; typ::Type, ...
╎ ╎ ╎ ╎ ╎ 17 @Base/compiler/ssair/legacy.jl:10; inflate_ir(ci::Core.CodeInfo, linfo::MethodInstance)
╎ ╎ ╎ ╎ ╎ ╎ 10 @Base/compiler/ssair/legacy.jl:42; inflate_ir(ci::Core.CodeInfo, sptypes::Vector{Union{Core.Compiler._Ab...
╎ ╎ ╎ ╎ ╎ ╎ 10 @Base/compiler/ssair/ir.jl:293; IRCode
╎ ╎ ╎ ╎ ╎ ╎ 10 @Base/compiler/ssair/ir.jl:274; NewNodeStream
╎ ╎ ╎ ╎ ╎ ╎ 10 @Base/compiler/ssair/ir.jl:274; NewNodeStream
╎ ╎ ╎ ╎ ╎ ╎ 10 @Base/compiler/ssair/ir.jl:201; Core.Compiler.InstructionStream(len::Int64)
╎ ╎ ╎ ╎ ╎ ╎ ╎ 10 @Base/array.jl:531; fill
╎ ╎ ╎ ╎ ╎ ╎ ╎ 10 @Base/array.jl:533; fill
╎ ╎ ╎ ╎ ╎ ╎ ╎ 10 @Base/boot.jl:458; Array
10╎ ╎ ╎ ╎ ╎ ╎ ╎ 10 @Base/boot.jl:449; Array
╎ ╎ ╎ ╎ 49 @Base/compiler/ssair/inlining.jl:73; ssa_inlining_pass!(ir::Core.Compiler.IRCode, linetable::Vector{Core.LineI...
2╎ ╎ ╎ ╎ 12 @Base/compiler/ssair/inlining.jl:584; batch_inline!(todo::Vector{Pair{Int64, Any}}, ir::Core.Compiler.IRCode, ...
╎ ╎ ╎ ╎ 17 @Base/compiler/ssair/inlining.jl:589; batch_inline!(todo::Vector{Pair{Int64, Any}}, ir::Core.Compiler.IRCode, ...
╎ ╎ ╎ ╎ 79 @Base/compiler/optimize.jl:330; run_passes(ci::Core.CodeInfo, sv::Core.Compiler.OptimizationState)
╎ ╎ ╎ ╎ 15 @Base/compiler/ssair/passes.jl:594; getfield_elim_pass!(ir::Core.Compiler.IRCode)
╎ ╎ ╎ ╎ 14 @Base/compiler/ssair/queries.jl:101; is_known_call(e::Expr, func::Any, src::Core.Compiler.IncrementalCompact)
2╎ ╎ ╎ ╎ ╎ 14 @Base/compiler/ssair/queries.jl:91; compact_exprtype(compact::Core.Compiler.IncrementalCompact, value::Any)
╎ ╎ ╎ ╎ ╎ 12 @Base/compiler/utilities.jl:234; argextype
1╎ ╎ ╎ ╎ ╎ 12 @Base/compiler/utilities.jl:256; argextype(x::Any, src::Core.Compiler.IRCode, sptypes::Vector{Union{Cor...
╎ ╎ ╎ ╎ ╎ 11 @Base/compiler/abstractinterpretation.jl:1667; abstract_eval_global
╎ ╎ ╎ ╎ ╎ 11 @Base/compiler/typelattice.jl:231; Const
╎ ╎ ╎ ╎ ╎ ╎ 11 @Base/compiler/typelattice.jl:187; Type##kw
11╎ ╎ ╎ ╎ ╎ ╎ 11 @Base/compiler/typelattice.jl:190; TypeLattice#265
╎ ╎ ╎ ╎ 12 @Base/compiler/ssair/passes.jl:766; getfield_elim_pass!(ir::Core.Compiler.IRCode)
╎ ╎ ╎ ╎ 14 @Base/compiler/optimize.jl:333; run_passes(ci::Core.CodeInfo, sv::Core.Compiler.OptimizationState)
╎ ╎ ╎ ╎ 28 @Base/compiler/typeinfer.jl:280; _typeinf(interp::CCProfiler, frame::Core.Compiler.InferenceState)
╎ ╎ ╎ ╎ 28 @Base/compiler/typeinfer.jl:392; cache_result!(interp::CCProfiler, result::Core.Compiler.InferenceResult)
╎ ╎ ╎ ╎ 10 @Base/compiler/typeinfer.jl:361; transform_result_for_cache
╎ ╎ ╎ ╎ 18 @Base/compiler/typeinfer.jl:366; transform_result_for_cache
18╎ ╎ ╎ ╎ 18 @Base/compiler/typeinfer.jl:347; maybe_compress_codeinfo(interp::CCProfiler, linfo::MethodInstance, ci::Co...
Total snapshots: 797 (100% utilization across all threads and tasks. Use the `groupby` kwarg to break down by thread and/or task)
julia> # compile things, and check that global cache isn't shared across inferences
@time Base.return_types(println, (QuoteNode,), CCProfiler());
3.631322 seconds (7.11 M allocations: 385.003 MiB, 3.19% gc time, 64.50% compilation time)
julia> @time Base.return_types(println, (QuoteNode,), CCProfiler());
0.712653 seconds (5.20 M allocations: 283.478 MiB, 16.46% gc time, 100.00% compilation time)
julia> @time Base.return_types(println, (QuoteNode,), CCProfiler());
0.772917 seconds (5.20 M allocations: 283.478 MiB, 25.16% gc time, 100.00% compilation time)
julia> # profile !
using Profile
julia> @profile Base.return_types(println, (QuoteNode,), CCProfiler());
julia> Profile.print(; format=:flat, sortedby=:count, mincount=10)
Count Overhead File Line Function
===== ======== ==== ==== ========
10 0 @Base/compiler/typeinfer.jl 239 _typeinf(interp::CCProfiler, frame::Core.Compiler.InferenceState)
10 0 @Base/compiler/abstractinterpretation.jl 1328 abstract_call_known(interp::CCProfiler, f::Any, fargs::Vector{Any}, argtypes::V...
10 0 @Base/compiler/abstractinterpretation.jl 553 abstract_call_method_with_const_args(interp::CCProfiler, result::Core.Compiler....
10 0 @Base/compiler/ssair/ir.jl 580 Core.Compiler.IncrementalCompact(code::Core.Compiler.IRCode, allow_cfg_transfor...
11 0 @Base/compiler/ssair/ir.jl 1030 process_node!(compact::Core.Compiler.IncrementalCompact, result_idx::Int64, ins...
11 0 @Base/compiler/optimize.jl 333 run_passes(ci::Core.CodeInfo, sv::Core.Compiler.OptimizationState)
11 0 @Base/compiler/ssair/queries.jl 91 compact_exprtype
11 0 @Base/compiler/ssair/queries.jl 101 is_known_call(e::Expr, func::Any, src::Core.Compiler.IncrementalCompact)
11 0 @Base/array.jl 921 getindex
11 0 @Base/array.jl 411 getindex
11 0 @Base/boot.jl 417 LineInfoNode
12 0 @Base/compiler/ssair/domtree.jl 343 SNCA!(domtree::Core.Compiler.DomTree, blocks::Vector{Core.Compiler.BasicBlock},...
12 0 @Base/compiler/ssair/inlining.jl 1236 maybe_handle_const_call!(ir::Core.Compiler.IRCode, idx::Int64, stmt::Expr, info...
12 0 @Base/compiler/ssair/inlining.jl 593 batch_inline!(todo::Vector{Pair{Int64, Any}}, ir::Core.Compiler.IRCode, linetab...
12 0 @Base/compiler/ssair/inlining.jl 497 ir_inline_unionsplit!(compact::Core.Compiler.IncrementalCompact, idx::Int64, ar...
12 0 @Base/compiler/ssair/inlining.jl 320 ir_inline_item!(compact::Core.Compiler.IncrementalCompact, idx::Int64, argexprs...
13 0 @Base/compiler/ssair/inlining.jl 20 with_atype
13 0 @Base/compiler/ssair/inlining.jl 1128 process_simple!(ir::Core.Compiler.IRCode, todo::Vector{Pair{Int64, Any}}, idx::...
13 0 @Base/compiler/typelattice.jl 280 widenconst
13 0 @Base/compiler/typeutils.jl 53 (::Core.Compiler.var"#257#258")(a::Core.Const)
13 0 @Base/compiler/abstractinterpretation.jl 1647 abstract_eval_global
13 0 @Base/compiler/utilities.jl 256 argextype(x::Any, src::Core.Compiler.IRCode, sptypes::Vector{Any}, slottypes::V...
13 0 @Base/compiler/inferencestate.jl 251 Core.Compiler.InferenceState(result::Core.Compiler.InferenceResult, cache::Symb...
13 0 @Base/compiler/ssair/passes.jl 767 getfield_elim_pass!(ir::Core.Compiler.IRCode)
13 0 @Base/compiler/ssair/slot2ssa.jl 416 domsort_ssa!(ir::Core.Compiler.IRCode, domtree::Core.Compiler.DomTree)
13 0 @Base/iddict.jl 30 IdDict
13 0 @Base/iddict.jl 33 Core.Compiler.IdDict{Int64, Int64}(itr::Core.Compiler.Generator{Core.Compiler.I...
14 0 @Base/compiler/ssair/domtree.jl 217 update_domtree!
14 0 @Base/compiler/ssair/inlining.jl 1285 assemble_inline_todo!(ir::Core.Compiler.IRCode, state::Core.Compiler.InliningSt...
14 0 @Base/compiler/typeinfer.jl 280 _typeinf(interp::CCProfiler, frame::Core.Compiler.InferenceState)
14 0 @Base/compiler/typeinfer.jl 365 transform_result_for_cache
14 0 @Base/compiler/typeinfer.jl 391 cache_result!(interp::CCProfiler, result::Core.Compiler.InferenceResult)
14 0 @Base/compiler/typeinfer.jl 346 maybe_compress_codeinfo(interp::CCProfiler, linfo::MethodInstance, ci::Core.Cod...
14 0 @Base/compiler/ssair/ir.jl 619 Core.Compiler.IncrementalCompact(code::Core.Compiler.IRCode, allow_cfg_transfor...
15 0 @Base/compiler/utilities.jl 234 argextype
15 0 @Base/compiler/ssair/inlining.jl 837 Core.Compiler.InliningTodo(mi::MethodInstance, src::Vector{UInt8})
16 0 @Base/compiler/utilities.jl 39 anymap(f::Core.Compiler.var"#257#258", a::Vector{Any})
16 0 @Base/compiler/utilities.jl 128 retrieve_code_info
17 0 @Base/compiler/ssair/passes.jl 781 getfield_elim_pass!(ir::Core.Compiler.IRCode)
18 0 @Base/compiler/ssair/ir.jl 1451 compact!(code::Core.Compiler.IRCode, allow_cfg_transforms::Bool)
18 0 @Base/compiler/ssair/ir.jl 195 Core.Compiler.InstructionStream(len::Int64)
19 0 @Base/compiler/ssair/ir.jl 1302 iterate(compact::Core.Compiler.IncrementalCompact, ::Tuple{Int64, Int64})
19 0 @Base/compiler/ssair/legacy.jl 10 inflate_ir(ci::Core.CodeInfo, linfo::MethodInstance)
19 0 @Base/compiler/ssair/slot2ssa.jl 899 construct_ssa!(ci::Core.CodeInfo, ir::Core.Compiler.IRCode, domtree::Core.Compi...
19 0 @Base/compiler/optimize.jl 336 run_passes(ci::Core.CodeInfo, sv::Core.Compiler.OptimizationState)
20 0 @Base/int.jl 86 -
20 0 @Base/int.jl 982 -
20 0 @Base/compiler/inferencestate.jl 248 Core.Compiler.InferenceState(result::Core.Compiler.InferenceResult, cache::Symb...
21 0 @Base/compiler/inferenceresult.jl 167 cache_lookup(linfo::MethodInstance, given_argtypes::Vector{Any}, cache::Vector{...
21 0 @Base/compiler/ssair/ir.jl 269 NewNodeStream
22 0 @Base/compiler/ssair/domtree.jl 204 construct_domtree(blocks::Vector{Core.Compiler.BasicBlock})
22 0 @Base/compiler/ssair/inlining.jl 591 batch_inline!(todo::Vector{Pair{Int64, Any}}, ir::Core.Compiler.IRCode, linetab...
23 0 @Base/compiler/typeutils.jl 53 argtypes_to_type
23 0 @Base/compiler/abstractinterpretation.jl 532 abstract_call_method_with_const_args(interp::CCProfiler, result::Core.Compiler....
23 0 @Base/compiler/typeinfer.jl 820 typeinf_edge
24 0 @Base/compiler/ssair/inlining.jl 1257 assemble_inline_todo!(ir::Core.Compiler.IRCode, state::Core.Compiler.InliningSt...
25 0 @Base/array.jl 1233 resize!
25 0 @Base/array.jl 895 iterate
27 0 @Base/boot.jl 471 Array
31 0 @Base/array.jl 1008 _growend!
31 0 @Base/compiler/ssair/inlining.jl 841 Core.Compiler.InliningTodo(mi::MethodInstance, src::Core.CodeInfo)
32 0 @Base/compiler/optimize.jl 328 run_passes(ci::Core.CodeInfo, sv::Core.Compiler.OptimizationState)
33 0 @Base/array.jl 533 fill
33 0 @Base/array.jl 531 fill
35 0 @Base/compiler/ssair/ir.jl 1449 compact!(code::Core.Compiler.IRCode, allow_cfg_transforms::Bool)
37 0 @Base/compiler/ssair/inlining.jl 828 analyze_method!(match::Core.MethodMatch, atypes::Vector{Any}, state::Core.Compi...
38 0 @Base/compiler/ssair/inlining.jl 1175 analyze_single_call!(ir::Core.Compiler.IRCode, todo::Vector{Pair{Int64, Any}}, ...
41 0 @Base/compiler/ssair/inlining.jl 75 ssa_inlining_pass!(ir::Core.Compiler.IRCode, linetable::Vector{Core.LineInfoNod...
43 0 @Base/compiler/ssair/inlining.jl 1316 assemble_inline_todo!(ir::Core.Compiler.IRCode, state::Core.Compiler.InliningSt...
45 0 @Base/boot.jl 461 Array
47 0 @Base/compiler/ssair/inlining.jl 778 resolve_todo(todo::Core.Compiler.InliningTodo, state::Core.Compiler.InliningSta...
52 0 @Base/compiler/optimize.jl 424 slot2reg
56 0 @Base/compiler/optimize.jl 330 run_passes(ci::Core.CodeInfo, sv::Core.Compiler.OptimizationState)
56 0 @Base/compiler/ssair/ir.jl 1449 compact!
58 0 @Base/reflection.jl 908 _methods_by_ftype
58 0 @Base/compiler/methodtable.jl 68 #findall#252
58 0 @Base/compiler/methodtable.jl 65 (::Core.Compiler.var"#findall##kw")(::NamedTuple{(:limit,), Tuple{Int64}}, ::ty...
60 0 @Base/iddict.jl 178 get!
60 0 @Base/compiler/methodtable.jl 97 (::Core.Compiler.var"#255#256"{Int64, Core.Compiler.CachedMethodTable{Core.Comp...
62 0 @Base/compiler/optimize.jl 322 run_passes(ci::Core.CodeInfo, sv::Core.Compiler.OptimizationState)
63 0 @Base/compiler/methodtable.jl 96 #findall#254
63 0 @Base/compiler/methodtable.jl 95 (::Core.Compiler.var"#findall##kw")(::NamedTuple{(:limit,), Tuple{Int64}}, ::ty...
66 0 @Base/compiler/abstractinterpretation.jl 308 find_matching_methods(argtypes::Vector{Any}, atype::Any, method_table::Core.Com...
67 0 @Base/compiler/abstractinterpretation.jl 39 abstract_call_gf_by_type(interp::CCProfiler, f::Any, fargs::Vector{Any}, argtyp...
82 0 @Base/compiler/ssair/inlining.jl 72 ssa_inlining_pass!(ir::Core.Compiler.IRCode, linetable::Vector{Core.LineInfoNod...
102 0 @Base/boot.jl 452 Array
123 0 @Base/compiler/optimize.jl 326 run_passes(ci::Core.CodeInfo, sv::Core.Compiler.OptimizationState)
123 0 @Base/compiler/abstractinterpretation.jl 556 abstract_call_method_with_const_args(interp::CCProfiler, result::Core.Compiler....
152 0 @Base/compiler/abstractinterpretation.jl 113 abstract_call_gf_by_type(interp::CCProfiler, f::Any, fargs::Vector{Any}, argtyp...
206 0 @Base/compiler/abstractinterpretation.jl 1871 typeinf_local(interp::CCProfiler, frame::Core.Compiler.InferenceState)
315 0 @Base/compiler/optimize.jl 315 optimize
316 0 @Base/compiler/typeinfer.jl 255 _typeinf(interp::CCProfiler, frame::Core.Compiler.InferenceState)
538 0 @Base/reflection.jl 1246 return_types(f::Any, types::Any, interp::CCProfiler)
538 0 @Base/compiler/typeinfer.jl 8 typeinf
538 0 @Base/compiler/typeinfer.jl 932 typeinf_type(interp::CCProfiler, method::Method, atypes::Any, sparams::Core.Sim...
538 0 @Base/compiler/typeinfer.jl 209 typeinf(interp::CCProfiler, frame::Core.Compiler.InferenceState)
538 0 @Base/compiler/typeinfer.jl 226 _typeinf(interp::CCProfiler, frame::Core.Compiler.InferenceState)
538 0 @Base/compiler/abstractinterpretation.jl 1987 typeinf_nocycle(interp::CCProfiler, frame::Core.Compiler.InferenceState)
538 0 @Base/compiler/abstractinterpretation.jl 1891 typeinf_local(interp::CCProfiler, frame::Core.Compiler.InferenceState)
538 0 @Base/compiler/abstractinterpretation.jl 1505 abstract_eval_statement(interp::CCProfiler, e::Any, vtypes::Vector{Core.Compile...
538 0 @Base/compiler/abstractinterpretation.jl 1369 abstract_call(interp::CCProfiler, fargs::Vector{Any}, argtypes::Vector{Any}, sv...
538 0 @Base/compiler/abstractinterpretation.jl 1384 abstract_call(interp::CCProfiler, fargs::Vector{Any}, argtypes::Vector{Any}, sv...
538 0 @Base/compiler/abstractinterpretation.jl 1246 abstract_call_known(interp::CCProfiler, f::Any, fargs::Vector{Any}, argtypes::V...
538 0 @Base/compiler/abstractinterpretation.jl 993 abstract_apply(interp::CCProfiler, argtypes::Vector{Any}, sv::Core.Compiler.Inf...
538 0 @Base/compiler/abstractinterpretation.jl 1329 abstract_call_known(interp::CCProfiler, f::Any, fargs::Nothing, argtypes::Vecto...
538 0 @Base/compiler/abstractinterpretation.jl 105 abstract_call_gf_by_type(interp::CCProfiler, f::Any, fargs::Nothing, argtypes::...
538 0 @Base/compiler/typeinfer.jl 829 typeinf_edge
538 0 @Base/compiler/abstractinterpretation.jl 504 abstract_call_method(interp::CCProfiler, method::Method, sig::Any, sparams::Cor...
548 548 @Base/client.jl 497 _start()
548 0 @Base/client.jl 309 exec_options(opts::Base.JLOptions)
548 0 @Base/essentials.jl 725 #invokelatest#2
548 0 @Base/essentials.jl 723 invokelatest
548 0 @Base/client.jl 379 run_main_repl(interactive::Bool, quiet::Bool, banner::Bool, history_file::Bool,...
548 0 @Base/client.jl 394 (::Base.var"#936#938"{Bool, Bool, Bool})(REPL::Module)
548 0 @REPL/src/REPL.jl 350 run_repl(repl::AbstractREPL, consumer::Any)
548 0 @REPL/src/REPL.jl 363 run_repl(repl::AbstractREPL, consumer::Any; backend_on_current_task::Bool)
548 0 @REPL/src/REPL.jl 230 start_repl_backend(backend::REPL.REPLBackend, consumer::Any)
548 0 @REPL/src/REPL.jl 245 repl_backend_loop(backend::REPL.REPLBackend)
548 0 @Base/boot.jl 368 eval
548 0 @REPL/src/REPL.jl 151 eval_user_input(ast::Any, backend::REPL.REPLBackend)
Total snapshots: 571 (100% utilization across all threads and tasks. Use the `groupby` kwarg to break down by thread and/or task)
julia> Profile.print(; recur=:flat, mincount=10)
Overhead ╎ [+additional indent] Count File:Line; Function
=========================================================
╎548 @Base/client.jl:497; _start()
╎ 548 @Base/client.jl:309; exec_options(opts::Base.JLOptions)
╎ 548 @Base/client.jl:379; run_main_repl(interactive::Bool, quiet::Bool, banner::Bool, history_file::Bool, col...
╎ 548 @Base/essentials.jl:723; invokelatest
╎ 548 @Base/essentials.jl:725; #invokelatest#2
╎ 548 @Base/client.jl:394; (::Base.var"#936#938"{Bool, Bool, Bool})(REPL::Module)
╎ ╎ 548 @REPL/src/REPL.jl:350; run_repl(repl::AbstractREPL, consumer::Any)
╎ ╎ 548 @REPL/src/REPL.jl:363; run_repl(repl::AbstractREPL, consumer::Any; backend_on_current_task::Bool)
╎ ╎ 548 @REPL/src/REPL.jl:230; start_repl_backend(backend::REPL.REPLBackend, consumer::Any)
╎ ╎ 548 @REPL/src/REPL.jl:245; repl_backend_loop(backend::REPL.REPLBackend)
╎ ╎ 548 @REPL/src/REPL.jl:151; eval_user_input(ast::Any, backend::REPL.REPLBackend)
9╎ ╎ ╎ 548 @Base/boot.jl:368; eval
╎ ╎ ╎ 538 @Base/reflection.jl:1246; return_types(f::Any, types::Any, interp::CCProfiler)
╎ ╎ ╎ 538 @Base/compiler/typeinfer.jl:932; typeinf_type(interp::CCProfiler, method::Method, atypes::Any, sparams::Core....
╎ ╎ ╎ 538 @Base/compiler/typeinfer.jl:8; typeinf
╎ ╎ ╎ 538 @Base/compiler/typeinfer.jl:209; typeinf(interp::CCProfiler, frame::Core.Compiler.InferenceState)
╎ ╎ ╎ ╎ 538 @Base/compiler/typeinfer.jl:226; _typeinf(interp::CCProfiler, frame::Core.Compiler.InferenceState)
╎ ╎ ╎ ╎ 538 @Base/compiler/abstractinterpretation.jl:1987; typeinf_nocycle(interp::CCProfiler, frame::Core.Compiler.InferenceState)
╎ ╎ ╎ ╎ 206 @Base/compiler/abstractinterpretation.jl:1871; typeinf_local(interp::CCProfiler, frame::Core.Compiler.InferenceState)
╎ ╎ ╎ ╎ 538 @Base/compiler/abstractinterpretation.jl:1891; typeinf_local(interp::CCProfiler, frame::Core.Compiler.InferenceState)
╎ ╎ ╎ ╎ 538 @Base/compiler/abstractinterpretation.jl:1505; abstract_eval_statement(interp::CCProfiler, e::Any, vtypes::Vector{Core....
╎ ╎ ╎ ╎ 538 @Base/compiler/abstractinterpretation.jl:1369; abstract_call(interp::CCProfiler, fargs::Vector{Any}, argtypes::Vector{A...
3╎ ╎ ╎ ╎ ╎ 538 @Base/compiler/abstractinterpretation.jl:1384; abstract_call(interp::CCProfiler, fargs::Vector{Any}, argtypes::Vector{...
╎ ╎ ╎ ╎ ╎ 538 @Base/compiler/abstractinterpretation.jl:1246; abstract_call_known(interp::CCProfiler, f::Any, fargs::Vector{Any}, ar...
╎ ╎ ╎ ╎ ╎ 538 @Base/compiler/abstractinterpretation.jl:993; abstract_apply(interp::CCProfiler, argtypes::Vector{Any}, sv::Core.Com...
╎ ╎ ╎ ╎ ╎ 538 @Base/compiler/abstractinterpretation.jl:1384; abstract_call(interp::CCProfiler, fargs::Nothing, argtypes::Vector{An...
╎ ╎ ╎ ╎ ╎ 538 @Base/compiler/abstractinterpretation.jl:1329; abstract_call_known(interp::CCProfiler, f::Any, fargs::Nothing, argty...
╎ ╎ ╎ ╎ ╎ ╎ 538 @Base/compiler/abstractinterpretation.jl:105; abstract_call_gf_by_type(interp::CCProfiler, f::Any, fargs::Nothing, ...
╎ ╎ ╎ ╎ ╎ ╎ 538 @Base/compiler/abstractinterpretation.jl:504; abstract_call_method(interp::CCProfiler, method::Method, sig::Any, s...
╎ ╎ ╎ ╎ ╎ ╎ 538 @Base/compiler/typeinfer.jl:829; typeinf_edge
╎ ╎ ╎ ╎ ╎ 10 @Base/compiler/abstractinterpretation.jl:1328; abstract_call_known(interp::CCProfiler, f::Any, fargs::Vector{Any}, ar...
3╎ ╎ ╎ ╎ ╎ 10 @Base/compiler/typeutils.jl:53; argtypes_to_type
╎ ╎ ╎ ╎ ╎ 536 @Base/compiler/abstractinterpretation.jl:1329; abstract_call_known(interp::CCProfiler, f::Any, fargs::Vector{Any}, ar...
╎ ╎ ╎ ╎ ╎ 64 @Base/compiler/abstractinterpretation.jl:39; abstract_call_gf_by_type(interp::CCProfiler, f::Any, fargs::Vector{Any...
3╎ ╎ ╎ ╎ ╎ 63 @Base/compiler/abstractinterpretation.jl:308; find_matching_methods(argtypes::Vector{Any}, atype::Any, method_table:...
╎ ╎ ╎ ╎ ╎ 60 @Base/compiler/methodtable.jl:95; (::Core.Compiler.var"#findall##kw")(::NamedTuple{(:limit,), Tuple{Int6...
╎ ╎ ╎ ╎ ╎ ╎ 60 @Base/compiler/methodtable.jl:96; #findall#254
╎ ╎ ╎ ╎ ╎ ╎ 57 @Base/iddict.jl:178; get!
2╎ ╎ ╎ ╎ ╎ ╎ 57 @Base/compiler/methodtable.jl:97; (::Core.Compiler.var"#255#256"{Int64, Core.Compiler.CachedMethodTabl...
╎ ╎ ╎ ╎ ╎ ╎ 55 @Base/compiler/methodtable.jl:65; (::Core.Compiler.var"#findall##kw")(::NamedTuple{(:limit,), Tuple{I...
╎ ╎ ╎ ╎ ╎ ╎ 55 @Base/compiler/methodtable.jl:68; #findall#252
55╎ ╎ ╎ ╎ ╎ ╎ ╎ 55 @Base/reflection.jl:908; _methods_by_ftype
╎ ╎ ╎ ╎ ╎ 536 @Base/compiler/abstractinterpretation.jl:105; abstract_call_gf_by_type(interp::CCProfiler, f::Any, fargs::Vector{Any...
╎ ╎ ╎ ╎ ╎ 28 @Base/compiler/abstractinterpretation.jl:504; abstract_call_method(interp::CCProfiler, method::Method, sig::Any, spa...
╎ ╎ ╎ ╎ ╎ 22 @Base/compiler/typeinfer.jl:820; typeinf_edge
╎ ╎ ╎ ╎ ╎ ╎ 14 @Base/compiler/inferencestate.jl:248; Core.Compiler.InferenceState(result::Core.Compiler.InferenceResult, c...
10╎ ╎ ╎ ╎ ╎ ╎ 10 @Base/compiler/utilities.jl:128; retrieve_code_info
╎ ╎ ╎ ╎ ╎ 151 @Base/compiler/abstractinterpretation.jl:113; abstract_call_gf_by_type(interp::CCProfiler, f::Any, fargs::Vector{Any...
╎ ╎ ╎ ╎ ╎ 23 @Base/compiler/abstractinterpretation.jl:532; abstract_call_method_with_const_args(interp::CCProfiler, result::Core....
╎ ╎ ╎ ╎ ╎ 21 @Base/compiler/inferenceresult.jl:167; cache_lookup(linfo::MethodInstance, given_argtypes::Vector{Any}, cach...
╎ ╎ ╎ ╎ ╎ ╎ 21 @Base/array.jl:895; iterate
╎ ╎ ╎ ╎ ╎ ╎ 20 @Base/int.jl:982; -
20╎ ╎ ╎ ╎ ╎ ╎ 20 @Base/int.jl:86; -
╎ ╎ ╎ ╎ ╎ 10 @Base/compiler/abstractinterpretation.jl:553; abstract_call_method_with_const_args(interp::CCProfiler, result::Core....
╎ ╎ ╎ ╎ ╎ 123 @Base/compiler/abstractinterpretation.jl:556; abstract_call_method_with_const_args(interp::CCProfiler, result::Core....
╎ ╎ ╎ ╎ 10 @Base/compiler/typeinfer.jl:239; _typeinf(interp::CCProfiler, frame::Core.Compiler.InferenceState)
╎ ╎ ╎ ╎ 316 @Base/compiler/typeinfer.jl:255; _typeinf(interp::CCProfiler, frame::Core.Compiler.InferenceState)
╎ ╎ ╎ ╎ 315 @Base/compiler/optimize.jl:315; optimize
╎ ╎ ╎ ╎ 62 @Base/compiler/optimize.jl:322; run_passes(ci::Core.CodeInfo, sv::Core.Compiler.OptimizationState)
╎ ╎ ╎ ╎ 52 @Base/compiler/optimize.jl:424; slot2reg
╎ ╎ ╎ ╎ 19 @Base/compiler/ssair/slot2ssa.jl:899; construct_ssa!(ci::Core.CodeInfo, ir::Core.Compiler.IRCode, domtree::Cor...
╎ ╎ ╎ ╎ ╎ 13 @Base/compiler/ssair/slot2ssa.jl:416; domsort_ssa!(ir::Core.Compiler.IRCode, domtree::Core.Compiler.DomTree)
╎ ╎ ╎ ╎ ╎ 13 @Base/iddict.jl:33; Core.Compiler.IdDict{Int64, Int64}(itr::Core.Compiler.Generator{Core.Com...
╎ ╎ ╎ ╎ ╎ 13 @Base/iddict.jl:30; IdDict
13╎ ╎ ╎ ╎ ╎ 13 @Base/boot.jl:452; Array
╎ ╎ ╎ ╎ 123 @Base/compiler/optimize.jl:326; run_passes(ci::Core.CodeInfo, sv::Core.Compiler.OptimizationState)
╎ ╎ ╎ ╎ 82 @Base/compiler/ssair/inlining.jl:72; ssa_inlining_pass!(ir::Core.Compiler.IRCode, linetable::Vector{Core.LineI...
╎ ╎ ╎ ╎ 24 @Base/compiler/ssair/inlining.jl:1257; assemble_inline_todo!(ir::Core.Compiler.IRCode, state::Core.Compiler.Inl...
╎ ╎ ╎ ╎ ╎ 13 @Base/compiler/ssair/inlining.jl:1128; process_simple!(ir::Core.Compiler.IRCode, todo::Vector{Pair{Int64, Any}...
╎ ╎ ╎ ╎ ╎ 13 @Base/compiler/ssair/inlining.jl:20; with_atype
4╎ ╎ ╎ ╎ ╎ 13 @Base/compiler/typeutils.jl:53; argtypes_to_type
╎ ╎ ╎ ╎ 14 @Base/compiler/ssair/inlining.jl:1285; assemble_inline_todo!(ir::Core.Compiler.IRCode, state::Core.Compiler.Inl...
╎ ╎ ╎ ╎ ╎ 12 @Base/compiler/ssair/inlining.jl:1236; maybe_handle_const_call!(ir::Core.Compiler.IRCode, idx::Int64, stmt::Ex...
╎ ╎ ╎ ╎ ╎ 12 @Base/compiler/ssair/inlining.jl:778; resolve_todo(todo::Core.Compiler.InliningTodo, state::Core.Compiler.Inl...
1╎ ╎ ╎ ╎ ╎ 12 @Base/compiler/ssair/inlining.jl:841; Core.Compiler.InliningTodo(mi::MethodInstance, src::Core.CodeInfo)
╎ ╎ ╎ ╎ 43 @Base/compiler/ssair/inlining.jl:1316; assemble_inline_todo!(ir::Core.Compiler.IRCode, state::Core.Compiler.Inl...
╎ ╎ ╎ ╎ ╎ 38 @Base/compiler/ssair/inlining.jl:1175; analyze_single_call!(ir::Core.Compiler.IRCode, todo::Vector{Pair{Int64,...
1╎ ╎ ╎ ╎ ╎ 37 @Base/compiler/ssair/inlining.jl:828; analyze_method!(match::Core.MethodMatch, atypes::Vector{Any}, state::Co...
1╎ ╎ ╎ ╎ ╎ 35 @Base/compiler/ssair/inlining.jl:778; resolve_todo(todo::Core.Compiler.InliningTodo, state::Core.Compiler.In...
15╎ ╎ ╎ ╎ ╎ 15 @Base/compiler/ssair/inlining.jl:837; Core.Compiler.InliningTodo(mi::MethodInstance, src::Vector{UInt8})
╎ ╎ ╎ ╎ ╎ 19 @Base/compiler/ssair/inlining.jl:841; Core.Compiler.InliningTodo(mi::MethodInstance, src::Vector{UInt8})
╎ ╎ ╎ ╎ ╎ 12 @Base/compiler/ssair/legacy.jl:10; inflate_ir(ci::Core.CodeInfo, linfo::MethodInstance)
╎ ╎ ╎ ╎ 41 @Base/compiler/ssair/inlining.jl:75; ssa_inlining_pass!(ir::Core.Compiler.IRCode, linetable::Vector{Core.LineI...
╎ ╎ ╎ ╎ 22 @Base/compiler/ssair/inlining.jl:591; batch_inline!(todo::Vector{Pair{Int64, Any}}, ir::Core.Compiler.IRCode, ...
╎ ╎ ╎ ╎ 12 @Base/compiler/ssair/inlining.jl:593; batch_inline!(todo::Vector{Pair{Int64, Any}}, ir::Core.Compiler.IRCode, ...
╎ ╎ ╎ ╎ ╎ 12 @Base/compiler/ssair/inlining.jl:497; ir_inline_unionsplit!(compact::Core.Compiler.IncrementalCompact, idx::In...
╎ ╎ ╎ ╎ ╎ 11 @Base/compiler/ssair/inlining.jl:320; ir_inline_item!(compact::Core.Compiler.IncrementalCompact, idx::Int64, ...
11╎ ╎ ╎ ╎ ╎ 11 @Base/boot.jl:417; LineInfoNode
╎ ╎ ╎ ╎ 32 @Base/compiler/optimize.jl:328; run_passes(ci::Core.CodeInfo, sv::Core.Compiler.OptimizationState)
╎ ╎ ╎ ╎ 32 @Base/compiler/ssair/ir.jl:1449; compact!
╎ ╎ ╎ ╎ 24 @Base/compiler/ssair/ir.jl:1449; compact!(code::Core.Compiler.IRCode, allow_cfg_transforms::Bool)
╎ ╎ ╎ ╎ ╎ 10 @Base/compiler/ssair/ir.jl:580; Core.Compiler.IncrementalCompact(code::Core.Compiler.IRCode, allow_cfg_t...
╎ ╎ ╎ ╎ ╎ 10 @Base/array.jl:531; fill
╎ ╎ ╎ ╎ ╎ 10 @Base/array.jl:533; fill
╎ ╎ ╎ ╎ ╎ 10 @Base/boot.jl:461; Array
10╎ ╎ ╎ ╎ ╎ 10 @Base/boot.jl:452; Array
╎ ╎ ╎ ╎ ╎ 14 @Base/compiler/ssair/ir.jl:619; Core.Compiler.IncrementalCompact(code::Core.Compiler.IRCode, allow_cfg_t...
╎ ╎ ╎ ╎ ╎ 14 @Base/compiler/ssair/ir.jl:269; NewNodeStream
╎ ╎ ╎ ╎ ╎ 14 @Base/compiler/ssair/ir.jl:269; NewNodeStream
╎ ╎ ╎ ╎ ╎ 14 @Base/compiler/ssair/ir.jl:195; Core.Compiler.InstructionStream(len::Int64)
╎ ╎ ╎ ╎ ╎ 14 @Base/array.jl:531; fill
╎ ╎ ╎ ╎ ╎ ╎ 14 @Base/array.jl:533; fill
╎ ╎ ╎ ╎ ╎ ╎ 14 @Base/boot.jl:461; Array
14╎ ╎ ╎ ╎ ╎ ╎ 14 @Base/boot.jl:452; Array
╎ ╎ ╎ ╎ 56 @Base/compiler/optimize.jl:330; run_passes(ci::Core.CodeInfo, sv::Core.Compiler.OptimizationState)
╎ ╎ ╎ ╎ 13 @Base/compiler/ssair/passes.jl:767; getfield_elim_pass!(ir::Core.Compiler.IRCode)
╎ ╎ ╎ ╎ 17 @Base/compiler/ssair/passes.jl:781; getfield_elim_pass!(ir::Core.Compiler.IRCode)
╎ ╎ ╎ ╎ 17 @Base/compiler/ssair/domtree.jl:204; construct_domtree(blocks::Vector{Core.Compiler.BasicBlock})
╎ ╎ ╎ ╎ ╎ 13 @Base/compiler/ssair/domtree.jl:217; update_domtree!
╎ ╎ ╎ ╎ ╎ 12 @Base/compiler/ssair/domtree.jl:343; SNCA!(domtree::Core.Compiler.DomTree, blocks::Vector{Core.Compiler.Basi...
╎ ╎ ╎ ╎ ╎ 12 @Base/array.jl:1233; resize!
12╎ ╎ ╎ ╎ ╎ 12 @Base/array.jl:1008; _growend!
╎ ╎ ╎ ╎ 11 @Base/compiler/optimize.jl:333; run_passes(ci::Core.CodeInfo, sv::Core.Compiler.OptimizationState)
╎ ╎ ╎ ╎ 19 @Base/compiler/optimize.jl:336; run_passes(ci::Core.CodeInfo, sv::Core.Compiler.OptimizationState)
╎ ╎ ╎ ╎ 19 @Base/compiler/ssair/ir.jl:1449; compact!
╎ ╎ ╎ ╎ 10 @Base/compiler/ssair/ir.jl:1449; compact!(code::Core.Compiler.IRCode, allow_cfg_transforms::Bool)
╎ ╎ ╎ ╎ 14 @Base/compiler/typeinfer.jl:280; _typeinf(interp::CCProfiler, frame::Core.Compiler.InferenceState)
╎ ╎ ╎ ╎ 14 @Base/compiler/typeinfer.jl:391; cache_result!(interp::CCProfiler, result::Core.Compiler.InferenceResult)
╎ ╎ ╎ ╎ 14 @Base/compiler/typeinfer.jl:365; transform_result_for_cache
14╎ ╎ ╎ ╎ 14 @Base/compiler/typeinfer.jl:346; maybe_compress_codeinfo(interp::CCProfiler, linfo::MethodInstance, ci::Co...
Total snapshots: 571 (100% utilization across all threads and tasks. Use the `groupby` kwarg to break down by thread and/or task)
So as for this specific inference on `println(::QuoteNode)` …
@vtjnash do you think we want to pay more attention to this problem at this point?
Eventually TypeLattice should end up only ever on the stack. I think we can continue working towards that.
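As a point of reference, here is a minimal sketch (with made-up names, not this PR's actual definitions) of what "only ever on the stack" can look like in Julia: an immutable wrapper whose fields are all concretely typed can stay unboxed as long as it does not escape into an abstractly-typed container.

# Hypothetical sketch: an immutable struct with concretely typed fields.
# Julia can keep such a value in registers/on the stack, so constructing it
# should not heap-allocate on recent Julia versions.
struct StackLattice
    typ::DataType
    maybeundef::Bool
end

describe(x) = StackLattice(typeof(x), false)

# `@allocated describe(1)` is expected to report 0 bytes; storing the result
# into a `Vector{Any}` would box it and bring the allocations back.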
Bikeshed suggestion: could we call it …?
Force-pushed from 3ead597 to 7cbf430, then from 7cbf430 to 65473aa.
Yeah, I decided to go with …
Force-pushed from dae5ad3 to c059ef3.
A very high-level piece of work on the lattice! IMHO this lands close to set operations.
I have just (re)coded disjoint_union and ispartition on top of AbstractSet:
Base.:(<=)(a::AbstractSet, b::AbstractSet) = issubset(a, b) # or ⊑ ? the infix operator is what matters
Base.:(&)(a::AbstractSet, bs::AbstractSet...) = intersect(a, bs...)
Base.:(|)(a::AbstractSet, bs::AbstractSet...) = union(a, bs...)
Base.:(-)(a::AbstractSet, bs::AbstractSet...) = setdiff(a, bs...)
# todo: xor, disjoint_union
Those can easily be done for AbstractSet too:
union!(a::AbstractSet, bs...) = Base.afoldl(union!, a, bs...)
setdiff!(a::AbstractSet, bs...) = Base.afoldl(setdiff!, a, bs...)
intersect!(a::AbstractSet, bs...) = Base.afoldl(intersect!, a, bs...)
symdiff!(a::AbstractSet, bs...) = Base.afoldl(symdiff!, a, bs...)
This may not be the exact place for it, but lattice code runs everywhere yet is written quite infrequently, and sets and lattices are so close ...
…re expected from those where extended lattice wrappers are
…o `LatticeElement` attributes:
- pack `PartialStruct` into `LatticeElement.fields`
- pack `Conditional`/`InterConditional` into `LatticeElement.conditional`
- pack `Const` into `LatticeElement.constant`
- pack `PartialTypeVar` into `LatticeElement.partialtypevar`
- pack `LimitedAccuracy` into `LatticeElement.causes`
- pack `PartialOpaque` into `LatticeElement.partialopaque`
- pack `MaybeUndef` into `LatticeElement.maybeundef`
- merge `LatticeElement.partialtypevar` and `LatticeElement.partialopaque`: there is not much value in keeping them separate, since a variable usually doesn't have these "special" attributes at the same time
- wrap `Vararg` in `LatticeElement.special::Vararg`
- add HACK to allow `DelayedTyp` to sneak in the `LatticeElement` system
And now we can eliminate `AbstractLattice`, and our inference code works with `LatticeElement` (mostly).
- define `SSAValueType(s)` / `Argtypes` aliases
There have been two threads of work involving the compiler's notion of the inference lattice. One is that the lattice has gotten too complicated, with too many internal constraints that are not manifest in the type system. #42596 attempted to address this, but it changes the lattice types and all the signatures of the lattice operations, which are used quite extensively throughout the ecosystem (despite being internal), so that change is quite disruptive (and something we'd ideally only make the ecosystem do once). The other thread of work is that people would like to experiment with a variety of extended lattices outside of base (either to prototype potential additions to the lattice in base or to do custom abstract interpretation over Julia code). At the moment, the lattice is quite closely interwoven with the rest of the abstract interpreter. In response to this request in #40992, I had proposed a `CustomLattice` element with callbacks, but this doesn't compose particularly well, is cumbersome, and imposes overhead on some of the hottest parts of the compiler, so it's a bit of a tough sell to merge into `Base`.
In this PR, I'd like to propose a refactoring that is relatively non-invasive to non-Base users, but that I think would allow easier experimentation with changes to the lattice for these two use cases. In essence, we're splitting the lattice into a ladder of lattices, each containing the previous lattice as a sub-lattice. These lattices are:
- `JLTypeLattice` (anything that's a `Type`)
- `ConstsLattice` ( + `Const`, `PartialTypeVar` )
- `PartialsLattice` ( + `PartialStruct`, `PartialOpaque` )
- `ConditionalsLattice` ( + `Conditional` )
- `InferenceLattice` ( + `LimitedAccuracy` )
- `OptimizerLattice` ( + `MaybeUndef` )
The idea is that where a lattice element contains another lattice element (e.g. in `PartialStruct` or `Conditional`), the contained element may only come from a wider lattice. In this PR, this is not enforced by the type system. This is quite deliberate, as I want to retain the types and object layouts of the lattice elements, but of course a future #42596-like change could add such type enforcement. Of particular note is that the `PartialsLattice` and `ConditionalsLattice` are parameterized, and additional layers may be added in the stack. For example, in #40992, I had proposed a lattice element that refines `Int` and tracks symbolic expressions. In this setup, this could be accomplished by adding an appropriate lattice in between the `ConstsLattice` and the `PartialsLattice` (of course, additional hooks would be required to make the tfuncs work, but that is outside the scope of this PR).
I don't think this is a full solution, but I think it'll help us play with some of these extended lattice options over the next 6-12 months in the packages that want to do this sort of thing. Presumably once we know what all the potential lattice extensions look like, we will want to take another look at this (likely together with whatever solution we come up with for the AbstractInterpreter composability problem and a rebase of #42596). WIP because I didn't bother updating and plumbing through the lattice in all the call sites yet, but that's mostly mechanical, so if we like this direction, I will make that change and hope to merge this in short order (because otherwise it'll accumulate massive merge conflicts).
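For intuition, the ladder described above can be pictured as a stack of small lattice types, each holding the next wider lattice as its parent. The sketch below only illustrates that shape; the `parent` field, the type parameters, and the `ExampleLattice` constant are assumptions for illustration, not the exact definitions from that PR.

# rough sketch of the lattice "ladder": each layer adds elements on top of its parent
abstract type AbstractLattice end

struct JLTypeLattice <: AbstractLattice end   # anything that's a `Type`
struct ConstsLattice <: AbstractLattice end   # + `Const`, `PartialTypeVar`

struct PartialsLattice{L<:AbstractLattice} <: AbstractLattice     # + `PartialStruct`, `PartialOpaque`
    parent::L
end
struct ConditionalsLattice{L<:AbstractLattice} <: AbstractLattice # + `Conditional`
    parent::L
end
struct InferenceLattice{L<:AbstractLattice} <: AbstractLattice    # + `LimitedAccuracy`
    parent::L
end
struct OptimizerLattice{L<:AbstractLattice} <: AbstractLattice    # + `MaybeUndef`
    parent::L
end

# a custom layer (e.g. an `Int`-refining lattice as in #40992) could be spliced in
# between `ConstsLattice()` and the `PartialsLattice` layer
const ExampleLattice = InferenceLattice(ConditionalsLattice(PartialsLattice(ConstsLattice())))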
Is this something we still want? |
This is basically UniTyper.jl, for inference |
This PR proposes an alternative design of the inference lattice, and overhauls our inference
implementation based on the design.
In other PRs, I will demonstrate examples of how new lattice properties can be introduced on top of
this lattice design, hopefully at a much lower development cost.
Motivation
Recently @vtjnash and I have found that it is getting harder and harder to maintain the correctness
of our lattice implementation and to improve the accuracy of inference, especially when introducing new lattice
properties. More specifically, we have observed the following three issues:
1. an extended lattice wrapper can wrap another wrapper
Under the current lattice design, there are various wrapper types that convey specific information
(I will refer to them as "extended lattice wrappers" in the remainder of this description).
They require special care in order to preserve their correctness, but if such a wrapper can wrap
another wrapper which also requires such special care, it becomes very tricky to maintain correctness.
inference: fix #42090, make sure not to wrap `Conditional` in `PartialStruct` (#42091) was an example of this problem, where we broke an invariant that `Conditional` assumes when `PartialStruct` wrapped `Conditional`.
2. messes around `isa`/`widenconst`/`widenconditional`/`ignorelimited`
Our inference implementation is messed up with `isa`-predicates and "unwrapping" utilities that transform a wrapper to another wrapper or to a native Julia type.
For example, currently native Julia `Type`s and extended lattice wrappers (e.g. `Const`) can appear in the same places, and we need to use `widenconst(x)` anywhere we want to use a subtyping predicate with `x`, just because the extended lattice wrappers are not valid native Julia `Type`s that can be passed to `<:`.
The pain point here is that most parts of our inference implementation treat lattice elements as `Any`-typed objects, and so it's not very clear whether a variable at some point is expected to be an extended lattice object or already a native Julia type (and accordingly JET is also unable to detect possible invalid operations in such cases).
3. complex lattice implementations of `⊑` and `tmerge`
Another pain point of having a bunch of extended lattice wrappers is that the implementations of lattice operations can be very complex. In particular, the reasoning can be very tricky when some extended lattice wrapper can be transformed into another wrapper. For example, some `x::Conditional` can be converted to `x::Const`, and so we want to handle the `isa(x, Conditional)` cases before we handle the `isa(x, Const)` cases in `⊑` or `tmerge` in order to get the best accuracy. This lattice complexity makes it difficult to introduce a new lattice wrapper, because this sort of relational handling needs to be reconsidered every time a new lattice wrapper is added.
Especially the second and third issues are very problematic in my opinion, since they make it very hard to improve inference accuracy by adding new lattice properties without introducing yet more complexity. For instance, we are hesitant to push #41199 forward under the current situation because of these concerns.
Proposed lattice design
In order to resolve these issues, we propose the following new lattice design:
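Roughly, the idea can be sketched as a single struct that carries every lattice property as an attribute; the attribute names below follow the ones used throughout this description, while the concrete field types and the `typ` field are illustrative assumptions rather than the final layout:

# illustrative sketch of `LatticeElement`: each lattice property becomes an attribute
# of one struct instead of a separate wrapper type
struct LatticeElement
    typ                 # the underlying native Julia type (what `widenconst` would return)
    constant            # `Const` information
    conditional         # `Conditional`/`InterConditional` information
    fields              # `PartialStruct` field information
    partialtypevar      # `PartialTypeVar` information
    partialopaque       # `PartialOpaque` information
    causes              # `LimitedAccuracy` causes
    maybeundef::Bool    # `MaybeUndef` flag
    special             # e.g. `Vararg`, which is not a valid lattice element on its own
end

# getting the native Julia type back is then just an attribute access
widenconst(x::LatticeElement) = x.typ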
As shown in the struct definition, `LatticeElement` encodes several lattice properties, and these attributes are combined to create a partial lattice whose height is infinite.
All the existing extended lattice wrappers like `Const` will be replaced with these properties, e.g. `x.constant === 10` can hold the same information as `x::Const(10)`.
How "2. messes around `isa`/`widenconst`/`widenconditional`/`ignorelimited`" will be resolved:
This `LatticeElement` object also wraps native Julia types. In other words, with this overhaul, the inference routine basically works on `LatticeElement` objects rather than mixing up extended lattice wrappers and native Julia types.
For example, now `abstract_call` will accept `argtypes::Vector{LatticeElement}` rather than `argtypes::Vector{Any}`, and `frame::InferenceState` will maintain `frame.bestguess::LatticeElement` rather than `frame.bestguess::Any`.
This essentially separates contexts where `LatticeElement` is expected from those where native Julia types are expected, and it allows us to reason about the inference logic far more easily, e.g. it would be much easier to find places where we need to use `widenconst` to get a native Julia type from a `LatticeElement`, hopefully with more help from static analyses like JET.jl.
Also note that `LatticeElement` conveys all the lattice properties, while under the current lattice implementation each wrapper conveys each lattice property independently.
It allows a `LatticeElement` to hold e.g. `constant` (corresponding to `Const`), `conditional` (`Conditional`) and `causes` (`LimitedAccuracy`) information all at the same time, so that we no longer need to use `widenconditional` in order to get `Const` information out of a `Conditional` wrapper, and we no longer need to use `ignorelimited` to get `Conditional` information out of a `LimitedAccuracy`.
Rather, `widenconditional` and `ignorelimited` are now only used as constructors, which convert a `LatticeElement` with conditional/limited information to a new `LatticeElement` without such information. And we can just eliminate their previous usages for getting at wrapped information.
How "3. complex lattice implementations of `⊑` and `tmerge`" will be resolved:
Another important observation is that most of these lattice attributes are actually orthogonal to each other. For example, `causes` and `conditional` can be compared or merged separately.
This property will allow us to vastly simplify the implementations of `⊑` and `tmerge`: now we can just compare or merge each attribute separately¹ and don't need to care about the order in which we handle lattice properties (previously, wrappers).
Once this overhaul is done, `⊑` and `tmerge` would look like the sketch after this paragraph. Compared to the current implementations, I'd argue that these new implementations will be much simpler, and I hope they can be easily enhanced with new attributes in the future.
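For illustration only, and assuming the `LatticeElement` sketch above: attribute-wise `⊑` and `tmerge` could be structured roughly as below. The per-attribute helpers (`issubattr`/`mergeattr`) are hypothetical placeholders, and, per footnote 1, a few attributes such as `constant` and `fields` still need coordinated handling in the real implementation.

# hypothetical per-attribute partial order and join; `nothing` means "no information"
issubattr(a, b) = b === nothing || a == b
mergeattr(a, b) = a == b ? a : nothing

function ⊑(a::LatticeElement, b::LatticeElement)
    # each attribute is compared on its own; no ordering between wrappers to worry about
    a.typ <: b.typ || return false
    issubattr(a.constant, b.constant) || return false
    issubattr(a.conditional, b.conditional) || return false
    issubattr(a.fields, b.fields) || return false
    return true
end

function tmerge(a::LatticeElement, b::LatticeElement)
    # likewise, each attribute is merged independently
    return LatticeElement(
        typejoin(a.typ, b.typ),
        mergeattr(a.constant, b.constant),
        mergeattr(a.conditional, b.conditional),
        mergeattr(a.fields, b.fields),
        mergeattr(a.partialtypevar, b.partialtypevar),
        mergeattr(a.partialopaque, b.partialopaque),
        mergeattr(a.causes, b.causes),
        a.maybeundef | b.maybeundef,
        mergeattr(a.special, b.special),
    )
end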
How "1. an extended lattice wrapper can wrap another wrapper" can be resolved
The proposed lattice design won't resolve this issue on its own, because each attribute of `LatticeElement` can also be a `LatticeElement` (thus `LatticeElement`'s lattice has an infinite height). I propose to address this issue by adding more constructor assertions.
For example, in order to make #42091 never happen again in the future, we can have a `PartialStruct` constructor along the lines of the sketch after this paragraph. We still need to remember to add `@assert !isMustAlias(field) "invalid PartialStruct field"` when reviving #41199 on top of this overhaul, but I couldn't come up with any effective alternative idea that resolves this issue.
I plan to finish this mega refactor with the following 4 big steps:
LatticeElement
attributeswidenconditional
, try to simplifytmerge
)Progress tracking
AbstractLattice
interface, which doesn't include native Julia types, but only contain extended lattice wrappers including newLatticeElement
AbstractLattice
/Vector{AbstractLattice}
rather thanAny
/Vector{Any}
LatticeElement
attributesPartialStruct
Conditional
/InterConditional
Const
PartialTypeVar
LimitedAccuracy
PartialOpaque
MaybeUndef
this is not a part of lattice, and I added a hack to sneak this into aDelayedTyp
LatticeElement
systemVararg
TODO (lattice overhaul)
leftoverAbstractLattice
withLatticeElement
⊑
tmerge
(maybe rename to⊔
?)tmeet
(maybe rename to⊓
?)widenconditional
ignorelimited
abstract_iteration
precise_container_type
xxx_nothrow
)PartialStruct
wrapLatticeElement
Conditional
/InterConditional
wrapLatticeElement
Any
annotation withLatticeElement
MustAlias
/InterMustAlias
: revive#41199
NInitialized
: records # of initialized fields (when notPartialStruct
-folded) in order to helpstmt_effect_free
DefinedFields
: back-propagateisdefined
information in order to helpstmt_effect_free
Collaborations 🙏
Any sort of development help would be super appreciated.
Just leave a comment if you find anything that needs a fix or can be improved,
or pick up any of the remaining tasks tracked above and make a PR against this branch.
Pro-tip: If you want to use Cthulhu on this branch, JuliaDebug/Cthulhu.jl#238 is for you.
Discussion
`Vararg`s (and `TypeVar`s²), because they can appear in the same context as `LatticeElement` but they are definitely not valid Julia types.
inference: improve `TypeVar`/`Vararg` handling (#42583) should improve the situation a bit by revealing where we currently need to special-case `Vararg`s, but they definitely appear in `argtypes`.
EDIT: I decided to just wrap them in the `LatticeElement.special` field.
fieldFootnotes
1. Of course there are exceptions; in our current lattice implementation, `Const` and `PartialStruct` are entangled with each other and need care about the handling order. ↩
2. Fortunately, after working on inference: improve `TypeVar`/`Vararg` handling (#42583), I found that we seem to have eliminated most usages of `TypeVar`s as argtypes. ↩