Tracking the time-to-first-solve (#1313)
The bridging case

include("bench.jl")
using SnoopCompile
tinf = @snoopi_deep example_diet(Clp.Optimizer, true)
julia> tinf
InferenceTimingNode: 11.698169/22.652688 on InferenceFrameInfo for Core.Compiler.Timings.ROOT() with 107 direct children
julia> staleinstances(tinf)
SnoopCompileCore.InferenceTiming[]

This call took 22.65 seconds, of which 11.7 seconds was on something other than inference (e.g., code gen). But that means 11 seconds was spent on type inference! This is a good target to attack.

julia> using ProfileView
julia> fg = flamegraph(tinf)
Node(FlameGraphs.NodeData(ROOT() at typeinfer.jl:73, 0x00, 0:22666765718))
julia> ProfileView.view(fg)
MathOptInterface.jl/src/Bridges/bridge_optimizer.jl Lines 882 to 891 in 2e99e25
In order to infer how to set the objective sense, we also need to infer how to delete an objective bridge, which involves a whooooole lot of stuff. Moreover, this function is red because the method is owned by

The big stack on top of the red is:

Base.precompile(Tuple{typeof(MathOptInterface.delete),UniversalFallback{GenericModel{Float64, ModelFunctionConstraints{Float64}}},MathOptInterface.VariableIndex})

but this didn't seem to help. Let's look at inference triggers:

julia> itrigs = inference_triggers(tinf)
106-element Vector{InferenceTrigger}:
Inference triggered to call MethodInstance for hvcat(::NTuple{4, Int64}, ::Float64, ::Vararg{Float64, N} where N) from example_diet (/Users/oscar/Documents/JuMP/performance/auto-cache/bench.jl:5) with specialization MethodInstance for example_diet(::Type, ::Bool)
Inference triggered to call MethodInstance for promote_typeof(::Int64, ::Int64, ::Vararg{Any, N} where N) from hvcat (./abstractarray.jl:1931) inlined into MethodInstance for example_diet(::Type, ::Bool) (/Users/oscar/Documents/JuMP/performance/auto-cache/bench.jl:12)
Inference triggered to call MethodInstance for promote_typeof(::Int64, ::Float64, ::Vararg{Any, N} where N) from promote_typeof (./promotion.jl:272) with specialization MethodInstance for promote_typeof(::Int64, ::Int64, ::Vararg{Any, N} where N)
Inference triggered to call MethodInstance for promote_typeof(::Float64, ::Int64, ::Vararg{Any, N} where N) from promote_typeof (./promotion.jl:272) with specialization MethodInstance for promote_typeof(::Int64, ::Float64, ::Vararg{Any, N} where N)
Inference triggered to call MethodInstance for typed_hvcat(::Type{Float64}, ::NTuple{9, Int64}, ::Int64, ::Vararg{Number, N} where N) from hvcat (./abstractarray.jl:1931) inlined into MethodInstance for example_diet(::Type, ::Bool) (/Users/oscar/Documents/JuMP/performance/auto-cache/bench.jl:12)
Inference triggered to call MethodInstance for var"#instantiate#23"(::Type{Float64}, ::Bool, ::typeof(MathOptInterface.instantiate), ::Type) from instantiate##kw (/Users/oscar/.julia/dev/MathOptInterface/src/instantiate.jl:120) with specialization MethodInstance for (::MathOptInterface.var"#instantiate##kw")(::NamedTuple{(:with_bridge_type,), Tuple{DataType}}, ::typeof(MathOptInterface.instantiate), ::Type)
[...]

julia> mtrigs = accumulate_by_source(Method, itrigs)
41-element Vector{SnoopCompile.TaggedTriggers{Method}}:
get(model::Union{MathOptInterface.Utilities.AbstractModelLike{T}, MathOptInterface.Utilities.AbstractOptimizer{T}} where T, ::MathOptInterface.ListOfVariableAttributesSet) in MathOptInterface.Utilities at /Users/oscar/.julia/dev/MathOptInterface/src/Utilities/model.jl:239 (1 callees from 1 callers)
(::MathOptInterface.var"#instantiate##kw")(::Any, ::typeof(MathOptInterface.instantiate), optimizer_constructor) in MathOptInterface at /Users/oscar/.julia/dev/MathOptInterface/src/instantiate.jl:115 (1 callees from 1 callers)
attach_optimizer(model::MathOptInterface.Utilities.CachingOptimizer) in MathOptInterface.Utilities at /Users/oscar/.julia/dev/MathOptInterface/src/Utilities/cachingoptimizer.jl:181 (1 callees from 1 callers)
empty!(model::Clp.Optimizer) in Clp at /Users/oscar/.julia/packages/Clp/E3N8m/src/MOI_wrapper/MOI_wrapper.jl:79 (1 callees from 1 callers)
[...]
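For individual triggers, SnoopCompile can also propose a likely remedy via suggest (a sketch using the itrigs vector from above; output omitted):

julia> suggest(itrigs[1])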
The non-bridging case

These should be fixable. They're all
@blegat, so a lot of the slowdown is coming from the design of

Something like

We start here:

MathOptInterface.jl/src/Utilities/model.jl Lines 628 to 646 in 2e99e25
The first line calls the following (and the copy can be removed):

MathOptInterface.jl/src/Utilities/struct_of_constraints.jl Lines 83 to 90 in 2e99e25
which calls:

MathOptInterface.jl/src/Utilities/struct_of_constraints.jl Lines 245 to 247 in 2e99e25
going through every field (also a StructOfConstraints) and eventually calling:

MathOptInterface.jl/src/Utilities/vector_of_constraints.jl Lines 117 to 122 in 2e99e25
So in total, there are 97 leaf VectorOfConstraints.
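To see why this pattern is expensive, here is a hypothetical illustration (not MOI's actual code): mapping a function over every field of a nested parametric struct compiles one specialization per concrete leaf type, so 97 leaves means 97 rounds of inference and codegen.

struct Leaf{T}
    data::Vector{T}
end

struct NodePair{A,B}
    a::A
    b::B
end

# One apply! specialization is inferred per concrete node/leaf type.
apply!(f, x::Leaf) = (f(x.data); nothing)
apply!(f, x::NodePair) = (apply!(f, x.a); apply!(f, x.b); nothing)

s = NodePair(Leaf(Int[]), NodePair(Leaf(Float64[]), Leaf(String[])))
apply!(empty!, s)  # compiles apply! (and empty!) for three distinct leaf types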
How much of this can be addressed by precompile statements?
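(For reference, the precompile statements in question can be generated from the @snoopi_deep trace with SnoopCompile's parcel/write workflow; a sketch, with an illustrative output path:)

using SnoopCompile
ttot, pcs = SnoopCompile.parcel(tinf)            # group precompile directives by package
SnoopCompile.write("/tmp/precompiles_MOI", pcs)  # illustrative path; writes one file per package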
For some reason the precompile statements don't help: even after generating them and adding them to the package, the corresponding calls are still compiled at runtime. From https://timholy.github.io/SnoopCompile.jl/stable/snoopi_deep_parcel/:

But one of the methods that failed is
which suggests it's failing to infer the element type.
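As a generic illustration (not MOI's actual code), the usual workaround when a container's element type can't be inferred is to assert the result of the dynamically-dispatched call so that the rest of the function stays inferrable:

function sum_lengths(items::Vector{Any})
    s = 0
    for x in items
        # length(x) is a dynamic dispatch here; asserting its return type
        # keeps s (and the caller) concretely typed.
        s += length(x)::Int
    end
    return s
end

sum_lengths(Any[[1, 2], "abc"])  # == 5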
(Edit: well, wrong on the previous comment. Now deleted.)

One cause of the slow-down is due to

julia> using MathOptInterface, Clp
julia> @time @eval MathOptInterface.Utilities.Model{Float64}()
4.884869 seconds (11.88 M allocations: 694.229 MiB, 7.17% gc time, 100.03% compilation time)
MOIU.GenericModel{Float64, MOIU.ModelFunctionConstraints{Float64}}

Compare with a fresh session that doesn't load Clp:

julia> using MathOptInterface
julia> @time @eval MathOptInterface.Utilities.Model{Float64}()
0.294457 seconds (539.43 k allocations: 33.832 MiB, 26.33% gc time, 100.77% compilation time)
MOIU.Model{Float64}
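If the regression comes from method invalidations triggered by loading Clp, one way to check is SnoopCompile's invalidation tooling; a sketch, to be run in a fresh session:

using SnoopCompileCore, MathOptInterface  # load MOI first so its compiled code is in place
invalidations = @snoopr using Clp         # record invalidations caused by loading Clp
using SnoopCompile                        # load the analysis code after collecting data
trees = invalidation_trees(invalidations)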
That's interesting. So the fact that all constraints are supported, and that we can see this at compile time, indeed lets us avoid compiling the constraint-bridging code. For the objective, setting an objective should remove the old one, and when you set a given objective, you can't tell from the types of the current objective whether the previous one was bridged.
Hmm. I'm stuck on handling deletion in

julia> using MathOptInterface
julia> const MOI = MathOptInterface
MathOptInterface
julia> model = MOI.Utilities.Model{Float64}()
MOIU.GenericModel{Float64, MOIU.ModelFunctionConstraints{Float64}}
julia> x = MOI.add_variable(model)
MathOptInterface.VariableIndex(1)
julia> @time @eval MOI.delete(model, x)
11.937550 seconds (17.38 M allocations: 957.306 MiB, 3.09% gc time, 100.01% compilation time)

Fixed in #1315.
The biggest issue now is the inferrability of:

MathOptInterface.jl/src/Utilities/copy.jl Lines 638 to 650 in 85f9703
Things like this are quite bad:

MathOptInterface.jl/src/Utilities/copy.jl Lines 457 to 509 in 85f9703
Removing this from the 0.10 milestone. We've made some good progress, and there is a plan to improve things further in the next JuMP release.
Take a look at JuliaDiff/ChainRules.jl#499
A big offender is:

using SnoopCompile
using GLPK; const MOI = GLPK.MOI
model = MOI.instantiate(GLPK.Optimizer; with_bridge_type = Float64)
f(model) = MOI.set(model, MOI.ObjectiveSense(), MOI.MIN_SENSE)
tinf = @snoopi_deep f(model)
itrigs = inference_triggers(tinf)
mtrigs = accumulate_by_source(Method, itrigs)
using ProfileView
fg = flamegraph(tinf)
ProfileView.view(fg)
Closing because things are now muuuuch better (with Julia 1.9). We can continue to improve by adding PrecompileTools, but let's use #2226 as that issue.
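For reference, a PrecompileTools workload might look like the following minimal sketch (the workload here is hypothetical; the real one would live in the package source and exercise representative calls):

using PrecompileTools

@setup_workload begin
    import MathOptInterface as MOI
    @compile_workload begin
        # Hypothetical workload: build a tiny model to force compilation
        # of the common construction path at precompile time.
        model = MOI.Utilities.Model{Float64}()
        x = MOI.add_variable(model)
        MOI.set(model, MOI.ObjectiveSense(), MOI.MIN_SENSE)
    end
end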
We know we have a time-to-first-solve problem. There have been a few different attempts at solving it (#1156, #1249, #1251, #1252), but they're getting a bit scattered, so I thought I would open an issue to track progress.
Currently, things are going in the wrong direction.
I'm going to argue that this benchmark is very important to users, particularly new users, and we should put a high priority on improving this. The script is set up to replicate how JuMP builds MOI models.
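As a rough sketch of what that means (pieced together from calls that appear elsewhere in this thread; JuMP's actual setup may differ in its details), the script builds a bridged, cached optimizer:

using MathOptInterface, Clp
const MOI = MathOptInterface

cache = MOI.Utilities.UniversalFallback(MOI.Utilities.Model{Float64}())
optimizer = MOI.instantiate(Clp.Optimizer; with_bridge_type = Float64)
model = MOI.Utilities.CachingOptimizer(cache, optimizer)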
[Benchmark plot: time-to-first-solve across versions 1.1.0, 0.10.6, 0.9.22, 0.9.21, and 0.9.20]
Script: setup.jl, bench.jl, run.sh