
Add auto_bridge to CachingOptimizer #1252

Closed — wants to merge 4 commits

Conversation

odow (Member) commented Feb 26, 2021

What problem is this trying to solve?

Consider this model in Clp:

model = Model(Clp.Optimizer)
@variable(model, x[1:2] >= 0)
optimize!(model)

Pop quiz: how many caches are there?

julia> backend(model)
MOIU.CachingOptimizer{MOI.AbstractOptimizer,MOIU.UniversalFallback{MOIU.Model{Float64}}}
in state ATTACHED_OPTIMIZER
in mode AUTOMATIC
with model cache MOIU.UniversalFallback{MOIU.Model{Float64}}
  fallback for MOIU.Model{Float64}
with optimizer MOIB.LazyBridgeOptimizer{MOIU.CachingOptimizer{Clp.Optimizer,MOIU.UniversalFallback{MOIU.Model{Float64}}}}
  with 0 variable bridges
  with 0 constraint bridges
  with 0 objective bridges
  with inner model MOIU.CachingOptimizer{Clp.Optimizer,MOIU.UniversalFallback{MOIU.Model{Float64}}}
    in state ATTACHED_OPTIMIZER
    in mode AUTOMATIC
    with model cache MOIU.UniversalFallback{MOIU.Model{Float64}}
      fallback for MOIU.Model{Float64}
    with optimizer Clp.Optimizer

There are two! There is the outer cache, then a bridging layer, and then the inner cache.
But there are 0 bridges added to this problem! So our two caches are just copies of each other.

Why do we need two? Because a user might write this:

model = Model(Clp.Optimizer)
@variable(model, x[1:2])
@constraint(model, x in MOI.Nonnegatives(2))
optimize!(model)
julia> backend(model)
MOIU.CachingOptimizer{MOI.AbstractOptimizer,MOIU.UniversalFallback{MOIU.Model{Float64}}}
in state ATTACHED_OPTIMIZER
in mode AUTOMATIC
with model cache MOIU.UniversalFallback{MOIU.Model{Float64}}
  fallback for MOIU.Model{Float64}
with optimizer MOIB.LazyBridgeOptimizer{MOIU.CachingOptimizer{Clp.Optimizer,MOIU.UniversalFallback{MOIU.Model{Float64}}}}
  with 0 variable bridges
  with 1 constraint bridge
  with 0 objective bridges
  with inner model MOIU.CachingOptimizer{Clp.Optimizer,MOIU.UniversalFallback{MOIU.Model{Float64}}}
    in state ATTACHED_OPTIMIZER
    in mode AUTOMATIC
    with model cache MOIU.UniversalFallback{MOIU.Model{Float64}}
      fallback for MOIU.Model{Float64}
    with optimizer Clp.Optimizer

For the majority of users, who are just trying to solve LPs and MIPs, this means that their first solve involves a double copy, plus a lot of extra inference work sorting out the bridging functions.

The alternative

If you knew there was going to be no bridging, you could write:

model = Model(Clp.Optimizer; bridge_constraints = false)
@variable(model, x[1:2] >= 0)
optimize!(model)
julia> backend(model)
MOIU.CachingOptimizer{MOI.AbstractOptimizer,MOIU.UniversalFallback{MOIU.Model{Float64}}}
in state ATTACHED_OPTIMIZER
in mode AUTOMATIC
with model cache MOIU.UniversalFallback{MOIU.Model{Float64}}
  fallback for MOIU.Model{Float64}
with optimizer Clp.Optimizer

and if you get it wrong, you get a nice error message:

model = Model(Clp.Optimizer; bridge_constraints = false)
@variable(model, x[1:2])
julia> @constraint(model, x in MOI.Nonnegatives(2))
ERROR: Constraints of type MathOptInterface.VectorOfVariables-in-MathOptInterface.Nonnegatives are not supported by the solver, try using `bridge_constraints=true` in the `JuMP.Model` constructor if you believe the constraint can be reformulated to constraints supported by the solver.

Thus, one option is to change the default in JuMP from bridge_constraints = true to bridge_constraints = false.

Pros: faster. Users are more aware when bridges are used. Most users don't use bridges.
Cons: breaking. But it's a one-line change for users.

Proposed approach

Start with the equivalent of bridge_constraints = false. If, when adding a constraint, the constraint is unsupported, add a full_bridge_optimizer.

Pros: opt-in at the MOI level. Better performance without breaking at the JuMP level.
Cons: more complexity, and it may be better for users to see explicitly when bridges are used.
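
A minimal sketch of the proposed fallback, assuming hypothetical names (`bridge_if_needed` is not this PR's API; the PR wires the equivalent logic into CachingOptimizer itself):

using MathOptInterface
const MOI = MathOptInterface

# Hypothetical helper: wrap `opt` in a full bridge layer only when it cannot
# support the requested constraint type natively.
function bridge_if_needed(opt::MOI.AbstractOptimizer, F::Type, S::Type)
    if MOI.supports_constraint(opt, F, S)
        return opt  # fast path: no bridging layer, single cache
    end
    return MOI.Bridges.full_bridge_optimizer(opt, Float64)
end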

See the companion PR in JuMP, jump-dev/JuMP.jl#2513, which contains benchmarks demonstrating the speedup.

TODOs

  • Decide whether to do this, or change JuMP's default.
  • Tests
  • Benchmarks
  • Docs
  • A way to get the Number type, rather than hard-coding Float64
  • A way for JuMP to add extra bridges?
  • Bikeshed the argument name

Closes #1156
Closes #1249
Closes #1251

blegat (Member) commented Feb 26, 2021

Solvers that benefit from this second case are the ones with an efficient copy_to. These solvers usually have a copy_to function that starts by converting the model into a matrix form and then loads it into the solver.
So here we really have three copies of the model:
User model -> Bridged model -> Matrix form
With your suggestion, it would be reduced to two:
User model -> Matrix form

So another way to do this would be to use the matrix form for the bridged model. That is, instead of using a MOIU.Model in the CachingOptimizer, we use a matrix format, so that when we call MOI.copy_to(::Optimizer, ::MatrixForm), the solver sees that it is already exactly the matrix form it needs, takes the pointers of the vectors describing the sparse matrix, and hands them to the solver directly; no copy needed.

So if we combine the two ideas, we could have only one copy:
Matrix form
The issue with that is: if the user does

model = Model()
# Write problem
set_optimizer(model, ...)

then we don't know which matrix form to use before set_optimizer is called.
Moreover, if the user changes the optimizer, we should change the matrix form.
Maybe that's not a big issue, though; we can just create a new cache in the new matrix form and copy.
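
A sketch of this idea, assuming a hypothetical matrix-form cache type (MatrixForm is a stand-in; MOI did not have such a type at the time — see the later references to #1245 and #1261):

# Hypothetical: cache the bridged model directly in the solver's matrix format.
cache = MatrixForm{Float64}()
inner = MOI.Utilities.CachingOptimizer(cache, Clp.Optimizer())
# MOI.copy_to(::Clp.Optimizer, ::MatrixForm) could then hand the sparse-matrix
# vectors to Clp by pointer, with no intermediate copy.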

odow force-pushed the od/cache_auto_bridge branch 2 times, most recently from 57430dc to ad4f077 on Feb 28, 2021
odow (Member Author) commented Feb 28, 2021

The matrix form is only used for loading, though. It gets discarded and GC'd after copy_to.

There could be a way for the optimizer to say "here is my desired cache." The default could be

function desired_cache(model::AbstractOptimizer)
    return Utilities.UniversalFallback(Utilities.Model{Float64}())
end

and matrix solvers could specify something else.
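
For instance (a hypothetical sketch; SomeMatrixSolver and MatrixCache are stand-in names, not real packages or types):

# Hypothetical specialization: a solver that loads from a matrix form returns
# a matrix-shaped cache, so copy_to can hand over its data directly.
function desired_cache(model::SomeMatrixSolver.Optimizer)
    return SomeMatrixSolver.MatrixCache{Float64}()
end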

Review thread on this hunk:

@assert MOI.is_empty(optimizer)
if bridge_constraints
    state = EMPTY_OPTIMIZER
    T = MOI.AbstractOptimizer
blegat (Member):

The type is not concrete, so it will be inefficient.

blegat (Member):

We could use a union of the type with and without bridges.

odow (Member Author):

It's not sufficient to do this because the bridges may need a caching layer below. The abstract type has worked fine for JuMP, and having a complicated union may not actually help things.

blegat (Member):

I don't agree: we get an allocation every time we access the field; we have all these _moi function barriers in JuMP to avoid exactly this. If we have the same issue in the CachingOptimizer, then it doesn't make sense.
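
For context, the function-barrier pattern being referenced looks roughly like this (a sketch; _solve_barrier and _optimize_inner are hypothetical names):

using JuMP, MathOptInterface
const MOI = MathOptInterface

# Fetch the abstractly-typed field once, then hand it to an inner function so
# the rest of the work specializes on the concrete optimizer type.
function _solve_barrier(model::Model)
    inner = backend(model).optimizer  # abstract field: one dynamic dispatch
    return _optimize_inner(inner)     # type-stable from here on
end
_optimize_inner(opt) = MOI.optimize!(opt)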

odow (Member Author):

This is an opt-in feature for users of CachingOptimizer. JuMP will opt in with no change to performance because it already has the abstract type. If others opt in, they should check performance and/or implement function barriers.

Overall, this is a big win for JuMP with minimal impact on other users. It's simple to implement, and there are no edge cases.

blegat (Member):

I think there will be a change in performance even for JuMP, because there are now two fields of abstract type. We don't know what backend is, so we take one dispatch hit; then, once we figure out it's a CachingOptimizer and call into it, the optimizer field is abstract again, so we take a second hit.

odow (Member Author):

There is no change in the JuMP behavior:

julia> model = Model(Clp.Optimizer);

julia> backend(model)
MOIU.CachingOptimizer{MOI.AbstractOptimizer,MOIU.UniversalFallback{MOIU.Model{Float64}}}
in state EMPTY_OPTIMIZER
in mode AUTOMATIC
with model cache MOIU.UniversalFallback{MOIU.Model{Float64}}
  fallback for MOIU.Model{Float64}
with optimizer MOIB.LazyBridgeOptimizer{MOIU.CachingOptimizer{Clp.Optimizer,MOIU.UniversalFallback{MOIU.Model{Float64}}}}
  with 0 variable bridges
  with 0 constraint bridges
  with 0 objective bridges
  with inner model MOIU.CachingOptimizer{Clp.Optimizer,MOIU.UniversalFallback{MOIU.Model{Float64}}}
    in state ATTACHED_OPTIMIZER
    in mode AUTOMATIC
    with model cache MOIU.UniversalFallback{MOIU.Model{Float64}}
      fallback for MOIU.Model{Float64}
    with optimizer Clp.Optimizer

julia> model2 = Model(Clp.Optimizer; auto_bridge = true);

julia> backend(model2)
MOIU.CachingOptimizer{MOI.AbstractOptimizer,MOIU.UniversalFallback{MOIU.Model{Float64}}}
in state EMPTY_OPTIMIZER
in mode AUTOMATIC
with model cache MOIU.UniversalFallback{MOIU.Model{Float64}}
  fallback for MOIU.Model{Float64}
with optimizer Clp.Optimizer

julia> @variable(model2, x[1:2] in MOI.Nonnegatives(2))
2-element Array{VariableRef,1}:
 x[1]
 x[2]

julia> backend(model2)
MOIU.CachingOptimizer{MOI.AbstractOptimizer,MOIU.UniversalFallback{MOIU.Model{Float64}}}
in state EMPTY_OPTIMIZER
in mode AUTOMATIC
with model cache MOIU.UniversalFallback{MOIU.Model{Float64}}
  fallback for MOIU.Model{Float64}
with optimizer MOIB.LazyBridgeOptimizer{MOIU.CachingOptimizer{Clp.Optimizer,MOIU.UniversalFallback{MOIU.Model{Float64}}}}
  with 0 variable bridges
  with 0 constraint bridges
  with 0 objective bridges
  with inner model MOIU.CachingOptimizer{Clp.Optimizer,MOIU.UniversalFallback{MOIU.Model{Float64}}}
    in state ATTACHED_OPTIMIZER
    in mode AUTOMATIC
    with model cache MOIU.UniversalFallback{MOIU.Model{Float64}}
      fallback for MOIU.Model{Float64}
    with optimizer Clp.Optimizer

odow (Member Author):

If bridges are applied, we get back to the current JuMP behavior. If bridges are not applied, then we skip all the issues with bridges, but the backend type that JuMP sees is still the same.

odow (Member Author):

Can we have a call to discuss instead of this back-and-forth?

blegat (Member):

Yes, I think we've reached the point where messages across opposite time zones make this process long and inefficient :D

blegat (Member) commented Mar 2, 2021

One issue with the approach in the PR is that the bridge layer is added when the user calls supports_.... However, we should only add it when the user actually calls add_.... To make supports_... work, we should indeed check whether the constraint would be supported if a bridge layer were added, but we should not modify model.optimizer.

Another concern is that it only addresses the use case of JuMP, which has a cache before the bridge layer. For other use cases, such as meta-solvers, you basically want to do

optimizer = MOI.instantiate(solver, with_bridge_type=Float64)
MOI.copy_to(optimizer, src)

In this case, we don't get any benefit from this PR.
To solve both use cases, and to avoid the issue with supports discussed above, we could do the following:

  1. Replace

     function MOI.copy_to(mock::AbstractBridgeOptimizer, src::MOI.ModelLike; kws...)
         return MOIU.automatic_copy_to(mock, src; kws...)
     end

     by

     function MOI.copy_to(dest::AbstractBridgeOptimizer, src::MOI.ModelLike; kws...)
         if # nothing is going to be bridged anyway (sketched below)
             MOI.copy_to(dest.model, src)
         else
             MOIU.default_copy_to(dest, src; kws...)
         end
     end
  2. Add an option for the CachingOptimizer and do

     function MOI.copy_to(dest::CachingOptimizer, ...)
         if state != NO_OPTIMIZER && # the option is true
             MOI.copy_to(dest.optimizer, ...)
         else
             # same as what's currently done
         end
     end

So with this option, we avoid using the cache when copy_to is called.
If modifications are made that are not supported by the solver, we simply do not catch the error.
JuMP could safely use it, as there is a second layer of cache on top that would catch the error and empty the optimizer (containing the bridges, the (unused) second cache, and the optimizer).
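
One way the "nothing is going to be bridged anyway" check from item 1 could look (a sketch; _nothing_to_bridge is a hypothetical name, and a complete version would also check variable and objective support):

# Hypothetical check: every constraint type present in `src` is supported
# natively by the model underneath the bridge layer.
function _nothing_to_bridge(dest::MOI.Bridges.AbstractBridgeOptimizer, src::MOI.ModelLike)
    return all(MOI.get(src, MOI.ListOfConstraints())) do (F, S)
        MOI.supports_constraint(dest.model, F, S)
    end
end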

odow (Member Author) commented Mar 2, 2021

The reason to add bridges on a supports call is two-fold:

  1. If you're asking supports, we don't want to return false when actually calling add_constraint will work.
  2. We don't want to initialize a bridge every time we call supports, only to reinitialize it when we call add_constraint.

I'll have a play with your suggestion. But it seems like there are a lot of edge cases with modification and deletion.

blegat (Member) commented Mar 3, 2021

The reason to add bridges on a supports call is two-fold:

  1. If you're asking supports, we don't want to return false when actually calling add_constraint will work.
  2. We don't want to initialize a bridge every time we call supports, only to reinitialize it when we call add_constraint.

Yes; for this reason, the only sensible way to make supports have no side effects is to have two fields, optimizer and bridged_optimizer, and a Bool indicating which one is active.
But that moves away from MOI layers following the "do one thing and do it well" principle.
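
For concreteness, the two-field design described above might look like this (a hypothetical sketch; no such type exists in MOI):

# Both wrappers are constructed up front; a flag selects where calls route.
mutable struct AutoBridgingOptimizer{O<:MOI.AbstractOptimizer,B} <: MOI.AbstractOptimizer
    optimizer::O          # the raw solver
    bridged_optimizer::B  # the same solver wrapped in a bridge layer
    use_bridges::Bool     # which field is currently active
end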

odow added the Submodule: Utilities label Mar 3, 2021
odow force-pushed the od/cache_auto_bridge branch 2 times, most recently from b44c470 to 031d1a5 on Mar 4, 2021
odow changed the title from "WIP: only bridge if needed" to "Add auto_bridge to CachingOptimizer" on Mar 4, 2021
odow (Member Author) commented Mar 4, 2021

More JuMP benchmarks using the script below:

(base) oscar@Oscars-MBP JuMP % ~/julia --project=. example_diet.jl clp       
 17.319493 seconds (57.13 M allocations: 2.858 GiB, 8.30% gc time)
(base) oscar@Oscars-MBP JuMP % ~/julia --project=. example_diet.jl clp --auto
  6.071365 seconds (18.02 M allocations: 925.261 MiB, 5.73% gc time)
(base) oscar@Oscars-MBP JuMP % ~/julia --project=. example_diet.jl glpk      
 13.341120 seconds (44.97 M allocations: 2.257 GiB, 8.24% gc time)
(base) oscar@Oscars-MBP JuMP % ~/julia --project=. example_diet.jl glpk --auto
  8.593967 seconds (28.04 M allocations: 1.406 GiB, 8.13% gc time)

That's a 2–3x improvement in time-to-first-solve for the models users care about. And this does not count the additional speedups we will get from precompilation.

Code

using JuMP, GLPK, Clp
function example_diet(optimizer, auto_bridge)
    categories = ["calories", "protein", "fat", "sodium"]
    category_data = Containers.DenseAxisArray([
        1800 2200;
        91   Inf;
        0    65;
        0    1779
        ], categories, ["min", "max"]
    )
    foods = [
        "hamburger", "chicken", "hot dog", "fries", "macaroni", "pizza",
        "salad", "milk", "ice cream",
    ]
    cost = Containers.DenseAxisArray(
        [2.49, 2.89, 1.50, 1.89, 2.09, 1.99, 2.49, 0.89, 1.59],
        foods
    )
    food_data = Containers.DenseAxisArray(
        [
            410 24 26 730;
            420 32 10 1190;
            560 20 32 1800;
            380  4 19 270;
            320 12 10 930;
            320 15 12 820;
            320 31 12 1230;
            100  8 2.5 125;
            330  8 10 180
        ], foods, categories
    )
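    # `auto_bridge` is the keyword added by the companion JuMP PR
    # (jump-dev/JuMP.jl#2513); it is not in a released JuMP version.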
    model = Model(optimizer, auto_bridge = auto_bridge)
    set_silent(model)
    @variables(model, begin
        category_data[c, "min"] <= nutrition[c = categories] <= category_data[c, "max"]
        buy[foods] >= 0
    end)
    @objective(model, Min, sum(cost[f] * buy[f] for f in foods))
    @constraint(model, [c in categories],
        sum(food_data[f, c] * buy[f] for f in foods) == nutrition[c]
    )
    optimize!(model)
    term_status = termination_status(model)
    @assert term_status == MOI.OPTIMAL
    @constraint(model, buy["milk"] + buy["ice cream"] <= 6)
    optimize!(model)
    @assert termination_status(model) == MOI.INFEASIBLE
    return
end

if length(ARGS) > 0
    auto = get(ARGS, 2, "") == "--auto"
    if ARGS[1] == "clp"
        @time example_diet(Clp.Optimizer, auto)
    else
        @assert ARGS[1] == "glpk"
        @time example_diet(GLPK.Optimizer, auto)
    end
end

odow force-pushed the od/cache_auto_bridge branch from a9638bf to 79d1cdb on Mar 4, 2021
@odow odow requested a review from blegat March 4, 2021 03:08
@blegat blegat mentioned this pull request Mar 4, 2021
odow (Member Author) commented Mar 5, 2021

Talked to @blegat offline. The alternative is to add a DIRECT state to CachingOptimizer, which just ignores the cache and throws errors if things are not supported.

@odow odow requested a review from mlubin March 5, 2021 20:39
odow (Member Author) commented Mar 5, 2021

@mlubin, I think we need your input on this one.

mlubin (Member) commented Mar 6, 2021

This PR claims to close #1156. #1156 says:

Rather than making an ad-hoc change, we should take the time to thoroughly document the caching optimizer system, choose a design that makes the most sense, and then implement that.

Where's the thorough documentation? :)

Changing the external state of the CachingOptimizer in supports_* is a bad code smell. Even changing the internal state would be strange, because it's implied that supports_* has no side effects.

Could you elaborate on what the DIRECT state would look like?

odow (Member Author) commented Mar 7, 2021

Where's the thorough documentation? :)

Still to come

is a bad code smell

I've made it so the optimizer is only modified during an actual add_constraint call.

Could you elaborate on what the DIRECT state would look like?

It would function just like JuMP's current direct mode: everything is forwarded straight to .optimizer, the .model_cache is not kept in sync (it stays empty), and anything that isn't supported throws an error. This is only useful if there are two caches, though, so something like: Cache -> Bridge -> Cache(direct) -> Optimizer. Then, when the inner cache(direct) throws, we can repopulate the model from the outer cache. We would also have to use Benoit's suggestion for copy_to, so that the caching optimizer in direct mode would copy_to the optimizer, and the bridges would have to check whether any bridging needed to happen and, if not, forward the copy_to to the optimizer.
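
Built from existing MOI pieces, that layering would look as follows (a sketch; the DIRECT state itself does not exist yet, so the inner CachingOptimizer here is an ordinary one):

using MathOptInterface, Clp
const MOI = MathOptInterface
const MOIU = MOI.Utilities

# Cache -> Bridge -> Cache(direct) -> Optimizer, built bottom-up:
inner = MOIU.CachingOptimizer(           # would gain the DIRECT state
    MOIU.UniversalFallback(MOIU.Model{Float64}()),
    Clp.Optimizer(),
)
bridged = MOI.Bridges.full_bridge_optimizer(inner, Float64)
outer = MOIU.CachingOptimizer(           # outer cache repopulates on error
    MOIU.UniversalFallback(MOIU.Model{Float64}()),
    bridged,
)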

Note that even if we merge this PR, we could still add a DIRECT state in future...

odow (Member Author) commented Mar 7, 2021

As for the documentation, the code in cachingoptimizer.jl is actually pretty straightforward, and less complicated than I had realized. (Most of the complexity is in MOI.Utilities.Model and in the bridges.)

One option I would be in favor of is to remove MANUAL mode. It doesn't get used, and there aren't many reasons to use it over AUTOMATIC mode. But that's a different topic.

odow force-pushed the od/cache_auto_bridge branch from addf965 to e17e88f on Mar 8, 2021
odow (Member Author) commented Mar 8, 2021

Latest timings on Julia 1.5.3

(base) oscar@Oscars-MBP scripts % ~/julia --project=. example_diet.jl clp 
 13.592436 seconds (35.24 M allocations: 1.775 GiB, 6.56% gc time)
(base) oscar@Oscars-MBP scripts % ~/julia --project=. example_diet.jl clp --auto
  5.070573 seconds (11.74 M allocations: 606.745 MiB, 4.71% gc time)
(base) oscar@Oscars-MBP scripts % ~/julia --project=. example_diet.jl glpk      
 10.806421 seconds (29.22 M allocations: 1.471 GiB, 7.37% gc time)
(base) oscar@Oscars-MBP scripts % ~/julia --project=. example_diet.jl glpk --auto
  5.876158 seconds (12.85 M allocations: 666.154 MiB, 3.98% gc time)

And 1.6-RC1

(base) oscar@Oscars-MBP scripts % '/Applications/Julia-1.6.app/Contents/Resources/julia/bin/julia' --project=. example_diet.jl clp
 12.588525 seconds (32.80 M allocations: 1.842 GiB, 6.04% gc time, 33.69% compilation time)
(base) oscar@Oscars-MBP scripts % '/Applications/Julia-1.6.app/Contents/Resources/julia/bin/julia' --project=. example_diet.jl clp --auto
  5.087086 seconds (10.93 M allocations: 647.489 MiB, 3.38% gc time, 92.28% compilation time)
(base) oscar@Oscars-MBP scripts % '/Applications/Julia-1.6.app/Contents/Resources/julia/bin/julia' --project=. example_diet.jl glpk     
  9.666504 seconds (27.98 M allocations: 1.566 GiB, 5.27% gc time, 38.85% compilation time)
(base) oscar@Oscars-MBP scripts % '/Applications/Julia-1.6.app/Contents/Resources/julia/bin/julia' --project=. example_diet.jl glpk --auto
  6.029965 seconds (12.95 M allocations: 772.171 MiB, 3.82% gc time, 99.95% compilation time)

Still lots of room to improve, but we've gone from 17s to 5s on Clp, and 13s to 6s on GLPK.

blegat (Member) commented Mar 9, 2021

As we discussed offline, the use case for this is to remove the second layer of cache that is required for solvers that do not implement incremental building, such as Clp. However, there is currently yet another copy: the MOI wrapper of Clp internally converts the model into a matrix format that can be passed directly (by pointer, without copying) to Clp.
As detailed in #1261, we could merge the two last models by directly storing the bridged model in the cache of the CachingOptimizer, in the exact format Clp needs.
So it seems #1261 and this PR solve the same use case, and we should probably keep only one of them.
As we discussed, this could be a short-term solution while we do #1261, but I don't think #1261 would take too much time (given that #1245 and #1267 are already done), and we don't want to add an option (resp. an opt-in behavior) that we deprecate (resp. change) shortly after adding it.
It's good to see the latency reduction that we get with this PR, though, and we should check that we get at least the same reduction whichever change we choose.

odow (Member Author) commented Mar 9, 2021

we should check that we get at least the same reduction whichever change we choose

Agreed.

What if we release 0.9.21 soon with the current changes, then start on 0.10? There are quite a few breaking changes queued up, so we could merge this into a 0.10-dev, but hold off releasing 0.10 for a while.

blegat (Member) commented Mar 9, 2021

There are quite a few breaking changes queued up

I don't think we need to make a breaking release. We can merge #1254 and see if it breaks anything, but I would suspect that if it breaks a package, then the package was using the CachingOptimizer incorrectly.

so we could merge this into a 0.10-dev

I don't think this should be merged. We can leave it open as a reference, but, as discussed, I don't see how it fits in the long-term plan. Also, we should aim to get the advantages of this PR even when only part of the model is bridged.

odow (Member Author) commented Mar 9, 2021

Okay. We're clearly at an impasse. I'll wait for #1261.

odow added this to the v0.10 milestone May 2, 2021
odow (Member Author) commented May 2, 2021

Adding this to the 0.10 milestone because we should decide whether or not to add it before releasing 0.10. I'm in favor of adding it, unless we can demonstrate that the alternatives are more performant.

blegat (Member) commented May 3, 2021

I think we should better understand what's going on with the benchmark. Is the difference only compile time, or does some of it persist at run time? What happens if you remove the bridge layer but keep a second cache? Intuitively, I would think there is not much to compile at the bridge layer if all constraints are supported: since the bridge graph is built lazily, if all constraints are supported, nothing is built. If we notice that a lot of things are still compiled when nothing is bridged, then maybe there is a simple fix to the Bridges module that could resolve it. That fix would be useful even when only part of the model is bridged.
For the second layer of cache, this should be resolved by #1287. It has taken time, but that PR should now be ready.

odow force-pushed the od/cache_auto_bridge branch from e17e88f to e3659bd on May 3, 2021
odow (Member Author) commented May 3, 2021

I don't think it was compilation; it was mainly an inference problem. But I'll redo the benchmarks in light of the new MOI release and Julia 1.6.

odow (Member Author) commented May 3, 2021

Things have gotten worse with MOI 0.9.21:

(base) oscar@Oscars-MBP auto-cache % ~/julia --project=. --depwarn=error bench.jl clp
 24.237062 seconds (57.88 M allocations: 3.274 GiB, 6.30% gc time, 41.11% compilation time)
  0.001585 seconds (6.36 k allocations: 563.711 KiB)
(base) oscar@Oscars-MBP auto-cache % ~/julia --project=. --depwarn=error bench.jl clp --auto
 12.316551 seconds (20.23 M allocations: 1.157 GiB, 4.61% gc time, 96.23% compilation time)
  0.001926 seconds (3.29 k allocations: 329.508 KiB)
(base) oscar@Oscars-MBP auto-cache % ~/julia --project=. --depwarn=error bench.jl glpk      
 16.994045 seconds (36.90 M allocations: 2.125 GiB, 5.72% gc time, 55.21% compilation time)
  0.000760 seconds (3.46 k allocations: 270.109 KiB)
(base) oscar@Oscars-MBP auto-cache % ~/julia --project=. --depwarn=error bench.jl glpk --auto
 13.525074 seconds (22.89 M allocations: 1.315 GiB, 4.67% gc time, 99.97% compilation time)
  0.000798 seconds (2.80 k allocations: 254.281 KiB)

This is with

(auto-cache) pkg> st
      Status `~/Documents/JuMP/performance/auto-cache/Project.toml`
  [e2554f3b] Clp v0.8.4 `https://github.com/jump-dev/Clp.jl.git#od/moi10`
  [60bf3e95] GLPK v0.14.8 `https://github.com/jump-dev/GLPK.jl.git#od/moi10`
  [4076af6c] JuMP v0.21.7 `https://github.com/jump-dev/JuMP.jl.git#od/autobridge`
  [b8f27783] MathOptInterface v0.9.21 `https://github.com/jump-dev/MathOptInterface.jl.git#od/cache_auto_bridge`

I'll take a deeper look.

odow added 4 commits May 20, 2021

If true, this argument enables CachingOptimizer to automatically add a bridging layer if it will allow the underlying optimizer to support the constraint or objective function.
odow force-pushed the od/cache_auto_bridge branch from ecc9bfd to 4c803b8 on May 19, 2021
odow (Member Author) commented May 20, 2021

Closing this because I talked to @blegat and we decided this could actually be done at the JuMP level, rather than as a complication of MathOptInterface.

odow closed this May 20, 2021
odow deleted the od/cache_auto_bridge branch May 20, 2021
Labels: Submodule: Utilities · Type: Performance
4 participants