WIP: bridge backend of Model if needed
Rebase and fix formatting

bridge_formulation instead of bridge_constraints

Some test fixes

More fixes

Fix docs. Requires new GLPK release

Update models.md

Remove TODO

Another fix

More fixes

Fix formatting

Fix typo

More tests

Update solvers_and_solutions.jl

Rename bridge_formulation to force_bridge_formulation
odow committed Oct 1, 2021
1 parent 4f99d32 commit a770356
Showing 15 changed files with 410 additions and 344 deletions.
53 changes: 13 additions & 40 deletions docs/src/manual/models.md
@@ -234,16 +234,12 @@ CachingOptimizer state: EMPTY_OPTIMIZER
Solver name: GLPK
julia> b = backend(model)
MOIU.CachingOptimizer{MOI.AbstractOptimizer, MOIU.UniversalFallback{MOIU.Model{Float64}}}
MOIU.CachingOptimizer{GLPK.Optimizer, MOIU.UniversalFallback{MOIU.Model{Float64}}}
in state EMPTY_OPTIMIZER
in mode AUTOMATIC
with model cache MOIU.UniversalFallback{MOIU.Model{Float64}}
fallback for MOIU.Model{Float64}
with optimizer MOIB.LazyBridgeOptimizer{GLPK.Optimizer}
with 0 variable bridges
with 0 constraint bridges
with 0 objective bridges
with inner model A GLPK model
with optimizer A GLPK model
```

The backend is a `MOIU.CachingOptimizer` in the state `EMPTY_OPTIMIZER` and mode
@@ -280,17 +276,9 @@ It has two parts:
2. An optimizer, which is used to solve the problem
```jldoctest models_backends
julia> b.optimizer
MOIB.LazyBridgeOptimizer{GLPK.Optimizer}
with 0 variable bridges
with 0 constraint bridges
with 0 objective bridges
with inner model A GLPK model
A GLPK model
```
!!! info
The [LazyBridgeOptimizer](@ref) section explains what a
`LazyBridgeOptimizer` is.
The `CachingOptimizer` has logic to decide when to copy the problem from the
cache to the optimizer, and when it can efficiently update the optimizer
in-place.
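That copy-versus-update logic can be observed directly through the cache's state. A minimal sketch (assuming GLPK is installed; the `MOIU` alias matches the output shown above, and the commented states are illustrative):

```julia
using JuMP
using GLPK
const MOIU = JuMP.MOI.Utilities

model = Model(GLPK.Optimizer)
b = backend(model)
MOIU.state(b)                 # EMPTY_OPTIMIZER: nothing copied to GLPK yet

@variable(model, x >= 0)
MOIU.attach_optimizer(model)  # copy the cached problem into GLPK
MOIU.state(b)                 # ATTACHED_OPTIMIZER: further updates go in-place
```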
@@ -317,25 +305,10 @@ A `CachingOptimizer` has two modes of operation:
an operation in the incorrect state results in an error.
By default [`Model`](@ref) will create a `CachingOptimizer` in `AUTOMATIC` mode.
Use the `caching_mode` keyword to create a model in `MANUAL` mode:
```jldoctest
julia> Model(GLPK.Optimizer; caching_mode = MOI.Utilities.MANUAL)
A JuMP Model
Feasibility problem with:
Variables: 0
Model mode: MANUAL
CachingOptimizer state: EMPTY_OPTIMIZER
Solver name: GLPK
```

!!! tip
Only use `MANUAL` mode if you have a very good reason. If you want to reduce
the overhead between JuMP and the underlying solver, consider
[Direct mode](@ref) instead.
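For reference, a `MANUAL`-mode sketch (a hypothetical example; in this mode, forgetting `MOIU.attach_optimizer` before `optimize!` raises an error rather than attaching silently):

```julia
using JuMP
using GLPK
const MOIU = JuMP.MOI.Utilities

model = Model(GLPK.Optimizer; caching_mode = MOIU.MANUAL)
@variable(model, x >= 0)
@objective(model, Min, x)

MOIU.attach_optimizer(model)  # required explicitly in MANUAL mode
optimize!(model)
```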
### LazyBridgeOptimizer
The second layer that JuMP applies automatically is a `LazyBridgeOptimizer`. A
The second layer that JuMP may apply is a `LazyBridgeOptimizer`. A
`LazyBridgeOptimizer` is an MOI layer that attempts to transform constraints
added by the user into constraints supported by the solver. This may involve
adding new variables and constraints to the optimizer. The transformations are
@@ -345,9 +318,10 @@ A common example of a bridge is one that splits an interval constraint like
`@constraint(model, 1 <= x + y <= 2)` into two constraints,
`@constraint(model, x + y >= 1)` and `@constraint(model, x + y <= 2)`.
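You can inspect which bridges, if any, JuMP selected for a model with [`print_bridge_graph`](@ref). A sketch (whether the interval constraint is actually bridged depends on the solver's native support; GLPK here is only an example):

```julia
using JuMP
using GLPK

model = Model(GLPK.Optimizer)
@variable(model, x)
@variable(model, y)
@constraint(model, 1 <= x + y <= 2)  # an interval constraint

# List the bridges JuMP would apply for this model, if any:
print_bridge_graph(model)
```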
Use the `bridge_constraints=false` keyword to remove the bridging layer:
The `LazyBridgeOptimizer` is added only if necessary. However, you can use the
`force_bridge_formulation = true` keyword to add the bridging layer by default:
```jldoctest
julia> model = Model(GLPK.Optimizer; bridge_constraints = false)
julia> model = Model(GLPK.Optimizer; force_bridge_formulation = true)
A JuMP Model
Feasibility problem with:
Variables: 0
@@ -356,19 +330,18 @@ CachingOptimizer state: EMPTY_OPTIMIZER
Solver name: GLPK
julia> backend(model)
MOIU.CachingOptimizer{MOI.AbstractOptimizer, MOIU.UniversalFallback{MOIU.Model{Float64}}}
MOIU.CachingOptimizer{MOIB.LazyBridgeOptimizer{GLPK.Optimizer}, MOIU.UniversalFallback{MOIU.Model{Float64}}}
in state EMPTY_OPTIMIZER
in mode AUTOMATIC
with model cache MOIU.UniversalFallback{MOIU.Model{Float64}}
fallback for MOIU.Model{Float64}
with optimizer A GLPK model
with optimizer MOIB.LazyBridgeOptimizer{GLPK.Optimizer}
with 0 variable bridges
with 0 constraint bridges
with 0 objective bridges
with inner model A GLPK model
```

!!! tip
Only disable bridges if you have a very good reason. If you want to reduce
the overhead between JuMP and the underlying solver, consider
[Direct mode](@ref) instead.

## Direct mode

Using a `CachingOptimizer` results in an additional copy of the model being
1 change: 0 additions & 1 deletion docs/src/reference/models.md
@@ -75,7 +75,6 @@ MOIU.attach_optimizer(::JuMP.Model)
## Bridge tools

```@docs
bridge_constraints
print_bridge_graph
```

@@ -72,6 +72,7 @@
using JuMP
using GLPK
model = Model(GLPK.Optimizer)
set_silent(model)
@variable(model, x >= 0)
@variable(model, 0 <= y <= 3)
@objective(model, Min, 12x + 20y)
@@ -108,6 +109,10 @@ using GLPK

model = Model(GLPK.Optimizer)

# Turn off printing from GLPK:

set_silent(model)

# Variables are modeled using [`@variable`](@ref):

@variable(model, x >= 0)
11 changes: 1 addition & 10 deletions docs/src/tutorials/Getting started/performance_tips.jl
@@ -34,7 +34,7 @@ using GLPK # hide

# Similar to the infamous [time-to-first-plot](https://discourse.julialang.org/t/roadmap-for-a-faster-time-to-first-plot/22956)
# plotting problem, JuMP suffers from time-to-first-solve latency. This latency
# occurs because the first time you call JuMP code in each session, Julia needs
# occurs because the first time you call JuMP code in each session, Julia needs
# to compile a lot of code specific to your problem. This issue is actively being
# worked on, but there are a few things you can do to improve things.

@@ -48,15 +48,6 @@ using GLPK # hide
# every time you run the script. Instead, use one of the [suggested workflows](https://docs.julialang.org/en/v1/manual/workflow-tips/)
# from the Julia documentation.

# ### Disable bridges if none are being used

# At present, the majority of the latency problems are caused by JuMP's bridging
# mechanism. If you only use constraints that are natively supported by the
# solver, you can disable bridges by passing `bridge_constraints = false` to
# [`Model`](@ref).

model = Model(GLPK.Optimizer; bridge_constraints = false)

# ### Use PackageCompiler

# As a final option, consider using [PackageCompiler.jl](https://julialang.github.io/PackageCompiler.jl/dev/)
105 changes: 43 additions & 62 deletions docs/src/tutorials/Getting started/solvers_and_solutions.jl
@@ -53,8 +53,8 @@

# ## Constructing a model

# JuMP models can be created in three different modes: `AUTOMATIC`, `MANUAL` and
# `DIRECT`. We'll use the following LP to illustrate them.
# JuMP models can be created in a number of ways. We'll use the following LP to
# illustrate them.

# ```math
# \begin{aligned}
@@ -67,55 +67,35 @@
using JuMP
using GLPK

# ### `AUTOMATIC` Mode

# #### With Optimizer
# ### With Optimizer

# This is the easiest way to use a solver in JuMP: simply pass the solver to
# the `Model` constructor.

model_auto = Model(GLPK.Optimizer)
@variable(model_auto, 0 <= x <= 1)
@variable(model_auto, 0 <= y <= 1)
@constraint(model_auto, x + y <= 1)
@objective(model_auto, Max, x + 2y)
optimize!(model_auto)
objective_value(model_auto)
model_1 = Model(GLPK.Optimizer)
set_silent(model_1)
@variable(model_1, 0 <= x <= 1)
@variable(model_1, 0 <= y <= 1)
@constraint(model_1, x + y <= 1)
@objective(model_1, Max, x + 2y)
optimize!(model_1)
objective_value(model_1)

# #### No Optimizer (at first)
# ### No Optimizer (at first)

# It is also possible to create a JuMP model with no optimizer attached. Once
# the model has been built and its variables, constraints, and objective are
# set, we can attach the solver at `optimize!` time.

model_auto_no = Model()
@variable(model_auto_no, 0 <= x <= 1)
@variable(model_auto_no, 0 <= y <= 1)
@constraint(model_auto_no, x + y <= 1)
@objective(model_auto_no, Max, x + 2y)
set_optimizer(model_auto_no, GLPK.Optimizer)
optimize!(model_auto_no)
objective_value(model_auto_no)

# Note that we can also enforce the automatic mode by passing
# `caching_mode = MOIU.AUTOMATIC` in the Model function call.

# ### `MANUAL` Mode

# This mode is similar to the `AUTOMATIC` mode, but there are less protections
# from the user getting errors from the solver API. On the other side, nothing
# happens silently, which might give the user more control. It requires
# attaching the solver before the solve step using the `MOIU.attach_optimizer()`
# function.

model_manual = Model(GLPK.Optimizer, caching_mode = MOIU.MANUAL)
@variable(model_manual, 0 <= x <= 1)
@variable(model_manual, 0 <= y <= 1)
@constraint(model_manual, x + y <= 1)
@objective(model_manual, Max, x + 2y)
MOIU.attach_optimizer(model_manual)
optimize!(model_manual)
objective_value(model_manual)
model = Model()
@variable(model, 0 <= x <= 1)
@variable(model, 0 <= y <= 1)
@constraint(model, x + y <= 1)
@objective(model, Max, x + 2y)
set_optimizer(model, GLPK.Optimizer)
set_silent(model)
optimize!(model)
objective_value(model)

# ### `DIRECT` Mode

@@ -124,13 +104,14 @@ objective_value(model_manual)
# we do not set an optimizer; we set a backend, which is more generic and can
# hold data, not only solve a model.

model_direct = direct_model(GLPK.Optimizer())
@variable(model_direct, 0 <= x <= 1)
@variable(model_direct, 0 <= y <= 1)
@constraint(model_direct, x + y <= 1)
@objective(model_direct, Max, x + 2y)
optimize!(model_direct)
objective_value(model_direct)
model = direct_model(GLPK.Optimizer())
set_silent(model)
@variable(model, 0 <= x <= 1)
@variable(model, 0 <= y <= 1)
@constraint(model, x + y <= 1)
@objective(model, Max, x + 2y)
optimize!(model)
objective_value(model)

# ### Solver Options

Expand All @@ -143,15 +124,15 @@ using GLPK

# To turn off printing (i.e. silence the solver),

model = Model(optimizer_with_attributes(GLPK.Optimizer, "msg_lev" => 0));
Model(optimizer_with_attributes(GLPK.Optimizer, "msg_lev" => 0));

# To increase the maximum number of simplex iterations:

model = Model(optimizer_with_attributes(GLPK.Optimizer, "it_lim" => 10_000));
Model(optimizer_with_attributes(GLPK.Optimizer, "it_lim" => 10_000));

# To set the solution timeout limit (in milliseconds):

model = Model(optimizer_with_attributes(GLPK.Optimizer, "tm_lim" => 5_000));
Model(optimizer_with_attributes(GLPK.Optimizer, "tm_lim" => 5_000));
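The same raw parameters can also be set after the model is constructed; a sketch, assuming `set_optimizer_attribute` (which takes the solver's parameter name and a value) is available in this JuMP version:

```julia
using JuMP
using GLPK

model = Model(GLPK.Optimizer)
set_optimizer_attribute(model, "msg_lev", 0)     # silence GLPK
set_optimizer_attribute(model, "it_lim", 10_000) # simplex iteration limit
set_optimizer_attribute(model, "tm_lim", 5_000)  # time limit in milliseconds
```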

# ## How to query the solution

@@ -167,7 +148,7 @@ model = Model(optimizer_with_attributes(GLPK.Optimizer, "tm_lim" => 5_000));
# Termination statuses are meant to explain the reason why the optimizer stopped
# executing in the most recent call to `optimize!`.

termination_status(model_auto)
termination_status(model_1)

# You can view the different termination status codes by referring to the docs
# or by checking the possible types with the command below.
@@ -180,11 +161,11 @@ display(typeof(MOI.OPTIMAL))
# the model. It's possible that no result is available to be queried. We
# discuss the dual status and solutions further in the Duality tutorial.

primal_status(model_auto)
primal_status(model_1)

#-

dual_status(model_auto)
dual_status(model_1)

# As we saw before, the result (solution) status codes can be viewed directly
# from Julia.
@@ -204,19 +185,19 @@ value(y)

#-

objective_value(model_auto)
objective_value(model_1)

# Since it is possible that no solution is available to be queried from the
# model, calls to [`value`](@ref) may throw errors. Hence, it is recommended to
# check for the presence of solutions.

model_no_solution = Model(GLPK.Optimizer)
@variable(model_no_solution, 0 <= x <= 1)
@variable(model_no_solution, 0 <= y <= 1)
@constraint(model_no_solution, x + y >= 3)
@objective(model_no_solution, Max, x + 2y)

optimize!(model_no_solution)
model = Model(GLPK.Optimizer)
@variable(model, 0 <= x <= 1)
@variable(model, 0 <= y <= 1)
@constraint(model, x + y >= 3)
@objective(model, Max, x + 2y)
set_silent(model)
optimize!(model)

try #hide
if termination_status(model) == MOI.OPTIMAL
@@ -223,7 +223,7 @@ model = Model(GLPK.Optimizer)
@variable(model, y >= 0)
set_objective_sense(model, MOI.MIN_SENSE)
set_objective_function(model, x + y)

set_silent(model)
optimize!(model)

#-
@@ -272,7 +272,7 @@ c = [1; 3; 5; 2]
@variable(vector_model, x[1:4] >= 0)
@constraint(vector_model, A * x .== b)
@objective(vector_model, Min, c' * x)

set_silent(vector_model)
optimize!(vector_model)

#-
4 changes: 4 additions & 0 deletions docs/src/tutorials/Mixed-integer linear programs/callbacks.jl
@@ -18,6 +18,7 @@ import Test #src

function example_lazy_constraint()
model = Model(GLPK.Optimizer)
set_silent(model)
@variable(model, 0 <= x <= 2.5, Int)
@variable(model, 0 <= y <= 2.5, Int)
@objective(model, Max, y)
@@ -68,6 +69,7 @@ function example_user_cut_constraint()
N = 30
item_weights, item_values = rand(N), rand(N)
model = Model(GLPK.Optimizer)
set_silent(model)
@variable(model, x[1:N], Bin)
@constraint(model, sum(item_weights[i] * x[i] for i in 1:N) <= 10)
@objective(model, Max, sum(item_values[i] * x[i] for i in 1:N))
@@ -106,6 +108,7 @@ function example_heuristic_solution()
N = 30
item_weights, item_values = rand(N), rand(N)
model = Model(GLPK.Optimizer)
set_silent(model)
@variable(model, x[1:N], Bin)
@constraint(model, sum(item_weights[i] * x[i] for i in 1:N) <= 10)
@objective(model, Max, sum(item_values[i] * x[i] for i in 1:N))
@@ -137,6 +140,7 @@ example_heuristic_solution()

function example_solver_dependent_callback()
model = Model(GLPK.Optimizer)
set_silent(model)
@variable(model, 0 <= x <= 2.5, Int)
@variable(model, 0 <= y <= 2.5, Int)
@objective(model, Max, y)