`optimize!` has large overhead #286
It's plausible that there are some performance improvements that we could make. I haven't benchmarked the package closely. Ipopt is very "simple" so it also doesn't surprise me that KNITRO has more overhead.
Thank you for including a MWE. I am able to reproduce your issue on my laptop. Investigating it further, it looks like the bottleneck is in:

```julia
import JuMP: MOI
model = goc.get_multiperiod_acopf_model(input_data)
optimizer = KNITRO.Optimizer()
MOI.copy_to(optimizer, model)
```

I observe we spend ~70s in the `MOI.copy_to` call. I think there is a key difference compared to Ipopt, as Knitro implements an incremental interface. This means that instead of passing the whole model to the optimizer at once, we build it incrementally. Each time we add new variables or new constraints, we have to reallocate some memory inside the solver, and that can prove expensive when building a large model (as is the case here). A workaround would be to pass the structure in a vectorized fashion, by handing the constraints and the variables to the solver all at once (instead of one by one). This might be slightly related to:
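For illustration, the difference between the two styles at the MOI level looks roughly like this (a sketch, not KNITRO.jl's actual implementation; whether the batched calls are actually faster depends on the wrapper specializing `MOI.add_variables` / `MOI.add_constraints` rather than falling back to one-by-one addition):

```julia
import MathOptInterface as MOI
import KNITRO  # requires a Knitro license

n = 1_000

# Incremental style: one solver call per variable / bound, so the solver may
# reallocate its internal arrays on every call.
optimizer = KNITRO.Optimizer()
x = [MOI.add_variable(optimizer) for _ in 1:n]
for xi in x
    MOI.add_constraint(optimizer, xi, MOI.GreaterThan(0.0))
end

# Vectorized style: hand the solver whole vectors at once, so it can size its
# data structures a single time.
optimizer2 = KNITRO.Optimizer()
y = MOI.add_variables(optimizer2, n)
MOI.add_constraints(optimizer2, y, fill(MOI.GreaterThan(0.0), n))
```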
I only have a limited-size license so I can't test this, but can someone post the result of:
I also see that:

```julia
julia> model = goc.get_multiperiod_acopf_model(input_data)
A JuMP Model
Maximization problem with:
Variables: 529056
Objective function type: JuMP.AffExpr
`JuMP.NonlinearExpr`-in-`MathOptInterface.EqualTo{Float64}`: 164832 constraints
`JuMP.AffExpr`-in-`MathOptInterface.EqualTo{Float64}`: 24000 constraints
`JuMP.AffExpr`-in-`MathOptInterface.GreaterThan{Float64}`: 47904 constraints
`JuMP.AffExpr`-in-`MathOptInterface.LessThan{Float64}`: 47904 constraints
`JuMP.AffExpr`-in-`MathOptInterface.Interval{Float64}`: 64896 constraints
`JuMP.QuadExpr`-in-`MathOptInterface.EqualTo{Float64}`: 58176 constraints
`JuMP.QuadExpr`-in-`MathOptInterface.LessThan{Float64}`: 81888 constraints
`JuMP.VariableRef`-in-`MathOptInterface.GreaterThan{Float64}`: 499440 constraints
`JuMP.VariableRef`-in-`MathOptInterface.LessThan{Float64}`: 380976 constraints
Model mode: AUTOMATIC
CachingOptimizer state: NO_OPTIMIZER
Solver name: No optimizer attached.
Names registered in the model: p_balance, p_balance_slack_neg, p_balance_slack_pos, p_branch, p_sdd, pq_eq, pq_lb, pq_ub, q_balance, q_balance_slack_neg, q_balance_slack_pos, q_branch, q_implication_max, q_implication_min, q_sdd, ramp_lb, ramp_ub, shunt_step, va, vm
```

It doesn't seem unreasonable that KNITRO might take a while to build this problem in incremental mode. Hard to know what the problem is without a profile.
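For what it's worth, a profile of the copy can be collected with the stdlib profiler, along these lines (a sketch, reusing `goc` and `input_data` from the snippet above and assuming a licensed Knitro install):

```julia
import JuMP: MOI
import KNITRO
using Profile

model = goc.get_multiperiod_acopf_model(input_data)  # as in the MWE above
optimizer = KNITRO.Optimizer()

Profile.clear()
@profile MOI.copy_to(optimizer, model)
# Flat view, heaviest frames first, ignoring rarely-hit lines:
Profile.print(; format = :flat, sortedby = :count, mincount = 100)
```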
Ooof. Yeah. We can improve this (lines 20 to 35 at 38d473f); it's costly, especially for small sizes.
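One rough way to see that per-call overhead from the outside (a sketch at the MOI level, assuming a licensed Knitro; this times the wrapper's incremental calls, not the internals linked above) is to check whether successive batches of additions get slower as the model grows:

```julia
import MathOptInterface as MOI
import KNITRO

optimizer = KNITRO.Optimizer()
batch = 10_000
for k in 1:10
    t = @elapsed for _ in 1:batch
        MOI.add_variable(optimizer)
    end
    # If t grows with k, the per-call reallocation cost is scaling with model size.
    println("batch $k: ", round(t; digits = 3), " s")
end
```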
@Robbybp how did you build the quadratic equality constraints? They don't seem to have any quadratic terms?
Fixed with #296, thanks! The time for
My best guess is that these are power balance equations on buses that have no shunts, e.g.:

```julia
@constraint(model,
    p_balance[uid in bus_ids],
    sum(p_branch[k] for k in bus_branch_keys[uid], init = 0) ==
    sum(p_sdd[ssd_id] for ssd_id in bus_sdd_producer_ids[uid], init = 0) -
    sum(p_sdd[ssd_id] for ssd_id in bus_sdd_consumer_ids[uid], init = 0) -
    sum(
        shunt_lookup[shunt_id]["gs"]*shunt_step[shunt_id]
        for shunt_id in bus_shunt_ids[uid],
        init = 0
    )*vm[uid]^2
    #gs*vm[uid]^2
)
```
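Presumably, when `bus_shunt_ids[uid]` is empty the shunt sum collapses to the `init = 0` literal, so the whole term reduces to `0 * vm[uid]^2`, which JuMP still types as a quadratic expression even though the `vm^2` coefficient is zero. A minimal sketch of the effect (a hypothetical toy model, not the GOC code; the empty sum is computed outside the macro just to make the collapse explicit):

```julia
using JuMP

# Toy single-bus version of the pattern above.
toy = Model()
@variable(toy, vm)
@variable(toy, p)

shunt_gs = Float64[]                  # this bus has no shunts
g_total = sum(shunt_gs; init = 0.0)   # collapses to the init literal, 0.0

con = @constraint(toy, p == g_total * vm^2)

# The constraint function is still a quadratic expression (QuadExpr), even
# though its vm^2 coefficient is zero, so it lands in the
# `QuadExpr`-in-`EqualTo` bucket of the model summary above.
println(typeof(constraint_object(con).func))
```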
Great! I wasn't able to reproduce such an extreme discrepancy in my local testing, but I guess it was causing a GC issue or something. If you notice any performance issues like this, they're often a simple fix away once you profile. |
I'm solving multi-period ACOPF problems with Knitro and Ipopt, and `optimize!` with Knitro seems to have a large overhead compared to what the solver reports. This can be observed with the following script:

Here, `./scenario_002.json` is the `C3E4N00617_20231002/D2/C3E4N00617D2/scenario_002.json` file in `C3E4N00617_20231002.zip` that can be downloaded from this webpage. GOC3Benchmark is the open-source version of the Grid Optimization Competition Challenge 3 benchmark algorithm. I get the following results (`optimize!` time in seconds):

Knitro.jl seems to be spending more time than I expect in data structure initialization. The discrepancy between Knitro's "actual" and "reported" times grows as I try larger problems from the GOC E4 datasets, and is not present when I solve the same problems with the `knitroampl` executable. If this is due to something intended, or unavoidable for some reason, feel free to close the issue.
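The discrepancy above is between wall-clock time around `optimize!` and the time the solver itself reports; a minimal sketch of that measurement (not the original script), assuming a licensed Knitro and a `model` already built by GOC3Benchmark:

```julia
using JuMP
import KNITRO

# `model` is assumed to come from GOC3Benchmark's model builder, as described above.
set_optimizer(model, KNITRO.Optimizer)

wall = @elapsed optimize!(model)
reported = solve_time(model)   # solver-reported time (MOI.SolveTimeSec)

println("optimize! wall time:  ", round(wall; digits = 1), " s")
println("solver-reported time: ", round(reported; digits = 1), " s")
```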
Versions:

- Julia 1.10.0
- JuMP 1.20.0
- Ipopt 1.6.2
- KNITRO 0.14.1
- Platform: M1 Mac