
Full Optimizer approach for MOI interface #57

Merged
merged 14 commits into from
Mar 1, 2024
Conversation

@sshin23 (Collaborator) commented Feb 20, 2024

With this approach,

using PowerModels, NLPModelsIpopt, ExaModels

@time pm = PowerModels.instantiate_model(
    joinpath(ENV["PGLIB_DEPOT"], "pglib_opf_case1354_pegase.m"),
    ACPPowerModel,
    PowerModels.build_opf
)
@time optimize_model!(
    pm,
    optimizer = () -> ExaModels.Optimizer(
        ipopt;
        linear_solver = "ma27"
    )
)

and

using PowerModels, ExaModels, MadNLP, MadNLPHSL, MadNLPGPU, CUDA

@time pm = PowerModels.instantiate_model(
    joinpath(ENV["PGLIB_DEPOT"], "pglib_opf_case1354_pegase.m"),
    ACPPowerModel,
    PowerModels.build_opf
)
@time optimize_model!(
    pm,
    optimizer = () -> ExaModels.Optimizer(
        madnlp, CUDABackend()
    )
)

should work, though CUDA will complain that you're performing scalar indexing on the CUDA array (this happens in a non-performance-critical part), which can be fixed in the future.
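The scalar-indexing warning is CUDA.jl's way of flagging element-by-element host access to a CuArray. As a side note (standard CUDA.jl usage, independent of ExaModels), the warning can be made strict or locally permitted until the offending code path is vectorized:

```julia
using CUDA

CUDA.allowscalar(false)      # turn stray scalar indexing into an error

x = CUDA.ones(3)
# x[1]                       # would now throw instead of warn
v = CUDA.@allowscalar x[1]   # explicitly permit it in a known non-critical spot
```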

I'm not sure yet if the full optimizer approach would be the right approach here.

Upsides

  • If we create an Optimizer, we can handle the interface with the solvers directly, which will make interfacing with GPU-accelerated solvers (e.g., MadNLP) easier.

Downsides

  • For each solver, an extension package (ExaModelsIpopt, ExaModelsMadNLP, etc.) needs to be implemented, which will eventually make maintenance more difficult.

Thoughts? @odow @frapac @ccoffrin

@odow commented Feb 20, 2024

Not immediately obvious to me: does this exploit the repeated structure? Or does it create N independent constraints?

@sshin23 (Collaborator, Author) commented Feb 20, 2024

@odow It does exploit the repeated structure. It first goes into each scalar term and creates an expression tree and the associated data entry. If the expression tree is redundant (one we have already created), it doesn't store it again; it just adds a new data entry to the data array associated with that expression tree.

Compared to the native ExaModels API, there is significant overhead when creating the model (as we need to inspect each scalar term), but the AD performance is pretty close to that of the native ExaModels API.
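The deduplication described above can be sketched roughly as follows (a minimal illustration with hypothetical names, `TermGroup` and `group_terms!`; not the actual ExaModels internals):

```julia
# Hypothetical sketch: group scalar terms by their expression-tree structure.
struct TermGroup
    tree::Expr            # the expression-tree "template" shared by the group
    data::Vector{Tuple}   # one data entry per scalar term with this structure
end

# If the tree was already seen, only append a new data entry to its group.
function group_terms!(groups::Dict{Expr,TermGroup}, tree::Expr, entry::Tuple)
    if haskey(groups, tree)
        push!(groups[tree].data, entry)   # redundant tree: reuse it
    else
        groups[tree] = TermGroup(tree, Tuple[entry])
    end
    return groups
end

groups = Dict{Expr,TermGroup}()
# Two constraint terms with identical structure but different data:
group_terms!(groups, :(p[i] * sin(θ[i])), (1, 0.5))
group_terms!(groups, :(p[i] * sin(θ[i])), (2, 0.7))
length(groups)   # one shared tree, two data entries
```

AD kernels can then be generated once per tree and applied across all entries in `data`, which is what enables the SIMD-style evaluation.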

@odow commented Feb 20, 2024

Ah cool! Okay. It should be possible then to make this an AD backend as well. That'd enable a fairer comparison.

@sshin23 (Collaborator, Author) commented Feb 20, 2024

One caveat, though: we don't have an interface to MOI.Nonlinear yet. We only have an interface to MOIU.CachingOptimizer's expression tree. Thus, to make this a MOI.AutomaticDifferentiationBackend, there would need to be some interface from JuMP that allows you to directly use MOIU.CachingOptimizer instead of handling the expressions via MOI.Nonlinear.

@odow commented Feb 20, 2024

Yeah, I was imagining repurposing your code for the Expr case rather than directly trying to use this. I just couldn't figure out how to build scalar expressions and detect whether they were the same using your framework when I tried a couple of weeks ago.

@sshin23 (Collaborator, Author) commented Feb 20, 2024

@odow This is the approach taken in ExaModels.jl
https://github.com/exanauts/ExaModels.jl/blob/main/ext/ExaModelsMOI.jl#L28-L53

We build a nested Bin, which stores the expression tree in head and the data entries in data.
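Based on that description, the nested Bin can be sketched roughly as follows (field names taken from the comment; the actual definition in ext/ExaModelsMOI.jl may differ in type parameters and helpers):

```julia
# Simplified sketch of the nested Bin: each level holds one expression-tree
# template, its data entries, and a link to the next level.
struct Bin{H,D,I}
    head::H    # expression tree shared by every entry in `data`
    data::D    # data entries (indices, coefficients, ...) for this tree
    inner::I   # the next Bin in the nest, or `nothing` at the end
end

# Insert a (tree, entry) pair: walk the nested bins; if a bin with the same
# tree exists, append the entry to it, otherwise append a fresh Bin.
update(::Nothing, tree, entry) = Bin(tree, [entry], nothing)
function update(bin::Bin, tree, entry)
    if tree == bin.head
        push!(bin.data, entry)
        return bin
    else
        return Bin(bin.head, bin.data, update(bin.inner, tree, entry))
    end
end

b = update(nothing, :(x[i]^2), (1,))
b = update(b, :(sin(x[i])), (2,))
b = update(b, :(x[i]^2), (3,))   # redundant tree: lands in the first Bin
```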

@sshin23 sshin23 merged commit 9ce5619 into main Mar 1, 2024
4 of 5 checks passed
@sshin23 sshin23 deleted the ss/ad_backend branch March 9, 2024 17:00