Full Optimizer approach for MOI interface #57
Conversation
Not immediately obvious to me: does this exploit the repeated structure? Or does it create N independent constraints?
@odow It does exploit the repeated structure. It first goes into each scalar term and creates an expression tree and the associated data entry. If the expression tree is redundant (one we have already created), it doesn't store it again and just adds a new data entry to the data array associated with that expression tree. Compared to the native ExaModels API, there is significant overhead when creating the model (as we need to inspect each scalar term), but the AD performance is pretty close to the native ExaModels API.
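To make the deduplication idea above concrete, here is a minimal sketch in Python (not the actual ExaModels.jl implementation; all names such as `tree_key`, `build`, and the tuple-based tree encoding are illustrative). Terms that share an expression-tree shape are stored once, and only their numeric parameters are appended to the data array for that tree:

```python
from collections import defaultdict

def tree_key(expr):
    """Structural key for an expression tree: the operators and the
    positions of parameters, but not the parameters' numeric values."""
    op, args = expr
    if op == "param":
        return ("param",)  # placeholder: the value goes into the data array
    return (op, tuple(tree_key(a) for a in args))

def collect_params(expr, out):
    """Pull the numeric parameter values out of a tree, in order."""
    op, args = expr
    if op == "param":
        out.append(args)   # for a "param" node, args holds the scalar value
    else:
        for a in args:
            collect_params(a, out)
    return out

def build(terms):
    """Group scalar terms by expression-tree shape; each unique shape is
    stored once, with one data entry per term sharing that shape."""
    kernels = {}                 # structural key -> representative tree
    data = defaultdict(list)     # structural key -> list of parameter tuples
    for t in terms:
        k = tree_key(t)
        kernels.setdefault(k, t)
        data[k].append(tuple(collect_params(t, [])))
    return kernels, data

# Two scalar terms c * x^2 with different coefficients c share one tree shape,
# so only one kernel is stored and two data entries are recorded:
t1 = ("*", [("param", 2.0), ("pow2", [("var", [])])])
t2 = ("*", [("param", 3.0), ("pow2", [("var", [])])])
kernels, data = build([t1, t2])
# one unique tree; its data array is [(2.0,), (3.0,)]
```

AD can then be generated once per unique tree and evaluated over the whole data array, which is what makes the performance close to the native API despite the per-term inspection cost at model-creation time.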
Ah cool! Okay. It should be possible then to make this an AD backend as well. That'd enable a fairer comparison.
One caveat, though, is that we don't have an interface to …
Yeah, I was imagining repurposing your code for the …
@odow This is the approach taken in ExaModels.jl. We build this nested …
With this approach, … and … should work, though CUDA will complain that you're performing scalar indexing on the CUDA array (this happens in a non-performance-critical part and can be fixed in the future).
I'm not sure yet if the full optimizer approach would be the right approach here.

Upsides
- With an `Optimizer`, we can directly handle the interface with the solvers, so it will make interfacing with GPU-accelerated solvers (MadNLP) easier.

Downsides
- We would need `ExaModelsIpopt`, `ExaModelsMadNLP`, etc., which will eventually make the maintenance more difficult.

Thoughts? @odow @frapac @ccoffrin