
Flexibility Analysis #1398

Open · wants to merge 66 commits into base: main

Conversation

@michaelbynum (Contributor) commented Apr 23, 2024

Summary/Motivation:

This PR adds a module for formulating flexibility analysis problems.

Legal Acknowledgement

By contributing to this software project, I agree to the following terms and conditions for my contribution:

  1. I agree my contributions are submitted under the license terms described in the LICENSE.txt file at the top level of this directory.
  2. I represent I am authorized to make the contributions and grant the license. If my employer has rights to intellectual property that includes these contributions, I represent that I have received permission to make contributions and grant the required license on behalf of that employer.

@michaelbynum (Contributor Author):

Thanks!

@@ -29,6 +30,7 @@ jsonschema
jupyter_contrib_nbextensions
snowballstemmer==1.2.1
addheader>=0.2.2
highspy
Member:

Would SCIP be sufficient for these tests? We already have SCIP available (conditionally) for use in the Diagnostics tests, and we do try to keep our dependencies to a minimum.

Contributor Author:

SCIP would work, but it makes the tests way slower. I'm solving a bunch of LPs with changes only to parameters. The Pyomo interface to HiGHS is a persistent interface, and is much faster than the interface to SCIP for that type of workflow.

Member:

We might want to see if it makes sense to do something like @lbianchi-lbl did for SCIP then so that it does not need to be a dependency in requirements-dev.txt. If I recall, we basically have a check that sees if SCIP is available and otherwise skips the tests. We then just make sure we have it installed on the test server (via GHA setup) so the test will run on the server.
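The conditional-skip pattern described above can be sketched with stdlib tools. The availability check and test names below are hypothetical stand-ins (the project may detect SCIP through its solver interfaces rather than `shutil.which`):

```python
import shutil
import unittest

# Hypothetical availability check: a minimal stand-in for the project's
# real solver-availability detection.
scip_available = shutil.which("scip") is not None


@unittest.skipIf(not scip_available, "SCIP not available; skipping solver tests")
class TestWithScip(unittest.TestCase):
    # Placeholder test: a real test would build and solve a model here.
    def test_runs_only_when_scip_is_installed(self):
        self.assertTrue(scip_available)
```

With this in place, the solver need not appear in requirements-dev.txt; the GHA workflow just installs it so the tests actually run on CI.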

Contributor Author:

Sounds good to me. As long as it gets installed in GHA so the tests run, I'm happy.

Contributor:

Not sure if it applies, but would the other way around work? i.e. is it possible to use HiGHS instead of SCIP for our existing tests for which SCIP is currently being used?

If so, it might make sense to consider having highspy as a dev dependency and just use it for testing for both cases (of course, this is purely on the side of streamlining dependencies and very much solver-domain-ignorant, so feel free to add why this wouldn't work).

Contributor Author:

I actually need both. SCIP also solves nonlinear problems, while HiGHS does not...

Contributor:

Ah, that makes sense then, thanks for clarifying!

Comment on lines +20 to +21
omlt, nump_available = attempt_import("omlt")
if not tensorflow_available or not nump_available:
Contributor:

Suggested change:
- omlt, nump_available = attempt_import("omlt")
- if not tensorflow_available or not nump_available:
+ omlt, omlt_available = attempt_import("omlt")
+ if not tensorflow_available or not omlt_available:

Contributor Author:

Good catch! Thanks!
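For context on the fix: Pyomo's `attempt_import` returns a (module, available-flag) pair rather than raising `ImportError`, so the flag name must match the module it guards. A simplified stdlib stand-in (not Pyomo's actual implementation, which also supports deferred imports) illustrates the convention:

```python
import importlib


def attempt_import_sketch(name):
    """Simplified stand-in for pyomo.common.dependencies.attempt_import:
    returns (module, True) on success and (None, False) if unavailable."""
    try:
        return importlib.import_module(name), True
    except ImportError:
        return None, False


# The suggested fix renames the flag so the availability check actually
# tests the module it is paired with:
omlt, omlt_available = attempt_import_sketch("omlt")
```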

@ksbeattie added the Priority:Normal (Normal Priority Issue or PR) label Apr 25, 2024
@andrewlee94 (Member) left a comment:

Here are some comments on the first file (I'll try to get through the rest tomorrow).

One general comment: could you add yourself (or a nominee) to the top-level CODEOWNERS file as the maintainer for this code?

"""
This module is only used to check dependencies for unit tests.
"""
import unittest
Member:

Seeing as this is just for testing, should it be moved into the tests folder?

Contributor Author:

Great point. I'll move this.

"""
This module is only used to check dependencies for unit tests.
"""
import unittest
Member:

As above.

works for both APPSI solver interfaces and non-appsi solver interfaces.
"""
import pyomo.environ as pe
from pyomo.contrib import appsi
Member:

A minor comment (so it does not need to be fixed here), but this seems like it would be better located in Pyomo (probably in pyomo.contrib alongside the appsi code, seeing as it relates to that).

Contributor Author:

Great point.

visibility=0,
):
super().__init__(
description=description,
Member:

Is the purpose of this just to change the default values for the implicit and implicit_domain arguments (whilst still leaving them as arguments)?

Contributor Author:

I don't think I changed any defaults. This class is just meant to be a base class for other, related config classes. I guess it is not really necessary. I think I just created it for type hinting.

visibility=visibility,
)
self.solver = self.declare(
"solver", ConfigValue(default=pe.SolverFactory("ipopt"))
Member:

Should this also have a doc attribute, just in case someone tries to query the ConfigDict?

Contributor Author:

Absolutely. I'll add one.
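For illustration, the declaration could pick up a doc argument along these lines. This is a fragment of the surrounding config class, not a standalone script, and the doc text is a placeholder rather than wording from the PR:

```python
self.solver = self.declare(
    "solver",
    ConfigValue(
        default=pe.SolverFactory("ipopt"),
        doc="Solver to use for ...",  # placeholder doc text
    ),
)
```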

implicit_domain=implicit_domain,
visibility=visibility,
)
self.n_layers: int = self.declare(
Member:

These might be obvious to people with more experience with neural networks, but it might be good to have some short docstrings for these arguments for less experienced users.


config: ReluDRConfig = config()
if config.batch_size > n_samples:
config.batch_size = n_samples
Member:

It might be good to log a message for the user here to let them know they set a batch size greater than the number of samples and thus this value was changed.
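A minimal sketch of such a message (the function name and wording are illustrative, not from the PR):

```python
import logging

logger = logging.getLogger(__name__)


def clamp_batch_size(batch_size: int, n_samples: int) -> int:
    """Return a batch size no larger than n_samples, warning on adjustment."""
    if batch_size > n_samples:
        logger.warning(
            "batch_size (%d) is greater than the number of samples (%d); "
            "using batch_size=%d instead.",
            batch_size,
            n_samples,
            n_samples,
        )
        return n_samples
    return batch_size
```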

for _ in range(config.n_layers - 1):
nn.add(keras.layers.Dense(config.n_nodes_per_layer, activation="relu"))
nn.add(keras.layers.Dense(len(outputs)))
if config.learning_rate is None:
Member:

I assume this is needed because keras does not accept None as a value for learning_rate (i.e. if you provide the argument, it must have a meaningful value). An inline comment might help to make sure no one tries to "fix" this later.
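One way to make the intent hard to "fix" away is to isolate it in a small helper with the explanation inline. `optimizer_kwargs` is a hypothetical name, and the keras behavior it guards against is the reviewer's reading of the code:

```python
def optimizer_kwargs(learning_rate=None):
    # keras optimizers expect a meaningful numeric value whenever the
    # learning_rate argument is passed, so omit the argument entirely when
    # the user left it unset. Do not "simplify" this by always forwarding
    # learning_rate=None.
    kwargs = {}
    if learning_rate is not None:
        kwargs["learning_rate"] = learning_rate
    return kwargs
```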

training_output,
batch_size=config.batch_size,
epochs=config.epochs,
# verbose=0,
Member:

Commented-out line.

# All rights reserved. Please see the files COPYRIGHT.md and LICENSE.md
# for full copyright and license information.
#################################################################################
from idaes.apps.flexibility_analysis import _check_dependencies
Member:

Should there be tests for the relu_dr as well?

@ksbeattie (Member):

@michaelbynum any update here?

@andrewlee94 (Member):

@michaelbynum Something you might want to think about for the docs here: I have received a number of questions this week about this PR and how it is distinguished from robust optimization and PyROS (and thus when you would want to use one versus the other). It would probably be good to have some discussion of this somewhere in the docs to guide new users to the best tool for what they want to do. On the plus side, there is already some interest in tools like this.

@ksbeattie (Member):

@michaelbynum I'm moving this to the Aug release as the May release will be created next week.

@ksbeattie (Member):

@michaelbynum any update on this?

@michaelbynum (Contributor Author):

Apologies. I'll be pushing updates to address reviews early next week.

@ksbeattie (Member):

> Apologies. I'll be pushing updates to address reviews early next week.

@michaelbynum I know you're busy... any news on this?

@ksbeattie (Member):

@michaelbynum, I'm taking this off the Aug release board as it doesn't appear work is progressing here.

Labels: Priority:Normal (Normal Priority Issue or PR)
6 participants