Allow nonlinear constraints #353 #371
base: develop
Conversation
Hi @MarkBlyth, great to hear that you could achieve the desired functionality with the current code structure! I wonder if you could achieve the same result more efficiently (and for all optimisers) by extending the
Interesting idea, I hadn't thought of that! I've included an example now of using a subclassed Thevenin model. I'd lean towards wanting to pass the parameter constraints into the optimiser, rather than the model. But given that this works easily and has no breaking changes, perhaps it's the right way to go.
Hi @MarkBlyth, thanks for the two examples! I've just merged #388 into develop, so please check if it helps. Have you compared the performance of the two methods? I like the extension of the SciPy optimiser. For constraints, though, I think the
The extra speed is possibly because it's a gradient-based method, which is suitable for this particular flavour of problem. For robustness, I've found the gradient-free / metaheuristic optimisers can sometimes give up (for lack of better phrasing!) when I try to apply fairly tight bounds, whereas
Yep agreed, that sounds like the nicest way to do it. I'll get on that.
Finally got round to committing these changes. So far I've only set up the empirical models to take a user-defined parameter checker, but I'm sure the same could be done easily for physics-based models too.
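For readers following along, the idea described above can be sketched roughly as follows. This is a hedged illustration only: the class and function names (`SimpleModel`, `check_params`) are hypothetical and do not reflect PyBOP's actual API; the model output is a placeholder.

```python
import numpy as np

# Illustrative sketch: an empirical model accepts a user-defined parameter
# checker that rejects infeasible parameter sets before simulation runs.
class SimpleModel:
    def __init__(self, param_checker=None):
        self.param_checker = param_checker

    def simulate(self, params):
        # If the checker rejects the parameters, return an infinite output
        # so the optimiser treats this point as maximally costly.
        if self.param_checker is not None and not self.param_checker(params):
            return np.full(10, np.inf)
        r0, r1 = params
        return (r0 + r1) * np.ones(10)  # placeholder model response

def check_params(params):
    # Example nonlinear constraint: both resistances positive, product bounded.
    r0, r1 = params
    return r0 > 0 and r1 > 0 and r0 * r1 < 1.0

model = SimpleModel(param_checker=check_params)
```

The appeal of this pattern is that it works unchanged for every optimiser, since the feasibility check lives in the model rather than in any one optimiser's constraint interface.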
Hi @MarkBlyth, looks like you're almost ready to merge, but I noticed a small mistake. The fix is to add a default value of
@NicolaCourtier fix pushed, should be ready to merge now!
This is looking great, a few suggestions/requests. Happy to merge once they are resolved! Nice one @MarkBlyth
Thanks for the notebook! It looks great. Interestingly, it seems that the non-linear constraint removes the parameter bounds within the SciPy optimisation. In the parameter convergence plots, R1 is negative at the beginning (exceeds the lower bound previously defined) with an increasing cost. Any ideas as to why the bounds are not being applied to this optimisation?
I'm pretty sure the bounds are still being applied, but the default is to apply them without `keep_feasible`. PyBOP defines the bounds as a list (scipy_optimisers.py line 58):
```python
self._scipy_bounds = [
    (lower, upper)
    for lower, upper in zip(self.bounds["lower"], self.bounds["upper"])
]
```
but there's also the option to build a `Bounds` object instead:

```python
self._scipy_bounds = Bounds(self.bounds["lower"], self.bounds["upper"], True)
```
The benefit of the `Bounds` object is that it offers a `keep_feasible` argument (that last `True`). This isn't an option when bounds are passed as a list, so `keep_feasible` defaults to `False` in the standard PyBOP implementation, giving the negative resistances that we're seeing.
I'd lean towards keeping things as they are. The bounds are checked elsewhere in a way that's marginally more convenient with the current setup, and generally, the optimisers will perform better if they're not forced to stay within the feasible region. For cases like this, where infeasible parameters lead to simulation failure, the rest of the PyBOP infrastructure handles it properly and makes sure it doesn't cause problems.
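To make the distinction concrete, here is a minimal, self-contained sketch of the two ways of passing bounds to SciPy. The toy cost function and variable names are illustrative only; the minimum sits outside the box, so the optimiser presses against the lower bound.

```python
from scipy.optimize import minimize, Bounds

# Toy cost whose unconstrained minimum (x = -0.5) lies below the box [0, 1].
def cost(x):
    return (x[0] + 0.5) ** 2

# List-of-tuples bounds: no keep_feasible option is available in this form,
# so intermediate iterates may briefly leave the box with some methods.
res_list = minimize(cost, x0=[0.5], bounds=[(0.0, 1.0)], method="trust-constr")

# Bounds object: keep_feasible=True asks SciPy to keep every iterate
# strictly inside the box throughout the optimisation.
box = Bounds([0.0], [1.0], keep_feasible=True)
res_box = minimize(cost, x0=[0.5], bounds=box, method="trust-constr")

print(res_box.x)  # constrained minimum at the lower bound, x close to 0
```

Both runs converge to the same constrained minimum here; the difference only shows up in which intermediate points the optimiser is allowed to visit on the way.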
Thanks for the info on SciPy's `keep_feasible` argument, I wasn't aware of it. I've updated the SciPy bounds implementation to enable this functionality by default. The way I see it, if a user has requested bounds, they should be enforced, since we already provide the option to construct a problem without bounds. While PyBOP has functionality to catch infeasible points in other areas, enforcing bounds at the right level reduces complexity in the codebase. Please have a look at the change in the notebook and let me know if anything is wrong; the lower bound of the nonlinear constraint had to be reduced because the initial conditions were outside this value.

Overall, performance appears to be improved with the `keep_feasible` option enabled for this problem.
Co-authored-by: Brady Planden <55357039+BradyPlanden@users.noreply.github.com>
…r integration tests, small fixes on examples
Description
Enables nonlinear optimisation constraints. This turned out to be somewhat trivial with the newest PyBOP API; all it needed was a slightly different callback signature for the "trust-constr" optimiser.
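As a rough illustration of the feature this PR enables, the following sketch passes a nonlinear constraint to SciPy's `trust-constr` method directly. The cost function and parameter values are made up for demonstration and are not taken from PyBOP; only the `NonlinearConstraint`/`minimize` usage reflects the real SciPy API.

```python
from scipy.optimize import minimize, NonlinearConstraint

# Hypothetical stand-in cost: fit two resistances toward (0.3, 0.4),
# while a nonlinear constraint caps their sum.
def cost(x):
    return (x[0] - 0.3) ** 2 + (x[1] - 0.4) ** 2

# Nonlinear constraint: 0.1 <= x0 + x1 <= 0.6. The unconstrained optimum
# has x0 + x1 = 0.7, so the constraint is active at the solution.
constraint = NonlinearConstraint(lambda x: x[0] + x[1], 0.1, 0.6)

result = minimize(
    cost,
    x0=[0.2, 0.2],
    method="trust-constr",  # the gradient-based method used in this PR's example
    constraints=[constraint],
)
```

Since the constraint is active, the solution is the projection of (0.3, 0.4) onto the line x0 + x1 = 0.6, i.e. roughly (0.25, 0.35).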
Issue reference
Fixes #353
Review
Before you mark your PR as ready for review, please ensure that you've considered the following:
Type of change
Key checklist:
- `$ pre-commit run` (or `$ nox -s pre-commit`) (see CONTRIBUTING.md for how to set this up to run automatically when committing locally, in just two lines of code)
- `$ nox -s tests`
- `$ nox -s doctest`

You can run integration tests, unit tests, and doctests together at once, using `$ nox -s quick`.

Further checks:
Thank you for contributing to our project! Your efforts help us to deliver great software.