
Suggesting changes to #338 #352

Merged · 47 commits · Jun 28, 2024
Conversation

NicolaCourtier (Member)

Description

Demonstrating alternative changes to implement the additions in #338. The main difference is how the sigma parameter is treated in the GaussianLogLikelihood cost function. Here, it is implemented as an additional PyBOP Parameter.
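The approach described here, treating sigma as one more fitted parameter rather than a fixed hyperparameter, can be sketched in plain NumPy. The model y = a * t and the parameter vector x = [a, sigma] are illustrative assumptions, not PyBOP's actual GaussianLogLikelihood API:

```python
import numpy as np

def gaussian_log_likelihood(x, t, data):
    # The final entry of x is the noise standard deviation sigma,
    # treated as one more fitted parameter (hypothetical model y = a * t)
    a, sigma = x
    if sigma <= 0:
        return -np.inf
    r = data - a * t
    n = len(t)
    return -0.5 * n * np.log(2 * np.pi * sigma**2) - np.sum(r**2) / (2 * sigma**2)

t = np.linspace(0, 1, 50)
rng = np.random.default_rng(0)
data = 2.0 * t + rng.normal(0.0, 0.1, size=t.shape)

# A sigma close to the true noise level should score higher than a poor guess
print(gaussian_log_likelihood([2.0, 0.1], t, data) >
      gaussian_log_likelihood([2.0, 1.0], t, data))
```

Because sigma sits inside x, an optimiser can fit the noise level alongside the physical parameters, which is the behaviour this PR adds via an extra PyBOP Parameter.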

Issue reference

Towards fixing #257.

Review

Before you mark your PR as ready for review, please ensure that you've considered the following:

  • Updated the CHANGELOG.md in reverse chronological order (newest at the top) with a concise description of the changes, including the PR number.
  • Noted any breaking changes, including details on how it might impact existing functionality.

Type of change

  • New Feature: A non-breaking change that adds new functionality.
  • Optimization: A code change that improves performance.
  • Examples: A change to existing or additional examples.
  • Bug Fix: A non-breaking change that addresses an issue.
  • Documentation: Updates to documentation or new documentation for new features.
  • Refactoring: Non-functional changes that improve the codebase.
  • Style: Non-functional changes related to code style (formatting, naming, etc).
  • Testing: Additional tests to improve coverage or confirm functionality.
  • Other: (Insert description of change)

Key checklist:

  • No style issues: $ pre-commit run (or $ nox -s pre-commit) (see CONTRIBUTING.md for how to set this up to run automatically when committing locally, in just two lines of code)
  • All unit tests pass: $ nox -s tests
  • The documentation builds: $ nox -s doctest

You can run integration tests, unit tests, and doctests together at once, using $ nox -s quick.

Further checks:

  • Code is well-commented, especially in complex or unclear areas.
  • Added tests that prove my fix is effective or that my feature works.
  • Checked that coverage remains or improves, and added tests if necessary to maintain or increase coverage.

Thank you for contributing to our project! Your efforts help us to deliver great software.

@BradyPlanden (Member)

Hi @NicolaCourtier - thanks for this alternative implementation! I went down a similar path during the implementation of #338 before stopping, as I came to the conclusion that adding the sigma hyperparameter into the constructed Parameters requires quite a bit of patching. Having the parameters constructed early in the life of an optimisation task, and then building x0, sigma0, and boundaries from them, provides a clean separation between parameters -> model and cost -> optim. At the point of cost and optim construction, we only need to worry about and modify the x0, sigma0, and boundaries attributes, which is easy to understand and review. I'd like to continue with the x0, sigma0, and boundaries implementation in #338, as I think it's concise and simple.

However, I'm keen to integrate your general improvements into #338. Specifically, your updates to check_sigma0, plot_2d and the CMAES check look great! Also, all of the docstring changes I had missed 😄. If you'd like to commit those changes directly to that branch, that would be amazing; otherwise, I'm happy to add them during review.

Have a great weekend!


codecov bot commented Jun 7, 2024

Codecov Report

Attention: Patch coverage is 98.21429% with 4 lines in your changes missing coverage. Please review.

Project coverage is 97.53%. Comparing base (e0421a5) to head (0e75c8f).

Files with missing lines Patch % Lines
pybop/parameters/parameter.py 91.30% 4 Missing ⚠️
Additional details and impacted files
@@                   Coverage Diff                    @@
##           gauss-log-like-fixes     #352      +/-   ##
========================================================
+ Coverage                 97.36%   97.53%   +0.16%     
========================================================
  Files                        42       42              
  Lines                      2433     2476      +43     
========================================================
+ Hits                       2369     2415      +46     
+ Misses                       64       61       -3     

☔ View full report in Codecov by Sentry.

@NicolaCourtier (Member, Author)

Thanks for the comment @BradyPlanden. I'm just getting the coverage up on this branch, and then it would be great to discuss the two implementations tomorrow, if you have time.

@NicolaCourtier (Member, Author)

There will be some conflicts between the changes to parameters.as_dict() in this branch and the change from parameters to inputs in #358. Merging #358 (when ready) may help to clean up the implementation in this branch.

@NicolaCourtier (Member, Author)

I have merged #359 into this one and resolved the conflicts. Merging the same into #338 will significantly reduce the number of files changed.

@BradyPlanden (Member) left a comment

Thanks @NicolaCourtier, I've added a few comments for me to address in #338. Will merge now.

self.parameters.join(self.sigma)

if dsigma_scale is None:
self._dsigma_scale = sigma0

Change in #338 to self._dsigma_scale = 1.0

Comment on lines +191 to +192
problem_inputs = self.problem.parameters.as_dict()
y = self.problem.evaluate(problem_inputs)

Change in #338 to:

Suggested change
problem_inputs = self.problem.parameters.as_dict()
y = self.problem.evaluate(problem_inputs)
y = self.problem.evaluate(self.problem.parameters.as_dict())

Comment on lines +231 to +232
problem_inputs = self.problem.parameters.as_dict()
y, dy = self.problem.evaluateS1(problem_inputs)

Change in #338 to:

Suggested change
problem_inputs = self.problem.parameters.as_dict()
y, dy = self.problem.evaluateS1(problem_inputs)
y, dy = self.problem.evaluateS1(self.problem.parameters.as_dict())

dsigma = (
-self.n_time_data / sigma + sigma ** (-3.0) * np.sum(r**2, axis=1)
-self.n_time_data / sigma + np.sum(r**2, axis=1) / (sigma**3)

Change in #338 to:

Suggested change
-self.n_time_data / sigma + np.sum(r**2, axis=1) / (sigma**3)
-self.n_time_data / sigma + np.sum(r**2, axis=1) / (sigma**3.0)
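The line above is the analytic gradient of the Gaussian log-likelihood with respect to sigma. As a standalone sanity check (hand-rolled log-likelihood, simplified 1-D shapes; not PyBOP internals), the formula can be compared against a central finite difference:

```python
import numpy as np

# Residuals with a known scale; names mirror the diff, not PyBOP internals
rng = np.random.default_rng(1)
r = rng.normal(0.0, 0.2, size=100)
n = r.size
sigma = 0.3

def log_like(s):
    return -0.5 * n * np.log(2 * np.pi * s**2) - np.sum(r**2) / (2 * s**2)

# Analytic gradient from the suggested change
dsigma = -n / sigma + np.sum(r**2) / (sigma**3.0)

# Central finite difference for comparison
h = 1e-6
dsigma_fd = (log_like(sigma + h) - log_like(sigma - h)) / (2 * h)
print(np.isclose(dsigma, dsigma_fd, rtol=1e-4))
```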

r = np.array([self._target[signal] - y[signal] for signal in self.signal])
likelihood = self._evaluate(x)
dl = np.sum((sigma ** (-2.0) * np.sum((r * dy.T), axis=2)), axis=1)
dl = np.sum((np.sum((r * dy.T), axis=2) / (sigma**2)), axis=1)

Change in #338 to:

Suggested change
dl = np.sum((np.sum((r * dy.T), axis=2) / (sigma**2)), axis=1)
dl = np.sum((np.sum((r * dy.T), axis=2) / (sigma**2.0)), axis=1)
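Similarly, the parameter gradient dl can be checked against a finite difference for a toy single-signal model y = a * t, so that the sensitivity dy/da is simply t (simplified 1-D shapes; illustrative only):

```python
import numpy as np

# Toy single-signal model y = a * t with synthetic target data
t = np.linspace(0, 1, 50)
target = 2.0 * t
sigma = 0.1
a = 1.8

def log_like(a_):
    r_ = target - a_ * t
    return -0.5 * len(t) * np.log(2 * np.pi * sigma**2) - np.sum(r_**2) / (2 * sigma**2)

r = target - a * t
dy = t  # sensitivity of y with respect to a

# 1-D analogue of the suggested change: dl = sum(r * dy) / sigma**2
dl = np.sum(r * dy) / sigma**2.0

# Central finite difference for comparison
h = 1e-7
dl_fd = (log_like(a + h) - log_like(a - h)) / (2 * h)
print(np.isclose(dl, dl_fd, rtol=1e-4))
```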

Comment on lines +333 to +343
# Compute a finite difference approximation of the gradient of the log prior
delta = 1e-3
dl_prior_approx = [
(
param.prior.logpdf(inputs[param.name] * (1 + delta))
- param.prior.logpdf(inputs[param.name] * (1 - delta))
)
/ (2 * delta * inputs[param.name] + np.finfo(float).eps)
for param in self.problem.parameters
]


Change in #340, or split the prior update for evaluate and evaluate_S1 into a separate PR.
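The snippet above estimates the gradient of the log prior with a central difference over a relative step. The same idea in standalone form, using a hand-rolled Gaussian logpdf rather than PyBOP's param.prior objects:

```python
import numpy as np

# Hand-rolled Gaussian log-prior; PyBOP's param.prior is not used here
def logpdf(x, mu=0.5, s=0.1):
    return -0.5 * np.log(2 * np.pi * s**2) - (x - mu) ** 2 / (2 * s**2)

theta = 0.6
delta = 1e-3

# Central difference over a relative step, as in the snippet above;
# the eps term guards against division by zero when theta is 0
dl_prior_approx = (
    logpdf(theta * (1 + delta)) - logpdf(theta * (1 - delta))
) / (2 * delta * theta + np.finfo(float).eps)

dl_prior_exact = -(theta - 0.5) / 0.1**2  # analytic gradient of the log-pdf
print(np.isclose(dl_prior_approx, dl_prior_exact, rtol=1e-3))
```

Note that the relative step makes the approximation break down for parameters at or near zero, which is what the eps guard papers over.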

Input parameters for the simulation. If the input is array-like, it is converted
to a dictionary using the model's fitting keys. Defaults to None, indicating
that the default parameters should be used.
inputs : Inputse, optional

Update in #338:

Suggested change
inputs : Inputse, optional
inputs : Inputs, optional

@@ -72,7 +72,7 @@ def spm_costs(self, model, parameters, cost_class, init_soc):
if cost_class in [pybop.GaussianLogLikelihoodKnownSigma]:
return cost_class(problem, sigma0=0.002)
elif cost_class in [pybop.GaussianLogLikelihood]:
return cost_class(problem)
return cost_class(problem, sigma0=0.002 * 3)

Update in #338 to:

Suggested change
return cost_class(problem, sigma0=0.002 * 3)
return cost_class(problem, sigma0=0.002 * 3) # Initial sigma0 guess

@@ -127,7 +132,25 @@ def test_gaussian_log_likelihood(self, one_signal_problem):
grad_result, grad_likelihood = likelihood.evaluateS1(np.array([0.5, 0.5]))
assert isinstance(result, float)
np.testing.assert_allclose(result, grad_result, atol=1e-5)
assert np.all(grad_likelihood <= 0)
assert grad_likelihood[0] <= 0 # TEMPORARY WORKAROUND

Update in #338

@BradyPlanden BradyPlanden merged commit 5c99037 into gauss-log-like-fixes Jun 28, 2024
29 checks passed
@NicolaCourtier NicolaCourtier deleted the 338b-gauss-loglikelihood branch July 5, 2024 11:34