Suggesting changes to #338 (#352)
Conversation
Hi @NicolaCourtier - thanks for this alternative implementation! I went down a similar path before stopping during the implementation of #338; I came to the conclusion that adding the […]. However, I'm keen to integrate your general improvements into #338. Specifically, your updates to check_sigma0, plot_2d and the CMAES check look great! Also, all of the docstring changes I had missed 😄. If you'd like to commit those changes directly to that branch, that would be amazing; otherwise I'm happy to add them during review. Have a great weekend!
Codecov Report
Attention: Patch coverage is […]
Additional details and impacted files

@@                 Coverage Diff                  @@
##           gauss-log-like-fixes     #352   +/-  ##
====================================================
+ Coverage                 97.36%   97.53%   +0.16%
====================================================
  Files                        42       42
  Lines                      2433     2476      +43
====================================================
+ Hits                       2369     2415      +46
+ Misses                       64       61       -3

☔ View full report in Codecov by Sentry.
Thanks for the comment @BradyPlanden. I'm just getting the coverage up on this branch, and then it would be great to discuss the different implementations tomorrow, if you have time.
Thanks @NicolaCourtier, I've added a few comments for me to address in #338. Will merge now.
self.parameters.join(self.sigma)

if dsigma_scale is None:
    self._dsigma_scale = sigma0
Change in #338 to self._dsigma_scale = 1.0
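In plain Python, the fallback under discussion is a one-liner; a minimal sketch, assuming the constructor arguments sigma0 and dsigma_scale from the fragment above:

# Use sigma0 as the fallback scale when dsigma_scale is not supplied;
# per the comment above, #338 changes this default to 1.0 instead.
self._dsigma_scale = sigma0 if dsigma_scale is None else dsigma_scale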
problem_inputs = self.problem.parameters.as_dict()
y = self.problem.evaluate(problem_inputs)
Change in #338 to:
- problem_inputs = self.problem.parameters.as_dict()
- y = self.problem.evaluate(problem_inputs)
+ y = self.problem.evaluate(self.problem.parameters.as_dict())
problem_inputs = self.problem.parameters.as_dict()
y, dy = self.problem.evaluateS1(problem_inputs)
Change in #338 to:

- problem_inputs = self.problem.parameters.as_dict()
- y, dy = self.problem.evaluateS1(problem_inputs)
+ y, dy = self.problem.evaluateS1(self.problem.parameters.as_dict())
  dsigma = (
-     -self.n_time_data / sigma + sigma ** (-3.0) * np.sum(r**2, axis=1)
+     -self.n_time_data / sigma + np.sum(r**2, axis=1) / (sigma**3)
Change in #338 to:

- -self.n_time_data / sigma + np.sum(r**2, axis=1) / (sigma**3)
+ -self.n_time_data / sigma + np.sum(r**2, axis=1) / (sigma**3.0)
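For reference, the expression being touched here is the derivative of the Gaussian log-likelihood with respect to sigma, dL/dsigma = -n/sigma + sum(r^2)/sigma^3. A minimal numpy sketch (illustrative names; a single 1D signal rather than PyBOP's multi-signal arrays) that checks the expression against a finite difference:

import numpy as np

rng = np.random.default_rng(0)
r = rng.normal(0.0, 0.1, size=50)  # residuals (illustrative)
n = r.size

def log_likelihood(sigma):
    # Gaussian log-likelihood of the residuals for a given sigma
    return -n / 2 * np.log(2 * np.pi) - n * np.log(sigma) - np.sum(r**2) / (2 * sigma**2)

sigma = 0.1
dsigma = -n / sigma + np.sum(r**2) / (sigma**3.0)  # expression from the diff
h = 1e-6
fd = (log_likelihood(sigma + h) - log_likelihood(sigma - h)) / (2 * h)
print(dsigma, fd)  # the two values should agree closely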
  r = np.array([self._target[signal] - y[signal] for signal in self.signal])
  likelihood = self._evaluate(x)
- dl = np.sum((sigma ** (-2.0) * np.sum((r * dy.T), axis=2)), axis=1)
+ dl = np.sum((np.sum((r * dy.T), axis=2) / (sigma**2)), axis=1)
Change in #338 to:

- dl = np.sum((np.sum((r * dy.T), axis=2) / (sigma**2)), axis=1)
+ dl = np.sum((np.sum((r * dy.T), axis=2) / (sigma**2.0)), axis=1)
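Similarly, dl is the chain-rule term dL/dtheta = sum(r * dy/dtheta) / sigma^2. A hedged sketch with a toy one-parameter model y(theta) = theta * t, checking the expression against a finite difference:

import numpy as np

t = np.linspace(0.0, 1.0, 20)
data = 2.0 * t  # synthetic target (illustrative)
sigma = 0.05
n = t.size

def log_likelihood(theta):
    r = data - theta * t
    return -n / 2 * np.log(2 * np.pi) - n * np.log(sigma) - np.sum(r**2) / (2 * sigma**2)

theta = 1.8
r = data - theta * t
dy = t  # dy/dtheta for this toy model
dl = np.sum(r * dy) / (sigma**2.0)  # expression from the diff, 1D case
h = 1e-6
fd = (log_likelihood(theta + h) - log_likelihood(theta - h)) / (2 * h)
print(dl, fd)  # the two values should agree closely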
# Compute a finite difference approximation of the gradient of the log prior
delta = 1e-3
dl_prior_approx = [
    (
        param.prior.logpdf(inputs[param.name] * (1 + delta))
        - param.prior.logpdf(inputs[param.name] * (1 - delta))
    )
    / (2 * delta * inputs[param.name] + np.finfo(float).eps)
    for param in self.problem.parameters
]
Change in #340, or split the prior update for evaluate and evaluateS1 into a separate PR.
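The scheme above is a relative-step central difference, with the eps term guarding against division by zero. A minimal standalone sketch, using scipy.stats.norm as a stand-in for a PyBOP prior (an assumption; the snippet above shows the priors expose a comparable logpdf):

import numpy as np
from scipy.stats import norm

prior = norm(loc=0.5, scale=0.1)  # stand-in prior (assumption)
x = 0.45
delta = 1e-3

# Relative-step central difference of the log prior, as in the snippet above
fd_grad = (prior.logpdf(x * (1 + delta)) - prior.logpdf(x * (1 - delta))) / (
    2 * delta * x + np.finfo(float).eps
)
exact = -(x - 0.5) / 0.1**2  # analytic d/dx log N(x; 0.5, 0.1)
print(fd_grad, exact)  # ~5.0 for both

Note that the step delta * x shrinks to zero at x = 0, where the eps term only prevents a division error; an absolute step may be more robust there.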
    Input parameters for the simulation. If the input is array-like, it is converted
    to a dictionary using the model's fitting keys. Defaults to None, indicating
    that the default parameters should be used.
inputs : Inputse, optional
Update in #338:
- inputs : Inputse, optional
+ inputs : Inputs, optional
@@ -72,7 +72,7 @@ def spm_costs(self, model, parameters, cost_class, init_soc):
      if cost_class in [pybop.GaussianLogLikelihoodKnownSigma]:
          return cost_class(problem, sigma0=0.002)
      elif cost_class in [pybop.GaussianLogLikelihood]:
-         return cost_class(problem)
+         return cost_class(problem, sigma0=0.002 * 3)
Update in #338 to:
- return cost_class(problem, sigma0=0.002 * 3)
+ return cost_class(problem, sigma0=0.002 * 3)  # Initial sigma0 guess
@@ -127,7 +132,25 @@ def test_gaussian_log_likelihood(self, one_signal_problem):
      grad_result, grad_likelihood = likelihood.evaluateS1(np.array([0.5, 0.5]))
      assert isinstance(result, float)
      np.testing.assert_allclose(result, grad_result, atol=1e-5)
-     assert np.all(grad_likelihood <= 0)
+     assert grad_likelihood[0] <= 0  # TEMPORARY WORKAROUND
Update in #338
Description

Demonstrating alternative changes to implement the additions in #338. The main difference is how the sigma parameter is treated in the GaussianLogLikelihood cost function. Here, it is implemented as an additional PyBOP Parameter.

Issue reference

Towards fixing #257.
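The sigma-as-parameter idea can be sketched outside PyBOP: append sigma to the optimisation vector and fit it jointly with the model parameters. A minimal, hedged illustration (toy model and scipy optimiser; not PyBOP's API):

import numpy as np
from scipy.optimize import minimize

t = np.linspace(0.0, 1.0, 50)
data = 2.0 * t + np.random.default_rng(1).normal(0.0, 0.05, t.size)

def neg_log_likelihood(x):
    theta, sigma = x  # sigma rides along as an extra parameter
    if sigma <= 0:
        return np.inf  # keep the optimiser in the valid region
    r = data - theta * t
    n = t.size
    return n * np.log(sigma) + np.sum(r**2) / (2 * sigma**2) + n / 2 * np.log(2 * np.pi)

result = minimize(neg_log_likelihood, x0=[1.0, 0.1], method="Nelder-Mead")
print(result.x)  # approximately [2.0, 0.05]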
Review

Before you mark your PR as ready for review, please ensure that you've considered the following:

Type of change

Key checklist:

- $ pre-commit run (or $ nox -s pre-commit) (see CONTRIBUTING.md for how to set this up to run automatically when committing locally, in just two lines of code)
- $ nox -s tests
- $ nox -s doctest

You can run integration tests, unit tests, and doctests together at once, using $ nox -s quick.

Further checks:
Thank you for contributing to our project! Your efforts help us to deliver great software.