Adds Minkowski and SumofPower Cost #383

Merged
merged 30 commits into develop from minkowski-cost on Jul 15, 2024
Commits (30)
7423dd8
feat: add Minkowski cost, remove redundant code in fitting_costs.py, …
BradyPlanden Jun 30, 2024
4c074b2
tests: add catch for incorrect order, tests
BradyPlanden Jul 1, 2024
b3ef680
feat: add cost function comparision example, updt parameters.rvs to w…
BradyPlanden Jul 1, 2024
294347a
fix: default renderer for github display, updt ploting range
BradyPlanden Jul 1, 2024
5b6f42c
Apply suggestions from code review
BradyPlanden Jul 1, 2024
1b93101
add changelog entry
BradyPlanden Jul 1, 2024
43cfcb5
tests: Fix incorrect sigma0 values for likelihood based spm fitting
BradyPlanden Jul 2, 2024
f146c99
fix: add explicit tests on scipy_minimise cost wrapper for cost==np.inf
BradyPlanden Jul 3, 2024
1dff143
Merge branch 'develop' into minkowski-cost
BradyPlanden Jul 5, 2024
ff03bd2
fix: update scipy.cost_wrapper==np.inf assert values for scipy_minimi…
BradyPlanden Jul 5, 2024
cea297f
Apply suggestions from code review
BradyPlanden Jul 5, 2024
215f743
Merge branch 'develop' into minkowski-cost
BradyPlanden Jul 9, 2024
60acebc
fix: non-finite gradient values
BradyPlanden Jul 9, 2024
acd135e
fix: revert to e.item() as float(e) is depreciated, refactor spm_para…
BradyPlanden Jul 9, 2024
5f5737a
fix: remove missed merge items, change isfinite check to return -np.inf
BradyPlanden Jul 9, 2024
12a9818
tests: updt MAP prior lower bound, improve MAP example
BradyPlanden Jul 9, 2024
cac3f6b
Merge branch 'develop' into minkowski-cost
BradyPlanden Jul 11, 2024
09be74c
updt. docstring for sphinx rendering
BradyPlanden Jul 11, 2024
cfe0f31
Add default n_samples
NicolaCourtier Jul 12, 2024
f5664e1
Update Minkowski definition
NicolaCourtier Jul 12, 2024
9e39409
Update comparing_cost_functions.ipynb
NicolaCourtier Jul 12, 2024
7795b1d
Add Minkowski p=inf test
NicolaCourtier Jul 12, 2024
973b51c
style: pre-commit fixes
pre-commit-ci[bot] Jul 12, 2024
73427f9
Fix for plot2d bounds error
NicolaCourtier Jul 12, 2024
73f5477
Reset test_cost.py
NicolaCourtier Jul 12, 2024
64312ae
feat: Add SumofPower cost function
BradyPlanden Jul 12, 2024
5491702
Merge pull request #407 from pybop-team/382b-minkowski-cost
BradyPlanden Jul 15, 2024
82d4d9e
feat: add catch for p=np.inf for SumofPower, updt examples
BradyPlanden Jul 15, 2024
58eded6
Minor notebook edits
NicolaCourtier Jul 15, 2024
f06dd4a
applying suggestion from review
BradyPlanden Jul 15, 2024
1 change: 1 addition & 0 deletions CHANGELOG.md
@@ -2,6 +2,7 @@

## Features

- [#383](https://github.com/pybop-team/PyBOP/pull/383) - Adds Minkowski and SumofPower cost classes, with an example and corresponding tests.
- [#403](https://github.com/pybop-team/PyBOP/pull/403/) - Adds lychee link checking action.

## Bug Fixes
609 changes: 609 additions & 0 deletions examples/notebooks/comparing_cost_functions.ipynb

Large diffs are not rendered by default.

2 changes: 1 addition & 1 deletion examples/scripts/spm_AdamW.py
@@ -53,7 +53,7 @@ def noise(sigma):
problem = pybop.FittingProblem(
model, parameters, dataset, signal=signal, init_soc=init_soc
)
cost = pybop.RootMeanSquaredError(problem)
cost = pybop.Minkowski(problem, p=2)
optim = pybop.AdamW(
cost,
verbose=True,
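For context on the swap above: the Minkowski cost of order p generalises the familiar least-squares fit, and p=2 recovers a Euclidean-distance fit comparable to the `RootMeanSquaredError` it replaces. A minimal standalone sketch (pure NumPy, not PyBOP's implementation):

```python
import numpy as np

def minkowski_distance(prediction: np.ndarray, target: np.ndarray, p: float = 2.0) -> float:
    """Minkowski distance of order p: (sum_i |e_i|**p) ** (1/p)."""
    if p <= 0:
        raise ValueError("The order p must be positive.")
    residuals = np.abs(np.asarray(prediction) - np.asarray(target))
    return float(np.sum(residuals**p) ** (1.0 / p))

# p=2 coincides with the Euclidean norm of the residuals:
r = np.array([0.01, -0.02, 0.005])
assert np.isclose(minkowski_distance(r, np.zeros(3), p=2), np.linalg.norm(r))
```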
45 changes: 31 additions & 14 deletions examples/scripts/spm_MAP.py
@@ -2,12 +2,16 @@

import pybop

# Set variables
sigma = 0.002
init_soc = 0.7

# Construct and update initial parameter values
parameter_set = pybop.ParameterSet.pybamm("Chen2020")
parameter_set.update(
{
"Negative electrode active material volume fraction": 0.63,
"Positive electrode active material volume fraction": 0.51,
"Negative electrode active material volume fraction": 0.43,
"Positive electrode active material volume fraction": 0.56,
}
)

@@ -18,38 +22,51 @@
parameters = pybop.Parameters(
pybop.Parameter(
"Negative electrode active material volume fraction",
prior=pybop.Gaussian(0.6, 0.05),
bounds=[0.5, 0.8],
prior=pybop.Uniform(0.3, 0.8),
bounds=[0.3, 0.8],
initial_value=0.653,
true_value=parameter_set["Negative electrode active material volume fraction"],
),
pybop.Parameter(
"Positive electrode active material volume fraction",
prior=pybop.Gaussian(0.48, 0.05),
prior=pybop.Uniform(0.3, 0.8),
bounds=[0.4, 0.7],
initial_value=0.657,
true_value=parameter_set["Positive electrode active material volume fraction"],
),
)

# Generate data
sigma = 0.005
t_eval = np.arange(0, 900, 3)
values = model.predict(t_eval=t_eval)
corrupt_values = values["Voltage [V]"].data + np.random.normal(0, sigma, len(t_eval))
# Generate data and corrupt it with noise
experiment = pybop.Experiment(
[
(
"Discharge at 0.5C for 3 minutes (4 second period)",
"Charge at 0.5C for 3 minutes (4 second period)",
),
]
)
values = model.predict(
init_soc=init_soc, experiment=experiment, inputs=parameters.true_value()
)
corrupt_values = values["Voltage [V]"].data + np.random.normal(
0, sigma, len(values["Voltage [V]"].data)
)

# Form dataset
dataset = pybop.Dataset(
{
"Time [s]": t_eval,
"Time [s]": values["Time [s]"].data,
"Current function [A]": values["Current [A]"].data,
"Voltage [V]": corrupt_values,
}
)

# Generate problem, cost function, and optimisation class
problem = pybop.FittingProblem(model, parameters, dataset)
problem = pybop.FittingProblem(model, parameters, dataset, init_soc=init_soc)
cost = pybop.MAP(problem, pybop.GaussianLogLikelihoodKnownSigma, sigma0=sigma)
optim = pybop.AdamW(
cost,
sigma0=0.05,
max_unchanged_iterations=20,
min_iterations=20,
max_iterations=100,
@@ -61,7 +78,7 @@
print("Estimated parameters:", x)

# Plot the timeseries output
pybop.quick_plot(problem, problem_inputs=x[0:2], title="Optimised Comparison")
pybop.quick_plot(problem, problem_inputs=x, title="Optimised Comparison")

# Plot convergence
pybop.plot_convergence(optim)
@@ -73,5 +90,5 @@
pybop.plot2d(cost, steps=15)

# Plot the cost landscape with optimisation path
bounds = np.asarray([[0.55, 0.77], [0.48, 0.68]])
bounds = np.asarray([[0.35, 0.7], [0.45, 0.625]])
pybop.plot2d(optim, bounds=bounds, steps=15)
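The restructured spm_MAP.py exercises `pybop.MAP`, which (per the `_likelihoods.py` diff below) adds a log-prior to the data log-likelihood and now bails out early when a parameter draw has no prior support. A toy sketch of that structure, with stand-in likelihood and prior functions rather than PyBOP's classes:

```python
import numpy as np
from scipy.stats import norm, uniform

def log_posterior(theta: np.ndarray) -> float:
    # Uniform(0.3, 0.8) priors on both parameters, as in the script above
    log_prior = np.sum(uniform(loc=0.3, scale=0.5).logpdf(theta))
    if not np.isfinite(log_prior):
        return -np.inf  # out of prior support: skip the likelihood entirely
    # Stand-in Gaussian log-likelihood around a made-up optimum
    log_likelihood = np.sum(norm(loc=0.5, scale=0.05).logpdf(theta))
    return log_likelihood + log_prior

print(log_posterior(np.array([0.43, 0.56])))  # finite
print(log_posterior(np.array([0.90, 0.56])))  # -inf: prior excludes 0.9
```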
2 changes: 1 addition & 1 deletion examples/scripts/spm_SNES.py
@@ -35,7 +35,7 @@

# Generate problem, cost function, and optimisation class
problem = pybop.FittingProblem(model, parameters, dataset)
cost = pybop.SumSquaredError(problem)
cost = pybop.SumofPower(problem, p=2)
optim = pybop.SNES(cost, max_iterations=100)

x, final_cost = optim.run()
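The SumofPower cost used here is, per the class name and the p=np.inf catch added in commit 82d4d9e, a sum of absolute residuals raised to the power p; with p=2 it reduces to the `SumSquaredError` it replaces. A minimal sketch under those assumptions (not PyBOP's implementation):

```python
import numpy as np

def sum_of_power(prediction, target, p: float = 2.0) -> float:
    """sum_i |e_i|**p; p=2 gives the classic sum of squared errors."""
    if p <= 0:
        raise ValueError("p must be positive")
    if np.isinf(p):
        raise ValueError("p must be finite")
    e = np.abs(np.asarray(prediction) - np.asarray(target))
    return float(np.sum(e**p))

r = np.array([0.1, -0.3, 0.2])
assert np.isclose(sum_of_power(r, np.zeros(3), p=2), np.sum(r**2))
```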
2 changes: 2 additions & 0 deletions pybop/__init__.py
@@ -85,6 +85,8 @@
from .costs.fitting_costs import (
RootMeanSquaredError,
SumSquaredError,
Minkowski,
SumofPower,
ObserverCost,
)
from .costs.design_costs import (
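With both classes exported at the top level, the relationship between them is simply a p-th root. A pure-NumPy illustration (the actual `pybop.Minkowski` and `pybop.SumofPower` wrap a `FittingProblem` rather than raw arrays):

```python
import numpy as np

residuals = np.array([0.1, -0.3, 0.2])
p = 2
sum_power = np.sum(np.abs(residuals) ** p)  # SumofPower-style value: 0.14
minkowski = sum_power ** (1.0 / p)          # Minkowski-style value: ~0.374
assert np.isclose(minkowski, np.linalg.norm(residuals))  # p=2 is the Euclidean norm
```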
47 changes: 22 additions & 25 deletions pybop/costs/_likelihoods.py
@@ -38,19 +38,17 @@ def __init__(self, problem: BaseProblem, sigma0: Union[list[float], float]):
self.sigma2 = sigma0**2.0
self._offset = -0.5 * self.n_time_data * np.log(2 * np.pi * self.sigma2)
self._multip = -1 / (2.0 * self.sigma2)
self._dl = np.ones(self.n_parameters)

def _evaluate(self, inputs: Inputs, grad: Union[None, np.ndarray] = None) -> float:
"""
Evaluates the Gaussian log-likelihood for the given parameters with known sigma.
"""
y = self.problem.evaluate(inputs)
if any(
len(y.get(key, [])) != len(self._target.get(key, [])) for key in self.signal
):
return -np.inf # prediction length doesn't match target

e = np.sum(
if not self.verify_prediction(y):
return -np.inf

e = np.asarray(
[
np.sum(
self._offset
@@ -60,18 +58,16 @@ def _evaluateS1(self, inputs: Inputs) -> tuple[float, np.ndarray]:
]
)

return e if self.n_outputs != 1 else e.item()
return e.item() if self.n_outputs == 1 else np.sum(e)

def _evaluateS1(self, inputs: Inputs) -> tuple[float, np.ndarray]:
"""
Calls the problem.evaluateS1 method and calculates the log-likelihood and gradient.
"""
y, dy = self.problem.evaluateS1(inputs)

if any(
len(y.get(key, [])) != len(self._target.get(key, [])) for key in self.signal
):
return -np.inf, -self._dl
if not self.verify_prediction(y):
return -np.inf, -self._de * np.ones(self.n_parameters)

likelihood = self._evaluate(inputs)
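In isolation, the per-signal expression assembled from `_offset` and `_multip` above is the standard Gaussian log-likelihood with known sigma; a self-contained sketch:

```python
import numpy as np

def gaussian_loglik_known_sigma(y: np.ndarray, target: np.ndarray, sigma: float) -> float:
    """-n/2 * log(2*pi*sigma^2) - sum((y - target)^2) / (2*sigma^2)"""
    n = len(target)
    offset = -0.5 * n * np.log(2 * np.pi * sigma**2)
    multip = -1.0 / (2.0 * sigma**2)
    return offset + multip * np.sum((y - target) ** 2)

rng = np.random.default_rng(0)
target = np.sin(np.linspace(0, 1, 100))
y = target + rng.normal(0, 0.002, 100)
print(gaussian_loglik_known_sigma(y, target, sigma=0.002))
```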

@@ -125,7 +121,6 @@ def __init__(
self.sigma = Parameters()
self._add_sigma_parameters(sigma0)
self.parameters.join(self.sigma)
self._dl = np.ones(self.n_parameters)

def _add_sigma_parameters(self, sigma0):
sigma0 = [sigma0] if not isinstance(sigma0, list) else sigma0
@@ -195,12 +190,10 @@ def _evaluate(self, inputs: Inputs, grad: Union[None, np.ndarray] = None) -> float:
return -np.inf

y = self.problem.evaluate(self.problem.parameters.as_dict())
if any(
len(y.get(key, [])) != len(self._target.get(key, [])) for key in self.signal
):
return -np.inf # prediction length doesn't match target
if not self.verify_prediction(y):
return -np.inf

e = np.sum(
e = np.asarray(
[
np.sum(
self._logpi
Expand All @@ -212,7 +205,7 @@ def _evaluate(self, inputs: Inputs, grad: Union[None, np.ndarray] = None) -> flo
]
)

return e if self.n_outputs != 1 else e.item()
return e.item() if self.n_outputs == 1 else np.sum(e)

def _evaluateS1(self, inputs: Inputs) -> tuple[float, np.ndarray]:
"""
@@ -232,13 +225,11 @@ def _evaluateS1(self, inputs: Inputs) -> tuple[float, np.ndarray]:

sigma = self.sigma.current_value()
if np.any(sigma <= 0):
return -np.inf, -self._dl
return -np.inf, -self._de * np.ones(self.n_parameters)

y, dy = self.problem.evaluateS1(self.problem.parameters.as_dict())
if any(
len(y.get(key, [])) != len(self._target.get(key, [])) for key in self.signal
):
return -np.inf, -self._dl
if not self.verify_prediction(y):
return -np.inf, -self._de * np.ones(self.n_parameters)

likelihood = self._evaluate(inputs)

@@ -302,11 +293,14 @@ def _evaluate(self, inputs: Inputs, grad=None) -> float:
float
The maximum a posteriori cost.
"""
log_likelihood = self.likelihood._evaluate(inputs)
log_prior = sum(
self.parameters[key].prior.logpdf(value) for key, value in inputs.items()
)

if not np.isfinite(log_prior).any():
return -np.inf

log_likelihood = self.likelihood._evaluate(inputs)
posterior = log_likelihood + log_prior
return posterior

@@ -331,10 +325,13 @@ def _evaluateS1(self, inputs: Inputs) -> tuple[float, np.ndarray]:
ValueError
If an error occurs during the calculation of the cost or gradient.
"""
log_likelihood, dl = self.likelihood._evaluateS1(inputs)
log_prior = sum(
self.parameters[key].prior.logpdf(value) for key, value in inputs.items()
)
if not np.isfinite(log_prior).any():
return -np.inf, -self._de * np.ones(self.n_parameters)

log_likelihood, dl = self.likelihood._evaluateS1(inputs)

# Compute a finite difference approximation of the gradient of the log prior
delta = self.parameters.initial_value() * self.gradient_step
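The finite-difference approximation mentioned in the comment above can be sketched independently. Here a central difference with a fixed step is used for illustration; the class itself derives its step from `parameters.initial_value()` times `gradient_step`, per the diff:

```python
import numpy as np

def fd_gradient(f, theta: np.ndarray, step: float = 1e-6) -> np.ndarray:
    """Central-difference gradient of a scalar function f at theta."""
    grad = np.zeros_like(theta, dtype=float)
    for i in range(theta.size):
        up, down = theta.copy(), theta.copy()
        up[i] += step
        down[i] -= step
        grad[i] = (f(up) - f(down)) / (2.0 * step)
    return grad

# Standard-normal log-prior: the exact gradient of logpdf is -theta
log_prior = lambda t: float(np.sum(-0.5 * t**2 - 0.5 * np.log(2.0 * np.pi)))
print(fd_gradient(log_prior, np.array([0.5, -0.2])))  # ~ [-0.5, 0.2]
```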
62 changes: 51 additions & 11 deletions pybop/costs/base_cost.py
@@ -1,3 +1,5 @@
from typing import Union

from pybop import BaseProblem
from pybop.parameters.parameter import Inputs, Parameters

@@ -25,6 +27,7 @@ class BaseCost:
def __init__(self, problem=None):
self.parameters = Parameters()
self.problem = problem
self.set_fail_gradient()
if isinstance(self.problem, BaseProblem):
self._target = self.problem._target
self.parameters.join(self.problem.parameters)
@@ -35,20 +38,20 @@ def __init__(self, problem=None):
def n_parameters(self):
return len(self.parameters)

def __call__(self, x, grad=None):
def __call__(self, inputs: Union[Inputs, list], grad=None):
"""
Call the evaluate function for a given set of parameters.
"""
return self.evaluate(x, grad)
return self.evaluate(inputs, grad)

def evaluate(self, x, grad=None):
def evaluate(self, inputs: Union[Inputs, list], grad=None):
"""
Call the evaluate function for a given set of parameters.

Parameters
----------
x : array-like
The parameters for which to evaluate the cost.
inputs : Inputs or array-like
The parameters for which to compute the cost and gradient.
grad : array-like, optional
An array to store the gradient of the cost function with respect
to the parameters.
@@ -63,7 +66,7 @@ def evaluate(self, x, grad=None):
ValueError
If an error occurs during the calculation of the cost.
"""
inputs = self.parameters.verify(x)
inputs = self.parameters.verify(inputs)

try:
return self._evaluate(inputs, grad)
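The x-to-inputs rename reflects that costs accept either a plain sequence or an Inputs-style mapping, with `parameters.verify()` normalising both forms. A usage sketch (parameter names borrowed from spm_MAP.py above; the cost object itself is not constructed here):

```python
# Either form should be accepted after verification:
x_list = [0.6, 0.6]
x_dict = {
    "Negative electrode active material volume fraction": 0.6,
    "Positive electrode active material volume fraction": 0.6,
}
# cost(x_list) and cost(x_dict) then evaluate the same point, assuming
# verify() maps a list onto the parameter names in declaration order.
```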
@@ -100,27 +103,27 @@ def _evaluate(self, inputs: Inputs, grad=None):
"""
raise NotImplementedError

def evaluateS1(self, x):
def evaluateS1(self, inputs: Union[Inputs, list]):
"""
Call _evaluateS1 for a given set of parameters.

Parameters
----------
x : array-like
inputs : Inputs or array-like
The parameters for which to compute the cost and gradient.

Returns
-------
tuple
A tuple containing the cost and the gradient. The cost is a float,
and the gradient is an array-like of the same length as `x`.
and the gradient is an array-like of the same length as `inputs`.

Raises
------
ValueError
If an error occurs during the calculation of the cost or gradient.
"""
inputs = self.parameters.verify(x)
inputs = self.parameters.verify(inputs)

try:
return self._evaluateS1(inputs)
@@ -144,11 +147,48 @@ def _evaluateS1(self, inputs: Inputs):
-------
tuple
A tuple containing the cost and the gradient. The cost is a float,
and the gradient is an array-like of the same length as `x`.
and the gradient is an array-like of the same length as `inputs`.

Raises
------
NotImplementedError
If the method has not been implemented by the subclass.
"""
raise NotImplementedError

def set_fail_gradient(self, de: float = 1.0):
"""
Set the fail gradient to a specified value.

The fail gradient is used if an error occurs during the calculation
of the gradient. This method allows updating the default gradient value.

Parameters
----------
de : float
The new fail gradient value to be used.
"""
if not isinstance(de, float):
de = float(de)
self._de = de
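How the fail gradient is consumed: on a failed evaluation, the likelihood classes above return -np.inf together with `-self._de * np.ones(self.n_parameters)`, handing gradient-based optimisers finite values instead of NaNs. A sketch:

```python
import numpy as np

n_parameters = 2
_de = 1.0  # default installed by set_fail_gradient()

# Return value on a failed/short prediction, mirroring _likelihoods.py:
fail_cost, fail_gradient = -np.inf, -_de * np.ones(n_parameters)
print(fail_cost, fail_gradient)  # -inf [-1. -1.]
```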

def verify_prediction(self, y):
"""
Verify that the prediction matches the target data.

Parameters
----------
y : dict
The model predictions.

Returns
-------
bool
True if the prediction matches the target data, otherwise False.
"""
if any(
len(y.get(key, [])) != len(self._target.get(key, [])) for key in self.signal
):
return False

return True
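The length check above in miniature (toy dictionaries; a short prediction typically means the solver terminated early):

```python
target = {"Voltage [V]": [3.7, 3.6, 3.5]}
signal = ["Voltage [V]"]

def matches_target(y: dict) -> bool:
    # Same length comparison as BaseCost.verify_prediction above
    return not any(len(y.get(k, [])) != len(target.get(k, [])) for k in signal)

assert matches_target({"Voltage [V]": [3.71, 3.59, 3.52]})
assert not matches_target({"Voltage [V]": [3.71, 3.59]})  # early solver exit
```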