
DRAM ACMC #982

Merged
merged 28 commits on Apr 1, 2020
Changes from 25 commits
34c2c8d
added docstrings
ben18785 Sep 30, 2019
cbdcfd9
more thorough docstrings
ben18785 Sep 30, 2019
20d1f7d
updated docstrings
ben18785 Sep 30, 2019
f960639
added example notebook
ben18785 Sep 30, 2019
0e941c1
added helper function
ben18785 Sep 30, 2019
7a3c94f
simplified function
ben18785 Sep 30, 2019
5eaa9c2
something has gone awry
ben18785 Sep 30, 2019
880beec
now works
ben18785 Sep 30, 2019
8283175
cleaned up docstrings
ben18785 Sep 30, 2019
1d2b855
Added tests
ben18785 Sep 30, 2019
d87f630
Replaced numpy flip with base routine
ben18785 Sep 30, 2019
c918c7d
improved test coverage
ben18785 Sep 30, 2019
368e11d
remove newly introduced numpy flip
ben18785 Sep 30, 2019
ec70fd7
Merge branch 'master' into dram
ben18785 Mar 18, 2020
c118694
Merge branch 'master' into dram
ben18785 Mar 18, 2020
1095e77
Merge branch 'dram' of https://github.com/pints-team/pints into dram
ben18785 Mar 18, 2020
7ab18c1
now working
ben18785 Mar 18, 2020
72a1bda
Update README.md
ben18785 Mar 18, 2020
8493b16
Merge branch 'master' into dram
MichaelClerx Mar 19, 2020
39cbfad
Updated copyright headers
MichaelClerx Mar 19, 2020
2d81e88
Cosmetic changes
MichaelClerx Mar 19, 2020
383d471
Merge branch 'master' into dram
ben18785 Mar 19, 2020
1de9f7b
added much more detail as per @michaelclerx comments
ben18785 Mar 19, 2020
61cf8b9
removed os import
ben18785 Mar 19, 2020
41421ea
Merge branch 'master' into dram
MichaelClerx Mar 20, 2020
4ee5344
Merge branch 'master' into dram
MichaelClerx Mar 31, 2020
8bd5a27
Merge branch 'master' into dram
MichaelClerx Apr 1, 2020
b1b3690
Merge branch 'master' into dram
MichaelClerx Apr 1, 2020
7 changes: 7 additions & 0 deletions docs/source/mcmc_samplers/dram_ac_mcmc.rst
@@ -0,0 +1,7 @@
*********
Dram ACMC
*********

.. currentmodule:: pints

.. autoclass:: DramACMC
1 change: 1 addition & 0 deletions docs/source/mcmc_samplers/index.rst
@@ -15,6 +15,7 @@ interface, that can be used to sample from an unknown
base_classes
adaptive_covariance_mc
differential_evolution_mcmc
dram_ac_mcmc
dream_mcmc
emcee_hammer_mcmc
haario_ac_mcmc
1 change: 1 addition & 0 deletions examples/README.md
@@ -38,6 +38,7 @@ relevant code.

### MCMC without gradients
- [Differential Evolution MCMC](./sampling/differential-evolution-mcmc.ipynb)
- [DRAM ACMC](./sampling/adaptive-covariance-dram.ipynb)
- [DREAM MCMC](./sampling/dream-mcmc.ipynb)
- [Emcee Hammer](./sampling/emcee-hammer.ipynb)
- [Haario Adaptive Covariance MCMC](./sampling/adaptive-covariance-haario.ipynb)
299 changes: 299 additions & 0 deletions examples/sampling/adaptive-covariance-dram.ipynb

Large diffs are not rendered by default.

4 changes: 4 additions & 0 deletions pints/__init__.py
@@ -195,8 +195,12 @@ def version(formatted=False):
MultiChainMCMC,
SingleChainMCMC,
)
# base classes first
from ._mcmc._adaptive_covariance import AdaptiveCovarianceMC

# methods
from ._mcmc._differential_evolution import DifferentialEvolutionMCMC
from ._mcmc._dram_ac import DramACMC
from ._mcmc._dream import DreamMCMC
from ._mcmc._emcee_hammer import EmceeHammerMCMC
from ._mcmc._haario_ac import HaarioACMC
290 changes: 290 additions & 0 deletions pints/_mcmc/_dram_ac.py
@@ -0,0 +1,290 @@
#
# DRAM AC MC method
#
# This file is part of PINTS (https://github.com/pints-team/pints/) which is
# released under the BSD 3-clause license. See accompanying LICENSE.md for
# copyright notice and full license details.
#
from __future__ import absolute_import, division
from __future__ import print_function, unicode_literals
import pints
import numpy as np
import scipy.stats as stats


class DramACMC(pints.AdaptiveCovarianceMC):
"""
DRAM (Delayed Rejection Adaptive Covariance) MCMC, as described in [1]_.

In this method, a rejection does not necessarily end an iteration.
Instead, if a rejection occurs, another point is proposed, typically
from a narrower (i.e. more conservative) proposal kernel than the one
used for the first proposal.

In each iteration, the following steps return the next state of the
Markov chain (assuming the current state is ``theta_0`` and that there
are 2 proposal kernels)::

theta_1 ~ N(theta_0, lambda * scale_1 * sigma)
alpha_1(theta_0, theta_1) = min(1, p(theta_1|X) / p(theta_0|X))
u_1 ~ uniform(0, 1)
if alpha_1(theta_0, theta_1) > u_1:
return theta_1
theta_2 ~ N(theta_0, lambda * scale_2 * sigma)
alpha_2(theta_0, theta_1, theta_2) =
min(1, p(theta_2|X) (1 - alpha_1(theta_2, theta_1)) /
(p(theta_0|X) (1 - alpha_1(theta_0, theta_1))))
u_2 ~ uniform(0, 1)
if alpha_2(theta_0, theta_1, theta_2) > u_2:
return theta_2
else:
return theta_0

Our implementation also allows more than 2 proposal kernels to be used.
This means that ``k`` accept-reject steps are taken. In each step (``i``),
the probability that a proposal ``theta_i`` is accepted is::

alpha_i(theta_0, theta_1, ..., theta_i) = min(1, p(theta_i|X) /
p(theta_0|X) * n_i / d_i)

where::

n_i = (1 - alpha_1(theta_i, theta_i-1)) *
(1 - alpha_2(theta_i, theta_i-1, theta_i-2)) *
...
(1 - alpha_i-1(theta_i, theta_i-1, ..., theta_0))
d_i = (1 - alpha_1(theta_0, theta_1)) *
(1 - alpha_2(theta_0, theta_1, theta_2)) *
...
(1 - alpha_i-1(theta_0, theta_1, ..., theta_i-1))

If ``k`` proposals have been rejected, the initial point ``theta_0`` is
returned.

At the end of each iteration, a 'base' proposal kernel is adapted::

mu = (1 - gamma) mu + gamma theta
sigma = (1 - gamma) sigma + gamma (theta - mu)(theta - mu)^t
log_lambda = log_lambda + gamma (accepted - target_acceptance_rate)

where ``gamma = adaptations^-eta``, ``theta`` is the current state of
the Markov chain and ``accepted`` is a binary indicator for whether any of
the proposals in the series was accepted. The kernels for all proposals
are then adapted as ``[scale_1, scale_2, ..., scale_k] * sigma``, where the
scale factors are set using ``set_sigma_scale``.

*Extends:* :class:`AdaptiveCovarianceMC`

References
----------
.. [1] "DRAM: Efficient adaptive MCMC".
H Haario, M Laine, A Mira, E Saksman (2006) Statistics and Computing
https://doi.org/10.1007/s11222-006-9438-0
"""
def __init__(self, x0, sigma0=None):
super(DramACMC, self).__init__(x0, sigma0)

self._adapt_kernel = True
self._before_kernels_set = True
self._log_lambda = 0
self._n_kernels = 2
self._proposal_count = 0
self._sigma_base = np.copy(self._sigma)
self._upper_scale = 1000
self._Y = [None] * self._n_kernels
self._Y_log_pdf = np.zeros(self._n_kernels)

def _adapt_sigma(self):
"""
Updates the covariance matrices of the various kernels being used
according to adaptive Metropolis routine.
"""
dsigm = np.reshape(self._current - self._mu, (self._n_parameters, 1))
self._sigma_base = ((1 - self._gamma) * self._sigma_base +
self._gamma * np.dot(dsigm, dsigm.T))
self._sigma = [self._sigma_scale[i] * self._sigma_base
for i in range(self._n_kernels)]
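The adaptation rule above can be sketched for a scalar parameter as follows. This is an illustrative stand-alone function, not PINTS code; the ``eta`` and ``target`` defaults here are assumptions chosen for the example:

```python
def adapt(mu, sigma, log_lambda, theta, accepted, adaptations,
          eta=0.6, target=0.234):
    """One adaptation step for a scalar parameter.

    Sketch of the updates described in the class docstring; ``eta`` and
    ``target`` values are illustrative, not necessarily PINTS defaults.
    """
    gamma = adaptations ** -eta  # decaying adaptation rate
    mu = (1 - gamma) * mu + gamma * theta
    sigma = (1 - gamma) * sigma + gamma * (theta - mu) ** 2
    log_lambda = log_lambda + gamma * (accepted - target)
    return mu, sigma, log_lambda
```

Because ``gamma = adaptations ** -eta`` shrinks over time, early iterations adapt aggressively while later ones leave the kernel nearly unchanged, preserving ergodicity.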

def _calculate_alpha_log(self, n, Y, log_Y):
"""
Calculates the log of the alpha expression in eq. 3 of Haario et al.,
used to determine acceptance.
"""
alpha_log = log_Y[n + 1] - log_Y[0]
if n == 0:
return min(0, alpha_log)
Y_rev = Y[::-1]
log_Y_rev = log_Y[::-1]
for i in range(n):
alpha_log += (
stats.multivariate_normal.logpdf(
x=Y[n - i - 1],
mean=Y[n + 1],
cov=self._sigma[n],
allow_singular=True) -
stats.multivariate_normal.logpdf(
x=Y[i],
mean=self._current,
cov=self._sigma[0],
allow_singular=True) +
np.log(1 - np.exp(self._calculate_alpha_log(
i, Y_rev[0:(i + 2)], log_Y_rev[0:(i + 2)]))) -
np.log(1 - np.exp(self._calculate_alpha_log(
i, Y[0:(i + 2)], log_Y[0:(i + 2)])))
)
return min(0, alpha_log)

def _calculate_r_log(self, fx):
"""
Calculates the log acceptance ratio (eq. 3 in [1]_).
"""
c = self._proposal_count
temp_Y = np.concatenate([[self._current], self._Y[0:(c + 1)]])
temp_log_Y = np.concatenate(
[[self._current_log_pdf], self._Y_log_pdf[0:(c + 1)]])
self._r_log = self._calculate_alpha_log(c, temp_Y, temp_log_Y)

def _generate_proposal(self):
""" See :meth:`AdaptiveCovarianceMC._generate_proposal()`. """
if self._before_kernels_set:
self.set_sigma_scale()
self._Y = [None] * self._n_kernels
self._Y_log_pdf = np.zeros(self._n_kernels)
self._before_kernels_set = False

proposed = np.random.multivariate_normal(
self._current, np.exp(self._log_lambda) *
self._sigma[self._proposal_count])
self._Y[self._proposal_count] = proposed
return proposed

def name(self):
""" See :meth:`pints.MCMCSampler.name()`. """
return 'Delayed Rejection Adaptive Metropolis (Dram) MCMC'

def n_kernels(self):
""" Returns number of proposal kernels. """
return self._n_kernels

def n_hyper_parameters(self):
""" See :meth:`TunableMethod.n_hyper_parameters()`. """
return 3

def set_hyper_parameters(self, x):
"""
The hyper-parameter vector is ``[eta, n_kernels, upper_scale]``.

See :meth:`TunableMethod.set_hyper_parameters()`.
"""
self.set_eta(x[0])
self.set_n_kernels(x[1])
self.set_upper_scale(x[2])

def set_n_kernels(self, n_kernels):
""" Sets number of proposal kernels. """
if n_kernels < 1:
raise ValueError('Number of proposal kernels must be equal to ' +
'or greater than 1.')
self._n_kernels = int(n_kernels)

def set_upper_scale(self, upper_scale):
"""
Sets the upper limit on the scale factors used to multiply the base
covariance matrix for each kernel. The factors are spaced uniformly on
the log10 scale between ``upper_scale`` and 1, so the proposal
covariance matrices are ``[upper_scale, ..., 1] * sigma``.
"""
if upper_scale < 0:
raise ValueError('Upper scale must be non-negative.')
self._upper_scale = upper_scale

def set_sigma_scale(self):
"""
Sets the scale factors used to multiply the base covariance matrix for
each of the kernels. The factors are spaced uniformly on the log10
scale between ``upper_scale`` and 1 (descending), so the proposal
covariance matrices are ``[upper_scale, ..., 1] * sigma``.
"""
a_min = np.log10(1)
a_max = np.log10(self._upper_scale)
self._sigma_scale = 10**np.linspace(a_min, a_max, self._n_kernels)
self._sigma_scale = self._sigma_scale[::-1]
self._sigma = [self._sigma_scale[i] * self._sigma_base
for i in range(self._n_kernels)]
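The gradation computed by ``set_sigma_scale`` can be sketched without NumPy as follows; the function name is illustrative, not part of PINTS:

```python
import math

def sigma_scales(upper_scale, n_kernels):
    """Scale factors spaced log10-uniformly between upper_scale and 1.

    Pure-Python sketch of the gradation in ``set_sigma_scale``: the
    first (widest) kernel gets the largest multiplier, the last gets 1.
    """
    if n_kernels == 1:
        return [float(upper_scale)]  # degenerate single-point linspace
    a_max = math.log10(upper_scale)
    step = a_max / (n_kernels - 1)
    # descending on the log10 scale
    return [10 ** (a_max - i * step) for i in range(n_kernels)]
```

For example, with ``upper_scale=1000`` and 4 kernels this yields multipliers ``[1000, 100, 10, 1]``, so each successive delayed-rejection stage proposes from a narrower kernel.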

def sigma_scale(self):
"""
Returns scale factors used to multiply a base covariance matrix,
resulting in proposal matrices for each accept-reject step.
"""
return self._sigma_scale

def tell(self, fx):
"""
If first proposal, then accept with ordinary Metropolis probability; if
a later proposal, use probability determined by [1]_.
"""
# Check if we had a proposal
if self._proposed is None:
raise RuntimeError('Tell called before proposal was set.')

# Ensure fx is a float
fx = float(fx)
self._Y_log_pdf[self._proposal_count] = fx

# First point?
if self._current is None:
if not np.isfinite(fx):
raise ValueError(
'Initial point for MCMC must have finite logpdf.')

# Accept
self._current = self._proposed
self._current_log_pdf = fx

# Increase iteration count
self._iterations += 1

# Clear proposal
self._proposed = None

# Return first point for chain
return self._current

# Check if the proposed point can be accepted
accepted = 0

if np.isfinite(fx):
self._calculate_r_log(fx)
u = np.log(np.random.uniform(0, 1))
if u < self._r_log:
accepted = 1
self._current = self._proposed
self._current_log_pdf = fx

self._proposed = None

if accepted == 0:
# rejected proposal
if self._n_kernels > 1 and (
self._proposal_count < (self._n_kernels - 1)):
self._proposal_count += 1
return None
else:
self._proposal_count = 0
self._gamma = (self._adaptations**-self._eta)
self._adaptations += 1

# Update mu, covariance matrix and log lambda
self._adapt_mu()
self._adapt_sigma()
self._log_lambda += (self._gamma *
(accepted - self._target_acceptance))
return self._current

def upper_scale(self):
"""
Returns upper scale limit (see
:meth:`pints.DramACMC.set_upper_scale()`).
"""
return self._upper_scale
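The two-stage accept/reject scheme implemented above can be illustrated with a minimal, self-contained one-dimensional sketch: two fixed kernels, a standard normal target, and no covariance adaptation. All names here are illustrative and independent of the PINTS implementation:

```python
import math
import random

def log_target(x):
    # standard normal log-density, up to an additive constant
    return -0.5 * x * x

def dr_metropolis_step(x0, scale_1=1.0, scale_2=0.1):
    """One delayed-rejection Metropolis step with two proposal kernels.

    A 1-d sketch of the two-stage scheme from the class docstring; the
    target and scales are illustrative and nothing is adapted.
    """
    # Stage 1: wide proposal, ordinary Metropolis acceptance
    x1 = random.gauss(x0, scale_1)
    log_a1 = min(0.0, log_target(x1) - log_target(x0))
    if random.random() < math.exp(log_a1):
        return x1

    # Stage 2: narrower proposal, still centred on the current point
    x2 = random.gauss(x0, scale_2)
    log_a1_rev = min(0.0, log_target(x1) - log_target(x2))
    if log_a1_rev == 0.0:
        return x0  # reverse first-stage move is certain: alpha_2 = 0

    def log_q1(frm, to):
        # stage-1 Gaussian proposal log-density, up to a constant
        return -0.5 * ((to - frm) / scale_1) ** 2

    # alpha_2 weighs the forward and reverse stage-1 rejections as well
    # as the stage-1 proposal densities, preserving detailed balance
    log_a2 = min(0.0,
                 (log_target(x2) + log_q1(x2, x1)
                  + math.log1p(-math.exp(log_a1_rev))) -
                 (log_target(x0) + log_q1(x0, x1)
                  + math.log1p(-math.exp(log_a1))))
    if random.random() < math.exp(log_a2):
        return x2
    return x0
```

Running a chain of these steps and checking its sample moments against the standard normal target is a quick sanity check that the second-stage correction is implemented consistently.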