Faster Hamiltonian Monte Carlo by Learning Leapfrog Scale
Changye Wu, Julien Stoehr and Christian P. Robert.
Université Paris-Dauphine, Université PSL, CNRS, CEREMADE, 75016 PARIS, FRANCE
Abstract
Hamiltonian Monte Carlo samplers have become standard algorithms for MCMC implementations, as opposed to more basic versions, but they still require some amount of tuning and calibration. Exploiting the U-turn criterion of the NUTS algorithm [2], we propose a version of HMC that relies on the distribution of the integration time of the associated leapfrog integrator. Using in addition the primal-dual averaging method for tuning the step size of the integrator, we achieve an essentially calibration-free version of HMC. When compared with the original NUTS on benchmarks, this algorithm exhibits significantly improved efficiency.
Hamiltonian Monte Carlo (HMC, [3])
Consider a density π on Θ ⊂ R^d with respect to the Lebesgue measure,
$$\pi(\theta) \propto \exp\{-U(\theta)\}, \quad \text{where } U \in C^1(\Theta).$$
Aim: generate a Markov chain (θ_1, . . . , θ_N) with invariant distribution π to estimate, for some function h, functionals with respect to π,
$$\frac{1}{N}\sum_{n=1}^{N} h(\theta_n) \xrightarrow[N \to +\infty]{\text{a.s.}} \int_\Theta h(\theta)\,\pi(\mathrm{d}\theta).$$
Principle: sample from an augmented target distribution
$$\pi(\theta, v) = \pi(\theta)\,\mathcal{N}(v \mid 0, M) \propto \exp\{-H(\theta, v)\}.$$
• auxiliary variable v ∈ R^d referred to as the momentum, as opposed to θ, referred to as the position,
• the marginal chain in θ targets the distribution of interest.
Hamiltonian dynamics: generating proposals for (θ, v) based on
$$\frac{\mathrm{d}\theta}{\mathrm{d}t} = \frac{\partial H}{\partial v} = M^{-1}v, \qquad \frac{\mathrm{d}v}{\mathrm{d}t} = -\frac{\partial H}{\partial \theta} = -\nabla U(\theta).$$
⊕ leaves π invariant, allows large moves,
⊖ requires the exact solution flow to the differential equations.
[Diagram: Hamiltonian flow from (θ, v) to (θ′, v′).]
Leapfrog integrator: second-order symplectic integrator which yields an approximate solution flow by iterating the following procedure from (θ_0, v_0) = (θ, v):
$$r = v_n - \tfrac{\epsilon}{2}\nabla U(\theta_n), \qquad \theta_{n+1} = \theta_n + \epsilon M^{-1} r, \qquad v_{n+1} = r - \tfrac{\epsilon}{2}\nabla U(\theta_{n+1}).$$
• ε: a discretisation time-step.
• L: a number of leapfrog steps, approximating the solution at time t = Lε.
[Diagram: discretised path from (θ, v) to (θ′, v′).]
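As an illustration, a minimal NumPy sketch of the leapfrog scheme above; `grad_U`, the step size `eps`, and the inverse mass matrix `M_inv` are supplied by the user, and the function names are ours, chosen for this sketch.

```python
import numpy as np

def leapfrog(theta, v, grad_U, eps, L, M_inv=None):
    """Iterate L leapfrog steps of size eps from (theta, v).

    grad_U(theta) must return the gradient of the potential U.
    M_inv is the inverse mass matrix (identity if None).
    """
    if M_inv is None:
        M_inv = np.eye(len(theta))
    theta, v = theta.copy(), v.copy()
    for _ in range(L):
        r = v - 0.5 * eps * grad_U(theta)     # half step on the momentum
        theta = theta + eps * M_inv @ r       # full step on the position
        v = r - 0.5 * eps * grad_U(theta)     # half step on the momentum
    return theta, v
```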
This scheme no longer leaves the measure π invariant!
Correction: an accept-reject step is introduced. A transition from (θ, v) to the proposal (θ′, −v′) is accepted with probability
$$\rho(\theta, v, \theta', v') = 1 \wedge \exp\left\{H(\theta, v) - H(\theta', -v')\right\}.$$
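Putting the integrator and the correction together, a sketch of one HMC transition, assuming M = I so the kinetic energy is v·v/2; the momentum flip in the proposal is immaterial here since the kinetic energy is even in v.

```python
def hmc_step(theta, U, grad_U, eps, L, rng):
    """One HMC transition targeting pi(theta) ∝ exp(-U(theta)), with M = I."""
    v = rng.standard_normal(len(theta))            # refresh momentum, v ~ N(0, I)
    H0 = U(theta) + 0.5 * v @ v                    # Hamiltonian at the current state
    theta_prop, v_prop = leapfrog(theta, v, grad_U, eps, L)
    H1 = U(theta_prop) + 0.5 * v_prop @ v_prop     # H(theta', -v') = H(theta', v')
    if np.log(rng.uniform()) < H0 - H1:            # accept w.p. 1 ∧ exp(H0 - H1)
        return theta_prop
    return theta
```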
Pros & cons:
⊕ The algorithm theoretically benefits from fast exploration of the parameter space by accepting large transitions with high probability.
⊖ High sensitivity to hand-tuned parameters, namely the step size ε of the discretisation scheme, the number of steps L of the integrator, and the covariance matrix M.
The No-U-Turn Sampler (NUTS, [2])
Idea: a version of the HMC sampler that eliminates the need to specify the number of steps L, by adaptively choosing the locally largest value at each iteration of the algorithm.
How? Doubling the leapfrog path, either forward or backward with equal probability, until the backward and forward end points of the path, (θ⁻, v⁻) and (θ⁺, v⁺), satisfy
$$(\theta^+ - \theta^-) \cdot M^{-1} v^- < 0 \quad \text{or} \quad (\theta^+ - \theta^-) \cdot M^{-1} v^+ < 0.$$
[Diagram: trajectory doubling from (θ, v) out to the end points (θ⁻, v⁻) and (θ⁺, v⁺).]
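The stopping rule itself is a one-liner; a sketch, with `M_inv` the inverse mass matrix as before:

```python
def u_turn(theta_minus, theta_plus, v_minus, v_plus, M_inv):
    """True when the end points of the doubled path start approaching each other."""
    d = theta_plus - theta_minus
    return d @ M_inv @ v_minus < 0 or d @ M_inv @ v_plus < 0
```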
Proposal: sampling along the generated trajectory
• slice sampling [2],
• multinomial sampling ([1], the version implemented in Stan).
What about ε? Tuned via primal-dual averaging [4], by aiming at a targeted acceptance probability δ_0 ∈ (0, 1).
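For concreteness, a sketch of the dual-averaging update of [4] as adapted in [2]; the constants gamma, t0 and kappa below are the defaults suggested there, not values fixed by this poster.

```python
class DualAveraging:
    """Primal-dual averaging of the log step size towards a target
    acceptance probability delta0, following [2, 4]."""
    def __init__(self, eps0, delta0=0.8, gamma=0.05, t0=10.0, kappa=0.75):
        self.mu = np.log(10.0 * eps0)     # shrinkage point for log(eps)
        self.delta0, self.gamma = delta0, gamma
        self.t0, self.kappa = t0, kappa
        self.h_bar, self.log_eps_bar, self.t = 0.0, 0.0, 0

    def update(self, accept_prob):
        """Feed the acceptance probability of the last transition;
        returns the step size to use at the next iteration."""
        self.t += 1
        eta = 1.0 / (self.t + self.t0)
        self.h_bar = (1.0 - eta) * self.h_bar + eta * (self.delta0 - accept_prob)
        log_eps = self.mu - np.sqrt(self.t) / self.gamma * self.h_bar
        w = self.t ** (-self.kappa)
        self.log_eps_bar = w * log_eps + (1.0 - w) * self.log_eps_bar
        return np.exp(log_eps)            # after warm-up, freeze exp(log_eps_bar)
```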
Numerical experiment: Susceptible-Infected-Recovered Model (SIR)
SIR: model used to represent disease transmission, for epidemics like cholera, within a population,
$$\eta_k \sim \text{Poisson}(y_{t_{k-1},1} - y_{t_k,1}) \quad \text{and} \quad \hat{B}_k \sim \mathcal{LN}(y_{t_k,4}, 0.15^2), \quad \text{where } k = 1, \ldots, N_t = 20,\ t_k = 7k.$$
The dynamics of y_t ∈ R^4 are
$$\frac{\mathrm{d}y_{t,1}}{\mathrm{d}t} = -\frac{\beta y_{t,4}}{y_{t,4} + \kappa_0}\, y_{t,1}, \qquad \frac{\mathrm{d}y_{t,2}}{\mathrm{d}t} = \frac{\beta y_{t,4}}{y_{t,4} + \kappa_0}\, y_{t,1} - \gamma y_{t,2}, \qquad \frac{\mathrm{d}y_{t,3}}{\mathrm{d}t} = \gamma y_{t,2}, \qquad \frac{\mathrm{d}y_{t,4}}{\mathrm{d}t} = \xi y_{t,2} - \varphi y_{t,4}.$$
Interpretation:
• y_{t,1}, y_{t,2}, y_{t,3}: numbers of susceptible, infected, and recovered people within the community.
• y_{t,4}: concentration of the virus in the water reservoir.
• η_k: size of the population that becomes infected during [t_{k-1}, t_k].
Assumptions:
• the size of the population y_{t,1} + y_{t,2} + y_{t,3} is constant,
• β ∼ C(0, 2.5), γ ∼ C(0, 1), ξ ∼ C(0, 25), φ ∼ C(0, 1).
Observed dataset: https://github.com/stan-dev/stat_comp_benchmarks/tree/master/benchmarks/sir.
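To make the dynamics concrete, a sketch integrating the four ODEs with SciPy and reading off the states at the observation times t_k = 7k; the parameter values and initial state below are hypothetical, chosen only for illustration.

```python
import numpy as np
from scipy.integrate import solve_ivp

def sir_rhs(t, y, beta, gamma, xi, phi, kappa0):
    """Right-hand side of the SIR dynamics; y = (susceptible, infected,
    recovered, virus concentration in the reservoir)."""
    s, i, r, b = y
    infection = beta * b / (b + kappa0) * s
    return [-infection, infection - gamma * i, gamma * i, xi * i - phi * b]

# Hypothetical parameters and initial state, for illustration only.
params = (0.5, 0.25, 10.0, 0.5, 1e5)      # beta, gamma, xi, phi, kappa0
sol = solve_ivp(sir_rhs, (0.0, 140.0), [10000.0, 50.0, 0.0, 1.0],
                args=params, t_eval=[7.0 * k for k in range(1, 21)])
y_tk = sol.y.T                            # states at t_k = 7k, k = 1, ..., 20
```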
[Figure: ESJD per gradient evaluation for NUTS and eHMC, plotted against the targeted acceptance probability δ ∈ [0.6, 0.9]; y-axis ranges over 0.00–0.02.]
[Figure: ESS per gradient evaluation for each parameter (β, γ, ξ, φ), comparing NUTS and eHMC against the targeted acceptance probability δ ∈ [0.6, 0.9]; y-axis ranges over 0.005–0.025.]
Empirical HMC (eHMC, [5])
Longest batch associated with (θ, v, ε):
$$L_\epsilon(\theta, v) = \inf\left\{\ell \in \mathbb{N} : (\theta^{(\ell)} - \theta) \cdot M^{-1} v^{(\ell)} < 0\right\},$$
where (θ^{(ℓ)}, v^{(ℓ)}) is the value of the pair after ℓ iterations of the leapfrog integrator.
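A sketch of the longest-batch computation, reusing the leapfrog function above with M = I; the cap `max_steps` is a practical safeguard of ours, not part of the definition.

```python
def longest_batch(theta, v, grad_U, eps, max_steps=1000):
    """Leapfrog steps until (theta_ell - theta_0) · v_ell first turns negative."""
    theta0 = theta.copy()
    for ell in range(1, max_steps + 1):
        theta, v = leapfrog(theta, v, grad_U, eps, L=1)
        if (theta - theta0) @ v < 0:
            return ell
    return max_steps                      # cap reached; safeguard only
```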
Learning leapfrog scale: tuning phase with the optimised step size ε and an initial number of leapfrog steps L_0. At each iteration, one
1. iterates L_0 leapfrog steps to generate the next state of the Markov chain,
2. computes the longest batch for the current state of the chain.
Output: empirical distribution of the longest batches
$$\hat{P}_L = \frac{1}{K} \sum_{k=0}^{K-1} \delta_{L_\epsilon(\theta_k, v^{(k+1)})}.$$
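A sketch of this tuning phase, built on the hmc_step and longest_batch functions above; here a fresh momentum draw stands in for v^{(k+1)}.

```python
def learn_leapfrog_scale(theta0, U, grad_U, eps, L0, K, rng):
    """Run K iterations of HMC with a fixed number of steps L0 and record,
    for each visited state, the longest batch from that state."""
    theta, batches = theta0, []
    for _ in range(K):
        theta = hmc_step(theta, U, grad_U, eps, L0, rng)
        v = rng.standard_normal(len(theta))      # stands in for v^(k+1)
        batches.append(longest_batch(theta, v, grad_U, eps))
    return np.array(batches)                     # sample from P_hat_L
```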
eHMC: randomly pick a number of leapfrog steps according to the empirical distribution P̂_L at each iteration of the HMC algorithm.
⇒ valid: the resulting transition kernel can be seen as a composition of multiple Markov kernels attached to the same stationary distribution π.
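Finally, a sketch of the eHMC sampler itself: each transition draws its leapfrog length from the empirical distribution, independently of the current state, so every kernel in the mixture leaves π invariant.

```python
def ehmc(theta0, U, grad_U, eps, batches, n_iter, rng):
    """eHMC: HMC whose number of leapfrog steps is resampled from the
    empirical longest-batch distribution at every iteration."""
    theta, chain = theta0, []
    for _ in range(n_iter):
        L = int(rng.choice(batches))             # L ~ P_hat_L
        theta = hmc_step(theta, U, grad_U, eps, L, rng)
        chain.append(theta)
    return np.array(chain)
```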
References
[1] M. Betancourt. A conceptual introduction to Hamiltonian Monte Carlo. arXiv preprint arXiv:1701.02434, 2017.
[2] M. D. Hoffman and A. Gelman. The No-U-Turn Sampler: adaptively setting path lengths in Hamiltonian Monte Carlo. Journal of Machine Learning Research, 15(1):1593–1623, 2014.
[3] R. M. Neal. MCMC using Hamiltonian dynamics. Handbook of Markov Chain Monte Carlo, Chapter 5, Chapman & Hall / CRC Press, 2011.
[4] Y. Nesterov. Primal-dual subgradient methods for convex problems. Mathematical Programming, 120(1):221–259, 2009.
[5] C. Wu, J. Stoehr and C. P. Robert. Faster Hamiltonian Monte Carlo by Learning Leapfrog Scale. arXiv preprint arXiv:1810.04449v1, 2018.
