Markov chain Monte Carlo methods and some
attempts at parallelizing them
Pierre E. Jacob
Department of Statistics, Harvard University
(and many fantastic collaborators!)
MIT IDS.190, October 2019
blog: https://statisfaction.wordpress.com
Setting
Continuous or discrete space of dimension d.
Target probability distribution π,
with probability density/mass function x → π(x).
Goal: approximate π, e.g.
Eπ[h(X)] = ∫ h(x)π(x) dx = π(h),
for a class of “test” functions h.
Monte Carlo
Originates from physics, and still very much a research
topic in physics e.g.
K. Binder et al, Monte Carlo methods in statistical physics, 2012.
Often state-of-the-art for numerical integration e.g.
E. Novak, Some results on the complexity of numerical
integration, 2016.
Plays an important role in Bayesian inference e.g.
P. Green et al, Bayesian computation: a summary of the current
state, and samples backwards and forwards, 2015.
Can be useful for many other tasks in statistics e.g.
J. Besag, MCMC for Statistical Inference, 2001.
See also P. Diaconis, The MCMC revolution, 2009.
Outline
1 Monte Carlo and bias
2 Sequential Monte Carlo samplers
3 Regeneration
4 Unbiased estimators from coupled Markov chains
5 Bonus: new convergence diagnostics for MCMC
Outline
1 Monte Carlo and bias
2 Sequential Monte Carlo samplers
3 Regeneration
4 Unbiased estimators from coupled Markov chains
5 Bonus: new convergence diagnostics for MCMC
Markov chain Monte Carlo
Initially, X0 ∼ π0, then Xt|Xt−1 ∼ P(Xt−1, ·) for t = 1, . . . , T.
Estimator:
(1/(T − b)) Σ_{t=b+1}^{T} h(Xt),
where b iterations are discarded as burn-in.
Might converge to Eπ[h(X)] as T → ∞ by the ergodic theorem.
Biased for any fixed b, T, unless π0 is equal to π.
Averaging independent copies of such estimators for fixed b, T
would not provide a consistent estimator of Eπ[h(X)]
as the number of independent copies goes to infinity.
Example: Metropolis–Hastings kernel P
With Markov chain at state Xt,
1 propose X′ ∼ q(Xt, ·),
2 sample U ∼ Uniform(0, 1),
3 if
U ≤ [π(X′) q(X′, Xt)] / [π(Xt) q(Xt, X′)],
set Xt+1 = X′, otherwise set Xt+1 = Xt.
Hastings, Monte Carlo sampling methods using Markov chains and
their applications, 1970.
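As a concrete illustration of the kernel above (not part of the slides), here is a minimal random-walk Metropolis–Hastings step in Python; the standard Normal target, the proposal standard deviation, the burn-in value and the function names are assumptions chosen to echo the toy example on the next slides.

```python
import numpy as np

def log_target(x):
    # log pi(x) up to a constant, here pi = N(0, 1)
    return -0.5 * x**2

def rwmh_step(x, rng, proposal_std=0.5):
    # one random-walk Metropolis-Hastings transition from state x
    x_prop = x + proposal_std * rng.normal()        # propose X' ~ q(x, .)
    log_ratio = log_target(x_prop) - log_target(x)  # symmetric proposal: q terms cancel
    if np.log(rng.uniform()) <= log_ratio:          # accept with probability min(1, ratio)
        return x_prop
    return x

rng = np.random.default_rng(1)
x = rng.normal(10.0, 3.0)                           # X_0 ~ pi_0 = N(10, 3^2)
chain = np.empty(1000)
for t in range(1000):
    x = rwmh_step(x, rng)
    chain[t] = x
print(chain[200:].mean())                           # burn-in b = 200, ergodic average
```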
MCMC trace
π = N(0, 1), RWMH with Normal proposal std = 0.5, π0 = N(10, 3²)
MCMC marginal distributions
π = N(0, 1), RWMH with Normal proposal std = 0.5, π0 = N(10, 3²)
Independent replicates and MCMC
The bias is the difference |E[h(Xt)] − Eπ[h(X)]| for fixed t.
The bias has always been recognized as an obstacle to
parallelizing Monte Carlo calculations, e.g.
When running parallel Monte Carlo with many computers,
it is more important to start with an unbiased (or
low-bias) estimate than with a low-variance estimate.
Rosenthal, Parallel computing and Monte Carlo algorithms, 2000.
For general statistical estimators, mean squared error is often the
preferred measure of accuracy.
In Monte Carlo, variance can be both quantified and arbitrarily
reduced with independent runs, but neither is true for the bias.
Outline
1 Monte Carlo and bias
2 Sequential Monte Carlo samplers
3 Regeneration
4 Unbiased estimators from coupled Markov chains
5 Bonus: new convergence diagnostics for MCMC
Importance sampling
Importance sampling relies on a proposal distribution q, chosen
by user to be an approximation of π.
1 Sample X1:N ∼ q, independently.
2 Weight w(Xn) = π(Xn)/q(Xn).
3 Normalize weights to obtain W1:N .
The procedure yields
π̂N(·) = Σ_{n=1}^{N} Wn δXn(·),
which approximates π as N → ∞ under conditions on q and π.
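A minimal self-normalized importance sampling sketch in Python, assuming π = N(0, 1) and a proposal q = N(0, 2²); both choices are arbitrary illustrations, not from the slides.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
N = 10_000
x = rng.normal(0.0, 2.0, size=N)                    # X_{1:N} ~ q = N(0, 2^2), independently
logw = norm.logpdf(x, 0, 1) - norm.logpdf(x, 0, 2)  # log w(X_n) = log pi(X_n) - log q(X_n)
W = np.exp(logw - logw.max())
W /= W.sum()                                        # normalized weights W_{1:N}
print(np.sum(W * x))                                # self-normalized estimate of E_pi[X], close to 0
```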
Importance sampling with MCMC proposals
Finding a proposal q that approximates π might be difficult.
Can we use MCMC as an importance sampling proposal?
Something that would look like:
1 Sample X1:N by running N chains for T steps.
2 Weight w(Xn) somehow (?).
3 Normalize weights to obtain W1:N .
An immediate difficulty is that the marginal distributions of
MCMC chains are generally intractable, so importance weights
seem hard to compute.
Annealed importance sampling
For instance, sample X0 ∼ π0 and X1|X0 ∼ P(X0, ·).
Problem: marginal distribution of X1 is intractable.
Introduce backward kernel L(x1, x0) = P(x0, x1)π(x0)/π(x1).
Then consider
the proposal distribution q̄(x0, x1) = π0(x0)P(x0, x1),
the target distribution π̄(x0, x1) = π(x1)L(x1, x0).
Writing down the importance sampling procedure leads to
tractable weights ∝ π(x0)/π0(x0),
desired marginal distribution: ∫ π̄(x0, x1) dx0 = π(x1).
Neal, Annealed importance sampling, 2001.
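A sketch of this two-step construction in Python, with π0 = N(10, 3²), π = N(0, 1) and a single random-walk MH move as the kernel P; these choices and the function names are assumptions for illustration, and a full AIS run would use many intermediate distributions rather than one move.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

def mh_step(x, logpi, rng, std=0.5):
    # MCMC kernel P targeting pi (symmetric random-walk proposal)
    xp = x + std * rng.normal()
    if np.log(rng.uniform()) <= logpi(xp) - logpi(x):
        return xp
    return x

logpi0 = lambda v: norm.logpdf(v, 10, 3)   # pi_0 = N(10, 3^2)
logpi  = lambda v: norm.logpdf(v, 0, 1)    # pi   = N(0, 1)

N = 5_000
x0 = rng.normal(10.0, 3.0, size=N)                    # X_0 ~ pi_0
x1 = np.array([mh_step(v, logpi, rng) for v in x0])   # X_1 | X_0 ~ P(X_0, .)
logw = logpi(x0) - logpi0(x0)                         # tractable weights, proportional to pi(x_0)/pi_0(x_0)
W = np.exp(logw - logw.max())
W /= W.sum()
print(np.sum(W * x1))                                 # weighted estimate of E_pi[X] (very noisy with one move)
```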
Sequential Monte Carlo samplers
Del Moral, Doucet & Jasra, SMC samplers, 2006.
AIS and SMC samplers work by introducing a sequence of target
distributions πt, for t = 0, . . . , T, and a sequence of MCMC
kernels Pt targeting πt.
Then N chains start from π0 and
move through the specified Markov kernels,
are weighted using ratios of successive target distributions,
are resampled according to weights (in SMC samplers).
At final step T, weighted samples approximate π.
The resampling steps induce interaction between the chains,
which possibly means communication between machines.
Whiteley, Lee & Heine, On the role of interaction in sequential Monte
Carlo algorithms, 2016.
Sequential Monte Carlo samplers
π = N(0, 1), adaptive SMC sampler with MH moves, π0 = N(10, 3²)
Outline
1 Monte Carlo and bias
2 Sequential Monte Carlo samplers
3 Regeneration
4 Unbiased estimators from coupled Markov chains
5 Bonus: new convergence diagnostics for MCMC
Regeneration in Markov chain samplers
Mykland, Tierney & Yu, Regeneration in Markov chain samplers, 1995.
[Figure: MCMC trace, x against iteration.]
We might be able to identify regeneration times (Tn)_{n≥1}
such that the tours (X_{T_{n−1}}, . . . , X_{T_n−1}) are i.i.d.
and such that
Σ_{n=1}^{N} Σ_{t=T_{n−1}}^{T_n−1} h(Xt) / Σ_{n=1}^{N} (Tn − T_{n−1}) → Eπ[h(X)] almost surely as N → ∞
. . . but it might be difficult to identify these times.
Brockwell and Kadane’s regeneration technique
Design a new chain such that regeneration is easier to identify.
State space E ∪ {α}, Markov kernel P̃ on E ∪ {α} that targets π̃,
such that π̃ is equal to π on E.
Set π̃(α) (to be chosen), and design a “re-entry” proposal φ on E.
If Xt = α, propose X′ ∼ φ on E, acceptance probability
min(1, π(X′)/(π̃(α)φ(X′))),
if Xt ∈ E, propose a move to α, acceptance probability
min(1, π̃(α)φ(Xt)/π(Xt)).
Perform these moves with probability ω, otherwise sample
Xt+1 ∼ P(Xt, ·) if Xt ∈ E, and set Xt+1 = α if Xt = α.
With the new chain, every re-entry in E is a regeneration.
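A minimal sketch of this hybrid kernel in Python, using the Normal example and the values π̃(α) = 1, φ = N(2, 1), ω = 0.1 from the next slide; the representation of the atom α and the function names are assumptions, not the authors' code.

```python
import numpy as np
from scipy.stats import norm

ALPHA = None                                            # artificial atom alpha, outside E

def hybrid_step(x, logpi, rng, pi_alpha=1.0, phi_mean=2.0, phi_std=1.0, omega=0.1):
    # one step of the augmented kernel on E ∪ {alpha}; every re-entry into E is a regeneration
    if rng.uniform() < omega:
        if x is ALPHA:
            xp = rng.normal(phi_mean, phi_std)          # propose X' ~ phi on E
            log_acc = logpi(xp) - (np.log(pi_alpha) + norm.logpdf(xp, phi_mean, phi_std))
            return xp if np.log(rng.uniform()) <= log_acc else ALPHA
        log_acc = np.log(pi_alpha) + norm.logpdf(x, phi_mean, phi_std) - logpi(x)
        return ALPHA if np.log(rng.uniform()) <= log_acc else x
    if x is ALPHA:
        return ALPHA
    xp = x + 0.5 * rng.normal()                         # otherwise one step of P (random-walk MH)
    return xp if np.log(rng.uniform()) <= logpi(xp) - logpi(x) else x

rng = np.random.default_rng(0)
logpi = lambda v: norm.logpdf(v, 0, 1)                  # pi = N(0, 1)
x = rng.normal(10, 3)
states = []
for _ in range(200):
    x = hybrid_step(x, logpi, rng)
    states.append(x)
regenerations = sum(1 for a, b in zip(states, states[1:]) if a is ALPHA and b is not ALPHA)
print("regenerations observed:", regenerations)
```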
Illustration of regeneration technique
π = N(0, 1), MH with Normal proposal std = 0.5, π0 = N(10, 3²).
Set π̃(α) = 1, φ = N(2, 1), ω = 0.1.
[Figure: MCMC trace, x against iteration.]
Brockwell & Kadane, Identification of regeneration times in MCMC
simulation, with application to adaptive schemes, 2005.
See also Nummelin, MC’s for MCMC’ists, 2002.
Outline
1 Monte Carlo and bias
2 Sequential Monte Carlo samplers
3 Regeneration
4 Unbiased estimators from coupled Markov chains
5 Bonus: new convergence diagnostics for MCMC
Coupled chains
Glynn & Rhee, Exact estimation for MC equilibrium expectations, 2014.
Generate two chains (Xt) and (Yt) as follows,
sample X0 and Y0 from π0 (independently, or not),
sample X1|X0 ∼ P(X0, ·),
for t ≥ 1, sample (Xt+1, Yt)|(Xt, Yt−1) ∼ P̄((Xt, Yt−1), ·).
P̄ must be such that
Xt+1|Xt ∼ P(Xt, ·) and Yt|Yt−1 ∼ P(Yt−1, ·)
(thus Xt and Yt have the same distribution for all t ≥ 0),
there exists a random time τ such that Xt = Yt−1 for t ≥ τ
(the chains meet and remain “faithful”).
Metropolis on Normal target: coupled paths
[Figure: coupled MCMC traces, x against iteration.]
π = N(0, 1), RWMH with Normal proposal std = 0.5, π0 = N(10, 3²)
Metropolis on Normal target: coupled paths
[Figure: coupled MCMC traces, x against iteration.]
π = N(0, 1), RWMH with Normal proposal std = 0.5, π0 = N(10, 3²)
Debiasing idea (one slide version)
Limit as a telescopic sum, for all k ≥ 0,
Eπ[h(X)] = lim_{t→∞} E[h(Xt)] = E[h(Xk)] + Σ_{t=k+1}^{∞} E[h(Xt) − h(Xt−1)].
Since for all t ≥ 0, Xt and Yt have the same distribution,
= E[h(Xk)] + Σ_{t=k+1}^{∞} E[h(Xt) − h(Yt−1)].
If we can swap expectation and limit,
= E[h(Xk) + Σ_{t=k+1}^{∞} (h(Xt) − h(Yt−1))].
The random variable in the above expectation is unbiased for Eπ[h(X)].
Unbiased estimators
Unbiased estimator, for any user-chosen k, is given by
Hk(X, Y ) = h(Xk) + Σ_{t=k+1}^{τ−1} (h(Xt) − h(Yt−1)),
with the convention Σ_{t=k+1}^{τ−1}{·} = 0 if τ − 1 < k + 1.
h(Xk) alone is biased; the other terms correct for the bias.
Cost: τ − 1 calls to P̄ and 1 + max(0, k − τ) calls to P.
Glynn & Rhee, Exact estimation for Markov chain equilibrium expectations,
2014. Also Agapiou, Roberts & Vollmer, Unbiased Monte Carlo: Posterior
estimation for intractable/infinite-dimensional models, 2018.
Note: same reasoning would work with arbitrary lags L ≥ 1.
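A direct transcription of Hk(X, Y) in Python, assuming the coupled trajectories and the meeting time τ have already been generated (for instance with a coupled kernel like the one sketched later) and stored as arrays; the function and argument names are hypothetical.

```python
def H_k(h, xs, ys, tau, k):
    """Unbiased estimator H_k(X, Y) from coupled chains.

    xs[t] = X_t, ys[t] = Y_t, tau = meeting time (X_t = Y_{t-1} for all t >= tau).
    """
    est = h(xs[k])
    for t in range(k + 1, tau):          # empty sum if tau - 1 < k + 1
        est += h(xs[t]) - h(ys[t - 1])   # bias correction terms
    return est
```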
Conditions
Jacob, O’Leary, Atchadé, Unbiased MCMC with couplings, 2019.
1 Marginal chain converges:
E[h(Xt)] → Eπ[h(X)],
and h(Xt) has finite (2 + η)-th moments for all t, for some η > 0.
2 Meeting time τ has geometric tails:
∃C < +∞ ∃δ ∈ (0, 1) ∀t ≥ 0 P(τ > t) ≤ Cδ^t.
3 Chains stay together: Xt = Yt−1 for all t ≥ τ.
Condition 2 is itself implied by e.g. a geometric drift condition.
Under these conditions, Hk(X, Y ) is unbiased, has finite
expected cost and finite variance, for all k.
Metropolis on Normal target: meeting times
[Figure: histogram of meeting times.]
π = N(0, 1), RWMH with Normal proposal std = 0.5, π0 = N(10, 3²)
Metropolis on Normal target: estimators of Eπ[X]
[Figure: histogram of estimators of Eπ[X], k = 0.]
E[2τ] ≈ 96, V[H0(X, Y )] ≈ 65,000.
Asymptotic inefficiency
Final estimator: average of R independent estimators.
In a given computing time,
more estimators can be produced if each estimator is cheaper.
An appropriate measure of performance is
[expected cost] × [variance],
called the asymptotic inefficiency.
Glynn & Whitt, Asymptotic efficiency of simulation estimators, 1992.
Glynn & Heidelberger, Bias properties of budget constrained
simulations, 1990.
Metropolis on Normal target: estimators of Eπ[X]
[Figure: histogram of estimators of Eπ[X], k = 100.]
E[max(k + τ, 2τ)] ≈ 148, V[Hk(X, Y )] ≈ 100.
Metropolis on Normal target: estimators of Eπ[X]
[Figure: histogram of estimators of Eπ[X], k = 200.]
E[max(k + τ, 2τ)] ≈ 248, V[Hk(X, Y )] ≈ 1.
Time-averaged unbiased estimators
Efficiency matters, so in practice we recommend a variation
of the previous estimator, defined for integers k ≤ m as
Hk:m(X, Y ) = (1/(m − k + 1)) Σ_{t=k}^{m} Ht(X, Y ),
which can also be written
(1/(m − k + 1)) Σ_{t=k}^{m} h(Xt) + Σ_{t=k+1}^{τ−1} min(1, (t − k)/(m − k + 1)) (h(Xt) − h(Yt−1)),
i.e. standard MCMC average + bias correction term.
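A sketch of Hk:m(X, Y) in Python, written from the second expression above (MCMC average plus bias correction); as before, the stored trajectories, the meeting time and the names are assumptions.

```python
import numpy as np

def H_k_m(h, xs, ys, tau, k, m):
    # time-averaged unbiased estimator H_{k:m}(X, Y)
    mcmc_avg = np.mean([h(xs[t]) for t in range(k, m + 1)])   # standard MCMC average
    correction = 0.0
    for t in range(k + 1, tau):                               # bias correction term
        correction += min(1.0, (t - k) / (m - k + 1)) * (h(xs[t]) - h(ys[t - 1]))
    return mcmc_avg + correction
```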
Metropolis on Normal target: time-averaged estimators
[Figure: histogram of time-averaged estimators of Eπ[X], k = 200, m = 1000.]
E[max(m + τ, 2τ)] ≈ 1048, V[Hk:m(X, Y )] ≈ 0.028.
How to design appropriate coupled chains?
To implement the proposed unbiased estimators,
we need to sample from a Markov kernel P̄,
such that, when (Xt+1, Yt) is sampled from P̄((Xt, Yt−1), ·),
marginally Xt+1|Xt ∼ P(Xt, ·), and Yt|Yt−1 ∼ P(Yt−1, ·),
it is possible that Xt+1 = Yt exactly for some t ≥ 0,
if Xt = Yt−1, then Xt+1 = Yt almost surely.
Couplings of MCMC algorithms
We can find many couplings in the literature. . .
Propp & Wilson, Exact sampling with coupled Markov chains
and applications to statistical mechanics, Random Structures &
Algorithms, 1996.
Johnson, Studying convergence of Markov chain Monte Carlo
algorithms using coupled sample paths, JASA, 1996.
Neal, Circularly-coupled Markov chain sampling, UoT tech
report, 1999.
Pinto & Neal, Improving Markov chain Monte Carlo estimators
by coupling to an approximating chain, UoT tech report, 2001.
Glynn & Rhee, Exact estimation for Markov chain equilibrium
expectations, Journal of Applied Probability, 2014.
Couplings of MCMC algorithms
Conditional particle filters
Jacob, Lindsten, Schön, Smoothing with Couplings of
Conditional Particle Filters, 2019.
Metropolis–Hastings, Gibbs samplers, parallel tempering
Jacob, O’Leary, Atchadé, Unbiased MCMC with couplings, 2019.
Hamiltonian Monte Carlo
Heng & Jacob, Unbiased HMC with couplings, 2019.
Pseudo-marginal MCMC, exchange algorithm
Middleton, Deligiannidis, Doucet, Jacob, Unbiased MCMC for
intractable target distributions, 2018.
Particle independent Metropolis–Hastings
Middleton, Deligiannidis, Doucet, Jacob, Unbiased Smoothing
using Particle Independent Metropolis-Hastings, 2019.
Maximal couplings
(X, Y ) follows a coupling of p and q if X ∼ p and Y ∼ q.
The coupling inequality states that
P(X = Y ) ≤ 1 − ‖p − q‖TV,
for any coupling, with ‖p − q‖TV = (1/2) ∫ |p(x) − q(x)| dx.
Maximal couplings achieve the bound.
Maximal coupling of Gamma and Normal
Maximal coupling: algorithm
Requires: evaluations of p and q, sampling from p and q.
1 Sample X ∼ p and W ∼ Uniform(0, 1).
If W ≤ q(X)/p(X), set Y = X, output (X, Y ).
2 Otherwise, sample Y′ ∼ q and W′ ∼ Uniform(0, 1)
until W′ > p(Y′)/q(Y′), set Y = Y′ and output (X, Y ).
Output: a pair (X, Y ) such that X ∼ p, Y ∼ q
and P(X = Y ) is maximal.
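The algorithm above, transcribed into Python for two continuous distributions; the sampler/log-density interfaces and the Gamma/Normal example (echoing the earlier figure, with parameters chosen arbitrarily) are assumptions.

```python
import numpy as np
from scipy.stats import gamma, norm

def maximal_coupling(sample_p, logp, sample_q, logq, rng):
    # returns (X, Y) with X ~ p, Y ~ q and P(X = Y) maximal
    x = sample_p(rng)
    if np.log(rng.uniform()) <= logq(x) - logp(x):   # step 1: W <= q(X)/p(X), set Y = X
        return x, x
    while True:                                      # step 2: rejection sampler for Y'
        y = sample_q(rng)
        if np.log(rng.uniform()) > logp(y) - logq(y):
            return x, y

rng = np.random.default_rng(0)
# example: p = Gamma(shape 2), q = N(1, 1), arbitrary choices for illustration
x, y = maximal_coupling(lambda r: r.gamma(2.0), lambda v: gamma.logpdf(v, 2.0),
                        lambda r: r.normal(1.0, 1.0), lambda v: norm.logpdf(v, 1.0, 1.0),
                        rng)
print(x, y, x == y)
```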
Back to Metropolis–Hastings (kernel P)
At each iteration t, Markov chain at state Xt,
1 propose X′ ∼ q(Xt, ·),
2 sample U ∼ Uniform(0, 1),
3 if
U ≤ [π(X′) q(X′, Xt)] / [π(Xt) q(Xt, X′)],
set Xt+1 = X′, otherwise set Xt+1 = Xt.
How to propagate two MH chains from states Xt and Yt−1
such that {Xt+1 = Yt} can happen?
Coupling of Metropolis–Hastings (kernel P̄)
At each iteration t, two Markov chains at states Xt, Yt−1,
1 propose (X′, Y′) from a maximal coupling of q(Xt, ·) and q(Yt−1, ·),
2 sample U ∼ Uniform(0, 1),
3 if
U ≤ [π(X′) q(X′, Xt)] / [π(Xt) q(Xt, X′)],
set Xt+1 = X′, otherwise set Xt+1 = Xt,
if
U ≤ [π(Y′) q(Y′, Yt−1)] / [π(Yt−1) q(Yt−1, Y′)],
set Yt = Y′, otherwise set Yt = Yt−1.
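A sketch of this coupled kernel for random-walk proposals in Python, reusing a maximal coupling of the two Normal proposals and a common uniform U; the target, step size and initialization mimic the toy example of the previous slides, and the function names are assumptions. Note the lag-1 structure: X is advanced one extra step before the coupled iterations.

```python
import numpy as np
from scipy.stats import norm

def max_coupling_normals(mu1, mu2, std, rng):
    # maximal coupling of N(mu1, std^2) and N(mu2, std^2)
    x = rng.normal(mu1, std)
    if np.log(rng.uniform()) <= norm.logpdf(x, mu2, std) - norm.logpdf(x, mu1, std):
        return x, x
    while True:
        y = rng.normal(mu2, std)
        if np.log(rng.uniform()) > norm.logpdf(y, mu1, std) - norm.logpdf(y, mu2, std):
            return x, y

def coupled_rwmh_step(x, y, logpi, rng, std=0.5):
    # one draw from the coupled kernel: coupled proposals, common uniform for both acceptances
    xp, yp = max_coupling_normals(x, y, std, rng)
    u = rng.uniform()
    x_new = xp if np.log(u) <= logpi(xp) - logpi(x) else x
    y_new = yp if np.log(u) <= logpi(yp) - logpi(y) else y
    return x_new, y_new

rng = np.random.default_rng(3)
logpi = lambda v: -0.5 * v**2                       # pi = N(0, 1), up to a constant
x, y = rng.normal(10, 3), rng.normal(10, 3)         # X_0, Y_0 ~ pi_0 = N(10, 3^2)
x, _ = coupled_rwmh_step(x, x, logpi, rng)          # advance X by one step: X_1 | X_0 ~ P(X_0, .)
t = 1
while x != y:                                       # tau = inf{t : X_t = Y_{t-1}}
    x, y = coupled_rwmh_step(x, y, logpi, rng)
    t += 1
print("meeting time:", t)
```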
Scaling with dimension (not doing so well)
With a naive maximal coupling of proposals. . .
[Figure: average meeting time against dimension (1 to 5); initialization: target offset.]
Scaling with dimension (much better)
With “reflection-maximal” couplings of proposals. . .
[Figure: average meeting time against dimension (1 to 50); initialization: target offset.]
Hamiltonian Monte Carlo
Introduce potential energy U(q) = − log π(q),
and total energy E(q, p) = U(q) + ½|p|².
Hamiltonian dynamics for (q(s), p(s)), where s ≥ 0:
(d/ds) q(s) = ∇p E(q(s), p(s)),
(d/ds) p(s) = −∇q E(q(s), p(s)).
Solving Hamiltonian dynamics exactly is not feasible,
but discretization + Metropolis–Hastings correction ensure that
π remains invariant.
Common random numbers can make two HMC chains contract,
under assumptions on the target such as strong log-concavity.
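A sketch of two HMC chains driven by common random numbers (same momentum and same uniform at every step), with leapfrog discretization and MH correction; the Gaussian target, step size and trajectory length are assumptions. This only illustrates contraction; the unbiased-HMC construction of Heng & Jacob additionally mixes in coupled MH steps so that the chains meet exactly.

```python
import numpy as np

def leapfrog(q, p, grad_U, eps, n_steps):
    # leapfrog integration of the Hamiltonian dynamics
    p = p - 0.5 * eps * grad_U(q)
    for _ in range(n_steps - 1):
        q = q + eps * p
        p = p - eps * grad_U(q)
    q = q + eps * p
    p = p - 0.5 * eps * grad_U(q)
    return q, p

def coupled_hmc_step(qx, qy, U, grad_U, rng, eps=0.1, n_steps=10):
    # one coupled HMC transition: common momentum draw and common uniform
    p0 = rng.normal(size=qx.shape)
    u = rng.uniform()
    new = []
    for q in (qx, qy):
        q_prop, p_prop = leapfrog(q, p0.copy(), grad_U, eps, n_steps)
        logA = (U(q) + 0.5 * p0 @ p0) - (U(q_prop) + 0.5 * p_prop @ p_prop)
        new.append(q_prop if np.log(u) <= logA else q)
    return new[0], new[1]

# assumed example: standard Normal target in d dimensions
d = 10
U = lambda q: 0.5 * q @ q            # U(q) = -log pi(q) up to a constant
grad_U = lambda q: q
rng = np.random.default_rng(0)
qx, qy = rng.normal(size=d) + 5.0, rng.normal(size=d) + 5.0
for _ in range(100):
    qx, qy = coupled_hmc_step(qx, qy, U, grad_U, rng)
print(np.linalg.norm(qx - qy))       # distance contracts under common momenta
```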
Coupling of Hamiltonian Monte Carlo
Mangoubi & Smith, Rapid mixing of HMC on strongly
log-concave distributions, 2017.
Bou-Rabee, Eberle & Zimmer, Coupling and Convergence for
Hamiltonian Monte Carlo, 2018.
Heng & Jacob, Unbiased HMC with couplings, 2019.
Coupling of Hamiltonian Monte Carlo
Figure 2 of Mangoubi & Smith, Rapid mixing of HMC on strongly
log-concave distributions, 2017:
“Coupling two copies X1, X2, . . . (blue) and Y1, Y2, . . .
(green) of HMC by choosing the same momentum pi at
every step.”
Scaling of Hamiltonian Monte Carlo
[Figure: average meeting time against dimension (10 to 300); initialization: target offset.]
Outline
1 Monte Carlo and bias
2 Sequential Monte Carlo samplers
3 Regeneration
4 Unbiased estimators from coupled Markov chains
5 Bonus: new convergence diagnostics for MCMC
Assessing finite-time bias of MCMC
Total variation distance between Xk ∼ πk and π = lim_{k→∞} πk:
‖πk − π‖TV = (1/2) sup_{h:|h|≤1} |E[h(Xk)] − Eπ[h(X)]|
= (1/2) sup_{h:|h|≤1} |E[ Σ_{t=k+1}^{τ−1} (h(Xt) − h(Yt−1)) ]|
≤ E[max(0, τ − k − 1)].
[Figures: histogram of meeting times; resulting upper bound against k (log scale).]
Assessing finite-time bias of MCMC
With L-lag couplings, τ^(L) = inf{t ≥ L : Xt = Yt−L},
‖πk − π‖TV ≤ E[ max(0, ⌈(τ^(L) − L − k)/L⌉) ].
[Figure: dTV upper bounds against iterations (log scale), for SSG and PT.]
Biswas, Jacob & Vanetti, Estimating Convergence of Markov chains
with L-Lag Couplings, 2019.
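A small sketch that turns sampled L-lag meeting times into this upper bound as a function of k; it assumes the meeting times have already been collected from independent coupled runs, and the numbers in the usage line are placeholders, not results.

```python
import numpy as np

def tv_upper_bound(meeting_times, L, ks):
    # Monte Carlo estimate of E[max(0, ceil((tau^(L) - L - k) / L))] for each k
    taus = np.asarray(meeting_times, dtype=float)
    return [float(np.mean(np.maximum(0.0, np.ceil((taus - L - k) / L)))) for k in ks]

# hypothetical usage with placeholder meeting times from lag-1 coupled runs
print(tv_upper_bound([60, 85, 92, 110, 74], L=1, ks=[0, 50, 100, 150]))
```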
Discussion
Perfect samplers, which sample i.i.d. from π, would yield the
same benefits and more. Is any of this helping create
perfect samplers?
If the underlying MCMC “doesn’t work”, the proposed unbiased
estimators will have large cost and/or large variance.
Choice of tuning parameters? Choice of lag? Why couple
only two chains?
Lack of bias is useful beyond parallel computation.
So far we have used Markovian couplings: can we do
better?
Thank you for listening!
Funding provided by the National Science Foundation, grants
DMS-1712872 and DMS-1844695.
More Related Content

What's hot

MCMC and likelihood-free methods
MCMC and likelihood-free methodsMCMC and likelihood-free methods
MCMC and likelihood-free methods
Christian Robert
 
Sampling and Markov Chain Monte Carlo Techniques
Sampling and Markov Chain Monte Carlo TechniquesSampling and Markov Chain Monte Carlo Techniques
Sampling and Markov Chain Monte Carlo Techniques
Tomasz Kusmierczyk
 
Recent developments on unbiased MCMC
Recent developments on unbiased MCMCRecent developments on unbiased MCMC
Recent developments on unbiased MCMC
Pierre Jacob
 
Monte Carlo Statistical Methods
Monte Carlo Statistical MethodsMonte Carlo Statistical Methods
Monte Carlo Statistical Methods
Christian Robert
 
Monte Carlo Statistical Methods
Monte Carlo Statistical MethodsMonte Carlo Statistical Methods
Monte Carlo Statistical Methods
Christian Robert
 
Markov Chain Monte Carlo explained
Markov Chain Monte Carlo explainedMarkov Chain Monte Carlo explained
Markov Chain Monte Carlo explained
dariodigiuni
 
Richard Everitt's slides
Richard Everitt's slidesRichard Everitt's slides
Richard Everitt's slides
Christian Robert
 
Monte Carlo Statistical Methods
Monte Carlo Statistical MethodsMonte Carlo Statistical Methods
Monte Carlo Statistical Methods
Christian Robert
 
QMC Program: Trends and Advances in Monte Carlo Sampling Algorithms Workshop,...
QMC Program: Trends and Advances in Monte Carlo Sampling Algorithms Workshop,...QMC Program: Trends and Advances in Monte Carlo Sampling Algorithms Workshop,...
QMC Program: Trends and Advances in Monte Carlo Sampling Algorithms Workshop,...
The Statistical and Applied Mathematical Sciences Institute
 
Hastings 1970
Hastings 1970Hastings 1970
Hastings 1970
Julyan Arbel
 
Linear response theory and TDDFT
Linear response theory and TDDFT Linear response theory and TDDFT
Linear response theory and TDDFT
Claudio Attaccalite
 
Linear response theory
Linear response theoryLinear response theory
Linear response theory
Claudio Attaccalite
 
Introduction to Diffusion Monte Carlo
Introduction to Diffusion Monte CarloIntroduction to Diffusion Monte Carlo
Introduction to Diffusion Monte Carlo
Claudio Attaccalite
 
BSE and TDDFT at work
BSE and TDDFT at workBSE and TDDFT at work
BSE and TDDFT at work
Claudio Attaccalite
 
Statistical Physics Assignment Help
Statistical Physics Assignment HelpStatistical Physics Assignment Help
Statistical Physics Assignment Help
Statistics Assignment Help
 
Mechanical Engineering Assignment Help
Mechanical Engineering Assignment HelpMechanical Engineering Assignment Help
Mechanical Engineering Assignment Help
Matlab Assignment Experts
 
short course at CIRM, Bayesian Masterclass, October 2018
short course at CIRM, Bayesian Masterclass, October 2018short course at CIRM, Bayesian Masterclass, October 2018
short course at CIRM, Bayesian Masterclass, October 2018
Christian Robert
 
Introduction to Bootstrap and elements of Markov Chains
Introduction to Bootstrap and elements of Markov ChainsIntroduction to Bootstrap and elements of Markov Chains
Introduction to Bootstrap and elements of Markov ChainsUniversity of Salerno
 
Introduction to advanced Monte Carlo methods
Introduction to advanced Monte Carlo methodsIntroduction to advanced Monte Carlo methods
Introduction to advanced Monte Carlo methodsChristian Robert
 
Talk 5
Talk 5Talk 5

What's hot (20)

MCMC and likelihood-free methods
MCMC and likelihood-free methodsMCMC and likelihood-free methods
MCMC and likelihood-free methods
 
Sampling and Markov Chain Monte Carlo Techniques
Sampling and Markov Chain Monte Carlo TechniquesSampling and Markov Chain Monte Carlo Techniques
Sampling and Markov Chain Monte Carlo Techniques
 
Recent developments on unbiased MCMC
Recent developments on unbiased MCMCRecent developments on unbiased MCMC
Recent developments on unbiased MCMC
 
Monte Carlo Statistical Methods
Monte Carlo Statistical MethodsMonte Carlo Statistical Methods
Monte Carlo Statistical Methods
 
Monte Carlo Statistical Methods
Monte Carlo Statistical MethodsMonte Carlo Statistical Methods
Monte Carlo Statistical Methods
 
Markov Chain Monte Carlo explained
Markov Chain Monte Carlo explainedMarkov Chain Monte Carlo explained
Markov Chain Monte Carlo explained
 
Richard Everitt's slides
Richard Everitt's slidesRichard Everitt's slides
Richard Everitt's slides
 
Monte Carlo Statistical Methods
Monte Carlo Statistical MethodsMonte Carlo Statistical Methods
Monte Carlo Statistical Methods
 
QMC Program: Trends and Advances in Monte Carlo Sampling Algorithms Workshop,...
QMC Program: Trends and Advances in Monte Carlo Sampling Algorithms Workshop,...QMC Program: Trends and Advances in Monte Carlo Sampling Algorithms Workshop,...
QMC Program: Trends and Advances in Monte Carlo Sampling Algorithms Workshop,...
 
Hastings 1970
Hastings 1970Hastings 1970
Hastings 1970
 
Linear response theory and TDDFT
Linear response theory and TDDFT Linear response theory and TDDFT
Linear response theory and TDDFT
 
Linear response theory
Linear response theoryLinear response theory
Linear response theory
 
Introduction to Diffusion Monte Carlo
Introduction to Diffusion Monte CarloIntroduction to Diffusion Monte Carlo
Introduction to Diffusion Monte Carlo
 
BSE and TDDFT at work
BSE and TDDFT at workBSE and TDDFT at work
BSE and TDDFT at work
 
Statistical Physics Assignment Help
Statistical Physics Assignment HelpStatistical Physics Assignment Help
Statistical Physics Assignment Help
 
Mechanical Engineering Assignment Help
Mechanical Engineering Assignment HelpMechanical Engineering Assignment Help
Mechanical Engineering Assignment Help
 
short course at CIRM, Bayesian Masterclass, October 2018
short course at CIRM, Bayesian Masterclass, October 2018short course at CIRM, Bayesian Masterclass, October 2018
short course at CIRM, Bayesian Masterclass, October 2018
 
Introduction to Bootstrap and elements of Markov Chains
Introduction to Bootstrap and elements of Markov ChainsIntroduction to Bootstrap and elements of Markov Chains
Introduction to Bootstrap and elements of Markov Chains
 
Introduction to advanced Monte Carlo methods
Introduction to advanced Monte Carlo methodsIntroduction to advanced Monte Carlo methods
Introduction to advanced Monte Carlo methods
 
Talk 5
Talk 5Talk 5
Talk 5
 

Similar to Markov chain Monte Carlo methods and some attempts at parallelizing them

Unbiased MCMC with couplings
Unbiased MCMC with couplingsUnbiased MCMC with couplings
Unbiased MCMC with couplings
Pierre Jacob
 
Unbiased Markov chain Monte Carlo methods
Unbiased Markov chain Monte Carlo methods Unbiased Markov chain Monte Carlo methods
Unbiased Markov chain Monte Carlo methods
Pierre Jacob
 
Unbiased Markov chain Monte Carlo
Unbiased Markov chain Monte CarloUnbiased Markov chain Monte Carlo
Unbiased Markov chain Monte Carlo
JeremyHeng10
 
Adaptive Restore algorithm & importance Monte Carlo
Adaptive Restore algorithm & importance Monte CarloAdaptive Restore algorithm & importance Monte Carlo
Adaptive Restore algorithm & importance Monte Carlo
Christian Robert
 
Unbiased Markov chain Monte Carlo
Unbiased Markov chain Monte CarloUnbiased Markov chain Monte Carlo
Unbiased Markov chain Monte Carlo
JeremyHeng10
 
Metodo Monte Carlo -Wang Landau
Metodo Monte Carlo -Wang LandauMetodo Monte Carlo -Wang Landau
Metodo Monte Carlo -Wang Landau
angely alcendra
 
Zeros of orthogonal polynomials generated by a Geronimus perturbation of meas...
Zeros of orthogonal polynomials generated by a Geronimus perturbation of meas...Zeros of orthogonal polynomials generated by a Geronimus perturbation of meas...
Zeros of orthogonal polynomials generated by a Geronimus perturbation of meas...
Edmundo José Huertas Cejudo
 
A bit about мcmc
A bit about мcmcA bit about мcmc
A bit about мcmc
Alexander Favorov
 
QMC Program: Trends and Advances in Monte Carlo Sampling Algorithms Workshop,...
QMC Program: Trends and Advances in Monte Carlo Sampling Algorithms Workshop,...QMC Program: Trends and Advances in Monte Carlo Sampling Algorithms Workshop,...
QMC Program: Trends and Advances in Monte Carlo Sampling Algorithms Workshop,...
The Statistical and Applied Mathematical Sciences Institute
 
Program on Quasi-Monte Carlo and High-Dimensional Sampling Methods for Applie...
Program on Quasi-Monte Carlo and High-Dimensional Sampling Methods for Applie...Program on Quasi-Monte Carlo and High-Dimensional Sampling Methods for Applie...
Program on Quasi-Monte Carlo and High-Dimensional Sampling Methods for Applie...
The Statistical and Applied Mathematical Sciences Institute
 
Linear models for classification
Linear models for classificationLinear models for classification
Linear models for classification
Sung Yub Kim
 
Monte Carlo methods for some not-quite-but-almost Bayesian problems
Monte Carlo methods for some not-quite-but-almost Bayesian problemsMonte Carlo methods for some not-quite-but-almost Bayesian problems
Monte Carlo methods for some not-quite-but-almost Bayesian problems
Pierre Jacob
 
Upm etsiccp-seminar-vf
Upm etsiccp-seminar-vfUpm etsiccp-seminar-vf
Upm etsiccp-seminar-vf
Edmundo José Huertas Cejudo
 
QMC: Transition Workshop - Density Estimation by Randomized Quasi-Monte Carlo...
QMC: Transition Workshop - Density Estimation by Randomized Quasi-Monte Carlo...QMC: Transition Workshop - Density Estimation by Randomized Quasi-Monte Carlo...
QMC: Transition Workshop - Density Estimation by Randomized Quasi-Monte Carlo...
The Statistical and Applied Mathematical Sciences Institute
 
2018 MUMS Fall Course - Statistical Representation of Model Input (EDITED) - ...
2018 MUMS Fall Course - Statistical Representation of Model Input (EDITED) - ...2018 MUMS Fall Course - Statistical Representation of Model Input (EDITED) - ...
2018 MUMS Fall Course - Statistical Representation of Model Input (EDITED) - ...
The Statistical and Applied Mathematical Sciences Institute
 
SPDE presentation 2012
SPDE presentation 2012SPDE presentation 2012
SPDE presentation 2012
Zheng Mengdi
 
Talk at CIRM on Poisson equation and debiasing techniques
Talk at CIRM on Poisson equation and debiasing techniquesTalk at CIRM on Poisson equation and debiasing techniques
Talk at CIRM on Poisson equation and debiasing techniques
Pierre Jacob
 
Nonlinear Stochastic Optimization by the Monte-Carlo Method
Nonlinear Stochastic Optimization by the Monte-Carlo MethodNonlinear Stochastic Optimization by the Monte-Carlo Method
Nonlinear Stochastic Optimization by the Monte-Carlo Method
SSA KPI
 
Random Matrix Theory and Machine Learning - Part 3
Random Matrix Theory and Machine Learning - Part 3Random Matrix Theory and Machine Learning - Part 3
Random Matrix Theory and Machine Learning - Part 3
Fabian Pedregosa
 
restore.pdf
restore.pdfrestore.pdf
restore.pdf
Christian Robert
 

Similar to Markov chain Monte Carlo methods and some attempts at parallelizing them (20)

Unbiased MCMC with couplings
Unbiased MCMC with couplingsUnbiased MCMC with couplings
Unbiased MCMC with couplings
 
Unbiased Markov chain Monte Carlo methods
Unbiased Markov chain Monte Carlo methods Unbiased Markov chain Monte Carlo methods
Unbiased Markov chain Monte Carlo methods
 
Unbiased Markov chain Monte Carlo
Unbiased Markov chain Monte CarloUnbiased Markov chain Monte Carlo
Unbiased Markov chain Monte Carlo
 
Adaptive Restore algorithm & importance Monte Carlo
Adaptive Restore algorithm & importance Monte CarloAdaptive Restore algorithm & importance Monte Carlo
Adaptive Restore algorithm & importance Monte Carlo
 
Unbiased Markov chain Monte Carlo
Unbiased Markov chain Monte CarloUnbiased Markov chain Monte Carlo
Unbiased Markov chain Monte Carlo
 
Metodo Monte Carlo -Wang Landau
Metodo Monte Carlo -Wang LandauMetodo Monte Carlo -Wang Landau
Metodo Monte Carlo -Wang Landau
 
Zeros of orthogonal polynomials generated by a Geronimus perturbation of meas...
Zeros of orthogonal polynomials generated by a Geronimus perturbation of meas...Zeros of orthogonal polynomials generated by a Geronimus perturbation of meas...
Zeros of orthogonal polynomials generated by a Geronimus perturbation of meas...
 
A bit about мcmc
A bit about мcmcA bit about мcmc
A bit about мcmc
 
QMC Program: Trends and Advances in Monte Carlo Sampling Algorithms Workshop,...
QMC Program: Trends and Advances in Monte Carlo Sampling Algorithms Workshop,...QMC Program: Trends and Advances in Monte Carlo Sampling Algorithms Workshop,...
QMC Program: Trends and Advances in Monte Carlo Sampling Algorithms Workshop,...
 
Program on Quasi-Monte Carlo and High-Dimensional Sampling Methods for Applie...
Program on Quasi-Monte Carlo and High-Dimensional Sampling Methods for Applie...Program on Quasi-Monte Carlo and High-Dimensional Sampling Methods for Applie...
Program on Quasi-Monte Carlo and High-Dimensional Sampling Methods for Applie...
 
Linear models for classification
Linear models for classificationLinear models for classification
Linear models for classification
 
Monte Carlo methods for some not-quite-but-almost Bayesian problems
Monte Carlo methods for some not-quite-but-almost Bayesian problemsMonte Carlo methods for some not-quite-but-almost Bayesian problems
Monte Carlo methods for some not-quite-but-almost Bayesian problems
 
Upm etsiccp-seminar-vf
Upm etsiccp-seminar-vfUpm etsiccp-seminar-vf
Upm etsiccp-seminar-vf
 
QMC: Transition Workshop - Density Estimation by Randomized Quasi-Monte Carlo...
QMC: Transition Workshop - Density Estimation by Randomized Quasi-Monte Carlo...QMC: Transition Workshop - Density Estimation by Randomized Quasi-Monte Carlo...
QMC: Transition Workshop - Density Estimation by Randomized Quasi-Monte Carlo...
 
2018 MUMS Fall Course - Statistical Representation of Model Input (EDITED) - ...
2018 MUMS Fall Course - Statistical Representation of Model Input (EDITED) - ...2018 MUMS Fall Course - Statistical Representation of Model Input (EDITED) - ...
2018 MUMS Fall Course - Statistical Representation of Model Input (EDITED) - ...
 
SPDE presentation 2012
SPDE presentation 2012SPDE presentation 2012
SPDE presentation 2012
 
Talk at CIRM on Poisson equation and debiasing techniques
Talk at CIRM on Poisson equation and debiasing techniquesTalk at CIRM on Poisson equation and debiasing techniques
Talk at CIRM on Poisson equation and debiasing techniques
 
Nonlinear Stochastic Optimization by the Monte-Carlo Method
Nonlinear Stochastic Optimization by the Monte-Carlo MethodNonlinear Stochastic Optimization by the Monte-Carlo Method
Nonlinear Stochastic Optimization by the Monte-Carlo Method
 
Random Matrix Theory and Machine Learning - Part 3
Random Matrix Theory and Machine Learning - Part 3Random Matrix Theory and Machine Learning - Part 3
Random Matrix Theory and Machine Learning - Part 3
 
restore.pdf
restore.pdfrestore.pdf
restore.pdf
 

More from Pierre Jacob

ISBA 2022 Susie Bayarri lecture
ISBA 2022 Susie Bayarri lectureISBA 2022 Susie Bayarri lecture
ISBA 2022 Susie Bayarri lecture
Pierre Jacob
 
Couplings of Markov chains and the Poisson equation
Couplings of Markov chains and the Poisson equation Couplings of Markov chains and the Poisson equation
Couplings of Markov chains and the Poisson equation
Pierre Jacob
 
Monte Carlo methods for some not-quite-but-almost Bayesian problems
Monte Carlo methods for some not-quite-but-almost Bayesian problemsMonte Carlo methods for some not-quite-but-almost Bayesian problems
Monte Carlo methods for some not-quite-but-almost Bayesian problems
Pierre Jacob
 
Estimation of the score vector and observed information matrix in intractable...
Estimation of the score vector and observed information matrix in intractable...Estimation of the score vector and observed information matrix in intractable...
Estimation of the score vector and observed information matrix in intractable...
Pierre Jacob
 
Estimation of the score vector and observed information matrix in intractable...
Estimation of the score vector and observed information matrix in intractable...Estimation of the score vector and observed information matrix in intractable...
Estimation of the score vector and observed information matrix in intractable...
Pierre Jacob
 
Estimation of the score vector and observed information matrix in intractable...
Estimation of the score vector and observed information matrix in intractable...Estimation of the score vector and observed information matrix in intractable...
Estimation of the score vector and observed information matrix in intractable...
Pierre Jacob
 
Estimation of the score vector and observed information matrix in intractable...
Estimation of the score vector and observed information matrix in intractable...Estimation of the score vector and observed information matrix in intractable...
Estimation of the score vector and observed information matrix in intractable...
Pierre Jacob
 
Current limitations of sequential inference in general hidden Markov models
Current limitations of sequential inference in general hidden Markov modelsCurrent limitations of sequential inference in general hidden Markov models
Current limitations of sequential inference in general hidden Markov models
Pierre Jacob
 
On non-negative unbiased estimators
On non-negative unbiased estimatorsOn non-negative unbiased estimators
On non-negative unbiased estimators
Pierre Jacob
 
Path storage in the particle filter
Path storage in the particle filterPath storage in the particle filter
Path storage in the particle filter
Pierre Jacob
 
Density exploration methods
Density exploration methodsDensity exploration methods
Density exploration methods
Pierre Jacob
 
SMC^2: an algorithm for sequential analysis of state-space models
SMC^2: an algorithm for sequential analysis of state-space modelsSMC^2: an algorithm for sequential analysis of state-space models
SMC^2: an algorithm for sequential analysis of state-space models
Pierre Jacob
 
PAWL - GPU meeting @ Warwick
PAWL - GPU meeting @ WarwickPAWL - GPU meeting @ Warwick
PAWL - GPU meeting @ Warwick
Pierre Jacob
 
Presentation of SMC^2 at BISP7
Presentation of SMC^2 at BISP7Presentation of SMC^2 at BISP7
Presentation of SMC^2 at BISP7
Pierre Jacob
 
Presentation MCB seminar 09032011
Presentation MCB seminar 09032011Presentation MCB seminar 09032011
Presentation MCB seminar 09032011
Pierre Jacob
 

More from Pierre Jacob (15)

ISBA 2022 Susie Bayarri lecture
ISBA 2022 Susie Bayarri lectureISBA 2022 Susie Bayarri lecture
ISBA 2022 Susie Bayarri lecture
 
Couplings of Markov chains and the Poisson equation
Couplings of Markov chains and the Poisson equation Couplings of Markov chains and the Poisson equation
Couplings of Markov chains and the Poisson equation
 
Monte Carlo methods for some not-quite-but-almost Bayesian problems
Monte Carlo methods for some not-quite-but-almost Bayesian problemsMonte Carlo methods for some not-quite-but-almost Bayesian problems
Monte Carlo methods for some not-quite-but-almost Bayesian problems
 
Estimation of the score vector and observed information matrix in intractable...
Estimation of the score vector and observed information matrix in intractable...Estimation of the score vector and observed information matrix in intractable...
Estimation of the score vector and observed information matrix in intractable...
 
Estimation of the score vector and observed information matrix in intractable...
Estimation of the score vector and observed information matrix in intractable...Estimation of the score vector and observed information matrix in intractable...
Estimation of the score vector and observed information matrix in intractable...
 
Estimation of the score vector and observed information matrix in intractable...
Estimation of the score vector and observed information matrix in intractable...Estimation of the score vector and observed information matrix in intractable...
Estimation of the score vector and observed information matrix in intractable...
 
Estimation of the score vector and observed information matrix in intractable...
Estimation of the score vector and observed information matrix in intractable...Estimation of the score vector and observed information matrix in intractable...
Estimation of the score vector and observed information matrix in intractable...
 
Current limitations of sequential inference in general hidden Markov models
Current limitations of sequential inference in general hidden Markov modelsCurrent limitations of sequential inference in general hidden Markov models
Current limitations of sequential inference in general hidden Markov models
 
On non-negative unbiased estimators
On non-negative unbiased estimatorsOn non-negative unbiased estimators
On non-negative unbiased estimators
 
Path storage in the particle filter
Path storage in the particle filterPath storage in the particle filter
Path storage in the particle filter
 
Density exploration methods
Density exploration methodsDensity exploration methods
Density exploration methods
 
SMC^2: an algorithm for sequential analysis of state-space models
SMC^2: an algorithm for sequential analysis of state-space modelsSMC^2: an algorithm for sequential analysis of state-space models
SMC^2: an algorithm for sequential analysis of state-space models
 
PAWL - GPU meeting @ Warwick
PAWL - GPU meeting @ WarwickPAWL - GPU meeting @ Warwick
PAWL - GPU meeting @ Warwick
 
Presentation of SMC^2 at BISP7
Presentation of SMC^2 at BISP7Presentation of SMC^2 at BISP7
Presentation of SMC^2 at BISP7
 
Presentation MCB seminar 09032011
Presentation MCB seminar 09032011Presentation MCB seminar 09032011
Presentation MCB seminar 09032011
 

Recently uploaded

Penicillin...........................pptx
Penicillin...........................pptxPenicillin...........................pptx
Penicillin...........................pptx
Cherry
 
EY - Supply Chain Services 2018_template.pptx
EY - Supply Chain Services 2018_template.pptxEY - Supply Chain Services 2018_template.pptx
EY - Supply Chain Services 2018_template.pptx
AlguinaldoKong
 
Seminar of U.V. Spectroscopy by SAMIR PANDA
 Seminar of U.V. Spectroscopy by SAMIR PANDA Seminar of U.V. Spectroscopy by SAMIR PANDA
Seminar of U.V. Spectroscopy by SAMIR PANDA
SAMIR PANDA
 
GBSN- Microbiology (Lab 3) Gram Staining
GBSN- Microbiology (Lab 3) Gram StainingGBSN- Microbiology (Lab 3) Gram Staining
GBSN- Microbiology (Lab 3) Gram Staining
Areesha Ahmad
 
Hemostasis_importance& clinical significance.pptx
Hemostasis_importance& clinical significance.pptxHemostasis_importance& clinical significance.pptx
Hemostasis_importance& clinical significance.pptx
muralinath2
 
Cancer cell metabolism: special Reference to Lactate Pathway
Cancer cell metabolism: special Reference to Lactate PathwayCancer cell metabolism: special Reference to Lactate Pathway
Cancer cell metabolism: special Reference to Lactate Pathway
AADYARAJPANDEY1
 
The ASGCT Annual Meeting was packed with exciting progress in the field advan...
The ASGCT Annual Meeting was packed with exciting progress in the field advan...The ASGCT Annual Meeting was packed with exciting progress in the field advan...
The ASGCT Annual Meeting was packed with exciting progress in the field advan...
Health Advances
 
Lab report on liquid viscosity of glycerin
Lab report on liquid viscosity of glycerinLab report on liquid viscosity of glycerin
Lab report on liquid viscosity of glycerin
ossaicprecious19
 
Nutraceutical market, scope and growth: Herbal drug technology
Nutraceutical market, scope and growth: Herbal drug technologyNutraceutical market, scope and growth: Herbal drug technology
Nutraceutical market, scope and growth: Herbal drug technology
Lokesh Patil
 
ESR_factors_affect-clinic significance-Pathysiology.pptx
ESR_factors_affect-clinic significance-Pathysiology.pptxESR_factors_affect-clinic significance-Pathysiology.pptx
ESR_factors_affect-clinic significance-Pathysiology.pptx
muralinath2
 
Comparative structure of adrenal gland in vertebrates
Comparative structure of adrenal gland in vertebratesComparative structure of adrenal gland in vertebrates
Comparative structure of adrenal gland in vertebrates
sachin783648
 
Observation of Io’s Resurfacing via Plume Deposition Using Ground-based Adapt...
Observation of Io’s Resurfacing via Plume Deposition Using Ground-based Adapt...Observation of Io’s Resurfacing via Plume Deposition Using Ground-based Adapt...
Observation of Io’s Resurfacing via Plume Deposition Using Ground-based Adapt...
Sérgio Sacani
 
Large scale production of streptomycin.pptx
Large scale production of streptomycin.pptxLarge scale production of streptomycin.pptx
Large scale production of streptomycin.pptx
Cherry
 
Anemia_ different types_causes_ conditions
Anemia_ different types_causes_ conditionsAnemia_ different types_causes_ conditions
Anemia_ different types_causes_ conditions
muralinath2
 
Structural Classification Of Protein (SCOP)
Structural Classification Of Protein  (SCOP)Structural Classification Of Protein  (SCOP)
Structural Classification Of Protein (SCOP)
aishnasrivastava
 
Predicting property prices with machine learning algorithms.pdf
Predicting property prices with machine learning algorithms.pdfPredicting property prices with machine learning algorithms.pdf
Predicting property prices with machine learning algorithms.pdf
binhminhvu04
 
Richard's entangled aventures in wonderland
Richard's entangled aventures in wonderlandRichard's entangled aventures in wonderland
Richard's entangled aventures in wonderland
Richard Gill
 
Citrus Greening Disease and its Management
Citrus Greening Disease and its ManagementCitrus Greening Disease and its Management
Citrus Greening Disease and its Management
subedisuryaofficial
 
(May 29th, 2024) Advancements in Intravital Microscopy- Insights for Preclini...
(May 29th, 2024) Advancements in Intravital Microscopy- Insights for Preclini...(May 29th, 2024) Advancements in Intravital Microscopy- Insights for Preclini...
(May 29th, 2024) Advancements in Intravital Microscopy- Insights for Preclini...
Scintica Instrumentation
 
erythropoiesis-I_mechanism& clinical significance.pptx
erythropoiesis-I_mechanism& clinical significance.pptxerythropoiesis-I_mechanism& clinical significance.pptx
erythropoiesis-I_mechanism& clinical significance.pptx
muralinath2
 

Recently uploaded (20)

Penicillin...........................pptx
Penicillin...........................pptxPenicillin...........................pptx
Penicillin...........................pptx
 
EY - Supply Chain Services 2018_template.pptx
EY - Supply Chain Services 2018_template.pptxEY - Supply Chain Services 2018_template.pptx
EY - Supply Chain Services 2018_template.pptx
 
Seminar of U.V. Spectroscopy by SAMIR PANDA
 Seminar of U.V. Spectroscopy by SAMIR PANDA Seminar of U.V. Spectroscopy by SAMIR PANDA
Seminar of U.V. Spectroscopy by SAMIR PANDA
 
GBSN- Microbiology (Lab 3) Gram Staining
GBSN- Microbiology (Lab 3) Gram StainingGBSN- Microbiology (Lab 3) Gram Staining
GBSN- Microbiology (Lab 3) Gram Staining
 
Hemostasis_importance& clinical significance.pptx
Hemostasis_importance& clinical significance.pptxHemostasis_importance& clinical significance.pptx
Hemostasis_importance& clinical significance.pptx
 
Cancer cell metabolism: special Reference to Lactate Pathway
Cancer cell metabolism: special Reference to Lactate PathwayCancer cell metabolism: special Reference to Lactate Pathway
Cancer cell metabolism: special Reference to Lactate Pathway
 
The ASGCT Annual Meeting was packed with exciting progress in the field advan...
The ASGCT Annual Meeting was packed with exciting progress in the field advan...The ASGCT Annual Meeting was packed with exciting progress in the field advan...
The ASGCT Annual Meeting was packed with exciting progress in the field advan...
 
Lab report on liquid viscosity of glycerin
Lab report on liquid viscosity of glycerinLab report on liquid viscosity of glycerin
Lab report on liquid viscosity of glycerin
 
Nutraceutical market, scope and growth: Herbal drug technology
Nutraceutical market, scope and growth: Herbal drug technologyNutraceutical market, scope and growth: Herbal drug technology
Nutraceutical market, scope and growth: Herbal drug technology
 
ESR_factors_affect-clinic significance-Pathysiology.pptx
ESR_factors_affect-clinic significance-Pathysiology.pptxESR_factors_affect-clinic significance-Pathysiology.pptx
ESR_factors_affect-clinic significance-Pathysiology.pptx
 
Comparative structure of adrenal gland in vertebrates
Comparative structure of adrenal gland in vertebratesComparative structure of adrenal gland in vertebrates
Comparative structure of adrenal gland in vertebrates
 
Observation of Io’s Resurfacing via Plume Deposition Using Ground-based Adapt...
Observation of Io’s Resurfacing via Plume Deposition Using Ground-based Adapt...Observation of Io’s Resurfacing via Plume Deposition Using Ground-based Adapt...
Observation of Io’s Resurfacing via Plume Deposition Using Ground-based Adapt...
 
Large scale production of streptomycin.pptx
Large scale production of streptomycin.pptxLarge scale production of streptomycin.pptx
Large scale production of streptomycin.pptx
 
Anemia_ different types_causes_ conditions
Anemia_ different types_causes_ conditionsAnemia_ different types_causes_ conditions
Anemia_ different types_causes_ conditions
 
Structural Classification Of Protein (SCOP)
Structural Classification Of Protein  (SCOP)Structural Classification Of Protein  (SCOP)
Structural Classification Of Protein (SCOP)
 
Predicting property prices with machine learning algorithms.pdf
Predicting property prices with machine learning algorithms.pdfPredicting property prices with machine learning algorithms.pdf
Predicting property prices with machine learning algorithms.pdf
 
Richard's entangled aventures in wonderland
Richard's entangled aventures in wonderlandRichard's entangled aventures in wonderland
Richard's entangled aventures in wonderland
 
Citrus Greening Disease and its Management
Citrus Greening Disease and its ManagementCitrus Greening Disease and its Management
Citrus Greening Disease and its Management
 
(May 29th, 2024) Advancements in Intravital Microscopy- Insights for Preclini...
(May 29th, 2024) Advancements in Intravital Microscopy- Insights for Preclini...(May 29th, 2024) Advancements in Intravital Microscopy- Insights for Preclini...
(May 29th, 2024) Advancements in Intravital Microscopy- Insights for Preclini...
 
erythropoiesis-I_mechanism& clinical significance.pptx
erythropoiesis-I_mechanism& clinical significance.pptxerythropoiesis-I_mechanism& clinical significance.pptx
erythropoiesis-I_mechanism& clinical significance.pptx
 

Markov chain Monte Carlo methods and some attempts at parallelizing them

  • 1. Markov chain Monte Carlo methods and some attempts at parallelizing them Pierre E. Jacob Department of Statistics, Harvard University (and many fantastic collaborators!) MIT IDS.190, October 2019 blog: https://statisfaction.wordpress.com Pierre E. Jacob Unbiased MCMC
  • 2. Setting Continuous or discrete space of dimension d. Target probability distribution π, with probability density/mass function x → π(x). Goal: approximate π, e.g. Eπ[h(X)] = h(x)π(x)dx = π(h), for a class of “test” functions h. Pierre E. Jacob Unbiased MCMC
  • 3. Monte Carlo Originates from physics, and still very much a research topic in physics e.g. K. Binder et al, Monte Carlo methods in statistical physics, 2012. Often state-of-the-art for numerical integration e.g. E. Novak, Some results on the complexity of numerical integration, 2016. Plays an important role in Bayesian inference e.g. P. Green et al, Bayesian computation: a summary of the current state, and samples backwards and forwards, 2015. Can be useful for many other tasks in statistics e.g. J. Besag, MCMC for Statistical Inference, 2001. See also P. Diaconis, The MCMC revolution, 2009. Pierre E. Jacob Unbiased MCMC
  • 4. Outline 1 Monte Carlo and bias 2 Sequential Monte Carlo samplers 3 Regeneration 4 Unbiased estimators from coupled Markov chains 5 Bonus: new convergence diagnostics for MCMC Pierre E. Jacob Unbiased MCMC
  • 5. Outline 1 Monte Carlo and bias 2 Sequential Monte Carlo samplers 3 Regeneration 4 Unbiased estimators from coupled Markov chains 5 Bonus: new convergence diagnostics for MCMC Pierre E. Jacob Unbiased MCMC
  • 6. Markov chain Monte Carlo Initially, X0 ∼ π0, then Xt|Xt−1 ∼ P(Xt−1, ·) for t = 1, . . . , T. Estimator: 1 T − b T t=b+1 h(Xt), where b iterations are discarded as burn-in. Might converge to Eπ[h(X)] as T → ∞ by the ergodic theorem. Biased for any fixed b, T, unless π0 is equal to π. Averaging independent copies of such estimators for fixed b, T would not provide a consistent estimator of Eπ[h(X)] as the number of independent copies goes to infinity. Pierre E. Jacob Unbiased MCMC
  • 7. Example: Metropolis–Hastings kernel P With Markov chain at state Xt, 1 propose X ∼ q(Xt, ·), 2 sample U ∼ Uniform(0, 1), 3 if U ≤ π(X )q(X , Xt) π(Xt)q(Xt, X ) , set Xt+1 = X , otherwise set Xt+1 = Xt. Hastings, Monte Carlo sampling methods using Markov chains and their applications, 1970. Pierre E. Jacob Unbiased MCMC
  • 8. MCMC trace π = N(0, 1), RWMH with Normal proposal std = 0.5, π0 = N(10, 32 ) Pierre E. Jacob Unbiased MCMC
  • 9. MCMC marginal distributions π = N(0, 1), RWMH with Normal proposal std = 0.5, π0 = N(10, 32 ) Pierre E. Jacob Unbiased MCMC
  • 10. Independent replicates and MCMC The bias is the difference |E[h(Xt)] − Eπ[h(X)]| for fixed t. The bias has always been recognized as an obstacle on the way to parallelize Monte Carlo calculations, e.g. When running parallel Monte Carlo with many comput- ers, it is more important to start with an unbiased (or low-bias) estimate than with a low-variance estimate. Rosenthal, Parallel computing and Monte Carlo algorithms, 2000. For general statistical estimators, mean squared error is often the prefered measure of accuracy. In Monte Carlo, variance can be both quantified and arbitrarily reduced with independent runs, but neither is true for the bias. Pierre E. Jacob Unbiased MCMC
  • 11. Outline 1 Monte Carlo and bias 2 Sequential Monte Carlo samplers 3 Regeneration 4 Unbiased estimators from coupled Markov chains 5 Bonus: new convergence diagnostics for MCMC Pierre E. Jacob Unbiased MCMC
  • 12. Importance sampling Importance sampling relies on a proposal distribution q, chosen by user to be an approximation of π. 1 Sample X1:N ∼ q, independently. 2 Weight w(Xn) = π(Xn)/q(Xn). 3 Normalize weights to obtain W1:N . The procedure yields ˆπN (·) = N n=1 Wn δXn (·) approximates π as N → ∞ under conditions on q and π. Pierre E. Jacob Unbiased MCMC
  • 13. Importance sampling with MCMC proposals Finding proposal q that approximates π might be difficult. Can we use MCMC as an importance sampling proposal? Something that would look like: 1 Sample X1:N by running N chains for T steps. 2 Weight w(Xn) somehow (?). 3 Normalize weights to obtain W1:N . An immediate difficulty is that the marginal distributions of MCMC chains are generally intractable, so importance weights seem hard to compute. Pierre E. Jacob Unbiased MCMC
  • 14. Annealed importance sampling For instance, sample X0 ∼ π0 and X1|X0 ∼ P(X0, ·). Problem: marginal distribution of X1 is intractable. Introduce backward kernel L(x1, x0) = P(x0, x1)π(x0)/π(x1). Then consider proposal distribution ¯q(x0, x1) = π0(x0)P(x0, x1), target distribution ¯π(x0, x1) = π(x1)L(x1, x0). Writing down importance sampling procedure leads to tractable weights ∝ π(x0)/π0(x0), desired marginal distribution: ¯π(x0, x1)dx0 = π(x1). Neal, Annealed importance sampling, 2001, Pierre E. Jacob Unbiased MCMC
  • 15. Sequential Monte Carlo samplers Del Moral, Doucet & Jasra, SMC samplers, 2006. AIS and SMC samplers work by introducing sequence of target distributions πt, for t = 0, . . . , T, and a sequence of MCMC kernels Pt targeting πt. Then N chains start from π0 and move through the specified Markov kernels, are weighted using ratios of successive target distributions, are resampled according to weights (in SMC samplers). At final step T, weighted samples approximate π. The resampling steps induce interaction between the chains, which possibly means communication between machines. Whiteley, Lee & Heine, On the role of interaction in sequential Monte Carlo algorithms, 2016. Pierre E. Jacob Unbiased MCMC
  • 16. Sequential Monte Carlo samplers π = N(0, 1), adaptive SMC sampler with MH moves, π0 = N(10, 32 ) Pierre E. Jacob Unbiased MCMC
  • 17. Outline 1 Monte Carlo and bias 2 Sequential Monte Carlo samplers 3 Regeneration 4 Unbiased estimators from coupled Markov chains 5 Bonus: new convergence diagnostics for MCMC Pierre E. Jacob Unbiased MCMC
  • 18. Regeneration in Markov chain samplers Mykland, Tierney & Yu, Regeneration in Markov chain samplers, 1995. −3 0 3 6 0 50 100 150 200 iteration x We might be able to identify regeneration times (Tn)n≥1 such that the tours (XTn−1 , . . . , XTn−1) are i.i.d. and such that N n=1 Tn t=Tn−1 h(Xt) N n=1(Tn − Tn−1) a.s. −−−−→ N→∞ Eπ[h(X)] . . . but it might be difficult to identify these times. Pierre E. Jacob Unbiased MCMC
  • 19. Brockwell and Kadane’s regeneration technique Design new chain such that regeneration is easier to identify. State space E ∪ α, Markov kernel ˜P on E ∪ α that targets ˜π, such that ˜π is equal to π on E. Set ˜π(α) (to be chosen), and design “re-entry” proposal φ on E. If Xt = α, propose X ∼ φ on E, acceptance probability min(1, π(X )/(˜π(α)φ(X ))), if Xt ∈ E, propose move to α, acceptance probability min(1, ˜π(α)φ(Xt)/π(Xt)). Perform these moves with probability ω, otherwise sample Xt+1 ∼ P(Xt, ·) if Xt ∈ E, and set Xt+1 = α if Xt = α. With the new chain, every re-entry in E is a regeneration. Pierre E. Jacob Unbiased MCMC
  • 20. Illustration of regeneration technique π = N(0, 1), MH with Normal proposal std = 0.5, π0 = N(10, 32 ) Set ˜π(α) = 1, φ = N(2, 1), ω = 0.1. −2 0 2 0 50 100 150 200 iteration x Brockwell & Kadane, Identification of regeneration times in MCMC simulation, with application to adaptive schemes, 2005. See also Nummelin, MC’s for MCMC’ists, 2002. Pierre E. Jacob Unbiased MCMC
  • 21. Outline 1 Monte Carlo and bias 2 Sequential Monte Carlo samplers 3 Regeneration 4 Unbiased estimators from coupled Markov chains 5 Bonus: new convergence diagnostics for MCMC Pierre E. Jacob Unbiased MCMC
  • 22. Coupled chains Glynn & Rhee, Exact estimation for MC equilibrium expectations, 2014. Generate two chains (Xt) and (Yt) as follows, sample X0 and Y0 from π0 (independently, or not), sample X1|X0 ∼ P(X0, ·), for t ≥ 1, sample (Xt+1, Yt)|(Xt, Yt−1) ∼ ¯P ((Xt, Yt−1), ·). ¯P must be such that Xt+1|Xt ∼ P(Xt, ·) and Yt|Yt−1 ∼ P(Yt−1, ·) (thus Xt and Yt have the same distribution for all t ≥ 0), there exists a random time τ such that Xt = Yt−1 for t ≥ τ (the chains meet and remain “faithful”). Pierre E. Jacob Unbiased MCMC
  • 23. Metropolis on Normal target: coupled paths [Figure: trajectories of two coupled chains, iteration vs x.] π = N(0, 1), RWMH with Normal proposal std = 0.5, π0 = N(10, 3²) Pierre E. Jacob Unbiased MCMC
  • 24. Metropolis on Normal target: coupled paths [Figure: trajectories of two coupled chains, iteration vs x.] π = N(0, 1), RWMH with Normal proposal std = 0.5, π0 = N(10, 3²) Pierre E. Jacob Unbiased MCMC
  • 25. Debiasing idea (one slide version) Limit as a telescopic sum, for all k ≥ 0, Eπ[h(X)] = lim_{t→∞} E[h(Xt)] = E[h(Xk)] + Σ_{t=k+1}^{∞} E[h(Xt) − h(Xt−1)]. Since for all t ≥ 0, Xt and Yt have the same distribution, this equals E[h(Xk)] + Σ_{t=k+1}^{∞} E[h(Xt) − h(Yt−1)]. If we can swap expectation and limit, this equals E[h(Xk) + Σ_{t=k+1}^{∞} (h(Xt) − h(Yt−1))]. The random variable in the above expectation is unbiased for Eπ[h(X)]. Pierre E. Jacob Unbiased MCMC
  • 26. Unbiased estimators Unbiased estimator, for any user-chosen k, is given by Hk(X, Y) = h(Xk) + Σ_{t=k+1}^{τ−1} (h(Xt) − h(Yt−1)), with the convention that the sum is zero if τ − 1 < k + 1. h(Xk) alone is biased; the other terms correct for the bias. Cost: τ − 1 calls to P̄ and 1 + max(0, k − τ) calls to P. Glynn & Rhee, Exact estimation for Markov chain equilibrium expectations, 2014. Also Agapiou, Roberts & Vollmer, Unbiased Monte Carlo: Posterior estimation for intractable/infinite-dimensional models, 2018. Note: same reasoning would work with arbitrary lags L ≥ 1. Pierre E. Jacob Unbiased MCMC
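Given the output of a coupled-chain driver, Hk is a few lines of code; this is an illustrative implementation of the formula above, assuming X holds X0, X1, . . . , Y holds Y0, Y1, . . . , and tau is the meeting time.

```python
# H_k = h(X_k) + sum_{t=k+1}^{tau-1} (h(X_t) - h(Y_{t-1})), with an empty sum
# when tau - 1 < k + 1; X, Y, tau as returned by a coupled-chain driver.
def H_k(X, Y, tau, k=0, h=lambda x: x):
    out = h(X[k])
    for t in range(k + 1, tau):                # empty range if tau - 1 < k + 1
        out += h(X[t]) - h(Y[t - 1])
    return out
```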
  • 27. Conditions Jacob, O’Leary, Atchadé, Unbiased MCMC with couplings, 2019. 1 The marginal chain converges: E[h(Xt)] → Eπ[h(X)], and h(Xt) has a finite (2 + η)-th moment, uniformly in t, for some η > 0. 2 The meeting time τ has geometric tails: ∃C < +∞, ∃δ ∈ (0, 1), ∀t ≥ 0, P(τ > t) ≤ Cδ^t. 3 The chains stay together: Xt = Yt−1 for all t ≥ τ. Condition 2 is itself implied by e.g. a geometric drift condition. Under these conditions, Hk(X, Y) is unbiased, has finite expected cost and finite variance, for all k. Pierre E. Jacob Unbiased MCMC
  • 28. Metropolis on Normal target: meeting times [Figure: histogram of the meeting times τ (density vs meeting time).] π = N(0, 1), RWMH with Normal proposal std = 0.5, π0 = N(10, 3²) Pierre E. Jacob Unbiased MCMC
  • 29. Metropolis on Normal target: estimators of Eπ[X] [Figure: histogram of the estimators H0(X, Y).] k = 0, E[2τ] ≈ 96, V[H0(X, Y)] ≈ 65,000. Pierre E. Jacob Unbiased MCMC
  • 30. Asymptotic inefficiency Final estimator: average of R independent estimators. In a given computing time, more estimators can be produced if each estimator is cheaper. An appropriate measure of performance is [expected cost] × [variance], called the asymptotic inefficiency. Glynn & Whitt, Asymptotic efficiency of simulation estimators, 1992. Glynn & Heidelberger, Bias properties of budget constrained simulations, 1990. Pierre E. Jacob Unbiased MCMC
  • 31. Metropolis on Normal target: estimators of Eπ[X] [Figure: histogram of the estimators Hk(X, Y).] k = 100, E[max(k + τ, 2τ)] ≈ 148, V[Hk(X, Y)] ≈ 100. Pierre E. Jacob Unbiased MCMC
  • 32. Metropolis on Normal target: estimators of Eπ[X] [Figure: histogram of the estimators Hk(X, Y).] k = 200, E[max(k + τ, 2τ)] ≈ 248, V[Hk(X, Y)] ≈ 1. Pierre E. Jacob Unbiased MCMC
  • 33. Time-averaged unbiased estimators Efficiency matters; thus in practice we recommend a variation of the previous estimator, defined for integers k ≤ m as Hk:m(X, Y) = (1/(m − k + 1)) Σ_{t=k}^{m} Ht(X, Y), which can also be written as (1/(m − k + 1)) Σ_{t=k}^{m} h(Xt) + Σ_{t=k+1}^{τ−1} min(1, (t − k)/(m − k + 1)) (h(Xt) − h(Yt−1)), i.e. a standard MCMC average + a bias correction term. Pierre E. Jacob Unbiased MCMC
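The time-averaged estimator can be computed directly from the second expression above; an illustrative implementation, assuming the coupled chains were run for at least max(m, τ) steps so that X has entries up to index m.

```python
# H_{k:m}: standard MCMC average over X_k, ..., X_m plus the bias correction term.
def H_km(X, Y, tau, k, m, h=lambda x: x):
    mcmc_avg = sum(h(X[t]) for t in range(k, m + 1)) / (m - k + 1)
    correction = sum(min(1.0, (t - k) / (m - k + 1)) * (h(X[t]) - h(Y[t - 1]))
                     for t in range(k + 1, tau))
    return mcmc_avg + correction
```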
  • 34. Metropolis on Normal target: time-averaged estimators [Figure: histogram of the estimators Hk:m(X, Y).] k = 200, m = 1000, E[max(m + τ, 2τ)] ≈ 1048, V[Hk:m(X, Y)] ≈ 0.028. Pierre E. Jacob Unbiased MCMC
  • 35. How to design appropriate coupled chains? To implement the proposed unbiased estimators, we need to sample from a Markov kernel P̄ such that, when (Xt+1, Yt) is sampled from P̄((Xt, Yt−1), ·): marginally, Xt+1|Xt ∼ P(Xt, ·) and Yt|Yt−1 ∼ P(Yt−1, ·); it is possible that Xt+1 = Yt exactly for some t ≥ 0; and if Xt = Yt−1, then Xt+1 = Yt almost surely. Pierre E. Jacob Unbiased MCMC
  • 36. Couplings of MCMC algorithms We can find many couplings in the literature. . . Propp & Wilson, Exact sampling with coupled Markov chains and applications to statistical mechanics, Random Structures & Algorithms, 1996. Johnson, Studying convergence of Markov chain Monte Carlo algorithms using coupled sample paths, JASA, 1996. Neal, Circularly-coupled Markov chain sampling, UoT tech report, 1999. Pinto & Neal, Improving Markov chain Monte Carlo estimators by coupling to an approximating chain, UoT tech report, 2001. Glynn & Rhee, Exact estimation for Markov chain equilibrium expectations, Journal of Applied Probability, 2014. Pierre E. Jacob Unbiased MCMC
  • 37. Couplings of MCMC algorithms Conditional particle filters Jacob, Lindsten, Schön, Smoothing with Couplings of Conditional Particle Filters, 2019. Metropolis–Hastings, Gibbs samplers, parallel tempering Jacob, O’Leary, Atchadé, Unbiased MCMC with couplings, 2019. Hamiltonian Monte Carlo Heng & Jacob, Unbiased HMC with couplings, 2019. Pseudo-marginal MCMC, exchange algorithm Middleton, Deligiannidis, Doucet, Jacob, Unbiased MCMC for intractable target distributions, 2018. Particle independent Metropolis–Hastings Middleton, Deligiannidis, Doucet, Jacob, Unbiased Smoothing using Particle Independent Metropolis-Hastings, 2019. Pierre E. Jacob Unbiased MCMC
  • 38. Maximal couplings (X, Y) follows a coupling of p and q if X ∼ p and Y ∼ q. The coupling inequality states that P(X = Y) ≤ 1 − ‖p − q‖_TV for any coupling, with ‖p − q‖_TV = (1/2) ∫ |p(x) − q(x)| dx. Maximal couplings achieve the bound. Pierre E. Jacob Unbiased MCMC
  • 39. Maximal coupling of Gamma and Normal Pierre E. Jacob Unbiased MCMC
  • 40. Maximal coupling: algorithm Requires: evaluations of p and q, sampling from p and q. 1 Sample X ∼ p and W ∼ Uniform(0, 1). If W ≤ q(X)/p(X), set Y = X, output (X, Y). 2 Otherwise, sample Y′ ∼ q and W′ ∼ Uniform(0, 1) until W′ > p(Y′)/q(Y′), set Y = Y′ and output (X, Y). Output: a pair (X, Y) such that X ∼ p, Y ∼ q and P(X = Y) is maximal. Pierre E. Jacob Unbiased MCMC
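A sketch of this algorithm in Python, with p and q represented by samplers and log-densities (placeholder names); the densities should be normalized, or unnormalized with a common constant, so that the ratios are exact.

```python
# Maximal coupling of p and q via the two-step algorithm above; returns (X, Y)
# with X ~ p, Y ~ q and P(X = Y) = 1 - ||p - q||_TV.
import numpy as np

rng = np.random.default_rng(4)

def maximal_coupling(sample_p, logpdf_p, sample_q, logpdf_q):
    X = sample_p()
    if np.log(rng.uniform()) <= logpdf_q(X) - logpdf_p(X):      # W <= q(X)/p(X)
        return X, X
    while True:                                                 # rejection step for Y
        Yprop = sample_q()
        if np.log(rng.uniform()) > logpdf_p(Yprop) - logpdf_q(Yprop):  # W' > p(Y')/q(Y')
            return X, Yprop

# Example: maximal coupling of N(0, 1) and N(1, 1).
lognorm = lambda x, mu: -0.5 * (x - mu)**2 - 0.5 * np.log(2 * np.pi)
X, Y = maximal_coupling(lambda: rng.standard_normal(),       lambda x: lognorm(x, 0.0),
                        lambda: 1.0 + rng.standard_normal(), lambda x: lognorm(x, 1.0))
```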
  • 41. Back to Metropolis–Hastings (kernel P) At each iteration t, Markov chain at state Xt, 1 propose X′ ∼ q(Xt, ·), 2 sample U ∼ Uniform(0, 1), 3 if U ≤ π(X′)q(X′, Xt) / (π(Xt)q(Xt, X′)), set Xt+1 = X′, otherwise set Xt+1 = Xt. How to propagate two MH chains from states Xt and Yt−1 such that {Xt+1 = Yt} can happen? Pierre E. Jacob Unbiased MCMC
  • 42. Coupling of Metropolis–Hastings (kernel P̄) At each iteration t, two Markov chains at states Xt, Yt−1, 1 propose (X′, Y′) from a maximal coupling of q(Xt, ·) and q(Yt−1, ·), 2 sample U ∼ Uniform(0, 1), 3 if U ≤ π(X′)q(X′, Xt) / (π(Xt)q(Xt, X′)), set Xt+1 = X′, otherwise set Xt+1 = Xt; if U ≤ π(Y′)q(Y′, Yt−1) / (π(Yt−1)q(Yt−1, Y′)), set Yt = Y′, otherwise set Yt = Yt−1. Pierre E. Jacob Unbiased MCMC
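Putting the two previous slides together, one step of P̄ can be sketched as below, reusing maximal_coupling (and rng and numpy) from the maximal-coupling sketch above; the univariate N(0, 1) target and random-walk proposals with std 0.5 are assumptions from the running example. The Gaussian proposal log-densities omit a constant that is common to both and therefore cancels in the ratios.

```python
# One draw from P_bar for random-walk MH on pi = N(0, 1), proposal std 0.5;
# reuses maximal_coupling and rng from the sketch above.
log_pi = lambda x: -0.5 * x**2
sigma = 0.5

def coupled_mh_step(x, y):
    # maximal coupling of the two proposal distributions q(x, .) and q(y, .)
    Xprop, Yprop = maximal_coupling(
        lambda: x + sigma * rng.standard_normal(), lambda z: -0.5 * ((z - x) / sigma)**2,
        lambda: y + sigma * rng.standard_normal(), lambda z: -0.5 * ((z - y) / sigma)**2)
    logU = np.log(rng.uniform())                  # common uniform for both acceptance steps
    x_next = Xprop if logU <= log_pi(Xprop) - log_pi(x) else x
    y_next = Yprop if logU <= log_pi(Yprop) - log_pi(y) else y
    return x_next, y_next
```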
  • 43. Scaling with dimension (not doing so well) With the naive maximal coupling of proposals... [Figure: average meeting time against dimension, for the two initializations labelled “target” and “offset”.] Pierre E. Jacob Unbiased MCMC
  • 44. Scaling with dimension (much better) With “reflection-maximal” couplings of proposals... [Figure: average meeting time against dimension, for the two initializations labelled “target” and “offset”.] Pierre E. Jacob Unbiased MCMC
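The reflection-maximal coupling of two Gaussian proposals N(μ1, σ²I) and N(μ2, σ²I) can be sketched as follows; this is written from the description in Jacob, O’Leary & Atchadé (2019) rather than from the slide, so the details should be treated as an assumption.

```python
# Reflection-maximal coupling of N(mu1, sigma^2 I) and N(mu2, sigma^2 I), written from
# the description in Jacob, O'Leary & Atchade (2019); details are an assumption here.
import numpy as np

rng = np.random.default_rng(6)

def reflection_maximal_coupling(mu1, mu2, sigma):
    mu1 = np.atleast_1d(np.asarray(mu1, dtype=float))
    mu2 = np.atleast_1d(np.asarray(mu2, dtype=float))
    z = (mu1 - mu2) / sigma
    if np.linalg.norm(z) == 0.0:                       # identical proposal distributions
        x = mu1 + sigma * rng.standard_normal(mu1.shape[0])
        return x, x.copy()
    e = z / np.linalg.norm(z)
    xdot = rng.standard_normal(mu1.shape[0])
    log_phi = lambda v: -0.5 * np.dot(v, v)            # standard Normal log-density, up to a constant
    if np.log(rng.uniform()) < log_phi(xdot + z) - log_phi(xdot):
        ydot = xdot + z                                # the two proposals coincide: X = Y
    else:
        ydot = xdot - 2.0 * np.dot(e, xdot) * e        # reflect the residual
    return mu1 + sigma * xdot, mu2 + sigma * ydot
```

When the first branch is taken the two proposals coincide exactly, which is what allows the chains to meet; otherwise the residual is reflected, which tends to keep the proposals close even in high dimension.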
  • 45. Hamiltonian Monte Carlo Introduce the potential energy U(q) = − log π(q), and the total energy E(q, p) = U(q) + (1/2)|p|². Hamiltonian dynamics for (q(s), p(s)), where s ≥ 0: d/ds q(s) = ∇_p E(q(s), p(s)), d/ds p(s) = − ∇_q E(q(s), p(s)). Solving the Hamiltonian dynamics exactly is not feasible, but discretization + a Metropolis–Hastings correction ensure that π remains invariant. Common random numbers can make two HMC chains contract, under assumptions on the target such as strong log-concavity. Pierre E. Jacob Unbiased MCMC
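A hedged sketch of the common-random-numbers coupling of HMC mentioned here (references on the next slide): both chains share the momentum draw and the acceptance uniform at each iteration; the standard Normal target, leapfrog step size and number of steps are illustrative assumptions. This makes the chains contract but does not by itself make them meet exactly; in the papers cited next it is combined with occasional coupled random-walk MH steps.

```python
# Two HMC chains driven by common random numbers: shared momentum draw and shared
# acceptance uniform at every iteration; illustrative target pi = N(0, I).
import numpy as np

rng = np.random.default_rng(7)
U      = lambda q: 0.5 * np.dot(q, q)       # potential U(q) = -log pi(q) for pi = N(0, I)
grad_U = lambda q: q

def leapfrog(q, p, eps, L):
    p = p - 0.5 * eps * grad_U(q)
    for _ in range(L - 1):
        q = q + eps * p
        p = p - eps * grad_U(q)
    q = q + eps * p
    p = p - 0.5 * eps * grad_U(q)
    return q, p

def coupled_hmc_step(qx, qy, eps=0.1, L=10):
    p0 = rng.standard_normal(qx.shape[0])   # common momentum for both chains
    logu = np.log(rng.uniform())            # common acceptance uniform
    new_states = []
    for q in (qx, qy):
        q_new, p_new = leapfrog(q, p0, eps, L)
        log_ratio = (U(q) + 0.5 * np.dot(p0, p0)) - (U(q_new) + 0.5 * np.dot(p_new, p_new))
        new_states.append(q_new if logu < log_ratio else q)
    return new_states[0], new_states[1]
```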
  • 46. Coupling of Hamiltonian Monte Carlo Mangoubi & Smith, Rapid mixing of HMC on strongly log-concave distributions, 2017 Bou-Rabee, Eberle & Zimmer, Coupling and Convergence for Hamiltonian Monte Carlo, 2018. Heng & Jacob, Unbiased HMC with couplings, 2019. Pierre E. Jacob Unbiased MCMC
  • 47. Coupling of Hamiltonian Monte Carlo Figure 2 of Mangoubi & Smith, Rapid mixing of HMC on strongly log-concave distributions, 2017. Coupling two copies X1, X2, . . . (blue) and Y1, Y2, . . . (green) of HMC by choosing the same momentum p_i at every step. Pierre E. Jacob Unbiased MCMC
  • 48. Scaling of Hamiltonian Monte Carlo [Figure: average meeting time against dimension, for the two initializations labelled “target” and “offset”.] Pierre E. Jacob Unbiased MCMC
  • 49. Outline 1 Monte Carlo and bias 2 Sequential Monte Carlo samplers 3 Regeneration 4 Unbiased estimators from coupled Markov chains 5 Bonus: new convergence diagnostics for MCMC Pierre E. Jacob Unbiased MCMC
  • 50. Assessing finite-time bias of MCMC Total variation distance between Xk ∼ πk and π = lim_{k→∞} πk: ‖πk − π‖_TV = (1/2) sup_{h:|h|≤1} |E[h(Xk)] − Eπ[h(X)]| = (1/2) sup_{h:|h|≤1} |E[ Σ_{t=k+1}^{τ−1} (h(Xt) − h(Yt−1)) ]| ≤ E[max(0, τ − k − 1)]. [Figures: histogram of the meeting times, and the resulting upper bound as a function of k (log scale).] Pierre E. Jacob Unbiased MCMC
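In practice the bound is estimated by replacing the expectation with an empirical average over independently sampled meeting times; an illustrative helper (meeting_times and k_values are placeholder names).

```python
# Empirical version of the bound: d_TV(pi_k, pi) <= E[max(0, tau - k - 1)], with the
# expectation replaced by an average over independently sampled meeting times.
import numpy as np

def tv_upper_bounds(meeting_times, k_values):
    taus = np.asarray(meeting_times, dtype=float)
    return np.array([np.mean(np.maximum(0.0, taus - k - 1.0)) for k in k_values])
```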
  • 51. Assessing finite-time bias of MCMC With L-lag couplings, τ(L) = inf{t ≥ L : Xt = Yt−L}, ‖πk − π‖_TV ≤ E[max(0, ⌈(τ(L) − L − k)/L⌉)]. [Figure: estimated d_TV upper bounds against iterations (log scale), for two samplers labelled SSG and PT.] Biswas, Jacob & Vanetti, Estimating Convergence of Markov chains with L-Lag Couplings, 2019. Pierre E. Jacob Unbiased MCMC
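The same computation with L-lag meeting times, following the bound above; illustrative code, with the ceiling taken inside the maximum.

```python
# Empirical version of the L-lag bound above, given lag-L meeting times tau_L.
import numpy as np

def tv_upper_bounds_lag(meeting_times_L, L, k_values):
    taus = np.asarray(meeting_times_L, dtype=float)
    return np.array([np.mean(np.maximum(0.0, np.ceil((taus - L - k) / L)))
                     for k in k_values])
```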
  • 52. Discussion Perfect samplers, which sample i.i.d. from π, would yield the same benefits and more. Is any of this helping create perfect samplers? If the underlying MCMC “doesn’t work”, the proposed unbiased estimators will have a large cost and/or a large variance. Choice of tuning parameters? Choice of lag? Why couple only two chains? Lack of bias is useful beyond parallel computation. So far we have used Markovian couplings: can we do better? Thank you for listening! Funding provided by the National Science Foundation, grants DMS-1712872 and DMS-1844695. Pierre E. Jacob Unbiased MCMC