Inference in generative models using the
Wasserstein distance
Christian P. Robert
(Paris Dauphine PSL & Warwick U.)
joint work with E. Bernton (Harvard), P.E. Jacob
(Harvard), and M. Gerber (Bristol)
Big Bayes, CIRM, Nov. 2018
Outline
1 ABC and distance between samples
2 Wasserstein distance
3 Computational aspects
4 Asymptotics
5 Handling time series
Outline
1 ABC and distance between samples
2 Wasserstein distance
3 Computational aspects
4 Asymptotics
5 Handling time series
Generative model
Assumption of a data-generating distribution $\mu_\star^{(n)}$ for data
\[ y_{1:n} = y_1, \ldots, y_n \in \mathcal{Y}^n \]
Parametric generative model
\[ \mathcal{M} = \{ \mu_\theta^{(n)} : \theta \in \mathcal{H} \} \]
such that sampling (generating) $z_{1:n}$ from $\mu_\theta^{(n)}$ is feasible
Prior distribution π(θ) available as density and generative model
Goal: inference on parameters θ given observations y1:n
Generic ABC
Basic (summary-less) ABC posterior with density
\[ (\theta, z_{1:n}) \sim \pi(\theta)\, \mu_\theta^{(n)}(z_{1:n})\, \frac{\mathbb{1}\{\|y_{1:n} - z_{1:n}\| < \varepsilon\}}{\int_{\mathcal{Y}^n} \mathbb{1}\{\|y_{1:n} - z_{1:n}\| < \varepsilon\}\, dz_{1:n}} \]
and ABC marginal
\[ q_\varepsilon(\theta) = \frac{\int_{\mathcal{Y}^n} \prod_{i=1}^{n} \mu(dz_i \mid \theta)\, \mathbb{1}\{\|y_{1:n} - z_{1:n}\| < \varepsilon\}}{\int_{\mathcal{Y}^n} \mathbb{1}\{\|y_{1:n} - z_{1:n}\| < \varepsilon\}\, dz_{1:n}} \]
unbiasedly estimated by $\pi(\theta)\, \mathbb{1}\{\|y_{1:n} - z_{1:n}\| < \varepsilon\}$ where $z_{1:n} \sim \mu_\theta^{(n)}$
Reminder: ABC-posterior goes to posterior as ε → 0
Summary statistics
Since the random variable $\|y_{1:n} - z_{1:n}\|$ may have large variance, the event
\[ \{ \|y_{1:n} - z_{1:n}\| < \varepsilon \} \]
gets rare as ε → 0, and rarer still as the dimension d increases
When using
\[ \|\eta(y_{1:n}) - \eta(z_{1:n})\| < \varepsilon \]
based on an (insufficient) summary statistic η, variance and dimension decrease, but the q-likelihood differs from the likelihood
Arbitrariness and impact of summaries, incl. curse of dimensionality
[X et al., 2011; Fearnhead & Prangle, 2012; Li & Fearnhead, 2016]
Distances between samples
Aim: resort to alternative distances $\mathcal{D}$ between samples $y_{1:n}$ and $z_{1:n}$ such that
\[ \mathcal{D}(y_{1:n}, z_{1:n}) \]
has smaller variance than $\|y_{1:n} - z_{1:n}\|$, while the induced ABC-posterior still converges to the posterior as ε → 0
ABC with order statistics
Recall that, for univariate i.i.d. data, order statistics are sufficient
1. sort the observed and generated samples $y_{1:n}$ and $z_{1:n}$
2. compute
\[ \|y_{\sigma_y(1:n)} - z_{\sigma_z(1:n)}\|_p = \Big( \sum_{i=1}^{n} |y_{(i)} - z_{(i)}|^p \Big)^{1/p} \]
for order p (e.g. 1 or 2) instead of
\[ \|y_{1:n} - z_{1:n}\|_p = \Big( \sum_{i=1}^{n} |y_i - z_i|^p \Big)^{1/p} \]
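In R, the sorted comparison is one line; a minimal sketch, assuming equal sample sizes (the name sorted_lp is illustrative, not from the slides):

  # Sorted L_p distance between two univariate samples of the same size,
  # matching the formula above: sort both samples, then take the l_p norm.
  sorted_lp <- function(y, z, p = 2) {
    stopifnot(length(y) == length(z))
    sum(abs(sort(y) - sort(z))^p)^(1 / p)
  }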
Toy example
Data-generating process given by
\[ Y_{1:1000} \sim \text{Gamma}(10, 5) \]
Hypothesised model:
\[ \mathcal{M} = \{ \mathcal{N}(\mu, \sigma^2) : (\mu, \sigma) \in \mathbb{R} \times \mathbb{R}_+ \} \]
Prior: $\mu \sim \mathcal{N}(0, 1)$ and $\sigma \sim \text{Gamma}(2, 1)$
ABC-rejection sampling: $10^5$ draws, using the Euclidean distance on sorted vs. unsorted samples, and keeping the $10^2$ draws with smallest distances
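A minimal R sketch of this rejection sampler (an assumed implementation, not the authors' code), for the sorted variant:

  # ABC rejection: 10^5 prior draws, keep the 10^2 closest in sorted L2.
  set.seed(1)
  y <- sort(rgamma(1000, shape = 10, rate = 5))   # observed sample, pre-sorted
  N <- 1e5
  mu    <- rnorm(N, 0, 1)                         # prior draws for mu
  sigma <- rgamma(N, shape = 2, rate = 1)         # prior draws for sigma
  d <- vapply(1:N, function(k) {
    z <- sort(rnorm(1000, mu[k], sigma[k]))       # generated sample, sorted
    sqrt(sum((y - z)^2))                          # sorted L2 distance
  }, numeric(1))
  keep <- order(d)[1:100]                         # indices of accepted draws
  # ABC posterior sample: mu[keep], sigma[keep]

The unsorted variant simply drops both sort() calls.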
Toy example
[Figure: ABC posterior densities of µ and σ under the plain L2 and sorted L2 distances]
ABC with transport distances
Distance
\[ \mathcal{D}(y_{1:n}, z_{1:n}) = \Big( \frac{1}{n} \sum_{i=1}^{n} |y_{(i)} - z_{(i)}|^p \Big)^{1/p} \]
is the p-Wasserstein distance between the empirical cdfs
\[ \hat\mu_n(dy) = \frac{1}{n} \sum_{i=1}^{n} \delta_{y_i}(dy) \quad\text{and}\quad \hat\nu_n(dy) = \frac{1}{n} \sum_{i=1}^{n} \delta_{z_i}(dy) \]
Rather than comparing samples as vectors, alternative representation as empirical distributions
© Novel ABC method, which does not require summary statistics, available with multivariate or dependent data
Outline
1 ABC and distance between samples
2 Wasserstein distance
3 Computational aspects
4 Asymptotics
5 Handling time series
Wasserstein distance
Ground distance $\rho : (x, y) \mapsto \rho(x, y)$ on $\mathcal{Y}$, along with an order $p \geq 1$, leads to the Wasserstein distance between $\mu, \nu \in \mathcal{P}_p(\mathcal{Y})$:
\[ W_p(\mu, \nu) = \Big( \inf_{\gamma \in \Gamma(\mu, \nu)} \int_{\mathcal{Y} \times \mathcal{Y}} \rho(x, y)^p \, d\gamma(x, y) \Big)^{1/p} \]
where $\Gamma(\mu, \nu)$ is the set of joints with marginals $\mu, \nu$, and $\mathcal{P}_p(\mathcal{Y})$ the set of distributions $\mu$ for which $\mathbb{E}_\mu[\rho(Y, y_0)^p] < \infty$ for some $y_0$
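One special case worth recording here (a standard identity, e.g. in Villani, 2003, not specific to these slides): on $\mathcal{Y} = \mathbb{R}$ with $\rho(x, y) = |x - y|$, the infimum is attained by matching quantiles,
\[ W_p(\mu, \nu)^p = \int_0^1 \big| F_\mu^{-1}(u) - F_\nu^{-1}(u) \big|^p \, du, \]
which is exactly what the sorting-based distance of the previous section computes for empirical measures.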
Wasserstein distance: univariate case
[Figure: two empirical distributions with three atoms each on the real line]
Two empirical distributions on $\mathbb{R}$ with 3 atoms:
\[ \frac{1}{3} \sum_{i=1}^{3} \delta_{y_i} \quad\text{and}\quad \frac{1}{3} \sum_{j=1}^{3} \delta_{z_j} \]
Matrix of pair-wise costs:
\[ \begin{pmatrix} \rho(y_1, z_1)^p & \rho(y_1, z_2)^p & \rho(y_1, z_3)^p \\ \rho(y_2, z_1)^p & \rho(y_2, z_2)^p & \rho(y_2, z_3)^p \\ \rho(y_3, z_1)^p & \rho(y_3, z_2)^p & \rho(y_3, z_3)^p \end{pmatrix} \]
Wasserstein distance: univariate case
[Figure: two empirical distributions with three atoms each on the real line]
Joint distribution
\[ \gamma = \begin{pmatrix} \gamma_{1,1} & \gamma_{1,2} & \gamma_{1,3} \\ \gamma_{2,1} & \gamma_{2,2} & \gamma_{2,3} \\ \gamma_{3,1} & \gamma_{3,2} & \gamma_{3,3} \end{pmatrix}, \]
with marginals $(1/3\ 1/3\ 1/3)$, corresponds to a transport cost of
\[ \sum_{i,j=1}^{3} \gamma_{i,j}\, \rho(y_i, z_j)^p \]
Wasserstein distance: univariate case
[Figure: two empirical distributions with three atoms each on the real line]
Optimal assignment:
\[ y_1 \longleftrightarrow z_3, \qquad y_2 \longleftrightarrow z_1, \qquad y_3 \longleftrightarrow z_2 \]
corresponds to the choice of joint distribution
\[ \gamma = \begin{pmatrix} 0 & 0 & 1/3 \\ 1/3 & 0 & 0 \\ 0 & 1/3 & 0 \end{pmatrix}, \]
with marginals $(1/3\ 1/3\ 1/3)$ and cost $\frac{1}{3} \sum_{i=1}^{3} \rho(y_{(i)}, z_{(i)})^p$
Wasserstein distance
Two samples $y_1, \ldots, y_n$ and $z_1, \ldots, z_m$
\[ W_p(\hat\mu_n, \hat\nu_m)^p = \inf_{\gamma \in \Gamma(\hat\mu_n, \hat\nu_m)} \sum_{i,j} \gamma_{i,j}\, \rho(y_i, z_j)^p \]
Important special case when n = m, for which the solution γ of the optimization problem corresponds to an assignment matrix, with only one non-zero entry per row and column, equal to $n^{-1}$.
[Villani, 2003]
Wasserstein distance
Two samples $y_1, \ldots, y_n$ and $z_1, \ldots, z_m$
\[ W_p(\hat\mu_n, \hat\nu_m)^p = \inf_{\gamma \in \Gamma(\hat\mu_n, \hat\nu_m)} \sum_{i,j} \gamma_{i,j}\, \rho(y_i, z_j)^p \]
Wasserstein distance thus represented, when n = m, as
\[ W_p(y_{1:n}, z_{1:n})^p = \inf_{\sigma \in \mathcal{S}_n} \frac{1}{n} \sum_{i=1}^{n} \rho(y_i, z_{\sigma(i)})^p \]
Computing the Wasserstein distance between two samples of the same size is equivalent to an optimal matching problem.
Wasserstein distance: bivariate case
[Figure: matching between two bivariate samples with three atoms each]
there exists a joint distribution γ minimizing the cost
\[ \sum_{i,j=1}^{3} \gamma_{i,j}\, \rho(y_i, z_j)^p \]
with various algorithms to compute/approximate it
Wasserstein distance
also called Vaseršteĭn, Earth Mover, Gini, Mallows, Kantorovich, Rubinstein, &tc.
can be defined between arbitrary distributions
actual distance
statistically sound:
\[ \hat\theta_n = \operatorname*{arginf}_{\theta \in \mathcal{H}} W_p\Big( \frac{1}{n} \sum_{i=1}^{n} \delta_{y_i}, \mu_\theta \Big) \to \theta_\star = \operatorname*{arginf}_{\theta \in \mathcal{H}} W_p(\mu_\star, \mu_\theta), \]
at rate $\sqrt{n}$, plus asymptotic distribution
[Bassetti & al., 2006]
Optimal transport to Parliament
Short bio
Outline
1 ABC and distance between samples
2 Wasserstein distance
3 Computational aspects
4 Asymptotics
5 Handling time series
Computing Wasserstein distances
when $\mathcal{Y} = \mathbb{R}$, computing $W_p(\hat\mu_n, \hat\nu_n)$ costs O(n log n)
when $\mathcal{Y} = \mathbb{R}^d$, exact calculation is O(n^3) [Hungarian] or O(n^{2.5} log n) [short-list]
For entropic regularization, with δ > 0,
\[ W_{p,\delta}(\hat\mu_n, \hat\nu_n)^p = \inf_{\gamma \in \Gamma(\hat\mu_n, \hat\nu_n)} \Big( \int_{\mathcal{Y} \times \mathcal{Y}} \rho(x, y)^p \, d\gamma(x, y) - \delta H(\gamma) \Big), \]
where $H(\gamma) = -\sum_{ij} \gamma_{ij} \log \gamma_{ij}$ is the entropy of γ; Sinkhorn's algorithm yields cost O(n^2)
[Genevay et al., 2016]
Computing Wasserstein distances
other approximations, like Ye et al. (2016) using simulated annealing
regularized Wasserstein not a distance, but as δ goes to zero, $W_{p,\delta}(\hat\mu_n, \hat\nu_n) \to W_p(\hat\mu_n, \hat\nu_n)$
for δ small enough, $W_{p,\delta}(\hat\mu_n, \hat\nu_n) = W_p(\hat\mu_n, \hat\nu_n)$ (exact)
in practice, δ ≈ 5% of $\text{median}(\rho(y_i, z_j)^p)_{i,j}$
[Cuturi, 2013]
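A minimal R sketch of Sinkhorn's iterations for the regularized problem above (an assumed implementation, not the authors' code; sinkhorn_cost and its arguments are illustrative):

  # Entropy-regularized transport cost between two empirical measures with
  # uniform weights; C is the n x m matrix of costs rho(y_i, z_j)^p.
  sinkhorn_cost <- function(C, delta, iters = 500) {
    n <- nrow(C); m <- ncol(C)
    a <- rep(1 / n, n)                      # uniform weights on y-atoms
    b <- rep(1 / m, m)                      # uniform weights on z-atoms
    K <- exp(-C / delta)                    # Gibbs kernel
    u <- rep(1, n)
    for (it in 1:iters) {
      v <- b / as.vector(crossprod(K, u))   # column scaling: t(K) %*% u
      u <- a / as.vector(K %*% v)           # row scaling
    }
    gamma <- K * outer(u, v)                # coupling diag(u) K diag(v)
    sum(gamma * C)                          # transport cost under gamma
  }
  # Following the slide's heuristic, one might set delta <- 0.05 * median(C)
  # before calling sinkhorn_cost(C, delta).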
Computing Wasserstein distances
cost linear in the dimension of observations
distance calculations model-independent
other transport distances calculated in O(n log n), based on different generalizations of "sorting" (swapping, Hilbert) [Gerber & Chopin, 2015]
acceleration by combination of distances and subsampling
[Figure: Hilbert space-filling curve; source: Wikipedia]
Transport distance via Hilbert curve
Sort multivariate data via space-filling curves, like the Hilbert space-filling curve
\[ H : [0, 1] \to [0, 1]^d \]
a continuous mapping, with pseudo-inverse
\[ h : [0, 1]^d \to [0, 1] \]
Compute the orders $\sigma_y, \sigma_z \in \mathcal{S}_n$ of the projected points, and compute
\[ \mathfrak{h}_p(y_{1:n}, z_{1:n}) = \Big( \frac{1}{n} \sum_{i=1}^{n} \rho(y_{\sigma_y(i)}, z_{\sigma_z(i)})^p \Big)^{1/p}, \]
called the Hilbert ordering transport distance
[Gerber & Chopin, 2015]
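A sketch of the matching step in R, assuming the permutations sigma_y and sigma_z have already been obtained by sorting the Hilbert indices h(y_i) and h(z_i) (the index computation itself is not shown; see Gerber & Chopin, 2015), and taking ρ to be the Euclidean distance:

  # Hilbert ordering transport distance given precomputed orderings;
  # y and z are n x d matrices, sigma_y and sigma_z permutations of 1:n.
  hilbert_transport <- function(y, z, sigma_y, sigma_z, p = 1) {
    diffs <- y[sigma_y, , drop = FALSE] - z[sigma_z, , drop = FALSE]
    rho <- sqrt(rowSums(diffs^2))   # Euclidean ground distance per matched pair
    mean(rho^p)^(1 / p)
  }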
Transport distance via Hilbert curve
Fact: $\mathfrak{h}_p(y_{1:n}, z_{1:n})$ is a distance between empirical distributions with n atoms, for all p ≥ 1
Hence, $\mathfrak{h}_p(y_{1:n}, z_{1:n}) = 0$ if and only if $y_{1:n} = z_{\sigma(1:n)}$ for some permutation σ, with hope to retrieve the posterior as ε → 0
Cost O(n log n) per calculation, but the encompassing sampler might be more costly than with regularized or exact Wasserstein distances
Upper bound on the corresponding Wasserstein distance, only accurate in small dimensions
Adaptive SMC with r-hit moves
Start with $\varepsilon_0 = \infty$
1. ∀k ∈ 1:N, sample $\theta_0^k \sim \pi(\theta)$ (prior)
2. ∀k ∈ 1:N, sample $z_{1:n}^k$ from $\mu_{\theta^k}^{(n)}$
3. ∀k ∈ 1:N, compute the distance $d_0^k = \mathcal{D}(y_{1:n}, z_{1:n}^k)$
4. based on $(\theta_0^k)_{k=1}^{N}$ and $(d_0^k)_{k=1}^{N}$, compute $\varepsilon_1$ s.t. resampled particles have at least 50% unique values
At step t ≥ 1, weight $w_t^k \propto \mathbb{1}(d_{t-1}^k \leq \varepsilon_t)$, resample, and perform r-hit MCMC with adaptive independent proposals
[Lee, 2012; Lee and Łatuszyński, 2014]
Toy example: bivariate Normal
100 observations from a bivariate Normal with variance 1 and covariance 0.55
Compare WABC with ABC versions based on the raw Euclidean distance and the Euclidean distance between (sufficient) sample means, on $10^6$ model simulations.
In terms of computing time, based on our R implementation on an Intel Core i7-5820K (3.30GHz), each Euclidean distance calculation takes on average $2.2 \times 10^{-4}$ s while each Wasserstein distance calculation takes on average $8.2 \times 10^{-3}$ s, i.e. about 40 times longer
Quantile “g-and-k” distribution
bivariate extension of the g-and-k distribution with quantile functions
\[ a_i + b_i \Big( 1 + 0.8\, \frac{1 - \exp(-g_i z_i(r))}{1 + \exp(-g_i z_i(r))} \Big) \big( 1 + z_i(r)^2 \big)^{k_i} z_i(r) \qquad (1) \]
and correlation ρ
Intractable density that can be numerically approximated
[Rayner and MacGillivray, 2002; Prangle, 2017]
Simulation by MCMC and W-ABC (sequential tolerance exploration)
Quantile “g-and-k” distribution
[Figure: WABC posterior marginals of a1, b1, g1, k1, a2, b2, g2, k2, and ρ; g2 over the last 10 steps; W1 distance to the posterior vs. number of simulations]
Outline
1 ABC and distance between samples
2 Wasserstein distance
3 Computational aspects
4 Asymptotics
5 Handling time series
Minimum Wasserstein estimator
Under some assumptions, $\hat\theta_n$ exists and
\[ \limsup_{n \to \infty} \operatorname*{argmin}_{\theta \in \mathcal{H}} W_p(\hat\mu_n, \mu_\theta) \subset \operatorname*{argmin}_{\theta \in \mathcal{H}} W_p(\mu_\star, \mu_\theta), \]
almost surely
In particular, if $\theta_\star = \operatorname*{argmin}_{\theta \in \mathcal{H}} W_p(\mu_\star, \mu_\theta)$ is unique, then $\hat\theta_n \overset{\text{a.s.}}{\longrightarrow} \theta_\star$
Minimum Wasserstein estimator
Under stronger assumptions, incl. well-specification, $\dim(\mathcal{Y}) = 1$, and p = 1,
\[ \sqrt{n}\,(\hat\theta_n - \theta_\star) \overset{w}{\longrightarrow} \operatorname*{argmin}_{u \in \mathcal{H}} \int_{\mathbb{R}} \big| G_\star(t) - \langle u, D_\star(t) \rangle \big| \, dt, \]
where $G_\star$ is a $\mu_\star$-Brownian bridge, and $D_\star \in (L^1(\mathbb{R}))^{d_\theta}$ satisfies
\[ \int_{\mathbb{R}} \big| F_\theta(t) - F_\star(t) - \langle \theta - \theta_\star, D_\star(t) \rangle \big| \, dt = o(\|\theta - \theta_\star\|_{\mathcal{H}}) \]
[Pollard, 1980; del Barrio et al., 1999, 2005]
Hard to use for confidence intervals, but the bootstrap is an interesting alternative.
Toy Gamma example
Data-generating process:
\[ y_{1:n} \overset{\text{i.i.d.}}{\sim} \text{Gamma}(10, 5) \]
Model:
\[ \mathcal{M} = \{ \mathcal{N}(\mu, \sigma^2) : (\mu, \sigma) \in \mathbb{R} \times \mathbb{R}_+ \} \]
MLE converges to $\operatorname*{argmin}_{\theta \in \mathcal{H}} \text{KL}(\mu_\star, \mu_\theta)$
[Figure: fitted densities, MLE top, MWE bottom]
Toy Gamma example
Data-generating process: $y_{1:n} \overset{\text{i.i.d.}}{\sim} \text{Gamma}(10, 5)$; model $\mathcal{M} = \{ \mathcal{N}(\mu, \sigma^2) : (\mu, \sigma) \in \mathbb{R} \times \mathbb{R}_+ \}$; MLE converges to $\operatorname*{argmin}_{\theta \in \mathcal{H}} \text{KL}(\mu_\star, \mu_\theta)$
[Figure: densities of the MLE (dashed) and MWE (solid) of µ and σ, for n = 10,000]
Minimum expected Wasserstein estimator
\[ \hat\theta_{n,m} = \operatorname*{argmin}_{\theta \in \mathcal{H}} \mathbb{E}[\mathcal{D}(\hat\mu_n, \hat\mu_{\theta,m})] \]
with the expectation under the distribution of the sample $z_{1:m} \sim \mu_\theta^{(m)}$ giving rise to $\hat\mu_{\theta,m} = m^{-1} \sum_{i=1}^{m} \delta_{z_i}$.
Under further assumptions, incl. m(n) → ∞ with n,
\[ \inf_{\theta \in \mathcal{H}} \mathbb{E}\, W_p(\hat\mu_n(\omega), \hat\mu_{\theta,m(n)}) \to \inf_{\theta \in \mathcal{H}} W_p(\mu_\star, \mu_\theta) \]
and
\[ \limsup_{n \to \infty} \operatorname*{argmin}_{\theta \in \mathcal{H}} \mathbb{E}\, W_p(\hat\mu_n(\omega), \hat\mu_{\theta,m(n)}) \subset \operatorname*{argmin}_{\theta \in \mathcal{H}} W_p(\mu_\star, \mu_\theta). \]
Minimum expected Wasserstein estimator
Further, for n fixed,
\[ \inf_{\theta \in \mathcal{H}} \mathbb{E}\, W_p(\hat\mu_n, \hat\mu_{\theta,m}) \to \inf_{\theta \in \mathcal{H}} W_p(\hat\mu_n, \mu_\theta) \]
as m → ∞ and
\[ \limsup_{m \to \infty} \operatorname*{argmin}_{\theta \in \mathcal{H}} \mathbb{E}\, W_p(\hat\mu_n, \hat\mu_{\theta,m}) \subset \operatorname*{argmin}_{\theta \in \mathcal{H}} W_p(\hat\mu_n, \mu_\theta). \]
Quantile “g-and-k” distribution
Sampling achieved by plugging standard Normal variables into (1) in place of z(r).
MEWE with large m can be computed to high precision
[Figure: MEWE draws of a vs. b and g vs. κ; √n-scaled estimates of κ; histogram of the data]
Asymptotics of WABC-posterior
convergence to the true posterior as ε → 0
convergence to a non-Dirac limit as n → ∞ for fixed ε
Bayesian consistency if $\varepsilon_n \downarrow$ at the proper speed
[Frazier, X & Rousseau, 2017]
WARNING: theoretical conditions extremely rarely open to checks in practice
Asymptotics of WABC-posterior
For fixed n and ε → 0, for i.i.d. data, assuming
\[ \sup_{y, \theta} \mu_\theta(y) < \infty \]
and $y \mapsto \mu_\theta(y)$ continuous, the Wasserstein ABC-posterior converges to the posterior irrespective of the choice of ρ and p
Concentration as both n → ∞ and $\varepsilon \to \varepsilon_\star = \inf_\theta W_p(\mu_\star, \mu_\theta)$
[Frazier et al., 2018]
Concentration on neighborhoods of $\theta_\star = \operatorname*{arginf}_\theta W_p(\mu_\star, \mu_\theta)$, whereas the posterior concentrates on $\operatorname*{arginf}_\theta \text{KL}(\mu_\star, \mu_\theta)$
Asymptotics of WABC-posterior
Rate of posterior concentration (and choice of $\varepsilon_n$) relates to the rate of convergence of the distance, e.g.
\[ \mu_\theta^{(n)}\Big( W_p\Big( \mu_\theta, \frac{1}{n} \sum_{i=1}^{n} \delta_{z_i} \Big) > u \Big) \leq c(\theta)\, f_n(u), \]
[Fournier & Guillin, 2015]
Rate of convergence decays with the dimension of $\mathcal{Y}$, fast or slow, depending on the moments of $\mu_\theta$ and the choice of p
Toy example: univariate
Data-generating process:
\[ Y_{1:n} \overset{\text{i.i.d.}}{\sim} \text{Gamma}(10, 5), \quad n = 100, \]
with mean 2 and standard deviation ≈ 0.63
Theoretical model:
\[ \mathcal{M} = \{ \mathcal{N}(\mu, \sigma^2) : (\mu, \sigma) \in \mathbb{R} \times \mathbb{R}_+ \} \]
Prior: $\mu \sim \mathcal{N}(0, 1)$ and $\sigma \sim \text{Gamma}(2, 1)$
[Figure: evolution of ε_t against t, the step index in the adaptive SMC sampler]
[Figure: WABC posterior of σ, showing convergence to the posterior]
Toy example: multivariate
Observation space: $\mathcal{Y} = \mathbb{R}^{10}$
Model: $Y_i \sim \mathcal{N}_{10}(\theta, S)$ for i ∈ 1:100, where $S_{kj} = 0.5^{|k-j|}$ for k, j ∈ 1:10
Data generated with θ defined as a 10-vector, chosen by drawing standard Normal variables
Prior: $\theta_i \sim \mathcal{N}(0, 1)$ for all i ∈ 1:10
[Figure: evolution of the number of distances calculated up to t, the step index in the adaptive SMC sampler]
Toy example: multivariate
(same setting)
[Figure: bivariate marginal of (θ3, θ7) approximated by the SMC sampler, posterior contours in yellow, θ_⋆ indicated by black lines]
Sum of log-Normals
Distribution of the sum of log-Normal random variables intractable but easy to simulate:
\[ x_1, \ldots, x_L \sim \mathcal{N}(\gamma, \sigma^2), \qquad y = \sum_{\ell=1}^{L} \exp(x_\ell) \]
[Figure: MEWE draws of (γ, σ); √n-scaled estimates of γ and σ; histogram of the data]
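A minimal R sketch of this generative model (rsumlogn is an illustrative name): each observation is a sum of L exponentiated Normal draws.

  # Simulate n observations, each a sum of L log-Normal variables.
  rsumlogn <- function(n, L, gamma, sigma) {
    replicate(n, sum(exp(rnorm(L, mean = gamma, sd = sigma))))
  }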
Misspecified model
Gamma(10, 5) data fitted with a Normal model $\mathcal{N}(\gamma, \sigma^2)$
approximate the MEWE by sampling k = 20 independent $u^{(i)}$ and minimizing
\[ \theta \mapsto k^{-1} \sum_{i=1}^{k} W_p\big( y_{1:n}, g_m(u^{(i)}, \theta) \big) \]
[Figure: MLE and MEWE draws of (γ, σ); densities of the estimators of γ and σ]
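A sketch of this approximate MEWE in R for the Normal model (an assumed implementation, not the authors' code): the k sets of N(0,1) draws are held fixed across θ values, W1 is computed by sorting, and m = L·n so that empirical quantiles match by replicating each sorted observation L times.

  # Approximate MEWE for N(gamma, sigma^2) fitted to a sample y.
  mewe_normal <- function(y, L = 10, k = 20) {
    n <- length(y)
    y_exp <- rep(sort(y), each = L)              # empirical quantiles of y
    u <- matrix(rnorm(L * n * k), ncol = k)      # fixed draws u^(1), ..., u^(k)
    obj <- function(theta) {
      gamma <- theta[1]; sigma <- exp(theta[2])  # log-scale keeps sigma > 0
      mean(apply(u, 2, function(ui)
        mean(abs(y_exp - sort(gamma + sigma * ui)))))   # W1 by sorting
    }
    fit <- optim(c(mean(y), log(sd(y))), obj)
    c(gamma = fit$par[1], sigma = exp(fit$par[2]))
  }

Holding the u^(i) fixed makes the objective deterministic in θ, so a standard optimizer such as optim() can be applied.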
Outline
1 ABC and distance between samples
2 Wasserstein distance
3 Computational aspects
4 Asymptotics
5 Handling time series
Method 1 (0?): ignoring dependencies
Consider only the marginal distribution
AR(1) example:
\[ y_0 \sim \mathcal{N}\Big( 0, \frac{\sigma^2}{1 - \phi^2} \Big), \qquad y_{t+1} \sim \mathcal{N}(\phi y_t, \sigma^2). \]
Marginally,
\[ y_t \sim \mathcal{N}\big( 0, \sigma^2 / (1 - \phi^2) \big), \]
which identifies $\sigma^2/(1 - \phi^2)$ but not (φ, σ)
Produces a region of plausible parameters
[Figure: for n = 1,000, generated with φ = 0.7 and log σ = 0.9]
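A minimal R sketch of the AR(1) simulator used in the example, starting from the stationary distribution (rar1 is an illustrative name):

  # Simulate an AR(1) path of length n with |phi| < 1.
  rar1 <- function(n, phi, sigma) {
    y <- numeric(n)
    y[1] <- rnorm(1, 0, sigma / sqrt(1 - phi^2))   # stationary start
    for (t in 2:n) y[t] <- rnorm(1, phi * y[t - 1], sigma)
    y
  }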
Method 2: delay reconstruction
Introduce $\tilde y_t = (y_t, y_{t-1}, \ldots, y_{t-k})$ for a lag k, and treat $\tilde y_t$ as data
AR(1) example: $\tilde y_t = (y_t, y_{t-1})$, with marginal distribution
\[ \mathcal{N}\left( \begin{pmatrix} 0 \\ 0 \end{pmatrix}, \; \frac{\sigma^2}{1 - \phi^2} \begin{pmatrix} 1 & \phi \\ \phi & 1 \end{pmatrix} \right), \]
which identifies both φ and σ
Related to Takens' theorem in the dynamical systems literature
[Figure: for n = 1,000, generated with φ = 0.7 and log σ = 0.9]
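A sketch of delay reconstruction in R (delay_embed is an illustrative name): map $y_{1:n}$ to the n − k points $(y_t, y_{t-1}, \ldots, y_{t-k})$, whose empirical distribution is then compared by a transport distance.

  # Delay embedding: returns an (n - k) x (k + 1) matrix whose
  # column j + 1 holds the lag-j coordinates y_{t-j}.
  delay_embed <- function(y, k = 1) {
    n <- length(y)
    sapply(0:k, function(j) y[(k + 1 - j):(n - j)])
  }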
Method 3: residual reconstruction
Time series $y_{1:n}$ a deterministic transform of θ and $w_{1:n}$
Given $y_{1:n}$ and θ, reconstruct $w_{1:n}$
Cosine example:
\[ y_t = A \cos(2\pi \omega t + \phi) + \sigma w_t, \qquad w_t \sim \mathcal{N}(0, 1) \]
\[ w_t = \big( y_t - A \cos(2\pi \omega t + \phi) \big) / \sigma \]
and calculate the distance between the reconstructed $w_{1:n}$ and a Normal sample
[Mengersen et al., 2013]
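A minimal R sketch of the residual reconstruction for the cosine model (cosine_residuals is an illustrative name): given θ = (A, ω, φ, σ), invert the deterministic map to recover $w_{1:n}$, to be compared with an i.i.d. N(0, 1) sample.

  # Reconstruct the noise sequence w_{1:n} from y and the parameters.
  cosine_residuals <- function(y, A, omega, phi, sigma) {
    t <- seq_along(y)
    (y - A * cos(2 * pi * omega * t + phi)) / sigma
  }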
Method 3: residual reconstruction
Cosine example: n = 500 observations with ω = 1/80, φ = π/4, σ = 1, A = 2, under priors U[0, 0.1] and U[0, 2π] for ω and φ, and $\mathcal{N}(0, 1)$ for log σ and log A
[Figure: simulated cosine series $y_t$ against time]
Cosine example with delay reconstruction, k = 3
[Figure: WABC posterior densities of ω, φ, log(σ), and log(A)]
and with residual and delay reconstructions, k = 1
[Figure: WABC posterior densities of ω, φ, log(σ), and log(A)]
Method 4: curve matching
Define $\tilde y_t = (t, y_t)$ for all t ∈ 1:n
Define a metric on $\{1, \ldots, n\} \times \mathcal{Y}$, e.g.
\[ \rho\big( (t, y_t), (s, z_s) \big) = \lambda |t - s| + |y_t - z_s|, \]
for some λ (see the sketch after this slide)
Use distance $\mathcal{D}$ to compare $\tilde y_{1:n} = (t, y_t)_{t=1}^{n}$ and $\tilde z_{1:n} = (s, z_s)_{s=1}^{n}$
If λ ≫ 1, optimal transport will associate each $(t, y_t)$ with $(t, z_t)$: we get back the "vector" norm $\|y_{1:n} - z_{1:n}\|$
If λ = 0, time indices are ignored: identical to Method 1
For any λ > 0, there is hope to retrieve the posterior as ε → 0
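A minimal R sketch of this representation (curve_augment is an illustrative name): augmenting each observation with its scaled time index makes the L1 ground distance on the pairs $(\lambda t, y_t)$ reproduce the metric ρ above, so the earlier transport machinery applies unchanged.

  # Represent a series as bivariate points (lambda * t, y_t).
  curve_augment <- function(y, lambda) {
    cbind(lambda * seq_along(y), y)
  }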
Cosine example
[Figure: posteriors of ω, φ, log(σ), and log(A) under curve matching]
Discussion
Transport metrics can be used to compare samples
Various complexities, from O(n^3 log n) to O(n^2) to O(n log n)
Asymptotic guarantees as ε → 0 for fixed n, and as n → ∞ and $\varepsilon \to \varepsilon_\star$
Various ways of applying these ideas to time series, and maybe spatial data, maps, images...
More Related Content

What's hot

Laplace's Demon: seminar #1
Laplace's Demon: seminar #1Laplace's Demon: seminar #1
Laplace's Demon: seminar #1
Christian Robert
 
Monte Carlo in Montréal 2017
Monte Carlo in Montréal 2017Monte Carlo in Montréal 2017
Monte Carlo in Montréal 2017
Christian Robert
 
CISEA 2019: ABC consistency and convergence
CISEA 2019: ABC consistency and convergenceCISEA 2019: ABC consistency and convergence
CISEA 2019: ABC consistency and convergence
Christian Robert
 
asymptotics of ABC
asymptotics of ABCasymptotics of ABC
asymptotics of ABC
Christian Robert
 
Poster for Bayesian Statistics in the Big Data Era conference
Poster for Bayesian Statistics in the Big Data Era conferencePoster for Bayesian Statistics in the Big Data Era conference
Poster for Bayesian Statistics in the Big Data Era conference
Christian Robert
 
ABC-Gibbs
ABC-GibbsABC-Gibbs
ABC-Gibbs
Christian Robert
 
Coordinate sampler: A non-reversible Gibbs-like sampler
Coordinate sampler: A non-reversible Gibbs-like samplerCoordinate sampler: A non-reversible Gibbs-like sampler
Coordinate sampler: A non-reversible Gibbs-like sampler
Christian Robert
 
ABC with Wasserstein distances
ABC with Wasserstein distancesABC with Wasserstein distances
ABC with Wasserstein distances
Christian Robert
 
accurate ABC Oliver Ratmann
accurate ABC Oliver Ratmannaccurate ABC Oliver Ratmann
accurate ABC Oliver Ratmannolli0601
 
ABC short course: introduction chapters
ABC short course: introduction chaptersABC short course: introduction chapters
ABC short course: introduction chapters
Christian Robert
 
Convergence of ABC methods
Convergence of ABC methodsConvergence of ABC methods
Convergence of ABC methods
Christian Robert
 
comments on exponential ergodicity of the bouncy particle sampler
comments on exponential ergodicity of the bouncy particle samplercomments on exponential ergodicity of the bouncy particle sampler
comments on exponential ergodicity of the bouncy particle sampler
Christian Robert
 
Can we estimate a constant?
Can we estimate a constant?Can we estimate a constant?
Can we estimate a constant?
Christian Robert
 
Likelihood-free Design: a discussion
Likelihood-free Design: a discussionLikelihood-free Design: a discussion
Likelihood-free Design: a discussion
Christian Robert
 
random forests for ABC model choice and parameter estimation
random forests for ABC model choice and parameter estimationrandom forests for ABC model choice and parameter estimation
random forests for ABC model choice and parameter estimation
Christian Robert
 
Approximating Bayes Factors
Approximating Bayes FactorsApproximating Bayes Factors
Approximating Bayes Factors
Christian Robert
 
Bayesian model choice in cosmology
Bayesian model choice in cosmologyBayesian model choice in cosmology
Bayesian model choice in cosmology
Christian Robert
 
ABC workshop: 17w5025
ABC workshop: 17w5025ABC workshop: 17w5025
ABC workshop: 17w5025
Christian Robert
 
Approximate Bayesian model choice via random forests
Approximate Bayesian model choice via random forestsApproximate Bayesian model choice via random forests
Approximate Bayesian model choice via random forests
Christian Robert
 
prior selection for mixture estimation
prior selection for mixture estimationprior selection for mixture estimation
prior selection for mixture estimation
Christian Robert
 

What's hot (20)

Laplace's Demon: seminar #1
Laplace's Demon: seminar #1Laplace's Demon: seminar #1
Laplace's Demon: seminar #1
 
Monte Carlo in Montréal 2017
Monte Carlo in Montréal 2017Monte Carlo in Montréal 2017
Monte Carlo in Montréal 2017
 
CISEA 2019: ABC consistency and convergence
CISEA 2019: ABC consistency and convergenceCISEA 2019: ABC consistency and convergence
CISEA 2019: ABC consistency and convergence
 
asymptotics of ABC
asymptotics of ABCasymptotics of ABC
asymptotics of ABC
 
Poster for Bayesian Statistics in the Big Data Era conference
Poster for Bayesian Statistics in the Big Data Era conferencePoster for Bayesian Statistics in the Big Data Era conference
Poster for Bayesian Statistics in the Big Data Era conference
 
ABC-Gibbs
ABC-GibbsABC-Gibbs
ABC-Gibbs
 
Coordinate sampler: A non-reversible Gibbs-like sampler
Coordinate sampler: A non-reversible Gibbs-like samplerCoordinate sampler: A non-reversible Gibbs-like sampler
Coordinate sampler: A non-reversible Gibbs-like sampler
 
ABC with Wasserstein distances
ABC with Wasserstein distancesABC with Wasserstein distances
ABC with Wasserstein distances
 
accurate ABC Oliver Ratmann
accurate ABC Oliver Ratmannaccurate ABC Oliver Ratmann
accurate ABC Oliver Ratmann
 
ABC short course: introduction chapters
ABC short course: introduction chaptersABC short course: introduction chapters
ABC short course: introduction chapters
 
Convergence of ABC methods
Convergence of ABC methodsConvergence of ABC methods
Convergence of ABC methods
 
comments on exponential ergodicity of the bouncy particle sampler
comments on exponential ergodicity of the bouncy particle samplercomments on exponential ergodicity of the bouncy particle sampler
comments on exponential ergodicity of the bouncy particle sampler
 
Can we estimate a constant?
Can we estimate a constant?Can we estimate a constant?
Can we estimate a constant?
 
Likelihood-free Design: a discussion
Likelihood-free Design: a discussionLikelihood-free Design: a discussion
Likelihood-free Design: a discussion
 
random forests for ABC model choice and parameter estimation
random forests for ABC model choice and parameter estimationrandom forests for ABC model choice and parameter estimation
random forests for ABC model choice and parameter estimation
 
Approximating Bayes Factors
Approximating Bayes FactorsApproximating Bayes Factors
Approximating Bayes Factors
 
Bayesian model choice in cosmology
Bayesian model choice in cosmologyBayesian model choice in cosmology
Bayesian model choice in cosmology
 
ABC workshop: 17w5025
ABC workshop: 17w5025ABC workshop: 17w5025
ABC workshop: 17w5025
 
Approximate Bayesian model choice via random forests
Approximate Bayesian model choice via random forestsApproximate Bayesian model choice via random forests
Approximate Bayesian model choice via random forests
 
prior selection for mixture estimation
prior selection for mixture estimationprior selection for mixture estimation
prior selection for mixture estimation
 

Similar to ABC based on Wasserstein distances

Workshop in honour of Don Poskitt and Gael Martin
Workshop in honour of Don Poskitt and Gael MartinWorkshop in honour of Don Poskitt and Gael Martin
Workshop in honour of Don Poskitt and Gael Martin
Christian Robert
 
Patch Matching with Polynomial Exponential Families and Projective Divergences
Patch Matching with Polynomial Exponential Families and Projective DivergencesPatch Matching with Polynomial Exponential Families and Projective Divergences
Patch Matching with Polynomial Exponential Families and Projective Divergences
Frank Nielsen
 
Maximum likelihood estimation of regularisation parameters in inverse problem...
Maximum likelihood estimation of regularisation parameters in inverse problem...Maximum likelihood estimation of regularisation parameters in inverse problem...
Maximum likelihood estimation of regularisation parameters in inverse problem...
Valentin De Bortoli
 
Threshold network models
Threshold network modelsThreshold network models
Threshold network models
Naoki Masuda
 
Asymptotics of ABC, lecture, Collège de France
Asymptotics of ABC, lecture, Collège de FranceAsymptotics of ABC, lecture, Collège de France
Asymptotics of ABC, lecture, Collège de France
Christian Robert
 
On Clustering Histograms with k-Means by Using Mixed α-Divergences
 On Clustering Histograms with k-Means by Using Mixed α-Divergences On Clustering Histograms with k-Means by Using Mixed α-Divergences
On Clustering Histograms with k-Means by Using Mixed α-Divergences
Frank Nielsen
 
QMC: Transition Workshop - Density Estimation by Randomized Quasi-Monte Carlo...
QMC: Transition Workshop - Density Estimation by Randomized Quasi-Monte Carlo...QMC: Transition Workshop - Density Estimation by Randomized Quasi-Monte Carlo...
QMC: Transition Workshop - Density Estimation by Randomized Quasi-Monte Carlo...
The Statistical and Applied Mathematical Sciences Institute
 
Computational Information Geometry on Matrix Manifolds (ICTP 2013)
Computational Information Geometry on Matrix Manifolds (ICTP 2013)Computational Information Geometry on Matrix Manifolds (ICTP 2013)
Computational Information Geometry on Matrix Manifolds (ICTP 2013)
Frank Nielsen
 
Slides: A glance at information-geometric signal processing
Slides: A glance at information-geometric signal processingSlides: A glance at information-geometric signal processing
Slides: A glance at information-geometric signal processing
Frank Nielsen
 
Testing for mixtures by seeking components
Testing for mixtures by seeking componentsTesting for mixtures by seeking components
Testing for mixtures by seeking components
Christian Robert
 
Non-informative reparametrisation for location-scale mixtures
Non-informative reparametrisation for location-scale mixturesNon-informative reparametrisation for location-scale mixtures
Non-informative reparametrisation for location-scale mixtures
Christian Robert
 
Nested sampling
Nested samplingNested sampling
Nested sampling
Christian Robert
 
Rao-Blackwellisation schemes for accelerating Metropolis-Hastings algorithms
Rao-Blackwellisation schemes for accelerating Metropolis-Hastings algorithmsRao-Blackwellisation schemes for accelerating Metropolis-Hastings algorithms
Rao-Blackwellisation schemes for accelerating Metropolis-Hastings algorithms
Christian Robert
 
Iaetsd vlsi implementation of gabor filter based image edge detection
Iaetsd vlsi implementation of gabor filter based image edge detectionIaetsd vlsi implementation of gabor filter based image edge detection
Iaetsd vlsi implementation of gabor filter based image edge detection
Iaetsd Iaetsd
 
Continuous and Discrete-Time Analysis of SGD
Continuous and Discrete-Time Analysis of SGDContinuous and Discrete-Time Analysis of SGD
Continuous and Discrete-Time Analysis of SGD
Valentin De Bortoli
 
Appendix to MLPI Lecture 2 - Monte Carlo Methods (Basics)
Appendix to MLPI Lecture 2 - Monte Carlo Methods (Basics)Appendix to MLPI Lecture 2 - Monte Carlo Methods (Basics)
Appendix to MLPI Lecture 2 - Monte Carlo Methods (Basics)
Dahua Lin
 
MUMS Opening Workshop - Panel Discussion: Facts About Some Statisitcal Models...
MUMS Opening Workshop - Panel Discussion: Facts About Some Statisitcal Models...MUMS Opening Workshop - Panel Discussion: Facts About Some Statisitcal Models...
MUMS Opening Workshop - Panel Discussion: Facts About Some Statisitcal Models...
The Statistical and Applied Mathematical Sciences Institute
 

Similar to ABC based on Wasserstein distances (20)

Workshop in honour of Don Poskitt and Gael Martin
Workshop in honour of Don Poskitt and Gael MartinWorkshop in honour of Don Poskitt and Gael Martin
Workshop in honour of Don Poskitt and Gael Martin
 
Patch Matching with Polynomial Exponential Families and Projective Divergences
Patch Matching with Polynomial Exponential Families and Projective DivergencesPatch Matching with Polynomial Exponential Families and Projective Divergences
Patch Matching with Polynomial Exponential Families and Projective Divergences
 
Maximum likelihood estimation of regularisation parameters in inverse problem...
Maximum likelihood estimation of regularisation parameters in inverse problem...Maximum likelihood estimation of regularisation parameters in inverse problem...
Maximum likelihood estimation of regularisation parameters in inverse problem...
 
Threshold network models
Threshold network modelsThreshold network models
Threshold network models
 
Asymptotics of ABC, lecture, Collège de France
Asymptotics of ABC, lecture, Collège de FranceAsymptotics of ABC, lecture, Collège de France
Asymptotics of ABC, lecture, Collège de France
 
On Clustering Histograms with k-Means by Using Mixed α-Divergences
 On Clustering Histograms with k-Means by Using Mixed α-Divergences On Clustering Histograms with k-Means by Using Mixed α-Divergences
On Clustering Histograms with k-Means by Using Mixed α-Divergences
 
QMC: Transition Workshop - Density Estimation by Randomized Quasi-Monte Carlo...
QMC: Transition Workshop - Density Estimation by Randomized Quasi-Monte Carlo...QMC: Transition Workshop - Density Estimation by Randomized Quasi-Monte Carlo...
QMC: Transition Workshop - Density Estimation by Randomized Quasi-Monte Carlo...
 
Computational Information Geometry on Matrix Manifolds (ICTP 2013)
Computational Information Geometry on Matrix Manifolds (ICTP 2013)Computational Information Geometry on Matrix Manifolds (ICTP 2013)
Computational Information Geometry on Matrix Manifolds (ICTP 2013)
 
Hsu etal-2009
Hsu etal-2009Hsu etal-2009
Hsu etal-2009
 
Slides: A glance at information-geometric signal processing
Slides: A glance at information-geometric signal processingSlides: A glance at information-geometric signal processing
Slides: A glance at information-geometric signal processing
 
Testing for mixtures by seeking components
Testing for mixtures by seeking componentsTesting for mixtures by seeking components
Testing for mixtures by seeking components
 
Non-informative reparametrisation for location-scale mixtures
Non-informative reparametrisation for location-scale mixturesNon-informative reparametrisation for location-scale mixtures
Non-informative reparametrisation for location-scale mixtures
 
Nested sampling
Nested samplingNested sampling
Nested sampling
 
Rao-Blackwellisation schemes for accelerating Metropolis-Hastings algorithms
Rao-Blackwellisation schemes for accelerating Metropolis-Hastings algorithmsRao-Blackwellisation schemes for accelerating Metropolis-Hastings algorithms
Rao-Blackwellisation schemes for accelerating Metropolis-Hastings algorithms
 
Iaetsd vlsi implementation of gabor filter based image edge detection
Iaetsd vlsi implementation of gabor filter based image edge detectionIaetsd vlsi implementation of gabor filter based image edge detection
Iaetsd vlsi implementation of gabor filter based image edge detection
 
Igv2008
Igv2008Igv2008
Igv2008
 
Continuous and Discrete-Time Analysis of SGD
Continuous and Discrete-Time Analysis of SGDContinuous and Discrete-Time Analysis of SGD
Continuous and Discrete-Time Analysis of SGD
 
Appendix to MLPI Lecture 2 - Monte Carlo Methods (Basics)
Appendix to MLPI Lecture 2 - Monte Carlo Methods (Basics)Appendix to MLPI Lecture 2 - Monte Carlo Methods (Basics)
Appendix to MLPI Lecture 2 - Monte Carlo Methods (Basics)
 
MUMS Opening Workshop - Panel Discussion: Facts About Some Statisitcal Models...
MUMS Opening Workshop - Panel Discussion: Facts About Some Statisitcal Models...MUMS Opening Workshop - Panel Discussion: Facts About Some Statisitcal Models...
MUMS Opening Workshop - Panel Discussion: Facts About Some Statisitcal Models...
 
talk MCMC & SMC 2004
talk MCMC & SMC 2004talk MCMC & SMC 2004
talk MCMC & SMC 2004
 

More from Christian Robert

Adaptive Restore algorithm & importance Monte Carlo
Adaptive Restore algorithm & importance Monte CarloAdaptive Restore algorithm & importance Monte Carlo
Adaptive Restore algorithm & importance Monte Carlo
Christian Robert
 
discussion of ICML23.pdf
discussion of ICML23.pdfdiscussion of ICML23.pdf
discussion of ICML23.pdf
Christian Robert
 
How many components in a mixture?
How many components in a mixture?How many components in a mixture?
How many components in a mixture?
Christian Robert
 
restore.pdf
restore.pdfrestore.pdf
restore.pdf
Christian Robert
 
Testing for mixtures at BNP 13
Testing for mixtures at BNP 13Testing for mixtures at BNP 13
Testing for mixtures at BNP 13
Christian Robert
 
Inferring the number of components: dream or reality?
Inferring the number of components: dream or reality?Inferring the number of components: dream or reality?
Inferring the number of components: dream or reality?
Christian Robert
 
CDT 22 slides.pdf
CDT 22 slides.pdfCDT 22 slides.pdf
CDT 22 slides.pdf
Christian Robert
 
discussion on Bayesian restricted likelihood
discussion on Bayesian restricted likelihooddiscussion on Bayesian restricted likelihood
discussion on Bayesian restricted likelihood
Christian Robert
 
eugenics and statistics
eugenics and statisticseugenics and statistics
eugenics and statistics
Christian Robert
 
a discussion of Chib, Shin, and Simoni (2017-8) Bayesian moment models
a discussion of Chib, Shin, and Simoni (2017-8) Bayesian moment modelsa discussion of Chib, Shin, and Simoni (2017-8) Bayesian moment models
a discussion of Chib, Shin, and Simoni (2017-8) Bayesian moment models
Christian Robert
 
short course at CIRM, Bayesian Masterclass, October 2018
short course at CIRM, Bayesian Masterclass, October 2018short course at CIRM, Bayesian Masterclass, October 2018
short course at CIRM, Bayesian Masterclass, October 2018
Christian Robert
 
better together? statistical learning in models made of modules
better together? statistical learning in models made of modulesbetter together? statistical learning in models made of modules
better together? statistical learning in models made of modules
Christian Robert
 

More from Christian Robert (12)

Adaptive Restore algorithm & importance Monte Carlo
Adaptive Restore algorithm & importance Monte CarloAdaptive Restore algorithm & importance Monte Carlo
Adaptive Restore algorithm & importance Monte Carlo
 
discussion of ICML23.pdf
discussion of ICML23.pdfdiscussion of ICML23.pdf
discussion of ICML23.pdf
 
How many components in a mixture?
How many components in a mixture?How many components in a mixture?
How many components in a mixture?
 
restore.pdf
restore.pdfrestore.pdf
restore.pdf
 
Testing for mixtures at BNP 13
Testing for mixtures at BNP 13Testing for mixtures at BNP 13
Testing for mixtures at BNP 13
 
Inferring the number of components: dream or reality?
Inferring the number of components: dream or reality?Inferring the number of components: dream or reality?
Inferring the number of components: dream or reality?
 
CDT 22 slides.pdf
CDT 22 slides.pdfCDT 22 slides.pdf
CDT 22 slides.pdf
 
discussion on Bayesian restricted likelihood
discussion on Bayesian restricted likelihooddiscussion on Bayesian restricted likelihood
discussion on Bayesian restricted likelihood
 
eugenics and statistics
eugenics and statisticseugenics and statistics
eugenics and statistics
 
a discussion of Chib, Shin, and Simoni (2017-8) Bayesian moment models
a discussion of Chib, Shin, and Simoni (2017-8) Bayesian moment modelsa discussion of Chib, Shin, and Simoni (2017-8) Bayesian moment models
a discussion of Chib, Shin, and Simoni (2017-8) Bayesian moment models
 
short course at CIRM, Bayesian Masterclass, October 2018
short course at CIRM, Bayesian Masterclass, October 2018short course at CIRM, Bayesian Masterclass, October 2018
short course at CIRM, Bayesian Masterclass, October 2018
 
better together? statistical learning in models made of modules
better together? statistical learning in models made of modulesbetter together? statistical learning in models made of modules
better together? statistical learning in models made of modules
 

Recently uploaded

extra-chromosomal-inheritance[1].pptx.pdfpdf
extra-chromosomal-inheritance[1].pptx.pdfpdfextra-chromosomal-inheritance[1].pptx.pdfpdf
extra-chromosomal-inheritance[1].pptx.pdfpdf
DiyaBiswas10
 
In silico drugs analogue design: novobiocin analogues.pptx
In silico drugs analogue design: novobiocin analogues.pptxIn silico drugs analogue design: novobiocin analogues.pptx
In silico drugs analogue design: novobiocin analogues.pptx
AlaminAfendy1
 
Body fluids_tonicity_dehydration_hypovolemia_hypervolemia.pptx
Body fluids_tonicity_dehydration_hypovolemia_hypervolemia.pptxBody fluids_tonicity_dehydration_hypovolemia_hypervolemia.pptx
Body fluids_tonicity_dehydration_hypovolemia_hypervolemia.pptx
muralinath2
 
Unveiling the Energy Potential of Marshmallow Deposits.pdf
Unveiling the Energy Potential of Marshmallow Deposits.pdfUnveiling the Energy Potential of Marshmallow Deposits.pdf
Unveiling the Energy Potential of Marshmallow Deposits.pdf
Erdal Coalmaker
 
Nutraceutical market, scope and growth: Herbal drug technology
Nutraceutical market, scope and growth: Herbal drug technologyNutraceutical market, scope and growth: Herbal drug technology
Nutraceutical market, scope and growth: Herbal drug technology
Lokesh Patil
 
Hemoglobin metabolism_pathophysiology.pptx
Hemoglobin metabolism_pathophysiology.pptxHemoglobin metabolism_pathophysiology.pptx
Hemoglobin metabolism_pathophysiology.pptx
muralinath2
 
erythropoiesis-I_mechanism& clinical significance.pptx
erythropoiesis-I_mechanism& clinical significance.pptxerythropoiesis-I_mechanism& clinical significance.pptx
erythropoiesis-I_mechanism& clinical significance.pptx
muralinath2
 
Penicillin...........................pptx
Penicillin...........................pptxPenicillin...........................pptx
Penicillin...........................pptx
Cherry
 
filosofia boliviana introducción jsjdjd.pptx
filosofia boliviana introducción jsjdjd.pptxfilosofia boliviana introducción jsjdjd.pptx
filosofia boliviana introducción jsjdjd.pptx
IvanMallco1
 
Observation of Io’s Resurfacing via Plume Deposition Using Ground-based Adapt...
Observation of Io’s Resurfacing via Plume Deposition Using Ground-based Adapt...Observation of Io’s Resurfacing via Plume Deposition Using Ground-based Adapt...
Observation of Io’s Resurfacing via Plume Deposition Using Ground-based Adapt...
Sérgio Sacani
 
Earliest Galaxies in the JADES Origins Field: Luminosity Function and Cosmic ...
Earliest Galaxies in the JADES Origins Field: Luminosity Function and Cosmic ...Earliest Galaxies in the JADES Origins Field: Luminosity Function and Cosmic ...
Earliest Galaxies in the JADES Origins Field: Luminosity Function and Cosmic ...
Sérgio Sacani
 
general properties of oerganologametal.ppt
general properties of oerganologametal.pptgeneral properties of oerganologametal.ppt
general properties of oerganologametal.ppt
IqrimaNabilatulhusni
 
Circulatory system_ Laplace law. Ohms law.reynaults law,baro-chemo-receptors-...
Circulatory system_ Laplace law. Ohms law.reynaults law,baro-chemo-receptors-...Circulatory system_ Laplace law. Ohms law.reynaults law,baro-chemo-receptors-...
Circulatory system_ Laplace law. Ohms law.reynaults law,baro-chemo-receptors-...
muralinath2
 
Viksit bharat till 2047 India@2047.pptx
Viksit bharat till 2047  India@2047.pptxViksit bharat till 2047  India@2047.pptx
Viksit bharat till 2047 India@2047.pptx
rakeshsharma20142015
 
Citrus Greening Disease and its Management
Citrus Greening Disease and its ManagementCitrus Greening Disease and its Management
Citrus Greening Disease and its Management
subedisuryaofficial
 
justice-and-fairness-ethics with example
justice-and-fairness-ethics with examplejustice-and-fairness-ethics with example
justice-and-fairness-ethics with example
azzyixes
 
Structures and textures of metamorphic rocks
Structures and textures of metamorphic rocksStructures and textures of metamorphic rocks
Structures and textures of metamorphic rocks
kumarmathi863
 
Anemia_ different types_causes_ conditions
Anemia_ different types_causes_ conditionsAnemia_ different types_causes_ conditions
Anemia_ different types_causes_ conditions
muralinath2
 
Lab report on liquid viscosity of glycerin
Lab report on liquid viscosity of glycerinLab report on liquid viscosity of glycerin
Lab report on liquid viscosity of glycerin
ossaicprecious19
 
Multi-source connectivity as the driver of solar wind variability in the heli...
Multi-source connectivity as the driver of solar wind variability in the heli...Multi-source connectivity as the driver of solar wind variability in the heli...
Multi-source connectivity as the driver of solar wind variability in the heli...
Sérgio Sacani
 

Recently uploaded (20)

extra-chromosomal-inheritance[1].pptx.pdfpdf
extra-chromosomal-inheritance[1].pptx.pdfpdfextra-chromosomal-inheritance[1].pptx.pdfpdf
extra-chromosomal-inheritance[1].pptx.pdfpdf
 
In silico drugs analogue design: novobiocin analogues.pptx
In silico drugs analogue design: novobiocin analogues.pptxIn silico drugs analogue design: novobiocin analogues.pptx
In silico drugs analogue design: novobiocin analogues.pptx
 
Body fluids_tonicity_dehydration_hypovolemia_hypervolemia.pptx
Body fluids_tonicity_dehydration_hypovolemia_hypervolemia.pptxBody fluids_tonicity_dehydration_hypovolemia_hypervolemia.pptx
Body fluids_tonicity_dehydration_hypovolemia_hypervolemia.pptx
 
Unveiling the Energy Potential of Marshmallow Deposits.pdf
Unveiling the Energy Potential of Marshmallow Deposits.pdfUnveiling the Energy Potential of Marshmallow Deposits.pdf
Unveiling the Energy Potential of Marshmallow Deposits.pdf
 
Nutraceutical market, scope and growth: Herbal drug technology
Nutraceutical market, scope and growth: Herbal drug technologyNutraceutical market, scope and growth: Herbal drug technology
Nutraceutical market, scope and growth: Herbal drug technology
 
Hemoglobin metabolism_pathophysiology.pptx
Hemoglobin metabolism_pathophysiology.pptxHemoglobin metabolism_pathophysiology.pptx
Hemoglobin metabolism_pathophysiology.pptx
 
erythropoiesis-I_mechanism& clinical significance.pptx
erythropoiesis-I_mechanism& clinical significance.pptxerythropoiesis-I_mechanism& clinical significance.pptx
erythropoiesis-I_mechanism& clinical significance.pptx
 
Penicillin...........................pptx
Penicillin...........................pptxPenicillin...........................pptx
Penicillin...........................pptx
 
filosofia boliviana introducción jsjdjd.pptx
filosofia boliviana introducción jsjdjd.pptxfilosofia boliviana introducción jsjdjd.pptx
filosofia boliviana introducción jsjdjd.pptx
 
Observation of Io’s Resurfacing via Plume Deposition Using Ground-based Adapt...
Observation of Io’s Resurfacing via Plume Deposition Using Ground-based Adapt...Observation of Io’s Resurfacing via Plume Deposition Using Ground-based Adapt...
Observation of Io’s Resurfacing via Plume Deposition Using Ground-based Adapt...
 
Earliest Galaxies in the JADES Origins Field: Luminosity Function and Cosmic ...
Earliest Galaxies in the JADES Origins Field: Luminosity Function and Cosmic ...Earliest Galaxies in the JADES Origins Field: Luminosity Function and Cosmic ...
Earliest Galaxies in the JADES Origins Field: Luminosity Function and Cosmic ...
 
general properties of oerganologametal.ppt
general properties of oerganologametal.pptgeneral properties of oerganologametal.ppt
general properties of oerganologametal.ppt
 
Circulatory system_ Laplace law. Ohms law.reynaults law,baro-chemo-receptors-...
Circulatory system_ Laplace law. Ohms law.reynaults law,baro-chemo-receptors-...Circulatory system_ Laplace law. Ohms law.reynaults law,baro-chemo-receptors-...
Circulatory system_ Laplace law. Ohms law.reynaults law,baro-chemo-receptors-...
 
Viksit bharat till 2047 India@2047.pptx
Viksit bharat till 2047  India@2047.pptxViksit bharat till 2047  India@2047.pptx
Viksit bharat till 2047 India@2047.pptx
 
Citrus Greening Disease and its Management
Citrus Greening Disease and its ManagementCitrus Greening Disease and its Management
Citrus Greening Disease and its Management
 
justice-and-fairness-ethics with example
justice-and-fairness-ethics with examplejustice-and-fairness-ethics with example
justice-and-fairness-ethics with example
 
Structures and textures of metamorphic rocks
Structures and textures of metamorphic rocksStructures and textures of metamorphic rocks
Structures and textures of metamorphic rocks
 
Anemia_ different types_causes_ conditions
Anemia_ different types_causes_ conditionsAnemia_ different types_causes_ conditions
Anemia_ different types_causes_ conditions
 
Lab report on liquid viscosity of glycerin
Lab report on liquid viscosity of glycerinLab report on liquid viscosity of glycerin
Lab report on liquid viscosity of glycerin
 
Multi-source connectivity as the driver of solar wind variability in the heli...
Multi-source connectivity as the driver of solar wind variability in the heli...Multi-source connectivity as the driver of solar wind variability in the heli...
Multi-source connectivity as the driver of solar wind variability in the heli...
 

ABC based on Wasserstein distances

  • 1. Inference in generative models using the Wasserstein distance Christian P. Robert (Paris Dauphine PSL & Warwick U.) joint work with E. Bernton (Harvard), P.E. Jacob (Harvard), and M. Gerber (Bristol) Big Bayes, CIRM, Nov. 2018
  • 2. Outline 1 ABC and distance between samples 2 Wasserstein distance 3 Computational aspects 4 Asymptotics 5 Handling time series
  • 3. Outline 1 ABC and distance between samples 2 Wasserstein distance 3 Computational aspects 4 Asymptotics 5 Handling time series
  • 4. Generative model Assumption of a data-generating distribution µ (n) for data y1:n = y1, . . . , yn ∈ Yn Parametric generative model M = {µ (n) θ : θ ∈ H} such that sampling (generating) z1:n from µ (n) θ is feasible Prior distribution π(θ) available as density and generative model Goal: inference on parameters θ given observations y1:n
  • 5. Generic ABC Basic (summary-less) ABC posterior with density (θ, z1:n) ∼ π(θ) µ (n) θ 1 ( y1:n − z1:n < ε) Yn 1 ( y1:n − z1:n < ε) dz1:n . and ABC marginal qε (θ) = Yn n i=1 µ(dzi|θ)1 ( y1:n − z1:n < ε) Yn 1 ( y1:n − z1:n < ε) dz1:n unbiasedly estimated by π(θ)1 ( y1:n − z1:n < ε) where z1:n ∼ µ (n) θ Reminder: ABC-posterior goes to posterior as ε → 0
  • 6. Summary statistics Since random variable y1:n − z1:n may have large variance, { y1:n − z1:n < ε} gets rare as ε → 0 and rarer when d ↑ When using |η(y1:n) − η(z1:n) < ε based on (insufficient) summary statistic η, variance and dimension decrease but q-likelihood differs from likelihood Arbitrariness and impact of summaries, incl. curse of dimensionality [X et al., 2011; Fearnhead & Prangle, 2012; Li & Fearnhead, 2016]
  • 7. Distances between samples Aim: Ressort to alternate distances D between samples y1:n and z1:n such that D(y1:n, z1:n) has smaller variance than y1:n − z1:n while induced ABC-posterior still converges to posterior when ε → 0
  • 8. ABC with order statistics Recall that, for univariate i.i.d. data, order statistics are sufficient 1. sort observed and generated samples y1:n and z1:n 2. compute yσy(1:n) − zσz(1:n) p = n i=1 |y(i) − z(i)|p 1/p for order p (e.g. 1 or 2) instead of y1:n − z1:n = n i=1 |yi − zi|p 1/p
  • 9. Toy example Data-generating process given by Y1:1000 ∼ Gamma(10, 5) Hypothesised model: M = {N(µ, σ2 ) : (µ, σ) ∈ R × R+ } Prior µ ∼ N(0, 1) and σ ∼ Gamma(2, 1) ABC-Rejection sampling: 105 draws, using Euclidean distance, on sorted vs. unsorted samples and keeping 102 draws with smallest distances
  • 10. Toy example 0 2 4 6 1.6 2.0 2.4 µ density distance: L2 sorted L2 0 2 4 0.00 0.25 0.50 0.75 1.00 σ density distance: L2 sorted L2
  • 11. ABC with transport distances Distance D(y1:n, z1:n) = 1 n n i=1 |y(i) − z(i)|p 1/p is p-Wasserstein distance between empirical cdfs ˆµn(dy) = 1 n n i=1 δyi (dy) and ˆνn(dy) = 1 n n i=1 δzi (dy) Rather than comparing samples as vectors, alternative representation as empirical distributions c Novel ABC method, which does not require summary statistics, available with multivariate or dependent data
  • 12. Outline 1 ABC and distance between samples 2 Wasserstein distance 3 Computational aspects 4 Asymptotics 5 Handling time series
  • 13. Wasserstein distance Ground distance ρ (x, y) → ρ(x, y) on Y along with order p ≥ 1 leads to Wasserstein distance between µ, ν ∈ Pp(Y), p ≥ 1: Wp(µ, ν) = inf γ∈Γ(µ,ν) Y×Y ρ(x, y)p dγ(x, y) 1/p where Γ(µ, ν) set of joints with marginals µ, ν and Pp(Y) set of distributions µ for which Eµ[ρ(Y, y0)p] < ∞ for one y0
  • 14. Wasserstein distance: univariate case 1 23 123 Two empirical distributions on R with 3 atoms: 1 3 3 i=1 δyi and 1 3 3 j=1 δzj Matrix of pair-wise costs:    ρ(y1, z1)p ρ(y1, z2)p ρ(y1, z3)p ρ(y2, z1)p ρ(y2, z2)p ρ(y2, z3)p ρ(y3, z1)p ρ(y3, z2)p ρ(y3, z3)p   
  • 15. Wasserstein distance: univariate case 1 23 123 Joint distribution γ =    γ1,1 γ1,2 γ1,3 γ2,1 γ2,2 γ2,3 γ3,1 γ3,2 γ3,3    , with marginals (1/3 1/3 1/3), corresponds to a transport cost of 3 i,j=1 γi,jρ(yi, zj)p
  • 16. Wasserstein distance: univariate case 1 23 123 Optimal assignment: y1 ←→ z3 y2 ←→ z1 y3 ←→ z2 corresponds to choice of joint distribution γ γ =    0 0 1/3 1/3 0 0 0 1/3 0    , with marginals (1/3 1/3 1/3) and cost 3 i=1 ρ(y(i), z(i))p
  • 17. Wasserstein distance Two samples y1, . . . , yn and z1, . . . , zm Wp(ˆµn, ˆνm) = 1 nm i,j ρ(y,zj) Important special case when n = m, for which solution to the optimization problem γ corresponds to an assignment matrix, with only one non-zero entry per row and column, equal to n−1. [Villani, 2003]
  • 18. Wasserstein distance Two samples y1, . . . , yn and z1, . . . , zm Wp(ˆµn, ˆνm) = 1 nm i,j ρ(y,zj) Wasserstein distance thus represented as Wp(y1:n, z1:n)p = inf σ∈Sn 1 n n i=1 ρ(yi, zσ(i))p Computing Wasserstein distance between two samples of same size equivalent to optimal matching problem.
  • 19. Wasserstein distance: bivariate case 1 2 3 1 2 3 there exists a joint distribution γ minimizing cost 3 i,j=1 γi,jρ(yi, zj)p with various algorithms to compute/approximate it
  • 20. Wasserstein distance also called Vaserˇste˘ın, Earth Mover, Gini, Mallows, Kantorovich, Rubinstein, &tc. can be defined between arbitrary distributions actual distance statistically sound: ˆθn = arginf θ∈H Wp( 1 n n i=1 δyi , µθ) → θ = arginf θ∈H Wp(µ , µθ), at rate √ n, plus asymptotic distribution [Bassetti & al., 2006]
  • 21. Optimal transport to Parliement
  • 23. Outline 1 ABC and distance between samples 2 Wasserstein distance 3 Computational aspects 4 Asymptotics 5 Handling time series
  • 24. Computing Wasserstein distances when Y = R, computing Wp(µn, νn) costs O(n log n) when Y = Rd, exact calculation is O(n3) [Hungarian]or O(n2.5 log n) [short-list] For entropic regularization, with δ > 0 Wp,δ(ˆµn, ˆνn)p = inf γ∈Γ(ˆµn,ˆνn) Y×Y ρ(x, y)p dγ(x, y) − δH(γ) , where H(γ) = − ij γij log γij entropy of γ, existence of Sinkhorn’s algorithm that yields cost O(n2) [Genevay et al., 2016]
  • 25. Computing Wasserstein distances other approximations, like Ye et al. (2016) using Simulated Annealing regularized Wasserstein not a distance, but as δ goes to zero, Wp,δ(ˆµn, ˆνn) → Wp(ˆµn, ˆνn) for δ small enough, Wp,δ(ˆµn, ˆνn) = Wp(ˆµn, ˆνn) (exact) in practice, δ 5% of median(ρ(yi, zj)p)i,j [Cuturi, 2013]
  • 26. Computing Wasserstein distances cost linear in the dimension of observations distance calculations model-independent other transport distances calculated in O(n log n), based on different generalizations of “sorting” (swapping, Hilbert) [Gerber & Chopin, 2015] acceleration by combination of distances and subsampling [source: Wikipedia]
  • 27. Computing Wasserstein distances cost linear in the dimension of observations distance calculations model-independent other transport distances calculated in O(n log n), based on different generalizations of “sorting” (swapping, Hilbert) [Gerber & Chopin, 2015] acceleration by combination of distances and subsampling [source: Wikipedia]
  • 28. Transport distance via Hilbert curve Sort multivariate data via space-filling curves, like Hilbert space-filling curve H : [0, 1] → [0, 1]d continuous mapping, with pseudo-inverse h : [0, 1]d → [0, 1] Compute order σ ∈ S of projected points, and compute hp(y1:n, z1:n) = 1 n n i=1 ρ(yσy(i), zσz(i))p 1/p , called Hilbert ordering transport distance [Gerber & Chopin, 2015]
  • 29. Transport distance via Hilbert curve Fact: hp(y1:n, z1:n) is a distance between empirical distributions with n atoms, for all p ≥ 1 Hence, hp(y1:n, z1:n) = 0 if and only if y1:n = zσ(1:n), for a permutation σ, with hope to retrieve posterior as ε → 0 Cost O(n log n) per calculation, but encompassing sampler might be more costly than with regularized or exact Wasserstein distances Upper bound on corresponding Wasserstein distance, only accurate for small dimension
  • 30. Adaptive SMC with r-hit moves Start with ε0 = ∞ 1. ∀k ∈ 1 : N, sample θk 0 ∼ π(θ) (prior) 2. ∀k ∈ 1 : N, sample zk 1:n from µ (n) θk 3. ∀k ∈ 1 : N, compute the distance dk 0 = D(y1:n, zk 1:n) 4. based on (θk 0)N k=1 and (dk 0)N k=1, compute ε1, s.t. resampled particles have at least 50% unique values At step t ≥ 1, weight wk t ∝ 1(dk t−1 ≤ εt), resample, and perform r-hit MCMC with adaptive independent proposals [Lee, 2012; Lee and Latuszy´nski, 2014]
  • 31. Toy example: bivariate Normal 100 observations from bivariate Normal with variance 1 and covariance 0.55 Compare WABC with ABC versions based on raw Euclidean distance and Euclidean distance between (sufficient) sample means on 106 model simulations.
  • 32. Toy example: bivariate Normal 100 observations from bivariate Normal with variance 1 and covariance 0.55 Compare WABC with ABC versions based on raw Euclidean distance and Euclidean distance between (sufficient) sample means on 106 model simulations. In terms of computing time, based on our R implementation on an Intel Core i7-5820K (3.30GHz), each Euclidean distance calculation takes an average 2.2 x 104 s while each Wasserstein distance calculation takes an average 8:2 x 103s, i.e. 40 times greater
  • 33. Quantile “g-and-k” distribution bivariate extension of the g-and-k distribution with quantile functions ai + bi 1 + 0.8 1 − exp(−gizi(r) 1 + exp(−giz(r) 1 + zi(r)2 k zi(r) (1) and correlation ρ Intractable density that can be numerically approximated [Rayner and MacGillivray, 2002; Prangle, 2017] Simulation by MCMC and W-ABC (sequential tolerance exploration)
  • 34. Quantile “g-and-k” distribution (a) a1. (b) b1. (c) g1. (d) k1. (e) a2. (f) b2. (g) g2. (h) k2. (i) ρ. (j) g2 last 10 steps. (k) W1 to posterior, vs. simulations.
  • 35. Outline 1 ABC and distance between samples 2 Wasserstein distance 3 Computational aspects 4 Asymptotics 5 Handling time series
  • 36. Minimum Wasserstein estimator Under some assumptions, ˆθn exists and lim sup n→∞ argmin θ∈H Wp(ˆµn, µθ) ⊂ argmin θ∈H Wp(µ , µθ), almost surely In particular, if θ = argminθ∈H Wp(µ , µθ) is unique, then ˆθn a.s. −→ θ
  • 37. Minimum Wasserstein estimator Under stronger assumptions, incl. well-specification, dim(Y) = 1, and p = 1 √ n(ˆθn − θ ) w −→ argmin u∈H R |G (t) − u, D (t) |dt, where G is a µ -Brownian bridge, and D ∈ (L1(R))dθ satisfies R |Fθ(t) − F (t) − θ − θ , D (t) |dt = o( θ − θ H) [Pollard, 1980; del Barrio et al., 1999, 2005] Hard to use for confidence intervals, but the bootstrap is an intersting alternative.
  • 38. Toy Gamma example
Data-generating process: y_{1:n} i.i.d. ∼ Gamma(10, 5)
Model: M = {N(µ, σ^2) : (µ, σ) ∈ R × R_+}
MLE converges to argmin_{θ∈H} KL(µ⋆, µ_θ)
[figure: fitted Normal densities against the data-generating Gamma; MLE top, MWE bottom]
  • 39. Toy Gamma example
Data-generating process: y_{1:n} i.i.d. ∼ Gamma(10, 5)
Model: M = {N(µ, σ^2) : (µ, σ) ∈ R × R_+}
MLE converges to argmin_{θ∈H} KL(µ⋆, µ_θ)
[figure: sampling densities of the estimators of µ and σ for n = 10,000; MLE dashed, MWE solid]
  • 40. Minimum expected Wasserstein estimator
θ̂_{n,m} = argmin_{θ∈H} E[D(µ̂_n, µ̂_{θ,m})]
with the expectation under the distribution of the sample z_{1:m} ∼ µ_θ^{(m)}, giving rise to µ̂_{θ,m} = m^{−1} ∑_{i=1}^m δ_{z_i}
  • 41. Minimum expected Wasserstein estimator
θ̂_{n,m} = argmin_{θ∈H} E[D(µ̂_n, µ̂_{θ,m})]
with the expectation under the distribution of the sample z_{1:m} ∼ µ_θ^{(m)}, giving rise to µ̂_{θ,m} = m^{−1} ∑_{i=1}^m δ_{z_i}
Under further assumptions, incl. m(n) → ∞ with n,
inf_{θ∈H} E W_p(µ̂_n(ω), µ̂_{θ,m(n)}) → inf_{θ∈H} W_p(µ⋆, µ_θ)
and
lim sup_{n→∞} argmin_{θ∈H} E W_p(µ̂_n(ω), µ̂_{θ,m(n)}) ⊂ argmin_{θ∈H} W_p(µ⋆, µ_θ)
  • 42. Minimum expected Wasserstein estimator
θ̂_{n,m} = argmin_{θ∈H} E[D(µ̂_n, µ̂_{θ,m})]
with the expectation under the distribution of the sample z_{1:m} ∼ µ_θ^{(m)}, giving rise to µ̂_{θ,m} = m^{−1} ∑_{i=1}^m δ_{z_i}
Further, for fixed n, as m → ∞,
inf_{θ∈H} E W_p(µ̂_n, µ̂_{θ,m}) → inf_{θ∈H} W_p(µ̂_n, µ_θ)
and
lim sup_{m→∞} argmin_{θ∈H} E W_p(µ̂_n, µ̂_{θ,m}) ⊂ argmin_{θ∈H} W_p(µ̂_n, µ_θ)
  • 43. Quantile “g-and-k” distribution
Sampling achieved by plugging standard Normal variables into (1) in place of z(r)
MEWE with large m can be computed to high precision
[figure: MEWE scatterplots of a vs. b and of g vs. k; √n-scaled estimates of k; histogram of the data]
  • 44. Asymptotics of the WABC-posterior
- convergence to the true posterior as ε → 0
- convergence to a non-Dirac limit as n → ∞ for fixed ε
- Bayesian consistency if ε_n ↓ at the proper speed
[Frazier, X & Rousseau, 2017]
WARNING: the theoretical conditions are extremely rarely open to verification in practice
  • 46. Asymptotics of the WABC-posterior
For fixed n and ε → 0, for i.i.d. data, assuming
- sup_{y,θ} µ_θ(y) < ∞
- y ↦ µ_θ(y) continuous,
the Wasserstein ABC-posterior converges to the posterior, irrespective of the choice of ρ and p
Concentration as both n → ∞ and ε → ε⋆ = inf_θ W_p(µ⋆, µ_θ) [Frazier et al., 2018]
Concentration on neighbourhoods of θ⋆ = arginf_θ W_p(µ⋆, µ_θ), whereas the posterior concentrates on arginf_θ KL(µ⋆, µ_θ)
  • 47. Asymptotics of the WABC-posterior
Rate of posterior concentration (and choice of ε_n) relates to the rate of convergence of the distance, e.g.
µ_θ^{(n)} ( W_p(µ_θ, n^{−1} ∑_{i=1}^n δ_{z_i}) > u ) ≤ c(θ) f_n(u),
[Fournier & Guillin, 2015]
Rate of convergence decays with the dimension of Y, fast or slow depending on the moments of µ_θ and the choice of p
  • 49. Toy example: univariate
Data-generating process: Y_{1:n} i.i.d. ∼ Gamma(10, 5), n = 100, with mean 2 and standard deviation ≈ 0.63
[figure: evolution of ε_t against t, the step index in the adaptive SMC sampler]
Theoretical model: M = {N(µ, σ^2) : (µ, σ) ∈ R × R_+}
Prior: µ ∼ N(0, 1) and σ ∼ Gamma(2, 1)
[figure: WABC-posterior density of σ across steps; convergence to the posterior]
  • 51. Toy example: multivariate
Observation space: Y = R^10
Model: Y_i ∼ N_10(θ, S) for i ∈ 1 : 100, where S_{kj} = 0.5^{|k−j|} for k, j ∈ 1 : 10
Data generated with θ⋆ defined as a 10-vector of standard Normal draws
Prior: θ_i ∼ N(0, 1) for all i ∈ 1 : 10
  • 52. Toy example: multivariate
Observation space: Y = R^10
Model: Y_i ∼ N_10(θ, S) for i ∈ 1 : 100, where S_{kj} = 0.5^{|k−j|} for k, j ∈ 1 : 10
Data generated with θ⋆ defined as a 10-vector of standard Normal draws
Prior: θ_i ∼ N(0, 1) for all i ∈ 1 : 10
[figure: evolution of the number of distances calculated up to step t in the adaptive SMC sampler]
  • 53. Toy example: multivariate
Observation space: Y = R^10
Model: Y_i ∼ N_10(θ, S) for i ∈ 1 : 100, where S_{kj} = 0.5^{|k−j|} for k, j ∈ 1 : 10
Data generated with θ⋆ defined as a 10-vector of standard Normal draws
Prior: θ_i ∼ N(0, 1) for all i ∈ 1 : 10
[figure: bivariate marginal of (θ_3, θ_7) approximated by the SMC sampler; posterior contours in yellow, θ⋆ indicated by black lines]
  • 54. Sum of log-Normals
Distribution of a sum of log-Normal random variables, intractable but easy to simulate:
x_1, …, x_L ∼ N(γ, σ^2), y = ∑_{ℓ=1}^L exp(x_ℓ)
[figure: MEWE scatterplot of (γ, σ); √n-scaled estimates of γ and of σ; histogram of the data]
  • 55. Misspecified model
Gamma(10, 5) data fitted with a Normal model N(γ, σ^2)
Approximate the MEWE by sampling k = 20 independent u^{(i)} and minimising
θ ↦ k^{−1} ∑_{i=1}^k W_p(y_{1:n}, g_m(u^{(i)}, θ)); a sketch follows
[figure: MLE and MEWE scatterplots of (γ, σ); sampling densities of the estimators of γ and of σ]
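A sketch of this fixed-randomness recipe for the Gamma-vs-Normal fit, with common random numbers u^{(i)} pushed through the Normal quantile function (my stand-in for g_m):

```r
# Approximate MEWE: fix k = 20 sets of uniforms u^(i), map them through
# the Normal quantile function (playing the role of g_m(u, theta)), and
# minimise the average sorted 1-Wasserstein distance to the data.
set.seed(8)
y <- rgamma(1000, 10, 5)
k <- 20; m <- 1000
U  <- matrix(runif(k * m), k, m)       # the fixed u^(i), one per row
ys <- sort(y)
obj <- function(th) {
  zs <- qnorm(U, th[1], exp(th[2]))    # g_m(u^(i), theta), row by row
  mean(apply(zs, 1, function(z) mean(abs(ys - sort(z)))))
}
fit <- optim(c(mean(y), log(sd(y))), obj)
c(gamma = fit$par[1], sigma = exp(fit$par[2]))   # approximate MEWE
```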
  • 56. Outline 1 ABC and distance between samples 2 Wasserstein distance 3 Computational aspects 4 Asymptotics 5 Handling time series
  • 57. Method 1 (0?): ignoring dependencies
Consider only the marginal distribution
AR(1) example: y_0 ∼ N(0, σ^2/(1 − φ^2)), y_{t+1} ∼ N(φ y_t, σ^2)
Marginally, y_t ∼ N(0, σ^2/(1 − φ^2)), which identifies σ^2/(1 − φ^2) but not (φ, σ)
Produces a region of plausible parameters, as the sketch below illustrates
[figure: W-ABC posterior for n = 1,000, data generated with φ = 0.7 and log σ = 0.9]
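A sketch of the non-identifiability: two very different (φ, σ) pairs with the same marginal variance are nearly indistinguishable by the marginal distance (the simulator rar1 is mine):

```r
# Simulate an AR(1) path from its stationary distribution.
rar1 <- function(n, phi, sigma) {
  y <- numeric(n)
  y[1] <- rnorm(1, 0, sigma / sqrt(1 - phi^2))
  for (t in 2:n) y[t] <- rnorm(1, phi * y[t - 1], sigma)
  y
}

set.seed(6)
y <- rar1(1000, phi = 0.7, sigma = exp(0.9))
# White noise (phi = 0) with matching marginal variance sigma^2/(1 - phi^2):
z <- rar1(1000, phi = 0.0, sigma = exp(0.9) / sqrt(1 - 0.7^2))
mean(abs(sort(y) - sort(z)))  # small relative to the data scale:
                              # the marginal alone cannot separate them
```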
  • 58. Method 2: delay reconstruction
Introduce ỹ_t = (y_t, y_{t−1}, …, y_{t−k}) for lag k, and treat ỹ_t as the data
AR(1) example: ỹ_t = (y_t, y_{t−1}) with marginal distribution
N( (0, 0)^T, σ^2/(1 − φ^2) [[1, φ], [φ, 1]] ),
which identifies both φ and σ
Related to Takens' theorem in the dynamical systems literature
[figure: W-ABC posterior for n = 1,000, data generated with φ = 0.7 and log σ = 0.9]
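A sketch of the reconstruction with lag k = 1, reusing wasserstein_exact from the bivariate Normal sketch and rar1, y, z from Method 1; base R's embed builds the delay vectors:

```r
# Delay reconstruction: rows of embed(y, k + 1) are (y_t, ..., y_{t-k}).
Yd <- embed(y, 2)   # bivariate points (y_t, y_{t-1}) from the AR(1) data
Zd <- embed(z, 2)   # same for the white-noise sample of Method 1
# O(n^3) assignment: takes a few seconds at this sample size.
wasserstein_exact(Yd, Zd, p = 1)   # clearly positive: the delay pairs
                                   # now separate the two (phi, sigma)
```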
  • 59. Method 3: residual reconstruction
Time series y_{1:n} a deterministic transform of θ and w_{1:n}
Given y_{1:n} and θ, reconstruct w_{1:n}
Cosine example: y_t = A cos(2πωt + φ) + σ w_t with w_t ∼ N(0, 1), so
w_t = (y_t − A cos(2πωt + φ))/σ,
and calculate the distance between the reconstructed w_{1:n} and a Normal sample
[Mengersen et al., 2013]
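A sketch of the residual reconstruction for the cosine model, comparing reconstructed residuals to a fresh N(0, 1) sample with the sorted distance:

```r
# Cosine model: y_t = A cos(2 pi omega t + phi) + sigma w_t, w_t ~ N(0,1).
set.seed(7)
n <- 500; tt <- 1:n
omega <- 1/80; phi <- pi/4; A <- 2; sigma <- 1
y <- A * cos(2 * pi * omega * tt + phi) + sigma * rnorm(n)

# Distance at a candidate theta = (omega, phi, log sigma, log A):
# invert the observation equation (uses tt from above), then compare the
# residuals to N(0, 1) draws via the sorted 1-Wasserstein distance.
dist_residual <- function(theta, y) {
  w <- (y - exp(theta[4]) * cos(2 * pi * theta[1] * tt + theta[2])) /
       exp(theta[3])
  mean(abs(sort(w) - sort(rnorm(length(y)))))
}
dist_residual(c(omega, phi, log(sigma), log(A)), y)  # small at the truth
```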
  • 60. Method 3: residual reconstruction
Time series y_{1:n} a deterministic transform of θ and w_{1:n}
Given y_{1:n} and θ, reconstruct w_{1:n}
Cosine example: y_t = A cos(2πωt + φ) + σ w_t
n = 500 observations with ω = 1/80, φ = π/4, σ = 1, A = 2, under priors U[0, 0.1] and U[0, 2π] for ω and φ, and N(0, 1) for log σ and log A
[figure: the simulated cosine time series y_t over t = 1, …, 500]
  • 61. Cosine example with delay reconstruction, k = 3
[figure: marginal W-ABC posteriors of ω, φ, log(σ), and log(A)]
  • 62. ... and with residual and delay reconstructions, k = 1
[figure: marginal W-ABC posteriors of ω, φ, log(σ), and log(A)]
  • 63. Method 4: curve matching
Define ỹ_t = (t, y_t) for all t ∈ 1 : n
Define a metric on {1, …, n} × Y, e.g. ρ((t, y_t), (s, z_s)) = λ|t − s| + |y_t − z_s| for some λ
Use the distance D to compare ỹ_{1:n} = (t, y_t)_{t=1}^n and z̃_{1:n} = (s, z_s)_{s=1}^n
If λ ≫ 1, optimal transport associates each (t, y_t) with (t, z_t): we get back the “vector” norm ‖y_{1:n} − z_{1:n}‖
If λ = 0, time indices are ignored: identical to Method 1
For any λ > 0, there is hope of retrieving the posterior as ε → 0
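A sketch of curve matching as an assignment problem under the ground cost λ|t − s| + |y_t − z_s| (again via clue::solve_LSAP; y, tt and the cosine parameters are those of the residual-reconstruction sketch):

```r
# Curve matching: optimal transport between (t, y_t) and (s, z_s) under
# the cost lambda * |t - s| + |y_t - z_s|, solved as an assignment.
curve_matching <- function(y, z, lambda) {
  n <- length(y)
  C <- lambda * abs(outer(1:n, 1:n, "-")) + abs(outer(y, z, "-"))
  perm <- as.vector(solve_LSAP(C))   # optimal matching of time points
  mean(C[cbind(1:n, perm)])
}

z <- A * cos(2 * pi * omega * tt + phi + 0.3) + sigma * rnorm(n)
curve_matching(y, z, lambda = 1)     # large lambda: close to the vector norm
curve_matching(y, z, lambda = 0.01)  # small lambda: close to Method 1
```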
  • 64. Cosine example
[figure: posteriors of ω, φ, log(σ), and log(A) under curve matching]
  • 65. Discussion
Transport metrics can be used to compare samples
Various complexities, from n^3 log n to n^2 to n log n
Asymptotic guarantees as ε → 0 for fixed n, and as n → ∞ and ε → ε⋆
Various ways of applying these ideas to time series, and perhaps spatial data, maps, images…