ABCD: ABC in High Dimensions
Christian P. Robert
Université Paris-Dauphine, University of Warwick, & IUF
workshop 17w5025, BIRS, Banff, Alberta, Canada
Approximate Bayesian computation
1 Approximate Bayesian computation
ABC basics
Remarks on high D
Non-parametric roots
Automated summary selection
Series B discussion
The ABC method
Bayesian setting: target is π(θ)f(x|θ)
When the likelihood f(x|θ) is not available in closed form, likelihood-free rejection technique:
ABC algorithm
For an observation y ∼ f(y|θ), under the prior π(θ), keep jointly simulating
θ′ ∼ π(θ), z ∼ f(z|θ′),
until the auxiliary variable z is equal to the observed value, z = y.
[Tavaré et al., 1997]
A as A...pproximative
When y is a continuous random variable, equality z = y is replaced with a tolerance condition,
ρ(y, z) ≤ ε
where ρ is a distance
Output distributed from
π(θ) Pθ{ρ(y, z) < ε} ∝ π(θ | ρ(y, z) < ε)
[Pritchard et al., 1999]
ABC algorithm
Algorithm 1 Likelihood-free rejection sampler 2
for i = 1 to N do
  repeat
    generate θ′ from the prior distribution π(·)
    generate z from the likelihood f(·|θ′)
  until ρ{η(z), η(y)} ≤ ε
  set θ_i = θ′
end for
where η(y) defines a (not necessarily sufficient) statistic
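As a concrete illustration, a minimal Python sketch of this likelihood-free rejection sampler on a toy normal-mean model; the model, prior, summary η, distance ρ and tolerance ε below are illustrative choices for this sketch, not part of the slides.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (illustrative): y ~ N(theta, 1), theta ~ N(0, 10), summary = sample mean
y_obs = rng.normal(2.0, 1.0, size=50)
eta = lambda x: np.array([x.mean()])          # (not necessarily sufficient) summary
rho = lambda a, b: np.abs(a - b).sum()        # distance between summaries
eps = 0.05                                    # tolerance

def abc_rejection(N):
    """Keep theta' ~ prior whose pseudo-data z ~ f(.|theta') land within eps of y_obs."""
    sample = []
    while len(sample) < N:
        theta = rng.normal(0.0, np.sqrt(10.0))        # generate theta' from the prior
        z = rng.normal(theta, 1.0, size=y_obs.size)   # generate z from the likelihood
        if rho(eta(z), eta(y_obs)) <= eps:            # until rho{eta(z), eta(y)} <= eps
            sample.append(theta)
    return np.array(sample)

theta_abc = abc_rejection(200)
print(theta_abc.mean(), theta_abc.std())
```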
Output
The likelihood-free algorithm samples from the marginal in z of:
π_ε(θ, z|y) = π(θ) f(z|θ) I_{A_{ε,y}}(z) / ∫_{A_{ε,y}×Θ} π(θ) f(z|θ) dz dθ ,
where A_{ε,y} = {z ∈ D | ρ(η(z), η(y)) < ε}.
The idea behind ABC is that the summary statistics coupled with a small tolerance should provide a good approximation of the posterior distribution:
π_ε(θ|y) = ∫ π_ε(θ, z|y) dz ≈ π(θ|η(y)) .
MA example
Consider the MA(q) model
x_t = ε_t + Σ_{i=1}^q ϑ_i ε_{t−i}
Simple prior(s): uniform over the inverse [real and complex] roots in
Q(u) = 1 − Σ_{i=1}^q ϑ_i u^i
under the identifiability conditions
MA example
Consider the MA(q) model
x_t = ε_t + Σ_{i=1}^q ϑ_i ε_{t−i}
Simple prior(s): uniform prior over the identifiability zone, e.g. triangle for MA(2)
MA example (2)
ABC algorithm thus made of
1 picking a new value (ϑ1, ϑ2) in the triangle
2 generating an iid sequence (ε_t)_{−q<t≤T}
3 producing a simulated series (x′_t)_{1≤t≤T}
Distance: basic distance between the series
ρ((x′_t)_{1≤t≤T}, (x_t)_{1≤t≤T}) = Σ_{t=1}^T (x′_t − x_t)²
or distance between summary statistics like the q autocorrelations
τ_j = Σ_{t=j+1}^T x_t x_{t−j}
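A hedged Python sketch of this MA(2) ABC scheme, with the identifiability triangle written out explicitly and both distances from the slide implemented; the prior sampler, sample sizes and quantile-based tolerance are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)
T, q = 100, 2

def sample_prior():
    """Uniform draw over the MA(2) identifiability triangle (rejection from a box)."""
    while True:
        t1 = rng.uniform(-2.0, 2.0)
        t2 = rng.uniform(-1.0, 1.0)
        if t1 + t2 > -1.0 and t1 - t2 < 1.0:
            return np.array([t1, t2])

def simulate(theta):
    """x_t = eps_t + theta_1 eps_{t-1} + theta_2 eps_{t-2}, with iid N(0,1) noise."""
    eps = rng.normal(size=T + q)
    return eps[q:] + theta[0] * eps[q - 1:-1] + theta[1] * eps[:-q]

def dist_raw(x, y):
    return np.sum((x - y) ** 2)                       # basic distance between the series

def autocov(x, q=2):
    return np.array([np.sum(x[j:] * x[:-j]) for j in range(1, q + 1)])

def dist_acf(x, y):
    return np.sum((autocov(x) - autocov(y)) ** 2)     # distance between the q autocorrelations

y_obs = simulate(np.array([0.6, 0.2]))
draws = [(th, simulate(th)) for th in (sample_prior() for _ in range(10000))]
d = np.array([dist_acf(x, y_obs) for _, x in draws])
eps_tol = np.quantile(d, 0.01)                        # keep the 1% closest simulations
kept = np.array([th for (th, _), di in zip(draws, d) if di <= eps_tol])
print(kept.mean(axis=0))
```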
Comparison of distance impact
[Figure: ABC posterior samples for θ1 and θ2 at each tolerance level]
Evaluation of the tolerance ε on the ABC sample against both distances (ε = 100%, 10%, 1%, 0.1%) for an MA(2) model
Approximate Bayesian computation
1 Approximate Bayesian computation
ABC basics
Remarks on high D
Non-parametric roots
Automated summary selection
Series B discussion
ABC[and the]D[curse]
“These theoretical results suggests that when performing
ABC in high dimensions, the most important problem is
how to address the poor performance of the ABC
estimator. This is often bourne out in practice.” R.
Everitt
• how to most effectively make use of the sample from the joint
distribution on (θ, t) that is generated by ABC algorithms
• algorithms that estimate the distribution or statistics of θ
given η(y)
• methods that estimate the conditional η(y)|θ, which is often less important in this context (but is still relevant in some applications)
• design of inference algorithms that are computationally
scalable to high dimensional cases
Parameter given summary
• local regression correction à la Beaumont et al. (2002)
• non-parametric regression à la Blum et al. (2013)
• ridge regression à la Blum et al. (2013)
• semi-automatic ABC à la Fearnhead and Prangle (2012)
• neural networks à la Blum and François (2010)
• random forest à la Marin et al. (2016)
• not particularly dimension resistant
• restricted to estimation
Summary given parameter
• non-parametric regression à la Blum et al. (2013)
• synthetic likelihood à la Wood (2010) and Dutta et al. (2016)
• Gaussian processes à la Wilkinson (2014), Meeds and Welling (2013), Järvenpää et al. (2016)
Reproducing kernel Hilbert spaces
Several papers suggesting moving from kernel estimation to RKHS
regression:
• Fukumizu, Le & Gretton (2013)
• Nakagome, Fukumizu, & Mano (2013)
• Park, Jitkrittum, & Sejdinovic (2016)
• Mitrovic, Sejdinovic, & Teh (2016)
Reproducing kernel Hilbert spaces
Several papers suggesting moving from kernel estimation to RKHS
regression:
with arguments
• removing dependence on summary statistics by using “the
data themselves”
• correlatively no loss of information
• providing a form of distance between empirical measures
• learning directly the regression
• relatively easy and fast implementation (with linear time
MMD)
K-ABC = ABC + RKHS
“A notable property of KBR is that the prior and likelihood are represented in terms of samples.”
[Park et al., 2016]
Concept of a kernel function h(·, ·) that serves as a discrepancy between two samples or two sets of summary statistics, h(y, y_obs)
Estimates based on a learning sample (θ_t, y_t)_{t=1}^T expressed as
Σ_{t=1}^T ω_t h(θ_t)
with G = (h(η(y_i), η(y_j)))_{i,j} and
ω = [G + nε_n I_n]^{−1} (h(η(y_1), η(y_obs)) · · · h(η(y_T), η(y_obs)))^T
[Fukumizu et al., 2013]
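A rough Python sketch of these kernel-based weights on a toy conjugate model; the Gaussian kernel, its bandwidth and the regularization ε_n are illustrative tuning choices, and the estimate takes the identity as test function so that it approximates E[θ | y_obs].

```python
import numpy as np

rng = np.random.default_rng(2)
n = 500

# Learning sample (theta_t, y_t): theta ~ N(0,1), y | theta ~ N(theta, 1) (toy, illustrative)
theta = rng.normal(size=n)
y = theta + rng.normal(size=n)
y_obs = 1.5

def k(a, b, bw=0.5):
    """Gaussian kernel on the (here one-dimensional) summaries eta(y) = y."""
    return np.exp(-0.5 * ((a - b) / bw) ** 2)

G = k(y[:, None], y[None, :])        # Gram matrix G = (h(eta(y_i), eta(y_j)))_{i,j}
k_obs = k(y, y_obs)                  # vector (h(eta(y_1), eta(y_obs)), ..., h(eta(y_T), eta(y_obs)))
eps_n = 0.01                         # regularization eps_n, a tuning choice
omega = np.linalg.solve(G + n * eps_n * np.eye(n), k_obs)

# Kernel-regression estimate of the posterior expectation E[theta | y_obs]
print(omega @ theta)
# exact posterior mean for this conjugate toy model is y_obs / 2 = 0.75, for comparison
```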
K2-ABC = ABC + RKHS + RKHS
Second level RKHS when
h(y_i, y_j) = exp{−ε^{−1} MMD²(y_i, y_j)}
• no need of summary statistics
• requires maximum mean discrepancy or another distance between distributions
• extension to non-iid data problematic
[Park et al., 2016]
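A small Python sketch of this K2-ABC weight; it uses the simple quadratic-time (biased, V-statistic) Gaussian-kernel MMD estimator rather than the linear-time version mentioned earlier, and the bandwidth and ε are illustrative choices.

```python
import numpy as np

def mmd2(x, y, bw=1.0):
    """Squared maximum mean discrepancy between two 1-d samples (Gaussian kernel, V-statistic)."""
    k = lambda a, b: np.exp(-0.5 * ((a[:, None] - b[None, :]) / bw) ** 2)
    return k(x, x).mean() + k(y, y).mean() - 2.0 * k(x, y).mean()

def k2_weight(y_sim, y_obs, eps=0.1):
    """K2-ABC weight h(y_sim, y_obs) = exp{-MMD^2(y_sim, y_obs)/eps}: no summary statistics needed."""
    return np.exp(-mmd2(y_sim, y_obs) / eps)

rng = np.random.default_rng(3)
y_obs = rng.normal(0.0, 1.0, 200)
y_sim = rng.normal(0.3, 1.0, 200)
print(k2_weight(y_sim, y_obs))
```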
DR-ABC = ABC + RKHS + RKHS
Kernel-based distribution regression (DR): instead of
kernel-weighted regression for posterior expectations, construction
of ABC summaries
Claims optimality of the summary statistics against a given loss function, but this remains unclear to me
Ridge-like summary
Θ [G + nε_n I_n]^{−1} h
close to the above derivation of weights
[Mitrovic et al., 2016]
Approximate Bayesian computation
1 Approximate Bayesian computation
ABC basics
Remarks on high D
Non-parametric roots
Automated summary selection
Series B discussion
ABC-NP
Better usage of [prior] simulations by adjustment: instead of throwing away θ′ such that ρ(η(z), η(y)) > ε, replace θ′s with locally regressed transforms (use with BIC)
θ* = θ − {η(z) − η(y)}^T β̂
[Csilléry et al., TEE, 2010]
where β̂ is obtained by [NP] weighted least squares regression on (η(z) − η(y)) with weights
K_δ{ρ(η(z), η(y))}
[Beaumont et al., 2002, Genetics]
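A hedged Python sketch of this local-linear regression adjustment; the Epanechnikov-type kernel K_δ and the toy data below are illustrative choices.

```python
import numpy as np

def abc_regression_adjust(theta, eta_z, eta_y, delta):
    """Local-linear adjustment theta* = theta - {eta(z) - eta(y)}^T beta_hat (Beaumont et al. style).

    theta: (n, p) accepted parameters; eta_z: (n, d) simulated summaries; eta_y: (d,) observed summaries.
    """
    X = eta_z - eta_y                                   # regressors eta(z) - eta(y)
    rho = np.sqrt((X ** 2).sum(axis=1))
    w = np.clip(1.0 - (rho / delta) ** 2, 0.0, None)    # Epanechnikov-type kernel K_delta (a common choice)
    Xd = np.hstack([np.ones((X.shape[0], 1)), X])       # intercept + slopes
    W = np.sqrt(w)[:, None]
    beta, *_ = np.linalg.lstsq(W * Xd, W * theta, rcond=None)   # weighted least squares
    return theta - X @ beta[1:], w                      # adjusted theta*, plus the weights

# toy usage (illustrative): 1-d parameter, 2-d summary
rng = np.random.default_rng(4)
theta = rng.normal(size=(1000, 1))
eta_z = np.hstack([theta + 0.1 * rng.normal(size=(1000, 1)),
                   theta ** 2 + 0.1 * rng.normal(size=(1000, 1))])
eta_y = np.array([0.5, 0.25])
theta_star, w = abc_regression_adjust(theta, eta_z, eta_y, delta=1.0)
print(theta_star[w > 0].mean())
```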
ABC-NP (regression)
Also found in the subsequent literature, e.g. in Fearnhead–Prangle (2012): weight directly the simulation by
K_δ{ρ(η(z(θ)), η(y))}
or
(1/S) Σ_{s=1}^S K_δ{ρ(η(z_s(θ)), η(y))}
[consistent estimate of f(η|θ)]
Curse of dimensionality: poor estimate when d = dim(η) is large...
ABC-NP (density estimation)
Use of the kernel weights
K_δ{ρ(η(z(θ)), η(y))}
leads to the NP estimate of the posterior expectation
Σ_i θ_i K_δ{ρ(η(z(θ_i)), η(y))} / Σ_i K_δ{ρ(η(z(θ_i)), η(y))}
[Blum, JASA, 2010]
ABC-NP (density estimation)
Use of the kernel weights
K_δ{ρ(η(z(θ)), η(y))}
leads to the NP estimate of the posterior conditional density
Σ_i K̃_b(θ_i − θ) K_δ{ρ(η(z(θ_i)), η(y))} / Σ_i K_δ{ρ(η(z(θ_i)), η(y))}
[Blum, JASA, 2010]
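A short Python sketch of both non-parametric estimates (posterior expectation and posterior density) from simulated pairs (θ_i, ρ_i); the Gaussian kernels standing in for K_δ and K̃_b, and the fake distances, are illustrative.

```python
import numpy as np

def abc_np_estimates(theta, dist, theta_grid, delta=0.1, b=0.1):
    """Kernel-weighted ABC estimates from simulated pairs (theta_i, rho_i).

    Returns the weighted posterior expectation and a kernel estimate of the posterior
    density over theta_grid; Gaussian kernels are used for both K_delta and K_b.
    """
    w = np.exp(-0.5 * (dist / delta) ** 2)               # K_delta{rho(eta(z(theta_i)), eta(y))}
    post_mean = np.sum(w * theta) / np.sum(w)
    Kb = np.exp(-0.5 * ((theta_grid[:, None] - theta[None, :]) / b) ** 2) / (b * np.sqrt(2 * np.pi))
    post_dens = (Kb * w).sum(axis=1) / w.sum()
    return post_mean, post_dens

# usage sketch with fake inputs
rng = np.random.default_rng(5)
theta = rng.normal(size=2000)
dist = np.abs(theta - 0.7) + 0.1 * rng.random(2000)      # stand-in for rho(eta(z(theta_i)), eta(y))
grid = np.linspace(-2, 3, 200)
m, dens = abc_np_estimates(theta, dist, grid)
print(m, grid[np.argmax(dens)])
```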
ABC-NP (density estimations)
Other versions incorporating regression adjustments
Σ_i K̃_b(θ*_i − θ) K_δ{ρ(η(z(θ_i)), η(y))} / Σ_i K_δ{ρ(η(z(θ_i)), η(y))}
In all cases, error
E[ĝ(θ|y)] − g(θ|y) = c b² + c δ² + O_P(b² + δ²) + O_P(1/nδ^d)
var(ĝ(θ|y)) = c/(n b δ^d) (1 + o_P(1))
[Blum, JASA, 2010; standard NP calculations]
ABC-NCH
Incorporating non-linearities and heteroscedasticities:
θ* = m̂(η(y)) + [θ − m̂(η(z))] σ̂(η(y)) / σ̂(η(z))
where
• m̂(η) estimated by non-linear regression (e.g., neural network)
• σ̂(η) estimated by non-linear regression on residuals
log{θ_i − m̂(η_i)}² = log σ²(η_i) + ξ_i
[Blum & François, 2009]
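A hedged Python sketch of this non-linear, heteroscedastic adjustment; a ridge-regularized polynomial regression is used here as a simple stand-in for the neural-network regressions of Blum & François (2009), and the toy data are illustrative.

```python
import numpy as np

def fit_poly(x, t, deg=3, lam=1e-3):
    """Ridge-regularized polynomial regression: a simple stand-in for a neural-net regressor."""
    X = np.vander(x, deg + 1)
    beta = np.linalg.solve(X.T @ X + lam * np.eye(deg + 1), X.T @ t)
    return lambda z: np.vander(np.atleast_1d(z), deg + 1) @ beta

def abc_nch_adjust(theta, eta_z, eta_y):
    """theta* = m(eta_y) + [theta - m(eta_z)] * sigma(eta_y) / sigma(eta_z)."""
    m = fit_poly(eta_z, theta)                               # conditional mean m(eta)
    resid2 = np.log((theta - m(eta_z)) ** 2 + 1e-12)         # log{theta_i - m(eta_i)}^2 = log sigma^2(eta_i) + xi_i
    log_s2 = fit_poly(eta_z, resid2)
    sigma = lambda e: np.exp(0.5 * log_s2(e))
    return m(eta_y) + (theta - m(eta_z)) * sigma(eta_y) / sigma(eta_z)

# toy usage: heteroscedastic relation between summary and parameter (illustrative)
rng = np.random.default_rng(6)
eta_z = rng.uniform(-2, 2, 3000)
theta = np.sin(eta_z) + (0.1 + 0.2 * np.abs(eta_z)) * rng.normal(size=3000)
print(abc_nch_adjust(theta, eta_z, eta_y=0.8).mean())
```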
ABC-NCH (2)
Why neural network?
• fights curse of dimensionality
• selects relevant summary statistics
• provides automated dimension reduction
• offers a model choice capability
• improves upon multinomial logistic
[Blum & François, 2009]
ABC as knn
[Biau et al., 2013, Annales de l'IHP]
Practice of ABC: determine tolerance as a quantile on observed distances, say 10% or 1% quantile,
ε = ε_N = q_α(d_1, . . . , d_N)
• Interpretation of ε as nonparametric bandwidth only approximation of the actual practice
[Blum & François, 2010]
• ABC is a k-nearest neighbour (knn) method with k_N = N ε_N
[Loftsgaarden & Quesenberry, 1965]
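A minimal Python sketch of this knn reading of ABC, keeping the k_N = ⌈αN⌉ simulations closest to the observed summaries; the stand-in distances are illustrative.

```python
import numpy as np

def abc_knn(theta, dist, alpha=0.01):
    """ABC as k-nearest-neighbour selection: tolerance = alpha-quantile of the observed distances."""
    N = len(dist)
    k_N = max(1, int(np.ceil(alpha * N)))
    idx = np.argsort(dist)[:k_N]
    return theta[idx], np.sort(dist)[k_N - 1]     # selected parameters and implied tolerance eps_N

rng = np.random.default_rng(7)
theta = rng.normal(size=100000)
dist = np.abs(theta - 1.0) + rng.exponential(0.2, size=100000)   # stand-in distances
sel, eps_N = abc_knn(theta, dist, alpha=0.001)
print(len(sel), eps_N, sel.mean())
```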
ABC consistency
Provided
k_N / log log N −→ ∞ and k_N / N −→ 0
as N → ∞, for almost all s_0 (with respect to the distribution of S), with probability 1,
(1/k_N) Σ_{j=1}^{k_N} ϕ(θ_j) −→ E[ϕ(θ_j) | S = s_0]
[Devroye, 1982]
Biau et al. (2013) also recall pointwise and integrated mean square error consistency results on the corresponding kernel estimate of the conditional posterior distribution, under the constraints
k_N → ∞, k_N/N → 0, h_N → 0 and h_N^p k_N → ∞.
Rates of convergence
Further assumptions (on target and kernel) allow for precise (integrated mean square) convergence rates (as a power of the sample size N), derived from classical k-nearest neighbour regression, like
• when m = 1, 2, 3, k_N ≈ N^{(p+4)/(p+8)} and rate N^{−4/(p+8)}
• when m = 4, k_N ≈ N^{(p+4)/(p+8)} and rate N^{−4/(p+8)} log N
• when m > 4, k_N ≈ N^{(p+4)/(m+p+4)} and rate N^{−4/(m+p+4)}
[Biau et al., 2013]
Noisy ABC
Idea: Modify the data from the start
ỹ = y_0 + εζ_1
with the same scale ε as ABC
[see Fearnhead–Prangle]
and run ABC on ỹ
Then ABC produces an exact simulation from π_ε(θ|ỹ) = π(θ|ỹ)
[Dean et al., 2011; Fearnhead and Prangle, 2012]
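A tiny Python sketch of the noisy-ABC preprocessing step; the Gaussian perturbation kernel and scale used here are illustrative choices, not prescribed by the slides.

```python
import numpy as np

def noisy_abc_data(y_obs, eps, kernel_rng):
    """Perturb the data once, y_tilde = y_obs + eps * zeta with zeta drawn from the ABC kernel,
    and then run standard ABC on y_tilde instead of y_obs."""
    zeta = kernel_rng(size=np.shape(y_obs))       # one draw from the chosen kernel
    return np.asarray(y_obs) + eps * zeta

rng = np.random.default_rng(8)
y_obs = rng.normal(2.0, 1.0, size=50)
y_tilde = noisy_abc_data(y_obs, eps=0.05, kernel_rng=rng.standard_normal)   # Gaussian kernel, illustrative
print(y_tilde[:3])
```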
Consistent noisy ABC
• Degrading the data improves the estimation performances:
• Noisy ABC-MLE is asymptotically (in n) consistent
• under further assumptions, the noisy ABC-MLE is
asymptotically normal
• increase in variance of order ε^{−2}
• likely degradation in precision or computing time due to the
lack of summary statistic [curse of dimensionality]
Approximate Bayesian computation
1 Approximate Bayesian computation
ABC basics
Remarks on high D
Non-parametric roots
Automated summary selection
Series B discussion
Semi-automatic ABC
Fearnhead and Prangle (2010) study ABC and the selection of the summary statistic in close proximity to Wilkinson's proposal
ABC then considered from a purely inferential viewpoint and calibrated for estimation purposes
Use of a randomised (or ‘noisy’) version of the summary statistics
η̃(y) = η(y) + τε
Derivation of a well-calibrated version of ABC, i.e. an algorithm that gives proper predictions for the distribution associated with this randomised summary statistic [calibration constraint: ABC approximation with same posterior mean as the true randomised posterior]
Summary statistics
• Optimality of the posterior expectation E[θ|y] of the parameter of interest as summary statistic η(y)!
• Use of the standard quadratic loss function
(θ − θ_0)^T A (θ − θ_0) .
bare summary
Details on Fearnhead and Prangle (F&P) ABC
Use of a summary statistic S(·), an importance proposal g(·), a kernel K(·) ≤ 1 and a bandwidth h > 0 such that
(θ, y_sim) ∼ g(θ) f(y_sim|θ)
is accepted with probability (hence the bound)
K[{S(y_sim) − s_obs}/h]
and the corresponding importance weight defined by
π(θ) / g(θ)
[Fearnhead & Prangle, 2012]
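A hedged Python sketch of this accept/weight scheme on a toy normal-mean model; the proposal g, kernel K, bandwidth h and summary S are illustrative choices for the sketch.

```python
import numpy as np

rng = np.random.default_rng(9)

# Toy model (illustrative): y ~ N(theta, 1), prior theta ~ N(0, 10), summary S = sample mean
y_obs = rng.normal(1.0, 1.0, size=30)
s_obs = y_obs.mean()
h = 0.1
prior_pdf = lambda t: np.exp(-t**2 / 20.0) / np.sqrt(20.0 * np.pi)
g_rvs = lambda: rng.normal(1.0, 1.0)                                    # importance proposal g
g_pdf = lambda t: np.exp(-(t - 1.0)**2 / 2.0) / np.sqrt(2.0 * np.pi)
K = lambda u: np.exp(-0.5 * u**2)            # kernel bounded by 1, as the acceptance step requires

def fp_abc(N):
    """Draw (theta, y_sim) ~ g(theta) f(y_sim|theta), accept with probability
    K[{S(y_sim) - s_obs}/h], weight accepted draws by pi(theta)/g(theta)."""
    out = []
    for _ in range(N):
        theta = g_rvs()
        y_sim = rng.normal(theta, 1.0, size=y_obs.size)
        if rng.uniform() < K((y_sim.mean() - s_obs) / h):
            out.append((theta, prior_pdf(theta) / g_pdf(theta)))
    return np.array(out)

draws = fp_abc(20000)
theta_w, w = draws[:, 0], draws[:, 1]
print(np.sum(w * theta_w) / np.sum(w))       # self-normalized importance estimate of E[theta | s_obs]
```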
Errors, errors, and errors
Three levels of approximation
• π(θ|y_obs) by π(θ|s_obs): loss of information
[ignored]
• π(θ|s_obs) by
π_ABC(θ|s_obs) = ∫ π(s) K[{s − s_obs}/h] π(θ|s) ds / ∫ π(s) K[{s − s_obs}/h] ds :
noisy observations
• π_ABC(θ|s_obs) by importance Monte Carlo based on N simulations, represented by var(a(θ)|s_obs)/N_acc [expected number of acceptances]
[M. Twain/B. Disraeli]
Average acceptance asymptotics
For the average acceptance probability/approximate likelihood
p(θ|s_obs) = ∫ f(y_sim|θ) K[{S(y_sim) − s_obs}/h] dy_sim ,
overall acceptance probability
p(s_obs) = ∫ p(θ|s_obs) π(θ) dθ = π(s_obs) h^d + o(h^d)
[F&P, Lemma 1]
Calibration of h
“This result gives insight into how S(·) and h affect the Monte Carlo error. To minimize Monte Carlo error, we need h^d to be not too small. Thus ideally we want S(·) to be a low dimensional summary of the data that is sufficiently informative about θ that π(θ|s_obs) is close, in some sense, to π(θ|y_obs)” (F&P, p.5)
• turns h into an absolute value while it should be context-dependent and user-calibrated
• only addresses one term in the approximation error and acceptance probability (“curse of dimensionality”)
• a large h prevents π_ABC(θ|s_obs) from being close to π(θ|s_obs)
• a small d prevents π(θ|s_obs) from being close to π(θ|y_obs) (“curse of [dis]information”)
Converging ABC
Theorem (F&P)
For noisy ABC, the expected noisy-ABC log-likelihood,
E{log[p(θ|s_obs)]} = ∫∫ log[p(θ|S(y_obs) + ε)] π(y_obs|θ_0) K(ε) dy_obs dε ,
has its maximum at θ = θ_0.
Converging ABC
Corollary
For noisy ABC, the ABC posterior converges onto a point mass on
the true parameter value as m → ∞.
For standard ABC, not always the case (unless h goes to zero).
Loss motivated statistic
Under quadratic loss function,
Theorem (F&P)
(i) The minimal posterior error E[L(θ, θ̂)|y_obs] occurs when θ̂ = E(θ|y_obs) (!)
(ii) When h → 0, E_ABC(θ|s_obs) converges to E(θ|y_obs)
(iii) If S(y_obs) = E[θ|y_obs] then for θ̂ = E_ABC[θ|s_obs]
E[L(θ, θ̂)|y_obs] = trace(AΣ) + h² ∫ x^T A x K(x) dx + o(h²) .
Optimal summary statistic
“We take a different approach, and weaken the requirement for πABC to be a good approximation to π(θ|y_obs). We argue for πABC to be a good approximation solely in terms of the accuracy of certain estimates of the parameters.” (F&P, p.5)
From this result, F&P
• derive their choice of summary statistic,
S(y) = E(θ|y)
[almost sufficient] [wow! E_ABC[θ|S(y_obs)] = E[θ|y_obs]]
• suggest
h = O(N^{−1/(2+d)}) and h = O(N^{−1/(4+d)})
as optimal bandwidths for noisy and standard ABC.
Caveat
Since E(θ|y_obs) is usually unavailable, F&P suggest
(i) use a pilot run of ABC to
determine a region of
non-negligible posterior mass;
(ii) simulate sets of parameter
values and data;
(iii) use the simulated sets of
parameter values and data to
estimate the summary statistic;
and
(iv) run ABC with this choice of
summary statistic.
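A rough Python sketch of steps (ii)–(iv) on a toy model, skipping the pilot run of step (i); the regression features used to approximate E[θ|y] are an illustrative choice (F&P essentially fit a linear regression of θ on user-chosen functions of the data).

```python
import numpy as np

rng = np.random.default_rng(10)

def simulate(theta, n=30):
    return rng.normal(theta, 1.0, size=n)         # toy model, illustrative

# (ii) simulate parameter values and data over a (here trivial) pilot region
theta_train = rng.uniform(-3.0, 3.0, size=5000)
X = np.stack([simulate(t) for t in theta_train])

# (iii) estimate the summary statistic S(y) ~= E[theta | y] by linear regression of theta
# on simple features of y (mean, variance, and their squares: an illustrative feature set)
feats = lambda Y: np.column_stack([np.ones(len(Y)), Y.mean(1), Y.var(1), Y.mean(1)**2, Y.var(1)**2])
beta, *_ = np.linalg.lstsq(feats(X), theta_train, rcond=None)
S = lambda Y: feats(np.atleast_2d(Y)) @ beta      # fitted regression = semi-automatic summary

# (iv) run ABC with this one-dimensional summary
y_obs = simulate(1.2)
print(S(y_obs))                                   # should be close to E[theta | y_obs]
```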
Approximate Bayesian computation
1 Approximate Bayesian computation
ABC basics
Remarks on high D
Non-parametric roots
Automated summary selection
Series B discussion
More discussions about semi-automatic ABC
[ End of section derived from comments on Read Paper, Series B, 2012]
“The apparently arbitrary nature of the choice of summary statistics
has always been perceived as the Achilles heel of ABC.” M.
Beaumont
• “Curse of dimensionality” linked with the increase of the
dimension of the summary statistic
• Connection with principal component analysis
[Itan et al., 2010]
• Connection with partial least squares
[Wegmann et al., 2009]
• Beaumont et al. (2002) postprocessed output is used as input
by F&P to run a second ABC
Wood's alternative
Instead of a non-parametric kernel approximation to the likelihood
(1/R) Σ_r K{η(y_r) − η(y_obs)}
Wood (2010) suggests a normal approximation
η(y(θ)) ∼ N_d(µ_θ, Σ_θ)
whose parameters can be approximated based on the R simulations (for each value of θ).
• Parametric versus non-parametric rate [Uh?!]
• Automatic weighting of components of η(·) through Σ_θ
• Dependence on normality assumption (pseudo-likelihood?)
[Cornebise, Girolami & Kosmidis, 2012]
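A brief Python sketch of this synthetic-likelihood evaluation for a given θ; the toy model, the two summaries and R are illustrative choices, and a small ridge is added to Σ_θ for numerical stability.

```python
import numpy as np

def synthetic_loglik(theta, eta_obs, simulate, R=200):
    """Simulate R summary vectors at theta, fit a Gaussian N(mu_theta, Sigma_theta),
    and evaluate its log-density at the observed summaries eta(y_obs)."""
    S = np.stack([simulate(theta) for _ in range(R)])          # R simulated summary vectors eta(y_r)
    mu = S.mean(axis=0)
    Sigma = np.cov(S, rowvar=False) + 1e-8 * np.eye(S.shape[1])
    d = eta_obs - mu
    sign, logdet = np.linalg.slogdet(Sigma)
    return -0.5 * (d @ np.linalg.solve(Sigma, d) + logdet + len(d) * np.log(2 * np.pi))

# toy usage (illustrative): summaries = (mean, log-variance) of a N(theta, 1) sample
rng = np.random.default_rng(11)
sim = lambda th: (lambda x: np.array([x.mean(), np.log(x.var())]))(rng.normal(th, 1.0, 50))
eta_obs = sim(0.7)
print([round(synthetic_loglik(t, eta_obs, sim), 2) for t in (0.0, 0.7, 1.4)])
```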
ABC and BIC
Idea of applying BIC during the local regression:
• Run regular ABC
• Select summary statistics during local regression
• Recycle the prior simulation sample (reference table) with those summary statistics
• Rerun the corresponding local regression (low cost)
[Pudlo & Sedki, 2012]
More Related Content

What's hot

Laplace's Demon: seminar #1
Laplace's Demon: seminar #1Laplace's Demon: seminar #1
Laplace's Demon: seminar #1Christian Robert
 
Inference in generative models using the Wasserstein distance [[INI]
Inference in generative models using the Wasserstein distance [[INI]Inference in generative models using the Wasserstein distance [[INI]
Inference in generative models using the Wasserstein distance [[INI]Christian Robert
 
Monte Carlo in Montréal 2017
Monte Carlo in Montréal 2017Monte Carlo in Montréal 2017
Monte Carlo in Montréal 2017Christian Robert
 
Multiple estimators for Monte Carlo approximations
Multiple estimators for Monte Carlo approximationsMultiple estimators for Monte Carlo approximations
Multiple estimators for Monte Carlo approximationsChristian Robert
 
random forests for ABC model choice and parameter estimation
random forests for ABC model choice and parameter estimationrandom forests for ABC model choice and parameter estimation
random forests for ABC model choice and parameter estimationChristian Robert
 
CISEA 2019: ABC consistency and convergence
CISEA 2019: ABC consistency and convergenceCISEA 2019: ABC consistency and convergence
CISEA 2019: ABC consistency and convergenceChristian Robert
 
Bayesian inference on mixtures
Bayesian inference on mixturesBayesian inference on mixtures
Bayesian inference on mixturesChristian Robert
 
accurate ABC Oliver Ratmann
accurate ABC Oliver Ratmannaccurate ABC Oliver Ratmann
accurate ABC Oliver Ratmannolli0601
 
Likelihood-free Design: a discussion
Likelihood-free Design: a discussionLikelihood-free Design: a discussion
Likelihood-free Design: a discussionChristian Robert
 
ABC convergence under well- and mis-specified models
ABC convergence under well- and mis-specified modelsABC convergence under well- and mis-specified models
ABC convergence under well- and mis-specified modelsChristian Robert
 
better together? statistical learning in models made of modules
better together? statistical learning in models made of modulesbetter together? statistical learning in models made of modules
better together? statistical learning in models made of modulesChristian Robert
 
Can we estimate a constant?
Can we estimate a constant?Can we estimate a constant?
Can we estimate a constant?Christian Robert
 
from model uncertainty to ABC
from model uncertainty to ABCfrom model uncertainty to ABC
from model uncertainty to ABCChristian Robert
 
On the vexing dilemma of hypothesis testing and the predicted demise of the B...
On the vexing dilemma of hypothesis testing and the predicted demise of the B...On the vexing dilemma of hypothesis testing and the predicted demise of the B...
On the vexing dilemma of hypothesis testing and the predicted demise of the B...Christian Robert
 

What's hot (20)

ABC-Gibbs
ABC-GibbsABC-Gibbs
ABC-Gibbs
 
Laplace's Demon: seminar #1
Laplace's Demon: seminar #1Laplace's Demon: seminar #1
Laplace's Demon: seminar #1
 
Inference in generative models using the Wasserstein distance [[INI]
Inference in generative models using the Wasserstein distance [[INI]Inference in generative models using the Wasserstein distance [[INI]
Inference in generative models using the Wasserstein distance [[INI]
 
Monte Carlo in Montréal 2017
Monte Carlo in Montréal 2017Monte Carlo in Montréal 2017
Monte Carlo in Montréal 2017
 
Multiple estimators for Monte Carlo approximations
Multiple estimators for Monte Carlo approximationsMultiple estimators for Monte Carlo approximations
Multiple estimators for Monte Carlo approximations
 
random forests for ABC model choice and parameter estimation
random forests for ABC model choice and parameter estimationrandom forests for ABC model choice and parameter estimation
random forests for ABC model choice and parameter estimation
 
CISEA 2019: ABC consistency and convergence
CISEA 2019: ABC consistency and convergenceCISEA 2019: ABC consistency and convergence
CISEA 2019: ABC consistency and convergence
 
asymptotics of ABC
asymptotics of ABCasymptotics of ABC
asymptotics of ABC
 
Bayesian inference on mixtures
Bayesian inference on mixturesBayesian inference on mixtures
Bayesian inference on mixtures
 
the ABC of ABC
the ABC of ABCthe ABC of ABC
the ABC of ABC
 
accurate ABC Oliver Ratmann
accurate ABC Oliver Ratmannaccurate ABC Oliver Ratmann
accurate ABC Oliver Ratmann
 
Likelihood-free Design: a discussion
Likelihood-free Design: a discussionLikelihood-free Design: a discussion
Likelihood-free Design: a discussion
 
ABC convergence under well- and mis-specified models
ABC convergence under well- and mis-specified modelsABC convergence under well- and mis-specified models
ABC convergence under well- and mis-specified models
 
better together? statistical learning in models made of modules
better together? statistical learning in models made of modulesbetter together? statistical learning in models made of modules
better together? statistical learning in models made of modules
 
Intractable likelihoods
Intractable likelihoodsIntractable likelihoods
Intractable likelihoods
 
Can we estimate a constant?
Can we estimate a constant?Can we estimate a constant?
Can we estimate a constant?
 
from model uncertainty to ABC
from model uncertainty to ABCfrom model uncertainty to ABC
from model uncertainty to ABC
 
On the vexing dilemma of hypothesis testing and the predicted demise of the B...
On the vexing dilemma of hypothesis testing and the predicted demise of the B...On the vexing dilemma of hypothesis testing and the predicted demise of the B...
On the vexing dilemma of hypothesis testing and the predicted demise of the B...
 
Big model, big data
Big model, big dataBig model, big data
Big model, big data
 
Boston talk
Boston talkBoston talk
Boston talk
 

Viewers also liked

Introduction to MCMC methods
Introduction to MCMC methodsIntroduction to MCMC methods
Introduction to MCMC methodsChristian Robert
 
ABC in London, May 5, 2011
ABC in London, May 5, 2011ABC in London, May 5, 2011
ABC in London, May 5, 2011Christian Robert
 
RSS discussion of Girolami and Calderhead, October 13, 2010
RSS discussion of Girolami and Calderhead, October 13, 2010RSS discussion of Girolami and Calderhead, October 13, 2010
RSS discussion of Girolami and Calderhead, October 13, 2010Christian Robert
 
Networks community detection using artificial bee colony swarm optimization
Networks community detection using artificial bee colony swarm optimizationNetworks community detection using artificial bee colony swarm optimization
Networks community detection using artificial bee colony swarm optimizationAboul Ella Hassanien
 
Ratio of uniforms and beyond
Ratio of uniforms and beyondRatio of uniforms and beyond
Ratio of uniforms and beyondChristian Robert
 
Reading Birnbaum's (1962) paper, by Li Chenlu
Reading Birnbaum's (1962) paper, by Li ChenluReading Birnbaum's (1962) paper, by Li Chenlu
Reading Birnbaum's (1962) paper, by Li ChenluChristian Robert
 
Markov Chain Monte Carlo explained
Markov Chain Monte Carlo explainedMarkov Chain Monte Carlo explained
Markov Chain Monte Carlo explaineddariodigiuni
 
ABC Algorithm.
ABC Algorithm.ABC Algorithm.
ABC Algorithm.N Vinayak
 
Artificial bee colony (abc)
Artificial bee colony (abc)Artificial bee colony (abc)
Artificial bee colony (abc)quadmemo
 
ABC, an effective tool for selective harmonic elimination in multilevel inve...
ABC, an effective tool for selective  harmonic elimination in multilevel inve...ABC, an effective tool for selective  harmonic elimination in multilevel inve...
ABC, an effective tool for selective harmonic elimination in multilevel inve...Shridhar kulkarni
 
Artificial Bee Colony algorithm
Artificial Bee Colony algorithmArtificial Bee Colony algorithm
Artificial Bee Colony algorithmAhmed Fouad Ali
 

Viewers also liked (13)

Introduction to MCMC methods
Introduction to MCMC methodsIntroduction to MCMC methods
Introduction to MCMC methods
 
ABC in London, May 5, 2011
ABC in London, May 5, 2011ABC in London, May 5, 2011
ABC in London, May 5, 2011
 
RSS discussion of Girolami and Calderhead, October 13, 2010
RSS discussion of Girolami and Calderhead, October 13, 2010RSS discussion of Girolami and Calderhead, October 13, 2010
RSS discussion of Girolami and Calderhead, October 13, 2010
 
Networks community detection using artificial bee colony swarm optimization
Networks community detection using artificial bee colony swarm optimizationNetworks community detection using artificial bee colony swarm optimization
Networks community detection using artificial bee colony swarm optimization
 
Ratio of uniforms and beyond
Ratio of uniforms and beyondRatio of uniforms and beyond
Ratio of uniforms and beyond
 
ISBA 2016: Foundations
ISBA 2016: FoundationsISBA 2016: Foundations
ISBA 2016: Foundations
 
Reading Birnbaum's (1962) paper, by Li Chenlu
Reading Birnbaum's (1962) paper, by Li ChenluReading Birnbaum's (1962) paper, by Li Chenlu
Reading Birnbaum's (1962) paper, by Li Chenlu
 
Markov Chain Monte Carlo explained
Markov Chain Monte Carlo explainedMarkov Chain Monte Carlo explained
Markov Chain Monte Carlo explained
 
ABC Algorithm.
ABC Algorithm.ABC Algorithm.
ABC Algorithm.
 
Artificial bee colony (abc)
Artificial bee colony (abc)Artificial bee colony (abc)
Artificial bee colony (abc)
 
ABC, an effective tool for selective harmonic elimination in multilevel inve...
ABC, an effective tool for selective  harmonic elimination in multilevel inve...ABC, an effective tool for selective  harmonic elimination in multilevel inve...
ABC, an effective tool for selective harmonic elimination in multilevel inve...
 
Abc analysis
Abc analysisAbc analysis
Abc analysis
 
Artificial Bee Colony algorithm
Artificial Bee Colony algorithmArtificial Bee Colony algorithm
Artificial Bee Colony algorithm
 

Similar to ABC workshop: 17w5025

Workshop on Bayesian Inference for Latent Gaussian Models with Applications
Workshop on Bayesian Inference for Latent Gaussian Models with ApplicationsWorkshop on Bayesian Inference for Latent Gaussian Models with Applications
Workshop on Bayesian Inference for Latent Gaussian Models with ApplicationsChristian Robert
 
Columbia workshop [ABC model choice]
Columbia workshop [ABC model choice]Columbia workshop [ABC model choice]
Columbia workshop [ABC model choice]Christian Robert
 
Semi-automatic ABC: a discussion
Semi-automatic ABC: a discussionSemi-automatic ABC: a discussion
Semi-automatic ABC: a discussionChristian Robert
 
Workshop in honour of Don Poskitt and Gael Martin
Workshop in honour of Don Poskitt and Gael MartinWorkshop in honour of Don Poskitt and Gael Martin
Workshop in honour of Don Poskitt and Gael MartinChristian Robert
 
Discussion of Fearnhead and Prangle, RSS&lt; Dec. 14, 2011
Discussion of Fearnhead and Prangle, RSS&lt; Dec. 14, 2011Discussion of Fearnhead and Prangle, RSS&lt; Dec. 14, 2011
Discussion of Fearnhead and Prangle, RSS&lt; Dec. 14, 2011Christian Robert
 
Approximate Bayesian computation for the Ising/Potts model
Approximate Bayesian computation for the Ising/Potts modelApproximate Bayesian computation for the Ising/Potts model
Approximate Bayesian computation for the Ising/Potts modelMatt Moores
 
Colloquium in honor of Hans Ruedi Künsch
Colloquium in honor of Hans Ruedi KünschColloquium in honor of Hans Ruedi Künsch
Colloquium in honor of Hans Ruedi KünschChristian Robert
 
Asymptotics of ABC, lecture, Collège de France
Asymptotics of ABC, lecture, Collège de FranceAsymptotics of ABC, lecture, Collège de France
Asymptotics of ABC, lecture, Collège de FranceChristian Robert
 
Pre-computation for ABC in image analysis
Pre-computation for ABC in image analysisPre-computation for ABC in image analysis
Pre-computation for ABC in image analysisMatt Moores
 
seminar at Princeton University
seminar at Princeton Universityseminar at Princeton University
seminar at Princeton UniversityChristian Robert
 
Delayed acceptance for Metropolis-Hastings algorithms
Delayed acceptance for Metropolis-Hastings algorithmsDelayed acceptance for Metropolis-Hastings algorithms
Delayed acceptance for Metropolis-Hastings algorithmsChristian Robert
 
Iaetsd vlsi implementation of gabor filter based image edge detection
Iaetsd vlsi implementation of gabor filter based image edge detectionIaetsd vlsi implementation of gabor filter based image edge detection
Iaetsd vlsi implementation of gabor filter based image edge detectionIaetsd Iaetsd
 
How many components in a mixture?
How many components in a mixture?How many components in a mixture?
How many components in a mixture?Christian Robert
 
NCE, GANs & VAEs (and maybe BAC)
NCE, GANs & VAEs (and maybe BAC)NCE, GANs & VAEs (and maybe BAC)
NCE, GANs & VAEs (and maybe BAC)Christian Robert
 

Similar to ABC workshop: 17w5025 (20)

Workshop on Bayesian Inference for Latent Gaussian Models with Applications
Workshop on Bayesian Inference for Latent Gaussian Models with ApplicationsWorkshop on Bayesian Inference for Latent Gaussian Models with Applications
Workshop on Bayesian Inference for Latent Gaussian Models with Applications
 
Intro to ABC
Intro to ABCIntro to ABC
Intro to ABC
 
Columbia workshop [ABC model choice]
Columbia workshop [ABC model choice]Columbia workshop [ABC model choice]
Columbia workshop [ABC model choice]
 
Edinburgh, Bayes-250
Edinburgh, Bayes-250Edinburgh, Bayes-250
Edinburgh, Bayes-250
 
Semi-automatic ABC: a discussion
Semi-automatic ABC: a discussionSemi-automatic ABC: a discussion
Semi-automatic ABC: a discussion
 
Workshop in honour of Don Poskitt and Gael Martin
Workshop in honour of Don Poskitt and Gael MartinWorkshop in honour of Don Poskitt and Gael Martin
Workshop in honour of Don Poskitt and Gael Martin
 
Discussion of Fearnhead and Prangle, RSS&lt; Dec. 14, 2011
Discussion of Fearnhead and Prangle, RSS&lt; Dec. 14, 2011Discussion of Fearnhead and Prangle, RSS&lt; Dec. 14, 2011
Discussion of Fearnhead and Prangle, RSS&lt; Dec. 14, 2011
 
Approximate Bayesian computation for the Ising/Potts model
Approximate Bayesian computation for the Ising/Potts modelApproximate Bayesian computation for the Ising/Potts model
Approximate Bayesian computation for the Ising/Potts model
 
BIRS 12w5105 meeting
BIRS 12w5105 meetingBIRS 12w5105 meeting
BIRS 12w5105 meeting
 
Colloquium in honor of Hans Ruedi Künsch
Colloquium in honor of Hans Ruedi KünschColloquium in honor of Hans Ruedi Künsch
Colloquium in honor of Hans Ruedi Künsch
 
ABC-Gibbs
ABC-GibbsABC-Gibbs
ABC-Gibbs
 
Asymptotics of ABC, lecture, Collège de France
Asymptotics of ABC, lecture, Collège de FranceAsymptotics of ABC, lecture, Collège de France
Asymptotics of ABC, lecture, Collège de France
 
Pre-computation for ABC in image analysis
Pre-computation for ABC in image analysisPre-computation for ABC in image analysis
Pre-computation for ABC in image analysis
 
ABC model choice
ABC model choiceABC model choice
ABC model choice
 
CDT 22 slides.pdf
CDT 22 slides.pdfCDT 22 slides.pdf
CDT 22 slides.pdf
 
seminar at Princeton University
seminar at Princeton Universityseminar at Princeton University
seminar at Princeton University
 
Delayed acceptance for Metropolis-Hastings algorithms
Delayed acceptance for Metropolis-Hastings algorithmsDelayed acceptance for Metropolis-Hastings algorithms
Delayed acceptance for Metropolis-Hastings algorithms
 
Iaetsd vlsi implementation of gabor filter based image edge detection
Iaetsd vlsi implementation of gabor filter based image edge detectionIaetsd vlsi implementation of gabor filter based image edge detection
Iaetsd vlsi implementation of gabor filter based image edge detection
 
How many components in a mixture?
How many components in a mixture?How many components in a mixture?
How many components in a mixture?
 
NCE, GANs & VAEs (and maybe BAC)
NCE, GANs & VAEs (and maybe BAC)NCE, GANs & VAEs (and maybe BAC)
NCE, GANs & VAEs (and maybe BAC)
 

More from Christian Robert

Testing for mixtures at BNP 13
Testing for mixtures at BNP 13Testing for mixtures at BNP 13
Testing for mixtures at BNP 13Christian Robert
 
Inferring the number of components: dream or reality?
Inferring the number of components: dream or reality?Inferring the number of components: dream or reality?
Inferring the number of components: dream or reality?Christian Robert
 
Testing for mixtures by seeking components
Testing for mixtures by seeking componentsTesting for mixtures by seeking components
Testing for mixtures by seeking componentsChristian Robert
 
discussion on Bayesian restricted likelihood
discussion on Bayesian restricted likelihooddiscussion on Bayesian restricted likelihood
discussion on Bayesian restricted likelihoodChristian Robert
 
Coordinate sampler : A non-reversible Gibbs-like sampler
Coordinate sampler : A non-reversible Gibbs-like samplerCoordinate sampler : A non-reversible Gibbs-like sampler
Coordinate sampler : A non-reversible Gibbs-like samplerChristian Robert
 
a discussion of Chib, Shin, and Simoni (2017-8) Bayesian moment models
a discussion of Chib, Shin, and Simoni (2017-8) Bayesian moment modelsa discussion of Chib, Shin, and Simoni (2017-8) Bayesian moment models
a discussion of Chib, Shin, and Simoni (2017-8) Bayesian moment modelsChristian Robert
 
ABC based on Wasserstein distances
ABC based on Wasserstein distancesABC based on Wasserstein distances
ABC based on Wasserstein distancesChristian Robert
 
Poster for Bayesian Statistics in the Big Data Era conference
Poster for Bayesian Statistics in the Big Data Era conferencePoster for Bayesian Statistics in the Big Data Era conference
Poster for Bayesian Statistics in the Big Data Era conferenceChristian Robert
 
short course at CIRM, Bayesian Masterclass, October 2018
short course at CIRM, Bayesian Masterclass, October 2018short course at CIRM, Bayesian Masterclass, October 2018
short course at CIRM, Bayesian Masterclass, October 2018Christian Robert
 
ABC with Wasserstein distances
ABC with Wasserstein distancesABC with Wasserstein distances
ABC with Wasserstein distancesChristian Robert
 
prior selection for mixture estimation
prior selection for mixture estimationprior selection for mixture estimation
prior selection for mixture estimationChristian Robert
 
Coordinate sampler: A non-reversible Gibbs-like sampler
Coordinate sampler: A non-reversible Gibbs-like samplerCoordinate sampler: A non-reversible Gibbs-like sampler
Coordinate sampler: A non-reversible Gibbs-like samplerChristian Robert
 

More from Christian Robert (15)

discussion of ICML23.pdf
discussion of ICML23.pdfdiscussion of ICML23.pdf
discussion of ICML23.pdf
 
restore.pdf
restore.pdfrestore.pdf
restore.pdf
 
Testing for mixtures at BNP 13
Testing for mixtures at BNP 13Testing for mixtures at BNP 13
Testing for mixtures at BNP 13
 
Inferring the number of components: dream or reality?
Inferring the number of components: dream or reality?Inferring the number of components: dream or reality?
Inferring the number of components: dream or reality?
 
Testing for mixtures by seeking components
Testing for mixtures by seeking componentsTesting for mixtures by seeking components
Testing for mixtures by seeking components
 
discussion on Bayesian restricted likelihood
discussion on Bayesian restricted likelihooddiscussion on Bayesian restricted likelihood
discussion on Bayesian restricted likelihood
 
Coordinate sampler : A non-reversible Gibbs-like sampler
Coordinate sampler : A non-reversible Gibbs-like samplerCoordinate sampler : A non-reversible Gibbs-like sampler
Coordinate sampler : A non-reversible Gibbs-like sampler
 
eugenics and statistics
eugenics and statisticseugenics and statistics
eugenics and statistics
 
a discussion of Chib, Shin, and Simoni (2017-8) Bayesian moment models
a discussion of Chib, Shin, and Simoni (2017-8) Bayesian moment modelsa discussion of Chib, Shin, and Simoni (2017-8) Bayesian moment models
a discussion of Chib, Shin, and Simoni (2017-8) Bayesian moment models
 
ABC based on Wasserstein distances
ABC based on Wasserstein distancesABC based on Wasserstein distances
ABC based on Wasserstein distances
 
Poster for Bayesian Statistics in the Big Data Era conference
Poster for Bayesian Statistics in the Big Data Era conferencePoster for Bayesian Statistics in the Big Data Era conference
Poster for Bayesian Statistics in the Big Data Era conference
 
short course at CIRM, Bayesian Masterclass, October 2018
short course at CIRM, Bayesian Masterclass, October 2018short course at CIRM, Bayesian Masterclass, October 2018
short course at CIRM, Bayesian Masterclass, October 2018
 
ABC with Wasserstein distances
ABC with Wasserstein distancesABC with Wasserstein distances
ABC with Wasserstein distances
 
prior selection for mixture estimation
prior selection for mixture estimationprior selection for mixture estimation
prior selection for mixture estimation
 
Coordinate sampler: A non-reversible Gibbs-like sampler
Coordinate sampler: A non-reversible Gibbs-like samplerCoordinate sampler: A non-reversible Gibbs-like sampler
Coordinate sampler: A non-reversible Gibbs-like sampler
 

Recently uploaded

Pests of safflower_Binomics_Identification_Dr.UPR.pdf
Pests of safflower_Binomics_Identification_Dr.UPR.pdfPests of safflower_Binomics_Identification_Dr.UPR.pdf
Pests of safflower_Binomics_Identification_Dr.UPR.pdfPirithiRaju
 
RESPIRATORY ADAPTATIONS TO HYPOXIA IN HUMNAS.pptx
RESPIRATORY ADAPTATIONS TO HYPOXIA IN HUMNAS.pptxRESPIRATORY ADAPTATIONS TO HYPOXIA IN HUMNAS.pptx
RESPIRATORY ADAPTATIONS TO HYPOXIA IN HUMNAS.pptxFarihaAbdulRasheed
 
THE ROLE OF PHARMACOGNOSY IN TRADITIONAL AND MODERN SYSTEM OF MEDICINE.pptx
THE ROLE OF PHARMACOGNOSY IN TRADITIONAL AND MODERN SYSTEM OF MEDICINE.pptxTHE ROLE OF PHARMACOGNOSY IN TRADITIONAL AND MODERN SYSTEM OF MEDICINE.pptx
THE ROLE OF PHARMACOGNOSY IN TRADITIONAL AND MODERN SYSTEM OF MEDICINE.pptxNandakishor Bhaurao Deshmukh
 
User Guide: Orion™ Weather Station (Columbia Weather Systems)
User Guide: Orion™ Weather Station (Columbia Weather Systems)User Guide: Orion™ Weather Station (Columbia Weather Systems)
User Guide: Orion™ Weather Station (Columbia Weather Systems)Columbia Weather Systems
 
Vision and reflection on Mining Software Repositories research in 2024
Vision and reflection on Mining Software Repositories research in 2024Vision and reflection on Mining Software Repositories research in 2024
Vision and reflection on Mining Software Repositories research in 2024AyushiRastogi48
 
Pests of Bengal gram_Identification_Dr.UPR.pdf
Pests of Bengal gram_Identification_Dr.UPR.pdfPests of Bengal gram_Identification_Dr.UPR.pdf
Pests of Bengal gram_Identification_Dr.UPR.pdfPirithiRaju
 
Microphone- characteristics,carbon microphone, dynamic microphone.pptx
Microphone- characteristics,carbon microphone, dynamic microphone.pptxMicrophone- characteristics,carbon microphone, dynamic microphone.pptx
Microphone- characteristics,carbon microphone, dynamic microphone.pptxpriyankatabhane
 
preservation, maintanence and improvement of industrial organism.pptx
preservation, maintanence and improvement of industrial organism.pptxpreservation, maintanence and improvement of industrial organism.pptx
preservation, maintanence and improvement of industrial organism.pptxnoordubaliya2003
 
LIGHT-PHENOMENA-BY-CABUALDIONALDOPANOGANCADIENTE-CONDEZA (1).pptx
LIGHT-PHENOMENA-BY-CABUALDIONALDOPANOGANCADIENTE-CONDEZA (1).pptxLIGHT-PHENOMENA-BY-CABUALDIONALDOPANOGANCADIENTE-CONDEZA (1).pptx
LIGHT-PHENOMENA-BY-CABUALDIONALDOPANOGANCADIENTE-CONDEZA (1).pptxmalonesandreagweneth
 
Behavioral Disorder: Schizophrenia & it's Case Study.pdf
Behavioral Disorder: Schizophrenia & it's Case Study.pdfBehavioral Disorder: Schizophrenia & it's Case Study.pdf
Behavioral Disorder: Schizophrenia & it's Case Study.pdfSELF-EXPLANATORY
 
Scheme-of-Work-Science-Stage-4 cambridge science.docx
Scheme-of-Work-Science-Stage-4 cambridge science.docxScheme-of-Work-Science-Stage-4 cambridge science.docx
Scheme-of-Work-Science-Stage-4 cambridge science.docxyaramohamed343013
 
OECD bibliometric indicators: Selected highlights, April 2024
OECD bibliometric indicators: Selected highlights, April 2024OECD bibliometric indicators: Selected highlights, April 2024
OECD bibliometric indicators: Selected highlights, April 2024innovationoecd
 
Neurodevelopmental disorders according to the dsm 5 tr
Neurodevelopmental disorders according to the dsm 5 trNeurodevelopmental disorders according to the dsm 5 tr
Neurodevelopmental disorders according to the dsm 5 trssuser06f238
 
Bentham & Hooker's Classification. along with the merits and demerits of the ...
Bentham & Hooker's Classification. along with the merits and demerits of the ...Bentham & Hooker's Classification. along with the merits and demerits of the ...
Bentham & Hooker's Classification. along with the merits and demerits of the ...Nistarini College, Purulia (W.B) India
 
STOPPED FLOW METHOD & APPLICATION MURUGAVENI B.pptx
STOPPED FLOW METHOD & APPLICATION MURUGAVENI B.pptxSTOPPED FLOW METHOD & APPLICATION MURUGAVENI B.pptx
STOPPED FLOW METHOD & APPLICATION MURUGAVENI B.pptxMurugaveni B
 
Sulphur & Phosphrus Cycle PowerPoint Presentation (2) [Autosaved]-3-1.pptx
Sulphur & Phosphrus Cycle PowerPoint Presentation (2) [Autosaved]-3-1.pptxSulphur & Phosphrus Cycle PowerPoint Presentation (2) [Autosaved]-3-1.pptx
Sulphur & Phosphrus Cycle PowerPoint Presentation (2) [Autosaved]-3-1.pptxnoordubaliya2003
 
GenBio2 - Lesson 1 - Introduction to Genetics.pptx
GenBio2 - Lesson 1 - Introduction to Genetics.pptxGenBio2 - Lesson 1 - Introduction to Genetics.pptx
GenBio2 - Lesson 1 - Introduction to Genetics.pptxBerniceCayabyab1
 
Call Girls In Nihal Vihar Delhi ❤️8860477959 Looking Escorts In 24/7 Delhi NCR
Call Girls In Nihal Vihar Delhi ❤️8860477959 Looking Escorts In 24/7 Delhi NCRCall Girls In Nihal Vihar Delhi ❤️8860477959 Looking Escorts In 24/7 Delhi NCR
Call Girls In Nihal Vihar Delhi ❤️8860477959 Looking Escorts In 24/7 Delhi NCRlizamodels9
 
Is RISC-V ready for HPC workload? Maybe?
Is RISC-V ready for HPC workload? Maybe?Is RISC-V ready for HPC workload? Maybe?
Is RISC-V ready for HPC workload? Maybe?Patrick Diehl
 
Call Girls in Majnu Ka Tilla Delhi 🔝9711014705🔝 Genuine
Call Girls in Majnu Ka Tilla Delhi 🔝9711014705🔝 GenuineCall Girls in Majnu Ka Tilla Delhi 🔝9711014705🔝 Genuine
Call Girls in Majnu Ka Tilla Delhi 🔝9711014705🔝 Genuinethapagita
 

Recently uploaded (20)

Pests of safflower_Binomics_Identification_Dr.UPR.pdf
Pests of safflower_Binomics_Identification_Dr.UPR.pdfPests of safflower_Binomics_Identification_Dr.UPR.pdf
Pests of safflower_Binomics_Identification_Dr.UPR.pdf
 
RESPIRATORY ADAPTATIONS TO HYPOXIA IN HUMNAS.pptx
RESPIRATORY ADAPTATIONS TO HYPOXIA IN HUMNAS.pptxRESPIRATORY ADAPTATIONS TO HYPOXIA IN HUMNAS.pptx
RESPIRATORY ADAPTATIONS TO HYPOXIA IN HUMNAS.pptx
 
THE ROLE OF PHARMACOGNOSY IN TRADITIONAL AND MODERN SYSTEM OF MEDICINE.pptx
THE ROLE OF PHARMACOGNOSY IN TRADITIONAL AND MODERN SYSTEM OF MEDICINE.pptxTHE ROLE OF PHARMACOGNOSY IN TRADITIONAL AND MODERN SYSTEM OF MEDICINE.pptx
THE ROLE OF PHARMACOGNOSY IN TRADITIONAL AND MODERN SYSTEM OF MEDICINE.pptx
 
User Guide: Orion™ Weather Station (Columbia Weather Systems)
User Guide: Orion™ Weather Station (Columbia Weather Systems)User Guide: Orion™ Weather Station (Columbia Weather Systems)
User Guide: Orion™ Weather Station (Columbia Weather Systems)
 
Vision and reflection on Mining Software Repositories research in 2024
Vision and reflection on Mining Software Repositories research in 2024Vision and reflection on Mining Software Repositories research in 2024
Vision and reflection on Mining Software Repositories research in 2024
 
Pests of Bengal gram_Identification_Dr.UPR.pdf
Pests of Bengal gram_Identification_Dr.UPR.pdfPests of Bengal gram_Identification_Dr.UPR.pdf
Pests of Bengal gram_Identification_Dr.UPR.pdf
 
Microphone- characteristics,carbon microphone, dynamic microphone.pptx
Microphone- characteristics,carbon microphone, dynamic microphone.pptxMicrophone- characteristics,carbon microphone, dynamic microphone.pptx
Microphone- characteristics,carbon microphone, dynamic microphone.pptx
 
preservation, maintanence and improvement of industrial organism.pptx
preservation, maintanence and improvement of industrial organism.pptxpreservation, maintanence and improvement of industrial organism.pptx
preservation, maintanence and improvement of industrial organism.pptx
 
LIGHT-PHENOMENA-BY-CABUALDIONALDOPANOGANCADIENTE-CONDEZA (1).pptx
LIGHT-PHENOMENA-BY-CABUALDIONALDOPANOGANCADIENTE-CONDEZA (1).pptxLIGHT-PHENOMENA-BY-CABUALDIONALDOPANOGANCADIENTE-CONDEZA (1).pptx
LIGHT-PHENOMENA-BY-CABUALDIONALDOPANOGANCADIENTE-CONDEZA (1).pptx
 
Behavioral Disorder: Schizophrenia & it's Case Study.pdf
Behavioral Disorder: Schizophrenia & it's Case Study.pdfBehavioral Disorder: Schizophrenia & it's Case Study.pdf
Behavioral Disorder: Schizophrenia & it's Case Study.pdf
 
Scheme-of-Work-Science-Stage-4 cambridge science.docx
Scheme-of-Work-Science-Stage-4 cambridge science.docxScheme-of-Work-Science-Stage-4 cambridge science.docx
Scheme-of-Work-Science-Stage-4 cambridge science.docx
 
OECD bibliometric indicators: Selected highlights, April 2024
OECD bibliometric indicators: Selected highlights, April 2024OECD bibliometric indicators: Selected highlights, April 2024
OECD bibliometric indicators: Selected highlights, April 2024
 
Neurodevelopmental disorders according to the dsm 5 tr
Neurodevelopmental disorders according to the dsm 5 trNeurodevelopmental disorders according to the dsm 5 tr
Neurodevelopmental disorders according to the dsm 5 tr
 
Bentham & Hooker's Classification. along with the merits and demerits of the ...
Bentham & Hooker's Classification. along with the merits and demerits of the ...Bentham & Hooker's Classification. along with the merits and demerits of the ...
Bentham & Hooker's Classification. along with the merits and demerits of the ...
 
STOPPED FLOW METHOD & APPLICATION MURUGAVENI B.pptx
STOPPED FLOW METHOD & APPLICATION MURUGAVENI B.pptxSTOPPED FLOW METHOD & APPLICATION MURUGAVENI B.pptx
STOPPED FLOW METHOD & APPLICATION MURUGAVENI B.pptx
 
Sulphur & Phosphrus Cycle PowerPoint Presentation (2) [Autosaved]-3-1.pptx
Sulphur & Phosphrus Cycle PowerPoint Presentation (2) [Autosaved]-3-1.pptxSulphur & Phosphrus Cycle PowerPoint Presentation (2) [Autosaved]-3-1.pptx
Sulphur & Phosphrus Cycle PowerPoint Presentation (2) [Autosaved]-3-1.pptx
 
GenBio2 - Lesson 1 - Introduction to Genetics.pptx
GenBio2 - Lesson 1 - Introduction to Genetics.pptxGenBio2 - Lesson 1 - Introduction to Genetics.pptx
GenBio2 - Lesson 1 - Introduction to Genetics.pptx
 
Call Girls In Nihal Vihar Delhi ❤️8860477959 Looking Escorts In 24/7 Delhi NCR
Call Girls In Nihal Vihar Delhi ❤️8860477959 Looking Escorts In 24/7 Delhi NCRCall Girls In Nihal Vihar Delhi ❤️8860477959 Looking Escorts In 24/7 Delhi NCR
Call Girls In Nihal Vihar Delhi ❤️8860477959 Looking Escorts In 24/7 Delhi NCR
 
Is RISC-V ready for HPC workload? Maybe?
Is RISC-V ready for HPC workload? Maybe?Is RISC-V ready for HPC workload? Maybe?
Is RISC-V ready for HPC workload? Maybe?
 
Call Girls in Majnu Ka Tilla Delhi 🔝9711014705🔝 Genuine
Call Girls in Majnu Ka Tilla Delhi 🔝9711014705🔝 GenuineCall Girls in Majnu Ka Tilla Delhi 🔝9711014705🔝 Genuine
Call Girls in Majnu Ka Tilla Delhi 🔝9711014705🔝 Genuine
 

ABC workshop: 17w5025

  • 1. ABCD : ABC in High Dimensions Christian P. Robert Universit´e Paris-Dauphine, University of Warwick, & IUF workshop 17w5025, BIRS, Banff, Alberta, Canada
  • 2. Approximate Bayesian computation 1 Approximate Bayesian computation ABC basics Remarks on high D Non-parametric roots Automated summary selection Series B discussion
  • 3. The ABC method Bayesian setting: target is π(θ)f (x|θ)
  • 4. The ABC method Bayesian setting: target is π(θ)f (x|θ) When likelihood f (x|θ) not in closed form, likelihood-free rejection technique:
  • 5. The ABC method Bayesian setting: target is π(θ)f (x|θ) When likelihood f (x|θ) not in closed form, likelihood-free rejection technique: ABC algorithm For an observation y ∼ f (y|θ), under the prior π(θ), keep jointly simulating θ ∼ π(θ) , z ∼ f (z|θ ) , until the auxiliary variable z is equal to the observed value, z = y. [Tavar´e et al., 1997]
  • 6. A as A...pproximative When y is a continuous random variable, equality z = y is replaced with a tolerance condition, (y, z) ≤ where is a distance
  • 7. A as A...pproximative When y is a continuous random variable, equality z = y is replaced with a tolerance condition, (y, z) ≤ where is a distance Output distributed from π(θ) Pθ{ (y, z) < } ∝ π(θ| (y, z) < ) [Pritchard et al., 1999]
  • 8. ABC algorithm Algorithm 1 Likelihood-free rejection sampler 2 for i = 1 to N do repeat generate θ from the prior distribution π(·) generate z from the likelihood f (·|θ ) until ρ{η(z), η(y)} ≤ set θi = θ end for where η(y) defines a (not necessarily sufficient) statistic
  • 9. Output The likelihood-free algorithm samples from the marginal in z of: π (θ, z|y) = π(θ)f (z|θ)IA ,y (z) A ,y×Θ π(θ)f (z|θ)dzdθ , where A ,y = {z ∈ D|ρ(η(z), η(y)) < }.
  • 10. Output The likelihood-free algorithm samples from the marginal in z of: π (θ, z|y) = π(θ)f (z|θ)IA ,y (z) A ,y×Θ π(θ)f (z|θ)dzdθ , where A ,y = {z ∈ D|ρ(η(z), η(y)) < }. The idea behind ABC is that the summary statistics coupled with a small tolerance should provide a good approximation of the posterior distribution: π (θ|y) = π (θ, z|y)dz ≈ π(θ|η(y)) .
  • 11. MA example Consider the MA(q) model xt = t + q i=1 ϑi t−i Simple prior(s): uniform over the inverse [real and complex] roots in Q(u) = 1 − q i=1 ϑi ui under the identifiability conditions
  • 12. MA example Consider the MA(q) model xt = t + q i=1 ϑi t−i Simple prior(s): uniform prior over the identifiability zone, e.g. triangle for MA(2)
  • 13. MA example (2) ABC algorithm thus made of 1 picking a new value (ϑ1, ϑ2) in the triangle 2 generating an iid sequence ( t)−q<t≤T 3 producing a simulated series (xt)1≤t≤T
  • 14. MA example (2) ABC algorithm thus made of 1 picking a new value (ϑ1, ϑ2) in the triangle 2 generating an iid sequence ( t)−q<t≤T 3 producing a simulated series (xt)1≤t≤T Distance: basic distance between the series ρ((xt)1≤t≤T , (xt)1≤t≤T ) = T t=1 (xt − xt)2 or distance between summary statistics like the q autocorrelations τj = T t=j+1 xtxt−j
  • 15. Comparison of distance impact Evaluation of the tolerance on the ABC sample against both distances ( = 100%, 10%, 1%, 0.1%) for an MA(2) model
  • 16. Comparison of distance impact 0.0 0.2 0.4 0.6 0.8 01234 θ1 −2.0 −1.0 0.0 0.5 1.0 1.5 0.00.51.01.5 θ2 Evaluation of the tolerance on the ABC sample against both distances ( = 100%, 10%, 1%, 0.1%) for an MA(2) model
  • 17. Comparison of distance impact 0.0 0.2 0.4 0.6 0.8 01234 θ1 −2.0 −1.0 0.0 0.5 1.0 1.5 0.00.51.01.5 θ2 Evaluation of the tolerance on the ABC sample against both distances ( = 100%, 10%, 1%, 0.1%) for an MA(2) model
  • 18. Approximate Bayesian computation 1 Approximate Bayesian computation ABC basics Remarks on high D Non-parametric roots Automated summary selection Series B discussion
  • 19. ABC[and the]D[curse] “These theoretical results suggests that when performing ABC in high dimensions, the most important problem is how to address the poor performance of the ABC estimator. This is often bourne out in practice.” R. Everitt • how to most effectively make use of the sample from the joint distribution on (θ, t) that is generated by ABC algorithms • algorithms that estimate the distribution or statistics of θ given η(y) • methods that estimate the conditional η(y)|θ that is often less important in this context (but is still relevant in some applications): that of the • design of inference algorithms that are computationally scalable to high dimensional cases
  • 20. Parameter given summary • local regression correction `a la Beaumont et al. (2002) • non-parametric regression `a la Blum et al. (2013) • ridge regression `a la Blum et al. (2013) • semi-automatic ABC `a la Fernhead and Prangle (2012) • neural networks `a la Blum and Fran¸cois (2010) • random forest `a la Marin et al. (2016) • not particularly dimension resistant • restricted to estimation
  • 21. Summary given parameter • non-parametric regression `a la Blum et al. (2013) • synthetic likelihood `a la Wood (2010) and Dutta et al. (2016) • Gaussian processes `a la Wilkinson (2014), Meeds and Welling (2013), J¨arvenp¨a¨a et al. (2016)
  • 22. Reproducing kernel Hilbert spaces Several papers suggesting moving from kernel estimation to RKHS regression: • Fukumizu, Le & Gretton (2013) • Nakagome, Fukumizu, & Mano (2013) • Park, Jitkrittum, & Sejdinovic (2016) • Mitrovic, Sejdinovic, & Teh (2016)
  • 23. Reproducing kernel Hilbert spaces Several papers suggesting moving from kernel estimation to RKHS regression: with arguments • removing dependence on summary statistics by using “the data themselves” • correlatively no loss of information • providing a form of distance between empirical measures • learning directly the regression • relatively easy and fast implementation (with linear time MMD)
  • 24. K-ABC = ABC + RKHS ”A notable property of KBR is that the prior and likelihood are represented in terms of samples.” [Park et al., 2016] Concept of a kernel function h(·, ·) that serves as a discrepancy between two samples or two sets of summary statistics, h(y, yobs)
  • 25. K-ABC = ABC + RKHS ”A notable property of KBR is that the prior and likelihood are represented in terms of samples.” [Park et al., 2016] Concept of a kernel function h(·, ·) that serves as a discrepancy between two samples or two sets of summary statistics, h(y, y_obs) Estimates based on learning sample (θ_t, y_t)_{t=1}^T expressed as Σ_{t=1}^T ω_t h(θ_t) with G = (h(η(y_i), η(y_j)))_{i,j} and ω = [G + n ε_n I_n]^{−1} (h(η(y_1), η(y_obs)), . . . , h(η(y_T), η(y_obs)))^T [Fukumizu et al., 2013]
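As a rough illustration of the weight formula, the sketch below computes ω with a Gaussian kernel on the summary statistics; the kernel choice, its bandwidth and the regularisation level ε_n are assumptions made for the example, not prescriptions from the slide or from Fukumizu et al.

```python
import numpy as np

def gaussian_gram(a, b, bandwidth):
    # k(a, b) = exp(-||a - b||^2 / (2 bandwidth^2)) evaluated on all pairs of rows
    d2 = np.sum((a[:, None, :] - b[None, :, :]) ** 2, axis=-1)
    return np.exp(-d2 / (2 * bandwidth ** 2))

def kabc_weights(summ_sim, summ_obs, eps_n, bandwidth):
    # omega = [G + n eps_n I_n]^{-1} (k(eta(y_1), eta(y_obs)), ..., k(eta(y_n), eta(y_obs)))^T
    n = summ_sim.shape[0]
    G = gaussian_gram(summ_sim, summ_sim, bandwidth)
    k_obs = gaussian_gram(summ_sim, summ_obs[None, :], bandwidth).ravel()
    return np.linalg.solve(G + n * eps_n * np.eye(n), k_obs)

# posterior expectation of a function f of theta is then approximated by
#   sum_t omega_t f(theta_t), e.g. weights @ thetas for the posterior mean
# (the weights are signed and need not sum to one)
```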
  • 26. K2-ABC = ABC + RKHS + RKHS Second level RKHS when h(y_i, y_j) = exp{−ε^{−1} MMD²(y_i, y_j)} • no need of summary statistics • require maximum mean discrepancy or other distance between distributions • extension to non-iid data problematic [Park et al., 2016]
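A possible rendering of this second-level kernel, assuming a Gaussian base kernel and the plain (biased, V-statistic) estimate of MMD²; the bandwidth is arbitrary here and the linear-time MMD variants mentioned earlier are not shown.

```python
import numpy as np

def mmd2(x, y, bandwidth):
    # V-statistic estimate of MMD^2 between samples x and y, Gaussian base kernel
    def gram(a, b):
        d2 = np.subtract.outer(a, b) ** 2 if a.ndim == 1 else \
             np.sum((a[:, None, :] - b[None, :, :]) ** 2, axis=-1)
        return np.exp(-d2 / (2 * bandwidth ** 2))
    return gram(x, x).mean() + gram(y, y).mean() - 2 * gram(x, y).mean()

def k2_abc_weight(y_sim, y_obs, eps, bandwidth):
    # K2-ABC discrepancy kernel h(y_sim, y_obs) = exp(-MMD^2(y_sim, y_obs) / eps)
    return np.exp(-mmd2(y_sim, y_obs, bandwidth) / eps)
```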
  • 27. DR-ABC = ABC + RKHS + RKHS Kernel-based distribution regression (DR): instead of kernel-weighted regression for posterior expectations, construction of ABC summaries Claim of optimality of a summary statistic against a given loss function, but unclear to me Ridge-like summary Θ[G + n ε_n I_n]^{−1} h, close to the above derivation of weights [Mitrovic et al., 2016]
  • 28. Approximate Bayesian computation 1 Approximate Bayesian computation ABC basics Remarks on high D Non-parametric roots Automated summary selection Series B discussion
  • 29. ABC-NP Better usage of [prior] simulations by adjustment: instead of throwing away θ such that ρ(η(z), η(y)) > ε, replace the θ’s with locally regressed transforms (use with BIC) θ* = θ − {η(z) − η(y)}^T β̂ [Csilléry et al., TEE, 2010] where β̂ is obtained by [NP] weighted least squares regression on (η(z) − η(y)) with weights K_δ{ρ(η(z), η(y))} [Beaumont et al., 2002, Genetics]
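A minimal sketch of this local-linear adjustment, with an Epanechnikov kernel playing the role of K_δ and plain weighted least squares for β̂; the helper name abc_regression_adjust and the kernel choice are illustrative assumptions.

```python
import numpy as np

def abc_regression_adjust(thetas, summ_sim, summ_obs, delta):
    # theta* = theta - (eta(z) - eta(y))^T beta_hat, Beaumont et al. (2002) style
    diff = summ_sim - summ_obs                                  # eta(z_i) - eta(y)
    dist = np.sqrt(np.sum(diff ** 2, axis=1))
    w = np.where(dist < delta, 1 - (dist / delta) ** 2, 0.0)    # Epanechnikov K_delta weights
    keep = w > 0
    X = np.hstack([np.ones((keep.sum(), 1)), diff[keep]])       # intercept + local covariates
    W = np.diag(w[keep])
    beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ thetas[keep]) # weighted least squares
    theta_adj = thetas[keep] - diff[keep] @ beta[1:]            # drop the intercept row
    return theta_adj, w[keep]
```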
  • 30. ABC-NP (regression) Also found in the subsequent literature, e.g. in Fearnhead–Prangle (2012): weight the simulations directly by K_δ{ρ(η(z(θ)), η(y))} or (1/S) Σ_{s=1}^S K_δ{ρ(η(z^s(θ)), η(y))} [consistent estimate of f(η|θ)]
  • 31. ABC-NP (regression) Also found in the subsequent literature, e.g. in Fearnhead–Prangle (2012): weight the simulations directly by K_δ{ρ(η(z(θ)), η(y))} or (1/S) Σ_{s=1}^S K_δ{ρ(η(z^s(θ)), η(y))} [consistent estimate of f(η|θ)] Curse of dimensionality: poor estimate when d = dim(η) is large...
  • 32. ABC-NP (density estimation) Use of the kernel weights K_δ{ρ(η(z(θ)), η(y))} leads to the NP estimate of the posterior expectation Σ_i θ_i K_δ{ρ(η(z(θ_i)), η(y))} / Σ_i K_δ{ρ(η(z(θ_i)), η(y))} [Blum, JASA, 2010]
  • 33. ABC-NP (density estimation) Use of the kernel weights K_δ{ρ(η(z(θ)), η(y))} leads to the NP estimate of the posterior conditional density Σ_i K̃_b(θ_i − θ) K_δ{ρ(η(z(θ_i)), η(y))} / Σ_i K_δ{ρ(η(z(θ_i)), η(y))} [Blum, JASA, 2010]
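To make the two kernel estimates concrete, here is a sketch for a scalar parameter, with Gaussian kernels standing in for both K_δ and K̃_b; the kernels, bandwidths and names are assumptions made only for illustration.

```python
import numpy as np

def abc_np_estimates(thetas, summ_sim, summ_obs, delta, b, theta_grid):
    # scalar-theta sketch of the Nadaraya-Watson posterior mean and conditional density
    dist = np.linalg.norm(summ_sim - summ_obs, axis=1)
    w = np.exp(-0.5 * (dist / delta) ** 2)                    # K_delta{rho(eta(z_i), eta(y))}
    post_mean = np.sum(w * thetas) / np.sum(w)                # sum_i theta_i K_delta / sum_i K_delta
    # conditional density: sum_i Ktilde_b(theta_i - theta) K_delta(...) / sum_i K_delta(...)
    kb = np.exp(-0.5 * ((theta_grid[:, None] - thetas[None, :]) / b) ** 2) / (b * np.sqrt(2 * np.pi))
    dens = (kb * w[None, :]).sum(axis=1) / w.sum()
    return post_mean, dens
```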
  • 34. ABC-NP (density estimations) Other versions incorporating regression adjustments Σ_i K̃_b(θ*_i − θ) K_δ{ρ(η(z(θ_i)), η(y))} / Σ_i K_δ{ρ(η(z(θ_i)), η(y))}
  • 35. ABC-NP (density estimations) Other versions incorporating regression adjustments Σ_i K̃_b(θ*_i − θ) K_δ{ρ(η(z(θ_i)), η(y))} / Σ_i K_δ{ρ(η(z(θ_i)), η(y))} In all cases, error E[ĝ(θ|y)] − g(θ|y) = c b² + c δ² + O_P(b² + δ²) + O_P(1/nδ^d), var(ĝ(θ|y)) = c/(n b δ^d) (1 + o_P(1)) [Blum, JASA, 2010]
  • 36. ABC-NP (density estimations) Other versions incorporating regression adjustments Σ_i K̃_b(θ*_i − θ) K_δ{ρ(η(z(θ_i)), η(y))} / Σ_i K_δ{ρ(η(z(θ_i)), η(y))} In all cases, error E[ĝ(θ|y)] − g(θ|y) = c b² + c δ² + O_P(b² + δ²) + O_P(1/nδ^d), var(ĝ(θ|y)) = c/(n b δ^d) (1 + o_P(1)) [standard NP calculations]
  • 37. ABC-NCH Incorporating non-linearities and heteroscedasticity: θ* = m̂(η(y)) + [θ − m̂(η(z))] σ̂(η(y))/σ̂(η(z))
  • 38. ABC-NCH Incorporating non-linearities and heteroscedasticity: θ* = m̂(η(y)) + [θ − m̂(η(z))] σ̂(η(y))/σ̂(η(z)) where • m̂(η) estimated by non-linear regression (e.g., neural network) • σ̂(η) estimated by non-linear regression on residuals log{θ_i − m̂(η_i)}² = log σ²(η_i) + ξ_i [Blum & François, 2009]
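A rough sketch of this two-stage adjustment, using scikit-learn's MLPRegressor as a stand-in for the neural-network regressions of Blum & François (any flexible regressor would do); the network size, the small jitter inside the log and the scalar-parameter restriction are choices made for the example, not part of the original method description.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor  # stand-in for the neural-net regressions

def abc_nch_adjust(thetas, summ_sim, summ_obs):
    # theta* = m_hat(eta(y)) + [theta - m_hat(eta(z))] * sigma_hat(eta(y)) / sigma_hat(eta(z))
    m = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000).fit(summ_sim, thetas)
    m_sim, m_obs = m.predict(summ_sim), m.predict(summ_obs.reshape(1, -1))
    # second regression, on the log squared residuals, to capture heteroscedasticity
    log_res = np.log((thetas - m_sim) ** 2 + 1e-12)
    s = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000).fit(summ_sim, log_res)
    sigma_sim = np.exp(0.5 * s.predict(summ_sim))
    sigma_obs = np.exp(0.5 * s.predict(summ_obs.reshape(1, -1)))
    return m_obs + (thetas - m_sim) * sigma_obs / sigma_sim
```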
  • 40. ABC-NCH (2) Why neural network? • fights curse of dimensionality • selects relevant summary statistics • provides automated dimension reduction • offers a model choice capability • improves upon multinomial logistic [Blum & François, 2009]
  • 41. ABC as knn [Biau et al., 2013, Annales de l’IHP] Practice of ABC: determine tolerance as a quantile on observed distances, say 10% or 1% quantile, ε = ε_N = q_α(d_1, . . . , d_N)
  • 42. ABC as knn [Biau et al., 2013, Annales de l’IHP] Practice of ABC: determine tolerance as a quantile on observed distances, say 10% or 1% quantile, ε = ε_N = q_α(d_1, . . . , d_N) • Interpretation of ε as nonparametric bandwidth only approximation of the actual practice [Blum & François, 2010]
  • 43. ABC as knn [Biau et al., 2013, Annales de l’IHP] Practice of ABC: determine tolerance as a quantile on observed distances, say 10% or 1% quantile, ε = ε_N = q_α(d_1, . . . , d_N) • Interpretation of ε as nonparametric bandwidth only approximation of the actual practice [Blum & François, 2010] • ABC is a k-nearest neighbour (knn) method with k_N = N ε_N [Loftsgaarden & Quesenberry, 1965]
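The quantile and knn views coincide, as the short sketch below makes explicit: keeping the k_N = ⌈αN⌉ simulations closest to the observed summary is the same as accepting within the α-quantile of the distances. The function name is illustrative.

```python
import numpy as np

def abc_knn(thetas, summ_sim, summ_obs, alpha=0.01):
    # keep the k_N = ceil(alpha * N) simulations closest to the observed summary;
    # eps_N is then the alpha-quantile of the observed distances
    dist = np.linalg.norm(summ_sim - summ_obs, axis=1)
    k = int(np.ceil(alpha * len(dist)))
    nearest = np.argsort(dist)[:k]
    eps_N = dist[nearest[-1]]
    return thetas[nearest], eps_N
```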
  • 44. ABC consistency Provided k_N/log log N −→ ∞ and k_N/N −→ 0 as N → ∞, for almost all s_0 (with respect to the distribution of S), with probability 1, (1/k_N) Σ_{j=1}^{k_N} ϕ(θ_j) −→ E[ϕ(θ_j)|S = s_0] [Devroye, 1982]
  • 45. ABC consistency Provided k_N/log log N −→ ∞ and k_N/N −→ 0 as N → ∞, for almost all s_0 (with respect to the distribution of S), with probability 1, (1/k_N) Σ_{j=1}^{k_N} ϕ(θ_j) −→ E[ϕ(θ_j)|S = s_0] [Devroye, 1982] Biau et al. (2013) also recall pointwise and integrated mean square error consistency results on the corresponding kernel estimate of the conditional posterior distribution, under constraints k_N → ∞, k_N/N → 0, h_N → 0 and h_N^p k_N → ∞,
  • 46. Rates of convergence Further assumptions (on target and kernel) allow for precise (integrated mean square) convergence rates (as a power of the sample size N), derived from classical k-nearest neighbour regression, like • when m = 1, 2, 3, k_N ≈ N^{(p+4)/(p+8)} and rate N^{−4/(p+8)} • when m = 4, k_N ≈ N^{(p+4)/(p+8)} and rate N^{−4/(p+8)} log N • when m > 4, k_N ≈ N^{(p+4)/(m+p+4)} and rate N^{−4/(m+p+4)} [Biau et al., 2013]
  • 47. Noisy ABC Idea: Modify the data from the start ỹ = y_0 + εζ_1 with the same scale ε as ABC [see Fearnhead–Prangle] run ABC on ỹ
  • 48. Noisy ABC Idea: Modify the data from the start ỹ = y_0 + εζ_1 with the same scale ε as ABC [see Fearnhead–Prangle] run ABC on ỹ Then ABC produces an exact simulation from π_ε(θ|ỹ) = π(θ|ỹ) [Dean et al., 2011; Fearnhead and Prangle, 2012]
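A sketch of the mechanism, assuming a uniform ABC kernel on the summaries and an existing ABC routine run_abc (a placeholder): the observation is jittered once, at the ABC scale ε, and standard ABC is then run on the perturbed target.

```python
import numpy as np

def noisy_abc(summ_obs, run_abc, eps, seed=0):
    # noisy ABC: perturb the observed summary once, at the ABC scale eps,
    # with zeta drawn from the (here uniform) ABC kernel, then run standard ABC
    rng = np.random.default_rng(seed)
    zeta = rng.uniform(-1, 1, size=np.shape(summ_obs))
    summ_noisy = summ_obs + eps * zeta          # y_tilde = y_0 + eps * zeta_1
    return run_abc(summ_noisy, eps)             # run_abc: any standard ABC routine (placeholder)
```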
  • 49. Consistent noisy ABC • Degrading the data improves the estimation performance: • Noisy ABC-MLE is asymptotically (in n) consistent • under further assumptions, the noisy ABC-MLE is asymptotically normal • increase in variance of order ε^{−2} • likely degradation in precision or computing time due to the lack of summary statistic [curse of dimensionality]
  • 50. Approximate Bayesian computation 1 Approximate Bayesian computation ABC basics Remarks on high D Non-parametric roots Automated summary selection Series B discussion
  • 51. Semi-automatic ABC Fearnhead and Prangle (2010) study ABC and the selection of the summary statistic in close proximity to Wilkinson’s proposal ABC then considered from a purely inferential viewpoint and calibrated for estimation purposes Use of a randomised (or ‘noisy’) version of the summary statistics η̃(y) = η(y) + τε Derivation of a well-calibrated version of ABC, i.e. an algorithm that gives proper predictions for the distribution associated with this randomised summary statistic
  • 52. Semi-automatic ABC Fearnhead and Prangle (2010) study ABC and the selection of the summary statistic in close proximity to Wilkinson’s proposal ABC then considered from a purely inferential viewpoint and calibrated for estimation purposes Use of a randomised (or ‘noisy’) version of the summary statistics η̃(y) = η(y) + τε Derivation of a well-calibrated version of ABC, i.e. an algorithm that gives proper predictions for the distribution associated with this randomised summary statistic [calibration constraint: ABC approximation with same posterior mean as the true randomised posterior]
  • 53. Summary statistics • Optimality of the posterior expectation E[θ|y] of the parameter of interest as summary statistic η(y)!
  • 54. Summary statistics • Optimality of the posterior expectation E[θ|y] of the parameter of interest as summary statistic η(y)! • Use of the standard quadratic loss function (θ − θ_0)^T A (θ − θ_0)
  • 55. Details on Fearnhead and Prangle (F&P) ABC Use of a summary statistic S(·), an importance proposal g(·), a kernel K(·) ≤ 1 and a bandwidth h > 0 such that (θ, y_sim) ∼ g(θ)f(y_sim|θ) is accepted with probability (hence the bound) K[{S(y_sim) − s_obs}/h] and the corresponding importance weight defined by π(θ)/g(θ) [Fearnhead & Prangle, 2012]
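A sketch of this accept/weight scheme with a uniform kernel (so K ≤ 1 holds trivially); prior_pdf, propose, simulate and summary are hypothetical placeholders for the prior density, the importance proposal g, the model simulator and the summary map S.

```python
import numpy as np

def fp_abc(s_obs, prior_pdf, propose, simulate, summary, h, n_sim, seed=0):
    # accept (theta, y_sim) ~ g(theta) f(y_sim | theta) with probability K[(S(y_sim) - s_obs)/h],
    # here a uniform kernel bounded by 1, and weight accepted draws by pi(theta)/g(theta)
    rng = np.random.default_rng(seed)
    draws, weights = [], []
    for _ in range(n_sim):
        theta, g_val = propose(rng)                  # draw from the importance proposal g
        s_sim = summary(simulate(theta, rng))
        if np.all(np.abs(s_sim - s_obs) <= h):       # uniform K: accept iff within the band
            draws.append(theta)
            weights.append(prior_pdf(theta) / g_val) # importance weight pi(theta)/g(theta)
    return np.array(draws), np.array(weights)
```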
  • 56. Errors, errors, and errors Three levels of approximation • π(θ|y_obs) by π(θ|s_obs) loss of information [ignored] • π(θ|s_obs) by π_ABC(θ|s_obs) = ∫ π(s)K[{s − s_obs}/h]π(θ|s) ds / ∫ π(s)K[{s − s_obs}/h] ds noisy observations • π_ABC(θ|s_obs) by importance Monte Carlo based on N simulations, represented by var(a(θ)|s_obs)/N_acc [expected number of acceptances] [M. Twain/B. Disraeli]
  • 57. Average acceptance asymptotics For the average acceptance probability/approximate likelihood p(θ|s_obs) = ∫ f(y_sim|θ) K[{S(y_sim) − s_obs}/h] dy_sim , overall acceptance probability p(s_obs) = ∫ p(θ|s_obs) π(θ) dθ = π(s_obs) h^d + o(h^d) [F&P, Lemma 1]
  • 58. Calibration of h “This result gives insight into how S(·) and h affect the Monte Carlo error. To minimize Monte Carlo error, we need h^d to be not too small. Thus ideally we want S(·) to be a low dimensional summary of the data that is sufficiently informative about θ that π(θ|s_obs) is close, in some sense, to π(θ|y_obs)” (F&P, p.5) • turns h into an absolute value while it should be context-dependent and user-calibrated • only addresses one term in the approximation error and acceptance probability (“curse of dimensionality”) • h large prevents π_ABC(θ|s_obs) from being close to π(θ|s_obs) • d small prevents π(θ|s_obs) from being close to π(θ|y_obs) (“curse of [dis]information”)
  • 59. Converging ABC Theorem (F&P) For noisy ABC, the expected noisy-ABC log-likelihood, E{log[p(θ|s_obs)]} = ∫∫ log[p(θ|S(y_obs) + ε)] π(y_obs|θ_0) K(ε) dy_obs dε , has its maximum at θ = θ_0.
  • 60. Converging ABC Corollary For noisy ABC, the ABC posterior converges onto a point mass on the true parameter value as m → ∞. For standard ABC, not always the case (unless h goes to zero).
  • 61. Loss motivated statistic Under quadratic loss function, Theorem (F&P) (i) The minimal posterior error E[L(θ, θ̂)|y_obs] occurs when θ̂ = E(θ|y_obs) (!) (ii) When h → 0, E_ABC(θ|s_obs) converges to E(θ|y_obs) (iii) If S(y_obs) = E[θ|y_obs] then for θ̂ = E_ABC[θ|s_obs] E[L(θ, θ̂)|y_obs] = trace(AΣ) + h² ∫ x^T A x K(x) dx + o(h²).
  • 62. Optimal summary statistic “We take a different approach, and weaken the requirement for π_ABC to be a good approximation to π(θ|y_obs). We argue for π_ABC to be a good approximation solely in terms of the accuracy of certain estimates of the parameters.” (F&P, p.5) From this result, F&P • derive their choice of summary statistic, S(y) = E(θ|y) [almost sufficient] • suggest h = O(N^{−1/(2+d)}) and h = O(N^{−1/(4+d)}) as optimal bandwidths for noisy and standard ABC.
  • 63. Optimal summary statistic “We take a different approach, and weaken the requirement for π_ABC to be a good approximation to π(θ|y_obs). We argue for π_ABC to be a good approximation solely in terms of the accuracy of certain estimates of the parameters.” (F&P, p.5) From this result, F&P • derive their choice of summary statistic, S(y) = E(θ|y) [wow! E_ABC[θ|S(y_obs)] = E[θ|y_obs]] • suggest h = O(N^{−1/(2+d)}) and h = O(N^{−1/(4+d)}) as optimal bandwidths for noisy and standard ABC.
  • 64. Caveat Since E(θ|y_obs) is usually unavailable, F&P suggest (i) use a pilot run of ABC to determine a region of non-negligible posterior mass; (ii) simulate sets of parameter values and data; (iii) use the simulated sets of parameter values and data to estimate the summary statistic; and (iv) run ABC with this choice of summary statistic.
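Steps (ii)–(iv) can be sketched with a plain linear regression of the pilot parameters on features of the pilot data, the fitted predictor then serving as S(y) ≈ E(θ|y); F&P's restriction to the pilot posterior region and their use of transformed data features are omitted here, and the function name is illustrative.

```python
import numpy as np

def semi_automatic_summary(theta_pilot, data_pilot):
    # regress pilot parameter draws on features of the corresponding pilot datasets;
    # the fitted linear predictor is then used as the ABC summary S(y) ~ E[theta | y]
    X = np.hstack([np.ones((len(data_pilot), 1)), data_pilot])
    beta, *_ = np.linalg.lstsq(X, theta_pilot, rcond=None)
    def S(y):
        return np.hstack([1.0, y]) @ beta
    return S
```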
  • 65. Approximate Bayesian computation 1 Approximate Bayesian computation ABC basics Remarks on high D Non-parametric roots Automated summary selection Series B discussion
  • 66. More discussions about semi-automatic ABC [ End of section derived from comments on Read Paper, Series B, 2012] “The apparently arbitrary nature of the choice of summary statistics has always been perceived as the Achilles heel of ABC.” M. Beaumont
  • 67. More discussions about semi-automatic ABC [ End of section derived from comments on Read Paper, Series B, 2012] “The apparently arbitrary nature of the choice of summary statistics has always been perceived as the Achilles heel of ABC.” M. Beaumont • “Curse of dimensionality” linked with the increase of the dimension of the summary statistic • Connection with principal component analysis [Itan et al., 2010] • Connection with partial least squares [Wegmann et al., 2009] • Beaumont et al. (2002) postprocessed output is used as input by F&P to run a second ABC
  • 68. Wood’s alternative Instead of a non-parametric kernel approximation to the likelihood (1/R) Σ_r K{η(y_r) − η(y_obs)} Wood (2010) suggests a normal approximation η(y(θ)) ∼ N_d(µ_θ, Σ_θ) whose parameters can be approximated based on the R simulations (for each value of θ).
  • 69. Wood’s alternative Instead of a non-parametric kernel approximation to the likelihood (1/R) Σ_r K{η(y_r) − η(y_obs)} Wood (2010) suggests a normal approximation η(y(θ)) ∼ N_d(µ_θ, Σ_θ) whose parameters can be approximated based on the R simulations (for each value of θ). • Parametric versus non-parametric rate [Uh?!] • Automatic weighting of components of η(·) through Σ_θ • Dependence on normality assumption (pseudo-likelihood?) [Cornebise, Girolami & Kosmidis, 2012]
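A sketch of the synthetic log-likelihood at a single value of θ, where simulate and summary are placeholders for the model simulator and the summary map; the small jitter added to Σ_θ is only for numerical stability and not part of Wood's description.

```python
import numpy as np

def synthetic_loglik(theta, simulate, summary, s_obs, R, seed=0):
    # fit a Gaussian N_d(mu_theta, Sigma_theta) to R simulated summaries
    # and evaluate its log-density at the observed summary
    rng = np.random.default_rng(seed)
    S = np.array([summary(simulate(theta, rng)) for _ in range(R)])
    mu = S.mean(axis=0)
    Sigma = np.cov(S, rowvar=False) + 1e-8 * np.eye(S.shape[1])   # jitter for stability
    diff = s_obs - mu
    sign, logdet = np.linalg.slogdet(Sigma)
    quad = diff @ np.linalg.solve(Sigma, diff)
    return -0.5 * (logdet + quad + len(diff) * np.log(2 * np.pi))
```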
  • 70. ABC and BIC Idea of applying BIC during the local regression: • Run regular ABC • Select summary statistics during local regression • Recycle the prior simulation sample (reference table) with those summary statistics • Rerun the corresponding local regression (low cost) [Pudlo & Sedki, 2012]