This talk introduces the theory behind Bayesian Deep Learning, a topic of growing recent interest, together with recent applications. It briefly explains the theory of Bayesian inference and presents the theory and applications of Yarin Gal's Monte Carlo Dropout.
Approximate Bayesian computation for the Ising/Potts model — Matt Moores
Bayes' formula involves the likelihood function, p(y|θ), which is a problem when the likelihood is unavailable in closed form. ABC is a method for approximating the posterior p(θ|y) without evaluating the likelihood. Instead, pseudo-data are simulated from a generative model and compared with the observations. This talk gives an introduction to ABC algorithms: rejection sampling, ABC-MCMC and ABC-SMC. Applications of these algorithms to image analysis are presented as an illustrative example. These methods have been implemented in the R package bayesImageS.
This is joint work with Christian Robert (Warwick/Dauphine), Kerrie Mengersen and Christopher Drovandi (QUT).
Confidence Intervals — Exact Intervals, Jackknife, and Bootstrap — Francesco Casalegno
••• Learn how to correctly compute and interpret Confidence Intervals •••
In this presentation:
▸ (mis)understanding the real meaning of confidence intervals
▸ exact methods for known distributions
▸ approximate methods for non-parametric statistics
▸ resampling techniques: jackknife and bootstrap
Introduction:
RNA interference (RNAi), or Post-Transcriptional Gene Silencing (PTGS), is an important biological process for modulating eukaryotic gene expression.
It is a highly conserved process of post-transcriptional gene silencing in which double-stranded RNA (dsRNA) causes sequence-specific degradation of mRNA sequences.
dsRNA-induced gene silencing (RNAi) has been reported in a wide range of eukaryotes, including worms, insects, mammals, and plants.
This process mediates resistance to both endogenous parasitic and exogenous pathogenic nucleic acids, and regulates the expression of protein-coding genes.
What are small ncRNAs?
micro RNA (miRNA)
short interfering RNA (siRNA)
Properties of small non-coding RNA:
Involved in silencing mRNA transcripts.
Called “small” because they are usually only about 21-24 nucleotides long.
Synthesized by first cutting up longer precursor sequences (like the 61nt one that Lee discovered).
Silence an mRNA by base pairing with some sequence on the mRNA.
Discovery of siRNA
The first small RNA:
In 1993, Rosalind Lee (Victor Ambros lab) was studying a non-coding gene in C. elegans, lin-4, involved in silencing another gene, lin-14, at the appropriate time in the development of the worm.
Two small transcripts of lin-4 (22 nt and 61 nt) were found to be complementary to a sequence in the 3' UTR of lin-14.
Because lin-4 encoded no protein, she deduced that these transcripts must cause the silencing through RNA-RNA interactions.
Types of RNAi (non-coding RNA)
miRNA
Length: 23-25 nt
Trans-acting
Binds its target mRNA with mismatches
Causes translation inhibition
siRNA
Length: 21 nt
Cis-acting
Binds its target mRNA as a perfectly complementary sequence
piRNA
Length: 25-36 nt
Expressed in germ cells
Regulates transposon activity
MECHANISM OF RNAi:
First, the double-stranded RNA teams up with a protein complex named Dicer, which cuts the long RNA into short pieces.
Then another protein complex called RISC (RNA-induced silencing complex) discards one of the two RNA strands.
The single-stranded RNA docked in RISC then pairs with the homologous mRNA, which is destroyed.
THE RISC COMPLEX:
RISC is a large (>500 kDa) RNA-multiprotein complex that triggers degradation of the mRNA targeted by the siRNA.
The double-stranded siRNA is unwound by an ATP-independent helicase.
The active component of RISC is the Argonaute (Ago) protein, an endonuclease that cleaves the target mRNA.
DICER: endonuclease (RNase Family III)
Argonaute: Central Component of the RNA-Induced Silencing Complex (RISC)
One strand of the dsRNA produced by Dicer is retained in the RISC complex in association with Argonaute
ARGONAUTE PROTEIN:
1. PAZ (PIWI/Argonaute/Zwille): recognition of the target mRNA.
2. PIWI (P-element induced wimpy testis): breaks the phosphodiester bond of the mRNA (RNase H activity).
miRNA:
Double-stranded RNAs are naturally produced in eukaryotic cells during development, and they have a key role in regulating gene expression.
The increased availability of biomedical data, particularly in the public domain, offers the opportunity to better understand human health and to develop effective therapeutics for a wide range of unmet medical needs. However, data scientists remain stymied by the fact that data remain hard to find and to productively reuse because data and their metadata i) are wholly inaccessible, ii) are in non-standard or incompatible representations, iii) do not conform to community standards, and iv) have unclear or highly restricted terms and conditions that preclude legitimate reuse. These limitations require a rethink of how data can be made machine- and AI-ready - the key motivation behind the FAIR Guiding Principles. Concurrently, while recent efforts have explored the use of deep learning to fuse disparate data into predictive models for a wide range of biomedical applications, these models often fail even when the correct answer is already known, and fail to explain individual predictions in terms that data scientists can appreciate. These limitations suggest that new methods to produce practical artificial intelligence are still needed.
In this talk, I will discuss our work in (1) building an integrative knowledge infrastructure to prepare FAIR and "AI-ready" data and services along with (2) neurosymbolic AI methods to improve the quality of predictions and to generate plausible explanations. Attention is given to standards, platforms, and methods to wrangle knowledge into simple, but effective semantic and latent representations, and to make these available into standards-compliant and discoverable interfaces that can be used in model building, validation, and explanation. Our work, and those of others in the field, creates a baseline for building trustworthy and easy to deploy AI models in biomedicine.
Bio
Dr. Michel Dumontier is the Distinguished Professor of Data Science at Maastricht University, founder and executive director of the Institute of Data Science, and co-founder of the FAIR (Findable, Accessible, Interoperable and Reusable) data principles. His research explores socio-technological approaches for responsible discovery science, which includes collaborative multi-modal knowledge graphs, privacy-preserving distributed data mining, and AI methods for drug discovery and personalized medicine. His work is supported through the Dutch National Research Agenda, the Netherlands Organisation for Scientific Research, Horizon Europe, the European Open Science Cloud, the US National Institutes of Health, and a Marie-Curie Innovative Training Network. He is the editor-in-chief for the journal Data Science and is internationally recognized for his contributions in bioinformatics, biomedical informatics, and semantic technologies including ontologies and linked data.
Earliest Galaxies in the JADES Origins Field: Luminosity Function and Cosmic ... — Sérgio Sacani
We characterize the earliest galaxy population in the JADES Origins Field (JOF), the deepest imaging field observed with JWST. We make use of the ancillary Hubble optical images (5 filters spanning 0.4-0.9 µm) and novel JWST images with 14 filters spanning 0.8-5 µm, including 7 medium-band filters, and reaching total exposure times of up to 46 hours per filter. We combine all our data at >2.3 µm to construct an ultradeep image, reaching as deep as ≈31.4 AB mag in the stack and 30.3-31.0 AB mag (5σ, r = 0.1″ circular aperture) in individual filters. We measure photometric redshifts and use robust selection criteria to identify a sample of eight galaxy candidates at redshifts z = 11.5-15. These objects show compact half-light radii of R1/2 ∼ 50-200 pc, stellar masses of M⋆ ∼ 10⁷-10⁸ M⊙, and star-formation rates of SFR ∼ 0.1-1 M⊙ yr⁻¹. Our search finds no candidates at 15 < z < 20, placing upper limits at these redshifts. We develop a forward-modeling approach to infer the properties of the evolving luminosity function without binning in redshift or luminosity that marginalizes over the photometric redshift uncertainty of our candidate galaxies and incorporates the impact of non-detections. We find a z = 12 luminosity function in good agreement with prior results, and that the luminosity function normalization and UV luminosity density decline by a factor of ∼2.5 from z = 12 to z = 14. We discuss the possible implications of our results in the context of theoretical models for the evolution of the dark matter halo mass function.
Observation of Io's Resurfacing via Plume Deposition Using Ground-based Adapt... — Sérgio Sacani
Since volcanic activity was first discovered on Io from Voyager images in 1979, changes on Io's surface have been monitored from both spacecraft and ground-based telescopes. Here, we present the highest spatial resolution images of Io ever obtained from a ground-based telescope. These images, acquired by the SHARK-VIS instrument on the Large Binocular Telescope, show evidence of a major resurfacing event on Io's trailing hemisphere. When compared to the most recent spacecraft images, the SHARK-VIS images show that a plume deposit from a powerful eruption at Pillan Patera has covered part of the long-lived Pele plume deposit. Although this type of resurfacing event may be common on Io, few have been detected due to the rarity of spacecraft visits and the previously low spatial resolution available from Earth-based telescopes. The SHARK-VIS instrument ushers in a new era of high resolution imaging of Io's surface using adaptive optics at visible wavelengths.
1. The ABC of ABC
Christian P. Robert
(U. Paris Dauphine PSL & Warwick U.)
JSM 2019, Denver, Colorado
2. Not to be confused with...
The XKCD of ABC
Christian P. Robert
(U. Paris Dauphine PSL & Warwick U.)
JSM 2019, Denver, Colorado
3. Thanks
Jean-Marie Cornuet, Arnaud Estoup (INRA Montpellier)
David Frazier, Gael Martin (Monash U)
Pierre Jacob, Natesh Pillai (Harvard U)
Kerrie Mengersen (QUT)
Jean-Michel Marin (U Montpellier)
Judith Rousseau (U Oxford)
Robin Ryder, Julien Stoehr (U Paris Dauphine)
5. Posterior distribution
Central concept of Bayesian inference:
π(θ | xobs) ∝ π(θ) × L(θ | xobs)
[θ the parameter, xobs the observation, π(θ) the prior, L(θ | xobs) the likelihood]
Drives
derivation of optimal decisions
assessment of uncertainty
model selection
[McElreath, 2015]
6. Monte Carlo representation
Exploration of the posterior π(θ | xobs) may require producing a sample
θ1, . . . , θT
distributed from π(θ | xobs) (or asymptotically so, by Markov chain Monte Carlo)
[McElreath, 2015]
7. Difficulties
MCMC = workhorse of practical Bayesian analysis (BUGS, JAGS, Stan, &tc.), except when the product
π(θ) × L(θ | xobs)
is well-defined but numerically unavailable or too costly to compute
Only partial solutions are available:
demarginalisation (latent variables)
exchange algorithm (auxiliary variables)
pseudo-marginal (unbiased estimator)
9. Example 1: Dynamic mixture
Mixture model
{1 − wµ,τ(x)} fβ,λ(x) + wµ,τ(x) gε,σ(x), x > 0,
where fβ,λ is a Weibull density, gε,σ a generalised Pareto density, and wµ,τ a Cauchy cdf
Intractable normalising constant
C(µ, τ, β, λ, ε, σ) = ∫₀^∞ {(1 − wµ,τ(x)) fβ,λ(x) + wµ,τ(x) gε,σ(x)} dx
[Frigessi, Haug & Rue, 2002]
10. Example 2: truncated Normal
Given a set A ⊂ R^k (k large), truncated Normal model
f(x | µ, Σ, A) ∝ exp{−(x − µ)ᵀ Σ⁻¹ (x − µ)/2} I_A(x)
with intractable normalising constant
C(µ, Σ, A) = ∫_A exp{−(x − µ)ᵀ Σ⁻¹ (x − µ)/2} dx
11. Example 3: robust Normal statistics
Normal sample
x1, . . . , xn ∼ N(µ, σ²)
summarised into the (insufficient) statistics
µ̂n = med(x1, . . . , xn)
σ̂n = mad(x1, . . . , xn) = med |xi − med(x1, . . . , xn)|
Under a conjugate prior π(µ, σ²), the posterior is close to intractable,
but simulation of (µ̂n, σ̂n) is straightforward
13. Example 5: exponential random graph
ERGM: binary random vector x indexed by all edges on a set of nodes plus graph
f(x | θ) = (1 / C(θ)) exp(θᵀ S(x))
with S(x) a vector of statistics and C(θ) an intractable normalising constant
[Grelaud & al., 2009; Everitt, 2012; Bouranis & al., 2017]
15. A?B?C?
A stands for approximate [wrong likelihood]
B stands for Bayesian [right prior]
C stands for computation [producing a parameter sample]
16. A?B?C?
Rough version of the data [from dot to ball]
Non-parametric approximation of the likelihood [near actual observation]
Use of non-sufficient statistics [dimension reduction]
Monte Carlo error [and no unbiasedness]
17. A?B?C?
"Marin et al. (2012) proposed Approximate Bayesian Computation (ABC), which is more complex than MCMC, but outputs cruder estimates"
[Heavens et al., 2017]
18. A seemingly naïve representation
When the likelihood f(x|θ) is not in closed form, likelihood-free rejection technique:
ABC algorithm
For an observation xobs ∼ f(x|θ), under the prior π(θ), keep jointly simulating
θ′ ∼ π(θ), z ∼ f(z|θ′),
until the auxiliary variable z is equal to the observed value, z = xobs
[Diggle & Gratton, 1984; Rubin, 1984; Tavaré et al., 1997]
20. Example 3: robust Normal statistics
#obs is the observed sample, assumed given, e.g. obs=rnorm(1e2,2,2)
mu=rnorm(N<-1e6) #prior
sig=1/sqrt(rgamma(N,2,2))
medobs=median(obs) #observed summaries
madobs=mad(obs)
for(t in diz<-1:N){
psud=rnorm(1e2)*sig[t]+mu[t] #pseudo-data
medpsu=median(psud)-medobs
madpsu=mad(psud)-madobs
diz[t]=medpsu^2+madpsu^2} #distance between summaries
#ABC sample: keep the 10% of draws closest to the observed summaries
subz=which(diz<quantile(diz,.1))
plot(mu[subz],sig[subz])
21. Example 6: combinatorics
If one draws 11 socks out of m socks made of f orphans and g pairs, with f + 2g = m, the number k of socks from the orphan group is hypergeometric H(11, m, f) and the probability to observe 11 orphan socks in total is
Σ_{k=0}^{11} [ (f choose k) (2g choose 11−k) / (m choose 11) ] × [ 2^{11−k} (g choose 11−k) / (2g choose 11−k) ]
22. Example 6: combinatorics
Given a prior distribution on nsocks and npairs,
nsocks ∼ Neg(30, 15), npairs | nsocks ∼ ⌊nsocks/2⌋ × Be(15, 2)
it is possible to
1. generate new values of nsocks and npairs,
2. generate a new observation of X, the number of unique socks out of 11,
3. accept the pair (nsocks, npairs) if the realisation of X is equal to 11
(see the R sketch below)
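A minimal R sketch of this simulation scheme (my reconstruction: the negative binomial is parameterised by mean 30 and size 15, and draws with fewer than 11 socks are rejected outright):
#simulate from the prior, draw 11 socks, keep pairs matching X = 11
N <- 1e5
nsocks <- rnbinom(N, mu = 30, size = 15)       #prior on total number of socks
prop <- rbeta(N, 15, 2)                        #prior proportion of paired socks
npairs <- round(prop * floor(nsocks / 2))
nuniq <- numeric(N)
for (i in 1:N) {
  if (nsocks[i] < 11) next                     #cannot draw 11 socks
  socks <- c(rep(seq_len(npairs[i]), each = 2),                  #paired socks
             2 * npairs[i] + seq_len(nsocks[i] - 2 * npairs[i])) #orphans
  nuniq[i] <- sum(table(sample(socks, 11)) == 1) #unique socks among the 11
}
keep <- nuniq == 11                            #ABC acceptance: X = 11
hist(nsocks[keep], freq = FALSE)               #posterior sample of nsocks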
23. Example 6: combinatorics
[Histogram: posterior density of nsocks]
The outcome of this simulation method is a distribution on the pair (nsocks, npairs) that is the conditional distribution of the pair given the observation X = 11
24. Why does it work?
The mathematical proof is trivial:
f(θi) ∝ Σ_{z∈D} π(θi) f(z|θi) I_y(z) ∝ π(θi) f(y|θi) = π(θi | y)
[Accept-Reject 101]
But very impractical when
Pθ(Z = xobs) ≈ 0
26. A as approximative
When y is a continuous random variable, strict equality z = xobs is replaced with a tolerance zone
ρ(xobs, z) ≤ ε
where ρ is a distance
Output distributed from
π(θ) Pθ{ρ(xobs, z) < ε}, by definition proportional to π(θ | ρ(xobs, z) < ε)
[Pritchard et al., 1999]
28. ABC algorithm
Algorithm 1 Likelihood-free rejection sampler
 for i = 1 to N do
  repeat
   generate θ′ from the prior π(·)
   generate z from the sampling density f(·|θ′)
  until ρ{η(z), η(xobs)} ≤ ε
  set θi = θ′
 end for
where η(xobs) defines a (not necessarily sufficient) statistic
Custom: η(xobs) is called the summary statistic
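As an illustration, a generic R helper implementing Algorithm 1 (a sketch under obvious assumptions: rprior, rmodel and eta are hypothetical user-supplied functions, and ρ is taken as the Euclidean distance):
abc_reject <- function(N, eps, rprior, rmodel, eta, etaobs) {
  draws <- vector("list", N)
  for (i in 1:N) {
    repeat {                                   #repeat until acceptance
      theta <- rprior()
      z <- rmodel(theta)
      if (sqrt(sum((eta(z) - etaobs)^2)) <= eps) break
    }
    draws[[i]] <- theta
  }
  do.call(rbind, draws)                        #N accepted parameter values
}
#e.g., for the robust Normal example of slide 20:
#abc_reject(1e3, .1, function() c(rnorm(1), 1/sqrt(rgamma(1,2,2))),
#           function(th) rnorm(1e2, th[1], th[2]),
#           function(z) c(median(z), mad(z)), c(median(obs), mad(obs)))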
30. Exact ABC posterior
The algorithm samples from the marginal in z of the posterior
πABCε(θ, z | xobs) = π(θ) f(z|θ) I_{Aε,xobs}(z) / ∫_{Aε,xobs×Θ} π(θ) f(z|θ) dz dθ,
where Aε,xobs = {z ∈ D | ρ(η(z), η(xobs)) < ε}.
Intuition that proper summary statistics coupled with a small tolerance ε = εη should provide a good approximation of the posterior distribution:
πABCε(θ | xobs) = ∫ πABCε(θ, z | xobs) dz ≈ π(θ | η(xobs))
32. why summaries?
reduction of dimension
improvement of signal to noise ratio
reduce tolerance ε considerably
whole data may be unavailable
33. Example 7: MA inference
Moving average model MA(2):
xt = εt + θ1 εt−1 + θ2 εt−2, εt iid∼ N(0, 1)
Comparison of raw series:
34. Example 7: MA inference
Moving average model MA(2):
xt = εt + θ1 εt−1 + θ2 εt−2, εt iid∼ N(0, 1)
Comparison of acf's:
36. why not summaries?
loss of sufficient information with πABC(θ|xobs) replaced with πABC(θ|η(xobs))
arbitrariness of summaries
uncalibrated approximation
whole data may be available (at same cost as summaries)
distributions may be compared (Wasserstein distances)
[Bernton et al., 2019]
37. summary strategies
Fundamental difficulty of selecting summary statistics when there is no non-trivial sufficient statistic [except when done by experimenters from the field]
1. Starting from a large collection of summary statistics, Joyce and Marjoram (2008) consider the sequential inclusion into the ABC target, with a stopping rule based on a likelihood ratio test.
2. Based on decision-theoretic principles, Fearnhead and Prangle (2012) end up with E[θ|xobs] as the optimal summary statistic.
3. Use of indirect inference by Drovandi, Pettit, & Paddy (2011) with estimators of parameters of an auxiliary model as summary statistics.
4. Starting from a large collection of summary statistics, Raynal & al. (2018, 2019) rely on random forests to build estimators and select summaries.
5. Starting from a large collection of summary statistics, Sedki & Pudlo (2012) use the Lasso to eliminate summaries.
43. Semi-automated ABC
Use of a summary statistic η(·), importance proposal g(·), kernel K(·) ≤ 1 with bandwidth h ↓ 0 such that
(θ, z) ∼ g(θ) f(z|θ)
is accepted with probability (hence the bound)
K[{η(z) − η(xobs)}/h]
and the corresponding importance weight defined by
π(θ) / g(θ)
Theorem Optimality of the posterior expectation E[θ|xobs] of the parameter of interest as summary statistic η(xobs)
[Fearnhead & Prangle, 2012; Sisson et al., 2019]
45. Random forests
Technique that stemmed from Leo Breiman's bagging (or bootstrap aggregating) machine learning algorithm for both classification [testing] and regression [estimation]
[Breiman, 1996]
Improved performance by averaging over classification schemes of randomly generated training sets, creating a "forest" of (CART) decision trees, inspired by Amit and Geman (1997) ensemble learning
[Breiman, 2001]
46. Growing the forest
Breiman's solution for inducing random features in the trees of the forest:
bootstrap resampling of the dataset and
random sub-setting [of size √t] of the covariates driving the classification or regression at every node of each tree
The covariate (summary) xτ that drives the node separation
xτ ≤ cτ
and the separation bound cτ are chosen by minimising entropy or the Gini index
47. ABC with random forests
Idea: Starting with
a possibly large collection of summary statistics (η1, . . . , ηp) (from scientific theory input to available statistical software and methodologies, to machine-learning alternatives),
an ABC reference table involving the model index, parameter values and summary statistics for the associated simulated pseudo-data,
run R's randomForest to infer M or θ from (η1i, . . . , ηpi):
at each step O(√p) indices sampled at random and the most discriminating statistic selected, by minimising the Gini loss
the average of the trees is the resulting summary statistic, a highly non-linear predictor of the model index
potential selection of the most active summaries, calibrated against pure noise
(see the R sketch below)
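A hedged R sketch of the regression-forest step on a toy model (the randomForest package stands in for abcrf; the Normal model, prior and summaries are illustrative assumptions, not the talk's examples):
library(randomForest)
obs <- rnorm(50, 1)                            #assumed observed sample
m <- 1e4
theta <- rnorm(m, 0, 5)                        #prior draws
eta <- t(sapply(theta, function(th) {          #reference table of summaries
  z <- rnorm(50, th); c(mean(z), median(z), mad(z)) }))
reftab <- data.frame(theta = theta, eta)
rf <- randomForest(theta ~ ., data = reftab)   #forest regressing θ on (η1, ..., ηp)
etaobs <- data.frame(t(c(mean(obs), median(obs), mad(obs))))
names(etaobs) <- names(reftab)[-1]
predict(rf, etaobs)                            #forest estimate of E[θ | η(xobs)]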
51. Classification of summaries by random forests
Given a large collection of summary statistics, rather than selecting a subset and excluding the others, estimate each parameter by random forests, which
handle thousands of predictors
ignore useless components
provide fast estimation with good local properties
are automatised with few calibration steps
substitute for Fearnhead and Prangle's (2012) preliminary estimation of θ̂(yobs)
include a natural (classification) distance measure that avoids choosing either distance or tolerance
[Marin et al., 2016]
52. Calibration of tolerance
Calibration of the threshold ε
from scratch [how small is small?]
from a k-nearest neighbour perspective [quantile of the prior predictive]
from asymptotics [convergence speed]
related with the choice of distance [automated selection by random forests]
[Fearnhead & Prangle, 2012; Wilkinson, 2013; Liu & Fearnhead, 2018]
54. Software
Several ABC R packages for performing parameter estimation and model selection
[Nunes & Prangle, 2017]
abc R package for basic ABC analyses
abctools R package for tuning ABC analyses
abcrf R package for ABC via random forests
EasyABC R package with several algorithms for performing efficient ABC sampling schemes, including four sequential sampling schemes and 3 MCMC schemes
DIYABC non-R software for population genetics
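For instance, a hedged usage sketch of the abc package (argument names quoted from memory of its documentation; param and sumstat would come from a simulated reference table, obsstat from the data, all assumed available):
library(abc)
#param: matrix of prior draws; sumstat: their summaries; obsstat: observed summaries
fit <- abc(target = obsstat, param = param, sumstat = sumstat,
           tol = .01, method = "rejection")
hist(fit$unadj.values)   #accepted (unadjusted) parameter values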
59. ABC-IS
Basic ABC algorithm suggests simulating from the prior
blind [no learning]
inefficient [curse of dimension]
inapplicable to improper priors
Importance sampling version
importance density g(θ)
bounded kernel function Kh with bandwidth h
acceptance probability of
Kh{ρ[η(xobs), η(x{θ})]} π(θ) / {g(θ) maxθ Aθ}
[Fearnhead & Prangle, 2012]
61. ABC-MCMC
Markov chain (θ(t)) created via the transition function
θ(t+1) = θ′ ∼ Kω(θ′|θ(t)) if x ∼ f(x|θ′) is such that x = y and u ∼ U(0, 1) ≤ π(θ′)Kω(θ(t)|θ′) / {π(θ(t))Kω(θ′|θ(t))},
θ(t+1) = θ(t) otherwise;
it has the posterior π(θ|y) as stationary distribution
[Marjoram et al, 2003]
62. ABC-MCMC
Algorithm 3 Likelihood-free MCMC sampler
 get (θ(0), z(0)) by Algorithm 1
 for t = 1 to N do
  generate θ′ from Kω(·|θ(t−1)), z′ from f(·|θ′), u from U[0,1]
  if u ≤ [π(θ′)Kω(θ(t−1)|θ′) / π(θ(t−1))Kω(θ′|θ(t−1))] I_{Aε,xobs}(z′) then
   set (θ(t), z(t)) = (θ′, z′)
  else
   (θ(t), z(t)) = (θ(t−1), z(t−1))
  end if
 end for
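A minimal R sketch of this sampler for a Normal mean toy model (prior N(0, 10²) and a symmetric random-walk proposal, so the Kω ratio cancels; the tolerance, summary and starting value are my illustrative choices):
obs <- rnorm(100, 1)                         #assumed observed sample
eps <- .05; T <- 1e4
theta <- numeric(T); theta[1] <- mean(obs)   #crude starting value
for (t in 2:T) {
  prop <- theta[t-1] + rnorm(1, 0, .5)       #symmetric proposal Kω
  z <- rnorm(100, prop)                      #pseudo-data
  logmh <- dnorm(prop, 0, 10, log=TRUE) - dnorm(theta[t-1], 0, 10, log=TRUE)
  if (abs(mean(z) - mean(obs)) < eps && log(runif(1)) < logmh) {
    theta[t] <- prop                         #accept within the tolerance event
  } else theta[t] <- theta[t-1]
}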
63. ABC-PMC
Generate a sample at iteration t by
π̂t(θ(t)) ∝ Σ_{j=1}^N ω_j^{(t−1)} Kt(θ(t) | θ_j^{(t−1)})
modulo acceptance of the associated xt, with tolerance εt ↓, and use the importance weight associated with an accepted simulation θ_i^{(t)}:
ω_i^{(t)} ∝ π(θ_i^{(t)}) / π̂t(θ_i^{(t)})
Still likelihood-free
[Sisson et al., 2007; Beaumont et al., 2009]
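A rough R sketch of ABC-PMC for the same Normal mean toy model (the decreasing tolerance schedule and the Gaussian kernel scale, twice the weighted standard deviation, are illustrative assumptions):
obs <- rnorm(100, 1); sobs <- mean(obs)
N <- 500; eps <- c(1, .3, .1)                 #decreasing tolerances
theta <- rnorm(N, 0, 10); w <- rep(1/N, N)    #t = 0: prior sample
for (t in seq_along(eps)) {
  tau <- 2 * sqrt(sum(w * (theta - sum(w * theta))^2))  #kernel scale
  newth <- numeric(N)
  for (i in 1:N) repeat {
    cand <- sample(theta, 1, prob = w) + rnorm(1, 0, tau)  #move via Kt
    if (abs(mean(rnorm(100, cand)) - sobs) < eps[t]) { newth[i] <- cand; break }
  }
  w <- dnorm(newth, 0, 10) /                  #weight = prior / kernel mixture
       sapply(newth, function(th) sum(w * dnorm(th, theta, tau)))
  theta <- newth; w <- w / sum(w)
}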
64. ABC-SMC
Use of a kernel Kt associated with target πεt and derivation of the backward kernel
Lt−1(z′, z) = πεt(z′) Kt(z′, z) / πεt(z)
Update of the weights
ω_i^{(t)} ∝ ω_i^{(t−1)} Σ_{m=1}^M I_{Aεt}(x_im^{(t)}) / Σ_{m=1}^M I_{Aεt−1}(x_im^{(t−1)})
when x_im^{(t)} ∼ Kt(x_i^{(t−1)}, ·)
[Del Moral, Doucet & Jasra, 2009]
65. ABC-NP
Better usage of [prior] simulations by adjustment: instead of throwing away θ′ such that ρ(η(z), η(xobs)) > ε, replace θ's with locally regressed transforms
θ* = θ − {η(z) − η(xobs)}ᵀ β̂ [Csilléry et al., TEE, 2010]
where β̂ is obtained by [NP] weighted least squares regression on (η(z) − η(xobs)) with weights
Kδ{ρ(η(z), η(xobs))}
[Beaumont et al., 2002, Genetics]
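A self-contained R sketch of this linear adjustment on the Normal toy model (the Epanechnikov weights and the 5% acceptance quantile are illustrative choices, not prescribed by the slide):
obs <- rnorm(100, 1); etaobs <- c(mean(obs), mad(obs))
th <- rnorm(5e4, 0, 5)
eta <- t(sapply(th, function(t0) { z <- rnorm(100, t0); c(mean(z), mad(z)) }))
d <- sqrt(rowSums(sweep(eta, 2, etaobs)^2))      #ρ(η(z), η(xobs))
keep <- d < quantile(d, .05)                     #accepted draws
w <- 1 - (d[keep] / max(d[keep]))^2              #Epanechnikov kernel weights Kδ
bhat <- lm(th[keep] ~ eta[keep, ], weights = w)$coef[-1]
thstar <- th[keep] - sweep(eta[keep, ], 2, etaobs) %*% bhat  #θ* adjustment
plot(density(thstar))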
66. ABC-NP (regression)
Also found in the subsequent literature, e.g. in Fearnhead & Prangle (2012): weight each simulation directly by
Kδ{ρ[η(z(θ)), η(xobs)]}
or
(1/S) Σ_{s=1}^S Kδ{ρ[η(zs(θ)), η(xobs)]}
[consistent estimate of f(η|θ)]
Curse of dimensionality: poor when d = dim(η) is large
68. ABC-NP (density estimation)
Use of the kernel weights
Kδ{ρ[η(z(θ)), η(xobs)]}
leads to the NP estimate of the posterior expectation
E_ABC[θ | xobs] ≈ Σi θi Kδ{ρ[η(z(θi)), η(xobs)]} / Σi Kδ{ρ[η(z(θi)), η(xobs)]}
[Blum, JASA, 2010]
69. ABC-NP (density estimation)
Use of the kernel weights
Kδ{ρ[η(z(θ)), η(xobs)]}
leads to the NP estimate of the posterior conditional density
π_ABC[θ | xobs] ≈ Σi K̃b(θi − θ) Kδ{ρ[η(z(θi)), η(xobs)]} / Σi Kδ{ρ[η(z(θi)), η(xobs)]}
[Blum, JASA, 2010]
70. ABC-NP (density estimations)
Other versions incorporating regression adjustments
Σi K̃b(θ*i − θ) Kδ{ρ[η(z(θi)), η(xobs)]} / Σi Kδ{ρ[η(z(θi)), η(xobs)]}
In all cases, error
E[ĝ(θ|xobs)] − g(θ|xobs) = cb² + cδ² + O_P(b² + δ²) + O_P(1/nδ^d)
var(ĝ(θ|xobs)) = (c / nbδ^d)(1 + o_P(1))
[Blum, JASA, 2010; standard NP calculations]
73. ABC-NCH
Incorporating non-linearities and heteroscedasticity:
θ* = m̂(η(xobs)) + [θ − m̂(η(z))] σ̂(η(xobs)) / σ̂(η(z))
where
m̂(η) is estimated by non-linear regression (e.g., a neural network)
σ̂(η) is estimated by non-linear regression on the residuals
log{θi − m̂(ηi)}² = log σ²(ηi) + ξi
[Blum & François, 2009]
75. ABC-NCH
Why a neural network?
fights the curse of dimensionality
selects relevant summary statistics
provides automated dimension reduction
offers a model choice capability
improves upon multinomial logistic regression
[Blum & François, 2009]
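A hedged sketch of the conditional-mean fit with R's nnet package (continuing the objects th, eta, keep and w from the adjustment sketch above; size and linout are standard nnet arguments, and the single hidden layer of 5 units is an arbitrary choice):
library(nnet)
#non-linear regression m̂(η) fitted on the accepted draws
mhat <- nnet(eta[keep, ], th[keep], size = 5, linout = TRUE, weights = w)
resid2 <- (th[keep] - predict(mhat, eta[keep, ]))^2  #residuals feeding σ̂²(η)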
77. Asymptotics of ABC
Since πABC(· | xobs) is an approximation of π(· | xobs) or π(· | η(xobs)), the coherence of ABC-based inference needs to be established on its own
[Li & Fearnhead, 2018a, 2018b; Frazier et al., 2018]
Meaning
establishing large sample (n) properties of ABC posteriors and ABC procedures
finding sufficient conditions and checks on the summary statistics η(·)
determining the proper rate of convergence of the tolerance to 0, ε = εn
[mostly] ignoring Monte Carlo errors
79. Consistency of ABC posteriors
ABC algorithm Bayesian consistent at θ0 if for any δ > 0,
Π(‖θ − θ0‖ > δ | ρ{η(xobs), η(z)} ≤ ε) → 0
as n → +∞, ε → 0
Bayesian consistency implies that sets containing θ0 have posterior probability tending to one as n → +∞, with the implication being the existence of a specific rate of concentration
Concentration around the true value and Bayesian consistency impose less stringent conditions on the convergence speed of the tolerance εn to zero than asymptotic normality of the ABC posterior
asymptotic normality of the ABC posterior mean does not require asymptotic normality of the ABC posterior
81. Asymptotic setup
Assumptions:
asymptotic: xobs = xobs(n) ∼ Pθ0^n and ε = εn, n → +∞
parametric: θ ∈ R^k, k fixed
concentration of the summary statistic η(z^n): ∃ b : θ → b(θ) such that ‖η(z^n) − b(θ)‖ = o_{Pθ}(1), ∀θ
identifiability of the parameter: b(θ) ≠ b(θ′) when θ ≠ θ′
82. Consistency of ABC posteriors
Concentration of the summary η(z): there exists b(θ) such that
‖η(z) − b(θ)‖ = o_{Pθ}(1)
Consistency:
Πεn(‖θ − θ0‖ ≤ δ | η(xobs)) = 1 + o_p(1)
Convergence rate: there exists δn = o(1) such that
Πεn(‖θ − θ0‖ ≤ δn | η(xobs)) = 1 + o_p(1)
84. Related convergence results
Studies on the large sample properties of ABC, with focus on the asymptotic properties of ABC point estimators
[Creel et al., 2015; Jasra, 2015; Li & Fearnhead, 2018a,b]
assumption of a CLT for the summary statistic plus regularity assumptions on the convergence of its density to the Normal limit
convergence rate of the posterior mean if εT = o(1/vT^{3/5})
acceptance probability chosen as an arbitrary density in η(xobs) − η(z)
characterisation of the asymptotic behaviour of the ABC posterior for all εT = o(1) with posterior concentration
asymptotic normality and unbiasedness of the posterior mean remain achievable even when lim_T vT εT = ∞, provided εT = o(1/vT^{1/2})
if kη > kθ ≥ 1 and if εT = o(1/vT^{3/5}), the posterior mean is asymptotically normal and unbiased, but asymptotically inefficient
if vT εT → ∞ and vT εT² = o(1), for kη > kθ ≥ 1,
E_Πε{vT(θ − θ0)} = {∇b(θ0)ᵀ ∇b(θ0)}⁻¹ ∇b(θ0)ᵀ vT{η(xobs) − b(θ0)} + op(1)
87. Asymptotic shape of posterior distribution
Shape of
Π(· | ‖η(xobs) − η(z)‖ ≤ εn)
depending on the relation between εn and the rate σn at which η(xobs) satisfies a CLT
Three different regimes:
1. σn = o(εn) −→ Uniform limit
2. σn ≍ εn −→ perturbed Gaussian limit
3. εn = o(σn) −→ Gaussian limit
88. asymptotic behaviour of posterior mean
When p = dim(η(xobs)) = d = dim(θ) and εn = o(n^{−3/10}),
E_ABC[dT(θ − θ0) | xobs] ⇒ N(0, {(∇b0)ᵀ Σ⁻¹ ∇b0}⁻¹)
[Li & Fearnhead (2018a)]
In fact, if εn^{β+1} √n = o(1), with β the Hölder-smoothness of π,
E_ABC[(θ − θ0) | xobs] = (∇b0)⁻¹ Z0 / √n + Σ_{j=1}^k hj(θ0) εn^{2j} + op(1), 2k = ⌊β⌋
[Fearnhead & Prangle, 2012]
89. asymptotic behaviour of posterior mean
When p = dim(η(xobs)) = d = dim(θ) and εn = o(n^{−3/10}),
E_ABC[dT(θ − θ0) | xobs] ⇒ N(0, {(∇b0)ᵀ Σ⁻¹ ∇b0}⁻¹)
[Li & Fearnhead (2018a)]
Iterating for fixed p mildly interesting: if
η̃(xobs) = E_ABC[θ | xobs]
then
E_ABC[θ | η̃(xobs)] = θ0 + (∇b0)⁻¹ Z0 / √n + {π′(θ0)/π(θ0)} εn² + o(·)
[Fearnhead & Prangle, 2012]
90. Curse of dimensionality
For reasonable statistical behaviour, the acceptance rate αT declines the faster the larger the dimension of θ, kθ, but is unaffected by the dimension of η, kη
Theoretical justification for dimension reduction methods that process parameter components individually and independently of other components
[Fearnhead & Prangle, 2012; Martin & al., 2016]
importance sampling approach of Li & Fearnhead (2018a) yields acceptance rates αT = O(1), when εT = O(1/vT)
92. Monte Carlo error
Link the choice of εT to the Monte Carlo error associated with NT draws in the ABC algorithm
Conditions (on εT) under which
α̂T = αT {1 + op(1)}
where α̂T = Σ_{i=1}^{NT} 1[d{η(y), η(z)} ≤ εT] / NT is the proportion of accepted draws among NT simulated draws of θ
Either
(i) εT = o(vT^{−1}) and (vT εT)^{−kη} εT^{−kθ} ≤ M NT
or
(ii) εT ≳ vT^{−1} and εT^{−kθ} ≤ M NT
for M large enough
93. Bayesian model choice
Several models M1, M2, . . . to be compared for a dataset xobs, making the model index M part of the inference
Use of a prior distribution π(M = m), plus a prior distribution on the parameter conditional on the value m of the model index, πm(θm)
Goal: derive the posterior distribution of M, a challenging computational target when models are complex
94. Generic ABC for model choice
Algorithm 4 Likelihood-free model choice sampler (ABC-MC)
 for t = 1 to T do
  repeat
   Generate m from the prior π(M = m)
   Generate θm from the prior πm(θm)
   Generate z from the model fm(z|θm)
  until ρ{η(z), η(xobs)} < ε
  Set m(t) = m and θ(t) = θm
 end for
[Cornuet et al., DIYABC, 2009]
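A small R sketch of Algorithm 4 for the Gauss-versus-Laplace comparison of the next slides (uniform prior on the model index, N(0, 2²) prior on the location under both models, summary = (median, MAD); all illustrative choices of mine):
obs <- rnorm(50)                              #observed data
etaobs <- c(median(obs), mad(obs))
N <- 1e5
m <- sample(1:2, N, replace = TRUE)           #prior on the model index
th <- rnorm(N, 0, 2)                          #prior on the location
d <- numeric(N)
for (i in 1:N) {
  z <- if (m[i] == 1) rnorm(50, th[i]) else
    th[i] + (rexp(50) - rexp(50)) / sqrt(2)   #Laplace(θ, 1/√2): variance one
  d[i] <- sum((c(median(z), mad(z)) - etaobs)^2)
}
keep <- d < quantile(d, .01)
table(m[keep]) / sum(keep)                    #ABC posterior probabilities of M1, M2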
95. ABC model choice consistency
Leaving approximations aside, the limiting ABC procedure is the Bayes factor based on η(xobs),
B12(η(xobs))
Potential loss of information at the testing level
[Robert et al., 2010]
When is a Bayes factor based on an insufficient statistic η(xobs) consistent?
[Marin et al., 2013]
97. Example 8: Gauss versus Laplace
Model M1: xobs ∼ N(θ1, 1)⊗n opposed to model M2: xobs ∼ L(θ2, 1/√2)⊗n, the Laplace distribution with mean θ2 and variance one
Four possible statistics η(xobs):
1. sample mean x̄obs (sufficient for M1 if not for M);
2. sample median med(xobs) (insufficient);
3. sample variance var(xobs) (ancillary);
4. median absolute deviation mad(xobs) = med(|xobs − med(xobs)|)
98. Example 8: Gauss versus Laplace
Model M1: xobs ∼ N(θ1, 1)⊗n opposed to model M2: xobs ∼ L(θ2, 1/√2)⊗n, the Laplace distribution with mean θ2 and variance one
[Boxplots of the ABC posterior probabilities of each model under the above summaries, Gauss vs Laplace data, n = 100]
99. Consistency
Summary statistics
η(xobs) = (τ1(xobs), τ2(xobs), · · · , τd(xobs)) ∈ R^d
with true distribution η ∼ Gn, true mean µ0, and distribution Gi,n under model Mi, corresponding posteriors πi(· | η^n)
Assumptions of a central limit theorem and large deviations for η(xobs) under the true model, plus usual Bayesian asymptotics, with di the effective dimension of the parameter
[Pillai et al., 2013]
100. Asymptotic marginals
Asymptotically
mi,n(t) = ∫_{Θi} gi,n(t|θi) πi(θi) dθi
such that
(i) if inf{|µi(θi) − µ0|; θi ∈ Θi} = 0,
Cl √n^{d−di} ≤ mi,n(η^n) ≤ Cu √n^{d−di}
and
(ii) if inf{|µi(θi) − µ0|; θi ∈ Θi} > 0,
mi,n(η^n) = o_{Pn}[√n^{d−τi} + √n^{d−αi}]
101. Between-model consistency
Consequence of the above is that the asymptotic behaviour of the Bayes factor is driven by the asymptotic mean value µ(θ) of η^n under both models. And only by this mean value!
Indeed, if
inf{|µ0 − µ2(θ2)|; θ2 ∈ Θ2} = inf{|µ0 − µ1(θ1)|; θ1 ∈ Θ1} = 0
then
Cl √n^{−(d1−d2)} ≤ m1,n(η^n) / m2,n(η^n) ≤ Cu √n^{−(d1−d2)},
where Cl, Cu = O_{Pn}(1), irrespective of the true model.
Only depends on the difference d1 − d2: no consistency
Else, if
inf{|µ0 − µ2(θ2)|; θ2 ∈ Θ2} > inf{|µ0 − µ1(θ1)|; θ1 ∈ Θ1} = 0
then
m1,n(η^n) / m2,n(η^n) ≥ Cu min(√n^{−(d1−α2)}, √n^{−(d1−τ2)})
104. Checking for adequate statistics
Run a practical check of the relevance (or non-relevance) of η^n:
null hypothesis that both models are compatible with the statistic η^n,
H0 : inf{|µ2(θ2) − µ0|; θ2 ∈ Θ2} = 0
against
H1 : inf{|µ2(θ2) − µ0|; θ2 ∈ Θ2} > 0
The testing procedure provides estimates of the mean of η^n under each model and checks for equality
105. ABC under misspecification
ABC methods rely on simulations z(θ) from the model to identify those close to xobs
What happens when the model is wrong?
for some tolerance sequences εn ↓ ε*, well-behaved ABC posteriors that concentrate posterior mass on a pseudo-true value
if εn too large, the asymptotic limit of the ABC posterior is uniform with radius of order εn − ε*
even if √n{εn − ε*} → 2c ∈ R, the limiting distribution is no longer Gaussian
ABC credible sets are invalid confidence sets
107. Example 9: Normal model with wrong variance
Assumed data generating process (DGP) is z ∼ N(θ, 1) but the actual DGP is xobs ∼ N(θ, σ̃²)
Use of the summaries
sample mean η1(xobs) = (1/n) Σ_{i=1}^n xi
centred summary η2(xobs) = (1/(n−1)) Σ_{i=1}^n (xi − η1(xobs))² − 1
Three ABC versions:
ABC-AR: accept/reject approach with Kε(d{η(z), η(xobs)}) = I(d{η(z), η(xobs)} ≤ ε) and d{x, y} = ‖x − y‖
ABC-K: smooth rejection approach, with Kε(d{η(z), η(xobs)}) a univariate Gaussian kernel
ABC-Reg: post-processing ABC approach with weighted linear regression adjustment
108. Example 9: Normal model with wrong variance
Posterior means for ABC-AR, ABC-K and ABC-Reg as σ̃² increases (N = 50,000 simulated data sets)
αn = n^{−5/9} quantile for ABC-AR
ABC-K and ABC-Reg: bandwidth of n^{−5/9}
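A toy R check of this behaviour with ABC-AR (true θ = 2 and σ̃² = 4; the prior and the 1% acceptance quantile are illustrative choices of mine):
obs <- rnorm(100, 2, 2)                    #actual DGP: N(θ, σ̃² = 4)
etaobs <- c(mean(obs), var(obs) - 1)       #(η1, η2) as above
th <- rnorm(5e4, 0, 5)
d <- sapply(th, function(t0) {             #assumed DGP: N(θ, 1)
  z <- rnorm(100, t0); sum((c(mean(z), var(z) - 1) - etaobs)^2) })
hist(th[d < quantile(d, .01)])             #concentrates near a pseudo-true value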
109. ABC misspecification
data xobs with true distribution P0, assumed issued from model Pθ, θ ∈ Θ ⊂ R^{kθ}, and summary statistic η(xobs) = (η1(xobs), ..., ηkη(xobs))
misspecification
inf_{θ∈Θ} D(P0||Pθ) = inf_{θ∈Θ} ∫ log[dP0(x)/dPθ(x)] dP0(x) > 0,
with
θ* = arg inf_{θ∈Θ} D(P0||Pθ)
[Muller, 2013]
ABC misspecification:
for b0 (resp. b(θ)) the limit of η(xobs) (resp. η(z)),
inf_{θ∈Θ} d{b0, b(θ)} > 0
ABC pseudo-true value:
θ* = arg inf_{θ∈Θ} d{b0, b(θ)}
111. Minimum tolerance
The approximate nature of ABC means that D(P0||Pθ) yields no meaningful notion of model misspecification associated with the output of ABC posterior distributions
Under identification conditions on b(·) ∈ R^{kη}, there exists ε* such that
ε* = inf_{θ∈Θ} d{b0, b(θ)} > 0
Once εn < ε*, no draw of θ is selected and the posterior Πε[A | η(xobs)] is ill-conditioned
But an appropriately chosen tolerance sequence (εn)n allows the ABC-based posterior to concentrate on the distance-dependent pseudo-true value θ*
115. ABC concentration under misspecification
Assumptions
(A0) Existence of a unique b0 such that d(η(xobs), b0) = o_{P0}(1) and of a sequence v0,n → +∞ such that
lim inf_{n→+∞} P0[d(η(xobs), b0) ≥ v0,n^{−1}] = 1
116. ABC concentration under misspecification
Assumptions
(A1) Existence of an injective map b : Θ → B ⊂ R^{kη} and of a function ρn with ρn(·) ↓ 0 as n → +∞, ρn(·) non-increasing, such that
Pθ[d(η(Z), b(θ)) > u] ≤ c(θ) ρn(u), ∫_Θ c(θ) dΠ(θ) < ∞
and assume either
(i) Polynomial deviations: existence of vn ↑ +∞ and u0, κ > 0 such that ρn(u) = vn^{−κ} u^{−κ}, for u ≤ u0
(ii) Exponential deviations: existence of hθ(·) > 0 such that Pθ[d(η(z), b(θ)) > u] ≤ c(θ) e^{−hθ(u vn)} and existence of m, C > 0 such that
∫_Θ c(θ) e^{−hθ(u vn)} dΠ(θ) ≤ C e^{−m·(u vn)^τ}, for u ≤ u0
118. ABC concentration under misspecification
Assumptions
(A2) existence of D > 0 and M0, δ0 > 0 such that, for all δ0 ≥ δ > 0 and M ≥ M0, existence of Sδ ⊂ {θ ∈ Θ : d(b(θ), b0) − ε* ≤ δ} for which
(i) in case (i), D < κ and ∫_{Sδ} (1 − c(θ)/M) dΠ(θ) ≳ δ^D
(ii) in case (ii), ∫_{Sδ} (1 − c(θ) e^{−hθ(M)}) dΠ(θ) ≳ δ^D
119. Consistency
Assume (A0), (A1) and (A2), with εn ↓ ε* and
εn ≥ ε* + M vn^{−1} + v0,n^{−1},
for M large enough. Let Mn ↑ ∞ and δn ≥ Mn(εn − ε*); then
Πε[d(b(θ), b0) ≥ ε* + δn | η(xobs)] = o_{P0}(1),
1. if δn ≥ Mn vn^{−1} un^{−D/κ} = o(1) in case (i)
2. if δn ≥ Mn vn^{−1} |log(un)|^{1/τ} = o(1) in case (ii)
with un = εn − (ε* + M vn^{−1} + v0,n^{−1}) ≥ 0
[Bernton et al., 2017; Frazier et al., 2019]
120. Regression adjustment under misspecification
Original accepted value θ artificially related to η(xobs) and η(z) through the local linear regression model
θ′ = µ + βᵀ{η(xobs) − η(z)} + ν,
where ν is the model residual
[Beaumont et al., 2003]
Asymptotic behaviour of the ABC-Reg posterior
Πε[· | η(xobs)]
determined by the behaviour of
Πε[· | η(xobs)], β̂, and {η(xobs) − η(z)}
122. Regression adjustment under misspecification
ABC-Reg takes draws of asymptotically optimal θ and perturbs them in a manner that need not preserve the optimality of the original draws
for β0 large, the pseudo-true value θ̃* is possibly outside Θ
extends to nonlinear regression adjustments (Blum & François, 2010)
potential correction of the adjustment (Frazier et al., 2019)
all local regression adjustments have much smaller posterior variability than ABC-AR, giving a false sense of the procedure's precision
123. Example 11: misspecified g-and-k
Quantile function of the g-and-k model:
F⁻¹(q) = a + b [1 + 0.8 (1 − exp(−g z(q))) / (1 + exp(−g z(q)))] (1 + z(q)²)^k z(q), q ∈ (0, 1),
where z(q) is the q-th N(0, 1) quantile
Data generated from a mixture distribution with minor bi-modality
126. computational bottleneck
Time per iteration increases with the sample size n of the data: the cost of sampling associated with a reasonable acceptance probability makes ABC infeasible for large datasets
surrogate models to get samples (e.g., using copulas)
direct sampling of summary statistics (e.g., synthetic likelihood)
borrow from proposals for scalable MCMC (e.g., divide & conquer)
127. Approximate ABC [AABC]
Idea: approximations on both parameter and model spaces by resorting to bootstrap techniques
[Buzbas & Rosenberg, 2015]
Procedure scheme
1. Sample (θi, xi), i = 1, . . . , m, from the prior predictive
2. Simulate θ* ∼ π(·) and assign weight wi to dataset x(i) simulated under the k-closest θi to θ*, by an Epanechnikov kernel
3. Generate dataset x* as a bootstrap weighted sample from (x(1), . . . , x(k))
Drawbacks
If m too small, the prior predictive sample may miss informative parameters
large error and misleading representation of the true posterior
only well-suited for models with very few parameters
129. Divide-and-conquer perspectives
1. divide the large dataset into smaller batches
2. sample from the batch posterior
3. combine the results to get a sample from the targeted posterior
Alternative via ABC-EP
[Barthelmé & Chopin, 2014]
131. Geometric combination: WASP
Subset posteriors: given a partition xobs[1], . . . , xobs[B] of the observed data xobs, define
π(θ | xobs[b]) ∝ π(θ) Π_{j∈[b]} f(xobs_j | θ)^B
[Srivastava et al., 2015]
Subset posteriors are combined using the Wasserstein barycenter
[Cuturi, 2014]
Drawback: requires sampling from f(· | θ)^B by ABC means. Should be feasible for latent variable (z) representations when f(x | z, θ) is available in closed form
[Doucet & Robert, 2001]
Alternative: backfeed subset posteriors as priors to other subsets
133. Consensus ABC
Naïve scheme
For each data batch b = 1, . . . , B:
1. Sample (θ1[b], . . . , θn[b]) from the diffused prior π̃(·) ∝ π(·)^{1/B}
2. Run ABC to sample from the batch posterior π̂(· | d(S(xobs[b]), S(x[b])) < ε)
3. Compute the sample posterior variance Σb
Combine the batch posterior approximations
θj = (Σ_{b=1}^B Σb⁻¹)⁻¹ Σ_{b=1}^B Σb⁻¹ θj[b]
Remark: the diffuse prior is quite non-informative, which calls for ABC-MCMC to avoid sampling from π̃(·)
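The combination step in R, as a short sketch (thetab, a list of B draw matrices, and Sigmab, the list of their covariance matrices, are hypothetical inputs of mine):
#precision-weighted consensus: θ = (Σb Σb⁻¹)⁻¹ Σb Σb⁻¹ θ[b], applied row-wise
W <- lapply(Sigmab, solve)                       #precision weights Σb⁻¹
comb <- Reduce(`+`, Map(`%*%`, thetab, W)) %*% solve(Reduce(`+`, W))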
135. Big parameter issues
Curse of dimensionality: as dim(Θ) increases
exploration of the parameter space gets harder
the summary statistic η is forced to grow, since it is at least of dimension dim(Θ)
Some solutions
adopt more local algorithms like ABC-MCMC or ABC-SMC
aim at posterior marginals and approximate the joint posterior by a copula
[Li et al., 2016]
run ABC-Gibbs
[Clarté et al., 2016]
137. Example 9: Hierarchical MA(2)
Based on 10⁶ prior and 10³ posterior simulations, 3n summary statistics, and series of length 100, the ABC posterior is hardly distinguishable from the prior!
138. ABC-Gibbs
When the parameter is decomposed into θ = (θ1, . . . , θn)
Algorithm 5 ABC-Gibbs sampler
 starting point θ(0) = (θ1(0), . . . , θn(0)), observation xobs
 for i = 1, . . . , N do
  for j = 1, . . . , n do
   θj(i) ∼ πεj(· | xobs, sj, θ1(i), . . . , θj−1(i), θj+1(i−1), . . . , θn(i−1))
  end for
 end for
Divide & conquer:
one tolerance εj for each parameter θj
one statistic sj for each parameter θj
[Clarté et al., 2016]
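A toy R sketch of ABC-Gibbs on a two-level hierarchy α → µ → x, with one statistic and one tolerance per parameter (the model, the statistics sα(µ) = µ and sµ(x) = x̄, and the tolerances are illustrative assumptions of mine):
xobs <- rnorm(50, rnorm(1, 0, 3))            #data simulated from the hierarchy
N <- 2000; alpha <- 0; mu <- mean(xobs)
out <- matrix(0, N, 2)
for (i in 1:N) {
  repeat {                                   #update α given µ, via sα(µ) = µ
    a <- rnorm(1, 0, 3)                      #prior α ~ N(0, 3²)
    if (abs(rnorm(1, a) - mu) < .3) { alpha <- a; break }
  }
  repeat {                                   #update µ given α, via sµ(x) = x̄
    m <- rnorm(1, alpha)                     #conditional prior µ | α ~ N(α, 1)
    if (abs(mean(rnorm(50, m)) - mean(xobs)) < .2) { mu <- m; break }
  }
  out[i, ] <- c(alpha, mu)
}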
140. Compatibility
When using ABC-Gibbs conditionals with different acceptance events, e.g., different statistics
π(α) π(sα(µ) | α) and π(µ) f(sµ(xobs) | α, µ),
the conditionals are incompatible
ABC-Gibbs does not necessarily converge (even for tolerance equal to zero)
the potential limiting distribution
is not a genuine posterior (double use of data)
is unknown [except for a specific version]
is possibly far from the genuine posterior(s)
[Clarté et al., 2016]
142. Convergence
In the hierarchical case n = 2,
Theorem
If there exists 0 < κ < 1/2 such that
sup_{θ1,θ̃1} ‖πε2(· | xobs, s2, θ1) − πε2(· | xobs, s2, θ̃1)‖_TV = κ,
the ABC-Gibbs Markov chain geometrically converges in total variation to a stationary distribution νε, with geometric rate 1 − 2κ.
143. Example 9: Hierarchical MA(2)
Separation from the prior for identical number of simulations
144. ABC sessions at JSM [and further]
#107 - The ABC of Making an Impact, Monday 8:30-10:20am
#???
[A]BayesComp, Jan 7-10 2020
ABC in Grenoble, March 18-19 2020
ABC in Longyearbyen, April 8-9 2021
146. ABC postdoc positions
2 post-doc positions with the ABSint research grant:
Focus on approximate Bayesian techniques like ABC, variational Bayes, PAC-Bayes, Bayesian non-parametrics, scalable MCMC, and related topics. A potential direction of research would be the derivation of new Bayesian tools for model checking in such complex environments.
Terms: up to 24 months, no teaching duty attached, primarily located at Université Paris-Dauphine, with supported periods in Oxford (J. Rousseau) and visits to Montpellier (J.-M. Marin). No hard deadline.
If interested, send application to bayesianstatistics@gmail.com