ABC, NCE, GANs & VAEs
Christian P. Robert
U. Paris Dauphine & Warwick U.
CDT masterclass, May 2022
Outline
1 Geyer’s 1994 logistic
2 Links with bridge sampling
3 Noise contrastive estimation
4 Generative models
5 Variational autoencoders (VAEs)
6 Generative adversarial networks
(GANs)
An early entry
A standard issue in Bayesian inference is to approximate the
marginal likelihood (or evidence)
$$E_k = \int_{Θ_k} π_k(ϑ_k)\,L_k(ϑ_k)\,\mathrm dϑ_k$$
[Jeffreys, 1939]
Bayes factor
For testing hypotheses H0 : ϑ ∈ Θ0 vs. Ha : ϑ ∉ Θ0, under prior
$$π(Θ_0)π_0(ϑ) + π(Θ_0^c)π_1(ϑ)\,,$$
central quantity
$$B_{01} = \frac{π(Θ_0|x)}{π(Θ_0^c|x)} \Big/ \frac{π(Θ_0)}{π(Θ_0^c)}
         = \frac{\int_{Θ_0} f(x|ϑ)\,π_0(ϑ)\,\mathrm dϑ}{\int_{Θ_0^c} f(x|ϑ)\,π_1(ϑ)\,\mathrm dϑ}$$
[Kass & Raftery, 1995; Jeffreys, 1939]
Bayes factor approximation
When approximating the Bayes factor
$$B_{01} = \frac{\int_{Θ_0} f_0(x|ϑ_0)\,π_0(ϑ_0)\,\mathrm dϑ_0}{\int_{Θ_1} f_1(x|ϑ_1)\,π_1(ϑ_1)\,\mathrm dϑ_1}$$
use of importance functions ϖ_0 and ϖ_1 and
$$\hat B_{01} = \frac{n_0^{-1}\sum_{i=1}^{n_0} f_0(x|ϑ_0^i)\,π_0(ϑ_0^i)/ϖ_0(ϑ_0^i)}
                    {n_1^{-1}\sum_{i=1}^{n_1} f_1(x|ϑ_1^i)\,π_1(ϑ_1^i)/ϖ_1(ϑ_1^i)}$$
when ϑ_0^i ∼ ϖ_0(ϑ) and ϑ_1^i ∼ ϖ_1(ϑ)
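As a side illustration (not part of the original slides), a minimal R sketch of this importance-sampling estimator of a Bayes factor in an assumed toy Normal-mean setting where the exact answer is available for checking; the priors and importance functions below are assumptions made for the example.
# assumed toy model: x ~ N(theta,1); H0 prior N(0,0.5^2), H1 prior N(0,2^2)
set.seed(1)
x <- 0.8
lik <- function(th) dnorm(x, th, 1)
# importance functions: the (here known) Normal posteriors under each prior
m0 <- x/(1 + 1/.25); s0 <- sqrt(1/(1 + 1/.25))
m1 <- x/(1 + 1/4);   s1 <- sqrt(1/(1 + 1/4))
n0 <- n1 <- 1e4
th0 <- rnorm(n0, m0, s0); th1 <- rnorm(n1, m1, s1)
num <- mean(lik(th0)*dnorm(th0, 0, .5)/dnorm(th0, m0, s0))  # evidence under H0
den <- mean(lik(th1)*dnorm(th1, 0, 2)/dnorm(th1, m1, s1))   # evidence under H1
c(B01 = num/den, exact = dnorm(x, 0, sqrt(1.25))/dnorm(x, 0, sqrt(5)))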
Forgetting and learning
Counterintuitive choice of importance function based on
mixtures
If ϑ_it ∼ ϖ_i(ϑ) (i = 1, . . . , I, t = 1, . . . , T_i),
$$\mathbb E_π[h(ϑ)] ≈ \frac{1}{T_i}\sum_{t=1}^{T_i} h(ϑ_{it})\,\frac{π(ϑ_{it})}{ϖ_i(ϑ_{it})}$$
replaced with
$$\mathbb E_π[h(ϑ)] ≈ \sum_{i=1}^{I}\sum_{t=1}^{T_i} h(ϑ_{it})\,\frac{π(ϑ_{it})}{\sum_{j=1}^{I} T_j\,ϖ_j(ϑ_{it})}$$
Preserves unbiasedness and brings stability (while forgetting
about the original index)
[Geyer, 1991, unpublished; Owen & Zhou, 2000]
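A hedged R sketch (added for illustration) contrasting the per-proposal estimator with the pooled-denominator mixture estimator above, on an assumed toy target (a Student t3 density) and integrand h(ϑ) = ϑ²:
set.seed(42)
targ <- function(th) dt(th, df = 3)            # normalised toy target
h <- function(th) th^2                         # integrand, true value 3
T1 <- T2 <- 1e4
th1 <- rnorm(T1, 0, 1); th2 <- rnorm(T2, 0, 5) # draws from varpi_1, varpi_2
d1 <- function(th) dnorm(th, 0, 1); d2 <- function(th) dnorm(th, 0, 5)
est1 <- mean(h(th1)*targ(th1)/d1(th1))         # single-proposal estimator (unstable)
est2 <- mean(h(th2)*targ(th2)/d2(th2))
tha <- c(th1, th2)                             # mixture estimator: pooled denominator
estmix <- sum(h(tha)*targ(tha)/(T1*d1(tha) + T2*d2(tha)))
c(est1, est2, estmix)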
Enters the logistic
If considering unnormalised ϖ_j's, i.e.
$$ϖ_j(ϑ) = c_j\,\tilde ϖ_j(ϑ) \qquad j = 1, . . . , I$$
and realisations ϑ_it's from the mixture
$$ϖ(ϑ) = \frac{1}{T}\sum_{j=1}^{I} T_j\,ϖ_j(ϑ)
       = \frac{1}{T}\sum_{j=1}^{I} \tilde ϖ_j(ϑ)\,e^{η_j}
\qquad η_j = \log(c_j) + \log(T_j)$$
Geyer (1994) introduces allocation probabilities for the mixture
components
$$p_j(ϑ, η) = \tilde ϖ_j(ϑ)\,e^{η_j} \Big/ \sum_{m=1}^{I} \tilde ϖ_m(ϑ)\,e^{η_m}$$
to construct a pseudo-log-likelihood
$$\ell(η) := \sum_{i=1}^{I}\sum_{t=1}^{T_i} \log p_i(ϑ_{it}, η)$$
Enters the logistic (2)
Estimating η as
$$\hat η = \arg\max_η\, \ell(η)$$
produces the reverse logistic regression estimator of the
constants c_j
▶ partial forgetting of initial distribution
▶ objective function equivalent to a multinomial logistic
  regression with the $\log \tilde ϖ_i(ϑ_{it})$'s as covariates
▶ randomness reversed from the T_i's to the ϑ_it's
▶ constants c_j identifiable up to a constant
▶ resulting biased importance sampling estimator
Illustration
Special case when I = 2, c_1 = 1, T_1 = T_2
$$-\ell(c_2) = \sum_{t=1}^{T} \log\{1 + c_2\tilde ϖ_2(ϑ_{1t})/ϖ_1(ϑ_{1t})\}
             + \sum_{t=1}^{T} \log\{1 + ϖ_1(ϑ_{2t})/c_2\tilde ϖ_2(ϑ_{2t})\}$$
and
$$ϖ_1(ϑ) = φ(ϑ; 0, 3^2) \qquad \tilde ϖ_2(ϑ) = \exp\{-(ϑ-5)^2/2\} \qquad c_2 = 1/\sqrt{2π}$$
Illustration
Special case when I = 2, c_1 = 1, T_1 = T_2
$$-\ell(c_2) = \sum_{t=1}^{T} \log\{1 + c_2\tilde ϖ_2(ϑ_{1t})/ϖ_1(ϑ_{1t})\}
             + \sum_{t=1}^{T} \log\{1 + ϖ_1(ϑ_{2t})/c_2\tilde ϖ_2(ϑ_{2t})\}$$
tg=function(x)exp(-(x-5)**2/2)               #unnormalised tilde-varpi_2
pl=function(a)                               #minus pseudo-log-likelihood in c_2=a
  sum(log(1+a*tg(x)/dnorm(x,0,3)))+sum(log(1+dnorm(y,0,3)/a/tg(y)))
nrm=matrix(0,3,1e2)
for(i in 1:3)
 for(j in 1:1e2){
  x=rnorm(10**(i+1),0,3)                     #draws from varpi_1
  y=rnorm(10**(i+1),5,1)                     #draws from varpi_2
  nrm[i,j]=optimise(pl,c(.01,1))$minimum}    #estimate of c_2
Illustration
Special case when I = 2, c_1 = 1, T_1 = T_2
$$-\ell(c_2) = \sum_{t=1}^{T} \log\{1 + c_2\tilde ϖ_2(ϑ_{1t})/ϖ_1(ϑ_{1t})\}
             + \sum_{t=1}^{T} \log\{1 + ϖ_1(ϑ_{2t})/c_2\tilde ϖ_2(ϑ_{2t})\}$$
[Figure: boxplots of the estimates of c_2 over 100 replications, for the three sample sizes]
Illustration
Special case when I = 2, c_1 = 1, T_1 = T_2
$$-\ell(c_2) = \sum_{t=1}^{T} \log\{1 + c_2\tilde ϖ_2(ϑ_{1t})/ϖ_1(ϑ_{1t})\}
             + \sum_{t=1}^{T} \log\{1 + ϖ_1(ϑ_{2t})/c_2\tilde ϖ_2(ϑ_{2t})\}$$
[Figure: full logistic fit]
Outline
1 Geyer’s 1994 logistic
2 Links with bridge sampling
3 Noise contrastive estimation
4 Generative models
5 Variational autoencoders (VAEs)
6 Generative adversarial networks
(GANs)
Bridge sampling
Approximation of Bayes factors (and other ratios of integrals)
Special case:
If
$$π_1(ϑ_1|x) ∝ \tilde π_1(ϑ_1|x) \qquad π_2(ϑ_2|x) ∝ \tilde π_2(ϑ_2|x)$$
live on the same space (Θ_1 = Θ_2), then
$$B_{12} ≈ \frac{1}{n}\sum_{i=1}^{n} \frac{\tilde π_1(ϑ_i|x)}{\tilde π_2(ϑ_i|x)}
\qquad ϑ_i ∼ π_2(ϑ|x)$$
[Bennett, 1976; Gelman & Meng, 1998; Chen, Shao & Ibrahim, 2000]
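A minimal R sketch of this special-case estimator (added here, not in the slides), assuming two toy unnormalised densities on the same space whose ratio of normalising constants is 1/2:
set.seed(7)
tpi1 <- function(th) exp(-th^2/2)            # integral sqrt(2*pi)
tpi2 <- function(th) exp(-(th - 1)^2/8)      # integral 2*sqrt(2*pi)
th <- rnorm(1e5, 1, 2)                       # draws from pi_2
c(B12 = mean(tpi1(th)/tpi2(th)), exact = 1/2)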
Bridge sampling variance
The bridge sampling estimator does poorly if
$$\frac{\mathrm{var}(\hat B_{12})}{B_{12}^2} ≈ \frac{1}{n}\,
\mathbb E_{π_2}\!\left[\left(\frac{π_1(ϑ) - π_2(ϑ)}{π_2(ϑ)}\right)^{2}\right]$$
is large, i.e. if π_1 and π_2 have little overlap...
(Further) bridge sampling
General identity:
$$\frac{c_1}{c_2} = B_{12}
= \frac{\int \tilde π_2(ϑ|x)\,α(ϑ)\,π_1(ϑ|x)\,\mathrm dϑ}
       {\int \tilde π_1(ϑ|x)\,α(ϑ)\,π_2(ϑ|x)\,\mathrm dϑ}
\qquad ∀\,α(·)$$
$$≈ \frac{\frac{1}{n_1}\sum_{i=1}^{n_1} \tilde π_2(ϑ_{1i}|x)\,α(ϑ_{1i})}
        {\frac{1}{n_2}\sum_{i=1}^{n_2} \tilde π_1(ϑ_{2i}|x)\,α(ϑ_{2i})}
\qquad ϑ_{ji} ∼ π_j(ϑ|x)$$
Optimal bridge sampling
The optimal choice of auxiliary function is
$$α^{\star}(ϑ) = \frac{n_1 + n_2}{n_1 π_1(ϑ|x) + n_2 π_2(ϑ|x)}$$
leading to
$$\hat B_{12} ≈ \frac{\frac{1}{n_1}\sum_{i=1}^{n_1}
      \dfrac{\tilde π_2(ϑ_{1i}|x)}{n_1 π_1(ϑ_{1i}|x) + n_2 π_2(ϑ_{1i}|x)}}
     {\frac{1}{n_2}\sum_{i=1}^{n_2}
      \dfrac{\tilde π_1(ϑ_{2i}|x)}{n_1 π_1(ϑ_{2i}|x) + n_2 π_2(ϑ_{2i}|x)}}$$
Optimal bridge sampling (2)
Reason:
$$\frac{\mathrm{Var}(\hat B_{12})}{B_{12}^2} ≈ \frac{1}{n_1 n_2}
\left[\frac{\int π_1(ϑ)π_2(ϑ)\,[n_1π_1(ϑ) + n_2π_2(ϑ)]\,α(ϑ)^2\,\mathrm dϑ}
           {\left(\int π_1(ϑ)π_2(ϑ)\,α(ϑ)\,\mathrm dϑ\right)^2} - 1\right]$$
(by the δ method)
Drawback: dependence on the unknown normalising constants,
solved iteratively
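A hedged R sketch of the resulting iterative scheme (the unknown constants entering α★ are replaced by the current ratio estimate and the update is repeated); the two unnormalised densities are assumptions reused from the earlier sketch, and the recursion is written for the ratio of their normalising integrals:
set.seed(8)
tpi1 <- function(th) exp(-th^2/2)             # integral sqrt(2*pi)
tpi2 <- function(th) exp(-(th - 1)^2/8)       # integral 2*sqrt(2*pi)
n1 <- n2 <- 1e4
th1 <- rnorm(n1, 0, 1); th2 <- rnorm(n2, 1, 2)      # draws from pi_1 and pi_2
l1 <- tpi1(th1)/tpi2(th1); l2 <- tpi1(th2)/tpi2(th2)
s1 <- n1/(n1 + n2); s2 <- n2/(n1 + n2)
r <- 1                                        # current estimate of int(tpi1)/int(tpi2)
for (it in 1:50)                              # iterative optimal bridge (Meng & Wong type)
  r <- mean(l2/(s1*l2 + s2*r))/mean(1/(s1*l1 + s2*r))
c(r, 1/2)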
Back to the logistic
When T_1 = T_2 = T, optimising
$$-\ell(c_2) = \sum_{t=1}^{T} \log\{1 + c_2\tilde ϖ_2(ϑ_{1t})/ϖ_1(ϑ_{1t})\}
             + \sum_{t=1}^{T} \log\{1 + ϖ_1(ϑ_{2t})/c_2\tilde ϖ_2(ϑ_{2t})\}$$
and cancelling the derivative in c_2
$$\sum_{t=1}^{T} \frac{\tilde ϖ_2(ϑ_{1t})}{c_2\tilde ϖ_2(ϑ_{1t}) + ϖ_1(ϑ_{1t})}
- c_2^{-1}\sum_{t=1}^{T} \frac{ϖ_1(ϑ_{2t})}{ϖ_1(ϑ_{2t}) + c_2\tilde ϖ_2(ϑ_{2t})} = 0$$
leads to
$$c_2' = \frac{\sum_{t=1}^{T} \dfrac{ϖ_1(ϑ_{2t})}{ϖ_1(ϑ_{2t}) + c_2\tilde ϖ_2(ϑ_{2t})}}
             {\sum_{t=1}^{T} \dfrac{\tilde ϖ_2(ϑ_{1t})}{c_2\tilde ϖ_2(ϑ_{1t}) + ϖ_1(ϑ_{1t})}}$$
EM step for the maximum pseudo-likelihood estimation
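Continuing the earlier I = 2 illustration, a hedged R sketch of this fixed-point (EM-type) update; the samples and target are the same assumptions as before and the iteration should settle near 1/√(2π) ≈ 0.399:
set.seed(3)
tg <- function(x) exp(-(x - 5)^2/2)           # unnormalised tilde-varpi_2
Tn <- 1e4
x <- rnorm(Tn, 0, 3)                          # theta_{1t} ~ varpi_1 = N(0,3^2)
y <- rnorm(Tn, 5, 1)                          # theta_{2t} ~ varpi_2 = N(5,1)
c2 <- 1
for (it in 1:100)                             # fixed-point update from the slide
  c2 <- sum(dnorm(y, 0, 3)/(dnorm(y, 0, 3) + c2*tg(y))) /
        sum(tg(x)/(c2*tg(x) + dnorm(x, 0, 3)))
c(c2, 1/sqrt(2*pi))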
Mixtures as proposals
Design specific mixture for simulation purposes, with density
$$\tilde φ(ϑ) ∝ ω_1 π(ϑ) L(ϑ) + φ(ϑ)\,,$$
where φ(ϑ) is arbitrary (but normalised)
Note: ω_1 is not a probability weight
[Chopin & Robert, 2011]
evidence approximation by mixtures
Rao-Blackwellised estimate
$$\hat ξ = \frac{1}{T}\sum_{t=1}^{T}
\frac{ω_1 π(ϑ^{(t)}) L(ϑ^{(t)})}{ω_1 π(ϑ^{(t)}) L(ϑ^{(t)}) + φ(ϑ^{(t)})}\,,$$
converges to ω_1 Z/\{ω_1 Z + 1\}
Deduce \hat Z from ω_1 \hat Z/\{ω_1 \hat Z + 1\} = \hat ξ
Back to bridge sampling optimal estimate
[Chopin & Robert, 2011]
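A hedged R sketch of this evidence approximation in an assumed toy conjugate model (Normal prior and likelihood, single observation), where the mixture can be simulated exactly for checking; in practice the ϑ^(t)'s would come from an MCMC sampler targeting the mixture.
set.seed(5)
x <- 1                                       # single observation, N(theta,1) likelihood
prior <- function(th) dnorm(th, 0, 1)
lik   <- function(th) dnorm(x, th, 1)
phi   <- function(th) dnorm(th, 0.5, 1)      # arbitrary normalised component
w1 <- 10                                     # pseudo-weight, not a probability
Z  <- dnorm(x, 0, sqrt(2))                   # exact evidence, used only to simulate here
T <- 1e5
from_post <- runif(T) < w1*Z/(w1*Z + 1)      # exact draws from the mixture
th <- ifelse(from_post, rnorm(T, 0.5, sqrt(0.5)), rnorm(T, 0.5, 1))
xi <- mean(w1*prior(th)*lik(th)/(w1*prior(th)*lik(th) + phi(th)))
Zhat <- xi/(w1*(1 - xi))                     # solve w1*Z/(w1*Z+1) = xi
c(Zhat, Z)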
Non-parametric MLE
“At first glance, the problem appears to be an exercise in
calculus or numerical analysis, and not amenable to statistical
formulation” Kong et al. (JRSS B, 2002)
▶ use of Fisher information
▶ non-parametric MLE based on simulations
▶ comparison of sampling schemes through variances
▶ Rao–Blackwellised improvements by invariance constraints
[Meng, 2011, IRCEM]
NPMLE
Observing
$$Y_{ij} ∼ F_i(t) = c_i^{-1}\int_{-∞}^{t} ω_i(x)\,\mathrm dF(x)$$
with ω_i known and F unknown
NPMLE
Observing
$$Y_{ij} ∼ F_i(t) = c_i^{-1}\int_{-∞}^{t} ω_i(x)\,\mathrm dF(x)$$
with ω_i known and F unknown
“Maximum likelihood estimate” defined by the weighted empirical cdf
$$\sum_{i,j} ω_i(y_{ij})\,p(y_{ij})\,δ_{y_{ij}}$$
maximising in p
$$\prod_{i,j} c_i^{-1}\,ω_i(y_{ij})\,p(y_{ij})$$
NPMLE
Observing
$$Y_{ij} ∼ F_i(t) = c_i^{-1}\int_{-∞}^{t} ω_i(x)\,\mathrm dF(x)$$
with ω_i known and F unknown
“Maximum likelihood estimate” defined by the weighted empirical cdf
$$\sum_{i,j} ω_i(y_{ij})\,p(y_{ij})\,δ_{y_{ij}}$$
maximising in p
$$\prod_{i,j} c_i^{-1}\,ω_i(y_{ij})\,p(y_{ij})$$
Result such that
$$\sum_{ij} \frac{\hat c_r^{-1}\,ω_r(y_{ij})}{\sum_s n_s\,\hat c_s^{-1}\,ω_s(y_{ij})} = 1$$
[Vardi, 1985]
NPMLE
Observing
$$Y_{ij} ∼ F_i(t) = c_i^{-1}\int_{-∞}^{t} ω_i(x)\,\mathrm dF(x)$$
with ω_i known and F unknown
Result such that
$$\sum_{ij} \frac{\hat c_r^{-1}\,ω_r(y_{ij})}{\sum_s n_s\,\hat c_s^{-1}\,ω_s(y_{ij})} = 1$$
[Vardi, 1985]
Bridge sampling estimator
$$\sum_{ij} \frac{\hat c_r^{-1}\,ω_r(y_{ij})}{\sum_s n_s\,\hat c_s^{-1}\,ω_s(y_{ij})} = 1$$
[Gelman & Meng, 1998; Tan, 2004]
end of the Series B 2002 discussion
“...essentially every Monte Carlo activity may be interpreted as
parameter estimation by maximum likelihood in a statistical
model. We do not claim that this point of view is necessary; nor
do we seek to establish a working principle from it.”
▶ restriction to discrete support measures [may be] suboptimal
  [Ritov & Bickel, 1990; Robins et al., 1997, 2000, 2003]
▶ group averaging versions in-between multiple mixture
  estimators and quasi-Monte Carlo version
  [Owen & Zhou, 2000; Cornuet et al., 2012; Owen, 2003]
▶ statistical analogy provides at best a narrative thread
end of the Series B 2002 discussion
“The hard part of the exercise is to construct a submodel such
that the gain in precision is sufficient to justify the additional
computational effort”
▶ garden of forking paths, with infinite possibilities
▶ no free lunch (variance, budget, time)
▶ Rao–Blackwellisation may be detrimental in Markov setups
end of the 2002 discussion
“The statistician can considerably improve the efficiency of the
estimator by using the known values of different functionals
such as moments and probabilities of different sets. The
algorithm becomes increasingly efficient as the number of
functionals becomes larger. The result, however, is an extremely
complicated algorithm, which is not necessarily faster.” Y. Ritov
“...the analyst must violate the likelihood principle and eschew
semiparametric, nonparametric or fully parametric maximum
likelihood estimation in favour of non-likelihood-based locally
efficient semiparametric estimators.” J. Robins
Outline
1 Geyer’s 1994 logistic
2 Links with bridge sampling
3 Noise contrastive estimation
4 Generative models
5 Variational autoencoders (VAEs)
6 Generative adversarial networks
(GANs)
Noise contrastive estimation
New estimation principle for parameterised and unnormalised
statistical models, also based on nonlinear logistic regression
Case of a parameterised model with density
$$p(x; α) = \frac{\tilde p(x; α)}{Z(α)}$$
and intractable normalising constant Z(α)
Estimating Z(α) as an extra parameter is impossible via maximum
likelihood methods
Use of estimation techniques bypassing the constant, like
contrastive divergence (Hinton, 2002) and score matching
(Hyvärinen, 2005)
[Gutmann & Hyvärinen, 2010]
NCE principle
As in Geyer’s method, given a sample x_1, . . . , x_T from p(x; α)
▶ generate an artificial sample y_1, . . . , y_T from a known distribution q
▶ maximise the classification log-likelihood (where ϑ = (α, c))
$$\ell(ϑ; x, y) := \sum_{i=1}^{T} \log h(x_i; ϑ) + \sum_{i=1}^{T} \log\{1 − h(y_i; ϑ)\}$$
  of a logistic regression model which discriminates the
  observed data from the simulated data, where
$$h(z; ϑ) = \frac{c\,\tilde p(z; α)}{c\,\tilde p(z; α) + q(z)}$$
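A hedged R sketch of this principle on an assumed toy case: the unnormalised model is exp{−(x − µ)²/2} with unknown (µ, c), the data are N(2,1) draws, and the noise distribution q is N(0, 2²); the true log-constant is −log√(2π) ≈ −0.92.
set.seed(11)
Tn <- 1e4
x <- rnorm(Tn, 2, 1)                          # observed sample
y <- rnorm(Tn, 0, 2)                          # noise sample from q
q <- function(z) dnorm(z, 0, 2)
ptil <- function(z, mu) exp(-(z - mu)^2/2)    # unnormalised model
h <- function(z, mu, logc) {                  # h(z) = c*ptil/(c*ptil + q)
  cp <- exp(logc)*ptil(z, mu)
  cp/(cp + q(z))
}
nce <- function(par)                          # minus classification log-likelihood
  -sum(log(h(x, par[1], par[2]))) - sum(log(1 - h(y, par[1], par[2])))
fit <- optim(c(0, 0), nce)
c(mu = fit$par[1], logc = fit$par[2], true = -log(sqrt(2*pi)))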
NCE consistency
Objective function that converges (in T) to
$$J(ϑ) = \mathbb E\,[\log h(x; ϑ) + \log\{1 − h(y; ϑ)\}]$$
Defining f(·) = log p(·; ϑ) and, with r(·) the logistic function,
$$\tilde J(f) = \mathbb E_p\,[\log r(f(x) − \log q(x)) + \log\{1 − r(f(y) − \log q(y))\}]$$
NCE consistency
Objective function that converges (in T) to
$$J(ϑ) = \mathbb E\,[\log h(x; ϑ) + \log\{1 − h(y; ϑ)\}]$$
Defining f(·) = log p(·; ϑ) and
$$\tilde J(f) = \mathbb E_p\,[\log r(f(x) − \log q(x)) + \log\{1 − r(f(y) − \log q(y))\}]$$
Assuming q(·) positive everywhere,
▶ \tilde J(·) attains its maximum at f*(·) = log p(·), the true distribution
▶ maximisation performed without any normalisation constraint
NCE consistency
Objective function that converges (in T) to
$$J(ϑ) = \mathbb E\,[\log h(x; ϑ) + \log\{1 − h(y; ϑ)\}]$$
Defining f(·) = log p(·; ϑ) and
$$\tilde J(f) = \mathbb E_p\,[\log r(f(x) − \log q(x)) + \log\{1 − r(f(y) − \log q(y))\}]$$
Under regularity conditions, assuming the true distribution
belongs to the parametric family, the solution
$$\hat ϑ_T = \arg\max_ϑ\, \ell(ϑ; x, y) \qquad (1)$$
converges to the true ϑ
Consequence: the log-normalisation constant is consistently
estimated by maximising (1)
Convergence of noise contrastive estimation
Opposition of Monte Carlo MLE à la Geyer (1994, JASA)
$$L = \frac{1}{n}\sum_{i=1}^{n} \log\frac{\tilde p(x_i; ϑ)}{\tilde p(x_i; ϑ_0)}
 − \log\underbrace{\Big\{\frac{1}{m}\sum_{j=1}^{m}
   \frac{\tilde p(z_j; ϑ)}{\tilde p(z_j; ϑ_0)}\Big\}}_{≈\,Z(ϑ)/Z(ϑ_0)}$$
$$x_1, . . . , x_n ∼ p^* \qquad z_1, . . . , z_m ∼ p(z; ϑ_0)$$
[Riou-Durand & Chopin, 2018]
Convergence of noise contrastive estimation
and of noise contrastive estimation à la Gutmann and
Hyvärinen (2012)
$$L(ϑ, ν) = \frac{1}{n}\sum_{i=1}^{n} \log q_{ϑ,ν}(x_i)
          + \frac{1}{m}\sum_{i=1}^{m} \log\big[1 − q_{ϑ,ν}(z_i)\big]^{m/n}$$
$$\log\frac{q_{ϑ,ν}(z)}{1 − q_{ϑ,ν}(z)}
 = \log\frac{\tilde p(z; ϑ)}{\tilde p(z; ϑ_0)} + ν + \log n/m$$
$$x_1, . . . , x_n ∼ p^* \qquad z_1, . . . , z_m ∼ p(z; ϑ_0)$$
[Riou-Durand & Chopin, 2018]
Poisson transform
Equivalent likelihoods
$$L(ϑ, ν) = \frac{1}{n}\sum_{i=1}^{n} \log\frac{\tilde p(x_i; ϑ)}{\tilde p(x_i; ϑ_0)}
          + ν − e^{ν}\,\frac{Z(ϑ)}{Z(ϑ_0)}$$
and
$$L(ϑ, ν) = \frac{1}{n}\sum_{i=1}^{n} \log\frac{\tilde p(x_i; ϑ)}{\tilde p(x_i; ϑ_0)}
          + ν − \frac{e^{ν}}{m}\sum_{j=1}^{m} \frac{\tilde p(z_j; ϑ)}{\tilde p(z_j; ϑ_0)}$$
sharing the same \hat ϑ as the originals
NCE consistency
Under mild assumptions, almost surely
$$\hat ξ^{\mathrm{MCMLE}}_{n,m} \xrightarrow{m→∞} \hat ξ_n
\qquad\text{and}\qquad
\hat ξ^{\mathrm{NCE}}_{n,m} \xrightarrow{m→∞} \hat ξ_n$$
the maximum likelihood estimator associated with
x_1, . . . , x_n ∼ p(·; ϑ), and
$$e^{-\hat ν} = \frac{Z(\hat ϑ)}{Z(ϑ_0)}$$
[Geyer, 1994; Riou-Durand & Chopin, 2018]
NCE asymptotics
Under less mild assumptions (more robust for NCE),
asymptotic normality of both NCE and MC-MLE estimates as
$$n → +∞ \qquad m/n → τ$$
$$\sqrt{n}\,(\hat ξ^{\mathrm{MCMLE}}_{n,m} − ξ^*) ≈ \mathcal N_d(0, Σ^{\mathrm{MCMLE}})
\qquad\text{and}\qquad
\sqrt{n}\,(\hat ξ^{\mathrm{NCE}}_{n,m} − ξ^*) ≈ \mathcal N_d(0, Σ^{\mathrm{NCE}})$$
with the important ordering
$$Σ^{\mathrm{MCMLE}} ⪰ Σ^{\mathrm{NCE}}$$
showing that NCE dominates MCMLE in terms of mean square
error (for iid simulations)
[Geyer, 1994; Riou-Durand & Chopin, 2018]
NCE asymptotics
Under less mild assumptions (more robust for NCE),
asymptotic normality of both NCE and MC-MLE estimates as
$$n → +∞ \qquad m/n → τ$$
$$\sqrt{n}\,(\hat ξ^{\mathrm{MCMLE}}_{n,m} − ξ^*) ≈ \mathcal N_d(0, Σ^{\mathrm{MCMLE}})
\qquad\text{and}\qquad
\sqrt{n}\,(\hat ξ^{\mathrm{NCE}}_{n,m} − ξ^*) ≈ \mathcal N_d(0, Σ^{\mathrm{NCE}})$$
with the important ordering, except when ϑ_0 = ϑ^*, where
$$Σ^{\mathrm{MCMLE}} = Σ^{\mathrm{NCE}} = (1 + τ^{-1})\,Σ^{\mathrm{RMLNCE}}$$
[Geyer, 1994; Riou-Durand & Chopin, 2018]
NCE asymptotics
[Riou-Durand & Chopin, 2018]
NCE contrast distribution
Choice of q(·) free but
▶ easy to sample from
▶ must allow for an analytical expression of its log-pdf
▶ must be close to the true density p(·), so that the mean squared
  error E[|\hat ϑ_T − ϑ^*|^2] is small
Learning an approximation \hat q to p(·), for instance via
normalising flows
[Tabak & Turner, 2013; Jia & Seljak, 2019]
Density estimation by normalising flows
“A normalizing flow describes the transformation of a
probability density through a sequence of invertible map-
pings. By repeatedly applying the rule for change of
variables, the initial density ‘flows’ through the sequence
of invertible mappings. At the end of this sequence we
obtain a valid probability distribution and hence this type
of flow is referred to as a normalizing flow.”
[Rezende & Mohamed, 2015; Papamakarios et al., 2021]
Density estimation by normalising flows
Based on invertible and twice-differentiable transforms
(diffeomorphisms) g_i(·) = g(·; η_i) of a standard distribution φ(·)
Representation
$$z = g_1 ◦ · · · ◦ g_p(x) \qquad x ∼ φ(x)$$
Density of z by the Jacobian transform
$$q(z) = φ(x(z)) × \big|\det J_{g_1◦···◦g_p}(x(z))\big|^{-1}
       = φ(x(z)) \prod_i |\mathrm dg_i/\mathrm dz_{i-1}|^{-1}
\qquad\text{where } z_i = g_i(z_{i-1})$$
Flow defined as x → z_1 → . . . → z_p = z
[Rezende & Mohamed, 2015; Papamakarios et al., 2021]
Density estimation by normalising flows
Flow defined as x → z_1 → . . . → z_p = z
Density of z by the Jacobian transform
$$q(z) = φ(x(z)) × \big|\det J_{g_1◦···◦g_p}(x(z))\big|^{-1}
       = φ(x(z)) \prod_i |\mathrm dg_i/\mathrm dz_{i-1}|^{-1}
\qquad\text{where } z_i = g_i(z_{i-1})$$
Composition of transforms
$$(g_1 ◦ g_2)^{-1} = g_2^{-1} ◦ g_1^{-1} \qquad (2)$$
$$\det J_{g_1◦g_2}(u) = \det J_{g_1}(g_2(u)) × \det J_{g_2}(u) \qquad (3)$$
[Rezende & Mohamed, 2015; Papamakarios et al., 2021]
Density estimation by normalising flows
Flow defined as x → z_1 → . . . → z_p = z
Density of z by the Jacobian transform
$$q(z) = φ(x(z)) × \big|\det J_{g_1◦···◦g_p}(x(z))\big|^{-1}
       = φ(x(z)) \prod_i |\mathrm dg_i/\mathrm dz_{i-1}|^{-1}
\qquad\text{where } z_i = g_i(z_{i-1})$$
[Rezende & Mohamed, 2015; Papamakarios et al., 2021]
Density estimation by normalising flows
Normalising flows are
▶ flexible family of densities
▶ easy to train by optimisation (e.g., maximum likelihood
  estimation, variational inference)
▶ neural version of density estimation and generative model
▶ trained from observed densities
▶ natural tools for approximate Bayesian inference
  (variational inference, ABC, synthetic likelihood)
Invertible linear-time transformations
Family of transformations
$$g(z) = z + u\,h(w^{\mathsf T} z + b), \qquad u, w ∈ \mathbb R^d,\ b ∈ \mathbb R$$
with h a smooth element-wise non-linearity, with derivative h′
Jacobian term computed in O(d) time
$$ψ(z) = h′(w^{\mathsf T} z + b)\,w$$
$$\left|\det \frac{∂g}{∂z}\right| = |\det(\mathrm{Id} + u\,ψ(z)^{\mathsf T})| = |1 + u^{\mathsf T} ψ(z)|$$
[Rezende & Mohamed, 2015]
Invertible linear-time transformations
Family of transformations
$$g(z) = z + u\,h(w^{\mathsf T} z + b), \qquad u, w ∈ \mathbb R^d,\ b ∈ \mathbb R$$
with h a smooth element-wise non-linearity, with derivative h′
Density q(z) obtained by transforming an initial density φ(z)
through the sequence of maps g_i, i.e.
$$z = g_p ◦ · · · ◦ g_1(x)$$
and
$$\log q(z) = \log φ(x) − \sum_{k=1}^{p} \log |1 + u^{\mathsf T} ψ_k(z_{k-1})|$$
[Rezende & Mohamed, 2015]
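A hedged R sketch of this planar-flow construction (parameter values are arbitrary assumptions, chosen so that wᵀu ≥ −1 and the maps stay invertible): standard Normal draws are pushed through two layers while log q(z) is updated with the −log|1 + uᵀψ(z)| terms.
set.seed(2)
planar <- function(z, u, w, b) {             # one planar layer and its log|det J|
  a <- c(z %*% w) + b                        # w'z + b, one value per row of z
  list(z = z + tanh(a) %o% u,                # g(z) = z + u h(w'z + b), h = tanh
       logdet = log(abs(1 + ((1 - tanh(a)^2) %o% w) %*% u)))
}
x <- matrix(rnorm(1e3*2), ncol = 2)          # base draws from N(0, I_2)
logq <- rowSums(dnorm(x, log = TRUE))        # base log-density
layers <- list(list(u = c(1, .5),  w = c(.8, -.3), b = .1),
               list(u = c(-.4, 1), w = c(.2, .9),  b = -.2))
z <- x
for (ly in layers) {
  step <- planar(z, ly$u, ly$w, ly$b)
  z <- step$z
  logq <- logq - c(step$logdet)              # log q(z) = log phi(x) - sum log|1 + u'psi|
}
head(cbind(z, logq))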
General theory of normalising flows
”Normalizing flows provide a general mechanism for
defining expressive probability distributions, only requir-
ing the specification of a (usually simple) base distribu-
tion and a series of bijective transformations.”
T(u; ψ) = gp(gp−1(. . . g1(u; η1) . . . ; ηp−1); ηp)
[Papamakarios et al., 2021]
General theory of normalising flows
“...how expressive are flow-based models? Can they rep-
resent any distribution p(x), even if the base distribution
is restricted to be simple? We show that this universal
representation is possible under reasonable conditions
on p(x).”
Obvious when considering the inverse conditional cdf
transforms, assuming differentiability
[Papamakarios et al., 2021]
General theory of normalising flows
[Hyvärinen & Pajunen, 1999]
▶ Write
$$p_x(x) = \prod_{i=1}^{d} p(x_i\,|\,x_{<i})$$
▶ define
$$z_i = F_i(x_i, x_{<i}) = \mathbb P(X_i ≤ x_i\,|\,x_{<i})$$
▶ deduce that
$$\det J_F(x) = p(x)$$
▶ conclude that p_z(z) = 1, Uniform on (0, 1)^d
[Papamakarios et al., 2021]
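A hedged R sketch of this inverse-conditional-cdf argument for an assumed bivariate Normal with correlation 0.7: applying the conditional cdfs coordinate by coordinate yields (numerically) independent Uniform(0,1) coordinates.
set.seed(4)
rho <- 0.7
x1 <- rnorm(1e4)
x2 <- rho*x1 + sqrt(1 - rho^2)*rnorm(1e4)          # correlated second coordinate
z1 <- pnorm(x1)                                    # F_1(x_1)
z2 <- pnorm(x2, rho*x1, sqrt(1 - rho^2))           # F_2(x_2 | x_1)
c(ks1 = ks.test(z1, "punif")$p.value,              # both margins ~ U(0,1)
  ks2 = ks.test(z2, "punif")$p.value,
  cor = cor(z1, z2))                               # close to 0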
General theory of normalising flows
“Minimizing the Monte Carlo approximation of the
Kullback–Leibler divergence [between the true and the
model densities] is equivalent to fitting the flow-based
model to the sample by maximum likelihood estimation.”
ML-estimate the flow-based model parameters by
$$\arg\max_ψ \sum_{i=1}^{n} \log φ(T^{-1}(x_i; ψ)) + \log |\det J_{T^{-1}}(x_i; ψ)|$$
Note the possible use of the reverse Kullback–Leibler divergence when
learning an approximation (VA, IS, ABC) to a known [up to a
constant] target p(x)
[Papamakarios et al., 2021]
Constructing flows
Autoregressive flows
Component-wise transform (i = 1, . . . , d)
$$z_i' = \underbrace{τ(z_i; h_i)}_{\text{transformer}}
\qquad\text{where}\qquad
h_i = \underbrace{c_i(z_{1:(i-1)})}_{\text{conditioner}} = c_i(z_{1:(i-1)}; φ_i)$$
Jacobian
$$\log |\det J_φ(z)| = \log \prod_{i=1}^{d} \frac{∂τ}{∂z_i}(z_i; h_i)
 = \sum_{i=1}^{d} \log \frac{∂τ}{∂z_i}(z_i; h_i)$$
Multiple choices for
▶ transformer τ(·; φ)
▶ conditioner c(·) (neural network)
[Papamakarios et al., 2021]
Practical considerations
“Implementing a flow often amounts to composing as
many transformations as computation and memory will
allow. Working with such deep flows introduces addi-
tional challenges of a practical nature.”
▶ the more the merrier?!
▶ batch normalisation for maintaining stable gradients
  (between layers)
▶ fighting the curse of dimension (“evaluating T incurs an
  increasing computational cost as dimensionality grows”)
  with multiscale architectures (clamping: component-wise
  stopping rules)
[Papamakarios et al., 2021]
Practical considerations
“Implementing a flow often amounts to composing as
many transformations as computation and memory will
allow. Working with such deep flows introduces addi-
tional challenges of a practical nature.”
▶ the more the merrier?!
▶ batch normalisation for maintaining stable gradients
  (between layers)
▶ “...early work on flow precursors dismissed the
  autoregressive approach as prohibitively expensive”,
  addressed by sharing parameters within the conditioners c_i(·)
[Papamakarios et al., 2021]
Applications
“Normalizing flows have two primitive operations: den-
sity calculation and sampling. In turn, flows are effec-
tive in any application requiring a probabilistic model
with either of those capabilities.”
▶ density estimation [speed of convergence?]
▶ proxy generative model
▶ importance sampling for integration, by minimising the distance
  to the integrand or the IS variance [finite?]
▶ MCMC flow substitute for HMC
[Papamakarios et al., 2021]
Applications
“Normalizing flows have two primitive operations: den-
sity calculation and sampling. In turn, flows are effec-
tive in any application requiring a probabilistic model
with either of those capabilities.”
▶ optimised reparameterisation of the target for MCMC [exact?]
▶ variational approximation by maximising the evidence lower
  bound (ELBO) to the posterior on the parameter η = T(u, φ)
$$\sum_{i=1}^{n} \underbrace{\log p(x^{\mathrm{obs}}, T(u_i; φ))}_{\text{joint}}
 + \log |\det J_T(u_i; φ)|$$
▶ substitutes for likelihood-free inference on either π(η|x^{obs})
  or p(x^{obs}|η)
[Papamakarios et al., 2021]
A[nother] revolution in machine learning?
“One area where neural networks are being actively de-
veloped is density estimation in high dimensions: given
a set of points x ∼ p(x), the goal is to estimate the
probability density p(·). As there are no explicit la-
bels, this is usually considered an unsupervised learning
task. We have already discussed that classical methods
based for instance on histograms or kernel density esti-
mation do not scale well to high-dimensional data. In
this regime, density estimation techniques based on neu-
ral networks are becoming more and more popular. One
class of these neural density estimation techniques are
normalizing flows.”
[Cranmer et al., PNAS, 2020]
Crucially lacking
No connection with statistical density estimation, with no
general study of convergence (in training sample size) to the
true density
...or in evaluating approximation error (as in ABC)
[Kobyzev et al., 2019; Papamakarios et al., 2021]
Reconnecting with Geyer (1994)
“...neural networks can be trained to learn the likelihood
ratio function p(x|ϑ0)/p(x|ϑ1) or p(x|ϑ0)/p(x), where in
the latter case the denominator is given by a marginal
model integrated over a proposal or the prior (...) The
key idea is closely related to the discriminator network
in GANs mentioned above: a classifier is trained us-
ing supervised learning to discriminate two sets of data,
though in this case both sets come from the simulator
and are generated for different parameter points ϑ0 and
ϑ1. The classifier output function can be converted into
an approximation of the likelihood ratio between ϑ0 and
ϑ1! This manifestation of the Neyman-Pearson lemma
in a machine learning setting is often called the likeli-
hood ratio trick.”
[Cranmer et al., PNAS, 2020]
A comparison with MLE
[Gutmann & Hyvärinen, 2012]
Outline
1 Geyer’s 1994 logistic
2 Links with bridge sampling
3 Noise contrastive estimation
4 Generative models
5 Variational autoencoders (VAEs)
6 Generative adversarial networks
(GANs)
Generative models
“Deep generative models that can learn via the principle
of maximum likelihood differ with respect to how they
represent or approximate the likelihood.” I. Goodfellow
Likelihood function
$$L(ϑ|x_1, . . . , x_n) ∝ \prod_{i=1}^{n} p_{\mathrm{model}}(x_i|ϑ)$$
leading to the MLE estimate
$$\hat ϑ(x_1, . . . , x_n) = \arg\max_ϑ \sum_{i=1}^{n} \log p_{\mathrm{model}}(x_i|ϑ)$$
equivalently
$$\hat ϑ(x_1, . . . , x_n) = \arg\min_ϑ\, D_{\mathrm{KL}}(p_{\mathrm{data}}\,\|\,p_{\mathrm{model}}(·|ϑ))$$
Likelihood complexity
Explicit solutions:
▶ domino representation (“fully visible belief networks”)
$$p_{\mathrm{model}}(x) = \prod_{t=1}^{T} p_{\mathrm{model}}(x_t|x_{1:t−1})$$
▶ “non-linear independent component analysis”
  (cf. normalizing flows)
$$p_{\mathrm{model}}(x) = p_z(g_φ^{-1}(x))\,\left|\frac{∂g_φ^{-1}(x)}{∂x}\right|$$
Likelihood complexity
Explicit solutions:
▶ domino representation (“fully visible belief networks”)
$$p_{\mathrm{model}}(x) = \prod_{t=1}^{T} p_{\mathrm{model}}(x_t|\mathrm{Pa}(x_t))$$
▶ “non-linear independent component analysis”
  (cf. normalizing flows)
$$p_{\mathrm{model}}(x) = p_z(g_φ^{-1}(x))\,\left|\frac{∂g_φ^{-1}(x)}{∂x}\right|$$
Likelihood complexity
Explicit solutions:
▶ domino representation (“fully visible belief networks”)
$$p_{\mathrm{model}}(x) = \prod_{t=1}^{T} p_{\mathrm{model}}(x_t|\mathrm{Pa}(x_t))$$
▶ “non-linear independent component analysis”
  (cf. normalizing flows)
$$p_{\mathrm{model}}(x) = p_z(g_φ^{-1}(x))\,\left|\frac{∂g_φ^{-1}(x)}{∂x}\right|$$
▶ variational approximations
$$\log p_{\mathrm{model}}(x; ϑ) ≥ \mathcal L(x; ϑ)$$
  represented by variational autoencoders
Likelihood complexity
Explicit solutions:
▶ domino representation (“fully visible belief networks”)
$$p_{\mathrm{model}}(x) = \prod_{t=1}^{T} p_{\mathrm{model}}(x_t|\mathrm{Pa}(x_t))$$
▶ “non-linear independent component analysis”
  (cf. normalizing flows)
$$p_{\mathrm{model}}(x) = p_z(g_φ^{-1}(x))\,\left|\frac{∂g_φ^{-1}(x)}{∂x}\right|$$
▶ Markov chain Monte Carlo (MCMC) maximisation
Likelihood complexity
Implicit solutions involving sampling from the model p_model
without computing the density
▶ ABC algorithms for MLE derivation
  [Picchini & Anderson, 2017]
▶ generative stochastic networks
  [Bengio et al., 2014]
▶ generative adversarial networks (GANs)
  [Goodfellow et al., 2014]
Variational autoencoders (VAEs)
1 Geyer’s 1994 logistic
2 Links with bridge sampling
3 Noise contrastive estimation
4 Generative models
5 Variational autoencoders (VAEs)
6 Generative adversarial networks
(GANs)
Variational autoencoders
“... provide a principled framework for learning deep
latent-variable models and corresponding inference mod-
els (...) can be viewed as two coupled, but indepen-
dently parameterized models: the encoder or recogni-
tion model, and the decoder or generative model.
These two models support each other. The recognition
model delivers to the generative model an approximation
to its posterior over latent random variables, which it
needs to update its parameters inside an iteration of “ex-
pectation maximization” learning. Reversely, the gener-
ative model is a scaffolding of sorts for the recognition
model to learn meaningful representations of the data
(...) The recognition model is the approximate inverse
of the generative model according to Bayes rule.”
[Kingma & Welling, 2019]
Autoencoders
“An autoencoder is a neural network that is trained to
attempt to copy its input x to its output r = g(h) via
a hidden layer h = f(x) (...) [they] are designed to be
unable to copy perfectly”
▶ undercomplete autoencoders (with dim(h) < dim(x))
▶ regularised autoencoders, with objective
$$L(x, g ◦ f(x)) + Ω(h)$$
  where the penalty Ω is akin to a log-prior
▶ denoising autoencoders (learning x from a noisy version x̃ of x)
▶ stochastic autoencoders (learning p_decode(x|h) for a given
  p_encode(h|x), without compatibility)
[Goodfellow et al., 2016, p.496]
Variational autoencoders (VAEs)
Variational autoencoders (VAEs)
“The key idea behind the variational autoencoder is to
attempt to sample values of Z that are likely to have
produced X = x, and compute p(x) just from those.”
Representation of the (marginal) likelihood p_ϑ(x) based on a latent
variable z
$$p_ϑ(x) = \int p_ϑ(x|z)\,p_ϑ(z)\,\mathrm dz$$
Machine learning is usually preoccupied only with maximising p_ϑ(x)
(in ϑ) by simulating z efficiently (i.e., not from the prior)
$$\log p_ϑ(x) − D[q_φ(·|x)\,\|\,p_ϑ(·|x)] = \mathbb E_{q_φ(·|x)}[\log p_ϑ(x|Z)] − D[q_φ(·|x)\,\|\,p_ϑ(·)]$$
[Kingma & Welling, 2019]
Variational autoencoders (VAEs)
“The key idea behind the variational autoencoder is to
attempt to sample values of Z that are likely to have
produced X = x, and compute p(x) just from those.”
Representation of the (marginal) likelihood p_ϑ(x) based on a latent
variable z
$$p_ϑ(x) = \int p_ϑ(x|z)\,p_ϑ(z)\,\mathrm dz$$
Machine learning is usually preoccupied only with maximising p_ϑ(x)
(in ϑ) by simulating z efficiently (i.e., not from the prior)
$$\log p_ϑ(x) − D[q_φ(·|x)\,\|\,p_ϑ(·|x)] = \mathbb E_{q_φ(·|x)}[\log p_ϑ(x|Z)] − D[q_φ(·|x)\,\|\,p_ϑ(·)]$$
since x is fixed (Bayesian analogy)
[Kingma & Welling, 2019]
Variational autoencoders (VAEs)
[Kingma & Welling, 2019]
Variational autoencoders (VAEs)
$$\log p_ϑ(x) − D[q_φ(·|x)\,\|\,p_ϑ(·|x)] = \mathbb E_{q_φ(·|x)}[\log p_ϑ(x|Z)] − D[q_φ(·|x)\,\|\,p_ϑ(·)]$$
▶ lhs is the quantity to maximize (plus an error term, small for a good
  approximation q_φ, or a regularisation)
▶ rhs can be optimised by stochastic gradient descent when
  q_φ is manageable
▶ link with autoencoders, as q_φ(z|x) “encodes” x into z, and
  p_ϑ(x|z) “decodes” z to reconstruct x
[Doersch, 2021]
Variational autoencoders (VAEs)
“One major division in machine learning is generative
versus discriminative modeling (...) To turn a genera-
tive model into a discriminator we need Bayes rule.”
Representation of the (marginal) likelihood p_ϑ(x) based on a latent
variable z
Variational approximation q_φ(z|x) (also called the encoder) to the
posterior distribution on the latent variable z, p_ϑ(z|x), associated
with the conditional distribution p_ϑ(x|z) (also called the decoder)
Example: q_φ(z|x) Normal distribution N_d(µ(x), Σ(x)) with
▶ (µ(x), Σ(x)) estimated by a deep neural network
▶ (µ(x), Σ(x)) estimated by ABC (synthetic likelihood)
[Kingma & Welling, 2014]
ELBO objective
Since
$$\log p_ϑ(x) = \mathbb E_{q_φ(z|x)}[\log p_ϑ(x)]
 = \mathbb E_{q_φ(z|x)}\Big[\log \frac{p_ϑ(x, z)}{p_ϑ(z|x)}\Big]
 = \mathbb E_{q_φ(z|x)}\Big[\log \frac{p_ϑ(x, z)}{q_φ(z|x)}\Big]
 + \underbrace{\mathbb E_{q_φ(z|x)}\Big[\log \frac{q_φ(z|x)}{p_ϑ(z|x)}\Big]}_{\mathrm{KL}\,≥\,0}$$
the evidence lower bound (ELBO) is defined by
$$\mathcal L_{ϑ,φ}(x) = \mathbb E_{q_φ(z|x)}[\log p_ϑ(x, z)] − \mathbb E_{q_φ(z|x)}[\log q_φ(z|x)]$$
and used as an objective function to be maximised in (ϑ, φ)
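A hedged R sketch of this decomposition in an assumed toy model (z ∼ N(0,1), x|z ∼ N(z,1), single observation x = 1, Gaussian qφ): the Monte Carlo ELBO equals the exact log-evidence when qφ is the true posterior N(x/2, 1/2) and falls below it otherwise.
set.seed(6)
x <- 1
elbo <- function(m, s, T = 1e5) {            # Monte Carlo ELBO for q = N(m, s^2)
  z <- rnorm(T, m, s)
  logjoint <- dnorm(z, 0, 1, log = TRUE) + dnorm(x, z, 1, log = TRUE)
  mean(logjoint - dnorm(z, m, s, log = TRUE))
}
c(exact  = dnorm(x, 0, sqrt(2), log = TRUE), # log p(x)
  atpost = elbo(x/2, sqrt(1/2)),             # q = true posterior: ELBO = log p(x)
  crude  = elbo(0, 1))                       # poorer q: strictly smaller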
ELBO maximisation
Stochastic gradient step, one parameter at a time
In iid settings
$$\mathcal L_{ϑ,φ}(x) = \sum_{i=1}^{n} \mathcal L_{ϑ,φ}(x_i)$$
and
$$∇_ϑ \mathcal L_{ϑ,φ}(x_i) = \mathbb E_{q_φ(z|x_i)}[∇_ϑ \log p_ϑ(x_i, z)]
 ≈ ∇_ϑ \log p_ϑ(x_i, \tilde z(x_i))$$
for one simulation \tilde z(x_i) ∼ q_φ(z|x_i),
but ∇_φ \mathcal L_{ϑ,φ}(x_i) is more difficult to compute
ELBO maximisation (2)
Reparameterisation (a form of normalising flow)
If z = g(x, φ, ε) ∼ q_φ(z|x) when ε ∼ r(ε),
$$\mathbb E_{q_φ(z|x_i)}[h(Z)] = \mathbb E_r[h(g(x, φ, ε))]$$
and
$$∇_φ \mathbb E_{q_φ(z|x_i)}[h(Z)] = ∇_φ \mathbb E_r[h ◦ g(x, φ, ε)]
 = \mathbb E_r[∇_φ\, h ◦ g(x, φ, ε)]
 ≈ ∇_φ\, h ◦ g(x, φ, \tilde ε)$$
for one simulation \tilde ε ∼ r
[Kingma & Welling, 2014]
ELBO maximisation (2)
Reparameterisation (a form of normalising flow)
If z = g(x, φ, ε) ∼ q_φ(z|x) when ε ∼ r(ε),
$$\mathbb E_{q_φ(z|x_i)}[h(Z)] = \mathbb E_r[h(g(x, φ, ε))]$$
leading to the unbiased estimator of the gradient of the ELBO
$$∇_{ϑ,φ} \{\log p_ϑ(x, g(x, φ, ε)) − \log q_φ(g(x, φ, ε)|x)\}$$
[Kingma & Welling, 2014]
ELBO maximisation (2)
Reparameterisation (a form of normalising flow)
If z = g(x, φ, ε) ∼ q_φ(z|x) when ε ∼ r(ε),
$$\mathbb E_{q_φ(z|x_i)}[h(Z)] = \mathbb E_r[h(g(x, φ, ε))]$$
leading to the unbiased estimator of the gradient of the ELBO
$$∇_{ϑ,φ} \Big\{\log p_ϑ(x, g(x, φ, ε)) − \log r(ε) + \log \Big|\frac{∂z}{∂ε}\Big|\Big\}$$
[Kingma & Welling, 2014]
Marginal likelihood estimation
Since
$$\log p_ϑ(x) = \log \mathbb E_{q_φ(z|x)}\Big[\frac{p_ϑ(x, Z)}{q_φ(Z|x)}\Big]$$
an importance sampling estimate of the log-marginal likelihood is
$$\log p_ϑ(x) ≈ \log \frac{1}{T}\sum_{t=1}^{T} \frac{p_ϑ(x, z_t)}{q_φ(z_t|x)}
\qquad z_t ∼ q_φ(z|x)$$
When T = 1,
$$\underbrace{\log p_ϑ(x)}_{\text{ideal objective}}
 ≈ \underbrace{\log \frac{p_ϑ(x, z_1(x))}{q_φ(z_1(x)|x)}}_{\text{ELBO objective}}$$
i.e., the ELBO estimator.
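In the same assumed toy model as the ELBO sketch above, a hedged R illustration of this importance-sampling estimate: averaging T ratios inside the log tightens the approximation as T grows, the T = 1 case being the single-draw ELBO-type estimator.
set.seed(9)
x <- 1
logml <- function(T, m = 0, s = 1) {         # log{ (1/T) sum_t p(x,z_t)/q(z_t|x) }
  z <- rnorm(T, m, s)                        # z_t ~ q = N(m, s^2)
  logw <- dnorm(z, 0, 1, log = TRUE) + dnorm(x, z, 1, log = TRUE) -
          dnorm(z, m, s, log = TRUE)
  max(logw) + log(mean(exp(logw - max(logw))))       # stable log-sum-exp
}
c(T1    = mean(replicate(1e3, logml(1))),    # single-draw (ELBO-type) estimate
  T100  = mean(replicate(1e3, logml(100))),  # tighter
  exact = dnorm(x, 0, sqrt(2), log = TRUE))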
Generative adversarial networks
1 Geyer’s 1994 logistic
2 Links with bridge sampling
3 Noise contrastive estimation
4 Generative models
5 Variational autoencoders (VAEs)
6 Generative adversarial networks
(GANs)
Generative adversarial networks (GANs)
“Generative adversarial networks (GANs) provide an
algorithmic framework for constructing generative mod-
els with several appealing properties:
– they do not require a likelihood function to be specified,
only a generating procedure;
– they provide samples that are sharp and compelling;
– they allow us to harness our knowledge of building
highly accurate neural network classifiers.”
[Mohamed & Lakshminarayanan, 2016]
Implicit generative models
Representation of random variables as
$$x = G_ϑ(z) \qquad z ∼ µ(z)$$
where µ(·) is a reference distribution and G_ϑ a multi-layered and
highly non-linear transform (as, e.g., in normalizing flows)
▶ more general and flexible than “prescriptive”, if implicit
  (black box)
▶ connected with pseudo-random variable generation
▶ calls for likelihood-free inference on ϑ
[Mohamed & Lakshminarayanan, 2016]
Untractable likelihoods
Cases when the likelihood function
f(y|ϑ) is unavailable and when the
completion step
$$f(y|ϑ) = \int_{\mathcal Z} f(y, z|ϑ)\,\mathrm dz$$
is impossible or too costly because of
the dimension of z
© MCMC cannot be implemented!
The ABC method
Bayesian setting: target is π(ϑ)f(x|ϑ)
When the likelihood f(x|ϑ) is not in closed form, likelihood-free
rejection technique:
ABC algorithm
For an observation y ∼ f(y|ϑ), under the prior π(ϑ), keep jointly
simulating
$$ϑ' ∼ π(ϑ)\,, \qquad z ∼ f(z|ϑ')\,,$$
until the auxiliary variable z is equal to the observed value, z = y.
[Tavaré et al., 1997]
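A hedged R sketch of this exact-matching rejection scheme in an assumed discrete toy model (Binomial(10, ϑ) observation, Uniform prior), where the event z = y has positive probability and the output can be checked against the Beta posterior:
set.seed(10)
n <- 10; y <- 7                              # observed Binomial(10, theta) count
N <- 1e5
theta <- runif(N)                            # theta' ~ prior
z <- rbinom(N, n, theta)                     # z ~ f(z | theta')
post <- theta[z == y]                        # keep exact matches z = y
c(mean(post), (y + 1)/(n + 2))               # vs. exact Beta(y+1, n-y+1) posterior mean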
Why does it work?!
The proof is trivial:
$$f(ϑ_i) ∝ \sum_{z∈\mathcal D} π(ϑ_i)\,f(z|ϑ_i)\,\mathbb I_y(z)
        ∝ π(ϑ_i)\,f(y|ϑ_i)
        ∝ π(ϑ_i|y)\,.$$
[Accept–Reject 101]
ABC as A...pproximative
When y is a continuous random variable, equality z = y is
replaced with a tolerance condition,
$$ρ\{η(z), η(y)\} ≤ ε$$
where ρ is a distance and η(y) defines a (not necessarily
sufficient) statistic
Output distributed from
$$π(ϑ)\,\mathbb P_ϑ\{ρ(y, z) < ε\} ∝ π(ϑ\,|\,ρ(η(y), η(z)) < ε)$$
[Pritchard et al., 1999]
ABC posterior
The likelihood-free algorithm samples from the marginal in z of:
$$π_ε(ϑ, z|y) = \frac{π(ϑ)\,f(z|ϑ)\,\mathbb I_{A_{ε,y}}(z)}
                     {\int_{A_{ε,y}×Θ} π(ϑ)\,f(z|ϑ)\,\mathrm dz\,\mathrm dϑ}\,,$$
where A_{ε,y} = \{z ∈ \mathcal D\,|\,ρ(η(z), η(y)) < ε\}.
The idea behind ABC is that the summary statistics coupled
with a small tolerance should provide a good approximation of
the posterior distribution:
$$π_ε(ϑ|y) = \int π_ε(ϑ, z|y)\,\mathrm dz ≈ π(ϑ|η(y))\,.$$
MA example
Back to the MA(2) model
$$x_t = ε_t + \sum_{i=1}^{2} ϑ_i ε_{t−i}$$
Simple prior: uniform over the inverse [real and complex] roots
in
$$Q(u) = 1 − \sum_{i=1}^{2} ϑ_i u^i$$
under identifiability conditions
MA example
Back to the MA(2) model
$$x_t = ε_t + \sum_{i=1}^{2} ϑ_i ε_{t−i}$$
Simple prior: uniform prior over the identifiability zone
MA example (2)
ABC algorithm thus made of
1. picking a new value (ϑ_1, ϑ_2) in the triangle
2. generating an iid sequence (ε_t)_{−2<t≤T}
3. producing a simulated series (x_t')_{1≤t≤T}
Distance: basic distance between the series
$$ρ((x_t')_{1≤t≤T}, (x_t)_{1≤t≤T}) = \sum_{t=1}^{T} (x_t − x_t')^2$$
or distance between summary statistics like the 2
autocorrelations
$$τ_j = \sum_{t=j+1}^{T} x_t x_{t−j}$$
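A hedged R sketch of these three steps with the autocovariance summaries; for simplicity the prior is taken uniform over a square rather than the exact identifiability triangle (an assumption for the sketch) and the tolerance is set as an empirical quantile of the simulated distances.
set.seed(12)
T <- 200
acov <- function(x, j) sum(x[(j + 1):T]*x[1:(T - j)])     # tau_j
simMA2 <- function(th, eps = rnorm(T + 2))                # x_t = e_t + th1 e_{t-1} + th2 e_{t-2}
  eps[3:(T + 2)] + th[1]*eps[2:(T + 1)] + th[2]*eps[1:T]
x0 <- simMA2(c(0.6, 0.2))                                 # pseudo-observed series
s0 <- c(acov(x0, 1), acov(x0, 2))
N <- 1e4
theta <- cbind(runif(N, -1, 1), runif(N, -1, 1))          # crude square prior (assumption)
dist <- apply(theta, 1, function(th) {
  z <- simMA2(th)
  sum((c(acov(z, 1), acov(z, 2)) - s0)^2)
})
keep <- dist <= quantile(dist, 0.01)                      # 1% tolerance
colMeans(theta[keep, ])                                   # ABC posterior means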
Comparison of distance impact
Evaluation of the tolerance on the ABC sample against both
distances (ε = 100%, 10%, 1%, 0.1%) for an MA(2) model
Comparison of distance impact
[Figure: ABC marginal samples of θ1 and θ2 under both distances and the four tolerance levels]
Evaluation of the tolerance on the ABC sample against both
distances (ε = 100%, 10%, 1%, 0.1%) for an MA(2) model
Occurrence of simulation in Econometrics
Simulation-based techniques in Econometrics
▶ Simulated method of moments
▶ Method of simulated moments
▶ Simulated pseudo-maximum-likelihood
▶ Indirect inference
[Gouriéroux & Monfort, 1996]
Simulated method of moments
Given observations y_{1:n}^o from a model
$$y_t = r(y_{1:(t−1)}, ε_t, ϑ)\,, \qquad ε_t ∼ g(·)$$
simulate ε_{1:n}^*, derive
$$y_t^*(ϑ) = r(y_{1:(t−1)}, ε_t^*, ϑ)$$
and estimate ϑ by
$$\arg\min_ϑ \sum_{t=1}^{n} (y_t^o − y_t^*(ϑ))^2$$
Simulated method of moments
Given observations y_{1:n}^o from a model
$$y_t = r(y_{1:(t−1)}, ε_t, ϑ)\,, \qquad ε_t ∼ g(·)$$
simulate ε_{1:n}^*, derive
$$y_t^*(ϑ) = r(y_{1:(t−1)}, ε_t^*, ϑ)$$
and estimate ϑ by
$$\arg\min_ϑ \left(\sum_{t=1}^{n} y_t^o − \sum_{t=1}^{n} y_t^*(ϑ)\right)^{2}$$
Indirect inference
Minimise (in ϑ) the distance between estimators \hat β based on
pseudo-models for genuine observations and for observations
simulated under the true model and the parameter ϑ.
[Gouriéroux, Monfort & Renault, 1993;
Smith, 1993; Gallant & Tauchen, 1996]
Indirect inference (PML vs. PSE)
Example of the pseudo-maximum-likelihood (PML)
$$\hat β(y) = \arg\max_β \sum_t \log f^*(y_t|β, y_{1:(t−1)})$$
leading to
$$\arg\min_ϑ \|\hat β(y^o) − \hat β(y^1(ϑ), . . . , y^S(ϑ))\|^2$$
when
$$y^s(ϑ) ∼ f(y|ϑ) \qquad s = 1, . . . , S$$
Indirect inference (PML vs. PSE)
Example of the pseudo-score-estimator (PSE)
$$\hat β(y) = \arg\min_β \left(\sum_t \frac{∂\log f^*}{∂β}(y_t|β, y_{1:(t−1)})\right)^{2}$$
leading to
$$\arg\min_ϑ \|\hat β(y^o) − \hat β(y^1(ϑ), . . . , y^S(ϑ))\|^2$$
when
$$y^s(ϑ) ∼ f(y|ϑ) \qquad s = 1, . . . , S$$
AR(2) vs. MA(1) example
true (MA) model
$$y_t = ε_t − ϑ ε_{t−1}$$
and [wrong!] auxiliary (AR) model
$$y_t = β_1 y_{t−1} + β_2 y_{t−2} + u_t$$
R code
x=eps=rnorm(250)
x[2:250]=x[2:250]-0.5*x[1:249] #MA(1) with theta=0.5
simeps=rnorm(250)              #common simulated noise
propeta=seq(-.99,.99,le=199)   #grid of candidate theta values
dist=rep(0,199)
bethat=as.vector(arima(x,c(2,0,0),incl=FALSE)$coef) #AR(2) fit on the data
for (t in 1:199)
  dist[t]=sum((as.vector(arima(c(simeps[1],simeps[2:250]-propeta[t]*
    simeps[1:249]),c(2,0,0),incl=FALSE)$coef)-bethat)^2)
AR(2) vs. MA(1) example
One sample:
[Figure: distance between auxiliary AR(2) estimates as a function of θ, for one dataset]
AR(2) vs. MA(1) example
Many samples:
[Figure: distribution of the indirect inference estimates of θ over repeated samples]
Bayesian synthetic likelihood
Approach contemporary (?) of ABC where the distribution of the
summary statistic s(·) is replaced with a parametric family, e.g.
$$g(s|ϑ) = φ(s; µ(ϑ), Σ(ϑ))$$
when ϑ is the [true] parameter value behind the data
Normal parameters µ(ϑ), Σ(ϑ) unknown in closed form and
evaluated by simulation, based on a Monte Carlo sample of
z_i ∼ f(z|ϑ)
Outcome used as a substitute in posterior updating
[Wood, 2010; Drovandi & al., 2015; Price & al., 2018]
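A hedged R sketch of the synthetic log-likelihood at a few parameter values, in an assumed toy model where the data are Gamma(shape, 1) draws and the summary is (mean, log variance); µ(ϑ) and Σ(ϑ) are estimated from M simulated replicates, as described above.
set.seed(13)
summ <- function(y) c(mean(y), log(var(y)))          # summary statistic s(.)
n <- 100
y0 <- rgamma(n, shape = 2, rate = 1)                 # pseudo-observed data
s0 <- summ(y0)
synlik <- function(theta, M = 500) {                 # synthetic log-likelihood at theta
  S <- t(replicate(M, summ(rgamma(n, shape = theta, rate = 1))))
  mu <- colMeans(S); Sig <- cov(S)                   # Monte Carlo mean and covariance
  drop(-0.5*(2*log(2*pi) + determinant(Sig)$modulus +
             t(s0 - mu) %*% solve(Sig) %*% (s0 - mu)))
}
sapply(c(1.5, 2, 2.5), synlik)                       # highest near the true shape 2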
Asymptotics of BSL
Based on three approximations
1. representation of data information by summary statistic
information
2. Normal substitute for summary distribution
3. Monte Carlo versions of mean and variance
Existence of Bernstein-von Mises convergence under consistency
of selected covariance estimator
[Frazier & al., 2021]
Asymptotics of BSL
Assumptions
▶ Central Limit Theorem on S_n = s(x_{1:n})
▶ Identifiability of the parameter ϑ based on S_n
▶ Existence of some prior moment of Σ(ϑ)
▶ sub-Gaussian tail of the simulated summaries
▶ Monte Carlo effort in n^γ for γ > 0
Similarity with ABC sufficient conditions, but BSL point
estimators generally asymptotically less efficient
[Frazier & al., 2018; Li & Fearnhead, 2018]
Asymptotics of ABC
For a sample y = y(n) and a tolerance ε = ε_n, when n → +∞,
assuming a parametric model ϑ ∈ \mathbb R^k, k fixed
▶ Concentration of the summary η(z): there exists b(ϑ) such that
$$η(z) − b(ϑ) = o_{P_ϑ}(1)$$
▶ Consistency:
$$Π_{ε_n}(\|ϑ − ϑ_0\| ≤ δ\,|\,y) = 1 + o_p(1)$$
▶ Convergence rate: there exists δ_n = o(1) such that
$$Π_{ε_n}(\|ϑ − ϑ_0\| ≤ δ_n\,|\,y) = 1 + o_p(1)$$
[Frazier & al., 2018]
Asymptotics of ABC
Under assumptions
(A1) ∃ σ_n → +∞ such that
$$P_ϑ\big(σ_n^{-1}\|η(z) − b(ϑ)\| > u\big) ≤ c(ϑ)h(u), \qquad \lim_{u→+∞} h(u) = 0$$
(A2)
$$Π(\|b(ϑ) − b(ϑ_0)\| ≤ u) ≳ u^D, \qquad u ≈ 0$$
posterior consistency and posterior concentration rate λ_T that
depends on the deviation control of d_2\{η(z), b(ϑ)\}
posterior concentration rate for b(ϑ) bounded from below by
O(ε_T)
[Frazier & al., 2018]
Asymptotics of ABC
Under assumptions
(A1) ∃ σ_n → +∞ such that
$$P_ϑ\big(σ_n^{-1}\|η(z) − b(ϑ)\| > u\big) ≤ c(ϑ)h(u), \qquad \lim_{u→+∞} h(u) = 0$$
(A2)
$$Π(\|b(ϑ) − b(ϑ_0)\| ≤ u) ≳ u^D, \qquad u ≈ 0$$
then
$$Π_{ε_n}\big(\|b(ϑ) − b(ϑ_0)\| \lesssim ε_n + σ_n h^{-1}(ε_n^D)\,\big|\,y\big) = 1 + o_{p_0}(1)$$
If also \|ϑ − ϑ_0\| ≤ L\|b(ϑ) − b(ϑ_0)\|^α, locally, and ϑ → b(ϑ) is 1-1,
$$Π_{ε_n}\big(\|ϑ − ϑ_0\| \lesssim \underbrace{ε_n^α + σ_n^α (h^{-1}(ε_n^D))^α}_{δ_n}\,\big|\,y\big) = 1 + o_{p_0}(1)$$
[Frazier & al., 2018]
Further ABC assumptions
▶ (B1) Concentration of the summary η: Σ_n(ϑ) ∈ \mathbb R^{k_1×k_1} is o(1)
$$Σ_n(ϑ)^{-1}\{η(z) − b(ϑ)\} ⇒ \mathcal N_{k_1}(0, \mathrm{Id}), \qquad (Σ_n(ϑ)Σ_n(ϑ_0)^{-1})_n = C^o$$
▶ (B2) b(ϑ) is C^1 and
$$\|ϑ − ϑ_0\| \lesssim \|b(ϑ) − b(ϑ_0)\|$$
▶ (B3) Dominated convergence and
$$\lim_n \frac{P_ϑ\big(Σ_n(ϑ)^{-1}\{η(z) − b(ϑ)\} ∈ u + B(0, u_n)\big)}{\prod_j u_n(j)} → φ(u)$$
[Frazier & al., 2018]
ABC asymptotic regime
Set Σ_n(ϑ) = σ_n D(ϑ) for ϑ ≈ ϑ_0 and Z^o = Σ_n(ϑ_0)^{-1}(η(y) − b(ϑ_0)),
then under (B1) and (B2)
▶ when ε_n σ_n^{-1} → +∞
$$Π_{ε_n}[ε_n^{-1}(ϑ − ϑ_0) ∈ A\,|\,y] ⇒ U_{B_0}(A), \qquad B_0 = \{x ∈ \mathbb R^k;\ \|b'(ϑ_0)^{\mathsf T} x\| ≤ 1\}$$
▶ when ε_n σ_n^{-1} → c
$$Π_{ε_n}[Σ_n(ϑ_0)^{-1}(ϑ − ϑ_0) − Z^o ∈ A\,|\,y] ⇒ Q_c(A), \qquad Q_c ≠ \mathcal N$$
▶ when ε_n σ_n^{-1} → 0 and (B3) holds, set
$$V_n = [b'(ϑ_0)]^{\mathsf T} Σ_n(ϑ_0)\,b'(ϑ_0)$$
  then
$$Π_{ε_n}[V_n^{-1}(ϑ − ϑ_0) − \tilde Z^o ∈ A\,|\,y] ⇒ Φ(A)$$
[Frazier & al., 2018]
conclusion on ABC consistency
▶ asymptotic description of ABC: different regimes
  depending on ε_n & σ_n
▶ no point in choosing ε_n arbitrarily small: just ε_n = o(σ_n)
▶ no gain in iterative ABC
▶ results under weak conditions, by not studying g(η(z)|ϑ)
[Frazier & al., 2018]

CDT 22 slides.pdf

  • 1.
    ABC, NCE, GANs& VAEs Christian P. Robert U. Paris Dauphine & Warwick U. CDT masterclass, May 2022
  • 2.
    Outline 1 Geyer’s 1994logistic 2 Links with bridge sampling 3 Noise contrastive estimation 4 Generative models 5 Variational autoencoders (VAEs) 6 Generative adversarial networks (GANs)
  • 3.
    An early entry Astandard issue in Bayesian inference is to approximate the marginal likelihood (or evidence) Ek = Z Θk πk(ϑk)Lk(ϑk) dϑk, aka the marginal likelihood. [Jeffreys, 1939]
  • 4.
    Bayes factor For testinghypotheses H0 : ϑ ∈ Θ0 vs. Ha : ϑ 6∈ Θ0, under prior π(Θ0)π0(ϑ) + π(Θc 0)π1(ϑ) , central quantity B01 = π(Θ0|x) π(Θc 0|x) π(Θ0) π(Θc 0) = Z Θ0 f(x|ϑ)π0(ϑ)dϑ Z Θc 0 f(x|ϑ)π1(ϑ)dϑ [Kass Raftery, 1995, Jeffreys, 1939]
  • 5.
    Bayes factor approximation Whenapproximating the Bayes factor B01 = Z Θ0 f0(x|ϑ0)π0(ϑ0)dϑ0 Z Θ1 f1(x|ϑ1)π1(ϑ1)dϑ1 use of importance functions $0 and $1 and b B01 = n−1 0 Pn0 i=1 f0(x|ϑi 0)π0(ϑi 0)/$0(ϑi 0) n−1 1 Pn1 i=1 f1(x|ϑi 1)π1(ϑi 1)/$1(ϑi 1) when ϑi 0 ∼ $0(ϑ) and ϑi 1 ∼ $1(ϑ)
  • 6.
    Forgetting and learning Counterintuitivechoice of importance function based on mixtures If ϑit ∼ $i(ϑ) (i = 1, . . . , I, t = 1, . . . , Ti) Eπ[h(ϑ)] ≈ 1 Ti Ti X t=1 h(ϑit) π(ϑit) $i(ϑit) replaced with Eπ[h(ϑ)] ≈ I X i=1 Ti X t=1 h(ϑit) π(ϑit) PI j=1 Tj$j(ϑit) Preserves unbiasedness and brings stability (while forgetting about original index) [Geyer, 1991, unpublished; Owen Zhou, 2000]
  • 7.
    Enters the logistic Ifconsidering unnormalised $j’s, i.e. $j(ϑ) = cj e $j(ϑ) j = 1, . . . , I and realisations ϑit’s from the mixture $(ϑ) = 1 T I X i=1 Tj$j(ϑ) = 1 T I X i=1 e $j(ϑ)e ηj z }| { log(cj) + log(Tj) Geyer (1994) introduces allocation probabilities for the mixture components pj(ϑ, η) = e $j(ϑ)eηj . I X m=1 e $m(ϑ)eηm to construct a pseudo-log-likelihood `(η) := I X i=1 Ti X t=1 log pi(ϑit, η)
  • 8.
    Enters the logistic(2) Estimating η as ^ η = arg max η `(η) produces the reverse logistic regression estimator of the constants cj as I partial forgetting of initial distribution I objective function equivalent to a multinomial logistic regression with the log e $i(ϑit)’s as covariates I randomness reversed from Ti’s to ϑit’s I constants cj identifiable up to a constant I resulting biased importance sampling estimator
  • 9.
    Illustration Special case whenI = 2, c1 = 1, T1 = T2 −`(c2) = T X t=1 log{1 + c2 e $2(ϑ1t)/$1(ϑ1t)} + T X t=1 log{1 + $1(ϑ2t)/c2 e $2(ϑ2t)} and $1(ϑ) = ϕ(ϑ; 0, 32 ) e $2(ϑ) = exp{−(ϑ − 5)2 /2} c2 = 1/ √ 2π
  • 10.
    Illustration Special case whenI = 2, c1 = 1, T1 = T2 −`(c2) = T X t=1 log{1 + c2 e $2(ϑ1t)/$1(ϑ1t)} + T X t=1 log{1 + $1(ϑ2t)/c2 e $2(ϑ2t)} tg=function(x)exp(-(x-5)**2/2) pl=function(a) sum(log(1+a*tg(x)/dnorm(x,0,3)))+sum(log(1+dnorm(y,0,3)/a/tg(y))) nrm=matrix(0,3,1e2) for(i in 1:3) for(j in 1:1e2) x=rnorm(10**(i+1),0,3) y=rnorm(10**(i+1),5,1) nrm[i,j]=optimise(pl,c(.01,1))
  • 11.
    Illustration Special case whenI = 2, c1 = 1, T1 = T2 −`(c2) = T X t=1 log{1 + c2 e $2(ϑ1t)/$1(ϑ1t)} + T X t=1 log{1 + $1(ϑ2t)/c2 e $2(ϑ2t)} 1 2 3 0.3 0.4 0.5 0.6 0.7
  • 12.
    Illustration Special case whenI = 2, c1 = 1, T1 = T2 −`(c2) = T X t=1 log{1 + c2 e $2(ϑ1t)/$1(ϑ1t)} + T X t=1 log{1 + $1(ϑ2t)/c2 e $2(ϑ2t)} Full logistic
  • 13.
    Outline 1 Geyer’s 1994logistic 2 Links with bridge sampling 3 Noise contrastive estimation 4 Generative models 5 Variational autoencoders (VAEs) 6 Generative adversarial networks (GANs)
  • 14.
    Bridge sampling Approximation ofBayes factors (and other ratios of integrals) Special case: If π1(ϑ1|x) ∝ π̃1(ϑ1|x) π2(ϑ2|x) ∝ π̃2(ϑ2|x) live on the same space (Θ1 = Θ2), then B12 ≈ 1 n n X i=1 π̃1(ϑi|x) π̃2(ϑi|x) ϑi ∼ π2(ϑ|x) [Bennett, 1976; Gelman Meng, 1998; Chen, Shao Ibrahim, 2000]
  • 15.
    Bridge sampling variance Thebridge sampling estimator does poorly if var(b B12) B2 12 ≈ 1 n Eπ2 π1(ϑ) − π2(ϑ) π2(ϑ) 2 # is large, i.e. if π1 and π2 have little overlap...
  • 16.
    Bridge sampling variance Thebridge sampling estimator does poorly if var(b B12) B2 12 ≈ 1 n Eπ2 π1(ϑ) − π2(ϑ) π2(ϑ) 2 # is large, i.e. if π1 and π2 have little overlap...
  • 17.
    (Further) bridge sampling Generalidentity: c1 c2 = B12 = Z π̃2(ϑ|x)α(ϑ)π1(ϑ|x)dϑ Z π̃1(ϑ|x)α(ϑ)π2(ϑ|x)dϑ ∀ α(·) ≈ 1 n1 n1 X i=1 π̃2(ϑ1i|x)α(ϑ1i) 1 n2 n2 X i=1 π̃1(ϑ2i|x)α(ϑ2i) ϑji ∼ πj(ϑ|x)
  • 18.
    Optimal bridge sampling Theoptimal choice of auxiliary function is α? (ϑ) = n1 + n2 n1π1(ϑ|x) + n2π2(ϑ|x) leading to b B12 ≈ 1 n1 n1 X i=1 π̃2(ϑ1i|x) n1π1(ϑ1i|x) + n2π2(ϑ1i|x) 1 n2 n2 X i=1 π̃1(ϑ2i|x) n1π1(ϑ2i|x) + n2π2(ϑ2i|x)
  • 19.
    Optimal bridge sampling(2) Reason: Var(b B12) B2 12 ≈ 1 n1n2 R π1(ϑ)π2(ϑ)[n1π1(ϑ) + n2π2(ϑ)]α(ϑ)2 dϑ R π1(ϑ)π2(ϑ)α(ϑ) dϑ 2 − 1 (by the δ method) Drawback: Dependence on the unknown normalising constants solved iteratively
  • 20.
    Optimal bridge sampling(2) Reason: Var(b B12) B2 12 ≈ 1 n1n2 R π1(ϑ)π2(ϑ)[n1π1(ϑ) + n2π2(ϑ)]α(ϑ)2 dϑ R π1(ϑ)π2(ϑ)α(ϑ) dϑ 2 − 1 (by the δ method) Drawback: Dependence on the unknown normalising constants solved iteratively
  • 21.
    Back to thelogistic When T1 = T2 = T, optimising −`(c2) = T X t=1 log{1+c2 e $2(ϑ1t)/$1(ϑ1t)}+ T X t=1 log{1+$1(ϑ2t)/c2 e $2(ϑ2t)} cancelling derivative in c2 T X t=1 e $2(ϑ1t) c2 e $2(ϑ1t) + $1(ϑ1t) − c−1 2 T X t=1 $1(ϑ2t) $1(ϑ2t) + c2 e $2(ϑ2t) = 0 leads to c0 2 = PT t=1 $1(ϑ2t) $1(ϑ2t)+c2 e $2(ϑ2t) PT t=1 e $2(ϑ1t) c2 e $2(ϑ1t)+$1(ϑ1t) EM step for the maximum pseudo-likelihood estimation
  • 22.
    Back to thelogistic When T1 = T2 = T, optimising −`(c2) = T X t=1 log{1+c2 e $2(ϑ1t)/$1(ϑ1t)}+ T X t=1 log{1+$1(ϑ2t)/c2 e $2(ϑ2t)} cancelling derivative in c2 T X t=1 e $2(ϑ1t) c2 e $2(ϑ1t) + $1(ϑ1t) − c−1 2 T X t=1 $1(ϑ2t) $1(ϑ2t) + c2 e $2(ϑ2t) = 0 leads to c0 2 = PT t=1 $1(ϑ2t) $1(ϑ2t)+c2 e $2(ϑ2t) PT t=1 e $2(ϑ1t) c2 e $2(ϑ1t)+$1(ϑ1t) EM step for the maximum pseudo-likelihood estimation
  • 23.
    Back to thelogistic When T1 = T2 = T, optimising −`(c2) = T X t=1 log{1+c2 e $2(ϑ1t)/$1(ϑ1t)}+ T X t=1 log{1+$1(ϑ2t)/c2 e $2(ϑ2t)} cancelling derivative in c2 T X t=1 e $2(ϑ1t) c2 e $2(ϑ1t) + $1(ϑ1t) − c−1 2 T X t=1 $1(ϑ2t) $1(ϑ2t) + c2 e $2(ϑ2t) = 0 leads to c0 2 = PT t=1 $1(ϑ2t) $1(ϑ2t)+c2 e $2(ϑ2t) PT t=1 e $2(ϑ1t) c2 e $2(ϑ1t)+$1(ϑ1t) EM step for the maximum pseudo-likelihood estimation
  • 24.
    Mixtures as proposals Designspecific mixture for simulation purposes, with density e ϕ(ϑ) ∝ ω1π(ϑ)L(ϑ) + ϕ(ϑ) , where ϕ(ϑ) is arbitrary (but normalised) Note: ω1 is not a probability weight [Chopin Robert, 2011]
  • 25.
    Mixtures as proposals Designspecific mixture for simulation purposes, with density e ϕ(ϑ) ∝ ω1π(ϑ)L(ϑ) + ϕ(ϑ) , where ϕ(ϑ) is arbitrary (but normalised) Note: ω1 is not a probability weight [Chopin Robert, 2011]
  • 26.
    evidence approximation bymixtures Rao-Blackwellised estimate ^ ξ = 1 T T X t=1 ω1π(ϑ(t) )L(ϑ(t) ) ω1π(ϑ(t) )L(ϑ(t) ) + ϕ(ϑ(t) ) , converges to ω1Z/{ω1Z + 1} Deduce ^ Z from ω1 ^ Z/{ω1 ^ Z + 1} = ^ ξ Back to bridge sampling optimal estimate [Chopin Robert, 2011]
  • 27.
    Non-parametric MLE “At firstglance, the problem appears to be an exercise in calculus or numerical analysis, and not amenable to statistical formulation” Kong et al. (JRSS B, 2002) I use of Fisher information I non-parametric MLE based on simulations I comparison of sampling schemes through variances I Rao–Blackwellised improvements by invariance constraints [Meng, 2011, IRCEM]
  • 28.
    Non-parametric MLE “At firstglance, the problem appears to be an exercise in calculus or numerical analysis, and not amenable to statistical formulation” Kong et al. (JRSS B, 2002) I use of Fisher information I non-parametric MLE based on simulations I comparison of sampling schemes through variances I Rao–Blackwellised improvements by invariance constraints [Meng, 2011, IRCEM]
  • 29.
    NPMLE Observing Yij ∼ Fi(t)= c−1 i Zt −∞ ωi(x) dF(x) with ωi known and F unknown
  • 30.
    NPMLE Observing Yij ∼ Fi(t)= c−1 i Zt −∞ ωi(x) dF(x) with ωi known and F unknown “Maximum likelihood estimate” defined by weighted empirical cdf X i,j ωi(yij)p(yij)δyij maximising in p Y ij c−1 i ωi(yij) p(yij)
  • 31.
    NPMLE Observing Yij ∼ Fi(t)= c−1 i Zt −∞ ωi(x) dF(x) with ωi known and F unknown “Maximum likelihood estimate” defined by weighted empirical cdf X i,j ωi(yij)p(yij)δyij maximising in p Y ij c−1 i ωi(yij) p(yij) Result such that X ij ^ c−1 r ωr(yij) P s ns^ c−1 s ωs(yij) = 1 [Vardi, 1985]
  • 32.
    NPMLE Observing Yij ∼ Fi(t)= c−1 i Zt −∞ ωi(x) dF(x) with ωi known and F unknown Result such that X ij ^ c−1 r ωr(yij) P s ns^ c−1 s ωs(yij) = 1 [Vardi, 1985] Bridge sampling estimator X ij ^ c−1 r ωr(yij) P s ns^ c−1 s ωs(yij) = 1 [Gelman Meng, 1998; Tan, 2004]
  • 33.
    end of theSeries B 2002 discussion “...essentially every Monte Carlo activity may be interpreted as parameter estimation by maximum likelihood in a statistical model. We do not claim that this point of view is necessary; nor do we seek to establish a working principle from it.” I restriction to discrete support measures [may be] suboptimal [Ritov Bickel, 1990; Robins et al., 1997, 2000, 2003] I group averaging versions in-between multiple mixture estimators and quasi-Monte Carlo version [Owen Zhou, 2000; Cornuet et al., 2012; Owen, 2003] I statistical analogy provides at best narrative thread
  • 34.
    end of theSeries B 2002 discussion “The hard part of the exercise is to construct a submodel such that the gain in precision is sufficient to justify the additional computational effort” I garden of forking paths, with infinite possibilities I no free lunch (variance, budget, time) I Rao–Blackwellisation may be detrimental in Markov setups
  • 35.
    end of the2002 discussion “The statistician can considerably improve the efficiency of the estimator by using the known values of different functionals such as moments and probabilities of different sets. The algorithm becomes increasingly efficient as the number of functionals becomes larger. The result, however, is an extremely complicated algorithm, which is not necessarily faster.” Y. Ritov “...the analyst must violate the likelihood principle and eschew semiparametric, nonparametric or fully parametric maximum likelihood estimation in favour of non-likelihood-based locally efficient semiparametric estimators.” J. Robins
  • 36.
Outline
1 Geyer’s 1994 logistic
2 Links with bridge sampling
3 Noise contrastive estimation
4 Generative models
5 Variational autoencoders (VAEs)
6 Generative adversarial networks (GANs)
Noise contrastive estimation
New estimation principle for parameterised and unnormalised statistical models, also based on nonlinear logistic regression
Case of a parameterised model with density
p(x; α) = p̃(x; α) / Z(α)
and intractable normalising constant Z(α)
Estimating Z(α) as an extra parameter is impossible via maximum likelihood methods
Use of estimation techniques bypassing the constant, like contrastive divergence (Hinton, 2002) and score matching (Hyvärinen, 2005)
[Gutmann & Hyvärinen, 2010]
NCE principle
As in Geyer’s method, given a sample x_1, . . . , x_T from p(x; α)
I generate an artificial sample y_1, . . . , y_T from a known distribution q
I maximise the classification log-likelihood (where ϑ = (α, c))
ℓ(ϑ; x, y) := Σ_{i=1}^{T} log h(x_i; ϑ) + Σ_{i=1}^{T} log{1 − h(y_i; ϑ)}
of a logistic regression model which discriminates the observed data from the simulated data, where
h(z; ϑ) = c p̃(z; α) / [c p̃(z; α) + q(z)]
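A minimal R sketch of this classification objective, under assumptions not in the slides: unnormalised model p̃(x; α) = exp{−(x − α)²/2} (a Gaussian with unknown mean α and unknown constant c = 1/Z), noise q = N(0, 3²) and T = 10⁴ points in each sample; h(z; ϑ) is computed through the logistic function plogis.
set.seed(1)
T <- 1e4; alpha0 <- 2
x <- rnorm(T, alpha0, 1)               # observed sample from the model
y <- rnorm(T, 0, 3)                    # artificial sample from the noise q = N(0, 3^2)
lptilde <- function(z, alpha) -(z - alpha)^2 / 2   # log ptilde(z; alpha), unnormalised
lq <- function(z) dnorm(z, 0, 3, log = TRUE)       # log q(z)
nll <- function(theta) {               # negative classification log-likelihood, theta = (alpha, log c)
  hx <- plogis(theta[2] + lptilde(x, theta[1]) - lq(x))   # h(x_i; theta)
  hy <- plogis(theta[2] + lptilde(y, theta[1]) - lq(y))   # h(y_i; theta)
  -sum(log(hx)) - sum(log(1 - hy))
}
fit <- optim(c(0, 0), nll)
fit$par                                # (alpha, log c); log c should approach -log(sqrt(2 * pi))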
NCE consistency
Objective function that converges (in T) to
J(ϑ) = E[log h(x; ϑ) + log{1 − h(y; ϑ)}]
Defining f(·) = log p(·; ϑ) and (with r(·) the logistic function)
J̃(f) = E_p[log r(f(x) − log q(x)) + log{1 − r(f(y) − log q(y))}]
Assuming q(·) positive everywhere,
I J̃(·) attains its maximum at f⋆(·) = log p(·), the true distribution
I maximisation performed without any normalisation constraint
Under regularity conditions, and assuming the true distribution belongs to the parametric family, the solution
ϑ̂_T = arg max_ϑ ℓ(ϑ; x, y)   (1)
converges to the true ϑ
Consequence: the log-normalisation constant is consistently estimated by maximising (1)
Convergence of noise contrastive estimation
Opposition of Monte Carlo MLE à la Geyer (1994, JASA)
L(ϑ) = (1/n) Σ_{i=1}^{n} log [p̃(x_i; ϑ) / p̃(x_i; ϑ⁰)] − log { (1/m) Σ_{j=1}^{m} p̃(z_j; ϑ) / p̃(z_j; ϑ⁰) }
where the inner average approximates Z(ϑ)/Z(ϑ⁰), with
x_1, . . . , x_n ∼ p⋆   and   z_1, . . . , z_m ∼ p(z; ϑ⁰)
[Riou-Durand & Chopin, 2018]
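For comparison with the NCE sketch above, a sketch of Geyer’s MC-MLE objective on the same toy unnormalised Gaussian model (assumption: reference value ϑ⁰ = 0, so that p(·; ϑ⁰) = N(0, 1)).
set.seed(2)
lptilde <- function(z, alpha) -(z - alpha)^2 / 2     # log ptilde(z; alpha)
x <- rnorm(1e4, 2, 1)                                # data from p(.; alpha0 = 2)
z <- rnorm(1e4, 0, 1)                                # z_j ~ p(.; theta0 = 0)
mcmle <- function(alpha)                             # Monte Carlo MLE objective L(alpha)
  mean(lptilde(x, alpha) - lptilde(x, 0)) - log(mean(exp(lptilde(z, alpha) - lptilde(z, 0))))
optimise(mcmle, c(-5, 5), maximum = TRUE)$maximum    # close to alpha0 = 2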
Convergence of noise contrastive estimation
and of noise contrastive estimation à la Gutmann and Hyvärinen (2012)
L(ϑ, ν) = (1/n) Σ_{i=1}^{n} log q_{ϑ,ν}(x_i) + (m/n) · (1/m) Σ_{j=1}^{m} log[1 − q_{ϑ,ν}(z_j)]
where
log [ q_{ϑ,ν}(z) / (1 − q_{ϑ,ν}(z)) ] = log [ p̃(z; ϑ) / p̃(z; ϑ⁰) ] + ν + log(n/m)
with x_1, . . . , x_n ∼ p⋆ and z_1, . . . , z_m ∼ p(z; ϑ⁰)
[Riou-Durand & Chopin, 2018]
Poisson transform
Equivalent likelihoods
L(ϑ, ν) = (1/n) Σ_{i=1}^{n} log [p̃(x_i; ϑ) / p̃(x_i; ϑ⁰)] + ν − e^ν Z(ϑ)/Z(ϑ⁰)
and
L(ϑ, ν) = (1/n) Σ_{i=1}^{n} log [p̃(x_i; ϑ) / p̃(x_i; ϑ⁰)] + ν − (e^ν/m) Σ_{j=1}^{m} p̃(z_j; ϑ) / p̃(z_j; ϑ⁰)
sharing the same ϑ̂ as the originals
NCE consistency
Under mild assumptions, almost surely
ξ̂^{MCMLE}_{n,m} → ξ̂_n   and   ξ̂^{NCE}_{n,m} → ξ̂_n   as m → ∞
the maximum likelihood estimator associated with x_1, . . . , x_n ∼ p(·; ϑ), and
e^{−ν̂} = Z(ϑ̂)/Z(ϑ⁰)
[Geyer, 1994; Riou-Durand & Chopin, 2018]
NCE asymptotics
Under less mild assumptions (more robust for NCE), asymptotic normality of both NCE and MC-MLE estimates as
n → +∞,   m/n → τ
√n (ξ̂^{MCMLE}_{n,m} − ξ⋆) ≈ N_d(0, Σ^{MCMLE})   and   √n (ξ̂^{NCE}_{n,m} − ξ⋆) ≈ N_d(0, Σ^{NCE})
with the important ordering
Σ^{MCMLE} ⪰ Σ^{NCE}
showing that NCE dominates MCMLE in terms of mean squared error (for iid simulations)
[Geyer, 1994; Riou-Durand & Chopin, 2018]
NCE asymptotics
Under less mild assumptions (more robust for NCE), asymptotic normality of both NCE and MC-MLE estimates as
n → +∞,   m/n → τ
√n (ξ̂^{MCMLE}_{n,m} − ξ⋆) ≈ N_d(0, Σ^{MCMLE})   and   √n (ξ̂^{NCE}_{n,m} − ξ⋆) ≈ N_d(0, Σ^{NCE})
with the important ordering Σ^{MCMLE} ⪰ Σ^{NCE}, except when ϑ⁰ = ϑ⋆, where
Σ^{MCMLE} = Σ^{NCE} = (1 + τ^{−1}) Σ^{RMLNCE}
[Geyer, 1994; Riou-Durand & Chopin, 2018]
NCE contrast distribution
Choice of q(·) free, but it
I must be easy to sample from
I must allow for an analytical expression of its log-pdf
I must be close to the true density p(·), so that the mean squared error E[|ϑ̂_T − ϑ⋆|²] is small
Learning an approximation q̂ to p(·), for instance via normalising flows
[Tabak and Turner, 2013; Jia & Seljak, 2019]
Density estimation by normalising flows
“A normalizing flow describes the transformation of a probability density through a sequence of invertible mappings. By repeatedly applying the rule for change of variables, the initial density ‘flows’ through the sequence of invertible mappings. At the end of this sequence we obtain a valid probability distribution and hence this type of flow is referred to as a normalizing flow.”
[Rezende & Mohamed, 2015; Papamakarios et al., 2021]
Density estimation by normalising flows
Based on invertible and twice-differentiable transforms (diffeomorphisms) g_i(·) = g(·; η_i) of a standard distribution ϕ(·)
Representation
z = g_p ∘ · · · ∘ g_1(x),   x ∼ ϕ(x)
Flow defined as x → z_1 → . . . → z_p = z, where z_i = g_i(z_{i−1}) and z_0 = x
Density of z by the Jacobian transform
q(z) = ϕ(x(z)) ∏_i |det J_{g_i}(z_{i−1})|^{−1}
Composition of transforms
(g_1 ∘ g_2)^{−1} = g_2^{−1} ∘ g_1^{−1}   (2)
det J_{g_1∘g_2}(u) = det J_{g_1}(g_2(u)) × det J_{g_2}(u)   (3)
[Rezende & Mohamed, 2015; Papamakarios et al., 2021]
Density estimation by normalising flows
Normalising flows are
I a flexible family of densities
I easy to train by optimisation (e.g., maximum likelihood estimation, variational inference)
I a neural version of density estimation and generative modelling
I trained from observed densities
I natural tools for approximate Bayesian inference (variational inference, ABC, synthetic likelihood)
Invertible linear-time transformations
Family of transformations
g(z) = z + u h(w′z + b),   u, w ∈ R^d, b ∈ R
with h a smooth element-wise non-linearity, with derivative h′
Jacobian term computed in O(d) time: with
ψ(z) = h′(w′z + b) w
|det ∂g(z)/∂z| = |det(I_d + u ψ(z)′)| = |1 + u′ψ(z)|
[Rezende & Mohamed, 2015]
Invertible linear-time transformations
Family of transformations
g(z) = z + u h(w′z + b),   u, w ∈ R^d, b ∈ R
with h a smooth element-wise non-linearity, with derivative h′
Density q(z) obtained by transforming an initial density ϕ(x) through the sequence of maps g_i, i.e.
z = g_p ∘ · · · ∘ g_1(x)
and
log q(z) = log ϕ(x) − Σ_{k=1}^{p} log |1 + u′ψ_k(z_{k−1})|
[Rezende & Mohamed, 2015]
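A minimal R sketch of this planar-flow density computation, under assumptions not in the slides (h = tanh, d = 2, p = 3 layers with small random parameters; the invertibility constraint w′u ≥ −1 is not enforced here).
set.seed(3)
d <- 2; p <- 3
layers <- replicate(p, list(u = rnorm(d, sd = .3), w = rnorm(d), b = rnorm(1)), simplify = FALSE)
planar_forward <- function(x, layers) {      # forward pass x -> z, accumulating the log-det terms
  z <- x; logdet <- 0
  for (ly in layers) {
    a <- sum(ly$w * z) + ly$b
    psi <- (1 - tanh(a)^2) * ly$w            # psi(z) = h'(w'z + b) w with h = tanh
    logdet <- logdet + log(abs(1 + sum(ly$u * psi)))
    z <- z + ly$u * tanh(a)                  # g(z) = z + u h(w'z + b)
  }
  list(z = z, logq = sum(dnorm(x, log = TRUE)) - logdet)   # log q(z) = log phi(x) - sum_k log|1 + u' psi_k|
}
planar_forward(rnorm(d), layers)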
General theory of normalising flows
“Normalizing flows provide a general mechanism for defining expressive probability distributions, only requiring the specification of a (usually simple) base distribution and a series of bijective transformations.”
T(u; ψ) = g_p(g_{p−1}(. . . g_1(u; η_1) . . . ; η_{p−1}); η_p)
[Papamakarios et al., 2021]
General theory of normalising flows
“...how expressive are flow-based models? Can they represent any distribution p(x), even if the base distribution is restricted to be simple? We show that this universal representation is possible under reasonable conditions on p(x).”
Obvious when considering the inverse conditional cdf transforms, assuming differentiability
[Papamakarios et al., 2021]
General theory of normalising flows
[Hyvärinen & Pajunen (1999)]
I write p_x(x) = ∏_{i=1}^{d} p(x_i | x_{1:(i−1)})
I define z_i = F_i(x_i, x_{1:(i−1)}) = P(X_i ≤ x_i | x_{1:(i−1)})
I deduce that det J_F(x) = p(x)
I conclude that p_z(z) = 1, the Uniform on (0, 1)^d
[Papamakarios et al., 2021]
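A quick R illustration of this conditional-cdf construction on a bivariate Normal with correlation ρ = 0.7 (an assumption for the example): the transformed coordinates are independent U(0, 1).
set.seed(4)
rho <- 0.7; n <- 1e4
x1 <- rnorm(n); x2 <- rho * x1 + sqrt(1 - rho^2) * rnorm(n)
z1 <- pnorm(x1)                                        # z1 = P(X1 <= x1)
z2 <- pnorm(x2, mean = rho * x1, sd = sqrt(1 - rho^2)) # z2 = P(X2 <= x2 | x1)
c(ks.test(z1, "punif")$p.value, ks.test(z2, "punif")$p.value, cor(z1, z2))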
General theory of normalising flows
“Minimizing the Monte Carlo approximation of the Kullback–Leibler divergence [between the true and the model densities] is equivalent to fitting the flow-based model to the sample by maximum likelihood estimation.”
ML estimate of the flow-based model parameters by
arg max_ψ Σ_{i=1}^{n} log ϕ(T^{−1}(x_i; ψ)) + log |det J_{T^{−1}}(x_i; ψ)|
Note the possible use of the reverse Kullback–Leibler divergence when learning an approximation (VA, IS, ABC) to a known [up to a constant] target p(x)
[Papamakarios et al., 2021]
Autoregressive flows
Component-wise transform (i = 1, . . . , d)
z′_i = τ(z_i; h_i)   [transformer]
where
h_i = c_i(z_{1:(i−1)}) = c_i(z_{1:(i−1)}; ϕ_i)   [conditioner]
Triangular Jacobian
log |det J_ϕ(z)| = log ∏_{i=1}^{d} |∂τ(z_i; h_i)/∂z_i| = Σ_{i=1}^{d} log |∂τ(z_i; h_i)/∂z_i|
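A minimal R sketch of one affine autoregressive layer, under assumptions not in the slides (transformer τ(z_i; h_i) = µ_i + e^{s_i} z_i with constant log-scales, conditioner c_i linear in z_{1:(i−1)} through a strictly lower-triangular matrix); the triangular Jacobian gives log|det J| = Σ_i s_i.
set.seed(5)
d <- 4
A <- matrix(rnorm(d * d, sd = .2), d, d)
A[upper.tri(A, diag = TRUE)] <- 0              # strictly lower-triangular: c_i uses z_{1:(i-1)} only
a <- rnorm(d); s <- rnorm(d, sd = .1)          # shifts and (constant) log-scales
ar_layer <- function(z) {
  mu <- a + as.vector(A %*% z)                 # conditioner h_i = c_i(z_{1:(i-1)})
  list(zp = mu + exp(s) * z,                   # transformer tau(z_i; h_i)
       logdet = sum(s))                        # triangular Jacobian with diagonal exp(s_i)
}
ar_layer(rnorm(d))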
Table 1: multiple choices for
I the transformer τ(·; ϕ)
I the conditioner c(·) (neural network)
[Papamakarios et al., 2021]
Practical considerations
“Implementing a flow often amounts to composing as many transformations as computation and memory will allow. Working with such deep flows introduces additional challenges of a practical nature.”
I the more the merrier?!
I batch normalisation for maintaining stable gradients (between layers)
I fighting the curse of dimensionality (“evaluating T incurs an increasing computational cost as dimensionality grows”) with a multiscale architecture (clamping: component-wise stopping rules)
[Papamakarios et al., 2021]
Practical considerations
“Implementing a flow often amounts to composing as many transformations as computation and memory will allow. Working with such deep flows introduces additional challenges of a practical nature.”
I the more the merrier?!
I batch normalisation for maintaining stable gradients (between layers)
I “...early work on flow precursors dismissed the autoregressive approach as prohibitively expensive”, addressed by sharing parameters within the conditioners c_i(·)
[Papamakarios et al., 2021]
Applications
“Normalizing flows have two primitive operations: density calculation and sampling. In turn, flows are effective in any application requiring a probabilistic model with either of those capabilities.”
I density estimation [speed of convergence?]
I proxy generative model
I importance sampling for integration, by minimising the distance to the integrand or the IS variance [finite?]
I MCMC flow substitute for HMC
[Papamakarios et al., 2021]
Applications
“Normalizing flows have two primitive operations: density calculation and sampling. In turn, flows are effective in any application requiring a probabilistic model with either of those capabilities.”
I optimised reparameterisation of the target for MCMC [exact?]
I variational approximation, by maximising the evidence lower bound (ELBO) to the posterior on the parameter η = T(u, ϕ),
Σ_{i=1}^{n} log p(x^{obs}, T(u_i; ϕ)) + log |det J_T(u_i; ϕ)|
where the first term is the joint density
I substitutes for likelihood-free inference on either π(η | x^{obs}) or p(x^{obs} | η)
[Papamakarios et al., 2021]
A[nother] revolution in machine learning?
“One area where neural networks are being actively developed is density estimation in high dimensions: given a set of points x ∼ p(x), the goal is to estimate the probability density p(·). As there are no explicit labels, this is usually considered an unsupervised learning task. We have already discussed that classical methods based for instance on histograms or kernel density estimation do not scale well to high-dimensional data. In this regime, density estimation techniques based on neural networks are becoming more and more popular. One class of these neural density estimation techniques are normalizing flows.”
[Cranmer et al., PNAS, 2020]
Crucially lacking
No connection with statistical density estimation, with no general study of convergence (in the training sample size) to the true density
...or in evaluating the approximation error (as in ABC)
[Kobyzev et al., 2019; Papamakarios et al., 2021]
Reconnecting with Geyer (1994)
“...neural networks can be trained to learn the likelihood ratio function p(x|ϑ0)/p(x|ϑ1) or p(x|ϑ0)/p(x), where in the latter case the denominator is given by a marginal model integrated over a proposal or the prior (...) The key idea is closely related to the discriminator network in GANs mentioned above: a classifier is trained using supervised learning to discriminate two sets of data, though in this case both sets come from the simulator and are generated for different parameter points ϑ0 and ϑ1. The classifier output function can be converted into an approximation of the likelihood ratio between ϑ0 and ϑ1! This manifestation of the Neyman–Pearson lemma in a machine learning setting is often called the likelihood ratio trick.”
[Cranmer et al., PNAS, 2020]
A comparison with MLE
[Figures: Gutmann & Hyvärinen, 2012]
Outline
1 Geyer’s 1994 logistic
2 Links with bridge sampling
3 Noise contrastive estimation
4 Generative models
5 Variational autoencoders (VAEs)
6 Generative adversarial networks (GANs)
Generative models
“Deep generative models that can learn via the principle of maximum likelihood differ with respect to how they represent or approximate the likelihood.” I. Goodfellow
Likelihood function
L(ϑ | x_1, . . . , x_n) ∝ ∏_{i=1}^{n} p_model(x_i | ϑ)
leading to the MLE estimate
ϑ̂(x_1, . . . , x_n) = arg max_ϑ Σ_{i=1}^{n} log p_model(x_i | ϑ)
with (asymptotically)
ϑ̂(x_1, . . . , x_n) = arg min_ϑ D_KL(p_data || p_model(· | ϑ))
Likelihood complexity
Explicit solutions:
I domino representation (“fully visible belief networks”)
p_model(x) = ∏_{t=1}^{T} p_model(x_t | Pa(x_t))
(e.g., conditioning on x_{1:(t−1)})
I “non-linear independent component analysis” (cf. normalising flows)
p_model(x) = p_z(g_ϕ^{−1}(x))
I variational approximations
log p_model(x; ϑ) ≥ L(x; ϑ)
represented by variational autoencoders
I Markov chain Monte Carlo (MCMC) maximisation
Likelihood complexity
Implicit solutions, involving sampling from the model p_model without computing its density
I ABC algorithms for MLE derivation [Piccini & Anderson, 2017]
I generative stochastic networks [Bengio et al., 2014]
I generative adversarial networks (GANs) [Goodfellow et al., 2014]
Variational autoencoders (VAEs)
1 Geyer’s 1994 logistic
2 Links with bridge sampling
3 Noise contrastive estimation
4 Generative models
5 Variational autoencoders (VAEs)
6 Generative adversarial networks (GANs)
Variational autoencoders
“... provide a principled framework for learning deep latent-variable models and corresponding inference models (...) can be viewed as two coupled, but independently parameterized models: the encoder or recognition model, and the decoder or generative model. These two models support each other. The recognition model delivers to the generative model an approximation to its posterior over latent random variables, which it needs to update its parameters inside an iteration of “expectation maximization” learning. Reversely, the generative model is a scaffolding of sorts for the recognition model to learn meaningful representations of the data (...) The recognition model is the approximate inverse of the generative model according to Bayes rule.”
[Kingma & Welling, 2019]
Autoencoders
“An autoencoder is a neural network that is trained to attempt to copy its input x to its output r = g(h) via a hidden layer h = f(x) (...) [they] are designed to be unable to copy perfectly”
I undercomplete autoencoders (with dim(h) < dim(x))
I regularised autoencoders, with objective L(x, g ∘ f(x)) + Ω(h), where the penalty is akin to a log-prior
I denoising autoencoders (learning x on a noisy version x̃ of x)
I stochastic autoencoders (learning p_decode(x|h) for a given p_encode(h|x), without compatibility)
[Goodfellow et al., 2016, p.496]
Variational autoencoders (VAEs)
“The key idea behind the variational autoencoder is to attempt to sample values of Z that are likely to have produced X = x, and compute p(x) just from those.”
Representation of the (marginal) likelihood p_ϑ(x) based on a latent variable z
p_ϑ(x) = ∫ p_ϑ(x|z) p_ϑ(z) dz
Machine learning is usually preoccupied only with maximising p_ϑ(x) (in ϑ) by simulating z efficiently (i.e., not from the prior), based on
log p_ϑ(x) − D[q_ϕ(·|x) || p_ϑ(·|x)] = E_{q_ϕ(·|x)}[log p_ϑ(x|Z)] − D[q_ϕ(·|x) || p_ϑ(·)]
since x is fixed (Bayesian analogy)
[Kingma & Welling, 2019]
Variational autoencoders (VAEs)
log p_ϑ(x) − D[q_ϕ(·|x) || p_ϑ(·|x)] = E_{q_ϕ(·|x)}[log p_ϑ(x|Z)] − D[q_ϕ(·|x) || p_ϑ(·)]
I lhs is the quantity to maximise (plus an error term, small for a good approximation q_ϕ, or a regularisation)
I rhs can be optimised by stochastic gradient descent when q_ϕ is manageable
I link with autoencoders, as q_ϕ(z|x) “encodes” x into z, and p_ϑ(x|z) “decodes” z to reconstruct x
[Doersch, 2021]
Variational autoencoders (VAEs)
“One major division in machine learning is generative versus discriminative modeling (...) To turn a generative model into a discriminator we need Bayes rule.”
Representation of the (marginal) likelihood p_ϑ(x) based on a latent variable z
Variational approximation q_ϕ(z|x) (also called the encoder) to the posterior distribution on the latent variable z, p_ϑ(z|x), associated with the conditional distribution p_ϑ(x|z) (also called the decoder)
Example: q_ϕ(z|x) Normal distribution N_d(µ(x), Σ(x)) with
I (µ(x), Σ(x)) estimated by a deep neural network
I (µ(x), Σ(x)) estimated by ABC (synthetic likelihood)
[Kingma & Welling, 2014]
ELBO objective
Since
log p_ϑ(x) = E_{q_ϕ(z|x)}[log p_ϑ(x)]
           = E_{q_ϕ(z|x)}[log p_ϑ(x, z)/p_ϑ(z|x)]
           = E_{q_ϕ(z|x)}[log p_ϑ(x, z)/q_ϕ(z|x)] + E_{q_ϕ(z|x)}[log q_ϕ(z|x)/p_ϑ(z|x)]   [second term: KL ≥ 0]
the evidence lower bound (ELBO) is defined by
L_{ϑ,ϕ}(x) = E_{q_ϕ(z|x)}[log p_ϑ(x, z)] − E_{q_ϕ(z|x)}[log q_ϕ(z|x)]
and used as the objective function to be maximised in (ϑ, ϕ)
ELBO maximisation
Stochastic gradient step, one parameter at a time
In iid settings,
L_{ϑ,ϕ}(x) = Σ_{i=1}^{n} L_{ϑ,ϕ}(x_i)
and
∇_ϑ L_{ϑ,ϕ}(x_i) = E_{q_ϕ(z|x_i)}[∇_ϑ log p_ϑ(x_i, z)] ≈ ∇_ϑ log p_ϑ(x_i, z̃(x_i))
for one simulation z̃(x_i) ∼ q_ϕ(z|x_i), but ∇_ϕ L_{ϑ,ϕ}(x_i) is more difficult to compute
ELBO maximisation (2)
Reparameterisation (a form of normalising flow)
If z = g(x, ϕ, ε) ∼ q_ϕ(z|x) when ε ∼ r(ε), then
E_{q_ϕ(z|x_i)}[h(Z)] = E_r[h(g(x, ϕ, ε))]
and
∇_ϕ E_{q_ϕ(z|x_i)}[h(Z)] = ∇_ϕ E_r[h ∘ g(x, ϕ, ε)] = E_r[∇_ϕ h ∘ g(x, ϕ, ε)] ≈ ∇_ϕ h ∘ g(x, ϕ, ε̃)
for one simulation ε̃ ∼ r
[Kingma & Welling, 2014]
ELBO maximisation (2)
Reparameterisation (a form of normalising flow)
If z = g(x, ϕ, ε) ∼ q_ϕ(z|x) when ε ∼ r(ε), then
E_{q_ϕ(z|x_i)}[h(Z)] = E_r[h(g(x, ϕ, ε))]
leading to an unbiased estimator of the gradient of the ELBO,
∇_{ϑ,ϕ} {log p_ϑ(x, g(x, ϕ, ε)) − log q_ϕ(g(x, ϕ, ε)|x)}
[Kingma & Welling, 2014]
ELBO maximisation (2)
Reparameterisation (a form of normalising flow)
If z = g(x, ϕ, ε) ∼ q_ϕ(z|x) when ε ∼ r(ε), then
E_{q_ϕ(z|x_i)}[h(Z)] = E_r[h(g(x, ϕ, ε))]
leading to an unbiased estimator of the gradient of the ELBO,
∇_{ϑ,ϕ} { log p_ϑ(x, g(x, ϕ, ε)) − log r(ε) + log |det ∂g(x, ϕ, ε)/∂ε| }
since, by the change of variables, log q_ϕ(g(x, ϕ, ε)|x) = log r(ε) − log |det ∂g(x, ϕ, ε)/∂ε|
[Kingma & Welling, 2014]
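A one-dimensional R sketch of this estimator, under toy assumptions not in the slides: p_ϑ(z) = N(0, 1), p_ϑ(x|z) = N(z, 1), q_ϕ(z|x) = N(ϕ1, e^{2ϕ2}), so that z = g(x, ϕ, ε) = ϕ1 + e^{ϕ2} ε with ε ∼ N(0, 1); elbo_hat returns the one-sample unbiased estimate log p_ϑ(x, g) − log q_ϕ(g|x), whose gradient in (ϑ, ϕ) is the estimator above.
set.seed(6)
elbo_hat <- function(phi, x, eps) {
  z <- phi[1] + exp(phi[2]) * eps                           # z = g(x, phi, eps)
  dnorm(x, z, 1, log = TRUE) + dnorm(z, 0, 1, log = TRUE) - # log p(x, z)
    dnorm(z, phi[1], exp(phi[2]), log = TRUE)               # - log q(z | x)
}
x <- 1.3; phi <- c(0, 0)
elbo_hat(phi, x, rnorm(1))                                  # one-sample estimate
mean(sapply(rnorm(1e4), function(e) elbo_hat(phi, x, e)))   # averages to the ELBO itself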
Marginal likelihood estimation
Since
log p_ϑ(x) = log E_{q_ϕ(z|x)}[p_ϑ(x, Z)/q_ϕ(Z|x)]
an importance sampling estimate of the log-marginal likelihood is
log p_ϑ(x) ≈ log { (1/T) Σ_{t=1}^{T} p_ϑ(x, z_t)/q_ϕ(z_t|x) },   z_t ∼ q_ϕ(z|x)
When T = 1,
log p_ϑ(x)   [ideal objective]   ≈   log { p_ϑ(x, z_1(x))/q_ϕ(z_1(x)|x) }   [ELBO objective]
the ELBO estimator.
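A quick R check of this importance-sampling identity on the same toy model as above (assumption: q_ϕ(z|x) taken to be the exact posterior N(x/2, 1/2), so the estimate is exact for any T and the T = 1 ELBO already matches the exact log-marginal dnorm(x, 0, sqrt(2), log = TRUE)).
log_marg_hat <- function(x, Tt) {
  z  <- rnorm(Tt, x / 2, sqrt(1 / 2))                           # z_t ~ q_phi(z | x)
  lw <- dnorm(x, z, 1, log = TRUE) + dnorm(z, 0, 1, log = TRUE) -
        dnorm(z, x / 2, sqrt(1 / 2), log = TRUE)                # log p(x, z_t) - log q(z_t | x)
  log(mean(exp(lw)))
}
x <- 1.3
dnorm(x, 0, sqrt(2), log = TRUE)   # exact log p(x)
log_marg_hat(x, 1)                 # T = 1: ELBO-type estimate
log_marg_hat(x, 100)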
Generative adversarial networks
1 Geyer’s 1994 logistic
2 Links with bridge sampling
3 Noise contrastive estimation
4 Generative models
5 Variational autoencoders (VAEs)
6 Generative adversarial networks (GANs)
Generative adversarial networks (GANs)
“Generative adversarial networks (GANs) provide an algorithmic framework for constructing generative models with several appealing properties:
– they do not require a likelihood function to be specified, only a generating procedure;
– they provide samples that are sharp and compelling;
– they allow us to harness our knowledge of building highly accurate neural network classifiers.”
[Mohamed & Lakshminarayanan, 2016]
Implicit generative models
Representation of random variables as
x = G_ϑ(z),   z ∼ µ(z)
where µ(·) is a reference distribution and G_ϑ a multi-layered and highly non-linear transform (as, e.g., in normalising flows)
I more general and flexible than “prescriptive” models, if implicit (black box)
I connected with pseudo-random variable generation
I calls for likelihood-free inference on ϑ
[Mohamed & Lakshminarayanan, 2016]
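A minimal R sketch of such an implicit generator, under assumptions not in the slides (a fixed two-layer tanh network G_ϑ and µ = N(0, I_2)): sampling x = G_ϑ(z) is immediate, but no closed-form density of x is available.
set.seed(7)
theta <- list(W1 = matrix(rnorm(8), 4, 2), b1 = rnorm(4),
              W2 = matrix(rnorm(8), 2, 4), b2 = rnorm(2))
G_theta <- function(z, theta) {
  h <- tanh(theta$W1 %*% z + theta$b1)          # hidden layer
  as.vector(theta$W2 %*% h + theta$b2)          # output in R^2
}
x <- replicate(1e3, G_theta(rnorm(2), theta))   # 2 x 1000 sample from the implicit model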
Intractable likelihoods
Cases when the likelihood function f(y|ϑ) is unavailable and when the completion step
f(y|ϑ) = ∫_Z f(y, z|ϑ) dz
is impossible or too costly because of the dimension of z
MCMC cannot be implemented!
The ABC method
Bayesian setting: target is π(ϑ) f(x|ϑ)
When the likelihood f(x|ϑ) is not in closed form, likelihood-free rejection technique:
ABC algorithm
For an observation y ∼ f(y|ϑ), under the prior π(ϑ), keep jointly simulating
ϑ′ ∼ π(ϑ),   z ∼ f(z|ϑ′),
until the auxiliary variable z is equal to the observed value, z = y.
[Tavaré et al., 1997]
Why does it work?!
The proof is trivial:
f(ϑ_i) ∝ Σ_{z∈D} π(ϑ_i) f(z|ϑ_i) I_y(z) ∝ π(ϑ_i) f(y|ϑ_i) ∝ π(ϑ_i|y).
[Accept–Reject 101]
ABC as A...pproximative
When y is a continuous random variable, the equality z = y is replaced with a tolerance condition,
ρ{η(z), η(y)} ≤ ε
where ρ is a distance and η(y) defines a (not necessarily sufficient) statistic
Output distributed from
π(ϑ) P_ϑ{ρ(y, z) < ε} ∝ π(ϑ | ρ(η(y), η(z)) < ε)
[Pritchard et al., 1999]
ABC posterior
The likelihood-free algorithm samples from the marginal in z of
π_ε(ϑ, z|y) = π(ϑ) f(z|ϑ) I_{A_{ε,y}}(z) / ∫_{A_{ε,y}×Θ} π(ϑ) f(z|ϑ) dz dϑ,
where A_{ε,y} = {z ∈ D | ρ(η(z), η(y)) < ε}.
The idea behind ABC is that the summary statistics coupled with a small tolerance should provide a good approximation of the posterior distribution:
π_ε(ϑ|y) = ∫ π_ε(ϑ, z|y) dz ≈ π(ϑ | η(y)).
MA example
Back to the MA(2) model
x_t = ε_t + Σ_{i=1}^{2} ϑ_i ε_{t−i}
Simple prior: uniform over the inverse [real and complex] roots in
Q(u) = 1 − Σ_{i=1}^{2} ϑ_i u^i
under identifiability conditions, i.e. a uniform prior over the identifiability zone
MA example (2)
ABC algorithm thus made of
1. picking a new value (ϑ_1, ϑ_2) in the triangle
2. generating an iid sequence (ε_t)_{−2<t≤T}
3. producing a simulated series (x′_t)_{1≤t≤T}
Distance: basic distance between the series
ρ((x′_t)_{1≤t≤T}, (x_t)_{1≤t≤T}) = Σ_{t=1}^{T} (x_t − x′_t)²
or distance between summary statistics like the two autocorrelations
τ_j = Σ_{t=j+1}^{T} x_t x_{t−j}
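A minimal R sketch of this ABC scheme, under assumptions not in the slides: T = 100, N(0, 1) noise, true value (ϑ_1, ϑ_2) = (0.6, 0.2), the two lagged summaries above as statistics, and acceptance of the 1% closest simulations out of 10⁴.
set.seed(8)
T <- 100; theta0 <- c(0.6, 0.2)
ma2 <- function(theta) {                       # simulate an MA(2) series of length T
  eps <- rnorm(T + 2)
  eps[3:(T + 2)] + theta[1] * eps[2:(T + 1)] + theta[2] * eps[1:T]
}
tau <- function(x)                             # lag-1 and lag-2 summaries
  c(sum(x[-1] * x[-T]), sum(x[-(1:2)] * x[-((T - 1):T)]))
s0 <- tau(ma2(theta0))                         # observed summaries
rtriangle <- function() {                      # uniform draw over the identifiability triangle
  repeat {
    th <- c(runif(1, -2, 2), runif(1, -1, 1))
    if (th[1] + th[2] > -1 && th[2] - th[1] > -1) return(th)
  }
}
N <- 1e4
sims <- t(replicate(N, { th <- rtriangle(); c(th, sum((tau(ma2(th)) - s0)^2)) }))
post <- sims[sims[, 3] <= quantile(sims[, 3], 0.01), 1:2]   # keep the 1% closest draws
colMeans(post)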
Comparison of distance impact
Evaluation of the tolerance on the ABC sample against both distances (ε = 100%, 10%, 1%, 0.1%) for an MA(2) model
[Figures: ABC samples of ϑ_1 and ϑ_2 under both distances]
Occurrence of simulation in Econometrics
Simulation-based techniques in Econometrics
I simulated method of moments
I method of simulated moments
I simulated pseudo-maximum-likelihood
I indirect inference
[Gouriéroux & Monfort, 1996]
Simulated method of moments
Given observations y^o_{1:n} from a model
y_t = r(y_{1:(t−1)}, ε_t, ϑ),   ε_t ∼ g(·)
simulate ε⋆_{1:n}, derive
y⋆_t(ϑ) = r(y_{1:(t−1)}, ε⋆_t, ϑ)
and estimate ϑ by
arg min_ϑ Σ_{t=1}^{n} (y^o_t − y⋆_t(ϑ))²
Simulated method of moments
Given observations y^o_{1:n} from a model
y_t = r(y_{1:(t−1)}, ε_t, ϑ),   ε_t ∼ g(·)
simulate ε⋆_{1:n}, derive
y⋆_t(ϑ) = r(y_{1:(t−1)}, ε⋆_t, ϑ)
and estimate ϑ by
arg min_ϑ { Σ_{t=1}^{n} y^o_t − Σ_{t=1}^{n} y⋆_t(ϑ) }²
Indirect inference
Minimise (in ϑ) the distance between estimators β̂ based on pseudo-models for genuine observations and for observations simulated under the true model and the parameter ϑ.
[Gouriéroux, Monfort & Renault, 1993; Smith, 1993; Gallant & Tauchen, 1996]
Indirect inference (PML vs. PSE)
Example of the pseudo-maximum-likelihood (PML)
β̂(y) = arg max_β Σ_t log f⋆(y_t | β, y_{1:(t−1)})
leading to
arg min_ϑ ||β̂(y^o) − β̂(y_1(ϑ), . . . , y_S(ϑ))||²
when
y^s(ϑ) ∼ f(y|ϑ),   s = 1, . . . , S
Indirect inference (PML vs. PSE)
Example of the pseudo-score-estimator (PSE)
β̂(y) = arg min_β { Σ_t ∂ log f⋆/∂β (y_t | β, y_{1:(t−1)}) }²
leading to
arg min_ϑ ||β̂(y^o) − β̂(y_1(ϑ), . . . , y_S(ϑ))||²
when
y^s(ϑ) ∼ f(y|ϑ),   s = 1, . . . , S
AR(2) vs. MA(1) example
True (MA) model
y_t = ε_t − ϑ ε_{t−1}
and [wrong!] auxiliary (AR) model
y_t = β_1 y_{t−1} + β_2 y_{t−2} + u_t
R code
x=eps=rnorm(250)
x[2:250]=x[2:250]-0.5*x[1:249] #MA(1) data with theta=0.5
simeps=rnorm(250)              #common noise for the simulated series
propeta=seq(-.99,.99,le=199)   #grid of candidate theta values
dist=rep(0,199)
bethat=as.vector(arima(x,c(2,0,0),include.mean=FALSE)$coef) #AR(2) fit on the data
for (t in 1:199)               #AR(2) fit on each simulated MA(1) series
  dist[t]=sum((as.vector(arima(c(simeps[1],simeps[2:250]-propeta[t]*
    simeps[1:249]),c(2,0,0),include.mean=FALSE)$coef)-bethat)^2)
AR(2) vs. MA(1) example
One sample: [figure: distance as a function of ϑ]
Many samples: [figure]
Bayesian synthetic likelihood
Approach contemporary (?) with ABC where the distribution of the summary statistic s(·) is replaced with a parametric family, e.g.
g(s|ϑ) = ϕ(s; µ(ϑ), Σ(ϑ))
when ϑ is the [true] parameter value behind the data
Normal parameters (µ(ϑ), Σ(ϑ)) unknown in closed form and evaluated by simulation, based on a Monte Carlo sample of z_i ∼ f(z|ϑ)
Outcome used as a substitute in posterior updating
[Wood, 2010; Drovandi et al., 2015; Price et al., 2018]
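A minimal R sketch of a synthetic log-likelihood evaluation, under assumptions not in the slides: data N(ϑ, 1) with n = 50, summaries s(·) = (mean, log variance), and M = 200 simulated replicates to estimate µ(ϑ) and Σ(ϑ); the Gaussian log-density is computed directly to avoid extra packages.
set.seed(9)
n <- 50; M <- 200
y <- rnorm(n, 1, 1); s_obs <- c(mean(y), log(var(y)))
synlik <- function(theta) {
  S <- t(replicate(M, { z <- rnorm(n, theta, 1); c(mean(z), log(var(z))) }))
  mu <- colMeans(S); Sig <- var(S); d <- s_obs - mu      # Monte Carlo mu(theta), Sigma(theta)
  -0.5 * (as.numeric(determinant(Sig, logarithm = TRUE)$modulus) +
          sum(d * solve(Sig, d)) + length(d) * log(2 * pi))
}
sapply(c(0, 0.5, 1, 1.5), synlik)    # synthetic log-likelihood along a grid of theta values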
Asymptotics of BSL
Based on three approximations
1. representation of the data information by the summary statistic information
2. Normal substitute for the summary distribution
3. Monte Carlo versions of the mean and variance
Existence of Bernstein–von Mises convergence under consistency of the selected covariance estimator
[Frazier et al., 2021]
Asymptotics of BSL
Assumptions
I Central Limit Theorem on S_n = s(x_{1:n})
I identifiability of the parameter ϑ based on S_n
I existence of some prior moment of Σ(ϑ)
I sub-Gaussian tail of the simulated summaries
I Monte Carlo effort in n^γ for γ > 0
Similarity with the ABC sufficient conditions, but BSL point estimators are generally asymptotically less efficient
[Frazier et al., 2018; Li & Fearnhead, 2018]
Asymptotics of ABC
For a sample y = y(n) and a tolerance ε = ε_n, when n → +∞, assuming a parametric model ϑ ∈ R^k, k fixed
I Concentration of the summary η(z): there exists b(ϑ) such that
η(z) − b(ϑ) = o_{P_ϑ}(1)
I Consistency:
Π_{ε_n}(‖ϑ − ϑ_0‖ ≤ δ | y) = 1 + o_p(1)
I Convergence rate: there exists δ_n = o(1) such that
Π_{ε_n}(‖ϑ − ϑ_0‖ ≤ δ_n | y) = 1 + o_p(1)
[Frazier et al., 2018]
Asymptotics of ABC
Under the assumptions
(A1) ∃ σ_n → +∞ such that
P_ϑ(σ_n^{−1} ‖η(z) − b(ϑ)‖ > u) ≤ c(ϑ) h(u),   lim_{u→+∞} h(u) = 0
(A2) Π(‖b(ϑ) − b(ϑ_0)‖ ≤ u) ≳ u^D,   u ≈ 0
posterior consistency and a posterior concentration rate λ_T that depends on the deviation control of d_2{η(z), b(ϑ)}
posterior concentration rate for b(ϑ) bounded from below by O(ε_T)
[Frazier et al., 2018]
Asymptotics of ABC
Under the assumptions
(A1) ∃ σ_n → +∞ such that
P_ϑ(σ_n^{−1} ‖η(z) − b(ϑ)‖ > u) ≤ c(ϑ) h(u),   lim_{u→+∞} h(u) = 0
(A2) Π(‖b(ϑ) − b(ϑ_0)‖ ≤ u) ≳ u^D,   u ≈ 0
then
Π_{ε_n}(‖b(ϑ) − b(ϑ_0)‖ ≲ ε_n + σ_n h^{−1}(ε_n^D) | y) = 1 + o_{p_0}(1)
If also ‖ϑ − ϑ_0‖ ≤ L ‖b(ϑ) − b(ϑ_0)‖^α locally and ϑ → b(ϑ) is 1-1, then
Π_{ε_n}(‖ϑ − ϑ_0‖ ≲ ε_n^α + σ_n^α (h^{−1}(ε_n^D))^α =: δ_n | y) = 1 + o_{p_0}(1)
[Frazier et al., 2018]
Further ABC assumptions
I (B1) Concentration of the summary η: Σ_n(ϑ) ∈ R^{k_1×k_1} is o(1) and
Σ_n(ϑ)^{−1} {η(z) − b(ϑ)} ⇒ N_{k_1}(0, Id),   (Σ_n(ϑ) Σ_n(ϑ_0)^{−1})_n = C^o
I (B2) b(ϑ) is C¹ and ‖ϑ − ϑ_0‖ ≲ ‖b(ϑ) − b(ϑ_0)‖
I (B3) dominated convergence and
lim_n P_ϑ(Σ_n(ϑ)^{−1}{η(z) − b(ϑ)} ∈ u + B(0, u_n)) / ∏_j u_n(j) → ϕ(u)
[Frazier et al., 2018]
ABC asymptotic regime
Set Σ_n(ϑ) = σ_n D(ϑ) for ϑ ≈ ϑ_0 and Z^o = Σ_n(ϑ_0)^{−1}(η(y) − b(ϑ_0)); then, under (B1) and (B2),
I when ε_n σ_n^{−1} → +∞
Π_{ε_n}[ε_n^{−1}(ϑ − ϑ_0) ∈ A | y] ⇒ U_{B_0}(A),   B_0 = {x ∈ R^k ; ‖b′(ϑ_0)^T x‖ ≤ 1}
I when ε_n σ_n^{−1} → c
Π_{ε_n}[Σ_n(ϑ_0)^{−1}(ϑ − ϑ_0) − Z^o ∈ A | y] ⇒ Q_c(A),   Q_c ≠ N
I when ε_n σ_n^{−1} → 0 and (B3) holds, set V_n = [b′(ϑ_0)]^T Σ_n(ϑ_0) b′(ϑ_0); then
Π_{ε_n}[V_n^{−1}(ϑ − ϑ_0) − Z̃^o ∈ A | y] ⇒ Φ(A)
[Frazier et al., 2018]
Conclusion on ABC consistency
I asymptotic description of ABC: different regimes depending on ε_n relative to σ_n
I no point in choosing ε_n arbitrarily small: just take ε_n = o(σ_n)
I no gain in iterative ABC
I results under weak conditions, by not studying g(η(z)|ϑ)
[Frazier et al., 2018]