[A]BCel : a presentation at ABC in Roma

Slides (updated) of my talk on Bayesian computation using empirical likelihood.

Presentation Transcript

• (Approximate) Bayesian Computation as a new empirical Bayes method. Christian P. Robert, Université Paris-Dauphine & CREST, Paris. Joint work with K.L. Mengersen and P. Pudlo. ABC in Roma, 30 May 2013.
• Meeting of potential interest?! MCMSki IV, to be held in Chamonix Mont Blanc, France, from Monday, Jan. 6 to Wed., Jan. 8, 2014. All aspects of MCMC++ theory, methodology and applications, and more! Website: http://www.pages.drexel.edu/~mwl25/mcmski/
• Outline: Intractable likelihoods; ABC methods; ABC as an inference machine; [A]BCel; Conclusion and perspectives.
• Intractable likelihood. Case of a well-defined statistical model where the likelihood function
    ℓ(θ|y) = f(y1, . . . , yn|θ)
  is (really!) not available in closed form, cannot (easily!) be either completed or demarginalised, and cannot be (at all!) estimated by an unbiased estimator. This prohibits direct implementation of a generic MCMC algorithm like Metropolis–Hastings.
• The ABC alternative. Case of a well-defined statistical model where the likelihood function ℓ(θ|y) = f(y1, . . . , yn|θ) is out of reach. Empirical approximations to the original B problem:
  - degrading the data precision down to a tolerance ε;
  - replacing the likelihood with a non-parametric approximation;
  - summarising/replacing the data with insufficient statistics.
• Different [levels of] worries about ABC. Impact on B inference:
  - a mere computational issue (that will eventually end up being solved by more powerful computers, &tc, even if too costly in the short term, as for regular Monte Carlo methods)? Not!
  - an inferential issue (opening opportunities for a new inference machine on complex/Big Data models, with legitimacy differing from the classical B approach);
  - a Bayesian conundrum (how closely related to the/a B approach? more than recycling B tools? true B with raw data? a new type of empirical Bayes inference?).
• Summary: reply to Lange et al. [ISR, 2013]. "Surprisingly, the confident prediction of the previous generation that Bayesian methods would ultimately supplant frequentist methods has given way to a realization that Markov chain Monte Carlo (MCMC) may be too slow to handle modern data sets. Size matters because large data sets stress computer storage and processing power to the breaking point. The most successful compromises between Bayesian and frequentist methods now rely on penalization and optimization."
  - sad reality constraint that size does matter;
  - focus on much smaller dimensions and on sparse summaries;
  - many (fast) ways of producing those summaries;
  - Bayesian inference can kick in almost automatically at this stage.
• Econom'ections. Similar exploration of simulation-based and approximation techniques in econometrics: simulated method of moments; method of simulated moments; simulated pseudo-maximum-likelihood; indirect inference [Gouriéroux & Monfort, 1996], even though the motivation there is partly-defined models rather than complex likelihoods.
• Indirect inference. Minimise [in θ] a distance between estimators β̂ based on a pseudo-model, computed for the genuine observations and for observations simulated under the true model and the parameter θ. [Gouriéroux, Monfort & Renault, 1993; Smith, 1993; Gallant & Tauchen, 1996]
• Indirect inference (PML vs. PSE). Example of the pseudo-maximum-likelihood (PML):
    β̂(y) = arg max_β Σ_t log f(y_t | β, y_{1:(t−1)})
  leading to
    arg min_θ ||β̂(y^o) − β̂(y^1(θ), . . . , y^S(θ))||²
  when y^s(θ) ∼ f(y|θ), s = 1, . . . , S.
• Indirect inference (PML vs. PSE). Example of the pseudo-score-estimator (PSE):
    β̂(y) = arg min_β Σ_t [∂ log f/∂β (y_t | β, y_{1:(t−1)})]²
  leading to the same
    arg min_θ ||β̂(y^o) − β̂(y^1(θ), . . . , y^S(θ))||².
• Consistent indirect inference. "...in order to get a unique solution the dimension of the auxiliary parameter β must be larger than or equal to the dimension of the initial parameter θ. If the problem is just identified the different methods become easier..." Consistency depends on the criterion and on the asymptotic identifiability of θ [Gouriéroux & Monfort, 1996, p. 66]. Which connection [if any] with the B perspective?
• Approximate Bayesian computation. Outline: Intractable likelihoods; ABC methods (Genesis of ABC, ABC basics, Advances and interpretations, ABC as knn); ABC as an inference machine; [A]BCel; Conclusion and perspectives.
• Genetic background of ABC [skip genetics]. ABC is a recent computational technique that only requires being able to sample from the likelihood f(·|θ). This technique stemmed from population genetics models, about 15 years ago, and population geneticists still contribute significantly to methodological developments of ABC. [Griffith & al., 1997; Tavaré & al., 1999]
• Demo-genetic inference. Each model is characterized by a set of parameters θ that cover historical (time of divergence, admixture time, ...), demographic (population sizes, admixture rates, migration rates, ...) and genetic (mutation rate, ...) factors. The goal is to estimate these parameters from a dataset of polymorphism (DNA sample) y observed at the present time. Problem: most of the time, we cannot calculate the likelihood of the polymorphism data f(y|θ)...
• Neutral model at a given microsatellite locus, in a closed panmictic population at equilibrium. Sample of 8 genes. Kingman's genealogy: when the time axis is normalized, T(k) ∼ Exp(k(k − 1)/2). Mutations according to the Simple stepwise Mutation Model (SMM): dates of the mutations ∼ Poisson process with intensity θ/2 over the branches; MRCA = 100; independent mutations: ±1 with probability 1/2. Observations: leaves of the tree; θ̂ = ?
• Much more interesting models... Several independent loci (independent gene genealogies and mutations); different populations linked by an evolutionary scenario made of divergences, admixtures, migrations between populations, etc.; larger sample size, usually between 50 and 100 genes. A typical evolutionary scenario: MRCA; POP 0, POP 1, POP 2; divergence times τ1, τ2.
• Intractable likelihood. Missing (too much missing!) data structure:
    f(y|θ) = ∫_G f(y|G, θ) f(G|θ) dG
  cannot be computed in a manageable way... [Stephens & Donnelly, 2000] The genealogies are considered as nuisance parameters. This modelling clearly differs from the phylogenetic perspective where the tree is the parameter of interest.
• A?B?C? A stands for approximate [wrong likelihood / pic?], B stands for Bayesian, C stands for computation [producing a parameter sample]. [Figure: posterior density estimates with effective sample sizes (ESS) over repeated runs.]
• How Bayesian is aBc? Could we turn the resolution into a Bayesian answer?
  - ideally so (not meaningful: requires an ∞-ly powerful computer);
  - approximation error unknown (w/o costly simulation);
  - true Bayes for the wrong model (formal and artificial);
  - true Bayes for a noisy model (much more convincing);
  - true Bayes for an estimated likelihood (back to econometrics?);
  - illuminating the tension between information and precision.
• ABC methodology. Bayesian setting: target is π(θ)f(x|θ). When the likelihood f(x|θ) is not in closed form, likelihood-free rejection technique. Foundation: For an observation y ∼ f(y|θ), under the prior π(θ), if one keeps jointly simulating
    θ' ∼ π(θ) , z ∼ f(z|θ') ,
  until the auxiliary variable z is equal to the observed value, z = y, then the selected θ' ∼ π(θ|y). [Rubin, 1984; Diggle & Gratton, 1984; Tavaré et al., 1997]
• A as A...pproximative. When y is a continuous random variable, strict equality z = y is replaced with a tolerance zone ρ(y, z) ≤ ε, where ρ is a distance. Output distributed from
    π(θ) P_θ{ρ(y, z) < ε} ∝ π(θ | ρ(y, z) < ε)
  [Pritchard et al., 1999]
• ABC algorithm. In most implementations, a further degree of A...pproximation:
  Algorithm 1: Likelihood-free rejection sampler
    for i = 1 to N do
      repeat
        generate θ' from the prior distribution π(·)
        generate z from the likelihood f(·|θ')
      until ρ{η(z), η(y)} ≤ ε
      set θ_i = θ'
    end for
  where η(y) defines a (not necessarily sufficient) statistic.
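A minimal Python sketch of this rejection sampler, on a toy Normal(θ, 1) model with the sample mean as summary statistic; the model, prior and tolerance below are illustrative assumptions, not taken from the slides.

```python
import numpy as np

rng = np.random.default_rng(0)

def eta(y):
    # summary statistic eta(y): here the sample mean (not necessarily sufficient in general)
    return y.mean()

def abc_rejection(y_obs, n_accept, eps, prior_sd=10.0):
    # keep simulating theta' ~ pi(.) and z ~ f(.|theta') until rho{eta(z), eta(y)} <= eps
    accepted = []
    while len(accepted) < n_accept:
        theta = rng.normal(0.0, prior_sd)              # theta' from the prior
        z = rng.normal(theta, 1.0, size=len(y_obs))    # z from the likelihood
        if abs(eta(z) - eta(y_obs)) <= eps:            # tolerance check
            accepted.append(theta)
    return np.array(accepted)

y_obs = rng.normal(1.5, 1.0, size=50)
sample = abc_rejection(y_obs, n_accept=500, eps=0.1)
print(sample.mean(), sample.std())
```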
• Output. The likelihood-free algorithm samples from the marginal in z of
    π_ε(θ, z|y) = π(θ) f(z|θ) I_{A_{ε,y}}(z) / ∫_{A_{ε,y}×Θ} π(θ) f(z|θ) dz dθ ,
  where A_{ε,y} = {z ∈ D | ρ(η(z), η(y)) < ε}. The idea behind ABC is that the summary statistics coupled with a small tolerance should provide a good approximation of the posterior distribution:
    π_ε(θ|y) = ∫ π_ε(θ, z|y) dz ≈ π(θ|y) .
  ...does it?!
• Output. In fact, the summary statistics coupled with a small tolerance provide at best a good approximation of the restricted posterior distribution:
    π_ε(θ|y) = ∫ π_ε(θ, z|y) dz ≈ π(θ|η(y)) .
  Not so good..! [skip convergence details! to non-parametrics]
• Convergence of ABC. What happens when ε → 0? For B ⊂ Θ, we have
    ∫_B [ ∫_{A_{ε,y}} f(z|θ) dz / ∫_{A_{ε,y}×Θ} π(θ) f(z|θ) dz dθ ] π(θ) dθ
      = ∫_{A_{ε,y}} [ ∫_B f(z|θ) π(θ) dθ / ∫_{A_{ε,y}×Θ} π(θ) f(z|θ) dz dθ ] dz
      = ∫_{A_{ε,y}} [ ∫_B f(z|θ) π(θ) dθ / m(z) ] [ m(z) / ∫_{A_{ε,y}×Θ} π(θ) f(z|θ) dz dθ ] dz
      = ∫_{A_{ε,y}} π(B|z) [ m(z) / ∫_{A_{ε,y}×Θ} π(θ) f(z|θ) dz dθ ] dz
  which indicates convergence for a continuous π(B|z).
• Convergence (do not attempt!). ...and the above does not apply to insufficient statistics: if η(y) is not a sufficient statistic, the best one can hope for is π(θ|η(y)), not π(θ|y). If η(y) is an ancillary statistic, the whole information contained in y is lost!, and the "best" one can "hope" for is π(θ|η(y)) = π(θ).
• Comments. The role of the distance is paramount (because ε ≠ 0); the scaling of the components of η(y) is also determinant; ε matters little if "small enough"; representative of the "curse of dimensionality"; small is beautiful!; the data as a whole may be paradoxically weakly informative for ABC.
• ABC (simul') advances. [how approximative is ABC? ABC as knn] Simulating from the prior is often poor in efficiency. Either modify the proposal distribution on θ to increase the density of x's within the vicinity of y... [Marjoram et al., 2003; Bortot et al., 2007; Sisson et al., 2007] ...or view the problem as conditional density estimation and develop techniques to allow for a larger ε [Beaumont et al., 2002] ...or even include ε in the inferential framework [ABCµ] [Ratmann et al., 2009]
• ABC-NP. Better usage of [prior] simulations by adjustment: instead of throwing away θ' such that ρ(η(z), η(y)) > ε, replace the θ's with locally regressed transforms
    θ* = θ − {η(z) − η(y)}^T β̂
  [Csilléry et al., TEE, 2010] where β̂ is obtained by [NP] weighted least squares regression on (η(z) − η(y)) with weights K_δ{ρ(η(z), η(y))}. [Beaumont et al., 2002, Genetics]
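A hedged sketch of this local regression adjustment, with an Epanechnikov kernel for the weights K_δ; the array shapes, kernel choice and function names are assumptions made for illustration.

```python
import numpy as np

def epanechnikov(d, delta):
    # kernel weights K_delta{rho(eta(z), eta(y))}, zero beyond the bandwidth delta
    u = d / delta
    return np.where(np.abs(u) <= 1.0, 0.75 * (1.0 - u**2), 0.0)

def abc_regression_adjust(thetas, etas_sim, eta_obs, delta):
    # thetas: (n,) simulated draws; etas_sim: (n, d) summaries of the simulated data
    diff = etas_sim - eta_obs                     # eta(z) - eta(y)
    dist = np.linalg.norm(diff, axis=1)           # rho(eta(z), eta(y))
    w = epanechnikov(dist, delta)
    keep = w > 0
    X = np.column_stack([np.ones(keep.sum()), diff[keep]])      # intercept + (eta(z) - eta(y))
    W = np.diag(w[keep])
    beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ thetas[keep])  # weighted least squares
    # theta* = theta - {eta(z) - eta(y)}^T beta_hat (the intercept term is dropped)
    return thetas[keep] - diff[keep] @ beta[1:], w[keep]
```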
• ABC-NP (regression). Also found in the subsequent literature, e.g. in Fearnhead-Prangle (2012): weight the simulation directly by
    K_δ{ρ(η(z(θ)), η(y))}   or   (1/S) Σ_{s=1}^S K_δ{ρ(η(z_s(θ)), η(y))}
  [consistent estimate of f(η|θ)]. Curse of dimensionality: poor estimate when d = dim(η) is large...
• ABC-NP (density estimation). Use of the kernel weights K_δ{ρ(η(z(θ)), η(y))} leads to the NP estimate of the posterior expectation
    Σ_i θ_i K_δ{ρ(η(z(θ_i)), η(y))} / Σ_i K_δ{ρ(η(z(θ_i)), η(y))}
  [Blum, JASA, 2010]
• ABC-NP (density estimation). The same weights lead to the NP estimate of the posterior conditional density
    Σ_i K̃_b(θ_i − θ) K_δ{ρ(η(z(θ_i)), η(y))} / Σ_i K_δ{ρ(η(z(θ_i)), η(y))}
  [Blum, JASA, 2010]
• ABC-NP (density estimations). Other versions incorporate regression adjustments:
    Σ_i K̃_b(θ*_i − θ) K_δ{ρ(η(z(θ_i)), η(y))} / Σ_i K_δ{ρ(η(z(θ_i)), η(y))}
  In all cases, the error is
    E[ĝ(θ|y)] − g(θ|y) = c b² + c δ² + O_P(b² + δ²) + O_P(1/nδ^d)
    var(ĝ(θ|y)) = c/(n b δ^d) (1 + o_P(1))
  [Blum, JASA, 2010; standard NP calculations]
• ABC as knn. [see and listen to Gérard's talk] [Biau et al., 2013, Annales de l'IHP] Practice of ABC: determine the tolerance as a quantile on the observed distances, say the 10% or 1% quantile,
    ε = ε_N = q_α(d_1, . . . , d_N)
  Interpretation of ε as a nonparametric bandwidth is only an approximation of the actual practice [Blum & François, 2010]. ABC is a k-nearest neighbour (knn) method with k_N = N ε_N [Loftsgaarden & Quesenberry, 1965]
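A short sketch of this quantile-based tolerance in practice: keep the roughly k_N = αN simulations whose distance to the observed summary is smallest (an illustrative helper, with assumed names and inputs).

```python
import numpy as np

def knn_abc_select(thetas, dists, alpha=0.01):
    # eps = eps_N = q_alpha(d_1, ..., d_N): the alpha-quantile of the observed distances
    eps_n = np.quantile(dists, alpha)
    keep = dists <= eps_n            # roughly the k_N = alpha * N nearest simulations
    return thetas[keep], eps_n
```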
• ABC consistency. Provided
    k_N / log log N → ∞ and k_N / N → 0
  as N → ∞, for almost all s_0 (with respect to the distribution of S), with probability 1,
    (1/k_N) Σ_{j=1}^{k_N} ϕ(θ_j) → E[ϕ(θ_j) | S = s_0]
  [Devroye, 1982] Biau et al. (2012) also recall pointwise and integrated mean square error consistency results on the corresponding kernel estimate of the conditional posterior distribution, under the constraints
    k_N → ∞, k_N / N → 0, h_N → 0 and h_N^p k_N → ∞.
• Rates of convergence. Further assumptions (on target and kernel) allow for precise (integrated mean square) convergence rates (as a power of the sample size N), derived from classical k-nearest neighbour regression, like
  - when m = 1, 2, 3, k_N ≈ N^{(p+4)/(p+8)} and rate N^{−4/(p+8)}
  - when m = 4, k_N ≈ N^{(p+4)/(p+8)} and rate N^{−4/(p+8)} log N
  - when m > 4, k_N ≈ N^{(p+4)/(m+p+4)} and rate N^{−4/(m+p+4)}
  [Biau et al., 2012, arXiv:1207.6461] Drag: only applies to sufficient summary statistics.
• ABC inference machine. Outline: Intractable likelihoods; ABC methods; ABC as an inference machine (error inc., exact BC and approximate targets, summary statistic); [A]BCel; Conclusion and perspectives.
• How Bayesian is aBc..? It may be a convergent method of inference (meaningful? sufficient? foreign?); the approximation error is unknown (w/o massive simulation); pragmatic/empirical B (there is no other solution!); many calibration issues (tolerance, distance, statistics); the NP side should be incorporated into the whole B picture; the approximation error should also be part of the B inference. [to ABCel]
• ABCµ. Idea: infer about the error as well as about the parameter. Use of a joint density
    f(θ, ε|y) ∝ ξ(ε|y, θ) × π_θ(θ) × π_ε(ε)
  where y is the data, and ξ(ε|y, θ) is the prior predictive density of ρ(η(z), η(y)) given θ and y when z ∼ f(z|θ). Warning! Replacement of ξ(ε|y, θ) with a non-parametric kernel approximation. [Ratmann, Andrieu, Wiuf and Richardson, 2009, PNAS]
• ABCµ details. Multidimensional distances ρ_k (k = 1, . . . , K) and errors ε_k = ρ_k(η_k(z), η_k(y)), with
    ε_k ∼ ξ_k(ε|y, θ) ≈ ξ̂_k(ε|y, θ) = (1/(B h_k)) Σ_b K[{ε_k − ρ_k(η_k(z_b), η_k(y))}/h_k]
  then used in replacing ξ(ε|y, θ) with min_k ξ̂_k(ε|y, θ). ABCµ involves the acceptance probability
    [π(θ', ε') / π(θ, ε)] [q(θ', θ) q(ε', ε) / q(θ, θ') q(ε, ε')] [min_k ξ̂_k(ε'|y, θ') / min_k ξ̂_k(ε|y, θ)]
• Wilkinson's exact BC (not exactly!). The ABC approximation error (i.e. non-zero tolerance) is replaced with exact simulation from a controlled approximation to the target, the convolution of the true posterior with a kernel function:
    π_ε(θ, z|y) = π(θ) f(z|θ) K_ε(y − z) / ∫ π(θ) f(z|θ) K_ε(y − z) dz dθ ,
  with K_ε a kernel parameterised by the bandwidth ε. [Wilkinson, 2013]
  Theorem: The ABC algorithm based on the assumption of a randomised observation y = ỹ + ξ, ξ ∼ K_ε, and an acceptance probability of K_ε(y − z)/M gives draws from the posterior distribution π(θ|y).
• How exact a BC? Pros: pseudo-data from the true model vs. observed data from a noisy model; the outcome is completely controlled; link with ABCµ when assuming y is observed with measurement error with density K_ε; relates to the theory of model approximation [Kennedy & O'Hagan, 2001]; leads to "noisy ABC": just do it!, i.e. perturb the data down to precision ε and proceed [Fearnhead & Prangle, 2012]. Cons: requires K_ε to be bounded by M; the true approximation error is never assessed.
• Noisy ABC for HMMs. Specific case of a hidden Markov model with summaries
    X_{t+1} ∼ Q_θ(X_t, ·),  Y_{t+1} ∼ g_θ(·|x_t)
  where only y⁰_{1:n} is observed. [Dean, Singh, Jasra, & Peters, 2011] Use of specific constraints, adapted to the Markov structure:
    y_1 ∈ B(y⁰_1, ε) × · · · × y_n ∈ B(y⁰_n, ε)
• Noisy ABC-MLE. Idea: modify instead the data from the start,
    (y⁰_1 + ζ_1, . . . , y⁰_n + ζ_n)
  [see Fearnhead-Prangle]; the noisy ABC-MLE estimate is
    arg max_θ P_θ( Y_1 ∈ B(y⁰_1 + ζ_1, ε), . . . , Y_n ∈ B(y⁰_n + ζ_n, ε) )
  [Dean, Singh, Jasra, & Peters, 2011]
• Consistent noisy ABC-MLE. Degrading the data improves the estimation performances: noisy ABC-MLE is asymptotically (in n) consistent; under further assumptions, the noisy ABC-MLE is asymptotically normal; increase in variance of order ε^{−2}; likely degradation in precision or computing time due to the lack of summary statistic [curse of dimensionality].
• Which summary? Fundamental difficulty of the choice of the summary statistic when there is no non-trivial sufficient statistic. Loss of statistical information balanced against gain in data roughening. Approximation error and information loss remain unknown. Choice of statistics induces choice of distance function (towards standardisation); may be imposed for external/practical reasons; may gather several non-B point estimates; can learn about an efficient combination.
• Which summary for model choice? Depending on the choice of η(·), the Bayes factor based on this insufficient statistic,
    B^η_{12}(y) = ∫ π_1(θ_1) f^η_1(η(y)|θ_1) dθ_1 / ∫ π_2(θ_2) f^η_2(η(y)|θ_2) dθ_2 ,
  is either consistent or inconsistent [X, Cornuet, Marin, & Pillai, 2012]. Consistency only depends on the range of E_i[η(y)] under both models. [Marin, Pillai, X, & Rousseau, 2012]
• Semi-automatic ABC. Fearnhead and Prangle (2012) study ABC and the selection of the summary statistic in close proximity to Wilkinson's proposal: ABC considered as an inferential method and calibrated as such; randomised (or 'noisy') version of the summary statistics, η̃(y) = η(y) + τε; derivation of a well-calibrated version of ABC, i.e. an algorithm that gives proper predictions for the distribution associated with this randomised summary statistic.
• Summary [of F&P/statistics]. Optimality of the posterior expectation E[θ|y] of the parameter of interest as summary statistic η(y)! [requires an iterative process]; use of the standard quadratic loss function (θ − θ_0)^T A (θ − θ_0); recent extension to model choice, optimality of the Bayes factor B_12(y). [F&P, ISBA 2012, Kyoto]
• Summary [about summaries]. Choice of summary statistics is paramount for ABC validation/performances. At best, ABC approximates π(· | η(y)) [unavoidable]. Model selection feasible with ABC [with caution!]. For estimation, consistency if {θ; µ(θ) = µ_0} = {θ_0} when E_θ[η(y)] = µ(θ). For testing, consistency if {µ_1(θ_1), θ_1 ∈ Θ_1} ∩ {µ_2(θ_2), θ_2 ∈ Θ_2} = ∅ [Marin et al., 2011]
• Empirical likelihood (EL). Outline: Intractable likelihoods; ABC methods; ABC as an inference machine; [A]BCel (ABC and EL, composite likelihood, illustrations); Conclusion and perspectives.
• Empirical likelihood (EL). Dataset x made of n independent replicates x = (x_1, . . . , x_n) of a rv X ∼ F. Generalized moment condition pseudo-model:
    E_F[h(X, φ)] = 0 ,
  where h is a known function, and φ an unknown parameter. Induced empirical likelihood
    L_el(φ|x) = max_p Π_{i=1}^n p_i
  for all p such that 0 ≤ p_i ≤ 1, Σ_i p_i = 1, Σ_i p_i h(x_i, φ) = 0. [Owen, 1988, Bio'ka, & Empirical Likelihood, 2001]
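A minimal sketch of how L_el(φ|x) can be computed for the scalar moment condition h(x, φ) = x − φ (i.e. φ is a mean), via the usual Lagrange-dual form p_i = 1/(n(1 + λ h_i)) with λ solving Σ_i h_i/(1 + λ h_i) = 0; this toy constraint and the helper's name are assumptions for illustration, not Owen's full algorithm.

```python
import numpy as np
from scipy.optimize import brentq

def log_el_mean(x, phi):
    # log empirical likelihood for the mean constraint; -inf if phi lies outside the data range
    h = x - phi
    n = len(h)
    if h.min() >= 0 or h.max() <= 0:
        return -np.inf
    # lambda must keep every 1 + lambda * h_i strictly positive
    lo = (-1.0 / h.max()) * (1.0 - 1e-9)
    hi = (-1.0 / h.min()) * (1.0 - 1e-9)
    score = lambda lam: np.sum(h / (1.0 + lam * h))
    lam = brentq(score, lo, hi)            # solves sum_i h_i / (1 + lam * h_i) = 0
    p = 1.0 / (n * (1.0 + lam * h))        # implied weights, which sum to 1
    return np.sum(np.log(p))

x = np.random.default_rng(1).normal(0.3, 1.0, size=50)
print(log_el_mean(x, 0.3), log_el_mean(x, 0.0))
```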
• EL motivation. Estimate of the data distribution under the moment conditions Σ_i p_i h(x_i, φ) = 0. Example: Normal N(θ, σ²) sample x_1, . . . , x_25, with constraints E_{θ,σ}[(x − θ, (x − θ)² − σ²)] = 0 and estimates θ̂ = −.45, σ̂ = 1.12. [Figure: empirical likelihood surface over (θ, σ).]
• Convergence of EL [Theorem 3.4]. Let X, Y_1, . . . , Y_n be independent rv's with common distribution F_0. For θ ∈ Θ, and the function h(X, θ) ∈ R^s, let θ_0 ∈ Θ be such that Var(h(Y_i, θ_0)) is finite and has rank q > 0. If θ_0 satisfies E(h(X, θ_0)) = 0, then
    −2 log{ L_el(θ_0|Y_1, . . . , Y_n) / n^{−n} } → χ²_(q)
  in distribution when n → ∞. [Owen, 2001]
• "...The interesting thing about Theorem 3.4 is what is not there. It includes no conditions to make θ̂ a good estimate of θ_0, nor even conditions to ensure a unique value for θ_0, nor even that any solution θ_0 exists. Theorem 3.4 applies in the just determined, over-determined, and under-determined cases. When we can prove that our estimating equations uniquely define θ_0, and provide a consistent estimator θ̂ of it, then confidence regions and tests follow almost automatically through Theorem 3.4." [Owen, 2001]
• Raw [A]BCel sampler. Naïve implementation: act as if EL was an exact likelihood [Lazar, 2003]
    for i = 1 → N do
      generate φ_i from the prior distribution π(·)
      set the weight ω_i = L_el(φ_i | x_obs)
    end for
    return (φ_i, ω_i), i = 1, . . . , N
  Output: a weighted sample of size N. Performance evaluated through the effective sample size
    ESS = 1 / Σ_{i=1}^N [ ω_i / Σ_{j=1}^N ω_j ]²
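A sketch of this raw sampler, reusing the hypothetical log_el_mean() helper from the previous block on the same toy mean-constraint example (the slides use composite-score constraints instead; prior and sample sizes are illustrative assumptions).

```python
import numpy as np

rng = np.random.default_rng(2)

def abcel_raw(x_obs, n_draws, prior_sd=10.0):
    phis = rng.normal(0.0, prior_sd, size=n_draws)                # phi_i ~ pi(.)
    logw = np.array([log_el_mean(x_obs, phi) for phi in phis])    # omega_i = L_el(phi_i | x_obs)
    w = np.exp(logw - logw.max())                                 # normalise on the log scale
    w /= w.sum()
    ess = 1.0 / np.sum(w**2)                                      # ESS = 1 / sum_i (w_i / sum_j w_j)^2
    return phis, w, ess

phis, w, ess = abcel_raw(rng.normal(0.3, 1.0, size=50), n_draws=2000)
print("weighted posterior mean:", np.sum(w * phis), "ESS:", ess)
```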
• AMIS implementation. More advanced algorithms can be adapted to EL, e.g. adaptive multiple importance sampling (AMIS) to speed up computations [Cornuet et al., 2012]
    for i = 1 to M do
      generate θ_{1,i} ∼ q_1(·)
      set ω_{1,i} = L_el(θ_{1,i}|y)
    end for
    for t = 2 to T_M do
      compute the weighted mean m_t and variance Σ_t
      for i = 1 to M do
        generate θ_{t,i} ∼ q_t(·) = t_3(·|m_t, Σ_t)
        set ω_{t,i} = π(θ_{t,i}) L_el(θ_{t,i}|y) / Σ_{s=1}^{t−1} q_s(θ_{t,i})
      end for
      for r = 1 to t − 1 and i = 1 to M do
        update the weights as ω_{r,i} = π(θ_{r,i}) L_el(θ_{r,i}|y) / Σ_{s=1}^{t−1} q_s(θ_{r,i})
      end for
    end for
• Moment condition in population genetics? EL does not require a fully defined and often complex (hence debatable) parametric model. Main difficulty: derive a constraint
    E_F[h(X, φ)] = 0
  on the parameters of interest φ when X is made of the genotypes of the sample of individuals at a given locus. E.g., in phylogeography, φ is composed of dates of divergence between populations, ratios of population sizes, mutation rates, etc. None of them are moments of the distribution of the allelic states of the sample ⇒ take an h made of pairwise composite scores (whose zero is the pairwise maximum likelihood estimator).
• Pairwise composite likelihood. The intra-locus pairwise likelihood
    ℓ_2(x_k|φ) = Π_{i<j} ℓ_2(x^i_k, x^j_k|φ)
  with x^1_k, . . . , x^n_k the allelic states of the gene sample at the k-th locus. The pairwise score function:
    ∇_φ log ℓ_2(x_k|φ) = Σ_{i<j} ∇_φ log ℓ_2(x^i_k, x^j_k|φ)
  Composite likelihoods are often much narrower than the original likelihood of the model; safe(r) with EL because we only use the position of its mode.
• Pairwise likelihood: a simple case. Assumptions: sample ⊂ closed, panmictic population at equilibrium; marker: microsatellite; mutation rate: θ/2. If x^i_k and x^j_k are two genes of the sample, ℓ_2(x^i_k, x^j_k|θ) depends only on δ = x^i_k − x^j_k:
    ℓ_2(δ|θ) = ρ(θ)^{|δ|} / √(1 + 2θ)   with   ρ(θ) = θ / (1 + θ + √(1 + 2θ))
  Pairwise score function:
    ∂_θ log ℓ_2(δ|θ) = −1/(1 + 2θ) + |δ| / (θ √(1 + 2θ))
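A small numerical sketch of this closed-form pairwise likelihood and score; the allele values below are made up for illustration.

```python
import numpy as np

def rho(theta):
    return theta / (1.0 + theta + np.sqrt(1.0 + 2.0 * theta))

def pairwise_lik(delta, theta):
    # l2(delta | theta) = rho(theta)^|delta| / sqrt(1 + 2 theta)
    return rho(theta) ** np.abs(delta) / np.sqrt(1.0 + 2.0 * theta)

def pairwise_score(delta, theta):
    # d/dtheta log l2(delta | theta) = -1/(1 + 2 theta) + |delta| / (theta sqrt(1 + 2 theta))
    return -1.0 / (1.0 + 2.0 * theta) + np.abs(delta) / (theta * np.sqrt(1.0 + 2.0 * theta))

# the EL constraint uses the sum of pairwise scores over all pairs at a locus
alleles = np.array([10, 12, 11, 10, 13])
i, j = np.triu_indices(len(alleles), k=1)
deltas = alleles[i] - alleles[j]
print(pairwise_lik(deltas, 1.0).prod(), pairwise_score(deltas, 1.0).sum())
```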
• Pairwise likelihood: 2 diverging populations. Assumptions: τ is the divergence date of pop. a and pop. b; θ/2 is the mutation rate. Let x^i_k and x^j_k be two genes coming resp. from pop. a and pop. b, and set δ = x^i_k − x^j_k. Then
    ℓ_2(δ|θ, τ) = [e^{−τθ} / √(1 + 2θ)] Σ_{k=−∞}^{+∞} ρ(θ)^{|k|} I_{δ−k}(τθ) ,
  where I_n(z) is the nth-order modified Bessel function of the first kind. A 2-dim score function:
    ∂_τ log ℓ_2(δ|θ, τ) = −θ + (θ/2) [ℓ_2(δ − 1|θ, τ) + ℓ_2(δ + 1|θ, τ)] / ℓ_2(δ|θ, τ)
    ∂_θ log ℓ_2(δ|θ, τ) = −τ − 1/(1 + 2θ) + q(δ|θ, τ)/ℓ_2(δ|θ, τ) + (τ/2) [ℓ_2(δ − 1|θ, τ) + ℓ_2(δ + 1|θ, τ)] / ℓ_2(δ|θ, τ)
  where
    q(δ|θ, τ) := [e^{−τθ} / √(1 + 2θ)] [ρ'(θ)/ρ(θ)] Σ_{k=−∞}^{∞} |k| ρ(θ)^{|k|} I_{δ−k}(τθ)
• Example: normal posterior. [A]BCel with two constraints. [Figure: posterior density estimates and ESS values; sample sizes of 25 (column 1), 50 (column 2) and 75 (column 3) observations.]
• Example: normal posterior. [A]BCel with three constraints. [Figure: posterior density estimates and ESS values; sample sizes of 25 (column 1), 50 (column 2) and 75 (column 3) observations.]
• Example: quantile g-and-k distributions. Distributions defined by a closed-form quantile function F^{−1}(p; θ) equal to
    A + B [1 + c (1 − exp(−g z(r))) / (1 + exp(−g z(r)))] (1 + z(r)²)^k z(r)
  where z(r) is the rth standard normal quantile. [Figure: true and fitted cdf functions with pointwise 95% credible (shaded) region centered on the fitted cdf, for a dataset of n = 100 observations from the g-and-k distribution, based on M = 10³ simulations of [A]BCel.]
• Example: quantile g-and-k distributions (continued). [Figure: variations of the posterior means of A, B, g, k for n = 100 (settings 1 to 5) and n = 500 observations (settings 6 to 10). Moment conditions in the [A]BCel algorithm are quantiles of order (0.2, 0.5, 0.8) (1 and 6), (0.2, 0.4, 0.6, 0.8) (2 and 7), (0.1, 0.4, 0.6, 0.9) (3 and 8), (0.1, 0.25, 0.5, 0.75, 0.9) (4 and 9), and (0.1, 0.2, . . . , 0.9) (5 and 10).]
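A sketch of the g-and-k quantile function quoted above, used to simulate from the model by inversion; the parameter values and the convention c = 0.8 are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm

def gk_quantile(p, A, B, g, k, c=0.8):
    # F^{-1}(p; theta) = A + B [1 + c (1 - exp(-g z)) / (1 + exp(-g z))] (1 + z^2)^k z
    z = norm.ppf(p)                           # z(r): standard normal quantile
    return A + B * (1 + c * (1 - np.exp(-g * z)) / (1 + np.exp(-g * z))) * (1 + z**2) ** k * z

# simulation by inversion: x = F^{-1}(U; theta) with U ~ Uniform(0, 1)
rng = np.random.default_rng(3)
x = gk_quantile(rng.uniform(size=100), A=3.0, B=1.0, g=2.0, k=0.5)
print(np.quantile(x, [0.2, 0.5, 0.8]))        # sample quantiles usable as moment conditions
```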
• Example: ARCH and GARCH models. Simple dynamic model, namely the ARCH(1) model:
    y_t = σ_t ε_t, ε_t ∼ N(0, 1), σ²_t = α_0 + α_1 y²_{t−1} ,
  with uniform prior over α_0, α_1 ≥ 0, α_0 + α_1 ≤ 1. Natural empirical likelihood representation based on the reconstituted ε_t's, defined as y_t/σ_t when the σ_t's are derived recursively. [Figure: ABC evaluations of posterior expectations (top, with true values in dashed lines) and variances (bottom) of (α_0, α_1) with 100 observations. 1-2: two choices of summaries (least squares estimates and mean of the log y_t's plus autocorrelations of order 2 and 3, respectively). 3-4: two sets of constraints (first 3 moments, and second moment plus autocorrelation of order 1 plus correlation with the previous observation, for the reconstituted ε_t's).]
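A sketch of the ARCH(1) recursion above and of the "reconstituted" residuals ε_t = y_t/σ_t; the starting value y_0 = 0 and the parameter values are assumptions made for illustration.

```python
import numpy as np

def simulate_arch1(alpha0, alpha1, n, y0=0.0, seed=4):
    # y_t = sigma_t * eps_t with sigma_t^2 = alpha0 + alpha1 * y_{t-1}^2
    rng = np.random.default_rng(seed)
    y, y_prev = np.empty(n), y0
    for t in range(n):
        sigma2 = alpha0 + alpha1 * y_prev**2
        y[t] = np.sqrt(sigma2) * rng.normal()
        y_prev = y[t]
    return y

def reconstituted_eps(y, alpha0, alpha1, y0=0.0):
    # eps_t = y_t / sigma_t, with the sigma_t's derived recursively from the data
    y_lag = np.concatenate(([y0], y[:-1]))
    sigma = np.sqrt(alpha0 + alpha1 * y_lag**2)
    return y / sigma

y = simulate_arch1(0.5, 0.3, n=100)
eps = reconstituted_eps(y, 0.5, 0.3)
print(eps.mean(), eps.var())                  # should be roughly 0 and 1
```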
• Example: ARCH and GARCH models. GARCH(1, 1) model formalized as the observation of y_t ∼ N(0, σ²_t) when
    σ²_t = α_0 + α_1 y²_{t−1} + β_1 σ²_{t−1}
  under the constraints α_0, α_1, β_1 > 0 and α_1 + β_1 < 1, that is, y_t = σ_t ε_t. Given the constraints, the natural prior is an exponential distribution on α_0 and a Dirichlet D_3(1, 1, 1) on (α_1, β_1, 1 − α_1 − β_1). For ABC, use of the maximum likelihood estimator as summary statistic, provided by the R function garch. Chan and Ling (2006) derived natural score constraints for the empirical likelihood, used in the [A]BCel algorithm. [Figure: evaluations of posterior expectations (with true values in dashed lines) of (α_0, α_1, β_1) with 250 observations. First row: optimal ABC algorithm (using the MLE as summary statistic and with the tolerance ε chosen as the 5% quantile of the distances); second row: [A]BCel algorithm based on the constraints derived by Chan and Ling (2006); third row: MLE derived by R garch initialized at the true value.]
• Example: superposition of gamma processes. Example of the superposition of N renewal processes with waiting times τ_ij (i = 1, . . . , M, j = 1, . . .) ∼ G(α, β), when N is unknown. Renewal processes:
    ζ_i1 = τ_i1, ζ_i2 = ζ_i1 + τ_i2, . . .
  with observations made of the first n values of the ζ_ij's,
    z_1 = min{ζ_ij}, z_2 = min{ζ_ij ; ζ_ij > z_1}, . . .
  ending with z_n = min{ζ_ij ; ζ_ij > z_{n−1}}. [Cox & Kartsonaki, B'ka, 2012]
• Example: superposition of gamma processes (ABC). Interesting testing ground for [A]BCel since the data (z_t) are neither iid nor Markov. Recovery of an iid structure by
    1. simulating a pseudo-dataset (z*_1, . . . , z*_n) as in regular ABC,
    2. deriving the sequence of indicators (ν_1, . . . , ν_n), as z*_1 = ζ_{ν_1 1}, z*_2 = ζ_{ν_2 j_2}, . . .
    3. exploiting that those indicators are distributed from the prior distribution on the ν_t's, leading to an iid sample of G(α, β) variables.
  Comparison of ABC and [A]BCel posteriors. [Figure: posterior densities of α, β and N; top: [A]BCel, bottom: regular ABC.]
• Pop'gen': a first experiment. Evolutionary scenario: MRCA, POP 0, POP 1, divergence time τ. Dataset: 50 genes per population, 100 microsatellite loci. Assumptions: N_e identical over all populations; φ = (log10 θ, log10 τ); uniform prior over (−1., 1.5) × (−1., 1.). Comparison of the original ABC with [A]BCel. [Figure: posterior densities of log(theta) and log(tau1), ESS = 7034; histogram = [A]BCel, curve = original ABC, vertical line = "true" parameter.]
• ABC vs. [A]BCel on 100 replicates of the 1st experiment. Accuracy:
                     log10 θ             log10 τ
                   ABC     BCel        ABC     BCel
    (1)           0.097   0.094       0.315   0.117
    (2)           0.071   0.059       0.272   0.077
    (3)           0.68    0.81        1.0     0.80
  (1) Root mean square error of the posterior mean
  (2) Median absolute deviation of the posterior median
  (3) Coverage of the credibility interval of probability 0.8
  Computation time on a recent 6-core computer (C++/OpenMP): ABC ≈ 4 hours, [A]BCel ≈ 2 minutes.
• Pop'gen': second experiment. Evolutionary scenario: MRCA, POP 0, POP 1, POP 2, divergence times τ1, τ2. Dataset: 50 genes per population, 100 microsatellite loci. Assumptions: N_e identical over all populations; φ = (log10 θ, log10 τ1, log10 τ2); non-informative uniform prior. Comparison of the original ABC with [A]BCel. [Figure: histogram = [A]BCel, curve = original ABC, vertical line = "true" parameter.]
• ABC vs. [A]BCel on 100 replicates of the 2nd experiment. Accuracy:
                     log10 θ             log10 τ1            log10 τ2
                   ABC     BCel        ABC     BCel        ABC     BCel
    (1)           .0059   .0794       .472    .483        29.3    4.76
    (2)           .048    .053        .32     .28         4.13    3.36
    (3)           .79     .76         .88     .76         .89     .79
  (1) Root mean square error of the posterior mean
  (2) Median absolute deviation of the posterior median
  (3) Coverage of the credibility interval of probability 0.8
  Computation time on a recent 6-core computer (C++/OpenMP): ABC ≈ 6 hours, [A]BCel ≈ 8 minutes.
• Comparison. On large datasets, [A]BCel gives more accurate results than ABC. ABC simplifies the dataset through summary statistics: due to the large dimension of x, the original ABC algorithm estimates π(θ | η(x_obs)), where η(x_obs) is some (non-linear) projection of the observed dataset onto a space with smaller dimension → some information is lost. [A]BCel simplifies the model through a generalized moment condition model → here, the moment condition model is based on the pairwise composite likelihood.
• Conclusion/perspectives. ABC is part of a wider picture to handle complex/Big Data models; many formats of empirical [likelihood] Bayes methods are available; lack of comparative tools and of an assessment of information loss.