slides of ABC talk at i-like workshop, Warwick, May 16
these are slightly updated slides compared with the ones of my Padova talk two months ago...

Presentation Transcript

  • (Approximate) Bayesian Computation as a new empirical Bayes method. Christian P. Robert, Université Paris-Dauphine & CREST, Paris. Joint work with K.L. Mengersen and P. Pudlo. i-like Workshop, Warwick, 15-17 May, 2013
  • Of interest for i-like users: MCMSki IV, to be held in Chamonix Mt Blanc, France, from Monday, Jan. 6 to Wed., Jan. 8, 2014. All aspects of MCMC++ theory, methodology and applications, and more! Website: http://www.pages.drexel.edu/∼mwl25/mcmski/
  • Outline: Unavailable likelihoods; ABC methods; ABC as an inference machine; [A]BCel; Conclusion and perspectives
  • Intractable likelihood. Case of a well-defined statistical model where the likelihood function ℓ(θ|y) = f(y1, . . . , yn|θ) is (really!) not available in closed form, cannot (easily!) be either completed or demarginalised, and cannot be estimated by an unbiased estimator ⇒ prohibits direct implementation of a generic MCMC algorithm like Metropolis–Hastings
  • The abc alternative. Empirical approximations to the original B problem: degrading the precision down to a tolerance ε; replacing the likelihood with a non-parametric approximation; summarising/replacing the data with insufficient statistics
  • Different worries about abc. Impact on B inference: a mere computational issue (that will eventually end up being solved by more powerful computers, &tc, even if too costly in the short term, as for regular Monte Carlo methods) Not!; an inferential issue (opening opportunities for a new inference machine on complex/Big Data models, with legitimacy differing from the classical B approach); a Bayesian conundrum (how closely related to the/a B approach? more than recycling B tools? true B with raw data? a new type of empirical Bayes inference?)
  • summary as an answer to Lange, Chi & Zhou [ISR, 2013]: “Surprisingly, the confident prediction of the previous generation that Bayesian methods would ultimately supplant frequentist methods has given way to a realization that Markov chain Monte Carlo (MCMC) may be too slow to handle modern data sets. Size matters because large data sets stress computer storage and processing power to the breaking point. The most successful compromises between Bayesian and frequentist methods now rely on penalization and optimization.” Sad reality constraint that size does matter; focus on much smaller dimensions and on sparse summaries; many (fast) ways of producing those summaries; Bayesian inference can kick in almost automatically at this stage
  • Econom’ections. Similar exploration of simulation-based and approximation techniques in Econometrics: simulated method of moments; method of simulated moments; simulated pseudo-maximum-likelihood; indirect inference [Gouriéroux & Monfort, 1996], even though the motivation is partly-defined models rather than complex likelihoods
  • Indirect inference. Minimise [in θ] a distance between estimators β̂ based on a pseudo-model for genuine observations and for observations simulated under the true model and the parameter θ. [Gouriéroux, Monfort & Renault, 1993; Smith, 1993; Gallant & Tauchen, 1996]
  • Indirect inference (PML vs. PSE). Example of the pseudo-maximum-likelihood (PML)
        β̂(y) = arg max_β Σ_t log f(y_t | β, y_1:(t−1))
    leading to
        arg min_θ ‖β̂(y^o) − β̂(y^1(θ), . . . , y^S(θ))‖²
    when y^s(θ) ∼ f(y|θ), s = 1, . . . , S
  • Indirect inference (PML vs. PSE). Example of the pseudo-score-estimator (PSE)
        β̂(y) = arg min_β ‖ Σ_t ∂ log f/∂β (y_t | β, y_1:(t−1)) ‖²
    leading to
        arg min_θ ‖β̂(y^o) − β̂(y^1(θ), . . . , y^S(θ))‖²
    when y^s(θ) ∼ f(y|θ), s = 1, . . . , S
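A minimal Python sketch of the indirect-inference recipe above, under stated assumptions: `aux_fit` (the auxiliary estimator β̂) and `simulator` are hypothetical user-supplied functions, the S simulated auxiliary estimates are averaged, and a derivative-free optimiser stands in for whatever minimisation routine one would actually use.

```python
import numpy as np
from scipy.optimize import minimize

def indirect_inference(y_obs, aux_fit, simulator, theta0, S=10, rng=None):
    """Choose theta minimising the squared distance between the auxiliary
    estimator beta_hat on the observed data and its average over S datasets
    simulated under theta (a sketch, not a specific published implementation)."""
    rng = np.random.default_rng(rng)
    beta_obs = aux_fit(y_obs)
    seeds = rng.integers(0, 2**32 - 1, size=S)   # common random numbers across theta values

    def objective(theta):
        betas = [aux_fit(simulator(theta, np.random.default_rng(s))) for s in seeds]
        return np.sum((beta_obs - np.mean(betas, axis=0)) ** 2)

    return minimize(objective, theta0, method="Nelder-Mead")
```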
  • Consistent indirect inference. “...in order to get a unique solution the dimension of the auxiliary parameter β must be larger than or equal to the dimension of the initial parameter θ. If the problem is just identified the different methods become easier...” Consistency depending on the criterion and on the asymptotic identifiability of θ. [Gouriéroux & Monfort, 1996, p. 66] Which connection [if any] with the B perspective?
  • Approximate Bayesian computation (outline): Unavailable likelihoods; ABC methods (Genesis of ABC, ABC basics, Advances and interpretations, ABC as knn); ABC as an inference machine; [A]BCel; Conclusion and perspectives
  • Genetic background of ABC [skip genetics]. ABC is a recent computational technique that only requires being able to sample from the likelihood f(·|θ). This technique stemmed from population genetics models, about 15 years ago, and population geneticists still contribute significantly to methodological developments of ABC. [Griffith & al., 1997; Tavaré & al., 1999]
  • Demo-genetic inference. Each model is characterized by a set of parameters θ that cover historical (time divergence, admixture time, ...), demographic (population sizes, admixture rates, migration rates, ...) and genetic (mutation rate, ...) factors. The goal is to estimate these parameters from a dataset of polymorphism (DNA sample) y observed at the present time. Problem: most of the time, we cannot calculate the likelihood of the polymorphism data f(y|θ)...
  • Neutral model at a given microsatellite locus, in a closed panmictic population at equilibrium. Sample of 8 genes. Kingman’s genealogy: when the time axis is normalized, T(k) ∼ Exp(k(k − 1)/2). Mutations according to the Simple stepwise Mutation Model (SMM): dates of the mutations ∼ Poisson process with intensity θ/2 over the branches; MRCA = 100; independent mutations: ±1 with pr. 1/2. Observations: leaves of the tree; θ̂ = ?
  • Much more interesting models... Several independent loci: independent gene genealogies and mutations. Different populations: linked by an evolutionary scenario made of divergences, admixtures, migrations between populations, etc. Larger sample size: usually between 50 and 100 genes. A typical evolutionary scenario [tree diagram]: MRCA ancestral to POP 0, POP 1 and POP 2, with divergence times τ1 and τ2
  • Intractable likelihood. Missing (too much missing!) data structure:
        f(y|θ) = ∫_G f(y|G, θ) f(G|θ) dG
    cannot be computed in a manageable way... [Stephens & Donnelly, 2000] The genealogies are considered as nuisance parameters. This modelling clearly differs from the phylogenetic perspective where the tree is the parameter of interest.
  • A?B?C? A stands for approximate [wrong likelihood / pic?]; B stands for Bayesian; C stands for computation [producing a parameter sample]. [Figure: panels of posterior density estimates of θ produced by ABC, with the effective sample size (ESS) reported for each panel.]
  • How Bayesian is aBc? Could we turn the resolution into a Bayesian answer? ideally so (not meaningful: requires an ∞-ly powerful computer); approximation error unknown (w/o costly simulation); true Bayes for wrong model (formal and artificial); true Bayes for noisy model (much more convincing); true Bayes for estimated likelihood (back to econometrics?); illuminating the tension between information and precision
  • ABC methodology. Bayesian setting: target is π(θ)f(x|θ). When the likelihood f(x|θ) is not in closed form, likelihood-free rejection technique. Foundation: for an observation y ∼ f(y|θ), under the prior π(θ), if one keeps jointly simulating θ′ ∼ π(θ), z ∼ f(z|θ′), until the auxiliary variable z is equal to the observed value, z = y, then the selected θ′ ∼ π(θ|y). [Rubin, 1984; Diggle & Gratton, 1984; Tavaré et al., 1997]
  • A as A...pproximative. When y is a continuous random variable, strict equality z = y is replaced with a tolerance zone ρ(y, z) ≤ ε, where ρ is a distance. Output distributed from
        π(θ) P_θ{ρ(y, z) < ε}, which is by definition proportional to π(θ | ρ(y, z) < ε)
    [Pritchard et al., 1999]
  • ABC algorithm. In most implementations, a further degree of A...pproximation.
    Algorithm 1: Likelihood-free rejection sampler
        for i = 1 to N do
            repeat
                generate θ′ from the prior distribution π(·)
                generate z from the likelihood f(·|θ′)
            until ρ{η(z), η(y)} ≤ ε
            set θ_i = θ′
        end for
    where η(y) defines a (not necessarily sufficient) statistic
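A direct Python sketch of Algorithm 1, assuming user-supplied `prior_sampler`, `simulator` and summary function `eta` (hypothetical placeholders) and a Euclidean distance for ρ.

```python
import numpy as np

def abc_rejection(y_obs, prior_sampler, simulator, eta, eps, N, rng=None):
    """Likelihood-free rejection sampler: keep proposing theta' from the prior
    and z from the model until the summaries of z fall within distance eps of
    the summaries of the observed data."""
    rng = np.random.default_rng(rng)
    s_obs = np.asarray(eta(y_obs))
    sample = []
    for _ in range(N):
        while True:
            theta = prior_sampler(rng)
            z = simulator(theta, rng)
            if np.linalg.norm(np.asarray(eta(z)) - s_obs) <= eps:  # rho{eta(z), eta(y)} <= eps
                sample.append(theta)
                break
    return np.asarray(sample)

# Toy usage (normal mean, known variance, sample mean as summary):
# rng0 = np.random.default_rng(0)
# post = abc_rejection(rng0.normal(1.0, 1.0, 30),
#                      prior_sampler=lambda r: r.normal(0.0, 5.0),
#                      simulator=lambda th, r: r.normal(th, 1.0, 30),
#                      eta=lambda x: np.array([x.mean()]),
#                      eps=0.05, N=500)
```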
  • Output. The likelihood-free algorithm samples from the marginal in z of
        π_ε(θ, z|y) = π(θ) f(z|θ) I_{A_ε,y}(z) / ∫_{A_ε,y × Θ} π(θ) f(z|θ) dz dθ,
    where A_ε,y = {z ∈ D | ρ(η(z), η(y)) < ε}. The idea behind ABC is that the summary statistics coupled with a small tolerance should provide a good approximation of the posterior distribution:
        π_ε(θ|y) = ∫ π_ε(θ, z|y) dz ≈ π(θ|y) ... does it?!
  • Output. The likelihood-free algorithm samples from the marginal in z of
        π_ε(θ, z|y) = π(θ) f(z|θ) I_{A_ε,y}(z) / ∫_{A_ε,y × Θ} π(θ) f(z|θ) dz dθ,
    where A_ε,y = {z ∈ D | ρ(η(z), η(y)) < ε}. The idea behind ABC is that the summary statistics coupled with a small tolerance should provide a good approximation of the restricted posterior distribution:
        π_ε(θ|y) = ∫ π_ε(θ, z|y) dz ≈ π(θ|η(y)).
    Not so good..! [skip convergence details!]
  • Convergence of ABC. What happens when ε → 0? For B ⊂ Θ, we have
        ∫_B [ ∫_{A_ε,y} f(z|θ) dz / ∫_{A_ε,y × Θ} π(θ) f(z|θ) dz dθ ] π(θ) dθ
        = ∫_{A_ε,y} [ ∫_B f(z|θ) π(θ) dθ / ∫_{A_ε,y × Θ} π(θ) f(z|θ) dz dθ ] dz
        = ∫_{A_ε,y} [ ∫_B f(z|θ) π(θ) dθ / m(z) ] [ m(z) / ∫_{A_ε,y × Θ} π(θ) f(z|θ) dz dθ ] dz
        = ∫_{A_ε,y} π(B|z) [ m(z) / ∫_{A_ε,y × Θ} π(θ) f(z|θ) dz dθ ] dz
    which indicates convergence for a continuous π(B|z).
  • Convergence (do not attempt!) ...and the above does not apply to insufficient statistics: if η(y) is not a sufficient statistic, the best one can hope for is π(θ|η(y)), not π(θ|y); if η(y) is an ancillary statistic, the whole information contained in y is lost! and the “best” one can “hope” for is π(θ|η(y)) = π(θ)
  • Comments. Role of the distance is paramount (because ε ≠ 0); scaling of the components of η(y) is also determinant; ε matters little if “small enough”; representative of the “curse of dimensionality”; small is beautiful!; the data as a whole may be paradoxically weakly informative for ABC
  • ABC (simul’) advances [how approximative is ABC? ABC as knn]. Simulating from the prior is often poor in efficiency. Either modify the proposal distribution on θ to increase the density of x’s within the vicinity of y... [Marjoram et al., 2003; Bortot et al., 2007; Sisson et al., 2007] ...or view the problem as conditional density estimation and develop techniques to allow for a larger ε [Beaumont et al., 2002] ...or even include ε in the inferential framework [ABCµ] [Ratmann et al., 2009]
  • ABC-NP. Better usage of [prior] simulations by adjustment: instead of throwing away θ′ such that ρ(η(z), η(y)) > ε, replace the θ’s with locally regressed transforms
        θ* = θ − {η(z) − η(y)}^T β̂   [Csilléry et al., TEE, 2010]
    where β̂ is obtained by [NP] weighted least squares regression on (η(z) − η(y)) with weights
        K_δ{ρ(η(z), η(y))}
    [Beaumont et al., 2002, Genetics]
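A sketch of this local regression adjustment in Python; the Epanechnikov kernel and the specific weighted-least-squares fit are illustrative assumptions rather than the exact published implementation.

```python
import numpy as np

def abc_local_linear_adjust(thetas, etas, eta_obs, delta):
    """Adjustment theta* = theta - {eta(z) - eta(y)}' beta_hat, with beta_hat from a
    weighted least squares fit on the summary discrepancies, using kernel weights
    K_delta{rho(eta(z), eta(y))} (Epanechnikov kernel chosen here as an example)."""
    thetas = np.asarray(thetas, dtype=float)
    if thetas.ndim == 1:
        thetas = thetas[:, None]                                 # (N, p)
    d = np.asarray(etas, dtype=float) - np.asarray(eta_obs, dtype=float)  # (N, q)
    u = np.linalg.norm(d, axis=1) / delta
    w = np.where(u < 1.0, 1.0 - u**2, 0.0)                       # zero weight outside tolerance
    keep = w > 0
    X = np.column_stack([np.ones(keep.sum()), d[keep]])          # intercept + discrepancies
    sw = np.sqrt(w[keep])[:, None]
    coef, *_ = np.linalg.lstsq(sw * X, sw * thetas[keep], rcond=None)  # weighted LS fit
    beta_hat = coef[1:]                                          # slopes; intercept dropped
    theta_star = thetas[keep] - d[keep] @ beta_hat               # locally regressed transforms
    return theta_star, w[keep]
```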
  • ABC-NP (regression). Also found in the subsequent literature, e.g. in Fearnhead-Prangle (2012): weight the simulation directly by
        K_δ{ρ(η(z(θ)), η(y))}   or   (1/S) Σ_{s=1}^S K_δ{ρ(η(z^s(θ)), η(y))}
    [consistent estimate of f(η|θ)]. Curse of dimensionality: poor estimate when d = dim(η) is large...
  • ABC-NP (density estimation). Use of the kernel weights K_δ{ρ(η(z(θ)), η(y))} leads to the NP estimate of the posterior expectation
        Σ_i θ_i K_δ{ρ(η(z(θ_i)), η(y))} / Σ_i K_δ{ρ(η(z(θ_i)), η(y))}
    [Blum, JASA, 2010]
  • ABC-NP (density estimation). Use of the kernel weights K_δ{ρ(η(z(θ)), η(y))} leads to the NP estimate of the posterior conditional density
        Σ_i K̃_b(θ_i − θ) K_δ{ρ(η(z(θ_i)), η(y))} / Σ_i K_δ{ρ(η(z(θ_i)), η(y))}
    [Blum, JASA, 2010]
  • ABC-NP (density estimations). Other versions incorporating regression adjustments
        Σ_i K̃_b(θ*_i − θ) K_δ{ρ(η(z(θ_i)), η(y))} / Σ_i K_δ{ρ(η(z(θ_i)), η(y))}
    In all cases, error
        E[ĝ(θ|y)] − g(θ|y) = c_b b² + c_δ δ² + O_P(b² + δ²) + O_P(1/nδ^d)
        var(ĝ(θ|y)) = (c / nbδ^d)(1 + o_P(1))
    [Blum, JASA, 2010; standard NP calculations]
  • ABC as knn [Biau et al., 2013, Annales de l’IHP]. Practice of ABC: determine the tolerance as a quantile on the observed distances, say the 10% or 1% quantile,
        ε = ε_N = q_α(d_1, . . . , d_N)
    Interpretation of ε as a nonparametric bandwidth is only an approximation of the actual practice [Blum & François, 2010]. ABC is a k-nearest neighbour (knn) method with k_N = N ε_N [Loftsgaarden & Quesenberry, 1965]
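Setting the tolerance as an empirical quantile of the simulated distances amounts to keeping the k_N nearest simulations; a short Python sketch, assuming the simulated parameters and their distances to the observed summaries are already available.

```python
import numpy as np

def abc_knn_select(thetas, dists, alpha=0.01):
    """Keep the simulations whose distance falls below the alpha-quantile of all
    observed distances, i.e. the k_N = ceil(alpha * N) nearest neighbours."""
    N = len(dists)
    k = max(1, int(np.ceil(alpha * N)))
    idx = np.argsort(dists)[:k]        # indices of the k nearest simulations
    eps = dists[idx[-1]]               # implied tolerance eps_N = q_alpha(d_1, ..., d_N)
    return np.asarray(thetas)[idx], eps
```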
  • ABC consistency. Provided
        k_N / log log N → ∞   and   k_N / N → 0
    as N → ∞, for almost all s_0 (with respect to the distribution of S), with probability 1,
        (1/k_N) Σ_{j=1}^{k_N} ϕ(θ_j) → E[ϕ(θ_j) | S = s_0]
    [Devroye, 1982]
    Biau et al. (2012) also recall pointwise and integrated mean square error consistency results on the corresponding kernel estimate of the conditional posterior distribution, under the constraints k_N → ∞, k_N/N → 0, h_N → 0 and h_N^p k_N → ∞
  • Rates of convergence. Further assumptions (on target and kernel) allow for precise (integrated mean square) convergence rates (as a power of the sample size N), derived from classical k-nearest neighbour regression, like
        when m = 1, 2, 3: k_N ≈ N^((p+4)/(p+8)) and rate N^(−4/(p+8))
        when m = 4: k_N ≈ N^((p+4)/(p+8)) and rate N^(−4/(p+8)) log N
        when m > 4: k_N ≈ N^((p+4)/(m+p+4)) and rate N^(−4/(m+p+4))
    [Biau et al., 2012, arXiv:1207.6461]
    Drag: only applies to sufficient summary statistics
  • ABC inference machine (outline): Unavailable likelihoods; ABC methods; ABC as an inference machine (Error inc., Exact BC and approximate targets, summary statistic); [A]BCel; Conclusion and perspectives
  • How Bayesian is aBc..? may be a convergent method of inference (meaningful? sufficient? foreign?); approximation error unknown (w/o massive simulation); pragmatic/empirical B (there is no other solution!); many calibration issues (tolerance, distance, statistics); the NP side should be incorporated into the whole B picture; the approximation error should also be part of the B inference [to ABCel]
  • ABCµ. Idea: infer about the error as well as about the parameter. Use of a joint density
        f(θ, ε|y) ∝ ξ(ε|y, θ) × π_θ(θ) × π_ε(ε)
    where y is the data, and ξ(ε|y, θ) is the prior predictive density of ρ(η(z), η(y)) given θ and y when z ∼ f(z|θ). Warning! Replacement of ξ(ε|y, θ) with a non-parametric kernel approximation. [Ratmann, Andrieu, Wiuf and Richardson, 2009, PNAS]
  • ABCµ details. Multidimensional distances ρ_k (k = 1, . . . , K) and errors ε_k = ρ_k(η_k(z), η_k(y)), with
        ε_k ∼ ξ_k(ε|y, θ) ≈ ξ̂_k(ε|y, θ) = (1 / B h_k) Σ_b K[{ε_k − ρ_k(η_k(z_b), η_k(y))} / h_k]
    then used in replacing ξ(ε|y, θ) with min_k ξ̂_k(ε|y, θ). ABCµ involves the acceptance probability
        [π(θ′, ε′) / π(θ, ε)] × [q(θ′, θ) q(ε′, ε) / q(θ, θ′) q(ε, ε′)] × [min_k ξ̂_k(ε′|y, θ′) / min_k ξ̂_k(ε|y, θ)]
  • Wilkinson’s exact BC (not exactly!). ABC approximation error (i.e. non-zero tolerance) replaced with exact simulation from a controlled approximation to the target, the convolution of the true posterior with a kernel function
        π_ε(θ, z|y) = π(θ) f(z|θ) K_ε(y − z) / ∫ π(θ) f(z|θ) K_ε(y − z) dz dθ,
    with K_ε a kernel parameterised by the bandwidth ε. [Wilkinson, 2013]
    Theorem: The ABC algorithm based on the assumption of a randomised observation y = ỹ + ξ, ξ ∼ K_ε, and an acceptance probability of
        K_ε(y − z) / M
    gives draws from the posterior distribution π(θ|y).
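A Python sketch of the accept/reject step implied by the theorem, with a Gaussian kernel K_ε chosen purely for illustration (so that M = K_ε(0) and the acceptance ratio simplifies); the sampler functions are hypothetical placeholders.

```python
import numpy as np

def exact_abc_gaussian_kernel(y_obs, prior_sampler, simulator, eps, N, rng=None):
    """Accept (theta, z) with probability K_eps(y - z)/M, here with a product
    Gaussian kernel of bandwidth eps and M = K_eps(0). The output targets the
    posterior under the assumption y = y_tilde + xi, xi ~ K_eps."""
    rng = np.random.default_rng(rng)
    y_obs = np.asarray(y_obs, dtype=float)
    accepted = []
    while len(accepted) < N:
        theta = prior_sampler(rng)
        z = np.asarray(simulator(theta, rng), dtype=float)
        log_ratio = -np.sum((y_obs - z) ** 2) / (2.0 * eps**2)   # log K_eps(y-z) - log M
        if np.log(rng.uniform()) < log_ratio:
            accepted.append(theta)
    return np.asarray(accepted)
```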
  • How exact a BC? Pros: pseudo-data from the true model and observed data from the noisy model; interesting perspective in that the outcome is completely controlled; link with ABCµ and assuming y is observed with a measurement error with density K_ε; relates to the theory of model approximation [Kennedy & O’Hagan, 2001]; leads to “noisy ABC”: perturb the data down to precision ε and proceed [Fearnhead & Prangle, 2012]. Cons: requires K_ε to be bounded by M; true approximation error never assessed
  • Noisy ABC for HMMs. Specific case of a hidden Markov model with summaries
        X_{t+1} ∼ Q_θ(X_t, ·),   Y_{t+1} ∼ g_θ(·|x_t)
    where only y⁰_{1:n} is observed. [Dean, Singh, Jasra, & Peters, 2011]
    Use of specific constraints, adapted to the Markov structure:
        y_1 ∈ B(y⁰_1, ε) × · · · × y_n ∈ B(y⁰_n, ε)
  • Noisy ABC-MLE. Idea: modify instead the data from the start,
        (y⁰_1 + ζ_1, . . . , y⁰_n + ζ_n)
    [see Fearnhead-Prangle]. Noisy ABC-MLE estimate:
        arg max_θ P_θ( Y_1 ∈ B(y⁰_1 + ζ_1, ε), . . . , Y_n ∈ B(y⁰_n + ζ_n, ε) )
    [Dean, Singh, Jasra, & Peters, 2011]
  • Consistent noisy ABC-MLE. Degrading the data improves the estimation performance: noisy ABC-MLE is asymptotically (in n) consistent; under further assumptions, the noisy ABC-MLE is asymptotically normal; increase in variance of order ε⁻²; likely degradation in precision or computing time due to the lack of summary statistic [curse of dimensionality]
  • Which summary? Fundamental difficulty of the choice of the summary statistic when there is no non-trivial sufficient statistic. Loss of statistical information balanced against gain in data roughening. Approximation error and information loss remain unknown. Choice of statistics induces a choice of distance function towards standardisation; may be imposed for external reasons; may gather several non-B point estimates; can learn about efficient combination
  • Which summary for model choice? Depending on the choice of η(·), the Bayes factor based on this insufficient statistic,
        B^η_12(y) = ∫ π_1(θ_1) f^η_1(η(y)|θ_1) dθ_1 / ∫ π_2(θ_2) f^η_2(η(y)|θ_2) dθ_2,
    is either consistent or inconsistent [X, Cornuet, Marin, & Pillai, 2012]. Consistency only depends on the range of E_i[η(y)] under both models [Marin, Pillai, X, & Rousseau, 2012]
  • Semi-automatic ABC. Fearnhead and Prangle (2012) study ABC and the selection of the summary statistic in close proximity to Wilkinson’s proposal: ABC considered as an inferential method and calibrated as such; randomised (or ‘noisy’) version of the summary statistics, η̃(y) = η(y) + τε; derivation of a well-calibrated version of ABC, i.e. an algorithm that gives proper predictions for the distribution associated with this randomised summary statistic
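The flavour of this construction can be sketched in Python as follows: regress the parameter on features of pilot simulations and use the fitted linear predictor (an estimate of E[θ|y]) as the summary statistic for a subsequent ABC run. The feature map `features` and the plain least-squares fit are assumptions made for the sketch, not the exact Fearnhead-Prangle implementation.

```python
import numpy as np

def semi_automatic_summary(pilot_thetas, pilot_sims, features):
    """Fit theta ~ b0 + b' f(z) on pilot simulations and return the fitted linear
    predictor as a summary statistic eta(.) for a second-stage ABC run."""
    F = np.array([features(z) for z in pilot_sims])           # (N, m) pilot features
    X = np.column_stack([np.ones(len(F)), F])
    coef, *_ = np.linalg.lstsq(X, np.asarray(pilot_thetas), rcond=None)  # least squares
    def eta(y):
        return np.atleast_1d(coef[0] + np.asarray(features(y)) @ coef[1:])
    return eta
```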
  • Summary [of F&P/statistics]: optimality of the posterior expectation E[θ|y] of the parameter of interest as summary statistic η(y)! [requires an iterative process]; use of the standard quadratic loss function (θ − θ_0)^T A (θ − θ_0); recent extension to model choice, optimality of the Bayes factor B_12(y) [F&P, ISBA 2012, Kyoto]
  • Summary [about summaries]. Choice of summary statistics is paramount for ABC validation/performance. At best, ABC approximates π(· | η(y)) [unavoidable]. Model selection feasible with ABC [with caution!]. For estimation, consistency if {θ; µ(θ) = µ_0} = {θ_0} when E_θ[η(y)] = µ(θ). For testing, consistency if {µ_1(θ_1), θ_1 ∈ Θ_1} ∩ {µ_2(θ_2), θ_2 ∈ Θ_2} = ∅ [Marin et al., 2011]
  • Empirical likelihood (EL) (outline): Unavailable likelihoods; ABC methods; ABC as an inference machine; [A]BCel (ABC and EL, Composite likelihood, Illustrations); Conclusion and perspectives
  • Empirical likelihood (EL). Dataset x made of n independent replicates x = (x_1, . . . , x_n) of some X ∼ F. Generalized moment condition model
        E_F[h(X, φ)] = 0,
    where h is a known function, and φ an unknown parameter. Corresponding empirical likelihood
        L_el(φ|x) = max_p ∏_{i=1}^n p_i
    for all p such that 0 ≤ p_i ≤ 1, Σ_i p_i = 1, Σ_i p_i h(x_i, φ) = 0.
    [Owen, 1988, Bio’ka, & Empirical Likelihood, 2001]
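Evaluating L_el(φ|x) for a given φ is a convex problem through the usual dual representation p_i = 1/{n(1 + λ'h(x_i, φ))}; a rough Python sketch follows, where the penalised objective and the derivative-free optimiser are convenience assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def log_empirical_likelihood(h_vals):
    """log L_el for moment conditions, given h_vals = h(x_i, phi), shape (n,) or (n, q).
    Dual form: p_i = 1/(n(1 + lambda' h_i)), with lambda minimising the convex
    function -sum_i log(1 + lambda' h_i) over its feasible region."""
    h_vals = np.asarray(h_vals, dtype=float)
    if h_vals.ndim == 1:
        h_vals = h_vals[:, None]
    n, q = h_vals.shape

    def neg_dual(lam):
        arg = 1.0 + h_vals @ lam
        if np.any(arg <= 1e-10):          # outside the feasible region: large penalty
            return 1e10
        return -np.sum(np.log(arg))

    lam = minimize(neg_dual, np.zeros(q), method="Nelder-Mead").x
    arg = 1.0 + h_vals @ lam
    if np.any(arg <= 0):                  # 0 not in the convex hull of the h_i's
        return -np.inf
    return -np.sum(np.log(n * arg))       # sum_i log p_i

# Toy usage, mean constraint h(x, phi) = x - phi:
# x = np.random.default_rng(0).normal(0.3, 1.0, 50)
# print(log_empirical_likelihood(x - 0.3), log_empirical_likelihood(x - 2.0))
```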
  • Convergence of EL [3.4]. Theorem 3.4: Let X, Y_1, . . . , Y_n be independent rv’s with common distribution F_0. For θ ∈ Θ, and the function h(X, θ) ∈ R^s, let θ_0 ∈ Θ be such that Var(h(Y_i, θ_0)) is finite and has rank q > 0. If θ_0 satisfies E(h(X, θ_0)) = 0, then
        −2 log{ L_el(θ_0|Y_1, . . . , Y_n) / n^(−n) } → χ²(q)
    in distribution when n → ∞. [Owen, 2001]
  • Convergence of EL [3.4]. “The interesting thing about Theorem 3.4 is what is not there. It includes no conditions to make θ̂ a good estimate of θ_0, nor even conditions to ensure a unique value for θ_0, nor even that any solution θ_0 exists. Theorem 3.4 applies in the just determined, over-determined, and under-determined cases. When we can prove that our estimating equations uniquely define θ_0, and provide a consistent estimator θ̂ of it, then confidence regions and tests follow almost automatically through Theorem 3.4.” [Owen, 2001]
  • Raw [A]BCel sampler. Act as if EL was an exact likelihood [Lazar, 2003]:
        for i = 1 → N do
            generate φ_i from the prior distribution π(·)
            set the weight ω_i = L_el(φ_i|x_obs)
        end for
        return (φ_i, ω_i), i = 1, . . . , N
    Output: weighted sample of size N. Performance evaluated through the effective sample size
        ESS = 1 / Σ_{i=1}^N { ω_i / Σ_{j=1}^N ω_j }²
    More advanced algorithms can be adapted to EL: e.g., the adaptive multiple importance sampling (AMIS) of Cornuet et al. to speed up computations [Cornuet et al., 2012]
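The raw [A]BCel sampler then only needs a prior sampler, the constraint function h, and an EL evaluator (the `log_empirical_likelihood` sketch above could serve); a minimal sketch, with weights normalised on the log scale and the ESS computed as on the slide.

```python
import numpy as np

def abcel_raw_sampler(x_obs, prior_sampler, h, log_el, N, rng=None):
    """Raw [A]BCel: draw phi_i from the prior, weight it by the empirical likelihood
    of the observed data, and report ESS = 1 / sum_i (omega_i / sum_j omega_j)^2."""
    rng = np.random.default_rng(rng)
    phis, logw = [], []
    for _ in range(N):
        phi = prior_sampler(rng)
        h_vals = np.array([h(x, phi) for x in x_obs])   # moment constraints at phi
        phis.append(phi)
        logw.append(log_el(h_vals))
    logw = np.asarray(logw)
    w = np.exp(logw - np.max(logw))                     # stabilised, unnormalised weights
    w /= w.sum()
    ess = 1.0 / np.sum(w**2)
    return np.asarray(phis), w, ess
```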
  • Moment condition in population genetics? EL does not require a fully defined and often complex (hence debatable) parametric model. Main difficulty: derive a constraint
        E_F[h(X, φ)] = 0
    on the parameters of interest φ when X is made of the genotypes of the sample of individuals at a given locus. E.g., in phylogeography, φ is composed of dates of divergence between populations, ratios of population sizes, mutation rates, etc. None of them are moments of the distribution of the allelic states of the sample. ⇒ h made of pairwise composite scores (whose zero is the pairwise maximum likelihood estimator)
  • Pairwise composite likelihood. The intra-locus pairwise likelihood
        ℓ_2(x_k|φ) = ∏_{i<j} ℓ_2(x^i_k, x^j_k|φ)
    with x^1_k, . . . , x^n_k the allelic states of the gene sample at the k-th locus. The pairwise score function
        ∇_φ log ℓ_2(x_k|φ) = Σ_{i<j} ∇_φ log ℓ_2(x^i_k, x^j_k|φ)
    Composite likelihoods are often much narrower than the original likelihood of the model. Safe(r) with EL because we only use the position of its mode.
  • Pairwise likelihood: a simple case. Assumptions: sample ⊂ closed, panmictic population at equilibrium; marker: microsatellite; mutation rate: θ/2. If x^i_k and x^j_k are two genes of the sample, ℓ_2(x^i_k, x^j_k|θ) depends only on δ = x^i_k − x^j_k:
        ℓ_2(δ|θ) = (1 / √(1 + 2θ)) ρ(θ)^|δ|   with   ρ(θ) = θ / (1 + θ + √(1 + 2θ))
    Pairwise score function:
        ∂_θ log ℓ_2(δ|θ) = −1/(1 + 2θ) + |δ| / (θ √(1 + 2θ))
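These closed-form expressions translate directly into Python; the sum of pairwise scores over all gene pairs at a locus is then a natural candidate for the constraint h(X, θ) fed into the EL step (a sketch, following the formulas above).

```python
import numpy as np

def pairwise_lik(delta, theta):
    """l2(delta|theta) = rho(theta)^|delta| / sqrt(1 + 2*theta),
    with rho(theta) = theta / (1 + theta + sqrt(1 + 2*theta))."""
    rho = theta / (1.0 + theta + np.sqrt(1.0 + 2.0 * theta))
    return rho ** np.abs(delta) / np.sqrt(1.0 + 2.0 * theta)

def pairwise_score(delta, theta):
    """d/dtheta log l2(delta|theta) = -1/(1+2*theta) + |delta|/(theta*sqrt(1+2*theta))."""
    return -1.0 / (1.0 + 2.0 * theta) + np.abs(delta) / (theta * np.sqrt(1.0 + 2.0 * theta))

def locus_score(alleles, theta):
    """Sum of pairwise scores over all gene pairs at one locus: its zero is the
    pairwise maximum likelihood estimator, hence a candidate moment constraint."""
    alleles = np.asarray(alleles)
    i, j = np.triu_indices(len(alleles), k=1)
    return np.sum(pairwise_score(alleles[i] - alleles[j], theta))
```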
  • Pairwise likelihood: 2 diverging populations. [Tree diagram: MRCA, POP a, POP b, divergence at τ.] Assumptions: τ: divergence date of pop. a and b; θ/2: mutation rate. Let x^i_k and x^j_k be two genes coming resp. from pop. a and b, and set δ = x^i_k − x^j_k. Then
        ℓ_2(δ|θ, τ) = (e^(−τθ) / √(1 + 2θ)) Σ_{k=−∞}^{+∞} ρ(θ)^|k| I_{δ−k}(τθ),
    where I_n(z) is the nth-order modified Bessel function of the first kind
  • Pairwise likelihood: 2 diverging populations (continued). A 2-dim score function:
        ∂_τ log ℓ_2(δ|θ, τ) = −θ + (θ/2) {ℓ_2(δ − 1|θ, τ) + ℓ_2(δ + 1|θ, τ)} / ℓ_2(δ|θ, τ)
        ∂_θ log ℓ_2(δ|θ, τ) = −τ − 1/(1 + 2θ) + q(δ|θ, τ)/ℓ_2(δ|θ, τ) + (τ/2) {ℓ_2(δ − 1|θ, τ) + ℓ_2(δ + 1|θ, τ)} / ℓ_2(δ|θ, τ)
    where
        q(δ|θ, τ) := (e^(−τθ) / √(1 + 2θ)) (ρ′(θ)/ρ(θ)) Σ_{k=−∞}^{∞} |k| ρ(θ)^|k| I_{δ−k}(τθ)
  • Example: normal posterior. [A]BCel with two constraints. [Figure: 15 panels of posterior density estimates of θ, with the effective sample size (ESS) reported for each panel, ranging roughly from 76 to 156.] Sample sizes are of 25 (column 1), 50 (column 2) and 75 (column 3) observations
  • Example: normal posterior. [A]BCel with three constraints. [Figure: 15 panels of posterior density estimates of θ, with the ESS reported for each panel, ranging roughly from 135 to 332.] Sample sizes are of 25 (column 1), 50 (column 2) and 75 (column 3) observations
  • Example: Superposition of gamma processes. Example of a superposition of N renewal processes with waiting times τ_ij (i = 1, . . . , M, j = 1, . . .) ∼ G(α, β), when N is unknown. Renewal processes:
        ζ_i1 = τ_i1, ζ_i2 = ζ_i1 + τ_i2, . . .
    with observations made of the first n values of the ζ_ij’s,
        z_1 = min{ζ_ij}, z_2 = min{ζ_ij; ζ_ij > z_1}, . . .
    ending with z_n = min{ζ_ij; ζ_ij > z_{n−1}}. [Cox & Kartsonaki, B’ka, 2012]
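Simulating the observed sequence z_1 < · · · < z_n from the superposition is straightforward; a short Python sketch, assuming G(α, β) means shape α and rate β.

```python
import numpy as np

def simulate_superposition(n_proc, alpha, beta, n, rng=None):
    """First n event times of a superposition of n_proc renewal processes with
    Gamma(alpha, beta) waiting times (shape alpha, rate beta assumed)."""
    rng = np.random.default_rng(rng)
    next_event = rng.gamma(alpha, 1.0 / beta, size=n_proc)   # next renewal of each process
    z = np.empty(n)
    for t in range(n):
        i = np.argmin(next_event)                  # process generating the next observation
        z[t] = next_event[i]
        next_event[i] += rng.gamma(alpha, 1.0 / beta)        # schedule its following renewal
    return z
```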
  • Example: Superposition of gamma processes (ABC). Interesting testing ground for [A]BCel since the data (z_t) are neither iid nor Markov. Recovery of an iid structure by
        1. simulating a pseudo-dataset (z*_1, . . . , z*_n) as in regular ABC,
        2. deriving the sequence of indicators (ν_1, . . . , ν_n), as z*_1 = ζ_{ν_1 1}, z*_2 = ζ_{ν_2 j_2}, . . .
        3. exploiting that those indicators are distributed from the prior distribution on the ν_t’s, leading to an iid sample of G(α, β) variables
    Comparison of ABC and [A]BCel posteriors. [Figure: posterior densities of α, β and N; top row: [A]BCel, bottom row: regular ABC.]
  • Pop’gen’: A first experiment. Evolutionary scenario: [tree diagram] MRCA ancestral to POP 0 and POP 1, with divergence time τ. Dataset: 50 genes per population, 100 microsat. loci. Assumptions: N_e identical over all populations; φ = (log10 θ, log10 τ); uniform prior over (−1., 1.5) × (−1., 1.). Comparison of the original ABC with [A]BCel. [Figure: posterior densities of log(theta) and log(tau1), ESS = 7034; histogram = [A]BCel, curve = original ABC, vertical line = “true” parameter.]
  • ABC vs. [A]BCel on 100 replicates of the 1st experiment. Accuracy:
                 log10 θ               log10 τ
                 ABC      [A]BCel      ABC      [A]BCel
        (1)      0.097    0.094        0.315    0.117
        (2)      0.071    0.059        0.272    0.077
        (3)      0.68     0.81         1.0      0.80
    (1) Root Mean Square Error of the posterior mean
    (2) Median Absolute Deviation of the posterior median
    (3) Coverage of the credibility interval of probability 0.8
    Computation time on a recent 6-core computer (C++/OpenMP): ABC ≈ 4 hours; [A]BCel ≈ 2 minutes
  • Pop’gen’: Second experiment. Evolutionary scenario: [tree diagram] MRCA ancestral to POP 0, POP 1 and POP 2, with divergence times τ1 and τ2. Dataset: 50 genes per population, 100 microsat. loci. Assumptions: N_e identical over all populations; φ = (log10 θ, log10 τ1, log10 τ2); non-informative uniform prior. Comparison of the original ABC with [A]BCel: histogram = [A]BCel, curve = original ABC, vertical line = “true” parameter
  • ABC vs. [A]BCel on 100 replicates of the 2nd experiment. Accuracy:
                 log10 θ               log10 τ1              log10 τ2
                 ABC      [A]BCel      ABC      [A]BCel      ABC      [A]BCel
        (1)      .0059    .0794        .472     .483         29.3     4.7
        (2)      .048     .053         .32      .28          4.13     3.3
        (3)      .79      .76          .88      .76          .89      .79
    (1) Root Mean Square Error of the posterior mean
    (2) Median Absolute Deviation of the posterior median
    (3) Coverage of the credibility interval of probability 0.8
    Computation time on a recent 6-core computer (C++/OpenMP): ABC ≈ 6 hours; [A]BCel ≈ 8 minutes
  • Comparison. On large datasets, [A]BCel gives more accurate results than ABC. ABC simplifies the dataset through summary statistics: due to the large dimension of x, the original ABC algorithm estimates π(θ | η(x_obs)), where η(x_obs) is some (non-linear) projection of the observed dataset on a space with smaller dimension → some information is lost. [A]BCel simplifies the model through a generalized moment condition model → here, the moment condition model is based on the pairwise composite likelihood
  • Conclusion/perspectives. abc is part of a wider picture to handle complex/Big Data models; many formats of empirical [likelihood] Bayes methods available; lack of comparative tools and of an assessment of information loss