slides of ABC talk at i-like workshop, Warwick, May 16

These are slightly updated slides compared with the ones of my Padova talk two months ago...

1. (Approximate) Bayesian Computation as a new empirical Bayes method
Christian P. Robert, Université Paris-Dauphine & CREST, Paris
Joint work with K.L. Mengersen and P. Pudlo
i-like Workshop, Warwick, 15-17 May, 2013
2. Of interest for i-like users
MCMSki IV, to be held in Chamonix Mt Blanc, France, from Monday, Jan. 6 to Wed., Jan. 8, 2014
All aspects of MCMC++ theory and methodology and applications, and more!
Website: http://www.pages.drexel.edu/~mwl25/mcmski/
3. Outline
Unavailable likelihoods
ABC methods
ABC as an inference machine
[A]BCel
Conclusion and perspectives
4. Intractable likelihood
Case of a well-defined statistical model where the likelihood function
   ℓ(θ|y) = f(y_1, . . . , y_n|θ)
is (really!) not available in closed form
- cannot (easily!) be either completed or demarginalised
- cannot be estimated by an unbiased estimator
⇒ Prohibits direct implementation of a generic MCMC algorithm like Metropolis–Hastings
6. The abc alternative
Empirical approximations to the original B problem:
- degrading the precision down to a tolerance ε
- replacing the likelihood with a non-parametric approximation
- summarising/replacing the data with insufficient statistics
9. Different worries about abc
Impact on B inference:
- a mere computational issue (that will eventually end up being solved by more powerful computers, &tc, even if too costly in the short term, as for regular Monte Carlo methods) Not!
- an inferential issue (opening opportunities for a new inference machine on complex/Big Data models, with legitimacy differing from the classical B approach)
- a Bayesian conundrum (how closely related to the/a B approach? more than recycling B tools? true B with raw data? a new type of empirical Bayes inference?)
12. Summary as an answer to Lange, Chi & Zhou [ISR, 2013]
"Surprisingly, the confident prediction of the previous generation that Bayesian methods would ultimately supplant frequentist methods has given way to a realization that Markov chain Monte Carlo (MCMC) may be too slow to handle modern data sets. Size matters because large data sets stress computer storage and processing power to the breaking point. The most successful compromises between Bayesian and frequentist methods now rely on penalization and optimization."
- sad reality constraint that size does matter
- focus on much smaller dimensions and on sparse summaries
- many (fast) ways of producing those summaries
- Bayesian inference can kick in almost automatically at this stage
14. Econom'ections
Similar exploration of simulation-based and approximation techniques in Econometrics:
- simulated method of moments
- method of simulated moments
- simulated pseudo-maximum-likelihood
- indirect inference
[Gouriéroux & Monfort, 1996]
even though the motivation is partly-defined models rather than complex likelihoods
16. Indirect inference
Minimise [in θ] a distance between estimators β̂ based on a pseudo-model, for genuine observations and for observations simulated under the true model and the parameter θ.
[Gouriéroux, Monfort, & Renault, 1993; Smith, 1993; Gallant & Tauchen, 1996]
17. Indirect inference (PML vs. PSE)
Example of the pseudo-maximum-likelihood (PML)
   β̂(y) = arg max_β Σ_t log f(y_t | β, y_{1:(t−1)})
leading to
   arg min_θ ||β̂(y^o) − β̂(y^1(θ), . . . , y^S(θ))||²
when
   y^s(θ) ∼ f(y|θ),  s = 1, . . . , S
18. Indirect inference (PML vs. PSE)
Example of the pseudo-score estimator (PSE)
   β̂(y) = arg min_β Σ_t [ ∂ log f / ∂β (y_t | β, y_{1:(t−1)}) ]²
leading to
   arg min_θ ||β̂(y^o) − β̂(y^1(θ), . . . , y^S(θ))||²
when
   y^s(θ) ∼ f(y|θ),  s = 1, . . . , S
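To make the indirect-inference recipe concrete, here is a minimal sketch (not from the slides): the true model is taken to be an MA(1) process, the auxiliary "estimator" β̂ is simply the lag-1 autocorrelation, and θ is chosen to match β̂ computed on simulated and observed series. Model, sample sizes and the grid search are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_ma1(theta, n=500):
    # illustrative true model: y_t = e_t + theta * e_{t-1}
    e = rng.normal(size=n + 1)
    return e[1:] + theta * e[:-1]

def beta_hat(y):
    # auxiliary estimator: lag-1 autocorrelation of the series
    y = y - y.mean()
    return (y[1:] * y[:-1]).sum() / (y * y).sum()

y_obs = simulate_ma1(0.6)        # pretend these are the genuine observations
b_obs = beta_hat(y_obs)

S = 20                           # simulated replicates per candidate theta
grid = np.linspace(-0.9, 0.9, 181)
crit = [(b_obs - np.mean([beta_hat(simulate_ma1(t)) for _ in range(S)])) ** 2
        for t in grid]
print("indirect-inference estimate of theta:", grid[int(np.argmin(crit))])
```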
19. Consistent indirect inference
"...in order to get a unique solution the dimension of the auxiliary parameter β must be larger than or equal to the dimension of the initial parameter θ. If the problem is just identified the different methods become easier..."
Consistency depending on the criterion and on the asymptotic identifiability of θ
[Gouriéroux & Monfort, 1996, p. 66]
Which connection [if any] with the B perspective?
22. Approximate Bayesian computation
Unavailable likelihoods
ABC methods
   Genesis of ABC
   ABC basics
   Advances and interpretations
   ABC as knn
ABC as an inference machine
[A]BCel
Conclusion and perspectives
23. Genetic background of ABC
[skip genetics]
ABC is a recent computational technique that only requires being able to sample from the likelihood f(·|θ)
This technique stemmed from population genetics models, about 15 years ago, and population geneticists still contribute significantly to methodological developments of ABC.
[Griffiths & al., 1997; Tavaré & al., 1999]
24. Demo-genetic inference
Each model is characterized by a set of parameters θ that cover historical (time divergence, admixture time, ...), demographic (population sizes, admixture rates, migration rates, ...) and genetic (mutation rate, ...) factors
The goal is to estimate these parameters from a dataset of polymorphism (DNA sample) y observed at the present time
Problem: most of the time, we cannot calculate the likelihood of the polymorphism data f(y|θ)...
26. Neutral model at a given microsatellite locus, in a closed panmictic population at equilibrium
Sample of 8 genes
Kingman's genealogy: when the time axis is normalized, T(k) ∼ Exp(k(k − 1)/2)
Mutations according to the Simple stepwise Mutation Model (SMM):
- dates of the mutations ∼ Poisson process with intensity θ/2 over the branches
- MRCA = 100
- independent mutations: ±1 with pr. 1/2
Observations: the leaves of the tree; θ̂ = ?
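A small simulation sketch of this neutral model, written in Python for illustration only: the parameter values and seed are arbitrary assumptions, and the code simply mirrors the description above (Kingman coalescence times T(k) ∼ Exp(k(k−1)/2), Poisson(θ/2 × branch length) mutations of ±1 propagated from the MRCA value 100 down to the leaves).

```python
import numpy as np

rng = np.random.default_rng(1)

def kingman_smm_sample(n=8, theta=2.0, mrca=100):
    """Simulate n microsatellite alleles at one locus: Kingman's genealogy,
    then SMM mutations (Poisson(theta/2 * branch length), each step +/-1 w.p. 1/2)."""
    active = list(range(n))                    # current lineages (tips are 0..n-1)
    birth = {i: 0.0 for i in range(n)}         # time at which each node starts
    children, next_id, t = {}, n, 0.0
    while len(active) > 1:
        k = len(active)
        t += rng.exponential(2.0 / (k * (k - 1)))          # T(k) ~ Exp(k(k-1)/2)
        i, j = rng.choice(k, size=2, replace=False)
        a, b = active[i], active[j]
        parent = next_id; next_id += 1
        children[parent], birth[parent] = (a, b), t
        active = [x for x in active if x not in (a, b)] + [parent]
    allele = {active[0]: mrca}                 # allele size at the MRCA
    stack = [active[0]]
    while stack:                               # propagate mutations down the tree
        node = stack.pop()
        for child in children.get(node, ()):
            branch = birth[node] - birth[child]
            steps = rng.integers(0, 2, size=rng.poisson(theta / 2 * branch)) * 2 - 1
            allele[child] = allele[node] + int(steps.sum())
            stack.append(child)
    return np.array([allele[i] for i in range(n)])         # observed leaves

print(kingman_smm_sample())
```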
30. Much more interesting models...
- several independent loci: independent gene genealogies and mutations
- different populations: linked by an evolutionary scenario made of divergences, admixtures, migrations between populations, etc.
- larger sample size: usually between 50 and 100 genes
A typical evolutionary scenario: MRCA, POP 0, POP 1, POP 2, divergence times τ1, τ2
31. Intractable likelihood
Missing (too much missing!) data structure:
   f(y|θ) = ∫_G f(y|G, θ) f(G|θ) dG
cannot be computed in a manageable way...
[Stephens & Donnelly, 2000]
The genealogies are considered as nuisance parameters.
This modelling clearly differs from the phylogenetic perspective, where the tree is the parameter of interest.
33. A?B?C?
A stands for approximate [wrong likelihood / pic?]
B stands for Bayesian
C stands for computation [producing a parameter sample]
[Figure: panels of posterior density estimates of θ over replications, each labelled with its effective sample size (ESS)]
36. How Bayesian is aBc?
Could we turn the resolution into a Bayesian answer?
- ideally so (not meaningful: requires an ∞-ly powerful computer)
- approximation error unknown (w/o costly simulation)
- true Bayes for the wrong model (formal and artificial)
- true Bayes for a noisy model (much more convincing)
- true Bayes for an estimated likelihood (back to econometrics?)
- illuminating the tension between information and precision
37. ABC methodology
Bayesian setting: target is π(θ) f(x|θ)
When the likelihood f(x|θ) is not in closed form, likelihood-free rejection technique:
Foundation
For an observation y ∼ f(y|θ), under the prior π(θ), if one keeps jointly simulating
   θ′ ∼ π(θ),  z ∼ f(z|θ′),
until the auxiliary variable z is equal to the observed value, z = y, then the selected
   θ′ ∼ π(θ|y)
[Rubin, 1984; Diggle & Gratton, 1984; Tavaré et al., 1997]
40. A as A...pproximative
When y is a continuous random variable, strict equality z = y is replaced with a tolerance zone
   ρ(y, z) ≤ ε
where ρ is a distance
Output distributed from
   π(θ) P_θ{ρ(y, z) < ε}  ∝(def)  π(θ | ρ(y, z) < ε)
[Pritchard et al., 1999]
42. ABC algorithm
In most implementations, a further degree of A...pproximation:
Algorithm 1: Likelihood-free rejection sampler
   for i = 1 to N do
      repeat
         generate θ′ from the prior distribution π(·)
         generate z from the likelihood f(·|θ′)
      until ρ{η(z), η(y)} ≤ ε
      set θ_i = θ′
   end for
where η(y) defines a (not necessarily sufficient) statistic
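The rejection sampler translates almost line for line into code. Below is a minimal Python sketch on a toy problem; the setting (y ∼ N(θ, 1), prior θ ∼ N(0, 10), summary η = sample mean, distance ρ = absolute difference) is an illustrative assumption, not the setting of the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

y_obs = rng.normal(0.5, 1.0, size=50)   # toy observed data
eta_obs = y_obs.mean()                   # summary statistic eta(y)

def abc_rejection(N=1000, eps=0.05, n=50):
    accepted = []
    for _ in range(N):
        while True:
            theta = rng.normal(0.0, np.sqrt(10.0))    # theta' ~ pi(.)
            z = rng.normal(theta, 1.0, size=n)        # z ~ f(.|theta')
            if abs(z.mean() - eta_obs) <= eps:        # rho{eta(z), eta(y)} <= eps
                accepted.append(theta)
                break
    return np.array(accepted)

sample = abc_rejection()
print(sample.mean(), sample.std())
```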
43. Output
The likelihood-free algorithm samples from the marginal in z of:
   π_ε(θ, z|y) = π(θ) f(z|θ) I_{A_{ε,y}}(z) / ∫_{A_{ε,y}×Θ} π(θ) f(z|θ) dz dθ,
where A_{ε,y} = {z ∈ D | ρ(η(z), η(y)) < ε}.
The idea behind ABC is that the summary statistics coupled with a small tolerance should provide a good approximation of the posterior distribution:
   π_ε(θ|y) = ∫ π_ε(θ, z|y) dz ≈ π(θ|y)
...does it?!
46. Output
The likelihood-free algorithm samples from the marginal in z of:
   π_ε(θ, z|y) = π(θ) f(z|θ) I_{A_{ε,y}}(z) / ∫_{A_{ε,y}×Θ} π(θ) f(z|θ) dz dθ,
where A_{ε,y} = {z ∈ D | ρ(η(z), η(y)) < ε}.
The idea behind ABC is that the summary statistics coupled with a small tolerance should provide a good approximation of the restricted posterior distribution:
   π_ε(θ|y) = ∫ π_ε(θ, z|y) dz ≈ π(θ|η(y)).
Not so good..!
[skip convergence details!]
47. Convergence of ABC
What happens when ε → 0?
For B ⊂ Θ, we have
   ∫_B [ ∫_{A_{ε,y}} f(z|θ) dz / ∫_{A_{ε,y}×Θ} π(θ) f(z|θ) dz dθ ] π(θ) dθ
   = ∫_{A_{ε,y}} [ ∫_B f(z|θ) π(θ) dθ / ∫_{A_{ε,y}×Θ} π(θ) f(z|θ) dz dθ ] dz
   = ∫_{A_{ε,y}} [ ∫_B f(z|θ) π(θ) dθ / m(z) ] · [ m(z) / ∫_{A_{ε,y}×Θ} π(θ) f(z|θ) dz dθ ] dz
   = ∫_{A_{ε,y}} π(B|z) · [ m(z) / ∫_{A_{ε,y}×Θ} π(θ) f(z|θ) dz dθ ] dz,
which indicates convergence for a continuous π(B|z).
49. Convergence (do not attempt!)
...and the above does not apply to insufficient statistics:
If η(y) is not a sufficient statistic, the best one can hope for is π(θ|η(y)), not π(θ|y)
If η(y) is an ancillary statistic, the whole information contained in y is lost! The "best" one can "hope" for is π(θ|η(y)) = π(θ)
52. Comments
- role of the distance paramount (because ε ≠ 0)
- scaling of the components of η(y) is also determinant
- ε matters little if "small enough"
- representative of the "curse of dimensionality"
- small is beautiful!
- the data as a whole may be paradoxically weakly informative for ABC
53. ABC (simul') advances
[how approximative is ABC? ABC as knn]
Simulating from the prior is often poor in efficiency
Either modify the proposal distribution on θ to increase the density of x's within the vicinity of y...
[Marjoram et al., 2003; Bortot et al., 2007; Sisson et al., 2007]
...or view the problem as conditional density estimation and develop techniques to allow for a larger ε
[Beaumont et al., 2002]
...or even include ε in the inferential framework [ABCµ]
[Ratmann et al., 2009]
57. ABC-NP
Better usage of [prior] simulations by adjustment: instead of throwing away θ such that ρ(η(z), η(y)) > ε, replace the θ's with locally regressed transforms
   θ* = θ − {η(z) − η(y)}ᵀ β̂
[Csilléry et al., TEE, 2010]
where β̂ is obtained by [NP] weighted least squares regression on (η(z) − η(y)) with weights
   K_δ{ρ(η(z), η(y))}
[Beaumont et al., 2002, Genetics]
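A sketch of this local-linear adjustment in Python, assuming a prior/ABC run has already produced parameter draws and their summaries (theta_sim, eta_sim, eta_obs are hypothetical array names); the Epanechnikov kernel used for K_δ is one common choice, not a prescription of the slides.

```python
import numpy as np

def local_linear_adjust(theta_sim, eta_sim, eta_obs, delta):
    """theta_sim: (N,) draws, eta_sim: (N, d) simulated summaries, eta_obs: (d,).
    Returns regression-adjusted draws theta* = theta - {eta(z) - eta(y)}' beta."""
    diff = eta_sim - eta_obs
    dist = np.sqrt((diff ** 2).sum(axis=1))
    w = np.where(dist < delta, 1.0 - (dist / delta) ** 2, 0.0)   # Epanechnikov K_delta weights
    keep = w > 0
    X = np.column_stack([np.ones(keep.sum()), diff[keep]])       # intercept + (eta(z) - eta(y))
    W = np.diag(w[keep])
    coef = np.linalg.solve(X.T @ W @ X, X.T @ W @ theta_sim[keep])
    theta_star = theta_sim[keep] - diff[keep] @ coef[1:]         # drop intercept, adjust draws
    return theta_star, w[keep]
```

The returned weights can be carried along when computing posterior summaries from the adjusted draws.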
58. ABC-NP (regression)
Also found in the subsequent literature, e.g. in Fearnhead-Prangle (2012): weight the simulation directly by
   K_δ{ρ(η(z(θ)), η(y))}
or
   (1/S) Σ_{s=1}^{S} K_δ{ρ(η(z^s(θ)), η(y))}
[consistent estimate of f(η|θ)]
Curse of dimensionality: poor estimate when d = dim(η) is large...
60. ABC-NP (density estimation)
Use of the kernel weights
   K_δ{ρ(η(z(θ)), η(y))}
leads to the NP estimate of the posterior expectation
   Σ_i θ_i K_δ{ρ(η(z(θ_i)), η(y))} / Σ_i K_δ{ρ(η(z(θ_i)), η(y))}
[Blum, JASA, 2010]
61. ABC-NP (density estimation)
Use of the kernel weights
   K_δ{ρ(η(z(θ)), η(y))}
leads to the NP estimate of the posterior conditional density
   Σ_i K̃_b(θ_i − θ) K_δ{ρ(η(z(θ_i)), η(y))} / Σ_i K_δ{ρ(η(z(θ_i)), η(y))}
[Blum, JASA, 2010]
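These two estimators are a few lines of numpy; the sketch below (same hypothetical theta_sim / eta_sim / eta_obs arrays as above, Epanechnikov kernel for K_δ and a Gaussian kernel for K̃_b, both illustrative assumptions) returns the kernel-weighted posterior mean and a weighted density estimate on a grid of θ values.

```python
import numpy as np

def np_posterior_estimates(theta_sim, eta_sim, eta_obs, delta, b, theta_grid):
    dist = np.sqrt(((eta_sim - eta_obs) ** 2).sum(axis=1))
    w = np.where(dist < delta, 1.0 - (dist / delta) ** 2, 0.0)      # K_delta weights
    post_mean = (w * theta_sim).sum() / w.sum()                     # NP estimate of E[theta | y]
    # weighted Gaussian kernel density estimate of the posterior on theta_grid
    kb = np.exp(-0.5 * ((theta_grid[:, None] - theta_sim[None, :]) / b) ** 2) \
         / (b * np.sqrt(2 * np.pi))
    post_dens = (kb * w).sum(axis=1) / w.sum()
    return post_mean, post_dens
```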
62. ABC-NP (density estimations)
Other versions incorporating regression adjustments:
   Σ_i K̃_b(θ*_i − θ) K_δ{ρ(η(z(θ_i)), η(y))} / Σ_i K_δ{ρ(η(z(θ_i)), η(y))}
In all cases, the error is
   E[ĝ(θ|y)] − g(θ|y) = c b² + c δ² + O_P(b² + δ²) + O_P(1/(n δ^d))
   var(ĝ(θ|y)) = (c / (n b δ^d)) (1 + o_P(1))
[Blum, JASA, 2010; standard NP calculations]
65. ABC as knn
[Biau et al., 2013, Annales de l'IHP]
Practice of ABC: determine the tolerance as a quantile of the observed distances, say the 10% or 1% quantile,
   ε = ε_N = q_α(d_1, . . . , d_N)
Interpretation of ε as a nonparametric bandwidth is only an approximation of the actual practice
[Blum & François, 2010]
ABC is a k-nearest neighbour (knn) method with k_N = N ε_N
[Loftsgaarden & Quesenberry, 1965]
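In code, the quantile-calibrated tolerance is one line; this sketch (again with hypothetical theta_sim / eta_sim / eta_obs arrays) keeps the k_N nearest simulations, with k_N set through the quantile level α.

```python
import numpy as np

def abc_knn(theta_sim, eta_sim, eta_obs, alpha=0.01):
    # tolerance = alpha-quantile of the observed distances, i.e. keep the
    # roughly alpha * N nearest simulations (the knn reading of ABC practice)
    dist = np.sqrt(((eta_sim - eta_obs) ** 2).sum(axis=1))
    eps = np.quantile(dist, alpha)
    keep = dist <= eps
    return theta_sim[keep], eps
```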
68. ABC consistency
Provided
   k_N / log log N → ∞ and k_N / N → 0
as N → ∞, for almost all s_0 (with respect to the distribution of S), with probability 1,
   (1/k_N) Σ_{j=1}^{k_N} ϕ(θ_j) → E[ϕ(θ_j) | S = s_0]
[Devroye, 1982]
Biau et al. (2012) also recall pointwise and integrated mean square error consistency results on the corresponding kernel estimate of the conditional posterior distribution, under the constraints
   k_N → ∞,  k_N / N → 0,  h_N → 0  and  h_N^p k_N → ∞
70. Rates of convergence
Further assumptions (on the target and kernel) allow for precise (integrated mean square) convergence rates (as a power of the sample size N), derived from classical k-nearest neighbour regression, like
- when m = 1, 2, 3, k_N ≈ N^{(p+4)/(p+8)} and rate N^{−4/(p+8)}
- when m = 4, k_N ≈ N^{(p+4)/(p+8)} and rate N^{−4/(p+8)} log N
- when m > 4, k_N ≈ N^{(p+4)/(m+p+4)} and rate N^{−4/(m+p+4)}
[Biau et al., 2012, arXiv:1207.6461]
Drag: only applies to sufficient summary statistics
72. ABC inference machine
Unavailable likelihoods
ABC methods
ABC as an inference machine
   Error inc.
   Exact BC and approximate targets
   summary statistic
[A]BCel
Conclusion and perspectives
73. How Bayesian is aBc..?
- may be a convergent method of inference (meaningful? sufficient? foreign?)
- approximation error unknown (w/o massive simulation)
- pragmatic/empirical B (there is no other solution!)
- many calibration issues (tolerance, distance, statistics)
- the NP side should be incorporated into the whole B picture
- the approximation error should also be part of the B inference
[to ABCel]
74. ABCµ
Idea: infer about the error as well as about the parameter. Use of a joint density
   f(θ, ε|y) ∝ ξ(ε|y, θ) × π_θ(θ) × π_ε(ε)
where y is the data, and ξ(ε|y, θ) is the prior predictive density of ρ(η(z), η(y)) given θ and y when z ∼ f(z|θ)
Warning! Replacement of ξ(ε|y, θ) with a non-parametric kernel approximation.
[Ratmann, Andrieu, Wiuf and Richardson, 2009, PNAS]
77. ABCµ details
Multidimensional distances ρ_k (k = 1, . . . , K) and errors ε_k = ρ_k(η_k(z), η_k(y)), with
   ε_k ∼ ξ_k(ε|y, θ) ≈ ξ̂_k(ε|y, θ) = (1/(B h_k)) Σ_b K[{ε_k − ρ_k(η_k(z_b), η_k(y))}/h_k]
then used in replacing ξ(ε|y, θ) with min_k ξ̂_k(ε|y, θ)
ABCµ involves the acceptance probability
   [π(θ′, ε′) / π(θ, ε)] × [q(θ′, θ) q(ε′, ε) / (q(θ, θ′) q(ε, ε′))] × [min_k ξ̂_k(ε′|y, θ′) / min_k ξ̂_k(ε|y, θ)]
79. Wilkinson's exact BC (not exactly!)
ABC approximation error (i.e. non-zero tolerance) replaced with exact simulation from a controlled approximation to the target, a convolution of the true posterior with a kernel function
   π_ε(θ, z|y) = π(θ) f(z|θ) K_ε(y − z) / ∫ π(θ) f(z|θ) K_ε(y − z) dz dθ,
with K_ε a kernel parameterised by bandwidth ε.
[Wilkinson, 2013]
Theorem: The ABC algorithm based on the assumption of a randomised observation y = ỹ + ξ, ξ ∼ K_ε, and an acceptance probability of
   K_ε(y − z)/M
gives draws from the posterior distribution π(θ|y).
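A minimal sketch of the corresponding accept/reject step, for a one-dimensional summary and a Gaussian K_ε (both illustrative assumptions; prior_draw and simulate are user-supplied hypothetical callables): each proposed pair (θ, z) is accepted with probability K_ε(y − z)/M, M being the maximum of the kernel.

```python
import numpy as np

rng = np.random.default_rng(2)

def wilkinson_abc(y_obs, prior_draw, simulate, eps, N=1000):
    """Accept (theta, z) with probability K_eps(y_obs - z) / M for a N(0, eps^2)
    kernel; draws then target the posterior under y = y_tilde + xi, xi ~ K_eps."""
    M = 1.0 / (eps * np.sqrt(2.0 * np.pi))           # bound on the Gaussian kernel density
    out = []
    while len(out) < N:
        theta = prior_draw()
        z = simulate(theta)                           # scalar summary of the pseudo-data
        k = np.exp(-0.5 * ((y_obs - z) / eps) ** 2) / (eps * np.sqrt(2.0 * np.pi))
        if rng.uniform() < k / M:
            out.append(theta)
    return np.array(out)

# illustrative toy usage: summary = mean of 50 N(theta, 1) draws, prior theta ~ N(0, 10)
draws = wilkinson_abc(0.47, lambda: rng.normal(0.0, np.sqrt(10.0)),
                      lambda t: rng.normal(t, 1.0 / np.sqrt(50)), eps=0.1, N=200)
print(draws.mean())
```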
81. How exact a BC?
Pros
- pseudo-data from the true model and observed data from the noisy model
- interesting perspective in that the outcome is completely controlled
- link with ABCµ and assuming y is observed with a measurement error with density K_ε
- relates to the theory of model approximation [Kennedy & O'Hagan, 2001]
- leads to "noisy ABC": perturb the data down to precision ε and proceed [Fearnhead & Prangle, 2012]
Cons
- requires K_ε to be bounded by M
- true approximation error never assessed
83. Noisy ABC for HMMs
Specific case of a hidden Markov model [summaries]
   X_{t+1} ∼ Q_θ(X_t, ·)
   Y_{t+1} ∼ g_θ(·|x_t)
where only y⁰_{1:n} is observed.
[Dean, Singh, Jasra, & Peters, 2011]
Use of specific constraints, adapted to the Markov structure:
   y_1 ∈ B(y⁰_1, ε) × · · · × y_n ∈ B(y⁰_n, ε)
85. Noisy ABC-MLE
Idea: modify instead the data from the start
   (y⁰_1 + ζ_1, . . . , y⁰_n + ζ_n)
[see Fearnhead-Prangle]
noisy ABC-MLE estimate
   arg max_θ P_θ( Y_1 ∈ B(y⁰_1 + ζ_1, ε), . . . , Y_n ∈ B(y⁰_n + ζ_n, ε) )
[Dean, Singh, Jasra, & Peters, 2011]
86. Consistent noisy ABC-MLE
Degrading the data improves the estimation performances:
- noisy ABC-MLE is asymptotically (in n) consistent
- under further assumptions, the noisy ABC-MLE is asymptotically normal
- increase in variance of order ε^{−2}
- likely degradation in precision or computing time due to the lack of summary statistic [curse of dimensionality]
87. Which summary?
Fundamental difficulty of the choice of the summary statistic when there is no non-trivial sufficient statistic
- loss of statistical information balanced against gain in data roughening
- approximation error and information loss remain unknown
- choice of statistics induces a choice of distance function towards standardisation
- may be imposed for external reasons
- may gather several non-B point estimates
- can learn about an efficient combination
90. Which summary for model choice?
Depending on the choice of η(·), the Bayes factor based on this insufficient statistic,
   B^η_12(y) = ∫ π_1(θ_1) f^η_1(η(y)|θ_1) dθ_1 / ∫ π_2(θ_2) f^η_2(η(y)|θ_2) dθ_2,
is either consistent or inconsistent
[X, Cornuet, Marin, & Pillai, 2012]
Consistency only depends on the range of E_i[η(y)] under both models
[Marin, Pillai, X, & Rousseau, 2012]
92. Semi-automatic ABC
Fearnhead and Prangle (2012) study ABC and the selection of the summary statistic in close proximity to Wilkinson's proposal
- ABC considered as an inferential method and calibrated as such
- randomised (or 'noisy') version of the summary statistics
   η̃(y) = η(y) + τε
- derivation of a well-calibrated version of ABC, i.e. an algorithm that gives proper predictions for the distribution associated with this randomised summary statistic
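A simplified sketch of the construction (my own reduction of the idea, not F&P's full procedure): run a pilot set of simulations, regress the parameter on features of the simulated data by least squares, and use the fitted value, an estimate of E[θ|y], as the summary statistic in a subsequent ABC run.

```python
import numpy as np

def semi_automatic_summary(theta_pilot, x_pilot, x_obs):
    """theta_pilot: (N,) pilot parameters, x_pilot: (N, d) features of the pilot
    data (e.g. raw statistics). Returns the fitted summary function eta and eta(x_obs)."""
    X = np.column_stack([np.ones(len(x_pilot)), x_pilot])
    coef, *_ = np.linalg.lstsq(X, theta_pilot, rcond=None)      # linear fit of theta on features
    eta = lambda x: float(np.concatenate(([1.0], np.atleast_1d(x))) @ coef)
    return eta, eta(x_obs)
```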
93. Summary [of F&P/statistics]
- optimality of the posterior expectation E[θ|y] of the parameter of interest as summary statistic η(y)! [requires an iterative process]
- use of the standard quadratic loss function (θ − θ_0)ᵀ A (θ − θ_0)
- recent extension to model choice, optimality of the Bayes factor B_12(y)
[F&P, ISBA 2012, Kyoto]
95. Summary [about summaries]
- choice of summary statistics is paramount for ABC validation/performances
- at best, ABC approximates π(· | η(y)) [unavoidable]
- model selection feasible with ABC [with caution!]
- for estimation, consistency if {θ; µ(θ) = µ_0} = {θ_0} when E_θ[η(y)] = µ(θ)
- for testing, consistency if {µ_1(θ_1), θ_1 ∈ Θ_1} ∩ {µ_2(θ_2), θ_2 ∈ Θ_2} = ∅
[Marin et al., 2011]
100. Empirical likelihood (EL)
Unavailable likelihoods
ABC methods
ABC as an inference machine
[A]BCel
   ABC and EL
   Composite likelihood
   Illustrations
Conclusion and perspectives
101. Empirical likelihood (EL)
Dataset x made of n independent replicates x = (x_1, . . . , x_n) of some X ∼ F
Generalized moment condition model
   E_F[h(X, φ)] = 0,
where h is a known function and φ an unknown parameter
Corresponding empirical likelihood
   L_el(φ|x) = max_p ∏_{i=1}^n p_i
for all p such that 0 ≤ p_i ≤ 1, Σ_i p_i = 1, Σ_i p_i h(x_i, φ) = 0.
[Owen, 1988, Bio'ka, & Empirical Likelihood, 2001]
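For the simplest case, a scalar mean constraint h(X, φ) = X − φ, the profile L_el(φ|x) has a one-dimensional dual and can be computed with a single root-finding step. The sketch below covers this minimal case only (my own illustrative choice), not the general multi-constraint optimisation used later in the talk.

```python
import numpy as np
from scipy.optimize import brentq

def empirical_log_likelihood(x, phi):
    """log L_el(phi | x) for the mean constraint h(X, phi) = X - phi."""
    d = np.asarray(x, dtype=float) - phi
    if d.min() >= 0 or d.max() <= 0:              # phi outside the convex hull: L_el = 0
        return -np.inf
    g = lambda lam: np.sum(d / (1.0 + lam * d))   # dual equation for the Lagrange multiplier
    lo, hi = -1.0 / d.max(), -1.0 / d.min()
    lam = brentq(g, lo + 1e-10, hi - 1e-10)
    p = 1.0 / (len(d) * (1.0 + lam * d))          # optimal weights p_i
    return np.sum(np.log(p))

x = np.random.default_rng(3).normal(0.3, 1.0, size=100)
print(empirical_log_likelihood(x, 0.3), empirical_log_likelihood(x, 0.0))
```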
103. Convergence of EL [3.4]
Theorem 3.4: Let X, Y_1, . . . , Y_n be independent rv's with common distribution F_0. For θ ∈ Θ, and the function h(X, θ) ∈ R^s, let θ_0 ∈ Θ be such that Var(h(Y_i, θ_0)) is finite and has rank q > 0. If θ_0 satisfies E(h(X, θ_0)) = 0, then
   −2 log( L_el(θ_0|Y_1, . . . , Y_n) / n^{−n} ) → χ²_(q)
in distribution when n → ∞.
[Owen, 2001]
104. Convergence of EL [3.4]
"...The interesting thing about Theorem 3.4 is what is not there. It includes no conditions to make θ̂ a good estimate of θ_0, nor even conditions to ensure a unique value for θ_0, nor even that any solution θ_0 exists. Theorem 3.4 applies in the just determined, over-determined, and under-determined cases. When we can prove that our estimating equations uniquely define θ_0, and provide a consistent estimator θ̂ of it, then confidence regions and tests follow almost automatically through Theorem 3.4."
[Owen, 2001]
105. Raw [A]BCel sampler
Act as if EL was an exact likelihood
[Lazar, 2003]
   for i = 1 → N do
      generate φ_i from the prior distribution π(·)
      set the weight ω_i = L_el(φ_i | x_obs)
   end for
   return (φ_i, ω_i), i = 1, . . . , N
Output: weighted sample of size N
Performance evaluated through the effective sample size
   ESS = 1 / Σ_{i=1}^N [ ω_i / Σ_{j=1}^N ω_j ]²
More advanced algorithms can be adapted to EL: e.g., the adaptive multiple importance sampling (AMIS) of Cornuet et al. to speed up computations
[Cornuet et al., 2012]
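Wiring the raw sampler together with the EL sketch above gives something like the following; the N(0, 1) prior on φ and the toy data are arbitrary illustrations, and empirical_log_likelihood refers to the earlier sketch.

```python
import numpy as np

rng = np.random.default_rng(4)

def abcel_sampler(x_obs, N=1000):
    phis = rng.normal(0.0, 1.0, size=N)             # phi_i ~ prior
    logw = np.array([empirical_log_likelihood(x_obs, phi) for phi in phis])
    w = np.exp(logw - logw.max())                   # unnormalised EL weights omega_i
    ess = w.sum() ** 2 / (w ** 2).sum()             # ESS = 1 / sum_i (omega_i / sum_j omega_j)^2
    return phis, w / w.sum(), ess

x_obs = rng.normal(0.3, 1.0, size=100)
phis, weights, ess = abcel_sampler(x_obs)
print("ESS =", round(ess, 1), "posterior mean ~", round(float((weights * phis).sum()), 3))
```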
108. Moment condition in population genetics?
EL does not require a fully defined and often complex (hence debatable) parametric model
Main difficulty: derive a constraint
   E_F[h(X, φ)] = 0
on the parameters of interest φ when X is made of the genotypes of the sample of individuals at a given locus
E.g., in phylogeography, φ is composed of dates of divergence between populations, ratios of population sizes, mutation rates, etc.
None of them are moments of the distribution of the allelic states of the sample
109. Moment condition in population genetics?
EL does not require a fully defined and often complex (hence debatable) parametric model
Main difficulty: derive a constraint
   E_F[h(X, φ)] = 0
on the parameters of interest φ when X is made of the genotypes of the sample of individuals at a given locus
⇒ h made of pairwise composite scores (whose zero is the pairwise maximum likelihood estimator)
110. Pairwise composite likelihood
The intra-locus pairwise likelihood
   ℓ2(x^k|φ) = ∏_{i<j} ℓ2(x_i^k, x_j^k|φ)
with x_1^k, . . . , x_n^k the allelic states of the gene sample at the k-th locus
The pairwise score function
   ∇_φ log ℓ2(x^k|φ) = Σ_{i<j} ∇_φ log ℓ2(x_i^k, x_j^k|φ)
Composite likelihoods are often much narrower than the original likelihood of the model
Safe(r) with EL because we only use the position of its mode
111. Pairwise likelihood: a simple case
Assumptions
    sample ⊂ closed, panmictic population at equilibrium
    marker: microsatellite
    mutation rate: θ/2
If x_ik and x_jk are two genes of the sample, ℓ₂(x_ik, x_jk | θ) depends only on δ = x_ik − x_jk:
    ℓ₂(δ | θ) = ρ(θ)^{|δ|} / √(1 + 2θ)
with
    ρ(θ) = θ / (1 + θ + √(1 + 2θ))
Pairwise score function
    ∂_θ log ℓ₂(δ | θ) = −1/(1 + 2θ) + |δ| / (θ √(1 + 2θ))
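Writing these formulas out in code makes the EL constraint concrete: h(x_k, θ) can be taken as the pairwise score summed over all pairs at a locus. A sketch assuming NumPy; the function names are illustrative, not from the slides.

    import numpy as np

    def pairwise_loglik(delta, theta):
        """log l2(delta | theta) for one pair of alleles, delta = x_ik - x_jk."""
        s = np.sqrt(1.0 + 2.0 * theta)
        rho = theta / (1.0 + theta + s)
        return -0.5 * np.log(1.0 + 2.0 * theta) + np.abs(delta) * np.log(rho)

    def pairwise_score(delta, theta):
        """d/dtheta of log l2(delta | theta)."""
        return -1.0 / (1.0 + 2.0 * theta) + np.abs(delta) / (theta * np.sqrt(1.0 + 2.0 * theta))

    def locus_score(x, theta):
        """Pairwise score summed over all i < j pairs at one locus; its zero is the
        pairwise composite-likelihood estimator, and it can serve as h in E_F h = 0."""
        x = np.asarray(x, dtype=float)
        iu = np.triu_indices(len(x), k=1)
        deltas = (x[:, None] - x[None, :])[iu]
        return float(np.sum(pairwise_score(deltas, theta)))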
114. Pairwise likelihood: 2 diverging populations
(diagram: MRCA above POP a and POP b, divergence at time τ)
Assumptions
    τ: divergence date of populations a and b
    θ/2: mutation rate
Let x_ik and x_jk be two genes coming respectively from populations a and b, and set δ = x_ik − x_jk. Then
    ℓ₂(δ | θ, τ) = [ e^{−τθ} / √(1 + 2θ) ] ∑_{k=−∞}^{+∞} ρ(θ)^{|k|} I_{δ−k}(τθ),
where I_n(z) is the nth-order modified Bessel function of the first kind
115. Pairwise likelihood: 2 diverging populations
(diagram: MRCA above POP a and POP b, divergence at time τ)
Assumptions
    τ: divergence date of populations a and b
    θ/2: mutation rate
Let x_ik and x_jk be two genes coming respectively from populations a and b, and set δ = x_ik − x_jk.
A 2-dimensional score function
    ∂_τ log ℓ₂(δ | θ, τ) = −θ + (θ/2) [ ℓ₂(δ−1 | θ, τ) + ℓ₂(δ+1 | θ, τ) ] / ℓ₂(δ | θ, τ)
    ∂_θ log ℓ₂(δ | θ, τ) = −τ − 1/(1 + 2θ) + q(δ | θ, τ)/ℓ₂(δ | θ, τ) + (τ/2) [ ℓ₂(δ−1 | θ, τ) + ℓ₂(δ+1 | θ, τ) ] / ℓ₂(δ | θ, τ)
where
    q(δ | θ, τ) := [ e^{−τθ} / √(1 + 2θ) ] · [ ρ′(θ)/ρ(θ) ] ∑_{k=−∞}^{∞} |k| ρ(θ)^{|k|} I_{δ−k}(τθ)
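The likelihood itself is straightforward to evaluate numerically once the infinite sum over k is truncated; a sketch assuming SciPy's modified Bessel function iv and a user-chosen truncation bound kmax (both choices are mine, not the slides').

    import numpy as np
    from scipy.special import iv   # modified Bessel function of the first kind, I_v(z)

    def pairwise_lik_two_pops(delta, theta, tau, kmax=200):
        """l2(delta | theta, tau) for two genes sampled one in each population,
        truncating the sum over k to |k| <= kmax."""
        s = np.sqrt(1.0 + 2.0 * theta)
        rho = theta / (1.0 + theta + s)
        ks = np.arange(-kmax, kmax + 1)
        return np.exp(-tau * theta) / s * np.sum(rho ** np.abs(ks) * iv(delta - ks, tau * theta))

The two score components on the slide then follow from the Bessel recurrence I′_n(z) = [I_{n−1}(z) + I_{n+1}(z)]/2, which is where the ℓ₂(δ−1) and ℓ₂(δ+1) terms come from.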
116. Example: normal posterior
[A]BCel with two constraints
(figure: 15 histograms of the [A]BCel weighted posterior sample of θ, with ESS values ranging from about 76 to 156)
Sample sizes are of 25 (column 1), 50 (column 2) and 75 (column 3) observations
117. Example: normal posterior
[A]BCel with three constraints
(figure: 15 histograms of the [A]BCel weighted posterior sample of θ, with ESS values ranging from about 135 to 332)
Sample sizes are of 25 (column 1), 50 (column 2) and 75 (column 3) observations
118. Example: superposition of gamma processes
Example of the superposition of N renewal processes with waiting times τ_ij (i = 1, ..., N, j = 1, ...) ∼ G(α, β), when N is unknown.
Renewal processes
    ζ_i1 = τ_i1,  ζ_i2 = ζ_i1 + τ_i2,  ...
with observations made of the first n values of the ζ_ij's,
    z_1 = min{ζ_ij},  z_2 = min{ζ_ij ; ζ_ij > z_1},  ...
ending with
    z_n = min{ζ_ij ; ζ_ij > z_{n−1}}.
[Cox & Kartsonaki, B'ka, 2012]
119. Example: superposition of gamma processes (ABC)
Interesting testing ground for [A]BCel since the data (z_t) are neither iid nor Markov
Recovery of an iid structure by
    1. simulating a pseudo-dataset, (z*_1, ..., z*_n), as in regular ABC,
    2. deriving the sequence of indicators (ν_1, ..., ν_n), as
           z*_1 = ζ_{ν_1 1},  z*_2 = ζ_{ν_2 j_2},  ...
    3. exploiting that those indicators are distributed from the prior distribution on the ν_t's, leading to an iid sample of G(α, β) variables
Comparison of ABC and [A]BCel posteriors
(figure: posterior densities of α, β and N; top row [A]BCel, bottom row regular ABC)
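Step 1 of this recovery is easy to sketch; assuming the shape/rate parametrisation of G(α, β) and the helper name below (both my choices, not the slides'), one can simulate the pseudo-data together with the process indicators ν_t.

    import numpy as np

    def simulate_superposition(N, alpha, beta, n, rng=None):
        """First n pooled event times of N superposed gamma(alpha, beta) renewal
        processes, together with the index nu_t of the process each event came from.
        Drawing n waiting times per process suffices: the n smallest pooled events
        can involve at most n events from any single process."""
        rng = np.random.default_rng() if rng is None else rng
        waits = rng.gamma(alpha, 1.0 / beta, size=(N, n))   # tau_ij, rate parametrisation
        times = np.cumsum(waits, axis=1)                    # zeta_i1 < zeta_i2 < ...
        proc = np.repeat(np.arange(N), n)
        order = np.argsort(times.ravel())[:n]
        return times.ravel()[order], proc[order]            # (z_1,...,z_n), (nu_1,...,nu_n)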
121. Pop'gen': A first experiment
Evolutionary scenario: (diagram: MRCA above POP 0 and POP 1, divergence at time τ)
Dataset:
    50 genes per population, 100 microsatellite loci
Assumptions:
    Ne identical over all populations
    φ = (log10 θ, log10 τ)
    uniform prior over (−1, 1.5) × (−1, 1)
Comparison of the original ABC with [A]BCel (ESS = 7034)
(figure: posterior densities of log(theta) and log(tau1))
    histogram = [A]BCel
    curve = original ABC
    vertical line = "true" parameter
123. ABC vs. [A]BCel on 100 replicates of the 1st experiment
Accuracy:
                 log10 θ                log10 τ
            ABC      [A]BCel       ABC      [A]BCel
    (1)     0.097    0.094         0.315    0.117
    (2)     0.071    0.059         0.272    0.077
    (3)     0.68     0.81          1.0      0.80
(1) Root Mean Square Error of the posterior mean
(2) Median Absolute Deviation of the posterior median
(3) Coverage of the credibility interval of probability 0.8
Computation time, on a recent 6-core computer (C++/OpenMP):
    ABC ≈ 4 hours
    [A]BCel ≈ 2 minutes
124. Pop'gen': Second experiment
Evolutionary scenario: (diagram: MRCA above POP 0, POP 1 and POP 2, with divergence times τ1 and τ2)
Dataset:
    50 genes per population, 100 microsatellite loci
Assumptions:
    Ne identical over all populations
    φ = (log10 θ, log10 τ1, log10 τ2)
    non-informative uniform prior
Comparison of the original ABC with [A]BCel
(figure: posterior densities)
    histogram = [A]BCel
    curve = original ABC
    vertical line = "true" parameter
128. ABC vs. [A]BCel on 100 replicates of the 2nd experiment
Accuracy:
                 log10 θ               log10 τ1              log10 τ2
            ABC      [A]BCel      ABC      [A]BCel      ABC      [A]BCel
    (1)     .0059    .0794        .472     .483         29.3     4.7
    (2)     .048     .053         .32      .28          4.13     3.3
    (3)     .79      .76          .88      .76          .89      .79
(1) Root Mean Square Error of the posterior mean
(2) Median Absolute Deviation of the posterior median
(3) Coverage of the credibility interval of probability 0.8
Computation time, on a recent 6-core computer (C++/OpenMP):
    ABC ≈ 6 hours
    [A]BCel ≈ 8 minutes
129. Comparison
On large datasets, [A]BCel gives more accurate results than ABC
ABC simplifies the dataset through summary statistics: due to the large dimension of x, the original ABC algorithm estimates
    π( θ | η(x_obs) ),
where η(x_obs) is some (non-linear) projection of the observed dataset onto a space of smaller dimension
→ Some information is lost
[A]BCel simplifies the model through a generalized moment condition model
→ Here, the moment condition model is based on the pairwise composite likelihood
130. Conclusion/perspectives
abc is part of a wider picture for handling complex/Big Data models
many formats of empirical [likelihood] Bayes methods are available
lack of comparative tools and of an assessment of the information loss
