Glasgow seminar, April 20, 2012

  1. ABC Methods for Bayesian Model Choice
     Christian P. Robert, Université Paris-Dauphine, IUF, & CREST
     http://www.ceremade.dauphine.fr/~xian
     Glasgow University, April 20, 2012
     Joint work with J.-M. Cornuet, A. Grelaud, J.-M. Marin, N. Pillai, & J. Rousseau
  2. Outline
     Approximate Bayesian computation; ABC for model choice; Model choice consistency
  3. Approximate Bayesian computation
     ABC basics; Alphabet soup; Semi-automatic ABC; simulation-based methods in Econometrics
  4. Bayesian computation issues
     When faced with a non-standard posterior distribution π(θ|y) ∝ π(θ)L(θ|y), the standard solution is to use simulation (Monte Carlo) to produce a sample θ_1, ..., θ_T from π(θ|y) (or approximately, by Markov chain Monte Carlo methods). [Robert & Casella, 2004]
  5. Intractable likelihoods
     Cases when the likelihood function f(y|θ) is unavailable and when the completion step f(y|θ) = ∫_Z f(y, z|θ) dz is impossible or too costly because of the dimension of z: MCMC cannot be implemented!
  6. Illustrations
     Example (stochastic volatility model): for t = 1, ..., T, y_t = exp(z_t) ε_t, z_t = a + b z_{t−1} + σ η_t; T very large makes it difficult to include z within the simulated parameters. [Figure: highest weight trajectories]
  7. Illustrations
     Example (Potts model): if y takes values on a grid Y of size k^n and f(y|θ) ∝ exp(θ Σ_{l∼i} I_{y_l = y_i}), where l∼i denotes a neighbourhood relation, n moderately large prohibits the computation of the normalising constant.
  8. Illustrations
     Example (coalescence tree): in population genetics, reconstitution of a common ancestor from a sample of genes via a phylogenetic tree that is close to impossible to integrate out [100 processor days with 4 parameters]. [Cornuet et al., 2009, Bioinformatics]
  9–11. The ABC method
     Bayesian setting: target is π(θ)f(x|θ). When the likelihood f(x|θ) is not in closed form, likelihood-free rejection technique:
     ABC algorithm: For an observation y ∼ f(y|θ), under the prior π(θ), keep jointly simulating θ′ ∼ π(θ), z ∼ f(z|θ′), until the auxiliary variable z is equal to the observed value, z = y. [Tavaré et al., 1997]
  12–13. A as approximative
     When y is a continuous random variable, the equality z = y is replaced with a tolerance condition ρ(y, z) ≤ ε, where ρ is a distance. Output distributed from π(θ) P_θ{ρ(y, z) < ε} ∝ π(θ | ρ(y, z) < ε).
  14. ABC algorithm (with summary)
     Algorithm 1: Likelihood-free rejection sampler
       for i = 1 to N do
         repeat
           generate θ′ from the prior distribution π(·)
           generate z from the likelihood f(·|θ′)
         until ρ{η(z), η(y)} ≤ ε
         set θ_i = θ′
       end for
     where η(y) defines a (maybe insufficient) statistic.
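     A minimal Python sketch of this likelihood-free rejection sampler, under illustrative assumptions not in the slides (prior θ ∼ N(0,1), data y|θ ∼ N(θ,1), summary η = sample mean, ρ = absolute difference):

     import numpy as np

     rng = np.random.default_rng(0)

     def abc_rejection(y_obs, n_keep=1000, eps=0.05):
         """Algorithm 1 (likelihood-free rejection sampler), toy version."""
         eta_obs = y_obs.mean()                       # eta(y)
         kept = []
         while len(kept) < n_keep:
             theta = rng.normal(0.0, 1.0)             # theta' ~ pi(.)   [assumed prior]
             z = rng.normal(theta, 1.0, y_obs.size)   # z ~ f(.|theta')  [assumed model]
             if abs(z.mean() - eta_obs) <= eps:       # rho{eta(z), eta(y)} <= eps
                 kept.append(theta)
         return np.array(kept)

     y_obs = rng.normal(1.2, 1.0, 100)
     print(abc_rejection(y_obs).mean())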
  15–16. Output (with summary)
     The likelihood-free algorithm samples from the marginal in z of
       π_ε(θ, z|y) = π(θ) f(z|θ) I_{A_{ε,y}}(z) / ∫_{A_{ε,y}×Θ} π(θ) f(z|θ) dz dθ,
     where A_{ε,y} = {z ∈ D : ρ(η(z), η(y)) < ε}.
     The idea behind ABC is that the summary statistics coupled with a small tolerance should provide a good approximation of the posterior distribution based on the summary statistic:
       π_ε(θ|y) = ∫ π_ε(θ, z|y) dz ≈ π(θ|η(y)).
  17. MA example
     Consider the MA(q) model x_t = ε_t + Σ_{i=1}^q ϑ_i ε_{t−i}. Simple prior: uniform prior over the identifiability zone, e.g. a triangle for MA(2).
  18–19. MA example (2)
     ABC algorithm thus made of
       1. picking a new value (ϑ_1, ϑ_2) in the triangle
       2. generating an iid sequence (ε_t)_{−q<t≤T}
       3. producing a simulated series (x′_t)_{1≤t≤T}
     Distance: basic distance between the series
       ρ((x_t)_{1≤t≤T}, (x′_t)_{1≤t≤T}) = Σ_{t=1}^T (x_t − x′_t)²
     or between summary statistics like the first 2 autocorrelations
       τ_j = Σ_{t=j+1}^T x_t x_{t−j}.
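     A possible Python sketch of steps 1–3 and of both distances in the MA(2) case; the rejection sampler over the identifiability triangle and its parametrisation are assumptions made for illustration:

     import numpy as np

     rng = np.random.default_rng(1)

     def sample_triangle():
         # uniform draw over the assumed MA(2) identifiability triangle
         # {(t1, t2): -2 < t1 < 2, -1 < t2 < 1, t1 + t2 > -1, t1 - t2 < 1}
         while True:
             t1, t2 = rng.uniform(-2, 2), rng.uniform(-1, 1)
             if t1 + t2 > -1 and t1 - t2 < 1:
                 return t1, t2

     def simulate_ma2(t1, t2, T):
         eps = rng.normal(size=T + 2)               # iid (eps_t), t = -1, 0, ..., T
         return eps[2:] + t1 * eps[1:-1] + t2 * eps[:-2]

     def raw_distance(x, z):
         return np.sum((x - z) ** 2)                # sum_t (x_t - x'_t)^2

     def acf_distance(x, z, lags=(1, 2)):
         tau = lambda v, j: np.sum(v[j:] * v[:-j])  # tau_j = sum_t x_t x_{t-j}
         return sum((tau(x, j) - tau(z, j)) ** 2 for j in lags)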
  20–22. Comparison of distance impact
     [Figures: histograms of θ_1 and θ_2] Impact of the tolerance on the ABC sample, against the raw distance and against the autocorrelation distance (ε = 100%, 10%, 1%, 0.1%), for an MA(2) model.
  23–26. ABC advances
     Simulating from the prior is often poor in efficiency. Either modify the proposal distribution on θ to increase the density of x's within the vicinity of y... [Marjoram et al., 2003; Bortot et al., 2007; Sisson et al., 2007]
     ...or view the problem as conditional density estimation and develop techniques that allow for a larger ε [Beaumont et al., 2002]
     ...or even include ε in the inferential framework [ABCμ] [Ratmann et al., 2009]
  27. ABC-NP
     Better usage of [prior] simulations by adjustment: instead of throwing away θ′ such that ρ(η(z), η(y)) > ε, replace the θ's with locally regressed transforms
       θ* = θ − {η(z) − η(y)}ᵀ β̂   [Csilléry et al., TEE, 2010]
     where β̂ is obtained by [NP] weighted least squares regression on (η(z) − η(y)), with weights K_δ{ρ(η(z), η(y))}. [Beaumont et al., 2002, Genetics]
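     A sketch of this local regression adjustment in Python (NumPy only); the Epanechnikov form of K_δ and the weighted least-squares implementation are assumptions made for illustration:

     import numpy as np

     def abc_regression_adjust(thetas, etas, eta_obs, delta):
         """Beaumont et al. (2002)-style adjustment for a scalar parameter (sketch)."""
         diff = etas - eta_obs                                    # eta(z_i) - eta(y), shape (n, d)
         rho = np.linalg.norm(diff, axis=1)
         w = np.where(rho < delta, 1 - (rho / delta) ** 2, 0.0)   # assumed Epanechnikov K_delta
         keep = w > 0
         X = np.column_stack([np.ones(keep.sum()), diff[keep]])   # intercept + local regressors
         sw = np.sqrt(w[keep])
         coef, *_ = np.linalg.lstsq(X * sw[:, None], thetas[keep] * sw, rcond=None)
         beta_hat = coef[1:]
         return thetas[keep] - diff[keep] @ beta_hat, w[keep]     # theta* and kernel weights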
  28–29. ABC-NP (density estimation)
     Use of the kernel weights K_δ{ρ(η(z(θ)), η(y))} leads to the NP estimate of the posterior expectation
       Σ_i θ_i K_δ{ρ(η(z(θ_i)), η(y))} / Σ_i K_δ{ρ(η(z(θ_i)), η(y))}
     and to the NP estimate of the posterior conditional density
       Σ_i K̃_b(θ_i − θ) K_δ{ρ(η(z(θ_i)), η(y))} / Σ_i K_δ{ρ(η(z(θ_i)), η(y))}.   [Blum, JASA, 2010]
  30–32. ABC-NP (density estimations)
     Other versions incorporate the regression adjustment,
       Σ_i K̃_b(θ*_i − θ) K_δ{ρ(η(z(θ_i)), η(y))} / Σ_i K_δ{ρ(η(z(θ_i)), η(y))}.
     In all cases, error
       E[ĝ(θ|y)] − g(θ|y) = c b² + c δ² + O_P(b² + δ²) + O_P(1/nδ^d)
       var(ĝ(θ|y)) = c/(n b δ^d) (1 + o_P(1))
     [Blum, JASA, 2010; standard NP calculations]
  33–34. Which summary statistics?
     Fundamental difficulty of the choice of the summary statistic when there is no non-trivial sufficient statistic [except when done by the experimenters in the field].
     Starting from a large collection of available summary statistics, Joyce and Marjoram (2008) consider their sequential inclusion into the ABC target, with a stopping rule based on a likelihood ratio test.
  35–37. Semi-automatic ABC
     Fearnhead and Prangle (2012) study ABC and the selection of the summary statistic, in close proximity to Wilkinson's (2008) proposal: ABC is then considered from a purely inferential viewpoint and calibrated for estimation purposes.
     Use of a randomised (or 'noisy') version of the summary statistics, η̃(y) = η(y) + τ ε. Derivation of a well-calibrated version of ABC, i.e. an algorithm that gives proper predictions for the distribution associated with this randomised summary statistic [calibration constraint: ABC approximation with the same posterior mean as the true randomised posterior].
     Optimality of the posterior expectation E[θ|y] of the parameter of interest as summary statistic η(y)! Use of the standard quadratic loss function (θ − θ_0)ᵀ A (θ − θ_0).
  38. Details on Fearnhead and Prangle (F&P) ABC
     Use of a summary statistic S(·), an importance proposal g(·), a kernel K(·) ≤ 1 and a bandwidth h > 0 such that (θ, y_sim) ∼ g(θ) f(y_sim|θ) is accepted with probability (hence the bound) K[{S(y_sim) − s_obs}/h], with the corresponding importance weight defined by π(θ)/g(θ). [Fearnhead & Prangle, 2012]
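     One accept/weight step of this scheme could look as follows in Python; the Gaussian-shaped kernel (bounded by 1) and the generic callables passed in are assumptions, not F&P's specific choices:

     import numpy as np

     rng = np.random.default_rng(2)

     def fp_abc_step(prior_logpdf, sample_proposal, proposal_logpdf,
                     simulate, summary, s_obs, h):
         """Accept (theta, y_sim) ~ g(theta) f(y_sim|theta) with probability K[.] (sketch)."""
         theta = sample_proposal()                        # theta ~ g(.)
         y_sim = simulate(theta)                          # y_sim ~ f(.|theta)
         u = np.atleast_1d(summary(y_sim) - s_obs) / h
         accept_prob = np.exp(-0.5 * u @ u)               # K[{S(y_sim) - s_obs}/h] <= 1
         if rng.uniform() < accept_prob:
             return theta, np.exp(prior_logpdf(theta) - proposal_logpdf(theta))  # weight pi/g
         return None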
  39. Errors, errors, and errors
     Three levels of approximation:
     – π(θ|y_obs) by π(θ|s_obs): loss of information [ignored]
     – π(θ|s_obs) by
         π_ABC(θ|s_obs) = ∫ π(s) K[{s − s_obs}/h] π(θ|s) ds / ∫ π(s) K[{s − s_obs}/h] ds:
       noisy observations
     – π_ABC(θ|s_obs) by importance Monte Carlo based on N simulations, represented by var(a(θ)|s_obs)/N_acc [expected number of acceptances]
  40. Average acceptance asymptotics
     For the average acceptance probability/approximate likelihood
       p(θ|s_obs) = ∫ f(y_sim|θ) K[{S(y_sim) − s_obs}/h] dy_sim,
     the overall acceptance probability is
       p(s_obs) = ∫ p(θ|s_obs) π(θ) dθ = π(s_obs) h^d + o(h^d).   [F&P, Lemma 1]
  41. Calibration of h
     "This result gives insight into how S(·) and h affect the Monte Carlo error. To minimize Monte Carlo error, we need h^d to be not too small. Thus ideally we want S(·) to be a low dimensional summary of the data that is sufficiently informative about θ that π(θ|s_obs) is close, in some sense, to π(θ|y_obs)" (F&P, p. 5)
     – turns h into an absolute value while it should be context-dependent and user-calibrated
     – only addresses one term in the approximation error and acceptance probability ("curse of dimensionality")
     – h large prevents π_ABC(θ|s_obs) from being close to π(θ|s_obs)
     – d small prevents π(θ|s_obs) from being close to π(θ|y_obs) ("curse of [dis]information")
  42. Converging ABC
     Theorem (F&P): For noisy ABC, the expected noisy-ABC log-likelihood,
       E{log[p(θ|s_obs)]} = ∫∫ log[p(θ|S(y_obs) + ε)] π(y_obs|θ_0) K(ε) dy_obs dε,
     has its maximum at θ = θ_0.
     True for any choice of summary statistic? Even ancillary statistics?! [Imposes at least identifiability...] Relevant in asymptotia and not for the data.
  43. Converging ABC
     Corollary (F&P): For noisy ABC, the ABC posterior converges onto a point mass on the true parameter value as m → ∞.
     For standard ABC, not always the case (unless h goes to zero). Strength of regularity conditions (c1) and (c2) in Bernardo & Smith, 1994? [out-of-reach constraints on likelihood and posterior] Again, there must be conditions imposed upon summary statistics...
  44–45. Caveat
     Since E(θ|y_obs) is usually unavailable, F&P suggest (i) use a pilot run of ABC to determine a region of non-negligible posterior mass; (ii) simulate sets of parameter values and data; (iii) use the simulated sets of parameter values and data to estimate the summary statistic; and (iv) run ABC with this choice of summary statistic.
     Where is the assessment of the first-stage error?
  46. Econ'ections
     Similar exploration of simulation-based techniques in Econometrics: simulated method of moments, method of simulated moments, simulated pseudo-maximum-likelihood, indirect inference. [Gouriéroux & Monfort, 1996]
  47–48. Simulated method of moments
     Given observations y^o_{1:n} from a model
       y_t = r(y_{1:(t−1)}, ε_t, θ),  ε_t ∼ g(·),
     simulate ε′_{1:n}, derive y_t(θ) = r(y_{1:(t−1)}, ε′_t, θ) and estimate θ by
       arg min_θ Σ_{t=1}^n (y^o_t − y_t(θ))²
     or by
       arg min_θ {Σ_{t=1}^n y^o_t − Σ_{t=1}^n y_t(θ)}².
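     A crude Python illustration of the first criterion, under an assumed toy recursion r(y_{1:(t−1)}, ε_t, θ) = θ·y_{t−1} + ε_t and a grid search in place of a proper optimiser:

     import numpy as np

     rng = np.random.default_rng(3)

     def simulate(theta, eps, y0=0.0):
         y, prev = np.empty(eps.size), y0
         for t, e in enumerate(eps):                 # y_t = theta * y_{t-1} + eps_t
             y[t] = theta * prev + e
             prev = y[t]
         return y

     def smm_estimate(y_obs, theta_grid):
         eps_sim = rng.normal(size=y_obs.size)       # one simulated eps'_{1:n}, held fixed
         crit = [np.sum((y_obs - simulate(th, eps_sim)) ** 2) for th in theta_grid]
         return theta_grid[int(np.argmin(crit))]

     y_obs = simulate(0.5, rng.normal(size=500))
     print(smm_estimate(y_obs, np.linspace(-0.9, 0.9, 181)))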
  49. Method of simulated moments
     Given a statistic vector K(y) with E_θ[K(Y_t)|y_{1:(t−1)}] = k(y_{1:(t−1)}; θ), find an unbiased estimator of k(y_{1:(t−1)}; θ), k̃(ε_t, y_{1:(t−1)}; θ). Estimate θ by
       arg min_θ Σ_{t=1}^n ‖K(y_t) − Σ_{s=1}^S k̃(ε^s_t, y_{1:(t−1)}; θ)/S‖.
     [Pakes & Pollard, 1989]
  50. Indirect inference
     Minimise (in θ) the distance between estimators β̂ based on pseudo-models for genuine observations and for observations simulated under the true model and the parameter θ. [Gouriéroux, Monfort & Renault, 1993; Smith, 1993; Gallant & Tauchen, 1996]
  51. Indirect inference (PML vs. PSE)
     Example of the pseudo-maximum-likelihood (PML):
       β̂(y) = arg max_β Σ_t log f(y_t|β, y_{1:(t−1)}),
     leading to
       arg min_θ ‖β̂(y^o) − β̂(y_1(θ), ..., y_S(θ))‖²
     when y_s(θ) ∼ f(y|θ), s = 1, ..., S.
  52. Indirect inference (PML vs. PSE)
     Example of the pseudo-score-estimator (PSE):
       β̂(y) = arg min_β Σ_t {∂ log f(y_t|β, y_{1:(t−1)})/∂β}²,
     leading to
       arg min_θ ‖β̂(y^o) − β̂(y_1(θ), ..., y_S(θ))‖²
     when y_s(θ) ∼ f(y|θ), s = 1, ..., S.
  53. ABC using indirect inference (1)
     "We present a novel approach for developing summary statistics for use in approximate Bayesian computation (ABC) algorithms by using indirect inference (...) In the indirect inference approach to ABC the parameters of an auxiliary model fitted to the data become the summary statistics. Although applicable to any ABC technique, we embed this approach within a sequential Monte Carlo algorithm that is completely adaptive and requires very little tuning (...)" [Drovandi, Pettitt & Faddy, 2011]
     Indirect inference provides summary statistics for ABC...
  54. ABC using indirect inference (2)
     "...the above result shows that, in the limit as h → 0, ABC will be more accurate than an indirect inference method whose auxiliary statistics are the same as the summary statistic that is used for ABC (...) Initial analysis showed that which method is more accurate depends on the true value of θ." [Fearnhead and Prangle, 2012]
     Indirect inference provides estimates rather than global inference...
  55. ABC for model choice
     Approximate Bayesian computation; ABC for model choice: Principle, Gibbs random fields, Generic ABC model choice; Model choice consistency
  56. Bayesian model choice
     Principle: several models M_1, M_2, ... are considered simultaneously for dataset y, and the model index M is central to inference. Use of a prior π(M = m), plus a prior distribution on the parameter conditional on the value m of the model index, π_m(θ_m). Goal is to derive the posterior distribution of M, π(M = m|data), a challenging computational target when models are complex.
  57. Generic ABC for model choice
     Algorithm 2: Likelihood-free model choice sampler (ABC-MC)
       for t = 1 to T do
         repeat
           Generate m from the prior π(M = m)
           Generate θ_m from the prior π_m(θ_m)
           Generate z from the model f_m(z|θ_m)
         until ρ{η(z), η(y)} < ε
         Set m^(t) = m and θ^(t) = θ_m
       end for
     [Grelaud et al., 2009; Toni et al., 2009]
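     A minimal Python sketch of Algorithm 2; the interface (each model supplied as a prior sampler plus a simulator) is an assumption made for illustration:

     import numpy as np

     rng = np.random.default_rng(4)

     def abc_model_choice(models, prior_m, eta, rho, y_obs, eps, T=10_000):
         """ABC-MC: returns the acceptance frequencies of each model index."""
         eta_obs = eta(y_obs)
         counts = np.zeros(len(models))
         for _ in range(T):
             while True:
                 m = rng.choice(len(models), p=prior_m)   # m ~ pi(M = m)
                 theta = models[m][0]()                   # theta_m ~ pi_m(.)
                 z = models[m][1](theta)                  # z ~ f_m(.|theta_m)
                 if rho(eta(z), eta_obs) < eps:
                     break
             counts[m] += 1                               # m^(t) = m accepted
         return counts / T                                # estimates of pi(M = m | y)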
  58–59. ABC estimates
     Posterior probability π(M = m|y) approximated by the frequency of acceptances from model m,
       (1/T) Σ_{t=1}^T I_{m^(t) = m}.
     Extension to a weighted polychotomous logistic regression estimate of π(M = m|y), with non-parametric kernel weights. [Cornuet et al., DIYABC, 2009]
  60–61. Potts model
     Distribution with an energy function of the form
       θ S(y) = θ Σ_{l∼i} δ_{y_l = y_i},
     where l∼i denotes a neighbourhood structure. In most realistic settings, the summation
       Z_θ = Σ_{x∈X} exp{θ S(x)}
     involves too many terms to be manageable, and numerical approximations cannot always be trusted.
  62. Neighbourhood relations
     Setup: choice to be made between M neighbourhood relations i ∼_m i′ (0 ≤ m ≤ M − 1), with
       S_m(x) = Σ_{i ∼_m i′} I_{x_i = x_{i′}},
     driven by the posterior probabilities of the models.
  63–64. Model index
     Computational target:
       P(M = m|x) ∝ ∫_{Θ_m} f_m(x|θ_m) π_m(θ_m) dθ_m · π(M = m).
     If S(x) is a sufficient statistic for the joint parameters (M, θ_0, ..., θ_{M−1}), then P(M = m|x) = P(M = m|S(x)).
  65–66. Sufficient statistics in Gibbs random fields
     Each model m has its own sufficient statistic S_m(·), and S(·) = (S_0(·), ..., S_{M−1}(·)) is also (model-)sufficient.
     Explanation: for Gibbs random fields,
       x|M = m ∼ f_m(x|θ_m) = f¹_m(x|S(x)) f²_m(S(x)|θ_m) = (1/n(S(x))) f²_m(S(x)|θ_m),
     where n(S(x)) = #{x̃ ∈ X : S(x̃) = S(x)}, so S(x) is sufficient for the joint parameters.
  67. Toy example
     iid Bernoulli model versus two-state first-order Markov chain, i.e.
       f_0(x|θ_0) = exp(θ_0 Σ_{i=1}^n I_{x_i=1}) / {1 + exp(θ_0)}^n
     versus
       f_1(x|θ_1) = (1/2) exp(θ_1 Σ_{i=2}^n I_{x_i=x_{i−1}}) / {1 + exp(θ_1)}^{n−1},
     with priors θ_0 ∼ U(−5, 5) and θ_1 ∼ U(0, 6) (inspired by "phase transition" boundaries).
  68. Toy example (2)
     [Figure] (left) Comparison of the true BF_{m0/m1}(x_0) with the ABC approximation B̂F_{m0/m1}(x_0) (in logs), over 2,000 simulations and 4·10⁶ proposals from the prior. (right) Same when using a tolerance ε corresponding to the 1% quantile on the distances.
  69–70. About sufficiency
     If η_1(x) is a sufficient statistic for model m = 1 and parameter θ_1, and η_2(x) is a sufficient statistic for model m = 2 and parameter θ_2, (η_1(x), η_2(x)) is not always sufficient for (m, θ_m): potential loss of information at the testing level.
  71. Poisson/geometric example
     Sample x = (x_1, ..., x_n) from either a Poisson P(λ) or from a geometric G(p). The sum
       S = Σ_{i=1}^n x_i = η(x)
     is a sufficient statistic for either model, but not simultaneously.
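     A small Python sketch of ABC model choice based on this statistic S; the priors λ ∼ Exp(1) and p ∼ U(0, 1), and the uniform prior on the model index, are illustrative assumptions:

     import numpy as np

     rng = np.random.default_rng(5)

     def abc_poisson_vs_geometric(x_obs, T=100_000, eps=0):
         n, S_obs = x_obs.size, x_obs.sum()
         hits = np.zeros(2)
         for _ in range(T):
             m = rng.integers(2)                                  # model index ~ U{0, 1}
             if m == 0:
                 S = rng.poisson(rng.exponential(1.0), n).sum()   # Poisson P(lambda), lambda ~ Exp(1)
             else:
                 S = rng.geometric(rng.uniform(), n).sum() - n    # geometric shifted to {0, 1, ...}
             if abs(S - S_obs) <= eps:
                 hits[m] += 1
         return hits / max(hits.sum(), 1)                         # approximate P(M = m | S_obs)

     x_obs = rng.poisson(2.0, 30)
     print(abc_poisson_vs_geometric(x_obs))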
  72–73. Limiting behaviour of B_12 (T → ∞)
     ABC approximation
       B^ε_12(y) = Σ_{t=1}^T I_{m_t=1} I_{ρ{η(z_t),η(y)}≤ε} / Σ_{t=1}^T I_{m_t=2} I_{ρ{η(z_t),η(y)}≤ε},
     where the (m_t, z_t)'s are simulated from the (joint) prior. As T goes to infinity, the limit is
       B^ε_12(y) = ∫ I_{ρ{η(z),η(y)}≤ε} π_1(θ_1) f_1(z|θ_1) dz dθ_1 / ∫ I_{ρ{η(z),η(y)}≤ε} π_2(θ_2) f_2(z|θ_2) dz dθ_2
                 = ∫ I_{ρ{η,η(y)}≤ε} π_1(θ_1) f^η_1(η|θ_1) dη dθ_1 / ∫ I_{ρ{η,η(y)}≤ε} π_2(θ_2) f^η_2(η|θ_2) dη dθ_2,
     where f^η_1(η|θ_1) and f^η_2(η|θ_2) are the distributions of η(z).
  74–75. Limiting behaviour of B_12 (ε → 0)
     When ε goes to zero,
       B^η_12(y) = ∫ π_1(θ_1) f^η_1(η(y)|θ_1) dθ_1 / ∫ π_2(θ_2) f^η_2(η(y)|θ_2) dθ_2,
     i.e. the Bayes factor based on the sole observation of η(y).
  76–77. Limiting behaviour of B_12 (under sufficiency)
     If η(y) is a sufficient statistic in both models,
       f_i(y|θ_i) = g_i(y) f^η_i(η(y)|θ_i).
     Thus
       B_12(y) = ∫_{Θ_1} π(θ_1) g_1(y) f^η_1(η(y)|θ_1) dθ_1 / ∫_{Θ_2} π(θ_2) g_2(y) f^η_2(η(y)|θ_2) dθ_2
               = {g_1(y)/g_2(y)} B^η_12(y).
     [Didelot, Everitt, Johansen & Lawson, 2011]
     No discrepancy only under cross-model sufficiency.
  78. Poisson/geometric example (back)
     Discrepancy ratio
       g_1(x)/g_2(x) = S! n^{−S} C(n+S−1, S) / Π_i x_i!,
     where C(n+S−1, S) is the binomial coefficient.
  79. Poisson/geometric discrepancy
     Range of B^η_12(x) versus B_12(x): the values produced have nothing in common.
  80–82. Formal recovery
     Creating an encompassing exponential family
       f(x|θ_1, θ_2, α_1, α_2) ∝ exp{θ_1ᵀ η_1(x) + θ_2ᵀ η_2(x) + α_1 t_1(x) + α_2 t_2(x)}
     leads to a sufficient statistic (η_1(x), η_2(x), t_1(x), t_2(x)). [Didelot, Everitt, Johansen & Lawson, 2011]
     In the Poisson/geometric case, if Π_i x_i! is added to S, no discrepancy. But this only applies in genuine sufficiency settings: inability to evaluate the information loss due to summary statistics.
  83–84. Meaning of the ABC-Bayes factor
     The ABC approximation to the Bayes factor is based solely on the summary statistics... In the Poisson/geometric case, if E[y_i] = θ_0 > 0,
       lim_{n→∞} B^η_12(y) = {(θ_0 + 1)²/θ_0} e^{−θ_0}.
  85. MA example
     [Figure] Evolution [against ε] of the ABC Bayes factor, in terms of frequencies of visits to models MA(1) (left) and MA(2) (right), when ε equals the 10, 1, .1, .01% quantiles on insufficient autocovariance distances. Sample of 50 points from an MA(2) with θ_1 = 0.6, θ_2 = 0.2. True Bayes factor equal to 17.71.
  86. MA example
     [Figure] Same evolution [against ε] when the sample of 50 points comes from an MA(1) model with θ_1 = 0.6. True Bayes factor B_21 equal to .004.
  87–88. A population genetics evaluation
     Population genetics example with: 3 populations, 2 scenarios, 15 individuals, 5 loci, a single mutation parameter; 24 summary statistics; 2 million ABC proposals; importance [tree] sampling alternative.
  89. Stability of importance sampling
     [Figure]
  90. Comparison with ABC
     Use of 24 summary statistics and DIY-ABC logistic correction. [Figure: ABC direct and logistic versus importance sampling]
  91. Comparison with ABC
     Use of 15 summary statistics. [Figure: ABC direct versus importance sampling]
  92. Comparison with ABC
     Use of 15 summary statistics and DIY-ABC logistic correction. [Figure: ABC direct and logistic versus importance sampling]
  93–94. The only safe cases???
     Besides specific models like Gibbs random fields, using distances over the data itself escapes the discrepancy... [Toni & Stumpf, 2010; Sousa et al., 2009]
     ...and so does the use of more informal model fitting measures. [Ratmann et al., 2009]
  95. ABC model choice consistency
     Approximate Bayesian computation; ABC for model choice; Model choice consistency: Formalised framework, Consistency results, Summary statistics, Conclusions
  96–97. The starting point
     Central question to the validation of ABC for model choice: When is a Bayes factor based on an insufficient statistic T(y) consistent?
     Note: the conclusion drawn on T(y) through B^T_12(y) necessarily differs from the conclusion drawn on y through B_12(y).
  98–101. A benchmark, if toy, example
     Comparison suggested by a referee of the PNAS paper [thanks]: [X, Cornuet, Marin, & Pillai, Aug. 2011]
     Model M_1: y ∼ N(θ_1, 1), opposed to model M_2: y ∼ L(θ_2, 1/√2), Laplace distribution with mean θ_2 and scale parameter 1/√2 (variance one).
     Four possible statistics:
       1. sample mean ȳ (sufficient for M_1 if not M_2);
       2. sample median med(y) (insufficient);
       3. sample variance var(y) (ancillary);
       4. median absolute deviation mad(y) = med(|y − med(y)|).
     [Figures, n = 100: distributions of the statistics under the Gauss and Laplace models]
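     The four statistics are straightforward to compute; a small Python helper (the test samples below are illustrative, not the slides' simulations):

     import numpy as np

     def benchmark_statistics(y):
         """Sample mean, median, variance and MAD for the normal/Laplace benchmark."""
         med = np.median(y)
         return {
             "mean": y.mean(),                        # sufficient under M1, not under M2
             "median": med,                           # insufficient
             "variance": y.var(ddof=1),               # ancillary (scale fixed in both models)
             "mad": np.median(np.abs(y - med)),       # median absolute deviation
         }

     rng = np.random.default_rng(6)
     print(benchmark_statistics(rng.normal(0.0, 1.0, 100)))
     print(benchmark_statistics(rng.laplace(0.0, 1.0 / np.sqrt(2), 100)))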
  102. Framework
     Starting point: the sample y = (y_1, ..., y_n) is the observed sample, not necessarily iid, with true distribution y ∼ P_n. Summary statistics T(y) = T^n = (T_1(y), T_2(y), ..., T_d(y)) ∈ R^d, with true distribution T^n ∼ G_n.
  103. Framework
     Comparison of
     – under M_1, y ∼ F_{1,n}(·|θ_1) where θ_1 ∈ Θ_1 ⊂ R^{p_1}
     – under M_2, y ∼ F_{2,n}(·|θ_2) where θ_2 ∈ Θ_2 ⊂ R^{p_2}
     turned into
     – under M_1, T(y) ∼ G_{1,n}(·|θ_1), and θ_1|T(y) ∼ π_1(·|T^n)
     – under M_2, T(y) ∼ G_{2,n}(·|θ_2), and θ_2|T(y) ∼ π_2(·|T^n)
  104–109. Assumptions
     A collection of asymptotic "standard" assumptions: [A1] is a standard central limit theorem under the true model; [A2] controls the large deviations of the estimator T^n from the estimand μ(θ); [A3] is the standard prior mass condition found in Bayesian asymptotics (d_i is the effective dimension of the parameter); [A4] restricts the behaviour of the model density against the true density. [Think CLT!]
     [A1] There exist a sequence {v_n} converging to +∞, a distribution Q, a symmetric d × d positive definite matrix V_0 and a vector μ_0 ∈ R^d such that
       v_n V_0^{−1/2} (T^n − μ_0) → Q in distribution under G_n, as n → ∞.
     [A2] For i = 1, 2, there exist sets F_{n,i} ⊂ Θ_i, functions μ_i(θ_i) and constants ε_i, τ_i, α_i > 0 such that for all τ > 0,
       sup_{θ_i ∈ F_{n,i}} G_{i,n}[ |T^n − μ_i(θ_i)| > τ |μ_i(θ_i) − μ_0| ∧ ε_i | θ_i ] ≲ v_n^{−α_i} (τ |μ_i(θ_i) − μ_0| ∧ ε_i)^{−α_i},
     with π_i(F^c_{n,i}) = o(v_n^{−τ_i}).
     [A3] For u > 0, let S_{n,i}(u) = {θ_i ∈ F_{n,i}; |μ_i(θ_i) − μ_0| ≤ u v_n^{−1}}. If inf{|μ_i(θ_i) − μ_0|; θ_i ∈ Θ_i} = 0, there exist constants d_i < τ_i ∧ α_i − 1 such that
       π_i(S_{n,i}(u)) ∼ u^{d_i} v_n^{−d_i},  ∀ u ≲ v_n.
     [A4] If inf{|μ_i(θ_i) − μ_0|; θ_i ∈ Θ_i} = 0, for any ε > 0, there exist U, δ > 0 and (E_n)_n such that, if θ_i ∈ S_{n,i}(U),
       E^c_n ⊂ {t; g_i(t|θ_i) < δ g_n(t)}  and  G_n(E_n) < ε.
  110–112. Effective dimension
     Understanding d_i in [A3]: d_i is defined only when μ_0 ∈ {μ_i(θ_i), θ_i ∈ Θ_i}, and
       π_i(θ_i : |μ_i(θ_i) − μ_0| < n^{−1/2}) = O(n^{−d_i/2})
     makes it the effective dimension of the model Θ_i around μ_0.
     When inf{|μ_i(θ_i) − μ_0|; θ_i ∈ Θ_i} = 0,
       m_i(T^n)/g_n(T^n) ∼ v_n^{−d_i},  thus  log(m_i(T^n)/g_n(T^n)) ∼ −d_i log v_n,
     and v_n^{−d_i} is the penalization factor resulting from integrating θ_i out, as the effective number of parameters appears in the DIC.
     In regular models, d_i is the dimension of T(Θ_i), leading to BIC; in non-regular models, d_i can be smaller.
  113. Asymptotic marginals
     Asymptotically, under [A1]–[A4],
       m_i(t) = ∫_{Θ_i} g_i(t|θ_i) π_i(θ_i) dθ_i
     is such that
     (i) if inf{|μ_i(θ_i) − μ_0|; θ_i ∈ Θ_i} = 0,
       C_l v_n^{d−d_i} ≤ m_i(T^n) ≤ C_u v_n^{d−d_i},
     and (ii) if inf{|μ_i(θ_i) − μ_0|; θ_i ∈ Θ_i} > 0,
       m_i(T^n) = o_{P_n}[v_n^{d−τ_i} + v_n^{d−α_i}].
  114. Within-model consistency
     Under the same assumptions, if inf{|μ_i(θ_i) − μ_0|; θ_i ∈ Θ_i} = 0, the posterior distribution of μ_i(θ_i) given T^n is consistent at rate 1/v_n, provided α_i ∧ τ_i > d_i.
  115–117. Between-model consistency
     Consequence of the above: the asymptotic behaviour of the Bayes factor is driven by the asymptotic mean value of T^n under both models. And only by this mean value!
     Indeed, if
       inf{|μ_0 − μ_2(θ_2)|; θ_2 ∈ Θ_2} = inf{|μ_0 − μ_1(θ_1)|; θ_1 ∈ Θ_1} = 0,
     then
       C_l v_n^{−(d_1−d_2)} ≤ m_1(T^n)/m_2(T^n) ≤ C_u v_n^{−(d_1−d_2)},
     where C_l, C_u = O_{P_n}(1), irrespective of the true model: the ratio only depends on the difference d_1 − d_2, hence no consistency.
     Else, if
       inf{|μ_0 − μ_2(θ_2)|; θ_2 ∈ Θ_2} > inf{|μ_0 − μ_1(θ_1)|; θ_1 ∈ Θ_1} = 0,
     then
       m_1(T^n)/m_2(T^n) ≥ C_u min( v_n^{−(d_1−α_2)}, v_n^{−(d_1−τ_2)} ).
  118–119. Consistency theorem
     If inf{|μ_0 − μ_2(θ_2)|; θ_2 ∈ Θ_2} = inf{|μ_0 − μ_1(θ_1)|; θ_1 ∈ Θ_1} = 0, the Bayes factor B^T_12 = O(v_n^{−(d_1−d_2)}), irrespective of the true model. It is inconsistent since it always picks the model with the smallest dimension.
     If P_n belongs to one of the two models and if μ_0 cannot be attained by the other one, i.e.
       0 = min(inf{|μ_0 − μ_i(θ_i)|; θ_i ∈ Θ_i}, i = 1, 2) < max(inf{|μ_0 − μ_i(θ_i)|; θ_i ∈ Θ_i}, i = 1, 2),
     then the Bayes factor B^T_12 is consistent.
  120–121. Consequences on summary statistics
     The Bayes factor is driven by the means μ_i(θ_i) and the relative position of μ_0 with respect to both sets {μ_i(θ_i); θ_i ∈ Θ_i}, i = 1, 2. For ABC, this implies the most likely statistics T^n are ancillary statistics with different mean values under both models. Else, if T^n asymptotically depends on some of the parameters of the models, it is quite likely that there exists θ_i ∈ Θ_i such that μ_i(θ_i) = μ_0, even though model M_1 is misspecified.
