Bayesian model choice (and some alternatives): Presentation Transcript

  • Bayesian model choice (and some alternatives) Christian P. Robert, Université Paris-Dauphine, IUF, & CREST, http://www.ceremade.dauphine.fr/~xian November 20, 2010 Christian P. Robert (Paris-Dauphine) Bayesian model choice November 20, 2010 1 / 64
  • Outline Anyone not shocked by the Bayesian theory of inference has not understood it Senn, BA., 2008 1 Introduction 2 Tests and model choice 3 Incoherent inferences Christian P. Robert (Paris-Dauphine) Bayesian model choice November 20, 2010 2 / 64
  • Vocabulary and concepts Bayesian inference is a coherent mathematical theory but I don’t trust it in scientific applications. Gelman, BA, 2008 1 Introduction Models The Bayesian framework Improper prior distributions Noninformative prior distributions 2 Tests and model choice 3 Incoherent inferences Christian P. Robert (Paris-Dauphine) Bayesian model choice November 20, 2010 3 / 64
  • Parametric model Bayesians promote the idea that a multiplicity of parameters can be handled via hierarchical, typically exchangeable, models, but it seems implausible that this could really work automatically [instead of] giving reasonable answers using minimal assumptions. Gelman, BA, 2008
    Observations x1, . . . , xn generated from a probability distribution
        fi(xi | θi, x1, . . . , xi−1) = fi(xi | θi, x1:i−1),   x = (x1, . . . , xn) ∼ f(x|θ),   θ = (θ1, . . . , θn)
    Associated likelihood
        ℓ(θ|x) = f(x|θ)
    [inverted density & starting point]
    Christian P. Robert (Paris-Dauphine) Bayesian model choice November 20, 2010 4 / 64
  • Bayes theorem 101 Bayes theorem = Inversion of probabilities
    If A and E are events such that P(E) ≠ 0, P(A|E) and P(E|A) are related by
        P(A|E) = P(E|A)P(A) / [ P(E|A)P(A) + P(E|A^c)P(A^c) ] = P(E|A)P(A) / P(E)
    [Thomas Bayes (?)]
    Christian P. Robert (Paris-Dauphine) Bayesian model choice November 20, 2010 5 / 64
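    A quick numerical sanity check of the inversion formula, in Python (the event probabilities below are arbitrary illustration values, not taken from the slides):

        # Bayes' inversion: P(A|E) from P(E|A), P(E|A^c) and the prior P(A)
        p_A = 0.3            # prior P(A), arbitrary
        p_E_given_A = 0.8    # P(E|A), arbitrary
        p_E_given_Ac = 0.2   # P(E|A^c), arbitrary

        p_E = p_E_given_A * p_A + p_E_given_Ac * (1 - p_A)   # law of total probability
        p_A_given_E = p_E_given_A * p_A / p_E                # Bayes' theorem
        print(p_A_given_E)                                   # 0.24 / 0.38, about 0.632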
  • Bayesian approach The impact of treating x as a fixed constant is to increase statistical power as an artefact Templeton, Molec. Ecol., 2009
    New perspective: Uncertainty on the parameters θ of a model modeled through a probability distribution π on Θ, called prior distribution
    Inference based on the distribution of θ conditional on x, π(θ|x), called posterior distribution
        π(θ|x) = f(x|θ)π(θ) / ∫ f(x|θ)π(θ) dθ
    Christian P. Robert (Paris-Dauphine) Bayesian model choice November 20, 2010 6 / 64
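    The posterior formula can be checked numerically by discretising θ; the sketch below uses a single normal observation and a normal prior, both arbitrary choices of mine rather than the slides':

        import numpy as np

        # pi(theta|x) proportional to f(x|theta) * pi(theta), normalised on a grid
        x = 1.2                                    # arbitrary observation, x ~ N(theta, 1)
        theta = np.linspace(-10, 10, 10001)
        d = theta[1] - theta[0]
        prior = np.exp(-theta**2 / (2 * 2.0**2))   # unnormalised N(0, 4) prior, arbitrary
        lik = np.exp(-(x - theta)**2 / 2)          # likelihood f(x|theta)
        post = prior * lik
        post /= post.sum() * d                     # divide by the marginal (the integral)

        print((theta * post).sum() * d)            # posterior mean, close to 4/5 * x = 0.96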
  • [Nonphilosophical] justifications Ignoring the sampling error of x undermines the statistical validity of all inferences made by the method Templeton, Molec. Ecol., 2009
    Semantic drift from unknown to random
    Actualization of the information on θ by extracting the information on θ contained in the observation x
    Allows incorporation of imperfect information in the decision process
    Unique mathematical way to condition upon the observations (conditional perspective)
    Unique way to give meaning to statements like P(θ > 0)
    Christian P. Robert (Paris-Dauphine) Bayesian model choice November 20, 2010 7 / 64
  • Posterior distribution Bayesian methods are presented as an automatic inference engine, and this raises suspicion in anyone with applied experience Gelman, BA, 2008
    π(θ|x) central to Bayesian inference
    Operates conditional upon the observations
    Incorporates the requirement of the Likelihood Principle
    Avoids averaging over the unobserved values of x
    Coherent updating of the information available on θ
    Provides a complete inferential machinery
    Christian P. Robert (Paris-Dauphine) Bayesian model choice November 20, 2010 8 / 64
  • Improper distributions If we take P(dσ) ∝ dσ as a statement that σ may have any value between 0 and ∞ (...), we must use ∞ instead of 1 to denote certainty. Jeffreys, ToP, 1939
    Necessary extension from a prior distribution to a prior σ-finite measure π such that
        ∫_Θ π(θ) dθ = +∞
    Improper prior distribution
    [Weird? Inappropriate?? report!!]
    Christian P. Robert (Paris-Dauphine) Bayesian model choice November 20, 2010 9 / 64
  • Justifications If the parameter may have any value from −∞ to +∞, its prior probability should be taken as uniformly distributed Jeffreys, ToP, 1939
    Automated prior determination often leads to improper priors
    1 Similar performances of estimators derived from these generalized distributions
    2 Improper priors as limits of proper distributions in many [mathematical] senses
    Christian P. Robert (Paris-Dauphine) Bayesian model choice November 20, 2010 10 / 64
  • Further justifications There is no good objective principle for choosing a noninformative prior (even if that concept were mathematically defined, which it is not) Gelman, BA, 2008
    4 Robust answer against possible misspecifications of the prior
    5 Frequentist justifications, such as: (i) minimaxity (ii) admissibility (iii) invariance (Haar measure)
    6 Improper priors [much] preferred to vague proper priors like N(0, 10^6)
    Christian P. Robert (Paris-Dauphine) Bayesian model choice November 20, 2010 11 / 64
  • Validation The mistake is to think of them as representing ignorance Lindley, JASA, 1990
    Extension of the posterior distribution π(θ|x) associated with an improper prior π as given by Bayes’s formula
        π(θ|x) = f(x|θ)π(θ) / ∫_Θ f(x|θ)π(θ) dθ,
    when
        ∫_Θ f(x|θ)π(θ) dθ < ∞
    Delete emotionally loaded names
    Christian P. Robert (Paris-Dauphine) Bayesian model choice November 20, 2010 12 / 64
  • Noninformative priors ...cannot be expected to represent exactly total ignorance about the problem, but should rather be taken as reference priors, upon which everyone could fall back when the prior information is missing. Kass and Wasserman, JASA, 1996
    What if all we know is that we know “nothing” ?!
    In the absence of prior information, prior distributions solely derived from the sample distribution f(x|θ)
    Difficulty with uniform priors, lacking invariance properties. Rather use Jeffreys’ prior. [Jeffreys, 1939; Robert, Chopin & Rousseau, 2009]
    Christian P. Robert (Paris-Dauphine) Bayesian model choice November 20, 2010 13 / 64
  • Tests and model choice The Jeffreys-subjective synthesis betrays a much more dangerous confusion than the Neyman-Pearson-Fisher synthesis as regards hypothesis tests Senn, BA, 2008 1 Introduction 2 Tests and model choice Bayesian tests Opposition to classical tests Model choice Pseudo-Bayes factors Compatible priors Variable selection 3 Incoherent inferences Christian P. Robert (Paris-Dauphine) Bayesian model choice November 20, 2010 14 / 64
  • Construction of Bayes tests What is almost never used, however, is the Jeffreys significance test. Senn, BA, 2008
    Definition (Test) Given a hypothesis H0 : θ ∈ Θ0 on the parameter θ ∈ Θ of a statistical model, a test is a statistical procedure that takes its values in {0, 1}.
    Example (Normal mean) For x ∼ N(θ, 1), decide whether or not θ ≤ 0.
    Christian P. Robert (Paris-Dauphine) Bayesian model choice November 20, 2010 15 / 64
  • Decision-theoretic perspective Loss functions [are] not relevant to statistical inference Gelman, BA, 2008
    Theorem (Optimal Bayes decision) Under the 0-1 loss function
        L(θ, d) = 0 if d = I_Θ0(θ),   a0 if d = 1 and θ ∉ Θ0,   a1 if d = 0 and θ ∈ Θ0,
    the Bayes procedure is
        δ^π(x) = 1 if Pr^π(θ ∈ Θ0 | x) ≥ a0/(a0 + a1),   0 otherwise
    Christian P. Robert (Paris-Dauphine) Bayesian model choice November 20, 2010 16 / 64
  • A function of posterior probabilities The method posits two or more alternative hypotheses and tests their relative fits to some observed statistics — Templeton, Mol. Ecol., 2009
    Definition (Bayes factors) For hypotheses H0 : θ ∈ Θ0 vs. Ha : θ ∉ Θ0,
        B01 = [ π(Θ0 | x) / π(Θ0^c | x) ] ÷ [ π(Θ0) / π(Θ0^c) ] = ∫_Θ0 f(x|θ)π0(θ) dθ / ∫_Θ0^c f(x|θ)π1(θ) dθ
    [Good, 1958 & Jeffreys, 1961]
    Christian P. Robert (Paris-Dauphine) Bayesian model choice November 20, 2010 17 / 64
  • Self-contained concept Having a high relative probability does not mean that a hypothesis is true or supported by the data — Templeton, Mol. Ecol., 2009
    Non-decision-theoretic: eliminates choice of π(Θ0)
    Bayesian/marginal equivalent to the likelihood ratio
    Jeffreys’ scale of evidence:
        if log10(B10^π) is between 0 and 0.5, the evidence against H0 is weak,
        if log10(B10^π) is between 0.5 and 1, the evidence is substantial,
        if log10(B10^π) is between 1 and 2, the evidence is strong, and
        if log10(B10^π) is above 2, the evidence is decisive
    Christian P. Robert (Paris-Dauphine) Bayesian model choice November 20, 2010 18 / 64
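    For a concrete (made-up) instance of the definition, the sketch below approximates B01 for the one-sided problem H0 : θ ≤ 0 vs Ha : θ > 0 of the earlier normal-mean example, restricting an arbitrary N(0, 4) prior to each half-line and reading off Jeffreys' scale:

        import numpy as np

        x, tau = 0.5, 2.0                           # arbitrary observation and prior scale
        theta = np.linspace(-15, 15, 200001)
        prior = np.exp(-theta**2 / (2 * tau**2))
        lik = np.exp(-(x - theta)**2 / 2)
        neg, pos = theta <= 0, theta > 0

        pi0 = prior * neg / prior[neg].sum()        # prior restricted to Theta_0 (grid weights)
        pi1 = prior * pos / prior[pos].sum()        # prior restricted to Theta_0^c
        B01 = (lik * pi0).sum() / (lik * pi1).sum()

        logB10 = np.log10(1 / B01)
        scale = ("weak" if logB10 < 0.5 else "substantial" if logB10 < 1
                 else "strong" if logB10 < 2 else "decisive")
        print(round(B01, 3), round(logB10, 3), scale)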
  • A major modification Considering whether a location parameter α is 0. The prior is uniform and we should have to take f (α) = 0 and B10 would always be infinite Jeffreys, ToP, 1939 When the null hypothesis is supported by a set of measure 0, π(Θ0 ) = 0 and thus π(Θ0 |x) = 0. [End of the story?!] Christian P. Robert (Paris-Dauphine) Bayesian model choice November 20, 2010 19 / 64
  • Changing the prior to fit the hypotheses Given that some logical overlap is common when dealing with complex models, this means that much of the literature is invalid Templeton, Trends in Ecology and Evolution, 2010
    Requirement: Define prior distributions under both assumptions,
        π0(θ) ∝ π(θ) I_Θ0(θ),   π1(θ) ∝ π(θ) I_Θ1(θ),
    [under the standard dominating measures on Θ0 and Θ1], leading to
        π(θ) = ρ0 π0(θ) + ρ1 π1(θ)
    Christian P. Robert (Paris-Dauphine) Bayesian model choice November 20, 2010 20 / 64
  • Point null hypotheses I have no patience for statistical methods that assign positive probability to point hypotheses of the θ = 0 type that can never actually be true Gelman, BA, 2008
    Take ρ0 = Pr^π(θ = θ0) and g1 the prior density under Ha. Then
        π(Θ0 | x) = f(x|θ0)ρ0 / ∫ f(x|θ)π(θ) dθ = f(x|θ0)ρ0 / [ f(x|θ0)ρ0 + (1 − ρ0)m1(x) ]
    and the Bayes factor is
        B01^π(x) = [ f(x|θ0)ρ0 / m1(x)(1 − ρ0) ] ÷ [ ρ0 / (1 − ρ0) ] = f(x|θ0) / m1(x)
    Christian P. Robert (Paris-Dauphine) Bayesian model choice November 20, 2010 21 / 64
  • Point null hypotheses (cont’d) Example (Normal mean) Test of H0 : θ = 0 when x ∼ N(θ, 1): we take π1 as N(0, τ²), so that
        m1(x) / f(x|0) = √( σ² / (σ² + τ²) ) exp( τ²x² / 2σ²(σ² + τ²) )
    and the posterior probability is
        τ / x    0       0.68    1.28    1.96
        1        0.586   0.557   0.484   0.351
        10       0.768   0.729   0.612   0.366
    Christian P. Robert (Paris-Dauphine) Bayesian model choice November 20, 2010 22 / 64
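    Under the usual default ρ0 = 1/2 (an assumption on my part, not stated on the slide) and σ = τ = 1, the formula above reproduces the first row of the table:

        import numpy as np

        # pi(Theta_0 | x) for H0: theta = 0, x ~ N(theta, sigma^2), g_1 = N(0, tau^2)
        def post_prob_H0(x, sigma=1.0, tau=1.0, rho0=0.5):
            m1_over_f0 = np.sqrt(sigma**2 / (sigma**2 + tau**2)) * \
                np.exp(tau**2 * x**2 / (2 * sigma**2 * (sigma**2 + tau**2)))
            return 1.0 / (1.0 + (1 - rho0) / rho0 * m1_over_f0)

        for x in (0.0, 0.68, 1.28, 1.96):
            print(x, round(post_prob_H0(x), 3))   # 0.586, 0.557, 0.484, 0.351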
  • Comparison with classical tests The 95 percent frequentist intervals will live up to their advertised coverage claims — Wasserman, BA, 2008 Standard/classical answer Definition (p-value) The p-value p(x) associated with a test is the largest significance level for which H0 is rejected Christian P. Robert (Paris-Dauphine) Bayesian model choice November 20, 2010 23 / 64
  • Problems with p-values The use of P implies that a hypothesis that may be true may be rejected because it had not predicted observable results that have not occurred Jeffreys, ToP, 1939
    Evaluation of the wrong quantity, namely the probability of exceeding the observed quantity (wrong conditioning)
    Evaluation only under the null hypothesis
    Huge numerical difference with the Bayesian range of answers
    Christian P. Robert (Paris-Dauphine) Bayesian model choice November 20, 2010 24 / 64
  • Bayesian lower bounds If the Bayes estimator has good frequency behavior then we might as well use the frequentist method. If it has bad frequency behavior then we shouldn’t use it. Wasserman, BA, 2008
    Least favourable Bayesian answer is
        B(x, GA) = inf_{g ∈ GA} f(x|θ0) / ∫_Θ f(x|θ) g(θ) dθ,
    i.e., if there exists an MLE θ̂(x),
        B(x, GA) = f(x|θ0) / f(x|θ̂(x))
    Christian P. Robert (Paris-Dauphine) Bayesian model choice November 20, 2010 25 / 64
  • Illustration Example (Normal case) When x ∼ N(θ, 1) and H0 : θ0 = 0, the lower bounds are
        B(x, GA) = e^{−x²/2}   and   P(x, GA) = (1 + e^{x²/2})^{−1},
    i.e.
        p-value   0.10    0.05    0.01    0.001
        P         0.205   0.128   0.035   0.004
        B         0.256   0.146   0.036   0.004
    [Quite different!]
    Christian P. Robert (Paris-Dauphine) Bayesian model choice November 20, 2010 26 / 64
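    Assuming the p-values refer to the two-sided normal test (my reading of the table), the bounds are easy to recompute and agree with the slide to rounding:

        import numpy as np
        from scipy.stats import norm

        for p in (0.10, 0.05, 0.01, 0.001):
            x = norm.ppf(1 - p / 2)              # |x| matching a two-sided p-value
            B = np.exp(-x**2 / 2)                # lower bound on the Bayes factor
            P = 1.0 / (1.0 + np.exp(x**2 / 2))   # lower bound on pi(Theta_0 | x)
            print(p, round(P, 3), round(B, 3))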
  • Model choice and model comparison There is no null hypothesis, which complicates the computation of sampling error Templeton, Mol. Ecol., 2009 Choice among models: Several models available for the same observation(s) Mi : x ∼ fi (x|θi ), i∈I where I can be finite or infinite Christian P. Robert (Paris-Dauphine) Bayesian model choice November 20, 2010 27 / 64
  • Bayesian resolution The posterior probabilities are constructed by using a numerator that is a function of the observation for a particular model, then divided by a denominator that ensures that the ”probabilities” sum to one. — Templeton, Mol. Ecol., 2009
    Probabilise the entire model/parameter space
    allocate probabilities pi to all models Mi
    define priors πi(θi) for each parameter space Θi
    compute
        π(Mi | x) = pi ∫_Θi fi(x|θi)πi(θi) dθi / Σ_j pj ∫_Θj fj(x|θj)πj(θj) dθj
    Christian P. Robert (Paris-Dauphine) Bayesian model choice November 20, 2010 28 / 64
  • Bayesian resolution (2) The numerators are not co-measurable across hypotheses, and the denominators are sums of non-co-measurable entities. This means that it is mathematically impossible for them to be probabilities — Templeton, Mol. Ecol., 2009
    take the largest π(Mi | x) to determine the “best” model, or use the averaged predictive
        Σ_j π(Mj | x) ∫_Θj fj(x′|θj) πj(θj | x) dθj
    Christian P. Robert (Paris-Dauphine) Bayesian model choice November 20, 2010 29 / 64
  • Natural Occam’s razor Pluralitas non est ponenda sine necessitate
    Variation is random until the contrary is shown; and new parameters in laws, when they are suggested, must be tested one at a time, unless there is specific reason to the contrary. Jeffreys, ToP, 1939
    The Bayesian approach naturally weights differently models with different parameter dimensions (BIC being an approximate log-Bayes factor).
    Christian P. Robert (Paris-Dauphine) Bayesian model choice November 20, 2010 30 / 64
  • A fundamental difficulty 1) ABC can and does produce results that are mathematically impossible; 2) the “posterior probabilities” of ABC cannot possibly be true probability measures; and 3) ABC is statistically incoherent. Templeton, Trends in Ecology and Evolution, 2010
    Improper priors are NOT allowed here: if
        ∫_Θ1 π1(dθ1) = ∞   or   ∫_Θ2 π2(dθ2) = ∞
    then either π1 or π2 cannot be coherently normalised, but the normalisation matters in the Bayes factor
    Christian P. Robert (Paris-Dauphine) Bayesian model choice November 20, 2010 31 / 64
  • Normal illustration Take x ∼ N(θ, 1) and H0 : θ = 0. Impact of the constant:
        x            0.0      1.0      1.65     1.96     2.58
        π(θ) = 1     0.285    0.195    0.089    0.055    0.014
        π(θ) = 10    0.0384   0.0236   0.0101   0.00581  0.00143
    Christian P. Robert (Paris-Dauphine) Bayesian model choice November 20, 2010 32 / 64
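    The table can be recovered (approximately) by noting that, for π(θ) = k, the marginal under the alternative is ∫ N(x; θ, 1) k dθ = k, so the reported value is φ(x)/(φ(x) + k); equal prior weights on the two hypotheses are my assumption here:

        from scipy.stats import norm

        def pseudo_post_prob_H0(x, k):
            return norm.pdf(x) / (norm.pdf(x) + k)   # depends on the arbitrary constant k

        for k in (1.0, 10.0):
            print(k, [round(pseudo_post_prob_H0(x, k), 4)
                      for x in (0.0, 1.0, 1.65, 1.96, 2.58)])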
  • Vague proper priors are NOT the solution Taking a proper prior with a “very large” variance (e.g., in BUGS) will most often result in an undefined or ill-defined limit
    Example (Lindley’s paradox) If testing H0 : θ = 0 when observing x ∼ N(θ, 1), under a normal N(0, α) prior π1(θ),
        B01(x) −→ +∞   as α −→ ∞
    Christian P. Robert (Paris-Dauphine) Bayesian model choice November 20, 2010 33 / 64
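    A numerical sketch of the paradox (the numbers are mine): the Bayes factor in favour of the point null grows without bound as the prior variance α increases, whatever fixed x is observed, which is exactly why a vague proper prior cannot stand in for a carefully normalised one in testing.

        import numpy as np
        from scipy.stats import norm

        x = 1.96                                   # borderline 'significant' observation
        for alpha in (1.0, 10.0, 100.0, 1e4, 1e6):
            B01 = norm.pdf(x, 0, 1) / norm.pdf(x, 0, np.sqrt(1 + alpha))
            print(alpha, round(B01, 3))            # keeps increasing with alpha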
  • Learning from the sample It is possible for data to discriminate among a set of hypotheses without saying anything about a proposition that is common to all the alternatives considered. Sober, Evidence and Evolution, 2008
    Definition (Learning sample) Given an improper prior π, (x1, . . . , xn) is a learning sample if π(·|x1, . . . , xn) is proper and a minimal learning sample if none of its subsamples is a learning sample
    There is just enough information in a minimal learning sample to make inference about θ under the prior π
    Christian P. Robert (Paris-Dauphine) Bayesian model choice November 20, 2010 34 / 64
  • Pseudo-Bayes factors Idea: Use a first part x[i] of the data x to make the prior proper:
    πi improper but πi(·|x[i]) proper, and
        ∫ fi(x[n/i] | θi) πi(θi | x[i]) dθi / ∫ fj(x[n/i] | θj) πj(θj | x[i]) dθj
    is independent of the normalizing constant
    Use the remaining part x[n/i] to run the test as if πj(θj | x[i]) was the true prior
    Christian P. Robert (Paris-Dauphine) Bayesian model choice November 20, 2010 35 / 64
  • Motivation Provides a working principle for improper priors
    Gather enough information from data to achieve properness and use this properness to run the test on remaining data
    does not use the data x twice as in Aitkin’s (1991, 2010) Back later!
    Christian P. Robert (Paris-Dauphine) Bayesian model choice November 20, 2010 36 / 64
  • Fractional Bayes factor To test a theory, you need to test it against alternatives. Sober, Evidence and Evolution, 2008
    Idea: use directly the likelihood to separate the training sample from the testing sample
        B12^F = B12(x) × ∫ L2^b(θ2) π2(θ2) dθ2 / ∫ L1^b(θ1) π1(θ1) dθ1
    [O’Hagan, 1995]
    Proportion b of the sample used to gain properness
    Christian P. Robert (Paris-Dauphine) Bayesian model choice November 20, 2010 37 / 64
  • Fractional Bayes factor (cont’d) Example (Normal mean)
        B12^F = (1/√b) e^{n(b−1) x̄_n² / 2}
    corresponds to the exact Bayes factor for the prior N(0, (1 − b)/nb)
    If b is constant, the prior variance goes to 0
    If b = 1/n, the prior variance stabilises around 1
    If b = n^{−α}, α < 1, the prior variance goes to 0 too
    Christian P. Robert (Paris-Dauphine) Bayesian model choice November 20, 2010 38 / 64
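    Plugging numbers into the displayed expression (n and x̄_n below are arbitrary, chosen only to see how the factor reacts to b):

        import numpy as np

        def fbf_normal_mean(n, xbar, b):
            # B12^F = b^{-1/2} * exp(n * (b - 1) * xbar^2 / 2), as on the slide
            return np.exp(n * (b - 1) * xbar**2 / 2) / np.sqrt(b)

        n, xbar = 100, 0.3
        for b in (0.5, 1.0 / n, n ** -0.5):
            print(b, round(fbf_normal_mean(n, xbar, b), 4))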
  • Compatibility principle Further complicating dimensionality of test statistics is the fact that the models are often not nested, and one model may contain parameters that do not have analogues in the other models and vice versa Templeton, Mol. Ecol., 2009
    Difficulty of finding simultaneously priors on a collection of models
    Easier to start from a single prior on a “big” [encompassing] model and to derive others from a coherence principle [Dawid & Lauritzen, 2000]
    Christian P. Robert (Paris-Dauphine) Bayesian model choice November 20, 2010 39 / 64
  • An illustration for linear regression In the case M1 and M2 are two nested Gaussian linear regression models with Zellner’s g-priors and the same variance σ² ∼ π(σ²):
        M1 : y | β1, σ² ∼ N(X1 β1, σ² In) with β1 | σ² ∼ N(s1, σ² n1 (X1^T X1)^{−1}), where X1 is an (n × k1) matrix of rank k1 ≤ n
        M2 : y | β2, σ² ∼ N(X2 β2, σ² In) with β2 | σ² ∼ N(s2, σ² n2 (X2^T X2)^{−1}), where X2 is an (n × k2) matrix with span(X2) ⊆ span(X1)
    [© Marin & Robert, Bayesian Core]
    Christian P. Robert (Paris-Dauphine) Bayesian model choice November 20, 2010 40 / 64
  • Compatible g-priors I don’t see any role for squared error loss, minimax, or the rest of what is sometimes called statistical decision theory Gelman, BA, 2008
    Since σ² is a nuisance parameter, minimize the Kullback-Leibler divergence between both marginal distributions conditional on σ², m1(y|σ²; s1, n1) and m2(y|σ²; s2, n2), with solution
        β2 | X2, σ² ∼ N(s2*, σ² n2* (X2^T X2)^{−1})
    with
        s2* = (X2^T X2)^{−1} X2^T X1 s1,   n2* = n1
    Christian P. Robert (Paris-Dauphine) Bayesian model choice November 20, 2010 41 / 64
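    A numerical sketch of this prior mean, which is simply the least-squares projection of the encompassing prior mean X1 s1 onto the columns of X2 (the design and s1 below are simulated, my own choices):

        import numpy as np

        rng = np.random.default_rng(0)
        n, k1, k2 = 30, 4, 2
        X1 = rng.normal(size=(n, k1))
        X2 = X1[:, :k2]                        # nested design: span(X2) inside span(X1)
        s1 = np.array([1.0, -0.5, 0.3, 0.0])   # arbitrary prior mean under M1

        s2_star = np.linalg.solve(X2.T @ X2, X2.T @ (X1 @ s1))   # (X2'X2)^{-1} X2'X1 s1
        print(s2_star)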
  • Symmetrised compatible priors If those prior probabilities are obscure, the same will be true of the posterior probabilities — Sober, Evidence and Evolution, 2008
    Postulate: Previous principle requires embedded models (or an encompassing model) and proper priors, while being hard to implement outside exponential families
    We determine prior measures on two models M1 and M2, π1 and π2, directly by a compatibility principle.
    Christian P. Robert (Paris-Dauphine) Bayesian model choice November 20, 2010 42 / 64
  • Generalised expected posterior priors [Pérez & Berger, 2000]
    EPP Principle: Starting from reference priors π1^N and π2^N, substitute prior distributions π1 and π2 that solve the system of integral equations
        π1(θ1) = ∫_X π1^N(θ1 | x) m2(x) dx   and   π2(θ2) = ∫_X π2^N(θ2 | x) m1(x) dx,
    where x is an imaginary minimal training sample and m1, m2 are the marginals associated with π1 and π2 respectively.
    Christian P. Robert (Paris-Dauphine) Bayesian model choice November 20, 2010 43 / 64
  • Motivations Eliminates the “imaginary observation” device and proper-isation through part of the data by integration under the “truth”
    Assumes that both models are equally valid and equipped with ideal unknown priors πi, i = 1, 2, that yield “true” marginals balancing each model wrt the other
    For a given π1, π2 is an expected posterior prior
    Using both equations introduces symmetry into the game
    Christian P. Robert (Paris-Dauphine) Bayesian model choice November 20, 2010 44 / 64
  • Bayesian coherence Logical overlap is the norm for the complex models analyzed with ABC, so many ABC posterior model probabilities published to date are wrong. Templeton, PNAS, 2009
    Theorem (True Bayes factor) If π1 and π2 are the EPPs and if their marginals are finite, then the corresponding Bayes factor B1,2(x) is either a (true) Bayes factor or a limit of (true) Bayes factors.
    Obviously only interesting when both π1 and π2 are improper.
    Christian P. Robert (Paris-Dauphine) Bayesian model choice November 20, 2010 45 / 64
  • Variable selection Regression setup where y is regressed on a set {x1, . . . , xp} of p potential explanatory regressors (plus intercept)
    Corresponding 2^p submodels Mγ, where γ ∈ Γ = {0, 1}^p indicates inclusion/exclusion of variables by a binary representation,
    e.g. γ = 101001011 means that x1, x3, x6, x8 and x9 are included.
    Christian P. Robert (Paris-Dauphine) Bayesian model choice November 20, 2010 46 / 64
  • Notations For model Mγ, qγ variables are included
    t1(γ) = {t1,1(γ), . . . , t1,qγ(γ)} are the indices of those variables and t0(γ) the indices of the variables not included
    For β ∈ R^{p+1},
        β_{t1(γ)} = (β0, β_{t1,1(γ)}, . . . , β_{t1,qγ(γ)})   and   X_{t1(γ)} = [ 1n | x_{t1,1(γ)} | . . . | x_{t1,qγ(γ)} ]
    Submodel Mγ is thus
        y | β, γ, σ² ∼ N(X_{t1(γ)} β_{t1(γ)}, σ² In)
    Christian P. Robert (Paris-Dauphine) Bayesian model choice November 20, 2010 47 / 64
  • Global and compatible priors Use Zellner’s g-prior, i.e. a normal prior for β conditional on σ²,
        β | σ² ∼ N(β̃, c σ² (X^T X)^{−1})
    and a Jeffreys prior for σ², π(σ²) ∝ σ^{−2}
    Resulting compatible prior
        β_{t1(γ)} ∼ N( (X_{t1(γ)}^T X_{t1(γ)})^{−1} X_{t1(γ)}^T X β̃, c σ² (X_{t1(γ)}^T X_{t1(γ)})^{−1} )
    Christian P. Robert (Paris-Dauphine) Bayesian model choice November 20, 2010 48 / 64
  • Posterior model probability Can be obtained in closed form:
        π(γ | y) ∝ (c + 1)^{−(qγ+1)/2} [ y^T y − c y^T P1 y / (c + 1) + β̃^T X^T P1 X β̃ / (c + 1) − 2 y^T P1 X β̃ / (c + 1) ]^{−n/2}
    Conditionally on γ, the posterior distributions of β and σ² are
        β_{t1(γ)} | σ², y, γ ∼ N( c/(c + 1) (U1 y + U1 X β̃ / c), σ² c/(c + 1) (X_{t1(γ)}^T X_{t1(γ)})^{−1} ),
        σ² | y, γ ∼ IG( n/2, y^T y / 2 − c y^T P1 y / 2(c + 1) + β̃^T X^T P1 X β̃ / 2(c + 1) − y^T P1 X β̃ / (c + 1) )
    Christian P. Robert (Paris-Dauphine) Bayesian model choice November 20, 2010 49 / 64
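    The closed form makes exhaustive enumeration of the submodels straightforward; the sketch below uses β̃ = 0 (so only the first two terms of the bracket remain), a fixed c and simulated data, whereas the noninformative treatment on the next slide would instead integrate c out. All numerical choices here are mine.

        import numpy as np
        from itertools import product

        rng = np.random.default_rng(1)
        n, p, c = 40, 3, 100.0
        X = rng.normal(size=(n, p))
        y = 1.0 + 2.0 * X[:, 0] - 1.0 * X[:, 2] + rng.normal(size=n)

        def log_post(gamma):
            cols = [np.ones(n)] + [X[:, j] for j in range(p) if gamma[j]]
            Xg = np.column_stack(cols)                       # intercept always included
            q = Xg.shape[1] - 1                              # q_gamma
            P1y = Xg @ np.linalg.solve(Xg.T @ Xg, Xg.T @ y)  # projection of y on span(Xg)
            quad = y @ y - c / (c + 1) * (y @ P1y)           # bracket with beta_tilde = 0
            return -(q + 1) / 2 * np.log(c + 1) - n / 2 * np.log(quad)

        gammas = list(product((0, 1), repeat=p))
        lp = np.array([log_post(g) for g in gammas])
        probs = np.exp(lp - lp.max()); probs /= probs.sum()
        for g, pr in sorted(zip(gammas, probs), key=lambda t: -t[1]):
            print(g, round(pr, 3))                           # (1, 0, 1) should dominate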
  • Noninformative case Use the same compatible informative g-prior distribution with β̃ = 0_{p+1} and a hierarchical diffuse prior distribution on c,
        π(c) ∝ c^{−1} I_{N*}(c)   or   π(c) ∝ c^{−1} I_{c>0}
    The choice of this hierarchical diffuse prior distribution on c is due to the sensitivity of the model posterior to large values of c: taking β̃ = 0_{p+1} and c large does not work
    Christian P. Robert (Paris-Dauphine) Bayesian model choice November 20, 2010 50 / 64
  • Processionary caterpillar Influence of some forest settlement characteristics on the development of caterpillar colonies
    Response y: log-transform of the average number of nests of caterpillars per tree on an area of 500 square meters (n = 33 areas)
    [© Marin & Robert, Bayesian Core]
    Christian P. Robert (Paris-Dauphine) Bayesian model choice November 20, 2010 51 / 64
  • Processionary caterpillar (cont’d) Potential explanatory variables:
        x1 altitude (in meters)
        x2 slope (in degrees)
        x3 number of pines in the square
        x4 height (in meters) of the tree at the center of the square
        x5 diameter of the tree at the center of the square
        x6 index of the settlement density
        x7 orientation of the square (from 1 if southbound to 2 otherwise)
        x8 height (in meters) of the dominant tree
        x9 number of vegetation strata
        x10 mix settlement index (from 1 if not mixed to 2 if mixed)
    Christian P. Robert (Paris-Dauphine) Bayesian model choice November 20, 2010 52 / 64
  • Bayesian regression output
                      Estimate    BF        log10(BF)
        (Intercept)    9.2714    26.334      1.4205  (***)
        X1            -0.0037     7.0839     0.8502  (**)
        X2            -0.0454     3.6850     0.5664  (**)
        X3             0.0573     0.4356    -0.3609
        X4            -1.0905     2.8314     0.4520  (*)
        X5             0.1953     2.5157     0.4007  (*)
        X6            -0.3008     0.3621    -0.4412
        X7            -0.2002     0.3627    -0.4404
        X8             0.1526     0.4589    -0.3383
        X9            -1.0835     0.9069    -0.0424
        X10           -0.3651     0.4132    -0.3838
    evidence against H0: (****) decisive, (***) strong, (**) substantial, (*) poor
    Christian P. Robert (Paris-Dauphine) Bayesian model choice November 20, 2010 53 / 64
  • Bayesian variable selection
        t1(γ)               π(γ | y, X)
        0,1,2,4,5           0.0929
        0,1,2,4,5,9         0.0325
        0,1,2,4,5,10        0.0295
        0,1,2,4,5,7         0.0231
        0,1,2,4,5,8         0.0228
        0,1,2,4,5,6         0.0228
        0,1,2,3,4,5         0.0224
        0,1,2,3,4,5,9       0.0167
        0,1,2,4,5,6,9       0.0167
        0,1,2,4,5,8,9       0.0137
    Noninformative G-prior model choice
    Christian P. Robert (Paris-Dauphine) Bayesian model choice November 20, 2010 54 / 64
  • Fringe alternatives 1 Introduction 2 Tests and model choice 3 Incoherent inferences Templeton’s debate Bayes/likelihood fusion Christian P. Robert (Paris-Dauphine) Bayesian model choice November 20, 2010 55 / 64
  • A revealing confusion In statistics, coherent measures of fit of nested and overlapping composite hypotheses are technically those measures that are consistent with the constraints of formal logic. For example, the probability of the nested special case must be less than or equal to the probability of the general model within which the special case is nested. Any statistic that assigns greater probability to the special case is said to be incoherent. Templeton, PNAS, 2009 Christian P. Robert (Paris-Dauphine) Bayesian model choice November 20, 2010 56 / 64
  • ABC algorithm Instead of evaluating hypotheses in terms of how probable they say the data are, we evaluate them by estimating how accurately they’ll predict new data when fitted to old — Sober, Evidence and Evolution, 2008
    Algorithm 1 Likelihood-free rejection sampler
        for i = 1 to N do
          repeat
            generate θ′ from the prior distribution π(·)
            generate z from the likelihood f(·|θ′)
          until ρ{η(z), η(y)} ≤ ε
          set θi = θ′
        end for
    where η(y) defines a (not necessarily sufficient) statistic [Pritchard et al., 1999]
    Christian P. Robert (Paris-Dauphine) Bayesian model choice November 20, 2010 57 / 64
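    Algorithm 1 translates almost line for line into Python; the toy model below (normal data, normal prior, sample mean as summary statistic) is my own choice, used only to make the sketch runnable:

        import numpy as np

        rng = np.random.default_rng(2)
        n, eps, N = 50, 0.05, 1000
        y = rng.normal(1.5, 1.0, size=n)            # stand-in for the observed data
        eta_y = y.mean()                            # eta(y): sample mean (sufficient here)

        samples = []
        while len(samples) < N:
            theta = rng.normal(0.0, np.sqrt(10.0))  # generate theta' from the prior
            z = rng.normal(theta, 1.0, size=n)      # generate z from f(.|theta')
            if abs(z.mean() - eta_y) <= eps:        # rho{eta(z), eta(y)} <= epsilon
                samples.append(theta)               # set theta_i = theta'

        print(np.mean(samples), np.std(samples))

    Because the summary statistic is sufficient in this toy case, the accepted θ values are close to a sample from the exact posterior once ε is small.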
  • ABC output The likelihood-free algorithm samples from the marginal in z of
        π_ε(θ, z | y) = π(θ) f(z|θ) I_{A_{ε,y}}(z) / ∫_{A_{ε,y} × Θ} π(θ) f(z|θ) dz dθ,
    where A_{ε,y} = {z ∈ D | ρ(η(z), η(y)) < ε}.
    The idea behind ABC is that the summary statistics coupled with a small tolerance should provide a good approximation of the posterior distribution:
        π_ε(θ | y) = ∫ π_ε(θ, z | y) dz ≈ π(θ | y)
    Christian P. Robert (Paris-Dauphine) Bayesian model choice November 20, 2010 58 / 64
  • The “Great ABC controversy” Ongoing controversy in phylogeographic genetics about the validity of using ABC for testing
    Against: Templeton (2008, 2009, 2010a, 2010b, 2010c) argues that nested hypotheses cannot have higher probabilities than nesting hypotheses (!):
        “The probability of the nested special case must be less than or equal to the probability of the general model within which the special case is nested. Any statistic that assigns greater probability to the special case is incoherent. An example of incoherence is shown for the ABC method.” Templeton, PNAS, 2010
        “Incoherent methods, such as ABC, Bayes factor, or any simulation approach that treats all hypotheses as mutually exclusive, should never be used with logically overlapping hypotheses.” Templeton, PNAS, 2010
        “The central equation of ABC, P(Hi | H, S*) = Gi(||Si − S*||) Πi / Σ_{j=1}^n Gj(||Sj − S*||) Πj, is inherently incoherent. This fundamental equation is mathematically incorrect in every instance of overlap.” Templeton, PNAS, 2010
    Replies: Fagundes et al. (2008), Beaumont et al. (2010), Berger et al. (2010), Csilléry et al. (2010) point out that the criticisms are addressed at [Bayesian] model-based inference and have nothing to do with ABC:
        “ABC is a statistically valid approach, alongside other computational statistical techniques that have been successfully used to infer parameters and compare models in population genetics.” Beaumont et al., Molec. Ecology, 2010
        “The confusion seems to arise from misunderstanding the difference between scientific hypotheses and their mathematical representation. Bayes’ theorem shows that the simpler model can indeed have a much higher posterior probability.” Berger et al., PNAS, 2010
    Christian P. Robert (Paris-Dauphine) Bayesian model choice November 20, 2010 59 / 64
  • Aitkin’s alternative Without a specific alternative, the best we can do is to make posterior probability statements about µ and transfer these to the posterior distribution of the likelihood ratio. Aitkin, Statistical Inference, 2010
    Proposal to examine the posterior distribution of the likelihood function: compare models via the “posterior distribution” of the likelihood ratio
        L1(θ1 | x) / L2(θ2 | x),   with θ1 ∼ π1(θ1 | x) and θ2 ∼ π2(θ2 | x).
    Christian P. Robert (Paris-Dauphine) Bayesian model choice November 20, 2010 60 / 64
  • Using the data “twice” A persistent criticism of the posterior likelihood approach has been based on the claim that these approaches are ‘using the data twice’, or are ‘violating temporal coherence’ — Aitkin, Statistical Inference, 2010 Complete separation between both models due to simulation under product of the posterior distributions, i.e. replaces standard Bayesian inference under joint posterior of (θ1 , θ2 ), p1 m1 (x)π1 (θ1 |x)π2 (θ2 ) + p2 m2 (x)π2 (θ2 |x)π1 (θ1 ) by product of both posteriors Christian P. Robert (Paris-Dauphine) Bayesian model choice November 20, 2010 61 / 64
  • Illustration Comparison of a Poisson model against a negative binomial with m = 5 successes, when x = 3 [figure not included in the transcript]
    Christian P. Robert (Paris-Dauphine) Bayesian model choice November 20, 2010 62 / 64
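    The comparison can be mimicked by simulation along the lines of the proposal two slides up; the conjugate priors below (λ ∼ Exp(1), p ∼ U(0, 1)) are my own choices, used only to make the sketch self-contained:

        import numpy as np
        from scipy.stats import poisson, nbinom, gamma, beta

        x, m, M = 3, 5, 100000
        rng = np.random.default_rng(3)

        lam = gamma.rvs(a=1 + x, scale=1 / 2, size=M, random_state=rng)  # posterior of lambda
        p = beta.rvs(a=1 + m, b=1 + x, size=M, random_state=rng)         # posterior of p

        L1 = poisson.pmf(x, lam)         # Poisson likelihood at the simulated lambda's
        L2 = nbinom.pmf(x, m, p)         # negative binomial likelihood at the simulated p's
        print(np.mean(L1 / L2 > 1))      # 'posterior probability' that the ratio exceeds 1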
  • Pros ... This quite small change to standard Bayesian analysis allows a very general approach to a wide range of apparently different inference problems; a particular advantage of the approach is that it can use the same noninformative priors — Aitkin, Statistical Inference, 2010
    the approach is general and makes it possible to resolve the difficulties with the Bayesian processing of point null hypotheses;
    the approach allows for the use of generic noninformative and improper priors;
    the approach handles more naturally the “vexed question of model fit”;
    the approach is “simple”.
    Christian P. Robert (Paris-Dauphine) Bayesian model choice November 20, 2010 63 / 64
  • ... & cons The p-value is equal to the posterior probability that the likelihood ratio, for null hypothesis to alternative, is greater than 1 (...) The posterior probability is p that the posterior probability of H0 is greater than 0.5. Aitkin, Statistical Inference, 2010
    the approach is not Bayesian (product of the posteriors)
    the approach uses indeterminate entities (“posterior probability that the posterior probability is larger than 0.5”...)
    the approach tries to get as close as possible to the p-value
    Christian P. Robert (Paris-Dauphine) Bayesian model choice November 20, 2010 64 / 64