Likelihood-free computational statistics
Pierre Pudlo
Université Montpellier 2
Institut de Mathématiques et Modélisation de Montpellier (I3M)
Institut de Biologie Computationelle
Labex NUMEV
17/04/2015
Contents
1 Approximate Bayesian computation
2 ABC model choice
3 Bayesian computation with empirical likelihood
Intractable likelihoods
Problem
How to perform a Bayesian analysis when the likelihood f(y|φ) is intractable?
Example 1. Gibbs random fields
f(y|φ) ∝ exp(−H(y, φ))
is known up to a constant
Z(φ) = ∑_y exp(−H(y, φ))
Example 2. Neutral population genetics
Aim. Infer demographic parameters describing the past of some populations, based on the traces left in the genomes of individuals sampled from the current populations.
The latent process (the past history of the sample) lives in a space of high dimension.
If y is the genetic data of the sample,
the likelihood is
f(y|φ) = ∫_Z f(y, z | φ) dz
Typically, dim(Z) ≫ dim(Y).
Is there no hope of computing the likelihood, even with clever Monte Carlo algorithms?
With Coralie Merle, Raphaël Leblois and François Rousset
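To make the normalizing-constant issue concrete, here is a minimal Python sketch (my own illustration, not part of the slides) that computes Z(φ) by brute-force enumeration for a tiny binary Gibbs field with nearest-neighbour interactions; the sum over 2^(number of sites) configurations is exactly what becomes infeasible on realistic lattices.

```python
import itertools
import numpy as np

def energy(y, beta):
    """H(y, beta) for a binary (+1/-1) grid with nearest-neighbour interactions."""
    horiz = np.sum(y[:, :-1] * y[:, 1:])
    vert = np.sum(y[:-1, :] * y[1:, :])
    return -beta * (horiz + vert)

def partition_function(beta, nrow, ncol):
    """Z(beta) = sum over all 2^(nrow*ncol) configurations of exp(-H(y, beta))."""
    Z = 0.0
    for config in itertools.product([-1, 1], repeat=nrow * ncol):
        y = np.array(config).reshape(nrow, ncol)
        Z += np.exp(-energy(y, beta))
    return Z

# Feasible only on tiny grids: a 5x5 grid already requires 2**25 ~ 3.4e7 terms.
print(partition_function(beta=0.5, nrow=3, ncol=3))
```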
A detour via importance sampling
If y is the genetic data of the sample,
the likelihood is
f(y|φ) = ∫_Z f(y, z | φ) dz
We are trying to compute this integral
with importance sampling.
Actually z = (z1, . . . , zT) is a measure-valued Markov chain, stopped at a given optional time T, and y = zT, hence
f(y|φ) = ∫ 1{y = zT} f(z1, . . . , zT | φ) dz
Importance sampling introduces an auxiliary distribution q(dz | φ):
f(y|φ) = ∫ 1{y = zT} [f(z | φ) / q(z | φ)] q(z | φ) dz
where f(z | φ)/q(z | φ) is the weight of z and q(z | φ) is the sampling distribution.
The most efficient q is the conditional
distribution of the Markov chain
knowing that zT = y, but even harder to
compute than f(y | φ).
Any other Markovian q is inefficient, as the variance of the weight grows exponentially with T.
We need a clever q: see the seminal paper of Stephens and Donnelly (2000), and resampling algorithms. . .
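The weight-degeneracy phenomenon can be illustrated on a toy stand-in for the coalescent setting (a sketch under my own simplified assumptions, not the Stephens–Donnelly proposal): importance-sample paths of a ±1 random walk of length T from a slightly mistuned proposal chain and monitor the effective sample size, which collapses as T grows.

```python
import numpy as np
rng = np.random.default_rng(0)

def is_estimate(y, T, p_model=0.5, p_prop=0.6, n=100_000):
    """Importance-sampling estimate of P(z_T = y) for a +/-1 random walk with
    up-probability p_model, using paths drawn from a proposal walk with
    up-probability p_prop and weighting each path by f(path)/q(path)."""
    steps = rng.choice([-1, 1], p=[1 - p_prop, p_prop], size=(n, T))
    ups = (steps == 1).sum(axis=1)
    logw = (ups * (np.log(p_model) - np.log(p_prop))
            + (T - ups) * (np.log(1 - p_model) - np.log(1 - p_prop)))
    w = np.exp(logw)
    est = np.mean((steps.sum(axis=1) == y) * w)
    ess = w.sum() ** 2 / np.sum(w ** 2)     # effective sample size
    return est, ess

# The weight variance explodes with T: the effective sample size collapses.
for T in (10, 50, 200):
    print(T, is_estimate(y=0, T=T))
```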
Approximate Bayesian computation
Idea
Infer the conditional distribution of φ given yobs from simulations drawn from the joint π(φ) f(y|φ)
ABC algorithm
A) Generate a large set of (φ, y)
from the Bayesian model
π(φ) f(y|φ)
B) Keep the particles (φ, y) such
that d(η(yobs), η(y)) ≤ ε
C) Return the φ’s of the kept
particles
Curse of dimensionality: y is replaced
by some numerical summaries η(y)
Stage A) is computationally heavy! We end up rejecting almost all simulations, except those that fall in a neighborhood of η(yobs)
Sequential ABC algorithms try to avoid drawing φ in areas of low π(φ|y).
An auto-calibrated ABC-SMC
sampler with Mohammed Sedki,
Jean-Michel Marin, Jean-Marie
Cornuet and Christian P. Robert
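As a concrete illustration of steps A)–C), here is a minimal ABC rejection sketch for a hypothetical toy model (an iid N(φ, 1) sample with η(y) the sample mean); the model, prior and tolerance are my own choices for illustration.

```python
import numpy as np
rng = np.random.default_rng(1)

# Hypothetical toy model: y is an iid N(phi, 1) sample of size 30, eta(y) = mean(y).
y_obs = rng.normal(loc=2.0, scale=1.0, size=30)
eta_obs = y_obs.mean()

N, eps = 200_000, 0.05
phi = rng.normal(loc=0.0, scale=10.0, size=N)              # A) draw phi from the prior pi(phi)
y = rng.normal(loc=phi[:, None], scale=1.0, size=(N, 30))  #    and simulate y | phi
eta = y.mean(axis=1)
keep = np.abs(eta - eta_obs) <= eps                        # B) keep particles close to eta(y_obs)
abc_posterior_sample = phi[keep]                           # C) the kept phi's approximate pi(phi | eta_obs)
print(keep.sum(), abc_posterior_sample.mean(), abc_posterior_sample.std())
```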
ABC sequential sampler
How to calibrate ε1 ≥ ε2 ≥ · · · ≥ εT and T to be efficient?
The auto-calibrated ABC-SMC sampler developed with Mohammed Sedki, Jean-Michel Marin, Jean-Marie Cornuet and Christian P. Robert
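One common calibration device (a generic sketch, not necessarily the rule used in the sampler cited above) is to take each new threshold as a fixed quantile of the distances of the currently accepted particles, which automatically produces a decreasing sequence.

```python
import numpy as np

def next_threshold(distances, alpha=0.5):
    """Generic sketch: take eps_{t+1} as the alpha-quantile of d(eta(y), eta(y_obs))
    over the current particle population; since these distances are all <= eps_t,
    the sequence eps_1 >= eps_2 >= ... is decreasing by construction."""
    return float(np.quantile(distances, alpha))
```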
ABC target
Three levels of approximation of the posterior π(φ | yobs):
1 the ABC posterior distribution π(φ | η(yobs))
2 its approximation with a kernel of bandwidth ε (or with k-nearest neighbours): π(φ | d(η(y), η(yobs)) ≤ ε)
3 a Monte Carlo error: sample size N < ∞
See, e.g., our review with J.-M. Marin,
C. Robert and R. Ryder
If η(y) is not a sufficient statistic,
π(φ | yobs) ≠ π(φ | η(yobs))
Information regarding yobs might be lost!
Curse of dimensionality: we cannot have both ε small and N large when η(y) is of large dimension
Post-processing of Beaumont et al.
(2002) with local linear regression.
But the lack of sufficiency might still be
problematic. See Robert et al. (2011)
for model choice.
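The local linear post-processing mentioned above can be sketched as follows; this is a simplified, unweighted-by-default version of the Beaumont et al. (2002) adjustment, omitting the kernel weighting and scaling choices of the original method.

```python
import numpy as np

def local_linear_adjust(phi, eta, eta_obs, weights=None):
    """Beaumont et al. (2002)-style regression adjustment (sketch).
    phi: (n,) accepted parameter values; eta: (n, d) their summary statistics."""
    s = eta - eta_obs                                   # centre summaries at eta(y_obs)
    X = np.column_stack([np.ones(len(phi)), s])
    sw = np.sqrt(np.ones(len(phi)) if weights is None else np.asarray(weights))
    beta, *_ = np.linalg.lstsq(X * sw[:, None], phi * sw, rcond=None)
    return phi - s @ beta[1:]                           # shift each phi towards eta(y_obs)
```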
Contents
1 Approximate Bayesian computation
2 ABC model choice
3 Bayesian computation with empirical likelihood
ABC model choice
A) Generate a large set of
(m, φ, y) from the Bayesian
model, π(m)πm(φ) fm(y|φ)
B) Keep the particles (m, φ, y)
such that d(η(y), η(yobs)) ≤ ε
C) For each m, return
pm(yobs) = proportion of m among the kept particles
Likewise, if η(y) is not sufficient for the model choice issue,
π(m | y) ≠ π(m | η(y))
It might be difficult to design
informative η(y).
Toy example.
Model 1. yi iid∼ N(φ, 1)
Model 2. yi iid∼ N(φ, 2)
Same prior on φ (whatever the model) & uniform prior on the model index.
η(y) = y1 + · · · + yn is sufficient to estimate φ in both models,
but η(y) carries no information regarding the variance (hence regarding the model choice issue).
(A numerical check of this toy example is sketched below.)
Other examples in Robert et al. (2011)
In population genetics. It might be difficult to find summary statistics that help discriminate between models (= possible historical scenarios for the sampled populations)
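The toy example above can be checked numerically with the plain ABC model-choice recipe. The sketch below is my own illustration (the prior scale and the reading of N(φ, 2) as variance 2 are assumptions): the estimated posterior probability of Model 1 stays close to 1/2 whatever the data, because η(y) = Σ yi is blind to the variance.

```python
import numpy as np
rng = np.random.default_rng(2)

# Toy example: same prior on phi, uniform prior on the model index,
# Model 1: y_i ~ N(phi, 1), Model 2: y_i ~ N(phi, variance 2), eta(y) = sum(y).
n, N, eps = 20, 200_000, 0.5
y_obs = rng.normal(0.0, 1.0, size=n)                   # data actually drawn from Model 1
eta_obs = y_obs.sum()

m = rng.integers(1, 3, size=N)                         # A) model index, then phi and y
phi = rng.normal(0.0, 5.0, size=N)
sd = np.where(m == 1, 1.0, np.sqrt(2.0))
y = rng.normal(phi[:, None], sd[:, None], size=(N, n))
eta = y.sum(axis=1)
keep = np.abs(eta - eta_obs) <= eps                    # B) keep close particles
p1 = np.mean(m[keep] == 1)                             # C) proportion of Model 1 among them
print(keep.sum(), p1)                                  # p1 stays near 1/2: eta is blind to the variance
```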
ABC model choice
If ε is tuned so that the number of kept particles is k, then pm is a k-nearest-neighbour estimate of
E[1{M = m} | η(yobs)]
Approximating the posterior probability of model m is a regression problem where
the response is 1{M = m},
the covariates are the summary statistics η(y),
the loss is the L2 loss (conditional expectation).
The preferred method to approximate the posterior probabilities in DIYABC is a local multinomial regression.
Delicate if dim(η(y)) is large, or if the summary statistics are highly correlated.
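The k-nearest-neighbour reading of pm can be written down directly; the sketch below assumes the reference table (eta_sim, m_sim) has already been simulated and uses a plain Euclidean distance (in practice the summaries would be standardised first).

```python
import numpy as np

def knn_model_probabilities(eta_sim, m_sim, eta_obs, k=500):
    """k-nearest-neighbour estimate of pi(m | eta(y_obs)) (sketch): tune eps so
    that exactly k particles are kept, then return the proportion of each model
    index among them. eta_sim: (N, d) simulated summaries, m_sim: (N,) model indices."""
    dist = np.linalg.norm(eta_sim - eta_obs, axis=1)    # summaries should be standardised first
    kept = np.argsort(dist)[:k]
    models, counts = np.unique(m_sim[kept], return_counts=True)
    return dict(zip(models.tolist(), (counts / k).tolist()))
```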
Choosing between hidden random fields
Choosing between dependency graphs: 4 or 8 neighbours?
Models. α, β ∼ prior
z | β ∼ Potts on G4 or G8 with interaction β
y | z, α ∼ ∏i P(yi | zi, α)
How to summarize the noisy y? Without noise (if the field z were directly observed), sufficient statistics exist for the model choice issue.
With Julien Stoehr and Lionel Cucala: a method to design new summary statistics, based on a clustering of the observed data on the candidate dependency graphs:
number of connected components,
size of the largest connected component,
. . .
(see the sketch below)
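In the same spirit (a sketch of my own, not the exact construction of the paper with Julien Stoehr and Lionel Cucala), graph-dependent summaries of a discretised field can be computed with scipy: label the connected components of each colour class under 4- and 8-neighbour connectivity and record their number and the size of the largest one.

```python
import numpy as np
from scipy import ndimage

def graph_summaries(y_labels):
    """Graph-dependent summaries of a discretised field (sketch): for each candidate
    dependency graph, label the connected components of every colour class and record
    their total number and the size of the largest one."""
    structures = {"G4": ndimage.generate_binary_structure(2, 1),   # 4 neighbours
                  "G8": ndimage.generate_binary_structure(2, 2)}   # 8 neighbours
    stats = {}
    for name, structure in structures.items():
        n_comp, largest = 0, 0
        for value in np.unique(y_labels):
            lab, n = ndimage.label(y_labels == value, structure=structure)
            n_comp += n
            if n:
                largest = max(largest, int(np.bincount(lab.ravel())[1:].max()))
        stats[name] = (n_comp, largest)
    return stats
```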
Machine learning to analyse machine simulated data
ABC model choice
A) Generate a large set of
(m, φ, y) from π(m)πm(φ) fm(y|φ)
B) Infer (anything?) about m | η(y) with machine learning methods
In this machine learning perspective:
the (iid) simulations of A) form the
training set
yobs becomes a new data point
With J.-M. Marin, J.-M. Cornuet, A.
Estoup, M. Gautier and C. P. Robert
Predicting m is a classification
problem
Computing π(m|η(y)) is a
regression problem
It is well known that classification is a much simpler problem than regression (think of the dimension of the object being inferred).
Why compute π(m|η(y)) if we know that
π(m|y) ≠ π(m|η(y))?
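In this perspective the reference table is just a training set for an off-the-shelf classifier; here is a minimal sketch with scikit-learn's random forest (the forest size and other settings are placeholders, not the tuning used in the cited work).

```python
from sklearn.ensemble import RandomForestClassifier

def rf_model_choice(eta_sim, m_sim, eta_obs, n_trees=500):
    """Sketch: the reference table (simulated summaries eta_sim and model indices m_sim)
    is a training set; eta(y_obs) is a single new data point to classify."""
    clf = RandomForestClassifier(n_estimators=n_trees, n_jobs=-1)
    clf.fit(eta_sim, m_sim)
    # clf.predict_proba(eta_obs.reshape(1, -1)) would tackle the harder regression problem
    return clf.predict(eta_obs.reshape(1, -1))[0]       # predicted model index for y_obs
```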
An example with random forest on human SNP data
Out of Africa
6 scenarios, 6 models
Observed data. 4 populations, 30 individuals per population; 10,000 genotyped SNPs from the 1000 Genomes Project
A random forest trained on 40,000 simulations (112 summary statistics) predicts the model which supports
a single out-of-Africa colonization event,
a secondary split between the European and Asian lineages, and
a recent admixture for Americans with an African origin
Confidence in the selected model?
Example (continued)
Benefits of random forests?
1 Can find the relevant statistics in a
large set of statistics (112) to
discriminate models
2 Lower prior misclassification error
(≈ 6%) than other methods (ABC, i.e.
k-nn ≈ 18%)
3 Supply a similarity measure to
compare η(y) and η(yobs)
Confidence in the selected model?
Compute the average of the
misclassification error over an ABC
approximation of the predictive (∗). Here,
≤ 0.1%
(∗) π(m, φ, y | ηobs) = π(m | ηobs)πm(φ | ηobs)fm(y | φ)
Contents
1 Approximate Bayesian computation
2 ABC model choice
3 Bayesian computation with empirical likelihood
Another approximation of the likelihood
What if the likelihood is intractable and we are also unable to simulate datasets in a reasonable amount of time, so that we cannot resort to ABC?
First answer: use pseudo-likelihoods
such as the pairwise composite likelihood
fPCL(y | φ) = ∏i<j f(yi, yj | φ)
Maximum composite likelihood estimators φ(y) are suitable estimators,
but a composite likelihood cannot substitute for the true likelihood in a Bayesian framework:
it leads to credible intervals that are too narrow, i.e. over-confidence in φ(y); see e.g. Ribatet et al. (2012)
Our proposal with Kerrie Mengersen and
Christian P. Robert:
use the empirical likelihood of Owen
(2001, 2011)
It relies on iid blocks in the dataset y to reconstruct a likelihood,
permits likelihood ratio tests,
and yields confidence intervals with correct coverage.
Original aim of Owen: remove parametric assumptions
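To fix ideas on fPCL, here is a sketch for a hypothetical exchangeable Gaussian model (mean mu, variance var, common pairwise correlation rho); the model is purely illustrative and is not the population-genetics likelihood of the application.

```python
import numpy as np
from itertools import combinations
from scipy.stats import multivariate_normal

def pairwise_log_composite(y, mu, var, rho):
    """log f_PCL(y | phi) = sum over pairs i < j of log f(y_i, y_j | phi), sketched for
    a hypothetical exchangeable Gaussian model (mean mu, variance var, pairwise corr. rho)."""
    cov = np.array([[var, rho * var], [rho * var, var]])
    return sum(multivariate_normal.logpdf([y[i], y[j]], mean=[mu, mu], cov=cov)
               for i, j in combinations(range(len(y)), 2))
```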
Bayesian computation via empirical likelihood
With empirical likelihood, the parameter φ is defined by
(∗) E[h(yb, φ)] = 0
where
yb is one block of y,
E is the expected value under the true distribution of the block yb,
h is a known function.
E.g., if φ is the mean of an iid sample, h(yb, φ) = yb − φ.
In population genetics, what is (∗) when φ collects dates of population splits, population sizes, etc.?
A block = the genetic data at a given locus.
h(yb, φ) is the pairwise composite score function, which we can compute explicitly in many situations:
h(yb, φ) = ∇φ log fPCL(yb | φ)
Benefits.
Much faster than ABC (no need to simulate fake data).
Same accuracy as ABC, or even better: no loss of information through summary statistics.
(A minimal numerical sketch of the empirical likelihood computation follows below.)
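A minimal sketch of the profile empirical log-likelihood for a scalar estimating equation, via Owen's dual formulation, using the simplest case h(yb, φ) = yb − φ; the scalar restriction and the bracketing offsets are simplifying assumptions, and the composite-score version used in the population-genetics application would replace h accordingly.

```python
import numpy as np
from scipy.optimize import brentq

def log_empirical_likelihood(h):
    """Owen-style empirical log-likelihood (sketch) for scalar estimating-equation values
    h_b = h(y_b, phi), one per iid block. Dual problem: w_b = 1 / (n (1 + lam * h_b))."""
    h = np.asarray(h, dtype=float)
    n = len(h)
    if h.min() >= 0 or h.max() <= 0:
        return -np.inf                       # 0 is not in the convex hull of the h_b's
    def grad(lam):                           # derivative of the dual objective in lam
        return np.mean(h / (1.0 + lam * h))
    lo = -1.0 / h.max() + 1e-10              # keep 1 + lam * h_b > 0 for all blocks
    hi = -1.0 / h.min() - 1e-10
    lam = brentq(grad, lo, hi)               # grad is monotone decreasing on (lo, hi)
    return -np.sum(np.log1p(lam * h)) - n * np.log(n)

# Profile over phi when h(y_b, phi) = y_b - phi (phi = mean of the iid blocks).
y = np.random.default_rng(3).normal(1.0, 2.0, size=100)
for phi in (0.5, 1.0, 1.5):
    print(phi, log_empirical_likelihood(y - phi))
```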
An experiment
Evolutionary scenario: three populations (POP 0, POP 1, POP 2) descending from a common ancestor (MRCA), with split times τ1 and τ2. [tree diagram]
Dataset:
50 genes per population,
100 microsatellite loci
Assumptions:
Ne identical over all populations
φ = log10(θ, τ1, τ2)
non-informative prior
Comparison of ABC and EL [figure: histogram = EL, curve = ABC, vertical line = “true” parameter]
