1. Approximate Bayesian computation (ABC) is a technique used for Bayesian inference when the likelihood function is intractable or unavailable. ABC works by simulating data under different parameter values and accepting simulations that are close to the observed data according to some distance measure.
2. The document outlines ABC and some of its applications, including ABC as an inference machine for estimating posterior distributions, ABC for model choice between different models, and the "genetics" of ABC in terms of algorithmic improvements.
3. A toy example is provided to illustrate ABC, showing how it can approximate the true posterior distribution of a parameter given a simulated dataset when the likelihood function is unavailable in closed form.
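To make the rejection scheme above concrete, here is a minimal sketch in Python, assuming a made-up toy model (normal data with unknown mean, a normal prior, the sample mean as summary statistic); every name and tuning value is illustrative rather than taken from the slides.

```python
# Minimal ABC rejection sampler for a toy model (illustrative assumptions only):
# data y_1..y_n ~ N(theta, 1), prior theta ~ N(0, 10), summary = sample mean.
import numpy as np

rng = np.random.default_rng(0)
y_obs = rng.normal(2.0, 1.0, size=50)       # pretend these are the observed data
s_obs = y_obs.mean()                        # summary statistic

def abc_rejection(n_sims=100_000, eps=0.05):
    accepted = []
    for _ in range(n_sims):
        theta = rng.normal(0.0, np.sqrt(10.0))           # draw from the prior
        y_sim = rng.normal(theta, 1.0, size=len(y_obs))  # simulate pseudo-data
        if abs(y_sim.mean() - s_obs) < eps:              # keep if close enough
            accepted.append(theta)
    return np.array(accepted)

post_sample = abc_rejection()
print(post_sample.mean(), post_sample.std())  # crude ABC posterior summaries
```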
random forests for ABC model choice and parameter estimation, by Christian Robert
The document discusses Approximate Bayesian Computation (ABC). It begins by introducing ABC as a likelihood-free method for Bayesian inference when the likelihood function is unavailable or computationally intractable. ABC works by simulating data under different parameter values and accepting simulations that are close to the observed data based on a distance measure.
The document then discusses advances in ABC, including modifying the proposal distribution to increase efficiency, viewing it as a conditional density estimation problem, and including measurement error in the framework. It also discusses the consistency of ABC as the number of simulations increases and sample size grows large. Finally, it discusses applications of ABC to model selection by treating the model index as an additional parameter.
This document summarizes approximate Bayesian computation (ABC) methods. It begins with an overview of ABC, which provides a likelihood-free rejection technique for Bayesian inference when the likelihood function is intractable. The ABC algorithm works by simulating parameters and data until the simulated and observed data are close according to some distance measure and tolerance level. The document then discusses the asymptotic properties of ABC, including consistency of ABC posteriors and rates of convergence under certain assumptions. It also notes relationships between ABC and k-nearest neighbor methods. Examples applying ABC to autoregressive time series models are provided.
Approximate Bayesian model choice via random forests, by Christian Robert
The document describes approximate Bayesian computation (ABC) methods for model choice when likelihoods are intractable. ABC generates parameter-dataset pairs from the prior and retains those where the simulated and observed datasets are similar according to a distance measure on summary statistics. For model choice, ABC approximates posterior model probabilities by the proportion of simulations from each model that are retained. Machine learning techniques can also be used to infer the most likely model directly from the simulated summary statistics.
The document discusses approximate Bayesian computation (ABC), a simulation-based method for conducting Bayesian inference when the likelihood function is intractable or impossible to compute directly. ABC works by simulating data under different parameter values, and accepting simulations that are close to the observed data according to some distance measure. The document covers the basic ABC algorithm, convergence properties as the tolerance approaches zero, examples of ABC for probit models and MA time series models, and advances such as modifying the proposal distribution to increase efficiency.
Rao-Blackwellisation schemes for accelerating Metropolis-Hastings algorithms, by Christian Robert
An aggregate of three papers on Rao-Blackwellisation, from Casella & Robert (1996) to Douc & Robert (2010) and Banterle et al. (2015), presented at an OxWaSP workshop on MCMC methods, Warwick, 20 November 2015.
A 3-hour introductory lecture on Approximate Bayesian Computation (ABC), given as part of a PhD course at Lund University, February 2016. For sample code see http://www.maths.lu.se/kurshemsida/phd-course-fms020f-nams002-statistical-inference-for-partially-observed-stochastic-processes/
The document discusses using random forests for approximate Bayesian computation (ABC) model choice. ABC can be framed as a machine learning problem where simulated datasets are used to learn which model is most appropriate. Random forests are well-suited for this as they can handle many correlated summary statistics without information loss. The random forest predicts the most likely model but not posterior probabilities. Instead, the posterior predictive expected error rate across models is proposed to evaluate model selection performance without unstable probability approximations. An example comparing MA(1) and MA(2) time series models illustrates the approach.
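As a rough illustration of the classification view of ABC model choice described above, the following sketch trains a random forest on a simulated reference table for two hypothetical competing models (Poisson versus geometric counts); the summaries, priors, and sample sizes are assumptions made only for this example.

```python
# ABC model choice as classification with a random forest (illustrative setup):
# two assumed competing models for count data and three ad hoc summary statistics.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
n_ref, n = 5_000, 100                       # reference table size, sample size

def summaries(x):
    return np.array([x.mean(), x.var(), (x == 0).mean()])

X, labels = [], []
for m in (0, 1):                            # 0: Poisson, 1: geometric
    for _ in range(n_ref):
        lam = rng.exponential(2.0)          # prior draw (illustrative)
        x = rng.poisson(lam, n) if m == 0 else rng.geometric(1 / (1 + lam), n) - 1
        X.append(summaries(x)); labels.append(m)

rf = RandomForestClassifier(n_estimators=500, n_jobs=-1).fit(np.array(X), labels)
x_obs = rng.poisson(1.5, n)                 # pretend observed data
print(rf.predict([summaries(x_obs)]))       # most likely model index
```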
Multiple estimators for Monte Carlo approximations, by Christian Robert
This document discusses multiple estimators that can be used to approximate integrals using Monte Carlo simulations. It begins by introducing concepts like multiple importance sampling, Rao-Blackwellisation, and delayed acceptance that allow combining multiple estimators to improve accuracy. It then discusses approaches like mixtures as proposals, global adaptation, and nonparametric maximum likelihood estimation (NPMLE) that frame Monte Carlo estimation as a statistical estimation problem. The document notes various advantages of the statistical formulation, like the ability to directly estimate simulation error from the Fisher information. Overall, the document presents an overview of different techniques for combining Monte Carlo simulations to obtain more accurate integral approximations.
Recently, there has been a surge in activity at the interface of optimal transport and statistics (with special emphasis on machine learning applications). The talk will summarize new results and challenges in this active area. For example, we will show how many of the most popular estimators in machine learning (such as the Lasso and SVMs) can be interpreted as games. This interpretation opens the door to new and potentially better estimators and algorithms, as well as to questions about the underlying complexity of this new class of estimators.
(This talk is based on joint work with F. He, Y. Kang, K. Murthy, and F. Zhang)
The document summarizes a talk given by Mark Girolami on manifold Monte Carlo methods. It discusses using stochastic diffusions and geometric concepts to improve MCMC methods. Specifically, it proposes using discretized Langevin and Hamiltonian diffusions across a Riemann manifold as an adaptive proposal mechanism. This is founded on deterministic geodesic flows on the manifold. Examples presented include a warped bivariate Gaussian, Gaussian mixture model, and log-Gaussian Cox process.
Bayesian hybrid variable selection under generalized linear models, by Caleb (Shiqiang) Jin
This document presents a method for Bayesian variable selection under generalized linear models. It begins by introducing the model setting and Bayesian model selection framework. It then discusses three algorithms for model search: deterministic search, stochastic search, and a hybrid search method. The key contribution is a method to simultaneously evaluate the marginal likelihoods of all neighbor models, without parallel computing. This is achieved by decomposing the coefficient vectors and estimating additional coefficients conditioned on the current model's coefficients. Newton-Raphson iterations are used to solve the system of equations and obtain the maximum a posteriori estimates for all neighbor models simultaneously in a single computation. This allows for a fast, inexpensive search of the model space.
Approximate Bayesian Computation with Quasi-Likelihoods, by Stefano Cabras
This document describes ABC-MCMC algorithms that use quasi-likelihoods as proposals. It introduces quasi-likelihoods as approximations to true likelihoods that can be estimated from pilot runs. The ABCql algorithm uses a quasi-likelihood estimated from a pilot run as the proposal in an ABC-MCMC algorithm. Examples applying ABCql to mixture of normals, coalescent, and gamma models are provided to demonstrate its effectiveness compared to standard ABC-MCMC.
The document discusses using unusual data sources in insurance. It provides examples of using pictures, text, social media data, telematics, and satellite imagery in insurance. It also discusses challenges in analyzing complex and high-dimensional data from these sources and introduces machine learning tools such as PCA and generalized linear models, along with model evaluation through loss, risk, and cross-validation.
My data are incomplete and noisy: Information-reduction statistical methods f..., by Umberto Picchini
We review parameter inference for stochastic modelling in complex scenarios, such as poor parameter initialization and near-chaotic dynamics. We show how state-of-the-art methods for state-space models can fail while, in some situations, reducing the data to summary statistics (information reduction) enables robust estimation. Wood's synthetic likelihood method is reviewed and the lecture closes with an example of approximate Bayesian computation methodology.
Accompanying code is available at https://github.com/umbertopicchini/pomp-ricker and https://github.com/umbertopicchini/abc_g-and-k
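Separately from the repositories linked above, a self-contained sketch of the synthetic likelihood idea mentioned in the abstract might look as follows, using an assumed AR(1)-style toy process and two ad hoc summary statistics.

```python
# Sketch of Wood's synthetic likelihood (illustrative toy model, not the author's
# code): simulate summaries at a parameter value, fit a Gaussian to them, and
# score the observed summaries under that Gaussian.
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(2)

def simulate(theta, n=200):
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = theta * x[t - 1] + rng.normal()
    return x

def summaries(x):
    return np.array([x.var(), np.corrcoef(x[:-1], x[1:])[0, 1]])

def synthetic_loglik(theta, s_obs, n_rep=200):
    S = np.array([summaries(simulate(theta)) for _ in range(n_rep)])
    mu, Sigma = S.mean(axis=0), np.cov(S, rowvar=False)
    return multivariate_normal(mu, Sigma).logpdf(s_obs)

s_obs = summaries(simulate(0.7))            # pretend observed summaries
for theta in (0.3, 0.5, 0.7, 0.9):
    print(theta, synthetic_loglik(theta, s_obs))
```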
Readership lecture given at Lund University on 7 June 2016. The lecture is of a popular-science nature, hence mathematical detail is kept to a minimum; however, numerous links and references are offered for further reading.
The document provides an overview of the EM algorithm and its application to outlier detection. It begins with introducing the EM algorithm and explaining its iterative process of estimating parameters via E-step and M-step. It then proves properties of the EM algorithm such as non-decreasing log-likelihood and convergence. An example of using EM for Gaussian mixture modeling is provided. Finally, the document discusses directly and indirectly applying EM to outlier detection.
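A compact illustration of the E- and M-steps for a two-component univariate Gaussian mixture, with simulated data and made-up starting values, could read:

```python
# EM for a two-component 1-D Gaussian mixture; data and initial values are
# invented for this sketch.
import numpy as np

rng = np.random.default_rng(3)
x = np.concatenate([rng.normal(-2, 1, 300), rng.normal(3, 1.5, 200)])

w, mu, sd = np.array([0.5, 0.5]), np.array([-1.0, 1.0]), np.array([1.0, 1.0])
for _ in range(100):
    # E-step: posterior responsibilities of each component for each point
    dens = w * np.exp(-0.5 * ((x[:, None] - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))
    r = dens / dens.sum(axis=1, keepdims=True)
    # M-step: update weights, means and standard deviations
    nk = r.sum(axis=0)
    w = nk / len(x)
    mu = (r * x[:, None]).sum(axis=0) / nk
    sd = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk)

print(w, mu, sd)   # should roughly recover the generating mixture
```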
This document discusses Bayesian inference on mixture models. It covers several key topics:
1. Density approximation and consistency results for mixtures as a way to approximate unknown distributions.
2. The "scarcity phenomenon" where the posterior probabilities of most component allocations in mixture models are zero, concentrating on just a few high probability allocations.
3. Challenges with Bayesian inference for mixtures, including identifiability issues, label switching, and complex combinatorial calculations required to integrate over all possible component allocations.
1. The document discusses approximate Bayesian computation (ABC), a technique used when the likelihood function is intractable. ABC works by simulating parameters from the prior and simulating data, rejecting simulations that are not close to the observed data based on a tolerance level.
2. Random forests can be used in ABC to select informative summary statistics from a large set of possibilities and estimate parameters. The random forests classify simulations as accepted or rejected based on the summaries, implicitly selecting important summaries.
3. Calibrating the tolerance level in ABC is important but difficult, as it determines how close simulations must be to the observed data. Methods discussed include using quantiles of prior predictive simulations or asymptotic convergence properties.
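One common, simple calibration treats the tolerance as a quantile of the simulated distances, keeping the fraction of reference simulations closest to the observed summary; the toy model below is an illustrative assumption.

```python
# Tolerance as the alpha-quantile of simulated distances (illustrative toy model).
import numpy as np

rng = np.random.default_rng(4)
s_obs = 0.3                                     # pretend observed summary
thetas = rng.uniform(-1, 1, 100_000)            # prior draws
s_sim = rng.normal(thetas, 0.1)                 # simulated summaries (toy model)
dist = np.abs(s_sim - s_obs)

alpha = 0.001                                   # acceptance proportion
eps = np.quantile(dist, alpha)                  # tolerance = alpha-quantile
posterior_sample = thetas[dist <= eps]
print(eps, posterior_sample.mean(), posterior_sample.std())
```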
This document discusses challenges and recent advances in Approximate Bayesian Computation (ABC) methods. ABC methods are used when the likelihood function is intractable or unavailable in closed form. The core ABC algorithm involves simulating parameters from the prior and simulating data, retaining simulations where the simulated and observed data are close according to a distance measure on summary statistics. The document outlines key issues like scalability to large datasets, assessment of uncertainty, and model choice, and discusses advances such as modified proposals, nonparametric methods, and perspectives that include summary construction in the framework. Validation of ABC model choice and selection of summary statistics remains an open challenge.
This document provides an introduction to global sensitivity analysis. It discusses how sensitivity analysis can quantify the sensitivity of a model output to variations in its input parameters. It introduces Sobol' sensitivity indices, which measure the contribution of each input parameter to the variance of the model output. The document outlines how Sobol' indices are defined based on decomposing the model output variance into terms related to individual input parameters and their interactions. It notes that Sobol' indices are generally estimated using Monte Carlo-type sampling approaches due to the high-dimensional integrals involved in their exact calculation.
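For a flavour of the Monte Carlo estimation of first-order Sobol' indices, here is a pick-freeze-style sketch on an assumed additive toy model with independent uniform inputs; this is one standard estimator written for illustration, not the document's own implementation.

```python
# Pick-freeze Monte Carlo estimate of first-order Sobol' indices for the toy
# additive model y = x1 + 2*x2 + 0.5*x3 with independent U(0,1) inputs.
import numpy as np

rng = np.random.default_rng(5)
f = lambda X: X[:, 0] + 2 * X[:, 1] + 0.5 * X[:, 2]

N, p = 100_000, 3
A, B = rng.uniform(size=(N, p)), rng.uniform(size=(N, p))
yA = f(A)
var_y = yA.var()

for i in range(p):
    BAi = B.copy()
    BAi[:, i] = A[:, i]              # share only input i with matrix A
    yBAi = f(BAi)
    # first-order index S_i = Var(E[Y | X_i]) / Var(Y), estimated via covariance
    S_i = (np.mean(yA * yBAi) - yA.mean() * yBAi.mean()) / var_y
    print(f"S_{i+1} ~ {S_i:.3f}")
```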
better together? statistical learning in models made of modules, by Christian Robert
The document discusses statistical models composed of modular components called modules. Each module may be developed independently and represent different data modalities or domains of knowledge. Joint Bayesian updating treats all modules simultaneously but misspecification of one module can impact the others. Alternative approaches are proposed to allow uncertainty propagation between modules while preventing feedback that could lead to misspecification. Candidate distributions for the modules are discussed, along with strategies for choosing among them based on predictive performance.
This document discusses various methods for approximating marginal likelihoods and Bayes factors, including:
1. Geyer's 1994 logistic regression approach for approximating marginal likelihoods using importance sampling.
2. Bridge sampling and its connection to Geyer's approach. Optimal bridge sampling requires knowledge of unknown normalizing constants.
3. Using mixtures of importance distributions and the target distribution as proposals to estimate marginal likelihoods through Rao-Blackwellization. This connects to bridge sampling estimates.
4. The document discusses various methods for approximating marginal likelihoods and comparing hypotheses using Bayes factors. It outlines the historical development and connections between different approximation techniques.
This document provides an overview of advanced econometrics techniques including simulations, bootstrap methods, and penalization. It discusses how computers allow for numerical standard errors and testing procedures through simulations and resampling rather than relying on asymptotic formulas. Specific techniques covered include the linear regression model, nonlinear transformations, asymptotics versus finite samples using bootstrap, and moving from least squares to other regressions like quantile regression. Historical references for techniques like permutation methods, the jackknife, and bootstrapping are also provided.
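A tiny illustration of the resampling idea, estimating a numerical standard error for a regression slope by bootstrapping observation pairs on simulated data, might look like this:

```python
# Bootstrap standard error for a regression slope (simulated data, illustrative).
import numpy as np

rng = np.random.default_rng(6)
n = 100
x = rng.uniform(0, 10, n)
y = 1.5 * x + rng.normal(0, 2, n)

def slope(x, y):
    return np.polyfit(x, y, 1)[0]

boot = []
for _ in range(2_000):
    idx = rng.integers(0, n, n)          # resample observation pairs with replacement
    boot.append(slope(x[idx], y[idx]))

print(slope(x, y), np.std(boot))         # point estimate and bootstrap std. error
```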
The document discusses automatic differentiation as a technique for efficiently computing derivatives in machine learning. It explains how automatic differentiation uses computational graphs and either forward or reverse mode to compute derivatives without symbolic manipulation or numerical approximations. Forward mode computes derivatives with respect to one input, while reverse mode (backpropagation) computes derivatives with respect to all inputs with one pass. PyTorch code is provided as an example to demonstrate reverse mode automatic differentiation for neural network training.
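In the same spirit as the PyTorch example mentioned in the slides (the snippet below is a generic illustration, not the document's code), a single backward pass returns gradients with respect to all inputs:

```python
# Reverse-mode automatic differentiation with PyTorch: one backward pass yields
# gradients with respect to every leaf tensor in the computational graph.
import torch

x = torch.tensor([1.0, 2.0, 3.0], requires_grad=True)
w = torch.tensor([0.5, -1.0, 2.0], requires_grad=True)

loss = torch.sigmoid(w @ x).pow(2).sum()   # forward pass builds the graph
loss.backward()                            # reverse mode: one pass, all gradients

print(x.grad)   # d loss / d x
print(w.grad)   # d loss / d w
```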
A Quick and Terse Introduction to Efficient Frontier Mathematics, by Ashwin Rao
A Quick and Terse Introduction to Efficient Frontier Mathematics. Only a basic background in Linear Algebra, Probability and Optimization is expected to cover this material and gain a reasonable understanding of this topic within one hour.
- Approximate Bayesian computation (ABC) is a technique used when the likelihood function is intractable or unavailable. It approximates the Bayesian posterior distribution in a likelihood-free manner.
- ABC works by simulating parameter values from the prior and simulating pseudo-data. Parameter values are accepted if the simulated pseudo-data are "close" to the observed data according to some distance measure and tolerance level.
- ABC originated in population genetics models where genealogies are considered nuisance parameters that cannot be integrated out of the likelihood. It has since been applied to other fields like econometrics for models with complex or undefined likelihoods.
This document discusses several computational methods for Bayesian model choice, including importance sampling, cross-model solutions, nested sampling, and approximate Bayesian computation (ABC) model choice. It introduces Bayes factors and marginal likelihoods as key quantities for Bayesian model comparison, and describes how Bayesian model choice involves allocating probabilities to different models and computing the marginal likelihood or evidence for each model.
The document proposes using random forests (RF), a machine learning tool, for approximate Bayesian computation (ABC) model choice rather than estimating model posterior probabilities. RF improves on existing ABC model choice methods by having greater discriminative power among models, being robust to the choice and number of summary statistics, requiring less computation, and providing an error rate to evaluate confidence in the model choice. The authors illustrate the power of the RF-based ABC methodology on controlled experiments and real population genetics datasets.
The document discusses computational methods for Bayesian model choice and model comparison. It introduces Bayes factors and the evidence as central quantities for model comparison. It then describes various computational methods for approximating the evidence, including importance sampling solutions like bridge sampling, harmonic mean approximations using posterior samples, and approximating the evidence using mixture representations.
Approximate Bayesian Computation (ABC) methods allow approximating intractable likelihoods in Bayesian inference. ABC rejection sampling simulates parameters from the prior and keeps those for which the simulated data are close to the observed data. ABC Markov chain Monte Carlo creates a Markov chain over the parameters in which proposed moves are accepted only if the simulated data are similar to the observed data. Population Monte Carlo and ABC-MCMC improve on rejection sampling by using sequential importance sampling and MCMC moves to propose parameters in high-density regions.
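A bare-bones ABC-MCMC sketch in the style of the algorithm summarized above, using an assumed toy normal model and illustrative tuning constants:

```python
# ABC-MCMC sketch (Marjoram et al.-style) for a toy normal-mean model; all
# tuning values and the model itself are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(7)
y_obs = rng.normal(2.0, 1.0, 50)
s_obs, eps, n_iter = y_obs.mean(), 0.1, 20_000

def log_prior(theta):                       # N(0, 10) prior
    return -0.5 * theta ** 2 / 10.0

theta, chain = 0.0, []
for _ in range(n_iter):
    prop = theta + rng.normal(0, 0.5)       # symmetric random-walk proposal
    y_sim = rng.normal(prop, 1.0, len(y_obs))
    if abs(y_sim.mean() - s_obs) < eps:     # only accept if pseudo-data are close
        if np.log(rng.uniform()) < log_prior(prop) - log_prior(theta):
            theta = prop
    chain.append(theta)

print(np.mean(chain[5_000:]), np.std(chain[5_000:]))
```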
The document discusses statistical models and exponential families. It states that for most of the course, data are assumed to be a random sample from a distribution F. Repeated observations increase the information about F, via the law of large numbers and the central limit theorem. Exponential families are a class of parametric distributions with convenient analytic properties, where the density can be written as a function of natural parameters in exponential form. Examples of exponential families include the binomial and normal distributions.
This document summarizes a talk given by Heiko Strathmann on using partial posterior paths to estimate expectations from large datasets without full posterior simulation. The key ideas are:
1. Construct a path of "partial posteriors" by sequentially adding mini-batches of data and computing expectations over these posteriors.
2. "Debias" the path of expectations to obtain an unbiased estimator of the true posterior expectation using a technique from stochastic optimization literature.
3. This approach allows estimating posterior expectations with sub-linear computational cost in the number of data points, without requiring full posterior simulation or imposing restrictions on the likelihood.
Experiments on synthetic and real-world examples demonstrate competitive performance versus standard M
no U-turn sampler, a discussion of Hoffman & Gelman NUTS algorithm, by Christian Robert
The document describes the No-U-Turn Sampler (NUTS), an extension of Hamiltonian Monte Carlo (HMC) that aims to avoid the random walk behavior and poor mixing that can occur when the trajectory length L is not set appropriately. NUTS augments the model with a slice variable and uses a deterministic procedure to select a set of candidate states C based on the instantaneous distance gain, avoiding the need to manually tune L. It builds up a set of possible states B by doubling a binary tree and checking the distance criterion on subtrees, then samples from the uniform distribution over C to generate proposals. This allows NUTS to automatically determine an appropriate trajectory length and avoid issues like periodicity that can plague HMC with a poorly chosen, fixed trajectory length.
Last year, I gave an advanced graduate course at CREST about Jeffreys' Theory of Probability. These are the slides. I also wrote a paper, published on arXiv, with two participants in the course.
International Conference on Monte Carlo techniques
Closing conference of thematic cycle
Paris July 5-8th 2016
Campus les Cordeliers
Jere Koskela's slides
On the vexing dilemma of hypothesis testing and the predicted demise of the B..., by Christian Robert
The document discusses hypothesis testing from both frequentist and Bayesian perspectives. It introduces the concept of statistical tests as functions that output accept or reject decisions for hypotheses. P-values are presented as a way to quantify uncertainty in these decisions. Bayes' original 1763 paper on Bayesian statistics is summarized, introducing the concept of the posterior distribution. Bayesian hypothesis testing is then discussed, including the optimal Bayes test and the use of Bayes factors to compare hypotheses without requiring prior probabilities on the hypotheses.
This document discusses various methods for estimating normalizing constants that arise when evaluating integrals numerically. It begins by noting there are many computational methods for approximating normalizing constants across different communities. It then lists the topics that will be covered in the upcoming workshop, including discussions on estimating constants using Monte Carlo methods and Bayesian versus frequentist approaches. The document provides examples of estimating normalizing constants using Monte Carlo integration, reverse logistic regression, and Xiao-Li Meng's maximum likelihood estimation approach. It concludes by discussing some of the challenges in bringing a statistical framework to constant estimation problems.
This document discusses Bayesian hypothesis testing and some of the challenges associated with it. It makes three key points:
1) There is tension between using posterior probabilities from a loss function approach versus Bayes factors, which eliminate prior dependence but have no direct connection to the posterior.
2) Bayesian hypothesis testing relies on choosing prior probabilities for hypotheses and prior distributions for parameters, which can strongly impact results and are often arbitrary.
3) Common Bayesian testing procedures like using Bayes factors can produce paradoxical results in some cases, like Lindley's paradox where the Bayes factor favors the null hypothesis as sample size increases despite evidence against it.
Statistics (1): estimation Chapter 3: likelihood function and likelihood esti..., by Christian Robert
The document discusses likelihood functions and inference. It begins by defining the likelihood function as the function that gives the probability of observing a sample given a parameter value. The likelihood varies with the parameter, while the density function varies with the data. Maximum likelihood estimation chooses parameters that maximize the likelihood function. The score function is the gradient of the log-likelihood and has an expected value of zero at the true parameter value. The Fisher information matrix measures the curvature of the likelihood surface and provides information about the precision of parameter estimates. It relates to the concentration of likelihood functions around the true parameter value as sample size increases.
Delayed acceptance for Metropolis-Hastings algorithms, by Christian Robert
The document proposes a delayed acceptance method for accelerating Metropolis-Hastings algorithms. It begins with a motivating example of non-informative inference for mixture models where computing the prior density is costly. It then introduces the delayed acceptance approach which splits the acceptance probability into pieces that are evaluated sequentially, avoiding computing the full acceptance ratio each time. It validates that the delayed acceptance chain is reversible and provides bounds on its spectral gap and asymptotic variance compared to the original chain. Finally, it discusses optimizing the delayed acceptance approach by considering the expected square jump distance and cost per iteration to maximize efficiency.
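To illustrate the splitting of the acceptance probability described above, here is a toy delayed-acceptance Metropolis-Hastings step in which a cheap likelihood factor is checked first and a (pretend) costly prior factor is evaluated only for surviving proposals; all densities and tuning values are assumptions made for the sketch.

```python
# Toy delayed-acceptance Metropolis-Hastings: the acceptance ratio is split into
# a cheap factor (likelihood) and a costly factor (here the prior, standing in
# for an expensive computation); the costly factor is only evaluated at stage 2.
import numpy as np

rng = np.random.default_rng(8)
y = rng.normal(1.0, 1.0, 200)

def loglik(theta):                 # cheap factor
    return -0.5 * np.sum((y - theta) ** 2)

def logprior(theta):               # stands in for a costly prior computation
    return -0.5 * theta ** 2 / 100.0

theta, chain, second_stage_calls = 0.0, [], 0
for _ in range(10_000):
    prop = theta + rng.normal(0, 0.2)
    # stage 1: cheap part of the ratio
    if np.log(rng.uniform()) < loglik(prop) - loglik(theta):
        # stage 2: costly part, reached only if stage 1 accepts
        second_stage_calls += 1
        if np.log(rng.uniform()) < logprior(prop) - logprior(theta):
            theta = prop
    chain.append(theta)

print(np.mean(chain), second_stage_calls)   # posterior mean, # costly evaluations
```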
This document discusses approximate Bayesian computation (ABC) methods for performing Bayesian inference when the likelihood function is intractable. ABC methods approximate the posterior distribution by simulating data under different parameter values and selecting simulations that match the observed data based on summary statistics. The document outlines how ABC originated in population genetics to model complex demographic scenarios and mutation processes. It then describes the basic ABC rejection sampling algorithm and how it provides an approximation of the posterior distribution by sampling from regions of high density defined by the summary statistics.
The document describes Approximate Bayesian Computation (ABC), a technique for performing Bayesian inference when the likelihood function is intractable or impossible to evaluate directly. ABC works by simulating data under different parameter values, and accepting simulations that are close to the observed data according to a distance measure and tolerance level. ABC provides an approximation to the posterior distribution that improves as the tolerance level decreases and more informative summary statistics are used. The document discusses the ABC algorithm, properties of the exact ABC posterior distribution, and challenges in selecting appropriate summary statistics.
This document provides an introduction to Approximate Bayesian Computation (ABC), a likelihood-free method for approximating posterior distributions when the likelihood function is unavailable or computationally intractable. It describes the ABC rejection sampling algorithm and key concepts like tolerance levels, distance functions, summary statistics, and improvements like ABC-MCMC and ABC-SMC. ABC is presented as an alternative to traditional Bayesian inference methods for models where direct likelihood evaluation is impossible or too expensive.
ABC stands for approximate Bayesian computation. It is a method for performing Bayesian inference when the likelihood function is intractable or impossible to evaluate directly. ABC produces samples from an approximate posterior distribution by simulating parameter and summary statistic values that match the observed summary statistics within a tolerance level. The choice of summary statistics is important but difficult, as there is typically no sufficient statistic. Several strategies have been developed for selecting good summary statistics, including using random forests or the Lasso to evaluate and select from a large set of potential summaries.
The document summarizes Approximate Bayesian Computation (ABC). It discusses how ABC provides a way to approximate Bayesian inference when the likelihood function is intractable or too computationally expensive to evaluate directly. ABC works by simulating data under different parameter values and accepting simulations that are close to the observed data according to a distance measure and tolerance level. Key points discussed include:
- ABC provides an approximation to the posterior distribution by sampling from simulations that fall within a tolerance of the observed data.
- Summary statistics are often used to reduce the dimension of the data and improve the signal-to-noise ratio when applying the tolerance criterion.
- Random forests can help select informative summary statistics and provide semi-automated ABC.
After applying the stochastic Galerkin method to solve a stochastic PDE and solving the resulting large linear system, we obtain a stochastic solution (a random field) represented in a Karhunen-Loeve and PCE basis. No sampling error is involved, only algebraic truncation error. We would now like to escape the classical MCMC path to compute the posterior, and we develop a Bayesian update formula for the KLE-PCE coefficients.
The document provides an introduction to Markov Chain Monte Carlo (MCMC) methods. It discusses using MCMC to sample from distributions when direct sampling is difficult. Specifically, it introduces Gibbs sampling and the Metropolis-Hastings algorithm. Gibbs sampling updates variables one at a time based on their conditional distributions. Metropolis-Hastings proposes candidate samples and accepts or rejects them to converge to the target distribution. The document provides examples and outlines the algorithms to construct Markov chains that sample distributions of interest.
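As a minimal example of the Gibbs sampling idea, the following sketch alternates draws from the two full conditionals of a bivariate normal with correlation rho (a standard textbook target chosen for illustration):

```python
# Gibbs sampler for a bivariate normal with correlation rho, alternating draws
# from the two full conditionals.
import numpy as np

rng = np.random.default_rng(9)
rho, n_iter = 0.8, 10_000
x, y, samples = 0.0, 0.0, []

for _ in range(n_iter):
    x = rng.normal(rho * y, np.sqrt(1 - rho ** 2))   # x | y ~ N(rho*y, 1-rho^2)
    y = rng.normal(rho * x, np.sqrt(1 - rho ** 2))   # y | x ~ N(rho*x, 1-rho^2)
    samples.append((x, y))

samples = np.array(samples)
print(np.corrcoef(samples[:, 0], samples[:, 1])[0, 1])  # should be close to rho
```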
The document discusses probabilistic inference on Bayesian networks. It provides examples of common inference queries such as computing the likelihood of evidence, conditional probabilities given evidence, and the most probable assignment. It also describes the variable elimination algorithm for exact inference on general Bayesian networks through iteratively eliminating variables based on a specified elimination order. An example application of the variable elimination algorithm is shown to compute the conditional probability P(B|h) on a sample Bayesian network.
Maximum likelihood estimation of regularisation parameters in inverse problem..., by Valentin De Bortoli
This document discusses an empirical Bayesian approach for estimating regularization parameters in inverse problems using maximum likelihood estimation. It proposes the Stochastic Optimization with Unadjusted Langevin (SOUL) algorithm, which uses Markov chain sampling to approximate gradients in a stochastic projected gradient descent scheme for optimizing the regularization parameter. The algorithm is shown to converge to the maximum likelihood estimate under certain conditions on the log-likelihood and prior distributions.
We approach the screening problem - i.e. detecting which inputs of a computer model significantly impact the output - from a formal Bayesian model selection point of view. That is, we place a Gaussian process prior on the computer model and consider the $2^p$ models that result from assuming that each of the subsets of the $p$ inputs affects the response. The goal is to obtain the posterior probabilities of each of these models. In this talk, we focus on the specification of objective priors on the model-specific parameters and on convenient ways to compute the associated marginal likelihoods. These two problems, which are normally seen as unrelated, have challenging connections, since the priors proposed in the literature are specifically designed to have posterior modes on the boundary of the parameter space, hence precluding the application of approximate integration techniques based on e.g. Laplace approximations. We explore several ways of circumventing this difficulty, comparing different methodologies with synthetic examples taken from the literature.
Authors: Gonzalo Garcia-Donato (Universidad de Castilla-La Mancha) and Rui Paulo (Universidade de Lisboa)
This document discusses approximate Bayesian computation (ABC) for model choice between multiple models. It introduces the ABC algorithm for model choice, which approximates the posterior probabilities of models given the data by simulating parameters from the prior and accepting simulations based on the distance between simulated and observed sufficient statistics. Issues with choosing sufficient statistics that apply to all models are discussed. The document also examines the limiting behavior of the ABC approximation to the Bayes factor as the tolerance approaches 0 and infinity. It notes that discrepancies can arise if sufficient statistics are not cross-model sufficient. An example comparing Poisson and geometric models demonstrates this.
Stratified sampling and resampling for approximate Bayesian computation, by Umberto Picchini
Stratified Monte Carlo is proposed as a method to accelerate ABC-MCMC by reducing its computational cost. It involves partitioning the summary statistic space into strata and estimating the ABC likelihood using a stratified Monte Carlo approach based on resampling. This reduces the variance compared to using a single resampled dataset, without introducing significant bias as resampling alone would. The method is tested on a simple Gaussian example where it provides a posterior approximation closer to the true posterior than standard ABC-MCMC.
This presentation introduces the theory behind the recently popular field of Bayesian Deep Learning, together with recent applications. It briefly explains the theory of Bayesian inference and presents Yarin Gal's Monte Carlo Dropout, covering both its theory and its applications.
When models are defined implicitly as systems of differential equations with no closed form solution, the choice of discretization grid for their approximation represents a trade-off between accuracy of the estimated solution and computational resources. We apply principles of statistical design to a class of sequential probability based models of discretization uncertainty for selecting the optimal discretization grid adaptively. Our proposal is compared to other approaches in the literature.
Fractional hot deck imputation, by Jae-kwang Kim
Fractional hot deck imputation is a method for handling multivariate missing data in survey sampling. It involves splitting records with missing data into multiple imputed values, and assigning fractional weights to each imputed value. This results in a single imputed data file with size less than or equal to the original sample size multiplied by the number of imputations. Fractional weights are replicated to estimate variance taking into account uncertainty in parameter estimates used in the imputation model. For categorical variables, possible values are used as imputed values and fractional weights are conditional probabilities of the imputed values given observed data, estimated using an EM algorithm.
1. The document discusses likelihood-free computational statistics methods for Bayesian inference when the likelihood function is intractable. It covers approximate Bayesian computation (ABC), ABC model choice, and Bayesian computation using empirical likelihood.
2. ABC approximates the posterior distribution by simulating data under different parameter values and retaining simulations that best match the observed data. ABC model choice extends this to model selection problems.
3. Empirical likelihood provides an alternative to ABC by reconstructing a likelihood function from independent blocks of data, allowing faster Bayesian inference without loss of information from summary statistics.
This document provides a summary of spatial data modeling and analysis techniques. It begins with an outline of the topics to be covered, including additive statistical models for spatial data, spatial covariance functions, the multivariate normal distribution, kriging for prediction and uncertainty, and the likelihood function for parameter estimation. It then introduces the key concepts and equations for modeling spatial processes as Gaussian random fields with specified covariance functions. Examples are given of commonly used covariance functions and the types of random surfaces they generate. Kriging is described as a best linear unbiased prediction technique that uses a spatial covariance function and observations to make predictions at unknown locations. The document concludes with examples of parameter estimation via maximum likelihood and using the fitted model to make predictions and conditional simulations.
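A minimal one-dimensional simple-kriging illustration, assuming a known zero mean and known exponential covariance parameters (both assumptions made only for this sketch), could be:

```python
# 1-D simple kriging with an exponential covariance; mean and covariance
# parameters are assumed known for this illustration.
import numpy as np

rng = np.random.default_rng(10)
sigma2, phi = 1.0, 0.5
cov = lambda h: sigma2 * np.exp(-np.abs(h) / phi)

x = np.sort(rng.uniform(0, 5, 20))                 # observation locations
K = cov(x[:, None] - x[None, :])                   # covariance matrix
y = rng.multivariate_normal(np.zeros(len(x)), K)   # one realisation of the field

x0 = 2.3                                           # prediction location
k0 = cov(x0 - x)
w = np.linalg.solve(K, k0)                         # kriging weights
pred = w @ y                                       # BLUP under zero mean
pred_var = sigma2 - k0 @ w                         # kriging variance
print(pred, pred_var)
```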
International Conference on Monte Carlo techniques
Closing conference of thematic cycle
Paris July 5-8th 2016
Campus les Cordeliers
Slides of Richard Everitt's presentation
This document discusses differentially private distributed Bayesian linear regression with Markov chain Monte Carlo (MCMC) methods. It proposes adding noise to the summaries (S) and coefficients (z) of local linear regression models on different devices to provide differential privacy. Gibbs sampling is used to simulate the genuine posterior distribution over the linear model parameters (theta, sigma_y, Sigma_x, z1:J, S1:J) in a distributed manner while maintaining privacy. Alternative approaches like exploiting approximate posteriors from all devices or learning iteratively are also mentioned.
This document discusses mixture models and approximations to computing model evidence. It contains:
1) An overview of mixtures of distributions and common priors used for mixtures.
2) Approximations to computing marginal likelihoods or model evidence using Chib's representation and Rao-Blackwellization. Permutations are used to address label switching issues.
3) Methods for more efficient sampling for computing model evidence, including iterative bridge sampling and dual importance sampling with approximations to reduce the number of permutations considered.
Sequential Monte Carlo is also briefly mentioned as an alternative approach.
This document describes the adaptive restore algorithm, a non-reversible Markov chain Monte Carlo method. It begins with an overview of the restore process, which takes regenerations from an underlying diffusion or jump process to construct a reversible Markov chain with a target distribution. The adaptive restore process enriches this by allowing the regeneration distribution to adapt over time. It converges almost surely to the minimal regeneration distribution. Parameters like the initial regeneration distribution and rates are discussed. Examples are provided for the adaptive Brownian restore algorithm and calibrating the parameters.
This document summarizes techniques for approximating marginal likelihoods and Bayes factors, which are important quantities in Bayesian inference. It discusses Geyer's 1994 logistic regression approach, links to bridge sampling, and how mixtures can be used as importance sampling proposals. Specifically, it shows how optimizing the logistic pseudo-likelihood relates to the bridge sampling optimal estimator. It also discusses non-parametric maximum likelihood estimation based on simulations.
This document discusses Bayesian restricted likelihood methods for situations where the likelihood cannot be fully trusted. It presents several approaches including empirical likelihood, Bayesian empirical likelihood, using insufficient statistics, approximate Bayesian computation (ABC), and MCMC on manifolds. The key ideas are developing Bayesian tools that are robust to model misspecification by questioning the likelihood, prior, and other assumptions.
This document describes a new method called component-wise approximate Bayesian computation (ABCG or ABC-Gibbs) that combines approximate Bayesian computation (ABC) with Gibbs sampling. ABCG aims to more efficiently explore parameter spaces when the number of parameters is large. It works by alternately sampling each parameter from its ABC-approximated conditional distribution given current values of other parameters. The document provides theoretical analysis showing ABCG converges to a stationary distribution under certain conditions. It also presents examples demonstrating ABCG can better separate estimates from the prior compared to simple ABC, especially for hierarchical models.
The document describes a new method called component-wise approximate Bayesian computation (ABC) that combines ABC with Gibbs sampling. It aims to improve ABC's ability to efficiently explore parameter spaces when the number of parameters is large. The method works by alternating sampling from each parameter's ABC posterior conditional distribution given current values of other parameters and the observed data. The method is proven to converge to a stationary distribution under certain assumptions, especially for hierarchical models where conditional distributions are often simplified. Numerical experiments on toy examples demonstrate the method can provide a better approximation of the true posterior than vanilla ABC.
1) Likelihood-free Bayesian experimental design is discussed as an intractable likelihood optimization problem, where the goal is to find the optimal design d that minimizes expected loss without using the full posterior distribution.
2) Several Bayesian tools are proposed to make the design problem more Bayesian, including Bayesian non-parametrics, annealing algorithms, and placing a posterior on the design d.
3) Gaussian processes are a default modeling choice for complex unknown functions in these problems, but their accuracy is difficult to assess and they may incur a dimension curse.
The document discusses Approximate Bayesian Computation (ABC), a simulation-based method for conducting Bayesian inference when the likelihood function is intractable or unavailable. ABC works by simulating data from the model, accepting simulations that are close to the observed data based on a distance measure and tolerance level. This provides samples from an approximation of the posterior distribution. The document provides examples that motivate ABC and outlines the basic ABC algorithm. It also discusses extensions and improvements to the standard ABC method.
a discussion of Chib, Shin, and Simoni (2017-8) Bayesian moment modelsChristian Robert
This document discusses Bayesian estimation of conditional moment models. It presents several approaches for completing conditional moment models for Bayesian processing, including using non-parametric parts, empirical likelihood Bayesian tools, or maximum entropy alternatives. It also discusses simplistic ABC alternatives and innovative aspects of introducing tolerance parameters for misspecification and cancelling conditional aspects. Unconditional and conditional model comparison using empirical likelihoods and Bayes factors is proposed.
This document discusses using the Wasserstein distance for inference in generative models. It begins with an overview of approximate Bayesian computation (ABC) and how distances between samples are used. It then introduces the Wasserstein distance as an alternative distance that can have lower variance than the Euclidean distance. Computational aspects and asymptotics of using the Wasserstein distance are discussed. The document also covers how transport distances can handle time series data.
Poster for Bayesian Statistics in the Big Data Era conferenceChristian Robert
The document proposes a new version of Hamiltonian Monte Carlo (HMC) sampling that is essentially calibration-free. It achieves this by learning the optimal leapfrog scale from the distribution of integration times using the No-U-Turn Sampler algorithm. Compared to the original NUTS algorithm on benchmark models, this new enhanced HMC (eHMC) exhibits significantly improved efficiency with no hand-tuning of parameters required. The document tests eHMC on a Susceptible-Infected-Recovered model of disease transmission.
short course at CIRM, Bayesian Masterclass, October 2018Christian Robert
Markov Chain Monte Carlo (MCMC) methods generate dependent samples from a target distribution using a Markov chain. The Metropolis-Hastings algorithm constructs a Markov chain with a desired stationary distribution by proposing moves to new states and accepting or rejecting them probabilistically. The algorithm is used to approximate integrals that are difficult to compute directly. It has been shown to converge to the target distribution as the number of iterations increases.
This document discusses using the Wasserstein distance for inference in generative models. It begins by introducing ABC methods that use a distance between samples to compare observed and simulated data. It then discusses using the Wasserstein distance as an alternative distance metric that has lower variance than the Euclidean distance. The document covers computational aspects of calculating the Wasserstein distance, asymptotic properties of minimum Wasserstein estimators, and applications to time series data.
1. ABC methodology and applications
Jean-Michel Marin & Christian P. Robert
I3M, Université Montpellier 2, Université Paris-Dauphine, & University of Warwick
ISBA 2014, Cancun. July 13
3. Approximate Bayesian computation
1 Approximate Bayesian computation
ABC basics
Alphabet soup
ABC-MCMC
2 ABC as an inference machine
3 ABC for model choice
4 Genetics of ABC
4. Intractable likelihoods
Cases when the likelihood function f(y|θ) is unavailable and when the completion step
f(y|θ) = ∫ f(y, z|θ) dz
is impossible or too costly because of the dimension of z
⇒ MCMC cannot be implemented!
7. The ABC method
Bayesian setting: target is π(θ)f (y|θ)
When likelihood f (y|θ) not in closed form, likelihood-free rejection
technique:
8. The ABC method
Bayesian setting: target is π(θ)f (y|θ)
When likelihood f (y|θ) not in closed form, likelihood-free rejection
technique:
ABC algorithm
For an observation y ∼ f(y|θ), under the prior π(θ), keep jointly simulating
θ′ ∼ π(θ), z ∼ f(z|θ′),
until the auxiliary variable z is equal to the observed value, z = y.
[Tavaré et al., 1997]
9. Why does it work?!
The proof is trivial:
f(θ_i) ∝ Σ_{z∈D} π(θ_i) f(z|θ_i) I_y(z)
∝ π(θ_i) f(y|θ_i)
∝ π(θ_i|y) .
[Accept–Reject 101]
10. Earlier occurrence
‘Bayesian statistics and Monte Carlo methods are ideally
suited to the task of passing many models over one
dataset’
[Don Rubin, Annals of Statistics, 1984]
Note Rubin (1984) does not promote this algorithm for
likelihood-free simulation but frequentist intuition on posterior
distributions: parameters from posteriors are more likely to be
those that could have generated the data.
11. A as A...pproximative
When y is a continuous random variable, equality z = y is replaced with a tolerance condition,
ρ(y, z) ≤ ε
where ρ is a distance
12. A as A...pproximative
When y is a continuous random variable, equality z = y is replaced with a tolerance condition,
ρ(y, z) ≤ ε
where ρ is a distance
Output distributed from
π(θ) P_θ{ρ(y, z) < ε} ∝ π(θ | ρ(y, z) < ε)
[Pritchard et al., 1999]
13. ABC algorithm
Algorithm 1 Likelihood-free rejection sampler
for i = 1 to N do
  repeat
    generate θ′ from the prior distribution π(·)
    generate z from the likelihood f(·|θ′)
  until ρ{η(z), η(y)} ≤ ε
  set θ_i = θ′
end for
where η(y) defines a (not necessarily sufficient) [summary] statistic
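A minimal R sketch of Algorithm 1, in the spirit of the later toy examples; rprior, rlik, stat, rho and the tolerance eps are all placeholders to be supplied by the user (none of them are fixed by the slides):
abc_reject <- function(N, y, rprior, rlik, stat, rho, eps) {
  theta <- numeric(N)
  sy <- stat(y)
  for (i in 1:N) {
    repeat {
      th <- rprior(1)                      # theta' ~ pi(.)
      z  <- rlik(th)                       # z ~ f(.|theta')
      if (rho(stat(z), sy) <= eps) break   # keep simulating until close enough
    }
    theta[i] <- th
  }
  theta
}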
14. Output
The likelihood-free algorithm samples from the marginal in z of:
π_ε(θ, z|y) = π(θ) f(z|θ) I_{A_{ε,y}}(z) / ∫_{A_{ε,y}×Θ} π(θ) f(z|θ) dz dθ ,
where A_{ε,y} = {z ∈ D | ρ(η(z), η(y)) < ε}.
15. Output
The likelihood-free algorithm samples from the marginal in z of:
π_ε(θ, z|y) = π(θ) f(z|θ) I_{A_{ε,y}}(z) / ∫_{A_{ε,y}×Θ} π(θ) f(z|θ) dz dθ ,
where A_{ε,y} = {z ∈ D | ρ(η(z), η(y)) < ε}.
The idea behind ABC is that the summary statistics coupled with a small tolerance should provide a good approximation of the posterior distribution:
π_ε(θ|y) = ∫ π_ε(θ, z|y) dz ≈ π(θ|y) .
16. A toy example
Case of
y|θ ∼ N(2(θ + 2)θ(θ − 2), 0.1 + θ²)
and
θ ∼ U[−10,10]
when y = 2 and ρ(y, z) = |y − z|
[Richard Wilkinson, Tutorial at NIPS 2013]
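A rough R implementation of this toy example (reading 0.1 + θ² as the variance and keeping, as an arbitrary choice, the 0.1% closest simulations):
N <- 1e6
theta <- runif(N, -10, 10)                      # theta ~ U[-10,10]
z <- rnorm(N, mean = 2 * (theta + 2) * theta * (theta - 2),
              sd = sqrt(0.1 + theta^2))         # z | theta
d <- abs(z - 2)                                 # rho(y, z) = |y - z| with y = 2
eps <- quantile(d, 0.001)
post <- theta[d <= eps]                         # ABC sample, e.g. hist(post)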
21. ABC as knn
[Biau et al., 2013, Annales de l'IHP]
Practice of ABC: determine tolerance as a quantile on observed distances, say 10% or 1% quantile,
ε = ε_N = q_α(d_1, . . . , d_N)
22. ABC as knn
[Biau et al., 2013, Annales de l'IHP]
Practice of ABC: determine tolerance as a quantile on observed distances, say 10% or 1% quantile,
ε = ε_N = q_α(d_1, . . . , d_N)
• Interpretation of ε as nonparametric bandwidth only approximation of the actual practice
[Blum & François, 2010]
23. ABC as knn
[Biau et al., 2013, Annales de l'IHP]
Practice of ABC: determine tolerance as a quantile on observed distances, say 10% or 1% quantile,
ε = ε_N = q_α(d_1, . . . , d_N)
• Interpretation of ε as nonparametric bandwidth only approximation of the actual practice
[Blum & François, 2010]
• ABC is a k-nearest neighbour (knn) method with k_N = Nε_N
[Loftsgaarden & Quesenberry, 1965]
24. ABC consistency
Provided
k_N / log log N → ∞ and k_N / N → 0
as N → ∞, for almost all s_0 (with respect to the distribution of S), with probability 1,
(1/k_N) Σ_{j=1}^{k_N} ϕ(θ_j) → E[ϕ(θ_j) | S = s_0]
[Devroye, 1982]
25. ABC consistency
Provided
k_N / log log N → ∞ and k_N / N → 0
as N → ∞, for almost all s_0 (with respect to the distribution of S), with probability 1,
(1/k_N) Σ_{j=1}^{k_N} ϕ(θ_j) → E[ϕ(θ_j) | S = s_0]
[Devroye, 1982]
Biau et al. (2013) also recall pointwise and integrated mean square error consistency results on the corresponding kernel estimate of the conditional posterior distribution, under constraints
k_N → ∞, k_N/N → 0, h_N → 0 and h_N^p k_N → ∞,
26. Rates of convergence
Further assumptions (on target and kernel) allow for precise (integrated mean square) convergence rates (as a power of the sample size N), derived from classical k-nearest neighbour regression, like
• when m = 1, 2, 3, k_N ≈ N^{(p+4)/(p+8)} and rate N^{−4/(p+8)}
• when m = 4, k_N ≈ N^{(p+4)/(p+8)} and rate N^{−4/(p+8)} log N
• when m > 4, k_N ≈ N^{(p+4)/(m+p+4)} and rate N^{−4/(m+p+4)}
[Biau et al., 2013]
where p dimension of θ and m dimension of summary statistics
27. Rates of convergence
Further assumptions (on target and kernel) allow for precise (integrated mean square) convergence rates (as a power of the sample size N), derived from classical k-nearest neighbour regression, like
• when m = 1, 2, 3, k_N ≈ N^{(p+4)/(p+8)} and rate N^{−4/(p+8)}
• when m = 4, k_N ≈ N^{(p+4)/(p+8)} and rate N^{−4/(p+8)} log N
• when m > 4, k_N ≈ N^{(p+4)/(m+p+4)} and rate N^{−4/(m+p+4)}
[Biau et al., 2013]
where p dimension of θ and m dimension of summary statistics
Drag: Only applies to sufficient summary statistics
28. Probit modelling on Pima Indian women
Example (R benchmark)
200 Pima Indian women with observed variables
• plasma glucose concentration in oral glucose tolerance test
• diastolic blood pressure
• diabetes pedigree function
• presence/absence of diabetes
29. Probit modelling on Pima Indian women
Example (R benchmark)
200 Pima Indian women with observed variables
• plasma glucose concentration in oral glucose tolerance test
• diastolic blood pressure
• diabetes pedigree function
• presence/absence of diabetes
Probability of diabetes function of above variables
P(y = 1|x) = Φ(x1β1 + x2β2 + x3β3) ,
for 200 observations based on a g-prior modelling:
β ∼ N_3(0, n (X^T X)^{-1})
30. Probit modelling on Pima Indian women
Example (R benchmark)
200 Pima Indian women with observed variables
• plasma glucose concentration in oral glucose tolerance test
• diastolic blood pressure
• diabetes pedigree function
• presence/absence of diabetes
Probability of diabetes function of above variables
P(y = 1|x) = Φ(x1β1 + x2β2 + x3β3) ,
for 200 observations based on a g-prior modelling:
β ∼ N_3(0, n (X^T X)^{-1})
Use of MLE estimates as summary statistics and of distance
ρ(y, z) = ‖β̂(z) − β̂(y)‖²
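A sketch of this ABC analysis in R, assuming the 200-observation MASS::Pima.tr dataset (with raw covariates) is the benchmark meant; the number of simulations and the 1% acceptance quantile are arbitrary choices for the sketch:
library(MASS)
y <- as.integer(Pima.tr$type == "Yes")
X <- as.matrix(Pima.tr[, c("glu", "bp", "ped")])
n <- nrow(X)
mle <- coef(glm(y ~ X - 1, family = binomial(link = "probit")))   # summary statistic beta-hat(y)
Sig <- n * solve(crossprod(X))                                    # g-prior: beta ~ N3(0, n (X'X)^{-1})
M <- 1e4
beta <- mvrnorm(M, mu = rep(0, 3), Sigma = Sig)
dist <- numeric(M)
for (t in 1:M) {
  z <- rbinom(n, 1, pnorm(X %*% beta[t, ]))                       # pseudo-data from the probit model
  bz <- try(coef(glm(z ~ X - 1, family = binomial(link = "probit"))), silent = TRUE)
  dist[t] <- if (inherits(bz, "try-error") || anyNA(bz)) Inf else sum((bz - mle)^2)
}
abc <- beta[dist <= quantile(dist, 0.01), ]                       # ABC rejection sample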
31. Pima Indian benchmark
Figure: Comparison between density estimates of the marginals on β1 (left), β2 (center) and β3 (right) from ABC rejection samples (red) and MCMC samples (black).
32. MA example
Case of the MA(2) model
x_t = ε_t + Σ_{i=1}^{2} ϑ_i ε_{t−i}
Simple prior: uniform prior over the identifiability zone, e.g.
triangle for MA(2)
33. MA example (2)
ABC algorithm thus made of
1 picking a new value (ϑ1, ϑ2) in the triangle
2 generating an iid sequence (ε_t)_{−2<t≤T}
3 producing a simulated series (x′_t)_{1≤t≤T}
34. MA example (2)
ABC algorithm thus made of
1 picking a new value (ϑ1, ϑ2) in the triangle
2 generating an iid sequence (ε_t)_{−2<t≤T}
3 producing a simulated series (x′_t)_{1≤t≤T}
Distance: choice between basic distance between the series
ρ((x_t)_{1≤t≤T}, (x′_t)_{1≤t≤T}) = Σ_{t=1}^{T} (x_t − x′_t)²
or distance between summary statistics like the 2 autocorrelations
τ_j = Σ_{t=j+1}^{T} x_t x_{t−j}
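A possible R rendering of this MA(2) ABC scheme, using the summary-statistics distance; the series length, the true values (0.6, 0.2) (borrowed from a later slide) and the 1% tolerance quantile are all choices made for the sketch:
Tn <- 100
e0 <- rnorm(Tn + 2)
x_obs <- e0[3:(Tn + 2)] + 0.6 * e0[2:(Tn + 1)] + 0.2 * e0[1:Tn]
tau <- function(x) c(sum(x[-1] * x[-length(x)]),
                     sum(x[-(1:2)] * x[1:(length(x) - 2)]))       # tau_1, tau_2
s_obs <- tau(x_obs)
N <- 1e5
th1 <- th2 <- d <- numeric(N)
for (i in 1:N) {
  repeat {                                                        # uniform draw on the MA(2) triangle
    t1 <- runif(1, -2, 2); t2 <- runif(1, -1, 1)
    if (t1 + t2 > -1 && t1 - t2 < 1) break
  }
  e <- rnorm(Tn + 2)
  x <- e[3:(Tn + 2)] + t1 * e[2:(Tn + 1)] + t2 * e[1:Tn]
  th1[i] <- t1; th2[i] <- t2
  d[i] <- sum((tau(x) - s_obs)^2)
}
keep <- d <= quantile(d, 0.01)            # ABC sample: th1[keep], th2[keep]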
35. Comparison of distance impact
Evaluation of the tolerance on the ABC sample against both distances (ε = 10%, 1%, 0.1% quantiles of simulated distances) for an MA(2) model
36. Comparison of distance impact
[Figure: density estimates of θ1 (left) and θ2 (right) under both distances and the three tolerance levels]
Evaluation of the tolerance on the ABC sample against both distances (ε = 10%, 1%, 0.1% quantiles of simulated distances) for an MA(2) model
38. Homonymy
The ABC algorithm is not to be confused with the ABC algorithm
The Artificial Bee Colony algorithm is a swarm based meta-heuristic
algorithm that was introduced by Karaboga in 2005 for optimizing
numerical problems. It was inspired by the intelligent foraging
behavior of honey bees. The algorithm is specifically based on the
model proposed by Tereshko and Loengarov (2005) for the foraging
behaviour of honey bee colonies. The model consists of three
essential components: employed and unemployed foraging bees, and
food sources. The first two components, employed and unemployed
foraging bees, search for rich food sources (...) close to their hive.
The model also defines two leading modes of behaviour (...):
recruitment of foragers to rich food sources resulting in positive
feedback and abandonment of poor sources by foragers causing
negative feedback.
[Karaboga, Scholarpedia]
40. ABC advances
Simulating from the prior is often poor in efficiency:
Either modify the proposal distribution on θ to increase the density
of z’s within the vicinity of y...
[Marjoram et al, 2003; Bortot et al., 2007, Sisson et al., 2007]
41. ABC advances
Simulating from the prior is often poor in efficiency:
Either modify the proposal distribution on θ to increase the density
of z’s within the vicinity of y...
[Marjoram et al, 2003; Bortot et al., 2007, Sisson et al., 2007]
...or by viewing the problem as a conditional density estimation
and by developing techniques to allow for larger ε
[Beaumont et al., 2002]
42. ABC advances
Simulating from the prior is often poor in efficiency:
Either modify the proposal distribution on θ to increase the density
of z’s within the vicinity of y...
[Marjoram et al, 2003; Bortot et al., 2007, Sisson et al., 2007]
...or by viewing the problem as a conditional density estimation
and by developing techniques to allow for larger ε
[Beaumont et al., 2002]
...or even by including ε in the inferential framework [ABCµ]
[Ratmann et al., 2009]
43. ABC-NP
Better usage of [prior] simulations by adjustment: instead of throwing away θ such that ρ(η(z), η(y)) > ε, replace θ's with locally regressed transforms
θ* = θ − {η(z) − η(y)}^T β̂
[Csilléry et al., TEE, 2010]
where β̂ is obtained by [NP] weighted least square regression on (η(z) − η(y)) with weights
K_δ{ρ(η(z), η(y))}
[Beaumont et al., 2002, Genetics]
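A one-dimensional R sketch of this local regression adjustment, taking an Epanechnikov kernel for K_δ (the kernel choice is an assumption); theta and eta_z are the reference-table draws and their summaries, eta_y the observed summary:
abc_adjust <- function(theta, eta_z, eta_y, delta) {
  d <- abs(eta_z - eta_y)
  w <- ifelse(d < delta, 1 - (d / delta)^2, 0)      # K_delta weights
  keep <- w > 0
  fit <- lm(theta[keep] ~ I(eta_z[keep] - eta_y), weights = w[keep])
  beta_hat <- coef(fit)[2]
  theta[keep] - (eta_z[keep] - eta_y) * beta_hat    # theta* = theta - {eta(z) - eta(y)} beta_hat
}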
44. ABC-NP (regression)
Also found in the subsequent literature, e.g. in Fearnhead–Prangle (2012):
weight directly simulation by
K_δ{ρ(η(z(θ)), η(y))}
or
(1/S) Σ_{s=1}^{S} K_δ{ρ(η(z^s(θ)), η(y))}
[consistent estimate of f(η|θ)]
45. ABC-NP (regression)
Also found in the subsequent literature, e.g. in Fearnhead–Prangle (2012):
weight directly simulation by
K_δ{ρ(η(z(θ)), η(y))}
or
(1/S) Σ_{s=1}^{S} K_δ{ρ(η(z^s(θ)), η(y))}
[consistent estimate of f(η|θ)]
Curse of dimensionality: poor estimate when d = dim(η) is large...
46. ABC-NP (regression)
Use of the kernel weights
K_δ{ρ(η(z(θ)), η(y))}
leads to the NP estimate of the posterior expectation
Σ_i θ_i K_δ{ρ(η(z(θ_i)), η(y))} / Σ_i K_δ{ρ(η(z(θ_i)), η(y))}
[Blum, JASA, 2010]
47. ABC-NP (regression)
Use of the kernel weights
K_δ{ρ(η(z(θ)), η(y))}
leads to the NP estimate of the posterior conditional density
Σ_i K̃_b(θ_i − θ) K_δ{ρ(η(z(θ_i)), η(y))} / Σ_i K_δ{ρ(η(z(θ_i)), η(y))}
[Blum, JASA, 2010]
55. ABC-NCH (2)
Why neural network?
• fights curse of dimensionality
• selects relevant summary statistics
• provides automated dimension reduction
• offers a model choice capability
• improves upon multinomial logistic
[Blum & François, 2009]
56. ABC-MCMC
Markov chain (θ^{(t)}) created via the transition function
θ^{(t+1)} = θ′ ∼ K_ω(θ′|θ^{(t)}) if x ∼ f(x|θ′) is such that x = y
  and u ∼ U(0, 1) ≤ π(θ′) K_ω(θ^{(t)}|θ′) / [π(θ^{(t)}) K_ω(θ′|θ^{(t)})] ,
θ^{(t+1)} = θ^{(t)} otherwise,
57. ABC-MCMC
Markov chain (θ^{(t)}) created via the transition function
θ^{(t+1)} = θ′ ∼ K_ω(θ′|θ^{(t)}) if x ∼ f(x|θ′) is such that x = y
  and u ∼ U(0, 1) ≤ π(θ′) K_ω(θ^{(t)}|θ′) / [π(θ^{(t)}) K_ω(θ′|θ^{(t)})] ,
θ^{(t+1)} = θ^{(t)} otherwise,
has the posterior π(θ|y) as stationary distribution
[Marjoram et al, 2003]
58. ABC-MCMC (2)
Algorithm 2 Likelihood-free MCMC sampler
Use Algorithm 1 to get (θ^{(0)}, z^{(0)})
for t = 1 to N do
  Generate θ′ from K_ω(·|θ^{(t−1)}),
  Generate z′ from the likelihood f(·|θ′),
  Generate u from U[0,1],
  if u ≤ [π(θ′) K_ω(θ^{(t−1)}|θ′) / π(θ^{(t−1)}) K_ω(θ′|θ^{(t−1)})] I_{A_{ε,y}}(z′) then
    set (θ^{(t)}, z^{(t)}) = (θ′, z′)
  else
    (θ^{(t)}, z^{(t)}) = (θ^{(t−1)}, z^{(t−1)}),
  end if
end for
59. Why does it work?
Acceptance probability does not involve calculating the likelihood, since
π_ε(θ′, z′|y) / π_ε(θ^{(t−1)}, z^{(t−1)}|y) × [q(θ^{(t−1)}|θ′) f(z^{(t−1)}|θ^{(t−1)})] / [q(θ′|θ^{(t−1)}) f(z′|θ′)]
= [π(θ′) f(z′|θ′) I_{A_{ε,y}}(z′)] / [π(θ^{(t−1)}) f(z^{(t−1)}|θ^{(t−1)}) I_{A_{ε,y}}(z^{(t−1)})] × [q(θ^{(t−1)}|θ′) f(z^{(t−1)}|θ^{(t−1)})] / [q(θ′|θ^{(t−1)}) f(z′|θ′)]
where the likelihood terms f(z′|θ′) and f(z^{(t−1)}|θ^{(t−1)}) cancel and I_{A_{ε,y}}(z^{(t−1)}) = 1 for the current state, so the ratio reduces to
= π(θ′) q(θ^{(t−1)}|θ′) / [π(θ^{(t−1)}) q(θ′|θ^{(t−1)})] × I_{A_{ε,y}}(z′)
62. A toy example
Case of
x ∼ (1/2) N(θ, 1) + (1/2) N(−θ, 1)
under prior θ ∼ N(0, 10)
63. A toy example
Case of
x ∼ (1/2) N(θ, 1) + (1/2) N(−θ, 1)
under prior θ ∼ N(0, 10)
ABC sampler
# assuming the observed datum x and the simulation size N are defined
thetas=rnorm(N,sd=10)
zed=sample(c(1,-1),N,rep=TRUE)*thetas+rnorm(N,sd=1)
eps=quantile(abs(zed-x),.01)
abc=thetas[abs(zed-x)<eps]
64. A toy example
Case of
x ∼ (1/2) N(θ, 1) + (1/2) N(−θ, 1)
under prior θ ∼ N(0, 10)
ABC-MCMC sampler
# assuming x, N and the tolerance eps (e.g. from the rejection run above) are defined
metas=rep(0,N)
metas[1]=rnorm(1,sd=10)
zed=rep(0,N)
zed[1]=x
for (t in 2:N){
  metas[t]=rnorm(1,mean=metas[t-1],sd=5)
  zed[t]=rnorm(1,mean=(1-2*(runif(1)<.5))*metas[t],sd=1)
  # reject when the simulated datum is too far from x or the prior ratio test fails
  if ((abs(zed[t]-x)>eps)||(runif(1)>dnorm(metas[t],sd=10)/dnorm(metas[t-1],sd=10))){
    metas[t]=metas[t-1]
    zed[t]=zed[t-1]}
}
70. A toy example
x = 50
[Figure: ABC posterior density estimate on θ for the observation x = 50]
71. A PMC version
Use of the same kernel idea as ABC-PRC (Sisson et al., 2007) but with IS correction
Generate a sample at iteration t by
π̂_t(θ^{(t)}) ∝ Σ_{j=1}^{N} ω_j^{(t−1)} K_t(θ^{(t)} | θ_j^{(t−1)})
modulo acceptance of the associated x_t, and use an importance weight associated with an accepted simulation θ_i^{(t)}
ω_i^{(t)} ∝ π(θ_i^{(t)}) / π̂_t(θ_i^{(t)}) .
⇒ Still likelihood free
[Beaumont et al., 2009]
72. ABC-PMC algorithm
Given a decreasing sequence of approximation levels ε_1 ≥ . . . ≥ ε_T,
1. At iteration t = 1,
  For i = 1, ..., N
    Simulate θ_i^{(1)} ∼ π(θ) and x ∼ f(x|θ_i^{(1)}) until ρ(x, y) < ε_1
    Set ω_i^{(1)} = 1/N
  Take τ_2² as twice the empirical variance of the θ_i^{(1)}'s
2. At iteration 2 ≤ t ≤ T,
  For i = 1, ..., N, repeat
    Pick θ*_i from the θ_j^{(t−1)}'s with probabilities ω_j^{(t−1)}
    generate θ_i^{(t)} | θ*_i ∼ N(θ*_i, σ_t²) and x ∼ f(x|θ_i^{(t)})
  until ρ(x, y) < ε_t
  Set ω_i^{(t)} ∝ π(θ_i^{(t)}) / Σ_{j=1}^{N} ω_j^{(t−1)} ϕ(σ_t^{−1}(θ_i^{(t)} − θ_j^{(t−1)}))
  Take τ_{t+1}² as twice the weighted empirical variance of the θ_i^{(t)}'s
73. Sequential Monte Carlo
SMC is a simulation technique to approximate a sequence of related probability distributions π_n, with π_0 "easy" and π_T as target.
Iterated IS as PMC: particles moved from time n−1 to time n via kernel K_n and use of a sequence of extended targets π̃_n
π̃_n(z_{0:n}) = π_n(z_n) ∏_{j=0}^{n−1} L_j(z_{j+1}, z_j)
where the L_j's are backward Markov kernels [check that π_n(z_n) is a marginal]
[Del Moral, Doucet & Jasra, Series B, 2006]
74. Sequential Monte Carlo (2)
Algorithm 3 SMC sampler
sample z_i^{(0)} ∼ γ_0(x) (i = 1, . . . , N)
compute weights w_i^{(0)} = π_0(z_i^{(0)}) / γ_0(z_i^{(0)})
for t = 1 to N do
  if ESS(w^{(t−1)}) < N_T then
    resample N particles z^{(t−1)} and set weights to 1
  end if
  generate z_i^{(t)} ∼ K_t(z_i^{(t−1)}, ·) and set weights to
  w_i^{(t)} = w_i^{(t−1)} π_t(z_i^{(t)}) L_{t−1}(z_i^{(t)}, z_i^{(t−1)}) / [π_{t−1}(z_i^{(t−1)}) K_t(z_i^{(t−1)}, z_i^{(t)})]
end for
[Del Moral, Doucet & Jasra, Series B, 2006]
75. ABC-SMC
[Del Moral, Doucet & Jasra, 2009]
True derivation of an SMC-ABC algorithm
Use of a kernel K_n associated with target π_{ε_n} and derivation of the backward kernel
L_{n−1}(z, z′) = π_{ε_n}(z′) K_n(z′, z) / π_{ε_n}(z)
Update of the weights
w_{in} ∝ w_{i(n−1)} [Σ_{m=1}^{M} I_{A_{ε_n}}(x_{in}^m)] / [Σ_{m=1}^{M} I_{A_{ε_{n−1}}}(x_{i(n−1)}^m)]
when x_{in}^m ∼ K(x_{i(n−1)}, ·)
76. ABC-SMCM
Modification: Makes M repeated simulations of the pseudo-data z given the parameter, rather than using a single [M = 1] simulation, leading to weight that is proportional to the number of accepted z_i's
ω(θ) = (1/M) Σ_{i=1}^{M} I_{ρ(η(y), η(z_i)) < ε}
[limit in M means exact simulation from (tempered) target]
77. Properties of ABC-SMC
The ABC-SMC method properly uses a backward kernel L(z, z ) to
simplify the importance weight and to remove the dependence on
the unknown likelihood from this weight. Update of importance
weights is reduced to the ratio of the proportions of surviving
particles
Major assumption: the forward kernel K is supposed to be invariant
against the true target [tempered version of the true posterior]
78. Properties of ABC-SMC
The ABC-SMC method properly uses a backward kernel L(z, z ) to
simplify the importance weight and to remove the dependence on
the unknown likelihood from this weight. Update of importance
weights is reduced to the ratio of the proportions of surviving
particles
Major assumption: the forward kernel K is supposed to be invariant
against the true target [tempered version of the true posterior]
Adaptivity in ABC-SMC algorithm only found in on-line construction of the thresholds ε_t, slowly enough to keep a large number of accepted transitions
79. A mixture example (2)
Recovery of the target, whether using a fixed standard deviation of τ = 0.15 or τ = 1/0.15, or a sequence of adaptive τ_t's.
[Figure: ABC posterior density estimates on θ over (−3, 3) for the different kernel-scale choices]
80. ABC inference machine
1 Approximate Bayesian
computation
2 ABC as an inference machine
Exact ABC
ABCµ
Automated summary
statistic selection
3 ABC for model choice
4 Genetics of ABC
81. How Bayesian is aBc..?
• may be a convergent method of inference (meaningful?
sufficient? foreign?)
• approximation error unknown (w/o massive simulation)
• pragmatic/empirical B (there is no other solution!)
• many calibration issues (tolerance, distance, statistics)
• the NP side should be incorporated into the whole B picture
• the approximation error should also be part of the B inference
82. Wilkinson's exact (A)BC
ABC approximation error (i.e. non-zero tolerance) replaced with exact simulation from a controlled approximation to the target, convolution of true posterior with kernel function
π_ε(θ, z|y) = π(θ) f(z|θ) K_ε(y − z) / ∫ π(θ) f(z|θ) K_ε(y − z) dz dθ ,
with K_ε kernel parameterised by bandwidth ε.
[Wilkinson, 2013]
83. Wilkinson's exact (A)BC
ABC approximation error (i.e. non-zero tolerance) replaced with exact simulation from a controlled approximation to the target, convolution of true posterior with kernel function
π_ε(θ, z|y) = π(θ) f(z|θ) K_ε(y − z) / ∫ π(θ) f(z|θ) K_ε(y − z) dz dθ ,
with K_ε kernel parameterised by bandwidth ε.
[Wilkinson, 2013]
Theorem
The ABC algorithm based on the assumption of a randomised observation y = ỹ + ξ, ξ ∼ K_ε, and an acceptance probability of
K_ε(y − z)/M
gives draws from the posterior distribution π(θ|y).
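A small R sketch of the corresponding accept step, assuming a scalar observation and a Gaussian kernel K_ε with bandwidth ε, so that the bound is M = K_ε(0):
accept_wilkinson <- function(y, z, eps) {
  K <- dnorm(y - z, sd = eps)        # K_eps(y - z)
  M <- dnorm(0, sd = eps)            # bound on K_eps
  runif(1) <= K / M                  # accept with probability K_eps(y - z)/M
}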
84. How exact a BC?
"Using ε to represent measurement error is straightforward, whereas using ε to model the model discrepancy is harder to conceptualize and not as commonly used"
[Richard Wilkinson, 2013]
85. How exact a BC?
Pros
• Pseudo-data from true model and observed data from noisy model
• Interesting perspective in that outcome is completely controlled
• Link with ABCµ and assuming y is observed with a measurement error with density K_ε
• Relates to the theory of model approximation
[Kennedy & O'Hagan, 2001]
Cons
• Requires K_ε to be bounded by M
• True approximation error never assessed
• Requires a modification of the standard ABC algorithm
86. Noisy ABC
Idea: Modify the data from the start
ỹ = y_0 + εζ_1
with the same scale ε as ABC
[see Fearnhead–Prangle]
run ABC on ỹ
87. Noisy ABC
Idea: Modify the data from the start
ỹ = y_0 + εζ_1
with the same scale ε as ABC
[see Fearnhead–Prangle]
run ABC on ỹ
Then ABC produces an exact simulation from π_ε(θ|ỹ) = π(θ|ỹ)
[Dean et al., 2011; Fearnhead and Prangle, 2012]
88. Consistent noisy ABC
• Degrading the data improves the estimation performances:
• Noisy ABC-MLE is asymptotically (in n) consistent
• under further assumptions, the noisy ABC-MLE is asymptotically normal
• increase in variance of order ε^{−2}
• likely degradation in precision or computing time due to the lack of summary statistic [curse of dimensionality]
89. ABCµ
[Ratmann, Andrieu, Wiuf and Richardson, 2009, PNAS]
Use of a joint density
f(θ, ε|y) ∝ ξ(ε|y, θ) × π_θ(θ) × π_ε(ε)
where y is the data, and ξ(ε|y, θ) is the prior predictive density of ρ(η(z), η(y)) given θ and y when z ∼ f(z|θ)
90. ABCµ
[Ratmann, Andrieu, Wiuf and Richardson, 2009, PNAS]
Use of a joint density
f(θ, ε|y) ∝ ξ(ε|y, θ) × π_θ(θ) × π_ε(ε)
where y is the data, and ξ(ε|y, θ) is the prior predictive density of ρ(η(z), η(y)) given θ and y when z ∼ f(z|θ)
Warning! Replacement of ξ(ε|y, θ) with a non-parametric kernel approximation.
91. ABCµ details
Multidimensional distances ρ_k (k = 1, . . . , K) and errors ε_k = ρ_k(η_k(z), η_k(y)), with
ε_k ∼ ξ_k(ε|y, θ) ≈ ξ̂_k(ε|y, θ) = (1/Bh_k) Σ_b K[{ε_k − ρ_k(η_k(z_b), η_k(y))}/h_k]
then used in replacing ξ(ε|y, θ) with min_k ξ̂_k(ε|y, θ)
92. ABCµ details
Multidimensional distances ρ_k (k = 1, . . . , K) and errors ε_k = ρ_k(η_k(z), η_k(y)), with
ε_k ∼ ξ_k(ε|y, θ) ≈ ξ̂_k(ε|y, θ) = (1/Bh_k) Σ_b K[{ε_k − ρ_k(η_k(z_b), η_k(y))}/h_k]
then used in replacing ξ(ε|y, θ) with min_k ξ̂_k(ε|y, θ)
ABCµ involves acceptance probability
[π(θ′, ε′)/π(θ, ε)] × [q(θ′, θ) q(ε′, ε) / q(θ, θ′) q(ε, ε′)] × [min_k ξ̂_k(ε′|y, θ′) / min_k ξ̂_k(ε|y, θ)]
95. Semi-automatic ABC
Fearnhead and Prangle (2012) study ABC and the selection of the
summary statistic in close proximity to Wilkinson’s proposal
ABC is then considered from a purely inferential viewpoint and
calibrated for estimation purposes
Use of a randomised (or 'noisy') version of the summary statistics
η̃(y) = η(y) + τε
Derivation of a well-calibrated version of ABC, i.e. an algorithm
that gives proper predictions for the distribution associated with
this randomised summary statistic
96. Summary statistics
Main results:
• Optimality of the posterior expectation E[θ|y] of the
parameter of interest as summary statistics η(y)!
97. Summary statistics
Main results:
• Optimality of the posterior expectation E[θ|y] of the parameter of interest as summary statistics η(y)!
• Use of the standard quadratic loss function
(θ − θ_0)^T A (θ − θ_0) .
98. Details on Fearnhead and Prangle (FP) ABC
Use of a summary statistic S(·), an importance proposal g(·), a kernel K(·) ≤ 1 and a bandwidth h > 0 such that
(θ, y_sim) ∼ g(θ) f(y_sim|θ)
is accepted with probability (hence the bound)
K[{S(y_sim) − s_obs}/h]
and the corresponding importance weight defined by
π(θ) / g(θ)
[Fearnhead & Prangle, 2012]
99. Average acceptance asymptotics
For the average acceptance probability/approximate likelihood
p(θ|s_obs) = ∫ f(y_sim|θ) K[{S(y_sim) − s_obs}/h] dy_sim ,
overall acceptance probability
p(s_obs) = ∫ p(θ|s_obs) π(θ) dθ = π(s_obs) h^d + o(h^d)
[FP, Lemma 1]
100. Calibration of h
"This result gives insight into how S(·) and h affect the Monte Carlo error. To minimize Monte Carlo error, we need h^d to be not too small. Thus ideally we want S(·) to be a low dimensional summary of the data that is sufficiently informative about θ that π(θ|s_obs) is close, in some sense, to π(θ|y_obs)" (FP, p.5)
• turns h into an absolute value while it should be context-dependent and user-calibrated
• only addresses one term in the approximation error and acceptance probability ("curse of dimensionality")
• h large prevents π_ABC(θ|s_obs) from being close to π(θ|s_obs)
• d small prevents π(θ|s_obs) from being close to π(θ|y_obs) ("curse of [dis]information")
101. Converging ABC
Theorem (FP)
For noisy ABC, under identifiability assumptions, the expected noisy-ABC log-likelihood,
E{log[p(θ|s_obs)]} = ∫∫ log[p(θ|S(y_obs) + ε)] π(y_obs|θ_0) K(ε) dy_obs dε ,
has its maximum at θ = θ_0.
102. Converging ABC
Corollary
For noisy ABC, under regularity constraints on summary statistics,
the ABC posterior converges onto a point mass on the true
parameter value as m → ∞.
103. Loss motivated statistic
Under quadratic loss function,
Theorem (FP)
(i) The minimal posterior error E[L(θ, θ̂)|y_obs] occurs when θ̂ = E(θ|y_obs) (!)
(ii) When h → 0, E_ABC(θ|s_obs) converges to E(θ|y_obs)
(iii) If S(y_obs) = E[θ|y_obs] then for θ̂ = E_ABC[θ|s_obs]
E[L(θ, θ̂)|y_obs] = trace(AΣ) + h² ∫ x^T A x K(x) dx + o(h²).
104. Optimal summary statistic
“We take a different approach, and weaken the requirement for
πABC to be a good approximation to π(θ|yobs). We argue for πABC
to be a good approximation solely in terms of the accuracy of
certain estimates of the parameters.” (FP, p.5)
From this result, FP
• derive their choice of summary statistic,
S(y) = E(θ|y)
[almost sufficient]
• suggest
h = O(N^{−1/(2+d)}) and h = O(N^{−1/(4+d)})
as optimal bandwidths for noisy and standard ABC.
105. Optimal summary statistic
“We take a different approach, and weaken the requirement for
πABC to be a good approximation to π(θ|yobs). We argue for πABC
to be a good approximation solely in terms of the accuracy of
certain estimates of the parameters.” (FP, p.5)
From this result, FP
• derive their choice of summary statistic,
S(y) = E(θ|y)
[wow! E_ABC[θ|S(y_obs)] = E[θ|y_obs]]
• suggest
h = O(N^{−1/(2+d)}) and h = O(N^{−1/(4+d)})
as optimal bandwidths for noisy and standard ABC.
106. Caveat
Since E(θ|yobs) is most usually unavailable, FP suggest
(i) use a pilot run of ABC to determine a region of non-negligible
posterior mass;
(ii) simulate sets of parameter values and data;
(iii) use the simulated sets of parameter values and data to
estimate the summary statistic; and
(iv) run ABC with this choice of summary statistic.
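A minimal R sketch of steps (ii)–(iii), approximating E[θ|y] by a linear regression of simulated parameters on simulated features; rtheta_pilot, rdata and features are placeholders standing for the pilot region, the simulator and the chosen functions of the data:
build_summary <- function(M, rtheta_pilot, rdata, features) {
  theta <- rtheta_pilot(M)                                       # parameter draws in the pilot region
  Xf <- t(sapply(1:M, function(i) features(rdata(theta[i]))))    # simulated features of pseudo-data
  fit <- lm(theta ~ Xf)                                          # linear proxy for E[theta|y]
  function(y) sum(c(1, features(y)) * coef(fit))                 # S(y) = fitted E[theta|y]
}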
107. ABC for model choice
1 Approximate Bayesian computation
2 ABC as an inference machine
3 ABC for model choice
4 Genetics of ABC
108. Bayesian model choice
Several models M1, M2, . . . are considered simultaneously for a
dataset y and the model index M is part of the inference.
Use of a prior distribution π(M = m), plus a prior distribution on the parameter conditional on the value m of the model index, π_m(θ_m)
Goal is to derive the posterior distribution of M, challenging
computational target when models are complex.
109. Generic ABC for model choice
Algorithm 4 Likelihood-free model choice sampler (ABC-MC)
for t = 1 to T do
  repeat
    Generate m from the prior π(M = m)
    Generate θ_m from the prior π_m(θ_m)
    Generate z from the model f_m(z|θ_m)
  until ρ{η(z), η(y)} < ε
  Set m^{(t)} = m and θ^{(t)} = θ_m
end for
110. ABC estimates
Posterior probability π(M = m|y) approximated by the frequency of acceptances from model m
(1/T) Σ_{t=1}^{T} I_{m^{(t)} = m} .
Issues with implementation:
• should tolerances be the same for all models?
• should summary statistics vary across models (incl. their dimension)?
• should the distance measure ρ vary as well?
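A generic R sketch of Algorithm 4 and of this frequency estimate; prior_m, rprior, rmodel, stat, rho and eps are placeholders for the model prior, the within-model priors and simulators, the summary statistic, the distance and the tolerance:
abc_mc <- function(Tsim, y, prior_m, rprior, rmodel, stat, rho, eps) {
  m_acc <- integer(Tsim)
  s_obs <- stat(y)
  for (t in 1:Tsim) {
    repeat {
      m  <- sample(seq_along(prior_m), 1, prob = prior_m)   # m ~ pi(M = m)
      th <- rprior[[m]]()                                   # theta_m ~ pi_m
      z  <- rmodel[[m]](th)                                 # z ~ f_m(.|theta_m)
      if (rho(stat(z), s_obs) < eps) break
    }
    m_acc[t] <- m
  }
  table(factor(m_acc, levels = seq_along(prior_m))) / Tsim  # estimated pi(M = m | y)
}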
111. ABC estimates
Posterior probability π(M = m|y) approximated by the frequency of acceptances from model m
(1/T) Σ_{t=1}^{T} I_{m^{(t)} = m} .
Extension to a weighted polychotomous logistic regression estimate of π(M = m|y), with non-parametric kernel weights
[Cornuet et al., DIYABC, 2009]
112. The Great ABC controversy
On-going controversy in phylogeographic genetics about the validity
of using ABC for testing
Against: Templeton, 2008,
2009, 2010a, 2010b, 2010c
argues that nested hypotheses
cannot have higher probabilities
than nesting hypotheses (!)
113. The Great ABC controversy
On-going controversy in phylogeographic genetics about the validity
of using ABC for testing
Against: Templeton, 2008,
2009, 2010a, 2010b, 2010c
argues that nested hypotheses
cannot have higher probabilities
than nesting hypotheses (!)
Replies: Fagundes et al., 2008,
Beaumont et al., 2010, Berger et
al., 2010, Csilléry et al., 2010
point out that the criticisms are
addressed at [Bayesian]
model-based inference and have
nothing to do with ABC...
114. Back to sufficiency
If η1(x) sufficient statistic for model m = 1 and parameter θ1 and
η2(x) sufficient statistic for model m = 2 and parameter θ2,
(η1(x), η2(x)) is not always sufficient for (m, θm)
115. Back to sufficiency
If η1(x) sufficient statistic for model m = 1 and parameter θ1 and
η2(x) sufficient statistic for model m = 2 and parameter θ2,
(η1(x), η2(x)) is not always sufficient for (m, θm)
⇒ Potential loss of information at the testing level
116. Limiting behaviour of B12 (T → ∞)
ABC approximation
B12(y) = [Σ_{t=1}^{T} I_{m_t = 1} I_{ρ{η(z_t), η(y)} ≤ ε}] / [Σ_{t=1}^{T} I_{m_t = 2} I_{ρ{η(z_t), η(y)} ≤ ε}] ,
where the (m_t, z_t)'s are simulated from the (joint) prior
117. Limiting behaviour of B12 (T → ∞)
ABC approximation
B12(y) = [Σ_{t=1}^{T} I_{m_t = 1} I_{ρ{η(z_t), η(y)} ≤ ε}] / [Σ_{t=1}^{T} I_{m_t = 2} I_{ρ{η(z_t), η(y)} ≤ ε}] ,
where the (m_t, z_t)'s are simulated from the (joint) prior
As T goes to infinity, limit
B12(y) = [∫ I_{ρ{η(z), η(y)} ≤ ε} π_1(θ_1) f_1(z|θ_1) dz dθ_1] / [∫ I_{ρ{η(z), η(y)} ≤ ε} π_2(θ_2) f_2(z|θ_2) dz dθ_2]
= [∫ I_{ρ{η, η(y)} ≤ ε} π_1(θ_1) f_1^η(η|θ_1) dη dθ_1] / [∫ I_{ρ{η, η(y)} ≤ ε} π_2(θ_2) f_2^η(η|θ_2) dη dθ_2] ,
where f_1^η(η|θ_1) and f_2^η(η|θ_2) distributions of η(z)
118. Limiting behaviour of B12 (ε → 0)
When ε goes to zero,
B^η_12(y) = [∫ π_1(θ_1) f_1^η(η(y)|θ_1) dθ_1] / [∫ π_2(θ_2) f_2^η(η(y)|θ_2) dθ_2] ,
119. Limiting behaviour of B12 (ε → 0)
When ε goes to zero,
B^η_12(y) = [∫ π_1(θ_1) f_1^η(η(y)|θ_1) dθ_1] / [∫ π_2(θ_2) f_2^η(η(y)|θ_2) dθ_2] ,
⇒ Bayes factor based on the sole observation of η(y)
120. Limiting behaviour of B12 (under sufficiency)
If η(y) sufficient statistic for both models,
f_i(y|θ_i) = g_i(y) f_i^η(η(y)|θ_i)
Thus
B12(y) = [∫_{Θ1} π(θ_1) g_1(y) f_1^η(η(y)|θ_1) dθ_1] / [∫_{Θ2} π(θ_2) g_2(y) f_2^η(η(y)|θ_2) dθ_2]
= [g_1(y) ∫ π_1(θ_1) f_1^η(η(y)|θ_1) dθ_1] / [g_2(y) ∫ π_2(θ_2) f_2^η(η(y)|θ_2) dθ_2]
= [g_1(y) / g_2(y)] B^η_12(y) .
[Didelot, Everitt, Johansen & Lawson, 2011]
121. Limiting behaviour of B12 (under sufficiency)
If η(y) sufficient statistic for both models,
f_i(y|θ_i) = g_i(y) f_i^η(η(y)|θ_i)
Thus
B12(y) = [∫_{Θ1} π(θ_1) g_1(y) f_1^η(η(y)|θ_1) dθ_1] / [∫_{Θ2} π(θ_2) g_2(y) f_2^η(η(y)|θ_2) dθ_2]
= [g_1(y) ∫ π_1(θ_1) f_1^η(η(y)|θ_1) dθ_1] / [g_2(y) ∫ π_2(θ_2) f_2^η(η(y)|θ_2) dθ_2]
= [g_1(y) / g_2(y)] B^η_12(y) .
[Didelot, Everitt, Johansen & Lawson, 2011]
⇒ No discrepancy only when cross-model sufficiency
122. MA(q) divergence
[Figure: frequencies of visits to models MA(1) and MA(2) for the four tolerance levels]
Evolution [against ε] of ABC Bayes factor, in terms of frequencies of visits to models MA(1) (left) and MA(2) (right) when ε equal to 10, 1, .1, .01% quantiles on insufficient autocovariance distances. Sample of 50 points from a MA(2) with θ1 = 0.6, θ2 = 0.2. True Bayes factor equal to 17.71.
123. MA(q) divergence
[Figure: frequencies of visits to models MA(1) and MA(2) for the four tolerance levels]
Evolution [against ε] of ABC Bayes factor, in terms of frequencies of visits to models MA(1) (left) and MA(2) (right) when ε equal to 10, 1, .1, .01% quantiles on insufficient autocovariance distances. Sample of 50 points from a MA(1) model with θ1 = 0.6. True Bayes factor B21 equal to .004.
124. Further comments
'There should be the possibility that for the same model, but different (non-minimal) [summary] statistics (so different η's: η_1 and η_1*) the ratio of evidences may no longer be equal to one.'
[Michael Stumpf, Jan. 28, 2011, 'Og]
Using different summary statistics [on different models] may
indicate the loss of information brought by each set but agreement
does not lead to trustworthy approximations.
125. LDA summaries for model choice
In parallel to FP's semi-automatic ABC, selection of the most discriminant subvector out of a collection of summary statistics can be based on Linear Discriminant Analysis (LDA)
[Estoup et al., 2012, Mol. Ecol. Res.]
Solution now implemented in DIYABC.2
[Cornuet et al., 2008, Bioinf.; Estoup et al., 2013]
126. Implementation
Step 1: Take a subset of the α% (e.g., 1%) best simulations from an ABC reference table usually including 10^6–10^9 simulations for each of the M compared scenarios/models
Selection based on normalized Euclidian distance computed between observed and simulated raw summaries
Step 2: run LDA on this subset to transform summaries into (M − 1) discriminant variables
Step 3: Estimation of the posterior probabilities of each competing scenario/model by polychotomous local logistic regression against the M − 1 most discriminant variables
[Cornuet et al., 2008, Bioinformatics]
127. Implementation
Step 1: Take a subset of the α% (e.g., 1%) best simulations from an ABC reference table usually including 10^6–10^9 simulations for each of the M compared scenarios/models
Step 2: run LDA on this subset to transform summaries into (M − 1) discriminant variables
When computing LDA functions, weight simulated data with an Epanechnikov kernel
Step 3: Estimation of the posterior probabilities of each competing scenario/model by polychotomous local logistic regression against the M − 1 most discriminant variables
[Cornuet et al., 2008, Bioinformatics]
128. LDA advantages
• much faster computation of scenario probabilities via
polychotomous regression
• a (much) lower number of explanatory variables improves the
accuracy of the ABC approximation, reduces the tolerance
and avoids extra costs in constructing the reference table
• allows for a large collection of initial summaries
• ability to evaluate Type I and Type II errors on more complex
models [more on this later]
• LDA reduces correlation among explanatory variables
129. LDA advantages
• much faster computation of scenario probabilities via
polychotomous regression
• a (much) lower number of explanatory variables improves the
accuracy of the ABC approximation, reduces the tolerance
and avoids extra costs in constructing the reference table
• allows for a large collection of initial summaries
• ability to evaluate Type I and Type II errors on more complex
models [more on this later]
• LDA reduces correlation among explanatory variables
When available, using both simulated and real data sets, posterior
probabilities of scenarios computed from LDA-transformed and raw
summaries are strongly correlated
130. A stylised problem
Central question to the validation of ABC for model choice:
When is a Bayes factor based on an insufficient statistic T(y)
consistent?
131. A stylised problem
Central question to the validation of ABC for model choice:
When is a Bayes factor based on an insufficient statistic T(y)
consistent?
Note/warning: inference drawn on T(y) through B^T_12(y) necessarily differs from inference drawn on y through B12(y)
[Marin, Pillai, X, Rousseau, JRSS B, 2013]
132. A benchmark, if toy, example
Comparison suggested by referee of PNAS paper [thanks!]:
[X, Cornuet, Marin, Pillai, Aug. 2011]
Model M1: y ∼ N(θ_1, 1) opposed to model M2: y ∼ L(θ_2, 1/√2), Laplace distribution with mean θ_2 and scale parameter 1/√2 (variance one).
Four possible statistics
1 sample mean y (sufficient for M1 if not M2);
2 sample median med(y) (insufficient);
3 sample variance var(y) (ancillary);
4 median absolute deviation mad(y) = med(|y − med(y)|);
133. A benchmark, if toy, example
Comparison suggested by referee of PNAS paper [thanks!]:
[X, Cornuet, Marin, Pillai, Aug. 2011]
Model M1: y ∼ N(θ_1, 1) opposed to model M2: y ∼ L(θ_2, 1/√2), Laplace distribution with mean θ_2 and scale parameter 1/√2 (variance one).
[Figure: box-plots comparing ABC outputs under the Gauss and Laplace models, n = 100]
134. Framework
Starting from sample
y = (y_1, . . . , y_n)
the observed sample, not necessarily iid with true distribution
y ∼ P^n
Summary statistics
T(y) = T^n = (T_1(y), T_2(y), · · · , T_d(y)) ∈ R^d
with true distribution T^n ∼ G_n.
135. Framework
⇒ Comparison of
– under M1, y ∼ F_{1,n}(·|θ_1) where θ_1 ∈ Θ_1 ⊂ R^{p1}
– under M2, y ∼ F_{2,n}(·|θ_2) where θ_2 ∈ Θ_2 ⊂ R^{p2}
turned into
– under M1, T(y) ∼ G_{1,n}(·|θ_1), and θ_1|T(y) ∼ π_1(·|T^n)
– under M2, T(y) ∼ G_{2,n}(·|θ_2), and θ_2|T(y) ∼ π_2(·|T^n)
136. Assumptions
A collection of asymptotic “standard” assumptions:
[A1] is a standard central limit theorem under the true model with
asymptotic mean µ0
[A2] controls the large deviations of the estimator Tn
from the
model mean µ(θ)
[A3] is the standard prior mass condition found in Bayesian
asymptotics (di effective dimension of the parameter)
[A4] restricts the behaviour of the model density against the true
density
[Think CLT!]
137. Asymptotic marginals
Asymptotically, under [A1]–[A4]
m_i(t) = ∫_{Θi} g_i(t|θ_i) π_i(θ_i) dθ_i
is such that
(i) if inf{|µ_i(θ_i) − µ_0|; θ_i ∈ Θ_i} = 0,
C_l v_n^{d−d_i} ≤ m_i(T^n) ≤ C_u v_n^{d−d_i}
and
(ii) if inf{|µ_i(θ_i) − µ_0|; θ_i ∈ Θ_i} > 0
m_i(T^n) = o_{P^n}[v_n^{d−τ_i} + v_n^{d−α_i}].
138. Between-model consistency
Consequence of above is that asymptotic behaviour of the Bayes factor is driven by the asymptotic mean value µ(θ) of T^n under both models. And only by this mean value!
139. Between-model consistency
Consequence of above is that asymptotic behaviour of the Bayes factor is driven by the asymptotic mean value µ(θ) of T^n under both models. And only by this mean value!
Indeed, if
inf{|µ_0 − µ_2(θ_2)|; θ_2 ∈ Θ_2} = inf{|µ_0 − µ_1(θ_1)|; θ_1 ∈ Θ_1} = 0
then
C_l v_n^{−(d_1−d_2)} ≤ m_1(T^n)/m_2(T^n) ≤ C_u v_n^{−(d_1−d_2)} ,
where C_l, C_u = O_{P^n}(1), irrespective of the true model.
⇒ Only depends on the difference d_1 − d_2: no consistency
140. Between-model consistency
Consequence of above is that asymptotic behaviour of the Bayes factor is driven by the asymptotic mean value µ(θ) of T^n under both models. And only by this mean value!
Else, if
inf{|µ_0 − µ_2(θ_2)|; θ_2 ∈ Θ_2} > inf{|µ_0 − µ_1(θ_1)|; θ_1 ∈ Θ_1} = 0
then
m_1(T^n)/m_2(T^n) ≥ C_u min(v_n^{−(d_1−α_2)}, v_n^{−(d_1−τ_2)})
141. Checking for adequate statistics
Run a practical check of the relevance (or non-relevance) of T^n:
null hypothesis that both models are compatible with the statistic T^n
H0 : inf{|µ_2(θ_2) − µ_0|; θ_2 ∈ Θ_2} = 0
against
H1 : inf{|µ_2(θ_2) − µ_0|; θ_2 ∈ Θ_2} > 0
testing procedure provides estimates of mean of T^n under each model and checks for equality
142. Checking in practice
• Under each model M_i, generate ABC sample θ_{i,l}, l = 1, · · · , L
• For each θ_{i,l}, generate y_{i,l} ∼ F_{i,n}(·|ψ_{i,l}), derive T^n(y_{i,l}) and compute
µ̂_i = (1/L) Σ_{l=1}^{L} T^n(y_{i,l}), i = 1, 2 .
• Conditionally on T^n(y),
√L {µ̂_i − E^π[µ_i(θ_i)|T^n(y)]} ≈ N(0, V_i),
• Test for a common mean
H0 : µ̂_1 ∼ N(µ_0, V_1), µ̂_2 ∼ N(µ_0, V_2)
against the alternative of different means
H1 : µ̂_i ∼ N(µ_i, V_i), with µ_1 ≠ µ_2 .
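A much simplified R stand-in for this check (it ignores the conditioning on T^n(y) and the explicit normal approximation, and simply compares the two simulated means with a two-sample test); theta1, theta2 are ABC samples under each model, rdata1, rdata2 the model simulators and Tstat a scalar summary:
check_statistic <- function(theta1, theta2, rdata1, rdata2, Tstat) {
  T1 <- sapply(theta1, function(th) Tstat(rdata1(th)))
  T2 <- sapply(theta2, function(th) Tstat(rdata2(th)))
  t.test(T1, T2)   # small p-value: the models give different means for T^n, i.e. T^n can discriminate
}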
143. Toy example: Laplace versus Gauss
[Figure: box-plots of the normalised χ² statistics under the Gauss and Laplace models, without and with mad]
144. Genetics of ABC
1 Approximate Bayesian computation
2 ABC as an inference machine
3 ABC for model choice
4 Genetics of ABC
145. Genetic background of ABC
ABC is a recent computational technique that only requires being
able to sample from the likelihood f (·|θ)
This technique stemmed from population genetics models, about
15 years ago, and population geneticists still contribute
significantly to methodological developments of ABC.
[Griffiths et al., 1997; Tavaré et al., 1999]
146. Population genetics
[Part derived from the teaching material of Raphael Leblois, ENS Lyon, November 2010]
• Describe the genotypes, estimate the alleles frequencies,
determine their distribution among individuals, populations
and between populations;
• Predict and understand the evolution of gene frequencies in
populations as a result of various factors.
⇒ Analyses the effect of various evolutionary forces (mutation, drift, migration, selection) on the evolution of gene frequencies in time and space.
147. Wright-Fisher model
• A population of constant size, in which individuals reproduce at the same time.
• Each gene in a generation is a copy of a gene of the previous generation.
• In the absence of mutation and selection, allele frequencies drift (increase and decrease) inevitably until the fixation of an allele.
• Drift therefore leads to the loss of genetic variation within populations.
148. Coalescent theory
[Kingman, 1982; Tajima, Tavaré, etc.]
Coalescence theory is interested in the genealogy of a sample of genes back in time to the common ancestor of the sample.
149. Common ancestor
[Figure: modelling the genetic drift process by going back in time (time of coalescence T) to the common ancestor of a sample of genes]
The different lineages merge (coalesce) as we go back into the past.
150. Neutral mutations
• Under the assumption of neutrality of the genetic markers, the mutations are independent of the genealogy, i.e. the genealogy only depends on the demographic processes.
• We therefore construct the genealogy according to the demographic parameters (e.g. N), then add the mutations a posteriori on the different branches, from the MRCA to the leaves of the tree.
• This produces polymorphism data under the demographic and mutational models considered.
151. Neutral model at a given microsatellite locus, in a closed
panmictic population at equilibrium
Kingman’s genealogy
When time axis is
normalized,
T(k) ∼ Exp(k(k −1)/2)
152. Neutral model at a given microsatellite locus, in a closed
panmictic population at equilibrium
Kingman’s genealogy
When time axis is
normalized,
T(k) ∼ Exp(k(k −1)/2)
Mutations according to
the Simple stepwise
Mutation Model
(SMM)
• date of the mutations ∼
Poisson process with
intensity θ/2 over the
branches
153. Neutral model at a given microsatellite locus, in a closed
panmictic population at equilibrium
Observations: leaves of the tree
θ̂ = ?
Kingman’s genealogy
When time axis is
normalized,
T(k) ∼ Exp(k(k −1)/2)
Mutations according to
the Simple stepwise
Mutation Model
(SMM)
• date of the mutations ∼
Poisson process with
intensity θ/2 over the
branches
• MRCA = 100
• independent mutations:
±1 with pr. 1/2
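A tiny R sketch for the simplest case of this model, a sample of two genes at one locus: the coalescence time is T(2) ∼ Exp(1) in normalised time, each branch carries a Poisson(θT/2) number of ±1 mutations, and the MRCA allele is set to 100 as on the slide:
rlocus_pair <- function(theta) {
  Tc <- rexp(1, rate = 1)                      # T(2) ~ Exp(2(2-1)/2)
  branch <- function() {
    k <- rpois(1, theta / 2 * Tc)              # number of mutations on one lineage
    if (k == 0) 0 else sum(sample(c(-1, 1), k, replace = TRUE))
  }
  100 + c(branch(), branch())                  # allele sizes of the two sampled genes
}
rlocus_pair(theta = 2)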
154. Much more interesting models. . .
• several independent loci
Independent gene genealogies and mutations
• different populations
linked by an evolutionary scenario made of divergences,
admixtures, migrations between populations, selection
pressure, etc.
• larger sample size
usually between 50 and 100 genes
155. Available population scenarios
Between populations: three types of events, backward in time
• the divergence is the fusion between two populations,
• the admixture is the split of a population into two parts,
• the migration allows the move of some lineages of a population to another.
[Figure 2.2: example of a genealogy of five individuals from a single closed population at equilibrium; the sampled individuals are the leaves of the dendrogram, the inter-coalescence durations T2, . . . , T5 are independent, and Tk follows an exponential distribution with parameter k(k − 1)/2]
[Figure 2.3: graphical representation of the three types of inter-population events of a demographic scenario: (a) divergence, with two populations merging backward in time; (b) admixture, with a population splitting with rates r and 1 − r; (c) migration between Pop1 and Pop2 with rates m12 and m21]
156. A complex scenario
The goal is to discriminate between different population scenarios from a dataset of polymorphism (DNA sample) y observed at the present time.
[Figure 2.1: example of a complex evolutionary scenario composed of inter-population events, involving four sampled populations Pop1, . . . , Pop4 and two unobserved populations Pop5 and Pop6, with divergence, admixture and migration events; the branches are tubes and the demographic scenario constrains the genealogy to stay inside these tubes]
157. Demo-genetic inference
Each model is characterized by a set of parameters θ that cover
historical (divergence times, admixture times, . . . ), demographic
(population sizes, admixture rates, migration rates, . . . ) and genetic
(mutation rates, . . . ) factors
The goal is to estimate these parameters from a dataset of
polymorphism (DNA sample) y observed at the present time
Problem: most of the time, we cannot compute the likelihood of
the polymorphism data f(y|θ).
158. Intractable likelihood
Missing (too much missing!) data structure:
f(y|θ) = ∫_G f(y|G, θ) f(G|θ) dG
The genealogies G are considered as nuisance parameters.
This problem thus differs from the phylogenetic approach, where
the tree is the parameter of interest.
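A minimal likelihood-free rejection sketch (Python; prior_sampler, simulate_data and summary are placeholder callables, e.g. built from simulate_locus above) showing how the integral over genealogies is bypassed: G is drawn implicitly inside the data simulator and f(y|G, θ), f(G|θ) are never evaluated.

```python
import numpy as np

def abc_rejection(y_obs, prior_sampler, simulate_data, summary,
                  n_sims=10_000, tol=0.1, rng=None):
    """Likelihood-free rejection sampler: the genealogy G is integrated out by
    simulation inside simulate_data, never by evaluating f(y|G, theta) or f(G|theta)."""
    rng = np.random.default_rng(rng)
    s_obs = np.asarray(summary(y_obs))
    accepted = []
    for _ in range(n_sims):
        theta = prior_sampler(rng)             # theta ~ prior
        y_sim = simulate_data(theta, rng)      # y ~ f(y|theta), via a random genealogy G
        if np.linalg.norm(np.asarray(summary(y_sim)) - s_obs) <= tol:
            accepted.append(theta)
    return np.array(accepted)                  # approximate draws from pi(theta | y_obs)
```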
159. A genuine example of application
Pygmy populations: do they have a common origin? Is there
much exchange between pygmy and non-pygmy populations?
161. Simulation results
Different possible scenarios; scenario choice by ABC.
Scenario 1a is strongly supported over the others, which argues
for a common origin of the pygmy populations of Western Africa.
[Verdu et al.]
Scenario 1A is chosen.
163. Instance of ecological questions [message in a beetle]
• How did the Asian ladybird beetle arrive in Europe?
• Why do they swarm right now?
• What are the routes of
invasion?
• How to get rid of them?
• Why did the chicken cross
the road?
[Lombaert et al., 2010, PLoS ONE]
beetles in forests
164. Worldwide invasion routes of Harmonia axyridis
For each outbreak, the arrow indicates the most likely invasion
pathway and the associated posterior probability, with 95% credible
intervals in brackets
[Estoup et al., 2012, Molecular Ecology Res.]
165. Worldwide invasion routes of Harmonia axyridis
For each outbreak, the arrow indicates the most likely invasion
pathway and the associated posterior probability, with 95% credible
intervals in brackets
[Estoup et al., 2012, Molecular Ecology Res.]
166. A population genetic illustration of ABC model choice
Two populations (1 and 2) having diverged at a fixed known time
in the past and a third population (3) which diverged from one of
those two populations (models 1 and 2, respectively).
Observation of 50 diploid individuals/population genotyped at 5,
50 or 100 independent microsatellite loci.
Model 2
167. A population genetic illustration of ABC model choice
Two populations (1 and 2) having diverged at a fixed known time
in the past and a third population (3) which diverged from one of
those two populations (models 1 and 2, respectively).
Observation of 50 diploid individuals/population genotyped at 5,
50 or 100 independent microsatellite loci.
Stepwise mutation model: the number of repeats of the mutated
gene increases or decreases by one. Mutation rate µ, common to all
loci and set to 0.005 (single parameter), with uniform prior distribution
µ ∼ U[0.0001, 0.01] (a model choice sketch follows).
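A sketch of the ABC model choice step for this setting (Python; simulate_summaries(m, mu, rng) is an assumed helper returning the summary statistics of one dataset simulated under model m with mutation rate mu): the model index is simulated along with µ, and the posterior probability of model 1 is approximated by its frequency among the retained simulations.

```python
import numpy as np

def abc_model_choice(s_obs, simulate_summaries, n_sims=100_000, keep=500, rng=None):
    """ABC model choice sketch: the model index m is treated as an extra parameter."""
    rng = np.random.default_rng(rng)
    models = rng.integers(1, 3, size=n_sims)          # m ~ Uniform{1, 2}
    mus = rng.uniform(0.0001, 0.01, size=n_sims)      # mu ~ U[0.0001, 0.01]
    sims = np.array([simulate_summaries(m, mu, rng) for m, mu in zip(models, mus)])
    dist = np.linalg.norm(sims - np.asarray(s_obs), axis=1)
    nearest = np.argsort(dist)[:keep]                 # retain the closest simulations
    return np.mean(models[nearest] == 1)              # approx. posterior P(M = 1 | s_obs)
```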
168. A population genetic illustration of ABC model choice
Summary statistics associated with the (δµ)² distance:
x_{l,i,j} is the repeat number of the allele at locus l = 1, . . . , L for
individual i = 1, . . . , 100 within population j = 1, 2, 3. Then
(δµ)²_{j1,j2} = (1/L) Σ_{l=1}^{L} [ (1/100) Σ_{i1=1}^{100} x_{l,i1,j1} − (1/100) Σ_{i2=1}^{100} x_{l,i2,j2} ]².
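A direct computation of this statistic (Python; the repeat numbers are assumed stored in an array x of shape (L, 100, 3), indexed as x[l, i, j] as above):

```python
import numpy as np

def delta_mu_sq(x, j1, j2):
    """(delta mu)^2 distance between populations j1 and j2, where x has shape
    (L, 100, 3) and x[l, i, j] is the repeat number at locus l for gene copy i
    in population j."""
    mean_j1 = x[:, :, j1].mean(axis=1)        # per-locus mean repeat number in population j1
    mean_j2 = x[:, :, j2].mean(axis=1)        # per-locus mean repeat number in population j2
    return np.mean((mean_j1 - mean_j2) ** 2)  # average squared difference over the L loci
```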
169. A population genetic illustration of ABC model choice
For two gene copies at locus l with allele sizes x_{l,i,j1} and x_{l,i′,j2}, with
most recent common ancestor at coalescence time τ_{j1,j2}, the gene
genealogy distance is 2τ_{j1,j2}, hence the number of mutations is
Poisson with parameter 2µτ_{j1,j2}. Therefore,
E[ (x_{l,i,j1} − x_{l,i′,j2})² | τ_{j1,j2} ] = 2µτ_{j1,j2}
and, with populations 1 and 2 diverging at time t and population 3 at time t/2,
                 Model 1   Model 2
E[(δµ)²_{1,2}]   2µ1 t     2µ2 t
E[(δµ)²_{1,3}]   µ1 t      2µ2 t
E[(δµ)²_{2,3}]   2µ1 t     µ2 t
170. A population genetic illustration of ABC model choice
Thus,
• a Bayes factor based only on the distance (δµ)²_{1,2} is not
convergent: if µ1 = µ2, both models yield the same expectation
• a Bayes factor based only on (δµ)²_{1,3} or (δµ)²_{2,3} is not
convergent: if µ1 = 2µ2 or 2µ1 = µ2, same expectation
• if two of the three distances are used, the Bayes factor converges:
there is no (µ1, µ2) for which all expectations are equal
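A quick numerical check of these claims, under the expectations tabulated above (Python; expected_stats is an illustrative helper, with t normalized to 1):

```python
import numpy as np

def expected_stats(model, mu, t=1.0):
    """Expected ((dmu)^2_{1,2}, (dmu)^2_{1,3}, (dmu)^2_{2,3}) under model 1 or 2,
    using the table above (populations 1 and 2 split at t, population 3 at t/2)."""
    if model == 1:
        return np.array([2 * mu * t, mu * t, 2 * mu * t])
    return np.array([2 * mu * t, 2 * mu * t, mu * t])

# With mu1 = 2 * mu2, the (1,3) expectations coincide across the two models ...
print(expected_stats(1, mu=0.01)[1], expected_stats(2, mu=0.005)[1])   # 0.01 0.01
# ... but the full vectors of expectations never do, so using two of the three
# distances is enough to separate the models.
print(expected_stats(1, mu=0.01), expected_stats(2, mu=0.005))
```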
171. A population genetic illustration of ABC model choice
[Figure: boxplots over 5, 50 and 100 loci; panels DM2(12), DM2(13), and DM2(13) with DM2(23).]
Posterior probabilities that the data is from model 1 for 5, 50
and 100 loci
172. A population genetic illustration of ABC model choice
[Figure: boxplots of DM2(12), DM2(13) and DM2(23), and of the corresponding p−values.]