Approximate Bayesian computation (ABC) is a new empirical Bayes method for performing Bayesian inference when the likelihood function is intractable or unavailable in closed form. ABC replaces the likelihood with a non-parametric approximation based on simulating data under different parameter values and comparing simulated and observed data using summary statistics. This allows Bayesian inference to be performed even when direct calculation of the likelihood is not possible. However, ABC introduces an approximation error that is unknown without extensive simulation. Some view ABC as a true Bayesian approach for an estimated or noisy likelihood, while others see it as more of a computational technique that is only approximately Bayesian.
The document discusses Approximate Bayesian Computation (ABC) as a new empirical Bayes method for performing inference when the likelihood function is intractable. ABC methods replace the intractable likelihood with a non-parametric approximation by degrading the data precision to a tolerance level and summarizing or replacing the data with insufficient statistics. ABC originated in population genetics models where likelihoods for polymorphism data are often intractable. ABC allows performing inference by simulating data under different parameter values when only the ability to simulate from the likelihood is available.
Pittsburgh and Toronto "Halloween US trip" seminars, by Christian Robert
ABC stands for approximate Bayesian computation. ABC methods are used when the likelihood function is intractable or impossible to compute directly. ABC produces approximate samples from the posterior distribution by simulating data under different parameter values and comparing them to the observed data. ABC has been widely applied in population genetics where genealogies make likelihoods difficult to calculate. While ABC provides a practical computational approach, it raises questions about how closely it relates to true Bayesian inference with the raw data.
The document proposes using random forests (RF), a machine learning tool, for approximate Bayesian computation (ABC) model choice rather than estimating model posterior probabilities. RF improves on existing ABC model choice methods by having greater discriminative power among models, being robust to the choice and number of summary statistics, requiring less computation, and providing an error rate to evaluate confidence in the model choice. The authors illustrate the power of the RF-based ABC methodology on controlled experiments and real population genetics datasets.
This document outlines the author's PhD research, which bridges probabilistic modelling and deep learning. It first provides an overview of the author's work moving from probabilistic modelling to deep learning. It then provides a deep dive into a probabilistic model for temporal interaction data using point processes. Finally, it discusses open questions around bridging statistical methods and deep learning by making each approach more effective.
- Approximate Bayesian computation (ABC) is a technique used when the likelihood function is intractable or unavailable. It approximates the Bayesian posterior distribution in a likelihood-free manner.
- ABC works by simulating parameter values from the prior and simulating pseudo-data. Parameter values are accepted if the simulated pseudo-data are "close" to the observed data according to some distance measure and tolerance level.
- ABC originated in population genetics models where genealogies are considered nuisance parameters that cannot be integrated out of the likelihood. It has since been applied to other fields like econometrics for models with complex or undefined likelihoods.
A Dynamic Factor Model: Inference and Empirical Application, by Ioannis Vrontos (SYRTO Project)
The document describes a dynamic factor model to analyze how financial risks are interconnected within the Eurozone. It uses the model to examine risk dynamics with sovereign CDS and equity returns from 2007-2009, a period covering the US financial crisis and preceding the European sovereign debt crisis. The model relates asset returns to latent sector factors, macro factors, and covariates. Bayesian inference is applied using MCMC to estimate the time-varying parameters and latent factors.
The document discusses form-content confusion in computer science. It argues that an excessive focus on formalism is hindering development in three areas: theory of computation, programming languages, and education. In theory of computation, our understanding of fundamental computational tradeoffs like time-memory and serial-parallel exchanges is limited. In programming languages, the focus is too much on syntax and not enough on semantics. In education, the curriculum emphasizes formal classifications over practical knowledge.
The document discusses using random forests for approximate Bayesian computation (ABC) model choice. ABC can be framed as a machine learning problem where simulated datasets are used to learn which model is most appropriate. Random forests are well-suited for this as they can handle many correlated summary statistics without information loss. The random forest predicts the most likely model but not posterior probabilities. Instead, the posterior predictive expected error rate across models is proposed to evaluate model selection performance without unstable probability approximations. An example comparing MA(1) and MA(2) time series models illustrates the approach.
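To make the mechanics concrete, here is a minimal sketch of random-forest model choice on an MA(1)-versus-MA(2) toy problem. It is an illustration written for this summary, not the authors' implementation: the simulator, the autocorrelation summaries, the priors on the MA coefficients, and all settings are arbitrary choices.

```python
# Sketch: random-forest ABC model choice between MA(1) and MA(2) (illustrative settings).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

def simulate_ma(thetas, n=150):
    """Simulate an MA(q) series y_t = e_t + theta_1 e_{t-1} + ... + theta_q e_{t-q}."""
    q = len(thetas)
    e = rng.normal(size=n + q)
    y = e[q:].copy()
    for i, t in enumerate(thetas):
        y += t * e[q - i - 1: n + q - i - 1]
    return y

def summaries(y, lags=3):
    """Summary statistics: the first few sample autocorrelations."""
    y = y - y.mean()
    denom = np.dot(y, y)
    return np.array([np.dot(y[:-k], y[k:]) / denom for k in range(1, lags + 1)])

# Reference table: a model label and the summaries of data simulated from each model.
X, labels = [], []
for _ in range(5000):
    if rng.random() < 0.5:
        X.append(summaries(simulate_ma([rng.uniform(-1, 1)])))                      # MA(1)
        labels.append(1)
    else:
        X.append(summaries(simulate_ma([rng.uniform(-1, 1), rng.uniform(-1, 1)])))  # MA(2)
        labels.append(2)

rf = RandomForestClassifier(n_estimators=500, oob_score=True).fit(np.array(X), labels)

y_obs = simulate_ma([0.6])      # stand-in for the observed series
print("selected model:", rf.predict(summaries(y_obs).reshape(1, -1))[0])
print("out-of-bag error rate:", round(1 - rf.oob_score_, 3))
```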
Min-based qualitative possibilistic networks are one of the effective tools for a compact representation of decision problems under uncertainty. Exact approaches for computing decisions based on possibilistic networks are limited by the size of the possibility distributions and generally rely on possibilistic propagation algorithms. An important step in the computation of the decision is the transformation of the DAG (Directed Acyclic Graph) into a secondary structure known as the junction tree (JT); this transformation is known to be costly and represents a difficult problem. This paper proposes a new approximate approach for the computation of decisions under uncertainty within possibilistic networks. Computing the optimal optimistic decision no longer goes through the junction-tree construction step; instead, it is performed by calculating the degree of normalization in the moral graph obtained by merging the possibilistic network encoding the agent's knowledge with the one encoding its preferences.
This document discusses Bayesian approaches to combining evidence from multiple data sources or models. It recommends combining data probabilistically using Bayes' theorem rather than averaging. It provides an example of combining three data sources on annual rainfall measurements by treating the data sources as independent measurements and deriving the posterior distribution of rainfall amounts given the data. It also discusses challenges that arise when combining dependent data sources or models, and presents examples of hierarchical modeling approaches.
This lecture introduces Bayesian hypothesis testing. It discusses an example comparing HIV infection rates between a treatment and placebo group. A Bayesian analysis is presented that calculates posterior probabilities for the null and alternative hypotheses using prior probabilities and Bayes factors. The lecture outlines general notation for Bayesian testing and discusses issues like choosing prior distributions and testing precise versus imprecise hypotheses. It also discusses interpreting Bayes factors and relates posterior probabilities to p-values in some cases.
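For reference, the arithmetic that turns a prior probability and a Bayes factor into a posterior probability is short. The numbers below are made up for illustration and are not those of the lecture.

```python
# Posterior probability of H0 from its prior probability and the Bayes factor B01 = m0(x)/m1(x).
def posterior_prob_H0(prior_H0, bayes_factor_01):
    prior_odds = prior_H0 / (1.0 - prior_H0)
    posterior_odds = prior_odds * bayes_factor_01     # posterior odds = prior odds * B01
    return posterior_odds / (1.0 + posterior_odds)

# Example: equal prior weights and a Bayes factor of 0.2 (evidence against H0).
print(posterior_prob_H0(prior_H0=0.5, bayes_factor_01=0.2))   # -> 0.1666...
```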
This document provides an overview of Bayesian networks through a 3-day tutorial. Day 1 introduces Bayesian networks and provides a medical diagnosis example. It defines key concepts like Bayes' theorem and influence diagrams. Day 2 covers propagation algorithms, demonstrating how evidence is propagated through a sample chain network. Day 3 will cover learning from data and using continuous variables and software. The overview outlines propagation algorithms for singly and multiply connected graphs.
High-Dimensional Methods: Examples for Inference on Structural Effects, by NBER
This document describes a study that uses high-dimensional methods to estimate the effect of 401(k) eligibility on measures of accumulated assets. It begins by outlining the baseline model and notes areas for improvement, such as controlling for income. It then discusses using regularization such as the LASSO for variable selection in high-dimensional settings. The document explores more flexible specifications by generating many interaction and polynomial terms, noting the need for dimension reduction, and uses the LASSO to select important variables from this large set. The selected set of variables is parsimonious, and the estimated 401(k) effects are similar to those of the baseline.
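A schematic version of that selection-then-estimation workflow on simulated data follows. It is an illustration only, not the study's code or data: the simulated covariates, the simplified single-selection procedure (LASSO picks controls, then OLS re-estimates the effect), and all settings are assumptions.

```python
# Sketch: expand controls, let LASSO select a sparse subset, then OLS for the treatment effect.
import numpy as np
from sklearn.linear_model import LassoCV, LinearRegression
from sklearn.preprocessing import PolynomialFeatures, StandardScaler

rng = np.random.default_rng(1)
n = 2000
income = rng.lognormal(10.0, 0.5, n)
age = rng.uniform(25, 60, n)
d = (rng.random(n) < 0.3).astype(float)                          # "eligibility" indicator
y = 5000 * d + 0.1 * income + 50 * age + rng.normal(0, 5000, n)  # simulated assets

# Flexible dictionary of controls: polynomial and interaction terms of the raw covariates.
raw = np.column_stack([income, age])
controls = PolynomialFeatures(degree=3, include_bias=False).fit_transform(raw)
controls = StandardScaler().fit_transform(controls)

# Cross-validated LASSO selects a sparse set of controls for the outcome equation.
lasso = LassoCV(cv=5).fit(controls, y)
selected = np.flatnonzero(lasso.coef_ != 0)

# Re-estimate the effect of eligibility by OLS on the treatment plus selected controls.
X = np.column_stack([d, controls[:, selected]])
effect = LinearRegression().fit(X, y).coef_[0]
print("selected controls:", selected.size, "| estimated eligibility effect:", round(effect, 1))
```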
This document discusses a model for analyzing how network connectivity impacts asset returns and risks. The model augments a traditional multi-factor model to account for systemic links between assets represented by a network connectivity matrix. The model shows that network links inflate asset loadings to common factors, impacting expected returns and total risk decomposition into systematic and idiosyncratic components. Greater network connectivity reduces diversification benefits by slowing the decrease in portfolio idiosyncratic risk as the number of assets increases. The authors propose extending the model to incorporate heterogeneous asset responses to links and time-varying network structures.
This document provides an introduction to Bayesian networks, including:
- An outline of topics covered such as Bayesian networks, inference, learning Bayesian networks, and software packages
- A definition of Bayesian networks as directed acyclic graphs combined with conditional probability distributions
- An overview of inference techniques in Bayesian networks including belief propagation, junction tree algorithms, and Monte Carlo methods
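As a toy illustration of exact inference in such a network (a made-up two-node example, not taken from the document), the posterior P(A | B = true) follows from Bayes' theorem by direct enumeration:

```python
# Exact inference by enumeration in a two-node Bayesian network A -> B (illustrative CPTs).
p_a = {True: 0.01, False: 0.99}            # prior P(A)
p_b_given_a = {True: 0.90, False: 0.05}    # P(B = true | A)

# P(A = true | B = true) = P(B|A) P(A) / sum_a P(B|a) P(a)
joint = {a: p_b_given_a[a] * p_a[a] for a in (True, False)}
posterior = joint[True] / sum(joint.values())
print(f"P(A = true | B = true) = {posterior:.4f}")    # ~0.1538
```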
1) The document discusses Vapnik's approach to statistical modeling and machine learning, focusing on the concepts of generalization, overfitting, and VC dimension.
2) It introduces the idea of Structural Risk Minimization (SRM), which aims to control a model's complexity through the VC dimension in order to maximize generalization. SRM selects the model that minimizes the total risk, i.e., the sum of the empirical risk and a complexity (confidence-interval) term.
3) As an example, it describes how SRM can be implemented in an industrial data mining context to optimize variables, model structure, and hyperparameters for tasks like classification and regression.
Eeee2017 Conference - OR in the digital era - ICT challenges | Presentation, by Christos Papalitsas
The document describes a quantum-inspired generalized variable neighborhood search (qGVNS) metaheuristic for solving the traveling salesman problem with time windows (TSP-TW). qGVNS uses a 1-shift local search operator and a quantum-inspired shaking procedure to perturb solutions. It was tested on 10 benchmark TSP-TW instances and found feasible solutions for all within the time limit. The authors conclude that qGVNS shows promise for solving TSP-TW and future work could include more extensive testing and applying it to optimization phases.
Approximate Bayesian Computation (ABC) can be used as a new empirical Bayes approach when the likelihood function is not available in closed form. ABC replaces the intractable likelihood with a non-parametric approximation and summarizes data with insufficient statistics. ABC has opened opportunities for new inference machines that are legitimate but different from classical Bayesian approaches, raising questions about how closely ABC relates to Bayesian inference. ABC originated in population genetics where likelihoods are often intractable, and population geneticists have contributed significantly to ABC methodology.
Approximate Bayesian computation (ABC) is a computational technique for Bayesian inference when the likelihood function is intractable or impossible to compute directly. ABC approximates the likelihood by simulating data under different parameter values and comparing simulated and observed data using summary statistics. ABC produces a parameter sample without evaluating the full likelihood function, thus allowing Bayesian inference when likelihoods are unavailable or difficult to compute.
The document discusses Approximate Bayesian Computation (ABC), a computational technique for Bayesian inference when the likelihood function is intractable. ABC allows sampling from the likelihood and making inferences based on simulated data without calculating the actual likelihood. The technique originated in population genetics models where likelihoods for genetic polymorphism data cannot be calculated in closed form. ABC is presented as both an inference machine with its own legitimacy compared to classical Bayesian approaches, as well as a way to address computational issues with intractable likelihoods.
This document discusses challenges and recent advances in Approximate Bayesian Computation (ABC) methods. ABC methods are used when the likelihood function is intractable or unavailable in closed form. The core ABC algorithm involves simulating parameters from the prior and simulating data, retaining simulations where the simulated and observed data are close according to a distance measure on summary statistics. The document outlines key issues like scalability to large datasets, assessment of uncertainty, and model choice, and discusses advances such as modified proposals, nonparametric methods, and perspectives that include summary construction in the framework. Validation of ABC model choice and selection of summary statistics remains an open challenge.
Discussion of Persi Diaconis' lecture at ISBA 2016, by Christian Robert
This document discusses Monte Carlo methods for numerical integration and estimating normalizing constants. It summarizes several approaches: estimating normalizing constants using samples; reverse logistic regression for estimating constants in mixtures; Xiao-Li's maximum likelihood formulation for Monte Carlo integration; and Persi's probabilistic numerics which provide uncertainties for numerical calculations. The document advocates first approximating the distribution of an integrand before estimating its expectation to incorporate non-parametric information and account for multiple estimators.
The document discusses Approximate Bayesian Computation (ABC). ABC allows inference for statistical models where the likelihood function is not available in closed form. ABC works by simulating data under different parameter values and comparing simulated to observed data. ABC has been used for model choice by comparing evidence for different models. Consistency of ABC for model choice depends on the criterion used and asymptotic identifiability of the parameters.
Approximate Bayesian Computation (ABC) provides a framework for Bayesian inference when the likelihood function is intractable or impossible to evaluate directly. ABC produces approximations of posterior distributions by simulating data under different parameter values and accepting simulations that match the observed data closely according to some predefined tolerance level. ABC has been widely used for inference in population genetics models where the genealogical structure relating samples makes the likelihood intractable. While ABC does not provide true Bayesian inference, it can produce inferences that are consistent under certain conditions as the number of simulations increases.
This document provides an introduction to statistical model selection. It discusses various approaches to model selection including predictive risk, Bayesian methods, information theoretic measures like AIC and MDL, and adaptive methods. The key goals of model selection are to understand the bias-variance tradeoff and select models that offer the best guaranteed predictive performance on new data. Model selection aims to find the right level of complexity to explain patterns in available data while avoiding overfitting.
This document introduces the plm package for panel data econometrics in R. It discusses panel data models and estimation approaches, describes the software approach and functions in plm, and compares plm to other R packages for longitudinal data analysis. Key features of plm include functions for estimating linear panel models with fixed effects, random effects, and GMM, as well as data management and diagnostic testing capabilities for panel data. The package aims to make panel data analysis intuitive and straightforward for econometricians in R.
This PhD research proposal discusses using Bayesian inference methods for multi-target tracking in big data settings. The researcher proposes developing new stochastic MCMC algorithms that can scale to billions of data points by using small subsets of data in each iteration. This would make Bayesian methods computationally feasible for big data. The proposal outlines reviewing relevant literature, developing the theoretical foundations, and empirically validating new algorithms like sequential Monte Carlo on real-world problems to analyze text and user preferences at large scale.
Stratified sampling and resampling for approximate Bayesian computation, by Umberto Picchini
Stratified Monte Carlo is proposed as a method to accelerate ABC-MCMC by reducing its computational cost. It involves partitioning the summary statistic space into strata and estimating the ABC likelihood using a stratified Monte Carlo approach based on resampling. This reduces the variance compared to using a single resampled dataset, without introducing significant bias as resampling alone would. The method is tested on a simple Gaussian example where it provides a posterior approximation closer to the true posterior than standard ABC-MCMC.
This presentation is based on "Statistical Modeling: The Two Cultures" by Leo Breiman. It compares the data modeling culture (statistics) with the algorithmic modeling culture (machine learning).
IRJET- Big Data and Bayes Theorem used Analyze the Student’s Performance in E..., by IRJET Journal
This document discusses using big data and Bayes' theorem to analyze student performance in educational settings. It proposes a system to check for plagiarism in student assignments using probabilistic methods. Bayes' theorem is introduced as a way to calculate conditional probabilities. The system would analyze assignments to determine the probability that a student copied work from another based on characteristics like gender and past copying behavior. This probabilistic approach could help teachers efficiently check assignments for plagiarism using big data on student performance patterns.
This document summarizes interactions between computational complexity theory and several fields of mathematics. It discusses the computational complexity of primality testing in number theory, point-line incidences in combinatorial geometry, the Kadison-Singer problem in operator theory, and generation problems in group theory. For each area, it provides background, describes important problems and results, and notes connections to algorithms and complexity theory.
Data science is an area at the interface of statistics, computer science, and mathematics.
• Statisticians contributed a large inferential framework, important Bayesian perspectives, the bootstrap, CART and random forests, and the concepts of sparsity and parsimony.
• Computer scientists contributed an appetite for big, challenging problems. They also pioneered neural networks, boosting, and PAC bounds, and developed frameworks such as Spark and Hadoop for handling Big Data.
• Mathematicians contributed support vector machines, modern optimization, tensor analysis, and (maybe) topological data analysis.
A quantum-inspired optimization heuristic for the multiple sequence alignment..., by Konstantinos Giannakis
The document presents a quantum-inspired heuristic for solving the multiple sequence alignment problem in bioinformatics. It models the sequence similarity as a traveling salesman problem instance using a normalized similarity matrix. The method applies a quantum-inspired generalized variable neighborhood search metaheuristic to approximate the shortest Hamiltonian path and generate an initial alignment. Evaluation on real biological sequences shows it outperforms progressive methods, producing alignments with good sum-of-pairs scores, especially for large sequence sets.
This dissertation consists of three chapters that study identification and inference in econometric models.
Chapter 1 considers identification robust inference when the moment variance matrix is singular. It develops a novel asymptotic approach based on higher order expansions of the eigensystem to show that the Generalized Anderson-Rubin statistic possesses a chi-squared limit under additional regularity conditions. When these conditions are violated, the statistic is shown to be Op(n) and exhibit "moment-singularity bias".
Chapter 2 provides a method called "Normalized Principal Components" to minimize many weak instrument bias in linear IV settings. It derives an asymptotically valid ranking of instruments in terms of correlation and selects instruments to minimize MSE approximations.
Chapter
Similar to slides of ABC talk at i-like workshop, Warwick, May 16
This document discusses differentially private distributed Bayesian linear regression with Markov chain Monte Carlo (MCMC) methods. It proposes adding noise to the summaries (S) and coefficients (z) of local linear regression models on different devices to provide differential privacy. Gibbs sampling is used to simulate the genuine posterior distribution over the linear model parameters (theta, sigma_y, Sigma_x, z1:J, S1:J) in a distributed manner while maintaining privacy. Alternative approaches like exploiting approximate posteriors from all devices or learning iteratively are also mentioned.
This document discusses mixture models and approximations to computing model evidence. It contains:
1) An overview of mixtures of distributions and common priors used for mixtures.
2) Approximations to computing marginal likelihoods or model evidence using Chib's representation and Rao-Blackwellization. Permutations are used to address label switching issues.
3) Methods for more efficient sampling for computing model evidence, including iterative bridge sampling and dual importance sampling with approximations to reduce the number of permutations considered.
Sequential Monte Carlo is also briefly mentioned as an alternative approach.
This document describes the adaptive restore algorithm, a non-reversible Markov chain Monte Carlo method. It begins with an overview of the restore process, which takes regenerations from an underlying diffusion or jump process to construct a reversible Markov chain with a target distribution. The adaptive restore process enriches this by allowing the regeneration distribution to adapt over time. It converges almost surely to the minimal regeneration distribution. Parameters like the initial regeneration distribution and rates are discussed. Examples are provided for the adaptive Brownian restore algorithm and calibrating the parameters.
This document summarizes techniques for approximating marginal likelihoods and Bayes factors, which are important quantities in Bayesian inference. It discusses Geyer's 1994 logistic regression approach, links to bridge sampling, and how mixtures can be used as importance sampling proposals. Specifically, it shows how optimizing the logistic pseudo-likelihood relates to the bridge sampling optimal estimator. It also discusses non-parametric maximum likelihood estimation based on simulations.
This document discusses Bayesian restricted likelihood methods for situations where the likelihood cannot be fully trusted. It presents several approaches including empirical likelihood, Bayesian empirical likelihood, using insufficient statistics, approximate Bayesian computation (ABC), and MCMC on manifolds. The key ideas are developing Bayesian tools that are robust to model misspecification by questioning the likelihood, prior, and other assumptions.
This document discusses various methods for approximating marginal likelihoods and Bayes factors, including:
1. Geyer's 1994 logistic regression approach for approximating marginal likelihoods using importance sampling.
2. Bridge sampling and its connection to Geyer's approach. Optimal bridge sampling requires knowledge of unknown normalizing constants.
3. Using mixtures of importance distributions and the target distribution as proposals to estimate marginal likelihoods through Rao-Blackwellization. This connects to bridge sampling estimates.
4. The document discusses various methods for approximating marginal likelihoods and comparing hypotheses using Bayes factors. It outlines the historical development and connections between different approximation techniques.
1. The document discusses approximate Bayesian computation (ABC), a technique used when the likelihood function is intractable. ABC works by simulating parameters from the prior and simulating data, rejecting simulations that are not close to the observed data based on a tolerance level.
2. Random forests can be used in ABC to select informative summary statistics from a large set of possibilities and estimate parameters. The random forests classify simulations as accepted or rejected based on the summaries, implicitly selecting important summaries.
3. Calibrating the tolerance level in ABC is important but difficult, as it determines how close simulations must be to the observed data. Methods discussed include using quantiles of prior predictive simulations or asymptotic convergence properties.
The document summarizes Approximate Bayesian Computation (ABC). It discusses how ABC provides a way to approximate Bayesian inference when the likelihood function is intractable or too computationally expensive to evaluate directly. ABC works by simulating data under different parameter values and accepting simulations that are close to the observed data according to a distance measure and tolerance level. Key points discussed include:
- ABC provides an approximation to the posterior distribution by sampling from simulations that fall within a tolerance of the observed data.
- Summary statistics are often used to reduce the dimension of the data and improve the signal-to-noise ratio when applying the tolerance criterion.
- Random forests can help select informative summary statistics and provide semi-automated ABC
This document describes a new method called component-wise approximate Bayesian computation (ABCG or ABC-Gibbs) that combines approximate Bayesian computation (ABC) with Gibbs sampling. ABCG aims to more efficiently explore parameter spaces when the number of parameters is large. It works by alternately sampling each parameter from its ABC-approximated conditional distribution given current values of other parameters. The document provides theoretical analysis showing ABCG converges to a stationary distribution under certain conditions. It also presents examples demonstrating ABCG can better separate estimates from the prior compared to simple ABC, especially for hierarchical models.
ABC stands for approximate Bayesian computation. It is a method for performing Bayesian inference when the likelihood function is intractable or impossible to evaluate directly. ABC produces samples from an approximate posterior distribution by simulating parameter and summary statistic values that match the observed summary statistics within a tolerance level. The choice of summary statistics is important but difficult, as there is typically no sufficient statistic. Several strategies have been developed for selecting good summary statistics, including using random forests or the Lasso to evaluate and select from a large set of potential summaries.
The document describes a new method called component-wise approximate Bayesian computation (ABC) that combines ABC with Gibbs sampling. It aims to improve ABC's ability to efficiently explore parameter spaces when the number of parameters is large. The method works by alternating sampling from each parameter's ABC posterior conditional distribution given current values of other parameters and the observed data. The method is proven to converge to a stationary distribution under certain assumptions, especially for hierarchical models where conditional distributions are often simplified. Numerical experiments on toy examples demonstrate the method can provide a better approximation of the true posterior than vanilla ABC.
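A rough sketch of that component-wise (ABC-within-Gibbs) idea on a toy hierarchical normal model is given below. It is a reconstruction from the description above, not the authors' algorithm: the model, the summaries, the tolerance, and the number of sweeps are all arbitrary choices.

```python
# Toy ABC-within-Gibbs sketch for a hierarchical normal model:
#   mu ~ N(0, 10^2),  theta_j | mu ~ N(mu, 1),  y_j | theta_j ~ N(theta_j, 1).
# Each sweep refreshes one block at a time with a small ABC rejection step given the others.
import numpy as np

rng = np.random.default_rng(2)
y = rng.normal(3.0, 1.0, size=10)     # stand-in data: one observation per group
eps = 0.2                             # ABC tolerance (would need calibration in practice)

def abc_mu(theta):
    """ABC update of the hyperparameter: match the mean of the current group means."""
    while True:
        mu_prop = rng.normal(0.0, 10.0)                         # prior on mu
        theta_sim = rng.normal(mu_prop, 1.0, size=theta.size)   # theta | mu
        if abs(theta_sim.mean() - theta.mean()) < eps:
            return mu_prop

def abc_theta(mu, y_j):
    """ABC update of one group mean: match that group's observation."""
    while True:
        th_prop = rng.normal(mu, 1.0)        # conditional prior theta_j | mu
        y_sim = rng.normal(th_prop, 1.0)     # y_j | theta_j
        if abs(y_sim - y_j) < eps:
            return th_prop

mu, theta = 0.0, y.copy()
mu_draws = []
for sweep in range(500):
    mu = abc_mu(theta)
    theta = np.array([abc_theta(mu, yj) for yj in y])
    mu_draws.append(mu)
print("ABC-Gibbs approximation of E[mu | y]:", np.mean(mu_draws[100:]))
```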
1) Likelihood-free Bayesian experimental design is discussed as an intractable likelihood optimization problem, where the goal is to find the optimal design d that minimizes expected loss without using the full posterior distribution.
2) Several Bayesian tools are proposed to make the design problem more Bayesian, including Bayesian non-parametrics, annealing algorithms, and placing a posterior on the design d.
3) Gaussian processes are a default modeling choice for complex unknown functions in these problems, but their accuracy is difficult to assess and they may incur a dimension curse.
slides of ABC talk at i-like workshop, Warwick, May 16
1. (Approximate) Bayesian Computation as a new empirical Bayes method
Christian P. Robert
Université Paris-Dauphine & CREST, Paris
Joint work with K.L. Mengersen and P. Pudlo
i-like Workshop, Warwick, 15-17 May, 2013
2. of interest for i-like users
MCMSki IV to be held in Chamonix Mt Blanc, France, from Monday, Jan. 6 to Wed., Jan. 8, 2014
All aspects of MCMC++ theory and methodology and applications, and more!
Website: http://www.pages.drexel.edu/~mwl25/mcmski/
4. Intractable likelihood
Case of a well-defined statistical model where the likelihood function
ℓ(θ|y) = f(y1, . . . , yn|θ)
is (really!) not available in closed form
cannot (easily!) be either completed or demarginalised
cannot be estimated by an unbiased estimator
Prohibits direct implementation of a generic MCMC algorithm like Metropolis–Hastings
6. The abc alternative
Empirical approximations to the original B problem
Degrading the precision down to a tolerance ε
Replacing the likelihood with a non-parametric approximation
Summarising/replacing the data with insufficient statistics
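To make the approximation described on slide 6 concrete, here is a minimal ABC rejection sampler for a toy normal-mean problem. It is an added illustration, not part of the original slides; the prior, the choice of the sample mean as summary, the distance, and the tolerance are all illustrative assumptions.

```python
# Minimal ABC rejection sampler (illustration of slide 6; not from the original deck).
# Toy model: y_1,...,y_n ~ N(theta, 1) with prior theta ~ N(0, 10^2).
# Summary statistic: sample mean (sufficient here, insufficient in typical ABC applications);
# distance: absolute difference; tolerance: eps.
import numpy as np

rng = np.random.default_rng(42)
y_obs = rng.normal(2.0, 1.0, size=100)        # stand-in for the observed data
s_obs = y_obs.mean()
eps = 0.05

accepted = []
for _ in range(100_000):
    theta = rng.normal(0.0, 10.0)                    # 1. draw theta from the prior
    z = rng.normal(theta, 1.0, size=y_obs.size)      # 2. simulate pseudo-data
    if abs(z.mean() - s_obs) <= eps:                 # 3. keep theta if summaries are close
        accepted.append(theta)

print(len(accepted), "accepted draws; ABC posterior mean ~", np.mean(accepted))
```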
9. Different worries about abc
Impact on B inference
a mere computational issue (that will eventually end up being solved by more powerful computers, &tc, even if too costly in the short term, as for regular Monte Carlo methods) Not!
an inferential issue (opening opportunities for new inference machines on complex/Big Data models, with legitimacy differing from the classical B approach)
a Bayesian conundrum (how closely related to the/a B approach? more than recycling B tools? true B with raw data? a new type of empirical Bayes inference?)
12. summary as an answer to Lange, Chi & Zhou [ISR, 2013]
"Surprisingly, the confident prediction of the previous generation that Bayesian methods would ultimately supplant frequentist methods has given way to a realization that Markov chain Monte Carlo (MCMC) may be too slow to handle modern data sets. Size matters because large data sets stress computer storage and processing power to the breaking point. The most successful compromises between Bayesian and frequentist methods now rely on penalization and optimization."
sad reality constraint that size does matter
focus on much smaller dimensions and on sparse summaries
many (fast) ways of producing those summaries
Bayesian inference can kick in almost automatically at this stage
14. Econom’ections
Similar exploration of simulation-based and approximation
techniques in Econometrics
Simulated method of moments
Method of simulated moments
Simulated pseudo-maximum-likelihood
Indirect inference
[Gouriéroux & Monfort, 1996]
even though motivation is partly-defined models rather than
complex likelihoods
16. Indirect inference
Minimise [in θ] a distance between estimators ^β based on a
pseudo-model for genuine observations and for observations
simulated under the true model and the parameter θ.
[Gouriéroux, Monfort, & Renault, 1993;
Smith, 1993; Gallant & Tauchen, 1996]
17. Indirect inference (PML vs. PSE)
Example of the pseudo-maximum-likelihood (PML)
β̂(y) = arg max_β Σ_t log f(y_t|β, y_{1:(t−1)})
leading to
arg min_θ ||β̂(y^o) − β̂(y^1(θ), . . . , y^S(θ))||²
when
y^s(θ) ∼ f(y|θ), s = 1, . . . , S
18. Indirect inference (PML vs. PSE)
Example of the pseudo-score-estimator (PSE)
β̂(y) = arg min_β Σ_t [∂ log f/∂β (y_t|β, y_{1:(t−1)})]²
leading to
arg min_θ ||β̂(y^o) − β̂(y^1(θ), . . . , y^S(θ))||²
when
y^s(θ) ∼ f(y|θ), s = 1, . . . , S
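A minimal sketch of the PML flavour of indirect inference on a toy AR(1) model (the model, auxiliary estimator and optimiser are assumptions, not the slides' example): the auxiliary estimator β̂ is computed on the observed series and matched, in θ, to its average over simulated series.

# Toy indirect inference via a pseudo-maximum-likelihood auxiliary estimator;
# the AR(1) "true" model and the OLS auxiliary estimator are illustrative choices.
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)

def simulate(theta, n=500):
    # "true" model (here a plain AR(1), standing in for an awkward likelihood)
    y = np.zeros(n)
    for t in range(1, n):
        y[t] = theta * y[t - 1] + rng.standard_normal()
    return y

def beta_hat(y):
    # auxiliary (pseudo-model) estimator: OLS slope of y_t on y_{t-1}
    return np.sum(y[1:] * y[:-1]) / np.sum(y[:-1] ** 2)

y_obs = simulate(0.6)
b_obs = beta_hat(y_obs)

def objective(theta, S=20):
    # || beta_hat(y_obs) - average auxiliary estimate over S simulated series ||^2
    return (b_obs - np.mean([beta_hat(simulate(theta)) for _ in range(S)])) ** 2

res = minimize_scalar(objective, bounds=(-0.95, 0.95), method="bounded")
print("indirect-inference estimate of theta:", res.x)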
19. Consistent indirect inference
“...in order to get a unique solution the dimension of
the auxiliary parameter β must be larger than or equal to
the dimension of the initial parameter θ. If the problem is
just identified the different methods become easier...”
Consistency depending on the criterion and on the asymptotic
identifiability of θ
[Gouriéroux & Monfort, 1996, p. 66]
Which connection [if any] with the B perspective?
22. Approximate Bayesian computation
Unavailable likelihoods
ABC methods
Genesis of ABC
ABC basics
Advances and interpretations
ABC as knn
ABC as an inference machine
[A]BCel
Conclusion and perspectives
23. Genetic background of ABC
skip genetics
ABC is a recent computational technique that only requires being
able to sample from the likelihood f (·|θ)
This technique stemmed from population genetics models, about
15 years ago, and population geneticists still contribute
significantly to methodological developments of ABC.
[Griffith & al., 1997; Tavaré & al., 1999]
24. Demo-genetic inference
Each model is characterized by a set of parameters θ that cover
historical (time divergence, admixture time ...), demographics
(population sizes, admixture rates, migration rates, ...) and genetic
(mutation rate, ...) factors
The goal is to estimate these parameters from a dataset of
polymorphism (DNA sample) y observed at the present time
Problem:
most of the time, we cannot calculate the likelihood of the
polymorphism data f (y|θ)...
29. Neutral model at a given microsatellite locus, in a closed
panmictic population at equilibrium
Observations: leaves of the tree
^θ = ?
Kingman’s genealogy
When time axis is
normalized,
T(k) ∼ Exp(k(k − 1)/2)
Mutations according to
the Simple stepwise
Mutation Model
(SMM)
• date of the mutations ∼
Poisson process with
intensity θ/2 over the
branches
• MRCA = 100
• independent mutations:
±1 with pr. 1/2
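As a toy illustration of the model just described, a minimal simulation sketch under stated assumptions (function name and implementation details are not from the slides): Kingman coalescence times, Poisson(θ/2 × branch length) mutations of ±1, and an MRCA allelic state of 100.

import numpy as np

rng = np.random.default_rng(1)

def simulate_smm_sample(n=8, theta=2.0, mrca=100):
    # each current lineage is the list of sampled genes descending from it
    lineages = [[i] for i in range(n)]
    net_steps = np.zeros(n)                       # net mutation steps carried to each leaf
    k = n
    while k > 1:
        t = rng.exponential(2.0 / (k * (k - 1)))  # T(k) ~ Exp(k(k-1)/2) on normalised time
        for leaves in lineages:                   # mutations on each of the k branches
            m = rng.poisson(theta / 2 * t)        # Poisson with intensity theta/2 over the branch
            if m:
                jump = rng.choice([-1, 1], size=m).sum()   # SMM: +-1 with probability 1/2
                for leaf in leaves:
                    net_steps[leaf] += jump
        i, j = rng.choice(k, size=2, replace=False)        # coalescence of two lineages
        merged = lineages[i] + lineages[j]
        lineages = [lin for idx, lin in enumerate(lineages) if idx not in (i, j)]
        lineages.append(merged)
        k -= 1
    return mrca + net_steps                       # allelic states observed at the leaves

print(simulate_smm_sample())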
30. Much more interesting models. . .
several independent loci
Independent gene genealogies and mutations
different populations
linked by an evolutionary scenario made of divergences,
admixtures, migrations between populations, etc.
larger sample size
usually between 50 and 100 genes
A typical evolutionary scenario:
an MRCA splitting into POP 0, POP 1 and POP 2 at divergence times τ1 and τ2
31. Intractable likelihood
Missing (too much missing!) data structure:
f(y|θ) = ∫_G f(y|G, θ) f(G|θ) dG
cannot be computed in a manageable way...
[Stephens & Donnelly, 2000]
The genealogies are considered as nuisance parameters
This modelling clearly differs from the phylogenetic perspective
where the tree is the parameter of interest.
33. A?B?C?
A stands for approximate
[wrong likelihood / pic?]
B stands for Bayesian
C stands for computation
[producing a parameter
sample]
[figure: posterior density estimates of θ across repeated simulations, each panel labelled with its effective sample size (ESS between 75.9 and 155.7)]
36. How Bayesian is aBc?
Could we turn the resolution into a Bayesian answer?
ideally so (not meaningful: requires an ∞-ly powerful computer)
approximation error unknown (w/o costly simulation)
true Bayes for wrong model (formal and artificial)
true Bayes for noisy model (much more convincing)
true Bayes for estimated likelihood (back to econometrics?)
illuminating the tension between information and precision
37. ABC methodology
Bayesian setting: target is π(θ)f (x|θ)
When likelihood f (x|θ) not in closed form, likelihood-free rejection
technique:
Foundation
For an observation y ∼ f (y|θ), under the prior π(θ), if one keeps
jointly simulating
θ' ∼ π(θ) , z ∼ f(z|θ') ,
until the auxiliary variable z is equal to the observed value, z = y,
then the selected
θ' ∼ π(θ|y)
[Rubin, 1984; Diggle & Gratton, 1984; Tavaré et al., 1997]
40. A as A...pproximative
When y is a continuous random variable, strict equality z = y is
replaced with a tolerance zone
ρ(y, z) ≤ ε
where ρ is a distance
Output distributed from
π(θ) Pθ{ρ(y, z) < ε}, which is by definition proportional to π(θ | ρ(y, z) < ε)
[Pritchard et al., 1999]
42. ABC algorithm
In most implementations, further degree of A...pproximation:
Algorithm 1 Likelihood-free rejection sampler
for i = 1 to N do
repeat
generate θ' from the prior distribution π(·)
generate z from the likelihood f(·|θ')
until ρ{η(z), η(y)} ≤ ε
set θi = θ'
end for
where η(y) defines a (not necessarily sufficient) statistic
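A minimal sketch of Algorithm 1 on a toy Gaussian problem; the prior, simulator, summary statistic and distance below are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(2)

def prior_sample():                       # theta' ~ pi(.)
    return rng.normal(0.0, 2.0)

def simulate_data(theta, n=50):           # z ~ f(.|theta')
    return rng.normal(theta, 1.0, size=n)

def eta(y):                               # summary statistic (here: sample mean)
    return np.mean(y)

def abc_rejection(y_obs, N=500, eps=0.1):
    """Keep theta' whenever rho(eta(z), eta(y)) <= eps."""
    s_obs, accepted = eta(y_obs), []
    while len(accepted) < N:
        theta = prior_sample()
        z = simulate_data(theta)
        if abs(eta(z) - s_obs) <= eps:    # rho = absolute difference between summaries
            accepted.append(theta)
    return np.array(accepted)

y_obs = rng.normal(1.0, 1.0, size=50)
post = abc_rejection(y_obs)
print(post.mean(), post.std())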
43. Output
The likelihood-free algorithm samples from the marginal in z of:
π_ε(θ, z|y) = π(θ) f(z|θ) I_{A_{ε,y}}(z) / ∫_{A_{ε,y}×Θ} π(θ) f(z|θ) dz dθ ,
where A_{ε,y} = {z ∈ D | ρ(η(z), η(y)) < ε}.
The idea behind ABC is that the summary statistics coupled with a
small tolerance should provide a good approximation of the
posterior distribution:
π_ε(θ|y) = ∫ π_ε(θ, z|y) dz ≈ π(θ|y) .
...does it?!
46. Output
The likelihood-free algorithm samples from the marginal in z of:
π_ε(θ, z|y) = π(θ) f(z|θ) I_{A_{ε,y}}(z) / ∫_{A_{ε,y}×Θ} π(θ) f(z|θ) dz dθ ,
where A_{ε,y} = {z ∈ D | ρ(η(z), η(y)) < ε}.
The idea behind ABC is that the summary statistics coupled with a
small tolerance should provide a good approximation of the
restricted posterior distribution:
π_ε(θ|y) = ∫ π_ε(θ, z|y) dz ≈ π(θ|η(y)) .
Not so good..!
skip convergence details!
47. Convergence of ABC
What happens when ε → 0?
For B ⊂ Θ, we have
∫_B [ ∫_{A_{ε,y}} f(z|θ) dz / ∫_{A_{ε,y}×Θ} π(θ) f(z|θ) dz dθ ] π(θ) dθ
= ∫_{A_{ε,y}} [ ∫_B f(z|θ) π(θ) dθ / ∫_{A_{ε,y}×Θ} π(θ) f(z|θ) dz dθ ] dz
= ∫_{A_{ε,y}} [ ∫_B f(z|θ) π(θ) dθ / m(z) ] × [ m(z) / ∫_{A_{ε,y}×Θ} π(θ) f(z|θ) dz dθ ] dz
= ∫_{A_{ε,y}} π(B|z) × [ m(z) / ∫_{A_{ε,y}×Θ} π(θ) f(z|θ) dz dθ ] dz
which indicates convergence for a continuous π(B|z).
49. Convergence (do not attempt!)
...and the above does not apply to insufficient statistics:
If η(y) is not a sufficient statistic, the best one can hope for is
π(θ|η(y)), not π(θ|y)
If η(y) is an ancillary statistic, the whole information contained in
y is lost! The “best” one can “hope” for is
π(θ|η(y)) = π(θ)
52. Comments
Role of the distance is paramount (because ε ≠ 0)
Scaling of the components of η(y) is also determinant
ε matters little if “small enough”
representative of the “curse of dimensionality”
small is beautiful!
the data as a whole may be paradoxically weakly informative for ABC
53. ABC (simul’) advances
how approximative is ABC? ABC as knn
Simulating from the prior is often poor in efficiency
Either modify the proposal distribution on θ to increase the density
of x’s within the vicinity of y...
[Marjoram et al, 2003; Bortot et al., 2007, Sisson et al., 2007]
...or by viewing the problem as a conditional density estimation
and by developing techniques that allow for a larger ε
[Beaumont et al., 2002]
.....or even by including ε in the inferential framework [ABCµ]
[Ratmann et al., 2009]
57. ABC-NP
Better usage of [prior] simulations by adjustment: instead of throwing
away θ such that ρ(η(z), η(y)) > ε, replace θ’s with locally regressed
transforms
θ* = θ − {η(z) − η(y)}^T β̂
[Csilléry et al., TEE, 2010]
where β̂ is obtained by [NP] weighted least square regression on
(η(z) − η(y)) with weights
Kδ{ρ(η(z), η(y))}
[Beaumont et al., 2002, Genetics]
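A minimal sketch of this local-linear regression adjustment, under assumed array shapes and an Epanechnikov kernel for Kδ; names and the toy usage are illustrative.

import numpy as np

def regression_adjust(thetas, etas_sim, eta_obs, delta):
    """Local-linear adjustment: theta* = theta - (eta(z) - eta(y))' beta_hat.

    thetas: (N,) accepted draws; etas_sim: (N, d) their summaries; eta_obs: (d,).
    """
    diff = etas_sim - eta_obs                          # eta(z) - eta(y)
    dist = np.linalg.norm(diff, axis=1)
    w = np.clip(1 - (dist / delta) ** 2, 0, None)      # Epanechnikov weights K_delta
    X = np.column_stack([np.ones(len(thetas)), diff])  # intercept + local regressors
    W = np.diag(w)
    coef = np.linalg.solve(X.T @ W @ X, X.T @ W @ thetas)   # weighted least squares
    beta = coef[1:]
    return thetas - diff @ beta

# toy usage: one summary statistic correlated with theta (assumed set-up)
rng = np.random.default_rng(3)
th = rng.normal(size=200)
et = th[:, None] + rng.normal(scale=0.5, size=(200, 1))
print(regression_adjust(th, et, np.array([0.0]), delta=2.0).mean())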
58. ABC-NP (regression)
Also found in the subsequent literature, e.g. in Fearnhead & Prangle (2012):
weight the simulation directly by
Kδ{ρ(η(z(θ)), η(y))}
or
(1/S) Σ_{s=1}^S Kδ{ρ(η(z^s(θ)), η(y))}
[consistent estimate of f(η|θ)]
Curse of dimensionality: poor estimate when d = dim(η) is large...
60. ABC-NP (density estimation)
Use of the kernel weights
Kδ {ρ(η(z(θ)), η(y))}
leads to the NP estimate of the posterior expectation
Σ_i θ_i Kδ{ρ(η(z(θ_i)), η(y))} / Σ_i Kδ{ρ(η(z(θ_i)), η(y))}
[Blum, JASA, 2010]
61. ABC-NP (density estimation)
Use of the kernel weights
Kδ {ρ(η(z(θ)), η(y))}
leads to the NP estimate of the posterior conditional density
Σ_i K̃_b(θ_i − θ) Kδ{ρ(η(z(θ_i)), η(y))} / Σ_i Kδ{ρ(η(z(θ_i)), η(y))}
[Blum, JASA, 2010]
62. ABC-NP (density estimations)
Other versions incorporating regression adjustments
Σ_i K̃_b(θ*_i − θ) Kδ{ρ(η(z(θ_i)), η(y))} / Σ_i Kδ{ρ(η(z(θ_i)), η(y))}
In all cases, error
E[ĝ(θ|y)] − g(θ|y) = c b² + c δ² + O_P(b² + δ²) + O_P(1/nδ^d)
var(ĝ(θ|y)) = c/(n b δ^d) (1 + o_P(1))
65. ABC as knn
[Biau et al., 2013, Annales de l’IHP]
Practice of ABC: determine the tolerance as a quantile of the observed
distances, say the 10% or 1% quantile,
ε = ε_N = q_α(d_1, . . . , d_N)
Interpretation of ε as a nonparametric bandwidth is only an
approximation of the actual practice
[Blum & François, 2010]
ABC is a k-nearest neighbour (knn) method with k_N = N ε_N
[Loftsgaarden & Quesenberry, 1965]
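A small sketch of this practice: the tolerance is taken as the α-quantile of the simulated distances, so roughly k_N = αN draws are kept (the knn view); all names are illustrative assumptions.

import numpy as np

def abc_knn_accept(thetas, distances, alpha=0.01):
    # epsilon_N = q_alpha(d_1, ..., d_N): keep the closest alpha-fraction of draws
    eps = np.quantile(distances, alpha)
    keep = distances <= eps
    return thetas[keep], eps

rng = np.random.default_rng(4)
thetas = rng.normal(size=10_000)                 # prior draws (toy)
distances = np.abs(rng.normal(size=10_000))      # stand-in for rho(eta(z_i), eta(y))
kept, eps = abc_knn_accept(thetas, distances, alpha=0.1)
print(len(kept), eps)                            # about k_N = alpha * N kept draws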
68. ABC consistency
Provided
k_N / log log N −→ ∞ and k_N / N −→ 0
as N → ∞, for almost all s0 (with respect to the distribution of S),
with probability 1,
(1/k_N) Σ_{j=1}^{k_N} ϕ(θ_j) −→ E[ϕ(θ_j)|S = s0]
[Devroye, 1982]
Biau et al. (2012) also recall pointwise and integrated mean square error
consistency results on the corresponding kernel estimate of the
conditional posterior distribution, under the constraints
k_N → ∞, k_N/N → 0, h_N → 0 and h_N^p k_N → ∞.
70. Rates of convergence
Further assumptions (on target and kernel) allow for precise
(integrated mean square) convergence rates (as a power of the
sample size N), derived from classical k-nearest neighbour
regression, like
when m = 1, 2, 3, k_N ≈ N^{(p+4)/(p+8)} and rate N^{−4/(p+8)}
when m = 4, k_N ≈ N^{(p+4)/(p+8)} and rate N^{−4/(p+8)} log N
when m > 4, k_N ≈ N^{(p+4)/(m+p+4)} and rate N^{−4/(m+p+4)}
[Biau et al., 2012, arxiv:1207.6461]
Drag: Only applies to sufficient summary statistics
72. ABC inference machine
Unavailable likelihoods
ABC methods
ABC as an inference machine
Error inc.
Exact BC and approximate
targets
summary statistic
[A]BCel
Conclusion and perspectives
73. How Bayesian is aBc..?
may be a convergent method of inference (meaningful?
sufficient? foreign?)
approximation error unknown (w/o massive simulation)
pragmatic/empirical B (there is no other solution!)
many calibration issues (tolerance, distance, statistics)
the NP side should be incorporated into the whole B picture
the approximation error should also be part of the B inference
to ABCel
74. ABCµ
Idea Infer about the error as well as about the parameter:
Use of a joint density
f(θ, ε|y) ∝ ξ(ε|y, θ) × π_θ(θ) × π_ε(ε)
where y is the data and ξ(ε|y, θ) is the prior predictive density of
ρ(η(z), η(y)) given θ and y when z ∼ f(z|θ)
Warning! Replacement of ξ(ε|y, θ) with a non-parametric kernel
approximation.
[Ratmann, Andrieu, Wiuf and Richardson, 2009, PNAS]
77. ABCµ details
Multidimensional distances ρ_k (k = 1, . . . , K) and errors
ε_k = ρ_k(η_k(z), η_k(y)), with
ε_k ∼ ξ_k(ε|y, θ) ≈ ξ̂_k(ε|y, θ) = (1/Bh_k) Σ_b K[{ε_k − ρ_k(η_k(z_b), η_k(y))}/h_k]
then used in replacing ξ(ε|y, θ) with min_k ξ̂_k(ε|y, θ)
ABCµ involves the acceptance probability
[π(θ', ε') / π(θ, ε)] × [q(θ', θ) q(ε', ε) / q(θ, θ') q(ε, ε')] × [min_k ξ̂_k(ε'|y, θ') / min_k ξ̂_k(ε|y, θ)]
79. Wilkinson’s exact BC (not exactly!)
ABC approximation error (i.e. non-zero tolerance) replaced with exact
simulation from a controlled approximation to the target, the
convolution of the true posterior with a kernel function
π_ε(θ, z|y) = π(θ) f(z|θ) K_ε(y − z) / ∫ π(θ) f(z|θ) K_ε(y − z) dz dθ ,
with K_ε a kernel parameterised by the bandwidth ε.
[Wilkinson, 2013]
Theorem
The ABC algorithm based on the assumption of a randomised
observation y = ỹ + ξ, ξ ∼ K_ε, and an acceptance probability of
K_ε(y − z)/M
gives draws from the posterior distribution π(θ|y).
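A minimal sketch of the acceptance step in the theorem above, for a toy scalar model, using a Gaussian kernel K_ε so that the bound M = K_ε(0) is available; the prior and likelihood are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(5)
eps = 0.2
y_obs = 1.3                                      # toy scalar observation

def K_eps(u):
    # Gaussian kernel with bandwidth eps, bounded by M = K_eps(0)
    return np.exp(-0.5 * (u / eps) ** 2) / (eps * np.sqrt(2 * np.pi))

M = K_eps(0.0)
samples = []
while len(samples) < 2000:
    theta = rng.normal(0.0, 2.0)                 # theta ~ pi(.)
    z = rng.normal(theta, 1.0)                   # z ~ f(.|theta)
    if rng.uniform() < K_eps(y_obs - z) / M:     # accept with probability K_eps(y - z)/M
        samples.append(theta)

# the kept thetas are exact draws for the model where y = y_tilde + xi, xi ~ K_eps
print(np.mean(samples), np.std(samples))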
81. How exact a BC?
Pros
Pseudo-data from true model and observed data from noisy
model
Interesting perspective in that outcome is completely
controlled
Link with ABCµ and with assuming y is observed with a measurement
error of density K_ε
Relates to the theory of model approximation
[Kennedy & O’Hagan, 2001]
leads to “noisy ABC”: perturb the data down to precision ε and
proceed
[Fearnhead & Prangle, 2012]
Cons
Requires K_ε to be bounded by M
True approximation error never assessed
83. Noisy ABC for HMMs
Specific case of a hidden Markov model summaries
Xt+1 ∼ Qθ(Xt, ·)
Yt+1 ∼ gθ(·|xt)
where only y^0_{1:n} is observed.
[Dean, Singh, Jasra, & Peters, 2011]
Use of specific constraints, adapted to the Markov structure:
y_1 ∈ B(y^0_1, ε) × · · · × y_n ∈ B(y^0_n, ε)
85. Noisy ABC-MLE
Idea: Modify instead the data from the start
(y^0_1 + ζ_1, . . . , y^0_n + ζ_n)
[see Fearnhead & Prangle]
noisy ABC-MLE estimate
arg max_θ P_θ( Y_1 ∈ B(y^0_1 + ζ_1, ε), . . . , Y_n ∈ B(y^0_n + ζ_n, ε) )
[Dean, Singh, Jasra, & Peters, 2011]
86. Consistent noisy ABC-MLE
Degrading the data improves the estimation performances:
Noisy ABC-MLE is asymptotically (in n) consistent
under further assumptions, the noisy ABC-MLE is
asymptotically normal
increase in variance of order ε^{−2}
likely degradation in precision or computing time due to the
lack of summary statistic [curse of dimensionality]
87. Which summary?
Fundamental difficulty of the choice of the summary statistic when
there is no non-trivial sufficient statistic
Loss of statistical information balanced against gain in data
roughening
Approximation error and information loss remain unknown
Choice of statistics induces choice of distance function
towards standardisation
may be imposed for external reasons
may gather several non-B point estimates
can learn about efficient combination
90. Which summary for model choice?
Depending on the choice of η(·), the Bayes factor based on this
insufficient statistic,
B^η_{12}(y) = ∫ π_1(θ_1) f^η_1(η(y)|θ_1) dθ_1 / ∫ π_2(θ_2) f^η_2(η(y)|θ_2) dθ_2 ,
is either consistent or inconsistent
[X, Cornuet, Marin, & Pillai, 2012]
Consistency only depends on the range of Ei [η(y)] under both
models
[Marin, Pillai, X, & Rousseau, 2012]
92. Semi-automatic ABC
Fearnhead and Prangle (2012) study ABC and the selection of the
summary statistic in close proximity to Wilkinson’s proposal
ABC considered as inferential method and calibrated as such
randomised (or ‘noisy’) version of the summary statistics
˜η(y) = η(y) + τ
derivation of a well-calibrated version of ABC, i.e. an
algorithm that gives proper predictions for the distribution
associated with this randomised summary statistic
93. Summary [of F&P/statistics]
optimality of the posterior expectation
E[θ|y]
of the parameter of interest as summary statistic η(y)!
[requires an iterative process]
use of the standard quadratic loss function
(θ − θ_0)^T A (θ − θ_0)
recent extension to model choice, optimality of the Bayes factor
B_12(y)
[F&P, ISBA 2012, Kyoto]
95. Summary [about summaries]
Choice of summary statistics is paramount for ABC
validation/performances
At best, ABC approximates π(. | η(y)) [unavoidable]
Model selection feasible with ABC [with caution!]
For estimation, consistency if {θ; µ(θ) = µ_0} = {θ_0} when
E_θ[η(y)] = µ(θ)
For testing, consistency if
{µ_1(θ_1), θ_1 ∈ Θ_1} ∩ {µ_2(θ_2), θ_2 ∈ Θ_2} = ∅
[Marin et al., 2011]
100. Empirical likelihood (EL)
Unavailable likelihoods
ABC methods
ABC as an inference machine
[A]BCel
ABC and EL
Composite likelihood
Illustrations
Conclusion and perspectives
101. Empirical likelihood (EL)
Dataset x made of n independent replicates x = (x1, . . . , xn) of
some X ∼ F
Generalized moment condition model
E_F[h(X, φ)] = 0,
where h is a known function and φ an unknown parameter
Corresponding empirical likelihood
L_el(φ|x) = max_p Π_{i=1}^n p_i
for all p such that 0 ≤ p_i ≤ 1, Σ_i p_i = 1, Σ_i p_i h(x_i, φ) = 0.
[Owen, 1988, Bio’ka, & Empirical Likelihood, 2001]
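A minimal sketch of L_el(φ|x) for a scalar mean-type constraint h(x, φ) = x − φ, using the standard representation p_i = 1/{n(1 + λ h(x_i, φ))}; the toy data and the constraint choice are assumptions.

import numpy as np
from scipy.optimize import brentq

def log_empirical_likelihood(x, phi):
    """log L_el(phi | x) for the constraint E_F[X - phi] = 0."""
    h = x - phi
    n = len(x)
    if h.min() >= 0 or h.max() <= 0:
        return -np.inf                            # 0 outside the convex hull: L_el = 0
    # optimal weights are p_i = 1 / (n (1 + lambda h_i)), with lambda solving
    # sum_i h_i / (1 + lambda h_i) = 0 on the interval keeping all p_i > 0
    lam = brentq(lambda l: np.sum(h / (1.0 + l * h)),
                 -1.0 / h.max() + 1e-10, -1.0 / h.min() - 1e-10)
    return -np.sum(np.log(n * (1.0 + lam * h)))

rng = np.random.default_rng(6)
x = rng.normal(0.3, 1.0, size=100)
for phi in (0.0, 0.3, 1.0):
    print(phi, log_empirical_likelihood(x, phi))  # highest near the sample mean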
103. Convergence of EL [3.4]
Theorem 3.4 Let X, Y_1, . . . , Y_n be independent rv’s with common
distribution F_0. For θ ∈ Θ and the function h(X, θ) ∈ R^s, let θ_0 ∈ Θ
be such that Var(h(Y_i, θ_0)) is finite and has rank q > 0. If θ_0 satisfies
E(h(X, θ_0)) = 0, then
−2 log{ L_el(θ_0|Y_1, . . . , Y_n) / n^{−n} } → χ²_{(q)}
in distribution when n → ∞.
[Owen, 2001]
104. Convergence of EL [3.4]
“...The interesting thing about Theorem 3.4 is what is not there. It
includes no conditions to make ^θ a good estimate of θ0, nor even
conditions to ensure a unique value for θ0, nor even that any solution θ0
exists. Theorem 3.4 applies in the just determined, over-determined, and
under-determined cases. When we can prove that our estimating
equations uniquely define θ0, and provide a consistent estimator ^θ of it,
then confidence regions and tests follow almost automatically through
Theorem 3.4.”.
[Owen, 2001]
105. Raw [A]BCel sampler
Act as if EL was an exact likelihood
[Lazar, 2003]
for i = 1 → N do
generate φi from the prior distribution π(·)
set the weight ωi = Lel(φi |xobs)
end for
return (φi , ωi ), i = 1, . . . , N
Output weighted sample of size N
106. Raw [A]BCel sampler
Performance of the weighted sample evaluated through the effective sample size
ESS = 1 / Σ_{i=1}^N ( ω_i / Σ_{j=1}^N ω_j )²
107. Raw [A]BCel sampler
More advanced algorithms can be adapted to EL:
E.g., adaptive multiple importance sampling (AMIS) of
Cornuet et al. to speed up computations
[Cornuet et al., 2012]
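A minimal sketch of the raw [A]BCel sampler on a toy problem, with the empirical likelihood taken under a simple mean constraint h(x, φ) = x − φ; the prior, data and all names are illustrative assumptions.

import numpy as np
from scipy.optimize import brentq

def log_el(x, phi):
    # empirical likelihood under the mean constraint E_F[X - phi] = 0 (as sketched above)
    h = x - phi
    if h.min() >= 0 or h.max() <= 0:
        return -np.inf
    lam = brentq(lambda l: np.sum(h / (1.0 + l * h)),
                 -1.0 / h.max() + 1e-10, -1.0 / h.min() - 1e-10)
    return -np.sum(np.log(len(x) * (1.0 + lam * h)))

rng = np.random.default_rng(7)
x_obs = rng.normal(0.3, 1.0, size=100)

N = 2000
phis = rng.uniform(-2.0, 2.0, size=N)                 # phi_i generated from the prior pi(.)
logw = np.array([log_el(x_obs, p) for p in phis])
w = np.exp(logw - logw.max())                         # omega_i = L_el(phi_i | x_obs)
w /= w.sum()

ess = 1.0 / np.sum(w ** 2)                            # ESS = 1 / sum_i (omega_i / sum_j omega_j)^2
print("weighted posterior mean of phi:", np.sum(w * phis), "ESS:", ess)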
108. Moment condition in population genetics?
EL does not require a fully defined and often complex (hence
debatable) parametric model
Main difficulty
Derive a constraint
E_F[h(X, φ)] = 0,
on the parameters of interest φ when X is made of the genotypes
of the sample of individuals at a given locus
E.g., in phylogeography, φ is composed of
dates of divergence between populations,
ratio of population sizes,
mutation rates, etc.
None of them are moments of the distribution of the allelic states
of the sample
109. Moment condition in population genetics?
EL does not require a fully defined and often complex (hence
debatable) parametric model
Main difficulty
Derive a constraint
E_F[h(X, φ)] = 0,
on the parameters of interest φ when X is made of the genotypes
of the sample of individuals at a given locus
⇒ h made of pairwise composite scores (whose zero is the pairwise
maximum likelihood estimator)
110. Pairwise composite likelihood
The intra-locus pairwise likelihood
ℓ_2(x_k|φ) = Π_{i<j} ℓ_2(x^i_k, x^j_k|φ)
with x^1_k, . . . , x^n_k the allelic states of the gene sample at the k-th locus
The pairwise score function
∇_φ log ℓ_2(x_k|φ) = Σ_{i<j} ∇_φ log ℓ_2(x^i_k, x^j_k|φ)
Composite likelihoods are often much narrower than the
original likelihood of the model
Safe(r) with EL because we only use the position of its mode
111. Pairwise likelihood: a simple case
Assumptions
sample ⊂ closed, panmictic population at equilibrium
marker: microsatellite
mutation rate: θ/2
If x^i_k and x^j_k are two genes of the sample, ℓ_2(x^i_k, x^j_k|θ) depends
only on δ = x^i_k − x^j_k, with
ℓ_2(δ|θ) = (1/√(1 + 2θ)) ρ(θ)^{|δ|}
and
ρ(θ) = θ / (1 + θ + √(1 + 2θ))
Pairwise score function
∂_θ log ℓ_2(δ|θ) = −1/(1 + 2θ) + |δ| / (θ √(1 + 2θ))
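A minimal sketch of this pairwise likelihood and its score, with the closed-form score checked against a numerical derivative; purely illustrative code.

import numpy as np

def rho(theta):
    return theta / (1 + theta + np.sqrt(1 + 2 * theta))

def log_l2(delta, theta):
    # log l2(delta | theta) = -0.5 log(1 + 2 theta) + |delta| log rho(theta)
    return -0.5 * np.log(1 + 2 * theta) + abs(delta) * np.log(rho(theta))

def score(delta, theta):
    # closed-form pairwise score from the slide
    return -1 / (1 + 2 * theta) + abs(delta) / (theta * np.sqrt(1 + 2 * theta))

delta, theta, h = 3, 1.5, 1e-6
numerical = (log_l2(delta, theta + h) - log_l2(delta, theta - h)) / (2 * h)
print(score(delta, theta), numerical)    # the two should agree up to O(h^2)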
114. Pairwise likelihood: 2 diverging populations
Evolutionary scenario: an MRCA splitting into POP a and POP b at
divergence time τ
Assumptions
τ: divergence date of pop. a and b
θ/2: mutation rate
Let x^i_k and x^j_k be two genes coming resp. from pop. a and b, and set
δ = x^i_k − x^j_k. Then
ℓ_2(δ|θ, τ) = (e^{−τθ} / √(1 + 2θ)) Σ_{k=−∞}^{+∞} ρ(θ)^{|k|} I_{δ−k}(τθ) ,
where I_n(z) is the nth-order modified Bessel function of the first kind
115. Pairwise likelihood: 2 diverging populations
Evolutionary scenario: an MRCA splitting into POP a and POP b at
divergence time τ
Assumptions
τ: divergence date of pop. a and b
θ/2: mutation rate
Let x^i_k and x^j_k be two genes coming resp. from pop. a and b, and set
δ = x^i_k − x^j_k.
A 2-dim score function
∂_τ log ℓ_2(δ|θ, τ) = −θ + (θ/2) [ℓ_2(δ − 1|θ, τ) + ℓ_2(δ + 1|θ, τ)] / ℓ_2(δ|θ, τ)
∂_θ log ℓ_2(δ|θ, τ) = −τ − 1/(1 + 2θ) + q(δ|θ, τ)/ℓ_2(δ|θ, τ)
+ (τ/2) [ℓ_2(δ − 1|θ, τ) + ℓ_2(δ + 1|θ, τ)] / ℓ_2(δ|θ, τ)
where
q(δ|θ, τ) := (e^{−τθ} / √(1 + 2θ)) (ρ′(θ)/ρ(θ)) Σ_{k=−∞}^{+∞} |k| ρ(θ)^{|k|} I_{δ−k}(τθ)
116. Example: normal posterior
[A]BCel with two constraints
[figure: fifteen [A]BCel posterior density estimates of θ, one per replication, with effective sample sizes (ESS) ranging from 75.9 to 155.7]
Sample sizes are of 25 (column 1), 50 (column 2) and 75 (column 3)
observations
117. Example: normal posterior
[A]BCel with three constraints
[figure: fifteen [A]BCel posterior density estimates of θ, one per replication, with effective sample sizes (ESS) ranging from 134.8 to 331.5]
Sample sizes are of 25 (column 1), 50 (column 2) and 75 (column 3)
observations
118. Example: Superposition of gamma processes
Example of the superposition of N renewal processes with waiting
times τij (i = 1, . . . , N, j = 1, . . .) ∼ G(α, β), when N is unknown.
Renewal processes
ζi1 = τi1, ζi2 = ζi1 + τi2, . . .
with observations made of first n values of the ζij ’s,
z1 = min{ζij }, z2 = min{ζij ; ζij > z1}, . . .
ending with
zn = min{ζij ; ζij > zn−1} .
[Cox & Kartsonaki, B’ka, 2012]
119. Example: Superposition of gamma processes (ABC)
Interesting testing ground for [A]BCel since data (zt) neither iid
nor Markov
Recovery of an iid structure by
1. simulating a pseudo-dataset, (z*_1, . . . , z*_n), as in regular ABC,
2. deriving the sequence of indicators (ν_1, . . . , ν_n), as
z*_1 = ζ_{ν_1 1}, z*_2 = ζ_{ν_2 j_2}, . . .
3. exploiting the fact that those indicators are distributed from the
prior distribution on the ν_t’s, leading to an iid sample of G(α, β)
variables
Comparison of ABC and [A]BCel posteriors
[figure: posterior densities of α, β and N — top: [A]BCel, bottom: regular ABC]
121. Pop’gen’: A first experiment
Evolutionary scenario:
an MRCA splitting into POP 0 and POP 1 at divergence time τ
Dataset:
50 genes per population,
100 microsat. loci
Assumptions:
Ne identical over all
populations
φ = (log10 θ, log10 τ)
uniform prior over
(−1., 1.5) × (−1., 1.)
Comparison of the original ABC with [A]BCel
[figure: posterior densities of log(theta) and log(tau1), ESS = 7034]
histogram = [A]BCel
curve = original ABC
vertical line = “true” parameter
123. ABC vs. [A]BCel on 100 replicates of the 1st experiment
Accuracy:
            log10 θ             log10 τ
        ABC     [A]BCel     ABC     [A]BCel
(1)    0.097     0.094     0.315     0.117
(2)    0.071     0.059     0.272     0.077
(3)    0.68      0.81      1.0       0.80
(1) Root Mean Square Error of the posterior mean
(2) Median Absolute Deviation of the posterior median
(3) Coverage of the credibility interval of probability 0.8
Computation time: on a recent 6-core computer
(C++/OpenMP)
ABC ≈ 4 hours
[A]BCel ≈ 2 minutes
124. Pop’gen’: Second experiment
Evolutionary scenario:
an MRCA splitting into POP 0, POP 1 and POP 2 at divergence times τ1 and τ2
Dataset:
50 genes per population,
100 microsat. loci
Assumptions:
Ne identical over all
populations
φ =
(log10 θ, log10 τ1, log10 τ2)
non-informative uniform
Comparison of the original ABC
with [A]BCel
histogram = [A]BCel
curve = original ABC
vertical line = “true” parameter
128. ABC vs. [A]BCel on 100 replicates of the 2nd experiment
Accuracy:
            log10 θ             log10 τ1            log10 τ2
        ABC     [A]BCel     ABC     [A]BCel     ABC     [A]BCel
(1)    .0059    .0794      .472     .483       29.3     4.7
(2)    .048     .053       .32      .28        4.13     3.3
(3)    .79      .76        .88      .76        .89      .79
(1) Root Mean Square Error of the posterior mean
(2) Median Absolute Deviation of the posterior median
(3) Coverage of the credibility interval of probability 0.8
Computation time: on a recent 6-core computer
(C++/OpenMP)
ABC ≈ 6 hours
[A]BCel ≈ 8 minutes
129. Comparison
On large datasets, [A]BCel gives more accurate results than ABC
ABC simplifies the dataset through summary statistics
Due to the large dimension of x, the original ABC algorithm estimates
π(θ | η(xobs)) ,
where η(xobs) is some (non-linear) projection of the observed dataset
onto a space of smaller dimension
→ Some information is lost
[A]BCel simplifies the model through a generalized moment condition
model.
→ Here, the moment condition model is based on the pairwise
composite likelihood
130. Conclusion/perspectives
abc part of a wider picture
to handle complex/Big Data
models
many formats of empirical
[likelihood] Bayes methods
available
lack of comparative tools
and of an assessment for
information loss