1. Approximative Bayesian Computation (ABC) Methods
Approximative Bayesian Computation (ABC) Methods
Christian P. Robert
Université Paris Dauphine and CREST-INSEE
http://www.ceremade.dauphine.fr/~xian
Joint works with M. Beaumont, J.-M. Cornuet, A. Grelaud, J.-M. Marin, F. Rodolphe, & J.-F. Tally
Colloquium, Université de Montpellier 2, 12 February 2009
2. Approximative Bayesian Computation (ABC) Methods
Outline
1. Introduction
2. Population Monte Carlo
3. ABC
4. ABC-PMC
5. ABC for model choice in GRFs
3. Approximative Bayesian Computation (ABC) Methods
Introduction
General purpose
Given a density $\pi$ known up to a normalizing constant, i.e. $\pi \propto \tilde\pi$ with only $\tilde\pi$ available, and an integrable function $h$, compute
$$\Pi(h) = \int h(x)\,\pi(x)\,\mu(dx) = \frac{\int h(x)\,\tilde\pi(x)\,\mu(dx)}{\int \tilde\pi(x)\,\mu(dx)}$$
when $\int h(x)\,\tilde\pi(x)\,\mu(dx)$ is intractable.
4. Approximative Bayesian Computation (ABC) Methods
Introduction
Monte Carlo basics
Generate an iid sample $x_1, \ldots, x_N$ from $\pi$ and estimate $\Pi(h)$ by
$$\hat\Pi^N_{MC}(h) = N^{-1}\sum_{i=1}^N h(x_i).$$
LLN: $\hat\Pi^N_{MC}(h) \xrightarrow{\text{a.s.}} \Pi(h)$
CLT: if $\Pi(h^2) = \int h^2(x)\,\pi(x)\,\mu(dx) < \infty$,
$$\sqrt{N}\left(\hat\Pi^N_{MC}(h) - \Pi(h)\right) \rightsquigarrow \mathcal{N}\!\left(0,\ \Pi\left\{[h - \Pi(h)]^2\right\}\right).$$
Caveat
Often impossible or inefficient to simulate directly from $\Pi$
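To make the estimator concrete, here is a minimal numerical sketch (not part of the original slides): $\pi$ is taken to be a standard normal and $h(x) = x^2$, so that $\Pi(h) = 1$ exactly and the $1/\sqrt{N}$ error rate from the CLT can be checked empirically.

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_estimate(h, sample, N):
    """Plain Monte Carlo estimate of Pi(h) from N iid draws from pi."""
    x = sample(N)
    return np.mean(h(x))

# Example: pi = N(0, 1) and h(x) = x^2, so Pi(h) = 1 exactly.
h = lambda x: x ** 2
sample = rng.standard_normal

for N in (10**2, 10**4, 10**6):
    print(N, mc_estimate(h, sample, N))   # error shrinks roughly as 1/sqrt(N)
```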
5. Approximative Bayesian Computation (ABC) Methods
Introduction
Importance Sampling
For $Q$ a proposal distribution such that $Q(dx) = q(x)\,\mu(dx)$, alternative representation
$$\Pi(h) = \int h(x)\,\{\pi/q\}(x)\,q(x)\,\mu(dx).$$
Principle
Generate an iid sample $x_1, \ldots, x_N \sim Q$ and estimate $\Pi(h)$ by
$$\hat\Pi^{IS}_{Q,N}(h) = N^{-1}\sum_{i=1}^N h(x_i)\,\{\pi/q\}(x_i).$$
6. Approximative Bayesian Computation (ABC) Methods
Introduction
Importance Sampling
Then
LLN: $\hat\Pi^{IS}_{Q,N}(h) \xrightarrow{\text{a.s.}} \Pi(h)$, and if $Q\left((h\pi/q)^2\right) < \infty$,
CLT: $\sqrt{N}\left(\hat\Pi^{IS}_{Q,N}(h) - \Pi(h)\right) \rightsquigarrow \mathcal{N}\!\left(0,\ Q\left\{(h\pi/q - \Pi(h))^2\right\}\right).$
Caveat
If the normalizing constant is unknown, impossible to use $\hat\Pi^{IS}_{Q,N}$
Generic problem in Bayesian Statistics: $\pi(\theta|x) \propto f(x|\theta)\,\pi(\theta)$.
7. Approximative Bayesian Computation (ABC) Methods
Introduction
Importance Sampling
Self-Normalised Importance Sampling
Self-normalized version
$$\hat\Pi^{SNIS}_{Q,N}(h) = \left(\sum_{i=1}^N \{\pi/q\}(x_i)\right)^{-1} \sum_{i=1}^N h(x_i)\,\{\pi/q\}(x_i).$$
LLN: $\hat\Pi^{SNIS}_{Q,N}(h) \xrightarrow{\text{a.s.}} \Pi(h)$, and if $\Pi\left((1 + h^2)(\pi/q)\right) < \infty$,
CLT: $\sqrt{N}\left(\hat\Pi^{SNIS}_{Q,N}(h) - \Pi(h)\right) \rightsquigarrow \mathcal{N}\!\left(0,\ \pi\left\{(\pi/q)\,[h - \Pi(h)]^2\right\}\right).$
© The quality of the SNIS approximation depends on the choice of $Q$
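A short self-normalised IS sketch (my own illustration, not from the slides): the target is a standard normal known only up to a constant, the proposal $Q$ is a heavier-tailed Student t, and the weights are normalised so that the unknown constant cancels.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def snis(h, log_pi_tilde, proposal, N):
    """Self-normalised IS estimate of Pi(h); pi known only up to a constant."""
    x = proposal.rvs(size=N, random_state=rng)
    log_w = log_pi_tilde(x) - proposal.logpdf(x)      # log of pi~/q
    w = np.exp(log_w - log_w.max())                   # stabilise before normalising
    w /= w.sum()
    return np.sum(w * h(x))

# Target: N(0, 1) up to a constant; proposal: Student t with 3 df and wider scale.
log_pi_tilde = lambda x: -0.5 * x**2
proposal = stats.t(df=3, scale=2.0)

print(snis(lambda x: x**2, log_pi_tilde, proposal, 100_000))  # close to 1
```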
8. Approximative Bayesian Computation (ABC) Methods
Introduction
Importance Sampling
Iterated importance sampling
Introduction of an algorithmic temporal dimension:
$$x_i^{(t)} \sim q_t\left(x \,\middle|\, x_i^{(t-1)}\right), \qquad i = 1, \ldots, n,\ t = 1, \ldots$$
and
$$\hat I_t = \frac{1}{n}\sum_{i=1}^n \varrho_i^{(t)}\, h\!\left(x_i^{(t)}\right)$$
is still unbiased for
$$\varrho_i^{(t)} = \frac{\pi_t\!\left(x_i^{(t)}\right)}{q_t\!\left(x_i^{(t)} \,\middle|\, x_i^{(t-1)}\right)}, \qquad i = 1, \ldots, n$$
9. Approximative Bayesian Computation (ABC) Methods
Population Monte Carlo
PMCA: Population Monte Carlo Algorithm
At time t = 0:
Generate $(x_{i,0})_{1 \le i \le N} \overset{\text{iid}}{\sim} Q_0$
Set $\omega_{i,0} = \{\pi/q_0\}(x_{i,0})$
Generate $(J_{i,0})_{1 \le i \le N} \overset{\text{iid}}{\sim} \mathcal{M}\big(1, (\bar\omega_{i,0})_{1 \le i \le N}\big)$
Set $\tilde x_{i,0} = x_{J_{i,0},0}$
At time $t$ ($t = 1, \ldots, T$):
Generate $x_{i,t} \overset{\text{ind}}{\sim} Q_{i,t}(\tilde x_{i,t-1}, \cdot)$
Set $\omega_{i,t} = \pi(x_{i,t}) / q_{i,t}(\tilde x_{i,t-1}, x_{i,t})$
Generate $(J_{i,t})_{1 \le i \le N} \overset{\text{iid}}{\sim} \mathcal{M}\big(1, (\bar\omega_{i,t})_{1 \le i \le N}\big)$
Set $\tilde x_{i,t} = x_{J_{i,t},t}$.
[Cappé, Douc, Guillin, Marin, & CPR, 2009]
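A compact sketch of one way to implement PMC for a one-dimensional target (illustrative choices of $Q_0$, kernels and target on my part, not the authors' code): Gaussian random-walk proposals centred at the resampled particles, weights $\pi/q$ self-normalised before resampling, and multinomial resampling at every iteration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

def pmc(log_pi, N=2000, T=10, scale=1.0):
    """Population Monte Carlo with Gaussian random-walk proposals."""
    # t = 0: draw from an initial proposal Q0 and weight by pi/q0
    q0 = stats.norm(0.0, 5.0)
    x = q0.rvs(size=N, random_state=rng)
    logw = log_pi(x) - q0.logpdf(x)
    for t in range(1, T + 1):
        w = np.exp(logw - logw.max()); w /= w.sum()
        xt = rng.choice(x, size=N, p=w, replace=True)        # multinomial resampling
        x = xt + scale * rng.standard_normal(N)              # x_{i,t} ~ Q_t(x~_{i,t-1}, .)
        logw = log_pi(x) - stats.norm.logpdf(x, loc=xt, scale=scale)
    w = np.exp(logw - logw.max()); w /= w.sum()
    return x, w

# Target: bimodal mixture 0.5 N(-3, 1) + 0.5 N(3, 1), known up to a constant.
log_pi = lambda x: np.logaddexp(-0.5 * (x + 3)**2, -0.5 * (x - 3)**2)
x, w = pmc(log_pi)
print("posterior mean ~", np.sum(w * x))      # close to 0 by symmetry
print("E[x^2] ~", np.sum(w * x**2))           # close to 10
```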
10. Approximative Bayesian Computation (ABC) Methods
Population Monte Carlo
Notes on PMC
After T iterations of PMCA, PMC estimator of $\Pi(h)$ given by
$$\bar\Pi^{N,T}_{PMC}(h) = \frac{1}{T}\sum_{t=1}^{T}\sum_{i=1}^{N} \bar\omega_{i,t}\, h(x_{i,t}).$$
1. $\bar\omega_{i,t}$ means normalising over the whole sequence of simulations
2. $Q_{i,t}$'s chosen arbitrarily under support constraint
3. $Q_{i,t}$'s may depend on whole sequence of simulations
4. alternatives to multinomial sampling reduce variance / preserve "unbiasedness"
[Kitagawa, 1996 / Carpenter, Clifford & Fearnhead, 1997]
11. Approximative Bayesian Computation (ABC) Methods
ABC
The ABC method
Bayesian setting: target is $\pi(\theta)\,f(x|\theta)$
When the likelihood $f(x|\theta)$ is not in closed form, likelihood-free rejection technique:
ABC algorithm
For an observation $y \sim f(y|\theta)$, under the prior $\pi(\theta)$, keep jointly simulating
$$\theta' \sim \pi(\theta)\,, \quad x \sim f(x|\theta')\,,$$
until the auxiliary variable $x$ is equal to the observed value, $x = y$.
[Pritchard et al., 1999]
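A minimal sketch of this exact-match rejection scheme (my illustration, with an arbitrary toy model): a single Binomial(10, θ) observation with a uniform prior, for which the accepted θ′ are exact draws from the Beta posterior.

```python
import numpy as np

rng = np.random.default_rng(3)

def abc_exact(y, n_trials, n_sims=200_000):
    """Exact-match ABC for a Binomial(n_trials, theta) observation, uniform prior."""
    theta = rng.uniform(0.0, 1.0, size=n_sims)      # theta' ~ pi(theta)
    x = rng.binomial(n_trials, theta)               # x ~ f(x | theta')
    return theta[x == y]                            # keep only the simulations with x = y

accepted = abc_exact(y=7, n_trials=10)
# Exact posterior is Beta(8, 4), with mean 8/12 = 0.667
print(len(accepted), accepted.mean())
```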
13. Approximative Bayesian Computation (ABC) Methods
ABC
A as approximative
When $y$ is a continuous random variable, equality $x = y$ is replaced with a tolerance condition,
$$\varrho(x, y) \le \epsilon$$
where $\varrho$ is a distance between summary statistics
Output distributed from
$$\pi(\theta)\, P_\theta\{\varrho(x, y) < \epsilon\} \propto \pi\big(\theta \,\big|\, \varrho(x, y) < \epsilon\big)$$
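And a tolerance-based version for continuous data (again a sketch; the model, summary statistic, distance and ε below are my own choices): $y$ is a sample from $\mathcal{N}(\theta, 1)$, the summary is the sample mean, and θ′ is kept when the simulated mean falls within ε of the observed one.

```python
import numpy as np

rng = np.random.default_rng(4)

# Observed data: n draws from N(theta_true, 1)
n, theta_true = 50, 1.5
y = rng.normal(theta_true, 1.0, size=n)
s_obs = y.mean()                                              # summary statistic S(y)

def abc_tolerance(eps, n_sims=100_000):
    theta = rng.normal(0.0, 10.0, size=n_sims)                # prior N(0, 100)
    x = rng.normal(theta[:, None], 1.0, size=(n_sims, n))     # pseudo-data
    dist = np.abs(x.mean(axis=1) - s_obs)                     # rho(S(x), S(y))
    return theta[dist < eps]

for eps in (1.0, 0.1, 0.02):
    post = abc_tolerance(eps)
    print(eps, len(post), post.mean())   # tightens around the posterior mean as eps decreases
```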
14. Approximative Bayesian Computation (ABC) Methods
ABC
ABC improvements
Simulating from the prior is often poor in efficiency.
Either modify the proposal distribution on $\theta$ to increase the density of $x$'s within the vicinity of $y$...
[Marjoram et al, 2003; Bortot et al., 2007; Sisson et al., 2007]
...or view the problem as conditional density estimation and develop techniques that allow for a larger $\epsilon$
[Beaumont et al., 2002]
15. Approximative Bayesian Computation (ABC) Methods
ABC
ABC-MCMC
Markov chain $(\theta^{(t)})$ created via the transition function
$$\theta^{(t+1)} = \begin{cases} \theta' \sim K(\theta'|\theta^{(t)}) & \text{if } x \sim f(x|\theta') \text{ is such that } x = y \\[2pt] & \text{and } u \sim \mathcal{U}(0,1) \le \dfrac{\pi(\theta')\,K(\theta^{(t)}|\theta')}{\pi(\theta^{(t)})\,K(\theta'|\theta^{(t)})}, \\[4pt] \theta^{(t)} & \text{otherwise,} \end{cases}$$
has the posterior $\pi(\theta|y)$ as stationary distribution
[Marjoram et al, 2003]
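A sketch of this ABC-MCMC transition with a tolerance $\varrho(x, y) < \epsilon$ in place of $x = y$ (illustrative choices of model, kernel $K$ and ε on my part): a symmetric Gaussian random walk on θ, so the kernel terms cancel in the acceptance ratio, and a proposal is accepted only if its pseudo-data is close enough to the observation.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)

# Same toy setting as above: y ~ N(theta, 1), summary = sample mean, prior N(0, 10^2)
n = 50
y = rng.normal(1.5, 1.0, size=n)
s_obs = y.mean()
prior = stats.norm(0.0, 10.0)

def abc_mcmc(n_iter=20_000, eps=0.05, step=0.5, theta0=0.0):
    chain = np.empty(n_iter)
    theta = theta0
    for t in range(n_iter):
        prop = theta + step * rng.standard_normal()    # symmetric K: kernel terms cancel
        x = rng.normal(prop, 1.0, size=n)              # x ~ f(x | theta')
        if abs(x.mean() - s_obs) < eps:                # rho(x, y) < eps
            if rng.uniform() <= prior.pdf(prop) / prior.pdf(theta):
                theta = prop
        chain[t] = theta
    return chain

chain = abc_mcmc(theta0=s_obs)          # start near the data to avoid a long burn-in
print(chain[5_000:].mean())             # roughly the posterior mean of theta given s_obs
```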
16. Approximative Bayesian Computation (ABC) Methods
ABC
ABC-PRC
Another sequential version producing a sequence of Markov transition kernels $K_t$ and of samples $(\theta_1^{(t)}, \ldots, \theta_N^{(t)})$ $(1 \le t \le T)$
ABC-PRC Algorithm
1. Pick a $\theta^\star$ at random among the previous $\theta_i^{(t-1)}$'s $(1 \le i \le N)$, with probabilities $\omega_i^{(t-1)}$.
2. Generate
$$\theta_i^{(t)} \sim K_t(\theta|\theta^\star)\,, \quad x \sim f\big(x|\theta_i^{(t)}\big)\,,$$
3. Check that $\varrho(x, y) < \epsilon$, otherwise start again.
[Sisson et al., 2007]
17. Approximative Bayesian Computation (ABC) Methods
ABC
ABC-PRC weight
Probability $\omega_i^{(t)}$ computed as
$$\omega_i^{(t)} \propto \pi\big(\theta_i^{(t)}\big)\, L_{t-1}\big(\theta^\star|\theta_i^{(t)}\big)\, \big\{\pi(\theta^\star)\, K_t\big(\theta_i^{(t)}|\theta^\star\big)\big\}^{-1},$$
where $L_{t-1}$ is an arbitrary transition kernel.
In case
$$L_{t-1}(\theta'|\theta) = K_t(\theta|\theta')\,,$$
all weights are equal under a uniform prior.
Inspired from Del Moral et al. (2006), who use backward kernels $L_{t-1}$ in SMC to achieve unbiasedness
20. Approximative Bayesian Computation (ABC) Methods
ABC-PMC
A PMC version
Use of the same kernel idea as ABC-PRC but with IS correction
Generate a sample at iteration t by
$$\hat\pi_t\big(\theta^{(t)}\big) \propto \sum_{j=1}^{N} \omega_j^{(t-1)}\, K_t\big(\theta^{(t)} \,\big|\, \theta_j^{(t-1)}\big)$$
modulo acceptance of the associated $x_t$, and use an importance weight associated with an accepted simulation $\theta_i^{(t)}$:
$$\omega_i^{(t)} \propto \pi\big(\theta_i^{(t)}\big) \,\big/\, \hat\pi_t\big(\theta_i^{(t)}\big)\,.$$
© Still likelihood-free
[Beaumont et al., 2008, arXiv:0805.2256]
21. Approximative Bayesian Computation (ABC) Methods
ABC-PMC
The ABC-PMC algorithm
Given a decreasing sequence of approximation levels $\epsilon_1 \ge \ldots \ge \epsilon_T$,
1. At iteration t = 1:
For $i = 1, \ldots, N$:
Simulate $\theta_i^{(1)} \sim \pi(\theta)$ and $x \sim f\big(x|\theta_i^{(1)}\big)$ until $\varrho(x, y) < \epsilon_1$
Set $\omega_i^{(1)} = 1/N$
Take $\tau_2^2$ as twice the empirical variance of the $\theta_i^{(1)}$'s
2. At iteration $2 \le t \le T$:
For $i = 1, \ldots, N$, repeat
Pick $\theta_i^\star$ from the $\theta_j^{(t-1)}$'s with probabilities $\omega_j^{(t-1)}$
Generate $\theta_i^{(t)} \,\big|\, \theta_i^\star \sim \mathcal{N}\big(\theta_i^\star, \tau_t^2\big)$ and $x \sim f\big(x|\theta_i^{(t)}\big)$
until $\varrho(x, y) < \epsilon_t$
Set $\omega_i^{(t)} \propto \pi\big(\theta_i^{(t)}\big) \Big/ \sum_{j=1}^{N} \omega_j^{(t-1)}\, \varphi\Big(\tau_t^{-1}\big\{\theta_i^{(t)} - \theta_j^{(t-1)}\big\}\Big)$
Take $\tau_{t+1}^2$ as twice the weighted empirical variance of the $\theta_i^{(t)}$'s
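Below is a sketch of this algorithm in code (my reading of the slide, for a scalar θ; the prior, simulator, distance and ε-sequence passed in are all user choices), followed by a toy usage on the Sisson et al. mixture example introduced on the next slide.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)

def abc_pmc(prior, simulate, distance, eps_seq, N=500):
    """ABC-PMC sketch for scalar theta: Gaussian kernel with variance
    tau_t^2 = twice the (weighted) empirical variance of the previous sample."""
    # Iteration t = 1: plain ABC rejection at tolerance eps_1
    theta = np.empty(N)
    for i in range(N):
        while True:
            th = prior.rvs(random_state=rng)
            if distance(simulate(th)) < eps_seq[0]:
                theta[i] = th
                break
    w = np.full(N, 1.0 / N)
    for eps in eps_seq[1:]:
        tau = np.sqrt(2.0 * np.cov(theta, aweights=w))        # tau_t
        new_theta = np.empty(N)
        for i in range(N):
            while True:
                star = rng.choice(theta, p=w)                  # pick theta* with prob. omega
                th = star + tau * rng.standard_normal()        # theta ~ N(theta*, tau_t^2)
                if prior.pdf(th) > 0 and distance(simulate(th)) < eps:
                    new_theta[i] = th
                    break
        # omega_i proportional to prior / mixture of kernels around previous particles
        kern = stats.norm.pdf((new_theta[:, None] - theta[None, :]) / tau) / tau
        new_w = prior.pdf(new_theta) / (kern * w[None, :]).sum(axis=1)
        theta, w = new_theta, new_w / new_w.sum()
    return theta, w

# Toy usage: theta ~ U(-10, 10), x | theta ~ 0.5 N(theta, 1) + 0.5 N(theta, 1/100), y = 0
prior = stats.uniform(loc=-10, scale=20)
def simulate(th):
    return rng.normal(th, 1.0 if rng.uniform() < 0.5 else 0.1)
theta, w = abc_pmc(prior, simulate, lambda x: abs(x), eps_seq=[2.0, 0.5, 0.1])
print("weighted posterior mean:", np.sum(w * theta))   # close to 0
```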
22. Approximative Bayesian Computation (ABC) Methods
ABC-PMC
A mixture example (0)
Toy model of Sisson et al. (2007): if
$$\theta \sim \mathcal{U}(-10, 10)\,, \quad x|\theta \sim 0.5\, \mathcal{N}(\theta, 1) + 0.5\, \mathcal{N}(\theta, 1/100)\,,$$
then the posterior distribution associated with $y = 0$ is the normal mixture
$$\theta \,|\, y = 0 \sim 0.5\, \mathcal{N}(0, 1) + 0.5\, \mathcal{N}(0, 1/100)$$
restricted to $[-10, 10]$.
Furthermore, the true target is available as
$$\pi\big(\theta \,\big|\, |x| < \epsilon\big) \propto \Phi(\epsilon - \theta) - \Phi(-\epsilon - \theta) + \Phi\big(10(\epsilon - \theta)\big) - \Phi\big(-10(\epsilon + \theta)\big)\,.$$
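As a quick numerical check (my own sketch), this closed-form target can be evaluated and normalised on a grid; as ε decreases it approaches the restricted mixture $0.5\,\mathcal{N}(0,1) + 0.5\,\mathcal{N}(0,1/100)$.

```python
import numpy as np
from scipy.stats import norm

def true_target(theta, eps):
    """Unnormalised pi(theta | |x| < eps) for the Sisson et al. (2007) toy model."""
    p = (norm.cdf(eps - theta) - norm.cdf(-eps - theta)
         + norm.cdf(10 * (eps - theta)) - norm.cdf(-10 * (eps + theta)))
    return np.where(np.abs(theta) <= 10, p, 0.0)    # uniform prior on [-10, 10]

grid = np.linspace(-10, 10, 4001)
step = grid[1] - grid[0]
for eps in (2.0, 0.5, 0.01):
    dens = true_target(grid, eps)
    dens /= dens.sum() * step                        # normalise numerically on the grid
    print(eps, round(dens.max(), 3))                 # mode height grows as eps shrinks
```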
23. Approximative Bayesian Computation (ABC) Methods
ABC-PMC
A mixture example (2)
Recovery of the target, whether using a fixed standard deviation of $\tau = 0.15$ or $\tau = 1/0.15$, or a sequence of adaptive $\tau_t$'s.
24. Approximative Bayesian Computation (ABC) Methods
ABC for model choice in GRFs
ABC for model choice
1. Introduction
2. Population Monte Carlo
3. ABC
4. ABC-PMC
5. ABC for model choice in GRFs
   Gibbs random fields
   Model choice via ABC
   Illustrations
25. Approximative Bayesian Computation (ABC) Methods
ABC for model choice in GRFs
Gibbs random fields
Gibbs distribution
$y = (y_1, \ldots, y_n)$ is a Gibbs random field associated with the graph $G$ if
$$f(y) = \frac{1}{Z} \exp\Big\{-\sum_{c \in \mathcal{C}} V_c(y_c)\Big\}\,,$$
where $Z$ is the normalising constant, $\mathcal{C}$ is the set of cliques and $V_c$ is any function, also called potential ($U(y) = \sum_{c \in \mathcal{C}} V_c(y_c)$ is the energy function)
© $Z$ is usually unavailable in closed form
26. Approximative Bayesian Computation (ABC) Methods
ABC for model choice in GRFs
Gibbs random fields
Potts model
$V_c(y)$ is of the form
$$V_c(y) = \theta S(y) = \theta \sum_{l \sim i} \delta_{y_l = y_i}$$
where $l \sim i$ denotes a neighbourhood structure.
In most realistic settings, the summation
$$Z_\theta = \sum_{x \in \mathcal{X}} \exp\{\theta^{\mathrm T} S(x)\}$$
involves too many terms to be manageable and numerical approximations cannot always be trusted
[Cucala, Marin, CPR & Titterington, 2009]
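A tiny sketch of the statistic (mine, assuming a 4-neighbour lattice as the neighbourhood structure): $S(x) = \sum_{l\sim i} \delta_{x_l = x_i}$ is cheap to evaluate for any configuration, even though summing $\exp\{\theta S(x)\}$ over all $K^n$ configurations to get $Z_\theta$ is not.

```python
import numpy as np

def potts_stat(x):
    """S(x): number of identically coloured neighbour pairs on a 4-neighbour grid."""
    return int((x[:, :-1] == x[:, 1:]).sum() + (x[:-1, :] == x[1:, :]).sum())

rng = np.random.default_rng(7)
x = rng.integers(0, 3, size=(16, 16))     # random 3-colour configuration on a 16x16 grid
print(potts_stat(x))                      # about 1/3 of the 480 neighbour pairs on average
```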
27. Approximative Bayesian Computation (ABC) Methods
ABC for model choice in GRFs
Model choice via ABC
Bayesian Model Choice
Comparing a model with potential $S_0$ taking values in $\mathbb{R}^{p_0}$ versus a model with potential $S_1$ taking values in $\mathbb{R}^{p_1}$, through the Bayes factor corresponding to the priors $\pi_0$ and $\pi_1$ on each parameter space
$$B_{m_0/m_1}(x) = \frac{\displaystyle\int \exp\{\theta_0^{\mathrm T} S_0(x)\}\big/Z_{\theta_0,0}\;\pi_0(d\theta_0)}{\displaystyle\int \exp\{\theta_1^{\mathrm T} S_1(x)\}\big/Z_{\theta_1,1}\;\pi_1(d\theta_1)}$$
Use of Jeffreys' scale to select the most appropriate model
28. Approximative Bayesian Computation (ABC) Methods
ABC for model choice in GRFs
Model choice via ABC
Neighbourhood relations
Choice to be made between $M$ neighbourhood relations
$$i \overset{m}{\sim} i' \qquad (0 \le m \le M - 1)$$
with
$$S_m(x) = \sum_{i \overset{m}{\sim} i'} \mathbb{I}_{\{x_i = x_{i'}\}}$$
driven by the posterior probabilities of the models.
29. Approximative Bayesian Computation (ABC) Methods
ABC for model choice in GRFs
Model choice via ABC
Model index
Formalisation via a model index $\mathcal{M}$ that appears as a new parameter, with prior distribution $\pi(\mathcal{M} = m)$ and
$$\pi(\theta \,|\, \mathcal{M} = m) = \pi_m(\theta_m)$$
Computational target:
$$\mathbb{P}(\mathcal{M} = m \,|\, x) \propto \int_{\Theta_m} f_m(x|\theta_m)\,\pi_m(\theta_m)\,d\theta_m\;\pi(\mathcal{M} = m)\,,$$
30. Approximative Bayesian Computation (ABC) Methods
ABC for model choice in GRFs
Model choice via ABC
Sufficient statistics
By definition, if $S(x)$ is a sufficient statistic for the joint parameters $(\mathcal{M}, \theta_0, \ldots, \theta_{M-1})$,
$$\mathbb{P}(\mathcal{M} = m \,|\, x) = \mathbb{P}(\mathcal{M} = m \,|\, S(x))\,.$$
For each model $m$, its own sufficient statistic $S_m(\cdot)$, and $S(\cdot) = (S_0(\cdot), \ldots, S_{M-1}(\cdot))$ is also sufficient.
For Gibbs random fields,
$$x \,|\, \mathcal{M} = m \sim f_m(x|\theta_m) = f_m^1\big(x|S(x)\big)\, f_m^2\big(S(x)|\theta_m\big) = \frac{1}{n(S(x))}\, f_m^2\big(S(x)|\theta_m\big)$$
where
$$n(S(x)) = \sharp\,\{\tilde x \in \mathcal{X} : S(\tilde x) = S(x)\}$$
© $S(x)$ is therefore also sufficient for the joint parameters
[Specific to Gibbs random fields!]
31. Approximative Bayesian Computation (ABC) Methods
ABC for model choice in GRFs
Model choice via ABC
ABC model choice Algorithm
ABC-MC
Generate $m^*$ from the prior $\pi(\mathcal{M} = m)$.
Generate $\theta^*_{m^*}$ from the prior $\pi_{m^*}(\cdot)$.
Generate $x^*$ from the model $f_{m^*}(\cdot\,|\,\theta^*_{m^*})$.
Compute the distance $\rho\big(S(x^0), S(x^*)\big)$.
Accept $(\theta^*_{m^*}, m^*)$ if $\rho\big(S(x^0), S(x^*)\big) < \epsilon$.
Note: when $\epsilon = 0$ the algorithm is exact
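A generic sketch of ABC-MC (my own structuring; the function arguments are illustrative, not from the slides): frequencies of the accepted model indices then estimate $\mathbb{P}(\mathcal{M} = m \,|\, x^0)$, and ratios of those frequencies, divided by the prior odds, estimate Bayes factors. The toy example below puts this to use.

```python
import numpy as np

rng = np.random.default_rng(8)

def abc_model_choice(model_priors, sample_prior, simulate, summary, s_obs, eps, n_sims=20_000):
    """ABC-MC sketch: returns the accepted model indices and parameters."""
    kept_m, kept_theta = [], []
    for _ in range(n_sims):
        m = rng.choice(len(model_priors), p=model_priors)    # m* ~ pi(M = m)
        theta = sample_prior(m)                              # theta* ~ pi_m(.)
        x = simulate(m, theta)                               # x* ~ f_m(. | theta*)
        if np.linalg.norm(summary(x) - s_obs) < eps:         # rho(S(x0), S(x*)) < eps
            kept_m.append(m)
            kept_theta.append(theta)
    return np.array(kept_m), np.array(kept_theta)
```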
33. Approximative Bayesian Computation (ABC) Methods
ABC for model choice in GRFs
Illustrations
Toy example
iid Bernoulli model versus two-state first-order Markov chain, i.e.
$$f_0(x|\theta_0) = \exp\Big\{\theta_0 \sum_{i=1}^{n} \mathbb{I}_{\{x_i = 1\}}\Big\} \Big/ \{1 + \exp(\theta_0)\}^{n}\,,$$
versus
$$f_1(x|\theta_1) = \frac{1}{2} \exp\Big\{\theta_1 \sum_{i=2}^{n} \mathbb{I}_{\{x_i = x_{i-1}\}}\Big\} \Big/ \{1 + \exp(\theta_1)\}^{n-1}\,,$$
with priors $\theta_0 \sim \mathcal{U}(-5, 5)$ and $\theta_1 \sim \mathcal{U}(0, 6)$ (inspired by "phase transition" boundaries).
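Reusing the abc_model_choice sketch above, one possible encoding of this toy example (the simulators, the joint summary $S(x) = (S_0(x), S_1(x))$, ε, and the pseudo-observed data are my own illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(9)
n = 100

def sample_prior(m):
    # theta0 ~ U(-5, 5) for the Bernoulli model, theta1 ~ U(0, 6) for the Markov chain
    return rng.uniform(-5.0, 5.0) if m == 0 else rng.uniform(0.0, 6.0)

def simulate(m, theta):
    p = np.exp(theta) / (1.0 + np.exp(theta))
    if m == 0:                                   # iid Bernoulli model
        return rng.binomial(1, p, size=n)
    x = np.empty(n, dtype=int)                   # two-state first-order Markov chain
    x[0] = rng.integers(0, 2)                    # first state uniform (the 1/2 factor in f1)
    for i in range(1, n):
        stay = rng.uniform() < p                 # stay with probability e^theta / (1 + e^theta)
        x[i] = x[i - 1] if stay else 1 - x[i - 1]
    return x

# Joint sufficient statistic S(x) = (S0(x), S1(x))
summary = lambda x: np.array([np.sum(x == 1), np.sum(x[1:] == x[:-1])])

x0 = simulate(1, 2.0)                            # pseudo-observed data from the Markov model
m_kept, _ = abc_model_choice([0.5, 0.5], sample_prior, simulate, summary, summary(x0), eps=5.0)
print("P(M = Markov | x0) ~", np.mean(m_kept == 1))
```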
34. Approximative Bayesian Computation (ABC) Methods
ABC for model choice in GRFs
Illustrations
Toy example (2)
[Figure: two scatterplots of $\widehat{BF}_{01}$ against $BF_{01}$ (log scale).]
(left) Comparison of the true $BF_{m_0/m_1}(x^0)$ with $\widehat{BF}_{m_0/m_1}(x^0)$ (in logs) over 2,000 simulations and $4 \cdot 10^6$ proposals from the prior. (right) Same when using a tolerance $\epsilon$ corresponding to the 1% quantile on the distances.
35. Approximative Bayesian Computation (ABC) Methods
ABC for model choice in GRFs
Illustrations
Protein folding
Superposition of the native structure (grey) with the ST1 structure (red), the ST2 structure (orange), the ST3 structure (green), and the DT structure (blue).
36. Approximative Bayesian Computation (ABC) Methods
ABC for model choice in GRFs
Illustrations
Protein folding (2)
                % seq. Id.   TM-score   FROST score
1i5nA  (ST1)        32         0.86        75.3
1ls1A1 (ST2)         5         0.42         8.9
1jr8A  (ST3)         4         0.24         8.9
1s7oA  (DT)         10         0.08         7.8
Characteristics of the dataset. % seq. Id.: percentage of identity with the query sequence. TM-score: similarity between predicted and native structure (uncertainty between 0.17 and 0.4). FROST score: quality of the alignment of the query onto the candidate structure (uncertainty between 7 and 9).
37. Approximative Bayesian Computation (ABC) Methods
ABC for model choice in GRFs
Illustrations
Protein folding (3)
                     NS/ST1   NS/ST2   NS/ST3   NS/DT
BF                     1.34     1.22     2.42    2.76
P(M = NS | x0)        0.573    0.551    0.708   0.734
Estimates of the Bayes factors between model NS and models ST1, ST2, ST3, and DT, and corresponding posterior probabilities of model NS, based on an ABC-MC algorithm using $1.2 \times 10^6$ simulations and a tolerance $\epsilon$ equal to the 1% quantile of the distances.