Nested Sampling for General Bayesian Computation
Presented by WU Changye
12 February 2015
Outline
– Nested Sampling
– Posterior Simulation
– Nested Sampling Termination and Size of N
– Numerical Examples
– Conclusion
Introduction
In the Bayesian paradigm, the parameter θ follows the prior distribution π and, given θ, the observations y follow the distribution L(y|θ). The posterior distribution f(θ|y), i.e. the distribution of θ given the observations y, then has the form

f(θ|y) = L(y|θ)π(θ) / ∫_Θ L(y|θ)π(θ) dθ

The objective of nested sampling is to compute the 'evidence':

Z = ∫_Θ L(y|θ)π(θ) dθ
Since θ is a random variable with distribution π,

Z = E_π(L(θ))

For simplicity, let L(θ) denote the likelihood L(y|θ). The cumulative distribution function of L(θ) is

F(λ) = ∫_{L(θ)<λ} π(θ) dθ

Define the measure µ on R induced by the likelihood function and the prior as follows:

µ(A) = P_π(L(θ) ∈ A)
Lemma 1: E_π(L(θ)) = E_µ(X), where X ∼ µ.
Proof: Let g be the indicator function I_A of a measurable set A in R. Then

E_π(g(L(θ))) = E_π(I_A(L(θ))) = ∫_{L(θ)∈A} π(θ) dθ

Moreover, µ(dx) = ∫_Θ δ_{L(θ)}(dx) π(θ) dθ, so

E_µ(g(X)) = ∫_R I_A(x) µ(dx) = ∫_Θ [ ∫_R I_A(x) δ_{L(θ)}(dx) ] π(θ) dθ

Therefore,

E_µ(g(X)) = E_π(I_A(L(θ))) = E_π(g(L(θ)))
In the general case, let {g_n} be an increasing sequence of step functions converging to the identity function Id; then {g_n ∘ L} is an increasing sequence of step functions converging to L, and the desired conclusion follows by taking limits (monotone convergence).
Lemma 2: If X is a positive-valued random variable with p.d.f. f and c.d.f. F, then

∫_0^∞ (1 − F(x)) dx = ∫_0^∞ x f(x) dx = E(X)

Proof:

∫_0^∞ (1 − F(x)) dx = ∫_0^∞ (1 − P(X < x)) dx
= ∫_0^∞ P(X ≥ x) dx
= ∫_0^∞ ∫_x^∞ f(y) dy dx
= ∫_0^∞ f(y) ∫_0^y dx dy   (by Fubini)
= ∫_0^∞ y f(y) dy = E(X)
According to Lemmas 1 and 2,

Z = E_µ(X) = ∫_0^∞ x dF(x) = ∫_0^∞ (1 − F(x)) dx

Let ϕ^{-1}(x) = 1 − F(x) = P_π{θ : L(θ) > x}, with inverse ϕ. Then

Z = ∫_0^∞ ϕ^{-1}(x) dx = ∫_0^1 ϕ(x) dx

Therefore, the evidence is represented as a one-dimensional integral.
In order to compute the integral

J = ∫_0^1 ϕ(x) dx

there are three sampling-based methods.
1) Importance sampling: for i = 1, …, n, draw U_i ∼ U[0,1] and set

Ĵ_1 = (1/n) ∑_{i=1}^n ϕ(U_i)

2) Riemann approximation: for i = 1, …, n, draw U_i ∼ U[0,1]; let U_(1) ≤ … ≤ U_(n) denote the order statistics of (U_1, …, U_n) and set

Ĵ_2 = ∑_{i=1}^{n−1} ϕ(U_(i)) (U_(i+1) − U_(i))

3) A recursive method (all three estimators are sketched in code below): set x_0 = 1;
step 1: for i = 1, …, N, draw U_i^1 ∼ U[0,1] and set x_1 = max{U_1^1, …, U_N^1};
step 2: for i = 1, …, N, draw U_i^2 ∼ U[0,x_1] and set x_2 = max{U_1^2, …, U_N^2};
……
step n: for i = 1, …, N, draw U_i^n ∼ U[0,x_{n−1}] and set x_n = max{U_1^n, …, U_N^n};

Ĵ_3 = ∑_{i=1}^n ϕ(x_i) (x_{i−1} − x_i)
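As a concrete comparison, here is a minimal Python sketch of the three estimators, assuming a hypothetical decreasing test function ϕ(x) = (1 − x)^9 (not from the slides), whose integral over [0, 1] is exactly 0.1:

```python
import numpy as np

rng = np.random.default_rng(0)

def phi(x):
    # Hypothetical decreasing test function; its integral over [0, 1] is 0.1.
    return (1.0 - x) ** 9

def j1_importance(n=1000):
    # 1) Importance sampling: average phi over n uniform draws.
    u = rng.uniform(size=n)
    return phi(u).mean()

def j2_riemann(n=1000):
    # 2) Riemann approximation over the sorted uniforms.
    u = np.sort(rng.uniform(size=n))
    return np.sum(phi(u[:-1]) * np.diff(u))

def j3_recursive(n=200, N=20):
    # 3) Recursive method: x_k is the maximum of N uniforms on [0, x_{k-1}].
    x, total = 1.0, 0.0
    for _ in range(n):
        x_new = rng.uniform(0.0, x, size=N).max()
        total += phi(x_new) * (x - x_new)
        x = x_new
    return total

print(j1_importance(), j2_riemann(), j3_recursive())  # all should be near 0.1
```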
Nested sampling adopts the third method: ϕ is a decreasing function and in many cases it decreases rapidly, so the geometrically shrinking x_i concentrate the evaluation points where ϕ varies most.
Figure: Graph of ϕ(x) and the trace of (x_i, ϕ(x_i))
First, we consider the distributions of x_1, …, x_n. For u ∈ [0, 1],

P(x_1 < u) = P(U_1^1 < u, …, U_N^1 < u) = ∏_{i=1}^N P(U_i^1 < u) = u^N

As a result, the density function of x_1 is

f(x_1) = N x_1^{N−1}

By the same method, we have

f(x_k | x_{k−1}) = (N / x_{k−1}) (x_k / x_{k−1})^{N−1}
Write t_k = x_k / x_{k−1}. Then

P(t_k ≤ t) = ∫ P(x_k ≤ t x | x_{k−1} = x) f_{x_{k−1}}(x) dx
= ∫ [ ∫_0^{tx} f_{x_k|x_{k−1}}(y|x) dy ] f_{x_{k−1}}(x) dx
= ∫ [ ∫_0^{tx} (N/x) (y/x)^{N−1} dy ] f_{x_{k−1}}(x) dx
= ∫ t^N f_{x_{k−1}}(x) dx = t^N

Besides,

P(t_k ≤ t | x_{k−1} = x) = P(x_k ≤ t x | x_{k−1} = x) = t^N

As a result, we have t_k ⊥ x_{k−1}: the shrinkage factor is independent of the current level. (A simulation check follows below.)
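A quick Monte Carlo sketch of these two facts, with arbitrary starting values standing in for x_{k−1}: t_k should satisfy P(t_k ≤ t) = t^N and be uncorrelated with x_{k−1}.

```python
import numpy as np

rng = np.random.default_rng(1)
N, reps = 20, 100_000

# One shrinkage step from arbitrary positive starting points x_{k-1}.
x_prev = rng.uniform(0.5, 1.0, size=reps)
x_next = rng.uniform(0.0, x_prev[:, None], size=(reps, N)).max(axis=1)
t = x_next / x_prev

print(np.mean(t <= 0.9), 0.9 ** N)     # empirical vs. theoretical cdf at t = 0.9
print(np.corrcoef(t, x_prev)[0, 1])    # ~0: t_k is independent of x_{k-1}
```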
Moreover, a point estimate for x_k can be written entirely in terms of point estimates for the t_k:

x_k = (x_k/x_{k−1}) × (x_{k−1}/x_{k−2}) × … × (x_1/x_0) × x_0 = t_k · t_{k−1} ⋯ t_1 · x_0 = ∏_{i=1}^k t_i · x_0

More appropriate to the large ranges common to many problems, this becomes, on the log scale,

log x_k = log( ∏_{i=1}^k t_i · x_0 ) = ∑_{i=1}^k log t_i + log x_0

where the logarithmic shrinkage is distributed as

f(log t) = N e^{N log t},   log t ≤ 0

(the density of t itself being N t^{N−1} = N e^{(N−1) log t}), with mean and variance

E(log t) = −1/N,   V(log t) = 1/N²
Taking the mean as the point estimate for each log t_i finally gives

log(x_k / x_0) = −k/N ± √k / N

Parameterizing x_k in terms of the shrinkage proves immediately advantageous: because the log t_i are independent, the errors in the point estimates tend to cancel and the estimates for the x_k grow increasingly more accurate with k. This yields the deterministic assignment

x_k = exp(−k/N)
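These moments are easy to verify by simulation: since P(log t ≤ s) = e^{Ns} for s ≤ 0, one has −log t ∼ Exp(N). The sketch below uses this fact to check both E(log t) = −1/N and the −k/N ± √k/N behaviour of log x_k.

```python
import numpy as np

rng = np.random.default_rng(2)
N, k, reps = 20, 200, 50_000

# -log t is exponential with rate N, so log t has mean -1/N and variance 1/N^2.
log_t = -rng.exponential(scale=1.0 / N, size=(reps, k))
log_xk = log_t.sum(axis=1)              # log x_k = sum_i log t_i  (x_0 = 1)

print(log_t.mean(), -1.0 / N)           # E(log t) = -1/N
print(log_xk.mean(), -k / N)            # mean of log x_k is -k/N
print(log_xk.std(), np.sqrt(k) / N)     # spread is sqrt(k)/N
```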
Next, we consider the distribution of ϕ(X), where X ∼ U[0, 1]. Consider the random variable X = ϕ^{-1}(L(θ)), where θ ∼ π, and notice that

ϕ^{-1} : [0, L_max] → [0, 1],   λ ↦ P(L(θ) > λ)

For u ∈ [0, 1],

P(X < u) = P(ϕ^{-1}(L(θ)) < u) = P(L(θ) > ϕ(u)) = ϕ^{-1}(ϕ(u)) = u

This means that ϕ^{-1}(L(θ)) follows U[0, 1] and ϕ(X) ∼ L(θ).
Consider now the truncated distribution

π̃(θ) ∝ π(θ) if L(θ) > L_0,   0 otherwise

Let X_0 = ϕ^{-1}(L_0) and X = ϕ^{-1}(L(θ)), where θ ∼ π̃. For u ∈ [0, X_0],

P(X < u) = P(ϕ^{-1}(L(θ)) < u | L(θ) > L_0)
= P(L(θ) > ϕ(u)) / P(L(θ) > L_0)
= ϕ^{-1}(ϕ(u)) / X_0 = u / X_0

so X ∼ U[0, X_0]. As a result, ϕ(X) ∼ L(θ), where X ∼ U[0, X_0] and θ ∼ π̃.
Algorithm
The algorithm based on the method discussed in the previous section is as follows (a code sketch is given below the list):
– Iteration 1: sample N points θ_{1,i} independently from the prior π(θ), determine θ_1 = arg min_{1≤i≤N} L(θ_{1,i}), and set ϕ_1 = L(θ_1).
– Iteration 2: obtain the N current values θ_{2,i} by keeping the θ_{1,i}'s, except for θ_1, which is replaced by a draw from the prior distribution π conditional upon L(θ) ≥ ϕ_1; then select θ_2 = arg min_{1≤i≤N} L(θ_{2,i}) and set ϕ_2 = L(θ_2).
– Iterate the above step until a given stopping rule is satisfied, for instance when observing very small changes in the approximation Ẑ or when reaching the maximal value of L(θ) when it is known.
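A minimal sketch of the loop, assuming user-supplied callables `sample_prior`, `sample_constrained` (for instance the MCMC scheme discussed later) and `likelihood`; these names are placeholders, not part of the slides. It uses the deterministic assignment x_k = exp(−k/N) and accumulates the estimator Ẑ given on the next slide.

```python
import numpy as np

def nested_sampling(sample_prior, sample_constrained, likelihood, N=100, n_iter=1000):
    # Initial live set: N independent prior draws and their likelihoods.
    live = [sample_prior() for _ in range(N)]
    live_L = np.array([likelihood(th) for th in live])
    Z_hat, x_prev = 0.0, 1.0
    for k in range(1, n_iter + 1):
        worst = int(np.argmin(live_L))        # point with the smallest likelihood
        phi_k = live_L[worst]
        x_k = np.exp(-k / N)                  # deterministic shrinkage estimate
        Z_hat += phi_k * (x_prev - x_k)       # accumulate phi_k (x_{k-1} - x_k)
        # Replace the worst point by a prior draw constrained to L(theta) >= phi_k.
        live[worst] = sample_constrained(phi_k)
        live_L[worst] = likelihood(live[worst])
        x_prev = x_k
    return Z_hat
```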
The resulting evidence estimator, after J iterations, is

Ẑ = ∑_{i=1}^J ϕ_i (x_{i−1} − x_i)
By-product of Nested Sampling
Skilling indicates that nested sampling provides simulations from the posterior distribution at no extra cost: "the existing sequence of points θ_1, θ_2, θ_3, … already gives a set of posterior representatives, provided the i'th is assigned the appropriate importance ω_i L_i". For the posterior expectation

E[f(θ)|y] = ∫_Θ π(θ)L(θ)f(θ) dθ / ∫_Θ π(θ)L(θ) dθ

we can use a single run of nested sampling to obtain estimators of both the numerator and the denominator, the latter being the evidence Z. The estimator of the numerator is

∑_{i=1}^j (x_{i−1} − x_i) ϕ_i f(θ_i)    (1)
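A sketch of this by-product, reusing the discarded points θ_i and their likelihoods ϕ_i from a run of the algorithm above (argument names are illustrative):

```python
import numpy as np

def posterior_expectation(phis, dead_points, f, N):
    # x_i = exp(-i/N) with x_0 = 1, so the weights are (x_{i-1} - x_i) * phi_i.
    phis = np.asarray(phis, dtype=float)
    x = np.exp(-np.arange(1, len(phis) + 1) / N)
    widths = np.concatenate(([1.0], x[:-1])) - x
    w = widths * phis                            # unnormalized posterior weights
    fv = np.array([f(th) for th in dead_points])
    return np.sum(w * fv) / np.sum(w)            # estimator (1) divided by Z_hat
```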
Lemma 3 (N. Chopin & C.P. Robert): Let f̄(l) = E_π{f(θ) | L(θ) = l} for l > 0. Then, if f̄ is absolutely continuous,

∫_0^1 ϕ(x) f̄(ϕ(x)) dx = ∫ π(θ)L(θ)f(θ) dθ

Proof: Let ψ : x ↦ x f̄(x). Then

∫ π(θ)L(θ)f(θ) dθ = E_π[ψ{L(θ)}]
= ∫_0^{+∞} P_π(ψ{L(θ)} > l) dl
= ∫_0^{+∞} ϕ^{-1}(ψ^{-1}(l)) dl
= ∫_0^1 ψ(ϕ(x)) dx
Termination
The author suggests the rule

max(L_1, …, L_N) · X_j < f · Ẑ_j ⟹ termination

where f is some fraction: stop once the largest possible remaining contribution of the live points is a negligible fraction of the current estimate.
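As a small check that could sit inside the loop sketched earlier (f = 1e-3 is an arbitrary illustrative choice):

```python
def should_stop(live_L, x_j, Z_hat, f=1e-3):
    # Terminate when max(L_1, ..., L_N) * X_j < f * Z_hat.
    return max(live_L) * x_j < f * Z_hat
```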
Choice of N
The larger N is, the smaller the variability of the approximation, at the cost of more likelihood evaluations.
How to sample N points from the constrained parameter space
Use an MCMC method that constructs a Markov chain whose invariant distribution is the truncated distribution.
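For instance, a random-walk Metropolis sketch targeting the prior restricted to {θ : L(θ) > L_0}, started from a point already satisfying the constraint (e.g. a surviving live point); names and tuning constants are illustrative, not prescribed by the slides:

```python
import numpy as np

rng = np.random.default_rng(3)

def sample_constrained(theta0, log_prior, log_lik, log_L0, n_steps=50, scale=0.1):
    # theta0 must already satisfy log_lik(theta0) > log_L0.
    theta = np.asarray(theta0, dtype=float)
    lp = log_prior(theta)
    for _ in range(n_steps):
        prop = theta + scale * rng.standard_normal(theta.shape)
        if log_lik(prop) > log_L0:                            # hard likelihood constraint
            if np.log(rng.uniform()) < log_prior(prop) - lp:  # prior MH ratio
                theta, lp = prop, log_prior(prop)
    return theta
```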
A decentred Gaussian example
The prior is

π(θ) = ∏_{k=1}^d (1/√(2π)) exp( −(θ^(k))²/2 )

and the likelihood is

L(y|θ) = ∏_{k=1}^d (1/√(2π)) exp( −(y_k − θ^(k))²/2 )

In this example, we can calculate the evidence analytically:

Z = ∫_{R^d} L(θ)π(θ) dθ = exp( −∑_{k=1}^d y_k²/4 ) / (2^d π^{d/2})
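A quick numerical sanity check of this closed form against naive Monte Carlo from the prior (using Z = E_π(L(θ))); the high variance of the naive estimate for decentred y is exactly what the box plots on the following slides illustrate:

```python
import numpy as np

rng = np.random.default_rng(4)
y = np.array([3.0, 3.0, 3.0, 3.0, 3.0])    # the d = 5 case from the slides
d = y.size

Z_exact = np.exp(-np.sum(y**2) / 4) / (2**d * np.pi ** (d / 2))

# Naive Monte Carlo: average the likelihood over prior draws theta ~ N(0, I_d).
theta = rng.standard_normal((1_000_000, d))
L = np.exp(-0.5 * np.sum((y - theta) ** 2, axis=1)) / (2 * np.pi) ** (d / 2)
print(Z_exact, L.mean())                    # close on average, but noisy
```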
Figure: Graph of ϕ(x) and the trace of (x_i, ϕ(x_i)) with d = 1 and y = 10.
Figure: The prior distribution and the likelihood with d = 1 and y = 10.
Figure: box-plot of log Ẑ − log Z with d = 1 and y = 10, for nested sampling and Monte Carlo.
Figure: box-plot of log Ẑ − log Z with d = 5 and y = (3, 3, 3, 3, 3).
A Probit Model
We consider the arsenic dataset and a probit model studied in Chapter 5 of Gelman & Hill (2006). The observations are independent Bernoulli variables y_i such that P(y_i = 1 | x_i) = Φ(x_i^T θ), where x_i is a vector of d covariates, θ is a parameter vector of size d, and Φ denotes the standard normal distribution function. In this particular example, d = 7.
The prior is

θ ∼ N(0, 10² I_d)

and the likelihood is

L(θ) = ∏_{i=1}^n Φ(x_i^T θ)^{y_i} (1 − Φ(x_i^T θ))^{1−y_i}
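A generic sketch of the corresponding log-likelihood (the arsenic data itself is not reproduced here; `X` and `y` are placeholders for the design matrix and responses):

```python
import numpy as np
from scipy.stats import norm

def probit_log_likelihood(theta, X, y):
    # log L(theta) = sum_i [ y_i log Phi(x_i^T theta) + (1 - y_i) log Phi(-x_i^T theta) ]
    eta = X @ theta
    # norm.logcdf keeps the tails numerically stable.
    return np.sum(y * norm.logcdf(eta) + (1 - y) * norm.logcdf(-eta))
```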
Figure: box-plot of log Ẑ with N = 20 for HMC and random-walk MCMC. The blue line marks the true value of log Z (Chib's method).
Posterior Samples
We use the Gaussian example to illustrate this result. Let f(θ) = exp(−3θ + 9d/2).
Figure: box-plots of the errors log Ẑ − log Z and log Ê(f) − log E(f)
Conclusion
– Nested sampling reverses the accepted approach to Bayesian
computation by putting the evidence first.
– Nested sampling samples more sparsely from the prior in regions
where the likelihood is low and more densely where the likelihood
is high, resulting in greater efficiency than a sampler that draws
directly from the prior.
– The procedure runs with an evolving collection of N points,
where N can be chosen small for speed or large for accuracy.
– Nested sampling always reduces a multidimensional integral to
the integral of a one-dimensional monotonic function, no matter
how many dimensions θ occupies, and no matter how strange the
shape of the likelihood function L(θ) is.
Problems
– How to generate N independent points in the constrained parameter space is an important problem; techniques to do so effectively and efficiently may vary from problem to problem.
– Termination is another practical issue.
Thank you!
Represented by WU Changye Nested Sampling for General Bayesian Computation

More Related Content

What's hot

Multiple estimators for Monte Carlo approximations
Multiple estimators for Monte Carlo approximationsMultiple estimators for Monte Carlo approximations
Multiple estimators for Monte Carlo approximationsChristian Robert
 
Approximate Bayesian Computation with Quasi-Likelihoods
Approximate Bayesian Computation with Quasi-LikelihoodsApproximate Bayesian Computation with Quasi-Likelihoods
Approximate Bayesian Computation with Quasi-LikelihoodsStefano Cabras
 
ABC based on Wasserstein distances
ABC based on Wasserstein distancesABC based on Wasserstein distances
ABC based on Wasserstein distancesChristian Robert
 
Bayesian hybrid variable selection under generalized linear models
Bayesian hybrid variable selection under generalized linear modelsBayesian hybrid variable selection under generalized linear models
Bayesian hybrid variable selection under generalized linear modelsCaleb (Shiqiang) Jin
 
Maximum likelihood estimation of regularisation parameters in inverse problem...
Maximum likelihood estimation of regularisation parameters in inverse problem...Maximum likelihood estimation of regularisation parameters in inverse problem...
Maximum likelihood estimation of regularisation parameters in inverse problem...Valentin De Bortoli
 
NCE, GANs & VAEs (and maybe BAC)
NCE, GANs & VAEs (and maybe BAC)NCE, GANs & VAEs (and maybe BAC)
NCE, GANs & VAEs (and maybe BAC)Christian Robert
 
Mark Girolami's Read Paper 2010
Mark Girolami's Read Paper 2010Mark Girolami's Read Paper 2010
Mark Girolami's Read Paper 2010Christian Robert
 
Inference in generative models using the Wasserstein distance [[INI]
Inference in generative models using the Wasserstein distance [[INI]Inference in generative models using the Wasserstein distance [[INI]
Inference in generative models using the Wasserstein distance [[INI]Christian Robert
 
random forests for ABC model choice and parameter estimation
random forests for ABC model choice and parameter estimationrandom forests for ABC model choice and parameter estimation
random forests for ABC model choice and parameter estimationChristian Robert
 
Coordinate sampler : A non-reversible Gibbs-like sampler
Coordinate sampler : A non-reversible Gibbs-like samplerCoordinate sampler : A non-reversible Gibbs-like sampler
Coordinate sampler : A non-reversible Gibbs-like samplerChristian Robert
 
ABC convergence under well- and mis-specified models
ABC convergence under well- and mis-specified modelsABC convergence under well- and mis-specified models
ABC convergence under well- and mis-specified modelsChristian Robert
 
Unbiased Bayes for Big Data
Unbiased Bayes for Big DataUnbiased Bayes for Big Data
Unbiased Bayes for Big DataChristian Robert
 
Convergence of ABC methods
Convergence of ABC methodsConvergence of ABC methods
Convergence of ABC methodsChristian Robert
 
Poster for Bayesian Statistics in the Big Data Era conference
Poster for Bayesian Statistics in the Big Data Era conferencePoster for Bayesian Statistics in the Big Data Era conference
Poster for Bayesian Statistics in the Big Data Era conferenceChristian Robert
 
Approximating Bayes Factors
Approximating Bayes FactorsApproximating Bayes Factors
Approximating Bayes FactorsChristian Robert
 

What's hot (20)

Multiple estimators for Monte Carlo approximations
Multiple estimators for Monte Carlo approximationsMultiple estimators for Monte Carlo approximations
Multiple estimators for Monte Carlo approximations
 
QMC Program: Trends and Advances in Monte Carlo Sampling Algorithms Workshop,...
QMC Program: Trends and Advances in Monte Carlo Sampling Algorithms Workshop,...QMC Program: Trends and Advances in Monte Carlo Sampling Algorithms Workshop,...
QMC Program: Trends and Advances in Monte Carlo Sampling Algorithms Workshop,...
 
QMC Program: Trends and Advances in Monte Carlo Sampling Algorithms Workshop,...
QMC Program: Trends and Advances in Monte Carlo Sampling Algorithms Workshop,...QMC Program: Trends and Advances in Monte Carlo Sampling Algorithms Workshop,...
QMC Program: Trends and Advances in Monte Carlo Sampling Algorithms Workshop,...
 
Approximate Bayesian Computation with Quasi-Likelihoods
Approximate Bayesian Computation with Quasi-LikelihoodsApproximate Bayesian Computation with Quasi-Likelihoods
Approximate Bayesian Computation with Quasi-Likelihoods
 
ABC based on Wasserstein distances
ABC based on Wasserstein distancesABC based on Wasserstein distances
ABC based on Wasserstein distances
 
Bayesian hybrid variable selection under generalized linear models
Bayesian hybrid variable selection under generalized linear modelsBayesian hybrid variable selection under generalized linear models
Bayesian hybrid variable selection under generalized linear models
 
Maximum likelihood estimation of regularisation parameters in inverse problem...
Maximum likelihood estimation of regularisation parameters in inverse problem...Maximum likelihood estimation of regularisation parameters in inverse problem...
Maximum likelihood estimation of regularisation parameters in inverse problem...
 
QMC Program: Trends and Advances in Monte Carlo Sampling Algorithms Workshop,...
QMC Program: Trends and Advances in Monte Carlo Sampling Algorithms Workshop,...QMC Program: Trends and Advances in Monte Carlo Sampling Algorithms Workshop,...
QMC Program: Trends and Advances in Monte Carlo Sampling Algorithms Workshop,...
 
NCE, GANs & VAEs (and maybe BAC)
NCE, GANs & VAEs (and maybe BAC)NCE, GANs & VAEs (and maybe BAC)
NCE, GANs & VAEs (and maybe BAC)
 
Mark Girolami's Read Paper 2010
Mark Girolami's Read Paper 2010Mark Girolami's Read Paper 2010
Mark Girolami's Read Paper 2010
 
Inference in generative models using the Wasserstein distance [[INI]
Inference in generative models using the Wasserstein distance [[INI]Inference in generative models using the Wasserstein distance [[INI]
Inference in generative models using the Wasserstein distance [[INI]
 
random forests for ABC model choice and parameter estimation
random forests for ABC model choice and parameter estimationrandom forests for ABC model choice and parameter estimation
random forests for ABC model choice and parameter estimation
 
QMC Program: Trends and Advances in Monte Carlo Sampling Algorithms Workshop,...
QMC Program: Trends and Advances in Monte Carlo Sampling Algorithms Workshop,...QMC Program: Trends and Advances in Monte Carlo Sampling Algorithms Workshop,...
QMC Program: Trends and Advances in Monte Carlo Sampling Algorithms Workshop,...
 
Coordinate sampler : A non-reversible Gibbs-like sampler
Coordinate sampler : A non-reversible Gibbs-like samplerCoordinate sampler : A non-reversible Gibbs-like sampler
Coordinate sampler : A non-reversible Gibbs-like sampler
 
ABC convergence under well- and mis-specified models
ABC convergence under well- and mis-specified modelsABC convergence under well- and mis-specified models
ABC convergence under well- and mis-specified models
 
Unbiased Bayes for Big Data
Unbiased Bayes for Big DataUnbiased Bayes for Big Data
Unbiased Bayes for Big Data
 
Convergence of ABC methods
Convergence of ABC methodsConvergence of ABC methods
Convergence of ABC methods
 
the ABC of ABC
the ABC of ABCthe ABC of ABC
the ABC of ABC
 
Poster for Bayesian Statistics in the Big Data Era conference
Poster for Bayesian Statistics in the Big Data Era conferencePoster for Bayesian Statistics in the Big Data Era conference
Poster for Bayesian Statistics in the Big Data Era conference
 
Approximating Bayes Factors
Approximating Bayes FactorsApproximating Bayes Factors
Approximating Bayes Factors
 

Viewers also liked

Species sampling models in Bayesian Nonparametrics
Species sampling models in Bayesian NonparametricsSpecies sampling models in Bayesian Nonparametrics
Species sampling models in Bayesian NonparametricsJulyan Arbel
 
Asymptotics for discrete random measures
Asymptotics for discrete random measuresAsymptotics for discrete random measures
Asymptotics for discrete random measuresJulyan Arbel
 
Bayesian Nonparametrics, Applications to biology, ecology, and marketing
Bayesian Nonparametrics, Applications to biology, ecology, and marketingBayesian Nonparametrics, Applications to biology, ecology, and marketing
Bayesian Nonparametrics, Applications to biology, ecology, and marketingJulyan Arbel
 
Presentation of Bassoum Abou on Stein's 1981 AoS paper
Presentation of Bassoum Abou on Stein's 1981 AoS paperPresentation of Bassoum Abou on Stein's 1981 AoS paper
Presentation of Bassoum Abou on Stein's 1981 AoS paperChristian Robert
 
Dependent processes in Bayesian Nonparametrics
Dependent processes in Bayesian NonparametricsDependent processes in Bayesian Nonparametrics
Dependent processes in Bayesian NonparametricsJulyan Arbel
 
A Gentle Introduction to Bayesian Nonparametrics
A Gentle Introduction to Bayesian NonparametricsA Gentle Introduction to Bayesian Nonparametrics
A Gentle Introduction to Bayesian NonparametricsJulyan Arbel
 
A Gentle Introduction to Bayesian Nonparametrics
A Gentle Introduction to Bayesian NonparametricsA Gentle Introduction to Bayesian Nonparametrics
A Gentle Introduction to Bayesian NonparametricsJulyan Arbel
 
Reading Testing a point-null hypothesis, by Jiahuan Li, Feb. 25, 2013
Reading Testing a point-null hypothesis, by Jiahuan Li, Feb. 25, 2013Reading Testing a point-null hypothesis, by Jiahuan Li, Feb. 25, 2013
Reading Testing a point-null hypothesis, by Jiahuan Li, Feb. 25, 2013Christian Robert
 
Gelfand and Smith (1990), read by
Gelfand and Smith (1990), read byGelfand and Smith (1990), read by
Gelfand and Smith (1990), read byChristian Robert
 
Reading Birnbaum's (1962) paper, by Li Chenlu
Reading Birnbaum's (1962) paper, by Li ChenluReading Birnbaum's (1962) paper, by Li Chenlu
Reading Birnbaum's (1962) paper, by Li ChenluChristian Robert
 
Testing point null hypothesis, a discussion by Amira Mziou
Testing point null hypothesis, a discussion by Amira MziouTesting point null hypothesis, a discussion by Amira Mziou
Testing point null hypothesis, a discussion by Amira MziouChristian Robert
 
Reading Efron's 1979 paper on bootstrap
Reading Efron's 1979 paper on bootstrapReading Efron's 1979 paper on bootstrap
Reading Efron's 1979 paper on bootstrapChristian Robert
 
Reading the Lasso 1996 paper by Robert Tibshirani
Reading the Lasso 1996 paper by Robert TibshiraniReading the Lasso 1996 paper by Robert Tibshirani
Reading the Lasso 1996 paper by Robert TibshiraniChristian Robert
 
Reading the Lindley-Smith 1973 paper on linear Bayes estimators
Reading the Lindley-Smith 1973 paper on linear Bayes estimatorsReading the Lindley-Smith 1973 paper on linear Bayes estimators
Reading the Lindley-Smith 1973 paper on linear Bayes estimatorsChristian Robert
 

Viewers also liked (17)

Species sampling models in Bayesian Nonparametrics
Species sampling models in Bayesian NonparametricsSpecies sampling models in Bayesian Nonparametrics
Species sampling models in Bayesian Nonparametrics
 
Asymptotics for discrete random measures
Asymptotics for discrete random measuresAsymptotics for discrete random measures
Asymptotics for discrete random measures
 
Bayesian Nonparametrics, Applications to biology, ecology, and marketing
Bayesian Nonparametrics, Applications to biology, ecology, and marketingBayesian Nonparametrics, Applications to biology, ecology, and marketing
Bayesian Nonparametrics, Applications to biology, ecology, and marketing
 
Presentation of Bassoum Abou on Stein's 1981 AoS paper
Presentation of Bassoum Abou on Stein's 1981 AoS paperPresentation of Bassoum Abou on Stein's 1981 AoS paper
Presentation of Bassoum Abou on Stein's 1981 AoS paper
 
Dependent processes in Bayesian Nonparametrics
Dependent processes in Bayesian NonparametricsDependent processes in Bayesian Nonparametrics
Dependent processes in Bayesian Nonparametrics
 
A Gentle Introduction to Bayesian Nonparametrics
A Gentle Introduction to Bayesian NonparametricsA Gentle Introduction to Bayesian Nonparametrics
A Gentle Introduction to Bayesian Nonparametrics
 
Bayesian Classics
Bayesian ClassicsBayesian Classics
Bayesian Classics
 
A Gentle Introduction to Bayesian Nonparametrics
A Gentle Introduction to Bayesian NonparametricsA Gentle Introduction to Bayesian Nonparametrics
A Gentle Introduction to Bayesian Nonparametrics
 
Reading Testing a point-null hypothesis, by Jiahuan Li, Feb. 25, 2013
Reading Testing a point-null hypothesis, by Jiahuan Li, Feb. 25, 2013Reading Testing a point-null hypothesis, by Jiahuan Li, Feb. 25, 2013
Reading Testing a point-null hypothesis, by Jiahuan Li, Feb. 25, 2013
 
Gelfand and Smith (1990), read by
Gelfand and Smith (1990), read byGelfand and Smith (1990), read by
Gelfand and Smith (1990), read by
 
Reading Birnbaum's (1962) paper, by Li Chenlu
Reading Birnbaum's (1962) paper, by Li ChenluReading Birnbaum's (1962) paper, by Li Chenlu
Reading Birnbaum's (1962) paper, by Li Chenlu
 
Reading Neyman's 1933
Reading Neyman's 1933 Reading Neyman's 1933
Reading Neyman's 1933
 
Testing point null hypothesis, a discussion by Amira Mziou
Testing point null hypothesis, a discussion by Amira MziouTesting point null hypothesis, a discussion by Amira Mziou
Testing point null hypothesis, a discussion by Amira Mziou
 
Reading Efron's 1979 paper on bootstrap
Reading Efron's 1979 paper on bootstrapReading Efron's 1979 paper on bootstrap
Reading Efron's 1979 paper on bootstrap
 
Reading the Lasso 1996 paper by Robert Tibshirani
Reading the Lasso 1996 paper by Robert TibshiraniReading the Lasso 1996 paper by Robert Tibshirani
Reading the Lasso 1996 paper by Robert Tibshirani
 
slides Céline Beji
slides Céline Bejislides Céline Beji
slides Céline Beji
 
Reading the Lindley-Smith 1973 paper on linear Bayes estimators
Reading the Lindley-Smith 1973 paper on linear Bayes estimatorsReading the Lindley-Smith 1973 paper on linear Bayes estimators
Reading the Lindley-Smith 1973 paper on linear Bayes estimators
 

Similar to Nested sampling

A numerical method to solve fractional Fredholm-Volterra integro-differential...
A numerical method to solve fractional Fredholm-Volterra integro-differential...A numerical method to solve fractional Fredholm-Volterra integro-differential...
A numerical method to solve fractional Fredholm-Volterra integro-differential...OctavianPostavaru
 
Litvinenko_RWTH_UQ_Seminar_talk.pdf
Litvinenko_RWTH_UQ_Seminar_talk.pdfLitvinenko_RWTH_UQ_Seminar_talk.pdf
Litvinenko_RWTH_UQ_Seminar_talk.pdfAlexander Litvinenko
 
Time Series Analysis
Time Series AnalysisTime Series Analysis
Time Series AnalysisAmit Ghosh
 
Automatic Bayesian method for Numerical Integration
Automatic Bayesian method for Numerical Integration Automatic Bayesian method for Numerical Integration
Automatic Bayesian method for Numerical Integration Jagadeeswaran Rathinavel
 
Appendix to MLPI Lecture 2 - Monte Carlo Methods (Basics)
Appendix to MLPI Lecture 2 - Monte Carlo Methods (Basics)Appendix to MLPI Lecture 2 - Monte Carlo Methods (Basics)
Appendix to MLPI Lecture 2 - Monte Carlo Methods (Basics)Dahua Lin
 
NONLINEAR DIFFERENCE EQUATIONS WITH SMALL PARAMETERS OF MULTIPLE SCALES
NONLINEAR DIFFERENCE EQUATIONS WITH SMALL PARAMETERS OF MULTIPLE SCALESNONLINEAR DIFFERENCE EQUATIONS WITH SMALL PARAMETERS OF MULTIPLE SCALES
NONLINEAR DIFFERENCE EQUATIONS WITH SMALL PARAMETERS OF MULTIPLE SCALESTahia ZERIZER
 
GradStudentSeminarSept30
GradStudentSeminarSept30GradStudentSeminarSept30
GradStudentSeminarSept30Ryan White
 
Testing for mixtures by seeking components
Testing for mixtures by seeking componentsTesting for mixtures by seeking components
Testing for mixtures by seeking componentsChristian Robert
 
Introducing Zap Q-Learning
Introducing Zap Q-Learning   Introducing Zap Q-Learning
Introducing Zap Q-Learning Sean Meyn
 
Welcome to International Journal of Engineering Research and Development (IJERD)
Welcome to International Journal of Engineering Research and Development (IJERD)Welcome to International Journal of Engineering Research and Development (IJERD)
Welcome to International Journal of Engineering Research and Development (IJERD)IJERD Editor
 
ABC with Wasserstein distances
ABC with Wasserstein distancesABC with Wasserstein distances
ABC with Wasserstein distancesChristian Robert
 
slides_online_optimization_david_mateos
slides_online_optimization_david_mateosslides_online_optimization_david_mateos
slides_online_optimization_david_mateosDavid Mateos
 
Mathematics and AI
Mathematics and AIMathematics and AI
Mathematics and AIMarc Lelarge
 
DissertationSlides169
DissertationSlides169DissertationSlides169
DissertationSlides169Ryan White
 
Cheatsheet probability
Cheatsheet probabilityCheatsheet probability
Cheatsheet probabilityAshish Patel
 
Numerical approach for Hamilton-Jacobi equations on a network: application to...
Numerical approach for Hamilton-Jacobi equations on a network: application to...Numerical approach for Hamilton-Jacobi equations on a network: application to...
Numerical approach for Hamilton-Jacobi equations on a network: application to...Guillaume Costeseque
 

Similar to Nested sampling (20)

A numerical method to solve fractional Fredholm-Volterra integro-differential...
A numerical method to solve fractional Fredholm-Volterra integro-differential...A numerical method to solve fractional Fredholm-Volterra integro-differential...
A numerical method to solve fractional Fredholm-Volterra integro-differential...
 
stochastic processes assignment help
stochastic processes assignment helpstochastic processes assignment help
stochastic processes assignment help
 
Litvinenko_RWTH_UQ_Seminar_talk.pdf
Litvinenko_RWTH_UQ_Seminar_talk.pdfLitvinenko_RWTH_UQ_Seminar_talk.pdf
Litvinenko_RWTH_UQ_Seminar_talk.pdf
 
Time Series Analysis
Time Series AnalysisTime Series Analysis
Time Series Analysis
 
Automatic Bayesian method for Numerical Integration
Automatic Bayesian method for Numerical Integration Automatic Bayesian method for Numerical Integration
Automatic Bayesian method for Numerical Integration
 
Propensity albert
Propensity albertPropensity albert
Propensity albert
 
Appendix to MLPI Lecture 2 - Monte Carlo Methods (Basics)
Appendix to MLPI Lecture 2 - Monte Carlo Methods (Basics)Appendix to MLPI Lecture 2 - Monte Carlo Methods (Basics)
Appendix to MLPI Lecture 2 - Monte Carlo Methods (Basics)
 
NONLINEAR DIFFERENCE EQUATIONS WITH SMALL PARAMETERS OF MULTIPLE SCALES
NONLINEAR DIFFERENCE EQUATIONS WITH SMALL PARAMETERS OF MULTIPLE SCALESNONLINEAR DIFFERENCE EQUATIONS WITH SMALL PARAMETERS OF MULTIPLE SCALES
NONLINEAR DIFFERENCE EQUATIONS WITH SMALL PARAMETERS OF MULTIPLE SCALES
 
GradStudentSeminarSept30
GradStudentSeminarSept30GradStudentSeminarSept30
GradStudentSeminarSept30
 
QMC: Operator Splitting Workshop, Compactness Estimates for Nonlinear PDEs - ...
QMC: Operator Splitting Workshop, Compactness Estimates for Nonlinear PDEs - ...QMC: Operator Splitting Workshop, Compactness Estimates for Nonlinear PDEs - ...
QMC: Operator Splitting Workshop, Compactness Estimates for Nonlinear PDEs - ...
 
Testing for mixtures by seeking components
Testing for mixtures by seeking componentsTesting for mixtures by seeking components
Testing for mixtures by seeking components
 
Introducing Zap Q-Learning
Introducing Zap Q-Learning   Introducing Zap Q-Learning
Introducing Zap Q-Learning
 
Welcome to International Journal of Engineering Research and Development (IJERD)
Welcome to International Journal of Engineering Research and Development (IJERD)Welcome to International Journal of Engineering Research and Development (IJERD)
Welcome to International Journal of Engineering Research and Development (IJERD)
 
ABC with Wasserstein distances
ABC with Wasserstein distancesABC with Wasserstein distances
ABC with Wasserstein distances
 
slides_online_optimization_david_mateos
slides_online_optimization_david_mateosslides_online_optimization_david_mateos
slides_online_optimization_david_mateos
 
Mathematics and AI
Mathematics and AIMathematics and AI
Mathematics and AI
 
DissertationSlides169
DissertationSlides169DissertationSlides169
DissertationSlides169
 
Cheatsheet probability
Cheatsheet probabilityCheatsheet probability
Cheatsheet probability
 
sada_pres
sada_pressada_pres
sada_pres
 
Numerical approach for Hamilton-Jacobi equations on a network: application to...
Numerical approach for Hamilton-Jacobi equations on a network: application to...Numerical approach for Hamilton-Jacobi equations on a network: application to...
Numerical approach for Hamilton-Jacobi equations on a network: application to...
 

More from Christian Robert

Asymptotics of ABC, lecture, Collège de France
Asymptotics of ABC, lecture, Collège de FranceAsymptotics of ABC, lecture, Collège de France
Asymptotics of ABC, lecture, Collège de FranceChristian Robert
 
Workshop in honour of Don Poskitt and Gael Martin
Workshop in honour of Don Poskitt and Gael MartinWorkshop in honour of Don Poskitt and Gael Martin
Workshop in honour of Don Poskitt and Gael MartinChristian Robert
 
How many components in a mixture?
How many components in a mixture?How many components in a mixture?
How many components in a mixture?Christian Robert
 
Testing for mixtures at BNP 13
Testing for mixtures at BNP 13Testing for mixtures at BNP 13
Testing for mixtures at BNP 13Christian Robert
 
Inferring the number of components: dream or reality?
Inferring the number of components: dream or reality?Inferring the number of components: dream or reality?
Inferring the number of components: dream or reality?Christian Robert
 
discussion on Bayesian restricted likelihood
discussion on Bayesian restricted likelihooddiscussion on Bayesian restricted likelihood
discussion on Bayesian restricted likelihoodChristian Robert
 
Laplace's Demon: seminar #1
Laplace's Demon: seminar #1Laplace's Demon: seminar #1
Laplace's Demon: seminar #1Christian Robert
 
Likelihood-free Design: a discussion
Likelihood-free Design: a discussionLikelihood-free Design: a discussion
Likelihood-free Design: a discussionChristian Robert
 
CISEA 2019: ABC consistency and convergence
CISEA 2019: ABC consistency and convergenceCISEA 2019: ABC consistency and convergence
CISEA 2019: ABC consistency and convergenceChristian Robert
 
a discussion of Chib, Shin, and Simoni (2017-8) Bayesian moment models
a discussion of Chib, Shin, and Simoni (2017-8) Bayesian moment modelsa discussion of Chib, Shin, and Simoni (2017-8) Bayesian moment models
a discussion of Chib, Shin, and Simoni (2017-8) Bayesian moment modelsChristian Robert
 
short course at CIRM, Bayesian Masterclass, October 2018
short course at CIRM, Bayesian Masterclass, October 2018short course at CIRM, Bayesian Masterclass, October 2018
short course at CIRM, Bayesian Masterclass, October 2018Christian Robert
 
prior selection for mixture estimation
prior selection for mixture estimationprior selection for mixture estimation
prior selection for mixture estimationChristian Robert
 

More from Christian Robert (20)

Asymptotics of ABC, lecture, Collège de France
Asymptotics of ABC, lecture, Collège de FranceAsymptotics of ABC, lecture, Collège de France
Asymptotics of ABC, lecture, Collège de France
 
Workshop in honour of Don Poskitt and Gael Martin
Workshop in honour of Don Poskitt and Gael MartinWorkshop in honour of Don Poskitt and Gael Martin
Workshop in honour of Don Poskitt and Gael Martin
 
discussion of ICML23.pdf
discussion of ICML23.pdfdiscussion of ICML23.pdf
discussion of ICML23.pdf
 
How many components in a mixture?
How many components in a mixture?How many components in a mixture?
How many components in a mixture?
 
restore.pdf
restore.pdfrestore.pdf
restore.pdf
 
Testing for mixtures at BNP 13
Testing for mixtures at BNP 13Testing for mixtures at BNP 13
Testing for mixtures at BNP 13
 
Inferring the number of components: dream or reality?
Inferring the number of components: dream or reality?Inferring the number of components: dream or reality?
Inferring the number of components: dream or reality?
 
CDT 22 slides.pdf
CDT 22 slides.pdfCDT 22 slides.pdf
CDT 22 slides.pdf
 
discussion on Bayesian restricted likelihood
discussion on Bayesian restricted likelihooddiscussion on Bayesian restricted likelihood
discussion on Bayesian restricted likelihood
 
ABC-Gibbs
ABC-GibbsABC-Gibbs
ABC-Gibbs
 
eugenics and statistics
eugenics and statisticseugenics and statistics
eugenics and statistics
 
Laplace's Demon: seminar #1
Laplace's Demon: seminar #1Laplace's Demon: seminar #1
Laplace's Demon: seminar #1
 
ABC-Gibbs
ABC-GibbsABC-Gibbs
ABC-Gibbs
 
asymptotics of ABC
asymptotics of ABCasymptotics of ABC
asymptotics of ABC
 
ABC-Gibbs
ABC-GibbsABC-Gibbs
ABC-Gibbs
 
Likelihood-free Design: a discussion
Likelihood-free Design: a discussionLikelihood-free Design: a discussion
Likelihood-free Design: a discussion
 
CISEA 2019: ABC consistency and convergence
CISEA 2019: ABC consistency and convergenceCISEA 2019: ABC consistency and convergence
CISEA 2019: ABC consistency and convergence
 
a discussion of Chib, Shin, and Simoni (2017-8) Bayesian moment models
a discussion of Chib, Shin, and Simoni (2017-8) Bayesian moment modelsa discussion of Chib, Shin, and Simoni (2017-8) Bayesian moment models
a discussion of Chib, Shin, and Simoni (2017-8) Bayesian moment models
 
short course at CIRM, Bayesian Masterclass, October 2018
short course at CIRM, Bayesian Masterclass, October 2018short course at CIRM, Bayesian Masterclass, October 2018
short course at CIRM, Bayesian Masterclass, October 2018
 
prior selection for mixture estimation
prior selection for mixture estimationprior selection for mixture estimation
prior selection for mixture estimation
 

Recently uploaded

Traditional Agroforestry System in India- Shifting Cultivation, Taungya, Home...
Traditional Agroforestry System in India- Shifting Cultivation, Taungya, Home...Traditional Agroforestry System in India- Shifting Cultivation, Taungya, Home...
Traditional Agroforestry System in India- Shifting Cultivation, Taungya, Home...jana861314
 
Artificial Intelligence In Microbiology by Dr. Prince C P
Artificial Intelligence In Microbiology by Dr. Prince C PArtificial Intelligence In Microbiology by Dr. Prince C P
Artificial Intelligence In Microbiology by Dr. Prince C PPRINCE C P
 
Behavioral Disorder: Schizophrenia & it's Case Study.pdf
Behavioral Disorder: Schizophrenia & it's Case Study.pdfBehavioral Disorder: Schizophrenia & it's Case Study.pdf
Behavioral Disorder: Schizophrenia & it's Case Study.pdfSELF-EXPLANATORY
 
Disentangling the origin of chemical differences using GHOST
Disentangling the origin of chemical differences using GHOSTDisentangling the origin of chemical differences using GHOST
Disentangling the origin of chemical differences using GHOSTSérgio Sacani
 
Biological Classification BioHack (3).pdf
Biological Classification BioHack (3).pdfBiological Classification BioHack (3).pdf
Biological Classification BioHack (3).pdfmuntazimhurra
 
Nightside clouds and disequilibrium chemistry on the hot Jupiter WASP-43b
Nightside clouds and disequilibrium chemistry on the hot Jupiter WASP-43bNightside clouds and disequilibrium chemistry on the hot Jupiter WASP-43b
Nightside clouds and disequilibrium chemistry on the hot Jupiter WASP-43bSérgio Sacani
 
Orientation, design and principles of polyhouse
Orientation, design and principles of polyhouseOrientation, design and principles of polyhouse
Orientation, design and principles of polyhousejana861314
 
NAVSEA PEO USC - Unmanned & Small Combatants 26Oct23.pdf
NAVSEA PEO USC - Unmanned & Small Combatants 26Oct23.pdfNAVSEA PEO USC - Unmanned & Small Combatants 26Oct23.pdf
NAVSEA PEO USC - Unmanned & Small Combatants 26Oct23.pdfWadeK3
 
GFP in rDNA Technology (Biotechnology).pptx
GFP in rDNA Technology (Biotechnology).pptxGFP in rDNA Technology (Biotechnology).pptx
GFP in rDNA Technology (Biotechnology).pptxAleenaTreesaSaji
 
Call Girls in Munirka Delhi 💯Call Us 🔝8264348440🔝
Call Girls in Munirka Delhi 💯Call Us 🔝8264348440🔝Call Girls in Munirka Delhi 💯Call Us 🔝8264348440🔝
Call Girls in Munirka Delhi 💯Call Us 🔝8264348440🔝soniya singh
 
Animal Communication- Auditory and Visual.pptx
Animal Communication- Auditory and Visual.pptxAnimal Communication- Auditory and Visual.pptx
Animal Communication- Auditory and Visual.pptxUmerFayaz5
 
Physiochemical properties of nanomaterials and its nanotoxicity.pptx
Physiochemical properties of nanomaterials and its nanotoxicity.pptxPhysiochemical properties of nanomaterials and its nanotoxicity.pptx
Physiochemical properties of nanomaterials and its nanotoxicity.pptxAArockiyaNisha
 
Call Girls in Munirka Delhi 💯Call Us 🔝9953322196🔝 💯Escort.
Call Girls in Munirka Delhi 💯Call Us 🔝9953322196🔝 💯Escort.Call Girls in Munirka Delhi 💯Call Us 🔝9953322196🔝 💯Escort.
Call Girls in Munirka Delhi 💯Call Us 🔝9953322196🔝 💯Escort.aasikanpl
 
Biopesticide (2).pptx .This slides helps to know the different types of biop...
Biopesticide (2).pptx  .This slides helps to know the different types of biop...Biopesticide (2).pptx  .This slides helps to know the different types of biop...
Biopesticide (2).pptx .This slides helps to know the different types of biop...RohitNehra6
 
Cultivation of KODO MILLET . made by Ghanshyam pptx
Cultivation of KODO MILLET . made by Ghanshyam pptxCultivation of KODO MILLET . made by Ghanshyam pptx
Cultivation of KODO MILLET . made by Ghanshyam pptxpradhanghanshyam7136
 
Is RISC-V ready for HPC workload? Maybe?
Is RISC-V ready for HPC workload? Maybe?Is RISC-V ready for HPC workload? Maybe?
Is RISC-V ready for HPC workload? Maybe?Patrick Diehl
 
Work, Energy and Power for class 10 ICSE Physics
Work, Energy and Power for class 10 ICSE PhysicsWork, Energy and Power for class 10 ICSE Physics
Work, Energy and Power for class 10 ICSE Physicsvishikhakeshava1
 
All-domain Anomaly Resolution Office U.S. Department of Defense (U) Case: “Eg...
All-domain Anomaly Resolution Office U.S. Department of Defense (U) Case: “Eg...All-domain Anomaly Resolution Office U.S. Department of Defense (U) Case: “Eg...
All-domain Anomaly Resolution Office U.S. Department of Defense (U) Case: “Eg...Sérgio Sacani
 
Call Us ≽ 9953322196 ≼ Call Girls In Mukherjee Nagar(Delhi) |
Call Us ≽ 9953322196 ≼ Call Girls In Mukherjee Nagar(Delhi) |Call Us ≽ 9953322196 ≼ Call Girls In Mukherjee Nagar(Delhi) |
Call Us ≽ 9953322196 ≼ Call Girls In Mukherjee Nagar(Delhi) |aasikanpl
 
zoogeography of pakistan.pptx fauna of Pakistan
zoogeography of pakistan.pptx fauna of Pakistanzoogeography of pakistan.pptx fauna of Pakistan
zoogeography of pakistan.pptx fauna of Pakistanzohaibmir069
 

Recently uploaded (20)

Traditional Agroforestry System in India- Shifting Cultivation, Taungya, Home...
Traditional Agroforestry System in India- Shifting Cultivation, Taungya, Home...Traditional Agroforestry System in India- Shifting Cultivation, Taungya, Home...
Traditional Agroforestry System in India- Shifting Cultivation, Taungya, Home...
 
Artificial Intelligence In Microbiology by Dr. Prince C P
Artificial Intelligence In Microbiology by Dr. Prince C PArtificial Intelligence In Microbiology by Dr. Prince C P
Artificial Intelligence In Microbiology by Dr. Prince C P
 
Behavioral Disorder: Schizophrenia & it's Case Study.pdf
Behavioral Disorder: Schizophrenia & it's Case Study.pdfBehavioral Disorder: Schizophrenia & it's Case Study.pdf
Behavioral Disorder: Schizophrenia & it's Case Study.pdf
 
Disentangling the origin of chemical differences using GHOST
Disentangling the origin of chemical differences using GHOSTDisentangling the origin of chemical differences using GHOST
Disentangling the origin of chemical differences using GHOST
 
Biological Classification BioHack (3).pdf
Biological Classification BioHack (3).pdfBiological Classification BioHack (3).pdf
Biological Classification BioHack (3).pdf
 
Nightside clouds and disequilibrium chemistry on the hot Jupiter WASP-43b
Nightside clouds and disequilibrium chemistry on the hot Jupiter WASP-43bNightside clouds and disequilibrium chemistry on the hot Jupiter WASP-43b
Nightside clouds and disequilibrium chemistry on the hot Jupiter WASP-43b
 
Orientation, design and principles of polyhouse
Orientation, design and principles of polyhouseOrientation, design and principles of polyhouse
Orientation, design and principles of polyhouse
 
NAVSEA PEO USC - Unmanned & Small Combatants 26Oct23.pdf
NAVSEA PEO USC - Unmanned & Small Combatants 26Oct23.pdfNAVSEA PEO USC - Unmanned & Small Combatants 26Oct23.pdf
NAVSEA PEO USC - Unmanned & Small Combatants 26Oct23.pdf
 
GFP in rDNA Technology (Biotechnology).pptx
GFP in rDNA Technology (Biotechnology).pptxGFP in rDNA Technology (Biotechnology).pptx
GFP in rDNA Technology (Biotechnology).pptx
 
Call Girls in Munirka Delhi 💯Call Us 🔝8264348440🔝
Call Girls in Munirka Delhi 💯Call Us 🔝8264348440🔝Call Girls in Munirka Delhi 💯Call Us 🔝8264348440🔝
Call Girls in Munirka Delhi 💯Call Us 🔝8264348440🔝
 
Animal Communication- Auditory and Visual.pptx
Animal Communication- Auditory and Visual.pptxAnimal Communication- Auditory and Visual.pptx
Animal Communication- Auditory and Visual.pptx
 
Physiochemical properties of nanomaterials and its nanotoxicity.pptx
Physiochemical properties of nanomaterials and its nanotoxicity.pptxPhysiochemical properties of nanomaterials and its nanotoxicity.pptx
Physiochemical properties of nanomaterials and its nanotoxicity.pptx
 
Call Girls in Munirka Delhi 💯Call Us 🔝9953322196🔝 💯Escort.
Call Girls in Munirka Delhi 💯Call Us 🔝9953322196🔝 💯Escort.Call Girls in Munirka Delhi 💯Call Us 🔝9953322196🔝 💯Escort.
Call Girls in Munirka Delhi 💯Call Us 🔝9953322196🔝 💯Escort.
 
Biopesticide (2).pptx .This slides helps to know the different types of biop...
Biopesticide (2).pptx  .This slides helps to know the different types of biop...Biopesticide (2).pptx  .This slides helps to know the different types of biop...
Biopesticide (2).pptx .This slides helps to know the different types of biop...
 
Cultivation of KODO MILLET . made by Ghanshyam pptx
Cultivation of KODO MILLET . made by Ghanshyam pptxCultivation of KODO MILLET . made by Ghanshyam pptx
Cultivation of KODO MILLET . made by Ghanshyam pptx
 
Is RISC-V ready for HPC workload? Maybe?
Is RISC-V ready for HPC workload? Maybe?Is RISC-V ready for HPC workload? Maybe?
Is RISC-V ready for HPC workload? Maybe?
 
Work, Energy and Power for class 10 ICSE Physics
Work, Energy and Power for class 10 ICSE PhysicsWork, Energy and Power for class 10 ICSE Physics
Work, Energy and Power for class 10 ICSE Physics
 
All-domain Anomaly Resolution Office U.S. Department of Defense (U) Case: “Eg...
All-domain Anomaly Resolution Office U.S. Department of Defense (U) Case: “Eg...All-domain Anomaly Resolution Office U.S. Department of Defense (U) Case: “Eg...
All-domain Anomaly Resolution Office U.S. Department of Defense (U) Case: “Eg...
 
Call Us ≽ 9953322196 ≼ Call Girls In Mukherjee Nagar(Delhi) |
Call Us ≽ 9953322196 ≼ Call Girls In Mukherjee Nagar(Delhi) |Call Us ≽ 9953322196 ≼ Call Girls In Mukherjee Nagar(Delhi) |
Call Us ≽ 9953322196 ≼ Call Girls In Mukherjee Nagar(Delhi) |
 
zoogeography of pakistan.pptx fauna of Pakistan
zoogeography of pakistan.pptx fauna of Pakistanzoogeography of pakistan.pptx fauna of Pakistan
zoogeography of pakistan.pptx fauna of Pakistan
 

Nested sampling

  • 1. Outline Nested Sampling Posterior Simulation Nested Sampling Termination and Size of N Numerical Examples Conclusion Nested Sampling for General Bayesian Computation Represented by WU Changye 12 février 2015 Represented by WU Changye Nested Sampling for General Bayesian Computation
  • 2. Outline Nested Sampling Posterior Simulation Nested Sampling Termination and Size of N Numerical Examples Conclusion Nested Sampling Posterior Simulation Nested Sampling Termination and Size of N Numerical Examples Conclusion Represented by WU Changye Nested Sampling for General Bayesian Computation
  • 3. Outline Nested Sampling Posterior Simulation Nested Sampling Termination and Size of N Numerical Examples Conclusion Introduction In the Bayesian paradigm, the parameter θ follows the prior distribution π, the observations y follow the distribution L(y|θ) given θ, then the posterior distribution f (θ|y) which indicates the distribution of θ given the observations y has the following form : f (θ|y) = L(y|θ)π(θ) Θ L(y|θ)π(θ)dθ The objective of nested sampling is to compute the ’evidence’ : Z = Θ L(y|θ)π(θ)dθ Represented by WU Changye Nested Sampling for General Bayesian Computation
  • 4. Outline Nested Sampling Posterior Simulation Nested Sampling Termination and Size of N Numerical Examples Conclusion θ is a random variable, then Z = Eπ(L(θ)) For simplicity, let L(θ) denote the likelihood L(y|θ). The cumulative distribution function of L(θ) is F(λ) = L(θ)<λ π(θ)dθ Define the induced measure µ on R by the likelihood function and the prior as follwing µ(A) = Pπ(L(θ) ∈ A) Represented by WU Changye Nested Sampling for General Bayesian Computation
  • 5. Outline Nested Sampling Posterior Simulation Nested Sampling Termination and Size of N Numerical Examples Conclusion Lemma 1 : Eπ(L(θ)) = Eµ(X). Proof : ∀g is a indication function of a measurable set A in R. Then Eπ(g(L(θ))) = Eπ(IA(L(θ))) = L(θ)∈A π(θ)dθ However, µ(dx) = Θ δ{L(θ)}(dx)π(θ)dθ. Eµ(g(X)) = R IA(x)µ(dx) = Θ R IA(x)δ{L(θ)}(dx) π(θ)dθ Therefore, Eµ(g(X)) = Eπ(IA(L(θ))) = Eπ(g(L(θ))) Represented by WU Changye Nested Sampling for General Bayesian Computation
  • 6. Outline Nested Sampling Posterior Simulation Nested Sampling Termination and Size of N Numerical Examples Conclusion In the general case, let {gn} be an increasing sequence of step functions converging to identity function Id ; then {gn ◦ L} is an increasing sequence of step functions converging to L and the desired conclusion follows by taking limits. Represented by WU Changye Nested Sampling for General Bayesian Computation
  • 7. Outline Nested Sampling Posterior Simulation Nested Sampling Termination and Size of N Numerical Examples Conclusion Lemma 2 : If X is a positive-valued random variable, has p.d.f. f and c.d.f. F, then : ∞ 0 (1 − F(x))dx = ∞ 0 xf (x)dx = E(X). Proof : ∞ 0 (1 − F(x))dx = ∞ 0 (1 − P(X < x))dx = ∞ 0 P(X ≥ x)dx = ∞ 0 ∞ x f (y) · dy · dx = ∞ 0 f (y) y 0 dx · dy = ∞ 0 f (y) · ydy = E(X) Represented by WU Changye Nested Sampling for General Bayesian Computation
  • 8. Outline Nested Sampling Posterior Simulation Nested Sampling Termination and Size of N Numerical Examples Conclusion According to Lemma 1 and 2, Z = Eµ(X) = ∞ 0 xdF(x) = ∞ 0 (1 − F(x))dx Let ϕ−1(x) = 1 − F(x) = P{θ : L(θ) > x} Z = ∞ 0 ϕ−1 (x)dx = 1 0 ϕ(x)dx Therefore, we have the evidence represented by an one-dimensional integration. Represented by WU Changye Nested Sampling for General Bayesian Computation
  • 9. Outline Nested Sampling Posterior Simulation Nested Sampling Termination and Size of N Numerical Examples Conclusion In order to compute the following integration : J = 1 0 ϕ(x)dx there are three methods based on sampling. Represented by WU Changye Nested Sampling for General Bayesian Computation
  • 10. Outline Nested Sampling Posterior Simulation Nested Sampling Termination and Size of N Numerical Examples Conclusion 1) Importance Sampling : i = 1, · · · , n, Ui ∼ U[0,1], ˆJ1 = 1 n n i=1 ϕ(Ui ) 2) Riemann approximation : i = 1, · · · , n, Ui ∼ U[0,1], U(i) is the order statistics of (U1, · · · , Un), U(1) ≤ · · · ≤ U(n), ˆJ2 = n−1 i=1 ϕ(U(i))(U(i+1) − U(i)) 3) A complicated method : x0 = 1 step1 : i = 1, · · · , N, U1 i ∼ U[0,1], x1 = max{U1 1 , · · · , U1 N} step2 : i = 1, · · · , N, U2 i ∼ U[0,x1], x2 = max{U2 1 , · · · , U2 N} · · · · · · setp n : i = 1, · · · , N, Un i ∼ U[0,xn−1], xn = max{Un 1 , · · · , Un N} ˆJ3 = n i=1 ϕ(xi )(xi−1 − xi ) Represented by WU Changye Nested Sampling for General Bayesian Computation
  • 11. Outline Nested Sampling Posterior Simulation Nested Sampling Termination and Size of N Numerical Examples Conclusion Nested sampling takes the third method and the reason is that ϕ is a decreasing function and in many cases it decreases rapidly. Figure: Graph of ϕ(x) and the trace of (xi , ϕ(xi )) Represented by WU Changye Nested Sampling for General Bayesian Computation
  • 12. Outline Nested Sampling Posterior Simulation Nested Sampling Termination and Size of N Numerical Examples Conclusion First, we consider the distributions of x1, · · · , xn : for u ∈ [0, 1], P(x1 < u) = P(U1 1 < u, · · · , U1 N < u) = N i=1 P(U1 i < u) = uN As a result, the density function of x1 is f (x1) = NxN−1 1 By the same method, we have : f (xk|xk−1) = N xk−1 xk xk−1 N−1 Represented by WU Changye Nested Sampling for General Bayesian Computation
  • 13. Outline Nested Sampling Posterior Simulation Nested Sampling Termination and Size of N Numerical Examples Conclusion Note tk = xk xk−1 , P(tk ≤ t) = P(xk ≤ tx|xk−1 = x)fxk−1 (x)dx = xk−1 tx 0 fxk |xk−1 (y|x)fxk−1 (x)dxdy = xk−1 tx 0 N x y x N−1 fxk−1 (x)dxdy = xk−1 tN fxk−1 (x)dx = tN Besides, P(tk ≤ t|xk−1 = x) = P(xk ≤ tx|xk−1 = x) = tN As a result, we have tk ⊥ xk−1. Represented by WU Changye Nested Sampling for General Bayesian Computation
  • 14. Moreover, a point estimate for $x_k$ can be written entirely in terms of point estimates for the $t_k$:
$$x_k = \frac{x_k}{x_{k-1}} \times \frac{x_{k-1}}{x_{k-2}} \times \dots \times \frac{x_1}{x_0} \times x_0 = t_k t_{k-1} \cdots t_1 \cdot x_0 = \prod_{i=1}^k t_i \cdot x_0.$$
Because the $x_k$ span the large range common to many problems, it is more convenient to work with
$$\log x_k = \sum_{i=1}^k \log t_i + \log x_0,$$
where each shrinkage factor $t_i$ has density $f(t) = N t^{N-1}$ on $(0, 1)$, so the logarithmic shrinkage has mean and variance
$$E(\log t) = -\frac{1}{N}, \qquad V(\log t) = \frac{1}{N^2}.$$
  • 15. Taking the mean as the point estimate for each $\log t_i$ finally gives
$$\log \frac{x_k}{x_0} = -\frac{k}{N} \pm \frac{\sqrt{k}}{N}.$$
Parameterizing $x_k$ in terms of the shrinkage proves immediately advantageous: because the $\log t_i$ are independent, the errors in the point estimates tend to cancel, and the point estimates
$$x_k = \exp\left(-\frac{k}{N}\right)$$
grow increasingly accurate, in relative terms, as $k$ increases.
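These shrinkage statistics are easy to verify by simulation. The following sketch (my own check, with arbitrary values of N and k) confirms that $\log x_k$ concentrates at $-k/N$ with standard deviation $\sqrt{k}/N$.

```python
import numpy as np

rng = np.random.default_rng(1)
N, k, reps = 20, 100, 1000

# Each shrinkage factor t_i is the max of N uniforms, i.e. t_i ~ Beta(N, 1)
t = rng.uniform(size=(reps, k, N)).max(axis=2)
log_xk = np.log(t).sum(axis=1)        # log x_k = sum_i log t_i  (with x_0 = 1)

print(log_xk.mean(), -k / N)          # both close to -5
print(log_xk.std(), np.sqrt(k) / N)   # both close to 0.5
```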
  • 16. Next, we consider the distribution of $\varphi(X)$, where $X \sim \mathcal{U}[0, 1]$. Consider the random variable $X = \varphi^{-1}(L(\theta))$, where $\theta \sim \pi$, and notice that
$$\varphi^{-1} : [0, L_{\max}] \to [0, 1], \qquad \lambda \mapsto P(L(\theta) > \lambda).$$
For $u \in [0, 1]$,
$$P(X < u) = P(\varphi^{-1}(L(\theta)) < u) = P(L(\theta) > \varphi(u)) = \varphi^{-1}(\varphi(u)) = u.$$
This means that $\varphi^{-1}(L(\theta))$ follows $\mathcal{U}[0, 1]$ and $\varphi(X) \sim L(\theta)$.
  • 17. Consider now the situation under the truncated prior
$$\tilde{\pi}(\theta) \propto \begin{cases} \pi(\theta) & L(\theta) > L_0 \\ 0 & \text{otherwise.} \end{cases}$$
Let $X_0 = \varphi^{-1}(L_0)$ and $X = \varphi^{-1}(L(\theta))$, where $\theta \sim \tilde{\pi}$. For $u \in [0, X_0]$,
$$P(X < u) = P(\varphi^{-1}(L(\theta)) < u \mid L(\theta) > L_0) = \frac{P(L(\theta) > \varphi(u))}{P(L(\theta) > L_0)} = \frac{\varphi^{-1}(\varphi(u))}{X_0} = \frac{u}{X_0},$$
so $X \sim \mathcal{U}[0, X_0]$. As a result, $\varphi(X) \sim L(\theta)$, where $X \sim \mathcal{U}[0, X_0]$ and $\theta \sim \tilde{\pi}$.
  • 18. Algorithm. The algorithm based on the method discussed in the previous section is as follows:
– Iteration 1: sample N points $\theta_{1,i}$ independently from the prior $\pi(\theta)$, determine $\theta_1 = \arg\min_{1 \le i \le N} L(\theta_{1,i})$, and set $\varphi_1 = L(\theta_1)$.
– Iteration 2: obtain the N current values $\theta_{2,i}$ by keeping the $\theta_{1,i}$'s, except that $\theta_1$ is replaced by a draw from the prior $\pi$ conditional upon $L(\theta) \ge \varphi_1$; then select $\theta_2 = \arg\min_{1 \le i \le N} L(\theta_{2,i})$ and set $\varphi_2 = L(\theta_2)$.
– Iterate the above step until a given stopping rule is satisfied, for instance when observing very small changes in the approximation $\hat{Z}$, or when reaching the maximal value of $L(\theta)$ when it is known.
  • 19. The resulting evidence estimator after $J$ iterations is
$$\hat{Z} = \sum_{i=1}^J \varphi_i (x_{i-1} - x_i),$$
with $x_i = \exp(-i/N)$; a minimal implementation sketch is given below.
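As referenced above, here is a minimal sketch of the algorithm and estimator, under the simplifying assumption that the constrained prior draw is obtained by naive rejection from the prior. This is only viable when the constraint is mild; real implementations replace the rejection loop with the constrained MCMC moves discussed later. The function names and interface are my own.

```python
import numpy as np

def nested_sampling(log_L, sample_prior, N=100, J=1000, rng=None):
    """sample_prior(n, rng) returns an (n, d) array of prior draws."""
    rng = rng or np.random.default_rng()
    thetas = sample_prior(N, rng)                  # N live points from the prior
    logLs = np.array([log_L(t) for t in thetas])
    Z, x_prev = 0.0, 1.0
    for i in range(1, J + 1):
        worst = int(np.argmin(logLs))              # lowest-likelihood live point
        x_i = np.exp(-i / N)                       # deterministic x_i = e^{-i/N}
        Z += np.exp(logLs[worst]) * (x_prev - x_i) # phi_i * (x_{i-1} - x_i)
        x_prev = x_i
        while True:                                # naive rejection: redraw from the
            cand = sample_prior(1, rng)[0]         # prior until L exceeds the threshold
            new_logL = log_L(cand)
            if new_logL > logLs[worst]:
                break
        thetas[worst], logLs[worst] = cand, new_logL
    return Z
```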
  • 20. By-product of nested sampling. Skilling indicates that nested sampling provides simulations from the posterior distribution at no extra cost: "the existing sequence of points θ1, θ2, θ3, . . . already gives a set of posterior representatives, provided the i'th is assigned the appropriate importance ωi Li". For the posterior expectation
$$E(f(\theta) \mid y) = \frac{\int_\Theta \pi(\theta) L(\theta) f(\theta)\,d\theta}{\int_\Theta \pi(\theta) L(\theta)\,d\theta},$$
we can use a single run of nested sampling to obtain estimators of both the numerator and the denominator, the latter being the evidence $Z$. The estimator of the numerator is
$$\sum_{i=1}^j (x_{i-1} - x_i)\, \varphi_i\, f(\theta_i). \tag{1}$$
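A hedged sketch of estimator (1) in Python, reusing the deterministic $x_i = e^{-i/N}$ from earlier; the function name and interface are assumptions, not from the slides.

```python
import numpy as np

def posterior_expectation(f_vals, logLs, N):
    """Estimate E[f(theta) | y] from the discarded points of a nested sampling
    run: f_vals[i] = f(theta_i), logLs[i] = log L(theta_i), in discard order."""
    i = np.arange(1, len(logLs) + 1)
    x = np.exp(-i / N)                            # x_i = e^{-i/N}, with x_0 = 1
    widths = np.concatenate(([1.0], x[:-1])) - x  # x_{i-1} - x_i
    w = widths * np.exp(logLs)                    # weights (x_{i-1} - x_i) * L_i
    return np.sum(w * np.asarray(f_vals)) / np.sum(w)
```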
  • 21. Lemma 3 (N. Chopin & C. P. Robert): Let $\tilde{f}(l) = E_\pi\{f(\theta) \mid L(\theta) = l\}$ for $l > 0$. Then, if $\tilde{f}$ is absolutely continuous,
$$\int_0^1 \varphi(x)\, \tilde{f}(\varphi(x))\,dx = \int \pi(\theta) L(\theta) f(\theta)\,d\theta.$$
Proof: Let $\psi : x \mapsto x \tilde{f}(x)$. Then
$$\int \pi(\theta) L(\theta) f(\theta)\,d\theta = E_\pi[\psi\{L(\theta)\}] = \int_0^{+\infty} P_\pi(\psi\{L(\theta)\} > l)\,dl = \int_0^{+\infty} \varphi^{-1}(\psi^{-1}(l))\,dl = \int_0^1 \psi(\varphi(x))\,dx.$$
  • 22. Termination. The author suggests terminating when
$$\max(L_1, \dots, L_N) \cdot X_j < f \cdot \hat{Z}_j,$$
where $f$ is some small fraction: once the largest live likelihood times the remaining prior mass $X_j$ (an estimate of the contribution still to come) falls below a fraction $f$ of the current evidence estimate, further iterations change $\hat{Z}$ negligibly.
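As a sketch, this stopping rule can be packaged as a small predicate evaluated at each iteration; the variable names and the default value of f here are illustrative.

```python
import numpy as np

def should_stop(live_logLs, X_j, Z_hat, f=1e-3):
    """True once max(L_1, ..., L_N) * X_j < f * Z_hat, i.e. the live points
    can no longer add more than roughly a fraction f to the evidence."""
    return np.exp(np.max(live_logLs)) * X_j < f * Z_hat
```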
  • 23. How large should N be? The larger N is, the smaller the variability of the approximation: in Skilling's analysis the standard deviation of $\log \hat{Z}$ scales roughly as $\sqrt{H/N}$, where $H$ is the information, so accuracy is bought at the price of proportionally more likelihood evaluations.
  • 24. How to sample N points from the constrained parameter space: use an MCMC method that constructs a Markov chain whose invariant distribution is the truncated prior; a sketch of one such move is given below.
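One common choice (a sketch, not necessarily the author's implementation) is random-walk Metropolis targeting the prior restricted to $\{L(\theta) > L_0\}$: propose from a symmetric kernel, reject immediately if the hard constraint fails, and otherwise accept with the prior ratio. The step size `scale` is an assumption to be tuned per problem.

```python
import numpy as np

def constrained_rw_step(theta, log_prior, log_L, log_L0, scale=0.5, rng=None):
    """One random-walk Metropolis move within {L(theta) > L0}."""
    rng = rng or np.random.default_rng()
    prop = theta + scale * rng.normal(size=theta.shape)
    # Hard constraint first, then the Metropolis ratio for the symmetric proposal
    if log_L(prop) > log_L0 and np.log(rng.uniform()) < log_prior(prop) - log_prior(theta):
        return prop
    return theta
```

In practice the chain is started from one of the N − 1 surviving live points (already distributed under the constraint) and iterated enough steps to decorrelate before the new point is accepted into the live set.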
  • 25. A decentred Gaussian example. The prior is
$$\pi(\theta) = \prod_{k=1}^d \frac{1}{\sqrt{2\pi}} \exp\left(-\frac{1}{2} (\theta^{(k)})^2\right)$$
and the likelihood is
$$L(y \mid \theta) = \prod_{k=1}^d \frac{1}{\sqrt{2\pi}} \exp\left(-\frac{1}{2} (y_k - \theta^{(k)})^2\right).$$
In this example, we can calculate the evidence analytically:
$$Z = \int_{\mathbb{R}^d} L(\theta) \pi(\theta)\,d\theta = \frac{\exp\left(-\sum_{k=1}^d y_k^2 / 4\right)}{2^d\, \pi^{d/2}}.$$
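Since the figures that follow compare estimates against this closed form, a small helper computing $\log Z$ is handy; this is my own convenience function, not from the slides.

```python
import numpy as np

def log_evidence_gaussian(y):
    """log Z = -sum(y_k^2)/4 - d log 2 - (d/2) log pi for this example."""
    y = np.asarray(y, dtype=float)
    d = y.size
    return -np.sum(y**2) / 4.0 - d * np.log(2.0) - 0.5 * d * np.log(np.pi)

print(log_evidence_gaussian([10.0]))   # the d = 1, y = 10 case in the figures
```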
  • 26. Figure: graph of $\varphi(x)$ and the trace of $(x_i, \varphi(x_i))$ with d = 1 and y = 10.
  • 27. Figure: the prior distribution and the likelihood with d = 1 and y = 10.
  • 28. Figure: box-plot of $\log \hat{Z} - \log Z$ with d = 1 and y = 10, for nested sampling and Monte Carlo.
  • 29. Figure: box-plot of $\log \hat{Z} - \log Z$ with d = 5 and y = (3, 3, 3, 3, 3).
  • 30. A probit model. We consider the arsenic dataset and a probit model studied in Chapter 5 of Gelman & Hill (2006). The observations are independent Bernoulli variables $y_i$ such that $P(y_i = 1 \mid x_i) = \Phi(x_i^T \theta)$, where $x_i$ is a vector of d covariates, $\theta$ is a parameter vector of size d, and $\Phi$ denotes the standard normal distribution function. In this particular example, d = 7.
  • 31. The prior is $\theta \sim \mathcal{N}(0, 10^2 I_d)$ and the likelihood is
$$L(\theta) = \prod_{i=1}^n \Phi(x_i^T \theta)^{y_i} \left(1 - \Phi(x_i^T \theta)\right)^{1 - y_i};$$
a stable way to evaluate its logarithm is sketched below.
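A sketch of this log-likelihood in Python: using scipy's `norm.logcdf`, together with $1 - \Phi(\eta) = \Phi(-\eta)$, keeps the evaluation numerically stable when $x_i^T \theta$ is far in either tail. The interface is an assumption.

```python
import numpy as np
from scipy.stats import norm

def probit_log_likelihood(theta, X, y):
    """X: (n, d) covariate matrix, y: (n,) array in {0, 1}, theta: (d,)."""
    eta = X @ theta
    # log Phi(eta) for successes, log Phi(-eta) = log(1 - Phi(eta)) for failures
    return np.sum(y * norm.logcdf(eta) + (1 - y) * norm.logcdf(-eta))
```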
  • 32. Figure: box-plot of $\log \hat{Z}$ with N = 20 for HMC and random-walk MCMC. The blue line marks the true value of $\log Z$ (Chib's method).
  • 33. Posterior samples. We use the Gaussian example to illustrate this result, with $f(\theta) = \exp(-3\theta + 9d/2)$. Figure: box-plot of the log-relative errors $\log \hat{Z} - \log Z$ and $\log \hat{E}(f) - \log E(f)$.
  • 34. Conclusion
– Nested sampling reverses the accepted approach to Bayesian computation by putting the evidence first.
– Nested sampling samples more sparsely from the prior in regions where the likelihood is low and more densely where it is high, resulting in greater efficiency than a sampler that draws directly from the prior.
– The procedure runs with an evolving collection of N points, where N can be chosen small for speed or large for accuracy.
– Nested sampling always reduces a multidimensional integral to the integral of a one-dimensional monotonic function, no matter how many dimensions θ occupies and no matter how strange the shape of the likelihood function L(θ) is.
  • 35. Problems
– How to generate N independent points in the constrained parameter space is an important problem; techniques to do so effectively and efficiently may vary from problem to problem.
– Termination is another practical difficulty.
  • 36. Thank you!