When to Stop Sampling: Answers and Further Questions
Fred J. Hickernell
Department of Applied Mathematics & Center for Interdisciplinary Scientific Computation
Illinois Institute of Technology
hickernell@iit.edu mypages.iit.edu/~hickernell
Thanks to the Guaranteed Automatic Integration Library (GAIL) team and friends
Supported by NSF-DMS-1522687
Thanks to SAMSI, the QMC Program organizers, and the Workshop Organizers
Trends and Advances in Monte Carlo Sampling Algorithms, December 12, 2017
Multidimensional Integration Examples

$$\mu = \int_{\mathbb{R}^d} f(\mathbf{x})\,\nu(d\mathbf{x}) = \text{option price}, \qquad f(\mathbf{x}) = \text{discounted payoff determined by market forces } \mathbf{x}$$

$$\mu = \int_{\mathcal{X}} g(\mathbf{x})\,d\mathbf{x} = P(\mathbf{X} \in \mathcal{X}) = \text{probability}, \qquad g = \text{probability density function}$$

$$\frac{\mu_j}{\mu_0} = \frac{\int_{\mathcal{X}} b_j\,L(\mathbf{b}\,|\,\text{data})\,\varrho(\mathbf{b})\,d\mathbf{b}}{\int_{\mathcal{X}} L(\mathbf{b}\,|\,\text{data})\,\varrho(\mathbf{b})\,d\mathbf{b}} = \text{Bayesian posterior expectation of } \beta_j, \qquad \varrho = \text{prior density}$$

$$\mu := \int_{\mathcal{X}} g(\mathbf{x})\,d\mathbf{x} = \int_{\mathcal{X}} f(\mathbf{x})\,\nu(d\mathbf{x}) =: \mathbb{E}[f(\mathbf{X})] \approx \hat{\mu}_n := \frac{1}{n}\sum_{i=1}^n f(\mathbf{x}_i) = \int_{\mathcal{X}} f(\mathbf{x})\,\hat{\nu}_n(d\mathbf{x})$$

How big should $n$ be so that $|\mu - \hat{\mu}_n| \le \varepsilon$?
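All of these examples fit one template: average $n$ function values. A minimal sketch of the plain Monte Carlo estimator, where the integrand `f` and sampler `sample` below are hypothetical stand-ins for $f$ and $\nu$:

```python
import numpy as np

def mc_estimate(f, sample, n, seed=None):
    """Plain Monte Carlo: mu_n = (1/n) * sum_i f(x_i) with x_i ~ nu."""
    rng = np.random.default_rng(seed)
    x = sample(rng, n)            # n draws from the target measure nu
    return f(x).mean()

# Toy check: f(x) = x_1 + ... + x_5 with nu = U[0,1]^5, so mu = 2.5
f = lambda x: x.sum(axis=1)
sample = lambda rng, n: rng.random((n, 5))
print(mc_estimate(f, sample, 10**5, seed=7))   # prints a value near 2.5
```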
Error of (Different Kinds of) Monte Carlo Methods

$$\mu := \int_{\mathcal{X}} g(\mathbf{x})\,d\mathbf{x} = \int_{\mathcal{X}} f(\mathbf{x})\,\nu(d\mathbf{x}) =: \mathbb{E}[f(\mathbf{X})] \approx \hat{\mu}_n := \frac{1}{n}\sum_{i=1}^n f(\mathbf{x}_i) = \int_{\mathcal{X}} f(\mathbf{x})\,\hat{\nu}_n(d\mathbf{x})$$

$$\operatorname{err}_n(f) := |\mu - \hat{\mu}_n| = \Biggl|\int_{\mathcal{X}} \underbrace{f(\mathbf{x})}_{\substack{\text{make small by}\\\text{importance sampling}}}\ \underbrace{(\nu - \hat{\nu}_n)}_{\substack{\text{make small by}\\\text{clever sampling}}}(d\mathbf{x})\Biggr|$$

- A bound on $\operatorname{err}_n(f)$ for all $f$ is too pessimistic; instead we seek a data-driven bound on $\operatorname{err}_n(f)$ valid for all reasonable $f$, and choose $n$ large enough to make that bound $\le \varepsilon$.
- $\operatorname{err}_n(f) = \underbrace{\operatorname{size}(f)}_{\text{unknown}} \times \underbrace{\operatorname{size}(\nu - \hat{\nu}_n)}_{\text{maybe known}}$ (H., 2017+); a sketch of shrinking $\operatorname{size}(f)$ follows this list.
- We may choose $\hat{\nu}_n$ more carefully than IID.
- For random algorithms/integrands, the inequalities above should hold with high probability.
- The algorithms described here are implemented in the Guaranteed Automatic Integration Library (GAIL) (Choi et al., 2013–2017).
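As a concrete illustration of shrinking $\operatorname{size}(f)$: estimating the small tail probability $P(X > 3)$ for $X \sim N(0,1)$ by plain sampling versus sampling from a shifted normal and reweighting. The shift of 4 is an arbitrary choice for this sketch:

```python
import numpy as np

rng = np.random.default_rng(11)
n = 10**5

# mu = P(X > 3) for X ~ N(0,1): the integrand is zero almost everywhere
x = rng.standard_normal(n)
plain = np.mean(x > 3.0)

# Importance sampling: draw Y ~ N(4,1) and reweight by phi(y)/phi(y - 4),
# which makes the effective integrand far less spiky
y = rng.standard_normal(n) + 4.0
weights = np.exp(0.5 * ((y - 4.0)**2 - y**2))
shifted = np.mean((y > 3.0) * weights)
print(plain, shifted)   # both near 1.35e-3; the second has much less variance
```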
Stopping IID Sampling

$$\mu := \int_{\mathcal{X}} f(\mathbf{x})\,\nu(d\mathbf{x}) =: \mathbb{E}[f(\mathbf{X})] \approx \hat{\mu}_n := \frac{1}{n}\sum_{i=n_\sigma+1}^{n_\sigma+n} f(\mathbf{x}_i), \qquad \mathbf{x}_i \overset{\text{IID}}{\sim} \nu, \qquad \operatorname{err}_n(f) = |\mu - \hat{\mu}_n|$$

The Central Limit Theorem says that

$$P\bigl[\operatorname{err}_n(f) \le \widehat{\operatorname{err}}_n(f)\bigr] \underset{n\to\infty}{\approx} 99\%, \qquad \widehat{\operatorname{err}}_n(f) = \frac{2.58\,\sigma}{\sqrt{n}}, \quad \sigma^2 = \operatorname{var}(f(X))$$

Since $\sigma$ is unknown, estimate it from an initial sample of size $n_\sigma$ and inflate it by a safety factor:

$$\widehat{\operatorname{err}}_n(f) = \frac{2.58 \times 1.2\,\hat{\sigma}}{\sqrt{n}}, \qquad \hat{\sigma}^2 = \frac{1}{n_\sigma - 1}\sum_{i=1}^{n_\sigma} \bigl[f(\mathbf{x}_i) - \bar{f}\bigr]^2, \quad \bar{f} = \text{pilot sample mean}$$

Choose $n = \bigl\lceil (2.58 \times 1.2\,\hat{\sigma}/\varepsilon)^2 \bigr\rceil$ to make $\widehat{\operatorname{err}}_n(f) \le \varepsilon$.

To turn this heuristic into a guarantee, H. et al. (2013) (see also Bayer et al. (2014) and H. et al. (2017+)) assume that the integrand belongs to the cone of reasonable integrands, $\mathcal{C} = \{f \in L^4(\mathcal{X}, \nu) : \operatorname{kurt}(f(X)) \le \kappa_{\text{up}}\}$. Then the $n \to \infty$ limit may be removed by using Berry-Esseen inequalities (Nefedova and Shevtsova, 2012; Shevtsova, 2014), at the cost of a more complicated formula for $\widehat{\operatorname{err}}_n(f)$. The bound on the kurtosis also allows us to be highly confident that $\sigma \le 1.2\,\hat{\sigma}$, via Cantelli's inequality:

$$P\bigl[\operatorname{err}_n(f) \le \widehat{\operatorname{err}}_n(f)\bigr] \ge 99.5\%, \qquad P(\sigma \le 1.2\,\hat{\sigma}) \ge 99.5\%$$

$$\widetilde{\operatorname{err}}_n(f) = \min\Biggl\{\delta > 0 : \Phi\bigl(-\sqrt{n}\,\delta/\sigma\bigr) + \frac{1}{\sqrt{n}}\min\Biggl(0.3328\bigl(\kappa_{\text{up}}^{3/4} + 0.429\bigr),\ \frac{18.1139\,\kappa_{\text{up}}^{3/4}}{\bigl(1 + \sqrt{n}\,\delta/\sigma\bigr)^3}\Biggr) \le 0.25\%\Biggr\}$$

where $\widehat{\operatorname{err}}_n(f)$ is the same as $\widetilde{\operatorname{err}}_n(f)$ but with $\sigma$ replaced by $1.2\,\hat{\sigma}$. Choose $n$ to make $\widehat{\operatorname{err}}_n(f) \le \varepsilon$.
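A minimal sketch of the two-stage rule, using the simple CLT-based sample size rather than the Berry-Esseen correction that the guaranteed algorithm applies; `f` and `sample` are hypothetical stand-ins:

```python
import numpy as np

def two_stage_mc(f, sample, eps, n_sigma=1000, inflate=1.2, z=2.58, seed=None):
    """Stage 1: estimate sigma from n_sigma pilot samples and inflate it.
    Stage 2: average n = ceil((z * inflate * sigma_hat / eps)^2) fresh samples."""
    rng = np.random.default_rng(seed)
    pilot = f(sample(rng, n_sigma))
    sigma_hat = pilot.std(ddof=1)                     # hat sigma
    n = int(np.ceil((z * inflate * sigma_hat / eps) ** 2))
    return f(sample(rng, n)).mean(), n

# Example: integrate cos(x_1) cos(x_2) over [0,1]^2 to tolerance 1e-3
f = lambda x: np.cos(x).prod(axis=1)
sample = lambda rng, n: rng.random((n, 2))
mu_n, n = two_stage_mc(f, sample, eps=1e-3, seed=3)
print(mu_n, n)            # mu = sin(1)^2, about 0.7081
```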
Option Pricing

$$\mu = \text{fair price} = \int_{\mathbb{R}^d} e^{-rT}\max\Biggl(\frac{1}{d}\sum_{j=1}^d S_j - K,\ 0\Biggr)\,\frac{e^{-\mathbf{z}^T\mathbf{z}/2}}{(2\pi)^{d/2}}\,d\mathbf{z} \approx \$13.12$$

$$S_j = S_0\,e^{(r-\sigma^2/2)jT/d + \sigma x_j} = \text{stock price at time } jT/d, \qquad \mathbf{x} = A\mathbf{z}, \quad AA^T = \Sigma = \bigl(\min(i,j)\,T/d\bigr)_{i,j=1}^d$$

Here $T = 1/4$ and $d = 13$.

| Abs. Error Tolerance | Method | Median Error | Worst 10% Accuracy | Worst 10% n | Worst 10% Time (s) |
|---|---|---|---|---|---|
| 1E−2 | IID | 2E−3 | 100% | 6.1E7 | 33.000 |
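A sketch of the integrand, the discounted payoff of this arithmetic-mean Asian call. The slide fixes only $T = 1/4$ and $d = 13$; the values of $S_0$, $K$, $r$, $\sigma$ below are assumptions, so the price printed need not match \$13.12:

```python
import numpy as np

# Assumed market parameters; the slide fixes only T = 1/4 and d = 13
S0, K, r, sigma, T, d = 100.0, 100.0, 0.05, 0.5, 0.25, 13

t = T / d * np.arange(1, d + 1)              # monitoring times jT/d
Sigma = np.minimum.outer(t, t)               # covariance of Brownian motion
A = np.linalg.cholesky(Sigma)                # A @ A.T == Sigma

def payoff(z):
    """Discounted arithmetic-mean Asian call payoff for rows z ~ N(0, I_d)."""
    x = z @ A.T                              # Brownian path values at times t
    S = S0 * np.exp((r - 0.5 * sigma**2) * t + sigma * x)
    return np.exp(-r * T) * np.maximum(S.mean(axis=1) - K, 0.0)

rng = np.random.default_rng(0)
print(payoff(rng.standard_normal((10**6, d))).mean())   # IID estimate of mu
```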
Faster Convergence with Quasi-Monte Carlo Sampling

$$\operatorname{err}(f, n) := |\mu - \hat{\mu}_n| = \Biggl|\int_{\mathcal{X}} \underbrace{f(\mathbf{x})}_{\substack{\text{make small by}\\\text{importance sampling}}}\ \underbrace{(\nu - \hat{\nu}_n)}_{\substack{\text{make small by}\\\text{clever sampling}}}(d\mathbf{x})\Biggr|$$

Low discrepancy (quasi-Monte Carlo) sampling makes $\nu - \hat{\nu}_n$ small by spreading the points more evenly than IID sampling does, giving errors that decay nearly like $O(n^{-1})$ rather than $O(n^{-1/2})$.
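A quick empirical comparison of the two samplings on a smooth integrand, using SciPy's scrambled Sobol' generator; the integrand is an arbitrary smooth test case:

```python
import numpy as np
from scipy.stats import qmc

f = lambda x: np.exp(x.sum(axis=1))          # smooth test integrand
d, mu = 4, (np.e - 1) ** 4                   # exact integral over [0,1]^4

rng = np.random.default_rng(42)
for m in range(8, 15):                       # sample sizes n = 2^m
    n = 2 ** m
    err_iid = abs(f(rng.random((n, d))).mean() - mu)
    sob = qmc.Sobol(d, scramble=True, seed=m)
    err_qmc = abs(f(sob.random(n)).mean() - mu)
    print(f"n = 2^{m:2d}: IID {err_iid:.1e}   scrambled Sobol' {err_qmc:.1e}")
```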
Stopping Quasi-Monte Carlo Sampling

$$\mu = \int_{[0,1]^d} f(\mathbf{x})\,d\mathbf{x} = \hat{f}(\mathbf{0}) \approx \hat{\mu}_n = \tilde{f}_n(\mathbf{0}), \qquad f(\mathbf{x}) = \sum_{\mathbf{k}} \hat{f}(\mathbf{k})\,\phi_{\mathbf{k}}(\mathbf{x}), \quad \hat{f}(\mathbf{k}) = \int_{[0,1]^d} f(\mathbf{x})\,\overline{\phi_{\mathbf{k}}(\mathbf{x})}\,d\mathbf{x}$$

The $\phi_{\mathbf{k}}$ are complex exponentials (for lattices) or Walsh functions (for nets).

$$\operatorname{err}_n(f) \le \widetilde{\operatorname{err}}_n(f) := \sum_{\mathbf{k} \in \text{dual set} \setminus \{\mathbf{0}\}} |\hat{f}(\mathbf{k})|, \qquad \text{dual set} = \{\mathbf{k} : \underbrace{\phi_{\mathbf{k}}(\mathbf{x}_i) = \phi_{\mathbf{k}}(\mathbf{x}_1),\ i = 1, \ldots, n}_{\phi_{\mathbf{k}} \text{ looks like a constant}}\}$$

The discrete transform, $\tilde{f}_n(\mathbf{k}) = \frac{1}{n}\sum_{i=1}^n f(\mathbf{x}_i)\,\overline{\phi_{\mathbf{k}}(\mathbf{x}_i)}$ for all $\mathbf{k}$, can be computed in $O(n \log n)$ operations.

H. and Jiménez Rugama (2016) and Jiménez Rugama and H. (2016) (see also H. et al. (2017+)) define a cone, $\mathcal{C}$, of reasonable integrands whose true Fourier coefficients do not decay too erratically. Then

$$\operatorname{err}_n(f) \le \widetilde{\operatorname{err}}_n(f) \le \widehat{\operatorname{err}}_n(f) := C(n) \sum_{\text{moderate } \mathbf{k}} |\tilde{f}_n(\mathbf{k})| \qquad \forall f \in \mathcal{C}$$

Control variates are possible (H. et al., 2017+).
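For a rank-1 lattice the discrete transform is a single FFT. A sketch, where the generating vector `z` is an arbitrary illustrative choice rather than an optimized one:

```python
import numpy as np

# Rank-1 lattice x_i = (i * z / n) mod 1
n, z = 2 ** 10, np.array([1, 433, 229, 311])
i = np.arange(n)
x = (np.outer(i, z) % n) / n                 # n points in [0,1)^4

f = lambda x: np.exp(x.sum(axis=1))
y = np.fft.fft(f(x)) / n                     # all tilde f_n(k) in O(n log n)

# y[kappa] aggregates the coefficients hat f(k) with k . z = kappa (mod n);
# the stopping rule bounds err_n(f) by C(n) times a sum of moderate-frequency
# |tilde f_n(k)|, here crudely taken as the middle block of frequencies.
moderate_sum = np.abs(y[n // 4 : 3 * n // 4]).sum()
print(abs(y[0] - (np.e - 1) ** 4), moderate_sum)   # y[0] is mu_n itself
```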
A Reasonable and an Unreasonable Integrand

[Figure only.]
Option Pricing

Same problem as above ($\mu \approx \$13.12$, $T = 1/4$, $d = 13$); results so far:

| Abs. Error Tolerance | Method | Median Error | Worst 10% Accuracy | Worst 10% n | Worst 10% Time (s) |
|---|---|---|---|---|---|
| 1E−2 | IID | 2E−3 | 100% | 6.1E7 | 33.000 |
| 1E−2 | Scr. Sobol' | 1E−3 | 100% | 1.6E4 | 0.040 |
| 1E−2 | Scr. Sob. cont. var. | 2E−3 | 100% | 4.1E3 | 0.019 |
Bayesian Cubature with Nice Covariance Kernels

$$f \sim \mathcal{GP}(m, s^2 C_\theta), \qquad C_\theta : [0,1]^d \times [0,1]^d \to \mathbb{R}, \qquad m, s, \theta \text{ to be inferred by MLE}$$

$$\mathsf{C} = \bigl(C_\theta(\mathbf{x}_i, \mathbf{x}_j)\bigr)_{i,j=1}^n = \frac{1}{n} W \Lambda W^H, \quad \Lambda = \operatorname{diag}(\lambda_1, \ldots, \lambda_n), \quad \mathsf{C}\mathbf{1} = \lambda_1 \mathbf{1}, \quad \int_{[0,1]^d} C_\theta(\mathbf{x}, \mathbf{t})\,d\mathbf{t} = 1\ \ \forall \mathbf{x}$$

Using maximum likelihood estimation (MLE) gives

$$m_{\text{MLE}} = \hat{\mu}_n = \frac{1}{n}\sum_{i=1}^n f(\mathbf{x}_i), \qquad s^2_{\text{MLE}} = \frac{1}{n^2}\sum_{i=2}^n \frac{|y_i|^2}{\lambda_i}, \qquad \mathbf{y} = \underbrace{W^H \begin{pmatrix} f(\mathbf{x}_1) \\ \vdots \\ f(\mathbf{x}_n) \end{pmatrix}}_{\text{fast transform, } O(n \log n)}$$

$$\theta_{\text{MLE}} = \operatorname*{argmin}_{\theta} \Biggl[\log\Biggl(\sum_{i=2}^n \frac{|y_i|^2}{\lambda_i}\Biggr) + \frac{1}{n}\sum_{i=1}^n \log(\lambda_i)\Biggr]$$

Then $\mu$ is a Gaussian random variable, and

$$P\Biggl[|\mu - \hat{\mu}_n| \le 2.58\sqrt{\Bigl(1 - \frac{n}{\lambda_1}\Bigr)\frac{1}{n^2}\sum_{i=2}^n \frac{|y_i|^2}{\lambda_i^2}}\Biggr] = 99\% \quad \text{for reasonable } f$$
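A slow but self-contained sketch of these formulas, using a dense eigendecomposition in place of the fast transform. Assumptions for illustration: a rank-1 lattice design with an arbitrary generating vector, the shift-invariant product Bernoulli kernel, and a fixed $\theta$ rather than $\theta_{\text{MLE}}$:

```python
import numpy as np

def b2(u):
    """Periodic Bernoulli polynomial B_2 on [0,1); integrates to zero."""
    return u * u - u + 1.0 / 6.0

# Rank-1 lattice design; z is an illustrative choice
n, d, theta = 2 ** 8, 2, 1.0
z = np.array([1, 101])
x = (np.outer(np.arange(n), z) % n) / n

# C_theta(x,t) = prod_j (1 + theta * B_2({x_j - t_j})) integrates to 1
# in each argument, as required above
diff = (x[:, None, :] - x[None, :, :]) % 1.0
C = np.prod(1.0 + theta * b2(diff), axis=2)

lam, V = np.linalg.eigh(C)                   # C = V diag(lam) V^T
lam, V = lam[::-1], V[:, ::-1]               # descending; lam[0] is lambda_1,
                                             # the constant eigenvector's value
fx = np.exp(x.sum(axis=1))
y = np.sqrt(n) * (V.T @ fx)                  # y = W^H f with W = sqrt(n) V

mu_n = fx.mean()                             # m_MLE, the sample mean
half_width = 2.58 * np.sqrt(
    max(1.0 - n / lam[0], 0.0) * (y[1:] ** 2 / lam[1:] ** 2).sum() / n ** 2
)
print(f"mu_n = {mu_n:.6f} +/- {half_width:.2e} (99% credible interval)")
```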
Option Pricing

Same problem as above ($\mu \approx \$13.12$); results so far:

| Abs. Error Tolerance | Method | Median Error | Worst 10% Accuracy | Worst 10% n | Worst 10% Time (s) |
|---|---|---|---|---|---|
| 1E−2 | IID | 2E−3 | 100% | 6.1E7 | 33.000 |
| 1E−2 | Scr. Sobol' | 1E−3 | 100% | 1.6E4 | 0.040 |
| 1E−2 | Scr. Sob. cont. var. | 2E−3 | 100% | 4.1E3 | 0.019 |
| 1E−2 | Bayes. Latt. | 2E−3 | 99% | 1.6E4 | 0.051 |
More General Error Criteria

Let's change the problem: find $\tilde{\mu}_n$ satisfying

$$|\mu - \tilde{\mu}_n| \le \max(\varepsilon, \varepsilon_r |\mu|), \qquad \text{given } |\mu - \hat{\mu}_n| \le \operatorname{err}_n(f)$$

H. et al. (2017+) show that if $n$ is large enough to make

$$\operatorname{err}_n(f) \le \frac{\max\bigl(\varepsilon, \varepsilon_r |\hat{\mu}_n + \operatorname{err}_n(f)|\bigr) + \max\bigl(\varepsilon, \varepsilon_r |\hat{\mu}_n - \operatorname{err}_n(f)|\bigr)}{2}$$

then a satisfactory choice is

$$\tilde{\mu}_n = \frac{(\hat{\mu}_n - \operatorname{err}_n(f))\max\bigl(\varepsilon, \varepsilon_r |\hat{\mu}_n + \operatorname{err}_n(f)|\bigr) + (\hat{\mu}_n + \operatorname{err}_n(f))\max\bigl(\varepsilon, \varepsilon_r |\hat{\mu}_n - \operatorname{err}_n(f)|\bigr)}{\max\bigl(\varepsilon, \varepsilon_r |\hat{\mu}_n + \operatorname{err}_n(f)|\bigr) + \max\bigl(\varepsilon, \varepsilon_r |\hat{\mu}_n - \operatorname{err}_n(f)|\bigr)} = \begin{cases} \hat{\mu}_n, & \varepsilon_r = 0 \\[1ex] \dfrac{\max\bigl(\hat{\mu}_n^2 - \operatorname{err}_n(f)^2,\ 0\bigr)}{\hat{\mu}_n}, & \varepsilon = 0 \end{cases}$$

H. et al. (2017+) extend this to the case where the solution is $v(\mu)$:

$$|v(\mu) - \tilde{v}| \le \max(\varepsilon, \varepsilon_r |v(\mu)|), \qquad \text{given } \mu \in [\hat{\mu}_n - \operatorname{err}_n(f),\ \hat{\mu}_n + \operatorname{err}_n(f)]$$
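A direct transcription of the error-splitting formula, with a hypothetical helper name:

```python
def optimal_estimate(mu_n, err_n, eps, eps_r):
    """Return (mu_tilde, ok): the estimate above and whether err_n already
    meets the stopping inequality for the tolerance max(eps, eps_r*|mu|)."""
    a = max(eps, eps_r * abs(mu_n + err_n))
    b = max(eps, eps_r * abs(mu_n - err_n))
    ok = err_n <= (a + b) / 2
    mu_tilde = ((mu_n - err_n) * a + (mu_n + err_n) * b) / (a + b)
    return mu_tilde, ok

# Pure relative criterion (eps = 0) recovers max(mu_n^2 - err_n^2, 0)/mu_n
print(optimal_estimate(2.0, 0.5, eps=0.0, eps_r=0.05))   # (1.875, False)
```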
Gaussian Probability

$$\mu = \int_{[\mathbf{a}, \mathbf{b}]} \frac{\exp\bigl(-\tfrac{1}{2}\mathbf{t}^T \Sigma^{-1} \mathbf{t}\bigr)}{\sqrt{(2\pi)^d \det(\Sigma)}}\,d\mathbf{t} \overset{\text{Genz (1993)}}{=} \int_{[0,1]^{d-1}} f(\mathbf{x})\,d\mathbf{x}$$

For some typical choice of $\mathbf{a}$, $\mathbf{b}$, $\Sigma$, with $d = 3$: $\mu \approx 0.6763$.

| Rel. Error Tolerance | Method | Median Error | Worst 10% Accuracy | Worst 10% n | Worst 10% Time (s) |
|---|---|---|---|---|---|
| 1E−2 | IID | 5E−4 | 100% | 8.1E4 | 0.020 |
| 1E−2 | Scr. Sobol' | 4E−5 | 100% | 1.0E3 | 0.005 |
| 1E−2 | Bayes. Latt. | 5E−5 | 100% | 4.1E3 | 0.023 |
| 1E−3 | IID | 9E−5 | 100% | 2.0E6 | 0.400 |
| 1E−3 | Scr. Sobol' | 2E−5 | 100% | 2.0E3 | 0.006 |
| 1E−3 | Bayes. Latt. | 3E−7 | 100% | 6.6E4 | 0.076 |
| 1E−4 | Scr. Sobol' | 4E−7 | 100% | 1.6E4 | 0.018 |
| 1E−4 | Bayes. Latt. | 6E−9 | 100% | 5.2E5 | 0.580 |
| 1E−4 | Bayes. Latt. Smth. | 1E−7 | 100% | 3.3E4 | 0.047 |
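A sketch that estimates such a box probability with scrambled Sobol' points. The $\mathbf{a}$, $\mathbf{b}$, $\Sigma$ below are assumed values (the slide's instance is not given), and the crude indicator integrand stands in for the smooth $(d-1)$-dimensional Genz (1993) transform:

```python
import numpy as np
from scipy import stats
from scipy.stats import qmc

# Assumed instance with d = 3; the slide's a, b, Sigma are not given
a = np.array([-np.inf, -1.0, -1.0])
b = np.array([1.0, 1.0, 2.0])
Sigma = np.array([[1.0, 0.5, 0.25],
                  [0.5, 1.0, 0.5],
                  [0.25, 0.5, 1.0]])
L = np.linalg.cholesky(Sigma)

sob = qmc.Sobol(3, scramble=True, seed=1)
u = np.clip(sob.random(2 ** 16), 1e-12, 1 - 1e-12)   # avoid ppf(0), ppf(1)
t = stats.norm.ppf(u) @ L.T                          # rows ~ N(0, Sigma)
mu_hat = np.mean(np.all((t >= a) & (t <= b), axis=1))
print(mu_hat)
```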
Bayesian Inference for Logistic Regression

$$\frac{\mu_j}{\mu_0} = \frac{\int_{\mathcal{X}} b_j\,L(\mathbf{b}\,|\,\text{data})\,\varrho(\mathbf{b})\,d\mathbf{b}}{\int_{\mathcal{X}} L(\mathbf{b}\,|\,\text{data})\,\varrho(\mathbf{b})\,d\mathbf{b}} = \text{Bayesian posterior expectation of } \beta_j$$

$$y_i \sim \operatorname{Ber}\Biggl(\frac{\exp(\beta_1 + \beta_2 x_i)}{1 + \exp(\beta_1 + \beta_2 x_i)}\Biggr), \qquad \varrho(\mathbf{b}) = \frac{\exp\bigl(-(b_1^2 + b_2^2)/2\bigr)}{2\pi}$$

Importance sample via an appropriately chosen normal: $\varepsilon = 0.001$, $n = 9\,000$–$17\,000$.
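A sketch of the importance-sampled posterior mean $\mu_j/\mu_0$. The data set, proposal center, and spread below are all assumptions for illustration; the actual approach chooses the normal proposal to fit the posterior:

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic data; the slide's data set is not given
x_data = rng.uniform(-2, 2, 40)
beta_true = np.array([0.5, 1.5])
p = 1 / (1 + np.exp(-(beta_true[0] + beta_true[1] * x_data)))
y_data = (rng.random(40) < p).astype(float)

def log_post(b):
    """Unnormalized log posterior: Bernoulli likelihood + N(0, I) prior."""
    eta = b[:, :1] + b[:, 1:] * x_data               # (n, 40) linear predictors
    loglik = (y_data * eta - np.logaddexp(0.0, eta)).sum(axis=1)
    return loglik - 0.5 * (b ** 2).sum(axis=1)

# Importance sample from an assumed normal proposal N(m, s^2 I);
# constants in the proposal log-density cancel in the ratio below
m, s, n = np.array([0.5, 1.5]), 0.8, 10 ** 5
b = m + s * rng.standard_normal((n, 2))
log_w = log_post(b) + 0.5 * ((b - m) ** 2).sum(axis=1) / s ** 2
w = np.exp(log_w - log_w.max())                      # stabilized weights
beta_mean = (w[:, None] * b).sum(axis=0) / w.sum()   # ratio mu_j / mu_0
print(beta_mean)
```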
Sobol' Indices

$Y = g(\mathbf{X})$, where $\mathbf{X} \sim \mathcal{U}[0,1]^d$; $\operatorname{SobolIndex}_j(\boldsymbol{\mu})$ describes how much coordinate $j$ of the input $\mathbf{X}$ influences the output $Y$ (Sobol', 1990; 2001):

$$\operatorname{SobolIndex}_j(\boldsymbol{\mu}) := \frac{\mu_1}{\mu_2 - \mu_3^2}, \quad j = 1, \ldots, d, \qquad \mu_1 := \int_{[0,1)^{2d}} \bigl[g(\mathbf{x}_j, \mathbf{x}'_{-j}) - g(\mathbf{x}')\bigr]\,g(\mathbf{x})\,d\mathbf{x}\,d\mathbf{x}'$$

$$\mu_2 := \int_{[0,1)^d} [g(\mathbf{x})]^2\,d\mathbf{x}, \qquad \mu_3 := \int_{[0,1)^d} g(\mathbf{x})\,d\mathbf{x}$$

where $(\mathbf{x}_j, \mathbf{x}'_{-j})$ takes coordinate $j$ from $\mathbf{x}$ and the remaining coordinates from $\mathbf{x}'$.

Jiménez Rugama and Gilquin (2017+) have used adaptive Sobol' sampling to compute Sobol' indices:

$$g(\mathbf{x}) = -x_1 + x_1 x_2 - x_1 x_2 x_3 + \cdots + x_1 x_2 x_3 x_4 x_5 x_6 \quad \text{(Bratley et al., 1992)}$$

With $\varepsilon = 1\mathrm{E}{-3}$, $\varepsilon_r = 0$:

| j | 1 | 2 | 3 | 4 | 5 | 6 |
|---|---|---|---|---|---|---|
| n | 65 536 | 32 768 | 16 384 | 16 384 | 2 048 | 2 048 |
| Sobol' Index$_j$ | 0.6529 | 0.1791 | 0.0370 | 0.0133 | 0.0015 | 0.0015 |
| Sobol' Index$_j$ | 0.6528 | 0.1792 | 0.0363 | 0.0126 | 0.0010 | 0.0012 |
| Sobol' Index$_j(\hat{\boldsymbol{\mu}}_n)$ | 0.6492 | 0.1758 | 0.0308 | 0.0083 | 0.0018 | 0.0039 |
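A sketch of the standard pick-freeze estimator for these first-order indices on the Bratley et al. (1992) function, using scrambled Sobol' points; n is fixed here, whereas the cited algorithm chooses n adaptively per index:

```python
import numpy as np
from scipy.stats import qmc

def g(x):
    """Bratley et al. (1992): -x1 + x1 x2 - x1 x2 x3 + ... (d = 6)."""
    signs = (-1.0) ** np.arange(1, x.shape[1] + 1)
    return np.cumprod(x, axis=1) @ signs

d = 6
u = qmc.Sobol(2 * d, scramble=True, seed=2).random(2 ** 16)
x, xp = u[:, :d], u[:, d:]                   # two independent U[0,1]^d blocks

gx, gxp = g(x), g(xp)
var = np.concatenate([gx, gxp]).var()        # estimates mu_2 - mu_3^2
for j in range(d):
    xj = xp.copy()
    xj[:, j] = x[:, j]                       # coordinate j from x, rest from x'
    S_j = np.mean(gx * (g(xj) - gxp)) / var  # estimates mu_1 / (mu_2 - mu_3^2)
    print(f"S_{j + 1} = {S_j:.4f}")
```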
Summary

- Established conservative, pre-asymptotic confidence intervals for reasonable integrands
  - for IID sampling,
  - for quasi-Monte Carlo (low discrepancy) sampling, and
  - via Bayesian cubature,
  to fit even general error criteria
- Quasi-Monte Carlo sampling can give faster convergence
- Bayesian cubature is practical if the sample matches the covariance kernel so that the vector-matrix operations are fast enough
- Algorithms implemented in open-source GAIL (Choi et al., 2013–2017), which is undergoing continual development
Further Questions

- Are the cones of reasonable integrands reasonable in practice?
- What data-driven necessary conditions are there for $f$ to be reasonable ($f \in \mathcal{C}$)?
- Can we get higher order convergence with higher order nets?
- What can be done for Markov chain Monte Carlo?
- What can be done for multi-level Monte Carlo (Giles, 2015)?

Slides available at speakerdeck.com/fjhickernell/samsi-qmc-december-2017-trends-workshop
References

Bayer, C., H. Hoel, E. von Schwerin, and R. Tempone. 2014. On non-asymptotic optimal stopping criteria in Monte Carlo simulations, SIAM J. Sci. Comput. 36, A869–A885.

Bratley, P., B. L. Fox, and H. Niederreiter. 1992. Implementation and tests of low-discrepancy sequences, ACM Trans. Model. Comput. Simul. 2, 195–213.

Choi, S.-C. T., Y. Ding, F. J. H., L. Jiang, Ll. A. Jiménez Rugama, D. Li, J. Rathinavel, X. Tong, K. Zhang, Y. Zhang, and X. Zhou. 2013–2017. GAIL: Guaranteed Automatic Integration Library (versions 1.0–2.2).

Genz, A. 1993. Comparison of methods for the computation of multivariate normal probabilities, Computing Science and Statistics 25, 400–405.

Giles, M. 2015. Multilevel Monte Carlo methods, Acta Numer. 24, 259–328.

H., F. J. 2017+. The trio identity for quasi-Monte Carlo error analysis, Monte Carlo and Quasi-Monte Carlo Methods: MCQMC, Stanford, USA, August 2016. Submitted for publication, arXiv:1702.01487.

H., F. J., S.-C. T. Choi, L. Jiang, and Ll. A. Jiménez Rugama. 2017+. Monte Carlo simulation, automatic stopping criteria for, Wiley StatsRef: Statistics Reference Online. To appear.

H., F. J., L. Jiang, Y. Liu, and A. B. Owen. 2013. Guaranteed conservative fixed width confidence intervals via Monte Carlo sampling, Monte Carlo and Quasi-Monte Carlo Methods 2012, pp. 105–128.

H., F. J. and Ll. A. Jiménez Rugama. 2016. Reliable adaptive cubature using digital sequences, Monte Carlo and Quasi-Monte Carlo Methods: MCQMC, Leuven, Belgium, April 2014, pp. 367–383. arXiv:1410.8615 [math.NA].

H., F. J., Ll. A. Jiménez Rugama, and D. Li. 2017+. Adaptive quasi-Monte Carlo methods for cubature, Contemporary Computational Mathematics: A Celebration of the 80th Birthday of Ian Sloan. To appear, arXiv:1702.01491 [math.NA].

Jiménez Rugama, Ll. A. and L. Gilquin. 2017+. Reliable error estimation for Sobol' indices, Statistics and Computing. In press.

Jiménez Rugama, Ll. A. and F. J. H. 2016. Adaptive multidimensional integration based on rank-1 lattices, Monte Carlo and Quasi-Monte Carlo Methods: MCQMC, Leuven, Belgium, April 2014, pp. 407–422. arXiv:1411.1966.

Nefedova, Yu. S. and I. G. Shevtsova. 2012. On non-uniform convergence rate estimates in the central limit theorem, Theory Probab. Appl. 57, 62–97.

Shevtsova, I. 2014. On the absolute constants in the Berry-Esseen type inequalities for identically distributed summands, Dokl. Akad. Nauk 456, 650–654.

Sobol', I. M. 1990. On sensitivity estimation for nonlinear mathematical models, Matem. Mod. 2, no. 1, 112–118.

Sobol', I. M. 2001. Global sensitivity indices for nonlinear mathematical models and their Monte Carlo estimates, Math. Comput. Simul. 55, no. 1-3, 271–280.
Basic phrases for greeting and assisting costumers
 
How to Break the cycle of negative Thoughts
How to Break the cycle of negative ThoughtsHow to Break the cycle of negative Thoughts
How to Break the cycle of negative Thoughts
 
Model Attribute Check Company Auto Property
Model Attribute  Check Company Auto PropertyModel Attribute  Check Company Auto Property
Model Attribute Check Company Auto Property
 
The Challenger.pdf DNHS Official Publication
The Challenger.pdf DNHS Official PublicationThe Challenger.pdf DNHS Official Publication
The Challenger.pdf DNHS Official Publication
 

QMC Program: Trends and Advances in Monte Carlo Sampling Algorithms Workshop, When to Stop Sampling: Answers and Further Questions - Fred Hickernell, Dec 12, 2017

  • 1. When to Stop Sampling: Answers and Further Questions. Fred J. Hickernell, Department of Applied Mathematics & Center for Interdisciplinary Scientific Computation, Illinois Institute of Technology. hickernell@iit.edu, mypages.iit.edu/~hickernell. Thanks to the Guaranteed Automatic Integration Library (GAIL) team and friends. Supported by NSF-DMS-1522687. Thanks to SAMSI, the QMC Program organizers, and the Workshop Organizers. Trends and Advances in Monte Carlo Sampling Algorithms, December 12, 2017.
  • 2–3. Multidimensional Integration Examples:
    \mu = \int_{\mathbb{R}^d} f(x)\,\nu(dx) = option price, where f(x) is the discounted payoff determined by market forces x;
    \mu = \int_{\mathcal{X}} g(x)\,dx = P(X \in \mathcal{X}) = a probability, where g is the probability density function;
    \mu_j/\mu_0 = \int b_j L(b \mid \text{data})\,\varrho(b)\,db \big/ \int L(b \mid \text{data})\,\varrho(b)\,db = the Bayesian posterior expectation of \beta_j, where \varrho is the prior density.
    In general,
    \mu := \int_{\mathcal{X}} g(x)\,dx = \int_{\mathcal{X}} f(x)\,\nu(dx) =: \mathbb{E}[f(X)] \approx \mu_n := \frac{1}{n} \sum_{i=1}^{n} f(x_i) = \int_{\mathcal{X}} f(x)\,\hat{\nu}_n(dx).
    How big should n be so that |\mu - \mu_n| \le \varepsilon?
  • 4. Error of (Different Kinds of) Monte Carlo Methods:
    \mathrm{err}_n(f) := |\mu - \mu_n| = \left| \int_{\mathcal{X}} f(x)\,(\nu - \hat{\nu}_n)(dx) \right|,
    where the integrand f can be made small by importance sampling and the discrepancy \nu - \hat{\nu}_n can be made small by clever sampling. The goal is a data-driven bound \widehat{\mathrm{err}}_n(f) with
    \mathrm{err}_n(f) \le \widehat{\mathrm{err}}_n(f) for all reasonable f, \quad \widehat{\mathrm{err}}_n(f) \le \varepsilon by choosing n large enough.
    The error factors as \mathrm{err}_n(f) = \mathrm{size}(f) [unknown] \times \mathrm{size}(\nu - \hat{\nu}_n) [maybe known] (H., 2017+).
    One may choose \hat{\nu}_n more carefully than IID; for random algorithms/integrands, the inequalities above should hold with high probability; the algorithms described here are in the Guaranteed Automatic Integration Library (GAIL) (Choi et al., 2013–2017).
  • 5. Stopping IID Sampling:
    \mu := \int_{\mathcal{X}} f(x)\,\nu(dx) =: \mathbb{E}[f(X)] \approx \mu_n := \frac{1}{n} \sum_{i=1}^{n} f(x_i), \quad x_i \overset{\mathrm{IID}}{\sim} \nu, \quad \mathrm{err}_n(f) = |\mu - \mu_n|.
    The Central Limit Theorem says that
    P[\mathrm{err}_n(f) \le \widehat{\mathrm{err}}_n(f)] \approx 99\% \text{ as } n \to \infty, \quad \widehat{\mathrm{err}}_n(f) = \frac{2.58\,\sigma}{\sqrt{n}}, \quad \sigma^2 = \mathrm{var}(f(X)).
  • 6. Stopping IID Sampling (continued): use an independent pilot sample of size n_\sigma to estimate the unknown \sigma, so that \mu_n := \frac{1}{n} \sum_{i=n_\sigma+1}^{n_\sigma+n} f(x_i) and
    \widehat{\mathrm{err}}_n(f) = \frac{2.58 \times 1.2\,\hat{\sigma}}{\sqrt{n}}, \quad \hat{\sigma}^2 = \frac{1}{n_\sigma - 1} \sum_{i=1}^{n_\sigma} \bigl( f(x_i) - \bar{f}_{n_\sigma} \bigr)^2,
    where \bar{f}_{n_\sigma} is the pilot sample mean and the factor 1.2 inflates \hat{\sigma} to guard against underestimating \sigma.
  • 7. Stopping IID Sampling (continued): choose
    n = \left( \frac{2.58 \times 1.2\,\hat{\sigma}}{\varepsilon} \right)^2 \text{ to make } \widehat{\mathrm{err}}_n(f) \le \varepsilon.
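    A minimal sketch of this two-stage rule in Python, assuming a user-supplied integrand f and sampler for \nu; the function name two_stage_mean and the pilot size are illustrative, not GAIL's API:

    # Two-stage IID stopping rule (slides 6-7): pilot sample for sigma,
    # then a fresh sample of size n = (2.58 * 1.2 * sigma_hat / eps)^2.
    import math
    import numpy as np

    def two_stage_mean(f, sample, eps, pilot_n=1024, z=2.58, inflate=1.2):
        """Estimate mu = E[f(X)] to within eps (CLT heuristic, 99% level)."""
        pilot = f(sample(pilot_n))            # stage 1: estimate the variance
        sigma_hat = np.std(pilot, ddof=1)     # unbiased sample std. dev.
        n = int(math.ceil((z * inflate * sigma_hat / eps) ** 2))
        return np.mean(f(sample(n)))          # stage 2: fresh sample for mu_n

    # Example: mu = E[X_1 X_2] for X uniform on [0,1]^2 (true value 1/4)
    rng = np.random.default_rng(7)
    sample = lambda n: rng.random((n, 2))
    f = lambda x: x[:, 0] * x[:, 1]
    print(two_stage_mean(f, sample, eps=1e-3))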
  • 8. Stopping IID Sampling (continued): H. et al. (2013) (see also Bayer et al. (2014) and H. et al. (2017+)) assume that the integrand belongs to the cone of reasonable integrands
    C = \{ f \in L^4(\mathcal{X}, \nu) : \mathrm{kurt}(f(X)) \le \kappa_{\mathrm{up}} \}.
    Then the n \to \infty limit may be dropped by using Berry–Esseen inequalities (Nefedova and Shevtsova, 2012; Shevtsova, 2014) with a more complicated formula for the error bound. The bound on the kurtosis also allows us to be highly confident that \sigma \le 1.2\hat{\sigma} via Cantelli's inequality:
    P[\mathrm{err}_n(f) \le \widehat{\mathrm{err}}_n(f)] \ge 99.5\%, \quad P(\sigma \le 1.2\hat{\sigma}) \ge 99.5\%,
    with \hat{\sigma}^2 the pilot sample variance as on slide 6. Here
    \overline{\mathrm{err}}_n(f) = \min\left\{ \delta > 0 : \Phi\!\left( -\sqrt{n}\,\delta/\sigma \right) + \frac{1}{\sqrt{n}} \min\!\left( 0.3328\,(\kappa_{\mathrm{up}}^{3/4} + 0.429),\; \frac{18.1139\,\kappa_{\mathrm{up}}^{3/4}}{(1 + \sqrt{n}\,\delta/\sigma)^3} \right) \le 0.25\% \right\},
    and \widehat{\mathrm{err}}_n(f) is the same but with \sigma replaced by 1.2\hat{\sigma}. Choose n to make \widehat{\mathrm{err}}_n(f) \le \varepsilon.
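    The left-hand side of the condition defining \overline{\mathrm{err}}_n(f) decreases in \delta, so the minimum can be found by bisection. A sketch, assuming the formula as reconstructed above (the name err_hat and its arguments are illustrative):

    # Berry-Esseen-based error bound (slide 8), solved for delta by bisection.
    import math

    def phi(t):                     # standard normal CDF
        return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))

    def err_hat(n, sigma_tilde, kappa_up, alpha=0.0025):
        """Smallest delta with Phi(-sqrt(n) d/s) + Berry-Esseen term <= alpha."""
        rootn = math.sqrt(n)
        def tail(delta):            # decreasing in delta
            t = rootn * delta / sigma_tilde
            be = min(0.3328 * (kappa_up ** 0.75 + 0.429),
                     18.1139 * kappa_up ** 0.75 / (1.0 + t) ** 3)
            return phi(-t) + be / rootn
        lo, hi = 0.0, 10.0 * sigma_tilde
        while tail(hi) > alpha:     # grow the bracket until tail(hi) <= alpha
            hi *= 2.0
        for _ in range(60):         # bisection: tail(lo) > alpha >= tail(hi)
            mid = 0.5 * (lo + hi)
            lo, hi = (mid, hi) if tail(mid) > alpha else (lo, mid)
        return hi

    print(err_hat(n=10_000, sigma_tilde=1.2 * 0.5, kappa_up=3.0))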
  • 9. Option Pricing:
    \mu = \text{fair price} = \int_{\mathbb{R}^d} e^{-rT} \max\!\left( \frac{1}{d} \sum_{j=1}^{d} S_j - K,\, 0 \right) \frac{e^{-z^T z/2}}{(2\pi)^{d/2}}\,dz \approx \$13.12,
    where S_j = S_0\,e^{(r - \sigma^2/2)jT/d + \sigma x_j} is the stock price at time jT/d, x = Az, AA^T = \Sigma = \bigl( \min(i,j)\,T/d \bigr)_{i,j=1}^{d}, T = 1/4, and d = 13 here.
    Abs. Error Tolerance | Method | Median Error | Worst 10% Accuracy | Worst 10% n | Time (s)
    1E-2                 | IID    | 2E-3         | 100%               | 6.1E7       | 33.000
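    For concreteness, a sketch of the plain IID estimator for this arithmetic-mean Asian call; r, sigma, S0, and K are assumed values not stated on the slide, so the result matches the quoted $13.12 only if they happen to agree with the talk's parameters:

    # IID Monte Carlo for the Asian call on slide 9 (parameters assumed).
    import numpy as np

    d, T = 13, 0.25
    r, sigma, S0, K = 0.05, 0.5, 100.0, 100.0      # illustrative values
    tj = T * np.arange(1, d + 1) / d               # monitoring times jT/d
    Sigma = np.minimum.outer(tj, tj)               # Brownian-path covariance
    A = np.linalg.cholesky(Sigma)                  # x = A z, A A^T = Sigma

    def payoff(z):
        x = z @ A.T                                # Brownian motion at times tj
        S = S0 * np.exp((r - sigma**2 / 2) * tj + sigma * x)
        return np.exp(-r * T) * np.maximum(S.mean(axis=1) - K, 0.0)

    rng = np.random.default_rng(0)
    print(payoff(rng.standard_normal((10**5, d))).mean())   # approximates mu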
  • 10–17. Faster Convergence with Quasi-Monte Carlo Sampling:
    \mathrm{err}(f, n) := |\mu - \mu_n| = \left| \int_{\mathcal{X}} f(x)\,(\nu - \hat{\nu}_n)(dx) \right|;
    f can be made small by importance sampling, and \nu - \hat{\nu}_n can be made small by clever (low discrepancy) sampling.
  • 18. Stopping Quasi-Monte Carlo Sampling:
    \mu = \int_{[0,1]^d} f(x)\,dx = \hat{f}(0), \quad f(x) = \sum_{k} \hat{f}(k)\,\phi_k(x), \quad \hat{f}(k) = \int_{[0,1]^d} f(x)\,\overline{\phi_k(x)}\,dx.
    The \phi_k are complex exponentials (for lattices) or Walsh functions (for nets).
  • 19. Stopping Quasi-Monte Carlo Sampling (continued): \mu = \hat{f}(0) \approx \mu_n = \tilde{f}_n(0), and
    \mathrm{err}_n(f) \le \overline{\mathrm{err}}_n(f) := \sum_{k \in \text{dual set} \setminus \{0\}} |\hat{f}(k)|, \quad \text{dual set} = \{ k : \phi_k(x_i) = \phi_k(x_1),\; i = 1, \ldots, n \},
    i.e., the wavenumbers for which \phi_k looks like a constant on the sample. The discrete transform
    \tilde{f}_n(k) = \frac{1}{n} \sum_{i=1}^{n} f(x_i)\,\overline{\phi_k(x_i)} \;\forall k
    can be computed in O(n \log(n)) operations.
  • 20. Stopping Quasi-Monte Carlo Sampling (continued): H. and Jiménez Rugama (2016) and Jiménez Rugama and H. (2016) (see also H. et al. (2017+)) define a cone, C, of reasonable integrands whose true Fourier coefficients do not decay too erratically. Then
    \mathrm{err}_n(f) \le \overline{\mathrm{err}}_n(f) \le \widehat{\mathrm{err}}_n(f) := C(n) \sum_{\text{moderate } k} |\tilde{f}_n(k)| \quad \forall f \in C.
    Control variates are possible (H. et al., 2017+).
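    A minimal sketch of the lattice version of this idea: on a rank-1 lattice the discrete transform is a single FFT of the sample values, and a band of "moderate" coefficients gives a data-driven error proxy. The generating vector z and the inflation factor C(n) = 2 are illustrative choices, not the tuned values in GAIL:

    # Rank-1 lattice cubature with an FFT-based error proxy (slides 19-20).
    import numpy as np

    m, d = 12, 2
    n = 2**m
    z = np.array([1, 182667])                  # assumed generating vector
    i = np.arange(n)
    x = (np.outer(i, z) % n) / n               # rank-1 lattice in [0,1)^2

    f = lambda x: np.exp(np.sum(x, axis=1))    # a smooth test integrand
    y = np.fft.fft(f(x)) / n                   # entry kappa aggregates f-hat(k)
                                               # over {k : k.z = kappa mod n}
    mu_n = y[0].real                           # sample mean = f_n(0)

    moderate = np.abs(y[n // 8 : n // 4])      # a band of moderate wavenumbers
    err_hat = 2.0 * moderate.sum()             # C(n) = 2: illustrative inflation
    print(mu_n, err_hat)                       # true mu = (e - 1)^2 ~ 2.9525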
  • 21. A Reasonable and an Unreasonable Integrand. [Figure-only slide.]
  • 22–23. Option Pricing (continued): the same problem as on slide 9, adding scrambled Sobol' sampling and then scrambled Sobol' with control variates:
    Abs. Error Tolerance | Method               | Median Error | Worst 10% Accuracy | Worst 10% n | Time (s)
    1E-2                 | Scr. Sobol'          | 1E-3         | 100%               | 1.6E4       | 0.040
    1E-2                 | Scr. Sob. cont. var. | 2E-3         | 100%               | 4.1E3       | 0.019
  • 24. Bayesian Cubature with Nice Covariance Kernels: assume f \sim \mathcal{GP}(m, s^2 C_\theta), where C_\theta : [0,1]^d \times [0,1]^d \to \mathbb{R} and m, s, \theta are to be inferred by maximum likelihood estimation (MLE). Suppose
    \mathsf{C} = \bigl( C_\theta(x_i, x_j) \bigr)_{i,j=1}^{n} = \frac{1}{n} W \Lambda W^H, \quad \Lambda = \mathrm{diag}(\lambda_1, \ldots, \lambda_n), \quad \mathsf{C}\mathbf{1} = \lambda_1 \mathbf{1}, \quad \int_{[0,1]^d} C_\theta(x, t)\,dt = 1 \;\forall x.
    MLE gives
    m_{\mathrm{MLE}} = \mu_n = \frac{1}{n} \sum_{i=1}^{n} f(x_i), \quad s^2_{\mathrm{MLE}} = \frac{1}{n^2} \sum_{i=2}^{n} \frac{|y_i|^2}{\lambda_i}, \quad y = W^T \bigl( f(x_1), \ldots, f(x_n) \bigr)^T \text{ (a fast transform, } O(n \log n)\text{)},
    \theta_{\mathrm{MLE}} = \operatorname*{argmin}_{\theta} \left[ \log\!\left( \sum_{i=2}^{n} \frac{|y_i|^2}{\lambda_i} \right) + \frac{1}{n} \sum_{i=1}^{n} \log(\lambda_i) \right].
    Then \mu is a Gaussian random variable, and
    P\!\left[ |\mu - \mu_n| \le 2.58 \sqrt{ \left( 1 - \frac{n}{\lambda_1} \right) \frac{1}{n^2} \sum_{i=2}^{n} \frac{|y_i|^2}{\lambda_i^2} } \right] = 99\% \text{ for reasonable } f.
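    A sketch in d = 1 of why matching the sample to the kernel makes this fast: on the lattice \{0, 1/n, \ldots, (n-1)/n\} a shift-invariant kernel gives a circulant Gram matrix, so the eigenvalues come from one FFT of its first row. The Bernoulli-polynomial kernel with a fixed theta (not MLE-tuned) and the credible-interval half-width as reconstructed above are assumptions of this sketch:

    # Fast Bayesian cubature sketch (slide 24) in one dimension.
    import numpy as np

    n = 2**10
    x = np.arange(n) / n
    theta = 1.0
    # C_theta(s,t) = 1 + theta * B2({s-t}); its integral over t is 1
    B2 = lambda t: t**2 - t + 1/6                 # second Bernoulli polynomial
    first_row = 1.0 + theta * B2(x)               # circulant Gram matrix row
    lam = np.fft.fft(first_row).real              # eigenvalues; lam[0] = lambda_1

    f = lambda t: np.exp(t)
    y = np.fft.fft(f(x))                          # y = W^T f(x), O(n log n)
    mu_n = y[0].real / n                          # = sample mean = m_MLE

    s2 = np.sum(np.abs(y[1:])**2 / lam[1:]) / n**2
    half_width = 2.58 * np.sqrt(max(1 - n / lam[0], 0.0)
                                * np.sum(np.abs(y[1:])**2 / lam[1:]**2) / n**2)
    print(mu_n, np.sqrt(s2), half_width)          # true mu = e - 1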
  • 25. Option Pricing (continued): the complete comparison for the slide-9 problem:
    Abs. Error Tolerance | Method               | Median Error | Worst 10% Accuracy | Worst 10% n | Time (s)
    1E-2                 | IID                  | 2E-3         | 100%               | 6.1E7       | 33.000
    1E-2                 | Scr. Sobol'          | 1E-3         | 100%               | 1.6E4       | 0.040
    1E-2                 | Scr. Sob. cont. var. | 2E-3         | 100%               | 4.1E3       | 0.019
    1E-2                 | Bayes. Latt.         | 2E-3         | 99%                | 1.6E4       | 0.051
  • 26. More General Error Criteria: let's change the problem to finding \tilde{\mu}_n satisfying |\mu - \tilde{\mu}_n| \le \max(\varepsilon, \varepsilon_r |\mu|), given |\mu - \mu_n| \le \widehat{\mathrm{err}}_n(f). H. et al. (2017+) show that if n is large enough to make
    \widehat{\mathrm{err}}_n(f) \le \frac{ \max(\varepsilon, \varepsilon_r |\mu_n + \widehat{\mathrm{err}}_n(f)|) + \max(\varepsilon, \varepsilon_r |\mu_n - \widehat{\mathrm{err}}_n(f)|) }{2},
    then a satisfactory choice is
    \tilde{\mu}_n = \frac{ (\mu_n - \widehat{\mathrm{err}}_n(f)) \max(\varepsilon, \varepsilon_r |\mu_n + \widehat{\mathrm{err}}_n(f)|) + (\mu_n + \widehat{\mathrm{err}}_n(f)) \max(\varepsilon, \varepsilon_r |\mu_n - \widehat{\mathrm{err}}_n(f)|) }{ \max(\varepsilon, \varepsilon_r |\mu_n + \widehat{\mathrm{err}}_n(f)|) + \max(\varepsilon, \varepsilon_r |\mu_n - \widehat{\mathrm{err}}_n(f)|) }
    = \mu_n when \varepsilon_r = 0, and = \max(\mu_n^2 - \widehat{\mathrm{err}}_n(f)^2, 0)/\mu_n when \varepsilon = 0.
    H. et al. (2017+) extend this to the case where the solution is v(\mu): find \tilde{v} with |v(\mu) - \tilde{v}| \le \max(\varepsilon, \varepsilon_r |v(\mu)|), given \mu \in [\mu_n - \widehat{\mathrm{err}}_n(f), \mu_n + \widehat{\mathrm{err}}_n(f)].
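    The choice of \tilde{\mu}_n is a one-line computation; a direct transcription, assuming the formula as reconstructed above:

    # mu_tilde for the mixed absolute/relative criterion (slide 26).
    def mu_tilde(mu_n, err_hat, eps, eps_r):
        hi = max(eps, eps_r * abs(mu_n + err_hat))
        lo = max(eps, eps_r * abs(mu_n - err_hat))
        # valid once err_hat <= (hi + lo) / 2
        return ((mu_n - err_hat) * hi + (mu_n + err_hat) * lo) / (hi + lo)

    print(mu_tilde(mu_n=1.00, err_hat=0.02, eps=0.0, eps_r=0.05))

    Setting eps_r = 0 returns mu_n itself, and setting eps = 0 reproduces max(mu_n^2 - err_hat^2, 0)/mu_n, matching the two special cases on the slide.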
  • 27. Gaussian Probability:
    \mu = \int_{[a,b]} \frac{ \exp\!\left( -\frac{1}{2} t^T \Sigma^{-1} t \right) }{ \sqrt{(2\pi)^d \det(\Sigma)} }\,dt \overset{\text{Genz (1993)}}{=} \int_{[0,1]^{d-1}} f(x)\,dx.
    For some typical choice of a, b, \Sigma with d = 3, \mu \approx 0.6763.
    Rel. Error Tolerance | Method             | Median Error | Worst 10% Accuracy | Worst 10% n | Time (s)
    1E-2                 | IID                | 5E-4         | 100%               | 8.1E4       | 0.020
    1E-2                 | Scr. Sobol'        | 4E-5         | 100%               | 1.0E3       | 0.005
    1E-2                 | Bayes. Latt.       | 5E-5         | 100%               | 4.1E3       | 0.023
    1E-3                 | IID                | 9E-5         | 100%               | 2.0E6       | 0.400
    1E-3                 | Scr. Sobol'        | 2E-5         | 100%               | 2.0E3       | 0.006
    1E-3                 | Bayes. Latt.       | 3E-7         | 100%               | 6.6E4       | 0.076
    1E-4                 | Scr. Sobol'        | 4E-7         | 100%               | 1.6E4       | 0.018
    1E-4                 | Bayes. Latt.       | 6E-9         | 100%               | 5.2E5       | 0.580
    1E-4                 | Bayes. Latt. Smth. | 1E-7         | 100%               | 3.3E4       | 0.047
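    For reference, a plain Monte Carlo sketch of such a probability (not Genz's transformation, which is what the tabulated methods integrate); a, b, and Sigma here are illustrative stand-ins for the "typical choice" on the slide:

    # Plain MC estimate of P(a <= X <= b) for X ~ N(0, Sigma), d = 3.
    import numpy as np

    d = 3
    Sigma = np.array([[1.0, 0.5, 0.25],
                      [0.5, 1.0, 0.5],
                      [0.25, 0.5, 1.0]])          # assumed covariance
    a, b = -np.ones(d), 2 * np.ones(d)            # assumed box [a, b]

    rng = np.random.default_rng(1)
    z = rng.standard_normal((10**6, d)) @ np.linalg.cholesky(Sigma).T
    print(np.mean(np.all((a <= z) & (z <= b), axis=1)))   # ~ mu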
  • 28. Bayesian Inference for Logistic Regression:
    \mu_j/\mu_0 = \int_{\mathcal{X}} b_j L(b \mid \text{data})\,\varrho(b)\,db \big/ \int_{\mathcal{X}} L(b \mid \text{data})\,\varrho(b)\,db = the Bayesian posterior expectation of \beta_j, with
    y_i \sim \mathrm{Ber}\!\left( \frac{\exp(\beta_1 + \beta_2 x_i)}{1 + \exp(\beta_1 + \beta_2 x_i)} \right), \quad \varrho(b) = \frac{\exp(-(b_1^2 + b_2^2)/2)}{2\pi}.
    Importance sample via an appropriately chosen normal. For \varepsilon = 0.001, n = 9,000–17,000.
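    A sketch of the importance-sampling estimate of the posterior mean, which is the ratio \mu_j/\mu_0 above estimated with self-normalized weights; the synthetic data and the N(m, I) proposal are illustrative, since the slide only says "an appropriately chosen normal":

    # Self-normalized importance sampling for the logistic posterior (slide 28).
    import numpy as np

    rng = np.random.default_rng(2)
    x_dat = rng.uniform(-2, 2, size=30)           # assumed covariates
    p_true = 1 / (1 + np.exp(-(0.5 + 1.0 * x_dat)))
    y_dat = (rng.random(30) < p_true).astype(float)

    def log_post(b):                              # log-likelihood + log-prior
        eta = b[:, [0]] + b[:, [1]] * x_dat       # shape (n, 30)
        loglik = np.sum(y_dat * eta - np.log1p(np.exp(eta)), axis=1)
        return loglik - 0.5 * np.sum(b**2, axis=1)   # standard normal prior

    n, m = 10**5, np.array([0.5, 1.0])            # proposal N(m, I), m assumed
    b = rng.standard_normal((n, 2)) + m
    logw = log_post(b) + 0.5 * np.sum((b - m)**2, axis=1)   # posterior/proposal
    w = np.exp(logw - logw.max())                 # stabilized weights
    print((w[:, None] * b).sum(axis=0) / w.sum()) # E[beta_j | data], j = 1, 2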
  • 29. Sobol' Indices: Y = g(X), where X \sim \mathcal{U}[0,1]^d; Sobol' Index_j(\mu) describes how much coordinate j of the input X influences the output Y (Sobol', 1990; 2001):
    \text{Sobol' Index}_j(\mu) := \frac{\mu_1}{\mu_2 - \mu_3^2}, \quad j = 1, \ldots, d,
    \mu_1 := \int_{[0,1)^{2d}} g(x)\,\bigl[ g(x_j, x'_{-j}) - g(x') \bigr]\,dx\,dx', \quad \mu_2 := \int_{[0,1)^d} [g(x)]^2\,dx, \quad \mu_3 := \int_{[0,1)^d} g(x)\,dx.
    Jiménez Rugama and Gilquin (2017+) have used adaptive Sobol' sampling to compute Sobol' indices for
    g(x) = -x_1 + x_1 x_2 - x_1 x_2 x_3 + \cdots + x_1 x_2 x_3 x_4 x_5 x_6 (Bratley et al., 1992), with \varepsilon = 1E-3, \varepsilon_r = 0:
    j                     | 1      | 2      | 3      | 4      | 5      | 6
    n                     | 65,536 | 32,768 | 16,384 | 16,384 | 2,048  | 2,048
    Sobol' Index_j        | 0.6529 | 0.1791 | 0.0370 | 0.0133 | 0.0015 | 0.0015
    Sobol' Index_j        | 0.6528 | 0.1792 | 0.0363 | 0.0126 | 0.0010 | 0.0012
    Sobol' Index_j(mu_n)  | 0.6492 | 0.1758 | 0.0308 | 0.0083 | 0.0018 | 0.0039
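    A sketch of the \mu_1, \mu_2, \mu_3 estimators for the Bratley et al. (1992) test function with plain IID sampling (the talk uses adaptive Sobol' sampling instead); with n = 2^16 the estimates should land near the tabulated values:

    # First-order Sobol' index estimator (slide 29), plain IID version.
    import numpy as np

    d = 6
    g = lambda x: sum((-1)**k * np.prod(x[:, :k], axis=1)
                      for k in range(1, d + 1))   # Bratley test function

    rng = np.random.default_rng(3)
    n = 2**16
    x, xp = rng.random((n, d)), rng.random((n, d))

    def sobol_index(j):
        x_mix = xp.copy()
        x_mix[:, j] = x[:, j]                 # coordinate j from x, rest from x'
        mu1 = np.mean(g(x) * (g(x_mix) - g(xp)))
        mu2, mu3 = np.mean(g(x)**2), np.mean(g(x))
        return mu1 / (mu2 - mu3**2)

    print([round(sobol_index(j), 4) for j in range(d)])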
  • 30. Summary: Established conservative, pre-asymptotic confidence intervals for reasonable integrands: for IID sampling; for quasi-Monte Carlo (low discrepancy) sampling; and via Bayesian cubature; these fit even general error criteria. Quasi-Monte Carlo sampling can give faster convergence. Bayesian cubature is practical if the sample matches the covariance kernel so that the vector-matrix operations are fast enough. The algorithms are implemented in the open-source GAIL (Choi et al., 2013–2017), which is undergoing continual development.
  • 31. Further Questions: Are the cones of reasonable integrands reasonable in practice? What data-driven necessary conditions are there for f to be reasonable (f \in C)? Can we get higher-order convergence with higher-order nets? What can be done for Markov chain Monte Carlo? What can be done for multi-level Monte Carlo (Giles, 2015)?
  • 32. Slides available at speakerdeck.com/fjhickernell/samsi-qmc-december-2017-trends-workshop
  • 33. References I
    Bayer, C., H. Hoel, E. von Schwerin, and R. Tempone. 2014. On nonasymptotic optimal stopping criteria in Monte Carlo simulations, SIAM J. Sci. Comput. 36, A869–A885.
    Bratley, P., B. L. Fox, and H. Niederreiter. 1992. Implementation and tests of low-discrepancy sequences, ACM Trans. Model. Comput. Simul. 2, 195–213.
    Choi, S.-C. T., Y. Ding, F. J. H., L. Jiang, Ll. A. Jiménez Rugama, D. Li, J. Rathinavel, X. Tong, K. Zhang, Y. Zhang, and X. Zhou. 2013–2017. GAIL: Guaranteed Automatic Integration Library (versions 1.0–2.2).
    Genz, A. 1993. Comparison of methods for the computation of multivariate normal probabilities, Computing Science and Statistics 25, 400–405.
    Giles, M. 2015. Multilevel Monte Carlo methods, Acta Numer. 24, 259–328.
    H., F. J. 2017+. The trio identity for quasi-Monte Carlo error analysis, Monte Carlo and Quasi-Monte Carlo Methods: MCQMC, Stanford, USA, August 2016. Submitted for publication, arXiv:1702.01487.
    H., F. J., S.-C. T. Choi, L. Jiang, and Ll. A. Jiménez Rugama. 2017+. Monte Carlo simulation, automatic stopping criteria for, Wiley StatsRef: Statistics Reference Online. To appear.
    H., F. J., L. Jiang, Y. Liu, and A. B. Owen. 2013. Guaranteed conservative fixed width confidence intervals via Monte Carlo sampling, Monte Carlo and Quasi-Monte Carlo Methods 2012, pp. 105–128.
  • 34. References II
    H., F. J. and Ll. A. Jiménez Rugama. 2016. Reliable adaptive cubature using digital sequences, Monte Carlo and Quasi-Monte Carlo Methods: MCQMC, Leuven, Belgium, April 2014, pp. 367–383. arXiv:1410.8615 [math.NA].
    H., F. J., Ll. A. Jiménez Rugama, and D. Li. 2017+. Adaptive quasi-Monte Carlo methods for cubature, Contemporary Computational Mathematics: A Celebration of the 80th Birthday of Ian Sloan. To appear, arXiv:1702.01491 [math.NA].
    Jiménez Rugama, Ll. A. and L. Gilquin. 2017+. Reliable error estimation for Sobol' indices, Statistics and Computing. In press.
    Jiménez Rugama, Ll. A. and F. J. H. 2016. Adaptive multidimensional integration based on rank-1 lattices, Monte Carlo and Quasi-Monte Carlo Methods: MCQMC, Leuven, Belgium, April 2014, pp. 407–422. arXiv:1411.1966.
    Nefedova, Yu. S. and I. G. Shevtsova. 2012. On non-uniform convergence rate estimates in the central limit theorem, Theory Probab. Appl. 57, 62–97.
    Shevtsova, I. 2014. On the absolute constants in the Berry–Esseen-type inequalities for identically distributed summands, Dokl. Akad. Nauk 456, 650–654.
    Sobol', I. M. 1990. On sensitivity estimation for nonlinear mathematical models, Matem. Mod. 2, no. 1, 112–118.
    Sobol', I. M. 2001. Global sensitivity indices for nonlinear mathematical models and their Monte Carlo estimates, Math. Comput. Simul. 55, no. 1–3, 271–280.