Density estimation
by Randomized Quasi-Monte Carlo
Pierre L’Ecuyer
Joint work with Amal Ben Abdellah, Art B. Owen, and Florian Puchhammer
SAMSI, Raleigh-Durham, North Carolina, May 2018
What this talk is about
Quasi-Monte Carlo (QMC) and randomized QMC (RQMC) methods have been studied
extensively for estimating an integral, E[X].
Can they be useful for estimating the entire distribution of X?
E.g., estimating a density, a cdf, some quantiles, etc.
When hours or days of computing time are required to perform simulation runs, reporting
only a confidence interval on the mean is a waste of information!
People routinely look at empirical distributions via histograms, for example.
More refined methods: kernel density estimators (KDEs).
Can RQMC improve such density estimators, and by how much?
Setting
Classical density estimation was developed in the context where independent observations
X1, . . . , Xn are given and one wishes to estimate the density f from which they come.
Here we assume that X1, . . . , Xn are generated by simulation from a stochastic model.
We can choose n and we have some freedom on how the simulation is performed.
The Xi's are realizations of a random variable X = g(U) ∈ R with density f, where U = (U1, . . . , Us) ∼ U(0,1)^s and g(u) can be computed easily for any u ∈ (0,1)^s.
Can we obtain a better estimate of f with RQMC instead of MC? How much better?
Density Estimation
Suppose we estimate the density f over a finite interval (a, b).
Let ˆfn(x) denote the density estimator at x, with sample size n.
We use the following measures of error:
MISE = mean integrated squared error = \int_a^b E[(\hat{f}_n(x) - f(x))^2]\, dx = IV + ISB,
IV = integrated variance = \int_a^b Var[\hat{f}_n(x)]\, dx,
ISB = integrated squared bias = \int_a^b (E[\hat{f}_n(x)] - f(x))^2\, dx.
Density Estimation
Simple histogram: Partition [a, b] into m intervals of size h = (b − a)/m and define
\hat{f}_n(x) = \frac{n_j}{nh} for x ∈ I_j = [a + (j − 1)h, a + jh), j = 1, ..., m,
where n_j is the number of observations X_i that fall in interval j.
Kernel Density Estimator (KDE) : Select kernel k (unimodal symmetric density centered at
0) and bandwidth h > 0 (serves as horizontal stretching factor for the kernel). The KDE is
defined by
\hat{f}_n(x) = \frac{1}{nh} \sum_{i=1}^{n} k\left(\frac{x - X_i}{h}\right).
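For concreteness, here is a minimal Python sketch of both estimators for a given sample X_1, . . . , X_n, with a Gaussian kernel for the KDE (the sample and parameter values are illustrative only):

```python
import numpy as np
from scipy.stats import norm

def histogram_density(x_sample, a, b, m, x_eval):
    """Histogram density estimator with m equal bins of width h = (b - a)/m over [a, b]."""
    h = (b - a) / m
    counts, _ = np.histogram(x_sample, bins=m, range=(a, b))
    j = np.clip(((x_eval - a) / h).astype(int), 0, m - 1)   # bin index of each evaluation point
    return counts[j] / (len(x_sample) * h)

def kde_gaussian(x_sample, h, x_eval):
    """KDE with Gaussian kernel k and bandwidth h: (1/(n h)) * sum_i k((x - X_i)/h)."""
    z = (x_eval[:, None] - x_sample[None, :]) / h
    return norm.pdf(z).sum(axis=1) / (len(x_sample) * h)

# Illustrative usage on a standard normal sample.
x_sample = np.random.default_rng(0).standard_normal(10_000)
x_eval = np.linspace(-2, 2, 64)
f_hist = histogram_density(x_sample, -2, 2, 64, x_eval)
f_kde = kde_gaussian(x_sample, 0.1, x_eval)
```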
Asymptotic convergence with Monte Carlo for smooth f
For g : R → R, define
R(g) = \int_a^b (g(x))^2\, dx, \qquad \mu_r(g) = \int_{-\infty}^{\infty} x^r g(x)\, dx, for r = 0, 1, 2, . . .
For histograms and KDEs, when n → ∞ and h → 0:
AMISE = AIV + AISB ∼ \frac{C}{nh} + B h^{\alpha}.

            C        B                    α
Histogram   1        R(f′)/12             2
KDE         μ₀(k²)   (μ₂(k))² R(f″)/4     4
The asymptotically optimal h is
h^* = \left(\frac{C}{B\alpha n}\right)^{1/(\alpha+1)}
and it gives AMISE = K n^{-\alpha/(\alpha+1)}.

            C        B                    α    h^*                                    AMISE
Histogram   1        R(f′)/12             2    (nR(f′)/6)^{-1/3}                      O(n^{-2/3})
KDE         μ₀(k²)   (μ₂(k))² R(f″)/4     4    (μ₀(k²)/((μ₂(k))² R(f″) n))^{1/5}      O(n^{-4/5})

To estimate h^*, one can estimate R(f′) and R(f″) via KDE (plugin).
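A minimal sketch of this computation, assuming a Gaussian kernel (for which μ₀(k²) = 1/(2√π) and μ₂(k) = 1) and a hypothetical plug-in value for R(f″):

```python
import numpy as np

def amise_optimal_bandwidth(n, C, B, alpha):
    """h* = (C / (B * alpha * n))**(1/(alpha + 1)) from the AMISE model above."""
    return (C / (B * alpha * n)) ** (1.0 / (alpha + 1))

C = 1.0 / (2.0 * np.sqrt(np.pi))   # mu_0(k^2) for the standard Gaussian kernel
R_fpp = 0.2                        # hypothetical plug-in estimate of R(f'')
B = R_fpp / 4.0                    # (mu_2(k))^2 * R(f'') / 4 with mu_2(k) = 1
h_star = amise_optimal_bandwidth(n=2**19, C=C, B=B, alpha=4)
```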
Asymptotic convergence with RQMC for smooth f
Idea: Replace U1, . . . , Un by RQMC points.
RQMC does not change the bias.
For a KDE with smooth k, one could hope (perhaps) to get
AIV = C n^{-\beta} h^{-1} for some \beta > 1, instead of C n^{-1} h^{-1}.
If the IV is reduced, the optimal h can be taken smaller to reduce the ISB as well
(re-balance) and then reduce the MISE.
Unfortunately, things are not so simple.
Roughly, decreasing h increases the variation of the function in the estimator. So we may
have something like
AIV = C n^{-\beta} h^{-\delta}, or IV ≈ C n^{-\beta} h^{-\delta} in some bounded region.
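As a point of reference for the experiments below, here is a minimal sketch of an RQMC-driven KDE using scrambled Sobol' points from scipy's qmc module (the function g, the sample size, and the bandwidth are placeholders):

```python
import numpy as np
from scipy.stats import norm, qmc

def kde_rqmc(g, s, n, h, x_eval, seed=0):
    """KDE of X = g(U) at the points x_eval, with U taken from a scrambled Sobol' net."""
    u = qmc.Sobol(d=s, scramble=True, seed=seed).random(n)   # RQMC points in [0,1)^s
    x = g(u)                                                 # n realizations of X
    z = (x_eval[:, None] - x[None, :]) / h
    return norm.pdf(z).sum(axis=1) / (n * h)                 # Gaussian kernel

# Toy usage: g(u) = Phi^{-1}(u_1) in s = 1 dimension, so the true density is N(0, 1).
f_hat = kde_rqmc(lambda u: norm.ppf(u[:, 0]), s=1, n=2**14, h=0.05,
                 x_eval=np.linspace(-2, 2, 64))
```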
Elementary QMC Bounds (Recall)
Integration error for g : [0, 1)s → R with point set Pn = {u0, . . . , un−1} ⊂ [0, 1)s:
E_n = \frac{1}{n}\sum_{i=0}^{n-1} g(\mathbf{u}_i) - \int_{[0,1)^s} g(\mathbf{u})\, d\mathbf{u}.
Koksma-Hlawka inequality: |E_n| ≤ V_{HK}(g)\, D^*(P_n), where
V_{HK}(g) = \sum_{\emptyset \neq v \subseteq S} \int_{[0,1)^s} \left|\frac{\partial^{|v|} g}{\partial \mathbf{u}_v}\right| d\mathbf{u} (Hardy-Krause (HK) variation),
D^*(P_n) = \sup_{\mathbf{u} \in [0,1)^s} \left| \mathrm{vol}[\mathbf{0}, \mathbf{u}) - \frac{|P_n \cap [\mathbf{0}, \mathbf{u})|}{n} \right| (star-discrepancy).
There are explicit point sets for which D^*(P_n) = O((\log n)^{s-1}/n) = O(n^{-1+\epsilon}).
There are explicit RQMC constructions for which E[E_n] = 0 and Var[E_n] = O(n^{-2+\epsilon}).
Also
|E_n| ≤ V_2(g)\, D_2(P_n)
where
V_2^2(g) = \sum_{\emptyset \neq v \subseteq S} \int_{[0,1)^s} \left(\frac{\partial^{|v|} g}{\partial \mathbf{u}_v}\right)^2 d\mathbf{u} (square L_2 variation),
D_2^2(P_n) = \int_{[0,1)^s} \left( \mathrm{vol}[\mathbf{0}, \mathbf{u}) - \frac{|P_n \cap [\mathbf{0}, \mathbf{u})|}{n} \right)^2 d\mathbf{u} (square L_2 star-discrepancy).
There are explicit constructions for which D_2(P_n) = O(n^{-1+\epsilon}).
Moreover, if P_n is a digital net randomized by a nested uniform scramble (NUS) and V_2(g) < ∞, then E[E_n] = 0 and Var[E_n] = O(V_2^2(g)\, n^{-3+\epsilon}) for all \epsilon > 0.
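As an aside, scipy can compute such discrepancies directly; a minimal sketch comparing a scrambled Sobol' net with independent points, assuming qmc.discrepancy accepts the 'L2-star' method:

```python
import numpy as np
from scipy.stats import qmc

n, s = 2**10, 2
sobol = qmc.Sobol(d=s, scramble=True, seed=0).random(n)
iid = np.random.default_rng(0).random((n, s))
# Smaller values indicate points that are more evenly spread over [0,1)^s.
d_sobol = qmc.discrepancy(sobol, method="L2-star")
d_iid = qmc.discrepancy(iid, method="L2-star")
```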
Bounding the AIV under RQMC for a KDE
KDE density estimator at a single point x:
\hat{f}_n(x) = \frac{1}{n}\sum_{i=1}^{n} \frac{1}{h}\, k\left(\frac{x - g(\mathbf{U}_i)}{h}\right) = \frac{1}{n}\sum_{i=1}^{n} \tilde{g}(\mathbf{U}_i).
With RQMC points U_i, this is an RQMC estimator of E[\tilde{g}(\mathbf{U})] = \int_{[0,1)^s} \tilde{g}(\mathbf{u})\, d\mathbf{u} = E[\hat{f}_n(x)].
RQMC does not change the bias, but may reduce Var[ˆfn(x)], and then the IV.
To get RQMC variance bounds, we need bounds on the variation of ˜g.
The partial derivatives are:
\frac{\partial^{|v|}}{\partial \mathbf{u}_v} \tilde{g}(\mathbf{u}) = \frac{1}{h} \frac{\partial^{|v|}}{\partial \mathbf{u}_v} k\left(\frac{x - g(\mathbf{u})}{h}\right).
We assume they exist and are uniformly bounded, e.g., for the Gaussian kernel k.
Expanding via the chain rule gives terms in h^{-j} for j = 2, . . . , |v| + 1.
One of the terms for v = S grows as h^{-s-1} k^{(s)}((g(\mathbf{u}) - x)/h) \prod_{j=1}^{s} g_j(\mathbf{u}) = O(h^{-s-1}) when h → 0, so this AIV bound grows as h^{-2s-2}. Not so good!
Improvement by a Change of Variable, in One Dimension
Change of variable w = (x − g(u))/h.
In one dimension (s = 1), we have dw/du = −g′(u)/h, so
V_{HK}(\tilde{g}) = \frac{1}{h} \int_0^1 \left| k'\left(\frac{x - g(u)}{h}\right) \frac{-g'(u)}{h} \right| du = \frac{1}{h} \int_{-\infty}^{\infty} |k'(w)|\, dw = O(h^{-1}).
Then, if k and g are continuously differentiable, with RQMC points having D^*(P_n) = O(n^{-1+\epsilon}), we obtain AIV = O(n^{-2+\epsilon} h^{-2}).
With h = \Theta(n^{-1/3}), this gives AMISE = O(n^{-4/3}).
A similar argument gives
V_2^2(\tilde{g}) = \frac{1}{h^2} \int_0^1 \left( k'\left(\frac{x - g(u)}{h}\right) \frac{-g'(u)}{h} \right)^2 du \le \frac{L_g}{h^3} \int_{-\infty}^{\infty} (k'(w))^2\, dw = O(h^{-3})
if |g′| ≤ L_g, and then with NUS: AIV = O(n^{-3+\epsilon} h^{-3}).
With h = \Theta(n^{-3/7}), this gives AMISE = O(n^{-12/7}).
Higher Dimensions
Let s = 2 and v = {1, 2}. With the change of variable (u1, u2) → (w, u2), the Jacobian is
|dw/du1| = |g1(u1, u2)/h|, where gj = ∂g/∂uj . If |g2| and |g12/g1| are bounded by a constant L,
\int_{[0,1)^2} \left| \frac{\partial^{|v|} \tilde{g}}{\partial \mathbf{u}_v} \right| d\mathbf{u}
= \frac{1}{h} \int_{[0,1)^2} \left| \frac{\partial^2}{\partial u_1 \partial u_2} k\left(\frac{x - g(\mathbf{u})}{h}\right) \right| du_1\, du_2
= \frac{1}{h} \int_{[0,1)^2} \left| k''\left(\frac{x - g(\mathbf{u})}{h}\right) \frac{g_1(\mathbf{u})}{h} \frac{g_2(\mathbf{u})}{h} - k'\left(\frac{x - g(\mathbf{u})}{h}\right) \frac{g_{12}(\mathbf{u})}{h} \right| du_1\, du_2
= \frac{1}{h} \int_0^1 \int_{-\infty}^{\infty} \left| k''(w) \frac{g_2(\mathbf{u})}{h} - k'(w) \frac{g_{12}(\mathbf{u})}{g_1(\mathbf{u})} \right| dw\, du_2
\le \frac{L}{h} \left[ \mu_0(|k''|)/h + \mu_0(|k'|) \right] = O(h^{-2}).
This provides a bound of O(h^{-2}) for V_{HK}(\tilde{g}), then AIV = O(n^{-2+\epsilon} h^{-4}).
Generalizing to s ≥ 2 gives V_{HK}(\tilde{g}) = O(h^{-s}), AIV = O(n^{-2+\epsilon} h^{-2s}), MISE = O(n^{-4/(2+s)}).
This beats MC for s < 3 and gives the same rate for s = 3. Not very satisfactory.
Empirical Evaluation with Linear Model in a limited region
Regardless of the asymptotic bounds, the true IV may behave better than for MC for pairs
(n, h) of interest. We consider the model
MISE = IV + ISB ≈ C n^{-\beta} h^{-\delta} + B h^{\alpha}.
This model is meant only for a limited region of (n, h) values of interest, not for everywhere, and not necessarily asymptotically. The optimal h for this model satisfies
h^{\alpha+\delta} = \frac{C\delta}{B\alpha}\, n^{-\beta},
and it gives MISE ≈ K n^{-\alpha\beta/(\alpha+\delta)}.
We can take the asymptotic α (known) and B (estimated as for MC).
To estimate C, β, and δ, estimate the IV over a grid of values of (n, h), and fit a linear
regression model:
log IV ≈ log C − β log n − δ log h.
For each (n, h), we estimate the IV by making n_r independent replications of the RQMC density estimator, computing the empirical variance at n_e evaluation points (stratified) over [a, b], and multiplying their sum by (b − a)/n_e. We use logs in base 2, since n is a power of 2.
After estimating the model parameters, we can test out-of-sample with independent simulation experiments at pairs (n, h) with h = \hat{h}^*(n).
For test cases in which the density is known, we can compute a MISE estimate at each point, and obtain new parameter estimates \tilde{K} and \tilde{\nu} of the model MISE ≈ K n^{-\nu}.
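A minimal sketch of this fit; the IV grid below is a placeholder with the right shape, standing in for the values estimated by simulation:

```python
import numpy as np

def fit_iv_model(ns, hs, iv):
    """Least-squares fit of log2(IV) ≈ log2(C) - beta*log2(n) - delta*log2(h)."""
    N, H = np.meshgrid(np.log2(ns), np.log2(hs), indexing="ij")
    A = np.column_stack([np.ones(N.size), -N.ravel(), -H.ravel()])
    coef, *_ = np.linalg.lstsq(A, np.log2(iv).ravel(), rcond=None)
    log2C, beta, delta = coef
    return 2.0 ** log2C, beta, delta

ns = 2 ** np.arange(14, 20)                 # n = 2^14, ..., 2^19
hs = 2.0 ** (-(np.arange(6) / 2.0) - 5.0)   # six bandwidths (illustrative values)
# iv[i, j] would be the IV estimated by simulation for (ns[i], hs[j]); placeholder here.
iv = 0.2 * np.outer(ns ** -1.5, hs ** -2.5)
C_hat, beta_hat, delta_hat = fit_iv_model(ns, hs, iv)
```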
Numerical illustrations
For each example, we first estimate the model parameters by regression using a grid of pairs (n, h) with n = 2^14, 2^15, . . . , 2^19 and (for KDE) h = h_0, . . . , h_5 with h_j = h_0 2^{j/2}.
For histograms, m = (b − a)/h must be an integer.
For each n and each RQMC method, we make nr = 100 independent replications and take
ne = 64 evaluation points over bounded interval [a, b]. Also tried larger ne.
RQMC Point sets:
Independent points (Crude Monte Carlo),
Stratification: stratified unit cube,
Sobol+LMS: Sobol’ points with left matrix scrambling (LMS) + digital random shift,
Sobol+NUS: Sobol’ points with NUS.
Simple test example with standard normal density
Let Z_1, . . . , Z_s be i.i.d. standard normal, generated by inversion, and X = (Z_1 + · · · + Z_s)/\sqrt{s}. Then X ∼ N(0, 1).
Here we can estimate the IV, ISB, and MISE accurately.
We can compute \int_a^b f(x)\, dx exactly.
Take (a, b) = (−2, 2). We have B = 0.04754 with α = 4 for KDE.
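A minimal sketch of this test case, generating the X_i with scrambled Sobol' points (these realizations can then be fed to a KDE as in the earlier sketch):

```python
import numpy as np
from scipy.stats import norm, qmc

def sum_normals(u):
    """X = (Z_1 + ... + Z_s)/sqrt(s) with Z_j = Phi^{-1}(U_j), so X ~ N(0, 1)."""
    z = norm.ppf(u)                 # inversion, one column per coordinate
    return z.sum(axis=1) / np.sqrt(u.shape[1])

s, n = 3, 2**16
u = qmc.Sobol(d=s, scramble=True, seed=1).random(n)
x = sum_normals(u)                  # realizations X_1, ..., X_n for the KDE over (-2, 2)
```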
Estimates of model parameters for KDE
IV ≈ C n^{-β} h^{-δ},  MISE ≈ κ n^{-ν}

method   MC      NUS
s        1       1       2       3       5       20
R²       0.999   0.999   1.000   0.995   0.979   0.993
β        1.017   2.787   2.110   1.788   1.288   1.026
δ        1.144   2.997   3.195   3.356   2.293   1.450
α        3.758   3.798   3.846   3.860   3.782   3.870
ν̃        0.770   1.600   1.172   0.981   0.827   0.730
LGM      16.96   34.05   24.37   20.80   17.91   17.07

LGM = −log₂(MISE) for n = 2^19.
[Figure: Convergence of the MISE in log-log scale (log₂(MISE) vs log₂(n)) for the one-dimensional example, comparing Independent points, Stratification, Sobol+LMS, and Sobol+NUS.]
[Figure: Convergence of the MISE (log₂(MISE) vs log₂(n)) for s = 2, for histograms (left) and KDE (right), comparing Independent points, Stratification, Sobol+LMS, and Sobol+NUS.]
[Figure: LGM (n = 2^19) as a function of s, for the histogram (left) and the KDE (right), for estimation over (−2, 2), comparing Independent points, Stratification, Sobol+LMS, and Sobol+NUS.]
[Figure: Estimated parameters β and δ as functions of s, with the histogram (left) and the KDE (right), over (−2, 2), for Independent points, Stratification, Sobol+LMS, and Sobol+NUS.]
KDE Estimate over (−4, 4)

method   MC      NUS
s        1       1       2       3       5       20
β        1.020   2.434   1.999   1.728   1.272   1.006
δ        1.138   2.432   2.972   3.168   2.256   1.464
ν̃        0.772   1.514   1.138   0.980   0.817   0.767
LGM      16.89   30.07   23.68   20.52   17.72   16.95
KDE Estimate in the tail
[Figure: KDE (blue) vs true density (red) in the tail (roughly x from −5.1 to −3.6, density values of order 10⁻⁵), with RQMC point sets of size n = 2^19: lattice + shift, Sobol + digital shift, Sobol + LMS-19 bits + shift, Sobol + LMS-31 bits + shift.]
Displacement of a cantilever beam
Displacement D of a cantilever beam with horizontal and vertical loads:
D = \frac{4L^3}{Ewt} \sqrt{\frac{Y^2}{t^4} + \frac{X^2}{w^4}}
where L = 100, w = 4, t = 2 (in inches), and X, Y, and E are independent and normally distributed with means and standard deviations:

Description       Symbol   Mean        St. dev.
Young's modulus   E        2.9 × 10^7  1.45 × 10^6
Horizontal load   X        500         100
Vertical load     Y        1000        100
We want to estimate the density of D over (a, b) = (0.336, 1.561) (about 99.5% of density).
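A minimal sketch of this model with scrambled Sobol' points, assuming the usual square-root form of the benchmark displacement formula:

```python
import numpy as np
from scipy.stats import norm, qmc

def cantilever_displacement(u, L=100.0, w=4.0, t=2.0):
    """Displacement D of the cantilever beam, with (E, X, Y) generated by inversion."""
    E = norm.ppf(u[:, 0], loc=2.9e7, scale=1.45e6)   # Young's modulus
    X = norm.ppf(u[:, 1], loc=500.0, scale=100.0)    # horizontal load
    Y = norm.ppf(u[:, 2], loc=1000.0, scale=100.0)   # vertical load
    return 4.0 * L**3 / (E * w * t) * np.sqrt(Y**2 / t**4 + X**2 / w**4)

u = qmc.Sobol(d=3, scramble=True, seed=2).random(2**16)
D = cantilever_displacement(u)   # realizations for the KDE over (0.336, 1.561)
```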
Parameter estimates of the linear regression model for IV and MISE:
IV ≈ C n^{-β} h^{-δ},  MISE ≈ κ n^{-ν}

Point set     Ĉ         β̂       δ̂       ν̂
Histogram, α = 2
Independent   0.888     1.001   1.021   0.797
Sobol+LMS     0.134     1.196   1.641   0.848
Sobol+NUS     0.136     1.194   1.633   0.848
KDE with Gaussian kernel, α = 4
Independent   0.210     0.993   1.037   0.789
Sobol+LMS     5.28E-4   1.619   2.949   0.932
Sobol+NUS     5.24E-4   1.621   2.955   0.932

Good fit: we have R² > 0.99 in all cases.
[Figure: log₂(IV) vs log₂(n) for the cantilever example with the KDE, for Independent points and Sobol+NUS, with h = 2^{-9.5}, 2^{-7.5}, and 2^{-5.5}.]
A weighted sum of lognormals
X = \sum_{j=1}^{s} w_j \exp(Y_j)
where Y = (Y_1, . . . , Y_s)^t ∼ N(µ, C).
Let C = AA^t. To generate Y, generate Z ∼ N(0, I) and put Y = µ + AZ.
We will use principal component decomposition (PCA).
This has several applications. In one of them, with w_j = s_0(s − j + 1)/s, e^{-ρ} max(X − K, 0) is the payoff of a financial option based on an average price at s observation times, under a GBM process. We want to estimate the density of the positive payoffs.
Numerical experiment: Take s = 12, ρ = 0.037, s_0 = 100, K = 101, and C defined indirectly via Y_j = Y_{j−1} + (µ − σ²/2)/s + σ(B(j/s) − B((j−1)/s)), where Y_0 = 0, σ = 0.12136, µ = 0.1, and B is a standard Brownian motion.
We will estimate the density of e^{-ρ}(X − K) over (a, b) = (0, 50).
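A minimal sketch of the option-payoff version with PCA sampling; it uses a plain average of the GBM prices, which is a common form of this example and may differ in detail from the weights w_j on the slide:

```python
import numpy as np
from scipy.stats import norm, qmc

s, s0, K, rho, mu, sigma = 12, 100.0, 101.0, 0.037, 0.1, 0.12136
tj = np.arange(1, s + 1) / s
Cov = sigma**2 * np.minimum.outer(tj, tj)       # covariance of sigma * B(t_j)
eigval, eigvec = np.linalg.eigh(Cov)
A = eigvec[:, ::-1] * np.sqrt(eigval[::-1])     # PCA decomposition C = A A^t

def payoff(u):
    """Discounted payoff e^{-rho} * max(X - K, 0), X = average of the GBM prices."""
    Z = norm.ppf(u)                              # (n, s) standard normals
    Y = (mu - 0.5 * sigma**2) * tj + Z @ A.T     # log-price ratios at the s times
    S = s0 * np.exp(Y)
    X = S.mean(axis=1)
    return np.exp(-rho) * np.maximum(X - K, 0.0)

u = qmc.Sobol(d=s, scramble=True, seed=3).random(2**16)
pay = payoff(u)
positive = pay[pay > 0]                          # density of positive payoffs over (0, 50)
```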
[Figure: Histogram of the positive payoff values from n = 10^6 independent simulation runs (payoff on the horizontal axis, frequency ×10³ on the vertical axis).]
IV as a function of n for KDE
[Figure: log₂(IV) vs log₂(n) for the KDE, for Independent points and Sobol+NUS, with h = 2^{-3.5}, 2^{-1.5}, and 2^{0.5}.]
Example: A stochastic activity network
Gives precedence relations between activities. Activity k has random duration Yk (also length
of arc k) with known cumulative distribution function (cdf) Fk(y) := P[Yk ≤ y].
Project duration T = (random) length of longest path from source to sink.
May want to estimate E[T], P[T > x], a quantile, density of T, etc.
[Figure: the activity network, with source node 0, sink node 8, intermediate nodes 1–7, and arcs carrying the durations Y_0, . . . , Y_12.]
Simulation
Algorithm: to generate T:
for k = 0, . . . , 12 do
Generate U_k ∼ U(0, 1) and let Y_k = F_k^{-1}(U_k)
Compute X = T = h(Y_0, . . . , Y_12) = f(U_0, . . . , U_12)
Monte Carlo: Repeat n times independently to obtain n realizations X_1, . . . , X_n of T.
Estimate E[T] = \int_{(0,1)^s} f(\mathbf{u})\, d\mathbf{u} by \bar{X}_n = \frac{1}{n} \sum_{i=0}^{n-1} X_i.
To estimate P(T > x), take X = I[T > x] instead.
RQMC: Replace the n independent points by an RQMC point set of size n.
Numerical illustration from Elmaghraby (1977):
Y_k ∼ N(µ_k, σ_k²) for k = 0, 1, 3, 10, 11, and Y_k ∼ Expon(1/µ_k) otherwise.
µ0, . . . , µ12: 13.0, 5.5, 7.0, 5.2, 16.5, 14.7, 10.3, 6.0, 4.0, 20.0, 3.2, 3.2, 16.5.
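A minimal sketch of this simulation; the arc list and the standard deviations σ_k below are hypothetical (the precedence graph is given only in the figure and the σ_k are not listed), so the code illustrates the algorithm rather than reproducing the exact example:

```python
import numpy as np
from scipy.stats import norm, expon, qmc

# Hypothetical topology with nodes 0..8 (0 = source, 8 = sink) and 13 arcs.
arcs = [(0, 1), (0, 2), (0, 3), (1, 4), (2, 4), (2, 5), (3, 5), (1, 6),
        (4, 6), (4, 7), (5, 7), (6, 8), (7, 8)]
mu = [13.0, 5.5, 7.0, 5.2, 16.5, 14.7, 10.3, 6.0, 4.0, 20.0, 3.2, 3.2, 16.5]
normal_arcs = {0, 1, 3, 10, 11}
sigma = [m / 4.0 for m in mu]        # hypothetical standard deviations

def durations(u):
    """Generate the 13 arc durations Y_k = F_k^{-1}(U_k) by inversion."""
    y = np.empty(13)
    for k in range(13):
        if k in normal_arcs:
            y[k] = norm.ppf(u[k], loc=mu[k], scale=sigma[k])
        else:
            y[k] = expon.ppf(u[k], scale=mu[k])   # exponential with mean mu_k
    return y

def longest_path(y):
    """T = longest path from node 0 to node 8 (arcs listed in topological order)."""
    dist = np.full(9, -np.inf)
    dist[0] = 0.0
    for k, (a, b) in enumerate(arcs):
        dist[b] = max(dist[b], dist[a] + y[k])
    return dist[8]

points = qmc.Sobol(d=13, scramble=True, seed=4).random(2**14)
T = np.array([longest_path(durations(u)) for u in points])    # realizations of T
```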
Naive idea: replace each Yk by its expectation. Gives T = 48.2.
Results of an experiment with n = 100 000.
Histogram of values of T is a density estimator that gives more information than a
confidence interval on E[T] or P[T > x].
[Figure: Histogram of the n = 100 000 values of T (ranging from 0 to about 200), with markers at the naive value T = 48.2, the threshold x = 90, the mean 64.2, and the 0.99 quantile ξ̂_0.99 = 131.8.]
[Figure: log₂(IV) as a function of log₂(h) and log₂(n), with the KDE and Sobol+NUS, for the SAN example.]
Results for SAN Network
Parameter estimates of the regression models for the SAN network, n = 2^19.

Point set       C       β       δ       γ̂*      ν̂*
Histogram, α = 2
Independent     0.892   0.999   1.005   0.333   0.665
Stratif.        0.897   1.001   1.006   0.333   0.666
Lattice+shift   0.841   0.988   0.988   0.331   0.662
Sobol+LMS       0.894   1.006   1.024   0.333   0.665
Sobol+NUS       0.888   1.005   1.026   0.332   0.665
KDE with Gaussian kernel, α = 4
Independent     0.254   1.001   1.004   0.199   0.799
Stratif.        0.248   1.001   1.012   0.199   0.799
Lattice+shift   0.222   0.969   0.947   0.196   0.784
Sobol+LMS       0.242   1.021   1.089   0.201   0.803
Sobol+NUS       0.246   1.023   1.087   0.201   0.804
The SAN example, Sobol+NUS vs Independent points, summary for n = 2^19 = 524288.

                       Independent points      Sobol+NUS
Density     m or h     log₂IV    IV rate       log₂IV    IV rate
Histogram
            64         -19.32    -1.003        -19.78    -1.039
            256        -17.28    -0.999        -17.40    -1.011
            1024       -15.27    -1.001        -15.30    -1.003
            4096       -13.27    -0.998        -13.27    -1.000
Kernel
            0.10       -16.64    -0.999        -16.71    -1.006
            0.13       -17.31    -0.999        -17.42    -1.007
            0.18       -17.96    -0.999        -18.18    -1.015
            0.24       -18.64    -0.999        -18.92    -1.029
            0.32       -19.33    -0.998        -19.79    -1.035
            0.43       -19.99    -0.998        -20.71    -1.064
Conditional Monte Carlo for Derivative Estimation
The density is f(x) = F′(x), so could we just take the derivative of a cdf estimator?
The derivative of the empirical cdf \hat{F}_n(x) is zero almost everywhere, ... it does not work!
We need a smooth cdf estimator.
Let X = X(θ, ω) with parameter θ ∈ R. We want to estimate g′(θ) = \frac{d}{dθ} E[X(θ, ω)].
When X(·, ω) is not continuous in θ (+ other conditions), we cannot interchange the derivative and the expectation, and cannot take \frac{d}{dθ} X(θ, ω) as an estimator of g′(θ).
It is often possible to replace X by a conditional expectation X_e = E[X | G], where G contains partial information, not enough to reveal X, but enough to compute X_e.
If X_e is smooth enough in θ, we may have
g′(θ) = E\left[\frac{d}{dθ} X_e(θ, ω)\right] = E[X_e′].
This is gradient estimation by IPA + CMC; see L'Ecuyer and Perron (1994) and Asmussen (2018).
Then, we can simulate X_e with RQMC instead of MC.
Application to the SAN Example
We want a smooth estimate of P[T ≤ t], whose sample derivative will be an unbiased
estimate of the density at t.
Naive estimator: Generate T and compute X = I[T ≤ t].
Repeat n times and average.
[Figure: the activity network from the earlier slide, with source node 0, sink node 8, and arc durations Y_0, . . . , Y_12.]
Conditional Monte Carlo estimator of P[T ≤ t]. Generate the Y_j's only for the 8 arcs that do not belong to the cut L = {4, 5, 6, 8, 9}, and replace I[T ≤ t] by its conditional expectation given those Y_j's,
X_e = P[T ≤ t | {Y_j, j ∉ L}].
This makes the integrand continuous in the U_j's and in t.
To compute X_e: for each l ∈ L, say from a_l to b_l, compute the length α_l of the longest path from the source to a_l, and the length β_l of the longest path from b_l to the sink.
The longest path that passes through arc l does not exceed t iff α_l + Y_l + β_l ≤ t, which occurs with probability P[Y_l ≤ t − α_l − β_l] = F_l[t − α_l − β_l].
Since the Y_l are independent, we obtain
X_e = \prod_{l ∈ L} F_l[t − α_l − β_l].
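A minimal sketch of this CMC estimator, reusing the hypothetical network of the earlier SAN sketch (with that topology the set {4, 5, 6, 8, 9} is illustrative and not necessarily a cut):

```python
import numpy as np
from scipy.stats import norm, expon

# Reuses `arcs`, `mu`, `sigma`, and `normal_arcs` from the earlier SAN sketch.
cut = {4, 5, 6, 8, 9}

def arc_cdf(k, y):
    """F_k(y) for arc k (normal or exponential, as in the example)."""
    if k in normal_arcs:
        return norm.cdf(y, loc=mu[k], scale=sigma[k])
    return expon.cdf(y, scale=mu[k])

def longest_to(node, y):
    """Longest path length from the source (node 0) to `node`, using only non-cut arcs."""
    dist = np.full(9, -np.inf)
    dist[0] = 0.0
    for k, (a, b) in enumerate(arcs):
        if k not in cut:
            dist[b] = max(dist[b], dist[a] + y[k])
    return dist[node]

def longest_from(node, y):
    """Longest path length from `node` to the sink (node 8), using only non-cut arcs."""
    dist = np.full(9, -np.inf)
    dist[8] = 0.0
    for k, (a, b) in reversed(list(enumerate(arcs))):
        if k not in cut:
            dist[a] = max(dist[a], dist[b] + y[k])
    return dist[node]

def cmc_cdf_estimate(y, t):
    """X_e = prod_{l in cut} F_l(t - alpha_l - beta_l), given the non-cut durations in y."""
    prod = 1.0
    for l in cut:
        a, b = arcs[l]
        prod *= arc_cdf(l, t - longest_to(a, y) - longest_from(b, y))
    return prod
```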
To estimate the density of T, just take the derivative with respect to t:
X_e′ = \frac{d}{dt} X_e(t, ω) \overset{\text{w.p.1}}{=} \sum_{j ∈ L} f_j[t − α_j − β_j] \prod_{l ∈ L,\, l ≠ j} F_l[t − α_l − β_l].
One can prove that
E[X_e′] = \frac{d}{dt} E[X_e] = \frac{d}{dt} P[T ≤ t] = f_T(t)
via the dominated convergence theorem. See L'Ecuyer (1990).
Here, with MC, the IV converges as O(1/n) and there is no bias, so MISE = IV.
Now, we can apply RQMC to simulate X_e′. It is a smooth function of the uniforms if each inverse cdf F_j^{-1} and each density f_j are smooth.
Then we can get a convergence rate near O(n^{-2}) for the IV and the MISE.
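A minimal sketch of the corresponding density estimator, reusing the functions from the previous sketch:

```python
from scipy.stats import norm, expon

# Reuses `cut`, `arcs`, `mu`, `sigma`, `normal_arcs`, `arc_cdf`, `longest_to`, `longest_from`.
def arc_pdf(k, y):
    """f_k(y) for arc k."""
    if k in normal_arcs:
        return norm.pdf(y, loc=mu[k], scale=sigma[k])
    return expon.pdf(y, scale=mu[k])

def cmc_density_estimate(y, t):
    """X_e' = sum_{j in cut} f_j(t - a_j - b_j) * prod_{l in cut, l != j} F_l(t - a_l - b_l)."""
    ab = {l: (longest_to(arcs[l][0], y), longest_from(arcs[l][1], y)) for l in cut}
    total = 0.0
    for j in cut:
        term = arc_pdf(j, t - ab[j][0] - ab[j][1])
        for l in cut:
            if l != j:
                term *= arc_cdf(l, t - ab[l][0] - ab[l][1])
        total += term
    return total

# Averaging cmc_density_estimate(durations(u), t) over RQMC points u gives an unbiased,
# smoothing-free density estimate at t (only the non-cut durations are actually used).
```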
Conclusion
We saw that RQMC can improve the convergence rate of the IV and MISE when
estimating a density.
With histograms and KDEs, the convergence rates observed in small examples are much
better than those that we have proved based on standard QMC theory. There are
opportunities for QMC theoreticians here.
This also applies in the context of Array-RQMC for Markov chains.
The combination of CMC with QMC for density estimation is very promising!
Lots of potential applications! We are working on this.
Some references
S. Asmussen. Conditional Monte Carlo for sums, with applications to insurance and finance,
Annals of Actuarial Science, prepublication, 1–24, 2018.
J. Dick and F. Pillichshammer. Digital Nets and Sequences: Discrepancy Theory and
Quasi-Monte Carlo Integration. Cambridge University Press, Cambridge, U.K., 2010.
P. L’Ecuyer. A unified view of the IPA, SF, and LR gradient estimation techniques. Management
Science 36: 1364–1383, 1990.
P. L’Ecuyer. Quasi-Monte Carlo methods with applications in finance. Finance and
Stochastics, 13(3):307–349, 2009.
P. L’Ecuyer. Randomized quasi-Monte Carlo: An introduction for practitioners. In P. W. Glynn
and A. B. Owen, editors, Monte Carlo and Quasi-Monte Carlo Methods 2016, 2017.
P. L’Ecuyer, C. Lécot, and B. Tuffin. A randomized quasi-Monte Carlo simulation method for
Markov chains. Operations Research, 56(4):958–975, 2008.
P. L’Ecuyer and G. Perron. On the Convergence Rates of IPA and FDC Derivative Estimators for
Finite-Horizon Stochastic Simulations. Operations Research, 42 (4):643–656, 1994.
D. W. Scott. Multivariate Density Estimation. Wiley, 2015.
1.4 modern child centered education - mahatma gandhi-2.pptx
 
TESDA TM1 REVIEWER FOR NATIONAL ASSESSMENT WRITTEN AND ORAL QUESTIONS WITH A...
TESDA TM1 REVIEWER  FOR NATIONAL ASSESSMENT WRITTEN AND ORAL QUESTIONS WITH A...TESDA TM1 REVIEWER  FOR NATIONAL ASSESSMENT WRITTEN AND ORAL QUESTIONS WITH A...
TESDA TM1 REVIEWER FOR NATIONAL ASSESSMENT WRITTEN AND ORAL QUESTIONS WITH A...
 
Polish students' mobility in the Czech Republic
Polish students' mobility in the Czech RepublicPolish students' mobility in the Czech Republic
Polish students' mobility in the Czech Republic
 
The Challenger.pdf DNHS Official Publication
The Challenger.pdf DNHS Official PublicationThe Challenger.pdf DNHS Official Publication
The Challenger.pdf DNHS Official Publication
 
The Roman Empire A Historical Colossus.pdf
The Roman Empire A Historical Colossus.pdfThe Roman Empire A Historical Colossus.pdf
The Roman Empire A Historical Colossus.pdf
 
special B.ed 2nd year old paper_20240531.pdf
special B.ed 2nd year old paper_20240531.pdfspecial B.ed 2nd year old paper_20240531.pdf
special B.ed 2nd year old paper_20240531.pdf
 
Sha'Carri Richardson Presentation 202345
Sha'Carri Richardson Presentation 202345Sha'Carri Richardson Presentation 202345
Sha'Carri Richardson Presentation 202345
 
Supporting (UKRI) OA monographs at Salford.pptx
Supporting (UKRI) OA monographs at Salford.pptxSupporting (UKRI) OA monographs at Salford.pptx
Supporting (UKRI) OA monographs at Salford.pptx
 
The approach at University of Liverpool.pptx
The approach at University of Liverpool.pptxThe approach at University of Liverpool.pptx
The approach at University of Liverpool.pptx
 
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
 
CLASS 11 CBSE B.St Project AIDS TO TRADE - INSURANCE
CLASS 11 CBSE B.St Project AIDS TO TRADE - INSURANCECLASS 11 CBSE B.St Project AIDS TO TRADE - INSURANCE
CLASS 11 CBSE B.St Project AIDS TO TRADE - INSURANCE
 
Overview on Edible Vaccine: Pros & Cons with Mechanism
Overview on Edible Vaccine: Pros & Cons with MechanismOverview on Edible Vaccine: Pros & Cons with Mechanism
Overview on Edible Vaccine: Pros & Cons with Mechanism
 
How to Create Map Views in the Odoo 17 ERP
How to Create Map Views in the Odoo 17 ERPHow to Create Map Views in the Odoo 17 ERP
How to Create Map Views in the Odoo 17 ERP
 
Fish and Chips - have they had their chips
Fish and Chips - have they had their chipsFish and Chips - have they had their chips
Fish and Chips - have they had their chips
 
Home assignment II on Spectroscopy 2024 Answers.pdf
Home assignment II on Spectroscopy 2024 Answers.pdfHome assignment II on Spectroscopy 2024 Answers.pdf
Home assignment II on Spectroscopy 2024 Answers.pdf
 
Introduction to Quality Improvement Essentials
Introduction to Quality Improvement EssentialsIntroduction to Quality Improvement Essentials
Introduction to Quality Improvement Essentials
 
Unit 8 - Information and Communication Technology (Paper I).pdf
Unit 8 - Information and Communication Technology (Paper I).pdfUnit 8 - Information and Communication Technology (Paper I).pdf
Unit 8 - Information and Communication Technology (Paper I).pdf
 
Students, digital devices and success - Andreas Schleicher - 27 May 2024..pptx
Students, digital devices and success - Andreas Schleicher - 27 May 2024..pptxStudents, digital devices and success - Andreas Schleicher - 27 May 2024..pptx
Students, digital devices and success - Andreas Schleicher - 27 May 2024..pptx
 
Additional Benefits for Employee Website.pdf
Additional Benefits for Employee Website.pdfAdditional Benefits for Employee Website.pdf
Additional Benefits for Employee Website.pdf
 
How libraries can support authors with open access requirements for UKRI fund...
How libraries can support authors with open access requirements for UKRI fund...How libraries can support authors with open access requirements for UKRI fund...
How libraries can support authors with open access requirements for UKRI fund...
 

QMC: Transition Workshop - Density Estimation by Randomized Quasi-Monte Carlo - Pierre L'Ecuyer, May 7, 2018

  • 9. Asymptotic convergence with RQMC for smooth f
Idea: replace $U_1,\ldots,U_n$ by RQMC points. RQMC does not change the bias.
For a KDE with smooth $k$, one could hope (perhaps) to get $\mathrm{AIV} = C\,n^{-\beta} h^{-1}$ for some $\beta > 1$, instead of $C\,n^{-1} h^{-1}$. If the IV is reduced, the optimal $h$ can be taken smaller to reduce the ISB as well (re-balance), and then reduce the MISE.
Unfortunately, things are not so simple. Roughly, decreasing $h$ increases the variation of the function in the estimator, so we may have something like $\mathrm{AIV} = C\,n^{-\beta} h^{-\delta}$, or $\mathrm{IV} \approx C\,n^{-\beta} h^{-\delta}$ in some bounded region.
  • 10. Elementary QMC Bounds (Recall)
Integration error for $g : [0,1)^s \to \mathbb{R}$ with point set $P_n = \{u_0,\ldots,u_{n-1}\} \subset [0,1)^s$:
$$E_n = \frac{1}{n}\sum_{i=0}^{n-1} g(u_i) - \int_{[0,1)^s} g(u)\,du.$$
Koksma-Hlawka inequality: $|E_n| \le V_{\mathrm{HK}}(g)\, D^*(P_n)$, where
$$V_{\mathrm{HK}}(g) = \sum_{\emptyset \ne v \subseteq S} \int_{[0,1)^s} \left|\frac{\partial^{|v|} g}{\partial u_v}\right| du \quad \text{(Hardy-Krause (HK) variation)},$$
$$D^*(P_n) = \sup_{u \in [0,1)^s} \left|\operatorname{vol}[0,u) - \frac{|P_n \cap [0,u)|}{n}\right| \quad \text{(star discrepancy)}.$$
There are explicit point sets for which $D^*(P_n) = O((\log n)^{s-1}/n) = O(n^{-1+\epsilon})$, and explicit RQMC constructions for which $E[E_n] = 0$ and $\mathrm{Var}[E_n] = O(n^{-2+\epsilon})$.
  • 11. Also $|E_n| \le V_2(g)\, D_2(P_n)$, where
$$V_2^2(g) = \sum_{\emptyset \ne v \subseteq S} \int_{[0,1)^s} \left(\frac{\partial^{|v|} g}{\partial u_v}\right)^2 du \quad \text{(square $L_2$ variation)},$$
$$D_2^2(P_n) = \int_{[0,1)^s} \left(\operatorname{vol}[0,u) - \frac{|P_n \cap [0,u)|}{n}\right)^2 du \quad \text{(square $L_2$ star discrepancy)}.$$
There are explicit constructions for which $D_2(P_n) = O(n^{-1+\epsilon})$. Moreover, if $P_n$ is a digital net randomized by a nested uniform scramble (NUS) and $V_2(g) < \infty$, then $E[E_n] = 0$ and $\mathrm{Var}[E_n] = O(V_2^2(g)\, n^{-3+\epsilon})$ for all $\epsilon > 0$.
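To see these rates in practice, here is a minimal sketch (not from the talk) comparing the empirical variance of the MC and RQMC mean estimators for a smooth test integrand, using scrambled Sobol' points from SciPy. The integrand, dimensions, and sample sizes are illustrative assumptions.

```python
import numpy as np
from scipy.stats import qmc

def g(u):
    # smooth test integrand on [0,1)^2 with integral exactly 1
    return np.exp(u.sum(axis=1)) / (np.e - 1.0) ** 2

def replicate_means(n, s=2, n_rep=100, rqmc=True, rng=np.random.default_rng(0)):
    means = []
    for _ in range(n_rep):
        pts = qmc.Sobol(d=s, scramble=True).random(n) if rqmc else rng.random((n, s))
        means.append(g(pts).mean())
    return np.array(means)

for n in (2**10, 2**12, 2**14):
    var_mc = replicate_means(n, rqmc=False).var(ddof=1)
    var_rqmc = replicate_means(n, rqmc=True).var(ddof=1)
    print(f"n=2^{int(np.log2(n))}:  Var(MC)={var_mc:.2e}   Var(RQMC)={var_rqmc:.2e}")
# The MC variance decreases roughly as 1/n; the RQMC variance decreases much faster.
```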
  • 12-13. Bounding the AIV under RQMC for a KDE
KDE density estimator at a single point $x$:
$$\hat f_n(x) = \frac{1}{n}\sum_{i=1}^{n} \frac{1}{h}\, k\!\left(\frac{x - g(U_i)}{h}\right) = \frac{1}{n}\sum_{i=1}^{n} \tilde g(U_i).$$
With RQMC points $U_i$, this is an RQMC estimator of $E[\tilde g(U)] = \int_{[0,1)^s} \tilde g(u)\,du = E[\hat f_n(x)]$.
RQMC does not change the bias, but it may reduce $\mathrm{Var}[\hat f_n(x)]$, and hence the IV. To get RQMC variance bounds, we need bounds on the variation of $\tilde g$. The partial derivatives are
$$\frac{\partial^{|v|}}{\partial u_v}\tilde g(u) = \frac{1}{h}\, \frac{\partial^{|v|}}{\partial u_v} k\!\left(\frac{x - g(u)}{h}\right).$$
We assume they exist and are uniformly bounded (e.g., Gaussian kernel $k$). Expanding via the chain rule gives terms in $h^{-j}$ for $j = 2,\ldots,|v|+1$. One of the terms for $v = S$ grows as $h^{-s-1} k^{(s)}((g(u)-x)/h)\prod_{j=1}^{s} g_j(u) = O(h^{-s-1})$ when $h \to 0$, so this AIV bound grows as $h^{-2s-2}$. Not so good!
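As a concrete illustration (a sketch, not code from the talk), one replicate of this RQMC-KDE estimator can be computed as follows with a Gaussian kernel and scrambled Sobol' points; the function `g` mapping uniforms to $X$ is whatever model is being simulated.

```python
import numpy as np
from scipy.stats import norm, qmc

def kde_rqmc(g, s, n, h, x_eval):
    """One RQMC replicate of the KDE  fhat(x) = (1/(n h)) sum_i k((x - X_i)/h),
    with a Gaussian kernel k and X_i = g(U_i) for scrambled Sobol' points U_i."""
    U = qmc.Sobol(d=s, scramble=True).random(n)   # fresh randomization on each call
    X = g(U)                                      # n realizations of X = g(U)
    return norm.pdf((x_eval[:, None] - X[None, :]) / h).mean(axis=1) / h

# Example: the one-dimensional standard normal case, X = Phi^{-1}(U_1).
x_eval = np.linspace(-2.0, 2.0, 64)
fhat = kde_rqmc(lambda U: norm.ppf(U[:, 0]), s=1, n=2**16, h=2**-5, x_eval=x_eval)
```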
  • 14. Improvement by a Change of Variable, in One Dimension
Change of variable $w = (x - g(u))/h$. In one dimension ($s = 1$), we have $dw/du = -g'(u)/h$, so
$$V_{\mathrm{HK}}(\tilde g) = \frac{1}{h}\int_0^1 \left| k'\!\left(\frac{x - g(u)}{h}\right) \frac{-g'(u)}{h} \right| du = \frac{1}{h}\int_{-\infty}^{\infty} |k'(w)|\,dw = O(h^{-1}).$$
Then, if $k$ and $g$ are continuously differentiable, with RQMC points having $D^*(P_n) = O(n^{-1+\epsilon})$, we obtain $\mathrm{AIV} = O(n^{-2+\epsilon} h^{-2})$. With $h = \Theta(n^{-1/3})$, this gives $\mathrm{AMISE} = O(n^{-4/3})$.
A similar argument gives
$$V_2^2(\tilde g) = \frac{1}{h^2}\int_0^1 \left( k'\!\left(\frac{x - g(u)}{h}\right) \frac{-g'(u)}{h} \right)^2 du \le \frac{L_g}{h^3} \int_{-\infty}^{\infty} (k'(w))^2\,dw = O(h^{-3})$$
if $|g'| \le L_g$, and then with NUS: $\mathrm{AIV} = O(n^{-3+\epsilon} h^{-3})$. With $h = \Theta(n^{-3/7})$, this gives $\mathrm{AMISE} = O(n^{-12/7})$.
  • 15. Higher Dimensions
Let $s = 2$ and $v = \{1, 2\}$. With the change of variable $(u_1, u_2) \to (w, u_2)$, the Jacobian is $|dw/du_1| = |g_1(u_1, u_2)/h|$, where $g_j = \partial g/\partial u_j$. If $|g_2|$ and $|g_{12}/g_1|$ are bounded by a constant $L$,
$$\int_{[0,1)^2} \left|\frac{\partial^{|v|}\tilde g}{\partial u_v}\right| du = \frac{1}{h}\int_{[0,1)^2} \left|\frac{\partial^2}{\partial u_1 \partial u_2} k\!\left(\frac{x-g(u)}{h}\right)\right| du_1\,du_2$$
$$= \frac{1}{h}\int_{[0,1)^2} \left| k''\!\left(\frac{x-g(u)}{h}\right) \frac{g_1(u)}{h}\,\frac{g_2(u)}{h} + k'\!\left(\frac{x-g(u)}{h}\right)\frac{g_{12}(u)}{h} \right| du_1\,du_2$$
$$= \frac{1}{h}\int_0^1 \int_{-\infty}^{\infty} \left| k''(w)\,\frac{g_2(u)}{h} + k'(w)\,\frac{g_{12}(u)}{g_1(u)} \right| dw\,du_2 \le \frac{L}{h}\left[\mu_0(k'')/h + \mu_0(k')\right] = O(h^{-2}).$$
This provides a bound of $O(h^{-2})$ for $V_{\mathrm{HK}}(\tilde g)$, hence $\mathrm{AIV} = O(n^{-2+\epsilon} h^{-4})$. Generalizing to $s \ge 2$ gives
$$V_{\mathrm{HK}}(\tilde g) = O(h^{-s}), \qquad \mathrm{AIV} = O(n^{-2+\epsilon} h^{-2s}), \qquad \mathrm{MISE} = O(n^{-4/(2+s)}).$$
This beats MC for $s < 3$ and gives the same rate for $s = 3$. Not very satisfactory.
  • 16. Empirical Evaluation with a Linear Model in a Limited Region
Regardless of the asymptotic bounds, the true IV may behave better than for MC over the pairs $(n, h)$ of interest. We consider the model
$$\mathrm{MISE} = \mathrm{IV} + \mathrm{ISB} \approx C n^{-\beta} h^{-\delta} + B h^{\alpha}.$$
This model is only for a limited region of interest, not for everywhere, and not necessarily asymptotic. The optimal $h$ for this model satisfies
$$h^{\alpha+\delta} = \frac{C\delta}{B\alpha}\, n^{-\beta},$$
and it gives $\mathrm{MISE} \approx K n^{-\alpha\beta/(\alpha+\delta)}$.
We can take the asymptotic $\alpha$ (known) and $B$ (estimated as for MC). To estimate $C$, $\beta$, and $\delta$, estimate the IV over a grid of values of $(n, h)$ and fit a linear regression model:
$$\log \mathrm{IV} \approx \log C - \beta \log n - \delta \log h.$$
  • 17. For each $(n, h)$, we estimate the IV by making $n_r$ independent replications of the RQMC density estimator, computing the empirical variance at $n_e$ (stratified) evaluation points over $[a, b]$, summing, and multiplying by $(b - a)/n_e$. We use logs in base 2, since $n$ is a power of 2.
After estimating the model parameters, we can test out of sample with independent simulation experiments at pairs $(n, h)$ with $h = \hat h^*(n)$. For test cases in which the density is known, we can compute a MISE estimate at each point and obtain new parameter estimates $\tilde K$ and $\tilde\nu$ of the model $\mathrm{MISE} \approx K n^{-\nu}$.
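A sketch of this fitting procedure (illustrative, not from the talk; it reuses `kde_rqmc` from the earlier sketch, and the grid of $(n, h)$ pairs, the interval, and the example function `g1` are assumptions):

```python
import numpy as np
from scipy.stats import norm

def estimate_iv(g, s, n, h, a, b, n_rep=100, n_eval=64, rng=np.random.default_rng(1)):
    # one stratified evaluation point per subinterval of [a, b]
    x_eval = a + (b - a) * (np.arange(n_eval) + rng.random(n_eval)) / n_eval
    reps = np.array([kde_rqmc(g, s, n, h, x_eval) for _ in range(n_rep)])
    # sum of the pointwise variances times (b - a)/n_eval approximates the IV
    return reps.var(axis=0, ddof=1).sum() * (b - a) / n_eval

# Fit  log2 IV ~ log2 C - beta*log2 n - delta*log2 h  by least squares over a grid.
g1 = lambda U: norm.ppf(U[:, 0])          # hypothetical 1-dimensional example
grid = [(2**k, 2**(-(4 + j) / 2)) for k in range(14, 20) for j in range(6)]
y = np.array([np.log2(estimate_iv(g1, 1, n, h, -2.0, 2.0)) for n, h in grid])
A = np.array([[1.0, -np.log2(n), -np.log2(h)] for n, h in grid])
(log2C, beta_hat, delta_hat), *_ = np.linalg.lstsq(A, y, rcond=None)
```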
  • 18. Numerical illustrations
For each example, we first estimate the model parameters by regression, using a grid of pairs $(n, h)$ with $n = 2^{14}, 2^{15}, \ldots, 2^{19}$ and (for the KDE) $h = h_0, \ldots, h_5$ with $h_j = h_0\, 2^{j/2}$. For histograms, $m = (b - a)/h$ must be an integer.
For each $n$ and each RQMC method, we make $n_r = 100$ independent replications and take $n_e = 64$ evaluation points over the bounded interval $[a, b]$. We also tried larger $n_e$.
RQMC point sets: Independent points (crude Monte Carlo); Stratification: stratified unit cube; Sobol+LMS: Sobol' points with left matrix scrambling (LMS) + digital random shift; Sobol+NUS: Sobol' points with NUS.
  • 19. Simple test example with the standard normal density
Let $Z_1, \ldots, Z_s$ be i.i.d. standard normal, generated by inversion, and let $X = (Z_1 + \cdots + Z_s)/\sqrt{s}$. Then $X \sim N(0,1)$.
Here we can estimate the IV, ISB, and MISE accurately, and we can compute $\int_a^b f(x)\,dx$ exactly. Take $(a, b) = (-2, 2)$. We have $B = 0.04754$ with $\alpha = 4$ for the KDE.
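Because the true density is known here, the MISE itself can be estimated directly. A minimal sketch (again reusing `kde_rqmc` from above; the choices of $s$, $n$, $h$, and the number of replications are illustrative):

```python
import numpy as np
from scipy.stats import norm

def g_normal(U):
    # X = (Z_1 + ... + Z_s)/sqrt(s) ~ N(0,1), with the Z_j generated by inversion
    Z = norm.ppf(U)
    return Z.sum(axis=1) / np.sqrt(Z.shape[1])

a, b, n_eval = -2.0, 2.0, 64
x_eval = np.linspace(a, b, n_eval)
reps = np.array([kde_rqmc(g_normal, s=3, n=2**16, h=2**-4, x_eval=x_eval)
                 for _ in range(100)])
mse_pointwise = ((reps - norm.pdf(x_eval)) ** 2).mean(axis=0)  # E[(fhat - f)^2] at each x
mise_estimate = mse_pointwise.mean() * (b - a)                 # integrate over (a, b)
```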
  • 20. Estimates of the model parameters for the KDE, $\mathrm{IV} \approx C n^{-\beta} h^{-\delta}$, $\mathrm{MISE} \approx \kappa n^{-\nu}$:

    method      MC      NUS     NUS     NUS     NUS     NUS
    s            1       1       2       3       5      20
    R^2        0.999   0.999   1.000   0.995   0.979   0.993
    beta       1.017   2.787   2.110   1.788   1.288   1.026
    delta      1.144   2.997   3.195   3.356   2.293   1.450
    alpha      3.758   3.798   3.846   3.860   3.782   3.870
    nu-tilde   0.770   1.600   1.172   0.981   0.827   0.730
    LGM        16.96   34.05   24.37   20.80   17.91   17.07

LGM = $-\log_2(\mathrm{MISE})$ for $n = 2^{19}$.
  • 21. [Figure] Convergence of the MISE in log-log scale ($\log_2(\mathrm{MISE})$ vs $\log_2 n$) for the one-dimensional example, for Independent points, Stratification, Sobol+LMS, and Sobol+NUS.
  • 22. [Figure] Convergence of the MISE for $s = 2$, for histograms (left) and the KDE (right), for Independent points, Stratification, Sobol+LMS, and Sobol+NUS.
  • 23. [Figure] LGM ($n = 2^{19}$) as a function of $s$ for the histogram (left) and the KDE (right), for estimation over $(-2, 2)$, for Independent points, Stratification, Sobol+LMS, and Sobol+NUS.
  • 24. [Figure] Estimated parameters $\beta$ and $\delta$ as functions of $s$, with the histogram (left) and the KDE (right) over $(-2, 2)$, for Independent points, Stratification, Sobol+LMS, and Sobol+NUS.
  • 25. KDE estimate over $(-4, 4)$:

    method      MC      NUS     NUS     NUS     NUS     NUS
    s            1       1       2       3       5      20
    beta       1.020   2.434   1.999   1.728   1.272   1.006
    delta      1.138   2.432   2.972   3.168   2.256   1.464
    nu-tilde   0.772   1.514   1.138   0.980   0.817   0.767
    LGM        16.89   30.07   23.68   20.52   17.72   16.95
  • 26. [Figure] KDE estimate in the tail: KDE (blue) vs true density (red) with RQMC point sets with $n = 2^{19}$: lattice + shift, Sobol + digital shift, Sobol + LMS-19 bits + shift, Sobol + LMS-31 bits + shift.
  • 27. Displacement of a cantilever beam
Displacement $D$ of a cantilever beam with horizontal and vertical loads:
$$D = \frac{4L^3}{Ewt}\sqrt{\frac{Y^2}{t^4} + \frac{X^2}{w^4}},$$
where $L = 100$, $w = 4$, $t = 2$ (in inches), and $X$, $Y$, and $E$ are independent and normally distributed with the following means and standard deviations:

    Description        Symbol   Mean       St. dev.
    Young's modulus    E        2.9e7      1.45e6
    Horizontal load    X        500        100
    Vertical load      Y        1000       100

We want to estimate the density of $D$ over $(a, b) = (0.336, 1.561)$ (about 99.5% of the density).
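A minimal simulation sketch for this example (not from the talk), generating the three normal inputs by inversion from scrambled Sobol' points and assuming the displacement formula above:

```python
import numpy as np
from scipy.stats import norm, qmc

L_beam, w, t = 100.0, 4.0, 2.0
means = np.array([2.9e7, 500.0, 1000.0])   # E, X, Y
sds = np.array([1.45e6, 100.0, 100.0])

U = qmc.Sobol(d=3, scramble=True).random(2**16)
E, X, Y = (means + sds * norm.ppf(U)).T    # independent normals by inversion
D = (4 * L_beam**3 / (E * w * t)) * np.sqrt(Y**2 / t**4 + X**2 / w**4)
# Feed D to the KDE (as in the earlier sketches) over (a, b) = (0.336, 1.561).
```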
  • 28. Parameter estimates of the linear regression model for the IV and MISE, $\mathrm{IV} \approx C n^{-\beta} h^{-\delta}$, $\mathrm{MISE} \approx \kappa n^{-\nu}$:

    Point set        C-hat     beta-hat  delta-hat  nu-hat
    Histogram, alpha = 2
    Independent      0.888     1.001     1.021      0.797
    Sobol+LMS        0.134     1.196     1.641      0.848
    Sobol+NUS        0.136     1.194     1.633      0.848
    KDE with Gaussian kernel, alpha = 4
    Independent      0.210     0.993     1.037      0.789
    Sobol+LMS        5.28e-4   1.619     2.949      0.932
    Sobol+NUS        5.24e-4   1.621     2.955      0.932

Good fit: we have $R^2 > 0.99$ in all cases.
  • 29. [Figure] $\log_2(\mathrm{IV})$ vs $\log_2 n$ for the cantilever beam with the KDE, for Independent points and Sobol+NUS, with $h = 2^{-9.5}$, $2^{-7.5}$, and $2^{-5.5}$.
  • 30-31. A weighted sum of lognormals
$$X = \sum_{j=1}^{s} w_j \exp(Y_j), \qquad \text{where } Y = (Y_1,\ldots,Y_s)^{\mathsf t} \sim N(\mu, C).$$
Let $C = AA^{\mathsf t}$. To generate $Y$, generate $Z \sim N(0, I)$ and put $Y = \mu + AZ$. We will use a principal component decomposition (PCA).
This has several applications. In one of them, with $w_j = s_0 (s - j + 1)/s$, $e^{-\rho}\max(X - K, 0)$ is the payoff of a financial option based on an average price at $s$ observation times, under a GBM process. We want to estimate the density of the positive payoffs.
Numerical experiment: take $s = 12$, $\rho = 0.037$, $s_0 = 100$, $K = 101$, and $C$ defined indirectly via $Y_j = Y_{j-1}(\mu - \sigma^2)\,j/s + \sigma B(j/s)$, where $Y_0 = 0$, $\sigma = 0.12136$, $\mu = 0.1$, and $B$ is a standard Brownian motion. We estimate the density of $e^{-\rho}(X - K)$ over $(a, b) = (0, 50)$.
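A sketch of the PCA generation step described above (illustrative, not from the talk): the covariance matrix below is an assumption built only from the Brownian part $\sigma B(j/s)$, and the mean vector is a placeholder to be set as in the example.

```python
import numpy as np
from scipy.stats import norm, qmc

s, s0, sigma = 12, 100.0, 0.12136
tgrid = np.arange(1, s + 1) / s
C = sigma**2 * np.minimum.outer(tgrid, tgrid)  # assumed covariance: Cov[B(t_i), B(t_j)] = min(t_i, t_j)
mu_Y = np.zeros(s)                             # placeholder mean vector for Y
w = s0 * (s - np.arange(1, s + 1) + 1) / s     # weights w_j = s0 (s - j + 1)/s from the slide

# PCA decomposition C = A A^t with A = P diag(sqrt(lambda)), eigenvalues in decreasing order
lam, P = np.linalg.eigh(C)
order = np.argsort(lam)[::-1]
A = P[:, order] * np.sqrt(lam[order])

n = 2**16
Z = norm.ppf(qmc.Sobol(d=s, scramble=True).random(n))   # standard normals from RQMC points
Y = mu_Y + Z @ A.T                                      # Y ~ N(mu_Y, C)
X = (w * np.exp(Y)).sum(axis=1)                         # X = sum_j w_j exp(Y_j)
```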
  • 32. [Figure] Histogram of the positive payoff values from $n = 10^6$ independent simulation runs.
  • 33. [Figure] IV as a function of $n$ for the KDE ($\log_2(\mathrm{IV})$ vs $\log_2 n$), for Independent points and Sobol+NUS, with $h = 2^{-3.5}$, $2^{-1.5}$, and $2^{0.5}$.
  • 34. Example: a stochastic activity network
The network gives precedence relations between activities. Activity $k$ has a random duration $Y_k$ (also the length of arc $k$) with known cumulative distribution function (cdf) $F_k(y) := P[Y_k \le y]$. The project duration $T$ is the (random) length of the longest path from the source to the sink. One may want to estimate $E[T]$, $P[T > x]$, a quantile, the density of $T$, etc.
[Figure: the activity network, with nodes 0 (source) through 8 (sink) and arcs carrying the random durations $Y_0, \ldots, Y_{12}$.]
  • 35. Simulation algorithm to generate $T$:
    for k = 0, ..., 12:
        generate $U_k \sim U(0,1)$ and let $Y_k = F_k^{-1}(U_k)$
    compute $X = T = h(Y_0, \ldots, Y_{12}) = f(U_0, \ldots, U_{12})$
Monte Carlo: repeat $n$ times independently to obtain $n$ realizations $X_1, \ldots, X_n$ of $T$. Estimate $E[T] = \int_{(0,1)^s} f(u)\,du$ by $\bar X_n = \frac{1}{n}\sum_{i=0}^{n-1} X_i$. To estimate $P[T > x]$, take $X = \mathbb{I}[T > x]$ instead.
RQMC: replace the $n$ independent points by an RQMC point set of size $n$.
Numerical illustration from Elmaghraby (1977): $Y_k \sim N(\mu_k, \sigma_k^2)$ for $k = 0, 1, 3, 10, 11$, and $Y_k \sim \mathrm{Expon}(1/\mu_k)$ otherwise, with $\mu_0, \ldots, \mu_{12}$: 13.0, 5.5, 7.0, 5.2, 16.5, 14.7, 10.3, 6.0, 4.0, 20.0, 3.2, 3.2, 16.5.
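A generic sketch of this algorithm with RQMC points (not from the talk): the arc list below is a small hypothetical network, not the 13-arc network of the example, and the normal standard deviations are placeholders.

```python
import numpy as np
from scipy.stats import norm, expon, qmc

# (tail node, head node, inverse cdf of the arc duration), listed in topological order
arcs = [
    (0, 1, lambda u: norm.ppf(u, loc=13.0, scale=2.0)),
    (0, 2, lambda u: expon.ppf(u, scale=5.5)),
    (1, 2, lambda u: expon.ppf(u, scale=7.0)),
    (1, 3, lambda u: expon.ppf(u, scale=16.5)),
    (2, 3, lambda u: norm.ppf(u, loc=5.2, scale=1.0)),
]
n_nodes, source, sink = 4, 0, 3

def project_duration(u):
    """T = length of the longest source-to-sink path for one vector of uniforms u."""
    longest = np.full(n_nodes, -np.inf)
    longest[source] = 0.0
    for (a, b, inv_cdf), uk in zip(arcs, u):
        longest[b] = max(longest[b], longest[a] + inv_cdf(uk))
    return longest[sink]

U = qmc.Sobol(d=len(arcs), scramble=True).random(2**14)  # one coordinate per activity
T = np.array([project_duration(u) for u in U])
# T can now be fed to a histogram or KDE density estimator over a chosen (a, b).
```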
  • 36. Naive idea: replace each $Y_k$ by its expectation. This gives $T = 48.2$.
Results of an experiment with $n = 100\,000$: a histogram of the realized values of $T$ is a density estimator that gives more information than a confidence interval on $E[T]$ or on $P[T > x]$.
[Figure: histogram of $T$, marking $T = 48.2$ (deterministic approximation), the mean $64.2$, the threshold $T = x = 90$, and the quantile $\hat\xi_{0.99} = 131.8$.]
  • 37. [Figure] $\log_2(\mathrm{IV})$ as a function of $\log_2 h$ and $\log_2 n$ with the KDE and Sobol+NUS, for the SAN example.
  • 38. Results for the SAN network. Parameter estimates of the regression models, $n = 2^{19}$:

    Point set        C       beta    delta   gamma*-hat  nu*-hat
    Histogram, alpha = 2
    Independent      0.892   0.999   1.005   0.333       0.665
    Stratif.         0.897   1.001   1.006   0.333       0.666
    Lattice+shift    0.841   0.988   0.988   0.331       0.662
    Sobol+LMS        0.894   1.006   1.024   0.333       0.665
    Sobol+NUS        0.888   1.005   1.026   0.332       0.665
    KDE with Gaussian kernel, alpha = 4
    Independent      0.254   1.001   1.004   0.199       0.799
    Stratif.         0.248   1.001   1.012   0.199       0.799
    Lattice+shift    0.222   0.969   0.947   0.196       0.784
    Sobol+LMS        0.242   1.021   1.089   0.201       0.803
    Sobol+NUS        0.246   1.023   1.087   0.201       0.804
  • 39. The SAN example, Sobol+NUS vs Independent points, summary for $n = 2^{19} = 524288$:

                            Independent points      Sobol+NUS
    Density     m or h      log2 IV    IV rate      log2 IV    IV rate
    Histogram   64          -19.32     -1.003       -19.78     -1.039
                256         -17.28     -0.999       -17.40     -1.011
                1024        -15.27     -1.001       -15.30     -1.003
                4096        -13.27     -0.998       -13.27     -1.000
    Kernel      0.10        -16.64     -0.999       -16.71     -1.006
                0.13        -17.31     -0.999       -17.42     -1.007
                0.18        -17.96     -0.999       -18.18     -1.015
                0.24        -18.64     -0.999       -18.92     -1.029
                0.32        -19.33     -0.998       -19.79     -1.035
                0.43        -19.99     -0.998       -20.71     -1.064
  • 40-42. Conditional Monte Carlo for Derivative Estimation
The density is $f(x) = F'(x)$, so could we just take the derivative of a cdf estimator? The derivative of the empirical cdf $\hat F_n(x)$ is zero almost everywhere, so that does not work! We need a smooth cdf estimator.
Let $X = X(\theta, \omega)$ with parameter $\theta \in \mathbb{R}$. We want to estimate $g'(\theta) = \frac{d}{d\theta} E[X(\theta, \omega)]$. When $X(\cdot, \omega)$ is not continuous in $\theta$ (plus other conditions), we cannot interchange the derivative and the expectation, and cannot take $\frac{d}{d\theta} X(\theta, \omega)$ as an estimator of $g'(\theta)$.
It is often possible to replace $X$ by a conditional expectation $X_e = E[X \mid \mathcal{G}]$, where $\mathcal{G}$ contains partial information, not enough to reveal $X$, but enough to compute $X_e$. If $X_e$ is smooth enough in $\theta$, we may have
$$g'(\theta) = E\!\left[\frac{d}{d\theta} X_e(\theta, \omega)\right] = E[X_e'].$$
This is gradient estimation by IPA + CMC; see L'Ecuyer and Perron (1994) and Asmussen (2017). Then, we can simulate $X_e'$ with RQMC instead of MC.
  • 43-44. Application to the SAN example
We want a smooth estimate of $P[T \le t]$, whose sample derivative will be an unbiased estimate of the density at $t$.
Naive estimator: generate $T$ and compute $X = \mathbb{I}[T \le t]$. Repeat $n$ times and average.
[Figure: the activity network again, with arcs $Y_0, \ldots, Y_{12}$.]
  • 45-47. Conditional Monte Carlo estimator of $P[T \le t]$
Generate the $Y_j$'s only for the 8 arcs that do not belong to the cut $\mathcal{L} = \{4, 5, 6, 8, 9\}$, and replace $\mathbb{I}[T \le t]$ by its conditional expectation given those $Y_j$'s,
$$X_e = P[T \le t \mid \{Y_j,\ j \notin \mathcal{L}\}].$$
This makes the integrand continuous in the $U_j$'s and in $t$.
To compute $X_e$: for each $l \in \mathcal{L}$, say from node $a_l$ to node $b_l$, compute the length $\alpha_l$ of the longest path from the source to $a_l$, and the length $\beta_l$ of the longest path from $b_l$ to the destination. The longest path that passes through link $l$ does not exceed $t$ iff $\alpha_l + Y_l + \beta_l \le t$, which occurs with probability $P[Y_l \le t - \alpha_l - \beta_l] = F_l[t - \alpha_l - \beta_l]$. Since the $Y_l$ are independent, we obtain
$$X_e = \prod_{l \in \mathcal{L}} F_l[t - \alpha_l - \beta_l].$$
  • 48. To estimate the density of $T$, just take the derivative with respect to $t$:
$$X_e' = \frac{d}{dt} X_e(t, \omega) \overset{\text{w.p.1}}{=} \sum_{j \in \mathcal{L}} f_j[t - \alpha_j - \beta_j] \prod_{l \in \mathcal{L},\, l \ne j} F_l[t - \alpha_l - \beta_l].$$
One can prove that $E[X_e'] = \frac{d}{dt} E[X_e] = \frac{d}{dt} P[T \le t] = f_T(t)$ via the dominated convergence theorem; see L'Ecuyer (1990).
Here, with MC, the IV converges as $O(1/n)$ and there is no bias, so $\mathrm{MISE} = \mathrm{IV}$. Now we can apply RQMC to simulate $X_e'$. It is a smooth function of the uniforms if each inverse cdf $F_j^{-1}$ and density $f_j$ are smooth. Then we can get a convergence rate near $O(n^{-2})$ for the IV and the MISE.
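A sketch of this estimator for a single simulation run (illustrative, not from the talk): `alpha` and `beta` are the longest-path lengths computed from the arcs outside the cut, and `cdfs`, `pdfs` are the $F_l$ and $f_l$ of the arcs in the cut.

```python
import numpy as np

def cmc_density_one_run(t, alpha, beta, cdfs, pdfs):
    """Conditional Monte Carlo density estimate of f_T(t) for one run:
       sum_{l in L} f_l(t - a_l - b_l) * prod_{j in L, j != l} F_j(t - a_j - b_j)."""
    z = t - np.asarray(alpha) - np.asarray(beta)
    F = np.array([cdf(zl) for cdf, zl in zip(cdfs, z)])
    f = np.array([pdf(zl) for pdf, zl in zip(pdfs, z)])
    return sum(f[l] * np.prod(np.delete(F, l)) for l in range(len(z)))

# Averaging cmc_density_one_run over n RQMC runs (each run regenerates the Y_j's
# outside the cut and recomputes alpha, beta) gives an unbiased density estimator.
```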
  • 49. Conclusion
We saw that RQMC can improve the convergence rate of the IV and MISE when estimating a density. With histograms and KDEs, the convergence rates observed in small examples are much better than those we have proved based on standard QMC theory: there are opportunities for QMC theoreticians here. This also applies in the context of Array-RQMC for Markov chains.
The combination of CMC with QMC for density estimation is very promising, with many potential applications. We are working on this.
  • 50. Some references
S. Asmussen. Conditional Monte Carlo for sums, with applications to insurance and finance. Annals of Actuarial Science, pre-publication, 1-24, 2018.
J. Dick and F. Pillichshammer. Digital Nets and Sequences: Discrepancy Theory and Quasi-Monte Carlo Integration. Cambridge University Press, Cambridge, U.K., 2010.
P. L'Ecuyer. A unified view of the IPA, SF, and LR gradient estimation techniques. Management Science, 36:1364-1383, 1990.
P. L'Ecuyer. Quasi-Monte Carlo methods with applications in finance. Finance and Stochastics, 13(3):307-349, 2009.
P. L'Ecuyer. Randomized quasi-Monte Carlo: An introduction for practitioners. In P. W. Glynn and A. B. Owen, editors, Monte Carlo and Quasi-Monte Carlo Methods 2016, 2017.
P. L'Ecuyer, C. Lécot, and B. Tuffin. A randomized quasi-Monte Carlo simulation method for Markov chains. Operations Research, 56(4):958-975, 2008.
P. L'Ecuyer and G. Perron. On the convergence rates of IPA and FDC derivative estimators for finite-horizon stochastic simulations. Operations Research, 42(4):643-656, 1994.
D. W. Scott. Multivariate Density Estimation. Wiley, 2015.