Data sparse approximation of the Karhunen-Loève expansion
1. Data sparse approximation of the Karhunen-Loève expansion
A. Litvinenko, joint work with B. Khoromskij and H. G. Matthies
Institut für Wissenschaftliches Rechnen, Technische Universität Braunschweig,
0531-391-3008, litvinen@tu-bs.de
December 5, 2008
4. Stochastic PDE
We consider
    −div(κ(x, ω)∇u) = f(x, ω) in G,
    u = 0 on ∂G,
with a stochastic coefficient κ(x, ω), where x ∈ G ⊆ R^d and ω belongs to the space of random events Ω.
Figure: Examples of computational domains G with a non-rectangular grid.
5. Covariance functions
The random field f(x, ω) requires specification of its spatial correlation structure
    cov_f(x, y) = E[(f(x, ·) − µ_f(x))(f(y, ·) − µ_f(y))].
Let h = sqrt( Σ_{i=1}^{3} h_i^2 / ℓ_i^2 ), where h_i := x_i − y_i, i = 1, 2, 3, and the ℓ_i are covariance lengths.
Examples: Gaussian cov(h) = exp(−h^2), exponential cov(h) = exp(−h).
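As an illustration, both example kernels with the anisotropy-scaled distance h can be assembled into a covariance matrix; a minimal NumPy sketch, where the grid, the point ordering, and the covariance lengths ℓ = (0.1, 0.5) are assumptions:

```python
import numpy as np

def scaled_distance(X, Y, ell):
    # h_ij = sqrt(sum_k (x_ik - y_jk)^2 / ell_k^2) for point sets X (n x d), Y (m x d)
    diff = X[:, None, :] - Y[None, :, :]
    return np.sqrt(np.sum((diff / ell) ** 2, axis=-1))

def cov_gaussian(X, Y, ell):
    # Gaussian kernel cov(h) = exp(-h^2)
    return np.exp(-scaled_distance(X, Y, ell) ** 2)

def cov_exponential(X, Y, ell):
    # exponential kernel cov(h) = exp(-h)
    return np.exp(-scaled_distance(X, Y, ell))

# Example: points of a 5 x 5 grid on [0,1]^2, covariance lengths ell = (0.1, 0.5)
x = np.linspace(0.0, 1.0, 5)
X = np.array([(a, b) for a in x for b in x])
C = cov_exponential(X, X, np.array([0.1, 0.5]))
```

The resulting C is the dense covariance matrix that the H-matrix and tensor techniques below are designed to compress.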
7. KLE
The Karhunen-Loève expansion is the series
    κ(x, ω) = µ_κ(x) + Σ_{i=1}^{∞} √λ_i φ_i(x) ξ_i(ω),
where ξ_i(ω) are uncorrelated random variables and φ_i are basis functions in L^2(G).
The eigenpairs (λ_i, φ_i) are the solution of
    T φ_i = λ_i φ_i,  φ_i ∈ L^2(G),  i ∈ N,
where T : L^2(G) → L^2(G),  (Tφ)(x) := ∫_G cov_κ(x, y) φ(y) dy.
8. Discrete eigenvalue problem
Let
    W_ij := Σ_{k,m} ∫_G b_i(x) b_k(x) dx · C_km · ∫_G b_j(y) b_m(y) dy,
    M_ij := ∫_G b_i(x) b_j(x) dx.
Then we solve
    W φ_ℓ^h = λ_ℓ M φ_ℓ^h,  where W := MCM.
Approximate C and M in
◮ the H-matrix format
◮ the low Kronecker rank format
and use the Lanczos method to compute the m largest eigenvalues.
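The discrete problem W φ = λ M φ can be sketched with SciPy's Lanczos-type solver eigsh; a minimal dense example, where the 1D mesh, the piecewise-linear mass matrix, and the exponential kernel with covariance length 0.3 are assumptions (no H-matrix or tensor compression here):

```python
import numpy as np
from scipy.sparse.linalg import eigsh

n, m = 200, 10
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]

# 1D piecewise-linear FE mass matrix on a uniform mesh (tridiagonal), dense for simplicity
M = (h / 6.0) * (np.diag(4.0 * np.ones(n))
                 + np.diag(np.ones(n - 1), 1)
                 + np.diag(np.ones(n - 1), -1))

# Covariance matrix C_km = cov(x_k, x_m), exponential kernel, covariance length 0.3
C = np.exp(-np.abs(x[:, None] - x[None, :]) / 0.3)

W = M @ C @ M  # W := M C M as on the slide

# Lanczos-type iteration for the m largest eigenpairs of W phi = lambda M phi
lam, phi = eigsh(W, k=m, M=M, which='LM')
```

In the data-sparse setting, the dense products with C and M are replaced by H-matrix or Kronecker matrix-vector products inside the same Lanczos iteration.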
11. H-matrices: numerics
To assemble the low-rank blocks we use ACA [Bebendorf et al.].
Dependence of the computational time and storage requirements of C̃ on the rank k, n = 32^2:

    k    time (sec.)   memory (MB)   ‖C − C̃‖₂/‖C‖₂
    2    0.04          2             3.5e−5
    6    0.1           4             1.4e−5
    9    0.14          5.4           1.4e−5
    12   0.17          6.8           3.1e−7
    17   0.23          9.3           6.3e−8

The time for the dense matrix C is 3.3 sec. and the storage 140 MB.
12. H-matrices: numerics

    (left)                        (right)
    k   size (MB)   t (sec.)      k    size (MB)   t (sec.)
    1   1548        33            4    463         11
    2   1865        42            8    850         22
    3   2181        50            12   1236        32
    4   2497        59            16   1623        43
    6   nem         -             20   nem         -

Table: Computing times and storage requirements versus the H-matrix rank k for the exponential covariance function. (left) Standard admissibility condition, geometry shown in Fig. 1 (middle), ℓ1 = 0.1, ℓ2 = 0.5, n = 2.3 · 10^5. (right) Weak admissibility condition, geometry shown in Fig. 1 (right), ℓ1 = 0.1, ℓ2 = 0.5, ℓ3 = 0.1, n = 4.61 · 10^5.
13. H-matrices: numerics

    n       2.4·10^4        3.5·10^4        6.8·10^4      2.3·10^5
    k       t1      t2      t1      t2      t1     t2     t1       t2
    3       3e−3    0.2     6.0e−3  0.4     1e−2   1      5.0e−2   4
    6       6e−3    0.4     1.1e−2  0.7     2e−2   2      9.0e−2   7
    9       8e−3    0.5     1.5e−2  1.0     3e−2   3      1.3e−1   11
    full    0.62    -       2.48    -       10     -      140      -

Table: t1 - computing times (in sec.) required for an H-matrix and dense matrix-vector multiplication, t2 - times to set up C̃ ∈ R^{n×n}.
14. H-matrices: numerics
Exponential covariance cov(h) = exp(−h); the covariance matrix is C ∈ R^{n×n}, n = 65^2.

    ℓ1     ℓ2     ‖C − C̃‖₂/‖C‖₂
    0.01   0.02   3·10^−2
    0.1    0.2    8·10^−3
    1      2      2.8·10^−6
15. m eigenvalues

    matrix info (MB, sec.)                 time for m eigenpairs (sec.)
    n         k    C̃ (MB)   C̃ (sec.)     m=2    m=5    m=10   m=20   m=40   m=80
    2.4·10^4  4    12        0.2           0.6    0.9    1.3    2.3    4.2    8
    6.8·10^4  8    95        2             2.4    3.8    5.6    8.4    18.0   28
    2.3·10^5  12   570       11            10.0   17.0   24.0   39.0   70.0   150

Table: Time required for computing m eigenpairs of the exponential covariance function with ℓ1 = ℓ3 = 0.1, ℓ2 = 0.5. The geometry is shown in Fig. 1 (right).
17. Sparse tensor decompositions of kernels
cov(x, y) = cov(x − y).
We want to approximate C ∈ R^{N×N}, N = n^d, by
    C_r = Σ_{k=1}^{r} V_k^1 ⊗ ... ⊗ V_k^d
such that ‖C − C_r‖ ≤ ε. The storage of C is O(N^2) = O(n^{2d}), whereas the storage of C_r is O(r d n^2).
To define the V_k^i we use the SVD.
Approximating all V_k^i in the H-matrix format yields the HKT format.
See basic arithmetics in [Hackbusch, Khoromskij, Tyrtyshnikov].
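For d = 2, the factors V_k^i can be obtained from an SVD of a rearrangement of C (the Van Loan-Pitsianis construction); a minimal sketch, where the function name and the block sizes n1, n2 are assumptions:

```python
import numpy as np

def kron_rank_r(C, n1, n2, r):
    # Rearrange C (of size n1*n2 x n1*n2) so that each n2 x n2 block becomes
    # one row; a rank-r truncated SVD of the rearrangement then yields
    # C_r = sum_{k=1}^r V1_k (x) V2_k
    R = C.reshape(n1, n2, n1, n2).transpose(0, 2, 1, 3).reshape(n1 * n1, n2 * n2)
    U, s, Vt = np.linalg.svd(R, full_matrices=False)
    V1 = [np.sqrt(s[k]) * U[:, k].reshape(n1, n1) for k in range(r)]
    V2 = [np.sqrt(s[k]) * Vt[k].reshape(n2, n2) for k in range(r)]
    return V1, V2

# Sanity check: an exact Kronecker product has Kronecker rank 1
A = np.array([[2.0, 1.0], [1.0, 2.0]])
B = np.array([[3.0, 0.5, 0.0], [0.5, 3.0, 0.5], [0.0, 0.5, 3.0]])
V1, V2 = kron_rank_r(np.kron(A, B), 2, 3, 1)
```

The singular values of the rearrangement measure how well C is approximated by a given Kronecker rank r.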
18. Tensor approximation
    W φ_ℓ^h = λ_ℓ M φ_ℓ^h,  where W := MCM.
Approximate
    M ≈ Σ_{ν=1}^{d} M_ν^(1) ⊗ M_ν^(2),   C ≈ Σ_{ν=1}^{q} C_ν^(1) ⊗ C_ν^(2),   φ ≈ Σ_{ν=1}^{r} φ_ν^(1) ⊗ φ_ν^(2),
where M_ν^(j), C_ν^(j) ∈ R^{n×n} and φ_ν^(j) ∈ R^n.
Example: for the mass matrix M ∈ R^{N×N} it holds that
    M = M^(1) ⊗ I + I ⊗ M^(1),
where M^(1) ∈ R^{n×n} is the one-dimensional mass matrix.
Hypothesis: the Kronecker rank of M stays small even for a more general domain with a non-regular grid.
19. Suppose C = Σ_{ν=1}^{q} C_ν^(1) ⊗ C_ν^(2) and φ = Σ_{j=1}^{r} φ_j^(1) ⊗ φ_j^(2). Then the tensor vector product is defined as
    Cφ = Σ_{ν=1}^{q} Σ_{j=1}^{r} (C_ν^(1) φ_j^(1)) ⊗ (C_ν^(2) φ_j^(2)).
The complexity is O(qrkn log n).
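This product can be evaluated without ever forming C or φ explicitly, using the identity (A ⊗ B)(u ⊗ v) = (Au) ⊗ (Bv); a minimal dense sketch, where the factor lists, sizes, and random data are assumptions:

```python
import numpy as np

def kron_matvec(C1, C2, p1, p2):
    # C phi = sum_nu sum_j (C1_nu p1_j) (x) (C2_nu p2_j)
    # for C = sum_nu C1_nu (x) C2_nu and phi = sum_j p1_j (x) p2_j
    n = C1[0].shape[0] * C2[0].shape[0]
    out = np.zeros(n)
    for A, B in zip(C1, C2):
        for u, v in zip(p1, p2):
            out += np.kron(A @ u, B @ v)
    return out

rng = np.random.default_rng(1)
C1 = [rng.standard_normal((4, 4)) for _ in range(2)]   # q = 2 Kronecker terms
C2 = [rng.standard_normal((5, 5)) for _ in range(2)]
p1 = [rng.standard_normal(4) for _ in range(3)]        # r = 3 terms for phi
p2 = [rng.standard_normal(5) for _ in range(3)]
y = kron_matvec(C1, C2, p1, p2)
```

In the HKT setting the small products C_ν^(j) φ_j^(i) are themselves H-matrix times vector products, which gives the O(qrkn log n) complexity quoted above.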
20. Numerical examples of tensor approximations
The Gaussian kernel exp(−h^2) has Kronecker rank 1.
The exponential kernel exp(−h) can be approximated by a tensor of low Kronecker rank:

    r                  1      2      3      4      5       6       10
    ‖C − C_r‖∞/‖C‖∞    11.5   1.7    0.4    0.14   0.035   0.007   2.8e−8
    ‖C − C_r‖₂/‖C‖₂    6.7    0.52   0.1    0.03   0.008   0.001   5.3e−9
21. Example
Let G = [0, 1]^2 and let L_h be the stiffness matrix computed with the five-point formula. Then
    ‖L_h‖₂ ≤ 8 h^−2 cos^2(πh/2) < 8 h^−2.

Lemma
The (n − 1)^2 eigenvectors of L_h are u_νµ (1 ≤ ν, µ ≤ n − 1):
    u_νµ(x, y) = sin(νπx) sin(µπy),  (x, y) ∈ G_h.
The corresponding eigenvalues are
    λ_νµ = 4 h^−2 (sin^2(νπh/2) + sin^2(µπh/2)),  1 ≤ ν, µ ≤ n − 1.

We use the Lanczos method with the matrix in the HKT format to compute eigenpairs of
    L_h v_i = λ_i v_i,  i = 1, ..., N,
and then compare the computed eigenpairs with the analytically known ones.
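The lemma can be checked numerically; a minimal sketch with a plain dense eigensolver instead of HKT-format Lanczos, where the small grid size n = 8 is an assumption:

```python
import numpy as np

n = 8
h = 1.0 / n
m = n - 1  # (n-1)^2 interior points

# 1D second-difference matrix; the five-point Laplacian is T (x) I + I (x) T
T = (2.0 * np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1)) / h**2
I = np.eye(m)
Lh = np.kron(T, I) + np.kron(I, T)

computed = np.sort(np.linalg.eigvalsh(Lh))

# Analytic eigenvalues lambda_{nu,mu} = 4 h^-2 (sin^2(nu pi h/2) + sin^2(mu pi h/2))
nu, mu = np.meshgrid(np.arange(1, n), np.arange(1, n))
analytic = np.sort((4.0 / h**2)
                   * (np.sin(nu * np.pi * h / 2)**2
                      + np.sin(mu * np.pi * h / 2)**2).ravel())
```

The Kronecker form L_h = T ⊗ I + I ⊗ T is exactly the rank-2 tensor structure that the HKT Lanczos iteration exploits.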
23. Higher order moments
Let the operator K be deterministic and
    K u(θ) = Σ_{α∈J} K u^(α) H_α(θ) = f̃(θ) = Σ_{α∈J} f^(α) H_α(θ),
with u^(α) = [u_1^(α), ..., u_N^(α)]^T. Projecting onto each H_α, we obtain
    K u^(α) = f^(α).
The KLE of f(θ) is
    f(θ) = f̄ + Σ_ℓ √λ_ℓ φ_ℓ(θ) f_ℓ = Σ_ℓ Σ_α √λ_ℓ φ_ℓ^(α) H_α(θ) f_ℓ = Σ_α H_α(θ) f^(α),
where f^(α) = Σ_ℓ √λ_ℓ φ_ℓ^(α) f_ℓ.
24. The 3-rd moment of u is
    M_u^(3) = E[ Σ_{α,β,γ} u^(α) ⊗ u^(β) ⊗ u^(γ) H_α H_β H_γ ] = Σ_{α,β,γ} u^(α) ⊗ u^(β) ⊗ u^(γ) c_{α,β,γ},
where
    c_{α,β,γ} := E(H_α(θ) H_β(θ) H_γ(θ)) = c_{α,β}^(γ) · γ!,
    c_{α,β}^(γ) := α! β! / ((g − α)! (g − β)! (g − γ)!),  g := (α + β + γ)/2.
Using u^(α) = K^−1 f^(α) = Σ_ℓ √λ_ℓ φ_ℓ^(α) K^−1 f_ℓ and u_ℓ := K^−1 f_ℓ, we obtain
    M_u^(3) = Σ_{p,q,r} t_{p,q,r} u_p ⊗ u_q ⊗ u_r,  where
    t_{p,q,r} := √(λ_p λ_q λ_r) Σ_{α,β,γ} φ_p^(α) φ_q^(β) φ_r^(γ) c_{α,β,γ}.
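The coefficients t_{p,q,r} and the third moment are plain tensor contractions that map directly onto einsum; a minimal sketch with random placeholder data, where the sizes, the eigenvalues λ, the coefficients φ_ℓ^(α), the tensor c_{α,β,γ}, and the vectors u_ℓ are all assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
L, A = 3, 5                          # number of KLE terms and of multi-indices alpha
lam = np.array([1.0, 0.5, 0.25])     # KLE eigenvalues lambda_l (placeholder)
phi = rng.standard_normal((L, A))    # phi_l^(alpha) (placeholder)
c = rng.standard_normal((A, A, A))   # c_{alpha,beta,gamma} (placeholder)
U = rng.standard_normal((7, L))      # columns u_l = K^{-1} f_l (placeholder)

# t_{p,q,r} = sqrt(lam_p lam_q lam_r) sum_{a,b,g} phi_p^(a) phi_q^(b) phi_r^(g) c_{a,b,g}
s = np.sqrt(lam)
t = np.einsum('p,q,r,pa,qb,rg,abg->pqr', s, s, s, phi, phi, phi, c)

# M_u^(3) = sum_{p,q,r} t_{p,q,r} u_p (x) u_q (x) u_r
M3 = np.einsum('pqr,ip,jq,kr->ijk', t, U, U, U)
```

Because only L KLE terms enter, the cost is governed by the small core tensor t of size L × L × L rather than by the N^3 entries of the full moment.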
25. Literature
1. B. N. Khoromskij, A. Litvinenko, H. G. Matthies, Application of hierarchical matrices for computing the Karhunen-Loève expansion, Computing, 2008, Springer Wien, http://dx.doi.org/10.1007/s00607-008-0018-3
2. B. N. Khoromskij, A. Litvinenko, Data Sparse Computation of the Karhunen-Loève Expansion, 2008, AIP Conference Proceedings, 1048-1, pp. 311-314.
3. H. G. Matthies, Uncertainty Quantification with Stochastic Finite Elements, Encyclopedia of Computational Mechanics, Wiley, 2007.
4. W. Hackbusch, B. N. Khoromskij, S. A. Sauter, and E. E. Tyrtyshnikov, Use of Tensor Formats in Elliptic Eigenvalue Problems, Preprint 78/2008, MPI for Mathematics in the Sciences, Leipzig.