Approximation of large Matérn covariance matrices in the H-matrix format. We computed relative errors in the spectral and Frobenius norms as well as the Kullback-Leibler divergence. Storage and computational costs are drastically reduced.
Initial covariance matrix approximations for Kalman filters using H-matrices
1. Initial covariance matrix for the Kalman Filter
Alexander Litvinenko
Group of Raul Tempone, SRI UQ, and Group of David Keyes,
Extreme Computing Research Center KAUST
Center for Uncertainty Quantification
http://sri-uq.kaust.edu.sa/
2. Two variants
Either we assume that a matrix of snapshots is given,
[q(x, θ1), ..., q(x, θ_nq)],
or we assume that the covariance function is of a certain type. The Matérn class of covariance functions is defined as
C(r) := C_{ν,ℓ}(r) = (2σ² / Γ(ν)) (r / (2ℓ))^ν K_ν(r / ℓ),  (1)
where K_ν is the modified Bessel function of the second kind and ℓ the covariance length.
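Equation (1) can be evaluated directly with SciPy's modified Bessel function. The sketch below is illustrative (function name and defaults are mine); for ν = 1/2 the class reduces to the exponential covariance σ² exp(−r/ℓ), a convenient sanity check.

```python
import numpy as np
from scipy.special import gamma, kv   # kv: modified Bessel function of the second kind

def matern(r, nu, ell, sigma2=1.0):
    """Matern covariance, Eq. (1): C(r) = (2*sigma2/Gamma(nu)) * (r/(2*ell))**nu * K_nu(r/ell).
    The limiting value C(0) = sigma2 is set explicitly, since K_nu diverges at 0."""
    r = np.atleast_1d(np.asarray(r, dtype=float))
    c = np.full(r.shape, float(sigma2))
    pos = r > 0
    c[pos] = (2.0 * sigma2 / gamma(nu)
              * (r[pos] / (2.0 * ell)) ** nu
              * kv(nu, r[pos] / ell))
    return c
```

Evaluating this on a grid of pairwise distances yields the dense covariance matrix C that the H-matrix format then compresses.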
8. Kullback-Leibler divergence (KLD)
DKL(P‖Q) is a measure of the information lost when distribution Q is used to approximate P:
DKL(P‖Q) = Σ_i P(i) ln(P(i)/Q(i)),   DKL(P‖Q) = ∫_{−∞}^{∞} p(x) ln(p(x)/q(x)) dx,
where p, q are the densities of P and Q. For multivariate normal distributions N0(µ0, Σ0) and N1(µ1, Σ1),
2 DKL(N0‖N1) = tr(Σ1^{−1} Σ0) + (µ1 − µ0)^T Σ1^{−1} (µ1 − µ0) − k − ln(det Σ0 / det Σ1),
where k is the dimension.
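The closed-form Gaussian KLD above translates into a few lines of NumPy; this is a minimal sketch (the function name is mine), using `solve` and `slogdet` rather than explicit inverses and determinants for numerical robustness.

```python
import numpy as np

def kld_gauss(mu0, S0, mu1, S1):
    """KLD between N(mu0, S0) and N(mu1, S1) via the closed form:
    2*DKL = tr(S1^{-1} S0) + (mu1-mu0)^T S1^{-1} (mu1-mu0) - k - ln(det S0/det S1)."""
    k = len(mu0)
    d = mu1 - mu0
    tr_term = np.trace(np.linalg.solve(S1, S0))   # tr(S1^{-1} S0), no explicit inverse
    quad = d @ np.linalg.solve(S1, d)             # Mahalanobis term
    _, logdet0 = np.linalg.slogdet(S0)
    _, logdet1 = np.linalg.slogdet(S1)
    return 0.5 * (tr_term + quad - k - (logdet0 - logdet1))
```

For identical distributions the result is 0, and for N(0, 1) vs. N(0, 2) in one dimension it gives (ln 2 − 1/2)/2, both easy checks against the formula.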
9. Convergence of the KLD with increasing rank k
k    KLD               ‖C − C_H‖₂          ‖C (C_H)^{−1} − I‖₂
     L=0.25   L=0.75   L=0.25    L=0.75    L=0.25    L=0.75
5    0.51     2.3      4.0e-2    0.1       4.8       63
6    0.34     1.6      9.4e-3    0.02      3.4       22
8    5.3e-2   0.4      1.9e-3    0.003     1.2       8
10   2.6e-3   0.2      7.7e-4    7.0e-4    6.0e-2    3.1
12   5.0e-4   2e-2     9.7e-5    5.6e-5    1.6e-2    0.5
15   1.0e-5   9e-4     2.0e-5    1.1e-5    8.0e-4    0.02
20   4.5e-7   4.8e-5   6.5e-7    2.8e-7    2.1e-5    1.2e-3
50   3.4e-13  5e-12    2.0e-13   2.4e-13   4e-11     2.7e-9

Table: Dependence of the KLD on the H-matrix approximation rank k; Matérn covariance with parameters L ∈ {0.25, 0.75} and ν = 0.5, domain G = [0, 1]²; ‖C‖₂ = 212 (L = 0.25) and 568 (L = 0.75).
10. Convergence of the KLD with increasing rank k
k    KLD               ‖C − C_H‖₂          ‖C (C_H)^{−1} − I‖₂
     L=0.25   L=0.75   L=0.25    L=0.75    L=0.25    L=0.75
5    nan      nan      0.05      6e-2      2.1e+13   1e+28
10   10       10e+17   4e-4      5.5e-4    276       1e+19
15   3.7      1.8      1.1e-5    3e-6      112       4e+3
18   1.2      2.7      1.2e-6    7.4e-7    31        5e+2
20   0.12     2.7      5.3e-7    2e-7      4.5       72
30   3.2e-5   0.4      1.3e-9    5e-10     4.8e-3    20
40   6.5e-8   1e-2     1.5e-11   8e-12     7.4e-6    0.5
50   8.3e-10  3e-3     2.0e-13   1.5e-13   1.5e-7    0.1

Table: Dependence of the KLD on the H-matrix approximation rank k; Matérn covariance with parameters L ∈ {0.25, 0.75} and ν = 1.5, domain G = [0, 1]²; ‖C‖₂ = 720 (L = 0.25) and 1068 (L = 0.75).
11. Applications of large covariance matrices
1. Kriging estimate ŝ := C_sy C_yy^{−1} y.
2. Estimation of the variance σ̂², the diagonal of the conditional covariance matrix C_{ss|y} = diag(C_ss − C_sy C_yy^{−1} C_ys).
3. Geostatistical optimal design: φ_A := n^{−1} trace(C_{ss|y}), φ_C := c^T (C_ss − C_sy C_yy^{−1} C_ys) c.
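Items 1 and 2 above amount to solves with C_yy; a minimal dense sketch (function names are mine, and a production code would use the H-matrix factors instead of dense `solve`):

```python
import numpy as np

def kriging(C_sy, C_yy, y):
    """Kriging estimate s_hat = C_sy C_yy^{-1} y (item 1)."""
    return C_sy @ np.linalg.solve(C_yy, y)

def conditional_variance(C_ss, C_sy, C_yy):
    """Diagonal of C_{ss|y} = C_ss - C_sy C_yy^{-1} C_ys (item 2);
    its mean gives the design criterion phi_A = n^{-1} trace(C_{ss|y}) (item 3)."""
    return np.diag(C_ss - C_sy @ np.linalg.solve(C_yy, C_sy.T))
```

A quick sanity check: when the prediction points coincide with the data points (so C_sy = C_ss = C_yy), the estimate reproduces y exactly and the conditional variance vanishes.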
12. Mean and variance in the rank-k format
ū := (1/Z) Σ_{i=1}^{Z} u_i = (1/Z) Σ_{i=1}^{Z} A b_i = A b̄,  (5)
with b̄ the mean of the b_i. Cost is O(k(Z + n)).
C = (1/(Z − 1)) W_c W_c^T ≈ (1/(Z − 1)) U_k Σ_k Σ_k^T U_k^T.  (6)
Cost is O(k²(Z + n)).
Lemma: Let ‖W − W_k‖₂ ≤ ε, and let ū_k be a rank-k approximation of the mean ū. Then
a) ‖ū − ū_k‖ ≤ ε/√Z,
b) ‖C − C_k‖ ≤ (1/(Z − 1)) ε².
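The factorization (6) and the bound in Lemma b) can be verified numerically on synthetic snapshots; the data and variable names below are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)
n, Z = 50, 20                                   # state size n, number of snapshots Z
W = rng.standard_normal((n, Z))                 # snapshot matrix, columns u_i

u_bar = W.mean(axis=1)                          # sample mean, Eq. (5)
Wc = W - u_bar[:, None]                         # centered snapshots W_c
C = Wc @ Wc.T / (Z - 1)                         # sample covariance, Eq. (6), left side

# Rank-k factorization via the thin SVD of W_c: C_k = U_k Sigma_k Sigma_k^T U_k^T / (Z-1).
U, s, Vt = np.linalg.svd(Wc, full_matrices=False)
k = Z - 1                                       # W_c has rank at most Z-1, so this k is exact
Ck = (U[:, :k] * s[:k] ** 2) @ U[:, :k].T / (Z - 1)

# Lemma b): truncating W_c to rank 5 gives ||C - C_5||_2 = s[5]^2/(Z-1) <= eps^2/(Z-1).
C5 = (U[:, :5] * s[:5] ** 2) @ U[:, :5].T / (Z - 1)
err_k5 = np.linalg.norm(C - C5, 2)
```

With k = Z − 1 the factorization recovers C exactly (up to rounding), and the rank-5 truncation error sits exactly at the lemma's bound with ε = s[5].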