Low-rank tensors for PDEs with
uncertain coefficients
Alexander Litvinenko
Center for Uncertainty Quantification
http://sri-uq.kaust.edu.sa/
Extreme Computing Research Center, KAUST
The structure of the talk
Part I (Stochastic Forward Problem):
1. Motivation
2. Elliptic PDE with uncertain coefficients
3. Discretization and low-rank tensor approximations
Part II (Stochastic Inverse Problem via Bayesian Update):
1. Bayesian update surrogate
2. Examples
Part III (Different Examples relevant for BGS)
My interests and collaborations
Motivation to do Uncertainty Quantification (UQ)
Motivation: there is an urgent need to quantify and reduce the
uncertainty in multiscale-multiphysics applications.
UQ and its relevance: Nowadays computational predictions are used in critical engineering decisions. But how reliable are these predictions?
Example: Saudi Aramco currently has a simulator,
GigaPOWERS, which runs with 9 billion cells. How sensitive
are these simulations w.r.t. unknown reservoir properties?
My goal is the systematic, mathematically founded development of UQ methods and low-rank algorithms relevant for applications.
PDE with uncertain coefficient
Consider
−div(κ(x, ω) ∇u(x, ω)) = f(x, ω) in G × Ω, G ⊂ R^d,
u = 0 on ∂G,
where κ(x, ω) is an uncertain diffusion coefficient.
1. Efficient Analysis of High Dimensional Data in Tensor Formats, Espig, Hackbusch, Litvinenko, Matthies, Zander, 2012.
2. Efficient low-rank approximation of the stochastic Galerkin matrix in tensor formats, Wähnert, Espig, Hackbusch, Litvinenko, Matthies, 2013.
3. PCE of random coefficients and the solution of stochastic PDEs in the Tensor Train format, Dolgov, Litvinenko, Khoromskij, Matthies, 2016.
4. Application of H-matrices for computing the KL expansion, Khoromskij, Litvinenko, Matthies, Computing 84 (1-2), 49-67, 2009.
[Figure: 50 realizations of the solution u, the mean and quantiles]
Related work by R. Scheichl, Chr. Schwab, A. Teckentrup, F. Nobile, D. Kressner,...
Canonical and Tucker tensor formats
[Pictures are taken from B. Khoromskij and A. Auer lecture course]
Storage: O(n^d) → O(dRn) (canonical) and O(R^d + dRn) (Tucker).
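To make these counts concrete, a minimal sketch; the values of n, d and the rank R below are illustrative choices, not taken from the talk.

```python
# Storage counts for full, canonical (CP) and Tucker tensor formats;
# n, d, R are illustrative values.
def full_storage(n, d):
    return n ** d                  # O(n^d) entries

def cp_storage(n, d, R):
    return d * R * n               # R rank-one terms: d vectors of length n each

def tucker_storage(n, d, R):
    return R ** d + d * R * n      # R^d core plus d factor matrices of size n x R

n, d, R = 100, 10, 5
print(full_storage(n, d))          # 100000000000000000000
print(cp_storage(n, d, R))         # 5000
print(tucker_storage(n, d, R))     # 9770625
```

Note that the Tucker core R^d itself grows exponentially in d, which is why hierarchical formats (HT, TT) discussed later replace it with low-order cores.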
Karhunen-Loève and Polynomial Chaos Expansions
Apply both.
Truncated Karhunen-Loève Expansion (KLE):
κ(x, ω) ≈ κ0(x) + Σ_{j=1}^{L} κ_j g_j(x) ξ_j(θ(ω)),
where θ = θ(ω) = (θ1(ω), θ2(ω), ...),
ξ_j(θ) = (1/κ_j) ∫_G (κ(x, ω) − κ0(x)) g_j(x) dx.
Truncated Polynomial Chaos Expansion (PCE):
κ(x, ω) ≈ Σ_{α∈J_{M,p}} κ^(α)(x) H_α(θ),
ξ_j(θ) ≈ Σ_{α∈J_{M,p}} ξ_j^(α) H_α(θ).
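A truncated KLE can be sketched numerically by eigendecomposing a discretized covariance; the grid size, Gaussian covariance, correlation length and truncation level below are illustrative choices, not the talk's settings.

```python
import numpy as np

# Minimal sketch of a truncated KLE for a 1D Gaussian covariance
# C(x, y) = exp(-(x - y)^2 / l^2) on a uniform grid; n, l, L illustrative.
n, l, L = 200, 0.2, 10
x = np.linspace(0.0, 1.0, n)
C = np.exp(-(x[:, None] - x[None, :]) ** 2 / l ** 2)

# Symmetric eigendecomposition; keep the L largest eigenpairs
w, V = np.linalg.eigh(C)
idx = np.argsort(w)[::-1][:L]
w, V = w[idx], V[:, idx]

# One KLE realization of the fluctuation kappa - kappa0:
# sum_j sqrt(lambda_j) g_j(x) theta_j with i.i.d. standard normal theta_j
rng = np.random.default_rng(0)
theta = rng.standard_normal(L)
realization = V @ (np.sqrt(np.maximum(w, 0.0)) * theta)
print(realization.shape)  # (200,)
```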
Discretization of elliptic PDE
Ku = f, where
K := Σ_{ℓ=1}^{L} K_ℓ ⊗ ⊗_{μ=1}^{M} Δ_{ℓμ}, K_ℓ ∈ R^{N×N}, Δ_{ℓμ} ∈ R^{R_μ×R_μ},
u := Σ_{j=1}^{r} u_j ⊗ ⊗_{μ=1}^{M} u_{jμ}, u_j ∈ R^N, u_{jμ} ∈ R^{R_μ},
f := Σ_{k=1}^{R} f_k ⊗ ⊗_{μ=1}^{M} g_{kμ}, f_k ∈ R^N, g_{kμ} ∈ R^{R_μ}.
Efficient low-rank approximation of the stochastic Galerkin matrix in tensor formats, Wähnert, Espig, Hackbusch, Litvinenko, Matthies, 2013.
In [2] we analyzed tensor ranks (compression properties) of the
stochastic Galerkin operator K.
Numerical Experiments
2D L-shape domain, N = 557 dofs.
The total stochastic dimension is M_u = M_k + M_f = 20; there are |J_{M,p}| = 231 PCE coefficients:
u = Σ_{j=1}^{231} u_{j,0} ⊗ ⊗_{μ=1}^{20} u_{jμ} ∈ R^557 ⊗ ⊗_{μ=1}^{20} R^3.
The full tensor u has 3^20 · 557 ≈ 2 · 10^12 entries ≈ 16 TB of memory.
Instead we store only 231 · (557 + 20 · 3) ≈ 144,000 entries ≈ 1.14 MB.
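The storage figures above are easy to verify; a small sketch repeating the arithmetic:

```python
# Repeating the storage arithmetic of the example: full tensor with
# 3^20 * 557 entries vs. 231 * (557 + 20*3) entries in low-rank form.
full_entries = 3 ** 20 * 557
lowrank_entries = 231 * (557 + 20 * 3)
print(full_entries)                     # 1942138911357, about 2e12
print(lowrank_entries)                  # 142527, about 1.1 MB in doubles
print(full_entries // lowrank_entries)  # compression factor, about 1.4e7
```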
Level sets
Now we compute the level sets
{u_i : u_i > b · max_i u_i}, i := (i_1, ..., i_{M+1}),
for b ∈ {0.2, 0.4, 0.6, 0.8}.
The computing time for each b was 10 minutes.
Part II: Developing a cheap Bayesian update surrogate
1. Rosic, Litvinenko, Pajonk, Matthies, J. Comp. Phys. 231 (17), 5761-5787, 2013.
2. Inverse Problems in a Bayesian Setting, Matthies, Zander, Pajonk, Rosic, Litvinenko, Comp. Meth. for Solids and Fluids: Multiscale Analysis, 2016.
Related work by A. Stuart, Chr. Schwab, A. El Sheikh, Y. Marzouk, H. Najm, O. Ernst.
Numerical computation of Bayesian Update surrogate
Notation: ŷ – measurements from engineers, y(ξ) – forecast from the simulator, ε(ω) – the Gaussian noise.
Look for ϕ such that q(ξ) = ϕ(z(ξ)), z(ξ) = y(ξ) + ε(ω):
ϕ ≈ ϕ̃ = Σ_{α∈J_p} ϕ_α Φ_α(z(ξ))
and minimize ‖q(ξ) − ϕ̃(z(ξ))‖²_{L2}, where the Φ_α are known polynomials (e.g. Hermite).
Taking derivatives with respect to ϕ_α:
∂/∂ϕ_α ⟨q(ξ) − ϕ̃(z(ξ)), q(ξ) − ϕ̃(z(ξ))⟩ = 0 ∀α ∈ J_p.
Numerical computation of NLBU
∂/∂ϕ_α E[ q²(ξ) − 2 Σ_{β∈J} q ϕ_β Φ_β(z) + Σ_{β,γ∈J} ϕ_β ϕ_γ Φ_β(z) Φ_γ(z) ]
= 2 E[ −q Φ_α(z) + Σ_{β∈J} ϕ_β Φ_β(z) Φ_α(z) ]
= 2 ( Σ_{β∈J} E[Φ_β(z) Φ_α(z)] ϕ_β − E[q Φ_α(z)] ) = 0 ∀α ∈ J.
Numerical computation of NLBU
Now, rewriting the last sum in matrix form, we obtain the linear system of equations A ϕ = b for the coefficients ϕ_β, with entries
A_{αβ} = E[Φ_α(z(ξ)) Φ_β(z(ξ))], b_α = E[q(ξ) Φ_α(z(ξ))],
where α, β ∈ J and A is of size |J| × |J|.
Numerical computation of NLBU
Finally, the assimilated parameter qa is
qa = qf + ϕ̃(ŷ) − ϕ̃(z), (1)
z(ξ) = y(ξ) + ε(ω),
ϕ̃ = Σ_{β∈J_p} ϕ_β Φ_β(z(ξ)).
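A minimal Monte Carlo sketch of this construction, under illustrative assumptions not from the talk: a toy scalar model y(ξ) = q(ξ) with standard normal prior, noise std 0.1, and a monomial basis Φ_α(z) = z^α. The least-squares fit below is the sampled version of the linear system for the ϕ_β.

```python
import numpy as np

# Fit the update map phi by least squares over prior samples, which is
# a Monte Carlo discretization of sum_b E[Phi_b Phi_a] phi_b = E[q Phi_a].
rng = np.random.default_rng(1)
N, p = 20000, 3
q = rng.standard_normal(N)            # prior samples of the parameter
z = q + 0.1 * rng.standard_normal(N)  # noisy forecast samples z = y + eps

Phi = np.vander(z, p + 1, increasing=True)     # Phi[:, a] = z^a
phi, *_ = np.linalg.lstsq(Phi, q, rcond=None)  # coefficients phi_a

def phi_tilde(v):
    return np.vander(np.atleast_1d(v), p + 1, increasing=True) @ phi

# Assimilated parameter: q_a = q_f + phi(y_hat) - phi(z)
y_hat = 0.7                           # a hypothetical measurement
q_a = q + phi_tilde(y_hat) - phi_tilde(z)
print(q_a.mean(), q_a.std())          # posterior concentrates near y_hat
```

For this linear-Gaussian toy model the fitted map is close to the exact conditional expectation E[q|z] = z/(1 + 0.01), so the update reproduces the Kalman-type shrinkage.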
Example: 1D elliptic PDE with uncertain coeffs
−∇ · (κ(x, ξ) ∇u(x, ξ)) = f(x, ξ), x ∈ [0, 1],
with Dirichlet random b.c. g(0, ξ) and g(1, ξ).
3 measurements: u(0.3) = 22, s.d. 0.2; u(0.5) = 28, s.d. 0.3; u(0.8) = 18, s.d. 0.3.
κ(x, ξ): N = 100 dofs, M = 5, number of KLE terms 35, beta distribution for κ, Gaussian cov_κ, cov. length 0.1, multivariate Hermite polynomial of order p_κ = 2;
RHS f(x, ξ): M_f = 5, number of KLE terms 40, beta distribution for f, exponential cov_f, cov. length 0.03, multivariate Hermite polynomial of order p_f = 2;
b.c. g(x, ξ): M_g = 2, number of KLE terms 2, normal distribution for g, Gaussian cov_g, cov. length 10, multivariate Hermite polynomial of order p_g = 1;
p_φ = 3 and p_u = 3.
Example: Updating of the parameter
Figure: Prior and posterior (updated) parameter κ.
Collaboration with Y. Marzouk, MIT, and TU Braunschweig.
Together with H. Najm, Sandia Lab, we are comparing our technique with his advanced MCMC technique for a chemical combustion equation.
Example: updating of the solution u
Figure: Original and updated solutions, mean value plus/minus 1,2,3
standard deviations. Number of available measurements {0, 1, 2, 3, 5}
[Graphics are built in the stochastic Galerkin library sglib, written by E. Zander at TU Braunschweig]
Part III: My contribution to MUNA project
MUNA = Management and minimization of Uncertainties in Numerical Aerodynamics.
1. Quantification of airfoil geometry-induced aerodynamic uncertainties: comparison of approaches, Liu, Litvinenko, Schillings, Schulz, JUQ 2017.
2. Numerical Methods for Uncertainty Quantification and Bayesian Update in Aerodynamics, Litvinenko, Matthies, chapter in Springer book, Vol. 122, pp. 262-285, 2013.
Example: uncertainties in free stream turbulence
[Figure: sketch of free-stream velocity vectors v1, v2 with angle of attack α and fluctuations u′, α′]
Random vectors v1(θ) and v2(θ) model the free stream turbulence.
Example: 3σ intervals
Figure: 3σ interval (σ the standard deviation) at each point of the RAE2822 airfoil for the pressure (cp) and friction (cf) coefficients.
Mean and variance of density, tke, xv, zv, pressure
Example: Kriging and geostat. optimal design
Domain: 20m × 20m × 20m, 25,000 × 25,000 × 25,000 dofs, 4000 measurements.
Log-permeability. Color scale: showing the 95% confidence interval.
Kriging and spatial design accelerated by orders of magnitude: Combining low-rank with FFT, Nowak, Litvinenko, Mathematical Geosciences 45 (4), 411-435, 2013.
Numerics on a computer with 16 GB RAM:
1. 2D Kriging with 270 million estimation points and 100 measurement values (0.25 sec.),
2. computing the estimation variance (< 1 sec.),
3. evaluating the spatial average of the estimation variance (the A-criterion of geostatistical optimal design) for 2 · 10^12 estimation points (30 sec.),
4. computing the C-criterion of geostatistical optimal design for 2 · 10^15 estimation points (30 sec.),
5. solving a 3D Kriging problem with 15 · 10^12 estimation points and 4000 measurement data values (20 sec.).
Collaboration with Stuttgart University, hydrology and
geosciences.
Example from spatial statistics
Goal: to improve the estimation of unknown statistical parameters in a spatial soil moisture field, Mississippi basin, [−85°, −73°] × [32°, 43°].
Log-likelihood function with C = exp(−|x − y|/θ) and Z the available (satellite) data:
L(θ) = −(n/2) log(2π) − (1/2) log |C(θ)| − (1/2) Zᵀ C(θ)^{−1} Z.
Collaboration with statisticians: M. Genton, Y. Sun, R. Huser, H. Rue from KAUST.
n = 512K, matrix setup 261 sec., compression rate 99.98% (0.4 GB against 2006 GB). H-LU is done in 843 sec., error 2 · 10^{−3}.
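For small n the log-likelihood can be evaluated directly with a dense Cholesky factorization; a sketch with an illustrative grid and synthetic data (O(n³) cost, whereas the talk's H-matrix approach reaches n = 512K):

```python
import numpy as np

# Direct dense evaluation of the Gaussian log-likelihood
# L(theta) = -(n/2) log(2*pi) - (1/2) log|C(theta)| - (1/2) Z^T C(theta)^{-1} Z
# with exponential covariance C_ij = exp(-|x_i - x_j| / theta).
def log_likelihood(Z, x, theta):
    n = len(Z)
    C = np.exp(-np.abs(x[:, None] - x[None, :]) / theta)
    L = np.linalg.cholesky(C)              # C = L L^T
    alpha = np.linalg.solve(L, Z)          # so Z^T C^{-1} Z = ||alpha||^2
    logdet = 2.0 * np.log(np.diag(L)).sum()
    return -0.5 * n * np.log(2.0 * np.pi) - 0.5 * logdet - 0.5 * alpha @ alpha

x = np.linspace(0.0, 1.0, 50)
rng = np.random.default_rng(2)
Z = rng.standard_normal(50)
print(log_likelihood(Z, x, theta=0.3))
```

The Cholesky factor gives both the log-determinant and the quadratic form in one factorization; the H-matrix version replaces this dense factorization with an H-LU.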
Conclusion
Introduced:
1. Low-rank tensor methods to solve PDEs with uncertain coefficients,
2. Post-processing in low-rank tensor format: computing level sets,
3. Bayesian update surrogate ϕ (as a linear, quadratic, ... approximation),
4. Quantification of uncertainties in numerical aerodynamics,
5. Applications in geosciences,
6. Estimating unknown coefficients in spatial statistics (moisture example).
Thank you!
Possible collaboration
1. Dominic Breit (error estimates for UQ applications to
balance statistical and discretization errors)
2. Gabriel Lord (num. meth. for PDEs with uncertainties;
combination of multiscale methods, UQ techniques and
Bayesian inference for reservoir modeling; low-rank tensor
methods for high-dimensional problems).
3. Lehel Banjai (computation of electromagnetic fields
scattered from dielectric objects of uncertain shapes;
balancing of the Runge-Kutta time discretization step and
the H-matrix approximation rank in time-dependent PDEs).
Possible collaboration
1. BGS (CO2 storage, reservoir modeling, spatial statistics in
geology/geophysics),
2. Lyell Institute (subsurface flow under uncertainties, EOR,
Bayesian techniques for data assimilations)
3. EGIS:
3.1. Mike Christie (reservoir modeling under uncertainties,
EOR, seismic wave propagation in uncertain media)
3.2. Vasily Demyanov (uncertainty quantification and
low-rank approximations in geostatistics)
3.3. Dan Arnold (modeling of random geology, multi-scale,
Bayesian inference)
3.4. Ahmed El Sheikh (fast Bayesian update methods,
advanced UQ, surrogates for BU, big data from spatial
statistics).
My experience since 2002
Explanation of Bayesian Update surrogate
Let the stochastic model of the measurement be given by
y = M(q) + ε, where ε is the measurement noise.
Best estimator ϕ̃ for q given z, i.e.
ϕ̃ = argmin_ϕ E[‖q(·) − ϕ(z(·))‖²₂].
The best estimate (or predictor) of q given the measurement model is
q_M(ξ) = ϕ̃(z(ξ)).
The remainder, i.e. the difference between q and q_M, is given by
q⊥_M(ξ) = q(ξ) − q_M(ξ).
Due to the minimisation property of the MMSE estimator, the remainder is orthogonal to q_M(ξ), i.e. cov(q⊥_M, q_M) = 0.
[Thanks to Elmar Zander, TU Braunschweig]
In other words,
q(ξ) = q_M(ξ) + q⊥_M(ξ) (2)
yields an orthogonal decomposition of q.
Actual measurement ŷ, prediction q̂ = ϕ̃(ŷ). The part q_M of q can be "collapsed" to q̂. The updated stochastic model q′ is thus given by
q′(ξ) = q̂ + q⊥_M(ξ) (3)
q′(ξ) = q(ξ) + (ϕ̃(ŷ) − ϕ̃(z(ξ))). (4)
Future plans, Idea N1
Possible collaboration: to develop a low-rank adaptive, goal-oriented Bayesian update technique. The solution of the forward and inverse problems will be treated as one adaptive process, controlled by error/uncertainty estimators.
[Diagram: the stochastic forward solve (producing z = y + ε from f) and the Bayesian update of q via (ŷ − z), combined in one adaptive low-rank loop; error contributions come from spatial discretization, stochastic discretization, low-rank approximation, and the inverse operator approximation.]
Future plans, Idea N2
Connection between Green's functions in PDEs and covariance matrices.
Possible collaboration with statisticians: Doug Nychka (NCAR), Haavard Rue.
Future plans, Idea N3
Data assimilation techniques, Bayesian update surrogate.
Develop a non-linear, non-Gaussian Bayesian update approximation for gPCE coefficients.
Possible collaboration with Kody Law (OAK NL), Y. Marzouk
(MIT), H. Najm (Sandia NL), TU Braunschweig and KAUST.
Example: Canonical rank d, whereas TT rank 2
Consider the d-Laplacian over a uniform tensor grid. It is known to have the Kronecker rank-d representation
Δ_d = A⊗I_N⊗...⊗I_N + I_N⊗A⊗...⊗I_N + ... + I_N⊗I_N⊗...⊗A ∈ R^{N^d × N^d}, (5)
with A = Δ_1 = tridiag{−1, 2, −1} ∈ R^{N×N}, and I_N being the N × N identity. Notice that the canonical rank is rank_C(Δ_d) = d, while the TT-rank of Δ_d is equal to 2 for any dimension due to the explicit representation
Δ_d = (Δ_1  I) × [I 0; Δ_1 I] × ... × [I 0; Δ_1 I] × [I; Δ_1], (6)
where the rank product operation "×" is defined as a regular matrix product of the two corresponding core matrices, their blocks being multiplied by means of tensor products. A similar bound is true for the Tucker rank: rank_Tuck(Δ_d) = 2.
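The Kronecker-sum representation (5) is easy to reproduce densely for small n and d (the sizes below are illustrative); the minimal eigenvalue of the sum is the sum of the 1D minimal eigenvalues:

```python
import numpy as np

# d-dimensional Laplacian as a Kronecker sum of 1D stencils
# A = tridiag(-1, 2, -1); dense construction for a small check.
def laplacian_kron(n, d):
    A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
    I = np.eye(n)
    total = np.zeros((n ** d, n ** d))
    for pos in range(d):              # A sits in mode `pos`, identity elsewhere
        factors = [A if k == pos else I for k in range(d)]
        term = factors[0]
        for F in factors[1:]:
            term = np.kron(term, F)
        total += term
    return total

D3 = laplacian_kron(4, 3)
print(D3.shape)  # (64, 64)
```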
Advantages and disadvantages
Denote by k the rank, d the dimension, n = #dofs in 1D:
1. CP: approximation is an ill-posed problem, O(dnk), hard to compute
2. Tucker: reliable arithmetic based on SVD, O(dnk + k^d)
3. Hierarchical Tucker: based on SVD, storage O(dnk + dk^3), truncation O(dnk^2 + dk^4)
4. TT: based on SVD, O(dnk^2) or O(dnk^3), stable
5. Quantics-TT: O(n^d) → O(d log_q n)
How to compute the variance in CP format
Let u ∈ R_r (CP rank r) and
ũ := u − ū ⊗_{μ=1}^{d} (1/n_μ) 1̃_μ = Σ_{j=1}^{r+1} ⊗_{μ=1}^{d} ũ_{jμ} ∈ R_{r+1}, (7)
where ū := ⟨u, 1⟩. Then the variance var(u) of u can be computed as follows:
var(u) = ⟨ũ, ũ⟩ / Π_{μ=1}^{d} n_μ
       = (1 / Π_{μ=1}^{d} n_μ) ⟨ Σ_{i=1}^{r+1} ⊗_{μ=1}^{d} ũ_{iμ}, Σ_{j=1}^{r+1} ⊗_{ν=1}^{d} ũ_{jν} ⟩
       = Σ_{i=1}^{r+1} Σ_{j=1}^{r+1} Π_{μ=1}^{d} (1/n_μ) ⟨ũ_{iμ}, ũ_{jμ}⟩.
Numerical cost is O((r + 1)² Σ_{μ=1}^{d} n_μ).
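A sketch of this computation, assuming the CP factors are stored as matrices of size n_μ × r (an illustrative data layout); the result is checked against the dense variance of a small random tensor:

```python
import numpy as np

# Variance of a tensor in CP format computed from the factors alone,
# following the rank-(r+1) centering trick; factors[mu] has shape (n_mu, r).
def cp_inner(fac_a, fac_b):
    # <A, B> = sum_{i,j} prod_mu (Fa_mu^T Fb_mu)_{ij}
    G = np.ones((fac_a[0].shape[1], fac_b[0].shape[1]))
    for Fa, Fb in zip(fac_a, fac_b):
        G *= Fa.T @ Fb
    return G.sum()

def cp_variance(factors):
    sizes = [F.shape[0] for F in factors]
    N = float(np.prod(sizes))
    # mean(u) = <u, 1> / N, computed mode by mode
    mean = np.prod([F.sum(axis=0) for F in factors], axis=0).sum() / N
    # u_tilde = u - mean * (1 x ... x 1): append one rank-one term
    tilde = [np.hstack([F, np.ones((n, 1))]) for F, n in zip(factors, sizes)]
    tilde[0][:, -1] *= -mean
    return cp_inner(tilde, tilde) / N

rng = np.random.default_rng(3)
factors = [rng.standard_normal((5, 2)) for _ in range(3)]
full = np.einsum('ir,jr,kr->ijk', *factors)       # dense reference
print(np.isclose(cp_variance(factors), full.var()))  # True
```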
Computing QoI in low-rank tensor format
Now we consider how to find "level sets", for instance all entries of the tensor u in an interval [a, b].
Definitions of characteristic and sign functions
1. To compute level sets and frequencies we need the characteristic function.
2. To compute the characteristic function we need the sign function.
The characteristic χ_I(u) ∈ T of u ∈ T in I ⊂ R is for every multi-index i ∈ I pointwise defined as
(χ_I(u))_i := 1 if u_i ∈ I, and 0 if u_i ∉ I.
Furthermore, sign(u) ∈ T is for all i ∈ I pointwise defined by
(sign(u))_i := 1 if u_i > 0; −1 if u_i < 0; 0 if u_i = 0.
sign(u) is needed for computing χI(u)
Lemma
Let u ∈ T, a, b ∈ R, and 1 = ⊗_{μ=1}^{d} 1̃_μ, where 1̃_μ := (1, ..., 1)ᵀ ∈ R^{n_μ}.
(i) If I = R_{<b}, then χ_I(u) = ½ (1 + sign(b1 − u)).
(ii) If I = R_{>a}, then χ_I(u) = ½ (1 − sign(a1 − u)).
(iii) If I = (a, b), then χ_I(u) = ½ (sign(b1 − u) − sign(a1 − u)).
We compute sign(u), u ∈ R_r, via a hybrid Newton-Schulz iteration with rank truncation after each iteration.
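A dense sketch of the entrywise Newton-Schulz sign iteration x ← x(3 − x²)/2 (the rank-truncation step of the low-rank version is omitted here), together with the characteristic function of case (iii); the test vector and interval are illustrative:

```python
import numpy as np

# Entrywise Newton-Schulz iteration for sign(u); converges for |x| < sqrt(3),
# so we first scale u into the convergence region.
def sign_newton_schulz(u, iters=30):
    x = u / np.max(np.abs(u))
    for _ in range(iters):
        x = 0.5 * x * (3.0 - x * x)
    return x

u = np.array([-2.0, -0.3, 0.5, 4.0])
s = sign_newton_schulz(u)
print(np.round(s))   # [-1. -1.  1.  1.]

# Characteristic function of I = (a, b) via case (iii) of the lemma:
# chi_I(u) = (sign(b*1 - u) - sign(a*1 - u)) / 2
a, b = 0.0, 1.0
chi = 0.5 * (sign_newton_schulz(b - u) - sign_newton_schulz(a - u))
print(np.round(chi))  # [0. 0. 1. 0.]  -- only u_i = 0.5 lies in (0, 1)
```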
Level Set, Frequency
Definition (Level Set, Frequency)
Let I ⊂ R and u ∈ T. The level set L_I(u) ∈ T of u with respect to I is pointwise defined by
(L_I(u))_i := u_i if u_i ∈ I, and 0 if u_i ∉ I, for all i ∈ I.
The frequency F_I(u) ∈ N of u with respect to I is defined as
F_I(u) := # supp χ_I(u).
Computation of level sets and frequency
Proposition
Let I ⊂ R, u ∈ T, and χ_I(u) its characteristic. We have
L_I(u) = χ_I(u) ⊙ u (entrywise product)
and rank(L_I(u)) ≤ rank(χ_I(u)) rank(u).
The frequency F_I(u) ∈ N of u with respect to I is
F_I(u) = ⟨χ_I(u), 1⟩,
where 1 = ⊗_{μ=1}^{d} 1̃_μ, 1̃_μ := (1, ..., 1)ᵀ ∈ R^{n_μ}.
Definition of tensor of order d
A tensor of order d is a multidimensional array over a d-tuple index set I = I_1 × ... × I_d:
A = [a_{i_1...i_d} : i_ℓ ∈ I_ℓ] ∈ R^I, I_ℓ = {1, ..., n_ℓ}, ℓ = 1, ..., d.
A is an element of the linear space
V_n = ⊗_{ℓ=1}^{d} V_ℓ, V_ℓ = R^{I_ℓ},
equipped with the Euclidean scalar product ⟨·, ·⟩ : V_n × V_n → R, defined as
⟨A, B⟩ := Σ_{(i_1...i_d)∈I} a_{i_1...i_d} b_{i_1...i_d}, for A, B ∈ V_n.
Examples of rank-1 and rank-2 tensors
Rank-1: f(x_1, ..., x_d) = exp(f_1(x_1) + ... + f_d(x_d)) = Π_{j=1}^{d} exp(f_j(x_j)).
Rank-2: f(x_1, ..., x_d) = sin(Σ_{j=1}^{d} x_j), since
2i · sin(Σ_j x_j) = e^{i Σ_j x_j} − e^{−i Σ_j x_j}.
The rank-d function f(x_1, ..., x_d) = x_1 + x_2 + ... + x_d can be approximated by rank 2 with any prescribed accuracy:
f = (Π_{j=1}^{d} (1 + ε x_j) − 1)/ε + O(ε), as ε → 0.
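A quick numerical check of the rank-2 approximation of x_1 + ... + x_d; the test point and the ε values below are illustrative:

```python
import numpy as np

# Rank-2 approximation (prod_j (1 + eps*x_j) - 1) / eps of the rank-d
# function f(x) = x_1 + ... + x_d; the error is O(eps).
def rank2_approx(x, eps):
    return (np.prod(1.0 + eps * x) - 1.0) / eps

x = np.array([0.3, -1.2, 0.7, 2.0])
for eps in (1e-2, 1e-4, 1e-6):
    print(eps, abs(rank2_approx(x, eps) - x.sum()))  # error shrinks like eps
```

The leading error term is ε · Σ_{i<j} x_i x_j, so halving ε halves the error until floating-point cancellation takes over.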
Conditional probability and expectation
Classically, Bayes' theorem gives the conditional probability
P(I_q | M_z) = P(M_z | I_q) P(I_q) / P(M_z)  (or π_q(q|z) = p(z|q) p_q(q) / Z_s);
expectation with this posterior measure is the conditional expectation.
Kolmogorov starts from the conditional expectation E(·|M_z) and obtains from it the conditional probability via P(I_q | M_z) = E(χ_{I_q} | M_z).
Conditional expectation
The conditional expectation is defined as the orthogonal projection onto the closed subspace L2(Ω, P, σ(z)):
E(q | σ(z)) := P_{Q_∞} q = argmin_{q̃ ∈ L2(Ω,P,σ(z))} ‖q − q̃‖²_{L2}.
The subspace Q_∞ := L2(Ω, P, σ(z)) represents the available information.
The update, also called the assimilated value, q_a(ω) := P_{Q_∞} q = E(q | σ(z)), is a Q-valued RV and represents the new state of knowledge after the measurement.
Doob-Dynkin: Q_∞ = {ϕ ∈ Q : ϕ = φ ∘ z, φ measurable}.
Possible applications of low-rank tensors in statistics and UQ (my talk in Bo...Possible applications of low-rank tensors in statistics and UQ (my talk in Bo...
Possible applications of low-rank tensors in statistics and UQ (my talk in Bo...
Alexander Litvinenko
 
Identification of the Mathematical Models of Complex Relaxation Processes in ...
Identification of the Mathematical Models of Complex Relaxation Processes in ...Identification of the Mathematical Models of Complex Relaxation Processes in ...
Identification of the Mathematical Models of Complex Relaxation Processes in ...
Vladimir Bakhrushin
 
Thesis defense
Thesis defenseThesis defense
Thesis defense
Zheng Mengdi
 
Unbiased Bayes for Big Data
Unbiased Bayes for Big DataUnbiased Bayes for Big Data
Unbiased Bayes for Big Data
Christian Robert
 
Scalable trust-region method for deep reinforcement learning using Kronecker-...
Scalable trust-region method for deep reinforcement learning using Kronecker-...Scalable trust-region method for deep reinforcement learning using Kronecker-...
Scalable trust-region method for deep reinforcement learning using Kronecker-...
Willy Marroquin (WillyDevNET)
 

Similar to Developing fast low-rank tensor methods for solving PDEs with uncertain coefficients (20)

Low-rank tensor methods for stochastic forward and inverse problems
Low-rank tensor methods for stochastic forward and inverse problemsLow-rank tensor methods for stochastic forward and inverse problems
Low-rank tensor methods for stochastic forward and inverse problems
 
Litvinenko, Uncertainty Quantification - an Overview
Litvinenko, Uncertainty Quantification - an OverviewLitvinenko, Uncertainty Quantification - an Overview
Litvinenko, Uncertainty Quantification - an Overview
 
Data sparse approximation of Karhunen-Loeve Expansion
Data sparse approximation of Karhunen-Loeve ExpansionData sparse approximation of Karhunen-Loeve Expansion
Data sparse approximation of Karhunen-Loeve Expansion
 
Slides
SlidesSlides
Slides
 
Data sparse approximation of the Karhunen-Loeve expansion
Data sparse approximation of the Karhunen-Loeve expansionData sparse approximation of the Karhunen-Loeve expansion
Data sparse approximation of the Karhunen-Loeve expansion
 
Bayesian inference on mixtures
Bayesian inference on mixturesBayesian inference on mixtures
Bayesian inference on mixtures
 
Tensor Completion for PDEs with uncertain coefficients and Bayesian Update te...
Tensor Completion for PDEs with uncertain coefficients and Bayesian Update te...Tensor Completion for PDEs with uncertain coefficients and Bayesian Update te...
Tensor Completion for PDEs with uncertain coefficients and Bayesian Update te...
 
CLIM Program: Remote Sensing Workshop, Optimization for Distributed Data Syst...
CLIM Program: Remote Sensing Workshop, Optimization for Distributed Data Syst...CLIM Program: Remote Sensing Workshop, Optimization for Distributed Data Syst...
CLIM Program: Remote Sensing Workshop, Optimization for Distributed Data Syst...
 
Solving inverse problems via non-linear Bayesian Update of PCE coefficients
Solving inverse problems via non-linear Bayesian Update of PCE coefficientsSolving inverse problems via non-linear Bayesian Update of PCE coefficients
Solving inverse problems via non-linear Bayesian Update of PCE coefficients
 
Litvinenko nlbu2016
Litvinenko nlbu2016Litvinenko nlbu2016
Litvinenko nlbu2016
 
Data sparse approximation of the Karhunen-Loeve expansion
Data sparse approximation of the Karhunen-Loeve expansionData sparse approximation of the Karhunen-Loeve expansion
Data sparse approximation of the Karhunen-Loeve expansion
 
Connection between inverse problems and uncertainty quantification problems
Connection between inverse problems and uncertainty quantification problemsConnection between inverse problems and uncertainty quantification problems
Connection between inverse problems and uncertainty quantification problems
 
MarcoCeze_defense
MarcoCeze_defenseMarcoCeze_defense
MarcoCeze_defense
 
Efficient Simulations for Contamination of Groundwater Aquifers under Uncerta...
Efficient Simulations for Contamination of Groundwater Aquifers under Uncerta...Efficient Simulations for Contamination of Groundwater Aquifers under Uncerta...
Efficient Simulations for Contamination of Groundwater Aquifers under Uncerta...
 
MVPA with SpaceNet: sparse structured priors
MVPA with SpaceNet: sparse structured priorsMVPA with SpaceNet: sparse structured priors
MVPA with SpaceNet: sparse structured priors
 
Possible applications of low-rank tensors in statistics and UQ (my talk in Bo...
Possible applications of low-rank tensors in statistics and UQ (my talk in Bo...Possible applications of low-rank tensors in statistics and UQ (my talk in Bo...
Possible applications of low-rank tensors in statistics and UQ (my talk in Bo...
 
Identification of the Mathematical Models of Complex Relaxation Processes in ...
Identification of the Mathematical Models of Complex Relaxation Processes in ...Identification of the Mathematical Models of Complex Relaxation Processes in ...
Identification of the Mathematical Models of Complex Relaxation Processes in ...
 
Thesis defense
Thesis defenseThesis defense
Thesis defense
 
Unbiased Bayes for Big Data
Unbiased Bayes for Big DataUnbiased Bayes for Big Data
Unbiased Bayes for Big Data
 
Scalable trust-region method for deep reinforcement learning using Kronecker-...
Scalable trust-region method for deep reinforcement learning using Kronecker-...Scalable trust-region method for deep reinforcement learning using Kronecker-...
Scalable trust-region method for deep reinforcement learning using Kronecker-...
 

More from Alexander Litvinenko

Poster_density_driven_with_fracture_MLMC.pdf
Poster_density_driven_with_fracture_MLMC.pdfPoster_density_driven_with_fracture_MLMC.pdf
Poster_density_driven_with_fracture_MLMC.pdf
Alexander Litvinenko
 
litvinenko_Henry_Intrusion_Hong-Kong_2024.pdf
litvinenko_Henry_Intrusion_Hong-Kong_2024.pdflitvinenko_Henry_Intrusion_Hong-Kong_2024.pdf
litvinenko_Henry_Intrusion_Hong-Kong_2024.pdf
Alexander Litvinenko
 
litvinenko_Intrusion_Bari_2023.pdf
litvinenko_Intrusion_Bari_2023.pdflitvinenko_Intrusion_Bari_2023.pdf
litvinenko_Intrusion_Bari_2023.pdf
Alexander Litvinenko
 
Density Driven Groundwater Flow with Uncertain Porosity and Permeability
Density Driven Groundwater Flow with Uncertain Porosity and PermeabilityDensity Driven Groundwater Flow with Uncertain Porosity and Permeability
Density Driven Groundwater Flow with Uncertain Porosity and Permeability
Alexander Litvinenko
 
litvinenko_Gamm2023.pdf
litvinenko_Gamm2023.pdflitvinenko_Gamm2023.pdf
litvinenko_Gamm2023.pdf
Alexander Litvinenko
 
Litvinenko_Poster_Henry_22May.pdf
Litvinenko_Poster_Henry_22May.pdfLitvinenko_Poster_Henry_22May.pdf
Litvinenko_Poster_Henry_22May.pdf
Alexander Litvinenko
 
Uncertain_Henry_problem-poster.pdf
Uncertain_Henry_problem-poster.pdfUncertain_Henry_problem-poster.pdf
Uncertain_Henry_problem-poster.pdf
Alexander Litvinenko
 
Litvinenko_RWTH_UQ_Seminar_talk.pdf
Litvinenko_RWTH_UQ_Seminar_talk.pdfLitvinenko_RWTH_UQ_Seminar_talk.pdf
Litvinenko_RWTH_UQ_Seminar_talk.pdf
Alexander Litvinenko
 
Litv_Denmark_Weak_Supervised_Learning.pdf
Litv_Denmark_Weak_Supervised_Learning.pdfLitv_Denmark_Weak_Supervised_Learning.pdf
Litv_Denmark_Weak_Supervised_Learning.pdf
Alexander Litvinenko
 
Computing f-Divergences and Distances of High-Dimensional Probability Density...
Computing f-Divergences and Distances of High-Dimensional Probability Density...Computing f-Divergences and Distances of High-Dimensional Probability Density...
Computing f-Divergences and Distances of High-Dimensional Probability Density...
Alexander Litvinenko
 
Computing f-Divergences and Distances of\\ High-Dimensional Probability Densi...
Computing f-Divergences and Distances of\\ High-Dimensional Probability Densi...Computing f-Divergences and Distances of\\ High-Dimensional Probability Densi...
Computing f-Divergences and Distances of\\ High-Dimensional Probability Densi...
Alexander Litvinenko
 
Low rank tensor approximation of probability density and characteristic funct...
Low rank tensor approximation of probability density and characteristic funct...Low rank tensor approximation of probability density and characteristic funct...
Low rank tensor approximation of probability density and characteristic funct...
Alexander Litvinenko
 
Identification of unknown parameters and prediction of missing values. Compar...
Identification of unknown parameters and prediction of missing values. Compar...Identification of unknown parameters and prediction of missing values. Compar...
Identification of unknown parameters and prediction of missing values. Compar...
Alexander Litvinenko
 
Computation of electromagnetic fields scattered from dielectric objects of un...
Computation of electromagnetic fields scattered from dielectric objects of un...Computation of electromagnetic fields scattered from dielectric objects of un...
Computation of electromagnetic fields scattered from dielectric objects of un...
Alexander Litvinenko
 
Identification of unknown parameters and prediction with hierarchical matrice...
Identification of unknown parameters and prediction with hierarchical matrice...Identification of unknown parameters and prediction with hierarchical matrice...
Identification of unknown parameters and prediction with hierarchical matrice...
Alexander Litvinenko
 
Low-rank tensor approximation (Introduction)
Low-rank tensor approximation (Introduction)Low-rank tensor approximation (Introduction)
Low-rank tensor approximation (Introduction)
Alexander Litvinenko
 
Computation of electromagnetic fields scattered from dielectric objects of un...
Computation of electromagnetic fields scattered from dielectric objects of un...Computation of electromagnetic fields scattered from dielectric objects of un...
Computation of electromagnetic fields scattered from dielectric objects of un...
Alexander Litvinenko
 
Application of parallel hierarchical matrices for parameter inference and pre...
Application of parallel hierarchical matrices for parameter inference and pre...Application of parallel hierarchical matrices for parameter inference and pre...
Application of parallel hierarchical matrices for parameter inference and pre...
Alexander Litvinenko
 
Computation of electromagnetic fields scattered from dielectric objects of un...
Computation of electromagnetic fields scattered from dielectric objects of un...Computation of electromagnetic fields scattered from dielectric objects of un...
Computation of electromagnetic fields scattered from dielectric objects of un...
Alexander Litvinenko
 
Propagation of Uncertainties in Density Driven Groundwater Flow
Propagation of Uncertainties in Density Driven Groundwater FlowPropagation of Uncertainties in Density Driven Groundwater Flow
Propagation of Uncertainties in Density Driven Groundwater Flow
Alexander Litvinenko
 

More from Alexander Litvinenko (20)

Poster_density_driven_with_fracture_MLMC.pdf
Poster_density_driven_with_fracture_MLMC.pdfPoster_density_driven_with_fracture_MLMC.pdf
Poster_density_driven_with_fracture_MLMC.pdf
 
litvinenko_Henry_Intrusion_Hong-Kong_2024.pdf
litvinenko_Henry_Intrusion_Hong-Kong_2024.pdflitvinenko_Henry_Intrusion_Hong-Kong_2024.pdf
litvinenko_Henry_Intrusion_Hong-Kong_2024.pdf
 
litvinenko_Intrusion_Bari_2023.pdf
litvinenko_Intrusion_Bari_2023.pdflitvinenko_Intrusion_Bari_2023.pdf
litvinenko_Intrusion_Bari_2023.pdf
 
Density Driven Groundwater Flow with Uncertain Porosity and Permeability
Density Driven Groundwater Flow with Uncertain Porosity and PermeabilityDensity Driven Groundwater Flow with Uncertain Porosity and Permeability
Density Driven Groundwater Flow with Uncertain Porosity and Permeability
 
litvinenko_Gamm2023.pdf
litvinenko_Gamm2023.pdflitvinenko_Gamm2023.pdf
litvinenko_Gamm2023.pdf
 
Litvinenko_Poster_Henry_22May.pdf
Litvinenko_Poster_Henry_22May.pdfLitvinenko_Poster_Henry_22May.pdf
Litvinenko_Poster_Henry_22May.pdf
 
Uncertain_Henry_problem-poster.pdf
Uncertain_Henry_problem-poster.pdfUncertain_Henry_problem-poster.pdf
Uncertain_Henry_problem-poster.pdf
 
Litvinenko_RWTH_UQ_Seminar_talk.pdf
Litvinenko_RWTH_UQ_Seminar_talk.pdfLitvinenko_RWTH_UQ_Seminar_talk.pdf
Litvinenko_RWTH_UQ_Seminar_talk.pdf
 
Litv_Denmark_Weak_Supervised_Learning.pdf
Litv_Denmark_Weak_Supervised_Learning.pdfLitv_Denmark_Weak_Supervised_Learning.pdf
Litv_Denmark_Weak_Supervised_Learning.pdf
 
Computing f-Divergences and Distances of High-Dimensional Probability Density...
Computing f-Divergences and Distances of High-Dimensional Probability Density...Computing f-Divergences and Distances of High-Dimensional Probability Density...
Computing f-Divergences and Distances of High-Dimensional Probability Density...
 
Computing f-Divergences and Distances of\\ High-Dimensional Probability Densi...
Computing f-Divergences and Distances of\\ High-Dimensional Probability Densi...Computing f-Divergences and Distances of\\ High-Dimensional Probability Densi...
Computing f-Divergences and Distances of\\ High-Dimensional Probability Densi...
 
Low rank tensor approximation of probability density and characteristic funct...
Low rank tensor approximation of probability density and characteristic funct...Low rank tensor approximation of probability density and characteristic funct...
Low rank tensor approximation of probability density and characteristic funct...
 
Identification of unknown parameters and prediction of missing values. Compar...
Identification of unknown parameters and prediction of missing values. Compar...Identification of unknown parameters and prediction of missing values. Compar...
Identification of unknown parameters and prediction of missing values. Compar...
 
Computation of electromagnetic fields scattered from dielectric objects of un...
Computation of electromagnetic fields scattered from dielectric objects of un...Computation of electromagnetic fields scattered from dielectric objects of un...
Computation of electromagnetic fields scattered from dielectric objects of un...
 
Identification of unknown parameters and prediction with hierarchical matrice...
Identification of unknown parameters and prediction with hierarchical matrice...Identification of unknown parameters and prediction with hierarchical matrice...
Identification of unknown parameters and prediction with hierarchical matrice...
 
Low-rank tensor approximation (Introduction)
Low-rank tensor approximation (Introduction)Low-rank tensor approximation (Introduction)
Low-rank tensor approximation (Introduction)
 
Computation of electromagnetic fields scattered from dielectric objects of un...
Computation of electromagnetic fields scattered from dielectric objects of un...Computation of electromagnetic fields scattered from dielectric objects of un...
Computation of electromagnetic fields scattered from dielectric objects of un...
 
Application of parallel hierarchical matrices for parameter inference and pre...
Application of parallel hierarchical matrices for parameter inference and pre...Application of parallel hierarchical matrices for parameter inference and pre...
Application of parallel hierarchical matrices for parameter inference and pre...
 
Computation of electromagnetic fields scattered from dielectric objects of un...
Computation of electromagnetic fields scattered from dielectric objects of un...Computation of electromagnetic fields scattered from dielectric objects of un...
Computation of electromagnetic fields scattered from dielectric objects of un...
 
Propagation of Uncertainties in Density Driven Groundwater Flow
Propagation of Uncertainties in Density Driven Groundwater FlowPropagation of Uncertainties in Density Driven Groundwater Flow
Propagation of Uncertainties in Density Driven Groundwater Flow
 

Recently uploaded

22CYT12-Unit-V-E Waste and its Management.ppt
22CYT12-Unit-V-E Waste and its Management.ppt22CYT12-Unit-V-E Waste and its Management.ppt
22CYT12-Unit-V-E Waste and its Management.ppt
KrishnaveniKrishnara1
 
Harnessing WebAssembly for Real-time Stateless Streaming Pipelines
Harnessing WebAssembly for Real-time Stateless Streaming PipelinesHarnessing WebAssembly for Real-time Stateless Streaming Pipelines
Harnessing WebAssembly for Real-time Stateless Streaming Pipelines
Christina Lin
 
ML Based Model for NIDS MSc Updated Presentation.v2.pptx
ML Based Model for NIDS MSc Updated Presentation.v2.pptxML Based Model for NIDS MSc Updated Presentation.v2.pptx
ML Based Model for NIDS MSc Updated Presentation.v2.pptx
JamalHussainArman
 
New techniques for characterising damage in rock slopes.pdf
New techniques for characterising damage in rock slopes.pdfNew techniques for characterising damage in rock slopes.pdf
New techniques for characterising damage in rock slopes.pdf
wisnuprabawa3
 
sieving analysis and results interpretation
sieving analysis and results interpretationsieving analysis and results interpretation
sieving analysis and results interpretation
ssuser36d3051
 
2. Operations Strategy in a Global Environment.ppt
2. Operations Strategy in a Global Environment.ppt2. Operations Strategy in a Global Environment.ppt
2. Operations Strategy in a Global Environment.ppt
PuktoonEngr
 
spirit beverages ppt without graphics.pptx
spirit beverages ppt without graphics.pptxspirit beverages ppt without graphics.pptx
spirit beverages ppt without graphics.pptx
Madan Karki
 
PPT on GRP pipes manufacturing and testing
PPT on GRP pipes manufacturing and testingPPT on GRP pipes manufacturing and testing
PPT on GRP pipes manufacturing and testing
anoopmanoharan2
 
Electric vehicle and photovoltaic advanced roles in enhancing the financial p...
Electric vehicle and photovoltaic advanced roles in enhancing the financial p...Electric vehicle and photovoltaic advanced roles in enhancing the financial p...
Electric vehicle and photovoltaic advanced roles in enhancing the financial p...
IJECEIAES
 
Understanding Inductive Bias in Machine Learning
Understanding Inductive Bias in Machine LearningUnderstanding Inductive Bias in Machine Learning
Understanding Inductive Bias in Machine Learning
SUTEJAS
 
bank management system in java and mysql report1.pdf
bank management system in java and mysql report1.pdfbank management system in java and mysql report1.pdf
bank management system in java and mysql report1.pdf
Divyam548318
 
Embedded machine learning-based road conditions and driving behavior monitoring
Embedded machine learning-based road conditions and driving behavior monitoringEmbedded machine learning-based road conditions and driving behavior monitoring
Embedded machine learning-based road conditions and driving behavior monitoring
IJECEIAES
 
Generative AI leverages algorithms to create various forms of content
Generative AI leverages algorithms to create various forms of contentGenerative AI leverages algorithms to create various forms of content
Generative AI leverages algorithms to create various forms of content
Hitesh Mohapatra
 
Advanced control scheme of doubly fed induction generator for wind turbine us...
Advanced control scheme of doubly fed induction generator for wind turbine us...Advanced control scheme of doubly fed induction generator for wind turbine us...
Advanced control scheme of doubly fed induction generator for wind turbine us...
IJECEIAES
 
Literature Review Basics and Understanding Reference Management.pptx
Literature Review Basics and Understanding Reference Management.pptxLiterature Review Basics and Understanding Reference Management.pptx
Literature Review Basics and Understanding Reference Management.pptx
Dr Ramhari Poudyal
 
RAT: Retrieval Augmented Thoughts Elicit Context-Aware Reasoning in Long-Hori...
RAT: Retrieval Augmented Thoughts Elicit Context-Aware Reasoning in Long-Hori...RAT: Retrieval Augmented Thoughts Elicit Context-Aware Reasoning in Long-Hori...
RAT: Retrieval Augmented Thoughts Elicit Context-Aware Reasoning in Long-Hori...
thanhdowork
 
14 Template Contractual Notice - EOT Application
14 Template Contractual Notice - EOT Application14 Template Contractual Notice - EOT Application
14 Template Contractual Notice - EOT Application
SyedAbiiAzazi1
 
Presentation of IEEE Slovenia CIS (Computational Intelligence Society) Chapte...
Presentation of IEEE Slovenia CIS (Computational Intelligence Society) Chapte...Presentation of IEEE Slovenia CIS (Computational Intelligence Society) Chapte...
Presentation of IEEE Slovenia CIS (Computational Intelligence Society) Chapte...
University of Maribor
 
6th International Conference on Machine Learning & Applications (CMLA 2024)
6th International Conference on Machine Learning & Applications (CMLA 2024)6th International Conference on Machine Learning & Applications (CMLA 2024)
6th International Conference on Machine Learning & Applications (CMLA 2024)
ClaraZara1
 
Properties Railway Sleepers and Test.pptx
Properties Railway Sleepers and Test.pptxProperties Railway Sleepers and Test.pptx
Properties Railway Sleepers and Test.pptx
MDSABBIROJJAMANPAYEL
 

Recently uploaded (20)

22CYT12-Unit-V-E Waste and its Management.ppt
22CYT12-Unit-V-E Waste and its Management.ppt22CYT12-Unit-V-E Waste and its Management.ppt
22CYT12-Unit-V-E Waste and its Management.ppt
 
Harnessing WebAssembly for Real-time Stateless Streaming Pipelines
Harnessing WebAssembly for Real-time Stateless Streaming PipelinesHarnessing WebAssembly for Real-time Stateless Streaming Pipelines
Harnessing WebAssembly for Real-time Stateless Streaming Pipelines
 
ML Based Model for NIDS MSc Updated Presentation.v2.pptx
ML Based Model for NIDS MSc Updated Presentation.v2.pptxML Based Model for NIDS MSc Updated Presentation.v2.pptx
ML Based Model for NIDS MSc Updated Presentation.v2.pptx
 
New techniques for characterising damage in rock slopes.pdf
New techniques for characterising damage in rock slopes.pdfNew techniques for characterising damage in rock slopes.pdf
New techniques for characterising damage in rock slopes.pdf
 
sieving analysis and results interpretation
sieving analysis and results interpretationsieving analysis and results interpretation
sieving analysis and results interpretation
 
2. Operations Strategy in a Global Environment.ppt
2. Operations Strategy in a Global Environment.ppt2. Operations Strategy in a Global Environment.ppt
2. Operations Strategy in a Global Environment.ppt
 
spirit beverages ppt without graphics.pptx
spirit beverages ppt without graphics.pptxspirit beverages ppt without graphics.pptx
spirit beverages ppt without graphics.pptx
 
PPT on GRP pipes manufacturing and testing
PPT on GRP pipes manufacturing and testingPPT on GRP pipes manufacturing and testing
PPT on GRP pipes manufacturing and testing
 
Electric vehicle and photovoltaic advanced roles in enhancing the financial p...
Electric vehicle and photovoltaic advanced roles in enhancing the financial p...Electric vehicle and photovoltaic advanced roles in enhancing the financial p...
Electric vehicle and photovoltaic advanced roles in enhancing the financial p...
 
Understanding Inductive Bias in Machine Learning
Understanding Inductive Bias in Machine LearningUnderstanding Inductive Bias in Machine Learning
Understanding Inductive Bias in Machine Learning
 
bank management system in java and mysql report1.pdf
bank management system in java and mysql report1.pdfbank management system in java and mysql report1.pdf
bank management system in java and mysql report1.pdf
 
Embedded machine learning-based road conditions and driving behavior monitoring
Embedded machine learning-based road conditions and driving behavior monitoringEmbedded machine learning-based road conditions and driving behavior monitoring
Embedded machine learning-based road conditions and driving behavior monitoring
 
Generative AI leverages algorithms to create various forms of content
Generative AI leverages algorithms to create various forms of contentGenerative AI leverages algorithms to create various forms of content
Generative AI leverages algorithms to create various forms of content
 
Advanced control scheme of doubly fed induction generator for wind turbine us...
Advanced control scheme of doubly fed induction generator for wind turbine us...Advanced control scheme of doubly fed induction generator for wind turbine us...
Advanced control scheme of doubly fed induction generator for wind turbine us...
 
Literature Review Basics and Understanding Reference Management.pptx
Literature Review Basics and Understanding Reference Management.pptxLiterature Review Basics and Understanding Reference Management.pptx
Literature Review Basics and Understanding Reference Management.pptx
 
RAT: Retrieval Augmented Thoughts Elicit Context-Aware Reasoning in Long-Hori...
RAT: Retrieval Augmented Thoughts Elicit Context-Aware Reasoning in Long-Hori...RAT: Retrieval Augmented Thoughts Elicit Context-Aware Reasoning in Long-Hori...
RAT: Retrieval Augmented Thoughts Elicit Context-Aware Reasoning in Long-Hori...
 
14 Template Contractual Notice - EOT Application
14 Template Contractual Notice - EOT Application14 Template Contractual Notice - EOT Application
14 Template Contractual Notice - EOT Application
 
Presentation of IEEE Slovenia CIS (Computational Intelligence Society) Chapte...
Presentation of IEEE Slovenia CIS (Computational Intelligence Society) Chapte...Presentation of IEEE Slovenia CIS (Computational Intelligence Society) Chapte...
Presentation of IEEE Slovenia CIS (Computational Intelligence Society) Chapte...
 
6th International Conference on Machine Learning & Applications (CMLA 2024)
6th International Conference on Machine Learning & Applications (CMLA 2024)6th International Conference on Machine Learning & Applications (CMLA 2024)
6th International Conference on Machine Learning & Applications (CMLA 2024)
 
Properties Railway Sleepers and Test.pptx
Properties Railway Sleepers and Test.pptxProperties Railway Sleepers and Test.pptx
Properties Railway Sleepers and Test.pptx
 

Developing fast low-rank tensor methods for solving PDEs with uncertain coefficients

  • 1. Low-rank tensors for PDEs with uncertain coefficients. Alexander Litvinenko, Center for Uncertainty Quantification, http://sri-uq.kaust.edu.sa/, Extreme Computing Research Center, KAUST.
  • 2. The structure of the talk. Part I (Stochastic Forward Problem): 1. Motivation; 2. Elliptic PDE with uncertain coefficients; 3. Discretization and low-rank tensor approximations. Part II (Stochastic Inverse Problem via Bayesian Update): 1. Bayesian update surrogate; 2. Examples. Part III (Different Examples relevant for BGS). [Figure: block diagram annotated with numeric rank labels; not recoverable as text.]
  • 3. My interests and collaborations.
  • 4. Motivation to do Uncertainty Quantification (UQ). Motivation: there is an urgent need to quantify and reduce the uncertainty in multiscale-multiphysics applications. UQ and its relevance: nowadays computational predictions are used in critical engineering decisions. But how reliable are these predictions? Example: Saudi Aramco currently has a simulator, GigaPOWERS, which runs with 9 billion cells. How sensitive are these simulations w.r.t. unknown reservoir properties? My goal is the systematic, mathematically founded development of UQ methods and low-rank algorithms relevant for applications.
  • 5. PDE with uncertain coefficient. Consider $-\operatorname{div}(\kappa(x,\omega)\nabla u(x,\omega)) = f(x,\omega)$ in $G \times \Omega$, $G \subset \mathbb{R}^d$, $u = 0$ on $\partial G$, where $\kappa(x,\omega)$ is the uncertain diffusion coefficient. 1. Efficient Analysis of High Dimensional Data in Tensor Formats, Espig, Hackbusch, Litvinenko, Matthies, Zander, 2012. 2. Efficient low-rank approximation of the stochastic Galerkin matrix in tensor formats, Wähnert, Espig, Hackbusch, Litvinenko, Matthies, 2013. 3. PCE of random coefficients and the solution of stochastic PDEs in the Tensor Train format, Dolgov, Litvinenko, Khoromskij, Matthies, 2016. 4. Application of H-matrices for computing the KL expansion, Khoromskij, Litvinenko, Matthies, Computing 84 (1-2), 49-67, 2009. [Figure: 50 realizations of the solution u, the mean and quantiles.] Related work by R. Scheichl, Chr. Schwab, A. Teckentrup, F. Nobile, D. Kressner, ...
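The forward problem on this slide can be mimicked by a minimal 1D Monte Carlo sketch: sample a positive random coefficient, solve $-(\kappa u')' = f$ by finite differences, and collect mean and quantiles as in the figure caption. All sizes, the lognormal coefficient model, and the four-mode expansion below are assumptions for illustration, not the method of the talk.

```python
import numpy as np

n, nsamp = 100, 50                 # grid cells and Monte Carlo samples (assumed)
h = 1.0 / n
x = np.linspace(0.0, 1.0, n + 1)
rng = np.random.default_rng(4)

def solve_one(kappa_mid):
    # FD solve of -(kappa u')' = 1 on (0,1), u(0) = u(1) = 0;
    # kappa_mid holds the coefficient at the n cell midpoints.
    main = (kappa_mid[:-1] + kappa_mid[1:]) / h**2
    off = -kappa_mid[1:-1] / h**2
    A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
    return np.linalg.solve(A, np.ones(n - 1))

sols = []
xm = (x[:-1] + x[1:]) / 2
for _ in range(nsamp):
    # smooth lognormal coefficient from a few random sine modes (assumed model)
    xi = rng.standard_normal(4)
    logk = sum(xi[k] * np.sin((k + 1) * np.pi * xm) / (k + 1) for k in range(4))
    sols.append(solve_one(np.exp(logk)))
sols = np.array(sols)
mean = sols.mean(axis=0)
q05, q95 = np.quantile(sols, 0.05, axis=0), np.quantile(sols, 0.95, axis=0)
```

Plotting the 50 rows of `sols` together with `mean`, `q05`, and `q95` reproduces the kind of picture shown on the slide.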
  • 6. 4* Canonical and Tucker tensor formats
[Pictures are taken from the B. Khoromskij and A. Auer lecture course]
Storage: $O(n^d) \to O(dRn)$ (canonical) and $O(R^d + dRn)$ (Tucker).
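The storage savings above can be made concrete with a minimal sketch of the canonical (CP) format: a rank-R tensor kept as d factor matrices of size n × R instead of one full n^d array. The helper name `cp_full` and all sizes are illustrative, not from the talk.

```python
import numpy as np

def cp_full(factors):
    """Assemble the full tensor sum_r a1[:, r] o ... o ad[:, r] from CP factors."""
    d = len(factors)
    n, R = factors[0].shape
    full = np.zeros([n] * d)
    for r in range(R):
        term = factors[0][:, r]
        for mu in range(1, d):
            term = np.multiply.outer(term, factors[mu][:, r])
        full += term
    return full

rng = np.random.default_rng(0)
d, n, R = 4, 10, 3
factors = [rng.standard_normal((n, R)) for _ in range(d)]

full = cp_full(factors)
print(full.size)                     # n^d = 10000 entries in the full tensor
print(sum(f.size for f in factors))  # d*n*R = 120 entries in CP format
```

For d = 20, n = 3 (as on the later numerics slide) the full array is already far beyond memory, while the CP factors stay tiny.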
  • 7. 4* Karhunen-Loève and Polynomial Chaos Expansions
Apply both. Truncated Karhunen-Loève Expansion (KLE):
$\kappa(x,\omega) \approx \kappa_0(x) + \sum_{j=1}^{L} \sqrt{\kappa_j}\, g_j(x)\, \xi_j(\theta(\omega))$,
where $\theta = \theta(\omega) = (\theta_1(\omega), \theta_2(\omega), \ldots)$ and
$\xi_j(\theta) = \frac{1}{\sqrt{\kappa_j}} \int_G \big(\kappa(x,\omega) - \kappa_0(x)\big)\, g_j(x)\, dx$.
Truncated Polynomial Chaos Expansion (PCE):
$\kappa(x,\omega) \approx \sum_{\alpha\in\mathcal{J}_{M,p}} \kappa^{(\alpha)}(x)\, H_\alpha(\theta)$, and
$\xi_j(\theta) \approx \sum_{\alpha\in\mathcal{J}_{M,p}} \xi_j^{(\alpha)}\, H_\alpha(\theta)$.
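A discrete truncated KLE can be sketched by eigendecomposing the covariance matrix on a grid. This is a hedged toy version: the exponential covariance, the length 0.1, the mean κ₀ = 1, and L = 10 terms are assumptions for illustration, not the talk's exact setup.

```python
import numpy as np

n, L = 200, 10                       # grid points, number of KLE terms kept
x = np.linspace(0.0, 1.0, n)
# covariance matrix C_ij = exp(-|x_i - x_j| / 0.1)
C = np.exp(-np.abs(x[:, None] - x[None, :]) / 0.1)

lam, g = np.linalg.eigh(C)           # eigenvalues in ascending order
lam = lam[::-1][:L]                  # keep the L largest eigenvalues ...
g = g[:, ::-1][:, :L]                # ... and their eigenvectors

rng = np.random.default_rng(1)
theta = rng.standard_normal(L)       # i.i.d. standard normal variables
kappa = 1.0 + g @ (np.sqrt(lam) * theta)   # one realization, kappa_0(x) = 1

print(kappa.shape)                   # (200,)
```

Each fresh draw of `theta` gives a new realization of the field; the fast eigenvalue decay is what justifies the truncation.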
  • 8. 4* Discretization of elliptic PDE
$\mathbf{K}\mathbf{u} = \mathbf{f}$, where
$\mathbf{K} := \sum_{\ell=1}^{L} K_\ell \otimes \bigotimes_{\mu=1}^{M} \Delta_{\ell\mu}$, $K_\ell\in\mathbb{R}^{N\times N}$, $\Delta_{\ell\mu}\in\mathbb{R}^{R_\mu\times R_\mu}$,
$\mathbf{u} := \sum_{j=1}^{r} u_j \otimes \bigotimes_{\mu=1}^{M} u_{j\mu}$, $u_j\in\mathbb{R}^{N}$, $u_{j\mu}\in\mathbb{R}^{R_\mu}$,
$\mathbf{f} := \sum_{k=1}^{R} f_k \otimes \bigotimes_{\mu=1}^{M} g_{k\mu}$, $f_k\in\mathbb{R}^{N}$, $g_{k\mu}\in\mathbb{R}^{R_\mu}$.
Efficient low-rank approximation of the stochastic Galerkin matrix in tensor formats, Wähnert, Espig, Hackbusch, Litvinenko, Matthies, 2013. In [2] we analyzed tensor ranks (compression properties) of the stochastic Galerkin operator K.
  • 9. 4* Numerical Experiments
2D L-shape domain, N = 557 dofs. The total stochastic dimension is $M_u = M_k + M_f = 20$; there are $|\mathcal{J}_{M,p}| = 231$ PCE coefficients:
$\mathbf{u} = \sum_{j=1}^{231} u_{j,0} \otimes \bigotimes_{\mu=1}^{20} u_{j\mu} \in \mathbb{R}^{557} \otimes \bigotimes_{\mu=1}^{20} \mathbb{R}^{3}$.
The full tensor u has $3^{20}\cdot 557 \approx 2\cdot 10^{12}$ entries, i.e. ≈ 16 TB of memory. Instead we store only $231\cdot(557 + 20\cdot 3) \approx 144{,}000$ entries, i.e. ≈ 1.14 MB.
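The memory figures on this slide follow from simple counting; a quick sanity check (8 bytes per double-precision entry assumed):

```python
# Full tensor: R^557 tensored with 20 copies of R^3.
full_entries = 3**20 * 557
# Low-rank format: 231 terms, each one vector of length 557 plus 20 of length 3.
lowrank_entries = 231 * (557 + 20 * 3)

print(full_entries)               # 1942138911357, i.e. about 2e12 entries
print(lowrank_entries)            # 142527, i.e. about 1.4e5 entries
print(full_entries * 8 / 1e12)    # about 15.5 TB, matching the slide's ~16 TB
print(lowrank_entries * 8 / 1e6)  # about 1.14 MB
```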
  • 10. 4* Level sets
Now we compute the level sets $\{u_i : u_i > b\cdot\max_i u_i\}$, $i := (i_1,\ldots,i_{M+1})$, for $b\in\{0.2, 0.4, 0.6, 0.8\}$. The computing time for each b was 10 minutes.
  • 11. 4* Part II
Part II: Development of a cheap Bayesian update surrogate
1. Rosic, Litvinenko, Pajonk, Matthies, J. Comp. Phys. 231 (17), 5761-5787, 2013.
2. Inverse Problems in a Bayesian Setting, Matthies, Zander, Pajonk, Rosic, Litvinenko, Comp. Meth. for Solids and Fluids Multiscale Analysis, 2016.
Related work by A. Stuart, Chr. Schwab, A. El Sheikh, Y. Marzouk, H. Najm, O. Ernst
  • 12. 4* Numerical computation of Bayesian Update surrogate
Notation: ŷ – measurements from engineers, y(ξ) – forecast from the simulator, ε(ω) – Gaussian noise.
Look for ϕ such that q(ξ) = ϕ(z(ξ)), with z(ξ) = y(ξ) + ε(ω):
$\varphi \approx \tilde\varphi = \sum_{\alpha\in\mathcal{J}_p} \varphi_\alpha\, \Phi_\alpha(z(\xi))$, minimizing $\|q(\xi) - \tilde\varphi(z(\xi))\|^2_{L_2}$,
where Φα are known polynomials (e.g. Hermite). Taking derivatives with respect to ϕα:
$\frac{\partial}{\partial\varphi_\alpha}\,\big\langle q(\xi) - \tilde\varphi(z(\xi)),\; q(\xi) - \tilde\varphi(z(\xi))\big\rangle = 0 \quad \forall\,\alpha\in\mathcal{J}_p$
  • 13. 4* Numerical computation of NLBU
$\frac{\partial}{\partial\varphi_\alpha}\, \mathbb{E}\Big[ q^2(\xi) - 2\sum_{\beta\in\mathcal{J}} q\,\varphi_\beta\,\Phi_\beta(z) + \sum_{\beta,\gamma\in\mathcal{J}} \varphi_\beta\varphi_\gamma\,\Phi_\beta(z)\Phi_\gamma(z) \Big]$
$= 2\,\mathbb{E}\Big[ -q\,\Phi_\alpha(z) + \sum_{\beta\in\mathcal{J}} \varphi_\beta\,\Phi_\beta(z)\Phi_\alpha(z) \Big]$
$= 2\Big( \sum_{\beta\in\mathcal{J}} \mathbb{E}\big[\Phi_\beta(z)\Phi_\alpha(z)\big]\,\varphi_\beta - \mathbb{E}\big[q\,\Phi_\alpha(z)\big] \Big) = 0 \quad \forall\,\alpha\in\mathcal{J}.$
  • 14. 4* Numerical computation of NLBU
Rewriting the last sum in matrix form, we obtain the linear system of equations ($=: A$) for the coefficients ϕβ:
$\big[\,\mathbb{E}\big[\Phi_\alpha(z(\xi))\,\Phi_\beta(z(\xi))\big]\,\big]_{\alpha\beta}\;\big[\varphi_\beta\big] \;=\; \big[\,\mathbb{E}\big[q(\xi)\,\Phi_\alpha(z(\xi))\big]\,\big]_\alpha,$
where $\alpha,\beta\in\mathcal{J}$ and A is of size $|\mathcal{J}|\times|\mathcal{J}|$.
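The Gram system above can be sketched with a toy Monte Carlo version: all choices below (q(ξ) = ξ², z(ξ) = ξ plus small Gaussian noise, probabilists' Hermite basis, degree 3) are illustrative assumptions, not the talk's actual model.

```python
import numpy as np
from numpy.polynomial import hermite_e as He   # probabilists' Hermite He_a

rng = np.random.default_rng(2)
N, p = 200_000, 3
xi = rng.standard_normal(N)
q = xi**2                                  # toy quantity of interest
z = xi + 0.1 * rng.standard_normal(N)      # forecast plus Gaussian noise

# Rows of Phi are He_a(z), a = 0..p; moments are estimated by sample means.
Phi = np.stack([He.hermeval(z, np.eye(p + 1)[a]) for a in range(p + 1)])
A = (Phi @ Phi.T) / N                      # A_ab ~ E[Phi_a(z) Phi_b(z)]
b = (Phi @ q) / N                          # b_a  ~ E[q Phi_a(z)]
phi = np.linalg.solve(A, b)                # coefficients phi_beta

q_hat = phi @ Phi                          # the surrogate tilde-phi(z)
print(np.mean((q - q_hat)**2))             # small residual variance
```

Since ξ² = He₂(ξ) + 1 and the noise is small, the solved coefficients come out close to (1, 0, 1, 0), and the residual is dominated by the information lost to the noise.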
  • 15. 4* Numerical computation of NLBU
Finally, the assimilated parameter qa is
$q_a = q_f + \tilde\varphi(\hat y) - \tilde\varphi(z), \qquad (1)$
with $z(\xi) = y(\xi) + \varepsilon(\omega)$ and $\tilde\varphi = \sum_{\beta\in\mathcal{J}_p}\varphi_\beta\,\Phi_\beta(z(\xi)).$
  • 16. 4* Example: 1D elliptic PDE with uncertain coeffs
$-\nabla\cdot(\kappa(x,\xi)\nabla u(x,\xi)) = f(x,\xi)$, $x\in[0,1]$, plus Dirichlet random b.c. g(0, ξ) and g(1, ξ).
3 measurements: u(0.3) = 22 with s.d. 0.2, u(0.5) = 28 with s.d. 0.3, u(0.8) = 18 with s.d. 0.3.
κ(x, ξ): N = 100 dofs, M = 5, 35 KLE terms, beta distribution for κ, Gaussian covκ, cov. length 0.1, multivariate Hermite polynomials of order pκ = 2;
RHS f(x, ξ): Mf = 5, 40 KLE terms, beta distribution for f, exponential covf, cov. length 0.03, multivariate Hermite polynomials of order pf = 2;
b.c. g(x, ξ): Mg = 2, 2 KLE terms, normal distribution for g, Gaussian covg, cov. length 10, multivariate Hermite polynomials of order pg = 1;
pφ = 3 and pu = 3.
  • 17. 4* Example: Updating of the parameter
Figure: Prior and posterior (updated) parameter κ.
Collaboration with Y. Marzouk, MIT, and TU Braunschweig. Together with H. Najm, Sandia Lab, we are comparing our technique with his advanced MCMC technique for a chemical combustion equation.
  • 18. 4* Example: updating of the solution u
Figure: Original and updated solutions, mean value plus/minus 1, 2, 3 standard deviations. Number of available measurements: {0, 1, 2, 3, 5}. [Graphics are built in the stochastic Galerkin library sglib, written by E. Zander at TU Braunschweig.]
  • 19. 4* Part III: My contribution to the MUNA project
MUNA = Management and Minimization of Uncertainties in Numerical Aerodynamics.
1. Quantification of airfoil geometry-induced aerodynamic uncertainties: comparison of approaches, Liu, Litvinenko, Schillings, Schulz, JUQ 2017.
2. Numerical Methods for Uncertainty Quantification and Bayesian Update in Aerodynamics, Litvinenko, Matthies, chapter in Springer book, Vol. 122, pp. 262-285, 2013.
  • 20. 4* Example: uncertainties in free stream turbulence
[Figure: velocity vectors v1, v2 with angles α, α′.] Random vectors v1(θ) and v2(θ) model free stream turbulence.
  • 21. 4* Example: 3σ intervals
Figure: 3σ interval (σ the standard deviation) at each point of the RAE2822 airfoil for the pressure (cp) and friction (cf) coefficients.
  • 22. 4* Mean and variance of density, tke, xv, zv, pressure
  • 23. 4* Example: Kriging and geostatistical optimal design
Domain: 20 m × 20 m × 20 m, 25,000 × 25,000 × 25,000 dofs, 4000 measurements of log-permeability. Color scale: showing the 95% confidence interval.
Kriging and spatial design accelerated by orders of magnitude: combining low-rank with FFT, Nowak, Litvinenko, Mathematical Geosciences 45 (4), 411-435, 2013.
  • 24. 4* Numerics on a computer with 16 GB RAM:
1. 2D Kriging with 270 million estimation points and 100 measurement values (0.25 sec.);
2. computing the estimation variance (< 1 sec.);
3. evaluating the spatial average of the estimation variance (the A-criterion of geostatistical optimal design) for 2·10^12 estimation points (30 sec.);
4. computing the C-criterion of geostatistical optimal design for 2·10^15 estimation points (30 sec.);
5. solving a 3D Kriging problem with 15·10^12 estimation points and 4000 measurement data values (20 sec.).
Collaboration with Stuttgart University, hydrology and geosciences.
  • 25. 4* Example from spatial statistics
Goal: to improve the estimation of unknown statistical parameters in a spatial soil moisture field, Mississippi basin, $[-85°, -73°] \times [32°, 43°]$.
Log-likelihood function with $C_{ij} = e^{-|x_i - x_j|/\theta}$ and Z the available (satellite) data:
$\mathcal{L}(\theta) = -\frac{n}{2}\log(2\pi) - \frac{1}{2}\log|C(\theta)| - \frac{1}{2}\, Z^{T} C(\theta)^{-1} Z.$
Collaboration with statisticians M. Genton, Y. Sun, R. Huser, H. Rue from KAUST.
n = 512K, matrix setup 261 sec., compression rate 99.98% (0.4 GB against 2006 GB). H-LU is done in 843 sec., error 2·10^-3.
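The log-likelihood on this slide can be sketched with a dense Cholesky factorization. This is only a toy version: the talk reaches n = 512K with H-matrix compression, while the plain code below is feasible only for small n, and the 1D grid and true length 0.2 are assumptions.

```python
import numpy as np

def log_likelihood(theta, x, Z):
    """Gaussian log-likelihood with exponential covariance exp(-|xi-xj|/theta)."""
    n = len(Z)
    C = np.exp(-np.abs(x[:, None] - x[None, :]) / theta)
    Lc = np.linalg.cholesky(C)                 # C = Lc Lc^T
    logdet = 2.0 * np.sum(np.log(np.diag(Lc))) # log|C| from the Cholesky factor
    w = np.linalg.solve(Lc, Z)                 # w = Lc^{-1} Z, so w.w = Z^T C^{-1} Z
    quad = w @ w
    return -0.5 * (n * np.log(2.0 * np.pi) + logdet + quad)

# Synthetic data drawn from the model with theta = 0.2.
rng = np.random.default_rng(3)
x = np.sort(rng.uniform(0.0, 1.0, 300))
C_true = np.exp(-np.abs(x[:, None] - x[None, :]) / 0.2)
Z = np.linalg.cholesky(C_true) @ rng.standard_normal(300)

print(log_likelihood(0.2, x, Z) > log_likelihood(0.01, x, Z))
```

Maximizing this function over θ recovers the covariance length; replacing the dense Cholesky by an H-matrix H-LU is exactly what makes the 512K case tractable.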
  • 26. 4* Conclusion
Introduced:
1. Low-rank tensor methods to solve PDEs with uncertain coefficients;
2. Post-processing in low-rank tensor format, computing level sets;
3. Bayesian update surrogate ϕ (as a linear, quadratic, ... approximation);
4. Quantification of uncertainties in numerical aerodynamics;
5. Applications in geosciences;
6. Estimating unknown coefficients in spatial statistics (moisture example).
  • 27. 4* Thank you
Thank you!
  • 28. 4* Possible collaboration
1. Dominic Breit (error estimates for UQ applications to balance statistical and discretization errors);
2. Gabriel Lord (numerical methods for PDEs with uncertainties; combination of multiscale methods, UQ techniques and Bayesian inference for reservoir modeling; low-rank tensor methods for high-dimensional problems);
3. Lehel Banjai (computation of electromagnetic fields scattered from dielectric objects of uncertain shapes; balancing of the Runge-Kutta time discretization step and the H-matrix approximation rank in time-dependent PDEs).
  • 29. 4* Possible collaboration
1. BGS (CO2 storage, reservoir modeling, spatial statistics in geology/geophysics);
2. Lyell Institute (subsurface flow under uncertainties, EOR, Bayesian techniques for data assimilation);
3. EGIS:
3.1. Mike Christie (reservoir modeling under uncertainties, EOR, seismic wave propagation in uncertain media);
3.2. Vasily Demyanov (uncertainty quantification and low-rank approximations in geostatistics);
3.3. Dan Arnold (modeling of random geology, multi-scale, Bayesian inference);
3.4. Ahmed El Sheikh (fast Bayesian update methods, advanced UQ, surrogates for BU, big data from spatial statistics).
  • 30. 4* My experience since 2002
  • 31. 4* Explanation of Bayesian Update surrogate
Let the stochastic model of the measurement be given by y = M(q) + ε, with ε the measurement noise.
The best estimator ˜ϕ for q given z is $\tilde\varphi = \arg\min_\varphi \mathbb{E}\big[\|q(\cdot) - \varphi(z(\cdot))\|_2^2\big]$.
The best estimate (or predictor) of q given the measurement model is $q_M(\xi) = \tilde\varphi(z(\xi))$.
The remainder, i.e. the difference between q and qM, is $q_M^{\perp}(\xi) = q(\xi) - q_M(\xi)$. Due to the minimisation property of the MMSE estimator it is orthogonal to $q_M(\xi)$, i.e. $\mathrm{cov}(q_M^{\perp}, q_M) = 0$.
[Thanks to Elmar Zander, TU Braunschweig]
  • 32. In other words,
$q(\xi) = q_M(\xi) + q_M^{\perp}(\xi) \qquad (2)$
yields an orthogonal decomposition of q. With the actual measurement ŷ, the prediction is $\hat q = \tilde\varphi(\hat y)$. The part qM of q can be "collapsed" to q̂. The updated stochastic model q′ is thus given by
$q'(\xi) = \hat q + q_M^{\perp}(\xi) \qquad (3)$
$q'(\xi) = q(\xi) + \big(\tilde\varphi(\hat y) - \tilde\varphi(z(\xi))\big). \qquad (4)$
  • 33. 4* Future plans, Idea N1
Possible collaboration work 1: to develop a low-rank adaptive goal-oriented Bayesian update technique. The solution of the forward and inverse problems will be considered as one adaptive process, controlled by error/uncertainty estimators.
[Diagram: low-rank, adaptive forward solve (spatial discretization, stochastic discretization, low-rank approximation) feeding the update q ← q + ϕ(y) − ϕ(z), with error contributions from the discretizations and the approximate inverse operator.]
  • 34. 4* Future plans, Idea N2
A bridge between Green's functions in PDEs and covariance matrices. Possible collaboration with the statistical group: Doug Nychka (NCAR), Haavard Rue.
  • 35. 4* Future plans, Idea N3
Data assimilation techniques, Bayesian update surrogate. Develop a non-linear, non-Gaussian Bayesian update approximation for gPCE coefficients. Possible collaboration with Kody Law (Oak Ridge NL), Y. Marzouk (MIT), H. Najm (Sandia NL), TU Braunschweig and KAUST.
  • 36. 4* Example: canonical rank d, whereas TT rank 2
Consider the d-Laplacian over a uniform tensor grid. It is known to have the Kronecker rank-d representation
$\Delta_d = A\otimes I_N\otimes\cdots\otimes I_N + I_N\otimes A\otimes\cdots\otimes I_N + \cdots + I_N\otimes I_N\otimes\cdots\otimes A \qquad (5)$
with $A = \Delta_1 = \mathrm{tridiag}\{-1, 2, -1\}\in\mathbb{R}^{N\times N}$ and $I_N$ the N × N identity. Notice that the canonical rank is $\mathrm{rank}_C(\Delta_d) = d$, while the TT-rank of $\Delta_d$ equals 2 for any dimension, due to the explicit representation
$\Delta_d = (\Delta_1\;\; I)\times \begin{pmatrix} I & 0\\ \Delta_1 & I\end{pmatrix}\times\cdots\times \begin{pmatrix} I & 0\\ \Delta_1 & I\end{pmatrix}\times \begin{pmatrix} I\\ \Delta_1\end{pmatrix} \qquad (6)$
where the rank product operation "×" is defined as a regular matrix product of the two corresponding core matrices, their blocks being multiplied by means of the tensor product. A similar bound holds for the Tucker rank: $\mathrm{rank}_{Tuck}(\Delta_d) = 2$.
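The Kronecker rank-d representation (5) can be checked directly for small N and d by assembling the dense matrix (function names are illustrative; the dense assembly is of course only for verification, not for large d):

```python
import numpy as np

def laplace_1d(N):
    """1D Laplacian tridiag{-1, 2, -1}."""
    return (np.diag(2.0 * np.ones(N))
            - np.diag(np.ones(N - 1), 1)
            - np.diag(np.ones(N - 1), -1))

def laplace_kron(d, N):
    """d-Laplacian as the sum of d Kronecker terms (canonical rank d)."""
    A, I = laplace_1d(N), np.eye(N)
    total = np.zeros((N**d, N**d))
    for k in range(d):
        term = np.ones((1, 1))
        for mu in range(d):
            term = np.kron(term, A if mu == k else I)
        total += term
    return total

L3 = laplace_kron(3, 4)
print(L3.shape)                   # (64, 64)
print(np.allclose(L3, L3.T))      # symmetric, as a Laplacian must be
```

The TT representation (6) stores the same operator with only 2 × 2 blocks of cores, which is what makes it preferable for large d.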
  • 37. 4* Advantages and disadvantages
Denote by k the rank, d the dimension, n = # dofs in 1D:
1. CP: ill-posed approximation algorithm, O(dnk), approximations hard to compute.
2. Tucker: reliable arithmetic based on SVD, O(dnk + k^d).
3. Hierarchical Tucker: based on SVD, storage O(dnk + dk^3), truncation O(dnk^2 + dk^4).
4. TT: based on SVD, O(dnk^2) or O(dnk^3), stable.
5. Quantics-TT: O(n^d) → O(d log_q n).
  • 38. 4* How to compute the variance in CP format
Let $u\in\mathcal{R}_r$ and
$\tilde u := u - \Big\langle u,\ \bigotimes_{\mu=1}^{d}\tfrac{1}{n_\mu}\mathbf{1}\Big\rangle\,\mathbf{1} = \sum_{j=1}^{r+1}\bigotimes_{\mu=1}^{d}\tilde u_{j\mu}\in\mathcal{R}_{r+1}; \qquad (7)$
then the variance var(u) of u can be computed as
$\mathrm{var}(u) = \frac{\langle\tilde u,\tilde u\rangle}{\prod_{\mu=1}^{d} n_\mu} = \frac{1}{\prod_{\mu=1}^{d} n_\mu}\Big\langle \sum_{i=1}^{r+1}\bigotimes_{\mu=1}^{d}\tilde u_{i\mu},\ \sum_{j=1}^{r+1}\bigotimes_{\nu=1}^{d}\tilde u_{j\nu}\Big\rangle = \sum_{i=1}^{r+1}\sum_{j=1}^{r+1}\prod_{\mu=1}^{d}\frac{1}{n_\mu}\big\langle\tilde u_{i\mu},\tilde u_{j\mu}\big\rangle.$
The numerical cost is $O\big((r+1)^2\cdot\sum_{\mu=1}^{d} n_\mu\big)$.
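A minimal sketch of this formula: append one rank-1 term carrying the mean, then evaluate ⟨ũ, ũ⟩ through small per-mode inner products, never forming the full tensor. The list-of-lists representation and helper name are illustrative; a dense reconstruction verifies the result.

```python
import numpy as np

def cp_variance(terms):
    """terms: list of r CP terms, each a list of d mode vectors."""
    d = len(terms[0])
    # mean(u) = sum over terms of the product of per-mode vector means
    mean = sum(np.prod([v.mean() for v in t]) for t in terms)
    # centered tensor: the r original terms plus one rank-1 term for -mean * 1
    mean_term = ([-mean * np.ones_like(terms[0][0])]
                 + [np.ones_like(terms[0][mu]) for mu in range(1, d)])
    centered = terms + [mean_term]
    n_total = np.prod([len(v) for v in terms[0]])
    s = 0.0
    for ti in centered:            # (r+1)^2 pairs of d small inner products
        for tj in centered:
            s += np.prod([vi @ vj for vi, vj in zip(ti, tj)])
    return s / n_total

rng = np.random.default_rng(4)
r, d, n = 3, 3, 8
terms = [[rng.standard_normal(n) for _ in range(d)] for _ in range(r)]

# Dense verification against numpy's population variance.
full = np.zeros((n,) * d)
for t in terms:
    full += np.multiply.outer(np.multiply.outer(t[0], t[1]), t[2])
print(np.isclose(cp_variance(terms), full.var()))  # True
```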
  • 39. 4* Computing QoI in low-rank tensor format
Now we consider how to find "level sets", for instance all entries of a tensor u lying in an interval [a, b].
  • 40. 4* Definitions of characteristic and sign functions
1. To compute level sets and frequencies we need the characteristic function.
2. To compute the characteristic function we need the sign function.
The characteristic $\chi_I(u)\in\mathcal{T}$ of $u\in\mathcal{T}$ in $I\subset\mathbb{R}$ is defined pointwise for every multi-index $i\in\mathcal{I}$ as
$(\chi_I(u))_i := \begin{cases} 1, & u_i\in I,\\ 0, & u_i\notin I.\end{cases}$
Furthermore, $\mathrm{sign}(u)\in\mathcal{T}$ is defined pointwise for all $i\in\mathcal{I}$ by
$(\mathrm{sign}(u))_i := \begin{cases} 1, & u_i > 0,\\ -1, & u_i < 0,\\ 0, & u_i = 0.\end{cases}$
  • 41. 4* sign(u) is needed for computing χI(u)
Lemma. Let $u\in\mathcal{T}$, $a, b\in\mathbb{R}$, and $\mathbf{1} = \bigotimes_{\mu=1}^{d}\tilde{\mathbf{1}}_\mu$, where $\tilde{\mathbf{1}}_\mu := (1,\ldots,1)^T\in\mathbb{R}^{n_\mu}$.
(i) If $I = \mathbb{R}_{<b}$, then $\chi_I(u) = \frac{1}{2}\big(\mathbf{1} + \mathrm{sign}(b\mathbf{1} - u)\big)$.
(ii) If $I = \mathbb{R}_{>a}$, then $\chi_I(u) = \frac{1}{2}\big(\mathbf{1} - \mathrm{sign}(a\mathbf{1} - u)\big)$.
(iii) If $I = (a, b)$, then $\chi_I(u) = \frac{1}{2}\big(\mathrm{sign}(b\mathbf{1} - u) - \mathrm{sign}(a\mathbf{1} - u)\big)$.
We compute $\mathrm{sign}(u)$, $u\in\mathcal{R}_r$, via a hybrid Newton-Schulz iteration with rank truncation after each iteration.
  • 42. 4* Level Set, Frequency
Definition (Level Set, Frequency). Let $I\subset\mathbb{R}$ and $u\in\mathcal{T}$. The level set $L_I(u)\in\mathcal{T}$ of u with respect to I is defined pointwise by
$(L_I(u))_i := \begin{cases} u_i, & u_i\in I,\\ 0, & u_i\notin I,\end{cases}$ for all $i\in\mathcal{I}$.
The frequency $F_I(u)\in\mathbb{N}$ of u with respect to I is defined as $F_I(u) := \#\,\mathrm{supp}\,\chi_I(u)$.
  • 43. 4* Computation of level sets and frequency
Proposition. Let $I\subset\mathbb{R}$, $u\in\mathcal{T}$, and $\chi_I(u)$ its characteristic. We have
$L_I(u) = \chi_I(u)\odot u$ and $\mathrm{rank}(L_I(u)) \le \mathrm{rank}(\chi_I(u))\,\mathrm{rank}(u)$.
The frequency $F_I(u)\in\mathbb{N}$ of u with respect to I is $F_I(u) = \langle\chi_I(u), \mathbf{1}\rangle$, where $\mathbf{1} = \bigotimes_{\mu=1}^{d}\tilde{\mathbf{1}}_\mu$, $\tilde{\mathbf{1}}_\mu := (1,\ldots,1)^T\in\mathbb{R}^{n_\mu}$.
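A dense sketch of the proposition (the tensor and interval are arbitrary illustrations): the level set is the Hadamard product of the characteristic with u, and the frequency is its inner product with the all-ones tensor. In low-rank format both are cheap, rank-bounded operations.

```python
import numpy as np

rng = np.random.default_rng(5)
u = rng.uniform(0.0, 1.0, size=(4, 4, 4))   # order-3 tensor with entries in (0, 1)

a, b = 0.6, 0.9
chi = ((u > a) & (u < b)).astype(float)     # characteristic chi_I(u), I = (a, b)
level_set = chi * u                          # L_I(u) = chi_I(u) o u (Hadamard)
freq = int(np.sum(chi))                      # F_I(u) = <chi_I(u), 1>

print(freq == np.count_nonzero(level_set))   # True: frequency = #supp of L_I(u)
```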
  • 44. 4* Definition of a tensor of order d
A tensor of order d is a multidimensional array over a d-tuple index set $\mathcal{I} = I_1\times\cdots\times I_d$:
$\mathbf{A} = [a_{i_1\ldots i_d} : i_\ell\in I_\ell]\in\mathbb{R}^{\mathcal{I}}$, $I_\ell = \{1,\ldots,n_\ell\}$, $\ell = 1,\ldots,d$.
A is an element of the linear space $\mathcal{V}_n = \bigotimes_{\ell=1}^{d} V_\ell$, $V_\ell = \mathbb{R}^{I_\ell}$, equipped with the Euclidean scalar product $\langle\cdot,\cdot\rangle : \mathcal{V}_n\times\mathcal{V}_n\to\mathbb{R}$, defined as
$\langle\mathbf{A},\mathbf{B}\rangle := \sum_{(i_1\ldots i_d)\in\mathcal{I}} a_{i_1\ldots i_d}\, b_{i_1\ldots i_d}$, for $\mathbf{A},\mathbf{B}\in\mathcal{V}_n$.
  • 45. 4* Examples of rank-1 and rank-2 tensors
Rank-1: $f(x_1,\ldots,x_d) = \exp(f_1(x_1) + \cdots + f_d(x_d)) = \prod_{j=1}^{d}\exp(f_j(x_j))$.
Rank-2: $f(x_1,\ldots,x_d) = \sin\big(\sum_{j=1}^{d} x_j\big)$, since $2i\,\sin\big(\sum_{j=1}^{d} x_j\big) = e^{i\sum_{j=1}^{d} x_j} - e^{-i\sum_{j=1}^{d} x_j}$.
The rank-d function $f(x_1,\ldots,x_d) = x_1 + x_2 + \cdots + x_d$ can be approximated by a rank-2 tensor with any prescribed accuracy:
$f \approx \frac{1}{\varepsilon}\prod_{j=1}^{d}(1 + \varepsilon x_j) - \frac{1}{\varepsilon}\prod_{j=1}^{d} 1 + O(\varepsilon), \quad \varepsilon\to 0.$
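The last claim is easy to check numerically: the error of the rank-2 approximation (∏(1 + εx_j) − 1)/ε to x₁ + … + x_d shrinks like O(ε). The dimension and sample points below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(6)
d = 6
x = rng.uniform(-1.0, 1.0, d)        # one evaluation point (x_1, ..., x_d)

for eps in (1e-2, 1e-4, 1e-6):
    # rank-2 approximation: first term prod(1 + eps*x_j)/eps, second term -1/eps
    approx = (np.prod(1.0 + eps * x) - 1.0) / eps
    print(eps, abs(approx - x.sum()))   # error decreases roughly linearly in eps
```

Note that for very small ε the subtraction (∏ − 1)/ε starts to lose floating-point digits, so ε cannot be driven arbitrarily small in finite precision.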
  • 46. 4* Conditional probability and expectation
Classically, Bayes's theorem gives the conditional probability
$P(I_q\,|\,M_z) = \frac{P(M_z\,|\,I_q)}{P(M_z)}\,P(I_q)$ (or $\pi_q(q\,|\,z) = \frac{p(z\,|\,q)}{Z_s}\,p_q(q)$);
expectation with this posterior measure is the conditional expectation. Kolmogorov starts from the conditional expectation $\mathbb{E}(\cdot\,|\,M_z)$ and from this obtains the conditional probability via $P(I_q\,|\,M_z) = \mathbb{E}\big(\chi_{I_q}\,|\,M_z\big)$.
  • 47. 4* Conditional expectation
The conditional expectation is defined as the orthogonal projection onto the closed subspace $L_2(\Omega, P, \sigma(z))$:
$\mathbb{E}(q\,|\,\sigma(z)) := P_{\mathcal{Q}_\infty} q = \arg\min_{\tilde q\in L_2(\Omega,P,\sigma(z))}\|q - \tilde q\|^2_{L_2}.$
The subspace $\mathcal{Q}_\infty := L_2(\Omega, P, \sigma(z))$ represents the available information. The update, also called the assimilated value, $q_a(\omega) := P_{\mathcal{Q}_\infty} q = \mathbb{E}(q\,|\,\sigma(z))$, is a Q-valued RV and represents the new state of knowledge after the measurement.
Doob-Dynkin: $\mathcal{Q}_\infty = \{\varphi\in\mathcal{Q} : \varphi = \phi\circ z,\ \phi\ \text{measurable}\}.$