Numerical methods for solving stochastic partial
differential equations in the Tensor Train format
Alexander Litvinenko1
(joint work with Sergey Dolgov2,3, Boris Khoromskij3 and
Hermann G. Matthies4)
1 SRI UQ and Extreme Computing Research Center, KAUST;
2 Max-Planck-Institut für Mathematik in den Naturwissenschaften, Leipzig;
3 MPI for Dynamics of Complex Systems, Magdeburg;
4 TU Braunschweig, Germany
http://sri-uq.kaust.edu.sa/
Motivation for UQ
Nowadays computational algorithms, run on supercomputers, can
simulate and resolve very complex phenomena. But how reliable are
these predictions? Can we trust these results?
Some parameters/coefficients are unknown; data are lacking and
measurements are few → uncertainty.
Notation, problem setup
Consider
A(u; q) = f ⇒ u = S(f ; q),
where S is a solution operator.
Uncertain Input:
1. Parameter q := q(ω) (assume moments/cdf/pdf/quantiles of
q are given)
2. Boundary and initial conditions, right-hand side
3. Geometry of the domain
Uncertain solution:
1. mean value and variance of u
2. exceedance probabilities P(u > u∗)
3. probability density functions (pdf) of u.
KAUST
Figure: KAUST campus, 5 years old, approx. 7000 people (including
1400 children), 100 nations.
Children at KAUST
Stochastic Numerics Group at KAUST
Figure: SRI UQ Group
3rd UQ Workshop "Advances in UQ Methods, Alg. & Appl."
PDE with uncertain diffusion coefficients
PART 1. Stochastic Forward Problems
PDE with uncertain diffusion coefficients
Consider
−div(κ(x, ω) ∇u(x, ω)) = f(x, ω) in G × Ω, G ⊂ R²,
u = 0 on ∂G,   (1)
where κ(x, ω) is an uncertain diffusion coefficient. Since κ is positive,
one usually takes κ(x, ω) = e^{γ(x,ω)}.
For well-posedness see [Sarkis 09, Gittelson 10, H.J.Starkloff 11,
Ullmann 10].
Further we will assume that covκ(x, y) is given (or estimated from
the available data).
Our previous work
After applying the stochastic Galerkin method, we obtain
Ku = f,
where all ingredients are represented in a tensor format. Solve for u;
compute max{u}, var(u), level sets of u, pdf, cdf.
1. Efficient Analysis of High Dimensional Data in Tensor Formats
[Espig, Hackbusch, A.L., Matthies and Zander, 2012]:
studied the rank of K (on which ingredients it depends).
2. Efficient low-rank approximation of the stochastic Galerkin
matrix in tensor formats [Wähnert, Espig, Hackbusch, A.L., Matthies, 2013].
Smooth transformation of Gaussian RF
Step 1: We assume κ = φ(γ), a smooth transformation of the
Gaussian random field γ(x, ω), e.g. φ(γ) = exp(γ)
[see PhD of E. Zander 2013, or PhD of A. Keese, 2005].
Step 2: Given the covariance matrix of κ(x, ω), we derive the
covariance matrix of γ(x, ω). After that the KLE may be computed,

γ(x, ω) = Σ_{m=1}^{∞} g_m(x) θ_m(ω),   ∫_D cov_γ(x, y) g_m(y) dy = λ_m g_m(x).   (2)
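A minimal numpy sketch of Step 2, assuming a 1-D domain D = [0, 1] and an exponential covariance (illustrative choices, not the talk's setup): the integral eigenproblem in (2) is discretized on a uniform grid and the expansion is truncated after M terms.

```python
import numpy as np

# Discrete KLE of a Gaussian field gamma on a 1-D grid (illustrative setup).
n, ell, M = 200, 0.5, 10                        # grid size, cov. length, KLE terms
x = np.linspace(0.0, 1.0, n)
C = np.exp(-np.abs(x[:, None] - x[None, :]) / ell)   # cov_gamma(x, y) on the grid

w = 1.0 / n                                     # quadrature weight per node
lam, vecs = np.linalg.eigh(w * C)               # discretization of eigenproblem (2)
idx = np.argsort(lam)[::-1][:M]                 # keep the M largest eigenpairs
lam, vecs = lam[idx], vecs[:, idx] / np.sqrt(w) # normalize g_m in L2(D)

g = vecs * np.sqrt(lam)                         # g_m(x) absorbs sqrt(lambda_m)
theta = np.random.randn(M)                      # i.i.d. standard normal theta_m
gamma = g @ theta                               # one truncated-KLE sample of gamma
kappa = np.exp(gamma)                           # kappa = phi(gamma) = exp(gamma)
```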
Full J_{M,p} and sparse J^{sp}_{M,p} multi-index sets
The M-dimensional PCE approximation of κ writes (α = (α_1, ..., α_M))

κ(x, ω) ≈ Σ_{α∈J_M} κ_α(x) H_α(θ(ω)),   H_α(θ) := h_{α_1}(θ_1) · · · h_{α_M}(θ_M).   (3)

Definition
The full multi-index set is defined by restricting each component
independently,
J_{M,p} = {0, 1, . . . , p_1} ⊗ · · · ⊗ {0, 1, . . . , p_M},
where p = (p_1, . . . , p_M) is a shortcut for the tuple of order limits.
Definition
The sparse multi-index set is defined by restricting the sum of the
components,
J^{sp}_{M,p} = {α = (α_1, . . . , α_M) : α ≥ 0, α_1 + · · · + α_M ≤ p}.
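To make the two cardinalities concrete, a short Python sketch (M and p are illustrative): the full set has (p+1)^M elements, the sparse one only C(M+p, M).

```python
import itertools

# Build the full and sparse multi-index sets for illustrative M = 3, p = 2
# (isotropic order limits).
M, p = 3, 2
J_full = list(itertools.product(range(p + 1), repeat=M))   # (p+1)^M indices
J_sparse = [a for a in J_full if sum(a) <= p]              # |alpha|_1 <= p
print(len(J_full), len(J_sparse))                          # 27 vs 10 = C(M+p, M)
```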
TT compression of PCE coeffs
The Galerkin coefficients κ_α are evaluated as follows [Thm 3.10,
PhD of E. Zander 13],

κ_α(x) = ((α_1 + · · · + α_M)! / (α_1! · · · α_M!)) φ_{α_1+···+α_M} Π_{m=1}^{M} g_m^{α_m}(x),   (4)

where φ_{|α|} := φ_{α_1+···+α_M} is the Galerkin coefficient of the
transform function, and g_m^{α_m}(x) means just the α_m-th power of the
KLE function value g_m(x).
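A Python sketch of formula (4); the inputs are assumed precomputed and the names are illustrative: g is an (M, n) array with g_m on the spatial grid, and phi is a 1-D array of the Galerkin coefficients φ_k of the transform function.

```python
import numpy as np
from math import factorial

# Evaluate kappa_alpha(x) on the spatial grid via formula (4).
def kappa_alpha(alpha, g, phi):
    k = sum(alpha)
    multinom = factorial(k) / np.prod([factorial(a) for a in alpha])
    powers = np.prod([g[m] ** a for m, a in enumerate(alpha)], axis=0)
    return multinom * phi[k] * powers        # kappa_alpha at the grid points x
```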
Complexity reduction
Complexity reduction in Eq. (4) can be achieved with the help of the
KLE of κ(x, ω):

κ(x, ω) ≈ κ̄(x) + Σ_{ℓ=1}^{L} √μ_ℓ v_ℓ(x) η_ℓ(ω)   (5)

with the normalized spatial functions v_ℓ(x).
Instead of using κ_α(x), (4), directly, we compute

κ̃_ℓ(α) = ((α_1 + · · · + α_M)! / (α_1! · · · α_M!)) φ_{α_1+···+α_M} ∫_D Π_{m=1}^{M} g_m^{α_m}(x) v_ℓ(x) dx.

Note that L ≪ N. Then we restore the approximate coefficients

κ_α(x) ≈ κ̄(x) + Σ_{ℓ=1}^{L} v_ℓ(x) κ̃_ℓ(α).
Construction of the stochastic Galerkin operator
Given the KLE of κ, assemble for i, j = 1, . . . , N and ℓ = 1, . . . , L:

K_0(i, j) = ∫_D κ̄(x) ∇ϕ_i(x) · ∇ϕ_j(x) dx,   K_ℓ(i, j) = ∫_D v_ℓ(x) ∇ϕ_i(x) · ∇ϕ_j(x) dx,   (6)

K_ℓ^{(ω)}(α, β) = ∫_{R^M} H_α(θ) H_β(θ) Σ_{ν∈J_M} κ̃_ℓ(ν) H_ν(θ) ρ(θ) dθ
= Σ_{ν∈J_M} ∆_{α,β,ν} κ̃_ℓ(ν),

∆_{α,β,ν} = ∆_{α_1,β_1,ν_1} · · · ∆_{α_M,β_M,ν_M},
∆_{α_m,β_m,ν_m} = ∫_R h_{α_m}(θ) h_{β_m}(θ) h_{ν_m}(θ) ρ(θ) dθ.
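The 1-D triple products ∆_{α_m,β_m,ν_m} can be computed by Gauss-Hermite quadrature; a sketch assuming normalized probabilists' Hermite polynomials h_k and a standard normal density ρ:

```python
import numpy as np
from numpy.polynomial.hermite_e import hermegauss, hermeval
from math import factorial, sqrt, pi

# Delta_{a,b,c} = int h_a(t) h_b(t) h_c(t) rho(t) dt via quadrature.
def h(k, t):
    c = np.zeros(k + 1)
    c[k] = 1.0
    return hermeval(t, c) / sqrt(factorial(k))   # normalized He_k

def delta(a, b, c, nq=40):
    t, w = hermegauss(nq)                 # nodes/weights for exp(-t^2 / 2)
    w = w / sqrt(2.0 * pi)                # renormalize to the normal density
    return np.sum(w * h(a, t) * h(b, t) * h(c, t))

print(delta(1, 1, 0), delta(1, 1, 2))     # ~1 (orthonormality), ~sqrt(2)
```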
Stochastic Galerkin operator
Putting together the previous formulas, we obtain the stochastic Galerkin
operator

K = K_0^{(x)} ⊗ ∆_0 + Σ_{ℓ=1}^{L} K_ℓ^{(x)} ⊗ K_ℓ^{(ω)},   (7)

with K ∈ R^{N(p+1)^M × N(p+1)^M} in the case of the full J_{M,p}.
IDEA: If the PCE coefficients of κ are computed in a tensor product
format, the direct (Kronecker) product structure of ∆ allows us to exploit
the same format for (7), and to build the operator easily.
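A toy numpy sketch of the Kronecker structure in (7), with random stand-ins for the spatial and stochastic factors (all names and sizes are illustrative):

```python
import numpy as np

# Assemble K = K0 (x) Delta_0 + sum_l Kx[l] (x) Kw[l] for a toy problem.
N, L, n_st = 5, 3, 8                          # spatial dofs, KLE terms, #J_{M,p}
rng = np.random.default_rng(0)
K0 = np.eye(N)
Kx = [rng.standard_normal((N, N)) for _ in range(L)]
Kw = [rng.standard_normal((n_st, n_st)) for _ in range(L)]

K = np.kron(K0, np.eye(n_st))                 # K_0^(x) (x) Delta_0
for l in range(L):
    K += np.kron(Kx[l], Kw[l])                # + K_l^(x) (x) K_l^(omega)
print(K.shape)                                # (N * n_st, N * n_st)
```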
Tensor Train
Two Tensor Train examples
Examples (B. Khoromskij’s lecture)
f(x_1, ..., x_d) = w_1(x_1) + w_2(x_2) + ... + w_d(x_d)

= (w_1(x_1), 1) [1, 0; w_2(x_2), 1] · · · [1, 0; w_{d−1}(x_{d−1}), 1] [1; w_d(x_d)]

(matrices written row-wise).
Example:
TT rank(f) = 2

f = sin(x_1 + x_2 + ... + x_d)

= (sin x_1, cos x_1) [cos x_2, −sin x_2; sin x_2, cos x_2] · · · [cos x_{d−1}, −sin x_{d−1}; sin x_{d−1}, cos x_{d−1}] [cos x_d; sin x_d]
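This factorization is easy to verify numerically; a numpy sketch multiplying the blocks at random points:

```python
import numpy as np

# Verify the rank-2 TT factorization of sin(x_1 + ... + x_d).
d = 6
x = np.random.rand(d)

def core(t):                                       # middle 2 x 2 TT block
    return np.array([[np.cos(t), -np.sin(t)],
                     [np.sin(t),  np.cos(t)]])

v = np.array([np.sin(x[0]), np.cos(x[0])])         # first block (1 x 2)
for k in range(1, d - 1):
    v = v @ core(x[k])
f = v @ np.array([np.cos(x[-1]), np.sin(x[-1])])   # last block (2 x 1)
print(np.allclose(f, np.sin(x.sum())))             # True
```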
Low-rank response surface: PCE in the TT format
Calculation of

κ̃_ℓ(α) = ((α_1 + · · · + α_M)! / (α_1! · · · α_M!)) φ_{α_1+···+α_M} ∫_D Π_{m=1}^{M} g_m^{α_m}(x) v_ℓ(x) dx

in the TT format needs:
a procedure to compute each element of the tensor, e.g. κ̃_{α_1,...,α_M};
a way to build a TT approximation κ̃_α ≈ κ^{(1)}(α_1) · · · κ^{(M)}(α_M) using a
feasible number of elements (i.e. much less than (p + 1)^M).
Such a procedure exists, and it relies on the cross interpolation of
matrices, generalized to the higher-dimensional case [Oseledets, Tyrtyshnikov
2010; Savostyanov 13; Grasedyck; Bebendorf].
The PCE coefficients κ̃_ℓ(α) are:

κ̃_ℓ(α) = Σ_{s_1,...,s_{M−1}} κ^{(1)}_{ℓ,s_1}(α_1) κ^{(2)}_{s_1,s_2}(α_2) · · · κ^{(M)}_{s_{M−1}}(α_M).   (8)

Collect the spatial components into the "zeroth" TT block,

κ^{(0)}(x) = [κ^{(0)}_ℓ(x)]_{ℓ=0}^{L} = [κ̄(x), v_1(x), . . . , v_L(x)],   (9)

then the PCE writes as the following TT format,

κ(x, α) = Σ_{ℓ,s_1,...,s_{M−1}} κ^{(0)}_ℓ(x) κ^{(1)}_{ℓ,s_1}(α_1) · · · κ^{(M)}_{s_{M−1}}(α_M).   (10)
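Given the TT blocks, a single entry of (10) is a product of small matrices; a numpy sketch with illustrative core shapes:

```python
import numpy as np

# Evaluate one entry kappa(x_i, alpha) from a list of TT cores: cores[0]
# has shape (n_x, 1, r_0), cores[m] has shape (p + 1, r_{m-1}, r_m), and
# the last rank is 1 (all shapes illustrative).
def tt_entry(cores, i, alpha):
    v = cores[0][i]                     # (1, r_0) row for grid point x_i
    for m, a in enumerate(alpha, start=1):
        v = v @ cores[m][a]             # contract rank index s_{m-1} -> s_m
    return v.item()

rng = np.random.default_rng(1)          # tiny random example, M = 3, p = 2
cores = [rng.random((10, 1, 3)), rng.random((3, 3, 2)),
         rng.random((3, 2, 2)), rng.random((3, 2, 1))]
print(tt_entry(cores, i=4, alpha=(0, 2, 1)))
```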
Stochastic Galerkin matrix in TT format
Given κ_α(x), (10), we split the whole sum over ν in K, (7):

Σ_{ν∈J_{M,p}} ∆_{α,β,ν} κ̃_ℓ(ν) = Σ_{s_1,...,s_{M−1}} K^{(1)}_{ℓ,s_1}(α_1, β_1) K^{(2)}_{s_1,s_2}(α_2, β_2) · · · K^{(M)}_{s_{M−1}}(α_M, β_M),

K^{(m)}(α_m, β_m) = Σ_{ν_m=0}^{p_m} ∆_{α_m,β_m,ν_m} κ^{(m)}(ν_m),   m = 1, . . . , M.   (11)

Then the TT representation of the operator writes

K = Σ_{ℓ,s_1,...,s_{M−1}} K^{(0)}_ℓ ⊗ K^{(1)}_{ℓ,s_1} ⊗ · · · ⊗ K^{(M)}_{s_{M−1}} ∈ R^{(N·#J_{M,p})×(N·#J_{M,p})}.   (12)
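The contraction in (11) is a single tensor contraction per dimension; a numpy sketch with illustrative shapes:

```python
import numpy as np

# Contract the triple-product array Delta[a, b, nu] with the m-th TT core of
# kappa, core_m[nu, s_prev, s_next] (illustrative names and shapes), giving
# the operator core K_m[s_prev, s_next, alpha_m, beta_m].
p, r1, r2 = 3, 4, 5
Delta = np.random.rand(p + 1, p + 1, p + 1)
core_m = np.random.rand(p + 1, r1, r2)
K_m = np.einsum('abn,nij->ijab', Delta, core_m)
print(K_m.shape)                              # (4, 5, 4, 4)
```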
Solving and Post-processing:
Solve the linear system Ku = f by alternating optimization
methods [Dolgov, Savostyanov 14] with a mean-field
preconditioner, and obtain the solution u in the TT format:

u(x, α) = Σ_{s_0,...,s_{M−1}} u^{(0)}_{s_0}(x) u^{(1)}_{s_0,s_1}(α_1) · · · u^{(M)}_{s_{M−1}}(α_M),   (13)

u(x, θ) = Σ_{s_0,...,s_{M−1}} u^{(0)}_{s_0}(x) (Σ_{α_1=0}^{p} h_{α_1}(θ_1) u^{(1)}_{s_0,s_1}(α_1)) · · · (Σ_{α_M=0}^{p} h_{α_M}(θ_M) u^{(M)}_{s_{M−1}}(α_M)).   (14)

Then compute: mean, (co)variance, exceedance probabilities.
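For an orthonormal basis {H_α}, the mean is the α = 0 coefficient and the variance is the sum of squares of the remaining ones; a sketch assuming a hypothetical dense coefficient array U:

```python
import numpy as np

# Post-processing sketch: U has shape (n_x, #J) with PCE coefficients of u,
# column 0 corresponding to alpha = (0, ..., 0) (orthonormal basis assumed).
def mean_and_variance(U):
    return U[:, 0], np.sum(U[:, 1:] ** 2, axis=1)

U = np.random.rand(100, 50)                   # random stand-in coefficients
u_mean, u_var = mean_and_variance(U)
```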
Numerics: Main steps
1. Use sglib (E. Zander, TU Braunschweig) for the discretization and the
solution with J^{sp}_{M,p}.
2. Compute the PCE (sglib) of the coefficient κ(x, ω) in the TT
format by the new block adaptive cross algorithm (TT-Toolbox).
3. Use the TT-Toolbox for the full J_{M,p}.
4. Use amen_cross.m for the TT approximation of κ̃_α.
5. Compute the stochastic Galerkin matrix K in the TT format.
6. Replace high-dimensional calculations by the TT-Toolbox.
7. Compute the solution of the linear system in TT (alternating
minimal energy, tAMEn).
8. Post-processing in the TT format.
Numerical experiments, errors, accuracy
D = [−1, 1]² \ [0, 1]², f = f(x) = 1; log-normal and beta
distributions for κ; 557, 2145, and 8417 spatial DoFs.

Eκ = (1/Nmc) Σ_{z=1}^{Nmc} [ Σ_{i=1}^{N} (κ(x_i, θ_z) − κ_*(x_i, θ_z))² / Σ_{i=1}^{N} κ_*²(x_i, θ_z) ],

where {θ_z}_{z=1}^{Nmc} are normally distributed random samples and
κ_*(x_i, θ_z) = φ(γ(x_i, θ_z)) is the reference coefficient computed
without using the PCE for φ.

E¯u = ‖¯u − ¯u_*‖_{L2(D)} / ‖¯u_*‖_{L2(D)},   Evar_u = ‖var_u − var_{u_*}‖_{L2(D)} / ‖var_{u_*}‖_{L2(D)}.
More numerics
We compute the maximizer of the mean solution,
x_max : ¯u(x_max) ≥ ¯u(x) ∀x ∈ D,
and set u_max(θ) = u(x_max, θ), û = ¯u(x_max).
Taking some τ > 1, we compute

P = P(u_max(θ) > τ û) = ∫_{R^M} χ_{u_max(θ) > τ û}(θ) ρ(θ) dθ.   (16)

By P_* we will also denote the probability computed by the
Monte Carlo method, and estimate the error as E_P = |P − P_*| / P_*.
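A sketch of estimating (16) by plain Monte Carlo over a surrogate; the function u_max below is a toy stand-in:

```python
import numpy as np

# Estimate P(u_max(theta) > tau * u_hat) by sampling the Gaussian rho.
def u_max(theta):                      # hypothetical surrogate at x_max
    return 1.0 + 0.1 * theta.sum(axis=1)

M, N_mc, tau, u_hat = 20, 100_000, 1.2, 1.0
theta = np.random.randn(N_mc, M)       # standard normal samples
P = np.mean(u_max(theta) > tau * u_hat)
print(P)
```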
Sparse J^{sp}_{M,p} or full J_{M,p}?
Which is better: the sparse J^{sp}_{M,p} or the full J_{M,p} multi-index set?
CPU times (sec.) versus p, log-normal distribution
p      TT (full index set J_{M,p})      Sparse (index set J^{sp}_{M,p})
       Tκ      Top     Tu               Tκ      Top      Tu
1      9.6     0.2     1.7              0.5     0.3      0.65
2      14.7    0.2     3                0.5     3.2      1.4
3      19.1    0.2     3.4              0.7     1028     18
4      24.4    0.2     4.2              2.2     —        —
5      30.9    0.32    5.3              9.8     —        —
How does the polynomial order influence the ranks?
How does the maximal polynomial order p influence the TT ranks?
Performance versus p, log-normal distribution
p    CPU time, sec.        rκ   ru   rχ̂    Eκ                Eu                P
     TT    Sparse   χ̂                      TT      Sparse    TT      Sparse    TT
1    11    1.4      0.2   32   42   1      4e-3    1.7e-1    1e-2    1e-1      0
2    18    5.1      0.3   32   49   1      1e-4    1.1e-1    5e-4    5e-2      0
3    23    1046     83    32   49   462    6e-5    2.0e-3    3e-4    5e-4      2.8e-4
4    29    —        70    32   50   416    6e-5    —         1e-4    —         1.2e-4
5    37    —        103   32   49   410    6e-5    —         1e-4    —         6.2e-4

Take τ = 1.2:

P = P(u_max(θ) > τ û) = ∫_{R^M} χ_{u_max(θ) > τ û}(θ) ρ(θ) dθ.   (17)
How does the stochastic dimension M influence the ranks?
How does the stochastic dimension M influence the TT ranks?
Performance versus M, log-normal distribution
M    CPU time, sec.        rκ   ru   rχ̂    Eκ               Eu                P
     TT    Sparse   χ̂                      TT     Sparse    TT     Sparse     TT
10   6     6        1.3   20   39   70     2e-4   1.7e-1    3e-4   1.5e-1     2.86e-4
15   12    92       23    27   42   381    8e-5   2e-3      3e-4   5e-4       3e-4
20   22    1e+3     67    32   50   422    6e-5   2e-3      3e-4   5e-4       2.96e-4
30   53    5e+4     137   39   50   452    6e-5   1e-1      3e-4   5.5e-2     2.78e-4
How does the covariance length influence the ranks?
Performance versus cov. length, log-normal distribution
cov.     CPU time, sec.         rκ   ru   rχ̂    Eκ                 Eu                  P
length   TT     Sparse    χ̂                     TT       Sparse    TT        Sparse    TT
0.1      216    55800     0.9   70   50   1     2e-2     2e-2      1.8e-2    1.8e-2    0
0.3      317    52360     42    87   74   297   3e-3     3.5e-3    2.6e-3    2.6e-3    8e-31
0.5      195    51700     58    67   74   375   1.5e-4   2e-3      2.6e-4    3.1e-4    6e-33
1.0      57.3   55200     97    39   50   417   6.1e-5   9e-2      3.2e-4    5.6e-2    2.95e-04
1.5      32.4   49800     121   31   34   424   3.2e-5   2e-1      5e-4      1.7e-1    7.5e-04
How does the standard deviation σ influence the ranks?
How does the standard deviation σ influence the TT ranks?
Performance versus σ, log-normal distribution
σ     CPU time, sec.       rκ   ru   rχ̂    Eκ              Eu               P
      TT   Sparse   χ̂                      TT     Sparse   TT     Sparse    TT
0.2   16   1e+3     0.3   21   31   1      6e-5   5e-5     4e-5   1e-5      0
0.4   19   968      0.3   29   42   1      7e-5   8e-4     1e-4   2e-4      0
0.5   21   970      80    32   49   456    6e-5   2e-3     3e-4   5e-4      3e-4
0.6   24   962      25    34   57   272    9e-5   4e-3     6e-4   1e-3      2e-3
0.8   32   969      68    39   66   411    4e-4   8e-2     2e-3   3e-2      8e-2
1.0   51   1070     48    44   82   363    2e-3   4e-1     5e-3   3e-1      9e-2
How does the number of DoFs influence the ranks?
Performance versus #DoFs, log-normal distribution
#DoFs   CPU time, sec.       rκ   ru   rχ̂    Eκ                Eu                 P
        TT    Sparse   χ̂                     TT       Sparse   TT     Sparse      TT
557     6     6        1.3   20   39   71    2e-4     1.7e-1   3e-4   1.5e-1      2.86e-4
2145    9     14       1.2   20   39   76    2e-4     2e-3     3e-4   5.7e-4      2.9e-4
8417    357   171      0.8   20   40   69    1.7e-4   2e-3     3e-4   5.6e-4      2.93e-4
Comparison with the Monte Carlo
Comparison of the solution obtained via (stochastic Galerkin + TT)
with the solution obtained via Monte Carlo (4000 samples).
For the Monte Carlo test, we prepare the TT solution with
parameters p = 5 and M = 30.
Verification of the MC method (4000), log-normal distr.
Nmc    TMC, sec.    E¯u     Evar_u     P_*      E_P        TT results
10²    0.6          9e-3    2e-1       0        ∞          T_solve 97 sec.
10³    6.2          2e-3    6e-2       0        ∞          T_χ̂ 157 sec.
10⁴    6.2·10¹      6e-4    7e-3       4e-4     5e-1       rκ 39
10⁵    6.2·10²      3e-4    3e-3       4e-4     5e-1       ru 50
10⁶    6.3·10³      1e-4    1e-3       5e-4     4e-1       rχ̂ 432
                                                           P 6e-4
Part II: diffusion coefficient has beta distrib.
κ(x, ω) = B⁻¹_{5,2}( (1 + erf(γ(x, ω)/√2)) / 2 ) + 1,

B_{a,b}(z) = (1/B(a, b)) ∫_0^z t^{a−1} (1 − t)^{b−1} dt.
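A scipy sketch of this transform: norm.cdf gives (1 + erf(γ/√2))/2, and beta(5, 2).ppf is the inverse of the regularized incomplete beta function B_{5,2}.

```python
import numpy as np
from scipy.stats import beta, norm

# Sample the beta-distributed coefficient by pushing the Gaussian field
# through the normal CDF and the inverse regularized incomplete beta
# function; gamma below is a random stand-in for gamma(x, omega).
gamma = np.random.randn(1000)
u = norm.cdf(gamma)                  # equals (1 + erf(gamma / sqrt(2))) / 2
kappa = beta(5, 2).ppf(u) + 1.0      # B^{-1}_{5,2}(u) + 1, values in [1, 2]
```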
We studied (for the beta distribution):
1. Performance versus p
2. Performance versus stochastic dimension M
3. Performance versus cov. length
4. Performance versus #DoFs
5. Verification of the Monte Carlo method
Take-home messages
1. TT methods become preferable for high p; otherwise the full
computation on a small sparse set may be incredibly fast.
This reflects the "curse of order" taking place for the
sparse set instead of the "curse of dimensionality" in the full
set: the cardinality of the sparse set grows exponentially with p.
2. The TT approach scales linearly with p.
3. TT methods allow an easy calculation of the stochastic Galerkin
operator. With p < 10, the TT storage of the stochastic Galerkin operator
lets us forget about sparsity issues, since the number
of TT entries O(Mp²r²) is tractable.
4. Chebyshev, Laguerre, ... polynomials may be incorporated into the scheme
freely.
Future plans for the next article
1. Compute Sobol indices in the TT format. Which uncertain
coefficients and which PCE terms are important?
2. The solution of this linear elliptic SPDE is a "workhorse" for
the non-linear equation and the Newton method.
3. The stochastic Galerkin operator in the TT format above can be used as a
preconditioner (it is very fast!) for more complicated
non-linear problems.
4. Apply the approach to more complicated diffusion coefficients (e.g. ones
that are not so easily splittable).
5. To create an analytic u, compute the right-hand side analytically and
solve the problem again (to avoid using MC as a reference).
Approximate Bayesian Update
PART 2. Inverse Problems via approximate
Bayesian Update
Setting for the identification process
General idea:
We observe / measure a system whose structure we know in
principle.
The system behaviour depends on some quantities (parameters),
which we do not know ⇒ uncertainty.
We model (uncertainty in) our knowledge in a Bayesian setting:
as a probability distribution on the parameters.
We start with what we know a priori, then perform a measurement.
This gives new information, to update our knowledge
(identification).
Update in probabilistic setting works with conditional probabilities
⇒ Bayes’s theorem.
Repeated measurements lead to better identification.
Mathematical setup
Consider
A(u; q) = f ⇒ u = S(f; q),
where S is the solution operator.
The operator depends on the parameters q ∈ Q;
hence the state u ∈ U is also a function of q.
Measurement operator Y with values in Y:
y = Y(q; u) = Y(q, S(f; q)).
Examples of measurements:
(ODE) u(t) = (x(t), y(t), z(t))ᵀ, y(t) = (x(t), y(t))ᵀ;
(PDE) y(ω) = ∫_{D0} u(ω, x) dx, y(ω) = ∫_{D0} |grad u(ω, x)|² dx, or u in a
few points.
Inverse problem
For given f, the measurement y is just a function of q.
This function is usually not invertible ⇒ ill-posed problem;
the measurement y does not contain enough information.
In the Bayesian framework, the state of knowledge is modelled in a
probabilistic way: the parameters q are uncertain and assumed to be random.
The Bayesian setting allows updating / sharpening of the information
about q when a measurement is performed.
The problem of updating the distribution (the state of knowledge of q)
becomes well-posed.
Can be applied successively; each new measurement y and
forcing f (both may also be uncertain) will provide new information.
Conditional probability and expectation
With the state u ∈ U ⊗ S a random variable, the quantity to be measured
y(ω) = Y(q(ω), u(ω)) ∈ Y ⊗ S
is also uncertain, a random variable.
A new measurement z is performed, composed of the
"true" value y ∈ Y and a random error ε: z(ω) = y + ε(ω).
Classically, Bayes's theorem gives the conditional probability
P(I_q | M_z) = (P(M_z | I_q) / P(M_z)) P(I_q);
the expectation with this posterior measure is the conditional expectation.
Kolmogorov starts from the conditional expectation E(·|M_z)
and obtains the conditional probability via P(I_q | M_z) = E(χ_{I_q} | M_z).
IDEA of the Bayesian Update (BU)
Let Y(x, θ), θ = (θ_1, ..., θ_M, ...), be approximated as

Y(x, θ) = Σ_{β∈J_{M,p}} H_β(θ) Y_β(x),   q(x, θ) = Σ_{β∈J_{M,p}} H_β(θ) q_β(x),

Y_β(x) = (1/β!) ∫_Θ H_β(θ) Y(x, θ) P(dθ).

Take q_f(ω) = q_0(ω).
Linear BU: q_a = q_f + K · (z − y).
Non-linear BU: q_a = q_f + H_1 · (z − y) + (z − y)ᵀ · H_2 · (z − y).
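A numpy sketch of the linear BU; here the gain K is built as the Kalman-type gain C_qy (C_yy + C_ε)⁻¹ estimated from an ensemble, which is one common construction (not necessarily the exact K of the talk):

```python
import numpy as np

# Linear Bayesian update q_a = q_f + K (z - y) with an ensemble-based gain.
rng = np.random.default_rng(0)
n_q, n_y, n_s = 4, 2, 10_000
q_f = rng.standard_normal((n_s, n_q))          # prior (forecast) ensemble of q
A = rng.standard_normal((n_y, n_q))            # toy linear measurement operator
y = q_f @ A.T                                  # forecast measurements y = Y(q)
C_eps = 0.01 * np.eye(n_y)                     # measurement error covariance

dq, dy = q_f - q_f.mean(0), y - y.mean(0)
C_qy = dq.T @ dy / (n_s - 1)                   # cross-covariance of q and y
C_yy = dy.T @ dy / (n_s - 1)
K = C_qy @ np.linalg.inv(C_yy + C_eps)         # Kalman-type gain

z = np.array([0.5, -0.2])                      # observed data (toy values)
eps = rng.multivariate_normal(np.zeros(n_y), C_eps, size=n_s)
q_a = q_f + (z + eps - y) @ K.T                # updated (posterior) ensemble
```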
Open questions
Multivariate Cauchy distribution
The characteristic function ϕ_X(t) of the multivariate Cauchy
distribution is defined as follows:

ϕ_X(t) = exp( i (t_1, t_2) · (μ_1, μ_2)ᵀ − (1/2) (t_1, t_2) [σ_1², 0; 0, σ_2²] (t_1, t_2)ᵀ ),   (18)

ϕ_X(t) ≈ Σ_{ν=1}^{R} ϕ_{X_{ν,1}}(t_1) · ϕ_{X_{ν,2}}(t_2).   (19)

Again, from the inversion theorem, the probability density of X on
R² can be computed from ϕ_X(t) as follows:
p_X(y) = (1/(2π)²) ∫_{R²} exp(−i⟨y, t⟩) ϕ_X(t) dt   (20)

≈ (1/(2π)²) ∫_{R²} exp(−i(y_1 t_1 + y_2 t_2)) Σ_{ν=1}^{R} ϕ_{X_{ν,1}}(t_1) · ϕ_{X_{ν,2}}(t_2) dt_1 dt_2   (21)

≈ Σ_{ν=1}^{R} [ (1/(2π)) ∫_R exp(−i y_1 t_1) ϕ_{X_{ν,1}}(t_1) dt_1 ] · [ (1/(2π)) ∫_R exp(−i y_2 t_2) ϕ_{X_{ν,2}}(t_2) dt_2 ]   (22)

≈ Σ_{ν=1}^{R} p_{X_{ν,1}}(y_1) · p_{X_{ν,2}}(y_2),   (23)

i.e. the probability density p_X(y) is numerically splittable.
Elliptically contoured multivariate stable distribution
ϕ_X(t) = exp( i (t_1, t_2) · (μ_1, μ_2)ᵀ − ( (t_1, t_2) [σ_1², 0; 0, σ_2²] (t_1, t_2)ᵀ )^{α/2} ).   (24)

Now the question is to find a separation

( (t_1, t_2) [σ_1², 0; 0, σ_2²] (t_1, t_2)ᵀ )^{α/2} ≈ Σ_{ν=1}^{R} φ_{ν,1}(t_1) · φ_{ν,2}(t_2),   (25)

with some tensor rank R.
Multivariate distribution
Assume that the characteristic function ϕ_X(t) of some multivariate
d-dimensional distribution is approximated as follows:

ϕ_X(t) ≈ Σ_{ℓ=1}^{R} Π_{μ=1}^{d} ϕ_{X_{ℓ,μ}}(t_μ).   (26)

Then

p_X(y) = const ∫_{R^d} exp(−i⟨y, t⟩) ϕ_X(t) dt   (27)

≈ const ∫_{R^d} exp(−i Σ_{j=1}^{d} y_j t_j) Σ_{ℓ=1}^{R} Π_{μ=1}^{d} ϕ_{X_{ℓ,μ}}(t_μ) dt_1 ... dt_d   (28)

≈ Σ_{ℓ=1}^{R} const Π_{μ=1}^{d} ∫_R exp(−i y_μ t_μ) ϕ_{X_{ℓ,μ}}(t_μ) dt_μ   (29)

≈ Σ_{ℓ=1}^{R} Π_{μ=1}^{d} p_{X_{ℓ,μ}}(y_μ).   (30)
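The separability thus reduces the d-dimensional inversion to products of 1-D integrals. A numpy sketch of (26)-(30) for a rank R = 1 Gaussian characteristic function (illustrative choice): each factor is inverted by a 1-D quadrature of the inversion integral (29), and the d-dimensional density is assembled as a product.

```python
import numpy as np

# Rank-1 Gaussian factors phi_mu(t) = exp(-sigma_mu^2 t^2 / 2); each 1-D
# inversion integral is approximated by a Riemann sum on a grid in t.
sigma = np.array([1.0, 0.5, 2.0])             # d = 3 one-dimensional factors
t = np.linspace(-40.0, 40.0, 4001)            # quadrature grid in t
y = np.linspace(-3.0, 3.0, 7)                 # evaluation points per direction

def invert_1d(phi_t, y, t):
    # p(y) = (1 / 2 pi) * int exp(-i y t) phi(t) dt
    kernel = np.exp(-1j * np.outer(y, t))
    return np.real((kernel * phi_t).sum(axis=1)) * (t[1] - t[0]) / (2 * np.pi)

p1d = [invert_1d(np.exp(-0.5 * (s * t) ** 2), y, t) for s in sigma]
p_X = p1d[0][:, None, None] * p1d[1][None, :, None] * p1d[2][None, None, :]

exact = np.exp(-0.5 * (y / sigma[0]) ** 2) / (sigma[0] * np.sqrt(2 * np.pi))
print(np.max(np.abs(p1d[0] - exact)))         # small: the 1-D inversion works
```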
Actual computation of ϕX(t)
ϕ_X(τ_β) = E( exp(i ⟨X(θ_1, ..., θ_M), τ_β⟩) )
= ∫···∫_Θ exp(i ⟨X(θ_1, ..., θ_M), τ_β⟩) Π_{m=1}^{M} p_{θ_m}(θ_m) dθ_1 ... dθ_M,

⟨X(ω), τ_β⟩ = ⟨ Σ_{α∈J} ξ_α H_α(θ), τ_β ⟩ ≈ Σ_{ℓ=1}^{d} Σ_{α∈J} ξ_{α,ℓ} H_α(θ) t_{β,ℓ}
= Σ_{α∈J} ( Σ_{ℓ=1}^{d} ξ_{α,ℓ} t_{β,ℓ} ) H_α(θ) = Σ_{α∈J} ⟨ξ_α, τ_β⟩ H_α(θ).   (31)

Now compute the exp() function of the scalar product:
exp(i ⟨X(ω), τ_β⟩) = exp( i Σ_{α∈J} ⟨ξ_α, τ_β⟩ H_α(θ) )   (32)
= Π_{α∈J} exp( i ⟨ξ_α, τ_β⟩ H_α(θ) ).   (33)

Now we apply integration:

ϕ_X(τ_β) = E( exp(i ⟨X(ω), τ_β⟩) )
= ∫···∫_Θ Π_{α∈J} exp( i ⟨ξ_α, τ_β⟩ H_α(θ) ) Π_{m=1}^{M} p_{θ_m}(θ_m) dθ_1 ... dθ_M
≈ (?) Σ_{ℓ=1}^{n_q} w_ℓ Π_{α∈J} exp( i ⟨ξ_α, τ_β⟩ H_α(θ_ℓ) ) Π_{m=1}^{M} p_{θ_m}(θ_{m,ℓ}),

where the last step, a quadrature rule (nodes θ_ℓ, weights w_ℓ) that
preserves the low-rank structure, is exactly the open question.
Literature
1. S. Dolgov, B. N. Khoromskij, A. Litvinenko, H. G. Matthies, Polynomial
Chaos Expansion of random coefficients and the solution of stochastic
partial differential equations in the Tensor Train format,
arXiv:1503.03210, 2015.
2. M. Espig, W. Hackbusch, A. Litvinenko, H. G. Matthies, E. Zander,
Efficient analysis of high dimensional data in tensor formats, Sparse
Grids and Applications, pp. 31-56, 2013.
3. B. N. Khoromskij, A. Litvinenko, H. G. Matthies, Application of
hierarchical matrices for computing the Karhunen-Loeve expansion,
Computing 84 (1-2), 49-67, 2009.
4. M. Espig, W. Hackbusch, A. Litvinenko, H. G. Matthies, P. Wähnert,
Efficient low-rank approximation of the stochastic Galerkin matrix in
tensor formats, Computers and Mathematics with Applications 67 (4),
818-829, 2014.
Center for Uncertainty
Quantification
tion Logo Lock-up
58 / 58

More Related Content

What's hot

QMC Program: Trends and Advances in Monte Carlo Sampling Algorithms Workshop,...
QMC Program: Trends and Advances in Monte Carlo Sampling Algorithms Workshop,...QMC Program: Trends and Advances in Monte Carlo Sampling Algorithms Workshop,...
QMC Program: Trends and Advances in Monte Carlo Sampling Algorithms Workshop,...
The Statistical and Applied Mathematical Sciences Institute
 
Hyperparameter optimization with approximate gradient
Hyperparameter optimization with approximate gradientHyperparameter optimization with approximate gradient
Hyperparameter optimization with approximate gradient
Fabian Pedregosa
 
Can we estimate a constant?
Can we estimate a constant?Can we estimate a constant?
Can we estimate a constant?
Christian Robert
 
Simplified Runtime Analysis of Estimation of Distribution Algorithms
Simplified Runtime Analysis of Estimation of Distribution AlgorithmsSimplified Runtime Analysis of Estimation of Distribution Algorithms
Simplified Runtime Analysis of Estimation of Distribution Algorithms
PK Lehre
 
Bayesian hybrid variable selection under generalized linear models
Bayesian hybrid variable selection under generalized linear modelsBayesian hybrid variable selection under generalized linear models
Bayesian hybrid variable selection under generalized linear models
Caleb (Shiqiang) Jin
 
Approximate Bayesian Computation with Quasi-Likelihoods
Approximate Bayesian Computation with Quasi-LikelihoodsApproximate Bayesian Computation with Quasi-Likelihoods
Approximate Bayesian Computation with Quasi-Likelihoods
Stefano Cabras
 
Approximating Bayes Factors
Approximating Bayes FactorsApproximating Bayes Factors
Approximating Bayes Factors
Christian Robert
 
Bachelor_Defense
Bachelor_DefenseBachelor_Defense
Bachelor_DefenseTeja Turk
 
Estimation of the score vector and observed information matrix in intractable...
Estimation of the score vector and observed information matrix in intractable...Estimation of the score vector and observed information matrix in intractable...
Estimation of the score vector and observed information matrix in intractable...
Pierre Jacob
 
MCMC and likelihood-free methods
MCMC and likelihood-free methodsMCMC and likelihood-free methods
MCMC and likelihood-free methods
Christian Robert
 
Distributed ADMM
Distributed ADMMDistributed ADMM
Distributed ADMM
Pei-Che Chang
 
Bayesian model choice in cosmology
Bayesian model choice in cosmologyBayesian model choice in cosmology
Bayesian model choice in cosmology
Christian Robert
 
Unbiased Hamiltonian Monte Carlo
Unbiased Hamiltonian Monte Carlo Unbiased Hamiltonian Monte Carlo
Unbiased Hamiltonian Monte Carlo
JeremyHeng10
 
ABC in Venezia
ABC in VeneziaABC in Venezia
ABC in Venezia
Christian Robert
 
Poster for Bayesian Statistics in the Big Data Era conference
Poster for Bayesian Statistics in the Big Data Era conferencePoster for Bayesian Statistics in the Big Data Era conference
Poster for Bayesian Statistics in the Big Data Era conference
Christian Robert
 
QMC Program: Trends and Advances in Monte Carlo Sampling Algorithms Workshop,...
QMC Program: Trends and Advances in Monte Carlo Sampling Algorithms Workshop,...QMC Program: Trends and Advances in Monte Carlo Sampling Algorithms Workshop,...
QMC Program: Trends and Advances in Monte Carlo Sampling Algorithms Workshop,...
The Statistical and Applied Mathematical Sciences Institute
 
Random Matrix Theory and Machine Learning - Part 3
Random Matrix Theory and Machine Learning - Part 3Random Matrix Theory and Machine Learning - Part 3
Random Matrix Theory and Machine Learning - Part 3
Fabian Pedregosa
 
Coordinate sampler: A non-reversible Gibbs-like sampler
Coordinate sampler: A non-reversible Gibbs-like samplerCoordinate sampler: A non-reversible Gibbs-like sampler
Coordinate sampler: A non-reversible Gibbs-like sampler
Christian Robert
 
Deep generative model.pdf
Deep generative model.pdfDeep generative model.pdf
Deep generative model.pdf
Hyungjoo Cho
 
Random Matrix Theory and Machine Learning - Part 2
Random Matrix Theory and Machine Learning - Part 2Random Matrix Theory and Machine Learning - Part 2
Random Matrix Theory and Machine Learning - Part 2
Fabian Pedregosa
 

What's hot (20)

QMC Program: Trends and Advances in Monte Carlo Sampling Algorithms Workshop,...
QMC Program: Trends and Advances in Monte Carlo Sampling Algorithms Workshop,...QMC Program: Trends and Advances in Monte Carlo Sampling Algorithms Workshop,...
QMC Program: Trends and Advances in Monte Carlo Sampling Algorithms Workshop,...
 
Hyperparameter optimization with approximate gradient
Hyperparameter optimization with approximate gradientHyperparameter optimization with approximate gradient
Hyperparameter optimization with approximate gradient
 
Can we estimate a constant?
Can we estimate a constant?Can we estimate a constant?
Can we estimate a constant?
 
Simplified Runtime Analysis of Estimation of Distribution Algorithms
Simplified Runtime Analysis of Estimation of Distribution AlgorithmsSimplified Runtime Analysis of Estimation of Distribution Algorithms
Simplified Runtime Analysis of Estimation of Distribution Algorithms
 
Bayesian hybrid variable selection under generalized linear models
Bayesian hybrid variable selection under generalized linear modelsBayesian hybrid variable selection under generalized linear models
Bayesian hybrid variable selection under generalized linear models
 
Approximate Bayesian Computation with Quasi-Likelihoods
Approximate Bayesian Computation with Quasi-LikelihoodsApproximate Bayesian Computation with Quasi-Likelihoods
Approximate Bayesian Computation with Quasi-Likelihoods
 
Approximating Bayes Factors
Approximating Bayes FactorsApproximating Bayes Factors
Approximating Bayes Factors
 
Bachelor_Defense
Bachelor_DefenseBachelor_Defense
Bachelor_Defense
 
Estimation of the score vector and observed information matrix in intractable...
Estimation of the score vector and observed information matrix in intractable...Estimation of the score vector and observed information matrix in intractable...
Estimation of the score vector and observed information matrix in intractable...
 
MCMC and likelihood-free methods
MCMC and likelihood-free methodsMCMC and likelihood-free methods
MCMC and likelihood-free methods
 
Distributed ADMM
Distributed ADMMDistributed ADMM
Distributed ADMM
 
Bayesian model choice in cosmology
Bayesian model choice in cosmologyBayesian model choice in cosmology
Bayesian model choice in cosmology
 
Unbiased Hamiltonian Monte Carlo
Unbiased Hamiltonian Monte Carlo Unbiased Hamiltonian Monte Carlo
Unbiased Hamiltonian Monte Carlo
 
ABC in Venezia
ABC in VeneziaABC in Venezia
ABC in Venezia
 
Poster for Bayesian Statistics in the Big Data Era conference
Poster for Bayesian Statistics in the Big Data Era conferencePoster for Bayesian Statistics in the Big Data Era conference
Poster for Bayesian Statistics in the Big Data Era conference
 
QMC Program: Trends and Advances in Monte Carlo Sampling Algorithms Workshop,...
QMC Program: Trends and Advances in Monte Carlo Sampling Algorithms Workshop,...QMC Program: Trends and Advances in Monte Carlo Sampling Algorithms Workshop,...
QMC Program: Trends and Advances in Monte Carlo Sampling Algorithms Workshop,...
 
Random Matrix Theory and Machine Learning - Part 3
Random Matrix Theory and Machine Learning - Part 3Random Matrix Theory and Machine Learning - Part 3
Random Matrix Theory and Machine Learning - Part 3
 
Coordinate sampler: A non-reversible Gibbs-like sampler
Coordinate sampler: A non-reversible Gibbs-like samplerCoordinate sampler: A non-reversible Gibbs-like sampler
Coordinate sampler: A non-reversible Gibbs-like sampler
 
Deep generative model.pdf
Deep generative model.pdfDeep generative model.pdf
Deep generative model.pdf
 
Random Matrix Theory and Machine Learning - Part 2
Random Matrix Theory and Machine Learning - Part 2Random Matrix Theory and Machine Learning - Part 2
Random Matrix Theory and Machine Learning - Part 2
 

Viewers also liked

Connection between inverse problems and uncertainty quantification problems
Connection between inverse problems and uncertainty quantification problemsConnection between inverse problems and uncertainty quantification problems
Connection between inverse problems and uncertainty quantification problems
Alexander Litvinenko
 
Multi-linear algebra and different tensor formats with applications
Multi-linear algebra and different tensor formats with applications Multi-linear algebra and different tensor formats with applications
Multi-linear algebra and different tensor formats with applications
Alexander Litvinenko
 
Application of hierarchical matrices for partial inverse
Application of hierarchical matrices for partial inverseApplication of hierarchical matrices for partial inverse
Application of hierarchical matrices for partial inverse
Alexander Litvinenko
 
My paper for Domain Decomposition Conference in Strobl, Austria, 2005
My paper for Domain Decomposition Conference in Strobl, Austria, 2005My paper for Domain Decomposition Conference in Strobl, Austria, 2005
My paper for Domain Decomposition Conference in Strobl, Austria, 2005
Alexander Litvinenko
 
A small introduction into H-matrices which I gave for my colleagues
A small introduction into H-matrices which I gave for my colleaguesA small introduction into H-matrices which I gave for my colleagues
A small introduction into H-matrices which I gave for my colleagues
Alexander Litvinenko
 
Application H-matrices for solving PDEs with multi-scale coefficients, jumpin...
Application H-matrices for solving PDEs with multi-scale coefficients, jumpin...Application H-matrices for solving PDEs with multi-scale coefficients, jumpin...
Application H-matrices for solving PDEs with multi-scale coefficients, jumpin...
Alexander Litvinenko
 
Response Surface in Tensor Train format for Uncertainty Quantification
Response Surface in Tensor Train format for Uncertainty QuantificationResponse Surface in Tensor Train format for Uncertainty Quantification
Response Surface in Tensor Train format for Uncertainty Quantification
Alexander Litvinenko
 
Data sparse approximation of the Karhunen-Loeve expansion
Data sparse approximation of the Karhunen-Loeve expansionData sparse approximation of the Karhunen-Loeve expansion
Data sparse approximation of the Karhunen-Loeve expansion
Alexander Litvinenko
 
Hierarchical matrix approximation of large covariance matrices
Hierarchical matrix approximation of large covariance matricesHierarchical matrix approximation of large covariance matrices
Hierarchical matrix approximation of large covariance matrices
Alexander Litvinenko
 
Scalable hierarchical algorithms for stochastic PDEs and UQ
Scalable hierarchical algorithms for stochastic PDEs and UQScalable hierarchical algorithms for stochastic PDEs and UQ
Scalable hierarchical algorithms for stochastic PDEs and UQ
Alexander Litvinenko
 
Minimum mean square error estimation and approximation of the Bayesian update
Minimum mean square error estimation and approximation of the Bayesian updateMinimum mean square error estimation and approximation of the Bayesian update
Minimum mean square error estimation and approximation of the Bayesian update
Alexander Litvinenko
 
Computation of Electromagnetic Fields Scattered from Dielectric Objects of Un...
Computation of Electromagnetic Fields Scattered from Dielectric Objects of Un...Computation of Electromagnetic Fields Scattered from Dielectric Objects of Un...
Computation of Electromagnetic Fields Scattered from Dielectric Objects of Un...
Alexander Litvinenko
 

Viewers also liked (12)

Connection between inverse problems and uncertainty quantification problems
Connection between inverse problems and uncertainty quantification problemsConnection between inverse problems and uncertainty quantification problems
Connection between inverse problems and uncertainty quantification problems
 
Multi-linear algebra and different tensor formats with applications
Multi-linear algebra and different tensor formats with applications Multi-linear algebra and different tensor formats with applications
Multi-linear algebra and different tensor formats with applications
 
Application of hierarchical matrices for partial inverse
Application of hierarchical matrices for partial inverseApplication of hierarchical matrices for partial inverse
Application of hierarchical matrices for partial inverse
 
My paper for Domain Decomposition Conference in Strobl, Austria, 2005
My paper for Domain Decomposition Conference in Strobl, Austria, 2005My paper for Domain Decomposition Conference in Strobl, Austria, 2005
My paper for Domain Decomposition Conference in Strobl, Austria, 2005
 
A small introduction into H-matrices which I gave for my colleagues
A small introduction into H-matrices which I gave for my colleaguesA small introduction into H-matrices which I gave for my colleagues
A small introduction into H-matrices which I gave for my colleagues
 
Application H-matrices for solving PDEs with multi-scale coefficients, jumpin...
Application H-matrices for solving PDEs with multi-scale coefficients, jumpin...Application H-matrices for solving PDEs with multi-scale coefficients, jumpin...
Application H-matrices for solving PDEs with multi-scale coefficients, jumpin...
 
Response Surface in Tensor Train format for Uncertainty Quantification
Response Surface in Tensor Train format for Uncertainty QuantificationResponse Surface in Tensor Train format for Uncertainty Quantification
Response Surface in Tensor Train format for Uncertainty Quantification
 
Data sparse approximation of the Karhunen-Loeve expansion
Data sparse approximation of the Karhunen-Loeve expansionData sparse approximation of the Karhunen-Loeve expansion
Data sparse approximation of the Karhunen-Loeve expansion
 
Hierarchical matrix approximation of large covariance matrices
Hierarchical matrix approximation of large covariance matricesHierarchical matrix approximation of large covariance matrices
Hierarchical matrix approximation of large covariance matrices
 
Scalable hierarchical algorithms for stochastic PDEs and UQ
Scalable hierarchical algorithms for stochastic PDEs and UQScalable hierarchical algorithms for stochastic PDEs and UQ
Scalable hierarchical algorithms for stochastic PDEs and UQ
 
Minimum mean square error estimation and approximation of the Bayesian update
Minimum mean square error estimation and approximation of the Bayesian updateMinimum mean square error estimation and approximation of the Bayesian update
Minimum mean square error estimation and approximation of the Bayesian update
 
Computation of Electromagnetic Fields Scattered from Dielectric Objects of Un...
Computation of Electromagnetic Fields Scattered from Dielectric Objects of Un...Computation of Electromagnetic Fields Scattered from Dielectric Objects of Un...
Computation of Electromagnetic Fields Scattered from Dielectric Objects of Un...
 

Similar to Tensor train to solve stochastic PDEs

QMC: Operator Splitting Workshop, Proximal Algorithms in Probability Spaces -...
QMC: Operator Splitting Workshop, Proximal Algorithms in Probability Spaces -...QMC: Operator Splitting Workshop, Proximal Algorithms in Probability Spaces -...
QMC: Operator Splitting Workshop, Proximal Algorithms in Probability Spaces -...
The Statistical and Applied Mathematical Sciences Institute
 
A nonlinear approximation of the Bayesian Update formula
A nonlinear approximation of the Bayesian Update formulaA nonlinear approximation of the Bayesian Update formula
A nonlinear approximation of the Bayesian Update formula
Alexander Litvinenko
 
Gracheva Inessa - Fast Global Image Denoising Algorithm on the Basis of Nonst...
Gracheva Inessa - Fast Global Image Denoising Algorithm on the Basis of Nonst...Gracheva Inessa - Fast Global Image Denoising Algorithm on the Basis of Nonst...
Gracheva Inessa - Fast Global Image Denoising Algorithm on the Basis of Nonst...
AIST
 
Hands-On Algorithms for Predictive Modeling
Hands-On Algorithms for Predictive ModelingHands-On Algorithms for Predictive Modeling
Hands-On Algorithms for Predictive Modeling
Arthur Charpentier
 
Murphy: Machine learning A probabilistic perspective: Ch.9
Murphy: Machine learning A probabilistic perspective: Ch.9Murphy: Machine learning A probabilistic perspective: Ch.9
Murphy: Machine learning A probabilistic perspective: Ch.9
Daisuke Yoneoka
 
Lecture9 xing
Lecture9 xingLecture9 xing
Lecture9 xing
Tianlu Wang
 
Efficient Analysis of high-dimensional data in tensor formats
Efficient Analysis of high-dimensional data in tensor formatsEfficient Analysis of high-dimensional data in tensor formats
Efficient Analysis of high-dimensional data in tensor formats
Alexander Litvinenko
 
Possible applications of low-rank tensors in statistics and UQ (my talk in Bo...
Possible applications of low-rank tensors in statistics and UQ (my talk in Bo...Possible applications of low-rank tensors in statistics and UQ (my talk in Bo...
Possible applications of low-rank tensors in statistics and UQ (my talk in Bo...
Alexander Litvinenko
 
Research internship on optimal stochastic theory with financial application u...
Research internship on optimal stochastic theory with financial application u...Research internship on optimal stochastic theory with financial application u...
Research internship on optimal stochastic theory with financial application u...
Asma Ben Slimene
 
Presentation on stochastic control problem with financial applications (Merto...
Presentation on stochastic control problem with financial applications (Merto...Presentation on stochastic control problem with financial applications (Merto...
Presentation on stochastic control problem with financial applications (Merto...
Asma Ben Slimene
 
Distributed solution of stochastic optimal control problem on GPUs
Distributed solution of stochastic optimal control problem on GPUsDistributed solution of stochastic optimal control problem on GPUs
Distributed solution of stochastic optimal control problem on GPUs
Pantelis Sopasakis
 
Tensor Completion for PDEs with uncertain coefficients and Bayesian Update te...
Tensor Completion for PDEs with uncertain coefficients and Bayesian Update te...Tensor Completion for PDEs with uncertain coefficients and Bayesian Update te...
Tensor Completion for PDEs with uncertain coefficients and Bayesian Update te...
Alexander Litvinenko
 
Subproblem-Tree Calibration: A Unified Approach to Max-Product Message Passin...
Subproblem-Tree Calibration: A Unified Approach to Max-Product Message Passin...Subproblem-Tree Calibration: A Unified Approach to Max-Product Message Passin...
Subproblem-Tree Calibration: A Unified Approach to Max-Product Message Passin...
Varad Meru
 
Conference poster 6
Conference poster 6Conference poster 6
Conference poster 6
NTNU
 
QMC: Operator Splitting Workshop, Perturbed (accelerated) Proximal-Gradient A...
QMC: Operator Splitting Workshop, Perturbed (accelerated) Proximal-Gradient A...QMC: Operator Splitting Workshop, Perturbed (accelerated) Proximal-Gradient A...
QMC: Operator Splitting Workshop, Perturbed (accelerated) Proximal-Gradient A...
The Statistical and Applied Mathematical Sciences Institute
 
ABC with data cloning for MLE in state space models
ABC with data cloning for MLE in state space modelsABC with data cloning for MLE in state space models
ABC with data cloning for MLE in state space models
Umberto Picchini
 
Numerical Methods
Numerical MethodsNumerical Methods
Numerical Methods
Teja Ande
 
Program on Quasi-Monte Carlo and High-Dimensional Sampling Methods for Applie...
Program on Quasi-Monte Carlo and High-Dimensional Sampling Methods for Applie...Program on Quasi-Monte Carlo and High-Dimensional Sampling Methods for Applie...
Program on Quasi-Monte Carlo and High-Dimensional Sampling Methods for Applie...
The Statistical and Applied Mathematical Sciences Institute
 
SIAM - Minisymposium on Guaranteed numerical algorithms
SIAM - Minisymposium on Guaranteed numerical algorithmsSIAM - Minisymposium on Guaranteed numerical algorithms
SIAM - Minisymposium on Guaranteed numerical algorithms
Jagadeeswaran Rathinavel
 
Inference for stochastic differential equations via approximate Bayesian comp...
Inference for stochastic differential equations via approximate Bayesian comp...Inference for stochastic differential equations via approximate Bayesian comp...
Inference for stochastic differential equations via approximate Bayesian comp...
Umberto Picchini
 

Similar to Tensor train to solve stochastic PDEs (20)

QMC: Operator Splitting Workshop, Proximal Algorithms in Probability Spaces -...
QMC: Operator Splitting Workshop, Proximal Algorithms in Probability Spaces -...QMC: Operator Splitting Workshop, Proximal Algorithms in Probability Spaces -...
QMC: Operator Splitting Workshop, Proximal Algorithms in Probability Spaces -...
 
A nonlinear approximation of the Bayesian Update formula
A nonlinear approximation of the Bayesian Update formulaA nonlinear approximation of the Bayesian Update formula
A nonlinear approximation of the Bayesian Update formula
 
Gracheva Inessa - Fast Global Image Denoising Algorithm on the Basis of Nonst...
Gracheva Inessa - Fast Global Image Denoising Algorithm on the Basis of Nonst...Gracheva Inessa - Fast Global Image Denoising Algorithm on the Basis of Nonst...
Gracheva Inessa - Fast Global Image Denoising Algorithm on the Basis of Nonst...
 
Hands-On Algorithms for Predictive Modeling
Hands-On Algorithms for Predictive ModelingHands-On Algorithms for Predictive Modeling
Hands-On Algorithms for Predictive Modeling
 
Murphy: Machine learning A probabilistic perspective: Ch.9
Murphy: Machine learning A probabilistic perspective: Ch.9Murphy: Machine learning A probabilistic perspective: Ch.9
Murphy: Machine learning A probabilistic perspective: Ch.9
 
Lecture9 xing
Lecture9 xingLecture9 xing
Lecture9 xing
 
Efficient Analysis of high-dimensional data in tensor formats
Efficient Analysis of high-dimensional data in tensor formatsEfficient Analysis of high-dimensional data in tensor formats
Efficient Analysis of high-dimensional data in tensor formats
 
Possible applications of low-rank tensors in statistics and UQ (my talk in Bo...
Possible applications of low-rank tensors in statistics and UQ (my talk in Bo...Possible applications of low-rank tensors in statistics and UQ (my talk in Bo...
Possible applications of low-rank tensors in statistics and UQ (my talk in Bo...
 
Research internship on optimal stochastic theory with financial application u...
Research internship on optimal stochastic theory with financial application u...Research internship on optimal stochastic theory with financial application u...
Research internship on optimal stochastic theory with financial application u...
 
Presentation on stochastic control problem with financial applications (Merto...
Presentation on stochastic control problem with financial applications (Merto...Presentation on stochastic control problem with financial applications (Merto...
Presentation on stochastic control problem with financial applications (Merto...
 
Distributed solution of stochastic optimal control problem on GPUs
Distributed solution of stochastic optimal control problem on GPUsDistributed solution of stochastic optimal control problem on GPUs
Distributed solution of stochastic optimal control problem on GPUs
 
Tensor Completion for PDEs with uncertain coefficients and Bayesian Update te...
Tensor Completion for PDEs with uncertain coefficients and Bayesian Update te...Tensor Completion for PDEs with uncertain coefficients and Bayesian Update te...
Tensor Completion for PDEs with uncertain coefficients and Bayesian Update te...
 
Subproblem-Tree Calibration: A Unified Approach to Max-Product Message Passin...
Subproblem-Tree Calibration: A Unified Approach to Max-Product Message Passin...Subproblem-Tree Calibration: A Unified Approach to Max-Product Message Passin...
Subproblem-Tree Calibration: A Unified Approach to Max-Product Message Passin...
 
Conference poster 6
Conference poster 6Conference poster 6
Conference poster 6
 
QMC: Operator Splitting Workshop, Perturbed (accelerated) Proximal-Gradient A...
QMC: Operator Splitting Workshop, Perturbed (accelerated) Proximal-Gradient A...QMC: Operator Splitting Workshop, Perturbed (accelerated) Proximal-Gradient A...
QMC: Operator Splitting Workshop, Perturbed (accelerated) Proximal-Gradient A...
 
ABC with data cloning for MLE in state space models
ABC with data cloning for MLE in state space modelsABC with data cloning for MLE in state space models
ABC with data cloning for MLE in state space models
 
Numerical Methods
Numerical MethodsNumerical Methods
Numerical Methods
 
Program on Quasi-Monte Carlo and High-Dimensional Sampling Methods for Applie...
Program on Quasi-Monte Carlo and High-Dimensional Sampling Methods for Applie...Program on Quasi-Monte Carlo and High-Dimensional Sampling Methods for Applie...
Program on Quasi-Monte Carlo and High-Dimensional Sampling Methods for Applie...
 
SIAM - Minisymposium on Guaranteed numerical algorithms
SIAM - Minisymposium on Guaranteed numerical algorithmsSIAM - Minisymposium on Guaranteed numerical algorithms
SIAM - Minisymposium on Guaranteed numerical algorithms
 
Inference for stochastic differential equations via approximate Bayesian comp...
Inference for stochastic differential equations via approximate Bayesian comp...Inference for stochastic differential equations via approximate Bayesian comp...
Inference for stochastic differential equations via approximate Bayesian comp...
 

More from Alexander Litvinenko

Poster_density_driven_with_fracture_MLMC.pdf
Poster_density_driven_with_fracture_MLMC.pdfPoster_density_driven_with_fracture_MLMC.pdf
Poster_density_driven_with_fracture_MLMC.pdf
Alexander Litvinenko
 
litvinenko_Henry_Intrusion_Hong-Kong_2024.pdf
litvinenko_Henry_Intrusion_Hong-Kong_2024.pdflitvinenko_Henry_Intrusion_Hong-Kong_2024.pdf
litvinenko_Henry_Intrusion_Hong-Kong_2024.pdf
Alexander Litvinenko
 
litvinenko_Intrusion_Bari_2023.pdf
litvinenko_Intrusion_Bari_2023.pdflitvinenko_Intrusion_Bari_2023.pdf
litvinenko_Intrusion_Bari_2023.pdf
Alexander Litvinenko
 
Density Driven Groundwater Flow with Uncertain Porosity and Permeability
Density Driven Groundwater Flow with Uncertain Porosity and PermeabilityDensity Driven Groundwater Flow with Uncertain Porosity and Permeability
Density Driven Groundwater Flow with Uncertain Porosity and Permeability
Alexander Litvinenko
 
litvinenko_Gamm2023.pdf
litvinenko_Gamm2023.pdflitvinenko_Gamm2023.pdf
litvinenko_Gamm2023.pdf
Alexander Litvinenko
 
Litvinenko_Poster_Henry_22May.pdf
Litvinenko_Poster_Henry_22May.pdfLitvinenko_Poster_Henry_22May.pdf
Litvinenko_Poster_Henry_22May.pdf
Alexander Litvinenko
 
Uncertain_Henry_problem-poster.pdf
Uncertain_Henry_problem-poster.pdfUncertain_Henry_problem-poster.pdf
Uncertain_Henry_problem-poster.pdf
Alexander Litvinenko
 
Litvinenko_RWTH_UQ_Seminar_talk.pdf
Litvinenko_RWTH_UQ_Seminar_talk.pdfLitvinenko_RWTH_UQ_Seminar_talk.pdf
Litvinenko_RWTH_UQ_Seminar_talk.pdf
Alexander Litvinenko
 
Litv_Denmark_Weak_Supervised_Learning.pdf
Litv_Denmark_Weak_Supervised_Learning.pdfLitv_Denmark_Weak_Supervised_Learning.pdf
Litv_Denmark_Weak_Supervised_Learning.pdf
Alexander Litvinenko
 
Computing f-Divergences and Distances of High-Dimensional Probability Density...
Computing f-Divergences and Distances of High-Dimensional Probability Density...Computing f-Divergences and Distances of High-Dimensional Probability Density...
Computing f-Divergences and Distances of High-Dimensional Probability Density...
Alexander Litvinenko
 
Computing f-Divergences and Distances of\\ High-Dimensional Probability Densi...
Computing f-Divergences and Distances of\\ High-Dimensional Probability Densi...Computing f-Divergences and Distances of\\ High-Dimensional Probability Densi...
Computing f-Divergences and Distances of\\ High-Dimensional Probability Densi...
Alexander Litvinenko
 
Low rank tensor approximation of probability density and characteristic funct...
Low rank tensor approximation of probability density and characteristic funct...Low rank tensor approximation of probability density and characteristic funct...
Low rank tensor approximation of probability density and characteristic funct...
Alexander Litvinenko
 
Identification of unknown parameters and prediction of missing values. Compar...
Identification of unknown parameters and prediction of missing values. Compar...Identification of unknown parameters and prediction of missing values. Compar...
Identification of unknown parameters and prediction of missing values. Compar...
Alexander Litvinenko
 
Computation of electromagnetic fields scattered from dielectric objects of un...
Computation of electromagnetic fields scattered from dielectric objects of un...Computation of electromagnetic fields scattered from dielectric objects of un...
Computation of electromagnetic fields scattered from dielectric objects of un...
Alexander Litvinenko
 
Identification of unknown parameters and prediction with hierarchical matrice...
Identification of unknown parameters and prediction with hierarchical matrice...Identification of unknown parameters and prediction with hierarchical matrice...
Identification of unknown parameters and prediction with hierarchical matrice...
Alexander Litvinenko
 
Low-rank tensor approximation (Introduction)
Low-rank tensor approximation (Introduction)Low-rank tensor approximation (Introduction)
Low-rank tensor approximation (Introduction)
Alexander Litvinenko
 
Computation of electromagnetic fields scattered from dielectric objects of un...
Computation of electromagnetic fields scattered from dielectric objects of un...Computation of electromagnetic fields scattered from dielectric objects of un...
Computation of electromagnetic fields scattered from dielectric objects of un...
Alexander Litvinenko
 
Application of parallel hierarchical matrices for parameter inference and pre...
Application of parallel hierarchical matrices for parameter inference and pre...Application of parallel hierarchical matrices for parameter inference and pre...
Application of parallel hierarchical matrices for parameter inference and pre...
Alexander Litvinenko
 
Computation of electromagnetic fields scattered from dielectric objects of un...
Computation of electromagnetic fields scattered from dielectric objects of un...Computation of electromagnetic fields scattered from dielectric objects of un...
Computation of electromagnetic fields scattered from dielectric objects of un...
Alexander Litvinenko
 
Propagation of Uncertainties in Density Driven Groundwater Flow
Propagation of Uncertainties in Density Driven Groundwater FlowPropagation of Uncertainties in Density Driven Groundwater Flow
Propagation of Uncertainties in Density Driven Groundwater Flow
Alexander Litvinenko
 

More from Alexander Litvinenko (20)

Poster_density_driven_with_fracture_MLMC.pdf
Poster_density_driven_with_fracture_MLMC.pdfPoster_density_driven_with_fracture_MLMC.pdf
Poster_density_driven_with_fracture_MLMC.pdf
 
litvinenko_Henry_Intrusion_Hong-Kong_2024.pdf
Tensor train to solve stochastic PDEs

• 12. Full $\mathcal{J}_{M,p}$ and sparse $\mathcal{J}^{\mathrm{sp}}_{M,p}$ multi-index sets

The $M$-dimensional PCE approximation of $\kappa$ reads (with $\alpha = (\alpha_1,\dots,\alpha_M)$)
$$\kappa(x,\omega) \approx \sum_{\alpha \in \mathcal{J}_M} \kappa_\alpha(x)\, H_\alpha(\theta(\omega)), \qquad H_\alpha(\theta) := h_{\alpha_1}(\theta_1) \cdots h_{\alpha_M}(\theta_M). \quad (3)$$

Definition. The full multi-index set restricts each component independently,
$\mathcal{J}_{M,p} = \{0,1,\dots,p_1\} \otimes \cdots \otimes \{0,1,\dots,p_M\}$, where $p = (p_1,\dots,p_M)$ is a shortcut for the tuple of order limits.

Definition. The sparse multi-index set restricts the sum of the components,
$\mathcal{J}^{\mathrm{sp}}_{M,p} = \{\alpha = (\alpha_1,\dots,\alpha_M) : \alpha \ge 0,\ \alpha_1 + \cdots + \alpha_M \le p\}$.
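To make the two definitions concrete, here is a small Python sketch (illustrative, not from the talk) that enumerates both sets and checks the cardinalities $(p+1)^M$ and $\binom{M+p}{p}$:

```python
import itertools
from math import comb

def full_index_set(M, p):
    # All tuples (a_1,...,a_M) with 0 <= a_m <= p in each component.
    return list(itertools.product(range(p + 1), repeat=M))

def sparse_index_set(M, p):
    # All tuples with a_1 + ... + a_M <= p.
    return [a for a in itertools.product(range(p + 1), repeat=M) if sum(a) <= p]

M, p = 5, 3
print(len(full_index_set(M, p)), (p + 1) ** M)      # 1024 1024
print(len(sparse_index_set(M, p)), comb(M + p, p))  # 56 56
```

The sparse set is tiny for small p but its cardinality still grows quickly with p, which is exactly the trade-off quantified in the timing tables below.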
• 13. TT compression of the PCE coefficients

The Galerkin coefficients $\kappa_\alpha$ are evaluated as follows [Thm. 3.10, PhD thesis of E. Zander, 2013]:
$$\kappa_\alpha(x) = \frac{(\alpha_1 + \cdots + \alpha_M)!}{\alpha_1! \cdots \alpha_M!}\, \phi_{\alpha_1 + \cdots + \alpha_M} \prod_{m=1}^{M} g_m^{\alpha_m}(x), \quad (4)$$
where $\phi_{|\alpha|} := \phi_{\alpha_1 + \cdots + \alpha_M}$ is the Galerkin coefficient of the transform function, and $g_m^{\alpha_m}(x)$ is simply the $\alpha_m$-th power of the KLE function value $g_m(x)$.
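Formula (4) is cheap to evaluate entry by entry, which is exactly what a cross algorithm needs. A minimal sketch with illustrative names (phi holds the Galerkin coefficients of the transform, g_vals the KLE function values at a fixed point x):

```python
import numpy as np
from math import factorial

def kappa_alpha(alpha, phi, g_vals):
    """Evaluate formula (4) at one spatial point.
    alpha  : multi-index (a_1, ..., a_M)
    phi    : phi[k] = Galerkin coefficient of the transform at total order k
    g_vals : g_vals[m] = g_m(x) at the chosen point x
    """
    k = sum(alpha)
    multinom = factorial(k) / np.prod([factorial(a) for a in alpha])
    return multinom * phi[k] * np.prod([g ** a for g, a in zip(g_vals, alpha)])
```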
• 14. Complexity reduction

Complexity reduction in Eq. (4) can be achieved with the help of the KLE of $\kappa(x,\omega)$:
$$\kappa(x,\omega) \approx \bar{\kappa}(x) + \sum_{\ell=1}^{L} \sqrt{\mu_\ell}\, v_\ell(x)\, \eta_\ell(\omega) \quad (5)$$
with the normalized spatial functions $v_\ell(x)$. Instead of using $\kappa_\alpha(x)$ from (4) directly, we compute
$$\tilde{\kappa}_\ell(\alpha) = \frac{(\alpha_1 + \cdots + \alpha_M)!}{\alpha_1! \cdots \alpha_M!}\, \phi_{\alpha_1 + \cdots + \alpha_M} \int_D \prod_{m=1}^{M} g_m^{\alpha_m}(x)\, v_\ell(x)\, dx.$$
Note that $L \ll N$. We then restore the approximate coefficients
$$\kappa_\alpha(x) \approx \bar{\kappa}(x) + \sum_{\ell=1}^{L} v_\ell(x)\, \tilde{\kappa}_\ell(\alpha).$$
• 15. Construction of the stochastic Galerkin operator

Given the KLE of $\kappa$, assemble for $i,j = 1,\dots,N$ and $\ell = 1,\dots,L$:
$$K_0(i,j) = \int_D \bar{\kappa}(x)\, \nabla\varphi_i(x) \cdot \nabla\varphi_j(x)\, dx, \qquad K_\ell(i,j) = \int_D v_\ell(x)\, \nabla\varphi_i(x) \cdot \nabla\varphi_j(x)\, dx, \quad (6)$$
$$K^{(\omega)}_\ell(\alpha,\beta) = \int_{\mathbb{R}^M} H_\alpha(\theta)\, H_\beta(\theta) \sum_{\nu \in \mathcal{J}_M} \tilde{\kappa}_\ell(\nu)\, H_\nu(\theta)\, \rho(\theta)\, d\theta = \sum_{\nu \in \mathcal{J}_M} \Delta_{\alpha,\beta,\nu}\, \tilde{\kappa}_\ell(\nu),$$
$$\Delta_{\alpha,\beta,\nu} = \Delta_{\alpha_1,\beta_1,\nu_1} \cdots \Delta_{\alpha_M,\beta_M,\nu_M}, \qquad \Delta_{\alpha_m,\beta_m,\nu_m} = \int_{\mathbb{R}} h_{\alpha_m}(\theta)\, h_{\beta_m}(\theta)\, h_{\nu_m}(\theta)\, \rho(\theta)\, d\theta.$$
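The one-dimensional factors $\Delta_{\alpha_m,\beta_m,\nu_m}$ can be tabulated once by Gauss-Hermite quadrature. A sketch, assuming an orthonormal probabilists' Hermite basis and the standard Gaussian density (the talk's normalization may differ):

```python
import numpy as np
from math import factorial
from numpy.polynomial.hermite_e import hermegauss, hermeval

def delta_1d(p, nq=40):
    """Tabulate D[a, b, c] = int h_a(t) h_b(t) h_c(t) rho(t) dt for the
    orthonormal probabilists' Hermite polynomials h_k = He_k / sqrt(k!)
    and the standard Gaussian density rho."""
    t, w = hermegauss(nq)           # nodes/weights for weight exp(-t^2/2)
    w = w / np.sqrt(2 * np.pi)      # renormalize to the Gaussian density
    H = np.stack([hermeval(t, np.eye(p + 1)[k]) / np.sqrt(factorial(k))
                  for k in range(p + 1)])
    return np.einsum('aq,bq,cq,q->abc', H, H, H, w)

D = delta_1d(3)
print(np.allclose(D[:, :, 0], np.eye(4)))  # nu = 0 recovers orthonormality
```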
• 16. Stochastic Galerkin operator

Putting the previous formulas together, we obtain the stochastic Galerkin operator
$$\mathbf{K} = K^{(x)}_0 \otimes \Delta_0 + \sum_{\ell=1}^{L} K^{(x)}_\ell \otimes K^{(\omega)}_\ell, \quad (7)$$
with $\mathbf{K} \in \mathbb{R}^{N(p+1)^M \times N(p+1)^M}$ in the case of the full set $\mathcal{J}_{M,p}$.

IDEA: if the PCE coefficients of $\kappa$ are computed in a tensor product format, the direct (Kronecker) product structure of $\Delta$ in (6) makes it possible to exploit the same format in (7) and to assemble the operator cheaply.
• 17. Tensor Train

Two Tensor Train examples follow.
• 18. Example (from B. Khoromskij's lecture)

A sum of univariate functions has TT rank 2:
$$f(x_1,\dots,x_d) = w_1(x_1) + w_2(x_2) + \cdots + w_d(x_d) = \begin{pmatrix} w_1(x_1) & 1 \end{pmatrix} \begin{pmatrix} 1 & 0 \\ w_2(x_2) & 1 \end{pmatrix} \cdots \begin{pmatrix} 1 & 0 \\ w_{d-1}(x_{d-1}) & 1 \end{pmatrix} \begin{pmatrix} 1 \\ w_d(x_d) \end{pmatrix}.$$
• 19. Example: TT rank(f) = 2

$$f = \sin(x_1 + x_2 + \cdots + x_d) = \begin{pmatrix} \sin x_1 & \cos x_1 \end{pmatrix} \begin{pmatrix} \cos x_2 & -\sin x_2 \\ \sin x_2 & \cos x_2 \end{pmatrix} \cdots \begin{pmatrix} \cos x_{d-1} & -\sin x_{d-1} \\ \sin x_{d-1} & \cos x_{d-1} \end{pmatrix} \begin{pmatrix} \cos x_d \\ \sin x_d \end{pmatrix}.$$
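Both rank-2 factorizations are easy to verify numerically; a short sketch for the sine example (the sum example is analogous):

```python
import numpy as np

def tt_sin_of_sum(xs):
    """Evaluate sin(x_1 + ... + x_d) via the rank-2 TT cores above."""
    v = np.array([np.sin(xs[0]), np.cos(xs[0])])           # first core (row)
    for x in xs[1:-1]:
        G = np.array([[np.cos(x), -np.sin(x)],
                      [np.sin(x),  np.cos(x)]])            # middle cores: rotations
        v = v @ G
    return v @ np.array([np.cos(xs[-1]), np.sin(xs[-1])])  # last core (column)

xs = np.random.rand(10)
print(np.isclose(tt_sin_of_sum(xs), np.sin(xs.sum())))     # True
```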
• 20. Low-rank response surface: PCE in the TT format

Computing
$$\tilde{\kappa}_\ell(\alpha) = \frac{(\alpha_1 + \cdots + \alpha_M)!}{\alpha_1! \cdots \alpha_M!}\, \phi_{\alpha_1 + \cdots + \alpha_M} \int_D \prod_{m=1}^{M} g_m^{\alpha_m}(x)\, v_\ell(x)\, dx$$
in the TT format requires:
a procedure to compute each element of the tensor, e.g. $\tilde{\kappa}_{\alpha_1,\dots,\alpha_M}$;
a way to build a TT approximation $\tilde{\kappa}_\alpha \approx \kappa^{(1)}(\alpha_1) \cdots \kappa^{(M)}(\alpha_M)$ from a feasible number of elements (i.e. far fewer than $(p+1)^M$).

Such a procedure exists; it relies on the cross interpolation of matrices, generalized to the higher-dimensional case [Oseledets, Tyrtyshnikov 2010; Savostyanov 13; Grasedyck; Bebendorf]. A two-dimensional sketch of the idea follows.
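The following toy code shows the two-dimensional ancestor of TT cross: a greedy cross (ACA) approximation that touches only individual matrix entries. It illustrates the idea, and is not the algorithm used in the talk:

```python
import numpy as np

def matrix_cross(A_entry, n, m, r, tol=1e-12):
    """Greedy cross (ACA) approximation A ~ C @ R from at most r crosses.
    A_entry(i, j) returns one matrix element, so the full matrix is never
    formed; TT cross interpolation extends the same idea to d dimensions."""
    C, R = np.zeros((n, r)), np.zeros((r, m))
    res = lambda i, j: A_entry(i, j) - C[i, :] @ R[:, j]   # current residual
    i = 0
    for k in range(r):
        row = np.array([res(i, j) for j in range(m)])      # residual row i
        j = int(np.argmax(np.abs(row)))
        col = np.array([res(ii, j) for ii in range(n)])    # residual column j
        if abs(col[i]) < tol:                              # converged early
            return C[:, :k], R[:k, :]
        C[:, k], R[k, :] = col, row / col[i]
        col[i] = 0.0
        i = int(np.argmax(np.abs(col)))                    # next pivot row
    return C, R

# Usage on a rank-2 matrix given purely element-wise:
f = lambda i, j: np.sin(0.1 * i) + np.cos(0.1 * j)
C, R = matrix_cross(f, 50, 50, 5)
A = np.array([[f(i, j) for j in range(50)] for i in range(50)])
print(np.linalg.norm(A - C @ R) / np.linalg.norm(A))       # ~ machine precision
```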
• 21. PCE coefficients in the TT format

The PCE coefficients $\tilde{\kappa}_\ell(\alpha)$ are sought in the form
$$\tilde{\kappa}_\ell(\alpha) = \sum_{s_1,\dots,s_{M-1}} \kappa^{(1)}_{\ell,s_1}(\alpha_1)\, \kappa^{(2)}_{s_1,s_2}(\alpha_2) \cdots \kappa^{(M)}_{s_{M-1}}(\alpha_M). \quad (8)$$
Collect the spatial components into the "zeroth" TT block,
$$\kappa^{(0)}(x) = \left[\kappa^{(0)}_\ell(x)\right]_{\ell=0}^{L} = \begin{pmatrix} \bar{\kappa}(x) & v_1(x) & \cdots & v_L(x) \end{pmatrix}; \quad (9)$$
then the PCE takes the following TT format:
$$\kappa(x,\alpha) = \sum_{\ell,s_1,\dots,s_{M-1}} \kappa^{(0)}_\ell(x)\, \kappa^{(1)}_{\ell,s_1}(\alpha_1) \cdots \kappa^{(M)}_{s_{M-1}}(\alpha_M). \quad (10)$$
• 22. Stochastic Galerkin matrix in the TT format

Given $\kappa_\alpha(x)$ in the form (10), we split the whole sum over $\nu$ in $\mathbf{K}$, (7):
$$\sum_{\nu \in \mathcal{J}_{M,p}} \Delta_{\alpha,\beta,\nu}\, \tilde{\kappa}_\ell(\nu) = \sum_{s_1,\dots,s_{M-1}} K^{(1)}_{\ell,s_1}(\alpha_1,\beta_1)\, K^{(2)}_{s_1,s_2}(\alpha_2,\beta_2) \cdots K^{(M)}_{s_{M-1}}(\alpha_M,\beta_M),$$
$$K^{(m)}(\alpha_m,\beta_m) = \sum_{\nu_m=0}^{p_m} \Delta_{\alpha_m,\beta_m,\nu_m}\, \kappa^{(m)}(\nu_m), \qquad m = 1,\dots,M. \quad (11)$$
The TT representation of the operator then reads
$$\mathbf{K} = \sum_{\ell,s_1,\dots,s_{M-1}} K^{(0)}_\ell \otimes K^{(1)}_{\ell,s_1} \otimes \cdots \otimes K^{(M)}_{s_{M-1}} \in \mathbb{R}^{(N \cdot \#\mathcal{J}_{M,p}) \times (N \cdot \#\mathcal{J}_{M,p})}. \quad (12)$$
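With dense arrays, one operator core in (11) is a single contraction. A hypothetical sketch (kappa_m is a coefficient TT core of shape (r_left, p+1, r_right), D the Δ tensor from the sketch after slide 15):

```python
import numpy as np

def galerkin_core(kappa_m, D):
    # K^{(m)}_{s,t}(alpha, beta) = sum_nu D[alpha, beta, nu] * kappa_m[s, nu, t]
    # Output shape: (r_left, p+1, p+1, r_right), i.e. a matrix-valued TT core.
    return np.einsum('abn,snt->sabt', D, kappa_m)
```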
• 23. Solving and post-processing

Solve the linear system $\mathbf{K}\mathbf{u} = \mathbf{f}$ by alternating optimization methods [Dolgov, Savostyanov 14] with a mean-field preconditioner, and obtain the solution $u$ in the TT format:
$$u(x,\alpha) = \sum_{s_0,\dots,s_{M-1}} u^{(0)}_{s_0}(x)\, u^{(1)}_{s_0,s_1}(\alpha_1) \cdots u^{(M)}_{s_{M-1}}(\alpha_M), \quad (13)$$
$$u(x,\theta) = \sum_{s_0,\dots,s_{M-1}} u^{(0)}_{s_0}(x) \left( \sum_{\alpha_1=0}^{p} h_{\alpha_1}(\theta_1)\, u^{(1)}_{s_0,s_1}(\alpha_1) \right) \cdots \left( \sum_{\alpha_M=0}^{p} h_{\alpha_M}(\theta_M)\, u^{(M)}_{s_{M-1}}(\alpha_M) \right). \quad (14)$$
Then compute the mean, the (co)variance, and exceedance probabilities.
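For an orthonormal basis, the mean is the α = 0 coefficient and the variance follows from Parseval's identity. A sketch for a dense coefficient array at one spatial point (the talk performs these operations directly in the TT format):

```python
import numpy as np

def pce_mean_var(u_coef):
    """Statistics from a dense PCE coefficient array of shape (p+1,)*M,
    assuming an orthonormal polynomial basis."""
    mean = u_coef.flat[0]                   # coefficient at alpha = (0,...,0)
    var = np.sum(u_coef ** 2) - mean ** 2   # Parseval: sum over alpha != 0
    return mean, var
```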
• 24. Numerics: main steps

1. Use sglib (E. Zander, TU Braunschweig) for the discretization and for the solution with the sparse set $\mathcal{J}^{\mathrm{sp}}_{M,p}$.
2. Compute the PCE of the coefficient $\kappa(x,\omega)$ in the TT format by the new block adaptive cross algorithm (TT-Toolbox).
3. Use the TT-Toolbox for the full set $\mathcal{J}_{M,p}$.
4. Use amen_cross.m for the TT approximation of $\tilde{\kappa}_\alpha$.
5. Compute the stochastic Galerkin matrix $\mathbf{K}$ in the TT format.
6. Replace high-dimensional calculations by the TT-Toolbox.
7. Compute the solution of the linear system in TT (alternating minimal energy, tAMEn).
8. Post-process in the TT format.
• 25. Numerical experiments: errors and accuracy

The domain is $D = [-1,1]^2 \setminus [0,1]^2$, with $f = f(x) = 1$, log-normal and beta distributions for $\kappa$, and 557, 2145, or 8417 spatial degrees of freedom. The coefficient error is measured as
$$E_\kappa = \frac{1}{N_{mc}} \sum_{z=1}^{N_{mc}} \frac{\sum_{i=1}^{N} \left(\kappa(x_i,\theta_z) - \kappa_*(x_i,\theta_z)\right)^2}{\sum_{i=1}^{N} \kappa_*^2(x_i,\theta_z)},$$
where $\{\theta_z\}_{z=1}^{N_{mc}}$ are normally distributed random samples and $\kappa_*(x_i,\theta_z) = \phi(\gamma(x_i,\theta_z))$ is the reference coefficient computed without using the PCE for $\phi$. For the solution,
$$E_{\bar{u}} = \frac{\|\bar{u} - \bar{u}_*\|_{L_2(D)}}{\|\bar{u}_*\|_{L_2(D)}}, \qquad E_{\mathrm{var}_u} = \frac{\|\mathrm{var}_u - \mathrm{var}_{u*}\|_{L_2(D)}}{\|\mathrm{var}_{u*}\|_{L_2(D)}}.$$
• 26. More numerics

We compute the maximizer of the mean solution, $x_{\max} : \bar{u}(x_{\max}) \ge \bar{u}(x)\ \forall x \in D$, and set $u_{\max}(\theta) = u(x_{\max},\theta)$, $\hat{u} = \bar{u}(x_{\max})$. Taking some $\tau > 1$, we compute
$$P = P\left(u_{\max}(\theta) > \tau\hat{u}\right) = \int_{\mathbb{R}^M} \chi_{u_{\max}(\theta) > \tau\hat{u}}(\theta)\, \rho(\theta)\, d\theta. \quad (16)$$
By $P_*$ we denote the probability computed by the Monte Carlo method, and we estimate the error as $E_P = |P - P_*| / P_*$.
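A plain Monte Carlo estimator of (16) for any surrogate of $u_{\max}(\theta)$, of the kind used for the reference values $P_*$; the names are illustrative:

```python
import numpy as np

def exceedance_mc(surrogate, M, tau_u_hat, n_samples=10**5, seed=0):
    """Estimate P(u_max(theta) > tau * u_hat) by sampling standard
    Gaussian theta and evaluating a cheap surrogate of u_max."""
    rng = np.random.default_rng(seed)
    theta = rng.standard_normal((n_samples, M))
    return np.mean(surrogate(theta) > tau_u_hat)
```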
• 27. Sparse $\mathcal{J}^{\mathrm{sp}}_{M,p}$ or full $\mathcal{J}_{M,p}$?

Which multi-index set performs better: the sparse $\mathcal{J}^{\mathrm{sp}}_{M,p}$ or the full $\mathcal{J}_{M,p}$?
• 28. CPU times (sec.) versus p, log-normal distribution

        TT (full set J_{M,p})      Sparse (set J^sp_{M,p})
 p      Tκ      Top     Tu         Tκ      Top     Tu
 1      9.6     0.2     1.7        0.5     0.3     0.65
 2      14.7    0.2     3          0.5     3.2     1.4
 3      19.1    0.2     3.4        0.7     1028    18
 4      24.4    0.2     4.2        2.2     —       —
 5      30.9    0.32    5.3        9.8     —       —
• 29. How does the maximal polynomial order p influence the TT ranks?
• 30. Performance versus p, log-normal distribution

        CPU time, sec.         rκ    ru    r_χ̂     Eκ                 Eu                P
 p      TT     Sparse   χ̂                          TT      Sparse     TT     Sparse     TT
 1      11     1.4      0.2    32    42    1       4e-3    1.7e-1     1e-2   1e-1       0
 2      18     5.1      0.3    32    49    1       1e-4    1.1e-1     5e-4   5e-2       0
 3      23     1046     83     32    49    462     6e-5    2e-3       3e-4   5e-4       2.8e-4
 4      29     —        70     32    50    416     6e-5    —          1e-4   —          1.2e-4
 5      37     —        103    32    49    410     6e-5    —          1e-4   —          6.2e-4

Here we take τ = 1.2:
$$P = P\left(u_{\max}(\theta) > \tau\hat{u}\right) = \int_{\mathbb{R}^M} \chi_{u_{\max}(\theta) > \tau\hat{u}}(\theta)\, \rho(\theta)\, d\theta. \quad (17)$$
• 31. How does the stochastic dimension M influence the TT ranks?
• 32. Performance versus M, log-normal distribution

        CPU time, sec.         rκ    ru    r_χ̂     Eκ                Eu                 P
 M      TT     Sparse   χ̂                          TT     Sparse     TT     Sparse      TT
 10     6      6        1.3    20    39    70      2e-4   1.7e-1     3e-4   1.5e-1      2.86e-4
 15     12     92       23     27    42    381     8e-5   2e-3       3e-4   5e-4        3e-4
 20     22     1e+3     67     32    50    422     6e-5   2e-3       3e-4   5e-4        2.96e-4
 30     53     5e+4     137    39    50    452     6e-5   1e-1       3e-4   5.5e-2      2.78e-4
• 33. How does the covariance length influence the ranks?
• 34. Performance versus covariance length, log-normal distribution

 cov.     CPU time, sec.         rκ    ru    r_χ̂     Eκ                  Eu                   P
 length   TT     Sparse   χ̂                          TT       Sparse     TT       Sparse      TT
 0.1      216    55800    0.9    70    50    1       2e-2     2e-2       1.8e-2   1.8e-2      0
 0.3      317    52360    42     87    74    297     3e-3     3.5e-3     2.6e-3   2.6e-3      8e-31
 0.5      195    51700    58     67    74    375     1.5e-4   2e-3       2.6e-4   3.1e-4      6e-33
 1.0      57.3   55200    97     39    50    417     6.1e-5   9e-2       3.2e-4   5.6e-2      2.95e-4
 1.5      32.4   49800    121    31    34    424     3.2e-5   2e-1       5e-4     1.7e-1      7.5e-4
• 35. How does the standard deviation σ influence the TT ranks?
• 36. Performance versus σ, log-normal distribution

        CPU time, sec.         rκ    ru    r_χ̂     Eκ                Eu                P
 σ      TT     Sparse   χ̂                          TT     Sparse     TT     Sparse     TT
 0.2    16     1e+3     0.3    21    31    1       6e-5   5e-5       4e-5   1e-5       0
 0.4    19     968      0.3    29    42    1       7e-5   8e-4       1e-4   2e-4       0
 0.5    21     970      80     32    49    456     6e-5   2e-3       3e-4   5e-4       3e-4
 0.6    24     962      25     34    57    272     9e-5   4e-3       6e-4   1e-3       2e-3
 0.8    32     969      68     39    66    411     4e-4   8e-2       2e-3   3e-2       8e-2
 1.0    51     1070     48     44    82    363     2e-3   4e-1       5e-3   3e-1       9e-2
• 37. How does the number of DoFs influence the ranks?
• 38. Performance versus #DoFs, log-normal distribution

 #DoFs   CPU time, sec.         rκ    ru    r_χ̂     Eκ                  Eu                 P
         TT     Sparse   χ̂                          TT       Sparse     TT     Sparse      TT
 557     6      6        1.3    20    39    71      2e-4     1.7e-1     3e-4   1.5e-1      2.86e-4
 2145    9      14       1.2    20    39    76      2e-4     2e-3       3e-4   5.7e-4      2.9e-4
 8417    357    171      0.8    20    40    69      1.7e-4   2e-3       3e-4   5.6e-4      2.93e-4
• 39. Comparison with Monte Carlo

We compare the solution obtained via the stochastic Galerkin method in the TT format with the solution obtained via Monte Carlo (4000 samples). For the Monte Carlo test, we prepare the TT solution with parameters p = 5 and M = 30.
• 40. Verification of the MC method, log-normal distribution

 Nmc     TMC, sec.    E_ū    E_varu   P*     EP
 10^2    0.6          9e-3   2e-1     0      ∞
 10^3    6.2          2e-3   6e-2     0      ∞
 10^4    6.2·10^1     6e-4   7e-3     4e-4   5e-1
 10^5    6.2·10^2     3e-4   3e-3     4e-4   5e-1
 10^6    6.3·10^3     1e-4   1e-3     5e-4   4e-1

TT results: Tsolve = 97 sec., T_χ̂ = 157 sec., rκ = 39, ru = 50, r_χ̂ = 432, P = 6e-4.
• 41. Part II: diffusion coefficient with a beta distribution

$$\kappa(x,\omega) = B^{-1}_{5,2}\left( \frac{1 + \operatorname{erf}\left(\gamma(x,\omega)/\sqrt{2}\right)}{2} \right) + 1, \qquad B_{a,b}(z) = \frac{1}{B(a,b)} \int_0^z t^{a-1} (1-t)^{b-1}\, dt.$$
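Since $B^{-1}_{5,2}$ is the inverse regularized incomplete beta function and $(1 + \operatorname{erf}(\gamma/\sqrt{2}))/2 = \Phi(\gamma)$ is the standard normal CDF, this transform maps a Gaussian field value pointwise into $[1, 2]$. A sketch using SciPy:

```python
import numpy as np
from scipy.special import erf
from scipy.stats import beta

def kappa_beta(gamma):
    """Pointwise transform of a standard Gaussian value gamma into a
    Beta(5,2)-distributed coefficient, shifted by 1 as on the slide."""
    u = 0.5 * (1.0 + erf(gamma / np.sqrt(2.0)))  # = Phi(gamma), uniform in [0,1]
    return beta.ppf(u, 5, 2) + 1.0               # inverse regularized incomplete beta

g = np.random.randn(10**5)
k = kappa_beta(g)
print(k.min() >= 1.0, k.max() <= 2.0)            # True True: kappa lies in [1, 2]
```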
• 42. What we studied for the beta distribution

1. Performance versus p
2. Performance versus the stochastic dimension M
3. Performance versus the covariance length
4. Performance versus #DoFs
5. Verification of the Monte Carlo method
• 43. Take-home messages

1. TT methods become preferable for high p; otherwise the full computation on a small sparse set may be remarkably fast. This reflects the "curse of order" afflicting the sparse set, in place of the "curse of dimensionality" in the full set: the cardinality of the sparse set grows exponentially with p.
2. The TT approach scales linearly with p.
3. TT methods allow an easy assembly of the stochastic Galerkin operator. For p < 10, TT storage of the stochastic Galerkin operator lets us forget about sparsity issues, since the number of TT entries, $O(M p^2 r^2)$, is tractable.
4. Chebyshev, Laguerre, and other polynomial families may be incorporated into the scheme freely.
• 44. Future plans for the next article

1. Compute Sobol indices in the TT format: which uncertain coefficients and which PCE terms are important?
2. The solution of this linear elliptic SPDE is a "workhorse" for the nonlinear equation and the Newton method.
3. The stochastic Galerkin operator in the TT format described above can be used as a preconditioner (it is very fast!) for more complicated nonlinear problems.
4. Apply the approach to more complicated diffusion coefficients (e.g., coefficients that are not so easily separable).
5. Construct an analytic solution u, compute the corresponding right-hand side analytically, and solve the problem again (to avoid using Monte Carlo as the reference).
• 45. Approximate Bayesian update

PART 2. Inverse problems via the approximate Bayesian update.
• 46. Setting for the identification process

General idea: we observe / measure a system whose structure we know in principle. The system behaviour depends on some quantities (parameters) which we do not know ⇒ uncertainty. We model (the uncertainty in) our knowledge in a Bayesian setting: as a probability distribution on the parameters. We start with what we know a priori, then perform a measurement. This gives new information with which to update our knowledge (identification). The update in the probabilistic setting works with conditional probabilities ⇒ Bayes's theorem. Repeated measurements lead to better identification.
• 47. Mathematical setup

Consider A(u; q) = f ⇒ u = S(f; q), where S is the solution operator. The operator depends on the parameters q ∈ Q, hence the state u ∈ U is also a function of q. The measurement operator Y takes values in Y: y = Y(q; u) = Y(q, S(f; q)).

Examples of measurements:
(ODE) u(t) = (x(t), y(t), z(t))ᵀ, measured: y(t) = (x(t), y(t))ᵀ;
(PDE) $y(\omega) = \int_{D_0} u(\omega,x)\, dx$, or $y(\omega) = \int_{D_0} |\nabla u(\omega,x)|^2\, dx$, or the values of u at a few points.
• 48. Inverse problem

For a given f, the measurement y is just a function of q. This function is usually not invertible ⇒ the problem is ill-posed: the measurement y does not contain enough information. In the Bayesian framework the state of knowledge is modelled in a probabilistic way; the parameters q are uncertain and assumed random. The Bayesian setting allows updating / sharpening of the information about q when a measurement is performed. The problem of updating the distribution, i.e. the state of knowledge of q, becomes well-posed. The procedure can be applied successively: each new measurement y and forcing f (which may also be uncertain) provides new information.
• 49. Conditional probability and expectation

With the state u ∈ U ⊗ S a random variable, the quantity to be measured, y(ω) = Y(q(ω), u(ω)) ∈ Y ⊗ S, is also uncertain, a random variable. A new measurement z is performed, composed of the "true" value y ∈ Y and a random error ε: z(ω) = y + ε(ω). Classically, Bayes's theorem gives the conditional probability
$$P(I_q \mid M_z) = \frac{P(M_z \mid I_q)}{P(M_z)}\, P(I_q);$$
the expectation with this posterior measure is the conditional expectation. Kolmogorov starts from the conditional expectation $E(\cdot \mid M_z)$ and obtains the conditional probability from it via $P(I_q \mid M_z) = E\left(\chi_{I_q} \mid M_z\right)$.
• 50. IDEA of the Bayesian update (BU)

Let $Y(x,\theta)$, $\theta = (\theta_1,\dots,\theta_M,\dots)$, be approximated:
$$Y(x,\theta) = \sum_{\beta \in \mathcal{J}_{M,p}} H_\beta(\theta)\, Y_\beta(x), \qquad q(x,\theta) = \sum_{\beta \in \mathcal{J}_{M,p}} H_\beta(\theta)\, q_\beta(x),$$
$$Y_\beta(x) = \frac{1}{\beta!} \int_\Theta H_\beta(\theta)\, Y(x,\theta)\, \mathbb{P}(d\theta).$$
Take $q_f(\omega) = q_0(\omega)$. Then:
Linear BU: $q_a = q_f + K \cdot (z - y)$.
Nonlinear BU: $q_a = q_f + H_1 \cdot (z - y) + (z - y)^T \cdot H_2 \cdot (z - y)$.
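A common sample-based realization of the linear update $q_a = q_f + K(z - y)$ estimates a Kalman-type gain K from prior samples. This is an EnKF-style sketch with illustrative names, not necessarily the construction used in the talk:

```python
import numpy as np

def linear_bayesian_update(q_s, y_s, z, R, rng=np.random.default_rng(0)):
    """q_s: prior parameter samples (n x n_q); y_s: predicted measurements
    (n x n_y); z: observed data (n_y,); R: measurement error covariance."""
    nq = q_s.shape[1]
    Cqy = np.cov(q_s.T, y_s.T)[:nq, nq:]            # cross-covariance C_qy
    Cyy = np.atleast_2d(np.cov(y_s.T))              # measurement covariance
    K = Cqy @ np.linalg.inv(Cyy + R)                # Kalman-type gain
    eps = rng.multivariate_normal(np.zeros(len(R)), R, len(q_s))
    return q_s + (z + eps - y_s) @ K.T              # updated (analysis) samples
```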
• 51. Open questions
• 52. Open question: multivariate Cauchy distribution

The characteristic function $\varphi_X(t)$ of the multivariate Cauchy distribution is defined as follows:
$$\varphi_X(t) = \exp\left( i\, (t_1,t_2) \cdot (\mu_1,\mu_2)^T - \left( (t_1,t_2) \begin{pmatrix} \sigma_1^2 & 0 \\ 0 & \sigma_2^2 \end{pmatrix} (t_1,t_2)^T \right)^{1/2} \right), \quad (18)$$
and we look for a low-rank separation
$$\varphi_X(t) \approx \sum_{\nu=1}^{R} \varphi_{X_{\nu,1}}(t_1) \cdot \varphi_{X_{\nu,2}}(t_2). \quad (19)$$
Again, by the inversion theorem, the probability density of X on $\mathbb{R}^2$ can be computed from $\varphi_X(t)$ as follows.
• 53. (continued)

$$p_X(y) = \frac{1}{(2\pi)^2} \int_{\mathbb{R}^2} \exp(-i \langle y, t\rangle)\, \varphi_X(t)\, dt \quad (20)$$
$$\approx \frac{1}{(2\pi)^2} \int_{\mathbb{R}^2} \exp(-i (y_1 t_1 + y_2 t_2)) \sum_{\nu=1}^{R} \varphi_{X_{\nu,1}}(t_1)\, \varphi_{X_{\nu,2}}(t_2)\, dt_1\, dt_2 \quad (21)$$
$$= \sum_{\nu=1}^{R} \frac{1}{2\pi} \int_{\mathbb{R}} \exp(-i y_1 t_1)\, \varphi_{X_{\nu,1}}(t_1)\, dt_1 \cdot \frac{1}{2\pi} \int_{\mathbb{R}} \exp(-i y_2 t_2)\, \varphi_{X_{\nu,2}}(t_2)\, dt_2 \quad (22)$$
$$= \sum_{\nu=1}^{R} p_{X_{\nu,1}}(y_1) \cdot p_{X_{\nu,2}}(y_2), \quad (23)$$
i.e. the probability density $p_X(y)$ is numerically separable.
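The one-dimensional inversion in (22) is easy to check numerically for a Cauchy factor, whose characteristic function and density are both known in closed form; a small sketch:

```python
import numpy as np

# Invert phi(t) = exp(i*mu*t - sigma*|t|), the characteristic function of a
# Cauchy(mu, sigma) variable, and compare with the closed-form density
# sigma / (pi * ((y - mu)^2 + sigma^2)).
mu, sigma = 0.5, 1.0
t = np.linspace(-60.0, 60.0, 200001)
dt = t[1] - t[0]
phi = np.exp(1j * mu * t - sigma * np.abs(t))

def density(y):
    # Riemann sum for (1 / 2*pi) * int exp(-i*y*t) * phi(t) dt
    return (np.exp(-1j * y * t) * phi).sum().real * dt / (2.0 * np.pi)

y = 1.3
exact = sigma / (np.pi * ((y - mu) ** 2 + sigma ** 2))
print(abs(density(y) - exact) < 1e-5)  # True
```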
• 54. Elliptically contoured multivariate stable distribution

$$\varphi_X(t) = \exp\left( i\, (t_1,t_2) \cdot (\mu_1,\mu_2)^T - \left( (t_1,t_2) \begin{pmatrix} \sigma_1^2 & 0 \\ 0 & \sigma_2^2 \end{pmatrix} (t_1,t_2)^T \right)^{\alpha/2} \right). \quad (24)$$
Now the question is to find a separation
$$\left( (t_1,t_2) \begin{pmatrix} \sigma_1^2 & 0 \\ 0 & \sigma_2^2 \end{pmatrix} (t_1,t_2)^T \right)^{\alpha/2} \approx \sum_{\nu=1}^{R} \phi_{\nu,1}(t_1) \cdot \phi_{\nu,2}(t_2) \quad (25)$$
with some tensor rank R.
• 55. Multivariate distribution

Assume that the characteristic function $\varphi_X(t)$ of some multivariate d-dimensional distribution is approximated as
$$\varphi_X(t) \approx \sum_{\ell=1}^{R} \prod_{\mu=1}^{d} \varphi_{X_{\ell,\mu}}(t_\mu). \quad (26)$$
Then
$$p_X(y) = \mathrm{const} \int_{\mathbb{R}^d} \exp(-i \langle y, t\rangle)\, \varphi_X(t)\, dt \quad (27)$$
$$\approx \mathrm{const} \int_{\mathbb{R}^d} \exp\Big(-i \sum_{j=1}^{d} y_j t_j\Big) \sum_{\ell=1}^{R} \prod_{\mu=1}^{d} \varphi_{X_{\ell,\mu}}(t_\mu)\, dt_1 \cdots dt_d \quad (28)$$
$$= \sum_{\ell=1}^{R} \prod_{\mu=1}^{d} \mathrm{const} \int_{\mathbb{R}} \exp(-i y_\mu t_\mu)\, \varphi_{X_{\ell,\mu}}(t_\mu)\, dt_\mu \quad (29)$$
$$= \sum_{\ell=1}^{R} \prod_{\mu=1}^{d} p_{X_{\ell,\mu}}(y_\mu). \quad (30)$$
• 56. Actual computation of $\varphi_X(t)$

$$\varphi_X(\tau_\beta) = \mathbb{E}\left( \exp(i \langle X(\theta_1,\dots,\theta_M), \tau_\beta \rangle) \right) = \int \cdots \int_\Theta \exp(i \langle X(\theta), \tau_\beta \rangle) \prod_{m=1}^{M} p_{\theta_m}(\theta_m)\, d\theta_1 \cdots d\theta_M,$$
where the scalar product expands over the PCE of X:
$$\langle X(\omega), \tau_\beta \rangle = \Big\langle \sum_{\alpha \in \mathcal{J}} \xi_\alpha H_\alpha(\theta), \tau_\beta \Big\rangle = \sum_{\ell=1}^{d} \sum_{\alpha \in \mathcal{J}} \xi_{\alpha,\ell}\, t_{\beta,\ell}\, H_\alpha(\theta) = \sum_{\alpha \in \mathcal{J}} \langle \xi_\alpha, \tau_\beta \rangle\, H_\alpha(\theta). \quad (31)$$
Now compute the exp() function of this scalar product.
• 57. (continued)

$$\exp(i \langle X(\omega), \tau_\beta \rangle) = \exp\Big(i \sum_{\alpha \in \mathcal{J}} \langle \xi_\alpha, \tau_\beta \rangle\, H_\alpha(\theta)\Big) \quad (32)$$
$$= \prod_{\alpha \in \mathcal{J}} \exp\left(i \langle \xi_\alpha, \tau_\beta \rangle\, H_\alpha(\theta)\right). \quad (33)$$
Now we apply the integration:
$$\varphi_X(t) = \mathbb{E}\left( \exp(i \langle X(\omega), \tau_\beta \rangle) \right) = \int \cdots \int_\Theta \prod_{\alpha \in \mathcal{J}} \exp\left(i \langle \xi_\alpha, \tau_\beta \rangle\, H_\alpha(\theta)\right) \prod_{m=1}^{M} p_{\theta_m}(\theta_m)\, d\theta_1 \cdots d\theta_M$$
$$\stackrel{?}{\approx} \sum_{\ell=1}^{n_q} w_\ell \prod_{\alpha \in \mathcal{J}} \exp\left(i \langle \xi_\alpha, \tau_\beta \rangle\, H_\alpha(\theta_\ell)\right) \prod_{m=1}^{M} p_{\theta_m}(\theta_{m,\ell}).$$
(The question mark over the last approximation is the open question: which quadrature rule makes this high-dimensional integration feasible?)
• 58. Literature

1. S. Dolgov, B. N. Khoromskij, A. Litvinenko, H. G. Matthies, Polynomial Chaos Expansion of random coefficients and the solution of stochastic partial differential equations in the Tensor Train format, arXiv:1503.03210, 2015.
2. M. Espig, W. Hackbusch, A. Litvinenko, H. G. Matthies, E. Zander, Efficient analysis of high dimensional data in tensor formats, Sparse Grids and Applications, pp. 31-56, 2013.
3. B. N. Khoromskij, A. Litvinenko, H. G. Matthies, Application of hierarchical matrices for computing the Karhunen-Loeve expansion, Computing 84(1-2), pp. 49-67, 2009.
4. M. Espig, W. Hackbusch, A. Litvinenko, H. G. Matthies, P. Waehnert, Efficient low-rank approximation of the stochastic Galerkin matrix in tensor formats, Computers and Mathematics with Applications 67(4), pp. 818-829, 2014.