Low-rank tensor methods for stochastic forward and inverse problems
1. Low-rank tensor methods for PDEs with uncertain coefficients and Bayesian update surrogate
Alexander Litvinenko
Center for Uncertainty
Quantification
http://sri-uq.kaust.edu.sa/
Extreme Computing Research Center, KAUST
3. KAUST
I have gained rich collaboration experience as a co-organizer of:
3 UQ workshops,
2 Scalable Hierarchical Algorithms for eXtreme Computing
(SHAXC) workshops
1 HPC Conference (www.hpcsaudi.org, 2017)
5. Motivation to do Uncertainty Quantification (UQ)
Motivation: there is an urgent need to quantify and reduce the
uncertainty in output quantities of computer simulations within
complex (multiscale-multiphysics) applications.
Typical challenges: classical sampling methods are often very
inefficient, whereas straightforward functional representations
are subject to the well-known Curse of Dimensionality.
My goal is the systematic, mathematically founded development of UQ methods and low-rank algorithms relevant for applications.
6. UQ and its relevance
Nowadays computational predictions are used in critical engineering decisions, and thanks to modern computers we are able to simulate very complex phenomena. But how reliable are these predictions? Can they be trusted?
Example: Saudi Aramco currently has a simulator,
GigaPOWERS, which runs with 9 billion cells. How sensitive
are the simulation results with respect to the unknown reservoir
properties?
7. Part I: Stochastic forward problem
Stochastic Galerkin method to solve an elliptic PDE with uncertain coefficients
8. PDE with uncertain coefficient and RHS
Consider
−div(κ(x, ω) ∇u(x, ω)) = f(x, ω) in G × Ω, G ⊂ R²,
u = 0 on ∂G,   (1)
where κ(x, ω) is an uncertain diffusion coefficient. Since κ is positive, one usually takes κ(x, ω) = e^{γ(x,ω)}.
For well-posedness see [Sarkis 09, Gittelson 10, H.J.Starkloff
11, Ullmann 10].
In the following we assume that the covariance function cov_κ(x, y) is given.
9. My previous work
After applying the stochastic Galerkin method, we obtain
Ku = f, where all ingredients are represented in a tensor format.
Compute max{u}, var(u), level sets of u, sign(u).
[1] Efficient Analysis of High Dimensional Data in Tensor Formats,
Espig, Hackbusch, A.L., Matthies and Zander, 2012.
Investigate which ingredients influence the tensor rank of K
[2] Efficient low-rank approximation of the stochastic Galerkin matrix in tensor formats,
Wähnert, Espig, Hackbusch, A.L., Matthies, 2013.
Approximate κ(x, ω), stochastic Galerkin operator K in Tensor
Train (TT) format, solve for u, postprocessing
[3] Polynomial Chaos Expansion of random coefficients and the solution of stochastic
partial differential equations in the Tensor Train format, Dolgov, Litvinenko, Khoromskij, Matthies, 2016.
10. Typical quantities of interest
Keeping all input and intermediate data in a tensor
representation one wants to perform different tasks:
evaluation for specific parameters (ω1, . . . , ωM),
finding maxima and minima,
finding ‘level sets’ (needed for histogram and probability
density).
Example of a level set: all entries of a high-dimensional tensor that lie in the interval [0.7, 0.8].
11. Canonical and Tucker tensor formats
Definition and examples of tensors
12. Canonical and Tucker tensor formats
[Pictures are taken from B. Khoromskij and A. Auer lecture course]
Storage: O(n^d) → O(dRn) (canonical) and O(R^d + dRn) (Tucker).
13. Definition of a tensor of order d
A tensor of order d is a multidimensional array over a d-tuple index set I = I_1 × · · · × I_d,
A = [ a_{i_1...i_d} : i ∈ I ] ∈ R^I,   I_ν = {1, . . . , n_ν}, ν = 1, . . . , d.
A is an element of the linear space
V_n = ⊗_{ν=1}^d V_ν,   V_ν = R^{I_ν},
equipped with the Euclidean scalar product ⟨·, ·⟩ : V_n × V_n → R, defined as
⟨A, B⟩ := Σ_{(i_1...i_d)∈I} a_{i_1...i_d} b_{i_1...i_d}, for A, B ∈ V_n.
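As a small runnable illustration of this definition (not part of the original slides), the NumPy sketch below assembles full order-3 tensors from canonical (CP) factor vectors and evaluates the Euclidean scalar product ⟨A, B⟩ entrywise; the mode sizes, ranks and factor values are arbitrary choices.

```python
import numpy as np

# Hypothetical mode sizes n_1, n_2, n_3 of an order-3 tensor (arbitrary choice).
n = (4, 5, 6)
rng = np.random.default_rng(0)

def cp_to_full(factors):
    """Assemble the full tensor sum_j u_{j,1} x u_{j,2} x u_{j,3} from CP factors.
    `factors` is a list of d matrices, each of shape (r, n_mu)."""
    r = factors[0].shape[0]
    full = np.zeros([f.shape[1] for f in factors])
    for j in range(r):
        term = factors[0][j]
        for f in factors[1:]:
            term = np.multiply.outer(term, f[j])   # successive outer products
        full += term
    return full

# Two random CP tensors of ranks 3 and 2.
A = cp_to_full([rng.standard_normal((3, ni)) for ni in n])
B = cp_to_full([rng.standard_normal((2, ni)) for ni in n])

# Euclidean scalar product <A, B>: sum over all multi-indices.
print(np.sum(A * B))
```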
14. Examples of rank-1 and rank-2 tensors
Rank-1: f(x_1, . . . , x_d) = exp(f_1(x_1) + · · · + f_d(x_d)) = ∏_{j=1}^d exp(f_j(x_j)).
Rank-2: f(x_1, . . . , x_d) = sin( Σ_{j=1}^d x_j ), since
2i · sin( Σ_{j=1}^d x_j ) = e^{i Σ_{j=1}^d x_j} − e^{−i Σ_{j=1}^d x_j}.
The rank-d function f(x_1, . . . , x_d) = x_1 + x_2 + · · · + x_d can be approximated by a rank-2 one with any prescribed accuracy:
f ≈ (1/ε) ∏_{j=1}^d (1 + ε x_j) − (1/ε) ∏_{j=1}^d 1 + O(ε), as ε → 0.
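The rank-2 approximation of the rank-d function x_1 + · · · + x_d is easy to verify numerically; the following sketch (the evaluation point and the ε values are arbitrary) compares (∏_j (1 + εx_j) − 1)/ε with the exact sum and shows the O(ε) error.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 10
x = rng.uniform(-1.0, 1.0, size=d)   # arbitrary evaluation point x_1, ..., x_d

exact = np.sum(x)                    # rank-d function f(x) = x_1 + ... + x_d
for eps in [1e-1, 1e-2, 1e-3, 1e-4]:
    # rank-2 surrogate: ( prod_j (1 + eps*x_j) - 1 ) / eps
    approx = (np.prod(1.0 + eps * x) - 1.0) / eps
    print(f"eps={eps:.0e}  error={abs(approx - exact):.2e}")
```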
15. Tensors and matrices
Rank-1 tensor:
A = u_1 ⊗ u_2 ⊗ · · · ⊗ u_d =: ⊗_{μ=1}^d u_μ,   A_{i_1,...,i_d} = (u_1)_{i_1} · . . . · (u_d)_{i_d}.
A rank-1 tensor A = u ⊗ v corresponds to the matrix A = u v^T (A^T = v u^T), u ∈ R^n, v ∈ R^m.
A rank-k tensor A = Σ_{i=1}^k u_i ⊗ v_i corresponds to the matrix A = Σ_{i=1}^k u_i v_i^T.
The Kronecker product of an n × n and an m × m matrix is a new block matrix A ⊗ B ∈ R^{nm×nm}, whose (i, j)-th block is [A_{ij} B].
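A quick NumPy check of these correspondences (sizes and entries are arbitrary): a rank-k order-2 tensor Σ_i u_i ⊗ v_i is the rank-k matrix Σ_i u_i v_iᵀ, and the Kronecker product has the stated block structure.

```python
import numpy as np

rng = np.random.default_rng(2)
n, m, k = 4, 3, 2
U = rng.standard_normal((k, n))
V = rng.standard_normal((k, m))

# Rank-k matrix A = sum_i u_i v_i^T  ==  order-2 tensor  sum_i u_i (x) v_i
A = sum(np.outer(U[i], V[i]) for i in range(k))
print(np.linalg.matrix_rank(A))                   # -> 2 (generically)

# Kronecker product of an n x n and an m x m matrix: (i, j)-th block is A_ij * B
Amat = rng.standard_normal((n, n))
Bmat = rng.standard_normal((m, m))
K = np.kron(Amat, Bmat)                           # shape (n*m, n*m)
print(np.allclose(K[:m, :m], Amat[0, 0] * Bmat))  # top-left block check
```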
16. Discretization of the elliptic PDE
Now let us discretize our diffusion equation with uncertain coefficients.
17. Karhunen-Loève and Polynomial Chaos Expansions
Apply both:
Karhunen-Loève Expansion (KLE):
κ(x, ω) = κ_0(x) + Σ_{j=1}^∞ κ_j g_j(x) ξ_j(θ(ω)), where θ = θ(ω) = (θ_1(ω), θ_2(ω), . . .),
ξ_j(θ) = (1/κ_j) ∫_G (κ(x, ω) − κ_0(x)) g_j(x) dx.
Polynomial Chaos Expansion (PCE):
κ(x, ω) = Σ_α κ^(α)(x) H_α(θ); compute ξ_j(θ) = Σ_{α∈J} ξ_j^(α) H_α(θ), where
ξ_j^(α) = (1/κ_j) ∫_G κ^(α)(x) g_j(x) dx.
Further compute ξ_j^(α) ≈ Σ_{ℓ=1}^s (ξ_ℓ)_j ∏_{k=1}^∞ (ξ_{ℓ,k})_{α_k}.
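As a sketch of how the KLE ingredients κ_j and g_j can be obtained in practice, the snippet below discretizes a covariance on a 1D grid and solves the discrete eigenvalue problem; the grid size, the Gaussian covariance with its correlation length, and the Gaussian (rather than beta) model for the field are illustrative assumptions, not the settings used later in the talk.

```python
import numpy as np

# 1D grid on G = [0, 1] and a Gaussian covariance (illustrative parameters).
N, ell, sigma2 = 200, 0.2, 1.0
x = np.linspace(0.0, 1.0, N)
C = sigma2 * np.exp(-((x[:, None] - x[None, :]) / ell) ** 2)

# Discrete KLE: eigenpairs of the covariance matrix, weighted by the cell size h.
h = x[1] - x[0]
lam, g = np.linalg.eigh(C * h)                 # ascending eigenvalues
lam, g = lam[::-1], g[:, ::-1] / np.sqrt(h)    # descending order, L2-normalized modes

M = 10                                         # number of retained KLE terms
kappa_j = np.sqrt(np.maximum(lam[:M], 0))      # the factors kappa_j of the expansion

# One sample of the (here Gaussian) field kappa ~ kappa_0 + sum_j kappa_j g_j xi_j
rng = np.random.default_rng(3)
xi = rng.standard_normal(M)
kappa0 = np.zeros(N)
sample = kappa0 + g[:, :M] @ (kappa_j * xi)
print(sample.shape, lam[:5])
```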
18. Final discretized stochastic PDE
Ku = f, where
K := Σ_{ℓ=1}^s K_ℓ ⊗ ⊗_{μ=1}^M Δ_{ℓμ},   K_ℓ ∈ R^{N×N}, Δ_{ℓμ} ∈ R^{R_μ×R_μ},
u := Σ_{j=1}^r u_j ⊗ ⊗_{μ=1}^M u_{jμ},   u_j ∈ R^N, u_{jμ} ∈ R^{R_μ},
f := Σ_{k=1}^R f_k ⊗ ⊗_{μ=1}^M g_{kμ},   f_k ∈ R^N, g_{kμ} ∈ R^{R_μ}.
(Wähnert, Espig, Hackbusch, Litvinenko, Matthies, 2011)
Examples of stochastic Galerkin matrices: [figures omitted]
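The Kronecker structure of K allows matrix-vector products without ever assembling the full Galerkin matrix. Below is a minimal sketch with random placeholders for K_ℓ and Δ_{ℓμ} and small, illustrative sizes; the matrix-free product is checked against the explicitly assembled operator.

```python
import numpy as np
from functools import reduce

rng = np.random.default_rng(4)
N, M, R, s = 30, 3, 3, 2            # spatial dofs, stochastic dims, ranks (illustrative)

K_l   = [rng.standard_normal((N, N)) for _ in range(s)]
Delta = [[rng.standard_normal((R, R)) for _ in range(M)] for _ in range(s)]

def kron_matvec(mats, v):
    """Apply (A_0 (x) A_1 (x) ... (x) A_{d-1}) to v without forming the Kronecker product."""
    dims = [A.shape[1] for A in mats]
    T = v.reshape(dims)
    for mu, A in enumerate(mats):
        T = np.tensordot(A, T, axes=([1], [mu]))   # contract mode mu with A
        T = np.moveaxis(T, 0, mu)                  # put the result back at position mu
    return T.reshape(-1)

v = rng.standard_normal(N * R**M)
Kv = sum(kron_matvec([K_l[l]] + Delta[l], v) for l in range(s))

# Reference: explicitly assembled operator (feasible only for tiny sizes).
K_full = sum(reduce(np.kron, [K_l[l]] + Delta[l]) for l in range(s))
print(np.allclose(Kv, K_full @ v))
```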
19. Computing QoI in low-rank tensor format
Now we consider how to find maxima in a high-dimensional tensor.
20. Maximum norm and corresponding index
Let u = Σ_{j=1}^r ⊗_{μ=1}^d u_{jμ} ∈ R_r (rank-r canonical format). Compute
‖u‖_∞ := max_{i=(i_1,...,i_d)∈I} |u_i| = max_{i=(i_1,...,i_d)∈I} | Σ_{j=1}^r ∏_{μ=1}^d (u_{jμ})_{i_μ} |.
Computing ‖u‖_∞ is equivalent to the following eigenvalue problem.
Let i* := (i*_1, . . . , i*_d) ∈ I, #I = ∏_{μ=1}^d n_μ. Then
‖u‖_∞ = |u_{i*}| = | Σ_{j=1}^r ∏_{μ=1}^d (u_{jμ})_{i*_μ} |   and   e^{(i*)} := ⊗_{μ=1}^d e_{i*_μ},
where e_{i*_μ} ∈ R^{n_μ} is the i*_μ-th canonical unit vector (μ ∈ N_{≤d}).
21. Then
u ⊙ e^{(i*)} = ( Σ_{j=1}^r ⊗_{μ=1}^d u_{jμ} ) ⊙ ( ⊗_{μ=1}^d e_{i*_μ} ) = Σ_{j=1}^r ⊗_{μ=1}^d ( u_{jμ} ⊙ e_{i*_μ} )
 = Σ_{j=1}^r ⊗_{μ=1}^d (u_{jμ})_{i*_μ} e_{i*_μ} = ( Σ_{j=1}^r ∏_{μ=1}^d (u_{jμ})_{i*_μ} ) ⊗_{μ=1}^d e_{i*_μ} = u_{i*} e^{(i*)},
where ⊙ denotes the entrywise (Hadamard) product.
Thus, we obtained an “eigenvalue problem”:
u ⊙ e^{(i*)} = u_{i*} e^{(i*)}.
22. Computing ‖u‖_∞, u ∈ R_r, by vector iteration
Defining the diagonal matrix
D(u) := Σ_{j=1}^r ⊗_{μ=1}^d diag( (u_{jμ})_ν , ν ∈ N_{≤n_μ} )   (2)
with representation rank r, we obtain D(u) v = u ⊙ v.
Now apply the well-known vector iteration method (with rank truncation) to
D(u) e^{(i*)} = u_{i*} e^{(i*)}
and obtain ‖u‖_∞.
[Approximate iteration: Khoromskij, Hackbusch, Tyrtyshnikov 05], and [Espig, Hackbusch 2010]
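A toy version of this vector iteration, run on a small full tensor so that the result can be checked directly; in the actual algorithm u and the iterates are kept in a low-rank format and the rank is truncated after every Hadamard product, which this dense sketch omits.

```python
import numpy as np

rng = np.random.default_rng(5)
u = rng.standard_normal((6, 7, 8))      # small full tensor, standing in for a CP tensor

# Power iteration for D(u) e = u_{i*} e, where D(u) v = u (Hadamard) v.
v = np.full(u.shape, 1.0)
v /= np.linalg.norm(v)
for _ in range(500):
    w = u * v                           # Hadamard product = action of D(u)
    v = w / np.linalg.norm(w)
lam = np.sum(v * (u * v))               # Rayleigh quotient -> entry of largest magnitude

print(abs(lam), np.abs(u).max())        # the two values agree up to iteration error
```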
23. How to compute the mean value in the CP format
Let u = Σ_{j=1}^r ⊗_{μ=1}^d u_{jμ} ∈ R_r. Then the mean value ū can be computed as a scalar product:
ū = ⟨ Σ_{j=1}^r ⊗_{μ=1}^d u_{jμ} , ⊗_{μ=1}^d (1/n_μ) 1̃_μ ⟩ = Σ_{j=1}^r ∏_{μ=1}^d ⟨ u_{jμ}, 1̃_μ ⟩ / n_μ   (3)
  = Σ_{j=1}^r ∏_{μ=1}^d (1/n_μ) Σ_{k=1}^{n_μ} (u_{jμ})_k ,   (4)
where 1̃_μ := (1, . . . , 1)^T ∈ R^{n_μ}.
The numerical cost is O( r · Σ_{μ=1}^d n_μ ).
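A small sketch of formulas (3)-(4): the mean of a CP tensor computed directly from its factors and checked against the full tensor (ranks, sizes and entries are arbitrary).

```python
import numpy as np

rng = np.random.default_rng(6)
r, n = 3, (4, 5, 6)
factors = [rng.standard_normal((r, ni)) for ni in n]   # u_{j,mu}, stored row-wise

# Mean from the CP factors: sum_j prod_mu ( (1/n_mu) * sum_k (u_{j,mu})_k )
mean_cp = sum(np.prod([f[j].sum() / f.shape[1] for f in factors]) for j in range(r))

# Reference: assemble the full tensor and average all entries.
full = np.zeros(n)
for j in range(r):
    full += np.multiply.outer(np.multiply.outer(factors[0][j], factors[1][j]), factors[2][j])
print(np.isclose(mean_cp, full.mean()))
```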
24. Numerical experiments
2D L-shape domain, N = 557 dofs. The total stochastic dimension is M_u = M_κ + M_f = 20, and there are |J| = 231 PCE coefficients:
u = Σ_{j=1}^{231} u_{j,0} ⊗ ⊗_{μ=1}^{20} u_{jμ} ∈ R^557 ⊗ ⊗_{μ=1}^{20} R³.
25. Level sets
Now we compute the level sets sign(b‖u‖_∞ 1 − u) for b ∈ {0.2, 0.4, 0.6, 0.8}.
The tensor u has 3^20 · 557 ≈ 2 · 10^12 entries, ≈ 16 TB of memory.
The computing time for one level set was 10 minutes.
The intermediate ranks of sign(b‖u‖_∞ 1 − u) and rank(u_k) were less than 24.
26. Part II: Bayesian update
We will discuss the Gauss-Markov-Kalman filter for the Bayesian updating of parameters in a computational model.
27. Mathematical setup
Consider
K(u; q) = f ⇒ u = S(f; q),
where S is the solution operator.
The operator depends on the parameters q ∈ Q, hence the state u ∈ U is also a function of q.
Measurement operator Y with values in Y:
y = Y(q; u) = Y(q, S(f; q)).
Examples of measurements:
y(ω) = ∫_{D_0} u(ω, x) dx, or u at a few points.
28. Random QoI
With the state u a random variable (RV), the quantity to be measured,
y(ω) = Y(q(ω), u(ω)),
is also uncertain, a random variable.
Noisy data: ŷ + ε(ω), where ŷ is the “true” value and ε a random error.
Forecast of the measurement: z(ω) = y(ω) + ε(ω).
29. Conditional probability and expectation
Classically, Bayes's theorem gives the conditional probability
P(I_q | M_z) = P(M_z | I_q) P(I_q) / P(M_z)   (or π_q(q|z) = p(z|q) p_q(q) / Z_s);
expectation with this posterior measure is the conditional expectation.
Kolmogorov starts from the conditional expectation E(·|M_z) and from it obtains the conditional probability via P(I_q|M_z) = E(χ_{I_q} | M_z).
30. Conditional expectation
The conditional expectation is defined as the orthogonal projection onto the closed subspace L²(Ω, P, σ(z)):
E(q | σ(z)) := P_{Q_∞} q = argmin_{q̃ ∈ L²(Ω,P,σ(z))} ‖q − q̃‖²_{L²}.
The subspace Q_∞ := L²(Ω, P, σ(z)) represents the available information.
The update, also called the assimilated value, q_a(ω) := P_{Q_∞} q = E(q | σ(z)), is a Q-valued RV and represents the new state of knowledge after the measurement.
Doob-Dynkin: Q_∞ = {ϕ ∈ Q : ϕ = φ ∘ z, φ measurable}.
31. Numerical computation of NLBU
Look for ϕ such that q(ξ) = ϕ(z(ξ)), z(ξ) = y(ξ) + ε(ω):
ϕ ≈ ϕ̃ = Σ_{α∈J_p} ϕ_α Φ_α(z(ξ)),
and minimize ‖q(ξ) − ϕ̃(z(ξ))‖²_{L²}, where the Φ_α are polynomials (e.g. Hermite, Laguerre, Chebyshev or something else).
Taking derivatives with respect to ϕ_α:
∂/∂ϕ_α ⟨ q(ξ) − ϕ̃(z(ξ)), q(ξ) − ϕ̃(z(ξ)) ⟩ = 0   ∀α ∈ J_p.
Inserting the representation for ϕ̃, we obtain:
32. Numerical computation of NLBU
∂/∂ϕ_α E[ q²(ξ) − 2 Σ_{β∈J} q ϕ_β Φ_β(z) + Σ_{β,γ∈J} ϕ_β ϕ_γ Φ_β(z) Φ_γ(z) ]
 = 2 E[ −q Φ_α(z) + Σ_{β∈J} ϕ_β Φ_β(z) Φ_α(z) ]
 = 2 ( Σ_{β∈J} E[ Φ_β(z) Φ_α(z) ] ϕ_β − E[ q Φ_α(z) ] ) = 0   ∀α ∈ J.
33. Numerical computation of NLBU
Rewriting the last sum in matrix form, we obtain the linear system of equations (=: A) for the coefficients ϕ_β:
( E[ Φ_α(z(ξ)) Φ_β(z(ξ)) ] )_{α,β} ( ϕ_β )_β = ( E[ q(ξ) Φ_α(z(ξ)) ] )_α ,
where α, β ∈ J and the matrix A is of size |J| × |J|.
34. Numerical computation of NLBU
The system above can be written in the compact form
[Φ] [diag(. . . w_i . . .)] [Φ]^T ( . . . ϕ_β . . . )^T = [Φ] ( w_0 q(ξ_0), . . . , w_N q(ξ_N) )^T,
where [Φ] ∈ R^{|J| × N} and [diag(. . . w_i . . .)] ∈ R^{N × N}.
Solving this system, we obtain the vector of coefficients ( . . . ϕ_β . . . )^T for all β.
Finally, the assimilated parameter q_a is
q_a = q_f + ϕ̃(ŷ) − ϕ̃(z),   (5)
with z(ξ) = y(ξ) + ε(ω) and ϕ̃ = Σ_{β∈J_p} ϕ_β Φ_β(z(ξ)).
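A minimal numerical sketch of this construction (not taken from the slides): the weighted least-squares system is assembled from Monte Carlo samples with equal weights and a monomial basis in z, and the assimilated samples follow eq. (5). The forward map, the noise level and the observed value ŷ are made-up assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)
Ns, p = 20000, 3                          # Monte Carlo samples and polynomial degree (illustrative)

q   = rng.standard_normal(Ns)             # prior (forecast) parameter samples q_f(xi)
y   = q**3 + q                            # hypothetical forward/measurement map y = Y(q)
eps = 0.5 * rng.standard_normal(Ns)       # measurement noise
z   = y + eps                             # forecast of the measurement z(xi)
y_hat = 2.0                               # assumed observed ("true") data value

# Basis Phi_alpha(z) = z^alpha, alpha = 0..p, with equal weights w_i = 1/Ns.
Phi = np.vander(z, p + 1, increasing=True).T      # shape (p+1, Ns)
w = np.full(Ns, 1.0 / Ns)
A = Phi @ (w[:, None] * Phi.T)                    # [Phi] diag(w) [Phi]^T
b = Phi @ (w * q)                                 # right-hand side [Phi] diag(w) q
phi = np.linalg.solve(A, b)                       # coefficients phi_beta

poly = lambda t: np.polynomial.polynomial.polyval(t, phi)
q_a = q + poly(y_hat) - poly(z)                   # assimilated samples, eq. (5)

print("prior  mean/std:", q.mean(), q.std())
print("update mean/std:", q_a.mean(), q_a.std())
```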
35. Example: Lorenz 1963 problem (chaotic system of ODEs)
ẋ = σ(ω)(y − x)
ẏ = x(ρ(ω) − z) − y
ż = xy − β(ω)z
The initial state q_0(ω) = (x_0(ω), y_0(ω), z_0(ω)) is uncertain.
Solve on t_0, t_1, . . . , t_10, noisy measurement → UPDATE; solve on t_11, t_12, . . . , t_20, noisy measurement → UPDATE; . . .
IDEA of the Bayesian Update (BU): take q_f(ω) = q_0(ω).
Linear BU: q_a = q_f + K · (z − y)
Non-linear BU: q_a = q_f + H_1 · (z − y) + (z − y)^T · H_2 · (z − y).
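For the linear update, here is a minimal ensemble-based sketch with the Kalman gain estimated from sample covariances, K = cov(q, y)/(var(y) + var(ε)); the scalar forward map, noise level and data value are illustrative assumptions and not the Lorenz-63 setup of this slide.

```python
import numpy as np

rng = np.random.default_rng(8)
Ns = 5000

qf  = rng.standard_normal(Ns)             # forecast parameter ensemble q_f
y   = np.sin(qf) + qf                     # hypothetical measurement forecast y(omega)
eps = 0.3 * rng.standard_normal(Ns)       # measurement noise epsilon(omega)
z_data = 1.2                              # assumed observed value

# Kalman gain from ensemble statistics: K = cov(q, y) / (var(y) + var(eps))
Cqy = np.cov(qf, y, ddof=1)[0, 1]
K = Cqy / (np.var(y, ddof=1) + np.var(eps, ddof=1))

qa = qf + K * (z_data - (y + eps))        # linear Bayesian update of each ensemble member

print("prior  mean/std:", qf.mean(), qf.std())
print("update mean/std:", qa.mean(), qa.std())
```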
36. Trajectories of x, y and z in time. After each update (as new information comes in) the uncertainty drops. [O. Pajonk, B. V. Rosic, A. Litvinenko, and H. G. Matthies, 2012]
37. Example: Lorenz problem
Figure: quadratic BU surrogate, measuring the state (x(t), y(t), z(t)); prior (x_f, y_f, z_f) and posterior (x_a, y_a, z_a) densities after one update. [plots omitted]
38. Example: Lorenz problem
Figure: comparison of the posterior densities of x, y and z computed by the linear and the quadratic BU after the second update. [plots omitted]
39. Example: Lorenz problem
Figure: quadratic measurement (x(t)², y(t)², z(t)²); comparison of the prior and the posterior for the NLBU. [plots omitted]
40. Example: 1D elliptic PDE with uncertain coefficients
−∇ · (κ(x, ξ) ∇u(x, ξ)) = f(x, ξ), x ∈ [0, 1],
plus Dirichlet random b.c. g(0, ξ) and g(1, ξ).
3 measurements: u(0.3) = 22 with s.d. 0.2, u(0.5) = 28 with s.d. 0.3, u(0.8) = 18 with s.d. 0.3.
κ(x, ξ): N = 100 dofs, M_κ = 5, 35 KLE terms, beta distribution for κ, Gaussian cov_κ, cov. length 0.1, multivariate Hermite polynomials of order p_κ = 2;
RHS f(x, ξ): M_f = 5, 40 KLE terms, beta distribution for f, exponential cov_f, cov. length 0.03, multivariate Hermite polynomials of order p_f = 2;
b.c. g(x, ξ): M_g = 2, 2 KLE terms, normal distribution for g, Gaussian cov_g, cov. length 10, multivariate Hermite polynomials of order p_g = 1;
p_φ = 3 and p_u = 3.
41. Example: updating of the solution u
Figure: original and updated solutions; mean value plus/minus 1, 2, 3 standard deviations. [plots omitted]
[Graphics were produced with the stochastic Galerkin library sglib, written by E. Zander at TU Braunschweig.]
42. Example: updating of the parameter
Figure: original and updated parameter κ. [plots omitted]
43. Future plans and possible collaboration ideas
44. Future plans, Idea N1
Possible collaboration with Troy Butler: develop a low-rank, adaptive, goal-oriented Bayesian update technique. The solution of the forward and inverse problems will be treated as one adaptive process, controlled by error/uncertainty estimators.
[Diagram: the forward solve produces the measurement forecast y; the mismatch (y − z) with the noisy data z (noise ε) drives the update of q; both the forward and the update steps are kept low-rank and adaptive. Error sources: spatial discretization, stochastic discretization, low-rank approximation, approximation of the inverse operator.]
45. Future plans, Idea N2
A link between Green's functions of PDEs and covariance matrices.
Possible collaboration with statisticians: Doug Nychka (NCAR), Havard Rue.
46. Future plans, Idea N3
Data assimilation techniques, Bayesian update surrogate.
Develop non-linear, non-Gaussian Bayesian update
approximation for gPCE coefficients.
Possible collaboration with Jan Mandel, Troy Butler, Kody Law,
Y. Marzouk, H. Najm, TU Braunschweig and KAUST
47. Collaborators
1. Uncertainty quantification and Bayesian Update: Prof. H.
Matthies, Bojana V. Rosic, Elmar Zander, Oliver Pajonk
from TU Braunschweig, Germany,
2. Low-rank tensor calculus: Mike Espig from RWTH Aachen,
Boris and Venera Khoromskij from MPI Leipzig
3. Spatial and environmental statistics: Marc Genton, Ying
Sun, Raphael Huser, Brian Reich, Ben Shaby and David
Bolin.
4. Some others: UQ, data assimilation, high-dimensional
problems/statistics
48. Conclusion
Introduced low-rank tensor methods to solve elliptic PDEs with uncertain coefficients.
Explained how to compute the maximum, the mean, level sets, ... in a low-rank tensor format.
Derived a Bayesian update surrogate ϕ (as a linear, quadratic, cubic, etc. approximation), i.e. computed the conditional expectation of q given the measurement y.
49. Example: canonical rank d, whereas TT rank 2
The d-Laplacian over a uniform tensor grid is known to have the Kronecker rank-d representation
Δ_d = A ⊗ I_N ⊗ · · · ⊗ I_N + I_N ⊗ A ⊗ · · · ⊗ I_N + · · · + I_N ⊗ I_N ⊗ · · · ⊗ A ∈ R^{I^{⊗d} ⊗ I^{⊗d}},   (6)
with A = Δ_1 = tridiag{−1, 2, −1} ∈ R^{N×N}, and I_N the N × N identity. Notice that for the canonical rank we have rank_C(Δ_d) = d, while the TT rank of Δ_d equals 2 for any dimension, due to the explicit representation
Δ_d = (Δ_1  I) × ( I 0 ; Δ_1 I ) × · · · × ( I 0 ; Δ_1 I ) × ( I ; Δ_1 ),   (7)
where the rank product operation “×” is defined as a regular matrix product of the two corresponding core matrices, their blocks being multiplied by means of the tensor product (the 2 × 2 cores are written row-wise, separated by “;”). A similar bound holds for the Tucker rank, rank_Tuck(Δ_d) = 2.
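A quick construction of Δ_d as the Kronecker sum (6) for small, illustrative d and N; the full matrix is assembled only to make the structure explicit, which is exactly what the low-rank formats avoid for large d.

```python
import numpy as np
from functools import reduce

def laplace_1d(N):
    """1D discrete Laplacian tridiag{-1, 2, -1}."""
    return 2 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)

def laplace_d(d, N):
    """Kronecker-sum form: sum over positions of I x ... x A x ... x I."""
    A, I = laplace_1d(N), np.eye(N)
    terms = []
    for pos in range(d):
        factors = [A if mu == pos else I for mu in range(d)]
        terms.append(reduce(np.kron, factors))
    return sum(terms)

D3 = laplace_d(3, 5)            # 125 x 125 matrix for d = 3, N = 5
print(D3.shape, np.allclose(D3, D3.T))
```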
50. Advantages and disadvantages
Denote by k the rank, by d the dimension, and by n the number of dofs in 1D:
1. CP: ill-posed approximation problem, storage O(dnk), approximations hard to compute
2. Tucker: reliable arithmetic based on the SVD, O(dnk + k^d)
3. Hierarchical Tucker: based on the SVD, storage O(dnk + dk³), truncation O(dnk² + dk⁴)
4. TT: based on the SVD, O(dnk²) or O(dnk³), stable
5. Quantics-TT: O(n^d) → O(d log_q n)
51. How to compute the variance in the CP format
Let u ∈ R_r and let
ũ := u − ū ⊗_{μ=1}^d 1̃_μ = Σ_{j=1}^{r+1} ⊗_{μ=1}^d ũ_{jμ} ∈ R_{r+1},   (8)
i.e. the mean ū is subtracted from every entry of u. Then the variance var(u) of u can be computed as
var(u) = ⟨ũ, ũ⟩ / ∏_{μ=1}^d n_μ = (1 / ∏_{μ=1}^d n_μ) ⟨ Σ_{i=1}^{r+1} ⊗_{μ=1}^d ũ_{iμ} , Σ_{j=1}^{r+1} ⊗_{ν=1}^d ũ_{jν} ⟩
 = Σ_{i=1}^{r+1} Σ_{j=1}^{r+1} ∏_{μ=1}^d (1/n_μ) ⟨ ũ_{iμ}, ũ_{jμ} ⟩.
The numerical cost is O( (r + 1)² · Σ_{μ=1}^d n_μ ).
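A sketch of this variance computation from the CP factors (ranks and sizes arbitrary): the mean is subtracted as one additional rank-1 term, the scalar product ⟨ũ, ũ⟩ factorizes into mode-wise Gram matrices, and the result is checked against the full tensor.

```python
import numpy as np

rng = np.random.default_rng(9)
r, n = 3, (4, 5, 6)
factors = [rng.standard_normal((r, ni)) for ni in n]    # CP factors u_{j,mu}

# Mean of all entries, computed from the factors (as on the earlier slide).
mean = sum(np.prod([f[j].sum() / f.shape[1] for f in factors]) for j in range(r))

# Mean-centered tensor u~ = u - mean * (1 x ... x 1): append one rank-1 term.
tf = [np.vstack([f, np.ones((1, f.shape[1]))]) for f in factors]
tf[0][r] *= -mean                                       # carry the scalar -mean in the first factor

# var(u) = <u~, u~> / prod(n_mu); the scalar product factorizes over the modes.
G = np.ones((r + 1, r + 1))
for f in tf:
    G *= f @ f.T                                        # Gram matrices <u~_{i,mu}, u~_{j,mu}>
var_cp = G.sum() / np.prod(n)

# Reference: full tensor.
full = np.zeros(n)
for j in range(r):
    full += np.multiply.outer(np.multiply.outer(factors[0][j], factors[1][j]), factors[2][j])
print(np.isclose(var_cp, full.var()))
```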
52. Computing QoI in low-rank tensor format
Now we consider how to find ‘level sets’, for instance, all entries of the tensor u that lie in an interval [a, b].
53. Definitions of the characteristic and sign functions
1. To compute level sets and frequencies we need the characteristic function.
2. To compute the characteristic function we need the sign function.
The characteristic χ_I(u) ∈ T of u ∈ T in I ⊂ R is, for every multi-index i ∈ I, pointwise defined as
(χ_I(u))_i := 1 if u_i ∈ I, and 0 if u_i ∉ I.
Furthermore, sign(u) ∈ T is for all i ∈ I pointwise defined by
(sign(u))_i := 1 if u_i > 0; −1 if u_i < 0; 0 if u_i = 0.
54. sign(u) is needed for computing χ_I(u)
Lemma
Let u ∈ T, a, b ∈ R, and 1 = ⊗_{μ=1}^d 1̃_μ, where 1̃_μ := (1, . . . , 1)^T ∈ R^{n_μ}.
(i) If I = R_{<b}, then χ_I(u) = ½ (1 + sign(b1 − u)).
(ii) If I = R_{>a}, then χ_I(u) = ½ (1 − sign(a1 − u)).
(iii) If I = (a, b), then χ_I(u) = ½ (sign(b1 − u) − sign(a1 − u)).
sign(u), u ∈ R_r, is computed via a hybrid Newton-Schulz iteration with rank truncation after each iteration.
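A plain (non-hybrid, dense) sketch of the entrywise Newton-Schulz iteration for sign(u); in the low-rank setting each iterate would additionally be truncated in rank, which is omitted here. The scaling step and the iteration count are illustrative choices.

```python
import numpy as np

def sign_newton_schulz(u, iters=60):
    """Entrywise Newton-Schulz iteration x <- x (3 - x^2) / 2, which converges to
    sign(x) entrywise for 0 < |x| < sqrt(3); we first scale u into that interval."""
    x = u / np.abs(u).max()
    for _ in range(iters):
        x = 0.5 * x * (3.0 - x * x)
    return x

rng = np.random.default_rng(10)
u = rng.standard_normal((5, 6, 7))
s = sign_newton_schulz(u)
print(np.allclose(s, np.sign(u), atol=1e-6))
```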
55. Level set, frequency
Definition (Level set, frequency)
Let I ⊂ R and u ∈ T. The level set L_I(u) ∈ T of u with respect to I is pointwise defined by
(L_I(u))_i := u_i if u_i ∈ I, and 0 if u_i ∉ I, for all i ∈ I.
The frequency F_I(u) ∈ N of u with respect to I is defined as
F_I(u) := # supp χ_I(u).
56. Computation of level sets and frequency
Proposition
Let I ⊂ R, u ∈ T, and χ_I(u) its characteristic. Then
L_I(u) = χ_I(u) ⊙ u and rank(L_I(u)) ≤ rank(χ_I(u)) · rank(u).
The frequency F_I(u) ∈ N of u with respect to I is
F_I(u) = ⟨ χ_I(u), 1 ⟩,
where 1 = ⊗_{μ=1}^d 1̃_μ and 1̃_μ := (1, . . . , 1)^T ∈ R^{n_μ}.
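A small dense check of the proposition (using NumPy's built-in sign as a stand-in for the truncated Newton-Schulz iteration of the previous slide); the interval [0.7, 0.8] matches the earlier example, and the tensor entries are random.

```python
import numpy as np

rng = np.random.default_rng(11)
u = rng.uniform(0.0, 1.0, size=(5, 6, 7))
a, b = 0.7, 0.8

# Characteristic function of the interval (a, b) via the sign representation, Lemma (iii).
chi = 0.5 * (np.sign(b - u) - np.sign(a - u))

level_set = chi * u                 # L_I(u) = chi_I(u) (Hadamard) u
frequency = int(chi.sum())          # F_I(u) = <chi_I(u), 1> = number of entries in (a, b)

print(frequency, np.count_nonzero((u > a) & (u < b)))   # the two counts coincide
```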