Tensor completion for PDEs with uncertain
coefficients and Bayesian Update
Alexander Litvinenko
(joint work with E. Zander, B. Rosic, O. Pajonk, H. Matthies)
Center for Uncertainty Quantification
http://sri-uq.kaust.edu.sa/
Extreme Computing Research Center, KAUST
The structure of the talk
Part I (Stochastic forward problem):
1. Motivation
2. Elliptic PDE with uncertain coefficients
3. Discretization and low-rank tensor approximations
Part II (Bayesian update):
1. Bayesian update surrogate
2. Examples
Part III (Tensor completion):
1. Problem setup
2. Tensor completion for Bayesian Update
Motivation to do Uncertainty Quantification (UQ)
Motivation: there is an urgent need to quantify and reduce the
uncertainty in output quantities of computer simulations within
complex (multiscale-multiphysics) applications.
Typical challenges: classical sampling methods are often very
inefficient, whereas straightforward functional representations
are subject to the well-known Curse of Dimensionality.
Nowadays computational predictions are used in critical
engineering decisions, and thanks to modern computers we are
able to simulate very complex phenomena. But how reliable
are these predictions? Can they be trusted?
Example: Saudi Aramco currently has a simulator,
GigaPOWERS, which runs with 9 billion cells. How sensitive
are the simulation results with respect to the unknown reservoir
properties?
Part I: Stochastic forward problem
Stochastic Galerkin method to solve an
elliptic PDE with uncertain coefficients
PDE with uncertain coefficient and RHS
Consider
− div(κ(x, ω) ∇u(x, ω)) = f(x, ω) in G × Ω, G ⊂ R²,
u = 0 on ∂G,   (1)
where κ(x, ω) is an uncertain diffusion coefficient. Since κ is positive,
one usually models κ(x, ω) = e^{γ(x, ω)}.
For well-posedness see [Sarkis 09, Gittelson 10, H.-J. Starkloff
11, Ullmann 10].
In the following we assume that the covariance function cov_κ(x, y) is given.
My previous work
After applying the stochastic Galerkin method, one obtains
Ku = f, where all ingredients are represented in a tensor format.
Compute max{u}, var(u), level sets of u, sign(u):
[1] Efficient Analysis of High-Dimensional Data in Tensor Formats,
Espig, Hackbusch, Litvinenko, Matthies and Zander, 2012.
Investigate which ingredients influence the tensor rank of K:
[2] Efficient low-rank approximation of the stochastic Galerkin matrix in tensor formats,
Wähnert, Espig, Hackbusch, Litvinenko, Matthies, 2013.
Approximate κ(x, ω) and the stochastic Galerkin operator K in the Tensor
Train (TT) format, solve for u, postprocess:
[3] Polynomial Chaos Expansion of random coefficients and the solution of stochastic
partial differential equations in the Tensor Train format, Dolgov, Litvinenko, Khoromskij, Matthies, 2016.
Canonical and Tucker tensor formats
Definition and Examples of tensors
Canonical and Tucker tensor formats
[Pictures are taken from B. Khoromskij and A. Auer lecture course]
Storage: O(n^d) → O(dRn) (canonical rank R) and O(R^d + dRn) (Tucker rank R).
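To make the storage counts concrete, here is a small sketch that evaluates them for illustrative values n = 100, d = 10, R = 10 (these numbers are assumptions for the demo, not taken from the slides):

```python
# Storage comparison for an order-d tensor with mode size n:
# full array vs. canonical (CP) rank-R vs. Tucker rank-R format.
# Counts are entry counts only; index overheads and constants are ignored.

def full_storage(n: int, d: int) -> int:
    """Entries of the full tensor: n^d."""
    return n ** d

def cp_storage(n: int, d: int, R: int) -> int:
    """Canonical format: d factor matrices of size n x R."""
    return d * R * n

def tucker_storage(n: int, d: int, R: int) -> int:
    """Tucker format: core tensor R^d plus d factor matrices n x R."""
    return R ** d + d * R * n

n, d, R = 100, 10, 10
print(full_storage(n, d))       # 10^20 entries -- infeasible to store
print(cp_storage(n, d, R))      # 10,000 entries
print(tucker_storage(n, d, R))  # 10^10 + 10,000 -- the R^d core dominates for large d
```

The comparison shows why the canonical format is attractive for large d: its cost is linear in d, while the Tucker core grows exponentially.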
Definition of tensor of order d
A tensor of order d is a multidimensional array over a d-tuple
index set I = I_1 × · · · × I_d:
A = [a_{i_1...i_d} : i_µ ∈ I_µ] ∈ R^I, I_µ = {1, ..., n_µ}, µ = 1, ..., d.
A is an element of the linear space
V_n = ⊗_{µ=1}^d V_µ, V_µ = R^{I_µ},
equipped with the Euclidean scalar product ⟨·, ·⟩ : V_n × V_n → R,
defined as
⟨A, B⟩ := Σ_{(i_1...i_d)∈I} a_{i_1...i_d} b_{i_1...i_d}, for A, B ∈ V_n.
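In code, this scalar product is just an elementwise multiply-and-sum over all multi-indices; a short NumPy check on random order-3 tensors (the shapes are arbitrary demo choices):

```python
import numpy as np

# Euclidean scalar product <A, B> = sum over all multi-indices of a_i * b_i
# for order-d tensors A, B in R^{n1 x ... x nd}.
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 4, 5))   # order d = 3, I = {1..3} x {1..4} x {1..5}
B = rng.standard_normal((3, 4, 5))

inner = np.sum(A * B)                # elementwise product, then sum over I
inner_vec = A.ravel() @ B.ravel()    # equivalently: flatten and take a dot product
assert np.isclose(inner, inner_vec)
```

The equivalence with the flattened dot product is exactly the statement that V_n with this product is (isometric to) the Euclidean space R^{n_1 · · · n_d}.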
Discretization of elliptic PDE
Now let us discretize our diffusion equation with
uncertain coefficients
Karhunen-Loève and Polynomial Chaos Expansions
Apply both:
Karhunen-Loève Expansion (KLE):
κ(x, ω) = κ_0(x) + Σ_{j=1}^∞ κ_j g_j(x) ξ_j(θ(ω)), where
θ = θ(ω) = (θ_1(ω), θ_2(ω), ...),
ξ_j(θ) = (1/κ_j) ∫_G (κ(x, ω) − κ_0(x)) g_j(x) dx.
Polynomial Chaos Expansion (PCE):
κ(x, ω) = Σ_α κ^(α)(x) H_α(θ); compute ξ_j(θ) = Σ_{α∈J} ξ_j^(α) H_α(θ),
where ξ_j^(α) = (1/κ_j) ∫_G κ^(α)(x) g_j(x) dx.
Further compute ξ_j^(α) ≈ Σ_{ℓ=1}^s (ξ_ℓ)_j ∏_{k=1}^∞ (ξ_{ℓ,k})_{α_k}.
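A truncated KLE can be computed numerically from the eigendecomposition of the discretized covariance. A minimal sketch on [0, 1], assuming a Gaussian covariance with correlation length 0.1 and the standard convention κ ≈ κ_0 + Σ_j √λ_j g_j(x) ξ_j (the slides absorb the eigenvalue scaling into the coefficients κ_j); all parameter values here are demo assumptions:

```python
import numpy as np

# Discrete truncated Karhunen-Loeve expansion of a log-normal field on [0, 1].
N, ell, M = 200, 0.1, 35                  # grid points, correlation length, KLE terms
x = np.linspace(0.0, 1.0, N)
C = np.exp(-((x[:, None] - x[None, :]) / ell) ** 2)   # covariance matrix cov(x, y)

h = x[1] - x[0]
lam, g = np.linalg.eigh(h * C)            # discrete eigenpairs of the covariance operator
lam, g = lam[::-1], g[:, ::-1]            # sort eigenvalues in descending order

rng = np.random.default_rng(42)
xi = rng.standard_normal(M)               # independent standard normal RVs xi_j
gamma = (g[:, :M] * np.sqrt(np.maximum(lam[:M], 0.0))) @ xi   # one realization of gamma
kappa = np.exp(gamma)                     # positive coefficient kappa = e^gamma
print(kappa.min() > 0)                    # True: the exponential keeps kappa positive
```

The fast decay of the eigenvalues lam is what makes the truncation to M terms (and later the low-rank tensor representations) effective.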
Final discretized stochastic PDE
Ku = f, where
K := Σ_{ℓ=1}^s K_ℓ ⊗ ⊗_{µ=1}^M Δ_{ℓµ}, K_ℓ ∈ R^{N×N}, Δ_{ℓµ} ∈ R^{R_µ×R_µ},
u := Σ_{j=1}^r u_j ⊗ ⊗_{µ=1}^M u_{jµ}, u_j ∈ R^N, u_{jµ} ∈ R^{R_µ},
f := Σ_{k=1}^R f_k ⊗ ⊗_{µ=1}^M g_{kµ}, f_k ∈ R^N and g_{kµ} ∈ R^{R_µ}.
(Wähnert, Espig, Hackbusch, Litvinenko, Matthies, 2011)
Examples of stochastic Galerkin matrices:
Part II: Bayesian update
We will speak about the Gauss-Markov-Kalman filter for the
Bayesian updating of parameters in a computational model.
Mathematical setup
Consider
K(u; q) = f ⇒ u = S(f; q),
where S is the solution operator.
The operator depends on parameters q ∈ Q,
hence the state u ∈ U is also a function of q.
Measurement operator Y with values in Y:
y = Y(q; u) = Y(q, S(f; q)).
Examples of measurements:
y(ω) = ∫_{D_0} u(ω, x) dx, or u in a few points.
Random QoI
With the state u a random variable, the quantity to be measured
y(ω) = Y(q(ω), u(ω))
is also uncertain, a random variable.
Noisy data: ŷ + ε(ω),
where ŷ is the “true” value and ε a random error.
Forecast of the measurement: z(ω) = y(ω) + ε(ω).
Conditional probability and expectation
Classically, Bayes’s theorem gives the conditional probability
P(I_q|M_z) = P(M_z|I_q) P(I_q) / P(M_z)   (or π_q(q|z) = p(z|q) p_q(q) / Z_s);
expectation with this posterior measure is the conditional
expectation.
Kolmogorov starts from the conditional expectation E(·|M_z)
and from it obtains the conditional probability via P(I_q|M_z) = E(χ_{I_q}|M_z).
Conditional expectation
The conditional expectation is defined as the
orthogonal projection onto the closed subspace L²(Ω, P, σ(z)):
E(q|σ(z)) := P_{Q_∞} q = argmin_{q̃ ∈ L²(Ω,P,σ(z))} ‖q − q̃‖²_{L²}.
The subspace Q_∞ := L²(Ω, P, σ(z)) represents the available
information.
The update, also called the assimilated value,
q_a(ω) := P_{Q_∞} q = E(q|σ(z)), is a Q-valued random variable
and represents the new state of knowledge after the measurement.
Doob-Dynkin: Q_∞ = {ϕ ∈ Q : ϕ = φ ◦ z, φ measurable}.
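The projection property can be illustrated numerically: fit a polynomial φ(z) to samples of q by least squares (an L² projection onto polynomials of z) and check that the residual q − φ(z) is orthogonal to the estimate. The synthetic Gaussian q and noise model below are assumptions for the demo:

```python
import numpy as np

# Conditional expectation E(q | z) as an orthogonal L2-projection,
# approximated by a cubic polynomial phi(z) fitted by least squares.
rng = np.random.default_rng(1)
n = 200_000
q = rng.standard_normal(n)                 # prior samples of the parameter q
z = q + 0.5 * rng.standard_normal(n)       # measurement forecast z = q + eps

V = np.vander(z, 4)                        # polynomial basis z^3, z^2, z, 1
coef, *_ = np.linalg.lstsq(V, q, rcond=None)
q_hat = V @ coef                           # projection of q onto polynomials in z

resid = q - q_hat                          # the part of q not explained by z
# Minimisation property => the residual is orthogonal to the estimate:
print(abs(np.mean(resid * q_hat)))         # numerically zero
```

Orthogonality holds because least squares makes the residual perpendicular to the span of the basis columns, which is exactly the finite-dimensional analogue of projecting onto L²(Ω, P, σ(z)).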
Numerical computation of NLBU
Look for ϕ such that q(ξ) = ϕ(z(ξ)), z(ξ) = y(ξ) + ε(ω):
ϕ ≈ ϕ̃ = Σ_{α∈J_p} ϕ_α Φ_α(z(ξ)),
and minimize ‖q(ξ) − ϕ̃(z(ξ))‖²_{L²}, where Φ_α are polynomials
(e.g. Hermite, Laguerre, Chebyshev or something else).
Taking derivatives with respect to ϕ_α:
∂/∂ϕ_α ⟨q(ξ) − ϕ̃(z(ξ)), q(ξ) − ϕ̃(z(ξ))⟩ = 0 ∀α ∈ J_p.
Inserting the representation for ϕ̃, obtain:
Numerical computation of NLBU
∂/∂ϕ_α E[ q²(ξ) − 2 Σ_{β∈J} q ϕ_β Φ_β(z) + Σ_{β,γ∈J} ϕ_β ϕ_γ Φ_β(z) Φ_γ(z) ]
= 2 E[ −q Φ_α(z) + Σ_{β∈J} ϕ_β Φ_β(z) Φ_α(z) ]
= 2 ( Σ_{β∈J} E[Φ_β(z) Φ_α(z)] ϕ_β − E[q Φ_α(z)] ) = 0 ∀α ∈ J.
Numerical computation of NLBU
Now, rewriting the last sum in matrix form, obtain the linear
system of equations (=: A) to compute the coefficients ϕ_β:
( E[Φ_α(z(ξ)) Φ_β(z(ξ))] )_{α,β∈J} (… ϕ_β …)^T = ( E[q(ξ) Φ_α(z(ξ))] )_{α∈J},
where α, β ∈ J and A is of size |J| × |J|.
Numerical computation of NLBU
We can rewrite the system above in the compact form:
[Φ] [diag(…w_i…)] [Φ]^T (… ϕ_β …)^T = [Φ] (w_0 q(ξ_0), …, w_N q(ξ_N))^T,
[Φ] ∈ R^{|J|×N}, [diag(…w_i…)] ∈ R^{N×N}.
Solving this system, obtain the vector of coefficients (… ϕ_β …)^T for
all β.
Finally, the assimilated parameter q_a will be
q_a = q_f + ϕ̃(ŷ) − ϕ̃(z),   (2)
z(ξ) = y(ξ) + ε(ω), ϕ̃ = Σ_{β∈J_p} ϕ_β Φ_β(z(ξ)).
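The weighted system above can be assembled and solved directly. The sketch below uses Monte Carlo samples with equal weights w_i = 1/N and a monomial basis Φ_α(z) = z^α (demo assumptions; in practice one uses Hermite-type polynomials and a quadrature rule):

```python
import numpy as np

# Assemble [Phi] diag(w) [Phi]^T (phi_beta) = [Phi] (w_i q(xi_i)) and solve it.
# With w_i = 1/N this reproduces the normal equations
# E[Phi_a Phi_b] phi_b = E[q Phi_a] in sampled form.
rng = np.random.default_rng(2)
N, p = 100_000, 3
q = rng.standard_normal(N)                 # samples of the parameter q(xi)
z = q + 0.3 * rng.standard_normal(N)       # samples of the forecast z(xi)

Phi = np.stack([z ** a for a in range(p + 1)])   # (p+1) x N matrix [Phi_alpha(z_i)]
w = np.full(N, 1.0 / N)                          # equal Monte Carlo weights

A = (Phi * w) @ Phi.T                      # Gram matrix, entries ~ E[Phi_a Phi_b]
b = Phi @ (w * q)                          # right-hand side, entries ~ E[q Phi_a]
phi = np.linalg.solve(A, b)                # coefficients phi_beta

q_tilde = phi @ Phi                        # surrogate phi~(z) at the samples
print(np.mean((q - q_tilde) ** 2))         # L2 error, well below var(q)
```

Note that A is symmetric positive (semi-)definite, so for larger bases a Cholesky or regularized solver is the natural choice.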
Explanation of the “Bayesian update surrogate” from E. Zander
Let the stochastic model of the measurement be given by
y = M(q) + ε, with ε the measurement noise.   (3)
The best estimator ϕ̃ for q given z minimizes the mean-square error:
ϕ̃ = argmin_ϕ E[‖q(·) − ϕ(z(·))‖²₂].   (4)
The best estimate (or predictor) of q given the
measurement model is
q_M(ξ) = ϕ̃(z(ξ)).   (5)
The remainder, i.e. the difference between q and q_M, is
given by
q⊥_M(ξ) = q(ξ) − q_M(ξ).   (6)
Due to the minimisation property of the MMSE
estimator, the remainder is orthogonal to q_M(ξ), i.e. cov(q⊥_M, q_M) = 0.
In other words,
q(ξ) = q_M(ξ) + q⊥_M(ξ)   (7)
yields an orthogonal decomposition of q.
Given the actual measurement ŷ, the prediction is q̂ = ϕ̃(ŷ). The part q_M of q
can be “collapsed” to q̂. The updated stochastic model q′ is
thus given by
q′(ξ) = q̂ + q⊥_M(ξ)   (8)
q′(ξ) = q(ξ) + (ϕ̃(ŷ) − ϕ̃(z(ξ))).   (9)
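The update formula (9) can be exercised on samples. A minimal sketch with a linear estimator ϕ̃ (the Gauss-Markov-Kalman special case); the prior, noise level and measurement value are demo assumptions:

```python
import numpy as np

# Sample-based sketch of the update q'(xi) = q(xi) + phi~(y_hat) - phi~(z(xi))
# with a linear estimator phi~, i.e. the Gauss-Markov-Kalman filter.
rng = np.random.default_rng(3)
n = 200_000
q = 1.0 + 0.8 * rng.standard_normal(n)         # prior (forecast) samples q_f
z = q + 0.2 * rng.standard_normal(n)           # forecast of the measurement
y_hat = 1.6                                    # the actual measurement

K = np.cov(q, z)[0, 1] / np.var(z)             # linear MMSE coefficient ("Kalman gain")
phi = lambda v: K * (v - z.mean()) + q.mean()  # linear estimator phi~(z)

q_a = q + phi(y_hat) - phi(z)                  # assimilated samples, eq. (9)
print(q_a.mean())                              # shifted from the prior mean toward y_hat
print(q_a.var() < q.var())                     # True: the update reduces variance
```

The variance reduction is exactly the orthogonal-decomposition statement: the explained part q_M is collapsed to a number, and only the remainder q⊥_M stays random.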
Example: 1D elliptic PDE with uncertain coefficients
− ∇ · (κ(x, ξ) ∇u(x, ξ)) = f(x, ξ), x ∈ [0, 1],
+ Dirichlet random b.c. g(0, ξ) and g(1, ξ).
3 measurements: u(0.3) = 22 with s.d. 0.2, u(0.5) = 28 with s.d. 0.3,
u(0.8) = 18 with s.d. 0.3.
κ(x, ξ): N = 100 dofs, M = 5, 35 KLE terms, beta distribution for κ, Gaussian cov_κ, cov.
length 0.1, multivariate Hermite polynomial of order p_κ = 2;
RHS f(x, ξ): M_f = 5, 40 KLE terms, beta distribution for f, exponential cov_f, cov. length 0.03,
multivariate Hermite polynomial of order p_f = 2;
b.c. g(x, ξ): M_g = 2, 2 KLE terms, normal distribution for g, Gaussian cov_g, cov. length 10,
multivariate Hermite polynomial of order p_g = 1;
p_φ = 3 and p_u = 3.
Example: updating of the solution u
Figure: Original and updated solutions, mean value plus/minus 1, 2, 3
standard deviations.
[graphics are built in the stochastic Galerkin library sglib, written by E. Zander in TU Braunschweig]
Example: Updating of the parameter
Figure: Original and updated parameter κ.
Part III. Tensor completion
Now we consider how to apply
tensor completion techniques
to the Bayesian update.
In the Bayesian update surrogate, the assimilated PCE coefficients of
the parameter q_a will be
NEW gPCE coeffs = OLD gPCE coeffs + gPCE of update.
ALL INGREDIENTS ARE TENSORS!
q_a = q_f + ϕ̃(ŷ) − ϕ̃(z),   (10)
z(ξ) = y(ξ) + ε(ω), q_a ∈ R^{N×#J_a}, N = 1..10⁷, #J_a > 1000,
#J_f < #J_a.
Problem setup: Tensor completion
Problem of fitting a low-rank tensor A ∈ R^I, I := I_1 × ... × I_d,
I_µ = {1, ..., n_µ}, µ ∈ D := {1, ..., d}, to given data points
{M_i ∈ R | i ∈ P}, P ⊂ I, #P ≥ Σ_{µ=1}^d n_µ,   (11)
by minimizing the distance between the given values (M_i)_{i∈P}
and the approximations (A_i)_{i∈P}:
A = argmin_{Ã∈T} Σ_{i∈P} (M_i − Ã_i)².   (12)
Remark: here we assume that our target tensor M allows a
low-rank approximation ‖M − M̃‖ ≤ ε, ε ≥ 0, where M̃ fulfills
certain rank bounds and T is the low-rank format under consideration.
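The alternating least squares (ALS) idea behind such completion methods can be shown in its simplest instance, the matrix case d = 2: fit A = U Vᵀ to the observed entries by alternately solving small least-squares problems for the rows of U and V. The exactly rank-3 target, 40% sampling rate and tiny regularization are demo assumptions; the cited works treat general tensor formats:

```python
import numpy as np

# Minimal ALS sketch for low-rank completion in the matrix case (d = 2):
# fit A = U V^T to observed entries M_i, i in P, by alternating least squares.
rng = np.random.default_rng(4)
n, r = 40, 3
M = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))   # rank-3 target
mask = rng.random((n, n)) < 0.4                                 # observed index set P

U = rng.standard_normal((n, r))
V = rng.standard_normal((n, r))
lam = 1e-6                                                      # tiny regularization
for _ in range(50):
    for i in range(n):                     # update row i of U from its observed entries
        cols = mask[i]
        Vc = V[cols]
        U[i] = np.linalg.solve(Vc.T @ Vc + lam * np.eye(r), Vc.T @ M[i, cols])
    for j in range(n):                     # update row j of V symmetrically
        rows = mask[:, j]
        Uc = U[rows]
        V[j] = np.linalg.solve(Uc.T @ Uc + lam * np.eye(r), Uc.T @ M[rows, j])

A = U @ V.T
rel_err = np.linalg.norm(A - M) / np.linalg.norm(M)
print(rel_err)                             # small: the rank-3 data is recovered
```

Each micro-step is an r × r solve, which is what makes ALS-type methods cheap per sweep; the tensor-train and CP variants cited below follow the same pattern core by core.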
Problem setup: Tensor completion
L. Grasedyck et al., 2016: hierarchical and tensor train formats.
W. Austin, T. Kolda, D. Kressner, M. Steinlechner et al.: CP format.
Goal: reconstruct the tensor with O(log N) number of samples.
Methods:
1. ALS, inspired by the LMaFit method for matrix completion;
complexity O(r⁴ d #P).
2. Alternating directions fitting (ADF); complexity O(r² d #P).
Numerical experiments for SPDEs: Tensor completion
[L. Grasedyck, M. Kluge, S. Kraemer, SIAM J. Sci. Comput., Vol. 37/5, 2016]
Applied the ALS and ADF methods to:
− div(κ(x, ω) ∇u(x, ω)) = 1 in D × Ω,
u(x, ω) = 0 on ∂D × Ω,   (13)
D = [−1, 1]. The goal is to determine u(ω) := ∫_D u(x, ω) dx.
FE with 50 dofs, KLE with d terms, d stochastically independent
RVs.
This yields the tensor A_{i_1...i_d} := u(i_1, ..., i_d),
n = 100, d = 5, slice density C_SD = 6.
Software (MATLAB) is available.
Example: updating of the solution u
Figure: Original and updated solutions, mean value plus/minus 1, 2, 3
standard deviations. Number of available measurements: {0, 1, 2, 3, 5}.
[graphics are built in the stochastic Galerkin library sglib, written by E. Zander in TU Braunschweig]
Conclusion
Introduced low-rank tensor methods to solve elliptic PDEs
with uncertain coefficients.
Explained how to compute the maximum and the mean in a
low-rank tensor format.
Derived the Bayesian update surrogate ϕ (as a linear,
quadratic, cubic, etc. approximation), i.e. computed the
conditional expectation of q given the measurement y.
Applied the tensor completion method to the sparse measurement
tensor in the likelihood.
MUMS Opening Workshop - Panel Discussion: Facts About Some Statisitcal Models...MUMS Opening Workshop - Panel Discussion: Facts About Some Statisitcal Models...
MUMS Opening Workshop - Panel Discussion: Facts About Some Statisitcal Models...
The Statistical and Applied Mathematical Sciences Institute
 

Similar to Tensor Completion for PDEs with uncertain coefficients and Bayesian Update techniques (20)

A nonlinear approximation of the Bayesian Update formula
A nonlinear approximation of the Bayesian Update formulaA nonlinear approximation of the Bayesian Update formula
A nonlinear approximation of the Bayesian Update formula
 
Connection between inverse problems and uncertainty quantification problems
Connection between inverse problems and uncertainty quantification problemsConnection between inverse problems and uncertainty quantification problems
Connection between inverse problems and uncertainty quantification problems
 
Minimum mean square error estimation and approximation of the Bayesian update
Minimum mean square error estimation and approximation of the Bayesian updateMinimum mean square error estimation and approximation of the Bayesian update
Minimum mean square error estimation and approximation of the Bayesian update
 
Linear Bayesian update surrogate for updating PCE coefficients
Linear Bayesian update surrogate for updating PCE coefficientsLinear Bayesian update surrogate for updating PCE coefficients
Linear Bayesian update surrogate for updating PCE coefficients
 
Bayesian inference on mixtures
Bayesian inference on mixturesBayesian inference on mixtures
Bayesian inference on mixtures
 
Non-sampling functional approximation of linear and non-linear Bayesian Update
Non-sampling functional approximation of linear and non-linear Bayesian UpdateNon-sampling functional approximation of linear and non-linear Bayesian Update
Non-sampling functional approximation of linear and non-linear Bayesian Update
 
How to find a cheap surrogate to approximate Bayesian Update Formula and to a...
How to find a cheap surrogate to approximate Bayesian Update Formula and to a...How to find a cheap surrogate to approximate Bayesian Update Formula and to a...
How to find a cheap surrogate to approximate Bayesian Update Formula and to a...
 
Litv_Denmark_Weak_Supervised_Learning.pdf
Litv_Denmark_Weak_Supervised_Learning.pdfLitv_Denmark_Weak_Supervised_Learning.pdf
Litv_Denmark_Weak_Supervised_Learning.pdf
 
Tensor train to solve stochastic PDEs
Tensor train to solve stochastic PDEsTensor train to solve stochastic PDEs
Tensor train to solve stochastic PDEs
 
Solving inverse problems via non-linear Bayesian Update of PCE coefficients
Solving inverse problems via non-linear Bayesian Update of PCE coefficientsSolving inverse problems via non-linear Bayesian Update of PCE coefficients
Solving inverse problems via non-linear Bayesian Update of PCE coefficients
 
Litvinenko nlbu2016
Litvinenko nlbu2016Litvinenko nlbu2016
Litvinenko nlbu2016
 
QMC: Transition Workshop - Applying Quasi-Monte Carlo Methods to a Stochastic...
QMC: Transition Workshop - Applying Quasi-Monte Carlo Methods to a Stochastic...QMC: Transition Workshop - Applying Quasi-Monte Carlo Methods to a Stochastic...
QMC: Transition Workshop - Applying Quasi-Monte Carlo Methods to a Stochastic...
 
Developing fast low-rank tensor methods for solving PDEs with uncertain coef...
Developing fast  low-rank tensor methods for solving PDEs with uncertain coef...Developing fast  low-rank tensor methods for solving PDEs with uncertain coef...
Developing fast low-rank tensor methods for solving PDEs with uncertain coef...
 
Automatic bayesian cubature
Automatic bayesian cubatureAutomatic bayesian cubature
Automatic bayesian cubature
 
My presentation at University of Nottingham "Fast low-rank methods for solvin...
My presentation at University of Nottingham "Fast low-rank methods for solvin...My presentation at University of Nottingham "Fast low-rank methods for solvin...
My presentation at University of Nottingham "Fast low-rank methods for solvin...
 
Introduction to modern Variational Inference.
Introduction to modern Variational Inference.Introduction to modern Variational Inference.
Introduction to modern Variational Inference.
 
Automatic Bayesian method for Numerical Integration
Automatic Bayesian method for Numerical Integration Automatic Bayesian method for Numerical Integration
Automatic Bayesian method for Numerical Integration
 
QMC Program: Trends and Advances in Monte Carlo Sampling Algorithms Workshop,...
QMC Program: Trends and Advances in Monte Carlo Sampling Algorithms Workshop,...QMC Program: Trends and Advances in Monte Carlo Sampling Algorithms Workshop,...
QMC Program: Trends and Advances in Monte Carlo Sampling Algorithms Workshop,...
 
Approximate Bayesian computation for the Ising/Potts model
Approximate Bayesian computation for the Ising/Potts modelApproximate Bayesian computation for the Ising/Potts model
Approximate Bayesian computation for the Ising/Potts model
 
MUMS Opening Workshop - Panel Discussion: Facts About Some Statisitcal Models...
MUMS Opening Workshop - Panel Discussion: Facts About Some Statisitcal Models...MUMS Opening Workshop - Panel Discussion: Facts About Some Statisitcal Models...
MUMS Opening Workshop - Panel Discussion: Facts About Some Statisitcal Models...
 

More from Alexander Litvinenko

Poster_density_driven_with_fracture_MLMC.pdf
Poster_density_driven_with_fracture_MLMC.pdfPoster_density_driven_with_fracture_MLMC.pdf
Poster_density_driven_with_fracture_MLMC.pdf
Alexander Litvinenko
 
litvinenko_Henry_Intrusion_Hong-Kong_2024.pdf
litvinenko_Henry_Intrusion_Hong-Kong_2024.pdflitvinenko_Henry_Intrusion_Hong-Kong_2024.pdf
litvinenko_Henry_Intrusion_Hong-Kong_2024.pdf
Alexander Litvinenko
 
litvinenko_Intrusion_Bari_2023.pdf
litvinenko_Intrusion_Bari_2023.pdflitvinenko_Intrusion_Bari_2023.pdf
litvinenko_Intrusion_Bari_2023.pdf
Alexander Litvinenko
 
Density Driven Groundwater Flow with Uncertain Porosity and Permeability
Density Driven Groundwater Flow with Uncertain Porosity and PermeabilityDensity Driven Groundwater Flow with Uncertain Porosity and Permeability
Density Driven Groundwater Flow with Uncertain Porosity and Permeability
Alexander Litvinenko
 
litvinenko_Gamm2023.pdf
litvinenko_Gamm2023.pdflitvinenko_Gamm2023.pdf
litvinenko_Gamm2023.pdf
Alexander Litvinenko
 
Litvinenko_Poster_Henry_22May.pdf
Litvinenko_Poster_Henry_22May.pdfLitvinenko_Poster_Henry_22May.pdf
Litvinenko_Poster_Henry_22May.pdf
Alexander Litvinenko
 
Uncertain_Henry_problem-poster.pdf
Uncertain_Henry_problem-poster.pdfUncertain_Henry_problem-poster.pdf
Uncertain_Henry_problem-poster.pdf
Alexander Litvinenko
 
Litvinenko_RWTH_UQ_Seminar_talk.pdf
Litvinenko_RWTH_UQ_Seminar_talk.pdfLitvinenko_RWTH_UQ_Seminar_talk.pdf
Litvinenko_RWTH_UQ_Seminar_talk.pdf
Alexander Litvinenko
 
Computing f-Divergences and Distances of High-Dimensional Probability Density...
Computing f-Divergences and Distances of High-Dimensional Probability Density...Computing f-Divergences and Distances of High-Dimensional Probability Density...
Computing f-Divergences and Distances of High-Dimensional Probability Density...
Alexander Litvinenko
 
Computing f-Divergences and Distances of\\ High-Dimensional Probability Densi...
Computing f-Divergences and Distances of\\ High-Dimensional Probability Densi...Computing f-Divergences and Distances of\\ High-Dimensional Probability Densi...
Computing f-Divergences and Distances of\\ High-Dimensional Probability Densi...
Alexander Litvinenko
 
Low rank tensor approximation of probability density and characteristic funct...
Low rank tensor approximation of probability density and characteristic funct...Low rank tensor approximation of probability density and characteristic funct...
Low rank tensor approximation of probability density and characteristic funct...
Alexander Litvinenko
 
Identification of unknown parameters and prediction of missing values. Compar...
Identification of unknown parameters and prediction of missing values. Compar...Identification of unknown parameters and prediction of missing values. Compar...
Identification of unknown parameters and prediction of missing values. Compar...
Alexander Litvinenko
 
Computation of electromagnetic fields scattered from dielectric objects of un...
Computation of electromagnetic fields scattered from dielectric objects of un...Computation of electromagnetic fields scattered from dielectric objects of un...
Computation of electromagnetic fields scattered from dielectric objects of un...
Alexander Litvinenko
 
Identification of unknown parameters and prediction with hierarchical matrice...
Identification of unknown parameters and prediction with hierarchical matrice...Identification of unknown parameters and prediction with hierarchical matrice...
Identification of unknown parameters and prediction with hierarchical matrice...
Alexander Litvinenko
 
Low-rank tensor approximation (Introduction)
Low-rank tensor approximation (Introduction)Low-rank tensor approximation (Introduction)
Low-rank tensor approximation (Introduction)
Alexander Litvinenko
 
Computation of electromagnetic fields scattered from dielectric objects of un...
Computation of electromagnetic fields scattered from dielectric objects of un...Computation of electromagnetic fields scattered from dielectric objects of un...
Computation of electromagnetic fields scattered from dielectric objects of un...
Alexander Litvinenko
 
Application of parallel hierarchical matrices for parameter inference and pre...
Application of parallel hierarchical matrices for parameter inference and pre...Application of parallel hierarchical matrices for parameter inference and pre...
Application of parallel hierarchical matrices for parameter inference and pre...
Alexander Litvinenko
 
Computation of electromagnetic fields scattered from dielectric objects of un...
Computation of electromagnetic fields scattered from dielectric objects of un...Computation of electromagnetic fields scattered from dielectric objects of un...
Computation of electromagnetic fields scattered from dielectric objects of un...
Alexander Litvinenko
 
Propagation of Uncertainties in Density Driven Groundwater Flow
Propagation of Uncertainties in Density Driven Groundwater FlowPropagation of Uncertainties in Density Driven Groundwater Flow
Propagation of Uncertainties in Density Driven Groundwater Flow
Alexander Litvinenko
 
Simulation of propagation of uncertainties in density-driven groundwater flow
Simulation of propagation of uncertainties in density-driven groundwater flowSimulation of propagation of uncertainties in density-driven groundwater flow
Simulation of propagation of uncertainties in density-driven groundwater flow
Alexander Litvinenko
 

More from Alexander Litvinenko (20)

Poster_density_driven_with_fracture_MLMC.pdf
Poster_density_driven_with_fracture_MLMC.pdfPoster_density_driven_with_fracture_MLMC.pdf
Poster_density_driven_with_fracture_MLMC.pdf
 
litvinenko_Henry_Intrusion_Hong-Kong_2024.pdf
litvinenko_Henry_Intrusion_Hong-Kong_2024.pdflitvinenko_Henry_Intrusion_Hong-Kong_2024.pdf
litvinenko_Henry_Intrusion_Hong-Kong_2024.pdf
 
litvinenko_Intrusion_Bari_2023.pdf
litvinenko_Intrusion_Bari_2023.pdflitvinenko_Intrusion_Bari_2023.pdf
litvinenko_Intrusion_Bari_2023.pdf
 
Density Driven Groundwater Flow with Uncertain Porosity and Permeability
Density Driven Groundwater Flow with Uncertain Porosity and PermeabilityDensity Driven Groundwater Flow with Uncertain Porosity and Permeability
Density Driven Groundwater Flow with Uncertain Porosity and Permeability
 
litvinenko_Gamm2023.pdf
litvinenko_Gamm2023.pdflitvinenko_Gamm2023.pdf
litvinenko_Gamm2023.pdf
 
Litvinenko_Poster_Henry_22May.pdf
Litvinenko_Poster_Henry_22May.pdfLitvinenko_Poster_Henry_22May.pdf
Litvinenko_Poster_Henry_22May.pdf
 
Uncertain_Henry_problem-poster.pdf
Uncertain_Henry_problem-poster.pdfUncertain_Henry_problem-poster.pdf
Uncertain_Henry_problem-poster.pdf
 
Litvinenko_RWTH_UQ_Seminar_talk.pdf
Litvinenko_RWTH_UQ_Seminar_talk.pdfLitvinenko_RWTH_UQ_Seminar_talk.pdf
Litvinenko_RWTH_UQ_Seminar_talk.pdf
 
Computing f-Divergences and Distances of High-Dimensional Probability Density...
Computing f-Divergences and Distances of High-Dimensional Probability Density...Computing f-Divergences and Distances of High-Dimensional Probability Density...
Computing f-Divergences and Distances of High-Dimensional Probability Density...
 
Computing f-Divergences and Distances of\\ High-Dimensional Probability Densi...
Computing f-Divergences and Distances of\\ High-Dimensional Probability Densi...Computing f-Divergences and Distances of\\ High-Dimensional Probability Densi...
Computing f-Divergences and Distances of\\ High-Dimensional Probability Densi...
 
Low rank tensor approximation of probability density and characteristic funct...
Low rank tensor approximation of probability density and characteristic funct...Low rank tensor approximation of probability density and characteristic funct...
Low rank tensor approximation of probability density and characteristic funct...
 
Identification of unknown parameters and prediction of missing values. Compar...
Identification of unknown parameters and prediction of missing values. Compar...Identification of unknown parameters and prediction of missing values. Compar...
Identification of unknown parameters and prediction of missing values. Compar...
 
Computation of electromagnetic fields scattered from dielectric objects of un...
Computation of electromagnetic fields scattered from dielectric objects of un...Computation of electromagnetic fields scattered from dielectric objects of un...
Computation of electromagnetic fields scattered from dielectric objects of un...
 
Identification of unknown parameters and prediction with hierarchical matrice...
Identification of unknown parameters and prediction with hierarchical matrice...Identification of unknown parameters and prediction with hierarchical matrice...
Identification of unknown parameters and prediction with hierarchical matrice...
 
Low-rank tensor approximation (Introduction)
Low-rank tensor approximation (Introduction)Low-rank tensor approximation (Introduction)
Low-rank tensor approximation (Introduction)
 
Computation of electromagnetic fields scattered from dielectric objects of un...
Computation of electromagnetic fields scattered from dielectric objects of un...Computation of electromagnetic fields scattered from dielectric objects of un...
Computation of electromagnetic fields scattered from dielectric objects of un...
 
Application of parallel hierarchical matrices for parameter inference and pre...
Application of parallel hierarchical matrices for parameter inference and pre...Application of parallel hierarchical matrices for parameter inference and pre...
Application of parallel hierarchical matrices for parameter inference and pre...
 
Computation of electromagnetic fields scattered from dielectric objects of un...
Computation of electromagnetic fields scattered from dielectric objects of un...Computation of electromagnetic fields scattered from dielectric objects of un...
Computation of electromagnetic fields scattered from dielectric objects of un...
 
Propagation of Uncertainties in Density Driven Groundwater Flow
Propagation of Uncertainties in Density Driven Groundwater FlowPropagation of Uncertainties in Density Driven Groundwater Flow
Propagation of Uncertainties in Density Driven Groundwater Flow
 
Simulation of propagation of uncertainties in density-driven groundwater flow
Simulation of propagation of uncertainties in density-driven groundwater flowSimulation of propagation of uncertainties in density-driven groundwater flow
Simulation of propagation of uncertainties in density-driven groundwater flow
 

Recently uploaded

CACJapan - GROUP Presentation 1- Wk 4.pdf
CACJapan - GROUP Presentation 1- Wk 4.pdfCACJapan - GROUP Presentation 1- Wk 4.pdf
CACJapan - GROUP Presentation 1- Wk 4.pdf
camakaiclarkmusic
 
Unit 2- Research Aptitude (UGC NET Paper I).pdf
Unit 2- Research Aptitude (UGC NET Paper I).pdfUnit 2- Research Aptitude (UGC NET Paper I).pdf
Unit 2- Research Aptitude (UGC NET Paper I).pdf
Thiyagu K
 
A Strategic Approach: GenAI in Education
A Strategic Approach: GenAI in EducationA Strategic Approach: GenAI in Education
A Strategic Approach: GenAI in Education
Peter Windle
 
Home assignment II on Spectroscopy 2024 Answers.pdf
Home assignment II on Spectroscopy 2024 Answers.pdfHome assignment II on Spectroscopy 2024 Answers.pdf
Home assignment II on Spectroscopy 2024 Answers.pdf
Tamralipta Mahavidyalaya
 
Operation Blue Star - Saka Neela Tara
Operation Blue Star   -  Saka Neela TaraOperation Blue Star   -  Saka Neela Tara
Operation Blue Star - Saka Neela Tara
Balvir Singh
 
The basics of sentences session 5pptx.pptx
The basics of sentences session 5pptx.pptxThe basics of sentences session 5pptx.pptx
The basics of sentences session 5pptx.pptx
heathfieldcps1
 
Chapter 3 - Islamic Banking Products and Services.pptx
Chapter 3 - Islamic Banking Products and Services.pptxChapter 3 - Islamic Banking Products and Services.pptx
Chapter 3 - Islamic Banking Products and Services.pptx
Mohd Adib Abd Muin, Senior Lecturer at Universiti Utara Malaysia
 
Honest Reviews of Tim Han LMA Course Program.pptx
Honest Reviews of Tim Han LMA Course Program.pptxHonest Reviews of Tim Han LMA Course Program.pptx
Honest Reviews of Tim Han LMA Course Program.pptx
timhan337
 
BÀI TẬP BỔ TRỢ TIẾNG ANH GLOBAL SUCCESS LỚP 3 - CẢ NĂM (CÓ FILE NGHE VÀ ĐÁP Á...
BÀI TẬP BỔ TRỢ TIẾNG ANH GLOBAL SUCCESS LỚP 3 - CẢ NĂM (CÓ FILE NGHE VÀ ĐÁP Á...BÀI TẬP BỔ TRỢ TIẾNG ANH GLOBAL SUCCESS LỚP 3 - CẢ NĂM (CÓ FILE NGHE VÀ ĐÁP Á...
BÀI TẬP BỔ TRỢ TIẾNG ANH GLOBAL SUCCESS LỚP 3 - CẢ NĂM (CÓ FILE NGHE VÀ ĐÁP Á...
Nguyen Thanh Tu Collection
 
How to Make a Field invisible in Odoo 17
How to Make a Field invisible in Odoo 17How to Make a Field invisible in Odoo 17
How to Make a Field invisible in Odoo 17
Celine George
 
Unit 8 - Information and Communication Technology (Paper I).pdf
Unit 8 - Information and Communication Technology (Paper I).pdfUnit 8 - Information and Communication Technology (Paper I).pdf
Unit 8 - Information and Communication Technology (Paper I).pdf
Thiyagu K
 
Guidance_and_Counselling.pdf B.Ed. 4th Semester
Guidance_and_Counselling.pdf B.Ed. 4th SemesterGuidance_and_Counselling.pdf B.Ed. 4th Semester
Guidance_and_Counselling.pdf B.Ed. 4th Semester
Atul Kumar Singh
 
Introduction to AI for Nonprofits with Tapp Network
Introduction to AI for Nonprofits with Tapp NetworkIntroduction to AI for Nonprofits with Tapp Network
Introduction to AI for Nonprofits with Tapp Network
TechSoup
 
How libraries can support authors with open access requirements for UKRI fund...
How libraries can support authors with open access requirements for UKRI fund...How libraries can support authors with open access requirements for UKRI fund...
How libraries can support authors with open access requirements for UKRI fund...
Jisc
 
CLASS 11 CBSE B.St Project AIDS TO TRADE - INSURANCE
CLASS 11 CBSE B.St Project AIDS TO TRADE - INSURANCECLASS 11 CBSE B.St Project AIDS TO TRADE - INSURANCE
CLASS 11 CBSE B.St Project AIDS TO TRADE - INSURANCE
BhavyaRajput3
 
1.4 modern child centered education - mahatma gandhi-2.pptx
1.4 modern child centered education - mahatma gandhi-2.pptx1.4 modern child centered education - mahatma gandhi-2.pptx
1.4 modern child centered education - mahatma gandhi-2.pptx
JosvitaDsouza2
 
The approach at University of Liverpool.pptx
The approach at University of Liverpool.pptxThe approach at University of Liverpool.pptx
The approach at University of Liverpool.pptx
Jisc
 
Sha'Carri Richardson Presentation 202345
Sha'Carri Richardson Presentation 202345Sha'Carri Richardson Presentation 202345
Sha'Carri Richardson Presentation 202345
beazzy04
 
678020731-Sumas-y-Restas-Para-Colorear.pdf
678020731-Sumas-y-Restas-Para-Colorear.pdf678020731-Sumas-y-Restas-Para-Colorear.pdf
678020731-Sumas-y-Restas-Para-Colorear.pdf
CarlosHernanMontoyab2
 
June 3, 2024 Anti-Semitism Letter Sent to MIT President Kornbluth and MIT Cor...
June 3, 2024 Anti-Semitism Letter Sent to MIT President Kornbluth and MIT Cor...June 3, 2024 Anti-Semitism Letter Sent to MIT President Kornbluth and MIT Cor...
June 3, 2024 Anti-Semitism Letter Sent to MIT President Kornbluth and MIT Cor...
Levi Shapiro
 

Recently uploaded (20)

CACJapan - GROUP Presentation 1- Wk 4.pdf
CACJapan - GROUP Presentation 1- Wk 4.pdfCACJapan - GROUP Presentation 1- Wk 4.pdf
CACJapan - GROUP Presentation 1- Wk 4.pdf
 
Unit 2- Research Aptitude (UGC NET Paper I).pdf
Unit 2- Research Aptitude (UGC NET Paper I).pdfUnit 2- Research Aptitude (UGC NET Paper I).pdf
Unit 2- Research Aptitude (UGC NET Paper I).pdf
 
A Strategic Approach: GenAI in Education
A Strategic Approach: GenAI in EducationA Strategic Approach: GenAI in Education
A Strategic Approach: GenAI in Education
 
Home assignment II on Spectroscopy 2024 Answers.pdf
Home assignment II on Spectroscopy 2024 Answers.pdfHome assignment II on Spectroscopy 2024 Answers.pdf
Home assignment II on Spectroscopy 2024 Answers.pdf
 
Operation Blue Star - Saka Neela Tara
Operation Blue Star   -  Saka Neela TaraOperation Blue Star   -  Saka Neela Tara
Operation Blue Star - Saka Neela Tara
 
The basics of sentences session 5pptx.pptx
The basics of sentences session 5pptx.pptxThe basics of sentences session 5pptx.pptx
The basics of sentences session 5pptx.pptx
 
Chapter 3 - Islamic Banking Products and Services.pptx
Chapter 3 - Islamic Banking Products and Services.pptxChapter 3 - Islamic Banking Products and Services.pptx
Chapter 3 - Islamic Banking Products and Services.pptx
 
Honest Reviews of Tim Han LMA Course Program.pptx
Honest Reviews of Tim Han LMA Course Program.pptxHonest Reviews of Tim Han LMA Course Program.pptx
Honest Reviews of Tim Han LMA Course Program.pptx
 
BÀI TẬP BỔ TRỢ TIẾNG ANH GLOBAL SUCCESS LỚP 3 - CẢ NĂM (CÓ FILE NGHE VÀ ĐÁP Á...
BÀI TẬP BỔ TRỢ TIẾNG ANH GLOBAL SUCCESS LỚP 3 - CẢ NĂM (CÓ FILE NGHE VÀ ĐÁP Á...BÀI TẬP BỔ TRỢ TIẾNG ANH GLOBAL SUCCESS LỚP 3 - CẢ NĂM (CÓ FILE NGHE VÀ ĐÁP Á...
BÀI TẬP BỔ TRỢ TIẾNG ANH GLOBAL SUCCESS LỚP 3 - CẢ NĂM (CÓ FILE NGHE VÀ ĐÁP Á...
 
How to Make a Field invisible in Odoo 17
How to Make a Field invisible in Odoo 17How to Make a Field invisible in Odoo 17
How to Make a Field invisible in Odoo 17
 
Unit 8 - Information and Communication Technology (Paper I).pdf
Unit 8 - Information and Communication Technology (Paper I).pdfUnit 8 - Information and Communication Technology (Paper I).pdf
Unit 8 - Information and Communication Technology (Paper I).pdf
 
Guidance_and_Counselling.pdf B.Ed. 4th Semester
Guidance_and_Counselling.pdf B.Ed. 4th SemesterGuidance_and_Counselling.pdf B.Ed. 4th Semester
Guidance_and_Counselling.pdf B.Ed. 4th Semester
 
Introduction to AI for Nonprofits with Tapp Network
Introduction to AI for Nonprofits with Tapp NetworkIntroduction to AI for Nonprofits with Tapp Network
Introduction to AI for Nonprofits with Tapp Network
 
How libraries can support authors with open access requirements for UKRI fund...
How libraries can support authors with open access requirements for UKRI fund...How libraries can support authors with open access requirements for UKRI fund...
How libraries can support authors with open access requirements for UKRI fund...
 
CLASS 11 CBSE B.St Project AIDS TO TRADE - INSURANCE
CLASS 11 CBSE B.St Project AIDS TO TRADE - INSURANCECLASS 11 CBSE B.St Project AIDS TO TRADE - INSURANCE
CLASS 11 CBSE B.St Project AIDS TO TRADE - INSURANCE
 
1.4 modern child centered education - mahatma gandhi-2.pptx
1.4 modern child centered education - mahatma gandhi-2.pptx1.4 modern child centered education - mahatma gandhi-2.pptx
1.4 modern child centered education - mahatma gandhi-2.pptx
 
The approach at University of Liverpool.pptx
The approach at University of Liverpool.pptxThe approach at University of Liverpool.pptx
The approach at University of Liverpool.pptx
 
Sha'Carri Richardson Presentation 202345
Sha'Carri Richardson Presentation 202345Sha'Carri Richardson Presentation 202345
Sha'Carri Richardson Presentation 202345
 
678020731-Sumas-y-Restas-Para-Colorear.pdf
678020731-Sumas-y-Restas-Para-Colorear.pdf678020731-Sumas-y-Restas-Para-Colorear.pdf
678020731-Sumas-y-Restas-Para-Colorear.pdf
 
June 3, 2024 Anti-Semitism Letter Sent to MIT President Kornbluth and MIT Cor...
June 3, 2024 Anti-Semitism Letter Sent to MIT President Kornbluth and MIT Cor...June 3, 2024 Anti-Semitism Letter Sent to MIT President Kornbluth and MIT Cor...
June 3, 2024 Anti-Semitism Letter Sent to MIT President Kornbluth and MIT Cor...
 

Tensor Completion for PDEs with uncertain coefficients and Bayesian Update techniques

  • 1. Tensor completion for PDEs with uncertain coefficients and Bayesian Update. Alexander Litvinenko (joint work with E. Zander, B. Rosic, O. Pajonk, H. Matthies), Center for Uncertainty Quantification, Extreme Computing Research Center, KAUST. http://sri-uq.kaust.edu.sa/
  • 2. The structure of the talk. Part I (Stochastic forward problem): 1. Motivation; 2. Elliptic PDE with uncertain coefficients; 3. Discretization and low-rank tensor approximations. Part II (Bayesian update): 1. Bayesian update surrogate; 2. Examples. Part III (Tensor completion): 1. Problem setup; 2. Tensor completion for Bayesian Update.
  • 3. Motivation for Uncertainty Quantification (UQ). Motivation: there is an urgent need to quantify and reduce the uncertainty in output quantities of computer simulations within complex (multiscale-multiphysics) applications. Typical challenges: classical sampling methods are often very inefficient, whereas straightforward functional representations are subject to the well-known curse of dimensionality. Nowadays computational predictions are used in critical engineering decisions, and thanks to modern computers we are able to simulate very complex phenomena. But how reliable are these predictions? Can they be trusted? Example: Saudi Aramco currently has a simulator, GigaPOWERS, which runs with 9 billion cells. How sensitive are the simulation results with respect to the unknown reservoir properties?
  • 4. Part I: Stochastic forward problem. Stochastic Galerkin method to solve an elliptic PDE with uncertain coefficients.
  • 5. PDE with uncertain coefficient and RHS. Consider −div(κ(x, ω) ∇u(x, ω)) = f(x, ω) in G × Ω, G ⊂ R², with u = 0 on ∂G, (1) where κ(x, ω) is an uncertain diffusion coefficient. Since κ is positive, usually κ(x, ω) = e^{γ(x, ω)}. For well-posedness see [Sarkis 09, Gittelson 10, H.J. Starkloff 11, Ullmann 10]. Further we will assume that cov_κ(x, y) is given.
  • 6. My previous work. After applying the stochastic Galerkin method, we obtain Ku = f, where all ingredients are represented in a tensor format. Compute max{u}, var(u), level sets of u, sign(u): [1] Efficient Analysis of High Dimensional Data in Tensor Formats, Espig, Hackbusch, A.L., Matthies and Zander, 2012. Study which ingredients influence the tensor rank of K: [2] Efficient low-rank approximation of the stochastic Galerkin matrix in tensor formats, Wähnert, Espig, Hackbusch, A.L., Matthies, 2013. Approximate κ(x, ω) and the stochastic Galerkin operator K in the Tensor Train (TT) format, solve for u, postprocess: [3] Polynomial Chaos Expansion of random coefficients and the solution of stochastic partial differential equations in the Tensor Train format, Dolgov, Litvinenko, Khoromskij, Matthies, 2016.
  • 7. Canonical and Tucker tensor formats. Definition and examples of tensors.
  • 8. Canonical and Tucker tensor formats. [Pictures are taken from the lecture course of B. Khoromskij and A. Auer.] Storage: O(n^d) reduced to O(dRn) for the canonical format and O(R^d + dRn) for the Tucker format.
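The storage counts above can be made concrete with a small numpy sketch (illustrative sizes, not from the slides): a rank-R canonical (CP) tensor is kept as d factor matrices of size n × R, i.e. d·n·R numbers, instead of the full n^d array.

```python
import numpy as np

# Rank-R canonical tensor: A = sum_{r=1}^R a1_r (x) a2_r (x) ... (x) ad_r,
# stored as d factor matrices of size n x R.
d, n, R = 4, 10, 3
rng = np.random.default_rng(0)
factors = [rng.standard_normal((n, R)) for _ in range(d)]  # O(d*n*R) storage

def cp_full(factors):
    """Expand the CP representation into the full n^d tensor (for checking only)."""
    full = 0
    for r in range(factors[0].shape[1]):
        comp = factors[0][:, r]
        for mu in range(1, len(factors)):
            comp = np.multiply.outer(comp, factors[mu][:, r])  # rank-1 term
        full = full + comp
    return full

A = cp_full(factors)
print(A.shape)                       # (10, 10, 10, 10): n^d = 10^4 entries
print(sum(f.size for f in factors))  # 120 = d*n*R numbers actually stored
```

For d = 4, n = 10, R = 3 the compressed form holds 120 numbers versus 10 000 for the full array; the gap grows exponentially with d.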
  • 9. Definition of a tensor of order d. A tensor of order d is a multidimensional array over a d-tuple index set I = I₁ × · · · × I_d: A = [a_{i1...id} : i_µ ∈ I_µ] ∈ R^I, I_µ = {1, ..., n_µ}, µ = 1, ..., d. A is an element of the linear space V_n = ⊗_{µ=1}^d V_µ, V_µ = R^{I_µ}, equipped with the Euclidean scalar product ⟨·, ·⟩ : V_n × V_n → R, defined as ⟨A, B⟩ := Σ_{(i1...id)∈I} a_{i1...id} b_{i1...id} for A, B ∈ V_n.
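The scalar product in this definition is simply the entrywise Euclidean one; a minimal numpy check (illustrative shapes):

```python
import numpy as np

# <A, B> = sum over all multi-indices (i1...id) of a_{i1...id} * b_{i1...id}
rng = np.random.default_rng(1)
A = rng.standard_normal((3, 4, 5))   # order-3 tensor on I = I1 x I2 x I3
B = rng.standard_normal((3, 4, 5))
ip = np.sum(A * B)                   # Euclidean scalar product <A, B>
# equivalently, the ordinary dot product of the flattened tensors
assert np.isclose(ip, np.dot(A.ravel(), B.ravel()))
```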
  • 10. Discretization of the elliptic PDE. Now let us discretize our diffusion equation with uncertain coefficients.
  • 11. Karhunen-Loève and Polynomial Chaos Expansions. Apply the Karhunen-Loève Expansion (KLE): κ(x, ω) = κ₀(x) + Σ_{j=1}^∞ √κ_j g_j(x) ξ_j(θ(ω)), where θ = θ(ω) = (θ₁(ω), θ₂(ω), ...) and ξ_j(θ) = (1/√κ_j) ∫_G (κ(x, ω) − κ₀(x)) g_j(x) dx. With the Polynomial Chaos Expansion (PCE) κ(x, ω) = Σ_α κ^(α)(x) H_α(θ), compute ξ_j(θ) = Σ_{α∈J} ξ_j^(α) H_α(θ), where ξ_j^(α) = (1/√κ_j) ∫_G κ^(α)(x) g_j(x) dx. Further compute the low-rank factorization ξ_j^(α) ≈ Σ_{ℓ=1}^s (ξ_ℓ)_j Π_{k=1}^∞ (ξ_{ℓ,k})_{α_k}.
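A truncated KLE can be sketched numerically as the eigendecomposition of the discretized covariance. This is an illustrative sketch only: the exponential covariance, the 1D grid, and the uniform quadrature weight are assumptions, not taken from the slides.

```python
import numpy as np

# Truncated KLE of a random field on a 1D grid, assuming (for illustration)
# the exponential covariance cov(x, y) = exp(-|x - y| / l).
n, l, m = 200, 0.3, 10                             # grid size, corr. length, KLE terms
x = np.linspace(0.0, 1.0, n)
C = np.exp(-np.abs(x[:, None] - x[None, :]) / l)   # covariance matrix cov_kappa(x, y)
w = 1.0 / n                                        # uniform quadrature weight

# Discrete analogue of the KLE eigenproblem  int_G C(x,y) g_j(y) dy = kappa_j g_j(x)
lam, g = np.linalg.eigh(C * w)                     # eigh returns ascending eigenvalues
lam, g = lam[::-1][:m], g[:, ::-1][:, :m]          # keep the m largest eigenpairs

# One realization: gamma = sum_j sqrt(kappa_j) g_j(x) theta_j with theta_j ~ N(0,1);
# kappa = exp(gamma) keeps the diffusion coefficient positive, as on the slide.
theta = np.random.default_rng(2).standard_normal(m)
gamma = g @ (np.sqrt(np.maximum(lam, 0.0)) * theta)
kappa = np.exp(gamma)
```

The rapid decay of `lam` is what makes the truncation after m terms (and hence the low-dimensional parametrization θ) effective.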
  • 12. Final discretized stochastic PDE. Ku = f, where K := Σ_{ℓ=1}^s K_ℓ ⊗ ⊗_{µ=1}^M Δ_{ℓµ}, with K_ℓ ∈ R^{N×N}, Δ_{ℓµ} ∈ R^{R_µ×R_µ}; u := Σ_{j=1}^r u_j ⊗ ⊗_{µ=1}^M u_{jµ}, with u_j ∈ R^N, u_{jµ} ∈ R^{R_µ}; f := Σ_{k=1}^R f_k ⊗ ⊗_{µ=1}^M g_{kµ}, with f_k ∈ R^N and g_{kµ} ∈ R^{R_µ}. (Wähnert, Espig, Hackbusch, Litvinenko, Matthies, 2011.) Examples of stochastic Galerkin matrices follow.
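The point of this Kronecker structure is that K can be applied to a low-rank u term by term, via (A ⊗ B)(x ⊗ y) = (Ax) ⊗ (By), without ever forming the huge Kronecker matrix. A sketch with M = 2 stochastic modes and made-up sizes (all names and dimensions here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
N, R1, R2, s, r = 20, 6, 7, 3, 2
Ks  = [rng.standard_normal((N, N))   for _ in range(s)]   # spatial blocks K_l
D1s = [rng.standard_normal((R1, R1)) for _ in range(s)]   # stochastic blocks Delta_l1
D2s = [rng.standard_normal((R2, R2)) for _ in range(s)]   # stochastic blocks Delta_l2
us  = [(rng.standard_normal(N), rng.standard_normal(R1), rng.standard_normal(R2))
       for _ in range(r)]                                 # u as r rank-1 terms

def matvec_lowrank(Ks, D1s, D2s, us):
    """Apply K = sum_l K_l (x) D1_l (x) D2_l to u = sum_j u_j0 (x) u_j1 (x) u_j2.
    Returns K u as a list of rank-1 terms; the rank grows to s * r."""
    out = []
    for K, D1, D2 in zip(Ks, D1s, D2s):
        for (u0, u1, u2) in us:
            out.append((K @ u0, D1 @ u1, D2 @ u2))
    return out

# Check against the dense Kronecker product on this small example.
Kfull = sum(np.kron(np.kron(K, D1), D2) for K, D1, D2 in zip(Ks, D1s, D2s))
ufull = sum(np.kron(np.kron(u0, u1), u2) for u0, u1, u2 in us)
yfull = sum(np.kron(np.kron(v0, v1), v2) for v0, v1, v2 in matvec_lowrank(Ks, D1s, D2s, us))
print(np.allclose(Kfull @ ufull, yfull))  # True
```

Since the rank of the result grows to s·r, iterative solvers in tensor formats interleave such matvecs with rank truncation.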
  • 13. Part II: Bayesian update. We will speak about the Gauss-Markov-Kalman filter for the Bayesian updating of parameters in a computational model.
  • 14. Mathematical setup. Consider K(u; q) = f ⇒ u = S(f; q), where S is the solution operator. The operator depends on parameters q ∈ Q, hence the state u ∈ U is also a function of q. Measurement operator Y with values in Y: y = Y(q; u) = Y(q, S(f; q)). Examples of measurements: y(ω) = ∫_{D₀} u(ω, x) dx, or u at a few points.
  • 15. 4* Random QoI
With the state $u$ a random variable, the quantity to be measured, $y(\omega) = Y(q(\omega), u(\omega))$, is also uncertain, a random variable. Noisy data: $\hat{y} + \varepsilon(\omega)$, where $\hat{y}$ is the "true" value and $\varepsilon$ a random error. Forecast of the measurement: $z(\omega) = y(\omega) + \varepsilon(\omega)$.
  • 16. 4* Conditional probability and expectation
Classically, Bayes's theorem gives the conditional probability
$P(I_q \mid M_z) = \frac{P(M_z \mid I_q)}{P(M_z)} P(I_q)$ (or $\pi_q(q \mid z) = \frac{p(z \mid q)}{Z_s} p_q(q)$);
expectation with this posterior measure is the conditional expectation. Kolmogorov starts from the conditional expectation $E(\cdot \mid M_z)$ and from it obtains the conditional probability via $P(I_q \mid M_z) = E(\chi_{I_q} \mid M_z)$.
  • 17. 4* Conditional expectation
The conditional expectation is defined as the orthogonal projection onto the closed subspace $L_2(\Omega, P, \sigma(z))$:
$E(q \mid \sigma(z)) := P_{Q_\infty} q = \operatorname{argmin}_{\tilde{q} \in L_2(\Omega, P, \sigma(z))} \| q - \tilde{q} \|^2_{L_2}$.
The subspace $Q_\infty := L_2(\Omega, P, \sigma(z))$ represents the available information. The update, also called the assimilated value, $q_a(\omega) := P_{Q_\infty} q = E(q \mid \sigma(z))$, is a $Q$-valued RV and represents the new state of knowledge after the measurement.
Doob-Dynkin: $Q_\infty = \{\varphi \in Q : \varphi = \phi \circ z,\ \phi \text{ measurable}\}$.
  • 18. 4* Numerical computation of NLBU
Look for $\varphi$ such that $q(\xi) = \varphi(z(\xi))$, $z(\xi) = y(\xi) + \varepsilon(\omega)$:
$\varphi \approx \tilde{\varphi} = \sum_{\alpha \in J_p} \varphi_\alpha \Phi_\alpha(z(\xi))$,
and minimize $\| q(\xi) - \tilde{\varphi}(z(\xi)) \|^2_{L_2}$, where the $\Phi_\alpha$ are polynomials (e.g. Hermite, Laguerre, Chebyshev or something else). Taking derivatives with respect to $\varphi_\alpha$:
$\frac{\partial}{\partial \varphi_\alpha} \langle q(\xi) - \tilde{\varphi}(z(\xi)),\ q(\xi) - \tilde{\varphi}(z(\xi)) \rangle = 0 \quad \forall \alpha \in J_p.$
Inserting the representation for $\tilde{\varphi}$, obtain:
  • 19. 4* Numerical computation of NLBU
$\frac{\partial}{\partial \varphi_\alpha} E\left[ q^2(\xi) - 2 \sum_{\beta \in J} q \varphi_\beta \Phi_\beta(z) + \sum_{\beta, \gamma \in J} \varphi_\beta \varphi_\gamma \Phi_\beta(z) \Phi_\gamma(z) \right]$
$= 2\, E\left[ -q \Phi_\alpha(z) + \sum_{\beta \in J} \varphi_\beta \Phi_\beta(z) \Phi_\alpha(z) \right]$
$= 2 \left( \sum_{\beta \in J} E[\Phi_\beta(z) \Phi_\alpha(z)]\, \varphi_\beta - E[q \Phi_\alpha(z)] \right) = 0 \quad \forall \alpha \in J.$
  • 20. 4* Numerical computation of NLBU
Rewriting the last sum in matrix form gives the linear system $A \varphi = b$ (defining $A$) for the coefficients $\varphi_\beta$:
$A_{\alpha\beta} = E[\Phi_\alpha(z(\xi)) \Phi_\beta(z(\xi))]$, $\quad b_\alpha = E[q(\xi) \Phi_\alpha(z(\xi))]$,
where $\alpha, \beta \in J$ and $A$ is of size $|J| \times |J|$.
  • 21. 4* Numerical computation of NLBU
We can rewrite the system above in the compact form
$[\Phi]\, \operatorname{diag}(\ldots w_i \ldots)\, [\Phi]^T (\ldots \varphi_\beta \ldots)^T = [\Phi]\, (w_0 q(\xi_0), \ldots, w_N q(\xi_N))^T$,
where $[\Phi] \in \mathbb{R}^{|J| \times N}$ and $\operatorname{diag}(\ldots w_i \ldots) \in \mathbb{R}^{N \times N}$. Solving this system, we obtain the vector of coefficients $(\ldots \varphi_\beta \ldots)^T$ for all $\beta$. Finally, the assimilated parameter $q_a$ is
$q_a = q_f + \tilde{\varphi}(\hat{y}) - \tilde{\varphi}(z)$, (2)
where $z(\xi) = y(\xi) + \varepsilon(\omega)$ and $\tilde{\varphi} = \sum_{\beta \in J_p} \varphi_\beta \Phi_\beta(z(\xi))$.
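A minimal sketch of this linear system with synthetic data, monomials $\Phi_\alpha(z) = z^\alpha$ and equal Monte Carlo weights $w_i = 1/N$ (the talk allows Hermite, Laguerre or Chebyshev bases and general quadrature weights; all numbers below are made up):

```python
import numpy as np

# Build A_ab = E[Phi_a(z) Phi_b(z)] and b_a = E[q Phi_a(z)] from samples,
# solve for the coefficients phi, then apply the update (2).
rng = np.random.default_rng(3)
Nsmp, p = 20000, 3                       # sample count and surrogate degree
q = rng.standard_normal(Nsmp)            # prior parameter samples q(xi_i)
y = q**2 + 0.5 * q                       # synthetic forward model y(xi_i)
z = y + 0.1 * rng.standard_normal(Nsmp)  # measurement forecast z = y + eps

Phi = np.vander(z, p + 1, increasing=True).T  # Phi[alpha, i] = z_i^alpha
w = np.full(Nsmp, 1.0 / Nsmp)            # equal Monte Carlo weights
A = (Phi * w) @ Phi.T                    # A = Phi diag(w) Phi^T
b = (Phi * w) @ q                        # right-hand side
phi = np.linalg.solve(A, b)              # coefficients phi_beta

phitilde = lambda t: np.polyval(phi[::-1], t)
y_hat = 1.2                              # actual measurement (made up)
q_a = q + (phitilde(y_hat) - phitilde(z))  # assimilated samples, eq. (2)
```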
  • 22. 4* Explanation of the "Bayesian update surrogate" from E. Zander
Let the stochastic model of the measurement be given by
$y = M(q) + \varepsilon$, with $\varepsilon$ the measurement noise. (3)
The best estimator $\tilde{\varphi}$ for $q$ given $z$ is
$\tilde{\varphi} = \operatorname{argmin}_\varphi E[\| q(\cdot) - \varphi(z(\cdot)) \|_2^2]$. (4)
The best estimate (or predictor) of $q$ given the measurement model is
$q_M(\xi) = \tilde{\varphi}(z(\xi))$. (5)
The remainder, i.e. the difference between $q$ and $q_M$, is
$q_M^\perp(\xi) = q(\xi) - q_M(\xi)$, (6)
which, due to the minimisation property of the MMSE estimator, is orthogonal to $q_M(\xi)$, i.e. $\operatorname{cov}(q_M^\perp, q_M) = 0$.
  • 23. In other words,
$q(\xi) = q_M(\xi) + q_M^\perp(\xi)$ (7)
yields an orthogonal decomposition of $q$. Given the actual measurement $\hat{y}$, the prediction is $\hat{q} = \tilde{\varphi}(\hat{y})$. The part $q_M$ of $q$ can be "collapsed" to $\hat{q}$. The updated stochastic model $q'$ is thus given by
$q'(\xi) = \hat{q} + q_M^\perp(\xi)$, (8)
$q'(\xi) = q(\xi) + (\tilde{\varphi}(\hat{y}) - \tilde{\varphi}(z(\xi)))$. (9)
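The orthogonality $\operatorname{cov}(q_M^\perp, q_M) = 0$ is easy to verify numerically; a tiny synthetic check with the simplest (linear) MMSE estimator $\varphi(z) = a z + c$ (all data made up):

```python
import numpy as np

# q = q_M + q_M_perp: for the linear MMSE estimator the remainder is
# uncorrelated with the predictor, cov(q_M_perp, q_M) = 0.
rng = np.random.default_rng(4)
n = 100000
q = rng.standard_normal(n)
z = q + 0.3 * rng.standard_normal(n)    # noisy measurement forecast

a = np.cov(q, z)[0, 1] / np.var(z, ddof=1)  # linear MMSE gain
c = q.mean() - a * z.mean()
q_M = a * z + c                          # predictor phi(z)
q_perp = q - q_M                         # remainder q_M_perp

assert abs(np.cov(q_perp, q_M)[0, 1]) < 1e-8
```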
  • 24. 4* Example: 1D elliptic PDE with uncertain coefficients
$-\nabla \cdot (\kappa(x, \xi) \nabla u(x, \xi)) = f(x, \xi)$, $x \in [0, 1]$, plus Dirichlet random b.c. $g(0, \xi)$ and $g(1, \xi)$.
3 measurements: $u(0.3) = 22$, s.d. 0.2; $u(0.5) = 28$, s.d. 0.3; $u(0.8) = 18$, s.d. 0.3.
$\kappa(x, \xi)$: N = 100 dofs, M = 5, 35 KLE terms, beta distribution for $\kappa$, Gaussian $\operatorname{cov}_\kappa$, cov. length 0.1, multivariate Hermite polynomials of order $p_\kappa = 2$;
RHS $f(x, \xi)$: $M_f = 5$, 40 KLE terms, beta distribution for $f$, exponential $\operatorname{cov}_f$, cov. length 0.03, multivariate Hermite polynomials of order $p_f = 2$;
b.c. $g(x, \xi)$: $M_g = 2$, 2 KLE terms, normal distribution for $g$, Gaussian $\operatorname{cov}_g$, cov. length 10, multivariate Hermite polynomials of order $p_g = 1$;
$p_\phi = 3$ and $p_u = 3$.
  • 25. 4* Example: updating of the solution u
Figure: Original and updated solutions, mean value plus/minus 1, 2, 3 standard deviations [graphics built with the stochastic Galerkin library sglib, written by E. Zander at TU Braunschweig].
  • 26. 4* Example: updating of the parameter
Figure: Original and updated parameter $\kappa$.
  • 27. 4* Part III. Tensor completion
Now we consider how to apply tensor completion techniques to the Bayesian update. In the Bayesian update surrogate, the assimilated PCE coefficients of the parameter $q_a$ satisfy: NEW gPCE coeffs = OLD gPCE coeffs + gPCE of update. ALL INGREDIENTS ARE TENSORS!
$q_a = q_f + \tilde{\varphi}(\hat{y}) - \tilde{\varphi}(z)$, (10)
$z(\xi) = y(\xi) + \varepsilon(\omega)$, $q_a \in \mathbb{R}^{N \times \#J_a}$, $N = 1 \ldots 10^7$, $\#J_a > 1000$, $\#J_f < \#J_a$.
  • 28. 4* Problem setup: Tensor completion
Problem of fitting a low-rank tensor $A \in \mathbb{R}^I$, $I := I_1 \times \cdots \times I_d$, $I_\mu = \{1, \ldots, n_\mu\}$, $\mu \in D := \{1, \ldots, d\}$, to given data points
$\{M_i \in \mathbb{R} \mid i \in P\}$, $P \subset I$, $\#P \geq \sum_{\mu=1}^{d} n_\mu$, (11)
by minimizing the distance between the given values $(M_i)_{i \in P}$ and the approximations $(A_i)_{i \in P}$:
$A = \operatorname{argmin}_{\tilde{A} \in T} \sum_{i \in P} (M_i - \tilde{A}_i)^2$. (12)
Remark: here we assume that the target tensor $M$ allows a low-rank approximation $\| M - \tilde{M} \| \leq \varepsilon$, $\varepsilon \geq 0$, where $\tilde{M}$ fulfills certain rank bounds and $T$ is the low-rank format under consideration.
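A toy ALS completion sketch for a small order-3 CP tensor illustrates minimizing (12) over a sample set $P$ (this is an illustration of the general idea, not the hierarchical/tensor-train algorithm of the cited work; all sizes, the rank and the sampling rate are invented):

```python
import numpy as np

# ALS completion: cyclically re-fit one CP factor by least squares,
# using only the observed entries P of the ground-truth tensor M.
rng = np.random.default_rng(5)
n, r = 8, 2
A0, B0, C0 = (rng.standard_normal((n, r)) for _ in range(3))
M = np.einsum('ik,jk,lk->ijl', A0, B0, C0)    # ground-truth rank-r tensor

mask = rng.random((n, n, n)) < 0.3            # observe ~30% of the entries
P = np.argwhere(mask)                         # multi-indices of the samples
vals = M[mask]

A, B, C = (rng.standard_normal((n, r)) for _ in range(3))
fit0 = np.linalg.norm(np.einsum('ik,jk,lk->ijl', A, B, C)[mask] - vals)

for sweep in range(50):                       # ALS sweeps over the modes
    for mode in range(3):
        F = [A, B, C]
        new = np.zeros((n, r))
        for i in range(n):
            sel = P[P[:, mode] == i]          # observed entries in slice i
            rows = [F[m][sel[:, m]] for m in range(3) if m != mode]
            G = rows[0] * rows[1]             # Khatri-Rao rows for slice i
            y = M[tuple(sel.T)]               # observed values in slice i
            new[i], *_ = np.linalg.lstsq(G, y, rcond=None)
        F[mode][:] = new                      # in-place update of A, B or C

approx = np.einsum('ik,jk,lk->ijl', A, B, C)
fit = np.linalg.norm(approx[mask] - vals)     # each ALS step cannot increase it
rel_err = np.linalg.norm(approx - M) / np.linalg.norm(M)
```

Each mode update solves the observed-entry least-squares problem exactly for that factor, so the fit on $P$ is monotonically non-increasing over the sweeps.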
  • 29. 4* Problem setup: Tensor completion
L. Grasedyck et al., 2016: hierarchical and tensor-train formats; W. Austin, T. Kolda, D. Kressner, M. Steinlechner et al.: CP format.
Goal: reconstruct the tensor from $O(\log N)$ samples.
Methods: 1. ALS, inspired by the LMaFit method for matrix completion, complexity $O(r^4 d \, \#P)$. 2. Alternating directions fitting (ADF), complexity $O(r^2 d \, \#P)$.
  • 30. 4* Numerical experiments for SPDEs: Tensor completion
[L. Grasedyck, M. Kluge, S. Kraemer, SIAM J. Sci. Comput., Vol. 37/5, 2016]
Applied the ALS and ADF methods to
$-\operatorname{div}(\kappa(x, \omega) \nabla u(x, \omega)) = 1$ in $D \times \Omega$, $u(x, \omega) = 0$ on $\partial D \times \Omega$, (13)
$D = [-1, 1]$. The goal is to determine $u(\omega) := \int_D u(x, \omega)\, dx$. FE with 50 dofs; KLE with $d$ terms, i.e. $d$ stochastically independent RVs. This yields the tensor $A_{i_1 \ldots i_d} := u(i_1, \ldots, i_d)$, $n = 100$, $d = 5$, slice density $C_{SD} = 6$. Software (MATLAB) is available.
  • 31. 4* Example: updating of the solution u
Figure: Original and updated solutions, mean value plus/minus 1, 2, 3 standard deviations; number of available measurements {0, 1, 2, 3, 5} [graphics built with the stochastic Galerkin library sglib, written by E. Zander at TU Braunschweig].
  • 32. 4* Conclusion
Introduced low-rank tensor methods to solve elliptic PDEs with uncertain coefficients.
Explained how to compute the maximum and the mean in a low-rank tensor format.
Derived the Bayesian update surrogate $\varphi$ (as a linear, quadratic, cubic, etc. approximation), i.e. computed the conditional expectation of $q$ given the measurement $y$.
Applied the tensor completion method to the sparse measurement tensor in the likelihood.