Sampling and Low-Rank Tensor Approximations
Hermann G. Matthies*, Alexander Litvinenko*, Tarek A. El-Moshely+
* TU Braunschweig, Brunswick, Germany
+ MIT, Cambridge, MA, USA
wire@tu-bs.de   http://www.wire.tu-bs.de
$Id: 12_Sydney-MCQMC.tex,v 1.3 2012/02/12 16:52:28 hgm Exp $
Overview
1. Functionals of SPDE solutions
2. Computing the simulation
3. Parametric problems
4. Tensor products and other factorisations
5. Functional approximation
6. Emulation approximation
7. Examples and conclusion
TU Braunschweig Institute of Scientific Computing
Problem statement
We want to compute

    J_k = E(Ψ_k(·, u_e(·))) = ∫_Ω Ψ_k(ω, u_e(ω)) P(dω),

where P is a probability measure on Ω, and u_e is the solution of a PDE depending on the parameter ω ∈ Ω:

    A[ω](u_e(ω)) = f(ω)   a.s. in ω ∈ Ω,

so that u_e(ω) is a U-valued random variable (RV).
To compute an approximation u_M(ω) to u_e(ω) via simulation is expensive, even for one value of ω, let alone for

    J_k ≈ Σ_{n=1}^N Ψ_k(ω_n, u_M(ω_n)) w_n.

Moreover, not all Ψ_k of interest are known from the outset.
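The quadrature formula above is just a weighted average over samples. A minimal Monte Carlo sketch in Python (the toy map `u_M` and functional `psi` are hypothetical stand-ins for an expensive PDE solve and a quantity of interest):

```python
import numpy as np

rng = np.random.default_rng(0)

def u_M(xi):
    # hypothetical stand-in for an expensive PDE solve at parameter xi
    return 1.0 / (1.0 + xi)

def psi(u):
    # quantity of interest Psi evaluated on the solution
    return u ** 2

N = 100_000
xi = rng.random(N)               # samples xi_n drawn from mu on [0, 1]
w = np.full(N, 1.0 / N)          # equal Monte Carlo weights w_n
J_hat = np.sum(w * psi(u_M(xi)))
# exact value of the toy integral: int_0^1 (1 + x)^(-2) dx = 1/2
```

With quasi-Monte Carlo points or Gaussian quadrature only the nodes ξ_n and weights w_n change; the weighted-sum structure stays the same.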
Example: stochastic diffusion
[Figure: geometry of the 2D aquifer model]
Simple stationary model of groundwater flow with stochastic data κ, f:

    −∇·(κ(x, ω) ∇u(x, ω)) = f(x, ω),   x ∈ D ⊂ R^d,   plus b.c.

The solution lies in the tensor space S ⊗ U =: W, e.g. W = L²(Ω, P) ⊗ H̊¹(D); after Galerkin discretisation with U_M = span{v_m}_{m=1}^M ⊂ U this leads to

    A[ω](u_M(ω)) = f(ω)   a.s. in ω ∈ Ω,

where u_M(ω) = Σ_{m=1}^M u_m(ω) v_m ∈ S ⊗ U_M.
Realisation of κ(x, ω)
[Figure: one realisation of the random field κ]
Solution example
[Figures: model geometry with boundary conditions (Dirichlet b.c., flow out, flow = 0, sources); a realization of κ; the corresponding realization of the solution; the mean of the solution; the variance of the solution; the exceedance probability Pr{u(x) > 8}]
Computing the simulation
To simulate u_M one needs samples of the random field (RF) κ, which depends on infinitely many random variables (RVs). This has to be reduced / transformed to a finite number s of RVs ξ = (ξ_1, …, ξ_s) via Ξ : Ω → [0, 1]^s, with μ = Ξ_*P the push-forward measure:

    J_k = ∫_Ω Ψ_k(ω, u_e(ω)) P(dω) ≈ ∫_{[0,1]^s} Ψ̂_k(ξ, u_M(ξ)) μ(dξ).

This is a product measure for independent RVs (ξ_1, …, ξ_s).

Approximate the expensive simulation u_M(ξ) by a cheaper emulation. Both tasks are related: viewing u_M : ξ ↦ u_M(ξ), or κ_1 : x ↦ κ(x, ·) (an RF indexed by x), or κ_2 : ω ↦ κ(·, ω) (a function-valued RV), all of these are maps from a set of parameters into a vector space.
Parametric problems and RKHS
For each p in a parameter set P, let r(p) be an 'object' in a Hilbert space V (for simplicity). With r : P → V, denote U := span r(P) = span im r; then to each function r : P → U corresponds a linear map R : U → R̂:

    R : U ∋ v ↦ ⟨r(·)|v⟩_V ∈ R̂ := im R ⊂ R^P

(sometimes called a weak distribution). By construction R is injective. Use this to make R̂ a pre-Hilbert space:

    ∀ φ, ψ ∈ R̂ :  ⟨φ|ψ⟩_R := ⟨R⁻¹φ | R⁻¹ψ⟩_U.

Then R⁻¹ is unitary on the completion 𝓡 of R̂, which is a RKHS — a reproducing kernel Hilbert space with kernel ρ(p₁, p₂) = ⟨r(p₁)|r(p₂)⟩_U. Functions in 𝓡 are in one-to-one correspondence with elements of U.
‘Covariance’
If Q ⊂ R^P is a Hilbert space with inner product ⟨·|·⟩_Q, e.g. Q = L²(P, ν), define in U a positive self-adjoint map — the covariance C = R*R:

    ⟨Cu|v⟩_U = ⟨Ru|Rv⟩_Q  ⇒  C has spectrum σ(C) ⊆ R₊,

with spectral projectors E_λ:  C = ∫₀^∞ λ dE_λ.

Similarly, define Ĉ : Q → Q by Ĉ = RR*, i.e. for φ, ψ ∈ Q

    ⟨Ĉφ|ψ⟩_Q = ⟨R*φ|R*ψ⟩_U  ⇒  Ĉ has the same spectrum as C: σ(Ĉ) = σ(C),

with unitarily equivalent projectors Ê_λ = W E_λ W*:  Ĉ = ∫₀^∞ λ dÊ_λ.

The spectrum and projectors (σ(C), E_λ) are the essence of r(p). Specifically, for φ, ψ ∈ L²(P, ν) we have

    ⟨R*φ|R*ψ⟩_U = ∫_{P×P} φ(p₁) ρ(p₁, p₂) ψ(p₂) ν(dp₁) ν(dp₂).
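In the discrete case (finitely many parameter points and a finite-dimensional U) the statement σ(Ĉ) = σ(C) can be checked directly. A sketch with a random snapshot matrix as a stand-in for r:

```python
import numpy as np

rng = np.random.default_rng(1)
M, n = 6, 4                          # dim U = M, n parameter points
r = rng.standard_normal((M, n))      # columns are r(p_1), ..., r(p_n) in U

# with (Rv)_i = <r(p_i)|v>_U, the map R is r^T, so:
rho = r.T @ r    # kernel matrix rho(p_i, p_j) = <r(p_i)|r(p_j)>_U, i.e. C-hat = R R*
C   = r @ r.T    # covariance C = R* R on U, rank <= n

lam_C    = np.sort(np.linalg.eigvalsh(C))[-n:]   # the n possibly nonzero eigenvalues
lam_Chat = np.sort(np.linalg.eigvalsh(rho))
same_spectrum = np.allclose(lam_C, lam_Chat)
```

The remaining M − n eigenvalues of C vanish, reflecting that rank C = rank R ≤ n.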
‘Covariance’ operator and SVD
Spectral decomposition with projectors E_λ:

    Cv = ∫₀^∞ λ dE_λ v = Σ_{λ_j ∈ σ_p(C)} λ_j ⟨e_j|v⟩_U e_j + ∫_{R₊\σ_p(C)} λ dE_λ v.

C is unitarily equivalent to a multiplication operator M_k with non-negative k:

    C = U* M_k U = (U* M_k^{1/2})(M_k^{1/2} U),   with M_k^{1/2} = M_{√k}.

This connects to the singular value decomposition (SVD) of R = V M_k^{1/2} U, with a (partial) isometry V. Often C has a pure point spectrum (e.g. C compact) ⇒ the last integral vanishes. In general — to show tensors — one has to invoke generalised eigenvectors and Gelfand triplets (rigged Hilbert spaces) for the continuous spectrum.
SVD, Karhunen–Loève expansion, and tensors
For the sake of simplicity assume σ(C) = σ_p(C):

    C = Σ_j λ_j ⟨e_j|·⟩_U e_j = Σ_j λ_j e_j ⊗ e_j,

    (Rv)(p) = ⟨r(p)|v⟩_U = Σ_j √λ_j ⟨e_j|v⟩_U s_j(p)

with s_j := R e_j / √λ_j, so that

    R = Σ_j √λ_j (s_j ⊗ e_j),   R* = Σ_j √λ_j (e_j ⊗ s_j),

    r(p) = Σ_j √λ_j s_j(p) e_j,   r ∈ S ⊗ U.

This is the singular value decomposition, a.k.a. the Karhunen–Loève expansion: a sum of rank-1 operators / tensors. In general

    C = ∫_{R₊} λ ⟨e_λ|·⟩ e_λ (dλ)

with generalised eigenvectors e_λ.
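Numerically, the Karhunen–Loève expansion of a finite snapshot matrix is exactly its SVD. A small sketch (random data as a stand-in for r):

```python
import numpy as np

rng = np.random.default_rng(2)
M, n = 8, 5
R_mat = rng.standard_normal((M, n))     # column j is the snapshot r(p_j)

W, sig, Vt = np.linalg.svd(R_mat, full_matrices=False)
# KL / SVD: r(p_j) = sum_k sqrt(lambda_k) * s_k(p_j) * e_k,
# with e_k = W[:, k], s_k = Vt[k, :], and sqrt(lambda_k) = sig[k]
recon = sum(sig[k] * np.outer(W[:, k], Vt[k, :]) for k in range(len(sig)))
rank1_sum_ok = np.allclose(recon, R_mat)    # sum of rank-1 tensors reproduces r

lam = sig ** 2                              # eigenvalues of C = R_mat @ R_mat.T
lam_C = np.sort(np.linalg.eigvalsh(R_mat @ R_mat.T))[-n:]
eig_match = np.allclose(np.sort(lam), lam_C)
```

Truncating the sum after the largest singular values gives the optimal low-rank approximation (Eckart–Young).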
Examples and interpretations
• If V is a space of centred random variables (RVs) and r is a random field or stochastic process indexed by P, then Ĉ, represented by the kernel ρ(p₁, p₂), is the covariance function.
• If in this case P = R^d and moreover ρ(p₁, p₂) = c(p₁ − p₂) (stationary process / homogeneous field), then the diagonalisation U is effected by the Fourier transform, and the point spectrum is typically empty.
• If ν is a probability measure (ν(P) = 1) and r is a V-valued RV, then C is the covariance operator.
• If P = {1, 2, …, n} and R̂ ⊂ R^n, then ρ is the Gram matrix of the vectors r₁, …, r_n. If n < dim V, the map R can be seen as a model-reduction projector.
Factorisations / re-parametrisations
R* serves as a representation for the Karhunen–Loève expansion. This is a factorisation of C. Some other possible ones:

    C = R*R = (V M_k^{1/2})(V M_k^{1/2})* = C^{1/2} C^{1/2} = B*B,

where C = B*B is an arbitrary factorisation. Each factorisation leads to a representation — all unitarily equivalent. (When C is a matrix, a favourite is Cholesky: C = LL*.)

Assume C = B*B with B : U → H, corresponding to r ∈ U ⊗ H. Select an orthonormal basis {e_k} in H and the unitary Q : ℓ₂ ∋ a = (a₁, a₂, …) ↦ Σ_k a_k e_k ∈ H. Approximation is possible by the injection P_s* : R^s → ℓ₂.

Let r̃(a) := B*Qa =: R̃*a (linear in a), i.e. R̃* : ℓ₂ → U. Then

    R̃*R̃ = (B*Q)(Q*B) = B*B = C.
Representations
Several representations of the 'object' r(p) ∈ U in a simpler space:
• the RKHS;
• the Karhunen–Loève expansion, based on the spectral decomposition of C;
• the multiplicative spectral decomposition, as V M_k^{1/2} maps into U;
• arbitrary factorisations C = B*B;
• analogously, considering Ĉ instead of C — if Q = L²(P, ν) this leads to integral transforms, the kernel decompositions.

These can all be used for model reduction by choosing a smaller subspace. Applied to the RF κ(x, ω), and hence to u_M(ω), this yields u_M(ξ); the same reduction can then be applied again to u_M(ξ).
Functional approximation
Emulation — replace the expensive simulation u_M(ξ) by an inexpensive approximation / emulation u_E(ξ) ≈ u_M(ξ) (alias response surfaces, proxy / surrogate models, etc.).

Choose a subspace S_B ⊂ S with basis {X_β}_{β=1}^B and make the ansatz u_m(ξ) ≈ Σ_β u_m^β X_β(ξ) for each m, giving

    u_E(ξ) = Σ_{m,β} u_m^β X_β(ξ) v_m = Σ_{m,β} u_m^β X_β(ξ) ⊗ v_m.

Set U = (u_m^β) — an (M × B) matrix of coefficients. By sampling, we generate the matrix / tensor

    U = [u_M(ξ₁), …, u_M(ξ_N)] = (u_m(ξ_n)) — (M × N).
Tensor product structure
The story does not end here, as one may choose S = ⊗_k S_k, approximated by S_B = ⊗_{k=1}^K S_{B_k} with S_{B_k} ⊂ S_k. The solution is then represented as a tensor of grade K + 1 in

    W_{B,N} = ⊗_{k=1}^K S_{B_k} ⊗ U_N.

For higher-grade tensor product structure, more reduction is possible — but that is a story for another talk; here we stay with K = 1.

With orthonormal X_β one has

    u_m^β = ∫_{[0,1]^s} X_β(ξ) u_m(ξ) μ(dξ) ≈ Σ_{n=1}^N w_n X_β(ξ_n) u_m(ξ_n).

Let W = diag(w_n) — (N × N) — and X = (X_β(ξ_n)) — (B × N); hence the coefficients are U = U(W X^T). For B = N this is just a basis change.
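The coefficient formula U = U(W Xᵀ) can be checked with Gauss–Legendre nodes and orthonormal Legendre polynomials — one concrete choice of basis {X_β} and weights, used here purely for illustration:

```python
import numpy as np
from numpy.polynomial.legendre import leggauss, Legendre

N = B = 6
xi, w = leggauss(N)                    # Gauss-Legendre nodes and weights on [-1, 1]
# orthonormal Legendre basis: X_beta(xi) = sqrt((2*beta + 1)/2) * P_beta(xi)
X = np.array([np.sqrt((2 * b + 1) / 2) * Legendre.basis(b)(xi) for b in range(B)])
W = np.diag(w)

orthonormal = np.allclose(X @ W @ X.T, np.eye(B))   # discrete orthonormality

M = 3
U_samp = np.vstack([np.sin(xi), np.cos(xi), xi ** 2])   # samples u_M(xi_n), (M x N)
U_coef = U_samp @ (W @ X.T)                             # coefficients, (M x B)

# for B = N this is a basis change: evaluating the emulator at the nodes
# reproduces the samples exactly
basis_change = np.allclose(U_coef @ X, U_samp)
```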
Low-rank approximation
Focus on the array of numbers U := [u_m(ξ_n)], viewed as a matrix / tensor:

    U = Σ_{n=1}^N Σ_{m=1}^M U_{m,n} e_M^m ⊗ e_N^n,   with unit vectors e_N^n ∈ R^N, e_M^m ∈ R^M.

The sum has M·N terms — the number of entries in U. A rank-R representation is an approximation with R terms:

    U = Σ_{n=1}^N Σ_{m=1}^M U_{m,n} e_M^m (e_N^n)^T ≈ Σ_{ℓ=1}^R a_ℓ b_ℓ^T = A B^T,

with A = [a₁, …, a_R] — (M × R) — and B = [b₁, …, b_R] — (N × R). It contains only R·(M + N) ≪ M·N numbers.

We will use an updated, truncated SVD. For the coefficients this gives

    U = U(W X^T) ≈ A B^T (W X^T) = A (X W B)^T =: A B̃^T.
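A truncated SVD provides the rank-R factors A, B. The sketch below (with synthetic data whose singular values decay geometrically) also counts the storage saving R(M + N) vs. M·N:

```python
import numpy as np

rng = np.random.default_rng(4)
M, N, R = 200, 50, 5

# synthetic (M x N) sample matrix with geometrically decaying singular values
U_full = sum(0.5 ** k * np.outer(rng.standard_normal(M), rng.standard_normal(N))
             for k in range(20))

W, sig, Vt = np.linalg.svd(U_full, full_matrices=False)
A = W[:, :R] * sig[:R]               # (M x R)
B = Vt[:R, :].T                      # (N x R)

rel_err = np.linalg.norm(U_full - A @ B.T) / np.linalg.norm(U_full)
entries_full = M * N                 # 10000 numbers stored densely
entries_lowrank = R * (M + N)        # 1250 numbers in the rank-R factors
```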
Emulation instead of simulation
Let x(ξ) := [X₁(ξ), …, X_B(ξ)]^T. The emulator and the low-rank emulator are

    u_E(ξ) = U x(ξ)   and   u_L(ξ) := A B̃^T x(ξ).

Computing A, B: start with z samples U_{z1} = [u_M(ξ₁), …, u_M(ξ_z)] and compute a truncated, error-controlled SVD:

    U_{z1} ≈ W Σ V^T,   with W — (M × R), Σ — (R × R), V — (z × R);

then set A₁ = W Σ^{1/2} and B₁ = V Σ^{1/2} ⇒ B̃₁.

For each n = z + 1, …, 2z, emulate u_L(ξ_n) and evaluate the residuum

    r_n := r(ξ_n) := f(ξ_n) − A[ξ_n](u_L(ξ_n)).

If r_n is small, accept u_A^n = u_L(ξ_n); otherwise solve for u_M(ξ_n) and set u_A^n = u_M(ξ_n). Set U_{z2} = [u_A^{z+1}, …, u_A^{2z}], compute the updated SVD of [U_{z1}, U_{z2}] ⇒ A₂, B₂. Repeat for each batch of z samples.
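A toy version of this batch procedure can be sketched as follows. Several simplifications relative to the slides are made deliberately: the 'solver' is a cheap analytic function, the PDE residuum check is replaced by the projection error onto the current basis, and the updated SVD is simply recomputed from all accepted snapshots:

```python
import numpy as np

rng = np.random.default_rng(5)
M, z, tol = 100, 10, 1e-8
t = np.linspace(0.0, 1.0, M)

def u_M(xi):
    # stand-in for an expensive solve; its snapshots span a 3-dimensional space
    return np.sin(2 * np.pi * (t + xi)) + 0.1 * xi ** 2

def truncated_svd(U, rel_tol=1e-10):
    W, s, Vt = np.linalg.svd(U, full_matrices=False)
    R = max(1, int(np.sum(s > rel_tol * s[0])))
    return W[:, :R] * s[:R], Vt[:R, :].T      # A, B with U ≈ A B^T

# first batch: always simulate
snaps = np.column_stack([u_M(x) for x in rng.random(z)])
A, B = truncated_svd(snaps)
n_solves = z

# further batches: emulate, and only solve when the acceptance check fails
for xi_new in rng.random(3 * z):
    u_ref = u_M(xi_new)   # here the exact solve doubles as the acceptance check;
                          # in practice one evaluates the PDE residuum instead
    u_L = A @ np.linalg.lstsq(A, u_ref, rcond=None)[0]
    if np.linalg.norm(u_ref - u_L) > tol * np.linalg.norm(u_ref):
        n_solves += 1
        snaps = np.column_stack([snaps, u_ref])
        A, B = truncated_svd(snaps)           # recompute; an updated SVD in practice
```

Because the first batch already captures the 3-dimensional snapshot space, no further full solves are triggered in this toy.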
Emulator in integration
To evaluate

    J_k = ∫_Ω Ψ_k(ω, u_e(ω)) P(dω) ≈ ∫_{[0,1]^s} Ψ̂_k(ξ, u_M(ξ)) μ(dξ),

we compute

    J_k ≈ Σ_{n=1}^N w_n Ψ̂_k(ξ_n, u_L(ξ_n)).

If we are lucky, far fewer than N full simulations are needed to find the low-rank representation A, B̃ for u_L. This is cheap to compute from samples and uses only little storage. In the integral the integrand is cheap to evaluate, and the low-rank representation can be re-used if a new (J_k, Ψ_k) has to be evaluated.
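Once the factors are available, any new Ψ_k is evaluated on the emulated samples without further solves. A sketch with arbitrary low-rank factors standing in for A and B̃:

```python
import numpy as np

rng = np.random.default_rng(6)
M, N, R = 50, 200, 4

A  = rng.standard_normal((M, R))     # spatial factors (M x R)
Bt = rng.standard_normal((R, N))     # stochastic factors evaluated at xi_1..xi_N
w  = np.full(N, 1.0 / N)             # quadrature / Monte Carlo weights

U_L = A @ Bt                         # all N emulated samples at once, (M x N)

# several functionals evaluated on the same low-rank representation:
J_mean = U_L @ w                     # Psi = identity (componentwise mean)
J_sq   = (U_L ** 2) @ w              # Psi = u^2 (second moment)
J_max  = float(w @ U_L.max(axis=0))  # Psi = max over components
```

Adding another Ψ_k costs one pass over U_L (or over the factors), not new PDE solves.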
Use in MC sampling — sample solution
Example: compressible RANS flow around an RAE air-foil.
[Figures: a sample solution — turbulent kinetic energy and pressure]
Use in MC sampling — storage
Inflow and air-foil shape are uncertain. Data compression is achieved by the updated SVD, built from 600 MC simulations with the SVD updated every 10 samples; M = 260,000, N = 600.

Updated SVD — relative errors and memory requirements:

    rank R | pressure | turb. kin. energy | memory [MB]
    -------|----------|-------------------|------------
        10 |  1.9e-2  |      4.0e-3       |     21
        20 |  1.4e-2  |      5.9e-3       |     42
        50 |  5.3e-3  |      1.5e-4       |    104

A dense matrix in R^{260000×600} costs 1250 MB of storage.
Use in QMC sampling — mean
Trans-sonic flow with shock, with N = 2600 samples.
[Figures: relative error of the density mean for rank R = 5, 10, 30, 50]
Use in QMC sampling — variance
Trans-sonic flow with shock, with N = 2600 samples.
[Figures: relative error of the density variance for rank R = 5, 10, 30, 50]
Conclusion
• Random field discretisation and sampling can be seen as a weak distribution with an associated covariance.
• Analysis of the associated linear map reveals the essential structure.
• Factorisations of the covariance lead to the SVD (Karhunen–Loève expansion) and to tensor products.
• Functional approximation is used to construct the emulator.
• The resulting emulation is sparse and inexpensive.
dusjagr & nano talk on open tools for agriculture research and learning
 
FSB Advising Checklist - Orientation 2024
FSB Advising Checklist - Orientation 2024FSB Advising Checklist - Orientation 2024
FSB Advising Checklist - Orientation 2024
 
Tatlong Kwento ni Lola basyang-1.pdf arts
Tatlong Kwento ni Lola basyang-1.pdf artsTatlong Kwento ni Lola basyang-1.pdf arts
Tatlong Kwento ni Lola basyang-1.pdf arts
 
Model Attribute _rec_name in the Odoo 17
Model Attribute _rec_name in the Odoo 17Model Attribute _rec_name in the Odoo 17
Model Attribute _rec_name in the Odoo 17
 
UGC NET Paper 1 Unit 7 DATA INTERPRETATION.pdf
UGC NET Paper 1 Unit 7 DATA INTERPRETATION.pdfUGC NET Paper 1 Unit 7 DATA INTERPRETATION.pdf
UGC NET Paper 1 Unit 7 DATA INTERPRETATION.pdf
 
PANDITA RAMABAI- Indian political thought GENDER.pptx
PANDITA RAMABAI- Indian political thought GENDER.pptxPANDITA RAMABAI- Indian political thought GENDER.pptx
PANDITA RAMABAI- Indian political thought GENDER.pptx
 
Play hard learn harder: The Serious Business of Play
Play hard learn harder:  The Serious Business of PlayPlay hard learn harder:  The Serious Business of Play
Play hard learn harder: The Serious Business of Play
 
On National Teacher Day, meet the 2024-25 Kenan Fellows
On National Teacher Day, meet the 2024-25 Kenan FellowsOn National Teacher Day, meet the 2024-25 Kenan Fellows
On National Teacher Day, meet the 2024-25 Kenan Fellows
 
Towards a code of practice for AI in AT.pptx
Towards a code of practice for AI in AT.pptxTowards a code of practice for AI in AT.pptx
Towards a code of practice for AI in AT.pptx
 
Details on CBSE Compartment Exam.pptx1111
Details on CBSE Compartment Exam.pptx1111Details on CBSE Compartment Exam.pptx1111
Details on CBSE Compartment Exam.pptx1111
 
FICTIONAL SALESMAN/SALESMAN SNSW 2024.pdf
FICTIONAL SALESMAN/SALESMAN SNSW 2024.pdfFICTIONAL SALESMAN/SALESMAN SNSW 2024.pdf
FICTIONAL SALESMAN/SALESMAN SNSW 2024.pdf
 
80 ĐỀ THI THỬ TUYỂN SINH TIẾNG ANH VÀO 10 SỞ GD – ĐT THÀNH PHỐ HỒ CHÍ MINH NĂ...
80 ĐỀ THI THỬ TUYỂN SINH TIẾNG ANH VÀO 10 SỞ GD – ĐT THÀNH PHỐ HỒ CHÍ MINH NĂ...80 ĐỀ THI THỬ TUYỂN SINH TIẾNG ANH VÀO 10 SỞ GD – ĐT THÀNH PHỐ HỒ CHÍ MINH NĂ...
80 ĐỀ THI THỬ TUYỂN SINH TIẾNG ANH VÀO 10 SỞ GD – ĐT THÀNH PHỐ HỒ CHÍ MINH NĂ...
 
Unit 3 Emotional Intelligence and Spiritual Intelligence.pdf
Unit 3 Emotional Intelligence and Spiritual Intelligence.pdfUnit 3 Emotional Intelligence and Spiritual Intelligence.pdf
Unit 3 Emotional Intelligence and Spiritual Intelligence.pdf
 

Sampling and low-rank tensor approximations

  • 1. Sampling and Low-Rank Tensor Approximations Hermann G. Matthies∗ Alexander Litvinenko∗ , Tarek A. El-Moshely+ ∗ TU Braunschweig, Brunswick, Germany + MIT, Cambridge, MA, USA wire@tu-bs.de http://www.wire.tu-bs.de $Id: 12_Sydney-MCQMC.tex,v 1.3 2012/02/12 16:52:28 hgm Exp $
  • 2. Overview
1. Functionals of SPDE solutions
2. Computing the simulation
3. Parametric problems
4. Tensor products and other factorisations
5. Functional approximation
6. Emulation approximation
7. Examples and conclusion
TU Braunschweig Institute of Scientific Computing
  • 3. Problem statement
We want to compute
$$J_k = \mathbb{E}\bigl(\Psi_k(\cdot, u_e(\cdot))\bigr) = \int_\Omega \Psi_k(\omega, u_e(\omega))\, \mathbb{P}(d\omega),$$
where $\mathbb{P}$ is a probability measure on $\Omega$, and $u_e$ is the solution of a PDE depending on the parameter $\omega \in \Omega$:
$$A[\omega](u_e(\omega)) = f(\omega) \quad \text{a.s. in } \omega \in \Omega,$$
so $u_e(\omega)$ is a $\mathcal{U}$-valued random variable (RV). Computing an approximation $u_M(\omega)$ to $u_e(\omega)$ via simulation is expensive, even for one value of $\omega$, let alone for
$$J_k \approx \sum_{n=1}^N \Psi_k(\omega_n, u_M(\omega_n))\, w_n.$$
Not all $\Psi_k$ of interest are known from the outset.
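The Monte Carlo structure of the last formula can be sketched as follows; the closed-form map `u_M` below is a hypothetical stand-in for the expensive PDE solve, and the quantity of interest `psi` is an illustrative choice, not from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

def u_M(omega):
    # hypothetical stand-in for the expensive PDE solve u_M(omega)
    return np.exp(-omega)

def psi(omega, u):
    # quantity of interest Psi_k(omega, u); here simply u^2
    return u ** 2

N = 200_000
omegas = rng.standard_normal(N)   # samples omega_n
w = np.full(N, 1.0 / N)           # equal Monte Carlo weights w_n

J = np.sum(w * psi(omegas, u_M(omegas)))

# For omega ~ N(0,1), E[exp(-2 omega)] = exp(2), so the estimate should be close:
assert abs(J - np.exp(2)) / np.exp(2) < 0.1
```

The same weighted-sum structure applies for quasi-Monte Carlo or quadrature rules; only the nodes and weights change.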
  • 4. Example: stochastic diffusion
[Figure: geometry of the 2-D aquifer model.] Simple stationary model of groundwater flow with stochastic data $\kappa, f$:
$$-\nabla \cdot \bigl(\kappa(x,\omega)\, \nabla u(x,\omega)\bigr) = f(x,\omega), \quad x \in \mathcal{D} \subset \mathbb{R}^d, \ \text{plus b.c.}$$
The solution lies in the tensor space $\mathcal{S} \otimes \mathcal{U} =: \mathcal{W}$, e.g. $\mathcal{W} = L_2(\Omega, \mathbb{P}) \otimes \mathring{H}^1(\mathcal{D})$. Galerkin discretisation with $\mathcal{U}_M = \operatorname{span}\{v_m\}_{m=1}^M \subset \mathcal{U}$ leads to
$$A[\omega](u_M(\omega)) = f(\omega) \quad \text{a.s. in } \omega \in \Omega,$$
where $u_M(\omega) = \sum_{m=1}^M u_m(\omega)\, v_m \in \mathcal{S} \otimes \mathcal{U}_M$.
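A minimal deterministic 1-D analogue of this diffusion equation, solved by finite differences for one fixed realisation of $\kappa$; the coefficient field, source, and boundary conditions below are illustrative, not the slide's aquifer model.

```python
import numpy as np

n = 200
h = 1.0 / n
x = np.linspace(0.0, 1.0, n + 1)
kappa = 1.0 + 0.5 * np.sin(2 * np.pi * x)   # one fixed coefficient realisation
f = np.ones(n - 1)                          # constant source at interior nodes

# kappa at cell midpoints -> standard 3-point flux stencil for -(kappa u')' = f
km = 0.5 * (kappa[:-1] + kappa[1:])
main = (km[:-1] + km[1:]) / h**2
A = (np.diag(main)
     - np.diag(km[1:-1] / h**2, 1)
     - np.diag(km[1:-1] / h**2, -1))

u = np.zeros(n + 1)                         # homogeneous Dirichlet b.c.
u[1:-1] = np.linalg.solve(A, f)

# discrete maximum principle: positive source gives a positive solution
assert u[1:-1].min() > 0.0
```

In the stochastic setting this solve would be repeated for every sampled $\omega$, which is exactly the cost the emulation of the later slides avoids.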
  • 5. Realisation of $\kappa(x, \omega)$ [figure]
  • 6. Solution example
[Figures: geometry with Dirichlet b.c., zero-flow boundaries and sources; a realisation of $\kappa$; the corresponding realisation of the solution; mean of the solution; variance of the solution; $\Pr\{u(x) > 8\}$.]
  • 7. Computing the simulation
To simulate $u_M$ one needs samples of the random field (RF) $\kappa$, which depends on infinitely many random variables (RVs). This has to be reduced / transformed, $\Xi : \Omega \to [0,1]^s$, to a finite number $s$ of RVs $\xi = (\xi_1, \dots, \xi_s)$, with $\mu = \Xi_* \mathbb{P}$ the push-forward measure:
$$J_k = \int_\Omega \Psi_k(\omega, u_e(\omega))\, \mathbb{P}(d\omega) \approx \int_{[0,1]^s} \hat\Psi_k(\xi, u_M(\xi))\, \mu(d\xi).$$
This is a product measure for independent RVs $(\xi_1, \dots, \xi_s)$. Approximate the expensive simulation $u_M(\xi)$ by a cheaper emulation. Both tasks are related by viewing $u_M : \xi \mapsto u_M(\xi)$, or $\kappa_1 : x \mapsto \kappa(x, \cdot)$ (RF indexed by $x$), or $\kappa_2 : \omega \mapsto \kappa(\cdot, \omega)$ (function-valued RV), as maps from a set of parameters into a vector space.
  • 8. Parametric problems and RKHS
For each $p$ in a parameter set $\mathcal{P}$, let $r(p)$ be an 'object' in a Hilbert space $\mathcal{V}$ (for simplicity). With $r : \mathcal{P} \to \mathcal{V}$, denote $\mathcal{U} = \overline{\operatorname{span}}\, r(\mathcal{P}) = \overline{\operatorname{span}}\, \operatorname{im} r$; then to each function $r : \mathcal{P} \to \mathcal{U}$ corresponds a linear map $R : \mathcal{U} \to \hat{\mathcal{R}}$:
$$R : \mathcal{U} \ni v \mapsto \langle r(\cdot) \mid v \rangle_{\mathcal{U}} \in \hat{\mathcal{R}} = \operatorname{im} R \subset \mathbb{R}^{\mathcal{P}}$$
(sometimes called a weak distribution). By construction $R$ is injective. Use this to make $\hat{\mathcal{R}}$ a pre-Hilbert space:
$$\forall \phi, \psi \in \hat{\mathcal{R}}: \quad \langle \phi \mid \psi \rangle_{\mathcal{R}} := \langle R^{-1}\phi \mid R^{-1}\psi \rangle_{\mathcal{U}}.$$
$R^{-1}$ is unitary on the completion $\mathcal{R}$, which is a RKHS (reproducing kernel Hilbert space) with kernel $\varrho(p_1, p_2) = \langle r(p_1) \mid r(p_2) \rangle_{\mathcal{U}}$. Functions in $\mathcal{R}$ are in one-to-one correspondence with elements of $\mathcal{U}$.
  • 9. 'Covariance'
If $\mathcal{Q} \subset \mathbb{R}^{\mathcal{P}}$ is Hilbert with inner product $\langle \cdot \mid \cdot \rangle_{\mathcal{Q}}$, e.g. $\mathcal{Q} = L_2(\mathcal{P}, \nu)$, define in $\mathcal{U}$ a positive self-adjoint map, the covariance $C = R^* R$:
$$\langle Cu \mid v \rangle_{\mathcal{U}} = \langle Ru \mid Rv \rangle_{\mathcal{Q}},$$
which has spectrum $\sigma(C) \subseteq \mathbb{R}_+$ with spectral projectors $E_\lambda$:
$$C = \int_0^\infty \lambda\, dE_\lambda.$$
Similarly, define $\hat{C} : \mathcal{Q} \to \mathcal{Q}$ with $\hat{C} = R R^*$ by
$$\langle \hat{C}\phi \mid \psi \rangle_{\mathcal{Q}} = \langle R^*\phi \mid R^*\psi \rangle_{\mathcal{U}} \quad \text{for } \phi, \psi \in \mathcal{Q}.$$
It has the same spectrum as $C$, $\sigma(\hat{C}) = \sigma(C)$, and unitarily equivalent projectors $\hat{E}_\lambda = W E_\lambda W^*$: $\hat{C} = \int_0^\infty \lambda\, d\hat{E}_\lambda$. The spectrum and projectors $(\sigma(C), E_\lambda)$ are the essence of $r(p)$. Specifically, for $\phi, \psi \in L_2(\mathcal{P}, \nu)$ we have
$$\langle R^*\phi \mid R^*\psi \rangle_{\mathcal{U}} = \int_{\mathcal{P} \times \mathcal{P}} \phi(p_1)\, \varrho(p_1, p_2)\, \psi(p_2)\, \nu(dp_1)\, \nu(dp_2).$$
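In finite dimensions the statement that $C = R^*R$ and $\hat{C} = RR^*$ share the same nonzero spectrum is easy to check numerically; the small random matrix below merely stands in for the map $R$.

```python
import numpy as np

rng = np.random.default_rng(4)

Rm = rng.standard_normal((7, 5))   # "R : U -> Q" as a small matrix
C = Rm.T @ Rm                      # covariance C = R* R on U
Chat = Rm @ Rm.T                   # covariance C^ = R R* on Q

eC = np.sort(np.linalg.eigvalsh(C))[::-1]
eChat = np.sort(np.linalg.eigvalsh(Chat))[::-1]

# nonzero eigenvalues agree; Chat has two extra (numerically) zero ones
assert np.allclose(eC, eChat[:5])
assert np.all(np.abs(eChat[5:]) < 1e-10)
```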
  • 10. 'Covariance' operator and SVD
Spectral decomposition with projectors $E_\lambda$:
$$Cv = \int_0^\infty \lambda\, dE_\lambda v = \sum_{\lambda_j \in \sigma_p(C)} \lambda_j\, \langle e_j \mid v \rangle_{\mathcal{U}}\, e_j + \int_{\mathbb{R}_+ \setminus \sigma_p(C)} \lambda\, dE_\lambda v.$$
$C$ is unitarily equivalent to a multiplication operator $M_k$ with non-negative $k$:
$$C = U^* M_k U = (U^* M_k^{1/2})(M_k^{1/2} U), \quad \text{with } M_k^{1/2} = M_{\sqrt{k}}.$$
This connects to the singular value decomposition (SVD) of $R = V M_k^{1/2} U$, with a (partial) isometry $V$. Often $C$ has a pure point spectrum (e.g. $C$ compact), in which case the last integral vanishes. In general, to show tensors, one has to invoke generalised eigenvectors and Gelfand triplets (rigged Hilbert spaces) for the continuous spectrum.
  • 11. SVD, Karhunen-Loève expansion, and tensors
For the sake of simplicity assume $\sigma(C) = \sigma_p(C)$:
$$C = \sum_j \lambda_j\, \langle e_j \mid \cdot \rangle_{\mathcal{U}}\, e_j = \sum_j \lambda_j\, e_j \otimes e_j.$$
$$(Rv)(p) = \langle r(p) \mid v \rangle_{\mathcal{U}} = \sum_j \sqrt{\lambda_j}\, \langle e_j \mid v \rangle_{\mathcal{U}}\, s_j(p), \quad \text{with } \sqrt{\lambda_j}\, s_j := R e_j,$$
so that $R = \sum_j \sqrt{\lambda_j}\, (s_j \otimes e_j)$, or $R^* = \sum_j \sqrt{\lambda_j}\, (e_j \otimes s_j)$, and
$$r(p) = \sum_j \sqrt{\lambda_j}\, s_j(p)\, e_j, \quad r \in \mathcal{S} \otimes \mathcal{U}.$$
This is the singular value decomposition, a.k.a. Karhunen-Loève expansion: a sum of rank-1 operators / tensors. In general $C = \int_{\mathbb{R}_+} \lambda\, \langle e_\lambda \mid \cdot \rangle\, e_\lambda\, (d\lambda)$ with generalised eigenvectors $e_\lambda$.
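A small numerical illustration of this truncated sum of rank-1 terms: discretise an exponential kernel (an assumed example, not from the talk), take the leading eigenpairs, and check that a few terms already reproduce $C$ well.

```python
import numpy as np

# Discretise rho(p1, p2) = exp(-|p1 - p2|) on a grid; the matrix plays the
# role of C, and its eigenpairs give the Karhunen-Loeve terms lambda_j e_j (x) e_j.
p = np.linspace(0.0, 1.0, 100)
C = np.exp(-np.abs(p[:, None] - p[None, :]))

lam, E = np.linalg.eigh(C)        # ascending eigenvalues
lam, E = lam[::-1], E[:, ::-1]    # sort descending

# Rank-R truncation: sum of R rank-1 terms lambda_j * e_j e_j^T
R = 10
C_R = (E[:, :R] * lam[:R]) @ E[:, :R].T

rel_err = np.linalg.norm(C - C_R) / np.linalg.norm(C)
assert rel_err < 0.05             # fast eigenvalue decay: few terms suffice
```

The rapid eigenvalue decay of smooth kernels is what makes the low-rank approximations of the later slides effective.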
  • 12. Examples and interpretations
- If $\mathcal{V}$ is a space of centred random variables (RVs) and $r$ is a random field or stochastic process indexed by $\mathcal{P}$, then $\hat{C}$, represented by the kernel $\varrho(p_1, p_2)$, is the covariance function.
- If in this case $\mathcal{P} = \mathbb{R}^d$ and moreover $\varrho(p_1, p_2) = c(p_1 - p_2)$ (stationary process / homogeneous field), then the diagonalisation $U$ is effected by the Fourier transform, and the point spectrum is typically empty.
- If $\nu$ is a probability measure ($\nu(\mathcal{P}) = 1$) and $r$ is a $\mathcal{V}$-valued RV, then $C$ is the covariance operator.
- If $\mathcal{P} = \{1, 2, \dots, n\}$, so that $\mathbb{R}^{\mathcal{P}} = \mathbb{R}^n$, then $\varrho$ is the Gram matrix of the vectors $r_1, \dots, r_n$. If $n < \dim \mathcal{V}$, the map $R$ can be seen as a model-reduction projector.
  • 13. Factorisations / re-parametrisations
$R^*$ serves as the representation for the Karhunen-Loève expansion. This is a factorisation of $C$. Some other possible ones:
$$C = R^* R = (V M_k^{1/2})(V M_k^{1/2})^* = C^{1/2} C^{1/2} = B^* B,$$
where $C = B^* B$ is an arbitrary factorisation. Each factorisation leads to a representation; all are unitarily equivalent. (When $C$ is a matrix, a favourite is Cholesky: $C = L L^*$.) Assume that $C = B^* B$ with $B : \mathcal{U} \to \mathcal{H}$, corresponding to $r \in \mathcal{U} \otimes \mathcal{H}$. Select an orthonormal basis $\{e_k\}$ in $\mathcal{H}$ and the unitary $Q : \ell_2 \ni a = (a_1, a_2, \dots) \mapsto \sum_k a_k e_k \in \mathcal{H}$. Approximation is possible by the injection $P_s^* : \mathbb{R}^s \to \ell_2$. Let $\tilde{r}(a) := B^* Q a =: \tilde{R}^* a$ (linear in $a$), i.e. $\tilde{R}^* : \ell_2 \to \mathcal{U}$. Then
$$\tilde{R}^* \tilde{R} = (B^* Q)(Q^* B) = B^* B = C.$$
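For matrices, two such factorisations $C = B^*B$ can be computed directly, e.g. the Cholesky factor and the spectral square root; a sketch with an assumed exponential kernel matrix.

```python
import numpy as np

# An SPD kernel matrix standing in for the covariance C (illustrative choice)
p = np.linspace(0.0, 1.0, 50)
C = np.exp(-np.abs(p[:, None] - p[None, :]))

# Factorisation 1: Cholesky, C = L L^T
L = np.linalg.cholesky(C)
assert np.allclose(L @ L.T, C)

# Factorisation 2: spectral square root, C = C^{1/2} C^{1/2}
lam, E = np.linalg.eigh(C)
Chalf = (E * np.sqrt(lam)) @ E.T
assert np.allclose(Chalf @ Chalf, C)
```

Different factors $B$, same covariance: the representations they induce are unitarily equivalent, exactly as the slide states.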
  • 14. Representations
Several representations for the 'object' $r(p) \in \mathcal{U}$ in a simpler space:
- The RKHS.
- The Karhunen-Loève expansion, based on the spectral decomposition of $C$.
- The multiplicative spectral decomposition, as $V M_k^{1/2}$ maps into $\mathcal{U}$.
- Arbitrary factorisations $C = B^* B$.
- Analogously, consider $\hat{C}$ instead of $C$; if $\mathcal{Q} = L_2(\mathcal{P}, \nu)$ this leads to integral transforms, the kernel decompositions.
These can all be used for model reduction by choosing a smaller subspace. Applied to the RF $\kappa(x, \omega)$, and hence to $u_M(\omega)$, this yields $u_M(\xi)$; it can again be applied to $u_M(\xi)$.
  • 15. Functional approximation
Emulation: replace the expensive simulation $u_M(\xi)$ by an inexpensive approximation / emulation $u_E(\xi) \approx u_M(\xi)$ (alias response surfaces, proxy / surrogate models, etc.). Choose a subspace $\mathcal{S}_B \subset \mathcal{S}$ with basis $\{X_\beta\}_{\beta=1}^B$ and make the ansatz for each
$$u_m(\xi) \approx \sum_\beta u_m^\beta X_\beta(\xi), \quad \text{giving} \quad u_E(\xi) = \sum_{m,\beta} u_m^\beta X_\beta(\xi)\, v_m = \sum_{m,\beta} u_m^\beta X_\beta(\xi) \otimes v_m.$$
Set $\boldsymbol{U} = (u_m^\beta)$, of size $M \times B$. By sampling, we generate the matrix / tensor
$$U = [u_M(\xi_1), \dots, u_M(\xi_N)] = (u_m(\xi_n)), \quad \text{of size } M \times N.$$
  • 16. Tensor product structure
The story does not end here, as one may choose $\mathcal{S} = \bigotimes_k \mathcal{S}_k$, approximated by $\mathcal{S}_B = \bigotimes_{k=1}^K \mathcal{S}_{B_k}$ with $\mathcal{S}_{B_k} \subset \mathcal{S}_k$. The solution is then represented as a tensor of grade $K + 1$ in $\mathcal{W}_{B,N} = \bigotimes_{k=1}^K \mathcal{S}_{B_k} \otimes \mathcal{U}_N$. For a higher-grade tensor product structure more reduction is possible, but that is a story for another talk; here we stay with $K = 1$. With orthonormal $X_\beta$ one has
$$u_m^\beta = \int_{[0,1]^s} X_\beta(\xi)\, u_m(\xi)\, \mu(d\xi) \approx \sum_{n=1}^N w_n X_\beta(\xi_n)\, u_m(\xi_n).$$
Let $W = \operatorname{diag}(w_n)$ ($N \times N$) and $X = (X_\beta(\xi_n))$ ($B \times N$); hence $\boldsymbol{U} = U (W X^T)$. For $B = N$ this is just a change of basis.
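The quadrature projection $u_m^\beta \approx \sum_n w_n X_\beta(\xi_n) u_m(\xi_n)$, i.e. the product $U(WX^T)$, can be sketched with Gauss-Legendre nodes and normalised Legendre polynomials as an assumed orthonormal basis $X_\beta$ (an illustrative choice, not prescribed by the talk).

```python
import numpy as np
from numpy.polynomial.legendre import Legendre, leggauss

M, B, N = 3, 4, 16
xi, w = leggauss(N)          # quadrature nodes xi_n and weights w_n on [-1, 1]
w = w / 2.0                  # normalise so that mu is a probability measure

# Orthonormal (w.r.t. mu) Legendre basis X_beta(xi), beta = 0..B-1
X = np.stack([np.sqrt(2 * b + 1) * Legendre.basis(b)(xi) for b in range(B)])  # B x N

# "Samples" u_m(xi_n) of a solution that is exactly linear in xi
U_samples = np.vstack([np.ones_like(xi), xi, 2 * xi])    # M x N

W = np.diag(w)
U_coef = U_samples @ W @ X.T                             # M x B, i.e. U (W X^T)

# A linear-in-xi field has only degree-0 and degree-1 coefficients,
# and the degree-1 coefficient of u = xi is 1/sqrt(3).
assert np.allclose(U_coef[:, 2:], 0.0, atol=1e-12)
assert np.isclose(U_coef[1, 1], 1.0 / np.sqrt(3.0))
```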
  • 17. Low-rank approximation
Focus on the array of numbers $U := [u_m(\xi_n)]$, viewed as a matrix / tensor:
$$U = \sum_{n=1}^N \sum_{m=1}^M U_{m,n}\, e_M^m \otimes e_N^n,$$
with unit vectors $e_N^n \in \mathbb{R}^N$, $e_M^m \in \mathbb{R}^M$. The sum has $M \cdot N$ terms, the number of entries in $U$. A rank-$R$ representation is an approximation with $R$ terms:
$$U = \sum_{n=1}^N \sum_{m=1}^M U_{m,n}\, e_M^m (e_N^n)^T \approx \sum_{\ell=1}^R a_\ell b_\ell^T = A B^T,$$
with $A = [a_1, \dots, a_R]$ ($M \times R$) and $B = [b_1, \dots, b_R]$ ($N \times R$). It contains only $R(M + N) \ll M \cdot N$ numbers. We will use an updated, truncated SVD. For the coefficients this gives
$$\boldsymbol{U} = U (W X^T) \approx A B^T (W X^T) = A (X W B)^T =: A \tilde{B}^T.$$
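A sketch of the rank-$R$ representation via a truncated SVD on synthetic, assumed nearly-low-rank data, confirming the $R(M+N)$ versus $M \cdot N$ storage count.

```python
import numpy as np

rng = np.random.default_rng(1)

M, N, R = 400, 120, 10
# Synthetic data: exactly rank R plus tiny noise (illustrative, not from the talk)
U = rng.standard_normal((M, R)) @ rng.standard_normal((R, N)) \
    + 1e-8 * rng.standard_normal((M, N))

Wsvd, s, Vt = np.linalg.svd(U, full_matrices=False)
A = Wsvd[:, :R] * s[:R]      # M x R, singular values folded into A
Bmat = Vt[:R].T              # N x R

rel_err = np.linalg.norm(U - A @ Bmat.T) / np.linalg.norm(U)
assert rel_err < 1e-6                  # truncation captures the data
assert R * (M + N) < M * N             # 5200 stored numbers instead of 48000
```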
  • 18. Emulation instead of simulation
Let $x(\xi) := [X_1(\xi), \dots, X_B(\xi)]^T$. The emulator and the low-rank emulator are
$$u_E(\xi) = \boldsymbol{U}\, x(\xi), \quad \text{and} \quad u_L(\xi) := A \tilde{B}^T x(\xi).$$
Computing $A, \tilde{B}$: start with $z$ samples $U_{z_1} = [u_M(\xi_1), \dots, u_M(\xi_z)]$. Compute a truncated, error-controlled SVD,
$$U_{z_1} \approx W \Sigma V^T \quad (U_{z_1}: M \times z;\ W: M \times R;\ \Sigma: R \times R;\ V: z \times R);$$
then set $A_1 = W \Sigma^{1/2}$ and $B_1 = V \Sigma^{1/2}$, which yields $\tilde{B}_1$. For each $n = z+1, \dots, 2z$, emulate $u_L(\xi_n)$ and evaluate the residuum
$$r_n := r(\xi_n) := f(\xi_n) - A[\xi_n](u_L(\xi_n)).$$
If $r_n$ is small, accept $u_A^n = u_L(\xi_n)$; otherwise solve for $u_M(\xi_n)$ and set $u_A^n = u_M(\xi_n)$. Set $U_{z_2} = [u_A^{z+1}, \dots, u_A^{2z}]$ and compute the updated SVD of $[U_{z_1}, U_{z_2}]$, yielding $A_2, \tilde{B}_2$. Repeat for each batch of $z$ samples.
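The batch-wise update can be mimicked naively by re-truncating the SVD of [previous rank-$R$ reconstruction, new batch]; genuine updated-SVD algorithms avoid rebuilding this matrix, so the sketch below only mirrors the data flow of the slide (exactly low-rank synthetic "solutions", no residuum test).

```python
import numpy as np

rng = np.random.default_rng(2)

def truncated_svd(U, R):
    W, s, Vt = np.linalg.svd(U, full_matrices=False)
    return W[:, :R], s[:R], Vt[:R]

M, z, R = 200, 20, 5
# 100 synthetic rank-5 sample columns standing in for solves u_M(xi_n)
exact = rng.standard_normal((M, R)) @ rng.standard_normal((R, 100))

Wf = sf = Vtf = None
for b in range(5):                        # five batches of z samples each
    batch = exact[:, b * z:(b + 1) * z]
    if Wf is None:
        stacked = batch
    else:
        # stack previous rank-R reconstruction with the new batch, re-truncate
        stacked = np.hstack([(Wf * sf) @ Vtf, batch])
    Wf, sf, Vtf = truncated_svd(stacked, R)

approx = (Wf * sf) @ Vtf                  # reconstruction of all 100 columns
rel_err = np.linalg.norm(approx - exact) / np.linalg.norm(exact)
assert rel_err < 1e-8                     # exact here, since the data is rank 5
```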
  • 19. Emulator in integration
To evaluate
$$J_k = \int_\Omega \Psi_k(\omega, u_e(\omega))\, \mathbb{P}(d\omega) \approx \int_{[0,1]^s} \hat\Psi_k(\xi, u_M(\xi))\, \mu(d\xi),$$
we compute
$$J_k \approx \sum_{n=1}^N w_n\, \hat\Psi_k(\xi_n, u_L(\xi_n)).$$
If we are lucky, we need far fewer than $N$ samples to find the low-rank representation $A, \tilde{B}$ for $u_L$. This is cheap to compute from samples and uses only little storage. In the integral the integrand is cheap to evaluate, and the low-rank representation can be re-used if a new $(J_k, \Psi_k)$ has to be evaluated.
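Re-using the low-rank factors inside the quadrature sum can look as follows; `psi` is an arbitrary illustrative functional (the spatial mean), and the coefficient columns stand in for $\tilde{B}^T x(\xi_n)$, so all names are assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

M, R, N = 300, 8, 1000
A = rng.standard_normal((M, R))       # low-rank spatial factor
coeffs = rng.standard_normal((R, N))  # stands in for B~^T x(xi_n), n = 1..N
w = np.full(N, 1.0 / N)               # quadrature weights w_n

def psi(u_col):
    # illustrative quantity of interest: spatial mean of the solution vector
    return u_col.mean()

# Evaluate the integrand column by column: only R*(M + N) numbers are stored,
# and each u_L(xi_n) = A @ coeffs[:, n] costs O(M*R) to form.
J = sum(w[n] * psi(A @ coeffs[:, n]) for n in range(N))

# Same value obtained from the full sample matrix (never formed in practice)
J_full = np.dot(w, (A @ coeffs).mean(axis=0))
assert abs(J - J_full) < 1e-10
```

A new functional $\Psi_k$ only requires re-running this cheap loop; no further PDE solves are needed.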
  • 20. Use in MC sampling: solution sample
Example: compressible RANS flow around an RAE air-foil. [Figures: a sample solution, the turbulent kinetic energy, and the pressure.]
  • 21. Use in MC sampling: storage
Inflow and air-foil shape are uncertain. Data compression is achieved by the updated SVD, built from 600 MC simulations with the SVD updated every 10 samples ($M = 260{,}000$, $N = 600$). Relative errors and memory requirements:

rank R | pressure | turb. kin. energy | memory [MB]
10     | 1.9e-2   | 4.0e-3            | 21
20     | 1.4e-2   | 5.9e-3            | 42
50     | 5.3e-3   | 1.5e-4            | 104

A dense matrix in $\mathbb{R}^{260000 \times 600}$ costs 1250 MB of storage.
  • 22. Use in QMC sampling: mean
Transonic flow with a shock, with $N = 2600$ samples. [Figure: relative error of the density mean for ranks $R = 5, 10, 30, 50$.]
  • 23. Use in QMC sampling: variance
Transonic flow with a shock, with $N = 2600$ samples. [Figure: relative error of the density variance for ranks $R = 5, 10, 30, 50$.]
  • 24. Conclusion
- Random field discretisation and sampling can be seen as a weak distribution with an associated covariance.
- Analysis of the associated linear map reveals the essential structure.
- Factorisations of the covariance lead to the SVD (Karhunen-Loève expansion) and tensor products.
- Functional approximation is used to construct the emulator.
- The resulting emulation is sparse and inexpensive.