Low-rank tensor methods for analysis of high-dimensional data
Alexander Litvinenko and Mike Espig
Center for Uncertainty
Quantification
http://sri-uq.kaust.edu.sa/
Extreme Computing Research Center, KAUST
KAUST
I have gained rich collaboration experience as a co-organizer of:
3 UQ workshops,
2 Scalable Hierarchical Algorithms for eXtreme Computing
(SHAXC) workshops
1 HPC Conference (www.hpcsaudi.org, 2017)
My previous work
After applying the stochastic Galerkin method, obtain:
Ku = f, where all ingredients are represented in a tensor format
Compute max{u}, var(u), level sets of u, sign(u)
[1] Efficient Analysis of High Dimensional Data in Tensor Formats,
Espig, Hackbusch, A.L., Matthies and Zander, 2012.
Investigated which ingredients influence the tensor rank of K
[2] Efficient low-rank approximation of the stochastic Galerkin matrix in tensor formats,
Wähnert, Espig, Hackbusch, A.L., Matthies, 2013.
Approximate κ(x, ω), stochastic Galerkin operator K in Tensor
Train (TT) format, solve for u, postprocessing
[3] Polynomial Chaos Expansion of random coefficients and the solution of stochastic
partial differential equations in the Tensor Train format, Dolgov, Litvinenko, Khoromskij, Matthies, 2016.
Typical quantities of interest
Keeping all input and intermediate data in a tensor representation, one wants to perform the following tasks:
evaluation for specific parameters (ω1, . . . , ωM),
finding maxima and minima,
finding ‘level sets’ (needed for histogram and probability
density).
Example of a level set: all elements of a high-dimensional tensor that lie in the interval [0.7, 0.8].
Canonical and Tucker tensor formats
Definition and Examples of tensors
Canonical and Tucker tensor formats
[Pictures are taken from B. Khoromskij and A. Auer lecture course]
Storage: $O(n^d) \to O(dRn)$ (canonical) and $O(R^d + dRn)$ (Tucker).
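As a back-of-the-envelope illustration of this storage reduction (a minimal sketch; the sizes n, d, R below are arbitrary assumptions, not taken from the slides):

```python
# Compare the entry counts behind the O(n^d), O(dRn) and O(R^d + dRn) estimates.
n, d, R = 100, 10, 5          # hypothetical mode size, order, and rank

full_entries = n**d                  # full tensor
cp_entries = d * R * n               # canonical (CP) factor vectors
tucker_entries = R**d + d * R * n    # Tucker core plus factor matrices

print(f"full   : {full_entries:.3e}")    # 1.000e+20
print(f"CP     : {cp_entries:.3e}")      # 5.000e+03
print(f"Tucker : {tucker_entries:.3e}")  # ~9.771e+06
```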
Definition of tensor of order d
A tensor of order $d$ is a multidimensional array over a $d$-tuple index set $\mathcal{I} = I_1 \times \cdots \times I_d$,
$$\mathbf{A} = [a_{i_1 \ldots i_d} : i \in \mathcal{I}] \in \mathbb{R}^{\mathcal{I}}, \qquad I_\ell = \{1, \ldots, n_\ell\}, \quad \ell = 1, \ldots, d.$$
$\mathbf{A}$ is an element of the linear space
$$\mathcal{V}_n = \bigotimes_{\ell=1}^{d} V_\ell, \qquad V_\ell = \mathbb{R}^{I_\ell},$$
equipped with the Euclidean scalar product $\langle \cdot, \cdot \rangle : \mathcal{V}_n \times \mathcal{V}_n \to \mathbb{R}$, defined as
$$\langle \mathbf{A}, \mathbf{B} \rangle := \sum_{(i_1 \ldots i_d) \in \mathcal{I}} a_{i_1 \ldots i_d}\, b_{i_1 \ldots i_d} \qquad \text{for } \mathbf{A}, \mathbf{B} \in \mathcal{V}_n.$$
Let $\mathcal{T} := \bigotimes_{\mu=1}^{d} \mathbb{R}^{n_\mu}$. The set of tensors of canonical rank (at most) $R$ is
$$\mathcal{R}_R(\mathcal{T}) := \left\{ \sum_{i=1}^{R} \bigotimes_{\mu=1}^{d} v_{i\mu} \in \mathcal{T} : v_{i\mu} \in \mathbb{R}^{n_\mu} \right\}.$$
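For tensors given in the canonical format, this scalar product reduces to products of small vector inner products, $\langle \mathbf{A}, \mathbf{B} \rangle = \sum_{i}\sum_{j} \prod_{\mu} \langle a_{i\mu}, b_{j\mu} \rangle$. A minimal numpy sketch (the factor layout and the sizes are illustrative assumptions), with a dense sanity check:

```python
import numpy as np

def cp_inner(A_factors, B_factors):
    """<A, B> for CP tensors; X_factors[i][mu] is the mode-mu vector of the i-th rank-1 term."""
    return sum(np.prod([float(a @ b) for a, b in zip(term_a, term_b)])
               for term_a in A_factors for term_b in B_factors)

def cp_to_dense(factors):
    """Assemble the full array (only feasible for tiny examples)."""
    dense = 0.0
    for term in factors:
        t = term[0]
        for v in term[1:]:
            t = np.tensordot(t, v, axes=0)   # outer product builds one rank-1 term
        dense = dense + t
    return dense

rng = np.random.default_rng(0)
d, n, rA, rB = 3, 4, 2, 3
A = [[rng.standard_normal(n) for _ in range(d)] for _ in range(rA)]
B = [[rng.standard_normal(n) for _ in range(d)] for _ in range(rB)]
print(np.isclose(cp_inner(A, B), np.vdot(cp_to_dense(A), cp_to_dense(B))))   # True
```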
Examples of rank-1 and rank-2 tensors
Rank-1:
$$f(x_1, \ldots, x_d) = \exp(f_1(x_1) + \cdots + f_d(x_d)) = \prod_{j=1}^{d} \exp(f_j(x_j)).$$
Rank-2: $f(x_1, \ldots, x_d) = \sin\big(\sum_{j=1}^{d} x_j\big)$, since
$$2\mathrm{i} \cdot \sin\Big(\sum_{j=1}^{d} x_j\Big) = e^{\mathrm{i} \sum_{j=1}^{d} x_j} - e^{-\mathrm{i} \sum_{j=1}^{d} x_j}.$$
The rank-$d$ function $f(x_1, \ldots, x_d) = x_1 + x_2 + \cdots + x_d$ can be approximated by a rank-2 tensor with any prescribed accuracy:
$$f \approx \frac{\prod_{j=1}^{d} (1 + \varepsilon x_j)}{\varepsilon} - \frac{\prod_{j=1}^{d} 1}{\varepsilon} + O(\varepsilon) \quad \text{as } \varepsilon \to 0.$$
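A quick numerical check of this rank-2 approximation on a small grid (a hedged sketch; the grid, d and ε below are arbitrary choices):

```python
import numpy as np

d, n, eps = 4, 5, 1e-4
xs = [np.linspace(0.0, 1.0, n) for _ in range(d)]     # one coordinate grid per mode

# Exact tensor: f(i_1,...,i_d) = x_1[i_1] + ... + x_d[i_d], built by broadcasting.
exact = sum(np.ix_(*xs))

# Rank-2 approximation: ( prod_j (1 + eps*x_j) - prod_j 1 ) / eps
prod_term = np.ones([1] * d)
for mu in range(d):
    shape = [1] * d
    shape[mu] = n
    prod_term = prod_term * (1.0 + eps * xs[mu]).reshape(shape)
approx = (prod_term - 1.0) / eps

print(np.max(np.abs(exact - approx)))                 # O(eps); about 6e-4 for these values
```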
Tensors and Matrices
A rank-1 tensor:
$$\mathbf{A} = u_1 \otimes u_2 \otimes \cdots \otimes u_d =: \bigotimes_{\mu=1}^{d} u_\mu, \qquad A_{i_1, \ldots, i_d} = (u_1)_{i_1} \cdots (u_d)_{i_d}.$$
A rank-1 tensor $\mathbf{A} = u \otimes v$ corresponds to the rank-1 matrix $A = u v^T$ (and $\mathbf{A} = v \otimes u$ to $A = v u^T$), $u \in \mathbb{R}^n$, $v \in \mathbb{R}^m$.
A rank-$k$ tensor $\mathbf{A} = \sum_{i=1}^{k} u_i \otimes v_i$ corresponds to the rank-$k$ matrix $A = \sum_{i=1}^{k} u_i v_i^T$.
The Kronecker product of an $n \times n$ matrix $A$ and an $m \times m$ matrix $B$ is the block matrix $A \otimes B \in \mathbb{R}^{nm \times nm}$ whose $(i,j)$-th block is $A_{ij} B$.
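The block structure of the Kronecker product can be checked directly with numpy (sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((2, 2))   # n x n
B = rng.standard_normal((3, 3))   # m x m

K = np.kron(A, B)                 # (nm) x (nm); its (i, j)-th m x m block is A[i, j] * B
print(K.shape)                                  # (6, 6)
print(np.allclose(K[0:3, 3:6], A[0, 1] * B))    # True: block (i, j) = (0, 1)
```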
Computing QoI in low-rank tensor format
Now we consider how to find maxima in a high-dimensional tensor.
Maximum norm and corresponding index
Let $u = \sum_{j=1}^{r} \bigotimes_{\mu=1}^{d} u_{j\mu} \in \mathcal{R}_r$. We want to compute
$$\|u\|_\infty := \max_{i := (i_1, \ldots, i_d) \in \mathcal{I}} |u_i| = \max_{i := (i_1, \ldots, i_d) \in \mathcal{I}} \left| \sum_{j=1}^{r} \prod_{\mu=1}^{d} (u_{j\mu})_{i_\mu} \right|.$$
Computing $\|u\|_\infty$ is equivalent to the following eigenvalue problem. Let $i^* := (i_1^*, \ldots, i_d^*) \in \mathcal{I}$, $\#\mathcal{I} = \prod_{\mu=1}^{d} n_\mu$,
$$\|u\|_\infty = |u_{i^*}| = \left| \sum_{j=1}^{r} \prod_{\mu=1}^{d} (u_{j\mu})_{i_\mu^*} \right| \quad \text{and} \quad e^{(i^*)} := \bigotimes_{\mu=1}^{d} e_{i_\mu^*},$$
where $e_{i_\mu^*} \in \mathbb{R}^{n_\mu}$ is the $i_\mu^*$-th canonical unit vector in $\mathbb{R}^{n_\mu}$ ($\mu \in \mathbb{N}_{\le d}$).
Then (with $\odot$ denoting the entrywise Hadamard product)
$$u \odot e^{(i^*)} = \left( \sum_{j=1}^{r} \bigotimes_{\mu=1}^{d} u_{j\mu} \right) \odot \left( \bigotimes_{\mu=1}^{d} e_{i_\mu^*} \right) = \sum_{j=1}^{r} \bigotimes_{\mu=1}^{d} \big( u_{j\mu} \odot e_{i_\mu^*} \big) = \sum_{j=1}^{r} \bigotimes_{\mu=1}^{d} (u_{j\mu})_{i_\mu^*}\, e_{i_\mu^*} = \underbrace{\left( \sum_{j=1}^{r} \prod_{\mu=1}^{d} (u_{j\mu})_{i_\mu^*} \right)}_{=\, u_{i^*}} \bigotimes_{\mu=1}^{d} e_{i_\mu^*} = u_{i^*}\, e^{(i^*)}.$$
Thus, we obtained an "eigenvalue problem":
$$u \odot e^{(i^*)} = u_{i^*}\, e^{(i^*)}.$$
Computing $\|u\|_\infty$, $u \in \mathcal{R}_r$, by vector iteration
Define the diagonal matrix
$$D(u) := \sum_{j=1}^{r} \bigotimes_{\mu=1}^{d} \operatorname{diag}(u_{j\mu}) \qquad (1)$$
with representation rank $r$; then $D(u)\, v = u \odot v$.
Now apply the well-known vector iteration (power method, with rank truncation after each step) to
$$D(u)\, e^{(i^*)} = u_{i^*}\, e^{(i^*)}$$
to obtain $\|u\|_\infty$.
[Approximate iteration: Khoromskij, Hackbusch, Tyrtyshnikov 2005; Espig, Hackbusch 2010]
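A minimal sketch of the vector-iteration idea. In the actual method every object below is kept in the CP format and truncated after each step; here the tensors are stored densely only to keep the example short, and numpy's elementwise product stands in for the low-rank Hadamard product:

```python
import numpy as np

rng = np.random.default_rng(2)
u = rng.standard_normal((4, 5, 6))       # plays the role of the low-rank tensor u

v = np.ones_like(u)                       # starting vector
for _ in range(500):
    v = u * v                             # D(u) v = u ⊙ v (Hadamard product)
    v = v / np.linalg.norm(v)             # in low rank: normalize and truncate the rank

lam = np.sum(u * v * v)                   # Rayleigh quotient, converges to the entry u_{i*}
print(abs(lam), np.max(np.abs(u)))        # the two values agree up to iteration error
```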
How to compute the mean value in CP format
Let $u = \sum_{j=1}^{r} \bigotimes_{\mu=1}^{d} u_{j\mu} \in \mathcal{R}_r$. Then the mean value $\overline{u}$ can be computed as a scalar product:
$$\overline{u} = \left\langle \sum_{j=1}^{r} \bigotimes_{\mu=1}^{d} u_{j\mu},\ \bigotimes_{\mu=1}^{d} \frac{1}{n_\mu} \tilde{1}_\mu \right\rangle = \sum_{j=1}^{r} \prod_{\mu=1}^{d} \frac{\langle u_{j\mu}, \tilde{1}_\mu \rangle}{n_\mu} \qquad (2)$$
$$= \sum_{j=1}^{r} \prod_{\mu=1}^{d} \frac{1}{n_\mu} \sum_{k=1}^{n_\mu} (u_{j\mu})_k, \qquad (3)$$
where $\tilde{1}_\mu := (1, \ldots, 1)^T \in \mathbb{R}^{n_\mu}$.
The numerical cost is $O\big(r \cdot \sum_{\mu=1}^{d} n_\mu\big)$.
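Formulas (2)-(3) say that the mean is obtained by averaging each factor vector and multiplying the averages over the modes. A hedged numpy sketch (the factor layout is an assumption), with a dense sanity check:

```python
import numpy as np

def cp_mean(factors):
    """mean(u) = sum_j prod_mu mean(u_{j,mu}); factors[j][mu] is the mode-mu vector of term j."""
    return sum(np.prod([v.mean() for v in term]) for term in factors)

rng = np.random.default_rng(3)
factors = [[rng.standard_normal(4) for _ in range(3)] for _ in range(3)]   # rank 3, d = 3, n = 4
dense = sum(np.einsum('i,j,k->ijk', *term) for term in factors)
print(np.isclose(cp_mean(factors), dense.mean()))   # True
```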
How to compute the variance in CP format
Let $u \in \mathcal{R}_r$ and let
$$\tilde{u} := u - \overline{u} \bigotimes_{\mu=1}^{d} \tilde{1}_\mu = \sum_{j=1}^{r+1} \bigotimes_{\mu=1}^{d} \tilde{u}_{j\mu} \in \mathcal{R}_{r+1} \qquad (4)$$
be the centered tensor. Then the variance $\operatorname{var}(u)$ of $u$ can be computed as follows:
$$\operatorname{var}(u) = \frac{\langle \tilde{u}, \tilde{u} \rangle}{\prod_{\mu=1}^{d} n_\mu} = \frac{1}{\prod_{\mu=1}^{d} n_\mu} \left\langle \sum_{i=1}^{r+1} \bigotimes_{\mu=1}^{d} \tilde{u}_{i\mu},\ \sum_{j=1}^{r+1} \bigotimes_{\nu=1}^{d} \tilde{u}_{j\nu} \right\rangle = \sum_{i=1}^{r+1} \sum_{j=1}^{r+1} \prod_{\mu=1}^{d} \frac{1}{n_\mu} \langle \tilde{u}_{i\mu}, \tilde{u}_{j\mu} \rangle.$$
The numerical cost is $O\big((r+1)^2 \cdot \sum_{\mu=1}^{d} n_\mu\big)$.
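Correspondingly, the variance can be evaluated from the centered rank-(r+1) representation (4) using only inner products of factor vectors. A minimal sketch under the same layout assumption as above:

```python
import numpy as np

def cp_mean(factors):
    return sum(np.prod([v.mean() for v in term]) for term in factors)

def cp_inner(A_factors, B_factors):
    return sum(np.prod([float(a @ b) for a, b in zip(ta, tb)])
               for ta in A_factors for tb in B_factors)

def cp_var(factors):
    """var(u) = <u~, u~> / prod(n_mu), with u~ = u - mean(u)*1 represented with rank r+1."""
    dims = [len(v) for v in factors[0]]
    mean_term = [-cp_mean(factors) * np.ones(dims[0])] + [np.ones(n) for n in dims[1:]]
    centered = list(factors) + [mean_term]          # rank r+1 representation of u - mean(u)
    return cp_inner(centered, centered) / np.prod(dims)

rng = np.random.default_rng(4)
factors = [[rng.standard_normal(n) for n in (4, 5, 6)] for _ in range(3)]
dense = sum(np.einsum('i,j,k->ijk', *t) for t in factors)
print(np.isclose(cp_var(factors), dense.var()))     # True
```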
Computing QoI in low-rank tensor format
Now we consider how to find 'level sets', for instance all entries of the tensor u that lie in an interval [a, b].
Definitions of characteristic and sign functions
1. To compute level sets and frequencies we need the characteristic function.
2. To compute the characteristic function we need the sign function.
The characteristic $\chi_I(u) \in \mathcal{T}$ of $u \in \mathcal{T}$ in $I \subset \mathbb{R}$ is defined pointwise, for every multi-index $i \in \mathcal{I}$, as
$$(\chi_I(u))_i := \begin{cases} 1, & u_i \in I, \\ 0, & u_i \notin I. \end{cases}$$
Furthermore, $\operatorname{sign}(u) \in \mathcal{T}$ is defined pointwise, for all $i \in \mathcal{I}$, by
$$(\operatorname{sign}(u))_i := \begin{cases} 1, & u_i > 0, \\ -1, & u_i < 0, \\ 0, & u_i = 0. \end{cases}$$
sign(u) is needed for computing χI(u)
Lemma
Let $u \in \mathcal{T}$, $a, b \in \mathbb{R}$, and $\mathbf{1} = \bigotimes_{\mu=1}^{d} \tilde{1}_\mu$, where $\tilde{1}_\mu := (1, \ldots, 1)^T \in \mathbb{R}^{n_\mu}$.
(i) If $I = \mathbb{R}_{<b}$, then $\chi_I(u) = \frac{1}{2}\big(\mathbf{1} + \operatorname{sign}(b\mathbf{1} - u)\big)$.
(ii) If $I = \mathbb{R}_{>a}$, then $\chi_I(u) = \frac{1}{2}\big(\mathbf{1} - \operatorname{sign}(a\mathbf{1} - u)\big)$.
(iii) If $I = (a, b)$, then $\chi_I(u) = \frac{1}{2}\big(\operatorname{sign}(b\mathbf{1} - u) - \operatorname{sign}(a\mathbf{1} - u)\big)$.
We compute $\operatorname{sign}(u)$, $u \in \mathcal{R}_r$, via a hybrid Newton-Schulz iteration with rank truncation after each iteration.
Level Set, Frequency
Definition (Level Set, Frequency)
Let $I \subset \mathbb{R}$ and $u \in \mathcal{T}$. The level set $L_I(u) \in \mathcal{T}$ of $u$ with respect to $I$ is defined pointwise by
$$(L_I(u))_i := \begin{cases} u_i, & u_i \in I, \\ 0, & u_i \notin I, \end{cases} \qquad \text{for all } i \in \mathcal{I}.$$
The frequency $F_I(u) \in \mathbb{N}$ of $u$ with respect to $I$ is defined as
$$F_I(u) := \# \operatorname{supp} \chi_I(u).$$
Computation of level sets and frequency
Proposition
Let $I \subset \mathbb{R}$, $u \in \mathcal{T}$, and let $\chi_I(u)$ be its characteristic. We have
$$L_I(u) = \chi_I(u) \odot u \quad \text{and} \quad \operatorname{rank}(L_I(u)) \le \operatorname{rank}(\chi_I(u)) \cdot \operatorname{rank}(u).$$
The frequency $F_I(u) \in \mathbb{N}$ of $u$ with respect to $I$ is
$$F_I(u) = \langle \chi_I(u), \mathbf{1} \rangle,$$
where $\mathbf{1} = \bigotimes_{\mu=1}^{d} \tilde{1}_\mu$, $\tilde{1}_\mu := (1, \ldots, 1)^T \in \mathbb{R}^{n_\mu}$.
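Putting the Lemma and the Proposition together, here is a dense sanity-check sketch. In the low-rank setting sign(·) would be computed with the hybrid Newton-Schulz iteration and rank truncation; below numpy's elementwise sign simply stands in for it:

```python
import numpy as np

rng = np.random.default_rng(5)
u = rng.uniform(0.0, 1.0, size=(4, 5, 6))
a, b = 0.7, 0.8
one = np.ones_like(u)

# chi_(a,b)(u) = 1/2 * ( sign(b*1 - u) - sign(a*1 - u) )   (Lemma, case iii)
chi = 0.5 * (np.sign(b * one - u) - np.sign(a * one - u))

level_set = chi * u                   # L_I(u) = chi_I(u) ⊙ u
frequency = int(np.sum(chi * one))    # F_I(u) = <chi_I(u), 1>

print(frequency, int(np.sum((u > a) & (u < b))))   # both count the entries in (a, b)
```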
Numerical Experiments
2D L-shaped domain, $N = 557$ dofs.
The total stochastic dimension is $M_u = M_k + M_f = 20$, and there are $|\mathcal{J}| = 231$ PCE coefficients:
$$u = \sum_{j=1}^{231} u_{j,0} \otimes \bigotimes_{\mu=1}^{20} u_{j\mu} \in \mathbb{R}^{557} \otimes \bigotimes_{\mu=1}^{20} \mathbb{R}^{3}.$$
Level sets
Now we compute the level sets
$$\operatorname{sign}(b\, \|u\|_\infty \mathbf{1} - u) \quad \text{for } b \in \{0.2, 0.4, 0.6, 0.8\}.$$
The tensor $u$ has $3^{20} \cdot 557 \approx 2 \cdot 10^{12}$ entries, i.e. about 16 TB of memory.
The computing time for one level set was 10 minutes.
The intermediate ranks of $\operatorname{sign}(b\, \|u\|_\infty \mathbf{1} - u)$ and of the iterates $u_k$ were less than 24.
Example: canonical rank is d, whereas TT rank is 2
Consider the d-dimensional Laplacian over a uniform tensor grid. It is known to have the Kronecker rank-$d$ representation
$$\Delta_d = A \otimes I_N \otimes \cdots \otimes I_N + I_N \otimes A \otimes \cdots \otimes I_N + \cdots + I_N \otimes I_N \otimes \cdots \otimes A \in \mathbb{R}^{\mathcal{I}^{\otimes d} \otimes \mathcal{I}^{\otimes d}}, \qquad (5)$$
with $A = \Delta_1 = \operatorname{tridiag}\{-1, 2, -1\} \in \mathbb{R}^{N \times N}$ and $I_N$ the $N \times N$ identity. Notice that the canonical rank is $\operatorname{rank}_C(\Delta_d) = d$, while the TT rank of $\Delta_d$ equals 2 for any dimension, due to the explicit representation
$$\Delta_d = (\Delta_1 \ \ I) \times \begin{pmatrix} I & 0 \\ \Delta_1 & I \end{pmatrix} \times \cdots \times \begin{pmatrix} I & 0 \\ \Delta_1 & I \end{pmatrix} \times \begin{pmatrix} I \\ \Delta_1 \end{pmatrix}, \qquad (6)$$
where the rank product operation "$\times$" is defined as a regular matrix product of the two corresponding core matrices, their blocks being multiplied by means of the tensor product. A similar bound holds for the Tucker rank: $\operatorname{rank}_{\mathrm{Tuck}}(\Delta_d) = 2$.
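The Kronecker rank-d representation (5) can be assembled directly for small N and d (an illustrative sketch; the TT cores from (6) are not built here):

```python
import numpy as np

def laplace_1d(N):
    """1D stiffness matrix Delta_1 = tridiag{-1, 2, -1} of size N x N."""
    return 2.0 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)

def laplace_d(N, d):
    """Delta_d = sum_mu I x ... x Delta_1 x ... x I (Kronecker rank d), assembled densely."""
    A, I = laplace_1d(N), np.eye(N)
    total = np.zeros((N**d, N**d))
    for mu in range(d):
        term = np.array([[1.0]])
        for nu in range(d):
            term = np.kron(term, A if nu == mu else I)
        total += term
    return total

print(laplace_d(3, 3).shape)   # (27, 27), i.e. (N^d, N^d)
```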
Advantages and disadvantages
Notation: $k$ = rank, $d$ = dimension, $n$ = number of dofs in 1D.
1. CP: approximation algorithm can be ill-posed, storage $O(dnk)$, best approximation hard to compute
2. Tucker: reliable arithmetic based on SVD, $O(dnk + k^d)$
3. Hierarchical Tucker: based on SVD, storage $O(dnk + dk^3)$, truncation $O(dnk^2 + dk^4)$
4. TT: based on SVD, $O(dnk^2)$ or $O(dnk^3)$, stable
5. Quantics-TT: $O(n^d) \to O(d \log_q n)$
