Talk given at ENUMATH 2011 in Leicester and GAMM ANLA Workshop 2011 in Bremen. There is a preprint available under http://www.mpi-magdeburg.mpg.de/preprints/index.php
This paper studies an approximate dynamic programming (ADP) strategy for a class of nonlinear switched systems subject to external disturbances. A neural network (NN) is used to approximate the unknown parts of both the actor and the critic for the corresponding nominal system. Training is carried out simultaneously by minimizing the squared-error Hamiltonian function. The closed-loop tracking error is shown to converge to a region of attraction around the origin in the uniformly ultimately bounded (UUB) sense. Simulation results demonstrate the effectiveness of the ADP-based controller.
Random Matrix Theory and Machine Learning - Part 3 (Fabian Pedregosa)
ICML 2021 tutorial on random matrix theory and machine learning.
Part 3 covers: 1. Motivation: Average-case versus worst-case in high dimensions 2. Algorithm halting times (runtimes) 3. Outlook
Random Matrix Theory and Machine Learning - Part 1Fabian Pedregosa
ICML 2021 tutorial on random matrix theory and machine learning. Part 1 covers: 1. A brief history of Random Matrix Theory, 2. Classical Random Matrix Ensembles (basic building blocks)
We have implemented a multiple precision ODE solver based on high-order fully implicit Runge-Kutta (IRK) methods. This ODE solver supports Gauss-type formulas of any order and can be accelerated by using (1) MPFR as the multiple precision floating-point arithmetic library, (2) real tridiagonalization, supported in SPARK3, of the linear equations to be solved in the simplified Newton method used as the inner iteration, (3) a mixed precision iterative refinement method, (4) parallelization with OpenMP, and (5) embedded formulas for IRK methods. In this talk, we describe why we adopted these accelerations and show the efficiency of the ODE solver through numerical experiments such as the Kuramoto-Sivashinsky equation.
Accelerating Pseudo-Marginal MCMC using Gaussian Processes (Matt Moores)
The grouped independence Metropolis-Hastings (GIMH) and Markov chain within Metropolis (MCWM) algorithms are pseudo-marginal methods used to perform Bayesian inference in latent variable models. These methods replace intractable likelihood calculations with unbiased estimates within Markov chain Monte Carlo algorithms. The GIMH method has the posterior of interest as its limiting distribution, but suffers from poor mixing if it is too computationally intensive to obtain high-precision likelihood estimates. The MCWM algorithm has better mixing properties, but less theoretical support. In this paper we accelerate the GIMH method by using a Gaussian process (GP) approximation to the log-likelihood and train this GP using a short pilot run of the MCWM algorithm. Our new method, GP-GIMH, is illustrated on simulated data from a stochastic volatility model and a gene network model. Our approach produces reasonable estimates of the univariate and bivariate posterior distributions and the posterior correlation matrix in these examples, with at least an order of magnitude improvement in computing time.
My talk at the International Conference on Monte Carlo Methods and Applications (MCM 2023), on advances in mathematical aspects of stochastic simulation and Monte Carlo methods, at Sorbonne Université, June 28, 2023, about my recent works (i) "Numerical Smoothing with Hierarchical Adaptive Sparse Grids and Quasi-Monte Carlo Methods for Efficient Option Pricing" (link: https://doi.org/10.1080/14697688.2022.2135455), and (ii) "Multilevel Monte Carlo with Numerical Smoothing for Robust and Efficient Computation of Probabilities and Densities" (link: https://arxiv.org/abs/2003.05708).
Bayesian Inference and Uncertainty Quantification for Inverse Problems (Matt Moores)
So-called “inverse” problems arise when the parameters of a physical system cannot be directly observed. The mapping between these latent parameters and the space of noisy observations is represented as a mathematical model, often involving a system of differential equations. We seek to infer the parameter values that best fit our observed data. However, it is also vital to obtain accurate quantification of the uncertainty involved with these parameters, particularly when the output of the model will be used for forecasting. Bayesian inference provides well-calibrated uncertainty estimates, represented by the posterior distribution over the parameters. In this talk, I will give a brief introduction to Markov chain Monte Carlo (MCMC) algorithms for sampling from the posterior distribution and describe how they can be combined with numerical solvers for the forward model. We apply these methods to two examples of ODE models: growth curves in ecology, and thermogravimetric analysis (TGA) in chemistry. This is joint work with Matthew Berry, Mark Nelson, Brian Monaghan and Raymond Longbottom.
New data structures and algorithms for post-processing large data sets and ... (Alexander Litvinenko)
In this work, we describe advanced numerical tools for working with multivariate functions and for the analysis of large data sets. These tools drastically reduce the required computing time and storage cost, and therefore allow us to consider much larger data sets or finer meshes. Covariance matrices are crucial in spatio-temporal statistical tasks, but are often very expensive to compute and store, especially in 3D. Therefore, we approximate covariance functions by cheap surrogates in a low-rank tensor format. We apply the Tucker and canonical tensor decompositions to a family of Matérn- and Slater-type functions with varying parameters and demonstrate numerically that their approximations exhibit exponentially fast convergence. We prove the exponential convergence of the Tucker and canonical approximations in the tensor rank parameters. Several statistical operations are performed in this low-rank tensor format, including evaluating the conditional covariance matrix, spatially averaged estimation variance, computing a quadratic form, determinant, trace, log-likelihood, inverse, and Cholesky decomposition of a large covariance matrix. Low-rank tensor approximations substantially reduce the computing and storage costs. For example, the storage cost is reduced from an exponential O(n^d) to a linear scaling O(drn), where d is the spatial dimension, n is the number of mesh points in one direction, and r is the tensor rank. Prerequisites for the applicability of the proposed techniques are the assumptions that the data, locations, and measurements lie on a tensor (axis-parallel) grid and that the covariance function depends on a distance,...
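The storage counts quoted above (O(n^d) dense versus O(drn) in a rank-r tensor format) can be made concrete in a few lines of Python; the grid size n, dimension d, and rank r below are illustrative choices, not values from the abstract:

```python
# Sketch: number of stored values for a function on a d-dimensional tensor
# grid, dense versus a rank-r factored form with d factor matrices of size
# n x r (the small Tucker core is ignored here for simplicity).

def dense_entries(n: int, d: int) -> int:
    # the full grid holds n**d values
    return n ** d

def low_rank_entries(n: int, d: int, r: int) -> int:
    # d factor matrices, each n x r
    return d * r * n

n, d, r = 100, 3, 10
print(dense_entries(n, d))        # 1000000 values on the full grid
print(low_rank_entries(n, d, r))  # 3000 values in the factored form
```

Even for this modest 3D example the factored form stores more than 300 times fewer values, and the gap widens exponentially with d.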
Talk of Michael Samet, entitled "Optimal Damping with Hierarchical Adaptive Quadrature for Efficient Fourier Pricing of Multi-Asset Options in Lévy Models", at the International Conference on Computational Finance (ICCF), Wuppertal, June 6-10, 2022
SMC^2: an algorithm for sequential analysis of state-space models (Pierre Jacob)
In these slides I presented the SMC^2 method (see the article here: http://arxiv.org/abs/1101.1528 ) to an audience of marine biogeochemistry people, emphasizing the model evidence estimation aspect.
Joint blind calibration and time-delay estimation for multiband ranging (Tarik Kazaz)
In this presentation, we focus on the problem of blind joint calibration of multiband transceivers and time-delay (TD) estimation of multipath channels. We show that this problem can be formulated as a particular case of covariance matching. Although this problem is severely ill-posed, prior information about radio-frequency chain distortions and multipath channel sparsity is used for regularization. This approach leads to a biconvex optimization problem, which is formulated as a rank-constrained linear system and solved by a simple group Lasso algorithm.
This method is general and can also be applied to the calibration of sensor arrays and to direction-of-arrival estimation.
Numerical experiments show that the proposed algorithm provides better calibration and higher resolution for TD estimation than current state-of-the-art methods.
Stochastic reaction networks (SRNs) are a particular class of continuous-time Markov chains used to model a wide range of phenomena, including biological/chemical reactions, epidemics, risk theory, queuing, and supply chain/social/multi-agent networks. In this context, we explore the efficient estimation of statistical quantities, particularly rare event probabilities, and propose two alternative importance sampling (IS) approaches [1,2] to improve the Monte Carlo (MC) estimator efficiency. The key challenge in the IS framework is to choose an appropriate change of probability measure to achieve substantial variance reduction, which often requires insights into the underlying problem. Therefore, we propose an automated approach to obtain a highly efficient path-dependent measure change based on an original connection between finding optimal IS parameters and solving a variance minimization problem via a stochastic optimal control formulation. We pursue two alternative approaches to mitigate the curse of dimensionality when solving the resulting dynamic programming problem. In the first approach [1], we propose a learning-based method to approximate the value function using a neural network, where the parameters are determined via a stochastic optimization algorithm. As an alternative, we present in [2] a dimension reduction method, based on mapping the problem to a significantly lower dimensional space via the Markovian projection (MP) idea. The output of this model reduction technique is a low-dimensional SRN (potentially one-dimensional) that preserves the marginal distribution of the original high-dimensional SRN system. The dynamics of the projected process are obtained via a discrete $L^2$ regression.
By solving a resulting projected Hamilton-Jacobi-Bellman (HJB) equation for the reduced-dimensional SRN, we get projected IS parameters, which are then mapped back to the original full-dimensional SRN system and result in an efficient IS-MC estimator of the full-dimensional SRN. Our analysis and numerical experiments verify that both proposed IS approaches (learning-based and MP-HJB-IS) substantially reduce the MC estimator's variance, resulting in a lower computational complexity in the rare event regime than standard MC estimators. [1] Ben Hammouda, C., Ben Rached, N., Tempone, R., and Wiechert, S. Learning-based importance sampling via stochastic optimal control for stochastic reaction networks. Statistics and Computing 33, no. 3 (2023): 58. [2] Ben Hammouda, C., Ben Rached, N., Tempone, R., and Wiechert, S. (2023). Automated Importance Sampling via Optimal Control for Stochastic Reaction Networks: A Markovian Projection-based Approach. To appear soon.
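The variance-reduction principle behind importance sampling can be illustrated with a generic textbook example (a mean-shifted Gaussian, not the authors' SRN construction): sample from a proposal concentrated on the rare event and reweight by the likelihood ratio.

```python
import numpy as np

# Generic IS illustration: estimate the rare-event probability p = P(X > 4)
# for X ~ N(0,1). Plain MC almost never hits the event; sampling from the
# shifted proposal N(4,1) and reweighting recovers p with small variance.

rng = np.random.default_rng(0)
N, a = 100_000, 4.0

# plain Monte Carlo: almost no samples land in the rare event
x = rng.standard_normal(N)
p_mc = np.mean(x > a)

# importance sampling: shift the mean into the rare region
y = rng.standard_normal(N) + a
weights = np.exp(-a * y + a ** 2 / 2)   # likelihood ratio dN(0,1)/dN(a,1) at y
p_is = np.mean((y > a) * weights)

print(p_is)   # close to the true value 3.167e-5
```

The measure change here is a fixed mean shift; the abstract's point is precisely that for SRNs a good (path-dependent) change of measure is not obvious and is instead obtained from a stochastic optimal control problem.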
Tucker tensor analysis of Matérn functions in spatial statistics (Alexander Litvinenko)
1. Motivation: improve statistical models
2. Motivation: disadvantages of matrices
3. Tools: Tucker tensor format
4. Tensor approximation of Matérn covariance function via FFT
5. Typical statistical operations in Tucker tensor format
6. Numerical experiments
On Deflations in Extended QR Algorithms (Thomas Mach)
Deflation procedures are one of the core parts of every iterative eigenvalue algorithm. In this lecture we discuss the deflation criterion used in the extended QR algorithm based on the chasing of rotations. We show that this deflation criterion can be considered to be optimal with respect to absolute and relative perturbation of the eigenvalues.
Further, we present a generalization of aggressive early deflation to the new extended QR algorithms. Aggressive early deflation is the key technique for the identification and deflation of already converged eigenvalues. Often these possibilities for deflation are not detected by the standard technique. We present numerical results underpinning the power of aggressive early deflation in the context of extended QR algorithms. These ideas can be further generalized to middle deflations in the setting of extended QR algorithms.
2024.06.01 Introducing a competency framework for language learning materials ... (Sandy Millin)
http://sandymillin.wordpress.com/iateflwebinar2024
Published classroom materials form the basis of syllabuses, drive teacher professional development, and have a potentially huge influence on learners, teachers and education systems. All teachers also create their own materials, whether a few sentences on a blackboard, a highly-structured fully-realised online course, or anything in between. Despite this, the knowledge and skills needed to create effective language learning materials are rarely part of teacher training, and are mostly learnt by trial and error.
Knowledge and skills frameworks, generally called competency frameworks, for ELT teachers, trainers and managers have existed for a few years now. However, until I created one for my MA dissertation, there wasn’t one drawing together what we need to know and do to be able to effectively produce language learning materials.
This webinar will introduce you to my framework, highlighting the key competencies I identified from my research. It will also show how anybody involved in language teaching (any language, not just English!), teacher training, managing schools or developing language learning materials can benefit from using the framework.
Francesca Gottschalk - How can education support child empowerment.pptx (EduSkills OECD)
Francesca Gottschalk from the OECD’s Centre for Educational Research and Innovation presents at the Ask an Expert Webinar: How can education support child empowerment?
Palestine last event orientationfvgnh .pptx (RaedMohamed3)
An EFL lesson about the current events in Palestine. It is intended for intermediate students who wish to improve their listening skills through a short PowerPoint lesson.
June 3, 2024 Anti-Semitism Letter Sent to MIT President Kornbluth and MIT Cor... (Levi Shapiro)
Letter from the Congress of the United States regarding Anti-Semitism sent June 3rd to MIT President Sally Kornbluth, MIT Corp Chair, Mark Gorenberg
Dear Dr. Kornbluth and Mr. Gorenberg,
The US House of Representatives is deeply concerned by ongoing and pervasive acts of antisemitic
harassment and intimidation at the Massachusetts Institute of Technology (MIT). Failing to act decisively to ensure a safe learning environment for all students would be a grave dereliction of your responsibilities as President of MIT and Chair of the MIT Corporation.
This Congress will not stand idly by and allow an environment hostile to Jewish students to persist. The House believes that your institution is in violation of Title VI of the Civil Rights Act, and the inability or
unwillingness to rectify this violation through action requires accountability.
Postsecondary education is a unique opportunity for students to learn and have their ideas and beliefs challenged. However, universities receiving hundreds of millions of federal funds annually have denied
students that opportunity and have been hijacked to become venues for the promotion of terrorism, antisemitic harassment and intimidation, unlawful encampments, and in some cases, assaults and riots.
The House of Representatives will not countenance the use of federal funds to indoctrinate students into hateful, antisemitic, anti-American supporters of terrorism. Investigations into campus antisemitism by the Committee on Education and the Workforce and the Committee on Ways and Means have been expanded into a Congress-wide probe across all relevant jurisdictions to address this national crisis. The undersigned Committees will conduct oversight into the use of federal funds at MIT and its learning environment under authorities granted to each Committee.
• The Committee on Education and the Workforce has been investigating your institution since December 7, 2023. The Committee has broad jurisdiction over postsecondary education, including its compliance with Title VI of the Civil Rights Act, campus safety concerns over disruptions to the learning environment, and the awarding of federal student aid under the Higher Education Act.
• The Committee on Oversight and Accountability is investigating the sources of funding and other support flowing to groups espousing pro-Hamas propaganda and engaged in antisemitic harassment and intimidation of students. The Committee on Oversight and Accountability is the principal oversight committee of the US House of Representatives and has broad authority to investigate “any matter” at “any time” under House Rule X.
• The Committee on Ways and Means has been investigating several universities since November 15, 2023, when the Committee held a hearing entitled From Ivory Towers to Dark Corners: Investigating the Nexus Between Antisemitism, Tax-Exempt Universities, and Terror Financing. The Committee followed the hearing with letters to those institutions on January 10, 202
Instructions for Submissions through G-Classroom.pptx (Jheel Barad)
This presentation provides a briefing on how to upload submissions and documents in Google Classroom. It was prepared as part of an orientation for new Sainik School in-service teacher trainees. As a training officer, my goal is to ensure that you are comfortable and proficient with this essential tool for managing assignments and fostering student engagement.
Read| The latest issue of The Challenger is here! We are thrilled to announce that our school paper has qualified for the NATIONAL SCHOOLS PRESS CONFERENCE (NSPC) 2024. Thank you for your unwavering support and trust. Dive into the stories that made us stand out!
Model Attribute Check Company Auto Property (Celine George)
In Odoo, the multi-company feature allows you to manage multiple companies within a single Odoo database instance. Each company can have its own configurations while still sharing common resources such as products, customers, and suppliers.
Computing Inner Eigenvalues of Matrices in Tensor Train Matrix Format
GAMM Workshop Applied and Numerical Linear Algebra 2011
September 22, 2011, Bremen
Thomas Mach, joint work with Peter Benner
Max Planck Institute for Dynamics of Complex Technical Systems
Computational Methods in Systems and Control Theory, Magdeburg
Max Planck Institute Magdeburg Thomas Mach, Computing Eigenvalues of Matrices in TTM Format 1/21
Outline
1 Tensor Trains
2 PINVIT and Folded Spectrum Method
3 Numerical Results
Tensor Trains
[Oseledets, Tyrtyshnikov '09]
T ∈ R^(m^d)
Canonical format:
T = Σ_{α=1}^{r} U_1(i_1, α) U_2(i_2, α) ⋯ U_d(i_d, α), with U_j(·, α) ∈ R^m
Tucker format:
T(i_1, i_2, …, i_d) = Σ_{α_1=1}^{r_1} ⋯ Σ_{α_d=1}^{r_d} G_{α_1,…,α_d} Π_{j=1}^{d} U_j(i_j, α_j), with G ∈ R^(r_1×⋯×r_d) and U_j(·, α_j) ∈ R^m
Tensor train (TT) format:
T(i_1, i_2, …, i_d) = Σ_{α_1,…,α_{d−1}} G_1(i_1, α_1) G_2(α_1, i_2, α_2) ⋯ G_{d−1}(α_{d−2}, i_{d−1}, α_{d−1}) G_d(α_{d−1}, i_d)
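The TT representation can be sketched in a few lines of NumPy: each core G_k is stored as an array of shape (r_{k-1}, m, r_k) with boundary ranks r_0 = r_d = 1, and an entry of T is a product of matrix slices. The dimensions, ranks, and random entries below are illustrative assumptions, not values from the talk.

```python
import numpy as np

def tt_entry(cores, idx):
    """Evaluate T(i_1,...,i_d) = G_1(i_1,:) G_2(:,i_2,:) ... G_d(:,i_d)."""
    v = np.ones((1, 1))
    for G, i in zip(cores, idx):
        v = v @ G[:, i, :]          # contract over the rank index alpha_k
    return v[0, 0]

rng = np.random.default_rng(0)
d, m, r = 4, 3, 2
ranks = [1] + [r] * (d - 1) + [1]
cores = [rng.standard_normal((ranks[k], m, ranks[k + 1])) for k in range(d)]

# assemble the full tensor entry by entry from the TT cores
T = np.zeros((m,) * d)
for idx in np.ndindex(*T.shape):
    T[idx] = tt_entry(cores, idx)
print(T.shape)  # (3, 3, 3, 3)
```

Note the storage: the cores hold d·m·r² numbers instead of m^d, which is the whole point of the format once d is large.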
Tensor Train Matrix Format (TTM)
[Oseledets '10]
M ∈ R^(m^d × m^d)
M(i_1, i_2, …, i_d; j_1, j_2, …, j_d) = Σ_{α_1,…,α_{d−1}} G_1(i_1, j_1, α_1) G_2(α_1, i_2, j_2, α_2) ⋯ G_d(α_{d−1}, i_d, j_d)
TTM is a data-sparse matrix format.
m = 2 ⇒ QTT matrix format
Matrix Vector Product in TT and TTM Format
[Oseledets '10]
T ∈ (Q)TT, M ∈ (Q)TTM ⇒ MT = W ∈ (Q)TT
W(i_1, i_2, …, i_d) = Σ_{j_1,j_2,…,j_d} M(i_1, j_1, i_2, j_2, …, i_d, j_d) T(j_1, j_2, …, j_d)
With G_ℓ the cores of M and H_ℓ the cores of T, contracting over j_ℓ gives the cores of W with merged rank indices:
K_1(i_1, (α_1, β_1)) = Σ_{j_1} G_1(i_1, j_1, α_1) H_1(j_1, β_1)
K_2((α_1, β_1), i_2, (α_2, β_2)) = Σ_{j_2} G_2(α_1, i_2, j_2, α_2) H_2(β_1, j_2, β_2)
⋮
The TT ranks of W are thus the products of the ranks of M and T, so a truncation step is applied afterwards to reduce them again.
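The core-wise product can be sketched in NumPy: each core of W is a contraction of one core of M with one core of T over the shared index j_ℓ, and its rank indices are the pairs (α_ℓ, β_ℓ). All sizes and the random cores below are illustrative assumptions; the final check compares against the dense matrix-vector product.

```python
import numpy as np

rng = np.random.default_rng(1)
d, m, R, r = 3, 2, 2, 2
Rk = [1] + [R] * (d - 1) + [1]          # ranks of the TTM operator M
rk = [1] + [r] * (d - 1) + [1]          # ranks of the TT vector T
G_cores = [rng.standard_normal((Rk[k], m, m, Rk[k + 1])) for k in range(d)]
H_cores = [rng.standard_normal((rk[k], m, rk[k + 1])) for k in range(d)]

def ttm_matvec(G_cores, H_cores):
    """W = M T, core by core; the ranks multiply (hence the later truncation)."""
    K_cores = []
    for G, H in zip(G_cores, H_cores):
        A, mm, _, C = G.shape
        B, _, E = H.shape
        # K((a,b), i, (c,e)) = sum_j G(a,i,j,c) H(b,j,e)
        K = np.einsum('aijc,bje->abice', G, H).reshape(A * B, mm, C * E)
        K_cores.append(K)
    return K_cores

def tt_to_vec(cores):
    # contract the TT chain into a full vector of length m**d
    out = cores[0]
    for G in cores[1:]:
        out = np.einsum('...a,aib->...ib', out, G)
    return out.reshape(-1)

def ttm_to_mat(cores):
    # contract the TTM chain into a full m**d x m**d matrix
    out = cores[0]
    for G in cores[1:]:
        out = np.einsum('...a,aijb->...ijb', out, G)
    out = out.squeeze()                  # axes (i1, j1, i2, j2, ..., id, jd)
    perm = list(range(0, 2 * d, 2)) + list(range(1, 2 * d, 2))
    return out.transpose(perm).reshape(m ** d, m ** d)

W = tt_to_vec(ttm_matvec(G_cores, H_cores))
print(np.allclose(W, ttm_to_mat(G_cores) @ tt_to_vec(H_cores)))  # True
```

The reshape to (A*B, m, C*E) is exactly the merging of the rank pairs (α, β) shown on the slide; without truncation the ranks of W grow multiplicatively with every matrix-vector product.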
Inversion of a Matrix in TTM Format
[Schulz 1933, Oseledets '10]
Newton-Schulz Iteration
X_{k+1} = 2X_k − X_k M X_k
X_0 is an initial approximation to M^{−1} with ρ(M X_0 − I) < 1.
If M is symmetric positive definite, then X_0 = M/‖M‖_2^2 is an admissible initial approximation.
Multiplication-based form (with Y_k = M X_k; all iterates commute, being polynomials in M):
H_{k+1} = 2I − Y_k
Y_{k+1} = Y_k H_{k+1}
X_{k+1} = H_{k+1} X_k
In TTM arithmetic, every product is followed by a truncation T(·, ε):
H_{k+1} = T(2I − Y_k, ε)
Y_{k+1} = T(Y_k H_{k+1}, ε)
X_{k+1} = T(H_{k+1} X_k, ε)
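A dense NumPy sketch of the Newton-Schulz iteration (without the TTM truncation step); the test matrix and iteration count are illustrative choices:

```python
import numpy as np

# Newton-Schulz iteration X_{k+1} = 2 X_k - X_k M X_k, started from
# X_0 = M / ||M||_2^2 for an spd matrix M, which gives rho(M X_0 - I) < 1
# and hence quadratic convergence of X_k to M^{-1}.

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 6))
M = A @ A.T + 6 * np.eye(6)           # symmetric positive definite test matrix

X = M / np.linalg.norm(M, 2) ** 2     # admissible starting value
for _ in range(30):
    X = 2 * X - X @ M @ X             # residual I - M X squares every step

print(np.allclose(X @ M, np.eye(6)))  # True
```

In the TTM setting each of these products would inflate the ranks, which is why the slide's version truncates after every multiplication.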
Eigenvalue Problem
Problem Setting
Assume M ∈ R^(2^d × 2^d) is given in TTM format and M is symmetric positive definite.
Compute an eigenvalue λ and eigenvector v ∈ R^(2^d) of M: Mv = λv.
Application: quantum molecular dynamics [Lebedeva '11]
25. Tensor Trains PINVIT and Folded Spectrum Method Numerical Results
Preconditioned Inverse Iteration
[Knyazev, Neymeyr, et al.]
Definition
The function
x T Mx
µ(x) = µ(x, M) =
xT x
is called the Rayleigh quotient.
Minimize the Rayleigh quotient by a gradient method:
x_{i+1} := x_i − α ∇µ(x_i),   ∇µ(x) = (2/(x^T x)) (Mx − x µ(x)),
with residual r(x) = Mx − x µ(x).
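The gradient formula of the Rayleigh quotient is easy to verify numerically; a small sketch (function names are ours):

```python
import numpy as np

def rayleigh(x, M):
    """Rayleigh quotient mu(x) = x^T M x / x^T x."""
    return (x @ M @ x) / (x @ x)

def rayleigh_grad(x, M):
    """Its gradient: (2 / x^T x) * (M x - x mu(x))."""
    return (2.0 / (x @ x)) * (M @ x - rayleigh(x, M) * x)
```

A central finite-difference check against `rayleigh` confirms the formula on random symmetric matrices.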
+ preconditioning ⇒ update equation:
x_{i+1} := x_i − B^{-1} (Mx_i − x_i µ(x_i)).
The preconditioned residual B^{-1} r(x) = B^{-1} (Mx − x µ(x)) makes this an inexact Newton method.
Preconditioned Inverse Iteration
[Knyazev, Neymeyr 2009]
x_{i+1} := x_i − B^{-1} (Mx_i − x_i µ(x_i))
If M ∈ R^{n×n} is symmetric positive definite and B^{-1} approximates the inverse of M, so that
‖I − B^{-1}M‖_M ≤ c < 1,
then Preconditioned INVerse ITeration (PINVIT) converges and the number of iterations is independent of n.
Preconditioned Inverse Iteration
[Knyazev, Neymeyr, et al.]
The residual
r_i = Mx_i − x_i µ(x_i)
converges to 0, so that
‖r_i‖_2 < ε
is a useful termination criterion.
Algorithm
The number of iterations is independent of the matrix size n = 2^d.
PINVIT(1, s)
Input: M ∈ R^{n×n}, X_0 ∈ R^{n×s} (X_0^T X_0 = I, e.g. randomly chosen)
Output: X_p ∈ R^{n×s}, µ ∈ R^{s×s}, with ‖MX_p − X_p µ‖ ≤ ε
  Approximate inversion B^{-1} ≈ M^{-1}
  µ := X_0^T M X_0;  R := MX_0 − X_0 µ
  for (i := 1; ‖R‖_F > ε; i++) do
      X_i := Orthogonalize(X_{i−1} − B^{-1} R)
      µ := X_i^T M X_i;  R := MX_i − X_i µ
  end
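A dense stand-in for PINVIT(1, s) can be written directly from the algorithm (a sketch under the assumption of a dense matrix; in the TT setting every operation below would additionally be truncated, and `Binv` plays the role of B^{-1}):

```python
import numpy as np

def pinvit(M, Binv, X0, tol=1e-8, max_iter=500):
    """PINVIT(1, s) sketch: X0 is n-by-s with orthonormal columns.

    Returns (X, mu) with ||M X - X mu||_F <= tol; the eigenvalues of the
    s-by-s matrix mu approximate the s smallest eigenvalues of M.
    """
    X = X0
    mu = X.T @ M @ X
    R = M @ X - X @ mu
    for _ in range(max_iter):
        if np.linalg.norm(R, 'fro') <= tol:
            break
        X, _ = np.linalg.qr(X - Binv @ R)   # preconditioned step + orthogonalization
        mu = X.T @ M @ X
        R = M @ X - X @ mu
    return X, mu
```

With the exact inverse as preconditioner this reduces to simultaneous inverse iteration, e.g. recovering the three smallest eigenvalues of a 1D Laplacian.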
The approximate inversion B^{-1} ≈ M^{-1} is done by the Newton-Schulz iteration B_{k+1} = 2B_k − B_k M B_k [Oseledets '10, for TTM].
The products MX_i and B^{-1}R inside the loop are TTM-TT products.
Folded Spectrum Method
How to find λ_i?
If i = n − s with s < O(log n), use the subspace version PINVIT(·, s).
If i = n − s with s ≫ log n, shift with σ near λ_i.
(Sketch of the spectrum: 0 < λ_n ≤ λ_{n−1} ≤ · · · ≤ λ_1, with the shift σ placed near λ_i.)
But (M − σI) is not positive definite.
Folded Spectrum Method
Folded Spectrum Method [Wang, Zunger 1994; Morgan 1991]
M_σ = (M − σI)^2
M_σ is s.p.d. if M is s.p.d. and σ ≠ λ_i.
Assume all eigenvalues of M_σ are simple.
Mv = λv ⇔ M_σ v = (M − σI)^2 v
             = M^2 v − 2σMv + σ^2 v
             = λ^2 v − 2σλv + σ^2 v
             = (λ − σ)^2 v
Example: eigenpairs (2, v_2) and (3, v_3) with σ = 2.5 ⇒ M_σ has the eigenvalue 0.25 with multiplicity 2.
PINVIT then computes some v ∈ span(v_2, v_3) ⇒ v^T M v / v^T v ∈ [2, 3].
Remedy: use PINVIT to compute V ∈ R^{n×2} ⇒ Λ(V^T M V) = {2, 3} (for V^T V = I).
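The example can be replayed numerically (a hypothetical 4×4 diagonal matrix with eigenvalues 1, 2, 3, 7 stands in for M):

```python
import numpy as np

M = np.diag([1.0, 2.0, 3.0, 7.0])
sigma = 2.5
Msig = (M - sigma * np.eye(4)) @ (M - sigma * np.eye(4))
# the two smallest eigenvalues of M_sigma are both 0.25 (from lambda = 2 and 3)
w, V = np.linalg.eigh(Msig)
V2 = V[:, :2]                        # spans {v2, v3}, but cannot separate them
# Rayleigh-Ritz with the original M recovers both eigenvalues
ritz = np.linalg.eigvalsh(V2.T @ M @ V2)
```

Here `ritz` equals {2, 3}, while a single vector from this subspace only yields some value in [2, 3].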
Folded Spectrum Method
1. Choose σ.
2. Compute
   a) M_σ := (M − σI)^2 and
   b) B^{-1} :≈ M_σ^{-1}.
3. Use PINVIT to find the smallest eigenpair (µ_σ, v) of M_σ.
4. Compute µ := v^T M v / v^T v.
(µ, v) is the eigenpair of M nearest to σ.
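A dense end-to-end sketch of these four steps (the exact inverse stands in for the preconditioner B^{-1}, and matrix and shift are made up for illustration):

```python
import numpy as np

def folded_spectrum_eig(M, sigma, tol=1e-10, max_iter=500):
    """Find the eigenpair of M nearest to sigma via PINVIT on M_sigma = (M - sigma I)^2."""
    n = M.shape[0]
    S = M - sigma * np.eye(n)
    Msig = S @ S                         # step 2a: fold the spectrum
    Binv = np.linalg.inv(Msig)           # step 2b: here simply the exact inverse
    v = np.random.default_rng(2).standard_normal(n)
    v /= np.linalg.norm(v)
    for _ in range(max_iter):            # step 3: smallest eigenpair of M_sigma
        mu = v @ Msig @ v
        r = Msig @ v - mu * v
        if np.linalg.norm(r) < tol:
            break
        v -= Binv @ r                    # PINVIT step with exact inverse
        v /= np.linalg.norm(v)
    return v @ M @ v, v                  # step 4: Rayleigh quotient with M

lam, v = folded_spectrum_eig(np.diag([1.0, 2.0, 5.0, 9.0]), sigma=4.6)
```

For σ = 4.6 the nearest eigenvalue is 5; PINVIT sees it as the smallest eigenvalue (5 − 4.6)^2 = 0.16 of M_σ, and the Rayleigh quotient with the original M unfolds it again.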
In TTM format M can be shifted, squared and inverted with reasonable costs.
If M is sparse, then squaring and inverting is prohibitive.
Drawbacks
The condition number of (M − σI)^2 is larger.
⇒ The computation of M_σ^{-1} is more expensive.
⇒ M_σ^{-1} has larger local ranks.
⇒ The product M_σ^{-1} v is more expensive.
Multiple eigenvalues of M_σ may lead to incomplete subspace information.
⇒ v^T M v / v^T v does not approximate λ.
Numerical Results
TT-Toolbox 2.1
[Oseledets et al. '09–'11]
We use TT-Toolbox 2.1 for MATLAB by I. V. Oseledets et al.
2D Laplace
M = α∆_2 = α(∆_1 ⊗ I + I ⊗ ∆_1) ∈ R^{(2^d)^2 × (2^d)^2}, with ∆_1 = tridiag(−1, 2, −1).
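For reference, this test matrix is easy to build densely via Kronecker products (small d only; the eigenvalues of ∆_2 with n = 2^d are 4 − 2 cos(jπ/(n+1)) − 2 cos(kπ/(n+1))):

```python
import numpy as np

def laplace_2d(d):
    """Dense 2D Laplacian Delta_2 = Delta_1 (x) I + I (x) Delta_1 of size (2^d)^2."""
    n = 2 ** d
    D1 = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)  # tridiag(-1, 2, -1)
    return np.kron(D1, np.eye(n)) + np.kron(np.eye(n), D1)
```

For d = 3 the smallest eigenvalue is 2(2 − 2 cos(π/9)), matching the Kronecker-sum structure.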
without shift, 3 smallest eigenvalues
n d tinv in s tPINVIT in s # it. error
64 3 0.449 1.577 17 1.2230 e−07
256 4 0.345 1.376 18 1.1450 e−07
1 024 5 0.997 3.236 27 2.7432 e−07
4 096 6 2.347 8.172 19 1.8911 e−07
16 384 7 4.789 19.885 16 6.4020 e−08
65 536 8 11.895 43.717 20 1.0688 e−07
262 144 9 19.076 63.727 20 4.1304 e−09
1 048 576 10 29.865 99.808 20 3.7722 e−09
4 194 304 11 110.712* 331.059 27 5.8789 e−10
16 777 216 12 165.560* 439.341 23 4.5062 e−09
67 108 864 13 240.226* 587.796 26 8.9795 e−09
2D Laplace
with shift σ = 203.3139, folded spectrum method, 4 eigenvalues
n d tinv in s tPINVIT in s # it. error
64 3 0.796 0.843 8 5.5601 e−10
256 4 3.360 3.053 45 2.9268 e−09
1 024 5 22.659 4.581 20 1.1411 e−10
4 096 6 60.755 32.264 31 2.5295 e−12
16 384 7 411.165* 80.827 28 6.3665 e−12
65 536 8 1 808.302* 344.696 27 2.5835 e−11
3D Laplace
M = α∆_3 = α(∆_1 ⊗ I ⊗ I + I ⊗ ∆_1 ⊗ I + I ⊗ I ⊗ ∆_1) ∈ R^{(2^d)^3 × (2^d)^3}.
without shift, 4 smallest eigenvalues
n d tinv in s tPINVIT in s # it. error
64 2 0.219 4.518 30 2.4762 e−07
512 3 0.561 6.742 27 2.9333 e−07
4 096 4 1.109 23.715 24 1.5441 e−07
32 768 5 7.197 99.543 28 9.1209 e−09
262 144 6 11.052 249.084 24 1.9956 e−08
2 097 152 7 20.893 923.874 26 1.6577 e−07
16 777 216 8 34.937 5 131.664 34 1.1977 e−08
3D Laplace
with shift σ = 230.6195, folded spectrum method, 6 eigenvalues
n d tinv in s tPINVIT in s # it. error
512 3 14.300 45.979 29 2.8479 e−07
4 096 4 137.502 293.154 54 1.8060 e−08
32 768 5 1 952.980* 1 223.395 19 1.0971 e−11
262 144 6 37 149.998* out of mem¹
¹ canceled after 60 hours while using > 300 GB RAM
4D Laplace
M = α∆_4 = α(∆_2 ⊗ I + I ⊗ ∆_2) ∈ R^{(2^d)^4 × (2^d)^4}.
without shift, 5 smallest eigenvalues
n d tinv in s tPINVIT in s # it. error
256 2 0.240 9.265 36 2.5997 e−07
4 096 3 0.873 23.993 28 1.7104 e−07
65 536 4 1.436 119.348 28 5.9876 e−08
1 048 576 5 5.975 497.812 32 2.2100 e−08
16 777 216 6 12.655 1 710.326 31 7.9299 e−09
268 435 456 7 23.628 7 898.374 41 5.1963 e−10
Conclusions
Finding the eigenvalues by PINVIT is cheap and storage efficient.
The folded spectrum method enables us to compute inner eigenvalues, too.
The use of the folded spectrum method leads to ill-conditioned problems.
Choose the shift and the subspace dimension carefully.
Thank you for your attention.