The document discusses applying alternating direction implicit (ADI) methods to solve tensor-structured equations. ADI methods were originally developed for linear systems arising from Poisson problems on uniform grids; they exploit the separable structure of higher-dimensional differential operators. The document outlines how ADI generalizes to matrix equations, such as Lyapunov equations, and to systems with tensor structure, such as those arising from tensor train decompositions of multidimensional problems. By exploiting the tensor structure, the ADI method can solve large problems with significantly less computational cost and storage than solving the equivalent vectorized problem directly.
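As a rough illustration of the iteration ADI is built on (a plain matrix sketch, not the document's tensor-structured variant), the two-sweep ADI iteration for the Sylvester equation AX + XB = C can be written as follows; the 1D Laplacian test matrices and the geometric shift ladder are assumptions of this sketch:

```python
import numpy as np

def adi_sylvester(A, B, C, shifts):
    """Two-sweep ADI iteration for A X + X B = C.

    Each shift p costs two shifted linear solves; with shifts covering
    the spectra of A and B the residual contracts geometrically.
    """
    n, m = C.shape
    X = np.zeros((n, m))
    I_n, I_m = np.eye(n), np.eye(m)
    for p in shifts:
        # half-sweep: X (B + p I) = C - (A - p I) X_old
        X = np.linalg.solve((B + p * I_m).T, (C - (A - p * I_n) @ X).T).T
        # full sweep: (A + p I) X_new = C - X (B - p I)
        X = np.linalg.solve(A + p * I_n, C - X @ (B - p * I_m))
    return X

# 1D Laplacian stencil as a model separable (Poisson-like) operator
n = 16
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
B = A.copy()
C = np.random.default_rng(0).standard_normal((n, n))

# geometric shift ladder spanning the known eigenvalue range of A
lam_min = 2 - 2 * np.cos(np.pi / (n + 1))
lam_max = 2 - 2 * np.cos(n * np.pi / (n + 1))
shifts = np.geomspace(lam_min, lam_max, 12)

X = adi_sylvester(A, B, C, shifts)
res = np.linalg.norm(A @ X + X @ B - C) / np.linalg.norm(C)
print(f"relative residual: {res:.2e}")
```

A dozen shifted solves already drive the residual to near machine precision here; tensor-structured variants apply the same sweeps factor-by-factor instead of to full matrices.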
The document discusses description logics, which are decidable fragments of first-order logic used for knowledge representation. It presents the syntax and semantics of ALC, a basic description logic. It then introduces a labeled sequent calculus called SCALC for reasoning with ALC concepts. SCALC uses labeled formulas and includes structural, boolean, and generalization rules for reasoning over ALC concepts. An example proof in SCALC is provided.
1) The document is a math exam paper containing multiple choice and written response questions.
2) The questions cover topics in additional mathematics including solving quadratic equations, integration, sums, graphs of trigonometric functions, and simultaneous equations.
3) The paper is divided into two sections with the first section worth 30 marks and the second section worth 20 marks.
IRJET- Analytic Evaluation of the Head Injury Criterion (HIC) within the Fram... (IRJET Journal)
This document presents an analytic evaluation of the Head Injury Criterion (HIC) within the framework of constrained optimization theory. The HIC is a weighted impulse function used to predict the probability of closed head injury based on measured head acceleration. Previous work analyzed the unclipped HIC function, but the clipped HIC formulation used in practice limits the evaluation window duration. The author develops analytic relationships for determining the window initiation and termination points to maximize the clipped HIC function. Example applications illustrate the general solutions for when head acceleration is defined by a single function or composite functions over the evaluation domain.
1) The present value of a perpetuity is equal to the constant payment divided by the interest rate.
2) The present value of an ordinary annuity can be derived by subtracting the present value of a perpetuity starting at time N+1 from a perpetuity starting at time 1.
3) This results in a formula for present value of an ordinary annuity as a function of the payment, interest rate, and number of periods.
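The three statements above translate directly into code; a minimal sketch (the function names are illustrative):

```python
def pv_perpetuity(payment, r):
    # 1) perpetuity: constant payment divided by the interest rate
    return payment / r

def pv_annuity(payment, r, n):
    # 2) annuity = perpetuity starting at t=1 minus perpetuity starting
    #    at t=n+1, the latter discounted back n periods
    return pv_perpetuity(payment, r) - pv_perpetuity(payment, r) / (1 + r) ** n

# cross-check 3) against the direct sum of discounted payments
payment, r, n = 100.0, 0.05, 10
direct = sum(payment / (1 + r) ** t for t in range(1, n + 1))
print(pv_annuity(payment, r, n), direct)  # both ≈ 772.17
```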
A Study on the Root Systems and Dynkin diagrams associated with QHA2(1) (IRJET Journal)
This document discusses the quasi-hyperbolic Kac-Moody algebra QHA2(1). It begins with an abstract that introduces the algebra and states that the paper aims to classify the Dynkin diagrams associated with QHA2(1) and study properties of strictly and purely imaginary roots. It then provides background on Kac-Moody algebras, roots, and related concepts. The main results are a classification theorem stating there are 212 connected, non-isomorphic Dynkin diagrams for QHA2(1) and a discussion of strictly and purely imaginary roots for this algebra.
This document provides a quick reference guide for the 2010-2011 competitive events offered by FBLA, including the event name, the grade level, whether it is an individual or team event, and eligibility requirements for regional, state, and national competitions. It lists over 50 different competitive events in business-related topics. The legend at the bottom explains the different abbreviations used for the eligibility requirements.
This document provides information about the STPM/S(E)954 Mathematics (T) syllabus, including its aims, objectives, content, assessment format, and specimen papers and assignments. The syllabus covers topics in algebra, geometry, calculus, statistics, and other areas of mathematics over three terms. It is designed to provide candidates with mathematical concepts and problem-solving skills to prepare them for university studies. The assessment consists of written papers and coursework assignments.
Efficient Solution of Two-Stage Stochastic Linear Programs Using Interior Poi... (SSA KPI)
The document describes efficient solution methods for two-stage stochastic linear programs (SLPs) using interior point methods. Interior point methods require solving large, dense systems of linear equations at each iteration, which can be computationally difficult for SLPs due to their structure leading to dense matrices. The paper reviews methods for improving computational efficiency, including reformulating the problem, exploiting special structures like transpose products, and explicitly factorizing the matrices to solve smaller independent systems in parallel. Computational results show explicit factorizations generally require the least effort.
F4 Final Sbp 2006 Math Skema P 1 & P 2 (norainisaser)
The document is a marking scheme for a Year 4 mathematics exam consisting of Paper 1 and Paper 2. It provides the answers to questions 1 through 40 for Paper 1 and a detailed marking scheme for multiple choice and structured questions for Paper 2, including the breakdown of sub-marks and full marks awarded for parts of questions. The marking scheme serves as a guide for examiners to use in a consistent manner when evaluating and scoring student responses.
F4 Final Sbp 2007 Maths Skema P 1 & P2 (norainisaser)
This document contains the marking scheme for the Mathematics Paper 1 exam for Form 4 students in Malaysia in October 2007. It includes the marking schemes for 52 multiple choice questions in Section A worth a total of 52 marks and short answer questions in Section B worth a total of 48 marks. The marking schemes provide the number of marks awarded for each part of each question.
The document contains rules and guidelines for marking the trial SPM Mathematics paper for SBP schools in 2007. It includes:
1) The marking scheme for Section A with 52 marks covering questions 1 to 10, outlining the points and marks awarded.
2) The marking scheme for Section B with 48 marks covering questions 11 to 16, including graphs and diagrams.
3) Examples of student responses with marks awarded for questions involving calculations, graphs, and geometric diagrams.
4) Guidelines specifying the level of accuracy for measurements and angles in geometric questions.
The document provides the marking scheme and examples to standardize the evaluation of the 2007 trial SPM Mathematics paper for SBP schools.
This document provides the answer key to homework #7 for CHEM 444. It includes 3 chemistry problems dealing with equations of state for gases and thermodynamic derivatives. The solutions show the steps to derive the requested equations, citing relevant equations from the textbook. A point value is given for each part of each problem. Notes are included to explain aspects of the solutions and emphasize conceptual understanding over just citing equations.
Mathematical models for a chemical reactor (Luis Rodríguez)
This document presents a mathematical model for the concentration of a chemical in a reactor. It examines both steady state and time-dependent models. For steady state, the model is an ordinary differential equation that can be solved analytically. For time dependence, the model is a partial differential equation that requires numerical solution. Two numerical methods are presented: an implicit finite difference method and the finite element method.
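The implicit finite difference route for the time-dependent model can be sketched on a 1D diffusion-reaction equation u_t = D u_xx - k u; the grid, rate constant, and initial pulse below are made-up illustration values, not the document's reactor model:

```python
import numpy as np

def implicit_step(u, D, k, dt, dx):
    """One backward-Euler step for u_t = D u_xx - k u with
    zero-concentration (Dirichlet) boundaries; unconditionally stable."""
    n = len(u)
    lap = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
           + np.diag(np.ones(n - 1), -1)) / dx**2
    A = np.eye(n) - dt * (D * lap - k * np.eye(n))
    return np.linalg.solve(A, u)

# Gaussian concentration pulse decaying by diffusion and 1st-order reaction
nx, dx, dt = 50, 0.02, 0.01
u = np.exp(-(((np.arange(nx) * dx) - 0.5) / 0.1) ** 2)
mass0 = u.sum()
for _ in range(100):
    u = implicit_step(u, D=1e-3, k=0.5, dt=dt, dx=dx)
print(u.sum() / mass0)  # total mass strictly decreases
```

The implicit system matrix is diagonally dominant, so each step is well posed and the concentration stays nonnegative regardless of the step size.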
This document contains a marking scheme for a mathematics assessment with 16 questions. It provides the question number, marking scheme, and marks awarded for each part of each question. The marking scheme includes keys to common mistakes, correct working methods, and final answers. The document aims to evaluate students' skills in topics like algebra, geometry, statistics, and problem solving.
New data structures and algorithms for post-processing large data sets and ... (Alexander Litvinenko)
In this work, we describe advanced numerical tools for working with multivariate functions and for the analysis of large data sets. These tools will drastically reduce the required computing time and the storage cost, and, therefore, will allow us to consider much larger data sets or finer meshes. Covariance matrices are crucial in spatio-temporal statistical tasks, but are often very expensive to compute and store, especially in 3D. Therefore, we approximate covariance functions by cheap surrogates in a low-rank tensor format. We apply the Tucker and canonical tensor decompositions to a family of Matérn- and Slater-type functions with varying parameters and demonstrate numerically that their approximations exhibit exponentially fast convergence. We prove the exponential convergence of the Tucker and canonical approximations in tensor rank parameters. Several statistical operations are performed in this low-rank tensor format, including evaluating the conditional covariance matrix, spatially averaged estimation variance, computing a quadratic form, determinant, trace, log-likelihood, inverse, and Cholesky decomposition of a large covariance matrix. Low-rank tensor approximations reduce the computing and storage costs essentially. For example, the storage cost is reduced from an exponential O(n^d) to a linear scaling O(drn), where d is the spatial dimension, n is the number of mesh points in one direction, and r is the tensor rank. Prerequisites for applicability of the proposed techniques are the assumptions that the data locations and measurements lie on a tensor (axes-parallel) grid and that the covariance function depends on a distance,...
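The storage figures quoted above are easy to verify by counting entries; full_storage and cp_storage are hypothetical helper names for this check:

```python
def full_storage(n, d):
    # a full order-d tensor with n points per direction: n**d entries
    return n ** d

def cp_storage(n, d, r):
    # canonical (CP) format: r rank-one terms, each d vectors of length n
    return d * r * n

# e.g. a 100^3 grid at tensor rank 10
n, d, r = 100, 3, 10
print(full_storage(n, d), cp_storage(n, d, r))  # 1000000 vs 3000
```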
This study aimed to improve detection of 2-hydroxyglutarate (2HG) by 13C NMR spectroscopy in tissue extracts from IDH-mutated gliomas. The researchers determined all 1H-13C and 13C-13C coupling constants of 2HG, found that lowering the pH to 6 improved resolution of 2HG from overlapping metabolites, and showed that using a cryogenically-cooled probe significantly improved detection of 13C-labeled 2HG in a tumor extract compared to standard probes. This will enable better monitoring of 13C labeling patterns in 2HG-producing IDH mutant gliomas.
This document summarizes research using singular value decomposition (SVD) analysis of 2D 13C-13C solid state NMR correlation spectra to determine structural information of proteins. SVD was used to fit the cross-peak intensity build up data to a sum of exponentials model with 6 terms, separating curves into specified rate ranges. Initial testing showed stable results. Further research is needed to enhance model specificity, such as increasing the coefficient density and applying it to the full spectrum. SVD analysis provides a way to observe long-range cross-peaks that are normally obscured due to spectral overlap.
The document discusses solid state nuclear magnetic resonance (NMR) spectroscopy. It provides examples of applications of solid state NMR including structure determination of organic and inorganic complexes as well as biological molecules, minerals, ceramics, polymers and more. It describes several interactions observed in solid state NMR spectra such as chemical shift anisotropy, dipole-dipole coupling, J-coupling, quadrupolar interactions and magic angle spinning which is a technique to average anisotropic interactions and improve resolution.
1) 13C NMR spectroscopy provides valuable structural information when 1H NMR is insufficient or ambiguous. It directly detects carbon atoms and gives signals based on their chemical environment rather than hydrogen bonding.
2) 13C NMR spectra contain information about the number and types of carbon atoms present based on the number of signals and their chemical shifts. The chemical shifts are influenced by factors like hybridization and electronegativity.
3) Techniques like proton decoupling and DEPT allow differentiation of carbon types like CH, CH2, and CH3 based on their signal behavior under different pulse sequences.
Nuclear magnetic resonance (NMR) spectroscopy uses the NMR phenomenon to study the physical, chemical, and biological properties of matter. NMR occurs when atomic nuclei are placed in a magnetic field and exposed to a second oscillating field. Only certain atomic nuclei experience NMR, depending on whether they have a quantum property called spin. NMR spectroscopy is valuable in chemistry for determining molecular structure. It is commonly used to map the carbon-hydrogen framework of organic molecules. More advanced NMR techniques also study protein structure and dynamics in biological chemistry.
13C NMR spectroscopy provides information about the number and types of nonequivalent carbon atoms in a molecule. It detects the number of protons bonded to each carbon and the electronic environment of the carbons. The chemical shift range for 13C NMR is much wider than for 1H NMR, from 0 to 220 ppm versus 0 to 12 ppm, making individual carbon signals easier to distinguish. Signal averaging and Fourier transform techniques improve the sensitivity of the 13C NMR spectrum. Decoupling and DEPT experiments can also provide information about the types of carbon atoms present.
Nuclear Magnetic Resonance Spectroscopy is a technique used to characterize organic molecules by identifying carbon-hydrogen frameworks. It exploits the magnetic properties of atomic nuclei when subjected to radio waves and magnetic fields. There are two main types of NMR spectroscopy: 1H NMR determines the number and type of hydrogen atoms, and 13C NMR determines the type of carbon atoms. When nuclei are placed in a magnetic field, their spins can be aligned with or against the field, producing detectable signals. Chemical shifts in these signals provide information about the molecular structure and atomic environment of the nuclei.
This document outlines a PowerPoint presentation on nuclear magnetic resonance (NMR) spectroscopy. It covers the fundamentals of NMR including spin-spin coupling, instrumentation, solvents, chemical shifts, and 2D NMR techniques. Applications discussed include structure elucidation of organic compounds and biomolecules, as well as clinical uses such as MRI. Specific NMR experiments summarized are COSY, NOESY, and HETCOR.
1. The document discusses the Finite Difference Time Domain (FDTD) method for computational electromagnetics (CEM). FDTD solves Maxwell's equations by approximating the derivatives with central finite differences and marching the solution in both space and time.
2. It provides the 1D update equations for the electric and magnetic fields in the FDTD method. The fields are discretized and interleaved in both space and time.
3. The update equations are expressed in terms of the electric and magnetic fields at previous time steps to march the solution forward in time. This allows the fields to be solved for numerically via a computer program.
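The interleaved leapfrog updates in points 1-3 can be sketched in a few lines for the 1D case; the grid size, Courant number, and Gaussian soft source are assumptions of this sketch:

```python
import numpy as np

def fdtd_1d(steps, nx=200, courant=0.5):
    """1D Yee leapfrog: E on integer grid points, H staggered half a
    cell in space and half a step in time (normalized units)."""
    ez = np.zeros(nx)
    hy = np.zeros(nx - 1)
    for t in range(steps):
        hy += courant * (ez[1:] - ez[:-1])            # H half-step
        ez[1:-1] += courant * (hy[1:] - hy[:-1])      # E update, PEC walls
        ez[nx // 2] += np.exp(-(((t - 30.0) / 10.0) ** 2))  # soft source
    return ez

field = fdtd_1d(100)
print(float(np.abs(field).max()))
```

Keeping the Courant number at or below 1 keeps the 1D scheme stable; the pulse launched at the center propagates outward in both directions.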
We start with motivation and a few examples of uncertainties. Then we discretize an elliptic PDE with uncertain coefficients and apply the TT format to the permeability, the stochastic operator, and the solution. We compare the sparse multi-index set approach with the full multi-index set in TT format.
The Tensor Train format allows us to keep the whole multi-index set, without any multi-index set truncation.
How to Generate Personalized Tasks and Sample Solutions for Anonymous Peer Re... (Mathias Magdowski)
The document describes a method for automatically generating personalized engineering tasks and sample solutions in LaTeX for anonymous peer review. Randomized circuit diagrams and current-time graphs are created for each student using PGFPlots and predefined equations. The tasks are distributed via Moodle and email, and students submit handwritten solutions in ZIP files which are then anonymously peer reviewed. The method aims to prevent plagiarism while allowing scalable peer assessment of handwritten work.
hydro chapter_4_b_pipe_network_by louy Al hami (Louy Alhamy)
This document discusses the Hardy Cross method for analyzing water distribution pipe networks. The key steps of the Hardy Cross method are:
1) Assume flows in each pipe and ensure mass balance at nodes and zero head loss around loops.
2) Calculate head losses in each pipe based on the assumed flows.
3) Calculate a flow correction factor for each loop based on the net head loss around the loop.
4) Adjust the assumed flows using the correction factors and repeat the process in an iterative manner until head losses around all loops are negligible.
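Steps 1-4 amount to a Newton-style correction per loop. A minimal single-loop sketch with head loss h = K Q|Q| (the two-pipe example and its K values are made up for illustration):

```python
def hardy_cross_single_loop(K, Q, tol=1e-10, max_iter=100):
    """One-loop Hardy Cross for h = K * Q * |Q| (exponent n = 2).

    Q holds signed flows around the loop (clockwise positive); adding
    the same correction dQ to every pipe preserves mass balance.
    """
    for _ in range(max_iter):
        h = [k * q * abs(q) for k, q in zip(K, Q)]    # head losses
        dh = [2 * k * abs(q) for k, q in zip(K, Q)]   # dh/dQ
        dq = -sum(h) / sum(dh)                        # loop correction
        Q = [q + dq for q in Q]
        if abs(dq) < tol:
            break
    return Q

# two parallel pipes carrying 10 units total between two junctions;
# clockwise, pipe 1 carries +Q1 and pipe 2 carries -(10 - Q1)
Q = hardy_cross_single_loop(K=[1.0, 4.0], Q=[5.0, -5.0])
print(Q)  # converges to [20/3, -10/3]
```

At convergence the head losses around the loop cancel (K1·Q1² = K2·Q2²), which fixes the split between the two pipes.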
NIPS2010: optimization algorithms in machine learning (zukun)
The document summarizes optimization algorithms for machine learning applications. It discusses first-order methods like gradient descent, accelerated methods like Nesterov's algorithm, and non-monotone methods like Barzilai-Borwein. Gradient descent converges at a rate of 1/k, while methods like heavy-ball, conjugate gradient, and Nesterov's algorithm can achieve faster linear or 1/k^2 convergence rates depending on the problem structure. The document provides convergence analysis and rate results for various first-order optimization algorithms applied to machine learning problems.
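The 1/k versus 1/k^2 contrast can be observed on a simple ill-conditioned quadratic; gd and nesterov below are textbook sketches, not the tutorial's own code:

```python
import numpy as np

def gd(grad, x0, L, iters):
    # plain gradient descent with step 1/L: f - f* = O(1/k)
    x = x0.copy()
    for _ in range(iters):
        x -= grad(x) / L
    return x

def nesterov(grad, x0, L, iters):
    # Nesterov's accelerated gradient: f - f* = O(1/k^2)
    x, y, t = x0.copy(), x0.copy(), 1.0
    for _ in range(iters):
        x_new = y - grad(y) / L
        t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
        y = x_new + ((t - 1) / t_new) * (x_new - x)
        x, t = x_new, t_new
    return x

# quadratic f(x) = 0.5 x^T diag(eigs) x with eigenvalues in [1e-3, 1]
d = 50
eigs = np.linspace(1e-3, 1.0, d)
grad = lambda x: eigs * x
f = lambda x: 0.5 * np.sum(eigs * x * x)
x0 = np.random.default_rng(1).standard_normal(d)

f_gd = f(gd(grad, x0, 1.0, 200))
f_nest = f(nesterov(grad, x0, 1.0, 200))
print(f_gd, f_nest)  # the accelerated method reaches a much lower value
```

The gap comes from the slow eigenmodes: plain gradient descent shrinks them by (1 - λ) per step, while the momentum term lets acceleration damp them at a rate governed by sqrt(λ).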
This document outlines an algorithm design technique called the greedy method. It discusses several problems that can be solved using greedy algorithms, including the knapsack problem, job scheduling with deadlines, minimum cost spanning trees, and optimal storage on tapes. For each problem, it provides the general greedy approach, an algorithm to solve the problem greedily, and an example to illustrate the algorithm. It also compares the Prim's and Kruskal's algorithms for finding minimum cost spanning trees.
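As one concrete instance of the greedy method, the fractional knapsack problem is solved exactly by taking items in decreasing value-to-weight ratio; the item list below is an illustrative assumption:

```python
def fractional_knapsack(items, capacity):
    """Greedy by value/weight ratio; optimal for the fractional variant
    (the 0/1 variant needs dynamic programming instead)."""
    total = 0.0
    for value, weight in sorted(items, key=lambda vw: vw[0] / vw[1],
                                reverse=True):
        take = min(weight, capacity)
        total += value * take / weight
        capacity -= take
        if capacity == 0:
            break
    return total

items = [(60, 10), (100, 20), (120, 30)]  # (value, weight) pairs
print(fractional_knapsack(items, 50))  # 240.0
```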
hydro chapter_4_b_pipe_network_by louy Al hami Louy Alhamy
This document discusses the Hardy Cross method for analyzing water distribution pipe networks. The key steps of the Hardy Cross method are:
1) Assume flows in each pipe and ensure mass balance at nodes and zero head loss around loops.
2) Calculate head losses in each pipe based on the assumed flows.
3) Calculate a flow correction factor for each loop based on the net head loss around the loop.
4) Adjust the assumed flows using the correction factors and repeat the process in an iterative manner until head losses around all loops are negligible.
NIPS2010: optimization algorithms in machine learningzukun
The document summarizes optimization algorithms for machine learning applications. It discusses first-order methods like gradient descent, accelerated methods like Nesterov's algorithm, and non-monotone methods like Barzilai-Borwein. Gradient descent converges at a rate of 1/k, while methods like heavy-ball, conjugate gradient, and Nesterov's algorithm can achieve faster linear or 1/k^2 convergence rates depending on the problem structure. The document provides convergence analysis and rate results for various first-order optimization algorithms applied to machine learning problems.
This document outlines an algorithm design technique called the greedy method. It discusses several problems that can be solved using greedy algorithms, including the knapsack problem, job scheduling with deadlines, minimum cost spanning trees, and optimal storage on tapes. For each problem, it provides the general greedy approach, an algorithm to solve the problem greedily, and an example to illustrate the algorithm. It also compares the Prim's and Kruskal's algorithms for finding minimum cost spanning trees.
This document provides an introduction to concepts and applications of Global Navigation Satellite Systems (GNSS). It outlines topics to be covered, including basic concepts, collecting geospatial data, introducing GNSS, applications and software, resources, and acknowledgements. The introduction discusses the long history of human navigation from ancient to modern times. It will cover mathematical concepts required to understand GNSS such as Taylor series expansion, Jacobian, and least squares adjustment. Older surveying techniques for collecting geospatial data involved chains, tapes, sextants, theodolites and autolevels, while modern methods include GNSS.
Fundamentals of Engineering Probability Visualization Techniques & MatLab Cas...Jim Jenkins
This four-day course gives a solid practical and intuitive understanding of the fundamental concepts of discrete and continuous probability. It emphasizes visual aspects by using many graphical tools such as Venn diagrams, descriptive tables, trees, and a unique 3-dimensional plot to illustrate the behavior of probability densities under coordinate transformations. Many relevant engineering applications are used to crystallize crucial probability concepts that commonly arise in aerospace CONOPS and tradeoffs
Localized methods for diffusions in large graphsDavid Gleich
I describe a few ongoing research projects on diffusions in large graphs and how we can create efficient matrix computations in order to determine them efficiently.
The document discusses partial differential equations (PDEs) and numerical methods for solving them. It begins by defining PDEs as equations involving derivatives of an unknown function with respect to two or more independent variables. PDEs describe many physical phenomena involving variations across space and time, such as fluid flow, heat transfer, electromagnetism, and weather prediction. The document then focuses on solving elliptic, parabolic, and hyperbolic PDEs numerically using finite difference and finite element methods. It provides examples of discretizing and solving the Laplace, heat, and wave equations to estimate unknown functions.
This document discusses low-density parity-check (LDPC) codes and their decoding using belief propagation on factor graphs. It introduces LDPC codes and their representation by sparse parity-check matrices and Tanner graphs. It describes irregular and regular LDPC codes, degree distributions, code ensembles, and decoding using belief propagation on factor graphs and the sum-product algorithm. Examples of decoding a LDPC code over a binary-input additive white Gaussian noise channel are also presented.
83rd GAMM Annual Scientific Conference
Darmstadt, 28 March 2012
ADI for Tensor Structured Equations
Thomas Mach and Jens Saak
Max Planck Institute for Dynamics of Complex Technical Systems
Computational Methods in Systems and Control Theory
Max Planck Institute Magdeburg Thomas Mach, Jens Saak, Tensor-ADI 1/24
Outline: ADI | ADI for Tensors | Numerical Results and Shifts | Conclusions
Classic ADI [Peaceman/Rachford ’55]
Developed to solve linear systems related to Poisson problems
−∆u = f in Ω ⊂ Rd , d = 1, 2
u=0 on ∂Ω.
uniform grid size h, centered differences, d = 1,
⇒ ∆_{1,h} u = h² f,
with the tridiagonal matrix
∆_{1,h} = tridiag(−1, 2, −1)   (2 on the main diagonal, −1 on the sub- and superdiagonals).
Classic ADI [Peaceman/Rachford ’55]
Developed to solve linear systems related to Poisson problems
−∆u = f in Ω ⊂ R^d, d = 1, 2,
u = 0 on ∂Ω.
uniform grid size h, 5-point difference star, d = 2,
⇒ ∆_{2,h} u = h² f,
with the block tridiagonal matrix
∆_{2,h} = blocktridiag(−I, K, −I)   and   K = tridiag(−1, 4, −1).
Classic ADI [Peaceman/Rachford ’55]
Observation
∆_{2,h} = (∆_{1,h} ⊗ I) + (I ⊗ ∆_{1,h}) =: H + V.
Solve ∆_{2,h} u = h² f =: f̃ exploiting the structure in H and V.
For certain shift parameters p_i perform
(H + p_i I) u_{i+1/2} = (p_i I − V) u_i + f̃,
(V + p_i I) u_{i+1} = (p_i I − H) u_{i+1/2} + f̃,
until u_i is good enough.
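The classic sweep can be sketched with dense matrices; a minimal numpy illustration of the 2D model problem, where n, the iteration count, and the single geometric-mean shift p are illustrative choices (practical ADI cycles through several shifts):

```python
import numpy as np

n = 20
h = 1.0 / (n + 1)
D = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # Delta_{1,h}
I = np.eye(n)
H = np.kron(D, I)          # Delta_{1,h} (x) I
V = np.kron(I, D)          # I (x) Delta_{1,h}
f = h**2 * np.ones(n * n)  # right-hand side h^2 f

# eigenvalues of Delta_{1,h} are 4 sin^2(k*pi*h/2); take the geometric
# mean of the extreme ones as a single heuristic shift
lmin = 4 * np.sin(np.pi * h / 2) ** 2
lmax = 4 * np.sin(n * np.pi * h / 2) ** 2
p = np.sqrt(lmin * lmax)

u = np.zeros(n * n)
IN = np.eye(n * n)
for _ in range(60):
    u_half = np.linalg.solve(H + p * IN, (p * IN - V) @ u + f)
    u = np.linalg.solve(V + p * IN, (p * IN - H) @ u_half + f)

res = np.linalg.norm((H + V) @ u - f)
```

Each half-step only requires a solve with H + pI or V + pI; with a reordering these are essentially 1D tridiagonal solves, which is the point of the method.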
ADI and Lyapunov Equations [Wachspress ’88]
Lyapunov Equation
F X + X Fᵀ = −G Gᵀ
Vectorized Lyapunov Equation
( (I ⊗ F) + (F ⊗ I) ) vec(X) = −vec(G Gᵀ),   with H_F := I ⊗ F and V_F := F ⊗ I.
Same structure ⇒ apply ADI
(F + p_i I) X_{i+1/2} = −G Gᵀ − X_i (Fᵀ − p_i I),
(F + p_i I) X_{i+1}ᵀ = −G Gᵀ − X_{i+1/2}ᵀ (Fᵀ − p_i I).
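In dense arithmetic the two half-steps read as follows; a minimal sketch with an artificially stable F and one fixed shift (production solvers use shift cycles and low-rank factored iterates):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 30
F = -2.0 * np.eye(n) + 0.1 * rng.standard_normal((n, n))  # Hurwitz by construction
G = rng.standard_normal((n, 2))
RHS = -G @ G.T
In = np.eye(n)

p = -2.0                      # one real shift near the spectrum of F
X = np.zeros((n, n))
for _ in range(30):
    X_half = np.linalg.solve(F + p * In, RHS - X @ (F.T - p * In))
    # second half-step, transposed so it is again a solve with F + pI
    X = np.linalg.solve(F + p * In, RHS - X_half.T @ (F.T - p * In)).T

res = np.linalg.norm(F @ X + X @ F.T + G @ G.T)
```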
Generalizing Matrix Equations
∆_{2,h} vec(X) = vec(B)
( I ⊗ ∆_{1,h} + ∆_{1,h} ⊗ I ) vec(X) = vec(B),   with H := I ⊗ ∆_{1,h}, V := ∆_{1,h} ⊗ I, u := vec(X), f := vec(B).
Written directly for the matrix X with modes a and c:
X ×_a ∆_{µ_a} + X ×_c ∆_{µ_c} = B.
Generalizing Matrix Equations
∆_{4,h} vec(X) = vec(B)
( I ⊗ I ⊗ I ⊗ ∆_{1,h} + I ⊗ I ⊗ ∆_{1,h} ⊗ I + I ⊗ ∆_{1,h} ⊗ I ⊗ I + ∆_{1,h} ⊗ I ⊗ I ⊗ I ) vec(X) = vec(B),
with the four summands denoted H, V, R, Q, and u := vec(X), f := vec(B).
Written directly for the tensor X_{abcd} with modes a, b, c, d:
X ×_a ∆_{µ_a} + X ×_b ∆_{µ_b} + X ×_c ∆_{µ_c} + X ×_d ∆_{µ_d} = B.
Generalizing ADI
( I ⊗ ∆_{1,h} + ∆_{1,h} ⊗ I ) vec(X) = vec(B),   with H := I ⊗ ∆_{1,h}, V := ∆_{1,h} ⊗ I:
(H + I ⊗ p_{i,1} I) X_{i+1/2} = (p_{i,1} I − V) X_i + B
(V + p_{i,2} I ⊗ I) X_{i+1} = (p_{i,2} I − H) X_{i+1/2} + B
( I ⊗ I ⊗ I ⊗ ∆_{1,h} + I ⊗ I ⊗ ∆_{1,h} ⊗ I + I ⊗ ∆_{1,h} ⊗ I ⊗ I + ∆_{1,h} ⊗ I ⊗ I ⊗ I ) vec(X) = vec(B),   with summands H, V, R, Q:
(H + I ⊗ I ⊗ I ⊗ p_{i,1} I) X_{i+1/4} = (p_{i,1} I − V − R − Q) X_i + B
(V + I ⊗ I ⊗ p_{i,2} I ⊗ I) X_{i+1/2} = (p_{i,2} I − H − R − Q) X_{i+1/4} + B
(R + I ⊗ p_{i,3} I ⊗ I ⊗ I) X_{i+3/4} = (p_{i,3} I − H − V − Q) X_{i+1/2} + B
(Q + p_{i,4} I ⊗ I ⊗ I ⊗ I) X_{i+1} = (p_{i,4} I − H − V − R) X_{i+3/4} + B
Goal
Solve AX = B
A = I ⊗ I ⊗ · · · ⊗ I ⊗ I ⊗ A1 +
I ⊗ I ⊗ · · · ⊗ I ⊗ A2 ⊗ I +
... +
Ad ⊗ I ⊗ · · · ⊗ I ⊗ I ⊗ I
B is given in tensor train decomposition
⇒ X is sought in tensor train decomposition.
Tensor Trains [Oseledets, Tyrtyshnikov ’09]
T(i_1, i_2, ..., i_d) = Σ_{α_1,...,α_{d−1}=1}^{r_1,...,r_{d−1}} G_1(i_1, α_1) G_2(α_1, i_2, α_2) ··· G_j(α_{j−1}, i_j, α_j) ··· G_{d−1}(α_{d−2}, i_{d−1}, α_{d−1}) G_d(α_{d−1}, i_d).
[Diagram: chain of cores G_1(i_1, α_1) -α_1- G_2(α_1, i_2, α_2) -α_2- ··· -α_{d−1}- G_d(α_{d−1}, i_d)]
Tensor Trains [Oseledets, Tyrtyshnikov ’09]
Tensor trains are
computable, and
require only O(d n r²) storage, with TT-rank r and T ∈ R^{n^d}.
Canonical representation
T(i_1, i_2, ..., i_d) = Σ_α G_1(i_1, α) ··· G_d(i_d, α)
Tucker decomposition
T(i_1, i_2, ..., i_d) = Σ_{α_1,...,α_d} C(α_1, ..., α_d) G_1(i_1, α_1) ··· G_d(i_d, α_d)
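A tensor train can be evaluated entrywise without ever forming the n^d array; a minimal numpy sketch (the sizes d, n and the uniform rank r are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(1)
d, n, r = 6, 4, 3
cores = [rng.standard_normal((n, r))]
cores += [rng.standard_normal((r, n, r)) for _ in range(d - 2)]
cores += [rng.standard_normal((r, n))]
# storage: on the order of d*n*r^2 numbers instead of n**d = 4096 here,
# and the gap widens exponentially with d

def tt_entry(cores, idx):
    """Evaluate T(i_1, ..., i_d) by contracting the core chain."""
    v = cores[0][idx[0], :]
    for G, i in zip(cores[1:-1], idx[1:-1]):
        v = v @ G[:, i, :]          # contract over the rank index alpha
    return v @ cores[-1][:, idx[-1]]

# reference: assemble the full tensor (only feasible for tiny examples)
T = cores[0]
for G in cores[1:-1]:
    T = np.tensordot(T, G, axes=([-1], [0]))
T = np.tensordot(T, cores[-1], axes=([-1], [0]))

idx = (1, 2, 0, 3, 1, 2)
err = abs(T[idx] - tt_entry(cores, idx))
```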
Tensor Trains [Oseledets, Tyrtyshnikov ’09]
Applying (I ⊗ ··· ⊗ I ⊗ A_1) to T changes only the first core:
G̃_1(β, α_1) = Σ_{i_1} A_1(β, i_1) G_1(i_1, α_1),   i.e.   G̃_1 = A_1 G_1,
since
T(i_1, i_2, ..., i_d) ×_1 A_1 = Σ_{α_1,...,α_{d−1}} Σ_{i_1} [A_1]_{β,i_1} G_1(i_1, α_1) G_2(α_1, i_2, α_2) ··· G_{d−1}(α_{d−2}, i_{d−1}, α_{d−1}) G_d(α_{d−1}, i_d).
Tensor Trains [Oseledets, Tyrtyshnikov ’09]
The same holds for (I ⊗ ··· ⊗ I ⊗ A_1)^{−1} T, with G̃_1 = A_1^{−1} G_1:
T(i_1, i_2, ..., i_d) ×_1 A_1^{−1} = Σ_{α_1,...,α_{d−1}} Σ_{i_1} [A_1^{−1}]_{β,i_1} G_1(i_1, α_1) G_2(α_1, i_2, α_2) ··· G_{d−1}(α_{d−2}, i_{d−1}, α_{d−1}) G_d(α_{d−1}, i_d).
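The core-wise application can be checked numerically; a small sketch with arbitrary illustrative sizes replaces G_1 by A_1 G_1 and compares against mode-1 multiplication of the assembled full tensor:

```python
import numpy as np

rng = np.random.default_rng(3)
d, n, r = 4, 3, 2
cores = [rng.standard_normal((n, r))]
cores += [rng.standard_normal((r, n, r)) for _ in range(d - 2)]
cores += [rng.standard_normal((r, n))]

def tt_full(cores):
    """Assemble the full tensor from the TT cores (tiny examples only)."""
    T = cores[0]
    for G in cores[1:-1]:
        T = np.tensordot(T, G, axes=([-1], [0]))
    return np.tensordot(T, cores[-1], axes=([-1], [0]))

A1 = rng.standard_normal((n, n))
cores_new = [A1 @ cores[0]] + cores[1:]   # G1~ = A1 G1, other cores untouched

# reference: T x_1 A1 computed on the full tensor
ref = np.tensordot(A1, tt_full(cores), axes=([1], [0]))
err = np.linalg.norm(tt_full(cores_new) - ref)
```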
Algorithm
Input: {A_1, ..., A_d}, tensor train B, accuracy ε
Output: tensor train X with AX = B
forall j ∈ {1, ..., d} do
    X_j^(0) := zeros(n, 1, 1)
end
while r(i) > ε do
    Choose shift p_i
    forall k ∈ {1, ..., d} do
        X^(i+k/d) := ( B + p_i X^(i+(k−1)/d) − Σ_{j=1, j≠k}^{d} X^(i+(k−1)/d) ×_j A_j ) ×_k (A_k + p_i I)^{−1}
    end
    r(i) := ‖ B − Σ_{j=1}^{d} X^(i) ×_j A_j ‖
end
Here X ×_j A_j applies (I ⊗ ··· ⊗ I ⊗ A_j ⊗ I ⊗ ··· ⊗ I) to X, which only changes the j-th core.
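The sweep structure can be illustrated for d = 3 with dense numpy tensors in place of tensor trains; this sketch uses no TT truncation and a single hand-picked shift p, so it shows only the iteration, not the complexity gains:

```python
import numpy as np

def mode_mult(X, A, k):
    """X x_k A: apply the matrix A along mode k of the tensor X."""
    return np.moveaxis(np.tensordot(A, X, axes=([1], [k])), 0, k)

n, d = 8, 3
D = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)  # Delta_{1,h}-type factors
As = [D, D, D]
B = np.zeros((n, n, n))
B[-1, -1, -1] = 1.0

p = 6.0                                   # single hand-picked positive shift
Minv = np.linalg.inv(D + p * np.eye(n))   # (A_k + p I)^{-1}, same for all k here

X = np.zeros((n, n, n))
for _ in range(200):
    for k in range(d):                    # one ADI sweep over all modes
        rhs = B + p * X
        for j in range(d):
            if j != k:
                rhs = rhs - mode_mult(X, As[j], j)
        X = mode_mult(rhs, Minv, k)

res = np.linalg.norm(B - sum(mode_mult(X, As[j], j) for j in range(d)))
```

In the TT version, each mode-k solve acts only on the k-th core, exactly as in the rank-1 application shown on the previous slides.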
Eigenvalues
A = I ⊗ ··· ⊗ I ⊗ A_1 + I ⊗ ··· ⊗ I ⊗ A_2 ⊗ I + ··· + A_d ⊗ I ⊗ ··· ⊗ I
Stéphanos’ theorem:
⇒ λ_i(A) = λ_{i_1}(A_1) + λ_{i_2}(A_2) + ··· + λ_{i_d}(A_d),
with i = i_1 + i_2 n_1 + ··· + i_d Π_{j=1}^{d−1} n_j.
AX = B ⇔ Σ_{j=1}^{d} X ×_j A_j = B
A is regular ⇔ λ_i(A) ≠ 0 ∀i ⇐ A_i Hurwitz ∀i
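Stéphanos’ theorem is easy to verify numerically for a small Kronecker sum (the sizes and random matrices are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(2)
ns = [2, 3, 4]
As = [rng.standard_normal((n, n)) for n in ns]

# assemble A = sum_j I (x) ... (x) A_j (x) ... (x) I
N = int(np.prod(ns))
A = np.zeros((N, N))
for j in range(len(As)):
    factors = [np.eye(n) for n in ns]
    factors[j] = As[j]
    term = factors[0]
    for fac in factors[1:]:
        term = np.kron(term, fac)
    A += term

sums = [l1 + l2 + l3
        for l1 in np.linalg.eigvals(As[0])
        for l2 in np.linalg.eigvals(As[1])
        for l3 in np.linalg.eigvals(As[2])]
eigs = np.linalg.eigvals(A)
# every sum of factor eigenvalues must appear among the eigenvalues of A
err = max(min(abs(ev - s) for ev in eigs) for s in sums)
```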
Lemma [Grasedyck ’04]
The tensor equation
Σ_{j=1}^{d} X ×_j A_j = B
with A_k Hurwitz ∀k has the solution
X = − ∫_0^∞ B ×_1 exp(A_1 t) ×_2 ··· ×_d exp(A_d t) dt.
Proof sketch: for Z(t) = B ×_1 exp(A_1 t) ×_2 ··· ×_d exp(A_d t) one has
Ż(t) = Σ_{j=1}^{d} Z(t) ×_j A_j   and   Z(∞) − Z(0) = ∫_0^∞ Ż(t) dt,
hence
0 − B = Σ_{j=1}^{d} ( ∫_0^∞ Z(t) dt ) ×_j A_j.
Theorem
{A1, . . . , Ad} ⇒ A,  Λ(A) ⊂ [−λmax, −λmin] ⊕ ı [−µ, µ] ⊂ C⁻.
Let k ∈ N and use the quadrature points and weights
h_st := π/√k,   t_j := log( e^{j h_st} + √(1 + e^{2 j h_st}) ),   w_j := h_st / √(1 + e^{−2 j h_st}).
Then the solution X can be approximated by
X̃(i1, i2, . . . , id) = − Σ_{α1,...,α_{d−1}=1}^{r1,...,r_{d−1}} H1(i1, α1) · · · Hd(α_{d−1}, id),
with
H_p(α_{p−1}, i_p, α_p) := Σ_{j=−k}^{k} (2 w_j / λmin) Σ_{β_p} [ e^{(2 t_j / λmin) A_p} ]_{i_p, β_p} G_p(α_{p−1}, β_p, α_p),
and with the approximation error
‖X̃ − X‖₂ ≤ C_st · (2 µ λmin⁻¹ + 1) / (π λmin) · e^{−π √k} · ∫_Γ ‖(λ I − 2A/λmin)⁻¹‖₂ dΓ_λ · ‖B‖₂,
extending [Grasedyck '04] (X and B of low Kronecker rank) to low TT-rank.
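The points t_j and weights w_j above form a sinc-type rule for ∫₀^∞ f(t) dt: the substitution t = arcsinh(e^u) turns it into a trapezoidal rule on the real line. A scalar sanity check, with helper names of my own:

```python
import math

def sinc_rule(k):
    """Points/weights with sum_j w_j f(t_j) ≈ ∫_0^∞ f(t) dt."""
    h = math.pi / math.sqrt(k)
    rule = []
    for j in range(-k, k + 1):
        e = math.exp(j * h)
        t = math.log(e + math.sqrt(1.0 + e * e))        # = arcsinh(e^{jh})
        w = h / math.sqrt(1.0 + math.exp(-2.0 * j * h))
        rule.append((t, w))
    return rule

# Scalar version of the tensor equation: a·x = b with a = -1, b = 1,
# whose integral representation gives x = -∫_0^∞ e^{-t} dt = -1.
k = 36
x = -sum(w * math.exp(-t) for t, w in sinc_rule(k))
```

The quadrature error decays like e^{−π√k}, matching the exponential factor in the bound of the theorem.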
Approximation Accuracy
[Figure: storage (in 10⁴ · double) and truncation error versus iteration (0–30), comparing a constant truncation error of 10⁻² with a tightened truncation error.]
Example: Laplace – A_i = Δ¹_{1,11}
B = ( 0 0 . . . 0 1 )
Shifts:
p_i := e1(∗1) + · · · + ed(∗d) — a randomly chosen eigenvalue of A, i.e. a sum of randomly chosen eigenvalues of the A_j
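A sketch of this setup (my reading of Δ¹_{1,11} as a 1D finite-difference Laplacian on 11 grid points; the scaling is an assumption):

```python
import numpy as np

def laplacian_1d(n):
    """Tridiagonal 1D finite-difference Laplacian (Dirichlet); eigenvalues lie in (-4, 0)."""
    return (np.diag(-2.0 * np.ones(n))
            + np.diag(np.ones(n - 1), 1)
            + np.diag(np.ones(n - 1), -1))

rng = np.random.default_rng(0)
d, n = 3, 11
As = [laplacian_1d(n) for _ in range(d)]
# Shift p_i: a randomly chosen eigenvalue of A, i.e. a sum of randomly
# chosen eigenvalues of the A_j (by Stéphanos' theorem).
shift = sum(rng.choice(np.linalg.eigvalsh(Aj)) for Aj in As)
```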
Numerical Results – A_i = Δ¹_{1,11}
d      t in s       residual    mean(#it)
2      3.887e−01    7.015e−10   112.8
5      5.398e+00    7.467e−10   45.8
8      6.007e+00    6.936e−10   12.8
10     3.662e+00    7.685e−10   6.8
25     3.142e+01    2.437e−10   5.0
50     2.268e+02    2.049e−10   5.0
75     7.192e+02    4.036e−10   5.0
100    1.700e+03    1.864e−10   5.0
150    5.538e+03    1.801e−10   5.0
200    1.280e+04    1.472e−10   5.0
250    2.499e+04    1.816e−10   5.0
300    4.298e+04    2.535e−10   5.0
500    1.952e+05    2.039e−10   5.0
Numerical Results – A_i = Δ¹_{1,11} (computation time in s)
               sparse                            dense
d     TADI     MESS       Penzl's sh.            lyap
2     0.310    0.0006     0.024      0.003       0.0003     0.0005
4     3.130    0.1695     0.011      0.049       6.331      0.012
6     8.147    —          0.076      0.094       —          7.17
8     5.458    —          5.863      1.097       —          13698.2
10    5.306    —          3445.523   249.464     —          —
Numerical Results – A_i = Δ¹_{1,11}
[Figure: computation time in s (log scale, 10⁻²–10⁵) versus dimension d (10–300) for Tensor ADI, sparse (MESS, Penzl's shifts), and dense (lyap).]
Single Shift and Convergence
A = I ⊗ · · · ⊗ I ⊗ A1 + I ⊗ · · · ⊗ I ⊗ A2 ⊗ I + · · · + Ad ⊗ I ⊗ · · · ⊗ I
We assume Λ(A_k) ⊂ R⁻.
Error Propagation, Single Shift
‖G1‖₂ ≤ max_{λ_k ∈ Λ(A_k), k=1,...,d} ∏_{k=1}^{d} |p − Σ_{l≠k} λ_l| / |p + λ_k| = max_{λ_k ∈ Λ(A_k), k=1,...,d} ∏_{k=1}^{d} |1 − (Σ_{l=1}^{d} λ_l) / (p + λ_k)|.
If ‖G1‖₂ < 1, then the ADI iteration converges. Admissible shifts:
p < 0 and p > −∞,
p < λ_i(A) = Σ_{k=1}^{d} λ_k(A_k) ∀i,
Lyapunov case (A_k = A_0 ∀k): p < ((d − 2)/2) λmin(A_0), which for d = 2 gives p < ((2 − 2)/2) λmin(A_0) = 0.
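With the error factor as reconstructed above, the worst-case contraction of one single-shift sweep can be evaluated exhaustively for a toy Lyapunov-type problem (spectrum {−1, −2}, d = 3; all names are mine):

```python
import itertools

# Toy check of the single-shift contraction bound: for an eigenvalue tuple
# (λ1, ..., λd), one sweep scales the error by ∏_k |1 - (Σ_l λ_l)/(p + λ_k)|.
spec, d = [-1.0, -2.0], 3

def sweep_factor(p):
    """Worst-case error reduction of one single-shift sweep over all tuples."""
    worst = 0.0
    for lams in itertools.product(spec, repeat=d):
        s = sum(lams)
        f = 1.0
        for lam in lams:
            f *= abs(1.0 - s / (p + lam))
        worst = max(worst, f)
    return worst

print(sweep_factor(-5.0))   # a sufficiently negative shift contracts
print(sweep_factor(-0.5))   # too mild a shift: the worst-case factor exceeds 1
```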
Shifts
Min-Max-Problem
min_{{p_{1,1}, . . . , p_{ℓ,d}} ⊂ C}  max_{λ_k ∈ Λ(A_k) ∀k}  ∏_{i=1}^{ℓ} ∏_{k=1}^{d} |p_{i,k} − Σ_{j≠k} λ_j| / |p_{i,k} + λ_k|
Min-Max-Problem, Lyapunov case (A_k = A_0 ∀k, A_0 Hurwitz)
min_{{p_1, . . . , p_ℓ} ⊂ C}  max_{λ_k ∈ Λ(A_0) ∀k}  ∏_{i=1}^{ℓ} ∏_{k=1}^{d} |p_i − Σ_{j≠k} λ_j| / |p_i + λ_k|
λ_k = λ_0 ∀k
Penzl's idea: {p_1, . . . , p_ℓ} ⊂ (d − 1) Λ(A_0)
Random Example
seed := 1;
R := rand(10);
R := R + Rᵀ;
R := R − (λmin(R) − 0.1) I;
A0 := −R;
Λ(A0) = {−0.1000, −0.2250, −1.1024, −1.7496, −2.0355, −2.4402, −3.1330, −3.3961, −3.9347, −11.9713}
⇒ The random shifts do not lead to convergence. Instead:
p0 = λ10(A0)(d − 1)
p1 = λ9(A0)(d − 1)
p2 = λ8(A0)(d − 1)
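The construction can be mirrored in NumPy (note: NumPy's generator differs from MATLAB's rand, so the spectrum on the slide will not be reproduced; only the recipe and the Penzl-style shifts are illustrated):

```python
import numpy as np

rng = np.random.default_rng(1)           # NumPy seed, not MATLAB's seed := 1
R = rng.random((10, 10))
R = R + R.T                              # symmetrize
R = R - (np.linalg.eigvalsh(R).min() - 0.1) * np.eye(10)   # enforce λ_min(R) = 0.1
A0 = -R                                  # Hurwitz: Λ(A0) ⊂ (-∞, -0.1]
d = 10
lams = np.linalg.eigvalsh(A0)            # ascending eigenvalues
shifts = (d - 1) * lams[::-1]            # Penzl-style shifts (d-1)·Λ(A0),
                                         # starting with the eigenvalue nearest 0
```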
Numerical Results – A_i = −R
d      t in s      residual     #it
2      2.7673      9.1353e−09   219
5      7.8942      9.6503e−09   98
8      18.9964     9.8650e−09   84
10     18.4739     7.5746e−09   58
15     27.5661     5.0619e−09   40
20     32.2409     4.9971e−09   32
25     40.2462     5.1732e−09   29
50     76.3225     7.4093e−09   14
75     159.6627    3.2629e−09   10
100    436.6120    9.1137e−09   11
Numerical Results – A_i = −R
d      t in s      t_dmrg in s   #it
2      2.7673      0.0148        219
5      7.8942      2.5576        98
8      18.9964     5.4536        84
10     18.4739     5.5852        58
15     27.5661     6.3068        40
20     32.2409     7.4044        32
25     40.2462     8.3371        29
50     76.3225     11.8840       14
75     159.6627    18.0581       10
100    436.6120    28.8515       11
Conclusions and Outlook
We have seen
a generalization of the ADI method,
capable of solving tensor Lyapunov and Sylvester equations,
producing solutions of low TT-rank.
Open questions:
more sophisticated shift strategies and
why is the DMRG solver so much faster?
Thank you for your attention.