Just some ideas on how low-rank matrices and tensors can be useful in spatial and environmental statistics, where one usually has to deal with very large data sets.
This document provides an overview of key concepts in calculus including the definite integral, properties of definite integrals, the Mean Value Theorem for Integrals, the Average Value Theorem, the Fundamental Theorem of Calculus parts 1 and 2, and the Trapezoidal Rule for approximating definite integrals. It defines integrals, discusses how to evaluate them on a TI-84 calculator, and lists properties such as additivity, constant multiples, and order of integration. It also introduces concepts like finding the average value of a function over an interval using the integral, and relating the derivative of an integral to the original function.
The document defines Riemann sums and definite integrals. Riemann sums approximate the area under a function curve between two points by dividing the interval into subintervals and evaluating the function at sample points in each. The definite integral is defined as the limit of Riemann sums as the number of subintervals approaches infinity. Geometrically, the definite integral represents the net area between the function curve and x-axis over the interval.
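The limit process described above can be sketched numerically. The function f(x) = x^2 on [0, 1] is an illustrative choice (not from the summarized document); its exact integral is 1/3, and the left-endpoint sums approach it as n grows.

```python
# Left-endpoint Riemann sums for f(x) = x^2 on [0, 1]; the exact integral is 1/3.

def riemann_left(f, a, b, n):
    """Left-endpoint Riemann sum with n equal subintervals."""
    dx = (b - a) / n
    return sum(f(a + i * dx) for i in range(n)) * dx

f = lambda x: x * x
approximations = [riemann_left(f, 0.0, 1.0, n) for n in (10, 100, 1000)]
# 0.285, 0.32835, 0.3328335: approaching 1/3 as n grows
```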
The trapezoidal rule is used to approximate the area under a curve by dividing it into trapezoids. It takes the average of the function values at the beginning and end of each sub-interval. The area is calculated as the sum of the areas of each trapezoid multiplied by the width of the sub-interval. An example calculates the area under y = 1 + x^3 from 0 to 1 using n = 4 sub-intervals, giving an approximate result of 1.265625. The document also provides an example of using the trapezoidal rule with n = 8 sub-intervals to estimate the area under the curve of y = x from 0 to 3.
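A minimal sketch of the composite trapezoidal rule, recomputing the y = 1 + x^3 example on [0, 1] (the helper name is illustrative):

```python
# Composite trapezoidal rule, recomputing the y = 1 + x^3 example on [0, 1].

def trapezoid(f, a, b, n):
    """Approximate the integral of f over [a, b] with n trapezoids."""
    h = (b - a) / n
    inner = sum(f(a + i * h) for i in range(1, n))
    return h * ((f(a) + f(b)) / 2 + inner)

estimate = trapezoid(lambda x: 1 + x ** 3, 0.0, 1.0, 4)
# n = 4 gives 1.265625; the exact value of the integral is 1.25
```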
This document provides an introduction to quadratic functions. It defines the standard form of a quadratic function as f(x) = ax^2 + bx + c, and shows how to graph simple quadratic functions like f(x) = x^2 by creating a table of x and y-values. It also introduces key concepts for quadratic functions like domain, range, vertex, axis of symmetry, and maximum/minimum values.
1. The document provides instructions for students to use a graphing calculator application called Nspire to explore and analyze graphs of quadratic equations.
2. Students are asked to vary the values of a, b, and c in different quadratic equations and record the shape of the graph, location of maximum/minimum points, and equation of the line of symmetry.
3. The summary explains that graphs of quadratic equations with a positive coefficient of x^2 open up and have a minimum point, while those with a negative coefficient of x^2 open down and have a maximum point. The graph is always symmetrical about the line of symmetry passing through the maximum or minimum point.
This document is a 4-page exam for the course BCS-012 Basic Mathematics. It contains 10 questions testing various math skills. Question 1 has 5 sub-questions, and questions 2 through 5 each have between 3 and 5 sub-questions. The questions cover topics such as algebra, calculus, vectors, matrices, and linear programming. Students are instructed to answer question 1 and any 3 of the remaining questions.
The document discusses different alignment techniques for sequences, including affine gap penalties, where the penalty for a gap of length l equals the gap-open penalty plus the gap-extend penalty multiplied by (l - 1). It provides an example where the Needleman-Wunsch algorithm produces a counterintuitive alignment, and discusses multiple sequence alignment techniques, such as progressive alignment and iterative methods, which are useful but computationally difficult.
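The affine gap penalty described above can be written as a one-line function; the parameter names and values are illustrative, not from the summarized document.

```python
# Affine gap penalty as described above: a gap of length l costs
# gap_open + gap_extend * (l - 1). Parameter values are illustrative.

def affine_gap_penalty(length, gap_open, gap_extend):
    """Penalty for a gap of the given length (length >= 1)."""
    if length < 1:
        raise ValueError("gap length must be at least 1")
    return gap_open + gap_extend * (length - 1)

penalty = affine_gap_penalty(5, gap_open=10, gap_extend=1)   # 10 + 1*4 = 14
```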
This document contains 3 sections describing quadratic functions f(x) with their vertex, y-intercept, zeros, domain, and range. The first function is f(x) = 10 - 3x - x^2, the second is f(x) = 2x^2 - 12x, and the third is f(x) = (2 - x)(5 + x). For each function, the document lists the relevant properties to be determined but does not show the calculations or results.
This document contains a 7 page exam for the course CS-601: Differential and Integral Calculus with Applications. The exam contains 8 questions testing a variety of calculus concepts:
1) Part a contains 6 multiple choice questions testing derivatives, integrals, limits, and monotonicity. Part b contains 6 fill in the blank questions testing derivatives, integrals, and equations of tangents.
2) Questions 2-5 contain additional multiple choice or short answer problems testing continuity, derivatives, integrals, Rolle's theorem, and partial derivatives.
3) Questions 6-8 contain free response problems on geometry, differential equations, and approximating an area using Simpson's rule. The exam tests a comprehensive understanding of
The document is a past exam paper for the term-end examination in Computer Oriented Numerical Techniques. It contains 6 questions testing various numerical analysis techniques including interpolation, root finding using bisection, Newton-Raphson, and Regula Falsi methods, solving differential equations using Runge-Kutta and solving systems of equations using Gauss-Jordan elimination, Jacobi and Gauss-Seidel iteration methods. Students are required to answer 4 out of the 6 questions in the paper.
This document provides instructions for graphing trigonometric transformations in 3 steps: 1) Determine the a, b, c, and d values from the function's factored form. 2) Draw the median position and amplitude. 3) Determine the period and mark points to graph the wave-like function. Examples graph y=3sin(2x)-1, f(x)=sin(1/2x+1), and f(x)=2cos(3x)-2.
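The step-1 values determine the picture drawn in steps 2 and 3. A minimal sketch for y = a sin(bx + c) + d, applied to the y = 3sin(2x) - 1 example (helper name illustrative):

```python
import math

# Reading off amplitude, period, and median position (midline)
# for y = a*sin(b*x + c) + d, applied to y = 3sin(2x) - 1.

def sin_params(a, b, c, d):
    return {"amplitude": abs(a), "period": 2 * math.pi / abs(b), "midline": d}

params = sin_params(3, 2, 0, -1)   # amplitude 3, period pi, midline y = -1
```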
The document defines a definite integral as the integral of a function over a bounded interval from a to b, written as ∫f(x)dx from a to b. This represents the area under the curve of the function f(x) between the bounds a and b. Several examples are provided of calculating definite integrals to find the area under curves over given intervals using the Fundamental Theorem of Calculus. It is noted that area values cannot be negative.
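A small sketch of the Fundamental Theorem of Calculus as used here: the definite integral of f from a to b equals F(b) - F(a) for an antiderivative F. The choice f(x) = x^2 on [1, 3] is illustrative, not one of the document's examples.

```python
# Fundamental Theorem of Calculus: integral of f from a to b is F(b) - F(a).
# Illustration with f(x) = x^2 and antiderivative F(x) = x^3 / 3.

def F(x):
    return x ** 3 / 3

area = F(3) - F(1)   # 27/3 - 1/3 = 26/3
```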
1) The document provides notes from Lesson 28 including assignments that are due, warm up problems, definitions of scientific notation and coefficient, examples of writing numbers in standard and scientific notation.
2) Set 28 evens homework is due on October 31st and asks if a test was signed.
3) Scientific notation is defined as a method of writing a number as a decimal number times a power of 10, and the coefficient is the decimal number that must have one non-zero digit.
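The definition above can be turned into a short routine. This sketch handles positive inputs only, and the names are illustrative; it recovers the coefficient and the power of 10.

```python
import math

# Sketch of the definition: write a positive number as
# coefficient * 10**power with 1 <= coefficient < 10.

def to_scientific(x):
    if x <= 0:
        raise ValueError("this sketch handles positive numbers only")
    power = math.floor(math.log10(x))
    coefficient = x / 10 ** power
    return coefficient, power

coeff, power = to_scientific(42500.0)   # 4.25 * 10**4
```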
This document describes performing extreme value analysis on daily precipitation data from Fort Collins, Colorado from 1900 to 1999 using R. It first reads in and plots the data, summarizing seasonal variations. It then performs two extreme value analysis approaches: the block maxima approach, which fits a generalized extreme value distribution to summer maximum daily precipitation values within blocks; and the peak over threshold approach, which fits a generalized Pareto distribution to values exceeding a threshold. It estimates return levels such as the 100-year event and calculates confidence intervals.
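The two data reductions described above (block maxima for the GEV fit, threshold excesses for the generalized Pareto fit) can be sketched as follows. The original analysis is done in R, so this Python fragment with synthetic data is purely illustrative.

```python
import random

# Synthetic stand-in for a century of daily data: the two reductions are
# yearly block maxima (fed to a GEV fit) and excesses over a high
# threshold (fed to a generalized Pareto fit).

random.seed(0)
daily = [random.expovariate(1.0) for _ in range(100 * 365)]  # 100 "years"

# Block maxima approach: keep one maximum per 365-day block.
block_maxima = [max(daily[y * 365:(y + 1) * 365]) for y in range(100)]

# Peak-over-threshold approach: keep excesses above a high threshold.
threshold = 5.0
excesses = [x - threshold for x in daily if x > threshold]
```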
This document provides examples of a student's work factorizing polynomials with 2, 3, and 4 terms. It outlines the steps to factor polynomials for each case. For a 2 term polynomial, the student looks for the difference of two perfect squares and writes the factored form as two binomials. For a 3 term polynomial, the student uses the "diamond method" to find two numbers whose product and sum match the coefficients. For a 4 term polynomial, the student groups like terms and finds the greatest common factor of each group.
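The "diamond method" step reduces to a small search: find two integers whose product is a*c and whose sum is b. A brute-force sketch (illustrative, not the student's worked examples):

```python
# Brute-force "diamond method": find two integers whose product is a*c
# and whose sum is b, as used when factoring a*x^2 + b*x + c.

def diamond(a, b, c):
    target = a * c
    for m in range(-abs(target), abs(target) + 1):
        if m != 0 and target % m == 0 and m + target // m == b:
            return m, target // m
    return None

pair = diamond(1, 5, 6)   # x^2 + 5x + 6: the numbers are 2 and 3
```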
1. The document is about Taylor polynomials for the function x e^-x.
2. It gives the Taylor series expansion for x e^-x and lists the first 20 Taylor polynomials P1(x) through P20(x).
3. Readers are asked to graph the function x e^-x along with its Taylor polynomials of varying degrees to compare how well the polynomials approximate the original function.
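Using the series x e^(-x) = sum over k >= 0 of (-1)^k x^(k+1) / k!, the polynomials P_n can be evaluated directly; a minimal sketch (function name illustrative):

```python
import math

# Taylor polynomials of f(x) = x * e^(-x) about 0, using the series
# x*e^(-x) = sum_{k>=0} (-1)^k * x^(k+1) / k!.

def taylor_poly(x, degree):
    """Evaluate the Taylor polynomial of x*e^(-x) of the given degree."""
    total = 0.0
    for k in range(degree):              # the term for index k has degree k + 1
        total += (-1) ** k * x ** (k + 1) / math.factorial(k)
    return total

# P20 already matches the function closely for moderate x.
approx = taylor_poly(1.5, 20)
exact = 1.5 * math.exp(-1.5)
```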
The Extreme Value Theorem states that if a function is continuous on a closed interval, then it has a global maximum and minimum value on that interval. These global extrema can only occur at critical points or endpoints. The document provides an example of finding the global max and min of the function f(x)=2x-x^2 by identifying critical numbers and endpoint values and comparing them.
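The closed-interval method above can be sketched in a few lines. The summary does not state the interval for the f(x) = 2x - x^2 example, so [0, 3] is assumed here purely for illustration.

```python
# Closed-interval method for f(x) = 2x - x^2; the interval [0, 3] is an
# assumption made for illustration (the summary does not state it).

def f(x):
    return 2 * x - x * x

# f'(x) = 2 - 2x vanishes at x = 1; compare the critical point and endpoints.
candidates = [0.0, 1.0, 3.0]
values = {x: f(x) for x in candidates}
x_max = max(values, key=values.get)   # x = 1, f(1) = 1 is the global maximum
x_min = min(values, key=values.get)   # x = 3, f(3) = -3 is the global minimum
```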
This document provides four worked examples of sketching the graphs of reciprocal functions of the form 1/(ax + b). Each example finds the horizontal and vertical asymptotes and states the domain and range.
Algebra lesson 4.2 zeroes of quadratic functions (pipamutuc)
This document provides information about quadratic functions and solving for their zeroes (x-intercepts). It discusses factoring quadratic expressions, using the zero product property to set each factor equal to zero. It also introduces the quadratic formula as a way to solve quadratic equations that are not factorable. There is an example of using the quadratic formula to find the zeroes of the function f(x)=x^2 - 3x - 1. The document concludes with practice problems for students to solve for the zeroes of various quadratic functions.
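The quadratic-formula step can be sketched directly for the example f(x) = x^2 - 3x - 1 (helper name illustrative):

```python
import math

# Quadratic formula applied to the example f(x) = x^2 - 3x - 1.

def quadratic_roots(a, b, c):
    disc = b * b - 4 * a * c
    if disc < 0:
        raise ValueError("no real zeroes")
    root = math.sqrt(disc)
    return (-b - root) / (2 * a), (-b + root) / (2 * a)

r1, r2 = quadratic_roots(1, -3, -1)   # (3 - sqrt(13))/2 and (3 + sqrt(13))/2
```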
The document contains 20 multiple choice questions from an exam for the Brazilian Naval Academy in 2016. The questions cover topics such as systems of equations, probability, geometry, limits, integrals, and other calculus and math concepts.
This document provides steps to graph functions of the form y = ax^2 + bx + c. It works through an example problem, graphing y = 2x^2 - 8x + 6. The steps are to: 1) identify the coefficients a, b, and c, 2) find the vertex by calculating x = -b/(2a) and the corresponding y-value, 3) draw the axis of symmetry at x, 4) plot other points and 5) draw the parabola through the points. It then provides guided practice problems for students to practice graphing quadratic functions using the same steps.
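Step 2, the vertex computation x = -b/(2a), can be checked for the worked example y = 2x^2 - 8x + 6 (helper name illustrative):

```python
# Vertex of y = a*x^2 + b*x + c: x = -b/(2a), then substitute back for y.

def vertex(a, b, c):
    x = -b / (2 * a)
    return x, a * x * x + b * x + c

vx, vy = vertex(2, -8, 6)   # the worked example: vertex at (2, -2)
```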
This document provides a tutorial on topics related to calculus including:
1) Differentiating various functions and finding points where the gradient is zero
2) Evaluating definite integrals of functions including trigonometric, exponential, and rational functions
3) Finding areas bounded by curves, axes, and lines by evaluating definite integrals
4) Sketching graphs of functions and finding relevant information like minimum/maximum points
5) Finding equations of tangents and normals to curves at given points
The document provides a solution to find the extreme values of the function f(x) = 2x^3 - 15x^2 + 24x + 7 on the interval [0, 6]. It outlines a two-step process: first, find the critical points by solving f'(x) = 0, which gives x = 1 and x = 4; second, calculate the y-values at these critical points and at the endpoints and compare them. The maximum is f(6) = 43 and the minimum is f(4) = -9.
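The two-step comparison can be verified numerically:

```python
# Closed-interval check for f(x) = 2x^3 - 15x^2 + 24x + 7 on [0, 6].

def f(x):
    return 2 * x ** 3 - 15 * x ** 2 + 24 * x + 7

# f'(x) = 6x^2 - 30x + 24 = 6(x - 1)(x - 4): critical points at 1 and 4.
values = {x: f(x) for x in (0, 1, 4, 6)}
# f(0) = 7, f(1) = 18, f(4) = -9, f(6) = 43: max 43 at x = 6, min -9 at x = 4
```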
The document discusses exponential functions of the form f(x) = a^x where a is a constant. It provides examples of exponential functions with a = 2 and a = 1/2. It describes the domain, range, and common point of exponential functions. The document also discusses transformations of exponential functions by adding or subtracting constants, and provides examples of sketching and describing transformed exponential functions. Finally, it lists common exponential expressions.
This document provides instructions for graphing quadratic functions and examples worked through step-by-step. It begins with the general steps: 1) identify coefficients, 2) find the vertex, 3) draw the axis of symmetry, 4) find the y-intercept, 5) find roots, 6) reflect points over the axis, and 7) graph the parabola. An example graphs the function y = 3x^2 - 6x + 1. It then works through graphing the path of a basketball using the function f(x) = -16x^2 + 32x, finding that the maximum height is 16 feet reached at 1 second, and the basketball is in the air for 2 seconds.
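The basketball example can be checked with the vertex formula and the roots of f:

```python
# Checking the basketball example f(t) = -16t^2 + 32t.

def height(t):
    return -16 * t ** 2 + 32 * t

t_peak = -32 / (2 * -16)   # t = -b/(2a) = 1 second
peak = height(t_peak)      # 16 feet, the maximum height
t_landing = 2.0            # -16t(t - 2) = 0 at t = 0 and t = 2 seconds
```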
The document summarizes a dissertation on applying hierarchical matrices to solve multiscale problems. The dissertation proposes a new hierarchical domain decomposition (HDD) method that combines hierarchical matrices and domain decomposition. HDD allows efficiently computing solution mappings and functionals, and solving problems on coarser grids or with multiple right-hand sides. Complexity analyses show HDD has lower complexity than other methods. Numerical tests on problems with oscillatory and jumping coefficients demonstrate HDD achieves the expected error bounds and is independent of frequency.
Alexander Litvinenko's research interests include developing efficient numerical methods for solving stochastic PDEs using low-rank tensor approximations. He has made contributions in areas such as fast techniques for solving stochastic PDEs using tensor approximations, inexpensive functional approximations of Bayesian updating formulas, and modeling uncertainties in parameters, coefficients, and computational geometry using probabilistic methods. His current research focuses on uncertainty quantification, Bayesian updating techniques, and developing scalable and parallel methods using hierarchical matrices.
My paper for Domain Decomposition Conference in Strobl, Austria, 2005 (Alexander Litvinenko)
We took a first step toward solving the so-called skin problem. We developed an efficient H-matrix preconditioner to solve a diffusion problem with jumping coefficients.
Low-rank methods for analysis of high-dimensional data (SIAM CSE talk 2017) (Alexander Litvinenko)
An overview of our latest work on applying low-rank tensor techniques to a) solving PDEs with uncertain coefficients (or multi-parametric PDEs), b) post-processing high-dimensional data, and c) computing the largest element, level sets, and the top 5% of elements.
Minimum mean square error estimation and approximation of the Bayesian update (Alexander Litvinenko)
This document discusses methods for approximating the Bayesian update used in parameter identification problems with partial differential equations containing uncertain coefficients. It presents:
1) Deriving the Bayesian update from conditional expectation and proposing polynomial chaos expansions to approximate the full Bayesian update.
2) Describing minimum mean square error estimation to find estimators that minimize the error between the true parameter and its estimate given measurements.
3) Providing an example of applying these methods to identify an uncertain coefficient in a 1D elliptic PDE using measurements at two points.
Likelihood approximation with parallel hierarchical matrices for large spatia... (Alexander Litvinenko)
First, we use hierarchical matrices to approximate large Matern covariance matrices and the log-likelihood. Second, we find the maximum of the log-likelihood and estimate three unknown parameters (covariance length, smoothness, and variance).
My PhD talk "Application of H-matrices for computing partial inverse" (Alexander Litvinenko)
This document describes a hierarchical domain decomposition (HDD) method for solving stochastic elliptic boundary value problems with oscillatory or jumping coefficients. HDD constructs mappings between boundary and interface values that allow the solution to be computed locally in each subdomain. These mappings are represented as H-matrices to reduce computational costs. The total storage cost of HDD is O(k·n_h·log² n_h) and the complexity is O(k²·n_h·log³ n_h), where n_h is the number of degrees of freedom on a mesh of size h and k is the H-matrix rank. HDD can also be used to compute solutions when the right-hand side is represented on a coarser grid.
This document is a dissertation submitted by Alexander Litvinenko to the Faculty of Mathematics and Computer Science at the University of Leipzig in partial fulfillment of the requirements for the degree of Doctor of Natural Sciences. The dissertation proposes the application of hierarchical matrices (H-matrices) to solve multiscale problems using the hierarchical domain decomposition (HDD) method. It begins with an introduction and literature review of multiscale problems and existing solution methods. It then describes the classical finite element method, the HDD method, and H-matrices. The main body of the dissertation focuses on applying H-matrices within the HDD method to efficiently solve problems involving multiple spatial and temporal scales. Numerical results demonstrate the effectiveness of the proposed approach.
We combined low-rank tensor techniques and the FFT to compute kriging estimates, estimate the variance, and compute the conditional covariance. We are able to solve 3D problems at very high resolution.
Low-rank tensor methods for stochastic forward and inverse problems (Alexander Litvinenko)
The document discusses low-rank tensor methods for solving partial differential equations (PDEs) with uncertain coefficients. It covers two parts: (1) using the stochastic Galerkin method to discretize an elliptic PDE with an uncertain diffusion coefficient represented by tensors, and (2) computing quantities of interest, such as the maximum value, from the tensor solution in an efficient way. Specifically, it describes representing the diffusion coefficient, forcing term, and solution of the discretized PDE using tensors, and computing the maximum value and corresponding indices by solving an eigenvalue problem involving the tensor solution.
Response Surface in Tensor Train format for Uncertainty Quantification (Alexander Litvinenko)
We apply the low-rank Tensor Train (TT) format to solve PDEs with uncertain coefficients. First, we approximate the uncertain permeability coefficient in the TT format, then the operator, and then apply iterations to solve the stochastic Galerkin system.
Application H-matrices for solving PDEs with multi-scale coefficients, jumpin... (Alexander Litvinenko)
We develop a hierarchical domain decomposition method to compute a part of the solution and a part of the inverse operator with O(n log n) storage and computing cost.
Hierarchical matrix approximation of large covariance matrices (Alexander Litvinenko)
We study the class of Matern covariance matrices and their approximability in the H-matrix format. Further tasks are to compute the H-Cholesky factorization, trace, determinant, quadratic form, and log-likelihood. Later, H-matrices can be applied in kriging.
We study how scalable hierarchical algorithms can be used for solving stochastic PDEs and for uncertainty quantification. In particular, we are interested in approximating large covariance matrices in the H-matrix format, hierarchical Cholesky factorization, and computing the Karhunen-Loeve expansion.
Computation of Electromagnetic Fields Scattered from Dielectric Objects of Un... - Alexander Litvinenko
1) The document describes a method called Multilevel Monte Carlo (MLMC) to efficiently compute electromagnetic fields scattered from dielectric objects of uncertain shapes. MLMC balances statistical errors from random sampling and numerical errors from geometry discretization to reduce computational time.
2) A surface integral equation solver is used to model scattering from dielectric objects. Random geometries are generated by perturbing surfaces with random fields defined by spherical harmonics.
3) MLMC is shown to estimate scattering cross sections accurately while requiring fewer overall computations compared to traditional Monte Carlo methods. This is achieved by optimally allocating samples across discretization levels.
Hierarchical matrix techniques for maximum likelihood covariance estimation - Alexander Litvinenko
1. We apply hierarchical matrix techniques (HLIB, HLIBPro) to approximate huge covariance matrices. We are able to work with 250K-350K non-regularly spaced grid nodes.
2. We maximize a non-linear, non-convex Gaussian log-likelihood function to identify the hyper-parameters of the covariance.
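As a rough illustration of the log-likelihood computation behind this, here is a dense stand-in for the H-matrix arithmetic: a hypothetical exponential (Matérn nu=1/2) covariance on synthetic 1D data, with the log-likelihood evaluated via a Cholesky factorization. All values (n=200, correlation lengths) are made up for the sketch.

```python
import numpy as np

def exp_cov(x, ell, sigma2=1.0):
    """Exponential (Matern nu=1/2) covariance on 1D locations x."""
    d = np.abs(x[:, None] - x[None, :])
    # Tiny nugget on the diagonal for numerical stability of the Cholesky.
    return sigma2 * np.exp(-d / ell) + 1e-10 * np.eye(len(x))

def gauss_loglik(C, z):
    """Gaussian log-likelihood of data z under covariance C, via Cholesky."""
    Lc = np.linalg.cholesky(C)
    alpha = np.linalg.solve(Lc, z)              # solves Lc @ alpha = z
    logdet = 2.0 * np.sum(np.log(np.diag(Lc))) # log det C from the factor
    n = len(z)
    return -0.5 * (alpha @ alpha + logdet + n * np.log(2.0 * np.pi))

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0.0, 1.0, 200))
z = np.linalg.cholesky(exp_cov(x, 0.3)) @ rng.standard_normal(200)  # synthetic data

# Scan a few candidate correlation lengths; the likelihood should
# typically peak near the value used to generate the data (0.3).
candidates = [0.05, 0.1, 0.3, 1.0]
lls = [gauss_loglik(exp_cov(x, ell), z) for ell in candidates]
```

The H-matrix machinery replaces the O(n^3) dense Cholesky above with an approximate factorization of near-linear cost, which is what makes 250K+ nodes feasible.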
New data structures and algorithms for post-processing large data sets and ... - Alexander Litvinenko
In this work, we describe advanced numerical tools for working with multivariate functions and for the analysis of large data sets. These tools drastically reduce the required computing time and storage cost and therefore allow us to consider much larger data sets or finer meshes. Covariance matrices are crucial in spatio-temporal statistical tasks, but are often very expensive to compute and store, especially in 3D. Therefore, we approximate covariance functions by cheap surrogates in a low-rank tensor format. We apply the Tucker and canonical tensor decompositions to a family of Matérn- and Slater-type functions with varying parameters and demonstrate numerically that their approximations exhibit exponentially fast convergence. We prove the exponential convergence of the Tucker and canonical approximations in the tensor rank parameters. Several statistical operations are performed in this low-rank tensor format, including evaluating the conditional covariance matrix and the spatially averaged estimation variance, and computing a quadratic form, determinant, trace, log-likelihood, inverse, and Cholesky decomposition of a large covariance matrix. Low-rank tensor approximations reduce the computing and storage costs substantially. For example, the storage cost is reduced from an exponential $O(n^d)$ to a linear scaling $O(drn)$, where $d$ is the spatial dimension, $n$ is the number of mesh points in one direction, and $r$ is the tensor rank. Prerequisites for applicability of the proposed techniques are the assumptions that the data, locations, and measurements lie on a tensor (axes-parallel) grid and that the covariance function depends on a distance,...
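To make the storage figures concrete, here is a small numerical sketch. The values of n, d, r are illustrative, and a Gaussian kernel stands in for the smooth covariance families mentioned above; its rapidly decaying singular values are the reason small ranks suffice.

```python
import numpy as np

# Storage of a full d-way tensor on an n x ... x n grid vs. a rank-r CP model.
n, d, r = 100, 3, 10
full_storage = n ** d          # O(n^d): 1,000,000 entries
cp_storage = d * r * n         # O(d*r*n): 3,000 entries

# The rank needed is small because smooth kernels have rapidly decaying
# singular values; a 1D cut through a Gaussian kernel illustrates this.
x = np.linspace(0.0, 1.0, 200)
K = np.exp(-((x[:, None] - x[None, :]) ** 2) / 0.1)
s = np.linalg.svd(K, compute_uv=False)
numerical_rank = int(np.sum(s > 1e-8 * s[0]))  # rank at relative tolerance 1e-8
```

Even at d=3 the compression is more than 300x, and the gap widens exponentially with the dimension.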
Tucker tensor analysis of Matérn functions in spatial statistics - Alexander Litvinenko
1. Motivation: improve statistical models
2. Motivation: disadvantages of matrices
3. Tools: Tucker tensor format
4. Tensor approximation of Matern covariance function via FFT
5. Typical statistical operations in Tucker tensor format
6. Numerical experiments
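Item 4 in the outline relies on the fact that a stationary covariance sampled on a regular grid is Toeplitz, so matrix-vector products can be computed with the FFT via circulant embedding in O(n log n). A 1D sketch (the grid size and correlation length are made-up values):

```python
import numpy as np

# Matern-1/2 covariance row on a regular grid: C[i,j] = c(|i-j|*h).
n, h, ell = 512, 1.0 / 512, 0.1
c = np.exp(-np.arange(n) * h / ell)   # first row/column of the Toeplitz matrix

def toeplitz_matvec_fft(c, v):
    """O(n log n) product of a symmetric Toeplitz covariance with v,
    via embedding into a circulant matrix of size 2n-2."""
    n = len(c)
    circ = np.concatenate([c, c[-2:0:-1]])  # circulant first column, size 2n-2
    lam = np.fft.fft(circ)                  # eigenvalues of the circulant
    y = np.fft.ifft(lam * np.fft.fft(v, len(circ)))  # zero-pads v
    return y[:n].real

rng = np.random.default_rng(1)
v = rng.standard_normal(n)

# Dense reference for verification only (never formed in practice).
C = np.exp(-np.abs(np.subtract.outer(np.arange(n), np.arange(n))) * h / ell)
err = np.linalg.norm(toeplitz_matvec_fft(c, v) - C @ v) / np.linalg.norm(C @ v)
```

In d dimensions the same idea applies mode by mode, which is what combines naturally with the Tucker format.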
We apply the tensor train (TT) data format to solve an elliptic PDE with uncertain coefficients. We reduce the complexity and storage from exponential to linear. Post-processing in the TT format is also provided.
The document discusses distributed online convex optimization algorithms for coordinating multiple agents. It presents a coordination algorithm where each agent performs proportional-integral feedback to minimize local objectives while sharing information with neighbors over noisy communication channels. The algorithm is proven to achieve exponential convergence of second moments to the optimal solution and an ultimate bound on the error that depends on the noise level. Simulation results on a medical diagnosis example are also presented to illustrate the algorithm's behavior.
We start with motivation and a few examples of uncertainties. Then we discretize an elliptic PDE with uncertain coefficients and apply the TT format to the permeability, the stochastic operator, and the solution. We compare the sparse multi-index set approach with the full multi-index set in the TT format.
The Tensor Train format allows us to keep the whole multi-index set, without any truncation of it.
This document discusses Bayesian inference on mixtures models. It covers several key topics:
1. Density approximation and consistency results for mixtures as a way to approximate unknown distributions.
2. The "scarcity phenomenon" where the posterior probabilities of most component allocations in mixture models are zero, concentrating on just a few high probability allocations.
3. Challenges with Bayesian inference for mixtures, including identifiability issues, label switching, and complex combinatorial calculations required to integrate over all possible component allocations.
Fast Identification of Heavy Hitters by Cached and Packed Group Testing - Rakuten Group, Inc.
The document summarizes a research paper on efficiently identifying heavy hitters in data streams using cached and packed group testing techniques. The paper proposes using packed bidirectional counter arrays to implement the operations of combinatorial group testing (CGT) in constant time. This improves the time complexity of CGT for updating frequencies and querying heavy hitters from O(log(n)) to O(1), eliminating dependency on the size of the data universe n. Experimental results show the proposed method achieves competitive precision, update throughput, and query throughput compared to existing CGT and hierarchical count-min sketch approaches.
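For flavor, here is a minimal count-min sketch, the classic sublinear-memory frequency summary that the hierarchical count-min baseline in the paper builds on. This is NOT the paper's packed CGT structure, just a simplified illustration of sketch-based heavy-hitter estimation; the hash family and sizes are arbitrary choices.

```python
import numpy as np

class CountMin:
    """Count-min sketch: depth hash rows, each of a fixed width.
    query() returns an upper bound on the true frequency."""

    def __init__(self, width=256, depth=4, seed=0):
        rng = np.random.default_rng(seed)
        # Simple 2-universal-style hashes h(x) = (a*x + b) mod p mod width.
        self.a = rng.integers(1, 2**31 - 1, size=depth)
        self.b = rng.integers(0, 2**31 - 1, size=depth)
        self.width = width
        self.table = np.zeros((depth, width), dtype=np.int64)

    def _rows(self, x):
        return (self.a * x + self.b) % (2**31 - 1) % self.width

    def update(self, x, cnt=1):
        self.table[np.arange(len(self.a)), self._rows(x)] += cnt

    def query(self, x):
        # Collisions only inflate counters, so the min over rows overestimates.
        return int(self.table[np.arange(len(self.a)), self._rows(x)].min())

cm = CountMin()
stream = [7] * 1000 + [13] * 500 + list(range(100, 400))  # 7 and 13 are heavy
for item in stream:
    cm.update(item)
```

The paper's contribution is making the analogous update/query operations O(1) in the universe size via packed bidirectional counters, rather than this O(depth) textbook variant.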
Low rank tensor approximation of probability density and characteristic funct... - Alexander Litvinenko
This document summarizes a presentation on computing divergences and distances between high-dimensional probability density functions (pdfs) represented using tensor formats. It discusses:
1) Motivating the problem using examples from stochastic PDEs and functional representations of uncertainties.
2) Computing Kullback-Leibler divergence and other divergences when pdfs are not directly available.
3) Representing probability characteristic functions and approximating pdfs using tensor decompositions like CP and TT formats.
4) Numerical examples computing Kullback-Leibler divergence and Hellinger distance between Gaussian and alpha-stable distributions using these tensor approximations.
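A low-dimensional sanity check of these distance computations, on pdfs discretised on a regular grid. Here plain 1D Gaussians are used so that the Kullback-Leibler divergence has a closed form to compare against; the low-rank tensor machinery is of course only needed when d is large.

```python
import numpy as np

# Discretise two 1D Gaussian pdfs on a regular grid.
x = np.linspace(-10.0, 10.0, 4001)
dx = x[1] - x[0]

def gauss(x, mu, s):
    return np.exp(-0.5 * ((x - mu) / s) ** 2) / (s * np.sqrt(2.0 * np.pi))

p = gauss(x, 0.0, 1.0)   # N(0, 1)
q = gauss(x, 1.0, 2.0)   # N(1, 4)

# Kullback-Leibler divergence D(p||q) ~ sum p log(p/q) dx
kl = np.sum(p * np.log(p / q)) * dx
# Hellinger distance H(p,q) = sqrt(1 - int sqrt(p q) dx)
hell = np.sqrt(max(0.0, 1.0 - np.sum(np.sqrt(p * q)) * dx))

# Closed form for Gaussians: D(p||q) = log(s2/s1) + (s1^2+(mu1-mu2)^2)/(2 s2^2) - 1/2
kl_exact = np.log(2.0) + (1.0 + 1.0) / (2.0 * 4.0) - 0.5
```

In the high-dimensional case the grid sums above become tensor contractions, and the point-wise functions (log, sqrt) are the expensive part that the low-rank formats must handle.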
Poster to be presented at Stochastic Numerics and Statistical Learning: Theory and Applications Workshop 2024, Kaust, Saudi Arabia, https://cemse.kaust.edu.sa/stochnum/events/event/snsl-workshop-2024.
In this work we have considered a setting that mimics the Henry problem \cite{Simpson2003,Simpson04_Henry}, modeling seawater intrusion into a 2D coastal aquifer. The pure water recharge from the ``land side'' resists the salinisation of the aquifer due to the influx of saline water through the ``sea side'', thereby achieving some equilibrium in the salt concentration. In our setting, following \cite{GRILLO2010}, we consider a fracture on the sea side that significantly increases the permeability of the porous medium.
The flow and transport essentially depend on the geological parameters of the porous medium, including the fracture. We investigated the effects of various uncertainties on saltwater intrusion. We assumed uncertainties in the fracture width, the porosity of the bulk medium, its permeability and the pure water recharge from the land side. The porosity and permeability were modeled by random fields, the recharge by a random but periodic intensity and the thickness by a random variable. We calculated the mean and variance of the salt mass fraction, which is also uncertain.
The main question we investigated in this work was how well the MLMC method can be used to compute statistics of different QoIs. We found that the answer depends on the choice of the QoI. First, not every QoI requires a hierarchy of meshes and MLMC. Second, MLMC requires stable convergence rates for $\EXP{g_{\ell} - g_{\ell-1}}$ and $\Var{g_{\ell} - g_{\ell-1}}$. These rates should be independent of $\ell$. If these convergence rates vary for different $\ell$, then it will be hard to estimate $L$ and $m_{\ell}$, and MLMC will either not work or be suboptimal. We were not able to get stable convergence rates for all levels $\ell=1,\ldots,5$ when the QoI was an integral as in \eqref{eq:integral_box}. We found that the rate $\alpha$ was different for $\ell=1,\ldots,4$ than for $\ell=5$. Further investigation is needed to find the reason for this. Another difficulty is the dependence on time, i.e. the number of levels $L$ and the numbers of samples $m_{\ell}$ depend on $t$. At the beginning the variability is small, then it increases, and after the process of mixing salt and fresh water has stopped, the variance decreases again.
The number of random samples required at each level was estimated by calculating the decay of the variances and the computational cost for each level. These estimates depend on the minimisation function in the MLMC algorithm.
To achieve the efficiency of the MLMC approach presented in this work, it is essential that the complexity of the numerical solution of each random realisation is proportional to the number of grid vertices on the grid levels.
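The per-level sample counts mentioned above follow the standard MLMC allocation rule: minimise the total cost $\sum_\ell m_\ell C_\ell$ subject to the statistical error constraint $\sum_\ell V_\ell/m_\ell \le \varepsilon^2/2$. A sketch with hypothetical per-level variances and costs (the values are invented for illustration):

```python
import math

def mlmc_samples(V, C, eps):
    """Optimal MLMC sample counts m_l minimising total cost sum(m_l * C_l)
    subject to sum(V_l / m_l) <= eps^2 / 2 (standard Lagrange-multiplier result):
    m_l = ceil(2/eps^2 * sqrt(V_l/C_l) * sum_k sqrt(V_k C_k))."""
    S = sum(math.sqrt(v * c) for v, c in zip(V, C))
    return [math.ceil(2.0 / eps**2 * math.sqrt(v / c) * S) for v, c in zip(V, C)]

# Hypothetical per-level variances (decaying) and costs (growing with refinement):
V = [1e-2, 2.5e-3, 6e-4, 1.5e-4]
C = [1.0, 4.0, 16.0, 64.0]
m = mlmc_samples(V, C, eps=1e-2)   # many cheap coarse samples, few fine ones
```

The rule only works when the estimated $V_\ell$ decay stably with $\ell$, which is exactly the condition that failed for the integral QoI above.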
1. Motivation: why do we need low-rank tensors
2. Tensors of the second order (matrices)
3. CP, Tucker and tensor train tensor formats
4. Many classical kernels have (or can be approximated in) a low-rank tensor format
5. Post processing: Computation of mean, variance, level sets, frequency
In this work we discuss how to compute the KLE with complexity O(k n log n), how to approximate large covariance matrices (in the H-matrix format), and how to use the Lanczos method.
We solve an elliptic PDE with uncertain coefficients. We apply the Karhunen-Loeve expansion to separate the stochastic part from the spatial part. The corresponding eigenvalue problem with the covariance function is solved via the hierarchical matrix technique. We also demonstrate how low-rank tensor methods can be applied to high-dimensional problems (e.g., to compute higher-order statistical moments). We provide explicit formulas to compute statistical moments of order k with linear complexity.
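A minimal dense sketch of the truncated KLE computation. The H-matrix eigensolver is replaced here by numpy's dense `eigh`, and the exponential covariance, grid, and 95% energy threshold are illustrative assumptions.

```python
import numpy as np

# Discretised covariance operator of an exponential kernel on [0, 1].
n = 300
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]
C = np.exp(-np.abs(x[:, None] - x[None, :]) / 0.5)

# Nystrom-style eigenproblem: eigenpairs of h*C approximate the KLE
# eigenpairs of the continuous covariance operator.
w, V = np.linalg.eigh(h * C)
w, V = w[::-1], V[:, ::-1]          # sort eigenvalues in descending order

# Truncate: keep k terms capturing 95% of the total variance (the trace).
ratio = np.cumsum(w) / np.sum(w)
k = int(np.searchsorted(ratio, 0.95)) + 1
```

The fast eigenvalue decay (here only a handful of terms reach 95% of the variance) is what makes the truncated KLE an effective dimension reduction; the H-matrix and Lanczos machinery computes the same few leading eigenpairs without ever forming C densely.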
NIPS2010: optimization algorithms in machine learning - zukun
The document summarizes optimization algorithms for machine learning applications. It discusses first-order methods like gradient descent, accelerated methods like Nesterov's algorithm, and non-monotone methods like Barzilai-Borwein. Gradient descent converges at a rate of 1/k, while methods like heavy-ball, conjugate gradient, and Nesterov's algorithm can achieve faster linear or 1/k^2 convergence rates depending on the problem structure. The document provides convergence analysis and rate results for various first-order optimization algorithms applied to machine learning problems.
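The acceleration effect discussed in that tutorial can be seen on a toy strongly convex quadratic (an illustrative setup, not from the slides: condition number 100, 50 iterations, and the standard strongly convex momentum coefficient for Nesterov's method):

```python
import numpy as np

# Ill-conditioned quadratic f(x) = 0.5 x^T A x with minimiser x* = 0.
A = np.diag([1.0, 100.0])
L_smooth, mu = 100.0, 1.0
kappa = L_smooth / mu                      # condition number = 100

x_gd = np.array([1.0, 1.0])
x_nes = np.array([1.0, 1.0])
y = x_nes.copy()
beta = (np.sqrt(kappa) - 1) / (np.sqrt(kappa) + 1)  # Nesterov momentum

for _ in range(50):
    x_gd = x_gd - (1.0 / L_smooth) * (A @ x_gd)     # plain gradient descent
    x_new = y - (1.0 / L_smooth) * (A @ y)          # gradient step at lookahead y
    y = x_new + beta * (x_new - x_nes)              # momentum extrapolation
    x_nes = x_new

err_gd, err_nes = np.linalg.norm(x_gd), np.linalg.norm(x_nes)
```

After 50 iterations the accelerated iterate is an order of magnitude closer to the minimiser, reflecting the sqrt(kappa) dependence of the accelerated rate versus the kappa dependence of plain gradient descent.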
Saltwater intrusion occurs when sea levels rise and saltwater moves onto the land. Usually, this happens during storms, high tides, and droughts, or when saltwater penetrates freshwater aquifers and raises the groundwater table. Since groundwater is an essential resource for nutrition and irrigation, its salinization may have catastrophic consequences: many acres of farmland may be lost because they become too wet or too salty to grow crops. Therefore, accurate modeling of different scenarios of saline flow is essential to help farmers and researchers develop strategies that improve soil quality and decrease the effects of saltwater intrusion.
Saline flow is density-driven and described by a system of time-dependent nonlinear partial differential equations (PDEs). It features convection dominance and can demonstrate very complicated behavior.
As a specific model, we consider a Henry-like problem with uncertain permeability and porosity.
These parameters may strongly affect the flow and transport of salt.
We investigated the applicability and efficiency of the MLMC approach to the Henry-like problem with uncertain porosity, permeability and recharge. These uncertain parameters were modelled by random fields with three independent random variables. Permeability is a function of porosity. Both functions are time-dependent, have multi-scale behaviour and are defined for two layers. The numerical solution for each random realisation was obtained using the well-known ug4 parallel multigrid solver. The number of random samples required at each level was estimated by calculating the decay of the variances and the computational cost for each level.
The MLMC method was used to compute the expected value and variance of several QoIs, such as the solution at a few preselected points $(t,\bx)$, the solution integrated over a small subdomain, and the time evolution of the freshwater integral. We have found that some QoIs require only 2-3 mesh levels and samples from finer meshes would not significantly improve the result. Other QoIs require more grid levels.
1. Investigated efficiency of MLMC for Henry problem with
uncertain porosity, permeability, and recharge.
2. Uncertainties are modeled by random fields.
3. MLMC could be much faster than MC: 3200 times faster!
4. The time dependence is challenging.
Remarks:
1. Check if MLMC is needed.
2. The optimal number of samples depends on the point $(t,\bx)$.
3. An advanced MLMC may give better estimates of $L$ and $m_{\ell}$.
Density Driven Groundwater Flow with Uncertain Porosity and Permeability - Alexander Litvinenko
In this work, we solved the density-driven groundwater flow problem with uncertain porosity and permeability. An accurate deterministic solution of this time-dependent and non-linear problem is impossible because of natural uncertainties in the reservoir, such as the porosity and permeability.
Therefore, we estimated the mean value and the variance of the solution, as well as the propagation of uncertainties from the random input parameters to the solution.
We started by defining the Elder-like problem. Then we described the multi-variate polynomial approximation (\gPC) approach and used it to estimate the required statistics of the mass fraction.
Utilizing the \gPC method allowed us to reduce the computational cost compared to the classical quasi-Monte Carlo method.
\gPC assumes that the output function $\sol(t,\bx,\thetab)$ is square-integrable and smooth w.r.t. the uncertain input variables $\thetab$.
Many factors, such as non-linearity, multiple solutions, multiple stationary states, time dependence and complicated solvers, make the investigation of the convergence of the \gPC method a non-trivial task.
We used an easy-to-implement, but only sub-optimal, \gPC technique to quantify the uncertainty. For example, it is known that increasing the degree of global polynomials (Hermite, Lagrange and similar) leads to Runge's phenomenon. Here, local polynomials, splines, or mixtures of them would probably be better. Additionally, we used an easy-to-parallelise quadrature rule, which was also only suboptimal. For instance, an adaptive choice of sparse grid (or collocation) points \cite{ConradMarzouk13,nobile-sg-mc-2015,Sudret_sparsePCE,CONSTANTINE12,crestaux2009polynomial} would be better, but we were limited by the usage of parallel methods: adaptive quadrature rules are not (so well) parallelisable. In conclusion, we can report that: a) we developed a highly parallel method to quantify uncertainty in the Elder-like problem; b) with the \gPC of degree 4 we can achieve similar results as with the \QMC method.
In the numerical section we considered two different aquifers - a solid parallelepiped and a solid elliptic cylinder. One of our goals was to see how the domain geometry influences the formation, the number and the shape of fingers.
Since the considered problem is nonlinear,
a high variance in the porosity may result in totally different solutions; for instance, the number of fingers, their intensity and shape, the propagation time, and the velocity may vary considerably.
The number of cells in the presented experiments varied from $241{,}152$ to $15{,}433{,}728$ for the cylindrical domain and from $524{,}288$ to $4{,}194{,}304$ for the parallelepiped. The maximal number of parallel processing units was $600\times 32$, where $600$ is the number of parallel nodes and $32$ is the number of computing cores on each node. The total computing time varied from 2 hours for the coarse mesh to 24 hours for the finest mesh.
We consider a class of density-driven flow problems. We are particularly interested in the problem of the salinization of coastal aquifers. We consider the Henry saltwater intrusion problem with uncertain porosity, permeability, and recharge parameters as a test case.
The reason for the presence of uncertainties is the lack of knowledge, inaccurate measurements,
and inability to measure parameters at each spatial or time location. This problem is nonlinear and time-dependent. The solution is the salt mass fraction, which is uncertain and changes in time. Uncertainties in porosity, permeability, recharge, and mass fraction are modeled using random fields. This work investigates the applicability of the well-known multilevel Monte Carlo (MLMC) method for such problems. The MLMC method can reduce the total computational and storage costs. Moreover, the MLMC method runs multiple scenarios on different spatial and time meshes and then estimates the mean value of the mass fraction.
The parallelization is performed in both the physical space and stochastic space. To solve every deterministic scenario, we run the parallel multigrid solver ug4 in a black-box fashion.
We use the solution obtained from the quasi-Monte Carlo method as a reference solution.
We investigated the applicability and efficiency of the MLMC approach for the Henry-like problem with uncertain porosity, permeability, and recharge. These uncertain parameters were modeled by random fields with three independent random variables. The numerical solution for each random realization was obtained using the well-known ug4 parallel multigrid solver. The number of required random samples on each level was estimated by computing the decay of the variances and computational costs for each level. We also computed the expected value and variance of the mass fraction in the whole domain, the evolution of the pdfs, the solutions at a few preselected points $(t,\bx)$, and the time evolution of the freshwater integral value. We have found that some QoIs require only 2-3 of the coarsest mesh levels, and samples from finer meshes would not significantly improve the result. Note that a different type of porosity may lead to a different conclusion.
The results show that the MLMC method is faster than the QMC method at the finest mesh. Thus, sampling at different mesh levels makes sense and helps to reduce the overall computational cost.
Here the interest is mainly to compute characterisations like the entropy,
the Kullback-Leibler divergence, more general $f$-divergences, or other such characteristics based on
the probability density. The density is often not available directly,
and it is a computational challenge to just represent it in a numerically
feasible fashion in case the dimension is even moderately large. It
is an even stronger numerical challenge to then actually compute said characteristics
in the high-dimensional case.
The task considered here was the numerical computation of characterising statistics of
high-dimensional pdfs, as well as their divergences and distances,
where the pdf in the numerical implementation was assumed discretised on some regular grid.
We have demonstrated that high-dimensional pdfs,
pcfs, and some functions of them
can be approximated and represented in a low-rank tensor data format.
Utilisation of low-rank tensor techniques helps to reduce the computational complexity
and the storage cost from exponential $\C{O}(n^d)$ to linear in the dimension $d$, e.g.\ $\C{O}(d n r^2)$ for the TT format. Here $n$ is the number of discretisation points in one direction, $r \ll n$ is the maximal tensor rank, and $d$ is the problem dimension.
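A compact sketch of how such a TT representation is obtained, via the standard TT-SVD applied to a synthetic 3-way array that is low-rank by construction. This is illustrative only: real implementations work with much larger d and never form the full tensor.

```python
import numpy as np

def tt_svd(A, tol=1e-10):
    """TT-SVD: sequential truncated SVDs of unfoldings give the TT cores of A."""
    d, shape = A.ndim, A.shape
    cores, r = [], 1
    M = A.reshape(shape[0], -1)
    for k in range(d - 1):
        M = M.reshape(r * shape[k], -1)
        U, s, Vt = np.linalg.svd(M, full_matrices=False)
        rk = max(1, int(np.sum(s > tol * s[0])))   # truncate at relative tolerance
        cores.append(U[:, :rk].reshape(r, shape[k], rk))
        M = s[:rk, None] * Vt[:rk]                 # carry the remainder forward
        r = rk
    cores.append(M.reshape(r, shape[-1], 1))
    return cores

def tt_full(cores):
    """Contract TT cores back to the full tensor (for checking only)."""
    T = cores[0]
    for G in cores[1:]:
        T = np.tensordot(T, G, axes=([T.ndim - 1], [0]))
    return T.squeeze(axis=(0, T.ndim - 1))

# A 20x20x20 tensor that is a sum of two rank-1 terms, so its TT ranks are <= 2.
x = np.linspace(0.0, 1.0, 20)
A = (np.sin(x)[:, None, None] * np.cos(x)[None, :, None] * np.exp(x)[None, None, :]
     + x[:, None, None] * (x ** 2)[None, :, None] * np.ones(20)[None, None, :])
cores = tt_svd(A)
err = np.linalg.norm(tt_full(cores) - A) / np.linalg.norm(A)
tt_storage = sum(G.size for G in cores)    # vs. 20**3 = 8000 full entries
```

With ranks bounded by 2 the cores hold only a few hundred entries while reconstructing A to machine precision, which is the $\C{O}(d n r^2)$ scaling in miniature.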
This document proposes a method for weakly supervised regression on uncertain datasets. It combines graph Laplacian regularization and cluster ensemble methodology. The method solves an auxiliary minimization problem to determine the optimal solution for predicting uncertain parameters. It is tested on artificial data to predict target values using a mixture of normal distributions with labeled, inaccurately labeled, and unlabeled samples. The method is shown to outperform a simplified version by reducing mean Wasserstein distance between predicted and true values.
Computing f-Divergences and Distances of High-Dimensional Probability Density... - Alexander Litvinenko
Poster presented on Stochastic Numerics and Statistical Learning: Theory and Applications Workshop in KAUST, Saudi Arabia.
The task considered here was the numerical computation of characterising statistics of
high-dimensional pdfs, as well as their divergences and distances,
where the pdf in the numerical implementation was assumed discretised on some regular grid.
Even for moderate dimension $d$, the full storage and computation with such objects become very quickly infeasible.
We have demonstrated that high-dimensional pdfs,
pcfs, and some functions of them
can be approximated and represented in a low-rank tensor data format.
Utilisation of low-rank tensor techniques helps to reduce the computational complexity
and the storage cost from exponential $\C{O}(n^d)$ to linear in the dimension $d$, e.g. $\C{O}(d n r^2)$ for the TT format. Here $n$ is the number of discretisation points in one direction, $r \ll n$ is the maximal tensor rank, and $d$ is the problem dimension.
The particular data format is rather unimportant; any of the well-known tensor formats (CP, Tucker, hierarchical Tucker, tensor-train (TT)) can be used, and we used the TT data format. Much of the presentation, and in fact the central train of thought, is actually independent of the actual representation.
In the beginning, the approach was motivated through three possible ways in which one may arrive at such a representation of the pdf: the pdf may be given in some approximate analytical form, e.g. as a function tensor product of lower-dimensional pdfs with a product measure; it may come from an analogous representation of the pcf and subsequent use of the Fourier transform; or it may come from a low-rank functional representation of a high-dimensional RV, again via its pcf.
The theoretical underpinnings of the relation between pdfs and pcfs as well as their
properties were recalled in Section: Theory, as they are important to be preserved in the
discrete approximation. This also introduced the concepts of the convolution and of
the point-wise multiplication Hadamard algebra, concepts which become especially important if
one wants to characterise sums of independent RVs or mixture models,
a topic we did not touch on for the sake of brevity but which follows very naturally from
the developments here. The Hadamard algebra in particular is also important for the algorithms that compute various point-wise functions in the sparse formats.
Computing f-Divergences and Distances of High-Dimensional Probability Densi... - Alexander Litvinenko
Talk presented at the SIAM IS 2022 conference.
Very often, in the course of uncertainty quantification tasks or
data analysis, one has to deal with high-dimensional random variables (RVs)
(with values in $\Rd$). Just like any other RV,
a high-dimensional RV can be described by its probability density (\pdf) and/or
by the corresponding probability characteristic functions (\pcf),
or a more general representation as
a function of other, known, random variables.
Here the interest is mainly to compute characterisations like the entropy, the Kullback-Leibler, or more general
$f$-divergences. These are all computed from the \pdf, which is often not available directly,
and it is a computational challenge to even represent it in a numerically
feasible fashion in case the dimension $d$ is even moderately large. It
is an even stronger numerical challenge to then actually compute said characterisations
in the high-dimensional case.
In this regard, in order to achieve a computationally feasible task, we propose to approximate the density by a low-rank tensor.
Identification of unknown parameters and prediction of missing values. Compar... - Alexander Litvinenko
H-matrix approximation of large Matérn covariance matrices and Gaussian log-likelihoods.
Identifying unknown parameters and making predictions
Comparison with machine learning methods.
kNN is easy to implement and shows promising results.
Computation of electromagnetic fields scattered from dielectric objects of un... - Alexander Litvinenko
This document describes using the Continuation Multi-Level Monte Carlo (CMLMC) method to compute electromagnetic fields scattered from dielectric objects of uncertain shapes. CMLMC optimally balances statistical and discretization errors using fewer samples on fine meshes and more on coarse meshes. The method is tested by computing scattering cross sections for randomly perturbed spheres under plane wave excitation and comparing results to the unperturbed sphere. Computational costs and errors are analyzed to demonstrate the efficiency of CMLMC for this scattering problem with uncertain geometry.
Identification of unknown parameters and prediction with hierarchical matrice... - Alexander Litvinenko
We compare four numerical methods for the prediction of missing values in four different datasets.
These methods are 1) the hierarchical maximum likelihood estimation (H-MLE), and three machine learning (ML) methods, which include 2) k-nearest neighbors (kNN), 3) random forest, and 4) Deep Neural Network (DNN).
From the ML methods, the best results (for considered datasets) were obtained by the kNN method with three (or seven) neighbors.
On one dataset, the MLE method showed a smaller error than the kNN method, whereas, on another, the kNN method was better.
The MLE method requires a lot of linear algebra computations and works fine on almost all datasets. Its result can be improved by taking a smaller threshold and more accurate hierarchical matrix arithmetic. To our surprise, the well-known kNN method produced results similar to H-MLE and worked much faster.
Computation of electromagnetic fields scattered from dielectric objects of un... - Alexander Litvinenko
Computational tools for characterizing electromagnetic scattering from objects with uncertain shapes are needed in various applications ranging from remote sensing at microwave frequencies to Raman spectroscopy at optical frequencies. Often, such computational tools use the Monte Carlo (MC) method to sample a parametric space describing geometric uncertainties. For each sample, which corresponds to a realization of the geometry, a deterministic electromagnetic solver computes the scattered fields. However, for an accurate statistical characterization the number of MC samples has to be large. In this work, to address this challenge, the continuation multilevel Monte Carlo (\CMLMC) method is used together with a surface integral equation solver.
The \CMLMC method optimally balances statistical errors due to sampling of
the parametric space, and numerical errors due to the discretization of the geometry using a hierarchy of discretizations, from coarse to fine.
The number of realizations of finer discretizations can be kept low, with most samples
computed on coarser discretizations to minimize computational cost.
Consequently, the total execution time is significantly reduced, in comparison to the standard MC scheme.
Propagation of Uncertainties in Density Driven Groundwater Flow - Alexander Litvinenko
Major goal: estimate the risks of pollution in a subsurface flow. How? We solve the density-driven groundwater flow problem with uncertain porosity and permeability.
We set up the density-driven groundwater flow problem, review stochastic modelling and stochastic methods, use the UG4 framework (https://gcsc.uni-frankfurt.de/simulation-and-modelling/ug4), model uncertainty in porosity and permeability, and present 2D and 3D numerical experiments.
Simulation of propagation of uncertainties in density-driven groundwater flow - Alexander Litvinenko
We consider stochastic modelling of density-driven subsurface flow in 3D. This talk was presented by Dmitry Logashenko at the IMG conference in Kunming, China, in August 2019.
Large data sets result in large dense matrices, say with 2,000,000 rows and columns. How can one work with such large matrices? How can one approximate them? How can one compute the log-likelihood, the determinant, or the inverse? All answers are in this work.
This document summarizes a semi-supervised regression method that combines graph Laplacian regularization with cluster ensemble methodology. It proposes using a weighted averaged co-association matrix from the cluster ensemble as the similarity matrix in graph Laplacian regularization. The method (SSR-LRCM) finds a low-rank approximation of the co-association matrix to efficiently solve the regression problem. Experimental results on synthetic and real-world datasets show SSR-LRCM achieves significantly better prediction accuracy than an alternative method, while also having lower computational costs for large datasets. Future work will explore using a hierarchical matrix approximation instead of low-rank.
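The graph-Laplacian part of such a method can be sketched in a few lines. This is a hypothetical toy, not the SSR-LRCM pipeline: six points in two clusters on a line, a Gaussian similarity standing in for the (low-rank) co-association matrix, and two labeled points, with the regularized least-squares problem solved in closed form.

```python
import numpy as np

# Semi-supervised regression with graph-Laplacian regularisation:
#   minimise  sum_{labeled i} (f_i - y_i)^2 + gamma * f^T L f,
# whose stationarity condition is (J + gamma*L) f = J y,
# where J = diag(labeled indicator) and L is the graph Laplacian.
X = np.array([0.0, 0.1, 0.2, 1.0, 1.1, 1.2])        # two clusters on a line
W = np.exp(-np.subtract.outer(X, X) ** 2 / 0.05)    # Gaussian similarity matrix
L = np.diag(W.sum(axis=1)) - W                      # graph Laplacian L = D - W

y = np.array([1.0, 0.0, 0.0, -1.0, 0.0, 0.0])       # labels at points 0 and 3 only
J = np.diag([1.0, 0.0, 0.0, 1.0, 0.0, 0.0])
gamma = 0.1
f = np.linalg.solve(J + gamma * L, J @ y)           # predictions at all points
```

The unlabeled points inherit the label of their cluster because the Laplacian term makes f nearly harmonic away from the labeled nodes; replacing W by a low-rank co-association matrix is what keeps this solve cheap for large datasets.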
This document summarizes a talk on solving density-driven groundwater flow problems with uncertain porosity and permeability coefficients. The major goal is to estimate pollution risks in subsurface flows. The presentation covers: (1) setting up the groundwater flow problem; (2) reviewing stochastic modeling methods; (3) modeling uncertainty in porosity and permeability; (4) numerical methods to solve deterministic problems; and (5) 2D and 3D numerical experiments. The experiments demonstrate computing statistics of contaminant concentration and its propagation under uncertain parameters.
আমাদের সবার জন্য খুব খুব গুরুত্বপূর্ণ একটি বই ..বিসিএস, ব্যাংক, ইউনিভার্সিটি ভর্তি ও যে কোন প্রতিযোগিতা মূলক পরীক্ষার জন্য এর খুব ইম্পরট্যান্ট একটি বিষয় ...তাছাড়া বাংলাদেশের সাম্প্রতিক যে কোন ডাটা বা তথ্য এই বইতে পাবেন ...
তাই একজন নাগরিক হিসাবে এই তথ্য গুলো আপনার জানা প্রয়োজন ...।
বিসিএস ও ব্যাংক এর লিখিত পরীক্ষা ...+এছাড়া মাধ্যমিক ও উচ্চমাধ্যমিকের স্টুডেন্টদের জন্য অনেক কাজে আসবে ...
ISO/IEC 27001, ISO/IEC 42001, and GDPR: Best Practices for Implementation and...PECB
Denis is a dynamic and results-driven Chief Information Officer (CIO) with a distinguished career spanning information systems analysis and technical project management. With a proven track record of spearheading the design and delivery of cutting-edge Information Management solutions, he has consistently elevated business operations, streamlined reporting functions, and maximized process efficiency.
Certified as an ISO/IEC 27001: Information Security Management Systems (ISMS) Lead Implementer, Data Protection Officer, and Cyber Risks Analyst, Denis brings a heightened focus on data security, privacy, and cyber resilience to every endeavor.
His expertise extends across a diverse spectrum of reporting, database, and web development applications, underpinned by an exceptional grasp of data storage and virtualization technologies. His proficiency in application testing, database administration, and data cleansing ensures seamless execution of complex projects.
What sets Denis apart is his comprehensive understanding of Business and Systems Analysis technologies, honed through involvement in all phases of the Software Development Lifecycle (SDLC). From meticulous requirements gathering to precise analysis, innovative design, rigorous development, thorough testing, and successful implementation, he has consistently delivered exceptional results.
Throughout his career, he has taken on multifaceted roles, from leading technical project management teams to owning solutions that drive operational excellence. His conscientious and proactive approach is unwavering, whether he is working independently or collaboratively within a team. His ability to connect with colleagues on a personal level underscores his commitment to fostering a harmonious and productive workplace environment.
Date: May 29, 2024
Tags: Information Security, ISO/IEC 27001, ISO/IEC 42001, Artificial Intelligence, GDPR
-------------------------------------------------------------------------------
Find out more about ISO training and certification services
Training: ISO/IEC 27001 Information Security Management System - EN | PECB
ISO/IEC 42001 Artificial Intelligence Management System - EN | PECB
General Data Protection Regulation (GDPR) - Training Courses - EN | PECB
Webinars: https://pecb.com/webinars
Article: https://pecb.com/article
-------------------------------------------------------------------------------
For more information about PECB:
Website: https://pecb.com/
LinkedIn: https://www.linkedin.com/company/pecb/
Facebook: https://www.facebook.com/PECBInternational/
Slideshare: http://www.slideshare.net/PECBCERTIFICATION
हिंदी वर्णमाला पीपीटी, hindi alphabet PPT presentation, hindi varnamala PPT, Hindi Varnamala pdf, हिंदी स्वर, हिंदी व्यंजन, sikhiye hindi varnmala, dr. mulla adam ali, hindi language and literature, hindi alphabet with drawing, hindi alphabet pdf, hindi varnamala for childrens, hindi language, hindi varnamala practice for kids, https://www.drmullaadamali.com
This presentation includes basic of PCOS their pathology and treatment and also Ayurveda correlation of PCOS and Ayurvedic line of treatment mentioned in classics.
RPMS TEMPLATE FOR SCHOOL YEAR 2023-2024 FOR TEACHER 1 TO TEACHER 3
Possible applications of low-rank tensors in statistics and UQ (my talk in Bonn, Germany)
1. Possible applications of low-rank tensors in statistics and UQ
Alexander Litvinenko,
Extreme Computing Research Center and Uncertainty Quantification Center, KAUST
(joint work with H.G. Matthies, MIT and KAUST)
Center for Uncertainty Quantification
http://sri-uq.kaust.edu.sa/
2. Problem 1. Predict temperature, velocity, salinity
Grid: 50 million locations on 50 levels, 4*(X*Y*Z) = 4*500*500*50 = 50 million.
High-resolution time-dependent data about the Red Sea: zonal velocity and temperature.
3. Problem 1. Apply low-rank tensors for
1. Kriging estimate
\hat{s} := C_{sy} C_{yy}^{-1} y
2. Estimation of the variance \hat{\sigma}, i.e. the diagonal of the conditional covariance matrix
C_{ss|y} = \mathrm{diag}\left( C_{ss} - C_{sy} C_{yy}^{-1} C_{ys} \right),
3. Geostatistical optimal design
\varphi_A := n^{-1} \mathrm{trace}\{ C_{ss|y} \},
\varphi_C := c^T \left( C_{ss} - C_{sy} C_{yy}^{-1} C_{ys} \right) c,
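The kriging formulas above can be sketched on a synthetic 1-D example (my illustration; the squared-exponential covariance and the data are assumptions, not the talk's model):

```python
import numpy as np

def cov(a, b, ell=0.3):
    """Squared-exponential covariance between point sets a and b (assumed model)."""
    return np.exp(-0.5 * (a[:, None] - b[None, :])**2 / ell**2)

rng = np.random.default_rng(1)
xs = np.linspace(0, 1, 50)          # prediction locations s
xy = rng.uniform(0, 1, 10)          # observation locations
y = np.sin(2 * np.pi * xy)          # observed values (noise-free toy data)

Css = cov(xs, xs)
Csy = cov(xs, xy)
Cyy = cov(xy, xy) + 1e-6 * np.eye(len(xy))   # jitter for numerical stability

s_hat = Csy @ np.linalg.solve(Cyy, y)                   # kriging estimate
var = np.diag(Css - Csy @ np.linalg.solve(Cyy, Csy.T))  # diag of C_{ss|y}
phi_A = var.sum() / len(xs)                             # A-optimality criterion
```

For the problem sizes in the talk, the solves with C_yy are the bottleneck, which is where the low-rank/H-matrix approximations enter.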
4. Problem 2. Stochastic Galerkin Operator
5. Discretization of the stochastic PDE -\mathrm{div}(\kappa(p, x) \nabla u(p, x)) = f(x, p)
Pictures 1, 2 (poor and rich discretization of p):
\left( \sum_{i} \Delta_i \otimes K_i \right) \cdot (x \otimes e) = (f \otimes e)   (1)
Picture 3:
\left( \sum_{i} K_i \otimes \Delta_i \right) \cdot (x \otimes e) = (f \otimes e)   (2)
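The point of the Kronecker structure above is that the Galerkin operator never has to be assembled. A sketch with the generic identity (Δ ⊗ K) vec(X) = vec(Δ X Kᵀ), on random matrices standing in for the stochastic and spatial blocks:

```python
import numpy as np

rng = np.random.default_rng(2)
m, n, terms = 8, 6, 3                      # stochastic dim m, spatial dim n
Deltas = [rng.standard_normal((m, m)) for _ in range(terms)]
Ks = [rng.standard_normal((n, n)) for _ in range(terms)]
x = rng.standard_normal(m * n)

# Matrix-free application of sum_i Delta_i (x) K_i, cost O(terms * (m^2 n + m n^2))
X = x.reshape(m, n)                        # row-major vec convention
y = sum((D @ X @ K.T).ravel() for D, K in zip(Deltas, Ks))

# Dense check (only feasible for tiny m, n)
A = sum(np.kron(D, K) for D, K in zip(Deltas, Ks))
assert np.allclose(y, A @ x)
```

Iterative solvers for (1)-(2) need only such matvecs, so the full (m·n) × (m·n) matrix never exists in memory.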
6. Problem 3. Predict moisture, estimate covariance parameters
Grid: 1830 x 1329 = 2,432,070 locations with 2,153,888 observations and 278,182 missing values.
[Figure: soil moisture map; longitude -120 to -70, latitude 25 to 50, moisture values 0.15-0.50.]
High-resolution daily soil moisture data at the top layer of the Mississippi basin, U.S.A., 01.01.2014 (Chaney et al., in review).
Important for agriculture and defense. Moisture is very heterogeneous.
7. Problem 4: Identifying uncertain parameters
Given: a vector of measurements z = (z_1, ..., z_n)^T with a covariance matrix C(\theta^*) = C(\sigma^2, \nu, \ell).
To identify: the uncertain parameters (\sigma^2, \nu, \ell).
Plan: maximize the log-likelihood function
L(\theta) = -\frac{1}{2} \left( N \log 2\pi + \log \det\{C(\theta)\} + z^T C(\theta)^{-1} z \right).
On each iteration i we have a new matrix C(\theta_i).
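One evaluation of this log-likelihood can be sketched with a standard dense Cholesky factorization (my illustration; the exponential covariance and the data are assumptions). The talk's contribution is replacing this O(N³) step with an H-matrix approximation:

```python
import numpy as np

def neg_log_likelihood(z, C):
    """0.5 * (N log 2pi + log det C + z^T C^{-1} z), via Cholesky."""
    N = len(z)
    L = np.linalg.cholesky(C)
    logdet = 2.0 * np.sum(np.log(np.diag(L)))
    w = np.linalg.solve(L, z)              # so that w^T w = z^T C^{-1} z
    return 0.5 * (N * np.log(2 * np.pi) + logdet + w @ w)

rng = np.random.default_rng(3)
pts = rng.uniform(0, 1, 200)
sigma2, ell = 1.0, 0.0334                  # ell as in the slides' example
C = sigma2 * np.exp(-np.abs(pts[:, None] - pts[None, :]) / ell)
C += 1e-8 * np.eye(200)                    # jitter for numerical stability
z = np.linalg.cholesky(C) @ rng.standard_normal(200)   # synthetic sample
nll = neg_log_likelihood(z, C)
```

An optimizer would call this once per iteration i with the new matrix C(θ_i), which is why a cheap approximate factorization matters.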
8. Solution: Estimation of uncertain parameters
[Figure: box-plots of the estimated covariance length (values 0.02-0.06) for H-matrix ranks 3, 7, 9.]
Box-plots for \ell = 0.0334 (domain [0, 1]^2) vs different H-matrix ranks k = {3, 7, 9}.
Which H-matrix rank is sufficient for identification of the parameters of a particular type of covariance matrix?
9. [Figure: shape of the log-likelihood(\theta) for \theta in [0, 40], truth \theta^* = 12, with curves for \log(\det(C)), z^T C^{-1} z, and the log-likelihood.]
Figure: Minimum of the negative log-likelihood (black) is at \theta = (\cdot, \cdot, \ell) \approx 12 (\sigma^2 and \nu are fixed).
10. Problem 5: Multivariate characteristic function
11. Problem 5: Multivariate characteristic function
The multivariate characteristic function \varphi_X(t) of a d-dimensional random vector X = (X_1, ..., X_d), with X_1, ..., X_d independent, is
\varphi_X(t) = \int_{\mathbb{R}^d} p_X(y) \exp(i \langle y, t \rangle) \, dy, \quad t = (t_1, ..., t_d) \in \mathbb{R}^d.   (1)
The probability density is
p_X(y) = \frac{1}{(2\pi)^d} \int_{\mathbb{R}^d} \exp(-i \langle y, t \rangle) \varphi_X(t) \, dt, \quad y \in \mathbb{R}^d.   (2)
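The inversion formula (2) can be checked numerically in one dimension (my illustration, using the standard normal, whose characteristic function exp(-t²/2) is a known fact):

```python
import numpy as np

t = np.linspace(-20, 20, 20001)
dt = t[1] - t[0]
phi = np.exp(-t**2 / 2)                    # characteristic function of N(0, 1)

def density(y):
    """p_X(y) = (1/2pi) * integral of exp(-i y t) phi(t) dt, on the grid."""
    return (np.exp(-1j * y * t) * phi).sum().real * dt / (2 * np.pi)

y = 0.7
exact = np.exp(-y**2 / 2) / np.sqrt(2 * np.pi)   # standard normal density
assert abs(density(y) - exact) < 1e-6
```

In d dimensions the quadrature grid has n^d points, which is exactly the curse of dimensionality that the separated (low-rank) representation on the next slides avoids.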
12. Elliptically contoured multivariate stable distribution
The characteristic function \varphi_X(t) of the elliptically contoured multivariate stable distribution is defined as follows:
\varphi_X(t) = \exp\left( i (t_1, t_2) (\mu_1, \mu_2)^T - \left[ (t_1, t_2) \begin{pmatrix} \sigma_1^2 & 0 \\ 0 & \sigma_2^2 \end{pmatrix} (t_1, t_2)^T \right]^{\alpha/2} \right).   (3)
Now the question is to find a separation
\left[ (t_1, t_2) \begin{pmatrix} \sigma_1^2 & 0 \\ 0 & \sigma_2^2 \end{pmatrix} (t_1, t_2)^T \right]^{\alpha/2} \approx \sum_{\nu=1}^{R} \varphi_{\nu,1}(t_1) \cdot \varphi_{\nu,2}(t_2).   (4)
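On a grid, a separation like (4) can be obtained from a truncated SVD of the sampled bivariate term (a discrete sketch of my own; the talk aims at such a separation, the grid, parameter values, and rank R = 20 here are assumptions):

```python
import numpy as np

alpha, s1, s2 = 1.5, 1.0, 0.5
t = np.linspace(-3, 3, 200)
# Sample (sigma1^2 t1^2 + sigma2^2 t2^2)^(alpha/2) on a 200 x 200 grid
F = (s1**2 * t[:, None]**2 + s2**2 * t[None, :]**2) ** (alpha / 2)

U, S, Vt = np.linalg.svd(F)
R = 20
F_R = (U[:, :R] * S[:R]) @ Vt[:R]          # sum of R separated products

rel_err = np.linalg.norm(F - F_R) / np.linalg.norm(F)
assert rel_err < 1e-2
```

The columns of U and rows of Vt play the role of the discretized factors φ_{ν,1}(t1) and φ_{ν,2}(t2); the fast singular-value decay is what makes a small R sufficient.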
13. Multivariate distribution
Let \varphi_X(t) of some multivariate d-dimensional distribution be approximated as follows:
\varphi_X(t) \approx \sum_{\ell=1}^{R} \prod_{\mu=1}^{d} \varphi_{X,\ell,\mu}(t_\mu).   (5)
Then, by (2),
p_X(y) \approx \frac{1}{(2\pi)^d} \int_{\mathbb{R}^d} \exp(-i \langle y, t \rangle) \varphi_X(t) \, dt   (6)
\approx \frac{1}{(2\pi)^d} \int_{\mathbb{R}^d} \exp\left(-i \sum_{j=1}^{d} y_j t_j\right) \sum_{\ell=1}^{R} \prod_{\mu=1}^{d} \varphi_{X,\ell,\mu}(t_\mu) \, dt_1 \cdots dt_d   (7)
\approx \sum_{\ell=1}^{R} \prod_{\mu=1}^{d} \frac{1}{2\pi} \int_{\mathbb{R}} \exp(-i y_\mu t_\mu) \varphi_{X,\ell,\mu}(t_\mu) \, dt_\mu \approx \sum_{\ell=1}^{R} \prod_{\mu=1}^{d} p_{X,\ell,\mu}(y_\mu).   (8)
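The collapse of (6)-(8) into one-dimensional integrals can be sketched in the simplest case R = 1 with independent Gaussian components (my illustration; the dimensions and sigmas are arbitrary assumptions). The joint density is then the product of d separately inverted marginals:

```python
import numpy as np

t = np.linspace(-20, 20, 20001)
dt = t[1] - t[0]

def marginal_density(y, sigma):
    """1-D inversion of the factor phi(t) = exp(-(sigma t)^2 / 2)."""
    phi = np.exp(-(sigma * t)**2 / 2)
    return (np.exp(-1j * y * t) * phi).sum().real * dt / (2 * np.pi)

sigmas = [1.0, 0.5, 2.0]                   # d = 3 independent components
ys = [0.3, -0.1, 1.2]
p_joint = np.prod([marginal_density(y, s) for y, s in zip(ys, sigmas)])

# Known closed form: product of N(0, sigma^2) densities
exact = np.prod([np.exp(-y**2 / (2 * s**2)) / (s * np.sqrt(2 * np.pi))
                 for y, s in zip(ys, sigmas)])
assert abs(p_joint - exact) < 1e-6
```

With R > 1 terms one sums R such products, so the cost grows as R·d one-dimensional integrals instead of n^d grid points.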
14. Literature
1. PCE of random coefficients and the solution of stochastic partial differential equations in the Tensor Train format, S. Dolgov, B. N. Khoromskij, A. Litvinenko, H. G. Matthies, 2015, arXiv:1503.03210
2. Efficient analysis of high dimensional data in tensor formats, M. Espig, W. Hackbusch, A. Litvinenko, H.G. Matthies, E. Zander, Sparse Grids and Applications, 31-56, 2013
3. Application of hierarchical matrices for computing the Karhunen-Loeve expansion, B.N. Khoromskij, A. Litvinenko, H.G. Matthies, Computing 84 (1-2), 49-67, 2009
4. Efficient low-rank approximation of the stochastic Galerkin matrix in tensor formats, M. Espig, W. Hackbusch, A. Litvinenko, H.G. Matthies, P. Waehnert, Computers & Mathematics with Applications 67 (4), 818-829, 2012