We took a first step towards solving the so-called skin problem: we developed an efficient H-matrix preconditioner for a diffusion problem with jumping coefficients.
Conformable Chebyshev differential equation of first kind (IJECE, IAES)
In this paper, the Chebyshev-I conformable differential equation is considered. A proper power series is examined; there are two solutions, the even solution and the odd solution. A Rodrigues-type formula is also derived for the conformable Chebyshev-I polynomials.
Third-kind Chebyshev Polynomials Vr(x) in Collocation Methods of Solving Boun... (IOSR Journals)
This paper proposes the use of third-kind Chebyshev polynomials as trial functions for solving boundary value problems via the collocation method. Two different sets of collocation points are considered: the zeros of the third-kind Chebyshev polynomials and equally spaced points. These points yield different results on each considered problem, and thus different levels of accuracy. The method is computationally very simple and attractive. Applications are demonstrated through numerical examples to illustrate the efficiency and simplicity of the approach.
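As a brief illustration of the tools involved (a sketch under my own assumptions, not code from the paper): third-kind Chebyshev polynomials $V_r(x)$ satisfy the three-term recurrence $V_0 = 1$, $V_1 = 2x - 1$, $V_{r+1} = 2x V_r - V_{r-1}$, and the zeros of $V_n$, one of the two collocation-point sets mentioned above, have the closed form $x_k = \cos((2k+1)\pi/(2n+1))$:

```python
import numpy as np

def cheb3(n, x):
    """Third-kind Chebyshev polynomial V_n(x) via the three-term recurrence."""
    v0, v1 = np.ones_like(x), 2 * x - 1
    if n == 0:
        return v0
    for _ in range(n - 1):
        v0, v1 = v1, 2 * x * v1 - v0
    return v1

def cheb3_zeros(n):
    """Zeros of V_n, usable as collocation points: cos((2k+1)pi/(2n+1))."""
    k = np.arange(n)
    return np.cos((2 * k + 1) * np.pi / (2 * n + 1))
```

Evaluating `cheb3(5, cheb3_zeros(5))` returns values at machine-precision zero, confirming the closed form for the zeros.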
Solving second-order ordinary differential equations (boundary value problems) using the least-squares technique. Contains one numerical example from Shah, Eldho, and Desai.
How to Solve a Partial Differential Equation on a Surface (tr1987)
Familiar techniques of separation of variables and Fourier series can be used to solve a variety of PDEs posed on domains in the plane; however, these techniques do not extend naturally to surface problems. Instead, we take a computational approach. The talk will cover the basics of finite difference and finite element approximations of the one-dimensional heat equation and show how to extend these ideas to surfaces. If time allows, we will show numerical results of an optimal partition problem based on a sphere. No background knowledge of PDEs or computation is required.
Application of H-matrices for solving PDEs with multi-scale coefficients, jumpin... (Alexander Litvinenko)
We develop a hierarchical domain decomposition method to compute a part of the solution and a part of the inverse operator with O(n log n) storage and computational cost.
Possible applications of low-rank tensors in statistics and UQ (my talk in Bo...) (Alexander Litvinenko)
Just some ideas on how low-rank matrices/tensors can be useful in spatial and environmental statistics, where one usually has to deal with very large datasets.
Low-rank methods for analysis of high-dimensional data (SIAM CSE talk 2017) (Alexander Litvinenko)
Overview of our latest work on applying low-rank tensor techniques to a) solving PDEs with uncertain coefficients (or multi-parametric PDEs), b) post-processing high-dimensional data, and c) computing the largest element, level sets, and the top 5% of elements.
Minimum mean square error estimation and approximation of the Bayesian update (Alexander Litvinenko)
We develop a surrogate of the Bayesian update. Our formula allows us to update the polynomial chaos expansion (PCE) coefficients directly, in contrast to the classical Bayesian approach. We show that the classical Kalman filter is a particular case of our update.
Likelihood approximation with parallel hierarchical matrices for large spatia... (Alexander Litvinenko)
First, we use hierarchical matrices to approximate large Matérn covariance matrices and the log-likelihood. Second, we find a maximum of the log-likelihood and estimate three unknown parameters (covariance length, smoothness, and variance).
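For illustration only (my sketch, not the talk's H-matrix code): a dense-matrix stand-in for one log-likelihood evaluation, with the smoothness fixed at ν = 3/2 in 1D; the function names and the nugget value are assumptions. The H-matrix approach replaces the O(n³) Cholesky factorization below with an approximate factorization of log-linear cost.

```python
import numpy as np

def matern32(dists, sigma2, ell):
    """Matern covariance with smoothness nu = 3/2 (fixed here for brevity)."""
    a = np.sqrt(3.0) * dists / ell
    return sigma2 * (1.0 + a) * np.exp(-a)

def neg_loglik(sigma2, ell, pts, y, nugget=1e-8):
    """Negative Gaussian log-likelihood via a dense Cholesky factorization."""
    d = np.abs(pts[:, None] - pts[None, :])      # 1D pairwise distances
    K = matern32(d, sigma2, ell) + nugget * np.eye(len(y))
    L = np.linalg.cholesky(K)                    # K = L L^T
    alpha = np.linalg.solve(L, y)                # L alpha = y
    return (0.5 * np.sum(alpha**2)               # 0.5 * y^T K^{-1} y
            + np.sum(np.log(np.diag(L)))         # 0.5 * log det K
            + 0.5 * len(y) * np.log(2.0 * np.pi))
```

Feeding this evaluation into a grid search or an optimizer over (variance, covariance length) then gives the parameter estimates the abstract refers to.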
Multi-linear algebra and different tensor formats with applications (Alexander Litvinenko)
A short overview of well-known tensor formats, an elliptic PDE with uncertain coefficients, some academic examples of separable functions, and post-processing in tensor format.
We consider an elliptic BVP.
How can one compute a part of the solution, for instance, the solution on an interface, in a subdomain, or at a point, without computing the whole solution and with O(n log n) complexity/storage?
We apply the tensor train (TT) data format to solve an elliptic PDE with uncertain coefficients. We reduce the complexity and storage from exponential to linear. Post-processing in the TT format is also provided.
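As a sketch of the representation itself (not the PDE solver): the TT-SVD algorithm compresses a full tensor into TT cores by successive truncated SVDs; for a separable (rank-1) tensor all TT ranks collapse to 1, which is the source of the linear storage. Function names below are mine.

```python
import numpy as np

def tt_svd(tensor, tol=1e-12):
    """Decompose a full tensor into TT cores via successive truncated SVDs."""
    shape = tensor.shape
    d = len(shape)
    cores, r = [], 1
    M = tensor.reshape(r * shape[0], -1)
    for k in range(d - 1):
        U, s, Vt = np.linalg.svd(M, full_matrices=False)
        rank = max(1, int(np.sum(s > tol * s[0])))       # truncate small modes
        cores.append(U[:, :rank].reshape(r, shape[k], rank))
        M = (np.diag(s[:rank]) @ Vt[:rank]).reshape(rank * shape[k + 1], -1)
        r = rank
    cores.append(M.reshape(r, shape[-1], 1))
    return cores

def tt_to_full(cores):
    """Contract TT cores back into the full tensor (for verification only)."""
    res = cores[0]
    for core in cores[1:]:
        res = np.tensordot(res, core, axes=(-1, 0))
    return res.reshape([c.shape[1] for c in cores])
```

For a 4D separable tensor the decomposition recovers TT ranks equal to 1 and reconstructs the tensor to near machine precision.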
We combined low-rank tensor techniques and the FFT to compute kriging estimates, estimate the variance, and compute the conditional covariance. We are able to solve 3D problems with very high resolution.
My PhD talk "Application of H-matrices for computing partial inverse" (Alexander Litvinenko)
Sometimes you need not the whole solution of a partial differential equation, but only a part of it (e.g., in a boundary layer). How can one compute not the whole inverse matrix, but only a part of it (which can nevertheless provide the solution in a subdomain)?
Response Surface in Tensor Train format for Uncertainty Quantification (Alexander Litvinenko)
We apply the low-rank tensor train (TT) format to solve PDEs with uncertain coefficients. First, we approximate the uncertain permeability coefficient in the TT format, then the operator, and then apply iterations to solve the stochastic Galerkin system.
Hierarchical matrix approximation of large covariance matrices (Alexander Litvinenko)
We study the class of Matérn covariance matrices and their approximability in the H-matrix format. Further tasks are to compute the H-Cholesky factorization, trace, determinant, quadratic form, and log-likelihood. Later, H-matrices can be applied in kriging.
Anomalous Diffusion Through Homopolar Membrane: One-Dimensional Model, by Guilherme Garcia Gimenez and Adélcio C. Oliveira, in Evolutions in Mechanical Engineering
On the Seidel’s Method, a Stronger Contraction Fixed Point Iterative Method o... (BRNSS Publication Hub)
For the solution of a system of linear equations there exist many methods, most of which are not fixed-point iterative methods. However, the Seidel iteration ensures that the given system is contractive once diagonal dominance is satisfied. The theory behind this is discussed in sections one and two, and the application is discussed extensively in the last section.
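A minimal sketch of the iteration discussed (assumptions mine, not code from the paper): Gauss-Seidel updates each unknown in place using the latest available values, and the resulting fixed-point map is a contraction when the matrix is strictly diagonally dominant.

```python
import numpy as np

def gauss_seidel(A, b, x0=None, tol=1e-10, maxit=500):
    """Gauss-Seidel fixed-point iteration for A x = b.
    Converges when A is strictly diagonally dominant (a contraction)."""
    n = len(b)
    x = np.zeros(n) if x0 is None else x0.astype(float).copy()
    for _ in range(maxit):
        x_old = x.copy()
        for i in range(n):
            # use already-updated entries x[:i] and old entries x[i+1:]
            s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
            x[i] = (b[i] - s) / A[i, i]
        if np.linalg.norm(x - x_old, np.inf) < tol:
            break
    return x
```

On a small diagonally dominant system the iterate agrees with the direct solve after a handful of sweeps.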
Poster to be presented at the Stochastic Numerics and Statistical Learning: Theory and Applications Workshop 2024, KAUST, Saudi Arabia, https://cemse.kaust.edu.sa/stochnum/events/event/snsl-workshop-2024.
In this work we have considered a setting that mimics the Henry problem \cite{Simpson2003,Simpson04_Henry}, modeling seawater intrusion into a 2D coastal aquifer. The pure water recharge from the ``land side'' resists the salinisation of the aquifer due to the influx of saline water through the ``sea side'', thereby achieving some equilibrium in the salt concentration. In our setting, following \cite{GRILLO2010}, we consider a fracture on the sea side that significantly increases the permeability of the porous medium.
The flow and transport essentially depend on the geological parameters of the porous medium, including the fracture. We investigated the effects of various uncertainties on saltwater intrusion. We assumed uncertainties in the fracture width, the porosity of the bulk medium, its permeability and the pure water recharge from the land side. The porosity and permeability were modeled by random fields, the recharge by a random but periodic intensity and the thickness by a random variable. We calculated the mean and variance of the salt mass fraction, which is also uncertain.
The main question we investigated in this work was how well the MLMC method can be used to compute statistics of different QoIs. We found that the answer depends on the choice of the QoI. First, not every QoI requires a hierarchy of meshes and MLMC. Second, MLMC requires stable convergence rates for $\EXP{g_{\ell} - g_{\ell-1}}$ and $\Var{g_{\ell} - g_{\ell-1}}$. These rates should be independent of $\ell$. If these convergence rates vary for different $\ell$, then it will be hard to estimate $L$ and $m_{\ell}$, and MLMC will either not work or be suboptimal. We were not able to get stable convergence rates for all levels $\ell=1,\ldots,5$ when the QoI was an integral as in \eqref{eq:integral_box}. We found that the rate $\alpha$ was different for $\ell=1,\ldots,4$ than for $\ell=5$. Further investigation is needed to find the reason for this. Another difficulty is the dependence on time, i.e. the number of levels $L$ and the number of samples $m_{\ell}$ depend on $t$. At the beginning the variability is small, then it increases, and after the process of mixing salt and fresh water has stopped, the variance decreases again.
The number of random samples required at each level was estimated by calculating the decay of the variances and the computational cost for each level. These estimates depend on the minimisation function in the MLMC algorithm.
To achieve the efficiency of the MLMC approach presented in this work, it is essential that the complexity of the numerical solution of each random realisation is proportional to the number of grid vertices on the grid levels.
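The sample allocation alluded to here can be sketched as follows (the standard MLMC formula; the helper name and the assumption that the full $\varepsilon^2$ budget goes to the statistical error are mine): given estimated level variances $V_{\ell}$ and costs $C_{\ell}$, minimizing the total cost subject to $\sum_{\ell} V_{\ell}/m_{\ell} \le \varepsilon^2$ gives $m_{\ell} \propto \sqrt{V_{\ell}/C_{\ell}}$.

```python
import math

def mlmc_sample_counts(variances, costs, eps):
    """Per-level sample numbers m_ell = eps^-2 sqrt(V_ell/C_ell) * sum_k sqrt(V_k C_k),
    which minimize total cost subject to sum_ell V_ell / m_ell <= eps^2."""
    total = sum(math.sqrt(v * c) for v, c in zip(variances, costs))
    return [max(1, math.ceil(eps**-2 * math.sqrt(v / c) * total))
            for v, c in zip(variances, costs)]
```

For example, with variances [1.0, 0.25], costs [1.0, 4.0], and target RMSE 0.1, the formula puts most samples on the cheap coarse level and only a quarter as many on the fine one.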
We investigated the applicability and efficiency of the MLMC approach to the Henry-like problem with uncertain porosity, permeability and recharge. These uncertain parameters were modelled by random fields with three independent random variables. Permeability is a function of porosity. Both functions are time-dependent, have multi-scale behaviour and are defined for two layers. The numerical solution for each random realisation was obtained using the well-known ug4 parallel multigrid solver. The number of random samples required at each level was estimated by calculating the decay of the variances and the computational cost for each level.
The MLMC method was used to compute the expected value and variance of several QoIs, such as the solution at a few preselected points $(t,\bx)$, the solution integrated over a small subdomain, and the time evolution of the freshwater integral. We have found that some QoIs require only 2-3 mesh levels and samples from finer meshes would not significantly improve the result. Other QoIs require more grid levels.
1. Investigated the efficiency of MLMC for the Henry problem with uncertain porosity, permeability, and recharge.
2. Uncertainties are modeled by random fields.
3. MLMC can be much faster than MC, up to 3200 times faster!
4. The time dependence is challenging.
Remarks:
1. Check if MLMC is needed.
2. The optimal number of samples depends on the point $(t,\bx)$.
3. An advanced MLMC may give better estimates of $L$ and $m_{\ell}$.
Density Driven Groundwater Flow with Uncertain Porosity and Permeability (Alexander Litvinenko)
In this work, we solved the density driven groundwater flow problem with uncertain porosity and permeability. An accurate solution of this time-dependent and non-linear problem is impossible because of the presence of natural uncertainties in the reservoir such as porosity and permeability.
Therefore, we estimated the mean value and the variance of the solution, as well as the propagation of uncertainties from the random input parameters to the solution.
We started by defining the Elder-like problem. Then we described the multi-variate polynomial approximation (\gPC) approach and used it to estimate the required statistics of the mass fraction.
Utilizing the \gPC method allowed us to reduce the computational cost compared to the classical quasi-Monte Carlo method.
\gPC assumes that the output function $\sol(t,\bx,\thetab)$ is square-integrable and smooth w.r.t. the uncertain input variables $\thetab$.
Many factors, such as non-linearity, multiple solutions, multiple stationary states, time dependence and complicated solvers, make the investigation of the convergence of the \gPC method a non-trivial task.
We used an easy-to-implement, but only sub-optimal, \gPC technique to quantify the uncertainty. For example, it is known that by increasing the degree of global polynomials (Hermite, Lagrange, and similar), Runge's phenomenon appears. Here, local polynomials, splines, or mixtures of them would probably be better. Additionally, we used an easy-to-parallelise quadrature rule, which was also only suboptimal. For instance, an adaptive choice of sparse grid (or collocation) points \cite{ConradMarzouk13,nobile-sg-mc-2015,Sudret_sparsePCE,CONSTANTINE12,crestaux2009polynomial} would be better, but we were limited by the usage of parallel methods, and adaptive quadrature rules are not so well parallelisable. In conclusion, we can report that: a) we developed a highly parallel method to quantify uncertainty in the Elder-like problem; b) with \gPC of degree 4 we can achieve results similar to those of the \QMC method.
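To make the \gPC machinery concrete (a toy sketch under my own assumptions, not the paper's setup): for $u(\theta) = e^{\theta}$ with $\theta \sim N(0,1)$, the Hermite PCE coefficients are known exactly, $c_n = e^{1/2}/n!$, and can be computed by Gauss-Hermite quadrature; the mean is $c_0$ and the variance is $\sum_{n \ge 1} n!\, c_n^2$.

```python
import numpy as np
from math import factorial, sqrt, pi, exp
from numpy.polynomial import hermite_e as He  # probabilists' Hermite He_n

def pce_coeffs(f, degree, n_quad=40):
    """Coefficients c_n of f(theta) = sum_n c_n He_n(theta), theta ~ N(0,1),
    via Gauss-Hermite quadrature: c_n = E[f(theta) He_n(theta)] / n!."""
    x, w = He.hermegauss(n_quad)          # nodes/weights for weight exp(-x^2/2)
    fx = f(x)
    return [np.sum(w * fx * He.hermeval(x, [0.0] * n + [1.0]))
            / (sqrt(2.0 * pi) * factorial(n)) for n in range(degree + 1)]

c = pce_coeffs(np.exp, 4)
mean = c[0]                                             # E[u] = c_0
var = sum(factorial(n) * c[n]**2 for n in range(1, 5))  # truncated variance
```

The computed coefficients match the exact values $e^{1/2}/n!$ to quadrature precision, which is one way to sanity-check a \gPC implementation before applying it to an expensive PDE model.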
In the numerical section we considered two different aquifers: a solid parallelepiped and a solid elliptic cylinder. One of our goals was to see how the domain geometry influences the formation, the number, and the shape of fingers.
Since the considered problem is nonlinear,
a high variance in the porosity may result in totally different solutions; for instance, the number of fingers, their intensity and shape, the propagation time, and the velocity may vary considerably.
The number of cells in the presented experiments varied from $241{,}152$ to $15{,}433{,}728$ for the cylindrical domain and from $524{,}288$ to $4{,}194{,}304$ for the parallelepiped. The maximal number of parallel processing units was $600\times 32$, where $600$ is the number of parallel nodes and $32$ is the number of computing cores on each node. The total computing time varied from 2 hours for the coarse mesh to 24 hours for the finest mesh.
Saltwater intrusion occurs when sea levels rise and saltwater moves onto the land. Usually, this occurs during storms, high tides, droughts, or when saltwater penetrates freshwater aquifers and raises the groundwater table. Since groundwater is an essential resource for nutrition and irrigation, its salinization may lead to catastrophic consequences. Many acres of farmland may be lost because the soil becomes too wet or salty to grow crops. Therefore, accurate modeling of different scenarios of saline flow is essential to help farmers and researchers develop strategies to improve soil quality and decrease the effects of saltwater intrusion.
Saline flow is density-driven and described by a system of time-dependent nonlinear partial differential equations (PDEs). It features convection dominance and can demonstrate very complicated behavior.
As a specific model, we consider a Henry-like problem with uncertain permeability and porosity.
These parameters may strongly affect the flow and transport of salt.
We consider a class of density-driven flow problems. We are particularly interested in the problem of the salinization of coastal aquifers. We consider the Henry saltwater intrusion problem with uncertain porosity, permeability, and recharge parameters as a test case.
The reason for the presence of uncertainties is the lack of knowledge, inaccurate measurements,
and inability to measure parameters at each spatial or time location. This problem is nonlinear and time-dependent. The solution is the salt mass fraction, which is uncertain and changes in time. Uncertainties in porosity, permeability, recharge, and mass fraction are modeled using random fields. This work investigates the applicability of the well-known multilevel Monte Carlo (MLMC) method for such problems. The MLMC method can reduce the total computational and storage costs. Moreover, the MLMC method runs multiple scenarios on different spatial and time meshes and then estimates the mean value of the mass fraction.
The parallelization is performed in both the physical space and stochastic space. To solve every deterministic scenario, we run the parallel multigrid solver ug4 in a black-box fashion.
We use the solution obtained from the quasi-Monte Carlo method as a reference solution.
We investigated the applicability and efficiency of the MLMC approach for the Henry-like problem with uncertain porosity, permeability, and recharge. These uncertain parameters were modeled by random fields with three independent random variables. The numerical solution for each random realization was obtained using the well-known ug4 parallel multigrid solver. The number of required random samples on each level was estimated by computing the decay of the variances and computational costs for each level. We also computed the expected value and variance of the mass fraction in the whole domain, the evolution of the pdfs, the solutions at a few preselected points $(t,\bx)$, and the time evolution of the freshwater integral value. We have found that some QoIs require only 2-3 of the coarsest mesh levels, and samples from finer meshes would not significantly improve the result. Note that a different type of porosity may lead to a different conclusion.
The results show that the MLMC method is faster than the QMC method at the finest mesh. Thus, sampling at different mesh levels makes sense and helps to reduce the overall computational cost.
Here the interest is mainly to compute characterisations like the entropy,
the Kullback-Leibler divergence, more general $f$-divergences, or other such characteristics based on
the probability density. The density is often not available directly,
and it is a computational challenge to just represent it in a numerically
feasible fashion in case the dimension is even moderately large. It
is an even stronger numerical challenge to then actually compute said characteristics
in the high-dimensional case.
The task considered here was the numerical computation of characterising statistics of
high-dimensional pdfs, as well as their divergences and distances,
where the pdf in the numerical implementation was assumed discretised on some regular grid.
We have demonstrated that high-dimensional pdfs,
pcfs, and some functions of them
can be approximated and represented in a low-rank tensor data format.
Utilisation of low-rank tensor techniques helps to reduce the computational complexity
and the storage cost from exponential $\C{O}(n^d)$ to linear in the dimension $d$, e.g.\
$O(d n r^2)$ for the TT format. Here $n$ is the number of discretisation points in one direction, $r \ll n$ is the maximal tensor rank, and $d$ is the problem dimension.
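The storage claim can be checked by a one-line count (a uniform rank $r$ is assumed for simplicity; the boundary cores have rank 1 on one side):

```python
def tt_storage(n, d, r):
    """Floats stored by TT cores: boundary cores are 1 x n x r and r x n x 1,
    interior cores are r x n x r (uniform rank r assumed)."""
    if d == 1:
        return n
    return 2 * n * r + (d - 2) * n * r * r

def full_storage(n, d):
    """Floats stored by the full tensor: n^d entries."""
    return n ** d
```

For example, n = 100, d = 10, r = 5 gives 21,000 stored floats in TT format versus 10^20 entries for the full tensor.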
Talk presented at the workshop "Imaging With Uncertainty Quantification (IUQ)", September 2022,
https://people.compute.dtu.dk/pcha/CUQI/IUQworkshop.html
We consider a weakly supervised classification problem. It
is a classification problem where the target variable can be unknown
or uncertain for some subset of samples. This problem appears when
the labeling is impossible, time-consuming, or expensive. Noisy measurements
and lack of data may prevent accurate labeling. Our task
is to build an optimal classification function. For this, we construct and
minimize a specific objective function, which includes the fitting error on
labeled data and a smoothness term. Next, we use covariance and radial basis functions to define the degree of similarity between points. The further
process involves the repeated solution of an extensive linear system
with the graph Laplacian operator. To speed up this solution process,
we introduce low-rank approximation techniques. We call the resulting
algorithm WSC-LR. Then we use the WSC-LR algorithm for the analysis of CT brain scans to recognize ischemic stroke disease. We also compare
WSC-LR with other well-known machine learning algorithms.
Computing f-Divergences and Distances of High-Dimensional Probability Density... (Alexander Litvinenko)
Poster presented at the Stochastic Numerics and Statistical Learning: Theory and Applications Workshop at KAUST, Saudi Arabia.
The task considered here was the numerical computation of characterising statistics of
high-dimensional pdfs, as well as their divergences and distances,
where the pdf in the numerical implementation was assumed discretised on some regular grid.
Even for moderate dimension $d$, the full storage and computation with such objects become very quickly infeasible.
We have demonstrated that high-dimensional pdfs,
pcfs, and some functions of them
can be approximated and represented in a low-rank tensor data format.
Utilisation of low-rank tensor techniques helps to reduce the computational complexity
and the storage cost from exponential $\C{O}(n^d)$ to linear in the dimension $d$, e.g.
$O(d n r^2)$ for the TT format. Here $n$ is the number of discretisation points in one direction, $r \ll n$ is the maximal tensor rank, and $d$ is the problem dimension.
The particular data format is rather unimportant,
any of the well-known tensor formats (CP, Tucker, hierarchical Tucker, tensor-train (TT)) can be used,
and we used the TT data format. Much of the presentation, and in fact the central thread of discussion and thought, is actually independent of the actual representation.
In the beginning, the representation was motivated through three possible ways in which one may arrive at such a representation of the pdf: the pdf may be given in some approximate analytical form, e.g. as a function tensor product of lower-dimensional pdfs with a product measure; it may come from an analogous representation of the pcf and subsequent use of the Fourier transform; or it may come from a low-rank functional representation of a high-dimensional RV, again via its pcf.
The theoretical underpinnings of the relation between pdfs and pcfs as well as their
properties were recalled in Section: Theory, as they are important to be preserved in the
discrete approximation. This also introduced the concepts of the convolution and of
the point-wise multiplication (Hadamard) algebra, concepts which become especially important if
one wants to characterise sums of independent RVs or mixture models,
a topic we did not touch on for the sake of brevity but which follows very naturally from
the developments here. Especially the Hadamard algebra is also
important for the algorithms to compute various point-wise functions in the sparse formats.
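A 1D sketch of the convolution algebra mentioned above (grid sizes and names are my assumptions): the pdf of a sum of independent RVs is the convolution of their pdfs, computed here via the FFT of the discretised pcf; in the TT setting the same operation acts core-by-core in $d$ dimensions.

```python
import numpy as np

n, L = 2048, 20.0
x = np.linspace(-L / 2, L / 2, n, endpoint=False)
dx = x[1] - x[0]
p = np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)     # pdf of N(0,1) on the grid

# pdf of X + Y for independent X, Y ~ N(0,1): convolution p * p via the FFT
phat = np.fft.fft(np.fft.ifftshift(p))         # discrete analogue of the pcf
p_sum = np.fft.fftshift(np.fft.ifft(phat**2).real) * dx

exact = np.exp(-x**2 / 4) / np.sqrt(4 * np.pi) # exact pdf of N(0,2)
```

Because the Gaussian tails are negligible at the domain boundary, the circular FFT convolution matches the exact $N(0,2)$ density to high accuracy, and the result still integrates to one.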
Computing f-Divergences and Distances of High-Dimensional Probability Densi... (Alexander Litvinenko)
Talk presented at the SIAM IS 2022 conference.
Very often, in the course of uncertainty quantification tasks or
data analysis, one has to deal with high-dimensional random variables (RVs)
(with values in $\Rd$). Just like any other RV,
a high-dimensional RV can be described by its probability density (\pdf) and/or
by the corresponding probability characteristic functions (\pcf),
or a more general representation as
a function of other, known, random variables.
Here the interest is mainly to compute characterisations like the entropy, the Kullback-Leibler, or more general
$f$-divergences. These are all computed from the \pdf, which is often not available directly,
and it is a computational challenge to even represent it in a numerically
feasible fashion in case the dimension $d$ is even moderately large. It
is an even stronger numerical challenge to then actually compute said characterisations
in the high-dimensional case.
In this regard, in order to achieve a computationally feasible task, we propose
to approximate the density by a low-rank tensor.
Low rank tensor approximation of probability density and characteristic funct... (Alexander Litvinenko)
Very often one has to deal with high-dimensional random variables (RVs). A high-dimensional RV can be described by its probability density (\pdf) and/or by the corresponding probability characteristic functions (\pcf), or by a function representation. Here the interest is mainly to compute characterisations like the entropy, or
relations between two distributions, like their Kullback-Leibler divergence, or more general measures such as $f$-divergences,
among others. These are all computed from the \pdf, which is often not available directly, and it is a computational challenge to even represent it in a numerically feasible fashion in case the dimension $d$ is even moderately large. It is an even stronger numerical challenge to then actually compute said characterisations in the high-dimensional case.
In this regard, in order to achieve a computationally feasible task, we propose to represent the density by a high order tensor product, and approximate this in a low-rank format.
Identification of unknown parameters and prediction of missing values. Compar... (Alexander Litvinenko)
H-matrix approximation of large Mat\'{e}rn covariance matrices and Gaussian log-likelihoods.
Identifying unknown parameters and making predictions.
Comparison with machine learning methods.
kNN is easy to implement and shows promising results.
Computation of electromagnetic fields scattered from dielectric objects of un... (Alexander Litvinenko)
We develop fast and efficient stochastic methods for characterizing scattering
from objects of uncertain shapes. This is highly needed in the
fields of electromagnetics, optics, and photonics.
The continuation multilevel Monte Carlo (CMLMC) method is
used together with a surface integral equation solver. The
CMLMC method optimally balances statistical errors due to
sampling of the parametric space, and numerical errors due
to the discretization of the geometry using a hierarchy of
discretizations, from coarse to fine. The number of realizations
of finer discretizations can be kept low, with most samples
computed on coarser discretizations to minimize computational
work. Consequently, the total execution time is significantly
reduced, in comparison to the standard MC scheme.
Identification of unknown parameters and prediction with hierarchical matrice... (Alexander Litvinenko)
We compare four numerical methods for the prediction of missing values in four different datasets.
These methods are 1) the hierarchical maximum likelihood estimation (H-MLE), and three machine learning (ML) methods, which include 2) k-nearest neighbors (kNN), 3) random forest, and 4) Deep Neural Network (DNN).
From the ML methods, the best results (for considered datasets) were obtained by the kNN method with three (or seven) neighbors.
On one dataset, the MLE method showed a smaller error than the kNN method, whereas, on another, the kNN method was better.
The MLE method requires a lot of linear algebra computations and works well on almost all datasets. Its results can be improved by taking a smaller threshold and more accurate hierarchical matrix arithmetic. To our surprise, the well-known kNN method produced results similar to H-MLE and worked much faster.
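A minimal sketch of the kNN predictor being compared (the datasets and tuned implementations in the work are more elaborate; the function name and the toy data below are illustrative only):

```python
import numpy as np

def knn_predict(X_train, y_train, X_query, k=3):
    """Predict a missing value as the mean of the k nearest observed values."""
    preds = []
    for x in X_query:
        d = np.linalg.norm(X_train - x, axis=1)  # Euclidean distances
        idx = np.argsort(d)[:k]                  # indices of k nearest points
        preds.append(y_train[idx].mean())
    return np.array(preds)
```

With k = 3 and the toy data below, the query at 1.1 averages the observations at 0, 1, and 2, ignoring the distant outlier at 10.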
1. Motivation: why do we need low-rank tensors
2. Tensors of the second order (matrices)
3. CP, Tucker and tensor train tensor formats
4. Many classical kernels have (or can be approximated in) a low-rank tensor format
5. Post processing: Computation of mean, variance, level sets, frequency
Computation of electromagnetic fields scattered from dielectric objects of un... (Alexander Litvinenko)
Computational tools for characterizing electromagnetic scattering from objects with uncertain shapes are needed in various applications ranging from remote sensing at microwave frequencies to Raman spectroscopy at optical frequencies. Often, such computational tools use the Monte Carlo (MC) method to sample a parametric space describing geometric uncertainties. For each sample, which corresponds to a realization of the geometry, a deterministic electromagnetic solver computes the scattered fields. However, for an accurate statistical characterization the number of MC samples has to be large. In this work, to address this challenge, the continuation multilevel Monte Carlo (\CMLMC) method is used together with a surface integral equation solver.
The \CMLMC method optimally balances statistical errors due to sampling of
the parametric space, and numerical errors due to the discretization of the geometry using a hierarchy of discretizations, from coarse to fine.
The number of realizations of finer discretizations can be kept low, with most samples
computed on coarser discretizations to minimize computational cost.
Consequently, the total execution time is significantly reduced, in comparison to the standard MC scheme.
Computation of electromagnetic fields scattered from dielectric objects of un...Alexander Litvinenko
Computational tools for characterizing electromagnetic scattering from objects with uncertain shapes are needed in various applications ranging from remote sensing at microwave frequencies to Raman spectroscopy at optical frequencies. Often, such computational tools use the Monte Carlo (MC) method to sample a parametric space describing geometric uncertainties. For each sample, which corresponds to a realization of the geometry, a deterministic electromagnetic solver computes the scattered fields. However, for an accurate statistical characterization the number of MC samples has to be large. In this work, to address this challenge, the continuation multilevel Monte Carlo (\CMLMC) method is used together with a surface integral equation solver.
The \CMLMC method optimally balances statistical errors due to sampling of
the parametric space, and numerical errors due to the discretization of the geometry using a hierarchy of discretizations, from coarse to fine.
The number of realizations of finer discretizations can be kept low, with most samples
computed on coarser discretizations to minimize computational cost.
Consequently, the total execution time is significantly reduced, in comparison to the standard MC scheme.
Propagation of Uncertainties in Density Driven Groundwater FlowAlexander Litvinenko
Major Goal: estimate risks of the pollution in a subsurface flow.
How?: we solve density-driven groundwater flow with uncertain porosity and permeability.
We set up density-driven groundwater flow problem,
review stochastic modeling and stochastic methods, use UG4 framework (https://gcsc.uni-frankfurt.de/simulation-and-modelling/ug4),
model uncertainty in porosity and permeability,
2D and 3D numerical experiments.
Introduction to AI for Nonprofits with Tapp NetworkTechSoup
Dive into the world of AI! Experts Jon Hill and Tareq Monaur will guide you through AI's role in enhancing nonprofit websites and basic marketing strategies, making it easy to understand and apply.
Model Attribute Check Company Auto PropertyCeline George
In Odoo, the multi-company feature allows you to manage multiple companies within a single Odoo database instance. Each company can have its own configurations while still sharing common resources such as products, customers, and suppliers.
Unit 8 - Information and Communication Technology (Paper I).pdfThiyagu K
This slides describes the basic concepts of ICT, basics of Email, Emerging Technology and Digital Initiatives in Education. This presentations aligns with the UGC Paper I syllabus.
Welcome to TechSoup New Member Orientation and Q&A (May 2024).pdfTechSoup
In this webinar you will learn how your organization can access TechSoup's wide variety of product discount and donation programs. From hardware to software, we'll give you a tour of the tools available to help your nonprofit with productivity, collaboration, financial management, donor tracking, security, and more.
Synthetic Fiber Construction in lab .pptxPavel ( NSTU)
Synthetic fiber production is a fascinating and complex field that blends chemistry, engineering, and environmental science. By understanding these aspects, students can gain a comprehensive view of synthetic fiber production, its impact on society and the environment, and the potential for future innovations. Synthetic fibers play a crucial role in modern society, impacting various aspects of daily life, industry, and the environment. ynthetic fibers are integral to modern life, offering a range of benefits from cost-effectiveness and versatility to innovative applications and performance characteristics. While they pose environmental challenges, ongoing research and development aim to create more sustainable and eco-friendly alternatives. Understanding the importance of synthetic fibers helps in appreciating their role in the economy, industry, and daily life, while also emphasizing the need for sustainable practices and innovation.
Instructions for Submissions thorugh G- Classroom.pptxJheel Barad
This presentation provides a briefing on how to upload submissions and documents in Google Classroom. It was prepared as part of an orientation for new Sainik School in-service teacher trainees. As a training officer, my goal is to ensure that you are comfortable and proficient with this essential tool for managing assignments and fostering student engagement.
My paper for Domain Decomposition Conference in Strobl, Austria, 2005
H-matrix based preconditioner for the skin problem
B.N.Khoromskij, A.Litvinenko
{bokh, litvinen}@mis.mpg.de
Max Planck Institute for Mathematics in the Sciences
Leipzig. 18/08/2006
Abstract
In this paper we propose and analyze a new H-Cholesky based preconditioner for the so-called
skin problem [5]. After a special reordering of the indices and omitting the coupling, we obtain a
block-diagonal matrix which is very well suited for the hierarchical Cholesky (H-Cholesky) factorization.
We perform the H-Cholesky factorization of this matrix and use it as a preconditioner
for the cg method. We show that the new preconditioner requires less memory and computational
time than the standard H-Cholesky preconditioner, which is itself already cheap and fast.
Key words: skin problem, H-matrix approximation, hierarchical Cholesky, jumping
coefficients, domain decomposition.
1 Introduction
In the series of papers [7], [9], [10] the authors successfully apply iterative methods (cg,
gmres, bicgstab) with H-matrix based preconditioners to different types of second-order
elliptic problems. In this paper we continue the research in this direction.
Under certain conditions H-matrices can even be used as a direct solver. There are also
results (see, e.g., [11] and the references therein) where the authors apply additive Schwarz domain
decomposition preconditioners. It is known that for problems with jumping coefficients
(see (1)) the condition number satisfies

cond(A) ∼ h^{-d} · sup_{x,y ∈ Ω} α(x)/α(y),

where α(x) denotes the jumping coefficient, d the spatial dimension and h the grid step size. This is
why a good preconditioner W is needed, so that cond(W^{-1}A) ≃ 1.
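This dependence is easy to see already in one dimension. The following sketch (a hypothetical 1D finite-difference model, not the 3D FEM setting of this paper) assembles the stiffness matrix for a coefficient that drops to ε on an inner "cell" and shows how cond(A) grows with the jump:

```python
import numpy as np

def stiffness_1d(alpha):
    """1D finite-difference stiffness for -(alpha u')' = f with Dirichlet BCs;
    alpha[i] is the diffusion coefficient on element i."""
    n = len(alpha) - 1                 # number of interior nodes
    A = np.zeros((n, n))
    for i in range(n):
        A[i, i] = alpha[i] + alpha[i + 1]
        if i + 1 < n:
            A[i, i + 1] = A[i + 1, i] = -alpha[i + 1]
    return A

m = 64                                 # number of elements
conds = []
for eps in [1.0, 1e-2, 1e-4]:
    alpha = np.ones(m)
    alpha[m // 4 : 3 * m // 4] = eps   # a "cell" with low penetration coefficient
    conds.append(np.linalg.cond(stiffness_1d(alpha)))
print(conds)                           # grows roughly like 1/eps
```

The baseline (ε = 1) already carries the usual h^{-2} factor; each further decrease of ε multiplies the condition number accordingly.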
In this paper we consider a diffusion process (see (1)) through the domain shown in
Fig. 1 (left). This figure shows the cells and the lipid layer between them. In this problem
the Dirichlet boundary condition models the presence of some drug on the boundary γ
of the skin fragment. The right-hand side represents external forces. The zero Neumann
condition on Γγ means that there is no penetration through the surface Γγ. Typical for
the skin problem are the highly jumping coefficients: the penetration coefficient inside the
cells is very low, ∼ 10^{-5}-10^{-3}, but it is large between the cells.
The diffusion equation has the form:

div(α(x)∇u) = f,   x ∈ Ω,
          u = 0,   x ∈ γ,
      ∂u/∂n = g,   x ∈ Γγ,        (1)

where Γ = ∂Ω, α(x) = ε ≪ 1 in the cells and α(x) = β = 1 in between.

Figure 1: (left) A skin fragment consists of cells and of the lipid layer between them. The penetration
through the cells is very slow and very fast through the lipid layer. (right) The
simplified model of a skin fragment contains 8 cells with the lipid layer between them.
Ω = [−1, 1]^3, α(x) = ε inside the cells and α(x) = β = 1 in the lipid layer.

The rest of this paper is structured as follows. In Section 2 we describe the discretisation, which is done
by FEM. We recall the main idea of the H-matrix technique in Section 3. Section 4 is
devoted to the new preconditioner and estimations of its complexity. Numerical tests and
comparisons of different preconditioners are provided in Section 5. Finally, some remarks
conclude the paper.
2 Discretisation (FEM)
Let us choose a triangulation τ_h which is compatible with the lipid layer, i.e., τ_h :=
τ_h^1 ∪ τ_h^2, where τ_h^1 is a triangulation of the lipid layer and τ_h^2 a triangulation of the cells. Let
b_j, j = 1, ..., n, be piecewise linear basis functions and

V_h ⊂ H^1(Ω),   V_h := span{b_1, ..., b_n}.   (2)

Then the variational formulation of the initial problem is:

find u_h ∈ V_h such that a(u_h, v) = c(v) for all v ∈ V_h.   (3)

Using (2), we obtain the equivalent algebraic problem

Au = c,   where A_ij = a(b_j, b_i) and c_i := c(b_i), i, j = 1, ..., n.   (4)

Here

a(b_j, b_i) = ∫_Ω α ∇b_j · ∇b_i dx,   c_j := ∫_Ω f b_j dx + ∫_{Γγ} g b_j dΓ.   (5)
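As a minimal illustration of the assembly (4)-(5), the following sketch assembles the 1D analogue with piecewise-linear hat functions on [0, 1]; homogeneous Dirichlet conditions are assumed on both ends, so the boundary integral in (5) vanishes (the coefficient and right-hand side are hypothetical):

```python
import numpy as np

def assemble_1d(alpha, f, n):
    """Assemble A_ij = int alpha * b_j' * b_i' dx and c_i = int f * b_i dx
    for piecewise-linear hat functions on [0, 1] with u(0) = u(1) = 0."""
    nodes = np.linspace(0.0, 1.0, n + 2)
    h = nodes[1] - nodes[0]
    A = np.zeros((n, n))
    c = np.zeros(n)
    for e in range(n + 1):             # element e spans [nodes[e], nodes[e+1]]
        mid = 0.5 * (nodes[e] + nodes[e + 1])
        k_loc = alpha(mid) / h * np.array([[1.0, -1.0], [-1.0, 1.0]])
        f_loc = f(mid) * h / 2.0       # midpoint quadrature for the load
        dofs = (e - 1, e)              # global indices of the interior dofs
        for i_loc, i in enumerate(dofs):
            if not 0 <= i < n:
                continue               # skip boundary dofs
            c[i] += f_loc
            for j_loc, j in enumerate(dofs):
                if 0 <= j < n:
                    A[i, j] += k_loc[i_loc, j_loc]
    return A, c

alpha = lambda x: 1e-4 if 0.25 < x < 0.75 else 1.0   # jumping coefficient
A, c = assemble_1d(alpha, lambda x: 1.0, n=99)
u = np.linalg.solve(A, c)
print(f"u_max = {u.max():.3e}")
```

The resulting A is symmetric positive definite; with the jumping coefficient it already exhibits the ill-conditioning discussed in the introduction.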
The lipid layer between the cells defines a natural decomposition of Ω. The width of
this layer is proportional to the grid step size h. Note that after a reordering of the indices,
we can represent the global stiffness matrix in the following form:

A = ( A11   εA12 )
    ( εA21  εA22 ).   (6)

Here A11 and A22 are the stiffness matrices which correspond to the lipid layer and to the
rest of the domain, respectively; A12, A21 are coupling matrices. To simplify the model we
consider Ω as in Fig. 1 (right).
3 Hierarchical Matrices
The hierarchical matrices (H-matrices) were introduced in 1998 by Hackbusch [2] and
since then they have been applied in a wide range of applications. They provide a
format for the data-sparse representation of fully populated matrices. Suppose there are
two matrices A ∈ R^{n×k} and B ∈ R^{m×k}, k ≪ min(n, m), such that AB^T = R ∈ R^{n×m}. We
then say that R is a rank-k matrix. The main idea of H-matrices is to approximate
certain subblocks of a given matrix by rank-k matrices. The admissible partitioning
indicates which blocks can be approximated by rank-k matrices. The storage requirement
for the matrices A and B is k(n + m) instead of n · m for the matrix R. One of the biggest
advantages of H-matrices is that the complexity of H-matrix addition, multiplication
and inversion is not larger than C k n log^q n, q = 1, 2 (see [2], [13]). The drawback is that the
constant C is large; in the 3D case, for example, it can exceed 120.
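The storage saving can be sketched with a truncated SVD, which for a well-separated kernel block produces exactly the factors A and B above (NumPy sketch; the kernel 1/(x+y) and the interval sizes are illustrative assumptions):

```python
import numpy as np

n, m, k = 200, 150, 10
# An admissible (well-separated) kernel block: singular values decay fast.
x = np.linspace(1.0, 2.0, n)[:, None]
y = np.linspace(5.0, 6.0, m)[None, :]
R = 1.0 / (x + y)

U, s, Vt = np.linalg.svd(R, full_matrices=False)
A = U[:, :k] * s[:k]                 # n x k factor
B = Vt[:k, :].T                      # m x k factor, so R ≈ A @ B.T

err = np.linalg.norm(R - A @ B.T) / np.linalg.norm(R)
print(f"storage {k * (n + m)} vs {n * m}, rel. error {err:.1e}")
```

Here 10 · (200 + 150) = 3500 stored entries replace 30000, with a relative error far below typical discretisation accuracy.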
To build an H-matrix one needs an admissible block partitioning (see Fig. 2). To build
this partitioning one needs an admissibility condition and a block cluster tree. To build
the block cluster tree a cluster tree is necessary. The cluster tree requires grid data. For
more details see [2] or [13].
Figure 2: The schema of building an H-matrix and its H-Cholesky factorisation:
vertices/finite elements → cluster tree → (together with the admissibility condition)
block cluster tree → admissible partitioning → H-matrix → H-Cholesky factorization.
Definition 3.1 We define the set of H-matrices with maximal rank k as follows:

H(T_{I×J}, k) := {M ∈ R^{I×J} | rank(M|_{t×s}) ≤ k for all admissible leaves t × s of T_{I×J}}.
Algorithm of the H-Cholesky factorization

Our aim is to compute the H-Cholesky factorization of the stiffness matrix which appears
after discretisation of the Laplace operator. Suppose that

A = ( A11  A12 )  =  ( L11   0  ) ( U11  U12 )
    ( A21  A22 )     ( L21  L22 ) (  0   U22 ),

then the algorithm is as follows:
1. compute L11 and U11 as the H-Cholesky decomposition of A11;
2. compute U12 from L11 U12 = A12 (use a recursive block forward substitution);
3. compute L21 from L21 U11 = A21 (use a recursive block backward substitution);
4. compute L22 and U22 as the H-Cholesky decomposition of L22 U22 = A22 ⊖ L21 ⊙ U12.
All the steps are executed in the class of H-matrices.
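As a dense stand-in for these four steps (the H-arithmetic operations ⊖, ⊙ replaced by exact dense operations, and the symmetry L = U^T of the Cholesky case exploited), the recursion can be sketched as:

```python
import numpy as np

def block_cholesky(A, leaf=64):
    """Recursive block Cholesky A = L @ L.T, mirroring steps 1-4 above."""
    n = A.shape[0]
    if n <= leaf:                                   # step 1 on leaf blocks
        return np.linalg.cholesky(A)
    h = n // 2
    A11, A12, A22 = A[:h, :h], A[:h, h:], A[h:, h:]
    L11 = block_cholesky(A11, leaf)                 # step 1
    # steps 2-3: forward substitution L11 U12 = A12; by symmetry L21 = U12.T
    L21 = np.linalg.solve(L11, A12).T
    # step 4: factorize the Schur complement A22 - L21 L21.T
    L22 = block_cholesky(A22 - L21 @ L21.T, leaf)
    L = np.zeros_like(A)
    L[:h, :h], L[h:, :h], L[h:, h:] = L11, L21, L22
    return L

rng = np.random.default_rng(0)
G = rng.standard_normal((200, 200))
A = G @ G.T + 200 * np.eye(200)                     # SPD test matrix
L = block_cholesky(A)
print(np.linalg.norm(A - L @ L.T) / np.linalg.norm(A))
```

In the H-matrix setting each dense operation is replaced by its truncated H-arithmetic counterpart, which is where the log-linear complexity comes from.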
4 New Preconditioner
The H-Cholesky factorization of the stiffness matrix produces an H-matrix as shown in Fig.
3 (left). After reordering the index set I(Ω) and omitting the coupling between the cells and
the lipid layer, we obtain an H-matrix as shown in Fig. 3 (right). As a new preconditioner
we use the H-Cholesky decomposition of

W = ( A11   0   )
    (  0   εA22 ).   (7)

Remark 4.1 Note that W^{-1}A := (LL^T)^{-1}A = L^{-T}L^{-1}A, which is similar to the
symmetric and positive definite matrix L^{-1}AL^{-T}. Thus, for solving the initial problem (4) we apply the pcg
method with the H-Cholesky preconditioner.

Below we prove that omitting the coupling is justified for small ε.
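A small numerical sketch of this construction (a random SPD model matrix with the block structure (6) is an assumption for illustration; the preconditioner (7) is applied through its dense Cholesky factors inside a hand-written pcg):

```python
import numpy as np

def pcg(A, b, W_solve, tol=1e-8, maxit=500):
    """Preconditioned CG; W_solve(r) applies W^{-1} to a residual."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = W_solve(r)
    p = z.copy()
    rz = r @ z
    for it in range(maxit):
        Ap = A @ p
        a = rz / (p @ Ap)
        x += a * p
        r -= a * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            return x, it + 1
        z = W_solve(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, maxit

# Model problem with the structure (6): A = [[A11, eps*A12], [eps*A21, eps*A22]]
rng = np.random.default_rng(1)
n1 = n2 = 100
M = rng.standard_normal((n1 + n2, n1 + n2))
M = M @ M.T + (n1 + n2) * np.eye(n1 + n2)   # SPD "unscaled" stiffness
eps = 1e-4
A = M.copy()
A[:n1, n1:] *= eps
A[n1:, :n1] *= eps
A[n1:, n1:] *= eps                          # A = eps*M + (1-eps)*diag(M11, 0), SPD

# Preconditioner (7): W = blockdiag(A11, eps*A22), applied via Cholesky factors
L1 = np.linalg.cholesky(A[:n1, :n1])
L2 = np.linalg.cholesky(A[n1:, n1:])
def W_solve(r):
    z = np.empty_like(r)
    z[:n1] = np.linalg.solve(L1.T, np.linalg.solve(L1, r[:n1]))
    z[n1:] = np.linalg.solve(L2.T, np.linalg.solve(L2, r[n1:]))
    return z

b = rng.standard_normal(n1 + n2)
x, iters = pcg(A, b, W_solve)
print(iters, np.linalg.norm(A @ x - b))
```

Despite ε = 10^{-4}, pcg converges in a handful of iterations, because the block-diagonal W captures the dominant part of A.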
Lemma 4.1 For a symmetric and positive definite matrix A = ( A11 A12 ; A21 A22 ) and any
vector v = (v1, v2)^T it holds that

(A12 v2, v1) ≤ ‖A11^{1/2} v1‖ · ‖A22^{1/2} v2‖.

Proof: From the Cauchy-Schwarz inequality in the A-inner product it follows for any vectors u, v that

u^T A v = (u, v)_A ≤ ‖u‖_A · ‖v‖_A.

Choose u = (v1, 0)^T and v = (0, v2)^T; then u^T A v = (A12 v2, v1), ‖u‖_A = ‖A11^{1/2} v1‖
and ‖v‖_A = ‖A22^{1/2} v2‖, and hence

(A12 v2, v1) ≤ ‖A11^{1/2} v1‖ · ‖A22^{1/2} v2‖.
Lemma 4.2 For a symmetric and positive definite matrix A = ( A11 A12 ; A21 A22 ) and any
vectors u1, u2 it holds that

2(A12 u2, u1) ≤ (A11 u1, u1) + (A22 u2, u2),

and therefore

(A12 u2, u1) ≤ ½ ( ‖A11^{1/2} u1‖² + ‖A22^{1/2} u2‖² ).

Proof: Consider the vector u = (u1, −u2)^T. From the positive definiteness of A it follows that

0 ≤ (Au, u) = (A11 u1, u1) − (A12 u2, u1) − (A21 u1, u2) + (A22 u2, u2).

Moving the negative terms to the left, we obtain

(A12 u2, u1) + (A21 u1, u2) ≤ (A11 u1, u1) + (A22 u2, u2).

Since A is symmetric, (A21 u1, u2) = (A12 u2, u1), so that

2(A12 u2, u1) ≤ (A11 u1, u1) + (A22 u2, u2) = (A11^{1/2} u1, A11^{1/2} u1) + (A22^{1/2} u2, A22^{1/2} u2),

which gives (A12 u2, u1) ≤ ½ ( ‖A11^{1/2} u1‖² + ‖A22^{1/2} u2‖² ).
Lemma 4.3 Let u = (u1, u2)^T be a vector and W = ( A11 0 ; 0 εA22 ) the preconditioner from (7); then

(Au, u) ≤ 2(Wu, u).   (8)

Proof: Compute both scalar products:

(Wu, u) = (A11 u1, u1) + ε(A22 u2, u2),

(Au, u) = (A11 u1, u1) + 2ε(A12 u2, u1) + ε(A22 u2, u2) = (Wu, u) + 2ε(A12 u2, u1).

From Lemma 4.2 and ε ≤ 1 it follows that 2ε(A12 u2, u1) ≤ ε(A11 u1, u1) + ε(A22 u2, u2) ≤ (Wu, u),
hence (Au, u) ≤ (Wu, u) + (Wu, u).
Remark 4.2 Recall that A and W are spectrally equivalent if c1 (Wu, u) ≤ (Au, u) ≤ c2 (Wu, u)
for all u ∈ R^n.

Lemma 4.4 The matrices A and W are spectrally equivalent with I ≤ W^{-1}A ≤ 2·I.

Proof: We write A ≥ B if A − B is positive semidefinite. From Lemma 4.3 it follows that
(Au, u) ≤ 2(Wu, u) for all u ∈ R^n. Moving everything to the left, we obtain ((A − 2W)u, u) ≤ 0.
Since this holds for all u, we have A − 2W ≤ 0, i.e., W^{-1}A ≤ 2·I.
From the construction of W it is clear that A − W ≥ 0, i.e., W^{-1}A ≥ I.
Thus, I ≤ W^{-1}A ≤ 2·I.
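The bound of Lemma 4.4, in particular the upper bound 2, can be checked numerically on a random SPD model with the structure (6)-(7) (the model matrix is an assumption for illustration; the lower bound I relies on the construction of W from the actual stiffness matrix and need not be attained for this random model, so only positivity is checked):

```python
import numpy as np

rng = np.random.default_rng(2)
n1 = n2 = 60
G = rng.standard_normal((n1 + n2, n1 + n2))
M = G @ G.T + (n1 + n2) * np.eye(n1 + n2)   # SPD, plays the unscaled stiffness

for eps in [1.0, 1e-2, 1e-4]:
    A = M.copy()
    A[:n1, n1:] *= eps                      # the structure (6)
    A[n1:, :n1] *= eps
    A[n1:, n1:] *= eps
    W = np.zeros_like(A)                    # the preconditioner (7)
    W[:n1, :n1] = M[:n1, :n1]
    W[n1:, n1:] = eps * M[n1:, n1:]
    ev = np.linalg.eigvals(np.linalg.solve(W, A)).real
    print(eps, ev.min(), ev.max())          # max stays below 2, as in Lemma 4.4
```

Since W^{-1}A is similar to the symmetric matrix W^{-1/2}AW^{-1/2}, all its eigenvalues are real and positive, and the upper bound 2 is observed uniformly in ε.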
Figure 4: Decay of the singular values of A for ε = 1, ε = 10^{-2} and ε = 10^{-4}.
For a domain with a larger number of cells, the difference between the sparsity constants will be more
significant.
Table 2 shows the resource requirements of the preconditioners W1 and W2. We see
that W2 requires fewer resources than W1: less memory (S(W1) > S(W2)) and less
time (t(W1) > t(W2)) for the building. Columns 2 and 5 contain the times for computing
the Cholesky factorisations and the cg iterations.

k   t(W1), sec    S(W1), MB   iter(W1)   t(W2), sec    S(W2), MB   iter(W2)
1   24 + 10.6     2·10^2      69         8.7 + 10      10^2        99
2   70 + 11.3     3.8·10^2    46         21.6 + 13.3   1.8·10^2    91
4   208 + 12.5    7.5·10^2    17         68 + 13.5     3.5·10^2    60
6   483.7 + 82    1.1·10^3    11         123 + 26      5.1·10^2    74

Table 2: Comparison of the preconditioners W1 and W2. 40^3 dofs, ‖Ax − b‖ = 10^{-8},
α = 10^{-5}.

In Table 3 we compare the solutions ũ and u_cg, obtained with the preconditioners W2 and W1.
The solution u_cg, obtained with the preconditioner W1, is considered as 'exact'.
k   ‖u_cg − ũ‖/‖ũ‖   ‖u_cg − ũ‖_∞
1   5.3·10^{-10}      4.5·10^{-6}
2   5.1·10^{-9}       3.5·10^{-8}
4   5.8·10^{-10}      4.6·10^{-6}
6   7.2·10^{-10}      2.5·10^{-5}

Table 3: Comparison of the solutions u_cg and ũ. 40^3 dofs, ‖Ax − b‖ = 10^{-8}, α = 10^{-5}.

6 Conclusion

The matrix W2 can be successfully used as a preconditioner. The simple structure of
W2 is the reason why it parallelises well. The parallel computational complexity is
max{O(n_I log^2 n_I), O(n_D log^2 n_D)}, where n_D := (n − n_I)/(p − 1) and n_I is the number of degrees of freedom in the
lipid layer. The sequential version of the preconditioner W2 requires less memory. Note
that the more cells the domain Ω contains, the bigger the advantages in storage and computational
resources will be (see Table 2). The disadvantage is the relatively large number of
pcg iterations, but these iterations require fewer resources than with the standard H-Cholesky
preconditioner W1. Within HLIB (see [1]) it is quite easy to implement the proposed
preconditioner.

Acknowledgment: The authors wish to thank Prof. Dr. Hackbusch for his corrections,
as well as Dr. Börm and Dr. Grasedyck for HLIB.
References

[1] Hierarchical matrix library: www.hlib.org

[2] W. Hackbusch: A sparse matrix arithmetic based on H-matrices. Part I: Introduction
to H-matrices. Computing, 62:89-108, 1999.

[3] W. Hackbusch: Direct Domain Decomposition using the Hierarchical Matrix Technique.
In: Domain Decomposition Methods in Science and Engineering, pp. 39-50, Cocoyoc,
Mexico, 2003.

[4] W. Hackbusch, B.N. Khoromskij and R. Kriemann: Direct Schur Complement
Method by Hierarchical Matrix Techniques. Computing and Visualization in Science,
8:179-188, 2005.

[5] B.N. Khoromskij and G. Wittum: Numerical Solution of Elliptic Differential Equations
by Reduction to the Interface. LNCSE 36, Springer, 2004.

[6] M. Bebendorf and W. Hackbusch: Existence of H-matrix approximants to the inverse
FE-matrix of elliptic operators with L∞-coefficients. Numerische Mathematik, 95:1-28, 2003.

[7] M. Bebendorf: Hierarchical LU decomposition-based preconditioners for BEM.
Computing, 74:225-247, 2005.

[8] S. Le Borne, R. Kriemann and L. Grasedyck: Parallel Black Box Domain Decomposition
Based H-LU Preconditioning. Preprint 115, Max-Planck-Institut MIS, Leipzig, 2005.

[9] S. Le Borne and L. Grasedyck: H-matrix preconditioners in convection-dominated
problems. SIAM J. Matrix Anal. Appl., 27(4):1172-1183.

[10] S. Le Borne: H-matrices for convection-diffusion problems with constant convection.
Computing, 70:261-274, 2003.

[11] I.G. Graham, P. Lechner and R. Scheichl: Domain Decomposition for Multiscale
PDEs. Bath Institute for Complex Systems, Preprint 11/06, 2006, available at
www.bath.ac.uk/math-sci/BICS

[12] A. Litvinenko: Application of Hierarchical Matrices for Solving Multiscale Problems.
PhD Dissertation, Leipzig University, submitted, April 2006.

[13] L. Grasedyck and W. Hackbusch: Construction and Arithmetics of H-Matrices.
Computing, 70:295-334, 2003.

[14] M. Lintner: The eigenvalue problem for the Laplacian in H-matrix arithmetic
and application to the heat and wave equation. Computing, 72:293-323, 2004.