Computational tools for characterizing electromagnetic scattering from objects with uncertain shapes are needed in applications ranging from remote sensing at microwave frequencies to Raman spectroscopy at optical frequencies. Often, such tools use the Monte Carlo (MC) method to sample a parametric space describing geometric uncertainties. For each sample, which corresponds to one realization of the geometry, a deterministic electromagnetic solver computes the scattered fields. However, for an accurate statistical characterization, the number of MC samples has to be large. In this work, to address this challenge, the continuation multilevel Monte Carlo (\CMLMC) method is used together with a surface integral equation solver.
The \CMLMC method optimally balances the statistical error due to sampling of the parametric space against the numerical error due to the discretization of the geometry, using a hierarchy of discretizations from coarse to fine.
The number of realizations on finer discretizations can be kept low, with most samples computed on coarser discretizations to minimize the computational cost.
Consequently, the total execution time is significantly reduced in comparison to the standard MC scheme.
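The telescoping sum behind multilevel MC can be sketched on a toy problem. This is only an illustration: the "solver" below is a hypothetical scalar stand-in for the surface integral equation solver, and the quantity of interest, bias model, and sample counts are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_qoi(level, theta):
    """Hypothetical stand-in for the deterministic EM solver: a toy QoI whose
    discretization bias decays as the level (mesh resolution) grows."""
    h = 2.0 ** (-level)                          # mesh size on this level
    return np.sin(theta) + h * np.cos(3.0 * theta)

def mlmc_estimate(samples_per_level):
    """Telescoping MLMC sum: E[g_L] = E[g_0] + sum_l E[g_l - g_{l-1}]."""
    total = 0.0
    for level, m in enumerate(samples_per_level):
        theta = rng.uniform(0.0, np.pi, size=m)  # shared inputs per correction
        fine = sample_qoi(level, theta)
        coarse = sample_qoi(level - 1, theta) if level > 0 else 0.0
        total += np.mean(fine - coarse)
    return total

# Most realizations on the coarse levels, only a few on the fine ones.
est = mlmc_estimate([4000, 1000, 250, 60])
```

Because the level-`l` corrections have small variance, the few fine-level samples suffice, which is exactly how the cost reduction over plain MC arises.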
Computation of electromagnetic fields scattered from dielectric objects of un... (Alexander Litvinenko)
We develop fast and efficient stochastic methods for characterizing scattering from objects of uncertain shapes. Such methods are highly needed in electromagnetics, optics, and photonics.
Tools for characterizing electromagnetic scattering from objects with uncertain shapes are needed in various applications.
We develop numerical methods for predicting radar and scattering cross sections (RCS and SCS) of complex targets.
To reduce the cost of Monte Carlo (MC) sampling, we propose a modified multilevel MC method, the continuation MLMC (CMLMC).
Computation of Electromagnetic Fields Scattered from Dielectric Objects of Un... (Alexander Litvinenko)
We study how uncertainties in the geometry propagate through the electromagnetic model to the electromagnetic fields. We use multilevel Monte Carlo methods.
New data structures and algorithms for post-processing large data sets and ... (Alexander Litvinenko)
In this work, we describe advanced numerical tools for working with multivariate functions and for the analysis of large data sets. These tools drastically reduce the required computing time and storage cost, and therefore allow us to consider much larger data sets or finer meshes. Covariance matrices are crucial in spatio-temporal statistical tasks, but are often very expensive to compute and store, especially in 3D. Therefore, we approximate covariance functions by cheap surrogates in a low-rank tensor format. We apply the Tucker and canonical tensor decompositions to a family of Matérn- and Slater-type functions with varying parameters and demonstrate numerically that their approximations exhibit exponentially fast convergence. We prove the exponential convergence of the Tucker and canonical approximations in the tensor rank parameters. Several statistical operations are performed in this low-rank tensor format, including evaluating the conditional covariance matrix, spatially averaged estimation variance, computing a quadratic form, determinant, trace, log-likelihood, inverse, and Cholesky decomposition of a large covariance matrix. Low-rank tensor approximations reduce the computing and storage costs substantially. For example, the storage cost is reduced from an exponential $O(n^d)$ to a linear scaling $O(drn)$, where $d$ is the spatial dimension, $n$ is the number of mesh points in one direction, and $r$ is the tensor rank. Prerequisites for the applicability of the proposed techniques are the assumptions that the data, locations, and measurements lie on a tensor (axes-parallel) grid and that the covariance function depends on a distance,...
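A minimal 1D illustration of replacing a covariance matrix by a cheap low-rank surrogate. Here a plain truncated SVD stands in for the Tucker/canonical decompositions of the work, and the grid size, correlation length, and rank are made-up values:

```python
import numpy as np

# Exponential covariance c(x, y) = exp(-|x - y| / ell), i.e. Matern with
# smoothness 1/2, evaluated on a uniform 1D grid.
n = 200
x = np.linspace(0.0, 1.0, n)
ell = 0.3
C = np.exp(-np.abs(x[:, None] - x[None, :]) / ell)

# Truncated SVD as a cheap low-rank surrogate: C ~= U_r diag(s_r) V_r^T.
U, s, Vt = np.linalg.svd(C)
r = 15
C_r = (U[:, :r] * s[:r]) @ Vt[:r, :]

rel_err = np.linalg.norm(C - C_r) / np.linalg.norm(C)
full_storage = n * n               # dense matrix: O(n^2) numbers
lowrank_storage = r * (2 * n + 1)  # two factors plus singular values: O(rn)
```

The fast singular-value decay of such smooth kernels is what makes a small rank `r` give a small relative error while storing far fewer numbers.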
Response Surface in Tensor Train format for Uncertainty Quantification (Alexander Litvinenko)
We apply the low-rank tensor train (TT) format to solve PDEs with uncertain coefficients. First, we approximate the uncertain permeability coefficient in the TT format, then the operator, and then apply iterations to solve the stochastic Galerkin system.
Poster to be presented at the Stochastic Numerics and Statistical Learning: Theory and Applications Workshop 2024, KAUST, Saudi Arabia, https://cemse.kaust.edu.sa/stochnum/events/event/snsl-workshop-2024.
In this work we have considered a setting that mimics the Henry problem \cite{Simpson2003,Simpson04_Henry}, modeling seawater intrusion into a 2D coastal aquifer. The pure water recharge from the ``land side'' resists the salinisation of the aquifer due to the influx of saline water through the ``sea side'', thereby achieving some equilibrium in the salt concentration. In our setting, following \cite{GRILLO2010}, we consider a fracture on the sea side that significantly increases the permeability of the porous medium.
The flow and transport essentially depend on the geological parameters of the porous medium, including the fracture. We investigated the effects of various uncertainties on saltwater intrusion. We assumed uncertainties in the fracture width, the porosity of the bulk medium, its permeability and the pure water recharge from the land side. The porosity and permeability were modeled by random fields, the recharge by a random but periodic intensity and the thickness by a random variable. We calculated the mean and variance of the salt mass fraction, which is also uncertain.
The main question we investigated in this work was how well the MLMC method can be used to compute statistics of different QoIs. We found that the answer depends on the choice of the QoI. First, not every QoI requires a hierarchy of meshes and MLMC. Second, MLMC requires stable convergence rates for $\EXP{g_{\ell} - g_{\ell-1}}$ and $\Var{g_{\ell} - g_{\ell-1}}$; these rates should be independent of $\ell$. If the convergence rates vary with $\ell$, then it is hard to estimate $L$ and $m_{\ell}$, and MLMC will either not work or be suboptimal. We were not able to get stable convergence rates for all levels $\ell=1,\ldots,5$ when the QoI was an integral as in \eqref{eq:integral_box}: the rate $\alpha$ differed between $\ell=1,\ldots,4$ and $\ell=5$. Further investigation is needed to find the reason for this. Another difficulty is the dependence on time, i.e., the number of levels $L$ and the numbers of samples $m_{\ell}$ depend on $t$. At the beginning the variability is small, then it increases, and after the process of mixing salt and fresh water has stopped, the variance decreases again.
The number of random samples required at each level was estimated by calculating the decay of the variances and the computational cost for each level. These estimates depend on the minimisation function in the MLMC algorithm.
To achieve the efficiency of the MLMC approach presented in this work, it is essential that the complexity of the numerical solution of each random realisation is proportional to the number of grid vertices on the grid levels.
We investigated the applicability and efficiency of the MLMC approach to the Henry-like problem with uncertain porosity, permeability and recharge. These uncertain parameters were modelled by random fields with three independent random variables. Permeability is a function of porosity. Both functions are time-dependent, have multi-scale behaviour and are defined for two layers. The numerical solution for each random realisation was obtained using the well-known ug4 parallel multigrid solver. The number of random samples required at each level was estimated by calculating the decay of the variances and the computational cost for each level.
The MLMC method was used to compute the expected value and variance of several QoIs, such as the solution at a few preselected points $(t,\bx)$, the solution integrated over a small subdomain, and the time evolution of the freshwater integral. We have found that some QoIs require only 2-3 mesh levels and samples from finer meshes would not significantly improve the result. Other QoIs require more grid levels.
1. We investigated the efficiency of MLMC for a Henry-like problem with uncertain porosity, permeability, and recharge.
2. Uncertainties are modeled by random fields.
3. MLMC can be much faster than MC: up to 3200 times faster!
4. The time dependence is challenging.
Remarks:
1. Check first whether MLMC is needed at all.
2. The optimal number of samples depends on the point $(t,\bx)$.
3. An advanced MLMC may give better estimates of $L$ and $m_{\ell}$.
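The sample-allocation step described above (numbers of samples per level estimated from the variance decay and per-level costs) can be sketched with the standard MLMC formula. The decay and growth rates below are illustrative assumptions, not measurements from the Henry problem:

```python
import math

# Illustrative per-level variances and costs (assumed rates): V_l ~ 2^(-2l)
# for the level corrections and C_l ~ 2^(3l) for the per-sample solve cost.
V = [2.0 ** (-2 * l) for l in range(5)]
C = [2.0 ** (3 * l) for l in range(5)]

def mlmc_samples(V, C, eps):
    """Standard MLMC allocation: choosing m_l proportional to sqrt(V_l / C_l)
    minimizes total cost subject to a sampling-error budget of eps^2."""
    s = sum(math.sqrt(v * c) for v, c in zip(V, C))
    return [max(1, math.ceil(math.sqrt(v / c) * s / eps ** 2))
            for v, c in zip(V, C)]

m = mlmc_samples(V, C, eps=0.01)   # most samples land on the coarse levels
```

With these rates the allocation decreases sharply with the level, matching the observation that only a few samples are needed on the finest meshes.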
Density Driven Groundwater Flow with Uncertain Porosity and Permeability (Alexander Litvinenko)
In this work, we solved the density driven groundwater flow problem with uncertain porosity and permeability. An accurate solution of this time-dependent and non-linear problem is impossible because of the presence of natural uncertainties in the reservoir such as porosity and permeability.
Therefore, we estimated the mean value and the variance of the solution, as well as the propagation of uncertainties from the random input parameters to the solution.
We started by defining the Elder-like problem. Then we described the multivariate polynomial approximation (\gPC) approach and used it to estimate the required statistics of the mass fraction.
Utilizing the \gPC method allowed us to reduce the computational cost compared to the classical quasi-Monte Carlo method.
\gPC assumes that the output function $\sol(t,\bx,\btheta)$ is square-integrable and smooth w.r.t. the uncertain input variables $\btheta$.
Many factors, such as non-linearity, multiple solutions, multiple stationary states, time dependence, and complicated solvers, make the investigation of the convergence of the \gPC method a non-trivial task.
We used an easy-to-implement, but only sub-optimal, \gPC technique to quantify the uncertainty. For example, it is known that increasing the degree of global polynomials (Hermite, Lagrange, and similar) gives rise to Runge's phenomenon; local polynomials, splines, or mixtures of them would probably be better here. Additionally, we used an easy-to-parallelise quadrature rule, which was also only suboptimal. For instance, an adaptive choice of sparse grid (or collocation) points \cite{ConradMarzouk13,nobile-sg-mc-2015,Sudret_sparsePCE,CONSTANTINE12,crestaux2009polynomial} would be better, but we were limited by the usage of parallel methods: adaptive quadrature rules are not (so well) parallelisable. In conclusion, we can report that: a) we developed a highly parallel method to quantify the uncertainty in the Elder-like problem; b) with \gPC of degree 4 we can achieve results similar to those of the \QMC method.
In the numerical section we considered two different aquifers: a solid parallelepiped and a solid elliptic cylinder. One of our goals was to see how the domain geometry influences the formation, number, and shape of the fingers.
Since the considered problem is nonlinear, a high variance in the porosity may result in totally different solutions; for instance, the number of fingers, their intensity and shape, the propagation time, and the velocity may vary considerably.
The number of cells in the presented experiments varied from $241{,}152$ to $15{,}433{,}728$ for the cylindrical domain and from $524{,}288$ to $4{,}194{,}304$ for the parallelepiped. The maximal number of parallel processing units was $600\times 32$, where $600$ is the number of parallel nodes and $32$ is the number of computing cores on each node. The total computing time varied from 2 hours for the coarse mesh to 24 hours for the finest mesh.
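The \gPC projection step can be illustrated on a scalar toy model instead of the PDE solver. Below, the hypothetical QoI is $u(\theta)=e^{\theta}$ with a single standard normal input; it is projected onto probabilists' Hermite polynomials up to degree 4 via Gauss-Hermite quadrature, and the mean and (truncated) variance are read off the coefficients. Model, degree, and rule size are illustrative assumptions:

```python
import math
import numpy as np
from numpy.polynomial.hermite_e import hermegauss, hermeval

# Gauss-Hermite(e) rule for the weight exp(-x^2/2), normalized to N(0, 1).
nodes, weights = hermegauss(20)
weights = weights / math.sqrt(2 * math.pi)

degree = 4
coeffs = []
for k in range(degree + 1):
    ck = np.zeros(k + 1)
    ck[k] = 1.0
    Hk = hermeval(nodes, ck)                # probabilists' Hermite He_k
    # Projection: u_k = E[u He_k] / E[He_k^2], with E[He_k^2] = k!
    coeffs.append(float(np.sum(weights * np.exp(nodes) * Hk))
                  / math.factorial(k))

mean_gpc = coeffs[0]                        # E[u] is the 0th coefficient
var_gpc = sum(c ** 2 * math.factorial(k)    # variance from higher coefficients
              for k, c in enumerate(coeffs[1:], start=1))
```

For this smooth model the degree-4 truncation already captures the variance well, which mirrors the observation that degree 4 matched the \QMC results.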
Saltwater intrusion occurs when sea levels rise and saltwater moves onto the land. Usually, this occurs during storms, high tides, droughts, or when saltwater penetrates freshwater aquifers and raises the groundwater table. Since groundwater is an essential drinking and irrigation resource, its salinization may lead to catastrophic consequences. Many acres of farmland may be lost because the soil becomes too wet or too salty to grow crops. Therefore, accurate modeling of different saline flow scenarios is essential to help farmers and researchers develop strategies to improve soil quality and reduce the effects of saltwater intrusion.
Saline flow is density-driven and described by a system of time-dependent nonlinear partial differential equations (PDEs). It features convection dominance and can demonstrate very complicated behavior.
As a specific model, we consider a Henry-like problem with uncertain permeability and porosity.
These parameters may strongly affect the flow and transport of salt.
We consider a class of density-driven flow problems. We are particularly interested in the problem of the salinization of coastal aquifers. We consider the Henry saltwater intrusion problem with uncertain porosity, permeability, and recharge parameters as a test case.
The reasons for the presence of uncertainties are the lack of knowledge, inaccurate measurements, and the inability to measure parameters at every spatial or temporal location. This problem is nonlinear and time-dependent. The solution is the salt mass fraction, which is uncertain and changes in time. Uncertainties in porosity, permeability, recharge, and mass fraction are modeled using random fields. This work investigates the applicability of the well-known multilevel Monte Carlo (MLMC) method to such problems. The MLMC method can reduce the total computational and storage costs. Moreover, the MLMC method runs multiple scenarios on different spatial and time meshes and then estimates the mean value of the mass fraction.
The parallelization is performed in both the physical space and stochastic space. To solve every deterministic scenario, we run the parallel multigrid solver ug4 in a black-box fashion.
We use the solution obtained from the quasi-Monte Carlo method as a reference solution.
We investigated the applicability and efficiency of the MLMC approach for the Henry-like problem with uncertain porosity, permeability, and recharge. These uncertain parameters were modeled by random fields with three independent random variables. The numerical solution for each random realization was obtained using the well-known ug4 parallel multigrid solver. The number of required random samples on each level was estimated by computing the decay of the variances and computational costs for each level. We also computed the expected value and variance of the mass fraction in the whole domain, the evolution of the pdfs, the solutions at a few preselected points $(t,\bx)$, and the time evolution of the freshwater integral value. We have found that some QoIs require only 2-3 of the coarsest mesh levels, and samples from finer meshes would not significantly improve the result. Note that a different type of porosity may lead to a different conclusion.
The results show that the MLMC method is faster than the QMC method at the finest mesh. Thus, sampling at different mesh levels makes sense and helps to reduce the overall computational cost.
Here the interest is mainly to compute characterisations like the entropy,
the Kullback-Leibler divergence, more general $f$-divergences, or other such characteristics based on
the probability density. The density is often not available directly,
and it is a computational challenge just to represent it in a numerically
feasible fashion when the dimension is even moderately large. It
is an even stronger numerical challenge to then actually compute these characteristics
in the high-dimensional case.
The task considered here was the numerical computation of characterising statistics of
high-dimensional pdfs, as well as their divergences and distances,
where the pdf in the numerical implementation was assumed discretised on some regular grid.
We have demonstrated that high-dimensional pdfs,
pcfs, and some functions of them
can be approximated and represented in a low-rank tensor data format.
Utilisation of low-rank tensor techniques helps to reduce the computational complexity
and the storage cost from exponential $\C{O}(n^d)$ to linear in the dimension $d$, e.g.\
$\C{O}(d n r^2)$ for the TT format. Here $n$ is the number of discretisation
points in one direction, $r \ll n$ is the maximal tensor rank, and $d$ is the problem dimension.
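The basic operation, computing a divergence from a pdf discretised on a regular grid, can be shown in the $d=1$ case, where the full grid is still affordable. The example below computes the Kullback-Leibler divergence of two gridded Gaussian densities by simple quadrature and checks it against the closed form; in high dimensions the grid values would be held in a low-rank tensor format instead of a full array:

```python
import math
import numpy as np

# Two 1D Gaussian pdfs discretised on a regular grid.
x = np.linspace(-10.0, 10.0, 4001)
dx = x[1] - x[0]

def gauss(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

p = gauss(x, 0.0, 1.0)
q = gauss(x, 1.0, 1.5)

# KL(p || q) = integral of p * log(p / q), here by a simple Riemann sum.
kl_grid = float(np.sum(p * np.log(p / q)) * dx)

# Closed form for two Gaussians, used only as a reference value.
kl_exact = math.log(1.5) + (1.0 + 1.0) / (2.0 * 1.5 ** 2) - 0.5
```

Other $f$-divergences follow the same pattern with a different point-wise function of `p / q`, which is exactly where the Hadamard (point-wise) operations on tensors enter in the high-dimensional setting.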
Talk presented at the workshop "Imaging With Uncertainty Quantification (IUQ)", September 2022,
https://people.compute.dtu.dk/pcha/CUQI/IUQworkshop.html
We consider a weakly supervised classification problem. It
is a classification problem where the target variable can be unknown
or uncertain for some subset of samples. This problem appears when
the labeling is impossible, time-consuming, or expensive. Noisy measurements
and lack of data may prevent accurate labeling. Our task
is to build an optimal classification function. For this, we construct and
minimize a specific objective function, which includes the fitting error on
labeled data and a smoothness term. Next, we use covariance and radial
basis functions to define the degree of similarity between points. The further
process involves the repeated solution of an extensive linear system
with the graph Laplacian operator. To speed up this solution process,
we introduce low-rank approximation techniques. We call the resulting
algorithm WSC-LR. We then use the WSC-LR algorithm for the analysis of
CT brain scans to recognize ischemic stroke disease. We also compare
WSC-LR with other well-known machine learning algorithms.
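The core solve described above, minimizing a labeled-data fitting error plus a graph-Laplacian smoothness term, can be sketched on toy data. This is not the WSC-LR implementation: the data, RBF bandwidth, and regularization weight are illustrative assumptions, and the low-rank acceleration is omitted (a dense solve is used):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy weakly supervised setting: two 1D clusters, one labeled point each.
x = np.concatenate([rng.normal(-2.0, 0.3, 30), rng.normal(2.0, 0.3, 30)])
n = x.size
y = np.zeros(n)
labeled = np.array([0, 30])          # indices of the two labeled samples
y[labeled] = [-1.0, 1.0]
mask = np.zeros(n)
mask[labeled] = 1.0

# RBF similarity between points and the graph Laplacian L = D - W.
W = np.exp(-((x[:, None] - x[None, :]) ** 2) / (2.0 * 0.5 ** 2))
L = np.diag(W.sum(axis=1)) - W

# Minimize sum over labeled i of (f_i - y_i)^2 + lam * f^T L f,
# whose normal equations are (diag(mask) + lam * L) f = mask * y.
lam = 0.1
f = np.linalg.solve(np.diag(mask) + lam * L, mask * y)
pred = np.sign(f)
```

The smoothness term propagates the two labels across their clusters, which is the mechanism that makes labeling only a small subset of samples sufficient.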
Computing f-Divergences and Distances of High-Dimensional Probability Density... (Alexander Litvinenko)
Poster presented at the Stochastic Numerics and Statistical Learning: Theory and Applications Workshop at KAUST, Saudi Arabia.
The task considered here was the numerical computation of characterising statistics of
high-dimensional pdfs, as well as their divergences and distances,
where the pdf in the numerical implementation was assumed discretised on some regular grid.
Even for moderate dimension $d$, the full storage and computation with such objects become very quickly infeasible.
We have demonstrated that high-dimensional pdfs,
pcfs, and some functions of them
can be approximated and represented in a low-rank tensor data format.
Utilisation of low-rank tensor techniques helps to reduce the computational complexity
and the storage cost from exponential $\C{O}(n^d)$ to linear in the dimension $d$, e.g.\
$\C{O}(d n r^2)$ for the TT format. Here $n$ is the number of discretisation
points in one direction, $r \ll n$ is the maximal tensor rank, and $d$ is the problem dimension.
The particular data format is rather unimportant; any of the well-known tensor formats (CP, Tucker, hierarchical Tucker, tensor-train (TT)) can be used, and we used the TT data format. Much of the presentation, and in fact the central train of thought, is independent of the actual representation.
In the beginning, three possible ways of arriving at such a representation of the pdf were discussed: from the pdf given in some approximate analytical form, e.g. as a function tensor product of lower-dimensional pdfs with a product measure; from an analogous representation of the pcf and subsequent use of the Fourier transform; or from a low-rank functional representation of a high-dimensional RV, again via its pcf.
The theoretical underpinnings of the relation between pdfs and pcfs as well as their
properties were recalled in Section: Theory, as they are important to be preserved in the
discrete approximation. This also introduced the concepts of the convolution and of
the point-wise multiplication Hadamard algebra, concepts which become especially important if
one wants to characterise sums of independent RVs or mixture models,
a topic we did not touch on for the sake of brevity but which follows naturally from
the developments here. The Hadamard algebra is especially
important for the algorithms that compute various point-wise functions in the sparse formats.
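The TT representation itself can be built from a full array by the classical TT-SVD construction (sequential truncated SVDs of the unfoldings). The sketch below is a generic illustration on a small 3D array with known low TT ranks, not the implementation used in the work:

```python
import numpy as np

def tt_svd(A, eps=1e-10):
    """Classical TT-SVD: split a full d-dimensional array into TT cores by
    sequential truncated SVDs of its unfoldings."""
    dims = A.shape
    cores, r = [], 1
    M = A.reshape(dims[0], -1)
    for k in range(len(dims) - 1):
        U, s, Vt = np.linalg.svd(M, full_matrices=False)
        keep = max(1, int(np.sum(s > eps * s[0])))   # truncated TT rank
        cores.append(U[:, :keep].reshape(r, dims[k], keep))
        r = keep
        M = (s[:keep, None] * Vt[:keep]).reshape(r * dims[k + 1], -1)
    cores.append(M.reshape(r, dims[-1], 1))
    return cores

def tt_full(cores):
    """Contract the TT cores back into a full array (for checking only)."""
    T = cores[0]
    for G in cores[1:]:
        T = np.tensordot(T, G, axes=([T.ndim - 1], [0]))
    return T[0, ..., 0]

# Demo: A[i, j, k] = sin(x_i + x_j + x_k) has exact TT ranks (2, 2).
x = np.linspace(0.0, 1.0, 8)
A = np.sin(x[:, None, None] + x[None, :, None] + x[None, None, :])
cores = tt_svd(A)
ranks = [G.shape[2] for G in cores[:-1]]
```

Once the pdf is held as TT cores, point-wise (Hadamard) operations and contractions are performed core by core, which is what keeps the divergence computations feasible in high dimensions.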
Computing f-Divergences and Distances of High-Dimensional Probability Densi... (Alexander Litvinenko)
Talk presented at the SIAM IS 2022 conference.
Very often, in the course of uncertainty quantification tasks or
data analysis, one has to deal with high-dimensional random variables (RVs)
(with values in $\Rd$). Just like any other RV,
a high-dimensional RV can be described by its probability density (\pdf) and/or
by the corresponding probability characteristic functions (\pcf),
or a more general representation as
a function of other, known, random variables.
Here the interest is mainly to compute characterisations like the entropy, the Kullback-Leibler divergence, or more general
$f$-divergences. These are all computed from the \pdf, which is often not available directly,
and it is a computational challenge even to represent it in a numerically
feasible fashion when the dimension $d$ is even moderately large. It
is an even stronger numerical challenge to then actually compute said characterisations
in the high-dimensional case.
In this regard, in order to achieve a computationally feasible task, we propose
to approximate the density by a low-rank tensor.
Low rank tensor approximation of probability density and characteristic funct... (Alexander Litvinenko)
Very often one has to deal with high-dimensional random variables (RVs). A high-dimensional RV can be described by its probability density (\pdf) and/or by the corresponding probability characteristic functions (\pcf), or by a function representation. Here the interest is mainly to compute characterisations like the entropy, or
relations between two distributions, like their Kullback-Leibler divergence, or more general measures such as $f$-divergences,
among others. These are all computed from the \pdf, which is often not available directly, and it is a computational challenge to even represent it in a numerically feasible fashion in case the dimension $d$ is even moderately large. It is an even stronger numerical challenge to then actually compute said characterisations in the high-dimensional case.
In this regard, in order to achieve a computationally feasible task, we propose to represent the density by a high-order tensor product and approximate it in a low-rank format.
Identification of unknown parameters and prediction of missing values. Compar… (Alexander Litvinenko)
- H-matrix approximation of large Matérn covariance matrices and Gaussian log-likelihoods.
- Identifying unknown parameters and making predictions.
- Comparison with machine learning methods.
- kNN is easy to implement and shows promising results.
Identification of unknown parameters and prediction with hierarchical matrice… (Alexander Litvinenko)
We compare four numerical methods for the prediction of missing values in four different datasets.
These methods are 1) hierarchical maximum likelihood estimation (H-MLE) and three machine learning (ML) methods: 2) k-nearest neighbors (kNN), 3) random forest, and 4) deep neural network (DNN).
Among the ML methods, the best results (for the considered datasets) were obtained by the kNN method with three (or seven) neighbors.
On one dataset, the MLE method showed a smaller error than the kNN method, whereas on another the kNN method was better.
The MLE method requires a lot of linear algebra computations and works well on almost all datasets. Its result can be improved by taking a smaller threshold and more accurate hierarchical matrix arithmetic. To our surprise, the well-known kNN method produced results similar to H-MLE and ran much faster.
1. Motivation: why do we need low-rank tensors
2. Tensors of the second order (matrices)
3. CP, Tucker and tensor train tensor formats
4. Many classical kernels have (or can be approximated in) a low-rank tensor format
5. Post-processing: computation of mean, variance, level sets, frequency
Propagation of Uncertainties in Density Driven Groundwater Flow (Alexander Litvinenko)
Major goal: estimate the risks of pollution in a subsurface flow.
How: we solve the density-driven groundwater flow equations with uncertain porosity and permeability.
We set up the density-driven groundwater flow problem,
review stochastic modeling and stochastic methods, use the UG4 framework (https://gcsc.uni-frankfurt.de/simulation-and-modelling/ug4),
model uncertainty in porosity and permeability,
and run 2D and 3D numerical experiments.
Simulation of propagation of uncertainties in density-driven groundwater flow (Alexander Litvinenko)
Consider stochastic modelling of the density-driven subsurface flow in 3D. This talk was presented by Dmitry Logashenko at the IMG conference in Kunming, China, August 2019.
Large data sets result in large dense matrices, say with 2,000,000 rows and columns. How does one work with such large matrices? How to approximate them? How to compute the log-likelihood, the determinant, the inverse? All answers are in this work.
In this paper, we solve a semi-supervised regression problem. Due to the lack of knowledge about the data structure and the presence of random noise, the considered data model is uncertain. We propose a method which combines graph Laplacian regularization and cluster ensemble methodologies. The co-association matrix of the ensemble is calculated on both labeled and unlabeled data; this matrix is used as a similarity matrix in the regularization framework to derive the predicted outputs. We use the low-rank decomposition of the co-association matrix to significantly speed up calculations and reduce memory. Two clustering problem examples are presented.
The full version is available at https://arxiv.org/abs/1901.03919
Computation of electromagnetic fields scattered from dielectric objects of uncertain shapes using MLMC method
1. Computation of Electromagnetic Fields Scattered From Dielectric Objects of Uncertain Shapes Using MLMC
A. Litvinenko¹, A. C. Yucel³, H. Bagci², J. Oppelstrup⁴, E. Michielssen⁵, R. Tempone¹,²
¹RWTH Aachen, ²KAUST, ³Nanyang Technological University, Singapore, ⁴KTH Royal Institute of Technology, ⁵University of Michigan
GAMM 2021 (online)
2. Motivation
Efficient computational tools for characterizing scattering from objects
of uncertain shapes are needed in the fields of electromagnetics,
optics, and photonics.
How: use the CMLMC method (an advanced version of multilevel Monte
Carlo).
CMLMC optimally balances statistical and discretization errors. It
requires very few samples on fine meshes and more on coarse ones.
(Images taken from wiki, reddit.com, EMCoS.)
1/35
3. Plan:
1. Scattering problem setup
2. Deterministic solver
3. Generation of random shapes
4. Shape transformation
5. QoI on perturbed shape
6. Continuation Multilevel Monte Carlo (CMLMC)
7. Results (time and work vs. TOL, weak and strong convergence)
8. Conclusion
2/35
4. Scattering problem
Input: randomly perturbed shape
Output: radar and scattering cross sections, electric and magnetic
surface current densities
3/35
5. Previous works
- Monte Carlo (N. Gacia, Jandhyala, Michielssen)
- surrogate methods (A. Yucel, H. Bagci, L. Gomez, L.H. Garcia)
- stochastic collocation ([C. Chauviere, J. Hesthaven, K. Wilcox '07], [D. Xiu, J. Hesthaven '07], [Zh. Zeng, J. M. Jin '07])
4/35
6. Deterministic solver
Electromagnetic scattering from dielectric objects is analyzed by
using the Poggio-Miller-Chan-Harrington-Wu-Tsai surface integral
equation (PMCHWT-SIE) solver.
The PMCHWT-SIE is discretized using the method of moments
(MoM) and the iterative solution of the resulting matrix system is
accelerated using a (parallelized) fast multipole method–fast
Fourier transform (FMM-FFT) scheme.
Input uncertainties: position, orientation, roughness, and shape of
scatterers, as well as internal and/or external excitation
characteristics such as the frequency, amplitude, and angle of arrival.
5/35
7. Generation of random shapes
The perturbed shape v(ϑm, ϕm) is defined as

v(ϑm, ϕm) ≈ ṽ(ϑm, ϕm) + Σ_{k=1}^{K} ak κk(ϑm, ϕm), (1)

where ϑm and ϕm are the angular coordinates of node m, and
ṽ(ϑm, ϕm) = 1 m is the unperturbed radial coordinate on the unit sphere.
The functions κk(ϑ, ϕ) are obtained from spherical harmonics by re-scaling their
arguments, e.g., κ1(ϑ, ϕ) = cos(α1 ϑ), κ2(ϑ, ϕ) = sin(α2 ϑ) sin(α3 ϕ),
where α1, α2, α3 > 0.
[Figure: example realizations of randomly perturbed spherical shapes.]
6/35
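As a rough illustration of Eq. (1), the sketch below samples the radial perturbation on a latitude-longitude grid. The basis functions and the α values are hypothetical stand-ins, not the exact ones used in the talk; the weight bounds match the numerical tests later in the deck.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative re-scaled spherical-harmonic-like basis functions (Eq. 1);
# the alpha values are assumptions, not the ones from the talk.
alpha1, alpha2, alpha3 = 2.0, 3.0, 2.0

def kappa1(theta, phi):
    return np.cos(alpha1 * theta)

def kappa2(theta, phi):
    return np.sin(alpha2 * theta) * np.sin(alpha3 * phi)

def perturbed_radius(theta, phi, a):
    """Eq. (1): unperturbed radius 1 m plus weighted basis perturbations."""
    return 1.0 + a[0] * kappa1(theta, phi) + a[1] * kappa2(theta, phi)

# Angular coordinates of mesh nodes (here: a coarse lat-long grid).
theta = np.linspace(0.0, np.pi, 20)
phi = np.linspace(0.0, 2 * np.pi, 40)
T, P = np.meshgrid(theta, phi, indexing="ij")

# Perturbation weights a_k ~ U[-0.14, 0.14] m, as in the CMLMC tests.
a = rng.uniform(-0.14, 0.14, size=2)
r = perturbed_radius(T, P, a)
```

Since |κk| ≤ 1, the radial perturbation stays within the sum of the weight bounds around 1 m.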
9. Mesh transformation
The perturbed mesh P0 is also rotated and scaled using the following
transformation:

(xm, ym, zm)ᵀ := L(lx, ly, lz) Rx(ϕx) Ry(ϕy) Rz(ϕz) (xm, ym, zm)ᵀ, (2)

where the matrices Rx(ϕx), Ry(ϕy), and Rz(ϕz) perform rotations around the x, y,
and z axes by angles ϕx, ϕy, and ϕz, and the matrix L(lx, ly, lz) implements
scaling along the x, y, and z axes by lx, ly, and lz, respectively.
8/35
10. Random rotation, stretching and expanding
Rotations around the axes x, y, and z by angles ϕx, ϕy, and ϕz:

Rx(ϕx) = [ 1, 0, 0; 0, cos ϕx, −sin ϕx; 0, sin ϕx, cos ϕx ],
Ry(ϕy) = [ cos ϕy, 0, sin ϕy; 0, 1, 0; −sin ϕy, 0, cos ϕy ],
Rz(ϕz) = [ cos ϕz, −sin ϕz, 0; sin ϕz, cos ϕz, 0; 0, 0, 1 ].

L(lx, ly, lz) implements scaling along the axes x, y, z by factors lx, ly, and lz:

L(lx, ly, lz) = diag(1/lx, 1/ly, 1/lz).
9/35
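The transformation in Eq. (2) translates directly into code; a minimal NumPy sketch (the angles, scaling factors, and node coordinates below are illustrative) is:

```python
import numpy as np

def Rx(p):  # rotation around the x axis by angle p
    c, s = np.cos(p), np.sin(p)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def Ry(p):  # rotation around the y axis
    c, s = np.cos(p), np.sin(p)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def Rz(p):  # rotation around the z axis
    c, s = np.cos(p), np.sin(p)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def L(lx, ly, lz):  # axis scaling, with the reciprocal factors as on the slide
    return np.diag([1.0 / lx, 1.0 / ly, 1.0 / lz])

# Apply Eq. (2) to an array of mesh nodes (one (x, y, z) row per node).
nodes = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
A = L(0.9, 1.0, 1.1) @ Rx(0.3) @ Ry(0.2) @ Rz(0.1)
transformed = nodes @ A.T
```

Each rotation matrix is orthogonal, so the rotations preserve node distances; only L changes the shape's extent.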
11. Input random vector
RVs used in generating the coarsest perturbed mesh P0 are:
1. perturbation weights ak, k = 1, . . . , K,
2. rotation angles ϕx, ϕy, and ϕz,
3. scaling factors lx, ly, and lz.
Thus, the random input parameter vector

ξ = (a1, . . . , aK, ϕx, ϕy, ϕz, lx, ly, lz) ∈ R^{K+6} (3)

defines the perturbed shape.
10/35
12. Mesh refinement
Mesh P0: the coarsest discretisation of the sphere (e.g., an icosahedron).
Mesh P1 is generated by refining each triangle of the perturbed P0
into four (by halving all three edges and connecting the mid-points).
Mesh P2 is generated in the same way from P1.
All meshes P` at levels ` = 1, . . . , L are nested discretizations of P0.
(!!!) No uncertainties are added on meshes P`, ` > 0;
the uncertainty is introduced only at level ` = 0.
11/35
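The four-way triangle split described above can be sketched as a generic midpoint-subdivision routine (this is an illustration, not the solver's actual mesh code); sharing midpoints between neighbouring triangles keeps the refined mesh watertight and nested.

```python
import numpy as np

def refine(vertices, triangles):
    """Split each triangle into four by halving all three edges.

    vertices: (n, 3) float array; triangles: (m, 3) int array of vertex ids.
    Each edge midpoint is created only once, so neighbouring triangles
    share it and the nested meshes stay watertight.
    """
    verts = [tuple(v) for v in vertices]
    midpoint = {}  # edge key (i, j), i < j  ->  new vertex index

    def mid(i, j):
        key = (min(i, j), max(i, j))
        if key not in midpoint:
            verts.append(tuple((np.asarray(verts[i]) + np.asarray(verts[j])) / 2))
            midpoint[key] = len(verts) - 1
        return midpoint[key]

    new_tris = []
    for a, b, c in triangles:
        ab, bc, ca = mid(a, b), mid(b, c), mid(c, a)
        new_tris += [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]
    return np.array(verts), np.array(new_tris)

# One refinement of a single triangle: 4 triangles, 6 vertices.
v = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
t = np.array([[0, 1, 2]])
v1, t1 = refine(v, t)
```

Note that, as the slide stresses, no new randomness enters here: the routine only subdivides the already-perturbed P0.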
13. Refinement of the perturbed shape
Four nested meshes with {320, 1280, 5120, 20480} triangular elements.
12/35
14. Electric (left) and magnetic (right) surface current densities
Amplitudes: a) J(r); b) M(r) (sphere); c) J(r); d) M(r) (perturbed shape).
13/35
15. Electric (left) and magnetic (right) surface current densities
Amplitudes of (a) J(r) and (b) M(r) induced on the unit sphere
under excitation by an x̂-polarized plane wave propagating in the −ẑ
direction at 300 MHz.
Amplitudes of (c) J(r) and (d) M(r) induced on the perturbed shape
under excitation by the same plane wave. In all figures, the amplitudes
are normalized to 1 and plotted in dB scale.
14/35
16. QoI: RCS and SCS
To compute the RCS and SCS, the scatterer is excited by a plane wave Einc(r). The RCS is

σrcs(ϑ, ϕ) = 4π |F(ϑ, ϕ)|², (4)

where F(ϑ, ϕ) is the scattered electric field pattern in the far field.
The SCS Csca(Ω) is obtained by integrating σrcs(ϑ, ϕ) over the solid angle Ω:

Csca(Ω) = (1/4π) ∫_Ω σrcs(ϑ, ϕ) sin ϑ dϑ dϕ. (5)
15/35
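Eq. (5) is a plain solid-angle integral; a minimal midpoint-rule sketch follows, with a hypothetical `sigma_rcs` callable standing in for the solver's far-field output.

```python
import numpy as np

def scs(sigma_rcs, theta_range, phi_range, n=200):
    """Approximate Eq. (5): C_sca = (1/4π) ∫_Ω σ_rcs(ϑ, ϕ) sin ϑ dϑ dϕ
    over Ω = theta_range × phi_range, using a midpoint rule on an n×n grid."""
    t0, t1 = theta_range
    p0, p1 = phi_range
    th = t0 + (np.arange(n) + 0.5) * (t1 - t0) / n
    ph = p0 + (np.arange(n) + 0.5) * (p1 - p0) / n
    T, P = np.meshgrid(th, ph, indexing="ij")
    dA = (t1 - t0) / n * (p1 - p0) / n
    return np.sum(sigma_rcs(T, P) * np.sin(T)) * dA / (4 * np.pi)

# Sanity check with a constant pattern sigma_rcs = 1 over the full sphere:
# (1/4π) ∫ sinϑ dϑ dϕ = 1.
full = scs(lambda t, p: np.ones_like(t), (0.0, np.pi), (0.0, 2 * np.pi))
```

In the CMLMC experiments later in the deck, Ω is the user-defined cone [1/6, 11/36]π × [5/12, 19/36]π rad.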
25. RCS of unit sphere and perturbed shape
[Figure: σrcs (dB) vs. ϑ (rad) for the unit sphere and the perturbed surface.]
The RCS is computed on the (top) xz and (bottom) yz planes under excitation by an
x̂-polarized plane wave propagating in the −ẑ direction at 300 MHz.
In the top plot, ϕ = 0 and ϕ = π rad in the first and second halves of the
horizontal axis, respectively; in the bottom plot, ϕ = π/2 rad and ϕ = 3π/2 rad.
16/35
26. Multilevel Monte Carlo Algorithm
Aim: approximate the mean E(g(u)) of the QoI g(u) to a given
accuracy ε := TOL, where u = u(ω) is the random shape.
Input: a hierarchy of L + 1 meshes {h`}, ` = 0, . . . , L, with h` := h0 β^{−`},
for each realization of the random domain.
Compute:

E(gL) = Σ_{`=0}^{L} E(g`(ω) − g`−1(ω)) =: Σ_{`=0}^{L} E(G`) ≈ Σ_{`=0}^{L} E(G̃`),

where G̃` = M`^{−1} Σ_{m=1}^{M`} G`(ω`,m).
Output: A ≈ E(g(u)) ≈ Σ_{`=0}^{L} G̃`.
Cost of one sample of G̃`: W` ∝ h`^{−γ} = (h0 β^{−`})^{−γ}.
Total work of the estimator A: W = Σ_{`=0}^{L} M` W`.
The estimator A satisfies the tolerance with a prescribed failure probability
0 < ν ≤ 1, i.e.,

P[|E(g) − A| ≤ TOL] ≥ 1 − ν, (6)

while minimizing W.
17/35
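The telescoping estimator above can be sketched in a few lines. The toy QoI `g` below is purely illustrative (no EM solver involved): its bias decays geometrically with the level, so the level differences become cheap, nearly deterministic corrections.

```python
import numpy as np

def mlmc_estimate(sample_g, M, seed=0):
    """Plain MLMC estimator A = Σ_l (1/M_l) Σ_m (g_l - g_{l-1})(ξ_{l,m}).

    sample_g(l, xi) returns the QoI approximated on mesh level l;
    M[l] is the number of samples on level l. Crucially, g_l and g_{l-1}
    are evaluated with the SAME random input xi.
    """
    rng = np.random.default_rng(seed)
    A = 0.0
    for l, Ml in enumerate(M):
        Gl = np.empty(Ml)
        for m in range(Ml):
            xi = rng.standard_normal()  # stand-in for the shape parameters ξ
            Gl[m] = sample_g(l, xi) - (sample_g(l - 1, xi) if l > 0 else 0.0)
        A += Gl.mean()
    return A

# Toy model: g_l(xi) = xi**2 + 2**(-l), so E[g_L] = 1 + 2**(-L).
g = lambda l, xi: xi**2 + 2.0**(-l)
A = mlmc_estimate(g, M=[4000, 400, 40])
```

Because g_l and g_{l-1} share the same ξ, the variance of G_l shrinks with the level, which is what lets most samples live on the coarse meshes.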
27. CMLMC numerical tests
The QoI is the SCS over a user-defined solid angle
Ω = [1/6, 11/36]π rad × [5/12, 19/36]π rad (i.e., a measure of
far-field scattered power in a cone).
Uniform RVs are:
a1, a2 ∼ U[−0.14, 0.14] m,
ϕx, ϕy, ϕz ∼ U[0.2, 3] rad,
lx, ly, lz ∼ U[0.9, 1.1];
CMLMC runs for TOL ranging from 0.2 to 0.008.
At TOL ≈ 0.008, CMLMC requires L = 5 meshes with
{320, 1280, 5120, 20480, 81920} triangles.
18/35
29. Average time vs. TOL
[Figure: average time (s) vs. TOL, log-log scale, for CMLMC and the MC estimate; a TOL⁻² reference slope is shown.]
The experiment is repeated 15 times independently, and the obtained
values are shown as error bars on the curves. 20/35
30. Work estimate vs. TOL
[Figure: work estimate vs. TOL, log-log scale, for CMLMC and the MC estimate; a TOL⁻² reference slope is shown.]
21/35
31. Time required to compute G` vs. `.
[Figure: time (s) required to compute G` vs. level ` = 0, . . . , 4, compared with a 2^{2`} reference slope.]
22/35
35. Best practices for applying CMLMC method to CEM problems
- Download CMLMC:
https://github.com/StochasticNumerics/mimclib.git (or use
MLMC from M. Giles)
- Implement an interface to couple CMLMC and your deterministic
solver
- Generate a hierarchy of meshes (minimum 3); nested meshes are better
- Generate 5-7 random shapes on the first 3 meshes
- Estimate the strong and weak convergence rates q1, q2 (later they
will be corrected by the CMLMC algorithm)
- Run the CMLMC solver and visually check the automatically generated
plots
26/35
36. Conclusion (what is done)
- Used the CMLMC method to characterize EM wave scattering from
dielectric objects with uncertain shapes.
- Studied how uncertainties in the shape propagate to the
solution.
- Demonstrated that the CMLMC algorithm can be 10 times faster
than MC.
- To increase the efficiency further, each simulation is carried
out using the FMM-FFT accelerated PMCHWT-SIE solver.
- Confirmed that the known advantages of the CMLMC algorithm
carry over to EM wave scattering:
non-intrusiveness, dimension independence, better convergence
rates than the classical MC method, and higher immunity
to irregularity w.r.t. the uncertain parameters than, for example,
sparse-grid methods.
27/35
37. Conclusion
Some random perturbations may affect the convergence rates in
CMLMC.
With difficult-to-predict convergence rates, it is hard for CMLMC to
estimate:
- the computational cost W,
- the number of levels L,
- the number of samples M` on each level,
- the computation time,
- the parameter θ,
- the variance of the QoI.
All of these may result in sub-optimal performance.
28/35
38. Acknowledgements
SRI UQ at KAUST and Alexander von Humboldt
foundation.
Results are published:
A. Litvinenko, A. C. Yucel, H. Bagci, J. Oppelstrup, E. Michielssen,
R. Tempone,
Computation of Electromagnetic Fields Scattered From Objects With
Uncertain Shapes Using Multilevel Monte Carlo Method,
IEEE J. on Multiscale and Multiphysics Comput. Techniques,
pp 37-50, 2019.
https://arxiv.org/abs/1809.00362
29/35
39. Main idea of (C)MLMC method
Let {P`}, ` = 0, . . . , L, be a sequence of meshes with h` = h0 β^{−`}, β > 1. Let
g`(ξ) denote the approximation to g(ξ) computed using mesh P`. Then

E[gL] = Σ_{`=0}^{L} E[G`], (7)

where G` is defined as

G` = g0 if ` = 0, and G` = g` − g`−1 if ` > 0. (8)

Note that g` and g`−1 are computed using the same input random
parameter ξ.
30/35
40. Main idea of (C)MLMC method
E[G`] ≈ G̃` = M`^{−1} Σ_{m=1}^{M`} G`,m, and the weak and strong convergence models are

E[g − g`] ≈ QW h`^{q1}, (9a)
Var[g` − g`−1] ≈ QS h`−1^{q2}, (9b)

for QW ≠ 0, QS > 0, q1 > 0, and 0 < q2 ≤ 2 q1.
The QoI estimator is A = Σ_{`=0}^{L} G̃`.
Let the average cost of generating one sample of G` (the cost of one
deterministic simulation for one random realization) be

W` ∝ h`^{−dγ} = h0^{−dγ} β^{`dγ}. (10)
31/35
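The rates q1 and q2 in (9a)-(9b) are typically estimated from pilot samples by least-squares fits in log scale (this matches the "estimate the strong and weak convergence rates" step in the best-practices slide). A minimal sketch, with synthetic pilot data constructed to have exactly the prescribed means and variances:

```python
import numpy as np

def fit_rates(G, h):
    """Fit E[G_l] ≈ Q_W h_l^{q1} and Var[G_l] ≈ Q_S h_{l-1}^{q2} (Eqs. 9a-9b)
    from pilot samples G[l] (arrays of G_l realizations), using levels l >= 1."""
    levels = range(1, len(G))
    x_mean = np.log([h[l] for l in levels])
    q1 = np.polyfit(x_mean, np.log([abs(np.mean(G[l])) for l in levels]), 1)[0]
    x_var = np.log([h[l - 1] for l in levels])
    q2 = np.polyfit(x_var, np.log([np.var(G[l]) for l in levels]), 1)[0]
    return q1, q2

# Synthetic pilot data with known rates: mean Q_W h_l^{q1}, variance Q_S h_{l-1}^{q2}.
h = [2.0 ** (-l) for l in range(6)]
Qw, Qs, q1_true, q2_true = 1.0, 1.0, 1.5, 2.0
G = [None] + [
    Qw * h[l] ** q1_true + np.sqrt(Qs * h[l - 1] ** q2_true) * np.array([1.0, -1.0])
    for l in range(1, 6)
]
q1, q2 = fit_rates(G, h)
```

With real pilot samples the fitted rates are noisy, which is why CMLMC keeps correcting them as more samples arrive.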
41. Main idea of (C)MLMC method
The total CMLMC computational cost is
W =
L
X
`=0
M`W`. (11)
The estimator A satisfies a tolerance with a prescribed failure
probability 0 ν ≤ 1, i.e.,
P[|E[g] − A| ≤ TOL] ≥ 1 − ν (12)
while minimizing W . The total error is split into bias and statistical
error,
|E[g] − A| ≤ |E[g − A]|
| {z }
Bias
+ |E[A] − A|
| {z }
Statistical error
32/35
42. Main idea of (C)MLMC method
Let θ ∈ (0, 1) be a splitting parameter, so that

TOL = (1 − θ) TOL (bias tolerance) + θ TOL (statistical-error tolerance). (13)

The CMLMC algorithm bounds the bias B = |E[g − A]| and the
statistical error as

B = |E[g − A]| ≤ (1 − θ) TOL, (14)
|E[A] − A| ≤ θ TOL, (15)

where the latter bound holds with probability 1 − ν.
To satisfy condition (15) we require

Var[A] ≤ (θ TOL / Cν)², (16)

for a given confidence parameter Cν such that Φ(Cν) = 1 − ν/2, where
Φ is the CDF of a standard normal random variable.
33/35
43. Main idea of (C)MLMC method
By construction of the MLMC estimator, E[A] = E[gL], and by
independence Var[A] = Σ_{`=0}^{L} V` M`^{−1}, where V` = Var[G`].
Given L, TOL, and 0 < θ < 1, minimizing W yields the following
optimal number of samples per level `:

M` = (Cν / (θ TOL))² √(V` / W`) (Σ_{`=0}^{L} √(V` W`)). (17)

Summing the optimal numbers of samples over all levels yields the
following expression for the total optimal computational cost in terms
of TOL:

W(TOL, L) = (Cν / (θ TOL))² (Σ_{`=0}^{L} √(V` W`))². (18)
34/35
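Eqs. (17)-(18) translate directly into code; below is a minimal sketch with illustrative per-level variances V`, costs W`, and values of θ and Cν (all of these inputs are assumptions for the example).

```python
import numpy as np

def optimal_samples(V, W, tol, theta=0.5, C_nu=2.0):
    """Eq. (17): optimal number of samples per level, and
    Eq. (18): the resulting total optimal work.
    V[l] = Var[G_l], W[l] = average cost of one sample on level l."""
    V, W = np.asarray(V, float), np.asarray(W, float)
    S = np.sum(np.sqrt(V * W))
    M = (C_nu / (theta * tol)) ** 2 * np.sqrt(V / W) * S
    work = (C_nu / (theta * tol)) ** 2 * S ** 2
    return np.ceil(M).astype(int), work

# Illustrative hierarchy: variance drops and cost grows with the level.
V = [1.0, 0.1, 0.01]
W = [1.0, 4.0, 16.0]
M, work = optimal_samples(V, W, tol=0.1)

# With these M, Var[A] = Σ V_l / M_l meets the budget (θ TOL / Cν)² of Eq. (16).
var_A = sum(v / m for v, m in zip(V, M))
```

Rounding M` up with `ceil` only decreases Var[A], so the statistical-error bound (16) still holds.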
44. Literature
1. Collier, N., Haji-Ali, A., Nobile, F. et al. A continuation multilevel Monte Carlo algorithm. Bit Numer Math 55, 399–432 (2015).
https://doi.org/10.1007/s10543-014-0511-3
2. C. Chauviere, J. S. Hesthaven, and L. Lurati. Computational modeling of uncertainty in time-domain electromagnetics. SIAM J. Sci. Comput.,
28(2):751-775, 2006,
3. C. Chauviere, J. S. Hesthaven, and L. C. Wilcox. Efficient computation of RCS from scatterers of uncertain shapes. IEEE Trans. Electromagn. Compat.,
55(5):1437-1448, 2007,
4. D. Liu, A. Litvinenko, C. Schillings, V. Schulz, Quantification of Airfoil Geometry-Induced Aerodynamic Uncertainties—Comparison of Approaches,
SIAM/ASA Journal on Uncertainty Quantification 5 (1), 334-352, 2017
5. Litvinenko A., Matthies H.G., El-Moselhy T.A. (2013) Sampling and Low-Rank Tensor Approximation of the Response Surface. In: Dick J., Kuo F., Peters
G., Sloan I. (eds) Monte Carlo and Quasi-Monte Carlo Methods 2012. Springer Proceedings in Mathematics & Statistics, vol 65. Springer, Berlin, Heidelberg
6. A. Litvinenko, Application of hierarchical matrices for solving multiscale problems, Dissertation, Leipzig University, Germany,
http://publications.rwth-aachen.de/record/754296/files/754296.pdf
7. A. Litvinenko, R. Kriemann, M.G. Genton, Y. Sun, D.E. Keyes, HLIBCov: Parallel hierarchical matrix approximation of large covariance matrices and likelihoods
with applications in parameter identification, MethodsX 7, 100600, 2020
8. A. Litvinenko, D. Logashenko, R. Tempone, G. Wittum, D. Keyes, Solution of the 3D density-driven groundwater flow problem with uncertain porosity and
permeability. Int. J. Geomath 11, pp 1-29 (2020). https://doi.org/10.1007/s13137-020-0147-1
9. A. Litvinenko, Y. Sun, M.G. Genton, D.E. Keyes, Likelihood approximation with hierarchical matrices for large spatial datasets, Computational Statistics &
Data Analysis 137, 115-132, 2019
10. S. Dolgov, A. Litvinenko, D. Liu, Kriging in tensor train data format, Conf. Proceedings, 3rd International Conference on Uncertainty
Quantification in Computational Sciences and Engineering, https://files.eccomasproceedia.org/papers/e-books/uncecomp_2019.pdf, pp 309-329,
2019
11. A. Litvinenko, D. Keyes, V. Khoromskaia, B.N. Khoromskij, H. G. Matthies, Tucker tensor analysis of Matérn functions in spatial statistics, J.
Computational Methods in Applied Mathematics, Vol. 19, Issue 1, pp 101-122, 2019, De Gruyter
12. H.G. Matthies, E. Zander, B.V. Rosic, A. Litvinenko, Parameter estimation via conditional expectation: a Bayesian inversion, Advanced modeling and
simulation in engineering sciences 3 (1), 1-21, 2016
13. M. Espig, W. Hackbusch, A. Litvinenko, H.G. Matthies, E. Zander, Iterative algorithms for the post-processing of high-dimensional data, Journal of
Computational Physics, Vol. 410, p 109396, 2020
35/35