We consider a class of density-driven flow problems. We are particularly interested in the problem of the salinization of coastal aquifers. We consider the Henry saltwater intrusion problem with uncertain porosity, permeability, and recharge parameters as a test case.
These uncertainties stem from the lack of knowledge, inaccurate measurements, and the inability to measure parameters at every spatial or temporal location. The problem is nonlinear and time-dependent. The solution is the salt mass fraction, which is uncertain and changes in time. Uncertainties in porosity, permeability, recharge, and mass fraction are modeled using random fields. This work investigates the applicability of the well-known multilevel Monte Carlo (MLMC) method to such problems. The MLMC method can reduce the total computational and storage costs. Moreover, it runs multiple scenarios on different spatial and temporal meshes and then estimates the mean value of the mass fraction.
The parallelization is performed in both the physical and the stochastic space. To solve each deterministic scenario, we run the parallel multigrid solver ug4 in a black-box fashion.
We use the solution obtained from the quasi-Monte Carlo method as a reference solution.
We investigated the applicability and efficiency of the MLMC approach for the Henry-like problem with uncertain porosity, permeability, and recharge. These uncertain parameters were modeled by random fields with three independent random variables. The numerical solution for each random realization was obtained using the well-known ug4 parallel multigrid solver. The number of required random samples on each level was estimated by computing the decay of the variances and computational costs for each level. We also computed the expected value and variance of the mass fraction in the whole domain, the evolution of the pdfs, the solutions at a few preselected points $(t,\bx)$, and the time evolution of the freshwater integral value. We have found that some QoIs require only 2-3 of the coarsest mesh levels, and samples from finer meshes would not significantly improve the result. Note that a different type of porosity may lead to a different conclusion.
The results show that the MLMC method is faster than the QMC method at the finest mesh. Thus, sampling at different mesh levels makes sense and helps to reduce the overall computational cost.
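For reference, the MLMC estimator used here is the standard telescoping construction, stated in the notation $g_\ell$ for the QoI on mesh level $\ell$ and $m_\ell$ for the number of samples used below; the cost-optimal sample allocation in terms of the level variances $V_\ell$ and costs $C_\ell$ is a classical result of MLMC theory:
\[
\EXP{g_L} = \EXP{g_0} + \sum_{\ell=1}^{L}\EXP{g_{\ell}-g_{\ell-1}},
\qquad
\widehat{g}_{\mathrm{MLMC}} = \sum_{\ell=0}^{L}\frac{1}{m_{\ell}}\sum_{i=1}^{m_{\ell}}\bigl(g_{\ell}^{(i)}-g_{\ell-1}^{(i)}\bigr),
\quad g_{-1}:=0,
\qquad
m_{\ell}\propto\sqrt{V_{\ell}/C_{\ell}}.
\]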
Saltwater intrusion occurs when sea levels rise and saltwater moves onto the land. Usually, this occurs during storms, high tides, droughts, or when saltwater penetrates freshwater aquifers and raises the groundwater table. Since groundwater is an essential drinking-water and irrigation resource, its salinization may have catastrophic consequences. Many acres of farmland may be lost because the soil becomes too wet or too salty to grow crops. Therefore, accurate modeling of different scenarios of saline flow is essential to help farmers and researchers develop strategies to improve soil quality and reduce the effects of saltwater intrusion.
Saline flow is density-driven and described by a system of time-dependent nonlinear partial differential equations (PDEs). It features convection dominance and can demonstrate very complicated behavior.
As a specific model, we consider a Henry-like problem with uncertain permeability and porosity.
These parameters may strongly affect the flow and transport of salt.
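In a form standard for Henry-type models (a sketch; the exact formulation, boundary conditions, and constitutive laws may differ in the work itself), the density-driven flow is governed by mass conservation of the liquid phase and of the salt, coupled with Darcy's law:
\[
\partial_t(\phi\rho) + \nabla\cdot(\rho\mathbf{q}) = 0,\qquad
\partial_t(\phi\rho c) + \nabla\cdot\bigl(\rho c\,\mathbf{q} - \rho\mathbf{D}\nabla c\bigr) = 0,\qquad
\mathbf{q} = -\frac{\mathbf{K}}{\mu}\bigl(\nabla p - \rho\mathbf{g}\bigr),
\]
where $c$ is the salt mass fraction, $\phi$ the porosity, $\mathbf{K}$ the permeability, $\mathbf{D}$ the diffusion-dispersion tensor, $p$ the pressure, and $\rho=\rho(c)$ the brine density; the dependence of $\rho$ on $c$ makes the system nonlinear.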
We investigated the applicability and efficiency of the MLMC approach to the Henry-like problem with uncertain porosity, permeability and recharge. These uncertain parameters were modelled by random fields with three independent random variables. Permeability is a function of porosity. Both functions are time-dependent, have multi-scale behaviour and are defined for two layers. The numerical solution for each random realisation was obtained using the well-known ug4 parallel multigrid solver. The number of random samples required at each level was estimated by calculating the decay of the variances and the computational cost for each level.
The MLMC method was used to compute the expected value and variance of several QoIs, such as the solution at a few preselected points $(t,\bx)$, the solution integrated over a small subdomain, and the time evolution of the freshwater integral. We have found that some QoIs require only 2-3 mesh levels and samples from finer meshes would not significantly improve the result. Other QoIs require more grid levels.
1. Investigated the efficiency of MLMC for the Henry problem with uncertain porosity, permeability, and recharge.
2. Uncertainties are modeled by random fields.
3. MLMC can be much faster than MC, up to 3200 times faster!
4. The time dependence is challenging.
Remarks:
1. Check whether MLMC is needed at all.
2. The optimal number of samples depends on the point $(t,\bx)$.
3. An advanced MLMC may give better estimates of $L$ and $m_{\ell}$.
Poster to be presented at the Stochastic Numerics and Statistical Learning: Theory and Applications Workshop 2024, KAUST, Saudi Arabia, https://cemse.kaust.edu.sa/stochnum/events/event/snsl-workshop-2024.
In this work we have considered a setting that mimics the Henry problem \cite{Simpson2003,Simpson04_Henry}, modeling seawater intrusion into a 2D coastal aquifer. The pure water recharge from the ``land side'' resists the salinisation of the aquifer due to the influx of saline water through the ``sea side'', thereby achieving some equilibrium in the salt concentration. In our setting, following \cite{GRILLO2010}, we consider a fracture on the sea side that significantly increases the permeability of the porous medium.
The flow and transport essentially depend on the geological parameters of the porous medium, including the fracture. We investigated the effects of various uncertainties on saltwater intrusion. We assumed uncertainties in the fracture width, the porosity of the bulk medium, its permeability, and the pure water recharge from the land side. The porosity and permeability were modeled by random fields, the recharge by a random but time-periodic intensity, and the fracture thickness by a random variable. We computed the mean and variance of the salt mass fraction, which is also uncertain.
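The abstracts do not spell out the exact parametrization; a generic truncated expansion with three independent random variables $\theta_1,\theta_2,\theta_3$, of the kind described above, has the form
\[
\phi(\bx,\omega) = \phi_0(\bx) + \sum_{i=1}^{3}\theta_i(\omega)\,\sqrt{\lambda_i}\,\psi_i(\bx),
\]
where $\phi_0$ is the mean porosity field and $(\lambda_i,\psi_i)$ are, e.g., the leading eigenpairs of a covariance operator; the permeability then follows as a deterministic function of $\phi$, consistent with the statement above that permeability is a function of porosity.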
The main question we investigated in this work was how well the MLMC method can be used to compute statistics of different QoIs. We found that the answer depends on the choice of the QoI. First, not every QoI requires a hierarchy of meshes and MLMC. Second, MLMC requires stable convergence rates for $\EXP{g_{\ell} - g_{\ell-1}}$ and $\Var{g_{\ell} - g_{\ell-1}}$. These rates should be independent of $\ell$. If the convergence rates vary with $\ell$, then it is hard to estimate $L$ and $m_{\ell}$, and MLMC will either not work or be suboptimal. We were not able to obtain stable convergence rates for all levels $\ell=1,\ldots,5$ when the QoI was an integral as in \eqref{eq:integral_box}. We found that the rate $\alpha$ differed between $\ell=1,\ldots,4$ and $\ell=5$. Further investigation is needed to find the reason for this. Another difficulty is the dependence on time, i.e., the number of levels $L$ and the numbers of samples $m_{\ell}$ depend on $t$. At the beginning the variability is small, then it increases, and after the process of mixing salt and fresh water has stopped, the variance decreases again.
The number of random samples required at each level was estimated by calculating the decay of the variances and the computational cost for each level. These estimates depend on the minimisation function in the MLMC algorithm.
To achieve the efficiency of the MLMC approach presented in this work, it is essential that the complexity of the numerical solution of each random realisation is proportional to the number of grid vertices on the grid levels.
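As an illustration only (a minimal sketch, not the authors' implementation), the plain MLMC mean estimator can be written as follows; sample_pair is a hypothetical user-supplied routine that runs the deterministic solver, e.g. ug4 as a black box, on levels $\ell$ and $\ell-1$ with the same random input:
\begin{verbatim}
import numpy as np

def mlmc_mean(sample_pair, m, seed=0):
    """Plain MLMC estimator of E[g_L].

    sample_pair(level, rng) must return the coupled pair
    (g_level, g_{level-1}) computed from the SAME random input;
    on level 0 it returns (g_0, 0.0).  m = [m_0, ..., m_L] are the
    per-level sample counts, in practice chosen from the estimated
    level variances V_l and costs C_l (m_l ~ sqrt(V_l / C_l)).
    """
    rng = np.random.default_rng(seed)
    estimate = 0.0
    for level, m_l in enumerate(m):
        diffs = [fine - coarse
                 for fine, coarse in (sample_pair(level, rng)
                                      for _ in range(m_l))]
        estimate += np.mean(diffs)
    return estimate
\end{verbatim}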
Density Driven Groundwater Flow with Uncertain Porosity and Permeability, by Alexander Litvinenko
In this work, we solved the density-driven groundwater flow problem with uncertain porosity and permeability. An exact solution of this time-dependent, non-linear problem is impossible because of natural uncertainties in reservoir properties such as porosity and permeability.
Therefore, we estimated the mean value and the variance of the solution, as well as the propagation of uncertainties from the random input parameters to the solution.
We started by defining the Elder-like problem. Then we described the multi-variate polynomial approximation (\gPC) approach and used it to estimate the required statistics of the mass fraction.
Utilizing the \gPC method allowed us
to reduce the computational cost compared to the classical quasi-Monte Carlo method.
\gPC assumes that the output function $\sol(t,\bx,\btheta)$ is square-integrable and smooth w.r.t. the uncertain input variables $\btheta$.
Many factors, such as non-linearity, multiple solutions, multiple stationary states, time dependence and complicated solvers, make the investigation of the convergence of the \gPC method a non-trivial task.
We used an easy-to-implement, but only sub-optimal, \gPC technique to quantify the uncertainty. For example, it is known that increasing the degree of global polynomials (Hermite, Lagrange, and similar) can lead to Runge's phenomenon. Here, local polynomials, splines, or mixtures of them would probably be better. Additionally, we used an easy-to-parallelise quadrature rule, which was also only suboptimal. For instance, an adaptive choice of sparse grid (or collocation) points \cite{ConradMarzouk13,nobile-sg-mc-2015,Sudret_sparsePCE,CONSTANTINE12,crestaux2009polynomial} would be better, but we were limited by the usage of parallel methods, and adaptive quadrature rules are not (so well) parallelisable. In conclusion, we can report that: a) we developed a highly parallel method to quantify uncertainty in the Elder-like problem; b) with \gPC of degree 4 we can achieve results similar to those of the \QMC method.
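A minimal one-dimensional sketch of the non-adaptive projection approach described above (Hermite basis, fixed Gauss-Hermite quadrature; the model function and all numbers are illustrative placeholders, not the Elder-problem solver):
\begin{verbatim}
import numpy as np
from math import factorial
from numpy.polynomial.hermite_e import hermegauss, hermeval

# Gauss quadrature for the weight exp(-x^2/2); normalizing the weights
# turns the quadrature sums into expectations over a standard normal.
nodes, weights = hermegauss(10)
weights = weights / np.sqrt(2.0 * np.pi)

def model(theta):                  # hypothetical scalar QoI u(theta)
    return np.exp(0.3 * theta)

degree = 4                         # gPC degree, as in the text
coeffs = np.zeros(degree + 1)
for n in range(degree + 1):
    basis = np.zeros(n + 1); basis[n] = 1.0          # He_n
    proj = np.sum(weights * model(nodes) * hermeval(nodes, basis))
    coeffs[n] = proj / factorial(n)                  # E[He_n^2] = n!

mean = coeffs[0]                   # surrogate mean
variance = sum(coeffs[n] ** 2 * factorial(n)
               for n in range(1, degree + 1))
\end{verbatim}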
In the numerical section we considered two different aquifer geometries: a solid parallelepiped and a solid elliptic cylinder. One of our goals was to see how the domain geometry influences the formation, number, and shape of the fingers.
Since the considered problem is nonlinear,
a high variance in the porosity may result in totally different solutions; for instance, the number of fingers, their intensity and shape, the propagation time, and the velocity may vary considerably.
The number of cells in the presented experiments varied from $241{,}152$ to $15{,}433{,}728$ for the cylindrical domain and from $524{,}288$ to $4{,}194{,}304$ for the parallelepiped. The maximal number of parallel processing units was $600\times 32$, where $600$ is the number of parallel nodes and $32$ is the number of computing cores on each node. The total computing time varied from 2 hours for the coarse mesh to 24 hours for the finest mesh.
Propagation of Uncertainties in Density Driven Groundwater Flow, by Alexander Litvinenko
Major Goal: estimate the risk of pollution in subsurface flow.
How? We solve the density-driven groundwater flow problem with uncertain porosity and permeability.
We set up the density-driven groundwater flow problem,
review stochastic modeling and stochastic methods, use the UG4 framework (https://gcsc.uni-frankfurt.de/simulation-and-modelling/ug4),
model uncertainty in porosity and permeability,
and present 2D and 3D numerical experiments.
In many countries, groundwater is a strategic reserve used both as drinking water and as an irrigation resource. Therefore, accurate modeling of the pollution of the soil and the groundwater aquifer is highly important. As a model, we consider a density-driven groundwater flow problem with uncertain porosity and permeability. This problem arises in geothermal reservoir simulation, natural saline-disposal basins, the modeling of contaminant plumes, and subsurface flow.
This strongly non-linear problem describes how salt or polluted water streams downwards, building ``fingers''. The solution process requires a very fine unstructured mesh and, therefore, large computational resources. Consequently, we run the parallel multigrid solver UG4 (https://github.com/UG4/ughub.wiki.git) on the Shaheen II supercomputer.
The parallelization is done in both the physical space and the stochastic space. The novelty of this work is the estimation of the risk that the pollution reaches a specified critical concentration. Additionally, we demonstrate how the multigrid UG4 solver can be run in a black-box fashion to test different scenarios of density-driven flow.
We solve Elder's problem in 2D and 3D domains, where the unknown porosity and permeability are modeled by random fields. For the approximation in the stochastic space, we use the generalized polynomial chaos expansion. We compute different quantities of interest, such as the mean, variance, and exceedance probabilities of the concentration. As a reference solution, we use the solution obtained with the quasi-Monte Carlo method.
Major Goal: estimate the risk of pollution in subsurface flow.
How? We solve the density-driven groundwater flow problem with uncertain porosity and permeability.
1. We set up the density-driven groundwater flow problem.
2. We review stochastic modeling and stochastic methods.
3. We model the uncertainty in porosity and permeability.
4. We describe numerical methods to solve the deterministic problem.
5. We present 2D and 3D examples with 0.5-8 million mesh points.
We study an elliptic eigenvalue problem with a random coefficient that can be parametrised by infinitely many stochastic parameters. The physical motivation is the criticality problem for a nuclear reactor: in steady state the fission reaction can be modeled by an elliptic eigenvalue
problem, and the smallest eigenvalue provides a measure of how close the reaction is to equilibrium -- in terms of production/absorption of neutrons. The coefficients are allowed to be random to model the uncertainty of the composition of materials inside the reactor, e.g., the
control rods, reactor structure, fuel rods etc.
The randomness in the coefficient also results in randomness in the eigenvalues and corresponding eigenfunctions. As such, our quantity of interest is the expected value, with
respect to the stochastic parameters, of the smallest eigenvalue, which we formulate as an integral over the infinite-dimensional parameter domain. Our approximation involves three steps: truncating the stochastic dimension, discretizing the spatial domain using finite elements and approximating the now finite but still high-dimensional integral.
To approximate the high-dimensional integral we use quasi-Monte Carlo (QMC) methods. These are deterministic or quasi-random quadrature rules that can be proven to be very efficient for the numerical integration of certain classes of high-dimensional functions. QMC methods have previously been applied to linear functionals of the solution of a similar elliptic source problem; however, because of the nonlinearity of eigenvalues the existing analysis of the integration error
does not hold in our case.
We show that the minimal eigenvalue belongs to the spaces required for QMC theory, outline the approximation algorithm and provide numerical results.
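For illustration, after truncation and spatial discretization the QMC step reduces to averaging over quasi-random points in the unit cube. A minimal sketch with a scrambled Sobol' sequence (the talk may use a different rule, e.g. a lattice); integrand is a hypothetical stand-in for the discretized smallest eigenvalue $\lambda_{\min}(y)$:
\begin{verbatim}
import numpy as np
from scipy.stats import qmc

s = 16                                   # truncated stochastic dimension

def integrand(y):                        # placeholder for lambda_min(y)
    return 1.0 / (1.0 + 0.5 * y.mean(axis=1))

sobol = qmc.Sobol(d=s, scramble=True, seed=42)
points = sobol.random_base2(m=12)        # 2^12 quasi-random points in [0,1)^s
qmc_estimate = integrand(points).mean()  # approximates the parametric mean
\end{verbatim}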
Efficient Simulations for Contamination of Groundwater Aquifers under Uncerta..., by Alexander Litvinenko
1. Solved the time-dependent density-driven flow problem with uncertain porosity and permeability in 2D and 3D.
2. Computed the propagation of uncertainties in the porosity into the mass fraction.
3. Computed the mean, variance, exceedance probabilities, quantiles, and risks.
4. QoIs such as the number of fingers and their size, shape, and propagation time can be unstable.
5. For moderate perturbations, our gPCE surrogate results are similar to the qMC results.
6. Used a highly scalable solver on up to 800 computing nodes.
Joint blind calibration and time-delay estimation for multiband ranging, by Tarik Kazaz
In this presentation, we focus on the problem of blind joint calibration of multiband transceivers and time-delay (TD) estimation of multipath channels. We show that this problem can be formulated as a particular case of covariance matching. Although this problem is severely ill-posed, prior information about radio-frequency chain distortions and multipath channel sparsity is used for regularization. This approach leads to a biconvex optimization problem, which is formulated as a rank-constrained linear system and solved by a simple group Lasso algorithm.
This method is general and can also be applied to the calibration of sensor arrays and to direction-of-arrival estimation.
Numerical experiments show that the proposed algorithm provides better calibration and higher resolution for TD estimation than current state-of-the-art methods.
Talk presented at the GAMM 2019 Conference in Vienna, Austria.
A parallel algorithm for uncertainty quantification in density-driven subsurface flow; we estimate the risk of pollution of the subsurface flow.
Possible applications of low-rank tensors in statistics and UQ (my talk in Bo..., by Alexander Litvinenko
Just some ideas on how low-rank matrices/tensors can be useful in spatial and environmental statistics, where one usually has to deal with very large datasets.
We combined low-rank tensor techniques with the FFT to compute kriging estimates, estimate the variance, and compute the conditional covariance. We are able to solve 3D problems with very high resolution.
Probabilistic Control of Uncertain Linear Systems Using Stochastic Reachability, by Leo Asselborn
This presentation proposes an approach to algorithmically synthesize control strategies for
set-to-set transitions of discrete-time uncertain systems based on reachable set computations in
a stochastic setting. For given Gaussian distributions of the initial states and disturbances, state
sets which are reachable to a chosen confidence level under the effect of time-variant control laws
are computed by using principles of the ellipsoidal calculus. The proposed algorithm iterates over
LMI-constrained semi-definite programming problems to compute probabilistically stabilizing
controllers, while ellipsoidal input constraints are considered. An example for illustration is included.
Hierarchical matrix approximation of large covariance matrices, by Alexander Litvinenko
We study the class of Matérn covariance matrices and their approximability in the H-matrix format. Further tasks are to compute the H-Cholesky factorization, trace, determinant, quadratic form, and log-likelihood. Later, H-matrices can be applied in kriging.
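For concreteness, the Matérn covariance in its standard form; its parameters (smoothness $\nu$, correlation length, variance $\sigma^2$) are what one identifies by maximizing the log-likelihood, and the matrix entries $C(\|\bx_i-\bx_j\|)$ are what the H-matrix format approximates:
\begin{verbatim}
import numpy as np
from scipy.special import gamma, kv

def matern(r, nu=1.5, length=1.0, sigma2=1.0):
    """Standard Matern covariance C(r) with smoothness nu,
    correlation length `length`, and variance sigma2."""
    r = np.atleast_1d(np.asarray(r, dtype=float))
    out = np.full(r.shape, sigma2)          # C(0) = sigma^2
    nz = r > 0
    z = np.sqrt(2.0 * nu) * r[nz] / length
    out[nz] = sigma2 * 2.0 ** (1.0 - nu) / gamma(nu) * z ** nu * kv(nu, z)
    return out
\end{verbatim}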
Identification of the Mathematical Models of Complex Relaxation Processes in ..., by Vladimir Bakhrushin
An approach to solving the problem of complex relaxation spectra is presented.
Presentation for the XI International Conference on Defect interaction and anelastic phenomena in solids. Tula, 2007.
MVPA with SpaceNet: sparse structured priors, by Elvis DOHMATOB
The GraphNet (aka S-Lasso), as well as other “sparsity + structure” priors like TV (Total-Variation), TV-L1, etc., are not easily applicable to brain data because of technical problems
relating to the selection of the regularization parameters. Also, in
their own right, such models lead to challenging high-dimensional optimization problems. In this manuscript, we present some heuristics for speeding up the overall optimization process: (a) early stopping, whereby one halts the optimization process when the test score (performance on left-out data) of the internal cross-validation for model selection stops improving, and (b) univariate feature screening, whereby irrelevant (non-predictive) voxels are detected and eliminated before the optimization problem is entered, thus reducing the size of the problem. Empirical results with GraphNet on real MRI (Magnetic Resonance Imaging) datasets indicate that these heuristics are a win-win strategy, as they add speed without sacrificing the quality of the predictions. We expect the proposed heuristics to work on other models like TV-L1, etc.
1. Motivation: why do we need low-rank tensors?
2. Tensors of second order (matrices)
3. CP, Tucker, and tensor-train tensor formats
4. Many classical kernels have (or can be approximated in) a low-rank tensor format
5. Post-processing: computation of the mean, variance, level sets, frequency
Sequential quasi-Monte Carlo (SQMC) is a quasi-Monte Carlo (QMC) version of sequential Monte Carlo (or particle filtering), a popular class of Monte Carlo techniques used to carry out inference in state-space models. In this talk I will first review the SQMC methodology as well as some theoretical results. Although SQMC converges faster than the usual Monte Carlo rate, its performance deteriorates quickly as the dimension of the hidden variable increases. However, I will show with an example that SQMC may perform well for some "high"-dimensional problems. I will conclude this talk with some open problems and potential applications of SQMC in complicated settings.
Here the interest is mainly to compute characterisations like the entropy,
the Kullback-Leibler divergence, more general $f$-divergences, or other such characteristics based on
the probability density. The density is often not available directly,
and it is a computational challenge to just represent it in a numerically
feasible fashion in case the dimension is even moderately large. It
is an even stronger numerical challenge to then actually compute said characteristics
in the high-dimensional case.
The task considered here was the numerical computation of characterising statistics of
high-dimensional pdfs, as well as their divergences and distances,
where the pdf in the numerical implementation was assumed discretised on some regular grid.
We have demonstrated that high-dimensional pdfs,
pcfs, and some functions of them
can be approximated and represented in a low-rank tensor data format.
Utilisation of low-rank tensor techniques helps to reduce the computational complexity
and the storage cost from exponential $\C{O}(n^d)$ to linear in the dimension $d$, e.g.\
$\C{O}(d n r^2)$ for the TT format. Here $n$ is the number of discretisation
points in one direction, $r \ll n$ is the maximal tensor rank, and $d$ the problem dimension.
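As a sketch of how such a representation can be obtained from a full (necessarily small) array, the basic TT-SVD procedure with a fixed maximal rank reads:
\begin{verbatim}
import numpy as np

def tt_svd(a, max_rank):
    """Factor a d-way array into TT cores via sequential truncated SVDs.
    Storage drops from n^d entries to about d * n * max_rank^2."""
    d, n = a.ndim, a.shape
    cores, r_prev = [], 1
    mat = a.reshape(r_prev * n[0], -1)
    for k in range(d - 1):
        u, s, vt = np.linalg.svd(mat, full_matrices=False)
        r = min(max_rank, s.size)
        cores.append(u[:, :r].reshape(r_prev, n[k], r))
        mat = (s[:r, None] * vt[:r]).reshape(r * n[k + 1], -1)
        r_prev = r
    cores.append(mat.reshape(r_prev, n[-1], 1))
    return cores
\end{verbatim}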
Talk presented at the Workshop on Imaging With Uncertainty Quantification (IUQ), September 2022,
https://people.compute.dtu.dk/pcha/CUQI/IUQworkshop.html
We consider a weakly supervised classification problem, i.e., a classification problem where the target variable can be unknown or uncertain for some subset of samples. This problem appears when labeling is impossible, time-consuming, or expensive. Noisy measurements and lack of data may prevent accurate labeling. Our task is to build an optimal classification function. For this, we construct and minimize a specific objective function, which includes the fitting error on labeled data and a smoothness term. Next, we use covariance and radial basis functions to define the degree of similarity between points. The further process involves the repeated solution of a large linear system with the graph Laplacian operator. To speed up this solution process, we introduce low-rank approximation techniques. We call the resulting algorithm WSC-LR. We then use the WSC-LR algorithm for the analysis of CT brain scans to recognize ischemic stroke. We also compare WSC-LR with other well-known machine learning algorithms.
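A minimal sketch of the regularized fit behind this approach (without the low-rank acceleration that distinguishes WSC-LR; the similarity matrix W would be built from the covariance/RBF kernels mentioned above):
\begin{verbatim}
import numpy as np
import scipy.sparse as sp
from scipy.sparse.csgraph import laplacian
from scipy.sparse.linalg import cg

def fit_labels(W, labeled_idx, y_labeled, lam=1.0):
    """Minimize sum_{i labeled} (f_i - y_i)^2 + lam * f^T L f.
    The optimality condition is (M + lam * L) f = M y, where the
    diagonal matrix M masks the labeled samples."""
    n = W.shape[0]
    L = laplacian(sp.csr_matrix(W))
    mask = np.zeros(n); mask[labeled_idx] = 1.0
    rhs = np.zeros(n); rhs[labeled_idx] = np.asarray(y_labeled, float)
    f, info = cg(sp.diags(mask) + lam * L, rhs)
    return f
\end{verbatim}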
Computing f-Divergences and Distances of High-Dimensional Probability Density..., by Alexander Litvinenko
Poster presented at the Stochastic Numerics and Statistical Learning: Theory and Applications Workshop at KAUST, Saudi Arabia.
The task considered here was the numerical computation of characterising statistics of
high-dimensional pdfs, as well as their divergences and distances,
where the pdf in the numerical implementation was assumed discretised on some regular grid.
Even for moderate dimension $d$, the full storage and computation with such objects become very quickly infeasible.
We have demonstrated that high-dimensional pdfs,
pcfs, and some functions of them
can be approximated and represented in a low-rank tensor data format.
Utilisation of low-rank tensor techniques helps to reduce the computational complexity
and the storage cost from exponential $\C{O}(n^d)$ to linear in the dimension $d$, e.g.
$\C{O}(d n r^2)$ for the TT format. Here $n$ is the number of discretisation
points in one direction, $r \ll n$ is the maximal tensor rank, and $d$ the problem dimension.
The particular data format is rather unimportant:
any of the well-known tensor formats (CP, Tucker, hierarchical Tucker, tensor-train (TT)) can be used,
and we used the TT data format. Much of the presentation, and in fact the central line
of reasoning, is independent of the actual representation.
At the beginning, the representation was motivated through three possible ways in which one may
arrive at such a representation of the pdf: the pdf may be given in some approximate
analytical form, e.g. as a function tensor product of lower-dimensional pdfs with a
product measure; it may come from an analogous representation of the pcf and subsequent use of the
Fourier transform; or it may come from a low-rank functional representation of a high-dimensional
RV, again via its pcf.
The theoretical underpinnings of the relation between pdfs and pcfs, as well as their
properties, were recalled in the theory section, as they are important to preserve in the
discrete approximation. This also introduced the concepts of the convolution and of
the point-wise multiplication (Hadamard) algebra, concepts which become especially important if
one wants to characterise sums of independent RVs or mixture models,
a topic we did not touch on for the sake of brevity but which follows very naturally from
the developments here. The Hadamard algebra is especially
important for the algorithms that compute various point-wise functions in the sparse formats.
Computing f-Divergences and Distances of High-Dimensional Probability Densi..., by Alexander Litvinenko
Talk presented at the SIAM IS 2022 conference.
Very often, in the course of uncertainty quantification tasks or
data analysis, one has to deal with high-dimensional random variables (RVs)
(with values in $\Rd$). Just like any other RV,
a high-dimensional RV can be described by its probability density (\pdf) and/or
by the corresponding probability characteristic functions (\pcf),
or by a more general representation as
a function of other, known, random variables.
Here the interest is mainly to compute characterisations like the entropy, the Kullback-Leibler divergence, or more general
$f$-divergences. These are all computed from the \pdf, which is often not available directly,
and it is a computational challenge to even represent it in a numerically
feasible fashion in case the dimension $d$ is even moderately large. It
is an even stronger numerical challenge to then actually compute said characterisations
in the high-dimensional case.
In this regard, in order to achieve a computationally feasible task, we propose
to approximate the density by a low-rank tensor.
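For reference, the characterisations in question are of the form
\[
D_f(p\,\|\,q) \;=\; \int_{\Rd} f\!\left(\frac{p(x)}{q(x)}\right) q(x)\,\mathrm{d}x,
\]
which gives the Kullback-Leibler divergence for $f(t)=t\log t$; on a tensor grid such integrals become sums over all $n^d$ grid points, evaluated with point-wise (Hadamard) operations, which is exactly where the low-rank representation pays off.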
Low rank tensor approximation of probability density and characteristic funct..., by Alexander Litvinenko
Very often one has to deal with high-dimensional random variables (RVs). A high-dimensional RV can be described by its probability density (\pdf) and/or by the corresponding probability characteristic functions (\pcf), or by a function representation. Here the interest is mainly to compute characterisations like the entropy, or
relations between two distributions, like their Kullback-Leibler divergence, or more general measures such as $f$-divergences,
among others. These are all computed from the \pdf, which is often not available directly, and it is a computational challenge to even represent it in a numerically feasible fashion in case the dimension $d$ is even moderately large. It is an even stronger numerical challenge to then actually compute said characterisations in the high-dimensional case.
In this regard, in order to achieve a computationally feasible task, we propose to represent the density by a high order tensor product, and approximate this in a low-rank format.
Identification of unknown parameters and prediction of missing values. Compar..., by Alexander Litvinenko
H-matrix approximation of large Mat\'{e}rn covariance matrices and Gaussian log-likelihoods.
Identifying unknown parameters and making predictions
Comparison with machine learning methods.
kNN is easy to implement and shows promising results.
Computation of electromagnetic fields scattered from dielectric objects of un..., by Alexander Litvinenko
We develop fast and efficient stochastic methods for characterizing scattering
from objects of uncertain shapes. This is highly needed in the
fields of electromagnetics, optics, and photonics.
The continuation multilevel Monte Carlo (CMLMC) method is
used together with a surface integral equation solver. The
CMLMC method optimally balances statistical errors due to
sampling of the parametric space, and numerical errors due
to the discretization of the geometry using a hierarchy of
discretizations, from coarse to fine. The number of realizations
of finer discretizations can be kept low, with most samples
computed on coarser discretizations to minimize computational
work. Consequently, the total execution time is significantly
reduced, in comparison to the standard MC scheme.
Identification of unknown parameters and prediction with hierarchical matrice..., by Alexander Litvinenko
We compare four numerical methods for the prediction of missing values in four different datasets.
These methods are 1) hierarchical maximum likelihood estimation (H-MLE) and three machine learning (ML) methods: 2) k-nearest neighbors (kNN), 3) random forest, and 4) deep neural networks (DNN).
Among the ML methods, the best results (for the considered datasets) were obtained by the kNN method with three (or seven) neighbors.
On one dataset, the H-MLE method showed a smaller error than the kNN method, whereas on another, the kNN method was better.
The H-MLE method requires a lot of linear algebra computations and works well on almost all datasets. Its result can be improved by taking a smaller threshold and more accurate hierarchical matrix arithmetic. To our surprise, the well-known kNN method produced results similar to H-MLE and worked much faster.
Computation of electromagnetic fields scattered from dielectric objects of un..., by Alexander Litvinenko
Computational tools for characterizing electromagnetic scattering from objects with uncertain shapes are needed in various applications ranging from remote sensing at microwave frequencies to Raman spectroscopy at optical frequencies. Often, such computational tools use the Monte Carlo (MC) method to sample a parametric space describing geometric uncertainties. For each sample, which corresponds to a realization of the geometry, a deterministic electromagnetic solver computes the scattered fields. However, for an accurate statistical characterization the number of MC samples has to be large. In this work, to address this challenge, the continuation multilevel Monte Carlo (\CMLMC) method is used together with a surface integral equation solver.
The \CMLMC method optimally balances statistical errors due to sampling of
the parametric space, and numerical errors due to the discretization of the geometry using a hierarchy of discretizations, from coarse to fine.
The number of realizations of finer discretizations can be kept low, with most samples
computed on coarser discretizations to minimize computational cost.
Consequently, the total execution time is significantly reduced, in comparison to the standard MC scheme.
Simulation of propagation of uncertainties in density-driven groundwater flow, by Alexander Litvinenko
We consider stochastic modelling of density-driven subsurface flow in 3D. This talk was presented by Dmitry Logashenko at the IMG conference in Kunming, China, in August 2019.
Large datasets result in large dense matrices, say with 2,000,000 rows and columns. How does one work with such large matrices? How can one approximate them? How can one compute the log-likelihood, the determinant, or the inverse? All the answers are in this work.
In this paper, we solve a semi-supervised regression problem. Due to the lack of knowledge about the data structure and the presence of random noise, the considered data model is uncertain. We propose a method which combines graph Laplacian regularization and cluster ensemble methodologies. The co-association matrix of the ensemble is calculated on both labeled and unlabeled data; this matrix is used as a similarity matrix in the regularization framework to derive the predicted outputs. We use a low-rank decomposition of the co-association matrix to significantly speed up calculations and reduce memory requirements. Two clustering problem examples are presented.
The full version is available at https://arxiv.org/abs/1901.03919
Computation of electromagnetic fields scattered from dielectric objects of un..., by Alexander Litvinenko
Tools for electromagnetic scattering from objects with uncertain shapes are needed in various applications.
We develop numerical methods for predicting radar and scattering cross sections (RCS and SCS) of complex targets.
To reduce the cost of Monte Carlo (MC), we offer a modified multilevel MC (CMLMC) method.
Application of Parallel Hierarchical Matrices in Spatial Statistics and Param..., by Alexander Litvinenko
Part 1: Parallel H-matrices in spatial statistics
1. Motivation: improve statistical model
2. Tools: Hierarchical matrices [Hackbusch 1999]
3. Matern covariance function and joint Gaussian likelihood
4. Identification of unknown parameters via maximizing Gaussian
log-likelihood
5. Implementation with HLIBPro.
Tucker tensor analysis of Matern functions in spatial statistics Alexander Litvinenko
1. Motivation: improve statistical models
2. Motivation: disadvantages of matrices
3. Tools: Tucker tensor format
4. Tensor approximation of Matern covariance function via FFT
5. Typical statistical operations in Tucker tensor format
6. Numerical experiments
Center for Uncertainty Quantification

Salinization of coastal aquifers under uncertainties
Alexander Litvinenko¹, Dmitry Logashenko², Raul Tempone¹,², Ekaterina Vasilyeva², Gabriel Wittum²
¹ RWTH Aachen, Germany, ² KAUST, Saudi Arabia
litvinenko@uq.rwth-aachen.de
Abstract
Problem: Henry saltwater intrusion (nonlinear, time-dependent, describing a two-phase subsurface flow)
Input uncertainty: porosity, permeability, and recharge (modeled by random fields)
Solution: the salt mass fraction (uncertain and time-dependent)
Method: multilevel Monte Carlo (MLMC) method
Deterministic solver: parallel multigrid solver ug4
Questions:
1. How long can wells be used?
2. Where is the largest uncertainty?
3. What is the probability that the freshwater volume exceeds a given threshold?
4. What is the mean scenario, and what are its variations?
5. What are the extreme scenarios?
6. How do the uncertainties change over time?
Figure 1: Henry problem setup (taken from https://www.mdpi.com/2073-4441/10/2/230).
1. Henry problem settings
The mass conservation laws for the entire liquid phase and for the salt yield the equations
$\partial_t(\phi\rho) + \nabla\cdot(\rho q) = 0$,
$\partial_t(\phi\rho c) + \nabla\cdot(\rho c q - \rho \mathbf{D} \nabla c) = 0$,
where the porosity $\phi(x, \xi)$, $x \in D$, is determined by a set of random variables $\xi = (\xi_1, \ldots, \xi_M, \ldots)$, $c(t, x)$ is the mass fraction of the salt, $\rho = \rho(c)$ the density of the liquid phase, and $\mathbf{D}(t, x)$ the molecular diffusion tensor.
For the velocity $q(t, x)$, we assume Darcy's law:
$q = -\frac{K}{\mu}(\nabla p - \rho g)$,
where $p = p(t, x)$ is the hydrostatic pressure, $K$ the permeability, $\mu = \mu(c)$ the viscosity of the liquid phase, and $g$ the gravity. The unknowns to be computed are $c$ and $p$.
Computational domain: $D \times [0, T]$. We set $\rho(c) = \rho_0 + (\rho_1 - \rho_0)c$ and $\mathbf{D} = \phi D I$ with a scalar diffusion coefficient $D$.
Initial condition: $c|_{t=0} = 0$. Boundary conditions: $c|_{x=2} = 1$, $p|_{x=2} = -\rho_1 g y$, $c|_{x=0} = 0$, $\rho q \cdot e_x|_{x=0} = \hat{q}_{in}$.
We model the uncertain $\phi$ by a random field and assume $\mathbf{K} = K I$ with $K = K(\phi)$.
Numerical methods: Newton's method and BiCGStab, preconditioned with the geometric multigrid method (V-cycle) with ILU$_\beta$ smoothers and Gaussian elimination.
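As a minimal illustration of the uncertain-input model, the sketch below draws one porosity realization from three independent random variables and derives a permeability from it. The sine modes, amplitude, reference porosity, and the power-law form of K(ϕ) are illustrative assumptions, not the field model used in the cited works.

```python
import numpy as np

def porosity_field(x, xi, phi0=0.35, amp=0.1):
    # Toy realization of phi(x, xi) driven by three independent RVs;
    # the sine modes and the amplitude are illustrative assumptions.
    modes = np.array([np.sin(np.pi * k * x) for k in (1, 2, 3)])
    return phi0 + amp * xi @ modes

def permeability(phi, K0=1e-9, p=3.0):
    # Illustrative power-law dependence K(phi); K0 and p are assumptions.
    return K0 * phi**p

rng = np.random.default_rng(0)
xi = rng.uniform(-1.0, 1.0, size=3)   # three independent random variables
x = np.linspace(0.0, 2.0, 5)          # sample points along the x-axis of D
phi = porosity_field(x, xi)           # one porosity realization
K = permeability(phi)                 # induced permeability realization
```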
2. Solution of the Henry problem
Figure: computational domain $D := [0, 2] \times [-1, 0]$ with boundary conditions: freshwater recharge $\hat{q}_{in} = 6.6 \cdot 10^{-2}$ kg/s and $c = 0$ on the left boundary, $c = 1$ and $p = -\rho_1 g y$ on the right boundary.
Figures: a realization of $c(t, x)$; $\phi(\xi^*) \in [0.18, 0.59]$; $E[c] \in [0, 0.35]$; $\mathrm{Var}[c] \in [0.0, 0.04]$.
QoIs: $c$ in the whole domain, $c$ at a point, or integral values (the freshwater/saltwater integrals):
$Q_{FW}(t, \omega) := \int_{x \in D} \mathbb{I}\big(c(t, x, \omega) \le 0.012178\big)\, dx$, $\quad Q_s(t, \omega) := \int_{x \in D} c(t, x, \omega)\, \rho(t, x, \omega)\, dx$.
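Both integral QoIs reduce to straightforward quadrature once a realization of $c$ is available on a grid. A minimal sketch, assuming a uniform grid, midpoint quadrature, and the linear density law with assumed values $\rho_0 = 1000$, $\rho_1 = 1025$:

```python
import numpy as np

def freshwater_integral(c, dx, dy, c_thr=0.012178):
    # Q_FW: area of the subdomain where c stays below the freshwater
    # threshold, via the indicator function and midpoint quadrature.
    return np.sum(c <= c_thr) * dx * dy

def salt_integral(c, rho, dx, dy):
    # Q_s: integral of c * rho over the domain D.
    return np.sum(c * rho) * dx * dy

# Toy realization on a uniform grid over D = [0, 2] x [-1, 0].
nx, ny = 200, 100
dx, dy = 2.0 / nx, 1.0 / ny
c = np.random.default_rng(1).random((ny, nx)) * 0.05   # placeholder c
rho = 1000.0 + 25.0 * c   # rho(c) = rho0 + (rho1 - rho0) c; values assumed
print(freshwater_integral(c, dx, dy), salt_integral(c, rho, dx, dy))
```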
2.1 Multilevel Monte Carlo (MLMC) method
Spatial and temporal grid hierarchies $D_0, D_1, \ldots, D_L$ and $T_0, T_1, \ldots, T_L$; $n_0 = 512$, $n_\ell \approx n_0 \cdot 16^\ell$, $\tau_{\ell+1} = \tau_\ell/4$, $r_{\ell+1} = 4 r_\ell$, i.e. $r_\ell = r_0 4^\ell$. The computational complexity of one sample on level $\ell$ is
$s_\ell = O(n_\ell r_\ell) = O(4^{3\ell\gamma} n_0 r_0)$.
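These scaling rules fix all level parameters. The few lines below tabulate $n_\ell$, $r_\ell$, $\tau_\ell$, and the nominal cost $n_\ell r_\ell$ per level; actual solver timings, reported in the table further down, grow somewhat faster because of solver overheads.

```python
n0, r0, tau_total = 512, 94, 6016   # coarse-level sizes from the text above

for ell in range(4):
    n = n0 * 16**ell        # spatial problem size: n_ell ~ n0 * 16^ell
    r = r0 * 4**ell         # number of time steps: r_ell = r0 * 4^ell
    tau = tau_total / r     # time-step size: tau_ell = 6016 / r_ell
    print(f"level {ell}: n = {n}, r = {r}, tau = {tau:g}, n*r = {n*r:.2e}")
```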
MLMC approximates $E[g_L] \approx E[g]$ using the telescopic sum
$E[g_L] \approx m_0^{-1} \sum_{i=1}^{m_0} g_0^{(0,i)} + \sum_{\ell=1}^{L} m_\ell^{-1} \sum_{i=1}^{m_\ell} \left( g_\ell^{(\ell,i)} - g_{\ell-1}^{(\ell,i)} \right).$
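In code, the estimator is a sum of per-level sample averages. In the sketch below, `sample_qoi(ell, xi)` is a placeholder for one deterministic solve (e.g. a ug4 run) on level ℓ followed by the QoI evaluation:

```python
import numpy as np

def mlmc_estimate(sample_qoi, m, rng):
    # Telescopic MLMC estimator of E[g_L]: a plain MC average on level 0
    # plus sample averages of the corrections g_ell - g_{ell-1}.
    est = 0.0
    for ell, m_ell in enumerate(m):
        if m_ell == 0:
            continue                  # some levels may get zero samples
        acc = 0.0
        for _ in range(m_ell):
            xi = rng.uniform(-1.0, 1.0, size=3)   # random input realization
            g_fine = sample_qoi(ell, xi)
            # Reuse the same xi on the coarser level, so that the variance
            # of the difference g_ell - g_{ell-1} is small.
            g_coarse = sample_qoi(ell - 1, xi) if ell > 0 else 0.0
            acc += g_fine - g_coarse
        est += acc / m_ell
    return est
```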
Minimizing $F(m_0, \ldots, m_L) := \sum_{\ell=0}^{L} \left( m_\ell s_\ell + \mu^2 \frac{V_\ell}{m_\ell} \right)$ over the sample numbers yields $m_\ell = \varepsilon^{-2} \sqrt{V_\ell / s_\ell}\, \sum_{i=0}^{L} \sqrt{V_i s_i}$.
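A direct implementation of this formula; the level variances $V_\ell$ below are placeholders, while the costs $s_\ell$ are the average timings from the table that follows.

```python
import numpy as np

def optimal_samples(V, s, eps):
    # m_ell = eps^{-2} * sqrt(V_ell / s_ell) * sum_i sqrt(V_i * s_i)
    V, s = np.asarray(V, float), np.asarray(s, float)
    return np.ceil(eps**-2 * np.sqrt(V / s) * np.sum(np.sqrt(V * s))).astype(int)

s = [0.6, 7.1, 252.9, 11109.8]   # average per-sample costs from the table
V = [2e-2, 2e-3, 2e-4, 2e-5]     # placeholder level variances V_ell
print(optimal_samples(V, s, eps=0.1**0.5))   # sample counts for eps^2 = 0.1
```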
Figures: 100 realizations of $Q_{FW}(t)$; evolution of the pdf of $c(t, x)$ for $t = \{3\tau, \ldots, 48\tau\}$; pdf of the earliest time point at which $c(t, x) < 0.9$ for $x = (1.85, -0.95)$; mean values $E[c(t, x_9, y_9)]$ and variances $\mathrm{Var}[c](t, x_9, y_9)$ on levels 0-3.
Figures: $E[g_\ell - g_{\ell-1}](t, x_9, y_9)$ and $V[g_\ell - g_{\ell-1}](t, x_9, y_9)$ for $\ell = 1, 2, 3$, where the QoI is the integral value over $D_9$; 100 realizations of $g_1 - g_0$, $g_2 - g_1$, $g_3 - g_2$, where the QoI $g_\ell$ is the integral value $Q_s(t, \omega)$ computed over a subdomain around the 9th point, $t \in [\tau, 48\tau]$.
Level ℓ | n_ℓ (n_ℓ/n_{ℓ-1}) | r_ℓ (r_ℓ/r_{ℓ-1}) | τ_ℓ = 6016/r_ℓ | s_ℓ avg (s_ℓ/s_{ℓ-1}) | s_ℓ min | s_ℓ max
0       | 153               | 94                | 64             | 0.6                   | 0.5     | 0.7
1       | 2145 (14)         | 376 (4)           | 16             | 7.1 (14)              | 6.9     | 8.7
2       | 33153 (15.5)      | 1504 (4)          | 4              | 252.9 (36)            | 246.2   | 266.2
3       | 525825 (15.9)     | 6016 (4)          | 1              | 11109.8 (44)          | 9858.4  | 15506.9
Table: number of degrees of freedom n_ℓ, number of time steps r_ℓ, and time-step size τ_ℓ; average, minimal, and maximal computing times on each level ℓ.
Comparison of MC and MLMC for ε² = 0.1:
total cost of MC, S_MC   | 9.5e+6
total cost of MLMC, S    | 4.25e+4
{m0, m1, m2, m3}         | {7927, 946, 57, 2}

MLMC: number of samples m_ℓ on each level ℓ:
ε²   | m0    | m1   | m2  | m3
1    | 73    | 8    | 1   | 0
0.5  | 290   | 32   | 3   | 0
0.1  | 7258  | 811  | 68  | 1
0.05 | 29031 | 3245 | 274 | 5
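Up to rounding and the exact variance estimates, the quoted totals follow by multiplying sample counts by per-level costs. The standard-MC count of roughly 855 finest-level samples in the sketch below is back-computed from the quoted total cost of 9.5e+6 and is therefore illustrative.

```python
s = [0.6, 7.1, 252.9, 11109.8]   # average per-sample costs per level (table)
m = [7927, 946, 57, 2]           # MLMC sample counts for eps^2 = 0.1

mlmc_cost = sum(mi * si for mi, si in zip(m, s))
mc_cost = 855 * s[-1]            # illustrative: ~855 samples, all on level 3
print(f"MLMC: {mlmc_cost:.2e}, plain MC: {mc_cost:.2e}")
```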
Figures: (1) weak convergence (α = −2) and (2) strong convergence (β = −3.4) for ℓ = 0, 1, 2, 3, where the QoI is the local integral of $c$ over the subdomain around $(x, y)_9 = (1.65, -0.75)$; (3) decay of the absolute and (4) relative errors between the mean values computed on a fine mesh via MC and via MLMC at $(t, x, y) = (12, 1.60, -0.95)$.
Acknowledgements: KAUST HPC and the Alexander von Humboldt Foundation.
1. A. Litvinenko, D. Logashenko, R. Tempone, E. Vasilyeva, G. Wittum, Uncertainty quantification in coastal aquifers using the multilevel Monte Carlo method, arXiv:2302.07804, 2023.
2. A. Litvinenko, D. Logashenko, R. Tempone, G. Wittum, D. Keyes, Solution of the 3D density-driven groundwater flow problem with uncertain porosity and permeability, GEM - International Journal on Geomathematics 11, 1-29, 2020.
3. A. Litvinenko, A. C. Yucel, H. Bagci, J. Oppelstrup, E. Michielssen, R. Tempone, Computation of electromagnetic fields scattered from objects with uncertain shapes using multilevel Monte Carlo method, IEEE Journal on Multiscale and Multiphysics Computational Techniques 4, 37-50, 2019.
4. A. Litvinenko, D. Logashenko, R. Tempone, G. Wittum, D. Keyes, Propagation of uncertainties in density-driven flow, in: Bungartz, H.-J., Garcke, J., Pflüger, D. (eds.), Sparse Grids and Applications - Munich 2018, LNCSE vol. 144, pp. 121-126, Springer, 2018.