- The thesis studies numerical methods for stochastic partial differential equations (SPDEs) subject to generalized Lévy noise.
- It develops both deterministic methods using the Fokker-Planck equation and probabilistic methods like polynomial chaos.
- Key contributions include developing adaptive multi-element polynomial chaos for discrete measures, comparing approaches to construct orthogonal polynomials over discrete measures, and improving efficiency and accuracy through adaptive integration meshes and sparse grids.
This document summarizes a thesis on numerical methods for stochastic systems subject to generalized Lévy noise. It includes:
1) Motivation for studying such systems from both mathematical and applied perspectives, with examples from mathematical finance and chaotic flows.
2) An introduction to Lévy processes and the probabilistic collocation method (PCM) for uncertainty quantification (UQ).
3) Details on improving PCM through a multi-element approach and constructing orthogonal polynomials for discrete measures.
Lattice rules are one of the two main classes of methods for quasi-Monte Carlo (QMC) and randomized quasi-Monte Carlo (RQMC) integration. In this tutorial, we recall the definition and summarize the key properties of lattice rules. We discuss what classes of functions these rules are good to integrate, and how their parameters can be chosen in terms of variance bounds for these classes of functions. We consider integration lattices in the real space as well as in a polynomial space over the finite field F2. We provide various numerical examples of how these rules perform compared with standard Monte Carlo. Some examples involve high-dimensional integrals, others involve Markov chains. We also discuss software design for RQMC and what software is available.
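As a small illustration, a rank-1 lattice rule in d dimensions uses the points x_i = (i·z/n) mod 1 for a generating vector z. Below is a minimal Python sketch (the generating vector z = (1, 433) and the test integrand are illustrative choices, not tuned parameters) comparing such a rule against plain Monte Carlo on a smooth periodic function whose exact integral is 0.

```python
import numpy as np

def rank1_lattice(n, z):
    """Rank-1 lattice points x_i = (i * z / n) mod 1, i = 0..n-1."""
    i = np.arange(n).reshape(-1, 1)
    return (i * np.asarray(z) / n) % 1.0

def f(x):
    # Smooth 1-periodic integrand on [0,1]^2 with exact integral 0.
    return np.prod(np.sin(2 * np.pi * x), axis=1)

n = 1024
z = (1, 433)                      # generating vector (illustrative choice)
lattice_est = f(rank1_lattice(n, z)).mean()

rng = np.random.default_rng(0)
mc_est = f(rng.random((n, 2))).mean()

print(abs(lattice_est), abs(mc_est))  # lattice error is far smaller here
```

For this particular integrand the lattice rule is exact up to rounding, because the integrand's few Fourier modes avoid the dual lattice of z; in practice generating vectors are chosen by computer search against a variance or worst-case error criterion, as the tutorial discusses.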
Multidimensional integrals may be approximated by weighted averages of integrand values. Quasi-Monte Carlo (QMC) methods are more accurate than simple Monte Carlo methods because they carefully choose where to evaluate the integrand. This tutorial focuses on how quickly QMC methods converge to the correct answer as the number of integrand values increases. The answer may depend on the smoothness of the integrand and the sophistication of the QMC method. QMC error analysis may assume that the integrand belongs to a reproducing kernel Hilbert space, or that the integrand is an instance of a stochastic process with known covariance structure. These two approaches have interesting parallels. This tutorial also explores how the computational cost of achieving a good approximation to the integral depends on the dimension of the domain of the integrand. Finally, this tutorial explores methods for determining how many integrand values are needed to satisfy the error tolerance. Relevant software is described.
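The effect of increasing the number of integrand values can be glimpsed with a tiny experiment; the sketch below uses the Halton sequence (a classical low-discrepancy construction, standing in here for the QMC methods surveyed) on the smooth integrand x1·x2 over [0,1]^2, whose exact integral is 1/4.

```python
import numpy as np

def radical_inverse(i, base):
    """Van der Corput radical inverse of the integer i in the given base."""
    inv, f = 0.0, 1.0 / base
    while i > 0:
        inv += (i % base) * f
        i //= base
        f /= base
    return inv

def halton(n, bases=(2, 3)):
    """First n points of the 2-D Halton low-discrepancy sequence."""
    return np.array([[radical_inverse(i, b) for b in bases]
                     for i in range(1, n + 1)])

def qmc_estimate(n):
    return np.prod(halton(n), axis=1).mean()   # estimates the integral 0.25

err_small = abs(qmc_estimate(64) - 0.25)
err_large = abs(qmc_estimate(4096) - 0.25)
print(err_small, err_large)   # error shrinks roughly like log^2(n)/n
```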
Non-sampling functional approximation of linear and non-linear Bayesian Update (Alexander Litvinenko)
We offer a non-sampling functional approximation, a non-linear surrogate for the classical Bayesian update formula. We start with a prior Polynomial Chaos Expansion (PCE), express the log-likelihood in the PCE basis, and obtain a new posterior PCE.
The main idea is to update not the probability density, but the basis coefficients.
Hybrid dynamics in large-scale logistics networks (M. Kosmykov)
The document discusses modeling and analyzing the stability of large-scale logistics networks. Such networks can be modeled as hybrid systems with both continuous and discrete dynamics. Stability is important to prevent issues like high inventory costs and lost customers. The document proposes modeling each location in the network as an individual hybrid system, and then interconnecting the locations. Conditions like input-to-state stability of subsystems and small gain theorems can ensure stability of the overall interconnected system. The main result presented is a small gain theorem guaranteeing input-to-state stability of the large-scale hybrid system under certain assumptions.
Monte Carlo methods for some not-quite-but-almost Bayesian problems (Pierre Jacob)
This document outlines an approach to inference when exact Bayesian methods are not applicable. Specifically, it discusses Dempster-Shafer theory, which defines lower and upper probabilities for hypotheses based on feasible parameter sets. It proposes a Gibbs sampler to sample from the distribution of these feasible sets defined by count data. It represents the feasible set as relations between data points, allowing conditional distributions to be derived. This leads to a Gibbs sampling algorithm for approximating inferences under Dempster-Shafer theory for problems where exact Bayesian computation is difficult.
The document discusses methods for reducing the size of gain matrices used in analyzing the stability of interconnected systems. It proposes aggregating nodes in typical interconnection motifs like parallel and sequential connections to obtain a reduced gain matrix. This is done while preserving the small gain condition needed for input-to-state stability. Aggregating motifs involving almost disconnected subgraphs is also discussed. The reduction technique aims to simplify verifying the small gain condition for large networks.
This document describes unbiased Markov chain Monte Carlo (MCMC) methods using coupled Markov chains. It begins by discussing how standard MCMC estimators are biased due to initialization and finite simulation length. It then introduces the idea of running two coupled Markov chains such that they meet and become equal after some meeting time τ. The difference in function values between the chains can then be used to construct an unbiased estimator. Several methods for designing coupled chains that meet this criterion are described, including couplings of popular MCMC algorithms like Metropolis-Hastings. Conditions under which the resulting estimators are guaranteed to be unbiased and have good statistical properties are also outlined.
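A minimal sketch of the coupled-chain construction, on a toy three-state chain rather than a real MCMC algorithm (the kernel, the function h, and the independent coupling used here are illustrative assumptions; practical implementations couple Metropolis-Hastings or Gibbs kernels as the document describes):

```python
import numpy as np

# Ergodic chain on {0, 1, 2} with uniform stationary distribution,
# so E_pi[h] = 1.0 for h(x) = x.
P = np.array([[0.50, 0.25, 0.25],
              [0.25, 0.50, 0.25],
              [0.25, 0.25, 0.50]])

def step(state, rng):
    return int(rng.choice(3, p=P[state]))

def unbiased_estimate(rng, h=lambda s: s):
    """Lag-1 coupled chains: X is one step ahead of Y; both start at 0.
    Returns h(X_0) plus the bias corrections h(X_t) - h(Y_{t-1})
    accumulated until the chains meet (X_t == Y_{t-1})."""
    x0 = 0
    x, y = step(x0, rng), 0          # X_1 and Y_0
    est = float(h(x0))
    while x != y:                    # independent moves until meeting
        est += h(x) - h(y)
        x, y = step(x, rng), step(y, rng)
    return est

rng = np.random.default_rng(42)
mean_est = np.mean([unbiased_estimate(rng) for _ in range(4000)])
print(mean_est)   # close to E_pi[h] = 1.0, with no burn-in bias
```

Averaging many independent copies of this estimator removes the initialization bias entirely, which is the point of the construction.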
This document describes the Space Alternating Data Augmentation (SADA) algorithm, an efficient Markov chain Monte Carlo method for sampling from posterior distributions. SADA extends the Data Augmentation algorithm by introducing multiple sets of missing data, with each set corresponding to a subset of model parameters. These are sampled in a "space alternating" manner to improve convergence. The document applies SADA to finite mixtures of Gaussians, introducing different types of missing data to update parameter subsets. Simulation results show SADA provides better mixing and convergence than standard Data Augmentation.
This document summarizes research on scale-free percolation on random graphs. The key points are:
1) A random graph model is introduced that interpolates between long-range percolation and inhomogeneous random graphs, allowing for scale-free degrees and percolative properties determined by weight distributions and parameters.
2) It is shown that the model exhibits scale-free degree distributions and infinite-component percolation when the weight distributions have power law tails with exponent between 2 and 3.
3) Graph distances in the model are proved to grow logarithmically or double-logarithmically depending on whether weight variances are finite or infinite, analogous to other scale-free random graph models.
Couplings of Markov chains and the Poisson equation (Pierre Jacob)
The document discusses couplings of Markov chains and the Poisson equation. It begins with an outline introducing couplings as a technique to study Markov chain convergence rates. An example is provided of a Gibbs sampler motivated by Dempster-Shafer inference, known as the donkey walk. A common random numbers coupling of the donkey walk yields an explicit bound on the Wasserstein distance between the distribution after t steps and the stationary distribution.
Unbiased Markov chain Monte Carlo methods (Pierre Jacob)
This document describes unbiased Markov chain Monte Carlo methods for approximating integrals with respect to a target probability distribution π. It introduces the idea of coupling two Markov chains such that their states are equal with positive probability, which can be used to construct an unbiased estimator of integrals of the form Eπ[h(X)]. The document outlines conditions under which the proposed estimator is unbiased and has finite variance. It also discusses implementations of coupled Markov chains for common MCMC algorithms like Metropolis-Hastings and Gibbs sampling.
Quasistatic Fracture using Nonlinear-Nonlocal Elastostatics with an Analytic T... (Patrick Diehl)
The document discusses a new method for quasistatic fracture simulation using a regularized nonlinear pairwise (RNP) potential. Key points:
1) An analytic tangent stiffness matrix is derived for the RNP potential by taking the derivative of the bond potential, allowing for more efficient simulations.
2) Two loading algorithms are presented - soft loading and hard loading. Soft loading uses bond softening while hard loading applies a prescribed displacement field.
3) Numerical results show the method can capture linear elastic behavior, bond softening prior to crack growth, and eventual stable crack propagation under both soft and hard loading conditions.
The document discusses combining branching-time logic with logics of knowledge for reasoning about multi-agent systems. It proposes an update and abstraction algorithm for model checking Computation Tree Logic with Knowledge (Act-CTL-K) in perfect recall synchronous settings. The key points are:
1) The algorithm transforms Act-CTL-K formulas of bounded knowledge depth k into Act-CTL, using k-trees and knowledge update functions to represent the original environment.
2) A k-tree is a finite tree of height k that represents the knowledge of agents. Knowledge update functions are defined to transform k-trees after actions.
3) The resulting model checking algorithm solves Act-CTL on the transformed k-trees.
Unit IV UNCERTAINTY AND STATISTICAL REASONING in AI (K. Sundar, AP/CSE, VEC)
This document discusses uncertainty and statistical reasoning in artificial intelligence. It covers probability theory, Bayesian networks, and certainty factors. Key topics include probability distributions, Bayes' rule, building Bayesian networks, different types of probabilistic inferences using Bayesian networks, and defining and combining certainty factors. Case studies are provided to illustrate each algorithm.
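As a worked example of Bayes' rule of the kind such case studies use (the prevalence, sensitivity, and false-positive rate below are made-up illustrative numbers):

```python
# Bayes' rule: P(D|+) = P(+|D) * P(D) / P(+)
prior = 0.01        # P(D): prevalence of the condition
sensitivity = 0.95  # P(+|D): probability of a positive test given disease
false_pos = 0.05    # P(+|not D): false-positive rate

evidence = sensitivity * prior + false_pos * (1 - prior)   # P(+)
posterior = sensitivity * prior / evidence                 # P(D|+)
print(round(posterior, 3))   # about 0.161: a positive test alone is weak evidence
```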
05 History of CV: a machine learning (theory) perspective on computer vision
This document provides an overview of machine learning algorithms used in computer vision from the perspective of a machine learning theorist. It discusses how the theorist got involved in a computer vision project in 2002 and summarizes key algorithms at that time like boosting, support vector machines, and their developments. It also provides historical context and comparisons of algorithms like perceptron and Winnow. The document uses examples to explain concepts like kernels and the kernel trick in support vector machines.
This document presents Joe Suzuki's work on Bayes independence tests. It discusses both discrete and continuous cases. For the discrete case, it estimates mutual information using maximum likelihood and proposes a Bayesian estimation using Lempel-Ziv compression. This Bayesian estimation is shown to be consistent. For the continuous case, it constructs a generalized Bayesian estimation that is also consistent. It also discusses the Hilbert Schmidt independence criterion (HSIC) and its limitations. Experiments show the proposed method performs well on both synthetic and real data, while HSIC shows poor performance in some cases. The proposed method has significantly better execution time than HSIC.
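The maximum-likelihood (plug-in) estimate of mutual information for the discrete case can be sketched in a few lines; the binary data below are an illustrative assumption, not data from the talk.

```python
import numpy as np
from collections import Counter

def mi_plugin(x, y):
    """Plug-in (maximum-likelihood) mutual information estimate, in nats."""
    n = len(x)
    pxy, px, py = Counter(zip(x, y)), Counter(x), Counter(y)
    # sum_{a,b} p(a,b) * log( p(a,b) / (p(a) p(b)) ), with counts c, px, py
    return sum((c / n) * np.log(c * n / (px[a] * py[b]))
               for (a, b), c in pxy.items())

rng = np.random.default_rng(0)
x = rng.integers(0, 2, 5000)
y_dep = x ^ (rng.random(5000) < 0.1).astype(int)  # equals x 90% of the time
y_ind = rng.integers(0, 2, 5000)                   # independent of x

print(mi_plugin(x, y_dep), mi_plugin(x, y_ind))
```

Note that the plug-in estimate is biased upward on independent data, which is one motivation for the consistent Bayesian estimators the talk proposes.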
“Statistical Physics Studies of Machine Learning Problems” by Lenka Zdeborova, Researcher @ CNRS
Abstract: We offer some insight into the following questions: What makes problems studied in machine learning and statistical physics related? How can this relation be used to better understand the performance and limitations of machine learning systems? What happens when a phase transition is found in a computational problem? How do phase transitions influence algorithmic hardness?
1) The document discusses bias amplification that can occur when using instrumental variable calibration estimators with missing survey data. It presents models where a variable of interest (y) and instrumental variables (z) are related, and response propensity depends on the instrumental variables.
2) When an imperfect proxy for the instrumental variables (x) is used in calibration instead of the true variables, it can lead to bias amplification if the proxy is also related to response propensity. This violates the assumption that the proxy is independent of response given the instrumental variables.
3) A simulation study is presented to illustrate how using an imperfect proxy in calibration can amplify bias compared with the naive estimator that ignores nonresponse.
1) The document discusses a model of stochastic spiking neural networks where dynamical neuronal gains produce self-organized criticality. Introducing dynamic neuronal gains Γi[t] in addition to dynamic synaptic weights Wij[t] allows the system to self-organize toward a critical region without requiring divergent timescales.
2) For finite recovery timescales τ, the model exhibits self-organized supercriticality (SOSC) where the average neuronal gain Γ* is always slightly above critical. SOSC may help explain biological phenomena like large avalanches and epileptic activity.
3) The model provides a new framework to study self-organized phenomena in neuronal networks, including potential analytic solutions.
It is a new theory based on an algorithmic approach. Its only element is called the nokton, and its rules are precise. Infinities are completely absent, whatever the system studied. It is a theory with discrete space and time. The theory is only at its beginnings.
This document discusses the limits of computation. It distinguishes between intractable problems that take an impractical amount of time to solve versus truly unsolvable problems. It describes different complexity classes based on how fast the number of operations grows with input size. Hard problems like the traveling salesman problem are inherently difficult even with faster computers. Reductions show relationships between problem difficulties. The halting problem and incompleteness theorems prove certain logical and mathematical questions cannot be answered algorithmically.
This document discusses high-order numerical methods for predictive science on large-scale high-performance computing architectures. It covers three main topics: 1) High performance computing and how modern architectures have increasing numbers of cores but declining memory per core, requiring a shift in numerical algorithms. 2) Ideas on high-order numerical methods that are more accurate using less grid points and higher-order approximations. 3) The importance of validating and verifying simulations against theoretical solutions and experiments for predictive science.
Nahian Ahmed, student ID 151-15-5137 from section G, will present on the application of numerical methods in computer science. The presentation discusses curve fitting, a widely used analysis tool for examining relationships between variables. Curve fitting can be used in MS Excel to generate curves and equations such as y = ax + b that fit the provided data, and to determine the best-fit line for a data set. The presentation covers Nahian's 4th-semester project on this topic.
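Outside Excel, the same least-squares line y = ax + b can be fitted in a couple of lines; the data points below are made up for illustration.

```python
import numpy as np

# Noisy measurements that roughly follow y = 2x + 1 (illustrative data).
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([1.1, 2.9, 5.2, 6.8, 9.1, 10.9])

a, b = np.polyfit(x, y, deg=1)   # least-squares fit of y = a*x + b
print(a, b)                      # close to the underlying slope 2, intercept 1
```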
Presentation on DNA Sequencing Process (Nahian Ahmed)
The document summarizes a presentation on bioinformatics and DNA sequencing. It includes 5 group members who each discuss an aspect of DNA sequencing. It describes how DNA stores genetic information and its structure. It then explains the history of DNA discovery and different sequencing methods, including the Sanger method. Modern applications of sequencing in forensics, medicine, and agriculture are outlined. The human genome project is summarized as a large international effort to sequence the entire human genome.
1) The document is a thesis on numerically analyzing the load-settlement behavior of multi-edge foundations using FLAC3D.
2) It examines the effects of geometry, soil parameters, and element size on the bearing capacity of cross-shaped and H-shaped foundations.
3) The optimal ratios of width to length are determined, which result in the maximum bearing capacity due to blocking effects between shear zones under the foundation.
The Tridiagonal Matrix Algorithm (TDMA), also known as the Thomas algorithm, is used to solve systems of tridiagonal linear algebraic equations. The equations are of the form:
a_i x_(i-1) + b_i x_i + c_i x_(i+1) = d_i
where a_i, b_i, and c_i are the coefficients on the sub-diagonal, diagonal, and super-diagonal, respectively.
TDMA solves the equations in a forward-elimination step followed by a backward-substitution step. The forward sweep eliminates the sub-diagonal coefficients, expressing the solution at each node x_i in terms of the solution at the next node x_(i+1). The backward step then starts from the last node, whose value is obtained directly, and substitutes back through these expressions to recover the solution at every remaining node.
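The two sweeps can be written down directly; below is a minimal sketch of the Thomas algorithm (no pivoting, so it assumes the system is diagonally dominant or otherwise well-conditioned), with a_i on the sub-diagonal, b_i on the diagonal, and c_i on the super-diagonal.

```python
def tdma(a, b, c, d):
    """Solve a tridiagonal system a_i x_{i-1} + b_i x_i + c_i x_{i+1} = d_i.
    a[0] and c[-1] are unused. Returns the solution as a list."""
    n = len(d)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):                       # forward elimination
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):              # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# 4x4 system with diagonal 2 and off-diagonals 1; exact solution is all ones.
sol = tdma([0, 1, 1, 1], [2, 2, 2, 2], [1, 1, 1, 0], [3, 4, 4, 3])
print(sol)
```

The forward sweep stores the coefficients of x_i = dp_i - cp_i * x_(i+1); the backward loop then evaluates these relations starting from the last node.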
Numerical methods in Transient-heat-conductiontmuliya
This file contains slides on Numerical methods in Transient heat conduction.
The slides were prepared while teaching Heat Transfer course to the M.Tech. students in Mechanical Engineering Dept. of St. Joseph Engineering College, Vamanjoor, Mangalore, India, during Sept. – Dec. 2010.
Contents: Finite difference eqns. by energy balance – Explicit and Implicit methods – 1-D transient conduction in a plane wall – stability criterion – Problems - 2-D transient heat conduction – Finite diff. eqns. for interior nodes – Explicit and Implicit methods - stability criterion – difference eqns for different boundary conditions – Accuracy considerations – discretization error and round–off error - Problems
Numerical methods for 2 d heat transferArun Sarasan
This document presents a numerical study comparing finite difference and finite volume methods for solving the heat transfer equation during solidification in a complex casting geometry. The study uses a multi-block grid with bilinear interpolation and generalized curvilinear coordinates. Results show good agreement between the two discretization methods, with a slight advantage for the finite volume method due to its use of more nodal information. The multi-block grid approach reduces computational time and allows complex geometries to be accurately modeled while overcoming issues at block interfaces.
This document summarizes numerical methods used in various fields including engineering, crime detection, scientific computing, finding roots, and solving heat equations. It discusses how numerical methods are widely used in engineering to model systems using mathematical equations when analytical solutions are not possible. Examples of applying numerical methods include structural analysis, fluid dynamics, image processing to deblur photos, and algorithms for finding roots of equations and solving differential equations.
Chapter 12 Influence Of Culture On Consumer BehaviorAvinash Kumar
The document discusses how culture influences consumer behavior. It defines culture as the learned beliefs, values and customs shared by members of a society. Culture is transmitted through enculturation, acculturation, language, symbols, rituals and sharing. Marketers must understand a target culture to effectively appeal to consumers within that culture.
The document discusses numerical methods for finding roots of equations and integrating functions. It covers root-finding algorithms like the bisection method, Regula Falsi method, modified Regula Falsi, and secant method. These algorithms iteratively find roots by narrowing the interval that contains the root. The document also discusses numerical integration techniques like the trapezoidal rule to approximate the area under a curve without having a closed-form solution. It notes the tradeoffs between different root-finding algorithms in terms of speed, accuracy, and ability to guarantee convergence.
This document summarizes research on computing stochastic partial differential equations (SPDEs) using an adaptive multi-element polynomial chaos method (MEPCM) with discrete measures. Key points include:
1) MEPCM uses polynomial chaos expansions and numerical integration to solve SPDEs with parametric uncertainty.
2) Orthogonal polynomials are generated for discrete measures using several methods, including the Vandermonde, Stieltjes, and Lanczos approaches.
3) Numerical integration over discrete measures is tested on Genz functions in 1D and with sparse grids in higher dimensions.
4) The method is demonstrated on the KdV equation with random initial conditions. Future work includes applying these techniques to SPDEs driven by Lévy jump processes.
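The Genz families mentioned in point 3 are standard integration benchmarks. A minimal 1D sketch in Python of the "oscillatory" member (the parameters a = 5 and u = 0.5 are illustrative, and a continuous Gauss-Legendre rule stands in for the discrete-measure rules of the document):

```python
import numpy as np

def genz_oscillatory(x, a=5.0, u=0.5):
    """Genz 'oscillatory' test integrand on [0, 1]: f(x) = cos(2*pi*u + a*x)."""
    return np.cos(2.0 * np.pi * u + a * x)

def gauss_legendre_01(f, n):
    """n-point Gauss-Legendre rule, mapped from [-1, 1] to [0, 1]."""
    nodes, weights = np.polynomial.legendre.leggauss(n)
    return np.sum(0.5 * weights * f(0.5 * (nodes + 1.0)))

# Closed form: int_0^1 cos(c + a*x) dx = (sin(c + a) - sin(c)) / a, c = 2*pi*u.
a, u = 5.0, 0.5
exact = (np.sin(2.0 * np.pi * u + a) - np.sin(2.0 * np.pi * u)) / a
approx = gauss_legendre_01(genz_oscillatory, 8)
print(abs(approx - exact))  # spectral accuracy: well below 1e-8 with 8 nodes
```

The same smooth-integrand behavior is what the 1D tests on discrete measures probe, with convergence measured as the number of quadrature nodes grows.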
PhD thesis, Mengdi Zheng (Summer), Brown Applied Maths (Zheng Mengdi)
This document is the dissertation of Mengdi Zheng submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Applied Mathematics at Brown University in May 2015. The dissertation focuses on developing numerical methods for uncertainty quantification of stochastic partial differential equations driven by Lévy jump processes. Specifically, it presents work on applying polynomial chaos expansions, Wick-Malliavin approximations, and generalized Fokker-Planck equations to simulate stochastic systems subject to generalized Lévy noise and analyze their moment statistics. The dissertation contains publications by the author in peer-reviewed journals on these topics.
Mengdi Zheng is a Chinese national working as a KTP Associate at University College London since June 2015. She holds a PhD in Applied Mathematics from Brown University (2011-2015) and has experience modeling natural catastrophes, including earthquakes and tsunamis, and the resulting financial losses. Her research interests include stochastic partial differential equations, uncertainty quantification, and scientific computing methods.
Mengdi Zheng is a catastrophe risk research analyst at University College London. She has over 10 years of experience in applied mathematics and scientific computing. Her responsibilities include numerical simulation of tsunamis from earthquakes and estimating financial loss from tsunamis for an insurance company. She holds a PhD in Applied Mathematics from Brown University.
This document is Mengdi Zheng's dissertation for the degree of Doctor of Philosophy in Applied Mathematics from Brown University. The dissertation focuses on developing numerical methods for stochastic partial differential equations (SPDEs) driven by Lévy jump processes. Chapter 1 introduces the motivation and challenges in uncertainty quantification of nonlinear SPDEs driven by Lévy noise. The subsequent chapters develop simulation methods for Lévy jump processes, adaptive stochastic collocation methods, and Wick-Malliavin approximations to solve SPDEs with discrete and tempered stable Lévy noise in multiple dimensions.
This document summarizes numerical methods for solving stochastic partial differential equations (SPDEs) driven by Lévy jump processes. It discusses both probabilistic methods, such as Monte Carlo (MC) and the probabilistic collocation method (PCM), and deterministic methods based on solving the generalized Fokker-Planck equation. Specific examples include an overdamped Langevin equation driven by a 1D tempered alpha-stable process, and diffusion equations driven by multi-dimensional jump processes with different dependence structures. The document compares the accuracy and efficiency of MC/PCM versus solving the tempered fractional Fokker-Planck equation directly. It also discusses how to represent SPDEs with additive multi-dimensional Lévy processes.
This document summarizes research on numerical methods for solving stochastic partial differential equations (SPDEs) driven by Lévy jump processes. It discusses both probabilistic methods like Monte Carlo simulation and polynomial chaos methods, as well as deterministic methods based on generalized Fokker-Planck equations. Specific examples presented include the overdamped Langevin equation driven by a tempered α-stable Lévy process, and heat equations with jumps modeled by multi-dimensional Lévy processes using either Lévy copulas or Lévy measure representations. Comparisons are made between probabilistic and deterministic methods in terms of accuracy and computational efficiency for moment statistics.
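The MC/PCM-versus-Fokker-Planck comparison can be illustrated on a much simpler model than the tempered fractional case treated in the document. The sketch below (an assumption: classical Gaussian noise, for which the Fokker-Planck equation is a plain advection-diffusion PDE) solves the Fokker-Planck equation of the overdamped Langevin equation dX = -X dt + sigma dW by explicit finite differences and checks the second moment against the closed form:

```python
import numpy as np

# Overdamped Langevin dX = -X dt + sigma dW; its Fokker-Planck equation is
#   p_t = (x p)_x + (sigma^2 / 2) p_xx.
# Explicit central-difference solve on [-L, L] with zero boundary values.
sigma, L, T = 1.0, 5.0, 1.0
nx = 201
x = np.linspace(-L, L, nx)
dx = x[1] - x[0]
dt = 0.2 * dx**2 / sigma**2            # explicit-scheme stability margin
nsteps = int(round(T / dt))

s0 = 0.1                               # narrow Gaussian approximating X_0 = 0
p = np.exp(-x**2 / (2 * s0**2)) / np.sqrt(2 * np.pi * s0**2)

for _ in range(nsteps):
    adv = np.zeros_like(p)
    d2 = np.zeros_like(p)
    xp = x * p
    adv[1:-1] = (xp[2:] - xp[:-2]) / (2 * dx)            # (x p)_x
    d2[1:-1] = (p[2:] - 2 * p[1:-1] + p[:-2]) / dx**2    # p_xx
    p = p + dt * (adv + 0.5 * sigma**2 * d2)

var_fd = np.sum(x**2 * p) * dx         # second moment from the FP density
var_exact = s0**2 * np.exp(-2 * T) + 0.5 * sigma**2 * (1 - np.exp(-2 * T))
print(var_fd, var_exact)               # close agreement (within ~1e-2)
```

A Monte Carlo or collocation estimate of the same moment would sample or collocate the SDE directly; the FP route instead evolves the density once and reads off all moments, which is the trade-off the document quantifies for the jump-driven case.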
The document is a teaching statement from an applicant named Mengdi Zheng. It discusses their teaching philosophy and experience. Some key points made include:
- College students can think independently, but class time is not enough to cover the material fully, so the goal is to inspire interest and emphasize key points.
- Teaching and research are related activities that both involve collecting information, asking questions, and problem solving. Teaching enhances research skills.
- It is important to engage all students, not just active ones, and ensure quieter students also understand before moving to new topics. Flexibility is needed to reach learning goals.
Summer Zheng will present the paper "Fractional dynamics on networks: Emergence of anomalous diffusion and Levy flights" which discusses fractional diffusion processes and long-range dynamics on networks. The paper introduces a fractional formalism to describe diffusion on networks and shows how this leads to anomalous diffusion and Levy flights. It provides examples of fractional diffusion on tree and ring networks and analyzes how network structure, such as being a tree, ring, or scale-free, influences properties like the fractional return probability and global exploration time.
This document outlines the author's dissertation project on numerical methods for stochastic systems subject to generalized Lévy noise. The dissertation will include 6 chapters covering: 1) simulation of Lévy jump processes, 2) adaptive multi-element polynomial chaos for stochastic PDEs with discrete measures, 3) Wick-Malliavin approximation of nonlinear SPDEs with discrete random variables, 4) methods for SPDEs with tempered α-stable processes, 5) methods for SPDEs with additive multidimensional Lévy jump processes, and 6) application of fractional dynamics on networks. Each chapter will apply the numerical methods to examples such as stochastic reaction-diffusion equations, Burgers equations, and Navier-Stokes flow past …
Uncertainty quantification of SPDEs with multi-dimensional Lévy processes (Zheng Mengdi)
The document discusses using Fokker-Planck equations to model stochastic differential equations driven by additive pure jump processes. It compares using the Fokker-Planck approach to a Monte Carlo simulation or probability density approach. It also examines using Fokker-Planck equations to model heat equations with two-dimensional jump processes, comparing the Fokker-Planck solution to exact solutions and those obtained from copula models or using LePage series.
1) The document discusses five methods for generating orthogonal polynomials for discrete measures: Nowak's method, Fischer's method, Stieltjes method, modified Chebyshev method, and Lanczos method.
2) It compares the orthogonality and computational cost of these five methods, and identifies the minimum polynomial order at which the Stieltjes method starts to fail.
3) It proposes an adaptive multi-element polynomial chaos method (ME-PCM) for stochastic partial differential equations driven by discrete random variables and demonstrates its accuracy on example problems.
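Among the five methods, the Stieltjes procedure is the simplest to sketch: it computes the three-term recurrence coefficients of the monic orthogonal polynomials directly from inner products against the discrete measure. A minimal Python illustration on a binomial(10, 0.3) test measure, chosen because its first coefficients are known in closed form (a_0 = Np, b_1 = Np(1-p)):

```python
import numpy as np
from math import comb

def stieltjes(nodes, weights, n):
    """Stieltjes procedure: recurrence coefficients (a_k, b_k), k = 0..n-1,
    of the monic polynomials orthogonal w.r.t. the discrete measure
    sum_j weights[j] * delta(nodes[j]). Recurrence:
    pi_{k+1}(t) = (t - a_k) pi_k(t) - b_k pi_{k-1}(t), pi_{-1} = 0, pi_0 = 1."""
    a, b = np.zeros(n), np.zeros(n)
    b[0] = np.sum(weights)                     # convention: b_0 = total mass
    pi_prev = np.zeros_like(nodes)
    pi_curr = np.ones_like(nodes)
    nrm_prev = 1.0
    for k in range(n):
        nrm = np.sum(weights * pi_curr**2)
        a[k] = np.sum(weights * nodes * pi_curr**2) / nrm
        if k > 0:
            b[k] = nrm / nrm_prev
        pi_prev, pi_curr = pi_curr, (nodes - a[k]) * pi_curr - b[k] * pi_prev
        nrm_prev = nrm
    return a, b

# Demo: binomial(10, 0.3) measure; check orthogonality of pi_0 .. pi_5.
N, pr = 10, 0.3
t = np.arange(N + 1, dtype=float)
w = np.array([comb(N, j) * pr**j * (1 - pr)**(N - j) for j in range(N + 1)])
a, b = stieltjes(t, w, 6)
P = [np.ones_like(t), t - a[0]]
for k in range(1, 5):
    P.append((t - a[k]) * P[k] - b[k] * P[k - 1])
P = np.array(P)                                # rows: pi_0 .. pi_5 at the atoms
G = (P * w) @ P.T                              # Gram matrix <pi_i, pi_j>
off_diag = np.max(np.abs(G - np.diag(np.diag(G))))
print(off_diag / np.max(np.diag(G)))           # orthogonal up to round-off
```

The loss of orthogonality that the comparison tracks shows up here as growth of the off-diagonal Gram entries once the polynomial order approaches the number of atoms in the measure.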
2014 spring CRUNCH seminar (SDE / Lévy / fractional / spectral methods) (Zheng Mengdi)
This document summarizes numerical methods for simulating stochastic partial differential equations (SPDEs) with tempered alpha-stable (TαS) processes. It discusses two main methods:
1) The compound Poisson (CP) approximation method, which simulates large jumps as a CP process and replaces small jumps with their expected drift term.
2) The series representation method, which represents the TαS process as an infinite series involving i.i.d. random variables.
It also provides algorithms for implementing these two methods and applies them to specific examples, such as reaction-diffusion equations with TαS noise. Numerical results demonstrate that both methods accurately capture the statistics of the underlying TαS process.
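The compound Poisson (CP) approximation in point 1 can be sketched as follows: jumps larger than a cutoff eps are simulated as a compound Poisson process whose intensity is the Lévy measure mass beyond the cutoff, with jump sizes drawn by rejection from a Pareto proposal. The code below is a hedged sketch for a one-sided TαS Lévy measure nu(dx) = c x^(-1-alpha) e^(-lam x) dx (the drift correction for the dropped small jumps is omitted, and all parameters are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def tas_large_jumps(T, alpha, lam, eps, c=1.0):
    """Jumps of size > eps of a one-sided tempered alpha-stable process with
    Levy measure nu(dx) = c * x**(-1-alpha) * exp(-lam*x) dx, x > 0.
    This is the large-jump half of the CP approximation; in the full scheme
    the dropped small jumps (< eps) are replaced by their mean drift."""
    # Jump intensity: Lambda = integral_eps^inf nu(dx), trapezoidal quadrature.
    xs = np.linspace(eps, eps + 50.0 / lam, 20001)
    dens = c * xs ** (-1.0 - alpha) * np.exp(-lam * xs)
    lam_total = np.sum(0.5 * (dens[1:] + dens[:-1]) * np.diff(xs))
    n_jumps = rng.poisson(lam_total * T)
    times = np.sort(rng.uniform(0.0, T, n_jumps))
    # Jump sizes by rejection: Pareto(alpha, eps) proposal (density prop. to
    # x**(-1-alpha) on (eps, inf)); accept with probability exp(-lam*(x-eps)).
    sizes = np.empty(n_jumps)
    for i in range(n_jumps):
        while True:
            x = eps * rng.uniform() ** (-1.0 / alpha)   # Pareto inverse CDF
            if rng.uniform() < np.exp(-lam * (x - eps)):
                sizes[i] = x
                break
    return times, sizes

times, sizes = tas_large_jumps(T=2.0, alpha=0.5, lam=1.0, eps=0.1)
print(len(sizes))
```

The series-representation alternative in point 2 avoids the cutoff entirely but requires truncating an infinite series; the document weighs the two against each other on the reaction-diffusion examples.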
1. Numerical methods for stochastic systems subject to generalized Levy noise
by Mengdi Zheng
Thesis committee: George Em Karniadakis (Ph.D., advisor), Hui Wang (Ph.D., reader, APMA, Brown), Xiaoliang Wan (Ph.D., reader, Mathematics, LSU)
2. Motivation from 2 aspects
Mathematical reasons, and reasons from applications:
- Mathematical finance
- Levy flights in chaotic flows
3. Goal of my thesis
We consider SPDEs driven by:
1. discrete RVs
2. jump processes
(jump systems / memory systems)

Uncertainty Quantification (UQ) by two kinds of methods:
- Deterministic method: Fokker-Planck (FP) equation
- Probabilistic methods: Monte Carlo (MC), general Polynomial chaos (gPC), probability collocation method (PCM), multi-element PCM (MEPCM)

We are the first to solve such systems through both deterministic and probabilistic methods.
5. Outline
♚ Adaptive multi-element polynomial chaos with discrete measure: algorithms and applications to SPDEs
♚ Adaptive Wick-Malliavin (WM) approximation to nonlinear SPDEs with discrete RVs
♚ Numerical methods for SPDEs with 1D tempered α-stable (TαS) processes
♚ Numerical methods for SPDEs with additive multi-dimensional Levy jump processes
♚ Future work
6. Probability collocation method (PCM) in UQ
Represent the randomness by a finite set of RVs and approximate the moments:
$$X_t(\omega) \approx X_t(\xi_1,\xi_2,\dots,\xi_n), \quad \omega\in\Omega, \qquad \mathbb{E}[u^m(x,t;\omega)] \approx \mathbb{E}[u^m(x,t;\xi_1,\xi_2,\dots,\xi_n)].$$
[Figure: PCM places collocation points over the whole parametric domain Ω; MEPCM (shown for n = 2) partitions Ω into elements B1, B2, B3, B4 and collocates within each element.]

Gauss integration (n = 1): with {P_i(x)} orthogonal to the measure Γ(x), take the zeros {x_i, i = 1,...,d} of P_d(x) as collocation points and let h_i(x) be the Lagrange interpolants at those zeros, so that
$$I = \int_a^b d\Gamma(x)\, f(x) \approx \int_a^b d\Gamma(x) \sum_{i=1}^d f(x_i)\, h_i(x) = \sum_{i=1}^d f(x_i) \int_a^b d\Gamma(x)\, h_i(x) = \sum_{i=1}^d w_i\, f(x_i),$$
applied with f = u(x,t;\xi_1^i).
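As a minimal sketch of the collocation rule above (assuming, purely for illustration, a single uniform random input on [-1, 1], so the Legendre-Gauss zeros are the collocation points):

```python
import numpy as np

def pcm_moment(u, d, m=1):
    """Approximate E[u(xi)^m] for xi ~ Uniform(-1, 1) with d collocation points.

    The nodes are the zeros of the Legendre polynomial P_d; the weights
    w_i are the integrals of the Lagrange interpolants against the
    (here uniform) measure, i.e. exactly the Gauss quadrature weights.
    """
    nodes, weights = np.polynomial.legendre.leggauss(d)
    weights = weights / 2.0  # the uniform density on [-1, 1] is 1/2
    return float(np.sum(weights * u(nodes) ** m))

# Usage: the second moment of u(xi) = xi is E[xi^2] = 1/3.
second_moment = pcm_moment(lambda x: x, d=5, m=2)
```

The same pattern carries over to any measure Γ once its orthogonal polynomials (and hence Gauss nodes/weights) are available.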
7. From the measure μ_j(ξ_j) of each ξ_j, generate (in one of 5 ways) orthogonal polynomials {P_k^j(ξ_j)}; their zeros give the Gauss quadrature points and weights. Combine the one-dimensional rules by tensor product or sparse grid to compute the moment statistics E[u^m(x,t;ω)].
[Figure: histogram of a data set for ξ_i.]
But what if ξ_i is given as a set of experimental data? Assuming a distribution shape for its measure is subjective.
Data-driven UQ for stochastic KdV equations:
$$u_t + 6uu_x + u_{xxx} = \sum_{i=1}^n \sigma_i \xi_i, \quad x\in\mathbb{R}, \qquad u(x,0) = \frac{a}{2}\,\mathrm{sech}^2\!\Big(\frac{\sqrt{a}}{2}(x-x_0)\Big).$$
M. Zheng, X. Wan, G.E. Karniadakis, Adaptive multi-element polynomial chaos with discrete measure: Algorithms and application to SPDEs, Applied Numerical Mathematics, 90 (2015), pp. 91–110.
Sparse grids in the Smolyak algorithm (level k, dimension n):
$$A(k+n,n) = \sum_{k+1\le|\mathbf{i}|\le k+n} (-1)^{k+n-|\mathbf{i}|} \binom{n-1}{k+n-|\mathbf{i}|}\, \big(U^{i_1}\otimes\cdots\otimes U^{i_n}\big).$$
[Figure: sparse grid vs. tensor product grid.]
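A minimal sketch of the combination coefficients in the Smolyak formula above (the 1D quadrature operators U^{i_j} are left abstract; the helper below is an illustration, not the thesis code). Since each U^{i} integrates constants exactly, the coefficients over all admissible multi-indices must sum to 1:

```python
from itertools import product
from math import comb

def smolyak_coefficients(k, n):
    """Enumerate multi-indices i with k+1 <= |i| <= k+n (each i_j >= 1)
    and return the Smolyak combination coefficient
    (-1)^(k+n-|i|) * C(n-1, k+n-|i|) for each."""
    coeffs = {}
    for i in product(range(1, k + n + 1), repeat=n):
        s = sum(i)
        if k + 1 <= s <= k + n:
            coeffs[i] = (-1) ** (k + n - s) * comb(n - 1, k + n - s)
    return coeffs

# Sanity check: the coefficients sum to 1, so A(k+n, n) reproduces constants.
total = sum(smolyak_coefficients(k=2, n=3).values())
```

In practice only the multi-indices with nonzero coefficient (the top n "layers") contribute, which is what keeps the sparse grid far smaller than the full tensor product.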
8. Construct orthogonal polynomials to discrete measures
Five ways:
1. (Nowak) S. Oladyshkin, W. Nowak, Data-driven uncertainty quantification using the arbitrary polynomial chaos expansion, Reliability Engineering & System Safety, 106 (2012), pp. 179–190.
2. (Stieltjes, Modified Chebyshev) W. Gautschi, On generating orthogonal polynomials, SIAM J. Sci. Stat. Comp., 3 (1982), no. 3, pp. 289–317.
3. (Lanczos) D. Boley, G.H. Golub, A survey of matrix inverse eigenvalue problems, Inverse Problems, 3 (1987), pp. 595–622.
4. (Fischer) H.J. Fischer, On generating orthogonal polynomials for discrete measures, Z. Anal. Anwendungen, 17 (1998), pp. 183–205.

Test case: the binomial distribution Bino(N, p),
$$f(k;N,p) = \frac{N!}{k!(N-k)!}\, p^k (1-p)^{N-k}, \quad k = 0,1,\dots,N.$$
How do the five methods compare in orthogonality and cost?
[Figure, for Bino(100, 1/2): left, CPU time to evaluate orth(i) vs. polynomial order i for Nowak, Stieltjes, Fischer, Modified Chebyshev, and Lanczos, against a C·i² reference (10^-4 to 10^0); right, orthogonality error orth(i) vs. polynomial order i for the same five methods, ranging from 10^0 down to 10^-20.]
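As an illustration of the Stieltjes procedure (one of the five methods above), the following sketch builds the monic three-term recurrence for a discrete measure by repeatedly forming inner products against the atoms; it is a straightforward textbook implementation, not the thesis code:

```python
import math

def binomial_pmf(N, p):
    """Atoms' weights of the Bino(N, p) measure at nodes 0..N."""
    return [math.comb(N, k) * p**k * (1 - p) ** (N - k) for k in range(N + 1)]

def stieltjes(weights, nodes, n):
    """Recurrence coefficients of the monic orthogonal polynomials
    p_{k+1}(x) = (x - alpha_k) p_k(x) - beta_k p_{k-1}(x) w.r.t. a
    discrete measure, via the Stieltjes procedure:
    alpha_k = <x p_k, p_k>/<p_k, p_k>, beta_k = <p_k, p_k>/<p_{k-1}, p_{k-1}>."""
    p_prev = [0.0] * len(nodes)   # p_{-1}
    p_cur = [1.0] * len(nodes)    # p_0
    norm_prev, alphas, betas = 1.0, [], []
    for k in range(n):
        norm = sum(w * v * v for w, v in zip(weights, p_cur))
        alpha = sum(w * x * v * v for w, x, v in zip(weights, nodes, p_cur)) / norm
        beta = norm / norm_prev
        alphas.append(alpha)
        betas.append(beta)
        p_next = [(x - alpha) * c - beta * q
                  for x, c, q in zip(nodes, p_cur, p_prev)]
        p_prev, p_cur, norm_prev = p_cur, p_next, norm
    return alphas, betas
```

For Bino(N, p) the first coefficient alpha_0 is the mean Np and beta_1 is the variance Np(1-p), which gives a quick correctness check.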
9. Multi-element Gauss integration over discrete measures
$$\Big|\int_\Gamma f(\xi)\,\mu(d\xi) - \sum_{i=1}^{N_{es}} Q_m^{B_i} f\Big| \le C\, h^{m+1}\, \|E_\Gamma\|_{m+1,\infty,\Gamma}\, |f|_{m+1,\infty,\Gamma},$$
where
- μ: discrete measure on the parametric domain Γ
- {B_i}_{i=1}^{N_es}: elements; N_es: number of elements; h: maximum size of B_i
- Q_m^{B_i}: Gauss quadrature + tensor product, with exactness m = 2d − 1
- f: test function in W^{m+1,∞}(Γ)
(when the measure is continuous) J. Foo, X. Wan, G.E. Karniadakis, A multi-element probabilistic collocation method for PDEs with parametric uncertainty: error analysis and applications, Journal of Computational Physics, 227 (2008), pp. 9572–9595.
[Figure: p-convergence of PCM (errors l2u1, l2u2 vs. number of quadrature points d, from 10^-2 down to 10^-3) and h-convergence of MEPCM (errors vs. N_es, decaying like C·N_el^-4 from 10^-2 down to 10^-7), for Poisson and binomial distributions.]
10. Adaptive MEPCM for stochastic KdV equation with 1 RV
$$u_t + 6uu_x + u_{xxx} = \sum_{i=1}^n \sigma_i \xi_i, \quad x\in\mathbb{R}, \qquad u(x,0) = \frac{a}{2}\,\mathrm{sech}^2\!\Big(\frac{\sqrt{a}}{2}(x-x_0)\Big).$$
Adaptive integration mesh: elements are refined according to the local variance σ_i², i.e., the variance of ξ under the normalized restricted measure μ(dξ) / ∫_{B_i} μ(dξ) on each element B_i.
[Figure: error of E[u²] vs. number of PCM points per element, for 2, 4, and 5 elements on even vs. uneven grids; the adaptive (uneven) meshes improve the accuracy of MEPCM.]
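A minimal sketch of this adaptivity criterion for a one-dimensional discrete measure (the splitting rule shown, bisecting the element of largest local variance, is an assumed illustration rather than necessarily the thesis's exact rule):

```python
def local_variance(atoms, lo, hi):
    """Variance of xi under the measure restricted to the element [lo, hi),
    normalized by the local mass. `atoms` is a list of (location, weight)."""
    cell = [(x, w) for x, w in atoms if lo <= x < hi]
    mass = sum(w for _, w in cell)
    if mass == 0.0:
        return 0.0
    mean = sum(w * x for x, w in cell) / mass
    return sum(w * (x - mean) ** 2 for x, w in cell) / mass

def split_largest(atoms, edges):
    """One adaptivity step: bisect the element with the largest local variance."""
    variances = [local_variance(atoms, edges[i], edges[i + 1])
                 for i in range(len(edges) - 1)]
    i = max(range(len(variances)), key=variances.__getitem__)
    mid = 0.5 * (edges[i] + edges[i + 1])
    return sorted(edges + [mid])
```

Iterating `split_largest` concentrates elements where the measure carries the most variance, which is what the uneven grids in the figure correspond to.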
11. UQ of stochastic KdV equation with multiple RVs
$$u_t + 6uu_x + u_{xxx} = \sum_{i=1}^n \sigma_i \xi_i, \quad x\in\mathbb{R}, \qquad u(x,0) = \frac{a}{2}\,\mathrm{sech}^2\!\Big(\frac{\sqrt{a}}{2}(x-x_0)\Big),$$
with a 2D sparse grid in the Smolyak algorithm; binomial distribution, n = 8.
(sparse grid) D. Xiu, J.S. Hesthaven, High-order collocation methods for differential equations with random inputs, SIAM J. Scientific Computing, 27(3) (2005), pp. 1118–1139.
[Figure: errors l2u1 and l2u2 vs. number of points r(k), sparse grid vs. tensor product grid (17 to 4,845 points; errors from 10^-4 down to 10^-10); the sparse grid improves efficiency.]
12. Summary of contributions (1)
✰ convergence study of multi-element integration over discrete measures
✰ comparison of 5 ways to construct orthogonal polynomials w.r.t. discrete measures
✰ improvement of moment statistics by adaptive integration meshes (on discrete measures)
✰ improvement of efficiency in computing moment statistics by sparse grids (on discrete measures)
13. Outline
♚ Adaptive multi-element polynomial chaos with discrete measure: algorithms and applications to SPDEs
♚ Adaptive Wick-Malliavin (WM) approximation to nonlinear SPDEs with discrete RVs
♚ Numerical methods for SPDEs with 1D tempered α-stable (TαS) processes
♚ Numerical methods for SPDEs with additive multi-dimensional Levy jump processes
♚ Future work
14. gPC for 1D stochastic Burgers equation
M. Zheng, B. Rozovsky, G.E. Karniadakis, Adaptive Wick-Malliavin approximation to nonlinear SPDEs with discrete random variables, SIAM J. Sci. Comput., revised. (multiple discrete RVs)
D. Venturi, X. Wan, R. Mikulevicius, B.L. Rozovskii, G.E. Karniadakis, Wick-Malliavin approximation to nonlinear stochastic PDEs: analysis and simulations, Proceedings of the Royal Society A, vol. 469, no. 2158, (2013). (multiple Gaussian RVs)

$$u_t + uu_x = \nu u_{xx} + \sigma c_1(\xi;\lambda), \quad x\in[-\pi,\pi], \qquad \xi\sim\mathrm{Pois}(\lambda),$$
where the c_k are Charlier polynomials, orthogonal under the Poisson measure:
$$\sum_{k\in\mathbb{N}} \frac{e^{-\lambda}\lambda^k}{k!}\, c_m(k;\lambda)\, c_n(k;\lambda) = \langle c_m c_n\rangle = n!\,\lambda^n\,\delta_{mn}.$$
Expand the solution as
$$u(x,t;\xi) \approx \sum_{k=0}^P \hat{u}_k(x,t)\, c_k(\xi;\lambda)$$
and apply Galerkin projection ⟨u c_k⟩ to obtain the general Polynomial Chaos (gPC) propagator:
$$\frac{\partial \hat{u}_k}{\partial t} + \sum_{m,n=0}^P \hat{u}_m \frac{\partial \hat{u}_n}{\partial x}\,\langle c_m c_n c_k\rangle = \nu \frac{\partial^2 \hat{u}_k}{\partial x^2} + \sigma\delta_{1k}, \quad k = 0,1,\dots,P.$$
(Motivation) The nonlinear term couples all modes: there are (P+1)³ terms of the form $\hat{u}_m\,\frac{\partial \hat{u}_n}{\partial x}\,\langle c_m c_n c_k\rangle$. Let us simplify the gPC propagator!
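A minimal numerical check of the Charlier orthogonality relation above (monic Charlier polynomials via their standard three-term recurrence; the truncation of the Poisson sum at `kmax` is an assumption made for illustration):

```python
import math

def charlier(n, x, lam):
    """Monic Charlier polynomial c_n(x; lambda) via the three-term recurrence
    c_{k+1}(x) = (x - k - lambda) c_k(x) - lambda k c_{k-1}(x)."""
    c_prev, c = 0.0, 1.0
    for k in range(n):
        c_prev, c = c, (x - k - lam) * c - lam * k * c_prev
    return c

def poisson_inner(m, n, lam, kmax=200):
    """<c_m c_n> under the Poisson(lambda) measure, truncated at kmax."""
    s, w = 0.0, math.exp(-lam)  # w starts as the pmf at k = 0
    for k in range(kmax):
        s += w * charlier(m, k, lam) * charlier(n, k, lam)
        w *= lam / (k + 1)      # pmf recursion: e^-lam lam^k / k!
    return s

# Orthogonality <c_m c_n> = n! lam^n delta_mn, e.g. <c_2 c_2> = 2 lam^2.
val = poisson_inner(2, 2, lam=1.5)
```

The same inner products feed the triple products ⟨c_m c_n c_k⟩ appearing in the gPC propagator.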
15. Wick-Malliavin (WM) approximation
G.C. Wick, The evaluation of the collision matrix, Phys. Rev., 80(2), (1950), pp. 268–272.
✰ consider ξ ∼ Pois(λ) with measure
$$\Gamma(x) = \sum_{k\in\mathbb{N}} \frac{e^{-\lambda}\lambda^k}{k!}\,\delta(x-k)$$
✰ define the Wick product ◊ as:
$$c_m(x;\lambda)\,\Diamond\, c_n(x;\lambda) = c_{m+n}(x;\lambda)$$
✰ define the Malliavin derivative D as:
$$D^p c_i(x;\lambda) = \frac{i!}{(i-p)!}\, c_{i-p}(x;\lambda)$$
✰ the product of two polynomials can be approximated by:
$$c_m(x)\, c_n(x) = \sum_{k=0}^{m+n} a(k,m,n)\, c_k(x) \approx \sum_{p=0}^{\frac{m+n}{2}} K_{mnp}\, c_{m+n-2p}(x;\lambda)$$
✰ here K_{mnp} = a(m + n − 2p, m, n)
✰ approximate the product uv as:
$$uv = \sum_{p=0}^{\infty} \frac{D^p u \,\Diamond\, D^p v}{p!} \approx \sum_{p=0}^{Q} \frac{D^p u \,\Diamond\, D^p v}{p!}$$
16. WM approximation simplifies the gPC propagator!
$$u_t + uu_x = \nu u_{xx} + \sigma c_1(\xi;\lambda), \quad x\in[-\pi,\pi]$$
gPC propagator:
$$\frac{\partial \hat{u}_k}{\partial t} + \sum_{m,n=0}^P \hat{u}_m \frac{\partial \hat{u}_n}{\partial x}\,\langle c_m c_n c_k\rangle = \nu \frac{\partial^2 \hat{u}_k}{\partial x^2} + \sigma\delta_{1k}, \quad k = 0,1,\dots,P.$$
WM propagator:
$$\frac{\partial \hat{u}_k}{\partial t} + \sum_{p=0}^{Q}\sum_{i=0}^{P} \hat{u}_i \frac{\partial \hat{u}_{k+2p-i}}{\partial x}\, K_{i,\,k+2p-i,\,p} = \nu \frac{\partial^2 \hat{u}_k}{\partial x^2} + \sigma\delta_{1k}, \quad k = 0,1,\dots,P.$$
How much less? Let us count the dots!
[Figure: dot diagrams of the surviving coupling terms for k = 0, 1, 2, 3, 4 (P = 4, Q = 1/2).]
17. Spectral convergence when Q ≥ P − 1
$$u_t + uu_x = \nu u_{xx} + \sigma c_1(\xi;\lambda), \quad u(x,0) = 1 - \sin(x), \quad \xi\sim\mathrm{Pois}(\lambda), \quad x\in[-\pi,\pi], \text{ periodic B.C.}$$
18. Concept of P-Q refinement
Same setup as above: the gPC order P and the WM order Q are refined jointly to control the error.
19. WM for stochastic Burgers equation w/ multiple RVs
$$u_t + uu_x = \nu u_{xx} + \sigma\sum_{j=1}^{3} c_1(\xi_j;\lambda)\cos(0.1\,j\,t), \quad u(x,0) = 1-\sin(x), \quad \xi_{1,2,3}\sim\mathrm{Pois}(\lambda), \quad x\in[-\pi,\pi], \text{ periodic B.C.}$$
[Figure: l2u2(T) vs. T ∈ [0.2, 1] for Q1 = Q2 = Q3 = 0; Q1 = 1, Q2 = Q3 = 0; Q1 = Q2 = 1, Q3 = 0; and Q1 = Q2 = Q3 = 1; errors range from 10^-2 down to 10^-7.]
How about 3 discrete RVs? How about the cost in d dimensions?

Cost: WM vs. gPC. Let C(P,Q)_d be the number of terms $\hat{u}_i\,\partial\hat{u}_j/\partial x$ in the WM propagator, and (P+1)^{3d} the number of terms $\hat{u}_m\,\partial\hat{u}_n/\partial x$ in the gPC propagator. The ratio C(P,Q)_d / (P+1)^{3d}:

        P=3, Q=2    P=4, Q=3
d=2     61.0%       65.3%
d=3     47.7%       52.8%
d=4     0.000436%   0.0023%
20. Summary of contributions (2)
✰ Extend the numerical work on WM approximation for SPDEs driven by Gaussian RVs to discrete RVs with arbitrary distribution w/ finite moments
✰ Discover spectral convergence when Q ≥ P − 1 for stochastic Burgers equations
✰ Error control with P-Q refinements
✰ Computational complexity comparison of gPC and WM in d dimensions

References
D. Bell, The Malliavin calculus, Dover, (2007).
S. Kaligotla and S.V. Lototsky, Wick product in the stochastic Burgers equation: a curse or a cure? Asymptotic Analysis, 75, (2011), pp. 145–168.
S.V. Lototsky, B.L. Rozovskii, and D. Selesi, On generalized Malliavin calculus, Stochastic Processes and their Applications, 122(3), (2012), pp. 808–843.
R. Mikulevicius and B.L. Rozovskii, On distribution free Skorokhod-Malliavin calculus, submitted.
21. Outline
♚ Adaptive multi-element polynomial chaos with discrete measure: algorithms and applications to SPDEs
♚ Adaptive Wick-Malliavin (WM) approximation to nonlinear SPDEs with discrete RVs
♚ Numerical methods for SPDEs with 1D tempered α-stable (TαS) processes
♚ Numerical methods for SPDEs with additive multi-dimensional Levy jump processes
♚ Future work
22. Introduction of Levy processes
[Figure: sample path of a Poisson process, J_t vs. t.]
23. Generalized Fokker-Planck equation for the overdamped Langevin equation
M. Zheng, G.E. Karniadakis, Numerical methods for SPDEs with tempered stable processes, SIAM J. Sci. Comput., accepted.
N. Hilber, O. Reichmann, Ch. Schwab, Ch. Winter, Computational Methods for Quantitative Finance: Finite Element Methods for Derivative Pricing, Springer Finance, 2013.
S.I. Denisov, W. Horsthemke, P. Hänggi, Generalized Fokker-Planck equation: Derivation and exact solutions, Eur. Phys. J. B, 68 (2009), pp. 567–575.

We study the overdamped Langevin equation (1D SODE, in the Itô sense) driven by a 1D tempered stable (TS) pure jump process, defined through its Levy measure. By Itô's formula, the density satisfies a tempered fractional PDE (TFPDE).

24. Generalized FP equation for the overdamped Langevin equation w/ TS white noise
Three approaches:
1. TFPDE (deterministic): left Riemann-Liouville tempered fractional derivatives (as an example), discretized by a fully implicit scheme in time and Grunwald-Letnikov formulas for the fractional derivatives.
2. MC (probabilistic): Monte Carlo for the overdamped Langevin equation driven by TS white noise, via a compound Poisson (CP) approximation.
3. PCM (probabilistic): probability collocation for the same equation.
25. Histogram from MC vs. density from TFPDEs
[Figure: MC histograms match the TFPDE densities across jump intensities and jump size distributions.]
26. Moment statistics from PCM/CP vs. TFPDE (λ = 10 and λ = 1)
1. TFPDE costs less than PCM
2. PCM depends on the series representation
3. TFPDE depends on the initial condition
4. Convergence in TFPDE by refinement
27. Outline
♚ Adaptive multi-element polynomial chaos with discrete measure: algorithms and applications to SPDEs
♚ Adaptive Wick-Malliavin (WM) approximation to nonlinear SPDEs with discrete RVs
♚ Numerical methods for SPDEs with 1D tempered α-stable (TαS) processes
♚ Numerical methods for SPDEs with additive multi-dimensional Levy jump processes
♚ Future work
28. M. Zheng, G.E. Karniadakis, Numerical methods for SPDEs with additive multi-dimensional Levy jump processes, in preparation.
29. How to describe the dependence structure among components of a multi-dimensional Levy jump process?
Construction 1: LePage's representation of the Levy measure, with the corresponding series representation of the process.
J. Rosinski, Series representations of Levy processes from the perspective of point processes, in: Levy Processes - Theory and Applications, O.E. Barndorff-Nielsen, T. Mikosch and S.I. Resnick (Eds.), Birkhäuser, Boston, (2001), pp. 401–415.
30. Construction 2: dependence structure by Levy copula:
Levy copula + marginal Levy measures = Levy measure, again with a series representation.
[Figure: simulated sample paths for τ = 1 and τ = 100.]
J. Kallsen, P. Tankov, Characterization of dependence of multidimensional Levy processes using Levy copulas, Journal of Multivariate Analysis, 97 (2006), pp. 1551–1572.
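As a concrete sketch of "Levy copula + marginal Levy measures = Levy measure", the following uses the Clayton family of Levy copulas from the Kallsen-Tankov framework; the power-law marginal tail integrals (and the parameter values) are assumptions chosen purely for illustration:

```python
def clayton_levy_copula(u, v, theta):
    """Clayton Levy copula F(u, v) = (u^-theta + v^-theta)^(-1/theta);
    large theta approaches complete (co-monotone) jump dependence,
    where F(u, v) -> min(u, v)."""
    return (u ** (-theta) + v ** (-theta)) ** (-1.0 / theta)

def joint_tail_integral(x, y, theta, alpha1=0.5, alpha2=0.7):
    """Tail integral U(x, y) of a 2D Levy measure obtained by coupling two
    (assumed) power-law marginal tail integrals U_i(x) = x^-alpha_i."""
    return clayton_levy_copula(x ** (-alpha1), y ** (-alpha2), theta)

# Joint tail mass beyond (2, 3) under moderate dependence (theta = 1).
val = joint_tail_integral(2.0, 3.0, theta=1.0)
```

The copula F never exceeds either marginal tail integral, so the resulting joint Levy measure has the prescribed marginals.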
31. Analysis of variance (ANOVA) + FP = marginal distributions
- ANOVA decomposition of the solution; the ANOVA terms are related to marginal distributions
- FP equation for the density; with LePage's representation, the densities satisfy TFPDEs
- 1D-ANOVA-FP and 2D-ANOVA-FP compute the marginal distributions at first and second ANOVA order
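A minimal sketch of the ANOVA decomposition used above, for a generic function of two uniform inputs (the midpoint-rule quadrature is an assumption for illustration; the thesis applies the decomposition to the FP equation itself):

```python
def anova_2d(f, n=200):
    """First-order ANOVA terms of f(x, y) on [0,1]^2 via midpoint quadrature:
    f = f0 + f1(x) + f2(y) + f12(x, y), where f0 is the mean and each f_i
    is the conditional mean minus f0; f12 is the remaining interaction."""
    pts = [(i + 0.5) / n for i in range(n)]
    f0 = sum(f(x, y) for x in pts for y in pts) / n**2
    f1 = lambda x: sum(f(x, y) for y in pts) / n - f0
    f2 = lambda y: sum(f(x, y) for x in pts) / n - f0
    f12 = lambda x, y: f(x, y) - f0 - f1(x) - f2(y)
    return f0, f1, f2, f12

# For f(x, y) = x + y + x*y: f0 = 1.25 and f12(x, y) = (x - 1/2)(y - 1/2).
f0, f1, f2, f12 = anova_2d(lambda x, y: x + y + x * y)
```

Truncating after the first-order terms (the "1D-ANOVA" level) is accurate exactly when the interaction terms like f12 are small, which mirrors the E[u] vs. E[u²] behavior reported on the next slides.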
32. Moments: 1D-ANOVA-FP is accurate for E[u] in 10D
[Figure, left: E[u(x, T=1)] vs. x ∈ [0, 1] computed by PCM, 1D-ANOVA-FP, and 2D-ANOVA-FP; the three curves coincide. Right: relative differences ||E[u_1D-ANOVA-FP] − E[u_PCM]|| / ||E[u_PCM]|| and ||E[u_2D-ANOVA-FP] − E[u_PCM]|| / ||E[u_PCM]|| in L²([0, 1]), staying around 3.4–5.2 × 10^-4 for T ∈ [0.6, 1].]
Noise-to-signal ratio NSR ≈ 18.24%.
33. Moments: 1D-ANOVA-FP is not accurate for E[u²] in 10D
[Figure, left: E[u²(x, T=1)] vs. x ∈ [0, 1] from PCM, 1D-ANOVA-FP, and 2D-ANOVA-FP. Right: relative L²([0, 1]) differences ||E[u²_1D-ANOVA-FP] − E[u²_PCM]|| / ||E[u²_PCM]|| and ||E[u²_2D-ANOVA-FP] − E[u²_PCM]|| / ||E[u²_PCM]|| for T ∈ [0.6, 1], reaching about 0.4.]
NSR ≈ 18.24%.
34. Moments: PCM vs. FP (TFPDE), and PCM vs. MC, with LePage's representation (2D)
The initial condition of the FP equation introduces error.
[Figure, left: error l2u2(t) in E[u²] vs. t ∈ [0.2, 1] for PCM/S with Q = 5, q = 2; PCM/S with Q = 10, q = 2; and TFPDE (errors from 10^-10 to 10^-2); NSR ≈ 4.8%. Right: l2u2(t=1) vs. number of points/samples s (10^0 to 10^6) for PCM/S with q = 1, 2 and MC/S with Q = 40 (errors 10^-4 to 10^-1); PCM costs less than MC.]
Here Q is the truncation level of the series representation and q the number of quadrature points in each dimension.
35. Density: MC vs. FP equation (2D Levy)
[Figure: joint densities at t = 1 and t = 1.5 from 2D MC and from the FP equation (3D), for both the LePage representation and the Levy copula construction.]
37. Summary of contributions (3, 4)
✰ Established a framework for UQ of SPDEs w/ multi-dimensional Levy jump processes by probabilistic (MC, PCM) and deterministic (FP) methods
✰ Combined ANOVA & FP to simulate moments of the solution at lower orders
✰ Improved the traditional MC method's efficiency and accuracy
✰ Linked the areas of fractional PDEs & UQ for SPDEs w/ Levy jump processes
38. Outline
♚ Adaptive multi-element polynomial chaos with discrete measure: algorithms and applications to SPDEs
♚ Adaptive Wick-Malliavin (WM) approximation to nonlinear SPDEs with discrete RVs
♚ Numerical methods for SPDEs with 1D tempered α-stable (TαS) processes
♚ Numerical methods for SPDEs with additive multi-dimensional Levy jump processes
♚ Future work
39. Future work
For methodology:
✰ Simulate SPDEs driven by higher-dimensional Levy jump processes with ANOVA-FP
✰ Consider jump processes other than TS processes
✰ Consider nonlinear SPDEs w/ multiplicative multi-dimensional Levy jump processes

For applications:
✰ The Energy Balance Model in climate modeling: P. Imkeller, Energy balance models - viewed from stochastic dynamics, Stochastic climate models, Basel: Birkhäuser, Prog. Probab., 49 (2001), pp. 213–240.
✰ Mathematical finance, e.g. stock price models driven by multi-dimensional Levy jump processes: R. Cont, P. Tankov, Financial Modelling with Jump Processes, Chapman & Hall/CRC Press, 2004.
40. Acknowledgements
✰ Thanks to Prof. George Em Karniadakis for his advice and support
✰ Thanks to Prof. Xiaoliang Wan and Prof. Hui Wang for serving on my committee
✰ Thanks to Prof. Xiaoliang Wan and Prof. Boris Rozovskii for their innovative ideas and collaboration
✰ Thanks for the support from the National Science Foundation, "Overcoming the Bottlenecks in Polynomial Chaos: Algorithms and Applications to Systems Biology and Fluid Mechanics" (Grant #526859)
✰ Thanks for the support from the Air Force Office of Scientific Research, Multidisciplinary Research Program of the University Research Initiative, "Multi-scale Fusion of Information for Uncertainty Quantification and Management in Large-Scale Simulations" (Grant #521024)