This document summarizes research on computing stochastic partial differential equations (SPDEs) using an adaptive multi-element polynomial chaos method (MEPCM) with discrete measures. Key points include:
1) MEPCM uses polynomial chaos expansions and numerical integration to compute SPDEs with parametric uncertainty.
2) Orthogonal polynomials are generated for discrete measures using various methods like Vandermonde, Stieltjes, and Lanczos.
3) Numerical integration is tested on discrete measures using Genz functions in 1D and sparse grids in higher dimensions.
4) The method is demonstrated on the KdV equation with random initial conditions. Future work includes applying these techniques to SPDEs driven by Lévy jump processes.
Optimal interval clustering: Application to Bregman clustering and statistical mixture learning, by Frank Nielsen
We present a generic dynamic programming method to compute the optimal clustering of n scalar elements into k pairwise disjoint intervals. This case includes 1D Euclidean k-means, k-medoids, k-medians, k-centers, etc. We extend the method to incorporate cluster size constraints and show how to choose the appropriate k by model selection. Finally, we illustrate and refine the method on two case studies: Bregman clustering and statistical mixture learning maximizing the complete likelihood.
http://arxiv.org/abs/1403.2485
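The dynamic-programming idea is standard enough to sketch: sort the points, precompute prefix sums so any interval's cost is O(1), and fill a table dp[c][j] = best cost of the first j points in c clusters. Below is a minimal Python sketch instantiated with the 1D k-means cost; the function name and the O(k·n²) loop structure are illustrative choices, not taken from the paper.

```python
# Sketch of the dynamic-programming idea for optimal interval clustering,
# instantiated with the 1D k-means cost (illustrative, not the paper's code).
import numpy as np

def interval_clustering(x, k):
    """Optimal partition of 1D data into k contiguous intervals, minimizing
    the total within-cluster sum of squared deviations (1D k-means)."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    # prefix sums -> O(1) cost of any interval x[i..j]
    s1 = np.concatenate([[0.0], np.cumsum(x)])
    s2 = np.concatenate([[0.0], np.cumsum(x * x)])

    def cost(i, j):  # SSE of the cluster x[i..j], inclusive
        m = j - i + 1
        s = s1[j + 1] - s1[i]
        return (s2[j + 1] - s2[i]) - s * s / m

    INF = float("inf")
    # dp[c][j]: best cost of clustering the first j points into c intervals
    dp = np.full((k + 1, n + 1), INF)
    dp[0][0] = 0.0
    back = np.zeros((k + 1, n + 1), dtype=int)
    for c in range(1, k + 1):
        for j in range(c, n + 1):
            for i in range(c - 1, j):  # last interval is x[i..j-1]
                val = dp[c - 1][i] + cost(i, j - 1)
                if val < dp[c][j]:
                    dp[c][j] = val
                    back[c][j] = i
    # walk back pointers to recover the interval index ranges
    bounds, j = [], n
    for c in range(k, 0, -1):
        i = int(back[c][j])
        bounds.append((i, j - 1))
        j = i
    return dp[k][n], bounds[::-1]
```

Swapping in a different interval cost (a Bregman divergence, a k-medians cost, or a negative complete log-likelihood) changes only the `cost` function, which is the genericity the abstract refers to.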
In this talk, we discuss some recent advances in probabilistic schemes for high-dimensional PIDEs. It is known that traditional PDE solvers, e.g., finite element, finite difference methods, do not scale well with the increase of dimension. The idea of probabilistic schemes is to link a wide class of nonlinear parabolic PIDEs to stochastic Levy processes based on nonlinear version of the Feynman-Kac theory. As such, the solution of the PIDE can be represented by a conditional expectation (i.e., a high-dimensional integral) with respect to a stochastic dynamical system driven by Levy processes. In other words, we can solve the PIDEs by performing high-dimensional numerical integration. A variety of quadrature methods could be applied, including MC, QMC, sparse grids, etc. The probabilistic schemes have been used in many application problems, e.g., particle transport in plasmas (e.g., Vlasov-Fokker-Planck equations), nonlinear filtering (e.g., Zakai equations), and option pricing, etc.
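As a toy illustration of the probabilistic-scheme idea (my example, not the talk's): for the linear heat equation u_t = ½u_xx with u(0, x) = g(x), the Feynman-Kac representation u(t, x) = E[g(x + W_t)] turns the PDE solve into an expectation that plain Monte Carlo can approximate.

```python
# Toy illustration (my example): Feynman-Kac turns the heat equation
# u_t = 0.5 * u_xx, u(0, x) = g(x), into u(t, x) = E[g(x + W_t)], which
# plain Monte Carlo approximates by sampling Gaussian increments.
import math
import random

def heat_mc(g, x, t, n_samples=200_000, seed=0):
    rng = random.Random(seed)
    s = math.sqrt(t)
    return sum(g(x + s * rng.gauss(0.0, 1.0)) for _ in range(n_samples)) / n_samples

# With g(x) = exp(x), the exact solution is u(t, x) = exp(x + t/2).
u = heat_mc(math.exp, 0.0, 1.0)
```

For the nonlinear PIDEs of the talk, the representation becomes a nonlinear (backward) version of this identity and the Brownian path is replaced by a Lévy-driven one, but the computational task is the same: high-dimensional numerical integration.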
We present recent results on the numerical analysis of Quasi-Monte Carlo quadrature methods, applied to forward and inverse uncertainty quantification for elliptic and parabolic PDEs. Particular attention will be placed on higher-order QMC, the stable and efficient generation of interlaced polynomial lattice rules, and the numerical analysis of multilevel QMC finite element discretizations with applications to computational uncertainty quantification.
In this talk we consider the question of how to use QMC with an empirical dataset, such as a set of points generated by MCMC. Using ideas from partitioning for parallel computing, we apply recursive bisection to reorder the points, and then interleave the bits of the QMC coordinates to select the appropriate point from the dataset. Numerical tests show that in the case of known distributions this is almost as effective as applying QMC directly to the original distribution. The same recursive bisection can also be used to thin the dataset, by recursively bisecting down to many small subsets of points, and then randomly selecting one point from each subset. This makes it possible to reduce the size of the dataset greatly without significantly increasing the overall error. Co-author: Fei Xie
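The thinning half of the idea can be sketched in a few lines (my simplification, in 1D for clarity; the subset size and bisection order are illustrative choices):

```python
# Sketch of the thinning idea in 1D (my simplification): recursively bisect
# the sorted dataset at the median until subsets are small, then keep one
# randomly chosen point per subset. max_size is an illustrative parameter.
import random

def thin(points, max_size=4, seed=0):
    rng = random.Random(seed)
    pts = sorted(points)

    def rec(chunk):
        if len(chunk) <= max_size:
            return [rng.choice(chunk)]  # one representative per small subset
        mid = len(chunk) // 2           # bisect at the median
        return rec(chunk[:mid]) + rec(chunk[mid:])

    return rec(pts)
```

Because each retained point represents a small, spatially localized subset, the thinned set inherits the coverage of the original dataset while being much smaller.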
The generation of Gaussian random fields over a physical domain is a challenging problem in computational mathematics, especially when the correlation length is short and the field is rough. The traditional approach is to make use of a truncated Karhunen-Loeve (KL) expansion, but the generation of even a single realisation of the field may then be effectively beyond reach (especially for 3-dimensional domains) if the need is to obtain an expected L2 error of say 5%, because of the potentially very slow convergence of the KL expansion. In this talk, based on joint work with Ivan Graham, Frances Kuo, Dirk Nuyens, and Rob Scheichl, a completely different approach is used, in which the field is initially generated at a regular grid on a 2- or 3-dimensional rectangle that contains the physical domain, and then possibly interpolated to obtain the field at other points. In that case there is no need for any truncation. Rather the main problem becomes the factorisation of a large dense matrix. For this we use circulant embedding and FFT ideas. Quasi-Monte Carlo integration is then used to evaluate the expected value of some functional of the finite-element solution of an elliptic PDE with a random field as input.
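In one dimension the circulant-embedding construction fits in a few lines (my assumptions: a uniform grid and an exponential covariance; the talk's contribution concerns the analysis and the much harder 2D/3D setting):

```python
# Minimal 1D circulant-embedding sketch (assumptions: uniform grid,
# exponential covariance exp(-|d| / length_scale)).
import numpy as np

def grf_circulant(n, length_scale, h=1.0, rng=None):
    rng = rng or np.random.default_rng(0)
    d = np.arange(n) * h
    row = np.exp(-d / length_scale)             # first row of the covariance matrix
    circ = np.concatenate([row, row[-2:0:-1]])  # mirror into a circulant of size 2n-2
    lam = np.fft.fft(circ).real                 # circulant eigenvalues
    if lam.min() < 0.0:
        raise ValueError("embedding not positive semidefinite; enlarge the circulant")
    m = len(circ)
    # complex standard normal in Fourier space; the real part of the FFT
    # below is one realisation with the desired covariance
    z = rng.normal(size=m) + 1j * rng.normal(size=m)
    field = np.fft.fft(np.sqrt(lam / m) * z)
    return field.real[:n]
```

The factorisation of the large dense covariance matrix is hidden entirely in the FFT of the circulant's first row, which is what makes the approach scale to fine grids.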
A fundamental numerical problem in many sciences is to compute integrals. These integrals can often be expressed as expectations and then approximated by sampling methods. Monte Carlo sampling is very competitive in high dimensions, but has a slow rate of convergence. One reason for this slowness is that the MC points form clusters and gaps. Quasi-Monte Carlo methods greatly reduce such clusters and gaps, and under modest smoothness demands on the integrand they can greatly improve accuracy. This can even take place in problems of surprisingly high dimension. This talk will introduce the basics of QMC and randomized QMC. It will include discrepancy and the Koksma-Hlawka inequality, some digital constructions and some randomized QMC methods that allow error estimation and sometimes bring improved accuracy.
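A first taste of the low-discrepancy idea (my example, not from the talk): the base-2 van der Corput sequence fills [0, 1) far more evenly than random points, and already on a smooth 1D integrand the error sits well below the Monte Carlo rate.

```python
# My example of the low-discrepancy idea: the base-2 van der Corput sequence
# (radical inverse of the integers) fills [0, 1) very evenly.
def van_der_corput(n, base=2):
    q, bk = 0.0, 1.0 / base
    while n > 0:
        q += (n % base) * bk   # reflect the digits of n about the radix point
        n //= base
        bk /= base
    return q

N = 1024
# integrate f(x) = x^2 on [0, 1]; exact value is 1/3
qmc_est = sum(van_der_corput(i) ** 2 for i in range(1, N + 1)) / N
```

Digital constructions such as Sobol' points generalize this radical-inverse idea to many dimensions, and the randomized variants discussed in the talk restore the ability to estimate the error.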
After applying the stochastic Galerkin method to solve a stochastic PDE and solving the resulting large linear system, we obtain a stochastic solution (a random field) represented in a Karhunen-Loeve and PCE basis. No sampling error is involved, only algebraic truncation error. We now wish to avoid the classical MCMC path to the posterior, and we develop a Bayesian update formula for the KLE-PCE coefficients.
We combine low-rank tensor techniques and the FFT to compute kriging estimates, estimate variance, and compute conditional covariance. We are able to solve 3D problems at very high resolution.
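For orientation, here is the dense-algebra baseline that the low-rank tensor and FFT machinery accelerates: plain simple kriging in 1D (my sketch, assuming a unit-sill exponential covariance).

```python
# Dense-algebra sketch of simple kriging in 1D (my baseline; the low-rank
# tensor and FFT techniques are what make the high-resolution 3D case feasible).
import numpy as np

def simple_kriging(x_obs, y_obs, x_new, length_scale=1.0):
    cov = lambda a, b: np.exp(-np.abs(a[:, None] - b[None, :]) / length_scale)
    K = cov(x_obs, x_obs)              # covariance among observations
    k = cov(x_obs, x_new)              # cross-covariance to prediction points
    w = np.linalg.solve(K, k)          # kriging weights
    mean = w.T @ y_obs                 # predictive mean
    var = 1.0 - np.sum(w * k, axis=0)  # predictive (conditional) variance
    return mean, var
```

The `solve` against the dense covariance matrix is the step that becomes prohibitive at high resolution, which motivates the structured (tensor/FFT) representations.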
One of the central tasks in computational mathematics and statistics is to accurately approximate unknown target functions. This is typically done with the help of data — samples of the unknown functions. The emergence of Big Data presents both opportunities and challenges. On one hand, big data introduces more information about the unknowns and, in principle, allows us to create more accurate models. On the other hand, data storage and processing become highly challenging. In this talk, we present a set of sequential algorithms for function approximation in high dimensions with large data sets. The algorithms are of iterative nature and involve only vector operations. They use one data sample at each step and can handle dynamic/stream data. We present both the numerical algorithms, which are easy to implement, as well as rigorous analysis for their theoretical foundation.
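A minimal sketch of such a one-sample-at-a-time scheme (a Kaczmarz-style projection of my choosing, not the authors' algorithm; the basis and step size are assumptions):

```python
# Minimal sketch of a one-sample-at-a-time least-squares scheme in the spirit
# of sequential approximation from streaming data: only vector operations,
# one (x, y) sample consumed per update.
import numpy as np

def sequential_fit(stream, basis, dim, step=1.0, sweeps=1):
    """After each sample (x, y), move the coefficient vector toward the
    hyperplane of coefficient vectors consistent with that single sample."""
    c = np.zeros(dim)
    for _ in range(sweeps):
        for x, y in stream:
            phi = basis(x)
            c += step * (y - phi @ c) * phi / (phi @ phi)
    return c
```

Each update touches only one sample and one basis-evaluation vector, so the scheme handles dynamic or streaming data without ever storing, let alone factorizing, the full design matrix.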
Introduction about Monte Carlo Methods, lecture given at Technical University of Kaiserslautern 2014.
There are many situations where Monte Carlo Methods are useful to solve data science problems
Efficient Analysis of high-dimensional data in tensor formats, by Alexander Litvinenko
We solve a PDE with uncertain coefficients. The solution is approximated in a Karhunen-Loeve/PCE basis. How can we compute the maximum, the frequency, and the probability density function of the solution with almost linear complexity? We offer various methods.
Relaxation methods for the matrix exponential on large networks, by David Gleich
My talk from the Stanford ICME seminar series on network analysis and link prediction using a fast algorithm for the matrix exponential on graph problems.
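The talk's relaxation methods go well beyond this, but the basic object is easy to state (my sketch): one column of exp(A), approximated by a truncated Taylor series using only matrix-vector products, which is what makes such methods attractive on large sparse graphs.

```python
# My sketch: one column of exp(A) via a truncated Taylor series, using only
# matrix-vector products (the operation that stays cheap on sparse graphs).
import numpy as np

def expm_column(A, j, terms=25):
    v = np.zeros(A.shape[0])
    v[j] = 1.0                 # j-th standard basis vector
    out, term = v.copy(), v.copy()
    for k in range(1, terms):
        term = A @ term / k    # next Taylor term A^k e_j / k!
        out += term
    return out
```

Relaxation/coordinate methods replace the dense vector iterates above with sparse ones, updating only the entries that matter, which is the key to scaling to large networks.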
Stochastic reaction networks (SRNs) are a particular class of continuous-time Markov chains used to model a wide range of phenomena, including biological/chemical reactions, epidemics, risk theory, queuing, and supply-chain/social/multi-agent networks. In this context, we explore the efficient estimation of statistical quantities, particularly rare-event probabilities, and propose two alternative importance sampling (IS) approaches [1,2] to improve the efficiency of the Monte Carlo (MC) estimator. The key challenge in the IS framework is to choose an appropriate change of probability measure that achieves substantial variance reduction, which often requires insight into the underlying problem. We therefore propose an automated approach to obtain a highly efficient path-dependent measure change, based on an original connection between finding optimal IS parameters and solving a variance-minimization problem via a stochastic optimal control formulation. We pursue two alternative approaches to mitigate the curse of dimensionality when solving the resulting dynamic programming problem. In the first approach [1], we propose a learning-based method to approximate the value function using a neural network, where the parameters are determined via a stochastic optimization algorithm. As an alternative, we present in [2] a dimension-reduction method based on mapping the problem to a significantly lower-dimensional space via the Markovian projection (MP) idea. The output of this model-reduction technique is a low-dimensional SRN (potentially one-dimensional) that preserves the marginal distribution of the original high-dimensional SRN system. The dynamics of the projected process are obtained via a discrete $L^2$ regression.
By solving a resulting projected Hamilton-Jacobi-Bellman (HJB) equation for the reduced-dimensional SRN, we obtain projected IS parameters, which are then mapped back to the original full-dimensional SRN system and yield an efficient IS-MC estimator for the full-dimensional SRN. Our analysis and numerical experiments verify that both proposed IS approaches (learning-based and MP-HJB-IS) substantially reduce the MC estimator's variance, resulting in a lower computational complexity in the rare-event regime than standard MC estimators. [1] Ben Hammouda, C., Ben Rached, N., Tempone, R., and Wiechert, S. Learning-based importance sampling via stochastic optimal control for stochastic reaction networks. Statistics and Computing 33, no. 3 (2023): 58. [2] Ben Hammouda, C., Ben Rached, N., Tempone, R., and Wiechert, S. (2023). Automated Importance Sampling via Optimal Control for Stochastic Reaction Networks: A Markovian Projection-based Approach. To appear soon.
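The measure-change idea in its simplest form (my example, far simpler than the SRN setting): estimate P(X > a) for X ~ N(0, 1) by sampling from the exponentially tilted density N(a, 1) and reweighting with the likelihood ratio, so that samples concentrate where the rare event happens.

```python
# My toy example of importance sampling by exponential tilting: estimate
# P(X > a) for X ~ N(0, 1) by sampling from N(a, 1) and reweighting.
import math
import random

def rare_prob_is(a, n=100_000, seed=0):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        y = rng.gauss(a, 1.0)                      # sample under the tilted measure
        if y > a:                                  # indicator of the rare event
            total += math.exp(-a * y + a * a / 2)  # likelihood ratio dN(0,1)/dN(a,1)
    return total / n
```

The talk's contribution is precisely to automate the choice of such a measure change for path-dependent SRN problems, where a hand-picked tilting parameter is no longer available.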
ABC with data cloning for MLE in state space models, by Umberto Picchini
An application of the "data cloning" method for parameter estimation via MLE aided by Approximate Bayesian Computation. The relevant paper is http://arxiv.org/abs/1505.06318
PhD Thesis of Mengdi Zheng (Summer), Brown Applied Maths
Solving the evolution of the probability density function of SPDEs driven by multi-dimensional heavy-tailed Lévy jump processes (tempered stable processes), by applying an ANOVA decomposition dimension-reduction method to the derived tempered fractional Fokker-Planck equation.
Seminar on U.V. Spectroscopy, by Samir Panda
Spectroscopy is a branch of science dealing with the study of the interaction of electromagnetic radiation with matter.
Ultraviolet-visible spectroscopy refers to absorption or reflectance spectroscopy in the UV-VIS spectral region.
Ultraviolet-visible spectroscopy is an analytical method that measures the amount of light absorbed by the analyte.
THE IMPORTANCE OF MARTIAN ATMOSPHERE SAMPLE RETURN, by Sérgio Sacani
The return of a sample of near-surface atmosphere from Mars would facilitate answers to several first-order science questions surrounding the formation and evolution of the planet. One of the important aspects of terrestrial planet formation in general is the role that primary atmospheres played in influencing the chemistry and structure of the planets and their antecedents. Studies of the martian atmosphere can be used to investigate the role of a primary atmosphere in its history. Atmosphere samples would also inform our understanding of the near-surface chemistry of the planet, and ultimately the prospects for life. High-precision isotopic analyses of constituent gases are needed to address these questions, requiring that the analyses are made on returned samples rather than in situ.
Introduction:
RNA interference (RNAi) or Post-Transcriptional Gene Silencing (PTGS) is an important biological process for modulating eukaryotic gene expression.
It is a highly conserved process of post-transcriptional gene silencing by which double-stranded RNA (dsRNA) causes sequence-specific degradation of mRNA sequences.
dsRNA-induced gene silencing (RNAi) is reported in a wide range of eukaryotes, including worms, insects, mammals, and plants.
This process mediates resistance to both endogenous parasitic and exogenous pathogenic nucleic acids, and regulates the expression of protein-coding genes.
What are small ncRNAs?
micro RNA (miRNA)
short interfering RNA (siRNA)
Properties of small non-coding RNA:
Involved in silencing mRNA transcripts.
Called “small” because they are usually only about 21-24 nucleotides long.
Synthesized by first cutting up longer precursor sequences (like the 61nt one that Lee discovered).
Silence an mRNA by base pairing with some sequence on the mRNA.
Discovery of siRNA?
The first small RNA:
In 1993, Rosalind Lee (Victor Ambros lab) was studying a non-coding gene in C. elegans, lin-4, that was involved in silencing of another gene, lin-14, at the appropriate time in the development of the worm.
Two small transcripts of lin-4 (22nt and 61nt) were found to be complementary to a sequence in the 3' UTR of lin-14.
Because lin-4 encoded no protein, she deduced that it must be these transcripts that are causing the silencing by RNA-RNA interactions.
Types of RNAi (non-coding RNA):
miRNA: length 23-25 nt; trans-acting; binds its target mRNA with mismatches; causes translation inhibition.
siRNA: length 21 nt; cis-acting; binds its target mRNA as a perfectly complementary sequence.
piRNA (Piwi-interacting RNA): length 25-36 nt; expressed in germ cells; regulates transposon activity.
MECHANISM OF RNAI:
First the double-stranded RNA teams up with a protein complex named Dicer, which cuts the long RNA into short pieces.
Then another protein complex called RISC (RNA-induced silencing complex) discards one of the two RNA strands.
The RISC-docked, single-stranded RNA then pairs with the homologous mRNA and destroys it.
THE RISC COMPLEX:
RISC is a large (>500 kD) multi-protein RNA-binding complex which triggers degradation of the target mRNA.
The double-stranded siRNA is unwound by an ATP-independent helicase.
The active component of RISC is the Argonaute (Ago) protein, an endonuclease, which cleaves the target mRNA.
DICER: an endonuclease (RNase III family)
Argonaute: Central Component of the RNA-Induced Silencing Complex (RISC)
One strand of the dsRNA produced by Dicer is retained in the RISC complex in association with Argonaute
ARGONAUTE PROTEIN:
1. PAZ (PIWI/Argonaute/Zwille) domain: recognition of the target mRNA.
2. PIWI (P-element induced wimpy testis) domain: breaks the phosphodiester bond of the mRNA (RNase H activity).
miRNA:
Double-stranded RNAs are naturally produced in eukaryotic cells during development, and they have a key role in regulating gene expression.
Nutraceutical market, scope and growth: Herbal drug technology, by Lokesh Patil
As consumer awareness of health and wellness rises, the nutraceutical market, which includes goods like functional foods, beverages, and dietary supplements that provide health advantages beyond basic nutrition, is growing significantly. As healthcare expenses rise, the population ages, and people increasingly seek natural and preventative health solutions, this industry is expanding quickly. Product formulation innovations and the use of cutting-edge technology for customized nutrition are further driving market expansion. With its worldwide reach, the nutraceutical industry is expected to keep growing and to provide significant opportunities for research and investment across a number of categories, including vitamins, minerals, probiotics, and herbal supplements.
Observation of Io's Resurfacing via Plume Deposition Using Ground-based Adaptive Optics, by Sérgio Sacani
Since volcanic activity was first discovered on Io from Voyager images in 1979, changes on Io's surface have been monitored from both spacecraft and ground-based telescopes. Here, we present the highest spatial resolution images of Io ever obtained from a ground-based telescope. These images, acquired by the SHARK-VIS instrument on the Large Binocular Telescope, show evidence of a major resurfacing event on Io's trailing hemisphere. When compared to the most recent spacecraft images, the SHARK-VIS images show that a plume deposit from a powerful eruption at Pillan Patera has covered part of the long-lived Pele plume deposit. Although this type of resurfacing event may be common on Io, few have been detected due to the rarity of spacecraft visits and the previously low spatial resolution available from Earth-based telescopes. The SHARK-VIS instrument ushers in a new era of high resolution imaging of Io's surface using adaptive optics at visible wavelengths.
A brief information about the SCOP protein database used in bioinformatics.
The Structural Classification of Proteins (SCOP) database is a comprehensive and authoritative resource for the structural and evolutionary relationships of proteins. It provides a detailed and curated classification of protein structures, grouping them into families, superfamilies, and folds based on their structural and sequence similarities.
The increased availability of biomedical data, particularly in the public domain, offers the opportunity to better understand human health and to develop effective therapeutics for a wide range of unmet medical needs. However, data scientists remain stymied by the fact that data remain hard to find and to productively reuse, because data and their metadata i) are wholly inaccessible, ii) are in non-standard or incompatible representations, iii) do not conform to community standards, and iv) have unclear or highly restricted terms and conditions that preclude legitimate reuse. These limitations require a rethink of how data can be made machine- and AI-ready, which is the key motivation behind the FAIR Guiding Principles. Concurrently, while recent efforts have explored the use of deep learning to fuse disparate data into predictive models for a wide range of biomedical applications, these models often fail even when the correct answer is already known, and fail to explain individual predictions in terms that data scientists can appreciate. These limitations suggest that new methods to produce practical artificial intelligence are still needed.
In this talk, I will discuss our work in (1) building an integrative knowledge infrastructure to prepare FAIR and "AI-ready" data and services along with (2) neurosymbolic AI methods to improve the quality of predictions and to generate plausible explanations. Attention is given to standards, platforms, and methods to wrangle knowledge into simple, but effective semantic and latent representations, and to make these available into standards-compliant and discoverable interfaces that can be used in model building, validation, and explanation. Our work, and those of others in the field, creates a baseline for building trustworthy and easy to deploy AI models in biomedicine.
Bio
Dr. Michel Dumontier is the Distinguished Professor of Data Science at Maastricht University, founder and executive director of the Institute of Data Science, and co-founder of the FAIR (Findable, Accessible, Interoperable and Reusable) data principles. His research explores socio-technological approaches for responsible discovery science, which includes collaborative multi-modal knowledge graphs, privacy-preserving distributed data mining, and AI methods for drug discovery and personalized medicine. His work is supported through the Dutch National Research Agenda, the Netherlands Organisation for Scientific Research, Horizon Europe, the European Open Science Cloud, the US National Institutes of Health, and a Marie-Curie Innovative Training Network. He is the editor-in-chief for the journal Data Science and is internationally recognized for his contributions in bioinformatics, biomedical informatics, and semantic technologies including ontologies and linked data.
1. Adaptive multi-element polynomial chaos with discrete measure: Algorithms and application to SPDEs. Mengdi Zheng and George Karniadakis
2. Content:
1. computing SPDEs by MEPCM
2. motivations
3. numerical integration on discrete measures
4. numerical example on the KdV equation
5. future work
3. 1. What is computational SPDE about? (MEPCM)
A stochastic solution $X_t(\omega)$, $\omega \in \Omega$, is parametrized by finitely many random variables: $X_t(\omega) = X_t(\xi_1, \xi_2, \ldots, \xi_n)$. Its moments
$E[y^m(x, t; \omega)] = E[y^m(x, t; \xi_1, \xi_2, \ldots, \xi_n)]$
are obtained, for fixed $x$ and $t$, by integration over a finite-dimensional sample space. MEPCM = FEM on the sample space: the $(\xi_1, \xi_2)$ domain $\Omega$ is partitioned into elements, with Gauss quadratures applied in each element.
4. So it's all about integration on the sample space...
Gauss integration:
$I = \int_a^b d\lambda(x)\, f(x) \approx \int_a^b d\lambda(x) \sum_{i=1}^{d} f(x_i) h_i(x) = \sum_{i=1}^{d} f(x_i) \int_a^b d\lambda(x)\, h_i(x)$
Generate polynomials $\{P_i(x)\}$ orthogonal to this measure; the quadrature points are the zeros of $P_d(x)$, and $h_i(x)$ is the Lagrange interpolation basis on those zeros. The solution moment becomes $\sum_{i=1}^{d} y(x, t; \xi_{1,i})\, w_i$: one only runs the deterministic solver on the quadrature points; there is no need to run a propagator. Exactness of integration: degree m = 2d-1.
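The exactness claim is easy to check numerically (my sketch, using the continuous Gauss-Legendre rule on [-1, 1] as the example measure): a d-point rule integrates polynomials of degree up to m = 2d - 1 exactly.

```python
# Numerical check of Gauss-quadrature exactness (my sketch): d points
# integrate polynomials of degree up to m = 2d - 1 exactly.
import numpy as np

d = 4
nodes, weights = np.polynomial.legendre.leggauss(d)  # Gauss-Legendre on [-1, 1]
f = lambda x: x ** (2 * d - 1) + x ** 2              # degree 2d - 1 = 7
approx = np.sum(weights * f(nodes))
exact = 2.0 / 3.0   # the odd power integrates to 0; the x^2 term gives 2/3
```

For a discrete measure the same exactness holds once the orthogonal polynomials of that measure are available, which is what the construction methods on the following slides provide.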
5. 2. Three motivations for dealing with discrete measures
A Gaussian process pairs with Hermite polynomial chaos; a (jump) Lévy process pairs with a Lévy-Sheffer polynomial chaos? (current work)
Mathematical finance: analysis of historical stock prices shows that simple models with randomness provided by pure-jump Lévy processes often capture the statistical behavior of observed stock prices better than similar models with randomness provided by a Brownian motion.
6. 3. J. Foo proved this for continuous measures
J. Foo, X. Wan, G. E. Karniadakis, A multi-element probabilistic collocation method for PDEs with parametric uncertainty: error analysis and applications, Journal of Computational Physics 227 (2008), pp. 9572-9595.
7. 3. Can we prove it for a discrete measure?
For a discrete measure $\lambda = \sum_{i=1}^{N} \lambda_i \delta_{\tau_i}$ on $\Omega$ (atoms $\tau_1, \tau_2, \tau_3, \ldots$), introduce a smoothed measure $\lambda_\varepsilon = \sum_{i=1}^{N} \lambda_i \eta_\varepsilon^{\tau_i}$ with $\lim_{\varepsilon \to 0} \eta_\varepsilon^{\tau_i} = \delta_{\tau_i}$ and $\lim_{\varepsilon \to 0} \lambda_\varepsilon = \lambda$. The multi-element quadrature error then splits as
$\left| \int f(x)\, \lambda(dx) - \sum_{i=1}^{N_e} Q_m^{B_i} f \right| \le \left| \int f(x)\, \lambda(dx) - \int f(x)\, \lambda_\varepsilon(dx) \right| + \left| \int f(x)\, \lambda_\varepsilon(dx) - \sum_{i=1}^{N_e} Q_m^{\varepsilon, B_i} f \right| + \left| \sum_{i=1}^{N_e} Q_m^{\varepsilon, B_i} f - \sum_{i=1}^{N_e} Q_m^{B_i} f \right|,$
with element size $h \propto N_{es}^{-1}$ and convergence rate $N_{es}^{-(m+1)}$; for $m = 2d - 1$ this gives $N_{es}^{-2d}$.
10. Generating orthogonal polynomials for a discrete measure: the Vandermonde matrix method
With moments $\mu_k = \int_{\mathbb{R}} x^k\, \lambda(dx)$, the coefficients of the monic orthogonal polynomial of degree $k$ solve
$\begin{pmatrix} \mu_0 & \mu_1 & \dots & \mu_k \\ \mu_1 & \mu_2 & \dots & \mu_{k+1} \\ \dots & \dots & \dots & \dots \\ \mu_{k-1} & \mu_k & \dots & \mu_{2k-1} \\ 0 & 0 & \dots & 1 \end{pmatrix} \begin{pmatrix} p_0 \\ p_1 \\ \dots \\ p_{k-1} \\ p_k \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ \dots \\ 0 \\ 1 \end{pmatrix}$
11. Generating orthogonal polynomials for a discrete measure: Stieltjes' method
$\alpha_j = \frac{\int_{\mathbb{R}} x P_j^2(x)\, \lambda(dx)}{\int_{\mathbb{R}} P_j^2(x)\, \lambda(dx)}, \qquad \beta_j = \frac{\int_{\mathbb{R}} P_j^2(x)\, \lambda(dx)}{\int_{\mathbb{R}} P_{j-1}^2(x)\, \lambda(dx)}$
$P_{j+1}(x) = (x - \alpha_j) P_j(x) - \beta_j P_{j-1}(x), \quad j = 1, \ldots$
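The Stieltjes recurrence translates directly into code (a sketch under my own conventions; $\beta_0$ is taken as the total mass of the measure), and the Golub-Welsch eigenvalue step then yields a Gauss rule for the discrete measure.

```python
# Sketch: Stieltjes procedure for a discrete measure sum_i w[i]*delta_{tau[i]},
# followed by Golub-Welsch to get Gauss nodes and weights. My conventions:
# beta[0] is the total mass of the measure.
import numpy as np

def stieltjes(tau, w, n):
    """Recurrence coefficients (alpha, beta) of the orthogonal polynomials."""
    alpha, beta = np.zeros(n), np.zeros(n)
    p_prev, p = np.zeros_like(tau), np.ones_like(tau)
    for k in range(n):
        nrm = np.sum(w * p * p)
        alpha[k] = np.sum(w * tau * p * p) / nrm
        beta[k] = nrm / np.sum(w * p_prev * p_prev) if k > 0 else np.sum(w)
        p_prev, p = p, (tau - alpha[k]) * p - beta[k] * p_prev
    return alpha, beta

def gauss_from_jacobi(alpha, beta):
    """Golub-Welsch: nodes are eigenvalues of the Jacobi matrix; weights come
    from the first components of its eigenvectors."""
    J = np.diag(alpha) + np.diag(np.sqrt(beta[1:]), 1) + np.diag(np.sqrt(beta[1:]), -1)
    nodes, V = np.linalg.eigh(J)
    return nodes, beta[0] * V[0] ** 2
```

A d-point rule built this way reproduces the moments of the discrete measure up to degree 2d - 1, the exactness the error analysis relies on.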
12. Generating orthogonal polynomials for a discrete measure: Fischer's method
For $\lambda = \sum_{i=1}^{N} \lambda_i \delta_{\tau_i}$ and the modified measure $\nu = \lambda + \gamma \delta_\tau$, the recurrence coefficients update as
$\alpha_i^\nu = \alpha_i + \frac{\gamma_i^2 P_i(\tau) P_{i+1}(\tau)}{1 + \sum_{j=0}^{i} \gamma_j^2 P_j^2(\tau)} - \frac{\gamma_{i-1}^2 P_i(\tau) P_{i-1}(\tau)}{1 + \sum_{j=0}^{i-1} \gamma_j^2 P_j^2(\tau)}$
$\beta_i^\nu = \beta_i \, \frac{\left[1 + \sum_{j=0}^{i-2} \gamma_j^2 P_j^2(\tau)\right]\left[1 + \sum_{j=0}^{i} \gamma_j^2 P_j^2(\tau)\right]}{\left[1 + \sum_{j=0}^{i-1} \gamma_j^2 P_j^2(\tau)\right]^2}$
13. Generating orthogonal polynomials for a discrete measure: the modified Chebyshev method
With modified moments $\nu_r = \int_\Omega p_r(\xi)\, d\lambda(\xi)$ and mixed moments $\sigma_{kl} = \int_\Omega P_k(\xi) p_l(\xi)\, d\lambda(\xi)$,
$\alpha_k = a_k + \frac{\sigma_{k,k+1}}{\sigma_{k,k}} - \frac{\sigma_{k-1,k}}{\sigma_{k-1,k-1}}, \qquad \beta_k = \frac{\sigma_{k,k}}{\sigma_{k-1,k-1}}$
14. Generating orthogonal polynomials for a discrete measure: Lanczos' method
For $\lambda(x) = \sum_{i=1}^{N} w_i \delta_{\tau_i}(x)$, the Lanczos algorithm unitarily transforms
$\begin{pmatrix} 1 & \sqrt{w_1} & \sqrt{w_2} & \dots & \sqrt{w_N} \\ \sqrt{w_1} & \tau_1 & 0 & \dots & 0 \\ \sqrt{w_2} & 0 & \tau_2 & \dots & 0 \\ \dots & \dots & \dots & \dots & \dots \\ \sqrt{w_N} & 0 & 0 & \dots & \tau_N \end{pmatrix} \;\longrightarrow\; \begin{pmatrix} 1 & \sqrt{\beta_0} & 0 & \dots & 0 \\ \sqrt{\beta_0} & \alpha_0 & \sqrt{\beta_1} & \dots & 0 \\ 0 & \sqrt{\beta_1} & \alpha_1 & \dots & 0 \\ \dots & \dots & \dots & \dots & \dots \\ 0 & 0 & 0 & \dots & \alpha_{N-1} \end{pmatrix}$
whose entries are the recurrence coefficients.
18.-19. 3. Numerical integration on discrete measures: test of the theorem on discrete measures using Genz functions in 1D
20. Sparse grids for discrete measures in higher dimensions
$A(k+N, N) = \sum_{k+1 \le |\mathbf{i}| \le k+N} (-1)^{k+N-|\mathbf{i}|} \binom{N-1}{k+N-|\mathbf{i}|} \left( U^{i_1} \otimes \dots \otimes U^{i_N} \right)$
('a finite difference method along dimensions')
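The combination formula can be exercised directly (my sketch; here $U^i$ is taken to be the i-point Gauss-Legendre rule, the continuous analogue of the construction, and N = 2):

```python
# My sketch of the Smolyak combination formula in 2D, with U^i instantiated
# as the i-point Gauss-Legendre rule on [-1, 1].
from itertools import product
from math import comb
import numpy as np

def smolyak_2d(f, k):
    N = 2
    total = 0.0
    for i in product(range(1, k + N + 1), repeat=N):
        q = i[0] + i[1]
        if not (k + 1 <= q <= k + N):
            continue  # only levels with k+1 <= |i| <= k+N contribute
        coef = (-1) ** (k + N - q) * comb(N - 1, k + N - q)
        x1, w1 = np.polynomial.legendre.leggauss(i[0])
        x2, w2 = np.polynomial.legendre.leggauss(i[1])
        total += coef * sum(wa * wb * f(a, b)
                            for a, wa in zip(x1, w1) for b, wb in zip(x2, w2))
    return total
```

The signed combination of small tensor-product rules is what keeps the point count from growing exponentially with the dimension, at the price of the alternating coefficients.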
21. 3. Numerical integration on discrete measures: test of the theorem on discrete measures using Genz functions in 2D, by sparse grid
22. Numerical example on the KdV equation
$u_t + 6uu_x + u_{xxx} = \xi, \quad x \in \mathbb{R},$
$u(x, 0) = \frac{a}{2} \mathrm{sech}^2\!\left( \frac{\sqrt{a}}{2} (x - x_0) \right).$
Exact moments:
$\langle u^m(x, T; \omega) \rangle = \int_{\mathbb{R}} d\rho(\xi) \left[ \frac{a}{2} \mathrm{sech}^2\!\left( \frac{\sqrt{a}}{2} (x - 3\xi T^2 - x_0 - aT) \right) + \xi T \right]^m$
Relative $L^2$ errors of the first two moments:
$L^2_{u^1} = \frac{\sqrt{\int dx\, (E[u_{num}(x, T; \omega)] - E[u_{ex}(x, T; \omega)])^2}}{\sqrt{\int dx\, (E[u_{ex}(x, T; \omega)])^2}}, \qquad L^2_{u^2} = \frac{\sqrt{\int dx\, (E[u^2_{num}(x, T; \omega)] - E[u^2_{ex}(x, T; \omega)])^2}}{\sqrt{\int dx\, (E[u^2_{ex}(x, T; \omega)])^2}}.$
25. MEPCM on an adapted mesh
Gauss quadratures are applied on an adapted partition of the $(\xi_1, \xi_2)$ sample space $\Omega$. Criterion: divide the integration domain so as to minimize the difference in variance (a 'local variance' criterion).
30. Future work before I graduate
1. represent a Lévy process by independent R.V.s and solve SPDEs w/ Lévy noise by MEPCM
2. try LDP on SPDEs w/ Lévy noise
3. try the Lévy-Sheffer system on SPDEs w/ Lévy noise
4. applications in mathematical finance
5. simulate the NS equations with jump processes
6. solve SPDEs w/ non-Gaussian processes
7. simulate the NS equations with non-Gaussian processes