I have made public for discussion a first version (subject to change) of a talk to be given at the Particles 2019 conference.
The main points of this presentation are the following:
1) We present sharp estimates, new to our knowledge, for Monte-Carlo-type methods.
2) These estimates can be used in a wide variety of contexts to perform a sharp error analysis.
3) We present a class of numerical methods that we refer to as Transported Meshfree Methods. This class of methods applies to a wide variety of problems based on partial differential equations, among which are artificial-intelligence problems.
4) Thanks to the error analysis, we can guarantee a worst-case error while computing with Transported Meshfree Methods, and we can check that this error matches the optimal convergence rate.
2. Local integration with kernel methods

Monte-Carlo estimates: consider the following error estimate ($\mu$ a probability measure, $Y = (y_1, \ldots, y_N) \in \mathbb{R}^{N \times D}$):

$$\Big| \int_{\mathbb{R}^D} \varphi(x)\, d\mu - \frac{1}{N} \sum_{n=1}^{N} \varphi(y_n) \Big| \le E(Y, \mathcal{H}_{K_\mu})\, \|\varphi\|_{\mathcal{H}_{K_\mu}}$$

1. Example 1: $Y$ i.i.d. $\Rightarrow$ $E(Y, \mathcal{H}_{K_\mu}) \sim \frac{1}{\sqrt{N}}$ and $\mathcal{H}_{K_\mu} \sim L^2(\mathbb{R}^D, |x|^2 d\mu)$ (stat.).
2. Example 2: $Y$ Sobol, $\mu = dx_\Omega$, $\Omega = [0, 1]^D$. Then $\mathcal{H}_K = BV(\Omega)$ (bounded variation), and $E(Y, \mathcal{H}_K) \ge \frac{\ln(N)^{D-1}}{N}$ (Koksma-Hlawka sharp estimate conjecture).
3. Other examples: quantizers, wavelets, deep feed-forward neural networks, ...
4. A general method: let $K(x, y)$ be an admissible kernel; then
$$E^2(Y, \mathcal{H}_{K_\mu}) = \int_{\mathbb{R}^{2D}} K(x, y)\, d\mu_x\, d\mu_y + \frac{1}{N^2} \sum_{n,m=1}^{N} K(y_n, y_m) - \frac{2}{N} \sum_{n=1}^{N} \int_{\mathbb{R}^D} K(x, y_n)\, d\mu_x$$
5. We can compute sharp discrepancy sequences and the optimal discrepancy error as
$$\overline{Y} = \arg\inf_{Y \in \mathbb{R}^{D \times N}} E(Y, \mathcal{H}_{K_\mu}), \qquad E_{\mathcal{H}_{K_\mu}}(N, D) = E(\overline{Y}, \mathcal{H}_{K_\mu})$$

P.G. LeFloch (CNRS) and J.M. Mercier (MPG-Partners), "Integration with Kernel Methods, Transported Meshfree Methods", 19/10/2019, slide 2/8.
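The squared-discrepancy formula on this slide can be evaluated directly. Below is a minimal 1-D sketch (my own illustration, not the authors' code): a Gaussian kernel with length scale $\ell = 0.3$ and $\mu = dx_{[0,1]}$, where the inner integral $\int_0^1 K(x, y)\,dx$ has a closed form via `erf` and the double integral is approximated by a trapezoid rule.

```python
import math
import random

def k_gauss(x, y, ell=0.3):
    """Gaussian kernel K(x, y) = exp(-|x - y|^2 / (2 ell^2))."""
    return math.exp(-(x - y) ** 2 / (2.0 * ell ** 2))

def k_mean(y, ell=0.3):
    """Closed form of int_0^1 K(x, y) dx for the Gaussian kernel."""
    c = ell * math.sqrt(math.pi / 2.0)
    s2 = ell * math.sqrt(2.0)
    return c * (math.erf((1.0 - y) / s2) + math.erf(y / s2))

def k_double_mean(ell=0.3, m=2000):
    """int_0^1 int_0^1 K dx dy, trapezoid rule over the closed-form inner integral."""
    h = 1.0 / m
    s = 0.5 * (k_mean(0.0, ell) + k_mean(1.0, ell))
    s += sum(k_mean(i * h, ell) for i in range(1, m))
    return s * h

def discrepancy(points, ell=0.3):
    """E(Y, H_K): worst-case quadrature error over the unit ball of H_K."""
    n = len(points)
    term_pp = sum(k_gauss(a, b, ell) for a in points for b in points) / n ** 2
    term_pm = sum(k_mean(y, ell) for y in points) / n
    e2 = k_double_mean(ell) + term_pp - 2.0 * term_pm
    return math.sqrt(max(e2, 0.0))  # guard against tiny negative round-off

rng = random.Random(0)
for n in (16, 256):
    errs = [discrepancy([rng.random() for _ in range(n)]) for _ in range(10)]
    print(n, sum(errs) / len(errs))
```

Averaged over independent draws, the printed values should decay roughly like $1/\sqrt{N}$, matching example 1 above.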
3. Our local kernels: lattice-based and transported kernels

Without loss of generality, consider $\mu = dx_{[0,1]^D}$. We use two kinds of kernels:

1. Lattice-based kernel: let $L$ be a lattice and $L^*$ its dual lattice. Consider any discrete function satisfying $\varphi(\alpha^*) \in \ell^1(L^*)$, $\varphi(\alpha^*) \ge 0$, $\varphi(0) = 1$:
$$K_{per}(x, y) = \frac{1}{|L|} \sum_{\alpha^* \in L^*} \varphi(\alpha^*) \exp\big( 2 i \pi \langle x - y, \alpha^* \rangle \big)$$
[Figure: kernel surfaces for the Matern, multiquadric, Gaussian, and truncated kernels.]

2. Transported kernel: let $S : \Omega \to \mathbb{R}^D$ be a transport map; then $K_{tra}(x, y) = K(S(x), S(y))$.
[Figure: transported kernel surfaces for the Matern, Gaussian, multiquadric, and truncated kernels.]
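A lattice-based kernel of this form can be assembled by truncating the dual-lattice sum. The sketch below is my own construction, with stated assumptions: $L = \mathbb{Z}$ in 1-D, Matern-type coefficients $\varphi$ borrowed from the Matern example later in the deck, truncation at $|\alpha^*| \le 200$, and $\varphi(0)$ set to 1 by hand to match the normalisation above. It also illustrates the transported kernel $K_{tra}(x, y) = K(S(x), S(y))$ for an arbitrary map $S$.

```python
import math

def phi(alpha, tau=3.0):
    """Matern-type Fourier coefficients: phi >= 0, with phi(0) = 1 forced by hand."""
    if alpha == 0:
        return 1.0
    return 2.0 / (1.0 + 4.0 * math.pi ** 2 * alpha ** 2 / tau ** 2)

def k_per(x, y, m=200, tau=3.0):
    """Truncated lattice kernel on [0, 1): sum_a phi(a) exp(2 i pi a (x - y)).
    phi is even, so the sum is real: phi(0) + 2 sum_{a>=1} phi(a) cos(2 pi a (x - y))."""
    d = x - y
    return phi(0, tau) + 2.0 * sum(
        phi(a, tau) * math.cos(2.0 * math.pi * a * d) for a in range(1, m + 1))

def k_tra(x, y, s, **kw):
    """Transported kernel K_tra(x, y) = K(S(x), S(y)) for a transport map S."""
    return k_per(s(x), s(y), **kw)

# The periodic kernel depends only on x - y modulo 1, and is symmetric:
print(k_per(0.2, 0.7), k_per(0.5, 1.0))
```

The periodicity check (both printed values coincide) is exactly what the Fourier representation guarantees: $K_{per}$ depends only on $x - y$ modulo the lattice.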
4. An example: Monte-Carlo integration with the Matern kernel

1. Kernel, and random and computed sequences $Y$; $N = 256$, $D = 2$.
[Figure: the Matern kernel surface; $N$ random points in $[0, 1]^2$; computed points for the lattice Matern kernel.]

2. Optimal discrepancy error, a Koksma-Hlawka type estimate:
$$E_{\mathcal{H}_K}(N, D) \sim \frac{\sum_{n > N} \varphi(\alpha^*_n)}{N} \sim \frac{\ln(N)^{D-1}}{N}, \qquad \varphi(\alpha) = \prod_{d=1}^{D} \frac{2}{1 + 4 \pi^2 \alpha_d^2 / \tau_D^2}$$

3. $E(Y, \mathcal{H}_K)$ for random $Y$, vs. computed $Y$, vs. the theoretical $E_{\mathcal{H}_K}(N, D)$:

Random:
        D=1    D=16   D=128
N=16    0.228  0.304  0.319
N=128   0.117  0.111  0.115
N=512   0.035  0.054  0.059

Computed:
        D=1    D=16   D=128
N=16    0.062  0.211  0.223
N=128   0.008  0.069  0.077
N=512   0.002  0.034  0.049

Theoretical:
        D=1    D=16   D=128
N=16    0.062  0.288  0.323
N=128   0.008  0.077  0.105
N=512   0.002  0.034  0.043
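The gap between the random and computed columns can be reproduced in miniature. As a stand-in for the computed sharp-discrepancy sequences (which require the optimisation of slide 2), the sketch below compares i.i.d. points against simple 1-D midpoint-lattice points for a smooth integrand; the integrand $\varphi(x) = x^2$ and $N = 128$ are my choices, not values from the slide.

```python
import math
import random

def mc_error(points, f=lambda x: x * x, exact=1.0 / 3.0):
    """|int_0^1 f dmu - (1/N) sum f(y_n)| for the empirical measure on `points`."""
    n = len(points)
    return abs(exact - sum(f(y) for y in points) / n)

n = 128
rng = random.Random(0)
random_pts = [rng.random() for _ in range(n)]
lattice_pts = [(i + 0.5) / n for i in range(n)]  # 1-D midpoint lattice

print("random :", mc_error(random_pts))
print("lattice:", mc_error(lattice_pts))
```

For smooth integrands the structured points are orders of magnitude more accurate at the same $N$, which is the message of the table above.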
5. Application: Transported Meshfree Methods (TMM)

If $\mu(t, x)$ is a solution to the non-linear hyperbolic-parabolic Fokker-Planck equation
$$\partial_t \mu - L\mu = 0, \qquad L\mu = \nabla \cdot (b \mu) + \nabla^2 : (A \mu), \qquad A := \frac{1}{2} \sigma \sigma^T$$

1. FORWARD: compute $\mu(t) \sim \frac{1}{N} \big( \delta_{y_1(t)} + \ldots + \delta_{y_N(t)} \big)$ as best-discrepancy sequences:
$$\Big| \int_{\mathbb{R}^D} \varphi(x)\, d\mu(t, x) - \frac{1}{N} \sum_{n=1}^{N} \varphi(y_n(t)) \Big| \le E\big(Y(t), \mathcal{H}_{K_{\mu(t,\cdot)}}\big)\, \|\varphi\|_{\mathcal{H}_{K_{\mu(t,\cdot)}}}$$
2. CHECK the optimal rate: $E\big(Y(t), \mathcal{H}_{K_{\mu(t,\cdot)}}\big) \sim E_{\mathcal{H}_{K_{\mu(t)}}}(N, D)$.
3. BACKWARD: interpret $t \mapsto y_n(t)$, $n = 1, \ldots, N$, as a moving, transported PDE grid, and solve the Kolmogorov equation on it. ERROR ESTIMATE:
$$\Big| \int_{\mathbb{R}^D} P(t, \cdot)\, d\mu(t, \cdot) - \frac{1}{N} \sum_{n=1}^{N} P(t, y_n(t)) \Big| \le E_{\mathcal{H}_{K_{\mu(t)}}}(N, D)\, \|P(t, \cdot)\|_{\mathcal{H}_{K_{\mu(t,\cdot)}}}$$
4. HYPERBOLIC CASE ($\sigma \equiv 0$): Lagrangian meshfree methods.
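As a rough illustration of the FORWARD step only, one can push $N$ points through the SDE whose law solves the Fokker-Planck equation, so that $\mu(t) \approx \frac{1}{N} \sum_n \delta_{y_n(t)}$. This is an i.i.d. particle / Euler-Maruyama sketch, not the authors' sharp-discrepancy transport; the Ornstein-Uhlenbeck test case and all parameters below are my choices.

```python
import math
import random

def forward_particles(drift, sigma, n=4000, t_end=5.0, dt=0.01, seed=0):
    """FORWARD step of a particle scheme: push N points through the SDE
    dY = b(Y) dt + sigma(Y) dW, so that mu(t) ~ (1/N) sum_n delta_{y_n(t)}."""
    rng = random.Random(seed)
    ys = [rng.gauss(0.0, 1.0) for _ in range(n)]  # sample the initial law mu(0)
    for _ in range(int(t_end / dt)):
        ys = [y + drift(y) * dt + sigma(y) * math.sqrt(dt) * rng.gauss(0.0, 1.0)
              for y in ys]
    return ys

# Ornstein-Uhlenbeck test case: dY = -Y dt + dW, stationary law N(0, 1/2).
ys = forward_particles(lambda y: -y, lambda y: 1.0)
mean = sum(ys) / len(ys)
var = sum((y - mean) ** 2 for y in ys) / len(ys)
print(mean, var)
```

For the OU case the empirical variance of the particle cloud should settle near the stationary value $\sigma^2 / (2\theta) = 1/2$, up to Monte-Carlo and time-stepping error.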
6. Illustration: the 2D SABR process, widely used in finance

SABR process:
$$d \begin{pmatrix} F_t \\ \alpha_t \end{pmatrix} = \rho \begin{pmatrix} \alpha_t F_t^{\beta} & 0 \\ 0 & \nu \alpha_t \end{pmatrix} \begin{pmatrix} dW_t^1 \\ dW_t^2 \end{pmatrix}, \qquad 0 \le \beta \le 1, \quad \nu \ge 0, \quad \rho \in \mathbb{R}^{2 \times 2}.$$
The Fokker-Planck equation associated to SABR is
$$\partial_t \mu + L^* \mu = 0, \qquad L^* \mu = \nabla^2 : \Big( \frac{1}{2}\, \rho \begin{pmatrix} x_2^2 x_1^{2\beta} & 0 \\ 0 & \nu^2 x_2^2 \end{pmatrix} \rho^T \mu \Big).$$
[Animation: SABR200.]
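The SABR dynamics can be simulated directly with Euler-Maruyama. The sketch below is my own illustration, not part of the talk: it uses the usual scalar-correlation parameterisation (a single $\rho \in [-1, 1]$ correlating the two Brownian motions) rather than the slide's matrix $\rho$, an exact log-normal step for $\alpha_t$, absorption of $F_t$ at zero, and hypothetical parameter values.

```python
import math
import random

def sabr_paths(f0=1.0, a0=0.3, beta=0.5, nu=0.4, rho=-0.3,
               t_end=1.0, steps=100, n_paths=2000, seed=0):
    """Euler-Maruyama for dF = alpha F^beta dW1, dalpha = nu alpha dW2,
    with corr(dW1, dW2) = rho; F is absorbed at zero."""
    rng = random.Random(seed)
    dt = t_end / steps
    sq = math.sqrt(dt)
    out = []
    for _ in range(n_paths):
        f, a = f0, a0
        for _ in range(steps):
            z1 = rng.gauss(0.0, 1.0)
            z2 = rho * z1 + math.sqrt(1.0 - rho ** 2) * rng.gauss(0.0, 1.0)
            if f > 0.0:
                f = max(f + a * f ** beta * sq * z1, 0.0)  # absorb at 0
            a = a * math.exp(nu * sq * z2 - 0.5 * nu ** 2 * dt)  # exact GBM step
        out.append((f, a))
    return out

paths = sabr_paths()
mean_f = sum(f for f, _ in paths) / len(paths)
mean_a = sum(a for _, a in paths) / len(paths)
print(mean_f, mean_a)
```

Both $F_t$ and $\alpha_t$ are driftless here, so the sample means should stay close to $F_0$ and $\alpha_0$ up to Monte-Carlo error, which gives a cheap sanity check on the scheme.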
7. Academic tests, business cases, curse of dimensionality

1. Academic tests: finance, non-linear hyperbolic systems
   1. Revisiting the method of characteristics via a convex hull algorithm
   2. Numerical results using CoDeFi
   3. A new method for solving Kolmogorov equations in mathematical finance
2. Business cases: finance
   1. Hedging Strategies for Net Interest Income and Economic Values of Equity (http://dx.doi.org/10.2139/ssrn.3454813, with S. Miryusupov)
   2. Computing metrics for a big portfolio of Autocalls depending on several underlyings (unpublished)
3. CURSE of dimensionality in finance: price and manage a complex option written on several underlyings. The result? We can solve to any order of accuracy:
$$\Big| \int_{\mathbb{R}^D} P(t, \cdot)\, d\mu(t, \cdot) - \frac{1}{N} \sum_{n=1}^{N} P(t, y_n(t)) \Big| \le \frac{\|P(t, \cdot)\|_{\mathcal{H}_{K_{\mu(t,\cdot)}}}}{N^\alpha},$$
where $\alpha \ge 1/2$ is ANY number: choose it according to your desired electricity bill! But beware of low-regularity problems if the dimension $D$ is too big (e.g. American-type options or Autocalls).
8. Summary and Conclusions

We presented in this talk:
1. New, sharp estimates for Monte-Carlo methods...
2. ...that can be used in a wide variety of contexts to perform a sharp error analysis.
3. A new method for the numerical simulation of PDEs: Transported Meshfree Methods...
4. ...that can be used in a wide variety of applications (hyperbolic / parabolic equations, artificial intelligence, etc.)...
5. ...for which the error analysis applies: we can guarantee a worst-case error estimate, and we can check that this error matches the optimal convergence rate.