A computational framework based on Transported Meshfree methods

P.G. LeFloch (CNRS), J.M. Mercier (MPG-Partners)

16 01 2020
Foundations: local integration with Monte-Carlo methods

Monte-Carlo estimates. Consider the following family of (worst-case) error estimates, where $\mu$ is a probability measure and $Y = (y^1, \ldots, y^N) \in \mathbb{R}^{N \times D}$:

$$\Big| \int_{\mathbb{R}^D} \varphi(x) \, d\mu - \frac{1}{N} \sum_{n=1}^{N} \varphi(y^n) \Big| \le E(Y, \mathcal{H}_\mu) \, \|\varphi\|_{\mathcal{H}_\mu},$$

where $\mathcal{H}_\mu$ is a $\mu$-weighted Hilbert (or Banach) functional space.

1. Classical example 1: $Y$ i.i.d. $\rightarrow$ $E(Y, \mathcal{H}_\mu) \sim \frac{1}{\sqrt{N}}$ and $\mathcal{H}_\mu \sim L^2(\mathbb{R}^D, |x|^2 d\mu)$ (statistics: law of large numbers). This is the most-used convergence rate in the finance industry (a numerical check follows below).
2. Classical example 2: $Y$ a Sobol sequence, $\mu = dx_\Omega$, $\Omega = [0, 1]^D$. Then $\mathcal{H}_K = BV(\Omega)$ (bounded variation) and $E(Y, \mathcal{H}_K) \ge \frac{\ln(N)^{D-1}}{N}$ (Koksma-Hlawka sharp estimate conjecture).
3. Other examples: quantizers, wavelets, deep feed-forward neural networks, ...
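A minimal numerical check of example 1, using our own toy integrand and sample sizes (illustrative assumptions; any smooth $\varphi$ behaves similarly):

```python
import numpy as np

rng = np.random.default_rng(0)
D = 4
phi = lambda x: np.cos(x.sum(axis=-1))               # toy integrand on [0, 1]^D
exact = (2.0 * np.sin(0.5)) ** D * np.cos(D / 2.0)   # its integral, in closed form

for N in [2**k for k in range(6, 15, 2)]:
    # Average the integration error over 50 independent i.i.d. draws of Y.
    errs = [abs(phi(rng.uniform(size=(N, D))).mean() - exact) for _ in range(50)]
    print(f"N={N:6d}  error={np.mean(errs):.2e}  sqrt(N)*error={np.sqrt(N) * np.mean(errs):.3f}")
# The last column stabilizes: the error decays like 1/sqrt(N), as in example 1.
```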
A general approach using kernel methods

1. You have a problem involving a probability measure $\mu$, and you guess that the solution belongs to a weighted functional space $\mathcal{H}_\mu$.
2. Identify the admissible kernel $K(x, y)$ generating it (RKHS theory): $\mathcal{H}_\mu \equiv \mathcal{H}_K$. Examples of classically used kernels: ReLU, convolutional kernels, Wendland functions, ...
3. Pick (i.i.d.) samples $y^1, \ldots, y^N$. Then you can measure your integration error using

$$\Big| \int_{\mathbb{R}^D} \varphi(x) \, d\mu - \frac{1}{N} \sum_{n=1}^{N} \varphi(y^n) \Big| \le E(Y, \mathcal{H}_K) \, \|\varphi\|_{\mathcal{H}_K},$$

where

$$E^2(Y, \mathcal{H}_K) = \int_{\mathbb{R}^{2D}} K(x, y) \, d\mu(x) d\mu(y) + \frac{1}{N^2} \sum_{n,m=1}^{N} K(y^n, y^m) - \frac{2}{N} \sum_{n=1}^{N} \int_{\mathbb{R}^D} K(x, y^n) \, d\mu(x).$$

4. You can optimize your error by computing sharp discrepancy sequences and the optimal discrepancy error (see the sketch after this list):

$$\overline{Y} = \arg \inf_{Y \in \mathbb{R}^{D \times N}} E(Y, \mathcal{H}_K), \qquad E_{\mathcal{H}_K}(N, D) = E(\overline{Y}, \mathcal{H}_K).$$
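A minimal 1D sketch of steps 3-4, assuming the Brownian-motion kernel $K(x, y) = \min(x, y)$ and $\mu$ uniform on $[0, 1]$ (our own choices, made so that both integrals in $E^2(Y, \mathcal{H}_K)$ are closed-form; the slides use Matérn and lattice kernels instead):

```python
import numpy as np

def disc_sq(y):
    # E^2(Y, H_K) for K(x, y) = min(x, y) and mu = U[0, 1]:
    # int int K dmu dmu = 1/3 and int K(x, y) dmu(x) = y - y^2/2, both exact.
    gram = np.minimum.outer(y, y).mean()
    cross = np.mean(y - 0.5 * y**2)
    return 1.0 / 3.0 + gram - 2.0 * cross

def disc_sq_grad(y):
    # Analytic gradient; the Gram term differentiates through min(y_n, y_m).
    n = len(y)
    larger = (y[None, :] > y[:, None]).sum(axis=1)   # count of m != n with y_m > y_n
    return (2.0 * larger + 1.0) / n**2 - (2.0 / n) * (1.0 - y)

rng = np.random.default_rng(0)
y = rng.uniform(size=32)                             # random start: E(Y, H_K) ~ 1/sqrt(N)
print("random    E =", np.sqrt(disc_sq(y)))
for _ in range(5000):                                # plain projected gradient descent
    y = np.clip(y - 0.5 * disc_sq_grad(y), 0.0, 1.0)
print("optimized E =", np.sqrt(disc_sq(y)))          # a sharp discrepancy sequence
```

The optimized points spread out into a low-discrepancy configuration, and the optimized error drops well below the random one, which is the whole point of step 4.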
Our local kernels: lattice-based and transported kernels

For our purposes, we crafted two kinds of kernels:

1. Lattice-based kernels (suited to Lebesgue-type measures $\mu = dx_\Omega$). Let $L$ be a lattice and $L^*$ its dual lattice. Consider any discrete function satisfying $\varphi(\alpha^*) \in \ell^1(L^*)$, $\varphi(\alpha^*) \ge 0$, $\varphi(0) = 1$, and define

$$K_{per}(x, y) = \frac{1}{|L|} \sum_{\alpha^* \in L^*} \varphi(\alpha^*) \exp\big( 2 i \pi \langle x - y, \alpha^* \rangle \big).$$

[Figure: surface plots of four lattice-based kernels: Matérn, multiquadric, Gaussian, truncated.]

2. Transported kernels. Let $S : \Omega \rightarrow \mathbb{R}^D$ be a transport map, and set $K_{tra}(x, y) = K(S(x), S(y))$ (a sketch follows below).

[Figure: surface plots of the corresponding transported kernels: Matérn, Gaussian, multiquadric, truncated.]
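A minimal sketch of a transported kernel, assuming a Gaussian base kernel $K$ and the inverse normal CDF as transport map $S$ (both our own illustrative choices; any base kernel and map fit the definition):

```python
import numpy as np
from scipy.special import ndtri  # inverse CDF of the standard normal

def gaussian_kernel(x, y, tau=1.0):
    # Base kernel on R^D: K(x, y) = exp(-|x - y|^2 / (2 tau^2)).
    d2 = ((x[:, None, :] - y[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * tau**2))

def transported_kernel(x, y, S, base=gaussian_kernel):
    # K_tra(x, y) = K(S(x), S(y)) for a transport map S: Omega -> R^D.
    return base(S(x), S(y))

# Illustrative map: S = Phi^{-1} (componentwise) pushes the uniform measure on
# Omega = (0, 1)^D forward to the standard Gaussian measure on R^D.
S = lambda u: ndtri(np.clip(u, 1e-12, 1.0 - 1e-12))

u = np.random.default_rng(1).uniform(size=(5, 2))
print(transported_kernel(u, u, S))  # 5 x 5 Gram matrix, symmetric positive semi-definite
```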
Example I: Monte-Carlo integration with the Matérn kernel

1. Kernel, random and computed sequences $Y$ ($N = 256$, $D = 2$).

[Figure: the Matérn kernel surface; 256 random points vs. 256 computed points for the lattice Matérn kernel on $[0, 1]^2$.]

2. Optimal discrepancy error $\rightarrow$ Koksma-Hlawka-type estimate (a direct evaluation sketch of this kernel follows below):

$$E_{\mathcal{H}_K}(N, D) \sim \frac{\sum_{n > N} \varphi(\alpha^{*,n})}{N} \sim \frac{\ln(N)^{D-1}}{N}, \qquad \varphi(\alpha) = \prod_{d=1}^{D} \frac{2}{1 + 4 \pi^2 \alpha_d^2 / \tau_D^2}.$$

3. $E(Y, \mathcal{H}_K)$ for random $Y$ (first table) vs. computed $Y$ (second) vs. the theoretical $E_{\mathcal{H}_K}(N, D)$ (third):

Random Y:
          D=1      D=16     D=128
N=16      0.228    0.304    0.319
N=128     0.117    0.111    0.115
N=512     0.035    0.054    0.059

Computed Y:
          D=1      D=16     D=128
N=16      0.062    0.211    0.223
N=128     0.008    0.069    0.077
N=512     0.002    0.034    0.049

Theoretical:
          D=1      D=16     D=128
N=16      0.062    0.288    0.323
N=128     0.008    0.077    0.105
N=512     0.002    0.034    0.043
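A minimal sketch of evaluating this lattice Matérn kernel by truncating its Fourier sum, assuming $L = \mathbb{Z}^D$ (so $|L| = 1$) and our own truncation radius and $\tau$ (the authors' fast evaluation scheme is not described in the slides):

```python
import numpy as np
from itertools import product

def matern_lattice_kernel(x, y, tau=5.0, cutoff=20):
    # K_per(x, y) = sum over alpha in the dual lattice Z^D of
    # phi(alpha) * exp(2 i pi <x - y, alpha>), truncated to |alpha_d| <= cutoff,
    # with phi the tensor-product Matern spectrum from the estimate above.
    diff = np.asarray(x, dtype=float) - np.asarray(y, dtype=float)
    D = diff.size
    val = 0.0
    for alpha in product(range(-cutoff, cutoff + 1), repeat=D):
        a = np.array(alpha, dtype=float)
        phi = np.prod(2.0 / (1.0 + 4.0 * np.pi**2 * a**2 / tau**2))
        val += phi * np.cos(2.0 * np.pi * diff @ a)  # the sum is real by symmetry
    return val

print(matern_lattice_kernel([0.1, 0.2], [0.3, 0.7]))  # one kernel evaluation on [0, 1]^2
```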
Application: Machine Learning

1. Setting: consider a set of observations

$$(y^1, P^1), \ldots, (y^N, P^N) \in \mathbb{R}^{D \times M \times N}.$$

2. Interpolation: pick a kernel $K(x, y)$, denote by $\mathcal{H}_K$ its native space, and consider a continuous function $P(y)$ such that

$$\langle P, \delta_{y^n} \rangle = P(y^n) \sim P^n.$$

One can further optimize by computing $Y$ ($\sim$ learning). A minimal interpolation sketch follows below.

3. Extrapolation: then one can extrapolate with the error bound

$$\Big| \int_{\mathbb{R}^D} P(x) \, d\mu - \frac{1}{N} \sum_{n=1}^{N} P(y^n) \Big| \le E_{\mathcal{H}_K}(N, D) \, \|P\|_{\mathcal{H}_K}, \qquad \text{i.e. } \mu \sim \frac{1}{N} \sum_{n=1}^{N} \delta_{y^n}.$$

4. Here are two very similar applications:
   1. $(y^1, P^1), \ldots, (y^N, P^N)$ are prices and implied volatilities (e.g. call options under the SABR model): pricing.
   2. $(y^1, P^1), \ldots, (y^N, P^N)$ are pictures of dogs and cats: a classifier.
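A minimal sketch of step 2, assuming plain kernel interpolation with a Gaussian kernel and synthetic 1D data (our own illustrative choices; the point-optimization "learning" step is omitted):

```python
import numpy as np

def gauss(x, y, tau=0.3):
    # Gaussian kernel Gram matrix between two point sets x and y.
    return np.exp(-(x[:, None] - y[None, :]) ** 2 / (2.0 * tau**2))

# Synthetic observations (y_n, P_n): noisy samples of a smooth function.
rng = np.random.default_rng(2)
yn = np.sort(rng.uniform(-1.0, 1.0, size=30))
Pn = np.sin(3.0 * yn) + 0.05 * rng.normal(size=30)

# Interpolant P(x) = sum_n w_n K(x, y_n); a tiny ridge term keeps the solve stable.
w = np.linalg.solve(gauss(yn, yn) + 1e-8 * np.eye(len(yn)), Pn)
P = lambda x: gauss(np.atleast_1d(np.asarray(x, dtype=float)), yn) @ w

print(P(yn[:3]), Pn[:3])   # P(y_n) ~ P_n, as required by the interpolation condition
print(P(0.123))            # evaluation at a new point
```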
Application to time-dependent PDEs

[Embedded animation: Navier-Stokes simulation.]

1. Consider a time-dependent probability measure $\mu(t, x)$ and a kernel $K_t(x, y)$. We can define sharp discrepancy sequences $t \mapsto \big( y^1(t), \ldots, y^N(t) \big)$.
2. For PDEs, we can try to compute these sequences. For instance, consider the Navier-Stokes equations (a hyperbolic system)

$$\partial_t \mu + \nabla \cdot (\mu v) = 0, \qquad \partial_t (\mu v) + \nabla \cdot (\mu v \otimes v) = -\nabla p + \nabla \cdot (\mu \Sigma),$$
$$\nabla \cdot v = 0 \quad \text{(or energy conservation for non-Newtonian fluids)},$$

together with Dirichlet / Neumann boundary conditions. We obtain a numerical scheme sharing some similarities with SPH (smoothed particle hydrodynamics): these are LAGRANGIAN MESHFREE METHODS. A minimal advection sketch follows below.
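A minimal sketch of the Lagrangian viewpoint only, assuming a prescribed divergence-free velocity field (our own toy field; the actual scheme couples the particles to the discretized momentum equation, which is not reproduced here):

```python
import numpy as np

def v(x):
    # Prescribed divergence-free toy velocity field on [0, 1]^2 (a single vortex).
    vx = -np.sin(np.pi * x[:, 0]) * np.cos(np.pi * x[:, 1])
    vy = np.cos(np.pi * x[:, 0]) * np.sin(np.pi * x[:, 1])
    return np.stack([vx, vy], axis=1)

# Lagrangian meshfree grid: the points y_n(t) are simply advected, dy_n/dt = v(y_n).
rng = np.random.default_rng(3)
y = rng.uniform(size=(256, 2))       # uniform points stand in for a sharp discrepancy set
dt, T = 0.01, 1.0
for _ in range(int(T / dt)):         # explicit Euler time stepping
    y = y + dt * v(y)
print(y.min(axis=0), y.max(axis=0))  # the flow is tangent at the boundary: points stay inside
```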
Application to industrial Finance

Consider $\mu(t, x)$, a solution to a non-linear hyperbolic-parabolic Fokker-Planck equation

$$\partial_t \mu - \mathcal{L} \mu = 0, \qquad \mathcal{L} \mu = \nabla \cdot (b \mu) + \nabla^2 \cdot (A \mu), \qquad A := \frac{1}{2} \sigma \sigma^T.$$

1. FORWARD: compute $\mu(t) \sim \frac{1}{N} \big( \delta_{y^1(t)} + \ldots + \delta_{y^N(t)} \big)$ as sharp discrepancy sequences (a particle sketch follows below):

$$\Big| \int_{\mathbb{R}^D} \varphi(x) \, d\mu(t, x) - \frac{1}{N} \sum_{n=1}^{N} \varphi(y^n(t)) \Big| \le E\big( Y(t), \mathcal{H}_{K_t} \big) \, \|\varphi\|_{\mathcal{H}_{K_t}}.$$

2. CHECK the optimal rate: $E\big( Y(t), \mathcal{H}_{K_t} \big) \sim E_{\mathcal{H}_{K_t}}(N, D)$.
3. BACKWARD: interpret $t \mapsto y^n(t)$, $n = 1, \ldots, N$, as a moving, transported PDE grid (a TREE). Solve the Kolmogorov equation on it. ERROR ESTIMATE:

$$\Big| \int_{\mathbb{R}^D} P(t, \cdot) \, d\mu(t, \cdot) - \frac{1}{N} \sum_{n=1}^{N} P(t, y^n(t)) \Big| \le E_{\mathcal{H}_{K_t}}(N, D) \, \|P(t, \cdot)\|_{\mathcal{H}_{K_t}}.$$
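A minimal sketch of the FORWARD step, assuming a plain Euler-Maruyama particle approximation of $\mu(t)$ for an Ornstein-Uhlenbeck drift and diffusion (our own stand-in: the authors' scheme moves sharp discrepancy points, not i.i.d. particles):

```python
import numpy as np

# SDE dX = b(X) dt + sigma dW with b(x) = -x and sigma = I (Ornstein-Uhlenbeck);
# its law mu(t) is approximated by the empirical measure of N particles.
rng = np.random.default_rng(4)
N, D, dt, T = 4096, 3, 0.01, 2.0
y = rng.normal(scale=2.0, size=(N, D))        # particles sampling mu(0)

for _ in range(int(T / dt)):                  # Euler-Maruyama step for every particle
    y = y - y * dt + np.sqrt(dt) * rng.normal(size=(N, D))

# Integrate a test function against mu(T) ~ (1/N) sum_n delta_{y_n(T)}.
phi = lambda x: (x**2).sum(axis=1)
exact = D * (0.5 + (4.0 - 0.5) * np.exp(-2.0 * T))   # closed-form E|X_T|^2 for this SDE
print(phi(y).mean(), "~", exact)
```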
Illustration: the 2D SABR process, widely used in Finance

The SABR process reads

$$d \begin{pmatrix} F_t \\ \alpha_t \end{pmatrix} = \rho \begin{pmatrix} \alpha_t F_t^{\beta} & 0 \\ 0 & \nu \alpha_t \end{pmatrix} \begin{pmatrix} dW_t^1 \\ dW_t^2 \end{pmatrix}, \qquad \text{with } 0 \le \beta \le 1, \ \nu \ge 0, \ \rho \in \mathbb{R}^{2 \times 2}.$$

The Fokker-Planck equation associated to SABR is

$$\partial_t \mu + \mathcal{L}^* \mu = 0, \qquad \mathcal{L}^* \mu = \rho \begin{pmatrix} \frac{x_2^2 x_1^{2\beta}}{2} & 0 \\ 0 & \frac{\nu^2 x_2^2}{2} \end{pmatrix} \rho^T \cdot \nabla^2 \mu.$$

A simulation sketch follows below.

[Embedded animation: SABR simulation with 200 points.]
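A minimal sketch of simulating the SABR system by Euler-Maruyama with correlated Brownian increments (parameter values are our own, chosen for illustration):

```python
import numpy as np

# SABR: dF = alpha * F^beta dW1, dalpha = nu * alpha dW2, with corr(dW1, dW2) = r.
beta, nu, r = 0.7, 0.4, -0.5
F0, a0, T, steps, N = 1.0, 0.2, 1.0, 200, 10000
dt = T / steps
chol = np.linalg.cholesky(np.array([[1.0, r], [r, 1.0]]))  # plays the role of rho above

rng = np.random.default_rng(5)
F = np.full(N, F0)
a = np.full(N, a0)
for _ in range(steps):
    dW = rng.normal(size=(N, 2)) @ chol.T * np.sqrt(dt)    # correlated increments
    F = np.maximum(F + a * np.maximum(F, 0.0)**beta * dW[:, 0], 0.0)  # absorb F at 0
    a = a * np.exp(nu * dW[:, 1] - 0.5 * nu**2 * dt)       # exact log-normal vol step

print("E[F_T] ~", F.mean(), "(F is a martingale, so this should stay near F0 =", F0, ")")
```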
The curse of dimensionality

The CURSE of dimensionality in finance: price and manage a complex option written on several underlyings.

1. Step 1: compute a measure solution $\mu(t, x)$ to a Fokker-Planck equation in large dimension, and calibrate it.
2. Step 2: solve a Kolmogorov equation backward in large dimension; denote its solution $P(t, x)$.
3. Step 3: compute various metrics on the solution, for instance VaR or XVA for regulatory purposes, or future deltas / gammas / implied vols for hedging purposes.
4. Result? We can compute the solution $P(t, x)$ at any order of accuracy:

$$\Big| \int_{\mathbb{R}^D} P(t, \cdot) \, d\mu(t, \cdot) - \frac{1}{N} \sum_{n=1}^{N} P(t, y^n(t)) \Big| \le \frac{\|P(t, \cdot)\|_{\mathcal{H}_K}}{N^{\alpha}},$$

where $\alpha \ge 1/2$ is any number... Choose it according to your desired electricity bill! But beware of smoothing effects in high dimensions: $\mathcal{H}_K$ contains less information as the dimension rises. Some problems, for instance optimal stopping problems, are intrinsically cursed.
Academic tests, business cases

1. Academic works: finance, non-linear hyperbolic systems.
   1. Revisiting the method of characteristics via a convex hull algorithm: explicit solutions to high-dimensional conservation laws with non-convex fluxes.
   2. Numerical results using CoDeFi: a benchmark of TMM methods for classical pricing problems.
2. Business cases (done):
   1. Hedging Strategies for Net Interest Income and Economic Values of Equity (http://dx.doi.org/10.2139/ssrn.3454813, with S. Miryusupov).
   2. Computing metrics for a large portfolio of Autocalls depending on several underlyings (unpublished).
3. Work in progress:
   1. McKean-Vlasov equations (stochastic volatility modeling).
   2. ISDA Standard Initial Margin: XVA computations based on sensitivities (delta, vega, gamma, ...).
   3. IBOR to RFR rates transition à la Lyashenko-Mercurio.
   4. Strategies for liquidity risk: Hamilton-Jacobi-Bellman equations in high dimensions.
Summary and Conclusions

We presented in this talk:

1. New, sharp error estimates for Monte-Carlo methods...
2. ...that can be used in a wide variety of contexts to perform a sharp error analysis.
3. A new method for the numerical simulation of PDEs: Transported Meshfree Methods...
4. ...that can be used in a wide variety of applications (hyperbolic / parabolic equations, artificial intelligence, etc.)...
5. ...for which the error analysis applies: we can guarantee a worst-case error estimate, and we can check that this error matches an optimal convergence rate.
6. ...Thus we can argue that our numerical methods reach nearly optimal algorithmic complexity.

More Related Content

What's hot

Polynomial Matrix Decompositions
Polynomial Matrix DecompositionsPolynomial Matrix Decompositions
Polynomial Matrix Decompositions
Förderverein Technische Fakultät
 
بررسی دو روش شناسایی سیستم های متغیر با زمان به همراه شبیه سازی و گزارش
بررسی دو روش شناسایی سیستم های متغیر با زمان به همراه شبیه سازی و گزارشبررسی دو روش شناسایی سیستم های متغیر با زمان به همراه شبیه سازی و گزارش
بررسی دو روش شناسایی سیستم های متغیر با زمان به همراه شبیه سازی و گزارش
پروژه مارکت
 
Estimating Space-Time Covariance from Finite Sample Sets
Estimating Space-Time Covariance from Finite Sample SetsEstimating Space-Time Covariance from Finite Sample Sets
Estimating Space-Time Covariance from Finite Sample Sets
Förderverein Technische Fakultät
 
Dictionary Learning for Massive Matrix Factorization
Dictionary Learning for Massive Matrix FactorizationDictionary Learning for Massive Matrix Factorization
Dictionary Learning for Massive Matrix Factorization
recsysfr
 
Differential analyses of structures in HiC data
Differential analyses of structures in HiC dataDifferential analyses of structures in HiC data
Differential analyses of structures in HiC data
tuxette
 
CSC446: Pattern Recognition (LN8)
CSC446: Pattern Recognition (LN8)CSC446: Pattern Recognition (LN8)
CSC446: Pattern Recognition (LN8)
Mostafa G. M. Mostafa
 
ICPR 2012
ICPR 2012ICPR 2012
ICPR 2012
BOUWMANS Thierry
 
Least squares support Vector Machine Classifier
Least squares support Vector Machine ClassifierLeast squares support Vector Machine Classifier
Least squares support Vector Machine Classifier
Raj Sikarwar
 
Habilitation à diriger des recherches
Habilitation à diriger des recherchesHabilitation à diriger des recherches
Habilitation à diriger des recherches
Pierre Pudlo
 
Data-Driven Recommender Systems
Data-Driven Recommender SystemsData-Driven Recommender Systems
Data-Driven Recommender Systems
recsysfr
 
Machine Learning and Statistical Analysis
Machine Learning and Statistical AnalysisMachine Learning and Statistical Analysis
Machine Learning and Statistical Analysisbutest
 
cvpr2009 tutorial: kernel methods in computer vision: part II: Statistics and...
cvpr2009 tutorial: kernel methods in computer vision: part II: Statistics and...cvpr2009 tutorial: kernel methods in computer vision: part II: Statistics and...
cvpr2009 tutorial: kernel methods in computer vision: part II: Statistics and...zukun
 
Hyperparameter optimization with approximate gradient
Hyperparameter optimization with approximate gradientHyperparameter optimization with approximate gradient
Hyperparameter optimization with approximate gradient
Fabian Pedregosa
 
Robust Image Denoising in RKHS via Orthogonal Matching Pursuit
Robust Image Denoising in RKHS via Orthogonal Matching PursuitRobust Image Denoising in RKHS via Orthogonal Matching Pursuit
Robust Image Denoising in RKHS via Orthogonal Matching Pursuit
Pantelis Bouboulis
 
QMC Program: Trends and Advances in Monte Carlo Sampling Algorithms Workshop,...
QMC Program: Trends and Advances in Monte Carlo Sampling Algorithms Workshop,...QMC Program: Trends and Advances in Monte Carlo Sampling Algorithms Workshop,...
QMC Program: Trends and Advances in Monte Carlo Sampling Algorithms Workshop,...
The Statistical and Applied Mathematical Sciences Institute
 

What's hot (15)

Polynomial Matrix Decompositions
Polynomial Matrix DecompositionsPolynomial Matrix Decompositions
Polynomial Matrix Decompositions
 
بررسی دو روش شناسایی سیستم های متغیر با زمان به همراه شبیه سازی و گزارش
بررسی دو روش شناسایی سیستم های متغیر با زمان به همراه شبیه سازی و گزارشبررسی دو روش شناسایی سیستم های متغیر با زمان به همراه شبیه سازی و گزارش
بررسی دو روش شناسایی سیستم های متغیر با زمان به همراه شبیه سازی و گزارش
 
Estimating Space-Time Covariance from Finite Sample Sets
Estimating Space-Time Covariance from Finite Sample SetsEstimating Space-Time Covariance from Finite Sample Sets
Estimating Space-Time Covariance from Finite Sample Sets
 
Dictionary Learning for Massive Matrix Factorization
Dictionary Learning for Massive Matrix FactorizationDictionary Learning for Massive Matrix Factorization
Dictionary Learning for Massive Matrix Factorization
 
Differential analyses of structures in HiC data
Differential analyses of structures in HiC dataDifferential analyses of structures in HiC data
Differential analyses of structures in HiC data
 
CSC446: Pattern Recognition (LN8)
CSC446: Pattern Recognition (LN8)CSC446: Pattern Recognition (LN8)
CSC446: Pattern Recognition (LN8)
 
ICPR 2012
ICPR 2012ICPR 2012
ICPR 2012
 
Least squares support Vector Machine Classifier
Least squares support Vector Machine ClassifierLeast squares support Vector Machine Classifier
Least squares support Vector Machine Classifier
 
Habilitation à diriger des recherches
Habilitation à diriger des recherchesHabilitation à diriger des recherches
Habilitation à diriger des recherches
 
Data-Driven Recommender Systems
Data-Driven Recommender SystemsData-Driven Recommender Systems
Data-Driven Recommender Systems
 
Machine Learning and Statistical Analysis
Machine Learning and Statistical AnalysisMachine Learning and Statistical Analysis
Machine Learning and Statistical Analysis
 
cvpr2009 tutorial: kernel methods in computer vision: part II: Statistics and...
cvpr2009 tutorial: kernel methods in computer vision: part II: Statistics and...cvpr2009 tutorial: kernel methods in computer vision: part II: Statistics and...
cvpr2009 tutorial: kernel methods in computer vision: part II: Statistics and...
 
Hyperparameter optimization with approximate gradient
Hyperparameter optimization with approximate gradientHyperparameter optimization with approximate gradient
Hyperparameter optimization with approximate gradient
 
Robust Image Denoising in RKHS via Orthogonal Matching Pursuit
Robust Image Denoising in RKHS via Orthogonal Matching PursuitRobust Image Denoising in RKHS via Orthogonal Matching Pursuit
Robust Image Denoising in RKHS via Orthogonal Matching Pursuit
 
QMC Program: Trends and Advances in Monte Carlo Sampling Algorithms Workshop,...
QMC Program: Trends and Advances in Monte Carlo Sampling Algorithms Workshop,...QMC Program: Trends and Advances in Monte Carlo Sampling Algorithms Workshop,...
QMC Program: Trends and Advances in Monte Carlo Sampling Algorithms Workshop,...
 

Similar to Pres metabief2020jmm

NIPS2010: optimization algorithms in machine learning
NIPS2010: optimization algorithms in machine learningNIPS2010: optimization algorithms in machine learning
NIPS2010: optimization algorithms in machine learningzukun
 
Hierarchical matrices for approximating large covariance matries and computin...
Hierarchical matrices for approximating large covariance matries and computin...Hierarchical matrices for approximating large covariance matries and computin...
Hierarchical matrices for approximating large covariance matries and computin...
Alexander Litvinenko
 
Computational Information Geometry on Matrix Manifolds (ICTP 2013)
Computational Information Geometry on Matrix Manifolds (ICTP 2013)Computational Information Geometry on Matrix Manifolds (ICTP 2013)
Computational Information Geometry on Matrix Manifolds (ICTP 2013)
Frank Nielsen
 
A new implementation of k-MLE for mixture modelling of Wishart distributions
A new implementation of k-MLE for mixture modelling of Wishart distributionsA new implementation of k-MLE for mixture modelling of Wishart distributions
A new implementation of k-MLE for mixture modelling of Wishart distributions
Frank Nielsen
 
Jörg Stelzer
Jörg StelzerJörg Stelzer
Jörg Stelzerbutest
 
SURF 2012 Final Report(1)
SURF 2012 Final Report(1)SURF 2012 Final Report(1)
SURF 2012 Final Report(1)Eric Zhang
 
Nonlinear Stochastic Optimization by the Monte-Carlo Method
Nonlinear Stochastic Optimization by the Monte-Carlo MethodNonlinear Stochastic Optimization by the Monte-Carlo Method
Nonlinear Stochastic Optimization by the Monte-Carlo Method
SSA KPI
 
Slides: A glance at information-geometric signal processing
Slides: A glance at information-geometric signal processingSlides: A glance at information-geometric signal processing
Slides: A glance at information-geometric signal processing
Frank Nielsen
 
Maximum likelihood estimation of regularisation parameters in inverse problem...
Maximum likelihood estimation of regularisation parameters in inverse problem...Maximum likelihood estimation of regularisation parameters in inverse problem...
Maximum likelihood estimation of regularisation parameters in inverse problem...
Valentin De Bortoli
 
Regularized Compression of A Noisy Blurred Image
Regularized Compression of A Noisy Blurred Image Regularized Compression of A Noisy Blurred Image
Regularized Compression of A Noisy Blurred Image
ijcsa
 
Pattern learning and recognition on statistical manifolds: An information-geo...
Pattern learning and recognition on statistical manifolds: An information-geo...Pattern learning and recognition on statistical manifolds: An information-geo...
Pattern learning and recognition on statistical manifolds: An information-geo...
Frank Nielsen
 
An investigation of inference of the generalized extreme value distribution b...
An investigation of inference of the generalized extreme value distribution b...An investigation of inference of the generalized extreme value distribution b...
An investigation of inference of the generalized extreme value distribution b...
Alexander Decker
 
FiniteElementNotes
FiniteElementNotesFiniteElementNotes
FiniteElementNotesMartin Jones
 
ENBIS 2018 presentation on Deep k-Means
ENBIS 2018 presentation on Deep k-MeansENBIS 2018 presentation on Deep k-Means
ENBIS 2018 presentation on Deep k-Means
tthonet
 
Graph Neural Network in practice
Graph Neural Network in practiceGraph Neural Network in practice
Graph Neural Network in practice
tuxette
 
KAUST_talk_short.pdf
KAUST_talk_short.pdfKAUST_talk_short.pdf
KAUST_talk_short.pdf
Chiheb Ben Hammouda
 
Jere Koskela slides
Jere Koskela slidesJere Koskela slides
Jere Koskela slides
Christian Robert
 
IVR - Chapter 1 - Introduction
IVR - Chapter 1 - IntroductionIVR - Chapter 1 - Introduction
IVR - Chapter 1 - Introduction
Charles Deledalle
 
Unbiased Markov chain Monte Carlo
Unbiased Markov chain Monte CarloUnbiased Markov chain Monte Carlo
Unbiased Markov chain Monte Carlo
JeremyHeng10
 
Optimization tutorial
Optimization tutorialOptimization tutorial
Optimization tutorial
Northwestern University
 

Similar to Pres metabief2020jmm (20)

NIPS2010: optimization algorithms in machine learning
NIPS2010: optimization algorithms in machine learningNIPS2010: optimization algorithms in machine learning
NIPS2010: optimization algorithms in machine learning
 
Hierarchical matrices for approximating large covariance matries and computin...
Hierarchical matrices for approximating large covariance matries and computin...Hierarchical matrices for approximating large covariance matries and computin...
Hierarchical matrices for approximating large covariance matries and computin...
 
Computational Information Geometry on Matrix Manifolds (ICTP 2013)
Computational Information Geometry on Matrix Manifolds (ICTP 2013)Computational Information Geometry on Matrix Manifolds (ICTP 2013)
Computational Information Geometry on Matrix Manifolds (ICTP 2013)
 
A new implementation of k-MLE for mixture modelling of Wishart distributions
A new implementation of k-MLE for mixture modelling of Wishart distributionsA new implementation of k-MLE for mixture modelling of Wishart distributions
A new implementation of k-MLE for mixture modelling of Wishart distributions
 
Jörg Stelzer
Jörg StelzerJörg Stelzer
Jörg Stelzer
 
SURF 2012 Final Report(1)
SURF 2012 Final Report(1)SURF 2012 Final Report(1)
SURF 2012 Final Report(1)
 
Nonlinear Stochastic Optimization by the Monte-Carlo Method
Nonlinear Stochastic Optimization by the Monte-Carlo MethodNonlinear Stochastic Optimization by the Monte-Carlo Method
Nonlinear Stochastic Optimization by the Monte-Carlo Method
 
Slides: A glance at information-geometric signal processing
Slides: A glance at information-geometric signal processingSlides: A glance at information-geometric signal processing
Slides: A glance at information-geometric signal processing
 
Maximum likelihood estimation of regularisation parameters in inverse problem...
Maximum likelihood estimation of regularisation parameters in inverse problem...Maximum likelihood estimation of regularisation parameters in inverse problem...
Maximum likelihood estimation of regularisation parameters in inverse problem...
 
Regularized Compression of A Noisy Blurred Image
Regularized Compression of A Noisy Blurred Image Regularized Compression of A Noisy Blurred Image
Regularized Compression of A Noisy Blurred Image
 
Pattern learning and recognition on statistical manifolds: An information-geo...
Pattern learning and recognition on statistical manifolds: An information-geo...Pattern learning and recognition on statistical manifolds: An information-geo...
Pattern learning and recognition on statistical manifolds: An information-geo...
 
An investigation of inference of the generalized extreme value distribution b...
An investigation of inference of the generalized extreme value distribution b...An investigation of inference of the generalized extreme value distribution b...
An investigation of inference of the generalized extreme value distribution b...
 
FiniteElementNotes
FiniteElementNotesFiniteElementNotes
FiniteElementNotes
 
ENBIS 2018 presentation on Deep k-Means
ENBIS 2018 presentation on Deep k-MeansENBIS 2018 presentation on Deep k-Means
ENBIS 2018 presentation on Deep k-Means
 
Graph Neural Network in practice
Graph Neural Network in practiceGraph Neural Network in practice
Graph Neural Network in practice
 
KAUST_talk_short.pdf
KAUST_talk_short.pdfKAUST_talk_short.pdf
KAUST_talk_short.pdf
 
Jere Koskela slides
Jere Koskela slidesJere Koskela slides
Jere Koskela slides
 
IVR - Chapter 1 - Introduction
IVR - Chapter 1 - IntroductionIVR - Chapter 1 - Introduction
IVR - Chapter 1 - Introduction
 
Unbiased Markov chain Monte Carlo
Unbiased Markov chain Monte CarloUnbiased Markov chain Monte Carlo
Unbiased Markov chain Monte Carlo
 
Optimization tutorial
Optimization tutorialOptimization tutorial
Optimization tutorial
 

Recently uploaded

Essentials of Automations: Optimizing FME Workflows with Parameters
Essentials of Automations: Optimizing FME Workflows with ParametersEssentials of Automations: Optimizing FME Workflows with Parameters
Essentials of Automations: Optimizing FME Workflows with Parameters
Safe Software
 
Key Trends Shaping the Future of Infrastructure.pdf
Key Trends Shaping the Future of Infrastructure.pdfKey Trends Shaping the Future of Infrastructure.pdf
Key Trends Shaping the Future of Infrastructure.pdf
Cheryl Hung
 
IOS-PENTESTING-BEGINNERS-PRACTICAL-GUIDE-.pptx
IOS-PENTESTING-BEGINNERS-PRACTICAL-GUIDE-.pptxIOS-PENTESTING-BEGINNERS-PRACTICAL-GUIDE-.pptx
IOS-PENTESTING-BEGINNERS-PRACTICAL-GUIDE-.pptx
Abida Shariff
 
UiPath Test Automation using UiPath Test Suite series, part 4
UiPath Test Automation using UiPath Test Suite series, part 4UiPath Test Automation using UiPath Test Suite series, part 4
UiPath Test Automation using UiPath Test Suite series, part 4
DianaGray10
 
FIDO Alliance Osaka Seminar: Passkeys and the Road Ahead.pdf
FIDO Alliance Osaka Seminar: Passkeys and the Road Ahead.pdfFIDO Alliance Osaka Seminar: Passkeys and the Road Ahead.pdf
FIDO Alliance Osaka Seminar: Passkeys and the Road Ahead.pdf
FIDO Alliance
 
FIDO Alliance Osaka Seminar: Overview.pdf
FIDO Alliance Osaka Seminar: Overview.pdfFIDO Alliance Osaka Seminar: Overview.pdf
FIDO Alliance Osaka Seminar: Overview.pdf
FIDO Alliance
 
GraphRAG is All You need? LLM & Knowledge Graph
GraphRAG is All You need? LLM & Knowledge GraphGraphRAG is All You need? LLM & Knowledge Graph
GraphRAG is All You need? LLM & Knowledge Graph
Guy Korland
 
State of ICS and IoT Cyber Threat Landscape Report 2024 preview
State of ICS and IoT Cyber Threat Landscape Report 2024 previewState of ICS and IoT Cyber Threat Landscape Report 2024 preview
State of ICS and IoT Cyber Threat Landscape Report 2024 preview
Prayukth K V
 
PHP Frameworks: I want to break free (IPC Berlin 2024)
PHP Frameworks: I want to break free (IPC Berlin 2024)PHP Frameworks: I want to break free (IPC Berlin 2024)
PHP Frameworks: I want to break free (IPC Berlin 2024)
Ralf Eggert
 
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -...
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -...LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -...
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -...
DanBrown980551
 
From Daily Decisions to Bottom Line: Connecting Product Work to Revenue by VP...
From Daily Decisions to Bottom Line: Connecting Product Work to Revenue by VP...From Daily Decisions to Bottom Line: Connecting Product Work to Revenue by VP...
From Daily Decisions to Bottom Line: Connecting Product Work to Revenue by VP...
Product School
 
From Siloed Products to Connected Ecosystem: Building a Sustainable and Scala...
From Siloed Products to Connected Ecosystem: Building a Sustainable and Scala...From Siloed Products to Connected Ecosystem: Building a Sustainable and Scala...
From Siloed Products to Connected Ecosystem: Building a Sustainable and Scala...
Product School
 
FIDO Alliance Osaka Seminar: FIDO Security Aspects.pdf
FIDO Alliance Osaka Seminar: FIDO Security Aspects.pdfFIDO Alliance Osaka Seminar: FIDO Security Aspects.pdf
FIDO Alliance Osaka Seminar: FIDO Security Aspects.pdf
FIDO Alliance
 
AI for Every Business: Unlocking Your Product's Universal Potential by VP of ...
AI for Every Business: Unlocking Your Product's Universal Potential by VP of ...AI for Every Business: Unlocking Your Product's Universal Potential by VP of ...
AI for Every Business: Unlocking Your Product's Universal Potential by VP of ...
Product School
 
GenAISummit 2024 May 28 Sri Ambati Keynote: AGI Belongs to The Community in O...
GenAISummit 2024 May 28 Sri Ambati Keynote: AGI Belongs to The Community in O...GenAISummit 2024 May 28 Sri Ambati Keynote: AGI Belongs to The Community in O...
GenAISummit 2024 May 28 Sri Ambati Keynote: AGI Belongs to The Community in O...
Sri Ambati
 
Epistemic Interaction - tuning interfaces to provide information for AI support
Epistemic Interaction - tuning interfaces to provide information for AI supportEpistemic Interaction - tuning interfaces to provide information for AI support
Epistemic Interaction - tuning interfaces to provide information for AI support
Alan Dix
 
Slack (or Teams) Automation for Bonterra Impact Management (fka Social Soluti...
Slack (or Teams) Automation for Bonterra Impact Management (fka Social Soluti...Slack (or Teams) Automation for Bonterra Impact Management (fka Social Soluti...
Slack (or Teams) Automation for Bonterra Impact Management (fka Social Soluti...
Jeffrey Haguewood
 
The Future of Platform Engineering
The Future of Platform EngineeringThe Future of Platform Engineering
The Future of Platform Engineering
Jemma Hussein Allen
 
When stars align: studies in data quality, knowledge graphs, and machine lear...
When stars align: studies in data quality, knowledge graphs, and machine lear...When stars align: studies in data quality, knowledge graphs, and machine lear...
When stars align: studies in data quality, knowledge graphs, and machine lear...
Elena Simperl
 
ODC, Data Fabric and Architecture User Group
ODC, Data Fabric and Architecture User GroupODC, Data Fabric and Architecture User Group
ODC, Data Fabric and Architecture User Group
CatarinaPereira64715
 

Recently uploaded (20)

Essentials of Automations: Optimizing FME Workflows with Parameters
Essentials of Automations: Optimizing FME Workflows with ParametersEssentials of Automations: Optimizing FME Workflows with Parameters
Essentials of Automations: Optimizing FME Workflows with Parameters
 
Key Trends Shaping the Future of Infrastructure.pdf
Key Trends Shaping the Future of Infrastructure.pdfKey Trends Shaping the Future of Infrastructure.pdf
Key Trends Shaping the Future of Infrastructure.pdf
 
IOS-PENTESTING-BEGINNERS-PRACTICAL-GUIDE-.pptx
IOS-PENTESTING-BEGINNERS-PRACTICAL-GUIDE-.pptxIOS-PENTESTING-BEGINNERS-PRACTICAL-GUIDE-.pptx
IOS-PENTESTING-BEGINNERS-PRACTICAL-GUIDE-.pptx
 
UiPath Test Automation using UiPath Test Suite series, part 4
UiPath Test Automation using UiPath Test Suite series, part 4UiPath Test Automation using UiPath Test Suite series, part 4
UiPath Test Automation using UiPath Test Suite series, part 4
 
FIDO Alliance Osaka Seminar: Passkeys and the Road Ahead.pdf
FIDO Alliance Osaka Seminar: Passkeys and the Road Ahead.pdfFIDO Alliance Osaka Seminar: Passkeys and the Road Ahead.pdf
FIDO Alliance Osaka Seminar: Passkeys and the Road Ahead.pdf
 
FIDO Alliance Osaka Seminar: Overview.pdf
FIDO Alliance Osaka Seminar: Overview.pdfFIDO Alliance Osaka Seminar: Overview.pdf
FIDO Alliance Osaka Seminar: Overview.pdf
 
GraphRAG is All You need? LLM & Knowledge Graph
GraphRAG is All You need? LLM & Knowledge GraphGraphRAG is All You need? LLM & Knowledge Graph
GraphRAG is All You need? LLM & Knowledge Graph
 
State of ICS and IoT Cyber Threat Landscape Report 2024 preview
State of ICS and IoT Cyber Threat Landscape Report 2024 previewState of ICS and IoT Cyber Threat Landscape Report 2024 preview
State of ICS and IoT Cyber Threat Landscape Report 2024 preview
 
PHP Frameworks: I want to break free (IPC Berlin 2024)
PHP Frameworks: I want to break free (IPC Berlin 2024)PHP Frameworks: I want to break free (IPC Berlin 2024)
PHP Frameworks: I want to break free (IPC Berlin 2024)
 
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -...
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -...LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -...
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -...
 
From Daily Decisions to Bottom Line: Connecting Product Work to Revenue by VP...
From Daily Decisions to Bottom Line: Connecting Product Work to Revenue by VP...From Daily Decisions to Bottom Line: Connecting Product Work to Revenue by VP...
From Daily Decisions to Bottom Line: Connecting Product Work to Revenue by VP...
 
From Siloed Products to Connected Ecosystem: Building a Sustainable and Scala...
From Siloed Products to Connected Ecosystem: Building a Sustainable and Scala...From Siloed Products to Connected Ecosystem: Building a Sustainable and Scala...
From Siloed Products to Connected Ecosystem: Building a Sustainable and Scala...
 
FIDO Alliance Osaka Seminar: FIDO Security Aspects.pdf
FIDO Alliance Osaka Seminar: FIDO Security Aspects.pdfFIDO Alliance Osaka Seminar: FIDO Security Aspects.pdf
FIDO Alliance Osaka Seminar: FIDO Security Aspects.pdf
 
AI for Every Business: Unlocking Your Product's Universal Potential by VP of ...
AI for Every Business: Unlocking Your Product's Universal Potential by VP of ...AI for Every Business: Unlocking Your Product's Universal Potential by VP of ...
AI for Every Business: Unlocking Your Product's Universal Potential by VP of ...
 
GenAISummit 2024 May 28 Sri Ambati Keynote: AGI Belongs to The Community in O...
GenAISummit 2024 May 28 Sri Ambati Keynote: AGI Belongs to The Community in O...GenAISummit 2024 May 28 Sri Ambati Keynote: AGI Belongs to The Community in O...
GenAISummit 2024 May 28 Sri Ambati Keynote: AGI Belongs to The Community in O...
 
Epistemic Interaction - tuning interfaces to provide information for AI support
Epistemic Interaction - tuning interfaces to provide information for AI supportEpistemic Interaction - tuning interfaces to provide information for AI support
Epistemic Interaction - tuning interfaces to provide information for AI support
 
Slack (or Teams) Automation for Bonterra Impact Management (fka Social Soluti...
Slack (or Teams) Automation for Bonterra Impact Management (fka Social Soluti...Slack (or Teams) Automation for Bonterra Impact Management (fka Social Soluti...
Slack (or Teams) Automation for Bonterra Impact Management (fka Social Soluti...
 
The Future of Platform Engineering
The Future of Platform EngineeringThe Future of Platform Engineering
The Future of Platform Engineering
 
When stars align: studies in data quality, knowledge graphs, and machine lear...
When stars align: studies in data quality, knowledge graphs, and machine lear...When stars align: studies in data quality, knowledge graphs, and machine lear...
When stars align: studies in data quality, knowledge graphs, and machine lear...
 
ODC, Data Fabric and Architecture User Group
ODC, Data Fabric and Architecture User GroupODC, Data Fabric and Architecture User Group
ODC, Data Fabric and Architecture User Group
 

Pres metabief2020jmm

  • 1. A computational framework based over Transported Meshfree methods. P.G. LeFloch 1, J.M. Mercier 2 1CNRS, 2MPG-Partners 16 01 2020 P.G. LeFloch 1 , J.M. Mercier 2 (1 CNRS, 2 MPG-Partners)A computational framework based over Transported Meshfree methods.16 01 2020 1 / 12
  • 2. Foundations : local integration with Monte-Carlo methods Monte Carlo estimations - consider the following family of (worst) error estimates (µ probability measure, Y = (y1 , . . . , yN ) ∈ RN×D ) RD ϕ(x)dµ − 1 N N n=1 ϕ(yn ) ≤ E Y , Hµ ϕ Hµ where Hµ is a µ-weighted Hilbert (or Banach) functional space. 1 classical example 1 : Y i.i.d. → E(Y , Hµ) ∼ 1√ N and Hµ ∼ L2 (RD , |x|2 dµ) (stat : law of large number) : most used convergence rate in the Finance industry. P.G. LeFloch 1 , J.M. Mercier 2 (1 CNRS, 2 MPG-Partners)A computational framework based over Transported Meshfree methods.16 01 2020 2 / 12
  • 3. Foundations : local integration with Monte-Carlo methods Monte Carlo estimations - consider the following family of (worst) error estimates (µ probability measure, Y = (y1 , . . . , yN ) ∈ RN×D ) RD ϕ(x)dµ − 1 N N n=1 ϕ(yn ) ≤ E Y , Hµ ϕ Hµ where Hµ is a µ-weighted Hilbert (or Banach) functional space. 1 classical example 1 : Y i.i.d. → E(Y , Hµ) ∼ 1√ N and Hµ ∼ L2 (RD , |x|2 dµ) (stat : law of large number) : most used convergence rate in the Finance industry. 2 classical ex 2 : Y SOBOL, µ = dxΩ, Ω = [0, 1]D . Then HK = BV (Ω) (bounded variations), E(Y , HK ) ≥ ln(N)D−1 N ( Koksma-Hlawka sharp estimate conjecture). P.G. LeFloch 1 , J.M. Mercier 2 (1 CNRS, 2 MPG-Partners)A computational framework based over Transported Meshfree methods.16 01 2020 2 / 12
  • 4. Foundations : local integration with Monte-Carlo methods Monte Carlo estimations - consider the following family of (worst) error estimates (µ probability measure, Y = (y1 , . . . , yN ) ∈ RN×D ) RD ϕ(x)dµ − 1 N N n=1 ϕ(yn ) ≤ E Y , Hµ ϕ Hµ where Hµ is a µ-weighted Hilbert (or Banach) functional space. 1 classical example 1 : Y i.i.d. → E(Y , Hµ) ∼ 1√ N and Hµ ∼ L2 (RD , |x|2 dµ) (stat : law of large number) : most used convergence rate in the Finance industry. 2 classical ex 2 : Y SOBOL, µ = dxΩ, Ω = [0, 1]D . Then HK = BV (Ω) (bounded variations), E(Y , HK ) ≥ ln(N)D−1 N ( Koksma-Hlawka sharp estimate conjecture). 3 Others examples ...quantifiers, wavelet, deep feed-forward neural networks ... P.G. LeFloch 1 , J.M. Mercier 2 (1 CNRS, 2 MPG-Partners)A computational framework based over Transported Meshfree methods.16 01 2020 2 / 12
  • 5. A general approach using kernels methods 1 You have a problem involving a probability measure µ and you guess that the solution belongs to a weighted functional space Hµ. P.G. LeFloch 1 , J.M. Mercier 2 (1 CNRS, 2 MPG-Partners)A computational framework based over Transported Meshfree methods.16 01 2020 3 / 12
  • 6. A general approach using kernels methods 1 You have a problem involving a probability measure µ and you guess that the solution belongs to a weighted functional space Hµ. 2 Identify the admissible kernel K(x, y) generating it (RHKS theory) : Hµ ≡ HK . Example of classically used kernels : RELU, convolutional kernels, Wendland functions... P.G. LeFloch 1 , J.M. Mercier 2 (1 CNRS, 2 MPG-Partners)A computational framework based over Transported Meshfree methods.16 01 2020 3 / 12
  • 7. A general approach using kernels methods 1 You have a problem involving a probability measure µ and you guess that the solution belongs to a weighted functional space Hµ. 2 Identify the admissible kernel K(x, y) generating it (RHKS theory) : Hµ ≡ HK . Example of classically used kernels : RELU, convolutional kernels, Wendland functions... 3 Pick (i.i.d) samples y1 , . . . , yN . Then you can measure your integration error using RD ϕ(x)dµ − 1 N N n=1 ϕ(yn ) ≤ E Y , HK ϕ HK where E2 (Y , HK ) = R2D K(x, y)dxdy + 1 N2 N n,m=1 K(yn , ym ) − 2 N N n=1 RD K(x, yn )dx P.G. LeFloch 1 , J.M. Mercier 2 (1 CNRS, 2 MPG-Partners)A computational framework based over Transported Meshfree methods.16 01 2020 3 / 12
  • 8. A general approach using kernels methods 1 You have a problem involving a probability measure µ and you guess that the solution belongs to a weighted functional space Hµ. 2 Identify the admissible kernel K(x, y) generating it (RHKS theory) : Hµ ≡ HK . Example of classically used kernels : RELU, convolutional kernels, Wendland functions... 3 Pick (i.i.d) samples y1 , . . . , yN . Then you can measure your integration error using RD ϕ(x)dµ − 1 N N n=1 ϕ(yn ) ≤ E Y , HK ϕ HK where E2 (Y , HK ) = R2D K(x, y)dxdy + 1 N2 N n,m=1 K(yn , ym ) − 2 N N n=1 RD K(x, yn )dx 4 You can optimize your error computing sharp discrepancy sequences and optimal discrepancy error as Y = arg inf Y ∈RD×N E(Y , HK ), EHK (N, D) = E(Y , HK ) P.G. LeFloch 1 , J.M. Mercier 2 (1 CNRS, 2 MPG-Partners)A computational framework based over Transported Meshfree methods.16 01 2020 3 / 12
  • 9. Our local kernels : lattice-based and transported kernels For our purposes, we crafted two kind of kernels : 1 Lattice-based kernel : (suited to study Lebesgue-measure of type µ = dxΩ). Let L a Lattice, L∗ its dual Lattice. Consider any discrete function satisfying φ(α∗ ) ∈ 1 (L∗ ), φ(α∗ ) ≥ 0, φ(0) = 1 and define Kper (x, y) = 1 |L| α∗∈L∗ φ(α∗ ) exp2iπ<x−y,α∗ > x y z Matern x y k Multiquadric x y k Gaussian x y k Truncated P.G. LeFloch 1 , J.M. Mercier 2 (1 CNRS, 2 MPG-Partners)A computational framework based over Transported Meshfree methods.16 01 2020 4 / 12
  • 10. Our local kernels : lattice-based and transported kernels For our purposes, we crafted two kind of kernels : 1 Lattice-based kernel : (suited to study Lebesgue-measure of type µ = dxΩ). Let L a Lattice, L∗ its dual Lattice. Consider any discrete function satisfying φ(α∗ ) ∈ 1 (L∗ ), φ(α∗ ) ≥ 0, φ(0) = 1 and define Kper (x, y) = 1 |L| α∗∈L∗ φ(α∗ ) exp2iπ<x−y,α∗ > x y z Matern x y k Multiquadric x y k Gaussian x y k Truncated 2 Transported kernel : S : Ω → RD a transport map. Ktra(x, y) = K(S(x), S(y)). x y k Matern x y k Gaussian x y k Multiquadric x y k Truncated P.G. LeFloch 1 , J.M. Mercier 2 (1 CNRS, 2 MPG-Partners)A computational framework based over Transported Meshfree methods.16 01 2020 4 / 12
  • 11. Example I : Monte-Carlo integration with Matern kernel 1 Kernel, random and computed sequences Y . N=256,D=2. x y z Matern 0.0 0.2 0.4 0.6 0.8 1.0 0.00.20.40.60.81.0 random points x y 0.0 0.2 0.4 0.6 0.8 1.0 0.00.20.40.60.81.0 computed points for lattice Matern x y P.G. LeFloch 1 , J.M. Mercier 2 (1 CNRS, 2 MPG-Partners)A computational framework based over Transported Meshfree methods.16 01 2020 5 / 12
  • 12. Example I : Monte-Carlo integration with Matern kernel 1 Kernel, random and computed sequences Y . N=256,D=2. x y z Matern 0.0 0.2 0.4 0.6 0.8 1.0 0.00.20.40.60.81.0 random points x y 0.0 0.2 0.4 0.6 0.8 1.0 0.00.20.40.60.81.0 computed points for lattice Matern x y 2 Optimal discrepancy error → Koksma-Hlavka type estimate EHK (N, D) ∼ n>N φ(α∗n) N ∼ ln(N)D−1 N , φ(α) = ΠD d=1 2 1 + 4π2α2 d /τ2 D P.G. LeFloch 1 , J.M. Mercier 2 (1 CNRS, 2 MPG-Partners)A computational framework based over Transported Meshfree methods.16 01 2020 5 / 12
  • 13. Example I : Monte-Carlo integration with Matern kernel 1 Kernel, random and computed sequences Y . N=256,D=2. x y z Matern 0.0 0.2 0.4 0.6 0.8 1.0 0.00.20.40.60.81.0 random points x y 0.0 0.2 0.4 0.6 0.8 1.0 0.00.20.40.60.81.0 computed points for lattice Matern x y 2 Optimal discrepancy error → Koksma-Hlavka type estimate EHK (N, D) ∼ n>N φ(α∗n) N ∼ ln(N)D−1 N , φ(α) = ΠD d=1 2 1 + 4π2α2 d /τ2 D 3 E(Y , HK ) random – vs E(Y , HK ) computed - vs theoretical EHK (N, D) D=1 D=16 D=128 N=16 0.228 0.304 0.319 N=128 0.117 0.111 0.115 N=512 0.035 0.054 0.059 D=1 D=16 D=128 N=16 0.062 0.211 0.223 N=128 0.008 0.069 0.077 N=512 0.002 0.034 0.049 D=1 D=16 D=128 N=16 0.062 0.288 0.323 N=128 0.008 0.077 0.105 N=512 0.002 0.034 0.043 P.G. LeFloch 1 , J.M. Mercier 2 (1 CNRS, 2 MPG-Partners)A computational framework based over Transported Meshfree methods.16 01 2020 5 / 12
  • 14. Application : Machine Learning 1 Setting : consider a set of observations (y1 , P1 ), . . . , (yN , PN ) ∈ RD×M×N P.G. LeFloch 1 , J.M. Mercier 2 (1 CNRS, 2 MPG-Partners)A computational framework based over Transported Meshfree methods.16 01 2020 6 / 12
  • 15. Application : Machine Learning 1 Setting : consider a set of observations (y1 , P1 ), . . . , (yN , PN ) ∈ RD×M×N 2 Interpolation : pick-up a kernel K(x, y), denotes HK its native space, and consider a continuous function P(y) such that < P, δyn >= P(yn ) ∼ Pn One can further optimize computing Y (∼ learning). P.G. LeFloch 1 , J.M. Mercier 2 (1 CNRS, 2 MPG-Partners)A computational framework based over Transported Meshfree methods.16 01 2020 6 / 12
  • 16. Application : Machine Learning 1 Setting : consider a set of observations (y1 , P1 ), . . . , (yN , PN ) ∈ RD×M×N 2 Interpolation : pick-up a kernel K(x, y), denotes HK its native space, and consider a continuous function P(y) such that < P, δyn >= P(yn ) ∼ Pn One can further optimize computing Y (∼ learning). 3 Extrapolation : then one can extrapolate with error bound RD P(x)dµ − 1 N N n=1 P(yn ) ≤ EHK (N, D) P HK i.e. µ ∼ 1 N N n=1 δyn . P.G. LeFloch 1 , J.M. Mercier 2 (1 CNRS, 2 MPG-Partners)A computational framework based over Transported Meshfree methods.16 01 2020 6 / 12
  • 17. Application : Machine Learning 1 Setting : consider a set of observations (y1 , P1 ), . . . , (yN , PN ) ∈ RD×M×N 2 Interpolation : pick-up a kernel K(x, y), denotes HK its native space, and consider a continuous function P(y) such that < P, δyn >= P(yn ) ∼ Pn One can further optimize computing Y (∼ learning). 3 Extrapolation : then one can extrapolate with error bound RD P(x)dµ − 1 N N n=1 P(yn ) ≤ EHK (N, D) P HK i.e. µ ∼ 1 N N n=1 δyn . 4 Here are two very similar applications : 1 (y1 , P1 ), . . . , (yN , PN ) are prices and implied volatilities (eg call options under SABR model): Pricing. 2 (y1 , P1 ), . . . , (yN , PN ) are pictures of dogs and cats : classifier. P.G. LeFloch 1 , J.M. Mercier 2 (1 CNRS, 2 MPG-Partners)A computational framework based over Transported Meshfree methods.16 01 2020 6 / 12
  • 18. Application to time-dependant PDE (Loading NS) 1 Consider a time dependant probability measure µ(t, x) and a kernel Kt (x, y). We can define sharp discrepancy sequences t → y1 (t), . . . , yn (t) . P.G. LeFloch 1 , J.M. Mercier 2 (1 CNRS, 2 MPG-Partners)A computational framework based over Transported Meshfree methods.16 01 2020 7 / 12
  • 19. Application to time-dependant PDE (Loading NS) 1 Consider a time dependant probability measure µ(t, x) and a kernel Kt (x, y). We can define sharp discrepancy sequences t → y1 (t), . . . , yn (t) . 2 For PDE, we can try to compute these sequences. For instance consider the Navier-Stokes equation (hyperbolic equations) ∂t µ = · (vµ), ∂t (µv) + · (µv2 ) = − p + · (µΣ) · v = 0 (or energy conservation for non newtonian fluids) Together with boundary conditions Dirichlet / Neumann. We obtain a numerical scheme sharing some similarities with SPH - smooth particle hydrodynamics : that are LAGRANGIAN MESHFREE METHODS. P.G. LeFloch 1 , J.M. Mercier 2 (1 CNRS, 2 MPG-Partners)A computational framework based over Transported Meshfree methods.16 01 2020 7 / 12
• 22. Application to industrial Finance
Consider µ(t, x) solution to a non-linear hyperbolic-parabolic Fokker-Planck equation
$\partial_t \mu - L\mu = 0, \qquad L\mu = \nabla \cdot (b\,\mu) + \nabla^2 \cdot (A\,\mu), \qquad A := \tfrac{1}{2} \sigma \sigma^T$
1 FORWARD : compute $\mu(t) \sim \frac{1}{N} \big( \delta_{y^1(t)} + \cdots + \delta_{y^N(t)} \big)$ as sharp discrepancy sequences :
$\Big| \int_{\mathbb{R}^D} \varphi(x) \, d\mu(t, x) - \frac{1}{N} \sum_{n=1}^{N} \varphi(y^n(t)) \Big| \le E\big(Y(t), H_{K_t}\big) \, \|\varphi\|_{H_{K_t}}$
2 CHECK the optimal rate : $E\big(Y(t), H_{K_t}\big) \sim E_{H_{K_t}}(N, D)$.
3 BACKWARD : interpret $t \mapsto y^n(t)$, $n = 1, \ldots, N$, as a moving, transported PDE grid (a tree), and solve the Kolmogorov equation on it. ERROR ESTIMATE :
$\Big| \int_{\mathbb{R}^D} P(t, \cdot) \, d\mu(t, \cdot) - \frac{1}{N} \sum_{n=1}^{N} P(t, y^n(t)) \Big| \le E_{H_{K_t}}(N, D) \, \|P(t, \cdot)\|_{H_{K_t}}$
(A toy forward / backward sketch follows below.)
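A 1D toy version of the FORWARD / BACKWARD loop, with two loudly flagged substitutions: plain Euler-Maruyama particles stand in for the sharp discrepancy sequences $y^n(t)$ (which the method computes deterministically), and a Nadaraya-Watson kernel regression stands in for the backward Kolmogorov step on the moving grid. Drift, volatility, payoff and bandwidth are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(3)
N, steps, T = 512, 50, 1.0
dt = T / steps
b = lambda x: -0.5 * x          # toy drift      (our choice)
sigma = lambda x: 0.3           # toy volatility (our choice)

# FORWARD: a particle cloud mu(t) ~ (1/N) sum_n delta_{y^n(t)}
Y = np.zeros((steps + 1, N))
for k in range(steps):
    Y[k + 1] = Y[k] + b(Y[k]) * dt + sigma(Y[k]) * np.sqrt(dt) * rng.standard_normal(N)

# BACKWARD: start from the terminal payoff and regress one step at a time,
# P(t_k, y) ~ E[ P(t_{k+1}, Y_{k+1}) | Y_k = y ], evaluated on the moving grid
def kernel_regress(x, v, xq, h=0.1):
    w = np.exp(-(xq[:, None] - x[None, :]) ** 2 / (2 * h**2))
    return (w @ v) / w.sum(axis=1)

P = np.maximum(Y[-1], 0.0)                   # terminal payoff g(x) = max(x, 0)
for k in range(steps - 1, -1, -1):
    P = kernel_regress(Y[k], P, Y[k])        # values carried back to the grid at t_k
print("price P(0) ~", P.mean())
```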
• 23. Illustration : the 2D SABR process, widely used in Finance
SABR process
$d \begin{pmatrix} F_t \\ \alpha_t \end{pmatrix} = \rho \begin{pmatrix} \alpha_t F_t^\beta & 0 \\ 0 & \nu \alpha_t \end{pmatrix} \begin{pmatrix} dW_t^1 \\ dW_t^2 \end{pmatrix}$, with $0 \le \beta \le 1$, $\nu \ge 0$, $\rho \in \mathbb{R}^{2 \times 2}$.
The Fokker-Planck equation associated to SABR is
$\partial_t \mu + L^* \mu = 0, \qquad L^* \mu = \nabla^2 \cdot \Big( \rho \begin{pmatrix} \frac{x_2^2}{2} x_1^{2\beta} & 0 \\ 0 & \frac{\nu^2}{2} x_2^2 \end{pmatrix} \rho^T \mu \Big)$.
(Animation: SABR200.)
(A standard Euler Monte-Carlo sketch of these dynamics follows below.)
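For reference, a standard (log-)Euler Monte-Carlo simulation of the SABR dynamics above, producing the kind of particle clouds that the transported mesh replaces. Parameter values are illustrative, and the 2 x 2 matrix ρ is reduced here to a single correlation coefficient, which is the usual SABR setup.

```python
import numpy as np

rng = np.random.default_rng(4)
N, steps = 10_000, 200
dt = 1.0 / steps
beta, nu, corr = 0.7, 0.4, -0.5                       # illustrative parameters
L = np.linalg.cholesky(np.array([[1.0, corr], [corr, 1.0]]))

F = np.full(N, 1.0)                                   # forward F_t
a = np.full(N, 0.2)                                   # stochastic volatility alpha_t
for _ in range(steps):
    dW = np.sqrt(dt) * rng.standard_normal((N, 2)) @ L.T   # correlated Brownian drivers
    F = np.maximum(F + a * F**beta * dW[:, 0], 1e-12)      # keep F^beta well defined
    a = a * np.exp(nu * dW[:, 1] - 0.5 * nu**2 * dt)       # log-Euler: keeps alpha_t > 0
print("E[F_T] ~", F.mean(), "  E[alpha_T] ~", a.mean())
```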
• 27. The curse of dimensionality
The CURSE of dimensionality in finance : price and manage a complex option written on several underlyings.
1 Step 1 : compute a measure solution µ(t, x) to a Fokker-Planck equation in large dimension, and calibrate it.
2 Step 2 : solve a Kolmogorov equation backward in large dimension; denote the solution P(t, x).
3 Step 3 : compute various metrics on the solution, for instance VaR or XVA for regulatory purposes, or future deltas / gammas / implied vols for hedging purposes.
4 Result ? We can compute the solution P(t, x) at any order of accuracy :
$\Big| \int_{\mathbb{R}^D} P(t, \cdot) \, d\mu(t, \cdot) - \frac{1}{N} \sum_{n=1}^{N} P(t, y^n(t)) \Big| \le \frac{\|P(t, \cdot)\|_{H_K}}{N^\alpha}$
...where $\alpha \ge 1/2$ is any number : choose it according to your desired electricity bill ! But beware of smoothing effects in high dimensions : $H_K$ contains less information as the dimension rises. Some problems, for instance optimal stopping problems, are intrinsically cursed. (A small helper to check the observed rate $\alpha$ follows below.)
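The claimed $N^{-\alpha}$ behaviour can be checked empirically: given worst-case errors $e_N$ measured at successive values of N, the observed order is $\alpha \approx \ln(e_N / e_{N'}) / \ln(N' / N)$. A small helper only, fed here with the computed-sequence errors for D = 16 from the tables on slide 13.

```python
import numpy as np

def observed_order(Ns, errs):
    """alpha such that err ~ C / N^alpha between consecutive (N, err) pairs."""
    Ns, errs = np.asarray(Ns, float), np.asarray(errs, float)
    return np.log(errs[:-1] / errs[1:]) / np.log(Ns[1:] / Ns[:-1])

# computed-sequence errors for D = 16 on slide 13 (N = 16, 128, 512)
print(observed_order([16, 128, 512], [0.211, 0.069, 0.034]))   # ~ [0.54, 0.51]
```

The observed order sits just above 1/2 for D = 16, consistent with the bound above.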
• 30. Academic tests, business cases
1 Academic works : finance, non-linear hyperbolic systems
1 Revisiting the method of characteristics via a convex hull algorithm : explicit solutions to high-dimensional conservation laws with non-convex fluxes.
2 Numerical results using CoDeFi : a benchmark of TMM methods for classical pricing problems.
2 Business cases - done
1 Hedging Strategies for Net Interest Income and Economic Values of Equity (http://dx.doi.org/10.2139/ssrn.3454813, with S. Miryusupov).
2 Metrics computed for a large portfolio of autocalls depending on several underlyings (unpublished).
3 Under work
1 McKean-Vlasov equations (stochastic volatility modeling).
2 ISDA Standard Initial Margin : XVA computations based on sensitivities (delta / vega / gamma).
3 The IBOR / RFR rates transition à la Lyashenko-Mercurio.
4 Strategies for liquidity risk : Hamilton-Jacobi-Bellman equations in high dimensions.
• 36. Summary and Conclusions
We presented in this talk :
1 New, sharp estimates for Monte-Carlo methods...
2 ...that can be used in a wide variety of contexts to perform a sharp error analysis.
3 A new method for the numerical simulation of PDEs : Transported Meshfree Methods...
4 ...that can be used in a wide variety of applications (hyperbolic / parabolic equations, artificial intelligence, etc.)...
5 ...for which the error analysis applies : we can guarantee a worst-case error estimate, and we can check that this error matches the optimal convergence rate.
6 ...Thus we can argue that our numerical methods reach a nearly optimal algorithmic complexity.