Luc_Faucheux_2020
Stochastic Calculus – ITO – II
From SDEs to PDEs and back
1
In this section
- We tackle the mapping between SDEs (SIEs) and PDEs (in particular the PDE for the PDF)
- SDE: Stochastic Differential Equation
- SIE: Stochastic Integral Equation
- PDE: Partial Differential Equation
- PDF: Probability Distribution Function
- This part is somewhat easier, as it mostly deals with stochastic terms that are either constant or time dependent, with no dependency on the stochastic variable (so no ITO-STRATO controversy to worry about). We will leave that for part III
2
From PDE to SDE and back
3
From SDE to PDE and back – first pass
- Let's start with a Fokker-Planck equation
- In the PDE section, we will show that the Chapman-Kolmogorov equation is a particular case of the Master equation, and that the Fokker-Planck equation is itself a special case of the Chapman-Kolmogorov, but for now we sort of take the FP for granted.
- Let's assume we have the following FP for the Probability Distribution:
- ∂p(x,t)/∂t = −∂/∂x [M₁(x,t)·p(x,t) − ∂/∂x [M₂(x,t)·p(x,t)]]
- For the notations, x is a "regular variable"; we keep the capital X for the SDE
- By writing the first derivative in time as equal to a gradient in space of something (the diffusion current in Physics), we ensure that the overall probability is conserved:
- ∂p/∂t = −∂J/∂x
- Note that this would not be the case for absorption problems, or cliffs (see first passage time in the Bachelier section)
4
From SDE to PDE and back – first pass - II
- Let's check that the probability is conserved
- d/dt ∫_{−∞}^{+∞} p(x,t)·dx = ∫_{−∞}^{+∞} ∂p(x,t)/∂t·dx = ∫_{−∞}^{+∞} −∂J(x,t)/∂x·dx = [−J(x,t)]_{−∞}^{+∞}
- And with the appropriate boundary conditions J(x=−∞,t) = J(x=+∞,t) = 0
- d/dt ∫_{−∞}^{+∞} p(x,t)·dx = 0
- J(x,t) = M₁(x,t)·p(x,t) − ∂/∂x [M₂(x,t)·p(x,t)]
- We can calculate the moments of that distribution, denoted:
- m_k(t) = <x^k>_t = ∫_{−∞}^{+∞} p(x,t)·x^k·dx
- Remember that we calculated those in the Gaussian case in the Bachelier section
- Note that those are NOT the cumulants; we will go over that in the PDE section
5
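As a quick numerical sanity check (a sketch, not from the deck; the grid and the `moment` helper are our choices), the first moments of a Gaussian density can be computed on a grid:

```python
import numpy as np

# Moments of a Gaussian density p(x,t) at a fixed t, with mean mu and variance var
mu, var = 1.0, 2.0
x = np.linspace(-20.0, 20.0, 200_001)
p = np.exp(-(x - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)

def moment(k):
    # m_k = int p(x,t) x^k dx, approximated by the trapezoidal rule
    f = p * x ** k
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(x)))

print(moment(0))  # ~1, conservation of probability (k = 0)
print(moment(1))  # ~mu
print(moment(2))  # ~mu^2 + var
```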
From SDE to PDE and back – first pass - III
- m_k(t) = <x^k>_t = ∫_{−∞}^{+∞} p(x,t)·x^k·dx
- m_k(t+ÎŽt) = <x^k>_{t+ÎŽt} = ∫_{−∞}^{+∞} p(x,t+ÎŽt)·x^k·dx
- m_k(t+ÎŽt) = <x^k>_{t+ÎŽt} = ∫_{−∞}^{+∞} {p(x,t) + ∂p(x,t)/∂t·Ύt}·x^k·dx
- m_k(t+ÎŽt) = <x^k>_{t+ÎŽt} = ∫_{−∞}^{+∞} p(x,t)·x^k·dx + ÎŽt·∫_{−∞}^{+∞} ∂p(x,t)/∂t·x^k·dx
- m_k(t+ÎŽt) = <x^k>_{t+ÎŽt} = <x^k>_t + ÎŽt·∫_{−∞}^{+∞} ∂p(x,t)/∂t·x^k·dx
- m_k(t+ÎŽt) − m_k(t) = <x^k>_{t+ÎŽt} − <x^k>_t = I·Ύt = ÎŽt·∫_{−∞}^{+∞} ∂p(x,t)/∂t·x^k·dx
- Let's calculate I = ∫_{−∞}^{+∞} ∂p(x,t)/∂t·x^k·dx
6
From SDE to PDE and back – first pass - IV
- Let's calculate I = ∫_{−∞}^{+∞} ∂p(x,t)/∂t·x^k·dx
- We also have: ∂p(x,t)/∂t = −∂/∂x [M₁(x,t)·p(x,t) − ∂/∂x [M₂(x,t)·p(x,t)]]
- I = ∫_{−∞}^{+∞} −∂/∂x [M₁(x,t)·p(x,t) − ∂/∂x [M₂(x,t)·p(x,t)]]·x^k·dx
- We can integrate by parts
- I = I₁ + I₂
- Where:
- I₁ = [−[M₁(x,t)·p(x,t) − ∂/∂x [M₂(x,t)·p(x,t)]]·x^k]_{−∞}^{+∞}
- I₂ = ∫_{−∞}^{+∞} [M₁(x,t)·p(x,t) − ∂/∂x [M₂(x,t)·p(x,t)]]·k·x^{k−1}·dx
- I₁ = 0 for appropriate boundary conditions (same ones that ensured overall conservation of probability, meaning J(x=−∞,t) = J(x=+∞,t) = 0), and we are left with:
7
From SDE to PDE and back – first pass - V
- m_k(t+ÎŽt) − m_k(t) = <x^k>_{t+ÎŽt} − <x^k>_t = I₂·Ύt
- I₂ = ∫_{−∞}^{+∞} [M₁(x,t)·p(x,t) − ∂/∂x [M₂(x,t)·p(x,t)]]·k·x^{k−1}·dx
- We integrate by parts once again
- I₂(k) = ∫_{−∞}^{+∞} M₁(x,t)·p(x,t)·k·x^{k−1}·dx − ∫_{−∞}^{+∞} ∂/∂x [M₂(x,t)·p(x,t)]·k·x^{k−1}·dx
- IF (k = 1):
- I₂(1) = ∫_{−∞}^{+∞} M₁(x,t)·p(x,t)·dx − ∫_{−∞}^{+∞} ∂/∂x [M₂(x,t)·p(x,t)]·dx
- I₂(1) = ∫_{−∞}^{+∞} M₁(x,t)·p(x,t)·dx − [M₂(x,t)·p(x,t)]_{−∞}^{+∞}
- I₂(1) = ∫_{−∞}^{+∞} M₁(x,t)·p(x,t)·dx
8
From SDE to PDE and back – first pass - VI
- IF (k > 1):
- I₂(k) = ∫_{−∞}^{+∞} M₁(x,t)·p(x,t)·k·x^{k−1}·dx − ∫_{−∞}^{+∞} ∂/∂x [M₂(x,t)·p(x,t)]·k·x^{k−1}·dx
- I₂(k) = ∫_{−∞}^{+∞} M₁(x,t)·p(x,t)·k·x^{k−1}·dx − [M₂(x,t)·p(x,t)·k·x^{k−1}]_{−∞}^{+∞} + ∫_{−∞}^{+∞} M₂(x,t)·p(x,t)·k·(k−1)·x^{k−2}·dx
- With, again, boundary conditions that are not pathological:
- I₂(k) = ∫_{−∞}^{+∞} M₁(x,t)·p(x,t)·k·x^{k−1}·dx + ∫_{−∞}^{+∞} M₂(x,t)·p(x,t)·k·(k−1)·x^{k−2}·dx
9
From SDE to PDE and back – first pass - VII
- So we have, in the case of:
- ∂p(x,t)/∂t = −∂/∂x [M₁(x,t)·p(x,t) − ∂/∂x [M₂(x,t)·p(x,t)]]
- m_k(t+ÎŽt) − m_k(t) = <x^k>_{t+ÎŽt} − <x^k>_t = I₂(k)·Ύt
- m_k(t) = <x^k>_t = ∫_{−∞}^{+∞} p(x,t)·x^k·dx
- IF (k = 0), we have conservation of probability: ∫_{−∞}^{+∞} p(x,t)·1·dx = m₀(t) = 1
- IF (k = 1):
- m₁(t+ÎŽt) − m₁(t) = <x>_{t+ÎŽt} − <x>_t = ÎŽt·∫_{−∞}^{+∞} M₁(x,t)·p(x,t)·dx
- IF (k = 2): m₂(t+ÎŽt) − m₂(t) = <x²>_{t+ÎŽt} − <x²>_t = I₂(2)·Ύt
- I₂(k) = ∫_{−∞}^{+∞} M₁(x,t)·p(x,t)·k·x^{k−1}·dx + ∫_{−∞}^{+∞} M₂(x,t)·p(x,t)·k·(k−1)·x^{k−2}·dx
- I₂(2) = ∫_{−∞}^{+∞} M₁(x,t)·p(x,t)·2x·dx + ∫_{−∞}^{+∞} M₂(x,t)·p(x,t)·2·dx
10
From SDE to PDE and back – first pass - VIIa
- So let's recap:
- If we have a PDE of the form:
- ∂p(x,t)/∂t = −∂/∂x [M₁(x,t)·p(x,t) − ∂/∂x [M₂(x,t)·p(x,t)]]
- The moments of the variable will follow the following equation:
- m_k(t) = <x^k>_t = ∫_{−∞}^{+∞} p(x,t)·x^k·dx
- d/dt m_k(t) = I₂(k)
- I₂(k) = ∫_{−∞}^{+∞} M₁(x,t)·p(x,t)·k·x^{k−1}·dx + ∫_{−∞}^{+∞} M₂(x,t)·p(x,t)·k·(k−1)·x^{k−2}·dx
- So far, just integration by parts, and quite general
11
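The recap can be exercised on the simplest case (a sketch under our own assumptions): take constant coefficients M₁ = a and M₂ = D, for which the solution starting from a Dirac peak is a Gaussian of mean x₀ + a·t and variance 2·D·t, and check d/dt m_k(t) = I₂(k) for k = 1, 2:

```python
# Constant FP coefficients: M1 = a (drift), M2 = D, so the solution starting
# from a Dirac peak at x0 is a Gaussian with mean x0 + a*t and variance 2*D*t.
a, D, x0 = 0.3, 0.5, 1.0

def m(k, t):
    # Exact moments of that Gaussian: m1 = mean, m2 = mean^2 + variance
    mean, var = x0 + a * t, 2.0 * D * t
    return {1: mean, 2: mean ** 2 + var}[k]

# Check d/dt m_k(t) = I2(k) by central finite differences
t, h = 2.0, 1e-5
dm1 = (m(1, t + h) - m(1, t - h)) / (2 * h)
dm2 = (m(2, t + h) - m(2, t - h)) / (2 * h)

I2_1 = a                          # I2(1) = int M1 p dx = a
I2_2 = 2 * a * m(1, t) + 2 * D    # I2(2) = 2 a m1 + 2 M2, with M2 = D
print(dm1, I2_1)
print(dm2, I2_2)
```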
From SDE to PDE and back – first pass - VIII
- This so far is very general.
- We need to tie this up to a stochastic process of the form:
- dX(t) = a(t,X(t))·dt + b(t,X(t))·([)·dW (in the ITO convention)
- This indicates that we will have to concern ourselves with conditional probabilities, reminiscent of two things:
- 1) Green propagators, when the initial starting point is a Dirac peak
- 2) The Chapman-Kolmogorov equation from Bachelier, where he was looking for solutions where the transition probability function was itself the probability density:
- p(x,t) = ∫_{x′=−∞}^{x′=+∞} p(x′,t′)·PR(x′,t′,x,t)·dx′
- WITH: PR(x′,t′,x,t) = p(x−x′, t−t′)
12
From SDE to PDE and back – first pass - IX
- Writing those with more explicit notation for the conditional probability:
- p(x,t|x₀,t₀) = ∫_{x′=−∞}^{x′=+∞} p(x,t|x′,t′)·p(x′,t′|x₀,t₀)·dx′
- This is also known as the composition rule, or Chapman-Kolmogorov
- It is a special case of the Master equation
- Fokker-Planck is a special case of the Chapman-Kolmogorov
13
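The composition rule can be checked numerically for the Gaussian kernel p(x,t|x′,t′) = N(x−x′, t−t′): chaining N(0,t₁) with N(0,t₂−t₁) on a grid should give back N(0,t₂). A sketch (the grid sizes and names are our choices):

```python
import numpy as np

def gauss(x, var):
    # Gaussian density, mean 0, variance var
    return np.exp(-x ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)

dx = 0.01
x = np.arange(-15.0, 15.0 + dx / 2, dx)   # odd number of points, centered on 0
t1, t2 = 0.7, 1.9

# Chapman-Kolmogorov: p(x,t2|0,0) = int p(x,t2|x',t1) * p(x',t1|0,0) dx'
# On a uniform grid this is a discrete convolution times dx.
p1 = gauss(x, t1)             # p(x', t1 | 0, 0)
kernel = gauss(x, t2 - t1)    # p(x - x', t2 - t1)
chained = np.convolve(p1, kernel, mode="same") * dx

direct = gauss(x, t2)         # p(x, t2 | 0, 0), composed in one step
err = float(np.max(np.abs(chained - direct)))
print(err)   # should be tiny
```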
From SDE to PDE and back – first pass - X
- The processes described by the Chapman-Kolmogorov equation are such that we need to find a function p(x′,t′|x₀,t₀) so that every possible "path" carries a weight given by that function, and the resulting probability at the end is the sum over all the possible paths.
- At first it seems that finding such a function for ALL partitions could be quite tricky
- That was one of the achievements of Louis Bachelier's 1900 Ph.D. thesis
14
[Figure: paths from (x₀, t₀) through intermediate points x₁ at t = t₁ and on to x₂ at t = t₂, with the conditional probabilities p(x₁,t₁|x₀,t₀) and p(x₂,t₂|x₁,t₁) attached to the legs]
From SDE to PDE and back – first pass - XI
- Bachelier guessed a functional form: p(x,t) = A(t)·exp(−B(t)·x²)
- And proceeded to prove that the following had to be observed:
- p(x,t) = p(x=0,t)·exp{−π·p(x=0,t)²·x²}
- Which is a truly beautiful relationship between the distribution peak and its overall shape
- Bachelier ended up with the revered Gaussian distribution:
- p(x,t) = (A/√t)·exp{−π·A²·x²/t}
- Which is a solution of the Fokker-Planck equation
- In life, it is almost always a Gaussian, with "almost" and "always" being loosely defined
- Let's see if we can now come up with the same result without having to guess
15
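Bachelier's peak-shape relation can be verified directly for the Gaussian family above (a quick sketch; the choice A = 1/√(2π), which makes it the standard N(0,t) density, is ours):

```python
import math

# p(x,t) = (A/sqrt(t)) * exp(-pi * A^2 * x^2 / t), with A = 1/sqrt(2*pi)
A = 1.0 / math.sqrt(2.0 * math.pi)

def p(x, t):
    return (A / math.sqrt(t)) * math.exp(-math.pi * A * A * x * x / t)

# Bachelier's relation: p(x,t) = p(0,t) * exp(-pi * p(0,t)^2 * x^2)
t = 1.7
for x in [0.0, 0.5, 1.0, 2.3]:
    lhs = p(x, t)
    rhs = p(0.0, t) * math.exp(-math.pi * p(0.0, t) ** 2 * x * x)
    print(x, lhs, rhs)   # lhs and rhs agree
```

Note the relation holds for any A here, since p(0,t) = A/√t makes the two exponents identical.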
Side note on Bachelier
- P(x,t) = ∫_{x′=−∞}^{x′=+∞} P(x′,t′)·PR(x′,t′,x,t)·dx′
- NOW comes the big one: we assume that we can write PR(x′,t′,x,t) = P(x−x′, t−t′)
- In many ways this seems quite impossible: you need to find a solution such that at all times the distribution function has the same functional form as the conditional probability
- In the graph in the previous slide, if the width of the line is a somewhat crude representation of the conditional probability PR(x′,t′,x,t), and the size of the dot is also a somewhat crude representation of the probability distribution, then for any and every possible combination of intermediate points and times you need to find a function such that BOTH the size of the dots and the width of the lines are described by the same functional form
- P(x,t) = function(x,t)
- PR(x′,t′,x,t) = function(x−x′, t−t′)
- That seems quite impossible
16
Side note on Bachelier - II
- This sort of points us towards some sort of Pascal triangle or binomial distribution, for which each step is identical, and somehow builds up over repeated steps.
- Note that the binomial distribution is different, because at each step it is a discrete, limited jump to the nearest neighbors, not a full function
- So this is not exactly the same, but it looks like if we need to build something that is true for ANY and EVERY possible intermediate time and position, we might have a shot if we build it for EVERY smallest possible increment, and have some sort of SCALING property to scale that up when integrating.
- This is why it looks like we should choose the Gaussian: the distribution for a variable that is a sum of variables each following a Gaussian will itself follow a Gaussian, and the variance will be the sum of the individual variances. (So it SCALES with fractal dimension 2, as we would say these days.)
17
Sidebar on Gaussian distribution
- It seems that the Gaussian distribution is a valuable candidate for such a function. Bachelier did not explicitly describe how he came up with his guess of the Gaussian, but he was surely well versed in the Pascal triangle, the Binomial distribution, and the scaling properties of the Gaussian
- If x₁ and x₂ are two independent random variables following the Gaussian distribution, then
- W = x₁ + x₂ will also follow a Gaussian distribution, and
- <ΔW²> = b²·Δt = σ²·Δt = <Δ(x₁+x₂)²> = <Δx₁²> + <Δx₂²> = σ₁²·Δt + σ₂²·Δt
- σ²_{x₁+x₂} = σ²_{x₁} + σ²_{x₂}
- In particular, in the case of N independent random increments xᵢ following a Gaussian distribution, the sum of those random increments will also be a random variable following a Gaussian distribution, of variance:
- <Δ(∑ᵢ xᵢ)²> = 𝔌[(∑ᵢ xᵢ)²] = σ²·Δt = Δt·∑ᵢ σ²_{xᵢ}
18
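A seeded Monte Carlo sketch of the additivity (sample sizes and names are our choices): the variance of a sum of independent Gaussians is the sum of the variances:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two independent Gaussian variables with different standard deviations
s1, s2 = 0.8, 1.7
n = 1_000_000
x1 = rng.normal(0.0, s1, n)
x2 = rng.normal(0.0, s2, n)
w = x1 + x2

var_sum = float(w.var())            # empirical variance of the sum
var_add = s1 ** 2 + s2 ** 2         # sigma_{x1+x2}^2 = sigma_{x1}^2 + sigma_{x2}^2
print(var_sum, var_add)             # close to each other
```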
Sidebar on Gaussian distribution - II
- The invariance of the Gaussian distribution under addition is closely connected to the CLT (Central Limit Theorem), which states that a suitably normalized sum of many independent variables with finite variances (they do not have to follow a Gaussian distribution) will converge to a Gaussian distribution, another reason why the Gaussian distribution is awesome
- Note that the Gaussian distribution is invariant under addition for exponent 2
- σ²_{x₁+x₂} = σ²_{x₁} + σ²_{x₂}
- Schroeder points out that a number of distributions are invariant under addition for a different exponent D (not diffusion, but more like a dimension coefficient, see fractal theory).
- In particular, the celebrated Cauchy distribution: P(x) = 1/(π·(1+x²)) is invariant under addition for exponent D = 1
- As another example, for D = 1/2, the distribution that is invariant under addition is:
- P(x) = (1/√(2π))·x^{−3/2}·exp(−1/(2x))
19
Sidebar on Gaussian distribution - III
- Another note on the Gaussian and Cauchy distributions.
- Suppose that we have N identically distributed random variables.
- For the Gaussian distribution:
- <ΔX²> = <Δ(∑ᵢ xᵢ)²> = 𝔌[(∑ᵢ xᵢ)²] = σ²·Δt = Δt·∑ᵢ σᵢ² = N·Δt·σᵢ²
- σ² = N·σᵢ²
- And so the AVERAGE (not the SUM) of those variables will be such that:
- <Δ(X/N)²> = (1/N²)·<Δ(∑ᵢ xᵢ)²> = (1/N²)·N·Δt·σᵢ² = (1/N)·Δt·σᵢ² = Δt·σ_avg²
- So for variables following a Gaussian distribution, the more measurements you make and average, the more precise an estimate of the average you get:
- σ_avg² = (1/N)·σᵢ², or equivalently: σ_avg = (1/√N)·σᵢ
20
Sidebar on Gaussian distribution - IV
- This is why in Physics, for most experiments, you believe that the more measurements you make, the better an estimate you will get.
- This is the often-quoted "convergence in 1/√N" for most computer simulations
- Note that this is not true in Finance, where you do not have the luxury of making a lot of experiments on the same physical system. In Finance you do not have the luxury of controlled experiments, nor do you have the luxury of a steady-state solution.
- In contrast, the Cauchy distribution is such that: σ = N·σᵢ
- And so the distribution of the AVERAGE of N identically distributed Cauchy variables is the SAME as the original distribution.
- Averaging Cauchy variables does not improve the estimate.
- Averaging Gaussian variables improves the estimate.
21
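A seeded sketch contrasting the two (trial counts and names are our choices): the spread, measured by the interquartile range, of the average of N Gaussian draws shrinks like 1/√N, while for Cauchy draws it does not shrink at all:

```python
import numpy as np

rng = np.random.default_rng(1)

def iqr_of_average(draw, n, trials=20_000):
    # Interquartile range of the average of n i.i.d. draws
    samples = draw((trials, n)).mean(axis=1)
    q75, q25 = np.percentile(samples, [75, 25])
    return float(q75 - q25)

n = 100
g1 = iqr_of_average(lambda s: rng.normal(size=s), 1)
gN = iqr_of_average(lambda s: rng.normal(size=s), n)
c1 = iqr_of_average(lambda s: rng.standard_cauchy(size=s), 1)
cN = iqr_of_average(lambda s: rng.standard_cauchy(size=s), n)

print(gN / g1)   # ~1/sqrt(100) = 0.1 for the Gaussian: averaging helps
print(cN / c1)   # ~1 for the Cauchy: averaging does not help
```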
Side note on Bachelier - III
- It is also possible that, as a Frenchman, Bachelier was familiar with the visual illustrations of Charles-Joseph Minard, and might have thought of the process he was trying to describe as an army of probability diffusing in time like the Great Army during the Russian campaign.
- He was also certainly familiar with the Galton board, which might have given him some insight into how a security diffuses.
- One thing is for sure, I am no Minard:
22
Side note on Bachelier - IV
- Luc trying to illustrate a flow of probability such that the width of the lines is proportional to the conditional probability, and the size of the dots proportional to the probability density
23
[Figure: same sketch as before, with probabilities P(x₁,t₁|x₀,t₀) and P(x₂,t₂|x₁,t₁) on the legs between x₀ at t₀, x₁ at t₁, and x₂ at t₂]
Side note on Bachelier - V
- Charles-Joseph Minard illustrating Napoleon's Russian campaign, and totally schooling Luc.
24
The Galton board (also called bean machine)
- I just could not resist, because the Galton board is so beautiful, and it is almost certain that Louis Bachelier knew about it and might have gotten inspiration from it.
- Sir Galton himself:
- "Order in Apparent Chaos: I know of scarcely anything so apt to impress the imagination as the wonderful form of cosmic order expressed by the Law of Frequency of Error. The law would have been personified by the Greeks and deified, if they had known of it. It reigns with serenity and in complete self-effacement amidst the wildest confusion. The huger the mob, and the greater the apparent anarchy, the more perfect is its sway. It is the supreme law of Unreason. Whenever a large sample of chaotic elements are taken in hand and marshalled in the order of their magnitude, an unsuspected and most beautiful form of regularity proves to have been latent all along."
25
The Galton board (also called bean machine) - II
26
The Galton board (also called bean machine) - III
27
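The Galton board is easy to simulate (a seeded sketch; bead and row counts are our choices): each bead takes left/right steps at the pegs, the final bins build up a Binomial(n, 1/2) distribution, which approaches a Gaussian for many rows:

```python
import numpy as np

rng = np.random.default_rng(2)

n_rows, n_beads = 50, 200_000
# Each bead goes right (1) or left (0) at each of n_rows pegs;
# its final bin is the number of rights: a Binomial(n_rows, 1/2) variable.
bins = rng.integers(0, 2, size=(n_beads, n_rows)).sum(axis=1)

mean, var = float(bins.mean()), float(bins.var())
print(mean, n_rows / 2)   # binomial mean n*p = 25
print(var, n_rows / 4)    # binomial variance n*p*(1-p) = 12.5
```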
From SDE to PDE and back – first pass - X
- We will follow 1), the Green propagators and the conditional probabilities concept
- We are now dealing with a stochastic variable X(t)
- We will be preoccupying ourselves with an expansion in time, so we define
- ΔX = X(t+Δt) − X(t)
- We further assume that:
- <ΔX> = E[ΔX] = F₁(X(t),t)·Δt (drift term)
- <ΔX²> = E[ΔX²] = F₂(X(t),t)·Δt (diffusion term)
- All higher orders <ΔX^k> are of order Δt² at least
- Note: there are anomalous diffusion processes for which those assumptions would not be verified.
- We are essentially looking at a solution where p(x,t) = ÎŽ(x−X(t)), where ÎŽ is the Dirac function, and looking at "small" intervals in time afterwards
28
From SDE to PDE and back – first pass - XI
- IF (k = 1):
- m₁(t+Δt) − m₁(t) = <x>_{t+Δt} − <x>_t = Δt·∫_{−∞}^{+∞} M₁(x,t)·p(x,t)·dx
- m_k(t) = <x^k>_t = ∫_{−∞}^{+∞} p(x,t)·x^k·dx
- m₁(t) = <x>_t = ∫_{−∞}^{+∞} p(x,t)·x·dx = ∫_{−∞}^{+∞} ÎŽ(x−X(t))·x·dx = X(t)
- m₁(t+Δt) − m₁(t) = <x>_{t+Δt} − <x>_t = Δt·∫_{−∞}^{+∞} M₁(x,t)·Ύ(x−X(t))·dx
- m₁(t+Δt) − m₁(t) = <x>_{t+Δt} − <x>_t = Δt·M₁(X(t),t)
- So we can equate with: <ΔX> = E[ΔX] = F₁(X(t),t)·Δt
- And have F₁(X(t),t) = M₁(X(t),t)
- Note: since we were wrong for a large part of the first deck because of Ito/Stratonovich, we are proceeding with caution here
29
From SDE to PDE and back – first pass - XII
- m₁(t+Δt) − m₁(t) = <x>_{t+Δt} − <x>_t = Δt·M₁(X(t),t)
- This is why this term is referred to as the drift. It is first order in time, where the ratio of the average displacement to the time interval is the velocity M₁(X(t),t)
- We are now concerning ourselves with the second order in time, and pay attention to subtracting, or taking into account, the drift term in the correct manner
30
[Figure: a path drifting from <x(t)> = X(t) at time t to <x(t+Δt)> at time t+Δt]
From SDE to PDE and back – first pass - XIII
- IF (k = 2): m₂(t+Δt) − m₂(t) = <x²>_{t+Δt} − <x²>_t = I₂(2)·Δt
- I₂(k) = ∫_{−∞}^{+∞} M₁(x,t)·p(x,t)·k·x^{k−1}·dx + ∫_{−∞}^{+∞} M₂(x,t)·p(x,t)·k·(k−1)·x^{k−2}·dx
- I₂(2) = ∫_{−∞}^{+∞} M₁(x,t)·p(x,t)·2x·dx + ∫_{−∞}^{+∞} M₂(x,t)·p(x,t)·2·dx
- <x>_{t+Δt} − <x>_t = Δt·M₁(X(t),t)
- <x>_{t+Δt} = <x>_t + Δt·M₁(X(t),t)
- (<x>_{t+Δt})² = (<x>_t + Δt·M₁(X(t),t))²
- (<x>_{t+Δt})² = (<x>_t)² + 2·<x>_t·Δt·M₁(X(t),t) + (Δt·M₁(X(t),t))²
- <ΔX²> = E[ΔX²] = F₂(X(t),t)·Δt = <(X − <X>)²>
- <ΔX²> = <(X − <X>)²> = <X² − 2X·<X> + <X>²>
- <ΔX²> = <X²> − 2·<X>·<X> + <X>² = <X²> − <X>²
31
From SDE to PDE and back – first pass - XIV
- <x²>_{t+Δt} − <x²>_t = I₂(2)·Δt
- I₂(2) = ∫_{−∞}^{+∞} M₁(x,t)·p(x,t)·2x·dx + ∫_{−∞}^{+∞} M₂(x,t)·p(x,t)·2·dx
- I₂(2) = ∫_{−∞}^{+∞} M₁(x,t)·Ύ(x−X(t))·2x·dx + ∫_{−∞}^{+∞} M₂(x,t)·Ύ(x−X(t))·2·dx
- I₂(2) = M₁(X(t),t)·2X(t) + 2·M₂(X(t),t)
- <x²>_{t+Δt} − <x²>_t = 2·M₁(X(t),t)·X(t)·Δt + 2·M₂(X(t),t)·Δt
- <ΔX²> = E[ΔX²] = F₂(X(t),t)·Δt (diffusion term)
- <ΔX²> = <(x − <x>_{t+Δt})²>_{t+Δt} = <x²>_{t+Δt} − (<x>_{t+Δt})²
- <x>_{t+Δt} − <x>_t = Δt·M₁(X(t),t)
- So: (<x>_{t+Δt})² = (<x>_t)² + 2·M₁(X(t),t)·<x>_t·Δt + (Δt·M₁(X(t),t))²
32
From SDE to PDE and back – first pass - XV
- So again, apologies for being pedestrian here, but I have seen too many textbooks where they go "let's set the drift to 0, then we get that, then it is easy to show when adding the drift back that it only changes the first order and does not impact the diffusion term"
- So we are trying to make sure that we are on firm ground here.
- Also, just to reassure us: all the integrals are normal integrals over the Probability Distribution and the possible outcomes x; there is no Ito versus Stratonovich here.
- <x²>_{t+Δt} − <x²>_t = 2·M₁(X(t),t)·X(t)·Δt + 2·M₂(X(t),t)·Δt
- And <x²>_t = ∫_{−∞}^{+∞} x²·p(x,t)·dx = ∫_{−∞}^{+∞} x²·Ύ(x−X(t))·dx = X(t)²
- <x²>_{t+Δt} = X(t)² + 2·M₁(X(t),t)·X(t)·Δt + 2·M₂(X(t),t)·Δt
- <ΔX²> = <(x − <x>_{t+Δt})²>_{t+Δt} = <x²>_{t+Δt} − (<x>_{t+Δt})²
- <x>_{t+Δt} − <x>_t = Δt·M₁(X(t),t)
- So: (<x>_{t+Δt})² = (<x>_t)² + 2·M₁(X(t),t)·<x>_t·Δt + (Δt·M₁(X(t),t))²
33
From SDE to PDE and back – first pass - XVI
- (<x>_{t+Δt})² = (<x>_t)² + 2·M₁(X(t),t)·<x>_t·Δt + (Δt·M₁(X(t),t))²
- <x>_t = ∫_{−∞}^{+∞} p(x,t)·x·dx = ∫_{−∞}^{+∞} ÎŽ(x−X(t))·x·dx = X(t)
- (<x>_{t+Δt})² = X(t)² + 2·M₁(X(t),t)·X(t)·Δt + (Δt·M₁(X(t),t))²
- <ΔX²> = <(x − <x>_{t+Δt})²>_{t+Δt} = <x²>_{t+Δt} − (<x>_{t+Δt})²
- <ΔX²> = <x²>_{t+Δt} − X(t)² − 2·M₁(X(t),t)·X(t)·Δt − (Δt·M₁(X(t),t))²
- And: <x²>_{t+Δt} = X(t)² + 2·M₁(X(t),t)·X(t)·Δt + 2·M₂(X(t),t)·Δt
- So: <ΔX²> = 2·M₂(X(t),t)·Δt − (Δt·M₁(X(t),t))² = F₂(X(t),t)·Δt
- In the limit of small time increments: F₂(X(t),t) = 2·M₂(X(t),t)
34
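The two identifications can be checked by a seeded Monte Carlo sketch (parameter values ours): for one small step of dX = a·dt + b·dW with constant a and b, we should find <ΔX> ≈ a·Δt (so F₁ = M₁ = a) and <ΔX²> ≈ b²·Δt = 2·M₂·Δt, up to O(Δt²):

```python
import numpy as np

rng = np.random.default_rng(3)

# One Euler step of dX = a dt + b dW starting from X(t) = 0 (constant a, b)
a, b, dt, n = 0.7, 1.3, 1e-2, 2_000_000
dX = a * dt + b * np.sqrt(dt) * rng.normal(size=n)

drift = float(dX.mean() / dt)                       # estimates F1 = M1 = a
diff = float(((dX - dX.mean()) ** 2).mean() / dt)   # estimates F2 = 2 M2 = b^2
print(drift, a)        # ~0.7
print(diff, b ** 2)    # ~1.69
```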
From SDE to PDE and back – first pass - XVII
- Wait, what did we get? Let's recap.
- We assumed that the Probability Distribution Function (PDF) follows the Fokker-Planck Partial Differential Equation (PDE):
- ∂p(x,t)/∂t = −∂/∂x [M₁(x,t)·p(x,t) − ∂/∂x [M₂(x,t)·p(x,t)]]
- Note: we have not proven yet that the FP can be derived from the Chapman-Kolmogorov; that block we will need to tackle separately. For now we are assuming that the PDF follows a Fokker-Planck PDE, and try to tie this to an SDE or SIE
- Looking at a random process X(t) such that p(x,t) = ÎŽ(x−X(t))
- <ΔX> = E[ΔX] = <x>_{t+Δt} − <x>_t = F₁(X(t),t)·Δt (drift term)
- <ΔX²> = E[ΔX²] = <(x − <x>_{t+Δt})²>_{t+Δt} = F₂(X(t),t)·Δt (diffusion term)
- We showed that F₁(X(t),t) = M₁(X(t),t) and F₂(X(t),t) = 2·M₂(X(t),t)
35
From SDE to PDE and back – first pass - XVIII
- So we would love to jump ahead of ourselves and write something like this: the SDE
- dX(t) = a(t,X(t))·dt + b(t,X(t))·([)·dW
- Has:
- <ΔX> = a(t,X(t))·Δt
- And
- <ΔX²> = b(t,X(t))²·Δt
- And so will correspond to the PDE:
- ∂p(x,t)/∂t = −∂/∂x [a(t,x)·p(x,t) − ∂/∂x [(b(t,x)²/2)·p(x,t)]]
- That is quite tempting indeed, but we have spent so much time trying not to be tricked into forgetting a small term that changes everything; it is not the time to be cavalier and gloss over the last few steps
36
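For constant a and b the tempting mapping can be tested end to end (a seeded sketch; step counts ours): simulate the SDE with an Euler scheme and compare with the solution of the corresponding PDE, which, starting from a Dirac peak at 0, is a Gaussian of mean a·t and variance b²·t:

```python
import numpy as np

rng = np.random.default_rng(4)

a, b, T = 0.5, 0.8, 2.0
n_paths, n_steps = 100_000, 400
dt = T / n_steps

# Euler scheme for dX = a dt + b dW, X(0) = 0 (with b constant, the Ito
# versus Stratonovich question does not arise)
X = np.zeros(n_paths)
for _ in range(n_steps):
    X += a * dt + b * np.sqrt(dt) * rng.normal(size=n_paths)

# FP solution from a Dirac peak: Gaussian with mean a*T and variance b^2*T
print(float(X.mean()), a * T)        # ~1.0
print(float(X.var()), b ** 2 * T)    # ~1.28
```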
From SDE to PDE and back – first pass - XIX
- It is so tempting, though, that we are going to use some illustrative examples.
- Remember, when we write an SDE:
- dX(t) = a(t,X(t))·dt + b(t,X(t))·([)·dW
- We really are writing an SIE, because random processes are NOT differentiable
- X(t_b) − X(t_a) = ∫_{t=t_a}^{t=t_b} dX(t) = ∫_{t=t_a}^{t=t_b} a(t,X(t))·dt + ∫_{t=t_a}^{t=t_b} b(t,X(t))·([)·dW(t)
- We need to calculate from this equation the quantities:
- <ΔX> = E[ΔX] = <X>_{t+Δt} − <X>_t (drift term)
- <ΔX²> = E[ΔX²] = <(X − <X>_{t+Δt})²>_{t+Δt} (diffusion term)
37
Some notes on the notations
- dX(t) = a(t,X(t))·dt + b(t,X(t))·([)·dW
- We will try to be rigorous when needed, and when it does not hide the intuition
- Usually UPPER CASE is for stochastic variables
- Usually LOWER CASE is for regular variables
- Say the Gaussian distribution below is the distribution for the process: dX(t) = dW
- We should really be writing the function using lower case
- p(x,t) = (1/√(2πt))·exp(−x²/(2t)) is the probability density function
- Really, to be rigorous, we should write it as: p_X(x,t)
- And the distribution function being: P_X(x,t)
- P_X(x,t) = Probability(X ≀ x at time t) = ∫_{y=−∞}^{y=x} p_X(y,t)·dy
38
Some notes on the notations - II
- PDF, Probability Density Function: p_X(x,t)
- Distribution function: P_X(x,t)
- P_X(x,t) = Probability(X ≀ x at time t) = ∫_{y=−∞}^{y=x} p_X(y,t)·dy
- p_X(x,t) = ∂P_X(x,t)/∂x
- This highlights the fact that capital letters are reserved for stochastic variables and lower case for just regular variables of a function.
- Usually we just use one or the other without paying too much attention; I will try to be rigorous on this one, but I know that I will fail
39
Some notes on the notations - III
- Also, instead of saying:
- "Looking at a random process X(t) such that p(x,t) = ÎŽ(x−X(t))" and then computing quantities at time (t+ÎŽt), we should really, to be rigorous, express this in terms of conditional probabilities: p_X(x, t+ÎŽt | x = X(t), t)
- More generally, for a starting condition: p(x, t=t₀) = ÎŽ(x−x₀)
- Sometimes written: p(x, t=t₀) = ÎŽ(x−x₀) with the added x₀ = X₀ = X(t₀)
- We then are concerning ourselves with: p_X(x,t | x₀,t₀)
- So for example, really:
- <ΔX> = E[ΔX] = <X>_{t+Δt} − <X>_t (drift term)
- <ΔX> = ∫_{y=−∞}^{y=+∞} y·p_X(y, t+Δt | x=X(t), t)·dy − ∫_{y=−∞}^{y=+∞} y·p_X(y, t | x=X(t), t)·dy
- <ΔX> = ∫_{y=−∞}^{y=+∞} y·p_X(y, t+Δt | x=X(t), t)·dy − ∫_{y=−∞}^{y=+∞} y·Ύ(y−X(t))·dy
- <ΔX> = ∫_{y=−∞}^{y=+∞} y·p_X(y, t+Δt | x=X(t), t)·dy − X(t)
40
From SDE to PDE and back – first pass - XX
- Just to recap where we stand:
- Assuming we have a Fokker-Planck equation of the form:
- ∂p(x,t)/∂t = −∂/∂x [M₁(x,t)·p(x,t) − ∂/∂x [M₂(x,t)·p(x,t)]]
- Looking at a specific solution p(x,t) = ÎŽ(x−X(t)), where ÎŽ is the Dirac function, and looking at "small" intervals in time afterwards, we have shown that:
- <ΔX> = E[ΔX] = <x>_{t+Δt} − <x>_t = F₁(X(t),t)·Δt (drift term)
- <ΔX²> = E[ΔX²] = <(x − <x>_{t+Δt})²>_{t+Δt} = F₂(X(t),t)·Δt (diffusion term)
- We showed that F₁(X(t),t) = M₁(X(t),t) and F₂(X(t),t) = 2·M₂(X(t),t)
- We NOW look at the formulation: dX(t) = a(t,X(t))·dt + b(t,X(t))·([)·dW
- And are looking to map a(t,X(t)), b(t,X(t)) ⟺ F₁(X(t),t), F₂(X(t),t)
- Since we already have: M₁(x,t), M₂(x,t) ⟺ F₁(X(t),t), F₂(X(t),t)
41
From SDE to PDE and back – first pass - XXI
- ∂p(x,t)/∂t = −∂/∂x [M₁(x,t)·p(x,t) − ∂/∂x [M₂(x,t)·p(x,t)]]
- m_k(t) = <x^k>_t = ∫_{−∞}^{+∞} p(x,t)·x^k·dx
- d/dt m_k(t) = I₂(k)
- I₂(k) = ∫_{−∞}^{+∞} M₁(x,t)·p(x,t)·k·x^{k−1}·dx + ∫_{−∞}^{+∞} M₂(x,t)·p(x,t)·k·(k−1)·x^{k−2}·dx
- In particular we have the following mapping:
- <ΔX> = E[ΔX] = <x>_{t+Δt} − <x>_t = F₁(X(t),t)·Δt (drift term)
- <ΔX²> = E[ΔX²] = <(x − <x>_{t+Δt})²>_{t+Δt} = F₂(X(t),t)·Δt (diffusion term)
- We showed that
- F₁(X(t),t) = M₁(X(t),t) and F₂(X(t),t) = 2·M₂(X(t),t)
42
dX=a.dt
43
A couple of simple examples dX=a.dt
- We feel that this one is going to be a little tough
- So better to start with simpler forms of the SDE, rather than the general one, to gain some intuition
- Ideally, if we had an explicit solution for:
- dX(t) = a(t,X(t))·dt + b(t,X(t))·([)·dW
- We could calculate from this equation the quantities for ALL times:
- <ΔX> = E[ΔX] = <X>_{t+Δt} − <X>_t (drift term)
- <ΔX²> = E[ΔX²] = <(X − <X>_{t+Δt})²>_{t+Δt} (diffusion term)
- That would be awesome, but we have a feeling that we are going to look at small expansions in time around the starting point, and we know by now that anything with the word "expansion" in it, when dealing with stochastic calculus, is fraught with danger
44
A couple of simple examples - II dX=a.dt
- dX(t) = a(t,X(t))·dt + b(t,X(t))·([)·dW
- How about we start with a(t,X(t)) = a and b(t,X(t)) = 0
- dX(t) = a·dt
- X(t) = a·t + C, but let's set C = 0
- X(t)² = (a·t)²
- <ΔX> = E[ΔX] = <X>_{t+Δt} − <X>_t (drift term)
- <ΔX> = a·(t+Δt) − a·t = a·Δt (note that the average is only over one point)
- <ΔX> = E[ΔX] = <x>_{t+Δt} − <x>_t = F₁(X(t),t)·Δt
- So: F₁(X(t),t) = a = F₁ = constant
- Note: we are overly pedestrian at this point, but we need to make sure that we are on firm footing before tackling the generalized equation, so apologies, but bear with me, or jump ahead a number of slides.
45
A couple of simple examples - III dX=a.dt
- <ΔX²> = E[ΔX²] = <(X − <X>_{t+Δt})²>_{t+Δt} (diffusion term)
- <ΔX²> = <([a·(t+Δt)] − <a·(t+Δt)>_{t+Δt})²>_{t+Δt} = 0
- <ΔX²> = E[ΔX²] = <(x − <x>_{t+Δt})²>_{t+Δt} = F₂(X(t),t)·Δt (diffusion term)
- So we have: F₂(X(t),t) = 0
- a(t,X(t)) = a, b(t,X(t)) = 0 ⟺ F₁(X(t),t) = a, F₂(X(t),t) = 0
- Trivial note: we did not say that F₂(X(t),t) = F₂ = b = 0; we are just saying that BOTH F₂(X(t),t) and b are equal to 0, not that they are equal to each other
- F₁(X(t),t) = M₁(X(t),t) and F₂(X(t),t) = 2·M₂(X(t),t)
- F₁(X(t),t) = a, F₂(X(t),t) = 0 ⟺ M₁(X(t),t) = a, M₂(X(t),t) = 0
- ∂p(x,t)/∂t = −∂/∂x [M₁(x,t)·p(x,t) − ∂/∂x [M₂(x,t)·p(x,t)]] = −∂/∂x [a·p(x,t)]
46
A couple of simple examples – III - a dX=a.dt
- Note that in our case we have an explicit solution, so we have a formula for ALL times
- <ΔX>(t) = a·t
- A fortiori for small time increments:
- <ΔX>_Δt = a·Δt
- And of course:
- <ΔX²>(t) = 0
- <ΔX²>_Δt = 0
47
A couple of simple examples - IV dX=a.dt
- ∂p(x,t)/∂t = −∂/∂x [a·p(x,t)], also called the advection equation
- We are now trying to solve this equation:
- We can guess a form:
- p(x,t) = f(x − a·t): the function just translates along the x axis with velocity a
- ∂p(x,t)/∂t = −a·f′(x − a·t)
- ∂p(x,t)/∂x = f′(x − a·t)
- So indeed: ∂p(x,t)/∂t = −a·∂p(x,t)/∂x = −∂/∂x J(x,t) = −∂/∂x [a·p(x,t)]
- With the initial function: p(x,t₀) = ÎŽ(x − X(t₀)) = ÎŽ(x − X₀)
- p(x,t) = f(x − a·t), and at time t = t₀, f(x − a·t₀) = ÎŽ(x − X₀)
48
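A quick finite-difference sketch that p(x,t) = f(x − a·t) solves the advection equation: for a smooth f (a Gaussian bump, our choice) the residual ∂p/∂t + a·∂p/∂x should vanish to discretization accuracy:

```python
import math

a = 0.7
f = lambda u: math.exp(-u * u)   # smooth initial shape (our choice)
p = lambda x, t: f(x - a * t)    # candidate solution p(x,t) = f(x - a t)

# Central finite differences at one sample point (x, t)
h = 1e-5
x, t = 0.4, 1.2
dp_dt = (p(x, t + h) - p(x, t - h)) / (2 * h)
dp_dx = (p(x + h, t) - p(x - h, t)) / (2 * h)

residual = dp_dt + a * dp_dx     # should be ~0: dp/dt = -a dp/dx
print(residual)
```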
A couple of simple examples - V dX=a.dt
- With the initial function: p(x,t₀) = ÎŽ(x − X(t₀)) = ÎŽ(x − X₀)
- p(x,t) = f(x − a·t), and at time t = t₀, f(x − a·t₀) = ÎŽ(x − X₀)
- So p(x,t) = ÎŽ(x − X₀ − a·(t − t₀))
- Just to be ever so pedestrian at this point:
- f(x − a·t₀) = ÎŽ(x − X₀)
- We change the variable to x₁ = x − a·t₀, so x − X₀ = x₁ + a·t₀ − X₀
- f(x₁) = ÎŽ(x₁ − X₀ + a·t₀)
- We now change again to the variable x₂ = x₁ + a·t,
- so x₁ − X₀ + a·t₀ = x₂ − a·t − X₀ + a·t₀
- f(x₂ − a·t) = ÎŽ(x₂ − a·t − X₀ + a·t₀)
- We change again, trivially, back to x = x₂
49
A couple of simple examples – VI dX=a.dt
- f(x − a·t) = ÎŽ(x − a·t − X₀ + a·t₀)
- p(x,t) = f(x − a·t) = ÎŽ(x − X₀ − a·(t − t₀))
- This seems rather obvious, but again we want to familiarize ourselves with the structure of the proof before we apply it to more complicated functionals
- This is what physicists like to call a propagation equation (in one dimension), because whatever the initial shape of p(x,t₀), it is conserved in time and is just translated by an amount a·(t − t₀); so thinking of x as a space dimension and t obviously as time, the initial shape moves at a velocity a
- I say whatever shape, because you can always express any function over the complete span of the Dirac functions
- Any function p(x,t) can be written as a sum (integral) of Dirac delta peaks, of height the value of the function at that position:
- p(x,t) = ∫_{X₀=−∞}^{X₀=+∞} p(X₀,t)·Ύ(x − X₀)·dX₀
50
A couple of simple examples – VII dX=a.dt
- So for the simple propagation / advection case: ∂p(x,t)/∂t = −∂/∂x [a·p(x,t)]
51
[Figure: a path drifting from <x(t)> = X(t) at time t to <x(t+Δt)> at time t+Δt]
A couple of simple examples – VIII dX=a.dt
- That picture really becomes:
52
[Figure: a straight line of slope a from <x(t₀)> = X₀ at t₀ to <x(t)> at time t, with <ΔX>(t) = a·t and <ΔX²>(t) = 0]
dX=b.dW
53
Second simple example dX=b.dW
- dX(t) = a(t,X(t))·dt + b(t,X(t))·([)·dW
- We choose a(t,X(t)) = 0 and b(t,X(t)) = b
- So the only thing that we can write with some certainty, now that we have to deal with a stochastic term that is non-zero, is the SIE:
- X(t_b) − X(t_a) = ∫_{t=t_a}^{t=t_b} dX(t) = ∫_{t=t_a}^{t=t_b} b(t,X(t))·([)·dW(t) = b·∫_{t=t_a}^{t=t_b} 1·([)·dW(t)
- NOW, because we are integrating a constant over dW, any convention that we take (Ito, Strato, or any other point in between in the partition) will converge to the same value
- ∫_{t=t_a}^{t=t_b} 1·([)·dW(t) = ∫_{t=t_a}^{t=t_b} 1·(∘)·dW(t) = W(t_b) − W(t_a)
- Again we are lucky enough to have an explicit solution for X(t):
- X(t_b) − X(t_a) = b·∫_{t=t_a}^{t=t_b} 1·([)·dW(t) = b·(W(t_b) − W(t_a))
54
Second simple example – II dX=b.dW
- Remember that:
- The Stratonovich integral is defined as:
- ∫_{t=t_a}^{t=t_b} f(X(t))·(∘)·dX(t) = lim_{N→∞} {∑_{k=1}^{k=N} f([X(t_k) + X(t_{k+1})]/2)·[X(t_{k+1}) − X(t_k)]}
- The Ito integral is defined as:
- ∫_{t=t_a}^{t=t_b} f(X(t))·([)·dX(t) = lim_{N→∞} {∑_{k=1}^{k=N} f(X(t_k))·[X(t_{k+1}) − X(t_k)]}
- Running the risk of sounding too obvious here: when f(x) = 1,
- f([X(t_k) + X(t_{k+1})]/2) = f(X(t_k)) = 1
- So both sums are exactly equal, hence will converge to the same value when N → ∞
55
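A seeded sketch of the two discretized sums on one simulated Brownian path (partition size ours): for f(x) = 1 they agree exactly, while for f(x) = x they differ by roughly half the quadratic variation, ≈ (t_b − t_a)/2:

```python
import numpy as np

rng = np.random.default_rng(5)

N, T = 100_000, 1.0
dt = T / N
W = np.concatenate([[0.0], np.cumsum(np.sqrt(dt) * rng.normal(size=N))])
dW = np.diff(W)                  # increments W(t_{k+1}) - W(t_k)
mid = 0.5 * (W[:-1] + W[1:])     # midpoints for the Stratonovich sum

def ito(f):
    return float(np.sum(f(W[:-1]) * dW))   # f evaluated at the left point

def strato(f):
    return float(np.sum(f(mid) * dW))      # f evaluated at the midpoint

one = lambda x: np.ones_like(x)
ident = lambda x: x

print(ito(one) - strato(one))       # exactly 0: the two sums are identical
print(strato(ident) - ito(ident))   # ~T/2 = 0.5, half the quadratic variation
```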
Second simple example – III dX=b.dW
- <ΔX> = E[ΔX] = <X>_{t+Δt} − <X>_t (drift term)
- <ΔX²> = E[ΔX²] = <(X − <X>_{t+Δt})²>_{t+Δt} (diffusion term)
- For the function X(t) = W(t), where W(t) is the Wiener process (Brownian motion)
- Always remember that almost all the time, it is always a Gaussian
- We are dealing here with the standard Brownian motion (you have to start somewhere)
- We will shortly do a quick sidebar as to why the Gaussian is so widely used
56
Second simple example – IV dX=b.dW
- The usual definition of a Brownian motion is that (see also the Bachelier deck):
- It starts at zero: W(t=0) = 0
- That one we can easily relax to a non-zero starting point; this is just for convenience
- It has independent increments: for any and every partition in time {t_k}, the variables defined as {W(t_{k+1}) − W(t_k)} are independent random variables
- It has stationary increments: for any and every partition in time {t_k}, the distribution of the random variable {W(t_k) − W(t_j)} is the same distribution (not the same value) as the distribution of the random variable {W(t_{k+T}) − W(t_{j+T})}
- It has continuous sample paths: no jumps, no traveling in time
- For every point in time t, the distribution of W(t) is the Normal distribution (Gaussian function) N(0,t)
57
Luc_Faucheux_2020
Second simple example – V dX=b.dW
š Couple of notes on terminology.
š The Gaussian function has usually 3 parameters:
š G(a, b, c, x) = a.exp(−(x−b)²/c)
š The “normalized” Gaussian function has only 2 parameters, obtained by solving for a so that:
š ∫_{x=−∞}^{x=+∞} G(a, b, c, x).dx = 1
š It is equal to:
š G(b, c, x) = (1/√(πc)).exp(−(x−b)²/c)
š A normalized Gaussian function is the Probability Density Function of a Gaussian
Distribution. This is also known as the Normal Distribution
58
Luc_Faucheux_2020
Second simple example – VI dX=b.dW
š If 𝑏 = 0 and 𝑐 = 1 that Normal Distribution function is known as the Standard Normal
Distribution function. It is also sometimes referred to as the Z-distribution (think of Z-score)
š So “normal” maybe means that it has been normalized, or that we are so used to the Bell shape that it seems “normal” to expect a Gaussian
¹ We will go over later why it is “normal” to usually expect a Gaussian. In short
š Gauss is awesome, he is the Prince of Mathematicians
š The Gaussian distribution is the limit of the Binomial distribution that we love
š It is also the limit of a lot of other distributions, as long as the second moment is finite
(Central limit theorem)
š If you truncate the Master equation to the second order, the solution will be a Gaussian
š The distribution of sum of variables following a Gaussian will ALSO be a Gaussian
š The Gaussian is such that only the 1st and 2nd cumulants are non-zero
š Using the principle of Maximum Entropy, you recover a Gaussian when you only know the
first 2 moments
š 
and a lot more reasons why the Gaussian is awesome
59
Luc_Faucheux_2020
Gauss is awesome
š When he was 12 years old he solved: S(n) = ∑_{i=1}^{i=n} i = n(n+1)/2
60
Luc_Faucheux_2020
Second simple example – VII dX=b.dW
š All right, back to the Brownian motion for now
š From the definition of the Brownian motion, a couple of properties ensues:
š Noting 𝔌 the expected value (integral over the distribution), we already know that:
š 𝔌{W(t)} = 0
š 𝔌{W(t)²} = t
š 𝔌{W(t).W(t′)} = min(t, t′)
š 𝔌{(W(t) − W(t′))²} = 𝔌{W(t)²} + 𝔌{W(t′)²} − 2.𝔌{W(t).W(t′)}
š 𝔌{(W(t) − W(t′))²} = t + t′ − 2.min(t, t′) = |t − t′|
61
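A sketch, not from the deck: the covariance 𝔌{W(t).W(t′)} = min(t, t′) can be checked by Monte Carlo on a grid of Brownian paths (the times, step size and path count below are arbitrary choices):

```python
import numpy as np

# Simulate many Brownian paths and estimate E[W(t).W(t')] at two fixed times.
rng = np.random.default_rng(1)
n_paths, n_steps, dt = 100_000, 50, 0.02
dW = rng.normal(0.0, np.sqrt(dt), (n_paths, n_steps))
W = np.cumsum(dW, axis=1)                 # W at times dt, 2.dt, ..., 1.0

i, j = 9, 39                              # indices for t = 0.2 and t' = 0.8
t, tp = (i + 1) * dt, (j + 1) * dt
cov = np.mean(W[:, i] * W[:, j])          # sample estimate of E[W(t).W(t')]

assert abs(cov - min(t, tp)) < 0.01       # should be min(0.2, 0.8) = 0.2
```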
Luc_Faucheux_2020
Second simple example – VIII dX=b.dW
š 𝔌{(W(t) − W(t′)).(W(t′′) − W(t′′′))} = min(t, t′′) + min(t′, t′′′) − min(t, t′′′) − min(t′, t′′)
š For a partition in time {t_i} and looking at the increments, we get for i < j for example:
š 𝔌{(W(t_{i+1}) − W(t_i)).(W(t_{j+1}) − W(t_j))} = t_{i+1} + t_i − t_{i+1} − t_i = 0
š So enforcing the Gaussian distribution automatically ensures that the Brownian motion has
independent increments.
š Note that for the list of criteria above, I think that it can be shown that the Gaussian is the only process with continuous paths. Other distributions will exhibit jumps (maybe Levy). I am not sure of this and might spend some time researching it, but for now we are happy to have a solution, and do not concern ourselves with its uniqueness.
62
Luc_Faucheux_2020
Second simple example – IX dX=b.dW
š All right this time back to our example:
š X(t_b) − X(t_a) = b.∫_{t=t_a}^{t=t_b} 1.([).dW(t) = b.(W(t_b) − W(t_a))
š < ∆X > = 𝔌{∆X} = < X >_{t+∆t} − < X >_t (drift term)
š < ∆X > = 𝔌{X(t + ∆t)} − 𝔌{X(t)} = 0
š Note on the drift term:
š < ∆X² > = 𝔌{∆X²} = < (X − < X >_{t+∆t})² >_{t+∆t} (diffusion term)
š Assuming X(t) = 0
š Then 𝔌{X(t + ∆t)} = < X >_{t+∆t} = 0
š < ∆X² > = 𝔌{∆X²} = 𝔌{X²(t + ∆t)} = b².(t + ∆t)
š If we choose p(x, t) = Ύ(x − X(t)) then < ∆X² > = b².∆t
63
Luc_Faucheux_2020
Second simple example – X dX=b.dW
š Couple of notes on probabilities and conditional probabilities
š We are looking at dX(t) = a(t, X(t)).dt + b(t, X(t)).([).dW
š ”starting” at time t, with p(x, t) = Ύ(x − X(t))
š So for the drift for example we are looking at:
š < ∆X > = 𝔌{∆X} = < X >_{t+∆t} − < X >_t (drift term)
š < ∆X > = 𝔌{∆X} = < X >_{t+∆t} − X(t)
š This is really: < ∆X > = 𝔌{∆X | X(t)} = < X | X(t) >_{t+∆t} − X(t)
š Those are really conditional probabilities
š For example if X(t = 0) = 0 we know that 𝔌{X(t)} = 0
š However for a given X(t), 𝔌{X(t + ∆t) | X(t)} = X(t)
64
Luc_Faucheux_2020
Second simple example – XI dX=b.dW
š Let’s recap:
š dX(t) = a(t, X(t)).dt + b(t, X(t)).([).dW = b.([).dW = b.(∘).dW
š < ∆X > = 0 = F₁(X(t), t).∆t
š < ∆X² > = b².∆t = F₂(X(t), t).∆t
š a(t, X(t)) = 0, b(t, X(t)) = b ⟺ F₁(X(t), t) = 0, F₂(X(t), t) = b²
š Trivial note: we did not say that F₁(X(t), t) = F₁ = a = 0, we are just saying that BOTH F₁(X(t), t) and a are equal to 0, not that they are equal to each other
š F₁(X(t), t) = M₁(X(t), t) and F₂(X(t), t) = 2.M₂(X(t), t)
š F₁(X(t), t) = 0, F₂(X(t), t) = b² ⟺ M₁(X(t), t) = 0, M₂(X(t), t) = b²/2
š ∂p(x,t)/∂t = −∂/∂x[M₁(x,t).p(x,t) − ∂/∂x[M₂(x,t).p(x,t)]] = ∂/∂x[∂/∂x[(b²/2).p(x,t)]]
65
Luc_Faucheux_2020
Second simple example – XII dX=b.dW
š So for the SDE (SIE):
š 𝑑𝑋 𝑡 = 𝑎 𝑡, 𝑋 𝑡 . 𝑑𝑡 + 𝑏 𝑡, 𝑋 𝑡 . ([). 𝑑𝑊 = 𝑏. ([). 𝑑𝑊 = 𝑏. (∘). 𝑑𝑊
š We have the equivalent PDE for the PDF:
š ∂p(x,t)/∂t = ∂/∂x[∂/∂x[(b²/2).p(x,t)]] = (b²/2).∂²p(x,t)/∂x²
š This is the celebrated Heat equation, of which the Gaussian function is a solution.
š This should not be that surprising since the definition of the Brownian motion is that:
š For every point in time 𝑡, the distribution for 𝑊 𝑡 is the Normal distribution (Gaussian
function) 𝑁(0, 𝑡)
š Note also that those really should be written as PDEs on the conditional probability density function; this is a notation that we will have to make more rigorous when we look at FORWARD and BACKWARD Kolmogorov equations
66
Luc_Faucheux_2020
Second simple example – XIII dX=b.dW
š 𝑑𝑋 𝑡 = 𝑎 𝑡, 𝑋 𝑡 . 𝑑𝑡 + 𝑏 𝑡, 𝑋 𝑡 . ([). 𝑑𝑊 = 𝑏. ([). 𝑑𝑊 = 𝑏. (∘). 𝑑𝑊
š Will most likely in Finance be used with 𝑏 = 𝜎, the volatility
š Will most likely in Physics be used with D = b²/2, the diffusion coefficient
š The solution of:
š ∂p(x,t)/∂t = (σ²/2).∂²p(x,t)/∂x² = D.∂²p(x,t)/∂x²
š Subject to p(x, t = t₀) = Ύ(x − X(t₀)) = Ύ(x − X₀) = Ύ(x − x₀) is:
š p(x,t) = p(x,t|x₀,t₀) = (1/√(4πD(t−t₀))).exp(−(x−x₀)²/(4D(t−t₀))) = (1/√(2πσ²(t−t₀))).exp(−(x−x₀)²/(2σ²(t−t₀)))
š < ∆X² > = b².∆t = σ².∆t = 2.D.∆t
67
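A sketch, not from the deck: since dX = b.dW has the explicit solution X(T) − X(0) = b.(W(T) − W(0)), we can sample it directly and check the first two moments against the heat-equation solution above (b and T below are arbitrary choices):

```python
import numpy as np

# Sample the explicit solution X(T) = b.W(T) starting from X(0) = 0.
rng = np.random.default_rng(2)
b, T, n_paths = 0.5, 1.0, 500_000
X_T = b * rng.normal(0.0, np.sqrt(T), n_paths)   # W(T) ~ N(0, T)

assert abs(X_T.mean()) < 0.01                    # <dX> = 0 : no drift
assert abs(X_T.var() - b**2 * T) < 0.01          # <dX^2> = b^2.T = 2.D.T
```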
Luc_Faucheux_2020
Second simple example – XIII – b dX=b.dW
š So the picture becomes:
68
[Figure: the PDF of x spreading out between times t and t + ∆t, centered at < x(t) > = X(t) and < x(t + ∆t) >]
Luc_Faucheux_2020
Second simple example – XIV dX=b.dW
š A really amazing book to read is :
69
Luc_Faucheux_2020
Second simple example – XV dX=b.dW
š In it, he points out that one of the reasons why we love the Gaussian distribution so much is that it is invariant under addition (p.157)
š If x₁ and x₂ are two independent random variables following the Gaussian distribution, then
š W = x₁ + x₂ will also follow a Gaussian distribution, and
š < ∆W² > = b².∆t = σ².∆t = < ∆(x₁ + x₂)² > = < ∆x₁² > + < ∆x₂² > = σ²_{x₁}.∆t + σ²_{x₂}.∆t
š σ²_{x₁+x₂} = σ²_{x₁} + σ²_{x₂}
š In particular, in the case of N independent random increments x_i following a Gaussian distribution, the sum of those random increments will also be a random variable following a Gaussian distribution of variance:
š < ∆(∑ x_i)² > = 𝔌{(∑ ∆x_i)²} = σ².∆t = ∆t.∑ σ²_{x_i}
70
Luc_Faucheux_2020
Second simple example – XVI dX=b.dW
š The invariance of the Gaussian distribution under addition is closely connected with the CLT
(Central Limit Theorem), which states that a suitably normalized sum of many independent
variables with finite variances (do not have to follow a Gaussian distribution) will converge
to a Gaussian distribution, another reason why the Gaussian distribution is awesome
š Note that the Gaussian distribution is invariant under addition for exponent 2
š σ²_{x₁+x₂} = σ²_{x₁} + σ²_{x₂}
š Schroeder points out that a number of distributions are invariant under addition for a different exponent D_f (not diffusion, but more like a dimension coefficient, see fractal theory).
š In particular, the celebrated Cauchy distribution: p(x) = 1/(π(1+x²)) is invariant under addition for exponent D_f = 1
š As another example, for D_f = 1/2, the distribution that is invariant under addition is:
š p(x) = (1/√(2π)).x^(−3/2).exp(−1/(2x))
71
Luc_Faucheux_2020
Second simple example – XVII dX=b.dW
š Another note on the Gaussian and Cauchy distribution.
š Suppose that we have 𝑁 identically distributed random variables.
š For Gaussian distribution:
š < ∆X² > = < ∆(∑ x_i)² > = 𝔌{(∑ ∆x_i)²} = σ².∆t = ∆t.∑ σ²_{x_i} = N.∆t.σ_i²
š σ² = N.σ_i²
š And so the AVERAGE (not the SUM) of those variables will be such that:
š < ∆(X/N)² > = (1/N²).< ∆(∑ x_i)² > = (1/N²).N.∆t.σ_i² = (1/N).∆t.σ_i² = ∆t.σ²_avg
š So for variables following a Gaussian distribution, the more measurements you make and average, the more precise an estimate of the average you have:
š σ²_avg = (1/N).σ_i² or equivalently: σ_avg = (1/√N).σ_i
72
Luc_Faucheux_2020
Second simple example – XVIII dX=b.dW
š This is why in Physics, for most experiments, you believe that the more measurements you make, the better an estimate you will get.
š This is the often quoted “convergence” in 1/√N for most computer simulations
š Note that this is not true in Finance, where you do not have the luxury of running a lot of experiments on the same physical system. In Finance you do not have the luxury of a control experiment, nor do you have the luxury of a steady-state solution.
š In contrast, the Cauchy distribution is such that: σ = N.σ_i
š And so the distribution of the AVERAGE of N identically distributed Cauchy variables is the SAME as the original distribution.
š Averaging Cauchy variables does not improve the estimate.
š Averaging Gaussian variables improves the estimate.
73
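A sketch, not from the deck, of the contrast above: batch averages of Gaussian draws tighten like 1/√N, while the average of N standard Cauchy draws is itself standard Cauchy and never tightens (batch sizes below are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(3)
N, batches = 500, 400

# For each batch, average N draws; look at how the batch means are spread.
gauss_means = rng.normal(0.0, 1.0, (batches, N)).mean(axis=1)
cauchy_means = rng.standard_cauchy((batches, N)).mean(axis=1)

# Gaussian batch means concentrate: empirical std ~ 1/sqrt(N) ~ 0.045
assert np.std(gauss_means) < 0.1
# Cauchy batch means do not: some averages remain of order 1 or larger
assert np.max(np.abs(cauchy_means)) > 1.0
```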
Luc_Faucheux_2020
Some Physics terminology on the Diffusion equation
š 𝑑𝑋 𝑡 = 𝑎 𝑡, 𝑋 𝑡 . 𝑑𝑡 + 𝑏 𝑡, 𝑋 𝑡 . ([). 𝑑𝑊 = 𝑏. ([). 𝑑𝑊 = 𝑏. (∘). 𝑑𝑊
š That can be numerically simulated on a computer
š See also the deck on Binomial and Bachelier with a Taylor expansion of the Binomial process
š ∂p(x,t)/∂t = (σ²/2).∂²p(x,t)/∂x² = D.∂²p(x,t)/∂x²
š But physicists like to write it as:
š The diffusion current is sometimes defined as: J_D(x,t) = −D.∂p(x,t)/∂x (Fick’s law)
š ∂p(x,t)/∂t = −∂J_D(x,t)/∂x
š The usual example is a drop of ink diffusing into a glass of water
š The usual example is a drop of ink diffusing into a glass of water
74
Luc_Faucheux_2020
Some Physics terminology on the Diffusion equation - II
š Drifts caused by diffusion and drifts caused by external forces
š Internal random drifts and external forcing drifts
š When the particle follows a propagation equation: 𝑑𝑋 𝑡 = 𝑎. 𝑑𝑡
š ∂p(x,t)/∂t = −a.∂p(x,t)/∂x = −∂J_A(x,t)/∂x = −∂/∂x[a.p(x,t)]
š There is a directed motion due to a drift: J_A(x,t) = a.p(x,t)
š When the particle follows a diffusion equation: dX(t) = b.([).dW
š ∂p(x,t)/∂t = (σ²/2).∂²p(x,t)/∂x² = D.∂²p(x,t)/∂x² = −∂J_D(x,t)/∂x with D = b²/2
š There is a diffusive (random) motion due to the diffusion current: J_D(x,t) = −D.∂p(x,t)/∂x
75
Luc_Faucheux_2020
Our first brush with Feynman-Kac
76
Luc_Faucheux_2020
Since we know about Black-Scholes and options
š dS(t) = a(t, S(t)).dt + σ(t, S(t)).([).dW = σ.([).dW = σ.(∘).dW
š S(t) follows a PDF that is a solution of the PDE:
š ∂p(x,t)/∂t = (σ²/2).∂²p(x,t)/∂x²
77
[Figure: the PDF of S spreading out between times t and t + ∆t, centered at < S(t) > = S(t) and < S(t + ∆t) >]
Luc_Faucheux_2020
Since we know about Black-Scholes and options - II
š We know from the FTAP (Fundamental Theorem of Asset Pricing), that a derivative (in
particular a call price) is the discounted value of the expected payoff of the derivative at
maturity
78
Luc_Faucheux_2020
Since we know about Black-Scholes and options - III
š The option value is then the product of the distribution function at expiry with the terminal payoff of the option
š C(S₀, T) = ∫ P(S′, S₀, T).PAYOFF(S′).dS′
š C(S₀, T) = ∫ P(S₀ → S′, t = 0 → T).PAYOFF(S′).dS′
79
[Figure: paths fanning out from S₀ at time t to the terminal states S′ at t = T; the option value is the overlap of the terminal distribution with the payoff]
Luc_Faucheux_2020
Since we know about Black-Scholes and options - IV
š The option value is now a function of S (instead of being formally a function of S₀)
š C(S, T) = ∫ P(S′, S, T).PAYOFF(S′).dS′
š C(S, T) = ∫ P(S′ → S, t = T → 0).PAYOFF(S′).dS′
80
[Figure: the backward view, from the terminal states S′ at maturity back to S at times t and t + ∆t]
Luc_Faucheux_2020
Since we know about Black-Scholes and options - V
š C(S₀, T, K, σ) = ∫ P(S₀ → S′, t = 0 → t = T).PAYOFF(S′).dS′
š In the case of a regular call option: PAYOFF(S′) = (S′ − K)⁺ = MAX(S′ − K, 0)
š C(S₀, T, K, σ) = ∫_{S′=−∞}^{S′=+∞} P(S₀ → S′, t = 0 → t = T).(S′ − K)⁺.dS′
š C(S₀, T, K, σ) = ∫_{S′=K}^{S′=+∞} P(S₀ → S′, t = 0 → t = T).(S′ − K).dS′
š ∂C(S₀, T, K, σ)/∂K = ∫_{S′=K}^{S′=+∞} P(S₀, S′, T).∂(S′ − K)/∂K.dS′ − (K − K).P(S₀, K, T)
š ∂C(S₀, T, K, σ)/∂K = −∫_{S′=K}^{S′=+∞} P(S₀, S′, T).dS′
š ∂²C(S₀, T, K, σ)/∂K² = P(S₀, K, T)
š Note that we can write: P(S₀, K, T) = ∫_{S′=−∞}^{S′=+∞} P(S₀, S′, T).Ύ(S′ − K).dS′
81
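The relation ∂²C/∂K² = P(S₀, K, T) above can be checked numerically. A sketch, not from the deck, assuming a Gaussian terminal density (consistent with the normal framework used here; S₀, σ, T, K values are arbitrary choices): price calls C(K) by integrating the payoff against the density, then take a second finite difference in the strike.

```python
import numpy as np

S0, sig, T = 100.0, 10.0, 1.0
sd = sig * np.sqrt(T)
grid = np.linspace(S0 - 8 * sd, S0 + 8 * sd, 40001)
dx = grid[1] - grid[0]
p = np.exp(-(grid - S0)**2 / (2 * sd**2)) / np.sqrt(2 * np.pi * sd**2)

def call(K):
    # C(K) = integral of (S' - K)+ against the terminal density P(S0, S', T)
    return np.sum(np.maximum(grid - K, 0.0) * p) * dx

K, h = 105.0, 0.5
density = (call(K + h) - 2.0 * call(K) + call(K - h)) / h**2   # ~ d2C/dK2
exact = np.exp(-(K - S0)**2 / (2 * sd**2)) / np.sqrt(2 * np.pi * sd**2)

assert abs(density - exact) < 1e-3   # recovers P(S0, K, T)
```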
Luc_Faucheux_2020
Since we know about Black-Scholes and options - VI
š Note that we can write: P(S₀, K, T) = ∫_{S′=−∞}^{S′=+∞} P(S₀, S′, T).Ύ(S′ − K).dS′
š Where Ύ(S′ − K) is the Dirac peak,
š Ύ(S′ − K) = 0 for all S′ ≠ K
š ∫_{S′=−∞}^{S′=+∞} 1.Ύ(S′ − K).dS′ = 1
š ∫_{S′=−∞}^{S′=+∞} P(S₀, S′, T).Ύ(S′ − K).dS′ = P(S₀, K, T)
š So:
š P(S₀, K, T) = ∂²C(S₀, T, K, σ)/∂K² = ∫_{S′=−∞}^{S′=+∞} P(S₀, S′, T).Ύ(S′ − K).dS′
82
Luc_Faucheux_2020
Since we know about Black-Scholes and options - VII
š Putting ourselves in the NORMAL framework for Black-Scholes (totally our right), the above equation re-writes itself as:
š P(S(t), K, T) = ∂²C(S(t), T, K, σ)/∂K² = ∫_{S′=−∞}^{S′=+∞} P(S′ | S(t)).Ύ(S′ − K).dS′
š This is quite powerful
š The PDF 𝑃 𝑆(𝑡), 𝐟, 𝑇 is given as the expectation value of some payoff
š The PDF that is a solution of a PDE, is also the conditional expectation under some
probability measure that is related to the SDE
š If you only had Excel, and did not know anything about calculus, this is how you could solve
the PDE:
83
Luc_Faucheux_2020
Since we know about Black-Scholes and options - VIII
84
[Figure: left, the PDF of S spreading from < S(t) > = S(t) to < S(t + ∆t) >; right, the terminal payoff Ύ(S′ − K), a Dirac peak at K]
Luc_Faucheux_2020
Since we know about Black-Scholes and options - IX
85
š ∂p(S,t)/∂t = (σ²/2).∂²p(S,t)/∂S² is a diffusion equation (Forward equation)
š If you know calculus, a solution is the Gaussian: p(S,t) = (1/√(2πσ²(t−t₀))).exp(−(S−S₀)²/(2σ²(t−t₀))) for the initial condition p(S, t = t₀) = Ύ(S − S₀)
š If you do not know calculus but have Excel, you simulate the random process (SDE) associated to the above PDE, which is: dS(t) = σ.([).dW = σ.(∘).dW
š You calculate the expectation of the Dirac delta function at maturity T
š This will give you exactly p(S,t) = (1/√(2πσ²(T−t))).exp(−(S−K)²/(2σ²(T−t)))
š If you think about it, it is kind of awesome.
š It is a very crude first introduction to the Feynman-Kac theorem (1950)
š This is also an illustration of the forward-backward formalism in PDE
Luc_Faucheux_2020
Since we know about Black-Scholes and options - X
š The PDF is a solution of a heat equation (PDE)
š It is also the expectation of the payoff at maturity equal to the Dirac delta
š A call price is also the expectation of a payoff
š And so the call price ALSO follows a heat equation (the celebrated Black-Scholes)
š So the PDF, and also any derivative of the stock that follows the SDE, will all follow the same PDE, just with different boundary conditions, as they all are expectations of payoffs (granted, with the Green propagator being the Gaussian, so this is a little self-referential, but at least it is consistent)
86
Luc_Faucheux_2020
Since we know about Black-Scholes and options - XI
š Since we are here, here is something really cool about Black-Scholes that I could not really put in any of the other decks, so here it is:
š C(S₀, T, K, σ) is a function of the stock S₀
š We can write the ITO lemma on the call (this is what Black and Scholes did)
š dC(S,t) = (∂C/∂S).dS + (∂C/∂t).dt + (1/2).(∂²C/∂S²).(dS)²
š dC(S,t) = (∂C/∂S).dS + (∂C/∂t).dt + (1/2).(∂²C/∂S²).σ².dt = (∂C/∂S).dS + dt.[(∂C/∂t) + (1/2).(∂²C/∂S²).σ²]
š What is inside the bracket is the Black-Scholes equation (for r = 0)
š So: dC(S,t) = (∂C/∂S).dS, or in the SIE form that we should really always use:
š C(t_b) − C(t_a) = ∫_{t=t_a}^{t=t_b} (∂C/∂S).dS(t) = ∫_{t=t_a}^{t=t_b} ∆.dS(t) where ∆ is the Black-Scholes Greek.
š Pretty nifty, no? The call option is the integral over the stock of the delta
87
Luc_Faucheux_2020
Since we know about Black-Scholes and options - XII
š C(t_b) − C(t_a) = ∫_{t=t_a}^{t=t_b} (∂C/∂S).dS(t) = ∫_{t=t_a}^{t=t_b} ∆.dS(t) where ∆ is the Black-Scholes Greek.
š If we set t_b = T, the maturity of the option, and t_a = t for sake of clarity:
š C(T) − C(t) = ∫_{s=t}^{s=T} ∆.dS(s)
š At maturity the call price is equal to the payoff, in this example C(T) = MAX(S(T) − K, 0)
š Let’s note H(T) = MAX(S(T) − K, 0) the payoff function.
š Let’s compute the expected value of the above equation
š 𝔌{C(T) | S(t)} = 𝔌{H(T) | S(t)} = 𝔌{(S(T) − K)⁺ | S(t)} = 𝔌{MAX(S(T) − K, 0) | S(t)}
š 𝔌{C(t) | S(t)} = C(t)
š Under the probability measure where the stock is a martingale:
š 𝔌{∫_{s=t}^{s=T} ∆.dS(s)} = 0 under the ITO integral (the ITO integral of a trading strategy is a martingale)
88
Luc_Faucheux_2020
Since we know about Black-Scholes and options - XIII
š And so we have:
š C(t) = 𝔌{C(T) | S(t)} = 𝔌{MAX(S(T) − K, 0) | S(t)}
š We recover the fact that the call price is the expected value of the terminal payoff, under the proper probability distribution associated to the specific numeraire we chose, properly discounted (in our case in order to simplify we had r = 0; said another way, the Money Market Numeraire is the trivial constant 1)
š This is kind of neat.
š Using the ITO lemma (in ITO calculus), the fact that the Call price follows a PDE (Black-Scholes) of the Heat Equation type (Fokker-Planck), and the fact that the ITO integral is a martingale, we can then express the Call as the expected value of a terminal payoff (boundary condition) under the conditional probability distribution
š This gives us a little flavor of Feynman-Kac and also of the McKean derivation of the link between a non-linear SDE and the associated PDE
89
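A sketch, not from the deck: the statement C(t) = 𝔌{MAX(S(T) − K, 0) | S(t)} can be checked by Monte Carlo in the normal (Bachelier) framework with r = 0, where the closed form is C = (S₀ − K).Ω(d) + σ√T.φ(d) with d = (S₀ − K)/(σ√T) (the parameter values below are arbitrary choices):

```python
import numpy as np
from math import erf, exp, pi, sqrt

rng = np.random.default_rng(4)
S0, K, sig, T = 100.0, 105.0, 10.0, 1.0
sd = sig * sqrt(T)

# Closed-form Bachelier call price (r = 0)
d = (S0 - K) / sd
Phi = 0.5 * (1.0 + erf(d / sqrt(2.0)))       # standard normal CDF at d
phi = exp(-d * d / 2.0) / sqrt(2.0 * pi)     # standard normal PDF at d
closed_form = (S0 - K) * Phi + sd * phi

# Monte Carlo expectation of the terminal payoff, S_T = S0 + sd.Z
S_T = S0 + sd * rng.normal(0.0, 1.0, 1_000_000)
mc = np.maximum(S_T - K, 0.0).mean()

assert abs(mc - closed_form) < 0.05
```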
Luc_Faucheux_2020
dX=a.dt + b.dW
90
Luc_Faucheux_2020
Third simple example dX=a.dt + b.dW
š 𝑑𝑋 𝑡 = 𝑎 𝑡, 𝑋 𝑡 . 𝑑𝑡 + 𝑏 𝑡, 𝑋 𝑡 . ([). 𝑑𝑊
š We choose 𝑎 𝑡, 𝑋 𝑡 = 𝑎 and 𝑏 𝑡, 𝑋 𝑡 = 𝑏
š So the only thing that we can write with some certainty now that we have to deal with a
stochastic term that is non-zero is the SIE:
š X(t_b) − X(t_a) = ∫_{t=t_a}^{t=t_b} dX(t) = a.∫_{s=t_a}^{s=t_b} 1.ds + b.∫_{t=t_a}^{t=t_b} 1.([).dW(t)
š We could manually redo the calculation of the first and second moment:
š < ∆X > = 𝔌{∆X} = < X >_{t+∆t} − < X >_t (drift term)
š < ∆X² > = 𝔌{∆X²} = < (X − < X >_{t+∆t})² >_{t+∆t} (diffusion term)
š Or we could leverage the work we already did by looking at a change of variables.
91
Luc_Faucheux_2020
Third simple example – II dX=a.dt + b.dW
š 𝑑𝑋 𝑡 = 𝑎. 𝑑𝑡 + 𝑏. ([). 𝑑𝑊
š We are looking at the change of variables:
š X(t) → X′(t′) = X(t′) − a.t′ = X(t) − a.t
š t → t′ = t
š dX′(t′) = dX(t′) − a.dt′ = a.dt′ + b.([).dW(t′) − a.dt′ = b.([).dW(t′)
š We know that this SDE corresponds to the mapping:
š ∂p′(x′,t′)/∂t′ = ∂/∂x′[∂/∂x′[(b²/2).p′(x′,t′)]] = (b²/2).∂²p′(x′,t′)/∂x′²
š We then go back to the original variables 𝑋 𝑡 and 𝑡, but we need to be a little careful here
on the notations.
92
Luc_Faucheux_2020
Third simple example – III dX=a.dt + b.dW
š X(t) → X′(t′) = X(t′) − a.t′ = X(t) − a.t
š t → t′ = t
š So we have
š X′(t′) → X(t) = X′(t′) + a.t′
š t′ → t = t′
š We ALSO define a transformation on the regular variables:
š x′ = f(x, t) = x − a.t and so x = g(x′, t′) = x′ + a.t′
š ∂/∂x′ = (∂/∂x).(∂x/∂x′) + (∂/∂t).(∂t/∂x′) = ∂/∂x
š ∂/∂t′ = (∂/∂x).(∂x/∂t′) + (∂/∂t).(∂t/∂t′) = (∂/∂x).a + ∂/∂t
š ∂²/∂x′² = (∂/∂x′).(∂/∂x′) = (∂/∂x′).(∂/∂x) = (∂/∂x).(∂/∂x) = ∂²/∂x²
93
Luc_Faucheux_2020
Third simple example – III –a dX=a.dt + b.dW
š We have:
š ∂p′(x′,t′)/∂t′ = (b²/2).∂²p′(x′,t′)/∂x′²
š And we have the relations between the partial derivatives, HOWEVER we do not yet have the relation between p′(x′,t′) and p(x,t)
š Where: p(x,t) = p_X(x,t) and p′(x′,t′) = p_X′(x′,t′)
š PDF Probability Density Function: p_X′(x′,t)
š Distribution Function: P_X′(x′,t)
š P_X′(x′,t) = Probability(X′ ≀ x′, t) = ∫_{y=−∞}^{y=x′} p_X′(y,t).dy
š p_X′(x′,t) = ∂P_X′(x′,t)/∂x′
94
Luc_Faucheux_2020
Third simple example – III –b dX=a.dt + b.dW
š This highlights the fact that capital letters are reserved for stochastic variables, while lower-case letters are for the regular variables of a function.
š Usually we just use one or the other without paying too much attention
š But here we are doing a change of variable on a PDF, so we need to be a little careful.
95
Luc_Faucheux_2020
Third simple example – III –c dX=a.dt + b.dW
š Variable change through the Distribution function technique (one dimension)
š Suppose that we have a stochastic variable 𝑋(𝑡)
š Suppose that there is a PDF p_X(x,t) and a DF P_X(x,t)
š Suppose that we define X′(t) = Ί(X(t)), and that we can invert it (take Ί increasing): X(t) = φ(X′(t))
š Formally, to go from one DF to the other we would write something like this:
š P_X′(x′,t) = Probability(X′ ≀ x′, t) = Probability(Ω(X(t)) ≀ x′, t)
š P_X′(x′,t) = Probability(Ί(X(t)) ≀ x′, t) = Probability(X(t) ≀ φ(x′), t)
š P_X′(x′,t) = Probability(X(t) ≀ φ(x′), t) = P_X(φ(x′), t)
š And then applying: p_X′(x′,t) = ∂P_X′(x′,t)/∂x′
96
Luc_Faucheux_2020
Third simple example – III –d dX=a.dt + b.dW
š P_X′(x′,t) = Probability(X(t) ≀ φ(x′), t) = P_X(φ(x′), t)
š p_X′(x′,t) = ∂P_X′(x′,t)/∂x′
š p_X(x,t) = ∂P_X(x,t)/∂x
š X′(t) = Ί(X(t))
š X(t) = φ(X′(t))
š So:
š p_X′(x′,t) = ∂P_X′(x′,t)/∂x′ = ∂/∂x′[P_X(φ(x′), t)] = ∂/∂x′[P_X(x = φ(x′), t)]
š p_X′(x′,t) = ∂/∂x′[P_X(x = φ(x′), t)] = (∂P_X(x,t)/∂x).(∂φ(x′)/∂x′) = p_X(x,t).(∂φ(x′)/∂x′)
97
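A sketch, not from the deck, of the change-of-variables formula just derived. Take X ~ N(0,1) and X′ = Ί(X) = exp(X), so x = φ(x′) = ln(x′) and ∂φ/∂x′ = 1/x′: the transformed density p_X(ln x′)/x′ (the lognormal density) should still integrate to 1 and match a histogram of transformed samples (the grid and sample sizes below are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(5)

p_X = lambda x: np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)      # N(0,1) PDF
p_Xprime = lambda xp: p_X(np.log(xp)) / xp                  # p_X(phi(x')).dphi/dx'

# {p.dx} is conserved: the transformed density still integrates to 1
xp = np.linspace(0.01, 50.0, 200_000)
mass = np.sum(p_Xprime(xp)) * (xp[1] - xp[0])

# and it matches a histogram of X' = exp(X) samples
samples = np.exp(rng.normal(0.0, 1.0, 1_000_000))
counts, edges = np.histogram(samples, bins=np.linspace(0.5, 3.0, 26))
centers = 0.5 * (edges[:-1] + edges[1:])
emp = counts / (samples.size * (edges[1] - edges[0]))

assert abs(mass - 1.0) < 0.01
assert np.max(np.abs(emp - p_Xprime(centers))) < 0.02
```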
Luc_Faucheux_2020
Third simple example – III –e dX=a.dt + b.dW
š 𝑋′ 𝑡 = Ί(𝑋 𝑡 )
š 𝑋 𝑡 = 𝜑(𝑋′ 𝑡 )
š p_X′(x′,t) = p_X(x,t).(∂φ(x′)/∂x′)
š This is in one dimension.
š In the case of multiple dimensions and joint probabilities this gets more complicated and involves the Jacobian determinant
š p_X′(x′,t) = p_X(x,t).(∂φ(x′)/∂x′) and noting x = φ(x′) and x′ = Ί(x)
š ∂φ(x′)/∂x′ = dx/dx′ = dφ(x′)/dx′
š The density of probability {p_X′(x′,t).dx′} = {p_X(x,t).dx} is conserved
š If you integrate under the curve, then change the variable of integration, this is the usual
result
98
Luc_Faucheux_2020
Third simple example – III –f dX=a.dt + b.dW
š In our case
š x′ = f(x, t) = x − a.t and so x = g(x′, t′) = x′ + a.t′
š dx/dx′ = dφ(x′)/dx′ = 1
š Because this is not an integration over two variables; here the time is only a parametrization
š So
š {p_X′(x′,t′).dx′} = {p_X(x,t).dx}
š p_X′(x′,t′) = p_X(x,t)
š We can replace p_X′(x′,t′) by p_X(x,t) in the equation: ∂p′(x′,t′)/∂t′ = (b²/2).∂²p′(x′,t′)/∂x′²
š Or, dropping the subscript, replace p′(x′,t′) by p(x,t)
š Or dropping the subscript replace 𝑝′ 𝑥2, 𝑡′ by 𝑝 𝑥, 𝑡
99
Luc_Faucheux_2020
Third simple example – IV dX=a.dt + b.dW
š ∂p′(x′,t′)/∂t′ = (b²/2).∂²p′(x′,t′)/∂x′²
š Becomes:
š [(∂/∂x).a + ∂/∂t].p(x,t) = (b²/2).∂²p(x,t)/∂x²
š ∂p(x,t)/∂t = −a.∂p(x,t)/∂x + (b²/2).∂²p(x,t)/∂x²
š Given the fact that we are dealing with constants, we can freely move those in and out of
the partial derivatives
š Note that this is not the same when those start becoming functions, in particular functions
of 𝑥, and especially when 𝑏 becomes a function of 𝑥. 𝑏 𝑥 is the killer
š ∂p(x,t)/∂t = −∂/∂x[a.p(x,t) − (b²/2).∂p(x,t)/∂x] = −∂/∂x[a.p(x,t) − D.∂p(x,t)/∂x] = −∂/∂x[J_A + J_D]
š J_A(x,t) = a.p(x,t) and J_D(x,t) = −D.∂p(x,t)/∂x
100
Luc_Faucheux_2020
Third simple example – V dX=a.dt + b.dW
š So we have the mapping:
š 𝑑𝑋 𝑡 = 𝑎. 𝑑𝑡 + 𝑏. ([). 𝑑𝑊
š ∂p(x,t)/∂t = −∂/∂x[a.p(x,t) − (b²/2).∂p(x,t)/∂x] = −∂/∂x[a.p(x,t) − D.∂p(x,t)/∂x] = −∂/∂x[J_A + J_D]
š J_A(x,t) = a.p(x,t) and J_D(x,t) = −D.∂p(x,t)/∂x
š ∂p(x,t)/∂t = −∂/∂x[M₁(x,t).p(x,t) − ∂/∂x[M₂(x,t).p(x,t)]]
š < ∆X > = 𝔌{∆X} = < x >_{t+∆t} − < x >_t = F₁(X(t), t).∆t (drift term)
š < ∆X² > = 𝔌{∆X²} = < (x − < x >_{t+∆t})² >_{t+∆t} = F₂(X(t), t).∆t (diffusion term)
š We showed that F₁(X(t), t) = M₁(X(t), t) and F₂(X(t), t) = 2.M₂(X(t), t)
101
Luc_Faucheux_2020
Third simple example – VI dX=a.dt + b.dW
š M₁(x,t) = a = F₁(X(t), t)
š M₂(X(t), t) = (1/2).F₂(X(t), t) and M₂(X(t), t) = b²/2 so F₂(X(t), t) = b²
š So:
š < ∆X > = 𝔌{∆X} = < x >_{t+∆t} − < x >_t = F₁(X(t), t).∆t = a.∆t
š < ∆X² > = 𝔌{∆X²} = < (x − < x >_{t+∆t})² >_{t+∆t} = F₂(X(t), t).∆t = b².∆t
š < ∆X² > = b².∆t = σ².∆t = 2D.∆t
102
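The mapping < ∆X > = a.∆t and < ∆X² > = b².∆t can be checked by brute force with an Euler discretization of the SDE (a sketch, not from the deck; the a, b, T values below are arbitrary choices):

```python
import numpy as np

# Euler-Maruyama simulation of dX = a.dt + b.dW over [0, T], X(0) = 0.
rng = np.random.default_rng(6)
a, b, T, n_steps, n_paths = 0.3, 0.5, 1.0, 100, 200_000
dt = T / n_steps
X = np.zeros(n_paths)
for _ in range(n_steps):
    X += a * dt + b * rng.normal(0.0, np.sqrt(dt), n_paths)

assert abs(X.mean() - a * T) < 0.01        # drift: <X(T)> = a.T
assert abs(X.var() - b**2 * T) < 0.01      # diffusion: Var X(T) = b^2.T = 2.D.T
```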
Luc_Faucheux_2020
Third simple example – VII dX=a.dt + b.dW
š A solution of the PDE:
š ∂p(x,t)/∂t = −∂/∂x[a.p(x,t) − (b²/2).∂p(x,t)/∂x] = −∂/∂x[a.p(x,t) − D.∂p(x,t)/∂x] = −∂/∂x[J_A + J_D]
š J_A(x,t) = a.p(x,t) and J_D(x,t) = −D.∂p(x,t)/∂x
š Subject to p(x, t = t₀) = Ύ(x − X(t₀)) = Ύ(x − X₀) is:
š p(x,t) = (1/√(4πD(t−t₀))).exp(−(x − X₀ − a.(t−t₀))²/(4D(t−t₀))) = (1/√(2πσ²(t−t₀))).exp(−(x − X₀ − a.(t−t₀))²/(2σ²(t−t₀)))
š < ∆X² > = b².∆t = σ².∆t = 2.D.∆t
š < ∆X > = a.∆t
103
Luc_Faucheux_2020
Third simple example – VIII dX=a.dt + b.dW
š Note that we can also recover the solution from the PDF from the change of variables:
š PDF Probability Density Function: 𝑝7(𝑥, 𝑡)
š Distribution function : 𝑃7(𝑥, 𝑡)
š P_X(x,t) = Probability(X ≀ x, t) = ∫_{y=−∞}^{y=x} p_X(y,t).dy
š p_X(x,t) = ∂P_X(x,t)/∂x
š 𝑑𝑋 𝑡 = 𝑎. 𝑑𝑡 + 𝑏. ([). 𝑑𝑊
š We define: 𝑌 𝑡 = 𝑋 𝑡 − 𝑎. 𝑡
š 𝑑𝑌 𝑡 = 𝑏. ([). 𝑑𝑊 = 𝑏. (∘). 𝑑𝑊 = 𝑏. 𝑑𝑊
104
Luc_Faucheux_2020
Third simple example – IX dX=a.dt + b.dW
š P_X(x, t_b) = Probability(X ≀ x | X(t_a) = x_a)
š P_X(x, t_b) = Probability(X(t_b) − a.(t_b − t_a) ≀ x − a.(t_b − t_a) | X(t_a) − a.t_a = x_a − a.t_a)
š Just for sake of notation, setting t_b = t and t_a = 0:
š P_X(x,t) = Probability(X(t) − a.t ≀ x − a.t | X(0) = x_a)
š P_X(x,t) = P_Y(x − a.t, t)
š We define: Z(t) = Y(t)/b
š dZ(t) = 1.([).dW = 1.(∘).dW = 1.dW = dW
š P_X(x,t) = Probability(X(t) − a.t ≀ x − a.t | Y(0) = x_a)
š P_X(x,t) = Probability((X(t) − a.t)/b ≀ (x − a.t)/b | Z(0) = x_a/b)
105
Luc_Faucheux_2020
Third simple example – X dX=a.dt + b.dW
š P_X(x,t) = Probability((X(t) − a.t)/b ≀ (x − a.t)/b | Z(0) = x_a/b)
š P_X(x,t) = P_Y(x − a.t, t) = P_Z((x − a.t)/b, t)
š And we know that
š p_Z(z,t) = ∂P_Z(z,t)/∂z = (1/√(2πt)).exp(−(z − Z₀)²/(2t))
š P_Z(z,t) = ∫_{Ο=−∞}^{Ο=z} p_Z(Ο,t).dΟ = ∫_{Ο=−∞}^{Ο=z} (1/√(2πt)).exp(−(Ο − Z₀)²/(2t)).dΟ
š P_X(x,t) = P_Y(x − a.t, t) = P_Z((x − a.t)/b, t)
š P_X(x,t) = ∫_{Ο=−∞}^{Ο=(x−a.t)/b} (1/√(2πt)).exp(−(Ο − Z₀)²/(2t)).dΟ
106
Luc_Faucheux_2020
Third simple example – XI dX=a.dt + b.dW
š P_X(x,t) = ∫_{Ο=−∞}^{Ο=(x−a.t)/b} (1/√(2πt)).exp(−(Ο − Z₀)²/(2t)).dΟ
š Just to be quite pedestrian and show the mechanics of the change of variable, we do the following:
š Ο = (η − a.t)/b
š η = bΟ + a.t and dη = b.dΟ
š P_X(x,t) = ∫_{η=−∞}^{η=x} (1/√(2πt)).exp(−((η − a.t)/b − Z₀)²/(2t)).dη/b with Z₀ = x_a/b
š P_X(x,t) = ∫_{η=−∞}^{η=x} (1/√(2πb²t)).exp(−(η − a.t − x_a)²/(2b²t)).dη
š p_X(x,t) = ∂P_X(x,t)/∂x = (1/√(2πb²t)).exp(−(x − a.t − x_a)²/(2b²t))
107
Luc_Faucheux_2020
Third simple example – XII dX=a.dt + b.dW
š p_X(x,t) = ∂P_X(x,t)/∂x = (1/√(2πb²t)).exp(−(x − a.t − x_a)²/(2b²t))
š Replacing the time t₀ into the equation and also setting x_a = X₀:
š p_X(x,t) = ∂P_X(x,t)/∂x = (1/√(2πb²(t−t₀))).exp(−(x − a.(t−t₀) − X₀)²/(2b²(t−t₀)))
š And with the usual notation (Finance): σ = b
š And with the usual notation (Physics): D = b²/2
š Subject to p(x, t = t₀) = Ύ(x − X(t₀)) = Ύ(x − X₀) this is:
š p(x,t) = (1/√(4πD(t−t₀))).exp(−(x − X₀ − a.(t−t₀))²/(4D(t−t₀))) = (1/√(2πσ²(t−t₀))).exp(−(x − X₀ − a.(t−t₀))²/(2σ²(t−t₀)))
š < ∆X² > = b².∆t = σ².∆t = 2.D.∆t
š < ∆X > = a.∆t
108
Luc_Faucheux_2020
Third simple example – XIII dX=a.dt + b.dW
š We recover what we had obtained doing algebra on the partial derivatives of the PDE
š The change of variable is sometimes very powerful, and simpler than going through the PDE
š We will see in the Langevin section how elegant and powerful it makes the derivation, and how much it simplifies it.
š Just a good trick to get used to
109
Luc_Faucheux_2020
Third simple example – XIV dX=a.dt + b.dW
š A nifty property of the change of variable. Useful when sampling distributions (mostly from
Press et al, Numerical Recipes book)
š PDF Probability Density Function: 𝑝7(𝑥, 𝑡)
š Distribution function : 𝑃7(𝑥, 𝑡)
š P_X(x,t) = Probability(X ≀ x, t) = ∫_{y=−∞}^{y=x} p_X(y,t).dy
š p_X(x,t) = ∂P_X(x,t)/∂x
š Suppose as usual that the variable X belongs to ]−∞; +∞[
š P_X(x,t) is a function from ]−∞; +∞[ into [0; 1]
š Suppose that we know the functional form of this function, and that it has an inverse
š P_X(x,t) = F(x) is a function from ]−∞; +∞[ into [0; 1]
š F⁻¹(y) is a function from [0; 1] into ]−∞; +∞[
š THEN F(X) has for PDF the uniform distribution function (constant = 1) over [0; 1]
110
Luc_Faucheux_2020
Third simple example – XV dX=a.dt + b.dW
111
[Figure: the distribution function y = P_X(x,t) = F(x), mapping x over ]−∞; +∞[ into [0; 1]]
Luc_Faucheux_2020
Third simple example – XVI dX=a.dt + b.dW
112
[Figure: the inverse map x = F⁻¹(y), from y in [0; 1] back to x over ]−∞; +∞[]
Luc_Faucheux_2020
Third simple example – XVII dX=a.dt + b.dW
š So the nifty property is the following:
š If the variable X has the Distribution Function P_X(x,t) = F(x) from ]−∞; +∞[ into [0; 1]
š Then the variable Y = F(X) has itself a Uniform Probability Distribution Function over [0; 1]
š P_X(x,t) = Probability(X ≀ x, t) = ∫_{y=−∞}^{y=x} p_X(y,t).dy = F(x)
š P_F(X)(y,t) = Probability(F(X) ≀ y, t) = Probability(X ≀ F⁻¹(y), t) = F(F⁻¹(y)) = y
š p_F(X)(y,t) = ∂P_F(X)(y,t)/∂y = ∂y/∂y = 1, which is the uniform PDF (constant)
š Not super deep but quite nifty and useful, and not super trivial at first glance
š It is quite useful when, say, you already have a good sampling (a number sequence sampling the [0; 1] interval) and you want to sample X over ]−∞; +∞[, if you know F(x), when running simulations or computing expectations.
113
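A sketch of the nifty property in use, not from the deck: the inverse-CDF trick. If U is uniform on [0,1], then X = F⁻¹(U) has distribution function F; here F is taken to be the exponential CDF F(x) = 1 − exp(−x), whose inverse is known in closed form (an arbitrary illustrative choice):

```python
import numpy as np

rng = np.random.default_rng(7)
u = rng.uniform(0.0, 1.0, 500_000)
x = -np.log(1.0 - u)                 # X = F^{-1}(U): exponential(1) samples

assert abs(x.mean() - 1.0) < 0.01    # exponential(1) has mean 1
# and mapping back, F(X) = U is uniform on [0, 1]:
assert abs((1.0 - np.exp(-x)).mean() - 0.5) < 0.01
```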
Luc_Faucheux_2020
dX=a(t).dt + b.dW
114
Luc_Faucheux_2020
Fourth simple example dX=a(t).dt + b.dW
š 𝑑𝑋 𝑡 = 𝑎 𝑡, 𝑋 𝑡 . 𝑑𝑡 + 𝑏 𝑡, 𝑋 𝑡 . ([). 𝑑𝑊
š We choose 𝑎 𝑡, 𝑋 𝑡 = 𝑎(𝑡) and 𝑏 𝑡, 𝑋 𝑡 = 𝑏
š 𝑑𝑋 𝑡 = 𝑎(𝑡). 𝑑𝑡 + 𝑏. ([). 𝑑𝑊
š X(t_b) − X(t_a) = ∫_{t=t_a}^{t=t_b} dX(t) = ∫_{s=t_a}^{s=t_b} a(s).ds + b.∫_{t=t_a}^{t=t_b} 1.([).dW(t)
š Similarly to the previous change of variables we choose:
š X(t) → X′(t′) = X(t′) − ∫_{s=t₀}^{s=t′} a(s).ds
š t → t′ = t
š Note that those are regular Riemann integrals, so no issue on ITO/STRATO/ALPHA here, we
will have those when 𝑏 𝑡, 𝑋 𝑡 = 𝑏(𝑋)
115
Luc_Faucheux_2020
Fourth simple example – II dX=a(t).dt + b.dW
š X(t) → X′(t′) = X(t′) − ∫_{s=t₀}^{s=t′} a(s).ds
š t → t′ = t
š And so
š dX′(t′) = dX(t) − a(t).dt = b.([).dW
š dt′ = dt
š We have, same as before:
š ∂p′(x′,t′)/∂t′ = ∂/∂x′[∂/∂x′[(b²/2).p′(x′,t′)]] = (b²/2).∂²p′(x′,t′)/∂x′²
š We then go back to the original variables 𝑋 𝑡 and 𝑡
š We just need to be a little more careful because of the integral
116
Luc_Faucheux_2020
Fourth simple example – III dX=a(t).dt + b.dW
š X(t) → X′(t′) = X(t′) − ∫_{s=t₀}^{s=t′} a(s).ds = X(t) − ∫_{s=t₀}^{s=t} a(s).ds
š t → t′ = t
š So we have
š X′(t′) → X(t) = X′(t′) + ∫_{s=t₀}^{s=t′} a(s).ds
š t′ → t = t′
š And defining on the regular variables: x = x′ + ∫_{s=t₀}^{s=t′} a(s).ds
š ∂/∂x′ = (∂/∂x).(∂x/∂x′) + (∂/∂t).(∂t/∂x′) = ∂/∂x
š ∂/∂t′ = (∂/∂x).(∂x/∂t′) + (∂/∂t).(∂t/∂t′) = (∂/∂x).a(t′) + ∂/∂t = (∂/∂x).a(t) + ∂/∂t
š ∂²/∂x′² = (∂/∂x′).(∂/∂x′) = (∂/∂x′).(∂/∂x) = (∂/∂x).(∂/∂x) = ∂²/∂x²
117
Luc_Faucheux_2020
Fourth simple example – IV dX=a(t).dt + b.dW
š ∂p′(x′,t′)/∂t′ = (b²/2).∂²p′(x′,t′)/∂x′²
š Becomes:
š [(∂/∂x).a(t) + ∂/∂t].p(x,t) = (b²/2).∂²p(x,t)/∂x²
š ∂p(x,t)/∂t = −a(t).∂p(x,t)/∂x + (b²/2).∂²p(x,t)/∂x²
š Given the fact that a = a(t), we can move it inside the ∂/∂x
š ∂p(x,t)/∂t = −∂/∂x[a(t).p(x,t) − (b²/2).∂p(x,t)/∂x] = −∂/∂x[a(t).p(x,t) − D.∂p(x,t)/∂x]
š ∂p(x,t)/∂t = −∂/∂x[J_A + J_D]
š J_A(x,t) = a(t).p(x,t) and J_D(x,t) = −D.∂p(x,t)/∂x
118
Luc_Faucheux_2020
Fourth simple example – V dX=a(t).dt + b.dW
š So we have the mapping:
š dX(t) = a(t).dt + b.([).dW
š ∂p(x,t)/∂t = −∂/∂x[a(t).p(x,t) − (b²/2).∂p(x,t)/∂x] = −∂/∂x[a(t).p(x,t) − D.∂p(x,t)/∂x]
š ∂p(x,t)/∂t = −∂/∂x[J_A + J_D]
š J_A(x,t) = a(t).p(x,t) and J_D(x,t) = −D.∂p(x,t)/∂x
š ∂p(x,t)/∂t = −∂/∂x[M₁(x,t).p(x,t) − ∂/∂x[M₂(x,t).p(x,t)]]
š < ∆X > = 𝔌{∆X} = < x >_{t+∆t} − < x >_t = F₁(X(t), t).∆t (drift term)
š < ∆X² > = 𝔌{∆X²} = < (x − < x >_{t+∆t})² >_{t+∆t} = F₂(X(t), t).∆t (diffusion term)
š We showed that F₁(X(t), t) = M₁(X(t), t) and F₂(X(t), t) = 2.M₂(X(t), t)
119
Luc_Faucheux_2020
Fourth simple example – VI dX=a(t).dt + b.dW
š M₁(x,t) = a(t) = F₁(X(t), t)
š M₂(X(t), t) = (1/2).F₂(X(t), t) and M₂(X(t), t) = b²/2 so F₂(X(t), t) = b²
š So:
š < ∆X > = 𝔌{∆X} = < x >_{t+∆t} − < x >_t = F₁(X(t), t).∆t = a(t).∆t
š < ∆X² > = 𝔌{∆X²} = < (x − < x >_{t+∆t})² >_{t+∆t} = F₂(X(t), t).∆t = b².∆t
š < ∆X² > = b².∆t = σ².∆t = 2D.∆t
120
Luc_Faucheux_2020
Fourth simple example – VII dX=a(t).dt + b.dW
• A solution of the PDE:
• $\frac{\partial}{\partial t}.p(x,t) = -\frac{\partial}{\partial x}\left[a(t).p(x,t) - \frac{b^2}{2}.\frac{\partial p(x,t)}{\partial x}\right] = -\frac{\partial}{\partial x}\left[a(t).p(x,t) - D.\frac{\partial p(x,t)}{\partial x}\right] = -\frac{\partial}{\partial x}[J_a + J_D]$
• $J_a(x,t) = a(t).p(x,t)$ and $J_D(x,t) = -D.\frac{\partial p(x,t)}{\partial x}$
• Subject to $p(x,t=t_0) = \delta(x - X(t_0)) = \delta(x - X_0)$, and defining:
• $X(t) = X_0 + \int_{t=t_0}^{t} a(s).ds$
• $p(x,t) = \frac{1}{\sqrt{4\pi D(t-t_0)}}.\exp\left(-\frac{(x - X(t))^2}{4D(t-t_0)}\right) = \frac{1}{\sqrt{2\pi\sigma^2(t-t_0)}}.\exp\left(-\frac{(x - X(t))^2}{2\sigma^2(t-t_0)}\right)$
• $<\Delta X^2> = b^2.\Delta t = \sigma^2.\Delta t = 2.D.\Delta t$
• $<\Delta X> = a(t).\Delta t$
121
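A finite-difference sketch (not from the deck; the values $D = 0.05$ and the drift $a(t) = 0.2 + 0.1t$ are assumed for illustration) checking that the Gaussian above does satisfy $\partial p/\partial t = -\partial_x[a(t).p - D.\partial_x p]$:

```python
import math

D, t0, X0 = 0.05, 0.0, 0.0

def a(t):
    return 0.2 + 0.1 * t

def X(t):
    # X(t) = X0 + int_{t0}^{t} a(s).ds, integrated in closed form for this a(t)
    return X0 + 0.2 * (t - t0) + 0.05 * (t - t0) ** 2

def p(x, t):
    # Gaussian solution: mean X(t), variance 2.D.(t - t0)
    v = 4.0 * D * (t - t0)
    return math.exp(-(x - X(t)) ** 2 / v) / math.sqrt(math.pi * v)

def flux(x, t, h=1e-4):
    # J(x,t) = a(t).p - D.dp/dx, with a central difference for dp/dx
    dp_dx = (p(x + h, t) - p(x - h, t)) / (2 * h)
    return a(t) * p(x, t) - D * dp_dx

x, t, h, k = 0.3, 1.0, 1e-4, 1e-5
dp_dt = (p(x, t + k) - p(x, t - k)) / (2 * k)
dJ_dx = (flux(x + h, t) - flux(x - h, t)) / (2 * h)

residual = dp_dt + dJ_dx  # vanishes if p solves the Fokker-Planck PDE
print(residual)
```

Both sides of the PDE are order one here, so a residual many orders of magnitude smaller confirms the solution rather than a trivial cancellation.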
Fourth simple example – VIII dX=a(t).dt + b.dW
• Note that we can also recover the solution for the PDF from the change of variables:
• PDF, Probability Density Function: $p_X(x,t)$
• Distribution function: $P_X(x,t)$
• $P_X(x,t) = \mathrm{Probability}(X \le x, t) = \int_{y=-\infty}^{y=x} p_X(y,t).dy$
• $p_X(x,t) = \frac{\partial}{\partial x} P_X(x,t)$
• $dX(t) = a(t).dt + b.([).dW$
• We define: $Y(t) = X(t) - \int_{t=t_0}^{t} a(s).ds$
• $dY(t) = b.([).dW = b.(\circ).dW = b.dW$
122
Fourth simple example – IX dX=a(t).dt + b.dW
• $P_X(x,t_b) = \mathrm{Probability}(X \le x \mid X(t_a) = x_a)$
• $P_X(x,t_b) = \mathrm{Probability}\left(X(t_b) - \int_{t=t_a}^{t=t_b} a(s).ds \le x - \int_{t=t_a}^{t=t_b} a(s).ds \;\middle|\; X(t_a) - \int_{t=t_a}^{t=t_b} a(s).ds = x_a - \int_{t=t_a}^{t=t_b} a(s).ds\right)$
• Just for the sake of notation, setting $t_b = t$ and $t_a = 0$:
• $P_X(x,t) = \mathrm{Probability}\left(X(t) - \int_{s=0}^{s=t} a(s).ds \le x - \int_{s=0}^{s=t} a(s).ds \;\middle|\; X(0) = x_a\right)$
• $P_X(x,t) = P_Y\left(x - \int_{s=0}^{s=t} a(s).ds,\ t\right)$
• We define: $Z(t) = Y(t)/b$
• $dZ(t) = 1.([).dW = 1.(\circ).dW = 1.dW = dW$
• $P_X(x,t) = \mathrm{Probability}\left(\frac{X(t) - \int_{s=0}^{s=t} a(s).ds}{b} \le \frac{x - \int_{s=0}^{s=t} a(s).ds}{b} \;\middle|\; Z(0) = x_a/b\right)$
123
Fourth simple example – X dX=a(t).dt + b.dW
• So this is more complicated than previously, but we can notice that:
• $\int_{s=0}^{s=t} a(s).ds$ is only a function of the time $t$
• So if we define:
• $X(t) = X_0 + \int_{t=t_0}^{t} a(s).ds = X_0 + A(t).t$
• $A(t) = \frac{1}{t}.\int_{t=t_0}^{t} a(s).ds$
• We can carry exactly the same derivation we had previously for the change of variable and recover:
• $p_X(x,t) = \frac{\partial}{\partial x} P_X(x,t) = \frac{1}{\sqrt{2\pi b^2 t}}.\exp\left(-\frac{(x - A(t).t - X_0)^2}{2b^2 t}\right)$
124
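The definition of the time-averaged drift can be illustrated in two lines of numerics (a sketch, not from the deck; the drift $a(t) = 0.5t$ and $t_0 = 0$ are hypothetical choices), confirming $X_0 + A(t).t = X_0 + \int_0^t a(s).ds$:

```python
def a(s):
    return 0.5 * s   # illustrative drift

def A(t, n=10000):
    # left Riemann sum for A(t) = (1/t) int_0^t a(s).ds
    ds = t / n
    return sum(a(i * ds) for i in range(n)) * ds / t

X0, t = 1.0, 2.0
exact_integral = 0.25 * t * t     # int_0^t 0.5 s ds = t^2 / 4
print(X0 + A(t) * t)              # close to X0 + exact_integral = 2.0
```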
Fourth simple example – XI dX=a(t).dt + b.dW
• $p_X(x,t) = \frac{\partial}{\partial x} P_X(x,t) = \frac{1}{\sqrt{2\pi b^2 t}}.\exp\left(-\frac{(x - A(t).t - X_0)^2}{2b^2 t}\right)$
• Replacing the time $t_0$ into the equation:
• $p_X(x,t) = \frac{\partial}{\partial x} P_X(x,t) = \frac{1}{\sqrt{2\pi b^2 (t-t_0)}}.\exp\left(-\frac{(x - A(t-t_0).(t-t_0) - X_0)^2}{2b^2 (t-t_0)}\right)$
• And with the usual notation (Finance): $\sigma = b$
• And with the usual notation (Physics): $D = \frac{b^2}{2}$
• Subject to $p(x,t=t_0) = \delta(x - X(t_0)) = \delta(x - X_0)$, this is:
• $p(x,t) = \frac{1}{\sqrt{4\pi D(t-t_0)}}.\exp\left(-\frac{(x - X_0 - A(t-t_0).(t-t_0))^2}{4D(t-t_0)}\right) = \frac{1}{\sqrt{2\pi\sigma^2(t-t_0)}}.\exp\left(-\frac{(x - X_0 - A(t-t_0).(t-t_0))^2}{2\sigma^2(t-t_0)}\right)$
125
Fourth simple example – XII dX=a(t).dt + b.dW
• $A(t - t_0) = \frac{1}{(t-t_0)}.\int_{t=t_0}^{t} a(s).ds$
• We recover:
• $X(t) = X_0 + \int_{t=t_0}^{t} a(s).ds$
• $p(x,t) = \frac{1}{\sqrt{4\pi D(t-t_0)}}.\exp\left(-\frac{(x - X(t))^2}{4D(t-t_0)}\right) = \frac{1}{\sqrt{2\pi\sigma^2(t-t_0)}}.\exp\left(-\frac{(x - X(t))^2}{2\sigma^2(t-t_0)}\right)$
• $<\Delta X^2> = b^2.\Delta t = \sigma^2.\Delta t = 2.D.\Delta t$
• $<\Delta X> = a(t).\Delta t$
• Again, it is worth using the change of variable method to make sure that we did not drop any term.
126
dX= b(t).dW
“Cent fois sur le métier remettez votre ouvrage”
Nicolas Boileau
127
Fifth simple example dX= b(t).dW
• $dX(t) = a(t,X(t)).dt + b(t,X(t)).([).dW$
• We choose $a(t,X(t)) = 0$ and $b(t,X(t)) = b(t)$
• $dX(t) = b(t).([).dW$
• $X(t_b) - X(t_a) = \int_{t=t_a}^{t=t_b} dX(t) = \int_{t=t_a}^{t=t_b} b(t).([).dW(t)$
• Similarly to the previous change of variables, we are going to try something like this:
• $X(t) \to X'(t') = X(t)/b(t)$
• $t \to t' = t$
• Using the ITO lemma on $X'(t')$:
• $dX'(t') = \frac{\partial X'}{\partial X}.dX + \frac{\partial X'}{\partial t}.dt + \frac{1}{2}.\frac{\partial^2 X'}{\partial X^2}.dX^2$
128
Fifth simple example – II dX= b(t).dW
• $X(t) \to X'(t') = X(t)/b(t)$
• $dX'(t') = \frac{\partial X'}{\partial X}.dX + \frac{\partial X'}{\partial t}.dt + \frac{1}{2}.\frac{\partial^2 X'}{\partial X^2}.dX^2 = \frac{1}{b(t)}.dX + X(t).\frac{\partial}{\partial t}\left(\frac{1}{b(t)}\right).dt$
• $dX'(t') = \frac{1}{b(t)}.b(t).([).dW + X(t).\frac{-1}{b(t)^2}.\frac{\partial b(t)}{\partial t}.dt = dW + X(t).\frac{-1}{b(t)^2}.\frac{\partial b(t)}{\partial t}.dt$
• So the first term is OK, because that is going to be the Gaussian.
• The second term, however, is going to be a drift term that is BOTH a function of $X(t)$ and $t$.
• We have not yet done the case: $dX(t) = a(t,X(t)).dt + dW$
129
Fifth simple example – III dX= b(t).dW
• So let’s try something else.
• We know that for a Brownian motion the quadratic variation is linear in time.
• Noting $\mathbb{E}$ the expected value (integral over the distribution), we already know that:
• $\mathbb{E}\{W(t)\} = 0$
• $\mathbb{E}\{W(t)^2\} = t$
• $\mathbb{E}\{W(t).W(t')\} = \min(t,t')$
• $\mathbb{E}\{(W(t) - W(t'))^2\} = \mathbb{E}\{W(t)^2\} + \mathbb{E}\{W(t')^2\} - 2.\mathbb{E}\{W(t).W(t')\}$
• $\mathbb{E}\{(W(t) - W(t'))^2\} = t + t' - 2.\min(t,t') = |t - t'|$
130
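The covariance $\mathbb{E}\{W(t).W(t')\} = \min(t,t')$ is easy to sanity-check by Monte Carlo (an illustrative sketch, not from the deck; the two sampling times $t = 0.4$ and $t' = 1.0$ are arbitrary):

```python
import math
import random

random.seed(2)

t1, t2, n_paths = 0.4, 1.0, 50000
acc = 0.0
for _ in range(n_paths):
    w1 = random.gauss(0.0, math.sqrt(t1))             # W(t1)
    w2 = w1 + random.gauss(0.0, math.sqrt(t2 - t1))   # W(t2): add an independent increment
    acc += w1 * w2
cov = acc / n_paths

print(cov)  # should be close to min(t1, t2) = 0.4
```

The construction $W(t_2) = W(t_1) + \text{independent increment}$ is exactly why the cross term reduces to $\mathbb{E}\{W(t_1)^2\} = t_1$.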
Fifth simple example – IV dX= b(t).dW
• $dX(t) = b(t).([).dW$
• $X(t_b) - X(t_a) = \int_{t=t_a}^{t=t_b} dX(t) = \int_{t=t_a}^{t=t_b} b(t).([).dW(t)$
• Using the Ito integral:
• $X(t_b) - X(t_a) = \lim_{n\to\infty} \sum_{i=1}^{n} b(t_i).\{W(t_i) - W(t_{i-1})\}$
• NOTE: since $b$ is a function of $t$ only, we can evaluate it at the left point $b(t_{i-1})$ (the ITO choice), the middle point (STRATO), or any other point in the interval; they will all converge to the same limit, because if we Taylor expand $b(t)$ around $t = t_{i-1}$ with $t$ in the partition $[t_{i-1}, t_i]$, the higher order terms will be of order $\{t_i - t_{i-1}\}.\{W(t_i) - W(t_{i-1})\}$ and will vanish.
• So instead of changing variables, let’s try to go back to our estimates of the first and second moments:
• $\mathbb{E}\{X(t_b) - X(t_a)\} = \mathbb{E}\left\{\lim_{n\to\infty} \sum_{i=1}^{n} b(t_i).\{W(t_i) - W(t_{i-1})\}\right\} = 0$
• $<\Delta X> = E\{\Delta X\} = <X>_{t+\Delta t} - <X>_t = 0$ (drift term)
131
Fifth simple example – V dX= b(t).dW
• $<\Delta X^2> = E\{\Delta X^2\} = <(X - <X>_{t+\Delta t})^2>_{t+\Delta t} = F_2(X(t),t).\Delta t$ (diffusion term)
• $X(t_b) - X(t_a) = \lim_{n\to\infty} \sum_{i=1}^{n} b(t_i).\{W(t_i) - W(t_{i-1})\}$
• $\{X(t_b) - X(t_a)\}^2 = \left\{\lim_{n\to\infty} \sum_{i=1}^{n} b(t_i).\{W(t_i) - W(t_{i-1})\}\right\}^2$
• $\{X(t_b) - X(t_a)\}^2 = \lim_{n\to\infty} \sum_{i=1}^{n} \sum_{j=1}^{n} b(t_i).\{W(t_i) - W(t_{i-1})\}.b(t_j).\{W(t_j) - W(t_{j-1})\}$
• $\mathbb{E}\{X(t_b) - X(t_a)\}^2 = \lim_{n\to\infty} \sum_{i=1}^{n} b(t_i).b(t_i).\mathbb{E}\{(W(t_i) - W(t_{i-1}))^2\}$
• $\mathbb{E}\{X(t_b) - X(t_a)\}^2 = \lim_{n\to\infty} \sum_{i=1}^{n} b(t_i).b(t_i).(t_i - t_{i-1})$
• $\mathbb{E}\{X(t_b) - X(t_a)\}^2 = \int_{t=t_a}^{t=t_b} b(t).b(t).dt$
132
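This isometry-style result can be verified directly by simulation (a sketch, not from the deck; the volatility $b(t) = 1 + t$ is an arbitrary illustrative choice, for which $\int_0^1 b(t)^2.dt = 7/3$):

```python
import math
import random

random.seed(1)

def b(t):
    return 1.0 + t   # illustrative deterministic volatility

t_end, n_steps, n_paths = 1.0, 100, 10000
dt = t_end / n_steps

second_moment = 0.0
for _ in range(n_paths):
    x, t = 0.0, 0.0
    for _ in range(n_steps):
        # dX = b(t).dW, evaluating b at the left point of each interval
        x += b(t) * random.gauss(0.0, math.sqrt(dt))
        t += dt
    second_moment += x * x
second_moment /= n_paths

exact = 7.0 / 3.0   # int_0^1 (1+t)^2 dt = ((1+1)^3 - 1)/3
print(second_moment)  # should be close to 2.333...
```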
Fifth simple example – VI dX= b(t).dW
• This is what we knew already about the Gaussian (and one of the many reasons why we love the Gaussian distribution).
• A variable that is a sum of independent variables, each following a Gaussian distribution, will also follow a Gaussian distribution.
• In discrete form:
• If $X = \sum_i \delta X_i$ with $<\delta X_i.\delta X_j> = 0$ if $i \ne j$, and $<\delta X_i^2> = \sigma_i^2.\delta t_i$ otherwise
• Then $<\Delta X^2> = <\sum_i \delta X_i . \sum_j \delta X_j> = \sum_i \sigma_i^2.\delta t_i$
• In the continuous form (small time interval $\delta t_i$ limit):
• $<\Delta X^2> = \int_{t=t_a}^{t=t_b} \sigma(t)^2.dt$
• So we would like to define a new variable $\bar{\sigma}(t)$ so that:
• $\bar{\sigma}(t)^2.t = \int_{s=0}^{s=t} \sigma(s)^2.ds$
133
Fifth simple example – VII dX= b(t).dW
• So with the new variable: $\bar{\sigma}(t)^2.t = \int_{s=0}^{s=t} \sigma(s)^2.ds$
• we have $<\Delta X^2> = \int_{s=t}^{s=t+\Delta t} \sigma(s)^2.ds = \bar{\sigma}(t+\Delta t)^2.(t+\Delta t) - \bar{\sigma}(t)^2.t$
• $<\Delta X^2> = \sigma(t)^2.\Delta t$
• We can convince ourselves of this through a Taylor expansion of $\bar{\sigma}(t+\Delta t)^2.(t+\Delta t)$:
• $\bar{\sigma}(t+\Delta t)^2.(t+\Delta t) = \bar{\sigma}(t)^2.t + \Delta t.\frac{\partial}{\partial t}[\bar{\sigma}(t)^2.t]$
• $\bar{\sigma}(t+\Delta t)^2.(t+\Delta t) = \bar{\sigma}(t)^2.t + \Delta t.\frac{\partial}{\partial t}\left[\int_{s=0}^{s=t} \sigma(s)^2.ds\right]$
• $\bar{\sigma}(t+\Delta t)^2.(t+\Delta t) = \bar{\sigma}(t)^2.t + \Delta t.\sigma(t)^2$
• $<\Delta X^2> = \sigma(t)^2.\Delta t$
134
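The effective-volatility construction can be sketched numerically (not from the deck; the volatility $\sigma(t) = 0.2 + 0.1t$ and the times $t = 2$, $\Delta t = 10^{-4}$ are assumed for illustration), checking that the telescoped difference $\bar{\sigma}(t+\Delta t)^2.(t+\Delta t) - \bar{\sigma}(t)^2.t$ reproduces $\sigma(t)^2.\Delta t$ to leading order:

```python
import math

def sigma(t):
    return 0.2 + 0.1 * t   # illustrative deterministic volatility

def sigma_bar(t, n=100000):
    # trapezoidal quadrature of int_0^t sigma(s)^2.ds, then sigma_bar = sqrt(./t)
    ds = t / n
    acc = 0.5 * (sigma(0.0) ** 2 + sigma(t) ** 2)
    for i in range(1, n):
        acc += sigma(i * ds) ** 2
    return math.sqrt(acc * ds / t)

t, dt = 2.0, 1e-4
lhs = sigma_bar(t + dt) ** 2 * (t + dt) - sigma_bar(t) ** 2 * t  # <dX^2> over [t, t+dt]
rhs = sigma(t) ** 2 * dt

print(lhs, rhs)  # agree to leading order in dt
```

The residual is of order $\sigma(t).\sigma'(t).\Delta t^2$, which is exactly the higher order term dropped in the Taylor expansion above.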

Lf 2020 stochastic_calculus_ito-ii

  • 1. Luc_Faucheux_2020 Stochastic Calculus – ITO – II From SDEs to PDEs and back 1
  • 2. Luc_Faucheux_2020 In this section š We tackle the mapping between SDE (SIE) and PDEs (in particular the PDE for the PDF) š SDE: Stochastic Differential Equations š SIE: Stochastic Integral Equations š PDE: Partial Differential Equations š PDF: Probability Distribution Function š This part is somewhat easier as it will be mostly dealing with stochastic terms that are either constant or time dependent, with no dependency on the stochastic variable (so not ITO- STRATO controversy to worry about). We will leave that for part III 2
  • 4. Luc_Faucheux_2020 From SED to PDE and back – first pass š Let’s start with a Fokker-Planck equation š In the PDE section, we will show that the Chapman-Kolmogorov equation is a particular case of the Master equation, and that the Fokker-Planck equation is itself a special case of the Chapman-Kolmogorov but for now we sort of take the FP for granted. š Let’s assume we have the following FP for the Probability Distribution: š !"($,&) !& = − ! !$ [𝑀( 𝑥, 𝑡 . 𝑝 𝑥, 𝑡 − ! !$ [𝑀) 𝑥, 𝑡 . 𝑝 𝑥, 𝑡 ]] š For the notations, 𝑥 is a “regular variable”, we keep the capital 𝑋 for the SDE š By writing the first derivative in time being equal to a gradient in space of something (the diffusion current in Physics), we ensure that the overall probability is conserved: š ! !& 𝑝 = − ! !$ 𝐜 š Note that this would not be the case for absorption problems, or cliff (see first passage time in the Bachelier section) 4
  • 5. Luc_Faucheux_2020 From SED to PDE and back – first pass - II š Let’s check that the probability is conserved š * *& . ∫+, -, 𝑝 𝑥, 𝑡 . 𝑑𝑥 = ∫+, -, ! !& 𝑝 𝑥, 𝑡 . 𝑑𝑥 = ∫+, -, − ! !$ 𝐜(𝑥, 𝑡). 𝑑𝑥 = [−𝐜(𝑥, 𝑡)]+, -, š And with the appropriate boundary conditions 𝐜 𝑥 = −∞, 𝑡 = 𝐜 𝑥 = +∞, 𝑡 = 0 š * *& . ∫+, -, 𝑝 𝑥, 𝑡 . 𝑑𝑥 = 0 š 𝐜 𝑥, 𝑡 = 𝑀( 𝑥, 𝑡 . 𝑝 𝑥, 𝑡 − ! !$ [𝑀) 𝑥, 𝑡 . 𝑝 𝑥, 𝑡 ] š We can calculate the moments of that distribution noted: š 𝑚. 𝑥, 𝑡 =< 𝑥. >&= ∫+, -, 𝑝 𝑥, 𝑡 . 𝑥.. 𝑑𝑥 š Remember that we calculated those in the Gaussian case in the Bachelier section š Note that those are NOT the cumulants, we will go over that in the PDE section 5
  • 6. Luc_Faucheux_2020 From SED to PDE and back – first pass - III š 𝑚. 𝑥, 𝑡 =< 𝑥. >&= ∫+, -, 𝑝 𝑥, 𝑡 . 𝑥.. 𝑑𝑥 š 𝑚. 𝑥, 𝑡 + 𝛿𝑡 =< 𝑥. >&-/&= ∫+, -, 𝑝 𝑥, 𝑡 + 𝛿𝑡 . 𝑥.. 𝑑𝑥 š 𝑚. 𝑥, 𝑡 + 𝛿𝑡 =< 𝑥. >&-/&= ∫+, -, {𝑝 𝑥, 𝑡 + ! !& 𝑝 𝑥, 𝑡 . 𝛿𝑡}. 𝑥.. 𝑑𝑥 š 𝑚. 𝑥, 𝑡 + 𝛿𝑡 =< 𝑥. >&-/&= ∫+, -, 𝑝 𝑥, 𝑡 . 𝑥.. 𝑑𝑥 + 𝛿𝑡. ∫+, -, ! !& 𝑝 𝑥, 𝑡 . 𝑥.. 𝑑𝑥 š 𝑚. 𝑥, 𝑡 + 𝛿𝑡 =< 𝑥. >&-/&=< 𝑥. >& +𝛿𝑡. ∫+, -, ! !& 𝑝 𝑥, 𝑡 . 𝑥.. 𝑑𝑥 š 𝑚. 𝑥, 𝑡 + 𝛿𝑡 − 𝑚. 𝑥, 𝑡 =< 𝑥. >&-/& −< 𝑥. >&= 𝐌𝛿𝑡 = 𝛿𝑡. ∫+, -, ! !& 𝑝 𝑥, 𝑡 . 𝑥.. 𝑑𝑥 š Let’s calculate 𝐌 = ∫+, -, ! !& 𝑝 𝑥, 𝑡 . 𝑥.. 𝑑𝑥 6
  • 7. Luc_Faucheux_2020 From SED to PDE and back – first pass - IV š Let’s calculate 𝐌 = ∫+, -, ! !& 𝑝 𝑥, 𝑡 . 𝑥.. 𝑑𝑥 š We have also : !"($,&) !& = − ! !$ [𝑀( 𝑥, 𝑡 . 𝑝 𝑥, 𝑡 − ! !$ [𝑀) 𝑥, 𝑡 . 𝑝 𝑥, 𝑡 ]] š 𝐌 = ∫+, -, − ! !$ [𝑀( 𝑥, 𝑡 . 𝑝 𝑥, 𝑡 − ! !$ [𝑀) 𝑥, 𝑡 . 𝑝 𝑥, 𝑡 ]] . 𝑥.. 𝑑𝑥 š We can integrate by parts š 𝐌 = 𝐌( + 𝐌) š Where: š 𝐌( = [− [𝑀( 𝑥, 𝑡 . 𝑝 𝑥, 𝑡 − ! !$ [𝑀) 𝑥, 𝑡 . 𝑝 𝑥, 𝑡 ]]. 𝑘. 𝑥.+(]+, -, š 𝐌) = ∫+, -, [𝑀( 𝑥, 𝑡 . 𝑝 𝑥, 𝑡 − ! !$ [𝑀) 𝑥, 𝑡 . 𝑝 𝑥, 𝑡 ]].𝑘. 𝑥.+(. 𝑑𝑥 š 𝐌( = 0 for appropriate boundary conditions (same ones that ensured overall conservation of probability, meaning 𝐜 𝑥 = −∞, 𝑡 = 𝐜 𝑥 = +∞, 𝑡 = 0 and we are left with: 7
• 8. From SDE to PDE and back – first pass - V
- m_k(t+ÎŽt) - m_k(t) = <x^k>_{t+ÎŽt} - <x^k>_t = I2.ÎŽt
- I2 = ∫_{-∞}^{+∞} [ M1(x,t).p(x,t) - ∂/∂x [ M2(x,t).p(x,t) ] ].k.x^{k-1}.dx
- We integrate by parts once again.
- I2(k) = ∫_{-∞}^{+∞} M1(x,t).p(x,t).k.x^{k-1}.dx - ∫_{-∞}^{+∞} ∂/∂x [ M2(x,t).p(x,t) ].k.x^{k-1}.dx
- IF (k = 1):
- I2(1) = ∫_{-∞}^{+∞} M1(x,t).p(x,t).dx - ∫_{-∞}^{+∞} ∂/∂x [ M2(x,t).p(x,t) ].dx
- I2(1) = ∫_{-∞}^{+∞} M1(x,t).p(x,t).dx - [ M2(x,t).p(x,t) ]_{-∞}^{+∞}
- I2(1) = ∫_{-∞}^{+∞} M1(x,t).p(x,t).dx
• 9. From SDE to PDE and back – first pass - VI
- IF (k > 1):
- I2(k) = ∫_{-∞}^{+∞} M1(x,t).p(x,t).k.x^{k-1}.dx - ∫_{-∞}^{+∞} ∂/∂x [ M2(x,t).p(x,t) ].k.x^{k-1}.dx
- I2(k) = ∫_{-∞}^{+∞} M1(x,t).p(x,t).k.x^{k-1}.dx - [ M2(x,t).p(x,t).k.x^{k-1} ]_{-∞}^{+∞} + ∫_{-∞}^{+∞} M2(x,t).p(x,t).k.(k-1).x^{k-2}.dx
- With, again, boundary conditions that are not pathological:
- I2(k) = ∫_{-∞}^{+∞} M1(x,t).p(x,t).k.x^{k-1}.dx + ∫_{-∞}^{+∞} M2(x,t).p(x,t).k.(k-1).x^{k-2}.dx
• 10. From SDE to PDE and back – first pass - VII
- So we have, in the case of:
- ∂p(x,t)/∂t = -∂/∂x [ M1(x,t).p(x,t) - ∂/∂x [ M2(x,t).p(x,t) ] ]
- m_k(t+ÎŽt) - m_k(t) = <x^k>_{t+ÎŽt} - <x^k>_t = I2(k).ÎŽt
- m_k(t) = <x^k>_t = ∫_{-∞}^{+∞} p(x,t).x^k.dx
- IF (k = 0), we have conservation of probability: ∫_{-∞}^{+∞} p(x,t).1.dx = m_0(t) = 1
- IF (k = 1):
- m_1(t+ÎŽt) - m_1(t) = <x>_{t+ÎŽt} - <x>_t = ÎŽt.∫_{-∞}^{+∞} M1(x,t).p(x,t).dx
- IF (k = 2): m_2(t+ÎŽt) - m_2(t) = <x^2>_{t+ÎŽt} - <x^2>_t = I2(2).ÎŽt
- I2(k) = ∫_{-∞}^{+∞} M1(x,t).p(x,t).k.x^{k-1}.dx + ∫_{-∞}^{+∞} M2(x,t).p(x,t).k.(k-1).x^{k-2}.dx
- I2(2) = ∫_{-∞}^{+∞} M1(x,t).p(x,t).2x.dx + ∫_{-∞}^{+∞} M2(x,t).p(x,t).2.dx
• 11. From SDE to PDE and back – first pass - VIIa
- So let's recap:
- If we have a PDE of the form: ∂p(x,t)/∂t = -∂/∂x [ M1(x,t).p(x,t) - ∂/∂x [ M2(x,t).p(x,t) ] ]
- The moments of the variable will follow the equation:
- m_k(t) = <x^k>_t = ∫_{-∞}^{+∞} p(x,t).x^k.dx
- d/dt m_k(t) = I2(k)
- I2(k) = ∫_{-∞}^{+∞} M1(x,t).p(x,t).k.x^{k-1}.dx + ∫_{-∞}^{+∞} M2(x,t).p(x,t).k.(k-1).x^{k-2}.dx
- So far, just integration by parts, and quite general.
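The moment recap can be checked numerically. The sketch below (not from the deck; a stdlib-only illustration with an assumed constant drift a and constant M2 = D, and a Gaussian initial density) evaluates the Fokker-Planck right-hand side on a grid by finite differences and verifies dm_1/dt = ∫ M1 p dx = a and dm_2/dt = 2a<x> + 2D = 2D here, since <x> = 0:

```python
import math

# Grid and an initial Gaussian density p(x,0) = N(0,1)
dx = 0.01
xs = [-10 + i * dx for i in range(2001)]
p = [math.exp(-x * x / 2) / math.sqrt(2 * math.pi) for x in xs]

a, D = 0.3, 0.5          # constant M1 = a, M2 = D (hypothetical values)

# Fokker-Planck right-hand side with constant coefficients:
# dp/dt = -a.dp/dx + D.d2p/dx2   (central differences, interior points)
dpdt = [0.0] * len(xs)
for i in range(1, len(xs) - 1):
    dpdx = (p[i + 1] - p[i - 1]) / (2 * dx)
    d2pdx2 = (p[i + 1] - 2 * p[i] + p[i - 1]) / dx ** 2
    dpdt[i] = -a * dpdx + D * d2pdx2

def integrate(values):
    return sum(values) * dx   # rectangle rule, fine at this resolution

# dm_k/dt = integral of (dp/dt).x^k dx, compared with I2(k)
dm1dt = integrate([dpdt[i] * xs[i] for i in range(len(xs))])
dm2dt = integrate([dpdt[i] * xs[i] ** 2 for i in range(len(xs))])

print(abs(dm1dt - a) < 1e-3)       # I2(1) = integral(M1.p) = a
print(abs(dm2dt - 2 * D) < 1e-3)   # I2(2) = 2a<x> + 2D = 2D here (<x> = 0)
```

Both checks print True: the numerical moment drifts match the integration-by-parts formulas above.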
• 12. From SDE to PDE and back – first pass - VIII
- This so far is very general.
- We need to tie this up to a stochastic process of the form:
- dX(t) = a(t,X(t)).dt + b(t,X(t)).([).dW (in the ITO convention)
- This indicates that we will have to concern ourselves with conditional probabilities, reminiscent of two things:
- 1) Green propagators, when the initial starting point is a Dirac peak
- 2) the Chapman-Kolmogorov equation from Bachelier, where he was looking for solutions where the transition probability function was itself the probability density:
- p(x,t) = ∫_{x'=-∞}^{x'=+∞} p(x',t').PR(x',t',x,t).dx'
- WITH: PR(x',t',x,t) = p(x-x', t-t')
• 13. From SDE to PDE and back – first pass - IX
- Writing those with a more explicit notation for the conditional probability:
- p(x,t | x0,t0) = ∫_{x'=-∞}^{x'=+∞} p(x,t | x',t').p(x',t' | x0,t0).dx'
- This is also known as the composition rule, or Chapman-Kolmogorov.
- It is a special case of the Master equation.
- Fokker-Planck is a special case of Chapman-Kolmogorov.
• 14. From SDE to PDE and back – first pass - X
- The processes described by the Chapman-Kolmogorov equation are such that we need to find a function p(x',t' | x0,t0) so that every possible "path" has a length equal to that function, and the resulting probability at the end is the sum over all the possible paths.
- At first it seems that finding such a function for ALL partitions could be quite tricky.
- That was one of the achievements of Louis Bachelier's 1900 Ph.D. thesis.
[Figure: paths from (x0,t0) through intermediate points at times t1, t2, with transition probabilities p(x1,t1 | x0,t0) and p(x2,t2 | x1,t1)]
• 15. From SDE to PDE and back – first pass - XI
- Bachelier guessed a functional: p(x,t) = A(t).exp(-B(t).x^2)
- And proceeded to prove that the following had to be observed:
- p(x,t) = p(x=0,t).exp{ -π.p(x=0,t)^2.x^2 }
- Which is a truly beautiful relationship between the distribution peak and its overall shape.
- Bachelier ended up with the revered Gaussian distribution (writing the peak as p(x=0,t) = H/√t for a constant H):
- p(x,t) = (H/√t).exp{ -π.H^2.x^2/t }
- Which is a solution of the Fokker-Planck equation.
- In life, it is almost always a Gaussian, with "almost" and "always" being loosely defined.
- Let's see if we can now come up with the same result without having to guess.
• 16. Side note on Bachelier
- P(x,t) = ∫_{x'=-∞}^{x'=+∞} P(x',t').PR(x',t',x,t).dx'
- NOW is the big one: we assume that we can write PR(x',t',x,t) = P(x-x', t-t')
- In many ways this seems quite impossible: you need to find a solution such that, at all times, the distribution function has the same functional form as the conditional probability.
- In the graph in the previous slide, if the width of the line is a somewhat crude representation of the conditional probability PR(x',t',x,t), and the size of the dot is also a somewhat crude representation of the probability distribution, then for any and every possible combination of intermediate points and times you need to find a function such that BOTH the size of the dots and the width of the lines are described by the same functional form:
- P(x,t) = function(x,t)
- PR(x',t',x,t) = function(x-x', t-t')
- That seems quite impossible.
• 17. Side note on Bachelier - II
- This sort of points us towards some sort of Pascal triangle or binomial distribution, for which each step is identical, and which somehow builds up over repeated steps.
- Note that the binomial distribution is different, because at each step it is a discrete, limited jump to the nearest neighbors, not a full function.
- So this is not exactly the same, but it looks like, if we need to build something that is true for ANY and EVERY possible intermediate time and position, we might have a shot if we build it for EVERY smallest possible increment, and have some sort of SCALING property to scale that up when integrating.
- This is why it looks like we should choose the Gaussian, because the distribution of a variable that is a sum of variables each following a Gaussian will itself follow a Gaussian, and the variance will be the sum of the individual variances. (So it SCALES with fractal dimension 2, as we would say these days.)
• 18. Sidebar on Gaussian distribution
- It seems that the Gaussian distribution is a valuable candidate for such a function. Bachelier did not explicitly describe how he came up with his guess for the Gaussian, but he was surely well versed in the Pascal triangle, the binomial distribution, and the scaling properties of the Gaussian.
- If x1 and x2 are two independent random variables following the Gaussian distribution, then
- y = x1 + x2 will also follow a Gaussian distribution, and
- <∆y^2> = b^2.∆t = σ^2.∆t = <∆(x1+x2)^2> = <∆x1^2> + <∆x2^2> = σ1^2.∆t + σ2^2.∆t
- σ_{x1+x2}^2 = σ_{x1}^2 + σ_{x2}^2
- In particular, in the case of N independent random increments x_i following a Gaussian distribution, the sum of those random increments will also be a random variable following a Gaussian distribution, of variance:
- <∆(Σ x_i)^2> = 𝔌[(Σ x_i)^2] = σ^2.∆t = ∆t.Σ σ_{x_i}^2
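The variance-additivity claim is easy to verify by sampling. A minimal stdlib-only sketch (the standard deviations 0.7 and 1.3 are arbitrary choices, not from the deck):

```python
import random
import statistics

random.seed(0)
N = 200_000
s1, s2 = 0.7, 1.3   # standard deviations of two independent Gaussian increments

x1 = [random.gauss(0, s1) for _ in range(N)]
x2 = [random.gauss(0, s2) for _ in range(N)]
y = [u + v for u, v in zip(x1, x2)]   # y = x1 + x2

var_sum = statistics.pvariance(y)
# sigma_{x1+x2}^2 = sigma_{x1}^2 + sigma_{x2}^2, up to sampling noise
print(abs(var_sum - (s1 ** 2 + s2 ** 2)) < 0.05)
```

The check prints True: the sample variance of the sum matches σ1² + σ2² = 2.18 to within sampling error.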
• 19. Sidebar on Gaussian distribution - II
- The invariance of the Gaussian distribution under addition is closely connected with the CLT (Central Limit Theorem), which states that a suitably normalized sum of many independent variables with finite variances (they do not have to follow a Gaussian distribution) will converge to a Gaussian distribution: another reason why the Gaussian distribution is awesome.
- Note that the Gaussian distribution is invariant under addition for exponent 2:
- σ_{x1+x2}^2 = σ_{x1}^2 + σ_{x2}^2
- Schroeder points out that a number of distributions are invariant under addition for a different exponent D (not diffusion, but more like a dimension coefficient; see fractal theory).
- In particular, the celebrated Cauchy distribution P(x) = 1/(π.(1+x^2)) is invariant under addition for exponent D = 1.
- As another example, for D = 1/2, the distribution that is invariant under addition is:
- P(x) = (1/√(2π)).x^{-3/2}.exp(-1/(2x))
• 20. Sidebar on Gaussian distribution - III
- Another note on the Gaussian and Cauchy distributions.
- Suppose that we have N identically distributed random variables.
- For the Gaussian distribution:
- <∆X^2> = <∆(Σ x_i)^2> = 𝔌[(Σ x_i)^2] = σ^2.∆t = ∆t.Σ σ_{x_i}^2 = N.∆t.σ_i^2
- σ^2 = N.σ_i^2
- And so the AVERAGE (not the SUM) of those variables will be such that:
- <∆(X/N)^2> = (1/N^2).<∆(Σ x_i)^2> = (1/N^2).N.∆t.σ_i^2 = (1/N).∆t.σ_i^2 = ∆t.σ_avg^2
- So for variables following a Gaussian distribution, the more measurements you make and then average, the more precise an estimate of the average you have:
- σ_avg^2 = (1/N).σ_i^2, or equivalently: σ_avg = (1/√N).σ_i
• 21. Sidebar on Gaussian distribution - IV
- This is why, in physics, for most experiments you believe that the more measurements you make, the better an estimate you will get.
- This is the often quoted "convergence in 1/√N" for most computer simulations.
- Note that this is not true in finance, where you do not have the luxury of making a lot of experiments on the same physical system. In finance you do not have the luxury of a control experiment, nor do you have the luxury of a steady-state solution.
- In contrast, the Cauchy distribution is such that: σ = N.σ_i (with σ understood as the scale parameter, since the variance diverges)
- And so the distribution of the AVERAGE of N identically distributed Cauchy variables is the SAME as the original distribution.
- Averaging Cauchy variables does not improve the estimate.
- Averaging Gaussian variables improves the estimate.
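The contrast between the two distributions can be seen numerically. Since the Cauchy variance diverges, the sketch below (stdlib only, my own construction, using the interquartile range as the spread measure and N = 16 terms per average) compares how the spread of the average behaves: it shrinks roughly as 1/√N for Gaussians, and not at all for Cauchy variables:

```python
import math
import random
import statistics

random.seed(1)

def iqr(data):
    # Interquartile range: a spread measure that exists even for Cauchy samples
    qs = statistics.quantiles(data, n=4)
    return qs[2] - qs[0]

def avg_samples(draw, n_terms, n_trials=20_000):
    # Distribution of the average of n_terms i.i.d. draws
    return [sum(draw() for _ in range(n_terms)) / n_terms for _ in range(n_trials)]

gauss = lambda: random.gauss(0, 1)
cauchy = lambda: math.tan(math.pi * (random.random() - 0.5))  # standard Cauchy

g1, g16 = iqr(avg_samples(gauss, 1)), iqr(avg_samples(gauss, 16))
c1, c16 = iqr(avg_samples(cauchy, 1)), iqr(avg_samples(cauchy, 16))

print(g16 < 0.5 * g1)           # Gaussian spread shrinks (by ~1/4 for N = 16)
print(0.8 < c16 / c1 < 1.2)     # Cauchy spread is unchanged by averaging
```

Both checks print True: averaging 16 Gaussians divides the spread by about 4, while the average of 16 Cauchy variables is distributed exactly like a single one.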
• 22. Side note on Bachelier - III
- It is also possible that, as a Frenchman, Bachelier was familiar with the visual illustrations of Charles-Joseph Minard, and might have thought of the process he was trying to describe as an army of probability diffusing in time like the Great Army during the Russian campaign.
- He was also certainly familiar with the Galton board, which might have given him some insight into how a security diffuses.
- One thing is for sure, I am no Minard:
• 23. Side note on Bachelier - IV
- Luc trying to illustrate a flow of probability so that the widths of the lines are proportional to the conditional probability, and the sizes of the dots proportional to the probability density.
[Figure: probability flow from x0 at t0 through times t1, t2, with transition probabilities P(x1,t1 | x0,t0) and P(x2,t2 | x1,t1)]
• 24. Side note on Bachelier - V
- Charles-Joseph Minard illustrating Napoleon's Russian campaign, and totally schooling Luc.
• 25. The Galton board (also called bean machine)
- I just could not resist, because the Galton board is so beautiful, and it is almost certain that Louis Bachelier knew about it and might have gotten inspiration from it.
- Sir Galton himself:
- "Order in Apparent Chaos: I know of scarcely anything so apt to impress the imagination as the wonderful form of cosmic order expressed by the Law of Frequency of Error. The law would have been personified by the Greeks and deified, if they had known of it. It reigns with serenity and in complete self-effacement amidst the wildest confusion. The huger the mob, and the greater the apparent anarchy, the more perfect is its sway. It is the supreme law of Unreason. Whenever a large sample of chaotic elements are taken in hand and marshalled in the order of their magnitude, an unsuspected and most beautiful form of regularity proves to have been latent all along."
• 26. The Galton board (also called bean machine) - II
• 27. The Galton board (also called bean machine) - III
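A Galton board is easy to simulate: each ball deflects left or right at every row of pegs, so the final bin is a binomial random walk, and by de Moivre-Laplace the bin counts approach a Gaussian. A minimal stdlib-only sketch (50 rows and 50,000 balls are arbitrary choices):

```python
import random
import statistics

random.seed(2)
rows, balls = 50, 50_000

# Each ball deflects left (-1) or right (+1) at each of `rows` pegs;
# its final bin is the sum of the deflections (a binomial random walk).
finals = [sum(random.choice((-1, 1)) for _ in range(rows)) for _ in range(balls)]

# De Moivre-Laplace: the bins approach N(0, rows)
print(abs(statistics.mean(finals)) < 0.2)
print(abs(statistics.pvariance(finals) - rows) < 2.0)
```

Both checks print True: the empirical mean is near 0 and the variance near the number of rows, exactly the N(0, t) scaling of the Brownian limit.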
• 28. From SDE to PDE and back – first pass - X
- We will follow 1), the Green propagators and the conditional probabilities concept.
- We are now dealing with a stochastic variable X(t).
- We will be preoccupying ourselves with an expansion in time, so we define:
- ∆X = X(t+∆t) - X(t)
- We further assume that:
- <∆X> = E[∆X] = F1(X(t),t).∆t (drift term)
- <∆X^2> = E[∆X^2] = F2(X(t),t).∆t (diffusion term)
- All higher orders <∆X^k> are of order ∆t^2 at least.
- Note: there are anomalous diffusion processes for which those assumptions would not be verified.
- We are essentially looking at a solution where p(x,t) = ÎŽ(x - X(t)), where ÎŽ is the Dirac function, and looking at a "small" interval in time after.
• 29. From SDE to PDE and back – first pass - XI
- IF (k = 1):
- m_1(t+∆t) - m_1(t) = <x>_{t+∆t} - <x>_t = ∆t.∫_{-∞}^{+∞} M1(x,t).p(x,t).dx
- m_k(t) = <x^k>_t = ∫_{-∞}^{+∞} p(x,t).x^k.dx
- m_1(t) = <x>_t = ∫_{-∞}^{+∞} p(x,t).x.dx = ∫_{-∞}^{+∞} ÎŽ(x - X(t)).x.dx = X(t)
- m_1(t+∆t) - m_1(t) = <x>_{t+∆t} - <x>_t = ∆t.∫_{-∞}^{+∞} M1(x,t).ÎŽ(x - X(t)).dx
- m_1(t+∆t) - m_1(t) = <x>_{t+∆t} - <x>_t = ∆t.M1(X(t),t)
- So we can equate with: <∆X> = E[∆X] = F1(X(t),t).∆t
- And have F1(X(t),t) = M1(X(t),t)
- Note: since we were wrong for a large part of the first deck because of Ito/Stratonovich, we are proceeding with caution here.
• 30. From SDE to PDE and back – first pass - XII
- m_1(t+∆t) - m_1(t) = <x>_{t+∆t} - <x>_t = ∆t.M1(X(t),t)
- This is why this term is referred to as the drift. It is first order in time, where the ratio of the average displacement to the time interval is the velocity M1(X(t),t).
- We are now concerning ourselves with the second order in time, and pay attention to subtracting, or taking into account, the drift term in the correct manner.
[Figure: <x(t)> = X(t) drifting to <x(t+∆t)> over the interval [t, t+∆t]]
• 31. From SDE to PDE and back – first pass - XIII
- IF (k = 2): m_2(t+∆t) - m_2(t) = <x^2>_{t+∆t} - <x^2>_t = I2(2).∆t
- I2(k) = ∫_{-∞}^{+∞} M1(x,t).p(x,t).k.x^{k-1}.dx + ∫_{-∞}^{+∞} M2(x,t).p(x,t).k.(k-1).x^{k-2}.dx
- I2(2) = ∫_{-∞}^{+∞} M1(x,t).p(x,t).2x.dx + ∫_{-∞}^{+∞} M2(x,t).p(x,t).2.dx
- <x>_{t+∆t} - <x>_t = ∆t.M1(X(t),t)
- <x>_{t+∆t} = <x>_t + ∆t.M1(X(t),t)
- (<x>_{t+∆t})^2 = (<x>_t + ∆t.M1(X(t),t))^2
- (<x>_{t+∆t})^2 = (<x>_t)^2 + 2.<x>_t.∆t.M1(X(t),t) + (∆t.M1(X(t),t))^2
- <∆X^2> = E[∆X^2] = F2(X(t),t).∆t = <(X - <X>)^2>
- <∆X^2> = <(X - <X>)^2> = <X^2 - 2.X.<X> + <X>^2>
- <∆X^2> = <X^2> - 2.<X>.<X> + <X>^2 = <X^2> - <X>^2
• 32. From SDE to PDE and back – first pass - XIV
- <x^2>_{t+∆t} - <x^2>_t = I2(2).∆t
- I2(2) = ∫_{-∞}^{+∞} M1(x,t).p(x,t).2x.dx + ∫_{-∞}^{+∞} M2(x,t).p(x,t).2.dx
- I2(2) = ∫_{-∞}^{+∞} M1(x,t).ÎŽ(x - X(t)).2x.dx + ∫_{-∞}^{+∞} M2(x,t).ÎŽ(x - X(t)).2.dx
- I2(2) = M1(X(t),t).2.X(t) + 2.M2(X(t),t)
- <x^2>_{t+∆t} - <x^2>_t = 2.M1(X(t),t).X(t).∆t + 2.M2(X(t),t).∆t
- <∆X^2> = E[∆X^2] = F2(X(t),t).∆t (diffusion term)
- <∆X^2> = <(x - <x>_{t+∆t})^2>_{t+∆t} = <x^2>_{t+∆t} - (<x>_{t+∆t})^2
- <x>_{t+∆t} - <x>_t = ∆t.M1(X(t),t)
- So: (<x>_{t+∆t})^2 = (<x>_t)^2 + 2.M1(X(t),t).<x>_t.∆t + (∆t.M1(X(t),t))^2
• 33. From SDE to PDE and back – first pass - XV
- So again, apologies for being pedestrian here, but I have seen too many textbooks that go: "let's set the drift to 0, then we get that; then it is easy to show, when adding the drift back, that it only changes the first order and does not impact the diffusion term."
- So we are trying to make sure that we are on firm ground here.
- Also, just to reassure us: all the integrals are normal integrals over the probability distribution and the possible outcomes x; there is no Ito versus Stratonovich here.
- <x^2>_{t+∆t} - <x^2>_t = 2.M1(X(t),t).X(t).∆t + 2.M2(X(t),t).∆t
- And <x^2>_t = ∫_{-∞}^{+∞} x^2.p(x,t).dx = ∫_{-∞}^{+∞} x^2.ÎŽ(x - X(t)).dx = X(t)^2
- <x^2>_{t+∆t} = X(t)^2 + 2.M1(X(t),t).X(t).∆t + 2.M2(X(t),t).∆t
- <∆X^2> = <(x - <x>_{t+∆t})^2>_{t+∆t} = <x^2>_{t+∆t} - (<x>_{t+∆t})^2
- <x>_{t+∆t} - <x>_t = ∆t.M1(X(t),t)
- So: (<x>_{t+∆t})^2 = (<x>_t)^2 + 2.M1(X(t),t).<x>_t.∆t + (∆t.M1(X(t),t))^2
• 34. From SDE to PDE and back – first pass - XVI
- (<x>_{t+∆t})^2 = (<x>_t)^2 + 2.M1(X(t),t).<x>_t.∆t + (∆t.M1(X(t),t))^2
- <x>_t = ∫_{-∞}^{+∞} p(x,t).x.dx = ∫_{-∞}^{+∞} ÎŽ(x - X(t)).x.dx = X(t)
- (<x>_{t+∆t})^2 = X(t)^2 + 2.M1(X(t),t).X(t).∆t + (∆t.M1(X(t),t))^2
- <∆X^2> = <(x - <x>_{t+∆t})^2>_{t+∆t} = <x^2>_{t+∆t} - (<x>_{t+∆t})^2
- <∆X^2> = <x^2>_{t+∆t} - X(t)^2 - 2.M1(X(t),t).X(t).∆t - (∆t.M1(X(t),t))^2
- And: <x^2>_{t+∆t} = X(t)^2 + 2.M1(X(t),t).X(t).∆t + 2.M2(X(t),t).∆t
- So: <∆X^2> = 2.M2(X(t),t).∆t - (∆t.M1(X(t),t))^2 = F2(X(t),t).∆t
- In the limit of small time increments: F2(X(t),t) = 2.M2(X(t),t)
• 35. From SDE to PDE and back – first pass - XVII
- Wait, what did we get? Let's recap.
- We assumed that the Probability Distribution Function (PDF) follows the Fokker-Planck Partial Differential Equation (PDE):
- ∂p(x,t)/∂t = -∂/∂x [ M1(x,t).p(x,t) - ∂/∂x [ M2(x,t).p(x,t) ] ]
- Note: we have not proven yet that the FP can be derived from the Chapman-Kolmogorov; that block we will need to tackle separately. For now, we assume that the PDF follows a Fokker-Planck PDE, and try to tie this to an SDE or SIE.
- Looking at a random process X(t) such that p(x,t) = ÎŽ(x - X(t)):
- <∆X> = E[∆X] = <x>_{t+∆t} - <x>_t = F1(X(t),t).∆t (drift term)
- <∆X^2> = E[∆X^2] = <(x - <x>_{t+∆t})^2>_{t+∆t} = F2(X(t),t).∆t (diffusion term)
- We showed that F1(X(t),t) = M1(X(t),t) and F2(X(t),t) = 2.M2(X(t),t)
• 36. From SDE to PDE and back – first pass - XVIII
- So we would love to jump ahead of ourselves and write something like this: the SDE
- dX(t) = a(t,X(t)).dt + b(t,X(t)).([).dW
- has:
- <∆X> = a(t,X(t)).∆t
- and
- <∆X^2> = b(t,X(t))^2.∆t
- and so will correspond to the PDE:
- ∂p(x,t)/∂t = -∂/∂x [ a(t,x).p(x,t) - ∂/∂x [ (b(t,x)^2/2).p(x,t) ] ]
- That is quite tempting indeed, but we have spent so much time trying not to be tricked into forgetting a small term that changes everything; this is not the time to be cavalier and glance over the last few steps.
• 37. From SDE to PDE and back – first pass - XIX
- It is so tempting, though, that we are going to use some illustrative examples.
- Remember, when we write an SDE:
- dX(t) = a(t,X(t)).dt + b(t,X(t)).([).dW
- we really are writing an SIE, because random processes are NOT differentiable:
- X(t_b) - X(t_a) = ∫_{t=t_a}^{t=t_b} dX(t) = ∫_{t=t_a}^{t=t_b} a(t,X(t)).dt + ∫_{t=t_a}^{t=t_b} b(t,X(t)).([).dW(t)
- We need to calculate from this equation the quantities:
- <∆X> = E[∆X] = <X>_{t+∆t} - <X>_t (drift term)
- <∆X^2> = E[∆X^2] = <(X - <X>_{t+∆t})^2>_{t+∆t} (diffusion term)
• 38. Some notes on the notations
- dX(t) = a(t,X(t)).dt + b(t,X(t)).([).dW
- We will try to be rigorous when needed, and when it does not hide the intuition.
- Usually UPPER CASE is for stochastic variables.
- Usually lower case is for regular variables.
- Say the Gaussian distribution below is the distribution for the process dX(t) = dW.
- We should really be writing the function using lower case:
- p(x,t) = (1/√(2πt)).exp(-x^2/(2t)) is the probability density function.
- Really, to be rigorous, we should write it as p_X(x,t),
- with the distribution function being P_X(x,t):
- P_X(x,t) = Probability(X ≀ x, t) = ∫_{y=-∞}^{y=x} p_X(y,t).dy
• 39. Some notes on the notations - II
- PDF, Probability Density Function: p_X(x,t)
- Distribution function: P_X(x,t)
- P_X(x,t) = Probability(X ≀ x, t) = ∫_{y=-∞}^{y=x} p_X(y,t).dy
- p_X(x,t) = ∂P_X(x,t)/∂x
- This highlights the fact that capital letters are reserved for stochastic variables, and lower case for just regular variables of a function.
- Usually we just use one or the other without paying too much attention; I will try to be rigorous on this one, but I know that I will fail.
• 40. Some notes on the notations - III
- Also, instead of saying:
- "Looking at a random process X(t) such that p(x,t) = ÎŽ(x - X(t))" and then computing quantities at time (t+ÎŽt), we should really, to be rigorous, express this in terms of conditional probabilities: p_X(x, t+ÎŽt | x = X(t), t)
- More generally, for a starting condition: p(x, t=t0) = ÎŽ(x - x0)
- Sometimes written: p(x, t=t0) = ÎŽ(x - x0), with the added x0 = X0 = X(t0)
- We then are concerning ourselves with: p_X(x,t | x0,t0)
- So, for example, really:
- <∆X> = E[∆X] = <X>_{t+∆t} - <X>_t (drift term)
- <∆X> = ∫_{y=-∞}^{y=+∞} y.p_X(y, t+∆t | x = X(t), t).dy - ∫_{y=-∞}^{y=+∞} y.p_X(y, t | x = X(t), t).dy
- <∆X> = ∫_{y=-∞}^{y=+∞} y.p_X(y, t+∆t | x = X(t), t).dy - ∫_{y=-∞}^{y=+∞} y.ÎŽ(y - X(t)).dy
- <∆X> = ∫_{y=-∞}^{y=+∞} y.p_X(y, t+∆t | x = X(t), t).dy - X(t)
• 41. From SDE to PDE and back – first pass - XX
- Just to recap where we stand.
- Assuming we have a Fokker-Planck equation of the form:
- ∂p(x,t)/∂t = -∂/∂x [ M1(x,t).p(x,t) - ∂/∂x [ M2(x,t).p(x,t) ] ]
- Looking at a specific solution p(x,t) = ÎŽ(x - X(t)), where ÎŽ is the Dirac function, and looking at a "small" interval in time after, we have shown that:
- <∆X> = E[∆X] = <x>_{t+∆t} - <x>_t = F1(X(t),t).∆t (drift term)
- <∆X^2> = E[∆X^2] = <(x - <x>_{t+∆t})^2>_{t+∆t} = F2(X(t),t).∆t (diffusion term)
- We showed that F1(X(t),t) = M1(X(t),t) and F2(X(t),t) = 2.M2(X(t),t)
- We NOW look at the formulation: dX(t) = a(t,X(t)).dt + b(t,X(t)).([).dW
- And are looking to map a(t,X(t)), b(t,X(t)) ⟺ F1(X(t),t), F2(X(t),t)
- Since we already have: M1(x,t), M2(x,t) ⟺ F1(X(t),t), F2(X(t),t)
• 42. From SDE to PDE and back – first pass - XXI
- ∂p(x,t)/∂t = -∂/∂x [ M1(x,t).p(x,t) - ∂/∂x [ M2(x,t).p(x,t) ] ]
- m_k(t) = <x^k>_t = ∫_{-∞}^{+∞} p(x,t).x^k.dx
- d/dt m_k(t) = I2(k)
- I2(k) = ∫_{-∞}^{+∞} M1(x,t).p(x,t).k.x^{k-1}.dx + ∫_{-∞}^{+∞} M2(x,t).p(x,t).k.(k-1).x^{k-2}.dx
- In particular, we have the following mapping:
- <∆X> = E[∆X] = <x>_{t+∆t} - <x>_t = F1(X(t),t).∆t (drift term)
- <∆X^2> = E[∆X^2] = <(x - <x>_{t+∆t})^2>_{t+∆t} = F2(X(t),t).∆t (diffusion term)
- We showed that:
- F1(X(t),t) = M1(X(t),t) and F2(X(t),t) = 2.M2(X(t),t)
• 44. A couple of simple examples: dX = a.dt
- We feel that this one is going to be a little tough.
- So better to start with simpler forms of the SDE, rather than the general one, to gain some intuition.
- Ideally, if we had an explicit solution for:
- dX(t) = a(t,X(t)).dt + b(t,X(t)).([).dW
- we could calculate from this equation the quantities for ALL times:
- <∆X> = E[∆X] = <X>_{t+∆t} - <X>_t (drift term)
- <∆X^2> = E[∆X^2] = <(X - <X>_{t+∆t})^2>_{t+∆t} (diffusion term)
- That would be awesome, but we have a feeling that we are going to look at small expansions in time around the starting point, and we know by now that anything with the word "expansion" in it, when dealing with stochastic calculus, is fraught with danger.
• 45. A couple of simple examples - II: dX = a.dt
- dX(t) = a(t,X(t)).dt + b(t,X(t)).([).dW
- How about we start with a(t,X(t)) = a and b(t,X(t)) = 0:
- dX(t) = a.dt
- X(t) = a.t + C, but let's set C = 0
- X(t)^2 = (a.t)^2
- <∆X> = E[∆X] = <X>_{t+∆t} - <X>_t (drift term)
- <∆X> = a.(t+∆t) - a.t = a.∆t (note that the average is only over one point)
- <∆X> = E[∆X] = <x>_{t+∆t} - <x>_t = F1(X(t),t).∆t
- So: F1(X(t),t) = a = F1 = constant
- Note: we are overly pedestrian at this point, but we need to make sure that we are on firm footing before tackling the generalized equation; so, apologies, but bear with me, or jump ahead a number of slides.
• 46. A couple of simple examples - III: dX = a.dt
- <∆X^2> = E[∆X^2] = <(X - <X>_{t+∆t})^2>_{t+∆t} (diffusion term)
- <∆X^2> = <([a.(t+∆t)] - <a.(t+∆t)>_{t+∆t})^2>_{t+∆t} = 0
- <∆X^2> = E[∆X^2] = <(x - <x>_{t+∆t})^2>_{t+∆t} = F2(X(t),t).∆t (diffusion term)
- So we have: F2(X(t),t) = 0
- a(t,X(t)) = a, b(t,X(t)) = 0 ⟺ F1(X(t),t) = a, F2(X(t),t) = 0
- Trivial note: we did not say that F2(X(t),t) = F2 = b = 0; we are just saying that BOTH F2(X(t),t) and b are equal to 0, not that they are equal to each other.
- F1(X(t),t) = M1(X(t),t) and F2(X(t),t) = 2.M2(X(t),t)
- F1(X(t),t) = a, F2(X(t),t) = 0 ⟺ M1(X(t),t) = a, M2(X(t),t) = 0
- ∂p(x,t)/∂t = -∂/∂x [ M1(x,t).p(x,t) - ∂/∂x [ M2(x,t).p(x,t) ] ] = -∂/∂x [ a.p(x,t) ]
• 47. A couple of simple examples - III-a: dX = a.dt
- Note that in our case we have an explicit solution, so we have a formula for ALL times:
- <∆X>(t) = a.t
- A fortiori for small time increments:
- <∆X>_∆t = a.∆t
- And of course:
- <∆X^2>(t) = 0
- <∆X^2>_∆t = 0
• 48. A couple of simple examples - IV: dX = a.dt
- ∂p(x,t)/∂t = -∂/∂x [ a.p(x,t) ], also called the advection equation.
- We are now trying to solve this equation.
- We can guess a form:
- p(x,t) = f(x - a.t): the function just translates along the x axis with velocity a.
- ∂p(x,t)/∂t = -a.f'(x - a.t)
- ∂p(x,t)/∂x = f'(x - a.t)
- So indeed: ∂p(x,t)/∂t = -a.∂p(x,t)/∂x = -∂J(x,t)/∂x = -∂/∂x [ a.p(x,t) ]
- With the initial condition: p(x,t0) = ÎŽ(x - X(t0)) = ÎŽ(x - X0)
- p(x,t) = f(x - a.t), and at time t = t0: f(x - a.t0) = ÎŽ(x - X0)
• 49. A couple of simple examples - V: dX = a.dt
- With the initial condition: p(x,t0) = ÎŽ(x - X(t0)) = ÎŽ(x - X0)
- p(x,t) = f(x - a.t), and at time t = t0: f(x - a.t0) = ÎŽ(x - X0)
- So p(x,t) = ÎŽ(x - X0 - a.(t - t0))
- Just to be ever so pedestrian at this point:
- f(x - a.t0) = ÎŽ(x - X0)
- We change the variable x1 = x - a.t0, so x - X0 = x1 + a.t0 - X0
- f(x1) = ÎŽ(x1 - X0 + a.t0)
- We now change again to the variable x2 = x1 + a.t,
- so x1 - X0 + a.t0 = x2 - a.t - X0 + a.t0
- f(x2 - a.t) = ÎŽ(x2 - a.t - X0 + a.t0)
- We change again, trivially, back to x = x2
• 50. A couple of simple examples - VI: dX = a.dt
- f(x - a.t) = ÎŽ(x - a.t - X0 + a.t0)
- p(x,t) = f(x - a.t) = ÎŽ(x - X0 - a.(t - t0))
- This seems rather obvious, but again we want to familiarize ourselves with the structure of the proof before we apply it to more complicated functionals.
- This is what physicists like to call a propagation equation (in one dimension), because whatever the initial shape of p(x,t0), it is conserved in time and is just translated by an amount a.(t - t0); so, thinking of x as a space dimension and t obviously as time, the initial shape moves at a velocity a.
- I say whatever shape, because you can always express any function over the complete span of the Dirac functions.
- Any function p(x,t) can be written as a sum (integral) of Dirac delta peaks of height the value of the function at that position:
- p(x,t) = ∫_{X0=-∞}^{X0=+∞} p(X0,t).ÎŽ(x - X0).dX0
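The translated-profile solution can be verified directly by finite differences: any smooth profile f(x - a.t) should satisfy ∂p/∂t = -a.∂p/∂x. A minimal stdlib-only sketch (the Gaussian bump, the velocity a = 0.8, and the test point are arbitrary choices):

```python
import math

a = 0.8                      # constant drift / advection velocity

def p(x, t):
    # Any smooth profile translated at speed a; a Gaussian bump here
    return math.exp(-(x - a * t) ** 2)

x0, t0, h = 0.3, 1.0, 1e-5
dpdt = (p(x0, t0 + h) - p(x0, t0 - h)) / (2 * h)   # central difference in t
dpdx = (p(x0 + h, t0) - p(x0 - h, t0)) / (2 * h)   # central difference in x

print(abs(dpdt + a * dpdx) < 1e-6)   # dp/dt = -a.dp/dx
```

The check prints True: the advection equation holds for the translating profile, independently of the chosen shape.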
• 51. A couple of simple examples - VII: dX = a.dt
- So, for the simple propagation / advection case: ∂p(x,t)/∂t = -∂/∂x [ a.p(x,t) ]
[Figure: <x(t)> = X(t) translating to <x(t+∆t)> over the interval [t, t+∆t]]
• 52. A couple of simple examples - VIII: dX = a.dt
- That picture really becomes:
[Figure: starting peak at <x(t0)> = X0 translating to <x(t)>, with <∆X>(t) = a.t and <∆X^2>(t) = 0]
• 54. Second simple example: dX = b.dW
- dX(t) = a(t,X(t)).dt + b(t,X(t)).([).dW
- We choose a(t,X(t)) = 0 and b(t,X(t)) = b.
- So the only thing that we can write with some certainty, now that we have to deal with a stochastic term that is non-zero, is the SIE:
- X(t_b) - X(t_a) = ∫_{t=t_a}^{t=t_b} dX(t) = ∫_{t=t_a}^{t=t_b} b(t,X(t)).([).dW(t) = b.∫_{t=t_a}^{t=t_b} 1.([).dW(t)
- NOW, because we are integrating a constant over dW, any convention that we take (Ito, Strato, or any other point in between in the partition) will converge to the same value:
- ∫_{t=t_a}^{t=t_b} 1.([).dW(t) = ∫_{t=t_a}^{t=t_b} 1.(∘).dW(t) = W(t_b) - W(t_a)
- Again we are lucky enough to have an explicit solution for X(t):
- X(t_b) - X(t_a) = b.∫_{t=t_a}^{t=t_b} 1.([).dW(t) = b.(W(t_b) - W(t_a))
• 55. Second simple example - II: dX = b.dW
- Remember that:
- The Stratonovich integral is defined as:
- ∫_{t=t_a}^{t=t_b} f(X(t)).(∘).dX(t) = lim_{N→∞} { Σ_{k=1}^{k=N} f([X(t_k) + X(t_{k+1})]/2).[X(t_{k+1}) - X(t_k)] }
- The Ito integral is defined as:
- ∫_{t=t_a}^{t=t_b} f(X(t)).([).dX(t) = lim_{N→∞} { Σ_{k=1}^{k=N} f(X(t_k)).[X(t_{k+1}) - X(t_k)] }
- Running the risk of sounding too obvious here: when f(x) = 1,
- f([X(t_k) + X(t_{k+1})]/2) = f(X(t_k)) = 1
- So both sums are exactly equal, hence will converge to the same value when N → ∞.
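The two discretized sums can be compared on a simulated Brownian path. A stdlib-only sketch (my own construction, not from the deck): for f = 1 the Ito and Stratonovich sums agree term by term, while for f(x) = x they differ by half the quadratic variation, which tends to 1/2 on [0,1]:

```python
import random

random.seed(3)
N = 200_000
dt = 1.0 / N

# One Brownian path W on [0, 1]
W = [0.0]
for _ in range(N):
    W.append(W[-1] + random.gauss(0, dt ** 0.5))

def ito(f):
    # Left-point (Ito) Riemann sum
    return sum(f(W[k]) * (W[k + 1] - W[k]) for k in range(N))

def strato(f):
    # Midpoint (Stratonovich) Riemann sum
    return sum(f((W[k] + W[k + 1]) / 2) * (W[k + 1] - W[k]) for k in range(N))

# f = 1: the two sums are identical term by term
print(abs(ito(lambda x: 1.0) - strato(lambda x: 1.0)) < 1e-12)

# f = x: they differ by half the quadratic variation, which tends to 1/2
print(abs(strato(lambda x: x) - ito(lambda x: x) - 0.5) < 0.05)
```

Both checks print True: the convention is irrelevant for a constant integrand, and very much relevant as soon as the integrand depends on X.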
• 56. Second simple example - III: dX = b.dW
- <∆X> = E[∆X] = <X>_{t+∆t} - <X>_t (drift term)
- <∆X^2> = E[∆X^2] = <(X - <X>_{t+∆t})^2>_{t+∆t} (diffusion term)
- For the function X(t) = W(t), where W(t) is the Wiener process (Brownian motion).
- Always remember that almost all the time it is always a Gaussian.
- We are dealing here with the standard Brownian motion (you have to start somewhere).
- We will shortly do a quick sidebar as to why the Gaussian is so widely used.
• 57. Second simple example - IV: dX = b.dW
- The usual definition of a Brownian motion is that (see also the Bachelier deck):
- It starts at zero: W(t=0) = 0.
- That one we can easily relax to a non-zero starting point; this is just for convenience.
- It has independent increments: for any and every partition in time {t_k}, the variables defined as {W(t_{k+1}) - W(t_k)} are independent random variables.
- It has stationary increments: for any and every partition in time {t_k}, the distribution of the random variable {W(t_k) - W(t_j)} is the same distribution (not the same value) as the distribution of the random variable {W(t_{k+h}) - W(t_{j+h})}.
- It has continuous sample paths: no jumps, no traveling in time.
- For every point in time t, the distribution of W(t) is the Normal distribution (Gaussian function) N(0,t).
• 58. Second simple example - V: dX = b.dW
- A couple of notes on terminology.
- The Gaussian function usually has 3 parameters:
- G(a,b,c,x) = a.exp(-(x-b)^2/c)
- The "normalized" Gaussian function has only 2 parameters, by solving for a so that:
- ∫_{x=-∞}^{x=+∞} G(a,b,c,x).dx = 1
- It is equal to:
- G(b,c,x) = (1/√(πc)).exp(-(x-b)^2/c)
- A normalized Gaussian function is the Probability Density Function of a Gaussian distribution. This is also known as the Normal distribution.
• 59. Second simple example - VI: dX = b.dW
- If b = 0 and c = 1, that Normal distribution function is known as the Standard Normal distribution function. It is also sometimes referred to as the Z-distribution (think of the Z-score).
- So "normal" maybe means that it has been normalized, or that it is so used everywhere that we are used to the bell shape and it should be "normal" to expect a Gaussian.
- We will go over later why it is "normal" to usually expect a Gaussian. In short:
- Gauss is awesome, he is the Prince of Mathematicians.
- The Gaussian distribution is the limit of the binomial distribution that we love.
- It is also the limit of a lot of other distributions, as long as the second moment is finite (Central Limit Theorem).
- If you truncate the Master equation to the second order, the solution will be a Gaussian.
- The distribution of a sum of variables following a Gaussian will ALSO be a Gaussian.
- The Gaussian is such that only the 1st and 2nd cumulants are non-zero.
- Using the principle of Maximum Entropy, you recover a Gaussian when you only know the first 2 moments.
- ...and a lot more reasons why the Gaussian is awesome.
• 60. Gauss is awesome
- When he was 12 years old, he solved: S(n) = Σ_{i=1}^{i=n} i = n.(n+1)/2
• 61. Second simple example - VII: dX = b.dW
- All right, back to the Brownian motion for now.
- From the definition of the Brownian motion, a couple of properties ensue.
- Noting 𝔌 the expected value (integral over the distribution), we already know that:
- 𝔌{W(t)} = 0
- 𝔌{W(t)^2} = t
- 𝔌{W(t).W(t')} = min(t,t')
- 𝔌{(W(t) - W(t'))^2} = 𝔌{W(t)^2} + 𝔌{W(t')^2} - 2.𝔌{W(t).W(t')}
- 𝔌{(W(t) - W(t'))^2} = t + t' - 2.min(t,t') = |t - t'|
• 62. Second simple example - VIII: dX = b.dW
- 𝔌{(W(t) - W(t')).(W(t'') - W(t'''))} = min(t,t'') - min(t,t''') - min(t',t'') + min(t',t''')
- For a partition in time {t_k}, and looking at the increments, we get for i < j, for example:
- 𝔌{(W(t_{i+1}) - W(t_i)).(W(t_{j+1}) - W(t_j))} = t_{i+1} - t_{i+1} - t_i + t_i = 0
- So enforcing the Gaussian distribution automatically ensures that the Brownian motion has independent increments.
- Note that, for the list of criteria above, I think that it can be shown that the Gaussian is the only process with continuous paths. Other distributions will exhibit jumps (maybe Levy). I am not sure of this and might spend some time researching it, but for now we are happy to have a solution, and do not concern ourselves with the unicity of that solution.
  • 63. Luc_Faucheux_2020 Second simple example – IX dX=b.dW š All right this time back to our example: š 𝑋 𝑡< − 𝑋 𝑡= = 𝑏 ∫&3&= &3&< 1. ([). 𝑑𝑊(𝑡) = 𝑏. (𝑊 𝑡< − 𝑊(𝑡=)) š < ∆𝑋 > = 𝔌 ∆𝑋 =< 𝑋 >&-∆& −< 𝑋 >& (drift term) š < ∆𝑋 > = 𝔌 𝑋(𝑡 + ∆𝑡 ) − 𝔌 𝑋(𝑡 ) = 0 š Note on the drift term: š < ∆𝑋)> = 𝔌 ∆𝑋) =< (𝑋−< 𝑋 >&-∆&))>&-∆& (diffusion term) š Assuming 𝑋 𝑡 = 0 š Then 𝔌 𝑋(𝑡 + ∆𝑡 =< 𝑋 >&-∆&= 0 š < ∆𝑋)> = 𝔌 ∆𝑋) = 𝔌 𝑋)(𝑡 + ∆𝑡) = 𝑏). (𝑡 + ∆𝑡) š If we choose 𝑝 𝑥, 𝑡 = 𝛿(𝑥 − 𝑋 𝑡 ) then < ∆𝑋)> = 𝑏). ∆𝑡 63
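The two moments quoted on this slide, < ∆𝑋 > = 0 and < ∆𝑋² > = 𝑏². ∆𝑡, can be checked with a few lines of Monte Carlo. This is a minimal sketch with arbitrary values for 𝑏, ∆𝑡 and the number of paths (`numpy` assumed available):

```python
import numpy as np

# Sketch: simulate increments of dX = b.dW and check the slide's two moments,
# <dX> = 0 (drift) and <dX^2> = b^2.dt (diffusion). b and dt are arbitrary.
rng = np.random.default_rng(0)
b, dt, n_paths = 2.0, 0.01, 200_000

dW = rng.normal(0.0, np.sqrt(dt), n_paths)  # Brownian increments ~ N(0, dt)
dX = b * dW

drift = dX.mean()             # should be close to 0
diffusion = (dX ** 2).mean()  # should be close to b^2 * dt = 0.04
```

With 200,000 paths the Monte Carlo noise on both estimates is well below a percent of 𝑏²∆𝑡.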
  • 64. Luc_Faucheux_2020 Second simple example – X dX=b.dW š Couple of notes on probabilities and conditional probabilities š We are looking at 𝑑𝑋 𝑡 = 𝑎 𝑡, 𝑋 𝑡 . 𝑑𝑡 + 𝑏 𝑡, 𝑋 𝑡 . ([). 𝑑𝑊 š ”starting” at time 𝑡, with 𝑝 𝑥, 𝑡 = 𝛿(𝑥 − 𝑋 𝑡 ) š So for the drift for example we are looking at: š < ∆𝑋 > = 𝔌 ∆𝑋 =< 𝑋 >&-∆& −< 𝑋 >& (drift term) š < ∆𝑋 > = 𝔌 ∆𝑋 =< 𝑋 >&-∆& −𝑋(𝑡) š This is really : < ∆𝑋 > = 𝔌 ∆𝑋 | 𝑋(𝑡) =< 𝑋| 𝑋(𝑡) >&-∆& −𝑋(𝑡) š Those are really conditional probabilities š For example if 𝑋 𝑡 = 0 = 0 we know that 𝔌 𝑋(𝑡) = 0 š However for a given 𝑋(𝑡), 𝔌 𝑋 𝑡 + ∆𝑡 | 𝑋(𝑡) = 𝑋(𝑡) 64
  • 65. Luc_Faucheux_2020 Second simple example – XI dX=b.dW š Let’s recap: š 𝑑𝑋(𝑡) = 𝑎(𝑡, 𝑋(𝑡)). 𝑑𝑡 + 𝑏(𝑡, 𝑋(𝑡)). ([). 𝑑𝑊 = 𝑏. ([). 𝑑𝑊 = 𝑏. (∘). 𝑑𝑊 š < ∆𝑋 > = 0 = 𝐹₁(𝑋(𝑡), 𝑡). ∆𝑡 š < ∆𝑋² > = 𝑏². ∆𝑡 = 𝐹₂(𝑋(𝑡), 𝑡). ∆𝑡 š 𝑎(𝑡, 𝑋(𝑡)) = 0, 𝑏(𝑡, 𝑋(𝑡)) = 𝑏 ⟺ 𝐹₁(𝑋(𝑡), 𝑡) = 0, 𝐹₂(𝑋(𝑡), 𝑡) = 𝑏² š Trivial note: we did not say that 𝐹₁(𝑋(𝑡), 𝑡) = 𝐹₁ = 𝑎 = 0; we are just saying that BOTH 𝐹₁(𝑋(𝑡), 𝑡) and 𝑎 are equal to 0, not that they are equal to each other š 𝐹₁(𝑋(𝑡), 𝑡) = 𝑀₁(𝑋(𝑡), 𝑡) and 𝐹₂(𝑋(𝑡), 𝑡) = 2. 𝑀₂(𝑋(𝑡), 𝑡) š 𝐹₁(𝑋(𝑡), 𝑡) = 0, 𝐹₂(𝑋(𝑡), 𝑡) = 𝑏² ⟺ 𝑀₁(𝑋(𝑡), 𝑡) = 0, 𝑀₂(𝑋(𝑡), 𝑡) = 𝑏²/2 š ∂𝑝(𝑥, 𝑡)/∂𝑡 = −∂/∂𝑥[𝑀₁(𝑥, 𝑡). 𝑝(𝑥, 𝑡) − ∂/∂𝑥[𝑀₂(𝑥, 𝑡). 𝑝(𝑥, 𝑡)]] = ∂/∂𝑥[∂/∂𝑥[(𝑏²/2). 𝑝(𝑥, 𝑡)]] 65
  • 66. Luc_Faucheux_2020 Second simple example – XII dX=b.dW š So for the SDE (SIE): š 𝑑𝑋 𝑡 = 𝑎 𝑡, 𝑋 𝑡 . 𝑑𝑡 + 𝑏 𝑡, 𝑋 𝑡 . ([). 𝑑𝑊 = 𝑏. ([). 𝑑𝑊 = 𝑏. (∘). 𝑑𝑊 š We have the equivalent PDE for the PDF: š !"($,&) !& = ! !$ ! !$ <! ) . 𝑝 𝑥, 𝑡 = <! ) . !! !$! [𝑝 𝑥, 𝑡 ] š This is the celebrated Heat equation, of which the Gaussian function is a solution. š This should not be that surprising since the definition of the Brownian motion is that: š For every point in time 𝑡, the distribution for 𝑊 𝑡 is the Normal distribution (Gaussian function) 𝑁(0, 𝑡) š Note also that those really should be written as PDE on the conditional probability density function, this will become a notation that we will have to change more rigorously when we look at FORWARD and BACKWARD Kolmogorov equations 66
  • 67. Luc_Faucheux_2020 Second simple example – XIII dX=b.dW š 𝑑𝑋 𝑡 = 𝑎 𝑡, 𝑋 𝑡 . 𝑑𝑡 + 𝑏 𝑡, 𝑋 𝑡 . ([). 𝑑𝑊 = 𝑏. ([). 𝑑𝑊 = 𝑏. (∘). 𝑑𝑊 š Will most likely in Finance be used with 𝑏 = 𝜎, the volatility š Will most likely in Physics be used with 𝐷 = <! ) , the diffusion coefficient š The solution of: š !"($,&) !& = D! ) . !! !$! 𝑝 𝑥, 𝑡 = 𝐷. !! !$! [𝑝 𝑥, 𝑡 ] š Subject to 𝑝 𝑥, 𝑡 = 𝑡1 = 𝛿 𝑥 − 𝑋 𝑡1 = 𝛿 𝑥 − 𝑋1 = 𝛿 𝑥 − 𝑥1 is: š 𝑝 𝑥, 𝑡 = 𝑝 𝑥, 𝑡|𝑥1, 𝑡1 = ( E5F &+&% . 𝑒𝑥 𝑝 − $+7% ! EF &+&% = ( )5D!(&+&%) . 𝑒𝑥𝑝(− ($+7%)! )D!(&+&%) ) š < ∆𝑋)> = 𝑏). ∆𝑡 = 𝜎). ∆𝑡 = 2. 𝐷. ∆𝑡 67
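A quick numerical check (a sketch, with arbitrary values of 𝑏, 𝑥 and 𝑡) that the Gaussian above really does solve the heat equation ∂𝑝/∂𝑡 = (𝑏²/2). ∂²𝑝/∂𝑥², comparing both sides with central finite differences:

```python
import numpy as np

# Sketch: verify that p(x,t) = exp(-x^2/(2 b^2 t)) / sqrt(2 pi b^2 t)
# satisfies dp/dt = (b^2/2).d2p/dx2, using central finite differences.
b = 0.7

def p(x, t):
    return np.exp(-x**2 / (2 * b**2 * t)) / np.sqrt(2 * np.pi * b**2 * t)

x, t, h = 0.3, 1.5, 1e-4
lhs = (p(x, t + h) - p(x, t - h)) / (2 * h)                           # dp/dt
rhs = (b**2 / 2) * (p(x + h, t) - 2 * p(x, t) + p(x - h, t)) / h**2  # (b^2/2).d2p/dx2
```

Both sides agree to within the finite-difference truncation error, which is far smaller than either side.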
  • 68. Luc_Faucheux_2020 Second simple example – XIII – b dX=b.dW š So the picture becomes: 68 [figure: the expected value < 𝑥(𝑡) > = 𝑋(𝑡) stays flat between 𝑡 and 𝑡 + ∆𝑡, with a widening distribution around it]
  • 69. Luc_Faucheux_2020 Second simple example – XIV dX=b.dW š A really amazing book to read is : 69
  • 70. Luc_Faucheux_2020 Second simple example – XV dX=b.dW š In it, he points out that one of the reasons why we love the Gaussian distribution so much is that it is invariant under addition (p.157) š If 𝑥₁ and 𝑥₂ are two independent random variables following the Gaussian distribution, then š 𝑊 = 𝑥₁ + 𝑥₂ will also follow a Gaussian distribution, and š < ∆𝑊² > = < ∆(𝑥₁ + 𝑥₂)² > = < ∆𝑥₁² > + < ∆𝑥₂² > = 𝜎₁². ∆𝑡 + 𝜎₂². ∆𝑡 š 𝜎²_{𝑥₁+𝑥₂} = 𝜎²_{𝑥₁} + 𝜎²_{𝑥₂} š In particular, in the case of 𝑁 independent random increments 𝑥ᵢ following a Gaussian distribution, the sum of those random increments will also be a random variable following a Gaussian distribution of variance: š < ∆(∑ 𝑥ᵢ)² > = 𝔌{(∑ 𝑥ᵢ)²} = 𝜎². ∆𝑡 = ∆𝑡. ∑ 𝜎²_{𝑥ᵢ} 70
  • 71. Luc_Faucheux_2020 Second simple example – XVI dX=b.dW š The invariance of the Gaussian distribution under addition is closely connected with the CLT (Central Limit Theorem), which states that a suitably normalized sum of many independent variables with finite variances (do not have to follow a Gaussian distribution) will converge to a Gaussian distribution, another reason why the Gaussian distribution is awesome š Note that the Gaussian distribution is invariant under addition for exponent 2 š 𝜎$"-$! ) = 𝜎$" ) + 𝜎$! ) š Schroeder points out that a number of distributions are invariant under addition for a different exponent 𝐷4 (not diffusion, but more like a dimension coefficient, see fractal theory). š In particular, the celebrated Cauchy distribution: 𝑝 𝑥 = ( 5((-$!) is invariant for addition for exponent 𝐷4 = 1 š As another example, for 𝐷4 = 1/2, the distribution that is invariant under addition is: š 𝑝 𝑥 = ( )5 . 𝑥+ $ !. exp( +( )$ ) 71
  • 72. Luc_Faucheux_2020 Second simple example – XVII dX=b.dW š Another note on the Gaussian and Cauchy distribution. š Suppose that we have 𝑁 identically distributed random variables. š For Gaussian distribution: š < ∆𝑋)> = < ∆(∑ 𝑥6))> = 𝔌 (∑ 𝑥6)) = 𝜎). ∆𝑡 = ∆𝑡. ∑ 𝜎$# ) = 𝑁. ∆𝑡. 𝜎6 ) š 𝜎) = 𝑁. 𝜎6 ) š And so the AVERAGE (not the SUM) of those variables will be such that: š < ∆( 7 8 ))> = ( ( 8 ))< ∆(∑ 𝑥6))> = ( ( 8 )) 𝑁. ∆𝑡. 𝜎6 ) = ( 8 . ∆𝑡. 𝜎6 ) = ∆𝑡. 𝜎9$: ) š So for variables following a Gaussian distribution, the more measurements you make and take the average, the more precise you have an estimate of the average: š 𝜎9$: ) = ( 8 . 𝜎6 ) or equivalently: 𝜎9$: = ( 8 . 𝜎6 72
  • 73. Luc_Faucheux_2020 Second simple example – XVIII dX=b.dW š This is why in Physics, for most experiments, you believe that the more measurements you make, the better an estimate you will get. š This is the often quoted “convergence in 1/√𝑁” of most computer simulations š Note that this is not true in Finance, where you do not have the luxury of making a lot of experiments on the same physical system. In Finance you do not have the luxury of a control experiment, nor do you have the luxury of a steady-state solution. š In contrast, the Cauchy distribution is such that: 𝜎 = 𝑁. 𝜎ᵢ (the scale parameter adds linearly) š And so the distribution of the AVERAGE of 𝑁 identically distributed Cauchy variables is the SAME as the original distribution. š Averaging Cauchy variables does not improve the estimate. š Averaging Gaussian variables improves the estimate. 73
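The contrast between the two statements above can be illustrated numerically. Since the Cauchy distribution has no finite variance, this sketch (with arbitrary sample sizes) compares interquartile ranges instead of standard deviations: the spread of the average of 𝑁 = 100 Gaussians shrinks by 1/√𝑁, while the spread of the average of 100 Cauchy variables does not shrink at all:

```python
import numpy as np

# Sketch: averaging N Gaussian variables shrinks the spread by 1/sqrt(N),
# while averaging N Cauchy variables does not shrink it at all. The Cauchy
# has no finite variance, so spreads are compared via interquartile ranges.
rng = np.random.default_rng(2)
N, trials = 100, 20_000

gauss_avg = rng.normal(0.0, 1.0, (trials, N)).mean(axis=1)
cauchy_avg = rng.standard_cauchy((trials, N)).mean(axis=1)

def iqr(a):
    q75, q25 = np.percentile(a, [75, 25])
    return q75 - q25

gauss_iqr = iqr(gauss_avg)    # ~ 1.349 / sqrt(N) = 0.135 (IQR of N(0, 1/N))
cauchy_iqr = iqr(cauchy_avg)  # ~ 2, the IQR of a single standard Cauchy
```

The Cauchy average is as spread out as a single Cauchy draw, which is exactly the "averaging does not help" statement above.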
  • 74. Luc_Faucheux_2020 Some Physics terminology on the Diffusion equation š 𝑑𝑋 𝑡 = 𝑎 𝑡, 𝑋 𝑡 . 𝑑𝑡 + 𝑏 𝑡, 𝑋 𝑡 . ([). 𝑑𝑊 = 𝑏. ([). 𝑑𝑊 = 𝑏. (∘). 𝑑𝑊 š That can be numerically simulated on a computer š See also the deck on Binomial and Bachelier with a Taylor expansion of the Binomial process š !"($,&) !& = D! ) . !! !$! 𝑝 𝑥, 𝑡 = 𝐷. !! !$! [𝑝 𝑥, 𝑡 ] š But physicists like to write it as : š The diffusion current is sometimes defined as: 𝐜* 𝑥, 𝑡 = −𝐷 !"($,&) !$ (Fick’s law) š !"($,&) !& = − ! !$ 𝐜* 𝑥, 𝑡 š The usual example is a drop of ink diffusing into a glass of water 74
  • 75. Luc_Faucheux_2020 Some Physics terminology on the Diffusion equation - II š Drifts caused by diffusion and drifts caused by external forces š Internal random drifts and external forcing drifts š When the particle follows a propagation equation: 𝑑𝑋 𝑡 = 𝑎. 𝑑𝑡 š !"($,&) !& = −𝑎 !" $,& !$ = − ! !$ 𝐜? 𝑥, 𝑡 = − ! !$ [𝑎. 𝑝 𝑥, 𝑡 ] š There is a directed motion due to a drift: 𝐜? 𝑥, 𝑡 = −𝑎. 𝑝 𝑥, 𝑡 š When the particle follows a diffusion equation: 𝑑𝑋 𝑡 = 𝑏. ([). 𝑑𝑊 š !"($,&) !& = D! ) . !! !$! 𝑝 𝑥, 𝑡 = 𝐷. !! !$! 𝑝 𝑥, 𝑡 = − ! !$ 𝐜* 𝑥, 𝑡 with 𝐷 = <! ) š There is a diffusive (random) motion due to a drift: 𝐜* 𝑥, 𝑡 = −𝐷 !"($,&) !$ 75
  • 76. Luc_Faucheux_2020 Our first brush with Feynman-Kac 76
  • 77. Luc_Faucheux_2020 Since we know about Black-Scholes and options š 𝑑𝑆(𝑡) = 𝑎(𝑡, 𝑆(𝑡)). 𝑑𝑡 + 𝜎(𝑡, 𝑆(𝑡)). ([). 𝑑𝑊 = 𝜎. ([). 𝑑𝑊 = 𝜎. (∘). 𝑑𝑊 š 𝑆(𝑡) follows a PDF that is a solution of the PDE: š ∂𝑝(𝑥, 𝑡)/∂𝑡 = (𝜎²/2). ∂²𝑝(𝑥, 𝑡)/∂𝑥² 77 [figure: the expected path < 𝑆(𝑡) > = 𝑆(𝑡), with the fan of possible values of 𝑆 between 𝑡 and 𝑡 + ∆𝑡]
  • 78. Luc_Faucheux_2020 Since we know about Black-Scholes and options - II š We know from the FTAP (Fundamental Theorem of Asset Pricing), that a derivative (in particular a call price) is the discounted value of the expected payoff of the derivative at maturity 78
  • 79. Luc_Faucheux_2020 Since we know about Black-Scholes and options - III š The option value is then the product of the distribution function at expiry with the terminal payoff of the option š 𝐶(𝑆₀, 𝑇) = ∫ 𝑃(𝑆′, 𝑆₀, 𝑇). 𝑃𝐎𝑌𝑂𝐹𝐹(𝑆′). 𝑑𝑆′ š 𝐶(𝑆₀, 𝑇) = ∫ 𝑃(𝑆₀ → 𝑆′, 𝑡 = 0 → 𝑇). 𝑃𝐎𝑌𝑂𝐹𝐹(𝑆′). 𝑑𝑆′ 79 [figure: paths propagating from 𝑆₀ at 𝑡 = 0 to the terminal values 𝑆′ at 𝑡 = 𝑇]
  • 80. Luc_Faucheux_2020 Since we know about Black-Scholes and options - IV š The option value is now a function of 𝑆 (instead of being formally a function of 𝑆₀) š 𝐶(𝑆, 𝑇) = ∫ 𝑃(𝑆′, 𝑆, 𝑇). 𝑃𝐎𝑌𝑂𝐹𝐹(𝑆′). 𝑑𝑆′ š 𝐶(𝑆, 𝑇) = ∫ 𝑃(𝑆′ → 𝑆, 𝑡 = 𝑇 → 0). 𝑃𝐎𝑌𝑂𝐹𝐹(𝑆′). 𝑑𝑆′ 80 [figure: backward propagation from the terminal values 𝑆′ at 𝑡 = 𝑇 to 𝑆 at 𝑡]
  • 81. Luc_Faucheux_2020 Since we know about Black-Scholes and options - V š 𝐶(𝑆₀, 𝑇, 𝐟, 𝜎) = ∫ 𝑃(𝑆₀ → 𝑆′, 𝑡 = 0 → 𝑡 = 𝑇). 𝑃𝐎𝑌𝑂𝐹𝐹(𝑆′). 𝑑𝑆′ š In the case of a regular call option: 𝑃𝐎𝑌𝑂𝐹𝐹(𝑆′) = (𝑆′ − 𝐟)⁺ = 𝑀𝐎𝑋(𝑆′ − 𝐟, 0) š 𝐶(𝑆₀, 𝑇, 𝐟, 𝜎) = ∫_{𝑆′=−∞}^{𝑆′=+∞} 𝑃(𝑆₀ → 𝑆′, 𝑡 = 0 → 𝑡 = 𝑇). (𝑆′ − 𝐟)⁺. 𝑑𝑆′ š 𝐶(𝑆₀, 𝑇, 𝐟, 𝜎) = ∫_{𝑆′=𝐟}^{𝑆′=+∞} 𝑃(𝑆₀ → 𝑆′, 𝑡 = 0 → 𝑡 = 𝑇). (𝑆′ − 𝐟). 𝑑𝑆′ š ∂𝐶(𝑆₀, 𝑇, 𝐟, 𝜎)/∂𝐟 = ∫_{𝑆′=𝐟}^{𝑆′=+∞} 𝑃(𝑆₀, 𝑆′, 𝑇). ∂(𝑆′ − 𝐟)/∂𝐟. 𝑑𝑆′ − (𝐟 − 𝐟). 𝑃(𝑆₀, 𝐟, 𝑇) š ∂𝐶(𝑆₀, 𝑇, 𝐟, 𝜎)/∂𝐟 = −∫_{𝑆′=𝐟}^{𝑆′=+∞} 𝑃(𝑆₀, 𝑆′, 𝑇). 𝑑𝑆′ š ∂²𝐶(𝑆₀, 𝑇, 𝐟, 𝜎)/∂𝐟² = 𝑃(𝑆₀, 𝐟, 𝑇) š Note that we can write: 𝑃(𝑆₀, 𝐟, 𝑇) = ∫_{𝑆′=−∞}^{𝑆′=+∞} 𝑃(𝑆₀, 𝑆′, 𝑇). 𝛿(𝑆′ − 𝐟). 𝑑𝑆′ 81
  • 82. Luc_Faucheux_2020 Since we know about Black-Scholes and options - VI š Note that we can write: 𝑃(𝑆₀, 𝐟, 𝑇) = ∫_{𝑆′=−∞}^{𝑆′=+∞} 𝑃(𝑆₀, 𝑆′, 𝑇). 𝛿(𝑆′ − 𝐟). 𝑑𝑆′ š Where 𝛿(𝑆′ − 𝐟) is the Dirac peak, š 𝛿(𝑆′ − 𝐟) = 0 for all 𝑆′ ≠ 𝐟 š ∫_{𝑆′=−∞}^{𝑆′=+∞} 1. 𝛿(𝑆′ − 𝐟). 𝑑𝑆′ = 1 š ∫_{𝑆′=−∞}^{𝑆′=+∞} 𝑃(𝑆₀, 𝑆′, 𝑇). 𝛿(𝑆′ − 𝐟). 𝑑𝑆′ = 𝑃(𝑆₀, 𝐟, 𝑇) š So: š 𝑃(𝑆₀, 𝐟, 𝑇) = ∂²𝐶(𝑆₀, 𝑇, 𝐟, 𝜎)/∂𝐟² = ∫_{𝑆′=−∞}^{𝑆′=+∞} 𝑃(𝑆₀, 𝑆′, 𝑇). 𝛿(𝑆′ − 𝐟). 𝑑𝑆′ 82
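The relation 𝑃(𝑆₀, 𝐟, 𝑇) = ∂²𝐶/∂𝐟² can be checked numerically in the normal (Bachelier) setting used in these notes. The sketch below assumes the closed-form normal call price 𝐶(𝐟) = (𝑆₀ − 𝐟). Ί(𝑑) + 𝜎√𝑇. 𝜑(𝑑) with 𝑑 = (𝑆₀ − 𝐟)/(𝜎√𝑇); all numerical values are illustrative:

```python
import math

# Sketch: second strike-derivative of a normal (Bachelier) call price
# recovers the terminal Gaussian density, d2C/dK2 = P(S0, K, T).
def Phi(x):  # standard normal CDF
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def phi(x):  # standard normal PDF
    return math.exp(-x * x / 2.0) / math.sqrt(2.0 * math.pi)

S0, sig, T = 100.0, 5.0, 2.0
sd = sig * math.sqrt(T)  # standard deviation of S(T) around S0

def call(K):  # assumed Bachelier call price, r = 0
    d = (S0 - K) / sd
    return (S0 - K) * Phi(d) + sd * phi(d)

K, h = 103.0, 0.05
density_from_calls = (call(K + h) - 2 * call(K) + call(K - h)) / h**2
density_direct = phi((K - S0) / sd) / sd  # terminal Gaussian PDF at K
```

The finite-difference "butterfly" of call prices matches the Gaussian density at the strike, which is the discrete version of the second derivative above.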
  • 83. Luc_Faucheux_2020 Since we know about Black-Scholes and options - VII š Putting ourselves in the NORMAL framework for Black-Scholes (totally our right), the above equation rewrites itself as: š 𝑃(𝑆(𝑡), 𝐟, 𝑇) = ∂²𝐶(𝑆(𝑡), 𝑇, 𝐟, 𝜎)/∂𝐟² = ∫_{𝑆′=−∞}^{𝑆′=+∞} 𝑃(𝑆′|𝑆(𝑡)). 𝛿(𝑆′ − 𝐟). 𝑑𝑆′ š This is quite powerful š The PDF 𝑃(𝑆(𝑡), 𝐟, 𝑇) is given as the expectation value of some payoff š The PDF that is a solution of a PDE is also the conditional expectation under some probability measure that is related to the SDE š If you only had Excel, and did not know anything about calculus, this is how you could solve the PDE: 83
  • 84. Luc_Faucheux_2020 Since we know about Black-Scholes and options - VIII 84 [figure: left, the diffusion fan of 𝑆 between 𝑡 and 𝑡 + ∆𝑡 around < 𝑆(𝑡) > = 𝑆(𝑡); right, the terminal payoff 𝛿(𝑆′ − 𝐟) at 𝑡 + ∆𝑡]
  • 85. Luc_Faucheux_2020 Since we know about Black-Scholes and options - IX 85 š ∂𝑝(𝑆, 𝑡)/∂𝑡 = (𝜎²/2). ∂²𝑝(𝑆, 𝑡)/∂𝑆² is a diffusion equation (Forward equation) š If you know calculus, a solution is the Gaussian: 𝑝(𝑆, 𝑡) = (1/√(2𝜋𝜎²(𝑡 − 𝑡₀))). 𝑒𝑥𝑝(−(𝑆 − 𝑆₀)²/(2𝜎²(𝑡 − 𝑡₀))) for the initial condition 𝑝(𝑆, 𝑡 = 𝑡₀) = 𝛿(𝑆 − 𝑆₀) š If you do not know calculus but have Excel, you simulate the random process (SDE) associated to the above PDE, which is: 𝑑𝑆(𝑡) = 𝜎. ([). 𝑑𝑊 = 𝜎. (∘). 𝑑𝑊 š You calculate the expectation of the Dirac delta function at maturity 𝑇 š This will give you exactly 𝑝(𝑆, 𝑡) = (1/√(2𝜋𝜎²(𝑇 − 𝑡))). 𝑒𝑥𝑝(−(𝑆 − 𝐟)²/(2𝜎²(𝑇 − 𝑡))) š If you think about it, it is kind of awesome. š It is a very crude first introduction to the Feynman-Kac theorem (1950) š This is also an illustration of the forward-backward formalism in PDE
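The “Excel” recipe above can be sketched in a few lines of Python: simulate the terminal values of 𝑑𝑆 = 𝜎. 𝑑𝑊, take the expectation of a narrowed Dirac delta (here an indicator of a small bin around 𝐟, divided by the bin width), and compare to the Gaussian density. All parameter values are illustrative:

```python
import numpy as np

# Sketch of the "Excel" recipe: simulate terminal values of dS = sigma.dW,
# estimate the density at K as the expectation of a narrowed Dirac delta
# (indicator of a small bin around K divided by the bin width), and compare
# to the Gaussian solution of the forward equation.
rng = np.random.default_rng(3)
sigma, T, S0, K, n = 1.0, 1.0, 0.0, 0.5, 1_000_000

# For constant sigma the terminal value can be drawn in one step
S_T = S0 + sigma * np.sqrt(T) * rng.normal(0.0, 1.0, n)

eps = 0.02  # bin width of the smoothed Dirac delta
delta_expectation = np.mean(np.abs(S_T - K) < eps / 2) / eps
gaussian_density = np.exp(-(K - S0) ** 2 / (2 * sigma**2 * T)) / np.sqrt(2 * np.pi * sigma**2 * T)
```

The Monte Carlo expectation of the smoothed delta converges to the Gaussian density as the bin shrinks and the number of paths grows, which is the crude Feynman-Kac picture described on the slide.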
  • 86. Luc_Faucheux_2020 Since we know about Black-Scholes and options - X š The PDF is a solution of a heat equation (PDE) š It is also the expectation of the payoff at maturity equal to the Dirac delta š A call price is also the expectation of a payoff š And so the call price ALSO follows a heat equation (the celebrated Black-Scholes) š So the PDF, and also any derivative of the stock that follows the SDE, will all follow the same PDE, just with different boundary conditions, as they all are expectations of payoffs (granted, with the Green propagator that is the Gaussian, so this is a little self-referential, but at least it is consistent) 86
  • 87. Luc_Faucheux_2020 Since we know about Black-Scholes and options - XI š Since we are here, here is something really cool about Black-Scholes that I could not really put in any of the other decks, so here it is: š 𝐶(𝑆₀, 𝑇, 𝐟, 𝜎) is a function of the stock 𝑆₀ š We can write the ITO lemma on the call (this is what Black and Scholes did) š 𝑑𝐶(𝑆, 𝑡) = (∂𝐶/∂𝑆). 𝑑𝑆 + (∂𝐶/∂𝑡). 𝑑𝑡 + (1/2). (∂²𝐶/∂𝑆²). (𝑑𝑆)² š 𝑑𝐶(𝑆, 𝑡) = (∂𝐶/∂𝑆). 𝑑𝑆 + (∂𝐶/∂𝑡). 𝑑𝑡 + (1/2). (∂²𝐶/∂𝑆²). 𝜎². 𝑑𝑡 = (∂𝐶/∂𝑆). 𝑑𝑆 + 𝑑𝑡. [(∂𝐶/∂𝑡) + (1/2). (∂²𝐶/∂𝑆²). 𝜎²] š What is inside the bracket is the Black-Scholes equation (for 𝑟 = 0) š So: 𝑑𝐶(𝑆, 𝑡) = (∂𝐶/∂𝑆). 𝑑𝑆, or in the SIE form that we should really always use: š 𝐶(𝑡_𝑏) − 𝐶(𝑡_𝑎) = ∫_{𝑡=𝑡_𝑎}^{𝑡=𝑡_𝑏} (∂𝐶/∂𝑆). 𝑑𝑆(𝑡) = ∫_{𝑡=𝑡_𝑎}^{𝑡=𝑡_𝑏} ∆. 𝑑𝑆(𝑡) where ∆ is the Black-Scholes Greek. š Pretty nifty no? The call option is the integral over the stock of the delta 87
  • 88. Luc_Faucheux_2020 Since we know about Black-Scholes and options - XII š 𝐶(𝑡_𝑏) − 𝐶(𝑡_𝑎) = ∫_{𝑡=𝑡_𝑎}^{𝑡=𝑡_𝑏} (∂𝐶/∂𝑆). 𝑑𝑆(𝑡) = ∫_{𝑡=𝑡_𝑎}^{𝑡=𝑡_𝑏} ∆. 𝑑𝑆(𝑡) where ∆ is the Black-Scholes Greek. š If we set 𝑡_𝑏 = 𝑇, maturity of the option, and 𝑡_𝑎 = 𝑡 for sake of clarity š 𝐶(𝑇) − 𝐶(𝑡) = ∫_{𝑠=𝑡}^{𝑠=𝑇} ∆. 𝑑𝑆(𝑠) š At maturity the call price is equal to the payoff, in this example 𝐶(𝑇) = 𝑀𝐎𝑋(𝑆(𝑇) − 𝐟, 0) š Let’s note 𝐻(𝑇) = 𝑀𝐎𝑋(𝑆(𝑇) − 𝐟, 0) the payoff function. š Let’s compute the expected value of the above equation š 𝔌{𝐶(𝑇)|𝑆(𝑡)} = 𝔌{𝐻(𝑇)|𝑆(𝑡)} = 𝔌{(𝑆(𝑇) − 𝐟)⁺|𝑆(𝑡)} = 𝔌{𝑀𝐎𝑋(𝑆(𝑇) − 𝐟, 0)|𝑆(𝑡)} š 𝔌{𝐶(𝑡)|𝑆(𝑡)} = 𝐶(𝑡) š Under the probability measure where the stock is a martingale š 𝔌{∫_{𝑠=𝑡}^{𝑠=𝑇} ∆. 𝑑𝑆(𝑠)} = 0 under the ITO integral (ITO integral of a trading strategy is a martingale) 88
  • 89. Luc_Faucheux_2020 Since we know about Black-Scholes and options - XIII š And so we have: š 𝐶(𝑡) = 𝔌{𝐶(𝑇)|𝑆(𝑡)} = 𝔌{𝑀𝐎𝑋(𝑆(𝑇) − 𝐟, 0)|𝑆(𝑡)} š We recover the fact that the call price is the expected value of the terminal payoff, under the proper probability distribution associated to the specific numeraire we chose, properly discounted (in our case, in order to simplify, we had 𝑟 = 0; said another way, the Money Market Numeraire is the trivial constant 1) š This is kind of neat. š Using the ITO lemma (in ITO calculus), the fact that the Call price follows the PDE (Black-Scholes) that follows the Heat Equation (Fokker-Planck), and the fact that the ITO integral is a martingale, we can then express the Call as the expected value of a terminal payoff (boundary condition) under the conditional probability distribution š This gives us a little flavor of Feynman-Kac and also of the McKean derivation of the link between a non-linear SDE and the associated PDE 89
  • 91. Luc_Faucheux_2020 Third simple example dX=a.dt + b.dW š 𝑑𝑋 𝑡 = 𝑎 𝑡, 𝑋 𝑡 . 𝑑𝑡 + 𝑏 𝑡, 𝑋 𝑡 . ([). 𝑑𝑊 š We choose 𝑎 𝑡, 𝑋 𝑡 = 𝑎 and 𝑏 𝑡, 𝑋 𝑡 = 𝑏 š So the only thing that we can write with some certainty now that we have to deal with a stochastic term that is non-zero is the SIE: š 𝑋 𝑡< − 𝑋 𝑡= = ∫&3&= &3&< 𝑑𝑋 𝑡 = 𝑎. ∫&3&= &3&< 1. 𝑑𝑠 + 𝑏 ∫&3&= &3&< 1. ([). 𝑑𝑊(𝑡) š We could manually redo the calculation of the first and second moment: š < ∆𝑋 > = 𝐞 ∆𝑋 =< 𝑋 >&-∆& −< 𝑋 >& (drift term) š < ∆𝑋)> = 𝐞 ∆𝑋) =< (𝑋−< 𝑋 >&-∆&))>&-∆& (diffusion term) š Or we could leverage the work we already did by looking at a change of variables. 91
  • 92. Luc_Faucheux_2020 Third simple example – II dX=a.dt + b.dW š 𝑑𝑋 𝑡 = 𝑎. 𝑑𝑡 + 𝑏. ([). 𝑑𝑊 š We are looking at the change of variables: š 𝑋 𝑡 → 𝑋2 𝑡2 = 𝑋 𝑡′ − 𝑎. 𝑡2 = 𝑋 𝑡 − 𝑎. 𝑡 š 𝑡 → 𝑡2 = 𝑡 š 𝑑𝑋2(𝑡2) = 𝑑𝑋 𝑡2 − 𝑎. 𝑑𝑡′ = 𝑎. 𝑑𝑡′ + 𝑏. ([). 𝑑𝑊(𝑡2) − 𝑎. 𝑑𝑡′ = 𝑏. ([). 𝑑𝑊(𝑡2) š We know that this SDE corresponds to the mapping: š !"2($2,&2) !&2 = ! !$2 ! !$2 <! ) . 𝑝′ 𝑥′, 𝑡′ = <! ) . !! !$2! [𝑝′ 𝑥′, 𝑡′ ] š We then go back to the original variables 𝑋 𝑡 and 𝑡, but we need to be a little careful here on the notations. 92
  • 93. Luc_Faucheux_2020 Third simple example – III dX=a.dt + b.dW š 𝑋 𝑡 → 𝑋2 𝑡2 = 𝑋 𝑡′ − 𝑎. 𝑡2 = 𝑋 𝑡 − 𝑎. 𝑡 š 𝑡 → 𝑡2 = 𝑡 š So we have š 𝑋2 𝑡′ → 𝑋 𝑡 = 𝑋2 𝑡2 + 𝑎. 𝑡2 š 𝑡′ → 𝑡 = 𝑡′ š We ALSO define a transformation on the regular variables: š 𝑥2 = 𝑓(𝑥, 𝑡) = 𝑥 − 𝑎. 𝑡 and so 𝑥 = 𝑔(𝑥2, 𝑡2) = 𝑥2 + 𝑎. 𝑡′ š ! !$2 = ! !$ . !$ !$2 + ! !& . !& !$2 = ! !$ š ! !&2 = ! !$ . !$ !&2 + ! !& . !& !&2 = ! !$ . 𝑎 + ! !& š !! !$&! = ! !$2 . ! !$2 = ! !$2 . ! !$ = ! !$ . ! !$ = !! !$! 93
  • 94. Luc_Faucheux_2020 Third simple example – III –a dX=a.dt + b.dW š We have: š !"2($2,&2) !&2 = <! ) . !! !$2! [𝑝′ 𝑥′, 𝑡′ ] š And we have the relations between the partial derivatives, HOWEVER we do not have yet the relation between 𝑝′(𝑥′, 𝑡′) and 𝑝(𝑥, 𝑡) š Where: 𝑝 𝑥, 𝑡 = 𝑝7(𝑥, 𝑡) and 𝑝′ 𝑥2, 𝑡2 = 𝑝7&(𝑥′, 𝑡′) š PDF Probability Density Function: 𝑝7&(𝑥2, 𝑡) š Distribution function : 𝑃7&(𝑥2, 𝑡) š 𝑃7& 𝑥2, 𝑡 = 𝑃𝑟𝑜𝑏𝑎𝑏𝑖𝑙𝑖𝑡𝑊 𝑋2 ≀ 𝑥2, 𝑡 = ∫>3+, >3$& 𝑝7& 𝑊, 𝑡 . 𝑑𝑊 š 𝑝72(𝑥2, 𝑡) = ! !$2 𝑃7& 𝑥2, 𝑡 94
  • 95. Luc_Faucheux_2020 Third simple example – III –b dX=a.dt + b.dW š This highlights the fact that capital letters are reserved for stochastic variables and the lower case are for just regular variables of a function. š Usually we just use one or the other without paying too much attention š But here we are doing a change of variable on a PDF, so we need to be a little careful. 95
  • 96. Luc_Faucheux_2020 Third simple example – III –c dX=a.dt + b.dW š Variable change through the Distribution function technique (one dimension) š Suppose that we have a stochastic variable 𝑋(𝑡) š Suppose that there is a PDF 𝑝7(𝑥, 𝑡)and a DF 𝑃7(𝑥, 𝑡) š Suppose that we define 𝑋′ 𝑡 = Ί(𝑋 𝑡 ), and that we can invert 𝑋 𝑡 = 𝜑(𝑋′ 𝑡 ) š Formally to go from one DF to another we would write something like this: š 𝑃72 𝑥′, 𝑡 = 𝑃𝑟𝑜𝑏𝑎𝑏𝑖𝑙𝑖𝑡𝑊 𝑋2 ≀ 𝑥2, 𝑡 = 𝑃𝑟𝑜𝑏𝑎𝑏𝑖𝑙𝑖𝑡𝑊(Ί 𝑋 𝑡 ≀ 𝑥2, 𝑡) š 𝑃72 𝑥′, 𝑡 = 𝑃𝑟𝑜𝑏𝑎𝑏𝑖𝑙𝑖𝑡𝑊 Ί 𝑋 𝑡 ≀ 𝑥2, 𝑡 = 𝑃𝑟𝑜𝑏𝑎𝑏𝑖𝑙𝑖𝑡𝑊(𝑋 𝑡 ≀ 𝜑 𝑥2 , 𝑡) š 𝑃72 𝑥′, 𝑡 = 𝑃𝑟𝑜𝑏𝑎𝑏𝑖𝑙𝑖𝑡𝑊 𝑋 𝑡 ≀ 𝜑 𝑥2 , 𝑡 = 𝑃7 𝜑 𝑥2 , 𝑡 š And then applying: 𝑝72(𝑥2, 𝑡) = ! !$2 𝑃7& 𝑥2, 𝑡 96
  • 97. Luc_Faucheux_2020 Third simple example – III –d dX=a.dt + b.dW š 𝑃72 𝑥′, 𝑡 = 𝑃𝑟𝑜𝑏𝑎𝑏𝑖𝑙𝑖𝑡𝑊 𝑋 𝑡 ≀ 𝜑 𝑥2 , 𝑡 = 𝑃7 𝜑 𝑥2 , 𝑡 š 𝑝72(𝑥2, 𝑡) = ! !$2 𝑃7& 𝑥2, 𝑡 š 𝑝7(𝑥, 𝑡) = ! !$ 𝑃7 𝑥, 𝑡 š 𝑋′ 𝑡 = Ί(𝑋 𝑡 ) š 𝑋 𝑡 = 𝜑(𝑋′ 𝑡 ) š So: š 𝑝7& 𝑥2, 𝑡 = ! !$& 𝑃7& 𝑥2, 𝑡 = ! !$& 𝑃7 𝜑 𝑥2 , 𝑡 = ! !$& 𝑃7 𝑥 = 𝜑 𝑥2 , 𝑡 š 𝑝7& 𝑥2, 𝑡 = ! !$& 𝑃7 𝑥 = 𝜑 𝑥2 , 𝑡 = ! !$ 𝑃7 𝑥, 𝑡 . ! !$& 𝜑 𝑥2 = 𝑝7 𝑥, 𝑡 . ! !$& 𝜑 𝑥2 97
  • 98. Luc_Faucheux_2020 Third simple example – III –e dX=a.dt + b.dW š 𝑋′ 𝑡 = Ί(𝑋 𝑡 ) š 𝑋 𝑡 = 𝜑(𝑋′ 𝑡 ) š 𝑝7& 𝑥2, 𝑡 = 𝑝7 𝑥, 𝑡 . ! !$& 𝜑 𝑥2 š This is in one dimension. š In the case of multiple dimensions and joint probabilities that gets more complicated and involves a determinant š 𝑝7& 𝑥2, 𝑡 = 𝑝7 𝑥, 𝑡 . ! !$& 𝜑 𝑥2 and noting 𝑥 = 𝜑 𝑥2 and 𝑥′ = Ί(𝑥) š ! !$& 𝜑 𝑥2 = *$ *$& = *T $& *$& š The density of probability {𝑝7& 𝑥2, 𝑡 . 𝑑𝑥′} = {𝑝7 𝑥, 𝑡 . 𝑑𝑥} is conserved š If you integrate under the curve, then change the variable of integration, this is the usual result 98
  • 99. Luc_Faucheux_2020 Third simple example – III –f dX=a.dt + b.dW š In our case š 𝑥2 = 𝑓(𝑥, 𝑡) = 𝑥 − 𝑎. 𝑡 and so 𝑥 = 𝑔(𝑥2, 𝑡2) = 𝑥2 + 𝑎. 𝑡′ š *$ *$& = *T $& *$& š Because this is not an integration over two variables, here the time is only a parametrization š So š {𝑝7& 𝑥2, 𝑡′ . 𝑑𝑥′} = {𝑝7 𝑥, 𝑡 . 𝑑𝑥 š 𝑝7& 𝑥2, 𝑡′ = 𝑝7 𝑥, 𝑡 š We can replace 𝑝7& 𝑥2, 𝑡 by 𝑝7 𝑥, 𝑡 in the equation: !"2($2,&2) !&2 = <! ) . !! !$2! [𝑝′ 𝑥′, 𝑡′ ] š Or dropping the subscript replace 𝑝′ 𝑥2, 𝑡′ by 𝑝 𝑥, 𝑡 99
  • 100. Luc_Faucheux_2020 Third simple example – IV dX=a.dt + b.dW š !"2($2,&2) !&2 = <! ) . !! !$2! [𝑝′ 𝑥′, 𝑡′ ] š Becomes: š ! !$ . 𝑎 + ! !& . 𝑝 𝑥, 𝑡 = <! ) . !! !$! [𝑝 𝑥, 𝑡 ] š ! !& . 𝑝 𝑥, 𝑡 = −𝑎. ! !$ 𝑝 𝑥, 𝑡 + <! ) . !! !$! [𝑝 𝑥, 𝑡 ] š Given the fact that we are dealing with constants, we can freely move those in and out of the partial derivatives š Note that this is not the same when those start becoming functions, in particular functions of 𝑥, and especially when 𝑏 becomes a function of 𝑥. 𝑏 𝑥 is the killer š ! !& . 𝑝 𝑥, 𝑡 = − ! !$ 𝑎. 𝑝 𝑥, 𝑡 − <! ) . !" $,& !$ = − ! !$ 𝑎. 𝑝 𝑥, 𝑡 − 𝐷. !" $,& !$ = − ! !$ [𝐜?+𝐜F] š 𝐜? 𝑥, 𝑡 = 𝑎. 𝑝 𝑥, 𝑡 and 𝐜F 𝑥, 𝑡 = −𝐷. !" $,& !$ 100
  • 101. Luc_Faucheux_2020 Third simple example – V dX=a.dt + b.dW š So we have the mapping: š 𝑑𝑋 𝑡 = 𝑎. 𝑑𝑡 + 𝑏. ([). 𝑑𝑊 š ! !& . 𝑝 𝑥, 𝑡 = − ! !$ 𝑎. 𝑝 𝑥, 𝑡 − <! ) . !" $,& !$ = − ! !$ 𝑎. 𝑝 𝑥, 𝑡 − 𝐷. !" $,& !$ = − ! !$ [𝐜?+𝐜F] š 𝐜? 𝑥, 𝑡 = 𝑎. 𝑝 𝑥, 𝑡 and 𝐜F 𝑥, 𝑡 = −𝐷. !" $,& !$ š !"($,&) !& = − ! !$ [𝑀( 𝑥, 𝑡 . 𝑝 𝑥, 𝑡 − ! !$ [𝑀) 𝑥, 𝑡 . 𝑝 𝑥, 𝑡 ]] š < ∆𝑋 > = 𝐞 ∆𝑋 =< 𝑥 >&-∆& −< 𝑥 >&= 𝐹( 𝑋 𝑡 , 𝑡 . ∆𝑡 (drift term) š < ∆𝑋)> = 𝐞 ∆𝑋) =< (𝑥−< 𝑥 >&-∆&))>&-∆&= 𝐹) 𝑋 𝑡 , 𝑡 . ∆𝑡 (diffusion term) š We showed that 𝐹( 𝑋 𝑡 , 𝑡 = 𝑀( 𝑋 𝑡 , 𝑡 and 𝐹) 𝑋 𝑡 , 𝑡 = 2. 𝑀) 𝑋 𝑡 , 𝑡 101
  • 102. Luc_Faucheux_2020 Third simple example – VI dX=a.dt + b.dW š 𝑀( 𝑥, 𝑡 = 𝑎 = 𝐹( 𝑋 𝑡 , 𝑡 š 𝑀) 𝑋 𝑡 , 𝑡 = ( ) . 𝐹) 𝑋 𝑡 , 𝑡 and 𝑀) 𝑋 𝑡 , 𝑡 = <! ) so 𝐹) 𝑋 𝑡 , 𝑡 = 𝑏) š So: š < ∆𝑋 > = 𝐞 ∆𝑋 =< 𝑥 >&-∆& −< 𝑥 >&= 𝐹( 𝑋 𝑡 , 𝑡 . ∆𝑡 = 𝑎. ∆𝑡 š < ∆𝑋)> = 𝐞 ∆𝑋) =< (𝑥−< 𝑥 >&-∆&))>&-∆&= 𝐹) 𝑋 𝑡 , 𝑡 . ∆𝑡 = 𝑏). ∆𝑡 š < ∆𝑋)> = 𝑏). ∆𝑡 = 𝜎). ∆𝑡 = 2𝐷. ∆𝑡 102
  • 103. Luc_Faucheux_2020 Third simple example – VII dX=a.dt + b.dW š A solution of the PDE: š ! !& . 𝑝 𝑥, 𝑡 = − ! !$ 𝑎. 𝑝 𝑥, 𝑡 − <! ) . !" $,& !$ = − ! !$ 𝑎. 𝑝 𝑥, 𝑡 − 𝐷. !" $,& !$ = − ! !$ [𝐜?+𝐜F] š 𝐜? 𝑥, 𝑡 = 𝑎. 𝑝 𝑥, 𝑡 and 𝐜F 𝑥, 𝑡 = −𝐷. !" $,& !$ š Subject to 𝑝 𝑥, 𝑡 = 𝑡1 = 𝛿 𝑥 − 𝑋 𝑡1 = 𝛿(𝑥 − 𝑋1) is: š 𝑝 𝑥, 𝑡 = ( E5F &+&% . 𝑒𝑥 𝑝 − $+7%+=.(&+&%) ! EF &+&% = ( )5D!(&+&%) . 𝑒𝑥𝑝(− ($+7%+=.(&+&%))! )D!(&+&%) ) š < ∆𝑋)> = 𝑏). ∆𝑡 = 𝜎). ∆𝑡 = 2. 𝐷. ∆𝑡 š < ∆𝑋 > = 𝑎. ∆𝑡 103
  • 104. Luc_Faucheux_2020 Third simple example – VIII dX=a.dt + b.dW š Note that we can also recover the solution from the PDF from the change of variables: š PDF Probability Density Function: 𝑝7(𝑥, 𝑡) š Distribution function : 𝑃7(𝑥, 𝑡) š 𝑃7 𝑥, 𝑡 = 𝑃𝑟𝑜𝑏𝑎𝑏𝑖𝑙𝑖𝑡𝑊 𝑋 ≀ 𝑥, 𝑡 = ∫>3+, >3$ 𝑝7 𝑊, 𝑡 . 𝑑𝑊 š 𝑝7(𝑥, 𝑡) = ! !$ 𝑃7 𝑥, 𝑡 š 𝑑𝑋 𝑡 = 𝑎. 𝑑𝑡 + 𝑏. ([). 𝑑𝑊 š We define: 𝑌 𝑡 = 𝑋 𝑡 − 𝑎. 𝑡 š 𝑑𝑌 𝑡 = 𝑏. ([). 𝑑𝑊 = 𝑏. (∘). 𝑑𝑊 = 𝑏. 𝑑𝑊 104
  • 105. Luc_Faucheux_2020 Third simple example – IX dX=a.dt + b.dW š 𝑃7 𝑥, 𝑡< = 𝑃𝑟𝑜𝑏𝑎𝑏𝑖𝑙𝑖𝑡𝑊(𝑋 ≀ 𝑥|𝑋 𝑡= = 𝑥=) š 𝑃7 𝑥, 𝑡< = 𝑃𝑟𝑜𝑏𝑎𝑏𝑖𝑙𝑖𝑡𝑊(𝑋 𝑡< − 𝑎. (𝑡<−𝑡=) ≀ 𝑥 − 𝑎. (𝑡< − 𝑡=)|𝑋 𝑡= − 𝑎. 𝑡= = 𝑥= − 𝑎. 𝑡=) š Just for sake of notation, setting 𝑡< = 𝑡 and 𝑡= = 0 š 𝑃7 𝑥, 𝑡 = 𝑃𝑟𝑜𝑏𝑎𝑏𝑖𝑙𝑖𝑡𝑊(𝑋 𝑡 − 𝑎. 𝑡 ≀ 𝑥 − 𝑎. 𝑡|𝑋 0 = 𝑥=) š 𝑃7 𝑥, 𝑡 = 𝑃V 𝑥 − 𝑎. 𝑡, 𝑡 š We define: 𝑍 𝑡 = 𝑌 𝑡 /𝑏 š 𝑑𝑍 𝑡 = 1. ([). 𝑑𝑊 = 1. (∘). 𝑑𝑊 = 1. 𝑑𝑊 = 𝑑𝑊 š 𝑃7 𝑥, 𝑡 = 𝑃𝑟𝑜𝑏𝑎𝑏𝑖𝑙𝑖𝑡𝑊(𝑋 𝑡 − 𝑎. 𝑡 ≀ 𝑥 − 𝑎. 𝑡|𝑌 0 = 𝑥=) š 𝑃7 𝑥, 𝑡 = 𝑃𝑟𝑜𝑏𝑎𝑏𝑖𝑙𝑖𝑡𝑊( 7 & +=.& < ≀ $+=.& < |𝑍 0 = 𝑥=/𝑏) 105
  • 106. Luc_Faucheux_2020 Third simple example – X dX=a.dt + b.dW š 𝑃7 𝑥, 𝑡 = 𝑃𝑟𝑜𝑏𝑎𝑏𝑖𝑙𝑖𝑡𝑊( 7 & +=.& < ≀ $+=.& < |𝑍 0 = 𝑥=/𝑏) š 𝑃7 𝑥, 𝑡 = 𝑃V 𝑥 − 𝑎. 𝑡, 𝑡 = 𝑃W $+=.& < , 𝑡 š And we know that š 𝑝W 𝑧, 𝑡 = ! !X 𝑃W 𝑧, 𝑡 = ! !X 𝑁 𝑧 − 𝑍1, 𝑡 = ( )5& . 𝑒𝑥𝑝(− (X+W%)! )& ) š 𝑃W 𝑧, 𝑡 = ∫Y3+, Y3X 𝑝W 𝜉, 𝑡 . 𝑑𝜉 = ∫Y3+, Y3X ℎ 𝜉 − 𝑍1, 𝑡 . 𝑑𝜉 = ∫Y3+, Y3X ( )5& . 𝑒𝑥𝑝(− (Y+W%)! )& ). 𝑑𝜉 š 𝑃7 𝑥, 𝑡 = 𝑃V 𝑥 − 𝑎. 𝑡, 𝑡 = 𝑃W $+=.& < , 𝑡 š 𝑃7 𝑥, 𝑡 = ∫Y3+, Y3 '().+ , ( )5& . 𝑒𝑥𝑝(− (Y+W%)! )& ). 𝑑𝜉 106
  • 107. Luc_Faucheux_2020 Third simple example – XI dX=a.dt + b.dW š 𝑃7 𝑥, 𝑡 = ∫Y3+, Y3 '().+ , ( )5& . 𝑒𝑥𝑝(− (Y+W%)! )& ). 𝑑𝜉 š Just to be quite pedestrian and show the mechanics of the change of variable, we do the following: š 𝜉 = Z+=.& < š 𝜂 = 𝑏𝜉 + 𝑎𝑡 𝑑𝜂 = 𝑏. 𝑑𝜉 š 𝑃7 𝑥, 𝑡 = ∫Z3+, Z3$ ( )5& . 𝑒𝑥𝑝(− ( -().+ , +W%)! )& ). *Y < = ∫Z3+, Z3$ ( )5& . 𝑒𝑥𝑝(− ( -().+ , + ') , )! )& ). *Y < š 𝑃7 𝑥, 𝑡 = ∫Z3+, Z3$ ( )5<!& . 𝑒𝑥𝑝(− (Z+=.&+$))! )&<! ). 𝑑𝜉 š 𝑝7 𝑥, 𝑡 = ! !$ 𝑃7 𝑥, 𝑡 = ( )5<!& . 𝑒𝑥𝑝(− ($+=.&+$))! )<!& ) 107
  • 108. Luc_Faucheux_2020 Third simple example – XII dX=a.dt + b.dW š 𝑝7 𝑥, 𝑡 = ! !$ 𝑃7 𝑥, 𝑡 = ( )5<!& . 𝑒𝑥𝑝(− ($+=.&+7%)! )<!& ) š Replacing the time 𝑡1 into the equation and also setting 𝑥= = 𝑋1 š 𝑝7 𝑥, 𝑡 = ! !$ 𝑃7 𝑥, 𝑡 = ( )5<!(&+&%) . 𝑒𝑥𝑝(− ($+=.(&+&%)+7%)! )D!(&+&%) ) š And with the usual notation (Finance): 𝜎 = 𝑏 š And with the usual notation (Physics): 𝐷 = <! ) š Subject to 𝑝 𝑥, 𝑡 = 𝑡1 = 𝛿 𝑥 − 𝑋 𝑡1 = 𝛿(𝑥 − 𝑋1) is: š 𝑝 𝑥, 𝑡 = ( E5F &+&% . 𝑒𝑥 𝑝 − $+7%+=.(&+&%) ! EF &+&% = ( )5D!(&+&%) . 𝑒𝑥𝑝(− ($+7%+=.(&+&%))! )D!(&+&%) ) š < ∆𝑋)> = 𝑏). ∆𝑡 = 𝜎). ∆𝑡 = 2. 𝐷. ∆𝑡 š < ∆𝑋 > = 𝑎. ∆𝑡 108
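The two moments recovered above, < ∆𝑋 > = 𝑎. ∆𝑡 and < ∆𝑋² > = 𝑏². ∆𝑡, can be checked by simulating the SDE over a horizon 𝑡. This is a sketch with arbitrary values of 𝑎 and 𝑏:

```python
import numpy as np

# Sketch: simulate dX = a.dt + b.dW over [0, t] and check that
# E[X(t) - X0] = a.t and Var[X(t)] = b^2.t (here X0 = 0).
rng = np.random.default_rng(4)
a, b, t, n_steps, n_paths = 0.8, 1.5, 1.0, 100, 100_000
dt = t / n_steps

X = np.zeros(n_paths)
for _ in range(n_steps):
    X += a * dt + b * np.sqrt(dt) * rng.normal(0.0, 1.0, n_paths)

mean_X = X.mean()  # ~ a.t = 0.8
var_X = X.var()    # ~ b^2.t = 2.25
```

Since 𝑎 and 𝑏 are constants there is no discretization bias here; the only error is the Monte Carlo noise.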
  • 109. Luc_Faucheux_2020 Third simple example – XIII dX=a.dt + b.dW š We recover what we had doing algebra on the partial derivatives of the PDE š The change of variable is sometimes very powerful and simpler than going through the PDE š We will see in the Langevin section how elegant and powerful it makes the derivation, it also simplifies the derivation. š Just a good trick to get used to 109
  • 110. Luc_Faucheux_2020 Third simple example – XIV dX=a.dt + b.dW š A nifty property of the change of variable. Useful when sampling distributions (mostly from Press et al, Numerical Recipes book) š PDF Probability Density Function: 𝑝7(𝑥, 𝑡) š Distribution function : 𝑃7(𝑥, 𝑡) š 𝑃7 𝑥, 𝑡 = 𝑃𝑟𝑜𝑏𝑎𝑏𝑖𝑙𝑖𝑡𝑊 𝑋 ≀ 𝑥, 𝑡 = ∫>3+, >3$ 𝑝7 𝑊, 𝑡 . 𝑑𝑊 š 𝑝7(𝑥, 𝑡) = ! !$ 𝑃7 𝑥, 𝑡 š Suppose as usual that the variable 𝑋 belongs to ] − ∞; +∞[ š 𝑃7 𝑥, 𝑡 is a function from ] − ∞; +∞[ into [0; 1] š Suppose that we know the functional form for this function, and that it has an inverse š 𝑃7 𝑥, 𝑡 = 𝐹(𝑥) is a function from ] − ∞; +∞[ into [0; 1] š 𝐹+((𝑊) is a function from [0; 1] into ] − ∞; +∞[ š THEN 𝐹(𝑋) has for PDF the uniform distribution function (constant=1) over [0; 1] 110
  • 111. Luc_Faucheux_2020 Third simple example – XV dX=a.dt + b.dW 111 [figure: the Distribution Function 𝑊 = 𝑃_𝑋(𝑥, 𝑡) = 𝐹(𝑥), mapping 𝑥 ∈ ]−∞; +∞[ into [0; 1]]
  • 112. Luc_Faucheux_2020 Third simple example – XVI dX=a.dt + b.dW 112 [figure: the inverse function 𝑥 = 𝐹⁻¹(𝑊), mapping [0; 1] back into ]−∞; +∞[]
  • 113. Luc_Faucheux_2020 Third simple example – XVII dX=a.dt + b.dW š So the nifty property is the following: š If the variable 𝑋 has the Distribution Function 𝑃_𝑋(𝑥, 𝑡) = 𝐹(𝑥) from ]−∞; +∞[ into [0; 1] š Then the variable 𝑌 = 𝐹(𝑋) has itself a Uniform Probability Distribution Function from [0; 1] into [0; 1] š 𝑃_𝑋(𝑥, 𝑡) = 𝑃𝑟𝑜𝑏𝑎𝑏𝑖𝑙𝑖𝑡𝑊(𝑋 ≀ 𝑥, 𝑡) = ∫_{𝑊=−∞}^{𝑊=𝑥} 𝑝_𝑋(𝑊, 𝑡). 𝑑𝑊 = 𝐹(𝑥) š 𝑃_{𝐹(𝑋)}(𝑊, 𝑡) = 𝑃𝑟𝑜𝑏𝑎𝑏𝑖𝑙𝑖𝑡𝑊(𝐹(𝑋) ≀ 𝑊, 𝑡) = 𝑃𝑟𝑜𝑏𝑎𝑏𝑖𝑙𝑖𝑡𝑊(𝑋 ≀ 𝐹⁻¹(𝑊), 𝑡) = 𝐹(𝐹⁻¹(𝑊)) = 𝑊 š 𝑝_{𝐹(𝑋)}(𝑊, 𝑡) = ∂𝑃_{𝐹(𝑋)}(𝑊, 𝑡)/∂𝑊 = ∂𝑊/∂𝑊 = 1, which is the uniform PDF (constant) š Not super deep but quite nifty and useful, and not super trivial at first glance š It is quite useful when, say, you already have a good sampling (a number sequence sampling the [0; 1] interval) and you want to sample 𝑋 over ]−∞; +∞[ if you know 𝐹(𝑥), when running simulations or computing expectations. 113
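A minimal sketch of the sampling trick: draw uniforms on [0; 1] and push them through 𝐹⁻¹. Here 𝐹 is the standard normal Distribution Function, inverted by bisection (a deliberately simple stand-in for a proper inverse-CDF routine); the resulting samples should have mean ≈ 0 and standard deviation ≈ 1:

```python
import math
import numpy as np

# Sketch of inverse-CDF sampling: if U is uniform on [0,1], then X = F^-1(U)
# has distribution function F. Here F is the standard normal CDF.
def F(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def F_inv(u, lo=-10.0, hi=10.0):
    # invert the monotonic F by bisection (simple, not fast)
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if F(mid) < u:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

rng = np.random.default_rng(5)
u = rng.uniform(0.0, 1.0, 20_000)
x = np.array([F_inv(ui) for ui in u])

sample_mean, sample_std = x.mean(), x.std()  # ~ 0 and ~ 1
```

This is exactly the recipe described on the slide: a good uniform sampler on [0; 1] plus the inverse of 𝐹 gives a sampler for 𝑋 on ]−∞; +∞[.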
  • 115. Luc_Faucheux_2020 Fourth simple example dX=a(t).dt + b.dW š 𝑑𝑋 𝑡 = 𝑎 𝑡, 𝑋 𝑡 . 𝑑𝑡 + 𝑏 𝑡, 𝑋 𝑡 . ([). 𝑑𝑊 š We choose 𝑎 𝑡, 𝑋 𝑡 = 𝑎(𝑡) and 𝑏 𝑡, 𝑋 𝑡 = 𝑏 š 𝑑𝑋 𝑡 = 𝑎(𝑡). 𝑑𝑡 + 𝑏. ([). 𝑑𝑊 š 𝑋 𝑡< − 𝑋 𝑡= = ∫&3&= &3&< 𝑑𝑋 𝑡 = ∫&3&= &3&< 𝑎(𝑠). 𝑑𝑠 + 𝑏 ∫&3&= &3&< 1. ([). 𝑑𝑊(𝑡) š Similarly to the previous change of variables we choose š 𝑋 𝑡 → 𝑋2 𝑡2 = 𝑋 𝑡′ − ∫&3&1 &2 𝑎(𝑠). 𝑑𝑠 š 𝑡 → 𝑡2 = 𝑡 š Note that those are regular Riemann integrals, so no issue on ITO/STRATO/ALPHA here, we will have those when 𝑏 𝑡, 𝑋 𝑡 = 𝑏(𝑋) 115
  • 116. Luc_Faucheux_2020 Fourth simple example – II dX=a(t).dt + b.dW š 𝑋 𝑡 → 𝑋2 𝑡2 = 𝑋 𝑡′ − ∫&3&1 &2 𝑎(𝑠). 𝑑𝑠 š 𝑡 → 𝑡2 = 𝑡 š And so š 𝑑𝑋2 𝑡2 = 𝑑𝑋 𝑡 − 𝑎 𝑡 . 𝑑𝑡 = 𝑏. ([). 𝑑𝑊 š 𝑑𝑡2 = 𝑑𝑡 š We have same as before, š !"2($2,&2) !&2 = ! !$2 ! !$2 <! ) . 𝑝′ 𝑥′, 𝑡′ = <! ) . !! !$2! [𝑝′ 𝑥′, 𝑡′ ] š We then go back to the original variables 𝑋 𝑡 and 𝑡 š We just need to be a little more careful because of the integral 116
  • 117. Luc_Faucheux_2020 Fourth simple example – III dX=a(t).dt + b.dW š 𝑋 𝑡 → 𝑋2 𝑡2 = 𝑋 𝑡′ − ∫&3&1 &2 𝑎(𝑠). 𝑑𝑠 = 𝑋 𝑡 − ∫&3&1 &2 𝑎(𝑠). 𝑑𝑠 š 𝑡 → 𝑡2 = 𝑡 š So we have š 𝑋2 𝑡′ → 𝑋 𝑡 = 𝑋2 𝑡2 + ∫&3&1 &2 𝑎(𝑠). 𝑑𝑠 š 𝑡′ → 𝑡 = 𝑡′ š And defining on the regular variables: 𝑥 = 𝑥′ + ∫&3&1 &2 𝑎(𝑠). 𝑑𝑠 š ! !$2 = ! !$ . !$ !$2 + ! !& . !& !$2 = ! !$ š ! !&2 = ! !$ . !$ !&2 + ! !& . !& !&2 = ! !$ . 𝑎 𝑡2 + ! !& = ! !$ . 𝑎(𝑡) + ! !& š !! !$&! = ! !$2 . ! !$2 = ! !$2 . ! !$ = ! !$ . ! !$ = !! !$! 117
  • 118. Luc_Faucheux_2020 Fourth simple example – IV dX=a(t).dt + b.dW š !"2($2,&2) !&2 = <! ) . !! !$2! [𝑝′ 𝑥′, 𝑡′ ] š Becomes: š ! !$ . 𝑎(𝑡) + ! !& . 𝑝 𝑥, 𝑡 = <! ) . !! !$! [𝑝 𝑥, 𝑡 ] š ! !& . 𝑝 𝑥, 𝑡 = −𝑎(𝑡). ! !$ 𝑝 𝑥, 𝑡 + <! ) . !! !$! [𝑝 𝑥, 𝑡 ] š Given the fact that 𝑎 = 𝑎(𝑡), we can move it inside the ! !$ š ! !& . 𝑝 𝑥, 𝑡 = − ! !$ 𝑎 𝑡 . 𝑝 𝑥, 𝑡 − <! ) . !" $,& !$ = − ! !$ 𝑎 𝑡 . 𝑝 𝑥, 𝑡 − 𝐷. !" $,& !$ š ! !& . 𝑝 𝑥, 𝑡 = − ! !$ [𝐜?+𝐜F] š 𝐜? 𝑥, 𝑡 = 𝑎(𝑡). 𝑝 𝑥, 𝑡 and 𝐜F 𝑥, 𝑡 = −𝐷. !" $,& !$ 118
  • 119. Luc_Faucheux_2020 Fourth simple example – V dX=a(t).dt + b.dW š So we have the mapping: š 𝑑𝑋 𝑡 = 𝑎(𝑡). 𝑑𝑡 + 𝑏. ([). 𝑑𝑊 š ! !& . 𝑝 𝑥, 𝑡 = − ! !$ 𝑎 𝑡 . 𝑝 𝑥, 𝑡 − <! ) . !" $,& !$ = − ! !$ 𝑎 𝑡 . 𝑝 𝑥, 𝑡 − 𝐷. !" $,& !$ š ! !& . 𝑝 𝑥, 𝑡 = − ! !$ [𝐜?+𝐜F] š 𝐜? 𝑥, 𝑡 = 𝑎(𝑡). 𝑝 𝑥, 𝑡 and 𝐜F 𝑥, 𝑡 = −𝐷. !" $,& !$ š !"($,&) !& = − ! !$ [𝑀( 𝑥, 𝑡 . 𝑝 𝑥, 𝑡 − ! !$ [𝑀) 𝑥, 𝑡 . 𝑝 𝑥, 𝑡 ]] š < ∆𝑋 > = 𝐞 ∆𝑋 =< 𝑥 >&-∆& −< 𝑥 >&= 𝐹( 𝑋 𝑡 , 𝑡 . ∆𝑡 (drift term) š < ∆𝑋)> = 𝐞 ∆𝑋) =< (𝑥−< 𝑥 >&-∆&))>&-∆&= 𝐹) 𝑋 𝑡 , 𝑡 . ∆𝑡 (diffusion term) š We showed that 𝐹( 𝑋 𝑡 , 𝑡 = 𝑀( 𝑋 𝑡 , 𝑡 and 𝐹) 𝑋 𝑡 , 𝑡 = 2. 𝑀) 𝑋 𝑡 , 𝑡 119
  • 120. Luc_Faucheux_2020 Fourth simple example – VI dX=a(t).dt + b.dW š 𝑀( 𝑥, 𝑡 = 𝑎(𝑡) = 𝐹( 𝑋 𝑡 , 𝑡 š 𝑀) 𝑋 𝑡 , 𝑡 = ( ) . 𝐹) 𝑋 𝑡 , 𝑡 and 𝑀) 𝑋 𝑡 , 𝑡 = <! ) so 𝐹) 𝑋 𝑡 , 𝑡 = 𝑏) š So: š < ∆𝑋 > = 𝐞 ∆𝑋 =< 𝑥 >&-∆& −< 𝑥 >&= 𝐹( 𝑋 𝑡 , 𝑡 . ∆𝑡 = 𝑎. ∆𝑡 š < ∆𝑋)> = 𝐞 ∆𝑋) =< (𝑥−< 𝑥 >&-∆&))>&-∆&= 𝐹) 𝑋 𝑡 , 𝑡 . ∆𝑡 = 𝑏). ∆𝑡 š < ∆𝑋)> = 𝑏). ∆𝑡 = 𝜎). ∆𝑡 = 2𝐷. ∆𝑡 120
  • 121. Luc_Faucheux_2020 Fourth simple example – VII dX=a(t).dt + b.dW š A solution of the PDE: š ! !& . 𝑝 𝑥, 𝑡 = − ! !$ 𝑎(𝑡). 𝑝 𝑥, 𝑡 − <! ) . !" $,& !$ = − ! !$ 𝑎(𝑡). 𝑝 𝑥, 𝑡 − 𝐷. !" $,& !$ = − ! !$ [𝐜?+𝐜F] š 𝐜? 𝑥, 𝑡 = 𝑎(𝑡). 𝑝 𝑥, 𝑡 and 𝐜F 𝑥, 𝑡 = −𝐷. !" $,& !$ š Subject to 𝑝 𝑥, 𝑡 = 𝑡1 = 𝛿 𝑥 − 𝑋 𝑡1 = 𝛿(𝑥 − 𝑋1) defining: š 𝑋 𝑡 = 𝑋1 + ∫&3&1 & 𝑎(𝑠). 𝑑𝑠 š 𝑝 𝑥, 𝑡 = ( E5F &+&% . 𝑒𝑥 𝑝 − $+7 & ! EF &+&% = ( )5D!(&+&%) . 𝑒𝑥𝑝(− ($+7 & )! )D!(&+&%) ) š < ∆𝑋)> = 𝑏). ∆𝑡 = 𝜎). ∆𝑡 = 2. 𝐷. ∆𝑡 š < ∆𝑋 > = 𝑎(𝑡). ∆𝑡 121
  • 122. Luc_Faucheux_2020 Fourth simple example – VIII dX=a(t).dt + b.dW š Note that we can also recover the solution from the PDF from the change of variables: š PDF Probability Density Function: 𝑝7(𝑥, 𝑡) š Distribution function : 𝑃7(𝑥, 𝑡) š 𝑃7 𝑥, 𝑡 = 𝑃𝑟𝑜𝑏𝑎𝑏𝑖𝑙𝑖𝑡𝑊 𝑋 ≀ 𝑥, 𝑡 = ∫>3+, >3$ 𝑝7 𝑊, 𝑡 . 𝑑𝑊 š 𝑝7(𝑥, 𝑡) = ! !$ 𝑃7 𝑥, 𝑡 š 𝑑𝑋 𝑡 = 𝑎(𝑡). 𝑑𝑡 + 𝑏. ([). 𝑑𝑊 š We define: 𝑌 𝑡 = 𝑋 𝑡 − ∫&3&1 & 𝑎(𝑠). 𝑑𝑠 š 𝑑𝑌 𝑡 = 𝑏. ([). 𝑑𝑊 = 𝑏. (∘). 𝑑𝑊 = 𝑏. 𝑑𝑊 122
  • 123. Luc_Faucheux_2020 Fourth simple example – IX dX=a(t).dt + b.dW š 𝑃7 𝑥, 𝑡< = 𝑃𝑟𝑜𝑏𝑎𝑏𝑖𝑙𝑖𝑡𝑊(𝑋 ≀ 𝑥|𝑋 𝑡= = 𝑥=) š 𝑃7 𝑥, 𝑡< = 𝑃𝑟𝑜𝑏𝑎𝑏𝑖𝑙𝑖𝑡𝑊(𝑋 𝑡< − ∫&3&) &3&, 𝑎(𝑠). 𝑑𝑠 ≀ 𝑥 − ∫&3&) &3&, 𝑎(𝑠). 𝑑𝑠 |𝑋 𝑡= − ∫&3&) &3&, 𝑎(𝑠). 𝑑𝑠 = 𝑥= − ∫&3&) &3&, 𝑎(𝑠). 𝑑𝑠) š Just for sake of notation, setting 𝑡< = 𝑡 and 𝑡= = 0 š 𝑃7 𝑥, 𝑡 = 𝑃𝑟𝑜𝑏𝑎𝑏𝑖𝑙𝑖𝑡𝑊(𝑋 𝑡 − ∫[31 [3& 𝑎(𝑠). 𝑑𝑠 ≀ 𝑥 − ∫[31 [3& 𝑎(𝑠). 𝑑𝑠 |𝑋 0 = 𝑥=) š 𝑃7 𝑥, 𝑡 = 𝑃V 𝑥 − ∫[31 [3& 𝑎(𝑠). 𝑑𝑠 , 𝑡 š We define: 𝑍 𝑡 = 𝑌 𝑡 /𝑏 š 𝑑𝑍 𝑡 = 1. ([). 𝑑𝑊 = 1. (∘). 𝑑𝑊 = 1. 𝑑𝑊 = 𝑑𝑊 š 𝑃7 𝑥, 𝑡 = 𝑃𝑟𝑜𝑏𝑎𝑏𝑖𝑙𝑖𝑡𝑊( 7 & +∫./% ./+ =([).*[ < ≀ $+∫./% ./+ =([).*[ < |𝑍 0 = 𝑥=/𝑏) 123
  • 124. Luc_Faucheux_2020 Fourth simple example – X dX=a(t).dt + b.dW š So this is more complicated that previously, but we can notice that: š ∫[31 [3& 𝑎(𝑠). 𝑑𝑠 is only a function of the time 𝑡 š So if we define: š 𝑋 𝑡 = 𝑋1 + ∫&3&1 & 𝑎(𝑠). 𝑑𝑠 = 𝑋1 + 𝐎 𝑡 . 𝑡 š 𝐎 𝑡 = ( & . ∫&3&1 & 𝑎(𝑠). 𝑑𝑠 š We can carry exactly the same derivation we had previously for the change of variable and recover: š 𝑝7 𝑥, 𝑡 = ! !$ 𝑃7 𝑥, 𝑡 = ( )5<!& . 𝑒𝑥𝑝(− ($+](&).&+7%)! )<!& ) 124
  • 125. Luc_Faucheux_2020 Fourth simple example – XI dX=a(t).dt + b.dW š 𝑝7 𝑥, 𝑡 = ! !$ 𝑃7 𝑥, 𝑡 = ( )5<!& . 𝑒𝑥𝑝(− ($+](&).&+7%)! )<!& ) š Replacing the time 𝑡1 into the equation: š 𝑝7 𝑥, 𝑡 = ! !$ 𝑃7 𝑥, 𝑡 = ( )5<!(&+&%) . 𝑒𝑥𝑝(− ($+](&+&%).(&+&%)+7%)! )D!(&+&%) ) š And with the usual notation (Finance): 𝜎 = 𝑏 š And with the usual notation (Physics): 𝐷 = <! ) š Subject to 𝑝 𝑥, 𝑡 = 𝑡1 = 𝛿 𝑥 − 𝑋 𝑡1 = 𝛿(𝑥 − 𝑋1) is: š 𝑝 𝑥, 𝑡 = ( E5F &+&% . 𝑒𝑥 𝑝 − $+7%+](&+&%).(&+&%) ! EF &+&% = ( )5D!(&+&%) . 𝑒𝑥𝑝(− ($+7%+](&+&%).(&+&%))! )D!(&+&%) ) 125
  • 126. Luc_Faucheux_2020 Fourth simple example – XII dX=a(t).dt + b.dW š 𝐎(𝑡 − 𝑡1) = ( (&+&%) . ∫&3&1 & 𝑎(𝑠). 𝑑𝑠 š We recover: š 𝑋 𝑡 = 𝑋1 + ∫&3&1 & 𝑎(𝑠). 𝑑𝑠 š 𝑝 𝑥, 𝑡 = ( E5F &+&% . 𝑒𝑥 𝑝 − $+7 & ! EF &+&% = ( )5D!(&+&%) . 𝑒𝑥𝑝(− ($+7 & )! )D!(&+&%) ) š < ∆𝑋)> = 𝑏). ∆𝑡 = 𝜎). ∆𝑡 = 2. 𝐷. ∆𝑡 š < ∆𝑋 > = 𝑎(𝑡). ∆𝑡 š Again, worth using the change of variable method to make sure that we did not drop any term 126
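A sketch for the time-dependent drift case, with the arbitrary choice 𝑎(𝑡) = 2𝑡 and 𝑏 = 1: the mean should track 𝑋₀ + ∫𝑎(𝑠). 𝑑𝑠 = 𝑡², while the variance stays 𝑏². 𝑡 exactly as in the constant-drift case:

```python
import numpy as np

# Sketch for dX = a(t).dt + b.dW with a time-only drift, here a(t) = 2t
# (arbitrary choice) and b = 1: the mean should track X0 + integral of
# a(s).ds = t^2, while the variance stays b^2.t as in the constant case.
# The drift integral uses the midpoint of each step to keep the bias small.
rng = np.random.default_rng(6)
b, t, n_steps, n_paths = 1.0, 1.0, 200, 100_000
dt = t / n_steps

X = np.zeros(n_paths)
for k in range(n_steps):
    s = (k + 0.5) * dt  # midpoint of the step, Riemann integral of a(t)
    X += 2 * s * dt + b * np.sqrt(dt) * rng.normal(0.0, 1.0, n_paths)

mean_X = X.mean()  # ~ integral of 2s ds from 0 to 1 = 1
var_X = X.var()    # ~ b^2.t = 1
```

Since 𝑎(𝑡) does not depend on 𝑋, the drift integral is an ordinary Riemann integral, which is why the midpoint rule causes no ITO/STRATO ambiguity here.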
  • 127. Luc_Faucheux_2020 dX= b(t).dW “Cent fois sur le métier remettez votre ouvrage” (“A hundred times put your work back on the loom”) Nicolas Boileau 127
  • 128. Luc_Faucheux_2020
Fifth simple example: dX = b(t).dW
š dX(t) = a(t, X(t)).dt + b(t, X(t)).([).dW
š We choose a(t, X(t)) = 0 and b(t, X(t)) = b(t)
š dX(t) = b(t).([).dW
š X(t_b) − X(t_a) = ∫_{t=t_a}^{t=t_b} dX(t) = ∫_{t=t_a}^{t=t_b} b(t).([).dW(t)
š Similarly to the previous change of variables, we are going to try something like this:
š X(t) → X'(t') = X(t)/b(t)
š t → t' = t
š Using the ITO lemma on X'(t'):
š dX'(t') = (∂X'/∂X).dX + (∂X'/∂t).dt + (1/2).(∂²X'/∂X²).dX²
  • 129. Luc_Faucheux_2020
Fifth simple example – II: dX = b(t).dW
š X(t) → X'(t') = X(t)/b(t)
š dX'(t') = (∂X'/∂X).dX + (∂X'/∂t).dt + (1/2).(∂²X'/∂X²).dX² = (1/b(t)).dX + X(t).(∂/∂t)(1/b(t)).dt
š dX'(t') = (1/b(t)).b(t).([).dW + X(t).(−1/b(t)²).(∂b(t)/∂t).dt = dW + X(t).(−1/b(t)²).(∂b(t)/∂t).dt
š So the first term is OK, because that is going to be the Gaussian.
š The second term, however, is going to be a drift term that is BOTH a function of X(t) and t
š We have not yet done the case: dX(t) = a(t, X(t)).dt + dW
  • 130. Luc_Faucheux_2020
Fifth simple example – III: dX = b(t).dW
š So let's try something else.
š We know that for a Brownian motion the quadratic variation is linear in time.
š Noting 𝔌 the expected value (integral over the distribution), we already know that:
š 𝔌{W(t)} = 0
š 𝔌{W(t)²} = t
š 𝔌{W(t).W(t')} = min(t, t')
š 𝔌{(W(t) − W(t'))²} = 𝔌{W(t)²} + 𝔌{W(t')²} − 2.𝔌{W(t).W(t')}
š 𝔌{(W(t) − W(t'))²} = t + t' − 2.min(t, t') = |t − t'|
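These Brownian identities are easy to verify numerically. A minimal sketch (not from the deck): build W(t) on a grid by cumulating independent Gaussian increments, then check 𝔌{W(t).W(t')} = min(t, t') and 𝔌{(W(t) − W(t'))²} = |t − t'| at the assumed sample times t = 1.0 and t' = 1.5.

```python
# Minimal numerical check of the Brownian covariance identities; the grid
# sizes and the sample times t, t' are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
n_paths, n_steps, dt = 50_000, 200, 0.01
dw = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
w = np.cumsum(dw, axis=1)              # w[:, k] approximates W((k + 1).dt)

i, j = 99, 149                         # t = 1.0, t' = 1.5
t, tp = (i + 1) * dt, (j + 1) * dt
cov = np.mean(w[:, i] * w[:, j])       # expect min(t, t') = 1.0
msq = np.mean((w[:, i] - w[:, j])**2)  # expect |t - t'| = 0.5
print(cov, msq)
```

The covariance min(t, t') is what makes the increment variance collapse to |t − t'|, as in the last bullet above.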
  • 131. Luc_Faucheux_2020
Fifth simple example – IV: dX = b(t).dW
š dX(t) = b(t).([).dW
š X(t_b) − X(t_a) = ∫_{t=t_a}^{t=t_b} dX(t) = ∫_{t=t_a}^{t=t_b} b(t).([).dW(t)
š Using the ITO integral:
š X(t_b) − X(t_a) = lim_{N→∞} Σ_{i=1}^{i=N} b(t_i).{W(t_i) − W(t_{i−1})}
š NOTE that since we have b(t), we can choose the left point (ITO), the middle point (STRATO) or any other point in between to evaluate b: they will all converge to the same limit, because if we Taylor expand b(t) around t = t_{i−1} with t in the partition interval [t_{i−1}, t_i], the higher-order terms will be of order {t_i − t_{i−1}}.{W(t_i) − W(t_{i−1})} and will vanish
š So instead of changing variables, let's try to go back to our estimates of the first and second moments
š 𝔌{X(t_b) − X(t_a)} = 𝔌{lim_{N→∞} Σ_{i=1}^{i=N} b(t_i).{W(t_i) − W(t_{i−1})}} = 0
š <ΔX> = E{ΔX} = <X>_{t+Δt} − <X>_t = 0 (drift term)
  • 132. Luc_Faucheux_2020
Fifth simple example – V: dX = b(t).dW
š <ΔX²> = E{ΔX²} = <(X − <X>_{t+Δt})²>_{t+Δt} = F_2(X(t), t).Δt (diffusion term)
š X(t_b) − X(t_a) = lim_{N→∞} Σ_{i=1}^{i=N} b(t_i).{W(t_i) − W(t_{i−1})}
š {X(t_b) − X(t_a)}² = {lim_{N→∞} Σ_{i=1}^{i=N} b(t_i).{W(t_i) − W(t_{i−1})}}²
š {X(t_b) − X(t_a)}² = lim_{N→∞} Σ_{i=1}^{i=N} Σ_{j=1}^{j=N} b(t_i).{W(t_i) − W(t_{i−1})}.b(t_j).{W(t_j) − W(t_{j−1})}
š Since the increments over disjoint intervals are independent with zero mean, only the i = j terms survive in the expectation:
š 𝔌{X(t_b) − X(t_a)}² = lim_{N→∞} Σ_{i=1}^{i=N} b(t_i).b(t_i).𝔌{(W(t_i) − W(t_{i−1}))²}
š 𝔌{X(t_b) − X(t_a)}² = lim_{N→∞} Σ_{i=1}^{i=N} b(t_i).b(t_i).(t_i − t_{i−1})
š 𝔌{X(t_b) − X(t_a)}² = ∫_{t=t_a}^{t=t_b} b(t).b(t).dt
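Both moment results for dX = b(t).dW can be checked with a short simulation. The sketch below is an illustration, not the deck's code: it uses the assumed choice b(t) = 1 + t, for which ∫_{0}^{1} b(t)².dt = 7/3, and evaluates b at the left point of each interval (any point in the interval would do, since b depends only on t).

```python
# Hedged sketch: X(tb) - X(ta) = sum_i b(t_i).{W(t_i) - W(t_{i-1})} for the
# assumed b(t) = 1 + t; checks E{dX_total} = 0 and E{dX_total^2} = int b^2 dt.
import numpy as np

rng = np.random.default_rng(2)
n_paths, n_steps = 100_000, 200
ta, tb = 0.0, 1.0
dt = (tb - ta) / n_steps

dx_total = np.zeros(n_paths)
t = ta
for _ in range(n_steps):
    # accumulate b(t_i).dW_i over all paths, one time step at a time
    dx_total += (1.0 + t) * rng.normal(0.0, np.sqrt(dt), size=n_paths)
    t += dt

var_pred = ((1.0 + tb)**3 - (1.0 + ta)**3) / 3.0  # int_ta^tb (1 + t)^2 dt = 7/3
print(dx_total.mean())           # expect ~ 0 (no drift term)
print(dx_total.var(), var_pred)  # expect ~ 7/3
```

This mirrors the double-sum argument above: the cross terms average to zero, and the diagonal terms build the Riemann sum of b(t)².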
  • 133. Luc_Faucheux_2020
Fifth simple example – VI: dX = b(t).dW
š This is what we knew already on the Gaussian (and one of the many reasons why we love the Gaussian distribution).
š A variable that is a sum of independent variables, each following a Gaussian distribution, will also follow a Gaussian distribution.
š In discrete form:
š If X = Σ_i ÎŽX_i with <ÎŽX_i.ÎŽX_j> = 0 if i ≠ j, and (σ_i².Ύt_i) otherwise
š Then <ΔX²> = <Σ_i ÎŽX_i . Σ_j ÎŽX_j> = Σ_i (σ_i².Ύt_i)
š In the continuous form (small time interval ÎŽt_i limit):
š <ΔX²> = ∫_{t=t_a}^{t=t_b} σ(t)².dt
š So we would like to define a new variable σ̄(t) so that:
š σ̄(t)².t = ∫_{s=0}^{s=t} σ(s)².ds
  • 134. Luc_Faucheux_2020
Fifth simple example – VII: dX = b(t).dW
š So with the new variable: σ̄(t)².t = ∫_{s=0}^{s=t} σ(s)².ds
š we have <ΔX²> = ∫_{s=t}^{s=t+Δt} σ(s)².ds = σ̄(t + Δt)².(t + Δt) − σ̄(t)².t
š <ΔX²> = σ(t)².Δt
š We can convince ourselves of this through a Taylor expansion of σ̄(t + Δt)².(t + Δt):
š σ̄(t + Δt)².(t + Δt) = σ̄(t)².t + Δt.(∂/∂t)[σ̄(t)².t]
š σ̄(t + Δt)².(t + Δt) = σ̄(t)².t + Δt.(∂/∂t)[∫_{s=0}^{s=t} σ(s)².ds]
š σ̄(t + Δt)².(t + Δt) = σ̄(t)².t + Δt.σ(t)²
š <ΔX²> = σ(t)².Δt
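The effective-volatility bookkeeping above can be sanity-checked deterministically, with no simulation at all. In this sketch (an illustration with the assumed instantaneous volatility σ(t) = 1 + t, not a choice from the deck), the increment of σ̄(t)².t over [t, t + Δt] reproduces σ(t)².Δt to first order in Δt.

```python
# Deterministic check that sigma_bar(t)^2.t = int_0^t sigma(s)^2.ds implies
# <dX^2> = sigma(t)^2.dt; sigma(t) = 1 + t is an assumed example.
def sigma2(t):
    return (1.0 + t)**2                  # instantaneous variance rate sigma(t)^2

def int_sigma2(t):
    return ((1.0 + t)**3 - 1.0) / 3.0    # int_0^t sigma(s)^2.ds in closed form

def sigma_bar2(t):
    return int_sigma2(t) / t             # effective variance rate sigma_bar(t)^2

t, dt = 1.0, 1e-6
lhs = sigma_bar2(t + dt) * (t + dt) - sigma_bar2(t) * t  # <dX^2> over [t, t+dt]
rhs = sigma2(t) * dt
print(lhs, rhs)   # agree to first order in dt
```

This is just the fundamental theorem of calculus in disguise: differentiating σ̄(t)².t recovers σ(t)², which is the last Taylor-expansion step on the slide.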