2. Luc_Faucheux_2020
A quick summary
¨ We are mostly using the book "Louis Bachelier's Theory of Speculation" by Mark Davis and Alison Etheridge
¨ We use it as a starting point to explore some properties of Brownian motion, Gaussian processes and option pricing concepts
¨ It tries to be as rigorous as possible without losing track of being pragmatic
¨ Those notes try to offer you an overview of some of the concepts around option mathematics, to serve as a reference, and to be an introduction to some of the methods and sometimes "tricks" that end up being useful
¨ Those notes tend to also be "non-linear", meaning I will sometimes use a specific page from the Bachelier book as a starting point to muse around Gaussian processes and option pricing theory, hence the rather disorganized structure. I found out that, at least for me, this is usually how I learn: by using a starting point and checking the "what-ifs" and "what-nots" around it.
3. Luc_Faucheux_2020
Many disclaimers and apologies
¨ Apologies for the lack of, or incomplete, references. This is still a work in progress, and I would welcome any comment, or kind readers pointing out omissions and glaring mistakes
¨ I have tried to keep consistent notations throughout those notes. It is somewhat impossible, because keeping to Bachelier's original notations is not compatible with the conventional notations we see in recent textbooks, so again many apologies
¨ The structure of those notes is somewhat "free-flowing", as they originated from reading the original thesis, going off on a tangent, writing down some notes or derivations, then putting those down in PowerPoint
¨ Apologies for the PowerPoint format; it is a result of working in Finance for too long, even though I have to say that I have learnt to appreciate the PowerPoint Equation Editor
¨ In many ways, those notes are nothing more than rather pedestrian derivations, in many pages, of what Bachelier did in a line or two, but they also present what I hope is a rather extensive "bag of tricks" that one needs to have handy when dealing with option theories.
¨ Page numbers usually refer to the ones in the Davis & Etheridge book, but I am not at the point where I can produce a rigorous index or reference list
5. Luc_Faucheux_2020
Bachelier's thesis: March 29th, 1900
¨ Louis Bachelier defended his Ph.D. thesis in front of Paul Appell, Henri Poincaré and Joseph Boussinesq, a formidable trio of "mousquetaires".
¨ It is also quite remarkable that the "second oral presentation" that Bachelier had to do was on the matter of the "Resistance of a sphere in a liquid" under Boussinesq's supervision, 5 years before the seminal paper by Einstein that relates the thermal fluctuations to the viscous dissipation (a precursor of the fluctuation-dissipation theorem) through the diffusion constant: D = kB·T/(6·π·η·a), where T is here the temperature, a the radius of the sphere, η the fluid viscosity, and kB is the famous Boltzmann constant.
¨ So the first part of Bachelier's thesis dealt with stochastic processes in Finance
¨ The second part dealt with stochastic processes in Physics
6. Luc_Faucheux_2020
A few humbling examples
¨ I am using the pages of the Davis and Etheridge book
¨ On page 16, Bachelier fully describes contango and backwardation
¨ On page 34, Bachelier works out the continuous limit of a binomial process through the Stirling formula
¨ On page 40, Bachelier derives the heat equation (Fourier equation) through a Taylor expansion of the probability flow
¨ On page 44, Bachelier essentially derives the now celebrated Dupire formula (1994)
¨ On page 45, Bachelier derives the now common proxy for at-the-money options
¨ On page 66, Bachelier uses the Reflection principle to recover one of the most intriguing and beautiful properties of a Brownian motion: the probability that a price will be attained or exceeded at time t is half the probability that the price will be attained or exceeded during the interval of time up to t.
7. Luc_Faucheux_2020
A few humbling examples - II
¨ On page 70, Bachelier looks at some properties around the first passage time, and shows that its expected value is infinite (a version of the Doob paradox of 1948)
¨ On page 73, Bachelier uses the method of images from Lord Kelvin, essentially the backbone for valuing simple barrier options (Carr, Reiner, Rubinstein, Haug) from the 1990s
¨ The examples are too numerous, and I would not do justice to the way it is presented in the Davis and Etheridge book, especially Chapter III
10. Luc_Faucheux_2020
Spreads have also been negative for a while
¨ Spread option pricing models allow for spreads (difference between two indices) to be negative.
¨ One of the most infamous examples was the curve inversion in June 2008 in Europe between the 2 year swap and the 30 year swap (units are in % on the right scale of the chart)
11. Luc_Faucheux_2020
Black-Scholes does not allow for negative prices
¨ The lognormal distribution only allows for positive asset prices.
¨ The Normal distribution allows for negative prices, hence when Black-Scholes was developed in the context of stocks and bonds, the geometric Brownian motion (lognormal) was favored.
¨ It also offered the advantage of lending itself nicely to changes of numeraire, as the inverse of a geometric process, as well as powers, ratios and products of geometric processes, will also be geometric. It does, however, because it is a non-linear function of a Brownian motion, require the full-fledged Ito calculus and Ito lemma (1951)
¨ In the 1990s, mostly out of Japan in the rates space, people started hitting the limits of lognormal models. The easy way out of it was shifted-lognormal implementations, which essentially translate (shift) the variable
¨ Just for completeness, I have included in the next slides the closed forms for lognormal, Normal and shifted lognormal (setting to 0 the rates, cost of carry, dividends, ..) in order to preserve the essence of the formula. Also of interest are the incremental P/L in a Taylor expansion and the Vega scaling (Greeks normalized to Vega), quite a crucial point when trying to capture the impact of stochastic volatilities
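Since the next slides only list the closed forms, here is a minimal sketch of the three zero-rate call formulas; it is my own illustration (the function names and the at-the-money matching σN ≈ σ·F are my assumptions, not from the original notes):

```python
import math

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def norm_pdf(x):
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def black_call(F, K, sigma, T):
    """Lognormal (Black) call on a forward, zero rates / carry."""
    d1 = (math.log(F / K) + 0.5 * sigma * sigma * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return F * norm_cdf(d1) - K * norm_cdf(d2)

def bachelier_call(F, K, sigma_n, T):
    """Normal (Bachelier) call on a forward, zero rates / carry."""
    d = (F - K) / (sigma_n * math.sqrt(T))
    return (F - K) * norm_cdf(d) + sigma_n * math.sqrt(T) * norm_pdf(d)

def shifted_black_call(F, K, sigma_s, T, beta):
    """Shifted lognormal: Black on the translated variable F + beta."""
    return black_call(F + beta, K + beta, sigma_s, T)

# At the money the two models can be matched with sigma_n ~ sigma * F:
F, T = 100.0, 1.0
print(black_call(F, F, 0.20, T))          # both around 7.97
print(bachelier_call(F, F, 0.20 * F, T))
```

With beta = 0 the shifted model collapses back to plain Black, which is a handy consistency check.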
13. Luc_Faucheux_2020
Greeks and Scaling in the Lognormal Model

| Greeks | Definition | Black formula | Units | Incremental P/L | Vega scaling |
|--------|------------|---------------|-------|-----------------|--------------|
| Delta | Δ = ∂C/∂F | N(d1) | ($/bp) | Δ·(dF) | |
| Gamma | γ = ∂²C/∂F² | N'(d1)/(F·σ·√T) | ($/bp/bp) | (1/2)·γ·(dF)² | 1/(σ·F²·T) |
| Theta | Θ = ∂C/∂T | K·σ·N'(d2)/(2·√T) | ($/day) | Θ·(dT) | σ/(2·T) |
| Vega | Vega = ∂C/∂σ | K·√T·N'(d2) | ($/%) | Vega·(dσ) | 1 |
| Vanna | Vanna = ∂²C/∂F∂σ | (K/(F·σ))·N''(d2) | ($/%/bp) | Vanna·(dF)·(dσ) | −d2/(σ·F·√T) |
| Volga | Volga = ∂²C/∂σ² | K·√T·N''(d2)·(−d1/σ) | ($/%/%) | (1/2)·Volga·(dσ)² | d1·d2/σ |

with the standard d1,2 = [ln(F/K) ± σ²T/2]/(σ·√T) and N''(x) = −x·N'(x)
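The table above can be sanity-checked numerically. The sketch below is my own (with an arbitrary F, K, σ, T): it recomputes each Greek by central finite differences from the Black call price and also checks the Gamma and Vanna entries of the Vega-scaling column:

```python
import math

def N(x):  return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
def n(x):  return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def black_call(F, K, sigma, T):
    # zero rates / carry, as in the slides
    d1 = (math.log(F / K) + 0.5 * sigma * sigma * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return F * N(d1) - K * N(d2)

F, K, sigma, T = 100.0, 95.0, 0.20, 2.0
d1 = (math.log(F / K) + 0.5 * sigma * sigma * T) / (sigma * math.sqrt(T))
d2 = d1 - sigma * math.sqrt(T)

# closed forms from the table (using N''(x) = -x*N'(x))
delta = N(d1)
gamma = n(d1) / (F * sigma * math.sqrt(T))
vega  = K * math.sqrt(T) * n(d2)          # equals F*sqrt(T)*n(d1)
vanna = -d2 * n(d1) / sigma               # = (K/(F*sigma))*N''(d2)
volga = vega * d1 * d2 / sigma            # = K*sqrt(T)*N''(d2)*(-d1/sigma)

h = 1e-3  # bump size for the finite differences
fd_delta = (black_call(F + h, K, sigma, T) - black_call(F - h, K, sigma, T)) / (2 * h)
fd_gamma = (black_call(F + h, K, sigma, T) - 2 * black_call(F, K, sigma, T)
            + black_call(F - h, K, sigma, T)) / (h * h)
fd_vega  = (black_call(F, K, sigma + h, T) - black_call(F, K, sigma - h, T)) / (2 * h)
fd_vanna = (black_call(F + h, K, sigma + h, T) - black_call(F + h, K, sigma - h, T)
            - black_call(F - h, K, sigma + h, T) + black_call(F - h, K, sigma - h, T)) / (4 * h * h)
fd_volga = (black_call(F, K, sigma + h, T) - 2 * black_call(F, K, sigma, T)
            + black_call(F, K, sigma - h, T)) / (h * h)
```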
15. Luc_Faucheux_2020
Greeks and Scaling in the Normal Model

| Greeks | Definition | Black formula | Units | Incremental P/L | Vega scaling |
|--------|------------|---------------|-------|-----------------|--------------|
| Delta | Δ = ∂C/∂F | N(d) | ($/bp) | Δ·(dF) | |
| Gamma | γ = ∂²C/∂F² | N'(d)/(σN·√T) | ($/bp/bp) | (1/2)·γ·(dF)² | 1/(σN·T) |
| Theta | Θ = ∂C/∂T | σN·N'(d)/(2·√T) | ($/day) | Θ·(dT) | σN/(2·T) |
| Vega | Vega = ∂C/∂σN | √T·N'(d) | ($/%) | Vega·(dσN) | 1 |
| Vanna | Vanna = ∂²C/∂F∂σN | N''(d)/σN | ($/%/bp) | Vanna·(dF)·(dσN) | −d/(σN·√T) |
| Volga | Volga = ∂²C/∂σN² | √T·N''(d)·(−d/σN) | ($/%/%) | (1/2)·Volga·(dσN)² | d²/σN |

with d = (F − K)/(σN·√T) and N''(x) = −x·N'(x)
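The same finite-difference check for the Normal (Bachelier) table; again a sketch of my own with arbitrary inputs, using the standard closed form C = (F − K)·N(d) + σN·√T·N'(d):

```python
import math

def N(x):  return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
def n(x):  return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def bachelier_call(F, K, s, T):
    # Normal model, zero rates / carry
    d = (F - K) / (s * math.sqrt(T))
    return (F - K) * N(d) + s * math.sqrt(T) * n(d)

F, K, s, T = 100.0, 98.0, 15.0, 1.5
d = (F - K) / (s * math.sqrt(T))

# closed forms from the table
delta = N(d)
gamma = n(d) / (s * math.sqrt(T))
vega  = math.sqrt(T) * n(d)

h = 1e-3
fd_delta = (bachelier_call(F + h, K, s, T) - bachelier_call(F - h, K, s, T)) / (2 * h)
fd_gamma = (bachelier_call(F + h, K, s, T) - 2 * bachelier_call(F, K, s, T)
            + bachelier_call(F - h, K, s, T)) / (h * h)
fd_vega  = (bachelier_call(F, K, s + h, T) - bachelier_call(F, K, s - h, T)) / (2 * h)
```

The Gamma-to-Vega ratio comes out as 1/(σN·T) exactly, which is the scaling entry in the table.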
17. Luc_Faucheux_2020
Greeks and Scaling in the shifted Lognormal Model

| Greeks | Definition | Black formula | Units | Incremental P/L | Vega scaling |
|--------|------------|---------------|-------|-----------------|--------------|
| Delta | Δ = ∂C/∂F | N(d1) | ($/bp) | Δ·(dF) | |
| Gamma | γ = ∂²C/∂F² | N'(d1)/((F+β)·σS·√T) | ($/bp/bp) | (1/2)·γ·(dF)² | 1/(σS·(F+β)²·T) |
| Theta | Θ = ∂C/∂T | (K+β)·σS·N'(d2)/(2·√T) | ($/day) | Θ·(dT) | σS/(2·T) |
| Vega | Vega = ∂C/∂σS | (K+β)·√T·N'(d2) | ($/%) | Vega·(dσS) | 1 |
| Vanna | Vanna = ∂²C/∂F∂σS | ((K+β)/((F+β)·σS))·N''(d2) | ($/%/bp) | Vanna·(dF)·(dσS) | −d2/(σS·(F+β)·√T) |
| Volga | Volga = ∂²C/∂σS² | (K+β)·√T·N''(d2)·(−d1/σS) | ($/%/%) | (1/2)·Volga·(dσS)² | d1·d2/σS |

with β the shift, and d1,2 computed on the shifted variables F+β and K+β
18. Luc_Faucheux_2020
The Ph.D. thesis of Louis Bachelier
¨ Reading the original thesis (both in French if you can and the excellent translation by Mark Davis and Alison Etheridge) is humbling.
¨ Without a strong, well-developed theory of stochastic calculus (Ito lemma) that only came about in the 1960s or so
¨ Without a strong theoretical footing of what a numeraire is and how to price a derivative in the risk-neutral probability associated to that numeraire (Pliska, 1980 or so)
¨ Without yet the strong connection between PDEs (Partial Differential Equations) and SDEs (Stochastic Differential Equations) that really came about from the Feynman-Kac formula (1950 roughly)
¨ Louis Bachelier managed not only to build a theory of option pricing that is nowadays coming back in fashion with a vengeance, but, perusing through the rather short thesis, one cannot but be amazed at the breadth of his genius, and also at his attention to detail. Bachelier at times goes through numerical examples with the same precision and clarity of thought that he displays in the other, more theoretical parts of his thesis.
20. Luc_Faucheux_2020
Kolmogorov equation: Bachelier thesis (page 35)
¨ p(x,t)·dx is the probability that the price is in [x, x+dx] at time t
¨ p(x,t1)·p(z−x, t2−t1)·dx·dz is the probability that the price is at (x,t1) and at (z,t2)
¨ p(z,t2 | x,t1) = p(z−x, t2−t1)
¨ Note that just writing something like the above implies a lot of things:
¨ Strong Markov property
¨ The price process is memoryless
¨ The price process is homogeneous in time and space
21. Luc_Faucheux_2020
Kolmogorov equation: Bachelier thesis (page 35) - b
¨ What it is saying, if you break it down, is:
¨ There must be a function p(x,t) that is the probability density to find the price (particle, random walker, stochastic process) at x at time t
¨ This assumes that it is a function, that we can find it, and that it is one on which we can perform usual calculus (not completely obvious)
¨ It is then saying that before reaching the point (x,t) the price might have reached another level at some time before (rather obvious, but again this has some mathematical consequences)
¨ Bachelier for some typographical reasons used x and z, which we will stick to in some of the following slides, but for ease of notation here:
¨ The probability density to reach (x,t) is p(x,t)
¨ The probability density to reach (x′,t′) is ALSO the same function p(x′,t′)
¨ The conditional probability density to go from (x′,t′) to (x,t) is ALSO assumed to be some function, which we will note q(x′,t′,x,t)
22. Luc_Faucheux_2020
Kolmogorov equation: Bachelier thesis (page 35) - c
¨ In order to recover p(x,t), we can sum over all the possible in-between states
¨ p(x,t) = ∫_{t′=0}^{t′=t} ∫_{x′=−∞}^{x′=+∞} p(x′,t′)·q(x′,t′,x,t)·dx′·dt′
¨ Graphically this creates somewhat of a Feynman diagram
¨ NOW (and again, either this is painfully obvious or rather deep and we need to pay attention to it), we can actually SEPARATE the space and the time variables (because we are dealing with a well defined process X(t)) (see next slide)
¨ So for a given time t′ we suppose that we can write something like
¨ p(x,t) = ∫_{x′=−∞}^{x′=+∞} p(x′,t′)·q(x′,t′,x,t)·dx′
¨ NOW is the big one, we assume that we can write: q(x′,t′,x,t) = p(x−x′, t−t′)
¨ This is actually again either obvious or not at all
23. Luc_Faucheux_2020
Kolmogorov equation: Bachelier thesis (page 35) - c-1
¨ There is no traveling back in time, so
¨ q(x′,t′,x,t) = 0 if t′ > t
¨ Also no disappearing and "re-apparating"
¨ So for a process going from 0 to t, this process WILL have to go through every intermediate time t′
24. Luc_Faucheux_2020
Kolmogorov equation: Bachelier thesis (page 35) - d
¨ q(x′,t′,x,t) = p(x−x′, t−t′)
¨ First of all, this is assuming that the conditional probability is the same as the probability density:
¨ It does not matter where you are starting from, and at what time; the probability to end up at a different level is only a function of the distance to the original starting point, and of the time elapsed
¨ FURTHERMORE, that conditional probability is exactly the probability density we are looking for
¨ Strong Markov property
¨ The price process is memoryless
¨ The price process is homogeneous in time and space
¨ No smile, no mean reversion, no time dependent volatility, and all functions are mathematically well behaved
25. Luc_Faucheux_2020
Kolmogorov equation: Bachelier thesis (page 35) II
¨ NOW Bachelier writes what is now known as the Chapman-Kolmogorov equation:
¨ p(x,t1)·p(z−x, t2−t1)·dx·dz is the probability that the price is at (x,t1) and at (z,t2)
¨ p(z,t2) = ∫_{x=−∞}^{x=+∞} p(x,t1)·p(z−x, t2−t1)·dx
¨ Bachelier actually changes the notation a little and writes it as
¨ p(z, t1+t2) = ∫_{x=−∞}^{x=+∞} p(x,t1)·p(z−x, t2)·dx
¨ We can take a lucky guess like Louis did and write p(x,t) = A·exp(−B²·x²)
¨ To be more exact, p(x,t) = A(t)·exp(−B(t)²·x²)
¨ Note that this does not guarantee the uniqueness of a solution, only the existence
¨ Kolmogorov expressed, some 30 years later, some Germanic displeasure with what he perceived to be a lack of mathematical rigor from Louis
¨ "Dass die Bachelierschen Betrachtungen jeder mathematischen Strenge gänzlich entbehren" (that Bachelier's considerations entirely lack mathematical rigor)
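Bachelier's lucky guess can be checked on a grid: the sketch below (my own, with the normalization H = 1 anticipating the next slides) convolves p(·,t1) with p(·,t2) numerically and compares the result to the closed form p(·, t1+t2):

```python
import math

def p(x, t, H=1.0):
    # Bachelier's guessed density: p(x,t) = (H/sqrt(t)) * exp(-pi*H^2*x^2/t)
    return (H / math.sqrt(t)) * math.exp(-math.pi * H * H * x * x / t)

def ck_gap(z, t1=0.7, t2=1.3, dx=0.002, nmax=5000):
    # |Chapman-Kolmogorov convolution - closed form| at the point z
    conv = sum(p(i * dx, t1) * p(z - i * dx, t2)
               for i in range(-nmax, nmax + 1)) * dx
    return abs(conv - p(z, t1 + t2))
```

The gap is at the level of the quadrature error, i.e. the guessed family of Gaussians is indeed stable under the Chapman-Kolmogorov convolution.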
30. Luc_Faucheux_2020
Kolmogorov equation: Bachelier thesis (page 35) VII
¨ ∫_{u=−∞}^{u=+∞} exp(−α·u²)·du = √(π/α), which equals 1 with α = π
¨ p(z, t1+t2) = (A1·A2/√(A1²+A2²))·exp(−π·z²·A1²·A2²/(A1²+A2²))
¨ And we know that
¨ p(x,t) = p(x=0,t)·exp{−π·p(x=0,t)²·x²}
¨ p(z, t1+t2) = p(z=0, t1+t2)·exp{−π·p(z=0, t1+t2)²·z²}
¨ A1 = p(x=0,t1) and A2 = p(x=0,t2)
¨ p(z=0, t1+t2)² = A1²·A2²/(A1²+A2²), or keeping the notation: A(1+2)² = A1²·A2²/(A1²+A2²)
¨ Also quite an elegant formulation for the relationship between the peaks (maxima of probability) at different times
31. Luc_Faucheux_2020
Kolmogorov equation: Bachelier thesis (page 35) VIII
¨ A(1+2)² = A1²·A2²/(A1²+A2²), where A1 = p(x=0,t1) and A2 = p(x=0,t2)
¨ So to make it simpler let's use the notation A = A(t)
¨ A(t1+t2)² = A(1+2)² = A(t1)²·A(t2)²/(A(t1)²+A(t2)²)
¨ Method #1: let's be lucky and guess that A(t) = H·t^(−1/2) = H/√t
¨ A(t)² = H²/t
¨ A(t1)²·A(t2)²/(A(t1)²+A(t2)²) = (H²/t1)·(H²/t2)/(H²/t1 + H²/t2) = H²·(1/(t1·t2))·(t1·t2/(t1+t2)) = H²/(t1+t2) = A(t1+t2)²
¨ It works!
32. Luc_Faucheux_2020
Kolmogorov equation: Bachelier thesis (page 35) VIII
¨ A(t) = H·t^(−1/2) = H/√t = p(x=0,t)
¨ p(x,t) = p(x=0,t)·exp{−π·p(x=0,t)²·x²}
¨ So we finally have what we are looking for:
¨ p(x,t) = (H/√t)·exp{−π·H²·x²/t}
¨ We just need to normalize one more time: ∫_{−∞}^{+∞} p(x,t)·dx = 1
¨ We already know that: (∫_{−∞}^{+∞} e^(−α·x²)·dx)² = π/α, so ∫_{−∞}^{+∞} e^(−α·x²)·dx = √(π/α)
¨ Here α = π·H²/t, so ∫_{−∞}^{+∞} p(x,t)·dx = ∫_{−∞}^{+∞} (H/√t)·exp{−π·H²·x²/t}·dx = (H/√t)·√(π/α) = (H/√t)·(√t/H) = 1
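A quick numerical sanity check of this normalization (my own sketch; the values of H and t are arbitrary), which also reads off the variance ⟨x²⟩ = t/(2πH²), i.e. the usual Gaussian 1/√(4πDt)·exp(−x²/(4Dt)) with D = 1/(4πH²):

```python
import math

H, t = 0.8, 2.5
dx = 0.001
xs = [i * dx for i in range(-20000, 20001)]   # covers many standard deviations

# p(x,t) = (H/sqrt(t)) * exp(-pi*H^2*x^2/t)
p = [(H / math.sqrt(t)) * math.exp(-math.pi * H * H * x * x / t) for x in xs]
norm = sum(p) * dx                              # should be 1
var = sum(x * x * v for x, v in zip(xs, p)) * dx  # should be t/(2*pi*H^2)
```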
33. Luc_Faucheux_2020
Kolmogorov equation: Bachelier thesis (page 35) IX
¨ p(x,t) = (H/√t)·exp{−π·H²·x²/t} is already normalized
¨ We still need to solve for the value of H
¨ A couple of side notes first on I(α) = ∫_{−∞}^{+∞} e^(−α·x²)·dx = √(π/α)
¨ P(α, x) = (1/I(α))·e^(−α·x²) is the normalized probability distribution
¨ ⟨x⟩ = ∫_{−∞}^{+∞} x·P(α,x)·dx = 0 because x·P(α,x) is an odd function
¨ ⟨x^n⟩ = ∫_{−∞}^{+∞} x^n·P(α,x)·dx = 0 because (x^n·P(α,x)) is an odd function if n is odd
52. Luc_Faucheux_2020
Kolmogorov equation: Bachelier thesis (page 35) XXVIII
¨ Differentiating A(t1+t2)² = A(t1)²·A(t2)²/(A(t1)²+A(t2)²) with respect to t1 and then with respect to t2 gives the same left-hand side, so:
¨ 2·A′(t1)·A(t1)·A(t2)⁴/{A(t1)²+A(t2)²}² = 2·A′(t2)·A(t2)·A(t1)⁴/{A(t1)²+A(t2)²}²
¨ A′(t1)·A(t1)·A(t2)⁴ = A′(t2)·A(t2)·A(t1)⁴
¨ A′(t1)/A(t1)³ = A′(t2)/A(t2)³ for all values of t1 and t2
¨ So A′(t)/A(t)³ = cte, and A′(t)/A(t)³ = (−1/2)·(d/dt)[1/A(t)²]
¨ So (d/dt)[1/A(t)²] = cte = α and 1/A(t)² = α·t + β
¨ We can choose β = 0 and rewrite 1/A(t)² = α·t as A(t) = H/√t with H = 1/√α
¨ And so we are back to the expression for the distribution with the explicit peak value
53. Luc_Faucheux_2020
Kolmogorov equation: Bachelier thesis (page 35) XXIX
¨ A(t) = H·t^(−1/2) = H/√t = p(x=0,t)
¨ p(x,t) = p(x=0,t)·exp{−π·p(x=0,t)²·x²}
¨ So we finally have what we are looking for:
¨ p(x,t) = (H/√t)·exp{−π·H²·x²/t}
¨ Note how general the assumptions we made seem to be
¨ Note also that we have the existence of a solution, but we have said nothing about uniqueness
55. Luc_Faucheux_2020
From the coin toss to the random walker
¨ Let us limit ourselves to a one-dimensional random walk
¨ A random walker will jump to the right or the left by one unit δ at equal time intervals τ
¨ We assume equal probability (1/2) to jump to the right or the left
¨ X(t) will be in bin [i]; X(t+τ) will be in bin [i−1] or [i+1] with equal probability 50%
¨ Analogous to the coin toss
¨ The random walk can be mapped to the coin toss for money: the position on the X axis is the current amount of money that the player has while playing a simple strategy where one amount of currency ($1) is won or lost if the coin lands on heads or tails
[diagram: bins i−1, i, i+1 along the X axis]
56. Luc_Faucheux_2020
The random walk properties โ Markov
¨ Markov property: the value of X(t+τ) only depends on the value X(t)
¨ The distribution of the value of the random variable X(t+τ) conditional upon all the past events only depends on the previous value X(t)
¨ "The random walk has no memory beyond where it is now"
¨ Note: this does not mean that the expected value of X(t+τ) is X(t)
57. Luc_Faucheux_2020
The random walk properties - Martingale
¨ Martingale: the expected value of X(t+τ) is X(t)
¨ In terms of the game:
- You know how much money you have (your current winnings)
- Your expected winnings after one more coin toss are your current winnings
- By recurrence, your expected winnings after any number of coin tosses are the value of your current winnings (somewhat akin to the Tower property)
¨ Note: this does not mean that your winnings are stuck at their current value; it is only the expected value of your winnings that is equal to the current value
58. Luc_Faucheux_2020
Searching for the PDE for the PDF
¨ P(i,n) is the probability to find our random walker in the bin (i) at the time (n·τ)
¨ Remember that time and position are discrete and NOT continuous
¨ Position is indexed by i, and the size of the jump is δ, at every time interval τ.
¨ Another way to think about it is to have a large number N of random walkers, so that at time t = n·τ the number of walkers in a specific bin indexed by (i) is N·P(i,n)
¨ P(i,n+1) = (1/2)·{P(i−1,n) + P(i+1,n)}
¨ Because the random walker has no choice but to jump to one of the adjacent bins, the probability after the jump to be in the bin [i] is half of the probability before the jump in the left bin, plus half of the probability before the jump in the right bin
¨ This is sometimes called the Master Equation, or Fokker-Planck equation
59. Luc_Faucheux_2020
Taylor expansion
¨ Even though we are in the discrete description, we are somewhat assuming that we can use tools of continuous calculus like the Taylor expansion on the function P(i,n)
¨ More rigorously, P(i,n) is NOT a continuous function (just like BINOM.DIST was not either), but we are looking for a continuous function P̃(x,t) that would match the discrete values of P(i,n), or is not "too far" from them.
¨ Said another way, we are assuming that there is a limit for P(i,n) that would be a regular continuous function P̃(x(i), n·τ), and because we are not that rigorous, we just use the same notation for P(i,n) and P̃(x,t)
60. Luc_Faucheux_2020
Taylor expansion II
¨ So really we should have written:
¨ P(i,n+1) is the discrete probability to find the random walker in the bin i after (n+1) jumps of size δ every time interval τ
¨ We have a strong feeling that there might be a continuous function P̃(x,t), a function of the two continuous variables position and time, which is such that the discrete function "converges" to the continuous function under some limits
¨ By "converge", what we mean is that there is a manner in which you can calculate the "distance" between the continuous function and the discrete one, and we would like to say something along the lines of "as the size of the jump δ goes to 0 and the period of the jump τ also goes to zero"
¨ Note that we have not defined the "distance"
¨ Note that we have not defined "how we get to 0"
¨ We are trying to be pragmatic without butchering the actual math too much, so really to get to the essence but alert you that there are a couple of trees in the forest that you should pay attention to, and some others that are not that important
61. Luc_Faucheux_2020
Taylor expansion III
¨ P(i,n+1) = (1/2)·{P(i−1,n) + P(i+1,n)}
¨ P(i,n+1) = P(i,n) + τ·∂P̃(x,t)/∂t + O(τ²)
¨ P(i−1,n) = P(i,n) − δ·∂P̃(x,t)/∂x + (1/2)·δ²·∂²P̃(x,t)/∂x² + O(δ³)
¨ P(i+1,n) = P(i,n) + δ·∂P̃(x,t)/∂x + (1/2)·δ²·∂²P̃(x,t)/∂x² + O(δ³)
¨ O(..) means "something of the order of", meaning all the higher orders that we are neglecting in the Taylor expansion
¨ You have to be careful about which order you go to, and also whether the higher orders are indeed negligible for what you are trying to achieve
63. Luc_Faucheux_2020
Taylor expansion V
¨ τ·∂P̃(x,t)/∂t + O(τ²) = (1/2)·δ²·∂²P̃(x,t)/∂x² + O(δ³)
¨ ∂P̃(x,t)/∂t = (δ²/2τ)·∂²P̃(x,t)/∂x²
¨ The equation above is usually referred to as a "heat equation" or "diffusion equation"
¨ The diffusion coefficient is defined as D = δ²/(2τ)
¨ This is the PDE (Partial Differential Equation) for the PDF (Probability Distribution Function)
¨ Note that we were looking for a continuous limit when δ "goes to zero" and τ "goes to zero". Obviously, since we are dividing one by the other, we are going to have to be a little careful here.
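To see the limit at work, the sketch below (my own; δ = τ = 1, so D = 1/2) iterates the master equation from a walker localized at bin 0 and compares the result after 200 steps to the Gaussian solution of the heat equation:

```python
import math

nbins = 401                      # bins indexed i = -200 .. 200
mid = nbins // 2
P = [0.0] * nbins
P[mid] = 1.0                     # walker starts at bin 0

nsteps = 200
for _ in range(nsteps):
    newP = [0.0] * nbins
    for i in range(1, nbins - 1):
        newP[i] = 0.5 * (P[i - 1] + P[i + 1])   # master equation
    P = newP

# continuous limit: D = delta^2/(2*tau) = 1/2 here
D, t = 0.5, float(nsteps)
gauss = lambda x: math.exp(-x * x / (4 * D * t)) / math.sqrt(4 * math.pi * D * t)

# after an even number of steps only even bins are occupied,
# each carrying twice the continuous density
err = max(abs(P[mid + i] - 2.0 * gauss(i)) for i in range(-100, 101, 2))
```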
64. Luc_Faucheux_2020
Conservation Equation
¨ ∂P̃(x,t)/∂t = D·∂²P̃(x,t)/∂x²
¨ We can rewrite the above as
¨ ∂P̃(x,t)/∂t = D·∂²P̃(x,t)/∂x² = (∂/∂x)[D·∂P̃(x,t)/∂x]
¨ This is also known as a conservation equation, because it verifies the conservation of overall probability (we do not lose any random walkers)
¨ The overall probability is the integral over the position axis of the function P̃(x,t)
¨ (d/dt)·∫P̃(x,t)·dx = ∫(∂P̃(x,t)/∂t)·dx = ∫(∂/∂x)[D·∂P̃(x,t)/∂x]·dx = 0
¨ So the overall probability is "conserved".
¨ Please note that we were quite liberal in taking the diffusion coefficient D inside the partial derivative, which we can only do if it has no dependence on the position. When it depends on the position, this opens up the whole Ito-Stratonovitch can of worms
65. Luc_Faucheux_2020
Gradient and Diffusion current
¨ ∂P̃(x,t)/∂t = D·∂²P̃(x,t)/∂x² = (∂/∂x)[D·∂P̃(x,t)/∂x]
¨ The diffusion current is sometimes defined as: J(x,t) = −D·∂P̃(x,t)/∂x
¨ The above equation is sometimes called Fick's law.
¨ ∂P̃/∂t = −∂J/∂x
¨ ∂P̃(x,t)/∂t = D·∂²P̃(x,t)/∂x²
¨ We know a solution of this equation: the Normal Distribution Function, or Gaussian.
¨ G(x,t) = (1/√(4πDt))·exp(−x²/(4Dt))
66. Luc_Faucheux_2020
Propagator and Green function
¨ G(x,t) = (1/√(4πDt))·exp(−x²/(4Dt)) is one solution of the diffusion equation.
¨ Because the diffusion equation is linear, a linear combination of solutions is ALSO a solution
¨ The Gaussian function is self-similar: if you plot {G(x,t)·√(4πDt)} as a function of the rescaled variable y = x/√(4Dt), you always get the same function exp(−y²)
¨ This is what we did in the spreadsheet with the BINOM.DIST function
¨ When t = 0, the Gaussian function above converges to the Dirac function. It is a function equal to 0 everywhere except at x = 0, where it goes to infinity, but in such a way that the integral of the Gaussian over the x-axis is always conserved and equal to 1 (conservation of probability)
67. Luc_Faucheux_2020
Propagator and Green functions II
¨ Take any arbitrary initial probability distribution function P̃(x, t=0)
¨ This can be written formally as
¨ P̃(x, t=0) = ∫P̃(x′, t=0)·δ(x−x′)·dx′
¨ The initial "peak" P̃(x′, t=0)·δ(x−x′) is centered around x′ with integral P̃(x′, t=0)
¨ This peak will diffuse with the Gaussian G(x, x′, t) = (1/√(4πDt))·exp(−(x−x′)²/(4Dt))
¨ and so for any time t, the solution of the diffusion equation that satisfies the initial condition P̃(x, t=0) will be:
¨ P̃(x,t) = ∫P̃(x′, t=0)·G(x, x′, t)·dx′
¨ G(x, x′, t) is called the Green function, or the propagator
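A sketch of the propagator at work (my own example): the initial condition is two narrow Gaussian peaks, and convolving with G reproduces the closed form in which each peak simply "ages" by t:

```python
import math

D = 0.25

def G(x, t):
    # heat kernel / propagator
    return math.exp(-x * x / (4 * D * t)) / math.sqrt(4 * math.pi * D * t)

def p0(x):
    # arbitrary initial condition: two peaks of weight 1/2
    return 0.5 * G(x - 1.0, 0.1) + 0.5 * G(x + 2.0, 0.1)

def propagate(x, t, dx=0.005, L=8.0):
    # p(x,t) = integral of p0(x') * G(x - x', t) dx'
    n = int(L / dx)
    return sum(p0(i * dx) * G(x - i * dx, t) for i in range(-n, n + 1)) * dx

def exact(x, t):
    # Gaussians stay Gaussian under diffusion: variances add,
    # so a peak of "age" 0.1 becomes a peak of "age" 0.1 + t
    return 0.5 * G(x - 1.0, 0.1 + t) + 0.5 * G(x + 2.0, 0.1 + t)
```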
68. Luc_Faucheux_2020
Propagators and Green functions III
¨ The propagator technique is hugely helpful when discounting payoffs
¨ The Black-Scholes equation is a diffusion equation
¨ Note: the probability distribution function for the random variable "diffuses forward in time"
¨ Note: the option value as a function of the random variable "diffuses backward in time" from the terminal payoff.
¨ The terminal payoff is sometimes called the "boundary condition" for the diffusion equation followed by the option value
69. Luc_Faucheux_2020
Diffusion and convexity
¨ (∂/∂t)·p(x,t) = D·(∂²/∂x²)·p(x,t)
¨ If the probability density has a "sharp peak", (∂²/∂x²)·p(x,t) is a large negative number, and so (∂/∂t)·p(x,t) is also a large negative number, and so the probability density at that spot will decrease rapidly in time.
¨ If the probability density has a "sharp trough", (∂²/∂x²)·p(x,t) is a large positive number, and so (∂/∂t)·p(x,t) is also a large positive number, and so the probability density at that spot will increase rapidly in time.
¨ The diffusion equation tends to "smooth out" any irregularity of the probability density (forward in time); any sharp "kinks" diffuse over time
¨ Note: in regions of large convexity (Gamma), the time dependence (time decay) is also maximal
70. Luc_Faucheux_2020
Diffusion and convexity II
¨ The steady-state solution (also called equilibrium solution) of the diffusion equation is a solution where there is no dependence on time.
¨ In our simple case, it means (∂²/∂x²)·p(x, t→∞) = (∂²/∂x²)·p_steadystate(x) = 0
¨ That is a straight line
¨ Any "kink" (places where the second spatial derivative was non-zero) got smoothed out
71. Luc_Faucheux_2020
Another scaling argument (Bachelier page 69)
¨ For a given x, the probability density function at a given time t is given by:
¨ p(x,t) = (1/√(4πDt))·exp(−x²/(4Dt))
¨ p(x, t=0) = 0 and lim_{t→∞} p(x,t) = 0
¨ For a given x, the function p(x,t) will exhibit a positive maximum at a given time t
¨ ∂p(x,t)/∂t = (−1/2)·(1/√(4πDt))·(1/t)·exp(−x²/(4Dt)) + (1/√(4πDt))·exp(−x²/(4Dt))·(x²/(4D))·(1/t²)
¨ ∂p(x,t)/∂t = 0 at the maximum t = t* implies t* = x²/(2D)
¨ Again we see the neat scaling of the square of the distance to the first order in time appear
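The scaling t* = x²/(2D) is easy to confirm by a brute-force scan (my own sketch; the values of x and D are arbitrary):

```python
import math

D, x = 0.7, 2.0

def p(t):
    # Gaussian solution evaluated at a fixed x, as a function of t
    return math.exp(-x * x / (4 * D * t)) / math.sqrt(4 * math.pi * D * t)

# scan t on a fine grid and locate the maximum of p(x, t)
ts = [0.001 * k for k in range(1, 20001)]
t_star = max(ts, key=p)
t_pred = x * x / (2 * D)     # the slide's prediction
```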
72. Luc_Faucheux_2020
A neat thing about the diffusion equation (Bachelier)
¨ p(x,t) = (1/√(4πDt))·exp(−x²/(4Dt)) is a solution of (∂/∂t)·p(x,t) = D·(∂²/∂x²)·p(x,t)
¨ We define Q(x,t) = ∫_{x}^{∞} p(x′,t)·dx′ as the probability to find the random variable at time t at a distance greater than x
¨ ∂Q(x,t)/∂t = ∫_{x}^{∞} (∂p(x′,t)/∂t)·dx′ = ∫_{x}^{∞} D·(∂²p(x′,t)/∂x′²)·dx′ = D·[∂p(x′,t)/∂x′]_{x′=x}^{x′=∞} = −D·∂p(x,t)/∂x
¨ ∂Q(x,t)/∂t = −D·∂p(x,t)/∂x
¨ Q(x,t) = ∫_{x}^{∞} p(x′,t)·dx′ and so ∂Q(x,t)/∂x = −p(x,t)
¨ And so the function Q(x,t) = ∫_{x}^{∞} p(x′,t)·dx′ ALSO follows the same diffusion equation as p(x,t)
¨ (∂/∂t)·Q(x,t) = D·(∂²/∂x²)·Q(x,t). NOTE that Q(x,t) is NOT a Gaussian (no uniqueness of solution)
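Since Q has the closed form Q(x,t) = (1/2)·erfc(x/√(4Dt)), the claim can be checked by finite differences (my own sketch):

```python
import math

D = 0.5

def Q(x, t):
    # Q(x,t) = integral_x^inf G(x',t) dx' = 0.5 * erfc(x / sqrt(4*D*t))
    return 0.5 * math.erfc(x / math.sqrt(4.0 * D * t))

# finite-difference check that dQ/dt = D * d2Q/dx2
x, t, h = 0.7, 1.3, 1e-4
lhs = (Q(x, t + h) - Q(x, t - h)) / (2 * h)
rhs = D * (Q(x + h, t) - 2 * Q(x, t) + Q(x - h, t)) / (h * h)
```

Q goes to 1 on the far left and 0 on the far right, quite unlike the Gaussian p: same PDE, different boundary conditions.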
73. Luc_Faucheux_2020
Bachelier on the "rayonnement de probabilité"
¨ Let us limit ourselves to a one-dimensional random walk
¨ A random walker will jump to the right or the left by one unit δ at equal time intervals τ
¨ We assume equal probability (1/2) to jump to the right or the left
¨ X(t) will be in bin [i]; X(t+τ) will be in bin [i−1] or [i+1] with equal probability 50%
¨ Analogous to the coin toss
¨ The random walk can be mapped to the coin toss for money: the position on the X axis is the current amount of money that the player has while playing a simple strategy where one amount of currency ($1) is won or lost if the coin lands on heads or tails
[diagram: bins i−1, i, i+1 along the X axis]
74. Luc_Faucheux_2020
Bachelierโs argument is slighty different
¨ P(i,n) is the probability to find our random walker in the bin (i) at the time (n·τ)
¨ We define Q(i,n) = Σ_{j=i}^{∞} P(j,n)
¨ Q(i,n) is the probability to find the random walker in bin (i) or to its right at the time (n·τ)
¨ Bachelier somehow was more interested in Q(i,n) than P(i,n), because he was more interested in pricing an option
¨ Q(i,n) = Σ_{j=i}^{∞} P(j,n)
¨ Q(i+1,n) = Σ_{j=i+1}^{∞} P(j,n)
¨ And so P(i,n) = Q(i,n) − Q(i+1,n)
¨ Q(i+1,n) = Q(i,n) + (∂Q/∂x)·δ + O(δ²)
¨ P(i,n) = −(∂Q/∂x)·δ to the second order in δ
75. Luc_Faucheux_2020
Rayonement de probability (Bachelier page)
¨ P(i,n) = −(∂Q/∂x)·δ
¨ We also know that the random walker follows the jump dynamics of equal probability to the right and the left at every discrete time increment
¨ Q(i,n+1) = Q(i,n) − (1/2)·P(i,n) + (1/2)·P(i−1,n)
¨ Q(i,n+1) − Q(i,n) = (1/2)·{P(i−1,n) − P(i,n)}
¨ (∂Q/∂t)·τ = (1/2)·(−∂P/∂x)·δ = (1/2)·δ²·(∂²Q/∂x²), or ∂Q/∂t = (δ²/2τ)·(∂²Q/∂x²)
¨ This is the same diffusion equation or heat equation that we had for P.
¨ Note that the two functions are quite different (there is no uniqueness for the diffusion equation)
¨ Different boundary conditions:
¨ Note that we were a little liberal mixing Q(i,n) and Q̃(x(i), n·τ) for clarity's sake
78. Luc_Faucheux_2020
Some concepts around time II
¨ We can define, for a given path, a number of variables
¨ X(t) is the Brownian variable, T is the last time, 0 ≤ t ≤ T
¨ We can define the maximum value of the path: MAX(T) = MAX(X(t), 0 ≤ t ≤ T)
¨ We can define the "first passage time", the first time that the Brownian motion reaches the value a, as θ(a,T) = min(t ≥ 0, X(t) = a)
¨ It would be useful to be able to know the probability distribution of θ(a,T)
¨ Bachelier devotes the last few pages of his thesis to this, and comes up with a number of very useful "rules of thumb"
¨ Let's introduce now the reflection principle, or symmetry principle.
¨ Not only is it neat, but it is also widely used, for example in reducing the CPU and time for Monte Carlo simulations.
79. Luc_Faucheux_2020
Some concepts around time III
¨ Let's do the following trick: as soon as a gets reached at time θ(a,T) by the Brownian motion X(t) (we then have X(θ(a,T)) = a), we create a symmetrical Brownian motion where, starting at time θ(a,T), every time X(t) goes up or down, X_ref(t) does exactly the opposite. Example below: a = 2, θ(a,T) = 12, X(t) is the solid orange line, X_ref(t) is the dashed blue line
80. Luc_Faucheux_2020
Some concepts around time III-b
[chart: a sample path X(t) and its reflection. Labels: maximum to date MAX(T); end point X(T) = X_e; end point reflected X_ref(T) = 2a − X_e; level of first passage a]
81. Luc_Faucheux_2020
Some concepts around time III-c
[chart: a sample path X(t) and its reflection. Labels: maximum to date MAX(T); end point X(T) = X_e; end point reflected X_ref(T) = 2a − X_e; level of first passage a]
82. Luc_Faucheux_2020
Some concepts around time III-d
[chart: a sample path X(t) and its reflection. Labels: maximum to date MAX(T); end point X(T) = X_e; end point reflected X_ref(T) = 2a − X_e; level of first passage a]
83. Luc_Faucheux_2020
Some concepts around time IV
¨ If during the interval [0,T], X(t) reaches a, then we have a reflected path X_ref(t)
¨ We have by construction: ABS(X_ref(t) − a) = ABS(X(t) − a)
¨ So for all times t ≥ θ(a,T), X(t) and X_ref(t) are symmetrical around a
¨ Let's now define, in a more general fashion, a new terminal variable X_e
¨ We know the probability that at time t = T, the Brownian motion will end with a value such that X(T) ≥ X_e
¨ This is the usual Gaussian distribution: h(x,t) = (1/√(2πσ²t))·exp(−x²/(2σ²t))
¨ So P(X(T) ≥ X_e) = ∫_{x=X_e}^{x=∞} h(x,T)·dx
¨ Let's try to figure out the cumulative probability distribution for θ(a,T), or more exactly:
¨ P(θ(a,T) ≤ T)
84. Luc_Faucheux_2020
Some concepts around time V
¨ We can write what is known as the "reflection formula"
¨ P(θ(a,T) ≤ T, X(T) ≤ X_e) = P(θ(a,T) ≤ T, X(T) ≥ 2a − X_e)
¨ Because if X(t) did reach a at some point, for every path X(t) after t = θ(a,T), there exists a symmetrical path X_ref(t) around a.
¨ So the number of paths X(t) that did reach a at some point and are now at a terminal value X(T) ≤ X_e is the same as the number of paths X_ref(t) that are now at a terminal value X_ref(T) ≥ 2a − X_e
¨ Those paths X_ref(t) are "valid" paths X(t), meaning that they are a specific realization of the Brownian motion X(t)
¨ So, re-stating the above, the number of paths X(t) that did reach a at some point and are now at a terminal value X(T) ≤ X_e is the same as the number of paths X(t) that are now at a terminal value X(T) ≥ 2a − X_e
¨ This is the reflection principle (Désiré André, 1840-1917), also sometimes called the ballot problem (Joseph Louis Bertrand, 1887)
85. Luc_Faucheux_2020
Some concepts around time VI
ยจ P(τ(a, X) ≤ T, X(T) ≤ X_T) = P(τ(a, X) ≤ T, X(T) ≥ 2a − X_T)
ยจ Now let's use the specific example of X_T = a
ยจ P(τ(a, X) ≤ T, X(T) ≤ a) = P(τ(a, X) ≤ T, X(T) ≥ a)
ยจ But we also have
ยจ P(τ(a, X) ≤ T) = P(τ(a, X) ≤ T, X(T) ≤ a) + P(τ(a, X) ≤ T, X(T) ≥ a)
ยจ P(τ(a, X) ≤ T) = 2·P(τ(a, X) ≤ T, X(T) ≤ a) = 2·P(τ(a, X) ≤ T, X(T) ≥ a)
ยจ And if X(T) ≥ a, then obviously X(t) did reach a before t = T, so P(τ(a, X) ≤ T, X(T) ≥ a) = P(X(T) ≥ a)
ยจ So: P(τ(a, X) ≤ T) = 2·P(X(T) ≥ a), the usual Gaussian
ยจ The probability that the Brownian motion will be greater than a given level at maturity is half the probability that this given level will be reached or exceeded during the time interval up to maturity
85
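The identity above, P(τ(a, X) ≤ T) = 2·P(X(T) ≥ a), is easy to check numerically. Below is a minimal Monte Carlo sketch (mine, not Bachelier's; numpy and the parameter values a = 0.75, σ = 1, T = 1 are illustrative assumptions):

```python
import numpy as np

# Monte Carlo sketch of the reflection principle result
# P(tau(a, X) <= T) = 2 P(X(T) >= a) for a driftless Brownian motion.
# Sampling on a discrete time grid slightly under-counts the true
# running maximum, so the agreement is approximate.
rng = np.random.default_rng(0)
n_paths, n_steps = 40_000, 500
T, sigma, a = 1.0, 1.0, 0.75

dX = rng.normal(0.0, sigma * np.sqrt(T / n_steps), size=(n_paths, n_steps))
X = np.cumsum(dX, axis=1)

p_hit = np.mean(X.max(axis=1) >= a)       # level a reached within [0, T]
p_end_above = np.mean(X[:, -1] >= a)      # terminal value above a

print(p_hit, 2 * p_end_above)  # close to each other (and to ~0.45)
```

With finer time grids the small discretization gap between the two estimates shrinks further.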
86. Luc_Faucheux_2020
Some concepts around time VI-a
ยจ Note: Shreve (p. 112) looks at each Brownian motion path that reaches the level a prior to time T but is at a level X_T below a at time T
ยจ In that case, since X_T ≤ a, we automatically have: (2a − X_T) ≥ X_T
ยจ And so he writes from the start the reflection equality as:
ยจ P(τ(a, X) ≤ T, X(T) ≤ X_T) = P(X(T) ≥ 2a − X_T), stating X_T ≤ a and a > 0
ยจ We only did it when equating X_T = a, and then of course:
ยจ P(τ(a, X) ≤ T, X(T) ≥ 2a − X_T) = P(τ(a, X) ≤ T, X(T) ≥ a) = P(X(T) ≥ a)
ยจ Do we have:
ยจ P(τ(a, X) ≤ T, X(T) ≤ X_T) = P(τ(a, X) ≤ T, X(T) ≥ 2a − X_T)
ยจ even in the cases where X_T ≥ a, a > 0?
86
87. Luc_Faucheux_2020
Some concepts around time VII
ยจ P(τ(a, X) ≤ T) = 2·P(X(T) ≥ a)
ยจ P(τ(a, X) ≤ T) = 2·∫_{x=a}^{x=∞} ℋ(x, T)·dx
ยจ P(τ(a, X) ≤ T) = 2·∫_{x=a}^{x=∞} (1/√(2πσ²T))·exp(−x²/(2σ²T))·dx
ยจ We can now compute things such as the average first passage time:
ยจ Note that P(τ(a, X) ≤ T) is the CUMULATIVE distribution function
ยจ P(τ(a, X) ≤ T) is the probability that X(t) will reach a over the time interval [0, T]
ยจ Between T and (T + dT), the density function is f(a, T) = ∂P(τ(a, X) ≤ T)/∂T
ยจ P(τ(a, X) ≤ T) = ∫_{s=0}^{s=T} f(a, s)·ds
87
88. Luc_Faucheux_2020
Some concepts around time VIII
ยจ P(τ(a, X) ≤ T) is the probability that X(t) will reach a over the time interval [0, T]
ยจ P(τ(a, X) ≤ T + dT) is the probability that X(t) will reach a over the time interval [0, T + dT]
ยจ P(τ(a, X) ≤ T + dT) − P(τ(a, X) ≤ T) is thus the probability that X(t) will first reach a INSIDE the time interval [T, T + dT]
ยจ P(τ(a, X) ≤ T + dT) − P(τ(a, X) ≤ T) is the probability that τ(a, X) is within the interval [T, T + dT]
ยจ P(τ(a, X) ≤ T + dT) − P(τ(a, X) ≤ T) = (∂P(τ(a, X) ≤ T)/∂T)·dT
ยจ f(a, T) = ∂P(τ(a, X) ≤ T)/∂T is the probability density to have τ(a, X) at time T
ยจ So it is less confusing to rewrite it as f(a, T), the probability density that the Brownian motion X(t) will first reach a at time T
ยจ It also makes things easier to grasp when you realize that P(τ(a, X) ≤ T) can only increase with T (if a was reached, it is obviously still reached)
88
89. Luc_Faucheux_2020
Some concepts around time IX
ยจ Let's rewrite the cumulative probability a little:
ยจ P(τ(a, X) ≤ T) = 2·∫_{x=a}^{x=∞} (1/√(2πσ²T))·exp(−x²/(2σ²T))·dx
ยจ We rescale using the change of variable: y² = x²/(σ²T), i.e. y = x/(σ√T)
ยจ x = a corresponds to y_a = a/(σ√T), and dx = dy·σ√T
ยจ P(τ(a, X) ≤ T) = 2·∫_{y=y_a}^{y=∞} (1/√(2π))·exp(−y²/2)·dy
ยจ P(τ(a, X) ≤ T) = (2/√(2π))·∫_{y=y_a}^{y=∞} exp(−y²/2)·dy
89
91. Luc_Faucheux_2020
Some concepts around time X-a
ยจ P(τ(a, X) ≤ T) = P(a, T) = (2/√(2π))·∫_{y=y_a}^{y=∞} exp(−y²/2)·dy with y_a = a/(σ√T)
ยจ If T → ∞, then y_a → 0, and so P(a, T = ∞) = (2/√(2π))·∫_{y=0}^{y=∞} exp(−y²/2)·dy
ยจ We always go back to: I_α = ∫_{−∞}^{+∞} exp(−αx²)·dx = √(π/α)
ยจ P(a, T = ∞) = (2/√(2π))·∫_{y=0}^{y=∞} exp(−y²/2)·dy = (2/√(2π))·(1/2)·I_{1/2} = (2/√(2π))·(1/2)·√(2π) = 1
ยจ P(a, T = ∞) = 1 for all a. So whatever the value of a, it will be reached at some point in time by the stochastic process X(t) with probability 1
91
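The cumulative distribution above can also be written in closed form with the complementary error function, since 2·P(X(T) ≥ a) = erfc(a/(σ·√(2T))). A small sketch (standard library only; the helper name is mine, not Bachelier's):

```python
from math import erfc, sqrt

# Closed form for the first passage CDF of a driftless Brownian motion:
# P(tau(a, X) <= T) = 2 P(X(T) >= a) = erfc(a / (sigma sqrt(2 T)))
def first_passage_cdf(a, T, sigma=1.0):
    return erfc(a / (sigma * sqrt(2.0 * T)))

# Whatever the level a, the probability of having reached it tends to 1:
for T in (1.0, 100.0, 1e6):
    print(T, first_passage_cdf(2.0, T))
```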
92. Luc_Faucheux_2020
A little paradox (Doob)
ยจ P(a, T = ∞) = 1 for all a. So whatever the value of a, it will be reached at some point in time by the stochastic process X(t) with probability 1
ยจ A little detour through notations and martingales
ยจ We are looking at a stochastic process S(t), or in its discrete implementation S_n = S(t_n)
ยจ (Because this is usually about stocks, we are using the letter S)
ยจ You sometimes see the notation ℱ_n, which is referred to as a "filtration"
ยจ Essentially it is the current set of information available on the world at time t_n
ยจ It is a collection of stuff
ยจ You also sometimes see something that looks like this:
ยจ ℱ_n = ℱ(t_n) ⊂ ℱ_m = ℱ(t_m) for all 0 < n < m
ยจ That means that the set of information increases with time
92
93. Luc_Faucheux_2020
A little paradox (Doob) - II
ยจ ℱ_n = ℱ(t_n) ⊂ ℱ_m = ℱ(t_m) for all 0 < n < m
ยจ That means that the set of information increases with time
ยจ Any information available at time t_n is still available at time t_{n+1}
ยจ ℱ_n is sometimes called a σ-field on Ω (nothing to do with variance, it is just a name)
ยจ Now, just to be super-formal, a sequence of σ-fields is itself called a filtration if the stream of information is increasing
ยจ The collection (ℱ_n, n > 0) of σ-fields on Ω is called a filtration if ℱ_n ⊂ ℱ_m for all 0 < n < m
ยจ If (ℱ_n, n = 0, 1, …) is a sequence of σ-fields on Ω and ℱ_n ⊂ ℱ_{n+1} for all n, we call (ℱ_n) a filtration as well
93
94. Luc_Faucheux_2020
A little paradox (Doob) - III
ยจ A stochastic process S_n is said to be "adapted to the filtration" (ℱ_n, n > 0) if the value of S_n is completely determined by the information in ℱ_n, which is to say that:
ยจ S_n = E[S_n|ℱ_n]
ยจ E[S_n|ℱ_n] is the conditional expectation of S_n
ยจ Conditional expectation is not the same as conditional probability
ยจ The conditional expectation is a weighted average over conditional probabilities
ยจ A filtration 𝒢ℱ_n is said to be "generated" by the stochastic process (S_0, S_1, S_2, … S_n) if it contains all the information, and only the information, in (S_0, S_1, S_2, … S_n); it is also sometimes written σ(S_k, k ≤ n)
ยจ The stochastic process is said to be adapted to the filtration (ℱ_n, n > 0) if:
ยจ σ(S_n) ⊂ ℱ_n = ℱ(t_n) for all n
ยจ It essentially means that the stochastic process does not carry more information than the filtration, or 𝒢ℱ_n ⊂ ℱ_n for all n
94
95. Luc_Faucheux_2020
A little paradox (Doob) - IV
ยจ For example, suppose that 𝒢ℱ_n is "generated" by the stochastic process (S_0, S_1, S_2, … S_n)
ยจ (S_n) is adapted to 𝒢ℱ_n
ยจ (S_n/S_{n−1}) is adapted to 𝒢ℱ_n
ยจ (MAX(S_k; k ≤ n)) is adapted to 𝒢ℱ_n
ยจ (S_{n+1}) is NOT adapted to 𝒢ℱ_n, because it is an additional piece of information that was NOT included in 𝒢ℱ_n, but will be included in 𝒢ℱ_{n+1}
ยจ Any "trading strategy" is adapted to 𝒢ℱ_n
ยจ A "trading strategy" on the stock S_n is a sequence of positions φ_n on the stock S_n
ยจ At time t_n, the investor places a bet of size φ_n on the stock S_n
ยจ The trading strategy being adapted to 𝒢ℱ_n means that the φ_n are computed (decided) only based on the information 𝒢ℱ_n = σ(S_k, k ≤ n), that is (S_0, S_1, S_2, … S_n)
95
96. Luc_Faucheux_2020
A little paradox (Doob) - V
ยจ The trading strategy is said to be "self-financing" if the only gains or losses result from the movements in the stochastic variable S_n (no one adds money to or subtracts money from the account)
ยจ "Self-financing" is not the same as "replicating"
ยจ The total winnings up to time t_n are: W_n = W_{n−1} + φ_{n−1}·(S_n − S_{n−1})
ยจ A stochastic process S_n is called a martingale with respect to ℱ_n in the following fashion:
ยจ 1) E[ABS(S_n)] < ∞ for all n
ยจ 2) S_n is adapted to ℱ_n
ยจ 3) E[S_n|ℱ_m] = S_m for all 0 ≤ m < n, meaning that S_m is the best predictor of S_n given ℱ_m
ยจ In particular E[S_{n+1}|ℱ_n] = S_n
ยจ A martingale has the remarkable property that its expectation function is constant (and we sometimes omit pointing out which exact filtration is being used)
96
97. Luc_Faucheux_2020
A little paradox (Doob) - VI
ยจ Any self-financing trading strategy on a martingale is also a martingale
ยจ The total winnings up to time t_n are: W_n = W_{n−1} + φ_{n−1}·(S_n − S_{n−1})
ยจ E[S_{n+1}|ℱ_n] = S_n
ยจ E[W_{n+1}|ℱ_n] = E[W_n + φ_n·(S_{n+1} − S_n)|ℱ_n]
ยจ E[W_{n+1}|ℱ_n] = W_n + φ_n·(E[S_{n+1}|ℱ_n] − S_n) = W_n
ยจ Martingales are also referred to as "fair games"
ยจ Originally, martingale is a French word referring to a piece of tack you put on a horse to control it
97
98. Luc_Faucheux_2020
A little paradox (Doob) - VII
ยจ Almost getting to the paradox, but we had to spend a little time on definitions first.
ยจ Suppose that a stochastic process (a stock) is a martingale, and that S_0 = 0
ยจ We define the following trading strategy: φ_n = 1 if t_n < τ(a, S), and φ_n = 0 otherwise
ยจ Recall that: P(τ(a, S) ≤ T) = P(a, T)
ยจ τ(a, S) is here referred to as the "stopping time". As soon as S_n reaches a, W_n stays stuck on that value
ยจ The trading strategy is W_n = W_{n−1} + φ_{n−1}·(S_n − S_{n−1}) with φ_n = 1 if t_n < τ(a, S), 0 otherwise
ยจ So W_n = S_n if t_n < τ(a, S), and W_n = a for all t_n ≥ τ(a, S)
ยจ Now here is the paradox: from what is sometimes referred to as Doob's theorem (you cannot make an expected non-zero profit with a trading strategy on a martingale), we know that E[W_n] = 0
ยจ HOWEVER we also know that P(a, T = ∞) = 1 for all a. So whatever the value of a, it will be reached at some point in time by the stochastic process S(t) with probability 1
98
99. Luc_Faucheux_2020
A little paradox (Doob) - VIII
ยจ So if P(a, T = ∞) = 1 for all a, AND
ยจ W_n = S_n if t_n < τ(a, S) and W_n = a for all t_n ≥ τ(a, S)
ยจ We deduce that W_n will eventually be equal to a with probability 1
ยจ And so we would like to say that E[W_n] will tend to a
ยจ This obviously seems like a paradox.
ยจ The fact of the matter is that for any given time T that is large enough, W_T is very likely to be equal to a; however, there is enough probability that it takes very large negative values that the expected value is still 0
ยจ Also, we will see in the following slides that the average first passage time is infinite, which again seems somewhat confusing.
99
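The paradox shows up clearly in a simulation. Here is a sketch on a symmetric random walk (a martingale); numpy and the parameter values are my illustrative assumptions:

```python
import numpy as np

# Sketch of the Doob "paradox": hold one unit of the stock until the level
# a is first reached, then stop.  Over a long horizon most paths are stuck
# at a, yet the average winnings stay near 0, because the few paths still
# below a carry large negative values.
rng = np.random.default_rng(1)
n_paths, n_steps, a = 20_000, 2_000, 5

steps = rng.choice(np.array([-1, 1], dtype=np.int16), size=(n_paths, n_steps))
S = np.cumsum(steps, axis=1, dtype=np.int16)
hit = S.max(axis=1) >= a                 # paths where a was reached
W = np.where(hit, a, S[:, -1])           # stopped winnings at the horizon

print(hit.mean())   # most paths have already reached a
print(W.mean())     # and yet the expected winnings stay close to 0
```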
100. Luc_Faucheux_2020
A couple more examples of martingales
ยจ A Brownian bridge is NOT a martingale
ยจ Ito integrals are martingales
ยจ Stratonovich integrals are NOT martingales
100
101. Luc_Faucheux_2020
Some concepts around time XI
ยจ For the Gaussian case:
ยจ ℋ(x, t) = (1/√(2πσ²t))·exp(−x²/(2σ²t))
ยจ f(a, T) = (a/(T·√(2πσ²T)))·exp(−a²/(2σ²T)) = (a/T)·ℋ(a, T)
ยจ Be careful that those two probabilities are not equal, and that can be confusing.
ยจ ℋ(x, t) is the probability density for X(t), i.e. for a given time t, what is the probability to find X(t) in the interval [x, x + dx]
ยจ f(a, T) is, for a given a, the probability density that the Brownian motion X(t) will reach a for the first time in the interval [T, T + dT]
ยจ f(a, T) is, for a given a, the probability density that the first passage time, defined as τ(a, X) = min(t ≥ 0, X(t) = a), can be found in the interval [T, T + dT]
101
102. Luc_Faucheux_2020
Some concepts around time XI-b
ยจ f(a, T) = (a/(T·√(2πσ²T)))·exp(−a²/(2σ²T)) = (a/T)·ℋ(a, T)
ยจ T·f(a, T) = a·ℋ(a, T)
ยจ This is quite elegant and somewhat intuitive
ยจ ℋ(a, T) is the probability density, for a given time T, to find the stochastic variable X(t) at the position X(T) = a
ยจ ℋ(a, T)·da is the probability, for a given time T, to find the stochastic variable X(t) in the interval [a, a + da]
ยจ ℋ(a, T) is normalized so that: ∫_{a=−∞}^{a=+∞} ℋ(a, T)·da = 1
ยจ So ℋ(a, T) has units of [1]/[a]
102
103. Luc_Faucheux_2020
Some concepts around time XI-c
ยจ T·f(a, T) = a·ℋ(a, T)
ยจ f(a, T) is the probability density, for a given level a, to find the first passage time τ(a, X) = min(t ≥ 0, X(t) = a) at time T
ยจ f(a, T)·dT is the probability, for a given level a, to find the first passage time τ(a, X) = min(t ≥ 0, X(t) = a) in the interval [T, T + dT]
ยจ Is f(a, T) normalized? ∫_{T=0}^{T=+∞} f(a, T)·dT = ?
ยจ So f(a, T) has units of [1]/[T]
103
104. Luc_Faucheux_2020
Some concepts around time XI-d
ยจ f(a, T) = (a/(T·√(2πσ²T)))·exp(−a²/(2σ²T)) = (a/T)·ℋ(a, T)
ยจ The average time is then <τ(a, X)> = ∫_{T=0}^{T=∞} T·f(a, T)·dT
ยจ <τ(a, X)> = ∫_{T=0}^{T=∞} T·(a/(T·√(2πσ²T)))·exp(−a²/(2σ²T))·dT
ยจ <τ(a, X)> = ∫_{T=0}^{T=∞} (a/√(2πσ²T))·exp(−a²/(2σ²T))·dT
ยจ When T → ∞, exp(−a²/(2σ²T)) → 1, and so the large-T part of the integral looks like ∫^{T=∞} (1/√T)·dT → ∞
ยจ We will explore this in more detail, but the average first passage time is infinite.
ยจ The stochastic process will reach any level with probability 1, but will take on average an infinite amount of time to reach that level. This is a little weird.
104
105. Luc_Faucheux_2020
Some concepts around time XI-e
ยจ Because the average first passage time is infinite, in the literature you will find what is called the typical time
ยจ The typical time is the position of the maximum of the function f(a, T) as a function of time T
ยจ f(a, T) = (a/(T·√(2πσ²T)))·exp(−a²/(2σ²T)) = (a/T)·ℋ(a, T)
ยจ ∂f(a, T)/∂T = −(3/2)·(a/(T²·√(2πσ²T)))·exp(−a²/(2σ²T)) + (a²/(2σ²T²))·(a/(T·√(2πσ²T)))·exp(−a²/(2σ²T))
ยจ ∂f(a, T)/∂T = 0 implies: (3/2)·(a/(T²·√(2πσ²T))) = (a²/(2σ²T²))·(a/(T·√(2πσ²T)))
ยจ 3/2 = a²/(2σ²T), or again T = a²/(3σ²), or, using the diffusion notation D = σ²/2, we have T = a²/(6D)
ยจ The typical time scales as the square of the level for the first passage time
105
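The typical time T = a²/(3σ²) can be checked by locating the peak of f(a, T) on a fine grid. A sketch (numpy; the values of a and σ are illustrative):

```python
import numpy as np

# Numerical check that the first passage density f(a, T) peaks at the
# typical time T* = a^2 / (3 sigma^2) = a^2 / (6 D).
a, sigma = 2.0, 0.5

T = np.linspace(1e-4, 200.0, 2_000_001)
f = a / (T * np.sqrt(2 * np.pi * sigma**2 * T)) * np.exp(-a**2 / (2 * sigma**2 * T))

T_star = T[np.argmax(f)]
print(T_star, a**2 / (3 * sigma**2))  # both close to 5.33
```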
106. Luc_Faucheux_2020
Another scaling argument - redux
ยจ ∂G(x, t)/∂t = ∂²G(x, t)/∂x², where we set (D = 1) for simplicity's sake
ยจ G(x, t, D = 1) = (1/√(4πt))·exp(−x²/(4t)) is a solution
ยจ You can check that the following function is ALSO a solution of the heat equation
ยจ F(x, t, D = 1) = ∫_{−∞}^{(x/√t)} exp(−u²/4)·du = f(x/√t)
ยจ This is another self-similarity argument, where again the position has to scale with the square root of the time, but also where the integral of a solution of the diffusion equation is ALSO a solution of the same diffusion equation.
ยจ Sometimes those kinds of "mappings" are useful, because it is easier to work in a given (function, variable) space and then "map" the results onto the final (function, variable) space that you need to work in
106
110. Luc_Faucheux_2020
Another scaling argument โ redux IV
ยจ We then have:
ยจ ∂P(a, T)/∂T = (σ²/2)·∂²P(a, T)/∂a², with P(τ(a, X) ≤ T) = (2/√(2π))·∫_{y=a/(σ√T)}^{y=∞} exp(−y²/2)·dy
ยจ Recall that this was for a usual Gaussian: ℋ(x, t) = (1/√(2πσ²t))·exp(−x²/(2σ²t))
ยจ Which is the solution of the heat equation: ∂ℋ(x, t)/∂t = (σ²/2)·∂²ℋ(x, t)/∂x²
ยจ In terms of the diffusion equation, one has the notation:
ยจ ∂P(x, t)/∂t = D·∂²P(x, t)/∂x²
ยจ P(x, t) = (1/√(4πDt))·exp(−x²/(4Dt)) with D = (σ²/2)
110
111. Luc_Faucheux_2020
Another scaling argument โ redux V
ยจ This is kind of a neat result.
ยจ The cumulative distribution function for the random variable which is the first passage time τ(a, X) = min(t ≥ 0, X(t) = a) is P(a, T) = (2/√(2π))·∫_{y=a/(σ√T)}^{y=∞} exp(−y²/2)·dy
ยจ The underlying Brownian motion X(t) follows ℋ(x, t) = (1/√(2πσ²t))·exp(−x²/(2σ²t))
ยจ ℋ(x, t) follows the diffusion equation: ∂ℋ(x, t)/∂t = (σ²/2)·∂²ℋ(x, t)/∂x²
ยจ P(a, T) follows the diffusion equation: ∂P(a, T)/∂T = (σ²/2)·∂²P(a, T)/∂a²
ยจ P(a, T) diffuses in the space (a, T) with the SAME diffusion coefficient as ℋ(x, t) in (x, t)
ยจ Note again that it can get confusing to compare those two probability functions: one is a cumulative, the other one is a density function. Compare that to the next slide, where the density and the cumulative for x follow the SAME diffusion equation.
111
112. Luc_Faucheux_2020
Another scaling argument โ redux VI
ยจ This is reminiscent of a Dupire-like equation:
ยจ ∂P(a, T)/∂T = (σ²/2)·∂²P(a, T)/∂a²
ยจ f(a, T) = (a/(T·√(2πσ²T)))·exp(−a²/(2σ²T)) = (a/T)·ℋ(a, T) = ∂P(a, T)/∂T
ยจ So:
ยจ ℋ(a, T) = (T/a)·(σ²/2)·∂²P(a, T)/∂a²
ยจ So if we know P(a, T), we can deduce the probability density ℋ(a, T)
ยจ Note: Dupire equation: if we know the call option prices C(K, T), we can deduce the probability density (also Bachelier, p. 51 of his thesis)
ยจ ℋ(K, T) = ∂²C(K, T)/∂K²
112
113. Luc_Faucheux_2020
A neat thing about the diffusion equation (Bachelier) -redux
ยจ ๐ ๐ฅ, ๐ก =
+
K%U7
. ๐๐ฅ๐(โ
3%
KU7
) is a solution of
G
G7
๐(๐ฅ, ๐ก) = ๐ท
G%
G3% ๐(๐ฅ, ๐ก)
ยจ We define ๐ ๐ฅ, ๐ก = โซ3
6
๐ ๐ฅโฒ, ๐ก . ๐๐ฅโฒ as the probability to find the random variable at time ๐ก
at a distance greater than ๐ฅ
ยจ
G] 3,7
G7
= โซ3
6 GP 32,7
G7
. ๐๐ฅโฒ = โซ3
6
๐ท
G%
G32% ๐ ๐ฅโฒ, ๐ก . ๐๐ฅโฒ = ๐ท.
GP 3$,7
G3$
6
3$43 = โ๐ท.
GP 3,7
G3
ยจ
G] 3,7
G7
= โ๐ท.
GP 3,7
G3
ยจ ๐ ๐ฅ, ๐ก = โซ3
6
๐ ๐ฅโฒ, ๐ก . ๐๐ฅโฒ and so
G] 3,7
G3
= โ๐(๐ฅ, ๐ก)
ยจ And so the function ๐ ๐ฅ, ๐ก = โซ3
6
๐ ๐ฅโฒ, ๐ก . ๐๐ฅโฒ ALSO follows the same equation diffusion as
๐(๐ฅ, ๐ก)
ยจ
G
G7
๐(๐ฅ, ๐ก) = ๐ท
G%
G3% ๐ ๐ฅ, ๐ก NOTE that ๐(๐ฅ, ๐ก) is NOT a Gaussian (unicity of solution)
113
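For the Gaussian case the cumulative has the closed form N(x, t) = (1/2)·erfc(x/√(4Dt)), so the claim that N follows the same diffusion equation can be checked by finite differences. A sketch (standard library only; D and the evaluation point are arbitrary illustrative choices):

```python
from math import erfc, sqrt

# Finite-difference check that N(x, t) = 0.5 erfc(x / sqrt(4 D t))
# satisfies dN/dt = D d2N/dx2, just like the density P(x, t).
D = 0.7

def N(x, t):
    return 0.5 * erfc(x / sqrt(4.0 * D * t))

x, t = 0.9, 1.3
ht, hx = 1e-4, 1e-3
dN_dt = (N(x, t + ht) - N(x, t - ht)) / (2 * ht)
d2N_dx2 = (N(x + hx, t) - 2 * N(x, t) + N(x - hx, t)) / hx**2

print(dN_dt, D * d2N_dx2)  # the two sides agree to a few decimal places
```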
114. Luc_Faucheux_2020
Another scaling argument โ redux VI
ยจ ∂P(a, T)/∂T = (σ²/2)·∂²P(a, T)/∂a², with P(τ(a, X) ≤ T) = (2/√(2π))·∫_{y=a/(σ√T)}^{y=∞} exp(−y²/2)·dy
ยจ f(a, T) = ∂P(τ(a, X) ≤ T)/∂T = (a/(T·√(2πσ²T)))·exp(−a²/(2σ²T)) = (a/T)·ℋ(a, T)
ยจ Does f(a, T) also follow a diffusion equation?
114
115. Luc_Faucheux_2020
Some more about the Maximum
ยจ Letโs look again at the reflection principle
115
[Chart: a path and its reflection. Maximum to date MAX(T); end point X(T) = X_T; reflected end point X_ref(T) = 2a − X_T; level of first passage a]
116. Luc_Faucheux_2020
Some more about the Maximum - a
116
[Chart: a path and its reflection. Maximum to date MAX(T); end point X(T) = X_T; reflected end point X_ref(T) = 2a − X_T; level of first passage a]
117. Luc_Faucheux_2020
Some more about the Maximum II
ยจ We can define the maximum value of the path: MAX(T) = MAX(X(t), 0 ≤ t ≤ T)
ยจ For positive a, we have MAX(T) ≥ a if and only if τ(a, X) = min(t ≥ 0, X(t) = a) is such that τ(a, X) ≤ T
ยจ (You cannot reach a maximum higher than the level a if you have not reached that level yet)
ยจ The reflection equality was:
ยจ P(τ(a, X) ≤ T, X(T) ≤ X_T) = P(τ(a, X) ≤ T, X(T) ≥ 2a − X_T)
ยจ Now:
ยจ P(τ(a, X) ≤ T, X(T) ≤ X_T) = P(MAX(T) ≥ a, X(T) ≤ X_T)
ยจ So we have expressed something in terms of probabilities of MAX(T) and X(T) being above or below some levels. This indicates that we should be able to define, and maybe calculate, a joint probability density for {MAX(T), X(T)}, which we write as g(a, X_T)
117
118. Luc_Faucheux_2020
Some more about the Maximum III
ยจ Here we use the following trick: we do not try to explicitly derive g(a, X_T), but write equations that g(a, X_T) verifies, and from those try to derive g(a, X_T)
ยจ P(τ(a, X) ≤ T, X(T) ≤ X_T) = P(MAX(T) ≥ a, X(T) ≤ X_T)
ยจ P(MAX(T) ≥ a, X(T) ≤ X_T) = ∫_{m=a}^{m=∞} dm ∫_{w=−∞}^{w=X_T} dw·g(m, w)
ยจ P(τ(a, X) ≤ T, X(T) ≥ 2a − X_T) = ∫_{x=2a−X_T}^{x=∞} ℋ(x, T)·dx
ยจ P(τ(a, X) ≤ T, X(T) ≥ 2a − X_T) = ∫_{x=2a−X_T}^{x=∞} (1/√(2πσ²T))·exp(−x²/(2σ²T))·dx
ยจ Because if X(T) ≥ 2a − X_T, then by construction X(t) has reached the level a before T
ยจ Note that we follow here p. 114 of Shreve (so X_T < a)
ยจ We will try to give a shot later at a more general formula
118
119. Luc_Faucheux_2020
Some more about the Maximum IV
ยจ So we have the following:
ยจ ∫_{m=a}^{m=∞} dm ∫_{w=−∞}^{w=X_T} dw·g(m, w) = ∫_{x=2a−X_T}^{x=∞} (1/√(2πσ²T))·exp(−x²/(2σ²T))·dx
ยจ Now bear in mind that we still do not know g(m, w)
ยจ But we will take the derivative of the above equation with respect to X_T and a
ยจ (∂/∂a) ∫_{m=a}^{m=∞} dm ∫_{w=−∞}^{w=X_T} dw·g(m, w) = −∫_{w=−∞}^{w=X_T} dw·g(a, w)
ยจ (∂/∂a) ∫_{x=2a−X_T}^{x=∞} (1/√(2πσ²T))·exp(−x²/(2σ²T))·dx = −2·(1/√(2πσ²T))·exp(−(2a − X_T)²/(2σ²T))
ยจ (the factor 2 comes from the chain rule: the lower bound 2a − X_T moves at speed 2 with respect to a)
ยจ So we now have:
ยจ ∫_{w=−∞}^{w=X_T} dw·g(a, w) = 2·(1/√(2πσ²T))·exp(−(2a − X_T)²/(2σ²T))
119
120. Luc_Faucheux_2020
Some more about the Maximum V
ยจ ∫_{w=−∞}^{w=X_T} dw·g(a, w) = 2·(1/√(2πσ²T))·exp(−(2a − X_T)²/(2σ²T))
ยจ We now take the derivative with respect to X_T
ยจ (∂/∂X_T) ∫_{w=−∞}^{w=X_T} dw·g(a, w) = g(a, X_T)
ยจ (∂/∂X_T) [2·(1/√(2πσ²T))·exp(−(2a − X_T)²/(2σ²T))] = 2·(1/√(2πσ²T))·exp(−(2a − X_T)²/(2σ²T))·2·(2a − X_T)·(1/(2σ²T))
ยจ We have now determined, for X_T < a:
ยจ g(a, X_T) = (2·(2a − X_T)/(σ²T))·(1/√(2πσ²T))·exp(−(2a − X_T)²/(2σ²T)) = (2·(2a − X_T)/(σ²T))·ℋ(2a − X_T, T)
ยจ ℋ(x, t) is the regular Gaussian
ยจ g(a, X_T) is the joint probability density at time T to have a maximum a and a terminal value X_T
120
121. Luc_Faucheux_2020
Some more about the Maximum VI
ยจ The joint probability density to reach within the interval [0, T] a maximum value a and to have a terminal value X_T for the Brownian motion is:
ยจ g(a, X_T) = (2·(2a − X_T)/(σ²T))·ℋ(2a − X_T, T)
ยจ f(a, T) is the probability density function for reaching the level a for the first time at time T
ยจ f(a, T)·dT is the probability, for a given level a, to find the first passage time τ(a, X) = min(t ≥ 0, X(t) = a) in the interval [T, T + dT]
ยจ f(a, T) = (a/T)·ℋ(a, T)
121
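The joint-law identity underlying this derivation, P(MAX(T) ≥ a, X(T) ≤ X_T) = P(X(T) ≥ 2a − X_T) for X_T ≤ a, can be checked by Monte Carlo. A sketch (numpy; the parameter values are my illustrative assumptions):

```python
import numpy as np
from math import erfc, sqrt

# Monte Carlo sketch of the joint law of the running maximum and the
# terminal value: for w <= a, P(MAX(T) >= a, X(T) <= w) = P(X(T) >= 2a - w).
# Discrete sampling of the maximum makes the agreement approximate.
rng = np.random.default_rng(2)
n_paths, n_steps = 20_000, 500
T, sigma, a, w = 1.0, 1.0, 1.0, 0.5

dX = rng.normal(0.0, sigma * sqrt(T / n_steps), size=(n_paths, n_steps))
X = np.cumsum(dX, axis=1)

lhs = np.mean((X.max(axis=1) >= a) & (X[:, -1] <= w))
rhs = 0.5 * erfc((2 * a - w) / (sigma * sqrt(2 * T)))  # P(X(T) >= 2a - w)

print(lhs, rhs)  # agree up to Monte Carlo and discretization error
```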
122. Luc_Faucheux_2020
Some more about the Maximum VII
ยจ ℋ(a, T) has units of [1]/[a], with ℋ(a, t) = (1/√(2πσ²t))·exp(−a²/(2σ²t))
ยจ f(a, T) has units of [1]/[T], with f(a, T) = (a/T)·ℋ(a, T)
ยจ σ²T has units of [a²]
ยจ g(a, X_T) has units of [1]/[a²], with g(a, X_T) = (2·(2a − X_T)/(σ²T))·ℋ(2a − X_T, T) and (X_T < a)
ยจ What does it mean to set X_T = a?
ยจ g(a, a) = (2a/(σ²T))·ℋ(a, T)
ยจ g(a, a) = (2a/(σ²T))·ℋ(a, T) = (2/σ²)·f(a, T)
122
123. Luc_Faucheux_2020
Some more about the Maximum VII-b
ยจ g(a, a) = (2a/(σ²T))·ℋ(a, T) = (2/σ²)·f(a, T)
ยจ Units are still correct
ยจ g(a, X_T) has to be integrated over a and then again over X_T to return a dimensionless number
ยจ f(a, T) has to be integrated over time to return a dimensionless number
ยจ The probability density to end up at time T at a terminal value a, with the time T being the first time that this value a is reached (since it is the maximum, it was never reached before), is equal to the probability density (in time) to have the first passage time for the level a at the terminal time T, scaled by the diffusion coefficient D = σ²/2:
ยจ g(a, a)·(σ²/2) = f(a, T), or g(a, a)·D = f(a, T)
123
124. Luc_Faucheux_2020
Some more about the Maximum VIII
ยจ Joint density and conditional density
ยจ g(a, X_T) is the joint density to find the maximum within [a, a + da] and the terminal value within [X_T, X_T + dX_T] (with, for now, the condition X_T < a)
ยจ Sometimes it is easier from a numerical point of view to simulate the Brownian motion (the process X(T)) and THEN simulate another process for the maximum a. This second step requires a slightly different probability: we need to know in this case the distribution of the maximum a over [0, T], conditioned on the value of X_T = X(T)
ยจ (Shreve p. 114)
ยจ The conditional density is the joint density divided by the marginal density of the conditioning random variable.
ยจ We are looking for the conditional density f(a|X_T)
ยจ Prob(a|X_T) = Prob(a, X_T)/Prob(X_T)
ยจ f(a|X_T) = g(a, X_T)/ℋ(X_T, T)
124
125. Luc_Faucheux_2020
Some more about the Maximum IX
ยจ f(a|X_T) = g(a, X_T)/ℋ(X_T, T), and it has units of [1]/[a]
ยจ ℋ(X_T, T) = (1/√(2πσ²T))·exp(−X_T²/(2σ²T))
ยจ g(a, X_T) = (2·(2a − X_T)/(σ²T))·ℋ(2a − X_T, T)
ยจ And so we get:
ยจ f(a|X_T) = (2·(2a − X_T)/(σ²T))·ℋ(2a − X_T, T)/ℋ(X_T, T) = (2·(2a − X_T)/(σ²T))·exp(+X_T²/(2σ²T))·exp(−(2a − X_T)²/(2σ²T))
ยจ We also have: −(2a − X_T)² + X_T² = −4a² + 4aX_T = −4a(a − X_T)
ยจ f(a|X_T) = (2·(2a − X_T)/(σ²T))·exp(−4a(a − X_T)/(2σ²T)) = (2·(2a − X_T)/(σ²T))·exp(−2a(a − X_T)/(σ²T))
ยจ That is kind of it, not sure if I can find any insightful thing to say about this
125
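One practical consequence of the conditional density (the "simulate the terminal value, THEN the maximum" idea mentioned two slides back): integrating the conditional density in a gives P(MAX(T) ≥ m | X(T) = w) = exp(−2m(m − w)/(σ²T)), which can be inverted to sample the maximum directly. A sketch (numpy; parameter values are illustrative assumptions):

```python
import numpy as np

# Given the terminal value w = X(T), the conditional tail of the maximum is
# P(MAX(T) >= m | X(T) = w) = exp(-2 m (m - w) / (sigma^2 T)), which
# inverts to m = (w + sqrt(w^2 - 2 sigma^2 T ln U)) / 2 with U uniform.
rng = np.random.default_rng(3)
n_paths = 200_000
T, sigma = 1.0, 1.0

w = rng.normal(0.0, sigma * np.sqrt(T), n_paths)   # terminal values
U = 1.0 - rng.uniform(size=n_paths)                # uniform on (0, 1]
M = 0.5 * (w + np.sqrt(w * w - 2 * sigma**2 * T * np.log(U)))

# Sanity check: unconditionally P(MAX(T) >= a) should equal 2 P(X(T) >= a)
a = 1.0
print(np.mean(M >= a), 2 * np.mean(w >= a))
```

Unlike a path simulation, this sampler has no time-discretization bias in the maximum.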
126. Luc_Faucheux_2020
Some more concepts about time โ first return
ยจ We have looked at the first passage; now let's gain some intuition on the first return (and also the last return). In the chart: first return to the origin as a blue dot, successive returns in grey, last return in red
126
127. Luc_Faucheux_2020
Some more concepts about time โ first return II
ยจ Following some of the conventions we had before, let us define pFR(0, T) as the probability density distribution for the first return time to the origin at time T
ยจ Note that we can always shift the distribution later around a "new" origin, one of the nice properties of a Kolmogorov-like process
ยจ pFR(0, T) is the probability density for the stochastic process X(t) to return for the first time back to the origin at time T
ยจ The cumulative function of pFR(0, T) is:
ยจ PFR(0, T) = ∫_{t=0}^{t=T} pFR(0, t)·dt
ยจ pFR(0, T) = ∂PFR(0, T)/∂T
ยจ PFR(0, T) is the cumulative probability that the stochastic process X(t) has returned to the origin (at least once) by the time T
127
128. Luc_Faucheux_2020
Some more concepts about time โ first return III
ยจ PFR(0, T) is the cumulative probability that the stochastic process X(t) has returned to the origin (at least once) by the time T
ยจ {1 − PFR(0, T)} is the cumulative survival probability, noted S(0, T), that the stochastic process X(t) has NOT returned to the origin by the time T
ยจ pFR(0, T) = ∂PFR(0, T)/∂T = −∂S(0, T)/∂T
ยจ In order to evaluate S(0, T), we need to enumerate all the paths that have never returned to the origin by time T (or after N steps, where the time interval is δt = T/N)
ยจ We need to calculate the probability that a path never returns to the origin.
ยจ This is a variant of the "ballot theorem": in a ballot where candidates A and B have a and b total votes respectively, what is the probability that when counting the votes the tally for A is always strictly higher than the tally for B (A always leads the vote count: there is no tie, and no time when B is leading)?
ยจ Désiré André and Joseph Louis François Bertrand (1887)
128
129. Luc_Faucheux_2020
Some more concepts about time โ first return IV
ยจ The remarkably simple result is that the probability of such a path is (a − b)/(a + b)
ยจ There are a couple of ways we can convince ourselves of this result.
ยจ Proof by reflection, we suppose a>b (A is the winner, so the path always stays above 0)
ยจ Any sequence that starts with a B must reach a tie at some point because A wins.
ยจ Also any sequence that starts with B has B leading, so has to be excluded
ยจ So we are left with sequences that start with A
ยจ Some of those will never reach a tie, and some will
ยจ For those who do reach a tie, we will use the reflection trick again, by reflecting the votes up
to the point of first tie (๐ ๐ก crossing the origin again). The reflected new sequence will
start with a B
129
131. Luc_Faucheux_2020
Some more concepts about time โ first return VI
ยจ A reflected sequence up to the first point of tie, reflected path in blue
131
132. Luc_Faucheux_2020
Some more concepts about time โ first return VII
ยจ Note: this is why it is sometimes so helpful, in gaining intuition, to run simulations. It was very hard to find a "nice" looking graph.
ยจ A lot of graphs either have the first return very close to the origin, or never return at all
ยจ This is somewhat counter-intuitive, because a lot of people would expect that in a fair game each player would be on the winning side for about half the time, and that the lead would pass not infrequently from one player to the other.
ยจ It is actually not the case, and we will show that first returns and last returns are much more likely to occur either very early or very late in the random walk.
ยจ It is highly probable to remain on one side of the origin for nearly the entire walk, leading to a long waiting time before the tie
132
133. Luc_Faucheux_2020
Some more concepts about time โ first return VIII
ยจ A couple of F9s (each recalculation draws a new random walk)
133
134. Luc_Faucheux_2020
Some more concepts about time โ first return IX
ยจ So, to recap, we are looking at the survival probability for paths that always stay above the origin
ยจ Every sequence that starts with B is excluded
ยจ Every sequence that starts with A either ties or does not
ยจ If it does, we build the reflected path in blue, reflecting up to the point of the first tie. This reflected sequence will start with B and will cross the origin (will tie)
ยจ Over N votes, we have a votes for A (or jumps up in space) and b votes for B (or jumps down in space)
ยจ Because of the reflection, the number of sequences that start with A and tie is equal to the number of sequences that start with B and also tie
ยจ Looking at the outcome with A being the winner (a > b), any sequence starting with B will automatically tie at some point
ยจ So we are counting twice the number of sequences starting with B
ยจ The probability that a sequence starts with B is: b/(a + b)
134
135. Luc_Faucheux_2020
Some more concepts about time โ first return X
ยจ So the survival probability that we are after is:
ยจ 1 − 2·b/(a + b) = (a − b)/(a + b)
ยจ Another way to look at it is by induction: suppose that the formula is true for (N − 1) steps, can we extend it to N steps?
ยจ So for (N − 1) steps, we had either (a − 1, b) or (a, b − 1) votes for A and B
ยจ The probability of no tie in the case (N − 1; a − 1, b) is (a − 1 − b)/(a − 1 + b)
ยจ The probability of no tie in the case (N − 1; a, b − 1) is (a − b + 1)/(a + b − 1)
ยจ Going to (N), we need to count the last vote: the probability that it is a vote for A is a/(a + b), and the probability that it is a vote for B is b/(a + b) (reverse the order and treat the last vote as the first one, reading the sequence backward)
135
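The (a − b)/(a + b) result can be verified by brute force for small vote counts. A sketch (standard library only; the function name is mine):

```python
from itertools import permutations

# Brute-force check of the ballot result: with a votes for A and b votes
# for B (a > b), the fraction of counting orders in which A strictly
# leads throughout is (a - b) / (a + b).
def always_leads_fraction(a, b):
    orders = set(permutations("A" * a + "B" * b))
    good = 0
    for order in orders:
        lead = 0
        for vote in order:
            lead += 1 if vote == "A" else -1
            if lead <= 0:
                break
        else:
            good += 1
    return good / len(orders)

print(always_leads_fraction(4, 2), (4 - 2) / (4 + 2))  # both 1/3
```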
136. Luc_Faucheux_2020
Some more concepts about time โ first return XI
ยจ The survival probability at the level (N) is then:
ยจ (a/(a + b))·((a − 1 − b)/(a − 1 + b)) + (b/(a + b))·((a − b + 1)/(a + b − 1)) = (a² − a − b² + b)/((a + b)·(a + b − 1)) = ((a + b − 1)·(a − b))/((a + b)·(a + b − 1)) = (a − b)/(a + b)
ยจ Another notation would be n₊ = a and n₋ = b, with n₊ + n₋ = N
ยจ The total number of possible paths is N!/(n₊!·n₋!) for a given set (N, n₊, n₋)
ยจ The paths that end with a positive end value are such that n₊ > n₋
ยจ The number of paths never crossing (never tie-ing) is the total number of paths multiplied by the probability that a path will not tie
ยจ We will then have to sum all those numbers over the possible end points (because we currently have the end point fixed by the given set (N, n₊, n₋)), with those end points being above the origin, i.e. ensuring that (n₊ > n₋)
ยจ Then multiply by the probability of one path, which is 2^(−N) in the binomial discrete model
136
137. Luc_Faucheux_2020
Some more concepts about time โ first return XII
ยจ We then have for the survival probability:
ยจ S(0, T) = S(0, N) = 2^(−N)·Σ_{n₋=0}^{n₋<n₊} [N!/(n₊!·n₋!)]·[(n₊ − n₋)/(n₊ + n₋)], and we have: n₊ + n₋ = N
ยจ So n₋ < n₊ is equivalent to n₋ < N − n₋, or n₋ < N/2
ยจ S(0, N) = 2^(−N)·Σ_{n₋=0}^{n₋<N/2} [N!/((N − n₋)!·n₋!)]·[(N − 2n₋)/N]
ยจ 2^N·S(0, N) = Σ_{n₋=0}^{n₋<N/2} [N!/((N − n₋)!·n₋!) − (2n₋/N)·N!/((N − n₋)!·n₋!)]
ยจ 2^N·S(0, N) = Σ_{n₋=0}^{n₋<N/2} [N!/((N − n₋)!·n₋!) − 2·(N − 1)!/((N − n₋)!·(n₋ − 1)!)]
137
138. Luc_Faucheux_2020
Some more concepts about time โ first return XIII
ยจ We also use the result from the Pascal triangle
ยจ N!/((N − n₋)!·n₋!) = (N − 1)!/((N − 1 − n₋)!·n₋!) + (N − 1)!/((N − n₋)!·(n₋ − 1)!), or: C_p^k = C_{p−1}^k + C_{p−1}^{k−1}
ยจ C_{p−1}^k + C_{p−1}^{k−1} = (p − 1)!/((p − 1 − k)!·k!) + (p − 1)!/((p − k)!·(k − 1)!)
ยจ C_{p−1}^k + C_{p−1}^{k−1} = ((p − 1)!·(p − k))/((p − k)!·k!) + ((p − 1)!·k)/((p − k)!·k!) = ((p − 1)!·(p − k) + (p − 1)!·k)/((p − k)!·k!)
ยจ C_{p−1}^k + C_{p−1}^{k−1} = ((p − 1)!·(p − k + k))/((p − k)!·k!) = ((p − 1)!·p)/((p − k)!·k!) = p!/((p − k)!·k!) = C_p^k
ยจ So we can rewrite:
ยจ 2^N·S(0, N) = Σ_{n₋=0}^{n₋<N/2} [(N − 1)!/((N − 1 − n₋)!·n₋!) + (N − 1)!/((N − n₋)!·(n₋ − 1)!) − 2·(N − 1)!/((N − n₋)!·(n₋ − 1)!)]
138
139. Luc_Faucheux_2020
Some more concepts about time โ first return XIV
ยจ 2^N·S(0, N) = Σ_{n₋=0}^{n₋<N/2} [(N − 1)!/((N − 1 − n₋)!·n₋!) + (N − 1)!/((N − n₋)!·(n₋ − 1)!) − 2·(N − 1)!/((N − n₋)!·(n₋ − 1)!)]
ยจ 2^N·S(0, N) = Σ_{n₋=0}^{n₋<N/2} [C_{N−1}^{n₋} + C_{N−1}^{n₋−1} − 2·C_{N−1}^{n₋−1}]
ยจ 2^N·S(0, N) = Σ_{n₋=0}^{n₋<N/2} [C_{N−1}^{n₋} − C_{N−1}^{n₋−1}]
ยจ In the above sum, the terms cancel each other out (it telescopes) up until the last one
ยจ 2^N·S(0, N) = C_{N−1}^{N/2−1}
ยจ We now make use of the Stirling approximation: N! ~ √(2πN)·(N/e)^N ~ √(2πN)·exp(N·ln(N) − N)
139
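The closed-form count can be checked by enumerating all walks for small N. A sketch (standard library only; the function name is mine, and N is taken even):

```python
from itertools import product
from math import comb

# Brute-force check of 2^N S(0, N) = C(N-1, N/2 - 1): count the +/-1 walks
# of N steps whose partial sums stay strictly positive (never return to,
# or cross, the origin).
def positive_survivors(N):
    count = 0
    for steps in product((1, -1), repeat=N):
        s = 0
        for step in steps:
            s += step
            if s <= 0:
                break
        else:
            count += 1
    return count

for N in (2, 4, 6, 8, 10):
    print(N, positive_survivors(N), comb(N - 1, N // 2 - 1))  # equal
```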
140. Luc_Faucheux_2020
Some more concepts about time โ first return XV
¨ $(2^N) \cdot P(0,N) = 1 + \sum_{n_- = 1}^{n_- < N/2} \left[ \frac{(N-1)!}{(N-1-n_-)! \, (n_-)!} - \frac{(N-1)!}{(N-n_-)! \, (n_- - 1)!} \right]$
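¨ Pulling out the $n_- = 0$ term (which equals $C_{N-1}^0 - C_{N-1}^{-1} = 1 - 0 = 1$) does not change the telescoped value: the sum still collapses to $C_{N-1}^{N/2-1}$ for even $N$, as a quick `math.comb` check confirms (a sketch; the helper name is mine):

```python
from math import comb

def survival_sum(N):
    """1 + sum over 1 <= n_minus < N/2 of
    [C(N-1, n_minus) - C(N-1, n_minus - 1)]."""
    total = 1
    for nm in range(1, (N + 1) // 2):  # n_minus < N/2
        total += comb(N - 1, nm) - comb(N - 1, nm - 1)
    return total

for N in (4, 6, 8, 20):
    assert survival_sum(N) == comb(N - 1, N // 2 - 1)
```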
140
141. Luc_Faucheux_2020
Another look at first passage time
¨ Let's look at the problem in a slightly different fashion
¨ The underlying Brownian motion $X(t)$ follows $h(x,t) = \frac{1}{\sqrt{2\pi\sigma^2 t}} \cdot \exp\left( \frac{-(x-at)^2}{2\sigma^2 t} \right)$
¨ The normalization is: $\int_{x=-\infty}^{x=+\infty} h(x,t) \, dx = 1$
¨ $h(x,t)$ follows the diffusion equation: $\frac{\partial}{\partial t} h(x,t) = \frac{\sigma^2}{2} \frac{\partial^2}{\partial x^2} h(x,t) - a \cdot \frac{\partial}{\partial x} h(x,t)$
¨ The corresponding SDE is: $dX = a \cdot dt + \sigma \cdot dW$
¨ In general, we will look at the SDE <-> PDE correspondence, but the simple mapping is:
¨ $dX = a \cdot dt + \sigma \cdot dW$
¨ $\frac{\partial}{\partial t} h(x,t) = -\frac{\partial}{\partial x} \left[ A \cdot h(x,t) - \frac{\partial}{\partial x} \big( B \cdot h(x,t) \big) \right]$ with $A = a$ and $B = \frac{1}{2} \cdot \sigma^2$
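¨ As a sanity check on the SDE, a plain Euler scheme for $dX = a \cdot dt + \sigma \cdot dW$ should reproduce the mean $at$ and variance $\sigma^2 t$ of $h(x,t)$ (a Monte Carlo sketch; the parameter values $a = 0.5$, $\sigma = 0.8$, $T = 1$ are arbitrary choices of mine):

```python
import random
from math import sqrt

random.seed(42)

a, sigma, T = 0.5, 0.8, 1.0
n_steps, n_paths = 100, 10000
dt = T / n_steps

finals = []
for _ in range(n_paths):
    x = 0.0
    for _ in range(n_steps):
        # Euler step: dX = a*dt + sigma*dW, with dW ~ N(0, dt)
        x += a * dt + sigma * random.gauss(0.0, sqrt(dt))
    finals.append(x)

mean = sum(finals) / n_paths
var = sum((v - mean) ** 2 for v in finals) / n_paths

assert abs(mean - a * T) < 0.03          # expect a*T = 0.5
assert abs(var - sigma**2 * T) < 0.05    # expect sigma^2*T = 0.64
```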
141
142. Luc_Faucheux_2020
Another look at first passage time - II
¨ Without any drift for now, this reads:
¨ The underlying Brownian motion $X(t)$ follows $h(x,t) = \frac{1}{\sqrt{2\pi\sigma^2 t}} \cdot \exp\left( \frac{-x^2}{2\sigma^2 t} \right)$
¨ The normalization is: $\int_{x=-\infty}^{x=+\infty} h(x,t) \, dx = 1$
¨ $h(x,t)$ follows the diffusion equation: $\frac{\partial}{\partial t} h(x,t) = \frac{\sigma^2}{2} \frac{\partial^2}{\partial x^2} h(x,t)$
¨ The corresponding SDE is: $dX = \sigma \cdot dW$
¨ In general, we will look at the SDE <-> PDE correspondence, but the simple mapping is:
¨ $dX = \sigma \cdot dW$
¨ $\frac{\partial}{\partial t} h(x,t) = \frac{\partial}{\partial x} \left[ \frac{\partial}{\partial x} \big( B \cdot h(x,t) \big) \right]$ with $B = \frac{1}{2} \cdot \sigma^2$
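¨ One can also check the PDE side directly: march $\frac{\partial}{\partial t} h = \frac{\sigma^2}{2} \frac{\partial^2}{\partial x^2} h$ forward with an explicit finite-difference scheme, starting from the analytic Gaussian at a time $t_0$, and compare with the analytic Gaussian at a later time $t_1$ (a sketch assuming a simple FTCS scheme; the grid parameters are arbitrary):

```python
from math import exp, pi, sqrt

sigma = 1.0
dx, dt = 0.05, 0.001   # diffusive stability: (sigma^2/2)*dt/dx^2 = 0.2 <= 0.5
xs = [i * dx for i in range(-200, 201)]   # grid on [-10, 10]

def gaussian(x, t):
    """Analytic solution h(x, t) of the drift-free diffusion equation."""
    return exp(-x * x / (2 * sigma**2 * t)) / sqrt(2 * pi * sigma**2 * t)

t0, t1 = 0.5, 1.0
h = [gaussian(x, t0) for x in xs]   # start from the analytic profile at t0

for _ in range(int(round((t1 - t0) / dt))):
    # discrete Laplacian, endpoints pinned (density is ~0 there anyway)
    lap = [0.0] + [(h[i-1] - 2*h[i] + h[i+1]) / dx**2
                   for i in range(1, len(h) - 1)] + [0.0]
    h = [h[i] + 0.5 * sigma**2 * dt * lap[i] for i in range(len(h))]

err = max(abs(h[i] - gaussian(xs[i], t1)) for i in range(len(xs)))
mass = sum(h) * dx

assert err < 5e-3              # numerical profile tracks the Gaussian
assert abs(mass - 1.0) < 1e-3  # probability stays (approximately) normalized
```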
142
143. Luc_Faucheux_2020
Another look at first passage time - III
¨ The diffusion equation is a linear equation
¨ Any linear combination of solutions will itself be a solution
¨ We can identify a spanning set of solutions:
¨ The underlying Brownian motion $X(t)$ follows $h(x, x_0, t) = \frac{1}{\sqrt{2\pi\sigma^2 t}} \cdot \exp\left( \frac{-(x-x_0)^2}{2\sigma^2 t} \right)$
¨ The normalization is: $\int_{x=-\infty}^{x=+\infty} h(x, x_0, t) \, dx = 1$
¨ And the initial starting point: $h(x, x_0, t=0) = \delta(x - x_0)$
¨ Let's consider what is sometimes referred to as the "cliff" problem: a random walker
diffuses from its initial position $x_0$ up until it meets the "cliff" at position $x = L$, and then
"falls off the cliff" and disappears.
¨ So we are looking for a solution $\hat{h}(x, x_0, t)$ that obeys the diffusion equation in the
interval $]-\infty, L]$
¨ For all time we need to verify: $\hat{h}(x = L, x_0, t) = 0$
143
144. Luc_Faucheux_2020
Another look at first passage time - IV
¨ Note that the conservation of probability $\int_{x=-\infty}^{x=+\infty} \hat{h}(x, x_0, t) \, dx = 1$ obviously will not apply
¨ If anything, we will define the survival probability, which is the probability that the random
walker has not yet fallen off the cliff
¨ $SP(t) = \int_{x=-\infty}^{x=L} \hat{h}(x, x_0, t) \, dx$
¨ This probability is the probability that the random walker did not reach the level $x = L$ up
until the time $t$
¨ And so the probability that the random walker would have met the level $x = L$ within the
time interval $[0, t]$ is $1 - SP(t) = P(t_L \leq t)$ in the previous notation
¨ So now, either we know $P(t_L \leq t)$ and we can calculate the survival probability $SP(t)$
¨ Conversely, if we can find an easy way to calculate $SP(t)$, we know $P(t_L \leq t)$
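¨ The relation between the first passage probability and the survival probability can be illustrated with a small Monte Carlo: simulate driftless paths, flag those that touch the level $L$ before time $t$, and compare against the closed form $2\,\Phi\!\left(-\frac{L-x_0}{\sigma\sqrt{t}}\right)$ that the image method yields (a sketch; the parameter values are arbitrary, and the discrete time grid slightly under-counts crossings, so the tolerance is loose):

```python
import random
from math import erf, sqrt

random.seed(7)

sigma, L, x0, T = 1.0, 1.0, 0.0, 1.0
n_paths, n_steps = 5000, 500
dt = T / n_steps

hits = 0
for _ in range(n_paths):
    x = x0
    for _ in range(n_steps):
        x += sigma * random.gauss(0.0, sqrt(dt))
        if x >= L:          # first passage to the level L
            hits += 1
            break

p_mc = hits / n_paths       # Monte Carlo estimate of P(t_L <= t) = 1 - SP(t)

Phi = lambda z: 0.5 * (1 + erf(z / sqrt(2)))   # standard normal CDF
d = (L - x0) / (sigma * sqrt(T))
p_exact = 2 * Phi(-d)       # reflection-principle closed form, ~0.3173 here

assert abs(p_mc - p_exact) < 0.05
```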
144
145. Luc_Faucheux_2020
Another look at first passage time - V
¨ Note that the two processes are NOT the same.
¨ In the case of the cliff problem, the density is not conserved and the random walker is
"taken out" as soon as it hits the level $x = L$
¨ In the case of the regular diffusion we looked at, the process is not impacted when crossing
the level $x = L$
¨ However, for the purpose of calculating the first passage probability $P(t_L \leq t)$, we can
use either
¨ So let's see if we can find an easy solution to the cliff problem
¨ Remember that the diffusion equation is linear, so if we could find a linear combination of
Gaussians that matches $\hat{h}(x = L, x_0, t) = 0$, we would have at least one solution to work
with (maybe not unique, but at least something we could use)
145
146. Luc_Faucheux_2020
Another look at first passage time – VI – Image method
¨ Let's look at:
¨ $\hat{h}(x, x_0, t) = \frac{1}{\sqrt{2\pi\sigma^2 t}} \cdot \exp\left( \frac{-(x-x_0)^2}{2\sigma^2 t} \right) - \frac{1}{\sqrt{2\pi\sigma^2 t}} \cdot \exp\left( \frac{-(x-(2L-x_0))^2}{2\sigma^2 t} \right)$
¨ $\hat{h}(x, x_0, t) = h(x, x_0, t) - h(x, (2L - x_0), t)$
¨ Note that we see the beautiful symmetry principle at work again here
¨ $\hat{h}(x, x_0, t)$ follows the diffusion equation
¨ $\hat{h}(x = L, x_0, t) = 0$ for all time $t$
¨ We are in business
¨ $SP(t) = \int_{x=-\infty}^{x=L} \hat{h}(x, x_0, t) \, dx = \int_{x=-\infty}^{x=L} h(x, x_0, t) \, dx - \int_{x=-\infty}^{x=L} h(x, (2L - x_0), t) \, dx$
¨ We also have: $\int_{x=-\infty}^{x=L} h(x, x_0, t) \, dx = \int_{x=-\infty}^{x=L-x_0} h(x, x_0 = 0, t) \, dx$
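¨ Both integrals have closed forms in terms of the standard normal CDF $\Phi$, giving $SP(t) = \Phi\!\big(\frac{L-x_0}{\sigma\sqrt{t}}\big) - \Phi\!\big(\frac{x_0-L}{\sigma\sqrt{t}}\big)$. A direct numerical integration of the image solution $\hat{h}$ confirms this (a sketch; trapezoid rule on a domain truncated at $L - 10\sigma\sqrt{t}$, parameter values arbitrary):

```python
from math import erf, exp, pi, sqrt

sigma, t, L, x0 = 1.0, 1.0, 1.0, 0.0

def h(x, y):
    """Gaussian kernel h(x, y, t) centered at y."""
    return exp(-(x - y)**2 / (2 * sigma**2 * t)) / sqrt(2 * pi * sigma**2 * t)

def h_hat(x):
    """Image-method solution: kernel minus its mirror image about x = L."""
    return h(x, x0) - h(x, 2 * L - x0)

# trapezoid integration of h_hat over (-inf, L], truncated far in the tail
n, lo = 20000, L - 10 * sigma * sqrt(t)
dx = (L - lo) / n
sp_numeric = sum(0.5 * (h_hat(lo + i * dx) + h_hat(lo + (i + 1) * dx)) * dx
                 for i in range(n))

Phi = lambda z: 0.5 * (1 + erf(z / sqrt(2)))   # standard normal CDF
d = (L - x0) / (sigma * sqrt(t))
sp_exact = Phi(d) - Phi(-d)

assert abs(sp_numeric - sp_exact) < 1e-6
```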
146