Refresher on Stochastic Calculus Foundations
Ashwin Rao
ICME, Stanford University
November 13, 2018
Continuous and Non-Differentiable Sample Paths
Sample paths of Brownian motion $z_t$ are continuous
Sample paths of $z_t$ are almost always non-differentiable, meaning
$$\lim_{h \to 0} \frac{z_{t+h} - z_t}{h}$$
is almost always infinite
The intuition is that $\frac{dz_t}{dt}$ has standard deviation of $\frac{1}{\sqrt{dt}}$, which goes to $\infty$ as $dt$ goes to 0
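A minimal numerical sketch of this intuition (my own illustration; the step sizes and sample counts are arbitrary choices): the increment $z_{t+h} - z_t$ is $N(0, h)$, so the sample standard deviation of the difference quotient grows like $1/\sqrt{h}$ as $h$ shrinks.

```python
import numpy as np

# Sketch: the difference quotient (z_{t+h} - z_t)/h has standard deviation
# sqrt(h)/h = 1/sqrt(h), which blows up as h -> 0.
rng = np.random.default_rng(0)
for h in [1e-1, 1e-2, 1e-3, 1e-4]:
    increments = rng.normal(0.0, np.sqrt(h), size=100_000)  # z_{t+h} - z_t ~ N(0, h)
    print(f"h={h:.0e}  std of (z_(t+h)-z_t)/h ≈ {np.std(increments / h):8.1f}"
          f"  (theory 1/sqrt(h) = {1/np.sqrt(h):8.1f})")
```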
Infinite Total Variation of Sample Paths
Sample paths of Brownian motion are of infinite total variation, i.e. (with $S = mh$ and $T = nh$ held fixed as $h \to 0$)
$$\lim_{h \to 0} \sum_{i=m}^{n-1} |z_{(i+1)h} - z_{ih}|$$
is almost always infinite
More succinctly, we write
$$\int_S^T |dz_t| = \infty \text{ (almost always)}$$
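A hedged illustration of the divergence (my own sketch; the interval $[S, T] = [0, 1]$ and grid sizes are arbitrary): the sum of absolute increments along a simulated path grows like $\sqrt{2/(\pi h)}$ in expectation as the grid step $h$ shrinks.

```python
import numpy as np

# Sketch: total variation of a Brownian path on [0, 1], estimated on grids of
# step h, diverges as h -> 0 (its expected value is sqrt(2/(pi*h))).
rng = np.random.default_rng(0)
for n in [10**2, 10**3, 10**4, 10**5]:
    h = 1.0 / n
    dz = rng.normal(0.0, np.sqrt(h), size=n)   # Brownian increments on [0, 1]
    print(f"h={h:.0e}  sum |dz| ≈ {np.sum(np.abs(dz)):10.2f}"
          f"  (expected value sqrt(2/(pi*h)) = {np.sqrt(2 / (np.pi * h)):10.2f})")
```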
Finite Quadratic Variation of Sample Paths
Sample paths of Brownian motion are of finite quadratic variation, i.e. (again with $S = mh$ and $T = nh$ held fixed as $h \to 0$)
$$\lim_{h \to 0} \sum_{i=m}^{n-1} (z_{(i+1)h} - z_{ih})^2 = h(n - m) = T - S$$
More succinctly, we write
$$\int_S^T (dz_t)^2 = T - S$$
This means its expected value is $T - S$ and its variance is 0
This leads to Ito's Lemma (Taylor series with $(dz_t)^2$ replaced with $dt$)
This also leads to Ito Isometry (next slide)
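The same kind of sketch for quadratic variation (again my own illustrative choices, here $[S, T] = [0, 2]$): the sum of squared increments concentrates on $T - S$ as the grid is refined.

```python
import numpy as np

# Sketch: the quadratic variation sum over [S, T] = [0, 2] concentrates on
# T - S = 2 as the grid step h shrinks.
rng = np.random.default_rng(0)
S, T = 0.0, 2.0
for n in [10**2, 10**3, 10**4, 10**5]:
    h = (T - S) / n
    dz = rng.normal(0.0, np.sqrt(h), size=n)          # Brownian increments
    print(f"h={h:.0e}  sum (dz)^2 ≈ {np.sum(dz**2):.4f}   (T - S = {T - S})")
```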
Ito Isometry
Let $X_t : [0, T] \times \Omega \to \mathbb{R}$ be a stochastic process adapted to the filtration $\mathcal{F}_t$ of Brownian motion $z_t$.
Then, we know that the Ito integral $\int_0^t X_s \cdot dz_s$ (as a process in $t$, for $t \in [0, T]$) is a martingale
Ito Isometry tells us about the variance of $\int_0^T X_t \cdot dz_t$:
$$E\left[\left(\int_0^T X_t \cdot dz_t\right)^2\right] = E\left[\int_0^T X_t^2 \cdot dt\right]$$
Extending this to two Ito integrals, we have:
$$E\left[\left(\int_0^T X_t \cdot dz_t\right)\left(\int_0^T Y_t \cdot dz_t\right)\right] = E\left[\int_0^T X_t \cdot Y_t \cdot dt\right]$$
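A Monte Carlo sanity check of the isometry (my own sketch; the integrand $X_t = z_t$ and the Euler discretization are illustrative assumptions, not from the slide): for $X_t = z_t$ on $[0, T]$, both sides equal $T^2/2$.

```python
import numpy as np

# Sketch: Monte Carlo check of the Ito isometry for the (assumed) integrand
# X_t = z_t on [0, T]. Both sides should be close to T^2 / 2.
rng = np.random.default_rng(0)
T, n_steps, n_paths = 1.0, 500, 20_000
h = T / n_steps
dz = rng.normal(0.0, np.sqrt(h), size=(n_paths, n_steps))   # Brownian increments
z = np.cumsum(dz, axis=1) - dz                              # left endpoints z_{t_i}
ito_integral = np.sum(z * dz, axis=1)                       # ~ int_0^T z_t dz_t
lhs = np.mean(ito_integral**2)                              # E[(int X dz)^2]
rhs = np.mean(np.sum(z**2 * h, axis=1))                     # E[int X_t^2 dt]
print(f"E[(int z dz)^2] ≈ {lhs:.4f},  E[int z^2 dt] ≈ {rhs:.4f},  T^2/2 = {T**2 / 2}")
```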
Fokker-Planck equation for PDF of a Stochastic Process
We are given the following stochastic process:
$$dX_t = \mu(X_t, t) \cdot dt + \sigma(X_t, t) \cdot dz_t$$
The Fokker-Planck equation of this process is the PDE:
$$\frac{\partial p(x, t)}{\partial t} = -\frac{\partial \{\mu(x, t) \cdot p(x, t)\}}{\partial x} + \frac{\partial^2 \{\frac{\sigma^2(x, t)}{2} \cdot p(x, t)\}}{\partial x^2}$$
where $p(x, t)$ is the probability density function of $X_t$
The Fokker-Planck equation is used for problems where the initial distribution is known (it is the Kolmogorov forward equation)
However, if the problem is to work backward from a known terminal condition to earlier times, the Feynman-Kac formula can be used (a consequence of the Kolmogorov backward equation)
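An illustrative check (my own choice of a constant-coefficient process, not from the slide): when $\mu$ and $\sigma$ are constants and $X_0 = x_0$ is deterministic, the Fokker-Planck equation is solved by the Gaussian density $p(x, t) = N(x_0 + \mu t, \sigma^2 t)$, which an empirical density of simulated paths should reproduce.

```python
import numpy as np

# Sketch (my illustrative choice): with constant mu and sigma, the Fokker-Planck
# equation has the Gaussian solution p(x, t) = N(x0 + mu*t, sigma^2*t) when the
# initial distribution is a point mass at x0. Compare against simulated paths.
rng = np.random.default_rng(0)
mu, sigma, x0, t, n_paths, n_steps = 0.3, 0.5, 1.0, 2.0, 200_000, 400
h = t / n_steps
x = np.full(n_paths, x0)
for _ in range(n_steps):                      # Euler-Maruyama: dX = mu dt + sigma dz
    x += mu * h + sigma * rng.normal(0.0, np.sqrt(h), size=n_paths)

def gaussian_pdf(y, mean, var):
    return np.exp(-(y - mean)**2 / (2 * var)) / np.sqrt(2 * np.pi * var)

for y in [0.5, 1.6, 2.5]:
    empirical = np.mean(np.abs(x - y) < 0.05) / 0.10          # density estimate near y
    print(f"p({y}, {t}) ≈ {empirical:.3f}  "
          f"(Gaussian solution {gaussian_pdf(y, x0 + mu * t, sigma**2 * t):.3f})")
```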
Feynman-Kac Formula (PDE-SDE linkage)
Consider the partial differential equation for $u : \mathbb{R} \times [0, T] \to \mathbb{R}$:
$$\frac{\partial u(x, t)}{\partial t} + \mu(x, t) \frac{\partial u(x, t)}{\partial x} + \frac{\sigma^2(x, t)}{2} \frac{\partial^2 u(x, t)}{\partial x^2} - V(x, t) \cdot u(x, t) + f(x, t) = 0$$
subject to the terminal condition $u(x, T) = \psi(x)$, where $\mu, \sigma, V, f, \psi$ are known functions.
Then the Feynman-Kac formula tells us that the solution $u(x, t)$ can be written as the following conditional expectation:
$$u(x, t) = E\left[\int_t^T e^{-\int_t^u V(X_s, s) \, ds} \cdot f(X_u, u) \cdot du + e^{-\int_t^T V(X_u, u) \, du} \cdot \psi(X_T) \,\Big|\, X_t = x\right]$$
such that $X_u$ is the following Ito process with initial condition $X_t = x$:
$$dX_u = \mu(X_u, u) \cdot du + \sigma(X_u, u) \cdot dz_u$$
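A sketch of the Feynman-Kac representation under simplifying assumptions of my own ($V \equiv 0$, $f \equiv 0$, $\mu \equiv 0$, constant $\sigma$, $\psi(x) = x^2$): the PDE reduces to $\frac{\partial u}{\partial t} + \frac{\sigma^2}{2} \frac{\partial^2 u}{\partial x^2} = 0$ with solution $u(x, t) = x^2 + \sigma^2 (T - t)$, and the Monte Carlo estimate of the conditional expectation should match it.

```python
import numpy as np

# Sketch of the Feynman-Kac representation, under my simplifying assumptions
# V = 0, f = 0, mu = 0 and constant sigma, with terminal condition psi(x) = x^2.
# The PDE reduces to du/dt + (sigma^2/2) d2u/dx2 = 0, whose solution is
# u(x, t) = x^2 + sigma^2 * (T - t); the conditional expectation should match it.
rng = np.random.default_rng(0)
sigma, T, t, x, n_paths = 0.4, 1.0, 0.25, 1.5, 500_000
x_T = x + sigma * rng.normal(0.0, np.sqrt(T - t), size=n_paths)  # X_T | X_t = x
mc_estimate = np.mean(x_T**2)                                    # E[psi(X_T) | X_t = x]
closed_form = x**2 + sigma**2 * (T - t)
print(f"Monte Carlo u({x}, {t}) ≈ {mc_estimate:.4f},  closed form = {closed_form:.4f}")
```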
Stopping Time
Stopping time $\tau$ is a “random time” (random variable) interpreted as the time at which a given stochastic process exhibits certain behavior
A stopping time is often defined by a “stopping policy” to decide whether to continue/stop a process based on the present position and past events
Formally, $\tau$ is a random variable such that the event $\{\tau \leq t\}$ is in the $\sigma$-algebra $\mathcal{F}_t$, for all $t$
Deciding whether $\tau \leq t$ only depends on information up to time $t$
Hitting time of a Borel set $A$ for a process $X_t$ is the first time $X_t$ takes a value within the set $A$
Hitting time is an example of a stopping time. Formally,
$$T_{X,A} = \min \{t \in \mathbb{R} \mid X_t \in A\}$$
e.g.: the hitting time of a process to exceed a certain fixed level
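A sketch of a hitting time in simulation (my own example; the level $a$, horizon $T$ and grid are arbitrary): for standard Brownian motion started at 0, the reflection principle gives $P(\tau_a \leq T) = 2 P(z_T > a)$, which the discretized hitting-time estimate should roughly reproduce.

```python
import numpy as np
from math import erf, sqrt

# Sketch: probability that a standard Brownian path started at 0 hits level a
# by time T, estimated on a discrete grid. By the reflection principle,
# P(tau_a <= T) = 2 * P(z_T > a); the grid estimate is slightly low because
# crossings between grid points are missed.
rng = np.random.default_rng(0)
a, T, n_steps, n_paths = 1.0, 1.0, 1_000, 100_000
h = T / n_steps
z = np.zeros(n_paths)
hit = np.zeros(n_paths, dtype=bool)
for _ in range(n_steps):                      # step each path forward, record hits
    z += rng.normal(0.0, np.sqrt(h), size=n_paths)
    hit |= (z >= a)
phi = 0.5 * (1 + erf(a / sqrt(T) / sqrt(2)))  # standard normal CDF at a / sqrt(T)
print(f"P(tau_a <= T): simulated ≈ {hit.mean():.4f},"
      f"  reflection principle = {2 * (1 - phi):.4f}")
```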
Optimal Stopping Problem
Optimal Stopping problem for stochastic process $X_t$:
$$V(x) = \max_{\tau} E[G(X_\tau) \mid X_0 = x]$$
where the maximization is over stopping times $\tau$ of $X_t$, $V(\cdot)$ is called the value function, and $G$ is called the reward (or gain) function.
Note that sometimes we can have several stopping times that maximize $E[G(X_\tau)]$, and we say that the optimal stopping time is the smallest stopping time achieving the maximum value.
Example of Optimal Stopping: Optimal Exercise of American Options
$X_t$ is the stochastic process for the underlying security’s price
$x$ is the underlying security’s current price
$\tau$ ranges over exercise times corresponding to various stopping (exercise) policies
$V(\cdot)$ is the American option price as a function of the underlying’s current price
$G(\cdot)$ is the option payoff function
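A sketch of the American-option instance of optimal stopping, using a standard Cox-Ross-Rubinstein binomial tree (my own choice of method and parameters, not something specified on the slide): backward induction takes, at each node, the larger of the exercise payoff $G$ and the discounted continuation value, which is a discrete version of $V(x) = \max_\tau E[G(X_\tau) \mid X_0 = x]$.

```python
import numpy as np

# Sketch: American put priced on a Cox-Ross-Rubinstein binomial tree (my own
# illustrative parameters). Backward induction takes, at each node, the max of
# the immediate payoff G(x) = max(K - x, 0) and the discounted continuation
# value -- a discrete version of V(x) = max over stopping times of E[G(X_tau)].
S0, K, r, sigma, T, n = 100.0, 100.0, 0.05, 0.2, 1.0, 500
dt = T / n
u = np.exp(sigma * np.sqrt(dt))          # up factor
d = 1.0 / u                              # down factor
p = (np.exp(r * dt) - d) / (u - d)       # risk-neutral up probability
disc = np.exp(-r * dt)

prices = S0 * u ** np.arange(n, -1, -1) * d ** np.arange(0, n + 1)  # terminal prices
values = np.maximum(K - prices, 0.0)                                # payoff at T
for step in range(n - 1, -1, -1):
    prices = S0 * u ** np.arange(step, -1, -1) * d ** np.arange(0, step + 1)
    continuation = disc * (p * values[:-1] + (1 - p) * values[1:])
    values = np.maximum(K - prices, continuation)   # exercise now vs. continue
print(f"American put value V(S0={S0}) ≈ {values[0]:.4f}")
```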
Markov Property
Markov property says that the $\mathcal{F}_t$-conditional PDF of $X_{t+h}$ depends only on the present state $X_t$
Strong Markov property says that for every stopping time $\tau$, the $\mathcal{F}_\tau$-conditional PDF of $X_{\tau+h}$ depends only on $X_\tau$
Infinitesimal Generator and Dynkin’s Formula
Infinitesimal Generator of a time-homogeneous $\mathbb{R}^n$-valued diffusion $X_t$ is the operator $\mathcal{A}$ (operating on functions $f : \mathbb{R}^n \to \mathbb{R}$) defined as
$$\mathcal{A} f(x) = \lim_{t \downarrow 0} \frac{E[f(X_t) \mid X_0 = x] - f(x)}{t}$$
For an $\mathbb{R}^n$-valued diffusion $X_t$ given by $dX_t = \mu(X_t) \cdot dt + \sigma(X_t) \cdot dz_t$,
$$\mathcal{A} f(x) = \sum_i \mu_i(x) \frac{\partial f}{\partial x_i}(x) + \frac{1}{2} \sum_{i,j} (\sigma(x) \sigma(x)^T)_{i,j} \frac{\partial^2 f}{\partial x_i \partial x_j}(x)$$
If $\tau$ is a stopping time with $E[\tau \mid X_0 = x] < \infty$, Dynkin’s formula says:
$$E[f(X_\tau) \mid X_0 = x] = f(x) + E\left[\int_0^\tau \mathcal{A} f(X_s) \cdot ds \,\Big|\, X_0 = x\right]$$
This is a stochastic generalization of the 2nd fundamental theorem of calculus
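A numerical sketch of the generator and its limit definition (my own constant-coefficient example): for $dX_t = \mu \cdot dt + \sigma \cdot dz_t$ and $f(x) = x^2$, the formula above gives $\mathcal{A} f(x) = 2 \mu x + \sigma^2$, and the finite-$t$ quotient $(E[f(X_t) \mid X_0 = x] - f(x))/t$ should approach it as $t \to 0$.

```python
import numpy as np

# Sketch: numerical check of the generator for a 1D diffusion with constant
# coefficients (my illustrative choice) dX = mu dt + sigma dz and f(x) = x^2.
# With the 1/2 factor, A f(x) = 2*mu*x + sigma^2; the finite-t quotient
# (E[f(X_t) | X_0 = x] - f(x)) / t should approach it as t -> 0.
rng = np.random.default_rng(0)
mu, sigma, x = 0.3, 0.5, 2.0
f = lambda y: y**2
for t in [1e-1, 1e-2, 1e-3]:
    x_t = x + mu * t + sigma * rng.normal(0.0, np.sqrt(t), size=2_000_000)
    quotient = (np.mean(f(x_t)) - f(x)) / t
    print(f"t={t:.0e}  (E[f(X_t)] - f(x))/t ≈ {quotient:.3f}"
          f"   (A f(x) = {2 * mu * x + sigma**2:.3f})")
```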
