Lectures on Lévy Processes and Stochastic Calculus (Koc University)
Lecture 5: The Ornstein-Uhlenbeck Process

David Applebaum
School of Mathematics and Statistics, University of Sheffield, UK
9th December 2011
Historical Origins

This process was first introduced by Ornstein and Uhlenbeck in the 1930s as a more accurate model of the physical phenomenon of Brownian motion than the Einstein-Smoluchowski-Wiener process. They argued that

Brownian motion = viscous drag of fluid + random molecular bombardment.
Let $v(t)$ be the velocity at time $t$ of a particle of mass $m$ executing Brownian motion. By Newton's second law of motion, the total force acting on the particle at time $t$ is $F(t) = m\frac{dv(t)}{dt}$. We then have
$$m\frac{dv(t)}{dt} = \underbrace{-mkv(t)}_{\text{viscous drag}} + \underbrace{m\sigma\frac{dB(t)}{dt}}_{\text{molecular bombardment}},$$
where $k, \sigma > 0$.

Of course, $\frac{dB(t)}{dt}$ doesn't exist, but this is a "physicist's argument". If we cancel the $m$s and multiply both sides by $dt$ then we get a legitimate SDE, the Langevin equation
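The physicist's argument also suggests the simplest numerical scheme: replace $dt$ by a small step and $dB$ by a Gaussian increment of variance $dt$. A minimal Euler-Maruyama sketch of the resulting SDE (the parameter values are illustrative, not from the lecture):

```python
import numpy as np

def langevin_em(v0, k, sigma, T, n_steps, rng):
    """Euler-Maruyama discretisation of dv(t) = -k v(t) dt + sigma dB(t)."""
    dt = T / n_steps
    v = np.empty(n_steps + 1)
    v[0] = v0
    for i in range(n_steps):
        dB = rng.normal(0.0, np.sqrt(dt))  # Gaussian increment of variance dt
        v[i + 1] = v[i] - k * v[i] * dt + sigma * dB
    return v

rng = np.random.default_rng(0)
path = langevin_em(v0=1.0, k=2.0, sigma=0.5, T=5.0, n_steps=5000, rng=rng)
```

The drift term pulls the velocity back towards zero while the noise term keeps perturbing it, which is exactly the drag-plus-bombardment picture above.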
$$dv(t) = -kv(t)\,dt + \sigma\,dB(t) \qquad (0.1)$$
Using the integrating factor $e^{kt}$ we can then easily check that the unique solution to this equation is the Ornstein-Uhlenbeck process $(v(t), t \geq 0)$, where
$$v(t) = e^{-kt}v(0) + \sigma \int_0^t e^{-k(t-s)}\,dB(s).$$
We are interested in Lévy processes, so replace $B$ by a $d$-dimensional Lévy process $X$ and $k$ by a $d \times d$ matrix $K$. Our Langevin equation is
$$dY(t) = -KY(t)\,dt + dX(t) \qquad (0.2)$$
and its unique solution is
$$Y(t) = e^{-tK}Y_0 + \int_0^t e^{-(t-s)K}\,dX(s), \qquad (0.3)$$
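Because the driving noise in (0.1) is Gaussian, the solution formula gives the exact transition law $v(t) \mid v(0) \sim N\!\left(e^{-kt}v(0),\ \sigma^2(1-e^{-2kt})/(2k)\right)$, so the process can be sampled on a grid with no discretisation error. A sketch (illustrative parameters) with a Monte Carlo check of the first two moments:

```python
import numpy as np

def ou_exact(v0, k, sigma, T, n_steps, rng):
    """Sample the OU process on a grid via its exact Gaussian transition,
    read off from v(t) = e^{-kt} v(0) + sigma int_0^t e^{-k(t-s)} dB(s)."""
    dt = T / n_steps
    decay = np.exp(-k * dt)
    noise_sd = sigma * np.sqrt((1 - np.exp(-2 * k * dt)) / (2 * k))
    v = np.empty(n_steps + 1)
    v[0] = v0
    for i in range(n_steps):
        v[i + 1] = decay * v[i] + noise_sd * rng.normal()
    return v

# Moment check: E v(T) = e^{-kT} v(0), Var v(T) = sigma^2 (1 - e^{-2kT}) / (2k)
rng = np.random.default_rng(1)
k, sigma, T = 1.5, 0.8, 2.0
end = np.array([ou_exact(1.0, k, sigma, T, 50, rng)[-1] for _ in range(20000)])
```

Unlike Euler-Maruyama, this recursion is exact for any step size, a design choice available only because the transition density is known in closed form.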
where $Y_0 := Y(0)$ is a fixed $\mathcal{F}_0$-measurable random variable. We still call the process $Y$ an Ornstein-Uhlenbeck or OU process. Furthermore:

$Y$ has càdlàg paths.
$Y$ is a Markov process.

The process $X$ is sometimes called the background driving Lévy process or BDLP.
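To see the càdlàg paths concretely, one can take the BDLP $X$ to be a compound Poisson process; the solution formula (0.3) then reduces to a finite sum over jumps. A one-dimensional sketch ($K = k$ scalar; the rate and jump law are illustrative choices, not from the lecture):

```python
import numpy as np

def ou_compound_poisson(y0, k, lam, jump_sd, T, rng):
    """OU process driven by a compound-Poisson BDLP:
    Y(t) = e^{-kt} Y0 + sum over jump times T_i <= t of e^{-k(t - T_i)} J_i,
    with jump times of a rate-lam Poisson process and N(0, jump_sd^2) jumps."""
    n_jumps = rng.poisson(lam * T)
    times = np.sort(rng.uniform(0.0, T, size=n_jumps))  # order statistics trick
    jumps = rng.normal(0.0, jump_sd, size=n_jumps)

    def Y(t):
        mask = times <= t
        return np.exp(-k * t) * y0 + np.sum(np.exp(-k * (t - times[mask])) * jumps[mask])

    return Y, times

rng = np.random.default_rng(2)
Y, jump_times = ou_compound_poisson(y0=1.0, k=2.0, lam=3.0, jump_sd=0.5, T=4.0, rng=rng)
```

Between jumps the path decays deterministically like $e^{-kt}$; at each jump time it moves by $J_i$, right-continuous with left limits, exactly the càdlàg behaviour inherited from $X$.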
We get a Markov semigroup on $B_b(\mathbb{R}^d)$ called a Mehler semigroup:
$$T_t f(x) = \mathbb{E}(f(Y(t)) \mid Y_0 = x) = \int_{\mathbb{R}^d} f(e^{-tK}x + y)\,\rho_t(dy) \qquad (0.4)$$
where $\rho_t$ is the law of the stochastic integral $\int_0^t e^{-sK}\,dX(s) \stackrel{d}{=} \int_0^t e^{-(t-s)K}\,dX(s)$.
This generalises the classical Mehler formula ($X(t) = B(t)$, $K = kI$):
$$T_t f(x) = \frac{1}{(2\pi)^{d/2}} \int_{\mathbb{R}^d} f\left(e^{-kt}x + \sqrt{\frac{1 - e^{-2kt}}{2k}}\,y\right) e^{-\frac{|y|^2}{2}}\,dy.$$
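The classical Mehler formula is easy to sanity-check numerically. A sketch with $d = 1$, $\sigma = 1$ and illustrative values of $k$, $t$, $x$ and $f$ (none from the lecture): sampling the right-hand side directly should agree with a brute-force Euler-Maruyama estimate of $\mathbb{E}(f(v(t)) \mid v(0) = x)$.

```python
import numpy as np

# Mehler formula (d = 1, K = k): T_t f(x) = E f(e^{-kt} x + c(t) Z), Z ~ N(0,1),
# with c(t) = sqrt((1 - e^{-2kt}) / (2k)). Compare with direct simulation.
k, t, x = 1.0, 0.7, 0.5
f = np.cos
rng = np.random.default_rng(3)

c = np.sqrt((1 - np.exp(-2 * k * t)) / (2 * k))
mehler = f(np.exp(-k * t) * x + c * rng.normal(size=200_000)).mean()

# Euler-Maruyama estimate of E(f(v(t)) | v(0) = x) for dv = -k v dt + dB
dt = t / 400
v = np.full(200_000, x)
for _ in range(400):
    v += -k * v * dt + np.sqrt(dt) * rng.normal(size=v.size)
mc = f(v).mean()
```

The two estimates differ only by Monte Carlo and time-discretisation error.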
In fact $(T_t, t \geq 0)$ satisfies the Feller property: $T_t(C_0(\mathbb{R}^d)) \subseteq C_0(\mathbb{R}^d)$.
We also have the skew-convolution semigroup property:
$$\rho_{s+t} = \rho_s^K * \rho_t,$$
where $\rho_s^K(B) = \rho_s(e^{tK}B)$. Another terminology for this is measure-valued cocycle.
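In the Gaussian case the skew-convolution identity can be checked on variances. With $X = B$, $K = k$ and $\sigma = 1$ we have $\rho_t = N(0, (1-e^{-2kt})/(2k))$, and $\rho_s^K$ is the law of $e^{-kt}\xi$ with $\xi \sim \rho_s$, so the variance of $\rho_{s+t}$ must equal the sum of the variances of $\rho_s^K$ and $\rho_t$. A minimal arithmetic sketch (parameter values illustrative):

```python
import numpy as np

k, s, t = 1.7, 0.4, 1.1

def var_rho(r):
    """Variance of rho_r = N(0, (1 - e^{-2kr}) / (2k)) in the Gaussian case."""
    return (1 - np.exp(-2 * k * r)) / (2 * k)

lhs = var_rho(s + t)                                  # variance of rho_{s+t}
rhs = np.exp(-2 * k * t) * var_rho(s) + var_rho(t)    # rho_s^K * rho_t (independent sum)
```

The two sides agree exactly, for any $k, s, t > 0$, which is the content of the cocycle identity at the level of second moments.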
We get nicer probabilistic properties of our solution if we make the following

Assumption: $K$ is strictly positive definite.

OU processes solve simple linear SDEs. They are important in applications such as volatility modelling, Lévy-driven CARMA processes, and branching processes with immigration.
In infinite dimensions they solve the simplest linear SPDE with additive noise. To develop this theme, let $H$ and $K$ be separable Hilbert spaces and $(S(t), t \geq 0)$ be a $C_0$-semigroup on $H$ with infinitesimal generator $J$. Let $X$ be a Lévy process on $K$ and $C \in L(K, H)$.
We have the SPDE
$$dY(t) = JY(t)\,dt + C\,dX(t),$$
whose unique solution is
$$Y(t) = S(t)Y_0 + \underbrace{\int_0^t S(t-s)C\,dX(s)}_{\text{stochastic convolution}},$$
and the generalised Mehler semigroup is
$$T_t f(x) = \int_H f(S(t)x + y)\,\rho_t(dy).$$
From now on we will work in finite dimensions and assume the strict positive-definiteness of $K$.
Additive Processes and Wiener-Lévy Integrals

The study of OU processes focusses attention on Wiener-Lévy integrals $I_f(t) := \int_0^t f(s)\,dX(s)$. For simplicity we assume that the (matrix-valued) integrand $f$ is continuous.
Recall that $Z = (Z(t), t \geq 0)$ is an additive process if $Z(0) = 0$ (a.s.), $Z$ has independent increments and is stochastically continuous. It follows that each $Z(t)$ is infinitely divisible.

Theorem. $(I_f(t), t \geq 0)$ is an additive process.

Proof (sketch). Independent increments follows from the fact that for $r \leq s \leq t$:
$I_f(s) - I_f(r) = \int_r^s f(u)\,dX(u)$ is $\sigma\{X(b) - X(a);\ r \leq a < b \leq s\}$-measurable,
$I_f(t) - I_f(s) = \int_s^t f(u)\,dX(u)$ is $\sigma\{X(d) - X(c);\ s \leq c < d \leq t\}$-measurable. $\square$
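The independent-increment property can be observed numerically. A sketch with $X = B$ (Brownian) and the illustrative scalar integrand $f(s) = e^{-s}$: increments of $I_f$ over disjoint intervals should be uncorrelated across many simulated paths.

```python
import numpy as np

# Simulate I_f(t) = int_0^t f(s) dB(s) as a Riemann sum of f(s_i) dB_i and
# check that increments over disjoint intervals are (nearly) uncorrelated.
rng = np.random.default_rng(4)
n_paths, n_steps, T = 50_000, 200, 2.0
dt = T / n_steps
s = np.linspace(0.0, T, n_steps, endpoint=False)
f = np.exp(-s)

dB = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
increments = f * dB                                      # f(s_i) * (B(s_{i+1}) - B(s_i))
first_half = increments[:, :n_steps // 2].sum(axis=1)    # I_f(1) - I_f(0)
second_half = increments[:, n_steps // 2:].sum(axis=1)   # I_f(2) - I_f(1)
corr = np.corrcoef(first_half, second_half)[0, 1]
```

The sample correlation is zero up to Monte Carlo noise of order $1/\sqrt{n_{\text{paths}}}$, consistent with the theorem.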
Theorem. If $X$ has Lévy symbol $\eta$ then for each $t \geq 0$, $u \in \mathbb{R}^d$,
$$\mathbb{E}\left(e^{i(u, I_f(t))}\right) = \exp\left(\int_0^t \eta(f(s)^T u)\,ds\right).$$

Proof (sketch). Define $M_f(t) = \exp\left(i\left(u, \int_0^t f(s)\,dX(s)\right)\right)$ and use Itô's formula to show that
$$M_f(t) = 1 + i\left(u, \int_0^t M_f(s-)f(s)\,dB(s)\right) + \int_0^t \int_{\mathbb{R}^d - \{0\}} M_f(s-)\left(e^{i(u, f(s)x)} - 1\right)\tilde{N}(ds, dx) + \int_0^t M_f(s-)\eta(f(s)^T u)\,ds.$$
Now take expectations of both sides to get
$$\mathbb{E}(M_f(t)) = 1 + \int_0^t \mathbb{E}(M_f(s))\eta(f(s)^T u)\,ds,$$
and the result follows. $\square$
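The theorem is easy to test in the Brownian case, where the Lévy symbol is $\eta(u) = -u^2/2$ and, for the illustrative choice $f(s) = e^{-s}$, the exponent integrates in closed form: $\int_0^t \eta(e^{-s}u)\,ds = -\frac{u^2}{2}\cdot\frac{1 - e^{-2t}}{2}$. A Monte Carlo sketch:

```python
import numpy as np

rng = np.random.default_rng(5)
u, T, n_steps, n_paths = 1.3, 1.5, 200, 40_000
dt = T / n_steps
s = np.arange(n_steps) * dt

# I_f(T) simulated as a Riemann sum of e^{-s_i} dB_i
I = (np.exp(-s) * rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))).sum(axis=1)
empirical = np.exp(1j * u * I).mean()                       # E exp(i u I_f(T))
predicted = np.exp(-(u ** 2) / 2 * (1 - np.exp(-2 * T)) / 2)  # exp(int_0^T eta(e^{-s} u) ds)
```

The empirical characteristic function matches the predicted one up to Monte Carlo and discretisation error.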
If $X$ has characteristics $(b, A, \nu)$, it follows that $I_f(t)$ has characteristics $(b_t^f, A_t^f, \nu_t^f)$ where
$$b_t^f = \int_0^t f(s)b\,ds + \int_0^t \int_{\mathbb{R}^d - \{0\}} f(s)x\left(1_B(x) - 1_B(f(s)x)\right)\nu(dx)\,ds,$$
$$A_t^f = \int_0^t f(s)^T A f(s)\,ds,$$
$$\nu_t^f(B) = \int_0^t \nu(f(s)^{-1}(B))\,ds.$$
It follows that every OU process $Y$ conditioned on $Y_0 = y$ is an additive process. It will have characteristics as above, with $f(s) = e^{-sK}$ and $b_t^f$ translated by $e^{-tK}y$.
Invariant Measures, Stationary Processes, Ergodicity: General Theory

We want to investigate invariant measures and stationary solutions for OU processes. First, a little general theory.
Let $(T_t, t \geq 0)$ be a general Markov semigroup with transition probabilities $p_t(x, B) = T_t 1_B(x)$, so that $T_t f(x) = \int_{\mathbb{R}^d} f(y)\,p_t(x, dy)$ for $f \in B_b(\mathbb{R}^d)$. We say that a probability measure $\mu$ is an invariant measure for the semigroup if for all $t \geq 0$ and $f \in B_b(\mathbb{R}^d)$,
$$\int_{\mathbb{R}^d} T_t f(x)\,\mu(dx) = \int_{\mathbb{R}^d} f(x)\,\mu(dx). \qquad (0.5)$$
Equivalently, for all Borel sets $B$,
$$\int_{\mathbb{R}^d} p_t(x, B)\,\mu(dx) = \mu(B). \qquad (0.6)$$
To see that (0.5) $\Rightarrow$ (0.6), rewrite (0.5) as
$$\int_{\mathbb{R}^d} \int_{\mathbb{R}^d} f(y)\,p_t(x, dy)\,\mu(dx) = \int_{\mathbb{R}^d} f(x)\,\mu(dx),$$
and put $f = 1_B$. For the converse, approximate $f$ by simple functions and take limits.
67. e.g. A Lévy process doesn't have an invariant probability measure, but
Lebesgue measure is invariant in the sense that for $f \in L^1(\mathbb{R}^d)$,
$$\int_{\mathbb{R}^d} T_t f(x)\, dx = \int_{\mathbb{R}^d} \int_{\mathbb{R}^d} f(x + y)\, p_t(dy)\, dx = \int_{\mathbb{R}^d} f(x)\, dx.$$
A process $Z = (Z(t), t \geq 0)$ is (strictly) stationary if for all
$n \in \mathbb{N}$, $t_1, \ldots, t_n, h \in \mathbb{R}^+$,
$$(Z(t_1), \ldots, Z(t_n)) \stackrel{d}{=} (Z(t_1 + h), \ldots, Z(t_n + h)).$$
Theorem
A Markov process $Z$ wherein $\mu$ is the law of $Z(0)$ is stationary if and
only if $\mu$ is an invariant measure.
Dave Applebaum (Sheffield UK) Lecture 5 December 2011 15 / 44
71. Proof. If the process is stationary then $\mu$ is invariant, since
$$\mu(B) = P(Z(0) \in B) = P(Z(t) \in B) = \int_{\mathbb{R}^d} p_t(x, B)\, \mu(dx).$$
For the converse, it is sufficient to prove that
$E(f_1(Z(t_1 + h)) \cdots f_n(Z(t_n + h)))$ is independent of $h$ for all
$f_1, \ldots, f_n \in B_b(\mathbb{R}^d)$. The proof is by induction. For the case $n = 1$ it is enough to
show
$$\begin{aligned} E(f(Z(t))) &= E(E(f(Z(t))\,|\,\mathcal{F}_0)) \\ &= E(T_t f(Z(0))) \\ &= \int_{\mathbb{R}^d} T_t f(x)\, \mu(dx) \\ &= \int_{\mathbb{R}^d} f(x)\, \mu(dx) = E(f(Z(0))). \end{aligned}$$
Dave Applebaum (Sheffield UK) Lecture 5 December 2011 16 / 44
78. In general, use
$$\begin{aligned} E(f_1(Z(t_1 + h)) \cdots f_n(Z(t_n + h))) &= E(f_1(Z(t_1 + h)) \cdots E(f_n(Z(t_n + h))\,|\,\mathcal{F}_{t_{n-1} + h})) \\ &= E(f_1(Z(t_1 + h)) \cdots T_{t_n - t_{n-1}} f_n(Z(t_{n-1} + h))). \end{aligned} \qquad \Box$$
Dave Applebaum (Sheffield UK) Lecture 5 December 2011 17 / 44
80. Let $\mu$ be an invariant probability measure for a Markov semigroup
$(T_t, t \geq 0)$. $\mu$ is ergodic if
$$T_t 1_B = 1_B \ (\mu \text{ a.s.}) \Rightarrow \mu(B) = 0 \text{ or } \mu(B) = 1.$$
If $\mu$ is ergodic then "time averages" = "space averages" for the
corresponding stationary Markov process, i.e.
$$\lim_{T \to \infty} \frac{1}{T} \int_0^T f(Z(s))\, ds = \int_{\mathbb{R}^d} f(x)\, \mu(dx) \quad \text{a.s.}$$
Fact: The invariant measures form a convex set, and the ergodic
measures are the extreme points of this set.
It follows that if an invariant measure is unique then it is ergodic.
Dave Applebaum (Sheffield UK) Lecture 5 December 2011 18 / 44
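The "time averages = space averages" statement can be seen numerically for the Brownian-case OU process, whose invariant law is $\mu = N(0, \frac{1}{2k})$. The sketch below (parameters are illustrative assumptions) simulates a long stationary path using the exact one-step Gaussian transition and compares the time average of $f(Z(s)) = Z(s)^2$ with $\int x^2\, \mu(dx) = \frac{1}{2k}$:

```python
import math
import random

random.seed(1)
k, dt, n = 0.5, 0.02, 1_000_000           # horizon T = n*dt = 20000 (illustrative)
sd_mu = math.sqrt(1.0 / (2.0 * k))        # invariant law µ = N(0, 1/(2k))

# Exact one-step OU transition: Z(t+dt) = e^{-k dt} Z(t) + N(0, (1 - e^{-2k dt})/(2k))
a = math.exp(-k * dt)
sd_step = math.sqrt((1 - a * a) / (2 * k))

z = random.gauss(0.0, sd_mu)              # start from the invariant law: stationary path
time_avg = 0.0
for _ in range(n):
    time_avg += z * z                     # f(x) = x^2
    z = a * z + random.gauss(0.0, sd_step)
time_avg /= n                             # discrete version of (1/T) ∫_0^T f(Z(s)) ds

space_avg = 1.0 / (2.0 * k)               # ∫ x^2 µ(dx) = 1/(2k)
assert abs(time_avg - space_avg) < 0.08   # ergodic averaging over a long horizon
```

Because $\mu$ is the unique invariant measure here, it is ergodic, which is what licenses the comparison.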
86. The Self-Decomposable Connection
Recall that a random variable $Z$ is self-decomposable if for each
$0 < a < 1$ there exists a random variable $W_a$ that is independent of $Z$
such that
$$Z \stackrel{d}{=} aZ + W_a,$$
or equivalently $\rho_Z = \rho_Z^a * \rho_{W_a}$, where $\rho_Z^a(B) = \rho_Z(a^{-1} B)$.
Now suppose that $Y$ is a stationary Ornstein-Uhlenbeck process on $\mathbb{R}$.
Then $Y_0$ is self-decomposable with $a = e^{-kt}$ and $W_{a(t)} = \int_0^t e^{-ks}\, dX(s)$,
since
$$Y(t) = e^{-kt} Y_0 + \int_0^t e^{-k(t-s)}\, dX(s)$$
and, by the stationary increments of the process $X$,
Dave Applebaum (Sheffield UK) Lecture 5 December 2011 19 / 44
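In the Brownian case this self-decomposition can be checked in closed form: $Y_0 \sim N(0, \frac{1}{2k})$, $a = e^{-kt}$, and $W_{a(t)} = \int_0^t e^{-ks}\, dB(s) \sim N(0, \frac{1 - e^{-2kt}}{2k})$ by the Itô isometry. For centred independent Gaussians, $Y_0 \stackrel{d}{=} aY_0 + W_{a(t)}$ reduces to an identity of variances, which the sketch below verifies for illustrative $k$ and $t$:

```python
import math

k, t = 0.8, 0.6                               # illustrative parameters
var_Y0 = 1.0 / (2.0 * k)                      # stationary Gaussian OU: Y0 ~ N(0, 1/(2k))
a = math.exp(-k * t)                          # self-decomposability scale a = e^{-kt}
var_W = (1 - math.exp(-2 * k * t)) / (2 * k)  # Var ∫_0^t e^{-ks} dB(s) = ∫_0^t e^{-2ks} ds

# Y0 =d a*Y0 + W_{a(t)}, W independent of Y0: for centred Gaussians this is
# exactly the variance identity a^2 Var(Y0) + Var(W) = Var(Y0).
assert abs(a * a * var_Y0 + var_W - var_Y0) < 1e-12
```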
92.
$$Y(t) \stackrel{d}{=} Y_0 \quad \text{and} \quad \int_0^t e^{-k(t-s)}\, dX(s) \stackrel{d}{=} \int_0^t e^{-ks}\, dX(s)$$
$$\Rightarrow \quad Y_0 \stackrel{d}{=} e^{-kt} Y_0 + W_{a(t)}.$$
Dave Applebaum (Sheffield UK) Lecture 5 December 2011 20 / 44
94. Now suppose that $\mu$ is self-decomposable; more precisely, that
$$\mu = \mu^{e^{-kt}} * \rho_t,$$
where $\rho_t$ is the law of $W_{a(t)}$. Then
$$\begin{aligned} \int_{\mathbb{R}} T_t f(x)\, \mu(dx) &= \int_{\mathbb{R}} \int_{\mathbb{R}} f(e^{-kt} x + y)\, \rho_t(dy)\, \mu(dx) \\ &= \int_{\mathbb{R}} \int_{\mathbb{R}} f(x + y)\, \rho_t(dy)\, \mu^{e^{-kt}}(dx) \\ &= \int_{\mathbb{R}} f(x)\, (\mu^{e^{-kt}} * \rho_t)(dx) \\ &= \int_{\mathbb{R}} f(x)\, \mu(dx). \end{aligned}$$
So $\mu$ is an invariant measure.
Dave Applebaum (Sheffield UK) Lecture 5 December 2011 21 / 44
100. So we have shown:
Theorem
The following are equivalent for the O-U process $Y$.
$Y$ is stationary.
The law of $Y(0)$ is an invariant measure.
The law of $Y(0)$ is self-decomposable (with $W_{a(t)} = \int_0^t e^{-ks}\, dX(s)$).
Dave Applebaum (Sheffield UK) Lecture 5 December 2011 22 / 44
104. We seek some condition on the Lévy process $X$ which ensures that $Y$
is stationary.
Fact: If $Y_\infty := \int_0^\infty e^{-ks}\, dX(s)$ exists in distribution, then it is
self-decomposable.
To see this, observe that (using the stationary increments of $X$)
$$\begin{aligned} \int_0^\infty e^{-ks}\, dX(s) &= \int_t^\infty e^{-ks}\, dX(s) + \int_0^t e^{-ks}\, dX(s) \\ &\stackrel{d}{=} \int_0^\infty e^{-k(t+s)}\, dX(s) + \int_0^t e^{-ks}\, dX(s) \\ &= e^{-kt} \int_0^\infty e^{-ks}\, dX(s) + \int_0^t e^{-ks}\, dX(s). \end{aligned}$$
Dave Applebaum (Sheffield UK) Lecture 5 December 2011 23 / 44
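For the Brownian case $X = B$, $Y_\infty = \int_0^\infty e^{-ks}\, dB(s)$ exists in $L^2$ with variance $\int_0^\infty e^{-2ks}\, ds = \frac{1}{2k}$. The Monte Carlo sketch below (truncation level, step size and sample count are illustrative choices) approximates the integral by a discrete sum of Gaussian increments and checks the variance:

```python
import math
import random

random.seed(2)
k, dt, T, n_paths = 1.0, 0.02, 10.0, 5_000   # e^{-kT} ≈ 5e-5: truncation at T is negligible
steps = int(T / dt)
sqdt = math.sqrt(dt)

# Monte Carlo for Y_inf = ∫_0^∞ e^{-ks} dB(s), truncated at T; evaluating the
# integrand at the midpoint of each step reduces the discretisation bias.
samples = []
for _ in range(n_paths):
    y = 0.0
    for i in range(steps):
        y += math.exp(-k * (i + 0.5) * dt) * random.gauss(0.0, sqdt)
    samples.append(y)

var = sum(y * y for y in samples) / n_paths
assert abs(var - 1.0 / (2.0 * k)) < 0.05      # Var Y_inf = ∫_0^∞ e^{-2ks} ds = 1/(2k)
```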
110. When does $\lim_{t \to \infty} \int_0^t e^{-ks}\, dX(s)$ exist in distribution? Use the Lévy-Itô
decomposition
$$X(t) = bt + M(t) + \int_{|x| \geq 1} x\, N(t, dx).$$
It is not difficult to see that $\lim_{t \to \infty} \int_0^t e^{-ks}\, dM(s)$ exists in the $L^2$-sense.
Fact: $\lim_{t \to \infty} \int_0^t \int_{|x| \geq 1} e^{-ks} x\, N(ds, dx)$ exists in distribution if and only if
$$\int_{|x| \geq 1} \log(1 + |x|)\, \nu(dx) < \infty.$$
Dave Applebaum (Sheffield UK) Lecture 5 December 2011 24 / 44
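The log-moment condition is mild: it holds, for instance, for stable-type Lévy measures with polynomial tails. The sketch below takes a hypothetical one-sided measure $\nu(dx) = x^{-1-\alpha}\, dx$ on $x \geq 1$ (an illustrative example, not from the lecture) and checks numerically that truncations of $\int_{x \geq 1} \log(1+x)\, \nu(dx)$ stabilise, i.e. the log-moment is finite:

```python
import math

# Hypothetical Lévy measure ν(dx) = x^{-1-α} dx on x ≥ 1: the log-moment
# ∫_{x≥1} log(1+x) ν(dx) is finite since log grows slower than any power.
alpha = 0.5

def trunc_integral(upper, n=200_000):
    # left-endpoint Riemann sum of log(1+x) x^{-1-α} on a log-spaced grid over [1, upper]
    total, x = 0.0, 1.0
    ratio = upper ** (1.0 / n)
    for _ in range(n):
        x_next = x * ratio
        total += math.log(1 + x) * x ** (-1 - alpha) * (x_next - x)
        x = x_next
    return total

i1, i2 = trunc_integral(1e8), trunc_integral(1e12)
assert abs(i2 - i1) < 0.01 * i2   # tail beyond 1e8 is negligible: finite log-moment
```

By the Fact above, the big-jump part of the OU integral then converges in distribution for such a measure.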
116. To prove this you need:
1. If $(\xi_n, n \in \mathbb{N})$ are i.i.d. then $\sum_{n=1}^{\infty} c^n \xi_n$ converges a.s. ($0 < c < 1$)
if and only if $E(\log(1 + |\xi_1|)) < \infty$.
2. $$\int_0^n \int_{|x| \geq 1} e^{-ks} x\, N(ds, dx) \stackrel{d}{=} \sum_{j=0}^{n-1} e^{-kj} M_j,$$
where $M_j := \int_j^{j+1} \int_{|x| \geq 1} e^{-k(s-j)} x\, N(ds, dx)$. Note that $(M_j, j \in \mathbb{N})$
are i.i.d.
In this case, $Y_\infty$ has characteristics $(b_\infty, A_\infty, \nu_\infty)$.
e.g. Brownian motion case: $X(t) = B(t)$, $\mu \sim N\left(0, \frac{1}{2k}\right)$.
Dave Applebaum (Sheffield UK) Lecture 5 December 2011 25 / 44
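Fact (1) can be illustrated with a heavy-tailed example: a Cauchy random variable has no mean, yet $E(\log(1 + |\xi_1|)) < \infty$, so the random series $\sum_n c^n \xi_n$ still converges almost surely. The sketch below (with an illustrative $c$ and sample size) generates Cauchy variates by inverse transform and checks that the partial sums settle:

```python
import math
import random

random.seed(3)
c = 0.5                                         # any 0 < c < 1 works
# Cauchy ξ via inverse transform: tan(π(U - 1/2)) with U uniform on (0,1).
# No mean, but E log(1 + |ξ|) < ∞, so Σ c^n ξ_n converges a.s. by fact (1).
xi = [math.tan(math.pi * (random.random() - 0.5)) for _ in range(200)]
partials, s = [], 0.0
for j, x in enumerate(xi):
    s += (c ** j) * x
    partials.append(s)

# The tail past index 100 is bounded by c^100 · Σ|ξ_j|-type terms, which is
# astronomically small even for heavy-tailed ξ: the partial sums have settled.
assert abs(partials[-1] - partials[100]) < 1e-10
```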