Lectures on Lévy Processes and Stochastic Calculus (Koç University)
Lecture 5: The Ornstein-Uhlenbeck Process

David Applebaum
School of Mathematics and Statistics, University of Sheffield, UK

9th December 2011

Dave Applebaum (Sheffield UK)    Lecture 5    December 2011    1 / 44
Historical Origins

This process was first introduced by Ornstein and Uhlenbeck in the 1930s as a more accurate model of the physical phenomenon of Brownian motion than the Einstein-Smoluchowski-Wiener process. They argued that

    Brownian motion = viscous drag of fluid + random molecular bombardment.
Let v(t) be the velocity at time t of a particle of mass m executing Brownian motion. By Newton's second law of motion, the total force acting on the particle at time t is F(t) = m dv(t)/dt. We then have

    m dv(t)/dt = −mkv(t) + mσ dB(t)/dt,

where −mkv(t) is the viscous drag, mσ dB(t)/dt is the molecular bombardment, and k, σ > 0.
Of course, dB(t)/dt doesn't exist, but this is a "physicist's argument". If we cancel the m's and multiply both sides by dt then we get a legitimate SDE: the Langevin equation
    dv(t) = −kv(t)dt + σdB(t)                                    (0.1)

Using the integrating factor e^{kt} we can then easily check that the unique solution to this equation is the Ornstein-Uhlenbeck process (v(t), t ≥ 0) where

    v(t) = e^{−kt} v(0) + σ ∫_0^t e^{−k(t−s)} dB(s).

We are interested in Lévy processes, so replace B by a d-dimensional Lévy process X and k by a d × d matrix K. Our Langevin equation is

    dY(t) = −KY(t)dt + dX(t)                                     (0.2)

and its unique solution is

    Y(t) = e^{−tK} Y_0 + ∫_0^t e^{−(t−s)K} dX(s),                (0.3)
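The Langevin dynamics (0.2) can be explored numerically. Below is a minimal sketch, not from the lecture: an Euler-Maruyama discretisation driven by illustrative Lévy increments (a Brownian part plus compound-Poisson jumps); the function name `simulate_ou` and all parameter values are assumptions made for the example.

```python
import numpy as np

def simulate_ou(Y0, K, dX, dt):
    """Euler-Maruyama scheme for dY(t) = -K Y(t) dt + dX(t) (cf. (0.2))."""
    Y = np.empty((len(dX) + 1, len(Y0)))
    Y[0] = Y0
    for n, dx in enumerate(dX):
        Y[n + 1] = Y[n] - K @ Y[n] * dt + dx
    return Y

rng = np.random.default_rng(0)
d, n, dt = 2, 10_000, 1e-3
# Illustrative driver: Brownian increments plus compound-Poisson jumps
dX = rng.normal(0.0, np.sqrt(dt), (n, d))
dX += rng.poisson(1.0 * dt, (n, d)) * rng.normal(0.0, 0.5, (n, d))
K = np.array([[1.0, 0.0], [0.0, 2.0]])   # strictly positive definite
path = simulate_ou(np.array([5.0, -3.0]), K, dX, dt)
print(path.shape)   # (10001, 2)
```

With K strictly positive definite the simulated path visibly mean-reverts from the initial value towards a stationary regime, as the exact solution (0.3) predicts.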
where Y_0 := Y(0) is a fixed F_0-measurable random variable. We still call the process Y an Ornstein-Uhlenbeck or OU process.
Furthermore
     Y has càdlàg paths.
     Y is a Markov process.
The process X is sometimes called the background driving Lévy process or BDLP.
We get a Markov semigroup on B_b(R^d) called a Mehler semigroup:

    T_t f(x) = E(f(Y(t)) | Y_0 = x)
             = ∫_{R^d} f(e^{−tK} x + y) ρ_t(dy)                  (0.4)

where ρ_t is the law of the stochastic integral ∫_0^t e^{−sK} dX(s), which has the same distribution as ∫_0^t e^{−(t−s)K} dX(s).
This generalises the classical Mehler formula (X(t) = B(t), K = kI):

    T_t f(x) = (2π)^{−d/2} ∫_{R^d} f( e^{−kt} x + √((1 − e^{−2kt})/(2k)) y ) e^{−|y|²/2} dy.
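The classical Mehler formula can be sanity-checked by Monte Carlo in d = 1: simulate the Langevin SDE with X = B and compare E(f(v(t)) | v(0) = x) against the Gaussian integral on the right-hand side. All parameter values below are illustrative choices, not from the lecture.

```python
import numpy as np

rng = np.random.default_rng(1)
k, t, x = 1.5, 0.7, 2.0
f = np.cos   # a bounded test function

# Monte Carlo side: Euler-Maruyama paths of dv = -k v dt + dB, v(0) = x
n_paths, n_steps = 100_000, 700
dt = t / n_steps
v = np.full(n_paths, x)
for _ in range(n_steps):
    v += -k * v * dt + rng.normal(0.0, np.sqrt(dt), n_paths)
mc = f(v).mean()

# Mehler side: T_t f(x) = E f(e^{-kt} x + sqrt((1 - e^{-2kt})/(2k)) Z), Z ~ N(0,1),
# evaluated here by sampling Z (a quadrature rule would do equally well)
z = rng.normal(size=n_paths)
mehler = f(np.exp(-k * t) * x
           + np.sqrt((1 - np.exp(-2 * k * t)) / (2 * k)) * z).mean()
print(abs(mc - mehler) < 0.02)
```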
In fact (T_t, t ≥ 0) satisfies the Feller property: T_t(C_0(R^d)) ⊆ C_0(R^d).
We also have the skew-convolution semigroup property:

    ρ_{s+t} = ρ_s^K ∗ ρ_t,

where ρ_s^K(B) = ρ_s(e^{tK} B). Another terminology for this is measure-valued cocycle.
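In the one-dimensional Brownian case every ρ_t is a centred Gaussian, so the skew-convolution property reduces to an identity between variances, which can be checked directly. A small sketch with illustrative values of k, s, t:

```python
import numpy as np

# d = 1, X = B: rho_r is the law of int_0^r e^{-ks} dB(s), a centred
# Gaussian with variance (1 - e^{-2kr}) / (2k).
k, s, t = 0.8, 0.4, 1.1
var = lambda r: (1 - np.exp(-2 * k * r)) / (2 * k)

lhs = var(s + t)                            # variance of rho_{s+t}
# rho_s^K is rho_s pushed forward by e^{-tk}, so its variance picks up e^{-2kt}
rhs = np.exp(-2 * k * t) * var(s) + var(t)  # variance of rho_s^K * rho_t
print(np.isclose(lhs, rhs))   # prints True
```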
We get nicer probabilistic properties of our solution if we make the following assumption.

Assumption. K is strictly positive definite.

OU processes solve simple linear SDEs. They are important in applications such as volatility modelling, Lévy-driven CARMA processes, and branching processes with immigration.
In infinite dimensions they solve the simplest linear SPDE with additive noise. To develop this theme, let H and K be separable Hilbert spaces and (S(t), t ≥ 0) be a C_0-semigroup on H with infinitesimal generator J. Let X be a Lévy process on K and C ∈ L(K, H).
We have the SPDE

    dY(t) = JY(t)dt + CdX(t),

whose unique solution is

    Y(t) = S(t)Y_0 + ∫_0^t S(t − s)C dX(s),

where the integral term is a stochastic convolution, and the generalised Mehler semigroup is

    T_t f(x) = ∫_H f(S(t)x + y) ρ_t(dy).

From now on we will work in finite dimensions and assume the strict positive-definiteness of K.
Additive Processes and Wiener-Lévy Integrals

The study of O-U processes focusses attention on Wiener-Lévy integrals I_f(t) := ∫_0^t f(s)dX(s). For simplicity we assume that f : R^d → R^d is continuous.
Recall that Z = (Z(t), t ≥ 0) is an additive process if Z(0) = 0 (a.s.), Z has independent increments and is stochastically continuous. It follows that each Z(t) is infinitely divisible.

Theorem
(I_f(t), t ≥ 0) is an additive process.

Proof. (sketch) Independent increments follow from the fact that for r ≤ s ≤ t,

    I_f(s) − I_f(r) = ∫_r^s f(u)dX(u) is σ{X(b) − X(a); r ≤ a < b ≤ s}-measurable,
    I_f(t) − I_f(s) = ∫_s^t f(u)dX(u) is σ{X(d) − X(c); s ≤ c < d ≤ t}-measurable,

and these two σ-algebras are independent since X has independent increments. □
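A quick numerical sanity check (not a proof) of the independent-increments claim: for an illustrative scalar integrand f(s) = cos(s) and driver X = B, increments of I_f over disjoint intervals should in particular be uncorrelated. All choices below are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(3)
n_paths, n_steps, t = 20_000, 300, 3.0
dt = t / n_steps
grid = np.arange(n_steps) * dt
dB = rng.normal(0.0, np.sqrt(dt), (n_paths, n_steps))
dI = np.cos(grid) * dB              # increments of I_f with f(s) = cos(s)

# increments of I_f over the disjoint intervals [0, 1] and [1, 2]
inc1 = dI[:, :100].sum(axis=1)
inc2 = dI[:, 100:200].sum(axis=1)
corr = np.corrcoef(inc1, inc2)[0, 1]
print(abs(corr) < 0.05)
```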
Theorem
If X has Lévy symbol η then for each t ≥ 0, u ∈ R^d,

    E(e^{i(u, I_f(t))}) = exp( ∫_0^t η(f(s)^T u) ds ).

Proof. (sketch) Define M_f(t) = exp( i(u, ∫_0^t f(s) dX(s)) ) and use Itô's
formula to show that

    M_f(t) = 1 + i(u, ∫_0^t M_f(s−) f(s) dB(s))
               + ∫_0^t ∫_{R^d −{0}} M_f(s−)(e^{i(u, f(s)x)} − 1) Ñ(ds, dx)
               + ∫_0^t M_f(s−) η(f(s)^T u) ds.

Now take expectations of both sides to get

    E(M_f(t)) = 1 + ∫_0^t E(M_f(s)) η(f(s)^T u) ds,

and the result follows.  □
 Dave Applebaum (Sheffield UK)                    Lecture 5                              December 2011     11 / 44
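The characteristic-function formula can be sanity-checked by Monte Carlo. Below is a minimal sketch with a Brownian driver, for which η(u) = −u²/2; the integrand f(s) = cos s and the point u = 1.3 are illustrative choices, not from the lecture.

```python
import numpy as np

rng = np.random.default_rng(1)

# Monte Carlo check of E(e^{i u I_f(t)}) = exp( int_0^t eta(f(s) u) ds )
# for a standard Brownian driver, where eta(u) = -u^2 / 2.
t, n_steps, n_paths, u = 1.0, 400, 100_000, 1.3
ds = t / n_steps
s = np.arange(n_steps) * ds
f = np.cos(s)                               # illustrative integrand f(s) = cos s

dX = rng.normal(0.0, np.sqrt(ds), size=(n_paths, n_steps))
I_f = (f * dX).sum(axis=1)                  # Riemann-sum approximation of I_f(t)

empirical = np.exp(1j * u * I_f).mean()     # empirical characteristic function
eta = lambda v: -0.5 * v**2                 # Levy symbol of standard BM
predicted = np.exp((eta(f * u) * ds).sum()) # exp( int_0^t eta(f(s) u) ds )
print(abs(empirical - predicted))           # small
```

For the Brownian case this is exact in distribution: I_f(t) is Gaussian with variance ∫_0^t f(s)² ds, so only Monte Carlo error remains.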
If X has characteristics (b, A, ν), it follows that I_f(t) has characteristics
(b_t^f, A_t^f, ν_t^f), where (with B the closed unit ball in the drift term)

    b_t^f = ∫_0^t f(s)b ds + ∫_0^t ∫_{R^d −{0}} f(s)x (1_B(f(s)x) − 1_B(x)) ν(dx) ds,

    A_t^f = ∫_0^t f(s) A f(s)^T ds,

    ν_t^f(B) = ∫_0^t ν(f(s)^{−1}(B)) ds.

It follows that every OU process Y conditioned on Y_0 = y is an additive
process. It will have characteristics as above with f(s) = e^{−sK} and b_t^f
translated by e^{−tK} y.

 Dave Applebaum (Sheffield UK)                          Lecture 5                 December 2011   12 / 44
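The Gaussian characteristic A_t^f = ∫_0^t f(s) A f(s)^T ds can be checked against the sample covariance of I_f(t) for a purely Gaussian driver. This is a sketch; the matrix A and the matrix-valued integrand f below are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(2)

# Compare the sample covariance of I_f(t) with int_0^t f(s) A f(s)^T ds
# for a 2-d Brownian driver with covariance matrix A.
A = np.array([[1.0, 0.3], [0.3, 0.5]])       # covariance of the driver
L = np.linalg.cholesky(A)

def f(s):                                    # illustrative matrix-valued integrand
    return np.array([[np.exp(-s), 0.0], [s, 1.0]])

t, n_steps, n_paths = 1.0, 200, 100_000
ds = t / n_steps
grid = np.arange(n_steps) * ds

# Correlated Brownian increments: each dX_k ~ N(0, A ds).
dX = rng.standard_normal((n_paths, n_steps, 2)) @ L.T * np.sqrt(ds)
F = np.stack([f(s) for s in grid])           # shape (n_steps, 2, 2)
I_f = np.einsum('kij,nkj->ni', F, dX)        # sum_k f(s_k) dX_k

predicted = sum(f(s) @ A @ f(s).T for s in grid) * ds
sample_cov = np.cov(I_f.T)
print(np.abs(sample_cov - predicted).max())  # small
```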
Invariant Measures, Stationary Processes, Ergodicity:
General Theory


We want to investigate invariant measures and stationary solutions for
OU processes. First, a little general theory.
Let (T_t, t ≥ 0) be a general Markov semigroup with transition
probabilities p_t(x, B) = T_t 1_B(x), so that T_t f(x) = ∫_{R^d} f(y) p_t(x, dy) for
f ∈ B_b(R^d). We say that a probability measure µ is an invariant
measure for the semigroup if for all t ≥ 0, f ∈ B_b(R^d),

    ∫_{R^d} T_t f(x) µ(dx) = ∫_{R^d} f(x) µ(dx).                    (0.5)




 Dave Applebaum (Sheffield UK)                  Lecture 5                     December 2011    13 / 44
Equivalently, for all Borel sets B,

    ∫_{R^d} p_t(x, B) µ(dx) = µ(B).                                 (0.6)

To see that (0.5) ⇒ (0.6), rewrite (0.5) as

    ∫_{R^d} ∫_{R^d} f(y) p_t(x, dy) µ(dx) = ∫_{R^d} f(x) µ(dx),

and put f = 1_B. For the converse, approximate f by simple functions
and take limits.




 Dave Applebaum (Sheffield UK)                  Lecture 5                   December 2011    14 / 44
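For a concrete instance of (0.6), take the classical Ornstein-Uhlenbeck semigroup on R driven by Brownian motion, where p_t(x, ·) = N(e^{−kt}x, σ²(1 − e^{−2kt})/(2k)) and µ = N(0, σ²/(2k)) is invariant. The sketch below (illustrative parameter values) samples x from µ, then y from p_t(x, ·), and checks that y is again distributed as µ.

```python
import numpy as np

rng = np.random.default_rng(3)

# Illustration of (0.6) for the classical OU semigroup dY = -kY dt + sigma dB:
# p_t(x, .) = N(e^{-kt} x, sigma^2 (1 - e^{-2kt}) / (2k)),
# mu = N(0, sigma^2 / (2k)) satisfies int p_t(x, B) mu(dx) = mu(B).
k, sigma, t, n = 0.7, 1.2, 0.5, 500_000
stat_var = sigma**2 / (2 * k)

x = rng.normal(0.0, np.sqrt(stat_var), n)                        # x ~ mu
trans_var = sigma**2 * (1 - np.exp(-2 * k * t)) / (2 * k)
y = np.exp(-k * t) * x + rng.normal(0.0, np.sqrt(trans_var), n)  # y ~ p_t(x, .)

# The law of y should again be mu: check mean and variance.
print(abs(y.mean()), abs(y.var() - stat_var))
```

Here the identity is exact: Var(y) = e^{−2kt}·σ²/(2k) + σ²(1 − e^{−2kt})/(2k) = σ²/(2k), so only sampling error remains.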
e.g. A Lévy process does not have an invariant probability measure, but
Lebesgue measure is invariant in the sense that for f ∈ L^1(R^d),

    ∫_{R^d} T_t f(x) dx = ∫_{R^d} ∫_{R^d} f(x + y) p_t(dy) dx = ∫_{R^d} f(x) dx.

A process Z = (Z(t), t ≥ 0) is (strictly) stationary if for all
n ∈ N, t_1, . . . , t_n, h ∈ R^+,

    (Z(t_1), . . . , Z(t_n)) =^d (Z(t_1 + h), . . . , Z(t_n + h)).

Theorem
A Markov process Z for which µ is the law of Z(0) is stationary if and
only if µ is an invariant measure.



 Dave Applebaum (Sheffield UK)                   Lecture 5                    December 2011   15 / 44
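The Lebesgue-invariance identity can be checked by quadrature. The sketch below uses the Brownian semigroup, T_t f = f ∗ p_t with p_t the heat kernel, and an illustrative f ∈ L¹(R); none of the specific choices come from the lecture.

```python
import numpy as np

# Sketch of Lebesgue-measure invariance int T_t f(x) dx = int f(x) dx
# for the Brownian semigroup T_t f(x) = E f(x + B_t), checked by quadrature.
t = 0.8
xs = np.linspace(-20, 20, 4001)
dx = xs[1] - xs[0]
f = np.exp(-xs**2)                            # illustrative f in L1(R)

# Heat kernel p_t on the same grid; T_t f = f * p_t (convolution).
p = np.exp(-xs**2 / (2 * t)) / np.sqrt(2 * np.pi * t)
Ttf = np.convolve(f, p, mode='same') * dx

diff = abs(Ttf.sum() * dx - f.sum() * dx)     # | int T_t f dx - int f dx |
print(diff)                                   # ~ 0
```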
Proof. If the process is stationary then µ is invariant, since

    µ(B) = P(Z(0) ∈ B) = P(Z(t) ∈ B) = ∫_{R^d} p_t(x, B) µ(dx).

For the converse, it is sufficient to prove that
E(f_1(Z(t_1 + h)) · · · f_n(Z(t_n + h))) is independent of h for all
f_1, . . . , f_n ∈ B_b(R^d). The proof is by induction. For n = 1 it is enough
to show

    E(f(Z(t))) = E(E(f(Z(t))|F_0))
               = E(T_t f(Z(0)))
               = ∫_{R^d} (T_t f(x)) µ(dx)
               = ∫_{R^d} f(x) µ(dx) = E(f(Z(0))).




 Dave Applebaum (Sheffield UK)             Lecture 5                     December 2011   16 / 44
In general, use

    E(f_1(Z(t_1 + h)) · · · f_n(Z(t_n + h)))
        = E(f_1(Z(t_1 + h)) · · · E(f_n(Z(t_n + h))|F_{t_{n−1}+h}))
        = E(f_1(Z(t_1 + h)) · · · T_{t_n − t_{n−1}} f_n(Z(t_{n−1} + h))).  □




 Dave Applebaum (Sheffield UK)                Lecture 5                December 2011   17 / 44
Let µ be an invariant probability measure for a Markov semigroup
(T_t, t ≥ 0). µ is ergodic if

    T_t 1_B = 1_B (µ a.s.) ⇒ µ(B) = 0 or µ(B) = 1.

If µ is ergodic then “time averages” = “space averages” for the
corresponding stationary Markov process, i.e.

    lim_{T→∞} (1/T) ∫_0^T f(Z(s)) ds = ∫_{R^d} f(x) µ(dx)  a.s.

Fact: The invariant measures form a convex set, and the ergodic
measures are the extreme points of this set.
It follows that if an invariant measure is unique then it is ergodic.




 Dave Applebaum (Sheffield UK)                  Lecture 5                      December 2011   18 / 44
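The time-average identity can be illustrated for the classical Gaussian OU process, whose unique invariant measure N(0, σ²/(2k)) is therefore ergodic. The sketch below (illustrative parameters; f = cos, for which ∫ f dµ = e^{−σ²/(4k)}) simulates one long path by exact transition sampling and compares the two averages.

```python
import numpy as np

rng = np.random.default_rng(5)

# Time average vs. space average for the classical OU process
# dY = -kY dt + sigma dB, with invariant measure mu = N(0, sigma^2/(2k)).
k, sigma, dt, n_steps = 1.0, 1.0, 0.02, 1_000_000
stat_var = sigma**2 / (2 * k)
a = np.exp(-k * dt)                           # one-step decay e^{-k dt}
# Exact transition noise: Y(t+dt) = a Y(t) + N(0, stat_var (1 - a^2)).
eps = rng.normal(0.0, np.sqrt(stat_var * (1 - a**2)), n_steps)

y = np.empty(n_steps)
y[0] = rng.normal(0.0, np.sqrt(stat_var))     # start in the invariant law
for i in range(1, n_steps):
    y[i] = a * y[i - 1] + eps[i]

time_avg = np.cos(y).mean()                   # (1/T) int_0^T f(Y(s)) ds
space_avg = np.exp(-stat_var / 2)             # int cos(x) mu(dx) = e^{-v/2}
print(abs(time_avg - space_avg))              # small
```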
The Self-Decomposable Connection

Recall that a random variable Z is self-decomposable if for each
0 < a < 1 there exists a random variable W_a that is independent of Z
such that

    Z =^d aZ + W_a,

or equivalently ρ_Z = ρ_Z^a ∗ ρ_{W_a}, where ρ_Z^a(B) = ρ_Z(a^{-1}B).
Now suppose that Y is a stationary Ornstein-Uhlenbeck process on R.
Then Y_0 is self-decomposable with a = e^{-kt} and W_{a(t)} = ∫_0^t e^{-ks} dX(s),
since

    Y(t) = e^{-kt} Y_0 + ∫_0^t e^{-k(t-s)} dX(s)

and by stationary increments of the process X

 Dave Applebaum (Sheffield UK)        Lecture 5        December 2011   19 / 44
    Y(t) =^d Y_0   and   ∫_0^t e^{-k(t-s)} dX(s) =^d ∫_0^t e^{-ks} dX(s)

    ⇒  Y_0 =^d e^{-kt} Y_0 + W_{a(t)}.

Dave Applebaum (Sheffield UK)        Lecture 5        December 2011   20 / 44
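In the Gaussian case the identity Y_0 =^d e^{-kt} Y_0 + W_{a(t)} reduces to variance bookkeeping. A quick sketch, assuming X is a standard Brownian motion, so that Y_0 ∼ N(0, 1/(2k)) and, by the Itô isometry, W_{a(t)} ∼ N(0, (1 − e^{-2kt})/(2k)):

```python
import math

# Assumed Gaussian case: X = Brownian motion; k and t are arbitrary choices.
k, t = 0.7, 1.3
a = math.exp(-k * t)

var_Y0 = 1.0 / (2 * k)                          # stationary variance of Y_0
var_W = (1.0 - math.exp(-2 * k * t)) / (2 * k)  # Ito isometry: int_0^t e^{-2ks} ds

# For centred Gaussians, Y_0 =d a*Y_0 + W_{a(t)} is exactly a variance identity.
lhs = var_Y0
rhs = a ** 2 * var_Y0 + var_W
print(lhs, rhs)
```

The identity a²/(2k) + (1 − a²)/(2k) = 1/(2k) holds for every t, which is the self-decomposability of N(0, 1/(2k)).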
Now suppose that µ is self-decomposable, more precisely that

    µ = µ^{e^{-kt}} ∗ ρ_t,

where ρ_t is the law of W_{a(t)}. Then

    ∫_R T_t f(x) µ(dx) = ∫_R ∫_R f(e^{-kt} x + y) ρ_t(dy) µ(dx)
                       = ∫_R ∫_R f(x + y) ρ_t(dy) µ^{e^{-kt}}(dx)
                       = ∫_R f(x) (µ^{e^{-kt}} ∗ ρ_t)(dx)
                       = ∫_R f(x) µ(dx).

So µ is an invariant measure.

 Dave Applebaum (Sheffield UK)        Lecture 5        December 2011   21 / 44
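The displayed chain of equalities can also be read through characteristic functions: µ = µ^{e^{-kt}} ∗ ρ_t says φ_µ(u) = φ_µ(e^{-kt} u) φ_{ρ_t}(u). A sketch in the assumed Gaussian case, where both laws are centred normals:

```python
import math

# Assumed Gaussian case (X = Brownian motion); k, t, u are arbitrary choices.
k, t, u = 0.5, 2.0, 1.7
a = math.exp(-k * t)
var_mu = 1.0 / (2 * k)              # mu = N(0, 1/(2k))
var_rho = (1.0 - a ** 2) / (2 * k)  # rho_t = law of W_{a(t)}

# phi of a*Z + W (Z ~ mu, W ~ rho_t independent) versus phi of mu itself.
phi_lhs = math.exp(-0.5 * var_mu * (a * u) ** 2) * math.exp(-0.5 * var_rho * u ** 2)
phi_rhs = math.exp(-0.5 * var_mu * u ** 2)
print(phi_lhs, phi_rhs)
```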
So we have shown that:
Theorem
The following are equivalent for the O-U process Y .
     Y is stationary.
     The law of Y (0) is an invariant measure.
     The law of Y (0) is self-decomposable (with W_{a(t)} = ∫_0^t e^{-ks} dX(s)).

 Dave Applebaum (Sheffield UK)        Lecture 5        December 2011   22 / 44
We seek some condition on the Lévy process X which ensures that Y
is stationary.
Fact: If Y_∞ := ∫_0^∞ e^{-ks} dX(s) exists in distribution then it is
self-decomposable.
To see this observe that (using stationary increments of X)

    ∫_0^∞ e^{-ks} dX(s) = ∫_t^∞ e^{-ks} dX(s) + ∫_0^t e^{-ks} dX(s)
                        =^d ∫_0^∞ e^{-k(t+s)} dX(s) + ∫_0^t e^{-ks} dX(s)
                        = e^{-kt} ∫_0^∞ e^{-ks} dX(s) + ∫_0^t e^{-ks} dX(s)

 Dave Applebaum (Sheffield UK)        Lecture 5        December 2011   23 / 44
When does lim_{t→∞} ∫_0^t e^{-ks} dX(s) exist in distribution? Use the Lévy-Itô
decomposition:

    X(t) = bt + M(t) + ∫_{|x|≥1} x N(t, dx).

It is not difficult to see that lim_{t→∞} ∫_0^t e^{-ks} dM(s) exists in the L^2-sense.
Fact: lim_{t→∞} ∫_0^t ∫_{|x|≥1} e^{-ks} x N(ds, dx) exists in distribution if and only if
∫_{|x|≥1} log(1 + |x|) ν(dx) < ∞.

 Dave Applebaum (Sheffield UK)        Lecture 5        December 2011   24 / 44
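The log-moment condition is easy to probe numerically. A sketch with two hypothetical Lévy measure densities (the densities and cutoffs below are illustrative choices, not from the lecture): for a Pareto-type tail the log-moment is finite, while for a 1/(x log² x) tail it diverges, so the truncated integral keeps growing with the cutoff:

```python
import math

def log_moment(density, lo, hi, n=200_000):
    """Midpoint-rule approximation of int_lo^hi log(1+x) * density(x) dx."""
    h = (hi - lo) / n
    return sum(math.log(1 + x) * density(x) * h
               for x in (lo + (i + 0.5) * h for i in range(n)))

# Pareto-type tail nu(dx) = 2 x^{-3} dx on x >= 1: the log-moment is finite
# (analytically the full integral equals 1), so the condition holds.
finite_tail = log_moment(lambda x: 2.0 * x ** -3, 1.0, 1e4)

# Heavier tail nu(dx) = dx / (x (log x)^2) on x >= e: the log-moment
# diverges, so enlarging the cutoff keeps increasing the integral.
heavy = lambda x: 1.0 / (x * math.log(x) ** 2)
grow_1 = log_moment(heavy, math.e, 1e2)
grow_2 = log_moment(heavy, math.e, 1e5)
print(finite_tail, grow_1, grow_2)
```

Note that both densities integrate to a finite mass on {|x| ≥ 1}, so they are legitimate tails of Lévy measures; only the log-moment distinguishes them.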
To prove this you need
 1   If (ξ_n, n ∈ N) are i.i.d. then Σ_{n=1}^∞ c^n ξ_n converges a.s. (0 < c < 1)
     if and only if E(log(1 + |ξ_1|)) < ∞.
 2
         ∫_0^n ∫_{|x|≥1} e^{-ks} x N(ds, dx) =^d Σ_{j=0}^{n-1} e^{-kj} M_j ,

     where M_j := ∫_j^{j+1} ∫_{|x|≥1} e^{-k(s-j)} x N(ds, dx). Note that (M_j, j ∈ N)
     are i.i.d.
In this case, Y has characteristics (b_∞, A_∞, ν_∞).
e.g. Brownian motion case. X(t) = B(t). µ ∼ N(0, 1/(2k)).

 Dave Applebaum (Sheffield UK)        Lecture 5        December 2011   25 / 44
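Item 1 can be illustrated by simulation. A sketch with the illustrative choices c = 1/2 and standard Cauchy ξ_n, which satisfy E log(1 + |ξ_1|) < ∞ even though E|ξ_1| = ∞; the geometric weights damp the heavy tails, so the partial sums settle:

```python
import math
import random

# Illustrative choices (not from the slides): c = 0.5, xi_n standard Cauchy.
random.seed(7)
c = 0.5

# Standard Cauchy draws via the tangent of a uniform angle.
xis = [math.tan(math.pi * (random.random() - 0.5)) for _ in range(60)]

partials, s = [], 0.0
for n, xi in enumerate(xis, start=1):
    s += c ** n * xi
    partials.append(s)

# Terms beyond n are bounded by max|xi| * c^{n+1}/(1-c), so late
# partial sums barely move even though the xi_n have no mean.
print(partials[29], partials[-1])
```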
Research Inventy : International Journal of Engineering and ScienceResearch Inventy : International Journal of Engineering and Science
Research Inventy : International Journal of Engineering and Scienceresearchinventy
 
Seridonio fachini conem_draft
Seridonio fachini conem_draftSeridonio fachini conem_draft
Seridonio fachini conem_draftAna Seridonio
 
Neutral Electronic Excitations: a Many-body approach to the optical absorptio...
Neutral Electronic Excitations: a Many-body approach to the optical absorptio...Neutral Electronic Excitations: a Many-body approach to the optical absorptio...
Neutral Electronic Excitations: a Many-body approach to the optical absorptio...Claudio Attaccalite
 
thermodynamics
thermodynamicsthermodynamics
thermodynamicskcrycss
 
Solucionario Mecácnica Clásica Goldstein
Solucionario Mecácnica Clásica GoldsteinSolucionario Mecácnica Clásica Goldstein
Solucionario Mecácnica Clásica GoldsteinFredy Mojica
 
New Presentation From Astrophysicist Dr. Andrew Beckwith: "Detailing Coherent...
New Presentation From Astrophysicist Dr. Andrew Beckwith: "Detailing Coherent...New Presentation From Astrophysicist Dr. Andrew Beckwith: "Detailing Coherent...
New Presentation From Astrophysicist Dr. Andrew Beckwith: "Detailing Coherent...Global HeavyLift Holdings, LLC
 

What's hot (19)

12 x1 t04 05 displacement, velocity, acceleration (2012)
12 x1 t04 05 displacement, velocity, acceleration (2012)12 x1 t04 05 displacement, velocity, acceleration (2012)
12 x1 t04 05 displacement, velocity, acceleration (2012)
 
Thermodynamics of freezing soil
Thermodynamics of freezing soilThermodynamics of freezing soil
Thermodynamics of freezing soil
 
Sviluppi modellistici sulla propagazione degli incendi boschivi
Sviluppi modellistici sulla propagazione degli incendi boschiviSviluppi modellistici sulla propagazione degli incendi boschivi
Sviluppi modellistici sulla propagazione degli incendi boschivi
 
The inverse droplet coagulation problem
The inverse droplet coagulation problemThe inverse droplet coagulation problem
The inverse droplet coagulation problem
 
Important equation in physics2
Important equation in physics2Important equation in physics2
Important equation in physics2
 
DIGITAL IMAGE PROCESSING - Day 4 Image Transform
DIGITAL IMAGE PROCESSING - Day 4 Image TransformDIGITAL IMAGE PROCESSING - Day 4 Image Transform
DIGITAL IMAGE PROCESSING - Day 4 Image Transform
 
Cluster-cluster aggregation with (complete) collisional fragmentation
Cluster-cluster aggregation with (complete) collisional fragmentationCluster-cluster aggregation with (complete) collisional fragmentation
Cluster-cluster aggregation with (complete) collisional fragmentation
 
Important equation in physics
Important equation in physicsImportant equation in physics
Important equation in physics
 
Oscillatory kinetics in cluster-cluster aggregation
Oscillatory kinetics in cluster-cluster aggregationOscillatory kinetics in cluster-cluster aggregation
Oscillatory kinetics in cluster-cluster aggregation
 
Feedback of zonal flows on Rossby-wave turbulence driven by small scale inst...
Feedback of zonal flows on  Rossby-wave turbulence driven by small scale inst...Feedback of zonal flows on  Rossby-wave turbulence driven by small scale inst...
Feedback of zonal flows on Rossby-wave turbulence driven by small scale inst...
 
Dft
DftDft
Dft
 
Large scale coherent structures and turbulence in quasi-2D hydrodynamic models
Large scale coherent structures and turbulence in quasi-2D hydrodynamic modelsLarge scale coherent structures and turbulence in quasi-2D hydrodynamic models
Large scale coherent structures and turbulence in quasi-2D hydrodynamic models
 
Research Inventy : International Journal of Engineering and Science
Research Inventy : International Journal of Engineering and ScienceResearch Inventy : International Journal of Engineering and Science
Research Inventy : International Journal of Engineering and Science
 
Seridonio fachini conem_draft
Seridonio fachini conem_draftSeridonio fachini conem_draft
Seridonio fachini conem_draft
 
Neutral Electronic Excitations: a Many-body approach to the optical absorptio...
Neutral Electronic Excitations: a Many-body approach to the optical absorptio...Neutral Electronic Excitations: a Many-body approach to the optical absorptio...
Neutral Electronic Excitations: a Many-body approach to the optical absorptio...
 
thermodynamics
thermodynamicsthermodynamics
thermodynamics
 
Solucionario Mecácnica Clásica Goldstein
Solucionario Mecácnica Clásica GoldsteinSolucionario Mecácnica Clásica Goldstein
Solucionario Mecácnica Clásica Goldstein
 
Speaking
SpeakingSpeaking
Speaking
 
New Presentation From Astrophysicist Dr. Andrew Beckwith: "Detailing Coherent...
New Presentation From Astrophysicist Dr. Andrew Beckwith: "Detailing Coherent...New Presentation From Astrophysicist Dr. Andrew Beckwith: "Detailing Coherent...
New Presentation From Astrophysicist Dr. Andrew Beckwith: "Detailing Coherent...
 

Similar to Koc5(dba)

Applications of differential equations by shahzad
Applications of differential equations by shahzadApplications of differential equations by shahzad
Applications of differential equations by shahzadbiotech energy pvt limited
 
Causal Dynamical Triangulations
Causal Dynamical TriangulationsCausal Dynamical Triangulations
Causal Dynamical TriangulationsRene García
 
Instantaneous Gelation in Smoluchwski's Coagulation Equation Revisited, Confe...
Instantaneous Gelation in Smoluchwski's Coagulation Equation Revisited, Confe...Instantaneous Gelation in Smoluchwski's Coagulation Equation Revisited, Confe...
Instantaneous Gelation in Smoluchwski's Coagulation Equation Revisited, Confe...Colm Connaughton
 
Fundamentals of Momentum, Heat and Mass Transfer | 6th Edition UnsteadyModel...
Fundamentals of Momentum, Heat and Mass Transfer | 6th  Edition UnsteadyModel...Fundamentals of Momentum, Heat and Mass Transfer | 6th  Edition UnsteadyModel...
Fundamentals of Momentum, Heat and Mass Transfer | 6th Edition UnsteadyModel...BalqeesMustafa
 
What are free particles in quantum mechanics
What are free particles in quantum mechanicsWhat are free particles in quantum mechanics
What are free particles in quantum mechanicsbhaskar chatterjee
 
The photon and its momentum
The photon and its momentumThe photon and its momentum
The photon and its momentumXequeMateShannon
 
Statistica theromodynamics
Statistica theromodynamicsStatistica theromodynamics
Statistica theromodynamicsRaguM6
 
Application of calculus in everyday life
Application of calculus in everyday lifeApplication of calculus in everyday life
Application of calculus in everyday lifeMohamed Ibrahim
 
First order linear differential equation
First order linear differential equationFirst order linear differential equation
First order linear differential equationNofal Umair
 
1.1 PRINCIPLE OF LEAST ACTION 640-213 MelatosChapter 1.docx
1.1 PRINCIPLE OF LEAST ACTION 640-213 MelatosChapter 1.docx1.1 PRINCIPLE OF LEAST ACTION 640-213 MelatosChapter 1.docx
1.1 PRINCIPLE OF LEAST ACTION 640-213 MelatosChapter 1.docxpaynetawnya
 

Similar to Koc5(dba) (20)

The wave equation
The wave equationThe wave equation
The wave equation
 
Applications of differential equations by shahzad
Applications of differential equations by shahzadApplications of differential equations by shahzad
Applications of differential equations by shahzad
 
Causal Dynamical Triangulations
Causal Dynamical TriangulationsCausal Dynamical Triangulations
Causal Dynamical Triangulations
 
Instantaneous Gelation in Smoluchwski's Coagulation Equation Revisited, Confe...
Instantaneous Gelation in Smoluchwski's Coagulation Equation Revisited, Confe...Instantaneous Gelation in Smoluchwski's Coagulation Equation Revisited, Confe...
Instantaneous Gelation in Smoluchwski's Coagulation Equation Revisited, Confe...
 
Fundamentals of Momentum, Heat and Mass Transfer | 6th Edition UnsteadyModel...
Fundamentals of Momentum, Heat and Mass Transfer | 6th  Edition UnsteadyModel...Fundamentals of Momentum, Heat and Mass Transfer | 6th  Edition UnsteadyModel...
Fundamentals of Momentum, Heat and Mass Transfer | 6th Edition UnsteadyModel...
 
What are free particles in quantum mechanics
What are free particles in quantum mechanicsWhat are free particles in quantum mechanics
What are free particles in quantum mechanics
 
The photon and its momentum
The photon and its momentumThe photon and its momentum
The photon and its momentum
 
Statistica theromodynamics
Statistica theromodynamicsStatistica theromodynamics
Statistica theromodynamics
 
Dynamics eg260 l1
Dynamics eg260 l1Dynamics eg260 l1
Dynamics eg260 l1
 
L02 acous
L02 acousL02 acous
L02 acous
 
Koc2(dba)
Koc2(dba)Koc2(dba)
Koc2(dba)
 
Basics in Seismology
Basics in SeismologyBasics in Seismology
Basics in Seismology
 
Application of calculus in everyday life
Application of calculus in everyday lifeApplication of calculus in everyday life
Application of calculus in everyday life
 
First order linear differential equation
First order linear differential equationFirst order linear differential equation
First order linear differential equation
 
1.1 PRINCIPLE OF LEAST ACTION 640-213 MelatosChapter 1.docx
1.1 PRINCIPLE OF LEAST ACTION 640-213 MelatosChapter 1.docx1.1 PRINCIPLE OF LEAST ACTION 640-213 MelatosChapter 1.docx
1.1 PRINCIPLE OF LEAST ACTION 640-213 MelatosChapter 1.docx
 
Fourier series
Fourier seriesFourier series
Fourier series
 
Wave Motion theory-2
Wave Motion theory-2Wave Motion theory-2
Wave Motion theory-2
 
Physics formulae
Physics formulaePhysics formulae
Physics formulae
 
Statistics Homework Help
Statistics Homework HelpStatistics Homework Help
Statistics Homework Help
 
Multiple Linear Regression Homework Help
Multiple Linear Regression Homework HelpMultiple Linear Regression Homework Help
Multiple Linear Regression Homework Help
 

Recently uploaded

Dharavi Russian callg Girls, { 09892124323 } || Call Girl In Mumbai ...
Dharavi Russian callg Girls, { 09892124323 } || Call Girl In Mumbai ...Dharavi Russian callg Girls, { 09892124323 } || Call Girl In Mumbai ...
Dharavi Russian callg Girls, { 09892124323 } || Call Girl In Mumbai ...Pooja Nehwal
 
Solution Manual for Financial Accounting, 11th Edition by Robert Libby, Patri...
Solution Manual for Financial Accounting, 11th Edition by Robert Libby, Patri...Solution Manual for Financial Accounting, 11th Edition by Robert Libby, Patri...
Solution Manual for Financial Accounting, 11th Edition by Robert Libby, Patri...ssifa0344
 
Instant Issue Debit Cards - High School Spirit
Instant Issue Debit Cards - High School SpiritInstant Issue Debit Cards - High School Spirit
Instant Issue Debit Cards - High School Spiritegoetzinger
 
VVIP Pune Call Girls Katraj (7001035870) Pune Escorts Nearby with Complete Sa...
VVIP Pune Call Girls Katraj (7001035870) Pune Escorts Nearby with Complete Sa...VVIP Pune Call Girls Katraj (7001035870) Pune Escorts Nearby with Complete Sa...
VVIP Pune Call Girls Katraj (7001035870) Pune Escorts Nearby with Complete Sa...Call Girls in Nagpur High Profile
 
The Economic History of the U.S. Lecture 21.pdf
The Economic History of the U.S. Lecture 21.pdfThe Economic History of the U.S. Lecture 21.pdf
The Economic History of the U.S. Lecture 21.pdfGale Pooley
 
Dividend Policy and Dividend Decision Theories.pptx
Dividend Policy and Dividend Decision Theories.pptxDividend Policy and Dividend Decision Theories.pptx
Dividend Policy and Dividend Decision Theories.pptxanshikagoel52
 
06_Joeri Van Speybroek_Dell_MeetupDora&Cybersecurity.pdf
06_Joeri Van Speybroek_Dell_MeetupDora&Cybersecurity.pdf06_Joeri Van Speybroek_Dell_MeetupDora&Cybersecurity.pdf
06_Joeri Van Speybroek_Dell_MeetupDora&Cybersecurity.pdfFinTech Belgium
 
Malad Call Girl in Services 9892124323 | ₹,4500 With Room Free Delivery
Malad Call Girl in Services  9892124323 | ₹,4500 With Room Free DeliveryMalad Call Girl in Services  9892124323 | ₹,4500 With Room Free Delivery
Malad Call Girl in Services 9892124323 | ₹,4500 With Room Free DeliveryPooja Nehwal
 
Call Girls Koregaon Park Call Me 7737669865 Budget Friendly No Advance Booking
Call Girls Koregaon Park Call Me 7737669865 Budget Friendly No Advance BookingCall Girls Koregaon Park Call Me 7737669865 Budget Friendly No Advance Booking
Call Girls Koregaon Park Call Me 7737669865 Budget Friendly No Advance Bookingroncy bisnoi
 
Call Girls Service Nagpur Maya Call 7001035870 Meet With Nagpur Escorts
Call Girls Service Nagpur Maya Call 7001035870 Meet With Nagpur EscortsCall Girls Service Nagpur Maya Call 7001035870 Meet With Nagpur Escorts
Call Girls Service Nagpur Maya Call 7001035870 Meet With Nagpur Escortsranjana rawat
 
The Economic History of the U.S. Lecture 22.pdf
The Economic History of the U.S. Lecture 22.pdfThe Economic History of the U.S. Lecture 22.pdf
The Economic History of the U.S. Lecture 22.pdfGale Pooley
 
CALL ON ➥8923113531 🔝Call Girls Gomti Nagar Lucknow best sexual service
CALL ON ➥8923113531 🔝Call Girls Gomti Nagar Lucknow best sexual serviceCALL ON ➥8923113531 🔝Call Girls Gomti Nagar Lucknow best sexual service
CALL ON ➥8923113531 🔝Call Girls Gomti Nagar Lucknow best sexual serviceanilsa9823
 
The Economic History of the U.S. Lecture 18.pdf
The Economic History of the U.S. Lecture 18.pdfThe Economic History of the U.S. Lecture 18.pdf
The Economic History of the U.S. Lecture 18.pdfGale Pooley
 
The Economic History of the U.S. Lecture 25.pdf
The Economic History of the U.S. Lecture 25.pdfThe Economic History of the U.S. Lecture 25.pdf
The Economic History of the U.S. Lecture 25.pdfGale Pooley
 
Pooja 9892124323 : Call Girl in Juhu Escorts Service Free Home Delivery
Pooja 9892124323 : Call Girl in Juhu Escorts Service Free Home DeliveryPooja 9892124323 : Call Girl in Juhu Escorts Service Free Home Delivery
Pooja 9892124323 : Call Girl in Juhu Escorts Service Free Home DeliveryPooja Nehwal
 
VIP Call Girls LB Nagar ( Hyderabad ) Phone 8250192130 | ₹5k To 25k With Room...
VIP Call Girls LB Nagar ( Hyderabad ) Phone 8250192130 | ₹5k To 25k With Room...VIP Call Girls LB Nagar ( Hyderabad ) Phone 8250192130 | ₹5k To 25k With Room...
VIP Call Girls LB Nagar ( Hyderabad ) Phone 8250192130 | ₹5k To 25k With Room...Suhani Kapoor
 
Solution Manual for Principles of Corporate Finance 14th Edition by Richard B...
Solution Manual for Principles of Corporate Finance 14th Edition by Richard B...Solution Manual for Principles of Corporate Finance 14th Edition by Richard B...
Solution Manual for Principles of Corporate Finance 14th Edition by Richard B...ssifa0344
 
00_Main ppt_MeetupDORA&CyberSecurity.pptx
00_Main ppt_MeetupDORA&CyberSecurity.pptx00_Main ppt_MeetupDORA&CyberSecurity.pptx
00_Main ppt_MeetupDORA&CyberSecurity.pptxFinTech Belgium
 
Call US 📞 9892124323 ✅ Kurla Call Girls In Kurla ( Mumbai ) secure service
Call US 📞 9892124323 ✅ Kurla Call Girls In Kurla ( Mumbai ) secure serviceCall US 📞 9892124323 ✅ Kurla Call Girls In Kurla ( Mumbai ) secure service
Call US 📞 9892124323 ✅ Kurla Call Girls In Kurla ( Mumbai ) secure servicePooja Nehwal
 

Recently uploaded (20)

Veritas Interim Report 1 January–31 March 2024
Veritas Interim Report 1 January–31 March 2024Veritas Interim Report 1 January–31 March 2024
Veritas Interim Report 1 January–31 March 2024
 
Dharavi Russian callg Girls, { 09892124323 } || Call Girl In Mumbai ...
Dharavi Russian callg Girls, { 09892124323 } || Call Girl In Mumbai ...Dharavi Russian callg Girls, { 09892124323 } || Call Girl In Mumbai ...
Dharavi Russian callg Girls, { 09892124323 } || Call Girl In Mumbai ...
 
Solution Manual for Financial Accounting, 11th Edition by Robert Libby, Patri...
Solution Manual for Financial Accounting, 11th Edition by Robert Libby, Patri...Solution Manual for Financial Accounting, 11th Edition by Robert Libby, Patri...
Solution Manual for Financial Accounting, 11th Edition by Robert Libby, Patri...
 
Instant Issue Debit Cards - High School Spirit
Instant Issue Debit Cards - High School SpiritInstant Issue Debit Cards - High School Spirit
Instant Issue Debit Cards - High School Spirit
 
VVIP Pune Call Girls Katraj (7001035870) Pune Escorts Nearby with Complete Sa...
VVIP Pune Call Girls Katraj (7001035870) Pune Escorts Nearby with Complete Sa...VVIP Pune Call Girls Katraj (7001035870) Pune Escorts Nearby with Complete Sa...
VVIP Pune Call Girls Katraj (7001035870) Pune Escorts Nearby with Complete Sa...
 
The Economic History of the U.S. Lecture 21.pdf
The Economic History of the U.S. Lecture 21.pdfThe Economic History of the U.S. Lecture 21.pdf
The Economic History of the U.S. Lecture 21.pdf
 
Dividend Policy and Dividend Decision Theories.pptx
Dividend Policy and Dividend Decision Theories.pptxDividend Policy and Dividend Decision Theories.pptx
Dividend Policy and Dividend Decision Theories.pptx
 
06_Joeri Van Speybroek_Dell_MeetupDora&Cybersecurity.pdf
06_Joeri Van Speybroek_Dell_MeetupDora&Cybersecurity.pdf06_Joeri Van Speybroek_Dell_MeetupDora&Cybersecurity.pdf
06_Joeri Van Speybroek_Dell_MeetupDora&Cybersecurity.pdf
 
Malad Call Girl in Services 9892124323 | ₹,4500 With Room Free Delivery
Malad Call Girl in Services  9892124323 | ₹,4500 With Room Free DeliveryMalad Call Girl in Services  9892124323 | ₹,4500 With Room Free Delivery
Malad Call Girl in Services 9892124323 | ₹,4500 With Room Free Delivery
 
Call Girls Koregaon Park Call Me 7737669865 Budget Friendly No Advance Booking
Call Girls Koregaon Park Call Me 7737669865 Budget Friendly No Advance BookingCall Girls Koregaon Park Call Me 7737669865 Budget Friendly No Advance Booking
Call Girls Koregaon Park Call Me 7737669865 Budget Friendly No Advance Booking
 
Call Girls Service Nagpur Maya Call 7001035870 Meet With Nagpur Escorts
Call Girls Service Nagpur Maya Call 7001035870 Meet With Nagpur EscortsCall Girls Service Nagpur Maya Call 7001035870 Meet With Nagpur Escorts
Call Girls Service Nagpur Maya Call 7001035870 Meet With Nagpur Escorts
 
The Economic History of the U.S. Lecture 22.pdf
The Economic History of the U.S. Lecture 22.pdfThe Economic History of the U.S. Lecture 22.pdf
The Economic History of the U.S. Lecture 22.pdf
 
CALL ON ➥8923113531 🔝Call Girls Gomti Nagar Lucknow best sexual service
CALL ON ➥8923113531 🔝Call Girls Gomti Nagar Lucknow best sexual serviceCALL ON ➥8923113531 🔝Call Girls Gomti Nagar Lucknow best sexual service
CALL ON ➥8923113531 🔝Call Girls Gomti Nagar Lucknow best sexual service
 
The Economic History of the U.S. Lecture 18.pdf
The Economic History of the U.S. Lecture 18.pdfThe Economic History of the U.S. Lecture 18.pdf
The Economic History of the U.S. Lecture 18.pdf
 
The Economic History of the U.S. Lecture 25.pdf
The Economic History of the U.S. Lecture 25.pdfThe Economic History of the U.S. Lecture 25.pdf
The Economic History of the U.S. Lecture 25.pdf
 
Pooja 9892124323 : Call Girl in Juhu Escorts Service Free Home Delivery
Pooja 9892124323 : Call Girl in Juhu Escorts Service Free Home DeliveryPooja 9892124323 : Call Girl in Juhu Escorts Service Free Home Delivery
Pooja 9892124323 : Call Girl in Juhu Escorts Service Free Home Delivery
 
VIP Call Girls LB Nagar ( Hyderabad ) Phone 8250192130 | ₹5k To 25k With Room...
VIP Call Girls LB Nagar ( Hyderabad ) Phone 8250192130 | ₹5k To 25k With Room...VIP Call Girls LB Nagar ( Hyderabad ) Phone 8250192130 | ₹5k To 25k With Room...
VIP Call Girls LB Nagar ( Hyderabad ) Phone 8250192130 | ₹5k To 25k With Room...
 
Solution Manual for Principles of Corporate Finance 14th Edition by Richard B...
Solution Manual for Principles of Corporate Finance 14th Edition by Richard B...Solution Manual for Principles of Corporate Finance 14th Edition by Richard B...
Solution Manual for Principles of Corporate Finance 14th Edition by Richard B...
 
00_Main ppt_MeetupDORA&CyberSecurity.pptx
00_Main ppt_MeetupDORA&CyberSecurity.pptx00_Main ppt_MeetupDORA&CyberSecurity.pptx
00_Main ppt_MeetupDORA&CyberSecurity.pptx
 
Call US 📞 9892124323 ✅ Kurla Call Girls In Kurla ( Mumbai ) secure service
Call US 📞 9892124323 ✅ Kurla Call Girls In Kurla ( Mumbai ) secure serviceCall US 📞 9892124323 ✅ Kurla Call Girls In Kurla ( Mumbai ) secure service
Call US 📞 9892124323 ✅ Kurla Call Girls In Kurla ( Mumbai ) secure service
 

Koc5(dba)

  • 1. Lectures on Lévy Processes and Stochastic Calculus (Koc University) Lecture 5: The Ornstein-Uhlenbeck Process David Applebaum School of Mathematics and Statistics, University of Sheffield, UK 9th December 2011 Dave Applebaum (Sheffield UK) Lecture 5 December 2011 1 / 44
  • 2. Historical Origins This process was first introduced by Ornstein and Uhlenbeck in the 1930s as a more accurate model of the physical phenomenon of Brownian motion than the Einstein-Smoluchowski-Wiener process. They argued that Brownian motion = viscous drag of fluid + random molecular bombardment.
  • 5. Let v(t) be the velocity at time t of a particle of mass m executing Brownian motion. By Newton’s second law of motion, the total force acting on the particle at time t is F(t) = m dv(t)/dt. We then have m dv(t)/dt = −mkv(t) + mσ dB(t)/dt, where the first term on the right-hand side is the viscous drag and the second the random molecular bombardment, and k, σ > 0. Of course, dB(t)/dt doesn’t exist, but this is a “physicist’s argument”. If we cancel the m’s and multiply both sides by dt then we get a legitimate SDE - the Langevin equation
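The integrating-factor step invoked on the next slide can be written out explicitly. A short sketch of the standard calculation (my own illustration, one-dimensional case, using that t ↦ e^{kt} is deterministic and of finite variation, so the ordinary product rule applies to the Itô differential):

```latex
\begin{aligned}
d\bigl(e^{kt}v(t)\bigr) &= k e^{kt}v(t)\,dt + e^{kt}\,dv(t)\\
&= k e^{kt}v(t)\,dt + e^{kt}\bigl(-k v(t)\,dt + \sigma\,dB(t)\bigr)\\
&= \sigma e^{kt}\,dB(t).
\end{aligned}
```

Integrating from 0 to t and multiplying through by e^{−kt} then gives v(t) = e^{−kt} v(0) + σ ∫_0^t e^{−k(t−s)} dB(s).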
  • 10. dv(t) = −kv(t)dt + σdB(t) (0.1) Using the integrating factor e^{kt} we can then easily check that the unique solution to this equation is the Ornstein-Uhlenbeck process (v(t), t ≥ 0), where v(t) = e^{−kt} v(0) + σ ∫_0^t e^{−k(t−s)} dB(s). We are interested in Lévy processes, so replace B by a d-dimensional Lévy process X and k by a d × d matrix K. Our Langevin equation is dY(t) = −KY(t)dt + dX(t) (0.2) and its unique solution is Y(t) = e^{−tK} Y_0 + ∫_0^t e^{−(t−s)K} dX(s), (0.3)
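The Langevin dynamics (0.1) are easy to simulate. The sketch below is my own illustration, not from the lecture: a plain Euler-Maruyama discretisation Y_{n+1} = Y_n − kY_n Δt + σ√Δt ξ_n, run long enough that the sample variance approaches σ²/(2k), the variance of the stationary law µ ∼ N(0, σ²/(2k)) quoted earlier for the Brownian case (with σ = 1 there).

```python
import math
import random

def simulate_ou(y0, k, sigma, t_end, dt, rng):
    """Euler-Maruyama scheme for the Langevin SDE dY = -k Y dt + sigma dB."""
    y = y0
    for _ in range(int(t_end / dt)):
        y += -k * y * dt + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
    return y

rng = random.Random(2011)
k, sigma = 2.0, 1.0
# t_end = 5 is many relaxation times 1/k, so Y(t_end) is close to stationarity
# even though the paths start far from the origin at y0 = 5.
samples = [simulate_ou(y0=5.0, k=k, sigma=sigma, t_end=5.0, dt=0.01, rng=rng)
           for _ in range(5000)]
mean = sum(samples) / len(samples)
var = sum((y - mean) ** 2 for y in samples) / len(samples)
print(f"sample mean {mean:.3f}, sample variance {var:.3f}, "
      f"sigma^2/(2k) = {sigma ** 2 / (2 * k):.3f}")
```

The mean decays to 0 at rate e^{−kt} and the sample variance settles near 0.25 here; shrinking dt removes the small discretisation bias in the variance.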
  • 17. where Y_0 := Y(0) is a fixed F_0-measurable random variable. We still call the process Y an Ornstein-Uhlenbeck or OU process. Furthermore Y has càdlàg paths. Y is a Markov process. The process X is sometimes called the background driving Lévy process or BDLP.
  • 22. We get a Markov semigroup on Bb (Rd ) called a Mehler semigroup: Tt f (x) = E(f (Y (t))|Y0 = x) = f (e−tK x + y )ρt (dy ) (0.4) Rd where ρt is the law of the stochastic integral t d t 0e−sK dX (s) = 0 e−(t−s)K dX (s). This generalises the classical Mehler formula (X (t) = B(t), K = kI) 1 1 − e−2kt y2 Tt f (x) = d f e−kt x + y e− 2 dy . (2π) 2 Rd 2k Dave Applebaum (Sheffield UK) Lecture 5 December 2011 6 / 44
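The classical Mehler formula can be checked numerically. Below is a sketch, not from the lecture, using Gauss-Hermite quadrature with the test function $f(y) = y^2$ in $d = 1$, for which $T_t f(x) = e^{-2kt}x^2 + \frac{1 - e^{-2kt}}{2k}$ in closed form; parameter values are illustrative.

```python
import numpy as np

# Check T_t f(x) = E[ f(e^{-kt} x + s N) ], N ~ N(0,1), s^2 = (1 - e^{-2kt})/(2k),
# against the closed form for f(y) = y^2.  (Illustrative parameters.)
k, t, x = 0.7, 1.3, 2.0
s2 = (1.0 - np.exp(-2.0 * k * t)) / (2.0 * k)

# Gauss-Hermite quadrature: E[g(N)] = pi^{-1/2} * sum_i w_i g(sqrt(2) x_i)
nodes, weights = np.polynomial.hermite.hermgauss(40)
f = lambda y: y ** 2
mehler = np.sum(weights * f(np.exp(-k * t) * x
                            + np.sqrt(2.0 * s2) * nodes)) / np.sqrt(np.pi)
exact = np.exp(-2.0 * k * t) * x ** 2 + s2  # closed-form second moment
```

Since $f$ is a polynomial, the quadrature reproduces the closed form to machine precision.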
• 27. In fact $(T_t,\,t \ge 0)$ satisfies the Feller property: $T_t(C_0(\mathbb{R}^d)) \subseteq C_0(\mathbb{R}^d)$. We also have the skew-convolution semigroup property:
$$\rho_{s+t} = \rho_s^K * \rho_t, \quad \text{where } \rho_s^K(B) = \rho_s(e^{tK}B).$$
Another terminology for this is measure-valued cocycle.
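A quick derivation of the skew-convolution property, not spelled out on the slides: split the defining integral of $\rho_{s+t}$ at time $t$,

```latex
\int_0^{s+t} e^{-uK}\,dX(u)
  = \int_0^{t} e^{-uK}\,dX(u) \;+\; e^{-tK}\int_0^{s} e^{-vK}\,dX(v+t).
```

The two terms are independent, and by stationarity of the increments of $X$ the second has the law of $e^{-tK}\int_0^s e^{-vK}\,dX(v)$, namely $\rho_s^K$; hence $\rho_{s+t} = \rho_s^K * \rho_t$.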
• 30. We get nicer probabilistic properties of our solution if we make the following Assumption: $K$ is strictly positive definite. OU processes solve simple linear SDEs. They are important in applications such as volatility modelling, Lévy-driven CARMA processes, and branching processes with immigration. In infinite dimensions they solve the simplest linear SPDE with additive noise. To develop this theme, let $H$ and $K$ be separable Hilbert spaces and $(S(t),\,t \ge 0)$ be a $C_0$-semigroup on $H$ with infinitesimal generator $J$. Let $X$ be a Lévy process on $K$ and $C \in L(K, H)$.
• 37. We have the SPDE $dY(t) = JY(t)\,dt + C\,dX(t)$, whose unique solution is
$$Y(t) = S(t)Y_0 + \underbrace{\int_0^t S(t-s)C\,dX(s)}_{\text{stochastic convolution}},$$
and the generalised Mehler semigroup is
$$T_t f(x) = \int_{\mathbb{R}^d} f(S(t)x + y)\,\rho_t(dy).$$
From now on we will work in finite dimensions and assume the strict positive-definiteness of $K$.
• 41. Additive Processes and Wiener-Lévy Integrals. The study of O-U processes focusses attention on Wiener-Lévy integrals $I_f(t) := \int_0^t f(s)\,dX(s)$. For simplicity we assume that $f$ is continuous. Recall that $Z = (Z(t),\,t \ge 0)$ is an additive process if $Z(0) = 0$ (a.s.), $Z$ has independent increments and is stochastically continuous. It follows that each $Z(t)$ is infinitely divisible.
Theorem. $(I_f(t),\,t \ge 0)$ is an additive process.
Proof (sketch). Independent increments follow from the fact that for $r \le s \le t$,
$I_f(s) - I_f(r) = \int_r^s f(u)\,dX(u)$ is $\sigma\{X(b) - X(a);\ r \le a < b \le s\}$-measurable, and
$I_f(t) - I_f(s) = \int_s^t f(u)\,dX(u)$ is $\sigma\{X(d) - X(c);\ s \le c < d \le t\}$-measurable. $\Box$
• 47. Theorem. If $X$ has Lévy symbol $\eta$ then for each $t \ge 0$, $u \in \mathbb{R}^d$,
$$\mathbb{E}\big(e^{i(u, I_f(t))}\big) = \exp\left(\int_0^t \eta(f(s)^T u)\,ds\right).$$
Proof (sketch). Define $M_f(t) = \exp\big(i\big\langle u, \int_0^t f(s)\,dX(s)\big\rangle\big)$ and use Itô's formula to show that
$$M_f(t) = 1 + i\left\langle u, \int_0^t M_f(s-)f(s)\,dB(s)\right\rangle + \int_0^t\int_{\mathbb{R}^d - \{0\}} M_f(s-)\big(e^{i(u, f(s)x)} - 1\big)\,\tilde{N}(ds, dx) + \int_0^t M_f(s-)\,\eta(f(s)^T u)\,ds.$$
Now take expectations of both sides to get
$$\mathbb{E}(M_f(t)) = 1 + \int_0^t \mathbb{E}(M_f(s))\,\eta(f(s)^T u)\,ds,$$
and the result follows. $\Box$
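To fill in how "the result follows": the two stochastic-integral terms are martingales and so have zero expectation, and writing $\psi(t) := \mathbb{E}(M_f(t))$, the integral equation is a linear ODE,

```latex
\psi'(t) = \eta(f(t)^T u)\,\psi(t), \qquad \psi(0) = 1
\quad\Longrightarrow\quad
\psi(t) = \exp\left(\int_0^t \eta(f(s)^T u)\,ds\right),
```

which is the stated characteristic function of $I_f(t)$.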
• 51. If $X$ has characteristics $(b, A, \nu)$, it follows that $I_f(t)$ has characteristics $(b_t^f, A_t^f, \nu_t^f)$ where
$$b_t^f = \int_0^t f(s)b\,ds + \int_0^t\int_{\mathbb{R}^d - \{0\}} f(s)x\,\big(1_B(x) - 1_B(f(s)x)\big)\,\nu(dx)\,ds,$$
$$A_t^f = \int_0^t f(s)^T A f(s)\,ds, \qquad \nu_t^f(B) = \int_0^t \nu\big(f(s)^{-1}(B)\big)\,ds.$$
It follows that every OU process $Y$ conditioned on $Y_0 = y$ is an additive process. It will have characteristics as above with $f(s) = e^{-sK}$ and $b_t^f$ translated by $e^{-tK}y$.
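A sanity check of these formulas, not on the slides: for the Brownian-driven OU process on $\mathbb{R}$ take $f(s) = e^{-ks}$, $A = 1$, $b = 0$, $\nu = 0$. Then

```latex
A_t^f = \int_0^t e^{-2ks}\,ds = \frac{1 - e^{-2kt}}{2k},
```

which is exactly the Gaussian variance appearing in the classical Mehler formula (0.4).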
• 57. Invariant Measures, Stationary Processes, Ergodicity: General Theory. We want to investigate invariant measures and stationary solutions for OU processes. First, a little general theory. Let $(T_t,\,t \ge 0)$ be a general Markov semigroup with transition probabilities $p_t(x, B) = T_t 1_B(x)$, so that $T_t f(x) = \int_{\mathbb{R}^d} f(y)\,p_t(x, dy)$ for $f \in B_b(\mathbb{R}^d)$. We say that a probability measure $\mu$ is an invariant measure for the semigroup if for all $t \ge 0$, $f \in B_b(\mathbb{R}^d)$,
$$\int_{\mathbb{R}^d} T_t f(x)\,\mu(dx) = \int_{\mathbb{R}^d} f(x)\,\mu(dx). \qquad (0.5)$$
• 63. Equivalently, for all Borel sets $B$,
$$\int_{\mathbb{R}^d} p_t(x, B)\,\mu(dx) = \mu(B). \qquad (0.6)$$
To see that (0.5) $\Rightarrow$ (0.6), rewrite (0.5) as
$$\int_{\mathbb{R}^d}\int_{\mathbb{R}^d} f(y)\,p_t(x, dy)\,\mu(dx) = \int_{\mathbb{R}^d} f(x)\,\mu(dx),$$
and put $f = 1_B$. For the converse, approximate $f$ by simple functions and take limits.
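For the classical Gaussian OU semigroup, (0.6) can be verified directly: assuming $k > 0$, the candidate invariant law is $\mu = N\big(0, \frac{1}{2k}\big)$ and the transition kernel is $p_t(x, \cdot) = N\big(e^{-kt}x, \frac{1 - e^{-2kt}}{2k}\big)$. A minimal numerical sketch (parameters illustrative, not from the lecture):

```python
import numpy as np

# Invariance check for the Gaussian OU transition kernel: if Y0 ~ N(0, 1/(2k)),
# then Y(t) = e^{-kt} Y0 + noise should again be N(0, 1/(2k)).
k, t = 0.4, 2.5
var_mu = 1.0 / (2.0 * k)                                  # candidate invariant variance
var_kernel = (1.0 - np.exp(-2.0 * k * t)) / (2.0 * k)     # variance of p_t(x, .)
# Variance of e^{-kt} Y0 plus the independent kernel noise:
var_out = np.exp(-2.0 * k * t) * var_mu + var_kernel
```

Since Gaussian laws are determined by mean and variance, `var_out == var_mu` confirms that $\mu$ is preserved by $p_t$.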
• 67. e.g. A Lévy process doesn't have an invariant probability measure, but Lebesgue measure is invariant in the sense that for $f \in L^1(\mathbb{R}^d)$,
$$\int_{\mathbb{R}^d} T_t f(x)\,dx = \int_{\mathbb{R}^d}\int_{\mathbb{R}^d} f(x + y)\,p_t(dy)\,dx = \int_{\mathbb{R}^d} f(x)\,dx.$$
A process $Z = (Z(t),\,t \ge 0)$ is (strictly) stationary if for all $n \in \mathbb{N}$, $t_1, \ldots, t_n, h \in \mathbb{R}^+$,
$$(Z(t_1), \ldots, Z(t_n)) \stackrel{d}{=} (Z(t_1 + h), \ldots, Z(t_n + h)).$$
Theorem. A Markov process $Z$ for which $\mu$ is the law of $Z(0)$ is stationary if and only if $\mu$ is an invariant measure.
• 71. Proof. If the process is stationary then $\mu$ is invariant, since
$$\mu(B) = P(Z(0) \in B) = P(Z(t) \in B) = \int_{\mathbb{R}^d} p_t(x, B)\,\mu(dx).$$
For the converse, it is sufficient to prove that $\mathbb{E}(f_1(Z(t_1 + h)) \cdots f_n(Z(t_n + h)))$ is independent of $h$ for all $f_1, \ldots, f_n \in B_b(\mathbb{R}^d)$. The proof is by induction. Case $n = 1$: it is enough to show
$$\mathbb{E}(f(Z(t))) = \mathbb{E}(\mathbb{E}(f(Z(t))\,|\,\mathcal{F}_0)) = \mathbb{E}(T_t f(Z(0))) = \int_{\mathbb{R}^d} T_t f(x)\,\mu(dx) = \int_{\mathbb{R}^d} f(x)\,\mu(dx) = \mathbb{E}(f(Z(0))).$$
• 78. In general, use
$$\mathbb{E}(f_1(Z(t_1 + h)) \cdots f_n(Z(t_n + h))) = \mathbb{E}\big(f_1(Z(t_1 + h)) \cdots \mathbb{E}(f_n(Z(t_n + h))\,|\,\mathcal{F}_{t_{n-1} + h})\big) = \mathbb{E}\big(f_1(Z(t_1 + h)) \cdots T_{t_n - t_{n-1}} f_n(Z(t_{n-1} + h))\big). \quad \Box$$
• 80. Let $\mu$ be an invariant probability measure for a Markov semigroup $(T_t,\,t \ge 0)$. $\mu$ is ergodic if $T_t 1_B = 1_B$ ($\mu$ a.s.) $\Rightarrow \mu(B) = 0$ or $\mu(B) = 1$. If $\mu$ is ergodic then "time averages" = "space averages" for the corresponding stationary Markov process, i.e.
$$\lim_{T \to \infty} \frac{1}{T}\int_0^T f(Z(s))\,ds = \int_{\mathbb{R}^d} f(x)\,\mu(dx) \quad \text{a.s.}$$
Fact: The invariant measures form a convex set, and the ergodic measures are the extreme points of this set. It follows that if an invariant measure is unique then it is ergodic.
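The "time averages = space averages" statement can be illustrated by simulation. A sketch, not from the lecture, for the stationary Gaussian OU process with $f(y) = y^2$, whose space average is $1/(2k)$; the one-step transition used below is exact and all parameter values are illustrative.

```python
import numpy as np

# Time average of Y(s)^2 along one long stationary Gaussian OU path
# (dY = -kY dt + dB) versus the space average under mu = N(0, 1/(2k)).
rng = np.random.default_rng(42)
k, dt, n = 1.0, 0.01, 200_000                    # total time T = n * dt = 2000
a = np.exp(-k * dt)                               # exact one-step decay factor
s = np.sqrt((1.0 - a * a) / (2.0 * k))            # exact one-step noise scale
y = rng.standard_normal() / np.sqrt(2.0 * k)      # start in the invariant law
acc = 0.0
for _ in range(n):
    acc += y * y
    y = a * y + s * rng.standard_normal()
time_avg = acc / n                                # approximates (1/T) int_0^T Y^2 ds
space_avg = 1.0 / (2.0 * k)
```

For large $T$ the two averages agree, as ergodicity of the (unique) invariant measure predicts.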
• 86. The Self-Decomposable Connection. Recall that a random variable $Z$ is self-decomposable if for each $0 < a < 1$ there exists a random variable $W_a$, independent of $Z$, such that
$$Z \stackrel{d}{=} aZ + W_a, \quad \text{or equivalently} \quad \rho_Z = \rho_Z^a * \rho_{W_a}, \quad \text{where } \rho_Z^a(B) = \rho_Z(a^{-1}B).$$
Now suppose that $Y$ is a stationary Ornstein-Uhlenbeck process on $\mathbb{R}$. Then $Y_0$ is self-decomposable with $a = e^{-kt}$ and $W_{a(t)} = \int_0^t e^{-ks}\,dX(s)$, since
$$Y(t) = e^{-kt}Y_0 + \int_0^t e^{-(t-s)k}\,dX(s)$$
and by stationary increments of the process $X$.
$$Y(t) \stackrel{d}{=} Y_0 \quad \text{and} \quad \int_0^t e^{-k(t-s)}\,dX(s) \stackrel{d}{=} \int_0^t e^{-ks}\,dX(s)$$

$$\Rightarrow \quad Y_0 \stackrel{d}{=} e^{-kt} Y_0 + W_{a(t)}.$$

Dave Applebaum (Sheffield UK)   Lecture 5   December 2011   20 / 44
Now suppose that $\mu$ is self-decomposable; more precisely, that $\mu = \mu^{e^{-kt}} * \rho_t$, where $\rho_t$ is the law of $W_{a(t)}$. Then

$$\int_{\mathbb{R}} T_t f(x)\,\mu(dx) = \int_{\mathbb{R}}\int_{\mathbb{R}} f(e^{-kt}x + y)\,\rho_t(dy)\,\mu(dx)$$
$$= \int_{\mathbb{R}}\int_{\mathbb{R}} f(x + y)\,\rho_t(dy)\,\mu^{e^{-kt}}(dx)$$
$$= \int_{\mathbb{R}} f(x)\,(\mu^{e^{-kt}} * \rho_t)(dx)$$
$$= \int_{\mathbb{R}} f(x)\,\mu(dx).$$

So $\mu$ is an invariant measure.

Dave Applebaum (Sheffield UK)   Lecture 5   December 2011   21 / 44
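In the Gaussian case this invariance can be verified directly on characteristic functions: if $\mu = N(0, 1/(2k))$ then $\rho_t = N(0, (1-e^{-2kt})/(2k))$, and since convolution multiplies characteristic functions, the identity $\mu^{e^{-kt}} * \rho_t = \mu$ reduces to an identity of variances. A quick sketch (the values of $k$, $t$, $u$ are arbitrary):

```python
import math

k, t = 0.7, 1.3                   # arbitrary rate and time
a = math.exp(-k * t)              # scaling factor a = e^{-kt}

var_mu = 1 / (2 * k)              # mu = N(0, 1/(2k))
var_rho = (1 - a**2) / (2 * k)    # rho_t = law of W_{a(t)} in the Gaussian case

def phi(u, var):
    """Characteristic function of N(0, var)."""
    return math.exp(-0.5 * var * u**2)

# phi_{mu^a}(u) = phi_mu(a u), so invariance reads phi_mu(a u) * phi_rho(u) = phi_mu(u).
u = 2.5
lhs = phi(a * u, var_mu) * phi(u, var_rho)
rhs = phi(u, var_mu)
print(lhs, rhs)
```

The identity holds exactly because $e^{-2kt}/(2k) + (1-e^{-2kt})/(2k) = 1/(2k)$.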
So we have shown that:

Theorem. The following are equivalent for the O-U process $Y$:
- $Y$ is stationary.
- The law of $Y(0)$ is an invariant measure.
- The law of $Y(0)$ is self-decomposable (with $W_{a(t)} = \int_0^t e^{-ks}\,dX(s)$).

Dave Applebaum (Sheffield UK)   Lecture 5   December 2011   22 / 44
We seek some condition on the Lévy process $X$ which ensures that $Y$ is stationary.

Fact: If $Y_\infty := \int_0^\infty e^{-ks}\,dX(s)$ exists in distribution then it is self-decomposable. To see this observe that (using the stationary increments of $X$)

$$\int_0^\infty e^{-ks}\,dX(s) = \int_t^\infty e^{-ks}\,dX(s) + \int_0^t e^{-ks}\,dX(s)$$
$$\stackrel{d}{=} \int_0^\infty e^{-k(t+s)}\,dX(s) + \int_0^t e^{-ks}\,dX(s)$$
$$= e^{-kt}\int_0^\infty e^{-ks}\,dX(s) + \int_0^t e^{-ks}\,dX(s).$$

Dave Applebaum (Sheffield UK)   Lecture 5   December 2011   23 / 44
When does $\lim_{t\to\infty}\int_0^t e^{-ks}\,dX(s)$ exist in distribution? Use the Lévy-Itô decomposition:

$$X(t) = bt + M(t) + \int_{|x|\geq 1} x\,N(t, dx).$$

It is not difficult to see that $\lim_{t\to\infty}\int_0^t e^{-ks}\,dM(s)$ exists in the $L^2$-sense.

Fact: $\lim_{t\to\infty}\int_0^t\int_{|x|\geq 1} e^{-ks}x\,N(ds, dx)$ exists in distribution if and only if

$$\int_{|x|\geq 1}\log(1+|x|)\,\nu(dx) < \infty.$$

Dave Applebaum (Sheffield UK)   Lecture 5   December 2011   24 / 44
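To get a feel for the log-moment condition, consider the illustrative (not from the lecture) Lévy measure $\nu(dx) = x^{-2}\,dx$ on $[1, \infty)$. Then $\int_{x\geq 1}\log(1+x)\,x^{-2}\,dx$ is finite, and integration by parts gives the exact value $2\log 2$. A numerical sketch, substituting $x = e^u$ so the integrand decays exponentially:

```python
import math

# Check int_1^inf log(1+x) x^{-2} dx < inf for the illustrative Levy
# measure nu(dx) = x^{-2} dx on [1, inf).  Substituting x = e^u gives
# int_0^inf log(1 + e^u) e^{-u} du; the integrand decays like u e^{-u},
# so truncating at u = 40 loses a negligible tail.
def integrand(u):
    return math.log(1 + math.exp(u)) * math.exp(-u)

n, u_max = 400_000, 40.0
h = u_max / n
# composite trapezoidal rule
total = 0.5 * (integrand(0.0) + integrand(u_max))
total += sum(integrand(i * h) for i in range(1, n))
total *= h

exact = 2 * math.log(2)   # by parts: [-log(1+x)/x + log(x/(1+x))] from 1 to inf
print(total, exact)
```

A tail like $x^{-1}$, by contrast, would make $\int \log(1+x)\,x^{-1}\,dx$ diverge, and the limit above would fail to exist.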
To prove this you need:

1. If $(\xi_n,\, n \in \mathbb{N})$ are i.i.d. then $\sum_{n=1}^\infty c^n \xi_n$ converges a.s. ($0 < c < 1$) if and only if $E(\log(1+|\xi_1|)) < \infty$.

2. $\int_0^n\int_{|x|\geq 1} e^{-ks}x\,N(ds, dx) \stackrel{d}{=} \sum_{j=0}^{n-1} e^{-kj} M_j$, where $M_j := \int_j^{j+1}\int_{|x|\geq 1} e^{-k(s-j)}x\,N(ds, dx)$. Note that $(M_j,\, j \in \mathbb{N})$ are i.i.d.

In this case, $Y$ has characteristics $(b_\infty, A_\infty, \nu_\infty)$.

e.g. Brownian motion case: $X(t) = B(t)$, $\mu \sim N\!\left(0, \frac{1}{2k}\right)$.

Dave Applebaum (Sheffield UK)   Lecture 5   December 2011   25 / 44
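The Brownian case can be checked by simulation: many independent O-U paths started from $Y(0) = 0$ should, for large $t$, have empirical variance close to $1/(2k)$. This is a sketch rather than anything from the lecture; the parameter values are arbitrary, and the simulation again uses the exact Gaussian one-step transition.

```python
import numpy as np

# O-U process dY = -k Y dt + dB(t): conditionally on Y_n, the exact
# transition over a step dt is Y_{n+1} ~ N(a Y_n, (1 - a^2)/(2k)) with
# a = exp(-k dt).  Started from 0, the law of Y(t) tends to N(0, 1/(2k)).
k, dt, n_steps, n_paths = 1.0, 0.01, 800, 100_000   # final time t = 8
a = np.exp(-k * dt)
sigma_eps = np.sqrt((1 - a**2) / (2 * k))

rng = np.random.default_rng(1)
y = np.zeros(n_paths)
for _ in range(n_steps):
    y = a * y + sigma_eps * rng.standard_normal(n_paths)

empirical_var = y.var()
limit_var = 1 / (2 * k)
print(empirical_var, limit_var)
```

In fact $Y(t)$ here is exactly $N(0, (1-e^{-2kt})/(2k))$, so at $t = 8$ the variance already matches the limit to seven decimal places, up to Monte Carlo error.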