Lectures on Lévy Processes and Stochastic Calculus (Koc University)

Lecture 4: Stochastic Integration and Itô's Formula

David Applebaum, School of Mathematics and Statistics, University of Sheffield, UK

8th December 2011
Stochastic Integration

We next give a rather rapid account of stochastic integration in a form suitable for application to Lévy processes. Let $X = M + C$ be a semimartingale. The problem of stochastic integration is to make sense of objects of the form

\[
\int_0^t F(s)\,dX(s) := \int_0^t F(s)\,dM(s) + \int_0^t F(s)\,dC(s).
\]

The second integral can be well-defined using the usual Lebesgue-Stieltjes approach. The first one cannot: indeed, if $M$ is a continuous martingale of finite variation, then $M$ is a.s. constant.
Refer to the martingale part of the Lévy-Itô decomposition. Define a "martingale-valued measure" by

\[
M(t, E) = B(t)\,\delta_0(E) + \tilde{N}(t, E - \{0\}),
\]

for $E \in \mathcal{B}(\mathbb{R}^d)$, where $B = (B(t), t \geq 0)$ is a one-dimensional Brownian motion. The following key properties then hold:

- $M((s,t], E) = M(t, E) - M(s, E)$ is independent of $\mathcal{F}_s$, for $0 \leq s < t < \infty$;
- $\mathbb{E}(M((s,t], E)) = 0$;
- $\mathbb{E}(M((s,t], E)^2) = \rho((s,t], E)$, where $\rho((s,t], E) = (t-s)(\delta_0(E) + \nu(E - \{0\}))$.
We're going to unify the usual stochastic integral with the Poisson integral, by defining

\[
\int_0^t \int_E F(s,x)\,M(ds,dx) := \int_0^t G(s)\,dB(s) + \int_0^t \int_{E-\{0\}} F(s,x)\,\tilde{N}(ds,dx),
\]

where $G(s) = F(s,0)$. Of course, we need some conditions on the class of integrands. Fix $E \in \mathcal{B}(\mathbb{R}^d)$ and $0 < T < \infty$, and let $\mathcal{P}$ denote the smallest $\sigma$-algebra with respect to which all mappings $F : [0,T] \times E \times \Omega \to \mathbb{R}$ satisfying (1) and (2) below are measurable:

1. For each $0 \leq t \leq T$, the mapping $(x, \omega) \mapsto F(t,x,\omega)$ is $\mathcal{B}(E) \otimes \mathcal{F}_t$ measurable;
2. For each $x \in E$, $\omega \in \Omega$, the mapping $t \mapsto F(t,x,\omega)$ is left-continuous.
We call $\mathcal{P}$ the predictable $\sigma$-algebra. A $\mathcal{P}$-measurable mapping $G : [0,T] \times E \times \Omega \to \mathbb{R}$ is then said to be predictable. The definition clearly extends naturally to the case where $[0,T]$ is replaced by $\mathbb{R}^+$. Note that by (1), if $G$ is predictable then the process $t \mapsto G(t,x,\cdot)$ is adapted, for each $x \in E$. If $G$ satisfies (1) and is left-continuous, then it is clearly predictable.
Define $\mathcal{H}_2(T,E)$ to be the linear space of all equivalence classes of mappings $F : [0,T] \times E \times \Omega \to \mathbb{R}$ which coincide almost everywhere with respect to $\rho \times P$ and which satisfy the following conditions:

- $F$ is predictable;
- $\int_0^T \int_E \mathbb{E}(|F(t,x)|^2)\,\rho(dt,dx) < \infty$.

It can be shown that $\mathcal{H}_2(T,E)$ is a real Hilbert space with respect to the inner product $\langle F, G \rangle_{T,\rho} = \int_0^T \int_E \mathbb{E}(F(t,x)\,G(t,x))\,\rho(dt,dx)$.
Define $\mathcal{S}(T,E)$ to be the linear space of all simple processes in $\mathcal{H}_2(T,E)$, where $F$ is simple if, for some $m, n \in \mathbb{N}$, there exist $0 \leq t_1 \leq t_2 \leq \cdots \leq t_{m+1} = T$ and a family of disjoint Borel subsets $A_1, A_2, \ldots, A_n$ of $E$ with each $\nu(A_k) < \infty$ such that

\[
F = \sum_{j=1}^{m} \sum_{k=1}^{n} F_{j,k}\,\mathbf{1}_{(t_j, t_{j+1}]}\,\mathbf{1}_{A_k},
\]

where each $F_{j,k}$ is a bounded $\mathcal{F}_{t_j}$-measurable random variable. Note that $F$ is left-continuous and $\mathcal{B}(E) \otimes \mathcal{F}_t$ measurable, hence it is predictable. An important fact is that $\mathcal{S}(T,E)$ is dense in $\mathcal{H}_2(T,E)$.
One of Itô's greatest achievements was the definition of the stochastic integral $I_T(F)$, for $F$ simple, by separating the "past" from the "future" within the Riemann sum:

\[
I_T(F) = \sum_{j=1}^{m} \sum_{k=1}^{n} F_{j,k}\, M((t_j, t_{j+1}], A_k), \tag{0.1}
\]

so on each time interval $[t_j, t_{j+1}]$, $F_{j,k}$ encapsulates information obtained by time $t_j$, while $M$ gives the innovation into the future $(t_j, t_{j+1}]$.
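To see (0.1) at work, here is a minimal numerical sketch (an illustration, not part of the lecture) of the Brownian case $d = 1$: the integrand is frozen at the left endpoint of each subinterval, so each $F_j$ uses only the past of the path.

```python
# Ito integral of a simple process against Brownian motion:
# I_T(F) = sum_j F_j (B(t_{j+1}) - B(t_j)), with F_j past-measurable.
import numpy as np

rng = np.random.default_rng(0)

T, m = 1.0, 1000                            # horizon and number of subintervals
t = np.linspace(0.0, T, m + 1)              # partition 0 = t_1 < ... < t_{m+1} = T
dB = rng.normal(0.0, np.sqrt(np.diff(t)))   # increments B(t_{j+1}) - B(t_j)
B = np.concatenate(([0.0], np.cumsum(dB)))  # Brownian path at partition points

# Integrand F(s) = B(t_j) for s in (t_j, t_{j+1}]: frozen at the left
# endpoint, so each F_j is F_{t_j}-measurable.
F = B[:-1]

I_T = np.sum(F * dB)                        # the Ito Riemann sum (0.1)
# For this integrand the exact value is (B(T)^2 - T)/2, by Ito's formula.
print(I_T, (B[-1] ** 2 - T) / 2)
```

With the left-endpoint choice the sum approximates $\int_0^T B\,dB = (B(T)^2 - T)/2$; freezing $F$ at the right endpoint instead would destroy both the zero-mean property and the isometry established below.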
Lemma. For each $T \geq 0$ and $F \in \mathcal{S}(T,E)$,

\[
\mathbb{E}(I_T(F)) = 0, \qquad \mathbb{E}(I_T(F)^2) = \int_0^T \int_E \mathbb{E}(|F(t,x)|^2)\,\rho(dt,dx).
\]

Proof. $\mathbb{E}(I_T(F)) = 0$ is a straightforward application of linearity and independence. The second result is quite messy; we lose nothing important by just looking at the Brownian case, with $d = 1$.
So let $F(t) := \sum_{j=1}^{m} F_j\,\mathbf{1}_{(t_j, t_{j+1}]}(t)$; then $I_T(F) = \sum_{j=1}^{m} F_j (B(t_{j+1}) - B(t_j))$ and

\[
I_T(F)^2 = \sum_{j=1}^{m} \sum_{p=1}^{m} F_j F_p\,(B(t_{j+1}) - B(t_j))(B(t_{p+1}) - B(t_p)).
\]

Now fix $j$ and split the second sum into three pieces, corresponding to $p < j$, $p = j$ and $p > j$.
When $p < j$, $F_j F_p (B(t_{p+1}) - B(t_p)) \in \mathcal{F}_{t_j}$, which is independent of $B(t_{j+1}) - B(t_j)$, so

\[
\mathbb{E}[F_j F_p (B(t_{j+1}) - B(t_j))(B(t_{p+1}) - B(t_p))] = \mathbb{E}[F_j F_p (B(t_{p+1}) - B(t_p))]\,\mathbb{E}(B(t_{j+1}) - B(t_j)) = 0.
\]

Exactly the same argument works when $p > j$.
What remains is the case $p = j$, and by independence again

\[
\mathbb{E}(I_T(F)^2) = \sum_{j=1}^{m} \mathbb{E}(F_j^2)\,\mathbb{E}\big((B(t_{j+1}) - B(t_j))^2\big) = \sum_{j=1}^{m} \mathbb{E}(F_j^2)(t_{j+1} - t_j).
\]

We deduce from Lemma 1 that $I_T$ is a linear isometry from $\mathcal{S}(T,E)$ into $L^2(\Omega, \mathcal{F}, P)$, and hence it extends to an isometric embedding of the whole of $\mathcal{H}_2(T,E)$ into $L^2(\Omega, \mathcal{F}, P)$. We continue to denote this extension by $I_T$, and we call $I_T(F)$ the (Itô) stochastic integral of $F \in \mathcal{H}_2(T,E)$. When convenient, we will use the Leibniz notation $I_T(F) := \int_0^T \int_E F(t,x)\,M(dt,dx)$.
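The isometry can be checked numerically; the following Monte Carlo sketch (an illustration only, with the hypothetical bounded integrand $F_j = \cos B(t_j)$) estimates both sides of $\mathbb{E}(I_T(F)^2) = \sum_j \mathbb{E}(F_j^2)(t_{j+1} - t_j)$ in the Brownian case.

```python
# Monte Carlo check of the Ito isometry in the Brownian case d = 1.
import numpy as np

rng = np.random.default_rng(1)

T, m, n_paths = 1.0, 50, 100_000
t = np.linspace(0.0, T, m + 1)
dt = np.diff(t)

dB = rng.normal(0.0, np.sqrt(dt), size=(n_paths, m))      # increments per path
B = np.cumsum(dB, axis=1)
B_left = np.hstack([np.zeros((n_paths, 1)), B[:, :-1]])   # B(t_j), past-measurable

F = np.cos(B_left)                  # bounded, F_{t_j}-measurable choice of F_j
I_T = np.sum(F * dB, axis=1)        # I_T(F) on each sample path

lhs = np.mean(I_T ** 2)                        # estimate of E(I_T(F)^2)
rhs = np.sum(np.mean(F ** 2, axis=0) * dt)     # sum_j E(F_j^2)(t_{j+1} - t_j)
print(lhs, rhs)                                # agree to Monte Carlo accuracy
```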
So for predictable $F$ satisfying $\int_0^T \int_E \mathbb{E}(|F(s,x)|^2)\,\rho(ds,dx) < \infty$, we can find a sequence $(F_n, n \in \mathbb{N})$ of simple processes such that

\[
\int_0^T \int_E F(t,x)\,M(dt,dx) = \lim_{n \to \infty} \int_0^T \int_E F_n(t,x)\,M(dt,dx).
\]

The limit is taken in the $L^2$-sense and is independent of the choice of approximating sequence.
The following theorem summarises some useful properties of the stochastic integral.

Theorem. If $F, G \in \mathcal{H}_2(T,E)$ and $\alpha, \beta \in \mathbb{R}$, then:

1. $I_T(\alpha F + \beta G) = \alpha I_T(F) + \beta I_T(G)$.
2. $\mathbb{E}(I_T(F)) = 0$, $\mathbb{E}(I_T(F)^2) = \int_0^T \int_E \mathbb{E}(|F(t,x)|^2)\,\rho(dt,dx)$.
3. $(I_t(F), t \geq 0)$ is $\mathcal{F}_t$-adapted.
4. $(I_t(F), t \geq 0)$ is a square-integrable martingale.
Proof. (1) and (2) are easy. For (3), let $(F_n, n \in \mathbb{N})$ be a sequence in $\mathcal{S}(T,E)$ converging to $F$; then each process $(I_t(F_n), t \geq 0)$ is clearly adapted. Since $I_t(F_n) \to I_t(F)$ in $L^2$ as $n \to \infty$, we can find a subsequence $(F_{n_k}, k \in \mathbb{N})$ such that $I_t(F_{n_k}) \to I_t(F)$ a.s. as $k \to \infty$, and the required result follows.
(4) Let $F \in \mathcal{S}(T,E)$ and (without loss of generality) choose $0 < s = t_l < t_{l+1} < t$. Then it is easy to see that $I_t(F) = I_s(F) + I_{s,t}(F)$, and hence $\mathbb{E}_s(I_t(F)) = I_s(F) + \mathbb{E}_s(I_{s,t}(F))$ by (3). However,

\[
\mathbb{E}_s(I_{s,t}(F)) = \mathbb{E}_s\Bigg( \sum_{j=l+1}^{m} \sum_{k=1}^{n} F_{j,k}\, M((t_j, t_{j+1}], A_k) \Bigg) = \sum_{j=l+1}^{m} \sum_{k=1}^{n} \mathbb{E}_s(F_{j,k})\,\mathbb{E}(M((t_j, t_{j+1}], A_k)) = 0.
\]

The result now follows by the continuity of $\mathbb{E}_s$ in $L^2$. Indeed, let $(F_n, n \in \mathbb{N})$ be a sequence in $\mathcal{S}(T,E)$ converging to $F$; then we have

\[
\|\mathbb{E}_s(I_t(F)) - \mathbb{E}_s(I_t(F_n))\|_2 \leq \|I_t(F) - I_t(F_n)\|_2 = \|F - F_n\|_{T,\rho} \to 0 \text{ as } n \to \infty.
\]
We can extend the stochastic integral $I_T(F)$ to integrands in $\mathcal{P}_2(T,E)$. This is the linear space of all equivalence classes of mappings $F : [0,T] \times E \times \Omega \to \mathbb{R}$ which coincide almost everywhere with respect to $\rho \times P$, and which satisfy the following conditions:

- $F$ is predictable;
- $P\left( \int_0^T \int_E |F(t,x)|^2\,\rho(dt,dx) < \infty \right) = 1$.

If $F \in \mathcal{P}_2(T,E)$, then $(I_t(F), t \geq 0)$ is always a local martingale, but not necessarily a martingale.
Poisson Stochastic Integrals

Let $A$ be an arbitrary Borel set in $\mathbb{R}^d - \{0\}$ which is bounded below, and introduce the compound Poisson process $P = (P(t), t \geq 0)$, where each $P(t) = \int_A x\,N(t,dx)$. Let $K$ be a predictable mapping; then, generalising the earlier Poisson integrals, we define

\[
\int_0^T \int_A K(t,x)\,N(dt,dx) = \sum_{0 \leq u \leq T} K(u, \Delta P(u))\,\mathbf{1}_A(\Delta P(u)), \tag{0.2}
\]

as a random finite sum.
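Because $A$ is bounded below, only finitely many jumps of $P$ land in $A$ on $[0,T]$, so (0.2) really is a finite sum. Here is a minimal simulation sketch (an illustration only; the finite-activity measure with $\nu(A) = 2$ and jump sizes uniform on $[1,2]$, and the integrand $K(t,x) = t x^2$, are hypothetical choices, with $d = 1$).

```python
# The Poisson stochastic integral (0.2) as a random finite sum over the
# jumps of a compound Poisson process whose jumps all land in A = [1, 2].
import numpy as np

rng = np.random.default_rng(2)

T, rate = 1.0, 2.0                              # horizon and nu(A)
n_jumps = rng.poisson(rate * T)                 # number of jumps of P on [0, T]
times = np.sort(rng.uniform(0.0, T, n_jumps))   # jump times of P
sizes = rng.uniform(1.0, 2.0, n_jumps)          # jump sizes Delta P(u), all in A

def K(t, x):
    # an illustrative deterministic (hence predictable) integrand
    return t * x ** 2

# int_0^T int_A K(t, x) N(dt, dx) = sum over jump times u of K(u, Delta P(u))
poisson_integral = np.sum(K(times, sizes))
print(n_jumps, poisson_integral)
```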
In particular, if $H$ satisfies the square-integrability condition given above, we may then define, for each $1 \leq i \leq d$,

\[
\int_0^T \int_A H^i(t,x)\,\tilde{N}(dt,dx) := \int_0^T \int_A H^i(t,x)\,N(dt,dx) - \int_0^T \int_A H^i(t,x)\,\nu(dx)\,dt.
\]

The definition (0.2) can, in principle, be used to define stochastic integrals for a more general class of integrands than we have been considering. For simplicity, let $N = (N(t), t \geq 0)$ be a Poisson process of intensity 1 and let $f : \mathbb{R} \to \mathbb{R}$; then we may define

\[
\int_0^t f(N(s))\,dN(s) = \sum_{0 \leq s \leq t} f(N(s-) + \Delta N(s))\,\Delta N(s).
\]
Lévy-type stochastic integrals

We take $E = \hat{B} - \{0\} = \{x \in \mathbb{R}^d : 0 < |x| < 1\}$ throughout this subsection. We say that an $\mathbb{R}^d$-valued stochastic process $Y = (Y(t), t \geq 0)$ is a Lévy-type stochastic integral if it can be written in the following form for each $1 \leq i \leq d$, $t \geq 0$:

\[
Y^i(t) = Y^i(0) + \int_0^t G^i(s)\,ds + \int_0^t F^i_j(s)\,dB^j(s) + \int_0^t \int_{|x|<1} H^i(s,x)\,\tilde{N}(ds,dx) + \int_0^t \int_{|x|\geq 1} K^i(s,x)\,N(ds,dx),
\]

where, for each $1 \leq i \leq d$, $1 \leq j \leq m$, $t \geq 0$, we have $|G^i|^{\frac{1}{2}}, F^i_j \in \mathcal{P}_2(T)$, $H^i \in \mathcal{P}_2(T,E)$, and $K$ is predictable. $B$ is an $m$-dimensional standard Brownian motion and $N$ is an independent Poisson random measure on $\mathbb{R}^+ \times (\mathbb{R}^d - \{0\})$ with compensator $\tilde{N}$ and intensity measure $\nu$, which we will assume is a Lévy measure.
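To keep the four terms apart, here is a simulation sketch of a scalar Lévy-type stochastic integral at time $T$ (an illustration only: it assumes a finite-activity Lévy measure, so the compensated small-jump integral is a finite sum minus its compensator, and it uses the hypothetical constant coefficients $G = 1$, $F = 0.3$, $H(s,x) = K(s,x) = x$).

```python
# One sample of Y(T) for a scalar Levy-type stochastic integral, assuming a
# FINITE-activity Levy measure nu: rate-3 jumps uniform on (0, 1) (compensated)
# and rate-1 jumps uniform on [1, 2] (uncompensated).
import numpy as np

rng = np.random.default_rng(3)
T = 1.0

# Brownian part on a grid
n = 2000
dt = T / n
dB = rng.normal(0.0, np.sqrt(dt), n)

ns, nl = rng.poisson(3.0 * T), rng.poisson(1.0 * T)
small = rng.uniform(0.0, 1.0, ns)     # jumps with |x| < 1
large = rng.uniform(1.0, 2.0, nl)     # jumps with |x| >= 1
small_comp = 3.0 * 0.5 * T            # int_0^T int_{|x|<1} x nu(dx) ds = 3 * 0.5 * T

Y_T = (1.0 * T                        # int_0^T G(s) ds with G = 1
       + 0.3 * dB.sum()               # int_0^T F(s) dB(s) with F = 0.3
       + small.sum() - small_comp     # int int_{|x|<1} x Ntilde(ds, dx)
       + large.sum())                 # int int_{|x|>=1} x N(ds, dx)
print(Y_T)
```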
We will assume that the random variable $Y(0)$ is $\mathcal{F}_0$-measurable, and then it is clear that $Y$ is an adapted process. We can often simplify complicated expressions by employing the notation of stochastic differentials to represent Lévy-type stochastic integrals. We then write the last expression as

\[
dY(t) = G(t)\,dt + F(t)\,dB(t) + H(t,x)\,\tilde{N}(dt,dx) + K(t,x)\,N(dt,dx).
\]

When we want to particularly emphasise the domains of integration with respect to $x$, we will use the equivalent notation

\[
dY(t) = G(t)\,dt + F(t)\,dB(t) + \int_{|x|<1} H(t,x)\,\tilde{N}(dt,dx) + \int_{|x|\geq 1} K(t,x)\,N(dt,dx).
\]

Clearly $Y$ is a semimartingale.
Let $M = (M(t), t \geq 0)$ be an adapted process which is such that $MJ \in \mathcal{P}_2(t,A)$ whenever $J \in \mathcal{P}_2(t,A)$ (where $A \in \mathcal{B}(\mathbb{R}^d)$ is arbitrary). For example, it is sufficient to take $M$ to be adapted and left-continuous. For these processes we can define an adapted process $Z = (Z(t), t \geq 0)$ by the prescription that it have the stochastic differential

\[
dZ(t) = M(t)G(t)\,dt + M(t)F(t)\,dB(t) + M(t)H(t,x)\,\tilde{N}(dt,dx) + M(t)K(t,x)\,N(dt,dx),
\]

and we will adopt the natural notation $dZ(t) = M(t)\,dY(t)$.
  104. 104. Example (Lévy Stochastic Integrals)Let X be a Lévy process with characteristics (b, a, ν) and Lévy-Itôdecomposition: X (t) = bt + Ba (t) + ˜ x N(t, dx) + xN(t, dx), |x|<1 |x|≥1for each t ≥ 0. Let L ∈ P2 (t) for all t ≥ 0 and choose eachFji = aji L, H i = K i = x i L. Then we can construct processes with thestochastic differential dY (t) = L(t)dX (t) (0.3)We call Y a Lévy stochastic integral.In the case where X has finite variation, the Lévy stochastic integral Ycan also be constructed as a Lebesgue-Stieltjes integral, and thiscoincides (up to a set of measure zero) with the prescription (0.3). Dave Applebaum (Sheffield UK) Lecture 4 December 2011 23 / 58
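As a quick numerical illustration (not from the lecture), here is a minimal sketch of the prescription (0.3): we simulate the driver X as a Brownian motion plus finite-activity compound Poisson jumps, and approximate Y by left-point Riemann sums, the left endpoint reflecting the predictability required of the integrand. The function names, the jump model, and the choice L(t) = sin t are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def levy_increments(n_steps, dt, sigma=1.0, jump_rate=0.5, jump_scale=1.0):
    """Assumed driving noise: increments of X(t) = sigma*B(t) + compound
    Poisson jumps with N(0, jump_scale^2) marks (a finite-activity Levy process)."""
    gaussian = sigma * np.sqrt(dt) * rng.standard_normal(n_steps)
    n_jumps = rng.poisson(jump_rate * dt, size=n_steps)
    jumps = np.array([rng.normal(0.0, jump_scale, k).sum() for k in n_jumps])
    return gaussian + jumps

def simulate_levy_integral(L, T=1.0, n_steps=1000):
    """Left-point Riemann-sum approximation of Y(t) = int_0^t L(s) dX(s)."""
    dt = T / n_steps
    t = np.linspace(0.0, T, n_steps + 1)
    dX = levy_increments(n_steps, dt)
    # Evaluate L at the left endpoint of each interval, then accumulate.
    Y = np.concatenate(([0.0], np.cumsum(L(t[:-1]) * dX)))
    return t, Y

t, Y = simulate_levy_integral(L=np.sin)
print(f"Y(1) ~ {Y[-1]:.4f}")
```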
112. Example: The Ornstein–Uhlenbeck Process (OU Process)

    Y(t) = e^{-\lambda t} y_0 + \int_0^t e^{-\lambda(t-s)}\,dX(s),    (0.4)

where y_0 ∈ R^d is fixed, is a Markov process. The condition

    \int_{|x|>1} \log(1+|x|)\,\nu(dx) < \infty

is necessary and sufficient for it to be stationary. There are important applications to volatility modelling in finance, developed by Ole Barndorff-Nielsen and Neil Shephard. Intriguingly, every self-decomposable random variable can be naturally embedded in a stationary Lévy-driven OU process. See Lecture 5 for more details.
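A hedged simulation sketch of (0.4), not taken from the lecture: over a short step the recursion Y(t_{k+1}) = e^{-λ Δt} Y(t_k) + ∫_{t_k}^{t_{k+1}} e^{-λ(t_{k+1}-s)} dX(s) is approximated by freezing the kernel at 1, giving an Euler-type scheme. The driver (Brownian part plus exponential jumps, whose Lévy measure satisfies the log-moment condition above) and all parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def ou_path(y0=1.0, lam=2.0, T=10.0, n_steps=5000, sigma=0.3, jump_rate=1.0):
    """Euler-type sketch of the Levy-driven OU process (0.4):
    Y(t_{k+1}) ~= exp(-lam*dt) * Y(t_k) + dX_k."""
    dt = T / n_steps
    Y = np.empty(n_steps + 1)
    Y[0] = y0
    decay = np.exp(-lam * dt)
    for k in range(n_steps):
        # Driving increment: Brownian part plus occasional exponential jump
        # (an illustrative finite-activity choice with finite log-moment).
        dX = sigma * np.sqrt(dt) * rng.standard_normal()
        if rng.random() < jump_rate * dt:
            dX += rng.exponential(0.5)
        Y[k + 1] = decay * Y[k] + dX
    return Y

Y = ou_path()
print(f"long-run mean ~ {Y[2500:].mean():.3f}")  # sample after a burn-in period
```

The mean-reverting decay e^{-λ Δt} is what drives the path toward its stationary regime, which is why the printed average is taken only after a burn-in.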
118. Stochastic integration has a plethora of applications, including filtering, stochastic control and infinite-dimensional analysis. Here is some motivation from finance. Suppose that X(t) is the value of a stock at time t and F(t) is the number of stocks owned at time t. Assume for now that we buy and sell stocks at discrete times 0 = t_0 < t_1 < · · · < t_n < t_{n+1} = T. Then the total value of the portfolio at time T is

    V(T) = V(0) + \sum_{j=0}^{n} F(t_j)\,(X(t_{j+1}) - X(t_j)),

and in the limit as the times become infinitesimally close together, we have the stochastic integral

    V(T) = V(0) + \int_0^T F(s)\,dX(s).
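The discrete portfolio sum is easy to compute directly. Below is a small sketch (the stock model and parameter values are illustrative assumptions, not part of the lecture) that evaluates V(T) = V(0) + Σ_j F(t_j)(X(t_{j+1}) − X(t_j)) for a buy-and-hold strategy, where the sum telescopes to X(T) − X(0), a handy sanity check.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical stock path on a daily grid: geometric-Brownian-type increments.
n, dt, mu, vol = 250, 1.0 / 250, 0.05, 0.2
log_steps = (mu - 0.5 * vol**2) * dt + vol * np.sqrt(dt) * rng.standard_normal(n)
X = 100.0 * np.concatenate(([1.0], np.exp(np.cumsum(log_steps))))  # X(t_0),...,X(t_n)

F = np.ones(n)  # buy-and-hold: one share held over each interval [t_j, t_{j+1})
V0 = 0.0

# V(T) = V(0) + sum_j F(t_j) * (X(t_{j+1}) - X(t_j)): the discrete portfolio sum
# whose refinement limit defines the stochastic integral int_0^T F(s) dX(s).
V_T = V0 + np.sum(F * np.diff(X))
print(f"V(T) = {V_T:.2f}; check X(T) - X(0) = {X[-1] - X[0]:.2f}")
```

Note that F(t_j) is evaluated at the *left* endpoint of each interval: the position must be decided before the price move is revealed, which is exactly the predictability built into the Itô integral.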
122. Stochastic integration against Brownian motion was first developed by Wiener for sure (i.e. deterministic) functions. Itô's groundbreaking work in 1944 extended this to random adapted integrands. The generalisation of the integrator to arbitrary martingales was due to Kunita and Watanabe in 1967, and the further step to allow semimartingales was due to P. A. Meyer and the Strasbourg school in the 1970s.