# Lectures on Lévy Processes and Stochastic Calculus, Lecture 3: The Lévy-Itô Decomposition


Lectures on Lévy Processes and Stochastic Calculus (Koc University). Lecture 3: The Lévy-Itô Decomposition. David Applebaum, School of Mathematics and Statistics, University of Sheffield, UK, 7th December 2011.
### Filtrations, Markov Processes and Martingales

We recall the probability space (Ω, F, P) which underlies our investigations; F contains all possible events in Ω. When we introduce the arrow of time, it is convenient to be able to consider only those events which can occur up to and including time t. We denote by F_t this sub-σ-algebra of F. To be able to consider all time instants on an equal footing, we define a *filtration* to be an increasing family (F_t, t ≥ 0) of sub-σ-algebras of F, i.e.

0 ≤ s ≤ t < ∞ ⇒ F_s ⊆ F_t.
A stochastic process X = (X(t), t ≥ 0) is *adapted* to the given filtration if each X(t) is F_t-measurable. For example, any process is adapted to its natural filtration, F_t^X = σ{X(s); 0 ≤ s ≤ t}.

An adapted process X = (X(t), t ≥ 0) is a *Markov process* if for all f ∈ B_b(R^d), 0 ≤ s ≤ t < ∞,

E(f(X(t)) | F_s) = E(f(X(t)) | X(s)) (a.s.), (0.1)

i.e. "past" and "future" are independent, given the present. The transition probabilities of a Markov process are

p_{s,t}(x, A) = P(X(t) ∈ A | X(s) = x),

i.e. the probability that the process is in the Borel set A at time t given that it is at the point x at the earlier time s.
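As an aside not in the original slides: for standard one-dimensional Brownian motion the increment law q_u is N(0, u), so the translation structure p_{s,t}(x, A) = q_{t−s}(A − x) from the theorem below can be evaluated in closed form with the normal CDF. The following Python sketch (illustrative parameters, hypothetical function names) does this for an interval A = [a, b].

```python
from math import erf, sqrt

def normal_cdf(z, var):
    """CDF of N(0, var) evaluated at z."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0 * var)))

def transition_prob(s, t, x, a, b):
    """p_{s,t}(x, [a, b]) = q_{t-s}([a, b] - x) for standard 1-d
    Brownian motion, whose increment law q_u is N(0, u)."""
    u = t - s
    return normal_cdf(b - x, u) - normal_cdf(a - x, u)

# The probability depends only on t - s and on the set translated by x,
# so these two evaluations agree exactly:
p1 = transition_prob(0.0, 1.0, 0.5, 0.0, 1.0)
p2 = transition_prob(2.0, 3.0, 0.0, -0.5, 0.5)
```

This makes the stationarity of the transition kernel concrete: only the elapsed time t − s and the shifted set A − x matter.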
**Theorem.** If X is a Lévy process (adapted to its own natural filtration) wherein each X(t) has law q_t, then it is a Markov process with transition probabilities p_{s,t}(x, A) = q_{t−s}(A − x).

Proof. This essentially follows from

E(f(X(t)) | F_s) = E(f(X(s) + X(t) − X(s)) | F_s) = ∫_{R^d} f(X(s) + y) q_{t−s}(dy). □
Now let X be an adapted process defined on a filtered probability space which also satisfies the integrability requirement E(|X(t)|) < ∞ for all t ≥ 0. We say that it is a *martingale* if for all 0 ≤ s < t < ∞,

E(X(t) | F_s) = X(s) a.s.

Note that if X is a martingale, then the map t → E(X(t)) is constant.
An adapted Lévy process with zero mean is a martingale (with respect to its natural filtration), since in this case, for 0 ≤ s ≤ t < ∞ and using the convenient notation E_s(·) := E(·|F_s),

E_s(X(t)) = E_s(X(s) + X(t) − X(s)) = X(s) + E(X(t) − X(s)) = X(s).

Although there is no good reason why a generic Lévy process should be a martingale (or even have finite mean), there are some important examples:
For example, the processes whose values at time t are:

- σB(t), where B(t) is a standard Brownian motion and σ is an r × d matrix;
- Ñ(t), where Ñ is a compensated Poisson process with intensity λ.

Some important martingales associated to Lévy processes include:

- exp{i(u, X(t)) − tη(u)}, where u ∈ R^d is fixed;
- |σB(t)|² − tr(A)t, where A = σᵀσ;
- Ñ(t)² − λt.
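A quick sanity check, not part of the lecture: the martingale identities for the compensated Poisson process imply E Ñ(t) = 0 and E Ñ(t)² = λt at any fixed time t. The sketch below (illustrative parameters; Poisson sampling via Knuth's uniform-product method) verifies both by Monte Carlo.

```python
import math
import random

random.seed(0)

def poisson_sample(mean):
    """Sample from Poisson(mean) by multiplying uniforms (Knuth's method)."""
    limit = math.exp(-mean)
    n, prod = 0, random.random()
    while prod > limit:
        n += 1
        prod *= random.random()
    return n

# Samples of the compensated process Ñ(t) = N(t) - λt at a fixed time t.
lam, t, n_paths = 2.0, 3.0, 200_000
samples = [poisson_sample(lam * t) - lam * t for _ in range(n_paths)]
mean_comp = sum(samples) / n_paths               # should be close to 0
mean_sq = sum(x * x for x in samples) / n_paths  # should be close to λt = 6
```

Both estimates agree with the martingale predictions to Monte Carlo accuracy, mirroring the fact that E(Ñ(t)) and E(Ñ(t)² − λt) are constant in t.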
### Càdlàg Paths

A function f : R⁺ → R^d is càdlàg if it is *continue à droite et limitée à gauche*, i.e. right continuous with left limits. Such a function has only jump discontinuities. Define f(t−) = lim_{s↑t} f(s) and ∆f(t) = f(t) − f(t−). If f is càdlàg, the set {0 ≤ t ≤ T; ∆f(t) ≠ 0} is at most countable.

If the filtration satisfies the "usual hypotheses" of right continuity and completion, then every Lévy process has a càdlàg modification which is itself a Lévy process.
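The definitions of f(t−) and ∆f(t) can be made concrete on a piecewise-constant path. The sketch below (hypothetical jump times and sizes, not from the slides) builds a càdlàg step function: evaluation at t includes a jump occurring exactly at t (right continuity), while the left limit excludes it.

```python
import bisect

def make_cadlag_step(jump_times, jump_sizes, x0=0.0):
    """Build a càdlàg step function from sorted jump times and sizes.
    f(t) sums jumps at times <= t; f(t-) sums jumps at times < t."""
    def f(t, left_limit=False):
        cut = bisect.bisect_left if left_limit else bisect.bisect_right
        return x0 + sum(jump_sizes[:cut(jump_times, t)])
    return f

# Two jumps: +3 at time 1.0 and -1 at time 2.5 (hypothetical path).
f = make_cadlag_step([1.0, 2.5], [3.0, -1.0])
value = f(1.0)                   # f(1): right continuous, includes the jump
left = f(1.0, left_limit=True)   # f(1-): excludes it
delta = value - left             # ∆f(1) = f(1) - f(1-)
```

Here ∆f(t) is nonzero only at the two jump times, a finite (hence countable) set, as the result above requires.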
From now on, we will always make the following assumptions:

- (Ω, F, P) will be a fixed probability space equipped with a filtration (F_t, t ≥ 0) which satisfies the "usual hypotheses".
- Every Lévy process X = (X(t), t ≥ 0) will be assumed to be F_t-adapted and have càdlàg sample paths.
- X(t) − X(s) is independent of F_s for all 0 ≤ s < t < ∞.
### The Jumps of a Lévy Process - Poisson Random Measures

The jump process ∆X = (∆X(t), t ≥ 0) associated to a Lévy process is defined by

∆X(t) = X(t) − X(t−), for each t ≥ 0.

**Theorem.** If N is a Lévy process which is increasing (a.s.) and is such that (∆N(t), t ≥ 0) takes values in {0, 1}, then N is a Poisson process.

Proof. Define a sequence of stopping times recursively by T₀ = 0 and Tₙ = inf{t > Tₙ₋₁; N(t + Tₙ₋₁) − N(Tₙ₋₁) ≠ 0} for each n ∈ N. It follows from (L2) that the sequence (T₁, T₂ − T₁, ..., Tₙ − Tₙ₋₁, ...) is i.i.d.
By (L2) again, we have for each s, t ≥ 0,

P(T₁ > s + t) = P(N(s) = 0, N(t + s) − N(s) = 0) = P(T₁ > s)P(T₁ > t).

From the fact that N is increasing (a.s.), it follows easily that the map t → P(T₁ > t) is decreasing, and by a straightforward application of stochastic continuity (L3) we find that the map t → P(T₁ > t) is continuous at t = 0. Hence there exists λ > 0 such that P(T₁ > t) = e^{−λt} for each t ≥ 0.
So T₁ has an exponential distribution with parameter λ and

P(N(t) = 0) = P(T₁ > t) = e^{−λt}, for each t ≥ 0.

Now assume as an inductive hypothesis that P(N(t) = n) = e^{−λt}(λt)ⁿ/n!; then

P(N(t) = n + 1) = P(Tₙ₊₂ > t, Tₙ₊₁ ≤ t) = P(Tₙ₊₂ > t) − P(Tₙ₊₁ > t).

But Tₙ₊₁ = T₁ + (T₂ − T₁) + ··· + (Tₙ₊₁ − Tₙ) is the sum of (n + 1) i.i.d. exponential random variables, and so has a gamma distribution with density f_{Tₙ₊₁}(s) = e^{−λs} λⁿ⁺¹sⁿ/n! for s > 0. The required result follows on integration. □
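The induction above says that i.i.d. exponential interarrival times produce Poisson-distributed counts. As an empirical check (not in the slides; illustrative parameters), the sketch below counts how many Exp(λ) interarrival times fit in [0, t] and compares the empirical law of N(t) with the pmf e^{−λt}(λt)ⁿ/n!.

```python
import math
import random

random.seed(42)

def count_arrivals(lam, t):
    """N(t): number of i.i.d. Exp(lam) interarrival times fitting in [0, t]."""
    total, n = 0.0, 0
    while True:
        total += random.expovariate(lam)
        if total > t:
            return n
        n += 1

lam, t, trials = 1.5, 2.0, 100_000
counts = [count_arrivals(lam, t) for _ in range(trials)]

# Largest deviation between the empirical law of N(t) and the Poisson pmf.
max_err = max(
    abs(counts.count(n) / trials
        - math.exp(-lam * t) * (lam * t) ** n / math.factorial(n))
    for n in range(8)
)
```

The maximal error is of Monte Carlo order, consistent with N(t) ~ Poisson(λt).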
The following result shows that ∆X is not a straightforward process to analyse.

**Lemma.** If X is a Lévy process, then for fixed t > 0, ∆X(t) = 0 (a.s.).

Proof. Let (t(n), n ∈ N) be a sequence in R⁺ with t(n) ↑ t as n → ∞; then since X has càdlàg paths, lim_{n→∞} X(t(n)) = X(t−). However, by (L3) the sequence (X(t(n)), n ∈ N) converges in probability to X(t), and so has a subsequence which converges almost surely to X(t). The result follows by uniqueness of limits. □
Much of the analytic difficulty in manipulating Lévy processes arises from the fact that it is possible for them to have

∑_{0 ≤ s ≤ t} |∆X(s)| = ∞ a.s.,

and the way in which these difficulties are overcome exploits the fact that we always have

∑_{0 ≤ s ≤ t} |∆X(s)|² < ∞ a.s.

We will gain more insight into these ideas as the discussion progresses.
Rather than exploring ∆X itself further, we will find it more profitable to count jumps of specified size. More precisely, let 0 ≤ t < ∞ and A ∈ B(R^d − {0}). Define

N(t, A) = #{0 ≤ s ≤ t; ∆X(s) ∈ A} = ∑_{0 ≤ s ≤ t} 1_A(∆X(s)).

Note that for each ω ∈ Ω, t ≥ 0, the set function A → N(t, A)(ω) is a counting measure on B(R^d − {0}), and hence

E(N(t, A)) = ∫ N(t, A)(ω) dP(ω)

is a Borel measure on B(R^d − {0}). We write µ(·) = E(N(1, ·)).
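As an aside (not part of the original slides), the jump-counting measure is easy to make concrete for a simulated pure-jump path. The sketch below is ours: the one-dimensional process, its rate, the jump law and the set A are illustrative assumptions, and `N` simply implements the counting sum ∑_{0≤s≤t} 1_A(∆X(s)).

```python
import random

def simulate_jumps(rate, t_max, jump_sampler, rng):
    """Jump times and sizes of a one-dimensional compound Poisson path
    on [0, t_max]: jump times form a Poisson process of the given rate,
    jump sizes are i.i.d. draws from jump_sampler."""
    jumps, t = [], 0.0
    while True:
        t += rng.expovariate(rate)
        if t > t_max:
            return jumps
        jumps.append((t, jump_sampler(rng)))

def N(t, A, jumps):
    """N(t, A) = #{0 <= s <= t : delta X(s) in A}: the jump-counting
    measure, with the set A given as a predicate on jump sizes."""
    return sum(1 for (s, dx) in jumps if s <= t and A(dx))

rng = random.Random(42)
jumps = simulate_jumps(rate=5.0, t_max=10.0,
                       jump_sampler=lambda r: r.gauss(0.0, 1.0), rng=rng)

# A = {x : |x| >= 0.5} is bounded below (0 is not in its closure), so
# N(t, A) is a.s. finite; A = everything recovers the total jump count.
A = lambda x: abs(x) >= 0.5
print(N(10.0, A, jumps), N(10.0, lambda x: True, jumps))
```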
67. We say that A ∈ B(Rᵈ − {0}) is bounded below if 0 ∉ Ā (the closure of A).

Lemma
If A is bounded below, then N(t, A) < ∞ (a.s.) for all t ≥ 0.

Proof. Define a sequence of stopping times (T_n^A, n ∈ ℕ) by T_1^A = inf{t > 0 : ∆X(t) ∈ A}, and for n > 1, T_n^A = inf{t > T_{n−1}^A : ∆X(t) ∈ A}. Since X has càdlàg paths, we have T_1^A > 0 (a.s.) and lim_{n→∞} T_n^A = ∞ (a.s.).

Indeed, suppose that T_1^A = 0 with non-zero probability and let N = {ω ∈ Ω : T_1^A(ω) = 0}. Assume that ω ∈ N. Then, given any u > 0, we can find 0 < δ′ < δ < u and ε > 0 such that |X(δ)(ω) − X(δ′)(ω)| > ε, and this contradicts the (almost sure) right continuity of X(·)(ω) at the origin.

Dave Applebaum (Sheffield UK) Lecture 3 December 2011 16 / 44
74. Similarly, we assume that lim_{n→∞} T_n^A = T^A < ∞ with non-zero probability and define M = {ω ∈ Ω : lim_{n→∞} T_n^A(ω) = ∞}. If ω ∈ Ω − M, then we obtain a contradiction with the fact that X has a left limit (almost surely) at T^A(ω).

Hence, for each t ≥ 0,

    N(t, A) = ∑_{n∈ℕ} 1_{{T_n^A ≤ t}} < ∞  a.s.  □

Dave Applebaum (Sheffield UK) Lecture 3 December 2011 17 / 44
77. Be aware that if A fails to be bounded below, then this lemma may no longer hold, because of the accumulation of large numbers of small jumps.

The following result should at least be plausible, given Theorem 2 and Lemma 4.

Theorem
1. If A is bounded below, then (N(t, A), t ≥ 0) is a Poisson process with intensity µ(A).
2. If A_1, . . . , A_m ∈ B(Rᵈ − {0}) are disjoint, then the random variables N(t, A_1), . . . , N(t, A_m) are independent.

It follows immediately that µ(A) < ∞ whenever A is bounded below; hence the measure µ is σ-finite.

Dave Applebaum (Sheffield UK) Lecture 3 December 2011 18 / 44
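The Poisson-process claim can be checked empirically. The sketch below is our own: an arbitrary jump rate and a thinning probability stand in for µ(A), using the fact that the jumps landing in A form a thinned, hence again Poisson, stream. For a Poisson random variable, the sample mean and variance of N(t, A) should both be close to tµ(A).

```python
import random

def count_jumps_in_A(rate, t, p_A, rng):
    """Number of jumps in [0, t] whose size lands in A, when each jump
    independently falls in A with probability p_A (thinning)."""
    n = 0
    time = rng.expovariate(rate)
    while time <= t:
        if rng.random() < p_A:
            n += 1
        time += rng.expovariate(rate)
    return n

rng = random.Random(0)
rate, t, p_A = 4.0, 2.0, 0.3          # so mu(A) = rate * p_A = 1.2
samples = [count_jumps_in_A(rate, t, p_A, rng) for _ in range(20000)]
mean = sum(samples) / len(samples)
var = sum((x - mean) ** 2 for x in samples) / len(samples)

# For a Poisson(t * mu(A)) variable, both should be close to 2.4.
print(mean, var)
```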
81. The main properties of N, which we will use extensively in the sequel, are summarised below:
1. For each t > 0, ω ∈ Ω, N(t, ·)(ω) is a counting measure on B(Rᵈ − {0}).
2. For each A bounded below, (N(t, A), t ≥ 0) is a Poisson process with intensity µ(A) = E(N(1, A)).
3. The compensator (Ñ(t, A), t ≥ 0), where Ñ(t, A) = N(t, A) − tµ(A) for A bounded below, is a martingale-valued measure; i.e. for fixed A bounded below, (Ñ(t, A), t ≥ 0) is a martingale.

Dave Applebaum (Sheffield UK) Lecture 3 December 2011 19 / 44
86. Poisson Integration

Let f be a Borel measurable function from Rᵈ to Rᵈ and let A be bounded below. Then for each t > 0, ω ∈ Ω, we may define the Poisson integral of f as a random finite sum by

    ∫_A f(x) N(t, dx)(ω) := ∑_{x∈A} f(x) N(t, {x})(ω).

Note that each ∫_A f(x) N(t, dx) is an Rᵈ-valued random variable and gives rise to a càdlàg stochastic process as we vary t.

Now since N(t, {x}) ≠ 0 ⇔ ∆X(u) = x for at least one 0 ≤ u ≤ t, we have

    ∫_A f(x) N(t, dx) = ∑_{0≤u≤t} f(∆X(u)) 1_A(∆X(u)).    (0.2)

Dave Applebaum (Sheffield UK) Lecture 3 December 2011 20 / 44
91. In the sequel, we will sometimes use µ_A to denote the restriction to A of the measure µ. In the following theorem, Var stands for variance.

Theorem
Let A be bounded below. Then (∫_A f(x) N(t, dx), t ≥ 0) is a compound Poisson process, with
1. characteristic function

    E[exp(i(u, ∫_A f(x) N(t, dx)))] = exp(t ∫_{Rᵈ} (e^{i(u,x)} − 1) µ_{f,A}(dx))

for each u ∈ Rᵈ, where µ_{f,A}(B) := µ(A ∩ f⁻¹(B)), for each B ∈ B(Rᵈ).
2. If f ∈ L¹(A, µ_A), then

    E(∫_A f(x) N(t, dx)) = t ∫_A f(x) µ(dx).

Dave Applebaum (Sheffield UK) Lecture 3 December 2011 21 / 44
95. Theorem (cont.)
3. If f ∈ L²(A, µ_A), then

    Var(∫_A f(x) N(t, dx)) = t ∫_A |f(x)|² µ(dx).

Dave Applebaum (Sheffield UK) Lecture 3 December 2011 22 / 44
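Parts (2) and (3) of the theorem lend themselves to a Monte Carlo sanity check. The sketch below is ours: the jump law uniform on A = [1, 2] and the integrand f(x) = x are illustrative choices. Here µ is the jump rate times the jump distribution, so t∫_A f dµ = t·rate·E[J] = 4.5 and t∫_A |f|² dµ = t·rate·E[J²] = 7.0.

```python
import random

def poisson_integral(rate, t, f, jump_sampler, rng):
    """One sample of  int_A f(x) N(t, dx) = sum of f over the jumps in
    [0, t], for a pure-jump process whose jumps all land in A."""
    total, time = 0.0, rng.expovariate(rate)
    while time <= t:
        total += f(jump_sampler(rng))
        time += rng.expovariate(rate)
    return total

rng = random.Random(1)
rate, t = 3.0, 1.0
f = lambda x: x                           # integrand
sampler = lambda r: r.uniform(1.0, 2.0)   # jump law: uniform on A = [1, 2]

samples = [poisson_integral(rate, t, f, sampler, rng) for _ in range(20000)]
mean = sum(samples) / len(samples)
var = sum((x - mean) ** 2 for x in samples) / len(samples)

# Theorem: mean = t * int_A f dmu   = t * rate * E[J]   = 4.5
#          Var  = t * int_A f^2 dmu = t * rate * E[J^2] = 7.0
print(mean, var)
```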
97. Proof (part of it!).
1) For simplicity, we will prove this result in the case where f ∈ L¹(A, µ_A). First let f be a simple function and write f = ∑_{j=1}^n c_j 1_{A_j}, where each c_j ∈ Rᵈ. We can assume, without loss of generality, that the A_j's are disjoint Borel subsets of A.

Dave Applebaum (Sheffield UK) Lecture 3 December 2011 23 / 44
100. By Theorem 5, we find that

    E[exp(i(u, ∫_A f(x) N(t, dx)))]
        = E[exp(i(u, ∑_{j=1}^n c_j N(t, A_j)))]
        = ∏_{j=1}^n E[exp(i(u, c_j) N(t, A_j))]
        = ∏_{j=1}^n exp(t (e^{i(u,c_j)} − 1) µ(A_j))
        = exp(t ∫_A (e^{i(u,f(x))} − 1) µ(dx)).

Dave Applebaum (Sheffield UK) Lecture 3 December 2011 24 / 44
104. Now for an arbitrary f ∈ L¹(A, µ_A), we can find a sequence of simple functions converging to f in L¹, and hence a subsequence which converges to f almost surely. Passing to the limit along this subsequence in the above yields the required result, via dominated convergence.

(2) and (3) follow from (1) by differentiation. □

Dave Applebaum (Sheffield UK) Lecture 3 December 2011 25 / 44
107. It follows from Theorem 6 (2) that a Poisson integral will fail to have a finite mean if f ∉ L¹(A, µ).

For each f ∈ L¹(A, µ_A), t ≥ 0, we define the compensated Poisson integral by

    ∫_A f(x) Ñ(t, dx) = ∫_A f(x) N(t, dx) − t ∫_A f(x) µ(dx).

A straightforward argument shows that (∫_A f(x) Ñ(t, dx), t ≥ 0) is a martingale, and we will use this fact extensively in the sequel.

Dave Applebaum (Sheffield UK) Lecture 3 December 2011 26 / 44
110. Note that by Theorem 6 (2) and (3), we can easily deduce the following two important facts:

    E[exp(i(u, ∫_A f(x) Ñ(t, dx)))] = exp(t ∫_{Rᵈ} (e^{i(u,x)} − 1 − i(u, x)) µ_{f,A}(dx)),    (0.3)

for each u ∈ Rᵈ, and for f ∈ L²(A, µ_A),

    E(|∫_A f(x) Ñ(t, dx)|²) = t ∫_A |f(x)|² µ(dx).    (0.4)

Dave Applebaum (Sheffield UK) Lecture 3 December 2011 27 / 44
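Equation (0.4) can be checked the same way: subtracting the compensator t∫_A f dµ should leave a mean-zero variable whose second moment is t∫_A |f|² dµ. The sketch below is ours; the rate, horizon and jump law (uniform on A = [1, 2]) are arbitrary illustrative choices, with ∫_A f dµ = rate·E[J] supplied by hand.

```python
import random

def compensated_integral(rate, t, f, jump_sampler, mean_f, rng):
    """One sample of int_A f dN~(t,.) = int_A f dN - t * int_A f dmu,
    where int_A f dmu = rate * E[f(J)] is passed in as mean_f."""
    total, time = 0.0, rng.expovariate(rate)
    while time <= t:
        total += f(jump_sampler(rng))
        time += rng.expovariate(rate)
    return total - t * rate * mean_f

rng = random.Random(7)
rate, t = 2.0, 1.5
f = lambda x: x
sampler = lambda r: r.uniform(1.0, 2.0)   # E[J] = 1.5, E[J^2] = 7/3

samples = [compensated_integral(rate, t, f, sampler, 1.5, rng)
           for _ in range(20000)]
mean = sum(samples) / len(samples)
second = sum(x * x for x in samples) / len(samples)

# (0.4): E = 0 and E|.|^2 = t * rate * E[J^2] = 1.5 * 2 * 7/3 = 7.0
print(mean, second)
```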
112. Processes of Finite Variation

We begin by introducing a useful class of functions. Let P = {a = t_1 < t_2 < · · · < t_n < t_{n+1} = b} be a partition of the interval [a, b] in R, and define its mesh to be δ = max_{1≤i≤n} |t_{i+1} − t_i|. We define the variation Var_P(g) of a càdlàg mapping g : [a, b] → Rᵈ over the partition P by the prescription

    Var_P(g) = ∑_{i=1}^n |g(t_{i+1}) − g(t_i)|.

Dave Applebaum (Sheffield UK) Lecture 3 December 2011 28 / 44
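The definition of Var_P(g) is a one-line computation. A small sketch (ours; the two sample functions are illustrative) contrasts a monotone g, whose variation telescopes to g(b) − g(a), with an oscillating one, which accumulates variation as the mesh refines.

```python
def var_P(g, partition):
    """Variation of g over the partition P = [t_1, ..., t_{n+1}]:
    sum over i of |g(t_{i+1}) - g(t_i)|."""
    return sum(abs(g(partition[i + 1]) - g(partition[i]))
               for i in range(len(partition) - 1))

# A nondecreasing function: Var_P telescopes to g(b) - g(a) = 1
# for every partition of [0, 1].
g = lambda t: t * t
P = [0.0, 0.3, 0.5, 0.9, 1.0]
print(var_P(g, P))

# An oscillating (square-wave) function picks up variation 2 at each
# sign change the partition resolves.
h = lambda t: (-1) ** int(4 * t)
print(var_P(h, [i / 8 for i in range(9)]))
```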
116. If V(g) = sup_P Var_P(g) < ∞, we say that g has finite variation on [a, b]. If g is defined on the whole of R (or R⁺), it is said to have finite variation if it has finite variation on each compact interval.

It is a trivial observation that every non-decreasing g is of finite variation. Conversely, if g is of finite variation, then it can always be written as the difference of two non-decreasing functions: to see this, just write

    g = (V(g) + g)/2 − (V(g) − g)/2,

where V(g)(t) is the variation of g on [a, t].

Dave Applebaum (Sheffield UK) Lecture 3 December 2011 29 / 44
121. Functions of finite variation are important in integration. For suppose that we are given a function g which we are proposing as an integrator; then, as a minimum, we will want to be able to define the Stieltjes integral ∫_I f dg for all continuous functions f (where I is some finite interval). In fact, a necessary and sufficient condition for obtaining such an integral as a limit of Riemann sums is that g has finite variation.

A stochastic process (X(t), t ≥ 0) is of finite variation if the paths (X(t)(ω), t ≥ 0) are of finite variation for almost all ω ∈ Ω.

Dave Applebaum (Sheffield UK) Lecture 3 December 2011 30 / 44
125. The following is an important example for us.

Example (Poisson Integrals)
Let N be a Poisson random measure with intensity measure µ and let f : Rᵈ → Rᵈ be Borel measurable. For A bounded below, let Y = (Y(t), t ≥ 0) be given by Y(t) = ∫_A f(x) N(t, dx); then Y is of finite variation on [0, t] for each t ≥ 0. To see this, we observe that for all partitions P of [0, t], we have

    Var_P(Y) ≤ ∑_{0≤s≤t} |f(∆X(s))| 1_A(∆X(s)) < ∞  a.s.,    (0.5)

where X(t) = ∫_A x N(t, dx), for each t ≥ 0.

Dave Applebaum (Sheffield UK) Lecture 3 December 2011 31 / 44
131. 131. In fact, a necessary and sufficient condition for a Lévy process to be of finite variation is that there is no Brownian part (i.e. a = 0 in the Lévy-Khinchine formula), and ∫_{|x|<1} |x| ν(dx) < ∞.

Dave Applebaum (Sheffield UK)    Lecture 3    December 2011    32 / 44
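As a concrete check of the criterion (an illustration, not from the lecture): for a symmetric α-stable Lévy measure ν(dx) = |x|^{−1−α} dx, the small-jump integral is, up to a constant, ∫_0^1 x^{−α} dx, which is finite iff α < 1. The truncated integral over [ε, 1] has a closed form, so we can watch it converge (α < 1) or blow up (α ≥ 1) as ε → 0.

```python
import math

def small_jump_integral(alpha, eps):
    """Closed form of the truncated integral int_eps^1 x * x^(-1-alpha) dx."""
    if alpha == 1.0:
        return -math.log(eps)
    return (1.0 - eps ** (1.0 - alpha)) / (1.0 - alpha)

for alpha in (0.5, 1.5):
    vals = [small_jump_integral(alpha, 10.0 ** -k) for k in (2, 4, 6)]
    print(alpha, [round(v, 3) for v in vals])
# 0.5 [1.8, 1.98, 1.998]      -- converges to 2: finite variation
# 1.5 [18.0, 198.0, 1998.0]   -- diverges: infinite variation
```

This matches the familiar fact that α-stable Lévy processes have paths of finite variation precisely when α < 1.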
133. 133. The Lévy-Itô Decomposition

This is the key result of this lecture.
First, note that for A bounded below, for each t ≥ 0,

∫_A x N(t, dx) = Σ_{0≤u≤t} ∆X(u) 1_A(∆X(u))

is the sum of all the jumps taking values in the set A up to the time t. Since the paths of X are càdlàg, this is clearly a finite random sum. In particular, ∫_{|x|≥1} x N(t, dx) is the sum of all jumps of size bigger than one. It is a compound Poisson process, has finite variation, but may have no finite moments.

Dave Applebaum (Sheffield UK)    Lecture 3    December 2011    33 / 44
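The large-jump sum can be sketched in simulation (an illustration under hypothetical assumptions: jump sizes drawn from a two-sided exponential law, rate λ chosen arbitrarily). The point is that ∫_{|x|≥1} x N(t, dx) keeps only the finitely many jumps with |∆X(u)| ≥ 1, so it is a finite random sum, i.e. a compound Poisson process.

```python
import random

random.seed(1)
lam, t = 5.0, 20.0
s, big_jump_sum, n_big, n_total = 0.0, 0.0, 0, 0
while True:
    s += random.expovariate(lam)   # next jump time
    if s > t:
        break
    n_total += 1
    dx = random.expovariate(1.0) * random.choice((-1.0, 1.0))  # a jump ΔX(u)
    if abs(dx) >= 1.0:             # the indicator 1_{|x|>=1}(ΔX(u))
        big_jump_sum += dx
        n_big += 1
print(n_big, n_total)  # finitely many big jumps among finitely many jumps on [0, t]
```

For a general Lévy process only the big-jump count is finite a.s. (ν({|x| ≥ 1}) < ∞); the small jumps may be infinite in number, which is why they need the compensated treatment that follows.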
137. 137. On the other hand, it can be shown that X(t) − ∫_{|x|≥1} x N(t, dx) is a Lévy process having finite moments to all orders.
Now let's turn our attention to the small jumps. We study compensated integrals, which we know are martingales. Introduce the notation

M(t, A) := ∫_A x Ñ(t, dx)

for t ≥ 0 and A bounded below. For each m ∈ N, let

B_m = { x ∈ Rd : 1/(m+1) < |x| ≤ 1/m }

and for each n ∈ N, let A_n = ∪_{m=1}^n B_m.

Dave Applebaum (Sheffield UK)    Lecture 3    December 2011    34 / 44
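The annuli B_m are pairwise disjoint and their union A_n = {x : 1/(n+1) < |x| ≤ 1}, which exhausts the punctured unit ball as n → ∞. A quick one-dimensional membership check (an illustrative sketch, not part of the lecture):

```python
def in_B(m, x):
    """x belongs to B_m = {x : 1/(m+1) < |x| <= 1/m}."""
    return 1.0 / (m + 1) < abs(x) <= 1.0 / m

def in_A(n, x):
    """x belongs to A_n = union of B_1, ..., B_n."""
    return any(in_B(m, x) for m in range(1, n + 1))

xs = [0.001 * k for k in range(1, 1000)]   # sample points in (0, 1)
n = 9
# A_n coincides with {x : 1/(n+1) < |x| <= 1} ...
assert all(in_A(n, x) == (1.0 / (n + 1) < x <= 1.0) for x in xs)
# ... and the B_m are pairwise disjoint:
assert all(sum(in_B(m, x) for m in range(1, 50)) <= 1 for x in xs)
print("partition checks pass")
```

Working on each A_n first, and then letting n → ∞, is what lets the small-jump martingales M(t, A_n) be controlled before passing to the limit.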