
- 1. Lectures on Lévy Processes and Stochastic Calculus (Koç University)
Lecture 1: Infinite Divisibility
David Applebaum, School of Mathematics and Statistics, University of Sheffield, UK
5th December 2011
- 2. Introduction
A Lévy process is essentially a stochastic process with stationary and independent increments. The basic theory was developed, principally by Paul Lévy, in the 1930s. In the past 20 years there has been a renaissance of interest and a plethora of books, articles and conferences. Why? There are both theoretical and practical reasons.
- 6. THEORETICAL
Lévy processes are the simplest generic class of processes which have (a.s.) continuous paths interspersed with random jumps of arbitrary size occurring at random times.
Lévy processes comprise a natural subclass of semimartingales and of Markov processes of Feller type.
There are many interesting examples: Brownian motion, simple and compound Poisson processes, α-stable processes, subordinated processes, financial processes, the relativistic process, the Riemann zeta process, ...
- 9. Noise. Lévy processes are a good model of "noise" in random dynamical systems:
Input + Noise = Output
Attempts to describe this differentially lead to stochastic calculus. A large class of Markov processes can be built as solutions of stochastic differential equations (SDEs) driven by Lévy noise. Lévy-driven stochastic partial differential equations (SPDEs) are currently being studied with some intensity.
Robust structure. Most applications utilise Lévy processes taking values in Euclidean space, but this can be replaced by a Hilbert space, a Banach space (both of these are important for SPDEs), a locally compact group, or a manifold. Quantised versions are non-commutative Lévy processes on quantum groups.
- 16. APPLICATIONS
These include:
turbulence via Burgers' equation (Bertoin);
new examples of quantum field theories (Albeverio, Gottschalk, Wu);
viscoelasticity (Bouleau);
time series: Lévy-driven CARMA models (Brockwell, Marquardt);
stochastic resonance in non-linear signal processing (Patel, Kosko, Applebaum);
finance and insurance (a cast of thousands).
The biggest explosion of activity has been in mathematical finance. Two major areas of activity are: option pricing in incomplete markets, and interest rate modelling.
- 25. Some Basic Ideas of Probability
Notation. Our state space is Euclidean space $\mathbb{R}^d$. The inner product between two vectors $x = (x_1, \ldots, x_d)$ and $y = (y_1, \ldots, y_d)$ is
$$(x, y) = \sum_{i=1}^{d} x_i y_i.$$
The associated norm (length of a vector) is
$$|x| = (x, x)^{\frac{1}{2}} = \left( \sum_{i=1}^{d} x_i^2 \right)^{\frac{1}{2}}.$$
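The notation above is just the standard Euclidean inner product and norm. As a quick illustration (a minimal sketch in plain Python, not part of the original slides):

```python
import math

# Two vectors in R^3, represented as plain lists
x = [1.0, 2.0, 2.0]
y = [3.0, 0.0, 4.0]

# Inner product (x, y) = sum_i x_i * y_i
inner = sum(xi * yi for xi, yi in zip(x, y))

# Norm |x| = (x, x)^(1/2)
norm_x = math.sqrt(sum(xi * xi for xi in x))
```

Here `inner` is 1·3 + 2·0 + 2·4 = 11 and `norm_x` is √9 = 3.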
- 28. Let $(\Omega, \mathcal{F}, P)$ be a probability space, so that $\Omega$ is a set, $\mathcal{F}$ is a $\sigma$-algebra of subsets of $\Omega$ and $P$ is a probability measure defined on $(\Omega, \mathcal{F})$.
Random variables are measurable functions $X : \Omega \to \mathbb{R}^d$. The law of $X$ is $p_X$, where for each $A \in \mathcal{B}(\mathbb{R}^d)$, $p_X(A) = P(X \in A)$.
$(X_n, n \in \mathbb{N})$ are independent if for all $i_1, i_2, \ldots, i_r \in \mathbb{N}$ and $A_{i_1}, A_{i_2}, \ldots, A_{i_r} \in \mathcal{B}(\mathbb{R}^d)$,
$$P(X_{i_1} \in A_{i_1}, X_{i_2} \in A_{i_2}, \ldots, X_{i_r} \in A_{i_r}) = P(X_{i_1} \in A_{i_1}) P(X_{i_2} \in A_{i_2}) \cdots P(X_{i_r} \in A_{i_r}).$$
If $X$ and $Y$ are independent, the law of $X + Y$ is given by convolution of measures: $p_{X+Y} = p_X * p_Y$, where
$$(p_X * p_Y)(A) = \int_{\mathbb{R}^d} p_X(A - y)\, p_Y(dy).$$
- 32. Equivalently,
$$\int_{\mathbb{R}^d} g(y)\,(p_X * p_Y)(dy) = \int_{\mathbb{R}^d} \int_{\mathbb{R}^d} g(x + y)\, p_X(dx)\, p_Y(dy)$$
for all $g \in B_b(\mathbb{R}^d)$ (the bounded Borel measurable functions on $\mathbb{R}^d$).
If $X$ and $Y$ are independent with densities $f_X$ and $f_Y$, respectively, then $X + Y$ has density $f_{X+Y}$ given by convolution of functions: $f_{X+Y} = f_X * f_Y$, where
$$(f_X * f_Y)(x) = \int_{\mathbb{R}^d} f_X(x - y) f_Y(y)\, dy.$$
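The density-convolution formula can be checked numerically. A minimal sketch (my own illustration, not from the slides): for two independent N(0,1) variables, the convolution of the two densities should agree with the N(0,2) density, since a sum of independent Gaussians is Gaussian with the variances added.

```python
import math

def normal_pdf(x, var):
    """Density of a centred normal with variance var."""
    return math.exp(-x * x / (2.0 * var)) / math.sqrt(2.0 * math.pi * var)

def convolve_at(x, f, g, lo=-10.0, hi=10.0, n=4000):
    """Approximate (f * g)(x) = ∫ f(x - y) g(y) dy by a Riemann sum."""
    h = (hi - lo) / n
    return sum(f(x - (lo + k * h)) * g(lo + k * h) for k in range(n)) * h

fX = lambda t: normal_pdf(t, 1.0)  # X ~ N(0, 1)
fY = lambda t: normal_pdf(t, 1.0)  # Y ~ N(0, 1)

# X + Y ~ N(0, 2), so the numerical convolution should match that density.
approx = convolve_at(0.5, fX, fY)
exact = normal_pdf(0.5, 2.0)
err = abs(approx - exact)
```

The grid bounds and resolution here are arbitrary choices; any grid wide and fine enough for the Gaussian tails gives essentially the same answer.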
- 35. The characteristic function of $X$ is $\phi_X : \mathbb{R}^d \to \mathbb{C}$, where
$$\phi_X(u) = \int_{\mathbb{R}^d} e^{i(u,x)}\, p_X(dx).$$
Theorem (Kac's theorem). $X_1, \ldots, X_n$ are independent if and only if
$$E\left[ \exp\left( i \sum_{j=1}^{n} (u_j, X_j) \right) \right] = \phi_{X_1}(u_1) \cdots \phi_{X_n}(u_n)$$
for all $u_1, \ldots, u_n \in \mathbb{R}^d$.
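Kac's theorem can be illustrated by Monte Carlo. A sketch under the assumption that $X_1, X_2$ are independent standard normals, whose characteristic function $\phi(u) = e^{-u^2/2}$ is known in closed form; the joint expectation should factor into the product of the marginals:

```python
import cmath
import math
import random

random.seed(42)

# Kac's theorem for two independent N(0,1) variables:
# E[exp(i(u1*X1 + u2*X2))] should equal phi(u1) * phi(u2).
n = 200_000
u1, u2 = 0.7, -1.1

lhs = 0 + 0j
for _ in range(n):
    x1 = random.gauss(0.0, 1.0)
    x2 = random.gauss(0.0, 1.0)
    lhs += cmath.exp(1j * (u1 * x1 + u2 * x2))
lhs /= n

# phi(u) = exp(-u^2 / 2) for a standard normal
rhs = math.exp(-u1 * u1 / 2.0) * math.exp(-u2 * u2 / 2.0)
err = abs(lhs - rhs)
```

With 200,000 samples the Monte Carlo error is of order $1/\sqrt{n} \approx 0.002$, so `lhs` and `rhs` agree to roughly two decimal places.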
- 38. More generally, the characteristic function of a probability measure $\mu$ on $\mathbb{R}^d$ is
$$\phi_\mu(u) = \int_{\mathbb{R}^d} e^{i(u,x)}\, \mu(dx).$$
Important properties are:
1. $\phi_\mu(0) = 1$.
2. $\phi_\mu$ is positive definite, i.e. $\sum_{i,j} c_i \bar{c}_j \phi_\mu(u_i - u_j) \geq 0$ for all $c_i \in \mathbb{C}$, $u_i \in \mathbb{R}^d$, $1 \leq i, j \leq n$, $n \in \mathbb{N}$.
3. $\phi_\mu$ is uniformly continuous. (Hint: look at $|\phi_\mu(u + h) - \phi_\mu(u)|$ and use dominated convergence.)
Also $\mu \mapsto \phi_\mu$ is injective.
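Property (2) can be probed numerically: the quadratic form built from $\phi_\mu(u_j - u_k)$ should be real (up to rounding) and nonnegative for any choice of points and complex coefficients. A sketch using the standard normal characteristic function $\phi(u) = e^{-u^2/2}$; the random points and coefficients are arbitrary:

```python
import math
import random

random.seed(1)

# phi(u) = exp(-u^2/2) is the characteristic function of N(0,1),
# hence positive definite: sum_{j,k} c_j conj(c_k) phi(u_j - u_k) >= 0.
phi = lambda u: math.exp(-u * u / 2.0)

n = 6
us = [random.uniform(-3.0, 3.0) for _ in range(n)]
cs = [complex(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(n)]

quad = sum(cs[j] * cs[k].conjugate() * phi(us[j] - us[k])
           for j in range(n) for k in range(n))
```

Since $\phi$ is real and even, the imaginary parts cancel pairwise and `quad` is a nonnegative real number up to floating-point rounding.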
- 43. Conversely, Bochner's theorem states that if $\phi : \mathbb{R}^d \to \mathbb{C}$ satisfies (1), (2) and is continuous at $u = 0$, then it is the characteristic function of some probability measure $\mu$ on $\mathbb{R}^d$.
$\psi : \mathbb{R}^d \to \mathbb{C}$ is conditionally positive definite if for all $n \in \mathbb{N}$ and $c_1, \ldots, c_n \in \mathbb{C}$ for which $\sum_{j=1}^{n} c_j = 0$ we have
$$\sum_{j,k=1}^{n} c_j \bar{c}_k \psi(u_j - u_k) \geq 0$$
for all $u_1, \ldots, u_n \in \mathbb{R}^d$.
Note: conditionally positive definite is sometimes called negative definite (with a minus sign).
$\psi : \mathbb{R}^d \to \mathbb{C}$ will be said to be hermitian if $\psi(u) = \overline{\psi(-u)}$, for all $u \in \mathbb{R}^d$.
- 48. Theorem (Schoenberg correspondence). $\psi : \mathbb{R}^d \to \mathbb{C}$ is hermitian and conditionally positive definite if and only if $e^{t\psi}$ is positive definite for each $t > 0$.
Proof. We only give the easy part here. Suppose that $e^{t\psi}$ is positive definite for all $t > 0$. Fix $n \in \mathbb{N}$ and choose $c_1, \ldots, c_n$ and $u_1, \ldots, u_n$ as above. Since $\sum_{j=1}^{n} c_j = 0$, we have $\sum_{j,k=1}^{n} c_j \bar{c}_k = \left| \sum_{j=1}^{n} c_j \right|^2 = 0$, so the constant $-1$ term contributes nothing, and positive definiteness of $e^{t\psi}$ gives, for each $t > 0$,
$$\frac{1}{t} \sum_{j,k=1}^{n} c_j \bar{c}_k \left( e^{t\psi(u_j - u_k)} - 1 \right) \geq 0,$$
and so
$$\sum_{j,k=1}^{n} c_j \bar{c}_k \psi(u_j - u_k) = \lim_{t \to 0} \frac{1}{t} \sum_{j,k=1}^{n} c_j \bar{c}_k \left( e^{t\psi(u_j - u_k)} - 1 \right) \geq 0. \qquad \Box$$
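The correspondence can be illustrated numerically with $\psi(u) = -u^2$, which is hermitian, and for which $e^{t\psi(u)} = e^{-tu^2}$ is a Gaussian characteristic function, hence positive definite. A sketch (my illustration, not from the lecture) checking both quadratic forms for a random configuration:

```python
import math
import random

random.seed(2)

# psi(u) = -u^2 is conditionally positive definite, and
# e^{t psi(u)} = e^{-t u^2} is positive definite for every t > 0.
psi = lambda u: -u * u

n = 5
us = [random.uniform(-2.0, 2.0) for _ in range(n)]
cs = [complex(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(n)]

# Enforce sum_j c_j = 0, as in the definition of conditional
# positive definiteness.
mean = sum(cs) / n
cs = [c - mean for c in cs]

cond_form = sum(cs[j] * cs[k].conjugate() * psi(us[j] - us[k])
                for j in range(n) for k in range(n)).real

t = 0.3
exp_form = sum(cs[j] * cs[k].conjugate() * math.exp(t * psi(us[j] - us[k]))
               for j in range(n) for k in range(n)).real
```

For this particular $\psi$, expanding $-(u_j - u_k)^2$ and using $\sum_j c_j = 0$ shows the conditional form equals $2\,|\sum_j c_j u_j|^2 \geq 0$, matching what the script observes.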
- 54. Infinite Divisibility
We study this first because a Lévy process is infinite divisibility in motion, i.e. infinite divisibility is the underlying probabilistic idea which a Lévy process embodies dynamically.
Let $\mu$ be a probability measure on $\mathbb{R}^d$. Define $\mu^{*n} = \mu * \cdots * \mu$ ($n$ times). We say that $\mu$ has a convolution $n$th root if there exists a probability measure $\mu^{1/n}$ for which $(\mu^{1/n})^{*n} = \mu$.
$\mu$ is infinitely divisible if it has a convolution $n$th root for all $n \in \mathbb{N}$. In this case $\mu^{1/n}$ is unique.
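A standard example (not spelled out on this slide): the Poisson law is infinitely divisible, with convolution $n$th root Poisson($\lambda/n$). This is immediate from its characteristic function $\phi(u) = \exp(\lambda(e^{iu} - 1))$, which a short script can confirm; the parameter values below are arbitrary:

```python
import cmath

# Infinite divisibility of the Poisson law: Poisson(lam) is the n-fold
# convolution of Poisson(lam/n), since its characteristic function is
# phi(u) = exp(lam * (e^{iu} - 1)), and so phi = (phi_{lam/n})^n.
def poisson_cf(lam, u):
    return cmath.exp(lam * (cmath.exp(1j * u) - 1.0))

lam, n, u = 3.5, 7, 0.9
whole = poisson_cf(lam, u)
root_to_n = poisson_cf(lam / n, u) ** n
err = abs(whole - root_to_n)
```

The two complex numbers agree up to floating-point rounding, reflecting the exact identity $\exp(\lambda z) = (\exp(\lambda z / n))^n$.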
- 59. Theorem. $\mu$ is infinitely divisible iff for all $n \in \mathbb{N}$ there exists a probability measure $\mu_n$ with characteristic function $\phi_n$ such that
$$\phi_\mu(u) = (\phi_n(u))^n$$
for all $u \in \mathbb{R}^d$. Moreover $\mu_n = \mu^{1/n}$.
Proof. If $\mu$ is infinitely divisible, take $\phi_n = \phi_{\mu^{1/n}}$. Conversely, for each $n \in \mathbb{N}$, by Fubini's theorem,
$$\phi_\mu(u) = \int_{\mathbb{R}^d} \cdots \int_{\mathbb{R}^d} e^{i(u, y_1 + \cdots + y_n)}\, \mu_n(dy_1) \cdots \mu_n(dy_n) = \int_{\mathbb{R}^d} e^{i(u,y)}\, \mu_n^{*n}(dy).$$
But $\phi_\mu(u) = \int_{\mathbb{R}^d} e^{i(u,y)}\, \mu(dy)$ and $\phi$ determines $\mu$ uniquely. Hence $\mu = \mu_n^{*n}$. $\Box$
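The theorem can be sanity-checked by simulation (a Python sketch, standard library only, with illustrative names): draw X = Y_1 + · · · + Y_n with the Y_i i.i.d. N(0, 1/n), so X ∼ N(0, 1), and compare the empirical characteristic function of X with (φ_n(u))^n, where φ_n is the characteristic function of N(0, 1/n).

```python
import cmath
import random

random.seed(0)

def emp_cf(samples, u):
    # empirical characteristic function: (1/N) Σ_k e^{iuX_k}
    return sum(cmath.exp(1j * u * x) for x in samples) / len(samples)

# X = Y_1 + ... + Y_n with Y_i ~ N(0, 1/n) i.i.d., so X ~ N(0, 1)
n, N = 4, 20000
xs = [sum(random.gauss(0.0, (1.0 / n) ** 0.5) for _ in range(n)) for _ in range(N)]

u = 1.0
phi_n = cmath.exp(-u * u / (2 * n))   # cf of N(0, 1/n) at u
diff = abs(emp_cf(xs, u) - phi_n ** n)
print(diff)   # small Monte Carlo error: (φ_n(u))^n = e^{-u²/2} = φ_µ(u)
```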
- 67.
- If µ and ν are each inﬁnitely divisible, then so is µ ∗ ν.
- If (µ_n, n ∈ N) are inﬁnitely divisible and µ_n ⇒ µ, then µ is inﬁnitely divisible.

[Note: weak convergence. µ_n ⇒ µ means

    lim_{n→∞} ∫_{R^d} f(x) µ_n(dx) = ∫_{R^d} f(x) µ(dx),

for each f ∈ C_b(R^d).]

A random variable X is inﬁnitely divisible if its law p_X is inﬁnitely divisible, i.e. X =^d Y_1^{(n)} + · · · + Y_n^{(n)}, where Y_1^{(n)}, . . . , Y_n^{(n)} are i.i.d., for each n ∈ N.
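The bracketed deﬁnition of weak convergence can be watched in closed form with a concrete family (an illustrative choice, not from the slides): take µ_n = N(0, 1 + 1/n) ⇒ µ = N(0, 1) and f(x) = cos x ∈ C_b(R). Then ∫ f dN(0, v) = e^{−v/2} (the real part of the characteristic function at u = 1), so the deﬁning limit is explicit.

```python
import math

# ∫ cos(x) N(0, v)(dx) = Re φ(1) = e^{-v/2}, with φ the cf of N(0, v)
def int_cos(v):
    return math.exp(-v / 2)

limit = int_cos(1.0)   # ∫ f dµ for the weak limit µ = N(0, 1)
gaps = [abs(int_cos(1.0 + 1.0 / n) - limit) for n in (1, 10, 100, 1000)]
print(gaps)   # shrinks toward 0, as weak convergence requires for this f
```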
- 73. Examples of Inﬁnite Divisibility

In the following, we will demonstrate inﬁnite divisibility of a random variable X by ﬁnding i.i.d. Y_1^{(n)}, . . . , Y_n^{(n)} such that X =^d Y_1^{(n)} + · · · + Y_n^{(n)}, for each n ∈ N.

Example 1 - Gaussian Random Variables

Let X = (X_1, . . . , X_d) be a random vector. We say that it is (non-degenerate) Gaussian if there exists a vector m ∈ R^d and a strictly positive-deﬁnite symmetric d × d matrix A such that X has a pdf (probability density function) of the form

    f(x) = (2π)^{−d/2} (det A)^{−1/2} exp( −(1/2)(x − m, A^{−1}(x − m)) ),    (1.1)

for all x ∈ R^d.

In this case we will write X ∼ N(m, A). The vector m is the mean of X, so m = E(X), and A is the covariance matrix, so that A = E((X − m)(X − m)^T).
- 78. A standard calculation yields

    φ_X(u) = exp{ i(m, u) − (1/2)(u, Au) },    (1.2)

and hence

    (φ_X(u))^{1/n} = exp{ i(m/n, u) − (1/2)(u, (A/n)u) },

so we see that X is inﬁnitely divisible with each Y_j^{(n)} ∼ N(m/n, A/n) for each 1 ≤ j ≤ n.

We say that X is a standard normal whenever X ∼ N(0, σ²I) for some σ > 0.

We say that X is degenerate Gaussian if (1.2) holds with det(A) = 0; these random variables are also inﬁnitely divisible.
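In one dimension (1.2) reads φ_X(u) = exp(ium − (1/2)σ²u²), and the factorisation into N(m/n, σ²/n) pieces can be veriﬁed numerically (a Python sketch; `phi_gauss` is an illustrative helper name):

```python
import cmath

def phi_gauss(m, var, u):
    # cf of N(m, var) in d = 1, from (1.2): exp(ium - var·u²/2)
    return cmath.exp(1j * u * m - 0.5 * var * u * u)

m, var, n = 1.5, 2.0, 7
for u in (-2.0, -0.3, 0.0, 0.7, 3.1):
    # N(m, var) = N(m/n, var/n)^{*n}: the cfs agree after raising to the nth power
    assert abs(phi_gauss(m, var, u) - phi_gauss(m / n, var / n, u) ** n) < 1e-12
print("Gaussian convolution roots verified")
```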
- 84. Example 2 - Poisson Random Variables

In this case, we take d = 1 and consider a random variable X taking values in N ∪ {0}. We say that X is Poisson if there exists c > 0 for which

    P(X = n) = (c^n / n!) e^{−c}.

In this case we will write X ∼ π(c). We have E(X) = Var(X) = c. It is easy to verify that

    φ_X(u) = exp[c(e^{iu} − 1)],

from which we deduce that X is inﬁnitely divisible with each Y_j^{(n)} ∼ π(c/n), for 1 ≤ j ≤ n, n ∈ N.
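A simulation check of Y_j^{(n)} ∼ π(c/n) (a Python sketch; the Knuth product-of-uniforms sampler below is an illustrative implementation, adequate for small parameters): summing n independent π(c/n) draws should reproduce E(X) = Var(X) = c.

```python
import math
import random

random.seed(1)

def poisson_sample(c):
    # Knuth's product-of-uniforms method for π(c) (fine for small c)
    L, k, p = math.exp(-c), 0, 1.0
    while True:
        p *= random.random()
        if p < L:
            return k
        k += 1

c, n, N = 4.0, 8, 20000
# X = Y_1 + ... + Y_n with Y_j ~ π(c/n) i.i.d.; X should again be π(c)
xs = [sum(poisson_sample(c / n) for _ in range(n)) for _ in range(N)]
mean = sum(xs) / N
var = sum((x - mean) ** 2 for x in xs) / N
print(mean, var)   # both close to c = 4, since E(X) = Var(X) = c
```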
- 90. Example 3 - Compound Poisson Random Variables

Let (Z(n), n ∈ N) be a sequence of i.i.d. random variables taking values in R^d with common law µ_Z, and let N ∼ π(c) be a Poisson random variable which is independent of all the Z(n)'s. The compound Poisson random variable X is deﬁned as follows:

    X := 0                        if N = 0,
    X := Z(1) + · · · + Z(N)      if N > 0.

Theorem. For each u ∈ R^d,

    φ_X(u) = exp( ∫_{R^d} (e^{i(u,y)} − 1) c µ_Z(dy) ).
- 94. Proof. Let φ_Z be the common characteristic function of the Z(n)'s. By conditioning and using independence we ﬁnd

    φ_X(u) = Σ_{n=0}^{∞} E(e^{i(u, Z(1)+···+Z(N))} | N = n) P(N = n)
           = Σ_{n=0}^{∞} E(e^{i(u, Z(1)+···+Z(n))}) e^{−c} c^n / n!
           = e^{−c} Σ_{n=0}^{∞} [c φ_Z(u)]^n / n!
           = exp[c(φ_Z(u) − 1)],

and the result follows on writing φ_Z(u) = ∫_{R^d} e^{i(u,y)} µ_Z(dy). □

If X is compound Poisson as above, we write X ∼ π(c, µ_Z). It is clearly inﬁnitely divisible with each Y_j^{(n)} ∼ π(c/n, µ_Z), for 1 ≤ j ≤ n.
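The theorem can be tested by Monte Carlo (a Python sketch; taking µ_Z = N(0, 1) is an arbitrary illustrative choice): simulate X ∼ π(c, µ_Z) directly from the deﬁnition and compare the empirical characteristic function with exp[c(φ_Z(u) − 1)].

```python
import cmath
import math
import random

random.seed(2)

def compound_poisson(c, z_sampler):
    # X = 0 if N = 0, else Z(1) + ... + Z(N), with N ~ π(c)
    L, n, p = math.exp(-c), 0, 1.0
    while True:
        p *= random.random()
        if p < L:
            break
        n += 1
    return sum(z_sampler() for _ in range(n))

c, Nsim = 3.0, 20000
xs = [compound_poisson(c, lambda: random.gauss(0.0, 1.0)) for _ in range(Nsim)]

u = 0.8
emp = sum(cmath.exp(1j * u * x) for x in xs) / Nsim
phi_Z = cmath.exp(-u * u / 2)            # cf of µ_Z = N(0, 1)
theory = cmath.exp(c * (phi_Z - 1))      # exp[c(φ_Z(u) − 1)] from the theorem
gap = abs(emp - theory)
print(gap)   # small Monte Carlo error
```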
- 102. The Lévy-Khintchine Formula

de Finetti (1920s) suggested that the most general inﬁnitely divisible random variable could be written X = Y + W, where Y and W are independent, Y ∼ N(m, A), W ∼ π(c, µ_Z). Then φ_X(u) = e^{η(u)}, where

    η(u) = i(m, u) − (1/2)(u, Au) + ∫_{R^d} (e^{i(u,y)} − 1) c µ_Z(dy).    (1.3)

This is WRONG! ν(·) = c µ_Z(·) is a ﬁnite measure here. Lévy and Khintchine showed that ν can be σ-ﬁnite, provided it is what is now called a Lévy measure on R^d − {0} = {x ∈ R^d, x ≠ 0}, i.e.

    ∫ (|y|² ∧ 1) ν(dy) < ∞,    (1.4)

(where a ∧ b := min{a, b}, for a, b ∈ R). Since |y|² ∧ ε ≤ |y|² ∧ 1 whenever 0 < ε ≤ 1, it follows from (1.4) that ν((−ε, ε)^c) < ∞ for all ε > 0.
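Condition (1.4) can be checked numerically for a standard example (an illustrative assumption, not from the slides: the one-sided α-stable measure ν(dy) = y^{−1−α} dy on (0, ∞), a Lévy measure for 0 < α < 2). The integral ∫ (y² ∧ 1) ν(dy) = 1/(2 − α) + 1/α is ﬁnite even though ν itself has inﬁnite total mass near 0.

```python
import math

def levy_integral(alpha, eps=1e-6, M=1e6, steps=200000):
    # midpoint rule on a log grid for ∫_eps^M (y² ∧ 1) y^{-1-α} dy
    lo, hi = math.log(eps), math.log(M)
    h = (hi - lo) / steps
    total = 0.0
    for k in range(steps):
        y = math.exp(lo + (k + 0.5) * h)
        total += min(y * y, 1.0) * y ** (-1 - alpha) * y * h   # dy = y d(log y)
    return total

alpha = 1.5
approx = levy_integral(alpha)
exact = 1 / (2 - alpha) + 1 / alpha   # ∫₀¹ y^{1-α} dy + ∫₁^∞ y^{-1-α} dy
print(approx, exact)   # finite (≈ 8/3), so (1.4) holds for this ν
```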
