More generally, the characteristic function of a probability measure µ on Rd is

φµ(u) = ∫_{Rd} e^{i(u,x)} µ(dx). …
Conversely, Bochner's theorem states that if φ : Rd → C satisfies (1), (2) and is continuous at u = 0, then it is the characteristic function of …
Theorem (Schoenberg correspondence). ψ : Rd → C is hermitian and conditionally positive definite if and only if e^{tψ} is positive definite for each t > 0. …
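As a quick numerical illustration of the Schoenberg correspondence (a sketch, not part of the lecture): take ψ(u) = −u²/2, the hermitian, conditionally positive definite exponent of standard Brownian motion, and check that the matrix [e^{tψ(u_j − u_k)}] is positive semidefinite for several t > 0. The grid of test points is an arbitrary choice.

```python
import numpy as np

rng = np.random.default_rng(0)
u = rng.uniform(-5, 5, size=40)  # arbitrary test points in R

def psi(x):
    # Exponent of standard Brownian motion: hermitian and
    # conditionally positive definite on R.
    return -0.5 * x ** 2

for t in (0.1, 1.0, 10.0):
    # Gram matrix M[j, k] = exp(t * psi(u_j - u_k)); by Schoenberg it
    # must be positive (semi)definite for every t > 0.
    M = np.exp(t * psi(u[:, None] - u[None, :]))
    min_eig = np.linalg.eigvalsh(M).min()
    assert min_eig > -1e-8, (t, min_eig)
```

Here e^{tψ(u−v)} = exp(−t(u−v)²/2) is the Gaussian kernel, so positive definiteness is expected; the eigenvalue check only guards against numerical noise.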
Infinite Divisibility

We study this first because a Lévy process is infinite divisibility in motion, i.e. infinite divisibility …
Theorem. µ is infinitely divisible iff for all n ∈ N, there exists a probability measure µn with characteristic function φn such that φµ = (φn)^n. …
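The theorem can be checked concretely for a standard normal law (a sketch assuming the known characteristic function exp(−u²/2) of N(0, 1)): the n-th root of φ_{N(0,1)} is exactly the characteristic function of N(0, 1/n).

```python
import numpy as np

u = np.linspace(-10, 10, 201)
n = 7

phi = np.exp(-0.5 * u ** 2)        # characteristic function of N(0, 1)
phi_n = np.exp(-0.5 * u ** 2 / n)  # characteristic function of N(0, 1/n)

# mu = mu_n * ... * mu_n (n-fold convolution)  <=>  phi_mu = (phi_n)^n
assert np.allclose(phi, phi_n ** n)
```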
- If µ and ν are each infinitely divisible, then so is µ ∗ ν.
- w-…
Examples of Infinite Divisibility

In the following, we will demonstrate infinite divisibility of a random …
A standard calculation yields

φX(u) = exp{i(m, u) − (1/2)(u, Au)} …
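The "standard calculation" can be verified numerically by evaluating ∫ e^{iux} pX(dx) on a grid, here for an illustrative one-dimensional Gaussian N(m, σ²) with m = 1.5 and σ = 2 (so A = σ² in the notation above); both the parameters and the grid are assumptions of this sketch.

```python
import numpy as np

m, sigma = 1.5, 2.0  # illustrative N(m, sigma^2) parameters
x = np.linspace(m - 12 * sigma, m + 12 * sigma, 200001)
dx = x[1] - x[0]
pdf = np.exp(-0.5 * ((x - m) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

for u in (-2.0, -0.3, 0.0, 1.0, 3.0):
    # phi_X(u) = integral of e^{iux} p_X(dx), computed on the grid;
    # the density decays so fast that the Riemann sum is essentially exact.
    phi_num = (np.exp(1j * u * x) * pdf).sum() * dx
    phi_exact = np.exp(1j * m * u - 0.5 * sigma ** 2 * u ** 2)
    assert abs(phi_num - phi_exact) < 1e-8
```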
Example 2 - Poisson Random Variables

In this case, we take d = 1 and consider a random variable X taking values in the set n…
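For a Poisson random variable with mean λ, the standard calculation gives φX(u) = exp{λ(e^{iu} − 1)}; infinite divisibility then follows because this is the n-th power of the Poisson(λ/n) characteristic function. A small numerical check (λ = 3 and n = 5 are illustrative choices):

```python
import numpy as np

lam, n = 3.0, 5
u = np.linspace(-8, 8, 161)

phi = np.exp(lam * (np.exp(1j * u) - 1))          # Poisson(lam)
phi_n = np.exp((lam / n) * (np.exp(1j * u) - 1))  # Poisson(lam / n)

# A Poisson(lam) variable is the sum of n independent Poisson(lam/n)
# variables, so its characteristic function is an exact n-th power.
assert np.allclose(phi, phi_n ** n)
```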
Example 3 - Compound Poisson Random Variables

Let (Z(n), n ∈ N) be a sequence of i.i.d. random variables taking values in Rd …
Proof. Let φZ be the common characteristic function of the Zn's. By conditioning and using independence we find …
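The standard conclusion of this computation is φX(u) = exp{λ(φZ(u) − 1)}, where the count N is Poisson with intensity λ. A Monte Carlo sketch (the standard normal jump law and λ = 2 are illustrative assumptions) compares the empirical characteristic function of X = Z(1) + · · · + Z(N) with this formula:

```python
import numpy as np

rng = np.random.default_rng(1)
lam = 2.0            # illustrative Poisson intensity for the count N
nsamples = 100_000

# X = Z(1) + ... + Z(N) with N ~ Poisson(lam) and Z(i) i.i.d. N(0, 1)
counts = rng.poisson(lam, size=nsamples)
X = np.array([rng.standard_normal(k).sum() for k in counts])

for u in (0.5, 1.0, 2.0):
    ecf = np.exp(1j * u * X).mean()                      # empirical E[e^{iuX}]
    exact = np.exp(lam * (np.exp(-0.5 * u ** 2) - 1.0))  # exp{lam(phi_Z(u) - 1)}
    assert abs(ecf - exact) < 0.03                       # Monte Carlo tolerance
```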
The Lévy-Khintchine Formula

de Finetti (1920s) suggested that the most general infinitely divisible random variable could be …
Lectures on Lévy Processes and Stochastic Calculus (Koc University)
Lecture 1: Infinite Divisibility
David Applebaum, School of Mathematics and Statistics, University of Sheffield, UK
5th December 2011
Introduction

A Lévy process is essentially a stochastic process with stationary and independent increments. The basic theory was developed, principally by Paul Lévy, in the 1930s. In the past 20 years there has been a renaissance of interest and a plethora of books, articles and conferences. Why?

There are both theoretical and practical reasons.
THEORETICAL

Lévy processes are the simplest generic class of processes which have (a.s.) continuous paths interspersed with random jumps of arbitrary size occurring at random times.

Lévy processes comprise a natural subclass of semimartingales and of Markov processes of Feller type.

There are many interesting examples - Brownian motion, simple and compound Poisson processes, α-stable processes, subordinated processes, financial processes, the relativistic process, the Riemann zeta process . . .
Noise. Lévy processes are a good model of “noise” in random dynamical systems.

Input + Noise = Output

Attempts to describe this differentially lead to stochastic calculus. A large class of Markov processes can be built as solutions of stochastic differential equations (SDEs) driven by Lévy noise. Lévy-driven stochastic partial differential equations (SPDEs) are currently being studied with some intensity.

Robust structure. Most applications utilise Lévy processes taking values in Euclidean space, but this can be replaced by a Hilbert space, a Banach space (both of these are important for SPDEs), a locally compact group, or a manifold. Quantised versions are non-commutative Lévy processes on quantum groups.
APPLICATIONS

These include:
- Turbulence via the Burgers equation (Bertoin)
- New examples of quantum field theories (Albeverio, Gottshalk, Wu)
- Viscoelasticity (Bouleau)
- Time series - Lévy-driven CARMA models (Brockwell, Marquardt)
- Stochastic resonance in non-linear signal processing (Patel, Kosco, Applebaum)
- Finance and insurance (a cast of thousands)

The biggest explosion of activity has been in mathematical finance. Two major areas of activity are:
- option pricing in incomplete markets
- interest rate modelling
Some Basic Ideas of Probability

Notation. Our state space is Euclidean space Rd. The inner product between two vectors x = (x1, . . . , xd) and y = (y1, . . . , yd) is

(x, y) = Σ_{i=1}^{d} x_i y_i.

The associated norm (length of a vector) is

|x| = (x, x)^{1/2} = (Σ_{i=1}^{d} x_i^2)^{1/2}.
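In numpy terms, a minimal sketch of this notation (the vectors are arbitrary illustrative choices):

```python
import numpy as np

x = np.array([1.0, 2.0, 2.0])
y = np.array([3.0, 0.0, 4.0])

inner = np.dot(x, y)          # (x, y) = sum_i x_i y_i
norm = np.sqrt(np.dot(x, x))  # |x| = (x, x)^{1/2}

assert inner == 11.0          # 1*3 + 2*0 + 2*4
assert norm == 3.0            # sqrt(1 + 4 + 4)
```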
Let (Ω, F, P) be a probability space, so that Ω is a set, F is a σ-algebra of subsets of Ω and P is a probability measure defined on (Ω, F). Random variables are measurable functions X : Ω → Rd. The law of X is pX, where for each A ∈ B(Rd), pX(A) = P(X ∈ A).

(Xn, n ∈ N) are independent if for all i1, i2, . . . , ir ∈ N and A_{i1}, A_{i2}, . . . , A_{ir} ∈ B(Rd),

P(X_{i1} ∈ A_{i1}, X_{i2} ∈ A_{i2}, . . . , X_{ir} ∈ A_{ir}) = P(X_{i1} ∈ A_{i1}) P(X_{i2} ∈ A_{i2}) · · · P(X_{ir} ∈ A_{ir}).

If X and Y are independent, the law of X + Y is given by convolution of measures: pX+Y = pX ∗ pY, where

(pX ∗ pY)(A) = ∫_{Rd} pX(A − y) pY(dy).
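The convolution formula $p_{X+Y} = p_X * p_Y$ can be checked exactly for discrete laws. The Python sketch below (an illustration added here, not taken from the slides; the two-dice example is an assumption) convolves two probability mass functions:

```python
from fractions import Fraction

def convolve_pmf(p, q):
    """Convolution of two discrete laws given as {value: probability} dicts:
    (p * q)(a) = sum over y of p(a - y) q({y})."""
    out = {}
    for x, px in p.items():
        for y, qy in q.items():
            out[x + y] = out.get(x + y, 0) + px * qy
    return out

# Law of one fair die.
die = {k: Fraction(1, 6) for k in range(1, 7)}

# Law of the sum of two independent dice: die * die.
two_dice = convolve_pmf(die, die)
print(two_dice[7])   # Fraction(1, 6): six of the 36 outcomes sum to 7
```

Using exact `Fraction` arithmetic makes the identity hold with no rounding; the total mass of the convolved law is still 1.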
Equivalently,
$$\int_{\mathbb{R}^d} g(y)\,(p_X * p_Y)(dy) = \int_{\mathbb{R}^d} \int_{\mathbb{R}^d} g(x + y)\, p_X(dx)\, p_Y(dy)$$
for all $g \in B_b(\mathbb{R}^d)$ (the bounded Borel measurable functions on $\mathbb{R}^d$).

If $X$ and $Y$ are independent with densities $f_X$ and $f_Y$, respectively, then $X + Y$ has density $f_{X+Y}$ given by the convolution of functions:
$$f_{X+Y} = f_X * f_Y, \quad \text{where} \quad (f_X * f_Y)(x) = \int_{\mathbb{R}^d} f_X(x - y)\, f_Y(y)\, dy.$$
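For laws with densities, the function convolution can be checked numerically. A small sketch (the Uniform(0,1) summands and the midpoint Riemann sum are illustrative assumptions, not from the slides); the sum of two independent Uniform(0,1) variables has the triangular density on $[0, 2]$:

```python
def conv_density(fX, fY, x, n=2000, lo=0.0, hi=1.0):
    """Midpoint Riemann sum approximating (fX * fY)(x) = ∫ fX(x - y) fY(y) dy,
    integrating over [lo, hi] (the support of fY here)."""
    h = (hi - lo) / n
    return sum(fX(x - (lo + (k + 0.5) * h)) * fY(lo + (k + 0.5) * h)
               for k in range(n)) * h

uniform = lambda t: 1.0 if 0.0 <= t <= 1.0 else 0.0

# Density of the sum of two independent Uniform(0,1) variables,
# i.e. the triangular density: value x at x in [0,1], 2 - x at x in [1,2].
print(round(conv_density(uniform, uniform, 0.5), 3))  # ≈ 0.5
print(round(conv_density(uniform, uniform, 1.0), 3))  # ≈ 1.0
```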
The characteristic function of $X$ is $\phi_X : \mathbb{R}^d \to \mathbb{C}$, where
$$\phi_X(u) = \int_{\mathbb{R}^d} e^{i(u, x)}\, p_X(dx).$$

Theorem (Kac's theorem). $X_1, \ldots, X_n$ are independent if and only if
$$E\left[\exp\left(i \sum_{j=1}^{n} (u_j, X_j)\right)\right] = \phi_{X_1}(u_1) \cdots \phi_{X_n}(u_n)$$
for all $u_1, \ldots, u_n \in \mathbb{R}^d$.
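Since $\phi_X(u) = E[e^{i(u,X)}]$, a Monte Carlo average of $e^{iuX}$ over samples of $X$ should approach the characteristic function. A hedged sketch (the choice of $N(0,1)$, whose characteristic function is $e^{-u^2/2}$, and the sample size are illustrative assumptions):

```python
import cmath, math, random

def empirical_cf(samples, u):
    """Monte Carlo estimate of φ_X(u) = E[e^{iuX}] from i.i.d. samples."""
    return sum(cmath.exp(1j * u * x) for x in samples) / len(samples)

random.seed(0)
xs = [random.gauss(0.0, 1.0) for _ in range(100_000)]

u = 1.3
est = empirical_cf(xs, u)
exact = math.exp(-u * u / 2)   # φ of N(0,1) is e^{-u²/2}
print(abs(est - exact))        # small; the error is O(1/√N)
```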
More generally, the characteristic function of a probability measure $\mu$ on $\mathbb{R}^d$ is
$$\phi_\mu(u) = \int_{\mathbb{R}^d} e^{i(u, x)}\, \mu(dx).$$

Important properties are:
1. $\phi_\mu(0) = 1$.
2. $\phi_\mu$ is positive definite, i.e. $\sum_{i,j} c_i \bar{c}_j\, \phi_\mu(u_i - u_j) \ge 0$, for all $c_i \in \mathbb{C}$, $u_i \in \mathbb{R}^d$, $1 \le i, j \le n$, $n \in \mathbb{N}$.
3. $\phi_\mu$ is uniformly continuous. (Hint: look at $|\phi_\mu(u + h) - \phi_\mu(u)|$ and use dominated convergence.)

Also, the map $\mu \to \phi_\mu$ is injective.
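Property (2) can be probed numerically by evaluating the quadratic form $\sum_{i,j} c_i \bar{c}_j\, \phi(u_i - u_j)$ at random points and coefficients. In the sketch below, $\phi(u) = e^{-u^2/2}$ (the $N(0,1)$ characteristic function; an illustrative choice, not from the slides):

```python
import cmath, random

def quad_form(phi, us, cs):
    """Σ_{i,j} c_i conj(c_j) φ(u_i − u_j); nonnegative when φ is positive definite."""
    return sum(ci * cj.conjugate() * phi(ui - uj)
               for ci, ui in zip(cs, us) for cj, uj in zip(cs, us))

phi = lambda u: cmath.exp(-u * u / 2)   # characteristic function of N(0,1)

random.seed(1)
us = [random.uniform(-3.0, 3.0) for _ in range(6)]
cs = [complex(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(6)]

q = quad_form(phi, us, cs)
print(q.real >= 0.0, abs(q.imag) < 1e-9)   # the form is real and nonnegative
```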
Conversely, Bochner's theorem states that if $\phi : \mathbb{R}^d \to \mathbb{C}$ satisfies (1), (2) and is continuous at $u = 0$, then it is the characteristic function of some probability measure $\mu$ on $\mathbb{R}^d$.

$\psi : \mathbb{R}^d \to \mathbb{C}$ is conditionally positive definite if for all $n \in \mathbb{N}$ and $c_1, \ldots, c_n \in \mathbb{C}$ for which $\sum_{j=1}^{n} c_j = 0$ we have
$$\sum_{j,k=1}^{n} c_j \bar{c}_k\, \psi(u_j - u_k) \ge 0$$
for all $u_1, \ldots, u_n \in \mathbb{R}^d$.

Note: "conditionally positive definite" is sometimes called "negative definite" (with a minus sign).

$\psi : \mathbb{R}^d \to \mathbb{C}$ will be said to be hermitian if $\psi(u) = \overline{\psi(-u)}$ for all $u \in \mathbb{R}^d$.
Theorem (Schoenberg correspondence). $\psi : \mathbb{R}^d \to \mathbb{C}$ is hermitian and conditionally positive definite if and only if $e^{t\psi}$ is positive definite for each $t > 0$.

Proof. We only give the easy part here. Suppose that $e^{t\psi}$ is positive definite for all $t > 0$. Fix $n \in \mathbb{N}$ and choose $c_1, \ldots, c_n$ and $u_1, \ldots, u_n$ as above. We then find that for each $t > 0$,
$$\frac{1}{t} \sum_{j,k=1}^{n} c_j \bar{c}_k \left( e^{t \psi(u_j - u_k)} - 1 \right) \ge 0,$$
and so
$$\sum_{j,k=1}^{n} c_j \bar{c}_k\, \psi(u_j - u_k) = \lim_{t \to 0} \frac{1}{t} \sum_{j,k=1}^{n} c_j \bar{c}_k \left( e^{t \psi(u_j - u_k)} - 1 \right) \ge 0. \qquad \Box$$
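The limiting argument in the easy direction can be watched numerically. In the sketch below, $\psi(u) = -u^2$ (hermitian and conditionally positive definite, since $e^{-tu^2}$ is a Gaussian-type characteristic function); the choice of $\psi$ and the sample points are illustrative assumptions, not from the slides:

```python
import cmath, random

psi = lambda u: -u * u   # hermitian and conditionally positive definite

random.seed(2)
us = [random.uniform(-2.0, 2.0) for _ in range(5)]
cs = [complex(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(5)]
cs[-1] = -sum(cs[:-1])   # enforce the constraint Σ c_j = 0

def form(f):
    """Σ_{j,k} c_j conj(c_k) f(u_j − u_k)."""
    return sum(cj * ck.conjugate() * f(uj - uk)
               for cj, uj in zip(cs, us) for ck, uk in zip(cs, us))

target = form(psi)   # the conditionally-positive-definite quadratic form
approxs = [form(lambda u, t=t: (cmath.exp(t * psi(u)) - 1) / t)
           for t in (1.0, 0.1, 0.001)]
for t, a in zip((1.0, 0.1, 0.001), approxs):
    print(t, a.real)   # each is nonnegative; they approach target.real as t → 0
print(target.real)
```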
Infinite Divisibility

We study this first because a Lévy process is infinite divisibility in motion; i.e., infinite divisibility is the underlying probabilistic idea which a Lévy process embodies dynamically.

Let $\mu$ be a probability measure on $\mathbb{R}^d$. Define $\mu^{*n} = \mu * \cdots * \mu$ ($n$ times). We say that $\mu$ has a convolution $n$th root if there exists a probability measure $\mu^{1/n}$ for which $(\mu^{1/n})^{*n} = \mu$.

$\mu$ is infinitely divisible if it has a convolution $n$th root for all $n \in \mathbb{N}$. In this case $\mu^{1/n}$ is unique.
Theorem. $\mu$ is infinitely divisible iff for all $n \in \mathbb{N}$ there exists a probability measure $\mu_n$ with characteristic function $\phi_n$ such that
$$\phi_\mu(u) = (\phi_n(u))^n$$
for all $u \in \mathbb{R}^d$. Moreover $\mu_n = \mu^{1/n}$.

Proof. If $\mu$ is infinitely divisible, take $\phi_n = \phi_{\mu^{1/n}}$. Conversely, for each $n \in \mathbb{N}$, by Fubini's theorem,
$$\phi_\mu(u) = \int_{\mathbb{R}^d} \cdots \int_{\mathbb{R}^d} e^{i(u, y_1 + \cdots + y_n)}\, \mu_n(dy_1) \cdots \mu_n(dy_n) = \int_{\mathbb{R}^d} e^{i(u, y)}\, \mu_n^{*n}(dy).$$
But $\phi_\mu(u) = \int_{\mathbb{R}^d} e^{i(u, y)}\, \mu(dy)$ and $\phi$ determines $\mu$ uniquely. Hence $\mu = \mu_n^{*n}$. $\Box$
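The criterion $\phi_\mu(u) = (\phi_n(u))^n$ is easy to test for a concrete family. As an illustration (the Gamma example is a standard fact added here, not taken from the slides): a $\mathrm{Gamma}(a, \lambda)$ law has characteristic function $(1 - iu/\lambda)^{-a}$, so $\mu_n = \mathrm{Gamma}(a/n, \lambda)$ supplies the convolution $n$th root:

```python
def phi_gamma(a, lam, u):
    """Characteristic function of a Gamma(a, λ) law: (1 − iu/λ)^(−a).
    (Standard fact, stated here as an assumption for illustration.)"""
    return (1 - 1j * u / lam) ** (-a)

a, lam, n, u = 2.0, 1.5, 7, 0.8
lhs = phi_gamma(a, lam, u)
rhs = phi_gamma(a / n, lam, u) ** n   # μ_n = Gamma(a/n, λ)
print(abs(lhs - rhs))                 # tiny: differs only by floating-point rounding
```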
- If $\mu$ and $\nu$ are each infinitely divisible, then so is $\mu * \nu$.
- If $(\mu_n, n \in \mathbb{N})$ are infinitely divisible and $\mu_n \Rightarrow \mu$, then $\mu$ is infinitely divisible.

[Note: Weak convergence. $\mu_n \Rightarrow \mu$ means
$$\lim_{n \to \infty} \int_{\mathbb{R}^d} f(x)\, \mu_n(dx) = \int_{\mathbb{R}^d} f(x)\, \mu(dx)$$
for each $f \in C_b(\mathbb{R}^d)$.]

A random variable $X$ is infinitely divisible if its law $p_X$ is infinitely divisible; equivalently, for each $n \in \mathbb{N}$, $X \overset{d}{=} Y_1^{(n)} + \cdots + Y_n^{(n)}$, where $Y_1^{(n)}, \ldots, Y_n^{(n)}$ are i.i.d.
Examples of Infinite Divisibility

In the following, we will demonstrate infinite divisibility of a random variable $X$ by finding i.i.d. $Y_1^{(n)}, \ldots, Y_n^{(n)}$ such that $X \overset{d}{=} Y_1^{(n)} + \cdots + Y_n^{(n)}$ for each $n \in \mathbb{N}$.

Example 1 - Gaussian Random Variables

Let $X = (X_1, \ldots, X_d)$ be a random vector. We say that it is (non-degenerate) Gaussian if there exists a vector $m \in \mathbb{R}^d$ and a strictly positive-definite symmetric $d \times d$ matrix $A$ such that $X$ has a pdf (probability density function) of the form
$$f(x) = \frac{1}{(2\pi)^{d/2} \sqrt{\det(A)}} \exp\left( -\frac{1}{2} (x - m, A^{-1}(x - m)) \right), \tag{1.1}$$
for all $x \in \mathbb{R}^d$.

In this case we will write $X \sim N(m, A)$. The vector $m$ is the mean of $X$, so $m = E(X)$, and $A$ is the covariance matrix, so that $A = E((X - m)(X - m)^T)$.
A standard calculation yields
$$\phi_X(u) = \exp\left\{ i(m, u) - \frac{1}{2}(u, Au) \right\}, \tag{1.2}$$
and hence
$$(\phi_X(u))^{\frac{1}{n}} = \exp\left\{ i\left( \frac{m}{n}, u \right) - \frac{1}{2}\left( u, \frac{1}{n} A u \right) \right\},$$
so we see that $X$ is infinitely divisible with each $Y_j^{(n)} \sim N\left( \frac{m}{n}, \frac{1}{n} A \right)$ for each $1 \le j \le n$.

We say that $X$ is a standard normal whenever $X \sim N(0, \sigma^2 I)$ for some $\sigma > 0$.

We say that $X$ is degenerate Gaussian if (1.2) holds with $\det(A) = 0$, and these random variables are also infinitely divisible.
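The Gaussian decomposition can be seen in simulation: summing $n$ i.i.d. $N(m/n, \sigma^2/n)$ draws reproduces the mean and variance of $N(m, \sigma^2)$. A one-dimensional sketch (the parameter values and sample size are illustrative assumptions):

```python
import random, statistics

random.seed(3)
m, sigma2, n = 1.5, 4.0, 6
N = 50_000

# Each sample of X is built as Y_1 + ... + Y_n with Y_j ~ N(m/n, sigma2/n) i.i.d.
sums = [sum(random.gauss(m / n, (sigma2 / n) ** 0.5) for _ in range(n))
        for _ in range(N)]

mean_est = statistics.fmean(sums)
var_est = statistics.variance(sums)
print(mean_est, var_est)   # close to m = 1.5 and sigma2 = 4.0
```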
Example 2 - Poisson Random Variables

In this case, we take $d = 1$ and consider a random variable $X$ taking values in the set $\mathbb{N} \cup \{0\}$. We say that $X$ is Poisson if there exists $c > 0$ for which
$$P(X = n) = \frac{c^n}{n!} e^{-c}.$$
In this case we will write $X \sim \pi(c)$. We have $E(X) = \mathrm{Var}(X) = c$. It is easy to verify that
$$\phi_X(u) = \exp[c(e^{iu} - 1)],$$
from which we deduce that $X$ is infinitely divisible with each $Y_j^{(n)} \sim \pi\left( \frac{c}{n} \right)$, for $1 \le j \le n$, $n \in \mathbb{N}$.
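The decomposition $Y_j^{(n)} \sim \pi(c/n)$ can be verified directly at the level of probability mass functions: convolving the pmf of $\pi(c/n)$ with itself $n$ times recovers the pmf of $\pi(c)$. A sketch (the values $c = 2$, $n = 4$ and the truncation level are illustrative assumptions):

```python
import math

def pois_pmf(c, K):
    """pmf of π(c) on {0, 1, ..., K}."""
    return [math.exp(-c) * c ** k / math.factorial(k) for k in range(K + 1)]

def conv(p, q):
    """Convolution of two pmfs supported on {0, 1, ...}, truncated to len(p).
    Entries up to index K are exact, since only indices ≤ k contribute."""
    return [sum(p[j] * q[k - j] for j in range(k + 1)) for k in range(len(p))]

c, n, K = 2.0, 4, 30
root = pois_pmf(c / n, K)      # candidate convolution nth root: π(c/n)
acc = root
for _ in range(n - 1):
    acc = conv(acc, root)      # π(c/n) * π(c/n) * π(c/n) * π(c/n)

target = pois_pmf(c, K)
err = max(abs(a - b) for a, b in zip(acc, target))
print(err)   # tiny: π(c/n)^{*n} and π(c) agree up to rounding
```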
Example 3 - Compound Poisson Random Variables

Let $(Z(n), n \in \mathbb{N})$ be a sequence of i.i.d. random variables taking values in $\mathbb{R}^d$ with common law $\mu_Z$, and let $N \sim \pi(c)$ be a Poisson random variable which is independent of all the $Z(n)$'s. The compound Poisson random variable $X$ is defined as follows:
$$X := \begin{cases} 0 & \text{if } N = 0, \\ Z(1) + \cdots + Z(N) & \text{if } N > 0. \end{cases}$$

Theorem. For each $u \in \mathbb{R}^d$,
$$\phi_X(u) = \exp\left[ \int_{\mathbb{R}^d} (e^{i(u, y)} - 1)\, c\, \mu_Z(dy) \right].$$
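The theorem can be checked by Monte Carlo: simulate X = Z(1) + · · · + Z(N) and compare the empirical characteristic function with exp[c(φ_Z(u) − 1)]. The sketch below assumes, purely for illustration, d = 1 and Z ∼ N(0, 1), so that φ_Z(u) = e^{−u²/2}:

```python
import cmath
import math
import random

random.seed(0)

def compound_poisson_sample(c, rng):
    """One draw of X = Z(1)+...+Z(N) with N ~ pi(c), Z(j) i.i.d. N(0,1)."""
    # Knuth's product method for Poisson sampling.
    n, prod, thresh = 0, rng.random(), math.exp(-c)
    while prod > thresh:
        n += 1
        prod *= rng.random()
    return sum(rng.gauss(0.0, 1.0) for _ in range(n))

c, u, m = 3.0, 0.5, 200_000
emp = sum(cmath.exp(1j * u * compound_poisson_sample(c, random))
          for _ in range(m)) / m

# Theorem (with phi_Z(u) = e^{-u^2/2} for standard normal jumps):
theory = cmath.exp(c * (cmath.exp(-u * u / 2) - 1))
print(abs(emp - theory))  # small Monte Carlo error, O(1/sqrt(m))
```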
Proof. Let φ_Z be the common characteristic function of the Z_n's. By conditioning and using independence we find

    φ_X(u) = Σ_{n=0}^∞ E(e^{i(u, Z(1)+···+Z(N))} | N = n) P(N = n)
           = Σ_{n=0}^∞ E(e^{i(u, Z(1)+···+Z(n))}) e^{−c} c^n / n!
           = e^{−c} Σ_{n=0}^∞ [c φ_Z(u)]^n / n!
           = exp[c(φ_Z(u) − 1)],

and the result follows on writing φ_Z(u) = ∫_{R^d} e^{i(u,y)} μ_Z(dy). □

If X is compound Poisson as above, we write X ∼ π(c, μ_Z). It is clearly infinitely divisible with each Y_j^{(n)} ∼ π(c/n, μ_Z), for 1 ≤ j ≤ n.

Dave Applebaum (Sheffield UK)   Lecture 1   December 2011   20 / 40
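The key step of the proof — recognising Σ [cφ_Z(u)]^n e^{−c}/n! as the exponential series — can be verified directly by summing the series and comparing with exp[c(φ_Z(u) − 1)]. A quick sketch (taking Z ∼ N(0, 1) as an assumed illustrative choice):

```python
import cmath
import math

def cf_from_series(phi_z_u, c, terms=80):
    """Sum e^{-c} * [c * phi_Z(u)]^n / n!  -- the series appearing in the proof."""
    total, term = 0j, math.exp(-c)       # n = 0 term
    for n in range(terms):
        total += term
        term *= c * phi_z_u / (n + 1)    # ratio of consecutive series terms
    return total

c, u = 2.5, 1.3
phi_z = cmath.exp(-u * u / 2)            # phi_Z(u) for standard normal Z (assumed)
series = cf_from_series(phi_z, c)
closed = cmath.exp(c * (phi_z - 1))      # exp[c(phi_Z(u) - 1)]
print(abs(series - closed))              # agrees to rounding error
```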
The Lévy-Khintchine Formula

de Finetti (1920s) suggested that the most general infinitely divisible random variable could be written X = Y + W, where Y and W are independent, Y ∼ N(m, A), W ∼ π(c, μ_Z). Then φ_X(u) = e^{η(u)}, where

    η(u) = i(m, u) − (1/2)(u, Au) + ∫_{R^d} (e^{i(u,y)} − 1) c μ_Z(dy).    (1.3)

This is WRONG! ν(·) = c μ_Z(·) is a finite measure here. Lévy and Khintchine showed that ν can be σ-finite, provided it is what is now called a Lévy measure on R^d − {0} = {x ∈ R^d, x ≠ 0}, i.e.

    ∫ (|y|² ∧ 1) ν(dy) < ∞,    (1.4)

(where a ∧ b := min{a, b}, for a, b ∈ R). Since |y|² ∧ ε ≤ |y|² ∧ 1 whenever 0 < ε ≤ 1, it follows from (1.4) that ν((−ε, ε)^c) < ∞ for all ε > 0.

Dave Applebaum (Sheffield UK)   Lecture 1   December 2011   21 / 40
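Condition (1.4) is worth seeing on a concrete σ-finite example. Take the α-stable prototype ν(dy) = y^{−1−α} dy on (0, ∞) with 0 < α < 2 — an assumed example, not one from the slide: ν puts infinite mass near 0, yet ∫ (y² ∧ 1) ν(dy) = 1/(2 − α) + 1/α < ∞, and every tail ν((ε, ∞)) is finite, matching the last remark above. A numerical sanity check:

```python
import math

alpha = 0.5   # any 0 < alpha < 2 works

def integrate(f, a, b, steps=200_000):
    """Plain midpoint rule -- crude, but enough for a sanity check."""
    h = (b - a) / steps
    return h * sum(f(a + (k + 0.5) * h) for k in range(steps))

nu = lambda y: y ** (-1 - alpha)   # Levy density y^{-1-alpha} on (0, inf)

# (1.4): split the integral of (y^2 ^ 1) nu(dy) at y = 1.
small = integrate(lambda y: y * y * nu(y), 1e-6, 1.0)   # ~ 1/(2 - alpha)
large = integrate(nu, 1.0, 1e3)                         # ~ 1/alpha (tail truncated)
print(small + large)   # finite, even though nu((0, eps)) = infinity

# Tail mass nu((eps, inf)) is finite for every eps > 0, as the slide notes.
eps = 0.1
tail = integrate(nu, eps, 1e3)   # ~ eps^{-alpha} / alpha
```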
