Stochastic Processes
SOLO HERMELIN
Updated: 10.05.11
15.06.14
http://www.solohermelin.com
SOLO Stochastic Processes
Table of Content
Random Variables
Stochastic Differential Equation (SDE)
Brownian Motion
Smoluchowski Equation
Langevin Equation
Lévy Process
Martingale
Chapman – Kolmogorov Equation
Itô Lemma and Itô Processes
Stratonovich Stochastic Calculus
Fokker – Planck Equation
Kolmogorov forward equation (KFE) and its adjoint the
Kolmogorov backward equation (KBE)
Propagation Equation
SOLO Stochastic Processes
Table of Content (continue)
Bartlett-Moyal Theorem
Feller- Kolmogorov Equation
Langevin and Fokker- Planck Equations
Generalized Fokker - Planck Equation
Karhunen-Loève Theorem
References
SOLO Random Processes

Random Variable:
A variable x determined by the outcome Ω of a random experiment: $x = x(\Omega)$

Random Process or Stochastic Process:
A function of time x determined by the outcome Ω of a random experiment: $x(t) = x(t, \Omega)$

This is a family or an ensemble of functions of time, in general different for each outcome Ω
(figure: sample functions of t for outcomes Ω1, Ω2, Ω3, Ω4).

Mean or Ensemble Average of the Random Process:
$$\bar{x}(t) := E\left[x(t,\Omega)\right] = \int_{-\infty}^{+\infty} \xi\, p_x(\xi, t)\, d\xi$$

Autocorrelation of the Random Process:
$$R(t_1, t_2) := E\left[x(t_1,\Omega)\, x(t_2,\Omega)\right]
= \int_{-\infty}^{+\infty}\int_{-\infty}^{+\infty} \xi\,\eta\; p_x\left(\xi, t_1;\,\eta, t_2\right)\, d\xi\, d\eta$$

Autocovariance of the Random Process:
$$C(t_1, t_2) := E\left\{\left[x(t_1,\Omega) - \bar{x}(t_1)\right]\left[x(t_2,\Omega) - \bar{x}(t_2)\right]\right\}
= E\left[x(t_1,\Omega)\, x(t_2,\Omega)\right] - \bar{x}(t_1)\,\bar{x}(t_2)
= R(t_1, t_2) - \bar{x}(t_1)\,\bar{x}(t_2)$$
Table of Content
SOLO Random Processes

Stationarity of a Random Process

1. Wide-Sense Stationarity of a Random Process:

• The Mean of the Random Process is time invariant:
$$\bar{x}(t) := E\left[x(t,\Omega)\right] = \int_{-\infty}^{+\infty} \xi\, p_x(\xi, t)\, d\xi = \bar{x} = \text{const.}$$

• The Autocorrelation of the Random Process depends only on the time difference:
$$R(t_1, t_2) = R(t_1 - t_2) =: R(\tau),\qquad \tau := t_1 - t_2$$
since
$$R(t_1, t_2) := E\left[x(t_1,\Omega)\, x(t_2,\Omega)\right]
= \int_{-\infty}^{+\infty}\int_{-\infty}^{+\infty} \xi\,\eta\; p_x\left(\xi, t_1;\,\eta, t_2\right)\, d\xi\, d\eta = R(t_2, t_1)$$
we have $R(\tau) = R(-\tau)$.

Power Spectrum or Power Spectral Density of a Stationary Random Process:
$$S(\omega) := \int_{-\infty}^{+\infty} R(\tau)\,\exp\left(-j\,\omega\,\tau\right) d\tau$$

2. Strict-Sense Stationarity of a Random Process:

All probability density functions are time invariant: $p_{x(t)}(\xi, t) = p_x(\xi) = \text{const. in } t$

Ergodicity:

A Stationary Random Process for which Time Average = Ensemble Average:
$$\bar{x}(\Omega) := \lim_{T\to\infty}\frac{1}{2T}\int_{-T}^{+T} x(t,\Omega)\, dt
\overset{\text{Ergodicity}}{=} E\left[x(t,\Omega)\right]$$
SOLO Random Processes

Ergodicity (continue):

Time Autocorrelation:
For an Ergodic Random Process define
$$R(\tau) := \lim_{T\to\infty}\frac{1}{2T}\int_{-T}^{+T} x(t,\Omega)\, x(t+\tau,\Omega)\, dt$$

Finite Signal Energy Assumption:
$$R(0) = \lim_{T\to\infty}\frac{1}{2T}\int_{-T}^{+T} x^2(t,\Omega)\, dt < \infty$$

Define the truncated process (figure: $x_T(t)$ equals $x(t)$ on $[-T, +T]$ and is zero outside):
$$x_T(t,\Omega) := \begin{cases} x(t,\Omega) & -T \le t \le T\\ 0 & \text{otherwise}\end{cases}
\qquad
R_T(\tau) := \frac{1}{2T}\int_{-\infty}^{+\infty} x_T(t,\Omega)\, x_T(t+\tau,\Omega)\, dt$$

Since $x_T(t+\tau,\Omega)$ vanishes for $t > T-\tau$:
$$R_T(\tau) = \frac{1}{2T}\int_{-T}^{T-\tau} x(t,\Omega)\, x(t+\tau,\Omega)\, dt
= \frac{1}{2T}\int_{-T}^{T} x(t,\Omega)\, x(t+\tau,\Omega)\, dt
- \frac{1}{2T}\int_{T-\tau}^{T} x(t,\Omega)\, x(t+\tau,\Omega)\, dt$$

Let us compute:
$$\lim_{T\to\infty} R_T(\tau)
= \lim_{T\to\infty}\frac{1}{2T}\int_{-T}^{T} x(t,\Omega)\, x(t+\tau,\Omega)\, dt
- \lim_{T\to\infty}\frac{1}{2T}\int_{T-\tau}^{T} x(t,\Omega)\, x(t+\tau,\Omega)\, dt$$

The first term is, by definition,
$$\lim_{T\to\infty}\frac{1}{2T}\int_{-T}^{T} x(t,\Omega)\, x(t+\tau,\Omega)\, dt = R(\tau)$$
and the second term vanishes:
$$\lim_{T\to\infty}\left|\frac{1}{2T}\int_{T-\tau}^{T} x(t,\Omega)\, x(t+\tau,\Omega)\, dt\right|
\le \lim_{T\to\infty}\frac{\tau}{2T}\sup_{T-\tau\le t\le T}\left|x(t,\Omega)\, x(t+\tau,\Omega)\right| \to 0$$

therefore:
$$\lim_{T\to\infty} R_T(\tau) = R(\tau)$$
SOLO Random Processes

Ergodicity (continue):

Let us compute:
$$\int_{-\infty}^{+\infty} R_T(\tau)\, e^{-j\omega\tau}\, d\tau
= \frac{1}{2T}\int_{-\infty}^{+\infty}\int_{-\infty}^{+\infty}
x_T(t,\Omega)\, x_T(t+\tau,\Omega)\, e^{-j\omega\tau}\, dt\, d\tau$$
$$= \frac{1}{2T}\int_{-\infty}^{+\infty} x_T(t,\Omega)\, e^{\,j\omega t}\, dt
\int_{-\infty}^{+\infty} x_T(t+\tau,\Omega)\, e^{-j\omega (t+\tau)}\, d\tau
= \frac{1}{2T}\int_{-\infty}^{+\infty} x_T(t,\Omega)\, e^{\,j\omega t}\, dt
\int_{-\infty}^{+\infty} x_T(v,\Omega)\, e^{-j\omega v}\, dv
= \frac{1}{2T}\, X_T^{*}\, X_T$$

where
$$X_T := \int_{-\infty}^{+\infty} x_T(v,\Omega)\, e^{-j\omega v}\, dv$$
and * means complex conjugate.

Define:
$$S(\omega) := \lim_{T\to\infty} E\left\{\frac{X_T\, X_T^{*}}{2T}\right\}
= \lim_{T\to\infty} E\left\{\int_{-\infty}^{+\infty} R_T(\tau)\, e^{-j\omega\tau}\, d\tau\right\}
= \lim_{T\to\infty}\int_{-\infty}^{+\infty} e^{-j\omega\tau}\,
E\left\{\frac{1}{2T}\int_{-T}^{+T} x_T(t,\Omega)\, x_T(t+\tau,\Omega)\, dt\right\} d\tau$$

Since the Random Process is Ergodic we can use the Wide-Sense Stationarity Assumption:
$$E\left[x_T(t,\Omega)\, x_T(t+\tau,\Omega)\right] = R(\tau)$$

so that
$$\lim_{T\to\infty} E\left\{\frac{X_T\, X_T^{*}}{2T}\right\}
= \lim_{T\to\infty}\int_{-\infty}^{+\infty} e^{-j\omega\tau}
\left(\frac{1}{2T}\int_{-T}^{+T} R(\tau)\, dt\right) d\tau
= \int_{-\infty}^{+\infty} R(\tau)\, e^{-j\omega\tau}\, d\tau$$
SOLO Random Processes

Ergodicity (continue):

We obtained the Wiener–Khinchine Theorem (Wiener 1930):
$$S(\omega) := \lim_{T\to\infty} E\left\{\frac{X_T\, X_T^{*}}{2T}\right\}
= \int_{-\infty}^{+\infty} R(\tau)\, e^{-j\omega\tau}\, d\tau$$
Norbert Wiener
1894 - 1964
Alexander Yakovlevich
Khinchine
1894 - 1959
The Power Spectrum or Power Spectral Density of
a Stationary Random Process S (ω) is the Fourier
Transform of the Autocorrelation Function R (τ).
Random Processes
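As a quick numerical illustration of the Wiener–Khinchine theorem, the sketch below is a minimal Python/NumPy example (an assumed environment; the deck itself contains no code, and the AR(1) model is an illustrative choice). It estimates the autocorrelation of a stationary process, forms the Fourier transform of that autocorrelation, and compares it with an averaged periodogram and with the known analytic spectrum.

```python
import numpy as np

# Minimal sketch (assumed AR(1) model): x[k] = a*x[k-1] + w[k] with unit-variance white w
# is wide-sense stationary, so its PSD should equal the Fourier transform of R(tau).
rng = np.random.default_rng(0)
a, n = 0.9, 200_000
w = rng.standard_normal(n)
x = np.zeros(n)
for k in range(1, n):
    x[k] = a * x[k - 1] + w[k]

# Estimate the autocorrelation R(m) for lags |m| <= m_max.
m_max = 200
lags = np.arange(-m_max, m_max + 1)
R = np.array([np.mean(x[:n - abs(m)] * x[abs(m):]) for m in lags])

# Wiener-Khinchine: S(omega) is the Fourier transform of R(tau).
omega = np.linspace(-np.pi, np.pi, 401)
S_from_R = np.array([np.sum(R * np.exp(-1j * wk * lags)).real for wk in omega])

# Direct check: averaged periodogram E{|X_T|^2 / (2T)} over 100 segments.
seg = n // 100
X = np.fft.fft(x[: 100 * seg].reshape(100, seg), axis=1)
S_periodogram = (np.abs(X) ** 2 / seg).mean(axis=0)

# Analytic PSD of the AR(1) process: S(omega) = 1 / |1 - a exp(-j omega)|^2.
S_theory = 1.0 / np.abs(1 - a * np.exp(-1j * omega)) ** 2
print(S_from_R[200], S_periodogram[0], S_theory[200])   # all near 100 at omega = 0
```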
SOLO Random Processes

White Noise

Wide-Sense Whiteness

A (not necessarily stationary) Random Process whose Autocorrelation is zero for
any two different times is called white noise in the wide sense:
$$R(t_1, t_2) = E\left[x(t_1,\Omega)\, x(t_2,\Omega)\right] = \sigma^2(t_1)\,\delta\left(t_1 - t_2\right)$$
where $\sigma^2(t_1)$ is the instantaneous variance.

Strict-Sense Whiteness

A (not necessarily stationary) Random Process in which the outcomes at any two
different times are independent is called white noise in the strict sense, i.e. the
joint density factors for $t_1 \ne t_2$:
$$p_{x(t_1),x(t_2)}\left(\xi_1, t_1;\,\xi_2, t_2\right)
= p_{x(t_1)}\left(\xi_1, t_1\right)\; p_{x(t_2)}\left(\xi_2, t_2\right)$$

A Stationary White Noise Random Process has the Autocorrelation:
$$R(\tau) = E\left[x(t,\Omega)\, x(t+\tau,\Omega)\right] = \sigma^2\,\delta(\tau)$$

Note
In general whiteness requires Strict-Sense Whiteness. In practice we have only
moments (typically up to second order) and thus only Wide-Sense Whiteness.
SOLO Random Processes

White Noise (continue)

A Stationary White Noise Random Process has the Autocorrelation:
$$R(\tau) = E\left[x(t,\Omega)\, x(t+\tau,\Omega)\right] = \sigma^2\,\delta(\tau)$$

The Power Spectral Density is given by performing the Fourier Transform of the
Autocorrelation:
$$S(\omega) = \int_{-\infty}^{+\infty} R(\tau)\, e^{-j\omega\tau}\, d\tau
= \sigma^2\int_{-\infty}^{+\infty}\delta(\tau)\, e^{-j\omega\tau}\, d\tau = \sigma^2$$

(Figure: $S(\omega) = \sigma^2$, a constant level over all frequencies ω.)

We can see that the Power Spectral Density contains all frequencies at the same
amplitude. This is the reason it is called White Noise.

The Power of the Noise is defined as:
$$P := \int_{-\infty}^{+\infty} R(\tau)\, d\tau = S(\omega)\big|_{\omega=0} = \sigma^2$$
SOLO Random Processes

Markov Processes

Andrei Andreevich Markov
1856 - 1922

A Markov Process is defined by:
$$p\left(x(\tau,\Omega)\;\middle|\; x(t,\Omega),\; t\le t_1\right)
= p\left(x(\tau,\Omega)\;\middle|\; x(t_1,\Omega)\right)\qquad\forall\,\tau > t_1$$
i.e. for the Random Process, the past up to any time $t_1$ is fully summarized by the value of the process at $t_1$.

Examples of Markov Processes:

1. Continuous Dynamic System
$$\dot{x}(t) = f\left(t, x, u, v\right),\qquad z(t) = h\left(t, x, u, w\right)$$

2. Discrete Dynamic System
$$x_{k+1} = f_k\left(t_k, x_k, u_k, v_k\right),\qquad z_k = h_k\left(t_k, x_k, u_k, w_k\right)$$

x - state space vector (n x 1)
u - input vector (m x 1)
v - white input noise vector (n x 1)
z - measurement vector (p x 1)
w - white measurement noise vector (p x 1)
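The discrete dynamic system above is straightforward to simulate. The sketch below is a minimal Python/NumPy example (an assumed environment; the linear matrices F, H, Q, R are illustrative, not from the slides) that propagates a linear state-space model driven by white process noise and produces noisy measurements — the standard Markov-process setting used later for filtering.

```python
import numpy as np

# Minimal sketch of x_{k+1} = f_k(x_k, u_k, v_k), z_k = h_k(x_k, w_k), specialized to a
# linear system; the matrices below are illustrative assumptions.
rng = np.random.default_rng(1)
F = np.array([[1.0, 0.1], [0.0, 1.0]])   # state transition (n x n)
H = np.array([[1.0, 0.0]])               # measurement matrix (p x n)
Q = 0.01 * np.eye(2)                     # white input-noise covariance
R = np.array([[0.25]])                   # white measurement-noise covariance

x = np.zeros(2)
states, measurements = [], []
for k in range(100):
    v = rng.multivariate_normal(np.zeros(2), Q)   # white input noise v_k
    w = rng.multivariate_normal(np.zeros(1), R)   # white measurement noise w_k
    x = F @ x + v                                 # Markov property: x_{k+1} depends only on x_k
    z = H @ x + w
    states.append(x.copy())
    measurements.append(z.copy())

print(np.array(states).shape, np.array(measurements).shape)   # (100, 2) (100, 1)
```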
Random Processes
Table of Content
SOLO Stochastic Processes

Stochastic Differential Equation (SDE)

Terminology

A stochastic differential equation (SDE) is a differential equation in which one or more of the
terms is a stochastic process, thus resulting in a solution which is itself a stochastic process.
SDEs are used to model diverse phenomena such as fluctuating stock prices or physical systems
subject to thermal fluctuations. Typically, SDEs incorporate white noise, which can be thought
of as the derivative of Brownian motion (or the Wiener process); however, it should be
mentioned that other types of random fluctuations are possible, such as jump processes.

Background

The earliest work on SDEs was done to describe Brownian motion in Einstein's famous paper,
and at the same time by Smoluchowski. However, one of the earlier works related to
Brownian motion is credited to Bachelier (1900) in his thesis "Theory of Speculation". This
work was followed up by Langevin. Later Itô and Stratonovich put SDEs on a more solid
mathematical footing.

In physical science, SDEs are usually written as Langevin equations. These are sometimes
confusingly called "the Langevin equation" even though there are many possible forms. They
consist of an ordinary differential equation containing a deterministic part and an additional
random white-noise term. A second form is the Smoluchowski equation and, more generally,
the Fokker–Planck equation. These are partial differential equations that describe the time
evolution of probability distribution functions. The third form is the stochastic differential
equation used most frequently in mathematics and quantitative finance (see below). This is
similar to the Langevin form, but it is usually written in differential form. SDEs come in two
varieties, corresponding to the two versions of stochastic calculus.
SOLO Stochastic Processes

Stochastic Calculus

Brownian motion, or the Wiener process, was discovered to be exceptionally complex
mathematically. The Wiener process is nowhere differentiable; thus, it requires its own rules of
calculus. There are two dominating versions of stochastic calculus, the Itô stochastic calculus and
the Stratonovich stochastic calculus. Each of the two has advantages and disadvantages, and
newcomers are often confused about which one is more appropriate in a given situation.
Guidelines exist, and conveniently one can readily convert an Itô SDE to an equivalent
Stratonovich SDE and back again. Still, one must be careful which calculus to use when the SDE is
initially written down.

Table of Content
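To make the Itô/Stratonovich distinction concrete, the sketch below is a minimal Python/NumPy example (an assumed environment with illustrative parameters). It integrates the same multiplicative-noise SDE dX = aX dt + bX dB under the Itô reading (Euler–Maruyama) and under the Stratonovich reading (stochastic Heun), using identical Brownian increments, and compares both with their exact solutions; the two interpretations differ by the drift correction ½b²X.

```python
import numpy as np

# Minimal sketch (assumed parameters): dX = a*X dt + b*X dB read two ways.
rng = np.random.default_rng(2)
a, b, x0 = 0.5, 0.8, 1.0
T, n = 1.0, 20_000
dt = T / n
dB = rng.standard_normal(n) * np.sqrt(dt)

x_ito, x_str = x0, x0
for k in range(n):
    # Euler-Maruyama: diffusion evaluated at the left endpoint (Ito interpretation).
    x_ito += a * x_ito * dt + b * x_ito * dB[k]
    # Stochastic Heun: predictor/corrector average (Stratonovich interpretation).
    x_pred = x_str + a * x_str * dt + b * x_str * dB[k]
    x_str += a * 0.5 * (x_str + x_pred) * dt + b * 0.5 * (x_str + x_pred) * dB[k]

B = dB.sum()
print("Ito numeric          :", x_ito)
print("Ito exact            :", x0 * np.exp((a - 0.5 * b**2) * T + b * B))
print("Stratonovich numeric :", x_str)
print("Stratonovich exact   :", x0 * np.exp(a * T + b * B))
```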
SOLO Stochastic Processes
Brownian Motion
In 1827 Brown, a botanist, discovered the motion of pollen particles in
water. At the beginning of the twentieth century, Brownian motion was
studied by Einstein, Perrin and other physicists. In 1923, against this
scientific background, Wiener defined probability measures in path spaces,
and used the concept of Lebesgue integrals to lay the mathematical
foundations of stochastic analysis. In 1942, Ito began to reconstruct from
scratch the concept of stochastic integrals, and its associated theory of
analysis. He created the theory of stochastic differential equations, which
describe motion due to random events. Albert Einstein
1879 - 1955
Norbert Wiener
1894 - 1964
Henri Léon
Lebesgue
1875-1941
Robert Brown
1773–1858
It was Albert Einstein's (in his 1905 paper) and Marian Smoluchowski's (1906)
independent research of the problem that brought the solution to the
attention of physicists, and presented it as a way to indirectly confirm the
existence of atoms and molecules.
Marian Ritter
von Smolan
Smoluchowski
1872 - 1917
Kiyosi Itô
1915-2008
SOLO Stochastic Processes

Random Walk

Assume the process of walking on a straight line at discrete intervals T. At each step
we walk a distance s, randomly, to the left or to the right, with the same probability
p = 1/2. In this way we have created a Stochastic Process called a Random Walk. (This
experiment is equivalent to tossing a coin to get, randomly, Head or Tail.)

Assume that at t = n T we have taken k steps to the right and n−k steps to the left; then
the distance traveled is
$$x(nT) = k\,s - (n-k)\,s = (2k - n)\,s$$

x(nT) is a Random Walk, taking the values r s, where r equals n, n−2, …, −(n−2), −n:
$$x(nT) = r\,s \;\Rightarrow\; k = \frac{n + r}{2}$$

Therefore
$$P\left\{x(nT) = r\,s\right\} = P\left\{k = \frac{n+r}{2}\right\}
= \binom{n}{\frac{n+r}{2}}\left(\frac{1}{2}\right)^{n}$$
SOLO Stochastic Processes

Random Walk (continue – 1)

The random value is $x(nT) = x_1 + x_2 + \cdots + x_n$.

We have at step i the event $x_i$: $P\{x_i = +s\} = p = 1/2$ and $P\{x_i = -s\} = 1 - p = 1/2$.

For large n (De Moivre–Laplace approximation of the binomial):
$$P\left\{x(nT) = r\,s\right\} = P\left\{k = \frac{n+r}{2}\right\}
\approx \frac{1}{\sqrt{2\pi\, n\, p\,(1-p)}}\; e^{-\frac{(k - n p)^2}{2 n p (1-p)}}
= \sqrt{\frac{2}{\pi n}}\; e^{-r^2/(2n)}$$

The step moments are
$$E\{x_i\} = (+s)\,P\{x_i = +s\} + (-s)\,P\{x_i = -s\} = 0$$
$$E\{x_i^2\} = s^2\,P\{x_i = +s\} + s^2\,P\{x_i = -s\} = s^2$$
$$E\{x_i\, x_j\} \overset{x_i,\,x_j\ \text{independent}}{=}
\begin{cases} E\{x_i\}\,E\{x_j\} = 0 & i \ne j\\[2pt] E\{x_i^2\} = s^2 & i = j\end{cases}$$
so
$$E\{x(nT)\} = E\{x_1\} + E\{x_2\} + \cdots + E\{x_n\} = 0$$
$$E\{x^2(nT)\} = \sum_{i=1}^{n}\sum_{j=1}^{n} E\{x_i\, x_j\}
= E\{x_1^2\} + \cdots + E\{x_n^2\} = n\, s^2$$

For large n we also have
$$P\left\{x(nT) \le r\,s\right\}
\approx \frac{1}{2} + \frac{1}{\sqrt{2\pi}}\int_{0}^{r/\sqrt{n}} e^{-y^2/2}\, dy
= \frac{1}{2}\left[1 + \mathrm{erf}\!\left(\frac{r}{\sqrt{2 n}}\right)\right]$$
SOLO Stochastic Processes

Random Walk (continue – 2)

For n1 > n2 > n3 > n4, the number of steps to the right taken in the interval from n2T to n1T is
independent of the number of steps to the right taken in the interval from n4T to n3T.
Hence x(n1T) − x(n2T) is independent of x(n4T) − x(n3T): the Random Walk has independent increments.
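A short simulation makes the moment results above tangible. The sketch below is a minimal Python/NumPy example (an assumed environment with illustrative n, s values) that generates many independent random walks and checks E{x(nT)} ≈ 0, E{x²(nT)} ≈ n s², and the Gaussian limit predicted by the De Moivre–Laplace approximation.

```python
import numpy as np
from math import erf

# Minimal sketch: ensemble of symmetric random walks with step size s.
rng = np.random.default_rng(3)
n, s, n_walks = 400, 0.5, 50_000
steps = rng.choice([-s, +s], size=(n_walks, n))      # x_i = +/- s with p = 1/2
x_nT = steps.sum(axis=1)                             # x(nT) = x_1 + ... + x_n

print("E{x(nT)}   :", x_nT.mean(), "(expected 0)")
print("E{x^2(nT)} :", (x_nT**2).mean(), "(expected n*s^2 =", n * s**2, ")")

# Gaussian limit: P{x(nT) <= r*s} ~ 0.5*(1 + erf(r / sqrt(2n)))
r = 20
empirical = np.mean(x_nT <= r * s)
theory = 0.5 * (1.0 + erf(r / np.sqrt(2 * n)))
print("P{x <= r s}:", empirical, "vs", theory)
```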
Table of Content
SOLO Stochastic Processes

Smoluchowski Equation

In physics, the Diffusion Equation with a drift term is often called the Smoluchowski
equation (after Marian von Smoluchowski).

Let w(r, t) be a density, D a diffusion constant, ζ a friction coefficient, and
U(r, t) a potential. Then the Smoluchowski equation states that the density
evolves according to
$$\frac{\partial w(r,t)}{\partial t}
= \nabla\cdot\left[D\,\nabla w(r,t) + \frac{w(r,t)}{\zeta}\,\nabla U(r,t)\right]$$

The diffusivity term acts to smooth out the density, while the drift term
shifts the density towards regions of low potential U. The equation is consistent with
each particle moving according to a stochastic differential equation with a bias term
$-\nabla U/\zeta$ and a diffusivity D. Physically, the drift term originates from the force $-\nabla U$
being balanced by a viscous drag given by ζ.

The Smoluchowski equation is formally identical to the Fokker–Planck equation, the
only difference being the physical meaning of w: a distribution of particles in space for
the Smoluchowski equation, a distribution of particle velocities for the Fokker–Planck
equation.
SOLO Stochastic Processes

Einstein–Smoluchowski Equation

In physics (namely, in kinetic theory) the Einstein relation (also known as the
Einstein–Smoluchowski relation) is a previously unexpected connection
revealed independently by Albert Einstein in 1905 and by Marian
Smoluchowski (1906) in their papers on Brownian motion. Two important
special cases of the relation are:
$$D = \frac{\mu_q\, k_B\, T}{q}\qquad\text{(diffusion of charged particles)}$$
$$D = \frac{k_B\, T}{6\pi\,\eta\, r}\qquad\text{("Einstein–Stokes equation", for diffusion of spherical particles through a liquid with low Reynolds number)}$$

where
• ρ(x,t) is the density of the Brownian particles,
• D is the diffusion constant,
• q is the electrical charge of a particle,
• μq is the electrical mobility of the charged particle, i.e. the ratio of the particle's
terminal drift velocity to an applied electric field,
• kB is Boltzmann's constant,
• T is the absolute temperature,
• η is the viscosity,
• r is the radius of the spherical particle.

The more general form of the equation is
$$D = \mu\, k_B\, T$$
where the "mobility" μ is the ratio of the particle's terminal drift velocity to an applied force, μ = vd / F.

Einstein's Equation for Brownian Motion:
$$\frac{\partial \rho}{\partial t} = D\,\frac{\partial^2 \rho}{\partial x^2}$$
with the fundamental solution
$$\rho(x, t) = \frac{1}{\left(4\pi\, D\, t\right)^{1/2}}\exp\left(-\frac{x^2}{4\, D\, t}\right)$$
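The Gaussian solution above is easy to verify by simulation. The sketch below is a minimal Python/NumPy example (an assumed setup with an illustrative diffusion constant) that generates Brownian trajectories, checks that the position variance grows as 2Dt, and compares the empirical density at the final time with ρ(x,t).

```python
import numpy as np

# Minimal sketch (illustrative parameters): Brownian motion with diffusion constant D has
# independent Gaussian increments of variance 2*D*dt, so Var[x(t)] = 2*D*t and the density
# is rho(x,t) = exp(-x^2/(4 D t)) / sqrt(4 pi D t).
rng = np.random.default_rng(4)
D, T, n_steps, n_particles = 0.3, 2.0, 1_000, 100_000
dt = T / n_steps

increments = rng.standard_normal((n_particles, n_steps)) * np.sqrt(2 * D * dt)
x_T = increments.sum(axis=1)                      # particle positions at time T

print("Var[x(T)] :", x_T.var(), " expected 2*D*T =", 2 * D * T)

# Compare the histogram of x(T) with the analytic density rho(x, T).
edges = np.linspace(-4, 4, 41)
hist, _ = np.histogram(x_T, bins=edges, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
rho = np.exp(-centers**2 / (4 * D * T)) / np.sqrt(4 * np.pi * D * T)
print("max |empirical - analytic| :", np.max(np.abs(hist - rho)))
```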
Table of Content
Paul Langevin
1872-1946

SOLO Stochastic Processes

Langevin Equation

The Langevin equation (Paul Langevin, 1908) is a stochastic differential
equation describing the time evolution of a subset of the degrees of
freedom. These degrees of freedom typically are collective
(macroscopic) variables changing only slowly in comparison to the
other (microscopic) variables of the system. The fast (microscopic)
variables are responsible for the stochastic nature of the Langevin
equation.

The original Langevin equation describes Brownian motion, the apparently random
movement of a particle in a fluid due to collisions with the molecules of the fluid
(Langevin, P. (1908). "On the Theory of Brownian Motion". C. R. Acad. Sci. (Paris) 146: 530–533):
$$m\,\frac{d v}{d t} = -\lambda\, v + \eta(t),\qquad v = \frac{d x}{d t}$$

We are interested in the position x of a particle of mass m. The force on the particle is
the sum of a viscous force proportional to the particle's velocity, −λ v (Stokes' law), plus a
noise term η(t) that has a Gaussian probability distribution with correlation function
$$\left\langle \eta_i(t)\,\eta_j(t')\right\rangle = 2\,\lambda\, k_B\, T\,\delta_{ij}\,\delta\left(t - t'\right)$$
where kB is Boltzmann's constant and T is the temperature.
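As a sanity check on the Langevin model, the sketch below is a minimal Python/NumPy example (an assumed environment; mass, friction and temperature values are illustrative and kB is set to 1). It integrates the velocity equation with the Euler–Maruyama scheme and verifies that the long-time velocity variance approaches the equipartition value kB T / m implied by the correlation function above.

```python
import numpy as np

# Minimal sketch (illustrative units, k_B = 1): m dv = -lambda*v dt + sqrt(2*lambda*kB*T) dB
rng = np.random.default_rng(5)
m, lam, kB, T = 1.0, 2.0, 1.0, 0.5
dt, n_steps, n_particles = 1e-3, 20_000, 2_000

v = np.zeros(n_particles)
noise_amp = np.sqrt(2.0 * lam * kB * T * dt) / m
for _ in range(n_steps):
    v += (-lam / m) * v * dt + noise_amp * rng.standard_normal(n_particles)

# Equipartition: <v^2> -> kB*T/m in the stationary state.
print("simulated <v^2> :", (v**2).mean(), "   expected kB*T/m =", kB * T / m)
```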
Table of Content
SOLO Stochastic Processes

Propagation Equation

Definition 1: Hölder Continuity Condition

Given an (m x 1) vector function $k(x, t)$ of an (n x 1) vector x on a domain K, we say that k is
Hölder continuous in K if, for some constants C, α > 0 and some norm ‖·‖:
$$\left\| k(x_1, t) - k(x_2, t)\right\| < C\,\left\| x_1 - x_2\right\|^{\alpha}\qquad\text{(Hölder continuity)}$$

Hölder continuity is a generalization of Lipschitz continuity (α = 1):
$$\left\| k(x_1, t) - k(x_2, t)\right\| < C\,\left\| x_1 - x_2\right\|\qquad\text{(Lipschitz continuity)}$$

Rudolf Lipschitz
1832-1903

Otto Ludwig Hölder
1859-1937
SOLO Stochastic Processes

Propagation Equation (continue)

Definition 2: Standard Stochastic State Realization (SSSR)

Consider the Stochastic Differential Equation
$$d x(t) = f(x, t)\, dt + G_{n\times n}(x, t)\, d n(t),\qquad t\in\left[t_0, t_f\right],\quad x\in\mathbb{R}^{n\times 1}$$
$$d n(t) = d n_g(t) + d n_p(t),\qquad E\{d n(t)\} = E\{d n_g(t)\} = E\{d n_p(t)\} = 0$$

where $d n_g(t)$ is a Wiener (Gauss) process with $E\{d n_g(t)\, d n_g^T(t)\} = Q_{n\times n}(t)\, dt$,
so that we can write $w(t) = d n_g(t)/dt$ with $E\{w(t)\, w^T(s)\} = Q(t)\,\delta(t - s)$,
and $d n_p(t)$ is a Poisson process with
$$E\left\{d n_p(t)\, d n_p^T(t)\right\}
= \begin{bmatrix}\sigma_{a_1}^2\lambda_1 & 0 & \cdots & 0\\
0 & \sigma_{a_2}^2\lambda_2 & \cdots & 0\\
\vdots & & \ddots & \vdots\\
0 & 0 & \cdots & \sigma_{a_n}^2\lambda_n\end{bmatrix} dt$$

The Stochastic Differential Equation is called a Standard Stochastic State Realization (SSSR) if:

(1) $x(t_0) = x_0$, where $x_0$ is independent of $d n(t)$.

(2) $G_{n\times n}(x, t)$ is Hölder continuous in t and Lipschitz continuous in x;
$G_{n\times n}(x,t)\, G^T_{n\times n}(x,t)$ is strictly positive definite; and
$$\frac{\partial G_{ij}(x,t)}{\partial x_i},\qquad \frac{\partial^2 G_{ij}(x,t)}{\partial x_i\,\partial x_j}$$
are globally Lipschitz continuous in x, continuous in t, and globally bounded.

(3) The vector f(x, t) is continuous in t and globally Lipschitz continuous in x,
and the ∂fi/∂xi are globally Lipschitz continuous in x and continuous in t.
Table of Content
SOLO Stochastic Processes
Lévy Process
In probability theory, a Lévy process, named after the French
mathematician Paul Lévy, is any continuous-time stochastic process
that starts at 0 and has stationary, independent increments.

Paul Pierre Lévy
1886 - 1971

A Stochastic Process X = {Xt : t ≥ 0} is said to be a Lévy Process if:
1. X0 = 0 almost surely (with probability one).
2. Independent increments: For any 0 ≤ t1 < t2 < … < tn < ∞, the increments
   Xt2 − Xt1, Xt3 − Xt2, …, Xtn − Xtn−1 are independent.
3. Stationary increments: For any t < s, Xs − Xt is equal in distribution to Xs−t.
4. t ↦ Xt is almost surely right continuous with left limits.
Independent increments
A continuous-time stochastic process assigns a random variable Xt to each point t ≥ 0
in time. In effect it is a random function of t. The increments of such a process are
the differences Xs − Xt between its values at different times t < s. To call the
increments of a process independent means that increments Xs − Xt and Xu − Xv are
independent random variables whenever the two time intervals do not overlap and,
more generally, any finite number of increments assigned to pairwise non-
overlapping time intervals are mutually (not just pairwise) independent
SOLO Stochastic Processes
Lévy Process (continue – 1)
Paul Pierre Lévy
1886 - 1971
A Stochastic Process X = {Xt : t ≥ 0} is said to be a Lévy Process if:
1. X0 = 0 almost surely (with probability one).
2. Independent increments: For any 0 ≤ t1 < t2 < … < tn < ∞, the increments
   Xt2 − Xt1, Xt3 − Xt2, …, Xtn − Xtn−1 are independent.
3. Stationary increments: For any t < s, Xs − Xt is equal in distribution to Xs−t.
4. t ↦ Xt is almost surely right continuous with left limits.
Stationary increments
To call the increments stationary means that the probability distribution of any
increment Xs − Xt depends only on the length s − t of the time interval; increments
with equally long time intervals are identically distributed.
In the Wiener process, the probability distribution of Xs − Xt is normal with
expected value 0 and variance s − t.
In the (homogeneous) Poisson process, the probability distribution of Xs − Xt is a
Poisson distribution with expected value λ(s − t), where λ > 0 is the "intensity" or
"rate" of the process.
SOLO Stochastic Processes
Lévy Process (continue – 2)
Paul Pierre Lévy
1886 - 1971
A Stochastic Process X = {Xt : t ≥ 0} is said to be a Lévy Process if:
1. X0 = 0 almost surely (with probability one).
2. Independent increments: For any 0 ≤ t1 < t2 < … < tn < ∞, the increments
   Xt2 − Xt1, Xt3 − Xt2, …, Xtn − Xtn−1 are independent.
3. Stationary increments: For any t < s, Xs − Xt is equal in distribution to Xs−t.
4. t ↦ Xt is almost surely right continuous with left limits.
Divisibility
Lévy processes correspond to infinitely divisible probability distributions:
The probability distributions of the increments of any Lévy process are infinitely
divisible, since the increment of length t is the sum of n increments of length t/n,
which are i.i.d. by assumption (independent increments and stationarity).
Conversely, there is a Lévy process for each infinitely divisible probability
distribution: given such a distribution D, dividing and adding independent copies defines
the process at positive rational times, a Dirac delta distribution at 0 defines it at
time 0, and taking limits defines it at real times. Independent increments and
stationarity follow from the divisibility assumption, though one must
check continuity and that taking limits gives a well-defined function at irrational
times.
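A homogeneous Poisson process is the simplest non-Gaussian Lévy process, and its defining properties are easy to check numerically. The sketch below is a minimal Python/NumPy example (an assumed environment with an illustrative rate λ) that builds Poisson-process paths from exponential inter-arrival times and verifies that increments over disjoint, equally long intervals are identically distributed with mean λ(s − t) and approximately uncorrelated.

```python
import numpy as np

# Minimal sketch: Poisson process with rate lam built from exponential inter-arrival times.
rng = np.random.default_rng(6)
lam, horizon, n_paths = 3.0, 10.0, 20_000

def poisson_count(t, arrivals):
    """Number of arrivals up to time t (the process value X_t)."""
    return np.searchsorted(arrivals, t)

# Increments over the disjoint intervals (1, 3] and (5, 7]; both have length 2.
inc_a, inc_b = np.empty(n_paths), np.empty(n_paths)
for i in range(n_paths):
    gaps = rng.exponential(1.0 / lam, size=int(3 * lam * horizon))   # generous number of gaps
    arrivals = np.cumsum(gaps)
    inc_a[i] = poisson_count(3.0, arrivals) - poisson_count(1.0, arrivals)
    inc_b[i] = poisson_count(7.0, arrivals) - poisson_count(5.0, arrivals)

print("mean of increments :", inc_a.mean(), inc_b.mean(), "  expected lam*(s-t) =", lam * 2)
print("corr of disjoint increments :", np.corrcoef(inc_a, inc_b)[0, 1], " (expected ~0)")
```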
Table of Content
SOLO Stochastic Processes
Martingale
Originally, martingale referred to a class of betting strategies that was popular in 18th century
France. The simplest of these strategies was designed for a game in which the gambler wins his
stake if a coin comes up heads and loses it if the coin comes up tails. The strategy had the gambler
double his bet after every loss so that the first win would recover all previous losses plus win a
profit equal to the original stake. As the gambler's wealth and available time jointly approach
infinity, his probability of eventually flipping heads approaches 1, which makes the martingale
betting strategy seem like a sure thing. However, the exponential growth of the bets eventually
bankrupts its users.
History of Martingale
The concept of martingale in probability theory was introduced by Paul Pierre Lévy, and much
of the original development of the theory was done by Joseph Leo Doob. Part of the motivation
for that work was to show the impossibility of successful betting strategies.
Paul Pierre Lévy
1886 - 1971
Joseph Leo Doob
1910 - 2004
SOLO Stochastic Processes

Martingale

In probability theory, a martingale is a stochastic process (i.e., a sequence of random variables)
such that the conditional expected value of an observation at some time t, given all the observations
up to some earlier time s, is equal to the observation at that earlier time s.

A discrete-time martingale is a discrete-time stochastic process (i.e., a sequence of random
variables) X1, X2, X3, … that satisfies, for all n,
$$E\left[X_{n+1}\,\middle|\, X_1,\ldots,X_n\right] = X_n,\qquad E\left[\left|X_n\right|\right] < \infty$$
i.e., the conditional expected value of the next observation, given all the past observations, is equal
to the last observation.

Somewhat more generally, a sequence Y1, Y2, Y3, … is said to be a martingale with respect to another
sequence X1, X2, X3, … if, for all n,
$$E\left[Y_{n+1}\,\middle|\, X_1,\ldots,X_n\right] = Y_n$$

Similarly, a continuous-time martingale with respect to the stochastic process Xt is a stochastic
process Yt such that, for all s ≤ t,
$$E\left[Y_t\,\middle|\, \left\{X_\tau,\ \tau\le s\right\}\right] = Y_s$$

This expresses the property that the conditional expectation of an observation at time t, given all
the observations up to time s, is equal to the observation at time s (of course, provided that s ≤ t).
SOLO Stochastic Processes

Martingale (continue)

In full generality, a stochastic process Y : T × Ω → S is a martingale with respect to a filtration
Σ∗ and probability measure P if

* Σ∗ is a filtration of the underlying probability space (Ω, Σ, P);

* Y is adapted to the filtration Σ∗, i.e., for each t in the index set T, the random variable Yt is a
Σt-measurable function;

* for each t, Yt lies in the Lp space L1(Ω, Σt, P; S), i.e.
$$E_P\left[\left|Y_t\right|\right] < \infty$$

* for all s and t with s < t and all F ∈ Σs,
$$E_P\left[\left(Y_t - Y_s\right)\chi_F\right] = 0$$
where χF denotes the indicator function of the event F. In Grimmett and Stirzaker's
Probability and Random Processes, this last condition is denoted as
$$Y_s = E_P\left[Y_t\,\middle|\,\Sigma_s\right]$$
which is a general form of conditional expectation.

It is important to note that the property of being a martingale involves both the filtration and
the probability measure (with respect to which the expectations are taken). It is possible that Y
could be a martingale with respect to one measure but not another one; the Girsanov theorem
offers a way to find a measure with respect to which an Itô process is a martingale.
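The symmetric random walk from the earlier slides is the canonical discrete-time martingale. The sketch below is a minimal Python/NumPy example (an assumed environment) that estimates E[X_{n+1} | X_n] empirically by conditioning on a few values of X_n and confirms that it equals X_n, as the martingale property requires.

```python
import numpy as np

# Minimal sketch: the symmetric random walk X_n (sum of +/-1 steps) is a martingale:
# E[X_{n+1} | X_1, ..., X_n] = X_n.  By the Markov property it suffices to condition on X_n.
rng = np.random.default_rng(7)
n_paths, n = 200_000, 50
steps = rng.choice([-1, 1], size=(n_paths, n + 1))
X = steps.cumsum(axis=1)

x_n, x_next = X[:, n - 1], X[:, n]
for level in (-6, 0, 6):                       # condition on a few values of X_n
    mask = x_n == level
    print(f"E[X_(n+1) | X_n = {level:+d}] ~ {x_next[mask].mean():+.3f}  (expected {level:+d})")
```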
Table of Content
SOLO Stochastic Processes

Chapman–Kolmogorov Equation

Sydney Chapman
1888-1970

Andrey Nikolaevich Kolmogorov
1903-1987

Suppose that { fi } is an indexed collection of random variables, that is, a stochastic
process. Let
$$p_{i_1,\ldots,i_n}\left(f_1,\ldots,f_n\right)$$
be the joint probability density function of the values of the random variables f1
to fn. Then, the Chapman–Kolmogorov equation is
$$p_{i_1,\ldots,i_{n-1}}\left(f_1,\ldots,f_{n-1}\right)
= \int_{-\infty}^{+\infty} p_{i_1,\ldots,i_n}\left(f_1,\ldots,f_n\right)\, d f_n$$
Note that we have not yet assumed anything about the temporal (or any other) ordering of the random variables:
the above equation applies equally to the marginalization of any of them.

Particularization to Markov Chains

When the stochastic process under consideration is Markovian, the Chapman–Kolmogorov equation is
equivalent to an identity on transition densities. In the Markov chain setting, one assumes that $i_1 < \cdots < i_n$.
Then, because of the Markov property,
$$p_{i_1,\ldots,i_n}\left(f_1,\ldots,f_n\right)
= p_{i_1}(f_1)\; p_{i_2|i_1}(f_2|f_1)\cdots p_{i_n|i_{n-1}}(f_n|f_{n-1})$$
where the conditional probability $p_{i|j}\left(f_i|f_j\right)$ is the transition probability between the times i > j.
So, the Chapman–Kolmogorov equation takes the form
$$p_{i_3|i_1}\left(f_3\,\middle|\,f_1\right)
= \int_{-\infty}^{+\infty} p_{i_3|i_2}\left(f_3\,\middle|\,f_2\right)\;
p_{i_2|i_1}\left(f_2\,\middle|\,f_1\right)\, d f_2$$

When the probability distribution on the state space of a Markov chain is discrete and the Markov chain is
homogeneous, the Chapman–Kolmogorov equations can be expressed in terms of (possibly infinite-dimensional)
matrix multiplication, thus:
$$P(t+s) = P(t)\, P(s)$$
where P(t) is the transition matrix, i.e., if Xt is the state of the process at time t, then for any two points i and j in
the state space, we have
$$P_{ij}(t) = P\left(X_{t+s} = j \,\middle|\, X_s = i\right)$$
SOLO Stochastic Processes

Chapman–Kolmogorov Equation (continue – 1)

Particularization to Markov Chains

Let $p_{x(t)|x(t_0)}\left(x, t\,\middle|\,x_0, t_0\right)$ be the probability density function of the Markov process x(t)
given that x(t0) = x0. Then, for any intermediate time t0 < t2 < t,
$$p_{x(t)|x(t_0)}\left(x, t\,\middle|\,x_0, t_0\right)
= \int_{-\infty}^{+\infty} p_{x(t)|x(t_2)}\left(x, t\,\middle|\,x_2, t_2\right)\;
p_{x(t_2)|x(t_0)}\left(x_2, t_2\,\middle|\,x_0, t_0\right)\, d x_2$$

(Figure: geometric interpretation of the Chapman–Kolmogorov equation — paths from (x0, t0) to (x, t)
pass through all possible intermediate states x2 at time t2.)
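For the Wiener process the transition density is Gaussian with variance equal to the elapsed time, and the Chapman–Kolmogorov equation reduces to the fact that convolving two Gaussians adds their variances. The sketch below is a minimal Python/NumPy example (an assumed environment) that checks this numerically on a grid.

```python
import numpy as np

# Minimal sketch: for a Wiener process, p(x, t | x0, t0) = N(x; x0, t - t0).
def wiener_kernel(x, x0, dt):
    return np.exp(-(x - x0) ** 2 / (2 * dt)) / np.sqrt(2 * np.pi * dt)

x0, t0, t2, t = 0.0, 0.0, 0.7, 1.5
x_grid = np.linspace(-10, 10, 2001)
dx = x_grid[1] - x_grid[0]

# Right-hand side of Chapman-Kolmogorov: integrate over the intermediate state x2.
rhs = np.array([
    np.sum(wiener_kernel(x, x_grid, t - t2) * wiener_kernel(x_grid, x0, t2 - t0)) * dx
    for x in x_grid
])
lhs = wiener_kernel(x_grid, x0, t - t0)

print("max |lhs - rhs| :", np.max(np.abs(lhs - rhs)))   # ~0 up to quadrature error
```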
Table of Content
SOLO Stochastic Processes

Itô Lemma and Itô Processes

Kiyosi Itô
1915 - 2008

In 1942, Itô began to reconstruct from scratch the concept of
stochastic integrals, and its associated theory of analysis. He
created the theory of stochastic differential equations, which
describe motion due to random events.

In 1945 Itô was awarded his doctorate. He continued to develop his ideas on stochastic
analysis with many important papers on the topic. Among them were "On a stochastic
integral equation" (1946), "On the stochastic integral" (1948), "Stochastic differential
equations in a differentiable manifold" (1950), "Brownian motions in a Lie group"
(1950), and "On stochastic differential equations" (1951).

In its simplest form, Itô's lemma states that for an Itô process
$$d X_t = \mu_t\, dt + \sigma_t\, d B_t$$
and any twice continuously differentiable function f on the real numbers, f(X) is also an
Itô process satisfying
$$d f(X_t) = f'(X_t)\, d X_t + \tfrac{1}{2}\, f''(X_t)\,\sigma_t\,\sigma_t^T\, dt
= \sigma_t\, f'(X_t)\, d B_t
+ \left[\mu_t\, f'(X_t) + \tfrac{1}{2}\,\sigma_t\,\sigma_t^T\, f''(X_t)\right] dt$$

Or, more extended: let X(t) be an Itô process given by
$$d X_t = \mu_t\, dt + \sigma_t\, d B_t$$
and let f(t, x) be a function with continuous first- and second-order partial derivatives.
Then, by Itô's lemma,
$$d f(t, X_t) = \left[\frac{\partial f}{\partial t} + \mu_t\,\frac{\partial f}{\partial x}
+ \tfrac{1}{2}\,\sigma_t^2\,\frac{\partial^2 f}{\partial x^2}\right] dt
+ \sigma_t\,\frac{\partial f}{\partial x}\, d B_t$$
SOLO Stochastic Processes

Itô Lemma and Itô Processes (continue – 1)

Informal derivation

A formal proof of the lemma requires us to take the limit of a sequence of random variables,
which is not done here. Instead, we can derive Itô's lemma by expanding a Taylor series and
applying the rules of stochastic calculus.

Assume the Itô process is in the form
$$d x = a\, dt + b\, d B$$

Expanding f(x, t) in a Taylor series in x and t we have
$$d f = \frac{\partial f}{\partial t}\, dt + \frac{\partial f}{\partial x}\, dx
+ \tfrac{1}{2}\,\frac{\partial^2 f}{\partial x^2}\, dx^2 + \cdots$$
and substituting $a\, dt + b\, dB$ for dx gives
$$d f = \frac{\partial f}{\partial t}\, dt
+ \frac{\partial f}{\partial x}\left(a\, dt + b\, dB\right)
+ \tfrac{1}{2}\,\frac{\partial^2 f}{\partial x^2}
\left(a^2\, dt^2 + 2\, a\, b\, dt\, dB + b^2\, dB^2\right) + \cdots$$

In the limit as dt tends to 0, the dt² and dt dB terms disappear but the dB² term tends to dt;
the latter holds because E{dB²} = dt while its variance is of higher order in dt.

Deleting the dt² and dt dB terms, substituting dt for dB², and collecting the dt and dB terms, we
obtain
$$d f = \left(\frac{\partial f}{\partial t} + a\,\frac{\partial f}{\partial x}
+ \tfrac{1}{2}\, b^2\,\frac{\partial^2 f}{\partial x^2}\right) dt
+ b\,\frac{\partial f}{\partial x}\, d B$$
as required.
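Itô's lemma can also be checked path-by-path. The sketch below is a minimal Python/NumPy example (an assumed environment) that takes f(x) = x² applied to a Brownian path, so Itô's lemma predicts d(B²) = 2B dB + dt, and compares B_T² computed directly with the accumulated Itô increments along the path.

```python
import numpy as np

# Minimal sketch: verify Ito's lemma for f(x) = x^2 on a Brownian path:
# d(B^2) = 2 B dB + dt, hence B_T^2 = sum(2*B_k*dB_k) + T up to discretization error.
rng = np.random.default_rng(8)
T, n = 1.0, 200_000
dt = T / n
dB = rng.standard_normal(n) * np.sqrt(dt)
B = np.concatenate([[0.0], np.cumsum(dB)])

ito_sum = np.sum(2.0 * B[:-1] * dB) + T          # right-hand side of Ito's lemma
direct = B[-1] ** 2                              # left-hand side
print("direct B_T^2 :", direct)
print("Ito integral :", ito_sum)                 # the two agree as dt -> 0
```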
SOLO Stochastic Processes
Table of Content
Ruslan L. Stratonovich
(1930 – 1997)
Stratonovich invented a stochastic calculus which serves as an
alternative to the Itô calculus; the Stratonovich calculus is most
natural when physical laws are being considered. The
Stratonovich integral appears in his stochastic calculus. He also
solved the problem of optimal non-linear filtering based on his
theory of conditional Markov processes, which was published in
his papers in 1959 and 1960. The Kalman-Bucy (linear) filter
(1961) is a special case of Stratonovich's filter. He also developed
the value of information theory (1965). His latest book was on
non-linear non-equilibrium thermodynamics.
SOLO
Stratonovich Stochastic Calculus
Stochastic Processes
Table of Content
SOLO Stochastic Processes

Fokker–Planck Equation

Adriaan Fokker
1887-1972

Max Planck
1858-1947

The Fokker–Planck equation describes the time evolution of
the probability density function of the position of a particle, and
can be generalized to other observables as well. It is named after
Adriaan Fokker and Max Planck and is also known as the
Kolmogorov forward equation. The first use of the Fokker–
Planck equation was the statistical description of Brownian
motion of a particle in a fluid.

In one spatial dimension x, the Fokker–Planck equation for a
process with drift D1(x,t) and diffusion D2(x,t) is
$$\frac{\partial}{\partial t} f(x,t)
= -\frac{\partial}{\partial x}\left[D_1(x,t)\, f(x,t)\right]
+ \frac{\partial^2}{\partial x^2}\left[D_2(x,t)\, f(x,t)\right]$$

More generally, the time-dependent probability distribution
may depend on a set of N macrovariables xi. The general
form of the Fokker–Planck equation is then
$$\frac{\partial f}{\partial t}
= -\sum_{i=1}^{N}\frac{\partial}{\partial x_i}\left[D_i^{1}\left(x_1,\ldots,x_N\right) f\right]
+ \sum_{i=1}^{N}\sum_{j=1}^{N}\frac{\partial^2}{\partial x_i\,\partial x_j}\left[D_{ij}^{2}\left(x_1,\ldots,x_N\right) f\right]$$
where D¹ is the drift vector and D² the diffusion tensor; the latter results from the presence of the
stochastic force.

(Figure: a solution of the one-dimensional Fokker–Planck equation with both drift and
diffusion terms; the initial condition is a Dirac delta function at x = 1, and the distribution
drifts towards x = 0.)

References:
Adriaan Fokker, "Die mittlere Energie rotierender elektrischer Dipole im Strahlungsfeld",
Annalen der Physik 43 (1914), 810–820.
Max Planck, "Über einen Satz der statistischen Dynamik und eine Erweiterung in der
Quantentheorie", Sitzungsberichte der Preussischen Akademie der Wissenschaften (1917), 324–341.
SOLO Stochastic Processes

Fokker–Planck Equation (continue – 1)

The Fokker–Planck equation can be used for computing the probability densities of stochastic
differential equations.

Consider the Itô stochastic differential equation
$$d X_t = \mu\left(X_t, t\right) dt + \sigma\left(X_t, t\right) d W_t$$
where $X_t$ is the state and $W_t$ is a standard M-dimensional Wiener process. If the initial
probability distribution is $f(x, 0) = f_0(x)$, then the probability distribution f(x,t) of the state
is given by the Fokker–Planck equation
$$\frac{\partial}{\partial t} f(x,t)
= -\frac{\partial}{\partial x}\left[D_1(x,t)\, f(x,t)\right]
+ \frac{\partial^2}{\partial x^2}\left[D_2(x,t)\, f(x,t)\right]$$
with the drift and diffusion terms
$$D_1(x,t) = \mu(x,t),\qquad D_2(x,t) = \tfrac{1}{2}\,\sigma(x,t)\,\sigma^T(x,t)$$

Similarly, a Fokker–Planck equation can be derived for Stratonovich stochastic differential
equations. In this case, noise-induced drift terms appear if the noise strength is state-dependent.
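The correspondence between the SDE coefficients and the Fokker–Planck drift and diffusion terms can be estimated directly from simulated paths: D1 is the conditional mean increment per unit time and D2 is half the conditional mean squared increment per unit time. The sketch below is a minimal Python/NumPy example (an assumed environment; the Ornstein–Uhlenbeck SDE is an illustrative choice) that recovers both from Monte Carlo data.

```python
import numpy as np

# Minimal sketch (illustrative SDE): dX = -theta*X dt + sigma dW, so the Fokker-Planck
# coefficients should come out as D1(x) = -theta*x and D2(x) = sigma^2 / 2.
rng = np.random.default_rng(9)
theta, sigma = 1.5, 0.7
dt, n_steps, n_paths = 1e-3, 5_000, 20_000

edges = np.linspace(-1.5, 1.5, 31)                 # bins over the state space
sum_dx = np.zeros(len(edges) - 1)
sum_dx2 = np.zeros(len(edges) - 1)
counts = np.zeros(len(edges) - 1)

x = rng.normal(0.0, 0.5, n_paths)
for _ in range(n_steps):
    dx = -theta * x * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_paths)
    idx = np.digitize(x, edges) - 1
    ok = (idx >= 0) & (idx < len(counts))
    np.add.at(sum_dx, idx[ok], dx[ok])
    np.add.at(sum_dx2, idx[ok], dx[ok] ** 2)
    np.add.at(counts, idx[ok], 1)
    x = x + dx

centers = 0.5 * (edges[:-1] + edges[1:])
D1 = sum_dx / counts / dt                          # conditional mean increment / dt
D2 = sum_dx2 / counts / (2 * dt)                   # conditional mean squared increment / (2 dt)
i = np.argmin(np.abs(centers - 1.0))               # look at the bin around x = 1
print("D1(1) ~", D1[i], " expected", -theta * 1.0)
print("D2(1) ~", D2[i], " expected", sigma**2 / 2)
```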
SOLO Stochastic Processes

Fokker–Planck Equation (continue – 2)

Derivation of the Fokker–Planck Equation

Start with
$$p_{x_k, x_{k-1}}\left(x_k, x_{k-1}\right)
= p_{x_k|x_{k-1}}\left(x_k\,\middle|\,x_{k-1}\right)\, p_{x_{k-1}}\left(x_{k-1}\right)$$
and
$$p_{x_k}\left(x_k\right)
= \int_{-\infty}^{+\infty} p_{x_k, x_{k-1}}\left(x_k, x_{k-1}\right) d x_{k-1}
= \int_{-\infty}^{+\infty} p_{x_k|x_{k-1}}\left(x_k\,\middle|\,x_{k-1}\right)\,
p_{x_{k-1}}\left(x_{k-1}\right) d x_{k-1}$$

Define $t_k = t$, $t_{k-1} = t - \Delta t$, $x_k = x(t)$, $x_{k-1} = x(t-\Delta t)$. Using the Chapman–Kolmogorov
equation we obtain
$$p_{x(t)}\left[x(t)\right]
= \int_{-\infty}^{+\infty} p_{x(t)|x(t-\Delta t)}\left[x(t)\,\middle|\,x(t-\Delta t)\right]\,
p_{x(t-\Delta t)}\left[x(t-\Delta t)\right]\, d x(t-\Delta t)$$

Let us use the Characteristic Function of $p_{x(t)|x(t-\Delta t)}\left[x(t)\,|\,x(t-\Delta t)\right]$:
$$\Phi_{x(t)|x(t-\Delta t)}(s)
:= E\left\{\exp\left(-s\left[x(t)-x(t-\Delta t)\right]\right)\,\middle|\,x(t-\Delta t)\right\}
= \int_{-\infty}^{+\infty}\exp\left(-s\left[x(t)-x(t-\Delta t)\right]\right)\,
p_{x(t)|x(t-\Delta t)}\left[x(t)\,\middle|\,x(t-\Delta t)\right]\, d x(t)$$

The inverse transform is
$$p_{x(t)|x(t-\Delta t)}\left[x(t)\,\middle|\,x(t-\Delta t)\right]
= \frac{1}{2\pi j}\int_{-j\infty}^{+j\infty}
\exp\left(s\left[x(t)-x(t-\Delta t)\right]\right)\,
\Phi_{x(t)|x(t-\Delta t)}(s)\, d s$$

so that
$$p_{x(t)}\left[x(t)\right]
= \frac{1}{2\pi j}\int_{-\infty}^{+\infty}\int_{-j\infty}^{+j\infty}
\exp\left(s\left[x(t)-x(t-\Delta t)\right]\right)\,
\Phi_{x(t)|x(t-\Delta t)}(s)\,
p_{x(t-\Delta t)}\left[x(t-\Delta t)\right]\, d s\, d x(t-\Delta t)$$
SOLO Stochastic Processes

Fokker–Planck Equation (continue – 3)

Derivation of the Fokker–Planck Equation (continue – 1)

The Characteristic Function can be expressed in terms of the moments about x(t−Δt) as
$$\Phi_{x(t)|x(t-\Delta t)}(s)
= 1 + \sum_{i=1}^{\infty}\frac{(-s)^i}{i!}\,
E\left\{\left[x(t)-x(t-\Delta t)\right]^i\,\middle|\,x(t-\Delta t)\right\}$$

Therefore
$$p_{x(t)}\left[x(t)\right]
= \frac{1}{2\pi j}\int_{-\infty}^{+\infty}\int_{-j\infty}^{+j\infty}
\exp\left(s\left[x(t)-x(t-\Delta t)\right]\right)
\left\{1 + \sum_{i=1}^{\infty}\frac{(-s)^i}{i!}\,
E\left\{\left[x(t)-x(t-\Delta t)\right]^i\,\middle|\,x(t-\Delta t)\right\}\right\}
p_{x(t-\Delta t)}\left[x(t-\Delta t)\right]\, d s\, d x(t-\Delta t)$$

Use the fact that
$$\frac{1}{2\pi j}\int_{-j\infty}^{+j\infty}\left(-s\right)^i
\exp\left(s\left[x(t)-x(t-\Delta t)\right]\right) d s
= \left(-1\right)^i\,\frac{\partial^i\,\delta\left[x(t)-x(t-\Delta t)\right]}{\partial x(t)^i},
\qquad i = 0, 1, 2,\ldots$$
to obtain
$$p_{x(t)}\left[x(t)\right]
= \int_{-\infty}^{+\infty}\delta\left[x(t)-x(t-\Delta t)\right]\,
p_{x(t-\Delta t)}\left[x(t-\Delta t)\right]\, d x(t-\Delta t)
+ \sum_{i=1}^{\infty}\frac{\left(-1\right)^i}{i!}\int_{-\infty}^{+\infty}
\frac{\partial^i\,\delta\left[x(t)-x(t-\Delta t)\right]}{\partial x(t)^i}\,
E\left\{\left[x(t)-x(t-\Delta t)\right]^i\,\middle|\,x(t-\Delta t)\right\}\,
p_{x(t-\Delta t)}\left[x(t-\Delta t)\right]\, d x(t-\Delta t)$$

where δ[u] is the Dirac delta function:
$$\delta(u) = \frac{1}{2\pi j}\int_{-j\infty}^{+j\infty}\exp\left(s\, u\right) d s,\qquad
\int_{-\infty}^{+\infty} F(u)\,\delta(u)\, d u = F(0)
\quad\forall\, F\ \text{s.t.}\ F(0^-) = F(0^+) = F(0)$$
SOLO Stochastic Processes

Fokker–Planck Equation (continue – 4)

Derivation of the Fokker–Planck Equation (continue – 2)

Useful results related to integrals involving the Delta (Dirac) function:

$$\delta\left(u - a\right) = \frac{1}{2\pi j}\int_{-j\infty}^{+j\infty}\exp\left(s\,(u-a)\right) d s
\quad\Rightarrow\quad
\int_{-\infty}^{+\infty} f(u)\,\delta\left(u - a\right) d u = f(a)
\qquad\forall\, f\ \text{s.t.}\ f(a^-) = f(a^+) = f(a)$$

$$\frac{d}{d u}\,\delta\left(u - a\right)
= \frac{1}{2\pi j}\int_{-j\infty}^{+j\infty} s\,\exp\left(s\,(u-a)\right) d s
\quad\Rightarrow\quad
\int_{-\infty}^{+\infty} f(u)\,\frac{d\,\delta\left(u - a\right)}{d u}\, d u
= -\left.\frac{d f(u)}{d u}\right|_{u=a}$$
(obtained by integration by parts, the boundary terms vanishing), and more generally

$$\frac{d^i}{d u^i}\,\delta\left(u - a\right)
= \frac{1}{2\pi j}\int_{-j\infty}^{+j\infty} s^i\,\exp\left(s\,(u-a)\right) d s
\quad\Rightarrow\quad
\int_{-\infty}^{+\infty} f(u)\,\frac{d^i\,\delta\left(u - a\right)}{d u^i}\, d u
= \left(-1\right)^i\left.\frac{d^i f(u)}{d u^i}\right|_{u=a},\qquad i = 1, 2,\ldots$$
SOLO Stochastic Processes

Fokker–Planck Equation (continue – 5)

Derivation of the Fokker–Planck Equation (continue – 3)

Using these results, the zeroth-order term gives
$$\int_{-\infty}^{+\infty}\delta\left[x(t)-x(t-\Delta t)\right]\,
p_{x(t-\Delta t)}\left[x(t-\Delta t)\right]\, d x(t-\Delta t)
= p_{x(t-\Delta t)}\left[x(t)\right]$$
and, for the higher-order terms,
$$\int_{-\infty}^{+\infty}
\frac{\partial^i\,\delta\left[x(t)-x(t-\Delta t)\right]}{\partial x(t)^i}\,
E\left\{\left[x(t)-x(t-\Delta t)\right]^i\,\middle|\,x(t-\Delta t)\right\}\,
p_{x(t-\Delta t)}\left[x(t-\Delta t)\right]\, d x(t-\Delta t)
= \frac{\partial^i}{\partial x(t)^i}\left(
E\left\{\left[x(t)-x(t-\Delta t)\right]^i\,\middle|\,x(t-\Delta t)=x(t)\right\}\,
p_{x(t-\Delta t)}\left[x(t)\right]\right)$$

Therefore
$$p_{x(t)}\left[x(t)\right] = p_{x(t-\Delta t)}\left[x(t)\right]
+ \sum_{i=1}^{\infty}\frac{\left(-1\right)^i}{i!}\,
\frac{\partial^i}{\partial x(t)^i}\left(
E\left\{\left[x(t)-x(t-\Delta t)\right]^i\,\middle|\,x(t-\Delta t)=x(t)\right\}\,
p_{x(t-\Delta t)}\left[x(t)\right]\right)$$

Rearranging, dividing by Δt, and taking the limit Δt → 0, we obtain:
$$\lim_{\Delta t\to 0}\frac{p_{x(t)}\left[x(t)\right] - p_{x(t-\Delta t)}\left[x(t)\right]}{\Delta t}
= \sum_{i=1}^{\infty}\frac{\left(-1\right)^i}{i!}\,
\lim_{\Delta t\to 0}\frac{1}{\Delta t}\,
\frac{\partial^i}{\partial x(t)^i}\left(
E\left\{\left[x(t)-x(t-\Delta t)\right]^i\,\middle|\,x(t-\Delta t)=x(t)\right\}\,
p_{x(t-\Delta t)}\left[x(t)\right]\right)$$
SOLO Stochastic Processes

Fokker–Planck Equation (continue – 6)

Derivation of the Fokker–Planck Equation (continue – 4)

Define the derivative moments
$$m_i\left(x(t), t\right) := \lim_{\Delta t\to 0}
\frac{E\left\{\left[x(t)-x(t-\Delta t)\right]^i\,\middle|\,x(t-\Delta t)=x(t)\right\}}{\Delta t}$$
and
$$\frac{\partial\, p_{x(t)}\left[x(t)\right]}{\partial t}
:= \lim_{\Delta t\to 0}\frac{p_{x(t)}\left[x(t)\right] - p_{x(t-\Delta t)}\left[x(t)\right]}{\Delta t}$$

Therefore
$$\frac{\partial\, p_{x(t)}\left[x(t)\right]}{\partial t}
= \sum_{i=1}^{\infty}\frac{\left(-1\right)^i}{i!}\,
\frac{\partial^i\left[m_i\left(x(t), t\right)\, p_{x(t)}\left[x(t)\right]\right]}{\partial x(t)^i}$$

This equation is called the Stochastic Equation or Kinetic Equation.
It is a partial differential equation that we must solve, with the initial condition:
$$p_{x(t_0)}\left[x(t_0)\right] = p_0\left[x(t_0)\right]$$
SOLO Stochastic Processes

Fokker–Planck Equation (continue – 7)

Derivation of the Fokker–Planck Equation (continue – 5)

We want to find $p_{x(t)}\left[x(t)\right]$ where x(t) is the solution of
$$\frac{d x(t)}{d t} = f\left(x, t\right) + n_g(t),\qquad t\in\left[t_0, t_f\right]$$
where $n_g(t)$ is a Wiener (Gauss) process:
$$\hat{n}_g := E\left\{n_g(t)\right\} = 0,\qquad
E\left\{\left[n_g(t)-\hat{n}_g\right]\left[n_g(\tau)-\hat{n}_g\right]\right\}
= Q(t)\,\delta\left(t - \tau\right)$$

The derivative moments are
$$m_1\left(x(t), t\right)
= \lim_{\Delta t\to 0}\frac{E\left\{\left[x(t)-x(t-\Delta t)\right]\,\middle|\,x(t-\Delta t)=x(t)\right\}}{\Delta t}
= E\left\{\frac{d x}{d t}\,\middle|\,x(t)\right\}
= f\left(x, t\right) + E\left\{n_g\right\} = f\left(x, t\right)$$
$$m_2\left(x(t), t\right)
= \lim_{\Delta t\to 0}\frac{E\left\{\left[x(t)-x(t-\Delta t)\right]^2\,\middle|\,x(t-\Delta t)=x(t)\right\}}{\Delta t}
= E\left\{n_g^2\right\} = Q(t)$$
$$m_i\left(x(t), t\right)
= \lim_{\Delta t\to 0}\frac{E\left\{\left[x(t)-x(t-\Delta t)\right]^i\,\middle|\,x(t-\Delta t)=x(t)\right\}}{\Delta t}
= 0,\qquad i > 2$$

Therefore we obtain the Fokker–Planck equation:
$$\frac{\partial\, p_{x(t)}\left[x(t)\right]}{\partial t}
= -\frac{\partial\left[f\left(x(t), t\right)\, p_{x(t)}\left[x(t)\right]\right]}{\partial x(t)}
+ \frac{1}{2}\, Q(t)\,\frac{\partial^2\, p_{x(t)}\left[x(t)\right]}{\partial x(t)^2}$$
SOLO Stochastic Processes

Kolmogorov Forward Equation (KFE) and its Adjoint, the Kolmogorov Backward Equation (KBE)

Andrey Nikolaevich Kolmogorov
1903 - 1987

The Kolmogorov forward equation (KFE) and its adjoint, the Kolmogorov backward
equation (KBE), are partial differential equations (PDEs) that arise in the theory of
continuous-time, continuous-state Markov processes. Both were published by Andrey
Kolmogorov in 1931. Later it was realized that the KFE was already known to
physicists under the name Fokker–Planck equation; the KBE, on the other hand, was
new.

The Kolmogorov forward equation addresses the following problem. We have
information about the state x of the system at time t (namely a probability
distribution pt(x)); we want to know the probability distribution of the state at a
later time s > t. The adjective 'forward' refers to the fact that pt(x) serves as the
initial condition and the PDE is integrated forward in time. (In the common case
where the initial state is known exactly, pt(x) is a Dirac delta function centered on
the known initial state.) The KFE is the Fokker–Planck equation:
$$\frac{\partial}{\partial t} p(x,t)
= -\frac{\partial}{\partial x}\left[D_1(x,t)\, p(x,t)\right]
+ \frac{\partial^2}{\partial x^2}\left[D_2(x,t)\, p(x,t)\right]$$

The Kolmogorov backward equation, on the other hand, is useful when we are interested at time t
in whether at a future time s the system will be in a given subset of states, sometimes called the
target set. The target is described by a given function us(x) which is equal to 1 if state x is in the
target set and zero otherwise. We want to know for every state x at time t (t < s) what is the
probability of ending up in the target set at time s (sometimes called the hit probability). In this
case us(x) serves as the final condition of the PDE, which is integrated backward in time, from
s to t:
$$-\frac{\partial}{\partial t} p(x,t)
= D_1(x,t)\,\frac{\partial}{\partial x} p(x,t)
+ D_2(x,t)\,\frac{\partial^2}{\partial x^2} p(x,t)$$
for t ≤ s, subject to the final condition p(x,s) = us(x).
SOLO Stochastic Processes

Kolmogorov Forward Equation (KFE) and its Adjoint, the Kolmogorov Backward Equation (KBE) (continue – 1)

Formulating the Kolmogorov backward equation

Assume that the system state x(t) evolves according to the stochastic differential equation
$$d X_t = \mu\left(X_t, t\right) dt + \sigma\left(X_t, t\right) d W_t$$
Then the Kolmogorov backward equation is, using Itô's lemma on p(x,t),
$$-\frac{\partial}{\partial t} p(x,t)
= \mu(x,t)\,\frac{\partial}{\partial x} p(x,t)
+ \frac{1}{2}\,\sigma^2(x,t)\,\frac{\partial^2}{\partial x^2} p(x,t)$$
for t ≤ s, subject to the final condition p(x,s) = us(x).
Table of Content
SOLO Stochastic Processes

Bartlett–Moyal Theorem

Maurice Stevenson Bartlett
1910 - 2002

Jose Enrique Moyal
1910 - 1998

Theorem 1

Let
$$\Phi_{x(t)|x(t_1)}(s, t) = \int_{-\infty}^{+\infty}\exp\left(-s^T x(t)\right)\,
p_{x(t)|x(t_1)}\left(x(t)\,\middle|\,x(t_1)\right)\, d x(t),\qquad t > t_1$$
be the Characteristic Function of the Markov process x(t), t ∈ T (some interval). Assume the following:

(1) $\Phi_{x(t)|x(t_1)}(s, t)$ is continuously differentiable in t, t ∈ T.

(2) $\left|\dfrac{E\left\{\exp\left(-s^T\left[x(t+\Delta t)-x(t)\right]\right) - 1\,\middle|\,x(t)\right\}}{\Delta t}\right|
\le g\left(s; x(t), t\right)$, where E{|g|} is bounded on T.

(3) $\lim_{\Delta t\to 0}\dfrac{E\left\{\exp\left(-s^T\left[x(t+\Delta t)-x(t)\right]\right) - 1\,\middle|\,x(t)\right\}}{\Delta t}
=: \varphi\left(s; x(t), t\right)$

Then
$$\frac{\partial\,\Phi_{x(t)|x(t_1)}(s, t)}{\partial t}
= E\left\{\exp\left(-s^T x(t)\right)\,\varphi\left(s; x(t), t\right)\,\middle|\,x(t_1)\right\}$$
SOLO Stochastic Processes

Bartlett–Moyal Theorem

Proof

By definition
$$\frac{\partial\,\Phi_{x(t)|x(t_1)}\left(s, t\,|\,x(t_1)\right)}{\partial t}
= \lim_{\Delta t\to 0}\frac{\Phi_{x(t+\Delta t)|x(t_1)}\left(s, t+\Delta t\,|\,x(t_1)\right)
- \Phi_{x(t)|x(t_1)}\left(s, t\,|\,x(t_1)\right)}{\Delta t}$$
with
$$\Phi_{x(t)|x(t_1)}(s, t)
= \int_{-\infty}^{+\infty}\exp\left(-s^T x(t)\right)\,
p_{x(t)|x(t_1)}\left(x(t)\,\middle|\,x(t_1)\right)\, d x(t)
= E\left\{\exp\left(-s^T x(t)\right)\,\middle|\,x(t_1)\right\}$$
$$\Phi_{x(t+\Delta t)|x(t_1)}(s, t+\Delta t)
= \int_{-\infty}^{+\infty}\exp\left(-s^T x(t+\Delta t)\right)\,
p_{x(t+\Delta t)|x(t_1)}\left(x(t+\Delta t)\,\middle|\,x(t_1)\right)\, d x(t+\Delta t)$$

But since x(t) is a Markov process, we can use the Chapman–Kolmogorov equation
$$p_{x(t+\Delta t)|x(t_1)}\left(x(t+\Delta t)\,\middle|\,x(t_1)\right)
= \int p_{x(t+\Delta t)|x(t)}\left(x(t+\Delta t)\,\middle|\,x(t)\right)\,
p_{x(t)|x(t_1)}\left(x(t)\,\middle|\,x(t_1)\right)\, d x(t)$$
so that
$$\Phi_{x(t+\Delta t)|x(t_1)}(s, t+\Delta t)
= \int\!\!\int\exp\left(-s^T x(t+\Delta t)\right)\,
p_{x(t+\Delta t)|x(t)}\left(x(t+\Delta t)\,\middle|\,x(t)\right)\,
p_{x(t)|x(t_1)}\left(x(t)\,\middle|\,x(t_1)\right)\, d x(t)\, d x(t+\Delta t)$$
$$= \int\exp\left(-s^T x(t)\right)
\left\{\int\exp\left(-s^T\left[x(t+\Delta t)-x(t)\right]\right)\,
p_{x(t+\Delta t)|x(t)}\left(x(t+\Delta t)\,\middle|\,x(t)\right)\, d x(t+\Delta t)\right\}
p_{x(t)|x(t_1)}\left(x(t)\,\middle|\,x(t_1)\right)\, d x(t)$$
$$= E\left\{\exp\left(-s^T x(t)\right)\cdot
E\left\{\exp\left(-s^T\left[x(t+\Delta t)-x(t)\right]\right)\,\middle|\,x(t)\right\}
\,\middle|\,x(t_1)\right\}$$
SOLO Stochastic Processes

Bartlett–Moyal Theorem

Proof (continue – 1)

We found
$$\Phi_{x(t)|x(t_1)}(s, t) = E\left\{\exp\left(-s^T x(t)\right)\,\middle|\,x(t_1)\right\}$$
$$\Phi_{x(t+\Delta t)|x(t_1)}(s, t+\Delta t)
= E\left\{\exp\left(-s^T x(t)\right)\cdot
E\left\{\exp\left(-s^T\left[x(t+\Delta t)-x(t)\right]\right)\,\middle|\,x(t)\right\}
\,\middle|\,x(t_1)\right\}$$

Therefore
$$\frac{\partial\,\Phi_{x(t)|x(t_1)}(s, t)}{\partial t}
= \lim_{\Delta t\to 0}\frac{\Phi_{x(t+\Delta t)|x(t_1)}\left(s, t+\Delta t\right)
- \Phi_{x(t)|x(t_1)}\left(s, t\right)}{\Delta t}
= E\left\{\exp\left(-s^T x(t)\right)\,
\lim_{\Delta t\to 0}\frac{E\left\{\exp\left(-s^T\left[x(t+\Delta t)-x(t)\right]\right) - 1\,\middle|\,x(t)\right\}}{\Delta t}
\,\middle|\,x(t_1)\right\}$$
(the interchange of limit and expectation being justified by assumption (2)), so that
$$\frac{\partial\,\Phi_{x(t)|x(t_1)}(s, t)}{\partial t}
= E\left\{\exp\left(-s^T x(t)\right)\,\varphi\left(s; x(t), t\right)\,\middle|\,x(t_1)\right\}$$
q.e.d.
SOLO Stochastic Processes

Bartlett–Moyal Theorem

Discussion of the Bartlett–Moyal Theorem

(1) The assumption that x(t) is a Markov Process is essential to the derivation.

(2) The function
$$\varphi\left(s; x(t), t\right)
:= \frac{E\left\{\exp\left(-s^T d x\right) - 1\,\middle|\,x(t)\right\}}{d t}$$
is called the Itô Differential of the Markov Process, or the
Infinitesimal Generator of the Markov Process.

(3) The function $\varphi\left(s; x(t), t\right)$ is all we need to define the Stochastic Process
(this will be proven in the next Lemma).
SOLO Stochastic Processes

Bartlett–Moyal Theorem

Lemma

Let x(t) be an (n x 1) Vector Markov Process generated by
$$d x = f\left(x, t\right) dt + d n,\qquad d n = d n_g + d n_p$$
where $d n_p$ is an (n x 1) Poisson process with zero mean, rate vector $\lambda = \left(\lambda_1,\ldots,\lambda_n\right)$
and jump probability densities $p_{a_i}\left(\alpha\right)$, and $d n_g$ is an (n x 1) Wiener (Gauss) process
with zero mean and covariance
$$E\left\{d n_g(t)\, d n_g^T(t)\right\} = Q(t)\, d t$$

Then
$$\varphi\left(s; x(t), t\right)
= -s^T f\left(x, t\right) + \tfrac{1}{2}\, s^T Q\, s
- \sum_{i=1}^{n}\lambda_i\left[1 - M_{a_i}\left(s_i\right)\right],
\qquad M_{a_i}\left(s_i\right) := E\left\{\exp\left(-s_i\, a_i\right)\right\}$$

Proof

We have
$$\varphi\left(s; x(t), t\right)
:= \frac{E\left\{\exp\left(-s^T d x\right) - 1\,\middle|\,x(t)\right\}}{d t}
= \frac{E\left\{\exp\left(-s^T\left[f\left(x, t\right) dt + d n_g + d n_p\right]\right) - 1\,\middle|\,x(t)\right\}}{d t}$$

Because $d n_g$ and $d n_p$ are independent (and f(x,t) dt is deterministic given x(t)),
$$E\left\{\exp\left(-s^T\left[f\, dt + d n_g + d n_p\right]\right)\,\middle|\,x(t)\right\}
= \exp\left(-s^T f\left(x, t\right) dt\right)\,
E\left\{\exp\left(-s^T d n_g\right)\right\}\,
E\left\{\exp\left(-s^T d n_p\right)\right\}$$

and the probability of exactly one jump in $d n_i$ during dt is
$$P\left[\text{only one jump in } d n_i\right]
= \lambda_i\, dt\prod_{j\ne i}\left(1 - \lambda_j\, dt\right) = \lambda_i\, dt + 0(dt)$$
SOLO Stochastic Processes

Bartlett–Moyal Theorem

Lemma – Proof (continue – 1)

Because $d n_g$ is Gaussian with zero mean and covariance Q dt,
$$E\left\{\exp\left(-s^T d n_g\right)\right\} = \exp\left(\tfrac{1}{2}\, s^T Q\, s\; dt\right)$$

The Characteristic Function of the generalized Poisson process can be
evaluated as follows. Note that the probability of two or more jumps
occurring during dt is 0(dt) → 0, so
$$E\left\{\exp\left(-s^T d n_p\right)\right\}
= 1\cdot P\left[\text{no jumps}\right]
+ \sum_{i=1}^{n} E\left\{\exp\left(-s_i\, a_i\right)\right\}\,
P\left[\text{only one jump in } d n_i\right] + 0(dt)$$

But
$$P\left[\text{no jumps}\right] = \prod_{i=1}^{n}\left(1 - \lambda_i\, dt\right)
= 1 - \sum_{i=1}^{n}\lambda_i\, dt + 0(dt)$$
$$P\left[\text{only one jump in } d n_i\right]
= \lambda_i\, dt\prod_{j\ne i}\left(1 - \lambda_j\, dt\right) = \lambda_i\, dt + 0(dt)$$
so that
$$E\left\{\exp\left(-s^T d n_p\right)\right\}
= 1 - \sum_{i=1}^{n}\lambda_i\, dt
+ \sum_{i=1}^{n}\lambda_i\, M_{a_i}\left(s_i\right) dt + 0(dt),
\qquad M_{a_i}\left(s_i\right) := E\left\{\exp\left(-s_i\, a_i\right)\right\}$$
SOLO Stochastic Processes

Bartlett–Moyal Theorem

Lemma – Proof (continue – 3)

We found
$$E\left\{\exp\left(-s^T d n_g\right)\right\} = \exp\left(\tfrac{1}{2}\, s^T Q\, s\; dt\right),
\qquad
E\left\{\exp\left(-s^T d n_p\right)\right\}
= 1 - \sum_{i=1}^{n}\lambda_i\left[1 - M_{a_i}\left(s_i\right)\right] dt + 0(dt)$$

Therefore
$$\varphi\left(s; x(t), t\right)
= \frac{E\left\{\exp\left(-s^T d x\right) - 1\,\middle|\,x(t)\right\}}{d t}
= \frac{\exp\left(-s^T f\left(x, t\right) dt\right)\,
E\left\{\exp\left(-s^T d n_g\right)\right\}\,
E\left\{\exp\left(-s^T d n_p\right)\right\} - 1}{d t}$$
$$= \frac{\left[1 - s^T f\left(x, t\right) dt + 0(dt)\right]
\left[1 + \tfrac{1}{2}\, s^T Q\, s\; dt + 0(dt)\right]
\left[1 - \sum_{i=1}^{n}\lambda_i\left(1 - M_{a_i}\left(s_i\right)\right) dt + 0(dt)\right] - 1}{d t}$$
$$= -s^T f\left(x, t\right) + \tfrac{1}{2}\, s^T Q\, s
- \sum_{i=1}^{n}\lambda_i\left[1 - M_{a_i}\left(s_i\right)\right]$$
q.e.d.
SOLO Stochastic Processes

Bartlett–Moyal Theorem

Theorem 2

Let x(t) be an (n x 1) Vector Markov Process generated by
$$d x = f\left(x, t\right) dt + d n_g + d n_p$$
and let $p := p_{x(t)|x(t_1)}\left(x(t), t\,\middle|\,x(t_1)\right)$ be the Transition Probability Density Function for the
Markov process x(t). Then p satisfies the Partial Differential Equation
$$\frac{\partial p}{\partial t}
= -\sum_{i=1}^{n}\frac{\partial\left(f_i\, p\right)}{\partial x_i}
+ \frac{1}{2}\sum_{i=1}^{n}\sum_{j=1}^{n} Q_{ij}\,\frac{\partial^2 p}{\partial x_i\,\partial x_j}
+ \sum_{i=1}^{n}\lambda_i\left[p * p_{a_i} - p\right]$$
where the convolution (*) is defined as
$$p * p_{a_i} := \int p_{a_i}\left(v_i\right)\,
p_{x(t)|x(t_1)}\left(x_1,\ldots,x_i - v_i,\ldots,x_n, t\,\middle|\,x(t_1)\right) d v_i$$

Proof

From Theorem 1 and the previous Lemma, we have
$$\frac{\partial\,\Phi_{x(t)|x(t_1)}\left(s, t\,|\,x(t_1)\right)}{\partial t}
\overset{\text{Theorem 1}}{=}
E\left\{\exp\left(-s^T x(t)\right)\,\varphi\left(s; x(t), t\right)\,\middle|\,x(t_1)\right\}
\overset{\text{Lemma}}{=}
E\left\{\exp\left(-s^T x(t)\right)
\left[-s^T f\left(x(t), t\right) + \tfrac{1}{2}\, s^T Q\, s
- \sum_{i=1}^{n}\lambda_i\left(1 - M_{a_i}\left(s_i\right)\right)\right]
\middle|\,x(t_1)\right\}$$

We also have the transform pair
$$\Phi_{x(t)|x(t_1)}(s, t)
= \int_{-\infty}^{+\infty}\exp\left(-s^T x(t)\right)\,
p_{x(t)|x(t_1)}\left(x(t), t\,\middle|\,x(t_1)\right) d x(t)
\;\Longleftrightarrow\;
p_{x(t)|x(t_1)}\left(x(t), t\,\middle|\,x(t_1)\right)
= \frac{1}{\left(2\pi j\right)^n}\int_{-j\infty}^{+j\infty}
\exp\left(s^T x(t)\right)\,\Phi_{x(t)|x(t_1)}(s, t)\, d s$$
so that
$$\frac{\partial\, p_{x(t)|x(t_1)}\left(x(t), t\,\middle|\,x(t_1)\right)}{\partial t}
= \frac{1}{\left(2\pi j\right)^n}\int_{-j\infty}^{+j\infty}
\exp\left(s^T x(t)\right)\,
\frac{\partial\,\Phi_{x(t)|x(t_1)}(s, t)}{\partial t}\, d s$$
SOLO Stochastic Processes

Bartlett–Moyal Theorem

Theorem 2 – Proof (continue – 1)

Substitute the expression for ∂Φ/∂t term by term. For the drift term:
$$\frac{1}{\left(2\pi j\right)^n}\int_{-j\infty}^{+j\infty}\exp\left(s^T x(t)\right)\,
E\left\{\exp\left(-s^T x(t)\right)\left(-s^T f\left(x(t), t\right)\right)\middle|\,x(t_1)\right\} d s$$
$$= \frac{1}{\left(2\pi j\right)^n}\int_{-j\infty}^{+j\infty}\int_{-\infty}^{+\infty}
\exp\left(s^T\left[x(t) - v\right]\right)\left(-s^T f\left(v, t\right)\right)
p_{x(t)|x(t_1)}\left(v, t\,\middle|\,x(t_1)\right) d v\, d s$$
$$= -\sum_{i=1}^{n}\frac{\partial}{\partial x_i}
\left[f_i\left(x(t), t\right)\, p_{x(t)|x(t_1)}\left(x(t), t\,\middle|\,x(t_1)\right)\right]$$
since multiplication by $s_i$ in the transform domain corresponds to the derivative $\partial/\partial x_i$
acting on the product $f_i\, p$.
SOLO Stochastic Processes

Bartlett–Moyal Theorem

Theorem 2 – Proof (continue – 2)

For the diffusion term:
$$\frac{1}{\left(2\pi j\right)^n}\int_{-j\infty}^{+j\infty}\exp\left(s^T x(t)\right)\,
E\left\{\exp\left(-s^T x(t)\right)\,\tfrac{1}{2}\, s^T Q(t)\, s\,\middle|\,x(t_1)\right\} d s$$
$$= \frac{1}{\left(2\pi j\right)^n}\int_{-j\infty}^{+j\infty}\int_{-\infty}^{+\infty}
\exp\left(s^T\left[x(t) - v\right]\right)\,\tfrac{1}{2}\, s^T Q(t)\, s\;
p_{x(t)|x(t_1)}\left(v, t\,\middle|\,x(t_1)\right) d v\, d s$$
$$= \frac{1}{2}\sum_{i=1}^{n}\sum_{j=1}^{n} Q_{ij}(t)\,
\frac{\partial^2\, p_{x(t)|x(t_1)}\left(x(t), t\,\middle|\,x(t_1)\right)}{\partial x_i\,\partial x_j}$$
since each factor $s_i\, s_j$ in the transform domain corresponds to $\partial^2/\partial x_i\,\partial x_j$.
SOLO Stochastic Processes

Bartlett–Moyal Theorem

Theorem 2 – Proof (continue – 3)

For the jump (Poisson) term, use
$M_{a_i}\left(s_i\right) = E\left\{\exp\left(-s_i\, a_i\right)\right\} = \int\exp\left(-s_i\, v_i\right) p_{a_i}\left(v_i\right) d v_i$:
$$\frac{1}{\left(2\pi j\right)^n}\int_{-j\infty}^{+j\infty}\exp\left(s^T x(t)\right)\,
E\left\{\exp\left(-s^T x(t)\right)
\left(-\sum_{i=1}^{n}\lambda_i\left[1 - M_{a_i}\left(s_i\right)\right]\right)\middle|\,x(t_1)\right\} d s$$
$$= -\sum_{i=1}^{n}\lambda_i\, p_{x(t)|x(t_1)}\left(x(t), t\,\middle|\,x(t_1)\right)
+ \sum_{i=1}^{n}\lambda_i\int p_{a_i}\left(v_i\right)\,
p_{x(t)|x(t_1)}\left(x_1,\ldots,x_i - v_i,\ldots,x_n, t\,\middle|\,x(t_1)\right) d v_i
= \sum_{i=1}^{n}\lambda_i\left[p * p_{a_i} - p\right]$$
since multiplication of the transform by $M_{a_i}\left(s_i\right)$ corresponds to the convolution
$$p * p_{a_i} := \int p_{a_i}\left(v_i\right)\,
p_{x(t)|x(t_1)}\left(x_1,\ldots,x_i - v_i,\ldots,x_n, t\,\middle|\,x(t_1)\right) d v_i$$

Collecting the three terms gives the partial differential equation of Theorem 2. q.e.d.
Table of Content
SOLO Stochastic Processes

Feller–Kolmogorov Equation

Andrey Nikolaevich Kolmogorov
1903-1987

Let x(t) be an (n x 1) Vector Markov Process generated by
$$d x = f\left(x, t\right) dt + d n_p$$
and let $p := p_{x(t)|x(t_1)}\left(x(t), t\,\middle|\,x(t_1)\right)$ be the Transition Probability Density Function for the
Markov process x(t). Then p satisfies the Partial Differential Equation
$$\frac{\partial p}{\partial t}
= -\sum_{i=1}^{n}\frac{\partial\left(f_i\, p\right)}{\partial x_i}
+ \sum_{i=1}^{n}\lambda_i\left[p * p_{a_i} - p\right]$$
where the convolution (*) is defined as
$$p * p_{a_i} := \int p_{a_i}\left(v_i\right)\,
p_{x(t)|x(t_1)}\left(x_1,\ldots,x_i - v_i,\ldots,x_n, t\,\middle|\,x(t_1)\right) d v_i$$

Proof
Derived from Theorem 2 by taking $d n_g = 0$.
SOLO Stochastic Processes

Fokker–Planck Equation

Let x(t) be an (n x 1) Vector Markov Process generated by
$$d x = f\left(x, t\right) dt + d n_g$$
and let $p := p_{x(t)|x(t_1)}\left(x(t), t\,\middle|\,x(t_1)\right)$ be the Transition Probability Density Function for the
Markov process x(t). Then p satisfies the Partial Differential Equation
$$\frac{\partial p}{\partial t}
= -\sum_{i=1}^{n}\frac{\partial\left(f_i\, p\right)}{\partial x_i}
+ \frac{1}{2}\sum_{i=1}^{n}\sum_{j=1}^{n} Q_{ij}\,\frac{\partial^2 p}{\partial x_i\,\partial x_j}$$

Proof
Derived from Theorem 2 by taking $d n_p = 0$.

Discussion of the Fokker–Planck Equation

The Fokker–Planck equation can be written as a Conservation Law
$$\frac{\partial p}{\partial t} + \sum_{i=1}^{n}\frac{\partial J_i}{\partial x_i}
= \frac{\partial p}{\partial t} + \nabla\cdot J = 0,
\qquad J := f\, p - \frac{1}{2}\, Q\,\nabla p$$

This Conservation Law is a consequence of the Global Conservation of Probability
$$\int p_{x(t)|x(t_1)}\left(x(t), t\,\middle|\,x(t_1)\right) d x = 1$$
Table of Content
SOLO Stochastic Processes

Langevin and Fokker–Planck Equations

The original Langevin equation describes Brownian motion, the apparently random
movement of a particle in a fluid due to collisions with the molecules of the fluid:
$$m\,\frac{d v}{d t} = -\lambda\, v + \eta(t),\quad v = \frac{d x}{d t}
\quad\Rightarrow\quad
\frac{d v}{d t} = -\frac{\lambda}{m}\, v + \frac{1}{m}\,\eta(t)$$

We are interested in the position x of a particle of mass m. The force on the particle is
the sum of a viscous force proportional to the particle's velocity, −λ v (Stokes' law), plus a
noise term η(t) that has a Gaussian probability distribution with correlation function
$$\left\langle \eta_i(t)\,\eta_j(t')\right\rangle
= 2\,\lambda\, k_B\, T\,\delta_{ij}\,\delta\left(t - t'\right),
\qquad Q := \frac{2\,\lambda\, k_B\, T}{m^2}$$
where kB is Boltzmann's constant and T is the temperature.

Let $p := p_{v(t)|v(t_0)}\left(v, t\,\middle|\,v(t_0)\right)$ be the Transition Probability Density Function that
corresponds to the Langevin equation state. Then p satisfies the Partial
Differential Equation given by the Fokker–Planck equation:
$$\frac{\partial p}{\partial t}
= -\frac{\partial\left[\left(-\lambda\, v / m\right) p\right]}{\partial v}
+ \frac{Q}{2}\,\frac{\partial^2 p}{\partial v^2}$$

We assume that the initial state at t0, v(t0), is deterministic:
$$p_{v(t_0)|v(t_0)}\left(v, t_0\,\middle|\,v(t_0)\right) = \delta\left(v - v(t_0)\right)$$
SOLO Stochastic Processes

Langevin and Fokker–Planck Equations (continue)

The Fokker–Planck equation
$$\frac{\partial p}{\partial t}
= -\frac{\partial\left[\left(-\lambda\, v / m\right) p\right]}{\partial v}
+ \frac{Q}{2}\,\frac{\partial^2 p}{\partial v^2},
\qquad p_{v(t_0)|v(t_0)}\left(v, t_0\,\middle|\,v(t_0)\right) = \delta\left(v - v(t_0)\right)$$
with the deterministic initial state v(t0) has the solution
$$p_{v(t)|v(t_0)}\left(v, t\,\middle|\,v(t_0)\right)
= \frac{1}{\left(2\pi\,\sigma^2\right)^{1/2}}
\exp\left[-\frac{\left(v - \hat{v}\right)^2}{2\,\sigma^2}\right]$$
where
$$\hat{v} = v_0\,\exp\left[-\frac{\lambda}{m}\left(t - t_0\right)\right]
\qquad\text{and}\qquad
\sigma^2 = \frac{Q\, m}{2\,\lambda}\left[1 - \exp\left(-\frac{2\,\lambda}{m}\left(t - t_0\right)\right)\right]$$

(Figure: a solution of the one-dimensional Fokker–Planck equation with both drift and
diffusion terms; the initial condition is a Dirac delta function at x = 1, and the distribution
drifts towards x = 0.)
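The Gaussian transition density above can be checked by simulating the Langevin equation directly. The sketch below is a minimal Python/NumPy example (an assumed environment with illustrative values of m, λ and kB T set to simple units) that propagates many velocity samples from a deterministic initial value and compares their empirical mean and variance at time t with v̂ and σ² from the Fokker–Planck solution.

```python
import numpy as np

# Minimal sketch (illustrative parameters, k_B = 1): m dv/dt = -lambda*v + eta(t),
# <eta(t) eta(t')> = 2*lambda*kB*T*delta(t-t'), started from the deterministic value v0.
rng = np.random.default_rng(10)
m, lam, kB, T_temp, v0 = 1.0, 2.0, 1.0, 0.5, 3.0
t_final, dt, n_paths = 1.0, 1e-3, 100_000
Q = 2.0 * lam * kB * T_temp / m**2                    # noise intensity of (1/m)*eta

v = np.full(n_paths, v0)
for _ in range(int(t_final / dt)):
    v += -(lam / m) * v * dt + np.sqrt(Q * dt) * rng.standard_normal(n_paths)

v_hat = v0 * np.exp(-lam * t_final / m)                               # mean of the FP solution
sigma2 = (Q * m / (2 * lam)) * (1 - np.exp(-2 * lam * t_final / m))   # its variance
print("mean    :", v.mean(), " vs v_hat   =", v_hat)
print("variance:", v.var(), " vs sigma^2 =", sigma2)
```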
Table of Content
SOLO Stochastic Processes

Generalized Fokker–Planck Equation

The Fokker–Planck equation was derived under the assumption that x(t) is a
Markov process. Let us assume that we do not have a Markov process, but an arbitrary
random process x(t) (an n x 1 vector), where an arbitrary set of past values
$x_1, t_1;\; x_2, t_2;\;\ldots;\; x_n, t_n$ must be considered.

Define the set of past data
$\left(X, T\right) := \left(\left(x_1, x_2,\ldots,x_n\right),\left(t_1, t_2,\ldots,t_n\right)\right)$,
where we assume that $\left(x(t), t\right)\notin\left(X, T\right)$. We need to find $p_x\left(x, t\,|\,X, T\right)$.

Start the analysis by defining the Conditional Characteristic Function of the
Increment of the Process, $\Delta x(t) := x(t) - x(t-\Delta t)$:
$$\Phi_{\Delta x|x(t-\Delta t),X,T}\left(s, t\,\middle|\,x(t-\Delta t), X, T\right)
:= E\left\{\exp\left(-s^T\left[x(t) - x(t-\Delta t)\right]\right)\,\middle|\,x(t-\Delta t), X, T\right\}$$
$$= \int_{-\infty}^{+\infty}\exp\left(-s^T\left[x(t) - x(t-\Delta t)\right]\right)\,
p_{x(t)|x(t-\Delta t),X,T}\left(x(t)\,\middle|\,x(t-\Delta t), X, T\right) d x(t)$$

The inverse transform is
$$p_{x(t)|x(t-\Delta t),X,T}\left(x(t)\,\middle|\,x(t-\Delta t), X, T\right)
= \frac{1}{\left(2\pi j\right)^n}\int_{-j\infty}^{+j\infty}
\exp\left(s^T\left[x(t) - x(t-\Delta t)\right]\right)\,
\Phi_{\Delta x|x(t-\Delta t),X,T}\left(s, t\,\middle|\,x(t-\Delta t), X, T\right) d s$$

Here $x^T = \left(x_1\;\cdots\;x_n\right)$ and $s^T = \left(s_1\;\cdots\;s_n\right)$.
SOLO Stochastic Processes

Generalized Fokker–Planck Equation (continue – 1)

Using the Chapman–Kolmogorov equation we obtain
$$p_x\left(x, t\,\middle|\,X, T\right)
= \int_{-\infty}^{+\infty} p_{x(t)|x(t-\Delta t),X,T}\left(x(t)\,\middle|\,x(t-\Delta t), X, T\right)\,
p_{x(t-\Delta t)|X,T}\left(x(t-\Delta t)\,\middle|\,X, T\right) d x(t-\Delta t)$$
$$= \frac{1}{\left(2\pi j\right)^n}\int_{-\infty}^{+\infty}\int_{-j\infty}^{+j\infty}
\exp\left(s^T\left[x(t) - x(t-\Delta t)\right]\right)\,
\Phi_{\Delta x|x(t-\Delta t),X,T}\left(s, t\,\middle|\,x(t-\Delta t), X, T\right)\,
p_{x(t-\Delta t)|X,T}\left(x(t-\Delta t)\,\middle|\,X, T\right) d s\, d x(t-\Delta t)$$

Let us expand the Conditional Characteristic Function in a Taylor series about the vector s = 0:
$$\Phi_{\Delta x|x(t-\Delta t),X,T}\left(s, t\,\middle|\,x(t-\Delta t), X, T\right)
= 1 + \sum_{i=1}^{n}\left.\frac{\partial\Phi}{\partial s_i}\right|_{s=0} s_i
+ \frac{1}{2}\sum_{i=1}^{n}\sum_{j=1}^{n}\left.\frac{\partial^2\Phi}{\partial s_i\,\partial s_j}\right|_{s=0} s_i\, s_j + \cdots
= \sum_{m_1=0}^{\infty}\cdots\sum_{m_n=0}^{\infty}\frac{1}{m_1!\cdots m_n!}
\left.\frac{\partial^{\,m_1+\cdots+m_n}\Phi}{\partial s_1^{m_1}\cdots\partial s_n^{m_n}}\right|_{s=0}
s_1^{m_1}\cdots s_n^{m_n}$$
where the Taylor coefficients are the conditional increment moments:
$$\left.\frac{\partial^{\,m_1+\cdots+m_n}\Phi_{\Delta x|x(t-\Delta t),X,T}}{\partial s_1^{m_1}\cdots\partial s_n^{m_n}}\right|_{s=0}
= \left(-1\right)^{m_1+\cdots+m_n}
E\left\{\left[x_1(t)-x_1(t-\Delta t)\right]^{m_1}\cdots\left[x_n(t)-x_n(t-\Delta t)\right]^{m_n}
\,\middle|\,x(t-\Delta t), X, T\right\}$$
SOLO Stochastic Processes
( ) ( ) [ ]
( )
( ) ( )[ ]{ } ( ) ( )( ) ( ) ( )( ) ( )∫ ∫
+∞
∞−
∞+
∞−
∆−∆−∆∆− ∆−∆−∆−Φ∆−−=
j
j
TXttxTXttxx
T
nTXttxtx ttxdsdTXttxpTXttxtsttxtxs
j
TXtxp ,|,,|,exp
2
1
,|, ,|,,|,,|
π
( )
( ) ( )[ ]{ } ( )
( ) ( )( ) ( )∫ ∫ ∑ ∑
+∞
∞−
∞+
∞−
∆−
∞
=
∞
=
∆−∆
∆−∆−
∂∂
Φ∂
∆−−=
j
j
TXttx
m m
m
n
m
m
n
m
TXttxx
m
n
T
n
ttxdsdTXttxpss
ssmm
ttxtxs
j n
n
n
,|
!!
1
exp
2
1
.|
0 0
1
1
,,|
11
1
1



π
( )
( ) ( )[ ]{ } ( )
( ) ( )( ) ( )ttxdTXttxpdsdsss
ss
ttxtxs
jmm
TXttx
m m
j
j
j
j
n
m
n
m
m
n
m
TXttxx
m
T
n
nn
n
n
∆−∆−
∂∂
Φ∂
∆−−= ∆−
∞
=
∞
=
+∞
∞−
∞+
∞−
∞+
∞−
∆−∆
∑ ∑ ∫ ∫ ∫ ,|exp
2
1
!!
1
,|
0 0
11
1
,,|
11
1
1





π
( )
( )
( ) ( )[ ]{ } ( ) ( ) ( )( ) ( ) ( )( ) ( ){ } ( ) ( )( ) ( )ttxdTXttxpdsdsssTXttxttxtxttxtxEttxtxs
jmm
TXttx
m m
j
j
j
j
n
m
n
mm
nn
m
TXttxx
T
n
n
m
n
nn
∆−∆−∆−∆−−∆−−∆−−
−
= ∆−
∞
=
∞
=
+∞
∞−
∞+
∞−
∞+
∞−
∆−∆∑ ∑ ∫ ∫ ∫ ,|,,|exp
2
1
!!
1
,|
0 0
1111,,|
11
11



π
we obtained:
( )
( )
( ) ( )[ ]{ } ( ) ( ) ( )( ) ( ){ } ( ) ( )( ) ( )ttxdTXttxpdssTXttxttxtxEttxtxs
jm
TXttx
m m
n
i
j
j
i
m
i
m
iiTXttxxiii
i
m
n
ii
i
∆−∆−








∆−∆−−∆−−
−
= ∆−
∞
=
∞
=
+∞
∞− =
∞+
∞−
∆−∆∑ ∑ ∫ ∏ ∫ ,|,,|exp
2
1
!
1
,|
0 0 1
,,|
1
π

Generalized Fokker - Planck Equation
SOLO Stochastic Processes
Using the identity

$$\frac{1}{2\pi j}\int_{-j\infty}^{+j\infty} s_i^{m_i}\exp\left[s_i\left(u-a\right)\right]\,d s_i = \frac{d^{\,m_i}}{d u^{m_i}}\left[\frac{1}{2\pi j}\int_{-j\infty}^{+j\infty}\exp\left[s_i\left(u-a\right)\right]\,d s_i\right] = \frac{d^{\,m_i}}{d u^{m_i}}\,\delta\left(u-a\right)$$

the s-integrals become derivatives of delta functions, and carrying out the integration over x(t-Δt) moves those derivatives onto the variables x1, ..., xn. We obtain:

$$p\left(x,t\,|\,X,T\right) = \sum_{m_1=0}^{\infty}\cdots\sum_{m_n=0}^{\infty}\frac{\left(-1\right)^{m}}{m_1!\cdots m_n!}\,\frac{\partial^{\,m}}{\partial x_1^{m_1}\cdots\partial x_n^{m_n}}\left[E\left\{\prod_{i=1}^{n}\left(x_i(t)-x_i(t-\Delta t)\right)^{m_i}\,\Big|\,x(t-\Delta t)=x,X,T\right\}\,p_{x(t-\Delta t)}\left(x,t-\Delta t\,|\,X,T\right)\right],\qquad m = \sum_{i=1}^{n} m_i$$

For m1 = … = mn = m = 0 we obtain the term $p_{x(t-\Delta t)}\left(x,t-\Delta t\,|\,X,T\right)$.
Generalized Fokker - Planck Equation
SOLO Stochastic Processes
we obtained:
$$p\left(x,t\,|\,X,T\right) - p_{x(t-\Delta t)}\left(x,t-\Delta t\,|\,X,T\right) = \sum_{\substack{m_1,\dots,m_n=0\\ \sum_i m_i\neq 0}}^{\infty}\frac{\left(-1\right)^{m}}{m_1!\cdots m_n!}\,\frac{\partial^{\,m}}{\partial x_1^{m_1}\cdots\partial x_n^{m_n}}\left[E\left\{\prod_{i=1}^{n}\left(x_i(t)-x_i(t-\Delta t)\right)^{m_i}\,\Big|\,x(t-\Delta t)=x,X,T\right\}\,p_{x(t-\Delta t)}\left(x,t-\Delta t\,|\,X,T\right)\right]$$
Dividing both sides by Δt and taking Δt →0 we obtain:
$$\frac{\partial\,p\left(x,t\,|\,X,T\right)}{\partial t} = \lim_{\Delta t\to 0}\frac{p\left(x,t\,|\,X,T\right) - p_{x(t-\Delta t)}\left(x,t-\Delta t\,|\,X,T\right)}{\Delta t} = \sum_{\substack{m_1,\dots,m_n=0\\ \sum_i m_i\neq 0}}^{\infty}\frac{\left(-1\right)^{m}}{m_1!\cdots m_n!}\,\frac{\partial^{\,m}}{\partial x_1^{m_1}\cdots\partial x_n^{m_n}}\left[\lim_{\Delta t\to 0}\frac{E\left\{\prod_{i=1}^{n}\left(x_i(t)-x_i(t-\Delta t)\right)^{m_i}\,\big|\,x(t),X,T\right\}}{\Delta t}\;p\left(x,t\,|\,X,T\right)\right]$$
This is the Generalized Fokker - Planck Equation for Non-Markovian Random Processes
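For orientation, here is a sketch (not in the original slides) of the scalar (n = 1) case: writing A_m for the limit of the m-th increment moment divided by Δt, the equation above takes a Kramers–Moyal form, and when all A_m with m ≥ 3 vanish (the case singled out by Pawula's lemmas below) only the drift and diffusion terms survive:

$$\frac{\partial\,p\left(x,t\,|\,X,T\right)}{\partial t} = \sum_{m=1}^{\infty}\frac{\left(-1\right)^m}{m!}\,\frac{\partial^m}{\partial x^m}\left[A_m\left(x,t\,|\,X,T\right)\,p\left(x,t\,|\,X,T\right)\right],\qquad A_m := \lim_{\Delta t\to 0}\frac{E\left\{\left[x(t)-x(t-\Delta t)\right]^m\,\big|\,x(t)=x,X,T\right\}}{\Delta t}$$

$$A_m = 0\ \ \forall\,m\geq 3\quad\Longrightarrow\quad \frac{\partial p}{\partial t} = -\frac{\partial}{\partial x}\left[A_1\,p\right] + \frac{1}{2}\,\frac{\partial^2}{\partial x^2}\left[A_2\,p\right]$$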
Generalized Fokker - Planck Equation
SOLO Stochastic Processes
Discussion of Generalized Fokker – Planck Equation
$$\frac{\partial\,p\left(x,t\,|\,X,T\right)}{\partial t} = \sum_{m_1=0}^{\infty}\cdots\sum_{m_n=0}^{\infty}\frac{\left(-1\right)^{m}}{m_1!\cdots m_n!}\,\frac{\partial^{\,m}}{\partial x_1^{m_1}\cdots\partial x_n^{m_n}}\left[A_{m_1,\dots,m_n}\,p\left(x,t\,|\,X,T\right)\right],\qquad \sum_{i=1}^{n} m_i\neq 0$$

$$A_{m_1,\dots,m_n} := \lim_{\Delta t\to 0}\frac{E\left\{\left(x_1(t)-x_1(t-\Delta t)\right)^{m_1}\cdots\left(x_n(t)-x_n(t-\Delta t)\right)^{m_n}\,\big|\,x(t),X,T\right\}}{\Delta t}$$

• The Generalized Fokker - Planck Equation is much more complex than the Fokker – Planck Equation because of the presence of an infinite number of derivatives of the density function.
• It requires certain types of density functions, infinitely differentiable, and knowledge of all the coefficients $A_{m_1,\dots,m_n}$.
• To avoid those difficulties we seek conditions on the process for which ∂p/∂t is defined by a finite set of derivatives.
Generalized Fokker - Planck Equation
SOLO Stochastic Processes
Discussion of Generalized Fokker – Planck Equation
$$\frac{\partial\,p\left(x,t\,|\,X,T\right)}{\partial t} = \sum_{m_1=0}^{\infty}\cdots\sum_{m_n=0}^{\infty}\frac{\left(-1\right)^{m}}{m_1!\cdots m_n!}\,\frac{\partial^{\,m}}{\partial x_1^{m_1}\cdots\partial x_n^{m_n}}\left[A_{m_1,\dots,m_n}\,p\left(x,t\,|\,X,T\right)\right],\qquad \sum_{i=1}^{n} m_i\neq 0$$

• To avoid those difficulties we seek conditions on the process for which ∂p/∂t is defined by a finite set of derivatives. Those conditions were given by Pawula, R.F. (1967).
Lemma 1
Let
$$A_{m_1,0,\dots,0} := \lim_{\Delta t\to 0}\frac{E\left\{\left(x_1(t)-x_1(t-\Delta t)\right)^{m_1}\,\big|\,x(t),X,T\right\}}{\Delta t},\qquad m_1\neq 0$$

If $A_{m_1,0,\dots,0}$ is zero for some even $m_1$, then $A_{m_1,0,\dots,0} = 0$ for all $m_1\geq 3$.

Proof

For $m_1$ odd and $m_1\geq 3$, we have

$$A_{m_1,0,\dots,0} = \lim_{\Delta t\to 0}\frac{E\left\{\left(x_1(t)-x_1(t-\Delta t)\right)^{m_1}\,\big|\,x(t),X,T\right\}}{\Delta t} = \lim_{\Delta t\to 0}\frac{E\left\{\left(x_1(t)-x_1(t-\Delta t)\right)^{\frac{m_1-1}{2}}\left(x_1(t)-x_1(t-\Delta t)\right)^{\frac{m_1+1}{2}}\,\big|\,x(t),X,T\right\}}{\Delta t}$$
Generalized Fokker - Planck Equation
SOLO Stochastic Processes
Lemma 1
Let

$$A_{m_1,0,\dots,0} := \lim_{\Delta t\to 0}\frac{E\left\{\left(x_1(t)-x_1(t-\Delta t)\right)^{m_1}\,\big|\,x(t),X,T\right\}}{\Delta t},\qquad m_1\neq 0$$

If $A_{m_1,0,\dots,0}$ is zero for some even $m_1$, then $A_{m_1,0,\dots,0} = 0$ for all $m_1\geq 3$.

Proof

For $m_1$ odd and $m_1\geq 3$, splitting the $m_1$-th power as above and using the Schwarz Inequality, we have

$$A_{m_1,0,\dots,0}^2 \leq \lim_{\Delta t\to 0}\frac{E\left\{\left(x_1(t)-x_1(t-\Delta t)\right)^{m_1-1}\,\big|\,x(t),X,T\right\}}{\Delta t}\;\lim_{\Delta t\to 0}\frac{E\left\{\left(x_1(t)-x_1(t-\Delta t)\right)^{m_1+1}\,\big|\,x(t),X,T\right\}}{\Delta t} = A_{m_1-1,0,\dots,0}\,A_{m_1+1,0,\dots,0}$$

In the same way, for $m_1\geq 4$ and $m_1$ even, using the Schwarz Inequality again,

$$A_{m_1,0,\dots,0}^2 \leq A_{m_1-2,0,\dots,0}\,A_{m_1+2,0,\dots,0}$$
Generalized Fokker - Planck Equation
SOLO Stochastic Processes
Lemma 1
Let
$$A_{m_1,0,\dots,0} := \lim_{\Delta t\to 0}\frac{E\left\{\left(x_1(t)-x_1(t-\Delta t)\right)^{m_1}\,\big|\,x(t),X,T\right\}}{\Delta t},\qquad m_1\neq 0$$

If $A_{m_1,0,\dots,0}$ is zero for some even $m_1$, then $A_{m_1,0,\dots,0} = 0$ for all $m_1\geq 3$.

Proof (continue)

We have

$$A_{m_1,0,\dots,0}^2 \leq A_{m_1-1,0,\dots,0}\,A_{m_1+1,0,\dots,0},\qquad m_1\ \text{odd},\ m_1\geq 3$$
$$A_{m_1,0,\dots,0}^2 \leq A_{m_1-2,0,\dots,0}\,A_{m_1+2,0,\dots,0},\qquad m_1\ \text{even},\ m_1\geq 4$$

For some even $m_1 = r$ we have $A_{r,0,\dots,0} = 0$, and

$$A_{r-1,0,\dots,0}^2 \leq A_{r-2,0,\dots,0}\,A_{r,0,\dots,0},\qquad r-1\geq 3$$
$$A_{r+1,0,\dots,0}^2 \leq A_{r,0,\dots,0}\,A_{r+2,0,\dots,0},\qquad r+1\geq 3$$
$$A_{r-2,0,\dots,0}^2 \leq A_{r-4,0,\dots,0}\,A_{r,0,\dots,0},\qquad r-2\geq 4$$
$$A_{r+2,0,\dots,0}^2 \leq A_{r,0,\dots,0}\,A_{r+4,0,\dots,0},\qquad r+2\geq 4$$

Therefore $A_{r-2,0,\dots,0}=0$, $A_{r-1,0,\dots,0}=0$, $A_{r+1,0,\dots,0}=0$, $A_{r+2,0,\dots,0}=0$, provided $A_{r,0,\dots,0}=0$ and all the $A$ are bounded. This procedure continues, leaving $A_{1,0,\dots,0}$ and $A_{2,0,\dots,0}$ not necessarily zero and achieving

$$A_{m_1,0,\dots,0} = 0\qquad\forall\,m_1\geq 3$$

q.e.d.

If $A_{m_1,0,\dots,0}$ is zero for some even $m_1$, then $A_{m_1,0,\dots,0} = 0$ for all $m_1\geq 3$.
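As an illustration (not in the original slides), suppose the fourth moment coefficient vanishes, r = 4; the chain of Schwarz inequalities above then forces every higher coefficient to zero (writing $A_m$ for $A_{m,0,\dots,0}$):

$$A_3^2 \leq A_2 A_4 = 0 \;\Rightarrow\; A_3 = 0,\qquad A_5^2 \leq A_4 A_6 = 0 \;\Rightarrow\; A_5 = 0,$$
$$A_6^2 \leq A_4 A_8 = 0 \;\Rightarrow\; A_6 = 0,\qquad A_8^2 \leq A_6 A_{10} = 0 \;\Rightarrow\; A_8 = 0,\ \dots$$

Iterating over the odd and even indices kills every $A_m$ with $m\geq 3$, while $A_1$ and $A_2$ remain unconstrained.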
Generalized Fokker - Planck Equation
SOLO Stochastic Processes
Lemma 2
Let
$$A_{m_1,\dots,m_n} := \lim_{\Delta t\to 0}\frac{E\left\{\left(x_1(t)-x_1(t-\Delta t)\right)^{m_1}\cdots\left(x_n(t)-x_n(t-\Delta t)\right)^{m_n}\,\big|\,x(t),X,T\right\}}{\Delta t},\qquad \sum_{i=1}^{n} m_i > 0$$

If each of the moments $A_{m_1,0,\dots,0},\,A_{0,m_2,0,\dots,0},\,\dots,\,A_{0,\dots,0,m_n}$ is finite and vanishes for some even $m_i$, then

$$A_{m_1,\dots,m_n} = 0\quad\forall\,m_i\ \text{s.t.}\ \sum_{i=1}^{n} m_i\geq 3,\qquad A_{m_1,\dots,m_n}\ \text{not necessarily zero for}\ 0<\sum_{i=1}^{n} m_i\leq 2$$

Proof

We shall prove this Lemma by induction. Let us start with n = 3:

$$A_{m_1,m_2,m_3} = \lim_{\Delta t\to 0}\frac{E\left\{\left(x_1(t)-x_1(t-\Delta t)\right)^{m_1}\left(x_2(t)-x_2(t-\Delta t)\right)^{m_2}\left(x_3(t)-x_3(t-\Delta t)\right)^{m_3}\,\big|\,x(t),X,T\right\}}{\Delta t},\qquad \sum_i m_i > 0$$

We proved in Lemma 1 that $A_{m_1,0,0} = A_{0,m_2,0} = A_{0,0,m_3} = 0$ for $m_1,m_2,m_3\geq 3$, while $A_{1,0,0}$, $A_{0,1,0}$, $A_{0,0,1}$ are not necessarily zero. Using the Schwarz Inequality,

$$A_{0,m_2,m_3}^2 = \left[\lim_{\Delta t\to 0}\frac{E\left\{\left(x_2(t)-x_2(t-\Delta t)\right)^{m_2}\left(x_3(t)-x_3(t-\Delta t)\right)^{m_3}\,\big|\,x(t),X,T\right\}}{\Delta t}\right]^2 \leq \lim_{\Delta t\to 0}\frac{E\left\{\left(x_2(t)-x_2(t-\Delta t)\right)^{2m_2}\,\big|\,x(t),X,T\right\}}{\Delta t}\;\lim_{\Delta t\to 0}\frac{E\left\{\left(x_3(t)-x_3(t-\Delta t)\right)^{2m_3}\,\big|\,x(t),X,T\right\}}{\Delta t} = A_{0,2m_2,0}\,A_{0,0,2m_3}$$
→∆
Generalized Fokker - Planck Equation
SOLO Stochastic Processes
Lemma 2
Let
$A_{m_1,m_2,m_3}$ be defined as above, with each of the moments $A_{m_1,0,0}$, $A_{0,m_2,0}$, $A_{0,0,m_3}$ finite and vanishing for some even $m_i$; $A_{1,0,0}$, $A_{0,1,0}$, $A_{0,0,1}$ are not necessarily zero.

Proof (continue – 1)

Applying the Schwarz Inequality to each pair of components:

$$A_{0,m_2,m_3}^2 \leq A_{0,2m_2,0}\,A_{0,0,2m_3}\;\Rightarrow\; A_{0,m_2,m_3}=0\ \ \forall\,m_2,m_3>0,\ m_2+m_3\geq 3;\qquad A_{0,1,1}\ \text{not necessarily zero}$$
$$A_{m_1,0,m_3}^2 \leq A_{2m_1,0,0}\,A_{0,0,2m_3}\;\Rightarrow\; A_{m_1,0,m_3}=0\ \ \forall\,m_1,m_3>0,\ m_1+m_3\geq 3;\qquad A_{1,0,1}\ \text{not necessarily zero}$$
$$A_{m_1,m_2,0}^2 \leq A_{2m_1,0,0}\,A_{0,2m_2,0}\;\Rightarrow\; A_{m_1,m_2,0}=0\ \ \forall\,m_1,m_2>0,\ m_1+m_2\geq 3;\qquad A_{1,1,0}\ \text{not necessarily zero}$$
Generalized Fokker - Planck Equation
SOLO Stochastic Processes
Lemma 2
Let
$A_{m_1,m_2,m_3}$ be defined as above, with each of the moments $A_{m_1,0,0}$, $A_{0,m_2,0}$, $A_{0,0,m_3}$ finite and vanishing for some even $m_i$.

Proof (continue – 2)

For three nonzero indices, the Schwarz Inequality gives

$$A_{m_1,m_2,m_3}^4 = \left[\lim_{\Delta t\to 0}\frac{E\left\{\left(x_1(t)-x_1(t-\Delta t)\right)^{m_1}\left(x_2(t)-x_2(t-\Delta t)\right)^{m_2}\left(x_3(t)-x_3(t-\Delta t)\right)^{m_3}\,\big|\,x(t),X,T\right\}}{\Delta t}\right]^4 \leq A_{2m_1,0,0}^2\,A_{0,4m_2,0}\,A_{0,0,4m_3}$$

Since $A_{0,4m_2,0} = A_{0,0,4m_3} = 0$ for all $m_2,m_3 > 0$, it follows that $A_{m_1,m_2,m_3} = 0$ for all $m_i > 0$.
Generalized Fokker - Planck Equation
SOLO Stochastic Processes
Lemma 2
Let
$A_{m_1,\dots,m_n}$ be defined as above, with each of the moments $A_{m_1,0,\dots,0},\dots,A_{0,\dots,0,m_n}$ finite and vanishing for some even $m_i$.

Proof (continue – 3)

We proved that only $A_{1,0,0}$, $A_{0,1,0}$, $A_{0,0,1}$, $A_{1,1,0}$, $A_{1,0,1}$, $A_{0,1,1}$ (together with the second-order single-component moments) are not necessarily zero, and

$$A_{m_1,m_2,m_3} = 0\qquad\forall\,m_i\ \text{s.t.}\ \sum_{i=1}^{3} m_i\geq 3$$

In the same way, assuming that the result is true for (n-1), it is straightforward to show that it is true for n, and

$$A_{m_1,\dots,m_n} = 0\quad\forall\,m_i\ \text{s.t.}\ \sum_{i=1}^{n} m_i\geq 3,\qquad A_{m_1,\dots,m_n}\ \text{not necessarily zero for}\ 0<\sum_{i=1}^{n} m_i\leq 2$$

q.e.d.
Generalized Fokker - Planck Equation
SOLO Stochastic Processes
Theorem 2
Let $A_{m_1,\dots,m_n}$ be defined as above for some set (X,T), and let each of the moments $A_{m_1,0,\dots,0},\,A_{0,m_2,0,\dots,0},\,\dots,\,A_{0,\dots,0,m_n}$ vanish for some even $m_i$. Then the transition density $p = p_x\left(x,t\,|\,X,T\right)$ satisfies the Generalized Fokker-Planck Equation

$$\frac{\partial p}{\partial t} = -\sum_{i=1}^{n}\frac{\partial}{\partial x_i}\left[B_i\,p\right] + \frac{1}{2}\sum_{i=1}^{n}\sum_{j=1}^{n}\frac{\partial^2}{\partial x_i\,\partial x_j}\left[C_{ij}\,p\right]$$

where

$$B_i\left(x,t\right) = \lim_{\Delta t\to 0}\frac{1}{\Delta t}\,E\left\{x_i(t+\Delta t)-x_i(t)\,\big|\,x(t)=x,X,T\right\} = A_{0,\dots,0,1_i,0,\dots,0}$$
$$C_{ij}\left(x,t\right) = \lim_{\Delta t\to 0}\frac{1}{\Delta t}\,E\left\{\left(x_i(t+\Delta t)-x_i(t)\right)\left(x_j(t+\Delta t)-x_j(t)\right)\,\big|\,x(t)=x,X,T\right\} = A_{0,\dots,1_i,\dots,1_j,\dots,0}$$

Proof

Since each of the moments $A_{m_1,0,\dots,0},\dots,A_{0,\dots,0,m_n}$ vanishes for some even $m_i$, from Lemma 2 the only not-necessarily-zero moments are those of first and second order. The Generalized Fokker – Planck Equation

$$\frac{\partial\,p\left(x,t\,|\,X,T\right)}{\partial t} = \sum_{m_1=0}^{\infty}\cdots\sum_{m_n=0}^{\infty}\frac{\left(-1\right)^{m}}{m_1!\cdots m_n!}\,\frac{\partial^{\,m}}{\partial x_1^{m_1}\cdots\partial x_n^{m_n}}\left[A_{m_1,\dots,m_n}\,p\right],\qquad \sum_{i} m_i\neq 0$$

then becomes

$$\frac{\partial p}{\partial t} = -\sum_{i=1}^{n}\frac{\partial}{\partial x_i}\left[A_{0,\dots,1_i,\dots,0}\,p\right] + \frac{1}{2}\sum_{i=1}^{n}\sum_{j=1}^{n}\frac{\partial^2}{\partial x_i\,\partial x_j}\left[A_{0,\dots,1_i,\dots,1_j,\dots,0}\,p\right]$$

q.e.d.
Generalized Fokker - Planck Equation
SOLO Stochastic Processes
History
The Fokker-Planck Equation was derived by Uhlenbeck and Ornstein for Wiener noise in the paper: "On the Theory of Brownian Motion", Phys. Rev. 36, pp. 823–841 (September 1, 1930) (available on the Internet).

George Eugène Uhlenbeck (1900-1988)
Leonard Salomon Ornstein (1880-1941)
Ming Chen Wang (王明贞) (1906-2010)

An updated version was published by M.C. Wang and Uhlenbeck: "On the Theory of Brownian Motion II", Rev. Modern Physics, 17, Nos. 2 and 3, pp. 323–342 (April-July 1945) (available on the Internet). They assumed that all Moments above the second must vanish.

The sufficiency of a finite set of Moments to obtain a Fokker-Planck Equation was shown by R.F. Pawula, "Generalization and Extensions of Fokker-Planck-Kolmogorov Equations", IEEE Transactions on Information Theory, IT-13, No. 1, pp. 33-41 (January 1967).
Table of Content
Karhunen-Loève Theorem
SOLO
Stochastic Processes
Michel Loève
(1907, Jaffa – 1979, Berkeley)
In the theory of stochastic processes, the Karhunen-Loève theorem
(named after Kari Karhunen and Michel Loève) is a representation
of a stochastic process as an infinite linear combination of
orthogonal functions, analogous to a Fourier series representation
of a function on a bounded interval. In contrast to a Fourier series
where the coefficients are real numbers and the expansion basis consists of sinusoidal functions
(that is, sine and cosine functions), the coefficients in the Karhunen-Loève theorem are random
variables and the expansion basis depends on the process. In fact, the orthogonal basis functions
used in this representation are determined by the covariance function of the process. If we
regard a stochastic process as a random function F, that is, one in which the random value is a
function on an interval [a, b], then this theorem can be considered as a random orthonormal
expansion of F.
Given a Stochastic Process x (t) defined on an interval [a,b], Karhunen-Loeve Theorem states that
$$x\left(t\right) \approx \hat x\left(t\right) = \sum_{n=1}^{\infty} b_n\,\varphi_n\left(t\right),\qquad a\leq t\leq b$$

where the $\varphi_n\left(t\right)$ are orthonormal functions:

$$\int_a^b \varphi_n\left(t\right)\varphi_m^*\left(t\right)\,d t = \begin{cases}1 & m=n\\ 0 & m\neq n\end{cases}$$

defined by:

$$\int_a^b R\left(t_1,t_2\right)\varphi_m\left(t_2\right)\,d t_2 = \lambda_m\,\varphi_m\left(t_1\right),\qquad m=1,2,\dots,\qquad R\left(t_1,t_2\right) := E\left\{x\left(t_1\right)x^*\left(t_2\right)\right\}$$

and

$$b_n = \int_a^b x\left(t\right)\varphi_n^*\left(t\right)\,d t,\qquad n=1,2,\dots$$

are random variables. If $E\left\{x\left(t\right)\right\} = 0$, then $E\left\{b_n\right\} = 0$ and

$$E\left\{b_n\,b_m^*\right\} = \begin{cases}\lambda_n & n=m\\ 0 & n\neq m\end{cases}$$
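A minimal numerical sketch (not part of the original slides) of how the expansion can be computed on a grid: the covariance function is sampled into a matrix, its eigenvectors play the role of the orthonormal functions φn(t), and the eigenvalues are the λn. The exponential covariance exp(-|t1-t2|) is just an assumed example kernel.

```python
import numpy as np

# Discrete Karhunen-Loeve sketch: sample the covariance R(t1,t2) on a grid,
# diagonalize it, and expand one realization of the process in the eigenbasis.
a, b, N = 0.0, 1.0, 200
t = np.linspace(a, b, N)
dt = t[1] - t[0]

R = np.exp(-np.abs(t[:, None] - t[None, :]))          # assumed covariance kernel
lam, phi = np.linalg.eigh(R * dt)                      # eigenvalues ~ lambda_n, columns ~ phi_n(t)
order = np.argsort(lam)[::-1]
lam, phi = lam[order], phi[:, order] / np.sqrt(dt)     # normalize so that sum phi_n^2 dt = 1

x = np.random.multivariate_normal(np.zeros(N), R)      # one zero-mean realization with covariance R

K = 20                                                 # truncation order
bn = (phi[:, :K].T @ x) * dt                           # b_n = integral of x(t) phi_n(t) dt
x_hat = phi[:, :K] @ bn                                # truncated KL reconstruction

print("mean-square truncation error (one realization):", np.mean((x - x_hat) ** 2))
print("sum of discarded eigenvalues (its expectation) :", lam[K:].sum())
```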
Karhunen-Loève Theorem (continue – 1)
SOLO
Stochastic Processes
Proof:
If

$$\hat x\left(t\right) = \sum_{n=1}^{\infty} b_n\,\varphi_n\left(t\right),\qquad b_n = \int_a^b x\left(t\right)\varphi_n^*\left(t\right)\,d t,$$

with the $\varphi_n$ orthonormal and defined by the integral equation above, then

1.
$$E\left\{x\left(t\right)b_m^*\right\} = E\left\{x\left(t\right)\int_a^b x^*\left(t_2\right)\varphi_m\left(t_2\right)\,d t_2\right\} = \int_a^b E\left\{x\left(t\right)x^*\left(t_2\right)\right\}\varphi_m\left(t_2\right)\,d t_2 = \lambda_m\,\varphi_m\left(t\right),\qquad a\leq t\leq b$$

2.
$$E\left\{b_n\,b_m^*\right\} = E\left\{\int_a^b x\left(t_1\right)\varphi_n^*\left(t_1\right)\,d t_1\; b_m^*\right\} = \int_a^b E\left\{x\left(t_1\right)b_m^*\right\}\varphi_n^*\left(t_1\right)\,d t_1 = \lambda_m\int_a^b \varphi_m\left(t_1\right)\varphi_n^*\left(t_1\right)\,d t_1 = \begin{cases}\lambda_n & n=m\\ 0 & n\neq m\end{cases}$$

If $E\left\{x\left(t\right)\right\} = 0$:

$$E\left\{b_n\right\} = E\left\{\int_a^b x\left(t\right)\varphi_n^*\left(t\right)\,d t\right\} = \int_a^b E\left\{x\left(t\right)\right\}\varphi_n^*\left(t\right)\,d t = 0,\qquad n=1,2,\dots$$
Karhunen-Loève Theorem (continue – 2)
SOLO
Stochastic Processes
Proof:
3. Using the expansion,

$$E\left\{\hat x\left(t\right)b_m^*\right\} = E\left\{\sum_{n=1}^{\infty} b_n\,\varphi_n\left(t\right)\,b_m^*\right\} = \sum_{n=1}^{\infty} E\left\{b_n\,b_m^*\right\}\varphi_n\left(t\right) = \lambda_m\,\varphi_m\left(t\right),\qquad a\leq t\leq b$$

On the other hand, computing directly,

$$E\left\{x\left(t\right)b_m^*\right\} = \int_a^b E\left\{x\left(t\right)x^*\left(t_2\right)\right\}\varphi_m\left(t_2\right)\,d t_2 = \int_a^b R\left(t,t_2\right)\varphi_m\left(t_2\right)\,d t_2$$

but, by the defining integral equation,

$$\int_a^b R\left(t,t_2\right)\varphi_m\left(t_2\right)\,d t_2 = \lambda_m\,\varphi_m\left(t\right)$$

therefore $E\left\{x\left(t\right)b_m^*\right\} = E\left\{\hat x\left(t\right)b_m^*\right\} = \lambda_m\,\varphi_m\left(t\right)$, with $\lambda_m = E\left\{b_m\,b_m^*\right\}$ real and positive.
Karhunen-Loève Theorem (continue – 3)
SOLO
Stochastic Processes
4. Convergence of the Karhunen – Loève Theorem

If $\hat x\left(t\right) = \sum_{n=1}^{\infty} b_n\,\varphi_n\left(t\right)$, $a\leq t\leq b$, then

$$E\left\{\left|x\left(t\right)-\hat x\left(t\right)\right|^2\right\} = R\left(t,t\right) - \sum_{n=1}^{\infty}\lambda_n\left|\varphi_n\left(t\right)\right|^2,\qquad a\leq t\leq b$$

therefore

$$E\left\{\left|x\left(t\right)-\hat x\left(t\right)\right|^2\right\} = 0 \;\Longleftrightarrow\; R\left(t,t\right) = \sum_{n=1}^{\infty}\lambda_n\left|\varphi_n\left(t\right)\right|^2,\qquad a\leq t\leq b$$

Proof:

$$E\left\{x\left(t\right)\hat x^*\left(t\right)\right\} = E\left\{x\left(t\right)\sum_{n=1}^{\infty} b_n^*\,\varphi_n^*\left(t\right)\right\} = \sum_{n=1}^{\infty} E\left\{x\left(t\right)b_n^*\right\}\varphi_n^*\left(t\right) = \sum_{n=1}^{\infty}\lambda_n\left|\varphi_n\left(t\right)\right|^2$$

$$E\left\{\hat x\left(t\right)x^*\left(t\right)\right\} = \sum_{n=1}^{\infty} E\left\{b_n\,x^*\left(t\right)\right\}\varphi_n\left(t\right) = \sum_{n=1}^{\infty}\lambda_n\left|\varphi_n\left(t\right)\right|^2$$

$$E\left\{\hat x\left(t\right)\hat x^*\left(t\right)\right\} = \sum_{n=1}^{\infty}\sum_{m=1}^{\infty} E\left\{b_n\,b_m^*\right\}\varphi_n\left(t\right)\varphi_m^*\left(t\right) = \sum_{n=1}^{\infty}\lambda_n\left|\varphi_n\left(t\right)\right|^2$$

$$E\left\{\left|x\left(t\right)-\hat x\left(t\right)\right|^2\right\} = E\left\{\left[x\left(t\right)-\hat x\left(t\right)\right]\left[x\left(t\right)-\hat x\left(t\right)\right]^*\right\} = E\left\{\left|x\left(t\right)\right|^2\right\} - E\left\{x\left(t\right)\hat x^*\left(t\right)\right\} - E\left\{\hat x\left(t\right)x^*\left(t\right)\right\} + E\left\{\left|\hat x\left(t\right)\right|^2\right\} = R\left(t,t\right) - \sum_{n=1}^{\infty}\lambda_n\left|\varphi_n\left(t\right)\right|^2$$
Table of Content
References:
SOLO
http://en.wikipedia.org/wiki/Category:Stochastic_processes
http://en.wikipedia.org/wiki/Category:Stochastic_differential_equations
Papoulis, A., “Probability, Random Variables, and Stochastic Processes”,
McGraw Hill, 1965, Ch. 14 and 15
Sage, A.P. and Melsa, J.L., “Estimation Theory with Applications to
Communications and Control”, McGraw Hill, 1971
McGarty, T., “Stochastic Systems and State Estimation”, John Wiley & Sons,
1974
Maybeck, P.S., “Stochastic Systems Estimation and Control”, Academic Press,
Mathematics in Science and Engineering, Volume 141-2, 1982, Ch. 11 and 12
Stochastic Processes
Table of Content
Jazwinski, A.H., “Stochastic Processes and Filtering Theory”, Academic
Press, 1970
January 12, 2015 80
SOLO
Technion
Israel Institute of Technology
1964 – 1968 BSc EE
1968 – 1971 MSc EE
Israeli Air Force
1970 – 1974
RAFAEL
Israeli Armament Development Authority
1974 – 2013
Stanford University
1983 – 1986 PhD AA
Functional Analysis
$$\int_a^b f\left(t\right)\,d t = \lim_{n\to\infty}\sum_{i=0}^{n-1} f\left(t_i\right)\left(x_{i+1}-x_i\right),\qquad a = x_0 < t_0 < x_1 < t_1 < x_2 < \cdots < x_{n-1} < t_{n-1} < x_n = b$$
SOLO
Riemann Integral
http://en.wikipedia.org/wiki/Riemann_integral
[Figure: partition of [a,b] into subintervals with x_i ≤ t_i ≤ x_{i+1} and δ = x_{i+1} - x_i < ε, illustrating the Riemann sum for ∫_a^b f(t) dt.]

In the Riemann Integral we divide the interval [a,b] into n non-overlapping intervals that decrease as n increases. The value f(t_i) is computed inside each interval:

$$a = x_0 < t_0 < x_1 < t_1 < x_2 < \cdots < x_{n-1} < t_{n-1} < x_n = b$$

The Riemann Integral is not always defined, for example:

$$f\left(x\right) = \begin{cases}2 & x\ \text{rational}\\ 3 & x\ \text{irrational}\end{cases}$$

The Riemann Integral of this function is not defined.
Georg Friedrich Bernhard
Riemann
1826 - 1866
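A small sketch (assuming a smooth integrand, here sin) of the Riemann sum defined above, with f evaluated at a point inside each subinterval:

```python
import numpy as np

def riemann_sum(f, a, b, n):
    """Riemann sum with f evaluated at the midpoint t_i of each subinterval."""
    x = np.linspace(a, b, n + 1)          # partition a = x_0 < ... < x_n = b
    t = 0.5 * (x[:-1] + x[1:])            # a point t_i inside each [x_i, x_{i+1}]
    return np.sum(f(t) * np.diff(x))

for n in (10, 100, 1000):
    print(n, riemann_sum(np.sin, 0.0, np.pi, n))   # converges to 2
```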
Integration
SOLO Stochastic Processes
Thomas Joannes
Stieltjes
1856 - 1894
Riemann–Stieltjes integral
Bernhard Riemann
1826 - 1866
The Stieltjes integral is a generalization of the Riemann integral. Let f(x) and α(x) be real-valued functions defined on the closed interval [a,b]. Take a partition of the interval

$$a = x_0 < x_1 < \cdots < x_n < b$$

and consider the Riemann sum

$$\sum_{i=1}^{n} f\left(\xi_i\right)\left[\alpha\left(x_i\right)-\alpha\left(x_{i-1}\right)\right],\qquad \xi_i\in\left[x_{i-1},x_i\right]$$

If the sum tends to a fixed number I when max(x_i - x_{i-1}) → 0, then I is called a Stieltjes integral, or a Riemann-Stieltjes integral. The Stieltjes integral of f with respect to α is denoted

$$\int f\left(x\right)\,d\alpha\left(x\right)\qquad\text{or sometimes simply}\qquad\int f\,d\alpha$$

If f and α have a common point of discontinuity, then the integral doesn't exist. However, if f is continuous and α' is Riemann integrable over the specified interval, then

$$\int f\left(x\right)\,d\alpha\left(x\right) = \int f\left(x\right)\alpha'\left(x\right)\,d x,\qquad \alpha'\left(x\right) := \frac{d\,\alpha\left(x\right)}{d x}$$
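A small sketch (assuming smooth f and α) comparing the Riemann–Stieltjes sum with the equivalent weighted Riemann integral ∫ f α' dx:

```python
import numpy as np

def stieltjes_sum(f, alpha, a, b, n):
    """Riemann-Stieltjes sum: sum of f(xi_i) * [alpha(x_i) - alpha(x_{i-1})]."""
    x = np.linspace(a, b, n + 1)
    xi = 0.5 * (x[:-1] + x[1:])            # evaluation points inside each subinterval
    return np.sum(f(xi) * np.diff(alpha(x)))

f = np.sin
alpha = lambda x: x**2                     # smooth integrator, alpha'(x) = 2x
approx = stieltjes_sum(f, alpha, 0.0, np.pi, 2000)

# For continuous f and Riemann-integrable alpha', the integral equals int f(x) alpha'(x) dx.
x = np.linspace(0.0, np.pi, 200001)
check = np.trapz(f(x) * 2 * x, x)
print(approx, check)
```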
Functional Analysis
[Figure: a function y = f(x) with level sets E(y_k); μ[E(y_k)] is the total length on the x axis where f(x) > y_k; at the jump levels M_1 and M_2 the corresponding measures μ[E(M_1)], μ[E(M_2)] are zero.]
SOLO
Lebesgue Integral
Measure
The main idea of the Lebesgue integral is the notion of Measure.

Definition 1: E(M) ⊆ [a,b] is the region in x ∈ [a,b] of the function f(x) for which f(x) > M.

Definition 2: μ[E(M)], the measure of E(M), is

$$\mu\left[E\left(M\right)\right] = \int_{E\left(M\right)} d x \geq 0$$

We can see that μ[E(M)] is the sum of lengths on the x axis for which f(x) > M. From the Figure above we can see that for the jumps M1 and M2, μ[E(M1)] = μ[E(M2)] = 0.

Example: Let us find the measure of the rational numbers (ratios of integers), which are countable:

$$r_1 = \tfrac{1}{2},\ r_2 = \tfrac{1}{3},\ r_3 = \tfrac{2}{3},\ r_4 = \tfrac{1}{4},\ r_5 = \tfrac{3}{4},\ \dots,\ r_k = \tfrac{m}{n},\ \dots$$

Since the rational numbers are countable we can choose ε > 0 as small as we want and construct an open interval of length ε/2 centered around r1, an interval of ε/2² centered around r2, ..., an interval of ε/2^k centered around r_k:

$$\mu\left[E\left(\text{rationals}\right)\right] \leq \frac{\varepsilon}{2} + \frac{\varepsilon}{2^2} + \cdots + \frac{\varepsilon}{2^k} + \cdots = \varepsilon \quad\xrightarrow[\ \varepsilon\to 0\ ]{}\quad \mu\left[E\left(\text{rationals}\right)\right] = 0$$
Functional Analysis
SOLO
Lebesgue Integral
Henri Léon Lebesgue
1875 - 1941
A function y = f (x) is said to be measurable if the set of points x at which f (x) < c is measurable for any and all choices of the constant c.
The Lebesgue Integral for a measurable function f (x) is defined as:

$$\int_a^b f\left(t\right)\,d t = \lim_{n\to\infty}\sum_{i=0}^{n-1} y_i\,\mu\left[E\left(y_i\right)\right],\qquad \inf_{a\leq x\leq b} f\left(x\right) \leq y_0 < y_1 < \cdots < y_{n-1} \leq \sup_{a\leq x\leq b} f\left(x\right)$$

[Figure: the range of y = f(x) is partitioned into levels y_0 < y_1 < ... < y_n; each level y_k is weighted by the measure μ[E(y_k)].]

Example

$$f\left(x\right) = \begin{cases}2 & x\ \text{rational}\\ 3 & x\ \text{irrational}\end{cases},\qquad 0\leq x\leq 1$$

$$\int_0^1 f\left(x\right)\,d x = \int_{E\left(\text{rationals}\right)} f\left(x\right)\,d x + \int_{E\left(\text{irrationals}\right)} f\left(x\right)\,d x = 2\cdot 0 + 3\cdot 1 = 3$$

[Figure: f(x) equals 2 on the rationals and 3 on the irrationals of [0,1].]
For a continuous function the Riemann and Lebesgue integrals give the same results.
Integration
SOLO Stochastic Processes
Lebesgue-Stieltjes integration
Thomas Joannes
Stieltjes
1856 - 1894
Henri Léon
Lebesgue
1875-1941
In measure-theoretic analysis and related branches of mathematics,
Lebesgue-Stieltjes integration generalizes Riemann-Stieltjes and Lebesgue
integration, preserving the many advantages of the latter in a more general
measure-theoretic framework.
Let α (x) a monotonic increasing function of x, and define an interval I =(x1,x2).
Define the nonnegative function
( ) ( ) ( )12 xxIU αα −=
The Lebesgue integral with respect to a measure constructed using U (I)
is called Lebesgue-Stieltjes integral, or sometimes Lebesgue-Radon integral.
Johann Karl August
Radon
1887–1956
Integration
SOLO Stochastic Processes
Darboux Integral Lower (green) and upper (green plus
lavender) Darboux sums for four
subintervals
Jean-Gaston
Darboux
1842-1917
In real analysis, a branch of mathematics, the Darboux integral or Darboux sum
is one possible definition of the integral of a function. Darboux integrals are
equivalent to Riemann integrals, meaning that a function is Darboux-integrable if and only if it is
Riemann-integrable, and the values of the two integrals, if they exist, are equal. Darboux integrals
have the advantage of being simpler to define than Riemann integrals. Darboux integrals are
named after their discoverer, Gaston Darboux.
A partition of an interval [a,b] is a finite sequence of values x_i such that

$$a = x_0 < x_1 < \cdots < x_n = b$$

Definition

Each interval [x_{i-1}, x_i] is called a subinterval of the partition. Let ƒ:[a,b]→R be a bounded function, and let P = (x_0, x_1, ..., x_n) be a partition of [a,b]. Let

$$M_i := \sup_{x\in\left[x_{i-1},x_i\right]} f\left(x\right),\qquad m_i := \inf_{x\in\left[x_{i-1},x_i\right]} f\left(x\right)$$

The upper Darboux sum of ƒ with respect to P is

$$U_{f,P} := \sum_{i=1}^{n}\left(x_i - x_{i-1}\right) M_i$$

The lower Darboux sum of ƒ with respect to P is

$$L_{f,P} := \sum_{i=1}^{n}\left(x_i - x_{i-1}\right) m_i$$
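A minimal sketch of the upper and lower Darboux sums on a uniform partition; refining the partition squeezes them toward their common value:

```python
import numpy as np

def darboux_sums(f, a, b, n, samples_per_cell=50):
    """Upper and lower Darboux sums of f over [a,b] with n equal subintervals.
    The sup/inf on each subinterval are approximated by dense sampling."""
    x = np.linspace(a, b, n + 1)
    upper = lower = 0.0
    for x0, x1 in zip(x[:-1], x[1:]):
        ys = f(np.linspace(x0, x1, samples_per_cell))
        upper += (x1 - x0) * ys.max()      # (x_i - x_{i-1}) * M_i
        lower += (x1 - x0) * ys.min()      # (x_i - x_{i-1}) * m_i
    return upper, lower

for n in (4, 16, 64, 256):
    U, L = darboux_sums(np.sin, 0.0, np.pi, n)
    print(n, L, U)                          # both approach the integral, 2
```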
Integration
SOLO Stochastic Processes
Darboux Integral
(continue – 1)
Lower (green) and upper (green plus
lavender) Darboux sums for four
subintervals
Jean-Gaston
Darboux
1842-1917
The upper Darboux sum of ƒ with respect to P is

$$U_{f,P} := \sum_{i=1}^{n}\left(x_i - x_{i-1}\right) M_i$$

The lower Darboux sum of ƒ with respect to P is

$$L_{f,P} := \sum_{i=1}^{n}\left(x_i - x_{i-1}\right) m_i$$

The upper Darboux integral of ƒ is

$$U_f = \inf\left\{U_{f,P} : P\ \text{is a partition of}\ \left[a,b\right]\right\}$$

The lower Darboux integral of ƒ is

$$L_f = \sup\left\{L_{f,P} : P\ \text{is a partition of}\ \left[a,b\right]\right\}$$

If U_f = L_f, then we say that ƒ is Darboux-integrable and set

$$\int_a^b f\left(t\right)\,d t = U_f = L_f$$

the common value of the upper and lower Darboux integrals.
Integration
SOLO Stochastic Processes
Lebesgue Integration
Henri Léon
Lebesgue
1875 - 1941
Illustration of a Riemann integral (blue)
and a Lebesgue integral (red)
Riemann Integral A sequence of Riemann sums. The numbers
in the upper right are the areas of the grey
rectangles. They converge to the integral of
the function.
Darboux Integral Lower (green) and upper (green plus
lavender) Darboux sums for four
subintervals
Jean-Gaston
Darboux
1842-1917
Bernhard Riemann
1826 - 1866
SOLO Stochastic Processes
Richard Snowden Bucy
Andrew James Viterbi (1935 - )
Harold J. Kushner (1932 - )
Moshe Zakai (1926 - )
Jose Enrique Moyal (1910 – 1998)
Rudolf E. Kalman (1930 - )
Maurice Stevenson Bartlett (1910 - 2002)
George Eugène Uhlenbeck (1900 - 1988)
Leonard Salomon Ornstein (1880 - 1941)
Bernard Osgood Koopman (1900 – 1981)
Edwin James George Pitman (1897 – 1993)
Georges Darmois (1888 - 1960)
4 stochastic processes
4 stochastic processes
4 stochastic processes
4 stochastic processes
4 stochastic processes

More Related Content

What's hot

2 random variables notes 2p3
2 random variables notes 2p32 random variables notes 2p3
2 random variables notes 2p3MuhannadSaleh
 
Chapter2 - Linear Time-Invariant System
Chapter2 - Linear Time-Invariant SystemChapter2 - Linear Time-Invariant System
Chapter2 - Linear Time-Invariant SystemAttaporn Ninsuwan
 
Introduction to Random Walk
Introduction to Random WalkIntroduction to Random Walk
Introduction to Random WalkShuai Zhang
 
Partial differential equations
Partial differential equationsPartial differential equations
Partial differential equationsmuhammadabullah
 
Probability Density Function (PDF)
Probability Density Function (PDF)Probability Density Function (PDF)
Probability Density Function (PDF)AakankshaR
 
Presentation on laplace transforms
Presentation on laplace transformsPresentation on laplace transforms
Presentation on laplace transformsHimel Himo
 
Gram-Schmidt process linear algbera
Gram-Schmidt process linear algberaGram-Schmidt process linear algbera
Gram-Schmidt process linear algberaPulakKundu1
 
Recurrence relations
Recurrence relationsRecurrence relations
Recurrence relationsIIUM
 
Lesson 4 ar-ma
Lesson 4 ar-maLesson 4 ar-ma
Lesson 4 ar-maankit_ppt
 
Fuzzy Set Theory
Fuzzy Set TheoryFuzzy Set Theory
Fuzzy Set TheoryAMIT KUMAR
 
Probability mass functions and probability density functions
Probability mass functions and probability density functionsProbability mass functions and probability density functions
Probability mass functions and probability density functionsAnkit Katiyar
 
Discrete probability distribution (complete)
Discrete probability distribution (complete)Discrete probability distribution (complete)
Discrete probability distribution (complete)ISYousafzai
 
DSP_2018_FOEHU - Lec 04 - The z-Transform
DSP_2018_FOEHU - Lec 04 - The z-TransformDSP_2018_FOEHU - Lec 04 - The z-Transform
DSP_2018_FOEHU - Lec 04 - The z-TransformAmr E. Mohamed
 
Probability Concepts Applications
Probability Concepts  ApplicationsProbability Concepts  Applications
Probability Concepts Applicationsguest44b78
 
Gamma, Expoential, Poisson And Chi Squared Distributions
Gamma, Expoential, Poisson And Chi Squared DistributionsGamma, Expoential, Poisson And Chi Squared Distributions
Gamma, Expoential, Poisson And Chi Squared DistributionsDataminingTools Inc
 

What's hot (20)

2 random variables notes 2p3
2 random variables notes 2p32 random variables notes 2p3
2 random variables notes 2p3
 
Curve Fitting
Curve FittingCurve Fitting
Curve Fitting
 
Chapter2 - Linear Time-Invariant System
Chapter2 - Linear Time-Invariant SystemChapter2 - Linear Time-Invariant System
Chapter2 - Linear Time-Invariant System
 
Introduction to Random Walk
Introduction to Random WalkIntroduction to Random Walk
Introduction to Random Walk
 
Partial differential equations
Partial differential equationsPartial differential equations
Partial differential equations
 
Random Variables
Random VariablesRandom Variables
Random Variables
 
Probability Density Function (PDF)
Probability Density Function (PDF)Probability Density Function (PDF)
Probability Density Function (PDF)
 
Presentation on laplace transforms
Presentation on laplace transformsPresentation on laplace transforms
Presentation on laplace transforms
 
Gram-Schmidt process linear algbera
Gram-Schmidt process linear algberaGram-Schmidt process linear algbera
Gram-Schmidt process linear algbera
 
Recurrence relations
Recurrence relationsRecurrence relations
Recurrence relations
 
Lesson 4 ar-ma
Lesson 4 ar-maLesson 4 ar-ma
Lesson 4 ar-ma
 
Classical Sets & fuzzy sets
Classical Sets & fuzzy setsClassical Sets & fuzzy sets
Classical Sets & fuzzy sets
 
Fuzzy Set Theory
Fuzzy Set TheoryFuzzy Set Theory
Fuzzy Set Theory
 
Probability mass functions and probability density functions
Probability mass functions and probability density functionsProbability mass functions and probability density functions
Probability mass functions and probability density functions
 
lattice
 lattice lattice
lattice
 
Discrete probability distribution (complete)
Discrete probability distribution (complete)Discrete probability distribution (complete)
Discrete probability distribution (complete)
 
DSP_2018_FOEHU - Lec 04 - The z-Transform
DSP_2018_FOEHU - Lec 04 - The z-TransformDSP_2018_FOEHU - Lec 04 - The z-Transform
DSP_2018_FOEHU - Lec 04 - The z-Transform
 
Fourier transforms
Fourier transforms Fourier transforms
Fourier transforms
 
Probability Concepts Applications
Probability Concepts  ApplicationsProbability Concepts  Applications
Probability Concepts Applications
 
Gamma, Expoential, Poisson And Chi Squared Distributions
Gamma, Expoential, Poisson And Chi Squared DistributionsGamma, Expoential, Poisson And Chi Squared Distributions
Gamma, Expoential, Poisson And Chi Squared Distributions
 

Viewers also liked

Stochastic modelling and its applications
Stochastic modelling and its applicationsStochastic modelling and its applications
Stochastic modelling and its applicationsKartavya Jain
 
More Stochastic Simulation Examples
More Stochastic Simulation ExamplesMore Stochastic Simulation Examples
More Stochastic Simulation ExamplesStephen Gilmore
 
Elements Of Stochastic Processes
Elements Of Stochastic ProcessesElements Of Stochastic Processes
Elements Of Stochastic ProcessesMALAKI12003
 
basics of stochastic and queueing theory
basics of stochastic and queueing theorybasics of stochastic and queueing theory
basics of stochastic and queueing theoryjyoti daddarwal
 
Stochastic Process
Stochastic ProcessStochastic Process
Stochastic Processknksmart
 
Deterministic vs stochastic
Deterministic vs stochasticDeterministic vs stochastic
Deterministic vs stochasticsohail40
 
Foundations and methods of stochastic simulation
Foundations and methods of stochastic simulationFoundations and methods of stochastic simulation
Foundations and methods of stochastic simulationSpringer
 
The Stochastic Simulation Algorithm
The Stochastic Simulation AlgorithmThe Stochastic Simulation Algorithm
The Stochastic Simulation AlgorithmStephen Gilmore
 
Discrete And Continuous Simulation
Discrete And Continuous SimulationDiscrete And Continuous Simulation
Discrete And Continuous SimulationNguyen Chien
 
Queuing theory network
Queuing theory networkQueuing theory network
Queuing theory networkAmit Dahal
 
Dresden 2014 A tour of some fractional models and the physics behind them
Dresden 2014 A tour of some fractional models and the physics behind themDresden 2014 A tour of some fractional models and the physics behind them
Dresden 2014 A tour of some fractional models and the physics behind themNick Watkins
 
LLNL Poster Symposium 2015
LLNL Poster Symposium 2015LLNL Poster Symposium 2015
LLNL Poster Symposium 2015Andrew Dublin
 
Fokker–Planck equation and DPD simulations
Fokker–Planck equation and DPD simulationsFokker–Planck equation and DPD simulations
Fokker–Planck equation and DPD simulationsKotaro Tanahashi
 
Feedback on Part 1 of the CSLP
Feedback on Part 1 of the CSLPFeedback on Part 1 of the CSLP
Feedback on Part 1 of the CSLPStephen Gilmore
 
Control Chart For Variables
Control Chart For VariablesControl Chart For Variables
Control Chart For VariablesHarshit Bansal
 
Radial Basis Function Interpolation
Radial Basis Function InterpolationRadial Basis Function Interpolation
Radial Basis Function InterpolationJesse Bettencourt
 
Introduction to Radial Basis Function Networks
Introduction to Radial Basis Function NetworksIntroduction to Radial Basis Function Networks
Introduction to Radial Basis Function NetworksESCOM
 

Viewers also liked (20)

Stochastic modelling and its applications
Stochastic modelling and its applicationsStochastic modelling and its applications
Stochastic modelling and its applications
 
More Stochastic Simulation Examples
More Stochastic Simulation ExamplesMore Stochastic Simulation Examples
More Stochastic Simulation Examples
 
Elements Of Stochastic Processes
Elements Of Stochastic ProcessesElements Of Stochastic Processes
Elements Of Stochastic Processes
 
basics of stochastic and queueing theory
basics of stochastic and queueing theorybasics of stochastic and queueing theory
basics of stochastic and queueing theory
 
Stochastic Process
Stochastic ProcessStochastic Process
Stochastic Process
 
Deterministic vs stochastic
Deterministic vs stochasticDeterministic vs stochastic
Deterministic vs stochastic
 
Foundations and methods of stochastic simulation
Foundations and methods of stochastic simulationFoundations and methods of stochastic simulation
Foundations and methods of stochastic simulation
 
The Stochastic Simulation Algorithm
The Stochastic Simulation AlgorithmThe Stochastic Simulation Algorithm
The Stochastic Simulation Algorithm
 
Discrete And Continuous Simulation
Discrete And Continuous SimulationDiscrete And Continuous Simulation
Discrete And Continuous Simulation
 
Markov Chains
Markov ChainsMarkov Chains
Markov Chains
 
Queuing theory network
Queuing theory networkQueuing theory network
Queuing theory network
 
Prob review
Prob reviewProb review
Prob review
 
Dresden 2014 A tour of some fractional models and the physics behind them
Dresden 2014 A tour of some fractional models and the physics behind themDresden 2014 A tour of some fractional models and the physics behind them
Dresden 2014 A tour of some fractional models and the physics behind them
 
LLNL Poster Symposium 2015
LLNL Poster Symposium 2015LLNL Poster Symposium 2015
LLNL Poster Symposium 2015
 
Fokker–Planck equation and DPD simulations
Fokker–Planck equation and DPD simulationsFokker–Planck equation and DPD simulations
Fokker–Planck equation and DPD simulations
 
Feedback on Part 1 of the CSLP
Feedback on Part 1 of the CSLPFeedback on Part 1 of the CSLP
Feedback on Part 1 of the CSLP
 
Zoooooohaib
ZoooooohaibZoooooohaib
Zoooooohaib
 
Control Chart For Variables
Control Chart For VariablesControl Chart For Variables
Control Chart For Variables
 
Radial Basis Function Interpolation
Radial Basis Function InterpolationRadial Basis Function Interpolation
Radial Basis Function Interpolation
 
Introduction to Radial Basis Function Networks
Introduction to Radial Basis Function NetworksIntroduction to Radial Basis Function Networks
Introduction to Radial Basis Function Networks
 

Similar to 4 stochastic processes

Circuit Network Analysis - [Chapter4] Laplace Transform
Circuit Network Analysis - [Chapter4] Laplace TransformCircuit Network Analysis - [Chapter4] Laplace Transform
Circuit Network Analysis - [Chapter4] Laplace TransformSimen Li
 
APPROXIMATE CONTROLLABILITY RESULTS FOR IMPULSIVE LINEAR FUZZY STOCHASTIC DIF...
APPROXIMATE CONTROLLABILITY RESULTS FOR IMPULSIVE LINEAR FUZZY STOCHASTIC DIF...APPROXIMATE CONTROLLABILITY RESULTS FOR IMPULSIVE LINEAR FUZZY STOCHASTIC DIF...
APPROXIMATE CONTROLLABILITY RESULTS FOR IMPULSIVE LINEAR FUZZY STOCHASTIC DIF...Wireilla
 
APPROXIMATE CONTROLLABILITY RESULTS FOR IMPULSIVE LINEAR FUZZY STOCHASTIC DIF...
APPROXIMATE CONTROLLABILITY RESULTS FOR IMPULSIVE LINEAR FUZZY STOCHASTIC DIF...APPROXIMATE CONTROLLABILITY RESULTS FOR IMPULSIVE LINEAR FUZZY STOCHASTIC DIF...
APPROXIMATE CONTROLLABILITY RESULTS FOR IMPULSIVE LINEAR FUZZY STOCHASTIC DIF...ijfls
 
Calculus of variations
Calculus of variationsCalculus of variations
Calculus of variationsSolo Hermelin
 
Introduction - Time Series Analysis
Introduction - Time Series AnalysisIntroduction - Time Series Analysis
Introduction - Time Series Analysisjaya gobi
 
Univariate Financial Time Series Analysis
Univariate Financial Time Series AnalysisUnivariate Financial Time Series Analysis
Univariate Financial Time Series AnalysisAnissa ATMANI
 
Lecture 5: Stochastic Hydrology
Lecture 5: Stochastic Hydrology Lecture 5: Stochastic Hydrology
Lecture 5: Stochastic Hydrology Amro Elfeki
 
The lattice Boltzmann equation: background and boundary conditions
The lattice Boltzmann equation: background and boundary conditionsThe lattice Boltzmann equation: background and boundary conditions
The lattice Boltzmann equation: background and boundary conditionsTim Reis
 
Semi-Classical Transport Theory.ppt
Semi-Classical Transport Theory.pptSemi-Classical Transport Theory.ppt
Semi-Classical Transport Theory.pptVivekDixit100
 
Ph 101-9 QUANTUM MACHANICS
Ph 101-9 QUANTUM MACHANICSPh 101-9 QUANTUM MACHANICS
Ph 101-9 QUANTUM MACHANICSChandan Singh
 
2 classical field theories
2 classical field theories2 classical field theories
2 classical field theoriesSolo Hermelin
 
two degree of freddom system
two degree of freddom systemtwo degree of freddom system
two degree of freddom systemYash Patel
 
1.1 PRINCIPLE OF LEAST ACTION 640-213 MelatosChapter 1.docx
1.1 PRINCIPLE OF LEAST ACTION 640-213 MelatosChapter 1.docx1.1 PRINCIPLE OF LEAST ACTION 640-213 MelatosChapter 1.docx
1.1 PRINCIPLE OF LEAST ACTION 640-213 MelatosChapter 1.docxpaynetawnya
 
Non equilibrium thermodynamics in multiphase flows
Non equilibrium thermodynamics in multiphase flowsNon equilibrium thermodynamics in multiphase flows
Non equilibrium thermodynamics in multiphase flowsSpringer
 
Non equilibrium thermodynamics in multiphase flows
Non equilibrium thermodynamics in multiphase flowsNon equilibrium thermodynamics in multiphase flows
Non equilibrium thermodynamics in multiphase flowsSpringer
 
Linear response theory and TDDFT
Linear response theory and TDDFT Linear response theory and TDDFT
Linear response theory and TDDFT Claudio Attaccalite
 

Similar to 4 stochastic processes (20)

Circuit Network Analysis - [Chapter4] Laplace Transform
Circuit Network Analysis - [Chapter4] Laplace TransformCircuit Network Analysis - [Chapter4] Laplace Transform
Circuit Network Analysis - [Chapter4] Laplace Transform
 
APPROXIMATE CONTROLLABILITY RESULTS FOR IMPULSIVE LINEAR FUZZY STOCHASTIC DIF...
APPROXIMATE CONTROLLABILITY RESULTS FOR IMPULSIVE LINEAR FUZZY STOCHASTIC DIF...APPROXIMATE CONTROLLABILITY RESULTS FOR IMPULSIVE LINEAR FUZZY STOCHASTIC DIF...
APPROXIMATE CONTROLLABILITY RESULTS FOR IMPULSIVE LINEAR FUZZY STOCHASTIC DIF...
 
APPROXIMATE CONTROLLABILITY RESULTS FOR IMPULSIVE LINEAR FUZZY STOCHASTIC DIF...
APPROXIMATE CONTROLLABILITY RESULTS FOR IMPULSIVE LINEAR FUZZY STOCHASTIC DIF...APPROXIMATE CONTROLLABILITY RESULTS FOR IMPULSIVE LINEAR FUZZY STOCHASTIC DIF...
APPROXIMATE CONTROLLABILITY RESULTS FOR IMPULSIVE LINEAR FUZZY STOCHASTIC DIF...
 
Calculus of variations
Calculus of variationsCalculus of variations
Calculus of variations
 
Introduction - Time Series Analysis
Introduction - Time Series AnalysisIntroduction - Time Series Analysis
Introduction - Time Series Analysis
 
Univariate Financial Time Series Analysis
Univariate Financial Time Series AnalysisUnivariate Financial Time Series Analysis
Univariate Financial Time Series Analysis
 
Lecture 5: Stochastic Hydrology
Lecture 5: Stochastic Hydrology Lecture 5: Stochastic Hydrology
Lecture 5: Stochastic Hydrology
 
The lattice Boltzmann equation: background and boundary conditions
The lattice Boltzmann equation: background and boundary conditionsThe lattice Boltzmann equation: background and boundary conditions
The lattice Boltzmann equation: background and boundary conditions
 
Semi-Classical Transport Theory.ppt
Semi-Classical Transport Theory.pptSemi-Classical Transport Theory.ppt
Semi-Classical Transport Theory.ppt
 
Ph 101-9 QUANTUM MACHANICS
Ph 101-9 QUANTUM MACHANICSPh 101-9 QUANTUM MACHANICS
Ph 101-9 QUANTUM MACHANICS
 
2 classical field theories
2 classical field theories2 classical field theories
2 classical field theories
 
two degree of freddom system
two degree of freddom systemtwo degree of freddom system
two degree of freddom system
 
Instantons in 1D QM
Instantons in 1D QMInstantons in 1D QM
Instantons in 1D QM
 
A05330107
A05330107A05330107
A05330107
 
M.Sc. Phy SII UIV Quantum Mechanics
M.Sc. Phy SII UIV Quantum MechanicsM.Sc. Phy SII UIV Quantum Mechanics
M.Sc. Phy SII UIV Quantum Mechanics
 
1.1 PRINCIPLE OF LEAST ACTION 640-213 MelatosChapter 1.docx
1.1 PRINCIPLE OF LEAST ACTION 640-213 MelatosChapter 1.docx1.1 PRINCIPLE OF LEAST ACTION 640-213 MelatosChapter 1.docx
1.1 PRINCIPLE OF LEAST ACTION 640-213 MelatosChapter 1.docx
 
Non equilibrium thermodynamics in multiphase flows
Non equilibrium thermodynamics in multiphase flowsNon equilibrium thermodynamics in multiphase flows
Non equilibrium thermodynamics in multiphase flows
 
Non equilibrium thermodynamics in multiphase flows
Non equilibrium thermodynamics in multiphase flowsNon equilibrium thermodynamics in multiphase flows
Non equilibrium thermodynamics in multiphase flows
 
Linear response theory and TDDFT
Linear response theory and TDDFT Linear response theory and TDDFT
Linear response theory and TDDFT
 
PART I.4 - Physical Mathematics
PART I.4 - Physical MathematicsPART I.4 - Physical Mathematics
PART I.4 - Physical Mathematics
 

More from Solo Hermelin

5 introduction to quantum mechanics
5 introduction to quantum mechanics5 introduction to quantum mechanics
5 introduction to quantum mechanicsSolo Hermelin
 
Stabilization of linear time invariant systems, Factorization Approach
Stabilization of linear time invariant systems, Factorization ApproachStabilization of linear time invariant systems, Factorization Approach
Stabilization of linear time invariant systems, Factorization ApproachSolo Hermelin
 
Slide Mode Control (S.M.C.)
Slide Mode Control (S.M.C.)Slide Mode Control (S.M.C.)
Slide Mode Control (S.M.C.)Solo Hermelin
 
Sliding Mode Observers
Sliding Mode ObserversSliding Mode Observers
Sliding Mode ObserversSolo Hermelin
 
Reduced order observers
Reduced order observersReduced order observers
Reduced order observersSolo Hermelin
 
Inner outer and spectral factorizations
Inner outer and spectral factorizationsInner outer and spectral factorizations
Inner outer and spectral factorizationsSolo Hermelin
 
Keplerian trajectories
Keplerian trajectoriesKeplerian trajectories
Keplerian trajectoriesSolo Hermelin
 
Anti ballistic missiles ii
Anti ballistic missiles iiAnti ballistic missiles ii
Anti ballistic missiles iiSolo Hermelin
 
Anti ballistic missiles i
Anti ballistic missiles iAnti ballistic missiles i
Anti ballistic missiles iSolo Hermelin
 
12 performance of an aircraft with parabolic polar
12 performance of an aircraft with parabolic polar12 performance of an aircraft with parabolic polar
12 performance of an aircraft with parabolic polarSolo Hermelin
 
11 fighter aircraft avionics - part iv
11 fighter aircraft avionics - part iv11 fighter aircraft avionics - part iv
11 fighter aircraft avionics - part ivSolo Hermelin
 
10 fighter aircraft avionics - part iii
10 fighter aircraft avionics - part iii10 fighter aircraft avionics - part iii
10 fighter aircraft avionics - part iiiSolo Hermelin
 
9 fighter aircraft avionics-part ii
9 fighter aircraft avionics-part ii9 fighter aircraft avionics-part ii
9 fighter aircraft avionics-part iiSolo Hermelin
 
8 fighter aircraft avionics-part i
8 fighter aircraft avionics-part i8 fighter aircraft avionics-part i
8 fighter aircraft avionics-part iSolo Hermelin
 
6 computing gunsight, hud and hms
6 computing gunsight, hud and hms6 computing gunsight, hud and hms
6 computing gunsight, hud and hmsSolo Hermelin
 
4 navigation systems
4 navigation systems4 navigation systems
4 navigation systemsSolo Hermelin
 
2 aircraft flight instruments
2 aircraft flight instruments2 aircraft flight instruments
2 aircraft flight instrumentsSolo Hermelin
 
3 modern aircraft cutaway
3 modern aircraft cutaway3 modern aircraft cutaway
3 modern aircraft cutawaySolo Hermelin
 

More from Solo Hermelin (20)

5 introduction to quantum mechanics
5 introduction to quantum mechanics5 introduction to quantum mechanics
5 introduction to quantum mechanics
 
Stabilization of linear time invariant systems, Factorization Approach
Stabilization of linear time invariant systems, Factorization ApproachStabilization of linear time invariant systems, Factorization Approach
Stabilization of linear time invariant systems, Factorization Approach
 
Slide Mode Control (S.M.C.)
Slide Mode Control (S.M.C.)Slide Mode Control (S.M.C.)
Slide Mode Control (S.M.C.)
 
Sliding Mode Observers
Sliding Mode ObserversSliding Mode Observers
Sliding Mode Observers
 
Reduced order observers
Reduced order observersReduced order observers
Reduced order observers
 
Inner outer and spectral factorizations
Inner outer and spectral factorizationsInner outer and spectral factorizations
Inner outer and spectral factorizations
 
Keplerian trajectories
Keplerian trajectoriesKeplerian trajectories
Keplerian trajectories
 
Anti ballistic missiles ii
Anti ballistic missiles iiAnti ballistic missiles ii
Anti ballistic missiles ii
 
Anti ballistic missiles i
Anti ballistic missiles iAnti ballistic missiles i
Anti ballistic missiles i
 
Analytic dynamics
Analytic dynamicsAnalytic dynamics
Analytic dynamics
 
12 performance of an aircraft with parabolic polar
12 performance of an aircraft with parabolic polar12 performance of an aircraft with parabolic polar
12 performance of an aircraft with parabolic polar
 
11 fighter aircraft avionics - part iv
11 fighter aircraft avionics - part iv11 fighter aircraft avionics - part iv
11 fighter aircraft avionics - part iv
 
10 fighter aircraft avionics - part iii
10 fighter aircraft avionics - part iii10 fighter aircraft avionics - part iii
10 fighter aircraft avionics - part iii
 
9 fighter aircraft avionics-part ii
9 fighter aircraft avionics-part ii9 fighter aircraft avionics-part ii
9 fighter aircraft avionics-part ii
 
8 fighter aircraft avionics-part i
8 fighter aircraft avionics-part i8 fighter aircraft avionics-part i
8 fighter aircraft avionics-part i
 
6 computing gunsight, hud and hms
6 computing gunsight, hud and hms6 computing gunsight, hud and hms
6 computing gunsight, hud and hms
 
4 navigation systems
4 navigation systems4 navigation systems
4 navigation systems
 
3 earth atmosphere
3 earth atmosphere3 earth atmosphere
3 earth atmosphere
 
2 aircraft flight instruments
2 aircraft flight instruments2 aircraft flight instruments
2 aircraft flight instruments
 
3 modern aircraft cutaway
3 modern aircraft cutaway3 modern aircraft cutaway
3 modern aircraft cutaway
 

Recently uploaded

DIFFERENCE IN BACK CROSS AND TEST CROSS
DIFFERENCE IN  BACK CROSS AND TEST CROSSDIFFERENCE IN  BACK CROSS AND TEST CROSS
DIFFERENCE IN BACK CROSS AND TEST CROSSLeenakshiTyagi
 
Spermiogenesis or Spermateleosis or metamorphosis of spermatid
Spermiogenesis or Spermateleosis or metamorphosis of spermatidSpermiogenesis or Spermateleosis or metamorphosis of spermatid
Spermiogenesis or Spermateleosis or metamorphosis of spermatidSarthak Sekhar Mondal
 
All-domain Anomaly Resolution Office U.S. Department of Defense (U) Case: “Eg...
All-domain Anomaly Resolution Office U.S. Department of Defense (U) Case: “Eg...All-domain Anomaly Resolution Office U.S. Department of Defense (U) Case: “Eg...
All-domain Anomaly Resolution Office U.S. Department of Defense (U) Case: “Eg...Sérgio Sacani
 
Botany 4th semester file By Sumit Kumar yadav.pdf
Botany 4th semester file By Sumit Kumar yadav.pdfBotany 4th semester file By Sumit Kumar yadav.pdf
Botany 4th semester file By Sumit Kumar yadav.pdfSumit Kumar yadav
 
VIRUSES structure and classification ppt by Dr.Prince C P
VIRUSES structure and classification ppt by Dr.Prince C PVIRUSES structure and classification ppt by Dr.Prince C P
VIRUSES structure and classification ppt by Dr.Prince C PPRINCE C P
 
Hubble Asteroid Hunter III. Physical properties of newly found asteroids
Hubble Asteroid Hunter III. Physical properties of newly found asteroidsHubble Asteroid Hunter III. Physical properties of newly found asteroids
Hubble Asteroid Hunter III. Physical properties of newly found asteroidsSérgio Sacani
 
GFP in rDNA Technology (Biotechnology).pptx
GFP in rDNA Technology (Biotechnology).pptxGFP in rDNA Technology (Biotechnology).pptx
GFP in rDNA Technology (Biotechnology).pptxAleenaTreesaSaji
 
Disentangling the origin of chemical differences using GHOST
Disentangling the origin of chemical differences using GHOSTDisentangling the origin of chemical differences using GHOST
Disentangling the origin of chemical differences using GHOSTSérgio Sacani
 
SOLUBLE PATTERN RECOGNITION RECEPTORS.pptx
SOLUBLE PATTERN RECOGNITION RECEPTORS.pptxSOLUBLE PATTERN RECOGNITION RECEPTORS.pptx
SOLUBLE PATTERN RECOGNITION RECEPTORS.pptxkessiyaTpeter
 
Orientation, design and principles of polyhouse
Orientation, design and principles of polyhouseOrientation, design and principles of polyhouse
Orientation, design and principles of polyhousejana861314
 
Chromatin Structure | EUCHROMATIN | HETEROCHROMATIN
Chromatin Structure | EUCHROMATIN | HETEROCHROMATINChromatin Structure | EUCHROMATIN | HETEROCHROMATIN
Chromatin Structure | EUCHROMATIN | HETEROCHROMATINsankalpkumarsahoo174
 
Broad bean, Lima Bean, Jack bean, Ullucus.pptx
Broad bean, Lima Bean, Jack bean, Ullucus.pptxBroad bean, Lima Bean, Jack bean, Ullucus.pptx
Broad bean, Lima Bean, Jack bean, Ullucus.pptxjana861314
 
Formation of low mass protostars and their circumstellar disks
Formation of low mass protostars and their circumstellar disksFormation of low mass protostars and their circumstellar disks
Formation of low mass protostars and their circumstellar disksSérgio Sacani
 
Pests of cotton_Sucking_Pests_Dr.UPR.pdf
Pests of cotton_Sucking_Pests_Dr.UPR.pdfPests of cotton_Sucking_Pests_Dr.UPR.pdf
Pests of cotton_Sucking_Pests_Dr.UPR.pdfPirithiRaju
 
Hire 💕 9907093804 Hooghly Call Girls Service Call Girls Agency
Hire 💕 9907093804 Hooghly Call Girls Service Call Girls AgencyHire 💕 9907093804 Hooghly Call Girls Service Call Girls Agency
Hire 💕 9907093804 Hooghly Call Girls Service Call Girls AgencySheetal Arora
 
Lucknow 💋 Russian Call Girls Lucknow Finest Escorts Service 8923113531 Availa...
Lucknow 💋 Russian Call Girls Lucknow Finest Escorts Service 8923113531 Availa...Lucknow 💋 Russian Call Girls Lucknow Finest Escorts Service 8923113531 Availa...
Lucknow 💋 Russian Call Girls Lucknow Finest Escorts Service 8923113531 Availa...anilsa9823
 
Nightside clouds and disequilibrium chemistry on the hot Jupiter WASP-43b
Nightside clouds and disequilibrium chemistry on the hot Jupiter WASP-43bNightside clouds and disequilibrium chemistry on the hot Jupiter WASP-43b
Nightside clouds and disequilibrium chemistry on the hot Jupiter WASP-43bSérgio Sacani
 
Nanoparticles synthesis and characterization​ ​
Nanoparticles synthesis and characterization​  ​Nanoparticles synthesis and characterization​  ​
Nanoparticles synthesis and characterization​ ​kaibalyasahoo82800
 
Green chemistry and Sustainable development.pptx
Green chemistry  and Sustainable development.pptxGreen chemistry  and Sustainable development.pptx
Green chemistry and Sustainable development.pptxRajatChauhan518211
 

Recently uploaded (20)

DIFFERENCE IN BACK CROSS AND TEST CROSS
DIFFERENCE IN  BACK CROSS AND TEST CROSSDIFFERENCE IN  BACK CROSS AND TEST CROSS
DIFFERENCE IN BACK CROSS AND TEST CROSS
 
Spermiogenesis or Spermateleosis or metamorphosis of spermatid
Spermiogenesis or Spermateleosis or metamorphosis of spermatidSpermiogenesis or Spermateleosis or metamorphosis of spermatid
Spermiogenesis or Spermateleosis or metamorphosis of spermatid
 
All-domain Anomaly Resolution Office U.S. Department of Defense (U) Case: “Eg...
All-domain Anomaly Resolution Office U.S. Department of Defense (U) Case: “Eg...All-domain Anomaly Resolution Office U.S. Department of Defense (U) Case: “Eg...
All-domain Anomaly Resolution Office U.S. Department of Defense (U) Case: “Eg...
 
Botany 4th semester file By Sumit Kumar yadav.pdf
Botany 4th semester file By Sumit Kumar yadav.pdfBotany 4th semester file By Sumit Kumar yadav.pdf
Botany 4th semester file By Sumit Kumar yadav.pdf
 
VIRUSES structure and classification ppt by Dr.Prince C P
VIRUSES structure and classification ppt by Dr.Prince C PVIRUSES structure and classification ppt by Dr.Prince C P
VIRUSES structure and classification ppt by Dr.Prince C P
 
Hubble Asteroid Hunter III. Physical properties of newly found asteroids
Hubble Asteroid Hunter III. Physical properties of newly found asteroidsHubble Asteroid Hunter III. Physical properties of newly found asteroids
Hubble Asteroid Hunter III. Physical properties of newly found asteroids
 
GFP in rDNA Technology (Biotechnology).pptx
GFP in rDNA Technology (Biotechnology).pptxGFP in rDNA Technology (Biotechnology).pptx
GFP in rDNA Technology (Biotechnology).pptx
 
CELL -Structural and Functional unit of life.pdf
CELL -Structural and Functional unit of life.pdfCELL -Structural and Functional unit of life.pdf
CELL -Structural and Functional unit of life.pdf
 
Disentangling the origin of chemical differences using GHOST
Disentangling the origin of chemical differences using GHOSTDisentangling the origin of chemical differences using GHOST
Disentangling the origin of chemical differences using GHOST
 
SOLUBLE PATTERN RECOGNITION RECEPTORS.pptx
SOLUBLE PATTERN RECOGNITION RECEPTORS.pptxSOLUBLE PATTERN RECOGNITION RECEPTORS.pptx
SOLUBLE PATTERN RECOGNITION RECEPTORS.pptx
 
Orientation, design and principles of polyhouse
Orientation, design and principles of polyhouseOrientation, design and principles of polyhouse
Orientation, design and principles of polyhouse
 
Chromatin Structure | EUCHROMATIN | HETEROCHROMATIN
Chromatin Structure | EUCHROMATIN | HETEROCHROMATINChromatin Structure | EUCHROMATIN | HETEROCHROMATIN
Chromatin Structure | EUCHROMATIN | HETEROCHROMATIN
 
Broad bean, Lima Bean, Jack bean, Ullucus.pptx
Broad bean, Lima Bean, Jack bean, Ullucus.pptxBroad bean, Lima Bean, Jack bean, Ullucus.pptx
Broad bean, Lima Bean, Jack bean, Ullucus.pptx
 
Formation of low mass protostars and their circumstellar disks
Formation of low mass protostars and their circumstellar disksFormation of low mass protostars and their circumstellar disks
Formation of low mass protostars and their circumstellar disks
 
Pests of cotton_Sucking_Pests_Dr.UPR.pdf
Pests of cotton_Sucking_Pests_Dr.UPR.pdfPests of cotton_Sucking_Pests_Dr.UPR.pdf
Pests of cotton_Sucking_Pests_Dr.UPR.pdf
 
Hire 💕 9907093804 Hooghly Call Girls Service Call Girls Agency
Hire 💕 9907093804 Hooghly Call Girls Service Call Girls AgencyHire 💕 9907093804 Hooghly Call Girls Service Call Girls Agency
Hire 💕 9907093804 Hooghly Call Girls Service Call Girls Agency
 
Lucknow 💋 Russian Call Girls Lucknow Finest Escorts Service 8923113531 Availa...
Lucknow 💋 Russian Call Girls Lucknow Finest Escorts Service 8923113531 Availa...Lucknow 💋 Russian Call Girls Lucknow Finest Escorts Service 8923113531 Availa...
Lucknow 💋 Russian Call Girls Lucknow Finest Escorts Service 8923113531 Availa...
 
Nightside clouds and disequilibrium chemistry on the hot Jupiter WASP-43b
Nightside clouds and disequilibrium chemistry on the hot Jupiter WASP-43bNightside clouds and disequilibrium chemistry on the hot Jupiter WASP-43b
Nightside clouds and disequilibrium chemistry on the hot Jupiter WASP-43b
 
Nanoparticles synthesis and characterization​ ​
Nanoparticles synthesis and characterization​  ​Nanoparticles synthesis and characterization​  ​
Nanoparticles synthesis and characterization​ ​
 
Green chemistry and Sustainable development.pptx
Green chemistry  and Sustainable development.pptxGreen chemistry  and Sustainable development.pptx
Green chemistry and Sustainable development.pptx
 

4 stochastic processes

  • 7. 7 SOLO Ergodicity (continue): Fourier-transform the truncated time autocorrelation:
$$\int_{-\infty}^{+\infty} R_T(\tau)\,e^{-j\omega\tau}\,d\tau = \frac{1}{2T}\int_{-\infty}^{+\infty}\!\!\int_{-\infty}^{+\infty} x_T(t+\tau,\Omega)\,x_T(t,\Omega)\,e^{-j\omega\tau}\,dt\,d\tau = \frac{1}{2T}\Big[\int x_T(t+\tau,\Omega)\,e^{-j\omega(t+\tau)}\,d(t+\tau)\Big]\Big[\int x_T(t,\Omega)\,e^{+j\omega t}\,dt\Big] = \frac{1}{2T}\,X_T\,X_T^{*}$$
where $X_T := \int_{-\infty}^{+\infty} x_T(v,\Omega)\,e^{-j\omega v}\,dv$ and * means complex conjugate. Define
$$S(\omega) := \lim_{T\to\infty} E\Big\{\frac{X_T\,X_T^{*}}{2T}\Big\} = \lim_{T\to\infty} E\Big\{\int_{-\infty}^{+\infty}\Big[\frac{1}{2T}\int_{-T}^{+T} x_T(t+\tau,\Omega)\,x_T(t,\Omega)\,dt\Big]\,e^{-j\omega\tau}\,d\tau\Big\}$$
Since the Random Process is Ergodic we can use the Wide-Sense Stationarity assumption $E\{x_T(t+\tau,\Omega)\,x_T(t,\Omega)\} = R(\tau)$, so that
$$S(\omega) = \lim_{T\to\infty}\int_{-\infty}^{+\infty} R(\tau)\,\Big[\underbrace{\frac{1}{2T}\int_{-T}^{+T}dt}_{=1}\Big]\,e^{-j\omega\tau}\,d\tau = \int_{-\infty}^{+\infty} R(\tau)\,e^{-j\omega\tau}\,d\tau$$
Random Processes
  • 8. 8 SOLO Ergodicity (continue): We obtained the Wiener–Khinchine Theorem (Wiener 1930):
$$S(\omega) := \lim_{T\to\infty} E\Big\{\frac{X_T\,X_T^{*}}{2T}\Big\} = \int_{-\infty}^{+\infty} R(\tau)\,e^{-j\omega\tau}\,d\tau$$
Norbert Wiener 1894 – 1964. Alexander Yakovlevich Khinchine 1894 – 1959. The Power Spectrum or Power Spectral Density S(ω) of a Stationary Random Process is the Fourier Transform of the Autocorrelation Function R(τ). Random Processes
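A small numerical sketch may help here; it is an illustration added to this text (not part of the original slides), in which a discrete-time AR(1) process with an assumed pole a stands in for x(t). The averaged periodogram, the finite-record estimate of E{|X_T|²/2T}, is compared with the discrete-time Fourier transform of the known autocorrelation R(m) = a^|m|/(1 − a²).

```python
# Illustrative check of the Wiener-Khinchine theorem for an AR(1) process
# x_k = a x_{k-1} + w_k (unit-variance white w): the averaged periodogram
# should approach the Fourier transform of the autocorrelation R(m) = a^|m|/(1-a^2).
import numpy as np

rng = np.random.default_rng(0)
N, trials, a = 512, 500, 0.8

S_est = np.zeros(N)
for _ in range(trials):
    w = rng.standard_normal(N + 200)
    x = np.empty_like(w)
    x[0] = w[0]
    for k in range(1, len(w)):
        x[k] = a * x[k - 1] + w[k]
    x = x[200:]                                   # drop burn-in, keep N samples
    S_est += np.abs(np.fft.fft(x)) ** 2 / N / trials

m = np.arange(-200, 201)
R = a ** np.abs(m) / (1.0 - a ** 2)               # theoretical autocorrelation
omega = 2 * np.pi * np.fft.fftfreq(N)
S_from_R = np.array([np.sum(R * np.exp(-1j * om * m)).real for om in omega])

print("max relative difference:", np.max(np.abs(S_est - S_from_R) / S_from_R))
```

The two spectra agree up to the statistical fluctuation of the periodogram average, which is the practical content of the theorem.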
  • 9. 9 SOLO White Noise. Wide-Sense Whiteness: a (not necessarily stationary) Random Process whose Autocorrelation is zero for any two different times is called white noise in the wide sense:
$$R(t_1,t_2) = E\{x(t_1,\Omega)\,x(t_2,\Omega)\} = \sigma^2(t_1)\,\delta(t_1-t_2)$$
where σ²(t1) is the instantaneous variance. Strict-Sense Whiteness: a (not necessarily stationary) Random Process in which the outcomes at any two different times are independent is called white noise in the strict sense:
$$p_{x(t_1),x(t_2)}(\xi_1,\xi_2) = p_{x(t_1)}(\xi_1)\,p_{x(t_2)}(\xi_2),\qquad t_1 \ne t_2$$
A Stationary White Noise Random Process has the Autocorrelation
$$R(\tau) = E\{x(t+\tau,\Omega)\,x(t,\Omega)\} = \sigma^2\,\delta(\tau)$$
Note: in general, whiteness requires Strict-Sense Whiteness. In practice we have only moments (typically up to second order) and thus only Wide-Sense Whiteness. Random Processes
  • 10. 10 SOLO White Noise. A Stationary White Noise Random Process has the Autocorrelation R(τ) = E{x(t+τ,Ω) x(t,Ω)} = σ² δ(τ). The Power Spectral Density is given by the Fourier Transform of the Autocorrelation:
$$S(\omega) = \int_{-\infty}^{+\infty} R(\tau)\,e^{-j\omega\tau}\,d\tau = \int_{-\infty}^{+\infty}\sigma^2\,\delta(\tau)\,e^{-j\omega\tau}\,d\tau = \sigma^2$$
We can see that the Power Spectral Density contains all frequencies at the same amplitude; this is the reason it is called White Noise. The Power of the Noise is defined as
$$P = \int_{-\infty}^{+\infty} R(\tau)\,d\tau = S(\omega = 0) = \sigma^2$$
Random Processes
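The discrete-time analogue is easy to check numerically. The sketch below is an added illustration with an arbitrary σ: a Gaussian white sequence has sample autocorrelation close to σ² at lag 0, close to 0 at other lags, and an approximately flat periodogram at level σ².

```python
# Discrete-time white noise: R(0) ~ sigma^2, R(m) ~ 0 for m != 0, flat spectrum.
import numpy as np

rng = np.random.default_rng(1)
sigma, N, trials = 2.0, 4096, 100
R = np.zeros(5)
S_level = 0.0
for _ in range(trials):
    x = sigma * rng.standard_normal(N)
    for m in range(5):
        R[m] += np.mean(x[m:] * x[:N - m]) / trials       # sample autocorrelation at lag m
    S_level += np.mean(np.abs(np.fft.fft(x)) ** 2 / N) / trials

print("R(0..4) ~", np.round(R, 3), "  expected ~ [sigma^2, 0, 0, 0, 0] =", [sigma ** 2, 0, 0, 0, 0])
print("mean periodogram level ~", round(S_level, 3), "  expected sigma^2 =", sigma ** 2)
```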
  • 11. 11 SOLO Markov Processes. Andrei Andreevich Markov 1856 – 1922. A Markov Process is defined by
$$p\big(x(\tau),\Omega \,\big|\, x(t),\Omega,\ t \le t_1\big) = p\big(x(\tau),\Omega \,\big|\, x(t_1),\Omega\big)\qquad \forall\,\tau > t_1$$
i.e. for a Markov Random Process the past up to any time t1 is fully summarized by the value of the process at t1. Examples of Markov Processes:
1. Continuous Dynamic System: $\dot x(t) = f(t,x,u,v)$, $\;z(t) = h(t,x,u,w)$
2. Discrete Dynamic System: $x_{k+1} = f(t_k,x_k,u_k,v_k)$, $\;z_k = h(t_k,x_k,u_k,w_k)$
where x is the state-space vector (n×1), u the input vector (m×1), v a white input-noise vector (n×1), z the measurement vector (p×1) and w a white measurement-noise vector (p×1). Random Processes. Table of Content
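As a concrete instance of the discrete dynamic-system example, the sketch below (added here; the matrices A, B, H and the noise levels are arbitrary illustrative choices, not values from the slides) simulates a linear state-space model with white process and measurement noise. The state sequence is Markov because x_{k+1} depends on the past only through x_k.

```python
# Sketch of the discrete dynamic-system example with hypothetical linear f and h:
#   x_{k+1} = A x_k + B u_k + v_k      (v_k : white process noise)
#   z_k     = H x_k + w_k              (w_k : white measurement noise)
import numpy as np

rng = np.random.default_rng(2)
A = np.array([[1.0, 0.1], [0.0, 1.0]])   # constant-velocity model, time step 0.1
B = np.array([[0.005], [0.1]])
H = np.array([[1.0, 0.0]])
q, r = 0.05, 0.5                          # process / measurement noise standard deviations

x = np.zeros(2)
xs, zs = [], []
for k in range(100):
    u = np.array([1.0])                   # constant input
    x = A @ x + B @ u + q * rng.standard_normal(2)
    z = H @ x + r * rng.standard_normal(1)
    xs.append(x.copy()); zs.append(z.copy())

print("final state x_100 =", np.round(xs[-1], 3), "  last measurement z_100 =", np.round(zs[-1], 3))
```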
  • 12. SOLO Stochastic Processes. Stochastic Differential Equation (SDE).
Terminology: a stochastic differential equation (SDE) is a differential equation in which one or more of the terms is a stochastic process, thus resulting in a solution which is itself a stochastic process. SDEs are used to model diverse phenomena such as fluctuating stock prices or physical systems subject to thermal fluctuations. Typically, SDEs incorporate white noise, which can be thought of as the derivative of Brownian motion (or the Wiener process); however, other types of random fluctuations are possible, such as jump processes.
Background: the earliest work on SDEs was done to describe Brownian motion in Einstein's famous paper, and at the same time by Smoluchowski. However, one of the earlier works related to Brownian motion is credited to Bachelier (1900) in his thesis 'Theory of Speculation'. This work was followed up by Langevin. Later Itō and Stratonovich put SDEs on a more solid mathematical footing. In physical science, SDEs are usually written as Langevin equations. These are sometimes confusingly called "the Langevin equation" even though there are many possible forms; they consist of an ordinary differential equation containing a deterministic part and an additional random white-noise term. A second form is the Smoluchowski equation and, more generally, the Fokker–Planck equation. These are partial differential equations that describe the time evolution of probability distribution functions. The third form is the stochastic differential equation used most frequently in mathematics and quantitative finance (see below). This is similar to the Langevin form, but it is usually written in differential form. SDEs come in two varieties, corresponding to two versions of stochastic calculus.
  • 13. SOLO Stochastic Processes Brownian motion or the Wiener process was discovered to be exceptionally complex mathematically. The Wiener process is non-differentiable; thus, it requires its own rules of calculus. There are two dominating versions of stochastic calculus, the Ito Stochastic Calculus and the Stratonovich Stochastic Calculus. Each of the two has advantages and disadvantages, and newcomers are often confused whether the one is more appropriate than the other in a given situation. Guidelines exist and conveniently, one can readily convert an Ito SDE to an equivalent Stratonovich SDE and back again. Still, one must be careful which calculus to use when the SDE is initially written down. Stochastic Calculus Table of Content
  • 14. Stochastic Processes SOLO Brownian Motion. In 1827 Brown, a botanist, discovered the apparently random motion of pollen particles suspended in water. At the beginning of the twentieth century, Brownian motion was studied by Einstein, Perrin and other physicists. In 1923, against this scientific background, Wiener defined probability measures in path spaces, and used the concept of Lebesgue integrals to lay the mathematical foundations of stochastic analysis. In 1942, Itô began to reconstruct from scratch the concept of stochastic integrals, and its associated theory of analysis. He created the theory of stochastic differential equations, which describe motion due to random events. It was Albert Einstein's (in his 1905 paper) and Marian Smoluchowski's (1906) independent research of the problem that brought the solution to the attention of physicists, and presented it as a way to indirectly confirm the existence of atoms and molecules.
Albert Einstein 1879 – 1955. Norbert Wiener 1894 – 1964. Henri Léon Lebesgue 1875 – 1941. Robert Brown 1773 – 1858. Marian Ritter von Smolan Smoluchowski 1872 – 1917. Kiyosi Itô 1915 – 2008.
  • 15. Stochastic Processes SOLO Random Walk. Assume the process of walking on a straight line at discrete time intervals T. At each time we walk a distance s, randomly to the left or to the right, with equal probability p = 1/2. In this way we create a Stochastic Process called a Random Walk. (This experiment is equivalent to tossing a coin to get, randomly, Head or Tail.) Assume that at t = nT we have taken k steps to the right and n − k steps to the left; the distance traveled x(nT) is a random value r s, where r equals n, n−2, …, −(n−2), −n:
$$x(nT) = k\,s - (n-k)\,s = (2k-n)\,s = r\,s \;\Rightarrow\; k = \frac{n+r}{2}$$
Therefore
$$P\{x(nT) = r\,s\} = \binom{n}{k}\Big(\frac{1}{2}\Big)^{n} = \binom{n}{\tfrac{n+r}{2}}\Big(\frac{1}{2}\Big)^{n}$$
  • 16. Stochastic Processes SOLO Random Walk (continue – 1). The random value is x(nT) = x1 + x2 + ⋯ + xn, where at step i: P{xi = +s} = p = 1/2 and P{xi = −s} = 1 − p = 1/2. Hence
$$E\{x_i\} = s\,P\{x_i=+s\} + (-s)\,P\{x_i=-s\} = 0,\qquad E\{x_i^2\} = s^2\,P\{x_i=+s\} + s^2\,P\{x_i=-s\} = s^2$$
and, since the xi and xj are independent for i ≠ j, E{xi xj} = E{xi} E{xj} = 0 (i ≠ j), while E{xi xj} = s² for i = j. Therefore
$$E\{x(nT)\} = E\{x_1\}+\dots+E\{x_n\} = 0,\qquad E\{x^2(nT)\} = \sum_{i=1}^{n}\sum_{j=1}^{n}E\{x_i x_j\} = n\,s^2$$
For large n (De Moivre–Laplace approximation):
$$P\{x(nT)=r\,s\} = P\Big\{k=\frac{n+r}{2}\Big\} \approx \frac{1}{\sqrt{2\pi n p(1-p)}}\,e^{-\frac{(k-np)^2}{2np(1-p)}} = \sqrt{\frac{2}{\pi n}}\;e^{-r^2/(2n)}$$
and for large r
$$P\{x(nT)\le r\,s\} \approx \frac{1}{2} + \frac{1}{\sqrt{2\pi}}\int_0^{r/\sqrt{n}} e^{-y^2/2}\,dy = \frac{1}{2}\Big[1+\operatorname{erf}\Big(\frac{r}{\sqrt{2n}}\Big)\Big]$$
  • 17. Stochastic ProcessesSOLO Random Walk (continue – 2) For n1 > n2 > n3 > n4 the number of steps to the right from n2T to n1T interval is independent of the number of steps to the right between n4T to n3T interval. Hence x (n1T) – x (n2T) is independent of x (n4T) – x (n3T). Table of Content
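A quick Monte Carlo check of these formulas can be made as follows; it is an illustration added here, with arbitrary values of n, s and sample size.

```python
# Random walk: after n steps of size s the mean displacement is ~0, the mean square
# displacement is ~ n s^2, and P{x(nT) = r s} matches C(n,(n+r)/2) (1/2)^n.
import numpy as np
from math import comb

rng = np.random.default_rng(3)
n, s, trials = 100, 1.0, 200_000
steps = rng.choice([-s, s], size=(trials, n))
x = steps.sum(axis=1)                      # x(nT) for each realization

print("E[x]   ~", round(x.mean(), 3), "  (theory 0)")
print("E[x^2] ~", round((x ** 2).mean(), 2), f"  (theory n*s^2 = {n * s ** 2})")

r = 10                                      # displacement r*s (r and n must have the same parity)
p_mc = np.mean(x == r * s)
p_th = comb(n, (n + r) // 2) * 0.5 ** n
print(f"P[x = {r}s] ~ {p_mc:.4f}   (binomial theory {p_th:.4f})")
```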
  • 18. SOLO Stochastic Processes Smoluchowski Equation. In physics, the Diffusion Equation with a drift term is often called the Smoluchowski equation (after Marian von Smoluchowski). Let w(r,t) be a density, D a diffusion constant, ζ a friction coefficient, and U(r,t) a potential. Then the Smoluchowski equation states that the density evolves according to the drift–diffusion equation
$$\frac{\partial w(r,t)}{\partial t} = \nabla\cdot\Big[D\,\nabla w(r,t) + \frac{w(r,t)}{\zeta}\,\nabla U(r,t)\Big]$$
The diffusivity term acts to smooth out the density, while the drift term shifts the density towards regions of low potential U. The equation is consistent with each particle moving according to a stochastic differential equation with a bias term −∇U/ζ and a diffusivity D. Physically, the drift term originates from the force −∇U being balanced by the viscous drag given by ζ. The Smoluchowski equation is formally identical to the Fokker–Planck equation, the only difference being the physical meaning of w: a distribution of particles in space for the Smoluchowski equation, a distribution of particle velocities for the Fokker–Planck equation.
  • 19. SOLO Stochastic Processes Einstein–Smoluchowski Equation. In physics (namely, in kinetic theory) the Einstein relation (also known as the Einstein–Smoluchowski relation) is a previously unexpected connection revealed independently by Albert Einstein in 1905 and by Marian Smoluchowski (1906) in their papers on Brownian motion. Two important special cases of the relation are
$$D = \frac{\mu_q\,k_B\,T}{q}\qquad\text{(diffusion of charged particles)}$$
$$D = \frac{k_B\,T}{6\pi\,\eta\,r}\qquad\text{("Einstein–Stokes equation", for diffusion of spherical particles through a liquid with low Reynolds number)}$$
where ρ(x,t) is the density of the Brownian particles, D is the diffusion constant, q is the electrical charge of a particle, μq the electrical mobility of the charged particle (i.e. the ratio of the particle's terminal drift velocity to an applied electric field), kB is Boltzmann's constant, T is the absolute temperature, η is the viscosity and r is the radius of the spherical particle. The more general form of the equation is
$$D = \mu\,k_B\,T$$
where the "mobility" μ is the ratio of the particle's terminal drift velocity to an applied force, μ = vd / F. Einstein's equation for Brownian motion is the diffusion equation
$$\frac{\partial\rho}{\partial t} = D\,\frac{\partial^2\rho}{\partial x^2},\qquad \rho(x,t) = \frac{1}{(4\pi D t)^{1/2}}\exp\Big(-\frac{x^2}{4 D t}\Big)$$
Table of Content
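The diffusive spreading can be checked directly. The following sketch (illustrative parameters, added here) propagates independent Brownian particles and compares the mean-square displacement with 2 D t, as implied by the Gaussian solution above.

```python
# Brownian particles x(t+dt) = x(t) + sqrt(2 D dt) N(0,1) spread so that <x^2> = 2 D t,
# consistent with rho(x,t) = exp(-x^2/4Dt)/sqrt(4 pi D t).
import numpy as np

rng = np.random.default_rng(9)
D, dt, nsteps, npart = 0.5, 1e-3, 2000, 20_000

x = np.zeros(npart)
for _ in range(nsteps):
    x += np.sqrt(2 * D * dt) * rng.standard_normal(npart)

t = nsteps * dt
print("<x^2> ~", round(np.mean(x ** 2), 4), "   theory 2*D*t =", 2 * D * t)
```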
  • 20. Paul Langevin 1872 – 1946. Langevin Equation SOLO Stochastic Processes. The Langevin equation (Paul Langevin, 1908) is a stochastic differential equation describing the time evolution of a subset of the degrees of freedom. These degrees of freedom typically are collective (macroscopic) variables changing only slowly in comparison to the other (microscopic) variables of the system. The fast (microscopic) variables are responsible for the stochastic nature of the Langevin equation. The original Langevin equation describes Brownian motion, the apparently random movement of a particle in a fluid due to collisions with the molecules of the fluid [Langevin, P. (1908). "On the Theory of Brownian Motion". C. R. Acad. Sci. (Paris) 146: 530–533]:
$$m\,\frac{dv}{dt} = -\lambda\,v + \eta(t),\qquad \frac{dx}{dt} = v$$
We are interested in the position x of a particle of mass m. The force on the particle is the sum of the viscous force proportional to the particle's velocity, −λ v (Stokes' law), plus a noise term η(t) that has a Gaussian probability distribution with correlation function
$$\langle \eta_i(t)\,\eta_j(t')\rangle = 2\,\lambda\,k_B\,T\,\delta_{i,j}\,\delta(t-t')$$
where kB is Boltzmann's constant and T is the temperature. Table of Content
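A minimal Euler–Maruyama sketch of this equation (added here as an illustration; mass, friction and temperature values are arbitrary assumptions) shows the velocity variance relaxing to the equipartition value kB T / m, which is the fluctuation–dissipation content of the correlation function above.

```python
# Euler-Maruyama integration of m dv = -lambda v dt + eta dt with
# <eta(t) eta(t')> = 2 lambda kB T delta(t-t').  Steady-state <v^2> -> kB T / m.
import numpy as np

rng = np.random.default_rng(4)
m, lam, kBT = 1.0, 2.0, 0.5                       # assumed units
dt, nsteps, npart = 1e-3, 5_000, 10_000

v = np.zeros(npart)
noise_std = np.sqrt(2.0 * lam * kBT * dt) / m     # std of (1/m) * integral of eta over dt
for _ in range(nsteps):
    v += (-lam / m) * v * dt + noise_std * rng.standard_normal(npart)

print("<v^2> ~", round(np.mean(v ** 2), 4), "   equipartition kB*T/m =", kBT / m)
```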
  • 21. Propagation Equation SOLO Stochastic Processes. Definition 1: Hölder Continuity Condition. Given an (m×1) vector function k(x,t) of an (n×1) vector x on a domain K, we say that k is Hölder continuous in K if for some constants C, α > 0 and some norm || ||
$$\|k(x_1,t) - k(x_2,t)\| < C\,\|x_1 - x_2\|^{\alpha}$$
Hölder continuity is a generalization of Lipschitz continuity (α = 1):
$$\|k(x_1,t) - k(x_2,t)\| < C\,\|x_1 - x_2\|$$
Rudolf Lipschitz 1832 – 1903. Otto Ludwig Hölder 1859 – 1937.
  • 22. Propagation Equation SOLO Stochastic Processes Definition 2: Standard Stochastic State Realization (SSSR) The Stochastic Differential Equation: ( ) ( ) ( ) ( ) [ ]fnxnxnnxnx ttttndtxGdttxftxd ,,, 0111 ∈+= ( ) ( ) ( ) ( ){ } ( ){ } ( ){ } 0===+= tndEtndEtndEtndtndtnd pgpg we can write ( ) ( ) ( ) ( ){ } ( ) ( )sttQswtwE td tnd tw Tg −== δ ( )tnd g ( ) ( ){ } ( )dttQtntndE nxn T gg =Wiener (Gauss) Process ( )tnd p Poisson Process ( ) ( ){ }                 = na a a T pp n tntndE λσ λσ λσ 2 2 2 1 2 00 00 00 2 1     (1) where is independent of( ) 00 xtx = 0x ( )tnd (2) is Holder Continuous in t, Lipschitz Continuous in( )txGnxn , x ( ) ( )txGtxG T nxnnxn ,, is strictly Positive Definite ( ) ( ) ji ij i ij xx txG x txG ∂∂ ∂ ∂ ∂ , ; , 2 are Globally Lipschitz Continuous in x, continuous in t, and globally bounded. (3) The vector f (x,t) is Continuous in t and Globally Lipschitz Continuous in , and ∂fi/∂xi are Globally Lipschitz Continuous in , and continuous in t.x x The Stochastic Differential Equation is called a Standard Stochastic State Realization (SSSR) Table of Content
  • 23. Stochastic Processes SOLO Lévy Process. In probability theory, a Lévy process, named after the French mathematician Paul Lévy, is a continuous-time stochastic process with independent, stationary increments. Paul Pierre Lévy 1886 – 1971. A stochastic process X = {Xt : t ≥ 0} is said to be a Lévy process if:
1. X0 = 0 almost surely (with probability one).
2. Independent increments: for any 0 ≤ t1 < t2 < ⋯ < tn < ∞, the increments Xt2 − Xt1, Xt3 − Xt2, …, Xtn − Xtn−1 are independent.
3. Stationary increments: for any s < t, Xt − Xs is equal in distribution to Xt−s.
4. t ↦ Xt is almost surely right continuous with left limits.
Independent increments: a continuous-time stochastic process assigns a random variable Xt to each point t ≥ 0 in time. In effect it is a random function of t. The increments of such a process are the differences Xs − Xt between its values at different times t < s. To call the increments of a process independent means that increments Xs − Xt and Xu − Xv are independent random variables whenever the two time intervals do not overlap and, more generally, any finite number of increments assigned to pairwise non-overlapping time intervals are mutually (not just pairwise) independent.
  • 24. Stochastic ProcessesSOLO Lévy Process (continue – 1) Paul Pierre Lévy 1886 - 1971 A Stochastic Process X = {Xt: t ≥ 0} is said to be a Lévy Process if: 1. X0 = 0 almost surely (with probability one). 2. Independent increments: For any , are independent. 3. Stationary increments: For any t < s, Xt – Xs is equal in distribution to X t-s . 4. is almost surely right continuous with left limits. Stationary increments To call the increments stationary means that the probability distribution of any increment Xs − Xt depends only on the length s − t of the time interval; increments with equally long time intervals are identically distributed. In the Wiener process, the probability distribution of Xs − Xt is normal with expected value 0 and variance s − t. In the (homogeneous) Poisson process, the probability distribution of Xs − Xt is a Poisson distribution with expected value λ(s − t), where λ > 0 is the "intensity" or "rate" of the process.
  • 25. Stochastic ProcessesSOLO Lévy Process (continue – 2) Paul Pierre Lévy 1886 - 1971 A Stochastic Process X = {Xt: t ≥ 0} is said to be a Lévy Process if: 1. X0 = 0 almost surely (with probability one). 2. Independent increments: For any , are independent. 3. Stationary increments: For any t < s, Xt – Xs is equal in distribution to X t-s . 4. is almost surely right continuous with left limits. Divisibility Lévy processes correspond to infinitely divisible probability distributions: The probability distributions of the increments of any Lévy process are infinitely divisible, since the increment of length t is the sum of n increments of length t/n, which are i.i.d. by assumption (independent increments and stationarity). Conversely, there is a Lévy process for each infinitely divisible probability distribution: given such a distribution D, multiples and dividing define a stochastic process for positive rational time, defining it as a Dirac delta distribution for time 0 defines it for time 0, and taking limits defines it for real time. Independent increments and stationarity follow by assumption of divisibility, though one must check continuity and that taking limits gives a well-defined function for irrational time. Table of Content
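As a simple illustrative instance of a Lévy process (added here, parameters arbitrary), the sketch below simulates paths of a homogeneous Poisson process and checks that an increment over an interval of length h has mean and variance λh, and that increments over disjoint intervals are empirically uncorrelated.

```python
# Homogeneous Poisson process as a Levy process: stationary increments
# N_{t+h} - N_t ~ Poisson(lam*h), and increments on disjoint intervals are independent.
import numpy as np

rng = np.random.default_rng(5)
lam, trials = 3.0, 20_000
inc1 = np.empty(trials)
inc2 = np.empty(trials)
for i in range(trials):
    # event times of one path via exponential inter-arrival times (enough to cover [0, 9])
    arrivals = np.cumsum(rng.exponential(1.0 / lam, size=90))
    inc1[i] = np.sum((arrivals > 2.0) & (arrivals <= 4.0))   # N_4 - N_2
    inc2[i] = np.sum((arrivals > 6.0) & (arrivals <= 9.0))   # N_9 - N_6

print("N_4 - N_2: mean", round(inc1.mean(), 3), " var", round(inc1.var(), 3), " (theory", lam * 2.0, ")")
print("correlation of disjoint increments:", round(np.corrcoef(inc1, inc2)[0, 1], 4))
```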
  • 26. Stochastic ProcessesSOLO Martingale Originally, martingale referred to a class of betting strategies that was popular in 18th century France. The simplest of these strategies was designed for a game in which the gambler wins his stake if a coin comes up heads and loses it if the coin comes up tails. The strategy had the gambler double his bet after every loss so that the first win would recover all previous losses plus win a profit equal to the original stake. As the gambler's wealth and available time jointly approach infinity, his probability of eventually flipping heads approaches 1, which makes the martingale betting strategy seem like a sure thing. However, the exponential growth of the bets eventually bankrupts its users History of Martingale The concept of martingale in probability theory was introduced by Paul Pierre Lévy, and much of the original development of the theory was done by Joseph Leo Doob. Part of the motivation for that work was to show the impossibility of successful betting strategies. Paul Pierre Lévy 1886 - 1971 Joseph Leo Doob 1910 - 2004
  • 27. Stochastic Processes SOLO Martingale. In probability theory, a martingale is a stochastic process (i.e., a sequence of random variables) such that the conditional expected value of an observation at some time t, given all the observations up to some earlier time s, is equal to the observation at that earlier time s. A discrete-time martingale is a discrete-time stochastic process (i.e., a sequence of random variables) X1, X2, X3, … that satisfies, for all n,
$$E\{X_{n+1}\mid X_1,\dots,X_n\} = X_n$$
i.e., the conditional expected value of the next observation, given all the past observations, is equal to the last observation. Somewhat more generally, a sequence Y1, Y2, Y3, … is said to be a martingale with respect to another sequence X1, X2, X3, … if, for all n,
$$E\{Y_{n+1}\mid X_1,\dots,X_n\} = Y_n$$
Similarly, a continuous-time martingale with respect to the stochastic process Xt is a stochastic process Yt such that, for all s ≤ t,
$$E\{Y_t\mid X_\tau,\ \tau\le s\} = Y_s$$
This expresses the property that the conditional expectation of an observation at time t, given all the observations up to time s, is equal to the observation at time s (of course, provided that s ≤ t).
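A quick empirical illustration of the discrete-time definition (added here): for a symmetric ±1 random walk, the conditional mean of X_{n+1} given X_n is X_n itself.

```python
# Symmetric random walk X_n = sum of +/-1 steps is a martingale:
# conditioning on X_10 = x, the empirical average of X_11 is ~x.
import numpy as np

rng = np.random.default_rng(6)
trials = 500_000
steps = rng.choice([-1, 1], size=(trials, 11))
X10 = steps[:, :10].sum(axis=1)
X11 = X10 + steps[:, 10]

for x in (-4, 0, 4):
    mask = X10 == x
    print(f"E[X_11 | X_10 = {x:+d}] ~ {X11[mask].mean():+.3f}   (theory {x:+d})")
```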
  • 28. Stochastic Processes SOLO Martingale. In full generality, a stochastic process Y : T × Ω → S is a martingale with respect to a filtration Σ∗ and probability measure P if
* Σ∗ is a filtration of the underlying probability space (Ω, Σ, P);
* Y is adapted to the filtration Σ∗, i.e., for each t in the index set T, the random variable Yt is a Σt-measurable function;
* for each t, Yt lies in the L1 space L1(Ω, Σt, P; S), i.e. E{|Yt|} < ∞;
* for all s and t with s < t and all F ∈ Σs,
$$E\{(Y_t - Y_s)\,\chi_F\} = 0$$
where χF denotes the indicator function of the event F. In Grimmett and Stirzaker's Probability and Random Processes, this last condition is denoted as
$$E\{Y_t\mid \Sigma_s\} = Y_s$$
which is a general form of conditional expectation. It is important to note that the property of being a martingale involves both the filtration and the probability measure (with respect to which the expectations are taken). It is possible that Y could be a martingale with respect to one measure but not another one; the Girsanov theorem offers a way to find a measure with respect to which an Itō process is a martingale. Table of Content
  • 29. Stochastic Processes SOLO Chapman–Kolmogorov Equation. Sydney Chapman 1888 – 1970. Andrey Nikolaevich Kolmogorov 1903 – 1987. Suppose that {fi} is an indexed collection of random variables, that is, a stochastic process. Let $p_{i_1,\dots,i_n}(f_1,\dots,f_n)$ be the joint probability density function of the values of the random variables f1 to fn. Then the Chapman–Kolmogorov equation is
$$p_{i_1,\dots,i_{n-1}}(f_1,\dots,f_{n-1}) = \int_{-\infty}^{+\infty} p_{i_1,\dots,i_n}(f_1,\dots,f_n)\,d f_n$$
Note that we have not yet assumed anything about the temporal (or any other) ordering of the random variables; the above equation applies equally to the marginalization of any of them.
Particularization to Markov chains: when the stochastic process under consideration is Markovian, the Chapman–Kolmogorov equation is equivalent to an identity on transition densities. In the Markov chain setting, one assumes an ordering of the indices. Then, because of the Markov property,
$$p_{i_1,\dots,i_n}(f_1,\dots,f_n) = p_{i_1\mid i_2}(f_1\mid f_2)\,p_{i_2\mid i_3}(f_2\mid f_3)\cdots p_{i_{n-1}\mid i_n}(f_{n-1}\mid f_n)\,p_{i_n}(f_n)$$
where the conditional probability $p_{i\mid j}(f_i\mid f_j)$ is the transition probability between the times i > j. So the Chapman–Kolmogorov equation takes the form
$$p_{i_1\mid i_3}(f_1\mid f_3) = \int_{-\infty}^{+\infty} p_{i_1\mid i_2}(f_1\mid f_2)\,p_{i_2\mid i_3}(f_2\mid f_3)\,d f_2$$
When the probability distribution on the state space of a Markov chain is discrete and the Markov chain is homogeneous, the Chapman–Kolmogorov equations can be expressed in terms of (possibly infinite-dimensional) matrix multiplication, thus
$$P(t+s) = P(t)\,P(s)$$
where P(t) is the transition matrix, i.e., if Xt is the state of the process at time t, then for any two points i and j in the state space, the (i,j) entry of P(t+s) is obtained by summing over the intermediate state.
  • 30. Stochastic Processes SOLO Chapman–Kolmogorov Equation (continue – 1). Particularization to Markov chains: let p_{x,t|x0,t0}(x,t | x0,t0) be the probability density function of the Markov process x(t) given that x(t0) = x0, with t0 < t2 < t. Then
$$p_{x,t\mid x_0,t_0}\big(x,t\mid x_0,t_0\big) = \int_{-\infty}^{+\infty} p_{x,t\mid x_2,t_2}\big(x,t\mid x_2,t_2\big)\;p_{x_2,t_2\mid x_0,t_0}\big(x_2,t_2\mid x_0,t_0\big)\,d x_2$$
Geometric Interpretation of the Chapman–Kolmogorov Equation. Table of Content
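For a finite-state, homogeneous Markov chain the same identity reduces to matrix multiplication. The sketch below (an added illustration with an arbitrary 3-state transition matrix) compares the two-step transition probabilities P·P with a Monte Carlo estimate from simulated paths.

```python
# Chapman-Kolmogorov for a discrete Markov chain: P(2) = P @ P.
import numpy as np

rng = np.random.default_rng(7)
P = np.array([[0.7, 0.2, 0.1],
              [0.3, 0.4, 0.3],
              [0.1, 0.3, 0.6]])          # one-step transition matrix (rows sum to 1)

P2 = P @ P                               # two-step transition matrix via the intermediate state

trials, counts = 50_000, np.zeros(3)     # Monte Carlo check starting from state 0
for _ in range(trials):
    s = rng.choice(3, p=P[0])            # first step
    s = rng.choice(3, p=P[s])            # second step
    counts[s] += 1

print("P(2) row 0 analytic :", np.round(P2[0], 4))
print("P(2) row 0 simulated:", np.round(counts / trials, 4))
```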
  • 31. Stochastic ProcessesSOLO Kiyosi Itô 1915 - 2008 In 1942, Itô began to reconstruct from scratch the concept of stochastic integrals, and its associated theory of analysis. He created the theory of stochastic differential equations, which describe motion due to random events. In 1945 Ito was awarded his doctorate. He continued to develop his ideas on stochastic analysis with many important papers on the topic. Among them were “On a stochastic integral equation” (1946), “On the stochastic integral” (1948), “Stochastic differential equations in a differentiable manifold” (1950), “Brownian motions in a Lie group” (1950), and “On stochastic differential equations” (1951). Itô Lemma and Itô Processes
  • 32. Itô Lemma and Itô processes. SOLO Stochastic Processes. In its simplest form, Itô's lemma states that for an Itô process
$$dX_t = \mu_t\,dt + \sigma_t\,dB_t$$
and any twice continuously differentiable function f on the real numbers, f(X) is also an Itô process satisfying
$$df(X_t) = f'(X_t)\,dX_t + \tfrac12\,f''(X_t)\,\sigma_t^2\,dt = \sigma_t\,f'(X_t)\,dB_t + \big(\mu_t\,f'(X_t) + \tfrac12\,\sigma_t^2\,f''(X_t)\big)\,dt$$
Or, more extended: let X(t) be an Itô process given by the equation above and let f(t,x) be a function with continuous first- and second-order partial derivatives. Then, by Itô's lemma,
$$df(t,X_t) = \Big(\frac{\partial f}{\partial t} + \mu_t\,\frac{\partial f}{\partial x} + \tfrac12\,\sigma_t^2\,\frac{\partial^2 f}{\partial x^2}\Big)\,dt + \sigma_t\,\frac{\partial f}{\partial x}\,dB_t$$
  • 33. Itô Lemma and Itô processes (continue – 1). SOLO Stochastic Processes. Informal derivation: a formal proof of the lemma requires us to take the limit of a sequence of random variables, which is not done here. Instead, we can derive Itô's lemma by expanding a Taylor series and applying the rules of stochastic calculus. Assume the Itō process is in the form
$$dx = a\,dt + b\,dB$$
Expanding f(x,t) in a Taylor series in x and t we have
$$df = \frac{\partial f}{\partial t}\,dt + \frac{\partial f}{\partial x}\,dx + \tfrac12\,\frac{\partial^2 f}{\partial x^2}\,dx^2 + \cdots$$
and substituting a dt + b dB for dx gives
$$df = \frac{\partial f}{\partial t}\,dt + \frac{\partial f}{\partial x}\,(a\,dt + b\,dB) + \tfrac12\,\frac{\partial^2 f}{\partial x^2}\,\big(a^2\,dt^2 + 2\,a\,b\,dt\,dB + b^2\,dB^2\big) + \cdots$$
In the limit as dt tends to 0, the dt² and dt dB terms disappear but the dB² term tends to dt; the latter holds because E{dB²} = dt while the variance of dB² is of order dt², so dB² converges to dt in mean square. Deleting the dt² and dt dB terms, substituting dt for dB², and collecting the dt and dB terms, we obtain
$$df = \Big(\frac{\partial f}{\partial t} + a\,\frac{\partial f}{\partial x} + \tfrac12\,b^2\,\frac{\partial^2 f}{\partial x^2}\Big)\,dt + b\,\frac{\partial f}{\partial x}\,dB$$
as required. Table of Content
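The dB² → dt rule can be seen numerically. The following sketch (illustrative, added here) applies the lemma with f(x) = x² to a discretized Brownian path (a = 0, b = 1) and checks that summing 2B dB + dt along the path reproduces B_T².

```python
# Pathwise check of Ito's lemma for f(x) = x^2 and Brownian motion: d(B^2) = 2 B dB + dt.
import numpy as np

rng = np.random.default_rng(8)
T, n = 1.0, 100_000
dt = T / n
dB = np.sqrt(dt) * rng.standard_normal(n)
B = np.concatenate(([0.0], np.cumsum(dB)))

lhs = B[-1] ** 2 - B[0] ** 2                 # f(B_T) - f(B_0)
rhs = np.sum(2.0 * B[:-1] * dB + dt)         # Ito expansion summed along the path
print("f(B_T) - f(B_0) =", round(lhs, 5), "   Ito sum =", round(rhs, 5))
```

The two numbers agree to within the O(√dt) discretization error, while the ordinary chain rule (which omits the dt term) would be off by roughly T.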
  • 34. Ruslan L. Stratonovich (1930 – 1997) Stratonovich invented a stochastic calculus which serves as an alternative to the Itô calculus; the Stratonovich calculus is most natural when physical laws are being considered. The Stratonovich integral appears in his stochastic calculus. He also solved the problem of optimal non-linear filtering based on his theory of conditional Markov processes, which was published in his papers in 1959 and 1960. The Kalman-Bucy (linear) filter (1961) is a special case of Stratonovich's filter. He also developed the value of information theory (1965). His latest book was on non-linear non-equilibrium thermodynamics. SOLO Stratonovich Stochastic Calculus Stochastic Processes Table of Content
  • 35. Fokker–Planck Equation SOLO Stochastic Processes. (Figure: a solution to the one-dimensional Fokker–Planck equation, with both the drift and the diffusion term; the initial condition is a Dirac delta function at x = 1, and the distribution drifts towards x = 0.) The Fokker–Planck equation describes the time evolution of the probability density function of the position of a particle, and can be generalized to other observables as well. It is named after Adriaan Fokker and Max Planck and is also known as the Kolmogorov forward equation. The first use of the Fokker–Planck equation was the statistical description of Brownian motion of a particle in a fluid. In one spatial dimension x, the Fokker–Planck equation for a process with drift D1(x,t) and diffusion D2(x,t) is
$$\frac{\partial f(x,t)}{\partial t} = -\frac{\partial}{\partial x}\big[D_1(x,t)\,f(x,t)\big] + \frac{\partial^2}{\partial x^2}\big[D_2(x,t)\,f(x,t)\big]$$
More generally, the time-dependent probability distribution may depend on a set of N macrovariables xi. The general form of the Fokker–Planck equation is then
$$\frac{\partial f}{\partial t} = -\sum_{i=1}^{N}\frac{\partial}{\partial x_i}\big[D_1^{i}(x_1,\dots,x_N)\,f\big] + \sum_{i=1}^{N}\sum_{j=1}^{N}\frac{\partial^2}{\partial x_i\,\partial x_j}\big[D_2^{ij}(x_1,\dots,x_N)\,f\big]$$
where D1 is the drift vector and D2 the diffusion tensor; the latter results from the presence of the stochastic force. Adriaan Fokker 1887 – 1972. Max Planck 1858 – 1947. Adriaan Fokker, „Die mittlere Energie rotierender elektrischer Dipole im Strahlungsfeld", Annalen der Physik 43 (1914) 810–820. Max Planck, „Über einen Satz der statistischen Dynamik und eine Erweiterung in der Quantentheorie", Sitzungsberichte der Preussischen Akademie der Wissenschaften (1917) 324–341.
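A small finite-difference sketch (added illustration; the drift D1 = −x and the constant D2 are arbitrary choices) reproduces the behaviour described in the figure caption: a narrow initial peak near x = 1 drifts toward x = 0 while spreading.

```python
# Explicit finite-difference solution of df/dt = -d/dx[D1 f] + d^2/dx^2[D2 f]
# with D1 = -x (drift toward the origin) and constant D2.
import numpy as np

x = np.linspace(-3, 3, 301)
dx = x[1] - x[0]
dt = 1e-3                               # satisfies D2*dt/dx^2 < 0.5 for stability
D1 = -x
D2 = 0.1

f = np.exp(-(x - 1.0) ** 2 / (2 * 0.05 ** 2))
f /= f.sum() * dx                       # normalized narrow peak near x = 1

for _ in range(int(2.0 / dt)):          # integrate to t = 2
    drift = D1 * f
    f[1:-1] += dt * (-(drift[2:] - drift[:-2]) / (2 * dx)
                     + D2 * (f[2:] - 2 * f[1:-1] + f[:-2]) / dx ** 2)

print("total probability ~", round(f.sum() * dx, 4),
      "   mean ~", round((x * f).sum() * dx, 4),
      "   (drifting toward 0; exp(-2) =", round(np.exp(-2), 4), ")")
```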
  • 36. Fokker–Planck Equation (continue – 1). SOLO. The Fokker–Planck equation can be used for computing the probability densities of stochastic differential equations. Consider the Itô stochastic differential equation
$$dX_t = \mu(X_t,t)\,dt + \sigma(X_t,t)\,dW_t$$
where Xt is the state and Wt is a standard M-dimensional Wiener process. If the initial probability distribution is p(x,0), then the probability distribution p(x,t) of the state is given by the Fokker–Planck equation above, with the drift and diffusion terms
$$D_1^{i}(x,t) = \mu_i(x,t),\qquad D_2^{ij}(x,t) = \tfrac12\sum_{k}\sigma_{ik}(x,t)\,\sigma_{jk}(x,t)$$
Similarly, a Fokker–Planck equation can be derived for Stratonovich stochastic differential equations. In this case, noise-induced drift terms appear if the noise strength is state-dependent.
  • 37. Fokker – Planck Equation (continue – 2) Derivation of the Fokker–Planck Equation SOLO Start with ( ) ( ) ( )11|1, 111 |, −−− −−− = kxkkxxkkxx xpxxpxxp kkkkk and ( ) ( ) ( ) ( )∫∫ +∞ ∞− −−− +∞ ∞− −− −−− == 111|11, 111 |, kkxkkxxkkkxxkx xdxpxxpxdxxpxp kkkkkk define ( ) ( )ttxxtxxttttt kkkk ∆−==∆−== −− 11 ,,, ( ) ( )[ ] ( ) ( ) ( ) ( )[ ] ( ) ( )[ ] ( )∫ +∞ ∞− ∆−∆− ∆−∆−∆−= ttxdttxpttxtxptxp ttxttxtxtx || Let use the Characteristic Function of ( ) ( ) ( ) ( ) ( )[ ]{ } ( ) ( ) ( ) ( )[ ] ( ) ( ) ( ) ( )ttxtxtxtxdttxtxpttxtxss ttxtxttxtx ∆−−=∆∆−∆−−−=Φ ∫ +∞ ∞− ∆−∆−∆ |exp: || ( ) ( ) ( ) ( )[ ]ttxtxp ttxtx ∆−∆− || The inverse transform is ( ) ( ) ( ) ( )[ ] ( ) ( )[ ]{ } ( ) ( ) ( )∫ ∞+ ∞− ∆−∆∆− Φ∆−−=∆− j j ttxtxttxtx sdsttxtxs j ttxtxp || exp 2 1 | π Using Chapman-Kolmogorov Equation we obtain: ( ) ( )[ ] ( ) ( )[ ]{ } ( ) ( ) ( ) ( ) ( ) ( ) ( )[ ] ( ) ( )[ ] ( ) ( ) ( )[ ]{ } ( ) ( ) ( ) ( ) ( )[ ] ( )ttxdsdttxpsttxtxs j ttxdttxpsdsttxtxs j txp j j ttxttxtx ttx ttxtxp j j ttxtxtx ttxtx ∆−∆−Φ∆−−= ∆−∆−Φ∆−−= ∫ ∫ ∫ ∫ ∞+ ∞− ∞+ ∞− ∆−∆−∆ +∞ ∞− ∆− ∆− ∞+ ∞− ∆−∆ ∆− | | | exp 2 1 exp 2 1 | π π    Stochastic Processes
  • 38. Fokker – Planck Equation (continue – 3) Derivation of the Fokker–Planck Equation (continue – 1) SOLO The Characteristic Function can be expressed in terms of the moments about x (t-Δt) as: ( ) ( )[ ] ( ) ( )[ ]{ } ( ) ( ) ( ) ( ) ( )[ ] ( )ttxdsdttxpsttxtxs j txp j j ttxttxtxtx ∆−∆−Φ∆−−= ∫ ∫ +∞ ∞− ∞+ ∞− ∆−∆−∆ |exp 2 1 π ( ) ( ) ( ) ( ) ( ) ( ) ( ) ( )[ ] ( ){ }∑ ∞ = ∆−∆∆−∆ ∆−∆−− − +=Φ 1 || | ! 1 i i ttxtx i ttxtx ttxttxtxE i s s Therefore ( ) ( )[ ] ( ) ( )[ ]{ } ( ) ( ) ( ) ( ) ( )[ ] ( ){ } ( ) ( )[ ] ( )ttxdsdttxpttxttxtxE i s ttxtxs j txp j j ttx i i ttxtx i tx ∆−∆−       ∆−∆−− − +∆−−= ∫ ∫ ∑ +∞ ∞− ∞+ ∞− ∆− ∞ = ∆− 1 | | ! 1exp 2 1 π Use the fact that ( ) ( ) ( )[ ]{ } ( ) ( ) ( )[ ] ( )[ ] ,2,1,01exp 2 1 = ∂ ∆−−∂ −=∆−−−∫ ∞+ ∞− i tx ttxtx sdttxtxss j i i i j j i δ π ( ) ( )[ ] ( ) ( )[ ]{ } ( ) ( )[ ] ( ) ( ) ( ) ( )[ ] ( )[ ] ( ) ( )[ ] ( ){ } ( ) ( )[ ] ( )∫∑ ∫ ∫ ∞+ ∞− ∞ = ∆− +∞ ∞− ∆− ∞+ ∞− ∆−∆−∆−∆−− ∂ ∆−−∂− + ∆−∆−∆−−= 1 | ! 1 exp 2 1 i ttx i i ii ttx j j tx ttxdttxpttxttxtxE tx ttxtx i ttxdttxpsdttxtxs j txp δ π where δ [u] is the Dirac delta function: [ ] { } ( ) [ ] ( ) ( ) ( ) ( ) ( )000..0exp 2 1 FFFtsuFFduuuFsdus j u j j ==∀== −+ +∞ ∞− ∞+ ∞− ∫∫ δ π δ Stochastic Processes
  • 39. Fokker – Planck Equation (continue – 4) Derivation of the Fokker–Planck Equation (continue – 2) SOLO [ ] ( ){ } ( ) [ ] ( ) ( ) ( ) ( ) ( )afafaftsufufduuaufsduas j ua au j j ==∀=−−=− −+= +∞ ∞− ∞+ ∞− ∫∫ ..exp 2 1 δ π δ [ ] ( ){ } ( ) ( ) { } ( ) ( ) { }∫∫∫ ∞+ ∞− ∞+ ∞− ∞+ ∞− =→=− − =− j j j j j j sdussFs j uf du d sdussF j ufsduass j ua ud d exp 2 1 exp 2 1 exp 2 1 πππ δ ( ) [ ] ( ) ( ){ } ( ) ( ){ } { } ( ) { } { } ( ) ( ) au j j j j j j j j ud ufd sdsFass j sdduusufass j sdduuasufs j dusduass j ufduua ud d uf = ∞+ ∞− ∞+ ∞− ∞+ ∞− ∞+ ∞− +∞ ∞− +∞ ∞− ∞+ ∞− +∞ ∞− −= − =− − = − − =− − =− ∫∫ ∫ ∫ ∫∫ ∫∫ exp 2 1 expexp 2 1 exp 2 1 exp 2 1 ππ ππ δ [ ] ( ) ( ){ } ( ) ( ) { } ( ) ( ) { }∫∫∫ ∞+ ∞− ∞+ ∞− ∞+ ∞− =→=− − =− j j i i ij j j j i i i i sdussFs j uf du d sdussF j ufsduass j ua ud d exp 2 1 exp 2 1 exp 2 1 πππ δ ( ) [ ] ( ) ( ) ( ){ } ( ) ( ) ( ){ } ( ) { } ( ) { } ( ) ( ) { } ( ) ( ) au i i i j j i ij j i i j j i ij j i i i i ud ufd sdassFs j sdduusufass j sdduuasufs j dusduass j ufduua ud d uf = −= − =− − = − − =− − =− ∫∫ ∫ ∫ ∫∫ ∫∫ ∞+ ∞− ∞+ ∞− ∞+ ∞− ∞+ ∞− +∞ ∞− +∞ ∞− ∞+ ∞− +∞ ∞− 1exp 2 1 expexp 2 1 exp 2 1 exp 2 1 ππ ππ δ Useful results related to integrals involving Delta (Dirac) function Stochastic Processes
  • 40. Fokker – Planck Equation (continue – 5) Derivation of the Fokker–Planck Equation (continue – 3) SOLO ( ) ( )[ ]{ } ( ) ( )[ ] ( ) ( )[ ] ( ) ( ) ( )[ ] ( ) ( )[ ] ( ) ( ) ( )[ ]txpttxdttxpttxtxttxdttxpsdttxtxs j ttxttxttx ttxtx j j ∆− +∞ ∞− ∆− +∞ ∞− ∆− ∆−− ∞+ ∞− =∆−∆−∆−−=∆−∆−∆−− ∫∫ ∫ δ π δ    exp 2 1 ( ) ( ) ( )[ ] ( )[ ] ( ) ( ) ( ) ( )[ ] ( ){ } ( ) ( )[ ] ( ) ( ) ( ) ( )[ ] ( )[ ] ( ) ( ) ( ) ( )[ ] ( ){ } ( ) ( )[ ] ( ) ( ) ( ) ( ) ( ) ( )[ ] ( ){ } ( ) ( )[ ]( ) ( )[ ]∑ ∑ ∫ ∫∑ ∞ = =∆ ∆−∆− ∞ = ∞+ ∞− ∆−∆− +∞ ∞− ∞ = ∆−∆− ∂ ∆−∆−−∂− = ∆−∆−∆−∆−− ∂ ∆−−∂− = ∆−∆−∆−∆−− ∂ ∆−−∂− 1 0 | 1 | 1 | | ! 1 | ! 1 | ! 1 i t i ttx i ttxtx ii i ttx i ttxtxi ii i ttx i ttxtxi ii tx txpttxttxtxE i ttxdttxpttxttxtxE tx ttxtx i ttxdttxpttxttxtxE tx ttxtx i δ δ ( ) [ ] ( ) ( ) ( ) [ ] [ ] ( ) auau i i i i i i i i i ud ufd duua uad d uf ud ufd duua ud d uf == =− − →−=− ∫∫ +∞ ∞− +∞ ∞− δδ 1We found ( ) ( )[ ] ( ) ( )[ ] ( ) ( ) ( ) ( ) ( )[ ] ( ){ } ( ) ( )[ ]( ) ( )[ ]∑ ∞ = =∆ ∆−∆− ∆− ∂ ∆−∆−−∂− += 1 0 | | ! 1 i t i ttx i ttxtx ii ttxtx tx txpttxttxtxE i txptxp ( ) ( )[ ] ( ) ( )[ ] ( ) ( ) ( )[ ] ( ){ } ( ) ( )[ ]( ) ( )[ ]∑ ∞ = ∆− →∆ ∆− →∆ ∂ ∆−∆−−∂ ∆ − = ∆ − 1 00 |1 lim ! 1 lim i i ttx ii t i ttxtx t tx txpttxttxtxE tit txptxp Therefore Rearranging, dividing by Δt, and tacking the limit Δt→0, we obtain: Stochastic Processes
  • 41. Fokker – Planck Equation (continue – 6) Derivation of the Fokker–Planck Equation (continue – 4) SOLO We found ( ) ( )[ ] ( ) ( )[ ] ( ) ( ) ( ) ( ) ( )[ ] ( ){ } ( ) ( )[ ]( ) ( )[ ]∑ ∞ = ∆−∆− →∆ ∆− →∆ ∂ ∆−∆−−∂ ∆ − = ∆ − 1 | 00 |1 lim ! 1 lim i i ttx i ttxtx i t i ttxtx t tx txpttxttxtxE tit txptxp Define: ( ) ( )[ ] ( ) ( ) ( ) ( )[ ] ( ){ } t ttxttxtxE txtxm i ttxtx t i ∆ ∆−∆−− =− ∆− →∆ − | lim: | 0 Therefore ( ) ( )[ ] ( ) ( ) ( )[ ] ( ) ( )[ ]( ) ( )[ ]∑ ∞ = − ∂ −∂− = ∂ ∂ 1 ! 1 i i tx iii tx tx txptxtxm it txp ( ) ( )ttxtx t ∆−= →∆ − 0 lim: and: This equation is called the Stochastic Equation or Kinetic Equation. It is a partial differential equation that we must solve, with the initial condition: ( ) ( )[ ] ( )[ ]000 0 txptxp tx === Stochastic Processes
  • 42. Fokker–Planck Equation (continue – 7). Derivation of the Fokker–Planck Equation (continue – 5). SOLO. We want to find p_{x(t)}[x(t)] where x(t) is the solution of
$$\frac{d x(t)}{dt} = f(x,t) + n_g(t),\qquad t\in[t_0,t_f]$$
with n_g(t) a Wiener (Gauss) process: E{n_g(t)} = 0 and E{[n_g(t) − \bar n_g][n_g(τ) − \bar n_g]} = Q(t) δ(t − τ). The derivate moments are therefore
$$m_1 = \lim_{\Delta t\to 0}\frac{E\{x(t)-x(t-\Delta t)\mid x(t-\Delta t)\}}{\Delta t} = E\Big\{\frac{dx}{dt}\Big|\,x\Big\} = f(x,t) + E\{n_g\} = f(x,t)$$
$$m_2 = \lim_{\Delta t\to 0}\frac{E\{[x(t)-x(t-\Delta t)]^2\mid x(t-\Delta t)\}}{\Delta t} = Q(t),\qquad m_i = 0,\ \ i>2$$
Therefore we obtain the Fokker–Planck equation
$$\frac{\partial p_{x(t)}[x(t)]}{\partial t} = -\frac{\partial}{\partial x(t)}\Big[f\big(x(t),t\big)\,p_{x(t)}[x(t)]\Big] + \frac12\,Q(t)\,\frac{\partial^2 p_{x(t)}[x(t)]}{\partial x(t)^2}$$
Stochastic Processes
  • 43. Kolmogorov forward equation (KFE) and its adjoint the Kolmogorov backward equation (KBE). Andrey Nikolaevich Kolmogorov 1903 – 1987. SOLO Stochastic Processes. The Kolmogorov forward equation (KFE) and its adjoint, the Kolmogorov backward equation (KBE), are partial differential equations (PDE) that arise in the theory of continuous-time, continuous-state Markov processes. Both were published by Andrey Kolmogorov in 1931. Later it was realized that the KFE was already known to physicists under the name Fokker–Planck equation; the KBE, on the other hand, was new.
The Kolmogorov forward equation addresses the following problem. We have information about the state x of the system at time t (namely a probability distribution pt(x)); we want to know the probability distribution of the state at a later time s > t. The adjective 'forward' refers to the fact that pt(x) serves as the initial condition and the PDE is integrated forward in time. (In the common case where the initial state is known exactly, pt(x) is a Dirac delta function centered on the known initial state.)
$$\frac{\partial p(x,t)}{\partial t} = -\frac{\partial}{\partial x}\big[D_1(x,t)\,p(x,t)\big] + \frac{\partial^2}{\partial x^2}\big[D_2(x,t)\,p(x,t)\big]\qquad\text{(KFE)}$$
The Kolmogorov backward equation, on the other hand, is useful when we are interested at time t in whether at a future time s the system will be in a given subset of states, sometimes called the target set. The target is described by a given function us(x) which is equal to 1 if state x is in the target set and zero otherwise. We want to know for every state x at time t (t < s) what is the probability of ending up in the target set at time s (sometimes called the hit probability). In this case us(x) serves as the final condition of the PDE, which is integrated backward in time, from s to t:
$$-\frac{\partial p(x,t)}{\partial t} = D_1(x,t)\,\frac{\partial p(x,t)}{\partial x} + D_2(x,t)\,\frac{\partial^2 p(x,t)}{\partial x^2}\qquad\text{(KBE)}$$
for t ≤ s, subject to the final condition p(x,s) = us(x).
  • 44. Kolmogorov forward equation (KFE) and its adjoint the Kolmogorov backward equation (KBE) (continue – 1). SOLO Stochastic Processes. The Kolmogorov backward equation is useful when we are interested, at time t, in whether at a future time s the system will be in a given target set of states, described by a function us(x) equal to 1 if x is in the target set and zero otherwise; us(x) serves as the final condition of the PDE, which is integrated backward in time from s to t, and the solution p(x,t) is the hit probability.
Formulating the Kolmogorov backward equation: assume that the system state x(t) evolves according to the stochastic differential equation
$$dX_t = \mu(X_t,t)\,dt + \sigma(X_t,t)\,dW_t$$
Then, using Itô's lemma on p(x,t), the Kolmogorov backward equation is
$$-\frac{\partial p(x,t)}{\partial t} = \mu(x,t)\,\frac{\partial p(x,t)}{\partial x} + \tfrac12\,\sigma^2(x,t)\,\frac{\partial^2 p(x,t)}{\partial x^2}$$
for t ≤ s, subject to the final condition p(x,s) = us(x). Table of Content
  • 45. Bartlett-Moyal Theorem SOLO Stochastic Processes Let Φx(t)|x(t1) (s,t) be the Characteristic Function of the Markov Process x (t), t Tɛ (some interval). Assume the following: (1) Φx(t)|x(t1) (s,t) is continuous differentiable in t, t T.ɛ ( ) ( ) ( ) ( )[ ]{ } ( ){ } ( )( )txtsg t txtxttxsE T txtx ,; |1exp1| ≤ ∆ −−∆+ (2) where E {| g|} is bounded on T. (3) then ( ) ( ) ( ) ( )[ ]{ } ( ){ } ( )( )txts t txtxttxsE T txtx t ,;: |1exp lim 1| 0 φ= ∆ −−∆+ →∆ ( ) ( ) ( )( ) ( ) ( ) ( ){ } ( )( ) ( ){ }1| 1| |,;exp |, 1 1 txtxtstxsE t txts T txtx txtx φ= ∂ Φ∂ ( ) ( ) ( ) ( ){ } ( ) ( ) ( ) ( )[ ] ( )∫ +∞ ∞− −=Φ txdtxtxptxsts txtx T txtx 1|| |exp, 11 The Characteristic Function of ( ) ( ) ( ) ( )[ ] 11| |1 tttxtxp txtx > Maurice Stevenson Bartlett 1910 - 2002 Jose Enrique Moyal 1910 - 1998 Theorem 1
  • 46. Bartlett-Moyal Theorem SOLO Stochastic Processes ( ) ( ) ( )( ) ( ) ( ) ( )( ) ( ) ( ) ( )( ) t txtstxtts t txts txtxtxtx t txtx ∆ Φ−∆+Φ = ∂ Φ∂ →∆ 1|1| 0 1| |,|, lim |, 111 Proof By definition ( ) ( ) ( ) ( ){ } ( ) ( ) ( ) ( )[ ] ( ) ( ) ( ) ( ){ } ( ){ }1|1|| |exp|exp, 111 txtxsEtxdtxtxptxsts T txtxtxtx T txtx −=−=Φ ∫ +∞ ∞− ( ) ( ) ( ) ( ){ } ( ) ( ) ( ) ( )[ ] ( )∫ +∞ ∞− ∆+∆+∆+−=∆+Φ ttxdtxttxpttxstts txtx T txtx 1|| |exp, 11 But since x (t) is a Markov process, we can use the Chapman-Kolmogorov Equation ( ) ( ) ( ) ( )[ ] ( ) ( ) ( ) ( )[ ] ( ) ( ) ( ) ( )[ ] ( )∫ ∆+=∆+ txdtxtxptxttxptxttxp txtxtxtxtxtx 1||1| ||| 111 ( ) ( ) ( ) ( ){ } ( ) ( ) ( ) ( )[ ] ( ) ( ) ( ) ( )[ ] ( ) ( )∫ ∫ +∞ ∞− ∆+∆+∆+−=∆+Φ ttxdtxdtxtxptxttxpttxstts txtxtxtx T txtx 1||| ||exp, 111 ( ){ } ( ) ( ) ( ) ( )[ ] ( ) ( )[ ]{ } ( ) ( ) ( ) ( )[ ] ( ) ( )txdttxdtxttxptxttxstxtxptxs txtx T txtx T ∫ ∫ ∆+∆+−∆+−−= |exp|exp 11 |1| ( ) ( ) ( )[ ] ( ) ( ) ( ) ( )[ ]( ) ( )[ ] ( ){ }1|| ||expexp 11 txtxtxttxsEtxsE T txtx T txtx −∆+−⋅−=
  • 47. Bartlett-Moyal Theorem SOLO Stochastic Processes ( )( ) ( )( ) ( )( ) t txtstxtts t txts xx t x ∆ Φ−∆+Φ = ∂ Φ∂ →∆ 11 0 1 |,|, lim |, Proof (continue – 1) We found ( ) ( ) ( ) ( ){ } ( ) ( ) ( ) ( )[ ] ( ) ( ) ( ) ( ){ } ( ){ }1|1|| |exp|exp, 111 txtxsEtxdtxtxptxsts T txtxtxtx T txtx −=−=Φ ∫ +∞ ∞− ( ) ( ) ( ) ( ){ } ( ) ( ) ( ) ( )[ ] ( )∫ +∞ ∞− ∆− ∆+∆+∆+−=∆+Φ ttxdtxttxpttxstts ttxtx T txtx 1|| |exp,1 ( ) ( ) ( )[ ] ( ) ( ) ( ) ( )[ ]( ) ( )[ ] ( ){ }1|| ||expexp 11 txtxtxttxsEtxsE T txtx T txtx −∆+−⋅−= Therefore ( ) ( ) ( )[ ] ( ) ( ) ( ) ( )[ ]( ) ( )[ ] ( ) ( ) ( ) ( )[ ] ( ) ( ) ( ) ( )[ ]( ) ( )[ ] ( )( ) ( ) ( ) ( ) ( )[ ] ( )( ) ( ){ }1| 1 ,; | 0 | 1 | 0 | |,;exp | |1exp limexp | 1|exp limexp 1 1 1 1 1 txtxtstxsE tx t txtxttxsE txsE tx t txtxttxsE txsE T txtx txts T txtx t T txtx T txtx t T txtx φ φ ⋅−=             ∆ −−∆+− ⋅−=         ∆ −−∆+− ⋅−= →∆ →∆    q.e.d.
  • 48. Bartlett-Moyal Theorem SOLO Stochastic Processes Discussion about Bartlett-Moyal Theorem (1) The assumption that x (t) is a Markov Process is essential to the derivation ( )( ) ( ) ( ) ( ) ( )[ ] td txxdsE txts T txtx |1exp :,; 1| −− =φ (2) The function is called Itô Differential of the Markov Process, or Infinitesimal Generator of Markov Process ( )( )txts ,;φ (3) The function is all we need to define the Stochastic Process (this will be proven in the next Lemma) ( )( )txts ,;φ
  • 49. Bartlett-Moyal Theorem SOLO Stochastic Processes Lemma Let x(t) be an (nx1) Vector Markov Process generated by ( ) nddttxfxd += , where pg ndndnd += pnd - is an (nx1) Poisson Process with Zero Mean and Rate Vector and Jump Probability Density pa(α). gnd - is an (nx1) Wiener (Gauss) Process with Zero Mean and Covariance ( ) ( ){ } ( )dttQtndtndE T gg = then ( )( ) ( ) ( )[ ]∑= −−−−= n i iai TT sMsQstxfstxts i 1 1 2 1 ,,; λφ Proof We have ( )( ) ( ) ( ) ( ) ( )[ ] ( ) ( ) ( )( )[ ] ( ){ } td txndnddttxfsE td txxdsE txts pg T txtx T txtx |1,exp|1exp :,; 11 || −++− = −− =φ ( ) ( ) ( )( )[ ] ( ){ } ( )[ ] [ ]{ } [ ]{ }p T g TT pg T txtx ndsEndsEdttxfstxndnddttxfsE −−−=++− expexp,exp|,exp1| Because are independentpg ndndxd ,, [ ] ( ) ( )dtdtdtdtndinjumponeonlyP i n ij jii 01 +=−= ∏≠ λλλ
  • 50. Bartlett-Moyal Theorem SOLO Stochastic Processes Lemma Let x(t) be an (nx1) Vector Markov Process generated by ( ) pg ndnddttxfxd ++= , then ( )( ) ( ) ( )[ ]∑= −−−−= n i iai TT sMsQstxfstxts i 1 1 2 1 ,,; λφ Proof (continue – 1) Because is Gaussiangnd [ ]{ }     −=− dtsQsndsE T g T 2 1 expexp The Characteristic Function of the Generalized Poisson Process can be evaluated as follows. Let note that the Probability of two or more jumps occurring at dt is 0(dt)→0 [ ]{ } [ ] [ ]{ } [ ]∑= −+⋅=− n i iiip T ndinjumponeonlyPasEjumpsnoPndsE 1 exp1exp But [ ] ( ) ( )dtdtdtjumpsnoP n i i n i i 011 11 +−=−= ∑∏ == λλ [ ] ( ) ( )dtdtdtdtndinjumponeonlyP i n ij jii 01 +=−= ∏≠ λλλ [ ]{ } [ ]{ } ( ) ( ) ( )[ ]∑∑∑ === −−=+−+−=− n i iai n i i sM ii n i ip T sMdtdtdtasEdtndsE i iia 111 110exp1exp λλλ   
  • 51. Bartlett-Moyal Theorem SOLO Stochastic Processes Lemma Let x(t) be an (nx1) Vector Markov Process generated by ( ) pg ndnddttxfxd ++= , then ( )( ) ( ) ( )[ ]∑= −−−−= n i iai TT sMsQstxfstxts i 1 1 2 1 ,,; λφ Proof (continue – 3) We found [ ]{ }     −=− dtsQsndsE T g T 2 1 expexp [ ]{ } [ ]{ } ( ) ( ) ( )[ ]∑∑∑ === −−=+−+−=− n i iai n i i sM ii n i ip T sMdtdtdtasEdtndsE i ita 111 110exp1exp λλλ    ( )( ) ( ) ( ) ( ) ( )[ ] ( ) ( ) ( )( )[ ] ( ){ } ( )[ ] [ ]{ } [ ]{ } ( )[ ] ( )[ ] td sMdtdtsQsdttxfs td ndsEndsEdttxfs td txndnddttxfsE td txxdsE txts n i iai TT p T g TT pg T txtx T txtx i 111 2 1 exp,exp 1expexp,exp |1,exp|1exp :,; 1 || 11 −      −−    −− = −−−− = −++− = −− = ∑= λ φ ( ) ( )[ ] ( ) ( )[ ] ( ) ( )[ ]∑ ∑ = = −−−−= −      −−    +−+− = n i iai TT n i iai TT sMdtsQstxfs td sMdtdtdtsQsdtdttxfs i i 1 1 22 1 2 1 , 1110 2 1 10,1 λ λ q.e.d.
  • 52. Bartlett-Moyal Theorem SOLO Stochastic Processes Theorem 2 Let x(t) be an (nx1) Vector Markov Process generated by ( ) pg ndnddttxfxd ++= , ( ) [ ]∑∑∑∑ == == ∗+−+ ∂∂ ∂ + ∂ ∂ −= ∂ ∂ n i ai n i n j ji ij n i i i i ppp xx pQ x pf t p 11 1 2 1 2 1 λ Let be the Transition Probability Density Function for the Markov Process x(t). Then p satisfies the Partial Differential Equation ( ) ( ) ( ) ( )( ) ptxttxp txtx =1| |,1 where the convolution (*) is defined as ( ) ( ) ( ) ( )( )∫ −=∗ initxtxiiaa vdtxsvspvsppp ii 11| |,,,,: 1  Proof From Theorem 1 and the previous Lemma, we have: ( ) ( ) ( )( ) ( ) ( ) ( ){ } ( )( ) ( ){ } ( ) ( ) ( ){ } ( ) ( )[ ] ( )             −−−−−= −= ∂ Φ∂ ∑= 1 1 | 1| 1 1| |1 2 1 ,exp |,;exp |, 1 1 1 txsMsQstxfstxsE txtxtstxsE t txts n i iai TTT txtx Lemma T txtx Theorem txtx i λ φ ( ) ( ) ( ) ( ){ } ( ) ( ) ( ) ( )[ ] ( ) ( ) ( ) ( ) ( )[ ] ( ) ( ){ } ( ) ( ) ( )∫∫ ∞+ ∞− +∞ ∞− Φ=⇔−=Φ j j txtx T ntxtxtxtx T txtx sdtstxs j txttxptxdtxttxptxsts ,exp 2 1 |,|,exp, 1111 |1|1|| π ( ) ( ) ( ) ( )[ ] ( ) ( ){ } ( ) ( ) ( )∫ ∞+ ∞− Φ ∂ ∂ = ∂ ∂ j j txtx T ntxtx sdts t txs j txttxp t ,exp 2 1 |, 11 |1| π We also have:
  • 53. Bartlett-Moyal Theorem SOLO Stochastic Processes Theorem 2 Let x(t) be an (nx1) Vector Markov Process generated by ( ) pg ndnddttxfxd ++= , ( ) [ ]∑∑∑∑ == == ∗+−+ ∂∂ ∂ + ∂ ∂ −= ∂ ∂ n i ai n i n j ji ij n i i i i ppp xx pQ x pf t p 11 1 2 1 2 1 λ Let be the Transition Probability Density Function for the Markov Process x(t). Then p satisfies the Partial Differential Equation ( ) ( ) ( ) ( )( ) ptxttxp txtx =1| |,1 Proof (continue – 1) ( ) ( ) ( )( ) ( ) ( ) ( ){ } ( )( ) ( ){ } ( ) ( ) ( ){ } ( ) ( )[ ] ( )          −−−−−=−= ∂ Φ∂ ∑= 1 1 |1| 1 1| |1 2 1 ,exp|,;exp |, 11 1 txsMsQstxfstxsEtxtxtstxsE t txts n i iai TTT txtx Lemma T txtx Theorem txtx i λφ ( ) ( ) ( ) ( )[ ] ( ) ( ){ } ( ) ( ) ( )∫ ∞+ ∞− Φ ∂ ∂ = ∂ ∂ j j txtx T ntxtx sdts t txs j txttxp t ,exp 2 1 |, 11 |1| π ( ) ( ){ } ( ) ( ) ( ){ } ( )[ ] ( ){ } ( ) ( ){ } ( ) ( ) ( ) ( )( ) ( )[ ] ( )[ ] ( ) ( ){ } ( ) ( ) ( ) ( ) ( )( ) ( )[ ] ( ) ( ){ } ( ) ( ) ( ) ( ) ( )( ){ } ( ) ( ) ( ) ( ) ( )( )[ ] ( ) ( ) ( ) ( ) ( )( )[ ]1| 1 1| 1| 1| 1| 1| |, |, |,exp 2 1 exp|,exp 2 1 ,exp|exp 2 1 |,expexp 2 1 1 1 1 1 1 1 txtxptxf x txtxptxf sdtxtxptxfLstxs j sdvdtvstxtvptvfstxs j sdvdtvfstvstxtvptxs j sdtxtxfstxsEtxs j txtxix n i i txtxi j j txtx TT n j j T txtx TT n j j TT txtx T n j j TT txtx T n ∇= ∂ ∂ = − = − − = −−= −− ∑∫ ∫ ∫ ∫ ∫ ∫ = ∞+ ∞− ∞+ ∞− ∞+ ∞− ∞+ ∞− π π π π
  • 54. Bartlett-Moyal Theorem SOLO Stochastic Processes Theorem 2 Let x(t) be an (nx1) Vector Markov Process generated by ( ) pg ndnddttxfxd ++= , ( ) [ ]∑∑∑∑ == == ∗+−+ ∂∂ ∂ + ∂ ∂ −= ∂ ∂ n i ai n i n j ji ij n i i i i ppp xx pQ x pf t p 11 1 2 1 2 1 λ Let be the Transition Probability Density Function for the Markov Process x(t). Then p satisfies the Partial Differential Equation ( ) ( ) ( ) ( )( ) ptxttxp txtx =1| |,1 Proof (continue – 2) ( ) ( ) ( )( ) ( ) ( ) ( ){ } ( )( ) ( ){ } ( ) ( ) ( ){ } ( ) ( )[ ] ( )          −−−−−=−= ∂ Φ∂ ∑= 1 1 |1| 1 1| |1 2 1 ,exp|,;exp |, 11 1 txsMsQstxfstxsEtxtxtstxsE t txts n i iai TTT txtx Lemma T txtx Theorem txtx i λφ ( ) ( ) ( ) ( )[ ] ( ) ( ){ } ( ) ( ) ( )∫ ∞+ ∞− Φ ∂ ∂ = ∂ ∂ j j txtx T ntxtx sdts t txs j txttxp t ,exp 2 1 |, 11 |1| π ( ) ( ){ } ( ) ( ) ( ){ } ( ) ( ){ } ( ) ( ){ } ( ) ( ) ( ) ( )( ) ( )[ ] ( ) ( ) ( ){ } ( ) ( ) ( ) ( ) ( )( ) ( )[ ]{ } ( ) ( ){ } ( ) ( ) ( ) ( ) ( )( ){ } ( ) ( ) ( ) ( ) ( )( )[ ] ∑∑∫ ∫ ∫ ∫ ∫ ∫ = = ∞+ ∞− ∞+ ∞− ∞+ ∞− ∞+ ∞− ∂∂ ∂ = − = − − = −= − n i n j ji txtxij j j txtx TT n j j T txtx TT n j j TT txtx T n j j TT txtx T n xx txtxptxQ sdstxtxptQLstxs j sdsvdtvstxtvptQstxs j sdvdstQstvstxtvptxs j sdtxstQstxsEtxs j 1 1 1| 2 1| 1| 1| 1| |, 2 1 |exp 2 1 exp|exp 2 1 exp|exp 2 1 |expexp 2 1 1 1 1 1 1 π π π π
  • 55. Bartlett-Moyal Theorem SOLO Stochastic Processes Theorem 2 Let x(t) be an (nx1) Vector Markov Process generated by ( ) pg ndnddttxfxd ++= , ( ) [ ]∑∑∑∑ == == ∗+−+ ∂∂ ∂ + ∂ ∂ −= ∂ ∂ n i ai n i n j ji ij n i i i i ppp xx pQ x pf t p 11 1 2 1 2 1 λ Let be the Transition Probability Density Function for the Markov Process x(t). Then p satisfies the Partial Differential Equation ( ) ( ) ( ) ( )( ) ptxttxp txtx =1| |,1 Proof (continue – 3) ( ) ( ) ( )( ) ( ) ( ) ( ){ } ( )( ) ( ){ } ( ) ( ) ( ){ } ( ) ( )[ ] ( )             −−−−−=−= ∂ Φ∂ ∑= 1 1 |1| 1 1| |1 2 1 ,exp|,;exp |, 11 1 txsMsQstxfstxsEtxtxtstxsE t txts n i iai TTT txtx Lemma T txtx Theorem txtx i λφ ( ) ( ) ( ) ( )[ ] ( ) ( ){ } ( ) ( ) ( )∫ ∞+ ∞− Φ ∂ ∂ = ∂ ∂ j j txtx T ntxtx sdts t txs j txttxp t ,exp 2 1 |, 11 |1| π ( ) ( ){ } ( ) ( ) ( ){ } [ ]{ }[ ] ( ){ } ( ) ( ){ } ( ) ( ) ( ) ( )( ) ( )[ ] [ ]{ }[ ] ( ) ( ){ } [ ]{ }[ ] ( ) ( ) ( ) ( )( ) ( )[ ]{ } ( ) ( ){ } [ ]{ }[ ] ( ) ( ) ( ) ( )( ){ } ( ) ( ) ( ) ( )( ) ( ) ( ) ( ) ( )( )∫∫ ∫ ∫ ∫ ∫ ∫ −−=−− − = −−− − = −−−= −−− ∞+ ∞− ∞+ ∞− ∞+ ∞− ∞+ ∞− initxtxiiaitxtxi j j txtxiii T n j j T txtxiii T n j j iii T txtx T n j j iii T txtx T n vdtxsvspvsptxtxpsdtxtvpasELtxs j sdvdtvstxtvpasEtxs j sdvdasEtvstxtvptxs j sdtxasEtxsEtxs j i 11|1|1| 1| 1| 1| |,,,,||exp1exp 2 1 exp|exp1exp 2 1 exp1exp|exp 2 1 |exp1expexp 2 1 111 1 1 1 λλλ π λ π λ π λ π ( ) ( ) ( ) ( )( )∫ −=∗ initxtxiiaa vdtxsvspvsppp ii 11| |,,,,: 1 Table of Content
  • 56. Fokker- Planck Equation SOLO Stochastic Processes Feller- Kolmogorov Equation Let x(t) be an (nx1) Vector Markov Process generated by ( ) pnddttxfxd += , ( ) [ ]∑∑ == ∗+−+ ∂ ∂ −= ∂ ∂ n i ai n i i i i ppp x pf t p 11 λ Let be the Transition Probability Density Function for the Markov Process x(t). Then p satisfies the Partial Differential Equation ( ) ( ) ( ) ( )( ) ptxttxp txtx =1| |,1 Proof where the convolution (*) is defined as ( ) ( ) ( ) ( )( )∫ −=∗ initxtxiiaa vdtxsvspvsppp ii 11| |,,,,: 1  Andrey Nikolaevich Kolmogorov 1903-1987 Derived from Theorem 2 by tacking 0=gnd
  • 57. Fokker–Planck Equation SOLO Stochastic Processes. Fokker–Planck Equation: let x(t) be an (n×1) vector Markov process generated by dx = f(x,t) dt + dn_g, and let p = p_{x(t)|x(t1)}(x(t),t | x(t1)) be the transition probability density function for the Markov process x(t). Then p satisfies the partial differential equation
$$\frac{\partial p}{\partial t} = -\sum_{i=1}^{n}\frac{\partial\,(f_i\,p)}{\partial x_i} + \frac12\sum_{i=1}^{n}\sum_{j=1}^{n} Q_{ij}\,\frac{\partial^2 p}{\partial x_i\,\partial x_j}$$
Proof: derived from Theorem 2 by taking dn_p = 0.
Discussion of the Fokker–Planck Equation: the Fokker–Planck equation can be written as a conservation law
$$\frac{\partial p}{\partial t} + \sum_{i=1}^{n}\frac{\partial J_i}{\partial x_i} = \frac{\partial p}{\partial t} + \nabla\cdot J = 0,\qquad J := f\,p - \frac12\,Q\,\nabla p$$
This conservation law is a consequence of the global conservation of probability
$$\int p_{x(t)\mid x(t_1)}\big(x(t),t\mid x(t_1)\big)\,dx = 1$$
Table of Content
  • 58. Langevin and Fokker–Planck Equations SOLO Stochastic Processes. The original Langevin equation describes Brownian motion, the apparently random movement of a particle in a fluid due to collisions with the molecules of the fluid. We are interested in the position x of a particle of mass m. The force on the particle is the sum of the viscous force proportional to the particle's velocity, −λ v (Stokes' law), plus a noise term η(t) with Gaussian probability distribution and correlation function ⟨η_i(t) η_j(t′)⟩ = 2 λ k_B T δ_{i,j} δ(t − t′), where kB is Boltzmann's constant and T is the temperature:
$$m\,\frac{dv}{dt} = -\lambda\,v + \eta(t)\;\Rightarrow\;\frac{dv}{dt} = -\frac{\lambda}{m}\,v + \frac{1}{m}\,\eta(t),\qquad \frac{dx}{dt} = v,\qquad Q := \frac{\lambda\,k_B\,T}{m^{2}}$$
Let p = p_{v(t)|v(t1)}(v(t),t | v(t1)) be the transition probability density function that corresponds to the Langevin-equation state. Then p satisfies the partial differential equation given by the Fokker–Planck equation
$$\frac{\partial p}{\partial t} = \frac{\partial\big[(\lambda/m)\,v\,p\big]}{\partial v} + Q\,\frac{\partial^{2} p}{\partial v^{2}}$$
We assume that the initial state v(t0) at t0 is deterministic: p_{v(t)|v(t0)}(v(t0),t0 | v(t0)) = δ(v(t0) − v0).
  • 59. Langevin and Fokker–Planck Equations SOLO Stochastic Processes. The solution to the Fokker–Planck equation above, with deterministic initial state v(t0) = v0, is the Gaussian
$$p_{v(t)\mid v(t_0)}\big(v,t\mid v_0,t_0\big) = \frac{1}{(2\pi\sigma^{2})^{1/2}}\,\exp\Big[-\frac{(v-\hat v)^{2}}{2\,\sigma^{2}}\Big]$$
where
$$\hat v = v_0\,\exp\Big[-\frac{\lambda}{m}\,(t-t_0)\Big],\qquad \sigma^{2} = \frac{Q\,m}{\lambda}\Big[1-\exp\Big(-\frac{2\lambda}{m}\,(t-t_0)\Big)\Big]$$
(Figure: a solution to the one-dimensional Fokker–Planck equation, with both the drift and the diffusion term; the initial condition is a Dirac delta function at x = 1, and the distribution drifts towards x = 0.) Table of Content
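The sketch below (added illustration; parameter values arbitrary) integrates the Langevin equation with the Euler–Maruyama scheme and compares the empirical mean and variance of v(t) with these formulas.

```python
# Monte Carlo check of the Ornstein-Uhlenbeck solution of the velocity Fokker-Planck
# equation: mean = v0 exp(-lambda t / m), var = (Q m / lambda)(1 - exp(-2 lambda t / m)).
import numpy as np

rng = np.random.default_rng(10)
m, lam, kBT, v0 = 1.0, 2.0, 0.5, 3.0
Q = lam * kBT / m ** 2
dt, t_end, npart = 1e-3, 1.0, 20_000

v = np.full(npart, v0)
for _ in range(int(t_end / dt)):
    # dv = -(lambda/m) v dt + sqrt(2 Q dt) N(0,1), consistent with diffusion term Q
    v += -(lam / m) * v * dt + np.sqrt(2 * Q * dt) * rng.standard_normal(npart)

mean_th = v0 * np.exp(-lam * t_end / m)
var_th = (Q * m / lam) * (1 - np.exp(-2 * lam * t_end / m))
print("mean:", round(v.mean(), 4), "  theory", round(mean_th, 4))
print("var :", round(v.var(), 4), "  theory", round(var_th, 4))
```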
  • 60. Generalized Fokker - Planck Equation SOLO Stochastic Processes ( )TXtxpx ,|,Define the set of past data. We need to find( ) ( ) ( )( )nn tttxxxTX ,,,,,,,:, 2121 = where we assume that ( ) ( )TXtx ,∉ Start the analysis by defining the Conditional Characteristic Function of the Increment of the Process: ( ) ( )( ) ( ) ( ) ( )( )[ ] ( ){ } ( ) ( )( )[ ] ( ) ( )( ) ( ) ( ) ( )ttxtxxtxdTXttxtxpttxtxs TXttxttxtxsETXttxts TXttxx T T TXttxxTXttxx ∆−−=∆∆−∆−−−= ∆−∆−−−=∆−Φ ∫ ∞+ ∞− ∆− ∆−∆∆−∆ :,,|,exp ,,|exp,,|, ,,| ,,|,,| ( ) ( ) ( )[ ] ( ) ( ) ( )[ ]{ } ( ) ( )( )∫ ∞+ ∞− ∆−∆∆− ∆−Φ∆−−==∆− j j TXttxx T nTXttxtx sdTXttxtsttxtxs j TXvttxtxp ,,|,exp 2 1 ,,|, ,,|,,| π The Inverse Transform is The Fokker-Planck Equation was derived under the assumption that is a Markov Process. Let assume that we don’t have a Markov Process, but an Arbitrary Random Process (nx1 vector), where an arbitrary set of past value , must be considered.nn txtxtx ,;;,;, 2211  ( )tx ( )tx ( ) ( )n T n T sssxxx  11 , ==
  • 61. Generalized Fokker - Planck Equation SOLO Stochastic Processes Using Chapman – Kolmogorov Equation we obtain: ( ) ( ) [ ] ( ) ( ) ( )[ ] ( ) ( )( ) ( ) ( ) ( ) ( )[ ]{ } ( ) ( )( ) ( ) ( ) ( )[ ] ( ) ( )( ) ( ) ( ) ( ) ( )[ ]{ } ( ) ( )( ) ( ) ( )( ) ( )∫ ∫ ∫ ∫ ∫ ∞+ ∞− ∞+ ∞− ∆−∆−∆ ∞+ ∞− ∆− ∆− ∞+ ∞− ∆−∆ +∞ ∞− ∆−∆−∆− ∆−∆−∆−Φ∆−−= ∆−∆−∆−Φ∆−−= ∆−∆−∆−= ∆− j j TXttxTXttxx T n TXttx TXttxtxp j j TXttxx T n TXttxTXttxtxTXttxtx ttxdsdTXttxpTXttxtsttxtxs j ttxdTXttxpsdTXttxtsttxtxs j ttxdTXttxpTXttxtxpTXtxp TXttxtx ,|,,|,exp 2 1 ,|,,|,exp 2 1 ,|,,|,,|, ,|,,| ,| ,,|, ,,| ,|,,|,,| ,,| π π    where Let expand the Conditional Characteristic Function in a Taylor Series about the vector 0=s ( ) ( )( ) ( ) ( ) ( )( )[ ] ( ){ } ( ) ( )( )[ ] ( ) ( )( ) ( )∫ ∞+ ∞− ∆− ∆−∆∆−∆ ∆−∆−∆−−−= −∆+−=∆−Φ ttxdTXttxtxpttxtxs TXtxtxttxsETXttxts TXttxx T T TXttxxTXttxx ,,|,exp ,,|exp,,|, ,,| ,,|,,| ( ) ( )( ) ( ) ( ) ( ) ∑∑ ∑ ∑∑∑ = ∞ = ∞ = ∆−∆ = = ∆−∆ = ∆−∆ ∆−∆ = ∂∂ Φ∂ = + ∂∂ Φ∂ + ∂ Φ∂ +=∆−Φ n i i m m m n m m n m TXttxx m n n i n i ii ii TXttxx i n i i TXttxx TXttxx mmss ssmm ss ss s s TXttxts n n n 10 0 1 1 ,,| 1 1 1 ,,| 2 1 ,,| ,,| 1 1 1 1 2 21 21 1 1 1 !! 1 !2 1 1,,|,     ( ) ( )( ) ( ) ( ) ( ) ( )( ) ( ) ( )( ) ( ) ( )( ) ( ){ } ∑= ∆−∆ ∆−∆ =∆−∆−−∆−−⋅∆−−−= ∂∂∂ ∆−Φ∂ n i i m nn mm TXttxx m m n mm TXttxx m mmTXttxttxtxttxtxttxtxE sss TXttxts n n 1 2211,,| 21 ,,| :,,|1 ,,|, 21 21  
  • 62. Generalized Fokker - Planck Equation SOLO Stochastic Processes ( ) ( ) [ ] ( ) ( ) ( )[ ]{ } ( ) ( )( ) ( ) ( )( ) ( )∫ ∫ +∞ ∞− ∞+ ∞− ∆−∆−∆∆− ∆−∆−∆−Φ∆−−= j j TXttxTXttxx T nTXttxtx ttxdsdTXttxpTXttxtsttxtxs j TXtxp ,|,,|,exp 2 1 ,|, ,|,,|,,| π ( ) ( ) ( )[ ]{ } ( ) ( ) ( )( ) ( )∫ ∫ ∑ ∑ +∞ ∞− ∞+ ∞− ∆− ∞ = ∞ = ∆−∆ ∆−∆− ∂∂ Φ∂ ∆−−= j j TXttx m m m n m m n m TXttxx m n T n ttxdsdTXttxpss ssmm ttxtxs j n n n ,| !! 1 exp 2 1 .| 0 0 1 1 ,,| 11 1 1    π ( ) ( ) ( )[ ]{ } ( ) ( ) ( )( ) ( )ttxdTXttxpdsdsss ss ttxtxs jmm TXttx m m j j j j n m n m m n m TXttxx m T n nn n n ∆−∆− ∂∂ Φ∂ ∆−−= ∆− ∞ = ∞ = +∞ ∞− ∞+ ∞− ∞+ ∞− ∆−∆ ∑ ∑ ∫ ∫ ∫ ,|exp 2 1 !! 1 ,| 0 0 11 1 ,,| 11 1 1      π ( ) ( ) ( ) ( )[ ]{ } ( ) ( ) ( )( ) ( ) ( )( ) ( ){ } ( ) ( )( ) ( )ttxdTXttxpdsdsssTXttxttxtxttxtxEttxtxs jmm TXttx m m j j j j n m n mm nn m TXttxx T n n m n nn ∆−∆−∆−∆−−∆−−∆−− − = ∆− ∞ = ∞ = +∞ ∞− ∞+ ∞− ∞+ ∞− ∆−∆∑ ∑ ∫ ∫ ∫ ,|,,|exp 2 1 !! 1 ,| 0 0 1111,,| 11 11    π we obtained: ( ) ( ) ( ) ( )[ ]{ } ( ) ( ) ( )( ) ( ){ } ( ) ( )( ) ( )ttxdTXttxpdssTXttxttxtxEttxtxs jm TXttx m m n i j j i m i m iiTXttxxiii i m n ii i ∆−∆−         ∆−∆−−∆−− − = ∆− ∞ = ∞ = +∞ ∞− = ∞+ ∞− ∆−∆∑ ∑ ∫ ∏ ∫ ,|,,|exp 2 1 ! 1 ,| 0 0 1 ,,| 1 π 
  • 63. Generalized Fokker - Planck Equation SOLO Stochastic Processes Using : [ ] ( ){ } ( ) ( ) { } ( ) ( ) { }∫∫∫ ∞+ ∞− ∞+ ∞− ∞+ ∞− =→=−=− j j i i ij j j j i i i sdussFs j uf du d sdussF j ufsdauss j au ud d exp 2 1 exp 2 1 exp 2 1 πππ δ we obtained: we obtain: ( ) ( ) [ ] ( ) ( ) ( ) ( )[ ]{ } ( ) ( ) ( )( ) ( ){ } ( ) ( )( ) ( )ttxdTXttxpTXttxttxtxEdsttxtxss jm TXtxp TXttx m m j j m iiTXttxxiiii m i i mn i TXttxtx n ii i ∆−∆−         ∆−∆−−∆−− − = ∆− ∞ = ∞ = ∞+ ∞− ∞+ ∞− ∆−∆ = ∆− ∑ ∑ ∫ ∫∏ .|,,|exp 2 1 ! 1 ,|, .| 0 0 ,,| 1 ,,| 1 π  ( ) ( ) [ ] ( ) ( ) ( )[ ] ( ) ( ) ( ) ( )( ) ( ){ } ( ) ( )( ) ( )ttxdTXttxpTXttxttxtxE tx ttxtx m TXtxp TXttx m m n i m iiTXttxxm i ii m i m TXttxtx n i i ii ∆−∆−        ∆−∆−− ∂ ∆−−∂− = ∆− ∞ = ∞ = ∞+ ∞− = ∆−∆ ∆− ∑ ∑ ∫ ∏ ,|,,| ! 1 ,|, ,| 0 0 1 ,,| ,,| 1 δ  ( ) ( ) ( ) ( )[ ] ( ) ( ) ( )( ) ( ){ } ( ) ( )( ) ( ) ( ) ( ) ( ) ( ) ( )( ) ( ){ } ( ) ( )( )[ ]∑ ∑ ∏ ∑ ∑∏ ∫ ∞ = ∞ = = =∆∆−∆ ∞ = ∞ = = +∞ ∞− ∆−∆−∆         ∆−∆−∆−− ∂ ∂− =         ∆−∆−∆−∆−−∆−− ∂ ∂− = 0 0 1 0,|,,| 0 0 1 ,|,,| 1 1 ,|,,| ! 1 ,|,,| ! 1 m m n i tTXttx m iiTXtxxm i m i m m m n i TXttx m iiTXttxxiim i m i m n i i ii n i i ii TXttxpTXttxttxtxE txm ttxdTXttxpTXttxttxtxEttxtx txm   δ For m1=…=mn=m=0 we obtain : ( ) ( ) [ ]TXttxp TXttxttx ,|,,,| ∆−∆−∆−
  • 64. Generalized Fokker - Planck Equation SOLO Stochastic Processes
We obtained:

$$p\left[x\left(t\right),t|X,T\right]-p\left[x\left(t\right),t-\Delta t|X,T\right]=
\sum_{m_{1}=0}^{\infty}\cdots\sum_{m_{n}=0}^{\infty}\frac{\left(-1\right)^{m_{1}+\cdots+m_{n}}}{m_{1}!\cdots m_{n}!}\;
\frac{\partial^{\,m_{1}+\cdots+m_{n}}}{\partial x_{1}^{m_{1}}\cdots\partial x_{n}^{m_{n}}}
\left[E\left\{\prod_{i=1}^{n}\left[x_{i}\left(t\right)-x_{i}\left(t-\Delta t\right)\right]^{m_{i}}\Big|\,x\left(t-\Delta t\right)=x,X,T\right\}\;p\left[x,t-\Delta t|X,T\right]\right],
\qquad\sum_{i=1}^{n}m_{i}\neq 0$$

Dividing both sides by Δt and taking Δt → 0 we obtain:

$$\frac{\partial\,p\left[x,t|X,T\right]}{\partial t}=
\sum_{m_{1}=0}^{\infty}\cdots\sum_{m_{n}=0}^{\infty}\frac{\left(-1\right)^{m_{1}+\cdots+m_{n}}}{m_{1}!\cdots m_{n}!}\;
\frac{\partial^{\,m_{1}+\cdots+m_{n}}}{\partial x_{1}^{m_{1}}\cdots\partial x_{n}^{m_{n}}}
\left[\lim_{\Delta t\to 0}\frac{E\left\{\prod_{i=1}^{n}\left[x_{i}\left(t\right)-x_{i}\left(t-\Delta t\right)\right]^{m_{i}}\Big|\,x,X,T\right\}}{\Delta t}\;p\left[x,t|X,T\right]\right],
\qquad\sum_{i=1}^{n}m_{i}\neq 0$$

This is the Generalized Fokker - Planck Equation for Non-Markovian Random Processes.
  • 65. Generalized Fokker - Planck Equation SOLO Stochastic Processes
Discussion of the Generalized Fokker – Planck Equation

$$\frac{\partial\,p\left[x,t|X,T\right]}{\partial t}=
\sum_{m_{1}=0}^{\infty}\cdots\sum_{m_{n}=0}^{\infty}\frac{\left(-1\right)^{m_{1}+\cdots+m_{n}}}{m_{1}!\cdots m_{n}!}\;
\frac{\partial^{\,m_{1}+\cdots+m_{n}}}{\partial x_{1}^{m_{1}}\cdots\partial x_{n}^{m_{n}}}
\left[A_{m_{1},\ldots,m_{n}}\;p\left[x,t|X,T\right]\right],\qquad\sum_{i=1}^{n}m_{i}\neq 0$$

$$A_{m_{1},\ldots,m_{n}}:=\lim_{\Delta t\to 0}\frac{E\left\{\prod_{i=1}^{n}\left[x_{i}\left(t\right)-x_{i}\left(t-\Delta t\right)\right]^{m_{i}}\Big|\,x,X,T\right\}}{\Delta t}$$

• The Generalized Fokker - Planck Equation is much more complex than the Fokker – Planck Equation because of the presence of an infinite number of derivatives of the density function.
• It requires certain types of density functions, infinitely differentiable, and knowledge of all the coefficients A_{m1,…,mn}.
• To avoid those difficulties we seek conditions on the process for which ∂p/∂t is defined by a finite set of derivatives.
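The coefficients A_{m1,…,mn} can also be estimated directly from sample paths. The following minimal sketch is not part of the original slides: it assumes, purely for illustration, a scalar Ornstein-Uhlenbeck diffusion dx = -a x dt + b dW, for which A_1 = -a x, A_2 = b² and the higher coefficients vanish as Δt → 0, in line with the Pawula conditions discussed next.

```python
# Illustrative sketch (assumed OU model, not from the slides): estimate the scalar
# coefficients A_m = lim (1/dt) E{[x(t+dt) - x(t)]^m | x(t) ~ 0} by Monte Carlo.
import numpy as np

rng = np.random.default_rng(0)
a, b = 1.0, 0.5                  # assumed drift and diffusion parameters
dt, n_steps, n_paths = 1e-3, 2000, 20000

x = np.zeros(n_paths)
increments_near_zero = []        # increments observed while x(t) is near 0
for _ in range(n_steps):
    dW = rng.normal(0.0, np.sqrt(dt), n_paths)
    dx = -a * x * dt + b * dW
    mask = np.abs(x) < 0.05      # condition on x(t) ~ 0
    increments_near_zero.append(dx[mask])
    x = x + dx

inc = np.concatenate(increments_near_zero)
for m in range(1, 5):
    print(f"A_{m} ~ {np.mean(inc**m) / dt:+.4f}")
# Roughly expected: A_1 ~ 0 (drift at x = 0), A_2 ~ b**2 = 0.25,
# A_3 and A_4 of order dt, i.e. negligible, as the following lemmas require.
```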
  • 66. Generalized Fokker - Planck Equation SOLO Stochastic Processes
Discussion of the Generalized Fokker – Planck Equation (continue)
Recall the Generalized Fokker – Planck Equation and the coefficients A_{m1,…,mn} from the previous slide.
• To avoid those difficulties we seek conditions on the process for which ∂p/∂t is defined by a finite set of derivatives. Those conditions were given by Pawula, R.F. (1967).
Lemma 1
Let

$$A_{m_{1},0,\ldots,0}:=\lim_{\Delta t\to 0}\frac{E\left\{\left[x_{1}\left(t\right)-x_{1}\left(t-\Delta t\right)\right]^{m_{1}}\Big|\,x,X,T\right\}}{\Delta t}\neq 0$$

If A_{m1,0,…,0} is zero for some even m1, then A_{m1,0,…,0} = 0 for all m1 ≥ 3.
Proof
For m1 odd and m1 ≥ 3, we have

$$A_{m_{1},0,\ldots,0}=\lim_{\Delta t\to 0}\frac{E\left\{\left[x_{1}\left(t\right)-x_{1}\left(t-\Delta t\right)\right]^{\frac{m_{1}-1}{2}}\left[x_{1}\left(t\right)-x_{1}\left(t-\Delta t\right)\right]^{\frac{m_{1}+1}{2}}\Big|\,x,X,T\right\}}{\Delta t}$$
  • 67. Generalized Fokker - Planck Equation SOLO Stochastic Processes
Lemma 1 (continue)
Proof (continue)
Using the Schwarz Inequality, for m1 odd and m1 ≥ 3 we have

$$A_{m_{1},0,\ldots,0}^{2}\leq
\lim_{\Delta t\to 0}\frac{E\left\{\left[x_{1}\left(t\right)-x_{1}\left(t-\Delta t\right)\right]^{m_{1}-1}\Big|\,x,X,T\right\}}{\Delta t}\cdot
\lim_{\Delta t\to 0}\frac{E\left\{\left[x_{1}\left(t\right)-x_{1}\left(t-\Delta t\right)\right]^{m_{1}+1}\Big|\,x,X,T\right\}}{\Delta t}
=A_{m_{1}-1,0,\ldots,0}\;A_{m_{1}+1,0,\ldots,0}$$

In the same way, for m1 even and m1 ≥ 4, splitting the increment into its powers (m1-2)/2 and (m1+2)/2 and using the Schwarz Inequality again:

$$A_{m_{1},0,\ldots,0}^{2}\leq A_{m_{1}-2,0,\ldots,0}\;A_{m_{1}+2,0,\ldots,0}$$
  • 68. Generalized Fokker - Planck Equation SOLO Stochastic Processes
Lemma 1 (continue)
Proof (continue)
We therefore have

$$A_{m_{1},0,\ldots,0}^{2}\leq A_{m_{1}-1,0,\ldots,0}\;A_{m_{1}+1,0,\ldots,0}\qquad m_{1}\ \text{odd},\ m_{1}\geq 3$$
$$A_{m_{1},0,\ldots,0}^{2}\leq A_{m_{1}-2,0,\ldots,0}\;A_{m_{1}+2,0,\ldots,0}\qquad m_{1}\ \text{even},\ m_{1}\geq 4$$

For some m1 = r even we have A_{r,0,…,0} = 0, and

$$A_{r-2,0,\ldots,0}^{2}\leq A_{r-4,0,\ldots,0}\;A_{r,0,\ldots,0}\qquad m_{1}=r-2\geq 4$$
$$A_{r-1,0,\ldots,0}^{2}\leq A_{r-2,0,\ldots,0}\;A_{r,0,\ldots,0}\qquad m_{1}=r-1\geq 3$$
$$A_{r+1,0,\ldots,0}^{2}\leq A_{r,0,\ldots,0}\;A_{r+2,0,\ldots,0}\qquad m_{1}=r+1\geq 3$$
$$A_{r+2,0,\ldots,0}^{2}\leq A_{r,0,\ldots,0}\;A_{r+4,0,\ldots,0}\qquad m_{1}=r+2\geq 4$$

Therefore A_{r-2,0,…,0} = 0, A_{r-1,0,…,0} = 0, A_{r+1,0,…,0} = 0, A_{r+2,0,…,0} = 0, if A_{r,0,…,0} = 0 and all the A are bounded. This procedure continues, leaving A_{1,0,…,0} and A_{2,0,…,0} not necessarily zero and achieving

$$A_{3,0,\ldots,0}=A_{4,0,\ldots,0}=\cdots=0$$

q.e.d.
  • 69. Generalized Fokker - Planck Equation SOLO Stochastic Processes
Lemma 2
Let

$$A_{m_{1},\ldots,m_{n}}:=\lim_{\Delta t\to 0}\frac{E\left\{\prod_{i=1}^{n}\left[x_{i}\left(t\right)-x_{i}\left(t-\Delta t\right)\right]^{m_{i}}\Big|\,x,X,T\right\}}{\Delta t},\qquad\sum_{i=1}^{n}m_{i}>0$$

If each of the moments A_{m1,0,…,0}, A_{0,m2,0,…,0}, …, A_{0,…,0,mn} is finite and vanishes for some even mi, then

$$A_{m_{1},\ldots,m_{n}}=0\qquad\forall\,m\ \text{s.t.}\ \sum_{i=1}^{n}m_{i}\geq 3$$
$$A_{m_{1},\ldots,m_{n}}\ \text{not necessarily zero}\qquad\forall\,m\ \text{s.t.}\ 0<\sum_{i=1}^{n}m_{i}\leq 2$$

Proof
We shall prove this Lemma by induction. Let us start with n = 3:

$$A_{m_{1},m_{2},m_{3}}=\lim_{\Delta t\to 0}\frac{E\left\{\left[x_{1}\left(t\right)-x_{1}\left(t-\Delta t\right)\right]^{m_{1}}\left[x_{2}\left(t\right)-x_{2}\left(t-\Delta t\right)\right]^{m_{2}}\left[x_{3}\left(t\right)-x_{3}\left(t-\Delta t\right)\right]^{m_{3}}\Big|\,x,X,T\right\}}{\Delta t},\qquad\sum_{i=1}^{3}m_{i}>0$$

We proved in Lemma 1 that the pure moments A_{m1,0,0}, A_{0,m2,0}, A_{0,0,m3} vanish for orders ≥ 3, while A_{1,0,0}, A_{0,1,0}, A_{0,0,1} (and the second-order pure moments) are not necessarily zero. By the Schwarz Inequality

$$A_{0,m_{2},m_{3}}^{2}=\left[\lim_{\Delta t\to 0}\frac{E\left\{\left[x_{2}\left(t\right)-x_{2}\left(t-\Delta t\right)\right]^{m_{2}}\left[x_{3}\left(t\right)-x_{3}\left(t-\Delta t\right)\right]^{m_{3}}\Big|\,x,X,T\right\}}{\Delta t}\right]^{2}
\leq A_{0,2m_{2},0}\;A_{0,0,2m_{3}}$$
  • 70. Generalized Fokker - Planck Equation SOLO Stochastic Processes
Lemma 2 (continue)
Proof (continue – 1)
A_{1,0,0}, A_{0,1,0}, A_{0,0,1} are not necessarily zero. From the Schwarz Inequality, since at least one of the pure moments on the right-hand side is of even order ≥ 4 (and hence zero) whenever the sum of the indices is ≥ 3:

$$A_{0,m_{2},m_{3}}^{2}\leq A_{0,2m_{2},0}\;A_{0,0,2m_{3}}\;\Longrightarrow\;A_{0,m_{2},m_{3}}=0\ \ \text{for}\ m_{2},m_{3}>0,\ m_{2}+m_{3}\geq 3;\qquad A_{0,1,1}\ \text{not necessarily zero}$$

$$A_{m_{1},0,m_{3}}^{2}\leq A_{2m_{1},0,0}\;A_{0,0,2m_{3}}\;\Longrightarrow\;A_{m_{1},0,m_{3}}=0\ \ \text{for}\ m_{1},m_{3}>0,\ m_{1}+m_{3}\geq 3;\qquad A_{1,0,1}\ \text{not necessarily zero}$$

$$A_{m_{1},m_{2},0}^{2}\leq A_{2m_{1},0,0}\;A_{0,2m_{2},0}\;\Longrightarrow\;A_{m_{1},m_{2},0}=0\ \ \text{for}\ m_{1},m_{2}>0,\ m_{1}+m_{2}\geq 3;\qquad A_{1,1,0}\ \text{not necessarily zero}$$
  • 71. Generalized Fokker - Planck Equation SOLO Stochastic Processes
Lemma 2 (continue)
Proof (continue – 2)
For a mixed moment with all three indices positive, applying the Schwarz Inequality twice gives

$$A_{m_{1},m_{2},m_{3}}^{4}=\left[\lim_{\Delta t\to 0}\frac{E\left\{\left[x_{1}\left(t\right)-x_{1}\left(t-\Delta t\right)\right]^{m_{1}}\left[x_{2}\left(t\right)-x_{2}\left(t-\Delta t\right)\right]^{m_{2}}\left[x_{3}\left(t\right)-x_{3}\left(t-\Delta t\right)\right]^{m_{3}}\Big|\,x,X,T\right\}}{\Delta t}\right]^{4}
\leq A_{2m_{1},0,0}^{2}\;A_{0,4m_{2},0}\;A_{0,0,4m_{3}}$$

Since A_{0,4m2,0} = A_{0,0,4m3} = 0 (pure moments of order ≥ 3 vanish by Lemma 1), it follows that

$$A_{m_{1},m_{2},m_{3}}=0\qquad\forall\ m_{1},m_{2},m_{3}>0$$
  • 72. Generalized Fokker - Planck Equation SOLO Stochastic Processes
Lemma 2 (continue)
Proof (continue – 3)
We proved that, for n = 3, only the coefficients with Σ mi ≤ 2 (that is, A_{1,0,0}, A_{0,1,0}, A_{0,0,1}, A_{1,1,0}, A_{1,0,1}, A_{0,1,1} and the second-order pure moments) are not necessarily zero, and

$$A_{m_{1},m_{2},m_{3}}=0\qquad\forall\,m\ \text{s.t.}\ \sum_{i=1}^{3}m_{i}\geq 3$$

In the same way, assuming that the result is true for (n-1), it is straightforward to show that it is true for n, and

$$A_{m_{1},\ldots,m_{n}}=0\qquad\forall\,m\ \text{s.t.}\ \sum_{i=1}^{n}m_{i}\geq 3$$
$$A_{m_{1},\ldots,m_{n}}\ \text{not necessarily zero}\qquad\forall\,m\ \text{s.t.}\ 0<\sum_{i=1}^{n}m_{i}\leq 2$$

q.e.d.
  • 73. Generalized Fokker - Planck Equation SOLO Stochastic Processes
Theorem 2
Let

$$A_{m_{1},\ldots,m_{n}}:=\lim_{\Delta t\to 0}\frac{E\left\{\prod_{i=1}^{n}\left[x_{i}\left(t\right)-x_{i}\left(t-\Delta t\right)\right]^{m_{i}}\Big|\,x,X,T\right\}}{\Delta t}$$

for some set (X,T), and let each of the moments A_{m1,0,…,0}, A_{0,m2,0,…,0}, …, A_{0,…,0,mn} vanish for some even mi. Then the transition density p = p(x,t|X,T) satisfies the Fokker - Planck Equation

$$\frac{\partial p}{\partial t}=-\sum_{i=1}^{n}\frac{\partial}{\partial x_{i}}\left[B_{i}\,p\right]+\frac{1}{2}\sum_{i=1}^{n}\sum_{j=1}^{n}\frac{\partial^{2}}{\partial x_{i}\,\partial x_{j}}\left[C_{ij}\,p\right]$$

with

$$B_{i}\left(x,t\right)=\lim_{\Delta t\to 0}\frac{1}{\Delta t}E\left\{x_{i}\left(t+\Delta t\right)-x_{i}\left(t\right)\Big|\,x\left(t\right)=x,X,T\right\}=A_{0,\ldots,0,1_{i},0,\ldots,0}$$

$$C_{ij}\left(x,t\right)=\lim_{\Delta t\to 0}\frac{1}{\Delta t}E\left\{\left[x_{i}\left(t+\Delta t\right)-x_{i}\left(t\right)\right]\left[x_{j}\left(t+\Delta t\right)-x_{j}\left(t\right)\right]\Big|\,x\left(t\right)=x,X,T\right\}=A_{0,\ldots,1_{i},\ldots,1_{j},\ldots,0}$$

Proof
Since A_{m1,0,…,0}, …, A_{0,…,0,mn} vanish for some even mi, from Lemma 2 the only not-necessarily-zero moments are those with Σ mi ≤ 2, i.e. the Bi and Cij above. The Generalized Fokker – Planck Equation

$$\frac{\partial\,p\left[x,t|X,T\right]}{\partial t}=\sum_{m_{1}=0}^{\infty}\cdots\sum_{m_{n}=0}^{\infty}\frac{\left(-1\right)^{m_{1}+\cdots+m_{n}}}{m_{1}!\cdots m_{n}!}\;\frac{\partial^{\,m_{1}+\cdots+m_{n}}}{\partial x_{1}^{m_{1}}\cdots\partial x_{n}^{m_{n}}}\left[A_{m_{1},\ldots,m_{n}}\,p\right],\qquad\sum_{i=1}^{n}m_{i}\neq 0$$

therefore reduces to

$$\frac{\partial p}{\partial t}=-\sum_{i=1}^{n}\frac{\partial}{\partial x_{i}}\left[A_{0,\ldots,1_{i},\ldots,0}\,p\right]+\frac{1}{2}\sum_{i=1}^{n}\sum_{j=1}^{n}\frac{\partial^{2}}{\partial x_{i}\,\partial x_{j}}\left[A_{0,\ldots,1_{i},\ldots,1_{j},\ldots,0}\,p\right]$$

q.e.d.
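As a numerical illustration of the theorem's content, the following sketch (the scalar model with drift B(x) = -a x and constant diffusion C = b² is an assumption made only for this example and does not come from the slides) integrates the resulting one-dimensional Fokker - Planck equation by finite differences and checks that it relaxes to the known Gaussian stationary density of variance b²/(2a):

```python
# Sketch: explicit finite-difference integration of dp/dt = -d(B p)/dx + 0.5 d^2(C p)/dx^2
# for the assumed scalar model B(x) = -a*x, C(x) = b**2 (Ornstein-Uhlenbeck case).
import numpy as np

a, b = 1.0, 0.5
x = np.linspace(-2.0, 2.0, 401)
dx = x[1] - x[0]
dt = 0.2 * dx**2 / b**2                      # conservative explicit time step
p = np.exp(-x**2 / 0.02)
p /= p.sum() * dx                            # narrow initial density, normalized

B = -a * x
C = b**2 * np.ones_like(x)
for _ in range(int(5.0 / dt)):               # integrate long enough to approach stationarity
    drift = -np.gradient(B * p, dx)
    diff = 0.5 * np.gradient(np.gradient(C * p, dx), dx)
    p = np.clip(p + dt * (drift + diff), 0.0, None)
    p /= p.sum() * dx

var_stat = b**2 / (2.0 * a)                  # stationary variance of the OU process
p_exact = np.exp(-x**2 / (2.0 * var_stat)) / np.sqrt(2.0 * np.pi * var_stat)
print("max |p - p_exact| =", np.max(np.abs(p - p_exact)))
```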
  • 74. Generalized Fokker - Planck Equation SOLO Stochastic Processes
History
The Fokker-Planck Equation was derived by Uhlenbeck and Ornstein for Wiener noise in the paper: “On the Theory of the Brownian Motion”, Phys. Rev., 36, pp. 823 – 841 (September 1, 1930), (available on the Internet).
George Eugène Uhlenbeck (1900 – 1988), Leonard Salomon Ornstein (1880 – 1941), Ming Chen Wang (王明贞) (1906 – 2010)
An updated version was published by M.C. Wang and Uhlenbeck: “On the Theory of the Brownian Motion II”, Rev. Modern Physics, 17, Nos. 2 and 3, pp. 323 – 342 (April – July 1945), (available on the Internet). They assumed that all Moments above the second must vanish.
The sufficiency of a finite set of Moments to obtain a Fokker-Planck Equation was shown by R.F. Pawula, “Generalizations and Extensions of the Fokker-Planck-Kolmogorov Equations”, IEEE Trans. Information Theory, IT-13, No. 1, pp. 33 – 41 (January 1967).
Table of Content
  • 75. Karhunen-Loève Theorem SOLO Stochastic Processes
Michel Loève 1907 (Jaffa) – 1979 (Berkeley)
In the theory of stochastic processes, the Karhunen-Loève theorem (named after Kari Karhunen and Michel Loève) is a representation of a stochastic process as an infinite linear combination of orthogonal functions, analogous to a Fourier series representation of a function on a bounded interval. In contrast to a Fourier series, where the coefficients are real numbers and the expansion basis consists of sinusoidal functions (that is, sine and cosine functions), the coefficients in the Karhunen-Loève theorem are random variables and the expansion basis depends on the process. In fact, the orthogonal basis functions used in this representation are determined by the covariance function of the process. If we regard a stochastic process as a random function F, that is, one in which the random value is a function on an interval [a, b], then this theorem can be considered as a random orthonormal expansion of F.
Given a Stochastic Process x(t) defined on an interval [a,b], the Karhunen-Loève Theorem states that

$$x\left(t\right)\approx\hat{x}\left(t\right)=\sum_{n=1}^{\infty}b_{n}\,\varphi_{n}\left(t\right),\qquad a\leq t\leq b$$

where the φn(t) are orthonormal functions,

$$\int_{a}^{b}\varphi_{n}\left(t\right)\varphi_{m}^{*}\left(t\right)dt=\begin{cases}1 & n=m\\ 0 & n\neq m\end{cases}$$

defined by

$$\int_{a}^{b}R_{x\left(t_{1}\right)x\left(t_{2}\right)}\left(t_{1},t_{2}\right)\varphi_{m}\left(t_{2}\right)dt_{2}=\lambda_{m}\,\varphi_{m}\left(t_{1}\right),\qquad m=1,2,\ldots$$

and

$$b_{n}=\int_{a}^{b}x\left(t\right)\varphi_{n}^{*}\left(t\right)dt,\qquad n=1,2,\ldots$$

are random variables. If E{x(t)} = 0, then

$$E\left\{b_{n}\right\}=0,\qquad E\left\{b_{n}b_{m}^{*}\right\}=\begin{cases}\lambda_{n} & n=m\\ 0 & n\neq m\end{cases}$$
  • 76. Karhunen-Loève Theorem (continue – 1) SOLO Stochastic Processes
Proof:
If the φn(t) are the orthonormal eigenfunctions of the covariance,

$$\int_{a}^{b}\varphi_{n}\left(t\right)\varphi_{m}^{*}\left(t\right)dt=\begin{cases}1 & n=m\\ 0 & n\neq m\end{cases},\qquad
\int_{a}^{b}R_{x\left(t_{1}\right)x\left(t_{2}\right)}\left(t_{1},t_{2}\right)\varphi_{m}\left(t_{2}\right)dt_{2}=\lambda_{m}\,\varphi_{m}\left(t_{1}\right),\qquad m=1,2,\ldots$$

and

$$b_{n}=\int_{a}^{b}x\left(t\right)\varphi_{n}^{*}\left(t\right)dt,\qquad n=1,2,\ldots$$

then E{b_n b_m*} = λ_n for n = m and 0 for n ≠ m:

1.
$$E\left\{x\left(t_{1}\right)b_{m}^{*}\right\}=E\left\{x\left(t_{1}\right)\int_{a}^{b}x^{*}\left(t_{2}\right)\varphi_{m}\left(t_{2}\right)dt_{2}\right\}=\int_{a}^{b}E\left\{x\left(t_{1}\right)x^{*}\left(t_{2}\right)\right\}\varphi_{m}\left(t_{2}\right)dt_{2}=\int_{a}^{b}R_{x\left(t_{1}\right)x\left(t_{2}\right)}\left(t_{1},t_{2}\right)\varphi_{m}\left(t_{2}\right)dt_{2}=\lambda_{m}\,\varphi_{m}\left(t_{1}\right)\qquad\forall\ a\leq t_{1}\leq b$$

2.
$$E\left\{b_{m}b_{n}^{*}\right\}=E\left\{\int_{a}^{b}x\left(t_{1}\right)\varphi_{m}^{*}\left(t_{1}\right)dt_{1}\ b_{n}^{*}\right\}=\int_{a}^{b}E\left\{x\left(t_{1}\right)b_{n}^{*}\right\}\varphi_{m}^{*}\left(t_{1}\right)dt_{1}=\lambda_{n}\int_{a}^{b}\varphi_{n}\left(t_{1}\right)\varphi_{m}^{*}\left(t_{1}\right)dt_{1}=\begin{cases}\lambda_{n} & n=m\\ 0 & n\neq m\end{cases}$$

and, if E{x(t)} = 0,

$$E\left\{b_{n}\right\}=E\left\{\int_{a}^{b}x\left(t\right)\varphi_{n}^{*}\left(t\right)dt\right\}=\int_{a}^{b}E\left\{x\left(t\right)\right\}\varphi_{n}^{*}\left(t\right)dt=0,\qquad n=1,2,\ldots$$
  • 77. Karhunen-Loève Theorem (continue – 2) SOLO Stochastic Processes
Proof (continue):
3.
$$E\left\{\hat{x}\left(t_{1}\right)b_{m}^{*}\right\}=E\left\{\sum_{n=1}^{\infty}b_{n}\,\varphi_{n}\left(t_{1}\right)\,b_{m}^{*}\right\}=\sum_{n=1}^{\infty}E\left\{b_{n}b_{m}^{*}\right\}\varphi_{n}\left(t_{1}\right)=\lambda_{m}\,\varphi_{m}\left(t_{1}\right)\qquad\forall\ a\leq t_{1}\leq b$$

but

$$E\left\{x\left(t_{1}\right)b_{m}^{*}\right\}=E\left\{x\left(t_{1}\right)\int_{a}^{b}x^{*}\left(t_{2}\right)\varphi_{m}\left(t_{2}\right)dt_{2}\right\}=\int_{a}^{b}R_{x\left(t_{1}\right)x\left(t_{2}\right)}\left(t_{1},t_{2}\right)\varphi_{m}\left(t_{2}\right)dt_{2}\qquad\forall\ a\leq t_{1}\leq b$$

therefore, with

$$\int_{a}^{b}R_{x\left(t_{1}\right)x\left(t_{2}\right)}\left(t_{1},t_{2}\right)\varphi_{m}\left(t_{2}\right)dt_{2}=\lambda_{m}\,\varphi_{m}\left(t_{1}\right),\qquad \lambda_{m}=E\left\{b_{m}b_{m}^{*}\right\}\ \text{real and positive}$$
  • 78. Karhunen-Loève Theorem (continue – 3) SOLO Stochastic Processes
4. Convergence of the Karhunen – Loève expansion
If

$$\hat{x}\left(t\right)=\sum_{n=1}^{\infty}b_{n}\,\varphi_{n}\left(t\right),\qquad a\leq t\leq b$$

then

$$E\left\{\left|x\left(t\right)-\hat{x}\left(t\right)\right|^{2}\right\}=R\left(t,t\right)-\sum_{n=1}^{\infty}\lambda_{n}\,\varphi_{n}\left(t\right)\varphi_{n}^{*}\left(t\right),\qquad a\leq t\leq b$$

therefore

$$E\left\{\left|x\left(t\right)-\hat{x}\left(t\right)\right|^{2}\right\}=0\ \Longleftrightarrow\ R\left(t,t\right)=\sum_{n=1}^{\infty}\lambda_{n}\,\varphi_{n}\left(t\right)\varphi_{n}^{*}\left(t\right),\qquad a\leq t\leq b$$

Proof:

$$E\left\{x\left(t\right)\hat{x}^{*}\left(t\right)\right\}=E\left\{x\left(t\right)\sum_{n=1}^{\infty}b_{n}^{*}\,\varphi_{n}^{*}\left(t\right)\right\}=\sum_{n=1}^{\infty}E\left\{x\left(t\right)b_{n}^{*}\right\}\varphi_{n}^{*}\left(t\right)=\sum_{n=1}^{\infty}\lambda_{n}\,\varphi_{n}\left(t\right)\varphi_{n}^{*}\left(t\right)$$

$$E\left\{\hat{x}\left(t\right)x^{*}\left(t\right)\right\}=\sum_{n=1}^{\infty}E\left\{b_{n}\,x^{*}\left(t\right)\right\}\varphi_{n}\left(t\right)=\sum_{n=1}^{\infty}\lambda_{n}\,\varphi_{n}\left(t\right)\varphi_{n}^{*}\left(t\right)$$

$$E\left\{\hat{x}\left(t\right)\hat{x}^{*}\left(t\right)\right\}=\sum_{n=1}^{\infty}\sum_{m=1}^{\infty}E\left\{b_{n}b_{m}^{*}\right\}\varphi_{n}\left(t\right)\varphi_{m}^{*}\left(t\right)=\sum_{n=1}^{\infty}\lambda_{n}\,\varphi_{n}\left(t\right)\varphi_{n}^{*}\left(t\right)$$

$$E\left\{\left|x\left(t\right)-\hat{x}\left(t\right)\right|^{2}\right\}=E\left\{x\,x^{*}\right\}-E\left\{x\,\hat{x}^{*}\right\}-E\left\{\hat{x}\,x^{*}\right\}+E\left\{\hat{x}\,\hat{x}^{*}\right\}=R\left(t,t\right)-\sum_{n=1}^{\infty}\lambda_{n}\,\varphi_{n}\left(t\right)\varphi_{n}^{*}\left(t\right)$$

Table of Content
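A short numerical sketch of the theorem follows. The example assumes standard Brownian motion on [0,1], whose covariance R(t1,t2) = min(t1,t2) and Karhunen-Loève eigenvalues λ_n = 1/((n - 1/2)π)² are classical results; neither the example nor the code comes from the slides. Discretizing the covariance kernel turns the eigenfunction equation into an ordinary matrix eigenproblem.

```python
# Sketch: numerical Karhunen-Loeve eigenvalues for Brownian motion on [0, 1]
# (assumed example), compared with the analytic values 1/((n - 1/2)*pi)**2.
import numpy as np

N = 500
t = (np.arange(N) + 0.5) / N                 # grid on [0, 1]
dt = 1.0 / N
R = np.minimum.outer(t, t)                   # covariance kernel R(t1, t2) = min(t1, t2)

# (R * dt) approximates the integral operator  int R(t1, t2) phi(t2) dt2
lam = np.linalg.eigvalsh(R * dt)[::-1]       # eigenvalues, largest first
lam_exact = 1.0 / (((np.arange(1, 6) - 0.5) * np.pi) ** 2)
print("numerical:", np.round(lam[:5], 5))
print("analytic :", np.round(lam_exact, 5))
```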
  • 79. References: SOLO Stochastic Processes
http://en.wikipedia.org/wiki/Category:Stochastic_processes
http://en.wikipedia.org/wiki/Category:Stochastic_differential_equations
Papoulis, A., “Probability, Random Variables, and Stochastic Processes”, McGraw Hill, 1965, Ch. 14 and 15
Sage, A.P. and Melsa, J.L., “Estimation Theory with Applications to Communications and Control”, McGraw Hill, 1971
McGarty, T., “Stochastic Systems and State Estimation”, John Wiley & Sons, 1974
Maybeck, P.S., “Stochastic Models, Estimation and Control”, Academic Press, Mathematics in Science and Engineering, Volume 141-2, 1982, Ch. 11 and 12
Jazwinski, A.H., “Stochastic Processes and Filtering Theory”, Academic Press, 1970
Table of Content
  • 80. January 12, 2015 80 SOLO
Technion – Israel Institute of Technology: 1964 – 1968 BSc EE, 1968 – 1971 MSc EE
Israeli Air Force: 1970 – 1974
RAFAEL, Israeli Armament Development Authority: 1974 – 2013
Stanford University: 1983 – 1986 PhD AA
  • 81. Functional Analysis SOLO
Riemann Integral    http://en.wikipedia.org/wiki/Riemann_integral
Georg Friedrich Bernhard Riemann 1826 – 1866

$$\int_{a}^{b}f\left(t\right)dt=\lim_{n\to\infty}\sum_{i=0}^{n-1}f\left(t_{i}\right)\left(x_{i+1}-x_{i}\right),\qquad a=x_{0}<t_{0}<x_{1}<t_{1}<x_{2}<\cdots<x_{n-1}<t_{n-1}<x_{n}=b$$

(Figure: a partition x_i of [a,b] with sample points t_i and mesh δ = x_{i+1} - x_i < ε.)
In the Riemann Integral we divide the interval [a,b] into n non-overlapping subintervals, which shrink as n increases. The value f(t_i) is computed at a point t_i inside each subinterval.
The Riemann Integral is not always defined, for example:

$$f\left(x\right)=\begin{cases}2 & x\ \text{rational}\\ 3 & x\ \text{irrational}\end{cases}$$

The Riemann Integral of this function is not defined.
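A small sketch of the definition (the integrand f(t) = t² on [0,1] is an assumed example, not taken from the slide): the Riemann sums converge to the exact value 1/3 as the partition is refined, whereas no such limit exists for the rational/irrational example above.

```python
# Sketch: left-endpoint Riemann sums for f(t) = t**2 on [0, 1] (exact integral 1/3).
import numpy as np

def riemann_sum(f, a, b, n):
    x = np.linspace(a, b, n + 1)      # partition a = x_0 < x_1 < ... < x_n = b
    t_i = x[:-1]                      # sample point t_i taken in [x_i, x_(i+1)]
    return np.sum(f(t_i) * np.diff(x))

for n in (10, 100, 1000, 10000):
    print(n, riemann_sum(lambda t: t**2, 0.0, 1.0, n))
```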
  • 82. Integration SOLO Stochastic Processes
Riemann–Stieltjes integral
Thomas Joannes Stieltjes 1856 – 1894, Bernhard Riemann 1826 – 1866
The Stieltjes integral is a generalization of the Riemann integral. Let f(x) and α(x) be real-valued functions defined on the closed interval [a,b]. Take a partition of the interval

$$a=x_{0}<x_{1}<\cdots<x_{n}=b$$

and consider the Riemann sum

$$\sum_{i=1}^{n}f\left(\xi_{i}\right)\left[\alpha\left(x_{i}\right)-\alpha\left(x_{i-1}\right)\right],\qquad\xi_{i}\in\left[x_{i-1},x_{i}\right]$$

If the sum tends to a fixed number I when max(x_i - x_{i-1}) → 0, then I is called a Stieltjes integral, or a Riemann-Stieltjes integral. The Stieltjes integral of f with respect to α is denoted

$$\int f\left(x\right)d\,\alpha\left(x\right)\qquad\text{or sometimes simply}\qquad\int f\,d\alpha$$

If f and α have a common point of discontinuity, then the integral does not exist. However, if f is continuous and α′ is Riemann integrable over the specified interval, then

$$\int f\left(x\right)d\,\alpha\left(x\right)=\int f\left(x\right)\alpha'\left(x\right)dx,\qquad\alpha'\left(x\right):=\frac{d\,\alpha\left(x\right)}{d\,x}$$
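A minimal sketch of the Stieltjes sum (f(x) = x and α(x) = x² on [0,1] are assumed example choices): for this smooth integrator the sum agrees with the equivalent Riemann integral of f(x)α′(x), here 2/3.

```python
# Sketch: Riemann-Stieltjes sum  sum_i f(xi_i) [alpha(x_i) - alpha(x_(i-1))].
import numpy as np

def stieltjes_sum(f, alpha, a, b, n):
    x = np.linspace(a, b, n + 1)
    xi = 0.5 * (x[:-1] + x[1:])                  # xi_i chosen in [x_(i-1), x_i]
    return np.sum(f(xi) * np.diff(alpha(x)))

print(stieltjes_sum(lambda x: x, lambda x: x**2, 0.0, 1.0, 1000))   # ~ 2/3
```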
  • 83. Functional Analysis SOLO
Lebesgue Integral: Measure
(Figure: level sets of y = f(x); the jumps M1 and M2 have measure µ[E(M1)] = µ[E(M2)] = 0.)
The main idea of the Lebesgue integral is the notion of Measure.
Definition 1: E(M) ⊂ [a,b] is the region in x ∈ [a,b] on which the function f(x) satisfies f(x) > M.
Definition 2: µ[E(M)], the measure of E(M), is

$$\mu\left[E\left(M\right)\right]=\int_{E\left(M\right)}dx\geq 0$$

We can see that µ[E(M)] is the sum of the lengths on the x axis for which f(x) > M. From the figure we can see that for the jumps M1 and M2, µ[E(M1)] = µ[E(M2)] = 0.
Example: Let us find the measure of the rational numbers (ratios of integers), which are countable:

$$r_{1}=\tfrac{1}{2},\ r_{2}=\tfrac{1}{3},\ r_{3}=\tfrac{2}{3},\ r_{4}=\tfrac{1}{4},\ r_{5}=\tfrac{3}{4},\ \ldots,\ r_{k}=\tfrac{m}{n},\ \ldots$$

Since the rational numbers are countable, we can choose ε > 0 as small as we want and construct an open interval of length ε/2 centered around r1, an interval of ε/2² centered around r2, …, an interval of ε/2^k centered around rk:

$$\mu\left[E\left(rationals\right)\right]\leq\frac{\varepsilon}{2}+\frac{\varepsilon}{2^{2}}+\cdots+\frac{\varepsilon}{2^{k}}+\cdots=\varepsilon
\qquad\xrightarrow[\ \varepsilon\to 0\ ]{}\qquad\mu\left[E\left(rationals\right)\right]=0$$
  • 84. Functional Analysis SOLO
Lebesgue Integral
Henri Léon Lebesgue 1875 – 1941
A function y = f(x) is said to be measurable if the set of points x at which f(x) < c is measurable for any and all choices of the constant c.
The Lebesgue Integral of a measurable function f(x) is defined by partitioning the range of f, with y_0 = inf f(x) and y_n = sup f(x) over a ≤ x ≤ b:

$$\int_{a}^{b}f\left(t\right)dt=\lim_{n\to\infty}\sum_{i=1}^{n}y_{i-1}\left\{\mu\left[E\left(y_{i-1}\right)\right]-\mu\left[E\left(y_{i}\right)\right]\right\},\qquad y_{0}<y_{1}<\cdots<y_{n}$$

(Figure: the range of f partitioned into levels y_k with the corresponding measures µ[E(y_k)].)
Example

$$f\left(x\right)=\begin{cases}2 & x\ \text{rational}\\ 3 & x\ \text{irrational}\end{cases},\qquad 0\leq x\leq 1$$

$$\int_{0}^{1}f\left(x\right)dx=\int_{E\left(rationals\right)}f\left(x\right)dx+\int_{E\left(irrationals\right)}f\left(x\right)dx=2\cdot 0+3\cdot\left(1-0\right)=3$$

For a continuous function the Riemann and Lebesgue integrals give the same results.
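The same idea can be checked numerically with a short sketch built on the slide's measure µ[E(M)] (the smooth test function f(x) = x² on [0,1] is an assumption made only for the demonstration): for a non-negative f the integral can be accumulated over a partition of the range, using the measure of the level sets {f > y}.

```python
# Sketch: Lebesgue-style integration by partitioning the range of f and using
# mu[E(y)] = measure of {x in [a, b] : f(x) > y}  (layer-cake form of the sum above).
import numpy as np

def lebesgue_integral(f, a, b, n_x=20000, n_y=2000):
    x = np.linspace(a, b, n_x)
    fx = f(x)
    y = np.linspace(0.0, fx.max(), n_y)
    mu = np.array([np.mean(fx > yi) * (b - a) for yi in y])   # measure of {f > y}
    return np.sum(mu) * (y[1] - y[0])                         # sum over the y-partition

print(lebesgue_integral(lambda x: x**2, 0.0, 1.0))            # ~ 1/3
```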
  • 85. Integration SOLO Stochastic Processes
Lebesgue-Stieltjes integration
Thomas Joannes Stieltjes 1856 – 1894, Henri Léon Lebesgue 1875 – 1941, Johann Karl August Radon 1887 – 1956
In measure-theoretic analysis and related branches of mathematics, Lebesgue-Stieltjes integration generalizes Riemann-Stieltjes and Lebesgue integration, preserving the many advantages of the latter in a more general measure-theoretic framework.
Let α(x) be a monotonic increasing function of x, and define an interval I = (x1, x2). Define the nonnegative function

$$U\left(I\right)=\alpha\left(x_{2}\right)-\alpha\left(x_{1}\right)$$

The Lebesgue integral with respect to a measure constructed using U(I) is called the Lebesgue-Stieltjes integral, or sometimes the Lebesgue-Radon integral.
  • 86. Integration SOLO Stochastic Processes
Darboux Integral
Jean-Gaston Darboux 1842 – 1917
(Figure: lower (green) and upper (green plus lavender) Darboux sums for four subintervals.)
In real analysis, a branch of mathematics, the Darboux integral or Darboux sum is one possible definition of the integral of a function. Darboux integrals are equivalent to Riemann integrals, meaning that a function is Darboux-integrable if and only if it is Riemann-integrable, and the values of the two integrals, if they exist, are equal. Darboux integrals have the advantage of being simpler to define than Riemann integrals. Darboux integrals are named after their discoverer, Gaston Darboux.
Definition
A partition of an interval [a,b] is a finite sequence of values xi such that

$$a=x_{0}<x_{1}<\cdots<x_{n}=b$$

Each interval [x_{i-1}, x_i] is called a subinterval of the partition. Let ƒ:[a,b]→R be a bounded function, and let P = (x_0, x_1, …, x_n) be a partition of [a,b]. Let

$$M_{i}:=\sup_{x\in\left[x_{i-1},x_{i}\right]}f\left(x\right),\qquad m_{i}:=\inf_{x\in\left[x_{i-1},x_{i}\right]}f\left(x\right)$$

The upper Darboux sum of ƒ with respect to P is

$$U_{f,P}:=\sum_{i=1}^{n}\left(x_{i}-x_{i-1}\right)M_{i}$$

The lower Darboux sum of ƒ with respect to P is

$$L_{f,P}:=\sum_{i=1}^{n}\left(x_{i}-x_{i-1}\right)m_{i}$$
  • 87. Integration SOLO Stochastic Processes
Darboux Integral (continue – 1)
Jean-Gaston Darboux 1842 – 1917
(Figure: lower (green) and upper (green plus lavender) Darboux sums for four subintervals.)
The upper Darboux sum of ƒ with respect to P is

$$U_{f,P}:=\sum_{i=1}^{n}\left(x_{i}-x_{i-1}\right)M_{i}$$

The lower Darboux sum of ƒ with respect to P is

$$L_{f,P}:=\sum_{i=1}^{n}\left(x_{i}-x_{i-1}\right)m_{i}$$

The upper Darboux integral of ƒ is

$$U_{f}=\inf\left\{U_{f,P}:P\ \text{is a partition of}\ \left[a,b\right]\right\}$$

The lower Darboux integral of ƒ is

$$L_{f}=\sup\left\{L_{f,P}:P\ \text{is a partition of}\ \left[a,b\right]\right\}$$

If U_f = L_f, then we say that ƒ is Darboux-integrable and set

$$\int_{a}^{b}f\left(t\right)dt=U_{f}=L_{f}$$

the common value of the upper and lower Darboux integrals.
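A brief sketch of the two sums (f(x) = x² on [0,1] is an assumed, monotone example, so the supremum and infimum on each subinterval sit at its endpoints): the upper and lower Darboux sums squeeze the common value 1/3 as the partition is refined.

```python
# Sketch: upper and lower Darboux sums U_{f,P}, L_{f,P} for a monotone f.
import numpy as np

def darboux_sums(f, a, b, n):
    x = np.linspace(a, b, n + 1)
    fL, fR = f(x[:-1]), f(x[1:])
    M = np.maximum(fL, fR)              # sup of f on each subinterval (monotone case)
    m = np.minimum(fL, fR)              # inf of f on each subinterval (monotone case)
    w = np.diff(x)
    return np.sum(w * M), np.sum(w * m)

for n in (10, 100, 1000):
    U, L = darboux_sums(lambda t: t**2, 0.0, 1.0, n)
    print(n, U, L, U - L)
```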
  • 88. Integration SOLO Stochastic Processes
Lebesgue Integration: Henri Léon Lebesgue 1875 – 1941
(Figure: illustration of a Riemann integral (blue) and a Lebesgue integral (red).)
Riemann Integral: Bernhard Riemann 1826 – 1866
(Figure: a sequence of Riemann sums; the numbers in the upper right are the areas of the grey rectangles, which converge to the integral of the function.)
Darboux Integral: Jean-Gaston Darboux 1842 – 1917
(Figure: lower (green) and upper (green plus lavender) Darboux sums for four subintervals.)
  • 89. SOLO Stochastic Processes
Richard Snowden Bucy, Andrew James Viterbi (1935 - ), Harold J. Kushner (1932 - ), Moshe Zakai (1926 - ), Jose Enrique Moyal (1910 – 1998), Rudolf E. Kalman (1930 - ), Maurice Stevenson Bartlett (1910 – 2002), George Eugène Uhlenbeck (1900 – 1988), Leonard Salomon Ornstein (1880 – 1941), Bernard Osgood Koopman (1900 – 1981), Edwin James George Pitman (1897 – 1993), Georges Darmois (1888 – 1960)

Editor's Notes

  1. Di Franco & Rabin, “Radar Detection”, pg. 117 Sage & Melsa, “Estimation Theory with Applications to Communications and Control”, McGraw-Hill, 1971, pg. 42
  2. http://en.wikipedia.org/wiki/Stochastic_differential_equation
  3. http://en.wikipedia.org/wiki/Stochastic_differential_equation
  4. http://en.wikipedia.org/wiki/Robert_Brown_(botanist)
  5. Athanasious Papoulis, “Probability, Random Variables and Stochastic Processes”, McGraw-Hill, 1965, pp. 290-292
  6. Athanasious Papoulis, “Probability, Random Variables and Stochastic Processes”, McGraw-Hill, 1965, pp. 290-292
  7. Athanasious Papoulis, “Probability, Random Variables and Stochastic Processes”, McGraw-Hill, 1965, pp. 290-292
  8. http://en.wikipedia.org/wiki/Smoluchowski_equation
  9. http://en.wikipedia.org/wiki/Einstein_relation_(kinetic_theory)
  10. Papoulis, “Probability , Random Variables and Stochastic Processes”, McGraw-Hill, Inc., 1965, pp. 516-519 http://en.wikipedia.org/wiki/Paul_Langevin
  11. http://en.wikipedia.org/wiki/Paul_Langevin
  12. http://en.wikipedia.org/wiki/Paul_Langevin
  13. http://en.wikipedia.org/wiki/L%C3%A9vy_process
  14. http://en.wikipedia.org/wiki/L%C3%A9vy_process
  15. http://en.wikipedia.org/wiki/L%C3%A9vy_process
  16. http://en.wikipedia.org/wiki/Martingale_(probability_theory)
  17. http://en.wikipedia.org/wiki/Martingale_(probability_theory)
  18. http://en.wikipedia.org/wiki/Martingale_(probability_theory)
  19. http://en.wikipedia.org/wiki/Chapman-Kolmogorov_equation
  20. McGarty, T., “Stochastic Systems and State Estimation”, John Wiley & Sons, 1974, pp. 75-76 http://en.wikipedia.org/wiki/Chapman-Kolmogorov_equation
  21. http://www-groups.dcs.st-and.ac.uk/~history/Biographies/Ito.html
  22. http://en.wikipedia.org/wiki/Ito%27s_lemma
  23. http://en.wikipedia.org/wiki/Ito%27s_lemma
  24. http://www.peoples.ru/science/physics/ruslan_stratonovich/ http://en.wikipedia.org/wiki/Ruslan_L._Stratonovich
  25. http://en.wikipedia.org/wiki/Adriaan_Fokker http://en.wikipedia.org/wiki/Max_Planck http://jeff560.tripod.com/f.html http://en.wikipedia.org/wiki/Fokker-Planck_equation
  26. http://en.wikipedia.org/wiki/Adriaan_Fokker http://en.wikipedia.org/wiki/Max_Planck http://en.wikipedia.org/wiki/Fokker-Planck_equation
  27. Sage, A.P., & Melsa, J.L., “Estimation Theory with Applications to Communications and Control”, McGraw Hill, 1971, pp. 77-82
  28. Sage, A.P., & Melsa, J.L., “Estimation Theory with Applications to Communications and Control”, McGraw Hill, 1971, pp. 77-82
  29. Sage, A.P., & Melsa, J.L., “Estimation Theory with Applications to Communications and Control”, McGraw Hill, 1971, pp. 77-82
  30. Sage, A.P., & Melsa, J.L., “Estimation Theory with Applications to Communications and Control”, McGraw Hill, 1971, pp. 77-82
  31. Sage, A.P., & Melsa, J.L., “Estimation Theory with Applications to Communications and Control”, McGraw Hill, 1971, pp. 77-82
  32. Sage, A.P., & Melsa, J.L., “Estimation Theory with Applications to Communications and Control”, McGraw Hill, 1971, pp. 77-82
  33. http://en.wikipedia.org/wiki/Fokker-Planck_equation http://en.wikipedia.org/wiki/Kolmogorov_backward_equation
  34. http://en.wikipedia.org/wiki/Fokker-Planck_equation http://en.wikipedia.org/wiki/Kolmogorov_backward_equation
  35. McGarty, T., “Stochastic Systems and State Estimation”, John Wiley & Sons, 1974
  36. McGarty, T., “Stochastic Systems and State Estimation”, John Wiley & Sons, 1974
  37. McGarty, T., “Stochastic Systems and State Estimation”, John Wiley & Sons, 1974
  38. McGarty, T., “Stochastic Systems and State Estimation”, John Wiley & Sons, 1974
  39. McGarty, T., “Stochastic Systems and State Estimation”, John Wiley & Sons, 1974
  40. McGarty, T., “Stochastic Systems and State Estimation”, John Wiley & Sons, 1974
  41. McGarty, T., “Stochastic Systems and State Estimation”, John Wiley & Sons, 1974
  42. McGarty, T., “Stochastic Systems and State Estimation”, John Wiley & Sons, 1974
  43. McGarty, T., “Stochastic Systems and State Estimation”, John Wiley & Sons, 1974
  44. McGarty, T., “Stochastic Systems and State Estimation”, John Wiley & Sons, 1974
  45. McGarty, T., “Stochastic Systems and State Estimation”, John Wiley & Sons, 1974
  46. McGarty, T., “Stochastic Systems and State Estimation”, John Wiley & Sons, 1974
  47. McGarty, T., “Stochastic Systems and State Estimation”, John Wiley & Sons, 1974
  48. McGarty, T., “Stochastic Systems and State Estimation”, John Wiley & Sons, 1974
  49. McGarty, T., “Stochastic Systems and State Estimation”, John Wiley & Sons, 1974
  50. McGarty, T., “Stochastic Systems and State Estimation”, John Wiley & Sons, 1974, pp. 186 – 192 Sage, A.P., & Melsa, J.L., “Estimation Theory with Applications to Communications and Control”, McGraw Hill, 1971, pp. 77-82
  51. McGarty, T., “Stochastic Systems and State Estimation”, John Wiley & Sons, 1974, pp. 186 – 192 Sage, A.P., & Melsa, J.L., “Estimation Theory with Applications to Communications and Control”, McGraw Hill, 1971, pp. 77-82
  52. McGarty, T., “Stochastic Systems and State Estimation”, John Wiley & Sons, 1974, pp. 186 – 192 Sage, A.P., & Melsa, J.L., “Estimation Theory with Applications to Communications and Control”, McGraw Hill, 1971, pp. 77-82
  53. McGarty, T., “Stochastic Systems and State Estimation”, John Wiley & Sons, 1974, pp. 186 – 192 Sage, A.P., & Melsa, J.L., “Estimation Theory with Applications to Communications and Control”, McGraw Hill, 1971, pp. 77-82
  54. McGarty, T., “Stochastic Systems and State Estimation”, John Wiley & Sons, 1974, pp. 186 – 192 Sage, A.P., & Melsa, J.L., “Estimation Theory with Applications to Communications and Control”, McGraw Hill, 1971, pp. 77-82
  55. McGarty, T., “Stochastic Systems and State Estimation”, John Wiley & Sons, 1974, pp. 186 – 192 Sage, A.P., & Melsa, J.L., “Estimation Theory with Applications to Communications and Control”, McGraw Hill, 1971, pp. 77-82
  56. McGarty, T., “Stochastic Systems and State Estimation”, John Wiley & Sons, 1974, pp. 186 - 192
  57. McGarty, T., “Stochastic Systems and State Estimation”, John Wiley & Sons, 1974, pp. 186 - 192
  58. McGarty, T., “Stochastic Systems and State Estimation”, John Wiley & Sons, 1974, pp. 186 - 192
  59. McGarty, T., “Stochastic Systems and State Estimation”, John Wiley & Sons, 1974, pp. 186 - 192
  60. McGarty, T., “Stochastic Systems and State Estimation”, John Wiley & Sons, 1974, pp. 186 - 192
  61. McGarty, T., “Stochastic Systems and State Estimation”, John Wiley & Sons, 1974, pp. 186 - 192
  62. McGarty, T., “Stochastic Systems and State Estimation”, John Wiley & Sons, 1974, pp. 186 - 192
  63. McGarty, T., “Stochastic Systems and State Estimation”, John Wiley & Sons, 1974, pp. 186 - 192
  64. McGarty, T., “Stochastic Systems and State Estimation”, John Wiley & Sons, 1974, pp. 186 - 192
  65. Papoulis, A., “Probability, Random Variables, and Stochastic Processes”, McGraw Hill, 1965, Ch. 13 McGarty, T., “Stochastic Systems and State Estimation”, John Wiley & Sons, 1974, pp. 161, 368 Sage, A.P. and Melsa, J.L., “Estimation Theory with Applications to Communications and Control”, McGraw Hill, 1971, pp. 43, 44 http://en.wikipedia.org/wiki/Karhunen-Lo%C3%A8ve_theorem http://owpdb.mfo.de/person_detail?id=5543 http://owpdb.mfo.de/detail?photo_id=5545
  66. Papoulis, A., “Probability, Random Variables, and Stochastic Processes”, McGraw Hill, 1965, Ch. 13 McGarty, T., “Stochastic Systems and State Estimation”, John Wiley & Sons, 1974, pp. 161, 368 Sage, A.P. and Melsa, J.L., “Estimation Theory with Applications to Communications and Control”, McGraw Hill, 1971, pp. 43, 44 http://en.wikipedia.org/wiki/Karhunen-Lo%C3%A8ve_theorem http://owpdb.mfo.de/person_detail?id=5543 http://owpdb.mfo.de/detail?photo_id=5545
  67. Papoulis, A., “Probability, Random Variables, and Stochastic Processes”, McGraw Hill, 1965, Ch. 13 McGarty, T., “Stochastic Systems and State Estimation”, John Wiley & Sons, 1974, pp. 161, 368 Sage, A.P. and Melsa, J.L., “Estimation Theory with Applications to Communications and Control”, McGraw Hill, 1971, pp. 43, 44 http://en.wikipedia.org/wiki/Karhunen-Lo%C3%A8ve_theorem http://owpdb.mfo.de/person_detail?id=5543 http://owpdb.mfo.de/detail?photo_id=5545
  68. Papoulis, A., “Probability, Random Variables, and Stochastic Processes”, McGraw Hill, 1965, Ch. 13 McGarty, T., “Stochastic Systems and State Estimation”, John Wiley & Sons, 1974, pp. 161, 368 Sage, A.P. and Melsa, J.L., “Estimation Theory with Applications to Communications and Control”, McGraw Hill, 1971, pp. 43, 44 http://en.wikipedia.org/wiki/Karhunen-Lo%C3%A8ve_theorem http://owpdb.mfo.de/person_detail?id=5543 http://owpdb.mfo.de/detail?photo_id=5545
  69. http://en.wikipedia.org/wiki/It%C5%8D_calculus http://mathworld.wolfram.com/StieltjesIntegral.html http://en.wikipedia.org/wiki/Riemann-Stieltjes_integral
  70. Sokolnikoff, I.S., Redheffer, R.M., “Mathematics of Physics and Modern Engineering”, 2nd Ed., McGraw-Hill, App. B:”Comparison of Riemann and Lebesgue integrals”
  71. Sokolnikoff, I.S., Redheffer, R.M., “Mathematics of Physics and Modern Engineering”, 2nd Ed., McGraw-Hill, App. B:”Comparison of Riemann and Lebesgue integrals”
  72. http://en.wikipedia.org/wiki/It%C5%8D_calculus http://mathworld.wolfram.com/Lebesgue-StieltjesIntegral.html http://en.wikipedia.org/wiki/Lebesgue-Stieltjes_integral
  73. http://en.wikipedia.org/wiki/It%C5%8D_calculus http://en.wikipedia.org/wiki/Darboux_integral
  74. http://en.wikipedia.org/wiki/It%C5%8D_calculus http://en.wikipedia.org/wiki/Darboux_integral
  75. http://en.wikipedia.org/wiki/It%C5%8D_calculus http://en.wikipedia.org/wiki/Riemann_integral http://en.wikipedia.org/wiki/Lebesgue_integral http://en.wikipedia.org/wiki/Lebesgue_integral http://en.wikipedia.org/wiki/Darboux_integral
  76. Papoulis, “Probability , Random Variables and Stochastic Processes”, McGraw-Hill, Inc., 1965, pp. 516-519 http://en.wikipedia.org/wiki/Paul_Langevin