
STOCHASTIC DIFFERENTIAL EQUATIONS IN SCIENCE AND ENGINEERING

Douglas Henderson
Brigham Young University, USA

Peter Plaschko
Universidad Autonoma Metropolitana, Mexico

World Scientific
NEW JERSEY • LONDON • SINGAPORE • BEIJING • SHANGHAI • HONG KONG • TAIPEI • CHENNAI
Published by
World Scientific Publishing Co. Pte. Ltd.
5 Toh Tuck Link, Singapore 596224
USA office: 27 Warren Street, Suite 401-402, Hackensack, NJ 07601
UK office: 57 Shelton Street, Covent Garden, London WC2H 9HE

British Library Cataloguing-in-Publication Data
A catalogue record for this book is available from the British Library.

STOCHASTIC DIFFERENTIAL EQUATIONS IN SCIENCE AND ENGINEERING (With CD-ROM)

Copyright © 2006 by World Scientific Publishing Co. Pte. Ltd.

All rights reserved. This book, or parts thereof, may not be reproduced in any form or by any means, electronic or mechanical, including photocopying, recording or any information storage and retrieval system now known or to be invented, without written permission from the Publisher.

For photocopying of material in this volume, please pay a copying fee through the Copyright Clearance Center, Inc., 222 Rosewood Drive, Danvers, MA 01923, USA. In this case permission to photocopy is not required from the publisher.

ISBN 981-256-296-6

Printed in Singapore by World Scientific Printers (S) Pte Ltd
To Rose-Marie Henderson
A good friend and spouse
PREFACE

This book arose from a friendship formed when we were both faculty members of the Department of Physics, Universidad Autonoma Metropolitana, Iztapalapa Campus, in Mexico City. Plaschko was teaching an intermediate to advanced course in mathematical physics. He had written, with Klaus Brod, a book entitled "Hoehere Mathematische Methoden fuer Ingenieure und Physiker" that Henderson admired, and he suggested that it be translated into English, updated, and perhaps expanded somewhat. However, we both prefer new projects, and this suggested instead that a book on stochastic differential equations be written, and this project was born.

This is an important emerging field. From its inception with Newton, physical science was dominated by the idea of determinism. Everything was thought to be determined by a set of second order differential equations, Newton's equations, from which everything could be determined, at least in principle, if the initial conditions were known. To be sure, an actual analytic solution would not be possible for a complex system, since the number of dynamical equations would be enormous; even so, determinism prevailed. This idea took hold even to the point that some philosophers began to speculate that humans have no free will; our lives are determined entirely by some set of initial conditions. In this view, even before the authors started to write, the contents of this book were determined by a set of initial conditions in the distant past. Dogmatic Marxism endorsed such ideas, although perhaps not so extremely.

Deterministic Newtonian mechanics yielded brilliant successes. Most astronomical events could be predicted with great accuracy. Even in the case of a few difficulties, such as the orbit of Mercury, Newtonian mechanics could be replaced satisfactorily by equally deterministic general relativity. A little more than a century ago, the case for determinism was challenged. The seemingly random Brownian motion of suspended particles was observed, as was the sudden transition of the flow of a fluid past an object or obstacle from laminar flow to chaotic turbulence. Recent studies have shown that some seemingly chaotic motion is not necessarily inconsistent with determinism (we can call this quasi-chaos). Even so, such problems are best studied using probabilistic notions. Quantum theory has shown that the motion of particles at the atomic level is fundamentally nondeterministic. Heisenberg showed that there were limits to the precision with which physical properties could be determined. One can only assign a probability for the value of a physical quantity. The consequences of this idea can be manifest even on a macroscopic scale; the third law of thermodynamics is an example.

Stochastic differential equations, the subject of this monograph, are an interesting extension of the deterministic differential equations that can be applied to Brownian motion as well as other problems. The field arose from the work of Einstein and Smoluchowski, among others. Recent years have seen rapid advances due to the development of the calculi of Ito and Stratonovich.

We were both trained as mathematicians and scientists, and our goal is to present the ideas of stochastic differential equations in a short monograph in a manner that is useful for scientists and engineers, rather than mathematicians, and without overpowering mathematical rigor. We presume that the reader has some, but not extensive, knowledge of probability theory. Chapter 1 provides a reminder of, and an introduction to and definition of, some fundamental ideas and quantities, including the ideas of Ito and Stratonovich. Stochastic differential equations and the Fokker-Planck equation are presented in Chapters 2 and 3. More advanced applications follow in Chapter 4. The book concludes with a presentation of some numerical routines for the solution of ordinary stochastic differential equations. Each chapter contains a set of exercises whose purpose is to aid the reader in understanding the material. A CD-ROM that provides MATHEMATICA and FORTRAN programs to assist the reader with the exercises, numerical routines and generating figures accompanies the text.

Douglas Henderson, Provo, Utah, USA
Peter Plaschko, Mexico City DF, Mexico
June, 2006
CONTENTS

Preface vii
Introduction xv
Glossary xxi

1. Stochastic Variables and Stochastic Processes 1
   1.1. Probability Theory 1
   1.2. Averages 4
   1.3. Stochastic Processes, the Kolmogorov Criterion and Martingales 9
   1.4. The Gaussian Distribution and Limit Theorems 14
        1.4.1. The central limit theorem 16
        1.4.2. The law of the iterated logarithm 17
   1.5. Transformation of Stochastic Variables 17
   1.6. The Markov Property 19
        1.6.1. Stationary Markov processes 20
   1.7. The Brownian Motion 21
   1.8. Stochastic Integrals 28
   1.9. The Ito Formula 38
   Appendix 45
   Exercises 49

2. Stochastic Differential Equations 55
   2.1. One-Dimensional Equations 56
        2.1.1. Growth of populations 56
        2.1.2. Stratonovich equations 58
        2.1.3. The problem of Ornstein-Uhlenbeck and the Maxwell distribution 59
        2.1.4. The reduction method 63
        2.1.5. Verification of solutions 65
   2.2. White and Colored Noise, Spectra 67
   2.3. The Stochastic Pendulum 70
        2.3.1. Stochastic excitation 72
        2.3.2. Stochastic damping ($\beta = \gamma = 0$; $\alpha \neq 0$) 73
   2.4. The General Linear SDE 76
   2.5. A Class of Nonlinear SDE 79
   2.6. Existence and Uniqueness of Solutions 84
   Exercises 87

3. The Fokker-Planck Equation 91
   3.1. The Master Equation 91
   3.2. The Derivation of the Fokker-Planck Equation 95
   3.3. The Relation Between the Fokker-Planck Equation and Ordinary SDEs 98
   3.4. Solutions to the Fokker-Planck Equation 104
   3.5. Lyapunov Exponents and Stability 107
   3.6. Stochastic Bifurcations 110
        3.6.1. First order SDEs 110
        3.6.2. Higher order SDEs 112
   Appendix A. Small Noise Intensities and the Influence of Randomness on Limit Cycles 117
   Appendix B.1. The method of Lyapunov functions 124
   Appendix B.2. The method of linearization 128
   Exercises 130

4. Advanced Topics 135
   4.1. Stochastic Partial Differential Equations 135
   4.2. Stochastic Boundary and Initial Conditions 141
        4.2.1. A deterministic one-dimensional wave equation 141
        4.2.2. Stochastic initial conditions 144
   4.3. Stochastic Eigenvalue Equations 147
        4.3.1. Introduction 147
        4.3.2. Mathematical methods 148
        4.3.3. Examples of exactly soluble problems 152
        4.3.4. Probability laws and moments of the eigenvalues 156
   4.4. Stochastic Economics 160
        4.4.1. Introduction 160
        4.4.2. The Black-Scholes market 162
   Exercises 164

5. Numerical Solutions of Ordinary Stochastic Differential Equations 167
   5.1. Random Number Generators and Applications 167
        5.1.1. Testing of random numbers 168
   5.2. The Convergence of Stochastic Sequences 173
   5.3. The Monte Carlo Integration 175
   5.4. The Brownian Motion and Simple Algorithms for SDEs 179
   5.5. The Ito-Taylor Expansion of the Solution of a 1D SDE 181
   5.6. Modified 1D Milstein Schemes 187
   5.7. The Ito-Taylor Expansion for N-dimensional SDEs 189
   5.8. Higher Order Approximations 193
   5.9. Strong and Weak Approximations and the Order of the Approximation 196
   Exercises 201

References 205
Fortran Programs 211
Index 213
INTRODUCTION

The theory of deterministic chaos has enjoyed during the last three decades a rapidly increasing audience of mathematicians, physicists, engineers, biologists, economists, etc. However, this type of "chaos" can be understood only as quasi-chaos in which all states of a system can be predicted and reproduced by experiments.

Meanwhile, many experiments in the natural sciences have brought about hard evidence of stochastic effects. The best known example is perhaps the Brownian motion, where pollen submerged in a fluid experience collisions with the molecules of the fluid and thus exhibit random motions. Other familiar examples come from fluid or plasma dynamic turbulence, optics, motions of ions in crystals, filtering theory, the problem of optimal pricing in economics, etc. The study of stochasticity was initiated in the early years of the 1900s. Einstein [1], Smoluchowsky [2] and Langevin [3] wrote pioneering investigations. This work was later resumed and extended by Ornstein and Uhlenbeck [4]. But the investigation of stochastic effects in natural science became more popular only in the last three decades. Meanwhile, studies are undertaken to calculate, or at least approximate, the effect of stochastic forces on otherwise deterministic oscillators, and to investigate the stability or the transition to stochastic chaos of the latter oscillators.

To motivate the following considerations of stochastic differential equations (SDE) we introduce a few examples from the natural sciences.

(a) Pendulum with Stochastic Excitations

We study the linearized pendulum motion $x(t)$ subjected to a stochastic effect, called white noise,

$$\ddot{x} + x = \beta \xi_t,$$

where $\beta$ is an intensity constant, $t$ is the time and $\xi_t$ stands for the white noise, with a single frequency and constant spectrum. For $\beta = 0$ we obtain the homogeneous deterministic (non-stochastic) traditional pendulum motion. We can expect that the stochastic effect disturbs this motion and destroys the periodicity of the motion in the phase space $(x, \dot{x})$. The latter has closed solutions called limit cycles. It is an interesting task to investigate whether the solutions disintegrate into scattered points (stochastic chaos). We will cover this problem later in Section 2.3 and find that the average motion (in a sense to be defined in Section 1.2 of Chapter 1) of the pendulum is determined by the deterministic limit ($\beta = 0$) of the stochastic pendulum equation.

(b) Stochastic Growth of Populations

$N(t)$ is the number of the members of a population at the time $t$, $\alpha$ is the constant of the deterministic growth and $\beta$ is again a constant characterizing the intensity of the white noise. Thus we study the growth problem in terms of the linear scenario

$$\dot{N} = (\alpha + \beta \xi_t) N.$$

The deterministic limit ($\beta = 0$) of this equation describes the growth of a population living in an unrestricted area with unrestricted food supply. Its solution (the number of such a population) grows exponentially. The stochastic effect, or the white noise, describes a stochastically varying food supply that influences the growth of the population. We will consider this problem in Section 2.1.1 and find again that the average of the population is given by the deterministic limit.

(c) Diffraction of Optical Waves

The transfer function $T(\omega)$, $\omega = (\omega_1, \omega_2)$, of a two-dimensional optical device is defined by

$$T(\omega) = \frac{1}{N}\int_{-\infty}^{\infty}\!dx \int_{-\infty}^{\infty}\!dy\, F(x,y)\, F^*(x - \omega_1, y - \omega_2); \qquad N = \int_{-\infty}^{\infty}\!dx \int_{-\infty}^{\infty}\!dy\, |F(x,y)|^2,$$
where $F$ is a complex wave amplitude and $F^* = \mathrm{cc}(F)$ is its complex conjugate. The parameter $N$ denotes the normalization of $|F(x,y)|^2$ and the variables $x$ and $y$ stand for the coordinates of the image plane. In a simplified treatment, we assume that the wave form is given by

$$F = |F| \exp(-ik\Delta); \qquad |F|, k = \mathrm{const},$$

where $k$ and $\Delta$ stand for the wave number and the phase of the waves, respectively. We suppose that the wave emerging from the optical instrument (e.g. a lens) exhibits a phase with two different deviations from a spherical structure, $\Delta = \Delta_c + \Delta_r$, with a controlled or deterministic phase $\Delta_c(x,y)$ and a random phase $\Delta_r(x,y)$ that arises from polishing the optical device or from atmospheric influences. Thus, we obtain

$$T(\omega) = \frac{1}{K}\int_{-\infty}^{\infty}\!dx \int_{-\infty}^{\infty}\!dy \exp\{ik[\Delta(x - \omega_1, y - \omega_2) - \Delta(x,y)]\},$$

where $K$ is used to include the normalization. In simple applications we can model the random phase using white noise with a Gaussian probability density. To evaluate the average of the transfer function $\langle T(\omega) \rangle$ we need to calculate the quantity

$$\langle \exp\{ik[\Delta_r(x - \omega_1, y - \omega_2) - \Delta_r(x,y)]\} \rangle.$$

We will study the Gaussian probability density and complete the task of determining the average written in the last line in Section 1.3 of Chapter 1. An introduction to random effects in optics can be found in O'Neill [5].

(d) Filtering Problems

Suppose that we have performed experiments on a stochastic problem such as the one in (a) in an interval $t \in [0, u]$ and we obtain as result, say, $A(v)$, $v \in [0, u]$. To improve the knowledge about the solution we repeat the experiments for $t \in [u, T]$ and we obtain $A(t)$, $t \in [u, T]$. Yet due to inevitable experimental errors we do not obtain $A(t)$ but a result that includes an error: $A(t) + \mathrm{noise}$. The question is now, how can we filter the noise away? A filter is thus an instrument to clean a result and remove the noise that arises during the observation. A typical problem is one where a signal with unknown frequency is transmitted (e.g. by an electronic device) and it suffers during the transmission the addition of a noise. If the transmitted signal is stochastic itself (as in the case of music) we need to develop a non-deterministic model for the signal with the aid of a stochastic differential equation. To study the basic ideas of filtering problems the reader is referred to the book of Stremler [6].
(e) Fluidmechanical Turbulence

This is perhaps the most challenging and most intricate application of statistical science. We consider here the continuum dynamics of a flow field influenced by stochastic effects. The latter arise from initial conditions (e.g. at the nozzle of a jet flow, or at the entry region of a channel flow) and/or from background noise (e.g. acoustic waves). In the simplest case, that of incompressible two-dimensional flows, there are three characteristic variables (two velocity components and the pressure). These variables are governed by the Navier-Stokes equations (NSEs). The latter are a set of three nonlinear partial differential equations that include a parameter, the Reynolds number R. The inverse of R is the coefficient of the highest derivatives of the NSEs. Since turbulence occurs at intermediate to high values of R, this phenomenon is the rule and not the exception in fluid dynamics, and it occurs in parameter regions where the NSEs are singular. Nonlinear SDEs, such as the NSEs, lead additionally to the problem of closure, where the equation governing the statistical moment of nth order contains moments of the (n + 1)th order.

Hopf [7] was the first to try to find a theoretical approach to solve the problem for the idealized case of isotropic homogeneous turbulence, a flow configuration that can be approximately realized in grid flows. Hopf assumed that the turbulence is Gaussian, an assumption that facilitates the calculation of higher statistical moments of the distribution (see Section 1.3 in Chapter 1). However, later measurements showed that the assumption of a Gaussian distribution was rather unrealistic. Kraichnan [8] studied the problem again in the 60s and 70s with the direct triad interaction theory in the idealized configuration of homogeneous isotropic turbulence. However, this rather involved analysis could only be applied to calculate the spectrum of very small eddies, where the viscosity dominates the flow. Somewhat more progress has been achieved by the investigation of Rudenko and Chirin [9]. The latter predicted, with the aid of stochastic initial conditions with random phases, a broad-banded spectrum of a nonlinear model equation. During the last two decades intensive work was done to investigate the Burgers equation, and this research is summarized in part by Wojczinsky [10]. The Burgers equation is supposed to be a reasonable one-dimensional model of the NSEs. We will give a short account of the work done in [9] in Chapter 4.
GLOSSARY

AC  almost certainly
BC  boundary condition
$dB_t = dW_t = \xi_t\,dt$  differential of the Brownian motion (or equivalently Wiener process)
$\mathrm{cc}(a) = a^*$  complex conjugate of $a$
D  dimension or dimensional
DF  distribution function
DOF  degrees of freedom
$\delta_{ij}$  Kronecker delta function
$\delta(x)$  Dirac delta function
EX  exercise at the end of a chapter
FPE  Fokker-Planck equation
$\Gamma(x)$  gamma function
GD  Gaussian distribution
GPD  Gaussian probability distribution
HPP  homogeneous Poisson process
$H_n(x)$  Hermite polynomial of order $n$
IC  initial condition
IID  identically independently distributed
IFF  if and only if
IMSL  international mathematical science library
$\mathcal{L}$  Laplace transform
M  master, as in master equation
MCM  Monte Carlo method
NSE  Navier-Stokes equation
NIGD  normal inverted GD
$N(\mu, \sigma)$  normal distribution with $\mu$ as mean and $\sigma$ as variance
$\circ$  Stratonovich theory
ODE  ordinary differential equation
PD  probability distribution
PDE  partial differential equation
PDF  probability distribution function
PSDE  partial SDE
R  Reynolds number
RE  random experiment
RN  random number
RV  random variable
$\mathrm{Re}(a)$  real part of a complex number $a$
$\mathbb{R}, \mathbb{C}$  sets of real and complex numbers, respectively
S  Prandtl number
SF  stochastic function
SI  stochastic integral
SDE  stochastic differential equation
SLLN  strong law of large numbers
TPT  transition probability per unit time
WP  Wiener process
WS  Wiener sheet
WKB  Wentzel, Kramers, Brillouin
WRT  with respect to
$W(t)$  Wiener white (single frequency) noise
$\langle a \rangle$  average of a stochastic variable $a$
$\sigma^2 = \langle a^2 \rangle - \langle a \rangle \langle a \rangle$  variance
$\langle x \mid y \rangle, \langle x, u \mid y, v \rangle$  conditional averages
$s \wedge t$  minimum of $s$ and $t$
$\forall$  for all values of
$\in$  element of
$\int f(x)\,dx$  shorthand for $\int_{-\infty}^{\infty} f(x)\,dx$
♣  end of an example
•  end of a definition
♦  end of a theorem
CHAPTER 1

STOCHASTIC VARIABLES AND STOCHASTIC PROCESSES

1.1. Probability Theory

An experiment (or a trial of some process) is performed whose outcome (results) is uncertain: it depends on chance. A collection of all possible elementary (or individual) outcomes is called the sample space (or phase space, or range) and is denoted by $\Omega$. If the experiment is tossing a pair of distinguishable dice, then $\Omega = \{(i,j) \mid 1 \le i, j \le 6\}$. For the case of an experiment with a fluctuating pressure, $\Omega$ is the set of all real functions, $\Omega = (0, \infty)$. An observable event A is a subset of $\Omega$; this is written in the form $A \subset \Omega$. In the dice example we could choose an event, for example, as $A = \{(i,j) \mid i + j = 4\}$. For the case of fluctuating pressures we could use the subset $A = (p_0 > 0, \infty)$.

Not every subset of $\Omega$ is observable (or interesting). An example of a non-observable event appears when a pair of dice are tossed and only their spots are counted, $\Omega = \{(i,j),\, 2 \le i + j \le 12\}$. Then elementary outcomes like (1, 2), (2, 1) or (3, 1), (2, 2), (1, 3) are not distinguished.

Let $\Gamma$ be the set of observable events for one single experiment. Then $\Gamma$ must include the certain event $\Omega$ and the impossible event $\emptyset$ (the empty set). For every $A \in \Gamma$, $A^c$, the complement of A, satisfies $A^c \in \Gamma$, and for every $B \in \Gamma$ the union and intersection of events, $A \cup B$ and $A \cap B$, must pertain also to $\Gamma$. $\Gamma$ is called an algebra of events. In many cases there are countable unions and intersections in $\Gamma$. Then it is sufficient to assume that

$$\bigcup_{n=1}^{\infty} A_n \in \Gamma, \quad \text{if } A_n \in \Gamma.$$

An algebra with this property is called a sigma algebra. In measure theory, the elements of $\Gamma$ are called measurable sets and the pair $(\Gamma, \Omega)$ is called a measurable space.

A finite measure Pr(A) defined on $\Gamma$ with

$$0 \le \Pr(\mathrm{A}) \le 1, \quad \Pr(\emptyset) = 0, \quad \Pr(\Omega) = 1,$$

is called the probability, and the triple $(\Gamma, \Omega, \Pr)$ is referred to as the probability space. The set function Pr assigns to every event A the real number Pr(A). The rules for this set function are, along with the formula above,

$$\Pr(\mathrm{A}^c) = 1 - \Pr(\mathrm{A}); \quad \Pr(\mathrm{A}) \le \Pr(\mathrm{B}); \quad \Pr(\mathrm{B} \setminus \mathrm{A}) = \Pr(\mathrm{B}) - \Pr(\mathrm{A}) \quad \text{for } \mathrm{A} \subset \mathrm{B} \in \Gamma.$$

The probability measure Pr on $\Omega$ is thus a function $\Gamma \to [0, 1]$ and it is generally derived with Lebesgue integrations that are defined on Borel sets.

We introduced this formal concept because it can be used as the most general way to introduce axiomatically the probability theory (see e.g. Chung [1.1]). We will not follow this procedure but we will introduce heuristically stochastic variables and their probabilities.

Definition 1.1. (Stochastic variables)
A random (or stochastic) variable $X(\omega)$, $\omega \in \Omega$, is a real valued function defined on the sample space $\Omega$. In the following we omit the parameter $\omega$ whenever no confusion is possible. •

Definition 1.2. (Probability of an event)
The probability of an event equals the number of elementary outcomes divided by the total number of all elementary outcomes, provided that all cases are equally likely. •

Example
For the case of a discrete sample space with a finite number of elementary outcomes we have $\Omega = \{\omega_1, \ldots, \omega_n\}$ and an event is given by $A = \{\omega_1, \ldots, \omega_k\}$, $1 \le k \le n$. The probability of the event A is then $\Pr(A) = k/n$. ♣
Definition 1.3. (Probability distribution function and probability density)
In the continuous case, the probability distribution function (PDF) $F_X(x)$ of a vectorial stochastic variable $X = (X_1, \ldots, X_n)$ is defined by the monotonically increasing real function

$$F_X(x_1, \ldots, x_n) = \Pr(X_1 \le x_1, \ldots, X_n \le x_n), \qquad (1.1)$$

where we used the convention that the variable itself is written in upper case letters, whereas the actual values that this variable assumes are denoted by lower case letters.

The probability density $p_X(x_1, \ldots, x_n)$ (PD) of the random variable is then defined by

$$F_X(x_1, \ldots, x_n) = \int_{-\infty}^{x_1} \cdots \int_{-\infty}^{x_n} p_X(u_1, \ldots, u_n)\, du_1 \cdots du_n, \qquad (1.2)$$

and this leads to

$$\frac{\partial^n F_X}{\partial x_1 \cdots \partial x_n} = p_X(x_1, \ldots, x_n). \qquad (1.3)$$

Note that we can express (1.1) and (1.2) alternatively if we put

$$\Pr(x_{11} < X_1 \le x_{12}, \ldots, x_{n1} < X_n \le x_{n2}) = \int_{x_{11}}^{x_{12}} \cdots \int_{x_{n1}}^{x_{n2}} p_X(x_1, \ldots, x_n)\, dx_1 \cdots dx_n. \qquad (1.1a)$$

The conditions to be imposed on the PD are given by the positiveness and the normalization condition

$$p_X(x_1, \ldots, x_n) \ge 0; \quad \int \cdots \int p_X(x_1, \ldots, x_n)\, dx_1 \cdots dx_n = 1. \qquad (1.4)$$

In the latter equation we used the convention that integrals without explicitly given limits refer to integrals extending from the lower boundary $-\infty$ to the upper boundary $\infty$. •

In a continuous phase space the PD may contain Dirac delta functions

$$p(x) = \sum_k q(k)\, \delta(x - k) + \tilde{p}(x); \quad q(k) = \Pr(x = k), \qquad (1.5)$$

where $q(k)$ represents the probability that the variable $x$ of the discrete set equals the integer value $k$. We also dropped the index X in the latter formula. We can interpret it to correspond to a PD of a set of discrete states of probabilities $q(k)$ that are embedded in a continuous phase space S. The normalization condition (1.4) yields now

$$\sum_k q_k + \int_S \tilde{p}(x)\, dx = 1.$$

Examples (discrete Bernoulli and Poisson distributions)
First we consider the Bernoulli distribution

(i) $q_k = \Pr(x = k) = b(k, n, p) = \binom{n}{k} p^k (1-p)^{n-k}; \quad k = 0, 1, \ldots, n,$

and then we introduce the Poisson distribution

(ii) $\pi_k(\lambda t) = \Pr(x = k) = \dfrac{(\lambda t)^k}{k!} \exp(-\lambda t); \quad k = 0, 1, \ldots.$

In the appendix of this chapter we will give more details about the Poisson distribution. We derive there the Poisson distribution as a limit of the Bernoulli distribution,

$$\pi_k(\lambda t) = \lim_{n \to \infty} b(k, n, p = \lambda t/n). \quad ♣$$

In the following we will consider in almost all cases only continuous sets.

1.2. Averages

The sample space and the PD together completely define a stochastic variable. To introduce observable quantities we consider now averages. The expectation value (or the average, or the mean value) of a function $G(x_1, \ldots, x_n)$ of the stochastic variables $x_1, \ldots, x_n$ is defined by

$$\langle G(x_1, \ldots, x_n) \rangle = \int \cdots \int G(x_1, \ldots, x_n)\, p_X(x_1, \ldots, x_n)\, dx_1 \cdots dx_n. \qquad (1.6)$$

In the case of a discrete variable we must replace the integral in (1.6) by a summation. We obtain then, with the use of (1.5) for $p(x)$,

$$\langle G(x_1, \ldots, x_n) \rangle = \sum_{k_1} \cdots \sum_{k_n} G(k_1, \ldots, k_n)\, q(k_1, \ldots, k_n). \qquad (1.7)$$
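The Poisson limit of the Bernoulli distribution quoted above, and a discrete average in the sense of (1.7), are easy to check numerically. The following minimal Python sketch is illustrative only and is not part of the book (whose own numerical routines are FORTRAN and MATHEMATICA programs on the accompanying CD-ROM); the values of $\lambda t$ and $k$ are arbitrary choices:

```python
from math import comb, exp, factorial

lam_t, k = 2.0, 2    # arbitrary values of lambda*t and k

def bernoulli(k, n, p):
    # b(k, n, p) = C(n, k) p^k (1 - p)^(n - k)
    return comb(n, k) * p**k * (1.0 - p)**(n - k)

def poisson(k, lt):
    # pi_k(lambda t) = (lambda t)^k exp(-lambda t) / k!
    return lt**k * exp(-lt) / factorial(k)

for n in (10, 100, 1000, 10_000):
    print(f"n = {n:6d}: b = {bernoulli(k, n, lam_t/n):.6f}, pi = {poisson(k, lam_t):.6f}")

# a discrete average in the sense of (1.7): the mean of the Poisson PD is lambda*t
print("mean:", sum(i * poisson(i, lam_t) for i in range(200)), "=", lam_t)
```

The binomial probabilities visibly converge to the Poisson values as $n$ grows, illustrating the stated limit.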
There are two rules for the application of the averages:

(i) $a$ and $b$ are two deterministic constants and $G(x_1, \ldots, x_n)$ and $H(x_1, \ldots, x_n)$ are two functions of the random variables $x_1, \ldots, x_n$. Then we have

$$\langle a G(x_1, \ldots, x_n) + b H(x_1, \ldots, x_n) \rangle = a \langle G(x_1, \ldots, x_n) \rangle + b \langle H(x_1, \ldots, x_n) \rangle, \qquad (1.8a)$$

and

(ii)

$$\langle \langle G(x_1, \ldots, x_n) \rangle \rangle = \langle G(x_1, \ldots, x_n) \rangle. \qquad (1.8b)$$

Now we consider two scalar random variables $x$ and $y$ with their joint PD $p(x, y)$. If we do not have more information (observed values) of $y$, we introduce the two marginal PDs $p_X(x)$ and $p_Y(y)$ of the single variables $x$ and $y$,

$$p_X(x) = \int p(x, y)\, dy; \quad p_Y(y) = \int p(x, y)\, dx, \qquad (1.9a)$$

where we integrate over the phase spaces $S_x$ ($S_y$) of the variables $x$ ($y$). The normalization condition (1.4) yields

$$\int p_X(x)\, dx = \int p_Y(y)\, dy = 1. \qquad (1.9b)$$

Definition 1.4. (Independence of variables)
We consider $n$ random variables $x_1, \ldots, x_n$, and take $x_1$ to be independent of the other variables $x_2, \ldots, x_n$ if

$$\langle x_1 x_2 \cdots x_n \rangle = \langle x_1 \rangle \langle x_2 \cdots x_n \rangle. \qquad (1.10a)$$

We see easily that a sufficient condition to satisfy (1.10a) is

$$p(x_1, \ldots, x_n) = p_1(x_1)\, p_{n-1}(x_2, \ldots, x_n), \qquad (1.10b)$$

where $p_k(\ldots)$, $k < n$, denotes the marginal probability distribution of the corresponding variables. •
The moments of a PD of a scalar variable $x$ are given by

$$\langle x^n \rangle = \int x^n p(x)\, dx; \quad n \in \mathbb{N},$$

where $n$ denotes the order of the moment. The first order moment $\langle x \rangle$ is the average of $x$ and we introduce the variance $\sigma^2$ by

$$\sigma^2 = \langle (x - \langle x \rangle)^2 \rangle = \langle x^2 \rangle - \langle x \rangle^2 \ge 0. \qquad (1.11)$$

The random variable $x - \langle x \rangle$ is called the standard deviation.

The average of the Fourier transform of a PD is called the characteristic function

$$G(k_1, \ldots, k_n) = \langle \exp(i k_r x_r) \rangle = \int \cdots \int p(x_1, \ldots, x_n) \exp(i k_r x_r)\, dx_1 \cdots dx_n, \qquad (1.12)$$

where we applied a summation convention $k_r x_r = \sum_{j=1}^{n} k_j x_j$. This function has the properties $G(0, \ldots, 0) = 1$; $|G(k_1, \ldots, k_n)| \le 1$.

Example
The Gaussian (or normal) PD of a scalar variable $x$ is given by

$$p(x) = (2\pi)^{-1/2} \exp(-x^2/2); \quad -\infty < x < \infty. \qquad (1.13a)$$

Hence we obtain (see also EX 1.1)

$$\langle x^{2n} \rangle = \frac{(2n)!}{2^n n!}; \quad \sigma^2 = 1; \quad \langle x^{2n+1} \rangle = 0. \qquad (1.13b)$$

A stochastic variable characterized by $N(m, s)$ is a normal distributed variable with the average $m$ and the variance $s$. The variable $x$ distributed with the PD (1.13a) is thus called a normal distributed variable with N(0, 1).
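The moment formula (1.13b) lends itself to a quick Monte Carlo check. The short Python sketch below is illustrative only (sample size and seed are arbitrary choices, not taken from the book):

```python
import numpy as np
from math import factorial

rng = np.random.default_rng(1)          # arbitrary seed
x = rng.standard_normal(1_000_000)      # samples of an N(0, 1) variable

for n in (1, 2, 3):
    exact = factorial(2 * n) / (2**n * factorial(n))   # (2n)!/(2^n n!)
    print(f"<x^{2*n}>: sample = {np.mean(x**(2*n)):.3f}, exact = {exact:.0f}")
```

The sample moments approach the exact values 1, 3 and 15 as the sample grows.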
A Taylor expansion of the characteristic function $G(k)$ of (1.13a) yields with (1.12)

$$G(k) = \sum_{n=0}^{\infty} \frac{(ik)^n}{n!} \langle x^n \rangle. \qquad (1.14a)$$

We define the cumulants $\kappa_m$ by

$$\ln G(k) = \sum_{m=1}^{\infty} \frac{(ik)^m}{m!} \kappa_m. \qquad (1.14b)$$

A comparison of equal powers of $k$ gives

$$\kappa_1 = \langle x \rangle; \quad \kappa_2 = \langle x^2 \rangle - \langle x \rangle^2 = \sigma^2; \quad \kappa_3 = \langle x^3 \rangle - 3\langle x^2 \rangle \langle x \rangle + 2\langle x \rangle^3; \ldots \qquad (1.14c) \quad ♣$$

Definition 1.5. (Conditional probability)
We assume that $A, B \in \Gamma$ are two random events of the set of observable events $\Gamma$. The conditional probability of A given B (or knowing B, or under the hypothesis of B) is defined by

$$\Pr(\mathrm{A} \mid \mathrm{B}) = \Pr(\mathrm{A} \cap \mathrm{B})/\Pr(\mathrm{B}); \quad \Pr(\mathrm{B}) > 0.$$

Thus only events that occur simultaneously in A and B contribute to the conditional probability.

Now we consider $n$ random variables $x_1, \ldots, x_n$ with the joint PD $p_n(x_1, \ldots, x_n)$. We select a subset of variables $x_1, \ldots, x_s$ and we define a conditional PD of the latter variables, knowing the remaining subset $x_{s+1}, \ldots, x_n$, in the form

$$p_{s|n-s}(x_1, \ldots, x_s \mid x_{s+1}, \ldots, x_n) = p_n(x_1, \ldots, x_n)/p_{n-s}(x_{s+1}, \ldots, x_n). \qquad (1.15)$$

Equation (1.15) is called Bayes's rule and we use the marginal PD

$$p_{n-s}(x_{s+1}, \ldots, x_n) = \int p_n(x_1, \ldots, x_n)\, dx_1 \cdots dx_s, \qquad (1.16)$$

where the integration is over the phase space of the variables $x_1 \cdots x_s$. Sometimes it is useful to write Bayes's rule (1.15) in the form

$$p_n(x_1, \ldots, x_n) = p_{n-s}(x_{s+1}, \ldots, x_n)\, p_{s|n-s}(x_1, \ldots, x_s \mid x_{s+1}, \ldots, x_n). \qquad (1.15')$$
We can also rearrange (1.15) and we obtain

$$p_n(x_1, \ldots, x_n) = p_s(x_1, \ldots, x_s)\, p_{n-s|s}(x_{s+1}, \ldots, x_n \mid x_1, \ldots, x_s). \qquad (1.15'') \quad •$$

Definition 1.6. (Conditional averages)
The conditional average of the random variable $x_1$, knowing $x_2, \ldots, x_n$, is defined by

$$\langle x_1 \mid x_2, \ldots, x_n \rangle = \int x_1\, p_{1|n-1}(x_1 \mid x_2, \ldots, x_n)\, dx_1 = \int x_1\, p_n(x_1, x_2, \ldots, x_n)\, dx_1 \Big/ p_{n-1}(x_2, \ldots, x_n). \qquad (1.17)$$

Note that (1.17) is a random variable. The rules for this average are in analogy to (1.8):

$$\langle a x_1 + b x_2 \mid y \rangle = a \langle x_1 \mid y \rangle + b \langle x_2 \mid y \rangle, \quad \langle \langle x \mid y \rangle \rangle = \langle x \mid y \rangle. \qquad (1.18) \quad •$$

Example
We consider a scalar stochastic variable $x$ with its PD $p(x)$. An event A is given by $x \in [a, b]$. Hence we have

$$p(x \mid \mathrm{A}) = 0 \quad \forall x \notin [a, b],$$

and

$$p(x \mid \mathrm{A}) = p(x) \Big/ \int_a^b p(s)\, ds; \quad x \in [a, b].$$

The conditional average is thus given by

$$\langle x \mid \mathrm{A} \rangle = \int_a^b x\, p(x)\, dx \Big/ \int_a^b p(s)\, ds.$$

For an exponentially distributed variable $x$ in $[0, \infty]$ we have $p(x) = \lambda \exp(-\lambda x)$. Thus we obtain for $a > 0$ the result

$$\langle x \mid x > a \rangle = \int_a^{\infty} x \lambda \exp(-\lambda x)\, dx \Big/ \int_a^{\infty} \lambda \exp(-\lambda x)\, dx = a + 1/\lambda. \quad ♣$$
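The result $\langle x \mid x > a \rangle = a + 1/\lambda$ (the memoryless character of the exponential PD) is easy to confirm by simulation. A minimal Python sketch, illustrative only; the rate, threshold and sample size are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)      # arbitrary seed
lam, a = 1.5, 2.0                   # arbitrary rate and threshold

x = rng.exponential(scale=1.0/lam, size=1_000_000)
cond_mean = x[x > a].mean()         # Monte Carlo estimate of <x | x > a>

print(f"simulated: {cond_mean:.4f}   theory a + 1/lambda: {a + 1.0/lam:.4f}")
```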
1.3. Stochastic Processes, the Kolmogorov Criterion and Martingales

In many applications (e.g. in irregular phenomena like blood flow, capital investment, or motions of molecules, etc.) one encounters a family of random variables that depend on continuous or discrete parameters like the time or positions. We refer to $\{X(t, \omega),\, t \in I,\, \omega \in \Omega\}$, where I is a set of (continuous or discrete) parameters and $X(t, \omega) \in \mathbb{R}^n$, as a stochastic process (random process or stochastic (random) function). If I is a discrete set it is more convenient to call $X(t, \omega)$ a time series and to use the phrase process only for continuous sets. If the parameter is the time $t$ then we use $I = [t_0, T]$, where $t_0$ is an initial instant. For a fixed value of $t \in I$, $X(t, \omega)$ is a random variable and for every fixed value of $\omega \in \Omega$ (hence for every observation) $X(t, \omega)$ is a real valued function. Any observation of this process is called a sample function (realization, trajectory, path or orbit) of the process.

We consider now a finite variate PD of a process and we define the time dependent probability distribution functions (PDF) in analogy to (1.1) in the form

$$F_X(x, t) = \Pr(X(t) \le x);$$
$$F_{X,Y}(x, t; y, s) = \Pr(X(t) \le x,\, Y(s) \le y); \qquad (1.19)$$
$$F_{X_1, \ldots, X_n}(x_1, t_1; \ldots; x_n, t_n) = \Pr(X_1(t_1) \le x_1, \ldots, X_n(t_n) \le x_n),$$

where we omit the dependence of the process X(t) on the chance variable $\omega$ whenever no confusion is possible. The system of PDFs satisfies two classes of conditions:

(i) Symmetry
If $\{k_1, \ldots, k_n\}$ is a permutation of $1, \ldots, n$ then we obtain

$$F_{X_1, \ldots, X_n}(x_{k_1}, t_{k_1}; \ldots; x_{k_n}, t_{k_n}) = F_{X_1, \ldots, X_n}(x_1, t_1; \ldots; x_n, t_n). \qquad (1.19a)$$

(ii) Compatibility

$$F_{X_1, \ldots, X_n}(x_1, t_1; \ldots; x_r, t_r; \infty, t_{r+1}; \ldots; \infty, t_n) = F_{X_1, \ldots, X_r}(x_1, t_1; \ldots; x_r, t_r). \qquad (1.19b)$$
The rules to calculate averages are still given by (1.6), where the corresponding PD is derived by (1.3) and where the PDFs of (1.19) are used:

$$p(x_1, t_1; \ldots; x_n, t_n) = \frac{\partial^n}{\partial x_1(t_1) \cdots \partial x_n(t_n)} F_{X_1, \ldots, X_n}(x_1, t_1; \ldots; x_n, t_n).$$

One would expect that a stochastic process with a high rate of irregularity (expressed e.g. by high values of intensity constants, see Chapter 2) would exhibit sample functions (SF) with a high degree of irregularity like jumps or singularities. However, Kolmogorov's criterion gives a condition for continuous SF:

Theorem 1.1. (Kolmogorov's criterion)
A bivariate distribution is necessary to give information about the possibility of continuous SF. If and only if (IFF)

$$\langle |X_1(t_1) - X_2(t_2)|^a \rangle \le c\, |t_1 - t_2|^{1+b}; \quad a, b, c > 0; \quad t_1, t_2 \in [t_0, T], \qquad (1.20)$$

then the stochastic process X(t) possesses almost certainly (AC, this symbol is discussed in Chapter 5) continuous SF. However, the latter are nowhere differentiable and exhibit jumps, and higher order derivatives exhibit singularities. ♦

We will use later the Kolmogorov criterion to investigate SF of Brownian motions and of stochastic integrals.

Definition 1.7. (Stationary process)
A process $x(t)$ is stationary if its PD is independent of a time shift $\tau$:

$$p(x_1, t_1 + \tau; \ldots; x_n, t_n + \tau) = p(x_1, t_1; \ldots; x_n, t_n). \qquad (1.21a)$$

Equation (1.21a) implies that all moments are also independent of the time shift,

$$\langle x(t_1 + \tau)\, x(t_2 + \tau) \cdots x(t_k + \tau) \rangle = \langle x(t_1)\, x(t_2) \cdots x(t_k) \rangle; \quad \text{for } k = 1, 2, \ldots. \qquad (1.21b)$$

A consequence of (1.21a) is given by

$$\langle x(t) \rangle = \langle x \rangle, \text{ independent of } t; \quad \langle x(t)\, x(t + \tau) \rangle = \langle x(0)\, x(\tau) \rangle = g(\tau). \qquad (1.21c) \quad •$$
The correlation matrix is defined by

$$c_{ik} = \langle z_i(t_1)\, z_k(t_2) \rangle; \quad z_i(t_i) = x_i(t_i) - \langle x_i(t_i) \rangle. \qquad (1.22)$$

Thus, we have

$$c_{ik} = \langle x_i(t_1)\, x_k(t_2) \rangle - \langle x_i(t_1) \rangle \langle x_k(t_2) \rangle. \qquad (1.23)$$

The diagonal elements of this matrix are called autocorrelation functions (we do not employ a summation convention),

$$c_{ii} = \langle z_i(t_1)\, z_i(t_2) \rangle.$$

The nondiagonal elements are referred to as cross-correlation functions. The correlation coefficient (the nondimensional correlation) is defined by

$$r_{ik} = \frac{\langle x_i(t_1)\, x_k(t_2) \rangle - \langle x_i(t_1) \rangle \langle x_k(t_2) \rangle}{\sqrt{\langle x_i^2(t_1) \rangle - \langle x_i(t_1) \rangle^2}\, \sqrt{\langle x_k^2(t_2) \rangle - \langle x_k(t_2) \rangle^2}}. \qquad (1.24)$$

For stationary processes we have

$$c_{ik}(t_1, t_2) = \langle z_i(0)\, z_k(t_2 - t_1) \rangle = c_{ik}(t_2 - t_1); \quad c_{ki}(t_1, t_2) = \langle z_k(t_1)\, z_i(t_2) \rangle = \langle z_k(t_1 - t_2)\, z_i(0) \rangle = c_{ik}(t_1 - t_2). \qquad (1.25)$$

A stochastic function with $c_{ik} = 0$ is called an uncorrelated function and we obtain

$$\langle x_i(t_1)\, x_k(t_2) \rangle = \langle x_i(t_1) \rangle \langle x_k(t_2) \rangle. \qquad (1.26)$$

Note that the condition of noncorrelation (1.26) is weaker than the condition of statistical independence.

Example
We consider the process $X(t) = U_1 \cos t + U_2 \sin t$, where $U_{1,2}$ are independent stochastic variables independent of the time. The moments of the latter are given by $\langle U_k \rangle = 0$, $\langle U_k^2 \rangle = a = \mathrm{const}$; $k = 1, 2$; $\langle U_1 U_2 \rangle = 0$. Hence we obtain $\langle X \rangle = 0$; $c_{XX}(s, t) = a \cos(t - s)$. ♣

Remark (Statistical mechanics and stochastic differential equations)
In Chapter 2 we will see that stochastic differential equations or "stochastic mechanics" can be used to investigate a single mechanical system in the presence of stochastic influences (white or colored noise). We use concepts that are similar to those developed in statistical mechanics such as probability distribution functions, moments, Markov properties, ergodicity, etc. We solve the stochastic differential equation (analytically, but in most cases numerically) and one solution represents a realization of the system. Repeating the solution process we obtain another realization and in this way we are able to calculate the moments of the system. An alternative way to calculate the moments would be to solve the Fokker-Planck equation (see Chapter 3) and then use the corresponding solution to determine the moments. To establish the Fokker-Planck equation we will use again the coefficients of the stochastic differential equation.

Statistical mechanics works with the use of ensemble averages. Rather than defining a single quantity (e.g. a particle) with a PD $p(x)$, one introduces a fictitious set of an arbitrarily large number of M quantities (e.g. particles or thermodynamic systems) and these M non-interacting quantities define the ensemble. In case of interacting particles, the ensemble is made up of M different realizations of the N particles. In general, these quantities have different characteristic values (temperature, or energy, or values of N) $x$, in a common range. The number of quantities having a characteristic value between $x$ and $x + dx$ defines the PD. Therefore, the PD is replaced by a density function for a large number of samples. One observes a large number of quantities and averages the results. Since, by definition, the quantities do not interact, one obtains in this way a physical realization of the ensemble. The averages calculated with this density function are referred to as ensemble averages, and a system where ensemble averages equal time averages is called an ergodic system. In stochastic mechanics we say that a process with the property that the averages defined in accordance with (1.6) equal the time averages represents an ergodic process.

Another stochastic process that possesses SF of some regularity is called a martingale. This name is related to "fair games" and we give a discussion of this expression in a moment.

In everyday language, we can state that the best prediction of a martingale process X(t) conditional on the path of all Brownian motions up to $s < t$ is given by the previous value X(s). To make this idea precise we formulate the following theorem:

Theorem 1.2. (Adapted process)
We consider a probability space $(\Gamma, \Omega, \Pr)$ with an increasing family (of sigma algebras of $\Gamma$) of events $\Gamma_s \subset \Gamma_t$, $0 \le s < t$ (see Section 1.1). A process $X(s, \omega)$; $\omega \in \Omega$, $s \in [0, \infty)$, is called $\Gamma_s$-adapted if it is $\Gamma_s$-measurable. A $\Gamma_s$-adapted process can be expanded into (the limit of) a sequence of Brownian motions $B_u(\omega)$ with $u \le s$ (but not $u > s$). ♦

Example
For $n = 2, 3, \ldots$; $0 < \lambda < t$, we see that the processes

(i) $G_1(t, \omega) = B_{t/n}(\omega)$, $G_2(t, \omega) = B_{t-\lambda}(\omega)$,
(ii) $G_3(t, \omega) = B_{nt}(\omega)$, $G_4(t, \omega) = B_{t+\lambda}(\omega)$,

are $\Gamma_t$-adapted, respectively, not adapted. ♣

Theorem 1.3. (Martingale process)
A process X(t) is called a martingale IFF it is adapted and the condition

$$\langle X_t \mid \Gamma_s \rangle = X_s \quad \forall\; 0 \le s < t < \infty \qquad (1.27)$$

is almost certainly (AC) satisfied. If we replace the equality sign in (1.27) by $\le$ ($\ge$) we obtain a super (sub) martingale. We note that martingales have no other discontinuities than at worst finite jumps (see Arnold [1.2]). ♦

Note that (1.27) defines a stochastic process. Its expectation $\langle \langle X_t \mid \Gamma_s \rangle \rangle = \langle X_s \rangle$, $s < t$, is a deterministic function.

An interesting property of a martingale is expressed by

$$\Pr\Big(\sup_{a \le t \le b} |X(t)| \ge c\Big) \le \langle |X(b)|^p \rangle / c^p; \quad c > 0; \; p \ge 1, \qquad (1.28)$$

where sup is the supremum of the embraced process in the interval [a, b]. (1.28) is a particular version of the Chebyshev inequality that will be derived in EX 1.2. We apply later the concept of martingales to Wiener processes and to stochastic integrals.

Finally we give an explanation of the phrase "martingale". A gambler is involved in a fair game and he has at the start the capital X(s). Then he should possess, in the mean, at the instant $t > s$ the original capital X(s). This is expressed in terms of the conditional mean value $\langle X_t \mid X_s \rangle = X_s$. Etymologically, this term comes from French and means a system of betting which seeks the amount to be wagered after each win or loss.

1.4. The Gaussian Distribution and Limit Theorems

In relation (1.13) we have already introduced a special case of the Gaussian (normal distributed) PD (GD) for a scalar variable. A generalization of (1.13) is given by the $N(m, \sigma^2)$ PD

$$p(x) = (2\pi\sigma^2)^{-1/2} \exp[-(x - m)^2/(2\sigma^2)]; \quad \forall x \in [-\infty, \infty], \qquad (1.29)$$

where $m$ is the average and $\sigma^2 = \langle x^2 \rangle - m^2$ is the variance. The multivariate form of the Gaussian PD for the set of variables $x_1, \ldots, x_n$ has the form

$$p(x_1, \ldots, x_n) = \mathrm{N} \exp\Big({-\frac{1}{2} A_{ik} x_i x_k - b_k x_k}\Big), \qquad (1.30a)$$

where we use a summation convention. The normalization constant N is given by

$$\mathrm{N} = (2\pi)^{-n/2} [\mathrm{Det}(A)]^{1/2} \exp\Big({-\frac{1}{2} A^{-1}_{ik} b_i b_k}\Big). \qquad (1.30b)$$

The characteristic function of (1.30) has the form

$$G(k_1, \ldots, k_n) = \exp\Big({-i k_u A^{-1}_{uv} b_v - \frac{1}{2} k_u A^{-1}_{uv} k_v}\Big). \qquad (1.31)$$

An expansion of (1.31) WRT powers of k yields the moments

$$\langle x_i \rangle = -A^{-1}_{ik} b_k, \qquad (1.32a)$$

and the covariance is given by

$$C_{ik} = \langle (x_i - \langle x_i \rangle)(x_k - \langle x_k \rangle) \rangle = A^{-1}_{ik}. \qquad (1.32b)$$
This indicates that the GD is completely given if the mean value and the covariance matrix are evaluated. The n variables are uncorrelated, and thus are independent, if $A^{-1}$ and hence $A$ itself are diagonal.

The higher moments of an n-variate GD with zero mean are particularly easy to calculate. To show this, we recall that for zero mean we have $b_k = 0$ and we obtain the characteristic function with the use of (1.31) and (1.32) in the form

$$G = \exp\Big({-\frac{1}{2}\langle x_u x_v \rangle k_u k_v}\Big) = 1 - \frac{1}{2}\langle x_u x_v \rangle k_u k_v + \frac{1}{8}\langle x_u x_v \rangle \langle x_p x_q \rangle k_u k_v k_p k_q - \cdots; \quad u, v, p, q = 1, 2, \ldots. \qquad (1.33)$$

A comparison of equal powers of k in (1.33) and in a Taylor expansion of the exponent in (1.31) shows that all odd moments vanish,

$$\langle x_a x_b x_c \rangle = \langle x_a x_b x_c x_d x_e \rangle = \cdots = 0.$$

We also obtain with restriction to n = 2 (bivariate GD)

$$\langle x_1^4 \rangle = 3\langle x_1^2 \rangle^2; \quad \langle x_1^3 x_p \rangle = 3\langle x_1^2 \rangle \langle x_1 x_p \rangle, \quad p = 1, 2; \quad \langle x_1^2 x_2^2 \rangle = \langle x_1^2 \rangle \langle x_2^2 \rangle + 2\langle x_1 x_2 \rangle^2. \qquad (1.34)$$

In the case of a trivariate PD we face additional terms of the type $\langle x_k^2 x_p x_r \rangle = 2\langle x_k x_p \rangle \langle x_k x_r \rangle + \langle x_k^2 \rangle \langle x_p x_r \rangle$. Higher order variates and higher order moments can be calculated in analogy to the results (1.34).

We give also the explicit formula of the bivariate Gaussian (see also EX 1.3)

$$p(x, y) = \frac{1}{\mathrm{N}} \exp\left\{-\frac{1}{2(1 - r^2)}\left[\frac{\xi^2}{a^2} - \frac{2 r \xi \eta}{ab} + \frac{\eta^2}{b^2}\right]\right\}; \quad \xi = x - \langle x \rangle, \; \eta = y - \langle y \rangle, \qquad (1.35a)$$

with

$$\mathrm{N} = 2\pi a b \sqrt{1 - r^2}; \quad a^2 = \sigma_x^2; \quad b^2 = \sigma_y^2, \qquad (1.35b)$$

and where $r$ is the cross correlation coefficient defined in (1.24). For $\sigma_x = \sigma_y = 1$ and $\langle x \rangle = \langle y \rangle = 0$ in (1.35) we can expand the latter formula and we obtain

$$p(x, y) = (2\pi)^{-1} \exp[-(x^2 + y^2)/2] \sum_{k=0}^{\infty} \frac{r^k}{k!} \mathrm{H}_k(x) \mathrm{H}_k(y), \qquad (1.36)$$

where $\mathrm{H}_k(x)$ is the k-th order Hermite polynomial (see Abramowitz and Stegun [1.3]). Equation (1.36) is the basis of the "Hermitian-chaos" expansion in the theory of stochastic partial differential equations.

In EX 1.3 we show that conditional probabilities of the GD (1.35a) are Gaussian themselves.

Now we consider two limit theorems. The first of them is related to the GD and we introduce the second one for later use.

1.4.1. The central limit theorem

We consider the random variable

$$\mathrm{U} = \frac{1}{\sqrt{n}} \sum_{k=1}^{n} x_k; \quad \langle x_k \rangle = 0, \qquad (1.37)$$

where the $x_k$ are identically independently distributed (IID) (but not necessarily normal) variables with zero mean and variance $\sigma^2 = \langle x_k^2 \rangle$. We find easily $\langle \mathrm{U} \rangle = 0$ and $\langle \mathrm{U}^2 \rangle = \sigma^2$.

The central limit theorem says that U tends in the limit $n \to \infty$ to a N(0, $\sigma^2$) variable with a PD given by (1.13a). To prove this we use the independence of the variables $x_k$ and we perform the calculation of the characteristic function of the variable U with the aid of (1.12):

$$G_\mathrm{U}(k) = \int dx_1\, p(x_1) \cdots \int dx_n\, p(x_n) \exp[ik(x_1 + \cdots + x_n)/\sqrt{n}] = [G_x(k/\sqrt{n})]^n = \left[1 - \frac{k^2 \sigma^2}{2n} + O(n^{-3/2})\right]^n \to \exp(-k^2 \sigma^2/2) \quad \text{for } n \to \infty. \qquad (1.38)$$

We introduced in the second line of (1.38) the characteristic function of one of the individual random functions according to (1.14a); (1.38) is the characteristic function of a GD that corresponds indeed to N(0, $\sigma^2$). Note that this result is independent of the particular form of the individual PDs $p(x)$. It is only required that $p(x)$ has finite moments. The central limit theorem explains why the Gaussian PD plays a prominent role in probability and stochastics.

1.4.2. The law of the iterated logarithm

We give here only this theorem and refer the reader for its derivation to the book of Chow and Teicher [1.4]. $y_n$ is the partial sum of n IID variables,

$$y_n = x_1 + \cdots + x_n; \quad \langle x_n \rangle = \beta, \quad \langle (x_n - \beta)^2 \rangle = \sigma^2. \qquad (1.39)$$

The theorem of the iterated logarithm states that there exists AC an asymptotic limit

$$-\sigma \le \lim_{n \to \infty} \frac{y_n - n\beta}{\sqrt{2n \ln[\ln(n)]}} \le \sigma. \qquad (1.40)$$

Equation (1.40) is particularly valuable in the case of estimates of stochastic functions and we will use it later to investigate Brownian motions. We will give a numerical verification of (1.40) in program F18.
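The verification announced above refers to the FORTRAN program F18 on the CD-ROM; an illustrative stand-in can be sketched in a few lines of Python. The sketch below (with an arbitrarily chosen uniform PD for the $x_k$, and arbitrary sample sizes and seed) probes both (1.37)-(1.38) and (1.40):

```python
import numpy as np

rng = np.random.default_rng(5)

# central limit theorem: x_k uniform on [-1, 1], so sigma^2 = 1/3
n, trials = 500, 20_000
U = rng.uniform(-1.0, 1.0, (trials, n)).sum(axis=1) / np.sqrt(n)   # eq. (1.37)
print("var(U) =", U.var(), " (sigma^2 = 1/3)")
print("<U^4>/var^2 =", np.mean(U**4) / U.var()**2, " (Gaussian value: 3)")

# law of the iterated logarithm: scaled partial sums, beta = 0
for N in (10**4, 10**5, 10**6):
    y_N = rng.uniform(-1.0, 1.0, N).sum()
    print(N, y_N / np.sqrt(2 * N * np.log(np.log(N))))
# the printed values should typically lie inside [-sigma, sigma] = [-0.577, 0.577]
```

Since (1.40) is an asymptotic statement about the limit superior, the scaled sums only settle into the band for large N; the finite-sample check is indicative rather than conclusive.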
1.5. Transformation of Stochastic Variables

We consider transformations of an n-dimensional set of stochastic variables $x_1, \ldots, x_n$ with the PD $p_{X_1 \cdots X_n}(x_1, \ldots, x_n)$. First we introduce the PD of a linear combination of random variables

$$Z = \sum_{k=1}^{n} \alpha_k x_k, \qquad (1.41a)$$

where the $\alpha_k$ are deterministic constants. The PD of the stochastic variable z is then defined by

$$p_Z(z) = \int dx_1 \cdots \int dx_n\, \delta\Big(z - \sum_k \alpha_k x_k\Big)\, p_{X_1 \cdots X_n}(x_1, \ldots, x_n). \qquad (1.41b)$$

Now we investigate transformations of the stochastic variables $x_1, \ldots, x_n$. The new variables are defined by

$$u_k = u_k(x_1, \ldots, x_n), \quad k = 1, \ldots, n. \qquad (1.42)$$

The inversion of this transformation and the Jacobian are

$$x_k = g_k(u_1, \ldots, u_n), \quad J = \partial(x_1, \ldots, x_n)/\partial(u_1, \ldots, u_n). \qquad (1.43)$$

We infer from an expansion of the probability measure (1.1a) that

$$dp_{x_1 \cdots x_n} = \Pr(x_1 < X_1 \le x_1 + dx_1, \ldots, x_n < X_n \le x_n + dx_n) = p_{X_1 \cdots X_n}(x_1, \ldots, x_n)\, dx_1 \cdots dx_n \quad \text{for } dx_k \to 0, \; k = 1, \ldots, n. \qquad (1.44a)$$

Equation (1.44a) represents the elementary probability measure that the variables are located in the hyper plane

$$\prod_{k=1}^{n} [x_k, x_k + dx_k].$$

The principle of invariant elementary probability measure states that this measure is invariant under transformations of the coordinate system. Thus, we obtain the transformation

$$dp_{u_1 \cdots u_n} = dp_{x_1 \cdots x_n}. \qquad (1.44b)$$

This yields the transformation rule for the PDs

$$p_{U_1 \cdots U_n}(u_1(x_1, \ldots, x_n), \ldots, u_n(x_1, \ldots, x_n)) = |\det(J)|\; p_{X_1 \cdots X_n}(x_1, \ldots, x_n). \qquad (1.45)$$

Example (The Box-Muller method)
As an application we introduce the transformation method of Box-Muller to generate a GD. There are two stochastic variables given in an elementary cube,

$$p(x_1, x_2) = \begin{cases} 1 & 0 \le x_1 \le 1,\; 0 \le x_2 \le 1 \\ 0 & \text{elsewhere} \end{cases}. \qquad (1.46)$$

Note that the bivariate PD is already normalized. Now we introduce the new variables

$$y_1 = \sqrt{-2 \ln x_1}\, \cos(2\pi x_2), \quad y_2 = \sqrt{-2 \ln x_1}\, \sin(2\pi x_2). \qquad (1.47)$$

The inversion of (1.47) is

$$x_1 = \exp[-(y_1^2 + y_2^2)/2]; \quad x_2 = \frac{1}{2\pi} \arctan(y_2/y_1).$$

According to (1.45) we obtain the new bivariate PD

$$p(y_1, y_2) = p(x_1, x_2) \left|\frac{\partial(x_1, x_2)}{\partial(y_1, y_2)}\right| = \frac{1}{2\pi} \exp[-(y_1^2 + y_2^2)/2], \qquad (1.48)$$

and this is the PD of two independent N(0, 1) variables. ♣
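A direct transcription of (1.46)-(1.48) into Python (an illustrative sketch, not the book's own implementation; the sample size and seed are arbitrary):

```python
import numpy as np

def box_muller(n, rng):
    """Map pairs of uniform variates into independent N(0, 1) pairs, eq. (1.47)."""
    x1 = 1.0 - rng.random(n)            # shift to (0, 1] so that log(x1) is finite
    x2 = rng.random(n)
    r = np.sqrt(-2.0 * np.log(x1))
    return r * np.cos(2.0 * np.pi * x2), r * np.sin(2.0 * np.pi * x2)

rng = np.random.default_rng(11)
y1, y2 = box_muller(500_000, rng)
# means ~ 0, variances ~ 1, cross moment <y1 y2> ~ 0 as predicted by (1.48)
print(y1.mean(), y1.var(), y2.var(), np.mean(y1 * y2))
```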
Until now we have only covered stochastic variables that are time-independent, or stochastic processes for the case that all variables belong to the same instant. In the next section we discuss a property that is rather typical for stochastic processes.

1.6. The Markov Property

A process is called a Markov (or Markovian) process if the conditional PD at a given time $t_n$ depends only on the immediately prior time $t_{n-1}$. This means that for $t_1 < t_2 < \cdots < t_n$

$$p_{1|n-1}(y_n, t_n \mid y_1, t_1; \ldots; y_{n-1}, t_{n-1}) = p_{1|1}(y_n, t_n \mid y_{n-1}, t_{n-1}), \qquad (1.49)$$

and the quantity $p_{1|1}(y_n, t_n \mid y_{n-1}, t_{n-1})$ is referred to as the transition probability distribution (TPD).

A Markov process is thus completely defined if we know the two functions $p_1(y_1, t_1)$ and $p_{1|1}(y_2, t_2 \mid y_1, t_1)$ for $t_1 < t_2$. Thus, we obtain for $t_1 < t_2$ (see (1.15'') and note that we use a semicolon to separate coordinates that belong to different instants)

$$p_2(y_1, t_1; y_2, t_2) = p_1(y_1, t_1)\, p_{1|1}(y_2, t_2 \mid y_1, t_1), \qquad (1.50.1)$$

and for $t_1 < t_2 < t_3$

$$p_3(y_1, t_1; y_2, t_2; y_3, t_3) = p_1(y_1, t_1)\, p_{1|1}(y_2, t_2 \mid y_1, t_1)\, p_{1|1}(y_3, t_3 \mid y_2, t_2). \qquad (1.50.2)$$

We integrate equation (1.50.2) over the variable $y_2$ and we obtain

$$p_2(y_1, t_1; y_3, t_3) = p_1(y_1, t_1) \int p_{1|1}(y_2, t_2 \mid y_1, t_1)\, p_{1|1}(y_3, t_3 \mid y_2, t_2)\, dy_2. \qquad (1.51)$$

Now we use

$$p_{1|1}(y_3, t_3 \mid y_1, t_1) = p_2(y_1, t_1; y_3, t_3)/p_1(y_1, t_1),$$

and we obtain from (1.51) the Chapman-Kolmogorov equation

$$p_{1|1}(y_3, t_3 \mid y_1, t_1) = \int p_{1|1}(y_2, t_2 \mid y_1, t_1)\, p_{1|1}(y_3, t_3 \mid y_2, t_2)\, dy_2. \qquad (1.52)$$

It is easy to verify that a particular solution of (1.52) is given by

$$p_{1|1}(y_2, t_2 \mid y_1, t_1) = [2\pi(t_2 - t_1)]^{-1/2} \exp\{-(y_2 - y_1)^2/[2(t_2 - t_1)]\}. \qquad (1.53)$$

We give in EX 1.4 hints how to verify (1.53).

We can also integrate the identity (1.50.1) over $y_1$ and we obtain

$$p_1(y_2, t_2) = \int p_1(y_1, t_1)\, p_{1|1}(y_2, t_2 \mid y_1, t_1)\, dy_1. \qquad (1.54)$$

The latter relation is an integral equation for the function $p_1(y_2, t_2)$. EX 1.5 gives hints to show that the solution to (1.54) is the Gaussian PD

$$p_1(y, t) = (2\pi t)^{-1/2} \exp[-y^2/(2t)]; \quad \lim_{t \to 0+} p_1(y, t) = \delta(y). \qquad (1.55)$$

In Chapter 3 we use the Chapman-Kolmogorov equation (1.52) to derive the master equation that is in turn applied to deduce the Fokker-Planck equation.

1.6.1. Stationary Markov processes

Stationary Markovian processes are defined by a PD and transition probabilities that depend only on the time differences. The most important example is the Ornstein-Uhlenbeck process that we will treat in Sections 2.1.3 and 3.4. There we will prove the formulas for its PD

$$p_1(y) = (2\pi)^{-1/2} \exp(-y^2/2), \qquad (1.56.1)$$
and the transition probability

$$p_{1|1}(y_2, t_2 \mid y_1, t_1) = [2\pi(1 - u^2)]^{-1/2} \exp\left[-\frac{(y_2 - u y_1)^2}{2(1 - u^2)}\right]; \quad u = \exp(-\tau), \; \tau = t_2 - t_1; \quad p_{1|1}(y_2, t_1 \mid y_1, t_1) = \delta(y_2 - y_1). \qquad (1.56.2)$$

The Ornstein-Uhlenbeck process is thus stationary, Gaussian and Markovian. A theorem from Doob [1.5] states that this is, apart from the trivial process where all variables are independent, the only process that satisfies all the three properties listed above. We continue to consider stationary Markov processes in Section 3.1.

1.7. The Brownian Motion

Brown discovered in the year 1828 that pollen submerged in fluids show, under collisions with fluid molecules, a completely irregular movement. This process is labeled with $y := B_t(\omega)$, where the subscript is the time. It is also called a Wiener (white noise) process and labeled with the symbol $W_t$ (WP); it is identical to the Brownian motion: $W_t = B_t$. The WP is a Gaussian [it has the PD (1.55)] and a Markov process.

Note also that the PD of the Wiener process (WP), given by (1.55), satisfies a parabolic partial differential equation (called the Fokker-Planck equation, see Section 3.2)

$$\frac{\partial p}{\partial t} = \frac{1}{2} \frac{\partial^2 p}{\partial y^2}. \qquad (1.57)$$

We calculate the characteristic function G(u) and we obtain according to (1.12)

$$G(u) = \langle \exp(i u W_t) \rangle = \exp(-u^2 t/2), \qquad (1.58a)$$

and we obtain the moments in accordance with (1.13b)

$$\langle W_t^{2k} \rangle = \frac{(2k)!}{2^k k!} t^k; \quad \langle W_t^{2k+1} \rangle = 0; \quad k \in \mathbb{N}_0. \qquad (1.58b)$$

We use the Markovian properties now to prove the independence of Brownian increments. The latter are defined by

$$y_1, \; y_2 - y_1, \; \ldots, \; y_n - y_{n-1} \quad \text{with } y_k := W_{t_k}; \quad t_1 < \cdots < t_n. \qquad (1.59)$$
We calculate explicitly the joint distribution given by (1.50) and we obtain with the use of (1.53) and (1.55)

$$p_2(y_1, t_1; y_2, t_2) = [(2\pi)^2 t_1 (t_2 - t_1)]^{-1/2} \exp\{-y_1^2/(2t_1) - (y_2 - y_1)^2/[2(t_2 - t_1)]\}, \qquad (1.60)$$

and

$$p_3(y_1, t_1; y_2, t_2; y_3, t_3) = [(2\pi)^3 t_1 (t_2 - t_1)(t_3 - t_2)]^{-1/2} \exp\{-y_1^2/(2t_1) - (y_2 - y_1)^2/[2(t_2 - t_1)] - (y_3 - y_2)^2/[2(t_3 - t_2)]\}, \qquad (1.61)$$

$$p_4(y_1, t_1; y_2, t_2; y_3, t_3; y_4, t_4) = [2\pi(t_4 - t_3)]^{-1/2}\, p_3(y_1, t_1; y_2, t_2; y_3, t_3) \exp\{-(y_4 - y_3)^2/[2(t_4 - t_3)]\}.$$

We see that the joint PDs of the variables $y_1, y_2 - y_1, y_3 - y_2, y_4 - y_3$ are given in (1.60) and (1.61) in a factorized form and this implies the independence of these variables. To prove the independence of the remaining variables $y_5 - y_4, \ldots, y_n - y_{n-1}$ we would only have to continue the process of constructing joint PDs with the aid of (1.49).

In EX 1.6 we prove the following property:

$$\langle y_1(t_1)\, y_2(t_2) \rangle = \min(t_1, t_2) = t_1 \wedge t_2. \qquad (1.62)$$

Equation (1.62) also demonstrates that the Brownian motion is not a stationary process, since the autocorrelation does not depend on the time difference $\tau = t_2 - t_1$ but it depends on $t_2 \wedge t_1$.

To apply Kolmogorov's criterion (1.20) we choose $a = 2$ and we obtain with (1.58b) and (1.62) $\langle [y_1(t_1) - y_2(t_2)]^2 \rangle = |t_2 - t_1|$. Thus we can conclude with the choice $b = c = 1$ that the SF of the WP are AC continuous functions. The two graphs, Figures 1(a) and 1(b), are added in this section to indicate the continuous SF.

We apply also the law of the iterated logarithm to the WP. To this end we consider the independent increments $y_k - y_{k-1}$, where we set $t_k = k\Delta t$ with a finite time increment $\Delta t$. This yields for the partial sum in (1.39)

$$\sum_{k=1}^{n} (y_k - y_{k-1}) = y_n = y_{n\Delta t}; \quad \beta = \langle y_k - y_{k-1} \rangle = 0; \quad \langle (y_k - y_{k-1})^2 \rangle = \Delta t.$$
[Fig. 1(a). The Brownian motion $B_t$ versus the time axis. Included is a graph of the numerically determined temporal evolution of the mean value and the variance.]

[Fig. 1(b). The planar Brownian motion with $x = B_t^1$ and $y = B_t^2$; $B_t^k$, $k = 1, 2$, are independent Brownian motions.]

We substitute the results of the last line into (1.40) and we obtain

$$-\sqrt{\Delta t} \le \lim_{n \to \infty} \frac{W_{n\Delta t}}{\sqrt{2n \ln(\ln(n))}} \le \sqrt{\Delta t}.$$

The assignment of $t := n\Delta t$ into the last line and the approximation $\ln(t/\Delta t) \to \ln(t)$ for $t \to \infty$ gives the desired result for the AC asymptotic behavior of the WP

$$-1 \le \lim_{t \to \infty} \frac{W_t}{\sqrt{2t \ln(\ln(t))}} \le 1. \qquad (1.63)$$

We will verify (1.63) in Chapter 5 numerically.

There are various equivalent definitions of a Wiener process. We use the following:

Definition 1.8. (Wiener process)
A WP has an initial value of $W_0 = 0$ and its increments $W_t - W_s$, $t > s$, satisfy three conditions. They are (i) independent, (ii) stationary (the PD depends on $t - s$), and (iii) N[0, t − s] distributed. As a consequence of these three conditions a WP exhibits continuous sample functions with probability 1. •

There are also WPs that do not start at zero. There is also a generalization of the WP with discontinuous SF. We will return to this point at the end of Section 1.7.

Now we show that a WP is a martingale,

$$\langle B_s \mid B_u \rangle = B_u; \quad s > u. \qquad (1.64)$$

We prove (1.64) with the application of the Markovian property (1.53). We use (1.17) and write

$$\langle B_s \mid B_u \rangle = \langle y_2, s \mid y_1, u \rangle = \int y_2\, p_{1|1}(y_2, s \mid y_1, u)\, dy_2 = \frac{1}{\sqrt{2\pi(s - u)}} \int y_2 \exp\{-(y_2 - y_1)^2/[2(s - u)]\}\, dy_2 = y_1 = B_u.$$

This concludes the proof of (1.64).
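Both the covariance (1.62) and the martingale property (1.64) are easy to probe by simulation. A minimal Python sketch (illustrative only; the grid, times, bin width and seed are arbitrary choices; the conditional average is estimated crudely by binning on the value of $B_u$):

```python
import numpy as np

rng = np.random.default_rng(3)
n_paths, n_steps, T = 50_000, 200, 1.0
dt = T / n_steps

# rows = paths, columns = time points 0, dt, 2*dt, ..., T
incr = rng.normal(0.0, np.sqrt(dt), (n_paths, n_steps))
B = np.hstack([np.zeros((n_paths, 1)), np.cumsum(incr, axis=1)])

u, s = 0.3, 0.7                                # two fixed times, u < s
Bu, Bs = B[:, int(u / dt)], B[:, int(s / dt)]

# covariance <B_u B_s> should equal u ^ s = 0.3, eq. (1.62)
print("<B_u B_s> =", np.mean(Bu * Bs), "  min(u, s) =", min(u, s))

# martingale property (1.64): paths with B_u near 0.5 should average B_s ~ 0.5
sel = np.abs(Bu - 0.5) < 0.02
print("<B_s | B_u ~ 0.5> =", Bs[sel].mean())
```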
A WP has also the following properties: the translated quantity $\tilde{W}_t$ and the scaled quantity $\hat{W}_t$, defined for $t, a > 0$ by

$$\tilde{W}_t = W_{t+a} - W_a \quad \text{and} \quad \hat{W}_t = \frac{1}{a} B_{a^2 t}, \qquad (1.65)$$

are also Brownian motions. To prove (1.65) we note first that the averages of both variables are zero, $\langle \tilde{W}_t \rangle = \langle \hat{W}_t \rangle = 0$. Now we have to show that both variables satisfy also the condition for the autocorrelation. We prove this only for the variable $\hat{W}_t$ and leave the second part for EX 1.7. Thus, we put

$$\langle \hat{W}_t \hat{W}_s \rangle = \langle B_{a^2 t} B_{a^2 s} \rangle / a^2 = \frac{a^2 t \wedge a^2 s}{a^2} = t \wedge s.$$

So far, we considered exclusively scalar WPs. In the study of partial differential equations we need to introduce a set of n independent WPs. Thus, we generalize the WP to the case of n independent WPs that define a vector of stochastic processes

$$x_1(t_1), \ldots, x_n(t_n); \quad t_k \ge 0. \qquad (1.66)$$

The corresponding PD is then

$$p(x_1, \ldots, x_n) = p_{x_1}(x_1) \cdots p_{x_n}(x_n) = (2\pi)^{-n/2} \prod_{k=1}^{n} t_k^{-1/2} \exp[-x_k^2/(2t_k)]. \qquad (1.67)$$

We have assumed independent stochastic variables (like the orthogonal basic vectors in the case of deterministic variables) and this independence is expressed by the factorized multivariate PD (1.67). We define an n-dimensional WP (or a Wiener sheet (WS)) by

$$M_t^{(n)} = \prod_{k=1}^{n} x_k(t_k); \quad t = (t_1, \ldots, t_n). \qquad (1.68)$$

Now we find how we can generalize Definition 1.8 to the case of n stochastic processes. First, we prove easily that the variable (1.68) has a zero mean,

$$\langle M_t^{(n)} \rangle = 0. \qquad (1.69)$$

Thus, it remains to calculate the autocorrelation (1.62). We use the independence of the set of variables $x_k(t_k)$, $k = 1, \ldots, n$, and we obtain, with the use of the bivariate PD (1.60) with $y_1 = x_k(t_k)$, $y_2 = x_k(s_k)$, and factorizing the result for the independent variables,

$$\langle M_t^{(n)} M_s^{(n)} \rangle = \prod_{k=1}^{n} \langle x_k(t_k)\, x_k(s_k) \rangle; \quad t = (t_1, \ldots, t_n); \; s = (s_1, \ldots, s_n).$$

The evaluation of the last line yields with (1.62)

$$\langle M_t^{(n)} M_s^{(n)} \rangle = \prod_{k=1}^{n} t_k \wedge s_k. \qquad (1.70)$$

The relations (1.69) and (1.70) show now that the process (1.68) is an n-WP.

In analogy to deterministic variables we can now construct with stochastic variables curves, surfaces and hyper surfaces. Thus, a curve in the 2-dimensional WS and a surface in the 3-dimensional WS are given by

$$C_t = M^{(2)}_{t, f(t)}; \quad S_{t_1, t_2} = M^{(3)}_{t_1, t_2, g(t_1, t_2)}.$$

We give here only two interesting examples.

Example 1
Here we put

$$K_t = M^{(2)}_{a, b}; \quad a = \exp(t), \; b = \exp(-t); \quad -\infty < t < \infty.$$

This defines a stochastic hyperbola with zero mean and with the autocorrelation

$$\langle K_t K_s \rangle = \langle x_1(e^t)\, x_1(e^s) \rangle \langle x_2(e^{-t})\, x_2(e^{-s}) \rangle = (e^t \wedge e^s)(e^{-t} \wedge e^{-s}) = \exp(-|t - s|). \qquad (1.71)$$

The property (1.71) shows this process is not only a WS but also a stationary Ornstein-Uhlenbeck process (see Section 1.6.1). ♣

Example 2
Here we define the process

$$K_t = \exp[-(1 + c)t]\, M^{(2)}_{a, b}; \quad a = \exp(2t), \; b = \exp(2ct); \quad c > 0. \qquad (1.72)$$

Again we see that the stochastic variable defined in (1.72) has zero mean, and the calculation of its autocorrelation yields

$$\langle K_t K_s \rangle = \exp[-(1 + c)(t + s)] \langle x_1(e^{2t})\, x_1(e^{2s}) \rangle \langle x_2(e^{2ct})\, x_2(e^{2cs}) \rangle = \exp[-(1 + c)(t + s)]\, (e^{2t} \wedge e^{2s})(e^{2ct} \wedge e^{2cs}) = \exp[-(1 + c)|t - s|]. \qquad (1.73)$$

The latter equation means that the process (1.72) is again an Ornstein-Uhlenbeck process. Note also that because of $c > 0$ there is no possibility to use (1.73) to reproduce the result of the previous example. ♣

Just as in the case of one parameter, there exist for WSs also scaling and translation properties. Thus, the stochastic variables

$$H_{u,v} = \frac{1}{ab} M^{(2)}_{a^2 u,\, b^2 v}; \quad T_{u,v} = M^{(2)}_{u+a,\, v+b} - M^{(2)}_{u+a,\, b} - M^{(2)}_{a,\, v+b} + M^{(2)}_{a,\, b}, \qquad (1.74)$$

are also WSs. The proof of (1.74) is left for EX 1.8.

We give in Figures 1(a) and 1(b) two graphs of the Brownian motion.

At the end of this section we wish to mention that the WP is a subclass of a Levy process L(t). The latter complies with the first two conditions of Definition 1.8. However, it does not possess normally distributed increments. A particular feature of a normally distributed process x is the vanishing of the skewness $\langle x^3 \rangle / \langle x^2 \rangle^{3/2}$. However, many statistical phenomena (like hydrodynamic turbulence, the market values of stocks, etc.) show remarkable values of the skewness. This means that a GD (with only two parameters) is not flexible enough to describe such phenomena and it must be replaced by a PD that contains a sufficient number of parameters. An appropriate choice is the normal inverted Gaussian distribution (NIGD) (see Section 4.4). The NIGD does not satisfy the Kolmogorov criterion. This means that the Levy process L(t) is equipped with SF that jump up and down at arbitrary instances t. To get more information about the Levy process we refer the reader to the work of Ikeda & Watanabe [1.6] and of Rydberg [1.7]. In Section 4.4 we will give a short description of the application of the NIGD in economics theories.
1.8. Stochastic Integrals

We need stochastic integrals (SI) when we attempt to solve a stochastic differential equation (SDE). Hence we introduce a simple first order ordinary SDE

$$\frac{dX}{dt} = a(X(t),t) + b(X(t),t)\,\xi_t; \quad X,a,b,t \in \mathbb{R}. \qquad (1.75)$$

We use in (1.75) the deterministic functions $a$ and $b$. The symbol $\xi_t$ indicates the only stochastic term in this equation. We assume

$$\langle\xi_t\rangle = 0; \quad \langle\xi_t\xi_s\rangle = \delta(t-s). \qquad (1.76)$$

The spectrum of the autocorrelation in (1.76) is constant (see Section 2.2) and in view of this $\xi_t$ is referred to as white noise; any term proportional to $\xi_t$ is called a noisy term. These assumptions are based on a great variety of physical phenomena that are met in many experimental situations.

Now we replace (1.75) by a discretization and we put

$$\Delta t_k = t_{k+1} - t_k > 0; \quad X_k = X(t_k); \quad \Delta X_k = X_{k+1} - X_k; \quad k = 0,1,\dots$$

The substitution into (1.75) yields

$$\Delta X_k = a(X_k,t_k)\Delta t_k + b(X_k,t_k)\Delta B_k; \quad \Delta B_k = B_{k+1} - B_k; \ k = 1,2,\dots \qquad (1.77)$$

where we used $\xi_{t_k}\Delta t_k = \Delta B_k$. A precise derivation of (1.77) is given in Section 2.2. Thus we can write (1.75) in the form

$$X_n = X_0 + \sum_{s=0}^{n-1}\left[a(X_s,t_s)\Delta t_s + b(X_s,t_s)\Delta B_s\right]; \quad X_0 = X(t_0). \qquad (1.78)$$

What happens in the limit $\Delta t_k \to 0$? If there is a "reasonable" limit of the last term in (1.78) we obtain as solution of the SDE (1.75)

$$X(t) = X(0) + \int_0^t a(X(s),s)\,ds + \text{``}\int_0^t b(X(s),s)\,dB_s\text{''}. \qquad (1.79)$$

The first integral in (1.79) is a conventional integral of Riemann type and we put the stochastic (noisy) integral into inverted commas. The irregularity of the noise does not allow us to calculate the stochastic integral in terms of a Riemann integral; this is caused by the fact that the paths of the WP are nowhere differentiable. Thus we find that an SI depends crucially on the decomposition of the integration interval.

We assumed in (1.75) to (1.79) that $b(X,t)$ is a deterministic function. We generalize the problem of the calculation of an SI and we consider a stochastic integrand

$$I = \int_0^t f(\omega,s)\,dB_s. \qquad (1.80)$$

We recall that Riemann integrals of the type ($g(s)$ is a differentiable function)

$$\int_0^t f(s)\,dg(s) = \int_0^t f(s)\dot g(s)\,ds,$$

are discretized in the following manner

$$\int_0^T f(s)\,dg(s) = \lim_{n\to\infty}\sum_{k=0}^{n-1} f(s_k)[g(s_{k+1}) - g(s_k)].$$

Thus, it is plausible to introduce a discretization of (1.80) that takes the form

$$I = \sum_k f(s_k,\omega)(B_{k+1} - B_k). \qquad (1.81)$$

In Equation (1.81) we used $s_k$ as time-argument of the integrand $f$. This is the value of $s$ that corresponds to the left endpoint of the discretization interval and we say that this decomposition does not look into the future. We call this type of integral an Ito integral and write

$$I_{\mathrm I} = \int_0^t f(s,\omega)\,dB_s. \qquad (1.82)$$

Another possible choice is to use the midpoint of the interval and with this we obtain the Stratonovich integral

$$I_{\mathrm S} = \int_0^t f(s,\omega)\circ dB_s = \sum_k f(\bar s_k,\omega)(B_{k+1} - B_k); \quad \bar s_k = \frac12(t_{k+1} + t_k). \qquad (1.83)$$

Note that the symbol "$\circ$" between integrand and the stochastic differential is used to indicate Stratonovich integrals.

There are, of course, an uncountable infinity of other decompositions of the integration interval that lead to different definitions of an SI. It is, however, convenient to take advantage only of the Ito and the Stratonovich integrals. We will discuss their properties and find out which type of integral is more appropriate for use in the analysis of stochastic differential equations.

Properties of the Ito integral

(a) We have for deterministic constants $a < b < c$ and $\alpha,\beta \in \mathbb{R}$

$$\int_a^c[\alpha f_1(s,\omega) + \beta f_2(s,\omega)]\,dB_s = \alpha I_1 + \beta I_2; \quad I_k = \int_a^c f_k(s,\omega)\,dB_s. \qquad (1.84)$$

Note that (1.84) remains valid for Stratonovich integrals. The proof of (1.84) is trivial.

In the following we give non-trivial properties that apply, however, exclusively to Ito integrals. First we need a definition:

Definition 1.9. (non-anticipative or adapted functions)
The function $f(t,B_s)$ is said to be non-anticipative (or adapted, see also Theorem 1.2) if it depends only on stochastic variables of the past: $B_s$ appears only for arguments $s \leq t$. Examples of non-anticipative functions are

$$f(s,\omega) = \int_0^s g(u)\,dB_u; \quad f(s,\omega) = B_s. \ \blacklozenge$$
Now we list further properties of the Ito integrals that include non-anticipative functions $f(s,B_s)$ and $g(s,B_s)$.

(b)
$$M_1 \equiv \left\langle\int_0^t f(s,B_s)\,dB_s\right\rangle = 0. \qquad (1.85)$$

Proof. We use (1.81) and obtain

$$M_1 = \sum_k\langle f(s_k,B_k)(B_{k+1}-B_k)\rangle.$$

But we know that $B_k$ is independent of $B_{k+1}-B_k$. The function $f(s_k,B_k)$ is thus also independent of $B_{k+1}-B_k$. Hence we obtain

$$M_1 = \sum_k\langle f(s_k,B_k)\rangle\langle B_{k+1}-B_k\rangle = 0.$$

This concludes the proof of (1.85).

(c) Here we study the average of a product of integrals and we show that

$$M_2 = \left\langle\int_0^t f(s,B_s)\,dB_s\int_0^t g(u,B_u)\,dB_u\right\rangle = \int_0^t\langle f(s,B_s)g(s,B_s)\rangle\,ds. \qquad (1.86)$$

Proof.

$$M_2 = \sum_{m,n}\langle f(s_m,B_m)(B_{m+1}-B_m)\,g(s_n,B_n)(B_{n+1}-B_n)\rangle.$$

We have to distinguish three subclasses: (i) $n > m$, (ii) $n < m$ and (iii) $n = m$. Taking into account the independence of the increments of WPs we see that only case (iii) contributes non-trivially to $M_2$. This yields

$$M_2 = \sum_n\langle f(s_n,B_n)g(s_n,B_n)(B_{n+1}-B_n)^2\rangle.$$

But we know that $f(s_n,B_n)g(s_n,B_n)$ is again a function that is independent of $(B_{n+1}-B_n)^2$. We use (1.62) and obtain

$$\langle(B_{n+1}-B_n)^2\rangle = \langle B_{n+1}^2 - 2B_{n+1}B_n + B_n^2\rangle = t_{n+1} - t_n = \Delta t_n,$$

and thus we get

$$M_2 = \sum_n\langle f(s_n,B_n)g(s_n,B_n)\rangle\langle(B_{n+1}-B_n)^2\rangle = \sum_n\langle f(s_n,B_n)g(s_n,B_n)\rangle\Delta t_n.$$

The last relation tends for $\Delta t_n \to 0$ to (1.86).

(d) A generalization of the property (c) is given by

$$M_3 = \left\langle\int_0^a f(s,B_s)\,dB_s\int_0^b g(u,B_u)\,dB_u\right\rangle = \int_0^{a\wedge b}\langle f(s,B_s)g(s,B_s)\rangle\,ds. \qquad (1.87)$$

To prove (1.87) we must distinguish two subclasses: (i) $b = a + c > a$ and (ii) $a = b + c > b$; $c > 0$. We consider only case (i); the proof for case (ii) is done by analogy. We derive from (1.86) and (1.87)

$$M_3 = M_2 + \left\langle\int_0^a f(s,B_s)\,dB_s\int_a^b g(u,B_u)\,dB_u\right\rangle = M_2 + \sum_n\sum_{m>n}\langle f(s_n,B_n)g(s_m,B_m)\Delta B_n\Delta B_m\rangle.$$

But we see that $f(s_n,B_n)$ and $\Delta B_n$ are independent of $g(s_m,B_m)$ and $\Delta B_m$. Hence, we obtain

$$M_3 = M_2 + \sum_n\langle f(s_n,B_n)\Delta B_n\rangle\sum_{m>n}\langle g(s_m,B_m)\Delta B_m\rangle = M_2,$$

where we used (1.85). This concludes the proof of (1.87) for case (i).

Now we calculate an example

$$I(t) = \int_0^t B_s\,dB_s. \qquad (1.88a)$$

First of all we obtain with the use of (1.85) and (1.86) the moments of the stochastic variable (1.88a)

$$\langle I(t)\rangle = 0; \quad \langle I(t)I(t+\tau)\rangle = \int_0^{\gamma}\langle B_s^2\rangle\,ds = \int_0^{\gamma}s\,ds = \frac{\gamma^2}{2}; \quad \gamma = t\wedge(t+\tau). \qquad (1.88b)$$
We calculate the integral with an Ito decomposition

$$I = \sum_k B_k(B_{k+1} - B_k).$$

But we have

$$\Delta(B_k^2) = B_{k+1}^2 - B_k^2 = (B_{k+1}-B_k)^2 + 2B_k(B_{k+1}-B_k) = (\Delta B_k)^2 + 2B_k(B_{k+1}-B_k).$$

Hence we obtain

$$I(t) = \frac12\sum_k\left[\Delta(B_k^2) - (\Delta B_k)^2\right].$$

We calculate now the two sums in the last line separately. Thus we obtain in the first place

$$I_1(t) = \sum_k\Delta(B_k^2) = (B_1^2 - B_0^2) + (B_2^2 - B_1^2) + \cdots + (B_N^2 - B_{N-1}^2) = B_N^2 \to B_t^2,$$

where we used $B_0 = 0$. The second sum and its average are given by

$$I_2(t) = \sum_k(\Delta B_k)^2 = \sum_k(B_{k+1}^2 - 2B_{k+1}B_k + B_k^2); \quad \langle I_2(t)\rangle = \sum_k\Delta t_k = t.$$

The relation $\langle I_2(t)\rangle = t$ gives not only the average but also the integral $I_2(t)$ itself. However, the direct calculation of $I_2(t)$ is impractical and we refer the reader to the book of Øksendal [1.8], where the corresponding algebra is performed. We use instead an indirect proof and show that the quantity $z$ (the deviation of $I_2(t)$ from $t$) is a deterministic function with the value zero. Thus, we put $z = I_2(t) - t$. The mean value is clearly $\langle z\rangle = 0$ and we obtain

$$\langle z^2\rangle = \langle I_2^2(t) - 2tI_2(t) + t^2\rangle = \langle I_2^2(t)\rangle - t^2.$$

But we have

$$\langle I_2^2(t)\rangle = \sum_k\sum_m\langle(\Delta B_k)^2(\Delta B_m)^2\rangle. \qquad (1.88c)$$

The independence of the increments of the WPs, together with the Gaussian relation $\langle(\Delta B_k)^4\rangle = 3\langle(\Delta B_k)^2\rangle^2$ (see EX 1.6), yields

$$\langle(\Delta B_k)^2(\Delta B_m)^2\rangle = \langle(\Delta B_k)^2\rangle\langle(\Delta B_m)^2\rangle + 2\delta_{km}\langle(\Delta B_k)^2\rangle^2,$$

hence we obtain

$$\langle I_2^2(t)\rangle = \left(\sum_k\langle(\Delta B_k)^2\rangle\right)^2 + 2\sum_k\langle(\Delta B_k)^2\rangle^2 = t^2 + 2\sum_k(t_{k+1}-t_k)^2.$$

However, we have

$$\sum_k(t_{k+1}-t_k)^2 \leq \Delta t\sum_k\Delta t_k = t\,\Delta t \to 0 \quad\text{for } \Delta t\to 0,$$

and this indicates that $\langle z^2\rangle = 0$. This procedure can be pursued to higher orders and we obtain the result that all moments of $z$ are zero; thus $I_2(t) = t$. We obtain finally

$$I(t) = \int_0^t B_s\,dB_s = \frac12(B_t^2 - t). \qquad (1.89)$$

There is a generalization of the previous results with respect to higher order moments. We consider here moments of a stochastic integral with a deterministic integrand

$$J_k(t) = \int_0^t f_k(s)\,dB_s; \quad k\in\mathbb N. \qquad (1.90)$$

These integrals are a special case of the ones in (1.82) and we know from (1.85) that the mean value of (1.90) is zero. The covariance of (1.90) is given by (see (1.86))

$$\langle J_k(t)J_m(t)\rangle = \int_0^t f_k(s)f_m(s)\,ds.$$

But we can obtain formally the same result if we put

$$\langle dB_s\,dB_u\rangle = \delta(s-u)\,ds\,du. \qquad (1.91)$$

A formal justification of (1.91) is given in Chapter 2 in connection with formula (2.41). Here we show that (1.91) leads to a result that is identical to the consequences of (1.86):

$$\langle J_k(t)J_m(t)\rangle = \int_0^t f_k(s)\int_0^t f_m(u)\langle dB_s\,dB_u\rangle = \int_0^t\int_0^t f_k(s)f_m(u)\delta(s-u)\,ds\,du = \int_0^t f_k(s)f_m(s)\,ds.$$

We also know that $B_t$ and hence $dB_t$ are Gaussian and Markovian. This means that all odd moments of the integral (1.90) must vanish

$$\langle J_k(t)J_m(t)J_r(t)\rangle = \cdots = 0. \qquad (1.92a)$$

To calculate higher order moments we use the properties of the multivariate GD and we put for the 4th order moment of the differential

$$\langle dB_p\,dB_q\,dB_u\,dB_v\rangle = \langle dB_p\,dB_q\rangle\langle dB_u\,dB_v\rangle + \langle dB_p\,dB_u\rangle\langle dB_q\,dB_v\rangle + \langle dB_p\,dB_v\rangle\langle dB_q\,dB_u\rangle$$
$$= [\delta(p-q)\delta(u-v) + \delta(p-u)\delta(q-v) + \delta(p-v)\delta(q-u)]\,dp\,dq\,du\,dv.$$

Note that the 4th order moment of the differential of WPs has a form similar to an isotropic 4th order tensor. Hence, we obtain

$$\langle J_j(t)J_m(t)J_r(t)J_s(t)\rangle = \int_0^t f_jf_m\,d\alpha\int_0^t f_rf_s\,d\beta + \int_0^t f_jf_r\,d\alpha\int_0^t f_mf_s\,d\beta + \int_0^t f_jf_s\,d\alpha\int_0^t f_mf_r\,d\beta.$$

This leads in a special case to

$$\langle J_k^4(t)\rangle = 3\langle J_k^2(t)\rangle^2. \qquad (1.92b)$$

Again, this procedure can be carried out also for higher order moments and we obtain

$$\langle J_k^{2\mu+1}(t)\rangle = 0; \quad \langle J_k^{2\mu}(t)\rangle = 1\cdot3\cdots(2\mu-1)\,\langle J_k^2(t)\rangle^{\mu}; \quad \mu\in\mathbb N. \qquad (1.92c)$$

Equation (1.92) signifies that the stochastic Ito integral (1.90) with the deterministic integrand $f_k(s)$ is N$[0,\int_0^t f_k^2(s)\,ds]$ distributed.
However, one can also show that the Ito integral with the non-anticipative integrand

$$K(t) = \int_0^t g(s,B_s)\,dB_s \qquad (1.93a)$$

is, in analogy to the stochastic integral with the deterministic integrand,

$$\mathrm N[0,\rho(t)]; \quad \rho(t) = \int_0^t\langle g^2(u,B_u)\rangle\,du, \qquad (1.93b)$$

distributed (see Arnold [1.2]). The variable $\rho(t)$ is referred to as the intrinsic time of the stochastic integral (1.93a). We use this variable to show with Kolmogorov's Theorem (1.20) that (1.93a) possesses continuous SF. The Ito integral

$$x_k = \int_0^{t_k} g(u,B_u)\,dB_u, \quad t_1 = t > t_2 = s,$$

with

$$\langle x_k\rangle = 0; \quad \langle x_k^2\rangle = \rho(t_k) = \rho_k; \ k = 1,2; \quad \langle x_1x_2\rangle = \rho_2,$$

has according to (1.35a) the joint PD

$$p_2(x_1,x_2) = [(2\pi)^2\rho_2(\rho_1-\rho_2)]^{-1/2}\exp\left[-\frac{x_2^2}{2\rho_2} - \frac{(x_1-x_2)^2}{2(\rho_1-\rho_2)}\right].$$

Yet the latter line is identical with the bivariate PD of the Wiener process (1.60) if we replace in the latter equation the $t_k$ by $\rho_k$. Hence, we obtain from Kolmogorov's criterion $\langle[x_1(\rho_1) - x_2(\rho_2)]^2\rangle = |\rho_1-\rho_2|$ and this guarantees the continuity of the SF of the Ito integral (1.93a).

A further important feature of Ito integrals is their martingale property. We verify this now for the case of the integral (1.89). To achieve this, we generalize the martingale formula (1.64) to the case of arbitrary functions of the Brownian motion

$$\langle f(y_2,s)\mid f(y_1,t)\rangle = \int f(y_2,s)\,p_{1|1}(y_2,s\mid y_1,t)\,dy_2 = f(y_1,t); \quad y_k = B_{t_k}; \ \forall s > t, \qquad (1.94)$$

where $p_{1|1}$ is given by (1.53). To verify now the martingale property of the integral (1.89) we specify (1.94) to

$$\langle I(y_2,s)\mid I(y_1,t)\rangle = \frac{1}{2\sqrt{2\pi\beta}}\int(y_2^2 - s)\exp[-(y_2-y_1)^2/(2\beta)]\,dy_2; \quad \beta = s - t.$$

The application of the standard substitution $y_2 = y_1 + z\sqrt{2\beta}$ (see EX 1.1) yields

$$\langle I(y_2,s)\mid I(y_1,t)\rangle = \frac{1}{2\sqrt\pi}\int\left(y_1^2 + 2y_1z\sqrt{2\beta} + 2\beta z^2 - s\right)\exp(-z^2)\,dz = \frac12(y_1^2 + \beta - s) = \frac12(y_1^2 - t) = I(y_1,t). \qquad (1.95)$$

This concludes the proof that the Ito integral (1.89) is a martingale. The general proof that all Ito integrals are martingales is given by Øksendal [1.8]. However, we will encounter the martingale property for a particular class of Ito integrals in the next section.

To conclude this example we add here also the Stratonovich version of the integral (1.89). This yields (the subscript S indicates a Stratonovich integral)

$$I_{\mathrm S}(t) = \int_0^t B_s\circ dB_s = \frac12\sum_k(B_{k+1}+B_k)(B_{k+1}-B_k) = \frac12\sum_k(B_{k+1}^2 - B_k^2) = \frac12 B_t^2. \qquad (1.96)$$

The result (1.96) is the "classical" value of the integral whereas the Ito integral gives a non-classical result. Note also the significant differences between the Ito and Stratonovich integrals. Even the moments do not coincide since we infer from (1.96)

$$\langle I_{\mathrm S}(t)\rangle = \frac t2 \quad\text{and}\quad \langle I_{\mathrm S}(t)I_{\mathrm S}(u)\rangle = \frac14[tu + 2(t\wedge u)^2].$$

It is now easy to show that the Stratonovich integral $I_{\mathrm S}$ is not a martingale. We obtain this result if we drop the term $s$ in the second line of (1.95):

$$\langle I_{\mathrm S}(y_2,s)\mid I_{\mathrm S}(y_1,t)\rangle = \frac12(y_1^2 + \beta) \neq I_{\mathrm S}(y_1,t). \ \blacklozenge$$
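The two decompositions (1.81) and (1.83) are easy to compare numerically. The following minimal sketch (assuming Python with NumPy; grid size and seed are illustrative) evaluates both Riemann-type sums for $\int_0^t B_s\,dB_s$ on one simulated path and compares them with (1.89) and (1.96):

    import numpy as np

    rng = np.random.default_rng(1)
    T, n = 1.0, 100_000
    dt = T / n
    dB = np.sqrt(dt) * rng.standard_normal(n)
    B = np.concatenate(([0.0], np.cumsum(dB)))          # B_0 = 0

    ito = np.sum(B[:-1] * dB)                           # left endpoint, (1.81)
    strat = np.sum(0.5 * (B[:-1] + B[1:]) * dB)         # midpoint rule, (1.83)

    print(ito, 0.5 * (B[-1]**2 - T))                    # Ito result (1.89)
    print(strat, 0.5 * B[-1]**2)                        # Stratonovich result (1.96)

On the same path the two sums differ by approximately $t/2$, which is exactly the non-classical term of (1.89).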
Hence, we may summarize the properties of the Ito and Stratonovich integrals. The Stratonovich concept uses all the transformation rules of classical integration theory and thus leads in many applications to an easy way of performing the integration. Deviating from the Ito integral, the Stratonovich integral does, however, not possess the effective rules for calculating averages such as (1.85) to (1.87), nor does it have the martingale property. In the following we will consider both integration concepts and their application to the solution of SDEs.

We have calculated so far only one stochastic integral and we continue in the next section with helpful rules to perform the stochastic integration.

1.9. The Ito Formula

We begin with the differential of a function $\Phi(B_t,t)$. Its Ito differential takes the form

$$d\Phi(B_t,t) = \Phi_t\,dt + \Phi_{B_t}\,dB_t + \frac12\Phi_{B_tB_t}(dB_t)^2. \qquad (1.97.1)$$

Formula (1.97.1) contains a non-classical term that is proportional to the second derivative WRT $B_t$. We must supplement (1.97.1) by a further non-classical relation

$$(dB_t)^2 = dt. \qquad (1.97.2)$$

Thus, we infer from (1.97.1,2) the final form of this differential

$$d\Phi(B_t,t) = \left(\Phi_t + \frac12\Phi_{B_tB_t}\right)dt + \Phi_{B_t}\,dB_t. \qquad (1.98)$$

Next we derive the Ito differential of the function $Y = g(x,t)$ where $x$ is the solution of the SDE

$$dx = a(x,t)\,dt + b(x,t)\,dB_t. \qquad (1.99.1)$$

In analogy to (1.97.1) we include a non-classical term and put

$$dY = g_t\,dt + g_x\,dx + \frac12 g_{xx}(dx)^2.$$

We substitute $dx$ from (1.99.1) into the last line and apply the non-classical formulas

$$(dx)^2 = (a\,dt + b\,dB_t)^2 = b^2\,dt; \quad (dt)^2 = dt\,dB_t = 0; \quad (dB_t)^2 = dt, \qquad (1.99.2)$$

and this yields

$$dY = \left(g_t + ag_x + \frac{b^2}{2}g_{xx}\right)dt + bg_x\,dB_t. \qquad (1.99.3)$$

The latter equation is called the Ito formula for the total differential of the function $Y = g(x,t)$, given the SDE (1.99.1). (1.99.3) contains the non-classical term $b^2g_{xx}/2$ and it differs thus from the classical (or Stratonovich) total differential

$$dY_c = (g_t + ag_x)\,dt + bg_x\,dB_t. \qquad (1.100)$$

Note that both the Ito and the Stratonovich differentials coincide if $g(x,t)$ is a first order polynomial in the variable $x$.

We postpone a sketch of the proof of (1.99) for a moment and give an example of the application of this formula. We use (1.99.1) in the form

$$dx = dB_t, \quad\text{or } x = B_t \ \text{with } a = 0,\ b = 1, \qquad (1.101a)$$

and we consider the function

$$Y = g(x) = x^2/2; \quad g_t = 0; \ g_x = x; \ g_{xx} = 1. \qquad (1.101b)$$

Thus we obtain from (1.99.3) and (1.101b)

$$dY = d(x^2/2) = dt/2 + B_t\,dB_t,$$

and the integration of this total differential yields

$$\int_0^t d(B_s^2/2) = B_t^2/2 = t/2 + \int_0^t B_s\,dB_s,$$

and the last line reproduces (1.89). ♦

We give now a sketch of the proof of the Ito formula (1.99) and we follow in part considerations of Schuss [1.9]. It is instructive to perform this in detail and we do it in four consecutive steps labeled S1 to S4.
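Before the proof, a brief numerical illustration of (1.99.3) may be useful (a minimal sketch assuming Python with NumPy; parameters and seed are illustrative). For $Y = g(B_t,t) = \exp(aB_t - a^2t/2)$ the Ito formula gives $g_t + \frac12 g_{xx} = 0$, so the drift cancels and $dY = aY\,dB_t$; the left-endpoint Euler recursion $Y_{k+1} = Y_k(1 + a\,\Delta B_k)$ should therefore track the exact expression along a path:

    import numpy as np

    rng = np.random.default_rng(2)
    a, T, n = 0.8, 1.0, 100_000
    dt = T / n
    dB = np.sqrt(dt) * rng.standard_normal(n)
    B = np.concatenate(([0.0], np.cumsum(dB)))
    t = np.linspace(0.0, T, n + 1)

    Y_exact = np.exp(a * B - 0.5 * a**2 * t)
    Y_euler = np.concatenate(([1.0], np.cumprod(1.0 + a * dB)))  # dY = a Y dB

    print(abs(Y_euler[-1] - Y_exact[-1]))   # small; shrinks as dt -> 0

The agreement confirms that the non-classical term $\frac12\Phi_{B_tB_t}\,dt$ in (1.98) is precisely what is needed to balance the drift of the exponential.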
S1. We begin with the consideration of the stochastic function $x(t)$ given by

$$x(v) - x(u) = \int_u^v a(x(s),s)\,ds + \int_u^v b(x(s),s)\,dB_s, \qquad (1.102)$$

where $a$ and $b$ are two differentiable functions. Thus, we obtain the differential of $x(t)$ if we put in (1.102) $v = u + dt$ and let $dt\to 0$:

$$dx(u) = a(x(u),u)\,du + b(x(u),u)\,dB_u. \qquad (1.103)$$

Before we pass to the next step we consider two important examples.

Example 1. (integration by parts)
Here we consider a deterministic function $f$ and a stochastic function $Y$ and we put

$$Y(B_t,t) = g(B_t,t) = f(t)B_t. \qquad (1.104a)$$

The total differential is in both (Ito and Stratonovich) cases (see (1.98) with $\Phi_{B_tB_t} = 0$) given by the exact formula

$$dY = d[f(t)B_t] = f(t)\,dB_t + \dot f(t)B_t\,dt. \qquad (1.104b)$$

The integration of this differential yields

$$f(t)B_t = \int_0^t\dot f(s)B_s\,ds + \int_0^t f(s)\,dB_s. \qquad (1.105a)$$

Subtracting the last line for $t = u$ from the same relation for $t = v$ yields

$$f(v)B_v - f(u)B_u = \int_u^v\dot f(s)B_s\,ds + \int_u^v f(s)\,dB_s. \qquad (1.105b)$$

Example 2. (Martingale property)
We consider a particular class of Ito integrals

$$I(t) = \int_0^t f(u)\,dB_u, \qquad (1.106)$$

and show that $I(t)$ is a martingale. First we realize that the integral $I(t)$ is a particular case of the class (1.93a) with $g(u,B_u) = f(u)$. Hence we know that the variable (1.106) is normally distributed and possesses the intrinsic time given by (1.93b). Its transition probability $p_{1|1}$ is defined by (1.53) with $t_j = \rho(t_j)$; $y_j = I(t_j)$; $j = 1,2$. This concludes the proof that the integral (1.106) obeys a martingale property like (1.27) or (1.64). ♦

S2. Here we consider the product of two stochastic functions subjected to two SDEs with constant coefficients

$$dx_k(t) = a_k\,dt + b_k\,dB_t; \quad a_k,b_k = \text{const}; \ k = 1,2, \qquad (1.107)$$

with the solutions

$$x_k(t) = a_kt + b_kB_t; \quad x_k(0) = 0. \qquad (1.108)$$

The task to evaluate $d(x_1x_2)$ is outlined in EX 1.9 and we obtain with the aid of (1.89)

$$d(x_1x_2) = x_2\,dx_1 + x_1\,dx_2 + b_1b_2\,dt. \qquad (1.109)$$

The term proportional to $b_1b_2$ in (1.109) is non-classical and it is a mere consequence of the non-classical term in (1.89).

The relation (1.109) was derived for constant coefficients in (1.107). One may derive (1.109) under the assumption of step functions for the coefficients $a$ and $b$, and with that one can approximate differentiable functions (see Schuss [1.9]).

We consider now two examples.

Example 1. We put $x_1 = B_t$, $x_2 = B_t^2$. Thus, we obtain with an application of (1.101b) and (1.109)

$$dB_t^3 = B_t\,dB_t^2 + B_t^2\,dB_t + 2B_t\,dt = 3(B_t\,dt + B_t^2\,dB_t).$$

The use of the induction rule yields the generalization

$$dB_t^k = kB_t^{k-1}\,dB_t + \frac{k(k-1)}{2}B_t^{k-2}\,dt. \qquad (1.110)$$
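Relation (1.110) can be checked pathwise. A minimal sketch (assuming Python with NumPy; grid and seed are illustrative) integrates both sides for $k = 3$, i.e. $B_T^3 = 3\int_0^T B_s^2\,dB_s + 3\int_0^T B_s\,ds$, using left-point sums:

    import numpy as np

    rng = np.random.default_rng(3)
    T, n = 1.0, 200_000
    dt = T / n
    dB = np.sqrt(dt) * rng.standard_normal(n)
    B = np.concatenate(([0.0], np.cumsum(dB)))

    rhs = 3.0 * np.sum(B[:-1]**2 * dB) + 3.0 * np.sum(B[:-1] * dt)
    print(B[-1]**3, rhs)    # agree up to the O(sqrt(dt)) discretization error

Note that without the non-classical term $3\int B_s\,ds$ the two numbers would differ by a quantity of order one, not of order $\sqrt{dt}$.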
Example 2. Here we consider polynomials of the Brownian motion

$$P_n(B_t) = c_0 + c_1B_t + \cdots + c_nB_t^n; \quad c_k = \text{const}. \qquad (1.111)$$

The application of (1.110) to (1.111) leads to

$$dP_n(B_t) = P_n'(B_t)\,dB_t + \frac12 P_n''(B_t)\,dt; \quad {}' = d/dB_t. \qquad (1.112)$$

The relation (1.112) is also valid for all functions that can be expanded in the form of polynomials. ♦

S3. Here we consider the product

$$\Phi(B_t,t) = \varphi(B_t)\,g(t), \qquad (1.113)$$

where $g$ is a deterministic function. The use of (1.109) and (1.112) yields

$$d\Phi(B_t,t) = g(t)\,d\varphi(B_t) + \varphi(B_t)\dot g(t)\,dt = \left(\varphi'\,dB_t + \frac12\varphi''\,dt\right)g + \varphi\dot g(t)\,dt = \left(\varphi\dot g + \frac12\varphi''g\right)dt + g\varphi'\,dB_t. \qquad (1.114)$$

But we also have

$$\Phi_t = \varphi\dot g; \quad \Phi_{B_t} = g\varphi'; \quad \Phi_{B_tB_t} = g\varphi''. \qquad (1.115)$$

Thus, we obtain

$$d\Phi = \left(\frac{\partial}{\partial t} + \frac12\frac{\partial^2}{\partial B_t^2}\right)\Phi\,dt + \frac{\partial\Phi}{\partial B_t}\,dB_t. \qquad (1.116)$$

Equation (1.116) applies, in the first place, only to the function (1.113). However, the use of the expansion

$$\Phi(B_t,t) = \sum_{k=1}^{\infty}\varphi_k(B_t)g_k(t), \qquad (1.117)$$

shows that (1.116) is valid for arbitrary functions and this proves (1.98).

S4. In this last step we do not apply the separation (1.113) or (1.117) but we use a differentiable function of the variables $(x,t)$, where $x$ satisfies an SDE of the type (1.107)

$$\Phi(B_t,t) = g(x,t) = g(at + bB_t,\,t); \quad dx = a\,dt + b\,dB_t; \ a,b = \text{const}, \qquad (1.118)$$

$$\Phi_t = ag_x + g_t; \quad \Phi_{B_t} = bg_x; \quad \Phi_{B_tB_t} = b^2g_{xx}.$$

Thus we obtain with (1.116)

$$dY = \left(g_t + ag_x + \frac{b^2}{2}g_{xx}\right)dt + bg_x\,dB_t. \qquad (1.119)$$

The relation (1.119) represents the Ito formula (1.99.3) (for constant coefficients $a$ and $b$). As before, we can generalize the proof and (1.119) is valid for arbitrary coefficients $a(x,t)$ and $b(x,t)$.

We generalize now the Ito formula to the case of a multivariate process. First we consider K functions of the type

$$y_k = y_k(B_t^1,\dots,B_t^M,t); \quad k = 1,2,\dots,K,$$

where $B_t^1,\dots,B_t^M$ are M independent Brownian motions. We take advantage of the summation convention and obtain the generalization of (1.97.1)

$$dy_k(B_t^1,\dots,B_t^M,t) = \frac{\partial y_k}{\partial t}dt + \frac{\partial y_k}{\partial B_t^r}dB_t^r + \frac12\frac{\partial^2y_k}{\partial B_t^r\partial B_t^s}dB_t^r\,dB_t^s; \quad k = 1,\dots,K; \ r,s = 1,\dots,M. \qquad (1.120)$$

We generalize (1.97.2) and put

$$dB_t^r\,dB_t^s = \delta_{rs}\,dt, \qquad (1.121)$$

and we obtain (see (1.98))

$$dy_k(B_t^1,\dots,B_t^M,t) = \left(\frac{\partial}{\partial t} + \frac12\frac{\partial^2}{\partial B_t^r\partial B_t^r}\right)y_k\,dt + \frac{\partial y_k}{\partial B_t^r}\,dB_t^r. \qquad (1.122)$$

Now we consider a set of n SDEs

$$dX_k = a_k(X_1,\dots,X_n,t)\,dt + b_{kr}(X_1,\dots,X_n,t)\,dB_t^r; \quad k = 1,2,\dots,n; \ r = 1,2,\dots,R. \qquad (1.123)$$

We wish to calculate the differential of the function

$$Z_k = Z_k(X_1,\dots,X_n,t); \quad k = 1,\dots,K. \qquad (1.124)$$

The differential reads

$$dZ_k = \frac{\partial Z_k}{\partial t}dt + \frac{\partial Z_k}{\partial X_m}dX_m + \frac12\frac{\partial^2Z_k}{\partial X_m\partial X_u}dX_m\,dX_u$$
$$= \frac{\partial Z_k}{\partial t}dt + \frac{\partial Z_k}{\partial X_m}(a_m\,dt + b_{mr}\,dB_t^r) + \frac12\frac{\partial^2Z_k}{\partial X_m\partial X_u}(a_m\,dt + b_{mr}\,dB_t^r)(a_u\,dt + b_{us}\,dB_t^s);$$
$$m,u = 1,2,\dots,n; \quad r,s = 1,2,\dots,R. \qquad (1.125)$$

The n-dimensional generalization of the rule (1.99.2) is given by

$$dB_t^r\,dB_t^u = \delta_{ru}\,dt; \quad (dt)^2 = dB_t^r\,dt = 0. \qquad (1.126)$$

Thus, we obtain the differential of the vector valued function (1.124)

$$dZ_k = \left(\frac{\partial Z_k}{\partial t} + a_m\frac{\partial Z_k}{\partial X_m} + \frac12 b_{mr}b_{ur}\frac{\partial^2Z_k}{\partial X_m\partial X_u}\right)dt + b_{mr}\frac{\partial Z_k}{\partial X_m}\,dB_t^r. \qquad (1.127)$$

Now we conclude this section with two examples.

Example 1. A stochastic process is given by

$$Y_1 = B_t^1 + B_t^2 + B_t^3; \quad Y_2 = (B_t^2)^2 - B_t^1B_t^3.$$

We obtain for the SDEs in the form (1.120) corresponding to the last line

$$dY_1 = dB_t^1 + dB_t^2 + dB_t^3; \quad dY_2 = dt + 2B_t^2\,dB_t^2 - (B_t^3\,dB_t^1 + B_t^1\,dB_t^3). \ \blacklozenge$$
Example 2. Here we study a single stochastic process under the influence of two independent Brownian motions

$$dx = \alpha(x,t)\,dt + \beta(x,t)\,dB_t^1 + \gamma(x,t)\,dB_t^2. \qquad (1.128)$$

The differential of the function $Y = g(x,t)$ has the form

$$dY = \left[g_t + \alpha g_x + \frac12(\beta^2+\gamma^2)g_{xx}\right]dt + g_x(\beta\,dB_t^1 + \gamma\,dB_t^2).$$

We consider now the special case

$$g = \ln x; \quad \alpha = rx; \ \beta = ux; \ \gamma = \sigma x; \quad r,u,\sigma = \text{const},$$

and we obtain

$$d(\ln x) = [r - (u^2+\sigma^2)/2]\,dt + (u\,dB_t^1 + \sigma\,dB_t^2). \qquad (1.129)$$

We will use (1.129) in Section 2.1 of the next chapter. ♦

We introduced in this chapter some elements of probability theory and added the basic ideas about SDEs. For readers who wish to get more deeply involved in the abstract theory of probability, and in particular with measure theory, we suggest the following books: Chung & AitSahlia [1.10], Ross [1.11], Malliavin [1.12], Pitman [1.13] and Shiryaev [1.14].

Appendix: Poisson Processes

In many applications there appears a random set of countable points driven by some stochastic system. Typical examples are arrival times of customers (at the desk of an office, at the gate of an airport, etc.), the birth process of an organism, or the number of competing building projects for a state budget. The randomness in such phenomena is conveniently described by Poisson distributed variables.

First we verify that the Poisson distribution is the limit of the Bernoulli distribution. We substitute for the argument $p$ in the Bernoulli distribution in Section 1.1 the value $p = a/n$ and this yields

$$b(k,n,a/n) = \binom nk\left(\frac an\right)^k\left(1-\frac an\right)^{n-k}; \quad b(0,n,a/n) = \left(1-\frac an\right)^n \to \exp(-a) \ \text{for } n\to\infty.$$

Now we put

$$\frac{b(k+1,n,a/n)}{b(k,n,a/n)} = \frac an\,\frac{n-k}{k+1}\Big/\left(1-\frac an\right) \to \frac{a}{k+1}, \qquad (\mathrm A.1)$$

and this yields

$$b(1,n,a/n)\to a\exp(-a); \quad b(2,n,a/n)\to\frac{a^2}{2}\exp(-a); \ \dots; \quad b(k,n,a/n)\to\frac{a^k}{k!}\exp(-a) = \pi_k(a).$$

Definition. (Homogeneous Poisson process (HPP))
A random point process N(t), $t\geq 0$, on the real axis is a HPP with a constant intensity $\lambda$ if it satisfies the three conditions:
(a) N(0) = 0.
(b) The random increments $\mathrm N(t_k) - \mathrm N(t_{k-1})$; $k = 1,2,\dots$ are for any sequence of times $0 < t_0 < t_1 < \cdots < t_n < \cdots$ mutually independent.
(c) The random increments defined in condition (b) are Poisson distributed in the form

$$\Pr([\mathrm N(t_{r+1}) - \mathrm N(t_r)] = k) = \frac{(\lambda\tau_r)^k}{k!}\exp(-\lambda\tau_r); \quad \tau_r = t_{r+1} - t_r; \ k = 0,1,\dots; \ r = 1,2,\dots. \qquad (\mathrm A.2)$$

To analyze the sample paths we consider the increment $\Delta\mathrm N(t) = \mathrm N(t+\Delta t) - \mathrm N(t)$. Its probability has, for small values of $\Delta t$, the form

$$\Pr(\Delta\mathrm N(t) = k) = \frac{(\lambda\Delta t)^k}{k!}\exp(-\lambda\Delta t) \approx \begin{cases}1 - \lambda\Delta t & \text{for } k = 0,\\ \lambda\Delta t & \text{for } k = 1,\\ O(\Delta t^2) & \text{for } k\geq 2.\end{cases} \qquad (\mathrm A.3)$$

Equation (A.3) means that for $\Delta t\to 0$ the value of $\mathrm N(t+\Delta t)$ is most likely the one of $\mathrm N(t)$ ($\Pr([\mathrm N(t+\Delta t) - \mathrm N(t)] = 0)\approx 1$).
However, the part of (A.3) with $\Pr([\mathrm N(t+\Delta t) - \mathrm N(t)] = 1)\approx\lambda\Delta t$ indicates that there is a small chance for a jump of height unity. The probability of jumps with higher heights $k = 2,3,\dots$, corresponding to the third part of (A.3), is subdominantly small and such jumps do not appear.

We calculate the moments of the HPP in two alternative ways. (i) We use (1.5) with (A.2) to obtain

$$\langle x^m\rangle = \int p(x)x^m\,dx = \sum_{k=0}^\infty k^m\Pr(x = k) = \exp(-\alpha)\sum_{k=0}^\infty k^m\alpha^k/k!; \quad \alpha = \lambda t, \qquad (\mathrm A.4)$$

or we apply (ii) the concept of the generating function defined by

$$g(z) = \sum_{k=0}^\infty z^k\Pr(x = k), \quad\text{with } g'(1) = \langle x\rangle; \ g''(1) = \langle x^2\rangle - \langle x\rangle,\ \dots; \quad {}' = \frac{d}{dz}. \qquad (\mathrm A.5)$$

This leads in the case of an HPP to

$$g(z) = \sum_{k=0}^\infty z^k\alpha^k\exp(-\alpha)/k! = \exp(-\alpha)\sum_{k=0}^\infty(z\alpha)^k/k! = \exp[\alpha(z-1)]. \qquad (\mathrm A.6)$$

In either case we obtain

$$\langle\mathrm N(t)\rangle = \lambda t, \quad \langle\mathrm N^2(t)\rangle = (\lambda t)^2 + \lambda t. \qquad (\mathrm A.7)$$
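A minimal Monte Carlo sketch of (A.7) (assuming Python with NumPy; rate, horizon, ensemble size and seed are illustrative) builds sample paths from exponentially distributed inter-arrival times, which is the standard constructive realization of conditions (a)-(c), and compares the first two moments with (A.7):

    import numpy as np

    rng = np.random.default_rng(4)
    lam, t, n_paths = 2.0, 5.0, 20_000

    counts = np.empty(n_paths, dtype=int)
    for i in range(n_paths):
        arrival, k = 0.0, 0
        while True:
            arrival += rng.exponential(1.0 / lam)   # waiting time to next jump
            if arrival > t:
                break
            k += 1
        counts[i] = k

    print(counts.mean(), lam * t)                               # <N(t)>
    print(np.mean(counts.astype(float)**2), (lam*t)**2 + lam*t) # (A.7)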
We calculate now the PD of the sum $x_1 + x_2$ of two independent HPPs. By definition this yields

$$\Pr([x_1+x_2] = k) = \Pr\left(\bigcup_{j=0}^k[x_1 = j,\ x_2 = k-j]\right) = \sum_{j=0}^k\Pr(x_1 = j,\ x_2 = k-j)$$
$$= \sum_{j=0}^k\frac{\theta_1^j}{j!}\exp(-\theta_1)\,\frac{\theta_2^{k-j}}{(k-j)!}\exp(-\theta_2) = \exp[-(\theta_1+\theta_2)]\sum_{j=0}^k\theta_1^j\theta_2^{k-j}\binom kj\Big/k! = \exp[-(\theta_1+\theta_2)]\,(\theta_1+\theta_2)^k/k!. \qquad (\mathrm A.8)$$

If the two variables are IID ($\theta = \theta_1 = \theta_2$), (A.8) reduces to

$$\Pr([x_1+x_2] = k) = \exp(-2\theta)(2\theta)^k/k!. \qquad (\mathrm A.9)$$

HPPs play important roles in Markov processes (see Bremaud [1.16]). In many applications these Markov chains are iterations driven by "white noise" modeled by HPPs. Such iterations arise in the study of the stability of continuous periodic phenomena, in biology and economics, etc. We consider iterations of the form

$$x(t+s) = \mathrm F(x(s),\,\mathrm Z(t+s)); \quad s,t\in\mathbb N_0, \qquad (\mathrm A.10)$$

where $t,s$ are discrete variables and $x(t)$ is a discrete random variable driven by the white noise $\mathrm Z(t+s)$. An important particular case is $\mathrm Z(t+s) := \mathrm N(t+s)$ with a PD

$$\Pr(\mathrm N(t+s) = k) = \exp(-u)u^k/k!; \quad u = \theta(t+s).$$

The transition probability $p_{ji}$ is the matrix governing the transition from state $i$ to state $j$.

Examples

(i) Random walk
This is an iteration of a discrete random variable $x(t)$

$$x(t) = x(t-1) + \mathrm N(t); \quad x(0) = x_0\in\mathbb N. \qquad (\mathrm A.11)$$

N(t) is an HPP with $\Pr([\mathrm N(t) = k]) = \exp(-\lambda t)(\lambda t)^k/k!$. Hence, we obtain the transition probability

$$p_{ji} = \Pr(x(t) = j\mid x(t-1) = i) = \Pr([i + \mathrm N(t)] = j) = \Pr(\mathrm N(t) = j - i).$$
(ii) Flip-flop processes
The iteration takes here the form

$$x(t) = (-1)^{\mathrm N(t)}. \qquad (\mathrm A.12)$$

The transition matrix takes the form

$$p_{-1,1} = \Pr(x(t+s) = 1\mid x(s) = -1) = \Pr(\mathrm N(t) = 2k+1) = a;$$
$$p_{1,1} = \Pr(x(t+s) = 1\mid x(s) = 1) = \Pr(\mathrm N(t) = 2k) = \beta,$$

with

$$a = \sum_{k=0}^\infty\exp(-\lambda t)(\lambda t)^{2k+1}/(2k+1)! = \exp(-\lambda t)\sinh(\lambda t);$$
$$\beta = \sum_{k=0}^\infty\exp(-\lambda t)(\lambda t)^{2k}/(2k)! = \exp(-\lambda t)\cosh(\lambda t). \ \blacklozenge$$

Another important application of HPPs is given by a 1D approach to turbulence elaborated by Kerstein [1.17] and [1.18]. This model is based on turbulent advection by a random map. A triplet map is applied to a shear flow velocity profile. An individual event is represented by a mapping that results in a new velocity profile. As a statistical hypothesis the author assumes that the temporal rate of the events is governed by a Poisson process and the parameters of the map can be sampled from a given PD. Although this model was applied to 1D turbulence, its results go beyond this limit and the model has a remarkable power of predicting experimental data.

Exercises

EX 1.1. Calculate the mean value $M_n(s,t) = \langle(B_t - B_s)^n\rangle$, $n\in\mathbb N$.
Hint: Use (1.60) and the standard substitution $y_2 = y_1 + z\sqrt{2(t_2-t_1)}$, where $z$ is a new variable. Show that this yields

$$M_n = \frac{[2(t_2-t_1)]^{n/2}}{\pi}\int_{-\infty}^\infty\exp(-v^2)\,dv\int_{-\infty}^\infty\exp(-z^2)z^n\,dz.$$

The gamma function is defined by (see Ryshik & Gradstein [1.15])

$$\int_{-\infty}^\infty\exp(-z^2)z^n\,dz = \begin{cases}\Gamma((n+1)/2) & \forall n = 2k,\\ 0 & \forall n = 2k+1,\end{cases} \quad k\in\mathbb N; \qquad \Gamma(1/2) = \sqrt\pi, \ \Gamma(n+1) = n\Gamma(n).$$

Verify the result

$$M_{2n} = \pi^{-1/2}[2(t_2-t_1)]^n\,\Gamma((2n+1)/2).$$

EX 1.2. We consider a 1D random variable X with the mean $\mu$ and the variance $\sigma^2$. Show that the latter can be written in the form ($f_x(x)$ is the PD; $\varepsilon > 0$)

$$\sigma^2 = \left(\int_{-\infty}^{\mu-\varepsilon} + \int_{\mu-\varepsilon}^{\mu+\varepsilon} + \int_{\mu+\varepsilon}^{\infty}\right)f_x(x)(x-\mu)^2\,dx.$$

For $x\leq\mu-\varepsilon$ and $x\geq\mu+\varepsilon$ we have $(x-\mu)^2\geq\varepsilon^2$; this yields

$$\sigma^2 \geq \varepsilon^2\left[1 - \int_{\mu-\varepsilon}^{\mu+\varepsilon}f_x(x)\,dx\right] = \varepsilon^2\Pr(|\mathrm X-\mu|\geq\varepsilon),$$

and this gives the Chebyshev inequality its final form

$$\Pr\{|\mathrm X-\mu|\geq\varepsilon\} \leq \sigma^2/\varepsilon^2.$$

The inequality governing martingales (1.28) is obtained with considerations similar to the derivation of the Chebyshev inequality.

EX 1.3.
(a) Show that we can factorize the bivariate GD (1.35a) with zero mean and equal variances ($\langle x\rangle = \langle y\rangle = 0$; $\sigma^2 = a = b$) in the form

$$p(x,y) = \gamma^{-1/2}\,p(x)\,p((y - rx)/\sqrt\gamma); \quad \gamma = 1 - r^2,$$

where $p(x)$ is the univariate GD (1.29).
(b) Calculate the conditional distribution (see (1.17)) of the bivariate GD (1.35a). Hint: ($c$ is the covariance matrix, $\mathrm D = \det c$)

$$p_{1|1}(x\mid y) = \sqrt{c_{yy}/(2\pi\mathrm D)}\,\exp[-c_{yy}(x - yc_{xy}/c_{yy})^2/(2\mathrm D)].$$

Verify that the latter line corresponds to a N$[yc_{xy}/c_{yy},\,c_{xx} - c_{xy}^2/c_{yy}]$ distribution.

EX 1.4. Prove that (1.53) is a solution of the Chapman-Kolmogorov equation (1.52).
Hint: The integrand in (1.52) is given by

$$\mathrm T = p_{1|1}(y_2,t_2\mid y_1,t_1)\,p_{1|1}(y_3,t_3\mid y_2,t_2).$$

Use the substitution

$$u = t_2 - t_1 > 0, \quad v = t_3 - t_2 > 0; \quad t_3 - t_1 = v + u > 0,$$

introduce (1.53) into (1.52) and put

$$\mathrm T = (4\pi^2uv)^{-1/2}\exp(-\mathrm A); \quad \mathrm A = \frac{(y_3-y_2)^2}{2v} + \frac{(y_2-y_1)^2}{2u} = a_2y_2^2 + a_1y_2 + a_0,$$

with $a_k = a_k(y_1,y_3)$, $k = 0,1,2$. Use the standard substitution (see EX 1.1) to obtain

$$\int\mathrm T\,dy_2 = (4\pi^2uv)^{-1/2}\exp[-\mathrm F(y_3,y_1)]\int\exp(-\mathrm K)\,dy_2; \quad \mathrm F = \frac{4a_0a_2 - a_1^2}{4a_2}; \ \mathrm K = a_2\left(y_2 + \frac{a_1}{2a_2}\right)^2,$$

and compare the result of the integration with the right hand side of (1.52).

EX 1.5. Verify that the solution of (1.54) is given by (1.55). Prove also its initial condition.
Hint: To verify the initial condition use the limit $t\to 0$ of the integral

$$\int_{-\infty}^\infty\exp[-y^2/(2t)]\,\mathrm H(y)\,dy,$$

where H(y) is a continuous function. Use the standard substitution in the form $y = \sqrt{2t}\,z$. To verify the solution (1.55) use the same substitution as in EX 1.4.

EX 1.6. Calculate the average $\langle y_1^n(t_1)\,y_2^m(t_2)\rangle$; $y_k = B_{t_k}$, $k = 1,2$; $n,m\in\mathbb N$, with the use of the Markovian bivariate PD (1.60).
Hint: Use a standard substitution of the type given in EX 1.1.

EX 1.7. Verify that the variable $\widetilde B_t$ defined in (1.65) has the autocorrelation $\langle\widetilde B_t\widetilde B_s\rangle = t\wedge s$.
To perform this task we calculate, for a fixed value of $a > 0$,

$$\langle\widetilde B_t\widetilde B_s\rangle = \langle B_{t+a}B_{s+a}\rangle - \langle B_{t+a}B_a\rangle - \langle B_{s+a}B_a\rangle + \langle B_a^2\rangle = s\wedge t + a - a - a + a = s\wedge t.$$

EX 1.8. Prove that the scaled and translated WSs defined in (1.74) are WSs.
Hint: To cover the scaled WSs, put

$$\mathrm H_{u,v} = \frac{1}{ab}M^{(2)}_{a^2u,\,b^2v} = \frac{1}{ab}x_1(a^2u)\,x_2(b^2v).$$

Because of $\langle x_1(\alpha)x_2(\beta)\rangle = 0$ we have $\langle\mathrm H_{u,v}\rangle = 0$. Its autocorrelation is given by

$$\langle\mathrm H_{u,v}\mathrm H_{p,q}\rangle = \frac{1}{(ab)^2}\langle x_1(a^2u)x_1(a^2p)\rangle\langle x_2(b^2v)x_2(b^2q)\rangle = \frac{[(a^2u)\wedge(a^2p)][(b^2v)\wedge(b^2q)]}{(ab)^2} = (u\wedge p)(v\wedge q).$$

For the case of the translated quantity use the considerations of EX 1.7.

EX 1.9. Verify the differential (1.109) of two linear stochastic functions.
Hint: According to (1.89) we have $dB_t^2 = 2B_t\,dB_t + dt$.

EX 1.10. Show that the "inverted" stochastic variables

$$\mathrm Z_t = tB_{1/t}; \quad \mathrm H_{s,t} = st\,M^{(2)}_{1/s,\,1/t},$$

are also a WP ($\mathrm Z_t$) and a WS ($\mathrm H_{s,t}$).

EX 1.11. Use the bivariate PD (1.60) for a Markov process to calculate the two-variable characteristic function of a Brownian motion. Verify the result

$$\mathrm G(u,v) = \langle\exp[\mathrm i(uB_1 + vB_2)]\rangle = \exp\left\{-\frac12[u^2t_1 + v^2t_2 + 2uv(t_1\wedge t_2)]\right\}; \quad B_k = B_{t_k},$$

and compare its 1D limit with (1.58a).

EX 1.12. Calculate the probability P of a particle to stay in the interior of the circle $\mathrm D = \{(x,y)\in\mathbb R^2\mid x^2 + y^2 \leq \mathrm R^2\}$.
Hint: Assume that the components of the vector $(x,y)$ are statistically independent and use the bivariate GD (1.35) with zero mean to calculate

$$\mathrm P[\mathbf B_t\in\mathrm D] = \iint_{\mathrm D}p(x,y)\,dx\,dy.$$

EX 1.13. Consider the Brownian motion on the perimeter of ellipses and hyperbolas:
(i) ellipses: $x(t) = \cos(B_t)$, $y(t) = \sin(B_t)$,
(ii) hyperbolas: $x(t) = \cosh(B_t)$, $y(t) = \sinh(B_t)$.
Use the Ito formula to obtain the corresponding SDEs and calculate $\langle x(t)\rangle$ and $\langle y(t)\rangle$.

EX 1.14. Given the variables

$$\mathrm Z_1 = (B_t^1 - B_t^2)^4 + (B_t^1)^5; \quad \mathrm Z_2 = (B_t^1 - B_t^2)^3 + (B_t^1)^6,$$

where $B_t^1$ and $B_t^2$ are independent WPs, find the SDEs governing $d\mathrm Z_1$ and $d\mathrm Z_2$.

EX 1.15. The random function

$$\mathrm R(t) = [(B_t^1)^2 + \cdots + (B_t^n)^2]^{1/2}$$

is considered as the distance of an n-dimensional vector of independent WPs from the origin. Verify that its differential has the form

$$d\mathrm R(t) = \sum_{k=1}^n B_t^k\,dB_t^k/\mathrm R + \frac{n-1}{2\mathrm R}\,dt.$$

EX 1.16. Consider the stochastic function

$$x(t) = \exp(aB_t - a^2t/2); \quad a = \text{const}.$$

(a) Show that $x(t) = x(t-s)\,x(s)$. Hint: Use (1.65).
(b) Show that $x(t)$ is a martingale.

EX 1.17. The Wiener-Levy Theorem is given by

$$B_t = \sum_{k=1}^\infty\mathrm A_k\int_0^t\psi_k(z)\,dz, \qquad (\mathrm E.1)$$

where $\mathrm A_k$ is a set of IID N(0,1) variables and $\psi_k$; $k = 1,2,\dots$ is a set of orthonormal functions in [0,1],

$$\int_0^1\psi_k(z)\psi_m(z)\,dz = \delta_{km}.$$

Show that (E.1) defines a WP.
Hint: The autocorrelation is given by $\langle B_tB_s\rangle = t\wedge s$. Show that

$$\frac{\partial}{\partial t}\langle B_tB_s\rangle = \frac{\partial}{\partial t}(t\wedge s) = \sum_k\psi_k(t)\int_0^s\psi_k(z)\,dz.$$

Multiply the last line by $\psi_m(t)$ and integrate the resulting equation from zero to unity.

EX 1.18. A bivariate PD of two variables $x,y$ is given by $p(x,y)$.
(a) Calculate the PD of the "new" variable $z$ and its average for (i) $z = x\pm y$, (ii) $z = xy$. Hint: Use (1.41b).
(b) Find the PD $w(u,v)$ of the "new" variables $u = x + y$; $v = x - y$.

EX 1.19. The Ito representation of a given stochastic process $\mathrm F(t,\omega)$ has the form

$$\mathrm F(t,\omega) = \langle\mathrm F(t,\omega)\rangle + \int_0^t f(s,\omega)\,dB_s,$$

where $f(s,\omega)$ is another stochastic process. Find $f(s,\omega)$ for the particular cases
(i) $\mathrm F(t,\omega) = \text{const}$; (ii) $\mathrm F(t,\omega) = B_t^n$, $n = 1,2,3$; (iii) $\mathrm F(t,\omega) = \exp(B_t)$.

EX 1.20. Calculate the PD of the sum of $n$ identically independent HPPs [see (A.8)].
CHAPTER 2

STOCHASTIC DIFFERENTIAL EQUATIONS

There are two classes of ordinary differential equations that contain stochastic influences:

(i) Ordinary differential equations (ODEs) with stochastic coefficient functions and/or random initial or boundary conditions that contain no stochastic differentials. We consider this type of ODE in Chapter 4.3 where we will analyze eigenvalue problems. For these ODEs we can take advantage of all traditional methods of analysis. Here we give only the simple example of a linear 1st order ODE

$$\frac{dx}{dt} = -px; \quad p = p(\omega), \ x(0) = x_0(\omega),$$

where the coefficient function $p$ and the initial condition $x_0$ are $x$-independent random variables. The solution is $x(t) = x_0\exp(-pt)$ and we obtain the moments of this solution in the form $\langle x^m\rangle = \langle x_0^m\exp(-pmt)\rangle$. Assuming that the initial condition and the parameter $p$ are identically independently N(0,$\sigma$) distributed, this yields

$$\langle x^{2m}\rangle = \frac{(2m)!}{2^mm!}\,\sigma^m\exp(2\sigma m^2t^2); \quad \langle x^{2m+1}\rangle = 0. \ \blacklozenge$$

(ii) We focus in this book — with a few exceptions in Chapter 4 — exclusively on initial value problems for ordinary SDEs of the type (1.123) that contain stochastic differentials of the Brownian motions. The initial values may also vary randomly, $x_n(0) = x_n(\omega)$. In this chapter we introduce the analytical tools to reach this goal. However, in many cases we will have to resort to numerical procedures and we perform this task in Chapter 5.

The primary questions are:
(i) How can we solve the equations, or at least approximate the solutions, and what are the properties of the latter?
(ii) Can we derive criteria for the existence and uniqueness of the solutions?

The theory is, however, only in a state of infancy and we will be happy if we are able to answer these questions in the case of the simplest problems. The majority of the knowledge pertains to linear ordinary SDEs; nonlinear problems are covered only in examples. Partial stochastic differential equations (PSDEs) will be covered in Chapter 4 of this book.

2.1. One-Dimensional Equations

To introduce the ideas we begin with two simple problems.

2.1.1. Growth of populations

We consider here the growth of an isolated population. N(t) is the number of members of the population at the instant $t$. The growth (or decay) rate is proportional to the number of members and this growth is, in the absence of stochastic effects, exponential. We introduce additionally a stochastic term that is also proportional to N. We write the SDE first in the traditional way

$$\frac{d\mathrm N}{dt} = r\mathrm N + u\mathrm W(t)\mathrm N; \quad r,u = \text{const}, \qquad (2.1)$$

where W(t) stands for the white noise. It is, however, convenient to write Equation (2.1) in a form analogous to (1.99.1). Thus, we obtain

$$d\mathrm N = a\,dt + b\,dB_t; \quad a = r\mathrm N, \ b = u\mathrm N; \ dB_t = \mathrm W(t)\,dt. \qquad (2.2)$$

Equation (2.2) is a first order homogeneous ordinary SDE for the desired solution $\mathrm N(B_t,t)$. We call the function $a(\mathrm N,t)$ (the coefficient of $dt$) the drift coefficient and the function $b(\mathrm N,t)$ (the coefficient of $dB_t$) the diffusion coefficient. SDEs with drift coefficients that are at most first order polynomials in N and diffusion coefficients that are independent of N are called linear equations. Equation (2.2) is hence a nonlinear SDE.
We solve the problem with the use of the Ito formula. Thus we introduce the function $Y = g(\mathrm N) = \ln\mathrm N$ and apply (1.99.3) (see also (1.129))

$$dY = d(\ln\mathrm N) = (ag_{\mathrm N} + b^2g_{\mathrm{NN}}/2)\,dt + bg_{\mathrm N}\,dB_t = (r - u^2/2)\,dt + u\,dB_t. \qquad (2.3)$$

Equation (2.3) is now an SDE with constant coefficients. Thus we can directly integrate (2.3) and we obtain its solution in the form

$$\mathrm N = \mathrm N_0\exp[(r - u^2/2)t + uB_t]; \quad \mathrm N_0 = \mathrm N(t = 0). \qquad (2.4)$$

There are two classes of initial conditions (ICs):
(i) The initial condition (here the initial population $\mathrm N_0$) is a deterministic quantity.
(ii) The initial condition is a stochastic variable. In this case we assume that $\mathrm N_0$ is independent of the Brownian motion.

The relation (2.4) is only a formal solution and does not offer much information about the properties of the solutions. We obtain more insight from the lowest moments of the formal solution (2.4). Thus we calculate the mean and the variance of N and we obtain

$$\langle\mathrm N(t)\rangle = \langle\mathrm N_0\rangle\exp[(r - u^2/2)t]\,\langle\exp(uB_t)\rangle = \langle\mathrm N_0\rangle\exp(rt), \qquad (2.5)$$

where we used the characteristic function (1.58a). We see that the mean or average (2.5) represents the deterministic limit solution ($u = 0$) of the SDE. We calculate the variance with the use of (1.20) and we obtain

$$\mathrm{Var}(\mathrm N) = \exp(2rt)\left[\langle\mathrm N_0^2\rangle\exp(u^2t) - \langle\mathrm N_0\rangle^2\right]. \qquad (2.6)$$

An important special case is given by the combination of the parameters $r = u^2/2$. This leads to

$$\mathrm N(t) = \mathrm N_0\exp(uB_t); \quad \langle\mathrm N(t)\rangle = \langle\mathrm N_0\rangle\exp(u^2t/2); \quad \mathrm{Var}(\mathrm N) = \langle\mathrm N_0^2\rangle\exp(2u^2t) - \langle\mathrm N_0\rangle^2\exp(u^2t). \qquad (2.7)$$
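A minimal numerical sketch (assuming Python with NumPy; parameters, ensemble size and seed are illustrative choices) integrates (2.2) with the simplest left-point (Euler-Maruyama) scheme and compares the result with the exact solution (2.4) and the mean (2.5):

    import numpy as np

    rng = np.random.default_rng(5)
    r, u, N0 = 1.0, 0.5, 1.0
    T, n, paths = 1.0, 1000, 50_000
    dt = T / n

    N = np.full(paths, N0)
    B_T = np.zeros(paths)
    for _ in range(n):
        dB = np.sqrt(dt) * rng.standard_normal(paths)
        N += r * N * dt + u * N * dB        # Euler-Maruyama step for (2.2)
        B_T += dB

    exact = N0 * np.exp((r - 0.5 * u**2) * T + u * B_T)   # (2.4)
    print(np.max(np.abs(N - exact)))        # pathwise discretization error
    print(N.mean(), N0 * np.exp(r * T))     # ensemble mean vs (2.5)

Note that the scheme uses the left endpoint of each subinterval, so it approximates the Ito solution (2.4); a midpoint evaluation of the noise term would instead approach the Stratonovich solution of Section 2.1.2.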
We generalize now the SDE (2.2) and introduce the effect of two independent white noise processes. Thus, we put (see also (1.129))

$$d\mathrm N = r\mathrm N\,dt + (u\,dB_t^1 + v\,dB_t^2)\mathrm N; \quad u,v = \text{const}. \qquad (2.8)$$

We obtain now in lieu of (2.4) the solution

$$\mathrm N = \mathrm N_0\exp\{[r - (u^2+v^2)/2]t + uB_t^1 + vB_t^2\}. \qquad (2.9)$$

Taking into account the independence of the two Brownian motions yields again the mean value of the exponential growth (2.5), and we obtain for the variance

$$\mathrm{Var}(\mathrm N) = \exp(2rt)\{\langle\mathrm N_0^2\rangle\exp[(u^2+v^2)t] - \langle\mathrm N_0\rangle^2\}. \qquad (2.10)$$

2.1.2. Stratonovich equations

Now we compare the results of the Ito theory in Section 2.1.1 with those derived by the concept of Stratonovich. We use here the classical total differential (1.100). To indicate that we are considering a Stratonovich SDE we rewrite (2.2) in the form

$$d\mathrm N = r\mathrm N\,dt + u\mathrm N\circ dB_t, \qquad (2.11)$$

where the symbol "$\circ$" is used again [see (1.83) or (1.96)]. To calculate the solution of (2.11) we use again the function $g = \ln\mathrm N$ and we obtain

$$d(\ln\mathrm N) = d\mathrm N/\mathrm N = r\,dt + u\circ dB_t. \qquad (2.12)$$

We can directly integrate (2.12) and we obtain

$$\mathrm N = \mathrm N_0\exp(rt + uB_t) \quad\text{with}\quad \langle\mathrm N\rangle = \langle\mathrm N_0\rangle\exp[(r + u^2/2)t]. \qquad (2.13)$$

The calculation of the variance gives

$$\mathrm{Var}(\mathrm N) = \exp(2rt)\{\langle\mathrm N_0^2\rangle\exp(2u^2t) - \langle\mathrm N_0\rangle^2\exp(u^2t)\}.$$

Note that the result of the Stratonovich concept is obtained with the aid of a conventional integration.
2.1.3. The problem of Ornstein-Uhlenbeck and the Maxwell distribution

We consider here the SDE

$$d\mathrm X = (m - \mathrm X)\,dt + b\,dB_t; \quad m,b = \text{const}. \qquad (2.14)$$

Equation (2.14) is a first order inhomogeneous ordinary SDE for the desired solution $\mathrm X(B_t,t)$. Its diffusion coefficient is independent of the variable X and this variable appears only linearly in the drift coefficient. Therefore, we classify (2.14) as a linear equation. We use the relation

$$d\mathrm X + (\mathrm X - m)\,dt = \mathrm e^{-t}\,d[\mathrm e^t(\mathrm X - m)], \qquad (2.15)$$

to define an integrating factor. Thus, we obtain from (2.14) $d[\mathrm e^t(\mathrm X - m)] = b\,\mathrm e^t\,dB_t$. Now we have a relation between just two differentials and we can integrate

$$\mathrm X(t) = m + \mathrm e^{-t}(\mathrm X_0 - m) + b\int_0^t\exp(s - t)\,dB_s. \qquad (2.16)$$

Relation (2.16) is the formal solution of the SDE (2.14). To obtain more information we continue again with the calculation of the first moments. The integral on the right hand side of (2.16) is a non-anticipative function. The mean value is hence given by ($m$ and $\mathrm X_0$ are supposed to be deterministic quantities)

$$\langle\mathrm X(t)\rangle = m + \mathrm e^{-t}(\mathrm X_0 - m); \quad \langle\mathrm X(0)\rangle = \mathrm X_0, \ \langle\mathrm X(\infty)\rangle = m, \qquad (2.17\mathrm a)$$

and it varies monotonically between $\mathrm X_0$ and $m$; it represents the deterministic limit of the solution (2.16). The calculation of the variance yields

$$\mathrm{Var}(\mathrm X) = b^2\langle\mathrm I^2\rangle; \quad \mathrm I = \int_0^t\exp(s - t)\,dB_s.$$

We use (1.86) to calculate $\langle\mathrm I^2\rangle$ and thus we obtain

$$\mathrm{Var}(\mathrm X) = b^2\int_0^t\exp[2(s - t)]\,ds = \frac{b^2}{2}(1 - \mathrm e^{-2t}). \qquad (2.17\mathrm b)$$

In Figures 2.1 and 2.2 we compare the theoretical predictions of the average (2.17a) and the variance (2.17b) with numerical simulations.

Fig. 2.1. Theoretical prediction (2.17a) and numerical simulation of the average of the Ornstein-Uhlenbeck process. The irregular curve belongs to a particular simulation of (2.14). Parameters: $m = 3$; $b = 1$; $\mathrm X_0 = 0$. Numerical parameters: step width $h = 0.005$, and a number Ensem = 200 of repetitions of individual realizations.

Fig. 2.2. Theoretical prediction (2.17b) and numerical simulation of the variance of the Ornstein-Uhlenbeck process. Parameters as in Figure 2.1.

The numerical computations of the moments are performed by repeating individual realizations (Ensem times). We see from Figure 2.1 that the theoretical and numerical averages almost coincide. The values of the moments are calculated from averages of the random numbers $x_j$, $\langle x\rangle = \frac{1}{\mathrm{Ensem}}\sum_{j=1}^{\mathrm{Ensem}}x_j$. However, we find that in the case of the variance a difference arises between numerics and theory and this discrepancy grows with increasing values of the time. In the latter case it is advisable to use a higher value of the parameter Ensem, or to change the routine and use higher order techniques (see Sections 5.5 and 5.6).

More details about the Ornstein-Uhlenbeck problem can be found in Risken [2.1].

Now we consider the Maxwell distribution. We investigate the three-dimensional SDE

$$d\mathbf u = -\mathbf u\,dt + \sqrt{2b}\,d\mathbf B_t, \quad b = \text{const}; \quad \mathbf u = (u_1,u_2,u_3); \ d\mathbf B_t = (dB_t^1,dB_t^2,dB_t^3), \qquad (2.18\mathrm a)$$

where $\mathbf u$ stands for a velocity field and $B_t^k$, $k = 1,2,3$ are independent WPs. Equation (2.18a) degenerates into three individual SDEs of the Ornstein-Uhlenbeck type (2.14). The individual solutions are

$$u_k = u_{k0}\exp(-t) + \sqrt{2b}\exp(-t)\int_0^t\exp(s)\,dB_s^k; \quad u_{k0} = u_k(0).$$

We obtain as mean value $\langle u_k(t)\rangle = u_k(0)\exp(-t)$ and the deviation has the form

$$\mathrm V_k(t) = u_k(t) - u_k(0)\exp(-t) = \sqrt{2b}\exp(-t)\int_0^t\exp(s)\,dB_s^k. \qquad (2.18\mathrm b)$$

We know that the integral in (2.18b) has a GD (see (1.90)). We calculate the moments of $\mathrm V_k$ and we obtain (see (2.17b))

$$\langle\mathrm V_k^2(t)\rangle = \sigma^2 = b(1 - \Lambda^2), \quad \Lambda = \exp(-t). \qquad (2.18\mathrm c)$$

Now we use (1.92) and this yields

$$\langle\mathrm V_k^4(t)\rangle = 3\langle\mathrm V_k^2(t)\rangle^2; \quad \langle\mathrm V_k^6(t)\rangle = 15\langle\mathrm V_k^2(t)\rangle^3; \ \dots.$$

Clearly, the moments are in accordance with those of a GD (see (1.93)). Hence we obtain from (1.29)

$$p(\mathrm V_k) = [2\pi b(1 - \Lambda^2)]^{-1/2}\exp\{-\mathrm V_k^2/[2b(1 - \Lambda^2)]\}.$$

The substitution of (2.18b) into the last line leads to

$$p(u_k,t\mid u_{k0},0) = [2\pi b(1 - \Lambda^2)]^{-1/2}\exp\{-(u_k - u_{k0}\Lambda)^2/[2b(1 - \Lambda^2)]\}. \qquad (2.19\mathrm a)$$

This is the transition probability to pass from $t = 0$ and $u_{k0}$ to the instant $t$ and $u_k$. Furthermore it reproduces for $b = 1$ the stationary Ornstein-Uhlenbeck transition probability (1.56.2).

Next we derive the Maxwell distribution. We realize that all three velocity components have the same variance (2.18c) and hence the same distribution. Furthermore, the individual components are produced by independent WPs. The velocities are hence statistically independent and the transition probability of the three components $(u_1,u_2,u_3)$ is factorized; this yields

$$p(u_1,u_2,u_3,t\mid u_{10},u_{20},u_{30},0) = [2\pi b(1 - \Lambda^2)]^{-3/2}\exp\left\{-\sum_{k=1}^3(u_k - u_{k0}\Lambda)^2/[2b(1 - \Lambda^2)]\right\}. \qquad (2.19\mathrm b)$$

Now we choose for $b$ the Langevin parameter

$$b = k\mathrm T/m, \qquad (2.19\mathrm c)$$

where $m$, $k$ and T stand for the mass of the particle, the Boltzmann constant and the absolute temperature, respectively. The substitution of (2.19c) into (2.19b) yields the non-stationary Maxwell distribution. Better known is, however, the stationary version of the formula that we obtain from (2.19b) and (2.19c) in the limit $t\to\infty$. Thus, we obtain the final form of the stationary Maxwell distribution

$$p(u_1,u_2,u_3) = \left(\frac{m}{2\pi k\mathrm T}\right)^{3/2}\exp[-mu^2/(2k\mathrm T)]; \quad u^2 = u_1^2 + u_2^2 + u_3^2. \qquad (2.19\mathrm d)$$

This distribution was an important result of the theoretical physics of the 19th century. It describes the velocity distribution of an ideal monatomic gas with no internal degrees of freedom and no intermolecular forces. It was found to agree well with experiments (see McQuarrie [2.2]). The Maxwell distribution has been derived in a number of different ways, such as a stationary solution of the Boltzmann transport equation. ♦
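A minimal sketch of the computation behind (2.18c) (assuming Python with NumPy; parameters and seed are illustrative, not those of Figures 2.1 and 2.2) integrates one velocity component of (2.18a) by Euler-Maruyama and compares the ensemble mean and variance with the theory:

    import numpy as np

    rng = np.random.default_rng(6)
    b, u0 = 1.5, 2.0
    T, n, paths = 2.0, 2000, 50_000
    dt = T / n

    u = np.full(paths, u0)
    for _ in range(n):
        u += -u * dt + np.sqrt(2 * b * dt) * rng.standard_normal(paths)

    Lam = np.exp(-T)
    print(u.mean(), u0 * Lam)          # <u(t)> = u0 exp(-t)
    print(u.var(), b * (1 - Lam**2))   # variance (2.18c)

For large $t$ the sample variance approaches $b$, in accordance with the stationary limit that leads to (2.19d).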
2.1.4. The reduction method

We consider again the first order nonlinear SDE

$$dy = a(y,t)\,dt + b(y,t)\,dB_t. \qquad (2.20)$$

We elucidate now conditions that allow us to recast (2.20), with the aid of the transformation

$$x = \mathrm U(y,t), \qquad (2.21)$$

as a reduced, linear SDE

$$dx = \alpha(t)\,dt + \beta(t)\,dB_t, \qquad (2.22)$$

where the new drift and diffusion coefficients are functions of the time variable only. This property of (2.22) allows us to integrate (2.22) directly. We can write (2.22) in the form

$$d\left[x - \int_0^t\alpha(s)\,ds\right] = \beta(t)\,dB_t, \qquad (2.23)$$

where the term $x - \int_0^t\alpha(s)\,ds$ is the integrating factor. There are only two differentials in (2.23). Thus, we obtain by integrating

$$x(t) = x_0 + \int_0^t\alpha(s)\,ds + \int_0^t\beta(s)\,dB_s; \quad x_0 = x(0). \qquad (2.24)$$

Now we want to find the desired condition that allows us to transform (2.20) into (2.22). First we obtain from (1.99) the differential

$$d\mathrm U = \left(\mathrm U_t + a\mathrm U_y + \frac12 b^2\mathrm U_{yy}\right)dt + b\mathrm U_y\,dB_t.$$

A comparison with (2.22) leads to

$$\alpha(t) = \mathrm U_t + a\mathrm U_y + \frac12 b^2\mathrm U_{yy}; \quad \beta(t) = b\mathrm U_y. \qquad (2.25)$$

The derivative $\partial/\partial y$ of the first part of (2.25) yields

$$0 = \frac{\partial}{\partial y}\left(\mathrm U_t + a\mathrm U_y + \frac12 b^2\mathrm U_{yy}\right). \qquad (2.26)$$

The derivatives $\partial/\partial y$, $\partial/\partial t$ applied to the second part of (2.25) lead to

$$b_y\mathrm U_y = -b\mathrm U_{yy}; \quad \dot\beta(t) = b_t\mathrm U_y + b\mathrm U_{yt} = b_t\mathrm U_y - b\frac{\partial}{\partial y}\left(a\mathrm U_y + \frac12 b^2\mathrm U_{yy}\right). \qquad (2.27)$$

We integrate the first part of (2.27) and we obtain

$$\mathrm U_y = \frac\beta b \quad\text{and}\quad \mathrm U_{yy} = -\frac{\beta}{b^2}b_y. \qquad (2.28)$$

The substitution of (2.28) into the second part of (2.27) yields

$$\dot\beta(t) = \beta(t)\gamma(t); \quad \gamma(t) = \frac{b_t}{b} - b\frac{\partial}{\partial y}\left(\frac ab - \frac12 b_y\right). \qquad (2.29)$$

Equation (2.29) represents now a sufficient condition that allows us to reduce the original SDE (2.20) to the directly integrable form (2.22). The integration of (2.29) gives

$$\beta(t) = \beta_0\exp\left[\int_0^t\gamma(s)\,ds\right]; \quad \beta_0 = \text{const}. \qquad (2.30)$$

The substitution of (2.30) into the first part of (2.28) yields the transformation function

$$\mathrm U(y,t) = \beta_0\exp\left[\int_0^t\gamma(s)\,ds\right]\int^y\frac{du}{b(u,t)}. \qquad (2.31)$$

Finally we obtain from the first part of (2.25)

$$\alpha(t) = \mathrm U_t + \beta\left(\frac ab - \frac12 b_y\right). \qquad (2.32)$$

We also mention a condition for the special case where drift and diffusion coefficients of the original Equation (2.20) are time-independent, $a = a(y)$; $b = b(y)$. In this case we obtain from (2.29)

$$\frac{d}{dy}\left[b\frac{d}{dy}\left(\frac ab - \frac12\frac{db}{dy}\right)\right] = 0. \qquad (2.33)$$

Example 1. A simple solution of (2.33) is

$$a = \frac14\frac{d(b^2)}{dy}. \qquad (2.34\mathrm a)$$

In this case we obtain $\alpha = 0$, $\beta = \beta_0$. The solution of the reduced equation is $x = \beta_0B_t + \text{const}$. The transformation U(y,t) is given by the integration of (2.31) and we obtain the solution of the original problem (2.20) with (2.34a) in the implicit form

$$\int_{y_0}^{y}\frac{du}{b(u)} = B_t; \quad y(t = 0) = y_0. \qquad (2.34\mathrm b)$$

Example 2. We consider again the Ornstein-Uhlenbeck problem (2.14). We have $a = m - y$; $b = \text{const}$. These coefficients comply with (2.33) and this yields with (2.29) (and $\beta_0 = 1$)

$$\gamma = 1; \quad \alpha = m\exp(t)/b; \ \beta = \exp(t); \quad x = y\exp(t)/b; \ x_0 = y_0/b.$$

The reduced SDE (2.22) has the form $dx = \exp(t)\left(\frac mb\,dt + dB_t\right)$ and its solution is

$$x = x_0 + \frac mb[\exp(t) - 1] + \int_0^t\exp(s)\,dB_s.$$

We apply the inversion of the transformation and we obtain the solution (2.16). We consider further applications of the reduction method in EX 2.1. ♦

2.1.5. Verification of solutions

We add here comments on how to verify given solutions with the aid of the Ito formula and demonstrate the techniques with examples. Basically there are two types of solutions of SDEs:

(1) The solution depends explicitly on the Brownian motion. An example is the solution of the population growth problem (2.4) that should be written as $\mathrm N = \mathrm N(B_t,t)$. The verification of such a problem is done with the use of the Ito formula (1.98). We leave this verification for EX 2.2 and look at a generalized growth problem defined by the following SDE.

Example. (Generalized population growth problem)
We consider the SDE

$$d\mathrm X = r\mathrm X(k - \mathrm X)\,dt + u\mathrm X\,dB_t; \quad k,r,u = \text{const}. \qquad (2.35\mathrm a)$$
The solution is given by

$$\mathrm X(B_t,t) = \frac{\mathrm F(B_t,t)}{a^{-1} + r\mathrm I(t)}; \quad \mathrm F = \exp[(rk - u^2/2)t + uB_t]; \quad \mathrm I(t) = \int_0^t\mathrm F(B_s,s)\,ds; \quad a = \mathrm X(t = 0). \qquad (2.35\mathrm b)$$

We use (1.98), regarding X as a function of the two variables $(B_t,t)$. Thus we obtain in the first place

$$\mathrm X_t = (rk - u^2/2)\mathrm X - r\mathrm X^2; \quad \mathrm X_{B_t} = u\mathrm X; \quad \mathrm X_{B_tB_t} = u^2\mathrm X,$$

and the introduction of this line into (1.98) reproduces the SDE (2.35a). ♦

(2) The solution of the SDE does not depend explicitly on the Brownian motion and it is given by $\mathrm X = \mathrm X(t)$. An example of this type is the solution of the Ornstein-Uhlenbeck problem (2.16). To verify such a solution we need to differentiate integrals of the type

$$\mathrm J(t) = \int_0^t\mathrm H(s)\,dB_s, \qquad (2.36)$$

where H(s) is a deterministic function. The derivative of the integral in (2.36) is given by

$$\frac{d}{dt}\int_0^t\mathrm H(s)\,dB_s = \mathrm H(t)\frac{dB_t}{dt}. \qquad (2.37)$$

To prove the relation (2.37) we only need to integrate this formula.

Example. (The Brownian bridge)
We consider here the linear SDE

$$d\mathrm X = \frac{r - \mathrm X}{1 - t}\,dt + dB_t; \quad \mathrm X(t = 0) = \mathrm X_0. \qquad (2.38)$$

Its solution (see Gard [2.3]) has the form

$$\mathrm X(t) = \mathrm X_0(1 - t) + rt + (1 - t)\int_0^t\frac{dB_s}{1 - s}. \qquad (2.39)$$

The use of (2.37) yields

$$\frac{d\mathrm X}{dt} = r - \mathrm X_0 + \frac{dB_t}{dt} - \int_0^t\frac{dB_s}{1 - s}. \qquad (2.40)$$

Indeed, solving (2.39) for the integral on the right hand side gives

$$\int_0^t\frac{dB_s}{1 - s} = \frac{\mathrm X - \mathrm X_0(1 - t) - rt}{1 - t},$$

and the substitution of the last line into (2.40) reproduces again the SDE (2.38). ♦

The verification of the solutions to higher-order SDEs that are analyzed in the following sections is performed with the use of the multivariate Ito formula and (1.122) (see EX 2.4).

2.2. White and Colored Noise, Spectra

We begin with the calculation of the white noise autocorrelation function. We start from (1.77) and obtain

$$r(t,s) = \langle\xi_t\xi_s\rangle = \left\langle\frac{dB_t}{dt}\frac{dB_s}{ds}\right\rangle = \frac{\partial^2}{\partial t\,\partial s}\langle B_tB_s\rangle = \frac{\partial^2}{\partial t\,\partial s}(t\wedge s) = \delta(s - t), \qquad (2.41)$$

where $\delta(s - t)$ is the Dirac function.
One equivalent form of (2.41) is

$$r(t = s + \tau,\,s) = r(\tau) = \delta(\tau). \qquad (2.42)$$

The autocorrelation function $r(t,s)$ depends only on the time difference and we say that the white noise is delta-correlated. We can also use (2.41) to justify (1.91) empirically: multiplying (2.41) by the deterministic differentials $dt\,ds$ yields (1.91).

The spectrum $\mathrm S(\omega)$ is defined as the Fourier transform of $r(\tau)$ and we obtain

$$\mathrm S(\omega) = \int_{-\infty}^\infty\exp(\mathrm i\omega\tau)\,r(\tau)\,d\tau. \qquad (2.43)$$

Thus, we obtain for the case of white noise the constant ("white") spectrum

$$\mathrm S(\omega) = 1. \qquad (2.44)$$

The relation (2.44) means that all frequency components of the spectrum have the same importance. This result is analogous to the case of white light, where all frequency components of the light occur with equal weight factors. In many stochastic processes (such as in the case of turbulent flows) we face, however, a banded spectrum, and the spectrum (2.44) is a non-physical idealization.

To construct a more realistic spectrum we modify the Ornstein-Uhlenbeck process (2.14) and use the linear first order SDE

$$dv = -b^2v\,dt + ab^2\,dB_t; \quad a,b = \text{const}; \ v_0 = v(t = 0), \qquad (2.45)$$

where the parameter $b$ is independent of the other parameter $a$. Equation (2.45) is a simple model to calculate the velocity $v$ of a sphere submerged in a viscous fluid. The term $-b^2v$ represents the hydrodynamic resistance, which is, according to the law of Stokes, given by $b^2 = 6\pi\mathrm R\eta/m$ (R, $m$ are the radius and the mass of the sphere, $\eta$ is the viscosity of the fluid). The second term on the right hand side of (2.45) represents the force on the sphere that is caused by collisions with molecules of the fluid. We use an integrating factor and we obtain

$$v = v_0\exp(-b^2t) + ab^2\int_0^t\exp[b^2(s - t)]\,dB_s. \qquad (2.46)$$

The mean value of (2.46) is $\langle v\rangle = v_0\exp(-b^2t)$. We calculate the autocorrelation $r(t,\tau) = \langle\mathrm Z(t+\tau/2)\mathrm Z(t-\tau/2)\rangle$; $\mathrm Z(t) = v(t) - \langle v(t)\rangle$. The solution (2.46) includes an integral of a non-anticipative function and we obtain with (1.87)

$$r(t,\tau) = a^2b^4\int_0^{\gamma}\exp[2b^2(u - t)]\,du; \quad \gamma = (t + \tau/2)\wedge(t - \tau/2) = t - |\tau|/2. \qquad (2.47)$$

The evaluation of (2.47) leads to

$$r(t,\tau) = \mathrm A[\exp(-b^2|\tau|) - \exp(-2b^2t)]; \quad \mathrm A = a^2b^2/2. \qquad (2.48)$$

The asymptotic limit of (2.48) is given by

$$t\to\infty: \quad r(t,\tau)\to\mathrm A\exp(-b^2|\tau|). \qquad (2.49)$$

Thus, the spectrum of the asymptotic autocorrelation has the form

$$\mathrm S(\omega) = 2\mathrm A\int_0^\infty\exp(-b^2\tau)\cos(\omega\tau)\,d\tau = \frac{a^2b^4}{b^4 + \omega^2}. \qquad (2.50)$$

This spectrum is not delta-shaped, but declines monotonically from its maximum at the position zero, with $\mathrm S(0) = a^2$, to $\mathrm S(\pm b^2) = a^2/2$. It is thus the spectrum of a colored process with frequency components of different importance.

However, there are alternative ways to derive a colored spectrum and we follow here the ideas of Stratonovich [2.4]. We introduce a stochastic function

$$\mathrm f(t) = \mathrm A_1(t)\cos(\lambda t) + \mathrm A_2(t)\sin(\lambda t), \qquad (2.51)$$

where $\lambda$ is a given deterministic frequency and the functions $\mathrm A_k$ are amplitude functions that satisfy the SDEs

$$d\mathrm A_k(t) = -\alpha^2\mathrm A_k\,dt + \alpha\,dB_t^k; \quad \mathrm A_k(0) = 0; \ k = 1,2; \ \alpha > 0. \qquad (2.52)$$

In Equation (2.52) we use two independent Brownian motions $B_t^k$, $k = 1,2$. The solution of (2.52) has the form

$$\mathrm A_k(t) = \alpha\exp(-\alpha^2t)\int_0^t\exp(\alpha^2s)\,dB_s^k. \qquad (2.53)$$

The substitution of (2.53) into (2.51) leads to $\langle\mathrm f(t)\rangle = 0$. To calculate the autocorrelation function we put

$$\mathrm f(t+\tau/2)\,\mathrm f(t-\tau/2) = \alpha^2\exp(-2\alpha^2t)\left[\cos(\lambda a_+)\mathrm J_1^+ + \sin(\lambda a_+)\mathrm J_2^+\right]\left[\cos(\lambda a_-)\mathrm J_1^- + \sin(\lambda a_-)\mathrm J_2^-\right], \qquad (2.54)$$

with

$$\mathrm J_k^{\pm} = \int_0^{a_\pm}\exp(\alpha^2s)\,dB_s^k; \quad a_\pm = t\pm\tau/2. \qquad (2.55)$$

Now we take advantage of the independence of $B_t^1$ and $B_t^2$ and we obtain the autocorrelation function

$$r(t,\tau) = \langle\mathrm f(t+\tau/2)\mathrm f(t-\tau/2)\rangle = \alpha^2\exp(-2\alpha^2t)\left[\cos(\lambda a_+)\cos(\lambda a_-)\langle\mathrm J_1^+\mathrm J_1^-\rangle + \sin(\lambda a_+)\sin(\lambda a_-)\langle\mathrm J_2^+\mathrm J_2^-\rangle\right], \qquad (2.56)$$

and we obtain with the use of the integral in Equation (2.47)

$$r(t,\tau) = \frac12\cos(\lambda\tau)\left[\exp(-\alpha^2|\tau|) - \exp(-2\alpha^2t)\right]. \qquad (2.57)$$
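The colored autocorrelation (2.49) is easy to check numerically. The following minimal sketch (assuming Python with NumPy; parameters and seed are illustrative) integrates (2.45) by Euler-Maruyama, discards the transient, and compares the empirical stationary autocorrelation with $\mathrm A\exp(-b^2|\tau|)$:

    import numpy as np

    rng = np.random.default_rng(7)
    a, b = 1.0, 1.2
    dt, n_steps, n_skip = 0.001, 400_000, 50_000
    sig = a * b**2 * np.sqrt(dt)

    xi = rng.standard_normal(n_steps - 1)
    v = np.empty(n_steps); v[0] = 0.0
    for k in range(n_steps - 1):
        v[k+1] = v[k] - b**2 * v[k] * dt + sig * xi[k]

    z = v[n_skip:]                       # stationary segment, <v> -> 0
    A = a**2 * b**2 / 2
    for lag in (0, 200, 500, 1000):      # tau = lag * dt
        emp = np.mean(z[: len(z) - lag] * z[lag:])
        print(emp, A * np.exp(-b**2 * lag * dt))   # empirical vs (2.49)

Since a single path is used, the estimates carry a statistical scatter of a few per cent; the Fourier transform of these values reproduces the Lorentzian shape (2.50).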
Finally, we perform again the limit $t\to\infty$ and we obtain the spectrum

$$\mathrm S(\omega) = \int_{-\infty}^\infty r(\infty,\tau)\exp(\mathrm i\omega\tau)\,d\tau = \frac{\alpha^2(\alpha^4 + \omega^2 + \lambda^2)}{[\alpha^4 + (\omega-\lambda)^2][\alpha^4 + (\omega+\lambda)^2]}. \qquad (2.58)$$

This spectrum has for all values of $\alpha$ and $\lambda$ a stationary point at $\omega = 0$. However, there is a limit curve $\lambda_c(\omega)$ that separates different features. For $\lambda < \lambda_c(\omega)$ we meet at $\omega = 0$ a maximum that is the only stationary point and (2.58) behaves qualitatively like (2.50). For $\lambda > \lambda_c(\omega)$ the spectrum has a minimum at $\omega = 0$ and a maximum at a position $\omega_m(\alpha,\lambda)$, and it decreases to zero for $\omega > \omega_m(\alpha,\lambda)$. Note also that (2.58) degenerates in the limit $\lambda\to 0$ to the spectrum (2.50). From a practical point of view, we can conclude that the determination of a colored spectrum requires — additionally to the Brownian motion — the solution of at least one SDE (see (2.46) or (2.52)), whereas the construction of a white noise needs only the use of the Brownian motion.

2.3. The Stochastic Pendulum

We return here to the problem mentioned in the introduction. We generalize this linearized problem (the nonlinear case is treated in Section 2.5) and introduce a stochastic damping (intensity coefficient $\alpha$), a stochastic frequency (intensity coefficient $\beta$) and a stochastic excitation (intensity coefficient $\gamma$). Thus, we analyze the solutions of the second order inhomogeneous SDE

$$\frac{d^2x}{dt^2} + \alpha\frac{dx}{dt}\frac{dB_t}{dt} + \left(1 + \beta\frac{dB_t}{dt}\right)x = \gamma\frac{dB_t}{dt}; \quad \alpha,\beta,\gamma = \text{const}. \qquad (2.59)$$

The stochastic (or noise) terms that influence the damping and the frequency are multiplied by the variable $x$ or its derivative. Thus these stochastic quantities are referred to as multiplicative noise. By contrast, we see that the excitation in (2.59) is independent of the variable $x$ and its derivatives. This type of stochastic influence is hence called additive noise. Equation (2.59) is hence in general a nonlinear two-dimensional SDE.

We write now the SDE (2.59) as a canonical system of two first order SDEs

$$dx_1 = x_2\,dt; \quad dx_2 = -x_1\,dt + (\gamma - \alpha x_2 - \beta x_1)\,dB_t. \qquad (2.60)$$

To solve (2.60) we write it in its vectorial form

$$d\mathbf x = \mathrm A\mathbf x\,dt + \mathbf b\,dB_t; \quad \mathrm A = \begin{pmatrix}0 & 1\\ -1 & 0\end{pmatrix}; \quad \mathbf b = \begin{pmatrix}0\\ \gamma - \alpha x_2 - \beta x_1\end{pmatrix}. \qquad (2.61)$$

We introduce an integrating factor and set

$$d\mathbf x - \mathrm A\mathbf x\,dt = \exp(\mathrm At)\,d[\exp(-\mathrm At)\mathbf x], \qquad (2.62)$$

where the exponential matrix is defined by a Taylor expansion

$$\exp(\mathrm At) = \sum_{k=0}^\infty\frac{t^k}{k!}\mathrm A^k = \mathrm I + \mathrm At + \frac12\mathrm A^2t^2 + \cdots; \quad d[\exp(\mathrm At)] = \mathrm A\exp(\mathrm At)\,dt, \qquad (2.63)$$

where I is the identity matrix. Thus, we obtain the SDE

$$d[\exp(-\mathrm At)\mathbf x] = \exp(-\mathrm At)\mathbf b\,dB_t, \qquad (2.64)$$

where the term $\exp(-\mathrm At)$ represents the integrating factor. An integration yields

$$\mathbf x = \exp(\mathrm At)\mathbf x_0 + \int_0^t\exp[\mathrm A(t-s)]\,\mathbf b(s)\,dB_s; \quad \mathbf x_0 = \mathbf x(t = 0). \qquad (2.65)$$

We assume that the initial condition $\mathbf x_0$ is a deterministic quantity and we obtain from (2.65) the mean value

$$\langle\mathbf x(t)\rangle = \exp(\mathrm At)\mathbf x_0; \quad \mathbf x_0 = (x_0,y_0), \qquad (2.66)$$

which is again the solution of the deterministic limit of the SDE (2.59). We obtain from (2.63)

$$\exp(\mathrm At) = \begin{pmatrix}\cos t & \sin t\\ -\sin t & \cos t\end{pmatrix}. \qquad (2.67)$$

Thus, we infer from (2.65)

$$x_1 = x_0\cos t + y_0\sin t + \int_0^t\sin(t-s)[\gamma - \alpha x_2(s) - \beta x_1(s)]\,dB_s, \qquad (2.68.1)$$

$$x_2 = -x_0\sin t + y_0\cos t + \int_0^t\cos(t-s)[\gamma - \alpha x_2(s) - \beta x_1(s)]\,dB_s. \qquad (2.68.2)$$

Thus we infer from (2.68) that the solution of the stochastic pendulum is given in general by a set of two coupled Volterra integral equations.

2.3.1. Stochastic excitation

Here we assume $\alpha = \beta = 0$; $\gamma\neq 0$; Equation (2.59) then contains only additive noise and the problem is linear. The solution (2.68) is thus no longer an integral equation, but involves only a stochastic integral. We obtain for the mean values

$$\langle x_1\rangle = x_0\cos t + y_0\sin t; \quad \langle x_2\rangle = -x_0\sin t + y_0\cos t, \qquad (2.69)$$

and we calculate the autocorrelation of the $x_1$ component

$$r_{11}(t,\tau) = \langle z(t+\tau/2)z(t-\tau/2)\rangle; \quad z(t) = x_1(t) - \langle x_1(t)\rangle.$$

The application of the rule (1.87) to the last line yields (see (2.47))

$$r_{11}(t,\tau) = \gamma^2\int_0^{\sigma}\sin(t+\tau/2-u)\sin(t-\tau/2-u)\,du; \quad \sigma = t - |\tau|/2. \qquad (2.70)$$

The evaluation of this integral leads to

$$r_{11}(t,\tau) = \frac{\gamma^2}{4}\left\{2(t - |\tau|/2)\cos\tau + \sin|\tau| - \sin(2t)\right\}. \qquad (2.71)$$
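Setting $\tau = 0$ in (2.71) gives the variance $\mathrm{Var}\,x_1(t) = \frac{\gamma^2}{4}(2t - \sin 2t)$, which is easy to test by simulation. A minimal Euler-Maruyama sketch for (2.60) with $\alpha = \beta = 0$ (assuming Python with NumPy; ensemble size, step width and seed are illustrative):

    import numpy as np

    rng = np.random.default_rng(8)
    gam, x0, y0 = 0.5, 1.0, 0.0
    T, n, paths = 5.0, 5000, 20_000
    dt = T / n

    x1 = np.full(paths, x0); x2 = np.full(paths, y0)
    for _ in range(n):
        dB = np.sqrt(dt) * rng.standard_normal(paths)
        x1, x2 = x1 + x2 * dt, x2 - x1 * dt + gam * dB   # (2.60), additive noise

    print(x1.var(), gam**2 * (2*T - np.sin(2*T)) / 4)    # (2.71) at tau = 0

The agreement is limited only by the $O(h)$ bias of the scheme and the statistical error of the ensemble.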
2.3.2. Stochastic damping (β = γ = 0; α ≠ 0)

We assume here β = γ = 0; α ≠ 0 and the SDE (2.59) contains a multiplicative noise term. We obtain from (2.68)

x₁ = x₀ cos t + y₀ sin t − α ∫₀^t sin(t − s) x₂(s) dBs,  (2.72.1)

x₂ = −x₀ sin t + y₀ cos t − α ∫₀^t cos(t − s) x₂(s) dBs.  (2.72.2)

We recall that the solution of the problem with additive stochastic excitation was governed only by a stochastic integral. We see that (2.72) represents an integral equation. This means that multiplicative noise terms influence a process in a much more drastic way than additive stochastic terms do. The mean value is again given by (2.69). However, the determination of the variance is an interesting problem. We use

z_k = x_k − ⟨x_k⟩;  V_jk = ⟨z_j(t)z_k(t)⟩;  j, k = 1, 2.  (2.73)

Thus we obtain from (2.72)

V₁₁(t) = s ∫₀^t sin²(t − u) K(u) du;  K(u) = ⟨x₂²(u)⟩;  s = α²;

V₁₂(t) = V₂₁(t) = (s/2) ∫₀^t sin[2(t − u)] K(u) du;  (2.74)

V₂₂(t) = s ∫₀^t cos²(t − u) K(u) du.

We must determine all four components of the variance matrix and we begin with

V₂₂(t) = K(t) − ⟨x₂⟩² = s ∫₀^t cos²(t − u) K(u) du.  (2.75)

Equation (2.75) is a convolution integral equation that governs the function K(t). It is convenient to solve (2.75) with an application of the Laplace transform

H(p) = L K(t) = ∫₀^∞ exp(−pt) K(t) dt.  (2.76)
The Laplace transform given by (2.76) and the use of the convolution theorem

L ∫₀^t A(t − s)B(s) ds = [L A(t)][L B(t)],  (2.77)

lead to

H(p) = L(−x₀ sin t + y₀ cos t)² + sH(p) L cos²t
     = x₀² L sin²t − x₀y₀ L sin 2t + [y₀² + sH(p)] L cos²t.

To reduce the algebra we consider only the special case x₀ = 0, y₀ ≠ 0. This yields

H(p) = [y₀² + sH(p)](p² + 2)/[p(p² + 4)]  (2.78)

and solving this for the function H(p) we obtain

H(p) = (p² + 2)y₀²/F(p, s);  F(p, s) = p(p² + 4) − s(p² + 2).  (2.79)

To invert the Laplace transform we need to know the zeros of the function F(p, s). In EX 2.5 we give a simple special case of the parameter s that allows us to invert (2.79). However, our focus is the determination of weak noisy damping. Thus, we consider the limit s → 0⁺. In this case we can approximate the zeros of F(p, s) and this leads to

F(p, s) = [(p − s/4)² + 4](p − s/2) + O(s²)  ∀ s = α² → 0.  (2.80)

A decomposition of (2.79) into partial fractions yields with (2.80)

H(p) = y₀²{(1/2)/(p − s/2) + [(p − s/4)/2 + 5s/8]/[(p − s/4)² + 4]}.  (2.81)

With (2.81) and (2.75) we obtain the inversion of the Laplace transform (2.76)

V₂₂(t) = K(t) − y₀² cos²t
       = (y₀²/2) exp(st/2){1 + exp(−st/4)[cos(2t) + 5s sin(2t)/8]} − y₀² cos²t.  (2.82)

Note that the variance (2.82) has the order of unity for O(st) < O(1).
We give in Figures 2.3 and 2.4 a comparison of the theoretically calculated average (2.69) and the variance (2.82) with numerical simulations. We note again that the theoretically predicted and the numerical averages almost coincide. However, we encounter a discrepancy between the theoretical and the numerical values of the variance.

[Fig. 2.3. Theoretical prediction (2.69) and numerical simulation of the average of Equation (2.72). Parameters: α = 0.3; x₀ = 0; y₀ = 1. Numerical parameters: step width h = 0.005, and a number Ensem = 200 of repetitions of individual realizations.]

[Fig. 2.4. Theoretical prediction (2.82) and numerical simulation of the variance of Equation (2.72). Parameters as in Figure 2.3.]
The latter discrepancy might be caused by a value of α that violates the condition s → 0 in Equation (2.80). However, we will repeat the computation of the variance in Chapter 5 with the use of a different numerical routine.

We assign the calculation of the two other components of the variance matrix to EX 2.6. We also shift the solution of the pendulum problem with a noisy frequency to EX 2.7.
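A simulation of the kind used for Figures 2.3 and 2.4 can be sketched in a few lines. The following code (not from the original text; it reuses the parameters quoted in the caption of Figure 2.3) integrates the stochastic-damping system dx₁ = x₂ dt, dx₂ = −x₁ dt − αx₂ dBt with the Euler–Maruyama scheme and compares the sampled variance of x₂ with the weak-noise result (2.82):

    import numpy as np

    # Ensemble simulation of dx1 = x2 dt, dx2 = -x1 dt - alpha x2 dB_t.
    rng = np.random.default_rng(1)
    alpha, h, T, ensem = 0.3, 0.005, 10.0, 200     # as in Fig. 2.3
    x0, y0 = 0.0, 1.0
    x1 = np.full(ensem, x0)
    x2 = np.full(ensem, y0)
    n = int(T / h)
    for _ in range(n):
        dB = rng.normal(0.0, np.sqrt(h), ensem)
        x1, x2 = x1 + x2 * h, x2 - x1 * h - alpha * x2 * dB
    s, t = alpha**2, n * h
    v22 = (y0**2 / 2) * np.exp(s * t / 2) * (
        1 + np.exp(-s * t / 4) * (np.cos(2 * t) + 5 * s * np.sin(2 * t) / 8)
    ) - y0**2 * np.cos(t)**2
    print("sampled V22(T):", x2.var(), "  theory (2.82):", v22)

With only Ensem = 200 realizations the sampled variance scatters noticeably around (2.82), which is consistent with the discrepancy visible in Figure 2.4.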
2.4. The General Linear SDE

We consider here the general linear inhomogeneous n-dimensional SDE

dx_j = [A_jk(t)x_k + a_j(t)] dt + b_jr(t) dBtʳ;  j, k = 1, …, n;  r = 1, …, m,  (2.83)

where the vector a_j(t) represents an inhomogeneous term. Note that the diffusion coefficients are only time-dependent; (2.83) thus represents a problem with additive noise. Multiplicative noise problems are (for reasons discussed in Section 2.3) included in the category of nonlinear SDEs. To introduce a strategy to solve (2.83) we consider for the moment the one-dimensional limit of (2.83). Thus we analyze the SDE

dx = [A(t)x + a(t)] dt + b(t) dBt.  (2.84)

To find an integrating factor we use the integral

I(t) = exp[−∫₀^t A(s) ds].  (2.85)

We find with (2.85) d[x(t)I(t)] = I(t)[a(t) dt + b(t) dBt] and after an integration we obtain

x(t) = [x₀ + ∫₀^t I(s)a(s) ds + ∫₀^t I(s)b(s) dBs]/I(t);  x₀ = x(t = 0).  (2.86)

Thus, we determine the mean value

m(t) = [x₀ + ∫₀^t I(s)a(s) ds]/I(t),  (2.87)

and this is again the solution of the deterministic limit of the SDE (2.84). The covariance has the form

c(t, u) = ⟨[x(t) − m(t)][x(u) − m(u)]⟩ = [1/(I(t)I(u))] ∫₀^a I²(s)b²(s) ds;  a = u ∧ t.  (2.88)

To solve the n-dimensional SDE we start with the homogeneous deterministic limit of (2.83)

ẋ_j = A_jk(t)x_k;  x_m(0) = x₀m.  (2.89)

We use the matrix of the fundamental solutions, Φ, and we can generalize (2.89)

dΦ_km/dt = A_kj(t)Φ_jm;  Φ_km(0) = δ_km.  (2.90)

To compare the solutions of (2.90) with the one-dimensional case, we rewrite the solution (2.86) in the form

x(t) = Φ₁₁(t){x₀ + ∫₀^t Φ₁₁⁻¹(s)[a(s) ds + b(s) dBs]};  Φ₁₁⁻¹(s) = I(s).  (2.86)

Hence, the n-dimensional solution of (2.83) is

x_m = Φ_mk(t){x₀k + ∫₀^t Φ_kr⁻¹(s)[a_r(s) ds + b_ru(s) dBs^u]}.  (2.91)

We prove (2.91) in EX 2.9 with the use of the Itô formula (1.127). The mean value is again the solution of the deterministic problem. It is given by the vector

m_s(t) = Φ_sk(t){x₀k + ∫₀^t Φ_kr⁻¹(s)a_r(s) ds},  (2.92)

and the covariance matrix takes the symbolic form

c_ms(t, u) = Φ_mk(t)Φ_sv(u) ∫₀^{u∧t} Φ_kr⁻¹(s)b_rσ(s)Φ_vl⁻¹(s)b_lσ(s) ds,  (2.93)

where the symbol ∧ has the same meaning as in (2.47). More details about linear SDEs can be found in Kloeden et al. [2.5].
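For the scalar case the formulas (2.85)–(2.88) can be evaluated by straightforward quadrature. The sketch below does this for an illustrative choice of coefficient functions (they are not taken from the text) and prints the mean (2.87) and the variance c(t, t) from (2.88):

    import numpy as np

    # Quadrature of (2.85)-(2.88) for dx = [A(t)x + a(t)]dt + b(t)dB_t.
    def A(t): return -1.0 * np.ones_like(t)    # illustrative coefficients
    def a(t): return 0.5 * np.sin(t)
    def b(t): return 0.4 * np.ones_like(t)

    h, T, x0 = 1e-3, 4.0, 1.0
    t = np.arange(0.0, T, h)
    I = np.exp(-np.cumsum(A(t)) * h)           # integrating factor (2.85)
    m = (x0 + np.cumsum(I * a(t)) * h) / I     # mean (2.87)
    v = np.cumsum((I * b(t))**2) * h / I**2    # variance c(t,t) from (2.88)
    print("m(T) =", m[-1], "  Var[x(T)] =", v[-1])

For the constant coefficients used here the variance approaches the stationary value b²/(2|A|) = 0.08, in agreement with the one-dimensional limit of (2.95) below.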
It was shown in Section 1.8 that the stochastic integral K(t) = ∫₀^t G(s) dBs is N[0, ∫₀^t G²(s) ds] distributed. A simple generalization to the case of a stochastic vector integral

K_j(t) = ∫₀^t b_jr(s) dBsʳ;  j = 1, …, n;  r = 1, …, m,

shows that the latter integral is N[0, ∫₀^t b(s)bᵀ(s) ds] distributed (bᵀ is the transpose of b, see EX 2.8). Thus, the solution of the general linear SDE (2.83) can be written in the symbolic form

Y(t) = x(t) − Φ(t){x₀ + ∫₀^t Φ⁻¹(s)a(s) ds} = Φ(t) ∫₀^t Φ⁻¹(s)b(s) dBs.  (2.94)

The right hand side of (2.94) is Gaussian distributed, and this implies that the left hand side, and hence the solution x(t) itself, has the same distribution.

An additional point of interest is the question whether this solution exhibits a stationary normal distribution. It was proved by Arnold [1.2] that this distribution is stationary, provided that the equation is homogeneous (a = 0), the matrices A and b are time-independent, all eigenvalues of the matrix A have negative real parts, and the initial value vector is constant or normally distributed. Then we obtain for the mean value and the covariance in symbolic form

⟨x(t)⟩ = const,
⟨x²(t)⟩ = ∫₀^∞ exp(At) b bᵀ exp(Aᵀt) dt.  (2.95)

In anticipation of Chapter 3, we use as an example the stationary Fokker–Planck equation for the 1D SDE

dx = −Ax dt + b dBt;  A, b = const > 0,

where the negative sign of the drift coefficient is the 1D remainder of the condition of negative real parts of the eigenvalues of the matrix A. The stationary Fokker–Planck equation (3.43a) governing the PD p(x) reads

A(xp)′ + b²p″/2 = 0;  ′ = d/dx;  ∫_{−∞}^{∞} p(x) dx = 1.
A first integral leads to

p′ = −(2A/b²)xp + γ;  γ = const.

We put γ = 0 and this leads to

p = (2πa²)^{−1/2} exp[−x²/(2a²)];  a² = b²/(2A),

where we applied the normalization condition of a normal distribution with zero mean and the variance a²; these values coincide with the 1D limit of (2.95).

2.5. A Class of Nonlinear SDEs

We begin with a class of multiplicative one-dimensional problems

dx = a(x, t) dt + h(t)x dBt.  (2.96)

Typical for this SDE is a nonlinear drift coefficient and a diffusion coefficient proportional to x: b(x, t) = h(t)x. We define an integrating factor with the function

G(Bt, t) = exp[−∫₀^t h(s) dBs + (1/2)∫₀^t h²(s) ds].  (2.97a)

An integration by parts in the exponent gives with (1.105)

G(Bt, t) = exp{−h(t)Bt + ∫₀^t [h′(s)Bs + h²(s)/2] ds}.  (2.97b)

The differential dG is performed with the Itô formula (1.98), which yields Gt = h²G/2; G_Bt = −hG; G_BtBt = h²G. Thus we obtain

dG = Gh² dt + b₂ dBt,  b₂ = −hG,  (2.98a)

and the application of (1.109) yields

d(xG) = G(a dt + hx dBt) + Gx(h² dt − h dBt) − Gxh² dt = Ga dt.  (2.98b)

Now we introduce the new variable

Y = xG.  (2.99)
The substitution into (2.98) yields

dY/dt = Ga(x = Y/G, t).  (2.100)

It is important to realize that (2.100) is a deterministic equation since the differential dBt is missing.

Example 1.

dx = (1/x) dt + ax dBt;  a = const.

We see that with a(x, t) = 1/x, h = a this example belongs to the class (2.96). The function G is given by G = exp(a²t/2 − aBt). Thus we obtain from (2.100)

Y dY/dt = G²  ⟹  Y² = const + 2∫₀^t exp(a²s − 2aBs) ds.

Finally we obtain after the inversion of (2.99)

x = exp(−a²t/2 + aBt)√D;  D = x₀² + 2∫₀^t exp(a²s − 2aBs) ds.

Example 2.

We consider a generalization of the problem of population growth (2.2)

dx = rx(1 − x) dt + ax dBt;  a, r = const.

Note that this SDE coincides with the SDE (2.35a) for k = 1. The function G is defined as in the previous example and (2.100) yields

dY/dt = rY(G − Y)/G.

This is a Bernoulli differential equation. We solve it with the substitution Y = 1/z. Thus we obtain the linear problem ż = −rz + r/G, where dots indicate time derivatives. An integration yields

x = 1/(zG) = U(t)/[1/x₀ + rI(t)],

with

U(t) = exp[(r − a²/2)t + aBt];  I(t) = ∫₀^t U(s) ds.
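Both results are easy to test numerically along a single sampled Brownian path. The sketch below (not part of the text; the parameter values are arbitrary) evaluates the closed form of Example 2 by quadrature of I(t) and compares it with a direct Euler–Maruyama integration of the SDE:

    import numpy as np

    # Example 2 along one Brownian path: closed form vs. Euler-Maruyama.
    rng = np.random.default_rng(2)
    r, a, x0, h, T = 1.0, 0.3, 0.1, 1e-4, 5.0
    n = int(T / h)
    dB = rng.normal(0.0, np.sqrt(h), n)
    B = np.concatenate([[0.0], np.cumsum(dB)])
    t = np.linspace(0.0, T, n + 1)
    U = np.exp((r - a**2 / 2) * t + a * B)
    I = np.concatenate([[0.0], np.cumsum(U[:-1]) * h])   # I(t) by quadrature
    x_closed = U / (1 / x0 + r * I)
    x = x0
    for k in range(n):                                    # Euler-Maruyama
        x += r * x * (1 - x) * h + a * x * dB[k]
    print("closed form x(T):", x_closed[-1], "  Euler-Maruyama:", x)

For the small step width chosen here the two values agree to several digits; the agreement degrades as h is increased, as expected for the strong order 1/2 of the Euler–Maruyama scheme.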
The class (2.96) of one-dimensional SDEs is of importance for the analysis of bifurcation problems and we will use the technique given here in Chapter 3. It is, however, easy to generalize the ideas to n-dimensional SDEs. Thus we introduce the problem

dx_k = a_k(x, t) dt + h_k(t)x_k dBt;  k = 1, …, n.  (2.101)

Note that we do not use the summation convention. We introduce the integrating factor (see (2.97a))

G_k(Bt, t) = exp[−∫₀^t h_k(s) dBs + (1/2)∫₀^t h_k²(s) ds]  (2.102)

and we obtain the differential dG_k(Bt, t) = (h_k² dt − h_k dBt)G_k. Thus we have

d(G_k x_k) = a_k G_k dt.  (2.103)

Thus, we obtain the system of n deterministic differential equations

dY_k/dt = G_k a_k(x₁ = Y₁/G₁, …, x_n = Y_n/G_n, t);  Y_k = G_k x_k.  (2.104)

Example: The Nonlinear Pendulum.

We consider nonlinear pendulum oscillations in the presence of stochastic damping. The corresponding SDE is (compare the linear counterpart (2.59))

d²x/dt² + α ξt dx/dt + sin x = 0.

Passing to a system of first order equations leads to

dx₁ = x₂ dt;  dx₂ = −αx₂ dBt − sin x₁ dt.  (2.105)

First, we find that the fixed points (FPs) of the deterministic problem (α = 0), given by P₁ = (0, 0) (center) and P₂,₃ = (±π, 0) (saddle points), are also FPs of (2.105). To find the type of the FPs of the SDE we linearize (2.105) and we obtain

dξ₁ = ξ₂ dt;
dξ₂ = −αξ₂ dBt − cos(x₁^f)ξ₁ dt;  x₁^f = 0, ±π.  (2.106)
In the case of the deterministic center (x₁^f = 0) we rediscover the linear problem (2.59) with β = γ = 0. Its variance V₂₂ is already given by (2.82). To be complete we should also calculate the variances V₁₁ and V₁₂. An inspection of the Laplace transforms of the first and second integral equations (2.74) shows

L V₁₁ = α² H(p) L sin²t;  L V₁₂ = (α²/2) H(p) L sin(2t).

These additional variances exhibit the same structure as (2.82). This means that for α²t < O(1) the perturbations remain small and the FP (the deterministic center) is still an elliptic FP.

It is also clear that the deterministic saddles become even more unstable under the influence of stochastic effects. It is convenient to study the possibility of the occurrence of heteroclinic orbits. To consider this problem we first observe that (2.105) belongs to the class (2.101) of SDEs. We obtain G₁ = 1; G₂ = G = exp(αBt + α²t/2), and we use the new variables

y₁ = x₁;  y₂ = Gx₂.  (2.107)

This yields

ẏ₁ = y₂/G;  ẏ₂ = −G sin y₁.  (2.108)

These are the canonical equations of a one-degree-of-freedom non-autonomous system with the Hamilton function

H = p²/(2G) − G cos q;  q = y₁,  p = y₂.  (2.109)

In the deterministic limit (α = 0) we obtain G = 1 and (2.109) reduces to the deterministic Hamilton function H = p²/2 − cos q, where deterministic paths are given by H = H(q₀, p₀) with q₀, p₀ as initial coordinates (see e.g. Hale and Kocak [2.6]).

We return now to the stochastic problem. There, it could be argued that we may return to the "old" coordinates

p = Gx₂;  q = x₁,  (2.110)
the substitution of which into (2.109) gives the transformed Hamiltonian (the Kamiltonian)

K = G(x₂²/2 − cos x₁).  (2.111)

However, it is clear that (2.108) are not the canonical equations corresponding to the Kamiltonian (2.111). The reason for this inconsistency is that (2.110) is not a canonical transformation. To see this we write the old coordinates in the form x₁ = x₁(q, p) = q; x₂ = x₂(q, p) = p/G. A transformation is canonical if its Poisson bracket has the value unity. Yet we obtain in our case

(∂x₁/∂q)(∂x₂/∂p) − (∂x₁/∂p)(∂x₂/∂q) = 1/G ≠ 1.

Hence, we return to the Hamiltonian (2.109) and continue with the investigation of its heteroclinic orbits. The latter are defined by H(q, p, t) = H(±π, 0, t) and this leads to

p = ±2G cos(q/2).  (2.112)

In the stochastic case we must in general evaluate the heteroclinic orbits numerically. However, it is interesting to note that we can also generalize the concept of a parametrization of the heteroclinic path, which in the case of the stochastic pendulum is given by

p = ±2G sech t;  q = 2 arctan(sinh t).  (2.113)

The variables in Equation (2.113) comply with (2.112). To verify that p and q tend to the saddle points we obtain asymptotically q → π sign(t) for |t| → ∞ and this gives the correct saddle coordinates. As to the momentum we obtain

p → ±4 exp(αBt + α²t/2 − |t|) for |t| → ∞.  (2.114)

Using (1.58a) we obtain the mean of the momentum

⟨p⟩ → ±4 exp(α²t − |t|) for t → ∞.  (2.115)

To reach the saddle (π, 0) in the mean we must comply with α² < 1. If we consider the variance of p we obtain

⟨p²⟩ → 16 exp(3α²t − 2|t|).
[Fig. 2.5. The motion of the oscillator (2.105) in the phase space. The non-noisy curve is the corresponding deterministic limit cycle. Parameters: α = 0.3, (x₀, y₀) = (π/3, 1), h = 0.005.]

The latter relation tells us that to reach the saddle (π, 0), we need to ask for α² < 2/3. Thus, we can conclude that the higher the level of the considered stochastic moment, the lower we have to choose the value of the intensity parameter α that allows the heteroclinic orbit to reach, for t → ∞, the right hand side saddle. Finally, we show in Figure 2.5 for one specific choice of initial conditions a particular numerical realization of the solution of (2.105) in the phase space. We juxtapose this stochastic solution with the corresponding deterministic solution. Thus, we see that the stochastic solution remains close to the deterministic limit cycle only in the very first phase of its evolution from the initial conditions. For later times the destructive influence of the multiplicative noise becomes dominant and any kind of regularity of the stochastic solution disappears, no matter how weak the intensity of the noise.

2.6. Existence and Uniqueness of Solutions

The subject considered in this section is still under intensive research and we present here only a few available results for one-dimensional
SDEs. First, we consider a deterministic ODE

ẋ = a(x);  x, a ∈ ℝ.

A sufficient condition for the existence and uniqueness of its solution is the Lipschitz condition |a(x) − a(y)| ≤ K|x − y|; K > 0, where K is the Lipschitz constant. There is, however, a serious deficiency because the solutions may become unbounded after a small elapse of time. A simple demonstration of this is given by an inspection of the solutions of the equation ẋ = x². Here, we have |x² − y²| ≤ K|x − y| and K is the maximum of |x + y| in the (x, y)-plane under consideration. The solution of this ODE is x(t) = x₀/(1 − x₀t), which becomes unbounded after the elapse of the blow-up time t_b = 1/x₀.

To ensure the global existence of the solutions of an ODE (the existence for all times after the initial time) we need, in addition to the Lipschitz condition, the growth bound condition

|a(x)| ≤ L(1 + |x|),  L > 0;  ∀ t ≥ t₀,

where L is a constant. Now, we consider as an example ẋ = x^k, and we obtain from the growth bound condition |x^k| ≤ L(1 + |x|); we can satisfy this condition only for k ≤ 1.

We turn to the general class of one-dimensional SDEs given by (2.20). Now, the coefficients a and b must satisfy the Lipschitz and growth bound conditions to guarantee existence, uniqueness and boundedness. This means that the drift and diffusion coefficients must satisfy the stochastic Lipschitz condition

|a(y, t) − a(z, t)| + |b(y, t) − b(z, t)| ≤ K|y − z|;  K > 0,  (y, z) ∈ ℝ²,  (2.116)

and the stochastic growth bound condition

|a(y, t)| + |b(y, t)| ≤ L(1 + |y|),  L > 0;  ∀ t ≥ t₀,  y ∈ ℝ.  (2.117)

Example 1.

dy = −(1/2) exp(−2y) dt + exp(−y) dBt.

This SDE does not satisfy the growth bound condition for y < 0. Thus we have to expect that the solution will blow up. To verify this we use the reduction method. The SDE complies with
(2.34a) and we obtain the solution y = ln[Bt + exp(y₀)]. This solution blows up once the condition Bt = −exp(y₀) is met for the first time.

Example 2.

As a counter-example where the SDE satisfies the growth bound condition, we reconsider EX 2.1(iii) and we obtain

(1/r)∫_{y₀}^{y} du/cos u = Bt,

and this yields

y(t) = −π/2 + 2 arctan[z exp(rBt)];  z = tan(y₀/2 + π/4).

These examples suggest that we should eliminate the drift coefficient. To achieve this we start from (2.20) (written for the variable x), we use the transformation y = g(x) and the application of Itô's formula (1.99.3) gives

dy = [a(x)g′(x) + b²(x)g″(x)/2] dt + b(x)g′(x) dBt.  (2.118)

Hence, we can eliminate the drift coefficient of the transformed SDE if we put g″(x)/g′(x) = −2a(x)/b²(x), and the integration of this equation leads to the transformation

g(x) = ∫_c^x exp{−2∫_c^u [a(v)/b²(v)] dv} du,  (2.119)

where the parameter c is appropriately selected. Thus we obtain from (2.118) a transformed SDE dy = b(x)g′(x) dBt; x = g⁻¹(y), where we have replaced x by the inverse of the function g. Hence, we obtain

y = y₀ + ∫₀^t g′(x_s)b(x_s) dBs;  x_s = x(s).  (2.120)

Karatzas and Shreve [2.7] showed that although the original process x may explode in time, the process y will not explode.

Example (Bessel process)

We consider the SDE

dx = [(a − 1)/(2x)] dt + dBt;  a = 2, 3, ….  (2.121)
Thus (2.119) yields with c = 1

g(x) = ln x for a = 2, and g(x) = (x^{2−a} − 1)/(2 − a) for a ≥ 3.

Hence, we obtain for a = 2: x = exp(y); g′ = exp(−y). Thus we get the solution

y = y₀ + ∫₀^t exp(−y_s) dBs,  y_s = y(s).

Exercises

EX 2.1. We consider the SDE (2.20). Find which of the following cases is amenable to the reduction technique of Section 2.1.4:
(i) a = ry; b = u,
(ii) a = r; b = uy^γ; γ = 1/2, 1, 2,
(iii) a = −(r²/2) sin(2y); b = r cos y;  r, u = const.

EX 2.2. Verify the solutions (2.4) and (2.17) with the use of the Itô formula (1.98) and (2.37).

EX 2.3. Verify that the SDE

dx = −β²x(1 − x²) dt + β(1 − x²) dBt

has the solution

x(t) = [a exp(z) + a − 2]/[a exp(z) + 2 − a];  a = 1 + x₀;  z = 2βBt.

EX 2.4. Verify the solution of the stochastic pendulum (2.65). Use the Itô formula (1.122) and a generalization of (2.37).

EX 2.5. Find a parameter s that allows a simple determination of the zeros of (2.79) and thus an inversion of (2.79).

EX 2.6. Determine the components V₁₁, V₁₂ of the variance matrix for the pendulum with stochastic damping. Use (2.74) and (2.82).

EX 2.7. Solve the problem of the pendulum with a stochastic frequency. Use (2.68) with the parameters α = γ = 0; β ≠ 0 to calculate the variance V₁₁.
We obtain from (2.68)

x₁ = x₀ cos t + y₀ sin t − β∫₀^t sin(t − s)x₁(s) dBs,

x₂ = −x₀ sin t + y₀ cos t − β∫₀^t cos(t − s)x₁(s) dBs.

We introduce here the variance V₁₁(t) (for x₀ = 0) in terms of

V₁₁(t) = K(t) − y₀² sin²t = β²∫₀^t sin²(t − u)K(u) du;  K(t) = ⟨x₁²⟩.

The determination of V₁₁ is performed in analogy to the calculations in Section 2.3.2 and we obtain

H(p) = y₀²{(1/2)/(p − b/2) − [(p + b/4)/2 + 3b/16]/[(p + b/4)² + 4]};  b = β².

Finally, we obtain the variance in the form

V₁₁(t) = (y₀²/2){exp(β²t/2) − exp(−β²t/4)[cos(2t) + 3β² sin(2t)/8] − 2 sin²t}.

Note that the variance has the same structure as (2.82) and it is of order unity for O(β²t) = 1.

EX 2.8. Calculate the mean and the covariance function of the stochastic vectorial integral

K_j(t) = ∫₀^t b_jr(s) dBsʳ,  j = 1, …, n;  r = 1, …, m.

EX 2.9. Take advantage of the multivariate Itô formula (1.127) to prove (2.91).

EX 2.10. Show that the ODE y′ = y/x satisfies the growth bound condition, but not the Lipschitz condition. Solve this ODE and discuss the singularities of its solution in connection with the mentioned conditions.
EX 2.11. Solve the SDE

dx = y dt + a dBt¹,
dy = ±x dt + b dBt²;  a, b = const,

where Bt^k; k = 1, 2 are independent WPs. The case with the negative sign of the drift coefficient is a generalization of the linearized pendulum.

EX 2.12. Solve the SDE

dx = x^k dt + bx dBt;  b, k = const;  k > 1;  x(0) > 0,

and find the positions where the solution blows up.

EX 2.13. Solve the generalized population growth model SDE

dx = rx dt + x Σ_{k=1}^K b_k dBt^k;  r, b_k = const.

EX 2.14. Solve the damped oscillation SDE

ẍ + aẋ + ω²x = bξt;  a, ω = const.

Hint: Write the SDE in the form of scalar equations and use the matrix

A = (  0    1
      −ω²  −a ),

exp(At) = [exp(−ut)/v][Iv cos(vt) + (Iu + A) sin(vt)];  u = a/2;  v = √(ω² − (a/2)²),

where I is the 2D unit matrix. See also the undamped case (2.59) with (2.61).
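The closed form quoted in the hint of EX 2.14 is easy to check numerically. The snippet below (an illustration with arbitrary test values) compares it with a general-purpose matrix exponential:

    import numpy as np
    from scipy.linalg import expm

    # Check of exp(At) = e^{-ut}[Iv cos(vt) + (Iu + A) sin(vt)]/v.
    a, omega, t = 0.4, 1.3, 2.0                 # arbitrary test values
    u, v = a / 2, np.sqrt(omega**2 - (a / 2)**2)
    A = np.array([[0.0, 1.0], [-omega**2, -a]])
    I = np.eye(2)
    closed = np.exp(-u * t) * (I * v * np.cos(v * t) + (I * u + A) * np.sin(v * t)) / v
    print(np.allclose(closed, expm(A * t)))     # -> True

The identity holds because A + uI has the eigenvalues ±iv, so (A + uI)² = −v²I, and the exponential series collapses into the sine and cosine terms.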
CHAPTER 3

THE FOKKER–PLANCK EQUATION

In Section 3.1 we derive the master equation and we use this equation in Section 3.2, where we focus on the derivation of the Fokker–Planck equation, a PDE that governs the distribution function of the solution of an SDE. In the following sections we treat important applications of this equation in the field of bifurcations and limit cycles of SDEs.

3.1. The Master Equation

We consider a Markovian, stationary process and our goal is to reduce the integral equation of Chapman–Kolmogorov (1.52) to a more useful equation called the master equation. We follow here in part the ideas of Van Kampen [3.1] and we simplify in the first place the notation. The transition probability depends only on the time difference and we can write

p₁|₁(y₂, t₂ | y₁, t₁) = T(y₂ | y₁; τ);  τ = t₂ − t₁.  (3.1)

An example was already given for the Ornstein–Uhlenbeck process (1.56.2). There we also obtained the generally valid initial condition

T(y₂ | y₁; 0) = δ(y₂ − y₁).  (3.2)

Since the transition from (y₁, t₁) to (y₂, t₂) occurs AC for some value of y₂, we have the normalization condition

∫ T(y₂ | y₁; τ) dy₂ = 1.  (3.3)

We define now u = t₂ − t₁, v = t₃ − t₂; this means that u + v = t₃ − t₁. Thus, we obtain from the Chapman–Kolmogorov equation (1.52)
the relation

T(y₃ | y₁; u + v) = ∫ T(y₂ | y₁; u)T(y₃ | y₂; v) dy₂.  (3.4)

Next we try to find a Taylor expansion of T(y₂ | y₁; u) with respect to the variable u. To achieve this we put

T(y₂ | y₁; u) = δ(y₂ − y₁) + k(y₂, y₁)u + O(u²).  (3.5)

The application of (3.3) yields ∫ k(y₂, y₁) dy₂ = 0. To comply with this relation we use the form

k(y₂, y₁) = −a(y₁)δ(y₂ − y₁) + W(y₂ | y₁);  W(y₂ | y₁) ≥ 0,
a(y₁) = ∫ W(y₂ | y₁) dy₂,  (3.6)

where W(y₂ | y₁) is the transition probability per unit time (TPT) for the passage from y₁ to y₂. Thus, we obtain the first two terms of the Taylor series in (3.5) and we can write

T(y₂ | y₁; u) = [1 − a(y₁)u]δ(y₂ − y₁) + uW(y₂ | y₁) + O(u²).  (3.7)

We use now (3.7) for the transition probability T(y₃ | y₂; v) and we obtain

T(y₃ | y₂; v) = [1 − a(y₂)v]δ(y₃ − y₂) + vW(y₃ | y₂),

and substitute the last line into (3.4). Thus, we obtain

T(y₃ | y₁; u + v) = [1 − a(y₃)v]T(y₃ | y₁; u) + v∫ W(y₃ | y₂)T(y₂ | y₁; u) dy₂.

We rearrange this relation and use the limit v → 0. This leads to

∂T(y₃ | y₁; u)/∂u = −a(y₃)T(y₃ | y₁; u) + ∫ W(y₃ | y₂)T(y₂ | y₁; u) dy₂.  (3.8)

To obtain the desired master equation we see from (3.6) that a(y₃) = ∫ W(y₂ | y₃) dy₂. Hence we obtain from (3.8)

∂T(y₃ | y₁; u)/∂u = ∫ W(y₃ | y₂)T(y₂ | y₁; u) dy₂ − T(y₃ | y₁; u)∫ W(y₂ | y₃) dy₂.  (3.9)
The integral equation (3.9) contains as kernel the TPT W(y₃ | y₂). We note first that all transition probabilities in (3.9) have, in contrast to (3.4), exclusively y₁ as initial point. Hence, we simplify the nomenclature by using

Z(y, t) = T(y | y₁; t);  Z(y, t = 0) = δ(y − y₁),  (3.10)

and we note that Z(y, t) does not mean a single-time distribution. Equations (3.9) and (3.10) lead now to

∂Z(y, t)/∂t = ∫ dy′[W(y | y′)Z(y′, t) − W(y′ | y)Z(y, t)].  (3.11)

The integral equation (3.11) is now the master equation (or M-equation). Note that for the derivation of this equation it was sufficient to use the two-term expansion (3.7).

It is instructive to specify (3.11) for the case of a discrete set of variables characterized by an integer number. Here we consider the discrete M-equation

dZ_n(t)/dt = Σ_m [W_nm Z_m(t) − W_mn Z_n(t)];  W_nm ≥ 0;  W_nn = 0.  (3.12)

This equation shows the structure of a gain–loss relation for the probabilities of the individual states Z_n(t). The first term on the right hand side of (3.12) represents the gain of the state n from all the other states m, and the second term gives the loss from n to m.

Example (Random walk)

We consider the random walk on the one-dimensional x-axis where jumps between adjacent sites with equal probability are permitted. The quantity W_nm is the transition probability between the sites n and m. Since we consider only jumps between adjacent sites, the state n (with the position x_n) wins with equal unit-probability from the state n − 1 (position x_{n−1}) and the state n + 1 (position x_{n+1}), and loses with the same probability to the same states. Hence, we have W_nm = δ_{n,m−1} + δ_{n,m+1} and the M-equation reads

Ż_n(t) = Z_{n−1}(t) + Z_{n+1}(t) − 2Z_n(t).  (3.13)

We give more details about the solution of (3.13) in EX 3.1.
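Equation (3.13) is a linear system of ODEs and can be integrated directly. The following sketch (not from the text) propagates it with a forward Euler step on a finite lattice and confirms two elementary properties: the total probability is conserved and the variance grows as ⟨n²⟩ = 2t:

    import numpy as np

    # Forward-Euler integration of dZ_n/dt = Z_{n-1} + Z_{n+1} - 2 Z_n.
    N, dt, T = 50, 0.01, 5.0
    Z = np.zeros(2 * N + 1)
    Z[N] = 1.0                                   # Z_n(0) = delta_{n,0}
    for _ in range(int(T / dt)):
        Z += dt * (np.roll(Z, 1) + np.roll(Z, -1) - 2 * Z)
    n = np.arange(-N, N + 1)
    print("total probability:", Z.sum())         # stays 1
    print("<n^2> =", (n**2 * Z).sum())           # approx 2T = 10

The periodic wrap-around introduced by np.roll is harmless here because the diffusion length √(2T) is much smaller than the lattice half-width N.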
Generally we can state that the physics of the process under consideration determines the structure of the transition probabilities per unit time W(y | y′). The substitution of the latter into the M-equation leads eventually to the determination of the transition probability Z(y, t). As an application of the continuous M-equation (3.11) we consider now the jump moments that are defined by

a_k(y) = ∫ (y′ − y)^k W(y′ | y) dy′;  k = 0, 1, ….  (3.14)

Note that the coefficient a(y) in (3.6) to (3.8) satisfies a(y) = a₀(y). We analyze now the average of the variable y that can be reached by the transition probability. Hence, we put

ȳ(t) = ⟨y⟩ = ∫ yZ(y, t) dy,  (3.15)

and this is the conditional average of the variable y that starts with y = y₁ at t = 0 with the transition probability δ(y − y₁). We calculate the time derivative of (3.15) and we obtain

dȳ/dt = ∫ y (∂Z/∂t) dy,

and this gives with (3.11)

dȳ/dt = ∫ dy ∫ dy′ y[W(y | y′)Z(y′, t) − W(y′ | y)Z(y, t)].

We can split the right hand side of the latter line into two double integrals. We interchange in the first of those integrals the variables y and y′. This yields

dȳ/dt = ∫ dy ∫ dy′ (y′ − y)W(y′ | y)Z(y, t).  (3.16)

The substitution of (3.14) into (3.16) leads to

dȳ(t)/dt = ∫ dy a₁(y)Z(y, t) = ⟨a₁(y)⟩,  (3.17)

and we recall that the right hand side of (3.17) is a conditional average. We use now the expansion

a₁(y) = a₁(⟨y⟩) + (y − ⟨y⟩)a₁′(⟨y⟩) + (1/2)(y − ⟨y⟩)²a₁″(⟨y⟩) + ···;  ′ = d/dy.  (3.18)
In the special case where a₁(y) is a linear function of y we have a₁(y) = by; b = const, and (3.17) leads to

dȳ/dt = b⟨y⟩ = bȳ(t).  (3.19)

This is a closed equation, in the sense that only the quantity ⟨y⟩ (and not higher order moments) is involved in Equation (3.19). However, if a₁(y) is not a linear function then we obtain from (3.17) and (3.18)

(d/dt)⟨y⟩ = a₁(⟨y⟩) + (1/2)(⟨y²⟩ − ⟨y⟩²)a₁″(⟨y⟩) + ···.  (3.20)

Equation (3.20) governs the quantity ⟨y⟩; there appear, along with ⟨y⟩, higher order moments such as ⟨y²⟩. This equation is, hence, not a closed relation. We need additional equations to determine the higher order moments. This dilemma is called the closure problem.

In EX 3.2 we calculate an evolution equation for the variance. We will reconsider the jump moments in the next section when we derive the Fokker–Planck equation.

3.2. The Derivation of the Fokker–Planck Equation

This equation governs in its simplest version the probability function of the solution of a one-dimensional SDE. There are several alternatives to derive this equation. Planck himself used a physically motivated way and introduced short range atomic forces to approximate terms in the M-equation.

Since we are not primarily interested in atomic physics, we use here a more formal approach. We start from (3.4) and we apply the nomenclature

y₃ = z;  y₂ = w;  y₁ = x;  u = t;  v = Δt,  0 < Δt ≪ 1,  (3.21)

and we note that the variable w is in (3.4) a dummy integration variable, whereas z and x are variables that belong to more and less advanced values of the time and are called forward and backward
variables, respectively. Thus we write (3.4) in the form

T(z | x; t + Δt) = ∫ dw T(w | x; t)T(z | w; Δt).  (3.22)

We multiply now (3.22) by an arbitrary function ψ(z, t) that vanishes sufficiently rapidly at z = ±∞, and we integrate the resulting equation from z = −∞ to z = ∞. Thus, we obtain in the first place

LHS = RHS.  (3.23)

We approximate the left hand side of (3.22) by a Taylor series. This yields

LHS = ∫ dz ψ(z, t)T(z | x; t + Δt)
    = ∫ dz ψ(z, t)T(z | x; t) + Δt ∫ dz ψ(z, t)(∂/∂t)T(z | x; t) + O((Δt)²).  (3.24)

The right hand side of (3.22) can be written in the form

RHS = ∫ dw T(w | x; t)J(w, t);  J(w, t) = ∫ dz T(z | w; Δt)ψ(z, t).  (3.25)

We approximate now also ψ(z, t) by a Taylor series and this leads to

J(w, t) = ψ(w, t)∫ dz T(z | w; Δt) + Δt[A(w, t)ψ′(w, t) + (1/2)B(w, t)ψ″(w, t)] + ···;
ψ^(k)(w, t) = ∂^k ψ(w, t)/∂w^k.  (3.26)

We use in (3.26) the abbreviations

A(w, t)Δt = ∫ dz T(z | w; Δt)(z − w);
B(w, t)Δt = ∫ dz T(z | w; Δt)(z − w)².  (3.27)

Note that the normalization condition of T(z | w; Δt) gives the first integral on the right hand side of (3.26) the value unity.
The substitution of (3.24) to (3.27) into (3.23) leads now to

∫ dz ψ(z, t)T(z | x; t) + Δt ∫ dz ψ(z, t)(∂/∂t)T(z | x; t) + O((Δt)²)
= ∫ dw ψ(w, t)T(w | x; t) + Δt ∫ dw T(w | x; t)[A(w, t)ψ′(w, t) + (1/2)B(w, t)ψ″(w, t)].

The first terms on both sides cancel and we obtain in the limit Δt → 0

∫ dz {ψ(z, t)(∂/∂t)T(z | x; t) − [A(z, t)ψ′(z, t) + (1/2)B(z, t)ψ″(z, t)]T(z | x; t)} = 0,  (3.28)

where we replaced the dummy integration variable w by z. As a further manipulation we introduce partial integrations and we put

A(z, t)ψ′(z, t)T(z | x; t) = [A(z, t)ψ(z, t)T(z | x; t)]′ − ψ(z, t)[A(z, t)T(z | x; t)]′,

and another relation for the term proportional to B(z, t) (see EX 3.3). Thus, we obtain

∫ dz ψ(z, t){(∂/∂t)T(z | x; t) + [A(z, t)T(z | x; t)]′ − (1/2)[B(z, t)T(z | x; t)]″} = 0.

The braces in the last line must vanish, since ψ(z, t) is an arbitrary function. This yields

(∂/∂t)T(z | x; t) = −[A(z, t)T(z | x; t)]′ + (1/2)[B(z, t)T(z | x; t)]″.  (3.29)

Equation (3.29) is the Fokker–Planck equation (FPE).

Since the spatial derivatives correspond to the forward spatial variable z, (3.29) is sometimes also called the "forward Chapman–Kolmogorov" equation. In EX 3.4 we derive from (3.21) also a "backward Chapman–Kolmogorov" equation with spatial derivatives with respect to the backward variable x. It is also more convenient to use
the nomenclature (3.10) and we obtain in this way the FPE in its usual form

∂P(y, t)/∂t = −(∂/∂y)[A(y, t)P(y, t)] + (1/2)(∂²/∂y²)[B(y, t)P(y, t)];
P(y, t) = T(y | x; t).  (3.30)

To apply the FPE we introduce boundary conditions, an initial condition and the normalization [(α, β) are the boundaries of the interval]

∂^k P/∂y^k = 0 for y = Λ, k = 0, 1;  Λ = α and Λ = β,  (3.31a)

P(y, 0) = δ(y),  (3.31b)

and

∫_α^β P(y, t) dy = 1.  (3.31c)

There are several generalizations of the FPE. First we can include all terms of the Taylor expansion (3.26). This yields the Kramers–Moyal equation

∂P(y, t)/∂t = Σ_{n=1}^∞ [(−1)ⁿ/n!](∂ⁿ/∂yⁿ)[A_n(y, t)P(y, t)],  (3.32)

where the coefficients A_n are defined in analogy to (3.27). Equation (3.32) is more general than the Fokker–Planck equation, yet it is identical with the M-equation and it is not easier to use. An example of the use of (3.32) is given in EX 3.5.

3.3. The Relation Between the Fokker–Planck Equation and Ordinary SDEs

To establish a relation between the Fokker–Planck equation and the solutions (or rather the moments of the solution) of a first order SDE we need to calculate the coefficients A(z, t) and B(z, t) in (3.29) or (3.30). According to Salinas [3.2], this task can be performed for rather simple examples of SDEs by a direct inspection of (3.27). However, it is more convenient to compare the moments calculated from the FPE with the moments derived from solving the SDE.
To perform this task we recall that the variable t appearing in the function P(y, t) is the time to reach from a starting point at t = 0 (where P satisfies (3.31b)) the position y. We multiply (3.30) by

(Δy)^k = [y(t) − y₀]^k,  k = 1, 2, …;  y₀ = const.,  (3.33a)

and we obtain

∂[(Δy)^k P]/∂t = (Δy)^k[−∂(AP)/∂y + (1/2)∂²(BP)/∂y²],  (3.33b)

where we are able to take (Δy)^k inside the partial time derivative since y depends only implicitly on the time.

Next we rearrange the derivatives and we calculate the first three resulting moments for k = 1, 2, 3 (higher orders are considered in EX 3.6)

∂[(Δy)P]/∂t = −{∂[(Δy)AP]/∂y − AP} + (1/2){∂²[(Δy)BP]/∂y² − 2∂(BP)/∂y},  (3.33c)

∂[(Δy)²P]/∂t = −{∂[(Δy)²AP]/∂y − 2(Δy)AP} + (1/2){∂²[(Δy)²BP]/∂y² − 4∂[(Δy)BP]/∂y + 2BP},  (3.33d)

and

∂[(Δy)³P]/∂t = −{∂[(Δy)³AP]/∂y − 3(Δy)²AP} + (1/2){∂²[(Δy)³BP]/∂y² − 6∂[(Δy)²BP]/∂y + 6(Δy)BP}.  (3.33e)

We integrate (3.33c) through (3.33e) over y and we apply the boundary condition (3.31a) for (α, β) = (−∞, ∞). This means that
all terms arising from

∂^s[(Δy)^k K_m(y, t)P]/∂y^s,  s = 1, 2;  k = 1, 2, 3, …;  K₁ = A(y, t);  K₂ = B(y, t),

vanish. The remaining terms lead to

(d/dt)⟨Δy⟩ = ⟨A(y, t)⟩,
(d/dt)⟨(Δy)²⟩ = 2⟨(Δy)A(y, t)⟩ + ⟨B(y, t)⟩,  (3.34)
(d/dt)⟨(Δy)³⟩ = 3⟨(Δy)²A(y, t)⟩ + 3⟨(Δy)B(y, t)⟩,

with

∫ (Δy)^λ K_m(y, t)P(y, t) dy = ⟨(Δy)^λ K_m(y, t)⟩;  λ = 1, 2, ….  (3.35)

Now we consider the one-dimensional SDE

dy = a(y, t) dt + b(y, t) dBt;  y(t = 0) = y₀,  (3.36a)

where y₀ stands for a deterministic initial condition. The formal solution is

Δy = y − y₀ = ∫₀^t a(y(s), s) ds + ∫₀^t b(y(s), s) dBs.  (3.36b)

We know the moments of this process

⟨Δy⟩ = J(t) = ∫₀^t ⟨a(y(s), s)⟩ ds;
⟨(Δy)²⟩ = J²(t) + ∫₀^t ⟨b²(y(s), s)⟩ ds.  (3.37)

We give in EX 3.7 the reason why the average of the product of the two integrals in (3.36b) vanishes under the influence of the
independent Brownian movements. Hence, we obtain

d⟨Δy⟩/dt = ⟨a(y, t)⟩,
d⟨(Δy)²⟩/dt = 2⟨(Δy)a(y, t)⟩ + ⟨b²(y, t)⟩,  (3.38)
d⟨(Δy)³⟩/dt = 3⟨(Δy)²a(y, t)⟩ + 3⟨(Δy)b²(y, t)⟩.

In the derivation of the third line of (3.38) we used again the independence of the Brownian movements.

A comparison of (3.38) with (3.34) yields for the two lowest orders (the third order is treated later)

⟨a(y, t)⟩ = ⟨A(y, t)⟩;  ⟨(Δy)a(y, t)⟩ = ⟨(Δy)A(y, t)⟩;  ⟨b²(y, t)⟩ = ⟨B(y, t)⟩.  (3.39)

The simplest way to solve (3.39) is given by

A(y, t) = a(y, t);  B(y, t) = b²(y, t).  (3.40)

Note that with (3.40) we automatically satisfy the relation for the third order moments that is given by the third equation of (3.38). Note also that the one-dimensional SDE (3.36a) has only the two scalar coefficient functions a(y, t) and b(y, t). Moments of higher order (n ≥ 3) can therefore not lead to additional independent relations for A(y, t) and B(y, t).

It is now time for an example.

Example (The Ornstein–Uhlenbeck problem)

We know from (2.19a) that the transition probability has in the nomenclature of the FPE (3.30) the form

P(y, t) = p₁|₁(y; t | y₀; 0) = [2πb(1 − λ²)]^{−1/2} exp{−(y − λy₀)²/[2b(1 − λ²)]},  (2.19a)

with λ = exp(−t). This transition probability corresponds to the Ornstein–Uhlenbeck equation with m = 0 and the diffusion coefficient √(2b),

dy = −y dt + √(2b) dBt;  b = const.,  (2.14)
with the solution

y(t) = λy₀ + √(2b) ∫₀^t exp(s − t) dBs.  (2.16)

We calculate with (2.19a) the moments and this yields

T_k = ⟨(Δy)^k⟩ = ∫ dy (Δy)^k p₁|₁(y; t | y₀; 0)
    = [2πb(1 − λ²)]^{−1/2} ∫ dy (y − y₀)^k exp(−z²);  y − λy₀ = √(2b(1 − λ²)) z.

Thus, we obtain

T_k = (1/√π) ∫ dz [(λ − 1)y₀ + √(2b(1 − λ²)) z]^k exp(−z²).

The calculations of the moments are performed separately and this yields

T₀ = 1 (normalization);  T₁ = (λ − 1)y₀;
T₂ = (λ − 1)²y₀² + b(1 − λ²);
T₃ = (λ − 1)³y₀³ + 3by₀(1 − λ²)(λ − 1).

The time derivatives of these moments coincide with the ones calculated directly from the SDE (2.14)

(d/dt)T₁ = ⟨a(y, t)⟩ = −⟨y⟩ = −λy₀,

(d/dt)T₂ = 2⟨(y − y₀)a(y, t)⟩ + ⟨b²(y, t)⟩
         = −2∫ y(y − y₀)P(y, t) dy + 2b∫ P(y, t) dy = 2bλ² − 2λ(λ − 1)y₀².

We treat the third order moment problem in EX 3.8.

A further generalization of (3.30) is the Fokker–Planck equation for a multivariate SDE [see (1.123)] in the presence of just one
Brownian motion,

dy_m = a_m(y, t) dt + b_m(y, t) dBt;  y = (y₁, …, y_n);  m = 1, …, n.  (3.41a)

The corresponding multivariate Fokker–Planck equation has the form

∂P/∂t = −(∂/∂y_k)[a_k(y, t)P] + (1/2)(∂²/∂y_k∂y_r)[b_k(y, t)b_r(y, t)P].  (3.41b)

The proof of (3.41b) can be found in the book of Risken [2.1].

Example

We studied in Section 2.1.1 the nonlinear population growth SDE. Its solution is given by (2.4). Solving the logarithm of this solution for the Brownian motion leads to

Bt = [Z + (u²/2 − r)t]/u;  Z = ln(N/N₀).  (3.42a)

We infer from (1.55) that Bt is N(0, t) distributed and consequently Z is N(β, Δ); β = (r − u²/2)t, Δ = u²t distributed. We will verify this result with an explicit solution of the FPE. A differentiation of (3.42a) yields a linear first order SDE with constant coefficients

dZ = (r − u²/2) dt + u dBt.

The corresponding FPE takes the form

P_t = (u²/2 − r)P_Z + (u²/2)P_ZZ.  (3.42b)

Motivated by the solution of the FPE (1.55), we try a similarity form to solve (3.42b). Thus, we use

P(Z, t) = t^{−1/2} A(s);  s = (Z + at)²/t.

We substitute the latter line into (3.42b) and we must comply with conditions in the orders O(t^{−1−n/2}), n = 1, 2. This leads to

a = u²/2 − r;  A(s) = exp[−s/(2u²)];
P(Z, t) = (2πu²t)^{−1/2} exp{−(Z + at)²/(2u²t)}.
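This prediction is straightforward to confirm by sampling. The sketch below (illustrative parameters, not from the text) draws Bt directly and checks the first two moments of Z = ln(N/N₀) against β and Δ:

    import numpy as np

    # Z = (r - u^2/2) t + u B_t should be N(beta, Delta).
    rng = np.random.default_rng(3)
    r, u, t, n_samples = 1.0, 0.5, 2.0, 100_000
    B = rng.normal(0.0, np.sqrt(t), n_samples)    # B_t ~ N(0, t)
    Z = (r - u**2 / 2) * t + u * B
    print("sample mean:", Z.mean(), "  beta  =", (r - u**2 / 2) * t)
    print("sample var :", Z.var(),  "  Delta =", u**2 * t)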
We give in Appendix A an application of the FPE that concerns an oscillator that has a limit cycle without noise. It serves also to introduce the WKB method to approximate solutions of the FPE in the limit of small additive noise.

3.4. Solutions of the Fokker–Planck Equation

We emphasize the connection of the FPE (3.30) to the SDE (3.36a) and we write the one-dimensional FPE from now on in the form

∂P/∂t = −(∂/∂y)[a(y, t)P] + (1/2)(∂²/∂y²)[b²(y, t)P];  P = P(y, t),  (3.43a)

where a and b are the drift and diffusion coefficients of the SDE (3.36a).

Let us focus now on the calculation of stationary solutions of (3.43a). There is only a stationary solution P_s(y) if the coefficients are time-independent: a = a(y); b = b(y). We use the boundary conditions for α = −∞; β = ∞ and we obtain a first integral in the form d(b²P_s)/dy = 2aP_s. A further integration yields the stationary solution

P_s(y) = [C/b²(y)] exp[2∫₀^y a(u)/b²(u) du].  (3.43b)

The constant in (3.43b) serves to normalize the distribution.

Example

We consider the Ornstein–Uhlenbeck problem (2.14) and (3.43a) reduces to

∂P/∂t = ∂²P/∂y² − ∂[(m − y)P]/∂y;  b = √2.

The stationary solution is governed by dP/dy − (m − y)P = 0 and we obtain

P = C exp(my − y²/2);  C = const.

We get the normalization from (3.31c) and this yields C = exp(−m²/2)/√(2π) (see also (1.56.1) for m = 0).
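The quadrature (3.43b) is also convenient numerically when no closed form is available. As a sanity check, the sketch below (grid parameters are arbitrary) evaluates (3.43b) for the drift a(y) = m − y and b = √2 of the example and recovers the normal distribution N(m, 1):

    import numpy as np

    # Numerical evaluation of the stationary density (3.43b).
    m = 0.7
    y = np.linspace(-8.0, 8.0, 4001)
    h = y[1] - y[0]
    a, b2 = m - y, 2.0 * np.ones_like(y)
    F = np.cumsum(2 * a / b2) * h                 # ~ 2 * integral of a/b^2
    F -= F[y.size // 2]                           # lower limit at y = 0
    Ps = np.exp(F) / b2
    Ps /= Ps.sum() * h                            # normalization (3.31c)
    print("mode:", y[Ps.argmax()])                # approx m
    print("variance:", ((y - m)**2 * Ps).sum() * h)   # approx 1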
We conclude now this section with the derivation of nonstationary solutions of autonomous SDEs with

a = a(y),  b = b(y).  (3.44)

We solve the Fokker–Planck equation with the use of a separation of variables. Thus we put (the function A(y) should not be confused with the coefficient A(z, t) in (3.27) through (3.30))

P(y, t) = exp(−λt)A(y).  (3.45)

The substitution of (3.44) and (3.45) into (3.43a) gives

[b²A]″ − 2[aA]′ + 2λA = 0;  ′ = d/dy;  A(±∞) = 0.  (3.46)

The ODE (3.46) and the BCs are homogeneous, hence (3.46) constitutes an eigenvalue problem. We will satisfy the initial condition (3.31b) later when we replace (3.45) by an expansion of eigenfunctions and apply orthogonality relations. The rest of the algebra depends on the particular problem. We will consider, however, only problems that satisfy the conditions of the Sturm–Liouville problem (see e.g. Bender and Orszag [3.3]). The latter boundary value problem has the form

[p(y)l_n′(y)]′ + μq(y)l_n(y) = 0;  l_n(y₁) = l_n(y₂) = 0.  (3.47)

The eigenfunction is denoted by l_n(y) and μ is the eigenvalue. The orthogonality relation for the eigenfunctions is expressed by

∫ q(z)l_n(z)l_m(z) dz = N_m δ_nm,  (3.48)

where N_m and δ_nm stand for the norm of the eigenfunctions and the Kronecker delta. To illustrate these ideas consider the following example.

Example

We consider an SDE with the coefficients

a(y) = −βy;  b(y) = √K;  β, K = const.  (3.49)

The coefficients in (3.49) correspond to an Ornstein–Uhlenbeck problem with a scaled Brownian movement, see EX 3.9.
The separated equation (3.46) reads KA″ + 2β(yA)′ + 2λA = 0. We use a scale transformation and this yields

d²A/dz² + d(zA)/dz + λ*A = 0;  z = y√(2β/K);  λ* = λ/β;  A(z = ±∞) = 0.  (3.50)

We transform (3.50) into the Sturm–Liouville form (3.47) and we obtain

[p(z)A_n′(z)]′ + (λ* + 1)p(z)A_n(z) = 0;  p(z) = q(z) = exp(z²/2);  μ = λ* + 1.  (3.51)

The eigenvalues and eigenfunctions of (3.51) have the form

λ*_n = n;  n = 0, 1, …;  A_n(z) = (dⁿ/dzⁿ) exp(−z²/2),  (3.52)

i.e. the A_n are, up to a sign, the Hermite functions He_n(z) exp(−z²/2). Now we use an eigenfunction expansion and replace (3.45) by

P(y, t) = Σ_{n=0}^∞ exp(−nβt)B_n A_n(y√(2β/K));  B_n = const.  (3.53)

We can satisfy the initial condition (3.31b) and we obtain

Σ_{n=0}^∞ B_n A_n(y√(2β/K)) = δ(y).  (3.54)

The evaluation of B_n is performed with the aid of (3.48) and we obtain

B_m = √(2β/K) A_m(0)/N_m.

The explicit evaluation of the eigenfunctions and the calculation of the norm is assigned to EX 3.10.
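Under the identification with Hermite functions stated above, and assuming the standard norms N_n = √(2π) n! (the object of EX 3.10), the expansion (3.53) can be summed directly. The sketch below (illustrative parameters) does this and checks that the total probability stays equal to one for t > 0:

    import numpy as np
    from math import factorial
    from numpy.polynomial.hermite_e import hermeval

    # Expansion (3.53) with A_n(z) = He_n(z) exp(-z^2/2), N_n = sqrt(2 pi) n!.
    # Any sign convention for A_n cancels in the product B_n A_n.
    beta, K = 1.0, 2.0
    scale = np.sqrt(2 * beta / K)                 # z = scale * y

    def P(y, t, n_max=40):
        z = y * scale
        total = np.zeros_like(z)
        for n in range(n_max):
            c = np.zeros(n + 1); c[n] = 1.0       # coefficients of He_n
            A_n = hermeval(z, c) * np.exp(-z**2 / 2)
            B_n = scale * hermeval(0.0, c) / (np.sqrt(2 * np.pi) * factorial(n))
            total += np.exp(-n * beta * t) * B_n * A_n
        return total

    y = np.linspace(-6.0, 6.0, 1201)
    for t in (0.5, 2.0, 5.0):
        print("t =", t, " total probability:", np.trapz(P(y, t), y))

As t → ∞ only the n = 0 term survives and the series collapses to the stationary Gaussian with variance K/(2β), in agreement with (3.43b).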
3.5. Lyapunov Exponents and Stability

To explain the meaning of the Lyapunov exponents, we consider an n-dimensional system of deterministic ODEs

ẋ = f(x, t);  x(t = 0) = x₀;  x, x₀, f ∈ ℝⁿ.  (3.55)

We suppose that we found a solution x̄(t); x̄(0) = x̄₀. Now we study the stability of this solution. To perform this task we are interested in the dynamics of a nearby trajectory. Thus we use the linearization

x(t) = x̄(t) + εy(t);  0 < ε ≪ 1,  y ∈ ℝⁿ,  (3.56)

where ε is a small formal parameter. The substitution of (3.56) into (3.55) leads after a Taylor expansion to

ẏ = J(x̄, t)y;  J_pq = ∂f_p(x₁, …, x_n, t)/∂x_q;  p, q = 1, …, n,  (3.57)

where the matrix J is the Jacobian and (3.57) is the linearized system.

We can replace the vector ODE (3.57) by a matrix ODE for the fundamental matrix Φ_km(t) that takes the form

Φ̇_km(t) = J_ks(x̄, t)Φ_sm(t);  Φ_sm(t = 0) = δ_sm.  (3.58)

The solution of the linearized problem (3.57) is now given by

y_k(t) = Φ_km(t)y_m(0).  (3.59)

We define now the coefficient of expansion of the solution y(t) in the direction of y(0)

μ(y(0), t) = |y(t)|/|y(0)|.

In the last line we use the symbol |···| to label the norm in ℝⁿ. The Lyapunov exponent corresponding to the direction y(0) is then
defined by

λ = lim_{t→∞} (1/t) ln[μ(y(0), t)].  (3.60)

The solution x̄(t) is therefore stable (in the sense of Lyapunov) if the condition Re(λ) < 0 is met. Note also that in the case of a one-dimensional ODE (n = 1 in (3.55)) we obtain

λ = lim_{t→∞} (1/t) ln[y(t)];  y(t) ∈ ℝ.  (3.61)

Equation (3.61) is sometimes called the Lyapunov coefficient of a function.

The stability method of Lyapunov can be generalized to stochastic differential equations. We will see that it is plausible to calculate the Lyapunov exponents with the use of the stationary FPE of the corresponding SDE. A rigorous treatment of the calculation of the Lyapunov exponents is given by Arnold [3.4]. To illustrate this idea we consider a class of first order SDEs

dx = [f(x) + (1/2)g(x)g′(x)] dt + g(x) dBt;  ′ = d/dx,  (3.62)

where f and g are arbitrary but differentiable functions. The stationary FPE (3.43b) takes the form

P_s(x) = [C/g(x)] exp[2∫₀^x f(s)g⁻²(s) ds];  C = const.  (3.63)

Now we suppose that we know a solution z(t) of (3.62). We study its stability with the linearization x(t) = z(t) + εy(t). The substitution into (3.62) leads to

dy = ({f′(z) + [g(z)g′(z)]′/2} dt + g′(z) dBt) y.  (3.64)

The SDE (3.64) is analogous to the one for the population growth (2.1). Thus we use again the function ln(y) and we obtain

d ln(y) = [f′ + (1/2)gg″] dt + g′ dBt.  (3.65)
We integrate (3.65) and this yields

ln[y(t)] = ∫₀^t [f′ + (1/2)gg″] ds + ∫₀^t g′ dBs = ∫₀^t [f′ + (1/2)gg″ + g′ξ_s] ds,  (3.66)

where ξ_s is the Wiener white noise.

Now we calculate the Lyapunov exponent of the one-dimensional SDE (3.62). Thus we use (3.61) and obtain

λ = lim_{t→∞} (1/t) ∫₀^t [f′ + (1/2)gg″ + g′ξ_s] ds.  (3.67)

Equation (3.67) is the temporal average of the function f′ + (1/2)gg″ + g′ξ_s. For stationary processes we can replace this temporal average by the probabilistic average that is performed with the stationary solution of the FPE (3.63). Thus, we obtain

λ = ⟨f′ + (1/2)gg″ + g′ξ_s⟩ = ⟨f′ + (1/2)gg″⟩ = ∫ P_s(z)[f′ + (1/2)gg″] dz,  (3.68)

where we used the fact that ⟨g′ξ_s⟩ = 0. We substitute P_s from (3.63). An integration by parts yields

λ = [P_s f]_{−∞}^{∞} − ∫ [f P_s′ − (1/2)gg″P_s] dz.  (3.69)

The first term of the right hand side of (3.69) vanishes. We give in EX 3.11 hints how we can rearrange the integral (3.69) with the use of (3.63). This yields finally (P_s > 0)

λ = −2∫ (f/g)²P_s dx < 0.  (3.70)

Equation (3.70) means that every solution of (3.62) is stable in the sense of Lyapunov.
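For a concrete pair f, g the two averages (3.68) and (3.70) can be compared numerically. The sketch below uses the illustrative choice f(x) = −x, g(x) = (1 + x²)^{1/2} (not from the text), for which (3.63) gives P_s(x) = (1/2)(1 + x²)^{−3/2} and both formulas yield λ = −2/3:

    import numpy as np

    # Averages (3.68) and (3.70) for f = -x, g = sqrt(1 + x^2).
    x = np.linspace(-60.0, 60.0, 400_001)
    dx = x[1] - x[0]
    Ps = 0.5 * (1 + x**2)**-1.5          # stationary density from (3.63)
    f_prime = -np.ones_like(x)           # f'(x) = -1
    g = np.sqrt(1 + x**2)
    g_pp = (1 + x**2)**-1.5              # g''(x)
    lam_68 = np.sum(Ps * (f_prime + 0.5 * g * g_pp)) * dx
    lam_70 = -2 * np.sum(Ps * (x / g)**2) * dx
    print(lam_68, lam_70)                # both approx -2/3

Since (f/g)² is squared, the negativity of λ in (3.70) is manifest regardless of the sign of f.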
3.6. Stochastic Bifurcations

There are two different cases of bifurcations for SDEs:

(a) P-bifurcations (or bifurcations of the PD)
These bifurcations are characterized by qualitative changes of the probability density. In many cases changes of the maxima or minima of the PD arise. The univariate Gaussian PD (1.29) has for m = 0 a maximum at x = 0. A bifurcation of this PD leads generically to a PD with a minimum at x = 0 and two maxima at the positions ±u.

(b) D-bifurcations (or deterministic bifurcations)
The scenario is that a solution loses its stability and bifurcates, like in the deterministic case, to a new stable branch.

3.6.1. First order SDEs

We consider now three SDEs that belong to the class (3.62), with solutions that never lose their stability,

dx = [a(x, α) + (1/2)σ²h(x)h′(x)] dt + σh(x) dBt.  (3.71)

The real constants σ and α are the intensity constant of the stochasticity (σ = 0 indicates the deterministic limit) and the bifurcation parameter of the deterministic case. We choose the drift coefficient a(x, α) such that the deterministic limit of (3.71) coincides with one of the three normal forms of one-dimensional ODEs (see Wiggins [3.5]). The fact that the presence of a stochastic effect leads to stable solutions means that the randomness destroys the bifurcation (see Crauel and Flandoli [3.6]).

We investigate now these three cases separately:

(i) The pitchfork case
Here we consider the SDE

dx = (αx − x³) dt + σ dBt.  (3.72)

We obtain from (3.63) the stationary PD

Z_s(x) = C exp[(αx² − x⁴/2)/σ²];  C = const.,  (3.73)

and we determine the constant using (3.31c). We infer from (3.70) that every stochastic solution of (3.72) is stable for all values of α
[Fig. 3.1. The P-bifurcation of the PD (3.73). The upper, middle and lower curves belong to α = 1, 0 and −1, respectively; C = σ = 1.]

and σ ≠ 0. But we see from (3.73) that there arises at the critical point α = 0 a P-bifurcation (see Fig. 3.1). The PD (3.73) has for α < 0 only a maximum at x = 0. By contrast, we face for α > 0 at x = 0 a minimum and two maxima at the positions ±√α.

(ii) The transcritical case
The SDE under consideration is

dx = [αx − x² + (1/2)σ²x] dt + σx dBt.  (3.74)

This SDE is a member of the class (2.35). Its solution is calculated in EX 3.12. We obtain from (3.63) the stationary PD

Z_s(x) = Cx^Λ exp(−2x/σ²) ∀ x > 0;  Z_s(x) = 0 ∀ x ≤ 0;  Λ = 2α/σ² − 1.  (3.75)

The determination of the norm yields C = (σ²/2)^{−β}/Γ(β);  β = 2α/σ². There is neither a D- nor a P-bifurcation.

(iii) The saddle-node case
Here we put

dx = (α − x² + σ²/4) dt + σ√x dBt.  (3.76)

We obtain the PD

Z_s(x) = Cx^λ exp(−x²/σ²);  λ = 2α/σ² − 1/2.  (3.77)
Note that the PD exists in x ∈ (−∞, ∞) for α = (n + 1/2)σ²/2, n = 1, 2, …, yet these are not critical locations. Again, there arises neither a D- nor a P-bifurcation.

3.6.2. Higher order SDEs

There exists no general theory to cover bifurcations of higher order SDEs. A good review of this subject is given in the book of Arnold [3.4]. We limit our attention, however, to just one instructive example of a second order SDE that is linear in the deterministic limit. We consider in this example the

Stochastic Mathieu equation

We study here a stochastic generalization of the Mathieu equation that plays an important role in stability studies of excited oscillators. The stochastic problem is governed by the SDE

ẍ + εβẋ + x = −εxξ(t);  ε, β = const.;  0 < ε ≪ 1,  (3.78)

where dots stand for time derivatives. Equation (3.78) describes a linear oscillator that is slightly damped (β is the damping coefficient) and affected by a colored noise described by ξ(t). We follow the ideas of Rong et al. [3.7] and use the noisy term

ξ(t) = h cos(Ωt + γB_{εt});  h, Ω, γ = const.,  (3.79)

where Ω and γ stand for a deterministic frequency and the noise intensity, respectively. Furthermore, we use a slowly varying noise expressed in form of a slowly scaled Brownian movement.

The deterministic case (γ = 0) of (3.78) is called the Mathieu equation. Its stability with respect to weak deterministic perturbations [ξ(t) = cos(Ωt)] was studied by Nayfeh [3.8] with the aid of various asymptotic methods. We investigate here the stability of the solutions of (3.78) with the use of the method of multiple scales.

Before we perform this task we calculate the spectrum of the colored noise term in (3.78). We introduce the autocorrelation function [Re(a) denotes the real part of the complex variable a]

⟨ξ(t)ξ(t + τ)⟩ = h² Re(V),  (3.80a)
with

V = exp{i[Ω(2t + τ)]}⟨T₊⟩ + exp(iΩτ)⟨T₋⟩;  T± = exp[iγ(B_{ε(t+τ)} ± B_{εt})].  (3.80b)

We can evaluate (3.80b) with the use of EX 1.11. We find that the first term on the right hand side of (3.80a) vanishes in the limit t → ∞. Hence, we obtain in this limit

⟨ξ(t)ξ(t + τ)⟩ = h² cos(Ωτ) exp(−γ²ε|τ|/2).  (3.81a)

Equation (3.81a) coincides, apart from a different notation, with the limit of (2.57) for t → ∞. Thus we obtain the spectrum of this noise term (see also Wedig [3.9])

S(ω) = 2ah²(ω² + Ω² + a²)/[(ω² − Ω²)² + 2a²(ω² + Ω²) + a⁴];  a = γ²ε/2.  (3.81b)

To approximate the solutions of the SDE (3.78) we use the two-variable version of the multiple scale theory. The latter routine was developed first for deterministic oscillation problems, amongst others by Nayfeh [3.8], and later extended to problems with random excitations by Rajan and Davies [3.10] and by Nayfeh and Serban [3.11]. We use two independent variables

τ = t and T = εt,  (3.82)

where τ and T are referred to as fast and slow variables, respectively. The time derivatives are

d/dt = ∂/∂τ + ε∂/∂T;  d²/dt² = ∂²/∂τ² + 2ε∂²/∂τ∂T + ε²∂²/∂T².  (3.83)

Furthermore, we use also an expansion of the dependent variable and we put

x(t, ε) = Σ_{n=0}^∞ εⁿx_n(τ, T) = x₀(τ, T) + εx₁(τ, T) + O(ε²).  (3.84)

Thus, we obtain a hierarchy of second order equations

ε⁰: Lx₀ = 0,  L = ∂²/∂τ² + 1;
εⁿ: Lx_n = RHS_n(x₀, …, x_{n−1});  n ≥ 1.  (3.85)
Equation (3.85) means that the leading order equation (the first equation, for ε⁰) is homogeneous. The higher order (or correction order) equations are inhomogeneous, with right hand sides RHS_n that depend in general on all solutions of the previous problems.

We apply this procedure now to (3.78) and we obtain in the first place

RHS₁ = −2∂²x₀/∂τ∂T − β∂x₀/∂τ − x₀ξ.  (3.86)

The leading order solution is the one of a harmonic oscillator and we write it in complex form

x₀ = A(T) exp(iτ) + A*(T) exp(−iτ) = A(T) exp(iτ) + cc,  (3.87)

where an asterisk is used to denote the complex conjugate (cc). The constants of the τ-integration depend on the slow variable T and we write those constants in form of a slowly varying amplitude A(T) and its complex conjugate. These functions are determined in the next order of the expansion.

Now we reformulate the right hand side defined by (3.86) with the leading order solution (3.87). Thus, we obtain

RHS₁ = −i exp(iτ)[2 dA/dT + βA] + cc
     − (h/2){A* exp[i(Ω − 1)τ + iγB_T] + cc}
     − (h/2){A exp[i(Ω + 1)τ + iγB_T] + cc}.  (3.88)

The inhomogeneity (3.88) causes resonance with solutions of the type τa exp(iτ); a ≠ 0, and such terms are no longer periodic solutions. Hence, we put all terms in (3.88) having the structure exp(iτ)F(T) or exp(−iτ)F*(T) to zero. This yields the non-resonance equation that allows the calculation of periodic solutions and gives a condition to calculate the slowly varying amplitude function A(T). We introduce a detuning parameter

Ω − 1 = 1 + εσ;  O(σ) = 1.  (3.89)
Equation (3.88) leads with (3.89) to

RHS₁ = −exp(iτ){i[2 dA/dT + βA] + (h/2)A* exp[i(σT + γB_T)]}
     − (h/2) exp(3iτ)A(T) exp[i(σT + γB_T)] + cc.  (3.90)

Thus, we obtain the non-resonance condition if we put the coefficient of exp(iτ) in (3.90) to zero. This yields

i[2 dA/dT + βA] + (h/2)A* exp[i(σT + γB_T)] = 0,  (3.91)

and we note that the complex conjugate of (3.91) yields a differential equation that has the same solution as (3.91).

To separate (3.91) into real and imaginary parts we use the polar form

A(T) = R(T) exp[iψ(T)];  R, ψ ∈ ℝ.  (3.92)

Equation (3.92) yields

dA/dT = (R′ + iRψ′) exp[iψ(T)];  A* = R exp(−iψ);  ′ = d/dT.

Hence, we obtain

−i[2(R′ + iRψ′) + βR] = (h/2)R exp(iη);  η = σT + γB_T − 2ψ.  (3.93)

Comparing real and imaginary parts in (3.93) leads to

2R′ + βR = −(h/2)R sin η,  (3.94)

and

dη/dT = σ + γ dB_T/dT − (h/2) cos η.  (3.95)

We note that (3.94) is not an SDE, while (3.95) is an SDE that is written in the usual form

dη = [σ − (h/2) cos η] dT + γ dB_T.  (3.95)

To investigate the stability of the solutions of (3.94) and (3.95) we need to calculate the solution of the stationary FPE of (3.95).
This leads with (3.43a) to

d²P/dη² − d/dη[(u − v cos η)P] = 0;  P = P(η);  u = 2σ/γ²,  v = h/γ².  (3.96)

Since η is an angle we use instead of the conditions (3.31b, c) the periodicity condition

P(η + 2π) = P(η),  (3.97)

and we apply the normalization condition in the form

∫₀^{2π} P(η) dη = 1.  (3.98)

We obtain easily a first integral of (3.96)

dP/dη = (u − v cos η)P + C;  C = const.  (3.99)

To solve (3.99) we use the variation of constants and we obtain

P(η) = exp(uη − v sin η)[D + C∫₀^η F(x) dx];  F(x) = exp(−ux + v sin x);  D = const.  (3.100)

The rest of the procedure to calculate the solution of the stationary probability function is tedious and we refer the interested reader to the original article [3.7].

We return now to the calculation of the stability of the solution. We note that (3.94) has the trivial solution R = 0. Its stability is governed by (3.61) with y(t) = R(T); t = T. Hence, we obtain from (3.94)

ln[R(T)] = −(β/2)T − (h/4)∫₀^T sin[η(z)] dz.  (3.101)

Thus we obtain the Lyapunov coefficient in the form

λ = −β/2 − (h/4) lim_{T→∞} (1/T)∫₀^T sin[η(z)] dz.  (3.102)
We replace again the temporal average in (3.102) by the probability average [see also (3.67) and (3.68)] and this yields

λ = −β/2 − (h/4)⟨sin(η)⟩.  (3.103)

The detailed stability calculation can be found in [3.7]. The numerical results show that an increase of the parameters h and |σ| causes the trivial solution to lose stability. The interpretation of this result is simple, since the parameter h controls the amplitude of the noise and the parameter |σ| is the detuning.

Appendix A. Small Noise Intensities and the Influence of Randomness on Limit Cycles

We consider a second order SDE

ẍ + ng(x)ẋ + f(x) = √(2nT) ξt;  0 < n, T ≪ 1,  (A.1)

where ξt is a white noise term, n is a small damping factor and T a small noise intensity; we will specify the functions g(x) and f(x) in Equation (A.19). We suppose that Equation (A.1) possesses in the deterministic limit (T = 0) a limit cycle L₀ with a periodicity Γ of the solution: x(t + Γ) = x(t). We rewrite (A.1) in the form of (1.123) and we obtain from (3.41b) the corresponding bivariate FPE (we use a summation convention)

∂P/∂t = −(∂/∂x_k)(a_k P) + nT ∂²P/∂x₂²;  P = P(x₁, x₂, t, n, T).  (A.2)

The boundary conditions are given such that on the limit cycle L₀, P has the same periodicity Γ as the limit cycle solution.

Our aim is now the determination of the stationary solution of (A.2). The coefficient of the highest derivative in (A.2) is given by the small parameter nT. This means that the problem is singular (see Van Dyke [3.12]) and we may apply an asymptotic expansion to approximate the solution of (A.2). This routine is the WKB method that is well known in many disciplines of applied mathematics. Its general theory can be found e.g. in the book of Bender and
Orszag [3.3]. Thus, we use, as an asymptotic expansion to calculate the stationary probability function, the form

P_s(x, y, n, T) = exp[−W(x, y)/T] Σ_{k=0}^∞ p_k(x, y)T^k;  x = x₁,  y = x₂.  (A.3)

The functions W(x, y) and p_k are called the Boltzmann energy function and the expansion coefficient functions.

The substitution of (A.3) into (A.2) yields a hierarchy of equations. We use only the first two members of this hierarchy and we obtain

y ∂W/∂x − [ng(y)y + f(x)] ∂W/∂y + n(∂W/∂y)² = 0,  (A.4)

and

y ∂p₀/∂x − [ng(y)y + f(x) − 2n ∂W/∂y] ∂p₀/∂y + n[∂²W/∂y² − S(y)]p₀ = 0;  S(y) = [yg(y)]′.  (A.5)

Note that the leading order given by (A.4) represents a first order nonlinear PDE called the eikonal equation. By contrast, we see that the correction order equation (A.5) is a second order linear PDE. We obtain from (A.5), for small values of n, the periodic solution

n → 0:  p₀ = const.  (A.6)

Next, we investigate the boundary condition on the limit cycle L₀. In the first place we obtain, using the deterministic limit of (A.1),

on L₀: dW/dt = (∂W/∂x)ẋ + (∂W/∂y)ẏ = y ∂W/∂x − [ng(y)y + f(x)] ∂W/∂y,

but a consideration of (A.4) leads to

on L₀: dW/dt = −n(∂W/∂y)² ≤ 0.  (A.7)

Since the total derivative of a periodically varying function cannot decrease we obtain from (A.7)

on L₀: ∂W/∂y = 0,  (A.8)
    • The Fokker-Planck Equation 119and the substitution of (A.8) into (A.4) yields alsoT 3Won L0 : -r— = 0.oxHence, we finally obtainon Lo : grad w = 0 or W = const. (A.9)On the other hand we find the parametric lines of W = const.: Wxx+Wyy = 0 (we use subscripts to indicate partial derivatives) and thisleads tox = y; j/ = -ng{y)y + f(x)] + nW„. (A. 10)The determination of these lines given by (A. 10) is now ourprimary target. We follow here the considerations of Ben-Jacobet al. [3.14]. First, we note that in (A.10) appears Wy and the corre-sponding term seems to be nontraditional. We continue now to calcu-late Wy. To perform this task we determine the characteristic lines ofthe eikonal equation (A.4). The general theory of characteristic linescorresponding to first order PDEs is developed by Zauderer [3.13]and we give here only a brief outline how to calculate these lines fora PDE of the class (A.4). Thus we focus on the problemF(x,y,W,px,px)=0; px = W:E, py = Wy;x _ 9 Fx _ 9 F. p _ 3Fox oy opxPy =3Fdpy(A.llThe characteristic lines of (A.ll) are given by the five individualODEs that we can obtain fromdx = dy = dW = dp^ = dpy = ^"x ^y Px*x ~r Py^y -<*-x -**-yReturning to (A.4) we haveXx = -f(x)W„; Xy = Wx-nWyS(y); S(y) = [yg(y)},Px = y; Py = -[nyg(y) + i(x)} + 2nWy.
    • 120 Stochastic Differential Equations in Science and EngineeringHence, we obtain the parametric equations for the characteristiccurves of (A.4)x = y; y = -[nyg(y) + i(x)] + 2nWy;^ - = yWx- [nyg(y) + f(x)] W, + 2nW2y = nW2y; (A.13)^ = f (x)Wy; ^ L = -Wx + nWyS(y),where we used in the second line again (A.4).Now we are ready to calculate Wy. We introduce a generalizedHamilton functiony2H(z,y) = y + J i ;: PXf(u)du + n [g(y(u))y(u)-Wy(u,y{u))]du. (A.14)JzIn Equation (A.14) we use an integral along lines W = const, thatpasses from an initial point z to (x, y). (A.14) implies that H = const.along the parametric lines W = const, (see EX 3.13). We use (A.13)to verify the time derivative of the Hamilton function(^ = nyWy + 0(n2). (A.15)Finally we suppose thatH = H(W). (A.16)This yields with the second line of (A.13) to§ = K ^ 0 - ^ + O ( B ) 0 IW, = »K(W) + 0(n).(A.17)Equation (A.17) defines the function K(W) and serves now todefine for small values of n the quantity Wy. We substitute theresult (A.17) into the parametric lines of W = const, that are givenby (A. 10). Thus, we obtainx = y- y = -ny[g(y)-K(W)]-f(x). (A.18)Equation (A.18) has the same form as the deterministic limit of (A.l).The only difference is that the function g(y) in (A.l) is in (A.18)
    • The Fokker-Planck Equation 121replaced by g(y) — K(W). This means that the curves W = const.are a family of limit cycles of (A. 18) parametrized with K(W) aroundK = 0.We use now an example from the field of Fluid Dynamics. A flex-ible cylinder oscillates in the presence of an array of fixed cylindersunder the influence of the lift forces of a homogenous incoming flow(see Plaschko [3.15]). In this case we specify (A.18) tox = y; y = -ny{[s0 - K(W)] - qa2 + s2y2]} - (1 + nqa{)x.(A.19)In Equation (A.19) n stands for the mass ratio (the mass of the fluidthat replaces the cylinder over the mass of the cylinder, this param-eter has in Aerodynamics an order of magnitude n = 0(1(T3)). Theparameters so, s2, <7i, cr2 and q represent the linear and cubic damp-ing terms, memory terms and a fluid load term. The deterministicproblem is governed by (A.19) with K(W) = 0. Its limit cycle wasdetermined in [3.15] and it has the formLo:x2+ y2= 4R2+ 0(n); R2= ^ | ^ ° , (A.20)3s2where 2R denotes the radius of the limit circle. Equation (A.19) canbe understood as a problem with a modified linear damping term.Now we obtain from (A.19)x2+ 2 = 4 ^ 2 + K(W)-,0- ( A 2 1 )3soWe obtain from (A.21)3K(W)= 3s2y/2. (A.22)dyWe use now (A.17) and this yields with (A.22)3W dW dy _ Wy _ y K _ 2K~&K ~~ ~3y"9K _K^ -K^ ~ 3s^An integration of the last line leads toW(K) - K2/(3s2), (A.23)
    • 122 Stochastic Differential Equations in Science and Engineeringwhere we set the constant of the integration on the limit cycle (K = 0)to zero. Thus we obtain from (A.21) and (A.23) the final result forthe Boltzmann energy functionW{x,y) = 3s2 (r2/4-R2)2; r = x2+ y2, (A.24)and the approximation to the FPE is given byP8(x, y) = A0 exp[-A(r2- 4R2)2]{1 + T"1}, A = 3s2/(16T)(A.25)where Ao is a constant. The normalization is performed if we inte-grate (A.25) over the entire (x, y)-plane/•ool = 27rA0/ rP(x,y)dr. (A.26)JoIt is now convenient to pass to polar coordinates (r, </>). Firstwe note that the independence of (A.25) from <p indicates that theazimuth distribution function Z(</>) satisfiesZ(</>)d</> = 0. (A.27)The normalized radial distribution function G(r) is derived from(A.25) and (A.26). It has the formG(r) = 2vrA0r exp[-A(r2- 4R2)2]. (A.28)We give in Figures 3.2 and 3.3 a comparison of the theoret-ical predictions of the radial and azimuth distribution functionswith numerical simulations of the process (A.l). The correspondingnumerical routines shall be covered in Chapter 5. Finally we presentin Figure 3.4 a graph to get an impression in which way the noisedestroys the limit circle curve of the deterministic oscillator and pro-duces a banded diagram whose bandwidth increases with growingnoise intensity T.
    • The Fokker-Planck Equation 123Fig. 3.2. A comparison of the numerically computed and the theoretically predictedradial distribution function (A.28); n = <5 = 0.001, T = 0.05.600040002000600040002000-1,57 -1,047-0,5233 0 0,5233 1,047 1,57ifFig. 3.3. A comparison of the numerically computed and the theoretically predictedazimuth distribution function (A.27); parameters as in Fig. 3.2.
    • 124 Stochastic Differential Equations in Science and Engineering0,2yo-0,2-0,2 0 0,2xFig. 3.4. A single realization of the noisy oscillation phase-diagram; n = <5 = 0.001,T = 0.05; ICs: a;(0) = 0, j/(0) = -0.2.Appendix BWe discuss here two different concepts to study the stability of fix-points (FP) of SDEs. Both strategies have their merits in the limitof deterministic problems and we give a brief introduction to thecorresponding theories for ODEs. The stability of the FPs is inves-tigated in the sense of Lyapunov, or we concentrated on asymptoticstability.B.l. The method of Lyapunov functionsThis method is well established in investigations of the stability ofdeterministic ODEs. We present here a brief survey of these deter-ministic problems. Given a system of n autonomous first order ODEs—— = rix) x t -ttn; x = [Xi, X2, • • •, xn)Fi(0) = 0; j = l,2,...,n; t>0. (B.l)A FP can always be replaced to the origin (x = 0) and this isexpressed in (B.l) by Fj(0) = 0./!/) iIf 11
    • The Fokker-Planck Equation 125We introduce now a positive definite function, the Lyapunov func-tion, L(x) that satisfies the following three conditions in (B.2)dL _ ^ ,^dL(x)(B.2)(i) L(0) = 0; (ii) L(x)>0; (iii) — = Fp(x) —^ < 0,(it OXr,where the conditions (ii) and (iii) hold in an open neighborhood(|x| > 0) of the FP. If we can find such a Lyapnov function then theFP is stable in the sense of Lyapunov [see (3.60)].In the stochastic case we wish to investigate the stability of theFPs of the autonomous SDEsdxj = a,j(x)dt + 6jm(x)dB™; j = l,2,...,n; m = 1, 2,... ,K.(B.3a)A FP of (B.3) is defined bydxj = 0 => aj(x0) = 0; bjm(x0) = 0;Vj = l,2,...,n; m = l,2,...,K. (B.3b)FPs of 1-D SDEs studied in Section 3.6.1 are analyzed in EX 3.14.Lyapunov functions can be used, however, in the case of higher orderSDEs. To verify this we introduce a Lyapunov function that satisfiesthe conditions of (i) and (ii) of (B.2). Its differential [see (1.127)] isgiven bya m^ ~ +?bmrbkr?Sr flr, ) dt + bmi^dBt- ( B- 4 )A stable system has now the property that L does not increase, i.e.dL < 0. (B.5)The latter equation means that the deterministic definition of thestability holds also for every single trajectory x(t,co). However, wecannot investigate every trajectory and this condition is thus imprac-tical. Hence, we require only that the average of dL is negativedefinite<dL>=<(<-»£;+s^slfe)>*+<"-iS;dB)s°-
    • 126 Stochastic Differential Equations in Science and EngineeringWe see that the second term on the right hand side of the latter linevanishes. Thus we derive in the stochastic case as the third stabilitycriterionU(L) = a m - h -bmrbkr- — < 0. (B.6)Example 1We consider the 1-D population growth model (2.1) with the FP inthe origindx = rxdt + uxdBt; r,u — const.Its exact solution can be written in the formx(t) = x(0) exp((r - u2/2)t{l + uBt/[t(r - u2/2)]}). (B.7)To study its stability, we use the relation— -»• 0, t - • oo. (B.8)To prove (B.8) we can use the law of the iterated logarithm (1.63)or we can apply alternatively the way of EX 3.15 showing that allmoments of the right hand side vanish. (B.7) and (B.8) imply nowthat the FP is stable forr - u2/2 < 0, (B.9)and unstable elsewhere.Now we apply the Lyapunov function approach and use as a trialfunctionL(x) = xs; s > 0. (B.10)It is easy to verify that this choice of the Lyapunov functionsatisfies the first and second condition of (B.2). The substitutioninto (B.6) yieldsU(|a;|s) = 0.5sxsu2[{s - 1) + 2r/u2}. (B.ll)If we choose0 < s < 1 - 2r/u2, (B.12)we prove, with the choice of the Lyapunov function (B.10) and thecompliance of (B.9), that (B.6) is satisfied. Hence, we conclude thatthe FP stable, provided that Equation (B.9) is fulfilled.
    • The Fokker-Planck Equation 127Example 2We consider a damped nonlinear pendulum where the damping aswell as the frequency are stochastically perturbedx + S(l + a&)x + (1 + /?&) sin a; = 0, (B.13)where 5 is the damping and a > 0, (5 > 0 are intensity constants,respectively. The deterministic and the stochastic FP lie in the originand the linearized theory of deterministic problems shows that thisFP is stable for S > 0 and that at 5 = 0 a Hopf bifurcation appears.We study now the stability of the stochastic FP and rewrite (B.13)in the standard formdx = ydt; dy =—(5y + sin x)dt — (a8y + B sin x)dBt. (B.14)With (B.6) we obtain the operatorU = y- (5y + sin x) -—I— (a6v + B sin x) r—^.dx K y Jd y 2 V y y dy2We use now as trial Lyapunov functionL = Ax2+ Ba;y + y2+ 2Dsin2(x/2), (B.15)where the constants A, B and D are unknown as of now. (B.15) is aquadratic form plus a multiple of an integral of the nonlinear term/ sin(z)d,2.JoThis leads toU = xy(2 A - BS) + y2(B -28 + a2S2) + y sin(ir)(D - 2 + 2aBS), ^ / „ ^osin(x)-xsin(x) I B-B2——To comply with the condition U < 0 we use the relations betweenthe constantsA = BS/2; D = 2(1 - aB8); B - 25 - a252< 0; B//32> 1.(B.16)The Lyapunov function takes now the formL=(y+T)2 +T i1~ I)x2+4(1~a(3S)sin2^/2)- ^B-17)
    • 128 Stochastic Differential Equations in Science and EngineeringTo achieve the condition L > 0 for (x, y) ^ (0,0) we require that0 < B < 25; a(38 < 1. (B.18)A recombination of condition (B.16) yields finally« 2< ^ 2 with/3 = a7 , 7 > 0 ; B < - ^ . (B.19)With constants complying with (B.19) we can conclude now that theFP in the origin is stable.B.2. The method of linearizationWe begin with a linear nth order ODE with constant coefficientsj/(")(x) + biy^-lx) + ••• + bn.iy{x) + bny(x) = 0;bj = const.; Vj = 1, 2,... ,n, (B.20)where primes denote derivations with respect to the variable x. (B.20)has the FP(y(0),y(0),...,^-1)(0))=0. (B.21)The Routh-Hurwitz criterion (see [1.2]) states that this FP is asymp-totically (for t —> oo) stable if and only if the following determinantssatisfy the relations (see also Ex 3.16)h b3 0Ai = 6i > 0; A2 = bib2 > 0; A3 = 1 b2 0 > 0. (B.22)0 6i b3We generalize now the concept of the Routh-Hurwitz criterion forthe following class of second order SDEsy" + [ai + Ci(t)]y + [aa + Q2{t)y = 0;[a.26)aj = const.; Q(t)dt = const. dB]; j = 1,2,with the two white noise processes [see (2.41)](Cj(t)Ck(s)) = 6(t - s)Qik. (B.24)This stochastic extension was derived by Khasminskiy [3.17]. Itstates that the FP at the origin is asymptotically stable in the meansquare IFF the following conditions are meta1 > 0; a2> 0; 2axa2 > Qna2 + Q22. (B.25)
    • The Fokkei—Planck Equation 129Example (Stochastic Lorentz equations)We consider a Lorentz attractor under the influence of a stochasticperturbation. The SDE has the formdx = s(y — x)dt,dy = (rx — y — xz)dt + ay dBt, (B.26)dz = (xy — bz)dt; b:s,r > 0.This system of SDEs is a model for the development of fluiddynamical disturbances and their passage to chaos in the process ofheat transfer. The dimensionless parameters in (B.26) are given by s(the Prandtl number) r (proportional to the Rayleigh number) andb (a geometric factor). (B.26) has the deterministic FPs( 0 , 0 , 0 ) ; (±y/b{r-l), ±y/b(r-l), r - 1).However, in the stochastic case (a ^ 0) only the FP in the originsurvives.We use now the linearization routine to study the stability of theFP in the origin. The linearization of (B.26) leads todu = s(v — u)dt,dv = (ru - v)dt + avdBt, (B.27)dw = —bwdt.We infer from (B.27) that the component w is decoupled from the restof the system and because of w(t) = w(0) exp(-bt) this disturbanceis stable for b > 0. Hence we will investigate only the stability of thesystem (u,v) taking advantage of the first two equations of (B.27).However, this system is not a member of the class (B.23). To achievea transformation we eliminate the variable v by use of v = u +(du/dt)/s. This yieldsu + [(1 + s) - aC{t)}u + [s(l - r) - as((t)]u = 0. (B.28)A comparison with (B.23) and (B.24) leads then toai = l+s; a2 = s(l-r); (i =-a((t); (2 = -as((t);(B.29)_ / a2a2s Q" V a 2S a2s2)-
    • 130 Stochastic Differential Equations in Science and EngineeringThe Khasminskiy criterion applied to our example reads nowl + s > 0 ; s ( l - r ) > 0 ; 2(l + s)(l -r) > a2(l -r + s). (B.30)Since we have by definition s > 0 the first part of (B.30) is automat-ically satisfied, while the second part gives r < 1. Finally, the thirdpart of (B.30) yields for small intensity constantsr<1-207)+0^- (R31)The origin is stable in the deterministic limit for r € [0,1). Hence,equation (B.31) tells us how the stability is reduced by a small inten-sity noise a £ [0,1).The study of Hopf bifurcations is beyond the scope of this book.The interested reader can find, however, pertinent literature in [3.4],Ebeling et al. [3.18], Arnold et al. [3.19], Schenk-Hoppe [3.20], [3.21],[3.22] and Keller and Ochs [3.23].ExercisesEX 3.1. (a) To find the solution to the random walk problem (3.13)use the initial condition Zn(0) = <5no and introduce the probabilitygenerating function (u is an auxiliary variable)ooG(u,t)= £ unZn{t).n=—ooShow thatG(l,t) = 1; G(l,t) = (n(t)>, G"(l,t) = (n2(t)) - <n(t)>; = 3/3*.(b) Consider a random walk on a straight line with step size Ax anda time At between two steps. Equation (1.54) can be reduced toPi(nAx, sAt) = 2_.Pi(mAx, (s — l)At)mx p1|1(nAx,sAt|mAx, (s — I)At)).Assuming equal probability of the steps to the right and to theleft Pi|i — 0.5(5m;Tl+i + 5mjTl-i) shown that we obtainpi(nAx, sAt) = 0.5[pi((n + l)Ay, (s - l)At)+ P l ( ( n - l ) A y , ( s - l ) A t ) ] .
    • The Fokker-Planck Equation 131Derive from the latter equation the quantity[pi(nAx, sAt) - pi(nAx, (s - l)Ai)]/Aiand apply the limitx = nAx; t = sAt; Ax -> 0; At -»• 0; D = (Ax)2/(2At) = 0(1)to derive a diffusion PDEdpi = r ) 9 2P idt dx2This is the FPE with A = 0; B = 2D.EX 3.2. Calculate the expression for d(y2)/dt and use, in analogyto the derivation of (3.17), the relation^(y2) = jdyjdy[(y - y1)2+ 2y(y - y)]W(yy)Z(y,t).Verify that the evaluation of the last line leads to^t(y2} = (a2(y))+2(ya1(y)).The variance a satisfies a2 = (y2) — 2(y)(y). Verify that this yields*2 = (a2(y))+2((y-{y))al(y)).EX 3.3. To calculate the transformation of the third term in (3.28)use the relation(B^T)" = (BT)V + 2(BT)> + (BT)<,to verify thatf(BT)^"dz = /"(BT)Vdz.EX 3.4. Derive the backward Chapman Kolmogorov equation. Usein (3.4) u = At and v = t, y% = z; y2 = w and y = x (x, w and z arebackward, intermediate and forward spatial variables, respectively).
    • 132 Stochastic Differential Equations in Science and EngineeringHint: This yieldsT(z|x; t + At)= / dwT(wx; At)T(zw; t).Multiply this equation by an arbitrary function ip{x, t) and note thatx < w. Apply the other considerations that lead from (3.21) to (3.29).The result^T{zx;t) = -^[A(x,t)T(zx;t)] - ^^[B(x,t)T(zx;t)].EX 3.5. Consider the three-term Kramers-Moyal equation[see (3.32)]9P 3 1 32I d 3Multiply this formula by Ay and (Ay)2(see (3.33a)). Show that thethird term in this Kramers-Moyal equation does not affect the firstand second order terms in (3.34).Hint:V d 3 = {yv) - Sv ; y d 3 = (yv) - 6{yv) + 6^ .EX 3.6. Continue the calculation of (3.33b) for k = 4 and 5 andfind formulas in analogy to (3.33c) to (3.33e).EX 3.7. Consider the averageS = / J a(y(s),s)dsf b(y(w),w)dBl= ^2(a(ym, sm )(Wi - tm)b{yn, sn)(Bn+i - B„)).n,mThe integrand is a non anticipative function. Thus, we obtain fromthe independence of the increments of Brownian motion that theaverage written above vanishes: (S) = 0.EX 3.8. Use the third line of (3.38) to calculate dT3/di; T3 =((Ay)3). Take advantage of (2.14) and compare the results with T3calculated from (2.19a).
    • The Fokker-Planck Equation 133EX 3.9. Compare the Ornstein-Uhlenbeck-problem (2.14) withSDE that arises from the coefficients (3.49). Use a scaled Brownianmovement (see (1.65)) to perform this juxtaposition.EX 3.10. Consider (3.54) and verify1Bm =N,m6l(2(3/K)1/2z}exp(z2/2)Am(z)dz- 1 / 2Am(0)/N,The symmetry of the eigenfunctions (3.50) leads to B2m+i = 0. Even-tually we obtainp(x,t) = y^ exjp(-2nt)B2nMnn=0We observe that (3.38a) tends for t —• oo to its asymptotic limit.This is the stationary solution l i m ^ o o p ^ i ) = Boexp(—(5x2/K).Note also that we can determine the normalization constant Boin the form1 = / p(x, oo)dx =$> B0 = ZKTT/(3.EX 3.11. To prove (3.70) we put the first term in (3.69) to zero. Weobtain with the use of the first integral of the PD that leads to (3.63)1A- 2 | ( f / g ) :f ( 2 f / g 2- g / g ) - - g g "Ps dx+ / fg/gPsdxP,dx.The first part of the last line gives Equation (3.70). In the secondintegral we apply an integration by parts for the second term wheresubstitute (3.63). This shows that the second integral vanishes.EX 3.12. (a) Show that (3.62) is equivalent to da;g(x)odBt.(b) Solve with the use of (2.35) the transcritical casedx = (ax — x2)dt + axodBt-f(x)dt +
    • 134 Stochastic Differential Equations in Science and EngineeringEX 3.13. The Hamilton function H is defined by (A.14). Using(A.10) show that H = const, along W = const.EX 3.14. Find possible FPs of the 1-D SDEs (3.72), (3.74) and(3.76) of Section 3.6.1.EX 3.15. Verify that all moments of Bt/i vanish for t —> oo.EX 3.16. The Routh-Hurwitz matrices An = (ay) are in (B.22).They are constructed in the following way. First we set the coefficientof ?/n) to bo = 1. Then we put together the first row as a sequentialarray of odd coefficients6i,63,...,b2fc+i,0,..., Vl<2fe + l < n .Each subsequent row is then built according toCij = b2j-i Vi > 2, 0 < 2j — i < n and ay = 0 otherwise.i) Construct in this way A2, A3 and A4.ii) Verify the stability criterion for the case of a deterministic lineardamped and/or excited pendulum b ^ 0, 62 = A2.EX 3.17. Study the stability of the FPs of the pendulum (B.14)with a linearization routine.
    • CHAPTER 4ADVANCED TOPICSIn the first section we treat stochastic partial differential equations(SPDE). The second section is devoted to an example of the influenceof stochastic initial conditions on the evolution of partial differentialequation (SPDE). In the third section we give a brief introductionto stochastic eigenvalue problems.4.1. Stochastic Partial Differential EquationsWe investigate here a SPDE that includes an additive stochastic termL$ = aWxt; $ = $(x,t), (4.1)with a linear operator containing one space (x) and one time vari-able (£)2 ~2 2 p.L= E a^ ^ ^ + E ^ + c;k,m=l Sft S mfc=l S Kx0<x<xi; to<t<ti, (4.2)(£i,6) = 0M); a*™ = afcm(£i>6);&* = &*(&, 6); c = c(^i,6)- (4.3)In this SPDE a stands for an intensity constant and Wxt denotes theadditive stochastic term, a two-dimensional Wiener sheet [see (1.66)to (1.70)]. We have to supply boundary and initial values to theproblem (4.1) such as*(aro,*) = «o(*); $(x1,t) = u1(t), $(Mo) = T0(x). (4.4)We solve (4.1) with the method of Greens functions and weobtain the solution to (4.1)$(x,t) = $o(x,t)+a dy dsG(x,t,y,s)Wy8. (4.5)Jxo Jt0135
    • 136 Stochastic Differential Equations in Science and EngineeringThe homogeneous solution satisfies L3>o = 0. The inverse operator ofL is labeled with L_ 1and we define the Greens function byG(x, t, y, s) = L-^y - x)8(s - t), (4.6)and we note that the Greens function G is a deterministic function.Now we specify the stochastic term Wxt- We recall that for anordinary SDE we use dBt = £tdt [see (1.77)] where £t is the whitenoise and Bj is the Brownian movement. In a two-dimensional (2-D)setting we have32Ww»- = w (47)Equation (4.7) characterizes now the most general 2D white noise.The space (x or y) and the time (t or s) variables are, however,independent and we can write [see (1.68)]Wys=W(y)W(s); W(y) = ^ ; W(s) = ^ , (4.8)where B/- is the Brownian movement for the fcth coordinate.The calculation of the delta functions and the determination ofthe Greens functions is performed for a finite region with the initial-boundary conditions (4.4) and the use of an eigenfunction expansion.To determine the statistical moments of the solution (4.5) wherey and s are independent variables we use(W(y)W(s)) = {W(y))(W(s))=0;(W(yi)W(y2)W(Sl)W(S2)) = <W(2/i)W(j/2))<W(si)W(s2)> (4.9)= (2/1 Ay2 )(si As2 ).Using (4.9) we rewrite (4.5) in the form$(x,t) = $0(a;,*) + a / dBy dsG(x,t,y,s)dBs. (4.5)•Jxo JtoWe perform the average and we take advantage of(dBydBs) = (dBy)(dBs) = 0.Thus, we obtain the mean value of the solution<$(x,t)) = $oOr,S/), (4-10)
    • Advanced Topics 137and this is in analogy to linear ordinary SDE, the deterministic limitsolution (a = 0) of the problem. Now we calculate the variance ofthe solution (4.5) and this yields in first placeI fX pX ptVar($) = a 2/ / dBy / dB, / dBsJx0 J XQ Jt0x / dBuG(x,t,y,s)G(x,t,z,u)Jt0I PX px pt pt/ dBy / dB, / dBs / dBuJx0 J X0 Jt0 JtnIX0 J X0 Jt0x G(x,t,y,s)G(x,t, z, u). (4-11)To simplify this integral we use the Dirac-function expansion of thequantity<dBj,dB*dB8dBu) = (dBydB2)(dBsdB„) = 6(y - z)5(s - u),where we used again the independence of the space (y, z) and thetime (s, u) variables. Thus, we obtainVar($) = a2f dy f dsG2(x,t,y,s). (4.12)Jx0 Jt0We can infer from (4.12) that the variance exists only for square-integrable Greens functions. We will find later in this section that thisis a general criterion for the existence of stochastic solutions to SPDE.Now we calculate an example:Example (The stochastic cable equation)$t = $xx-$ + a W(s)W(t); 0 < x < vr; t>0;(4.13)$a:(0,t) = $a:(7r)t) = $(x,0) = 0.We use the deterministic eigenfunctions that are given byVt = VXx ~ V; Vx(0, t) = VX(TT, t) = 0, (4.14a)and the Greens function takes the formooG(x,t,y,s) = J2Vk(x,t)Vk(y,-s) (4.14b)fc=0
    • 138 Stochastic Differential Equations in Science and Engineeringwhere Vk(x,t) and Vk{y,s) are the eigenfunctions satisfying (4.14a)for the pair of variables (x,t) and (y,s), respectively.The separation V(x,t) = exp(-Ai)U(a;) leads to U"-(1-A)U = 0and the compliance with the boundary conditions yieldsVfc = exp[-(l + k2)t]XJk(x); k = 0,1,...,(4.15)1 12Tj0 = — ; jk = J-cos(kx) /k>l.The eigenfunctions are orthonormalized and the orthogonalityrelation is given/ XJk(x)XJm(x) dx = 8km.JOThus we obtain the Greens functionooG(x, t, y, s) = ^ Jk(x)Uk(y) exp[-Afc(i - s)]; k = 1 + k2.k=0The application of (4.5) leads to/>7r rt$ = a / dy dsG(x,t,y,s)WysJo Jooo „O T „i; Vuf c (x)exp(-Af c t) / dBj,Ufc(y) / dBsexp(Afcs). (4.16)u-n JO JOOOafc=oNote that we set $o = 0 since (4.16) already satisfies the initialcondition <&(x,i = 0) = 0. We can put (4.16) into a more elegantform if we defineZtfc= [ dBs f dByJk(y) = Btpk; (3k = f dByU*(y), (4.17a)Jo Jo Jo
    • Advanced Topics 139andAk(t) = f dBsexp[-k(t-s)} I* &ByJk{y)Jo Jo= (3k [ dBsexp[-Afc(i - s)]; Afc(0) = 0. (4.17b)JoThe differential of (4.17a) is dZ^ = (3kdBt and the substitution ofthis line into (4.17b) yieldsAk= [ dZsfeexp[-Afc(t-S)].JoYet the differential of the latter line takes the formdAfc= dZ^ - Afc dt / dZ*exp[-Afc(£-s)],Joand we obtain the ordinary SDE (we do not use a summationconvention)dAfc= dZtfc- AfcAfcdt = -AfcAfcdt + (3kdBt. (4.18)Equation (4.18) represents a special case of the Ornstein-Uhlenbeck SDE (2.14) and we obtain as solutionAk(t) = /3k [ exp[-Afc(t - s)]dBs. (4.19)JoWe use the variable Akto simplify the formula the solution (4.16)and this yieldsoo<P(x,t) = a^Uk(x)Ak{t). (4.20)fc=oThe mean value is (Ak(t)) — 0 and the autocorrelation functionhas the formC(Afc) = (Ak(t - r/2)Ak(t + r/2)) = <j32k) f exp[-2Afc(i - s)} ds,Jowhere the time 7 was defined in (2.47). Thus we obtain an auto-correlation function that is in an analogy to (2.48) given byC(Afc) = (/3fc2)[exp(-Afc|r|) - exp(-2Afct)]/(2Afc). (4.21)
    • 140 Stochastic Differential Equations in Science and EngineeringThe relation (4.20) is used in EX 4.1 calculate the variance of thesolution $(x,t) and to compare this result with the prediction of thegeneral variance formula (4.12). XWe continue now to investigate SPDE in higher space dimensions.We do this for the stochastic Poisson equation. Thus we investigatethe problem n-D problemA n $ = aU(x)Wn; $(x); x = (xi,... ,x„);(4.22)3232dBXl dBxdx 9x2 ndxi dxnThe function U(x) is a deterministic function and we specify thedomain of (4.22) as the entire Rn.The 2D and 3D Greens functions are well known (see Morse &Feshbach [4.1]) and we obtain the 2D Greens functionG2(x, x, y, y) = ~ ln[(x - xf + (y - y) (4.23a)and the 3D Greens function has the formG3(x, x,y,y,z,z) =4TTRR2- [(x - x)2+ (y- yf + (z - z)2]. (4.23b)Thus, we obtain the 2D solution$(x,y) = ^ J dBu J dBvJ(u,v)ln{(x-u)2+ (y-v)2}, (4.24a)where we put the homogeneous solution, and with this the averageof $, to zero. The variance is calculated with (4.12) and we obtainVar($) = (a/47r)2f du f dvJ2{u,v) ln2[(x - uf + (y - vf.(4.24b)We recall the criterion of square-integrable Greens functions. Weextend this criterion now to the square integrabihty of the product of
    • Advanced Topics 141the Greens function and the coefficient function U(x). As an examplewe setU(u,u) = y/6(u)5(v). (4.25)The substitution of (4.25) into (4.24) leads to the varianceVar($) = (a/4vr)2In2(a;2+ y2). (4.26)It is instructive to compare (4.26) with the result of the deterministic2D Poisson equation/ 9 29 2U +Vy) ^ = aS{x)6^with the solution<S>d = ^n{x2+ y2).Thus, we see that the stochastic 2D Poisson equation (4.22) withthe "charge" (4.25), leads to a zero mean solution and a variance thatis the square of the deterministic solution (4.26). A similar result isobtained in EX 4.2 for the case of the 3D Poisson equation.In the case of SPDE in dimensions higher than three we cannotrepresent the solutions with stochastic functions but only in formof distributions. This subject is beyond the scope of this book andwe refer the reader for investigations on this field to the article ofWalsh [4.2] and the books of Holden et al. [4.3] and Krylob et al. [4.4]and Wojczinsky [1.10].4.2. Stochastic Boundary and Initial Conditions4.2.1. A deterministic one-dimensional wave equationAn interesting subject is the study of the evolution of determinis-tic PDE subjected to stochastic boundary or initial conditions. Weinvestigate in this section the solution of a simple deterministic waveequation that is of importance in the study of nonlinear acoustics (seeWhitham [4.5]) and we will introduce a stochastic initial conditionin the next section.
    • 142 Stochastic Differential Equations in Science and EngineeringThe problem is to find a solution of the PDE that governs thespace (x) and time (t) evolution of the wave velocityov ov . . , ,, ,__N—- =—v—; v = v(x,t); T = t-x/c. (4.27)ox <r orThe variable r is the convection time and we use the initial conditionv(0,t) = u0sin(w,t). (4.28)In Equation (4.27) the parameter A is given by A = (1 + 7)/2 (7 isthe ratio of the specific heats) and c is the sound speed at rest. Toconstruct the solution to (4.27) we introduce the function$(x,i) = i>osin LO[ tc +A$(4.29)First we can simply find that $(0, t) complies with the initialcondition (4.28). We substitute in (4.28) r and we obtain with theuse of the Mach number MQ = VQ/C$(x,t) = uosin 1c 1 + Mo$>/voJ yEquation (4.30) defines the function $ only implicitly. We can, how-ever, solve (4.30) explicitly for the convection time and we obtain, , , COX AMn<I?/t>0wr = arc sin($v0 ) " ;" • (4.31)We differentiate (4.31) and this yields9$ to 9$ u> AMo$/wo9r G ox cGl + AM0$/u0 ^ ^where G is a certain function. Without giving the details of G, thatwe determine in EX 4.3, we eliminate G from the two equationsin (4.32) and this yields9$ = 1 M0$/v0 8$dx c 1 + AMo*/u0 or [ }Many applications of acoustics are, however, characterized bysmall values of the Mach number. Hence we obtain with an expansion
    • Advanced Topics 143from (4.33)9$ __ A 3 *dx c2drand this is the PDE (4.27). Since $(0, t) satisfies also the initial con-dition we can infer that the function that arises from an expansionof (4.30)$(x,t) = fosin u>[ T + -AM0$/«o (4.34)is the solution of the PDE (4.26) and satisfies the initial condition.We replace now $ by v and use a Fourier series expansion toinvestigate the frequency components of its discrete spectrum. Wewrite (4.34) in the formv(x, t) = VQ sin(u;r + /3V/VQ); (5 = UJXMOX/C. (4.35)We obtain a Fourier series fromoov/v0 = Y^ An(/?) sin(amr). (4.36)71=1The coefficients of this series are given by2 rAn(/?) = - / sin(iOT +(3v/vo)sm(ionT)d(u>r). (4.37)71" JoWe substitute £ = UIT+(3V/VO, V/VQ = sin£ and we obtain from (4.37)A„(/3) = - rs in(£)sin[n£ - n/3sin(0][l - /?cos(0]d£. (4.38)7T JOTo evaluate (4.38) we take advantage of the theorems of the har-monic functions and of the recurrence formulas of the Bessel function(see Abramowitz and Stegun [1.3]). We use the integral representa-tion of the Bessel functioni rJfc(n/3) = — / cos[ka — n/3sin(a)]da,7T Jo
    • 144 Stochastic Differential Equations in Science and Engineeringwhere 3n(z) denotes the Bessel function of order n. Thus, we obtainfrom (4.38)Ki{P) = Jn-l(-2) - Jn+l(z) - -[Jn-2(z) ~ Jn+2^)]= 2^ ± - z = n(5. (4.39)zThe substitution of (4.39) into (4.36) yields-} = f > ^ ) 1 / 2sin(no;r); { E ^ = H ^ M . (4.40)v0 ^ nJ v" K nJnf3n=lThe relation (4.40), that is sometimes called the Bessel-Fubini for-mula, represents now the explicit solution to the PDE (4.27) forsmall values of the Mach number Mo- The latter condition limits thevalidity of (4.40) to a space variable in the order of 0(x) = 1.4.2.2. Stochastic initial conditionsNow we concentrate on a stochastic wave problem where the stochas-ticity is introduced by the application of a stochastic initial conditionthat includes slowly varying amplitudes and phases. We follow herethe ideas of Rudenko & Soluyan [4.6], Akhmanov & Chirkin [4.7] andRudenko & Chirkin [1.9]. First, and we apply the initial conditionv(x = 0,i) = vo(£lt)sm[u>t + ip(ttt)}, Q = ecj, 0 < £ « 1.(4.41)The relation (4.41) represents a randomly modulated wave with astochastic phase ip. In the following we will assume that the param-eter e is sufficiently small such that we can use the solutionv(x, T) — VQ(CIT) sin LOT + -^UJV(X, T)X + ip(Qr) (4.42)where r is again the convection time defined in (4.27). (4.42) com-plies with the PDE (4.27) and with the initial condition (4.41). Werewrite (4.42) with the use of new variablesV{z,9) = A(e)sm[e + zV(z,6) + <p(9)}; 9 = UJT; V = v/a;A(0) = v0(nT)/a; ip{6) = V(fir); z = Xcoax/c2; a2= (vg),(4.42)where z is the nondimensional position of the wave.
    • Advanced Topics 145To find the explicit solution of (4.42) we take again advantage ofthe Bessel-Fubini formula and we putooV(z, 9) = A(0) sin[9 + zV + y?(0)] = ^ B" 0 ) sin{n[0 + p(0)]}.71 = 1The inverse of this relation is given byBn(z) = - A{9) ^ sin(0 sm[m{9 + ip)]d(6 + ip),Ti" Jowith £ = # + zV + </?; 0 + <p = ^ — zk. sin(£). Thus we obtainBn(z) = -A(0) / sin(0 sin[m(£ - zA sin(£)][l - zk cos(£)]d£.The evaluation of this integral yields in analogy to (4.40) to theexplicit solution with the series expansionV(z, 9) = 2 J2 3n[nzA{9)]sm{n[9 + y(g)]}. (4.43)7 1 = 1It is easy to verify that (4.43) tends in the limit z —> 0 to the intialcondition (3.41). We use (4.43) to calculate the correlation function ofthe velocity V. We employ P/i(A, A, ip, ip) as the tetravariate distri-bution function of a stationary normal process. The amplitudes havezero mean and unit variance and they vary in the interval (—oo, oo)whereas the phases vary in [0, 2-7r]. We put 9 — 9 + LOT; A = A(0),A = A(9), ip = (p(9), ip = ip{9) ((A, (p) and (A, <p) are two pairs ofcylindrical coordinates) and it was shown in [4.6] that this leads toC(z,V) = (V(z,9)V(z,9))0 0A /-OO /"OO= V 5 / dAAJn(nzA) / dAAJ^mzA)^ nmz1J0 J071,771=1/•2ir P2TTx / dnpsm[n(ip + 9)] / dipsm[m(<p+ 9)}Jo JoxP4 (A, A, ¥>,¥>)• (4-44)The four-dimensional distribution was derived in [4.6]P4(A, A,<p,<p) = Nexp{-p(r/)[A2+ A2- 2AA6 cos(<// - ip - r?)]},AA 1P = P(ri),V = UT,l3{0) = l; N = ^ ^ _ ^ ; P = 2 ( 1 _ / ? 2 ) -(4.45)
    • 146 Stochastic Differential Equations in Science and EngineeringNote that the function b(j]) is the envelope of the input-signalC(z = 0,r/) = 6(T/) cos(77). (4.46)We begin the evaluation of (4.44) with the determination of thephase integrals. To achieve this goal we must use the expansion (ln(z)is the modified Bessel function of order n (see [1.3]))exp[T cos(ip — ifCOv)} = ^2 SfcIfc(T) cos[Hf -<p~v)]k=0T = AAbp(rj); s0 = l, sk = 2Vk>l.(4.47)The evaluation of the amplitude-integrals leads eventually to thecorrelation function (see [4.6])C(z, rj) = 2 J2 ^ ~ V UKv)(nz)2] coS(nrj), (4.48)rc=l(nz)sWe compare now the growth (or decay) of the harmonic intensitiesfor the deterministic (E^) and the stochastic case (E*). We haveaccording to (4.40) and (4.48)4 2E^ = -jl(a) and E„ = - exp(-a)In[a; a = (nz)2. (4.49)a aWe obtain E^ from (4.48) for r = 0 and a = VQ. A graphical resultof this comparison is given in Figure 4.1.1,2-1Eds1 -n0,8-0,6-0,4-0,2-0-— r r . — • — ^ — • • - •• -• — * - 1 1 1[1,2-10,8-0,60,40,200 0,5 1 1,5Fig. 4.1. Variation of the harmonic (continuous lines) and stochastic (broken lines)intensities with the nondimensional position of the wave z. Upper pair of lines funda-mental mode (n = 1), lower pair of curves: second harmonic modes (n = 2).
    • Advanced Topics 1474.3. Stochastic Eigenvalue Equations4.3.1. IntroductionTo consider the nature of random eigenvalue equations we consider aboundary equation problem for an ODE that includes a parameter ALu = u; u = u(x,); x < x < X2(4.50)gn gn-1 gL=8 ^ + an~^X"A)9^=i +" + ai(xAfe + a° ^ A)where L is a linear nth order differential operator. The boundaryconditions (BCs) are given byZi(u) = 0; i = l,...,n, (4.51)and they involve conditions at the endpoints x and x^. We assumethat u(x, A),..., un(x, A) is a fundamental set of (4.50). This leadsto the general solutionnu(x, A) = y^KjUj(x, A), Kj = const. (4.52)3 =1The application of the BCs yieldsnJ]KjZi(«j(C,A)) = 0; i = Xlori = x2- i = 1,2,... ,n. (4.53)i=iEquation (4.53) forms a set of n linear homogeneous algebraic equa-tion for the coefficients KJ; j = 1,..., n. Hence, non-trivial solutionsexist only if the determinant vanishes and this gives the eigenvaluerelation$(A) = det[Zi(uj(^,A))]=0. (4.54)Solutions to (4.54) exist only for a parameter A, called the eigen-value, that satisfies certain conditions. There exists in particular adiscrete spectrum if the eigenvalue can be labeled with an integernumber k and complies with Ao < Ai < • • • < An < • • •. By con-trast we obtain a continuous spectrum if the eigenvalue is dennedon a continuous interval a < A < b. There exists also the combina-tion of an eigenvalue spectrum that contains both a discrete as well
    • 148 Stochastic Differential Equations in Science and Engineeringas a continuous part. Functions corresponding to the fcth eigenvalueUfc = u(x,k) are called eigenvectors or eigenfunctions.A stochastic eigenvalue problem arises when either the differ-ential equations or the BCs contain a stochastic parameter 9. Thismeans that we consider in Section 4.3 only ODE with random coef-ficient functions but not SDE. The main goal is the determinationof the PD of the eigenvalues PA(A). The ODE (4.50) is linear andwe can in principle calculate its solutions. However, we encounter inmany applications a nonlinear problem with a nonlinear operatoror nonlinear BCs or both the operator and the BCs are nonlinear.This means that the exact solution of (4.50) is inaccessible and exactsolutions of the PD PA(A) are impossible. Furthermore, there is invarious applications in chemistry and physics the problem that onemust measure the coefficients of the operator L or the coefficients ofthe BCs. The latter measurements introduce random effects. Geo-metrical dimensions can be measured with rather high precision butother quantities such as the position or the velocity of a particle areless accessible to exact measurements. This means that the coeffi-cients in the operator or in the BCs introduce a randomness. Thus,we must consider more modest goals and restrict the considerationsto the calculation of the lowest moments of the PD PA (A). In prac-tical cases one would be already satisfied with the calculation of themean and the variance of the eigenvalue A.4.3.2. Mathematical methodsEvery procedure to calculate the eigenvalues consists of two differentparts: a deterministic and a stochastic part. Accordingly there aretwo categories of methods.In the first method we try to calculate or approximate the firsteigenvalues and apply final stochastic routines to calculate the PDp(A) or at least the lowest order moments of this PD. Such anapproach was called by Keller [4.8] and [4.9] an "honest" procedure.By contrast, we use in the second method statistical methods asthe calculation of the moments of the ODE and apply approximatemethods such as the assumption of stationary processes or weak cor-relations. A typical approach of this kind is used in the fluid dynamic
    • Advanced Topics 149theory of the turbulence (see e.g. Prisch [4.10]). This type of proce-dure was named by Keller as "dishonest". However, we have to keepin mind that an "honest" approach does not imply more accuracyfor its solutions or approximations than the results of a "dishonest"routine.The methods that are used in both types of approaches are fre-quently one of the following routines or a combination thereof- Variational principles,- Perturbation expansions,- Greens function and transformation to an integral equation,- Asymptotic theory for high order eigenvalues,- Asymptotic theory for singular differential operators,- Iterations.Furthermore, one introduces elements of statistical theory andstatistical assumptions like independence of variables, stationary pro-cesses, weak correlations, etc. A survey of these methods is given byHaines [4.11] and Boyce [4.12]. Random boundary problems are stud-ied in detail by Bass and Fuks [4.13].Here we use an asymptotic theory to treat the problem of higher-order eigenvalues.Higher order eigenvaluesWe consider the Sturm—Liouville problem (see also (3.47)) andinvestigate the eigenvalue problemy" + PV/P - (r/p - M/p)y = 0;y = y(x,X); p = p(x); q = q(x); r = r{x); (4.55) = d/dx; 0 < x < 1,with the BCsy(0,A) + Cy(0,A) = 0; y(l, A) + Dy(l, A) = 0; C, D = const.(4.56)We use a transformation to put (4.55) into a form amenable tothe WKB routine (see also the appendix A of Chapter 3). We must
    • 150 Stochastic Differential Equations in Science and Engineering(i) to eliminate the first derivative and (ii) obtain a constant as thecoefficient of A. To this end we scale both the eigenfunction and theindependent variabley(x,) = E(x)u(£,); £ = H(x)/K; H(0) = 0;H(l) = K, 0 < £ < 1, (4.57)where E(x) and H(x) are two unknown function to be determinedsuch that the conditions (i) and (ii) are met. The derivatives y(x,X)and u(x, A) are calculated in EX 4.5 and the substitution into (4.55)leads toEH•/2K2 -«« + K+ A^EH + (EH) + ^EHE + E" +PpEH0.p PJ pHence we can satisfy the conditions (i) and (ii) if we put2I +F F +P = 0 ; H(X) =[Vq(z)/P(z)dz,where the first part of (4.59) leads toE = (w)~1 / 4.Now we can write (4.55) in the formUK + [M2- R(0]« = 0; /x2= K 2A » l ;V E" pEA K2j) E ~ ~pE ) WWe suppose that the eigenvalues increase with an order such thatfj? is a large parameter. Thus we can take advantage of the WKBroutine and we propose the asymptotic expansionR =(4.58)(4.59)(4.60)(4.61)1u(£,/x) = expf^o (01 Go(£) + - ^ ( 0 + 0 ( / O .rThis yields in leading order (/j?) to the eikonal equationx0£ 1 or u0 = ±£,and the first correction order {jil) leads toG0 ^ _ l«o&Gn 2 u0$or Go = const.(4.62)(4.63)(4.64)
    • Advanced Topics 151Thus we find that the expansion (4.62) has the formu(£,/z) = A1cos(//0 + A2sin(/i0 + O(Ai"2). (4.65)We see that u is of 0(1) whereas the derivative u% is of order0(/z). Using the results of EX 4.5 we find that the BCs (4.56) are inleading order independent of the constants C and D (4.56) reduces to^ ( 0 , ^ = ^ ( 1 , ^ = 0 . (4.66)The application of (4.56) to (4.56) leads to the eigenvalue and theeigenfunction(ik — irk; /cGN; k —> oo; u(£, fi) = Ai cos(£;7r£), (4.67)where k is an integer number. The eigenvalue A of the original prob-lem (4.55) is then given byA = (fc7r/K)2+ o(A;). (4.68)Now we make assumptions about the randomness of the problem.We introduce stochastic functions for the coefficientsq(x) = 1 + au(x); p=l + av(x), a = const, (4.69)where u(x) and v(x) are random functions and a is an intensityparameter. To find the moments of eigenvalue A we must calculatethe moments of K2. Hence, we obtain(K2m) = (fe7r)2m(A-m). (4.70)ExampleHere we use a Brownian motion for the coefficients (4.69)q(x) = 1+aB^; p=l + a/3Bx; 0 < a « l ; 0 < / 3 < l , (4.71)where Bx is the Brownian motion with a spatial argument. Thecorresponding series expansion of K2and K4for a —> 0 is given inEX 4.6 and this yields(K2) = 1 + a2{(5 - 1)[(1 + 3/3)/2 + 2(J3 - l)/3]/8 + 0(a4)}.(4.72)
    • 152 Stochastic Differential Equations in Science and Engineering4.3.3. Examples of exactly soluble problemsWe consider here three examples of simple problems with exact solu-tions. In all of them we use the nomenclature that 9 and A representa stochastic parameter and the eigenvalue, respectively.Example 1We consider the linear problemu" + Xu = 0; = d/dx; 0 < x < 1,(4.73)u(0,A)=0; ti(l,A) + 0it(l,A) = O,with 9 6 [0, oo) appearing only in one of the BCs. We solve the ODEusing u(x) = exp(<rx) and we obtain a2+ A = 0. The solutions of(4.72) that satisfy u(0) = 0 are given by(i) A > 0 : a = ±ivA and u(x) = Csin[vAa;]; C = const,or alternatively(ii) A < 0 : a = ± v |A| and u(x) = Dsin/i[y|A|x]; C = const.The application of the second BC leads to(i) C[0sin(VA) + V/Acos(v/A)], (4.74)and(ii) B[6smh{^/~) + Vcosh(y/~)}. (4.75)Thus we obtain in case (i) the eigenvalue equation9 = -VAcot(v/A) = G(A) > 0, (4.76)whereas there is no eigenvalue problem for the other alternative (ii)(see EX 4.7). The latter statement means that we have to complywithA > 0. (4.77)Equations (4.76) and (4.77) give for 9 > 0 the eigenvalue relation{2n + lf^ < A n < ( n + 1)27r2, n = 0 , i . . . . (4.78)We reveal the first and the second eigenvalue in Figures 4.2(a)and 4.2(b). We see from these figures that the eigenvalues are mono-tonically increasing functions of 9 and we can uniquely calculate the
    • Advanced Topics 153Fig. 4.2. The zero and first order eigenvalues Ao and Ai of the problem (4.73) plottedversus the stochastic parameter 6.inverse function of G(A). Thus, n = G~l{e); n = 0,l..., (4.79)where the subscript of the function G_ 1refers to the number of theeigenvalue. We continue with this problem in the next section. XExample 2We consider a problem in polar coordinatesr V + ru + (Ar2- 62)u = 0; u = u(r, A); = d/dr, (4.80)with the BCsK0,A)|<oo; u(l,A) = 0. (4.81)The solution to the ODE (4.80) takes the formu{r, A) = CJe(rVA) + DYg(ry/); C, D = const., (4.82)
    • 154 Stochastic Differential Equations in Science and Engineeringwhere Jg(x)(Yg(x)) denote the Bessel functions of the first (second)kind of the order 6 with the argument x and 6 > 0 stands again forthe random parameter. The function Yg(0) diverges, and we put inaccordance with first BC D = 0. Thus, we obtain from the secondBC the eigenvalue equationj0(vT) - o. (4.83)We give in EX 4.8 hints how to solve (4.83) numerically and wereveal the results in Figure 4.3. These results show an approximateparabolic variation of the eigenvalues. However, we can also confirmthis variation if we use for large values of /A the asymptotic behaviorof the Bessel functions. This yields with (4.83) (see [1.4])Afcf« (vr/4)2(3 + 4A; + 20)2(4.84)*Example 3A quantum-mechanical particle moves on the ti-axis in a potentialthat increases linearly with u. The separated Schrodinger equation200150100200150100Fig. 4.3. The zero and the first order eigenvalues Ao (lower pair) and Ai (upper pair) ofthe problem (4.80) are plotted against the stochastic parameter 0. The continuous (bro-ken) lines correspond to the numerical solutions of (4.83) (asymptotic approximations(4.84)).
    • Advanced Topics 155has in this case the formd2-T^MU) = ui/)(u); ip{oo) = 0. (4.85)The solution to this equation isip{u) = CAi(u); C = const.; Ai(oo) = 0, (4.86)where Ai(it) stands for the Airy function that oscillates for u < 0.After this physical motivation, we formulate a stochastic bound-ary value problem:y"(x)=en(x + e-n/3)y(x); = d/dz; y(-l) = y(oo) = 0.(4.87)We can transform (4.87) into a Schrodinger equation with a stochas-tic potential if we putu = X + x6n/3. (4.88)Indeed, using (4.88) we find that solution to (4.87) isy(u) = CAi(X + x6n/3), (4.89)and (4.89) satisfies already the BC at infinity. Complying with thesecond BC yieldsx = - 1 : Ai(A - 9n/3) = 0. (4.90)In EX 4.9 it is shown that the Ai-function in (4.90) exhibits thediscrete spectrumAf c -0n/3= - K | ; A; = 0,1,2,...with uk = {2.33811,4.08795,5.52056,...}. (4.91)We may now solve (4.91) for the variable 9 and this yields in a specialcase n = 1n = l : 0 = G(Afc) = (Afc + K|)3; Xk e [-|ufc|, oo). (4.92)*We continue in the next section with the calculation of themoments of the eigenvalues and we will use the examples 1 through3 to illustrate these ideas.
    • 156 Stochastic Differential Equations in Science and Engineering4.3.4. Probability laws and moments of theeigenvaluesIn Section 4.3.2 we could introduce random functions at will forthe coefficient function p and q (see (4.69)). Then we calculate themoments of the eigenvalues, given the probability laws of the usedrandom functions [like the Brownian motion in (4.71)].Here we assume that the task of establishing the eigenvalue rela-tion is already performed. We write the latter equation again in theform 6 = G(A). To determine the PD PA (A) we use the followingtheorem:Theorem 4.1. We recall the definition of the ID probability distri-bution density (PDF) of two continuous random variables X and YFx(x) = Pr(X < x);FY(y) = Pr(Y < y) [see (1.1)]. We trans-form the variables X and Y into each other with the aid of a strictlyincreasing function Gy = G(x);x = G_1(y), where the monotonicproperties of G ensure the existence of the inverse function G_1. Itis now easy to find (see also EX 4.10) that X < x implies Y < y.This yieldsPr(Y < G(x)) = Pr(X < x) (4.93a)and this is equivalent to the relation for the PDFsFX(x) = FY(G(x)) or FY(y) = Fx(G-1(y)). (4.93b)Equation (4.93b) also implies the rule for the PDs [see (1.3)]PY(y) = ^ = P x ( G - 1( y ) ) ^ ^ . (4.94a)Ay dyNote that if the function G(x) were strictly decreasing we wouldobtain on the right hand side of (4.94a) a minus sign (see EX 4.10).Hence we obtain in conclusionPY(y) = Px(G-1)dG -ldy(4.94b)Note also that (4.94b) is the ID version of the law of transforma-tion of stochastic variables derived in Section 1.5. 8*5
    • Advanced Topics 157The application of Theorem 4.1 to the characteristic equation 9 =G(A) with G a strictly increasing function, leads to pA(A)with uk —pe(#)(d0/dA). Thus, we obtain for the nth moment of the ktheigenvaluerv PV jn(Xk) = J A^pA(Afc)dAfc = / Ajjpfl(0)— dAfc; u = ak, v = bk,(4.95)where we took the kth eigenvalue that is defined in the interval ak <Afc < bk.We apply now (4.95) for the Examples 1 to 3 of the Section 4.3.3.Example 1We use G(Afc) given by (4.76) and we obtain the limits of the integral(4.89) for the first two eigenvalues ao = (7r/2)2,ai = (37r/2)2;6o =7r2,bi = 4-7T2. We introduce two different types of the PDs.(a) We suppose that the random variable 9 is uniformly distributedin the interval 0 < 9 < (3 and we putf 1//3 for0e[O,/3](J elsewhereWe obtain the after a substitution z = y K the integrandd# ,, 1 9„ z — sin(z) cos(z),— dK = -z2n^ ~aK [3 smzz(4.97)AfcPe^~ dAK = -zzn-±£-—^ dz; for ^ <z< zu,k.The upper limit of the integration is zUjk = y/rj, with n =—-v/A^cot(/A^). Hence, we obtain the moments with (4.95) and(4.97) in the form/? 7 /^ sin (z)Note that as /3 —> 0+ we have zU;fc -• V ^ - The correspondingintegral is an undetermined form and we obtain (see EX 4.11)(Afc)(/3 = 0+) = (2n + l ) V / 4 . (4.99)
    • 158 Stochastic Differential Equations in Science and EngineeringTable 4.1. The moments of the zero and first ordereigenvalues and the corresponding variances.p0.0010.250.51234810(Ao>2.46872.70922.93543.34604.02904.57155.01186.17216.5456°~o0.0.13730.26130.47500.79481.01351.16581.44561.4939<Ai>22.20822.45622.70223.85924.10724.95425.72628.15129.056o"l0.0.14330.28460.55851.06281.50121.87422.85043.1293We give the corresponding results now terms of Table 4.1, wherewe use^i,2 = <A?,2> - (Ai,2)2,We calculate the integral (4.98) with the Mathematica programM41, that is given in the attached CD.Note that the moments for /3 = 0+ given by (4.99) coincide almostwith the values given for /?-= 0.001 in Table 4.1.(b) Here we apply the normal PD with zero mean and we obtainPA(A) = p e ( 0 ) ^ = (27r<T)-1/2exp[-^/(2a)]^. (4.100)We use again (4.97) and we obtain moments in the form<A£> = ( 2 ^ ) - ^ r kz2n*-Bin(*)coB(z)V^k sm2(z)x expf-z2cot2(z)/{2a)]dz. (4.101)The numerical results for the eigenvalue are calculated with the pro-gram M42 and they are displayed in Table 4.2. Jj»Example 2We calculate here numerically the moments of the eigenvalues of thecharacteristic equation (4.83).
    • Advanced Topics 159Table 4.2. The moments of the zero order eigenvalueand the variances for the normal PD.<r <Ao>0.2 2.382421 2.333685 2.58837Table 4.3. The moments of thecharacteristic equation (4.86).v <Ao>0.05 3.59190.2 4.36501 6.5718V^o) - <*o}20.920721.645982.38830eigenvalues with theV^g) - <*o>23.67544.68187.9691Note that Figure 4.3 indicate only minor variations of the eigen-values with the stochastic parameter. Hence we can use a polyno-mial fit to approximate the eigenvalues with sufficient accuracy. Weuse again the POD (4.100) and we obtain the results disclosed inTable 4.3. The corresponding Mathematica program M43. £Example 3We apply (4.92) and this yieldsA£p*(G)—dAfc; a=uk. (4.102)We employ the PD (4.96) and we obtain with (4.102)(K) = ^J aK(h + a)2dk= ^P~ [ * za-z)sdz; b = p1/3-a. (4.103)P JoWe can calculate analytically the integral (4.103) and we obtain(see also EX 4.12)(K ! _ 3 ^ 1 / 3 + 0(^/3)4 a(4.104)
    • 160 Stochastic Differential Equations in Science and Engineeringand this leads to(Afc) = - K | ; a = 0(/34/3). (4.105)*4.4. Stochastic Economics4.4.1. IntroductionThe London (FTSE) and the New York (NASDAQ) stock exchangedata for about 13 years were fitted by Benth [4.14]. This leads to con-siderable differences with respect to the predictions of the theoreticalapproach by Black & Scholes [4.15] based on GPD. Particular unsat-isfying is the skewness and tail heaviness of the empirical data thatcannot be predicted by a GPD. Thus the normal inverted Gaussiandistribution NIGD (see e.g. Barndorff-Nielsen [4.16] and Eberlein &;Keller [4.17]) was invented to fit the stock data. As opposed to theGPD with two parameters, the NIGD has the advantage of beingequipped with four parameters:a: tail heavines, 0 < f3 < a,j3: skewness /3 > 0 (/3 < 0) positive (negative) skewness, /? = 0symmetric NIGD,5 > 0: scale parameter,fi: position on the real axis.With this parameters we can construct the NIGDI a x kexpP(x-[i)]p{x;a,f3,n,5) = Ki(a/>);9(4.106)p = V<52 + (x - M)2; k = — exp[5(a2- 02)1/2],where Ki is the modified Bessel function (see [1.3])1 f°°Ki(«) = x / exp{-uz/[2(l + z2)]}dz. (4.107)^ JoThe NIGD governs the stochastic Levy process L(i) and its meanand variance have the form(L(i)) = (i + 05{a2- /32r1 / 2; Var(L(i)) = a25{a2- p2)32;.(4.108)
    • Advanced Topics 161It was shown in [4.14] that approaches bases on Levy processes fitthe stock data with respect to skewness and tail heaviness and simpleprice model can be written asS(t) = S(0)exp[L(t)]. (4.109)However, Equation (4.108) is still of preliminary nature and in thenext section we will discuss a more realistic price model. We com-plete now this section by explaining some elements of the financialterminology.Assuming that an investor makes two types of investments: arisky stock and a comparatively low risk bond. The investors aim isto obtain a fair price when he sells his investments. To achieve thishe makes an option contract, where the financial asset depends onanother asset. Option means that the contract has specific choicesor alternatives. Such contracts are also referred to as derivatives,because the value is derived from an underlying asset. There existsa great variety of derivatives, we mention only the call option andalternatively the put option. A comprehensive survey of options isgiven by Hull [4.18]. The problem we wish to solve is now: what profitshould the seller accept at the time of the sale. The time of the sale(the strike) is called the exercise time and the price of the sale isdenoted strike price. Therefore we have for a call option the priceP (the payoff)P = (max(0, S(T) - K)), (4.109a)where K is the agreed strike price, and we perform an average sincethe stock price S(T) varies randomly. On the other hand we obtainfor a put optionP = (max(0, K - S(T))). (4.109b)Finally we give a heuristic explanation of self-financing portfolios.We assume again that an investor made two investments: a numberof (3{t) stocks with the price S(i) and a number of j(t) of bonds ofthe price R(£). The value of its portfolio is thereforeH(£) = P(t)S(t) + 7(t)R(<). (4.110)We assume that there is no information about future stock pricessuch that S(t) is adapted. A portfolio is self-financing if the investor
    • 162 Stochastic Differential Equations in Science and Engineeringdoes not withdraw funds or buy additional ones. He starts with aninitial capital, and from this moment on all gains or losses result fromincreases or decreases of the stock and bond prices. This means thatthe differential of the portfolio takes the formdH(t) = P(t)dS(t) + 7(t)dR(t). (4.111)We emphasize that (4.111) is valid only for self-financing port-folios. It contradicts the general formula (1.109) and its validity isrestricted to semi-martingales (see [4.14]).Finally, we mention that a contingent claim gives a holder a ran-dom amount at the exercise time T.After these preliminaries we describe a stochastic theory for mar-ket prices.4.4.2. The Black Scholes marketA financial system (a market) consists of a stock with price S(t) anda bond with a price R(i). The price dynamics takes the formdS(i) = (adt + adBt)S(t); a,a = const., (4.112)anddR(t) = rR(t)dt; r = const., R(0) = 1. (4.113)Equation (4.112) is an SDE of the population growth type (2.1),whereas (4.113) is an ODE with the solution R(i) = exp(rf). Wealso introduce the portfolio of the value H(i) given by (4.110).A claim that pays £ = f(S(T)) at the exercise time T has a priceP(i) at time t, that is function of the corresponding stock price x =S(t). Hence, we can writeP(t) = C(S(t), t) = C{x, t); x = S(t). (4.114)We apply Itos formula (1.127) to the function C(x,t) and we usethe underlying SDE (4.112). This yields1dP(t) Ct + axCx + ^(ax)2Ca dt + axCxdBt = 0. (4.115)For non-arbitrage conditions (see [4.14]) we must setP(t) = H(t). (4.116)
    • Advanced Topics 163Equation (4.110) combines with (4.116) to7=[C(M)-/?S(i)]/R. (4.117)An insertion of (4.111) to (4.114) into (4.115) yields with a compar-ison of the coefficients of dBj toP(t) = Cx(x,t), (4.118)and the same procedure for the coefficient of dt gives rise to a PDEcalled the Black-Scholes equationCt + rS(t)C+^[aS(t)]2Cxx = rC. (4.119)Equation (4.118) is subjected to the terminal conditionC(x,T) = f(a;). (4.120)The solution to (4.118) that satisfies (4.119) has the formC(x,T) = exp[-r(T - t)](f(^*(T))), (4.121)withzxt(s) = x e x p { ( r - s 2/ 2 ) ( s - i ) + [B(s)-B(i)]}; s>t. (4.122)The stochastic function (4.121) is the solution of a population growthSDE (2.1) that starts at time t from the position x: zx,t(t) = x.Note that the logarithm of (4.122) can be written asB(s) - B(t) = [In **•*(«) - In x - (r - s2/2)(s - t)]/a. (4.123)Since the right hand side of (4.123) is N(0, s — t) distributed we inferthat k(s,x,t) — ln^^s) isN(/j,,T1);fi = nx + (r-a2/2)(s-t), E = (s - t)a2(4.124a)distributed and this gives its distribution the formpK (M) = (27rS)-1/2exp[-(fc-//)2/(2S)]. (4.124b)We focus now on the verification of the Black-Scholes solution(4.121) and the latter equation has with (1.124) the formC(x, t) = exp[-r(T - t)] f f(z(T))pz(z, t)dz, (4.125)where we use the short cut z = zx,t(s). We see immediately from(4.125) that this solution satisfies the terminal condition (4.120).
Now we perform the transformation z = exp(k) and this gives (4.125) the form [see (1.44)]

C(x, t) = exp[−r(T − t)] ∫ f(e^k) p_K(k, t)dk,   (4.126)

where the PD p_K(k, t) is given by (4.124).

Now we apply the operator of the PDE (4.119) to (4.126) and we obtain in the first place

0 = ∫ dk f(e^k) [∂/∂t + rx ∂/∂x + (x²σ²/2) ∂²/∂x²] p_K(k, t).

The integrand vanishes since p_K can be reformulated in terms of the diffusion equation (1.57),

[∂/∂t + rx ∂/∂x + (x²σ²/2) ∂²/∂x²] p_K(k, t) = 0,

and this verifies that (4.121) is the solution of the Black-Scholes PDE (4.119).

It remains to say that Scholes and Merton (see the latter's investigation [4.19]; Black died in 1995) earned the Nobel award in economics in 1997.

Exercises

EX 4.1. Use (4.20) to calculate the variance of the solution (4.16) and compare the result with the formula (4.5), where we take into account the orthogonality of the functions u_k(x).

EX 4.2. Calculate the variance of the 3D stochastic Poisson equation. The Green's function is defined by (4.23b). Use the "charge" u²(x) = δ(x) and compare the variance of the stochastic PDE with the corresponding deterministic solution Φ_d(x) = a/(4π|x|).

EX 4.3. Determine the function G in (4.32).

EX 4.4. Solve the stochastic heat transport equation

∂T/∂t = (a + b dB_t/dt) ∂²T/∂x² + x ∂T/∂x; a, b = const.; T = T(x, t),
with the use of the separation

T(x, t) = A₁(t)cos(ωx) + A₂(t)sin(ωx).

Show that the resultant ordinary SDE can be transformed into a deterministic ODE involving the two functions A₁,₂(t).

EX 4.5. Calculate the derivatives of the functions y(x, λ) and u(x, λ) defined in (4.57).

Hint: Verify

y_x = (E′u + u′)e^E; y_xx = [E″u + (E′)²u + 2E′u′ + u″]e^E.

EX 4.6. Referring to (4.71), calculate the series expansions of the moments ⟨K²⟩ and ⟨K⁴⟩.

Hint: Verify

⟨K²⟩ = ∫₀¹dx ∫₀¹dy {1 + a²(β − 1)[(1 + 3β)(x + y) + 2(β − 1)(x ∧ y)]/8 + O(a⁴)}
     = 1 + a²(β − 1)[(1 + 3β)/2 + 2(β − 1)/3] + O(a⁴).

EX 4.7. Explain why no eigenvalue problem arises for (4.76) in the case of λ < 0.

EX 4.8. To solve (4.80), plot numerically (with the aid of Mathematica) J_ν(√λ) in the plane 0 ≤ ν ≤ 5; 0 ≤ λ ≤ 150 and use the plotted zeros as initial values for an iteration to compute the numerical values of λ_n. Using Mathematica you can take advantage of the routine FindRoot. Explain why the asymptotic order relation (see [1.3])

J_ν(x) ≈ √(2/(πx)) cos(x − νπ/2 − π/4); x → ∞

cannot be used to compute the eigenvalues.

EX 4.9. Apply the Mathematica routines AiryAi and FindRoot to verify the zeros of the Ai-functions given by (4.91).
EX 4.10. Derive, by inspection of a strictly decreasing function y = G(x), the relation for the PDFs Pr(X ≤ x) = Pr(Y ≥ G(x)) and use this to verify (4.94b).

EX 4.11. Apply the rule of l'Hôpital to verify (4.99).

Hint: Show that the PD (4.96) gives rise to

lim_{ε→0} ∫ p_ε(θ)H(θ)dθ = H(0).

EX 4.12. We consider the 4th order boundary value problem

y⁗ − [θ²/(1 + λ)²]y = 0; y(0, λ) = y″(0, λ) = y(1, λ) = y″(1, λ) = 0.

Verify that the eigenvalue equation has the form sin[√(θ/(1 + λ))] = 0. Use the PD (4.96) to find numerically the mean and the variance for λ₁ and λ₂.
CHAPTER 5

NUMERICAL SOLUTIONS OF ORDINARY STOCHASTIC DIFFERENTIAL EQUATIONS

In this chapter we will familiarize the reader with some numerical routines to solve ordinary SDEs, and we shall introduce these routines with the FORTRAN programs that are included on the attached CD. The latter programs are labeled with an "F". So, for example, F3 is the third FORTRAN program.

5.1. Random Number Generators and Applications

A prerequisite for solving SDEs is random numbers (RNs). There are various techniques to construct "random numbers". In fact, the numbers repeat after a period P. As a result, we should call these numbers pseudo-random numbers.

To find an algorithm for a random number generator, we recall that ever since the work of Feigenbaum [5.1] it has been known that the iteration of numbers on the real axis (1D maps) can lead to irregular outcomes. It is convenient to apply a congruential iteration

N_{k+1} = (aN_k + u) mod v; k = 0, 1, ...,

where a, u, v, N₀ are given positive integers and the symbol u mod v (the modulo) denotes the remainder of u after dividing u by v. The choice of v is determined by the capacity of the computer and there are various ways to select a and u. The period P of this iteration was investigated by Knuth [5.2], and it was found that choosing u and v relatively prime is important to increase the value of P.

The IMSL (International Mathematical Scientific Library) proposes for RNs with a uniform PD in the interval (0, 1) the iteration

N_{k+1} = (16807 N_k) mod (2³¹ − 1); k = 0, 1, ...,
where 0 < N₀ < 2³¹ − 1 is an initial integer (the seed) and the factor 2³¹ − 1 is a prime number. The numbers in the above formula are not RNs in (0, 1), but the sequence

R_{k+1} = N_{k+1}/(2³¹ − 1)

serves to produce a sequence of RNs, R_{k+1}, in (0, 1).

Other techniques are discussed by Press et al. [5.3], and we recommend in particular the routines called ran1, ran2 and ran3, which are based on congruential iterations.

The question of a uniform distribution and the independence of the RNs is investigated in the next section. RNs that vary in arbitrary intervals are obtained with transformations (see Section 1.5, where also Gaussian distributed numbers are introduced).

5.1.1. Testing of random numbers

We will investigate here the property of uniformity and the independence of stochastic variables. To perform this task we introduce a few elements of the testing of stochastic (or probabilistic) models H₀ on random variables X.

The basic idea is that we perform a random experiment (RE) of interest and group the outcomes into individual categories k, with K as the total number of categories. One particular outcome of the RE is given by X_i, and this particular outcome is called a test statistic. We categorize the predictions of an application of the model H₀ in the same way. If, under a given model H₀, values of the test statistic appear that are likely (unlikely), we are inclined to accept (reject) the model H₀.

An important statistical test is χ²-statistics. The quantity X is a χ²-random variable of m degrees of freedom (DOF) with the PD

p_X(x, m) = [2^s Γ(s)]^{−1} x^{s−1} exp(−x/2); s = m/2; x > 0.   (5.1a)

The mean and variance of (5.1a) are calculated in EX 5.1. It can be shown that (5.1a) is the PD of the sum over a set of m independent identically N(a, σ) distributed RVs, X₁, ..., X_m, with

X = Σ_{p=1}^m (X_p − a)²/σ².   (5.1b)
We assume now that under a given hypothesis H₀ (e.g. the uniformity of RNs) one particular simulation falls with the probability p_k into the category k. We perform N simulations, and the expected number of events under H₀ in the category k has the value Np_k. This contrasts with the outcome of a RE where Z_k is the number of events in the category k of a particular simulation. Now we calculate the sum of the K squares of the deviations of Z_k from Np_k and divide each term by Np_k. This test statistic yields the RN

D_{K−1} = Σ_{k=1}^K (Z_k − Np_k)²/(Np_k).   (5.2)

It can be shown that for N ≫ 1 the RN in (5.2), X = D_{K−1}, approximately has the PD (5.1a) with the DOF m = K − 1. To verify this value of the DOF we note that we can sum RVs in (5.2), but we must consider the compatibility relation Σ_{k=1}^K Z_k = N.

A close agreement between the observed (Z_k) and the predicted values (under H₀) Np_k yields (Z_k − Np_k)² ≪ 1. However, a large number of categories K leads to finite values of D_{K−1}. This can be inferred from the mean value of (5.1a), ⟨X⟩ = K − 1, signifying that the mean of D_{K−1} increases linearly with K. Hence we introduce a critical value of the χ²-statistics, χ²_{c,K−1}, defined by

Pr(X > b) = 1 − ∫₀^b p_X(x, K − 1)dx = α; b = χ²_{c,K−1}.   (5.3)

In Equation (5.3) we must substitute the PD (5.1a) and use a given and small level α ≪ 1. Conveniently one chooses α = O(10⁻²) and solves (5.3) numerically to obtain χ²_{c,K−1}. As a goodness criterion for H₀ we employ now the condition that D_{K−1} must not exceed the critical value. Thus, we accept (reject) the hypothesis for D_{K−1} ≤ χ²_{c,K−1} (D_{K−1} > χ²_{c,K−1}).

Example (Uniformity test)

As an application of the χ²-test statistics we investigate now the hypothesis H₀ of the uniformity of RNs. We categorize by dividing the interval (0, 1) into K subintervals of equal length 1/K. Then we generate N RNs and count the numbers that fall into the kth
subinterval. Now we assume that

Z_k RNs fall in ((k − 1)/K, k/K); k = 1, ..., K.

Under H₀, the probability of simulating RNs in each subinterval is p_k = 1/K. Thus, we infer from (5.2)

D_{K−1} = (K/N) Σ_{k=1}^K (Z_k − N/K)².   (5.4)

We investigate in program F1 the uniformity of RNs using 10, 30 and 50 subintervals and applying various RN generators. The results are given in Table 5.1. ♦

Table 5.1. The evaluation of the uniformity test with ran1, ran3, the IMSL routine and a congruential method (CM) for α = 0.05.

K  | χ²_{c,K−1} | D_{K−1} (ran1) | D_{K−1} (ran3) | D_{K−1} (IMSL) | D_{K−1} (CM)
10 | 16.92      | 9.240          | 9.200          | 9.300          | 9.300
30 | 42.56      | 37.100         | 23.900         | 47.780         | 47.780
50 | 66.34      | 44.200         | 33.900         | 43.100         | 43.340

Now we discuss independence tests. Suppose that we have obtained a random sample of the size of n pairs of variables (x_j, y_j); j = 1, ..., n. To investigate the independence of the two sets of variables X = (x₁, ..., x_n), Y = (y₁, ..., y_n) we calculate their correlation coefficient

R = Σ_{p=1}^n (x_p − ⟨x⟩)(y_p − ⟨y⟩)/D; D² = Σ_{p=1}^n (x_p − ⟨x⟩)² Σ_{p=1}^n (y_p − ⟨y⟩)².   (5.5)

If the variables X and Y are uncorrelated we would find R = 0 and the variables are independent.

We focus now on a test of the independence of n RNs x₁, ..., x_n, and we group them into consecutive pairs (x₁, x₂), (x₂, x₃), ..., (x_{n−1}, x_n). We regard the first member of such a pair as the first
variable (say X) and the second member of the pair as another variable (say Y). The correlation coefficient R₁ of these consecutive pairs is called the correlation coefficient with lag 1:

R₁ = Σ_{p=1}^{n−1} (x_p − ⟨x^{(1)}⟩)(x_{p+1} − ⟨x^{(2)}⟩)/D;
D² = Σ_{p=1}^{n−1} (x_p − ⟨x^{(1)}⟩)² Σ_{p=1}^{n−1} (x_{p+1} − ⟨x^{(2)}⟩)²;   (5.6)
⟨x^{(1)}⟩ = [1/(n−1)] Σ_{p=1}^{n−1} x_p; ⟨x^{(2)}⟩ = [1/(n−1)] Σ_{p=1}^{n−1} x_{p+1}.

When n is sufficiently large we can simplify (5.6), and this yields

⟨x^{(2)}⟩ ≈ ⟨x^{(1)}⟩ ≈ ⟨x⟩ = (1/n) Σ_{p=1}^n x_p,   (5.7a)

R₁ = Σ_{p=1}^{n−1} (x_p − ⟨x⟩)(x_{p+1} − ⟨x⟩) / Σ_{p=1}^n (x_p − ⟨x⟩)².   (5.7b)

It can be shown (see e.g. Chatfield [5.4]) that if the RNs are independent, then R₁ is, for n ≫ 1, N(0, 1/n) distributed. Under these conditions we expect |R₁| < 1.96 n^{−1/2} with confidence 1 − α = 0.95.

However, there are cases when the pairs (x₁, x₂), (x₂, x₃), ... are independent but pairs such as (x₁, x₃), (x₂, x₄), ... are not independent. Hence, we compute the correlation coefficient with time lag k (see Tuckwell [5.5])

R_k = Σ_{p=1}^{n−k} (x_p − ⟨x⟩)(x_{p+k} − ⟨x⟩) / Σ_{p=1}^n (x_p − ⟨x⟩)²,   (5.8)

where the mean ⟨x⟩ is defined as in (5.7a). We investigate the question of the independence of RNs in program F2.

Finally we explain the term confidence interval. We know from the discussion of the central limit theorem (Section 1.4.1) that a sum of IID RVs converges to a normal distribution. Hence, it is of
practical importance to find probabilistic limits of the normal distributions N(0, 1) and N(μ, σ). We start with a N(0, 1) distributed variable and define the confidence parameter α by

N(0, 1): Pr(Z > z_{α/2}) = α/2.   (5.9)

Equivalently we can rewrite (5.9) in the form

α/2 = 1 − (2π)^{−1/2} ∫_{−∞}^{b} exp(−u²/2)du; b = z_{α/2}.   (5.10)

There is no analytic expression for the inverse function of the exponential integral in (5.10). Alternatively, we take advantage in EX 5.2 of an asymptotic method to approximate (5.10), and we use there the program F3. However, it is easy to find numerically for every value of z_{α/2} the confidence parameter α.

Now we study a N(μ, σ) distributed variable X with

f_X(x) = (2πσ²)^{−1/2} exp[−(x − μ)²/(2σ²)],

and we consider that it lies with the probability κ = Pr(x₁ < X < x₂); x_{1,2} = μ ∓ σz_{α/2} in the indicated interval. Thus, we infer from (1.1a)

κ = (2πσ²)^{−1/2} ∫_{x₁}^{x₂} exp[−(x − μ)²/(2σ²)]dx = (2π)^{−1/2} ∫_{−b}^{b} exp(−y²/2)dy = 1 − α; b = z_{α/2}.   (5.11)

Equation (5.11) means that

Pr(μ − σz_{α/2} < X < μ + σz_{α/2}) = 1 − α,   (5.12)

and (5.12) defines a

100(1 − α)% confidence interval.   (5.13)

Example

A 95% confidence interval is given by α = 0.05, and this implies with (5.10) z_{α/2} = 1.96. Numerically computed and approximate values of z_{α/2} are calculated in EX 5.2 and in the program F3. ♦
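The tests of this section are easily reproduced. The following minimal sketch (in Python; the book's programs F1-F3 are FORTRAN, and the function names here are merely illustrative) combines the congruential generator of Section 5.1 with the uniformity statistic (5.4) and the lag-k correlation (5.8):

```python
def congruential(seed, n, a=16807, v=2**31 - 1):
    """IMSL-type iteration N_{k+1} = (a N_k) mod v, rescaled to (0, 1)."""
    nk, out = seed, []
    for _ in range(n):
        nk = (a * nk) % v
        out.append(nk / v)
    return out

def d_statistic(r, K):
    """Uniformity statistic D_{K-1} of Eq. (5.4) for K equal subintervals."""
    N = len(r)
    z = [0] * K
    for x in r:
        z[min(int(x * K), K - 1)] += 1
    return (K / N) * sum((zk - N / K) ** 2 for zk in z)

def lag_correlation(r, k):
    """Lag-k correlation coefficient R_k of Eq. (5.8)."""
    n, m = len(r), sum(r) / len(r)
    num = sum((r[p] - m) * (r[p + k] - m) for p in range(n - k))
    return num / sum((x - m) ** 2 for x in r)

rns = congruential(seed=12345, n=10000)
print(d_statistic(rns, 10))      # accept uniformity if below chi2_{c,9} = 16.92 (alpha = 0.05)
print(lag_correlation(rns, 1))   # |R_1| < 1.96/sqrt(n) is consistent with independence
```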
Detailed studies of stochastic model testing can be found in the books of Walpole and Myers [5.6] and Overbeck-Larisch and Dolejsky [5.7].

5.2. The Convergence of Stochastic Sequences

In Chapter 1 we already introduced the term almost certain (AC). We focus in this section on the convergence of sequences of RVs or stochastic processes. A classification of the convergence of the sequence of RVs

{X_n(ω)}; n ∈ N, ω ∈ Ω

is given by strong and weak modes of convergence.

(i) Strong convergence

There are two subclasses of convergence modes:

(i-1) AC convergence (or, equivalently, convergence with probability 1) if

Pr(lim_{n→∞} |X(ω) − X_n(ω)| = 0) = 1.   (5.14)

(i-2) Convergence in the kth mean:

lim_{n→∞} ⟨|X(ω) − X_n(ω)|^k⟩ = 0, k ∈ N.   (5.15)

For k = 2 this means convergence in the mean square. By contrast, for k = 1 (5.15) reduces to the mean (or strong in the narrow sense) convergence. Note also that (5.15) is a straightforward extension of the deterministic convergence concept. We also note that the positiveness of the variance of a RV Y, with Y ≥ 0, leads to the inequality

⟨Y⟩ ≤ √⟨Y²⟩,

and this means that the mean square convergence implies the mean convergence.

(ii) For the case of weaker convergences we do not need the behavior of the RVs; we focus only on the sequences of PDs {p_n(x)} and the way they tend to their limit p(x).
(ii-1) The convergence in distribution is defined by

lim_{n→∞} p_n(x) = p(x),   (5.16)

where the PDs are supposed to be continuous.

(ii-2) The weak convergence in the narrow sense is expressed by

lim_{n→∞} ∫ g(x)p_n(x)dx = ∫ g(x)p(x)dx,   (5.17)

where we include in the integrals classes of test functions g(x) (e.g. polynomials).

An important application of the concept of strong convergence is given in the following example:

Strong law of large numbers (SLLN)

We consider a sequence of RVs x₁, x₂, ... with finite means μ_k = ⟨x_k⟩, k = 1, 2, ..., and we define a new RV

A_n = (1/n) Σ_{k=1}^n x_k with M_n = (1/n) Σ_{k=1}^n μ_k.   (5.18)

Then, provided that

lim_{n→∞} S_n = 0 with S_n = (1/n²) Var(Σ_{k=1}^n x_k),   (5.19)

the SLLN says that

lim_{n→∞} ⟨|A_n − M_n|⟩ = 0.   (5.20)

Note that for IID RVs we have

μ_k = μ = const. ⟹ M_n = lim_{n→∞} A_n = μ.   (5.21)

Proof. First we note that

⟨A_n⟩ = M_n, Var(A_n) = S_n.

Now we apply the Chebyshev inequality (see EX 1.2) and we obtain

Pr(|A_n − M_n| ≥ ε) ≤ Var(A_n)/ε² = S_n/ε² → 0 for n → ∞.
This proof can be made more rigorous (see Bremaud [1.16]). The convergence in (5.20) is of mean square type. ♦

An interesting application of the SLLN is given in the following:

Example (Estimation of the probability distribution function, PDF)

It is required to find (or numerically approximate) the PDF defined by (1.1) of the RV X. To this end we generate a sequence of independent samples of the RV, x₁, x₂, ..., and we define the indicator function

ζ_k = 1 if x_k ≤ x; ζ_k = 0 otherwise.   (5.22)

Thus we obtain with the use of the SLLN (5.20)

⟨ζ(x)⟩ = lim_{n→∞} (1/n)(ζ₁ + ··· + ζ_n) = F_X(x).   (5.23)

We will use this method in program F4 to estimate a Gaussian PDF.

5.3. The Monte Carlo Integration

Here we make a slight digression from our path to the calculation of numerical solutions of SDEs, and we use RNs to perform a stochastic type of numerical integration. This method is widely used in integration and statistics. It is based on our ability to generate RNs, and it could be done in principle with dice or a roulette wheel. For this reason the method is called the Monte Carlo method (MCM).

As a typical application we might consider the MCM to calculate the value of the area A between a curve f(x) and the x-axis, see Fig. 5.1. The curve is located in a rectangular box of the area B = ab, and we consider a pair of two independent RVs (x, y) that are uniformly distributed in 0 ≤ x ≤ a, 0 ≤ y ≤ b. The probability that a random point (x, y) falls between the x-axis and the curve f(x) is given by

Pr{(x, y) ∈ A} = (1/B) ∫ f(x)dx.   (5.24)

We use a collection of n random points (x_k, y_k), k = 1, 2, ..., n, and define for the kth member of this collection the indicator variable
Fig. 5.1. The function f(x) whose integral is calculated.

[see also (5.22)]

z_k = 1 for (x_k, y_k) ∈ A; z_k = 0 otherwise.   (5.25)

Thus, we infer from the SLLN (5.20) that

⟨Z⟩ = (1/n) Σ_{k=1}^n z_k → Pr{(x, y) ∈ A} for n → ∞,   (5.26)

or equivalently

∫ f(x)dx ≈ (ab/n) Σ_{k=1}^n z_k for n → ∞.   (5.27)

Example (Volume of spheres)

We calculate the area F of a unit circle that is enclosed by a square of area B = 4. Clearly this example is a stochastic method of computing the value of π. Here we infer from (5.25)

z_k = 1 for x_k² + y_k² ≤ 1; z_k = 0 otherwise.   (5.28)

In (5.28) we use independent RNs that vary uniformly in the interval (0, 1). The area of the unit circle is then by symmetry four times
the one that would be obtained by the application of (5.28) to the first quadrant of the (x, y)-plane. Hence, we obtain from (5.27)

F = (4/n) Σ_{k=1}^n z_k → π.   (5.29)

However, it is easy to generalize this example to the calculation of the volume of the 3D sphere and, more interestingly, to the calculation of the volume of an m-D sphere with m ≫ 1. This is where the MCM is superior to traditional numerical calculations. Consider the computation of a volume in a 10D space. If we employ a minimum of 100 points in a single direction, we would need for the traditional computation 10²⁰ points. By contrast, the MCM would need according to (5.28) 10 points for one indicator variable, and a collection of say 10⁷ points would be sufficient for an accurate computation. Thus we generalize the present example and calculate the volume of an m-D unit sphere. The indicator variable is now given by

z_k = 1 if Σ_{i=1}^m x_i² ≤ 1; z_k = 0 otherwise.

Thus, the MCM gives the volume V_m of the m-D unit sphere the value (2^m is the number of sectors that make up the m-D unit sphere: 4 quadrants in the case of a circle, 8 octants in the case of a 3D sphere, ...)

(2^m/n) Σ_{k=1}^n z_k → V_m.   (5.30)

Note that the exact value V_m^{ex} is given by (see Günther and Kusmin [5.8])

V_m^{ex} = π^{m/2}/Γ(1 + m/2).   (5.31)

The numerical computation is made using program F5. We summarize the results of the computation of a 10-D sphere in Table 5.2. The error of the computation is defined by

err = 100 |(V₁₀^{ex} − V₁₀)/V₁₀^{ex}| %.
Table 5.2. The results of a MCM calculation of the volume of a 10-D unit sphere using ran3.

n      | V₁₀     | err
60000  | 2.54123 | 0.350481
100000 | 2.54203 | 0.318841
150000 | 2.55029 | 0.00508

There are several improvements of the MCM for multiple integrals. In the case of the determination of the area between an irregularly shaped curve f(x) and the x-axis we can write

∫_a^b f(x)dx = ∫_a^b dx p(x)f(x)/p(x),   (5.32a)

where p(x) is an arbitrary PD defined on x ∈ [a, b]. If n RNs ξ_k, k = 1, ..., n are generated from the PD p(x), then

∫_a^b f(x)dx ≈ (1/n) Σ_{k=1}^n f(ξ_k)/p(ξ_k).   (5.32b)

One of the most widely used applications is in statistical mechanics. The PD of a statistical mechanical state with energy E is given by the Gibbs distribution function

p(E) = exp(−βE)/∫ exp(−βE)dE,

where β is the inverse temperature. The computation proceeds by starting with a finite collection of molecules in a box. It can be made to imitate an infinite system by employing periodic BCs. Algorithms exist to generate molecular configurations that are consistent with p(E). The most popular one is the Metropolis algorithm. A presentation of this algorithm would take us too far afield; details are given in Metropolis et al. [5.9]. Once a chain of configurations has been generated, the PD is determined and various thermodynamic functions can be obtained by suitable averages. The simplest example is ⟨E⟩ = Σ E_k p(E_k).
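A compact sketch of the sphere-volume computation (5.30)-(5.31) may be helpful (in Python; program F5 on the CD is the book's FORTRAN implementation):

```python
import math, random

rng = random.Random(1)

def mc_ball_volume(m, n):
    """MCM estimate (5.30): sample the positive sector of the unit cube and
    scale the hit fraction by the 2^m sectors of the m-D unit sphere."""
    hits = sum(1 for _ in range(n)
               if sum(rng.random() ** 2 for _ in range(m)) <= 1.0)
    return 2 ** m * hits / n

def exact_volume(m):
    """Exact value (5.31): V_m = pi^(m/2) / Gamma(1 + m/2)."""
    return math.pi ** (m / 2) / math.gamma(1 + m / 2)

v, v_ex = mc_ball_volume(10, 150000), exact_volume(10)
print(v, v_ex, 100 * abs((v_ex - v) / v_ex), "%")   # err as defined above
```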
5.4. The Brownian Motion and Simple Algorithms for SDEs

We recall that the Brownian motion is a non-stationary Markovian N(0, t) distributed process. We also remind the reader of the Box-Muller method (1.46) to (1.48) of generating a N(0, 1) distributed variable. The latter method contains the computer time-consuming use of trigonometric functions [see (1.47)]. Hence, it is more convenient to use the polar Marsaglia method. We start from a variable x that is uniformly distributed in (0, 1) and use the transformation V = 2x − 1 to obtain a variable V that is uniform in (−1, 1). Then we apply two variables of the latter type, V₁ and V₂, with

W = V₁² + V₂² ≤ 1,   (5.33)

where W is distributed again in (0, 1) and the angle θ = arctan(V₂/V₁) varies in (0, 2π). The area ratio between the inscribed circle (5.33) and the surrounding square has the value π/4. For this reason a point (V₁, V₂) has the probability π/4 of falling into this circle. We consider only these points and disregard the others. We put now cos θ = V₁/√W; sin θ = V₂/√W, and we obtain as in (1.47)

y₁ = V₁√(−2 ln(W)/W); y₂ = V₂√(−2 ln(W)/W).   (5.34)

This leads, in analogy to (1.48), to the PD

p(y₁, y₂) = (2π)^{−1} exp[−(y₁² + y₂²)/2].   (5.35)

With this generation of the Brownian (or Wiener) process we calculate in program F6 the numerical data pertaining to Figures 1.1 and 1.2 of Chapter 1. We present in Figure 5.2 the graph of the solution to the population growth problem (2.4) and calculate the corresponding numerical data in F7.

We give now a simple difference method to approximate solutions of the 1D deterministic ODE

dx/dt = a(x, t); x(t = 0) = x₀:   (5.36)

the Euler method. The latter discretizes the time into finite steps

t₀ = 0 < t₁ < ··· < t_n < t_{n+1}; Δ_k = t_{k+1} − t_k.   (5.37)
Fig. 5.2. A graphical evaluation of the results of the population growth model (2.4), r = 1, u = 0.2. The figure reveals one realization and the predicted mean (2.5), which coincides with the numerically calculated mean using 50 realizations.

This procedure transforms the ODE (5.36) into a difference equation

x_{n+1} = x_n + a(x_n, t_n)Δ_n; x_n = x(t_n).   (5.38)

The difference equation must be solved successively in the following way:

x₁ = x₀ + a(x₀, t₀)Δ₀ = x₀ + a(x₀, 0)t₁;
x₂ = x₁ + a(x₁, t₁)Δ₁ = x₀ + a(x₀, 0)t₁ + a(x₁, t₁)(t₂ − t₁);
a(x₁, t₁) = a(x₀ + a(x₀, 0)t₁, t₁).   (5.39)

Equation (5.39) represents a recursion. Given the initial value x₀, we obtain with the application of (5.39) the value of x_k for every following time t_k.

Now we propose heuristically the Euler method for the 1D autonomous SDE

dx = a(x)dt + b(x)dB_t; x(t = 0) = x₀.   (5.40)

In analogy to, and as a generalization of, (5.39) we propose the stochastic Eulerian difference equation

x_{n+1} = x_n + a(x_n)Δ_n + b(x_n)ΔB_n; ΔB_n = B_{n+1} − B_n, B_n = B_{t_n}.   (5.41)
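As an illustration, a minimal sketch of the stochastic Euler recursion (5.41) follows (in Python, with illustrative names; the increments ΔB_n are drawn as N(0, Δ_n) numbers):

```python
import math, random

rng = random.Random(0)

def euler_sde(a, b, x0, T, n):
    """Stochastic Euler scheme (5.41) for dx = a(x)dt + b(x)dB_t on [0, T]."""
    dt, x, path = T / n, x0, [x0]
    for _ in range(n):
        dB = rng.gauss(0.0, math.sqrt(dt))   # Delta B_n is N(0, Delta_n)
        x = x + a(x) * dt + b(x) * dB
        path.append(x)
    return path

# population growth SDE (2.1): dx = r x dt + u x dB_t
r, u = 1.0, 0.2
print(euler_sde(lambda x: r * x, lambda x: u * x, x0=1.0, T=1.0, n=1000)[-1])
```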
In the majority of the examples in this chapter we will use equidistant step widths,

t_n = t₀ + nΔ = nΔ; Δ_n = Δ = const.

This yields

⟨ΔB_n⟩ = 0 and ⟨(ΔB_n)²⟩ = Δ.

In the next section we shall derive (5.41) as the lowest order term of the Ito-Taylor expansion of the solutions to the SDE (5.40).

5.5. The Ito-Taylor Expansion of the Solution of a 1D SDE

We start here from the simplest case of an autonomous 1D SDE (5.40). In the case of a non-autonomous N-dimensional SDE, which is covered in Section 5.7, we need only generalize the ideas of the present derivations.

We integrate (5.40) over one step width and we obtain

x(t_{n+1}) = x(t_n) + ∫_{t_n}^{t_{n+1}} a(x_s)ds + ∫_{t_n}^{t_{n+1}} b(x_s)dB_s; x₀ = x(0); x_s = x(s).   (5.42)

Now we apply Ito's formula (1.99.3) to the function f(x), where the variable x satisfies the SDE (5.40). This leads to

df(x) = L₀f(x)dt + L₁f(x)dB_t;
L₀f(x) = a(x)f′(x) + ½b²(x)f″(x); L₁f(x) = b(x)f′(x).   (5.43)

The integration of the last line leads to

f(x_{n+1}) = f(x_n) + ∫_{t_n}^{t_{n+1}} [L₀f(x_s)ds + L₁f(x_s)dB_s].   (5.44)

If we specify (5.44) for the case f(x) = x, this equation reduces again to (5.42).

The next step consists in the application of (5.44) to f(x_s) := a(x_s) and f(x_s) := b(x_s) and the substitution of the corresponding result
into (5.42). Hence, we put

g(x_s) = g(x_n) + ∫_{t_n}^{s} [L₀g(x_u)du + L₁g(x_u)dB_u]; g = a or g = b.

Proceeding along these lines we find

x_{n+1} = x_n + a(x_n)Δ_n + b(x_n)ΔB_n + R₁;
R₁ = ∫_{t_n}^{t_{n+1}} ∫_{t_n}^{s} [L₀a(x_u)du ds + L₁a(x_u)dB_u ds + L₀b(x_u)du dB_s + L₁b(x_u)dB_u dB_s].   (5.45)

Equation (5.45) is the simplest nontrivial Ito-Taylor expansion of the solutions of (5.40). Neglecting the (first order) remainder R₁, we obtain the stochastic Euler formula (5.41).

We repeat this procedure, and this leads, with the application of the Ito formula to the function f(x_u) = L₁b(x_u) = b(x_u)b′(x_u), to

x(t_{n+1}) = x(t_n) + a(x_n)Δ_n + b(x_n)ΔB_n + b(x_n)b′(x_n)I_{1,1} + R₂;
I_{1,1} = ∫_{t_n}^{t_{n+1}} dB_s ∫_{t_n}^{s} dB_u = ∫_{t_n}^{t_{n+1}} dB_s (B_s − B_{t_n}) = ½[(ΔB_n)² − Δ_n],   (5.46)

where we have used the Ito integral (1.89).
If we proceed to second order, the remainder R₂ contains two second and one third order multiple Ito integrals that are based on the following integrals

I_{1,0} = ∫_{t_n}^{t_{n+1}} ∫_{t_n}^{s} du dB_s = ∫_{t_n}^{t_{n+1}} (s − t_n)dB_s,
I_{0,1} = ∫_{t_n}^{t_{n+1}} ∫_{t_n}^{s} dB_u ds = ∫_{t_n}^{t_{n+1}} (B_s − B_{t_n})ds,   (5.47)

with

I_{0,1} + I_{1,0} = Δ_n ΔB_n,   (5.48)

and

I_{1,1,1} = ∫_{t_n}^{t_{n+1}} dB_s ∫_{t_n}^{s} dB_u ∫_{t_n}^{u} dB_v = ∫_{t_n}^{t_{n+1}} dB_s ∫_{t_n}^{s} dB_u (B_u − B_{t_n}) = (1/6)[(ΔB_n)² − 3Δ_n]ΔB_n.   (5.49)

We will verify (5.48) and (5.49) in EX 5.3.

The application of (5.46) with R₂ = 0 yields the Milstein scheme

x_{n+1} = x_n + M(x_n, Δ_n, ΔB_n),   (5.50a)

with the Milstein operator

M(x_n, Δ_n, ΔB_n) = a(x_n)Δ_n + b(x_n)ΔB_n + ½b(x_n)b′(x_n)[(ΔB_n)² − Δ_n].   (5.50b)

Note that the Milstein scheme (5.50) is again a recursive method and can be considered as a generalization of the Eulerian scheme (5.41).

The numerical data revealed in Figures 2.1 and 2.2 for the Ornstein-Uhlenbeck problem (2.14) are computed with the Milstein routine (5.50), and the corresponding program is F8. The latter is a general solver for first order autonomous SDEs. We calculate in particular the solution of the generalized population problem (2.35a). A sample path of the numerical solution of this equation is plotted in Figure 5.3.

With the inclusion of more stochastic integrals, we obtain more information about sample paths of a solution. Hence we include more terms of the expansions of a(x) and b(x) in the remainder R₂ of (5.46). We postpone this cumbersome algebra to treat the general case of non-autonomous N-D SDEs. We can, however, infer from (5.45) that one term resulting from the expansion of L₀a(x_u) must be given by

∫_w^{t_{n+1}} ds ∫_w^{s} du L₀a(x_u) ≈ ½[a(w)a′(w) + ½b²(w)a″(w)]Δ_n²; w = t_n.   (5.51)
Fig. 5.3. Lower line: numerical average of the solution to (2.35a); middle (irregular) line: a SF of (2.35a); upper line: graph of the exponential exp(krt); r = 0.01, k = 100, u = 0.2, Ensem = 200, h = 0.001.

Proceeding along these lines we end up with the scheme

x_{n+1} = x_n + M(x_n, Δ_n, ΔB_n) + R(x_n, Δ_n, ΔB_n, ΔZ_n),   (5.52a)

with the operator

R(x_n, Δ_n, ΔB_n, ΔZ_n) = a′bΔZ_n + ½(aa′ + ½b²a″)Δ_n² + (ab′ + ½b²b″)(ΔB_nΔ_n − ΔZ_n) + (1/6)b(bb″ + b′²)[(ΔB_n)² − 3Δ_n]ΔB_n,   (5.52b)

where the drift and the diffusion coefficients and their derivatives depend on the argument x_n. In Equation (5.52b) a new RV appears that is defined by the first of the integrals in (5.47),

ΔZ_n = I_{1,0} = ∫_w^{t_{n+1}} dB_s (s − w),   (5.53)

where the integration boundaries are defined in (5.51).
We easily find the moments ⟨ΔZ_n⟩ = 0 and

Var(ΔZ_n) = ⟨∫_w^{t_{n+1}} dB_s (s − w) ∫_w^{t_{n+1}} dB_r (r − w)⟩ = ∫_w^{t_{n+1}} ds (s − w)² = Δ_n³/3,   (5.54a)

and the covariance

⟨ΔZ_nΔB_n⟩ = ⟨∫_w^{t_{n+1}} dB_s (s − w) ∫_w^{t_{n+1}} dB_u⟩ = ∫_w^{t_{n+1}} ds (s − w) = Δ_n²/2.   (5.54b)

Note also that Equation (1.93) states that the variable ΔZ_n is N(0, Δ_n³/3) distributed.

In order to perform a numerical simulation of the RV ΔZ_n we formulate the following theorem:

Theorem 5.2 (Numerical simulation of the RV ΔZ_n)

First we note that we can generate the Wiener variable ΔB_n in the form of a N(0, Δ_n) distributed variable. We obtain, from the polar Marsaglia (or the Box-Muller) method (5.34), two independent N(0, 1) distributed RVs, y₁ and y₂. Now we put

ΔB_n = √Δ_n y₁; ΔZ_n = ½Δ_n^{3/2}(y₁ + y₂/√3).   (5.55)

We obtain from (5.55)

⟨ΔZ_n⟩ = 0; ⟨(ΔZ_n)²⟩ = Δ_n³/3; ⟨ΔZ_nΔB_n⟩ = Δ_n²/2,

and this proves (5.54). ♦

We verify (5.52) to (5.54) numerically in program F9A. There we use the traditional mean of a RV x by averaging over K realizations of this variable,

⟨x⟩ = lim_{K→∞} (1/K) Σ_{k=1}^K x_k,   (5.56a)

where the RVs x_k are individual samples of x. We contrast this average in program F9B with the batch average. There we calculate
the mean of x by computing K batches, each of which consists of M samples of x and has a different seed of the random generator. This yields

⟨x⟩ = [1/(KM)] Σ_{k=1}^K Σ_{m=1}^M x_m^{(k)},   (5.56b)

where x_m^{(k)} represents the mth realization of the variable x in the kth batch. Note that (5.56a) does not coincide with (5.56b), not even for identical seeds of the individual batches. We propose to use F9B with K = M = 25 and compare the results with the ones of F9A with K = 625.

The numerical data for Figure 5.3 were calculated with the Milstein scheme (5.50) as well as with the higher order routine (5.52); the SFs as well as the averages coincide up to the first three digits. We also tried to compare the growth of the solution of the generalized population problem (2.35a) with the solutions of the usual growth problem (2.1). To accomplish this we use in (2.35a) rk = 1 and compare the corresponding numerical results with those of (2.1) with r = 1. We also note that (2.35a) can be written as

dX = rkX(1 − X/k)dt + uXdB_t,

and the deterministic limit of this SDE has a stationary point at X = k. The latter property prevents the solutions from tending asymptotically to each other. We can only expect an approximate coincidence in the first phase of the growth.

Example

The application of the routine (5.52) to the problem of Ornstein-Uhlenbeck (2.14) with a = m − x; b = const. yields

R(x_n, Δ_n, ΔB_n, ΔZ_n) = −bΔZ_n − ½(m − x_n)Δ_n²,
x_{n+1} = x_n + (m − x_n)(Δ_n − ½Δ_n²) + b(ΔB_n − ΔZ_n).   (5.57) ♦

The schemes discussed in this section belong to the class of explicit schemes, where the advanced solution x_{n+1} is given by
functions of the solution x_n. Explicit routines have the disadvantage of including a great variety of functions and their derivatives. An additional disadvantage of these schemes is their instability. Hence, we replace these routines in the next section with modified routines. A particular modification is given by an implicit scheme, which is known from deterministic problems to be less unstable.

However, a general comment about the use of numerical routines to approximate the solutions of SDEs must be given. To eliminate the possibility of spurious results, it is always advisable to confirm the numerical results of a particular routine with the numerical data obtained from another method.

We note also that we introduced at the beginning of this section the Ito-Taylor expansion, which is based on the application of the Ito formula (1.99.3). In the case of Stratonovich SDEs there is also a Stratonovich-Taylor expansion that relies on the Stratonovich differential (1.100). The latter approach is not covered in this book; the interested reader should consult the book of Kloeden et al. [2.5].

5.6. Modified 1D Milstein Schemes

We used the Milstein scheme in program F8 to solve the population growth problem (2.1) in the regime of the variable 0 ≤ x ≤ 10 and depicted the results in Figures 2.1 and 2.2. There we found that the numerically calculated moments agree very well with the theoretical ones. Neither did we come across any problems of divergence of the numerical solution. However, the situation changes dramatically if we wish to extend the regime of this solution beyond x > 10. The Milstein routine is unstable in this regime and results in a function that diverges rapidly.

The question arises of how to improve the stability of the Milstein scheme. Basically, there are two possibilities to modify a given routine, and we exemplify this for the Milstein routine. In the first case (i) we perturb the drift coefficient. In the second alternative (ii) we modify the coefficients that are based on the diffusion coefficient. We begin with the first alternative.
(i) An implicit 1D scheme

Here, we replace the Milstein routine (5.50) by

x_{n+1} = x_n + [αa(x_{n+1}) + (1 − α)a(x_n)]Δ_n + b(x_n)ΔB_n + ½b(x_n)b′(x_n)[(ΔB_n)² − Δ_n].   (5.58)

Note that (5.58) is for α ∈ [0, 1] a family of schemes. We call (5.58) in the limit α = 1 (α = 0) a fully implicit (an explicit) scheme, and α is the degree of implicitness. Note also that implicit effects refer exclusively to the drift coefficient, whereas the diffusion coefficient is always taken at x_n.

In an implicit scheme we must calculate the solution x_{n+1} as the numerical solution of the (generally nonlinear) Equation (5.58) with given x_n. To solve this aspect of the problem one usually uses a Newton iteration routine. However, convergence problems arise in this iteration, in particular if the stochastic influences are too strong. Thus, it is convenient to take advantage of this method, despite its popularity in deterministic problems, only for the case of SDEs with a linear drift coefficient, where no iterations are needed.

In EX 5.4 we use (5.58) for the Ornstein-Uhlenbeck SDE (2.14). However, the employment of the corresponding program F11 shows that the scheme (5.58) again leads to unstable results in the regime x > 10. This contrasts with the result of the higher order explicit scheme (5.52) for the same problem, and we must conclude that the implicit routine (5.58) is inconsistent with a 1D autonomous Ito SDE.

(ii) Modification of the diffusive terms

Here we use for an SDE with a variable drift coefficient the scheme proposed by Kloeden (see [2.5])

x_{n+1} = x_n + a(x_n)Δ_n + b(x_n)ΔB_n + ½[(ΔB_n)² − Δ_n][b(S_n) − b(x_n)]/√Δ_n,   (5.59)

with the support value

S_n = x_n + a(x_n)Δ_n + b(x_n)√Δ_n.   (5.60)

To motivate (5.59) we note that a Taylor expansion of b(S_n) for the usually small parameter Δ_n reproduces (5.50) to leading order.
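A sketch of one step of the derivative-free scheme (5.59)-(5.60) follows, applied to the test SDE (5.61) discussed next (in Python; program F12 is the book's corresponding FORTRAN routine):

```python
import math, random

rng = random.Random(7)

def modified_milstein_step(x, a, b, dt):
    """One step of (5.59) with the support value (5.60); the difference
    b(S_n) - b(x_n) replaces the derivative b'(x_n) of (5.50b)."""
    dB = rng.gauss(0.0, math.sqrt(dt))
    s = x + a(x) * dt + b(x) * math.sqrt(dt)          # support value (5.60)
    return (x + a(x) * dt + b(x) * dB
            + 0.5 * (dB * dB - dt) * (b(s) - b(x)) / math.sqrt(dt))

m, beta = 3.0, 1.0      # SDE (5.61): dx = (m - x)dt + beta sin(x) dB_t
x, dt = 0.0, 0.005
for _ in range(int(30 / dt)):
    x = modified_milstein_step(x, lambda y: m - y, lambda y: beta * math.sin(y), dt)
print(x)   # averages over many such paths tend to <x> = m - (m - x0)exp(-t)
```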
Fig. 5.4. A sample function and the mean value of the SDE (5.61); m = 3, β = 1, Δ_n = 0.005, Ensem = 200.

We test (5.59) and (5.60) successfully in program F12 for the SDE with variable drift coefficient

dx = (m − x)dt + β sin(x)dB_t; m, β = const.   (5.61)

A particular test criterion for applicability is the agreement with respect to the average, since we obtain from (5.61) [in analogy to (2.17a)] ⟨x⟩ = m − (m − x₀)exp(−t). In Figure 5.4 we reveal a graph of a particular SF and depict the averages, where the exact and the numerical one almost coincide. Thus, we may conclude that the routine (5.59) represents a stable modification of the Milstein scheme.

5.7. The Ito-Taylor Expansion for N-dimensional SDEs

We consider now the general case of non-autonomous N-dimensional SDEs with R ≥ 1 independent Wiener processes. We introduce here the nomenclature that

g_k(ξ_t, t) = g_k(x₁, ..., x_N, t); ξ_t = (x₁(t), ..., x_N(t)),

is the kth component of the vector function g at the space-time position (ξ_t, t).
We write now the N-dimensional SDE (1.123) in the form

dx_k(t) = a_k(ξ_t, t)dt + b_{kr}(ξ_t, t)dB_t^r; k = 1, ..., N; r = 1, ..., R.   (5.62)

For the differential of the vector function f_k(ξ_t, t) we now find, with (1.127), the expression

df_k(ξ_t, t) = [L₀(ξ_t, t)f_k(ξ_t, t)]dt + [L_r(ξ_t, t)f_k(ξ_t, t)]dB_t^r,   (5.63)

with

L₀ = ∂/∂t + a_k ∂/∂x_k + ½b_{mr}b_{nr} ∂²/(∂x_m∂x_n); L_r = b_{mr} ∂/∂x_m,   (5.64)

where the functions a_k, b_{mr}, f_k are taken at the space-time position (ξ_t, t), and B_t¹, ..., B_t^R represent a set of R independent Brownian motions.

We can integrate (5.63) formally, and the result is

f_k(ξ_t, t) = f_k(ξ₀, a) + ∫_a^t {L₀(ξ_s, s)f_k(ξ_s, s)ds + L_r(ξ_s, s)f_k(ξ_s, s)dB_s^r},   (5.65)

t₀ = a; ξ₀ = (x₁(a), ..., x_N(a)).

We apply now (5.65) to the functions x_k to obtain L₀x_k = a_k, L_r x_k = b_{kr}, and we infer from (5.65)

x_k(t) = x_k(a) + ∫_a^t {a_k(ξ_s, s)ds + b_{kr}(ξ_s, s)dB_s^r}.   (5.66)

However, we could obtain Equation (5.66) in a simpler way as the result of a formal integration of (5.62).

We apply (5.65) to calculate the functions a_k(ξ, t) and b_{kr}(ξ, t) and substitute the results into (5.66). Proceeding in this way
we obtain

a_k(ξ_s, s) = a_k(ξ₀, a) + ∫_a^s {L₀(ξ_u, u)a_k(ξ_u, u)du + L_r(ξ_u, u)a_k(ξ_u, u)dB_u^r},   (5.67a)

and

b_{kw}(ξ_s, s) = b_{kw}(ξ₀, a) + ∫_a^s {L₀(ξ_u, u)b_{kw}(ξ_u, u)du + L_r(ξ_u, u)b_{kw}(ξ_u, u)dB_u^r}.   (5.67b)

The substitution of (5.67) into (5.66) yields

x_k(t) = x_k(a) + (t − a)a_k(ξ₀, a) + (B_t^r − B_a^r)b_{kr}(ξ₀, a) + R_k^{(1)},   (5.68)

with

R_k^{(1)} = ∫_a^t ∫_a^s {(L₀a_k)du ds + (L_r a_k)dB_u^r ds + (L₀b_{kr})du dB_s^r + (L_w b_{kr})dB_u^w dB_s^r},   (5.69)

where all functions and operators are taken at the space-time location (ξ_u, u). Equation (5.68) with R_k^{(1)} = 0 represents the non-autonomous N-dimensional Euler scheme [see (5.45) for the 1D autonomous case].

To obtain the N-dimensional Milstein routine we expand the 3rd order tensor L_w b_{kr} in (5.69) into an Ito-Taylor series and truncate after the first term. This leads to

T_{kr}^w(ξ_u, u) = b_{mw}(ξ_u, u) ∂b_{kr}(ξ_u, u)/∂x_m ≈ T_{kr}^w(ξ₀, a) = b_{mw}(ξ₀, a) ∂b_{kr}(ξ₀, a)/∂x_m.

Hence, we obtain from the last term of (5.69)

R_k^{(2)} = T_{kr}^w(ξ₀, a) ∫_a^t dB_s^r ∫_a^s dB_u^w = T_{kr}^w(ξ₀, a) ∫_a^t dB_s^r (B_s^w − B_a^w).   (5.70)

The integral on the right hand side of (5.70) is again a new RV

K^{rw} = ∫_a^t dB_s^r (B_s^w − B_a^w).   (5.71a)
We know only one of its diagonal elements, namely [see (5.46)]

K^{rr} = ∫_a^t dB_s^r (B_s^r − B_a^r) = I_{1,1}.   (5.71b)

For the sake of simplicity, we reduce our goal to the investigation of an N-D non-autonomous SDE in the presence of only one Wiener process (this means (5.62) with R = 1, B_t¹ = B_t), and we refer the reader for more details of the tensorial RV (5.71a) to the book of Kloeden et al. [2.5].

Now we can formulate the N-dimensional Milstein scheme for the case R = 1; this yields

x_k(t_{n+1}) = x_k(t_n) + M_k(ξ_{t_n}, Δ_n, ΔB_n),   (5.72)

with the N-D Milstein operator

M_k(ξ_{t_n}, Δ_n, ΔB_n) = a_kΔ_n + b_kΔB_n + I_{1,1} b_m ∂b_k/∂x_m,   (5.73)

where the coefficients a_k and b_k refer to the space-time position (ξ_{t_n}, t_n).

We apply the Milstein routine (5.73) now in the following example.

Example 1 (The linear pendulum)

The linear pendulum with stochastic damping was analyzed in Section 2.3.2. We use for a 2D SDE the nomenclature

x₁(t_s) = x_s; x₂(t_s) = y_s.   (5.74)

We infer from (2.60) with α ≠ 0, β = γ = 0

dx = y dt; dy = −x dt − αy dB_t,   (5.75)

and this means

a₁ = y, b₁ = 0; a₂ = −x, b₂ = −αy.   (5.76)

With (5.72) to (5.76) we find

x_{n+1} = x_n + y_nΔ_n,
y_{n+1} = y_n − x_nΔ_n − αy_nΔB_n + ½α²y_n[(ΔB_n)² − Δ_n],   (5.77)

where the summation convention is not used.
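A minimal sketch of the recursion (5.77) (in Python; the book's implementation is part of the FORTRAN program F13):

```python
import math, random

rng = random.Random(0)

def pendulum_milstein(x, y, alpha, dt, n_steps):
    """2D Milstein iteration (5.77) for dx = y dt, dy = -x dt - alpha y dB_t."""
    for _ in range(n_steps):
        dB = rng.gauss(0.0, math.sqrt(dt))
        x, y = (x + y * dt,                       # simultaneous update: both lines use x_n, y_n
                y - x * dt - alpha * y * dB
                + 0.5 * alpha**2 * y * (dB * dB - dt))
    return x, y

print(pendulum_milstein(x=1.0, y=0.0, alpha=0.1, dt=0.005, n_steps=2000))
```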
We solve the iteration problem (5.77) for an equidistant decomposition of the time-axis, Δ_n = Δ = const., in program F13. The corresponding initial condition reads x₀ = x(0), y₀ = y(0). The results for the mean and variance of this process are already revealed in Figures 2.3 and 2.4 in the regime 0 ≤ t ≤ 10. If we wish to extend this regime, we find that the scheme (5.77) is not stable, and we will use an improved routine in the next section.

Also, we solve the nonlinear pendulum (2.105) with the Milstein scheme. This SDE is incorporated in program F13 as well. ♦

5.8. Higher Order Approximations

A higher order scheme for the non-autonomous N-D SDE is derived in EX 5.5. We give here only the result

x_{n+1}^k = x_n^k + a_kΔ_n + b_kΔB_n + ½(L₀a_k)Δ_n² + (L₀b_k)I_{0,1} + (L₁a_k)I_{1,0} + (L₁b_k)I_{1,1} + (L₁L₁b_k)I_{1,1,1},   (5.78a)

where we use the nomenclature [see (5.74)]

x_s^k = x_k(t_s), k = 1, 2, ..., N; s = 0, 1, ...,   (5.78b)

and where all coefficient functions, operators a_k, b_k, L₀, L₁, and all integrals (5.46)-(5.49) and (5.53) depend on the space-time variable (x_n¹, ..., x_n^N, t_n). Equation (5.78) is the N-D generalization of (5.52) for a non-autonomous SDE. We also employ (5.48) and (5.53) to obtain the relation

I_{0,1} = ΔB_nΔ_n − ΔZ_n.   (5.79)

We can easily write the compressed equation (5.78) in a clean-cut formulation if we use the definitions of the operators (5.64).

Example (The linear and the nonlinear pendulum)

Using (5.76) we infer from (5.64)

L₀ = y ∂/∂x − sin(x) ∂/∂y + ½(αy)² ∂²/∂y²; L₁ = −αy ∂/∂y.   (5.80)
Thus, we obtain from (5.78)

x_{n+1} = x_n + y_n(Δ_n − αI_{1,0}) − ½sin(x_n)Δ_n²,
y_{n+1} = y_n − sin(x_n)Δ_n − ½y_n cos(x_n)Δ_n² − αy_nΔB_n + ½α²y_n[(ΔB_n)² − Δ_n] + α sin(x_n)I_{0,1} − (1/6)α³y_n[(ΔB_n)² − 3Δ_n]ΔB_n.   (5.81)

Equation (5.81) applies to the nonlinear case. In the linear case we must use in (5.81) the linearization cos(x_n) := 1, sin(x_n) := x_n. The corresponding numerical work is done in program F13.

Finally, we mention a modified Milstein regime for autonomous systems that generalizes the 1D scheme (5.60). With the use of the definitions (5.78b) and the comments that follow, we write for N-dimensional systems

x_{n+1}^k = x_n^k + a_kΔ_n + b_kΔB_n + ½[b_k(Y₁⁺, ..., Y_N⁺) − b_k(Y₁⁻, ..., Y_N⁻)]I_{1,1}/√Δ_n;
Y_k^± = x_n^k + a_kΔ_n ± b_k√Δ_n, k = 1, 2, ..., N.   (5.82)

The motivation for this scheme comes again from a Taylor expansion of the last term in (5.82).

We compare now the numerical data obtained for the variance V₂₂(t) of the linearized pendulum with the analytic formula (2.82). We apply (i) the Milstein regime (5.72), (ii) the higher order scheme (5.78), (iii) the modified Milstein scheme (5.82), and an implicit scheme. The latter is established in analogy to (5.58) and is given for a general 2D SDE in the form

x_{n+1} = x_n + [γa₁(ξ_{n+1}) + (1 − γ)a₁(ξ_n)]Δ_n + Z_x,
y_{n+1} = y_n + [γa₂(ξ_{n+1}) + (1 − γ)a₂(ξ_n)]Δ_n + Z_y,   (5.83)

where (Z_x, Z_y) are the components of the stochastic terms and γ is the implicitness parameter. For the same reasons as discussed
in connection with (5.58), we use (5.83) only for SDEs with linear drift coefficients a₁, a₂.

We specialize (5.83) to the case of the linear pendulum, and this yields

x_{n+1} = (1 + γ²Δ_n²)^{−1}{x_n + (1 − γ)y_nΔ_n + Z_x + γΔ_n[y_n + (γ − 1)Δ_n x_n + Z_y]},
y_{n+1} = (1 + γ²Δ_n²)^{−1}{y_n + (γ − 1)x_nΔ_n + Z_y + γΔ_n[−x_n + (γ − 1)Δ_n y_n − Z_x]}.   (5.84)

The numerical data are produced, in the cases (i) to (iii), with the program F13, and the implicit scheme is executed with the aid of program F14.

The comparison of the numerical data shows that, at least for the variance of the linear pendulum, the Milstein routine (5.72) and the modified Milstein routine (5.82) fail to give sufficient accuracy. By contrast, we see that the higher order scheme (5.78) and the implicit routine (5.84) reproduce satisfactorily the theoretical prediction (2.82) in the regime 30 ≤ t ≤ 50. The corresponding numerical solutions of the routine (5.78) and of the implicit scheme (5.84) are almost identical. Thus, in Figure 5.5 a comparison of the variance produced by (5.78) with the analytic values (2.82) is shown.

Fig. 5.5. A comparison of the analytic variance (2.82) (continuous line) and the numerical data obtained by the routine (5.78); α = 0.1, Δ_n = 0.005.
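A sketch of the implicit update (5.84) (in Python; program F14 is the book's FORTRAN version). The stochastic terms are not specified in (5.84); here we assume, for illustration, Z_x = 0 and the Milstein noise terms of (5.77) for Z_y:

```python
import math, random

rng = random.Random(1)

def implicit_step(x, y, dt, gamma, alpha):
    """Implicit scheme (5.84) for the linear pendulum (5.75)."""
    dB = rng.gauss(0.0, math.sqrt(dt))
    zx = 0.0                                            # assumption: no noise in the x equation
    zy = -alpha * y * dB + 0.5 * alpha**2 * y * (dB * dB - dt)
    den = 1.0 + (gamma * dt) ** 2
    xn = (x + (1 - gamma) * y * dt + zx
          + gamma * dt * (y + (gamma - 1) * dt * x + zy)) / den
    yn = (y + (gamma - 1) * x * dt + zy
          + gamma * dt * (-x + (gamma - 1) * dt * y - zx)) / den
    return xn, yn

x, y = 1.0, 0.0
for _ in range(int(50 / 0.005)):
    x, y = implicit_step(x, y, dt=0.005, gamma=1.0, alpha=0.1)
print(x, y)
```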
5.9. Strong and Weak Approximations and the Order of the Approximation

We begin with a definition.

Definition 5.1 (Absolute, global and local error)

We consider an n-D SDE for the variable X(t) ∈ Rⁿ with the ICs X(t₀) = X₀ ∈ Rⁿ in the interval t ∈ [0, T]. A discrete approximation Y(t) ∈ Rⁿ of the solution X(t) to this SDE has a local error that is defined by

e_l = ⟨|X(t_j) − Y(t_j)|⟩; 0 ≤ t_j ≤ T.   (5.85)

The global error is given by

e_g = Σ_{k=1}^n ⟨|X(t_k) − Y(t_k)|⟩; t₀ = 0 < t₁ < ··· < t_n = T,   (5.86)

whereas the absolute error is defined by

e_a = ⟨|X(T) − Y(T)|⟩.   (5.87)

In (5.85) to (5.87) we used the symbol |·|, which is an abbreviation for the norm of a vector function. Thus, for instance, the absolute error is written in full length as

e_a = ⟨√(Σ_{k=1}^n [X_k(T) − Y_k(T)]²)⟩,

where the subscript k labels the vector components.

The local error (5.85) measures the closeness of the solution and its discretization at just one internal point. The global and the absolute error measure the pathwise closeness of the exact solution and the approximation as the sum over the internal points and at the endpoint of the interval, respectively. ♦

In the following we will investigate the dependence of the absolute error on the step width of a particular discretization routine. To realize this task with computer experiments, we choose a particular SDE with a known exact solution and approximate the solution with a given routine. Then we choose M batches of N simulations, and we indicate the trajectories of the kth simulation of the jth batch at
the end of the interval by X_{j,k}(T) and Y_{j,k}(T). The average values (batch averages of the absolute errors)

ε_j = (1/N) Σ_{k=1}^N |X_{j,k}(T) − Y_{j,k}(T)|, j = 1, 2, ..., M,   (5.88)

are for N ≫ 1 independent Gaussian RVs. The batches are chosen to construct, with the Student t-distribution (a distribution that we will discuss in a moment), confidence intervals for a sum of approximately Gaussian RVs with unknown variance. In this way we compute the batch average

η = (1/M) Σ_{j=1}^M ε_j = [1/(MN)] Σ_{j=1}^M Σ_{k=1}^N |X_{j,k}(T) − Y_{j,k}(T)|.   (5.89)

We take advantage of the latter average to obtain the estimate of the variance of the batch averages

σ̂² = [1/(M − 1)] Σ_{j=1}^M (ε_j − η)².   (5.90)

It is convenient to apply the Student distribution with M − 1 DOF to achieve a 100(1 − α)% confidence interval for η in the form

(η − Δη, η + Δη), Δη = K_{1−α,M−1} σ̂/√M,   (5.91)

where the parameter K_{1−α,M−1} is calculated from the Student distribution. As an example we list the following data: for α = 0.05 and M = 20 we obtain K_{0.95,19} = 2.09, and the absolute error will lie in the interval (5.91) with the probability 1 − α = 0.95.

The Student (or t-) distribution governs the following ratio: X is an N(0, 1) distributed RV and χ² is a RV following the chi-squared distribution (5.1a) with DOF-parameter s. Then we consider the ratio X/√(χ²/s). The latter variable has the PDF [see (1.1)]

F(X/√(χ²/s)) = Pr(X/√(χ²/s) ≤ t).   (5.92)
We give here the following representation of this PDF for even values of the DOF (see Abramowitz & Stegun [1.3])

F(X/√(χ²/s)) = sin(θ){1 + ½cos²(θ) + (1·3)/(2·4) cos⁴(θ) + ··· + [1·3···(s − 3)]/[2·4···(s − 2)] cos^{s−2}(θ)};   (5.93)
θ = arctan(t/√s), (s even),

with an analogous formula for odd values of s.

Definition 5.2 (Strong convergence and order of convergence)

Strong convergence is a pathwise convergence characterized by the error criterion (5.87). A discrete approximation scheme with a maximum step size δ converges with the order γ > 0 at a time T if there is a constant C independent of δ such that

e_a = ⟨|X(T) − Y(T)|⟩ ≤ Cδ^γ.   (5.94) ♦

In the framework of weak convergence we are not interested in a pathwise approximation of the stochastic process. We focus there on the calculation of the PD, or its lowest moments, or functionals of the PD. Hence we do not compare the difference between exact solution and approximation as in (5.87); rather we concentrate on the errors of the moments. For the lowest moment, the mean, we introduce the mean error

e_m = |⟨X(T)⟩ − ⟨Y(T)⟩|.   (5.95)

Definition 5.3 (Order of weak convergence)

Using the weak convergence criterion (5.95), we define the order γ > 0 by

|⟨g(X(T))⟩ − ⟨g(Y(T))⟩| ≤ Cδ^γ,   (5.96)

where g can be a class of polynomials. Clearly, the simplest definition of the order comes with the application of the linear polynomial g(X) = X. ♦
To perform numerical experiments on strong and weak convergence, we use in program F15 the population growth problem (2.1). To this end we take the logarithm of (5.94) and (5.96). The slope of the corresponding curve then equals the order of the convergence γ. The results are revealed in Figures 5.6 and 5.7, where we plot the logarithm of the errors versus the time steps. It is convenient to apply negative powers of 2 for the time steps and use the log₂ of the error.

Fig. 5.6. Absolute error for the case of the population growth problem (2.1). Upper curve: Euler scheme; lower curve: scheme (5.52). Parameters: r = 1, u = 0.1, N(0) = 1, T = 1.

Fig. 5.7. Mean error for the case of the population growth problem (2.1). Curves and parameters as in Figure 5.6.
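The experiment behind Figures 5.6 and 5.7 can be sketched as follows (in Python instead of the FORTRAN program F15; here the exact solution is driven by the accumulated increments, cf. (5.97) below). The slopes of log₂(error) against log₂(Δ) estimate the strong and weak orders γ:

```python
import math, random

def errors_for_step(dt, n_paths=5000, r=1.0, u=0.1, x0=1.0, T=1.0, seed=0):
    """Euler scheme (5.41) for dx = r x dt + u x dB_t versus the exact
    solution x(T) = x0 exp[(r - u^2/2)T + u B_T] on the same Wiener path."""
    rng = random.Random(seed)
    n = int(round(T / dt))
    e_abs = mean_num = mean_ex = 0.0
    for _ in range(n_paths):
        x, bT = x0, 0.0
        for _ in range(n):
            dB = rng.gauss(0.0, math.sqrt(dt))
            x += r * x * dt + u * x * dB
            bT += dB
        xe = x0 * math.exp((r - 0.5 * u * u) * T + u * bT)
        e_abs += abs(xe - x)
        mean_num += x
        mean_ex += xe
    return e_abs / n_paths, abs(mean_ex - mean_num) / n_paths

for k in (2, 3, 4, 5, 6):                 # time steps dt = 2^-k
    ea, em = errors_for_step(2.0 ** (-k))
    print(-k, math.log2(ea), math.log2(em))
```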
However, there is an ambiguity in the calculation of the strong order of convergence γ in (5.94) related to the computation of the exact solution X(T). We discuss this problem only for a 1D SDE and use it to calculate the numerical solution Y(T) alternatively by (5.41), (5.50) or (5.52) (Euler, Milstein or higher order scheme). Starting from the initial condition Y₀ we obtain after K equidistant steps Y_K(T), T = KΔ, where we need for each individual step the quantity ΔB_k = (z)_k √Δ; k = 1, 2, ..., K, with (z)_k as the kth simulation of a N(0, 1) RV. At the end of the interval t = T we have for the WP

B_T = Σ_{k=1}^K ΔB_k = √Δ Σ_{k=1}^K (z)_k.   (5.97)

To determine the strong order of convergence we need next the exact solution X(T), which is in many cases a function of B_T. There are different ways to determine the latter quantity. We propose the following two alternatives:

(i) As in program F15, we calculate X(T) with the aid of B_T = √T (z)_{K+1}, where we use the (K + 1)th simulation of the RV z. This is the simulation taken right after the computation of (5.97). This concept was employed in the determination of the data for Figures 5.6 and 5.7.

(ii) Here we use (5.97) and we easily derive

⟨B_T⟩ = 0; ⟨B_T²⟩ = Δ Σ_{k,m=1}^K ⟨(z)_m(z)_k⟩ = Δ Σ_{k=1}^K ⟨(z)_k²⟩ = ΔK = T.

This concept has the methodological advantage of assigning to the strong order of convergence for the exact 1D SDE

dx = αdt + βdB_t; α, β = const.,   (5.98)

the value γ = ∞. Furthermore, we obtain with this concept trends of the order of convergence that apply to a general class of 1D SDEs. The Eulerian scheme leads for instance to γ = 0.5 and γ = 1 for strong and weak convergence, respectively.
We will give more hints on the determination of the order of convergence in EX 5.10.

The important question is, however: what is the goal of the numerical simulation? Is a good pathwise approximation required, or is an approximation of some functional of the PD, such as the moments, sufficient?

Finally, we wish to add some important references from the vast literature about strong and weak convergence schemes and implicit routines. We mention here only the following articles: Milstein [5.10], [5.11], Drummond and Mortimer [5.12], Kloeden and Platen [5.13], [5.14], Saito and Mitsui [5.15] and Newton [5.16].

Exercises

EX 5.1. Given the DOF m, calculate the mean value and the variance of the PD (5.1a). Verify ⟨x⟩ = m; Var(x) = 2m.

EX 5.2. Use the asymptotic method of integration by parts to approximate (5.10). Using this method and applying subsequently Newton's iteration, determine the parameter b(α).

Hint: An integral such as the one in (5.10) can be transformed to

J(a) = ∫_a^∞ exp(−x²/λ)dx = −(λ/2) ∫_a^∞ (1/x)(d/dx) exp(−x²/λ)dx; 0 < a, λ < ∞.

Use integration by parts to find an asymptotic expansion of this integral. The subsequent Newton iteration is performed in program F3.

EX 5.3. Verify (5.48) and (5.49).

Hint: Use (1.89), (1.105b), and (1.110).

EX 5.4. Find an implicit solver for the Ornstein-Uhlenbeck problem (2.14) based on the Milstein routine and compare your program with F11.

Hint: Use (5.58) to formulate the solver.
EX 5.5. Derive the higher order scheme for a non-autonomous N-D SDE (5.62).

Hint: Expand the error term (5.69).

EX 5.6. Find a numerical solution of the SDE (A.1), with λ := n (why?), of the appendix of Chapter 3 based on the routine (5.78). Here we obtain

a₁ = y, b₁ = 0; a₂ = −g(y)y − f(x), b₂ = √(2λT) = const.,
L₀ = y ∂/∂x + a₂ ∂/∂y + λT ∂²/∂y²; L₁ = √(2λT) ∂/∂y,

and hence (L₀b_k) = (L₁b_k) = (L₁L₁b_k) = 0, k = 1, 2. Equation (5.78) leads to

x_{n+1} = x_n + y_nΔ_n + ½a₂Δ_n² + √(2λT)ΔZ_n,
y_{n+1} = y_n + a₂Δ_n + √(2λT)ΔB_n + ½(L₀a₂)Δ_n² + (L₁a₂)ΔZ_n.

Find L₀a₂ and L₁a₂ and solve the numerical system using the parameters given in Figure 3.4 with step sizes Δ_n = 0.01 and 0.005. Plot the results and compare them with the phase diagram in Figure 3.4.

EX 5.7. Apply the scheme (5.78) to solve the problem of the Brownian bridge (2.38). Use the step sizes Δ_n = 0.01 and 0.005 and the parameters r = 1 and x(0) = 0.

Hint: In the case of a 1D system we have

L₀ = ∂/∂t + a ∂/∂x + ½b² ∂²/∂x²; L₁ = b ∂/∂x.

EX 5.8. Use the Milstein (5.73) and the higher order scheme (5.78) to approximate the solution of the stochastic Brusselator problem

dx = a(x, y)dt + αc(x)dB_t,
dy = −[x + a(x, y)]dt − αc(x)dB_t,
a(x, y) = (β − 1)x + βx² + (x + 1)²y;
c(x) = x(1 + x); α, β = const.
The deterministic Brusselator equation (α = 0) was developed on the occasion of a scientific congress in Brussels, Belgium, as a simple model for bifurcations in chemical reactions.

Solve this problem by developing the corresponding user-supplied subroutines for the program F13. Use step widths Δ_n = 0.005, 0.01, the ICs x(0) = −0.1, y(0) = 0, and the parameters α = 0.1, β = 2. Plot the results in the form of a phase diagram (see Figure 3.4) and find traces of the bifurcations.

EX 5.9. Solve numerically the stochastic Lorenz equations (B.26) of Chapter 3. To achieve this goal, generalize F13 to the case of 3D SDEs. Use the step widths and ICs of EX 5.8 with z(0) = 0 and apply the parameters b = 1, r = 0.2, s = 0.1, α = 0.1. Compare your program with F17.

EX 5.10. Modify the program F15 to employ (5.97) for the determination of the exact solution X(T). Use this concept after the computation of each individual batch in F15. With this modification find:

a) The strong and weak convergence orders that replace the data of Figures 5.6 and 5.7 in the case of the population growth problem that was originally considered in F15.

b) Consider this alternative of F15 for the Euler routine (5.41) and calculate the strong order of convergence γ for the case of the SDE (5.98).

c) Another 1D SDE with a simple exact solution is given in EX 2.1(iii). Determine again the strong and weak orders of convergence.
    • REFERENCESIntroduction[1-1[i-a:[1.3[1-4[1[1[i[1.9[1.10;A. Einstein, Ann. Physik, 17, 549 (1905).M. von Smoluchowsky, Ann. Physik, 21, 576 (1906).P. Langevin, Comptes Rendus Acad. Sci. (Paris), 146, 530 (1908).G. E. Uhlenbeck and L. S. Ornstein, Phys. Rev., 34, 823 (1930).E. L. ONeill, Introduction to Statistical Optics (Dover Publications, 1991).F. G. Stremler, Introduction to Communication Systems (Addison-WesleyPublishing Company, 1990).L. Hopf, Common Appl. Math., 1, 53 (1948).R. H. Kraichnan, Fluid. Mech., 67, 155 (1975).O. V. Rudenko and A. S. Chirkin, Soviet Phys. Acoustics, 19, 64 (1974).W. A. Wojczinsky, Burgers-KPZ Turbulence, Lecture Notes in Mathemat-ics 1700 (Springer-Verlag, 1998).Chapter 1[1.1] K. L. Chung, Elementary Probability Theory with Stochastic Process(Springer-Verlag, New York, 1979).[1.2] L. Arnold, Stochastic Differential Equations (J. Wiley & Sons, New York,1974).[1.3] M. Abramowitz and I. A. Stegun, Handbook of Mathematical Functions(Dover, 1964).[1.4] Y. S. Chow and H. Teichler, Probability Theory (Springer-Verlag, NewYork, 1978).[1.5] J. Doob, Stochastic Processes (John Wiley, 1953).[1.6] N. Ikeda and S. Watanabe, Stochastic Differential Equations and DiffusionProcesses (North-Holland, Kodansha, 1981).[1.7] T. H. Rydberg, The normal inverse Gaussian Levy Process: Simulationand approximation, Commun. Statist.-Stoch. Models, 13, 887-910 (1997).[1.8] B. 0ksendahl, Stochastic Differential Equations. An Introduction withApplications (Springer-Verlag, 1998).[1.9] Z. Schuss, Theory and Applications of Stochastic Differential Equations(John Wiley, 1980).205
    • 206 Stochastic Differential Equations in Science and Engineering[1.10] K. L. Chung and F. Aitsahia, Elementary Probability (Springer-Verlag,2003).[1.11] S. Ross, Probability Models (Academic Press, 2003).[1.12] P. Mallivan, Stochastic Analysis (Springer-Verlag, New York, 1997).[1.13] J. Pitman, Probability (Springer-Verlag, New York, 1997).[1.14] A. N. Shiryaev, Probability (Springer-Verlag, New York, 1996).[1.15] I. M. Ryshik and I. S. Gradstein, Tables of Sums, Products and Integrals(VEB Deutscher Verlag Wissenschaften, Berlin, 1963).[1.16] P. Bremaud, Markov Chains, Gibbs Fields, Monte Carlo Simulation andQueues (Springer-Verlag, New York, 1999).[1.17] A. R. Kerstein, Linear-eddy modelling of turbulent transport, Part 6.Microstructure of diffusive scalar mixing fields, J. Fluid Mech., 231,361-394 (1991).[1.18] A. R. Kerstein, One-dimensional turbulence: Model formulation and appli-cation to homogeneous turbulence, shear flows and buoyant stratified flows,J. Fluid Mech., 392, 277-334 (1999).Chapter 2[2.1] H. Risken, The Fokker-Planck Equation (Springer-Verlag, 1984).[2.2] D. A. McQuarrie, Statistical Mechanics (Harper and Row, New York, 1973).[2.3] T. C. Gard, Introduction to Stochastic Differential Equations (Dekker,1988).[2.4] R. L. Stratonovich, Topics in the Theory of Random Noise, Vol. 1 (Gordonand Breach, 1963).[2.5] P. E. Kloeden, E. Platen and H. Schurz, Numerical Solution of StochasticDifferential Equations (Springer-Verlag, 1994).[2.6] J. K. Hale and H. Ko§ak, Dynamics and Bifurcation (Springer-Verlag, 1991).[2.7] I. Karatzas and S. E. Shreve, Brownian Motion and Stochastic Calculus(Springer-Verlag, 1997).Chapter 3[3.1] N. G. V. Van Kampen, Stochastic Processes in Physics and Chemistry(North-Holland Amsterdam, 1992).[3.2] S. R. Salinas, Introduction to Statistical Physics (Springer-Verlag, 2001).[3.3] C. M. Bender and S. A. Orszag, Advanced Mathematical Methods for Sci-entist and Engineers (McGraw-Hill, New York, 1978).[3.4] L. Arnold, Random Dynamical Systems (Springer-Verlag, Berlin, 1998).[3.5] S. Wiggins, Introduction to Nonlinear Dynamical Systems and Chaos(Springer-Verlag, New York, 1990).[3.6] H. Crauel and F. Flandoli, Additive noise destructs a pitchfork bifurcation,J. Dynamics Differential Equations, 10, 259-274 (1998).
[3.7] H. Rong, G. Meng, X. Wang, W. Xu and T. Fang, Invariant measures and Lyapunov exponents for stochastic Mathieu system, Nonlinear Dynamics, 30, 313-321 (2002).
[3.8] A. H. Nayfeh, Perturbation Methods (John Wiley and Sons, New York, 1973).
[3.9] W. V. Wedig, Invariant measures and Lyapunov exponents for stochastic generalized parameter fluctuations, Structural Safety, 8, 13-25 (1990).
[3.10] S. Rajan and H. G. Davies, Multiple time scaling and the response of a Duffing oscillator to narrow band excitations, J. Sound Vibrations, 123, 497-506 (1988).
[3.11] A. H. Nayfeh and S. J. Serban, Response statistics to combined deterministic and random excitations, Int. J. Nonlinear Mechanics, 25, 493-509 (1990).
[3.12] M. Van Dyke, Perturbation Methods in Fluid Mechanics (Parabolic Press, Stanford, CA, 1975).
[3.13] E. Zauderer, Partial Differential Equations (John Wiley and Sons, New York, 1989).
[3.14] E. Ben-Jacob, D. J. Bergman, B. J. Matkowsky and Z. Schuss, Thermal shot effects and nonlinear oscillations, Annals of the New York Academy of Science, 410, 323-337 (1983).
[3.15] P. Plaschko, Deterministic and stochastic oscillations of a flexible cylinder in arrays of static tubes, Nonlinear Dynamics, 30, 337-355 (2002).
[3.16] P. Glendinning, Stability, Instability and Chaos (Cambridge University Press, Cambridge, UK, 1994).
[3.17] R. Z. Khasminskiy, Stability of Systems of Differential Equations in the Presence of Random Noise (Nauka Press, Moscow, USSR, 1969).
[3.18] W. Ebeling, H. Herzel, W. Richert and L. Schimansky-Geier, Influence of noise on Duffing-van der Pol oscillators, Zeitschrift f. Angew. Math. u. Mechanik, 66, 141-146 (1986).
[3.19] L. Arnold, N. Sri Namachchivaya and K. R. Schenk-Hoppe, Towards an understanding of the stochastic Hopf bifurcation: A case study, Int. J. Bifurcation Chaos, 6, 1947-1975 (1996).
[3.20] K. R. Schenk-Hoppe, Stochastic Hopf bifurcation: An example, Int. J. Non-Linear Mechanics, 31, 685-692 (1996).
[3.21] K. R. Schenk-Hoppe, Deterministic and stochastic Duffing-van der Pol oscillators are non-explosive, ZAMP, 47, 740-759 (1996).
[3.22] K. R. Schenk-Hoppe, The stochastic Duffing-van der Pol equation, Ph.D. Thesis, Univ. Bremen, Germany (1996).
[3.23] H. Keller and G. Ochs, Numerical approximation of random attractors, Inst. of Dynamical Systems, Univ. Bremen, Germany, Rep. Nr. 431 (1998).

Chapter 4

[4.1] P. M. Morse and H. Feshbach, Methods of Theoretical Physics, Vol. I (McGraw-Hill, New York, 1963).
[4.2] J. B. Walsh, An Introduction to Stochastic Partial Differential Equations, Lecture Notes in Mathematics 1180 (Springer-Verlag, 1986), pp. 265-439.
[4.3] H. Holden, B. Øksendal, J. Ubøe and T. Zhang, Stochastic Partial Differential Equations (Birkhäuser, Boston, 1996).
[4.4] N. V. Krylov, M. Röckner and J. Zabczyk, Stochastic PDEs and Kolmogorov Equations in Infinite Dimensions, Lecture Notes in Mathematics 1715 (Springer-Verlag, 1999).
[4.5] G. B. Whitham, Linear and Nonlinear Waves (J. Wiley and Sons, New York, 1974).
[4.6] O. V. Rudenko and S. I. Soluyan, Theoretical Foundations of Nonlinear Acoustics (Consultants Bureau, New York, 1977) [translated from the Russian by R. T. Beyer].
[4.7] S. A. Akhmanov and A. S. Chirkin, Statistical Phenomena in Nonlinear Physics (Izd. MGU, 1971).
[4.8] J. B. Keller, Wave propagation in random media, Proc. Symp. Appl. Math. Am. Math. Soc., Providence, Rhode Island, 13, 227-246 (1960).
[4.9] J. B. Keller, Stochastic equations and wave propagation in random media, Proc. Symp. Appl. Math. Am. Math. Soc., Providence, Rhode Island, 16, 145-170 (1963).
[4.10] U. Frisch, Turbulence (Cambridge University Press, 1996).
[4.11] C. W. Haines, An analysis of stochastic eigenvalue problems, Ph.D. Thesis, Rensselaer Polytechnic Institute, Troy, New York (1964).
[4.12] W. E. Boyce, in Probabilistic Methods in Applied Mathematics I, ed. A. T. Bharucha-Reid (Academic Press, New York, 1968).
[4.13] F. G. Bass and I. M. Fuks, Wave Scattering from Statistically Rough Surfaces (Pergamon Press, Oxford, 1979).
[4.14] F. E. Benth, Option Theory with Stochastic Analysis (Springer-Verlag, Berlin, 2004).
[4.15] F. Black and M. Scholes, The pricing of options and corporate liabilities, J. Polit. Economy, 81, 637-654 (1973).
[4.16] O. E. Barndorff-Nielsen, Processes of normal inverse Gaussian type, Finance Stoch., 2, 41-68 (1998).
[4.17] E. Eberlein and U. Keller, Hyperbolic distributions in finance, Bernoulli, 1, 281-299 (1995).
[4.18] J. C. Hull, Options, Futures and Other Derivative Securities (Prentice Hall, Englewood Cliffs, 1993).
[4.19] R. Merton, Theory of rational option pricing, Bell J. Econom. Manag. Sci., 4, 141-183 (1973).

Chapter 5

[5.1] M. J. Feigenbaum, Quantitative universality for a class of nonlinear transformations, J. Statistical Physics, 19, 25-52 (1978).
[5.2] D. E. Knuth, The Art of Computer Programming, Vol. 2 (Addison-Wesley, Reading, Massachusetts, 1981).
[5.3] W. H. Press, B. P. Flannery, S. A. Teukolsky and W. T. Vetterling, Numerical Recipes (Cambridge Univ. Press, New York, 1992).
[5.4] C. Chatfield, The Analysis of Time Series (Chapman and Hall, London, 1980).
[5.5] H. C. Tuckwell, Elementary Applications of Probability Theory (Chapman and Hall, London, 1988).
[5.6] R. E. Walpole and R. H. Myers, Probability and Statistics for Engineers and Scientists (Macmillan, New York, 1986).
[5.7] M. Overbeck-Larisch and W. Dolejsky, Stochastik mit Mathematica (Vieweg-Verlag, Braunschweig, Germany, 1998) (in German).
[5.8] N. M. Günther and R. O. Kusmin, A Collection of Examples of Higher Mathematics (VEB Deutscher Verlag der Wissenschaften, Berlin, 1975) (in German).
[5.9] N. Metropolis, A. W. Rosenbluth, M. N. Rosenbluth, A. H. Teller and E. Teller, Equations of state calculations by fast computing machines, J. Chem. Phys., 21, 1087-1092 (1953).
[5.10] G. N. Milstein, The Numerical Integration of Stochastic Differential Equations (Urals Univ. Press, Sverdlovsk, USSR, 1988) (in Russian) [English translation: Kluwer (1995)].
[5.11] G. N. Milstein, A theorem on the order of convergence of mean square approximations of systems of stochastic differential equations, Theor. Prob. Appl., 32, 738-741 (1988).
[5.12] P. D. Drummond and I. K. Mortimer, Computer simulation of multiplicative stochastic differential equations, J. Comput. Phys., 93, 144-170 (1991).
[5.13] P. E. Kloeden and E. Platen, Numerical Solution of Stochastic Differential Equations, Applications of Mathematics, Vol. 23 (Springer, 1992).
[5.14] P. E. Kloeden and E. Platen, Higher-order implicit strong numerical schemes for stochastic differential equations, J. Statist. Physics, 68, 283-314 (1992).
[5.15] Y. Saito and T. Mitsui, Discrete approximations for stochastic differential equations, Trans. Japan SIAM, 2, 1-16 (1992).
[5.16] N. J. Newton, Variance reduction for simulated diffusion, SIAM J. Appl. Math., 54, 1780-1805 (1994).
FORTRAN PROGRAMS

F1.f   Independence test: chi-squared criterion
F2.f   Independence test: Eqs. (5.5)-(5.8)
F3.f   The Newton iteration of the integral Eq. (5.10)
F4.f   Estimation of the PDF F(k) according to Eq. (5.23)
F5.f   Volume of the D-dimensional unit sphere
F6.f   1-D Brownian motion for Figs. 1.1 and 1.2
F7.f   Population growth, solution (2.4), data for Fig. 5.2
F8.f   Milstein scheme (5.50) for 1-D SDEs (a minimal sketch follows this list)
F9A.f  Numerical generation of the RV DZ, Eq. (5.55) (no batches)
F9B.f  N batches of M samples to verify Eq. (5.55)
F10.f  Higher-order scheme (5.52) for a 1st-order SDE
F11.f  Implicit Milstein scheme (5.58) for a 1st-order autonomous SDE
F12.f  Modified Milstein scheme (5.59) for a 1st-order autonomous SDE
F13.f  2-D schemes: Milstein routine (5.73), higher-order routine (5.78) and modified routine (5.82)
F14.f  2-D implicit scheme applied to the linear pendulum
F15.f  Weak and absolute error, confidence intervals
F16.f  Double-precision version of F15.f
F17.f  Version of F15.f for the problem EX 5.11
F18.f  Discretization for a 3-D SDE
F19.f  Numerical verification of the law of the iterated logarithm (1.40)
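For readers who do not have the CD-ROM at hand, the following is a minimal sketch of the kind of one-dimensional Milstein update that F8.f implements (the scheme the list labels Eq. (5.50)). The test equation dX = -X dt + sigma X dW, the parameter values, and the Box-Muller sampling of the Wiener increments are illustrative assumptions, not the contents of F8.f itself.

program milstein_sketch
  ! Sketch of a 1-D Milstein step:
  !   X(n+1) = X(n) + a*dt + b*dW + (1/2)*b*b'*(dW**2 - dt),
  ! applied here to the assumed test SDE dX = -X dt + sigma*X dW,
  ! for which a(x) = -x, b(x) = sigma*x and b'(x) = sigma.
  implicit none
  integer :: i
  integer, parameter :: nsteps = 1000
  double precision, parameter :: dt = 1.0d-3, sigma = 0.5d0
  double precision :: x, dw, u1, u2, twopi
  twopi = 8.0d0*atan(1.0d0)
  x = 1.0d0                              ! initial condition X(0) = 1
  call random_seed()
  do i = 1, nsteps
     call random_number(u1)
     call random_number(u2)
     if (u1 < 1.0d-12) u1 = 1.0d-12      ! guard against log(0)
     ! Box-Muller: one N(0,1) sample, scaled to a Wiener increment
     dw = sqrt(-2.0d0*log(u1))*cos(twopi*u2)*sqrt(dt)
     ! Milstein update
     x = x - x*dt + sigma*x*dw + 0.5d0*sigma*sigma*x*(dw*dw - dt)
  end do
  print *, 'X at T = 1:', x
end program milstein_sketch

Compiled as free-form Fortran (e.g. gfortran milstein_sketch.f90), the program advances one sample path to T = nsteps*dt = 1; dropping the (dW**2 - dt) term reduces the update to the Euler scheme listed in the index.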
INDEX

adapted process, 13
additive noise, 70
algebra of events, 1
almost coincide, 75
asymptotic theory, 149
autocorrelation function, 11, 67, 139
batch average, 185
Bayes' rule, 7
Bernoulli differential equation, 80
Bernoulli distribution, 4
Bessel process, 86
Bessel-Fubini formula, 144
bifurcation, 81, 110
bivariate Gaussian, 15
bivariate GD, 15
bivariate PD, 19
Black-Scholes market, 162
Boltzmann transport equation, 62
Borel set, 2
Box-Muller method, 18
Brownian bridge, 66
Brownian motion, viii, xi, 21, 179
Burgers equation, xv
call option, 161
central limit theorem, 16
Chapman-Kolmogorov equation, 20, 50, 91
    backward equation, 97
    forward equation, 97
characteristic function, 6, 16
Chebyshev inequality, 13, 50, 174
coefficient with time lag, 171
colored noise, 67
compatibility, 9
conditional probability, 7
confidence interval, 171
convergence
    strong, 173, 198
    weak, 173, 174, 198
convergence in the mean square, 173
correlation coefficient, 11
cross-correlation function, 11
differences between the Ito and Stratonovich integrals, 37
Dirac delta function, 3
Dirac function, 67
eigenfunction, 136, 148
eigenvalue moment, 156
eigenvector, 148
eikonal equation, 118
ensemble average, 12
equivalent definitions of a Wiener process, 24
ergodic system, 12
error
    absolute, 196
    global, 196
    local, 196
Euler method, 179
existence theorem, 84
expectation value, 4
flip-flop process, 49
fluid mechanical turbulence, xiv
Fokker-Planck equation, 12, 21, 78, 91, 95
Gaussian (or normal), 6
Gaussian probability density, xiii, 6
Green's function, 136
growth bound condition, 85
Hamilton function, 82
Hermitian-chaos, 16
Hermite polynomial, 16
heteroclinic orbit, 82
homogeneous Poisson process, 46
Hopf bifurcation, 127
independence, 5, 25
independence test, 170
inhomogeneous ordinary SDE, 59
investment, 161
Ito equation, 57
Ito formula, 38
    the case of a multivariate process, 43
Ito integral, 30
    properties, 37
    is a martingale, 37
Ito-Taylor expansion, 181, 189
Jacobian, 18
Kamiltonian, 83
Khasminskiy criterion, 128
Kolmogorov's criterion, 10
Kramers-Moyal equation, 98
Langevin parameter, 62
law of the iterated logarithm, 17
Lebesgue integration, 2
Levy process, 27, 160
linear equation
    homogeneous ordinary SDE, 56
    nonlinear SDE, 56
Lipschitz condition, 85
Lyapunov function, 124
Lyapunov method, 108
Mach number, 142
marginal PD, 5
Markov or Markovian process, 19, 91, 179
    stationary, 20
Markov property, 12
martingale, 12, 13
    Ito integral is a martingale, 37
    Wiener process is a martingale, 24
master equation, 20, 91
Maxwell distribution, 61, 62
mean, 57
Metropolis algorithm, 178
Milstein method, 183
    modified, 187
moments of a PD, 6
Monte Carlo integration, 175
multiplicative noise, 70
multivariate form of the Gaussian PD, 14
Navier-Stokes equations, xiv
non-anticipative or adapted functions, 30
normal distributed variable, 6
normal inverse Gaussian distribution (NIGD), 27, 160
normalization, 5
numerical solution, 167
optical diffraction, xii
ordinary differential equation (ODE), 55
Ornstein-Uhlenbeck, 26
    problem, 59, 101, 186
    process, 20, 27
    SDE, 139
    transition probability, 62
orthogonal, 138
pendulum, stochastic, xi, 70, 81
Poisson bracket, 83
Poisson distributed variables, 45
Poisson distribution, 4
polar Marsaglia method, 179
polynomials of the Brownian motion, 42
population growth, 56, 65
principle of invariant elementary probability measure states, 18
probability density, 3
probability distribution function, 3
probability of an event, 2
probability space, 2
put option, 161
quasi-chaos, viii
random (or stochastic) variable, 2
random number, 167
random walk, 48
reduction method, 63
Reynolds number, xiv
Riemann integral, 29
Routh-Hurwitz criterion, 128
sample function, 9
sample space, 1
sigma algebra, 2
stability of SDEs, 125
standard deviation, 6, 61
stationary normal distribution, 78
stationary point, 70
stationary process, 10
statistical mechanics, 11
stochastic
    effects, xi
    boundary condition, 141
    boundary value problem, 155
    cable equation, 137
    damping, 73
    differential equation (SDE), xi
        ordinary SDE, 55
        partial SDE, 56
    economics, 160
    eigenvalue, 135
    eigenvalue equation, 147
    eigenvalue problem, 148
    excitation, 72
    growth, xii
    hyperbola, 26
    initial condition, 141
    Lorentz equation, 129
    Mathieu equation, 112
    partial differential equation, 135
    pendulum, xi
    Poisson equation, 140
    process, 9
Stokes law, 68
Stratonovich equation, 58
Stratonovich integral, 30, 37
    not a martingale, 37
    properties, 37
strong law of large numbers, 174
Student distribution, 197
Sturm-Liouville problem, 149
symmetry, 9
Taylor expansion, 7
transformation of stochastic variables, 17
transition probability distribution, 19
trivariate PD, 15
turbulence advection by a random map, 49
two independent Brownian motions, 45
uncorrelated function, 11
uniformity test, 169
uniqueness theorem, 84
variance, 6, 57
Volterra integral equation, 72
white noise, xi, xii, 21, 56, 57
WP or Wiener process, 14, 21, 25
    is a martingale, 24
Wiener sheet, 25, 135
Wiener-Levy theorem, 54
WKB method, 117, 149
Traditionally, non-quantum physics has been concerned with deterministic equations where the dynamics of the system are completely determined by initial conditions. A century ago the discovery of Brownian motion showed that nature need not be deterministic. However, it is only recently that there has been broad interest in nondeterministic and even chaotic systems, not only in physics but also in ecology and economics. On a short-term basis, the stock market is nondeterministic and often chaotic. Despite its significance, there are few books available that introduce the reader to modern ideas in stochastic systems. This book provides an introduction to this increasingly important field and includes a number of interesting applications.

World Scientific
ISBN 981-256-296-6
www.worldscientific.com