# Markov theory

## by De La Salle University-Manila

## Document Transcript

• MARKOV THEORY

DEFINITION 3.1: A stochastic process, $\{X(t),\, t \in T\}$, is a collection of random variables. That is, for each $t \in T$, $X(t)$ is a random variable. The index $t$ is often referred to as time, and as a result we refer to $X(t)$ as the state of the process at time $t$. The set $T$ is called the index set of the process.

DEFINITION 3.2: When $T$ is a countable set, the stochastic process is said to be a discrete-time process. If $T$ is an interval of the real line, the stochastic process is said to be a continuous-time process.

DEFINITION 3.3: The state space of a stochastic process is defined as the set of all possible values that the random variables $X(t)$ can assume.

Thus, a stochastic process is a family of random variables that describes the evolution through time of some (physical) process.

MARKOV THEORY EDGAR L. DE CASTRO PAGE 1
• DISCRETE-TIME PROCESSES

DEFINITION 3.4: An epoch is a point in time at which the system is observed. The states correspond to the possible conditions observed. A transition is a change of state. A record of the observed states through time is called a realization of the process.

DEFINITION 3.5: A transition diagram is a pictorial map in which the states are represented by points and transitions by arrows.

[Figure: transition diagram for three states]
• DEFINITION 3.6: The process of transition can be visualized as a random walk of a particle over the transition diagram. A virtual transition is one where the new state is the same as the old. A real transition is a genuine change of state.

THE RANDOM WALK MODEL

Consider a discrete-time process whose state space is given by the integers $i = 0, \pm 1, \pm 2, \ldots$ The discrete-time process is said to be a random walk if, for some number $0 < p < 1$,

$$P_{i,i+1} = p = 1 - P_{i,i-1}, \qquad i = 0, \pm 1, \pm 2, \ldots$$

The random walk may be thought of as a model for an individual walking on a straight line who at each point of time either takes one step to the right with probability $p$ or one step to the left with probability $1 - p$.
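As a quick sketch of the model above, the walk can be simulated directly. The choices of `p = 0.5`, the number of steps, and the seed are illustrative, not from the text:

```python
import random

def simulate_random_walk(p, n_steps, seed=None):
    """Simulate a simple random walk on the integers.

    At each epoch the walker moves +1 with probability p and -1 with
    probability 1 - p, starting from state 0. Returns the realization
    (the list of visited states), in the sense of Definition 3.4.
    """
    rng = random.Random(seed)
    state = 0
    path = [state]
    for _ in range(n_steps):
        state += 1 if rng.random() < p else -1
        path.append(state)
    return path

path = simulate_random_walk(p=0.5, n_steps=100, seed=42)
```

Every transition here is a real transition: the state always changes by exactly one step.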
• THE MARKOV CHAIN

DEFINITION 3.7: A Markov chain is a discrete-time stochastic process in which the current state of each random variable $X_i$ depends only on the previous state. The word chain suggests the linking of the random variables to their immediately adjacent neighbors in the sequence. Markov is the Russian mathematician who developed the process around the beginning of the 20th century.

TRANSITION PROBABILITY ($p_{ij}$): the probability of a transition from state $i$ to state $j$ after one period.

TRANSITION MATRIX ($P$): the matrix of transition probabilities.

$$P = \begin{bmatrix} p_{11} & p_{12} & \cdots & p_{1n} \\ p_{21} & p_{22} & \cdots & p_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ p_{n1} & p_{n2} & \cdots & p_{nn} \end{bmatrix}$$
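A transition matrix is conveniently stored as a NumPy array. The two-state matrix below is a hypothetical example; the defining property is that each row is a probability distribution:

```python
import numpy as np

# Illustrative two-state transition matrix (not from the text).
P = np.array([
    [0.7, 0.3],   # p11, p12: one-step transitions out of state 1
    [0.4, 0.6],   # p21, p22: one-step transitions out of state 2
])

# Each row of a transition matrix must sum to 1.
row_sums = P.sum(axis=1)
```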
• ASSUMPTIONS OF THE MARKOV CHAIN

1. THE MARKOV ASSUMPTION: Knowledge of the state at any time is sufficient to predict the future of the process. That is, given the present, the future is independent of the past, and the process is "forgetful."

2. THE STATIONARITY ASSUMPTION: The probability mechanism is assumed to be stable; the transition probabilities do not change over time.

CHAPMAN-KOLMOGOROV EQUATIONS

Let $P_{ij}^{(n)}$ = the $n$-step transition probability, i.e., the probability that a process in state $i$ will be in state $j$ after $n$ additional transitions:

$$P_{ij}^{(n)} = P\{X_{n+m} = j \mid X_m = i\}, \qquad n \ge 0,\ i, j \ge 0$$

The Chapman-Kolmogorov equations provide a method for calculating these $n$-step transition probabilities:

$$P_{ij}^{(n+m)} = \sum_{k=0}^{\infty} P_{ik}^{(n)} P_{kj}^{(m)}, \qquad n, m \ge 0,\ \text{all } i, j$$

Formally, we derive:
• $$P_{ij}^{(n+m)} = \sum_{k=0}^{\infty} P\{X_{n+m} = j, X_n = k \mid X_0 = i\}$$

$$= \sum_{k=0}^{\infty} P\{X_{n+m} = j \mid X_n = k, X_0 = i\}\, P\{X_n = k \mid X_0 = i\} = \sum_{k=0}^{\infty} P_{kj}^{(m)} P_{ik}^{(n)}$$

If we let $P^{(n)}$ denote the matrix of $n$-step transition probabilities $P_{ij}^{(n)}$, then

$$P^{(n+m)} = P^{(n)} \cdot P^{(m)}$$

where the dot represents matrix multiplication. Hence, in particular,

$$P^{(2)} = P^{(1+1)} = P \cdot P = P^2$$

and by induction,

$$P^{(n)} = P^{(n-1+1)} = P^{n-1} \cdot P = P^n$$

That is, the $n$-step transition matrix is obtained by multiplying the matrix $P$ by itself $n$ times. Therefore the $N$-step transition matrix is given by:
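The identity $P^{(n)} = P^n$ is easy to check numerically. Using the same illustrative two-state matrix as before (an assumption, not from the text):

```python
import numpy as np

P = np.array([[0.7, 0.3],
              [0.4, 0.6]])

# n-step transition matrices via matrix powers: P^(n) = P^n.
P2 = P @ P
P4 = np.linalg.matrix_power(P, 4)

# Chapman-Kolmogorov check: P^(2+2) = P^(2) . P^(2).
ck_holds = np.allclose(P4, P2 @ P2)
```

Note that every power of a transition matrix is again a transition matrix, so each row of `P4` still sums to 1.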
• $$P^{(N)} = \begin{bmatrix} p_{11}^{(N)} & p_{12}^{(N)} & \cdots & p_{1n}^{(N)} \\ p_{21}^{(N)} & p_{22}^{(N)} & \cdots & p_{2n}^{(N)} \\ \vdots & \vdots & \ddots & \vdots \\ p_{n1}^{(N)} & p_{n2}^{(N)} & \cdots & p_{nn}^{(N)} \end{bmatrix}$$

FIRST PASSAGE AND FIRST RETURN PROBABILITIES

Let $f_{ij}^{(N)}$ = first passage probability = probability of reaching state $j$ from state $i$ for the first time in $N$ steps. If $i = j$, $f_{ii}^{(N)}$ is the first return probability.

$$f_{ij}^{(N)} = P\{X_N = j, X_{N-1} \ne j, X_{N-2} \ne j, \ldots, X_1 \ne j \mid X_0 = i\}$$

$$f_{ij}^{(1)} = p_{ij}$$

$$f_{ij}^{(N)} = p_{ij}^{(N)} - \sum_{k=1}^{N-1} f_{ij}^{(k)} p_{jj}^{(N-k)}$$
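The recursion above can be implemented directly. A minimal sketch, again using an illustrative two-state matrix of my own choosing:

```python
import numpy as np

def first_passage_probs(P, i, j, n_max):
    """Compute f_ij^(N) for N = 1..n_max via the recursion
    f_ij^(N) = p_ij^(N) - sum_{k=1}^{N-1} f_ij^(k) p_jj^(N-k)."""
    powers = [np.linalg.matrix_power(P, n) for n in range(n_max + 1)]
    f = [0.0] * (n_max + 1)          # f[0] is unused
    for N in range(1, n_max + 1):
        f[N] = powers[N][i, j] - sum(f[k] * powers[N - k][j, j]
                                     for k in range(1, N))
    return f

P = np.array([[0.7, 0.3],
              [0.4, 0.6]])
f = first_passage_probs(P, 0, 1, 5)
```

For this matrix, $f_{01}^{(1)} = p_{01} = 0.3$ and $f_{01}^{(2)} = 0.7 \times 0.3 = 0.21$ (stay in state 0 once, then move), which matches the recursion.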
• CLASSIFICATION OF STATES

For fixed $i$ and $j$, the $f_{ij}^{(N)}$ are nonnegative numbers such that

$$\sum_{N=1}^{\infty} f_{ij}^{(N)} \le 1$$

When the sum does equal 1, the $f_{ij}^{(N)}$ can be considered as a probability distribution for the random variable: the first passage time.

If $i = j$ and

$$\sum_{N=1}^{\infty} f_{ii}^{(N)} = 1$$

then state $i$ is called a recurrent state, because this condition implies that once the process is in state $i$, it will return to state $i$. A special case of the recurrent state is the absorbing state. A state $i$ is said to be an absorbing state if the one-step transition probability $p_{ii} = 1$. Thus, if a state is absorbing, the process will never leave it once it enters. If

$$\sum_{N=1}^{\infty} f_{ii}^{(N)} < 1$$

then state $i$ is called a transient state, because this condition implies that once the process is in state $i$, there is a strictly positive probability that it will never return to $i$.
• Let $M_{ij}$ = expected first passage time from $i$ to $j$:

$$M_{ij} = \begin{cases} \infty & \text{if } \displaystyle\sum_{N=1}^{\infty} f_{ij}^{(N)} < 1 \\[8pt] \displaystyle\sum_{N=1}^{\infty} N f_{ij}^{(N)} & \text{if } \displaystyle\sum_{N=1}^{\infty} f_{ij}^{(N)} = 1 \end{cases}$$

[$M_{ij}$ exists only if the states are recurrent.] Whenever

$$\sum_{N=1}^{\infty} f_{ij}^{(N)} = 1$$

then

$$M_{ij} = 1 + \sum_{k \ne j} p_{ik} M_{kj}$$

When $j = i$, the expected first passage time is called the first recurrence time. If $M_{ii} = \infty$, state $i$ is called a null recurrent state. If $M_{ii} < \infty$, it is called a positive recurrent state. In a finite Markov chain, there are no null recurrent states (only positive recurrent states and transient states).
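The equations $M_{ij} = 1 + \sum_{k \ne j} p_{ik} M_{kj}$ form a linear system that can be solved for a fixed target state $j$. A sketch, assuming an irreducible chain (so all states are positive recurrent) and the same illustrative two-state matrix:

```python
import numpy as np

def mean_first_passage(P, j):
    """Solve M_ij = 1 + sum_{k != j} p_ik M_kj for a fixed target j.

    Assumes an irreducible finite chain. Deleting row/column j from P
    leaves Q; for i != j the system reads (I - Q) m = 1 (vector of ones).
    """
    n = P.shape[0]
    idx = [k for k in range(n) if k != j]
    Q = P[np.ix_(idx, idx)]                      # transitions avoiding j
    m = np.linalg.solve(np.eye(n - 1) - Q, np.ones(n - 1))
    M = np.zeros(n)
    for pos, i in enumerate(idx):
        M[i] = m[pos]
    M[j] = 1 + P[j, idx] @ m                     # first recurrence time M_jj
    return M

P = np.array([[0.7, 0.3],
              [0.4, 0.6]])
M = mean_first_passage(P, 1)
```

For this matrix $M_{01} = 1/0.3 = 10/3$ and the first recurrence time is $M_{11} = 1 + 0.4 \cdot 10/3 = 7/3$; both are finite, so both states are positive recurrent.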
• State $j$ is accessible from $i$ if $p_{ij}^{(n)} > 0$ for some $n \ge 0$. If $j$ is accessible from $i$ and $i$ is accessible from $j$, then the two states communicate. In general:

(1) any state communicates with itself;
(2) if state $i$ communicates with state $j$, then state $j$ communicates with state $i$;
(3) if state $i$ communicates with state $j$ and state $j$ communicates with state $k$, then state $i$ communicates with state $k$.

If all states communicate, the Markov chain is irreducible. In a finite Markov chain, the members of a class are either all transient states or all positive recurrent states.

A state $i$ is said to have period $t$ ($t > 1$) if $p_{ii}^{(n)} = 0$ whenever $n$ is not divisible by $t$, and $t$ is the largest integer with this property. If a state has period 1, it is called an aperiodic state. If state $i$ in a class is aperiodic, then all states in the class are aperiodic. Positive recurrent states that are aperiodic are called ergodic states.
• ERGODIC MARKOV CHAINS

STEADY STATE PROBABILITIES (LIMITING PROBABILITIES)

Let

$$\pi_j = \lim_{N \to \infty} p_{ij}^{(N)}$$

As $N$ grows large,

$$P^N \to \begin{bmatrix} \pi_1 & \pi_2 & \cdots & \pi_n \\ \pi_1 & \pi_2 & \cdots & \pi_n \\ \vdots & \vdots & & \vdots \\ \pi_1 & \pi_2 & \cdots & \pi_n \end{bmatrix}$$

As long as the process is ergodic, this limit exists. Since $P^{(N)} = P^{(N-1)} \cdot P$,

$$\lim_{N \to \infty} P^{(N)} = \lim_{N \to \infty} P^{(N-1)} \cdot P$$

and so the limiting row vector satisfies

$$\pi = \pi P, \qquad \pi^T = P^T \pi^T$$

[By itself, this system possesses an infinite number of solutions.]
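Numerically, $\pi = \pi P$ together with the normalizing equation $\sum_i \pi_i = 1$ pins down a unique solution. A sketch with the same illustrative two-state matrix:

```python
import numpy as np

def steady_state(P):
    """Solve pi = pi P together with sum(pi) = 1.

    pi (I - P) = 0 alone is rank-deficient (infinitely many solutions),
    so the normalizing equation is appended and the consistent
    overdetermined system is solved by least squares.
    """
    n = P.shape[0]
    A = np.vstack([(np.eye(n) - P).T, np.ones(n)])
    b = np.zeros(n + 1)
    b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

P = np.array([[0.7, 0.3],
              [0.4, 0.6]])
pi = steady_state(P)
```

For this matrix the steady state is $\pi = (4/7,\ 3/7)$, and indeed $1/\pi_j$ recovers the first recurrence time of each state.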
• The normalizing equation

$$\sum_{i} \pi_i = 1$$

is used to identify the one solution which will qualify as a probability distribution.

ABSORBING MARKOV CHAINS

Number the states so that the first $k$ states are transient and the remaining states are absorbing:

$$P = \begin{bmatrix} p_{11} & p_{12} & \cdots & p_{1k} & p_{1,k+1} & \cdots \\ p_{21} & p_{22} & \cdots & p_{2k} & p_{2,k+1} & \cdots \\ \vdots & & & \vdots & \vdots & \\ p_{k1} & p_{k2} & \cdots & p_{kk} & p_{k,k+1} & \cdots \\ 0 & 0 & \cdots & 0 & 1 & \cdots \\ \vdots & & & \vdots & & \ddots \end{bmatrix}$$

The partitioned matrix is given by:

$$P = \begin{bmatrix} Q & R \\ 0 & I \end{bmatrix}$$
• Let $e_{ij}$ = mean number of times that transient state $j$ is occupied before absorption, given that the initial state is $i$, and let $E$ be the corresponding matrix. Then:

$$i \ne j: \quad e_{ij} = \sum_{v=1}^{k} p_{iv} e_{vj}$$

$$i = j: \quad e_{ij} = 1 + \sum_{v=1}^{k} p_{iv} e_{vj}$$

In matrix form:

$$E = I + QE$$
$$E - QE = I$$
$$(I - Q)E = I$$
$$E = (I - Q)^{-1}$$

Let $d_i$ = expected total number of transitions until absorption:

$$d_i = \sum_{j=1}^{k} e_{ij}$$
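The fundamental matrix $E = (I - Q)^{-1}$ is one matrix inversion. A minimal sketch, assuming a hypothetical chain with two transient states (0 and 1) and one absorbing state, each transient state moving to the other or being absorbed with probability 1/2:

```python
import numpy as np

# Illustrative Q block (transient-to-transient transitions); the rest of
# each row (probability 0.5) leads to the absorbing state.
Q = np.array([[0.0, 0.5],
              [0.5, 0.0]])

# Fundamental matrix: E = (I - Q)^{-1}.
# e_ij = mean visits to transient state j before absorption, starting in i.
E = np.linalg.inv(np.eye(2) - Q)

# d_i = expected number of transitions until absorption from state i.
d = E.sum(axis=1)
```

Here $E = \frac{1}{0.75}\begin{bmatrix}1 & 0.5\\ 0.5 & 1\end{bmatrix}$, so absorption takes 2 transitions on average from either transient state.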
• ABSORPTION PROBABILITY: the probability of entering an absorbing state.

Let $A_{ij}$ = probability that the process ever enters absorbing state $j$, given that the initial state is $i$:

$$A_{ij} = p_{ij} + \sum_{v=1}^{k} p_{iv} A_{vj}$$

In matrix form, $A$ = matrix of the $A_{ij}$ (not necessarily square), where the number of rows is the number of transient states and the number of columns is the number of absorbing states. Examining matrix $A$:

$$A = R + QA$$
$$A - QA = R$$
$$(I - Q)A = R$$
$$A = [I - Q]^{-1} R$$

CONDITIONAL MEAN FIRST PASSAGE TIME: the expected number of transitions which will occur before a given absorbing state is entered.

$$A_{ij} M_{ij} = A_{ij} + \sum_{k \ne j} p_{ik} A_{kj} M_{kj}$$
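The formula $A = (I - Q)^{-1} R$ is equally direct to compute. A sketch with a hypothetical chain of two transient states and two absorbing states (the $Q$ and $R$ blocks below are invented for illustration):

```python
import numpy as np

# Illustrative partition P = [[Q, R], [0, I]]:
# transient states 0, 1; absorbing states 2, 3.
Q = np.array([[0.0, 0.4],
              [0.3, 0.0]])
R = np.array([[0.6, 0.0],
              [0.0, 0.7]])

# Absorption probabilities: A = (I - Q)^{-1} R.
# A[i, j] = probability of ever being absorbed in state j from transient i.
A = np.linalg.inv(np.eye(2) - Q) @ R
```

Since absorption is certain in a finite absorbing chain, each row of `A` sums to 1, which makes a convenient sanity check on the arithmetic.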