MARKOV THEORY

               DEFINITION 3.1:

               A stochastic process, {X(t), t ∈ T}, is a collection of random
               variables. That is, for each t ∈ T, X(t) is a random variable. The
               index t is often referred to as time and, as a result, we refer to X(t)
               as the state of the process at time t. The set T is called the index
               set of the process.

               DEFINITION 3.2:

               When T is a countable set, the stochastic process is said to be a
               discrete-time process. If T is an interval of the real line, the
               stochastic process is said to be a continuous-time process.

               DEFINITION 3.3:




               The state space of a stochastic process is defined as the set of
               all possible values that the random variables X(t) can assume.

               THUS, A STOCHASTIC PROCESS IS A FAMILY OF RANDOM
               VARIABLES THAT DESCRIBES THE EVOLUTION THROUGH
               TIME OF SOME (PHYSICAL) PROCESS.







               MARKOV THEORY                                                                                           EDGAR L. DE CASTRO                                                                  PAGE 1

DISCRETE-TIME PROCESSES

DEFINITION 3.4:

An epoch is a point in time at which the system is observed. The
states correspond to the possible conditions observed. A
transition is a change of state. A record of the observed states
through time is called a realization of the process.


DEFINITION 3.5:

A transition diagram is a pictorial map in which the states are
represented by points and transitions by arrows.




[Figure: transition diagram for three states]





DEFINITION 3.6:

     The process of transition can be visualized as a random walk of
     a particle over the transition diagram. A virtual transition is
     one where the new state is the same as the old. A real transition
     is a genuine change of state.


     THE RANDOM WALK MODEL

     Consider a discrete-time process whose state space is given by the
     integers i = 0, ±1, ±2, ... . The discrete-time process is said to
     be a random walk if, for some number 0 < p < 1,

          P_i,i+1 = p = 1 - P_i,i-1,     i = 0, ±1, ±2, ...


     The random walk may be thought of as a model for an
     individual walking on a straight line who at each point in time
     either takes one step to the right with probability p or one step to
     the left with probability 1 - p.
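The walk just described can be simulated directly. The sketch below is not part of the original notes; the function name and the fixed seed are illustrative choices made so that runs are reproducible.

```python
import random

def random_walk(p, steps, start=0, seed=1):
    """Simulate the random walk: from state i, move to i+1 with
    probability p and to i-1 with probability 1 - p."""
    rng = random.Random(seed)  # fixed seed for reproducibility
    state = start
    path = [state]
    for _ in range(steps):
        state += 1 if rng.random() < p else -1
        path.append(state)
    return path

# A symmetric walk (p = 1/2) of 10 steps starting at the origin.
path = random_walk(p=0.5, steps=10)
```

Every recorded transition moves exactly one unit left or right, matching the definition above.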





THE MARKOV CHAIN

DEFINITION 3.7:

A Markov chain is a discrete-time stochastic process in which
the current state of each random variable X_i depends only on the
previous state. The word chain suggests the linking of the random
variables to their immediately adjacent neighbors in the sequence.
Markov is the Russian mathematician who developed the process
around the beginning of the 20th century.

TRANSITION PROBABILITY (Pij) - the probability of a
transition from state i to state j after one period.



TRANSITION MATRIX (P) - the matrix of transition
probabilities.

             | P11   P12   ...   P1n |
             | P21   P22   ...   P2n |
        P =  |  .     .     .     .  |
             |  .     .     .     .  |
             | Pn1   Pn2   ...   Pnn |
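Because row i of P lists the probabilities of moving from state i to every possible destination, each row must sum to 1. A minimal Python check (the helper name is mine, not from the notes):

```python
def is_stochastic(P, tol=1e-9):
    """Check that P is a valid transition matrix: every entry is a
    probability and every row sums to 1."""
    for row in P:
        if any(p < 0 or p > 1 for p in row):
            return False
        if abs(sum(row) - 1.0) > tol:
            return False
    return True

# A 3-state example: row i gives the one-period transition
# probabilities out of state i.
P = [[0.7, 0.2, 0.1],
     [0.3, 0.5, 0.2],
     [0.0, 0.4, 0.6]]
```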






ASSUMPTIONS OF THE MARKOV CHAIN

      1. THE MARKOV ASSUMPTION
         The knowledge of the state at any time is sufficient to predict
         the future of the process. Or: given the present, the future is
         independent of the past, and the process is "forgetful."

      2. THE STATIONARITY ASSUMPTION
         The probability mechanism is assumed to be stable.

     CHAPMAN-KOLMOGOROV EQUATIONS

     Let P_ij^(n) = the n-step transition probability, i.e., the probability
                    that a process in state i will be in state j after n
                    additional transitions.

          P_ij^(n) = P{X_{n+m} = j | X_m = i},     n ≥ 0,  i, j ≥ 0

     The Chapman-Kolmogorov equations provide a method for
     calculating these n-step transition probabilities.

          P_ij^(n+m) = Σ_{k=0}^∞ P_ik^(n) P_kj^(m),     n, m ≥ 0, all i, j




     Formally, we derive:





     P_ij^(n+m) = P{X_{n+m} = j | X_0 = i}

                = Σ_{k=0}^∞ P{X_{n+m} = j, X_n = k | X_0 = i}

                = Σ_{k=0}^∞ P{X_{n+m} = j | X_n = k, X_0 = i} P{X_n = k | X_0 = i}

                = Σ_{k=0}^∞ P_kj^(m) P_ik^(n)




If we let P^(n) denote the matrix of n-step transition probabilities
P_ij^(n), then

     P^(n+m) = P^(n) · P^(m)

where the dot represents matrix multiplication. Hence, in
particular:

     P^(2) = P^(1+1) = P · P = P^2

And by induction:

     P^(n) = P^(n-1+1) = P^(n-1) · P = P^n

That is, the n-step transition matrix is obtained by multiplying the
matrix P by itself n times. Therefore the N-step transition matrix is
given by:





             | P11^(N)   P12^(N)   ...   P1n^(N) |
             | P21^(N)   P22^(N)   ...   P2n^(N) |
    P^(N) =  |    .         .       .       .    |
             | Pn1^(N)   Pn2^(N)   ...   Pnn^(N) |
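The result P^(N) = P^N can be checked numerically. The sketch below uses plain Python lists; the helper names are mine, not the notes'.

```python
def mat_mult(A, B):
    """Multiply two square matrices of the same size."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def n_step(P, N):
    """N-step transition matrix: P multiplied by itself N times."""
    result = P
    for _ in range(N - 1):
        result = mat_mult(result, P)
    return result

P = [[0.9, 0.1],
     [0.5, 0.5]]
P2 = n_step(P, 2)
P3 = n_step(P, 3)
```

Chapman-Kolmogorov then says P3 must equal P2 · P: splitting three steps as 2 + 1 gives the same probabilities as taking them all at once.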

FIRST PASSAGE AND FIRST RETURN
PROBABILITIES


Let f_ij^(N) = first passage probability
             = probability of reaching state j from state i for
               the first time in N steps.

    f_ii^(N) = first return probability if i = j

    f_ij^(N) = P{X_N = j, X_{N-1} ≠ j, X_{N-2} ≠ j, ..., X_1 ≠ j | X_0 = i}

    f_ij^(1) = P_ij

    f_ij^(N) = P_ij^(N) - Σ_{k=1}^{N-1} f_ij^(k) P_jj^(N-k)
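The recursion removes, from the N-step probability, the paths that already visited j at some earlier step. A direct pure-Python translation (illustrative names, not from the notes):

```python
def first_passage(P, i, j, n_max):
    """First passage probabilities f_ij^(N) for N = 1..n_max via
    f_ij^(N) = P_ij^(N) - sum_{k=1}^{N-1} f_ij^(k) * P_jj^(N-k)."""
    n = len(P)
    # powers[m] holds the (m+1)-step transition matrix P^(m+1).
    powers = [P]
    for _ in range(n_max - 1):
        prev = powers[-1]
        powers.append([[sum(prev[a][c] * P[c][b] for c in range(n))
                        for b in range(n)] for a in range(n)])
    f = []
    for N in range(1, n_max + 1):
        val = powers[N - 1][i][j]
        for k in range(1, N):
            val -= f[k - 1] * powers[N - k - 1][j][j]
        f.append(val)
    return f

# Two-state example: f_01^(N) is the chance of first reaching
# state 1 from state 0 in exactly N steps.
P = [[0.5, 0.5],
     [0.2, 0.8]]
f = first_passage(P, 0, 1, 3)
```

For this chain f_01^(N) = 0.5^N: the walk must stay at state 0 for N - 1 steps and then jump.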






CLASSIFICATION OF STATES


For fixed i and j, the f_ij^(N) are nonnegative numbers such that

     Σ_{N=1}^∞ f_ij^(N) ≤ 1

When the sum does equal 1, the f_ij^(N) can be considered as a
probability distribution for the random variable: the first passage time.

If i = j and

     Σ_{N=1}^∞ f_ii^(N) = 1

then state i is called a recurrent state, because this condition
implies that once the process is in state i, it will return to state i.

A special case of the recurrent state is the absorbing state. A state
i is said to be an absorbing state if the one-step transition
probability P_ii = 1. Thus, if a state is absorbing, the process will
never leave once it enters. If

     Σ_{N=1}^∞ f_ii^(N) < 1

then state i is called a transient state, because this condition implies
that once the process is in state i, there is a strictly positive
probability that it will never return to i.



Let M_ij = expected first passage time from i to j

     M_ij = ∞                          if  Σ_{N=1}^∞ f_ij^(N) < 1

     M_ij = Σ_{N=1}^∞ N f_ij^(N)       if  Σ_{N=1}^∞ f_ij^(N) = 1

     [M_ij exists only if the states are recurrent]

Whenever

     Σ_{N=1}^∞ f_ij^(N) = 1

then

     M_ij = 1 + Σ_{k≠j} P_ik M_kj


When j = i, the expected first passage time is called the first
recurrence time. If M_ii = ∞, it is called a null recurrent
state. If M_ii < ∞, it is called a positive recurrent state. In a
finite Markov chain, there are no null recurrent states (only
positive recurrent states and transient states).
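For a fixed target state j, the relations M_ij = 1 + Σ_{k≠j} P_ik M_kj form a linear system in the unknowns M_ij, one equation per start state i. A small self-contained solver (Gaussian elimination; all names are illustrative choices, not from the notes):

```python
def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [bv] for row, bv in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            factor = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= factor * M[col][c]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][k] * x[k] for k in range(i + 1, n))) / M[i][i]
    return x

def expected_first_passage(P, j):
    """M_ij for every start state i, from M_ij = 1 + sum_{k != j} P_ik M_kj."""
    n = len(P)
    A = [[(1.0 if i == k else 0.0) - (P[i][k] if k != j else 0.0)
          for k in range(n)] for i in range(n)]
    return solve(A, [1.0] * n)

P = [[0.5, 0.5],
     [0.2, 0.8]]
M = expected_first_passage(P, 1)  # M[0] = M_01, M[1] = M_11 (first recurrence time)
```

For this chain M_01 = 2 (a geometric number of trials with success probability 1/2) and the first recurrence time M_11 = 1.4.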






State j is accessible from i if P_ij^(n) > 0 for some n ≥ 0. If j is
accessible from i and i is accessible from j, then the two states
communicate. In general:

(1) Any state communicates with itself.
(2) If state i communicates with state j, then state j communicates
    with state i.
(3) If state i communicates with state j and state j communicates
    with state k, then state i communicates with state k.


If all states communicate, the Markov chain is irreducible. In a
finite Markov chain, the members of a class are either all transient
states or all positive recurrent states. A state i is said to have
period t (t > 1) if P_ii^(n) = 0 whenever n is not divisible by t, and t
is the largest integer with this property. If a state has period 1, it
is called an aperiodic state. If state i in a class is aperiodic, then
all states in the class are aperiodic. Positive recurrent states that
are aperiodic are called ergodic states.
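Whether all states communicate can be checked mechanically: state j is accessible from i exactly when j is reachable from i along arcs of positive probability in the transition diagram. A reachability sketch (illustrative names, not from the notes):

```python
def is_irreducible(P):
    """True if every state is reachable from every other state along
    transitions with positive probability, i.e. all states communicate."""
    n = len(P)
    for start in range(n):
        seen = {start}
        stack = [start]
        while stack:
            i = stack.pop()
            for j in range(n):
                if P[i][j] > 0 and j not in seen:
                    seen.add(j)
                    stack.append(j)
        if len(seen) < n:  # some state is not accessible from `start`
            return False
    return True

# A two-state chain that keeps switching is irreducible; a chain
# whose state 0 is absorbing is not, since state 1 is unreachable from 0.
cyclic = [[0.0, 1.0], [1.0, 0.0]]
absorbing = [[1.0, 0.0], [0.5, 0.5]]
```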






                                                                    ..                                                                  .
                                                                                                                                  . ," ..:
                                                                                                                                    .
                                                                                                                                    '." .
                                                                                                                                      ",'I
ERGODIC MARKOV CHAINS

STEADY STATE PROBABILITIES (LIMITING
PROBABILITIES)

Let π_j = lim_{N→∞} P_ij^(N)

As N grows large:

            | π1   π2   ...   πn |
    P^N  →  | π1   π2   ...   πn |
            |  .    .    .     . |
            | π1   π2   ...   πn |

As long as the process is ergodic, such a limit exists.

     P^(N) = P^(N-1) · P

     lim_{N→∞} P^(N) = lim_{N→∞} P^(N-1) · P

     | π1   π2   ...   πn |     | π1   π2   ...   πn |
     |  .    .    .     . |  =  |  .    .    .     . | · P
     | π1   π2   ...   πn |     | π1   π2   ...   πn |

     π = π · P

     π^T = P^T · π^T

[This system possesses an infinite number of solutions.]




The normalizing equation

     Σ_{all i} π_i = 1

is used to identify the one solution which will qualify as a
probability distribution.
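One way to obtain this solution numerically is to start from any probability vector and apply π ← π · P repeatedly: for an ergodic chain the iterates converge to the limiting distribution, and each application of P preserves Σ π_i = 1. A sketch (the iteration count and names are my choices, not part of the notes):

```python
def steady_state(P, iterations=200):
    """Approximate the limiting (steady-state) distribution of an
    ergodic chain by power iteration: pi <- pi . P."""
    n = len(P)
    pi = [1.0 / n] * n  # any starting probability distribution works
    for _ in range(iterations):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

P = [[0.5, 0.5],
     [0.2, 0.8]]
pi = steady_state(P)
```

For this chain the exact answer is π = (2/7, 5/7), which solves π = π · P together with the normalizing equation.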

ABSORBING MARKOV CHAINS

Let

         p_11   p_12   ...   p_1k   |   p_1,k+1   ...   p_1,n
         p_21   p_22   ...   p_2k   |   p_2,k+1   ...   p_2,n
           .      .            .    |      .              .
    P =    .      .            .    |      .              .
         p_k1   p_k2   ...   p_kk   |   p_k,k+1   ...   p_k,n
        ----------------------------+------------------------
           0      0    ...    0     |      1      ...    0
           .      .            .    |      .              .
           0      0    ...    0     |      0      ...    1

be the transition matrix of a chain whose states are ordered so
that the k transient states come first and the absorbing states
k+1, ..., n come last.

The partitioned matrix is given by:

              | Q   R |
        P  =  |       |
              | 0   I |

where Q is the k x k matrix of transition probabilities among the
transient states, R contains the transition probabilities from
transient states to absorbing states, 0 is a zero matrix, and I is
an identity matrix.
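As a concrete sketch (the chain below is a hypothetical gambler's-ruin example, not one from the notes), the Q and R blocks can be read off directly once the states are ordered with the transient ones first:

```python
import numpy as np

# Hypothetical gambler's-ruin chain: states 1, 2 are transient;
# states 0, 3 are absorbing. With the k transient states listed
# first, P has the partitioned form  P = [[Q, R], [0, I]].
#            to: 1    2    0    3
P = np.array([[0.0, 0.5, 0.5, 0.0],   # from 1
              [0.5, 0.0, 0.0, 0.5],   # from 2
              [0.0, 0.0, 1.0, 0.0],   # from 0 (absorbing)
              [0.0, 0.0, 0.0, 1.0]])  # from 3 (absorbing)

k = 2                  # number of transient states
Q = P[:k, :k]          # transient -> transient block
R = P[:k, k:]          # transient -> absorbing block
print(Q)
print(R)
```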



Let e_ij = mean number of times that transient state j is occupied
           before absorption, given that the initial state is i
    E    = the corresponding matrix

Then,

    i ≠ j :   e_ij = Σ_{v=1..k} p_iv e_vj

    i = j :   e_ij = 1 + Σ_{v=1..k} p_iv e_vj


In matrix form:

    E = I + QE
    E - QE = I
    (I - Q)E = I
    E = (I - Q)^-1

Let d_i = mean total number of transitions until absorption, given
that the initial state is i

    d_i = Σ_{j=1..k} e_ij
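These two quantities can be computed in a few lines; the Q block below is from a hypothetical gambler's-ruin chain (transient states 1, 2; absorbing states 0, 3), used here only as an illustration:

```python
import numpy as np

# Transient-to-transient block of a hypothetical gambler's-ruin
# chain with two transient states.
Q = np.array([[0.0, 0.5],
              [0.5, 0.0]])

E = np.linalg.inv(np.eye(2) - Q)   # E = (I - Q)^-1
d = E.sum(axis=1)                  # d_i = sum over j of e_ij

print(E)   # mean visits e_ij to transient state j starting from i
print(d)   # mean number of transitions until absorption
```

For this chain the mean absorption time from either transient state works out to 2 transitions.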




ABSORPTION PROBABILITY - the probability of entering an absorbing
state

Let A_ij = probability that the process ever enters absorbing state
j, given that the initial state is i.

    A_ij = p_ij + Σ_{v=1..k} p_iv A_vj
In matrix form


A = matrix of A_ij (not necessarily square), where the number of
rows is the number of transient states and the number of columns is
the number of absorbing states

Examining matrix A

    A = R + QA
    A - QA = R
    (I - Q)A = R
    A = (I - Q)^-1 R
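Continuing the hypothetical gambler's-ruin sketch (transient states 1, 2; absorbing states 0, 3), the absorption probabilities follow directly from the Q and R blocks:

```python
import numpy as np

# Q and R blocks of a hypothetical gambler's-ruin chain.
Q = np.array([[0.0, 0.5],
              [0.5, 0.0]])
R = np.array([[0.5, 0.0],
              [0.0, 0.5]])

A = np.linalg.inv(np.eye(2) - Q) @ R   # A = (I - Q)^-1 R

print(A)              # rows: initial transient state; cols: absorbing state
print(A.sum(axis=1))  # each row sums to 1: absorption is certain
```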

CONDITIONAL MEAN FIRST PASSAGE TIME (M_ij) - mean number of
transitions which will occur before absorbing state j is entered,
given that the process starts in state i and is eventually absorbed
in state j

    A_ij M_ij = A_ij + Σ_{k≠j} p_ik A_kj M_kj

(the sum runs over the transient states k)
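The relation above is linear in the products x_ij = A_ij M_ij: for each absorbing state j, x = a_j + Q x where a_j is column j of A, so x = (I - Q)^-1 a_j and M_ij = x_ij / A_ij. A sketch using the same hypothetical gambler's-ruin chain:

```python
import numpy as np

# Q and R blocks of a hypothetical gambler's-ruin chain
# (transient states 1, 2; absorbing states 0, 3).
Q = np.array([[0.0, 0.5],
              [0.5, 0.0]])
R = np.array([[0.5, 0.0],
              [0.0, 0.5]])

E = np.linalg.inv(np.eye(2) - Q)   # fundamental matrix (I - Q)^-1
A = E @ R                          # absorption probabilities

# Solve all columns at once: (E @ A)[i, j] = A_ij * M_ij, so an
# element-wise division recovers M_ij.
M = (E @ A) / A

print(M)   # conditional mean first passage times
```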





More Related Content

What's hot

"Deep Reinforcement Learning for Optimal Order Placement in a Limit Order Boo...
"Deep Reinforcement Learning for Optimal Order Placement in a Limit Order Boo..."Deep Reinforcement Learning for Optimal Order Placement in a Limit Order Boo...
"Deep Reinforcement Learning for Optimal Order Placement in a Limit Order Boo...Quantopian
ย 
Coordinate Descent method
Coordinate Descent methodCoordinate Descent method
Coordinate Descent methodSanghyuk Chun
ย 
Discreet and continuous probability
Discreet and continuous probabilityDiscreet and continuous probability
Discreet and continuous probabilitynj1992
ย 
Monte carlo simulation
Monte carlo simulationMonte carlo simulation
Monte carlo simulationMissAnam
ย 
Markov Models
Markov ModelsMarkov Models
Markov ModelsVu Pham
ย 
Monte Carlo Simulation
Monte Carlo SimulationMonte Carlo Simulation
Monte Carlo SimulationDeepti Singh
ย 
Markov chain and its Application
Markov chain and its Application Markov chain and its Application
Markov chain and its Application Tilakpoudel2
ย 
Sensitivity Analysis
Sensitivity AnalysisSensitivity Analysis
Sensitivity AnalysisBhargav Seeram
ย 
Multiple linear regression
Multiple linear regressionMultiple linear regression
Multiple linear regressionJames Neill
ย 
PG STAT 531 Lecture 5 Probability Distribution
PG STAT 531 Lecture 5 Probability DistributionPG STAT 531 Lecture 5 Probability Distribution
PG STAT 531 Lecture 5 Probability DistributionAashish Patel
ย 
sampling distribution
sampling distributionsampling distribution
sampling distributionMmedsc Hahm
ย 
The Basic of Molecular Dynamics Simulation
The Basic of Molecular Dynamics SimulationThe Basic of Molecular Dynamics Simulation
The Basic of Molecular Dynamics SimulationSyed Lokman
ย 
Stochastic modelling and its applications
Stochastic modelling and its applicationsStochastic modelling and its applications
Stochastic modelling and its applicationsKartavya Jain
ย 
Poisson Distribution
Poisson DistributionPoisson Distribution
Poisson DistributionHafiz UsmanAli
ย 
Fractional factorial design tutorial
Fractional factorial design tutorialFractional factorial design tutorial
Fractional factorial design tutorialGaurav Kr
ย 
Lesson 2 stationary_time_series
Lesson 2 stationary_time_seriesLesson 2 stationary_time_series
Lesson 2 stationary_time_seriesankit_ppt
ย 
Madhu k s liposomes
Madhu k s liposomesMadhu k s liposomes
Madhu k s liposomesMadhu Honey
ย 
ANOVA 2-WAY Classification
ANOVA 2-WAY ClassificationANOVA 2-WAY Classification
ANOVA 2-WAY ClassificationSharlaine Ruth
ย 

What's hot (20)

"Deep Reinforcement Learning for Optimal Order Placement in a Limit Order Boo...
"Deep Reinforcement Learning for Optimal Order Placement in a Limit Order Boo..."Deep Reinforcement Learning for Optimal Order Placement in a Limit Order Boo...
"Deep Reinforcement Learning for Optimal Order Placement in a Limit Order Boo...
ย 
Coordinate Descent method
Coordinate Descent methodCoordinate Descent method
Coordinate Descent method
ย 
Discreet and continuous probability
Discreet and continuous probabilityDiscreet and continuous probability
Discreet and continuous probability
ย 
Monte carlo simulation
Monte carlo simulationMonte carlo simulation
Monte carlo simulation
ย 
Nanocrystals
NanocrystalsNanocrystals
Nanocrystals
ย 
Markov Models
Markov ModelsMarkov Models
Markov Models
ย 
Monte Carlo Simulation
Monte Carlo SimulationMonte Carlo Simulation
Monte Carlo Simulation
ย 
Markov chain and its Application
Markov chain and its Application Markov chain and its Application
Markov chain and its Application
ย 
Sensitivity Analysis
Sensitivity AnalysisSensitivity Analysis
Sensitivity Analysis
ย 
Multiple linear regression
Multiple linear regressionMultiple linear regression
Multiple linear regression
ย 
Linear Regression.pptx
Linear Regression.pptxLinear Regression.pptx
Linear Regression.pptx
ย 
PG STAT 531 Lecture 5 Probability Distribution
PG STAT 531 Lecture 5 Probability DistributionPG STAT 531 Lecture 5 Probability Distribution
PG STAT 531 Lecture 5 Probability Distribution
ย 
sampling distribution
sampling distributionsampling distribution
sampling distribution
ย 
The Basic of Molecular Dynamics Simulation
The Basic of Molecular Dynamics SimulationThe Basic of Molecular Dynamics Simulation
The Basic of Molecular Dynamics Simulation
ย 
Stochastic modelling and its applications
Stochastic modelling and its applicationsStochastic modelling and its applications
Stochastic modelling and its applications
ย 
Poisson Distribution
Poisson DistributionPoisson Distribution
Poisson Distribution
ย 
Fractional factorial design tutorial
Fractional factorial design tutorialFractional factorial design tutorial
Fractional factorial design tutorial
ย 
Lesson 2 stationary_time_series
Lesson 2 stationary_time_seriesLesson 2 stationary_time_series
Lesson 2 stationary_time_series
ย 
Madhu k s liposomes
Madhu k s liposomesMadhu k s liposomes
Madhu k s liposomes
ย 
ANOVA 2-WAY Classification
ANOVA 2-WAY ClassificationANOVA 2-WAY Classification
ANOVA 2-WAY Classification
ย 

Viewers also liked

Monte carlo
Monte carloMonte carlo
Monte carloshishirkawde
ย 
Monte carlo simulation
Monte carlo simulationMonte carlo simulation
Monte carlo simulationAnurag Jaiswal
ย 
Search Engine Marketing
Search Engine Marketing Search Engine Marketing
Search Engine Marketing Mehul Rasadiya
ย 
Monte Carlo Simulations
Monte Carlo SimulationsMonte Carlo Simulations
Monte Carlo Simulationsgfbreaux
ย 
Markov Chains
Markov ChainsMarkov Chains
Markov Chainsguest8901f4
ย 

Viewers also liked (6)

Markov chain
Markov chainMarkov chain
Markov chain
ย 
Monte carlo
Monte carloMonte carlo
Monte carlo
ย 
Monte carlo simulation
Monte carlo simulationMonte carlo simulation
Monte carlo simulation
ย 
Search Engine Marketing
Search Engine Marketing Search Engine Marketing
Search Engine Marketing
ย 
Monte Carlo Simulations
Monte Carlo SimulationsMonte Carlo Simulations
Monte Carlo Simulations
ย 
Markov Chains
Markov ChainsMarkov Chains
Markov Chains
ย 

Similar to Markov theory

1994 the influence of dimerization on the stability of ge hutclusters on si(001)
1994 the influence of dimerization on the stability of ge hutclusters on si(001)1994 the influence of dimerization on the stability of ge hutclusters on si(001)
1994 the influence of dimerization on the stability of ge hutclusters on si(001)pmloscholte
ย 
Standard Chartered Full & Final Settelment of Dues Letter
Standard Chartered Full & Final Settelment of Dues LetterStandard Chartered Full & Final Settelment of Dues Letter
Standard Chartered Full & Final Settelment of Dues LetterVishal Gondal
ย 
Process flow map
Process flow mapProcess flow map
Process flow mapadimak
ย 
Process flow map
Process flow map Process flow map
Process flow map adimak
ย 

Similar to Markov theory (6)

Mock DUI
Mock DUIMock DUI
Mock DUI
ย 
Bag filters
Bag filtersBag filters
Bag filters
ย 
1994 the influence of dimerization on the stability of ge hutclusters on si(001)
1994 the influence of dimerization on the stability of ge hutclusters on si(001)1994 the influence of dimerization on the stability of ge hutclusters on si(001)
1994 the influence of dimerization on the stability of ge hutclusters on si(001)
ย 
Standard Chartered Full & Final Settelment of Dues Letter
Standard Chartered Full & Final Settelment of Dues LetterStandard Chartered Full & Final Settelment of Dues Letter
Standard Chartered Full & Final Settelment of Dues Letter
ย 
Process flow map
Process flow mapProcess flow map
Process flow map
ย 
Process flow map
Process flow map Process flow map
Process flow map
ย 

More from De La Salle University-Manila

Verfication and validation of simulation models
Verfication and validation of simulation modelsVerfication and validation of simulation models
Verfication and validation of simulation modelsDe La Salle University-Manila
ย 
Chapter3 general principles of discrete event simulation
Chapter3   general principles of discrete event simulationChapter3   general principles of discrete event simulation
Chapter3 general principles of discrete event simulationDe La Salle University-Manila
ย 
Comparison and evaluation of alternative designs
Comparison and evaluation of alternative designsComparison and evaluation of alternative designs
Comparison and evaluation of alternative designsDe La Salle University-Manila
ย 

More from De La Salle University-Manila (20)

Queueing theory
Queueing theoryQueueing theory
Queueing theory
ย 
Queueing theory
Queueing theoryQueueing theory
Queueing theory
ย 
Queuing problems
Queuing problemsQueuing problems
Queuing problems
ย 
Verfication and validation of simulation models
Verfication and validation of simulation modelsVerfication and validation of simulation models
Verfication and validation of simulation models
ย 
Markov exercises
Markov exercisesMarkov exercises
Markov exercises
ย 
Game theory problem set
Game theory problem setGame theory problem set
Game theory problem set
ย 
Game theory
Game theoryGame theory
Game theory
ย 
Decision theory Problems
Decision theory ProblemsDecision theory Problems
Decision theory Problems
ย 
Decision theory handouts
Decision theory handoutsDecision theory handouts
Decision theory handouts
ย 
Sequential decisionmaking
Sequential decisionmakingSequential decisionmaking
Sequential decisionmaking
ย 
Decision theory
Decision theoryDecision theory
Decision theory
ย 
Decision theory blockwood
Decision theory blockwoodDecision theory blockwood
Decision theory blockwood
ย 
Decision theory
Decision theoryDecision theory
Decision theory
ย 
Random variate generation
Random variate generationRandom variate generation
Random variate generation
ย 
Random number generation
Random number generationRandom number generation
Random number generation
ย 
Monte carlo simulation
Monte carlo simulationMonte carlo simulation
Monte carlo simulation
ย 
Input modeling
Input modelingInput modeling
Input modeling
ย 
Conceptual modeling
Conceptual modelingConceptual modeling
Conceptual modeling
ย 
Chapter3 general principles of discrete event simulation
Chapter3   general principles of discrete event simulationChapter3   general principles of discrete event simulation
Chapter3 general principles of discrete event simulation
ย 
Comparison and evaluation of alternative designs
Comparison and evaluation of alternative designsComparison and evaluation of alternative designs
Comparison and evaluation of alternative designs
ย 

Recently uploaded

How to Manage Global Discount in Odoo 17 POS
How to Manage Global Discount in Odoo 17 POSHow to Manage Global Discount in Odoo 17 POS
How to Manage Global Discount in Odoo 17 POSCeline George
ย 
Key note speaker Neum_Admir Softic_ENG.pdf
Key note speaker Neum_Admir Softic_ENG.pdfKey note speaker Neum_Admir Softic_ENG.pdf
Key note speaker Neum_Admir Softic_ENG.pdfAdmir Softic
ย 
Sociology 101 Demonstration of Learning Exhibit
Sociology 101 Demonstration of Learning ExhibitSociology 101 Demonstration of Learning Exhibit
Sociology 101 Demonstration of Learning Exhibitjbellavia9
ย 
Accessible Digital Futures project (20/03/2024)
Accessible Digital Futures project (20/03/2024)Accessible Digital Futures project (20/03/2024)
Accessible Digital Futures project (20/03/2024)Jisc
ย 
Making communications land - Are they received and understood as intended? we...
Making communications land - Are they received and understood as intended? we...Making communications land - Are they received and understood as intended? we...
Making communications land - Are they received and understood as intended? we...Association for Project Management
ย 
Tแป”NG ร”N TแบฌP THI Vร€O LแปšP 10 Mร”N TIแบพNG ANH Nฤ‚M HแปŒC 2023 - 2024 Cร“ ฤรP รN (NGแปฎ ร‚...
Tแป”NG ร”N TแบฌP THI Vร€O LแปšP 10 Mร”N TIแบพNG ANH Nฤ‚M HแปŒC 2023 - 2024 Cร“ ฤรP รN (NGแปฎ ร‚...Tแป”NG ร”N TแบฌP THI Vร€O LแปšP 10 Mร”N TIแบพNG ANH Nฤ‚M HแปŒC 2023 - 2024 Cร“ ฤรP รN (NGแปฎ ร‚...
Tแป”NG ร”N TแบฌP THI Vร€O LแปšP 10 Mร”N TIแบพNG ANH Nฤ‚M HแปŒC 2023 - 2024 Cร“ ฤรP รN (NGแปฎ ร‚...Nguyen Thanh Tu Collection
ย 
Spellings Wk 3 English CAPS CARES Please Practise
Spellings Wk 3 English CAPS CARES Please PractiseSpellings Wk 3 English CAPS CARES Please Practise
Spellings Wk 3 English CAPS CARES Please PractiseAnaAcapella
ย 
UGC NET Paper 1 Mathematical Reasoning & Aptitude.pdf
UGC NET Paper 1 Mathematical Reasoning & Aptitude.pdfUGC NET Paper 1 Mathematical Reasoning & Aptitude.pdf
UGC NET Paper 1 Mathematical Reasoning & Aptitude.pdfNirmal Dwivedi
ย 
How to Give a Domain for a Field in Odoo 17
How to Give a Domain for a Field in Odoo 17How to Give a Domain for a Field in Odoo 17
How to Give a Domain for a Field in Odoo 17Celine George
ย 
Kodo Millet PPT made by Ghanshyam bairwa college of Agriculture kumher bhara...
Kodo Millet  PPT made by Ghanshyam bairwa college of Agriculture kumher bhara...Kodo Millet  PPT made by Ghanshyam bairwa college of Agriculture kumher bhara...
Kodo Millet PPT made by Ghanshyam bairwa college of Agriculture kumher bhara...pradhanghanshyam7136
ย 
Jual Obat Aborsi Hongkong ( Asli No.1 ) 085657271886 Obat Penggugur Kandungan...
Jual Obat Aborsi Hongkong ( Asli No.1 ) 085657271886 Obat Penggugur Kandungan...Jual Obat Aborsi Hongkong ( Asli No.1 ) 085657271886 Obat Penggugur Kandungan...
Jual Obat Aborsi Hongkong ( Asli No.1 ) 085657271886 Obat Penggugur Kandungan...ZurliaSoop
ย 
SOC 101 Demonstration of Learning Presentation
SOC 101 Demonstration of Learning PresentationSOC 101 Demonstration of Learning Presentation
SOC 101 Demonstration of Learning Presentationcamerronhm
ย 
2024-NATIONAL-LEARNING-CAMP-AND-OTHER.pptx
2024-NATIONAL-LEARNING-CAMP-AND-OTHER.pptx2024-NATIONAL-LEARNING-CAMP-AND-OTHER.pptx
2024-NATIONAL-LEARNING-CAMP-AND-OTHER.pptxMaritesTamaniVerdade
ย 
SKILL OF INTRODUCING THE LESSON MICRO SKILLS.pptx
SKILL OF INTRODUCING THE LESSON MICRO SKILLS.pptxSKILL OF INTRODUCING THE LESSON MICRO SKILLS.pptx
SKILL OF INTRODUCING THE LESSON MICRO SKILLS.pptxAmanpreet Kaur
ย 
Python Notes for mca i year students osmania university.docx
Python Notes for mca i year students osmania university.docxPython Notes for mca i year students osmania university.docx
Python Notes for mca i year students osmania university.docxRamakrishna Reddy Bijjam
ย 
The basics of sentences session 3pptx.pptx
The basics of sentences session 3pptx.pptxThe basics of sentences session 3pptx.pptx
The basics of sentences session 3pptx.pptxheathfieldcps1
ย 
Holdier Curriculum Vitae (April 2024).pdf
Holdier Curriculum Vitae (April 2024).pdfHoldier Curriculum Vitae (April 2024).pdf
Holdier Curriculum Vitae (April 2024).pdfagholdier
ย 
Third Battle of Panipat detailed notes.pptx
Third Battle of Panipat detailed notes.pptxThird Battle of Panipat detailed notes.pptx
Third Battle of Panipat detailed notes.pptxAmita Gupta
ย 
Understanding Accommodations and Modifications
Understanding  Accommodations and ModificationsUnderstanding  Accommodations and Modifications
Understanding Accommodations and ModificationsMJDuyan
ย 
Explore beautiful and ugly buildings. Mathematics helps us create beautiful d...
Explore beautiful and ugly buildings. Mathematics helps us create beautiful d...Explore beautiful and ugly buildings. Mathematics helps us create beautiful d...
Explore beautiful and ugly buildings. Mathematics helps us create beautiful d...christianmathematics
ย 

Recently uploaded (20)

How to Manage Global Discount in Odoo 17 POS
How to Manage Global Discount in Odoo 17 POSHow to Manage Global Discount in Odoo 17 POS
How to Manage Global Discount in Odoo 17 POS
ย 
Key note speaker Neum_Admir Softic_ENG.pdf
Key note speaker Neum_Admir Softic_ENG.pdfKey note speaker Neum_Admir Softic_ENG.pdf
Key note speaker Neum_Admir Softic_ENG.pdf
ย 
Sociology 101 Demonstration of Learning Exhibit
Sociology 101 Demonstration of Learning ExhibitSociology 101 Demonstration of Learning Exhibit
Sociology 101 Demonstration of Learning Exhibit
ย 
Accessible Digital Futures project (20/03/2024)
Accessible Digital Futures project (20/03/2024)Accessible Digital Futures project (20/03/2024)
Accessible Digital Futures project (20/03/2024)
ย 
Making communications land - Are they received and understood as intended? we...
Making communications land - Are they received and understood as intended? we...Making communications land - Are they received and understood as intended? we...
Making communications land - Are they received and understood as intended? we...
ย 
Tแป”NG ร”N TแบฌP THI Vร€O LแปšP 10 Mร”N TIแบพNG ANH Nฤ‚M HแปŒC 2023 - 2024 Cร“ ฤรP รN (NGแปฎ ร‚...
Tแป”NG ร”N TแบฌP THI Vร€O LแปšP 10 Mร”N TIแบพNG ANH Nฤ‚M HแปŒC 2023 - 2024 Cร“ ฤรP รN (NGแปฎ ร‚...Tแป”NG ร”N TแบฌP THI Vร€O LแปšP 10 Mร”N TIแบพNG ANH Nฤ‚M HแปŒC 2023 - 2024 Cร“ ฤรP รN (NGแปฎ ร‚...
Tแป”NG ร”N TแบฌP THI Vร€O LแปšP 10 Mร”N TIแบพNG ANH Nฤ‚M HแปŒC 2023 - 2024 Cร“ ฤรP รN (NGแปฎ ร‚...
ย 
Spellings Wk 3 English CAPS CARES Please Practise
Spellings Wk 3 English CAPS CARES Please PractiseSpellings Wk 3 English CAPS CARES Please Practise
Spellings Wk 3 English CAPS CARES Please Practise
ย 
UGC NET Paper 1 Mathematical Reasoning & Aptitude.pdf
UGC NET Paper 1 Mathematical Reasoning & Aptitude.pdfUGC NET Paper 1 Mathematical Reasoning & Aptitude.pdf
UGC NET Paper 1 Mathematical Reasoning & Aptitude.pdf
ย 
How to Give a Domain for a Field in Odoo 17
How to Give a Domain for a Field in Odoo 17How to Give a Domain for a Field in Odoo 17
How to Give a Domain for a Field in Odoo 17
ย 
Kodo Millet PPT made by Ghanshyam bairwa college of Agriculture kumher bhara...
Kodo Millet  PPT made by Ghanshyam bairwa college of Agriculture kumher bhara...Kodo Millet  PPT made by Ghanshyam bairwa college of Agriculture kumher bhara...
Kodo Millet PPT made by Ghanshyam bairwa college of Agriculture kumher bhara...
ย 
Jual Obat Aborsi Hongkong ( Asli No.1 ) 085657271886 Obat Penggugur Kandungan...
Jual Obat Aborsi Hongkong ( Asli No.1 ) 085657271886 Obat Penggugur Kandungan...Jual Obat Aborsi Hongkong ( Asli No.1 ) 085657271886 Obat Penggugur Kandungan...
Jual Obat Aborsi Hongkong ( Asli No.1 ) 085657271886 Obat Penggugur Kandungan...
ย 
SOC 101 Demonstration of Learning Presentation
SOC 101 Demonstration of Learning PresentationSOC 101 Demonstration of Learning Presentation
SOC 101 Demonstration of Learning Presentation
ย 
2024-NATIONAL-LEARNING-CAMP-AND-OTHER.pptx
2024-NATIONAL-LEARNING-CAMP-AND-OTHER.pptx2024-NATIONAL-LEARNING-CAMP-AND-OTHER.pptx
2024-NATIONAL-LEARNING-CAMP-AND-OTHER.pptx
ย 
SKILL OF INTRODUCING THE LESSON MICRO SKILLS.pptx
SKILL OF INTRODUCING THE LESSON MICRO SKILLS.pptxSKILL OF INTRODUCING THE LESSON MICRO SKILLS.pptx
SKILL OF INTRODUCING THE LESSON MICRO SKILLS.pptx
ย 
Python Notes for mca i year students osmania university.docx
Python Notes for mca i year students osmania university.docxPython Notes for mca i year students osmania university.docx
Python Notes for mca i year students osmania university.docx
ย 
The basics of sentences session 3pptx.pptx
The basics of sentences session 3pptx.pptxThe basics of sentences session 3pptx.pptx
The basics of sentences session 3pptx.pptx
ย 
Holdier Curriculum Vitae (April 2024).pdf
Holdier Curriculum Vitae (April 2024).pdfHoldier Curriculum Vitae (April 2024).pdf
Holdier Curriculum Vitae (April 2024).pdf
ย 
Third Battle of Panipat detailed notes.pptx
Third Battle of Panipat detailed notes.pptxThird Battle of Panipat detailed notes.pptx
Third Battle of Panipat detailed notes.pptx
ย 
Understanding Accommodations and Modifications
Understanding  Accommodations and ModificationsUnderstanding  Accommodations and Modifications
Understanding Accommodations and Modifications
ย 
Explore beautiful and ugly buildings. Mathematics helps us create beautiful d...
Explore beautiful and ugly buildings. Mathematics helps us create beautiful d...Explore beautiful and ugly buildings. Mathematics helps us create beautiful d...
Explore beautiful and ugly buildings. Mathematics helps us create beautiful d...
ย 

Markov theory

  • 1. ], ,oi Mp",RKOVTHEORY 744 ~ ~Q~ '7,,~ U a4dtUlt~"1 .. 14tUe ., ~ '* 'N41foi '/411'8 ti/". DEFINITION 3.1: A stochastic process, {x( t ), t E T}, is a collectionof random variables. That is, for each t E T:. X(t) is a random variable. The index t is often referred to as time and asa result, we refer to X( t) as the state of the process at.time ~..The set T is called the index set of the process. DEFINITION 3.2: When T is a countable set, the stochastic process is said to be a discrete-time process. [f T is an interval of the real line, the stochasticprocess is said to be continuous time- process. DEFINITION: 3.3: . The state space of a stochastic process is defined as the set of all possible values that the random variables X(t) can assulne. THUS, STOCHASTIC A PROCESS ISA fAMILY Of RANDOM VARIABLESTHAT DESCRIBES THE EVOLUTIONTHROUGH , TIME OF SOME (PHYSICAL) PROCESS. 1111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111III111III11 MARKOV THEORY EDGAR L. DE CASTRO PAGE 1 .. .. ..
  • 2. DISCRETE-TIME PROCESSES DEFINITION 3.4: An epoch is a point in time at which the system is observed. The states correspond ,to the possible conditions observed. A transition is a change of state. A record of the observed states through time is caned a realization of the process. DEFINITION 3.5: A transition diagram is a pictorial map in which the states are represented by points and transition by arrows. o TRANSITION DIAGRAM FOR THREE STATES 1111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111IIIIII11 MARKOVTHEORY EDGARL. DECASTRO PAGE2 ',., '' .'" ::,.: .... . . ,'.. ,", <
  • 3. DEFINITION 3.6: The process of transition can be visualized as a random walk of the particle over the transition diagram. A virtual transition is one where the new state is the same as the old. A real transition is a genuine ?hange of state. THE RANDOM WALK MODEL Consider a discrete time process whose state space is given by the integers i = O,:f: 1,:f: 2, The discrete time process is said to be a random walk, if for some number 0 < P < 1, lj,i+l = P = 1..1li,i-I i = 0,:1:1,:J:2,. .. The random walk may be thought of as being a model for an individual walking on a straight line who at each point of time either takes one step to the right with probability p and one step to the left with probability 1 - p. I1I11111I11III1111111111111111111111111111111111111111I11I1III11111111111111111111111111111111111111111111111II1111I1111I111111111111111111111111111111111111111111I11I1I1I1IIII1111 MARKOV THEORY EDGAR L. DE CASTRO PAGE 3 " ", . , .. " ,, . t: ,'. . " .,
• 4. THE MARKOV CHAIN

DEFINITION 3.7:

A Markov chain is a discrete-time stochastic process in which the current state of each random variable X_i depends only on the previous state. The word chain suggests the linking of the random variables to their immediately adjacent neighbors in the sequence. Markov is the Russian mathematician who developed the process around the beginning of the 20th century.

TRANSITION PROBABILITY (p_ij) - the probability of a transition from state i to state j after one period.

TRANSITION MATRIX (P) - the matrix of transition probabilities:

    P = [ p_11  p_12  ...  p_1n ]
        [ p_21  p_22  ...  p_2n ]
        [  .     .           .  ]
        [ p_n1  p_n2  ...  p_nn ]
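A transition matrix maps directly onto a 2-D array; a minimal sketch assuming numpy, with hypothetical probabilities (not taken from the slides):

```python
import numpy as np

# A hypothetical 3-state transition matrix: P[i, j] is the one-step
# probability of moving from state i to state j.
P = np.array([
    [0.7, 0.2, 0.1],
    [0.3, 0.5, 0.2],
    [0.2, 0.3, 0.5],
])

# Each row is a probability distribution over the next state,
# so every entry is nonnegative and every row sums to 1.
assert (P >= 0).all()
assert np.allclose(P.sum(axis=1), 1.0)
```

Checking the row sums when building P catches the most common data-entry error in transition matrices.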
• 5. ASSUMPTIONS OF THE MARKOV CHAIN

1. THE MARKOV ASSUMPTION

Knowledge of the state at any time is sufficient to predict the future of the process. That is, given the present, the future is independent of the past, and the process is "forgetful."

2. THE STATIONARITY ASSUMPTION

The probability mechanism is assumed to be stable: the transition probabilities do not change over time.

CHAPMAN-KOLMOGOROV EQUATIONS

Let P_ij^(n) = the n-step transition probability, i.e., the probability that a process in state i will be in state j after n additional transitions:

    P_ij^(n) = P{X_{n+m} = j | X_m = i},    n ≥ 0, i, j ≥ 0

The Chapman-Kolmogorov equations provide a method for calculating these n-step transition probabilities:

    P_ij^(n+m) = Σ_{k=0}^∞ P_ik^(n) P_kj^(m),    n, m ≥ 0, all i, j

Formally, we derive:
• 6.
    P_ij^(n+m) = P{X_{n+m} = j | X_0 = i}
               = Σ_{k=0}^∞ P{X_{n+m} = j, X_n = k | X_0 = i}
               = Σ_{k=0}^∞ P{X_{n+m} = j | X_n = k, X_0 = i} P{X_n = k | X_0 = i}
               = Σ_{k=0}^∞ P_kj^(m) P_ik^(n)

If we let P^(n) denote the matrix of n-step transition probabilities P_ij^(n), then

    P^(n+m) = P^(n) · P^(m)

where the dot represents matrix multiplication. Hence, in particular:

    P^(2) = P^(1+1) = P · P = P^2

And by induction:

    P^(n) = P^(n-1+1) = P^(n-1) · P = P^n

That is, the n-step transition matrix is obtained by multiplying the matrix P by itself n times. Therefore the N-step transition matrix is given by:
• 7.
    P^(N) = [ p_11^(N)  p_12^(N)  ...  p_1n^(N) ]
            [ p_21^(N)  p_22^(N)  ...  p_2n^(N) ]
            [    .         .               .    ]
            [ p_n1^(N)  p_n2^(N)  ...  p_nn^(N) ]

FIRST PASSAGE AND FIRST RETURN PROBABILITIES

Let f_ij^(N) = first passage probability = the probability of reaching state j from state i for the first time in N steps. If i = j, f_ii^(N) is the first return probability.

    f_ij^(N) = P{X_N = j, X_{N-1} ≠ j, X_{N-2} ≠ j, ..., X_1 ≠ j | X_0 = i}

    f_ij^(1) = p_ij

    f_ij^(N) = p_ij^(N) - Σ_{k=1}^{N-1} f_ij^(k) p_jj^(N-k)
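Both the N-step matrix and the first-passage recursion above can be computed numerically; a minimal sketch assuming numpy, using a hypothetical two-state chain:

```python
import numpy as np

def first_passage(P, i, j, N):
    """First passage probabilities f_ij^(1), ..., f_ij^(N) via the
    recursion f_ij^(N) = p_ij^(N) - sum_{k=1}^{N-1} f_ij^(k) p_jj^(N-k)."""
    # n-step transition matrices P^(0), P^(1), ..., P^(N)
    powers = [np.linalg.matrix_power(P, n) for n in range(N + 1)]
    f = [0.0] * (N + 1)            # f[n] holds f_ij^(n); f[0] unused
    for n in range(1, N + 1):
        f[n] = powers[n][i, j] - sum(f[k] * powers[n - k][j, j]
                                     for k in range(1, n))
    return f[1:]

# Hypothetical two-state chain
P = np.array([[0.7, 0.3],
              [0.4, 0.6]])
f = first_passage(P, 0, 1, 3)
# f^(1) = p_01 = 0.3; f^(2) = 0.7 * 0.3 = 0.21; f^(3) = 0.7^2 * 0.3 = 0.147,
# since the only way to reach state 1 for the first time at step n is to
# stay in state 0 for n - 1 steps and then move.
```

For this two-state example the recursion reproduces the direct stay-then-move calculation, which is a useful sanity check.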
• 8. CLASSIFICATION OF STATES

For fixed i and j, the f_ij^(N) are nonnegative numbers such that

    Σ_{N=1}^∞ f_ij^(N) ≤ 1

When the sum does equal 1, the f_ij^(N) can be considered as a probability distribution for the random variable first passage time.

If i = j and

    Σ_{N=1}^∞ f_ii^(N) = 1

then state i is called a recurrent state, because this condition implies that once the process is in state i, it will return to state i.

A special case of the recurrent state is the absorbing state. A state i is said to be an absorbing state if the one-step transition probability p_ii = 1. Thus, if a state is absorbing, the process will never leave it once it enters.

If

    Σ_{N=1}^∞ f_ii^(N) < 1

then state i is called a transient state, because this condition implies that once the process is in state i, there is a strictly positive probability that it will never return to i.
• 9. Let M_ij = expected first passage time from i to j:

    M_ij = ∞                        if Σ_{N=1}^∞ f_ij^(N) < 1

    M_ij = Σ_{N=1}^∞ N f_ij^(N)     if Σ_{N=1}^∞ f_ij^(N) = 1

[M_ij exists only if the states are recurrent.]

Whenever

    Σ_{N=1}^∞ f_ij^(N) = 1

then

    M_ij = 1 + Σ_{k≠j} p_ik M_kj

When j = i, the expected first passage time is called the first recurrence time. If M_ii = ∞, state i is called a null recurrent state. If M_ii < ∞, it is called a positive recurrent state. In a finite Markov chain, there are no null recurrent states (only positive recurrent states and transient states).
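The equations M_ij = 1 + Σ_{k≠j} p_ik M_kj are linear in the unknowns M_ij for a fixed target state j, so they can be solved with one linear solve; a minimal sketch assuming numpy, with a hypothetical two-state chain:

```python
import numpy as np

def mean_first_passage(P, j):
    """Solve M_ij = 1 + sum_{k != j} p_ik M_kj for every start state i,
    with the target state j fixed."""
    n = P.shape[0]
    Pt = P.copy()
    Pt[:, j] = 0.0                 # drop the k = j term from the sum
    # (I - Pt) m = 1, where m[i] = M_ij
    return np.linalg.solve(np.eye(n) - Pt, np.ones(n))

# Hypothetical two-state chain
P = np.array([[0.7, 0.3],
              [0.4, 0.6]])
m = mean_first_passage(P, 1)
# m[0] = M_01 = 1/0.3 (geometric waiting time with success prob. 0.3);
# m[1] = M_11, the first recurrence time of state 1.
```

For a two-state chain M_01 reduces to the mean of a geometric distribution, which gives an independent check of the solver.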
• 10. State j is accessible from state i if p_ij^(n) > 0 for some n ≥ 0. If j is accessible from i and i is accessible from j, then the two states communicate. In general:

(1) any state communicates with itself;
(2) if state i communicates with state j, then state j communicates with state i;
(3) if state i communicates with state j and state j communicates with state k, then state i communicates with state k.

If all states communicate, the Markov chain is irreducible. In a finite Markov chain, the members of a class are either all transient states or all positive recurrent states.

A state i is said to have period t (t > 1) if p_ii^(n) = 0 whenever n is not divisible by t, and t is the largest integer with this property. If a state has period 1, it is called an aperiodic state. If state i in a class is aperiodic, then all states in the class are aperiodic. Positive recurrent states that are aperiodic are called ergodic states.
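Accessibility in a finite chain can be tested mechanically: in an n-state chain, if j is reachable from i at all, it is reachable in fewer than n steps, so the (i, j) entry of (I + P)^n is positive exactly when j is accessible from i. A minimal sketch assuming numpy (the example chain is hypothetical):

```python
import numpy as np

def communicates(P, i, j):
    """States i and j communicate iff each is accessible from the other.
    (I + P)^n has a positive (i, j) entry iff j is reachable from i
    in at most n steps, which suffices for an n-state chain."""
    n = P.shape[0]
    R = np.linalg.matrix_power(np.eye(n) + P, n)
    return bool(R[i, j] > 0 and R[j, i] > 0)

# A hypothetical chain where state 2 is absorbing: 0 -> 1 -> 2 only.
P = np.array([[0.5, 0.5, 0.0],
              [0.0, 0.5, 0.5],
              [0.0, 0.0, 1.0]])
# State 2 is accessible from state 0, but not vice versa,
# so states 0 and 2 do not communicate; this chain is not irreducible.
```

This check confirms property (1) above as well, since R always has a positive diagonal.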
• 11. ERGODIC MARKOV CHAINS

STEADY STATE PROBABILITIES (LIMITING PROBABILITIES)

Let

    π_j = lim_{N→∞} p_ij^(N)

As N grows large:

    P^N → [ π_1  π_2  ...  π_n ]
          [ π_1  π_2  ...  π_n ]
          [  .    .         .  ]
          [ π_1  π_2  ...  π_n ]

As long as the process is ergodic, such a limit exists. Since

    P^(N) = P^(N-1) · P

taking limits on both sides,

    lim_{N→∞} P^(N) = lim_{N→∞} P^(N-1) · P

gives

    π = π · P,    or equivalently    π^T = P^T · π^T

[This system by itself possesses an infinite number of solutions.]
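In practice π is found by solving π = πP with one of the redundant equations replaced by the normalization Σ_i π_i = 1; a minimal sketch assuming numpy, with a hypothetical two-state chain:

```python
import numpy as np

def steady_state(P):
    """Solve pi = pi P together with sum(pi) = 1."""
    n = P.shape[0]
    # (P^T - I) pi = 0 has infinitely many solutions, so replace the
    # last equation with the normalizing equation sum(pi) = 1.
    A = P.T - np.eye(n)
    A[-1, :] = 1.0
    b = np.zeros(n)
    b[-1] = 1.0
    return np.linalg.solve(A, b)

# Hypothetical two-state chain
P = np.array([[0.7, 0.3],
              [0.4, 0.6]])
pi = steady_state(P)
# Steady state: pi = (4/7, 3/7); check it is a fixed point of P.
assert np.allclose(pi @ P, pi)
```

Replacing a row rather than appending one keeps the system square, so `np.linalg.solve` applies directly.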
• 12. The normalizing equation

    Σ_i π_i = 1

is used to identify the one solution which will qualify as a probability distribution.

ABSORBING MARKOV CHAINS

Number the states so that the k transient states come first, followed by the absorbing states:

    P = [ p_11   p_12   ...  p_1k  |  p_1,k+1  ...  ]
        [ p_21   p_22   ...  p_2k  |  p_2,k+1  ...  ]
        [  .      .           .    |     .          ]
        [ p_k1   p_k2   ...  p_kk  |  p_k,k+1  ...  ]
        [ --------------------------+-------------- ]
        [  0      0     ...   0    |  1   ...   0   ]
        [  .      .           .    |  .         .   ]
        [  0      0     ...   0    |  0   ...   1   ]

The partitioned matrix is given by:

    P = [ Q | R ]
        [ 0 | I ]
• 13. Let e_ij = mean number of times that transient state j is occupied before absorption, given that the initial state is i, and let E be the corresponding matrix. Then, for transient states i and j:

    i ≠ j:  e_ij = Σ_{v=1}^k p_iv e_vj

    i = j:  e_ij = 1 + Σ_{v=1}^k p_iv e_vj

In matrix form:

    E = I + QE
    E - QE = I
    (I - Q)E = I
    E = (I - Q)^(-1)

Let d_i = expected total number of transitions until absorption:

    d_i = Σ_{j=1}^k e_ij
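Computing E = (I - Q)^(-1) and the row sums d is a two-line job; a minimal sketch assuming numpy, with a hypothetical chain of two transient states and one absorbing state:

```python
import numpy as np

# Hypothetical Q block for transient states 0 and 1; the remaining row
# probability (0.2 and 0.4 respectively) goes to the absorbing state.
Q = np.array([[0.5, 0.3],
              [0.2, 0.4]])

# Fundamental matrix E = (I - Q)^(-1): e_ij is the expected number of
# visits to transient state j, starting from transient state i.
E = np.linalg.inv(np.eye(2) - Q)

# d_i = expected number of transitions until absorption = row sums of E.
d = E.sum(axis=1)
# For this Q: E = [[2.5, 1.25], [5/6, 25/12]], d = [3.75, 35/12].
```

The row sums d answer the usual applied question ("how long until absorption?") without ever enumerating paths.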
• 14. ABSORPTION PROBABILITY - the probability of entering a given absorbing state

Let A_ij = probability that the process eventually enters absorbing state j, given that the initial state is i:

    A_ij = p_ij + Σ_{v=1}^k p_iv A_vj

In matrix form, let A = the matrix of the A_ij (not necessarily square) [where the number of rows is the number of transient states and the number of columns is the number of absorbing states]. Then:

    A = R + QA
    A - QA = R
    (I - Q)A = R
    A = (I - Q)^(-1) R

CONDITIONAL MEAN FIRST PASSAGE TIME - the expected number of transitions which will occur before absorbing state j is entered, given that it is entered:

    A_ij M_ij = A_ij + Σ_{k≠j} p_ik A_kj M_kj
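The formula A = (I - Q)^(-1) R can be evaluated directly; a minimal sketch assuming numpy, with a hypothetical chain of two transient and two absorbing states:

```python
import numpy as np

# Hypothetical partition P = [[Q, R], [0, I]]: Q covers the transient
# states, R the one-step probabilities into the absorbing states.
Q = np.array([[0.5, 0.3],
              [0.2, 0.4]])
R = np.array([[0.1, 0.1],
              [0.3, 0.1]])

# Rows of [Q | R] must sum to 1 for a valid transition matrix.
assert np.allclose(np.hstack([Q, R]).sum(axis=1), 1.0)

# A = (I - Q)^(-1) R: A[i, j] is the probability of eventually being
# absorbed in absorbing state j, starting from transient state i.
A = np.linalg.inv(np.eye(2) - Q) @ R

# Absorption is certain from every transient state, so each row sums to 1.
assert np.allclose(A.sum(axis=1), 1.0)
```

The row-sum check on A is a quick way to confirm that Q really contains only transient states.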