Lesson 11
Markov Chains

Math 20

October 15, 2007


Announcements
   Review Session (ML), 10/16, 7:30–9:30, Hall E
   Problem Set 4 is on the course web site. Due October 17.
   Midterm I: 10/18, Hall A, 7–8:30pm
   Office hours: Mondays 1–2, Tuesdays 3–4, Wednesdays 1–3 (SC 323)
   Old exams and solutions are on the website.
The Markov Dance

  Divide the class into three groups A, B, and C. Upon my signal:
      1/3 of group A goes to group B, and 1/3 of group A goes to
      group C.
      1/4 of group B goes to group A, and 1/4 of group B goes to
      group C.
      1/2 of group C goes to group B.
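The dance rules can be encoded numerically; the fraction of each group that stays is whatever doesn't leave. A small illustrative sketch in plain Python (the dictionary encoding and the `stays` helper are ours, not from the lecture; the lecture formalizes this later as a transition matrix):

```python
# move[j][i] is the fraction of group j that moves to group i in one
# step; whatever is left over stays put.
from fractions import Fraction as F

move = {
    "A": {"B": F(1, 3), "C": F(1, 3)},  # 1/3 of A to B, 1/3 of A to C
    "B": {"A": F(1, 4), "C": F(1, 4)},  # 1/4 of B to A, 1/4 of B to C
    "C": {"B": F(1, 2)},                # 1/2 of C to B
}

def stays(group):
    """Fraction of a group that remains where it is after one signal."""
    return 1 - sum(move[group].values())

print([stays(g) for g in "ABC"])
# [Fraction(1, 3), Fraction(1, 2), Fraction(1, 2)]
```

So after each signal, 1/3 of A, 1/2 of B, and 1/2 of C stay in place.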
Another Example



  Suppose on any given class day you wake up and decide whether to
  come to class. If you went to class the time before, you’re 70%
  likely to go today, and if you skipped the last class, you’re 80%
  likely to go today. Some questions you might ask are:
      If I go to class on Monday, how likely am I to go to class on
      Friday?
      Assuming the class is infinitely long (the horror!),
      approximately what portion of class will I attend?
Often we are interested in how something moves between certain
“states” over discrete time steps. Examples are:
    movement of people between regions
    states of the weather
    movement between positions on a Monopoly board
    your score in blackjack

Definition
A Markov chain or Markov process is a process in which the
probability of the system being in a particular state at a given
observation period depends only on its state at the immediately
preceding observation period.
Common questions about a Markov chain are:
    What is the probability of transitions from state to state over
    multiple observations?
    Are there any “equilibria” in the process?
    Is there a long-term stability to the process?
Definition
Suppose the system has n possible states. For each i and j, let t_{ij}
be the probability of switching from state j to state i. The matrix
T whose (i, j)th entry is t_{ij} is called the transition matrix.

Example
The transition matrix for the skipping class example is

    T = \begin{pmatrix} 0.7 & 0.8 \\ 0.3 & 0.2 \end{pmatrix}
The structure of the transition matrix reflects two important facts
about probabilities:
    All entries are nonnegative.
    Each column adds up to one.
Such a matrix is called a stochastic matrix.
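Both conditions are easy to check mechanically. A minimal sketch in plain Python (the `is_stochastic` helper is our own name, not from the lecture):

```python
# Check the stochastic-matrix conditions for a matrix given as a list
# of rows: all entries nonnegative, and every column summing to one.

def is_stochastic(T, tol=1e-12):
    n = len(T)
    nonneg = all(T[i][j] >= 0 for i in range(n) for j in range(n))
    columns = all(abs(sum(T[i][j] for i in range(n)) - 1) < tol
                  for j in range(n))
    return nonneg and columns

T = [[0.7, 0.8],
     [0.3, 0.2]]
print(is_stochastic(T))             # True
print(is_stochastic([[0.5, 0.2],
                     [0.4, 0.8]]))  # False: first column sums to 0.9
```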
Definition
The state vector of a Markov process with n states at time step k
is the vector

    x^{(k)} = \begin{pmatrix} p_1^{(k)} \\ p_2^{(k)} \\ \vdots \\ p_n^{(k)} \end{pmatrix}

where p_j^{(k)} is the probability that the system is in state j at
time step k.

Example
Suppose we start out with 20 students in group A and 10 students
each in groups B and C. Then the initial state vector is

    x^{(0)} = \begin{pmatrix} 0.5 \\ 0.25 \\ 0.25 \end{pmatrix}.
Example
Suppose after three weeks of class I am equally likely to come to
class or skip. Then my state vector would be

    x^{(10)} = \begin{pmatrix} 0.5 \\ 0.5 \end{pmatrix}.
The structure of the state vector reflects two important facts about
probabilities:
    All entries are nonnegative.
    The entries add up to one.
Such a vector is called a probability vector.
Lemma
Let T be an n × n stochastic matrix and x an n × 1 probability
vector. Then T x is a probability vector.

Proof.
We need to show that the entries of T x add up to one. We have

    \sum_{i=1}^{n} (T x)_i = \sum_{i=1}^{n} \sum_{j=1}^{n} t_{ij} x_j
                           = \sum_{j=1}^{n} \left( \sum_{i=1}^{n} t_{ij} \right) x_j
                           = \sum_{j=1}^{n} 1 \cdot x_j = 1
Theorem
If T is the transition matrix of a Markov process, then the state
vector x^{(k+1)} at the (k+1)th observation period can be determined
from the state vector x^{(k)} at the kth observation period by

    x^{(k+1)} = T x^{(k)}

This comes from an important idea in conditional probability:

    P(state i at t = k+1)
        = \sum_{j=1}^{n} P(move from state j to state i) P(state j at t = k)

That is, for each i,

    p_i^{(k+1)} = \sum_{j=1}^{n} t_{ij} p_j^{(k)}
Illustration

   Example
   How does the probability of going to class on Wednesday depend
   on the probabilities of going to class on Monday?

   [Tree diagram: Monday's states go (probability p_1^{(k)}) and skip
   (probability p_2^{(k)}) each branch to Wednesday's states go and
   skip, with transition probabilities t_{11}, t_{21} out of go and
   t_{12}, t_{22} out of skip.]

       p_1^{(k+1)} = t_{11} p_1^{(k)} + t_{12} p_2^{(k)}
       p_2^{(k+1)} = t_{21} p_1^{(k)} + t_{22} p_2^{(k)}
Example
If I go to class on Monday, what’s the probability I’ll go to class on
Friday?

Solution
We have x^{(0)} = \begin{pmatrix} 1 \\ 0 \end{pmatrix}. We want to know x^{(2)}. We have

    x^{(2)} = T x^{(1)} = T (T x^{(0)}) = T^2 x^{(0)}
            = \begin{pmatrix} 0.7 & 0.8 \\ 0.3 & 0.2 \end{pmatrix}^{2} \begin{pmatrix} 1 \\ 0 \end{pmatrix}
            = \begin{pmatrix} 0.7 & 0.8 \\ 0.3 & 0.2 \end{pmatrix} \begin{pmatrix} 0.7 \\ 0.3 \end{pmatrix}
            = \begin{pmatrix} 0.73 \\ 0.27 \end{pmatrix}
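The same computation can be replayed in a few lines of plain Python (the `mat_vec` helper is our own, not from the slides):

```python
def mat_vec(T, x):
    """Multiply an n-by-n matrix (list of rows) by a length-n vector."""
    return [sum(T[i][j] * x[j] for j in range(len(x)))
            for i in range(len(T))]

T = [[0.7, 0.8],
     [0.3, 0.2]]
x0 = [1, 0]          # went to class on Monday

x1 = mat_vec(T, x0)  # Wednesday's state vector
x2 = mat_vec(T, x1)  # Friday's state vector
print([round(p, 2) for p in x2])  # [0.73, 0.27]
```

So the probability of going to class on Friday is 0.73, as computed above.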
Let’s look at successive powers of the transition matrix. Do they
converge? To what?
Let’s look at successive powers of the transition matrix in the
Markov Dance.

    T   = \begin{pmatrix} 0.333333 & 0.25 & 0 \\ 0.333333 & 0.5 & 0.5 \\ 0.333333 & 0.25 & 0.5 \end{pmatrix}

    T^2 = \begin{pmatrix} 0.194444 & 0.208333 & 0.125 \\ 0.444444 & 0.458333 & 0.5 \\ 0.361111 & 0.333333 & 0.375 \end{pmatrix}

    T^3 = \begin{pmatrix} 0.175926 & 0.184028 & 0.166667 \\ 0.467593 & 0.465278 & 0.479167 \\ 0.356481 & 0.350694 & 0.354167 \end{pmatrix}

    T^4 = \begin{pmatrix} 0.175540 & 0.177662 & 0.175347 \\ 0.470679 & 0.469329 & 0.472222 \\ 0.353781 & 0.353009 & 0.352431 \end{pmatrix}

    T^5 = \begin{pmatrix} 0.176183 & 0.176553 & 0.176505 \\ 0.470743 & 0.470390 & 0.470775 \\ 0.353074 & 0.353057 & 0.352720 \end{pmatrix}

    T^6 = \begin{pmatrix} 0.176414 & 0.176448 & 0.176529 \\ 0.470636 & 0.470575 & 0.470583 \\ 0.352950 & 0.352977 & 0.352889 \end{pmatrix}

Do they converge? To what?
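One way to see the convergence numerically is to compute a high power of T exactly and inspect a column (a sketch with Python's `fractions` module; the choice of the 20th power is arbitrary, just large enough that the columns agree to many decimal places):

```python
from fractions import Fraction

def mat_mul(A, B):
    """Multiply two n-by-n matrices given as lists of rows."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

third, quarter, half = Fraction(1, 3), Fraction(1, 4), Fraction(1, 2)
T = [[third, quarter, 0],
     [third, half,    half],
     [third, quarter, half]]

P = T
for _ in range(19):      # compute T^20 exactly
    P = mat_mul(P, T)

# Every column of T^20 is (to many decimal places) the same vector,
# approaching (3/17, 8/17, 6/17):
print([round(float(P[i][0]), 6) for i in range(3)])
# [0.176471, 0.470588, 0.352941]
```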
A transition matrix (or the corresponding Markov process) is called
regular if some power of the matrix has all positive entries. In other
words, there is a positive probability of eventually moving from every
state to every state.
Theorem 2.5
If T is the transition matrix of a regular Markov process, then
(a) As n → ∞, T^n approaches a matrix

        A = \begin{pmatrix} u_1 & u_1 & \cdots & u_1 \\ u_2 & u_2 & \cdots & u_2 \\ \vdots & \vdots & & \vdots \\ u_n & u_n & \cdots & u_n \end{pmatrix},

    all of whose columns are identical.
(b) Every column u is a probability vector, all of whose
    components are positive.
Theorem 2.6
If T is a regular transition matrix and A and u are as above, then
(a) For any probability vector x, T^n x → u as n → ∞, so that u is
    a steady-state vector.
(b) The steady-state vector u is the unique probability vector
    satisfying the matrix equation T u = u.
Finding the steady-state vector




   We know the steady-state vector is unique. So we use the equation
   it satisfies to find it: Tu = u.
This becomes a homogeneous linear system when you put it in the form

                             (T − I)u = 0
Example (Skipping class)
If the transition matrix is T = \begin{pmatrix} 0.7 & 0.8 \\ 0.3 & 0.2 \end{pmatrix}, what is the
steady-state vector?

Solution
We can combine the equations (T − I)u = 0 and u_1 + u_2 = 1 into a
single linear system with augmented matrix

    \left(\begin{array}{cc|c} -3/10 & 8/10 & 0 \\ 3/10 & -8/10 & 0 \\ 1 & 1 & 1 \end{array}\right)
    →
    \left(\begin{array}{cc|c} 1 & 0 & 8/11 \\ 0 & 1 & 3/11 \\ 0 & 0 & 0 \end{array}\right)

So the steady-state vector is u = \begin{pmatrix} 8/11 \\ 3/11 \end{pmatrix}. You'll go to class
about 72.7% of the time.
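For a two-state chain the answer can be checked exactly with Python's `fractions` module. Writing T = [[1−a, b], [a, 1−b]], solving (T − I)u = 0 with u_1 + u_2 = 1 gives the standard closed form u = (b/(a+b), a/(a+b)) (the closed form is a well-known fact, not stated on the slides):

```python
from fractions import Fraction

a = Fraction(3, 10)   # P(skip tomorrow | went today)
b = Fraction(8, 10)   # P(go tomorrow   | skipped today)

u = [b / (a + b), a / (a + b)]
print(u)  # [Fraction(8, 11), Fraction(3, 11)]

# Verify T u = u exactly:
T = [[1 - a, b],
     [a, 1 - b]]
Tu = [T[0][0] * u[0] + T[0][1] * u[1],
      T[1][0] * u[0] + T[1][1] * u[1]]
assert Tu == u
```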
Example (The Markov Dance)
If the transition matrix is T = \begin{pmatrix} 1/3 & 1/4 & 0 \\ 1/3 & 1/2 & 1/2 \\ 1/3 & 1/4 & 1/2 \end{pmatrix}, what is
the steady-state vector?

Solution
We combine (T − I)u = 0 with u_1 + u_2 + u_3 = 1:

    \left(\begin{array}{ccc|c} -2/3 & 1/4 & 0 & 0 \\ 1/3 & -1/2 & 1/2 & 0 \\ 1/3 & 1/4 & -1/2 & 0 \\ 1 & 1 & 1 & 1 \end{array}\right)
    →
    \left(\begin{array}{ccc|c} 1 & 0 & 0 & 3/17 \\ 0 & 1 & 0 & 8/17 \\ 0 & 0 & 1 & 6/17 \\ 0 & 0 & 0 & 0 \end{array}\right)

So the steady-state vector is u = \begin{pmatrix} 3/17 \\ 8/17 \\ 6/17 \end{pmatrix}.
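A direct exact check that Tu = u for the Markov Dance vector, again in plain Python with `fractions`:

```python
from fractions import Fraction as F

T = [[F(1, 3), F(1, 4), F(0)],
     [F(1, 3), F(1, 2), F(1, 2)],
     [F(1, 3), F(1, 4), F(1, 2)]]
u = [F(3, 17), F(8, 17), F(6, 17)]

# T u should reproduce u exactly, and u should sum to 1.
Tu = [sum(T[i][j] * u[j] for j in range(3)) for i in range(3)]
print(Tu == u and sum(u) == 1)  # True
```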

Lesson 25: Evaluating Definite Integrals (slides)Lesson 25: Evaluating Definite Integrals (slides)
Lesson 25: Evaluating Definite Integrals (slides)
 
Lesson 25: Evaluating Definite Integrals (handout)
Lesson 25: Evaluating Definite Integrals (handout)Lesson 25: Evaluating Definite Integrals (handout)
Lesson 25: Evaluating Definite Integrals (handout)
 
Lesson 24: Areas and Distances, The Definite Integral (handout)
Lesson 24: Areas and Distances, The Definite Integral (handout)Lesson 24: Areas and Distances, The Definite Integral (handout)
Lesson 24: Areas and Distances, The Definite Integral (handout)
 
Lesson 24: Areas and Distances, The Definite Integral (slides)
Lesson 24: Areas and Distances, The Definite Integral (slides)Lesson 24: Areas and Distances, The Definite Integral (slides)
Lesson 24: Areas and Distances, The Definite Integral (slides)
 
Lesson 23: Antiderivatives (slides)
Lesson 23: Antiderivatives (slides)Lesson 23: Antiderivatives (slides)
Lesson 23: Antiderivatives (slides)
 
Lesson 23: Antiderivatives (slides)
Lesson 23: Antiderivatives (slides)Lesson 23: Antiderivatives (slides)
Lesson 23: Antiderivatives (slides)
 
Lesson 22: Optimization Problems (slides)
Lesson 22: Optimization Problems (slides)Lesson 22: Optimization Problems (slides)
Lesson 22: Optimization Problems (slides)
 
Lesson 22: Optimization Problems (handout)
Lesson 22: Optimization Problems (handout)Lesson 22: Optimization Problems (handout)
Lesson 22: Optimization Problems (handout)
 
Lesson 21: Curve Sketching (slides)
Lesson 21: Curve Sketching (slides)Lesson 21: Curve Sketching (slides)
Lesson 21: Curve Sketching (slides)
 
Lesson 21: Curve Sketching (handout)
Lesson 21: Curve Sketching (handout)Lesson 21: Curve Sketching (handout)
Lesson 21: Curve Sketching (handout)
 
Lesson 20: Derivatives and the Shapes of Curves (slides)
Lesson 20: Derivatives and the Shapes of Curves (slides)Lesson 20: Derivatives and the Shapes of Curves (slides)
Lesson 20: Derivatives and the Shapes of Curves (slides)
 
Lesson 20: Derivatives and the Shapes of Curves (handout)
Lesson 20: Derivatives and the Shapes of Curves (handout)Lesson 20: Derivatives and the Shapes of Curves (handout)
Lesson 20: Derivatives and the Shapes of Curves (handout)
 

Recently uploaded

Presentation on how to chat with PDF using ChatGPT code interpreter
Presentation on how to chat with PDF using ChatGPT code interpreterPresentation on how to chat with PDF using ChatGPT code interpreter
Presentation on how to chat with PDF using ChatGPT code interpreternaman860154
 
Tech Trends Report 2024 Future Today Institute.pdf
Tech Trends Report 2024 Future Today Institute.pdfTech Trends Report 2024 Future Today Institute.pdf
Tech Trends Report 2024 Future Today Institute.pdfhans926745
 
EIS-Webinar-Prompt-Knowledge-Eng-2024-04-08.pptx
EIS-Webinar-Prompt-Knowledge-Eng-2024-04-08.pptxEIS-Webinar-Prompt-Knowledge-Eng-2024-04-08.pptx
EIS-Webinar-Prompt-Knowledge-Eng-2024-04-08.pptxEarley Information Science
 
The 7 Things I Know About Cyber Security After 25 Years | April 2024
The 7 Things I Know About Cyber Security After 25 Years | April 2024The 7 Things I Know About Cyber Security After 25 Years | April 2024
The 7 Things I Know About Cyber Security After 25 Years | April 2024Rafal Los
 
TrustArc Webinar - Stay Ahead of US State Data Privacy Law Developments
TrustArc Webinar - Stay Ahead of US State Data Privacy Law DevelopmentsTrustArc Webinar - Stay Ahead of US State Data Privacy Law Developments
TrustArc Webinar - Stay Ahead of US State Data Privacy Law DevelopmentsTrustArc
 
Finology Group – Insurtech Innovation Award 2024
Finology Group – Insurtech Innovation Award 2024Finology Group – Insurtech Innovation Award 2024
Finology Group – Insurtech Innovation Award 2024The Digital Insurer
 
Strategies for Unlocking Knowledge Management in Microsoft 365 in the Copilot...
Strategies for Unlocking Knowledge Management in Microsoft 365 in the Copilot...Strategies for Unlocking Knowledge Management in Microsoft 365 in the Copilot...
Strategies for Unlocking Knowledge Management in Microsoft 365 in the Copilot...Drew Madelung
 
🐬 The future of MySQL is Postgres 🐘
🐬  The future of MySQL is Postgres   🐘🐬  The future of MySQL is Postgres   🐘
🐬 The future of MySQL is Postgres 🐘RTylerCroy
 
08448380779 Call Girls In Greater Kailash - I Women Seeking Men
08448380779 Call Girls In Greater Kailash - I Women Seeking Men08448380779 Call Girls In Greater Kailash - I Women Seeking Men
08448380779 Call Girls In Greater Kailash - I Women Seeking MenDelhi Call girls
 
How to Troubleshoot Apps for the Modern Connected Worker
How to Troubleshoot Apps for the Modern Connected WorkerHow to Troubleshoot Apps for the Modern Connected Worker
How to Troubleshoot Apps for the Modern Connected WorkerThousandEyes
 
Automating Google Workspace (GWS) & more with Apps Script
Automating Google Workspace (GWS) & more with Apps ScriptAutomating Google Workspace (GWS) & more with Apps Script
Automating Google Workspace (GWS) & more with Apps Scriptwesley chun
 
CNv6 Instructor Chapter 6 Quality of Service
CNv6 Instructor Chapter 6 Quality of ServiceCNv6 Instructor Chapter 6 Quality of Service
CNv6 Instructor Chapter 6 Quality of Servicegiselly40
 
ProductAnonymous-April2024-WinProductDiscovery-MelissaKlemke
ProductAnonymous-April2024-WinProductDiscovery-MelissaKlemkeProductAnonymous-April2024-WinProductDiscovery-MelissaKlemke
ProductAnonymous-April2024-WinProductDiscovery-MelissaKlemkeProduct Anonymous
 
[2024]Digital Global Overview Report 2024 Meltwater.pdf
[2024]Digital Global Overview Report 2024 Meltwater.pdf[2024]Digital Global Overview Report 2024 Meltwater.pdf
[2024]Digital Global Overview Report 2024 Meltwater.pdfhans926745
 
Exploring the Future Potential of AI-Enabled Smartphone Processors
Exploring the Future Potential of AI-Enabled Smartphone ProcessorsExploring the Future Potential of AI-Enabled Smartphone Processors
Exploring the Future Potential of AI-Enabled Smartphone Processorsdebabhi2
 
Bajaj Allianz Life Insurance Company - Insurer Innovation Award 2024
Bajaj Allianz Life Insurance Company - Insurer Innovation Award 2024Bajaj Allianz Life Insurance Company - Insurer Innovation Award 2024
Bajaj Allianz Life Insurance Company - Insurer Innovation Award 2024The Digital Insurer
 
A Domino Admins Adventures (Engage 2024)
A Domino Admins Adventures (Engage 2024)A Domino Admins Adventures (Engage 2024)
A Domino Admins Adventures (Engage 2024)Gabriella Davis
 
Strategies for Landing an Oracle DBA Job as a Fresher
Strategies for Landing an Oracle DBA Job as a FresherStrategies for Landing an Oracle DBA Job as a Fresher
Strategies for Landing an Oracle DBA Job as a FresherRemote DBA Services
 
Partners Life - Insurer Innovation Award 2024
Partners Life - Insurer Innovation Award 2024Partners Life - Insurer Innovation Award 2024
Partners Life - Insurer Innovation Award 2024The Digital Insurer
 
IAC 2024 - IA Fast Track to Search Focused AI Solutions
IAC 2024 - IA Fast Track to Search Focused AI SolutionsIAC 2024 - IA Fast Track to Search Focused AI Solutions
IAC 2024 - IA Fast Track to Search Focused AI SolutionsEnterprise Knowledge
 

Recently uploaded (20)

Presentation on how to chat with PDF using ChatGPT code interpreter
Presentation on how to chat with PDF using ChatGPT code interpreterPresentation on how to chat with PDF using ChatGPT code interpreter
Presentation on how to chat with PDF using ChatGPT code interpreter
 
Tech Trends Report 2024 Future Today Institute.pdf
Tech Trends Report 2024 Future Today Institute.pdfTech Trends Report 2024 Future Today Institute.pdf
Tech Trends Report 2024 Future Today Institute.pdf
 
EIS-Webinar-Prompt-Knowledge-Eng-2024-04-08.pptx
EIS-Webinar-Prompt-Knowledge-Eng-2024-04-08.pptxEIS-Webinar-Prompt-Knowledge-Eng-2024-04-08.pptx
EIS-Webinar-Prompt-Knowledge-Eng-2024-04-08.pptx
 
The 7 Things I Know About Cyber Security After 25 Years | April 2024
The 7 Things I Know About Cyber Security After 25 Years | April 2024The 7 Things I Know About Cyber Security After 25 Years | April 2024
The 7 Things I Know About Cyber Security After 25 Years | April 2024
 
TrustArc Webinar - Stay Ahead of US State Data Privacy Law Developments
TrustArc Webinar - Stay Ahead of US State Data Privacy Law DevelopmentsTrustArc Webinar - Stay Ahead of US State Data Privacy Law Developments
TrustArc Webinar - Stay Ahead of US State Data Privacy Law Developments
 
Finology Group – Insurtech Innovation Award 2024
Finology Group – Insurtech Innovation Award 2024Finology Group – Insurtech Innovation Award 2024
Finology Group – Insurtech Innovation Award 2024
 
Strategies for Unlocking Knowledge Management in Microsoft 365 in the Copilot...
Strategies for Unlocking Knowledge Management in Microsoft 365 in the Copilot...Strategies for Unlocking Knowledge Management in Microsoft 365 in the Copilot...
Strategies for Unlocking Knowledge Management in Microsoft 365 in the Copilot...
 
🐬 The future of MySQL is Postgres 🐘
🐬  The future of MySQL is Postgres   🐘🐬  The future of MySQL is Postgres   🐘
🐬 The future of MySQL is Postgres 🐘
 
08448380779 Call Girls In Greater Kailash - I Women Seeking Men
08448380779 Call Girls In Greater Kailash - I Women Seeking Men08448380779 Call Girls In Greater Kailash - I Women Seeking Men
08448380779 Call Girls In Greater Kailash - I Women Seeking Men
 
How to Troubleshoot Apps for the Modern Connected Worker
How to Troubleshoot Apps for the Modern Connected WorkerHow to Troubleshoot Apps for the Modern Connected Worker
How to Troubleshoot Apps for the Modern Connected Worker
 
Automating Google Workspace (GWS) & more with Apps Script
Automating Google Workspace (GWS) & more with Apps ScriptAutomating Google Workspace (GWS) & more with Apps Script
Automating Google Workspace (GWS) & more with Apps Script
 
CNv6 Instructor Chapter 6 Quality of Service
CNv6 Instructor Chapter 6 Quality of ServiceCNv6 Instructor Chapter 6 Quality of Service
CNv6 Instructor Chapter 6 Quality of Service
 
ProductAnonymous-April2024-WinProductDiscovery-MelissaKlemke
ProductAnonymous-April2024-WinProductDiscovery-MelissaKlemkeProductAnonymous-April2024-WinProductDiscovery-MelissaKlemke
ProductAnonymous-April2024-WinProductDiscovery-MelissaKlemke
 
[2024]Digital Global Overview Report 2024 Meltwater.pdf
[2024]Digital Global Overview Report 2024 Meltwater.pdf[2024]Digital Global Overview Report 2024 Meltwater.pdf
[2024]Digital Global Overview Report 2024 Meltwater.pdf
 
Exploring the Future Potential of AI-Enabled Smartphone Processors
Exploring the Future Potential of AI-Enabled Smartphone ProcessorsExploring the Future Potential of AI-Enabled Smartphone Processors
Exploring the Future Potential of AI-Enabled Smartphone Processors
 
Bajaj Allianz Life Insurance Company - Insurer Innovation Award 2024
Bajaj Allianz Life Insurance Company - Insurer Innovation Award 2024Bajaj Allianz Life Insurance Company - Insurer Innovation Award 2024
Bajaj Allianz Life Insurance Company - Insurer Innovation Award 2024
 
A Domino Admins Adventures (Engage 2024)
A Domino Admins Adventures (Engage 2024)A Domino Admins Adventures (Engage 2024)
A Domino Admins Adventures (Engage 2024)
 
Strategies for Landing an Oracle DBA Job as a Fresher
Strategies for Landing an Oracle DBA Job as a FresherStrategies for Landing an Oracle DBA Job as a Fresher
Strategies for Landing an Oracle DBA Job as a Fresher
 
Partners Life - Insurer Innovation Award 2024
Partners Life - Insurer Innovation Award 2024Partners Life - Insurer Innovation Award 2024
Partners Life - Insurer Innovation Award 2024
 
IAC 2024 - IA Fast Track to Search Focused AI Solutions
IAC 2024 - IA Fast Track to Search Focused AI SolutionsIAC 2024 - IA Fast Track to Search Focused AI Solutions
IAC 2024 - IA Fast Track to Search Focused AI Solutions
 

Lesson 11: Markov Chains

  • 1. Lesson 11 Markov Chains Math 20 October 15, 2007 Announcements Review Session (ML), 10/16, 7:30–9:30 Hall E Problem Set 4 is on the course web site. Due October 17 Midterm I 10/18, Hall A 7–8:30pm OH: Mondays 1–2, Tuesdays 3–4, Wednesdays 1–3 (SC 323) Old exams and solutions on website
  • 2. The Markov Dance Divide the class into three groups A, B, and C.
  • 3. The Markov Dance Divide the class into three groups A, B, and C. Upon my signal: 1/3 of group A goes to group B, and 1/3 of group A goes to group C.
  • 4. The Markov Dance Divide the class into three groups A, B, and C. Upon my signal: 1/3 of group A goes to group B, and 1/3 of group A goes to group C. 1/4 of group B goes to group A, and 1/4 of group B goes to group C.
  • 5. The Markov Dance Divide the class into three groups A, B, and C. Upon my signal: 1/3 of group A goes to group B, and 1/3 of group A goes to group C. 1/4 of group B goes to group A, and 1/4 of group B goes to group C. 1/2 of group C goes to group B.
  • 6.
  • 7. Another Example Suppose on any given class day you wake up and decide whether to come to class. If you went to class the time before, you’re 70% likely to go today, and if you skipped the last class, you’re 80% likely to go today.
  • 8. Another Example Suppose on any given class day you wake up and decide whether to come to class. If you went to class the time before, you’re 70% likely to go today, and if you skipped the last class, you’re 80% likely to go today. Some questions you might ask are: If I go to class on Monday, how likely am I to go to class on Friday?
  • 9. Another Example Suppose on any given class day you wake up and decide whether to come to class. If you went to class the time before, you’re 70% likely to go today, and if you skipped the last class, you’re 80% likely to go today. Some questions you might ask are: If I go to class on Monday, how likely am I to go to class on Friday? Assuming the class is infinitely long (the horror!), approximately what portion of class will I attend?
  • 10. Many times we are interested in the transition of something between certain “states” over discrete time steps. Examples are: movement of people between regions; states of the weather; movement between positions on a Monopoly board; your score in blackjack.
  • 11. Many times we are interested in the transition of something between certain “states” over discrete time steps. Examples are: movement of people between regions; states of the weather; movement between positions on a Monopoly board; your score in blackjack. Definition A Markov chain or Markov process is a process in which the probability of the system being in a particular state at a given observation period depends only on its state at the immediately preceding observation period.
  • 12. Common questions about a Markov chain are: What is the probability of transitions from state to state over multiple observations? Are there any “equilibria” in the process? Is there a long-term stability to the process?
  • 13. Definition Suppose the system has n possible states. For each i and j, let tij be the probability of switching from state j to state i. The matrix T whose ijth entry is tij is called the transition matrix.
  • 14. Definition Suppose the system has n possible states. For each i and j, let tij be the probability of switching from state j to state i. The matrix T whose ijth entry is tij is called the transition matrix. Example The transition matrix for the skipping class example is T = [0.7 0.8; 0.3 0.2]
  • 15.
  • 16.
  • 17. The big idea about the transition matrix reflects an important fact about probabilities: All entries are nonnegative. The columns add up to one. Such a matrix is called a stochastic matrix.
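The two defining properties of a stochastic matrix are easy to check numerically. A minimal NumPy sketch (an illustration, not part of the original slides), using the skipping-class transition matrix:

```python
import numpy as np

# Transition matrix for the skipping-class example:
# entry (i, j) is the probability of moving from state j to state i
T = np.array([[0.7, 0.8],
              [0.3, 0.2]])

# A stochastic matrix has nonnegative entries and columns summing to one
assert (T >= 0).all()
assert np.allclose(T.sum(axis=0), 1.0)
```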
  • 18. Definition The state vector of a Markov process with n states at time step k is the vector x^(k) = (p_1^(k), p_2^(k), . . . , p_n^(k)), where p_j^(k) is the probability that the system is in state j at time step k.
  • 19.
  • 20. Definition The state vector of a Markov process with n states at time step k is the vector x^(k) = (p_1^(k), p_2^(k), . . . , p_n^(k)), where p_j^(k) is the probability that the system is in state j at time step k. Example Suppose we start out with 20 students in group A and 10 students each in groups B and C. Then the initial state vector is x^(0) =
  • 21. Definition The state vector of a Markov process with n states at time step k is the vector x^(k) = (p_1^(k), p_2^(k), . . . , p_n^(k)), where p_j^(k) is the probability that the system is in state j at time step k. Example Suppose we start out with 20 students in group A and 10 students each in groups B and C. Then the initial state vector is x^(0) = (0.5, 0.25, 0.25).
  • 22.
  • 23. Example Suppose after three weeks of class I am equally likely to come to class or skip. Then my state vector would be x^(10) =
  • 24. Example Suppose after three weeks of class I am equally likely to come to class or skip. Then my state vector would be x^(10) = (0.5, 0.5)
  • 25. Example Suppose after three weeks of class I am equally likely to come to class or skip. Then my state vector would be x^(10) = (0.5, 0.5) The big idea about state vectors reflects an important fact about probabilities: All entries are nonnegative. The entries add up to one. Such a vector is called a probability vector.
  • 26. Lemma Let T be an n × n stochastic matrix and x an n × 1 probability vector. Then T x is a probability vector.
  • 27. Lemma Let T be an n × n stochastic matrix and x an n × 1 probability vector. Then T x is a probability vector. Proof. We need to show that the entries of T x add up to one. We have Σ_{i=1}^n (T x)_i = Σ_{i=1}^n Σ_{j=1}^n t_ij x_j = Σ_{j=1}^n (Σ_{i=1}^n t_ij) x_j = Σ_{j=1}^n 1 · x_j = 1
  • 28. Theorem If T is the transition matrix of a Markov process, then the state vector x^(k+1) at the (k + 1)th observation period can be determined from the state vector x^(k) at the kth observation period, as x^(k+1) = T x^(k)
  • 29. Theorem If T is the transition matrix of a Markov process, then the state vector x^(k+1) at the (k + 1)th observation period can be determined from the state vector x^(k) at the kth observation period, as x^(k+1) = T x^(k) This comes from an important idea in conditional probability: P(state i at t = k + 1) = Σ_{j=1}^n P(move from state j to state i) P(state j at t = k) That is, for each i, p_i^(k+1) = Σ_{j=1}^n t_ij p_j^(k)
  • 30. Illustration Example How does the probability of going to class on Wednesday depend on the probabilities of going to class on Monday? [Tree diagram: the Monday states go and skip, with probabilities p_1^(k) and p_2^(k), branch via t_11, t_21, t_12, t_22 to the Wednesday states go and skip.] p_1^(k+1) = t_11 p_1^(k) + t_12 p_2^(k) and p_2^(k+1) = t_21 p_1^(k) + t_22 p_2^(k)
  • 31. Example If I go to class on Monday, what’s the probability I’ll go to class on Friday?
  • 32. Example If I go to class on Monday, what’s the probability I’ll go to class on Friday? Solution We have x^(0) = (1, 0). We want to know x^(2). We have x^(2) = T x^(1) = T (T x^(0)) = T^2 x^(0) = [0.7 0.8; 0.3 0.2]^2 (1, 0) = [0.7 0.8; 0.3 0.2] (0.7, 0.3) = (0.73, 0.27)
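The two-step computation above can be sketched in NumPy (an illustration, not part of the original slides): iterate x^(k+1) = T x^(k) twice starting from x^(0) = (1, 0).

```python
import numpy as np

T = np.array([[0.7, 0.8],
              [0.3, 0.2]])

x = np.array([1.0, 0.0])  # went to class on Monday
for _ in range(2):        # two class days later (Friday, in a MWF schedule)
    x = T @ x             # x^(k+1) = T x^(k)
# x is now (0.73, 0.27): a 73% chance of attending Friday
```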
  • 33. Let’s look at successive powers of the transition matrix. Do they converge? To what?
  • 34. Let’s look at successive powers of the transition matrix in the Markov Dance. T = [0.333333 0.25 0; 0.333333 0.5 0.5; 0.333333 0.25 0.5]
  • 35. Let’s look at successive powers of the transition matrix in the Markov Dance. T = [0.333333 0.25 0; 0.333333 0.5 0.5; 0.333333 0.25 0.5] T^2 = [0.194444 0.208333 0.125; 0.444444 0.458333 0.5; 0.361111 0.333333 0.375]
  • 36. Let’s look at successive powers of the transition matrix in the Markov Dance. T = [0.333333 0.25 0; 0.333333 0.5 0.5; 0.333333 0.25 0.5] T^2 = [0.194444 0.208333 0.125; 0.444444 0.458333 0.5; 0.361111 0.333333 0.375] T^3 = [0.175926 0.184028 0.166667; 0.467593 0.465278 0.479167; 0.356481 0.350694 0.354167]
  • 37. T^4 = [0.17554 0.177662 0.175347; 0.470679 0.469329 0.472222; 0.353781 0.353009 0.352431]
  • 38. T^4 = [0.17554 0.177662 0.175347; 0.470679 0.469329 0.472222; 0.353781 0.353009 0.352431] T^5 = [0.176183 0.176553 0.176505; 0.470743 0.47039 0.470775; 0.353074 0.353057 0.35272]
  • 39. T^4 = [0.17554 0.177662 0.175347; 0.470679 0.469329 0.472222; 0.353781 0.353009 0.352431] T^5 = [0.176183 0.176553 0.176505; 0.470743 0.47039 0.470775; 0.353074 0.353057 0.35272] T^6 = [0.176414 0.176448 0.176529; 0.470636 0.470575 0.470583; 0.35295 0.352977 0.352889] Do they converge? To what?
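The convergence of the powers can be checked numerically. A NumPy sketch (an illustration, not part of the original slides; the limiting values 3/17, 8/17, 6/17 are derived at the end of the lecture):

```python
import numpy as np

# Transition matrix of the Markov Dance
T = np.array([[1/3, 1/4, 0.0],
              [1/3, 1/2, 1/2],
              [1/3, 1/4, 1/2]])

# High powers of a regular stochastic matrix converge to a matrix
# whose columns are all equal to the steady-state vector
P = np.linalg.matrix_power(T, 50)
assert np.allclose(P[:, 0], P[:, 1]) and np.allclose(P[:, 1], P[:, 2])
assert np.allclose(P[:, 0], [3/17, 8/17, 6/17])
```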
  • 40. A transition matrix (or corresponding Markov process) is called regular if some power of the matrix has all nonzero entries. Or, there is a positive probability of eventually moving from every state to every state.
  • 41. Theorem 2.5 If T is the transition matrix of a regular Markov process, then (a) As n → ∞, T^n approaches a matrix A = [u_1 u_1 . . . u_1; u_2 u_2 . . . u_2; . . . ; u_n u_n . . . u_n], all of whose columns are identical. (b) Each column u is a probability vector all of whose components are positive.
  • 42. Theorem 2.6 If T is a regular transition matrix and A and u are as above, then (a) For any probability vector x, T^n x → u as n → ∞, so that u is a steady-state vector. (b) The steady-state vector u is the unique probability vector satisfying the matrix equation Tu = u.
  • 43. Finding the steady-state vector We know the steady-state vector is unique. So we use the equation it satisfies to find it: Tu = u.
  • 44. Finding the steady-state vector We know the steady-state vector is unique. So we use the equation it satisfies to find it: Tu = u. This is a matrix equation if you put it in the form (T − I)u = 0
  • 45.
  • 46. Example (Skipping class) If the transition matrix is T = [0.7 0.8; 0.3 0.2], what is the steady-state vector?
  • 47. Example (Skipping class) If the transition matrix is T = [0.7 0.8; 0.3 0.2], what is the steady-state vector? Solution We can combine the equations (T − I)u = 0, u_1 + u_2 = 1 into a single linear system with augmented matrix [−3/10 8/10 | 0; 3/10 −8/10 | 0; 1 1 | 1], which row-reduces to [1 0 | 8/11; 0 1 | 3/11; 0 0 | 0].
  • 48. Example (Skipping class) If the transition matrix is T = [0.7 0.8; 0.3 0.2], what is the steady-state vector? Solution We can combine the equations (T − I)u = 0, u_1 + u_2 = 1 into a single linear system with augmented matrix [−3/10 8/10 | 0; 3/10 −8/10 | 0; 1 1 | 1], which row-reduces to [1 0 | 8/11; 0 1 | 3/11; 0 0 | 0]. So the steady-state vector is (8/11, 3/11). You’ll go to class about 72% of the time.
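The same combined system, (T − I)u = 0 together with the entries of u summing to one, can be solved numerically. A NumPy sketch using a least-squares solve (an illustration, not part of the original slides):

```python
import numpy as np

T = np.array([[0.7, 0.8],
              [0.3, 0.2]])
n = T.shape[0]

# Stack (T - I)u = 0 with the normalization sum(u) = 1
A = np.vstack([T - np.eye(n), np.ones(n)])
b = np.concatenate([np.zeros(n), [1.0]])

# The system is consistent, so least squares recovers the exact solution
u, *_ = np.linalg.lstsq(A, b, rcond=None)
# u is the steady-state vector (8/11, 3/11)
```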
  • 49. Example (The Markov Dance) If the transition matrix is T = [1/3 1/4 0; 1/3 1/2 1/2; 1/3 1/4 1/2], what is the steady-state vector?
  • 50. Example (The Markov Dance) If the transition matrix is T = [1/3 1/4 0; 1/3 1/2 1/2; 1/3 1/4 1/2], what is the steady-state vector? Solution We have the augmented matrix [−2/3 1/4 0 | 0; 1/3 −1/2 1/2 | 0; 1/3 1/4 −1/2 | 0; 1 1 1 | 1], which row-reduces to [1 0 0 | 3/17; 0 1 0 | 8/17; 0 0 1 | 6/17; 0 0 0 | 0]. So the steady-state vector is (3/17, 8/17, 6/17).
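An alternative to row reduction: since Tu = u says u is an eigenvector of T with eigenvalue 1, the steady-state vector can be extracted from an eigendecomposition. A NumPy sketch (an illustration, not part of the original slides):

```python
import numpy as np

# Transition matrix of the Markov Dance
T = np.array([[1/3, 1/4, 0.0],
              [1/3, 1/2, 1/2],
              [1/3, 1/4, 1/2]])

# The steady-state vector is the eigenvector for eigenvalue 1,
# rescaled so its entries sum to one
w, v = np.linalg.eig(T)
i = np.argmin(np.abs(w - 1))   # index of the eigenvalue closest to 1
u = np.real(v[:, i])
u = u / u.sum()                # normalize (also fixes an overall sign)
# u is the steady-state vector (3/17, 8/17, 6/17)
```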