Markov Chains

Transcript

  • 1. Markov Chains
  • 2. Description Sometimes we are interested in how a random variable changes over time. The study of how a random variable evolves over time is the study of stochastic processes.
  • 3. What is a Stochastic Process?
    • Suppose we observe some characteristic of a system at discrete points in time.
    • Let X_t be the value of the system characteristic at time t. In most situations, X_t is not known with certainty before time t and may be viewed as a random variable.
    • A discrete-time stochastic process is simply a description of the relation between the random variables X_0, X_1, X_2, …
  • 4.
    • A continuous-time stochastic process is simply a stochastic process in which the state of the system can be viewed at any time, not just at discrete instants in time.
    • For example, the number of people in a supermarket t minutes after the store opens for business may be viewed as a continuous-time stochastic process.
  • 5. The Gambler’s Ruin Problem
    • At time 0, I have Rs. 2. At times 1, 2, …, I play a game in which I bet Rs. 1. With probability p I win the game, and with probability 1 − p I lose the game. My goal is to increase my capital to Rs. 4, and as soon as I do, the game is over. The game is also over if my capital is reduced to 0.
      • Let X_t represent my capital position after the time t game (if any) is played.
      • X_0, X_1, X_2, … may be viewed as a discrete-time stochastic process (a small simulation sketch follows below).
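    • A minimal simulation sketch of this process in Python (the win probability p and the seed below are illustrative assumptions; the slides leave p unspecified):

      import random

      def gamblers_ruin(p=0.6, start=2, target=4, seed=None):
          """Simulate one play of the gambler's ruin chain.
          Returns the capital positions X_0, X_1, ... until the capital
          hits 0 or the target (both are absorbing states)."""
          rng = random.Random(seed)
          path = [start]
          capital = start
          while 0 < capital < target:
              capital += 1 if rng.random() < p else -1   # win Rs. 1 w.p. p, lose w.p. 1 - p
              path.append(capital)
          return path

      print(gamblers_ruin(p=0.6, start=2, target=4, seed=1))   # one sample path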
  • 6. What is a Markov Chain?
    • One special type of discrete-time stochastic process is called a Markov Chain.
    • Definition: A discrete-time stochastic process is a Markov chain if, for t = 0, 1, 2, … and all states i_0, i_1, …, i_{t+1}, P(X_{t+1} = i_{t+1} | X_t = i_t, X_{t-1} = i_{t-1}, …, X_1 = i_1, X_0 = i_0) = P(X_{t+1} = i_{t+1} | X_t = i_t).
    • Essentially this says that the probability distribution of the state at time t+1 depends only on the state at time t (namely i_t) and does not depend on the states the chain passed through on the way to i_t.
  • 7.
    • In our study of Markov chains, we make the further assumption that for all states i and j and all t,
    • P(X_{t+1} = j | X_t = i) is independent of t.
    • This assumption allows us to write
      • P(X_{t+1} = j | X_t = i) = p_ij, where p_ij is the probability that, given the system is in state i at time t, it will be in state j at time t+1.
    • If the system moves from state i during one period to state j during the next period, we say that a transition from i to j has occurred.
  • 8.
    • The p_ij's are often referred to as the transition probabilities for the Markov chain.
    • This equation implies that the probability law relating the next period's state to the current state does not change over time.
    • It is often called the Stationary Assumption, and any Markov chain that satisfies it is called a stationary Markov chain.
    • We also must define q_i to be the probability that the chain is in state i at time 0; in other words, P(X_0 = i) = q_i.
  • 9.
    • We call the vector q = [q_1, q_2, …, q_s] the initial probability distribution for the Markov chain.
    • In most applications, the transition probabilities are displayed as an s × s transition probability matrix P, whose (i, j)th entry is p_ij; row i lists the probabilities of moving from state i to each of the s states.
  • 10. TPM: The Gambler’s Ruin Problem
    • States (capital): Rs. 0, Rs. 1, Rs. 2, Rs. 3, Rs. 4. With win probability p, the transition probability matrix is
              Rs.0   Rs.1   Rs.2   Rs.3   Rs.4
      Rs.0      1      0      0      0      0
      Rs.1    1−p      0      p      0      0
      Rs.2      0    1−p      0      p      0
      Rs.3      0      0    1−p      0      p
      Rs.4      0      0      0      0      1
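    • A short sketch that builds this matrix for a general win probability p (Python with NumPy; the function name is our own):

      import numpy as np

      def gamblers_ruin_tpm(p):
          """Transition matrix over states 0..4 (rupees of capital);
          states 0 and 4 are absorbing."""
          P = np.zeros((5, 5))
          P[0, 0] = 1.0              # ruined: stay at Rs. 0
          P[4, 4] = 1.0              # goal reached: stay at Rs. 4
          for i in (1, 2, 3):        # transient states
              P[i, i + 1] = p        # win the bet
              P[i, i - 1] = 1 - p    # lose the bet
          return P

      print(gamblers_ruin_tpm(0.5))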
  • 11.
    • For each i, the row sums of P satisfy Σ_j p_ij = 1.
    • We also know that each entry in the P matrix must be nonnegative.
    • Hence, all entries in the transition probability matrix are nonnegative, and the entries in each row must sum to 1.
  • 12. Question
    • A company has two machines. During any day, each machine that is working at the beginning of the day has a 1/3 chance of breaking down. If a machine breaks down during the day, it is sent to a repair facility and will be working two days after it breaks down. (Thus, if a machine breaks down during day 3, it will be working at the beginning of day 5.)
    • Letting the state of the system be the number of machines working at the beginning of the day, formulate a transition probability matrix for this situation.
  • 13. n -Step Transition Probabilities
    • A question of interest when studying a Markov chain is: if a Markov chain is in state i at time m, what is the probability that n periods later the Markov chain will be in state j?
    • This probability is independent of m, so we may write P(X_{m+n} = j | X_m = i) = P(X_n = j | X_0 = i) = P_ij(n), where P_ij(n) is called the n-step probability of a transition from state i to state j.
    • For n > 1, P_ij(n) is the ijth element of P^n.
  • 14. The Cola Example
    • Suppose the entire cola industry produces only two colas.
    • Given that a person last purchased cola 1, there is a 90% chance that her next purchase will be cola 1.
    • Given that a person last purchased cola 2, there is an 80% chance that her next purchase will be cola 2.
        • If a person is currently a cola 2 purchaser, what is the probability that she will purchase cola 1 two purchases from now?
        • If a person is currently a cola 1 purchaser, what is the probability that she will purchase cola 1 three purchases from now?
  • 15. The Cola Example
    • We view each person’s purchases as a Markov chain with the state at any given time being the type of cola the person last purchased.
    • Hence, each person’s cola purchases may be represented by a two-state Markov chain, where
      • State 1 = person has last purchased cola 1
      • State 2 = person has last purchased cola 2
    • If we define X_n to be the type of cola purchased by a person on her nth future cola purchase, then X_0, X_1, … may be described as the Markov chain with the following transition matrix (rows = current purchase, columns = next purchase):
                Cola 1   Cola 2
      Cola 1     0.90     0.10
      Cola 2     0.20     0.80
  • 16. The Cola Example
    • We can now answer questions 1 and 2.
    • We seek P(X_2 = 1 | X_0 = 2) = P_21(2) = element (2, 1) of P^2:
  • 17. The Cola Example
      • Hence, P_21(2) = .34. This means that the probability is .34 that, two purchases in the future, a cola 2 drinker will purchase cola 1.
      • By using basic probability theory, we may obtain this answer in a different way.
    • We seek P_11(3) = element (1, 1) of P^3:
    • Therefore, P_11(3) = .781.
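    • These numbers can be checked by raising P to a power, for example with NumPy (a small sketch; the matrix entries come from the 90%/80% loyalty figures on slide 14):

      import numpy as np

      # Rows/columns: state 1 = last bought cola 1, state 2 = last bought cola 2
      P = np.array([[0.90, 0.10],
                    [0.20, 0.80]])

      P2 = np.linalg.matrix_power(P, 2)
      P3 = np.linalg.matrix_power(P, 3)

      print(P2[1, 0])   # P_21(2) = 0.34   (index [1, 0] because NumPy is 0-based)
      print(P3[0, 0])   # P_11(3) = 0.781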
  • 18.
    • Many times we do not know the state of the Markov chain at time 0. We can then determine the probability that the system is in state j at time n by the following reasoning.
    • Probability of being in state j at time n = Σ_i q_i P_ij(n), i.e., the jth entry of the row vector q P^n, where q = [q_1, q_2, …, q_s].
    • (This is an unconditional probability.)
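    • A sketch of this unconditional calculation for the cola chain, assuming an illustrative initial distribution q (the slides do not fix one):

      import numpy as np

      P = np.array([[0.90, 0.10],
                    [0.20, 0.80]])
      q = np.array([0.4, 0.6])      # assumed initial split over {cola 1, cola 2}

      n = 3
      dist_n = q @ np.linalg.matrix_power(P, n)   # row vector q P^n
      print(dist_n)                               # unconditional P(X_n = j) for j = 1, 2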
  • 19. Limiting probabilities
    • To illustrate the behavior of the n -step transition probabilities for large values of n , we have computed several of the n -step transition probabilities for the Cola example.
    • This means that for large n, no matter what the initial state, there is a .67 chance that a person will be a cola 1 purchaser.
    • We can easily multiply matrices on a spreadsheet using the MMULT command.
  • 20. Question Find the equilibrium market shares of two firms whose probability transition matrix is as follows (rows = current firm, columns = next firm):
              A     B
        A    .7    .3
        B    .5    .5
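    • One way to compute such an equilibrium numerically (a sketch that solves π = πP together with π_A + π_B = 1, using the matrix as reconstructed above):

      import numpy as np

      def steady_state(P):
          """Solve pi = pi P with sum(pi) = 1 for an ergodic chain."""
          s = P.shape[0]
          # Stack the equations (P^T - I) pi = 0 with the normalization row.
          A = np.vstack([P.T - np.eye(s), np.ones(s)])
          b = np.zeros(s + 1)
          b[-1] = 1.0
          pi, *_ = np.linalg.lstsq(A, b, rcond=None)
          return pi

      P = np.array([[0.7, 0.3],    # firm A: stays with A w.p. .7, moves to B w.p. .3
                    [0.5, 0.5]])   # firm B: moves to A w.p. .5, stays w.p. .5
      print(steady_state(P))       # equilibrium shares [0.625, 0.375] for this matrix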
  • 21. Example
    • Suppose we have a Markov transition matrix:
              1     2     3
        1     0    1/2   1/2
        2    1/2   1/2    0
        3     0     1     0
  • 22. All states communicate with each other. Starting from 1, the chain can return to 1 in three steps via two possible routes: Route 1: 1 to 3 to 2 to 1, with probability .5 × 1 × .5 = 1/4; Route 2: 1 to 2 to 2 to 1, with probability .5 × .5 × .5 = 1/8. Hence the required probability is 1/4 + 1/8 = 3/8.
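    • The same answer falls out of the third matrix power (a sketch, using the matrix as reconstructed on the previous slide):

      import numpy as np

      P = np.array([[0.0, 0.5, 0.5],    # state 1: to 2 or 3, prob 1/2 each
                    [0.5, 0.5, 0.0],    # state 2: back to 1 or stay, prob 1/2 each
                    [0.0, 1.0, 0.0]])   # state 3: always to 2

      P3 = np.linalg.matrix_power(P, 3)
      print(P3[0, 0])    # P_11(3) = 3/8 = 0.375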
  • 23. Steady-State Probabilities
    • Steady-state probabilities are used to describe the long-run behavior of a Markov chain.
    • Theorem 1: Let P be the transition matrix for an s-state ergodic chain. Then there exists a vector π = [π_1 π_2 … π_s] such that, for every initial state i, lim_{n→∞} P_ij(n) = π_j.
  • 24.
    • Theorem 1 tells us that for any initial state i, the n-step transition probability P_ij(n) approaches π_j as n grows large.
    • The vector π = [π_1 π_2 … π_s] is often called the steady-state distribution, or equilibrium distribution, for the Markov chain.
  • 25. An Example
    • A supermarket stocks 3 brands of coffee, A, B, and C, and it has been observed that customers switch from brand to brand according to the following transition matrix (rows = current brand, columns = next brand):
              A     B     C
        A    3/4   1/4    0
        B     0    2/3   1/3
        C    1/4   1/4   1/2
    In the long run, what fraction of the customers purchase the respective brands?
  • 26. Solution
    • Since the chain is ergodic (all states communicate, and each state is recurrent and aperiodic), the steady-state distribution exists.
    • Solving π = πP gives
    • π_1 = (3/4)π_1 + (1/4)π_3
    • π_2 = (1/4)π_1 + (2/3)π_2 + (1/4)π_3
    • π_3 = (1/3)π_2 + (1/2)π_3
    • subject to π_1 + π_2 + π_3 = 1. Solving the equations gives
    • π_1 = 2/7, π_2 = 3/7, π_3 = 2/7.
  • 27. Inventory Example
    • A camera store stocks a particular model camera that can be ordered weekly. Let D_1, D_2, … represent the demand for this camera (the number of units that would be sold if the inventory is not depleted) during the first week, second week, …, respectively. It is assumed that the D_i's are independent and identically distributed random variables having a Poisson distribution with a mean of 1. Let X_0 represent the number of cameras on hand at the outset, X_1 the number of cameras on hand at the end of week 1, X_2 the number of cameras on hand at the end of week 2, and so on.
      • Assume that X_0 = 3.
      • On Saturday night the store places an order that is delivered in time for the next opening of the store on Monday.
      • The store uses the following order policy: if there are no cameras in stock, 3 cameras are ordered; otherwise, no order is placed.
      • Sales are lost when demand exceeds the inventory on hand.
  • 28. Inventory Example
    • X_t is the number of cameras in stock at the end of week t (as defined earlier), where X_t represents the state of the system at time t.
    • Given that X_t = i, X_{t+1} depends only on D_{t+1} and X_t (Markovian property).
    • D_t has a Poisson distribution with mean equal to one. This means that P(D_{t+1} = n) = e^{-1} 1^n / n! for n = 0, 1, …
    • P(D_t = 0) = e^{-1} = 0.368
    • P(D_t = 1) = e^{-1} = 0.368
    • P(D_t = 2) = (1/2)e^{-1} = 0.184
    • P(D_t ≥ 3) = 1 − P(D_t ≤ 2) = 1 − (.368 + .368 + .184) = 0.080
    • X_{t+1} = max(3 − D_{t+1}, 0) if X_t = 0, and X_{t+1} = max(X_t − D_{t+1}, 0) if X_t ≥ 1, for t = 0, 1, 2, …
  • 29. Inventory Example: (One-Step) Transition Matrix
    • P_03 = P(D_{t+1} = 0) = 0.368
    • P_02 = P(D_{t+1} = 1) = 0.368
    • P_01 = P(D_{t+1} = 2) = 0.184
    • P_00 = P(D_{t+1} ≥ 3) = 0.080
  • 30. Inventory Example: Transition Diagram (states 0, 1, 2, 3)
  • 31. Inventory Example: (One-Step) Transition Matrix
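    • A sketch that builds the full one-step matrix from the Poisson(1) demand and the order-up-to-3 policy described on slide 27:

      import numpy as np
      from math import exp, factorial

      def poisson_pmf(n, mean=1.0):
          return exp(-mean) * mean**n / factorial(n)

      P = np.zeros((4, 4))                  # states 0..3 cameras on hand at week's end
      for i in range(4):
          stock = 3 if i == 0 else i        # order up to 3 when the shelf is empty
          for j in range(4):
              if 0 < j <= stock:
                  P[i, j] = poisson_pmf(stock - j)    # demand of exactly stock - j units
              elif j == 0:
                  # demand >= stock empties the shelf (excess demand is lost)
                  P[i, 0] = 1 - sum(poisson_pmf(d) for d in range(stock))

      print(P.round(3))   # row 0 matches the P_00..P_03 values on slide 29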
  • 32. Transition Matrix: Two-Step
    • P^(2) = P · P
  • 33. Transition Matrix: Four-Step
    • P^(4) = P^(2) · P^(2)
  • 34. Transition Matrix: Eight-Step
    • P^(8) = P^(4) · P^(4)
  • 35. Steady-State Probabilities
    • The steady-state probabilities uniquely satisfy the following steady-state equations
    • π_0 = π_0 p_00 + π_1 p_10 + π_2 p_20 + π_3 p_30
    • π_1 = π_0 p_01 + π_1 p_11 + π_2 p_21 + π_3 p_31
    • π_2 = π_0 p_02 + π_1 p_12 + π_2 p_22 + π_3 p_32
    • π_3 = π_0 p_03 + π_1 p_13 + π_2 p_23 + π_3 p_33
    • 1 = π_0 + π_1 + π_2 + π_3
  • 36. Steady-State Probabilities: Inventory Example
    •  0 = .080  0 + .632  1 + .264  2 + .080  3
    •  1 = .184  0 + .368  1 + .368  2 + .184  3
    •  2 = .368  0 + .368  2 + .368  3
    •  3 = .368  0 + .368  3
    • 1 =  0 +  1 +  2 +  3
    •  0 = .286,  1 = .285,  2 = .263,  3 = .166
    • The numbers in each row of matrix P (8) match the corresponding steady-state probability
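    • A quick numerical check (a sketch, using the one-step matrix implied by the equations above): the vector satisfies π = πP, and the rows of P^(8) are close to it.

      import numpy as np

      P = np.array([[0.080, 0.184, 0.368, 0.368],
                    [0.632, 0.368, 0.000, 0.000],
                    [0.264, 0.368, 0.368, 0.000],
                    [0.080, 0.184, 0.368, 0.368]])

      pi = np.array([0.286, 0.285, 0.263, 0.166])    # values from the slide

      print(pi @ P)                                   # approximately equal to pi
      print(np.linalg.matrix_power(P, 8).round(3))    # every row is close to pi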
  • 37. Mean First Passage Times
    • For an ergodic chain, let m_ij = expected number of transitions before we first reach state j, given that we are currently in state i; m_ij is called the mean first passage time from state i to state j.
    • To see how to compute m_ij, assume we are currently in state i. Then with probability p_ij, it will take exactly one transition to go from state i to state j. For k ≠ j, we instead go with probability p_ik to state k; in this case, it will take an average of 1 + m_kj transitions to go from i to j.
  • 38.
    • This reasoning implies that m_ij = 1 + Σ_{k≠j} p_ik m_kj.
    • By solving the linear equations above, we find all the mean first passage times. It can be shown that m_ii = 1/π_i.
  • 39.
    • For the cola example, π_1 = 2/3 and π_2 = 1/3.
    • Hence, m_11 = 1/π_1 = 1.5 and m_22 = 1/π_2 = 3.
    • m_12 = 1 + p_11 m_12 = 1 + .9 m_12
    • m_21 = 1 + p_22 m_21 = 1 + .8 m_21
    • Solving these two equations yields
    • m_12 = 10 and m_21 = 5.
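    • A sketch that solves the first-passage equations m_ij = 1 + Σ_{k≠j} p_ik m_kj numerically for the cola chain:

      import numpy as np

      P = np.array([[0.9, 0.1],
                    [0.2, 0.8]])
      s = P.shape[0]
      M = np.zeros((s, s))                   # M[i, j] will hold m_ij

      for j in range(s):
          # For a fixed target j, solve m_ij = 1 + sum over k != j of p_ik * m_kj.
          Q = P.copy()
          Q[:, j] = 0.0                      # drop the k = j terms
          M[:, j] = np.linalg.solve(np.eye(s) - Q, np.ones(s))

      print(M)   # [[1.5, 10.], [5., 3.]] = [[m_11, m_12], [m_21, m_22]]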