At time 0, I have Rs. 2. At times 1, 2, …, I play a game in which I bet Rs. 1; with probability p I win the game, and with probability 1 − p I lose it. My goal is to increase my capital to Rs. 4, and as soon as I do, the game is over. The game is also over if my capital is reduced to 0.
Let X_t represent my capital position after the time-t game (if any) is played.
X_0, X_1, X_2, … may be viewed as a discrete-time stochastic process.
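A minimal simulation sketch of this betting process (the function name and the value of p used in the call are my own illustrative choices):

```python
import random

def gamblers_ruin(capital=2, target=4, p=0.5):
    """Play the game until the capital reaches 0 or the target.

    Returns the path X_0, X_1, X_2, ... of capital positions."""
    path = [capital]
    while 0 < capital < target:
        # Bet Rs. 1: win with probability p, lose with probability 1 - p.
        capital += 1 if random.random() < p else -1
        path.append(capital)
    return path

print(gamblers_ruin(p=0.4))   # one possible path, e.g. [2, 1, 2, 1, 0]
```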
One special type of discrete-time stochastic process is called a Markov Chain.
Definition: A discrete-time stochastic process is a Markov chain if, for t = 0, 1, 2, … and all states i_0, i_1, …, i_{t+1},
P(X_{t+1} = i_{t+1} | X_t = i_t, X_{t-1} = i_{t-1}, …, X_1 = i_1, X_0 = i_0) = P(X_{t+1} = i_{t+1} | X_t = i_t).
Essentially this says that the probability distribution of the state at time t+1 depends only on the state at time t (namely i_t) and does not depend on the states the chain passed through on the way to i_t.
A company has two machines. During any day, each machine that is working at the beginning of the day has a 1/3 chance of breaking down. If a machine breaks down during the day, it is sent to a repair facility and will be working again two days after it breaks down. (Thus, if a machine breaks down during day 3, it will be working at the beginning of day 5.)
Letting the state of the system be the number of machines working at the beginning of the day, formulate a transition probability matrix for this situation.
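A sketch of one way to build this matrix (this is my reading of the problem, not a worked answer from the text): a machine that breaks during day t is working again at the beginning of day t + 2, so if X_t machines are working today, tomorrow's state is 2 minus the number of breakdowns today, which is Binomial(X_t, 1/3).

```python
from math import comb
import numpy as np

q = 1 / 3                       # daily breakdown probability per working machine
P = np.zeros((3, 3))            # states 0, 1, 2 machines working at the start of a day
for i in range(3):              # i machines working today
    for b in range(i + 1):      # b of them break down (Binomial(i, q))
        P[i, 2 - b] += comb(i, b) * q**b * (1 - q)**(i - b)

print(P)
# Under this reading the rows come out as
#   [0, 0, 1], [0, 1/3, 2/3], [1/9, 4/9, 4/9]
```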
A question of interest when studying a Markov chain is: if a Markov chain is in state i at time m, what is the probability that n periods later the Markov chain will be in state j?
This probability is independent of m (assuming the chain is stationary, i.e., its transition probabilities do not change over time), so we may write P(X_{m+n} = j | X_m = i) = P(X_n = j | X_0 = i) = P_ij(n), where P_ij(n) is called the n-step probability of a transition from state i to state j.
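By the Chapman-Kolmogorov equations, P_ij(n) is the (i, j) entry of the n-th matrix power of the one-step transition matrix P, so n-step probabilities reduce to matrix multiplication. A short sketch (the 2-state matrix below is made up purely for illustration):

```python
import numpy as np

def n_step(P, n):
    """P_ij(n) for all i, j: the (i, j) entry of P raised to the n-th power."""
    return np.linalg.matrix_power(np.asarray(P, dtype=float), n)

P = [[0.9, 0.1],               # illustrative 2-state chain
     [0.5, 0.5]]
print(n_step(P, 3))            # 3-step transition probabilities
```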
All states communicate with each other. Starting from 1, the MC can return to 1 in three steps via two possible routes:
Route 1: 1 → 3 → 2 → 1, with probability 0.5 × 1 × 0.5 = 1/4
Route 2: 1 → 2 → 2 → 1, with probability 0.5 × 0.5 × 0.5 = 1/8
Hence the required probability is 1/4 + 1/8 = 3/8.
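The one-step matrix itself is not reproduced here; the routes above imply p_12 = p_13 = 0.5, p_21 = p_22 = 0.5 and p_32 = 1, and taking that reconstruction as an assumption the answer can be checked as the (1, 1) entry of P^3:

```python
import numpy as np

# Transition matrix inferred from the routes listed above (states 1, 2, 3
# mapped to indices 0, 1, 2); this reconstruction is an assumption.
P = np.array([[0.0, 0.5, 0.5],
              [0.5, 0.5, 0.0],
              [0.0, 1.0, 0.0]])

print(np.linalg.matrix_power(P, 3)[0, 0])   # 0.375 = 3/8
```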
A camera store stocks a particular model camera that can be ordered weekly. Let D_1, D_2, … represent the demand for this camera (the number of units that would be sold if the inventory is not depleted) during the first week, second week, …, respectively. It is assumed that the D_i's are independent and identically distributed random variables having a Poisson distribution with a mean of 1. Let X_0 represent the number of cameras on hand at the outset, X_1 the number of cameras on hand at the end of week 1, X_2 the number of cameras on hand at the end of week 2, and so on.
Assume that X_0 = 3.
On Saturday night the store places an order that is delivered in time for the next opening of the store on Monday.
The store uses the following order policy: if there are no cameras in stock, 3 cameras are ordered; otherwise, no order is placed.
Sales are lost when demand exceeds the inventory on hand.
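Under this policy the stock available during a week is 3 if the previous week ended with none on hand and X_{t-1} otherwise, so X_t = max(stock − D_t, 0). A sketch of the resulting transition matrix, assuming states 0, 1, 2, 3 cameras on hand at the end of a week:

```python
import numpy as np
from scipy.stats import poisson

d = poisson(mu=1)                     # weekly demand D ~ Poisson(1)
P = np.zeros((4, 4))                  # states: 0, 1, 2, 3 cameras at week's end
for i in range(4):
    s = 3 if i == 0 else i            # stock on the shelf during the week
    for j in range(1, s + 1):
        P[i, j] = d.pmf(s - j)        # exactly s - j cameras are demanded
    P[i, 0] = d.sf(s - 1)             # demand >= s empties the shelf (lost sales)

print(P.round(4))
```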
For an ergodic chain, let m_ij = expected number of transitions before we first reach state j, given that we are currently in state i; m_ij is called the mean first passage time from state i to state j.
To compute m_ij, assume we are currently in state i. With probability p_ij, it takes exactly one transition to go from state i to state j. For k ≠ j, we instead go with probability p_ik to state k, and in this case it takes an average of 1 + m_kj transitions to go from i to j. Conditioning on the first transition therefore gives m_ij = p_ij · 1 + Σ_{k≠j} p_ik (1 + m_kj) = 1 + Σ_{k≠j} p_ik m_kj.
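Collecting these equations over all starting states i gives a linear system that can be solved directly for the mean first passage times into a fixed target state j. A sketch, using as an example the two-machine matrix assumed earlier (the function and that matrix are illustrative, not from the text):

```python
import numpy as np

def mean_first_passage(P, j):
    """Solve m_ij = 1 + sum_{k != j} p_ik * m_kj for every starting state i.

    For i = j the solution is the mean recurrence time of state j."""
    P = np.asarray(P, dtype=float)
    Q = P.copy()
    Q[:, j] = 0.0                     # a jump straight into j ends the passage
    n = P.shape[0]
    return np.linalg.solve(np.eye(n) - Q, np.ones(n))

P = np.array([[0,   0,   1],          # two-machine chain assumed above
              [0,   1/3, 2/3],
              [1/9, 4/9, 4/9]])
print(mean_first_passage(P, j=0))     # mean first passage times into state 0 (no machine working)
```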