Sometimes we are interested in how a random variable changes over time. The study of how a random variable evolves over time is known as a stochastic process.
Suppose we observe some characteristic of a system at discrete points in time.
Let X t be the value of the system characteristic at time t . In most situations, X t is not known with certainty before time t and may be viewed as a random variable.
A discrete-time stochastic process is simply a description of the relation between the random variables X_0, X_1, X_2, ….
A continuous-time stochastic process is a stochastic process in which the state of the system can be observed at any time, not just at discrete instants in time.
For example, the number of people in a supermarket t minutes after the store opens for business may be viewed as a continuous-time stochastic process.
At time 0, I have Rs. 2. At times 1, 2, …, I play a game in which I bet Rs. 1: with probability p I win the game, and with probability 1 − p I lose the game. My goal is to increase my capital to Rs. 4, and as soon as I do, the game is over. The game is also over if my capital is reduced to Rs. 0.
Let X_t represent my capital position after the time-t game (if any) is played.
X_0, X_1, X_2, … may be viewed as a discrete-time stochastic process.
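This gambler's-ruin game is easy to simulate. A minimal sketch (the function name and the seeding convention are illustrative choices, not from the text):

```python
import random

def play_gamblers_ruin(p, start=2, target=4, seed=None):
    """Simulate one run of the gambler's ruin chain: bet Rs. 1 each round,
    win with probability p, stop at 0 (ruin) or at the target capital."""
    rng = random.Random(seed)
    x = start
    path = [x]          # path[t] is X_t, the capital after the time-t game
    while 0 < x < target:
        x += 1 if rng.random() < p else -1
        path.append(x)
    return path

# One realization of X_0, X_1, X_2, ... with a fair coin (p = 0.5)
print(play_gamblers_ruin(0.5, seed=42))
```

Each run produces one realization of the discrete-time stochastic process X_0, X_1, X_2, …; the game always terminates because states 0 and 4 are absorbing.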
One special type of discrete-time stochastic process is called a Markov Chain.
Definition: A discrete-time stochastic process is a Markov chain if, for t = 0, 1, 2, … and all states,
P(X_{t+1} = i_{t+1} | X_t = i_t, X_{t-1} = i_{t-1}, …, X_1 = i_1, X_0 = i_0) = P(X_{t+1} = i_{t+1} | X_t = i_t).
Essentially this says that the probability distribution of the state at time t +1 depends on the state at time t ( i t ) and does not depend on the states the chain passed through on the way to i t at time t .
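The Markov property can be checked empirically on a simulated chain: conditioning on extra history should not change the one-step transition frequencies. A minimal sketch, using a hypothetical reflecting random walk on states 0–4 (not an example from the text):

```python
import random

rng = random.Random(0)

def next_state(x):
    """Reflecting random walk on 0..4: forced moves at the ends,
    otherwise step +1 or -1 with probability 0.5 each."""
    if x == 0:
        return 1
    if x == 4:
        return 3
    return x + 1 if rng.random() < 0.5 else x - 1

path = [2]
for _ in range(200000):
    path.append(next_state(path[-1]))

# Compare P(X_{t+1} = 3 | X_t = 2) with P(X_{t+1} = 3 | X_t = 2, X_{t-1} = 1):
# the extra history should make no difference; both frequencies approach 0.5.
up_given_2 = [path[t+1] == 3 for t in range(1, len(path)-1) if path[t] == 2]
up_given_2_from_1 = [path[t+1] == 3 for t in range(1, len(path)-1)
                     if path[t] == 2 and path[t-1] == 1]
print(sum(up_given_2) / len(up_given_2),
      sum(up_given_2_from_1) / len(up_given_2_from_1))
```

Both estimates agree (up to sampling noise), illustrating that the chain forgets how it arrived at state 2.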
We call the vector q = [q_1, q_2, …, q_s] the initial probability distribution for the Markov chain.
In most applications, the transition probabilities are displayed as an s × s transition probability matrix P, whose (i, j) entry is the one-step transition probability p_ij = P(X_{t+1} = j | X_t = i); each row of P sums to 1.
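For the gambler's-ruin example above, the matrix P can be built directly from the betting rule. A sketch (ordering the states 0, 1, …, 4 is an assumed convention):

```python
def gamblers_ruin_matrix(p, target=4):
    """Transition probability matrix for the gambler's ruin chain on
    states 0, 1, ..., target; states 0 and target are absorbing."""
    s = target + 1
    P = [[0.0] * s for _ in range(s)]
    P[0][0] = 1.0            # ruined: stay at 0 forever
    P[target][target] = 1.0  # goal reached: stay at the target forever
    for i in range(1, target):
        P[i][i + 1] = p      # win the bet: capital goes up by 1
        P[i][i - 1] = 1 - p  # lose the bet: capital goes down by 1
    return P

for row in gamblers_ruin_matrix(0.6):
    print(row)
```

Every row sums to 1, as any transition probability matrix must.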
A company has two machines. During any day, each machine that is working at the beginning of the day has a 1/3 chance of breaking down. If a machine breaks down during the day, it is sent to a repair facility and will be working two days after it breaks down. (Thus, if a machine breaks down during day 3, it will be working at the beginning of day 5.)
Letting the state of the system be the number of machines working at the beginning of the day, formulate a transition probability matrix for this situation.
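One plausible derivation, under the assumption that a machine breaking during day t is unavailable only on day t + 1: from state 1 the machine in repair always returns the next day while the working one breaks with probability 1/3, from state 2 the number of breakdowns is Binomial(2, 1/3), and state 0 (both broken the same day) is always followed by state 2. The resulting matrix, written with exact fractions:

```python
from fractions import Fraction

F = Fraction
# States: number of working machines at the start of the day (0, 1, 2).
P = [
    # from state 0: both machines come back from repair tomorrow
    [F(0), F(0), F(1)],
    # from state 1: repaired machine returns; working one breaks w.p. 1/3
    [F(0), F(1, 3), F(2, 3)],
    # from state 2: each machine independently breaks w.p. 1/3
    [F(1, 9), F(4, 9), F(4, 9)],
]
for row in P:
    print([str(x) for x in row])
```

Using Fraction keeps the entries exact, so each row sums to exactly 1.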
A question of interest when studying a Markov chain is: if a Markov chain is in state i at time m, what is the probability that n periods later the Markov chain will be in state j?
This probability is independent of m, so we may write P(X_{m+n} = j | X_m = i) = P(X_n = j | X_0 = i) = P_ij(n), where P_ij(n) is called the n-step probability of a transition from state i to state j.
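The n-step transition probabilities are obtained by raising P to the n-th power: P_ij(n) is the (i, j) entry of P^n. A minimal sketch with a hypothetical two-state matrix (the values below are illustrative, not from the text):

```python
def mat_mult(A, B):
    """Multiply two square matrices given as lists of rows."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_power(P, n):
    """n-step transition matrix: P^n, starting from the identity."""
    result = [[float(i == j) for j in range(len(P))] for i in range(len(P))]
    for _ in range(n):
        result = mat_mult(result, P)
    return result

P = [[0.9, 0.1], [0.2, 0.8]]       # hypothetical two-state chain
P2 = mat_power(P, 2)
print(round(P2[0][0], 4))           # two-step probability of state 1 -> state 1
```

Here P_11(2) = .9 × .9 + .1 × .2 = .83: either stay twice, or leave and return.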
We view each person’s purchases as a Markov chain with the state at any given time being the type of cola the person last purchased.
Hence, each person’s cola purchases may be represented by a two-state Markov chain, where
State 1 = person has last purchased cola 1
State 2 = person has last purchased cola 2
If we define X n to be the type of cola purchased by a person on her n th future cola purchase, then X 0 , X 1 , … may be described as the Markov chain with the following transition matrix:
Many times we do not know the state of the Markov chain at time 0; we know only the initial distribution q = [q_1, q_2, …, q_s]. We can then determine the probability that the system is in state j at time n by the following reasoning:
the probability of being in state j at time n is the j-th entry of the row vector q P^n.
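The computation q P^n amounts to repeatedly multiplying the row vector by P. A sketch (the matrix and initial distribution here are hypothetical):

```python
def vec_mat(q, P):
    """Multiply a row vector q by a matrix P."""
    n = len(P)
    return [sum(q[i] * P[i][j] for i in range(n)) for j in range(n)]

def distribution_at_time(q, P, n):
    """State distribution at time n: the row vector q P^n."""
    for _ in range(n):
        q = vec_mat(q, P)
    return q

P = [[0.9, 0.1], [0.2, 0.8]]   # hypothetical two-state chain
q = [0.4, 0.6]                 # hypothetical initial distribution
print([round(x, 4) for x in distribution_at_time(q, P, 1)])
```

For n = 1, the first entry is q_1 p_11 + q_2 p_21 = .4 × .9 + .6 × .2 = .48, exactly the "weight each starting state by its probability" reasoning in the text.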
To illustrate the behavior of the n -step transition probabilities for large values of n , we have computed several of the n -step transition probabilities for the Cola example.
This means that for large n, no matter what the initial state, there is a .67 chance that a person will be a cola 1 purchaser.
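The cola transition matrix itself is not reproduced in the text above; the values below are assumed for illustration only, chosen so that the long-run cola 1 share matches the stated .67 figure (their stationary distribution is [2/3, 1/3]). Iterating shows both rows of P^n converging to the same limit:

```python
# Hypothetical cola matrix: stationary distribution is [2/3, 1/3] ~ [.67, .33]
P = [[0.90, 0.10],
     [0.20, 0.80]]

def step(row, P):
    """One application of the chain: row vector times P."""
    return [sum(row[i] * P[i][j] for i in range(len(P))) for j in range(len(P))]

row = [1.0, 0.0]    # start as a cola 1 purchaser
other = [0.0, 1.0]  # start as a cola 2 purchaser
for _ in range(50):
    row, other = step(row, P), step(other, P)
print([round(x, 4) for x in row], [round(x, 4) for x in other])
```

After enough steps, both starting states give the same distribution, which is the sense in which the initial state no longer matters for large n.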
We can easily multiply matrices in a spreadsheet using the MMULT function.
Question: Find the equilibrium market shares of two firms A and B whose transition probability matrix is as follows:

        A    B
   A   .5   .5
   B   .3   .7
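For a two-state chain, the equilibrium equations π = πP together with π_A + π_B = 1 reduce to π_A p_AB = π_B p_BA, so π_A = p_BA / (p_AB + p_BA). A quick check for the matrix above:

```python
P = [[0.5, 0.5],   # row A: A -> A, A -> B
     [0.3, 0.7]]   # row B: B -> A, B -> B

# Balance equation for a two-state chain: pi_A * p_AB = pi_B * p_BA
pi_A = P[1][0] / (P[0][1] + P[1][0])
pi_B = 1 - pi_A
print(round(pi_A, 3), round(pi_B, 3))
```

The equilibrium market shares come out to 3/8 for A and 5/8 for B; substituting back, π_A p_AA + π_B p_BA = .375 × .5 + .625 × .3 = .375 = π_A, confirming the distribution is stationary.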
All states communicate with each other. Starting from state 1, the Markov chain can return to state 1 in three steps via two possible routes:
Route 1: 1 → 3 → 2 → 1, with probability .5 × 1 × .5 = 1/4
Route 2: 1 → 2 → 2 → 1, with probability .5 × .5 × .5 = 1/8
Hence the required probability is 1/4 + 1/8 = 3/8.
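The same answer can be read off the three-step transition matrix P^3. The full matrix is not shown in the text; the entries below are a hypothetical reconstruction inferred from the two routes and their stated probabilities (p_12 = p_13 = .5, p_21 = p_22 = .5, p_32 = 1):

```python
def mat_mult(A, B):
    """Multiply two square matrices given as lists of rows."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# Hypothetical reconstruction of the chain from the route probabilities above
P = [[0.0, 0.5, 0.5],
     [0.5, 0.5, 0.0],
     [0.0, 1.0, 0.0]]
P3 = mat_mult(mat_mult(P, P), P)
print(P3[0][0])  # three-step return probability for state 1
```

The (1, 1) entry of P^3 is 0.375 = 3/8, matching the route-by-route count.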
A supermarket stocks 3 brands of coffee, A , B , and C, and it has been observed that customers switch from brand to brand according to the following transition matrix:
In the long run, what fraction of the customers purchase the respective brands?
A camera store stocks a particular model camera that can be ordered weekly. Let D 1 , D 2 , … represent the demand for this camera (the number of units that would be sold if the inventory is not depleted) during the first week, second week, …, respectively. It is assumed that the D i ’s are independent and identically distributed random variables having a Poisson distribution with a mean of 1. Let X 0 represent the number of cameras on hand at the outset, X 1 the number of cameras on hand at the end of week 1, X 2 the number of cameras on hand at the end of week 2, and so on.
Assume that X 0 = 3.
On Saturday night the store places an order that is delivered in time for the next opening of the store on Monday.
The store uses the following ordering policy: if there are no cameras in stock, 3 cameras are ordered; otherwise, no order is placed.
Sales are lost when demand exceeds the inventory on hand.
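Under these assumptions (Poisson demand with mean 1, order up to 3 only when the week ends with no stock, excess demand lost), the transition matrix for X_t can be sketched as follows. If X_t = 0, the Monday delivery restores the stock to 3, so X_{t+1} = max(3 − D, 0); otherwise X_{t+1} = max(X_t − D, 0):

```python
import math

def poisson_pmf(k, lam=1.0):
    """P(D = k) for Poisson demand with mean lam."""
    return math.exp(-lam) * lam ** k / math.factorial(k)

def transition_matrix(max_stock=3, lam=1.0):
    """P[i][j] = P(X_{t+1} = j | X_t = i) for the ordering policy above:
    order up to max_stock when the week ends with no stock, else order nothing."""
    P = [[0.0] * (max_stock + 1) for _ in range(max_stock + 1)]
    for i in range(max_stock + 1):
        start = max_stock if i == 0 else i  # stock after Monday's delivery
        for j in range(start + 1):
            if j == 0:
                # any demand >= start empties the shelf (excess sales lost)
                P[i][0] = 1 - sum(poisson_pmf(d, lam) for d in range(start))
            else:
                P[i][j] = poisson_pmf(start - j, lam)
    return P

for row in transition_matrix():
    print([round(x, 4) for x in row])
```

For example, from state 1 the chain stays at 1 only if demand is 0 (probability e^{-1} ≈ .368) and otherwise drops to 0; rows for states 0 and 3 are identical, since both start the week with 3 cameras on the shelf.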
For an ergodic chain, let m ij = expected number of transitions before we first reach state j , given that we are currently in state i; m ij is called the mean first passage time from state i to state j .
To compute m_ij, assume we are currently in state i. With probability p_ij, it takes exactly one transition to go from state i to state j. For k ≠ j, we next go with probability p_ik to state k, in which case it takes an average of 1 + m_kj transitions to go from i to j. Combining these cases gives m_ij = 1 + Σ_{k ≠ j} p_ik m_kj.
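For a two-state chain the recursion collapses to a closed form: m_AB = 1 + p_AA m_AB, so m_AB = 1 / p_AB (and likewise m_BA = 1 / p_BA). A quick check using the two-firm matrix with rows (.5, .5) and (.3, .7) given earlier:

```python
P = [[0.5, 0.5],   # row A: A -> A, A -> B
     [0.3, 0.7]]   # row B: B -> A, B -> B

# m_AB = 1 + p_AA * m_AB  =>  m_AB = 1 / p_AB, and symmetrically for m_BA
m_AB = 1 / P[0][1]
m_BA = 1 / P[1][0]
print(m_AB, round(m_BA, 4))
```

So it takes 2 transitions on average to reach B from A, and 10/3 ≈ 3.33 transitions to reach A from B, consistent with B being the "stickier" state (p_BB = .7).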