Lecture 12.5 – Additional Issues
Concerning Discrete-Time
Markov Chains
Topics
• Review of DTMC
• Classification of states
• Economic analysis
• First passage times
• Absorbing states
A stochastic process { Xn }, where n ∈ N = { 0, 1, 2, . . . }, is
called a discrete-time Markov chain if
Pr{ Xn+1 = j | X0 = k0, . . . , Xn-1 = kn-1, Xn = i }
= Pr{ Xn+1 = j | Xn = i }  (the transition probabilities)
for every i, j, k0, . . . , kn-1 and for every n.
The future behavior of the system depends only on the
current state i and not on any of the previous states.
Discrete-Time Markov Chain
Pr{ Xn+1 = j | Xn = i } = Pr{ X1 = j | X0 = i } for all n
(the transition probabilities do not change over time).
We will only consider stationary Markov chains.
The one-step transition matrix for a Markov chain
with states S = { 0, 1, 2 } is
where pij = Pr{ X1 = j | X0 = i }
P = [ p00  p01  p02
      p10  p11  p12
      p20  p21  p22 ]
Stationary Transition Probabilities
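Because the transition probabilities are stationary, the n-step transition probabilities are just powers of the one-step matrix P. A minimal sketch in Python/NumPy; the numeric entries below are made-up for illustration, not from the lecture:

```python
import numpy as np

# Illustrative one-step transition matrix for S = {0, 1, 2}
# (entries are hypothetical, chosen only so each row sums to 1)
P = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.4, 0.4, 0.2]])

# Each row of P is a probability distribution over the next state
assert np.allclose(P.sum(axis=1), 1.0)

# n-step transition probabilities: P(n) = P raised to the nth power
P3 = np.linalg.matrix_power(P, 3)
print(P3[0, 2])  # Pr{X3 = 2 | X0 = 0}
```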
Classification of States
Accessible: Possible to go from state i to state j (path exists in
the network from i to j).
[State-transition diagrams: (a) a birth–death chain on states 0, 1, 2, 3, 4, … with forward arcs a0, a1, a2, a3, … and backward arcs d1, d2, d3, d4, …; (b) a one-way chain 0 → 1 → 2 → 3 → … with arcs a0, a1, a2, a3.]
Two states communicate if each is accessible from the other. A
system is irreducible if all states communicate.
State i is recurrent if, after leaving it, the system is certain to
return to it at some time in the future.
If a state is not recurrent, it is transient.
Classification of States (continued)
A state is periodic if it can return to itself only after a
number of transitions that is a multiple of some fixed
integer greater than 1.
A state that is not periodic is aperiodic.
[Two state-transition diagrams: (a) a three-state cycle 0 → 1 → 2 → 0 with all arc probabilities 1, so each state is visited every 3 iterations; (b) a four-state network with two arcs of probability 0.5, so each state is visited in a multiple of 3 iterations.]
Classification of States (continued)
An absorbing state is one that the system can never leave once it enters.
[State-transition diagram: states 0–4 in a line, with winning arcs a1, a2, a3 moving right and losing arcs d1, d2, d3 moving left; states 0 and 4 have no outgoing arcs.]
This diagram might represent the wealth of a gambler who
begins with $2 and makes a series of wagers for $1 each.
Let ai be the event of winning in state i and di the event of
losing in state i.
There are two absorbing states: 0 and 4.
Classification of States (continued)
Class: set of states that communicate with each other.
A class is either all recurrent or all transient, and either all
periodic or all aperiodic.
A class is recurrent if no arcs leave it in the network diagram;
once the system enters such a class, it never leaves. A class is
transient if some arc leaves it, passing from a node in the class
to one outside; with positive probability the system exits the
class and never returns.
[Network diagram illustrating classes among states 0–6.]
Illustration of Concepts
Transition structure (X marks a positive one-step transition probability):

State    0    1    2    3
  0      0    X    0    X
  1      X    0    0    0
  2      X    0    0    0
  3      0    0    X    X
Example 1
Every pair of states communicates, forming a single
recurrent class, and the states are not periodic.
Thus the stochastic process is aperiodic and irreducible.
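The accessibility checks in these examples can be automated with a transitive closure over the X-pattern. A sketch (the Warshall-style helper is our own, applied to the Example 1 structure):

```python
import numpy as np

# Example 1 structure: A[i, j] is True where p_ij > 0
A = np.array([[0, 1, 0, 1],
              [1, 0, 0, 0],
              [1, 0, 0, 0],
              [0, 0, 1, 1]], dtype=bool)

# Warshall transitive closure: R[i, j] means j is reachable from i
# in one or more steps
R = A.copy()
n = len(A)
for k in range(n):
    for i in range(n):
        if R[i, k]:
            R[i] |= R[k]

comm = R & R.T      # i and j communicate iff each reaches the other
print(comm.all())   # True: a single class, so the chain is irreducible
```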
Illustration of Concepts
Example 2
[Transition-structure table for states 0–4 (X marks a positive one-step transition probability) and the corresponding network diagram.]
States 0 and 1 communicate and form a recurrent class.
States 3 and 4 form separate transient classes.
State 2 is an absorbing state and forms a recurrent class.
Illustration of Concepts
Example 3
Transition structure (X marks a positive one-step transition probability):

State    0    1    2    3
  0      0    0    0    X
  1      X    0    0    0
  2      X    0    0    0
  3      0    X    X    0
Every state communicates with every other state, so we
have an irreducible stochastic process.
Periodic? Yes — every path returning to a state has a length
that is a multiple of 3 — so the Markov chain is irreducible
and periodic.
Example: Classification of States
P =
        1     2     3     4     5
  1    0.4   0.6    0     0     0
  2    0.5   0.5    0     0     0
  3     0     0    0.3   0.7    0
  4     0     0    0.5   0.4   0.1
  5     0     0     0    0.8   0.2

[State-transition network with nodes 1–5 and arcs labeled by these probabilities.]
A state j is accessible from state i if pij(n) > 0 for some n > 0.
In the example, state 2 is accessible from state 1,
and state 3 is accessible from state 5,
but state 3 is not accessible from state 2.
States i and j communicate if i is accessible from j
and j is accessible from i.
States 1 & 2 communicate; also
states 3, 4 & 5 communicate.
States 2 & 4 do not communicate
States 1 & 2 form one communicating class.
States 3, 4 & 5 form a 2nd communicating class.
If all states in a Markov chain communicate
(i.e., all states are members of the same communicating class)
then the chain is irreducible.
The current example is not an irreducible Markov chain.
Neither is the Gambler’s Ruin example which
has 3 classes: {0}, {1, 2, 3} and {4}.
First Passage Times
Let fii = probability that the process will return to state i
(eventually) given that it starts in state i.
If fii = 1 then state i is called recurrent.
If fii < 1 then state i is called transient.
If pii = 1 then state i is called an absorbing state.
The example above has no absorbing states.
States 0 & 4 are absorbing in the Gambler's Ruin problem.
The period of a state i is the largest k ≥ 1 such that
every path leading back to i has a length that is
a multiple of k;
i.e., pii(n) = 0 unless n = k, 2k, 3k, . . .
If a process can be in state i at time n and also at time n + 1,
having started in state i, then state i is aperiodic.
Each of the states in the current example is aperiodic.
If all states in a Markov chain are
recurrent, aperiodic, & the chain is irreducible
then it is ergodic.
Example of Periodicity – Gambler's Ruin

        0     1     2     3     4
  0     1     0     0     0     0
  1    1-p    0     p     0     0
  2     0    1-p    0     p     0
  3     0     0    1-p    0     p
  4     0     0     0     0     1

States 1, 2 and 3 each have period 2.
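The period-2 claim can be checked numerically: the period of state i is the gcd of the return lengths n with pii(n) > 0. A sketch with a hypothetical win probability p = 0.5 (any 0 < p < 1 gives the same periods); the `period` helper is ours and uses a finite cutoff on n:

```python
from functools import reduce
from math import gcd

import numpy as np

p = 0.5  # hypothetical single-bet win probability
P = np.array([[1.0,   0.0,   0.0,   0.0, 0.0],
              [1 - p, 0.0,   p,     0.0, 0.0],
              [0.0,   1 - p, 0.0,   p,   0.0],
              [0.0,   0.0,   1 - p, 0.0, p],
              [0.0,   0.0,   0.0,   0.0, 1.0]])

def period(P, i, nmax=20):
    """gcd of return lengths n <= nmax with p_ii^(n) > 0 (nmax is a cutoff)."""
    Pn = np.eye(len(P))
    lengths = []
    for n in range(1, nmax + 1):
        Pn = Pn @ P
        if Pn[i, i] > 1e-12:
            lengths.append(n)
    return reduce(gcd, lengths)

print([period(P, i) for i in (1, 2, 3)])  # [2, 2, 2]
```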
Existence of Steady-State Probabilities
A Markov chain is ergodic if it is aperiodic and allows
the attainment of any future state from any initial state
after one or more transitions. If these conditions hold,
then

lim(n→∞) pij(n) = πj = steady-state probability for state j
For example,

P =
        1     2     3
  1    0.8    0    0.2
  2    0.4   0.3   0.3
  3     0    0.9   0.1

[State-transition network with nodes 1, 2, 3.]

Conclusion: the chain is ergodic.
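For an ergodic chain, the steady-state vector can be found directly by solving πP = π together with Σj πj = 1, rather than taking the limit of P(n). A sketch for the 3-state example:

```python
import numpy as np

P = np.array([[0.8, 0.0, 0.2],
              [0.4, 0.3, 0.3],
              [0.0, 0.9, 0.1]])

# pi P = pi  <=>  (P^T - I) pi = 0; append the normalization sum(pi) = 1
# and solve the overdetermined system in the least-squares sense.
n = len(P)
A = np.vstack([P.T - np.eye(n), np.ones(n)])
b = np.append(np.zeros(n), 1.0)
pi, *_ = np.linalg.lstsq(A, b, rcond=None)
print(pi.round(4))  # [0.5294 0.2647 0.2059]
```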
Economic Analysis
Two kinds of economic effects:
(i) those incurred when the system is in a specified state, and
(ii) those incurred when the system makes a transition from one
state to another.
The cost (profit) of being in a particular state is represented by the
m-dimensional column vector
CS = (c1S, c2S, . . . , cmS)T,
where each component ciS is the cost associated with state i.
The cost of a transition is embodied in the m × m matrix CR = (cijR),
where each component cijR specifies the cost of going from state i to
state j in a single step.
C
Expected Cost for Markov Chain
Expected cost of being in state i:

ci = ciS + Σ(j=1..m) cijR pij
Let C = (c1, . . . , cm)T,
ei = (0, . . . , 0, 1, 0, . . . , 0) be the ith row of the m × m identity
matrix, and
fn = a random variable representing the economic return
associated with the stochastic process at time n.
Property 3: Let {Xn : n = 0, 1, . . .} be a Markov chain with finite
state space S, state-transition matrix P, and expected
state cost (profit) vector C. Assuming that the process
starts in state i, the expected cost (profit) at the nth step
is given by
E[fn(Xn) | X0 = i] = ei P(n) C.
Additional Cost Results
What if the initial state is not known?
Property 5: Let {Xn : n = 0, 1, . . .} be a Markov chain with finite
state space S, state-transition matrix P, initial
probability vector q(0), and expected state cost (profit)
vector C. The expected economic return at the nth step
is given by
E[fn(Xn) | q(0)] = q(0) P(n) C.
Property 6: Let {Xn : n = 0, 1, . . .} be a Markov chain with finite
state space S, state-transition matrix P, steady-state
vector π, and expected state cost (profit) vector C. Then
the long-run average return per unit time is given by
SiS πici = πC.
An insurance company charges customers annual
premiums based on their accident history
in the following fashion:
 No accident in last 2 years: $250 annual premium
 Accidents in each of last 2 years: $800 annual premium
 Accident in only 1 of last 2 years: $400 annual premium
Historical statistics:
1. If a customer had an accident last year then they
have a 10% chance of having one this year;
2. If they had no accident last year then they have a
3% chance of having one this year.
Insurance Company Example
Problem: Find the steady-state probabilities and the long-
run average annual premium paid by a customer.
Solution approach: Construct a Markov chain with four
states: (N, N), (N, Y), (Y, N), (Y,Y) where these indicate
(accident last year, accident this year).
(N, N) (N, Y) (Y, N) (Y, Y)
(N, N) 0.97 0.03 0 0
(N, Y) 0 0 0.90 0.10
(Y, N) 0.97 0.03 0 0
(Y, Y) 0 0 0.90 0.10
P =
[State-transition network for the four states, with arc probabilities 0.97, 0.03, 0.90 and 0.10 as given in P.]
State-Transition Network for
Insurance Company
This is an ergodic Markov chain.
• All states communicate (irreducible)
• Each state is recurrent (you will return, eventually)
• Each state is aperiodic
Solving the Steady-State Equations
π(N,N) = 0.97 π(N,N) + 0.97 π(Y,N)
π(N,Y) = 0.03 π(N,N) + 0.03 π(Y,N)
π(Y,N) = 0.90 π(N,Y) + 0.90 π(Y,Y)
π(N,N) + π(N,Y) + π(Y,N) + π(Y,Y) = 1
Solution:
π(N,N) = 0.939, π(N,Y) = 0.029, π(Y,N) = 0.029, π(Y,Y) = 0.003
and the long-run average annual premium is
0.939×250 + 0.029×400 + 0.029×400 + 0.003×800 = $260.50
j = ipij, j = 0,…,m
j = 1, j  0,  j
m
i=1
m
j =1
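The same linear solve reproduces the insurance example's steady state, and combining it with Property 6 (πC) gives the long-run average premium:

```python
import numpy as np

# States in order: (N,N), (N,Y), (Y,N), (Y,Y)
P = np.array([[0.97, 0.03, 0.00, 0.00],
              [0.00, 0.00, 0.90, 0.10],
              [0.97, 0.03, 0.00, 0.00],
              [0.00, 0.00, 0.90, 0.10]])
C = np.array([250.0, 400.0, 400.0, 800.0])  # annual premium per state

# Solve pi P = pi with the normalization sum(pi) = 1
n = len(P)
A = np.vstack([P.T - np.eye(n), np.ones(n)])
b = np.append(np.zeros(n), 1.0)
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

print(pi.round(4))       # [0.9387 0.029  0.029  0.0032]
print(round(pi @ C, 2))  # 260.48, the long-run average premium
```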
Markov Chain Add-in Matrix
Transition Matrix
Matrix type: Regular (rows sum to 1); 4 recurrent states; 1 recurrent state class; 0 transient states.
State 4 0 1 2 3
Index Names (N, N) (N, Y) (Y, N) (Y, Y) Sum Status
0 (N, N) (N, N) 0.97 0.03 0 0 1 Class-1
1 (N, Y) (N, Y) 0 0 0.9 0.1 1 Class-1
2 (Y, N) (Y, N) 0.97 0.03 0 0 1 Class-1
3 (Y, Y) (Y, Y) 0 0 0.9 0.1 1 Class-1
Sum 1.94 0.06 1.8 0.2
Economic Data and Solution
Economic Data Measure: Cost
Calculate Discount Expected Transition Cost Matrix
Rate State State 0 1 2 3
0 Cost Cost (N, N) (N, Y) (Y, N) (Y, Y)
0 (N, N) 250 250 0 0 0 0
1 (N, Y) 400 400 0 0 0 0
2 (Y, N) 400 400 0 0 0 0
3 (Y, Y) 800 800 0 0 0 0
Steady State The vector shows the long run probabilities of each state.
Expected
Analysis 0 1 2 3 Cost
(N, N) (N, Y) (Y, N) (Y, Y) per period
Steady State 0.93871 0.029032 0.029032 0.003226 260.483871
Transient Analysis for Insurance Company
Transient
Analysis Average Cost 260.1622 Discounted Cost 5203.243
0 1 2 3 Step Cum. Present
(N, N) (N, Y) (Y, N) (Y, Y) Cost Cost Worth
Start Initial 0 0 0 1 0 0
1 0 0 0.9 0.1 440 440 440
2 0.873 0.027 0.09 0.01 273.05 713.05 713.05
More 3 0.93411 0.02889 0.0333 0.0037 261.3635 974.4135 974.4135
4 0.938388 0.029022 0.029331 0.003259 260.5454 1234.959 1234.959
5 0.938687 0.029032 0.029053 0.003228 260.4882 1495.447 1495.447
Chart 6 0.938708 0.029032 0.029034 0.003226 260.4842 1755.931 1755.931
7 0.93871 0.029032 0.029032 0.003226 260.4839 2016.415 2016.415
8 0.93871 0.029032 0.029032 0.003226 260.4839 2276.899 2276.899
9 0.93871 0.029032 0.029032 0.003226 260.4839 2537.383 2537.383
10 0.93871 0.029032 0.029032 0.003226 260.4839 2797.867 2797.867
First Passage Times
Let μij = expected number of steps to transition
from state i to state j.
If the probability that we will eventually visit state j
given that we start in i is less than 1, then μij = +∞.
For example, in the Gambler's Ruin problem,
μ20 = +∞ because there is a positive probability
that we will be absorbed in state 4 given that we
start in state 2 (and hence never visit state 0).
Computations for All States Recurrent
If the probability of eventually visiting state j given
that we start in i is 1, then the expected number
of steps until we first visit j is given by

μij = 1 + Σ(r≠j) pir μrj,  for i = 0, 1, . . . , m–1

(The 1 reflects that it always takes at least one step; with
probability pir we go from i to r in the first step, and from r
it takes μrj steps to reach j.)
For fixed j, we have a linear system of m equations in the m
unknowns μij, i = 0, 1, . . . , m–1.
Suppose that we start in state (N,N) and want to find
the expected number of years until we have accidents
in two consecutive years (Y,Y).
This transition will occur with probability 1, eventually.
First-Passage Analysis for Insurance Company
For convenience number the states
0 1 2 3
(N,N) (N,Y) (Y,N) (Y,Y)
Then, μ03 = 1 + p00 μ03 + p01 μ13 + p02 μ23
      μ13 = 1 + p10 μ03 + p11 μ13 + p12 μ23
      μ23 = 1 + p20 μ03 + p21 μ13 + p22 μ23

Using
      (N, N)  0.97  0.03  0     0
P =   (N, Y)  0     0     0.90  0.10
      (Y, N)  0.97  0.03  0     0
      (Y, Y)  0     0     0.90  0.10

these become
μ03 = 1 + 0.97 μ03 + 0.03 μ13
μ13 = 1 + 0.90 μ23
μ23 = 1 + 0.97 μ03 + 0.03 μ13

Solution: μ03 = 343.3, μ13 = 310, μ23 = 343.3
So, on average it takes 343.3 years to transition
from (N,N) to (Y,Y).
Note μ03 = μ23. Why? (Rows 0 and 2 of P are identical.)
Note μ13 < μ03.
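In matrix form the system is (I − Q)μ = 1, where Q is P restricted to the non-target states 0, 1, 2. A sketch of the solve:

```python
import numpy as np

P = np.array([[0.97, 0.03, 0.00, 0.00],
              [0.00, 0.00, 0.90, 0.10],
              [0.97, 0.03, 0.00, 0.00],
              [0.00, 0.00, 0.90, 0.10]])

# Transitions among states 0, 1, 2 only (everything except target state 3)
Q = P[:3, :3]

# Solve (I - Q) mu = 1 for the mean first passage times into state 3
mu = np.linalg.solve(np.eye(3) - Q, np.ones(3))
print(mu)  # approx [343.3, 310.0, 343.3]
```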
First-Passage Computations
Expected number of steps until the first passage into state 3 (Y, Y):

From:  0 (N, N)   1 (N, Y)   2 (Y, N)   3 (Y, Y)
       343.3333   310        343.3333   310
First Passage Probabilities
Game of Craps
Probability of win = Pr{ 7 or 11 } = 0.167 + 0.056 = 0.222
Probability of loss = Pr{ 2, 3, 12 } = 0.028 + 0.056 + 0.028 = 0.111
Start Win Lose P4 P5 P6 P8 P9 P10
Start 0 0.222 0.111 0.083 0.111 0.139 0.139 0.111 0.083
Win 0 1 0 0 0 0 0 0 0
Lose 0 0 1 0 0 0 0 0 0
P4 0 0.083 0.167 0.75 0 0 0 0 0
P = P5 0 0.111 0.167 0 0.722 0 0 0 0
P6 0 0.139 0.167 0 0 0.694 0 0 0
P8 0 0.139 0.167 0 0 0 0.694 0 0
P9 0 0.111 0.167 0 0 0 0 0.722 0
P10 0 0.083 0.167 0 0 0 0 0 0.75
First Passage Probabilities for Craps
Rolls Start-win Start-lose Sum Cumulative
1 0.222 0.111 0.333 0.333
2 0.077 0.111 0.188 0.522
3 0.055 0.080 0.135 0.656
4 0.039 0.057 0.097 0.753
5 0.028 0.041 0.069 0.822
6 0.020 0.030 0.050 0.872
7 0.014 0.021 0.036 0.908
8 0.010 0.015 0.026 0.933
9 0.007 0.011 0.018 0.952
10 0.005 0.008 0.013 0.965
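The table above can be reproduced by propagating the state distribution and recording the new probability mass that reaches Win and Lose at each roll. A sketch (it uses the rounded three-decimal entries of P, so the results match the table only up to rounding):

```python
import numpy as np

P = np.array([
    # Start  Win    Lose   P4     P5     P6     P8     P9     P10
    [0.000, 0.222, 0.111, 0.083, 0.111, 0.139, 0.139, 0.111, 0.083],
    [0.000, 1.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000],
    [0.000, 0.000, 1.000, 0.000, 0.000, 0.000, 0.000, 0.000, 0.000],
    [0.000, 0.083, 0.167, 0.750, 0.000, 0.000, 0.000, 0.000, 0.000],
    [0.000, 0.111, 0.167, 0.000, 0.722, 0.000, 0.000, 0.000, 0.000],
    [0.000, 0.139, 0.167, 0.000, 0.000, 0.694, 0.000, 0.000, 0.000],
    [0.000, 0.139, 0.167, 0.000, 0.000, 0.000, 0.694, 0.000, 0.000],
    [0.000, 0.111, 0.167, 0.000, 0.000, 0.000, 0.000, 0.722, 0.000],
    [0.000, 0.083, 0.167, 0.000, 0.000, 0.000, 0.000, 0.000, 0.750],
])

q = np.zeros(9)
q[0] = 1.0                        # start in "Start"
rows = []
prev_win = prev_lose = 0.0
for roll in range(1, 11):
    q = q @ P
    f_win = q[1] - prev_win       # first passage into Win at this roll
    f_lose = q[2] - prev_lose     # first passage into Lose at this roll
    prev_win, prev_lose = q[1], q[2]
    rows.append((roll, round(f_win, 3), round(f_lose, 3),
                 round(q[1] + q[2], 3)))

for r in rows:
    print(*r)   # roll, start-win, start-lose, cumulative
```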
An absorbing state is a state j with pjj = 1.
Given that we start in state i, we can calculate the
probability of being absorbed in state j.
We essentially performed this calculation for the
Gambler's Ruin problem by finding P(n) = (pij(n)) for large n.
But we can use a more efficient analysis,
like that used for calculating first passage times.
Absorbing States
Let 0, 1, . . . , k be transient states and
k + 1, . . . , m – 1 be absorbing states.
Let qij = probability of being absorbed in state j
given that we start in transient state i.
Then for each absorbing state j we have the relationship

qij = pij + Σ(r=0..k) pir qrj,  i = 0, 1, . . . , k

(either go directly to j, or go first to a transient state r and
then from r to j).
For fixed j (an absorbing state) we have k + 1 linear
equations in the k + 1 unknowns qij, i = 0, 1, . . . , k.
Suppose that we start with $2 and want to calculate the
probability of going broke, i.e., of being absorbed in state 0.
We know q00 = 1 and q40 = 0, thus
q20 = p20 + p21 q10 + p22 q20 + p23 q30 (+ p24 q40)
q10 = p10 + p11 q10 + p12 q20 + p13 q30 + 0
q30 = p30 + p31 q10 + p32 q20 + p33 q30 + 0
where
P =
0 1 2 3 4
0 1 0 0 0 0
1 1-p 0 p 0 0
2 0 1-p 0 p 0
3 0 0 1-p 0 p
4 0 0 0 0 1
Absorbing States – Gambler’s Ruin
Now we have three equations with three unknowns.
Using p = 0.75 (probability of winning a single bet)
we have
q20 = 0 + 0.25 q10 + 0.75 q30
q10 = 0.25 + 0.75 q20
q30 = 0 + 0.25 q20
Solving yields q10 = 0.325, q20 = 0.1, q30 = 0.025
(This is consistent with the values found earlier.)
Solution to Gambler’s Ruin Example
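The same matrix algebra solves the absorption equations directly: with Q the transient-to-transient block, q = (I − Q)⁻¹b, where b holds the one-step probabilities from the transient states into state 0. A sketch with p = 0.75:

```python
import numpy as np

p = 0.75  # probability of winning a single bet
P = np.array([[1.0,   0.0,   0.0,   0.0, 0.0],
              [1 - p, 0.0,   p,     0.0, 0.0],
              [0.0,   1 - p, 0.0,   p,   0.0],
              [0.0,   0.0,   1 - p, 0.0, p],
              [0.0,   0.0,   0.0,   0.0, 1.0]])

T = [1, 2, 3]                          # transient states
Q = P[np.ix_(T, T)]                    # transient-to-transient block
b = P[T, 0]                            # one-step probabilities into state 0
q = np.linalg.solve(np.eye(3) - Q, b)  # q_i0 = p_i0 + sum_r p_ir q_r0
print(q)  # approx [0.325, 0.1, 0.025]
```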
What You Should Know About
The Mathematics of DTMCs
• How to classify states.
• What an ergodic process is.
• How to perform economic analysis.
• How to compute first passage times.
• How to compute absorbing probabilities.
 
From idea to production in a day – Leveraging Azure ML and Streamlit to build...
From idea to production in a day – Leveraging Azure ML and Streamlit to build...From idea to production in a day – Leveraging Azure ML and Streamlit to build...
From idea to production in a day – Leveraging Azure ML and Streamlit to build...
 
Call Us ➥97111√47426🤳Call Girls in Aerocity (Delhi NCR)
Call Us ➥97111√47426🤳Call Girls in Aerocity (Delhi NCR)Call Us ➥97111√47426🤳Call Girls in Aerocity (Delhi NCR)
Call Us ➥97111√47426🤳Call Girls in Aerocity (Delhi NCR)
 
Saket, (-DELHI )+91-9654467111-(=)CHEAP Call Girls in Escorts Service Saket C...
Saket, (-DELHI )+91-9654467111-(=)CHEAP Call Girls in Escorts Service Saket C...Saket, (-DELHI )+91-9654467111-(=)CHEAP Call Girls in Escorts Service Saket C...
Saket, (-DELHI )+91-9654467111-(=)CHEAP Call Girls in Escorts Service Saket C...
 
High Class Call Girls Noida Sector 39 Aarushi 🔝8264348440🔝 Independent Escort...
High Class Call Girls Noida Sector 39 Aarushi 🔝8264348440🔝 Independent Escort...High Class Call Girls Noida Sector 39 Aarushi 🔝8264348440🔝 Independent Escort...
High Class Call Girls Noida Sector 39 Aarushi 🔝8264348440🔝 Independent Escort...
 
Deep Generative Learning for All - The Gen AI Hype (Spring 2024)
Deep Generative Learning for All - The Gen AI Hype (Spring 2024)Deep Generative Learning for All - The Gen AI Hype (Spring 2024)
Deep Generative Learning for All - The Gen AI Hype (Spring 2024)
 
代办国外大学文凭《原版美国UCLA文凭证书》加州大学洛杉矶分校毕业证制作成绩单修改
代办国外大学文凭《原版美国UCLA文凭证书》加州大学洛杉矶分校毕业证制作成绩单修改代办国外大学文凭《原版美国UCLA文凭证书》加州大学洛杉矶分校毕业证制作成绩单修改
代办国外大学文凭《原版美国UCLA文凭证书》加州大学洛杉矶分校毕业证制作成绩单修改
 
꧁❤ Aerocity Call Girls Service Aerocity Delhi ❤꧂ 9999965857 ☎️ Hard And Sexy ...
꧁❤ Aerocity Call Girls Service Aerocity Delhi ❤꧂ 9999965857 ☎️ Hard And Sexy ...꧁❤ Aerocity Call Girls Service Aerocity Delhi ❤꧂ 9999965857 ☎️ Hard And Sexy ...
꧁❤ Aerocity Call Girls Service Aerocity Delhi ❤꧂ 9999965857 ☎️ Hard And Sexy ...
 

markov chain.ppt

  • 1. Lecture 12.5 – Additional Issues Concerning Discrete-Time Markov Chains
    Topics
    • Review of DTMC
    • Classification of states
    • Economic analysis
    • First-time passage
    • Absorbing states
  • 2. Discrete-Time Markov Chain
    A stochastic process { Xn } where n ∈ N = { 0, 1, 2, . . . } is called a
    discrete-time Markov chain if
        Pr{ Xn+1 = j | X0 = k0, . . . , Xn-1 = kn-1, Xn = i }
          = Pr{ Xn+1 = j | Xn = i }   ← transition probabilities
    for every i, j, k0, . . . , kn-1 and for every n.
    The future behavior of the system depends only on the current state i and
    not on any of the previous states.
  • 3. Stationary Transition Probabilities
        Pr{ Xn+1 = j | Xn = i } = Pr{ X1 = j | X0 = i } for all n
    (The transition probabilities don't change over time.)
    We will only consider stationary Markov chains.
    The one-step transition matrix for a Markov chain with states
    S = { 0, 1, 2 } is
        P = | p00  p01  p02 |
            | p10  p11  p12 |
            | p20  p21  p22 |
    where pij = Pr{ X1 = j | X0 = i }.
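As a quick illustration (not part of the original slides), a minimal NumPy sketch of a one-step transition matrix. The probabilities are made-up values chosen only to be row-stochastic; because the chain is stationary, the n-step matrix is simply the matrix power P^n.

```python
import numpy as np

# A hypothetical one-step transition matrix for states S = {0, 1, 2};
# the probabilities below are illustrative, not from the lecture.
P = np.array([
    [0.5, 0.3, 0.2],   # p00, p01, p02
    [0.1, 0.6, 0.3],   # p10, p11, p12
    [0.4, 0.4, 0.2],   # p20, p21, p22
])

# A valid transition matrix is row-stochastic: each row sums to 1.
assert np.allclose(P.sum(axis=1), 1.0)

# For a stationary chain the n-step transition probabilities are P^n.
P3 = np.linalg.matrix_power(P, 3)
print(P3[0, 2])   # Pr{ X3 = 2 | X0 = 0 }
```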
  • 4. Classification of States
    Accessible: It is possible to go from state i to state j (a path exists
    in the network from i to j).
    [State-transition diagrams: a birth-death chain over states 0, 1, 2, 3,
    4, . . . with arrival rates a0, a1, a2, a3 and departure rates d1, d2,
    d3, d4.]
    Two states communicate if both are accessible from each other. A system
    is irreducible if all states communicate.
    State i is recurrent if the system will return to it at some time in the
    future after leaving it. If a state is not recurrent, it is transient.
  • 5. Classification of States (continued)
    A state is periodic if it can only return to itself after a fixed number
    of transitions greater than 1 (or a multiple of that fixed number). A
    state that is not periodic is aperiodic.
    [Diagrams: (a) a 3-cycle in which each state is visited every 3
    iterations; (b) a chain in which each state is visited in multiples of
    3 iterations.]
  • 6. Classification of States (continued)
    An absorbing state is one that locks the system in once it is entered.
    [Diagram: states 0–4 with win transitions a1, a2, a3 and loss
    transitions d1, d2, d3; states 0 and 4 have no outgoing arcs.]
    This diagram might represent the wealth of a gambler who begins with $2
    and makes a series of wagers for $1 each. Let ai be the event of winning
    in state i and di the event of losing in state i. There are two
    absorbing states: 0 and 4.
  • 7. Classification of States (continued)
    Class: a set of states that communicate with each other.
    A class is either all recurrent or all transient, and is either all
    periodic or all aperiodic.
    States in a transient class communicate only with each other, so no arcs
    enter any of the corresponding nodes in the network diagram from outside
    the class. Arcs may leave, though, passing from a node in the class to
    one outside.
    [Diagram: a 7-state network illustrating transient and recurrent
    classes.]
  • 8. Illustration of Concepts
    Example 1
        State   0   1   2   3
          0     0   X   0   X
          1     X   0   0   0
          2     X   0   0   0
          3     0   0   X   X
    Every pair of states communicates, forming a single recurrent class;
    moreover, the states are not periodic. Thus the stochastic process is
    aperiodic and irreducible.
  • 9. Illustration of Concepts
    Example 2
    [Transition table and network diagram for states 0–4.]
    States 0 and 1 communicate and form a recurrent class. States 3 and 4
    form separate transient classes. State 2 is an absorbing state and forms
    a recurrent class by itself.
  • 10. Illustration of Concepts
    Example 3
        State   0   1   2   3
          0     0   0   0   X
          1     X   0   0   0
          2     X   0   0   0
          3     0   X   X   0
    Every state communicates with every other state, so we have an
    irreducible stochastic process. Periodic? Yes: every return to a state
    takes a multiple of 3 transitions, so the Markov chain is irreducible
    and periodic.
  • 12. A state j is accessible from state i if pij(n) > 0 for some n > 0.
    In the example, state 2 is accessible from state 1 and state 3 is
    accessible from state 5, but state 3 is not accessible from state 2.
    States i and j communicate if i is accessible from j and j is accessible
    from i. States 1 and 2 communicate; also states 3, 4 and 5 communicate.
    States 2 and 4 do not communicate.
    States 1 and 2 form one communicating class. States 3, 4 and 5 form a
    second communicating class.
  • 13. If all states in a Markov chain communicate (i.e., all states are
    members of the same communicating class), then the chain is irreducible.
    The current example is not an irreducible Markov chain. Neither is the
    Gambler's Ruin example, which has 3 classes: {0}, {1, 2, 3} and {4}.
    First Passage Times
    Let fii = probability that the process will eventually return to state i
    given that it starts in state i.
    If fii = 1 then state i is called recurrent.
    If fii < 1 then state i is called transient.
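The communicating classes can be found mechanically from the transition matrix. Below is a hedged sketch (the helper `communicating_classes` is illustrative, not from the lecture): a state j is accessible from i when a path exists, and a transitive closure over the "accessible in one step" relation recovers mutual accessibility. Applied to the Gambler's Ruin chain it reproduces the classes {0}, {1, 2, 3}, {4} mentioned above.

```python
import numpy as np

def communicating_classes(P):
    """Partition states into communicating classes.

    reach[i, j] is True when j is accessible from i (some path exists);
    i and j communicate when both directions hold.
    """
    m = len(P)
    reach = np.eye(m, dtype=bool) | (np.asarray(P) > 0)
    for k in range(m):              # Floyd-Warshall-style transitive closure
        for i in range(m):
            if reach[i, k]:
                reach[i] |= reach[k]
    comm = reach & reach.T          # mutual accessibility
    classes, seen = [], set()
    for i in range(m):
        if i not in seen:
            cls = sorted(j for j in range(m) if comm[i, j])
            seen.update(cls)
            classes.append(cls)
    return classes

# Gambler's Ruin with p = 0.5: classes {0}, {1, 2, 3}, {4}.
P = [[1,   0,   0,   0,   0],
     [0.5, 0,   0.5, 0,   0],
     [0,   0.5, 0,   0.5, 0],
     [0,   0,   0.5, 0,   0.5],
     [0,   0,   0,   0,   1]]
print(communicating_classes(P))   # [[0], [1, 2, 3], [4]]
```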
  • 14. If pii = 1 then state i is called an absorbing state. The example
    above has no absorbing states; states 0 and 4 are absorbing in the
    Gambler's Ruin problem.
    The period of a state i is the smallest k > 1 such that all paths
    leading back to i have a length that is a multiple of k; i.e.,
    pii(n) = 0 unless n = k, 2k, 3k, . . .
    If a process can be in state i at time n or time n + 1 having started in
    state i, then state i is aperiodic. Each of the states in the current
    example is aperiodic.
  • 15. If all states in a Markov chain are recurrent and aperiodic, and the
    chain is irreducible, then it is ergodic.
    Example of Periodicity – Gambler's Ruin
    States 1, 2 and 3 each have period 2.
              0     1     2     3     4
        0     1     0     0     0     0
        1    1-p    0     p     0     0
        2     0    1-p    0     p     0
        3     0     0    1-p    0     p
        4     0     0     0     0     1
  • 16. Existence of Steady-State Probabilities
    A Markov chain is ergodic if it is aperiodic and allows the attainment
    of any future state from any initial state after one or more
    transitions. If these conditions hold, then
        lim (n→∞) pij(n) = πj = steady-state probability for state j
    For example,
        P = | 0.8  0    0.2 |
            | 0.4  0.3  0.3 |
            | 0    0.9  0.1 |
    [State-transition network over states 1, 2, 3.]
    Conclusion: the chain is ergodic.
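For an ergodic chain the steady-state vector can be computed by solving π = πP together with the normalization Σπj = 1. A minimal sketch, using the 3-state example matrix as reconstructed from this slide:

```python
import numpy as np

# The 3-state example matrix from this slide (reconstructed).
P = np.array([[0.8, 0.0, 0.2],
              [0.4, 0.3, 0.3],
              [0.0, 0.9, 0.1]])

# Solve pi = pi P with sum(pi) = 1 by replacing one (redundant)
# balance equation with the normalization condition.
m = P.shape[0]
A = np.vstack([(P.T - np.eye(m))[:-1], np.ones(m)])
b = np.array([0.0] * (m - 1) + [1.0])
pi = np.linalg.solve(A, b)
print(pi)                        # steady-state distribution
assert np.allclose(pi @ P, pi)   # pi is invariant under P
```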
  • 17. Economic Analysis
    Two kinds of economic effects: (i) those incurred when the system is in
    a specified state, and (ii) those incurred when the system makes a
    transition from one state to another.
    The cost (profit) of being in a particular state is represented by the
    m-dimensional column vector
        CS = (c1S, c2S, . . . , cmS)T
    where each component ciS is the cost associated with state i.
    The cost of a transition is embodied in the m × m matrix
        CR = (cijR)
    where each component cijR specifies the cost of going from state i to
    state j in a single step.
  • 18. Expected Cost for Markov Chain
    Expected cost of being in state i:
        ci = ciS + Σ(j=1..m) cijR pij
    Let C = (c1, . . . , cm)T,
    ei = (0, . . . , 0, 1, 0, . . . , 0) be the ith row of the m × m
    identity matrix, and
    fn = a random variable representing the economic return associated with
    the stochastic process at time n.
    Property 3: Let {Xn : n = 0, 1, . . .} be a Markov chain with finite
    state space S, state-transition matrix P, and expected state cost
    (profit) vector C. Assuming that the process starts in state i, the
    expected cost (profit) at the nth step is given by
        E[fn(Xn) | X0 = i] = ei P(n) C.
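A small numeric sketch of these two formulas. The two-state numbers (P, CS, CR) are made-up illustrative values, not from the lecture; the code computes the expected one-period cost vector C and then Property 3's expected cost at step n.

```python
import numpy as np

# Illustrative data (not from the lecture): 2 states, a state cost
# vector CS and a transition cost matrix CR.
P  = np.array([[0.9, 0.1],
               [0.5, 0.5]])
CS = np.array([10.0, 50.0])      # cost of being in state i
CR = np.array([[0.0, 5.0],
               [2.0, 0.0]])      # cost of a one-step transition i -> j

# Expected one-period cost of state i: c_i = cS_i + sum_j cR_ij * p_ij
C = CS + (CR * P).sum(axis=1)

# Property 3: expected cost at step n starting in state i is e_i P^n C.
n, i = 4, 0
expected_cost = np.linalg.matrix_power(P, n)[i] @ C
print(C, expected_cost)
```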
  • 19. Additional Cost Results
    What if the initial state is not known?
    Property 5: Let {Xn : n = 0, 1, . . .} be a Markov chain with finite
    state space S, state-transition matrix P, initial probability vector
    q(0), and expected state cost (profit) vector C. The expected economic
    return at the nth step is given by
        E[fn(Xn) | q(0)] = q(0) P(n) C.
    Property 6: Let {Xn : n = 0, 1, . . .} be a Markov chain with finite
    state space S, state-transition matrix P, steady-state vector π, and
    expected state cost (profit) vector C. Then the long-run average return
    per unit time is given by
        Σ(i∈S) πi ci = πC.
  • 20. Insurance Company Example
    An insurance company charges customers annual premiums based on their
    accident history in the following fashion:
    • No accident in last 2 years: $250 annual premium
    • Accidents in each of last 2 years: $800 annual premium
    • Accident in only 1 of last 2 years: $400 annual premium
    Historical statistics:
    1. If a customer had an accident last year, then they have a 10% chance
       of having one this year;
    2. If they had no accident last year, then they have a 3% chance of
       having one this year.
  • 21. Problem: Find the steady-state probabilities and the long-run
    average annual premium paid by the customer.
    Solution approach: Construct a Markov chain with four states: (N, N),
    (N, Y), (Y, N), (Y, Y), where these indicate (accident last year,
    accident this year).
                 (N, N)  (N, Y)  (Y, N)  (Y, Y)
        (N, N)    0.97    0.03    0       0
    P = (N, Y)    0       0       0.90    0.10
        (Y, N)    0.97    0.03    0       0
        (Y, Y)    0       0       0.90    0.10
  • 22. State-Transition Network for Insurance Company
    [Network diagram over states (N, N), (N, Y), (Y, N), (Y, Y) with the
    transition probabilities 0.97, 0.03, 0.90 and 0.10 on the arcs.]
    This is an ergodic Markov chain:
    • All states communicate (irreducible)
    • Each state is recurrent (you will return, eventually)
    • Each state is aperiodic
  • 23. Solving the Steady-State Equations
    In general:
        πj = Σ(i=1..m) πi pij,  j = 0, . . . , m
        Σ(j=1..m) πj = 1,  πj ≥ 0 for all j
    For the insurance chain:
        π(N,N) = 0.97 π(N,N) + 0.97 π(Y,N)
        π(N,Y) = 0.03 π(N,N) + 0.03 π(Y,N)
        π(Y,N) = 0.9 π(N,Y) + 0.9 π(Y,Y)
        π(N,N) + π(N,Y) + π(Y,N) + π(Y,Y) = 1
    Solution: π(N,N) = 0.939, π(N,Y) = 0.029, π(Y,N) = 0.029,
    π(Y,Y) = 0.003, and the long-run average annual premium is
        0.939(250) + 0.029(400) + 0.029(400) + 0.003(800) = $260.50
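The same steady-state computation can be checked numerically. A minimal sketch for the insurance chain, solving the balance equations plus normalization and then applying Property 6 for the long-run average premium:

```python
import numpy as np

# States: 0 = (N,N), 1 = (N,Y), 2 = (Y,N), 3 = (Y,Y)
P = np.array([[0.97, 0.03, 0.00, 0.00],
              [0.00, 0.00, 0.90, 0.10],
              [0.97, 0.03, 0.00, 0.00],
              [0.00, 0.00, 0.90, 0.10]])
premium = np.array([250.0, 400.0, 400.0, 800.0])

# Replace one balance equation with the normalization condition.
m = P.shape[0]
A = np.vstack([(P.T - np.eye(m))[:-1], np.ones(m)])
pi = np.linalg.solve(A, [0.0, 0.0, 0.0, 1.0])
print(pi)             # approx [0.9387, 0.0290, 0.0290, 0.0032]
print(pi @ premium)   # long-run average premium, approx 260.48
```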
  • 24. Markov Chain Add-in
    Transition Matrix: regular matrix, rows sum to 1; 4 recurrent states in
    1 recurrent-state class, 0 transient states.
      Index  Name    (N, N)  (N, Y)  (Y, N)  (Y, Y)  Sum  Status
      0      (N, N)   0.97    0.03    0       0      1    Class-1
      1      (N, Y)   0       0       0.9     0.1    1    Class-1
      2      (Y, N)   0.97    0.03    0       0      1    Class-1
      3      (Y, Y)   0       0       0.9     0.1    1    Class-1
  • 25. Economic Data and Solution
    Measure: Cost (discount rate 0); the transition cost matrix is zero.
      State    Cost
      (N, N)    250
      (N, Y)    400
      (Y, N)    400
      (Y, Y)    800
    Steady State: the vector shows the long-run probabilities of each state.
                    (N, N)    (N, Y)    (Y, N)    (Y, Y)    Cost per period
      Steady State  0.93871   0.029032  0.029032  0.003226  260.483871
  • 26. Transient Analysis for Insurance Company
    Average Cost 260.1622; Discounted Cost 5203.243
      Step     (N, N)    (N, Y)    (Y, N)    (Y, Y)    Cost      Cum. Cost  Present Worth
      Initial  0         0         0         1
      1        0         0         0.9       0.1       440       440        440
      2        0.873     0.027     0.09      0.01      273.05    713.05     713.05
      3        0.93411   0.02889   0.0333    0.0037    261.3635  974.4135   974.4135
      4        0.938388  0.029022  0.029331  0.003259  260.5454  1234.959   1234.959
      5        0.938687  0.029032  0.029053  0.003228  260.4882  1495.447   1495.447
      6        0.938708  0.029032  0.029034  0.003226  260.4842  1755.931   1755.931
      7        0.93871   0.029032  0.029032  0.003226  260.4839  2016.415   2016.415
      8        0.93871   0.029032  0.029032  0.003226  260.4839  2276.899   2276.899
      9        0.93871   0.029032  0.029032  0.003226  260.4839  2537.383   2537.383
      10       0.93871   0.029032  0.029032  0.003226  260.4839  2797.867   2797.867
  • 27. First Passage Times
    Let μij = expected number of steps to transition from state i to
    state j.
    If the probability that we will eventually visit state j, given that we
    start in i, is less than 1, then μij = +∞.
    For example, in the Gambler's Ruin problem, μ20 = +∞ because there is a
    positive probability that we will be absorbed in state 4 given that we
    start in state 2 (and hence never visit state 0).
  • 28. Computations for All States Recurrent
    If the probability of eventually visiting state j, given that we start
    in i, is 1, then the expected number of steps until we first visit j is
    given by
        μij = 1 + Σ(r ≠ j) pir μrj,  for i = 0, 1, . . . , m–1
    It will always take at least one step; we go from i to r in the first
    step with probability pir, and it takes μrj steps from r to j.
    For j fixed, we have a linear system of m equations in the m unknowns
    μij, i = 0, 1, . . . , m–1.
  • 29. First-Passage Analysis for Insurance Company
    Suppose that we start in state (N,N) and want to find the expected
    number of years until we have accidents in two consecutive years, i.e.,
    reach (Y,Y). This transition will occur with probability 1, eventually.
    For convenience number the states 0 = (N,N), 1 = (N,Y), 2 = (Y,N),
    3 = (Y,Y). Then
        μ03 = 1 + p00 μ03 + p01 μ13 + p02 μ23
        μ13 = 1 + p10 μ03 + p11 μ13 + p12 μ23
        μ23 = 1 + p20 μ03 + p21 μ13 + p22 μ23
  • 30. 03 = 1 + 0.9703 + 0.0313 13 = 1 + 0.923 23 = 1 + 0.9703 + 0.0313 (N, N) 0.97 0.03 0 0 (N, Y) 0 0 0.90 0.10 (Y, N) 0.97 0.03 0 0 (Y, Y) 0 0 0.90 0.10 Using P = So, on average it takes 343.3 years to transition from (N,N) to (Y,Y). Note, 03 = 23. Why? Note, 13 < 03. Solution: 03 = 343.3, 13 = 310, 23 = 343.3 (N, N) (N, Y) (Y, N) (Y, Y) 0 1 2 3 First-Passage Computations states
  • 31. First Passage Probabilities
    Expected number of steps until the first passage into state 3, from
    each starting state:
      From:  0 (N, N)   1 (N, Y)   2 (Y, N)   3 (Y, Y)
             343.3333   310        343.3333   310
  • 32. Game of Craps
    Probability of win = Pr{ 7 or 11 } = 0.167 + 0.056 = 0.222
    Probability of loss = Pr{ 2, 3, 12 } = 0.028 + 0.056 + 0.028 = 0.111
             Start  Win    Lose   P4     P5     P6     P8     P9     P10
      Start  0      0.222  0.111  0.083  0.111  0.139  0.139  0.111  0.083
      Win    0      1      0      0      0      0      0      0      0
      Lose   0      0      1      0      0      0      0      0      0
      P4     0      0.083  0.167  0.75   0      0      0      0      0
  P = P5     0      0.111  0.167  0      0.722  0      0      0      0
      P6     0      0.139  0.167  0      0      0.694  0      0      0
      P8     0      0.139  0.167  0      0      0      0.694  0      0
      P9     0      0.111  0.167  0      0      0      0      0.722  0
      P10    0      0.083  0.167  0      0      0      0      0      0.75
  • 33. First Passage Probabilities for Craps
      Rolls  Start-win  Start-lose  Sum    Cumulative
      1      0.222      0.111       0.333  0.333
      2      0.077      0.111       0.188  0.522
      3      0.055      0.080       0.135  0.656
      4      0.039      0.057       0.097  0.753
      5      0.028      0.041       0.069  0.822
      6      0.020      0.030       0.050  0.872
      7      0.014      0.021       0.036  0.908
      8      0.010      0.015       0.026  0.933
      9      0.007      0.011       0.018  0.952
      10     0.005      0.008       0.013  0.965
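The Start-win column can be reproduced by powering the craps transition matrix: Win is absorbing, so P^n[Start, Win] accumulates the first-passage probabilities, and differencing consecutive powers recovers Pr{first win at exactly roll n}. A quick check under the rounded three-decimal probabilities from slide 32:

```python
import numpy as np

# Craps chain: 0 = Start, 1 = Win, 2 = Lose, then point states
# for 4, 5, 6, 8, 9, 10 (rounded probabilities from the slide).
P = np.array([
    [0, 0.222, 0.111, 0.083, 0.111, 0.139, 0.139, 0.111, 0.083],
    [0, 1,     0,     0,     0,     0,     0,     0,     0    ],
    [0, 0,     1,     0,     0,     0,     0,     0,     0    ],
    [0, 0.083, 0.167, 0.75,  0,     0,     0,     0,     0    ],
    [0, 0.111, 0.167, 0,     0.722, 0,     0,     0,     0    ],
    [0, 0.139, 0.167, 0,     0,     0.694, 0,     0,     0    ],
    [0, 0.139, 0.167, 0,     0,     0,     0.694, 0,     0    ],
    [0, 0.111, 0.167, 0,     0,     0,     0,     0.722, 0    ],
    [0, 0.083, 0.167, 0,     0,     0,     0,     0,     0.75 ],
])

# Pr{first win at roll n | Start} = P^n[0, Win] - P^(n-1)[0, Win].
fs, prev, Pn = [], 0.0, np.eye(9)
for n in range(1, 5):
    Pn = Pn @ P
    fs.append(round(Pn[0, 1] - prev, 3))
    prev = Pn[0, 1]
print(fs)   # [0.222, 0.077, 0.055, 0.039] -- the Start-win column
```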
  • 34. Absorbing States
    An absorbing state is a state j with pjj = 1. Given that we start in
    state i, we can calculate the probability of being absorbed in state j.
    We essentially performed this calculation for the Gambler's Ruin problem
    by finding P(n) = (pij(n)) for large n, but we can use a more efficient
    analysis like that used for calculating first passage times.
  • 35. Let 0, 1, . . . , k be transient states and k + 1, . . . , m – 1 be
    absorbing states.
    Let qij = probability of being absorbed in state j given that we start
    in transient state i. Then for each j we have the following
    relationship:
        qij = pij + Σ(r=0..k) pir qrj,  i = 0, 1, . . . , k
    (either go directly to j, or go to a transient state r and then from r
    to j).
    For fixed j (an absorbing state) we have k + 1 linear equations in the
    k + 1 unknowns qij, i = 0, 1, . . . , k.
  • 36. Absorbing States – Gambler's Ruin
    Suppose that we start with $2 and want to calculate the probability of
    going broke, i.e., of being absorbed in state 0. We know p00 = 1 and
    p40 = 0; thus
        q20 = p20 + p21 q10 + p22 q20 + p23 q30 (+ p24 q40)
        q10 = p10 + p11 q10 + p12 q20 + p13 q30 + 0
        q30 = p30 + p31 q10 + p32 q20 + p33 q30 + 0
    where
              0     1     2     3     4
        0     1     0     0     0     0
        1    1-p    0     p     0     0
    P = 2     0    1-p    0     p     0
        3     0     0    1-p    0     p
        4     0     0     0     0     1
  • 37. Solution to Gambler's Ruin Example
    Now we have three equations with three unknowns. Using p = 0.75
    (probability of winning a single bet), we have
        q20 = 0 + 0.25 q10 + 0.75 q30
        q10 = 0.25 + 0.75 q20
        q30 = 0 + 0.25 q20
    Solving yields q10 = 0.325, q20 = 0.1, q30 = 0.025.
    (This is consistent with the values found earlier.)
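These equations have the matrix form q = R0 + Qq, where Q is the transition matrix restricted to the transient states 1, 2, 3 and R0 holds the one-step probabilities into the absorbing state 0, so q = (I − Q)^(−1) R0. A minimal sketch verifying the numbers above:

```python
import numpy as np

p = 0.75   # probability of winning a single bet

# Transient states 1, 2, 3 of the Gambler's Ruin chain.
Q = np.array([[0,     p,     0],
              [1 - p, 0,     p],
              [0,     1 - p, 0]])
R0 = np.array([1 - p, 0, 0])    # one-step probabilities into state 0

# Absorption probabilities: q = R0 + Q q  =>  (I - Q) q = R0
q = np.linalg.solve(np.eye(3) - Q, R0)
print(q)   # q10, q20, q30 = 0.325, 0.1, 0.025
```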
  • 38. What You Should Know About the Mathematics of DTMCs
    • How to classify states.
    • What an ergodic process is.
    • How to perform economic analysis.
    • How to compute first passage times.
    • How to compute absorbing probabilities.