Markov Chain Analysis
What is a Markov Model?
• In probability theory, a Markov model is a stochastic model used
to model randomly changing systems where it is assumed that future
states depend only on the present state and not on the sequence of
events that preceded it (that is, it assumes the Markov property).
Generally, this assumption enables reasoning and computation with the
model that would otherwise be intractable.
• Some examples:
– The snakes & ladders game
– A weather system
Assumptions for a Markov model
• A fixed set of states.
• Fixed transition probabilities, and the possibility of
getting from any state to any other through a series of
transitions.
• Under these assumptions, a Markov process converges to a
unique distribution over states. This means that what happens
in the long run won't depend on where the process started or
on what happened along the way.
• What happens in the long run is completely determined by
the transition probabilities – the likelihoods of moving
between the various states.
Types of Markov models & when to use which model

                       State fully observable     State partially observable
System is autonomous   Markov chain               Hidden Markov model
System is controlled   Markov decision process    Partially observable Markov
                                                  decision process

Source: Wikipedia
Markov chain
• Here the system states are fully observable and the
system is autonomous.
• The simplest of all Markov models.
• A Markov chain is a random process that undergoes
transitions from one state to another on a state space.
• It must possess the property usually characterized as
"memoryless": the probability distribution of the next
state depends only on the current state, not on the
sequence of events that preceded it.
• Also note that we treat time as moving in discrete steps.
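The memoryless stepping described above can be sketched in a few lines of Python (an illustrative sketch, not part of the slides; the two-state transition probabilities are the weather values used in the example that follows):

```python
import random

# Two-state chain; transition probabilities taken from the weather
# example below (states: rain / no rain).
P = {
    "rain":    {"rain": 0.4, "no rain": 0.6},
    "no rain": {"rain": 0.2, "no rain": 0.8},
}

def step(state, rng):
    """Sample the next state: it depends only on the current state."""
    return rng.choices(list(P[state]), weights=list(P[state].values()))[0]

def simulate(start, n_steps, seed=0):
    """Run the chain for n_steps discrete time steps."""
    rng = random.Random(seed)
    states = [start]
    for _ in range(n_steps):
        states.append(step(states[-1], rng))
    return states

print(simulate("rain", 5))
```

Note that `step` never looks at the history, only at the current state: that is exactly the Markov property.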
Let's try to understand Markov chains through a very simple example
• Weather:
– raining today → 40% rain tomorrow, 60% no rain tomorrow
– not raining today → 20% rain tomorrow, 80% no rain tomorrow

Stochastic finite state machine (edge labels are transition probabilities):
rain → rain 0.4, rain → no rain 0.6, no rain → rain 0.2, no rain → no rain 0.8
Markov Process
Simple Example

Weather:
• raining today → 40% rain tomorrow, 60% no rain tomorrow
• not raining today → 20% rain tomorrow, 80% no rain tomorrow

The transition matrix (rows: today, columns: tomorrow):

           Rain   No rain
Rain       0.4    0.6
No rain    0.2    0.8

• Stochastic matrix: rows sum up to 1.
• Doubly stochastic matrix: rows and columns sum up to 1.
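The matrix and the row-sum property can be sanity-checked in Python (a small sketch using NumPy; labels follow the slide):

```python
import numpy as np

# Rows: today's weather, columns: tomorrow's (order: rain, no rain)
P = np.array([[0.4, 0.6],
              [0.2, 0.8]])

# Stochastic matrix: every row sums to 1
print(P.sum(axis=1))   # row sums

# This particular P is NOT doubly stochastic: its columns do not sum to 1
print(P.sum(axis=0))   # column sums
```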
Markov Process
• Markov property: X_{t+1}, the state of the system at time t+1, depends
only on the state of the system at time t:

  Pr[X_{t+1} = x_{t+1} | X_1 = x_1, X_2 = x_2, ..., X_t = x_t]
      = Pr[X_{t+1} = x_{t+1} | X_t = x_t]

  X1 → X2 → X3 → X4 → X5

• Stationary assumption: transition probabilities are independent of
time t:

  Pr[X_{t+1} = b | X_t = a] = p_ab

Let X_i be the weather of day i, 1 <= i <= t. By the Markov property,
the distribution of X_{t+1} is determined by X_t alone.
Markov Process
Gambler's Example
– The gambler starts with $10 (the initial state).
– At each play, one of the following happens:
  • the gambler wins $1 with probability p
  • the gambler loses $1 with probability 1-p
– The game ends when the gambler goes broke ($0) or gains a fortune
of $100. (Both 0 and 100 are absorbing states.)

State diagram: states 0, 1, 2, ..., 99, 100; from each interior state
the chain moves right with probability p and left with probability 1-p;
start at state 10 ($10).
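The gambler's chain is easy to simulate directly (an illustrative sketch, not from the slides):

```python
import random

def gamblers_ruin(start=10, goal=100, p=0.5, seed=42):
    """Play $1 rounds until the gambler hits 0 (broke) or `goal` (fortune).

    0 and `goal` are absorbing: once reached, the game ends.
    Returns the absorbing state that was reached.
    """
    rng = random.Random(seed)
    fortune = start
    while 0 < fortune < goal:
        fortune += 1 if rng.random() < p else -1
    return fortune

print(gamblers_ruin())   # ends at either 0 or 100
```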
Markov Process
• A Markov process is described by a stochastic FSM.
• A Markov chain is a random walk on this graph (a distribution
over paths).
• The edge weights give us the one-step probabilities
Pr[X_{t+1} = b | X_t = a] = p_ab.
• We can ask more complex questions, like:
Pr[X_{t+2} = b | X_t = a] = ?

(Same state diagram as above: states 0 to 100, right moves with
probability p, left with probability 1-p, starting at $10.)
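The two-step question has a clean answer: Pr[X_{t+2} = b | X_t = a] is the (a, b) entry of P^2. A sketch on a scaled-down gambler's chain (goal of $4 rather than $100, and p = 0.3; both values are chosen here just to keep the matrix readable):

```python
import numpy as np

def gambler_matrix(goal=4, p=0.3):
    """Transition matrix of the gambler's chain on states 0..goal.

    0 and goal are absorbing; interior states move up with
    probability p and down with probability 1 - p.
    """
    P = np.zeros((goal + 1, goal + 1))
    P[0, 0] = P[goal, goal] = 1.0
    for a in range(1, goal):
        P[a, a + 1] = p
        P[a, a - 1] = 1 - p
    return P

P = gambler_matrix()
P2 = P @ P       # two-step transition probabilities

# e.g. Pr[X_{t+2} = 4 | X_t = 2]: winning twice in a row = p * p
print(P2[2, 4])  # -> 0.09
```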
11
• Given that a person’s last cola purchase was Coke,
there is a 90% chance that his next cola purchase will
also be Coke.
• If a person’s last cola purchase was Pepsi, there is
an 80% chance that his next cola purchase will also be
Pepsi.
coke pepsi
0.10.9 0.8
0.2
Markov Process
Coke vs. Pepsi Example







8.02.0
1.09.0
P
transition matrix:
coke pepsi
coke
pepsi







8.02.0
1.09.0
P
Markov Process
Coke vs. Pepsi Example (cont.)
Given that a person is currently a Pepsi purchaser, what is the
probability that he will purchase Coke two purchases from now?

Pepsi → ? → Coke

Pr[Pepsi → ? → Coke] = Pr[Pepsi → Coke → Coke] + Pr[Pepsi → Pepsi → Coke]
                     = 0.2 * 0.9 + 0.8 * 0.2 = 0.34

Equivalently, this is the (Pepsi, Coke) entry of the two-step matrix:

P^2 = [0.9 0.1; 0.2 0.8]^2 = [0.83 0.17; 0.34 0.66]
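The slide's arithmetic can be checked directly (a quick verification sketch in Python, not part of the original deck):

```python
import numpy as np

# Transition matrix from the slide (rows/cols: Coke, Pepsi)
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])

# Enumerating the two paths by hand, as on the slide:
by_hand = 0.2 * 0.9 + 0.8 * 0.2   # Pepsi->Coke->Coke + Pepsi->Pepsi->Coke

# The same number is the (Pepsi, Coke) entry of P squared:
P2 = P @ P
print(by_hand, P2[1, 0])   # both 0.34
```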
Markov Process
Coke vs. Pepsi Example (cont.)
Given that a person is currently a Coke purchaser, what is the
probability that he will buy Pepsi at the third purchase from now?

P^3 = P^2 * P = [0.83 0.17; 0.34 0.66] * [0.9 0.1; 0.2 0.8]
    = [0.781 0.219; 0.438 0.562]

The answer is the (Coke, Pepsi) entry of P^3: 0.219.
Markov Process
Coke vs. Pepsi Example (cont.)
• Assume each person makes one cola purchase per week.
• Suppose 60% of all people now drink Coke, and 40% drink Pepsi.
• What fraction of people will be drinking Coke three weeks from now?

P = [0.9 0.1; 0.2 0.8],  P^3 = [0.781 0.219; 0.438 0.562]

Pr[X3 = Coke] = 0.6 * 0.781 + 0.4 * 0.438 = 0.6438

Let Qi be the distribution in week i:
Q0 = (0.6, 0.4) – the initial distribution
Q3 = Q0 * P^3 = (0.6438, 0.3562)
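Both three-step results above can be checked with a few lines of NumPy (a verification sketch):

```python
import numpy as np

P = np.array([[0.9, 0.1],    # rows/cols: Coke, Pepsi
              [0.2, 0.8]])
P3 = np.linalg.matrix_power(P, 3)

# (Coke, Pepsi) entry: a Coke drinker buys Pepsi three purchases from now
print(P3[0, 1])   # -> 0.219

# Population mix three weeks out: Q3 = Q0 * P^3
Q0 = np.array([0.6, 0.4])
Q3 = Q0 @ P3
print(Q3)         # -> approximately [0.6438 0.3562]
```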
Markov Process
Coke vs. Pepsi Example (cont.)
Simulation: plotting Pr[Xi = Coke] against the week i shows it
converging to 2/3, regardless of the starting distribution.

Stationary distribution: (2/3, 1/3), since it satisfies

(2/3, 1/3) * [0.9 0.1; 0.2 0.8] = (2/3, 1/3)
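The convergence shown in the simulation can be reproduced with power iteration (an illustrative sketch; 100 iterations is far more than this 2-state chain needs):

```python
import numpy as np

P = np.array([[0.9, 0.1],    # rows/cols: Coke, Pepsi
              [0.2, 0.8]])

# Power iteration: repeatedly applying P to ANY starting distribution
# converges to the stationary distribution.
q = np.array([1.0, 0.0])     # start with a 100% Coke population
for _ in range(100):
    q = q @ P
print(q)                     # -> approximately [0.6667 0.3333]

# (2/3, 1/3) is stationary: it is unchanged by one more step.
pi = np.array([2/3, 1/3])
print(pi @ P)
```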
Supervised vs Unsupervised
 Decision tree learning is "supervised learning", since we
know the correct output for each example.
 Learning based on Markov chains is "unsupervised learning",
since we don't know which "next letter" (next state) output
is the correct one.
Implementation Using R
 msm (Jackson 2011): handles multi-state models for
panel data;
 mcmcR (Geyer and Johnson 2013): implements the
Markov chain Monte Carlo approach;
 hmm (Himmelmann and www.linhi.com 2010): fits
hidden Markov models with covariates;
 mstate (de Wreede, Fiocco, and Putter 2011): fits
multi-state models based on Markov chains for
survival analysis;
 markovchain: classes and methods for creating and
analysing discrete-time Markov chains.
Implementation using R
 Example 1: Weather Prediction
The Land of Oz is acknowledged not to have ideal
weather conditions at all: the weather is very often snowy
or rainy and, moreover, there are never two nice days in a
row. Consider three weather states: rainy, nice and snowy.
Given that today is a nice day, the corresponding
stochastic row vector is w0 = (0, 1, 0); compute the
forecast after 1, 2 and 3 days.
Solution: please refer to solution.R (attached).
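The attached solution.R is not reproduced here, but the computation itself is easy to sketch. Assuming the classic Land of Oz transition matrix (an assumption on our part; the slide defers the matrix to solution.R), the n-day forecast is simply w0 * P^n. A Python equivalent:

```python
import numpy as np

# Classic Land of Oz matrix (assumed; states in order: rainy, nice, snowy).
# Note the 0 in the middle row: a nice day is never followed by another
# nice day, matching the description in the slide.
P = np.array([[0.50, 0.25, 0.25],
              [0.50, 0.00, 0.50],
              [0.25, 0.25, 0.50]])

w0 = np.array([0.0, 1.0, 0.0])          # today is a nice day

forecasts = {n: w0 @ np.linalg.matrix_power(P, n) for n in (1, 2, 3)}
for n, wn in forecasts.items():
    print(f"forecast after {n} day(s):", wn.round(4))
```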
Source
 Slideshare
 Wikipedia
 Google