MARKOV PROCESSES
• Trials of the system: Events/ Points of Time
• State of the System
• Transition Probability
• State Probability
• Steady State probability
• Absorbing State
• Fundamental Matrix
Markov Processes
• Trials of the Process: The events that trigger transitions of the system from one
state to another. In many applications successive time periods represent the trials
of the process.
• State of the System: The condition of the system at any particular trial or time
period.
• Transition Probability: Given the system is in state i during one period, the
transition probability pij is the probability that the system will be in state j during
the next period.
• State Probability: The probability that the system will be in any particular
state. (That is, πi(n) is the probability of the system being in state i in period n.)
• Steady State Probability: The probability that the system will be in any
particular state after a large number of transitions. Once steady state has been
reached, the state probabilities do not change from period to period.
MARKOV PROCESSES
• The Evolution of Systems over Repeated Trials
• State of the System Cannot be Determined with Certainty
• Transition Probabilities are Used
• Probability of the System being in a particular State at a time
MARKOV PROCESSES
• Maintenance- M/C Failure - Replacement-
Inspection Strategy.
• Brand Switching- Duration
• Market Share
• Disbursement Analysis
• A/C Receivable Analysis
MARKOV PROCESSES
MARKOV CHAIN
Probability of being in a particular state at
any time period depends only on the state
in the immediately preceding time period.
---------------------------------------------------
MARKOV PROCESSES
A. MARKET SHARE:
Trials of the Process: Shopping Trips (Daily/Weekly/Monthly)
State of the System: Store Selected in a given period.
Two Shopping Alternatives- Two States (Finite)
State 1. Customer Shops at ABC
State 2. Customer Shops at XYZ.
The System is in State 2 at Trial 4 =>
MARKOV PROCESSES
MARKET RESEARCH:
Data - 100 Shoppers over 30 days.
Probability of selecting a store (State) in a given
period in terms of the store (state) that was selected
during the previous period.
Of all customers who shopped at ABC in a day,
90% shopped at ABC the following day while 10%
switched to XYZ.
Similarly, of all customers who shopped at XYZ in a day,
80% shopped at XYZ the following day and 20% switched to ABC.
Transition from a state in a given period to another
state in the following period.
MARKOV PROCESSES
TRANSITION PROBABILITIES:
                      Next Period
  Current Period    ABC      XYZ
  ABC               0.9      0.1
  XYZ               0.2      0.8
pij = Probability of making a transition
from state i in a given period to state j in
the next period.
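The table above can be checked with a short sketch (plain Python; the variable names are ours, not from the slides): each row of the transition matrix must sum to 1, since a customer must end up in some state next period.

```python
# Illustrative sketch (names are ours): the ABC/XYZ transition matrix,
# with a check that each row sums to 1 -- every customer must be in
# some state next period.
P = [
    [0.9, 0.1],  # currently at ABC: stay 0.9, switch to XYZ 0.1
    [0.2, 0.8],  # currently at XYZ: switch to ABC 0.2, stay 0.8
]

for i, row in enumerate(P, start=1):
    assert abs(sum(row) - 1.0) < 1e-9, f"row {i} of P does not sum to 1"

print(P[0][1])  # p12 = 0.1, probability of switching from ABC to XYZ
```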
MARKOV PROCESSES
P = [p11 p12] = [0.9 0.1]
    [p21 p22]   [0.2 0.8]
(The slide also shows a tree diagram of these transitions:
from ABC, stay at ABC with 0.9 or move to XYZ with 0.1;
from XYZ, move to ABC with 0.2 or stay at XYZ with 0.8.)
MARKOV PROCESSES
πi(n) = Probability that the System is in
State ‘i’ in period ‘n’. (State Probability)
i = state; n = number of transitions.
[π1(0) π2(0)] = [1 0] (start at ABC) or [0 1] (start at XYZ)
MARKOV PROCESSES
π(n) = [π1(n) π2(n)] : Vector of the state
probabilities of the system in period n.
State probabilities for period n+1:
π(next period) = π(current period) P
π(n+1) = π(n) P
π(0) = [1 0]
therefore, π(1) = π(0) P (Period 1)
MARKOV PROCESSES
π(1) = π(0) P (Period 1)
or [π1(1) π2(1)] = [π1(0) π2(0)] [p11 p12]
                                 [p21 p22]
                 = [1 0] [0.9 0.1]
                         [0.2 0.8]
                 = [0.9 0.1]
π1(1) = 0.9 ; π2(1) = 0.1
MARKOV PROCESSES
π(2) = π(1) P (Period 2)
or [π1(2) π2(2)] = [π1(1) π2(1)] [p11 p12]
                                 [p21 p22]
                 = [.9 .1] [0.9 0.1]
                           [0.2 0.8]
                 = [0.83 0.17]
π1(2) = 0.83 ; π2(2) = 0.17
MARKOV PROCESSES
π(3) = π(2) P (Period 3)
π(4) = π(3) P (Period 4)
……………………………….
π(n+1) = π(n) P
State Probabilities for Future Periods (start at ABC)
Period (n):   0    1    2     3     4     5     6
π1(n):        1   .9   .83  .781  .747  .723  .706
π2(n):        0   .1   .17  .219  .253  .277  .294
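The table above can be reproduced by iterating the recursion π(n+1) = π(n) P; a minimal sketch in plain Python (the helper name is ours):

```python
# Sketch of pi(n+1) = pi(n) P, starting from pi(0) = [1, 0]
# (customer shops at ABC in period 0).
P = [[0.9, 0.1],
     [0.2, 0.8]]

def step(pi, P):
    """One period: multiply the row vector pi by the transition matrix P."""
    return [sum(pi[i] * P[i][j] for i in range(2)) for j in range(2)]

pi = [1.0, 0.0]           # pi(0)
history = [pi]
for _ in range(10):       # periods 1 through 10
    pi = step(pi, P)
    history.append(pi)

print([round(p, 3) for p in history[2]])   # pi(2):  [0.83, 0.17]
print([round(p, 3) for p in history[10]])  # pi(10): [0.676, 0.324]
```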
MARKOV PROCESSES
State Probabilities for Future Periods (start at ABC)
Period (n):   7     8     9     10
π1(n):      .694  .686  .680  .676
π2(n):      .306  .314  .320  .324
MARKOV PROCESSES
π1(10) = 0.676
π2(10) = 0.324
Starting from XYZ, π(0) = [0 1]:
π1(n):  0   .2   .34  ……  0.648
π2(n):  1   .8   .66  ……  0.352
MARKOV PROCESSES
Steady State Probabilities:
The probabilities after a large number of
transitions (independent of the beginning
state of the system).
π1 = Steady state probability for state 1.
π2 = Steady state probability for state 2.
[π1(n+1) π2(n+1)] = [π1(n) π2(n)] [p11 p12]
                                  [p21 p22]
                  = [π1(n) π2(n)] [0.9 0.1]
                                  [0.2 0.8]
MARKOV PROCESSES
π1 = 0.9 π1 + 0.2 π2
π2 = 0.1 π1 + 0.8 π2
π1 + π2 = 1, so π2 = (1 - π1)
π1 = 0.9 π1 + 0.2 (1 - π1)
   = 0.9 π1 + 0.2 - 0.2 π1
so 0.3 π1 = 0.2,
or π1 = 2/3 = 0.667; π2 = 1/3 = 0.333
Of 1000 Customers:
ABC = 667
XYZ = 333
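The algebra above can be checked numerically: for any 2-state chain, substituting π2 = 1 − π1 into the first steady-state equation gives the closed form π1 = p21 / (1 − p11 + p21). A sketch (our formulation, not the slides'):

```python
# Sketch: steady state of the 2-state chain. From pi1 = p11*pi1 + p21*pi2
# and pi2 = 1 - pi1, we get (1 - p11 + p21) * pi1 = p21.
p11, p21 = 0.9, 0.2
pi1 = p21 / (1 - p11 + p21)   # 0.2 / 0.3 = 2/3
pi2 = 1 - pi1
print(round(pi1, 3), round(pi2, 3))  # 0.667 0.333
print(round(1000 * pi1))             # 667 of 1000 customers at ABC
```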
MARKOV PROCESSES
XYZ -- Advertisements: new P = [0.85 0.15]
                               [0.20 0.80]
π1 = 0.57
π2 = 0.43
Profit/ Customer = Rs. ------
Cost of Promotion = Rs. -----
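The 0.57/0.43 split follows from the same 2-state closed form applied to the post-advertising matrix (a sketch under that assumption):

```python
# Sketch: steady state under the new matrix [0.85 0.15; 0.20 0.80],
# using pi1 = p21 / (1 - p11 + p21) for a 2-state chain.
p11, p21 = 0.85, 0.20
pi1 = p21 / (1 - p11 + p21)   # 0.20 / 0.35 = 4/7
pi2 = 1 - pi1
print(round(pi1, 2), round(pi2, 2))  # 0.57 0.43
```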
MARKOV PROCESSES
B. ACCOUNTS RECEIVABLE ANALYSIS:
Two aging categories:
(i) 0-3 months (ii) 4-6 months
Bad Debt > 6 months.
As of March 31: Rs. 5000 in A/C receivable.
Date Rupees
December 10 2000
February 10 1000
March 18 500
March 30 1500
How much bad debt? 31 March
Total Balance Method.
MARKOV PROCESSES
State Category
State 1. Paid
State 2. Bad Debt
State 3. 0-3 months
State 4. 4-6 months
MARKOV PROCESSES
Transition Probabilities:
pij = probability of a Rs. in State i in one
month moving to State j in the next month.
    [p11 p12 p13 p14]   [ 1   0   0   0 ]
P = [p21 p22 p23 p24] = [ 0   1   0   0 ]
    [p31 p32 p33 p34]   [ .4  0   .3  .3]
    [p41 p42 p43 p44]   [ .4  .2  .3  .1]
MARKOV PROCESSES
Absorbing State:
The probability of a transition to any other state
is 0. (The system remains in that state indefinitely.)
Steady state probabilities are not computed; instead,
compute the absorbing state probabilities for the
Rs. in the 0-3 and 4-6 month categories.
MARKOV PROCESSES
Fundamental Matrix:
Partition P into 4 parts:
    [ 1   0  |  0   0 ]
    [ 0   1  |  0   0 ]        [ I | 0 ]
P = [----------------- ]  =    [---+---]
    [ .4  0  | .3  .3 ]        [ R | Q ]
    [ .4  .2 | .3  .1 ]
N = Fundamental Matrix = (I - Q)^-1
MATRIX INVERSION (2 x 2)
A = [ 0.7  -0.3]
    [-0.3   0.9]
d = (0.7)(0.9) - (-0.3)(-0.3) = 0.63 - 0.09 = 0.54
A^-1 = [0.9/0.54  0.3/0.54] = [1.67  0.56]
       [0.3/0.54  0.7/0.54]   [0.56  1.30]
In general, if A = [a11 a12]  with d = a11 a22 - a21 a12,
                   [a21 a22]
then A^-1 = [ a22/d  -a12/d]
            [-a21/d   a11/d]
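The 2 x 2 inverse formula translates directly into code; a sketch (the function name `inv2` is ours):

```python
# Sketch of the 2x2 inverse: A^-1 = [a22 -a12; -a21 a11] / d,
# where d = a11*a22 - a21*a12 is the determinant.
def inv2(A):
    (a11, a12), (a21, a22) = A
    d = a11 * a22 - a21 * a12
    return [[ a22 / d, -a12 / d],
            [-a21 / d,  a11 / d]]

A = [[0.7, -0.3],
     [-0.3, 0.9]]
print([[round(x, 2) for x in row] for row in inv2(A)])
# [[1.67, 0.56], [0.56, 1.3]]
```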
MARKOV PROCESSES
I - Q = [ 0.7  -0.3]
        [-0.3   0.9]
N = (I - Q)^-1 = [1.67  0.56]
                 [0.56  1.30]
NR = [1.67  0.56] [0.4  0.0] = [0.89  0.11]
     [0.56  1.30] [0.4  0.2]   [0.74  0.26]
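The computation above can be sketched end to end in plain Python (helper names are ours). Note that the slides' figures round N to two decimals; exact arithmetic gives the same NR to two decimals.

```python
# Sketch of N = (I - Q)^-1 and NR for the accounts-receivable chain.
def inv2(A):
    (a11, a12), (a21, a22) = A
    d = a11 * a22 - a21 * a12
    return [[a22 / d, -a12 / d], [-a21 / d, a11 / d]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

Q = [[0.3, 0.3], [0.3, 0.1]]    # among non-absorbing states (0-3m, 4-6m)
R = [[0.4, 0.0], [0.4, 0.2]]    # into the absorbing states (Paid, Bad Debt)
IQ = [[1 - 0.3, -0.3], [-0.3, 1 - 0.1]]   # I - Q
NR = matmul(inv2(IQ), R)
print([[round(x, 2) for x in row] for row in NR])
# [[0.89, 0.11], [0.74, 0.26]]
```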
MARKOV PROCESSES
=> Probability that A/C receivable in states
3 or 4 will eventually reach each of the
absorbing states.
Let B = [b1 b2], where b1 = Rs. in the (0-3) month
category and b2 = Rs. in the (4-6) month category.
If b1 = Rs. 3000 and b2 = Rs. 2000:
B NR = [3000 2000] [0.89  0.11] = [4150  850]
                   [0.74  0.26]
Collected: Rs. 4150; Bad Debt: Rs. 850
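The B NR split can be checked the same way (helper names are ours). Exact arithmetic gives roughly Rs. 4148 / Rs. 852; the slides' Rs. 4150 / Rs. 850 come from rounding NR to two decimals before multiplying.

```python
# Sketch: B (N R) splits current receivables into eventual
# collected vs. bad-debt amounts.
def inv2(A):
    (a11, a12), (a21, a22) = A
    d = a11 * a22 - a21 * a12
    return [[a22 / d, -a12 / d], [-a21 / d, a11 / d]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

Q = [[0.3, 0.3], [0.3, 0.1]]
R = [[0.4, 0.0], [0.4, 0.2]]
IQ = [[0.7, -0.3], [-0.3, 0.9]]
NR = matmul(inv2(IQ), R)

B = [[3000.0, 2000.0]]          # Rs. in 0-3 month and 4-6 month categories
collected, bad = matmul(B, NR)[0]
print(round(collected), round(bad))  # 4148 852 (exact arithmetic)
```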
MARKOV PROCESSES
Credit Policy --- discount for prompt payment
        [ 1   0   0   0 ]
New P = [ 0   1   0   0 ]
        [ .6  0   .3  .1]
        [ .4  .2  .3  .1]
N = [1.5  0.17]      NR = [.97  .03]
    [0.5  1.17]           [.77  .23]
B NR = [3000 2000] [.97  .03]
                   [.77  .23]
MARKOV PROCESSES
= [4450  550]
Collected: Rs. 4450; Bad Debt: Rs. 550
Bad debt falls from Rs. 850 to Rs. 550, a reduction of
Rs. 300, i.e. 6% of the Rs. 5000 receivable (to be
weighed against the costs of the discounts).
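The two policies can be compared with the same pipeline (a sketch; helper names are ours). Exact arithmetic gives a reduction of about Rs. 285; the slides' Rs. 300 / 6% figure comes from rounding NR to two decimals first.

```python
# Sketch: bad debt under the old and new credit policies.
def inv2(A):
    (a11, a12), (a21, a22) = A
    d = a11 * a22 - a21 * a12
    return [[a22 / d, -a12 / d], [-a21 / d, a11 / d]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def bad_debt(Q, R):
    """Eventual bad debt for B = [3000, 2000] under partition (Q, R)."""
    IQ = [[(1.0 if i == j else 0.0) - Q[i][j] for j in range(2)]
          for i in range(2)]
    return matmul([[3000.0, 2000.0]], matmul(inv2(IQ), R))[0][1]

old = bad_debt([[0.3, 0.3], [0.3, 0.1]], [[0.4, 0.0], [0.4, 0.2]])
new = bad_debt([[0.3, 0.1], [0.3, 0.1]], [[0.6, 0.0], [0.4, 0.2]])
print(round(old), round(new), round(old - new))  # 852 567 285
```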
Markov Processes
• Absorbing State: A state is said to be absorbing if
the probability of making a transition out of that
state is zero. Thus once the system has made a
transition into an absorbing state, it will remain
there.
• Fundamental Matrix: A matrix necessary for the
computation of probabilities associated with
absorbing states of a Markov Process.
THANK YOU