Chapter 1

Introduction


 CONTENTS

 Queueing System

    ♦ Wait in line at the post office, supermarket, on the highway, etc.
    ♦ Wait in line within a time-sharing computer system.
    ♦ Wait in line before a server-farm system.
    ♦ Wait in line before a switch/router system.
    ♦ Wait in line before a protocol-layer program module.
    ♦ Wait in line in a statistical multiplexer.

 Design Issues: choose the system parameters so that the queueing system achieves optimal performance.

    ♦ Channel allocation in GSM/WCDMA/OFDMA systems.
    ♦ Buffer design in switch/router communication systems.
    ♦ Bandwidth allocation for a statistical multiplexer, e.g., an EPON system.
    ♦ Scheduling in a server-farm or switch/router system.

 Performance Measures

    ♦ How long must a customer wait?
        → waiting time and system time
    ♦ How many people will form in the line?
        → average queue length
    ♦ How productive is the server (counter)?
        → throughput, utilization
    ♦ How large is the blocking probability, i.e., how much service capacity should be provisioned?

 "Queueing theory" attempts to answer these questions through detailed mathematical analysis.

1.1 DESCRIPTION OF THE QUEUEING PROBLEMS

                 Fig. - Conceptual queueing model
    (customers arrive, wait in the queue, and are served by the servers; served customers leave,
     and discouraged customers may leave the system without service)

    ♦ An arriving customer could be a connection request asking for connection setup and a minimum-data-rate service.
    ♦ The queueing model is commonly used in traffic control, scheduling, and system design.

1.2 CHARACTERISTICS OF QUEUEING PROCESSES

 Arrival Pattern of Customers

  ♦   The arrival pattern is measured in terms of a probability distribution, or by the mean arrival rate or mean interarrival time.
  ♦   Arrivals may occur one customer at a time or in bulk. In the bulk-arrival situation, both the interarrival time and the number of customers in the batch are probabilistic.
  ♦   Customer reactions:
         - patient customers
         - impatient customers:
              balked (blocked) customers
              reneged customers
              jockeying customers (switch from one line to another)
  ♦   Stationary process (time-independent) vs. non-stationary process (time-dependent, correlated).

 Service Pattern

  ♦   Service-time distribution (a random process): service time and service rate.
  ♦   Service may be single or batch.
  ♦   State-independent service or state-dependent service.

 Queue Discipline (Service Discipline)

  ♦ First In, First Out (FIFO)
  ♦ Last In, First Out (LIFO)
  ♦ Selection for service In Random Order (SIRO), independent of the arrival time to the queue; Most Delay First Served (MDFS)
  ♦ Priority
       1. Preemptive
       2. Non-preemptive
  ♦ The round-robin service discipline (operating-system theory, E. G. Coffman, Jr.):
       arriving jobs queue for the processor (CPU) and each receives a fixed quantum of execution time;
       after a quantum a job completes and leaves with probability σ, or rejoins the queue with probability 1 − σ.
       Remark: used for time-sharing system applications.

♦ The Shortest-Elapsed-Time Discipline
      (arriving jobs enter level 1, having received zero quanta; jobs that have received one quantum move to level 2, and so on up to level K)
      ■ Serve the job that has received the fewest quanta.
      ■ Among those, serve the one that arrived earliest.

 System Capacity

  ♦ In some queueing processes there is a physical limitation on the amount of waiting room (buffer size), i.e., the system capacity is finite.

                 Fig. 1.2 - Multichannel queueing system

 Number of Service Channels

    ♦ Multiple-channel queueing system
    ♦ Routing and buffer sharing

 Stages of Service

                 Fig. 1.3 - A multistage queueing system with feedback

    ♦ Consider a communication network with ARQ recovery control. It is a kind of multi-stage queueing system.

"Before performing any mathematical analysis, it is absolutely necessary to describe the process being modeled adequately. Knowledge of the aforementioned six characteristics is essential in the queueing analysis."

1.3 NOTATION

          Table 1.1 - Queueing Notation A/B/X/Y/Z

    A : interarrival-time distribution
    B : service-time distribution
    X : number of parallel servers
    Y : system capacity
    Z : queue discipline

1.4 MEASURING SYSTEM PERFORMANCE

    ♦ Blocking probability (grade of service)
    ♦ Waiting time or mean system time (quality-of-service requirement)
    ♦ Mean queue length (average number of customers in the system)
    ♦ System utilization (throughput)

1.5 SOME GENERAL RESULTS

   Consider the queueing model (G/G/1 or G/G/c).

      Traffic intensity: ρ ≡ λ/(cµ)

                 Fig. - G/G/c queueing model
      (a(t): interarrival times, generally distributed with mean rate λ;
       b(t): service times, generally distributed with mean rate µ, over c parallel servers)

   If ρ > 1 (λ > cµ), the queue gets bigger and bigger, the queue never settles down, and there is no steady state.

   If ρ = 1 (λ = cµ), no steady state exists unless the arrival and service processes are deterministic and perfectly scheduled, since randomness prevents the queue from ever emptying out and allowing the servers to catch up, causing the queue to grow without bound.

   ∴ A steady state requires ρ < 1.

   Notation       Description
   N(t)           Number of customers in the system at time t
   N_q(t)         Number of customers in the queue at time t
   N_s(t)         Number of customers in service at time t
   T              System time
   T_q            Waiting time
   S              Service time

      N(t) = N_q(t) + N_s(t),        T = T_q + S

   Consider a c-server queue. At steady state:

      p_n(t) = Pr{ N(t) = n },       p_n = Pr{ N = n }

      L   = E[N]   = Σ_{n=0}^{∞} n·p_n ,            W   = E[T]

      L_q = E[N_q] = Σ_{n=c}^{∞} (n − c)·p_n ,       W_q = E[T_q]

1.5.1       LITTLE'S FORMULAS

        John D. C. Little related L to W, and Lq to Wq,

        where W = E[T], Wq = E[Tq], and T = Tq + S.

   Little's formulas:

        L  = λW                    (1.1a)
        Lq = λWq                   (1.1b)
        W  = Wq + 1/µ

                 Fig. 1.4 - Busy-period sample path

A concept proof

     L = [1 × (t2 − t1) + 2 × (t3 − t2) + 1 × (t4 − t3) + 2 × (t5 − t4)
           + 3 × (t6 − t5) + 2 × (t7 − t6) + 1 × (T − t7)] / T                 (1.2a)
       = (area under the curve) / T
       = [T + t7 + t6 − t5 − t4 + t3 − t2 − t1] / T

     W = [(t3 − t1) + (t6 − t2) + (t7 − t4) + (T − t5)] / (Nc = 4)
       = [T + t7 + t6 − t5 − t4 + t3 − t2 − t1] / Nc                           (1.2b)
       = (area under the curve) / Nc

     ∴ L·T = W·Nc   ⇒   L = W·(Nc / T) = W·λ = λW                              (1.1a)
                          Lq = λWq                                            (1.1b)

   These relations hold for any queueing system.

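As a quick numerical illustration of the area-under-the-curve argument, here is a minimal sketch; the arrival and departure times below are made-up example data, not the values in Fig. 1.4:

```python
# Minimal sketch of the sample-path argument behind Little's formula.
# The arrival/departure times are hypothetical example data.
arrivals   = [1.0, 2.0, 4.0, 5.0]          # arrival epochs of 4 customers
departures = [3.0, 6.0, 7.0, 8.0]          # departure epochs (FIFO order)
T = 8.0                                    # observation horizon

# Area under N(t): each customer contributes (departure - arrival) to the area.
area = sum(d - a for a, d in zip(arrivals, departures))

Nc  = len(arrivals)
L   = area / T                             # time-average number in system
W   = area / Nc                            # average time in system per customer
lam = Nc / T                               # observed arrival rate

print(L, lam * W)                          # L and lambda*W agree: Little's formula
```
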
   L − Lq = E[N] − E[Nq] = E[N − Nq] = E[Ns]
          = λW − λWq = λ(W − Wq) = λ · (1/µ)                                   (1.3)

   This is the expected number of customers in service in steady state:

          E[Ns] = λ/µ ≡ r

   For a single-server system (r = ρ):

          L − Lq = Σ_{n=0}^{∞} n·Pn − Σ_{n=1}^{∞} (n − 1)·Pn = Σ_{n=1}^{∞} Pn = 1 − P0

          Probability that the server is busy ≡ Pb = 1 − P0 = ρ

   For a multiple-server system:

          Pb = r/c = λ/(cµ) = ρ : Pr{ a given server is busy }

          Pr{ at least one server is busy } = ?

Table 1.2 - Summary of General Results for the G/G/c Queue

     L = λW                         Little's formula
     Lq = λWq                       Little's formula
     W = Wq + 1/µ                   Expected-value argument
     Pb = λ/(cµ)                    Busy probability of an arbitrary server
     r = λ/µ = E[Ns]                Expected number of customers in service; offered workload
     L = Lq + r
     ρ = r/c = λ/(cµ)               Traffic intensity; workload per server

     G/G/1 :  P0 = 1 − ρ ,   L = Lq + ρ = Lq + (1 − P0)

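A small helper collecting these general relations (a sketch only; Lq itself depends on the particular queue and is passed in here as a known value):

```python
def ggc_summary(lam: float, mu: float, c: int, Lq: float) -> dict:
    """General G/G/c relations from Table 1.2 (Lq must be supplied)."""
    r   = lam / mu            # offered workload = expected number in service
    rho = r / c               # traffic intensity (must be < 1 for steady state)
    Wq  = Lq / lam            # Little's formula applied to the queue
    W   = Wq + 1.0 / mu       # add one mean service time
    L   = lam * W             # Little's formula applied to the whole system
    return {"r": r, "rho": rho, "Wq": Wq, "W": W, "L": L}

# Example: lambda = 3 per minute, mu = 4 per minute, one server, a known Lq.
print(ggc_summary(lam=3.0, mu=4.0, c=1, Lq=2.25))
```
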
1.6 SIMPLE DATA BOOKKEEPING FOR QUEUES

From Fig. 1.5, we can see that

                 Fig. 1.5 – Sample path for the queueing process

     Wq^(n+1) = Wq^(n) + S^(n) − T^(n)     if  Wq^(n) + S^(n) − T^(n) > 0
     Wq^(n+1) = 0                          if  Wq^(n) + S^(n) − T^(n) ≤ 0

where S^(n) is the service time of customer n and T^(n) is the interarrival time between customers n and n + 1.

Thus, using the input data shown in Table 1.3,

                 Table 1.3 – Input Data

we can carry out the event-oriented bookkeeping shown in Table 1.4.

             Table 1.4 – Event-Oriented Bookkeeping

Column (2) : Arrival/Departure of Customer i

     Master Clock Time          Arrival/Departure of Customer i
           0                          1-A
           1                          1-D
           2                          2-A
           3                          3-A
           5                          2-D
          ...                         ...

     n-A = 0 + Σ_{i=1}^{n−1} T^(i)

     n-D = n-A + [Wq^(n)] + S^(n)
         = n-A + [Wq^(n−1) + S^(n−1) − T^(n−1)] + S^(n)
           (with Wq^(i) = Wq^(i−1) + S^(i−1) − T^(i−1), ∀i ≥ 2)
         = n-A + Wq^(1) + Σ_{i=1}^{n} S^(i) − Σ_{i=1}^{n−1} T^(i)
         = Σ_{i=1}^{n} S^(i)

Column (3) : Time at which arrival i enters service

     i-A + [Wq^(i−1) + S^(i−1) − T^(i−1)]

Column (4) : Time at which arrival i leaves service

Column (5) : Time in Queue:

                           Column (3) − Column (1)

   Set T^(0) = 0, S^(0) = 0, Wq^(0) = 0. Then

     Wq^(1) = Wq^(0) + S^(0) − T^(0) = 0
     Wq^(2) = (Wq^(1) + S^(1) − T^(1))⁺ = 0
     Wq^(3) = Wq^(2) + S^(2) − T^(2) = 2

Column (6) : Time in System:

                           Column (4) − Column (1)

Column (7) : Number in Queue just after the Master Clock Time:

                           A's in Column (2) − D's in Column (2) − 1

Column (8) : Number in System just after the Master Clock Time:

                           A's in Column (2) − D's in Column (2)

   Check Little's Formula

        Average time in system (by Column (6)):        W = 70/12
        Average number in system:                      L = 70/31
        Mean arrival rate:                             λ = 12/31

        L = λW = (12/31) × (70/12) = 70/31  ✓

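The same bookkeeping is easy to automate. The sketch below runs the recursion of Section 1.6 for a single-server FIFO queue and checks Little's formula; the interarrival times T and service times S are hypothetical, since Table 1.3 is not reproduced in the extracted slides:

```python
# Event-oriented bookkeeping for a single-server FIFO queue (a sketch).
T = [2, 1, 3, 1, 1, 4, 2, 5, 1, 4, 2]     # T[i]: time between arrivals i+1 and i+2
S = [1, 3, 6, 2, 1, 1, 4, 2, 5, 1, 2, 3]  # S[i]: service time of customer i+1

n = len(S)
arrive = [0.0] * n          # Column (1): arrival times
start  = [0.0] * n          # Column (3): time customer enters service
leave  = [0.0] * n          # Column (4): time customer leaves service
for i in range(n):
    arrive[i] = arrive[i - 1] + T[i - 1] if i > 0 else 0.0
    start[i]  = max(arrive[i], leave[i - 1] if i > 0 else 0.0)  # Lindley-type recursion
    leave[i]  = start[i] + S[i]

Wq = sum(s - a for s, a in zip(start, arrive)) / n   # mean time in queue (Column 5)
W  = sum(l - a for l, a in zip(leave, arrive)) / n   # mean time in system (Column 6)
horizon = leave[-1]
lam  = n / horizon                                   # observed arrival rate
area = sum(l - a for l, a in zip(leave, arrive))     # integral of N(t) over time
L = area / horizon                                   # time-average number in system
print(f"W = {W:.3f}, L = {L:.3f}, lambda*W = {lam * W:.3f}")  # L equals lambda*W
```
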
1.7 POISSON PROCESS AND THE EXPONENTIAL
    DISTRIBUTION

The most common stochastic queueing models assume that:

   The arrival process and the service process are Poisson, or, equivalently, the interarrival times and the service times obey the exponential distribution.

Consider an arrival process {N(t), t ≥ 0}, where N(t) denotes the total number of arrivals up to time t, with N(0) = 0, and which satisfies the following assumptions:

   1. Pr{ an arrival occurs between t and t + ∆t } = λ∆t + O(∆t),
      where λ is the arrival rate and lim_{∆t→0} O(∆t)/∆t = 0.

   2. Pr{ more than one arrival occurs between t and t + ∆t } = O(∆t).

   3. The numbers of arrivals in non-overlapping intervals are statistically independent.

Then we have

     pn(t + ∆t) = Pr{ n arrivals in (0, t] and zero arrivals in ∆t }                     (1.6)
                  + Pr{ (n − 1) arrivals in (0, t] and one arrival in ∆t }
                  + Pr{ (n − 2) arrivals in (0, t] and two arrivals in ∆t }
                  + ...
                = pn(t)·[1 − λ∆t − O(∆t)] + pn−1(t)·λ∆t                                  (1.7)
                  + [pn−2(t) + ... + p0(t)]·O(∆t)

so that

     pn(t + ∆t) − pn(t) = −(λ∆t)·pn(t) + (λ∆t)·pn−1(t)
                          + [−pn(t) + pn−2(t) + ... + p0(t)]·O(∆t)

For the case n = 0, we have

     p0(t + ∆t) = p0(t)·[1 − λ∆t − O(∆t)],                                               (1.8)

     p0(t + ∆t) − p0(t) = −λ∆t·p0(t) − p0(t)·O(∆t)                                       (1.9)

     pn(t + ∆t) − pn(t) = −λ∆t·pn(t) + λ∆t·pn−1(t) + O(∆t),        n ≥ 1                 (1.10)

Dividing the above two equations by ∆t and letting ∆t → 0, we have

     dp0(t)/dt = −λ·p0(t)                                                                (1.11)

     dpn(t)/dt = −λ·pn(t) + λ·pn−1(t),        n ≥ 1                                      (1.12)

Then, since p0(0) = 1 and pn(0) = 0 for all n ≥ 1, we have

     p0(t) = e^(−λt)
     p1(t) = λt·e^(−λt)
     p2(t) = ((λt)²/2!)·e^(−λt)
     p3(t) = ((λt)³/3!)·e^(−λt)
     ...

We conjecture the general formula to be

     pn(t) = ((λt)ⁿ/n!)·e^(−λt)                                                          (1.14)

That is a Poisson distribution. This can be proven by mathematical induction.

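As a quick numerical sanity check of (1.11)–(1.12) and the conjecture (1.14), the sketch below integrates the differential equations with a simple Euler scheme; the step size, rate, and horizon are arbitrary choices:

```python
import math

# Numerically integrate dp0/dt = -lam*p0, dpn/dt = -lam*pn + lam*p(n-1)
# with p0(0)=1, pn(0)=0, and compare against the Poisson pmf (1.14).
lam, t_end, dt, N = 2.0, 3.0, 1e-4, 12
p = [1.0] + [0.0] * N
t = 0.0
while t < t_end:
    dp = [-lam * p[0]] + [-lam * p[n] + lam * p[n - 1] for n in range(1, N + 1)]
    p = [x + dt * d for x, d in zip(p, dp)]
    t += dt

for n in range(5):
    poisson = (lam * t_end) ** n / math.factorial(n) * math.exp(-lam * t_end)
    print(n, round(p[n], 4), round(poisson, 4))   # the two columns agree closely
```
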
We now show that if the arrival process follows the Poisson distribution, the interarrival time follows the exponential distribution.

Proof

   Let T be the random variable "time between two arrivals". Then

        Pr{ T ≥ t } = Pr{ zero arrivals in time t } = p0(t) = e^(−λt)

   Let A(t) be the CDF of T:

        A(t) = Pr{ T ≤ t } = 1 − e^(−λt)

        ∴ a(t) = λe^(−λt)

Thus T has the exponential distribution with mean interarrival time 1/λ.

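The equivalence can also be seen by simulation. The sketch below generates arrivals with exponential interarrival times and checks that the number of arrivals in (0, t] is approximately Poisson(λt); the rate, horizon, and number of runs are arbitrary choices:

```python
import math
import random

# Exponential interarrival times (rate lam) => Poisson(lam*t) counts in (0, t].
random.seed(1)
lam, t, runs = 1.5, 4.0, 200_000

counts = {}
for _ in range(runs):
    clock, n = 0.0, 0
    while True:
        clock += random.expovariate(lam)   # exponential interarrival time
        if clock > t:
            break
        n += 1
    counts[n] = counts.get(n, 0) + 1

for n in range(8):
    empirical = counts.get(n, 0) / runs
    poisson = (lam * t) ** n / math.factorial(n) * math.exp(-lam * t)
    print(n, round(empirical, 4), round(poisson, 4))
```
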
Conversely, it can be shown that if the interarrival times (X₁, X₂, ..., Xₙ, Xₙ₊₁) are independent and have the same exponential distribution, then the arrival process follows the Poisson distribution.

Proof

   Let Pn(t) denote Pr{ N(t) ≤ n } (a CDF). Then

        Pn(t) = Pr{ sum of n + 1 interarrival times > t }
              = ∫_t^∞ [λ(λx)ⁿ/n!]·e^(−λx) dx                                             (1.15)

   where λ(λx)ⁿe^(−λx)/n! is the Erlang density of the sum of n + 1 independent and identically distributed exponential random variables. Also,

        pn(t) = Pn(t) − Pn−1(t)

Let x = u + t:

        Pn(t) = ∫_0^∞ [λⁿ⁺¹(u + t)ⁿ/n!]·e^(−λt)·e^(−λu) du
              = ∫_0^∞ [λⁿ⁺¹e^(−λt)e^(−λu)/n!] · Σ_{i=0}^{n} [n!/(i!(n − i)!)]·uⁿ⁻ⁱ·tⁱ du
              = Σ_{i=0}^{n} [λⁿ⁺¹e^(−λt)·tⁱ/(i!(n − i)!)] · ∫_0^∞ e^(−λu)·uⁿ⁻ⁱ du
              = Σ_{i=0}^{n} [(λt)ⁱ/i!]·e^(−λt)

   Via this manipulation we have

        Pn(t) = Σ_{i=0}^{n} [(λt)ⁱ/i!]·e^(−λt)   ⇒   pn(t) = [(λt)ⁿ/n!]·e^(−λt)

   That is a Poisson process.

   ※ Given the number of arrivals in an interval, the arrival epochs are uniformly distributed over the interval.

1.8 MARKOVIAN PROPERTY OF THE EXPONENTIAL
    DISTRIBUTION

   Markovian property of the Exponential Distribution
   @ Memoryless property of the Exponential Distribution

        Pr{ T ≤ t1 | T ≥ t0 } = Pr{ 0 ≤ T ≤ (t1 − t0) }                                  (1.17)

Proof

        Pr{ T ≤ t1 | T ≥ t0 } = Pr{ t0 ≤ T ≤ t1 } / Pr{ T ≥ t0 }
                              = ∫_{t0}^{t1} λe^(−λt) dt / ∫_{t0}^{∞} λe^(−λt) dt
                              = (e^(−λt0) − e^(−λt1)) / e^(−λt0)
                              = 1 − e^(−λ(t1 − t0))
                              = Pr{ 0 ≤ T ≤ (t1 − t0) }

The exponential distribution is the only continuous distribution which exhibits this memoryless property. The proof of this assertion rests on the fact that the only continuous solution of the functional equation g(s + t) = g(s) + g(t) is the linear form

                                        g(y) = cy                                        (1.18)

   Claim: if Pr{ T ≤ t1 | T ≥ t0 } = Pr{ 0 ≤ T ≤ (t1 − t0) }, then F_T(t) = Pr{ T ≤ t } = 1 − e^(ct).

Proof

   The memoryless property (1.17) can be rewritten in terms of the CCDF F̃(t) = Pr{ T > t } as

        F̃(t1 | T ≥ t0) = F̃(t1 − t0)                                                      (1.19)

   The left-hand side is

        Pr{ T > t1 and T ≥ t0 } / Pr{ T ≥ t0 } = F̃(t1) / F̃(t0)

        ∴ F̃(t1) = F̃(t0)·F̃(t1 − t0)

   Letting t1 − t0 = t,

        F̃(t + t0) = F̃(t0)·F̃(t)

   (Remark — batch Poisson: Pn(t) = Pr{ n customers arrive in [0, t] } = Σ_{i=0}^{n} e^(−λt)·[(λt)ⁱ/i!]·C_n^(i), where C_n^(i) is the probability of n customers given i batch occurrences.)

Taking the natural logarithm,

        ln F̃(t + t0) = ln F̃(t0) + ln F̃(t)

   Using the above-mentioned fact, we have ln F̃(t) = ct, so

        F̃(t) = e^(ct)   ⇒   F(t) = 1 − e^(ct),   or   f(t) = −c·e^(ct) = λe^(−λt)   (with c = −λ)

   There are many other well-known general properties of the Poisson/exponential process; they are taken up in greater detail in the text.

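A short empirical check of the memoryless property (1.17); this is only a sketch, and the rate and the thresholds t0, t1 are arbitrary:

```python
import random

# Check Pr{T <= t1 | T >= t0} = Pr{T <= t1 - t0} for exponential T.
random.seed(2)
lam, t0, t1, runs = 0.8, 1.0, 2.5, 500_000

samples = [random.expovariate(lam) for _ in range(runs)]
survivors = [x for x in samples if x >= t0]

cond  = sum(x <= t1 for x in survivors) / len(survivors)   # Pr{T <= t1 | T >= t0}
fresh = sum(x <= t1 - t0 for x in samples) / runs          # Pr{T <= t1 - t0}
print(round(cond, 4), round(fresh, 4))                     # the two estimates agree
```
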
1.9 STOCHASTIC PROCESSES AND MARKOV CHAINS

A stochastic process { x(t), t ∈ T } is a family of random variables, where x(t) is defined over some index set (parameter space) T.

    T : time range,        x(t) : state of the process at time t.

  ♦  If T is a countable sequence, for example T = {0, 1, 2, ...}, then { x(t), t ∈ T } is said to be a discrete-parameter process defined on the index set T.

  ♦  If T is an interval, for example T = { t : 0 ≤ t < ∞ }, then { x(t), t ∈ T } is called a continuous-parameter process defined on the index set T.

Stationary:

                       F_X(x; t) = F_X(x; t + τ)

Wide-sense stationary (W.S.S.):

                       E[x(t)] is independent of t
                       E[x(t)·x(t + τ)] is a function of τ only

1.9.1 MARKOV PROCESS

   A continuous-parameter stochastic process { x(t), t > 0 } or a discrete-parameter stochastic process { x(t), t = 0, 1, 2, ... } is a Markov process if, for any set of n time points 0 < t1 < t2 < ... < tn in the index set of the process, the conditional distribution of X(tn), given X(t1), ..., X(tn−1), depends only on the immediately preceding value X(tn−1). More precisely,

        Pr{ X(tn) ≤ xn | X(t1) = x1, ..., X(tn−1) = xn−1 } = Pr{ X(tn) ≤ xn | X(tn−1) = xn−1 }

   "Memoryless": given the present, the future is independent of the past.

   Classification of Markov processes, according to:
        (1) the nature of the parameter space (index set, time range) of the process
        (2) the nature of the state space of the process

Table 1.5 – Classification of Markov Processes

                                  Parameter space (T)
   State space         Discrete                      Continuous
   Discrete            Discrete-parameter            Continuous-parameter
                       Markov chain                  Markov chain
   Continuous          Discrete-parameter            Continuous-parameter
                       Markov process                Markov process

 Semi-Markov Process (SMP) or Markov Renewal Process (MRP)

   ♦ Let T denote the time between two consecutive state transitions.
   ♦ If T is an arbitrary random variable, the process is called an SMP.
   ♦ If T is exponentially distributed (continuous-parameter case) or geometrically distributed (discrete-parameter case), then the SMP reduces to a Markov process.

1.9.2 DISCRETE-PARAMETER MARKOV CHAINS

The conditional probability

     Pr{ xn = j | xn−1 = i } : transition probability (single-step)

If these probabilities are independent of n, the Markov chain is called a homogeneous chain:

     Pr{ xn = j | xn−1 = i } = pij   →   [pij] = P : transition matrix

For a homogeneous chain, the m-step transition probabilities Pr{ xn+m = j | xn = i } = pij^(m) are also independent of n. From the basic laws of probability,

     pij^(m) = Σ_r pir^(m−k)·prj^(k)        (0 < k < m)        Chapman-Kolmogorov (C-K) equations

In matrix notation,

     P^(m) = P^(m−k)·P^(k);   if k = m − 1, then P^(m) = P^(m−1)·P, and hence P^(m) = P^m, where P = [pij].

Define the unconditional probability of state j at the m-th trial by Pr{ xm = j } = πj^(m) (the state probability at the m-th trial). The initial distribution is given by πj^(0).

     π^(m) = π^(m−1)·P ,   where π^(m) = [π1^(m), ..., πn^(m)]_(1×n) ,   so   π^(m) = π^(0)·P^m

            | p11  p12  ...  p1n |
     P =    |  .    .          . |
            | pn1  pn2  ...  pnn |

     π^(m) − π^(m−1) = π^(m−1)·(P − I) = π^(m−1)·Q                                       (1.24)
     ((P − I) → the sum of each row equals 0)

If the limiting probability exists,

     lim_{m→∞} π^(m) = lim_{m→∞} π^(m−1) = π    at steady state,

     πQ = 0   (Q → the sum of each row equals zero),   or   π = πP

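A small sketch of solving the stationary equations π = πP, πe = 1 numerically; the 3-state transition matrix below is made up for illustration:

```python
import numpy as np

# Solve pi = pi*P together with pi*e = 1 for a small discrete-time chain.
P = np.array([[0.5, 0.3, 0.2],      # hypothetical 3-state transition matrix
              [0.1, 0.6, 0.3],
              [0.2, 0.4, 0.4]])

n = P.shape[0]
# Stack (P^T - I) pi^T = 0 with the normalization row sum(pi) = 1.
A = np.vstack([P.T - np.eye(n), np.ones(n)])
b = np.zeros(n + 1); b[-1] = 1.0
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

print(pi)                                  # stationary distribution
print(np.linalg.matrix_power(P, 50)[0])    # rows of P^m converge to pi
```
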
1.9.3 CONTINUOUS-PARAMETER MARKOV CHAINS

{ x(t), t ∈ T }, for T = { t | 0 ≤ t < ∞ }, where { x(t) } is countable for t ∈ T.

From the C-K equations, intuitively,

     pij(u, s) = Σ_r pir(u, t)·prj(t, s)                                                 (1.25)

In matrix notation,

     P(u, s) = P(u, t)·P(t, s)                                                           (a.1)

Letting u = 0, s = t + ∆t,

     pij(0, t + ∆t) = Σ_r pir(0, t)·prj(t, t + ∆t)

     Σ_i pi(0)·pij(0, t + ∆t) = Σ_r Σ_i pi(0)·pir(0, t)·prj(t, t + ∆t)

     p_j(t + ∆t) = Σ_r pr(t)·prj(t, t + ∆t)                                              (a.2)

        → the unconditional probability p_j(t): the state probability of j at time t, regardless of the starting state.

※ Read the Poisson-process example in the textbook, p. 30.

 Additional Theory:

   If (1)

        Pr{ a change from state i in (t, t + ∆t) } = 1 − pii(t, t + ∆t) = qi(t)·∆t + O(∆t)

        (the probability of a change is linearly proportional to ∆t, with proportionality constant qi(t), a function of i and t)

   and (2)

        pij(t, t + ∆t) = qij(t)·∆t + O(∆t),   i ≠ j,        with  qi(t) = Σ_{j≠i} qij(t),

   then we have Kolmogorov's forward and backward equations, respectively:

        ∂pij(u, t)/∂t = −qj(t)·pij(u, t) + Σ_{r≠j} qrj(t)·pir(u, t)
                                                                                          (1.28)
        ∂pij(u, t)/∂u = qi(u)·pij(u, t) − Σ_{r≠i} qir(u)·prj(u, t)

Let u = 0 and assume a homogeneous process, so that qi(t) = qi and qij(t) = qij for all t. We then obtain

     d pij(0, t)/dt = −qj·pij(0, t) + Σ_{r≠j} qrj·pir(0, t)

Multiplying by pi(0) and summing over i,

     d pj(t)/dt = −qj·pj(t) + Σ_{r≠j} qrj·pr(t),        j = 0, 1, 2, ...

In matrix notation,

     p′(t) = p(t)·Q,   where p(t) = (p0(t), p1(t), p2(t), ...)   and

            | −q0   q01   q02  ... |
     Q =    |  q10  −q1   q12  ... |
            |  q20   q21  −q2  ... |
            |  ...   ...  ...  ... |

At steady state, p·Q = 0, where p = (p0, p1, ...).

     Note that qi = Σ_{j≠i} qij.  ← This is intuitive.

Since Σ_j pij(t, t + ∆t) = 1,

     pii(t, t + ∆t) + Σ_{j≠i} pij(t, t + ∆t) = 1

     1 − qi·∆t + O(∆t) + Σ_{j≠i} ( qij·∆t + O(∆t) ) = 1

we can have qi = Σ_{j≠i} qij, and since

     qi  = lim_{∆t→0} [1 − pii(t, t + ∆t)] / ∆t   ⇒   −qi = lim_{∆t→0} [pii(t, t + ∆t) − 1] / ∆t

     qij = lim_{∆t→0} pij(t, t + ∆t) / ∆t

we obtain

     Q = lim_{∆t→0} [ P(t, t + ∆t) − I ] / ∆t

Q : intensity matrix, where P(t, t + ∆t) = { pij(t, t + ∆t) }.

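A sketch of solving pQ = 0 with pe = 1 for a small continuous-parameter chain; the intensity matrix below is a made-up example whose rows sum to zero:

```python
import numpy as np

# Hypothetical 3-state intensity matrix: off-diagonal entries are transition
# rates q_ij, and each diagonal entry is -q_i = -(sum of the rates out of i).
Q = np.array([[-3.0,  2.0,  1.0],
              [ 4.0, -6.0,  2.0],
              [ 1.0,  3.0, -4.0]])

n = Q.shape[0]
A = np.vstack([Q.T, np.ones(n)])        # pQ = 0 rows plus the normalization pe = 1
b = np.zeros(n + 1); b[-1] = 1.0
p, *_ = np.linalg.lstsq(A, b, rcond=None)
print(p)                                 # steady-state probabilities (sum to 1)
```
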
For the Poisson process (a pure birth process), the transition rates are

     q_{j,j+1} = λ,   q_j = λ,   and  q_{r,j} = 0 elsewhere
     (i.e., q_{rj} = λ only if r = j − 1 and j ≥ 1),

so that

     d pj(t)/dt = −λ·pj(t) + λ·pj−1(t)        (the same as (1.12))

For the general birth-death process, with birth rates λj and death rates µj
(state-transition diagram: 0 ⇄ 1 ⇄ 2 ⇄ ... ⇄ j ⇄ j+1 ⇄ ..., with upward rates λ0, λ1, ... and downward rates µ1, µ2, ...),

     d pj(t)/dt = −(λj + µj)·pj(t) + λj−1·pj−1(t) + µj+1·pj+1(t),        j ≥ 1

     d p0(t)/dt = −λ0·p0(t) + µ1·p1(t)

The Poisson process is often called a pure birth process.

1.9.4 IMBEDDED MARKOV CHAINS

If we observe the system only at certain selected times, and the process behaves like an ordinary Markov chain at those instants, we say we have an imbedded Markov chain (we turn our attention away from the truly continuous-parameter queueing process to an imbedded discrete-parameter Markov-chain queueing process).

Consider the birth-death process at its transition times:

             | λi / (λi + µi)      ( j = i + 1 ; i ≥ 1)
             | µi / (λi + µi)      ( j = i − 1 ; i ≥ 1)
     pij =   | 1                   ( j = 1 ; i = 0)
             | 0                   (elsewhere)

     ※  P(A | BC) = P(AB | C) / P(B | C)

(State-transition diagram: from state i the chain jumps to i + 1 with probability λi/(λi + µi) and to i − 1 with probability µi/(λi + µi); from state 0 it jumps to state 1 with probability 1.)

   The transition probabilities { pij } define the imbedded chain.
   Holding time : exponentially distributed with mean 1/(λi + µi).

   Semi-Markov process  ⊃  Markov process  ⊃  birth-death Markov process.

1.9.5      LONG-RUN BEHAVIOR OF MARKOV PROCESSES: LIMITING
           DISTRIBUTION, STATIONARY DISTRIBUTION, ERGODICITY


If lim_{m→∞} pij^(m) = πj for all i (independent of i), we call {πj} the limiting probabilities of the Markov chain (steady-state probabilities).

Consider the unconditional state probability after m steps, π^(m) = π^(0)·P^(m):

     πj^(m) = Σ_i πi^(0)·pij^(m)

     lim_{m→∞} πj^(m) = lim_{m→∞} Σ_i πi^(0)·pij^(m) = Σ_i πi^(0)·πj = πj     (independent of m and i)

Since π^(m) = π^(m−1)·P,

     lim_{m→∞} π^(m) = lim_{m→∞} π^(m−1)·P   ⇒   π = πP,

or, equivalently, 0 = πQ, together with the boundary condition πe = 1. These well-known equations are called the stationary equations, and their solution is called the stationary distribution of the Markov chain.

   If the limiting distribution exists, this implies that the stationary distribution exists and that the process possesses a steady state (the stationary distribution = the limiting distribution). But the converse is not true.

 Example 1.1

   For the chain of Eq. (1.32), the stationary distribution π = (1/2, 1/2) exists (a solution to π = πP exists), but lim_{m→∞} pij^(m) does not exist, and lim_{m→∞} πi^(m) does not exist except when π^(0) = (1/2, 1/2).

      (i)   The limiting distribution does not exist.
      (ii)  The stationary distribution exists.
      (iii) π^(0) = (1/2, 1/2) → strictly stationary.
      (iv)  The process still does not possess a steady state.

   Strictly stationary: for all k and h, the joint probability distribution of X1(t), X2(t), ..., Xk(t) equals the joint probability distribution of X1(t + h), X2(t + h), ..., Xk(t + h), i.e., the process possesses time-independent distribution functions; π^(m) is independent of m.

   A solution to π = πP does not imply strict stationarity, except when π^(0) = the stationary distribution = the stationary vector (1/2, 1/2). But strict stationarity does imply that π^(m) is time-independent; it does not imply that lim_{m→∞} pij^(m) exists.

 Example 1.2

   The process possesses (i) a steady state, since lim_{m→∞} pij^(m) = πj; (ii) it is strictly stationary at π^(0) = (1/2, 1/2); (iii) but it is not in general stationary unless π^(0) = (1/2, 1/2).

 Example 1.3

          | 1/3   2/3 |
     P =  |           |
          | 2/3   1/3 |

1     1 
             2     2 
   lim Ρ (m)
            =        
   m→∞
             1     1 
              ...
For continuous-parameter processes, the stationary solution can be obtained from 0 = pQ. Thus, if the limiting distribution is known to exist, the solution can be obtained from

     pQ = 0,   pe = 1        (continuous-parameter)

     πQ = 0,   πe = 1        (discrete-parameter)

1.9.6 ERGODICITY

X(t) is ergodic if time averages (statistics) equal ensemble averages (statistics), where N(t, i) ∈ N(t), ∀i.

   Time process :     { N(t0), N(t1), ..., N(tn), ... }
   Ensemble process : { N(t), t ∈ T }

Time averages:

   Continuous-parameter                          Discrete-parameter

   X̄_T  = (1/T) ∫_0^T X_k(t) dt                  X̄_m  = (1/m) Σ_{i=0}^{m−1} X_i

   X̄²_T = (1/T) ∫_0^T X_k²(t) dt                 X̄²_m = (1/m) Σ_{i=0}^{m−1} X_i²

Ensemble averages (over n sample paths, if sampled uniformly):

   E[X(t)]    = lim_{n→∞} ( Σ_{i=1}^{n} x_i(t) ) / n       = m1(t)

   E[{X(t)}²] = lim_{n→∞} ( Σ_{i=1}^{n} {x_i(t)}² ) / n    = m2(t)

Ergodic → all moments are equal (the same statistics):

                       lim_{T→∞} X̄_T  = lim_{t→∞} m1(t) < ∞
                       lim_{T→∞} X̄²_T = lim_{t→∞} m2(t) < ∞

 Example 1.1

   ※ With X̄_m = (1/m) Σ_{i=0}^{m−1} X_i :
     the process is not ergodic if π^(0) = (1, 0);   m1(m) = E(X̄_m) = …
     the process is ergodic if π^(0) = (1/2, 1/2) (and stationary).

 Example 1.2

   The process is ergodic (not stationary).

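A small simulation contrasting time averages with ensemble averages for a two-state chain; this is a sketch, and the chain, its 0/1 state values, and the run lengths are made up:

```python
import random

# Two-state chain with P = [[0.9, 0.1], [0.2, 0.8]]; stationary pi = (2/3, 1/3).
# For an ergodic chain the long-run time average of the state value equals the
# ensemble (stationary) average, regardless of the starting state.
random.seed(3)
P = [[0.9, 0.1], [0.2, 0.8]]

def step(state):
    return 0 if random.random() < P[state][0] else 1

# Time average along one long sample path.
state, total, steps = 0, 0, 200_000
for _ in range(steps):
    state = step(state)
    total += state
time_avg = total / steps

# Ensemble average: many independent copies observed at one (large) time.
copies, horizon = 20_000, 200
ensemble = 0
for _ in range(copies):
    s = 0
    for _ in range(horizon):
        s = step(s)
    ensemble += s
ensemble_avg = ensemble / copies

print(round(time_avg, 3), round(ensemble_avg, 3), 1/3)   # all close to pi_1 = 1/3
```
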
   Two states i and j are said to communicate (i ↔ j) if i is accessible from j (j → i) and j is accessible from i (i → j). If all of its states communicate, the chain is called an irreducible Markov chain: ∃m ∋ pij^(m) > 0 for all pairs (i, j).

   The GCD of { m : pkk^(m) > 0 } is called the period of the state. A state is aperiodic if its period equals 1; the chain is said to be aperiodic if all of its states are aperiodic.

   f_jj^(n) : the probability that a chain starting in state j returns to j for the first time in n transitions.

   f_jj = Σ_{n=1}^{∞} f_jj^(n) : the probability that the chain ever returns to j.

        f_jj = 1 : state j is a recurrent state.
        f_jj < 1 : state j is a transient state.

   When f_jj = 1, define the mean recurrence time

        m_jj ≜ Σ_{n=1}^{∞} n·f_jj^(n)

        m_jj < ∞ : state j is a positive recurrent state.
        m_jj = ∞ : state j is a null recurrent state.

   Note: a stationary process is ergodic, but an ergodic process need not be stationary.

 Example (p. 51 of the text)

   ※ state 1 is recurrent,
   ※ states 0, 3, 4 are recurrent,
   ※ states 2, 5 are transient.

 Theorem 1.1

    (a) For an irreducible, positive recurrent discrete-parameter Markov chain, the stationary distribution always exists:

              πP = π,   πe = 1,   with   πj = 1 / m_jj

    (b) If π^(0) = π, the process becomes stationary → ergodic.

    (c) If the Markov chain is irreducible, positive recurrent, and aperiodic, then the process is ergodic, and its limiting probability distribution equals its stationary probability distribution.

 Theorem 1.2

    An irreducible, aperiodic chain is positive recurrent if there exists a nonnegative solution of the system

              Σ_{j=0}^{∞} pij·xj ≤ xi − 1 ,        i ≠ 0,

such that

              Σ_{j=0}^{∞} p0j·xj < ∞ .

 Theorem 1.3

    For a continuous-parameter Markov chain, the imbedded Markov chain need not be aperiodic; as long as the holding times in all states are bounded, Theorem 1.1 remains valid.

1.10 STEADY-STATE BIRTH-DEATH PROCESSES

                                        p′(t) = p(t)·Q

At steady state, p′(t) = 0, so pQ = 0. From

     dpj(t)/dt = −(λj + µj)·pj(t) + λj−1·pj−1(t) + µj+1·pj+1(t)        (j ≥ 1)

     dp0(t)/dt = −λ0·p0(t) + µ1·p1(t)

we have

     0 = −(λj + µj)·pj + λj−1·pj−1 + µj+1·pj+1        (j ≥ 1)

     0 = −λ0·p0 + µ1·p1

     ⇒  p1 = (λ0/µ1)·p0 ,        pj+1 = [(λj + µj)/µj+1]·pj − (λj−1/µj+1)·pj−1

     ∴  p2 = [(λ1 + µ1)/µ2]·p1 − (λ0/µ2)·p0 = (λ1·λ0)/(µ2·µ1)·p0

         p3 = [(λ2 + µ2)/µ3]·p2 − (λ1/µ3)·p1 = (λ2·λ1·λ0)/(µ3·µ2·µ1)·p0

     ∴  pn = p0·∏_{i=1}^{n} (λi−1/µi) ,   which can be proven by induction.

Since Σ_{n=0}^{∞} pn = 1,

     p0 = [ 1 + Σ_{n=1}^{∞} ∏_{i=1}^{n} (λi−1/µi) ]^(−1)

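A sketch that evaluates these product-form probabilities numerically, truncating the state space at a finite N; the constant λ_n and µ_n below are an M/M/1-type example, chosen only for illustration:

```python
# Steady-state probabilities of a birth-death process,
# p_n = p_0 * prod_{i=1..n} (lam_{i-1} / mu_i), truncated at N states.
def birth_death_steady_state(lam, mu, N=200):
    """lam(n), mu(n): birth/death rate functions; returns p_0..p_N (normalized)."""
    weights = [1.0]
    for n in range(1, N + 1):
        weights.append(weights[-1] * lam(n - 1) / mu(n))
    total = sum(weights)
    return [w / total for w in weights]

p = birth_death_steady_state(lam=lambda n: 2.0, mu=lambda n: 3.0)
rho = 2.0 / 3.0
print(p[0], 1 - rho)                                            # p_0 -> 1 - rho
print(sum(n * pn for n, pn in enumerate(p)), rho / (1 - rho))   # L   -> rho/(1 - rho)
```
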