
Distributed Servers Architecture for Networked Video Services

1. Distributed Servers Architecture for Networked Video Services
   S. H. Gary Chan, Member IEEE, and Fouad Tobagi, Fellow IEEE
2. Contents
   • Introduction
   • System Architecture
   • Caching schemes for video-on-demand services
   • Analytical Model
   • Results
   • Conclusion
3. Introduction (1)
   • In a VoD system, the central server has limited streaming capacity
   • Distributed servers architecture:
     - Hierarchy of servers
     - Local servers cache the videos
     - Some servers are placed close to the user clusters
   • Determine which movies and how much video data should be stored locally to minimize the total cost
4. Introduction (2)
   • Three models of distributed servers architecture:
     - Uncooperative - unicast
     - Uncooperative - multicast
     - Cooperative - communicating servers
   • The paper studies a number of caching schemes, all employing circular buffers and partial caching
   • All requests arriving during the cache window duration are served from the local cache
   • Claim: using partial caching on temporary storage can lower the system cost by an order of magnitude
5. System Cost (1)
   • Cost associated with storing a movie in a local server
     - Depends on how much storage is used and for how long
   • Example: a 1-GB disk costing about $200, amortized over one year:
     - Storage cost = $200 / (365 x 6) ≈ $0.091 per GB per hour
     - Streaming rate = 5 Mb/s, so one minute of video data is 0.0375 GB
     - Storing one minute of video for one minute: ($0.091/h x 0.0375 GB) / 60 ≈ $5.71 x 10^-5 per minute
6. System Cost (2)
   • Cost associated with streaming a movie from the central server
     - Ranges from low (Internet) to quite high (satellite channels); e.g., streaming a 100-min movie costs $3
   • γ = ratio of storage cost to network cost
     - Small γ -> relatively cheap storage
   • Tradeoff between network cost and storage cost (worked through in the sketch below)
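As a check on the two cost slides, the short sketch below reproduces their arithmetic; all figures come from the slides, and the variable names are our own.

```python
# Reproduce the cost arithmetic from the two System Cost slides (all figures are from the slides).
disk_price = 200.0                       # $ for a 1-GB disk
disk_hours = 365 * 6                     # amortization period used on the slide, in hours
storage_per_gb_hour = disk_price / disk_hours                # ~$0.091 per GB per hour

streaming_rate_mbps = 5.0                # Mb/s
gb_per_video_minute = streaming_rate_mbps * 60 / 8 / 1000    # 0.0375 GB per minute of video

# Cost of holding one minute of video in storage for one minute
storage_per_min = storage_per_gb_hour * gb_per_video_minute / 60   # ~$5.7e-5 per minute

# Network cost: streaming a 100-minute movie from the central server costs $3
network_per_min = 3.0 / 100                                   # $0.03 per streamed minute

gamma = storage_per_min / network_per_min                     # ~0.002 per minute
print(f"storage ${storage_per_min:.2e}/min, network ${network_per_min:.2f}/min, gamma ~ {gamma:.4f}/min")
```

The resulting γ of roughly 0.002/min matches the value quoted on the results slides.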
7. Video popularity
   • Movies have a notion of skewness:
     - High-demand movies should be cached locally
     - Low-demand movies are serviced directly from the central server
     - The intermediate class should be partially cached
   • Geometric video popularity with r/m skew: r% of the requests ask for m% of the movies
   • Example workload: 500 movies, 4000 requests/h
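The slide only states that popularity is geometric with an r/m skew; the sketch below is one common way to parameterize such a distribution (solving for the geometric ratio q so that the top m% of titles attract r% of the requests). The 80/20 skew and the even split of 4000 requests/h across titles are our illustrative choices, not values from the paper.

```python
import numpy as np

def geometric_popularity(n_movies: int, r: float, m: float) -> np.ndarray:
    """Geometric popularity p_i proportional to q**i, with q chosen so that the most
    popular m% of the movies attract r% of the requests (an r/m skew)."""
    k = max(1, int(round(n_movies * m / 100)))       # number of "hot" movies
    target = r / 100.0
    lo, hi = 1e-9, 1.0 - 1e-9                        # bisection on q in (0, 1)
    for _ in range(100):
        q = (lo + hi) / 2
        share = (1 - q**k) / (1 - q**n_movies)       # request share of the top k movies
        if share > target:
            lo = q                                   # too skewed: push q toward 1 (flatter)
        else:
            hi = q
    p = q ** np.arange(n_movies)
    return p / p.sum()

# 500 movies as on the slide; an 80/20 skew is assumed here purely as an example
p = geometric_popularity(500, r=80, m=20)
per_movie_rate = 4000 * p                            # requests/h for each title
print(per_movie_rate[:3], per_movie_rate[-3:])
```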
8. Caching Scheme (1)
   • Unicast Delivery
     - A new arrival for a video opens a network channel for Th minutes to stream the video from the central server
     - The local server caches W minutes of data in a circular buffer
     - All requests arriving within the window size W form a group and are served from the buffer
     - Arrivals more than W minutes after the start of a stream open a new stream
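To see how the W-minute window groups requests, the hypothetical sketch below simulates Poisson arrivals and counts how many central-server streams are opened; the parameter values are illustrative and not taken from the paper.

```python
import random

def count_unicast_streams(rate_per_min: float, window_w: float, duration_min: float, seed: int = 1) -> int:
    """Simulate Poisson request arrivals; a request starts a new central-server stream
    unless it falls within W minutes of the most recent stream start (served from cache)."""
    random.seed(seed)
    t, last_stream_start, streams = 0.0, None, 0
    while True:
        t += random.expovariate(rate_per_min)         # next Poisson arrival
        if t > duration_min:
            break
        if last_stream_start is None or t - last_stream_start > window_w:
            streams += 1                               # cache window expired: open a new stream
            last_stream_start = t
    return streams

# e.g. lambda = 0.5 req/min, W = 10 min, over 10,000 minutes:
# new streams start roughly every W + 1/lambda = 12 min, so about 830 streams are expected
print(count_unicast_streams(0.5, 10, 10_000))
```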
9. Caching Scheme (2)
   • Multicast Delivery
     - Pre-storing: the local server permanently stores the leading portion of each movie
     - Leader size: W minutes
     - The first W minutes of video data are served by the local server; the remaining Th - W minutes are multicast from the central server
10. Caching Scheme (3)
   • Multicast Delivery
     - Pre-caching: the local server decides if it should cache a multicast video or not
     - Two schemes, depending on the multicast schedule: periodic multicasting with pre-caching, and request-driven pre-caching
11. Caching Scheme (4)
   • Periodic multicasting with pre-caching
     - The movie is multicast at regular intervals of W minutes
     - The local server pre-caches W minutes of video data
     - The multicast stream is held for Th minutes, or aborted at the end of W minutes if no request arrives
   • Request-driven pre-caching
     - The start of multicast streams is driven by request arrivals
     - Requests arriving within W minutes can be served by the local server
12. Caching Scheme (5)
   • Communicating Servers ("server chain")
     - Video data can be transferred from one server to another using the server network
     - LS1 receives data from the central server over a unicast stream
     - LS1 buffers W minutes of video data
     - Requests at LS2 arriving within W minutes are served by LS1
     - The chain is broken when two buffer allocations are separated by more than W minutes
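To contrast chaining with the simple unicast cache window, the hypothetical sketch below counts central-server streams when a chain survives as long as consecutive requests (across all local servers) are no more than W minutes apart; parameters are illustrative.

```python
import random

def count_chained_streams(rate_per_min: float, window_w: float, duration_min: float, seed: int = 1) -> int:
    """A new central-server stream is opened only when the gap since the previous request
    exceeds W, i.e. the chain of W-minute buffers in the server network is broken."""
    random.seed(seed)
    t, prev_arrival, streams = 0.0, None, 0
    while True:
        t += random.expovariate(rate_per_min)
        if t > duration_min:
            break
        if prev_arrival is None or t - prev_arrival > window_w:
            streams += 1                    # chain broken: fetch from the central server again
        prev_arrival = t
    return streams

# With lambda = 0.5 req/min and W = 10 min, most inter-arrival gaps are shorter than W,
# so far fewer central-server streams are opened than under the unicast window above.
print(count_chained_streams(0.5, 10, 10_000))
```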
13. Scheme Analysis
   • Movie length: Th min
   • Streaming rate: b0 MB/min
   • Request process is Poisson
   • Quantities of interest:
     - Average number of network channels, S
     - Average buffer size, B
     - Total system cost: Ĉ = γ·Σi Bi + S
14. Analysis - Unicast
   • Inter-arrival time between new streams = W + 1/λ
   • By Little's formula: S = Th / (W + 1/λ)
   • The average number of buffers allocated is Th / (W + 1/λ), each holding W minutes, which yields B = W·Th / (W + 1/λ)
   • System cost: Ĉ = γ·B + S; since λ·B + S = λ·Th, this gives Ĉ = (γ - λ)·B + λ·Th
   • To minimize Ĉ, either cache the whole movie or don't cache at all (see the sketch below):
     - λ < γ  =>  B = W = 0
     - λ > γ  =>  B = Th
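The short sketch below (our own illustration; γ = 0.002/min and Th = 90 min are taken from the results slides, the two request rates are assumed) evaluates the unicast cost formula over a range of window sizes and shows that the minimizing window is always W = 0 or W = Th.

```python
import numpy as np

def unicast_cost(W, lam, gamma, Th):
    """Per-movie unicast cost C_hat = gamma*B + S with S = Th/(W + 1/lam) and B = W*Th/(W + 1/lam)."""
    S = Th / (W + 1.0 / lam)
    B = W * Th / (W + 1.0 / lam)
    return gamma * B + S

Th, gamma = 90.0, 0.002
for lam in (0.001, 0.01):                          # per-movie request rates below and above gamma
    W = np.linspace(0.0, Th, 19)
    costs = unicast_cost(W, lam, gamma, Th)
    print(f"lambda = {lam}/min -> cost-minimizing W = {W[np.argmin(costs)]:.0f} min")
# lambda < gamma: best W = 0 (don't cache); lambda > gamma: best W = Th (cache the whole movie)
```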
15. Analysis - Multicast
   • Pre-storing
     - The buffer size depends on whether any request arrives within a multicast interval:
       Bi = W + (1 - e^(-λi·W))·(Th - W)
     - S = (1 - e^(-λ·W))·(Th - W) / W
     - Here (1 - e^(-λ·W)) is the probability of at least one arrival within a W-minute interval
16. Analysis - Multicast
   • Pre-caching (periodic multicasting with pre-caching)
     - With probability e^(-λi·W) no request arrives during a multicast interval and the pre-cached window is discarded; otherwise the W-minute window is kept for the rest of the movie
     - Bi = e^(-λi·W)·(W/2) + (1 - e^(-λi·W))·(Th - W/2)
     - S = e^(-λ·W) + (1 - e^(-λ·W))·Th / W
   • Pre-caching (request-driven pre-caching)
     - Bi and S take analogous forms, weighted by the probability that the multicast stream was not started by the server itself
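As a sanity check on the channel-usage expression reconstructed above, the Monte Carlo sketch below (illustrative parameters, not from the paper) simulates the central server starting a multicast every W minutes and holding it for Th minutes only if a request arrives during that interval.

```python
import random, math

def simulate_periodic_precaching_channels(lam, W, Th, n_intervals=100_000, seed=1):
    """Average number of concurrent central-server channels: a multicast starts every W minutes
    and is held for Th minutes if at least one request arrives during the interval, else aborted at W."""
    random.seed(seed)
    channel_minutes = 0.0
    for _ in range(n_intervals):
        got_request = random.random() < 1 - math.exp(-lam * W)   # P(at least one Poisson arrival in W)
        channel_minutes += Th if got_request else W
    return channel_minutes / (n_intervals * W)

lam, W, Th = 0.05, 10.0, 90.0
sim = simulate_periodic_precaching_channels(lam, W, Th)
closed_form = math.exp(-lam * W) + (1 - math.exp(-lam * W)) * Th / W   # S as reconstructed above
print(f"simulated S = {sim:.3f}, closed form = {closed_form:.3f}")
```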
17. Analysis - Communicating Servers
   • If there are many local servers, a subsequent request likely comes from a different server
   • The average number of concurrent requests is λ·Th, so B = λ·Th·W (each concurrent request holds a W-minute buffer in the server network)
   • S = Th / (Ts + 1/λ), where Ts + 1/λ is the average interval between successive channel allocations at the central server
   • Ts = L·Ta: average time the data is kept in the server network
   • Ta = 1/λ - W/(e^(λ·W) - 1): average request inter-arrival time given that successive requests are within W of each other
   • L: average number of requests served by one chain
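The quantity Ta above is simply the mean of an exponential inter-arrival time conditioned on being shorter than W. The sketch below (our illustration, with assumed parameter values) checks the closed form by simulation.

```python
import random, math

def conditional_interarrival_mean(lam, W, n=200_000, seed=1):
    """Monte Carlo estimate of E[X | X < W] for exponential inter-arrival times X with rate lam."""
    random.seed(seed)
    total, count = 0.0, 0
    for _ in range(n):
        x = random.expovariate(lam)
        if x < W:
            total += x
            count += 1
    return total / count

lam, W = 0.1, 10.0
sim = conditional_interarrival_mean(lam, W)
closed_form = 1 / lam - W / (math.exp(lam * W) - 1)   # Ta = 1/lambda - W/(e^(lambda*W) - 1)
print(f"simulated Ta = {sim:.3f} min, closed form = {closed_form:.3f} min")
```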
18. Results - Unicast
   • For unicast, the tradeoff between S and B for a given λ is linear with slope -λ
   • The optimal caching strategy is all or nothing
   • Determining factors for caching a movie:
     - Skewness of the popularity
     - Cheapness of storage
19. Results - Multicast with Pre-storing
   • There is an optimal W that minimizes the cost
   • The storage component of this curve becomes steeper as γ increases
   • (Figure parameters: λ = 5 req/min, Ns = 20, γ = 0.002/min, Th = 90 min)
20. Results - Multicast
   • Comparison of pre-storing and pre-caching (figure)
   • (Figure parameters: Ns = 20, γ = 0.002/min, Th = 90 min)
21. Results - Communicating Servers
   • At a low arrival rate, a large W is needed to form a chain
   • The higher the request rate, the easier it is to chain
   • (Figure parameters: γ = 0.002/min, Th = 90 min)
22. Results
   • Multicast achieves a cost reduction within a certain window of arrival rates
   • Chaining should not cost more than the other schemes unless local (server-to-server) communication costs are very high
   • (Figure parameters: γ = 0.002/min, Th = 90 min)
23. System Using Batching and Multicast
   • Requests arriving within a period of time are grouped together
   • Batching allows fewer multicast streams to be used, thus lowering the associated cost (see the sketch below)
   • Users will tolerate some delay, Dmax
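One way to see why batching reduces the number of multicast streams is the same Little's-formula argument used for the unicast cache window, with Dmax playing the role of W: requests arriving within Dmax of a pending multicast join it, so a new batched multicast is dispatched roughly every Dmax + 1/λ minutes. The sketch below is our illustration, not a formula quoted from the paper; parameter values other than Dmax = 6 min and Th = 90 min are assumed.

```python
def batched_multicast_channels(lam, d_max, Th):
    """Average number of central-server multicast channels with batching:
    a new multicast is dispatched roughly every (d_max + 1/lam) minutes and lasts Th minutes."""
    return Th / (d_max + 1.0 / lam)

lam, Th, d_max = 1.0, 90.0, 6.0            # Dmax = 6 min as on the results slide that follows
print(f"without batching: {lam * Th:.0f} channels")          # one unicast stream per request
print(f"with batching (Dmax={d_max} min): {batched_multicast_channels(lam, d_max, Th):.1f} channels")
```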
24. Results - Request Batching
   • When Ns is low enough, a distributed servers architecture based on unicast delivery can achieve lower cost than batching
   • Combined system (λ' = crossover arrival rate shown in the figure):
     - λ < λ'  =>  use batching
     - λ > λ'  =>  use the local server
   • (Figure parameters: Dmax = 6 min, Th = 90 min)
25. Results
   • Unicast and the combined scheme require similar cost
   • Multicast can further reduce the cost, but not significantly
   • Chaining achieves the greatest saving in cost
   • (Figure parameters: γ = 0.002/min, Th = 90 min)
26. Conclusion
   • A distributed servers architecture to provide video-on-demand service
   • Different local caching schemes for video streaming
   • Given a certain cost function, determine which videos and how much video data should be cached
   • The more skewed the video popularity, the more saving a distributed servers architecture can achieve
