SlideShare a Scribd company logo
1 of 11
Download to read offline
Proxy Cache Management for Fine-Grained
                                Scalable Video Streaming
                       Jiangchuan Liu1                        Xiaowen Chu2                          Jianliang Xu2
                    ljc@cse.cuhk.edu.hk                chxw@comp.hkbu.edu.hk                  xujl@comp.hkbu.edu.hk
               1
                   Department of Computer Science and Engineering, The Chinese University of Hong Kong
                            2
                              Department of Computer Science, Hong Kong Baptist University

     Abstract - Caching video objects at proxies close to clients has    assume that the rate (either constant or variable) of the video
  attracted a lot of attention in recent years. To meet diverse client   is predetermined. Due to this non-adaptability, they suffer
  bandwidth conditions, there have been research efforts to              from two limitations: first, it is difficult to meet the diverse
  combine proxy caching with video layering or transcoding.              bandwidth conditions from heterogeneous clients, for a single
  Nevertheless, these adaptive systems suffer from either coarse         streaming rate would either overuse or underuse some client
  adaptation granularity due to the inflexible structures of existing    bandwidths; second, it is not flexible enough to control the
  layered coders or high computation overhead due to the                 backbone (server-to-proxy) bandwidth consumption, since the
  transcoding operations. In this paper, we propose a novel              streaming rate from the proxy to the clients is not adjustable.
  adaptive video caching framework that enables low-cost and
                                                                            While rate adaptability is a salient feature of video objects,
  fine-grained adaptation. The innovative approach employs the
  MPEG-4 Fine-Grained Scalable (FGS) video with post-encoding
                                                                         the use of adaptive videos poses great challenges to caching.
  rate control. We demonstrate that the proposed framework is
                                                                         The problem is particularly complicated by the fact that most
  both network aware and media adaptive: clients can be of               conventional rate adaptation mechanisms are executed during
  heterogeneous streaming rates, and the backbone bandwidth              the encoding process (e.g., adjusting quantizers [7,18]) and,
  consumption can be adaptively controlled. We also examine the          hence, are difficult to apply to cached videos. There have been
  design and management issues in the framework, in particular,          research efforts to combine proxy caching with video layering
  the optimal stream portions to cache and the optimal streaming         or transcoding [6,13,17]. However, these adaptive systems
  rate to each client. Simulation results demonstrate that,              suffer from either coarse adaptation granularity (due to the
  compared to non-adaptive caching, the proposed framework               inflexible structures of existing layered coders) or high
  with optimal cache management not only achieves significant            computation overhead (due to the transcoding operations).
  reduction on transmission costs but also enables flexible utility         In this paper, we propose a novel video caching framework
  assignment for the heterogeneous clients. Meanwhile, its               to achieve low-cost and fine-grained rate adaptation. The
  computational overhead is kept at a low level, implying that it is     innovative approach is to employ the MPEG-4 Fine-Grained
  practically deployable.                                                Scalable (FGS) video with bit-plane coding, which enables
                                                                         post-encoding rate control by partitioning the video stream at
                          I. INTRODUCTION                                any specific rate [8]. These operations can be efficiently
                                                                         implemented at the server or at a proxy with fast response and
    Due to the increasing demands on video distribution over             low cost. The proposed framework is both network aware and
  the Internet, caching video objects at proxies close to clients        media adaptive: clients can be of heterogeneous access
  has attracted much attention in recent years. However, video           bandwidths, and adaptive FGS videos are used to meet the
  objects have several distinct features, which make                     clients’ bandwidth conditions and to control the backbone
  conventional Web caching techniques inefficient, if not                bandwidth consumption.
  entirely inapplicable. In particular, a video object usually has          We examine the critical design and management issues in
  a high data rate and a long playback duration, which                   the proposed framework. Specifically, there is a two-
  combined yield a very high data volume. As an example, a               dimensional space to explore for how to cache an FGS video:
  one-hour MPEG-1 video has a data volume of about 675 MB;               the length and the rate of the portion to be cached. We stress
  caching it as a whole is clearly impractical, as several such          that the selection must take into account the interactivities in
  large streams would exhaust the capacity of a typical cache.           video playback, i.e., the non-uniform access rates of different
    To address these problems, many partial caching                      portions. Moreover, when a cached video is delivered to a
  algorithms have been proposed in recent years [3,4,21],                client, different streaming rates can be selected as long as the
  demonstrating that, even if a small portion of a video is stored       rate is no higher than the client’s available bandwidth.
  at the proxy, the network resource requirement can be                  Consequently, the proxy management becomes considerably
  significantly reduced. Most of these proposals, however,               more complex than that for a non-adaptive video based system.




0-7803-8356-7/04/$20.00 (C) 2004 IEEE                                                                              IEEE INFOCOM 2004
We develop efficient solutions to the above problems. The           of different rates [18]. Though being used in many
  objective is to offer optimal resource utilization as well as       commercial streaming products, this method suffers from high
  satisfactory and fair services to clients.                          replication redundancy. Other recent studies have introduced
     The performance of the proposed framework with optimal           proxy services with active filtering, which reduces the
  proxy cache management is extensively examined in various           bandwidth of a video object by transcoding [2,18]. However,
  aspects. The results demonstrate its superiority over non-          they usually incur much higher computation overhead due to
  adaptive caching schemes for heterogeneous clients. More            transcoding operations.
  importantly, they reveal that our FGS video based adaptive             A more efficient method is to use layered coding (also
  caching system incurs low computation overhead and, hence,          known as scalable coding), which compresses a video into
  is practically deployable.                                          several layers [8]: the most significant layer, called the base
     The rest of this paper is organized as follows. In Section II,   layer, contains the data representing the most important
  we introduce the background as well as related work. Section        features of the video, while additional layers, called
  III describes the FGS video based proxy caching system. The         enhancement layers, contain the data that progressively refine
  problems of optimal proxy management are formulated in              the quality of the reconstructed video. Layering has been
  Section IV. We also present efficient solutions to the              widely used in live video multicast to heterogeneous and
  problems. Their performance is evaluated in Section V. In           isochronous clients [18]. For proxy-assisted streaming with
  Section VI, we further consider the practical issues for            layered videos, Rejaie et al. [6] studied cache replacement and
  deploying our system. Finally, Section VII concludes the            perfecting policies with the objective of alleviating congestion
  paper and offers some future directions.                            for individual clients. Kangasharju et al. [13] simplified the
                                                                      system model by assuming the cached contents are semi-static
             II. BACKGROUND AND RELATED WORK                          and only complete layers are cached. They developed
                                                                      effective heuristics to maximize the total revenue based on a
  A. Video Caching for Homogeneous Clients                            stochastic knapsack model.
     Numerous cache algorithms for Web proxies have been                 Our work differs from these previous studies in two aspects:
  proposed in the past decade [5]. However, as mentioned in           First, we focus on the optimal resource allocation for a set of
  Introduction, several distinct features of video objects like       clients, rather than the performance perceived by an individual
  large volume make the conventional Web caching algorithms           client. This is an important design issue from a system point
  inapplicable. To this end, many segment or interval caching         of view. Second, the previous studies employed conventional
  methods have been proposed, with emphases on cache                  layered coding, where the number of layers is restricted and
  admission and replacement policies [21]. Due to the static          the rate of each layer is often fixed. For example, Kangasharju
  nature of video contents and the disk bandwidth                     et al. [13] focused on two layers only (base layer and one
  considerations at proxies, semi-statically caching popular          enhancement layer). Our work complements them by
  video portions over a relatively long time period, rather than      employing the fine-grained scalable videos to enhance
  dynamically caching them in response to individual client           adaptability.
  requests, has also been suggested [3,4,15]. Wang et al. [3]
  proposed a video staging scheme, in which only the part of a
                                                                      C. Fine-Grained Scalable (FGS) Video
  stream that exceeds a certain cut-off rate (i.e., the bursts of a      FGS generalizes the conventional layering (scalable coding)
  VBR stream) is cached at the proxy. Sen et al. [4] proposed a       methods through a bitplane coding algorithm, which uses
  prefix caching algorithm that caches the initial frames of the      embedded representations for the enhancement layer (also
  video stream. This is particularly attractive in reducing the       called FGS layer) [8]. For illustration, there are 64 (8×8) DCT
  client start-up latency, due to the predictable sequential nature   coefficients for each video block of the enhancement layer; all
  of video accesses.                                                  the most significant bits from the 64 DCT coefficients
    Most of these algorithms assumed non-adaptive/non-                constitute bitplane 0, all the second most significant bits
                                                                      constitute bitplane 1, and so on and so forth. In the output
  scalable videos and bandwidth homogeneous clients, though
                                                                      stream, the bitplanes are placed sequentially to reconstruct the
  some of them also considered work-ahead smoothing to
                                                                      coefficients. A post-encoding filter thus can truncate this
  reduce the backbone bandwidth demand for VBR videos. Our            embedded stream of the enhancement layer to achieve any
  work is motivated by these studies, in particular, the staging      specified output rate, with negligible mismatches due to block
  and prefix caching algorithms, and complements them by              boundary constraints for the truncation [8,12]. We stress that
  focusing on the issues arising from caching scalable videos         this rate control method has two advantages: (1) like
  with heterogeneous clients.                                         transcoding, it enables fine-grained rate adaptation, but the
                                                                      computation overhead of the filter is much lower; (2) like
  B. Video Caching for Heterogeneous Clients                          layered adaptation, it can be applied to stored videos and
     To handle the heterogeneity of client bandwidths, a              enables the proxy to adjust the rate of a cached stream at a low
  straightforward method is to produce replicated video streams       cost. Specifically, for narrowband clients, the proxy can




0-7803-8356-7/04/$20.00 (C) 2004 IEEE                                                                          IEEE INFOCOM 2004
reduce the streaming rate using the filter; for wideband clients,   connection; otherwise, it will continue playback by retrieving
  the proxy can fetch some uncached portion (i.e., higher-order       the suffix. We denote the lengths of the prefix and the suffix
  bitplanes) from the server and assemble it with the cached          by Lt and Ls , respectively. Assume the probability of early
  portion to generate a higher rate stream. Such operations are       terminations is pET , 0 < pET < 1 . The probability of a
  difficult to implement using transcoding, not only because it       client accessing the entire video (also the probability of
  has high computation overhead but also because it changes the       accessing the suffix) is thus 1 − pET . While here we consider
  stream syntax.                                                      this simple model only, the method can be extended to a
     FGS coding has been adopted in the MPEG-4 standard and           multiple-segment case with non-uniform access rates due to
  is undergoing active improvement. We have seen proposals            such interactivities as VCR-like operations in playback (some
  that make efficient use of FGS coding for adaptive video            preliminary results can be found in [22]).
  streaming [19]. However, they did not consider proxy caching.         Finally and most importantly, we advocate scalable adaptive
  To our knowledge, the only work that employed FGS videos            videos in this system, in particular, the MPEG-4 FGS videos.
  in caching is [20]. Their focus however was on developing a         To model an FGS video, we assume that the base layer of the
  general cache management framework, in particular, the              video has a constant rate rbase , which cannot be further
  replacement policies for mixed-media streaming; the FGS             partitioned along the rate axis. As such, the base layer
  coding was used to for the performance evaluation purpose.          represents an ensured lowest playback quality. On the other
                                                                      hand, neglecting the effect of block boundaries for bitplane
     III. SYSTEM MODEL OF FGS VIDEO BASED CACHING                     truncation, the enhancement layer can be adaptively
     The video streaming system consists of a server that stores a    partitioned into any given rate using a filter, either at the
  repository of videos, and a set of proxies on the edge of the       server or at the proxy. Thus, as illustrated in Fig. 1, a proxy
  network. Selected videos are partially cached at the proxies. A     manager can set the streaming rate to a client of class i in the
  client request is first forwarded to a nearby proxy, which          range of [rbase , ci ] .
  intercepts the request and computes a schedule for streaming:
  the cached portion of the video can be delivered to the client
                                                                        Video Server                    Video Proxy
  directly; some uncached portion, if needed, will be fetched




                                                                                                                         Filter/Assembler
  from the server and then relayed to the client. Although this            Filter
                                                                                          Uncached                                           FGS
                                                                                           Portion
  process is similar to that of many existing systems, our model                                                                            stream
                                                                                                                                              (bi)
  has three novel features, making it more general and flexible.                          Backbone         Cache

     First, we consider a relatively more complex network (e.g.,                          Network

  Intranet) behind the proxy, instead of a simple LAN assumed
                                                                           Video                                                             Client
  in most existing studies. Examples include an enterprise               Repository
                                                                                            Proxy          Proxy
                                                                                                                        Monitor             Request
                                                                                           Request        Manager
  network or a campus network, which remains highly                                                                                           (ci)

  heterogeneous in terms of client access bandwidths, due to
  such factors as hardware configurations, connection methods                 Fig. 1. Functionalities of the video server and a proxy.
  (e.g., Ethernet, ADSL, or wireless LAN), and administrative
  policies (e.g., in a campus network, the access bandwidth             The proxy manager also determines which portion of a
  provisioned for a faculty member would be higher than that          video is to be cached. In the conventional non-adaptive video
  for a student). To reflect such heterogeneity, we assume that       caching system, given a cache size for the video, the portion
  there are M classes of clients, and the maximum access              to be cached is simply determined by the length (in terms of
  bandwidth for a client (or simply client bandwidth) of class i      playback time). However, with FGS videos, there is one
  is given by ci , i = 1,2,..., M , which is an upper bound on the    additional dimension to explore: the rate of the portion to be
  video streaming rate from the proxy to a client of class i.         cached. Such flexibility potentially enables better resource
  Without loss of generality, we number the classes in                utilization, but also complicates the proxy cache management.
  ascending order of the client bandwidth, that is,                   In addition, there are two cases in which some uncached
  c1 ≤ c2 ≤ ≤ cM .                                                    portion is to be fetched from the server: (1) the length of the
     Second, we assume that the clients could terminate a video       demanded portion is longer than that of the cached portion;
  playback prematurely after they requested the playback from         and (2) the streaming rate is higher than that of the cached
  the beginning of a video. Existing studies on video server          portion. In the second case, the uncached bitplanes will be
  workloads [10] have revealed that such early terminations           fetched from the server and then assembled with the cached
  occur quite often and, hence, should be considered in system        portion to form a stream of higher rate.
  dimensioning. One approach for modeling early termination is          As in many previous studies [3,4,14], we assume that the
  to partition a video into two parts: a prefix and a suffix, where   contents of the proxy cache are semi-static and updated
  the prefix could be a preview of the video. If a client feels       periodically with changes of the system workloads. For each
  uninterested after watching the prefix, it will terminate the       period, the key issues in proxy management are to find the




0-7803-8356-7/04/$20.00 (C) 2004 IEEE                                                                                 IEEE INFOCOM 2004
optimal length as well as the optimal rate for caching a video,       more the uncached portion to be fetched from the origin
  and to determine the streaming rate for each client of the            server (which incurs a higher transmission cost). As such, it is
  video. To ensure fairness, we let the video streaming rate to         important to find a trade-off between them.
  any client of class i be identical, denoted by bi , bi ≤ ci ,
                                                                        A. Minimizing Transmission Cost
  i = 1,2,..., M . For a multi-video case of N videos, the proxy
  manager also needs to determine an optimal resource (cache              We first consider the transmission cost for a single video
  space and backbone bandwidth) sharing among the videos.               with specified streaming rates for each class of clients, b1 ,
    The key parameters of this model are summarized in Table            b2 , …, bM , bi ≤ ci , i = 1,2,..., M . We attempt to answer the
                                                                        following question: Given a limited cache size H to the video,
  1. We assume all these parameters are known a priori and do
                                                                        which portion of the stream is cached (referred to as a caching
  not change drastically in a period. Moreover, for ease of
                                                                        scheme) such that the transmission cost is minimized? As said,
  exposition, we focus on the interactions among the origin
                                                                        this objective is equivalent to minimizing the backbone
  server, a single proxy, and the clients of the proxy. Our results,
                                                                        bandwidth consumption.
  however, are applicable to the multi-proxy case where each
  proxy serves a non-overlapping set of clients.                           Note that the rates for different positions of the cached
                                                                        portion can be different when using FGS videos. Denote the
              Length of the video, L = Lt + Ls                          rate for position l (measured in the elapsed time from the
      L
                                                                        beginning of the video) of the cached video by r(l ),l ∈ [0, L] .
      Lt      Length of the prefix
                                                                        A caching scheme for the video is therefore uniquely
      Ls      Length of the suffix
                                                                        determined by the shape of r(l ) . We say that the scheme is
      pET     Probability of early terminations                                       L
                                                                        valid if 1) ∫ r(l )dl ≤ H and 2) for any l ∈ [0...L] , rbase ≤ r (l )
                                                                                     0
      H       Proxy cache size for the video                            ≤ cM or r(l ) = 0 . The above two constraints follow the
      rbase   Base layer rate of the video                              cache size limit and the base layer rate limit, respectively.
      λ       Client request rate for the video                            A caching scheme is optimal if it is valid and yields the
      M       Number of client classes                                  minimum backbone bandwidth consumption for fetching the
      pi      Probability that a client is in class i , pi > 0                                   ˆ
                                                                        uncached portion. Let V denote the total volume of the video
      ci      Client bandwidth of class i                               with rate cM (the maximum client bandwidth). Obviously,
                                                                                  ˆ
                                                                        if H ≥ V , the caching scheme r(l ) = cM , l ∈ [0, L] is optimal.
      bi      Streaming rate for a class i client ( rbase ≤ bi ≤ ci )
                                                                        For the case of limited cache size, we show two lemmas that
      αi      Utility of a client in class i , αi = bi /ci              facilitate searching the optimal caching scheme.
      Vˆ                                               ˆ
              Volume of the video with rate cM , V = L ⋅ cM                Lemma 1: If H ≤ rbase ⋅ Lt , the caching scheme
      ˆ
      B       Backbone bandwidth consumption with ideal                          
                                                                                  rbase , l ∈ [0, H / rbase ]
              client utility assignment and no caching                  r (l ) = 
                                                                                                              is optimal.
                                                                                  0, l ∈ (H / rbase , L ]
                                                                                 
                                                                                 
      Table 1. Model parameters for a single video. For the multi-          Proof: This scheme is clearly valid. For any client request,
    video case of N videos, a superscript (k ) is to be added to each
                                                                        a video portion of volume H is saved from being transmitted
    parameter of video k , k = 1,2,..., N .
                                                                        over the backbone. This is the maximum saving per client
                                                                        request that a valid caching scheme could achieve under cache
       IV. THE PROXY CACHE MANAGEMENT PROBLEM                           size H .                                             □
                            AND SOLUTIONS
                                                                                                          ˆ
                                                                          Lemma 2: If rbase ⋅ Lt < H < V , assuming that the size of
    In this section, we consider the proxy cache management
                                                                        the cached prefix is fixed to H t ( ∈ [rbase ⋅ Lt , H ] ), there
  problem for a single video object as well as for multiple
  heterogeneous videos. Our objective in designing the proxy            exists an optimal caching scheme:
  cache management module is two-fold: first, to maximize the                                   rt , l ∈ [0, Lt ]
                                                                                               
                                                                                               
  client utility, which is related to the streaming rate to each                      r (l ) =  rs , l ∈ (Lt , Lc ] ,
                                                                                               
  class of clients; and second, to minimize the transmission cost,                             
                                                                                                0, l ∈ (Lc , L ]
                                                                                               
                                                                                               
                                                                                               
  as the video streaming imposes very high data delivery
  demands to the network. As suggested in previous studies [3],         where rt and rs respectively represent (constant) rates of the
  we assume that the local (i.e., proxy-to-client) transmission         cached prefix and cached suffix, and are given by
  cost is trivial, and the overall transmission cost is a non-          rt = H t / Lt and rs = max { rbase ,(H − H t )/ Ls } ; Lc is the
  decreasing function of the average backbone (i.e., server-to-         total length of the cached portion, Lc = Lt + (H − H t )/ rs .
  proxy) bandwidth consumption. Note that the above two                   The proof of this lemma can be found in [22]. Let’s
  objectives could conflict, because it is obvious that the higher                                                        ˆ
                                                                        concentrate on this case ( rbase ⋅ Lt < H < V ) with a
  the streaming rate (which results in a higher client utility), the    given H t . It is easy to show that rt ≥ rs for an optimal




0-7803-8356-7/04/$20.00 (C) 2004 IEEE                                                                               IEEE INFOCOM 2004
caching scheme because a cached prefix will serve both early                  this has been suggested in previous studies, it is unfair to the
  terminated requests and all other requests. The backbone                      blocked clients. The FGS video, however, offers an alternative
  bandwidth consumption for all requests from class i is                        method by trading off bandwidth with client utility, that is,
  therefore given by                                                            assigning a somewhat lower yet acceptable streaming rate to
    i
   BH ,Ht (bi )=                                                                each class of clients.
                                                                                  More explicitly, we would like to investigate whether there
      λ pi [(1 − pET )(bi L − H ) + pET (bi Lt − H t )],
                                                                               exists some utility assignment to each class of clients of the
     
     
     
                                                 rt ≤ bi ≤ cM            (1)   video, such that the backbone bandwidth consumption is no
                                                                                               ˆ             ˆ
                                                                               more than η B , where B is the backbone bandwidth
     
      λ pi (1 − pET )[bi Ls − rs (Lc − Lt )],      rs < bi < rt
                                                                               consumption with the ideal utility assignment and zero-size
     
      λ pi (1 − pET )bi (L − Lc ),
     
                                               rbase ≤ bi ≤ rs                 cache (i.e., no caching); η thus reflects the factor of backbone
     
                                                                                bandwidth reduction. We are particularly interested in a utility
  which can be easily validated from Fig. 2 (for rs < bi < rt ).                assignment that achieves the best “social welfare”, i.e., the
  It follows that the optimal caching scheme for rbase ⋅ Lt                     total utility of the clients is maximized. This utility
           ˆ
  < H < V can be accomplished by a one-dimensional search                       optimization problem for the single video can be formally
  on H t in the range of [rbase ⋅ Lt , H ] . The optimal solution, or           described as follows:
  the minimum backbone bandwidth consumption, is thus                                                                   M
                                                      M
                                                                                      MU-SV: max          U H ,η =   ∑ i =1 λpi αi ,            (3)
             BH =              min
                     rbase ⋅Lt ≤H t ≤H , rt ≥rs
                                                  ∑ i =1 BH ,Ht (bi ) .
                                                          i
                                                                          (2)
                                                                                                  s.t.    rbase / ci ≤ αi ≤ 1 , i = 1, 2,..., M ,
    Assume the minimum cache grain is v , which could be the                                              αi −1ci −1 ≤ αici , i = 2, 3,..., M ,
  size of a disk block or the size of a GOP (Group-of-Pictures)                                                      ˆ
                                                                                                          BH ≤ ηB ,
  of the video; the cache size H , as well as H t , is always a
  multiple of the grain v . Then the complexity to search BH is                 where U H ,η is the total client utility per unit time for a cache
  O(H / v ) .                                                                   size H and a backbone bandwidth reduction factor η . The
                                                                                constraint αi −1ci −1 ≤ αici (equivalent to bi −1 ≤ bi ) is to
                 ˆ
    For H ≥ V and H ≤ rbase ⋅ Lt , the optimal caching                          preserve the order (priority) of the client classes in resource
  scheme can also be uniquely represented using rt , rs , and Lc .              sharing.
  The corresponding backbone bandwidth consumption thus can
                                                                                   Assume that there is a minimum bandwidth grain w ; the
  be calculated by Eqs. 1 and 2 as well.
                                                                                backbone bandwidth consumption for a certain class of clients
                                                                                is rounded to a multiple of w . Fig. 3 presents an algorithm to
                                                                                solve the problem instance of a given size of the cached prefix,
                                                                                Ht . MU-SV thus can be accomplished by a search on Ht ,
                                                                                similar to that in the previous subsection.
                                                                                  In this algorithm, u i, j ,k represents the maximum total utility
                                                                                per unit time for classes 1 through i , with backbone
                                                                                bandwidth j consumed by classes 1 through i , and backbone
                                                                                                                                     i
                                                                                bandwidth k consumed by class i . Function (BH ,H t )−1 (k ) is
                                                                                                                   i
    Fig. 2. Illustration of different portions of an FGS video stream.          a generalized inverse of BH ,H t (bi ) (see Eq. 1), representing
                                                                                the highest streaming rate for a class i client given that the
  B. Trading off Backbone Bandwidth with Client Utility                         backbone bandwidth consumed by this class is k . A special
                                                                                case is k =0. Recall that we assume λpi > 0 and
     In the above optimization, we assume that the streaming                    1 − pET > 0 in the system; zero backbone bandwidth
  rates for the clients of the video are specified. We now                      consumption implies that Lc = L and the streaming rate is
  consider a more general and flexible scheme that makes                                                                                i
                                                                                no higher than rs . Therefore, if Lc = L , we let (BH ,H t )−1 (0)
  effective use of FGS.
                                                                                be rs to maximize the total utility; otherwise, we set
     Define the utility of a class i client as αi = bi /ci for                      i                          i −1
                                                                                (BH ,H t )−1(0) to 0 and BH ,H t (0) to −∞ . Finally, note that
  rbase ≤ bi ≤ ci . The ideal utility assignment is αi = 1 ,                      i
                                                                                 BH ,H t (bi ) is undefined for bi < rbase and bi > cM ; if directly
  i = 1,2,..., M , i.e., the client bandwidth is fully utilized for
  every class. However, it is clear that the higher the client                  inversing Eq. 1 yields a value in these two ranges, we set
                                                                                    i
  utility, the more the transmission cost or the backbone                       (BH ,H t )−1 (k ) to 0 and cM , respectively.
  bandwidth consumption. To reduce bandwidth consumption,                         The correctness of this algorithm can be found in [22].
  one solution is to block some of the client requests. Although                Given the cache grain v , applying the algorithm for each H t




0-7803-8356-7/04/$20.00 (C) 2004 IEEE                                                                                        IEEE INFOCOM 2004
3
                                               ˆ
  solves problem MU-SV in time O (M ⋅  η B / w  ⋅ H / v ) .                                            N

  Since the accuracy of this algorithm depends on v and w , we
                                                                                   MU-MV: max            ∑ k =1U Hk ) ,η
                                                                                                                 (
                                                                                                                    (k )    (k )   ,               (4)

  shall investigate their impact in the next section through                                                 N
  simulation experiments.                                                                         s.t.   ∑ k =1 H (k ) ≤ H T ,
                                                                                                             N                     N

        u1, j ,k ← −∞ /* Initialization */
                                                                                                          ∑ k =1 BHk )
                                                                                                                  (
                                                                                                                     (k )   ≤ ηT ∑ k =1 B (k ) .
                                                                                                                                        ˆ

                                                                               This is a 2-dimesional (bandwidth and space) version of the
                     ˆ
       for k = 0 to ηB step w do
                                                                             knapsack problem. Given cache grain v and backbone
                           ˆ
             for j = k to ηB step w do
                                                                             bandwidth grain w , it can be solved by extending the known
                 u1, j ,k ← λ p1[(BH ,H t )−1(k )]/ c1
                                   1
                                                                             pseudo-polynomial time algorithm for 1-D knapsack [1:Ch
                                                                             16], and the time complexity of this solution is bounded by
       for i = 2 to M do /* Table filling */
                          ˆ
            for k = 0 to ηB step w do
                                                                               (         N      (k )                                 )
                                                                             O N ⋅ HT ⋅ H ′ ⋅ v−2 ⋅  ηT ∑k =1 B(k) ⋅ B′ ⋅ w−2  , where B ′ =
                                                                                                           N ˆ


                                                                             min{ηT   ∑ k =1  ˆ              ˆ
                                                                                             B , maxk B (k ) } and H ′ = min {H T ,
                y ← (BH ,Ht )−1(k )
                      i
                                                                                  ˆ
                                                                             maxkV (k ) } [22].
                              ˆ
                for j = k to ηB step w do                                                    V. PERFORMANCE EVALUATION
                           max ← −∞                                            In this section, we evaluate the performance of our FGS-
                           for x = 0 to BH−Ht (y ) step w do
                                         i 1
                                           ,                                 based caching system through simulation experiments. We
                                  temp ←λ piy / ci + ui −1, j −k ,x          examine the system along two dimensions: 1) the transmission
                                                                             cost, or backbone bandwidth consumption; and 2) the quality
                              if max < temp
                                                                             of the delivered streams, i.e., the client utility.
                                  then max ← temp , bw ← x
                                                                             A. System Settings
                ui, j ,k   ← max , hi, j ,k ← bw
                                                                               For the single video case, we assume that there are five
        u * ← −∞   /* Extract optimal results */                             classes of clients, and the client bandwidths of the classes are
                         ˆ
       for bw = 0 to ηB step w do                                            exponentially spaced, i.e., c1=128 Kbps and ci = 2ci −1 for
           temp ← uM ,  ηB / w  ⋅w,bw                                      i = 2, 3, 4, 5 , which cover the bandwidths of a broad spectrum
                                                                           of access technologies. For the client population distribution
           if u * < temp                                                     among the classes, ( p1, p1,..., p5 ), we have evaluated various
                then u * ← temp , bw * ← bw                                  settings in our experiments. Due to space limitation, in this
                                                                             paper we present the simulation results for three typical
                  ˆ
       b ′ =  ηB / w  , bw ′ = bw *                                    distributions:
       for i = M down to 1 do                                                  (1) Uniform:       (0.2, 0.2, 0.2, 0.2, 0.2) ;
             αi* ← (BH ,Ht )−1(bw ′)/ ci
                            i
                                                                               (2) S-narrow: (0.5, 0.2, 0.15, 0.1, 0.05) ;
             b ′ ← b ′ − bw ′ , bw ′ ← hi,b ′,bw ′                             (3) S-wide:        (0.05, 0.1, 0.15, 0.2, 0.5) .

      Output:                                                                  The latter two are skewed distributions, respectively
                                                                             dominated by narrowband clients and wideband clients. The
         u * : maximum expect utility for the given H , H t , η              lengths of the entire video and the prefix are set to 100
         αi* : corresponding utility assignment of a class i client          minutes and 20 minutes, respectively. The probability of early
                                                                             terminations is set to 0.3. Although a rigorous evaluation of
        Fig. 3. Algorithm for an instance of MU-SV with given H t .          the client termination behavior is beyond the scope of this
                                                                             work, we note that higher probabilities of early terminations
  C. Proxy Management for Multiple Heterogeneous Videos                      have been observed in reality [10]; in this case, more benefits
     We now extend the above proxy cache management policies                 can be expected from the caching paradigm advocated in our
  to the case of multiple heterogeneous videos. Assume that                  system.
  there are N videos, indexed as 1 through N . Given the total                 We assume that the requests follow a Poisson arrival
  cache size H T and the total bandwidth reduction factor ηT                 process with a mean rate of one request per minute. We
  for all the videos, our objective is to find a cache partitioning                                                       ˆ
                                                                             normalized the backbone bandwidth by B (the backbone
  ( H (k ) , k = 1, 2,..., N ) as well as bandwidth partitioning ( η(k ) ,   bandwidth consumption with ideal utility assignment and no
  k = 1, 2,..., N ) for these videos, such that the system utility                                             ˆ
                                                                             caching) and the cache size by V (the total volume of the
  (total utility of all the clients for the videos) is maximized.            video with rate cM ) for all presented results. Hence, our
  This problem can be formally described as follows:                         conclusions are generally applicable when the parameters are




0-7803-8356-7/04/$20.00 (C) 2004 IEEE                                                                                              IEEE INFOCOM 2004
proportionally scaled. Unless explicitly specified, the default              optimal scheme in our FGS video based caching framework.
  cache grain and backbone bandwidth grain are set at                          Hence, we believe that our optimal scheme is an effective
         ˆ             ˆ
  1/ 200V and 1/ 200B , respectively.                                          means to improve the system performance.
    For the multiple video case of N objects, we use similar
                                                                               B.2 Utility Improvement
  client settings as described above for each single video, and
  assume that the access probabilities of the videos follow a                     We now examine the tradeoff between the client utility and
  Zipf distribution, which has been suggested by workload                      the backbone bandwidth consumption, given that the
  measurements of media servers. With this distribution, the                   streaming rates to the clients can be regulated using the
                                                    N
  access probability of video k is (1/ k )θ / ∑ j = 1 (1/ j )θ ,               filter/assembler at the proxy. We employ the optimal caching
  where θ reflects the skewness among the client populations                   and utility assignment algorithms for our system, as described
  of the videos.                                                               in Section IV.B. Fig. 5 shows the expected utility for all
                                                                               clients as a function of the backbone bandwidth consumption
  B. Evaluation Results for Single Video                                       for different cache sizes and class distributions. Note that the
                                                                               normalized backbone bandwidth is essentially equal to η , as
  B.1 Backbone Bandwidth Reduction
                                                                               shown in Eq. 3.
     In the first set of experiments, we investigate the backbone                 It can be seen that, to achieve the optimal utility ( = 1 ), a
  bandwidth consumption. We are interested in examining the                    relatively high backbone bandwidth is to be consumed if the
  backbone bandwidth reductions by employing the optimal                       cache size is very small, e.g., 40% of backbone bandwidth B     ˆ
  caching scheme, compared to the following two baseline                       with a cache size 0.3V ˆ for the uniform class distribution (Fig.
  schemes:                                                                     4a). However, there is a nonlinear relation between the
                          r ′, l ∈ [0, H / r ′]
                                                                              backbone bandwidth consumption and the client utility. As a
                         
  (1) MaxLen: r(l ) =                           , r ′ = max{rbase, H / L} ;
                          0, l ∈ (H / r ′, L]                                 result, for the same setting, we can achieve an expected client
                         
                         
                                                                                                                            ˆ
                                                                               utility of 0.9 by consuming only 15% of B , that is, a 10%
                           cM , l ∈ [0, H / cM ]
                                                                              utility reduction leads to a 62.5% backbone bandwidth
  (2) MaxRate: r (l ) =                              .
                           0, l ∈ (H / cM , L ]                               reduction. This is particularly evident for the S-narrow
                          
                                                                              distribution, not only because a relatively high volume is
     These two schemes are non-adaptive because they are not                   cached for the stream to a narrowband client, but also because
  aware of the client bandwidth distributions. They also                       a slight reduction of the streaming rate to a wideband client
  resemble the caching schemes for a coarse-grained layering of                will benefit the set of narrowband clients. In this case, the
  2 layers, i.e., caching the base layer only and caching both the             expected utility is over 0.8 even with very limited resources
  base and enhancement layers.                                                                                      ˆ
                                                                               (e.g., backbone bandwidth of 0.05 B and cache size of 0.1V ). ˆ
     Fig. 4 shows the backbone bandwidth reductions for                        For the S-wide class distribution, the reduction is not that
  different cache sizes and class distributions. The client                    significant. However, in this case, the absolute backbone
  bandwidth is assumed to be fully utilized for all classes, that is,          bandwidth consumption is much higher than that for the other
   αi = 1 , i = 1, 2,..., M . We observe remarkable reductions                 two distributions; hence, a slight reduction on the normalized
  achieved by our optimal scheme over the two baseline                         bandwidth would still lead to a great reduction in the absolute
  schemes, which is generally over 10% and sometimes over                      backbone bandwidth consumed.
  50%. The reduction depends on the class distributions, e.g.,                    Such an adaptive setting of utility clearly offers a flexible
  for the S-narrow distribution (Fig. 4b), the reduction is                    space for a designer to explore to either maximize the overall
  particularly high when compared with the MaxRate scheme,                     revenue or minimize the overall cost. On the contrary, if the
  as most of the clients have a relatively low bandwidth and                   cache size is less than 0.1 ˆ and the backbone bandwidth is
                                                                                                            V
  hence, caching the stream of the highest rate becomes                                        ˆ
                                                                               less than 0.5B , a non-adaptive system that fixes the client
  wasteful. In this case, increasing the length of the cached                  utility to one does not even work for any class distribution.
  stream becomes a better alternative. However, compared to
  our optimal scheme, the MaxLen scheme still suffers from                     B.3 Sensitivity to Allocation Grains
  more than 10% bandwidth excess, because it is not flexible in                  In our experiments, the default cache grain v and backbone
  setting the rates for the cached prefix and suffix to better                                                            ˆ
                                                                               bandwidth grain w are set at 1/ 200V and 1/ 200B ,         ˆ
  accommodate early terminated requests. On the contrary, with                 respectively. To investigate their impact, we repeat the
  the S-wide distribution (Fig. 4c), since most clients have high              experiments of the previous subsection with various grain
  bandwidth demands, the MaxRate scheme is better than the                     settings. Fig. 6a shows the maximum gaps in terms of optimal
  MaxLen scheme, but is still suboptimal. Finally, with the                    client utilities between each setting and a finer-grained
  Uniform distribution (Fig. 4a), both the MaxLen and the                                        ˆ                                 ˆ
                                                                               setting: 1/1000B (bandwidth grain) and 1/1000V (cache
  MaxRate schemes are far from satisfactory.                                   grain). We can see that the performance gap quickly decreases
     It is worth noting that, compared to the two non-optimal                  with the refinement of the grains, and the gap for the default
  schemes, there is virtually no extra cost in employing the                                   ˆ           ˆ
                                                                               setting (1/200V and 1/200 B ) is already lower than 0.008.




0-7803-8356-7/04/$20.00 (C) 2004 IEEE                                                                                    IEEE INFOCOM 2004
60                                                                                                                                                               60                                                                                                                                     60

                                                                                                                                                                                                                         C om pared to MaxLen                                                                                                                      Com pared to MaxLen
  Backbone Bandwidth Reduction (%)




                                                      Compared to MaxLen




                                                                                                                                                                  Backbone Bandwidth Reduction (%)




                                                                                                                                                                                                                                                                                                       Backbone Bandwidth Reduction (%)
                                     50                                                                                                                                                               50                                                                                                                                     50
                                                      Compared to MaxRate                                                                                                                                                C om pared to MaxRate                                                                                                                     Com pared to MaxR ate


                                     40                                                                                                                                                               40                                                                                                                                     40


                                     30                                                                                                                                                               30                                                                                                                                     30


                                     20                                                                                                                                                               20                                                                                                                                     20


                                     10                                                                                                                                                               10                                                                                                                                     10


                                     0                                                                                                                                                                  0                                                                                                                                           0
                                               0.05       0.1     0.15      0.2         0.25    0.3     0.35     0.4       0.45             0.5                                                                   0.05    0.1   0.15      0.2        0.25    0.3     0.35   0.4     0.45   0.5                                                             0.05     0.1    0.15    0.2     0.25   0.3     0.35      0.4    0.45    0.5
                                                                            Normalized Cache Size                                                                                                                                         Normalized Cache Size                                                                                                                    Normalized Cache Size


                                                                                            (a)                                                                                                                                                                    (b)                                                                                                                                        (c)
                                                                                                           Fig. 4. Backbone bandwidth reductions achieved by the optimal caching scheme.
                                                                                                      (a) Uniform class distribution; (b) S-narrow class distribution; (c) S-wide class distribution.
                                           1                                                                                                                                                                 1                                                                                                                                            1

                                      0.9                                                                                                                                                                   0.9                                                                                                                                          0.9

                                      0.8                                                                                                                                                                   0.8                                                                                                                                          0.8




                                                                                                                                                                                                                                                                                                                                      Expected Utility
                                                                                                                                                                               Expected Utility
                  Expected Utility




                                      0.7                                                                                                                                                                   0.7                                                                                                                                          0.7

                                      0.6                                                                                                                                                                   0.6                                                                                                                                          0.6

                                                                                                             CacheSize = 0.1                                                                                                                                         CacheSize = 0.1                                                                                                                           CacheSize = 0.1
                                      0.5                                                                                                                                                                   0.5                                                                                                                                          0.5
                                                                                                             CacheSize = 0.3                                                                                                                                         CacheSize = 0.3                                                                                                                           CacheSize = 0.3
                                      0.4                                                                    CacheSize = 0.5                                                                                0.4                                                      CacheSize = 0.5                                                                     0.4                                                   CacheSize = 0.5

                                      0.3                                                                                                                                                                   0.3                                                                                                                                          0.3
                                         0.05             0.1      0.15           0.2      0.25       0.3     0.35        0.4             0.45         0.5                                                     0.05       0.1    0.15          0.2    0.25     0.3       0.35     0.4   0.45     0.5                                                        0.05     0.1    0.15     0.2     0.25       0.3    0.35       0.4     0.45   0.5

                                                                   Normalized Backbone Bandwidth                                                                                                                                Normalized Backbone Bandwidth                                                                                                               Normalized Backbone Bandwidth


                                                                                               (a)                                                                                                                                                                 (b)                                                                                                                                           (c)
                                                                                                            Fig. 5. Expected client utility as a function of backbone bandwidth consumption.
                                                                                                      (a) Uniform class distribution; (b) S-narrow class distribution; (c) S-wide class distribution.

                                                                                                                                                                                                                                                                   baseline scheme with a uniform cache partitioning
                                      0.02
                                                                                                                                          0.02                                                                                                                     ( H (k ) = H T / N ) among the videos and a bandwidth
                                      0.016
                                                                                                                                          0.016                                                                                                                    partitioning proportional to the client population of each
                                                                                                                                                                                                                                                                   video ( η(k ) = (λ(k ) / ∑N λ( j ) ) ⋅ (ηT ∑ N B (i) )/ B(k ) ). The
                                                                                                                                                                                                                                                                                                                     ˆ     ˆ
                                                                                                                            Average Gap
            Max Gap




                                      0.012
                                                                                                                                          0.012
                                                                                                                                                                                                                                                                                             j =1               j =1
                                      0.008                                                                                               0.008                                                                                                                    utility assignment for each single video is optimized using
                                          0.004                                                                                            0.004                                                                                                                   the algorithm for MU-SV. It is easy to verify that the system
                                               0                                                             1/100
                                                                                                                   1/50
                                                                                                                                                  0                                                                                         1/100
                                                                                                                                                                                                                                        1/200
                                                                                                                                                                                                                                                  1/50
                                                                                                                                                                                                                                                                   utility for the baseline scheme is independent of the skew
                                                   1/50                                                  1/200                                        1/50

                                               Ban
                                                          1/100
                                                                  1/200
                                                                                                1/800
                                                                                                     1/400
                                                                                                          eG
                                                                                                             ra
                                                                                                                in
                                                                                                                                                      Ban
                                                                                                                                                          dw
                                                                                                                                                             1/100

                                                                                                                                                               idth
                                                                                                                                                                                1/200
                                                                                                                                                                                                            1/400
                                                                                                                                                                                                                                    1/400
                                                                                                                                                                                                                                1/800     eG
                                                                                                                                                                                                                                            ra
                                                                                                                                                                                                                                               in
                                                                                                                                                                                                                                                                   factor θ . Fig. 7 shows the system utility improvement by the
                                                  d   wid th              1/400
                                                                                                        ch                                                                                                                                ch
                                                                                                                                                                                                                                                                   joint optimization scheme for H T = 0.2∑ N V (k ) ,
                                                                                                                                                                    G
                                                                G ra
                                                                    in
                                                                                   1/800
                                                                                                      Ca
                                                                                                                                                                                                     rain           1/800
                                                                                                                                                                                                                                     Ca                                                                                            ˆ
                                                                                                                                                                                                                                                                                                                            k =1
                                                                                                                                                                                                                                                                           T
                                                                                                                                                                                                                                                                   and η = 0.1      . It is clear that the joint optimization
                                                                              (a)                                                                                                         (b)                                                                      significantly improves the system utility, in particular, with
                                       Fig. 6. Performance gaps as compared to a fine-grained setting                                                                                                                                                              large skew factors, i.e., the client populations of the videos
                                                    ˆ             ˆ
                                     v = 1 / 1000V , w = 1/1000B . (a) Max gap; (b) Average gap.                                                                                                                                                                   are highly heterogeneous. For the S-wide distribution (Fig.
                                                                                                                                                                                                                                                                   7c), which has a relatively low system utility for the baseline
    This is negligible from a client perception point of view.                                                                                                                                                                                                     scheme under the given settings, the utility improvement is
  We also show the average gaps in Fig. 6b, which are in fact                                                                                                                                                                                                      over 30% for high skew factors.
  much smaller than the corresponding maximum gaps shown
                                                                                                                                                                                                                                                                     To identify respective contributions of the two
  in Fig. 6a. Therefore, we believe that the default setting is
                                                                                                                                                                                                                                                                   optimization dimensions (i.e., space and bandwidth), we also
  reasonably good, especially considering that the computation
                                                                                                                                                                                                                                                                   show in Fig. 7 the improvements achieved by employing the
  times with this setting are generally less than 30 ms on a
                                                                                                                                                                                                                                                                   optimal cache partitioning alone (Cache Optimal) and the
  common PC (Pentium III 1 GHz).
                                                                                                                                                                                                                                                                   optimal bandwidth partitioning alone (Band Optimal). Their
          C. Evaluation Results for Multiple Videos                                                                                                                                                                                                                corresponding bandwidth and cache partitionings are
                                                                                                                                                                                                                                                                   proportional and uniform, respectively. It can be seen that,
    Finally, we investigate the performance of the proxy                                                                                                                                                                                                           though using either optimal partitioning alone also improves
  management algorithm for multiple videos. We implement                                                                                                                                                                                                           the system utility, there remain remarkable gaps as compared
  the joint optimization scheme MU-MV for cache and                                                                                                                                                                                                                to that of the joint optimization, especially for higher skew
  bandwidth partitioning (see Section IV.C), as well as a                                                                                                                                                                                                          factors. This is because the utility of each video is a




0-7803-8356-7/04/$20.00 (C) 2004 IEEE                                                                                                                                                                                                                                                                                                                                                             IEEE INFOCOM 2004
15                                                                            5                                                                                                                                  35


                                              Joint Optimal                                                                Joint Optimal                                                                                                        30
                               12                                                                            4                                                                                                                                                        Joint Optimal
                                              Band Optimal                                                                 Band Optimal
   Utility Improvement (%) x




                                                                                 Utility Improvement (%) x




                                                                                                                                                                                                                    Utility Improvement (%) x
                                                                                                                                                                                                                                                                      Band Optimal
                                              Cache Optimal                                                                Cache Optimal                                                                                                        25                    Cache Optimal
                               9                                                                             3
                                                                                                                                                                                                                                                20


                               6                                                                             2                                                                                                                                  15

                                                                                                                                                                                                                                                10
                               3                                                                             1
                                                                                                                                                                                                                                                5

                               0                                                                             0                                                                                                                                  0
                                    0   0.2          0.4         0.6   0.8   1                                   0   0.2          0.4         0.6                                        0.8              1                                          0          0.2          0.4               0.6          0.8   1
                                                      Skew Factor                                                                  Skew Factor                                                                                                                                Skew Factor

                                                           (a)                                                                          (b)                                                                                                                                        (c)
                                 Fig. 7. System utility improvement as a function of skew factor θ ( H T = 0.2∑ N V (k ) , η T = 0.1 ) for multi-video allocation. The
                                                                                                                        ˆ
                                                                                                                   k =1
                               system utilities of the baseline scheme for the Uniform, S-narrow, and S-wide distributions are 0.77, 0.92, and 0.56, respectively. (a)
                               Uniform distribution; (b) S-narrow distribution; (c) S-wide distribution.

  nonlinear function of cache space or backbone bandwidth                                                                                    Using the optimal caching schemes, we calculate the
  consumed by the video and, hence, the uniform cache                                                                                      backbone bandwidth reduction of the FGS video based
  partitioning or the proportional bandwidth partitioning alone                                                                            system against the stream replication based system, as shown
  is suboptimal for these heterogeneous videos. As such, we                                                                                in Fig. 8. For the Uniform and S-narrow distributions, the
  believe that it is important to use the joint optimization for                                                                           reduction is significant, often over 40% and sometimes over
  the videos.                                                                                                                              60%. For the S-wide distribution, the reduction is relatively
                                                                                                                                           smaller because the backbone bandwidth consumption is
                     VI. COMPARISONS AND PRACTICAL CONSIDERATIONS                                                                          dominated by the requests from the wideband clients.
  A. Scalable Video or Replicated Video?                                                                                                   Nevertheless, the absolute value of saved bandwidth remains
                                                                                                                                           high enough in this case, as discussed in Section V-B.2.
     As mentioned in Section II, yet another approach to handle                                                                                                                           80
  the client heterogeneity is stream replication [9,18]; that is,
                                                                                                                                                      Backbone Bandwidth Reduction (%)




                                                                                                                                                                                          70                  Uniform
  on the server’s side, replicated streams are generated to meet                                                                                                                                              S-narrow
                                                                                                                                                                                          60                  S-wide
  the bandwidth condition of each class of clients. Stream
                                                                                                                                                                                          50
  replication has several advantages, such as simplicity and
  compatibility with conventional non-scalable video coders.                                                                                                                              40

  Hence, it has been widely used in existing commercial                                                                                                                                   30

  streaming systems, e.g., the RealNetwork’s SureStream.                                                                                                                                  20

  Nevertheless, replication also leads to data redundancy, not                                                                                                                            10

  only in server storage but also in proxy cache and bandwidth                                                                                                                                 0
  consumption.                                                                                                                                                                                     0.05       0.1               0.15                     0.2   0.25    0.3    0.35       0.4         0.45   0.5
                                                                                                                                                                                                                                                     Normalized Cache Size
     Scalable video and replicated video based streaming
                                                                                                                                                       Fig. 8. Backbone bandwidth reduction of FGS video
  systems have been compared in [9] with no proxy caching. A                                                                                        based caching against stream replication based caching for
  cache-aware comparison was shown in [16]. They considered                                                                                         different cache sizes and class distributions.
  coarse-grained layering (two layers) and the metric of                                                                                      Next, we consider the case of flexible utility assignment,
  interest is the request blocking probability. In this set of                                                                             where the client utility can be lower than one. The problem
  experiments, we try to complement these works from a client                                                                              of optimal caching and client utility assignment for the
  utility and bandwidth consumption point of view with the use                                                                             stream replication based system can be formulated as follows:
  of FGS videos.                                                                                                                                                                                                                                                        M
     We first investigate the backbone bandwidth consumptions                                                                                 MU-REP: max                                                                               U H ,η =                ∑ i =1 λpi αi ,                                       (7)
  of the two approaches when the client bandwidth is to be
                                                                                                                                                                                                      s.t.                               rmin / ci ≤ αi ≤ 1 , i = 1, 2,..., M ,
  fully utilized. To make a fair comparison, we develop the
  optimal caching scheme for the stream replication based                                                                                                                                                                                αi −1ci −1 ≤ αici , i = 2, 3,..., M ,
  system. In this system, the cached portion of a stream serves                                                                                                                                                                                          M
  the clients of its own class only, and its rate is equal to the                                                                                                                                                                           ∑ i =1 hi                            ˆ
                                                                                                                                                                                                                                                                 ≤ H , and BH ≤ ηB ,
  client bandwidth of the class. The problem is thus how to                                                                                where hi is the cache size allocated to stream i , and rmin is
  partition a cache with given size H for the M replicated                                                                                 the lowest streaming rate, which is set to rbase in the
  streams of the video. This is a variation of the cache                                                                                   experiments to ensure a fair comparison. This problem can
  allocation problem for multiple heterogeneous videos, as                                                                                 also be solved by checking different partitionings of the
  solved in [14].                                                                                                                          cache and, for each partitioning, the optimal utility




0-7803-8356-7/04/$20.00 (C) 2004 IEEE                                                                                                                                                                                                                                              IEEE INFOCOM 2004
Proxy Cache Management for Fine-Grained Scalable Video Streaming
Proxy Cache Management for Fine-Grained Scalable Video Streaming

More Related Content

What's hot

Impact of Video Encoding Parameters on Dynamic Video Transcoding
Impact of Video Encoding Parameters on Dynamic Video TranscodingImpact of Video Encoding Parameters on Dynamic Video Transcoding
Impact of Video Encoding Parameters on Dynamic Video TranscodingVideoguy
 
EXPERIENCES WITH HIGH DEFINITION INTERACTIVE VIDEO ...
EXPERIENCES WITH HIGH DEFINITION INTERACTIVE VIDEO ...EXPERIENCES WITH HIGH DEFINITION INTERACTIVE VIDEO ...
EXPERIENCES WITH HIGH DEFINITION INTERACTIVE VIDEO ...Videoguy
 
Next generation video compression
Next generation video compressionNext generation video compression
Next generation video compressionEricsson Slides
 
Applied technology
Applied technologyApplied technology
Applied technologyErica Fressa
 
Understanding Low And Scalable Mpi Latency
Understanding Low And Scalable Mpi LatencyUnderstanding Low And Scalable Mpi Latency
Understanding Low And Scalable Mpi Latencyseiland
 
Webcast: Reduce latency, improve analytics and maximize asset utilization in ...
Webcast: Reduce latency, improve analytics and maximize asset utilization in ...Webcast: Reduce latency, improve analytics and maximize asset utilization in ...
Webcast: Reduce latency, improve analytics and maximize asset utilization in ...Emulex Corporation
 
Motion Vector Recovery for Real-time H.264 Video Streams
Motion Vector Recovery for Real-time H.264 Video StreamsMotion Vector Recovery for Real-time H.264 Video Streams
Motion Vector Recovery for Real-time H.264 Video StreamsIDES Editor
 
Building Cloud-ready Video Transcoding System for Content Delivery Networks (...
Building Cloud-ready Video Transcoding System for Content Delivery Networks (...Building Cloud-ready Video Transcoding System for Content Delivery Networks (...
Building Cloud-ready Video Transcoding System for Content Delivery Networks (...Zhenyun Zhuang
 
Mobile-Based Video Caching Architecture Based on Billboard Manager
Mobile-Based Video Caching Architecture Based on Billboard Manager Mobile-Based Video Caching Architecture Based on Billboard Manager
Mobile-Based Video Caching Architecture Based on Billboard Manager csandit
 
IBM VideoCharger and Digital Library MediaBase.doc
IBM VideoCharger and Digital Library MediaBase.docIBM VideoCharger and Digital Library MediaBase.doc
IBM VideoCharger and Digital Library MediaBase.docVideoguy
 
Leveraging External Applications For DOCSIS 3.1 HFC Plant Optimization
Leveraging External Applications For DOCSIS 3.1 HFC Plant OptimizationLeveraging External Applications For DOCSIS 3.1 HFC Plant Optimization
Leveraging External Applications For DOCSIS 3.1 HFC Plant OptimizationCisco Service Provider
 
QLogic Adapters & Virtualized Environments
QLogic Adapters & Virtualized EnvironmentsQLogic Adapters & Virtualized Environments
QLogic Adapters & Virtualized EnvironmentsQLogic Corporation
 
An SDN Based Approach To Measuring And Optimizing ABR Video Quality Of Experi...
An SDN Based Approach To Measuring And Optimizing ABR Video Quality Of Experi...An SDN Based Approach To Measuring And Optimizing ABR Video Quality Of Experi...
An SDN Based Approach To Measuring And Optimizing ABR Video Quality Of Experi...Cisco Service Provider
 
Experiences with High-Definition Video Conferencing
Experiences with High-Definition Video ConferencingExperiences with High-Definition Video Conferencing
Experiences with High-Definition Video ConferencingVideoguy
 
Analysis of Adaptive Streaming for Hybrid CDN/P2P Live Video Systems
Analysis of Adaptive Streaming for Hybrid CDN/P2P Live Video SystemsAnalysis of Adaptive Streaming for Hybrid CDN/P2P Live Video Systems
Analysis of Adaptive Streaming for Hybrid CDN/P2P Live Video SystemsKevin Tong
 
GY-HM750E
GY-HM750EGY-HM750E
GY-HM750EAVNed
 
Subjective quality evaluation of the upcoming HEVC video compression standard
Subjective quality evaluation of the upcoming HEVC video compression standard Subjective quality evaluation of the upcoming HEVC video compression standard
Subjective quality evaluation of the upcoming HEVC video compression standard Touradj Ebrahimi
 

What's hot (20)

Impact of Video Encoding Parameters on Dynamic Video Transcoding
Impact of Video Encoding Parameters on Dynamic Video TranscodingImpact of Video Encoding Parameters on Dynamic Video Transcoding
Impact of Video Encoding Parameters on Dynamic Video Transcoding
 
EXPERIENCES WITH HIGH DEFINITION INTERACTIVE VIDEO ...
EXPERIENCES WITH HIGH DEFINITION INTERACTIVE VIDEO ...EXPERIENCES WITH HIGH DEFINITION INTERACTIVE VIDEO ...
EXPERIENCES WITH HIGH DEFINITION INTERACTIVE VIDEO ...
 
Next generation video compression
Next generation video compressionNext generation video compression
Next generation video compression
 
10 fn s42
10 fn s4210 fn s42
10 fn s42
 
Applied technology
Applied technologyApplied technology
Applied technology
 
Understanding Low And Scalable Mpi Latency
Understanding Low And Scalable Mpi LatencyUnderstanding Low And Scalable Mpi Latency
Understanding Low And Scalable Mpi Latency
 
Webcast: Reduce latency, improve analytics and maximize asset utilization in ...
Webcast: Reduce latency, improve analytics and maximize asset utilization in ...Webcast: Reduce latency, improve analytics and maximize asset utilization in ...
Webcast: Reduce latency, improve analytics and maximize asset utilization in ...
 
Motion Vector Recovery for Real-time H.264 Video Streams
Motion Vector Recovery for Real-time H.264 Video StreamsMotion Vector Recovery for Real-time H.264 Video Streams
Motion Vector Recovery for Real-time H.264 Video Streams
 
Building Cloud-ready Video Transcoding System for Content Delivery Networks (...
Building Cloud-ready Video Transcoding System for Content Delivery Networks (...Building Cloud-ready Video Transcoding System for Content Delivery Networks (...
Building Cloud-ready Video Transcoding System for Content Delivery Networks (...
 
Mobile-Based Video Caching Architecture Based on Billboard Manager
Mobile-Based Video Caching Architecture Based on Billboard Manager Mobile-Based Video Caching Architecture Based on Billboard Manager
Mobile-Based Video Caching Architecture Based on Billboard Manager
 
Digital TV, IPTV
Digital TV, IPTVDigital TV, IPTV
Digital TV, IPTV
 
IBM VideoCharger and Digital Library MediaBase.doc
IBM VideoCharger and Digital Library MediaBase.docIBM VideoCharger and Digital Library MediaBase.doc
IBM VideoCharger and Digital Library MediaBase.doc
 
Leveraging External Applications For DOCSIS 3.1 HFC Plant Optimization
Leveraging External Applications For DOCSIS 3.1 HFC Plant OptimizationLeveraging External Applications For DOCSIS 3.1 HFC Plant Optimization
Leveraging External Applications For DOCSIS 3.1 HFC Plant Optimization
 
QLogic Adapters & Virtualized Environments
QLogic Adapters & Virtualized EnvironmentsQLogic Adapters & Virtualized Environments
QLogic Adapters & Virtualized Environments
 
An SDN Based Approach To Measuring And Optimizing ABR Video Quality Of Experi...
An SDN Based Approach To Measuring And Optimizing ABR Video Quality Of Experi...An SDN Based Approach To Measuring And Optimizing ABR Video Quality Of Experi...
An SDN Based Approach To Measuring And Optimizing ABR Video Quality Of Experi...
 
Experiences with High-Definition Video Conferencing
Experiences with High-Definition Video ConferencingExperiences with High-Definition Video Conferencing
Experiences with High-Definition Video Conferencing
 
Analysis of Adaptive Streaming for Hybrid CDN/P2P Live Video Systems
Analysis of Adaptive Streaming for Hybrid CDN/P2P Live Video SystemsAnalysis of Adaptive Streaming for Hybrid CDN/P2P Live Video Systems
Analysis of Adaptive Streaming for Hybrid CDN/P2P Live Video Systems
 
GY-HM750E
GY-HM750EGY-HM750E
GY-HM750E
 
Subjective quality evaluation of the upcoming HEVC video compression standard
Subjective quality evaluation of the upcoming HEVC video compression standard Subjective quality evaluation of the upcoming HEVC video compression standard
Subjective quality evaluation of the upcoming HEVC video compression standard
 
H.263 Video Codec
H.263 Video CodecH.263 Video Codec
H.263 Video Codec
 

Similar to Proxy Cache Management for Fine-Grained Scalable Video Streaming

DOTNET 2013 IEEE CLOUDCOMPUTING PROJECT An adaptive cloud downloading service
DOTNET 2013 IEEE CLOUDCOMPUTING PROJECT An adaptive cloud downloading serviceDOTNET 2013 IEEE CLOUDCOMPUTING PROJECT An adaptive cloud downloading service
DOTNET 2013 IEEE CLOUDCOMPUTING PROJECT An adaptive cloud downloading serviceIEEEGLOBALSOFTTECHNOLOGIES
 
Decision Making Analysis of Video Streaming Algorithm for Private Cloud Compu...
Decision Making Analysis of Video Streaming Algorithm for Private Cloud Compu...Decision Making Analysis of Video Streaming Algorithm for Private Cloud Compu...
Decision Making Analysis of Video Streaming Algorithm for Private Cloud Compu...IJECEIAES
 
A COMPARISON OF CACHE REPLACEMENT ALGORITHMS FOR VIDEO SERVICES
A COMPARISON OF CACHE REPLACEMENT ALGORITHMS FOR VIDEO SERVICESA COMPARISON OF CACHE REPLACEMENT ALGORITHMS FOR VIDEO SERVICES
A COMPARISON OF CACHE REPLACEMENT ALGORITHMS FOR VIDEO SERVICESijcsit
 
Qwilt transparent caching-6keyfactors
Qwilt transparent caching-6keyfactorsQwilt transparent caching-6keyfactors
Qwilt transparent caching-6keyfactorsbui thequan
 
Enhancement of QOS in Cloud Front through Optimization of Video Transcoding f...
Enhancement of QOS in Cloud Front through Optimization of Video Transcoding f...Enhancement of QOS in Cloud Front through Optimization of Video Transcoding f...
Enhancement of QOS in Cloud Front through Optimization of Video Transcoding f...IRJET Journal
 
International Journal of Computer Science, Engineering and Information Techno...
International Journal of Computer Science, Engineering and Information Techno...International Journal of Computer Science, Engineering and Information Techno...
International Journal of Computer Science, Engineering and Information Techno...ijcseit
 
Caching necessity and benifits for mobile Operator
Caching necessity and benifits for mobile OperatorCaching necessity and benifits for mobile Operator
Caching necessity and benifits for mobile OperatorMd. Abdul Hadi Dipu
 
Survey on caching and replication algorithm for content distribution in peer ...
Survey on caching and replication algorithm for content distribution in peer ...Survey on caching and replication algorithm for content distribution in peer ...
Survey on caching and replication algorithm for content distribution in peer ...ijcseit
 
A Benchmark to Evaluate Mobile Video Upload to Cloud Infrastructures
A Benchmark to Evaluate Mobile Video Upload to Cloud InfrastructuresA Benchmark to Evaluate Mobile Video Upload to Cloud Infrastructures
A Benchmark to Evaluate Mobile Video Upload to Cloud InfrastructuresUniversity of Southern California
 
Enhancement of QOS in Cloudfront Through Optimization of Video Transcoding f...
Enhancement of QOS in Cloudfront Through Optimization of Video Transcoding  f...Enhancement of QOS in Cloudfront Through Optimization of Video Transcoding  f...
Enhancement of QOS in Cloudfront Through Optimization of Video Transcoding f...IRJET Journal
 
International Journal of Engineering and Science Invention (IJESI)
International Journal of Engineering and Science Invention (IJESI)International Journal of Engineering and Science Invention (IJESI)
International Journal of Engineering and Science Invention (IJESI)inventionjournals
 
P2P Video-On-Demand Systems
P2P Video-On-Demand SystemsP2P Video-On-Demand Systems
P2P Video-On-Demand SystemsAshwini More
 
Optimal Streaming Protocol for VoD Using Clients' Residual Bandwidth
Optimal Streaming Protocol for VoD Using Clients' Residual BandwidthOptimal Streaming Protocol for VoD Using Clients' Residual Bandwidth
Optimal Streaming Protocol for VoD Using Clients' Residual BandwidthIDES Editor
 
Chaining Algorithm and Protocol for Peer-to-Peer Streaming Video on Demand Sy...
Chaining Algorithm and Protocol for Peer-to-Peer Streaming Video on Demand Sy...Chaining Algorithm and Protocol for Peer-to-Peer Streaming Video on Demand Sy...
Chaining Algorithm and Protocol for Peer-to-Peer Streaming Video on Demand Sy...ijwmn
 
COST-EFFECTIVE LOW-DELAY DESIGN FOR MULTI-PARTY CLOUD VIDEO CONFERENCING
 COST-EFFECTIVE LOW-DELAY DESIGN FOR MULTI-PARTY CLOUD VIDEO CONFERENCING COST-EFFECTIVE LOW-DELAY DESIGN FOR MULTI-PARTY CLOUD VIDEO CONFERENCING
COST-EFFECTIVE LOW-DELAY DESIGN FOR MULTI-PARTY CLOUD VIDEO CONFERENCINGnexgentechnology
 
A new storage architecture for a flash memory video server
A new storage architecture for a flash memory video serverA new storage architecture for a flash memory video server
A new storage architecture for a flash memory video servereSAT Journals
 

Similar to Proxy Cache Management for Fine-Grained Scalable Video Streaming (20)

DOTNET 2013 IEEE CLOUDCOMPUTING PROJECT An adaptive cloud downloading service
DOTNET 2013 IEEE CLOUDCOMPUTING PROJECT An adaptive cloud downloading serviceDOTNET 2013 IEEE CLOUDCOMPUTING PROJECT An adaptive cloud downloading service
DOTNET 2013 IEEE CLOUDCOMPUTING PROJECT An adaptive cloud downloading service
 
Decision Making Analysis of Video Streaming Algorithm for Private Cloud Compu...
Decision Making Analysis of Video Streaming Algorithm for Private Cloud Compu...Decision Making Analysis of Video Streaming Algorithm for Private Cloud Compu...
Decision Making Analysis of Video Streaming Algorithm for Private Cloud Compu...
 
A COMPARISON OF CACHE REPLACEMENT ALGORITHMS FOR VIDEO SERVICES
A COMPARISON OF CACHE REPLACEMENT ALGORITHMS FOR VIDEO SERVICESA COMPARISON OF CACHE REPLACEMENT ALGORITHMS FOR VIDEO SERVICES
A COMPARISON OF CACHE REPLACEMENT ALGORITHMS FOR VIDEO SERVICES
 
Qwilt transparent caching-6keyfactors
Qwilt transparent caching-6keyfactorsQwilt transparent caching-6keyfactors
Qwilt transparent caching-6keyfactors
 
Simulation Study of Video Streaming in Multi-Hop Network
Simulation Study of Video Streaming in Multi-Hop NetworkSimulation Study of Video Streaming in Multi-Hop Network
Simulation Study of Video Streaming in Multi-Hop Network
 
F04024549
F04024549F04024549
F04024549
 
Enhancement of QOS in Cloud Front through Optimization of Video Transcoding f...
Enhancement of QOS in Cloud Front through Optimization of Video Transcoding f...Enhancement of QOS in Cloud Front through Optimization of Video Transcoding f...
Enhancement of QOS in Cloud Front through Optimization of Video Transcoding f...
 
International Journal of Computer Science, Engineering and Information Techno...
International Journal of Computer Science, Engineering and Information Techno...International Journal of Computer Science, Engineering and Information Techno...
International Journal of Computer Science, Engineering and Information Techno...
 
Caching necessity and benifits for mobile Operator
Caching necessity and benifits for mobile OperatorCaching necessity and benifits for mobile Operator
Caching necessity and benifits for mobile Operator
 
Survey on caching and replication algorithm for content distribution in peer ...
Survey on caching and replication algorithm for content distribution in peer ...Survey on caching and replication algorithm for content distribution in peer ...
Survey on caching and replication algorithm for content distribution in peer ...
 
A Benchmark to Evaluate Mobile Video Upload to Cloud Infrastructures
A Benchmark to Evaluate Mobile Video Upload to Cloud InfrastructuresA Benchmark to Evaluate Mobile Video Upload to Cloud Infrastructures
A Benchmark to Evaluate Mobile Video Upload to Cloud Infrastructures
 
Enhancement of QOS in Cloudfront Through Optimization of Video Transcoding f...
Enhancement of QOS in Cloudfront Through Optimization of Video Transcoding  f...Enhancement of QOS in Cloudfront Through Optimization of Video Transcoding  f...
Enhancement of QOS in Cloudfront Through Optimization of Video Transcoding f...
 
International Journal of Engineering and Science Invention (IJESI)
International Journal of Engineering and Science Invention (IJESI)International Journal of Engineering and Science Invention (IJESI)
International Journal of Engineering and Science Invention (IJESI)
 
P2P Video-On-Demand Systems
P2P Video-On-Demand SystemsP2P Video-On-Demand Systems
P2P Video-On-Demand Systems
 
Optimal Streaming Protocol for VoD Using Clients' Residual Bandwidth
Optimal Streaming Protocol for VoD Using Clients' Residual BandwidthOptimal Streaming Protocol for VoD Using Clients' Residual Bandwidth
Optimal Streaming Protocol for VoD Using Clients' Residual Bandwidth
 
Chaining Algorithm and Protocol for Peer-to-Peer Streaming Video on Demand Sy...
Chaining Algorithm and Protocol for Peer-to-Peer Streaming Video on Demand Sy...Chaining Algorithm and Protocol for Peer-to-Peer Streaming Video on Demand Sy...
Chaining Algorithm and Protocol for Peer-to-Peer Streaming Video on Demand Sy...
 
COST-EFFECTIVE LOW-DELAY DESIGN FOR MULTI-PARTY CLOUD VIDEO CONFERENCING
 COST-EFFECTIVE LOW-DELAY DESIGN FOR MULTI-PARTY CLOUD VIDEO CONFERENCING COST-EFFECTIVE LOW-DELAY DESIGN FOR MULTI-PARTY CLOUD VIDEO CONFERENCING
COST-EFFECTIVE LOW-DELAY DESIGN FOR MULTI-PARTY CLOUD VIDEO CONFERENCING
 
Cg25492495
Cg25492495Cg25492495
Cg25492495
 
An adaptive cloud downloading service
An adaptive cloud downloading serviceAn adaptive cloud downloading service
An adaptive cloud downloading service
 
A new storage architecture for a flash memory video server
A new storage architecture for a flash memory video serverA new storage architecture for a flash memory video server
A new storage architecture for a flash memory video server
 

More from Videoguy

Free-riding Resilient Video Streaming in Peer-to-Peer Networks
Free-riding Resilient Video Streaming in Peer-to-Peer NetworksFree-riding Resilient Video Streaming in Peer-to-Peer Networks
Free-riding Resilient Video Streaming in Peer-to-Peer NetworksVideoguy
 
Video Streaming
Video StreamingVideo Streaming
Video StreamingVideoguy
 
Reaching a Broader Audience
Reaching a Broader AudienceReaching a Broader Audience
Reaching a Broader AudienceVideoguy
 
Impact of FEC Overhead on Scalable Video Streaming
Impact of FEC Overhead on Scalable Video StreamingImpact of FEC Overhead on Scalable Video Streaming
Impact of FEC Overhead on Scalable Video StreamingVideoguy
 
Video Streaming Services – Stage 1
Video Streaming Services – Stage 1Video Streaming Services – Stage 1
Video Streaming Services – Stage 1Videoguy
 
Flash Live Video Streaming Software
Flash Live Video Streaming SoftwareFlash Live Video Streaming Software
Flash Live Video Streaming SoftwareVideoguy
 
Videoconference Streaming Solutions Cookbook
Videoconference Streaming Solutions CookbookVideoconference Streaming Solutions Cookbook
Videoconference Streaming Solutions CookbookVideoguy
 
Streaming Video Formaten
Streaming Video FormatenStreaming Video Formaten
Streaming Video FormatenVideoguy
 
iPhone Live Video Streaming Software
iPhone Live Video Streaming SoftwareiPhone Live Video Streaming Software
iPhone Live Video Streaming SoftwareVideoguy
 
Glow: Video streaming training guide - Firefox
Glow: Video streaming training guide - FirefoxGlow: Video streaming training guide - Firefox
Glow: Video streaming training guide - FirefoxVideoguy
 
Video and Streaming in Nokia Phones v1.0
Video and Streaming in Nokia Phones v1.0Video and Streaming in Nokia Phones v1.0
Video and Streaming in Nokia Phones v1.0Videoguy
 
Video Streaming across wide area networks
Video Streaming across wide area networksVideo Streaming across wide area networks
Video Streaming across wide area networksVideoguy
 
University Information Systems Product Service Offering
University Information Systems Product Service OfferingUniversity Information Systems Product Service Offering
University Information Systems Product Service OfferingVideoguy
 
Video Communications and Video Streaming
Video Communications and Video StreamingVideo Communications and Video Streaming
Video Communications and Video StreamingVideoguy
 
Mixed Streaming of Video over Wireless Networks
Mixed Streaming of Video over Wireless NetworksMixed Streaming of Video over Wireless Networks
Mixed Streaming of Video over Wireless NetworksVideoguy
 
A Streaming Media Primer
A Streaming Media PrimerA Streaming Media Primer
A Streaming Media PrimerVideoguy
 
Video streaming
Video streamingVideo streaming
Video streamingVideoguy
 
Streaming Video
Streaming VideoStreaming Video
Streaming VideoVideoguy
 
Video_Streaming
Video_StreamingVideo_Streaming
Video_StreamingVideoguy
 

More from Videoguy (20)

Adobe
AdobeAdobe
Adobe
 
Free-riding Resilient Video Streaming in Peer-to-Peer Networks
Free-riding Resilient Video Streaming in Peer-to-Peer NetworksFree-riding Resilient Video Streaming in Peer-to-Peer Networks
Free-riding Resilient Video Streaming in Peer-to-Peer Networks
 
Video Streaming
Video StreamingVideo Streaming
Video Streaming
 
Reaching a Broader Audience
Reaching a Broader AudienceReaching a Broader Audience
Reaching a Broader Audience
 
Impact of FEC Overhead on Scalable Video Streaming
Impact of FEC Overhead on Scalable Video StreamingImpact of FEC Overhead on Scalable Video Streaming
Impact of FEC Overhead on Scalable Video Streaming
 
Video Streaming Services – Stage 1
Video Streaming Services – Stage 1Video Streaming Services – Stage 1
Video Streaming Services – Stage 1
 
Flash Live Video Streaming Software
Flash Live Video Streaming SoftwareFlash Live Video Streaming Software
Flash Live Video Streaming Software
 
Videoconference Streaming Solutions Cookbook
Videoconference Streaming Solutions CookbookVideoconference Streaming Solutions Cookbook
Videoconference Streaming Solutions Cookbook
 
Streaming Video Formaten
Streaming Video FormatenStreaming Video Formaten
Streaming Video Formaten
 
iPhone Live Video Streaming Software
iPhone Live Video Streaming SoftwareiPhone Live Video Streaming Software
iPhone Live Video Streaming Software
 
Glow: Video streaming training guide - Firefox
Glow: Video streaming training guide - FirefoxGlow: Video streaming training guide - Firefox
Glow: Video streaming training guide - Firefox
 
Video and Streaming in Nokia Phones v1.0
Video and Streaming in Nokia Phones v1.0Video and Streaming in Nokia Phones v1.0
Video and Streaming in Nokia Phones v1.0
 
Video Streaming across wide area networks
Video Streaming across wide area networksVideo Streaming across wide area networks
Video Streaming across wide area networks
 
University Information Systems Product Service Offering
University Information Systems Product Service OfferingUniversity Information Systems Product Service Offering
University Information Systems Product Service Offering
 
Video Communications and Video Streaming
Video Communications and Video StreamingVideo Communications and Video Streaming
Video Communications and Video Streaming
 
Mixed Streaming of Video over Wireless Networks
Mixed Streaming of Video over Wireless NetworksMixed Streaming of Video over Wireless Networks
Mixed Streaming of Video over Wireless Networks
 
A Streaming Media Primer
A Streaming Media PrimerA Streaming Media Primer
A Streaming Media Primer
 
Video streaming
Video streamingVideo streaming
Video streaming
 
Streaming Video
Streaming VideoStreaming Video
Streaming Video
 
Video_Streaming
Video_StreamingVideo_Streaming
Video_Streaming
 

Proxy Cache Management for Fine-Grained Scalable Video Streaming

  • 1. Proxy Cache Management for Fine-Grained Scalable Video Streaming Jiangchuan Liu1 Xiaowen Chu2 Jianliang Xu2 ljc@cse.cuhk.edu.hk chxw@comp.hkbu.edu.hk xujl@comp.hkbu.edu.hk 1 Department of Computer Science and Engineering, The Chinese University of Hong Kong 2 Department of Computer Science, Hong Kong Baptist University Abstract - Caching video objects at proxies close to clients has assume that the rate (either constant or variable) of the video attracted a lot of attention in recent years. To meet diverse client is predetermined. Due to this non-adaptability, they suffer bandwidth conditions, there have been research efforts to from two limitations: first, it is difficult to meet the diverse combine proxy caching with video layering or transcoding. bandwidth conditions from heterogeneous clients, for a single Nevertheless, these adaptive systems suffer from either coarse streaming rate would either overuse or underuse some client adaptation granularity due to the inflexible structures of existing bandwidths; second, it is not flexible enough to control the layered coders or high computation overhead due to the backbone (server-to-proxy) bandwidth consumption, since the transcoding operations. In this paper, we propose a novel streaming rate from the proxy to the clients is not adjustable. adaptive video caching framework that enables low-cost and While rate adaptability is a salient feature of video objects, fine-grained adaptation. The innovative approach employs the MPEG-4 Fine-Grained Scalable (FGS) video with post-encoding the use of adaptive videos poses great challenges to caching. rate control. We demonstrate that the proposed framework is The problem is particularly complicated by the fact that most both network aware and media adaptive: clients can be of conventional rate adaptation mechanisms are executed during heterogeneous streaming rates, and the backbone bandwidth the encoding process (e.g., adjusting quantizers [7,18]) and, consumption can be adaptively controlled. We also examine the hence, are difficult to apply to cached videos. There have been design and management issues in the framework, in particular, research efforts to combine proxy caching with video layering the optimal stream portions to cache and the optimal streaming or transcoding [6,13,17]. However, these adaptive systems rate to each client. Simulation results demonstrate that, suffer from either coarse adaptation granularity (due to the compared to non-adaptive caching, the proposed framework inflexible structures of existing layered coders) or high with optimal cache management not only achieves significant computation overhead (due to the transcoding operations). reduction on transmission costs but also enables flexible utility In this paper, we propose a novel video caching framework assignment for the heterogeneous clients. Meanwhile, its to achieve low-cost and fine-grained rate adaptation. The computational overhead is kept at a low level, implying that it is innovative approach is to employ the MPEG-4 Fine-Grained practically deployable. Scalable (FGS) video with bit-plane coding, which enables post-encoding rate control by partitioning the video stream at I. INTRODUCTION any specific rate [8]. These operations can be efficiently implemented at the server or at a proxy with fast response and Due to the increasing demands on video distribution over low cost. The proposed framework is both network aware and the Internet, caching video objects at proxies close to clients media adaptive: clients can be of heterogeneous access has attracted much attention in recent years. However, video bandwidths, and adaptive FGS videos are used to meet the objects have several distinct features, which make clients’ bandwidth conditions and to control the backbone conventional Web caching techniques inefficient, if not bandwidth consumption. entirely inapplicable. In particular, a video object usually has We examine the critical design and management issues in a high data rate and a long playback duration, which the proposed framework. Specifically, there is a two- combined yield a very high data volume. As an example, a dimensional space to explore for how to cache an FGS video: one-hour MPEG-1 video has a data volume of about 675 MB; the length and the rate of the portion to be cached. We stress caching it as a whole is clearly impractical, as several such that the selection must take into account the interactivities in large streams would exhaust the capacity of a typical cache. video playback, i.e., the non-uniform access rates of different To address these problems, many partial caching portions. Moreover, when a cached video is delivered to a algorithms have been proposed in recent years [3,4,21], client, different streaming rates can be selected as long as the demonstrating that, even if a small portion of a video is stored rate is no higher than the client’s available bandwidth. at the proxy, the network resource requirement can be Consequently, the proxy management becomes considerably significantly reduced. Most of these proposals, however, more complex than that for a non-adaptive video based system. 0-7803-8356-7/04/$20.00 (C) 2004 IEEE IEEE INFOCOM 2004
  • 2. We develop efficient solutions to the above problems. The of different rates [18]. Though being used in many objective is to offer optimal resource utilization as well as commercial streaming products, this method suffers from high satisfactory and fair services to clients. replication redundancy. Other recent studies have introduced The performance of the proposed framework with optimal proxy services with active filtering, which reduces the proxy cache management is extensively examined in various bandwidth of a video object by transcoding [2,18]. However, aspects. The results demonstrate its superiority over non- they usually incur much higher computation overhead due to adaptive caching schemes for heterogeneous clients. More transcoding operations. importantly, they reveal that our FGS video based adaptive A more efficient method is to use layered coding (also caching system incurs low computation overhead and, hence, known as scalable coding), which compresses a video into is practically deployable. several layers [8]: the most significant layer, called the base The rest of this paper is organized as follows. In Section II, layer, contains the data representing the most important we introduce the background as well as related work. Section features of the video, while additional layers, called III describes the FGS video based proxy caching system. The enhancement layers, contain the data that progressively refine problems of optimal proxy management are formulated in the quality of the reconstructed video. Layering has been Section IV. We also present efficient solutions to the widely used in live video multicast to heterogeneous and problems. Their performance is evaluated in Section V. In isochronous clients [18]. For proxy-assisted streaming with Section VI, we further consider the practical issues for layered videos, Rejaie et al. [6] studied cache replacement and deploying our system. Finally, Section VII concludes the perfecting policies with the objective of alleviating congestion paper and offers some future directions. for individual clients. Kangasharju et al. [13] simplified the system model by assuming the cached contents are semi-static II. BACKGROUND AND RELATED WORK and only complete layers are cached. They developed effective heuristics to maximize the total revenue based on a A. Video Caching for Homogeneous Clients stochastic knapsack model. Numerous cache algorithms for Web proxies have been Our work differs from these previous studies in two aspects: proposed in the past decade [5]. However, as mentioned in First, we focus on the optimal resource allocation for a set of Introduction, several distinct features of video objects like clients, rather than the performance perceived by an individual large volume make the conventional Web caching algorithms client. This is an important design issue from a system point inapplicable. To this end, many segment or interval caching of view. Second, the previous studies employed conventional methods have been proposed, with emphases on cache layered coding, where the number of layers is restricted and admission and replacement policies [21]. Due to the static the rate of each layer is often fixed. For example, Kangasharju nature of video contents and the disk bandwidth et al. [13] focused on two layers only (base layer and one considerations at proxies, semi-statically caching popular enhancement layer). Our work complements them by video portions over a relatively long time period, rather than employing the fine-grained scalable videos to enhance dynamically caching them in response to individual client adaptability. requests, has also been suggested [3,4,15]. Wang et al. [3] proposed a video staging scheme, in which only the part of a C. Fine-Grained Scalable (FGS) Video stream that exceeds a certain cut-off rate (i.e., the bursts of a FGS generalizes the conventional layering (scalable coding) VBR stream) is cached at the proxy. Sen et al. [4] proposed a methods through a bitplane coding algorithm, which uses prefix caching algorithm that caches the initial frames of the embedded representations for the enhancement layer (also video stream. This is particularly attractive in reducing the called FGS layer) [8]. For illustration, there are 64 (8×8) DCT client start-up latency, due to the predictable sequential nature coefficients for each video block of the enhancement layer; all of video accesses. the most significant bits from the 64 DCT coefficients Most of these algorithms assumed non-adaptive/non- constitute bitplane 0, all the second most significant bits constitute bitplane 1, and so on and so forth. In the output scalable videos and bandwidth homogeneous clients, though stream, the bitplanes are placed sequentially to reconstruct the some of them also considered work-ahead smoothing to coefficients. A post-encoding filter thus can truncate this reduce the backbone bandwidth demand for VBR videos. Our embedded stream of the enhancement layer to achieve any work is motivated by these studies, in particular, the staging specified output rate, with negligible mismatches due to block and prefix caching algorithms, and complements them by boundary constraints for the truncation [8,12]. We stress that focusing on the issues arising from caching scalable videos this rate control method has two advantages: (1) like with heterogeneous clients. transcoding, it enables fine-grained rate adaptation, but the computation overhead of the filter is much lower; (2) like B. Video Caching for Heterogeneous Clients layered adaptation, it can be applied to stored videos and To handle the heterogeneity of client bandwidths, a enables the proxy to adjust the rate of a cached stream at a low straightforward method is to produce replicated video streams cost. Specifically, for narrowband clients, the proxy can 0-7803-8356-7/04/$20.00 (C) 2004 IEEE IEEE INFOCOM 2004
  • 3. reduce the streaming rate using the filter; for wideband clients, connection; otherwise, it will continue playback by retrieving the proxy can fetch some uncached portion (i.e., higher-order the suffix. We denote the lengths of the prefix and the suffix bitplanes) from the server and assemble it with the cached by Lt and Ls , respectively. Assume the probability of early portion to generate a higher rate stream. Such operations are terminations is pET , 0 < pET < 1 . The probability of a difficult to implement using transcoding, not only because it client accessing the entire video (also the probability of has high computation overhead but also because it changes the accessing the suffix) is thus 1 − pET . While here we consider stream syntax. this simple model only, the method can be extended to a FGS coding has been adopted in the MPEG-4 standard and multiple-segment case with non-uniform access rates due to is undergoing active improvement. We have seen proposals such interactivities as VCR-like operations in playback (some that make efficient use of FGS coding for adaptive video preliminary results can be found in [22]). streaming [19]. However, they did not consider proxy caching. Finally and most importantly, we advocate scalable adaptive To our knowledge, the only work that employed FGS videos videos in this system, in particular, the MPEG-4 FGS videos. in caching is [20]. Their focus however was on developing a To model an FGS video, we assume that the base layer of the general cache management framework, in particular, the video has a constant rate rbase , which cannot be further replacement policies for mixed-media streaming; the FGS partitioned along the rate axis. As such, the base layer coding was used to for the performance evaluation purpose. represents an ensured lowest playback quality. On the other hand, neglecting the effect of block boundaries for bitplane III. SYSTEM MODEL OF FGS VIDEO BASED CACHING truncation, the enhancement layer can be adaptively The video streaming system consists of a server that stores a partitioned into any given rate using a filter, either at the repository of videos, and a set of proxies on the edge of the server or at the proxy. Thus, as illustrated in Fig. 1, a proxy network. Selected videos are partially cached at the proxies. A manager can set the streaming rate to a client of class i in the client request is first forwarded to a nearby proxy, which range of [rbase , ci ] . intercepts the request and computes a schedule for streaming: the cached portion of the video can be delivered to the client Video Server Video Proxy directly; some uncached portion, if needed, will be fetched Filter/Assembler from the server and then relayed to the client. Although this Filter Uncached FGS Portion process is similar to that of many existing systems, our model stream (bi) has three novel features, making it more general and flexible. Backbone Cache First, we consider a relatively more complex network (e.g., Network Intranet) behind the proxy, instead of a simple LAN assumed Video Client in most existing studies. Examples include an enterprise Repository Proxy Proxy Monitor Request Request Manager network or a campus network, which remains highly (ci) heterogeneous in terms of client access bandwidths, due to such factors as hardware configurations, connection methods Fig. 1. Functionalities of the video server and a proxy. (e.g., Ethernet, ADSL, or wireless LAN), and administrative policies (e.g., in a campus network, the access bandwidth The proxy manager also determines which portion of a provisioned for a faculty member would be higher than that video is to be cached. In the conventional non-adaptive video for a student). To reflect such heterogeneity, we assume that caching system, given a cache size for the video, the portion there are M classes of clients, and the maximum access to be cached is simply determined by the length (in terms of bandwidth for a client (or simply client bandwidth) of class i playback time). However, with FGS videos, there is one is given by ci , i = 1,2,..., M , which is an upper bound on the additional dimension to explore: the rate of the portion to be video streaming rate from the proxy to a client of class i. cached. Such flexibility potentially enables better resource Without loss of generality, we number the classes in utilization, but also complicates the proxy cache management. ascending order of the client bandwidth, that is, In addition, there are two cases in which some uncached c1 ≤ c2 ≤ ≤ cM . portion is to be fetched from the server: (1) the length of the Second, we assume that the clients could terminate a video demanded portion is longer than that of the cached portion; playback prematurely after they requested the playback from and (2) the streaming rate is higher than that of the cached the beginning of a video. Existing studies on video server portion. In the second case, the uncached bitplanes will be workloads [10] have revealed that such early terminations fetched from the server and then assembled with the cached occur quite often and, hence, should be considered in system portion to form a stream of higher rate. dimensioning. One approach for modeling early termination is As in many previous studies [3,4,14], we assume that the to partition a video into two parts: a prefix and a suffix, where contents of the proxy cache are semi-static and updated the prefix could be a preview of the video. If a client feels periodically with changes of the system workloads. For each uninterested after watching the prefix, it will terminate the period, the key issues in proxy management are to find the 0-7803-8356-7/04/$20.00 (C) 2004 IEEE IEEE INFOCOM 2004
  • 4. optimal length as well as the optimal rate for caching a video, more the uncached portion to be fetched from the origin and to determine the streaming rate for each client of the server (which incurs a higher transmission cost). As such, it is video. To ensure fairness, we let the video streaming rate to important to find a trade-off between them. any client of class i be identical, denoted by bi , bi ≤ ci , A. Minimizing Transmission Cost i = 1,2,..., M . For a multi-video case of N videos, the proxy manager also needs to determine an optimal resource (cache We first consider the transmission cost for a single video space and backbone bandwidth) sharing among the videos. with specified streaming rates for each class of clients, b1 , The key parameters of this model are summarized in Table b2 , …, bM , bi ≤ ci , i = 1,2,..., M . We attempt to answer the following question: Given a limited cache size H to the video, 1. We assume all these parameters are known a priori and do which portion of the stream is cached (referred to as a caching not change drastically in a period. Moreover, for ease of scheme) such that the transmission cost is minimized? As said, exposition, we focus on the interactions among the origin this objective is equivalent to minimizing the backbone server, a single proxy, and the clients of the proxy. Our results, bandwidth consumption. however, are applicable to the multi-proxy case where each proxy serves a non-overlapping set of clients. Note that the rates for different positions of the cached portion can be different when using FGS videos. Denote the Length of the video, L = Lt + Ls rate for position l (measured in the elapsed time from the L beginning of the video) of the cached video by r(l ),l ∈ [0, L] . Lt Length of the prefix A caching scheme for the video is therefore uniquely Ls Length of the suffix determined by the shape of r(l ) . We say that the scheme is pET Probability of early terminations L valid if 1) ∫ r(l )dl ≤ H and 2) for any l ∈ [0...L] , rbase ≤ r (l ) 0 H Proxy cache size for the video ≤ cM or r(l ) = 0 . The above two constraints follow the rbase Base layer rate of the video cache size limit and the base layer rate limit, respectively. λ Client request rate for the video A caching scheme is optimal if it is valid and yields the M Number of client classes minimum backbone bandwidth consumption for fetching the pi Probability that a client is in class i , pi > 0 ˆ uncached portion. Let V denote the total volume of the video ci Client bandwidth of class i with rate cM (the maximum client bandwidth). Obviously, ˆ if H ≥ V , the caching scheme r(l ) = cM , l ∈ [0, L] is optimal. bi Streaming rate for a class i client ( rbase ≤ bi ≤ ci ) For the case of limited cache size, we show two lemmas that αi Utility of a client in class i , αi = bi /ci facilitate searching the optimal caching scheme. Vˆ ˆ Volume of the video with rate cM , V = L ⋅ cM Lemma 1: If H ≤ rbase ⋅ Lt , the caching scheme ˆ B Backbone bandwidth consumption with ideal   rbase , l ∈ [0, H / rbase ] client utility assignment and no caching r (l ) =   is optimal.  0, l ∈ (H / rbase , L ]   Table 1. Model parameters for a single video. For the multi- Proof: This scheme is clearly valid. For any client request, video case of N videos, a superscript (k ) is to be added to each a video portion of volume H is saved from being transmitted parameter of video k , k = 1,2,..., N . over the backbone. This is the maximum saving per client request that a valid caching scheme could achieve under cache IV. THE PROXY CACHE MANAGEMENT PROBLEM size H . □ AND SOLUTIONS ˆ Lemma 2: If rbase ⋅ Lt < H < V , assuming that the size of In this section, we consider the proxy cache management the cached prefix is fixed to H t ( ∈ [rbase ⋅ Lt , H ] ), there problem for a single video object as well as for multiple heterogeneous videos. Our objective in designing the proxy exists an optimal caching scheme: cache management module is two-fold: first, to maximize the  rt , l ∈ [0, Lt ]   client utility, which is related to the streaming rate to each r (l ) =  rs , l ∈ (Lt , Lc ] ,  class of clients; and second, to minimize the transmission cost,   0, l ∈ (Lc , L ]    as the video streaming imposes very high data delivery demands to the network. As suggested in previous studies [3], where rt and rs respectively represent (constant) rates of the we assume that the local (i.e., proxy-to-client) transmission cached prefix and cached suffix, and are given by cost is trivial, and the overall transmission cost is a non- rt = H t / Lt and rs = max { rbase ,(H − H t )/ Ls } ; Lc is the decreasing function of the average backbone (i.e., server-to- total length of the cached portion, Lc = Lt + (H − H t )/ rs . proxy) bandwidth consumption. Note that the above two The proof of this lemma can be found in [22]. Let’s objectives could conflict, because it is obvious that the higher ˆ concentrate on this case ( rbase ⋅ Lt < H < V ) with a the streaming rate (which results in a higher client utility), the given H t . It is easy to show that rt ≥ rs for an optimal 0-7803-8356-7/04/$20.00 (C) 2004 IEEE IEEE INFOCOM 2004
  • 5. caching scheme because a cached prefix will serve both early this has been suggested in previous studies, it is unfair to the terminated requests and all other requests. The backbone blocked clients. The FGS video, however, offers an alternative bandwidth consumption for all requests from class i is method by trading off bandwidth with client utility, that is, therefore given by assigning a somewhat lower yet acceptable streaming rate to i BH ,Ht (bi )= each class of clients. More explicitly, we would like to investigate whether there  λ pi [(1 − pET )(bi L − H ) + pET (bi Lt − H t )],  exists some utility assignment to each class of clients of the     rt ≤ bi ≤ cM (1) video, such that the backbone bandwidth consumption is no  ˆ ˆ  more than η B , where B is the backbone bandwidth   λ pi (1 − pET )[bi Ls − rs (Lc − Lt )], rs < bi < rt  consumption with the ideal utility assignment and zero-size   λ pi (1 − pET )bi (L − Lc ),   rbase ≤ bi ≤ rs cache (i.e., no caching); η thus reflects the factor of backbone  bandwidth reduction. We are particularly interested in a utility which can be easily validated from Fig. 2 (for rs < bi < rt ). assignment that achieves the best “social welfare”, i.e., the It follows that the optimal caching scheme for rbase ⋅ Lt total utility of the clients is maximized. This utility ˆ < H < V can be accomplished by a one-dimensional search optimization problem for the single video can be formally on H t in the range of [rbase ⋅ Lt , H ] . The optimal solution, or described as follows: the minimum backbone bandwidth consumption, is thus M M MU-SV: max U H ,η = ∑ i =1 λpi αi , (3) BH = min rbase ⋅Lt ≤H t ≤H , rt ≥rs ∑ i =1 BH ,Ht (bi ) . i (2) s.t. rbase / ci ≤ αi ≤ 1 , i = 1, 2,..., M , Assume the minimum cache grain is v , which could be the αi −1ci −1 ≤ αici , i = 2, 3,..., M , size of a disk block or the size of a GOP (Group-of-Pictures) ˆ BH ≤ ηB , of the video; the cache size H , as well as H t , is always a multiple of the grain v . Then the complexity to search BH is where U H ,η is the total client utility per unit time for a cache O(H / v ) . size H and a backbone bandwidth reduction factor η . The constraint αi −1ci −1 ≤ αici (equivalent to bi −1 ≤ bi ) is to ˆ For H ≥ V and H ≤ rbase ⋅ Lt , the optimal caching preserve the order (priority) of the client classes in resource scheme can also be uniquely represented using rt , rs , and Lc . sharing. The corresponding backbone bandwidth consumption thus can Assume that there is a minimum bandwidth grain w ; the be calculated by Eqs. 1 and 2 as well. backbone bandwidth consumption for a certain class of clients is rounded to a multiple of w . Fig. 3 presents an algorithm to solve the problem instance of a given size of the cached prefix, Ht . MU-SV thus can be accomplished by a search on Ht , similar to that in the previous subsection. In this algorithm, u i, j ,k represents the maximum total utility per unit time for classes 1 through i , with backbone bandwidth j consumed by classes 1 through i , and backbone i bandwidth k consumed by class i . Function (BH ,H t )−1 (k ) is i Fig. 2. Illustration of different portions of an FGS video stream. a generalized inverse of BH ,H t (bi ) (see Eq. 1), representing the highest streaming rate for a class i client given that the B. Trading off Backbone Bandwidth with Client Utility backbone bandwidth consumed by this class is k . A special case is k =0. Recall that we assume λpi > 0 and In the above optimization, we assume that the streaming 1 − pET > 0 in the system; zero backbone bandwidth rates for the clients of the video are specified. We now consumption implies that Lc = L and the streaming rate is consider a more general and flexible scheme that makes i no higher than rs . Therefore, if Lc = L , we let (BH ,H t )−1 (0) effective use of FGS. be rs to maximize the total utility; otherwise, we set Define the utility of a class i client as αi = bi /ci for i i −1 (BH ,H t )−1(0) to 0 and BH ,H t (0) to −∞ . Finally, note that rbase ≤ bi ≤ ci . The ideal utility assignment is αi = 1 , i BH ,H t (bi ) is undefined for bi < rbase and bi > cM ; if directly i = 1,2,..., M , i.e., the client bandwidth is fully utilized for every class. However, it is clear that the higher the client inversing Eq. 1 yields a value in these two ranges, we set i utility, the more the transmission cost or the backbone (BH ,H t )−1 (k ) to 0 and cM , respectively. bandwidth consumption. To reduce bandwidth consumption, The correctness of this algorithm can be found in [22]. one solution is to block some of the client requests. Although Given the cache grain v , applying the algorithm for each H t 0-7803-8356-7/04/$20.00 (C) 2004 IEEE IEEE INFOCOM 2004
  • 6. 3 ˆ solves problem MU-SV in time O (M ⋅  η B / w  ⋅ H / v ) . N Since the accuracy of this algorithm depends on v and w , we MU-MV: max ∑ k =1U Hk ) ,η ( (k ) (k ) , (4) shall investigate their impact in the next section through N simulation experiments. s.t. ∑ k =1 H (k ) ≤ H T , N N u1, j ,k ← −∞ /* Initialization */ ∑ k =1 BHk ) ( (k ) ≤ ηT ∑ k =1 B (k ) . ˆ This is a 2-dimesional (bandwidth and space) version of the ˆ for k = 0 to ηB step w do knapsack problem. Given cache grain v and backbone ˆ for j = k to ηB step w do bandwidth grain w , it can be solved by extending the known u1, j ,k ← λ p1[(BH ,H t )−1(k )]/ c1 1 pseudo-polynomial time algorithm for 1-D knapsack [1:Ch 16], and the time complexity of this solution is bounded by for i = 2 to M do /* Table filling */ ˆ for k = 0 to ηB step w do ( N (k )   ) O N ⋅ HT ⋅ H ′ ⋅ v−2 ⋅  ηT ∑k =1 B(k) ⋅ B′ ⋅ w−2  , where B ′ = N ˆ min{ηT ∑ k =1 ˆ ˆ B , maxk B (k ) } and H ′ = min {H T , y ← (BH ,Ht )−1(k ) i ˆ maxkV (k ) } [22]. ˆ for j = k to ηB step w do V. PERFORMANCE EVALUATION max ← −∞ In this section, we evaluate the performance of our FGS- for x = 0 to BH−Ht (y ) step w do i 1 , based caching system through simulation experiments. We temp ←λ piy / ci + ui −1, j −k ,x examine the system along two dimensions: 1) the transmission cost, or backbone bandwidth consumption; and 2) the quality if max < temp of the delivered streams, i.e., the client utility. then max ← temp , bw ← x A. System Settings ui, j ,k ← max , hi, j ,k ← bw For the single video case, we assume that there are five u * ← −∞ /* Extract optimal results */ classes of clients, and the client bandwidths of the classes are ˆ for bw = 0 to ηB step w do exponentially spaced, i.e., c1=128 Kbps and ci = 2ci −1 for temp ← uM ,  ηB / w  ⋅w,bw i = 2, 3, 4, 5 , which cover the bandwidths of a broad spectrum   of access technologies. For the client population distribution if u * < temp among the classes, ( p1, p1,..., p5 ), we have evaluated various then u * ← temp , bw * ← bw settings in our experiments. Due to space limitation, in this paper we present the simulation results for three typical ˆ b ′ =  ηB / w  , bw ′ = bw * distributions: for i = M down to 1 do (1) Uniform: (0.2, 0.2, 0.2, 0.2, 0.2) ; αi* ← (BH ,Ht )−1(bw ′)/ ci i (2) S-narrow: (0.5, 0.2, 0.15, 0.1, 0.05) ; b ′ ← b ′ − bw ′ , bw ′ ← hi,b ′,bw ′ (3) S-wide: (0.05, 0.1, 0.15, 0.2, 0.5) . Output: The latter two are skewed distributions, respectively dominated by narrowband clients and wideband clients. The u * : maximum expect utility for the given H , H t , η lengths of the entire video and the prefix are set to 100 αi* : corresponding utility assignment of a class i client minutes and 20 minutes, respectively. The probability of early terminations is set to 0.3. Although a rigorous evaluation of Fig. 3. Algorithm for an instance of MU-SV with given H t . the client termination behavior is beyond the scope of this work, we note that higher probabilities of early terminations C. Proxy Management for Multiple Heterogeneous Videos have been observed in reality [10]; in this case, more benefits We now extend the above proxy cache management policies can be expected from the caching paradigm advocated in our to the case of multiple heterogeneous videos. Assume that system. there are N videos, indexed as 1 through N . Given the total We assume that the requests follow a Poisson arrival cache size H T and the total bandwidth reduction factor ηT process with a mean rate of one request per minute. We for all the videos, our objective is to find a cache partitioning ˆ normalized the backbone bandwidth by B (the backbone ( H (k ) , k = 1, 2,..., N ) as well as bandwidth partitioning ( η(k ) , bandwidth consumption with ideal utility assignment and no k = 1, 2,..., N ) for these videos, such that the system utility ˆ caching) and the cache size by V (the total volume of the (total utility of all the clients for the videos) is maximized. video with rate cM ) for all presented results. Hence, our This problem can be formally described as follows: conclusions are generally applicable when the parameters are 0-7803-8356-7/04/$20.00 (C) 2004 IEEE IEEE INFOCOM 2004
  • 7. proportionally scaled. Unless explicitly specified, the default optimal scheme in our FGS video based caching framework. cache grain and backbone bandwidth grain are set at Hence, we believe that our optimal scheme is an effective ˆ ˆ 1/ 200V and 1/ 200B , respectively. means to improve the system performance. For the multiple video case of N objects, we use similar B.2 Utility Improvement client settings as described above for each single video, and assume that the access probabilities of the videos follow a We now examine the tradeoff between the client utility and Zipf distribution, which has been suggested by workload the backbone bandwidth consumption, given that the measurements of media servers. With this distribution, the streaming rates to the clients can be regulated using the N access probability of video k is (1/ k )θ / ∑ j = 1 (1/ j )θ , filter/assembler at the proxy. We employ the optimal caching where θ reflects the skewness among the client populations and utility assignment algorithms for our system, as described of the videos. in Section IV.B. Fig. 5 shows the expected utility for all clients as a function of the backbone bandwidth consumption B. Evaluation Results for Single Video for different cache sizes and class distributions. Note that the normalized backbone bandwidth is essentially equal to η , as B.1 Backbone Bandwidth Reduction shown in Eq. 3. In the first set of experiments, we investigate the backbone It can be seen that, to achieve the optimal utility ( = 1 ), a bandwidth consumption. We are interested in examining the relatively high backbone bandwidth is to be consumed if the backbone bandwidth reductions by employing the optimal cache size is very small, e.g., 40% of backbone bandwidth B ˆ caching scheme, compared to the following two baseline with a cache size 0.3V ˆ for the uniform class distribution (Fig. schemes: 4a). However, there is a nonlinear relation between the  r ′, l ∈ [0, H / r ′]  backbone bandwidth consumption and the client utility. As a  (1) MaxLen: r(l ) =  , r ′ = max{rbase, H / L} ;  0, l ∈ (H / r ′, L] result, for the same setting, we can achieve an expected client    ˆ utility of 0.9 by consuming only 15% of B , that is, a 10%  cM , l ∈ [0, H / cM ]  utility reduction leads to a 62.5% backbone bandwidth (2) MaxRate: r (l ) =   .  0, l ∈ (H / cM , L ] reduction. This is particularly evident for the S-narrow   distribution, not only because a relatively high volume is These two schemes are non-adaptive because they are not cached for the stream to a narrowband client, but also because aware of the client bandwidth distributions. They also a slight reduction of the streaming rate to a wideband client resemble the caching schemes for a coarse-grained layering of will benefit the set of narrowband clients. In this case, the 2 layers, i.e., caching the base layer only and caching both the expected utility is over 0.8 even with very limited resources base and enhancement layers. ˆ (e.g., backbone bandwidth of 0.05 B and cache size of 0.1V ). ˆ Fig. 4 shows the backbone bandwidth reductions for For the S-wide class distribution, the reduction is not that different cache sizes and class distributions. The client significant. However, in this case, the absolute backbone bandwidth is assumed to be fully utilized for all classes, that is, bandwidth consumption is much higher than that for the other αi = 1 , i = 1, 2,..., M . We observe remarkable reductions two distributions; hence, a slight reduction on the normalized achieved by our optimal scheme over the two baseline bandwidth would still lead to a great reduction in the absolute schemes, which is generally over 10% and sometimes over backbone bandwidth consumed. 50%. The reduction depends on the class distributions, e.g., Such an adaptive setting of utility clearly offers a flexible for the S-narrow distribution (Fig. 4b), the reduction is space for a designer to explore to either maximize the overall particularly high when compared with the MaxRate scheme, revenue or minimize the overall cost. On the contrary, if the as most of the clients have a relatively low bandwidth and cache size is less than 0.1 ˆ and the backbone bandwidth is V hence, caching the stream of the highest rate becomes ˆ less than 0.5B , a non-adaptive system that fixes the client wasteful. In this case, increasing the length of the cached utility to one does not even work for any class distribution. stream becomes a better alternative. However, compared to our optimal scheme, the MaxLen scheme still suffers from B.3 Sensitivity to Allocation Grains more than 10% bandwidth excess, because it is not flexible in In our experiments, the default cache grain v and backbone setting the rates for the cached prefix and suffix to better ˆ bandwidth grain w are set at 1/ 200V and 1/ 200B , ˆ accommodate early terminated requests. On the contrary, with respectively. To investigate their impact, we repeat the the S-wide distribution (Fig. 4c), since most clients have high experiments of the previous subsection with various grain bandwidth demands, the MaxRate scheme is better than the settings. Fig. 6a shows the maximum gaps in terms of optimal MaxLen scheme, but is still suboptimal. Finally, with the client utilities between each setting and a finer-grained Uniform distribution (Fig. 4a), both the MaxLen and the ˆ ˆ setting: 1/1000B (bandwidth grain) and 1/1000V (cache MaxRate schemes are far from satisfactory. grain). We can see that the performance gap quickly decreases It is worth noting that, compared to the two non-optimal with the refinement of the grains, and the gap for the default schemes, there is virtually no extra cost in employing the ˆ ˆ setting (1/200V and 1/200 B ) is already lower than 0.008. 0-7803-8356-7/04/$20.00 (C) 2004 IEEE IEEE INFOCOM 2004
  • 8. 60 60 60 C om pared to MaxLen Com pared to MaxLen Backbone Bandwidth Reduction (%) Compared to MaxLen Backbone Bandwidth Reduction (%) Backbone Bandwidth Reduction (%) 50 50 50 Compared to MaxRate C om pared to MaxRate Com pared to MaxR ate 40 40 40 30 30 30 20 20 20 10 10 10 0 0 0 0.05 0.1 0.15 0.2 0.25 0.3 0.35 0.4 0.45 0.5 0.05 0.1 0.15 0.2 0.25 0.3 0.35 0.4 0.45 0.5 0.05 0.1 0.15 0.2 0.25 0.3 0.35 0.4 0.45 0.5 Normalized Cache Size Normalized Cache Size Normalized Cache Size (a) (b) (c) Fig. 4. Backbone bandwidth reductions achieved by the optimal caching scheme. (a) Uniform class distribution; (b) S-narrow class distribution; (c) S-wide class distribution. 1 1 1 0.9 0.9 0.9 0.8 0.8 0.8 Expected Utility Expected Utility Expected Utility 0.7 0.7 0.7 0.6 0.6 0.6 CacheSize = 0.1 CacheSize = 0.1 CacheSize = 0.1 0.5 0.5 0.5 CacheSize = 0.3 CacheSize = 0.3 CacheSize = 0.3 0.4 CacheSize = 0.5 0.4 CacheSize = 0.5 0.4 CacheSize = 0.5 0.3 0.3 0.3 0.05 0.1 0.15 0.2 0.25 0.3 0.35 0.4 0.45 0.5 0.05 0.1 0.15 0.2 0.25 0.3 0.35 0.4 0.45 0.5 0.05 0.1 0.15 0.2 0.25 0.3 0.35 0.4 0.45 0.5 Normalized Backbone Bandwidth Normalized Backbone Bandwidth Normalized Backbone Bandwidth (a) (b) (c) Fig. 5. Expected client utility as a function of backbone bandwidth consumption. (a) Uniform class distribution; (b) S-narrow class distribution; (c) S-wide class distribution. baseline scheme with a uniform cache partitioning 0.02 0.02 ( H (k ) = H T / N ) among the videos and a bandwidth 0.016 0.016 partitioning proportional to the client population of each video ( η(k ) = (λ(k ) / ∑N λ( j ) ) ⋅ (ηT ∑ N B (i) )/ B(k ) ). The ˆ ˆ Average Gap Max Gap 0.012 0.012 j =1 j =1 0.008 0.008 utility assignment for each single video is optimized using 0.004 0.004 the algorithm for MU-SV. It is easy to verify that the system 0 1/100 1/50 0 1/100 1/200 1/50 utility for the baseline scheme is independent of the skew 1/50 1/200 1/50 Ban 1/100 1/200 1/800 1/400 eG ra in Ban dw 1/100 idth 1/200 1/400 1/400 1/800 eG ra in factor θ . Fig. 7 shows the system utility improvement by the d wid th 1/400 ch ch joint optimization scheme for H T = 0.2∑ N V (k ) , G G ra in 1/800 Ca rain 1/800 Ca ˆ k =1 T and η = 0.1 . It is clear that the joint optimization (a) (b) significantly improves the system utility, in particular, with Fig. 6. Performance gaps as compared to a fine-grained setting large skew factors, i.e., the client populations of the videos ˆ ˆ v = 1 / 1000V , w = 1/1000B . (a) Max gap; (b) Average gap. are highly heterogeneous. For the S-wide distribution (Fig. 7c), which has a relatively low system utility for the baseline This is negligible from a client perception point of view. scheme under the given settings, the utility improvement is We also show the average gaps in Fig. 6b, which are in fact over 30% for high skew factors. much smaller than the corresponding maximum gaps shown To identify respective contributions of the two in Fig. 6a. Therefore, we believe that the default setting is optimization dimensions (i.e., space and bandwidth), we also reasonably good, especially considering that the computation show in Fig. 7 the improvements achieved by employing the times with this setting are generally less than 30 ms on a optimal cache partitioning alone (Cache Optimal) and the common PC (Pentium III 1 GHz). optimal bandwidth partitioning alone (Band Optimal). Their C. Evaluation Results for Multiple Videos corresponding bandwidth and cache partitionings are proportional and uniform, respectively. It can be seen that, Finally, we investigate the performance of the proxy though using either optimal partitioning alone also improves management algorithm for multiple videos. We implement the system utility, there remain remarkable gaps as compared the joint optimization scheme MU-MV for cache and to that of the joint optimization, especially for higher skew bandwidth partitioning (see Section IV.C), as well as a factors. This is because the utility of each video is a 0-7803-8356-7/04/$20.00 (C) 2004 IEEE IEEE INFOCOM 2004
  • 9. 15 5 35 Joint Optimal Joint Optimal 30 12 4 Joint Optimal Band Optimal Band Optimal Utility Improvement (%) x Utility Improvement (%) x Utility Improvement (%) x Band Optimal Cache Optimal Cache Optimal 25 Cache Optimal 9 3 20 6 2 15 10 3 1 5 0 0 0 0 0.2 0.4 0.6 0.8 1 0 0.2 0.4 0.6 0.8 1 0 0.2 0.4 0.6 0.8 1 Skew Factor Skew Factor Skew Factor (a) (b) (c) Fig. 7. System utility improvement as a function of skew factor θ ( H T = 0.2∑ N V (k ) , η T = 0.1 ) for multi-video allocation. The ˆ k =1 system utilities of the baseline scheme for the Uniform, S-narrow, and S-wide distributions are 0.77, 0.92, and 0.56, respectively. (a) Uniform distribution; (b) S-narrow distribution; (c) S-wide distribution. nonlinear function of cache space or backbone bandwidth Using the optimal caching schemes, we calculate the consumed by the video and, hence, the uniform cache backbone bandwidth reduction of the FGS video based partitioning or the proportional bandwidth partitioning alone system against the stream replication based system, as shown is suboptimal for these heterogeneous videos. As such, we in Fig. 8. For the Uniform and S-narrow distributions, the believe that it is important to use the joint optimization for reduction is significant, often over 40% and sometimes over the videos. 60%. For the S-wide distribution, the reduction is relatively smaller because the backbone bandwidth consumption is VI. COMPARISONS AND PRACTICAL CONSIDERATIONS dominated by the requests from the wideband clients. A. Scalable Video or Replicated Video? Nevertheless, the absolute value of saved bandwidth remains high enough in this case, as discussed in Section V-B.2. As mentioned in Section II, yet another approach to handle 80 the client heterogeneity is stream replication [9,18]; that is, Backbone Bandwidth Reduction (%) 70 Uniform on the server’s side, replicated streams are generated to meet S-narrow 60 S-wide the bandwidth condition of each class of clients. Stream 50 replication has several advantages, such as simplicity and compatibility with conventional non-scalable video coders. 40 Hence, it has been widely used in existing commercial 30 streaming systems, e.g., the RealNetwork’s SureStream. 20 Nevertheless, replication also leads to data redundancy, not 10 only in server storage but also in proxy cache and bandwidth 0 consumption. 0.05 0.1 0.15 0.2 0.25 0.3 0.35 0.4 0.45 0.5 Normalized Cache Size Scalable video and replicated video based streaming Fig. 8. Backbone bandwidth reduction of FGS video systems have been compared in [9] with no proxy caching. A based caching against stream replication based caching for cache-aware comparison was shown in [16]. They considered different cache sizes and class distributions. coarse-grained layering (two layers) and the metric of Next, we consider the case of flexible utility assignment, interest is the request blocking probability. In this set of where the client utility can be lower than one. The problem experiments, we try to complement these works from a client of optimal caching and client utility assignment for the utility and bandwidth consumption point of view with the use stream replication based system can be formulated as follows: of FGS videos. M We first investigate the backbone bandwidth consumptions MU-REP: max U H ,η = ∑ i =1 λpi αi , (7) of the two approaches when the client bandwidth is to be s.t. rmin / ci ≤ αi ≤ 1 , i = 1, 2,..., M , fully utilized. To make a fair comparison, we develop the optimal caching scheme for the stream replication based αi −1ci −1 ≤ αici , i = 2, 3,..., M , system. In this system, the cached portion of a stream serves M the clients of its own class only, and its rate is equal to the ∑ i =1 hi ˆ ≤ H , and BH ≤ ηB , client bandwidth of the class. The problem is thus how to where hi is the cache size allocated to stream i , and rmin is partition a cache with given size H for the M replicated the lowest streaming rate, which is set to rbase in the streams of the video. This is a variation of the cache experiments to ensure a fair comparison. This problem can allocation problem for multiple heterogeneous videos, as also be solved by checking different partitionings of the solved in [14]. cache and, for each partitioning, the optimal utility 0-7803-8356-7/04/$20.00 (C) 2004 IEEE IEEE INFOCOM 2004