This document proposes a novel video caching framework that uses MPEG-4 Fine-Grained Scalable (FGS) video with post-encoding rate control to achieve low-cost and fine-grained rate adaptation. The framework allows clients to have heterogeneous bandwidths and enables adaptive control of backbone bandwidth consumption. It examines issues in caching FGS videos, such as determining the optimal portion to cache (in terms of length and rate) and optimal streaming rate to clients. Simulation results show it significantly reduces transmission costs compared to non-adaptive caching while providing flexible utility to heterogeneous clients with low computational overhead.
Requiring only half the bitrate of its predecessor, the new standard – HEVC or H.265 – will significantly reduce the need for bandwidth and expensive, limited spectrum. HEVC (H.265) will enable the launch of new video services and in particular ultra HD television (UHDTV).
State-of-the-art video compression techniques – HEVC/H.265 – can reduce the size of raw video by a factor of about 100 without any noticeable reduction in visual quality. With estimates indicating that compressed real-time video accounts for more than 50 percent of current network traffic, and this figure is set to rise to 90 percent within a few years, HEVC/H.265 will be a welcome relief for network operators.
New services, devices and changing viewing patterns are among the factors contributing to the growth in video traffic as people watch more and more traditional TV and video-streaming services on their mobile devices.
Ericsson has been heavily involved in the standardization of HEVC since it began in 2010, and this Ericsson Review article highlights some of the contributions that have led to the compression efficiency offered by HEVC.
Webcast: Reduce latency, improve analytics and maximize asset utilization in ... (Emulex Corporation)
Join Emulex and Myricom experts to learn how to maximize performance in HFT, network security, network analytics and video content delivery environments with Emulex Network Xceleration (NX) solutions.
This webcast will discuss ways of reducing latency, increasing asset utilization and improving network analytics in high performance networks.
Motion Vector Recovery for Real-time H.264 Video Streams (IDES Editor)
Among the various network protocols that can be used to stream video data, RTP over UDP is the best suited for real-time streaming of H.264 video. Videos transmitted over a communication channel are highly prone to errors, which can become critical when UDP is used. In such cases, real-time error concealment becomes important. A subclass of error concealment is motion vector recovery, which conceals errors at the decoder side; Lagrange interpolation is a fast and popular technique for it. This paper proposes a new system architecture that enables both RTP/UDP-based real-time video streaming and Lagrange-interpolation-based real-time motion vector recovery in H.264-coded video streams. The completely open-source H.264 codec FFmpeg is chosen to implement the proposed system. The implementation was tested against standard benchmark video sequences, and the quality of the recovered videos was measured at the decoder side using various quality metrics. Experimental results show that real-time motion vector recovery introduces no noticeable artifacts or latency during display of the recovered video.
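The Lagrange-interpolation recovery described above can be sketched as follows. This is a simplified illustration (block index as the interpolation variable, three received neighbours), not the paper's exact formulation:

```python
# Sketch: recovering a lost motion vector by Lagrange interpolation.
# Hypothetical simplification: the lost block's MV is interpolated from
# the MVs of correctly received neighbouring blocks in the same row.

def lagrange_interpolate(points, x):
    """Evaluate the Lagrange polynomial through `points` at `x`.
    `points` is a list of (xi, yi) pairs with distinct xi."""
    total = 0.0
    for i, (xi, yi) in enumerate(points):
        term = yi
        for j, (xj, _) in enumerate(points):
            if j != i:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

def recover_mv(neighbour_mvs, lost_index):
    """Recover the (mvx, mvy) of a lost block from its neighbours.
    `neighbour_mvs` maps block index -> (mvx, mvy)."""
    xs = sorted(neighbour_mvs)
    mvx = lagrange_interpolate([(i, neighbour_mvs[i][0]) for i in xs], lost_index)
    mvy = lagrange_interpolate([(i, neighbour_mvs[i][1]) for i in xs], lost_index)
    return (round(mvx), round(mvy))

# Blocks 0, 1 and 3 received correctly; block 2 lost.
mv = recover_mv({0: (4, 0), 1: (6, 1), 3: (10, 3)}, 2)  # -> (8, 2)
```

Because the interpolation uses only a handful of neighbouring vectors per lost block, it is cheap enough to run per frame at the decoder, which is consistent with the real-time claim above.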
Building Cloud-ready Video Transcoding System for Content Delivery Networks (...) (Zhenyun Zhuang)
GLOBECOM 2012
Video streaming traffic, both VoD (Video on Demand) and live, is exploding. Businesses of many types, and many individuals, rely on video streaming to attract customers and users. Given the vast number of video stream formats (e.g., MP4, FLV) and transmission protocols (e.g., HTTP, RTMP, RTSP) needed to support varying types of playback terminals (particularly mobile devices such as iPhone/iPad and Android phones), video content providers often need to transcode videos into multiple formats in order to stream to different types of users.
Being time-sensitive and bandwidth-hungry, video streaming exerts high pressure on the underlying delivery networks. Content Delivery Network (CDN) providers can help their customers quickly and reliably distribute stream contents to end users. In addition to distributing video streams, CDN providers typically allow their customers to perform video transcoding on CDN platforms. Given the high volume of video streams and the bursty transcoding workload, CDN providers are eager to deploy elastic and optimized cloud-based transcoding platforms.
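As a toy illustration of the multi-format transcoding such a platform performs, the sketch below builds one ffmpeg command line per output profile. The profile table is invented; `-i`, `-c:v`, `-b:v` and `-f` are standard ffmpeg options:

```python
# Illustration only: building per-format transcode commands, as a CDN
# transcoding worker might. Profiles are hypothetical examples.

PROFILES = {
    "mp4_720p": {"vcodec": "libx264", "bitrate": "2500k", "fmt": "mp4", "ext": "mp4"},
    "flv_480p": {"vcodec": "flv",     "bitrate": "800k",  "fmt": "flv", "ext": "flv"},
}

def transcode_cmd(src, name, profile):
    """Build an ffmpeg argument list producing one output rendition."""
    out = src.rsplit(".", 1)[0] + "_" + name + "." + profile["ext"]
    return ["ffmpeg", "-i", src, "-c:v", profile["vcodec"],
            "-b:v", profile["bitrate"], "-f", profile["fmt"], out]

cmds = [transcode_cmd("talk.mov", name, p) for name, p in PROFILES.items()]
```

In an elastic cloud deployment, each such command would become one job in a queue, so workers can be scaled with the bursty workload the abstract mentions.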
Mobile-Based Video Caching Architecture Based on Billboard Manager (csandit)
Video streaming services are very popular today. Increasingly, users access multimedia applications and video playback wirelessly on their mobile devices. However, a significant challenge remains in ensuring smooth, uninterrupted transmission of video files of almost any size over a 3G network, as quickly as possible, in order to optimize bandwidth consumption. In this paper, we propose to position our Billboard Manager to provide an optimal transmission rate that enables smooth video playback for a mobile device user connected to a 3G network. Our work focuses on serving user requests by mobile operators from the cached resources managed by the Billboard Manager, and transmitting the video files from this pool. The aim is to reduce the load placed on a mobile operator's bandwidth resources by routing user requests away from the Internet, which would otherwise have to be searched for a video that, if located, is then transferred back to the user.
Leveraging External Applications For DOCSIS 3.1 HFC Plant Optimization (Cisco Service Provider)
Reprinted with permission of NCTA, from the 2014 Cable Connection Spring Technical Forum Conference Proceedings. For more information on Cisco cloud solutions, visit: http://www.cisco.com/c/en/us/products/cloud-systems-management/index.html
An SDN Based Approach To Measuring And Optimizing ABR Video Quality Of Experi... (Cisco Service Provider)
Reprinted with permission of NCTA, from the 2014 Cable Connection Spring Technical Forum Conference Proceedings. For more information on Cisco video solutions, visit: http://www.cisco.com/c/en/us/products/video/index.html
Subjective quality evaluation of the upcoming HEVC video compression standard (Touradj Ebrahimi)
Slides of my presentation at SPIE Optics+Photonics 2012 Applications of Digital Image Processing XXXV, San Diego, August 12-16, 2012
Paper available at: http://infoscience.epfl.ch/record/180494
Decision Making Analysis of Video Streaming Algorithm for Private Cloud Compu... (IJECEIAES)
This study tackles how to effectively deliver video streaming content over cloud computing infrastructures. Quality of service for video streaming is strongly influenced by bandwidth, jitter, and data loss. A number of intelligent video streaming algorithms using different techniques have been proposed to deal with these issues. This study proposes and demonstrates a novel decision-making analysis that combines ISO 9126 (the international standard for software quality) with the Analytic Hierarchy Process to help experts select the best video streaming algorithm for a private cloud computing infrastructure. The case study concluded that the Scalable Streaming algorithm is the best choice for delivering high quality of service for video streaming over the private cloud computing infrastructure.
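The Analytic Hierarchy Process step such a study relies on can be sketched as deriving a priority vector from a pairwise-comparison matrix. The matrix below is invented, and the column-normalisation method shown is the common approximation, not necessarily the study's exact procedure:

```python
# Minimal AHP sketch: pairwise-comparison matrix -> priority weights,
# using the standard column-normalisation approximation.
# Matrix values are made-up for illustration, not taken from the study.

def ahp_priorities(matrix):
    n = len(matrix)
    col_sums = [sum(row[j] for row in matrix) for j in range(n)]
    normalized = [[matrix[i][j] / col_sums[j] for j in range(n)] for i in range(n)]
    # Each criterion's weight is the mean of its normalized row.
    return [sum(normalized[i]) / n for i in range(n)]

# Three hypothetical candidate streaming algorithms compared pairwise.
m = [[1,   3,   5],
     [1/3, 1,   3],
     [1/5, 1/3, 1]]
w = ahp_priorities(m)  # weights sum to 1; the first candidate dominates
```

A full AHP analysis would also check the consistency ratio of the matrix before trusting the weights; that step is omitted here.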
A COMPARISON OF CACHE REPLACEMENT ALGORITHMS FOR VIDEO SERVICES (ijcsit)
The increasing demand for video services has made video caching a necessity to decrease download times and reduce Internet traffic. It is also very important to store the right content at the right time in caches to make effective use of caching. An informed decision has to be made as to which videos to evict from the cache when it saturates. The best cache replacement algorithm is therefore the one that dynamically selects a suitable subset of videos for caching and maximizes the cache hit ratio by attempting to cache the videos most likely to be referenced in the future. In this paper we study the most popular cache replacement algorithms currently used in video caching (OPT, CC, QC, LRU-2, LRU, LFU and FIFO). We use simulations to evaluate and compare these algorithms using video popularities that follow a Zipf distribution, considering different cache sizes and video request rates. Our results show that the CC algorithm achieves the largest hit ratio and performs well even with small caches, while the FIFO algorithm has the smallest hit ratio among all the algorithms.
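A miniature version of such a simulation, comparing two of the listed policies (LRU and FIFO) under Zipf-distributed popularity, might look like the sketch below. Catalogue size, cache size and skew are arbitrary illustrative choices, not the paper's parameters:

```python
# Toy cache simulation: LRU vs FIFO hit ratio under Zipf popularity.
import random
from collections import OrderedDict, deque

def zipf_requests(n_videos, n_reqs, s=1.0, seed=7):
    """Draw a deterministic request trace with Zipf(s) popularity."""
    rng = random.Random(seed)
    weights = [1 / (i + 1) ** s for i in range(n_videos)]
    return rng.choices(range(n_videos), weights=weights, k=n_reqs)

def lru_hit_ratio(reqs, cap):
    cache, hits = OrderedDict(), 0
    for v in reqs:
        if v in cache:
            hits += 1
            cache.move_to_end(v)            # refresh recency on a hit
        else:
            if len(cache) >= cap:
                cache.popitem(last=False)   # evict least recently used
            cache[v] = True
    return hits / len(reqs)

def fifo_hit_ratio(reqs, cap):
    order, cache, hits = deque(), set(), 0
    for v in reqs:
        if v in cache:
            hits += 1                       # hits do not refresh FIFO order
        else:
            if len(cache) >= cap:
                cache.discard(order.popleft())  # evict oldest insertion
            cache.add(v)
            order.append(v)
    return hits / len(reqs)

reqs = zipf_requests(1000, 20000)
lru = lru_hit_ratio(reqs, 50)
fifo = fifo_hit_ratio(reqs, 50)
```

With a skewed Zipf workload, recency-aware LRU keeps the hot videos resident, whereas FIFO evicts them on schedule regardless of popularity, which is consistent with FIFO ranking last in the paper's comparison.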
The main problem is to retrieve video content without streaming problems for clients on multiple networks, while avoiding unnecessary complexity. The proposed work improves collaboration among streaming contents on server resources in order to improve network performance, implementing network collaboration in a content delivery scenario with a strong reduction of data transferred via servers. Audio and video files are transmitted in blocks to clients through peers using the Network Coding Equivalent Content Distribution scheme. The system is designed to tolerate out-of-order arrival of blocks in the stream and to be resilient to transmission losses of an arbitrary number of intermediate blocks, without affecting the verifiability of the remaining blocks in the stream. The joint rate control and packet scheduling problem is formulated as an integer program whose objective is to minimize a cost function of the expected video distortion; cost functions are suggested that provide service differentiation and address fairness among users.
International Journal of Computer Science, Engineering and Information Techno... (ijcseit)
In this paper, we focus on caching and replication algorithms for content distribution in peer-to-peer networks. Caching and replication is a key strategy for improving reliability, availability, and performance in peer-to-peer networks. This paper gives a brief introduction to caching and replication, discusses various algorithms, and presents a detailed study. The comparison table shows clearly which algorithms satisfy the caching and replication requirements.
Survey on caching and replication algorithm for content distribution in peer ... (ijcseit)
Efficient and thorough data collection and its timely analysis are critical for disaster response and recovery in order to save people's lives during disasters. However, access to comprehensive data in disaster areas and their quick analysis to transform the data to actionable knowledge are challenging. With the popularity and pervasiveness of mobile devices, crowdsourcing data collection and analysis has emerged as an effective and scalable solution. This paper addresses the problem of crowdsourcing mobile videos for disasters by identifying two unique challenges of 1) prioritizing visual data collection and transmission under bandwidth scarcity caused by damaged communication networks and 2) analyzing the acquired data in a timely manner. We introduce a new crowdsourcing framework for acquiring and analyzing the mobile videos utilizing fine granularity spatial metadata of videos for a rapidly changing disaster situation. We also develop an analytical model to quantify the visual awareness of a video based on its metadata and propose the visual awareness maximization problem for acquiring the most relevant data under bandwidth constraints. The collected videos are evenly distributed to off-site analysts to collectively minimize crowdsourcing efforts for analysis. Our simulation results demonstrate the effectiveness and feasibility of the proposed framework.
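The visual-awareness-maximization step under a bandwidth budget can be sketched as a knapsack problem solved greedily by awareness per byte. The clip sizes and scores below are invented, and the paper's actual awareness model is computed from spatial metadata rather than given as a single scalar:

```python
# Greedy sketch of selecting crowdsourced videos under a bandwidth
# budget, ranking by awareness-per-byte. All numbers are illustrative.

def select_videos(videos, budget):
    """videos: list of (clip_id, size, awareness_score).
    Returns the ids chosen within `budget`, highest density first."""
    chosen, used = [], 0
    for vid, size, score in sorted(videos, key=lambda v: v[2] / v[1], reverse=True):
        if used + size <= budget:
            chosen.append(vid)
            used += size
    return chosen

# Four candidate clips: (id, size in MB, awareness score).
clips = [("a", 40, 8.0), ("b", 25, 7.5), ("c", 60, 6.0), ("d", 10, 2.5)]
picked = select_videos(clips, 80)  # -> ["b", "d", "a"]
```

A greedy density heuristic like this is a standard approximation for such budgeted selection; an exact formulation would solve the knapsack optimally.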
Optimal Streaming Protocol for VoD Using Clients' Residual Bandwidth (IDES Editor)
A true VoD system is in tremendous demand in the market, but the existing VoD systems do not cater to the market's needs. The major problem in a VoD system is that serving clients with the expected QoS is difficult. In this paper, we propose a protocol and algorithm that chain the proxy servers and subscribed clients. Our objective is to send one server stream and have it serve N asynchronous clients. Server bandwidth is scarce while client uplink bandwidth is underutilized; in this protocol we use clients' residual bandwidth so that the load on the server is reduced. We show optimal utilization of buffer and bandwidth for the entire VoD system, as well as a lower client rejection ratio.
Chaining Algorithm and Protocol for Peer-to-Peer Streaming Video on Demand Sy... (ijwmn)
Although various architectures and protocols have been implemented, a true VoD system is in great demand among global users, and traditional VoD systems do not meet their needs. The major problem in a traditional VoD system is that the video stream is duplicated and streamed separately to different clients, which consumes more server bandwidth, leaves client uplink bandwidth unutilized, and degrades system performance. Our objective in this paper is to make one server stream sufficient to serve many clients without duplicating it. We therefore propose a protocol and algorithm that chain the proxy servers and subscribed clients and utilize clients' uplink bandwidth so that the load on the server is reduced. We also show a lower client rejection ratio and better utilization of buffer and bandwidth for the entire VoD system.
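The chaining admission described above can be sketched as a toy routine: a new client is fed from an already-subscribed client with enough residual uplink bandwidth, and a new server stream is opened only when no such peer exists. Names and capacities are illustrative, not from the paper:

```python
# Toy admission for a chained VoD system. A new client is attached to
# the first existing client whose residual uplink can carry the stream;
# otherwise the server must open another stream.

def admit(clients, uplinks, rate, server_streams):
    """clients: ids already receiving the video, in chain order.
    uplinks: dict id -> residual uplink bandwidth (kbps).
    rate: stream rate (kbps). Returns (parent, server_streams)."""
    for c in clients:
        if uplinks.get(c, 0) >= rate:
            uplinks[c] -= rate               # chain from this peer
            return c, server_streams
    return "server", server_streams + 1      # fall back to a server stream

ups = {"c1": 300, "c2": 900}
parent, streams = admit(["c1", "c2"], ups, 500, 1)  # c2 has spare uplink
```

Every admission served from a peer avoids one duplicated server stream, which is exactly the bandwidth saving both VoD abstracts above argue for.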
COST-EFFECTIVE LOW-DELAY DESIGN FOR MULTI-PARTY CLOUD VIDEO CONFERENCING (nexgentechnology)
A new storage architecture for a flash memory video server (eSAT Journals)
Abstract
During the past decade, there has been drastic growth in network traffic generated by online viewing of video content on the Internet. Owing to the current popularity of video streaming services such as YouTube, Netflix and portals' user-created content, this trend is likely to continue over the next decade as well. For the successful proliferation of streaming services, it is important to develop high-performance video servers that can fulfil the bandwidth requirements of concurrently served video streams in a cost-effective way. To this end, we propose a video storage architecture that uses an SSD (solid-state drive) as a cache layer. Since SSD storage provides fast and uniform random-read speeds, it works well with the EDF (earliest-deadline-first) algorithm. To employ the EDF algorithm, system time is divided into cycles, and bandwidth allocation is done by considering the scheduling periods and the number of flash blocks retrieved for the serviced video streams. Since the bandwidth allocations of serviced streams are managed within the full capacity of the scheduled SSD bandwidth, undesirable hiccups of serviced video streams can be efficiently prevented.
Keywords: video streaming service, flash memory, real-time scheduling, data caching, admission control
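The cycle-based admission control this abstract describes can be sketched as a per-cycle budget check: a new stream is admitted only if the total flash-block reads per cycle stay within the scheduled SSD read capacity. The numbers below are illustrative, not from the paper:

```python
# Sketch of cycle-based admission for an SSD-cached video server: each
# serviced stream needs a fixed number of flash-block reads per cycle,
# and admissions must stay within the SSD's scheduled per-cycle budget.

def admit_stream(active_blocks, new_blocks, capacity):
    """active_blocks: per-cycle block counts of currently serviced
    streams. Returns True if the new stream fits the cycle budget."""
    return sum(active_blocks) + new_blocks <= capacity

active = [8, 12, 10]                    # blocks/cycle for current streams
ok = admit_stream(active, 15, 50)       # 45 <= 50: admitted
rejected = admit_stream(active, 25, 50) # 55 > 50: refused to avoid hiccups
```

Keeping total allocations under the scheduled capacity is what guarantees each admitted stream meets its per-cycle deadline, hence the hiccup-free claim.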
Proxy Cache Management for Fine-Grained Scalable Video Streaming
Jiangchuan Liu (ljc@cse.cuhk.edu.hk), Department of Computer Science and Engineering, The Chinese University of Hong Kong
Xiaowen Chu (chxw@comp.hkbu.edu.hk) and Jianliang Xu (xujl@comp.hkbu.edu.hk), Department of Computer Science, Hong Kong Baptist University
Abstract - Caching video objects at proxies close to clients has attracted a lot of attention in recent years. To meet diverse client bandwidth conditions, there have been research efforts to combine proxy caching with video layering or transcoding. Nevertheless, these adaptive systems suffer from either coarse adaptation granularity due to the inflexible structures of existing layered coders or high computation overhead due to the transcoding operations. In this paper, we propose a novel adaptive video caching framework that enables low-cost and fine-grained adaptation. The innovative approach employs the MPEG-4 Fine-Grained Scalable (FGS) video with post-encoding rate control. We demonstrate that the proposed framework is both network aware and media adaptive: clients can have heterogeneous streaming rates, and the backbone bandwidth consumption can be adaptively controlled. We also examine the design and management issues in the framework, in particular, the optimal stream portions to cache and the optimal streaming rate to each client. Simulation results demonstrate that, compared to non-adaptive caching, the proposed framework with optimal cache management not only achieves significant reduction in transmission costs but also enables flexible utility assignment for the heterogeneous clients. Meanwhile, its computational overhead is kept at a low level, implying that it is practically deployable.

I. INTRODUCTION

Due to the increasing demands on video distribution over the Internet, caching video objects at proxies close to clients has attracted much attention in recent years. However, video objects have several distinct features, which make conventional Web caching techniques inefficient, if not entirely inapplicable. In particular, a video object usually has a high data rate and a long playback duration, which combined yield a very high data volume. As an example, a one-hour MPEG-1 video has a data volume of about 675 MB; caching it as a whole is clearly impractical, as several such large streams would exhaust the capacity of a typical cache.

To address these problems, many partial caching algorithms have been proposed in recent years [3,4,21], demonstrating that, even if a small portion of a video is stored at the proxy, the network resource requirement can be significantly reduced. Most of these proposals, however, assume that the rate (either constant or variable) of the video is predetermined. Due to this non-adaptability, they suffer from two limitations: first, it is difficult to meet the diverse bandwidth conditions of heterogeneous clients, for a single streaming rate would either overuse or underuse some clients' bandwidths; second, it is not flexible enough to control the backbone (server-to-proxy) bandwidth consumption, since the streaming rate from the proxy to the clients is not adjustable.

While rate adaptability is a salient feature of video objects, the use of adaptive videos poses great challenges to caching. The problem is particularly complicated by the fact that most conventional rate adaptation mechanisms are executed during the encoding process (e.g., adjusting quantizers [7,18]) and, hence, are difficult to apply to cached videos. There have been research efforts to combine proxy caching with video layering or transcoding [6,13,17]. However, these adaptive systems suffer from either coarse adaptation granularity (due to the inflexible structures of existing layered coders) or high computation overhead (due to the transcoding operations).

In this paper, we propose a novel video caching framework to achieve low-cost and fine-grained rate adaptation. The innovative approach is to employ the MPEG-4 Fine-Grained Scalable (FGS) video with bit-plane coding, which enables post-encoding rate control by partitioning the video stream at any specific rate [8]. These operations can be efficiently implemented at the server or at a proxy with fast response and low cost. The proposed framework is both network aware and media adaptive: clients can have heterogeneous access bandwidths, and adaptive FGS videos are used to meet the clients' bandwidth conditions and to control the backbone bandwidth consumption.

We examine the critical design and management issues in the proposed framework. Specifically, there is a two-dimensional space to explore for how to cache an FGS video: the length and the rate of the portion to be cached. We stress that the selection must take into account the interactivities in video playback, i.e., the non-uniform access rates of different portions. Moreover, when a cached video is delivered to a client, different streaming rates can be selected as long as the rate is no higher than the client's available bandwidth. Consequently, the proxy management becomes considerably more complex than that for a non-adaptive video based system.

0-7803-8356-7/04/$20.00 (C) 2004 IEEE. IEEE INFOCOM 2004
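The post-encoding rate control central to this framework amounts to cutting each frame's embedded enhancement-layer bitstream at a byte budget derived from the target rate. The sketch below is schematic: frame sizes and rates are invented, and real FGS truncation operates on bit-planes of the DCT residual rather than raw byte counts:

```python
# Schematic sketch of FGS post-encoding rate control: the base layer is
# always sent whole, and the embedded enhancement layer is truncated to
# whatever fits the per-frame byte budget of the target rate.

def truncate_enhancement(frames, target_rate, fps):
    """frames: list of (base_bytes, enh_bytes) per frame.
    target_rate: desired rate in bits/s. Returns bytes sent per frame."""
    budget = target_rate / 8 / fps            # byte budget per frame
    sent = []
    for base, enh in frames:
        extra = max(0, min(enh, int(budget - base)))
        sent.append(base + extra)             # base layer never truncated
    return sent

# Two frames with a 500-byte base layer, streamed at 64 kb/s, 10 fps
# (800-byte budget per frame).
sent = truncate_enhancement([(500, 2000), (500, 300)], 64000, 10)
```

Because the enhancement bitstream is embedded, this truncation can be done at the proxy for each client's rate without re-encoding, which is why the paper calls the operation low-cost.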
We develop efficient solutions to the above problems. The objective is to offer optimal resource utilization as well as satisfactory and fair services to clients.

The performance of the proposed framework with optimal proxy cache management is extensively examined in various aspects. The results demonstrate its superiority over non-adaptive caching schemes for heterogeneous clients. More importantly, they reveal that our FGS video based adaptive caching system incurs low computation overhead and, hence, is practically deployable.

The rest of this paper is organized as follows. In Section II, we introduce the background as well as related work. Section III describes the FGS video based proxy caching system. The problems of optimal proxy management are formulated in Section IV, where we also present efficient solutions to the problems. Their performance is evaluated in Section V. In Section VI, we further consider the practical issues for deploying our system. Finally, Section VII concludes the paper and offers some future directions.

II. BACKGROUND AND RELATED WORK

A. Video Caching for Homogeneous Clients

Numerous cache algorithms for Web proxies have been proposed in the past decade [5]. However, as mentioned in the Introduction, several distinct features of video objects, such as their large volume, make the conventional Web caching algorithms inapplicable. To this end, many segment or interval caching methods have been proposed, with emphases on cache admission and replacement policies [21]. Due to the static nature of video contents and the disk bandwidth considerations at proxies, semi-statically caching popular video portions over a relatively long time period, rather than dynamically caching them in response to individual client requests, has also been suggested [3,4,15]. Wang et al. [3] proposed a video staging scheme, in which only the part of a stream that exceeds a certain cut-off rate (i.e., the bursts of a VBR stream) is cached at the proxy. Sen et al. [4] proposed a prefix caching algorithm that caches the initial frames of the ...

... of different rates [18]. Though being used in many commercial streaming products, this method suffers from high replication redundancy. Other recent studies have introduced proxy services with active filtering, which reduces the bandwidth of a video object by transcoding [2,18]. However, they usually incur much higher computation overhead due to the transcoding operations.

A more efficient method is to use layered coding (also known as scalable coding), which compresses a video into several layers [8]: the most significant layer, called the base layer, contains the data representing the most important features of the video, while additional layers, called enhancement layers, contain the data that progressively refine the quality of the reconstructed video. Layering has been widely used in live video multicast to heterogeneous and isochronous clients [18]. For proxy-assisted streaming with layered videos, Rejaie et al. [6] studied cache replacement and prefetching policies with the objective of alleviating congestion for individual clients. Kangasharju et al. [13] simplified the system model by assuming the cached contents are semi-static and only complete layers are cached. They developed effective heuristics to maximize the total revenue based on a stochastic knapsack model.

Our work differs from these previous studies in two aspects: First, we focus on the optimal resource allocation for a set of clients, rather than the performance perceived by an individual client. This is an important design issue from a system point of view. Second, the previous studies employed conventional layered coding, where the number of layers is restricted and the rate of each layer is often fixed. For example, Kangasharju et al. [13] focused on two layers only (base layer and one enhancement layer). Our work complements them by employing fine-grained scalable videos to enhance adaptability.

C. Fine-Grained Scalable (FGS) Video

FGS generalizes the conventional layering (scalable coding) methods through a bitplane coding algorithm, which uses embedded representations for the enhancement layer (also
video stream. This is particularly attractive in reducing the called FGS layer) [8]. For illustration, there are 64 (8×8) DCT
client start-up latency, due to the predictable sequential nature coefficients for each video block of the enhancement layer; all
of video accesses. the most significant bits from the 64 DCT coefficients
Most of these algorithms assumed non-adaptive/non- constitute bitplane 0, all the second most significant bits
constitute bitplane 1, and so on and so forth. In the output
scalable videos and bandwidth homogeneous clients, though
stream, the bitplanes are placed sequentially to reconstruct the
some of them also considered work-ahead smoothing to
coefficients. A post-encoding filter thus can truncate this
reduce the backbone bandwidth demand for VBR videos. Our embedded stream of the enhancement layer to achieve any
work is motivated by these studies, in particular, the staging specified output rate, with negligible mismatches due to block
and prefix caching algorithms, and complements them by boundary constraints for the truncation [8,12]. We stress that
focusing on the issues arising from caching scalable videos this rate control method has two advantages: (1) like
with heterogeneous clients. transcoding, it enables fine-grained rate adaptation, but the
computation overhead of the filter is much lower; (2) like
B. Video Caching for Heterogeneous Clients layered adaptation, it can be applied to stored videos and
To handle the heterogeneity of client bandwidths, a enables the proxy to adjust the rate of a cached stream at a low
straightforward method is to produce replicated video streams cost. Specifically, for narrowband clients, the proxy can
0-7803-8356-7/04/$20.00 (C) 2004 IEEE IEEE INFOCOM 2004
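To make the bitplane mechanism of Section II.C concrete, the following sketch splits toy coefficients into bitplanes and reconstructs them after truncation. It is a simplified illustration of the idea only (plain integer magnitudes, no entropy coding, no block-boundary handling), not the MPEG-4 FGS bitstream syntax.

```python
def to_bitplanes(coeffs, num_planes=8):
    """Split enhancement-layer coefficients into bitplanes:
    plane 0 holds the most significant bit of every coefficient."""
    planes = []
    for p in range(num_planes - 1, -1, -1):        # MSB first
        planes.append([(c >> p) & 1 for c in coeffs])
    return planes

def truncate_and_reconstruct(planes, keep):
    """A post-encoding 'filter': keep only the first `keep` bitplanes
    and reconstruct the (coarser) coefficient values."""
    num_planes = len(planes)
    coeffs = [0] * len(planes[0])
    for idx, plane in enumerate(planes[:keep]):
        weight = 1 << (num_planes - 1 - idx)       # bit weight of this plane
        coeffs = [c + b * weight for c, b in zip(coeffs, plane)]
    return coeffs

coeffs = [200, 13, 7, 0]                  # toy DCT magnitudes for one block
planes = to_bitplanes(coeffs)
print(truncate_and_reconstruct(planes, 8))   # all 8 planes: [200, 13, 7, 0]
print(truncate_and_reconstruct(planes, 4))   # 4 planes kept: [192, 0, 0, 0]
```

Keeping fewer bitplanes reduces the output rate while leaving the most significant information intact, which is exactly why a cheap truncation filter suffices for rate control.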
Specifically, for narrowband clients, the proxy can reduce the streaming rate using the filter; for wideband clients, the proxy can fetch some uncached portion (i.e., higher-order bitplanes) from the server and assemble it with the cached portion to generate a higher rate stream. Such operations are difficult to implement using transcoding, not only because it has high computation overhead but also because it changes the stream syntax.
FGS coding has been adopted in the MPEG-4 standard and is undergoing active improvement. We have seen proposals that make efficient use of FGS coding for adaptive video streaming [19]. However, they did not consider proxy caching. To our knowledge, the only work that employed FGS videos in caching is [20]. Their focus, however, was on developing a general cache management framework, in particular, the replacement policies for mixed-media streaming; the FGS coding was used for performance evaluation purposes.

III. SYSTEM MODEL OF FGS VIDEO BASED CACHING

The video streaming system consists of a server that stores a repository of videos, and a set of proxies on the edge of the network. Selected videos are partially cached at the proxies. A client request is first forwarded to a nearby proxy, which intercepts the request and computes a schedule for streaming: the cached portion of the video can be delivered to the client directly; some uncached portion, if needed, will be fetched from the server and then relayed to the client. Although this process is similar to that of many existing systems, our model has three novel features, making it more general and flexible.
First, we consider a relatively more complex network (e.g., an Intranet) behind the proxy, instead of the simple LAN assumed in most existing studies. Examples include an enterprise network or a campus network, which remains highly heterogeneous in terms of client access bandwidths, due to such factors as hardware configurations, connection methods (e.g., Ethernet, ADSL, or wireless LAN), and administrative policies (e.g., in a campus network, the access bandwidth provisioned for a faculty member would be higher than that for a student). To reflect such heterogeneity, we assume that there are M classes of clients, and the maximum access bandwidth for a client (or simply client bandwidth) of class i is given by c_i, i = 1, 2, ..., M, which is an upper bound on the video streaming rate from the proxy to a client of class i. Without loss of generality, we number the classes in ascending order of the client bandwidth, that is, c_1 ≤ c_2 ≤ … ≤ c_M.
Second, we assume that the clients could terminate a video playback prematurely after they have requested playback from the beginning of a video. Existing studies on video server workloads [10] have revealed that such early terminations occur quite often and, hence, should be considered in system dimensioning. One approach for modeling early termination is to partition a video into two parts: a prefix and a suffix, where the prefix could be a preview of the video. If a client feels uninterested after watching the prefix, it will terminate the connection; otherwise, it will continue playback by retrieving the suffix. We denote the lengths of the prefix and the suffix by L_t and L_s, respectively. Assume the probability of early terminations is p_ET, 0 < p_ET < 1. The probability of a client accessing the entire video (also the probability of accessing the suffix) is thus 1 − p_ET. While here we consider this simple model only, the method can be extended to a multiple-segment case with non-uniform access rates due to such interactivities as VCR-like operations in playback (some preliminary results can be found in [22]).
Finally and most importantly, we advocate scalable adaptive videos in this system, in particular, the MPEG-4 FGS videos. To model an FGS video, we assume that the base layer of the video has a constant rate r_base, which cannot be further partitioned along the rate axis. As such, the base layer represents an ensured lowest playback quality. On the other hand, neglecting the effect of block boundaries for bitplane truncation, the enhancement layer can be adaptively partitioned into any given rate using a filter, either at the server or at the proxy. Thus, as illustrated in Fig. 1, a proxy manager can set the streaming rate to a client of class i in the range of [r_base, c_i].

Fig. 1. Functionalities of the video server and a proxy. (The server holds the video repository and a filter; the proxy holds a filter/assembler, the cache, a proxy monitor, and a request manager, and fetches the uncached FGS stream over the backbone network.)

The proxy manager also determines which portion of a video is to be cached. In the conventional non-adaptive video caching system, given a cache size for the video, the portion to be cached is simply determined by the length (in terms of playback time). However, with FGS videos, there is one additional dimension to explore: the rate of the portion to be cached. Such flexibility potentially enables better resource utilization, but also complicates the proxy cache management. In addition, there are two cases in which some uncached portion is to be fetched from the server: (1) the length of the demanded portion is longer than that of the cached portion; and (2) the streaming rate is higher than that of the cached portion. In the second case, the uncached bitplanes will be fetched from the server and then assembled with the cached portion to form a stream of higher rate.
As in many previous studies [3,4,14], we assume that the contents of the proxy cache are semi-static and updated periodically with changes of the system workloads.
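The expected volume that a single request demands under the prefix/suffix early-termination model of this section can be written down directly; the following sketch does so, with illustrative parameter values of our own choosing.

```python
def expected_demand(b, L_t, L_s, p_et):
    """Expected volume demanded per request at streaming rate b:
    every client watches the prefix; with probability 1 - p_et
    it also retrieves the suffix."""
    return b * L_t + (1.0 - p_et) * b * L_s

# Illustrative numbers: 20-min prefix, 80-min suffix, rate in Kbps.
L_t, L_s, p_et = 20.0, 80.0, 0.3
print(expected_demand(256.0, L_t, L_s, p_et))   # ~19456 Kbit-minutes per request
```

The portion of this demand that is not in the proxy cache is what must cross the backbone, which is the quantity minimized in Section IV.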
For each period, the key issues in proxy management are to find the optimal length as well as the optimal rate for caching a video, and to determine the streaming rate for each client of the video. To ensure fairness, we let the video streaming rate to any client of class i be identical, denoted by b_i, b_i ≤ c_i, i = 1, 2, ..., M. For a multi-video case of N videos, the proxy manager also needs to determine an optimal resource (cache space and backbone bandwidth) sharing among the videos.
The key parameters of this model are summarized in Table 1. We assume all these parameters are known a priori and do not change drastically in a period. Moreover, for ease of exposition, we focus on the interactions among the origin server, a single proxy, and the clients of the proxy. Our results, however, are applicable to the multi-proxy case where each proxy serves a non-overlapping set of clients.

    L        Length of the video, L = L_t + L_s
    L_t      Length of the prefix
    L_s      Length of the suffix
    p_ET     Probability of early terminations
    H        Proxy cache size for the video
    r_base   Base layer rate of the video
    λ        Client request rate for the video
    M        Number of client classes
    p_i      Probability that a client is in class i, p_i > 0
    c_i      Client bandwidth of class i
    b_i      Streaming rate for a class i client (r_base ≤ b_i ≤ c_i)
    α_i      Utility of a client in class i, α_i = b_i / c_i
    V̂        Volume of the video with rate c_M, V̂ = L · c_M
    B̂        Backbone bandwidth consumption with ideal client utility assignment and no caching

Table 1. Model parameters for a single video. For the multi-video case of N videos, a superscript (k) is added to each parameter of video k, k = 1, 2, ..., N.

IV. THE PROXY CACHE MANAGEMENT PROBLEM AND SOLUTIONS

In this section, we consider the proxy cache management problem for a single video object as well as for multiple heterogeneous videos. Our objective in designing the proxy cache management module is two-fold: first, to maximize the client utility, which is related to the streaming rate to each class of clients; and second, to minimize the transmission cost, as the video streaming imposes very high data delivery demands on the network. As suggested in previous studies [3], we assume that the local (i.e., proxy-to-client) transmission cost is trivial, and the overall transmission cost is a non-decreasing function of the average backbone (i.e., server-to-proxy) bandwidth consumption. Note that the above two objectives could conflict, because it is obvious that the higher the streaming rate (which results in a higher client utility), the more the uncached portion to be fetched from the origin server (which incurs a higher transmission cost). As such, it is important to find a trade-off between them.

A. Minimizing Transmission Cost

We first consider the transmission cost for a single video with specified streaming rates for each class of clients, b_1, b_2, …, b_M, b_i ≤ c_i, i = 1, 2, ..., M. We attempt to answer the following question: Given a limited cache size H for the video, which portion of the stream is cached (referred to as a caching scheme) such that the transmission cost is minimized? As said, this objective is equivalent to minimizing the backbone bandwidth consumption.
Note that the rates for different positions of the cached portion can be different when using FGS videos. Denote the rate for position l (measured in the elapsed time from the beginning of the video) of the cached video by r(l), l ∈ [0, L]. A caching scheme for the video is therefore uniquely determined by the shape of r(l). We say that the scheme is valid if 1) ∫_0^L r(l) dl ≤ H and 2) for any l ∈ [0, L], r_base ≤ r(l) ≤ c_M or r(l) = 0. The above two constraints follow the cache size limit and the base layer rate limit, respectively.
A caching scheme is optimal if it is valid and yields the minimum backbone bandwidth consumption for fetching the uncached portion. Let V̂ denote the total volume of the video with rate c_M (the maximum client bandwidth). Obviously, if H ≥ V̂, the caching scheme r(l) = c_M, l ∈ [0, L] is optimal. For the case of limited cache size, we show two lemmas that facilitate searching for the optimal caching scheme.

Lemma 1: If H ≤ r_base · L_t, the caching scheme

    r(l) = r_base,  l ∈ [0, H/r_base]
    r(l) = 0,       l ∈ (H/r_base, L]

is optimal.
Proof: This scheme is clearly valid. For any client request, a video portion of volume H is saved from being transmitted over the backbone. This is the maximum saving per client request that a valid caching scheme could achieve under cache size H. □

Lemma 2: If r_base · L_t < H < V̂, assuming that the size of the cached prefix is fixed to H_t (∈ [r_base · L_t, H]), there exists an optimal caching scheme:

    r(l) = r_t,  l ∈ [0, L_t]
    r(l) = r_s,  l ∈ (L_t, L_c]
    r(l) = 0,    l ∈ (L_c, L]

where r_t and r_s respectively represent the (constant) rates of the cached prefix and cached suffix, and are given by r_t = H_t / L_t and r_s = max{ r_base, (H − H_t) / L_s }; L_c is the total length of the cached portion, L_c = L_t + (H − H_t) / r_s.
The proof of this lemma can be found in [22]. Let us concentrate on this case (r_base · L_t < H < V̂) with a given H_t.
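Lemmas 1 and 2 can be transcribed directly: given H and a prefix budget H_t, the two-rate scheme is fully determined by (r_t, r_s, L_c). The sketch below follows the paper's formulas; the function name and example numbers are ours, and no boundary rounding to the cache grain is attempted.

```python
def caching_scheme(H, H_t, L_t, L_s, r_base):
    """Return (r_t, r_s, L_c) of the two-rate scheme in Lemma 2.
    Assumes r_base * L_t <= H_t <= H (Lemma 2's precondition)."""
    r_t = H_t / L_t                         # prefix cached at rate H_t / L_t
    r_s = max(r_base, (H - H_t) / L_s)      # suffix rate, at least the base rate
    L_c = L_t + (H - H_t) / r_s             # cached length; < L when r_s = r_base
    return r_t, r_s, L_c

# Example: 20-min prefix, 80-min suffix, base rate 128 Kbps.
r_t, r_s, L_c = caching_scheme(H=20480.0, H_t=10240.0,
                               L_t=20.0, L_s=80.0, r_base=128.0)
print(r_t, r_s, L_c)   # 512.0 128.0 100.0 -> whole video cached, richer prefix
```

Note how the prefix, which serves both early-terminated and complete requests, naturally receives the higher rate (r_t ≥ r_s).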
It is easy to show that r_t ≥ r_s for an optimal caching scheme because a cached prefix will serve both early-terminated requests and all other requests. The backbone bandwidth consumption for all requests from class i is therefore given by

    B^i_{H,H_t}(b_i) =
        λ p_i [(1 − p_ET)(b_i L − H) + p_ET (b_i L_t − H_t)],   r_t ≤ b_i ≤ c_M
        λ p_i (1 − p_ET)[b_i L_s − r_s (L_c − L_t)],            r_s < b_i < r_t        (1)
        λ p_i (1 − p_ET) b_i (L − L_c),                         r_base ≤ b_i ≤ r_s

which can be easily validated from Fig. 2 (for r_s < b_i < r_t). It follows that the optimal caching scheme for r_base · L_t < H < V̂ can be accomplished by a one-dimensional search on H_t in the range of [r_base · L_t, H]. The optimal solution, or the minimum backbone bandwidth consumption, is thus

    B_H = min_{r_base·L_t ≤ H_t ≤ H, r_t ≥ r_s}  Σ_{i=1}^{M} B^i_{H,H_t}(b_i).         (2)

Assume the minimum cache grain is v, which could be the size of a disk block or the size of a GOP (Group-of-Pictures) of the video; the cache size H, as well as H_t, is always a multiple of the grain v. Then the complexity to search B_H is O(H/v).
For H ≥ V̂ and H ≤ r_base · L_t, the optimal caching scheme can also be uniquely represented using r_t, r_s, and L_c. The corresponding backbone bandwidth consumption thus can be calculated by Eqs. 1 and 2 as well.

Fig. 2. Illustration of different portions of an FGS video stream.

B. Trading off Backbone Bandwidth with Client Utility

In the above optimization, we assume that the streaming rates for the clients of the video are specified. We now consider a more general and flexible scheme that makes effective use of FGS.
Define the utility of a class i client as α_i = b_i / c_i for r_base ≤ b_i ≤ c_i. The ideal utility assignment is α_i = 1, i = 1, 2, ..., M, i.e., the client bandwidth is fully utilized for every class. However, it is clear that the higher the client utility, the more the transmission cost or the backbone bandwidth consumption. To reduce bandwidth consumption, one solution is to block some of the client requests. Although this has been suggested in previous studies, it is unfair to the blocked clients. The FGS video, however, offers an alternative method by trading off bandwidth with client utility, that is, assigning a somewhat lower yet acceptable streaming rate to each class of clients.
More explicitly, we would like to investigate whether there exists some utility assignment to each class of clients of the video, such that the backbone bandwidth consumption is no more than ηB̂, where B̂ is the backbone bandwidth consumption with the ideal utility assignment and zero-size cache (i.e., no caching); η thus reflects the factor of backbone bandwidth reduction. We are particularly interested in a utility assignment that achieves the best "social welfare", i.e., the total utility of the clients is maximized. This utility optimization problem for the single video can be formally described as follows:

    MU-SV:  max U_{H,η} = Σ_{i=1}^{M} λ p_i α_i,                                       (3)
            s.t.  r_base / c_i ≤ α_i ≤ 1,        i = 1, 2, ..., M,
                  α_{i−1} c_{i−1} ≤ α_i c_i,     i = 2, 3, ..., M,
                  B_H ≤ ηB̂,

where U_{H,η} is the total client utility per unit time for a cache size H and a backbone bandwidth reduction factor η. The constraint α_{i−1} c_{i−1} ≤ α_i c_i (equivalent to b_{i−1} ≤ b_i) is to preserve the order (priority) of the client classes in resource sharing.
Assume that there is a minimum bandwidth grain w; the backbone bandwidth consumption for a certain class of clients is rounded to a multiple of w. Fig. 3 presents an algorithm to solve the problem instance of a given size of the cached prefix, H_t. MU-SV thus can be accomplished by a search on H_t, similar to that in the previous subsection.
In this algorithm, u_{i,j,k} represents the maximum total utility per unit time for classes 1 through i, with backbone bandwidth j consumed by classes 1 through i, and backbone bandwidth k consumed by class i. The function (B^i_{H,H_t})^{−1}(k) is a generalized inverse of B^i_{H,H_t}(b_i) (see Eq. 1), representing the highest streaming rate for a class i client given that the backbone bandwidth consumed by this class is k. A special case is k = 0. Recall that we assume λ p_i > 0 and 1 − p_ET > 0 in the system; zero backbone bandwidth consumption implies that L_c = L and the streaming rate is no higher than r_s. Therefore, if L_c = L, we let (B^i_{H,H_t})^{−1}(0) be r_s to maximize the total utility; otherwise, we set (B^i_{H,H_t})^{−1}(0) to 0 and B^{i−1}_{H,H_t}(0) to −∞. Finally, note that B^i_{H,H_t}(b_i) is undefined for b_i < r_base and b_i > c_M; if directly inverting Eq. 1 yields a value in these two ranges, we set (B^i_{H,H_t})^{−1}(k) to 0 and c_M, respectively.
The correctness of this algorithm can be found in [22].
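Equations 1 and 2 can be turned into a small reference sketch: the per-class cost follows Eq. 1 case by case, and B_H is obtained by the one-dimensional search over H_t on the cache grain v, skipping points that violate the r_t ≥ r_s constraint of Eq. 2. The function and parameter names below are ours, not the authors' code.

```python
def class_cost(b, lam, p, p_et, L, L_t, L_s, H, H_t, r_base):
    """Backbone bandwidth for one class at streaming rate b (Eq. 1)."""
    r_t = H_t / L_t
    r_s = max(r_base, (H - H_t) / L_s)
    L_c = L_t + (H - H_t) / r_s
    if b >= r_t:                       # fetch for both prefix and suffix viewers
        return lam * p * ((1 - p_et) * (b * L - H) + p_et * (b * L_t - H_t))
    if b > r_s:                        # prefix fully cached; fetch part of suffix
        return lam * p * (1 - p_et) * (b * L_s - r_s * (L_c - L_t))
    return lam * p * (1 - p_et) * b * (L - L_c)   # only the uncached tail

def B_H(b_rates, pops, lam, p_et, L, L_t, L_s, H, r_base, v):
    """Eq. 2: minimize the total cost over H_t, on multiples of the grain v."""
    best = float("inf")
    H_t = r_base * L_t
    while H_t <= H:
        r_t = H_t / L_t
        r_s = max(r_base, (H - H_t) / L_s)
        if r_t >= r_s:                 # Eq. 2's side constraint
            cost = sum(class_cost(b, lam, p, p_et, L, L_t, L_s, H, H_t, r_base)
                       for b, p in zip(b_rates, pops))
            best = min(best, cost)
        H_t += v
    return best

# One class at the base rate with the whole base layer cached: zero backbone cost.
print(B_H([128.0], [1.0], 1.0, 0.3, 100.0, 20.0, 80.0, 12800.0, 128.0, 1280.0))
```

Since only H/v candidate values of H_t exist, the search is the O(H/v) procedure described above.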
Given the cache grain v, applying the algorithm for each H_t solves problem MU-SV in time O(M · (ηB̂/w)^3 · H/v). Since the accuracy of this algorithm depends on v and w, we shall investigate their impact in the next section through simulation experiments.

    u_{1,j,k} ← −∞                                    /* Initialization */
    for k = 0 to ηB̂ step w do
        for j = k to ηB̂ step w do
            u_{1,j,k} ← λ p_1 [(B^1_{H,H_t})^{−1}(k)] / c_1

    for i = 2 to M do                                 /* Table filling */
        for k = 0 to ηB̂ step w do
            y ← (B^i_{H,H_t})^{−1}(k)
            for j = k to ηB̂ step w do
                max ← −∞
                for x = 0 to B^{i−1}_{H,H_t}(y) step w do
                    temp ← λ p_i y / c_i + u_{i−1, j−k, x}
                    if max < temp then max ← temp, bw ← x
                u_{i,j,k} ← max,  h_{i,j,k} ← bw

    u* ← −∞                                           /* Extract optimal results */
    for bw = 0 to ηB̂ step w do
        temp ← u_{M, ⌊ηB̂/w⌋·w, bw}
        if u* < temp then u* ← temp, bw* ← bw

    b′ ← ⌊ηB̂/w⌋·w,  bw′ ← bw*
    for i = M down to 1 do
        α*_i ← (B^i_{H,H_t})^{−1}(bw′) / c_i
        b′ ← b′ − bw′,  bw′ ← h_{i, b′, bw′}

    Output:
    u*:   maximum expected utility for the given H, H_t, η
    α*_i: corresponding utility assignment of a class i client

Fig. 3. Algorithm for an instance of MU-SV with given H_t.

C. Proxy Management for Multiple Heterogeneous Videos

We now extend the above proxy cache management policies to the case of multiple heterogeneous videos. Assume that there are N videos, indexed as 1 through N. Given the total cache size H_T and the total bandwidth reduction factor η_T for all the videos, our objective is to find a cache partitioning (H^(k), k = 1, 2, ..., N) as well as a bandwidth partitioning (η^(k), k = 1, 2, ..., N) for these videos, such that the system utility (total utility of all the clients for the videos) is maximized. This problem can be formally described as follows:

    MU-MV:  max Σ_{k=1}^{N} U^(k)_{H^(k), η^(k)},                                      (4)
            s.t.  Σ_{k=1}^{N} H^(k) ≤ H_T,
                  Σ_{k=1}^{N} B^(k)_{H^(k)} ≤ η_T Σ_{k=1}^{N} B̂^(k).

This is a 2-dimensional (bandwidth and space) version of the knapsack problem. Given cache grain v and backbone bandwidth grain w, it can be solved by extending the known pseudo-polynomial time algorithm for the 1-D knapsack [1: Ch. 16], and the time complexity of this solution is bounded by O(N · H_T · H′ · v^−2 · (η_T Σ_{k=1}^{N} B̂^(k)) · B′ · w^−2), where B′ = min{ η_T Σ_{k=1}^{N} B̂^(k), max_k B̂^(k) } and H′ = min{ H_T, max_k V̂^(k) } [22].

V. PERFORMANCE EVALUATION

In this section, we evaluate the performance of our FGS-based caching system through simulation experiments. We examine the system along two dimensions: 1) the transmission cost, or backbone bandwidth consumption; and 2) the quality of the delivered streams, i.e., the client utility.

A. System Settings

For the single video case, we assume that there are five classes of clients, and the client bandwidths of the classes are exponentially spaced, i.e., c_1 = 128 Kbps and c_i = 2c_{i−1} for i = 2, 3, 4, 5, which cover the bandwidths of a broad spectrum of access technologies. For the client population distribution among the classes, (p_1, p_2, ..., p_5), we have evaluated various settings in our experiments. Due to space limitations, in this paper we present the simulation results for three typical distributions:
(1) Uniform: (0.2, 0.2, 0.2, 0.2, 0.2);
(2) S-narrow: (0.5, 0.2, 0.15, 0.1, 0.05);
(3) S-wide: (0.05, 0.1, 0.15, 0.2, 0.5).
The latter two are skewed distributions, respectively dominated by narrowband clients and wideband clients. The lengths of the entire video and the prefix are set to 100 minutes and 20 minutes, respectively. The probability of early terminations is set to 0.3. Although a rigorous evaluation of the client termination behavior is beyond the scope of this work, we note that higher probabilities of early terminations have been observed in reality [10]; in this case, more benefits can be expected from the caching paradigm advocated in our system.
We assume that the requests follow a Poisson arrival process with a mean rate of one request per minute. We normalized the backbone bandwidth by B̂ (the backbone bandwidth consumption with ideal utility assignment and no caching) and the cache size by V̂ (the total volume of the video with rate c_M) for all presented results.
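The evaluation setup just described is compact enough to restate in code; the class distributions are copied from the text, while the derived quantities follow the model's definitions.

```python
# Five client classes with exponentially spaced bandwidths (Kbps):
# c_1 = 128 and c_i = 2 * c_{i-1}.
c = [128 * 2 ** i for i in range(5)]          # [128, 256, 512, 1024, 2048]

# The three class-population distributions evaluated in the paper.
distributions = {
    "Uniform":  [0.2, 0.2, 0.2, 0.2, 0.2],
    "S-narrow": [0.5, 0.2, 0.15, 0.1, 0.05],
    "S-wide":   [0.05, 0.1, 0.15, 0.2, 0.5],
}

# V_hat: total volume at the highest class rate, for a 100-minute video.
L = 100.0
V_hat = L * c[-1]                             # 204800.0 Kbit-minutes

# Mean client bandwidth under each distribution, for intuition on skewness.
for name, p in distributions.items():
    print(name, sum(pi * ci for pi, ci in zip(p, c)))
```

As the printed means show, S-wide is dominated by high-bandwidth classes and S-narrow by low-bandwidth ones, which is what drives the contrasting results in Figs. 4 and 5.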
Hence, our conclusions are generally applicable when the parameters are proportionally scaled. Unless explicitly specified, the default cache grain and backbone bandwidth grain are set at V̂/200 and B̂/200, respectively.
For the multiple video case of N objects, we use similar client settings as described above for each single video, and assume that the access probabilities of the videos follow a Zipf distribution, which has been suggested by workload measurements of media servers. With this distribution, the access probability of video k is (1/k)^θ / Σ_{j=1}^{N} (1/j)^θ, where θ reflects the skewness among the client populations of the videos.

B. Evaluation Results for Single Video

B.1 Backbone Bandwidth Reduction

In the first set of experiments, we investigate the backbone bandwidth consumption. We are interested in examining the backbone bandwidth reductions by employing the optimal caching scheme, compared to the following two baseline schemes:
(1) MaxLen: r(l) = r′ for l ∈ [0, H/r′] and r(l) = 0 for l ∈ (H/r′, L], where r′ = max{ r_base, H/L };
(2) MaxRate: r(l) = c_M for l ∈ [0, H/c_M] and r(l) = 0 for l ∈ (H/c_M, L].
These two schemes are non-adaptive because they are not aware of the client bandwidth distributions. They also resemble the caching schemes for a coarse-grained layering of 2 layers, i.e., caching the base layer only and caching both the base and enhancement layers.
Fig. 4 shows the backbone bandwidth reductions for different cache sizes and class distributions. The client bandwidth is assumed to be fully utilized for all classes, that is, α_i = 1, i = 1, 2, ..., M. We observe remarkable reductions achieved by our optimal scheme over the two baseline schemes, which are generally over 10% and sometimes over 50%. The reduction depends on the class distributions, e.g., for the S-narrow distribution (Fig. 4b), the reduction is particularly high when compared with the MaxRate scheme, as most of the clients have a relatively low bandwidth and, hence, caching the stream of the highest rate becomes wasteful. In this case, increasing the length of the cached stream becomes a better alternative. However, compared to our optimal scheme, the MaxLen scheme still suffers from more than 10% bandwidth excess, because it is not flexible in setting the rates for the cached prefix and suffix to better accommodate early-terminated requests. On the contrary, with the S-wide distribution (Fig. 4c), since most clients have high bandwidth demands, the MaxRate scheme is better than the MaxLen scheme, but is still suboptimal. Finally, with the Uniform distribution (Fig. 4a), both the MaxLen and the MaxRate schemes are far from satisfactory.
It is worth noting that, compared to the two non-optimal schemes, there is virtually no extra cost in employing the optimal scheme in our FGS video based caching framework. Hence, we believe that our optimal scheme is an effective means to improve the system performance.

B.2 Utility Improvement

We now examine the tradeoff between the client utility and the backbone bandwidth consumption, given that the streaming rates to the clients can be regulated using the filter/assembler at the proxy. We employ the optimal caching and utility assignment algorithms for our system, as described in Section IV.B. Fig. 5 shows the expected utility for all clients as a function of the backbone bandwidth consumption for different cache sizes and class distributions. Note that the normalized backbone bandwidth is essentially equal to η, as shown in Eq. 3.
It can be seen that, to achieve the optimal utility (= 1), a relatively high backbone bandwidth is to be consumed if the cache size is very small, e.g., 40% of backbone bandwidth B̂ with a cache size of 0.3V̂ for the uniform class distribution (Fig. 5a). However, there is a nonlinear relation between the backbone bandwidth consumption and the client utility. As a result, for the same setting, we can achieve an expected client utility of 0.9 by consuming only 15% of B̂; that is, a 10% utility reduction leads to a 62.5% backbone bandwidth reduction. This is particularly evident for the S-narrow distribution, not only because a relatively high volume is cached for the stream to a narrowband client, but also because a slight reduction of the streaming rate to a wideband client will benefit the set of narrowband clients. In this case, the expected utility is over 0.8 even with very limited resources (e.g., backbone bandwidth of 0.05B̂ and cache size of 0.1V̂). For the S-wide class distribution, the reduction is not that significant. However, in this case, the absolute backbone bandwidth consumption is much higher than that for the other two distributions; hence, a slight reduction in the normalized bandwidth would still lead to a great reduction in the absolute backbone bandwidth consumed.
Such an adaptive setting of utility clearly offers a flexible space for a designer to explore to either maximize the overall revenue or minimize the overall cost. On the contrary, if the cache size is less than 0.1V̂ and the backbone bandwidth is less than 0.5B̂, a non-adaptive system that fixes the client utility to one does not even work for any class distribution.

B.3 Sensitivity to Allocation Grains

In our experiments, the default cache grain v and backbone bandwidth grain w are set at V̂/200 and B̂/200, respectively. To investigate their impact, we repeat the experiments of the previous subsection with various grain settings. Fig. 6a shows the maximum gaps in terms of optimal client utilities between each setting and a finer-grained setting: B̂/1000 (bandwidth grain) and V̂/1000 (cache grain). We can see that the performance gap quickly decreases with the refinement of the grains, and the gap for the default setting (V̂/200 and B̂/200) is already lower than 0.008.
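The two baselines compared against in Fig. 4 follow directly from their definitions; a sketch (with illustrative parameter values of our own) computes the cached rate and length of each:

```python
def max_len(H, L, r_base):
    """MaxLen: cache at the lowest admissible rate, maximizing cached length."""
    r = max(r_base, H / L)            # r' = max{r_base, H/L}
    return r, H / r                   # (rate, cached length)

def max_rate(H, c_M):
    """MaxRate: cache at the highest client rate, maximizing cached quality."""
    return c_M, H / c_M               # (rate, cached length)

L, r_base, c_M = 100.0, 128.0, 2048.0
H = 40960.0                           # e.g. a cache of 0.2 * V_hat = 0.2 * 100 * 2048
print(max_len(H, L, r_base))          # high length, low rate
print(max_rate(H, c_M))               # high rate, short length
```

Both extremes ignore the client bandwidth distribution, which is why neither tracks the optimal scheme across the three distributions in Fig. 4.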
Fig. 4. Backbone bandwidth reductions achieved by the optimal caching scheme (y-axis: backbone bandwidth reduction (%), compared to MaxLen and to MaxRate; x-axis: normalized cache size, 0.05 to 0.5). (a) Uniform class distribution; (b) S-narrow class distribution; (c) S-wide class distribution.

Fig. 5. Expected client utility as a function of backbone bandwidth consumption, for cache sizes 0.1, 0.3, and 0.5 (y-axis: expected utility; x-axis: normalized backbone bandwidth, 0.05 to 0.5). (a) Uniform class distribution; (b) S-narrow class distribution; (c) S-wide class distribution.
Fig. 6. Performance gaps as compared to a fine-grained setting v = V̂/1000, w = B̂/1000, for bandwidth and cache grains ranging from 1/50 to 1/800. (a) Max gap; (b) Average gap.

This is negligible from a client perception point of view. We also show the average gaps in Fig. 6b, which are in fact much smaller than the corresponding maximum gaps shown in Fig. 6a. Therefore, we believe that the default setting is reasonably good, especially considering that the computation times with this setting are generally less than 30 ms on a common PC (Pentium III 1 GHz).

C. Evaluation Results for Multiple Videos

Finally, we investigate the performance of the proxy management algorithm for multiple videos. We implement the joint optimization scheme MU-MV for cache and bandwidth partitioning (see Section IV.C), as well as a baseline scheme with a uniform cache partitioning (H^(k) = H_T / N) among the videos and a bandwidth partitioning proportional to the client population of each video (η^(k) = (λ^(k) / Σ_{j=1}^{N} λ^(j)) · (η_T Σ_{j=1}^{N} B̂^(j)) / B̂^(k)). The utility assignment for each single video is optimized using the algorithm for MU-SV. It is easy to verify that the system utility for the baseline scheme is independent of the skew factor θ. Fig. 7 shows the system utility improvement by the joint optimization scheme for H_T = 0.2 Σ_{k=1}^{N} V̂^(k) and η_T = 0.1. It is clear that the joint optimization significantly improves the system utility, in particular, with large skew factors, i.e., when the client populations of the videos are highly heterogeneous. For the S-wide distribution (Fig. 7c), which has a relatively low system utility for the baseline scheme under the given settings, the utility improvement is over 30% for high skew factors.
To identify the respective contributions of the two optimization dimensions (i.e., space and bandwidth), we also show in Fig. 7 the improvements achieved by employing the optimal cache partitioning alone (Cache Optimal) and the optimal bandwidth partitioning alone (Band Optimal). Their corresponding bandwidth and cache partitionings are proportional and uniform, respectively. It can be seen that, though using either optimal partitioning alone also improves the system utility, there remain remarkable gaps as compared to that of the joint optimization, especially for higher skew factors. This is because the utility of each video is a
[Figure: three panels plotting Utility Improvement (%) against Skew Factor (0 to 1) for the Joint Optimal, Band Optimal, and Cache Optimal schemes, with y-axis ranges of roughly 0–15%, 0–5%, and 0–35% in panels (a), (b), and (c), respectively.]

Fig. 7. System utility improvement as a function of skew factor θ ( H^T = 0.2 Σ_{k=1}^{N} V̂(k), η^T = 0.1 ) for multi-video allocation. The system utilities of the baseline scheme for the Uniform, S-narrow, and S-wide distributions are 0.77, 0.92, and 0.56, respectively. (a) Uniform distribution; (b) S-narrow distribution; (c) S-wide distribution.
nonlinear function of the cache space or backbone bandwidth consumed by the video and, hence, the uniform cache partitioning or the proportional bandwidth partitioning alone is suboptimal for these heterogeneous videos. As such, we believe that it is important to use the joint optimization for the videos.

VI. COMPARISONS AND PRACTICAL CONSIDERATIONS

A. Scalable Video or Replicated Video?

As mentioned in Section II, yet another approach to handling client heterogeneity is stream replication [9,18]; that is, on the server's side, replicated streams are generated to meet the bandwidth condition of each class of clients. Stream replication has several advantages, such as simplicity and compatibility with conventional non-scalable video coders. Hence, it has been widely used in existing commercial streaming systems, e.g., RealNetworks' SureStream. Nevertheless, replication also leads to data redundancy, not only in server storage but also in proxy cache and bandwidth consumption.

Scalable video and replicated video based streaming systems have been compared in [9] with no proxy caching. A cache-aware comparison was shown in [16]; it considered coarse-grained layering (two layers), and the metric of interest was the request blocking probability. In this set of experiments, we try to complement these works from a client utility and bandwidth consumption point of view with the use of FGS videos.

We first investigate the backbone bandwidth consumptions of the two approaches when the client bandwidth is to be fully utilized. To make a fair comparison, we develop the optimal caching scheme for the stream replication based system. In this system, the cached portion of a stream serves the clients of its own class only, and its rate is equal to the client bandwidth of the class. The problem is thus how to partition a cache of given size H among the M replicated streams of the video. This is a variation of the cache allocation problem for multiple heterogeneous videos, as solved in [14].

Using the optimal caching schemes, we calculate the backbone bandwidth reduction of the FGS video based system against the stream replication based system, as shown in Fig. 8. For the Uniform and S-narrow distributions, the reduction is significant, often over 40% and sometimes over 60%. For the S-wide distribution, the reduction is relatively smaller because the backbone bandwidth consumption is dominated by the requests from the wideband clients. Nevertheless, the absolute value of saved bandwidth remains high enough in this case, as discussed in Section V-B.2.

[Figure: Backbone Bandwidth Reduction (%) (0 to 80) against Normalized Cache Size (0.05 to 0.5) for the Uniform, S-narrow, and S-wide distributions.]

Fig. 8. Backbone bandwidth reduction of FGS video based caching against stream replication based caching for different cache sizes and class distributions.

Next, we consider the case of flexible utility assignment, where the client utility can be lower than one. The problem of optimal caching and client utility assignment for the stream replication based system can be formulated as follows:

MU-REP: max U_{H,η} = Σ_{i=1}^{M} λ p_i α_i , (7)

s.t. r_min / c_i ≤ α_i ≤ 1 , i = 1, 2, ..., M ,
α_{i-1} c_{i-1} ≤ α_i c_i , i = 2, 3, ..., M ,
Σ_{i=1}^{M} h_i ≤ H , and B_H ≤ η B̂ ,

where h_i is the cache size allocated to stream i, and r_min is the lowest streaming rate, which is set to r_base in the experiments to ensure a fair comparison. This problem can also be solved by checking different partitionings of the cache and, for each partitioning, the optimal utility
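The search just described, i.e., checking different cache partitionings and optimizing the utility assignment for each, can be sketched as follows. This is a minimal sketch under stated assumptions, not the paper's implementation: the per-partitioning optimizer is left as a caller-supplied oracle, since the model behind the backbone constraint B_H ≤ ηB̂ is outside this excerpt, and the grid granularity w plays a role analogous to the cache grain examined in Fig. 6.

```python
from itertools import product

def solve_mu_rep(M, H, w, best_assignment):
    """Enumerate cache partitionings (h_1, ..., h_M) on a grid of
    granularity w with sum(h_i) <= H; return the best objective value
    reported by best_assignment(h), or None if no partitioning is
    feasible.  best_assignment is a hypothetical oracle standing in
    for the per-partitioning utility optimization."""
    steps = int(round(H / w))
    best = None
    for ks in product(range(steps + 1), repeat=M):
        if sum(ks) > steps:
            continue  # violates the cache constraint sum(h_i) <= H
        val = best_assignment(tuple(k * w for k in ks))
        if val is not None and (best is None or val > best):
            best = val
    return best

# Toy oracle for illustration: utility grows with total allocated cache.
toy = lambda h: sum(h)
print(solve_mu_rep(2, 1.0, 0.25, toy))  # best partitioning uses all cache
```

The grid search visits O((H/w)^M) partitionings, which is why a coarser grain w trades accuracy (the gaps of Fig. 6) for computation time.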