Zebroid: Using IPTV Data to Support Peer-Assisted VoD Content Delivery

Yih-Farn Robin Chen, Rittwik Jana, Daniel Stern, Bin Wei, Mike Yang
AT&T Labs - Research, Florham Park, NJ 07932
{chen,rjana,dstern,bw,yang}@research.att.com

Hailong Sun
School of Computer Science and Engineering, Beihang University, Beijing
sunhl@act.buaa.edu.cn

ABSTRACT

P2P file transfers and streaming have already seen tremendous growth in Internet applications. With the rapid growth of IPTV, the need to efficiently disseminate large volumes of Video-on-Demand (VoD) content has prompted IPTV service providers to consider peer-assisted VoD content delivery. This paper describes Zebroid, a VoD solution that uses IPTV operational data on an on-going basis to determine how to pre-position popular content in customer set-top boxes during idle hours, allowing these peers to assist the VoD server in content delivery during peak hours. The latest VoD request distribution, set-top box availability, and capacity data on network components are all taken into consideration in determining the parameters used in the striping algorithm of Zebroid. We show both by simulation and by emulation on a realistic IPTV testbed that the VoD server load during peak hours can be significantly reduced, by 50-80%, by using Zebroid.

Categories and Subject Descriptors: H.4 [Information Systems Applications]: Communications Applications; H.5.1 [Multimedia Information Systems]: Video

General Terms: Algorithms, Design, Measurement, Performance

Keywords: Video-on-Demand, Peer-to-Peer, IPTV

NOSSDAV'09, June 3-5, 2009, Williamsburg, Virginia, USA. Copyright 2009 ACM 978-1-60558-433-1/09/06.

1. INTRODUCTION

Video-on-Demand (VoD) services are offered by many IPTV service providers today. Peer-to-peer (P2P) systems have become immensely successful for large-scale content distribution on the Internet. Recently, peer-assisted VoD delivery [6][8][1][9] has become an important research topic due to its potential to help both IPTV service providers and Internet video services offload the VoD servers during busy hours, increase overall network delivery capacity, and maximize profits with the existing infrastructure. Traditional P2P approaches such as BitTorrent [5] and CoolStreaming [12] use a tit-for-tat approach that is not needed among set-top boxes in an IPTV neighborhood; the bandwidth and storage of the peers can be reserved, either through static allocation or dynamic allocation with proper incentives, to assist the IPTV service provider in delivering VoD content [7]. In addition, the default BitTorrent download strategy is not well suited to the VoD environment because it fetches pieces of the desired video, mostly out of order, from other peers without considering when those pieces will be needed by the media viewer.

Toast [4] corrects this problem by providing a simple dedicated streaming server and by favoring downloads of pieces that will be needed by the media viewer sooner. One potential problem with using Toast in a typical IPTV environment is the low uplink bandwidth of peers, typically on the order of 1-2 Mbps; in addition, it is desirable to allocate only a small fraction of the peer's upload bandwidth for VoD delivery, as the peer may need the rest of the upload bandwidth for other activities. This makes it difficult to find enough peers to participate in the delivery of a requested video with an aggregated bandwidth that meets the video encoding rate (at least 6 Mbps for high-definition (HD) content).

Push-to-Peer [11] and Zebra [2][3] go one step further by proposing to pre-stripe popular VoD content on peer set-top boxes (STBs) during idle hours so that many peers will be available to assist in the delivery of a popular video during peak hours; however, both Push-to-Peer and Zebra only used analysis and simulations, without employing a real implementation or using operational data to model their systems. This paper describes Zebroid, a peer-assisted VoD scheme (and a successor to Zebra) that departs from previous approaches by using real IPTV operational data to estimate content placement striping parameters based on video popularity, peer availability, and available bandwidth distributions. The contributions of this paper are in three areas:

• Use of IPTV operational data to estimate Zebroid pre-population parameters.
• Design and implementation of a peer-assisted VoD system that responds to typical residential network access conditions and busy-hour VoD request patterns.
• Emulations of Zebroid on a realistic IPTV testbed based on traces from operational data.
2. IPTV ARCHITECTURE AND DATA

2.1 An IPTV Architecture

In [7], we described and analyzed a typical physical model of FTTN (fiber-to-the-node/neighborhood) access networks for IPTV services. As shown in Figure 1, video streaming servers are organized in two levels: a local video hub office (VHO), which consists of a cluster of streaming servers or proxies that serve viewers in a particular regional market, and national super head end (SHE) offices, which can distribute videos to local serving offices based on existing policies or on demand. Each local VHO connects to a set of local central offices (CO), and then to a set of access switches such as DSL or FTTN switches called DSLAMs (Digital Subscriber Line Access Multiplexers), through optical fiber cables. Each DSLAM switch connects a community of IPTV service customers through twisted-pair copper wires. A community consists of all homes that are connected to the same access switch. The local video serving office (VHO) has a cluster of video servers, which stream on-demand or live broadcast videos to viewers, typically with a streaming bandwidth of at least a few hundred Mbps.

[Figure 1: An IPTV architecture with FTTN access]

We list important parameters of this physical architecture that will help the reader understand the design behind the Zebroid algorithm:

• B0D: Download bandwidth into a home (typically 25-50 Mbps). This bandwidth is shared by broadcast channels (typically 2 HD and 2 SD channels through multicast), VoIP, and High Speed Internet (HSI) access.
• B0U: Upload bandwidth out of a home (typically 1-2 Mbps).
• B1S: Total capacity of the south-bound links (downlinks) of a local access switch. A typical FTTN switch has 24 Gbps of downlink switching capacity.
• B1N: Capacity of the north-bound link (uplink) of an access switch to the service router in the CO. B1N is typically 1 Gbps, but is upgradable.
• B2S: Link capacity from the CO to all the DSLAMs, typically on the order of 10 Gbps, but its actual throughput is limited by the VoD server streaming capacity.
• N: Number of subscribers under each DSLAM (typically 96 or 192 subscriber homes).
• u: Average streaming bit rate of a video (typically 6 Mbps for HD and 2 Mbps for SD).

It was observed in [7] that in order to enable a P2P content delivery solution across communities, content needs to be shared between customer residential gateways (RGs) by traversing up from one RG to the CO and being routed back down to another RG. This peering activity may create a bottleneck on the DSLAM-to-CO link. The situation can be alleviated by pre-populating content intelligently across multiple STBs in a DSLAM community and allowing these peers to serve requesting peers from the same community; however, avoiding the uplink to the CO entirely would require changes in the existing IPTV architecture to allow IP routing through the DSLAM switch. Alternatively, if ample cache storage were available at a DSLAM switch (not the case today for a variety of engineering reasons), then even P2P would not be needed.
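To make the bandwidth constraint behind this bottleneck concrete, the following sketch (illustrative Python using only the typical values quoted above; it is not taken from the paper's measurements) estimates how many concurrent unicast VoD streams the B1N uplink of one DSLAM can carry.

```python
def max_unicast_streams(b1n_gbps: float = 1.0, stream_mbps: float = 6.0) -> int:
    """Upper bound on concurrent unicast streams the DSLAM uplink can carry."""
    return int((b1n_gbps * 1000) // stream_mbps)

# With a 1 Gbps uplink and 6 Mbps HD streams, at most ~166 homes in a
# 192-home community could watch distinct HD titles at once, and that
# ignores broadcast, VoIP, and HSI traffic sharing the same links.
print(max_unicast_streams())          # 166
print(max_unicast_streams(1.0, 2.0))  # 500 for 2 Mbps SD streams
```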
2.2 IPTV Operational Data

In this section we analyze data from an operational service to estimate pertinent Zebroid parameters: STB uptime from power state data (PSD), which helps select the right set of peers for striping; the VoD request distribution, which identifies busy hours and popular content; and capacity management data (CMD), which gives the fan-out ratio of the number of subscribers that subtend a CO and the measured bandwidth consumption of these subscribers. The data was analyzed over a period of one month. We discuss only the data sets relevant to Zebroid in the following subsections.

2.3 VoD Data

The anonymized VoD request data provides information on the purchase timestamp, content title, duration, price, and genre. The VoD request data was provided by a national IPTV service provider covering a footprint of over a million homes and multiple VHOs. Recent data from 2009, again over a period of one month, was used. Joining the purchase data with the CMD data allows us to associate the VoD requests with the corresponding DSLAM and CO.

[Figure 2: VoD request distribution]

Figure 2 shows the request distribution by hour across all the COs under one particular VHO. If we consider 1500 requests as the high threshold, the chart shows that the peak hours fall between 8pm and 11pm every night, with Saturday night seeing the highest number of VoD requests. On the other hand, if we consider 500 requests as the low threshold, then 2am to 8am are the idle hours for VoD traffic, an ideal time for striping popular VoD titles, assuming few other network activities are going on at the same time. In addition, there is a significant jump in daytime viewing on Saturday and Sunday from 9am to 5pm, with the number of requests approaching those of the peak hours during weekday evenings. This calls for potentially different content striping and placement strategies between weekdays and weekends. The VoD request data also gives us the popularity distribution, which helps determine the most popular video titles to stripe across STBs.
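As a minimal illustration of how these thresholds can drive scheduling, the sketch below (illustrative Python; only the 1500/500 thresholds come from the text, and the hourly counts are hypothetical) classifies hours into peak, idle, and shoulder periods so that striping can be confined to idle hours.

```python
PEAK_THRESHOLD = 1500   # requests/hour: treat as peak (serve only)
IDLE_THRESHOLD = 500    # requests/hour: treat as idle (safe to stripe)

def classify_hours(requests_per_hour: dict) -> dict:
    """Label each hour of the day as 'peak', 'idle', or 'shoulder'."""
    labels = {}
    for hour, count in requests_per_hour.items():
        if count >= PEAK_THRESHOLD:
            labels[hour] = "peak"
        elif count <= IDLE_THRESHOLD:
            labels[hour] = "idle"
        else:
            labels[hour] = "shoulder"
    return labels

# Hypothetical hourly counts for one VHO; in Zebroid these would come from
# the operational VoD request logs joined with the CMD data.
sample = {3: 120, 9: 800, 21: 1900}
print(classify_hours(sample))  # {3: 'idle', 9: 'shoulder', 21: 'peak'}
```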
2.4 Power State Data

Power state data (PSD) allows us to determine when a particular STB was turned on or off (either by the user or by the provider). We use this data to avoid striping popular VoD content during idle hours onto boxes that may not be up during the peak hours. For each DSLAM neighborhood (up to 192 neighbors), we can determine the percentage of STBs in that neighborhood that were on at 2am (striping time) and remained on at 8pm (peak hours). The analysis on a set of DSLAMs (see Figure 3) shows that, for most DSLAMs, roughly 80-90% of the STBs satisfy this criterion.

[Figure 3: Percentage distribution of consistently active STBs]

This allows us to determine the degree of redundancy that is needed. An erasure coding [10] rate of 4/5 (see Section 3.2: Zr = 0.8) would be sufficient for these DSLAMs. For example, for a chunk that is divided into 40 stripes, we need to create 50 stripes with the erasure code to ensure that enough stripes remain available even when 20% of the STBs fail or are unavailable (turned off by the users).
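The redundancy calculation above is simple enough to state as code. This is an illustrative sketch (Python; the helper names are ours, and it only counts stripes rather than implementing an actual erasure code such as the one in [10]): given the minimum number of stripes needed to reconstruct a chunk and the coding rate Zr, it derives the number of coded stripes to place and checks tolerance to a given fraction of unavailable STBs.

```python
import math

def coded_stripes(min_stripes: int, coding_rate: float) -> int:
    """Number of stripes to place so that any `min_stripes` of them suffice."""
    return math.ceil(min_stripes / coding_rate)

def survives_failures(min_stripes: int, coding_rate: float,
                      failure_fraction: float) -> bool:
    """True if enough stripes remain after a fraction of STBs go offline."""
    placed = coded_stripes(min_stripes, coding_rate)
    surviving = math.floor(placed * (1 - failure_fraction))
    return surviving >= min_stripes

# The paper's example: Zp = 40 stripes per chunk and Zr = 0.8 give 50 coded
# stripes, which still leaves 40 stripes when 20% of the STBs are off.
print(coded_stripes(40, 0.8))            # 50
print(survives_failures(40, 0.8, 0.20))  # True
```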
2.5 Capacity Management Data

The capacity management data (CMD) allows us to determine the hierarchy of VHOs, COs, DSLAMs, RGs, and STBs shown in Figure 1, together with their current capacity utilization. The CMD data also allows us to partition the VoD requests under different network components and shows the size of the subscriber community under each VHO, CO, and DSLAM. Additional data on downlink/uplink bandwidth utilization is discussed in Section 4.

3. ZEBROID

3.1 The Zebroid Architecture

As the popularity of VoD requests increases, either B1N (the link between the CO and the DSLAM) or the VHO streaming server capacity is likely to become the bottleneck in the overall IPTV architecture. In the former case, an IPTV service provider will face the question of whether to upgrade the large number of B1N links or to push popular VoD content to set-top boxes in the local neighborhood during idle hours so that these STBs can serve as peers to assist in VoD delivery. Since most peers have limited upload bandwidth (B0U < 2 Mbps) and only a portion of it should be used for peering activities, Zebroid needs to stripe a video over multiple peers so that the aggregated bandwidth is sufficient to meet the video encoding rate u.

[Figure 4: The Zebroid architecture]

Figure 4 shows the Zebroid architecture, which consists of a VoD server and a community of STBs. The connections between the VoD server and the STBs depend on the specific IPTV service solution. For example, a typical IPTV architecture as shown in Figure 1 adopts a hierarchical network structure to distribute multimedia content to its customers. A Zebroid-based system works as follows.

First, the VoD server determines which content is popular through analysis of historical client request data and decides how many copies of each popular title can be stored based on the available collective P2P storage on the STBs. Typically, a service provider would allocate dedicated storage in an STB that is separate from the user-allocated disk space.

Second, the VoD server divides each popular VoD file into a set of chunks. Each chunk is in turn divided into a set of stripes. The size of the stripe set is based on the upload bandwidth allocated for P2P and the number of peers required to participate in P2P content delivery. Figure 5 presents a simplified view of the result of striping a content file. For example, an HD movie at 6 Mbps with 200 Kbps contributed by each peer requires at least 30 stripes for each chunk. Zebroid then sends the stripes to a set of chosen STBs during off-peak hours based on their availability data.

[Figure 5: Content striping and serving in Zebroid]

Third, when a client requests a particular popular title from the VoD server, the STBs holding the stripes of the corresponding content serve the request concurrently. As a result, even if the upload bandwidth of each contributing STB is limited to 200 Kbps, the downloading client can still have a smooth viewing experience thanks to the aggregation of the upload bandwidths from all the participating peers. If there are not enough peers to support the required bandwidth due to busy STBs or STB failures, the downloading client can go back to the VoD server to request the missing stripes. Zebroid's peer-assisted VoD delivery greatly reduces the load on the VoD server, as we shall see in the experiments and simulations (Sections 5.2 and 5.3).
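The second step above (chunking, striping, and placement) can be sketched as follows. This is illustrative Python under our own assumptions, not the Zebroid implementation: it derives the stripe count from the encoding rate and the per-peer upload allocation, splits a chunk into fixed-size stripes (without erasure coding), and assigns them round-robin to STBs that the power state data marks as reliably available.

```python
import math

def stripes_per_chunk(encoding_mbps: float, per_peer_kbps: float) -> int:
    """Minimum stripes so that aggregated peer uploads match the playout rate."""
    return math.ceil(encoding_mbps * 1000 / per_peer_kbps)

def stripe_chunk(chunk: bytes, num_stripes: int) -> list:
    """Split one chunk into num_stripes roughly equal stripes (no coding here)."""
    size = math.ceil(len(chunk) / num_stripes)
    return [chunk[i * size:(i + 1) * size] for i in range(num_stripes)]

def place_stripes(stripes: list, available_stbs: list) -> dict:
    """Round-robin assignment of stripe indices to available STBs."""
    placement = {stb: [] for stb in available_stbs}
    for idx in range(len(stripes)):
        placement[available_stbs[idx % len(available_stbs)]].append(idx)
    return placement

# A 6 Mbps HD title with 200 Kbps per peer needs 30 stripes per chunk,
# matching the example in the text.
print(stripes_per_chunk(6.0, 200))  # 30
```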
3.2 The Zebroid Parameters

The following parameters are used in describing the Zebroid system in the rest of the paper, in addition to those specified in Section 2.1:

• ZN: total number of videos in the VoD content library.
• Zn: number of striped, popular videos; 20% of the videos typically represent 80% of the requests.
• Zk: maximum upload rate from each peer.
• Zp: minimum number of stripes required to reconstruct a chunk.
• Zc: number of copies of a striped video.
• Zr: erasure coding rate.
• Zs: number of stripes for each chunk of the video, Zs = Zp/Zr.
• Zg: size of the peer storage (in GB) reserved for P2P content delivery.
4. ANALYSIS

In this section we analyze a striping mechanism for placing content at STBs in a community. Our model follows the full striping strategy detailed in Suh et al. [11]. Initial content placement increases content availability and improves the use of peer uplink bandwidth. However, we note the following differences. First, we do not require a set of always-on peers; peers can fail, as shown in Section 2.4. Second, we do not assume constant available bandwidth between peers; uplink bandwidth availability is modeled in Section 4.1. Third, peers are not disconnected from the VoD server after being bootstrapped as in [11]. We continue to maintain an uplink connection from each STB to the VoD server so that, when sufficient aggregate bandwidth to match the playout rate is not available from supplying peers, the residual bandwidth is supplied by the VoD server.

Assume each movie chunk (window) of length W is divided into Zs stripes, each of size W/Zs, and each STB stores a distinct stripe of a window. Consequently, a movie (window) request by a client requires Zs - 1 distinct stripes to be downloaded concurrently from Zs - 1 peers (or Zs stripes from Zs peers if the downloading peer does not hold one of the stripes). Denoting the movie playout rate by u bps, each stripe is therefore received at the rate uj = u/Zs bps.

For a completely peer-assisted solution with no VoD server, as in our experiments in Section 5.2, the maximum downlink bandwidth BWd that a requesting peer observes is given by

BWd = (number of supplying peers x peer uplink bandwidth) / p,   (1)

where p is the number of requesting peers.

[Figure 6: Average downlink bandwidth of requesting peers]

Figure 6 shows the monotone decrease in the downlink bandwidth as the number of requesting peers grows. This is explained by the aggregate peer bandwidth being shared among the requesting peers. For example, in Figure 6, the total bandwidth of 32 or 64 supplying peers is shared equally between p requesting peers. In our environment, we also impose a maximum uplink bandwidth of 1.8 Mbps and a maximum downlink bandwidth of 26 Mbps; this appears as a truncated average downlink bandwidth for the case of a single requesting peer. Figure 6 also shows the downlink bandwidth observed at requesting peers when 20% of the serving peers fail (α = 0.8).
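Equation (1), together with the uplink and downlink caps, is easy to evaluate directly. The following sketch (illustrative Python; the function and parameter names are ours) computes the average downlink bandwidth a requesting peer would see for a given number of supplying and requesting peers, including the case where a fraction of the serving peers has failed.

```python
def avg_downlink_mbps(supplying_peers: int,
                      requesting_peers: int,
                      peer_uplink_mbps: float = 1.8,
                      downlink_cap_mbps: float = 26.0,
                      surviving_fraction: float = 1.0) -> float:
    """Equation (1) with the uplink/downlink caps used in the paper's plots.

    surviving_fraction corresponds to alpha in Figure 6 (e.g. 0.8 means
    20% of the serving peers have failed).
    """
    usable_peers = supplying_peers * surviving_fraction
    shared = usable_peers * peer_uplink_mbps / requesting_peers
    return min(shared, downlink_cap_mbps)

# 32 supplying peers shared by 8 requesting peers, no failures:
print(avg_downlink_mbps(32, 8))                          # 7.2 Mbps
# Same scenario with 20% of the serving peers failed (alpha = 0.8):
print(avg_downlink_mbps(32, 8, surviving_fraction=0.8))  # 5.76 Mbps
# A single requesting peer is truncated by the 26 Mbps downlink cap:
print(avg_downlink_mbps(32, 1))                          # 26.0 Mbps
```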
4.1 Available Bandwidth

We investigate the average bandwidth utilization distribution to predict the amount of time required to pre-populate a set of movies within a CO. Figures 7 and 8 show the average downlink and uplink peak bandwidths utilized, respectively. The measurement trace covers a nationally deployed footprint of STBs. Note that the bandwidth observed is the High Speed Internet (HSI) portion of the total download bandwidth B0D, which also includes bandwidth used for linear programming content (HD and SD channels). A few unique observations can be drawn from these traces.

[Figure 7: Average downlink bandwidth utilization distribution]
[Figure 8: Average uplink bandwidth utilization distribution]

First, downlink bandwidth utilization is not uniformly distributed. Second, there are characteristic peaks at multiples of 2 Mbps, as shown in Figure 7; this is a result of STBs being situated at different loop lengths from the CO. The mean downlink utilization is 6 Mbps. Similarly, for uplink bandwidth utilization, a large fraction of STBs experience less than 1 Mbps. The distribution shows a sharp decrease in the number of STBs as upload speeds approach 1 Mbps, and less than 2% exhibit speeds greater than 1 Mbps.

As an example of pre-population of VoD content, we show from actual traces how long distributing a typical set of popular movies would take. A set of 180 movies was requested from a community during a period of one day. The total duration of all movies is 312720 seconds, and each movie is streamed at 2 Mbps. Assume that 50% of this set needs to be pre-populated across a base of 200 homes to realize a reduction of 50% at the VoD server, and that each home has a mean downlink bandwidth of 6 Mbps. The amount of time required to complete the distribution of pre-positioned popular content in one community is then below 5 minutes (312720 x 0.5 x 2 / (6 x 200) seconds). The data suggest that a VoD server can stripe movies over many communities during idle hours. The use of multicast striping can further increase the number of striped movies in each community, limited mainly by the storage capacity of each STB.
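The pre-population estimate above is just a ratio of the data to be pushed to the aggregate downlink capacity; the sketch below (illustrative Python, using only the numbers quoted in the text) reproduces the arithmetic.

```python
def prepopulation_minutes(total_duration_s: float = 312720,
                          fraction_striped: float = 0.5,
                          stream_mbps: float = 2.0,
                          home_downlink_mbps: float = 6.0,
                          homes: int = 200) -> float:
    """Time to push the striped subset of a movie set into one community."""
    megabits_to_move = total_duration_s * fraction_striped * stream_mbps
    aggregate_downlink_mbps = home_downlink_mbps * homes
    return megabits_to_move / aggregate_downlink_mbps / 60

# 312720 s of movies at 2 Mbps, half of it striped across 200 homes with a
# 6 Mbps mean downlink each: about 4.3 minutes, i.e. "below 5 minutes".
print(round(prepopulation_minutes(), 1))  # 4.3
```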
5. TESTBED AND EXPERIMENTS

5.1 The VP2P Testbed

Our experimental platform is described in detail in previous publications [2][3]. We provide here a high-level summary and describe changes made since then.

The testbed used in this paper consisted of a cluster of 64 virtual machines spread evenly over four identical Apple Mac Pros, each equipped with two dual-core 2.66 GHz Xeon processors, 16 GB of main memory, and four 750 GB hard disks. There were also four Linux-based bandwidth-shaping routers and a central server for experiment control and data collection. Figure 9 shows the networking configuration. The control network was used for system bootstrapping and monitoring. The north-side network emulated the connection from the access nodes to the VoD servers; this segment also contained the experiment run controller. The south-side network was used to emulate peer connections to an IPTV service provider (ISP). One of the Ethernet ports on the host Mac Pro was directly connected to the south-side subnet, and each virtual machine on it was configured with a virtual Ethernet interface bridged to this port. In effect, all VM peers had full Ethernet connectivity to the south-side subnet. In order to simulate the point-to-point connection from video endpoint to ISP, each virtualized peer created a VLAN connection over its south-side Ethernet link to its peered router during bootstrapping. This design allowed us to simulate a vast number of point-to-point connected end users without needing to run a separate Ethernet wire for each.

[Figure 9: Testbed network diagram]

5.2 Testbed Experiments

For the experiments conducted for this paper, we assume the following values unless otherwise specified:

Symbol | Value    | Comments
ZN     | 1024     | can be adjusted based on the real VoD data
Zn     | 256      | stripe only the top 25% of HD videos
Zk     | 200 Kbps | rate limit for each upload thread (up to 8 threads)
Zp     | 32       | for HD video (6.4 Mbps)
Zc     | 1        | one copy of each popular video in most experiments
Zr     | 1        | no erasure coding in the testbed experiments
Zs     | 32       | for HD video
Zg     | 5 GB     | reserved on each of the 64 peers

The HD videos we used are 1.5 GB each, which represents roughly 30 minutes of HD video. Figure 10 shows the bandwidth of clustered peers; clustered peers are homes that experience similar downlink bandwidths (see Figure 7). Average downlink bandwidths degrade with an increasing number of requesting peers. A peer requests a VoD title at random, with a probability of 75% that the title comes from the popular set. Each supplying peer's total uplink bandwidth is fixed at 1.8 Mbps. A supplying peer can have a maximum of 8 concurrent upload threads, each contributing 200 Kbps (while leaving at least 200 Kbps for other upload activities). Similarly, a requesting peer can have a maximum of 32 download threads, which amount to 6.4 Mbps. Note that requests for unpopular content go back to the VoD server, which may provide a higher bandwidth when it is not busy, since we did not rate-limit the VoD server in these experiments.

[Figure 10: Average downlink bandwidth of clustered peers]

The chart shows that peer-assisted HD delivery is possible only for the 8 Mbps and 12 Mbps peer clusters, and only when the number of requesting peers is less than or equal to 8.
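A quick consistency check on these defaults can be written down directly. The sketch below (illustrative Python; the names are ours) verifies that the per-peer upload thread budget stays inside the 1.8 Mbps uplink allocation and that 32 download threads at 200 Kbps are enough to sustain a 6 Mbps HD stream with some headroom.

```python
UPLOAD_THREADS = 8        # max concurrent upload threads per supplying peer
DOWNLOAD_THREADS = 32     # max concurrent download threads per requesting peer (Zs)
THREAD_KBPS = 200         # Zk: rate limit per thread
PEER_UPLINK_KBPS = 1800   # total uplink budget per supplying peer
HD_RATE_KBPS = 6000       # HD playout rate u

upload_used = UPLOAD_THREADS * THREAD_KBPS          # 1600 Kbps
download_capacity = DOWNLOAD_THREADS * THREAD_KBPS  # 6400 Kbps

assert upload_used <= PEER_UPLINK_KBPS - 200   # leaves >= 200 Kbps for other uploads
assert download_capacity >= HD_RATE_KBPS       # 6.4 Mbps covers the 6 Mbps HD stream
print(upload_used, download_capacity)          # 1600 6400
```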
5.3 Simulation

In order to investigate how Zebroid behaves in a large community, we simulated a typical deployment with 300 peers. This was necessary because our testbed currently implements a maximum of 64 peers using virtual machines, while deployment scenarios can have up to 500 members in a DSLAM community. We employed an event-driven simulator on one community of 300 peers with a movie set of 500 SD movies, where each movie has one copy striped across 12 peers. With an erasure coding rate of 5/6, a peer needs 10 stripes to reconstruct a chunk. Each movie has a streaming rate of 2 Mbps, and a supplying peer consumes 200 Kbps per stripe.

[Figure 11: Number of active peers and VoD server capacity utilization]

Figure 11 shows the number of active peers over time and the corresponding VoD server utilization. There is an initial transition period while users are making peer requests. This transient behavior stabilizes after a period of time, and the average server capacity used is 120 Mbps with an average of 250 concurrent users. To support 250 concurrent users using unicast, by contrast, the VoD server would have to be provisioned with at least 500 Mbps. With the peering solution, one can support a maximum of 300 peers with less than 120 Mbps. In addition, in a worst-case scenario where 20% of the peers fail on average, the utilized server capacity increases to 150 Mbps, since the VoD server now has to supplement the failed capacity of the peers; in a typical deployment, the peer failure rate would be much smaller. Peers were also allowed to recover at a rate of 5% on average after a certain time, which explains why the steady-state value does not fluctuate rapidly. All peers were used for striping; however, peers that fail do not participate in uploading chunks until they recover.
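The capacity comparison in this section reduces to simple arithmetic; the sketch below (illustrative Python) restates it, using the 120 Mbps and 150 Mbps figures observed in the simulation rather than deriving them.

```python
def unicast_server_mbps(concurrent_users: int, stream_mbps: float = 2.0) -> float:
    """Server capacity needed if every stream were served by unicast alone."""
    return concurrent_users * stream_mbps

unicast = unicast_server_mbps(250)   # 500 Mbps, as stated in the text
peer_assisted = 120                  # observed average server load in the simulation
with_20pct_failures = 150            # observed load when 20% of peers fail

print(unicast)                                  # 500.0
print(1 - peer_assisted / unicast)              # 0.76 -> 76% of unicast capacity saved
print(1 - with_20pct_failures / unicast)        # 0.70 -> 70% saved in the worst case
```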
6. CONCLUSION

This paper describes Zebroid, a VoD solution that uses IPTV operational data on an on-going basis to determine how to pre-position popular content in customer set-top boxes during idle hours. On-going analysis of the VoD request distribution, set-top box availability, and capacity data on network components all feed into the parameters used in the striping algorithm of Zebroid. We show both by simulation and by emulation on a realistic IPTV testbed that the VoD server load during peak hours can be significantly reduced, by 50-80%, by using Zebroid. While a typical P2P-VoD scheme without active intervention may see less synchrony among the users sharing video content, we demonstrate that our peer content placement strategy based on IPTV operational data offers much-needed improvements in overall delivery capacity without requiring network infrastructure upgrades.

7. ACKNOWLEDGMENTS

The authors would like to thank Chris Volinsky, Deborah Swayne, Ralph Knag, and Alex Gerber for their assistance with IPTV data, and Matti Hiltunen for his valuable comments on this paper.

8. REFERENCES

[1] M. Cha, P. Rodriguez, S. Moon, and J. Crowcroft. On Next-Generation Telco-Managed P2P TV Architectures. In International Workshop on Peer-to-Peer Systems (IPTPS), 2008.
[2] Y. Chen, Y. Huang, R. Jana, H. Jiang, M. Rabinovich, J. Rahe, B. Wei, and Z. Xiao. Towards Capacity and Profit Optimization of Video-on-Demand Services in a Peer-Assisted IPTV Platform. ACM Multimedia Systems, 15(1):19-32, Feb. 2009.
[3] Y. Chen, R. Jana, D. Stern, M. Yang, and B. Wei. VP2P - A Virtual Machine-Based P2P Testbed for VoD Delivery. In Proceedings of the IEEE Consumer Communications and Networking Conference, 2009.
[4] Y. Choe, D. Schuff, J. Dyaberi, and V. Pai. Improving VoD Server Efficiency with BitTorrent. In The 15th International Conference on Multimedia, September 2007.
[5] B. Cohen. Incentives to Build Robustness in BitTorrent. In Proceedings of the Workshop on Economics of P2P Systems, 2008.
[6] C. Huang, J. Li, and K. Ross. Peer-Assisted VoD: Making Internet Video Distribution Cheap. In Proceedings of IPTPS, 2007.
[7] Y. Huang, Y. Chen, R. Jana, H. Jiang, M. Rabinovich, A. Reibman, B. Wei, and Z. Xiao. Capacity Analysis of MediaGrid: A P2P IPTV Platform for Fiber to the Node (FTTN) Networks. IEEE JSAC, Special Issue on Peer-to-Peer Communications and Applications, 25(1):131-139, 2007.
[8] Y. Huang, T. Fu, D. M. Chiu, J. Lui, and C. Huang. Challenges, Design and Analysis of a Large-scale P2P-VoD System. In Proceedings of SIGCOMM, 2008.
[9] V. Janardhan and H. Schulzrinne. Peer-assisted VoD for Set-top Box Based IP Networks. In Proceedings of the 2007 Workshop on Peer-to-Peer Streaming and IP-TV, pages 335-339. ACM, 2007.
[10] L. Rizzo. Effective Erasure Codes for Reliable Computer Communication Protocols. ACM SIGCOMM Computer Communication Review, April 1997.
[11] K. Suh, C. Diot, J. Kurose, and L. Massoulie. Push-to-Peer Video-on-Demand System: Design and Evaluation. IEEE JSAC, 2007.
[12] X. Zhang, J. Liu, B. Li, and T. Yum. CoolStreaming/DONet: A Data-Driven Overlay Network for Efficient Live Media Streaming. In Proceedings of IEEE INFOCOM, volume 3, pages 13-17, 2005.