Watching Television Over Nationwide IP Multicast

Paper ID #1569058648
Meeyoung Cha*, Pablo Rodriguez*, Sue Moon†, and Jon Crowcroft‡
*Telefonica Research, Barcelona   †KAIST, Korea   ‡University of Cambridge, UK

Abstract—After many years of academic and industry work, we are finally witnessing a thriving of IP multicast in the large-scale deployment of IPTV services. Based on a trace of 250,000 users, we provide an extensive study of one of the world's largest IPTV networks. As opposed to traditional broadcast TV, IPTV allows far more accurate traffic analysis of how we watch television. Using detailed channel switching events collected over 700 DSLAMs, we identify for the first time previously hidden user watching patterns. Our trace also permits us to contrast and compare future IPTV architectures, such as a cooperative P2P and IP multicast system, assessing the potential benefits using real data rather than speculation. Last, but not least, we present the results of several user clustering algorithms based on a variety of viewing patterns, laying the path for new and improved market opportunities such as personalized advertising and social recommendation engines.

Index Terms—IP multicast, IPTV, TV viewing behavior, popularity, P2P, user classification, recommendation

I. INTRODUCTION

IP multicast was devised over 20 years ago [6], but for most of that time it has seen little large-scale commercial deployment. Speculated reasons include critiques of its scalability, potential security threats, and the lack of plausible pricing models [5], [7], [8], [12]. Instead, one-to-many communication has been implemented, and has thrived, in other forms such as infrastructure-based overlays (e.g., Akamai) and end-system peer-to-peer multicast (e.g., PPLive, Zattoo, Joost).

However, long after it was anticipated by academia and industry, IP multicast is finally transitioning from vision to practice, at least within single domains. Recent years have shown a marked uptake of commercial-grade live broadcast TV and video-on-demand offerings, a.k.a. IPTV; examples include PCCW's IPTV in Hong Kong, France Telecom's MaLigne TV in Lyon, France, Telefonica's Imagenio in Spain, and AT&T's U-verse in Texas, United States. In spite of the old concerns, IP multicast has shown itself to be scalable (serving nearly a million users), secure (providing pay-per-view channels and VoD), profitable (as one of the key next-generation sources of revenue for service providers), and even cost-effective (compared to other designs).

In this paper, we study one of the largest IPTV services in the world to understand how we watch TV. Based on an over one-month real-world trace across 700 DSLAMs, we analyze the TV viewing patterns of a quarter million IPTV users. To the best of our knowledge, this is the first large-scale, in-depth study of IPTV viewing habits. Our trace also permits us to contrast and compare future IPTV architectures, assessing the potential benefits using real data rather than speculation. Our vision for the future architecture supports rewinding or fast-forwarding to any position in time, free from the broadcaster's schedule. As a first step, we explore the potential of a cooperative peer-to-peer (P2P) and IP multicast system for providing a simple rewind functionality. We also contrast alternate content distribution strategies for IPTV, such as content distribution networks (CDNs) and P2P. Our study ascertains the sweet spots and the overheads of server-based unicast, multicast, and serverless P2P for a future IPTV architecture.

Finally, supporting service enhancements is very important in the IPTV business, as it leads to customer satisfaction. Understanding TV viewing behavior used to rely on phone surveys or on specialized monitoring boxes attached for selected users, and such surveys were often limited to regional coverage. Now, nationwide deployments and the bi-directional characteristics of IP multicast make it easy for service providers to monitor user behavior closely and continuously. To this end, we profile users based on their channel switching behavior (e.g., how often users browse or view channels, or become inactive), on their frequently viewed channels, and on their active times. Such clustering can later be used in personalized advertising and social recommendation engines, assisting users with the selection of programs (from a potentially endless list of programs in the future) and providing an "out-of-the-box" experience.

In the following, we give a brief overview of the multicast IPTV service architecture.
A. IPTV Multicast Service Architecture

Figure 1 illustrates a typical multicast IPTV service architecture, where TV head ends source IPTV content to DSLAMs (Digital Subscriber Line Access Multiplexers) through a single ISP (Internet Service Provider) backbone. Customers subscribe to the quality-assured IPTV, the IP phone service, and best-effort Internet access from the ISP. DSLAMs located in regional networks aggregate traffic from hundreds or thousands of users and connect to the high-speed IP backbone. For the real-time IPTV service, a TV head end streams live broadcasts of all the channels across DSLAMs through bandwidth-provisioned multicast trees. Due to the limited access bandwidth at the last mile (i.e., from a DSLAM to the home gateway at the customer premises), not all TV channels are delivered to customers all the time.

[Fig. 1. Typical multicast IPTV service architecture: the TV head end feeds the ISP's IP backbone, which reaches DSLAMs; a home gateway at the customer premises connects the set-top box (STB), phone, and PC.]
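Concretely, each TV channel is carried in its own multicast group, so a set-top box tunes channels by changing its group memberships. The following minimal Python sketch illustrates this with the standard socket membership options (the kernel emits the corresponding IGMP messages); the group addresses and port are hypothetical, and a real STB would of course do this in firmware.

```python
import socket
import struct

OLD_GROUP = "239.255.0.1"   # hypothetical multicast group of the current channel
NEW_GROUP = "239.255.0.2"   # hypothetical multicast group of the new channel
PORT = 5004                 # assumed RTP port of the video stream

def mreq(group):
    # struct ip_mreq: multicast group address + local interface (INADDR_ANY)
    return struct.pack("4s4s", socket.inet_aton(group), socket.inet_aton("0.0.0.0"))

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", PORT))
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq(OLD_GROUP))

# A channel switch is a leave/join pair: one IGMP leave for the old group and
# one IGMP join for the new one, which the DSLAM uses to prune or extend the
# multicast tree toward this subscriber.
sock.setsockopt(socket.IPPROTO_IP, socket.IP_DROP_MEMBERSHIP, mreq(OLD_GROUP))
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq(NEW_GROUP))
```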
A DSLAM extends or prunes the multicast trees dynamically in the direction of the customers' set-top boxes (STBs), based on the channel switching signals. The set-top box translates a channel switching event from a user into a pair of Internet Group Management Protocol (IGMP) messages: one to signal the user's departure from the multicast group of one channel, and the other to join the multicast group of the new channel. A single DSLAM can process up to thousands of IGMP messages per second, which translates to capacity for hundreds of STBs. Our analysis is based on a large collection of IGMP messages collected from all DSLAMs within a national backbone.

ISPs provide a quality-of-service guarantee at the last mile by allocating the highest priority to VoIP traffic, quality-assured bandwidth to IPTV services, and the rest to best-effort Internet traffic. Figure 2 illustrates an example bandwidth provisioning over the copper link joining the DSLAM and a household. Most ADSL modems already deployed can support the bandwidth for one or more IPTV streams, depending on the video codec rate (e.g., one MPEG-2 channel or two MPEG-4 channels with 5 Mb/s provisioning).

[Fig. 2. Bandwidth provisioning for quality-assured services over a 6 Mb/s broadband access link: quality-assured IPTV services (5 Mb/s for video and audio channels), best-effort Internet (1 Mb/s), and VoIP.]

The rest of the paper is organized as follows. In §2, we introduce the trace and analyze diverse aspects of TV viewing habits. §3 investigates a cooperative P2P and IP multicast system to provide rewind functionality, and §4 further contrasts and compares the IP multicast design with alternate content distribution strategies for IPTV. §5 analyzes TV viewing patterns from the users' perspective; here we cluster users based on their channel switching patterns, preferred sets of channels, and active times. Finally, we present related work in §6, and we conclude in §7.

II. ANALYSIS OF TV VIEWING BEHAVIORS

With the wide range of real-time data IPTV provides, it is now possible to carry out a far more detailed study of TV viewing habits than ever before. In this section, we introduce our trace and carry out a diverse set of analyses, covering, for instance, the total views by program, views by time of day, and average minutes watched per household.

A. Trace Methodology

To understand the scale of the multicast deployment and user channel-switching behavior, we obtained a huge collection of IPTV channel switching event logs from an operational backbone provider. Our trace of 200 GB spans over a month, from April 17 to May 30, 2007, and records the channel switching behavior of 250,000, or a quarter million, users. The collected IGMP event log includes the message type, a timestamp in seconds, and the IP addresses of the DSLAM, the STB, and the multicast group. We pre-process the event logs by excluding events of non-video multicast groups.¹ We then chronologically sort the IGMP join messages and count the numbers of viewed channels and channel switchings. Table I summarizes the daily statistics of our trace. The number of set-top boxes in the table is smaller than a quarter million, since not all users turn the TV on every day. The number of channel switchings clocks in at 13 million on average.

TABLE I
DAILY STATISTICS ON THE SCALE OF THE IPTV SERVICE

Size | # DSLAMs | # STBs  | # Channels | # Channel switchings
4 GB | 700      | 200,000 | 155        | 13 million

¹ Some multicast groups are used to control the remote system (e.g., bootstrapping and updating set-top boxes).

B. Channel Holding Time

One concern in understanding TV viewing behavior is that viewers leave the IP set-top box on and receiving data even when the television is off. Identifying active and silent periods of users is not trivial. We first look into the distribution of channel holding times, i.e., the time between channel switchings. Figure 3(a) shows a histogram of channel holding times on a representative day. We observe spikes around 2, 4, 6, and 12 hours, which gives us an idea of the natural long-term off periods, for instance, sleeping hours or the hours between meals. Figure 3(b) shows the cumulative distribution function of the channel holding times. The slope of the curve changes around 1 minute and again around 1 hour, indicating different user behaviors. Based on this observation, we consider channel holding times between 1 minute and 1 hour as viewing sessions. We further consider channel holding times shorter than 1 minute as browsing (or surfing), and those longer than 1 hour as away periods.

[Fig. 3. Channel holding time on a representative day: (a) histogram, (b) CDF.]

C. Session Time Distribution

Based on this heuristic for identifying viewing sessions, Figure 5 shows various TV viewing statistics on a daily basis. We observe that an average user watches TV 2.54 hours per day across 6.3 channels per day, while each session lasts 1.2 hours on average. Note that we refer to each set-top box as a user, even though it is typically shared by multiple individuals in a given household. This shows some similarities to, and some discordance with, traditional survey results: according to Nielsen ratings in 2005, the average household watched television for approximately 8 hours per day, while other surveys report that an average adult watches TV for 3 hours per day.

[Fig. 5. Various statistics based on sessions: daily viewing hours per user (average 2.54 hours), daily number of sessions per user (average 2.1), average session length (average 1.2 hours), and number of channels viewed per day (average 6.3).]
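The browse/view/away heuristic used throughout this analysis can be made concrete with a short sketch. The following Python fragment labels each channel holding time using the 1-minute and 1-hour thresholds above; the input format, a chronologically sorted list of (STB, timestamp, channel) join events, is an assumption about the pre-processed log.

```python
from collections import defaultdict

BROWSE_MAX = 60     # holding time < 1 minute      -> browsing
VIEW_MAX = 3600     # 1 minute <= time <= 1 hour   -> viewing; longer -> away

def classify(joins):
    """joins: iterable of (stb_ip, unix_ts, channel), sorted by time."""
    last_join = {}                      # stb_ip -> (ts, channel)
    states = defaultdict(list)          # stb_ip -> [(channel, holding_time, state)]
    for stb, ts, channel in joins:
        if stb in last_join:
            prev_ts, prev_ch = last_join[stb]
            hold = ts - prev_ts         # channel holding time in seconds
            if hold < BROWSE_MAX:
                state = "browse"
            elif hold <= VIEW_MAX:
                state = "view"
            else:
                state = "away"
            states[stb].append((prev_ch, hold, state))
        last_join[stb] = (ts, channel)
    return states
```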
D. Channel Popularity

Now we look into the relative popularity of channels. Figure 6(a) shows the aggregate fraction of viewer share accounted for by the r most popular channels. The top 10% of channels account for nearly 80% of the viewer share, indicating that channel popularity follows the Pareto principle (or 80-20 rule). This is consistent across different times of day. Figure 6(b) plots channel rank against the number of viewers. We observe that the distribution is Zipf-like for the top popular channels, while the popularity decays fast for non-popular channels (ranked 30th or below). Zipf-like popularity of top channels has also been shown in P2P-based IPTV services [11].

[Fig. 6. Multicast channel popularity: (a) Pareto principle, (b) Zipf-like popularity.]

E. Number of Viewers Over Time

Figure 4 shows the time-of-day and day-of-week trends in the number of viewers. We consistently observe strong peaks at particular times of day (e.g., 3PM, 10PM). However, this does not reflect the profile of individual users; in §5, we use clustering to identify the different patterns of usage times. An increase in TV viewing during weekends has also been shown in a survey of workers, where viewing hours over the weekend increased by 30-40% [1].

[Fig. 4. Number of viewers over time. The time-of-day trends in (a) and (b) show peak hours around 8AM, 3PM, and 10PM, which closely coincide with the dining hours of the monitored population. The number of viewers is much larger on weekend late mornings, identified as high viewing of children's programs and cartoons; viewing hours per household are 18% higher on weekends. The day-of-week trend in (c) shows that the total time served across all users is much longer over the weekends.]

F. Zooming Into Per-Program Viewing Behavior

Now we focus on how viewers leave and join a particular program. Let us first zoom into a single popular program. Figure 7 shows the user behavior during a popular soccer game held on May 23, the final match of the European Champions League. We plot the number of viewers, the viewer share, and the number of inbound and outbound channel switchings for the event channel. Due to the specifics of a sports event, there are four parts in the program. We see that a significant fraction of viewers leave the channel together during half-time and re-join afterwards, reflecting a very strong correlation amongst viewers in their behavior. We also note that many viewers join or re-join the program after the beginning of each part. Motivated by this, we later explore the possibility of feeding latecomers with missed scenes in a peer-to-peer fashion. For other types of programs (e.g., soap operas, news, comedy), we have also found strong membership amongst the viewers per program, with most viewers being highly correlated in when they join and leave channels.

[Fig. 7. Detailed user behavior during the May 23rd soccer match. The program includes four parts: first half, half-time, second half, and the award ceremony. The number of viewers, viewer share, and channel switchings clearly reflect these partitions. At its peak, the program clocks over 35% of the viewer share.]

An interesting point to note from Figure 7(c) is the constant set of users that depart the channel at any point in time. One could relate this behavior to channel surfing, where users browse in and out of the channel very fast. However, in our analysis, we have ignored those users that do not watch the channel for a sustained period of time (e.g., more than a few minutes). It is surprising to note that even after discounting such users, the churn is still very high, with about 10% of users constantly departing the channel.
G. Multicast Group Density

As described before, the IP multicast distribution in the backbone uses static trees, where all channels are delivered to all DSLAMs at all times, even when no user is watching them; only the tree branch from the DSLAM to the customers changes dynamically. Here we explore the potential benefits of an IPTV system using multicast trees that prune back if there are no receivers. To this end, we measure the tree density, which is the total number of multicast groups, across all DSLAMs, with at least one active user. Because users may leave the set-top box on when they turn off the TV, we exclude those users who did not show any channel switching action for longer than 1 hour. We have also tested larger inactivity periods, obtaining similar results. Figure 8 shows the tree density over a representative day. We observe an order of magnitude of difference between the tree densities of the static and dynamic multicast trees. In fact, the density of the dynamic multicast tree is surprisingly low (17% on average), suggesting that the IPTV traffic carried over the IPTV backbone is often not consumed by any user, and that an alternative design (e.g., dynamically pruning the multicast trees) could significantly reduce the traffic in the backbone.

[Fig. 8. Multicast group density: the total number of multicast groups from the TV head ends to all DSLAMs versus from DSLAMs to users (17% on average).]

This low tree density is largely determined by the DSLAM-to-STB aggregation ratio. For the current IP multicast design to achieve maximum efficiency, at least one user per DSLAM should watch each channel. However, most DSLAMs cover a few hundred users, and these users are not active all the time. Moreover, users' interest is often focused on a few channels. We have indeed found that the average number of channels viewed per DSLAM is only 17% of all channels. We expect that in the future the DSLAM-to-STB aggregation ratio will become even lower, since more and more DSLAMs are deployed closer to the end user to reduce attenuation over the last mile. Hence we expect the multicast tree efficacy to become even lower.
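The measurement itself is straightforward to express. A minimal sketch follows, assuming a pre-computed map from each 1-minute bin to the set of (DSLAM, channel) pairs with at least one active viewer (users idle for over an hour already excluded), and using the trace's 700 DSLAMs and 155 channels as the static baseline.

```python
def tree_density(active_pairs, n_dslams=700, n_channels=155):
    """active_pairs: {minute_bin: set of (dslam_ip, channel) with a viewer}.
    Returns, per minute, the fraction of multicast groups actually in use
    relative to static trees carrying every channel to every DSLAM."""
    static_groups = n_dslams * n_channels
    return {minute: len(pairs) / static_groups
            for minute, pairs in active_pairs.items()}
```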
III. A COOPERATIVE P2P AND IP MULTICAST SYSTEM

Advanced VCR features allow users to jump back in time and play back past scenes – a feature known as Replay TV. Current IPTV offerings provide this service in a standard client-server fashion, as part of the VoD service. Because individual users request different positions in the past, video servers use unicast rather than multicast delivery. A big drawback of this client-server approach is scalability, as the server workload and bandwidth grow linearly with the number of users.

Here we consider the potential of a collaborative P2P and IP multicast architecture, where the rewind function is provided through a P2P network of STBs (and by VoD servers if necessary). For the purposes of this paper, we limit ourselves to a VCR operation where users joining a TV program want to jump back to its beginning, e.g., a user wanting to watch an ongoing movie from the start (our results can easily be generalized to the case of serving any point backward or forward in time). It is important to note that such a rewind function is different from that provided by personal video recorders (PVRs) (e.g., TiVo): a PVR also provides rewind functions, but only for locally stored scenes, whereas our rewind function supports streaming the past scenes of any newly joined channel. We assume that all streamed content is stored locally at users' set-top boxes.² This stored content is used to serve other users, assisting the Replay TV video server.

² With a 2-4 Mb/s streaming rate, 21-43 GB of storage space is required to store one day of data. We assume STBs are equipped with enough storage.

P2P distribution has previously been shown to dramatically reduce video server load by distributing the load to peers [9]. However, the efficacy of a P2P system in supporting the rewind function is unclear for a number of reasons. First, as viewers freely switch channels, there may not be enough peers who stored the content from the beginning of the program, decreasing the serving capacity. The situation is aggravated for unpopular programs and for low-usage periods. Second, peers become unavailable while they serve other users, and scheduling globally optimal streaming across multiple channels is a very complex problem that remains to be solved. Third, a topology-oblivious P2P may incur a network cost higher than strategically positioned redundant video servers, or even IP multicast, as each P2P communication may turn into a unicast stream crossing the nationwide network.

To assess the potential of a cooperative P2P system, we perform a trace-driven simulation. We make several assumptions. Video content is divided into blocks of 1 minute, and a P2P request is made for each block. Users have enough downstream bandwidth to receive both the live IPTV multicast stream and the VCR rewind P2P stream simultaneously. We also assume that the upload capacity of each peer is equal to its download capacity; this is a reasonable assumption in a controlled environment such as IPTV, where the ISP controls the bandwidth allocation. P2P is used only for patching the missed scenes, while the ongoing scenes are received via IP multicast and stored locally for later viewing. For other possible designs of patching in multicast video streaming, we refer to a good body of work in [2], [10], [13]. A tracker provides an up-to-date list of viewers for each channel. We later consider a locality-aware P2P system, where a centralized tracker keeps information about which users are watching which channel at each DSLAM and matches local DSLAM peers whenever possible.

If a peer possesses the requested past blocks of a program and has enough upload capacity, it is deemed available to serve other peers. If a peer fails to find any available peer within a given number of trials, the VoD server immediately serves the peer. We assume that a peer will query up to 1,000 peers in our simulation before giving up on finding potential serving peers. A peer abandons the VoD server as soon as new peers become available. P2P patching traffic also stops when the requesting peer changes the channel.

To perform our simulation, we use the TV program schedule of each particular channel and map the program schedule to the channel switching logs. In our trace, some of the channels had varying schedules in different regions. Our trace did not include this meta-information about the schedules or the mapping of network devices to regions. Hence, we have manually identified 30 channels with common program schedules and focused on only those channels for assessing the P2P system. We should also mention that these 30 channels account for over 90% of the traffic.

Figure 9(a) shows the total amount of traffic required to support VCR operations over time, the amount of traffic that could be served by peers in the local DSLAM, and the VoD load that could not be served by the P2P system. The total traffic shows a time-of-day trend. Note that under a pure IPTV architecture, all the VCR load would be directed to the VoD servers (top line). With the assistance of a P2P system, we are able to dramatically reduce the VoD server load to a mere 5%. In a P2P system, the serving capacity for any particular block increases over time, as peers requesting a block also become serving peers for that block after the download. Hence latecomers to the program can easily find available peers.

[Fig. 9. Performance of the locality-aware P2P: (a) fraction of locally-carried traffic, (b) locality versus DSLAM size, (c) locality versus channel rank.]

With locality-aware peer selection, a remarkable 80% of the traffic can be served from within a DSLAM. This high locality reflects a high correlation between the local users and the channels they watch. This is important, since a large fraction of requests can be handled within the DSLAM domain, and the backbone cost can be avoided for this traffic. Naturally, the DSLAM size affects the level of locality. In Figure 9(b), we show the locality efficacy against the size of a DSLAM. The vertical axis represents, for a given DSLAM, the fraction of locally served traffic out of the total traffic served for the users within that DSLAM. We observe that the locality increases exponentially with the size of the DSLAM. Note that even for very small DSLAMs, the benefits are noteworthy. Figure 9(c) compares the locality against channel popularity. We observe that popular channels have higher chances of being served locally, and even less popular channels enjoy high levels of locality.

One other metric of interest when designing a P2P system is how many peers are required to assist in patching. Figure 10 shows this value as a function of the rewind period, with the maximum, average, and minimum number of peers that assisted patch streaming. We observe a steady increase in the number of peers as the covered rewind period increases. However, the number of peers required is always below 30. The slope of the plot shows that one has to switch peers once every 11 minutes on average; this value is very similar to the average period during which a peer watches each channel, which is 12 minutes. We also note that the graph becomes noisy after 60 blocks, because there are fewer such incidents.

[Fig. 10. Communication overhead versus request size.]
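The core of the simulated peer-selection step can be sketched as follows. The tracker and peer objects here are hypothetical simplifications of the real simulator, with a single upload slot standing in for the "upload capacity equals download capacity" assumption; dropping the locality-aware sort yields the plain, topology-oblivious variant.

```python
import random

MAX_TRIALS = 1000  # a peer queries up to 1,000 peers before falling back

def serve_block(tracker, channel, block_id, requester):
    """Resolve one block request; returns 'peer-local', 'peer-remote', or 'vod'."""
    candidates = [p for p in tracker.viewers(channel)
                  if p is not requester and p.has_block(channel, block_id)]
    random.shuffle(candidates)
    # Locality-aware variant: stable sort puts same-DSLAM peers first.
    candidates.sort(key=lambda p: p.dslam != requester.dslam)
    for peer in candidates[:MAX_TRIALS]:
        if peer.free_upload_slots > 0:        # peers busy serving are unavailable
            peer.free_upload_slots -= 1
            return ("peer-local" if peer.dslam == requester.dslam
                    else "peer-remote")
    return "vod"                              # the VoD server picks up the request
```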
IV. EXPLORING ALTERNATE LIVE-TV ARCHITECTURES

In §3, we investigated how a P2P system could augment an IP multicast service by enabling scalable VCR capabilities. In this section, we consider the possibility of delivering the live TV signal through combinations of network architecture choices, such as a Content Distribution Network (CDN) (e.g., Akamai) or a P2P solution (e.g., Zattoo, PPLive, Joost). In a CDN, the TV head end sends out streams to regional servers close to the DSLAMs; such intermediate servers replicate and forward the live broadcasts to the individual users via unicast. Another popular architecture is P2P. In a P2P-based IPTV system, the TV head end would send out streams to a small subset of set-top boxes, which in turn would serve other peer set-top boxes. Topology-oblivious P2P systems have been shown to inadvertently shift load from video servers to ISPs, while locality-aware P2P systems could alleviate the load on ISPs [14].

Unlike IP multicast, these alternative designs do not take into account the underlying service infrastructure, resulting in higher overall network usage. To illustrate the contrasting differences between the architectures, we use the toy example in Figure 11. We focus only on the total amount of traffic that travels over the boundary of the ISP backbone (i.e., the traffic carried between the DSLAM and the first IP router). As multimedia content dominates traffic volume, this link is immediately affected by any increase in traffic and is always a top candidate for congestion. We do not consider the backbone cost, as it depends on the network topology and ISP-dependent design choices, and is orthogonal to our problem.

[Fig. 11. Illustrative example of alternative architectures: (a) CDN, cost 4; (b) locality-aware P2P, cost 3; (c) topology-oblivious P2P, cost 7.]

Figure 11 shows a simplified ISP network with a video source (the national TV head end or, in the case of a CDN, regional servers), an edge IP router, DSLAMs, and STBs. In the case of IP multicast, based on our toy example, the access network cost is 2. In a CDN architecture, the regional server sends out redundant unicast streams to individual users, as shown in Figure 11(a), resulting in a cost of 4. In a locality-aware P2P, the leftmost peer receives the TV stream from the source and then serves another local peer; a third peer receives the stream from a remote peer and then serves its own local peer. We assume that local traffic can be re-routed to peers of a common DSLAM and is not forwarded up to the IP router. As a result, the network cost is reduced to 3, as in Figure 11(b). Figure 11(c) shows the worst-case scenario of a topology-oblivious P2P, where one peer receives the TV stream from the source while the rest of the peers receive it from remote peers. As a result, the network cost is 7, which is more than three times higher than IP multicast.

We consider the following five network architectures: 1) static multicast tree; 2) dynamic multicast tree; 3) CDN; 4) topology-oblivious P2P; and 5) locality-aware P2P. As our IPTV channel switching logs capture the user behavior of a large-scale deployed service, they offer us a very unique opportunity to compare all five design choices under a realistic user demand portfolio. We assume that CDN servers are co-located with the IP routers in our operational multicast IPTV network. In both P2P scenarios, we assume that a single user per channel receives the channel from the source and the rest of the users stream to one another. In topology-oblivious P2P, peer selection is random; in locality-aware P2P, peers first search for seed nodes within their DSLAMs, then search outside. Other details of the P2P system are as described in §3.

Figure 12(a) compares the network access cost of the five designs. From this figure we make several observations. First, the static IP multicast tree consistently requires a high cost, due to the low DSLAM-to-STB aggregation ratio discussed in §II-G. On the other hand, the dynamic multicast tree is the most economical design at all times, as expected. Second, the cost of the CDN architecture closely follows the time-of-day effect, as the CDN design uses unicast streams to individual viewers; this implies a significant CDN server load during peak usage hours. Finally, a comparison of the P2P architectures shows that sophisticated locality-aware peer selection dramatically reduces the network access cost (to one comparable to that of the dynamic IP multicast tree). However, a typical random peer selection (already used in many P2P systems) results in a much higher cost than any other design during the peak usage hours, and generates up to 6.2 times more traffic than a dynamic multicast solution.

To understand the drastic difference between the two P2P architectures, we further look into how much traffic was served by local users in Figure 12(b). The average fraction of traffic served by local users shows a stark difference: 87% versus 0.2%, respectively. It is interesting to observe that the request volume served locally follows the time-of-day pattern for locality-aware P2P, which explains how its cost remained relatively steady during peak hours in Figure 12(a).

[Fig. 12. Comparison of the network cost of five different architectures: (a) network cost comparison across all designs, (b) performance comparison of the two P2P designs.]

Overall, we have observed that a poorly designed P2P IPTV solution can have a significant impact on the ISP network. At the same time, whilst a dynamic multicast solution always provides the best possible performance, a P2P solution with locality has an overhead only slightly higher than that of a well-thought-out IP multicast TV service, paving the way for new TV delivery architectures.
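The toy example's costs can be reproduced by counting how many streams cross the DSLAM/IP-router boundary. The sketch below assumes the figure's four viewers sit behind two DSLAMs, an assumption about the drawing that yields the stated costs of 4, 3, and 7 (and 2 for multicast, one copy per DSLAM).

```python
def access_cost(streams):
    """streams: list of (sender, receiver); an endpoint is "source" or a
    (dslam_id, stb_id) tuple. Returns traffic units crossing the boundary."""
    cost = 0
    for sender, receiver in streams:
        if sender == "source":
            cost += 1                  # one copy down into the receiver's DSLAM
        elif sender[0] != receiver[0]:
            cost += 2                  # remote peer: up to the router, back down
        # same-DSLAM peer-to-peer traffic stays local: cost 0
    return cost

# Hypothetical instantiation of Figure 11: viewers a1, a2 behind DSLAM A,
# b1, b2 behind DSLAM B.
a1, a2, b1, b2 = ("A", 1), ("A", 2), ("B", 1), ("B", 2)
cdn        = [("source", s) for s in (a1, a2, b1, b2)]        # unicast to all
local_p2p  = [("source", a1), (a1, a2), (a1, b1), (b1, b2)]   # prefer local peers
random_p2p = [("source", a1), (b2, a2), (a1, b1), (a2, b2)]   # all remote swaps
print(access_cost(cdn), access_cost(local_p2p), access_cost(random_p2p))  # 4 3 7
```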
V. VIEWING PATTERNS FROM THE USERS' PERSPECTIVE

Having gathered data from a massive number of users, channels, and programs, we have the opportunity to take a closer look at the behavior of IPTV users. With the help of various clustering and data mining techniques, in this section we extract various user patterns. We can then answer a variety of questions, such as how viewers surf over TV channels, at what point they are likely to start or stop watching a particular channel, why they may select one channel versus another, and at what time of day they are most likely to be active. To this end, we perform three types of analysis. In the first analysis, we infer correlations between different TV channels based on users' viewing. In the second analysis, we study how likely users are to be in different states (e.g., viewing, browsing, or being away) and how they transition across states. In the third analysis, we classify users based on their temporal TV viewing patterns.

Throughout this section, we treat channel switching events as a sequence of user states. In particular, we identify three modes of viewing from a user's perspective: viewing a program, browsing through channels, or being away from the TV (no viewing at all). To determine users' viewing modes, we refer to Figure 3 and use the thresholds explained in §2 to delineate one mode from another. Let us first compare the key characteristics of the three states. Figure 13(a) shows the CDF of the number of occurrences of each state across all the IPTV users. The plot is based on a single representative day's trace. The graph shows that viewers browse a median of 27 times, view 9 times, and go away 3 times a day. We also observe extreme cases of viewing over 700 times and browsing over 1900 times a day. Figure 13(b) shows, for the same day's trace, the average time spent in each state across the users. We observe that users skim through channels for a median of 10 seconds while browsing, view for 10 minutes, and stay away for 5.5 hours, respectively. Note that the plot shows clear boundaries between the three modes, supporting our choices of threshold values.

[Fig. 13. Comparison of the three user states: (a) frequency within a day, (b) time spent in each state.]

In the following, we present three analyses based on the three-state user model.

A. Correlation Across Channels

We commence by studying the possible correlation across channels, identifying channels that are viewed by the same group of users. If two channels have the same viewers, then we pair the two channels. Such pairing identifies a group of target users that watch similar programs. This is potentially very useful, since content providers and advertisement companies could tailor their marketing strategies to each group of users.

To identify the channel pairs that are viewed together, we use a one-week trace. To filter out light users that are not statistically representative, we only consider logs from those users who watched more than two channels and went through a View state at least ten times during the monitored period. Then we construct a binary vector $x_i$ for each channel $i$, such that $x_i(u) = 1$ iff user $u$ watches channel $i$. We only consider events of View states and exclude events of Browse or Away states. Our clustering technique consists of the following three steps. 1) We compute the cosine coefficient, $\rho_{i,j}$, between any pair of channels $i$ and $j$ as

$$\rho_{i,j} = \frac{x_i^T x_j}{\sqrt{x_i^T x_i}\,\sqrt{x_j^T x_j}}. \quad (1)$$

The cosine coefficient is widely used to quantify the similarity between two vectors; channels that are always viewed together have a correlation $\rho_{i,j} = 1$. 2) We binarize the cosine coefficient based on a threshold value: we set $\rho_{i,j} = 1$ if the value of $\rho_{i,j}$ is greater than the threshold, and $\rho_{i,j} = 0$ otherwise. If we set the threshold high, then no pairing of channels is likely; if low, then all channels will be paired with each other and one huge cluster will emerge. Note that the cosine coefficient is not transitive: even if channels A and B are paired and channels B and C are paired, channels A and C are not always paired. If the threshold is high, then it becomes more likely that channels A and C are paired. Our results will reveal how far this type of transitivity holds in real life. We use a threshold value of 0.8 for our data. 3) Finally, we pair channels with cosine coefficients above the threshold and find the connected clusters in the graph of paired channels.
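For concreteness, the three steps can be sketched as follows, assuming the binary user-channel matrix has already been built from View events only.

```python
import numpy as np

THRESHOLD = 0.8  # the paper's cosine-coefficient cutoff

def channel_clusters(X):
    """X: binary matrix (n_users, n_channels), X[u, i] = 1 iff u viewed i."""
    norms = np.sqrt((X * X).sum(axis=0))         # ||x_i|| for each channel
    norms[norms == 0] = 1.0                      # guard: never-viewed channels
    sims = (X.T @ X) / np.outer(norms, norms)    # rho_{i,j} of Eq. (1)
    adj = sims >= THRESHOLD                      # step 2: binarize
    np.fill_diagonal(adj, False)
    # Step 3: connected components of the pairing graph (iterative DFS).
    seen, clusters = set(), []
    for start in range(adj.shape[0]):
        if start in seen:
            continue
        stack, comp = [start], []
        while stack:
            i = stack.pop()
            if i in seen:
                continue
            seen.add(i)
            comp.append(i)
            stack.extend(int(j) for j in np.flatnonzero(adj[i]))
        if len(comp) > 1:                        # keep only paired channels
            clusters.append(sorted(comp))
    return clusters
```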
Figure 14 shows our channel clustering result. Our trace shows 5 major connected components covering 15 channels, out of over 150 channels. A node represents a channel, and a link between two nodes represents a cosine coefficient above the threshold value. Note that any two nodes within the same cluster are connected, but may not share a link. When we check the genre of each channel, surprisingly, we observe a clean matching of each cluster to a single content type or topic. The largest cluster belongs to a group of six free-to-air national channels. Other clusters are focused on music, movies, and documentaries. This indicates that a user's channel selection is highly driven by the content topic.

[Fig. 14. Channels within the same cluster (Nationals, Music1, Music2, Movies, Documentaries) show high semantic similarity.]

Once we have paired channels into clusters, we can identify larger groups of users by a given cluster or a combination of clusters. For instance, 40% of users only watch the Nationals cluster, while 30% of users watch a combination of channels in Nationals and Music1. Channel pairing and clustering provide a natural way to identify and expand groups of users with similar interests.

B. Transition Between User States

Now we focus on the three modes of user behavior (viewing, browsing, and away) and examine how frequently users are in each state and how they jump from one state to another. To see such transitions, we build a Markov model based on the three states. From our trace, we identify, for all users, all events where a transition from one mode to another takes place, and count the number of transitions between modes. Note that the time spent in each state is not reflected in this model; in a time-based model, user patterns would not emerge as clearly, since some states dominate others.

Figure 15 depicts our 3-state model along with the state transition probabilities. This model, built from a one-week trace, reflects the average TV viewing behavior across all users. The sum of the outgoing probabilities from each state is 1.0. Note that a user can go from an Away state to another Away state with probability 0.21; this is the case when users briefly watch a new channel but go away for lack of interest. As expected, from all three states, the transition to Browse is the most likely. After Browse, users go Away more often than they stay on to view another program. When a user returns from the Away state, one is more likely to begin with Browse than to settle down on View.

[Fig. 15. 3-state Markov model (View, Browse, Away) with state transition probabilities.]

Our Markov model in Figure 15 represents the average channel switching behavior of all users. The behavior of individual users may vary greatly from this global pattern. Now we focus on individual users and see if we can find clusters of users with similar behavior. To this end, we first build the three-state Markov model for each individual user, and then compare its stationary probabilities with those of the global model in Figure 15.
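A minimal sketch of the per-user model: count transitions from each user's ordered state sequence, row-normalize, and take the stationary distribution as the leading left eigenvector. The input sequence format is an assumption (e.g., the output of the state classifier sketched in §II).

```python
import numpy as np

STATES = {"view": 0, "browse": 1, "away": 2}

def transition_matrix(sequence):
    """sequence: a user's chronologically ordered list of state labels."""
    counts = np.zeros((3, 3))
    for a, b in zip(sequence, sequence[1:]):
        counts[STATES[a], STATES[b]] += 1
    rows = counts.sum(axis=1, keepdims=True)
    rows[rows == 0] = 1                 # avoid division by zero for unused states
    return counts / rows                # each row's outgoing probabilities sum to 1.0

def stationary(P):
    """Stationary distribution: left eigenvector of P for eigenvalue 1."""
    vals, vecs = np.linalg.eig(P.T)
    v = np.real(vecs[:, np.argmax(np.real(vals))])
    return v / v.sum()

# Hypothetical usage with one user's day of states:
pi = stationary(transition_matrix(["browse", "view", "browse", "away", "browse"]))
```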
We then cluster users based on how far away they are from the "average" behavior, as follows: a) if the stationary probability of Browse for an individual user exceeds the global Browse probability of 0.703 by more than θ1, we assign the user to the "Heavy Browsers" group; b) likewise, if the probability of the View state is greater than 0.237 + θ2, we assign the user to the "Focused Viewers" group; c) if the stationary probability of Away is greater than 0.060 + θ3, we assign the user to the "Light Users" group; d) otherwise, we assign the user to the "Average Viewers" group. We use a common threshold θ = 0.1, which divides the users in our trace into proportionally sized groups while maintaining per-group characteristics. Table II summarizes the characteristics of each cluster, with its cluster size.

TABLE II
CHARACTERISTICS OF USER CLUSTERS BASED ON USER STATES

Cluster                    | Average Viewers | Heavy Browsers | Focused Viewers | Light Viewers
Size                       | 37%             | 15%            | 25%             | 21%
Tot. viewing time (hr/day) | 2.89            | 1.97           | 3.25            | 0.5
Avg. viewing time (min)    | 12.5            | 11.1           | 14.0            | 11.6
# viewed channels          | 7               | 6              | 6               | 1
# browsed channels         | 21              | 32             | 11              | 5

We can observe that focused viewers show the largest daily viewing hours as well as the longest average viewing time. Surprisingly, even though their viewing time is clearly the longest, they watch more or less the same number of channels as the average users and heavy browsers; rather, they view these channels multiple times a day, indicating a strong loyalty toward particular channels. When we compare the number of browsed channels versus viewed channels, heavy browsers show significantly higher diversity in browsed channels; interestingly, however, the number of channels they actually view is similar to that of the average users, suggesting a strong inertia towards the same set of channels. The total time spent watching TV per day drops markedly for light users.

C. Temporal Correlation in Viewing

Our final goal is to group those users who watch TV at similar hours and study the temporal correlation patterns. Here we describe the use of nonnegative matrix factorization (NMF) to find a small number of temporal patterns from a quarter million users. NMF is a powerful method to identify distinct patterns; for instance, it has been used to extract features such as the nose and eyes from face images [16] or to effectively extract a handful
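As a sketch of how such temporal patterns could be extracted, the following applies scikit-learn's NMF to a user-by-hour viewing matrix; the matrix here is synthetic stand-in data, and the rank of 4 is an arbitrary illustrative choice, not the paper's setting.

```python
import numpy as np
from sklearn.decomposition import NMF

# V: nonnegative (n_users x 24) matrix of viewing minutes per hour of day,
# factored as V ~ W @ H, where rows of H are shared temporal patterns and
# W gives each user's mixture weights over those patterns.
rng = np.random.default_rng(0)
V = rng.poisson(5.0, size=(1000, 24)).astype(float)   # synthetic stand-in data

model = NMF(n_components=4, init="nndsvd", max_iter=500)
W = model.fit_transform(V)        # (users x 4) per-user pattern weights
H = model.components_             # (4 x 24) temporal patterns
peak_hour = H.argmax(axis=1)      # each pattern's most active hour of day
```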
