The Headend Revisited: A Multi-Service Video Data Center for the Modern MSO


S.V. Vasudevan, R. Wayne Ogozaly
Cisco

ABSTRACT

Cable MSOs are under competitive pressure to deliver an increasing number of new services to compete effectively with rival service providers, as well as emerging over-the-top players. This paper proposes a strategy that evolves the traditional cable headend into a multi-service Video Data Center, well positioned for the challenges of the modern era.

The Video Data Center combines the best of the traditional video headend with proven data center techniques. This paper discusses the intersection of these digital delivery systems and describes a next-generation data center model. Internet data center architectures are examined and compared with the data delivery requirements of a modern digital cable system. Specific insertion strategies describe how cable delivery networks can evolve into a multi-service Video Data Center spanning broadcast, on-demand, and next-generation IPTV services, while integrating the best of today's regional headend with advances in data center technologies.

INTRODUCTION

Cable MSOs face considerable operational challenges as they seek to smoothly integrate innovative new video services into their existing delivery platforms. As we witness the deployment of more interactive video services, IPTV overlays, internet streaming to set-top receivers, and other advanced technologies, of equal importance is the smooth integration of these new functions with existing video, voice, and data services across headend, transport, and last-mile networks. These new services must coexist with and even complement existing services such as linear broadcast, switched digital video, and on-demand video services.
Unlike the passive linear broadcast delivery model, these advanced services implement much more dynamic traffic flows between subscriber and headend, resulting in increasing operational considerations.

But video delivery systems are not the only area that has recently experienced great infrastructure and traffic growth. Consumption of internet web content, information which is compositionally populated with a mixture of database content and increasingly rich media components, has fueled the evolution of data centers, super-evolved versions of the "computer room" of the 1970s. Traditional internet content and service providers have deployed data center computing systems to support the 24x7 operation of web, application, and database servers, supporting the efficient delivery of interactive web content to large user populations. In order to meet the increasing performance and scale demands of a global web audience, enterprise computing, network, and storage architectures have evolved into organized and tiered architectures to support high-transaction information throughput.

Most modern cable headends and hubs already use IP as a common transport for data, voice, and video traffic. Accordingly, video delivery to consumers is rising in scale, growing in interactivity, and being distributed to an increasingly diverse set of video-capable devices. The increasing adoption of
interactive services such as VOD, SDV, and IPTV will present a similar increasing transactional traffic load on service delivery platforms. As these changes occur, content networks expand their reach to a national and global scale, and application server compute requirements grow to respond to the increasing transactional load, the headend of old begins to resemble a data center that is serving a variety of data types, led by video, to a large subscriber population.

The service provider industry has already benefitted from leveraging technology that was originally developed for orthogonal markets. The very use of IP switching and routing for video delivery serves as an excellent example. The switches and routers that now carry triple-play traffic were originally developed for the enterprise networking market. All of the staff-years and capital investment in ASICs, software, and hardware development in the first 10 years of switch and router development went toward products intended for the wiring closet, essentially funded by the global enterprise IT market. By adapting these technologies to the performance, scale, and reliability requirements of the service provider market, and through clever media processing techniques that enable the safe carriage of media streams over IP networks, IP switching and routing of entertainment-grade video is now seen as the norm. The service provider industry enjoyed a drastic reduction in per-bit transport costs as a benefit of this technology transfer.

The remainder of this paper examines the evolution of the enterprise/internet data center and provides an overview of the networking and computing requirements that have driven data centers in their efforts to scale and serve their requirements. We compare and contrast internet data center information processing requirements with cable headend information processing requirements.
Finally, we propose some appropriate data center techniques and best practices that can help cable headends more effectively scale the throughput of their delivery systems.

1.0 THE DATA CENTER EVOLUTION

Introduction

Much like the cable video headend, data centers in the enterprise are under increasing performance pressures. The enterprise data center is being influenced by shifting business pressures and increasing operational limitations. The new demands of the enterprise require enhanced video services, greater collaboration, near-instantaneous access to applications and information, and compliance with ever-tightening regulations. These strains have manifested as operational issues relating to power and cooling, efficient asset utilization, element and system management and monitoring, escalating security and provisioning needs, and increasing operational expense.

A fundamental metric of data center efficiency relates to cost, power, space, computational power, and ingress and egress throughput. Any data center design trades off between these metrics, depending on the underlying requirements of the offered service. Beyond this, there is increasing attention paid to the "-ilities" [1]:

• Scalability
• Reliability
• Availability
• Serviceability
• Manageability
• Security

The data center architecture has evolved to accommodate these demands in an efficient way. With each architectural shift, the network has become an increasingly important ingredient to enable the latest round
of transitions. The transformation to the newest evolution of Data Center 3.0 technologies focuses on a service-oriented design, consolidation of server farms, virtualization across disparate networked elements, and enhanced automation to manage the heavy load of requests for content, applications, and services.

Figure 1 – The Enterprise Data Center is undergoing an extreme makeover, focused on server consolidation, service virtualization, and controlled automation.

The net result of this transformation is a massively scalable architecture; some data centers are built to contain upwards of 100,000 servers, stitched together as a fabric of virtualized elements.

Many design elements of the enterprise data center are applicable to the next-generation cable Video Data Center. To better appreciate the similarities and differences, it is useful to present a brief overview of enterprise data center architecture.

Attributes of the Modern Data Center

Most modern data center architectures are based on a proven layered approach, which has been deployed in many of the largest data centers in the world. This layered approach provides the foundation of the contemporary data center and includes three major components: a core layer, an aggregation layer, and an access layer [2].

Figure 2 – The Data Center architecture implements a layered approach designed to diffuse massive loading across thousands of server elements [2].

It should first be noted that in enterprise data center terminology, the descriptors core, aggregation, and access are used in an almost opposite manner to the way these terms would be used and interpreted by a cable headend engineer.

The core layer provides the high-speed packet switching backplane for all flows entering and leaving the data center. The core layer provides connectivity to multiple aggregation modules and provides a resilient Layer 3 fabric for this redundant system.
This layer runs an interior routing protocol, such as OSPF or EIGRP, and load balances traffic between the core network and the aggregation layer.

The aggregation layer includes multi-service switches which manage the interaction between servers through a Layer 2 domain. Server-to-server traffic flows through the aggregation layer and uses features such as firewall and server load balancing to optimize and secure applications.

The access layer is where servers physically attach to the network. Unlike the cable world, "access" in the data center refers to the core of server design, where the network provides access to thousands of servers. Server components consist of 1RU servers,
blade servers with integral switches, blade servers with pass-through cabling, clustered servers, and mainframes. The access layer network typically consists of modular switches and integral blade server switches. These switches manage Layer 2 and Layer 3 topologies which distribute flows across various server pools in different broadcast domains.

Distributing Requests Across Server Pools

Many applications run inside an enterprise data center. Each application may have one or more IP addresses associated with it, providing an identity to which Internet users send their requests. As Internet requests are received through the core and aggregation layers, specialized load balancers distribute these requests across a pool of targeted servers, using a combination of Layer 2 VLANs and internal virtual IP addresses. A series of complex algorithms sift through requests, associate a virtual IP address and VLAN domain, and distribute the requests to the private addresses of physical servers [3]. These complex load balancers may confine requests for specific applications to a pre-allocated pool of servers, segregated into manageable clusters. In that way, server performance, redundancy, and security can be managed and scaled in a predictable way.

The multi-tier model in today's data center is dominated by HTTP-based applications. Scalable web-based services are built with multi-tier processing layers which include web, application, and database tiers of servers. Web servers field HTTP requests and pass transactional information. Application servers fulfill these transactional requests by consulting one or more databases or sub-services. This information is presented back to the web server, which composes the properly formatted HTML page containing all requested information.
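The tiered request flow can be sketched in miniature as three cooperating functions. This is an illustrative sketch only; the catalog contents, field names, and function names are invented, not taken from the paper:

```python
# Minimal sketch of the web/application/database tiers described above.
# All data and names here are hypothetical illustrations.

def database_tier(query: str) -> dict:
    """Database tier: return stored records for a query."""
    catalog = {"asset:42": {"title": "Example Movie", "duration_min": 120}}
    return catalog.get(query, {})

def application_tier(request: dict) -> dict:
    """Application tier: fulfill the transactional request by consulting
    one or more databases or sub-services."""
    return {"status": "ok", "data": database_tier(request["query"])}

def web_tier(path: str) -> str:
    """Web tier: field the HTTP request and compose the formatted page."""
    result = application_tier({"query": path.lstrip("/")})
    title = result["data"].get("title", "not found")
    return f"<html><body>{title}</body></html>"

page = web_tier("/asset:42")
```

Because each tier exposes only a narrow interface to the one above it, any tier can be scaled out behind a load balancer without the others noticing, which is the independent-scaling property the text describes.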
This model enables the independent scaling of presentation, application, or database resources as the service requirements dictate.

The multi-tier model may use software that runs as separate processes on the same machine using inter-process communication (IPC), or on different machines with communication over a switched network.

Figure 3 – For most HTTP-based applications, Web, Application, and Database servers work across clearly defined server pools using a tiered model to process HTTP-based service requests [2].

2.0 THE VIDEO DATA CENTER

Comparing and Contrasting Enterprise and Video Data Centers

An important distinction between enterprise and video data centers can be understood by analyzing the traffic pattern of each service. For a three-tiered web site, the amount of information offered is relatively small (an HTTP GET request), the computational demands are moderate, and the amount of information returned is also relatively small (the average web page size is about 300 kB). For a search
engine web site, while the information offered is also small, the computational demands are very large (a search request may be dispatched to dozens of index servers working in concert) [4], and the information returned is also very small. For a video service provider with a service such as VOD, the information offered is relatively small and the computational demands are moderate, but the information returned is very large (a 2-hour movie in HD moves 14 GB of data). Thus there is an immediate distinction in both computational and egress throughput requirements between the internet data center and the video data center. Accordingly, the architecture can be optimized for the specific service requirements. It should not be overlooked, however, that both types of data center share goals in the areas of scalability, reliability, availability, serviceability, manageability, and security.

In this spirit, a video headend architecture implementing selected data center best practices is proposed.

Combining the Best of Data Center and Video Headend Technologies

As MSOs continue to expand the number and scale of video services, the resulting permutation of content, formats, interactive features, and end devices has placed a burden on the traditional cable headend. This burden may be felt within the headend in a number of ways. First, services such as VOD, SDV, and IPTV require a resilient two-way communications network, including outside plant and home connections. Upstream traffic will organically grow with the number of subscribers and applications. Correspondingly, the performance of the application servers that process these upstream requests will need to improve. In many cases the end result of the subscriber request will be the switching or streaming of video content. This information must be delivered via a resilient IP network with strict packet loss requirements. In the process of being delivered, the stream may be spliced, encrypted, or otherwise transformed by another cascading device.
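The scale of this streaming burden follows directly from the figures cited earlier in this section: a small upstream request triggers a very large, sustained downstream flow. A rough calculation, assuming the ~300 kB page and 14 GB two-hour movie figures quoted above:

```python
# Back-of-the-envelope egress comparison using the figures cited in the text.

web_page_bytes = 300e3          # ~300 kB average web page
movie_bytes = 14e9              # ~14 GB for a 2-hour HD movie
movie_seconds = 2 * 3600

# Sustained egress rate to stream the movie in real time (~15.6 Mbps):
movie_mbps = movie_bytes * 8 / movie_seconds / 1e6

# Egress volume of one VOD session versus one page view (~47,000x):
volume_ratio = movie_bytes / web_page_bytes
```

A single VOD session thus moves tens of thousands of times the data of a page view, which is why the video data center optimizes for egress throughput where the internet data center optimizes for transaction rate.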
Additionally, sophisticated processes and systems are needed to manage the increasing rotation of licensed on-demand content, as well as ad content.

This evolving mix of new services, expansive capacity, and more interactive services has triggered an evolution of the video headend. To accommodate this new service mix, the Video Data Center aims to combine the best of data center and traditional headend models into a more scalable and manageable system.

Video Data Center Design Goals

A number of key components have emerged to address the needs of the Video Data Center in the modern era:

• Multi-Service Support: The next-generation cable Video Data Center not only needs to support core cable video services including broadcast, Switched Digital Video, and VOD, but it must also modularly accommodate new services such as IPTV, internet streaming to the TV, and video streaming to the PC or handheld device.

• Service-Independent Scaling: Each video service (broadcast, SDV, VOD, IPTV, etc.) should be able to grow independently in capacity, without disrupting existing services. The IP network and multicast control plane play a key role in identifying and independently managing each video service.

• Resource Modularity and Demarcation: Each video resource within the Video Data Center (acquisition, grooming, encryption, etc.) is organized as a modular component with clear demarcation points. A pair of multicast addresses is used to identify the input and output points of a
video stream as it traverses the delivery network. The IP network and multicast control plane are used to identify, grow, and independently manage each video resource. This methodology prepares the way for service virtualization and improved scaling. It also enables the development of IP appliances that provide future stream processing functionality, or super-appliances that consolidate more than one stream processing function.

• Service Virtualization: Service virtualization has many meanings, in both enterprise and service provider information processing. From the service provider perspective, it describes the implementation of a service that is logically viewed as a single client-server entity but may physically be realized by more than one application instance. In content networking, virtualization refers to the abstraction of a video object from its physical attributes or location. For example, a movie may be stored in multiple locations and transcoded into multiple formats to suit multiple receiving devices, but in a virtualized implementation it need only be known as a single entity.

• Fault Containment and Resiliency: The combination of embedded quality monitoring and enhanced redundancy techniques allows video faults and outages to be rapidly identified, circumvented, and contained within the Video Data Center. The Video Data Center provides a fully redundant system in which most failures are contained and not propagated throughout the video network.

• IP Early Acquisition: The next-generation Video Data Center will acquire large volumes of content, both real-time and asset-based, sourced from diverse locations in a wide range of formats. Much of this content will be shared across multiple video services.
The "IP Early Acquisition" module ensures that this content is available to all services in an IP format, at the earliest point in the content acquisition process.

• Improved Operations and Management: Stream visibility and quality monitoring across all services at strategic measurement points is always important. Since both video streaming/processing devices and networking devices comprise the video delivery chain, it is important to have monitoring tools that can present a synthesized and unified view of the delivery system. The Video Data Center integrates video service monitoring across the delivery network.

3.0 BROADCAST SERVICES USING VIDEO DATA CENTER TECHNOLOGIES

Implementing Switched Digital Video within the Video Data Center

While there are some fundamental differences between the volume and type of information exchanged from a headend versus a database-driven website, there are also many similarities between each data center's design goals. Many data center architecture and networking practices can be applied in the cable headend.

Figure 4 – The Switched Digital Video prototype service is implemented using Video Data Center techniques and includes 100 HD and 200 SD channels, mapped to 30 Service Groups, in a 15,000-household hub site.

As an illustrative example, we
present a hypothetical system design for a Switched Digital Video (SDV) service implementation. The SDV service described in this paper was designed and tested using the design parameters presented in Figure 4.

The advent of Switched Digital Video (SDV) technology provides a fundamental change in the way the industry delivers digital video entertainment. With SDV, service providers have the ability to offer a wider variety of programming while managing HFC network bandwidth in a sustainable way. The SDV architecture switches only selected content onto the HFC based on channel change requests from users within a service group. Thus, content that is not requested by any user in a particular service group does not occupy HFC bandwidth.

MSOs considering the deployment of SDV technology face three operational challenges. The first is the integration of the large number of operational components required by the SDV service. Unlike the linear broadcast model, SDV implements a much more dynamic service in which SDV Servers dynamically map requested content to various QAM channels across hundreds of service groups. This increases operational complexity.

The second challenge for cable MSOs considering SDV is how to smoothly integrate an SDV service with existing video services across the headend. The SDV service must co-exist with and even complement existing services such as linear broadcast, VOD, IPTV, and other streaming techniques. The SDV system must also be managed and scaled concurrently with these adjacent video services.

A final challenge is capacity planning for future growth. Since the SDV system has the ability to admit a great number of linear programs with only a modest increase in required HFC stream resources, operators need to rely on tools that provide visibility of system resource utilization.
When the opportunity is presented to augment the amount of switched programming, the Video Data Center should be able to accommodate capacity expansions without any major disruption to the system in place.

Layered Network Approach

The Video Data Center implements a layered network approach to manage resource access, security, and scaling across a diverse set of video elements. Figure 5 highlights the Core, Aggregation, and Video Resource Access layers within the Video Data Center. Similar to the Enterprise Data Center, these layers provide a Layer 3 core connection to the Regional Network, aggregation and load balancing of video flows across various cable services, and access to a range of modular video resources. Individual video resources are linked together using the multicast control plane to create an SDV cable service.

Figure 5 – Similar to the Enterprise Data Center, the Video Data Center implements a layered network approach, including Core, Aggregation, and Access layers.

The Aggregation Layer includes redundant multi-service switches which provide GE access to individual resource elements. Since
broadcast services within the Video Data Center operate on a much smaller scale than the traditional data center, fewer multi-service switches are required. In this SDV example, two high-density switches provide dual-homed GE interconnects to all video resource elements.

Enabling Service Virtualization

Service virtualization is a critical principle upon which many of today's advanced data centers are built. The goal of service virtualization is to provide a more efficient delivery platform, better aligned with the massive scale and complexity of today's networked environment. Although service virtualization in the data center continues to evolve, several components are directly applicable to the Video Data Center:

• Network-Enabled Modularity: Resources are defined as modular components with clear network demarcation points, providing a more scalable and flexible design. Capacity can typically be added with minimal impact to existing services. (Example: SAN-based network storage, which is managed and scaled independently from the server farms.)

Figure 6 – Data Center storage is a modular resource, which is networked via the SAN, and can be independently managed and scaled [2].

• Distributed Service Models: Providers have the flexibility to distribute various processing stages across multiple servers, regional networks, and national backbones. Resource managers that are present for interactive services will play a principal role in the Video Data Center. These managers will optimize the allocation of resources to satisfy a number of parameters, including cost, latency, and even potential revenue opportunities.

• Masking Complexity within Multi-Service Environments: Improved resource management systems, advanced load balancing, and multi-tiered access networks hide the underlying technology from the end user. Content is available in multiple formats to a myriad of consumer devices.
Exactly how this content is acquired, formatted, and delivered is hidden from the end user.

Similar techniques are implemented within the Video Data Center. As described in Figure 7, video resources operate within a modular architecture, defined by clear network demarcation points.

Figure 7 – Similar to the Enterprise Data Center, video resources are split into modular components and available as distributed network elements.

These video resources are shared by multiple video services including linear broadcast, SDV, time-shift TV, PC TV/IPTV, and other streaming services. This
combination of resource modularity and network demarcation enables specific functions like Ad Insertion or VOD Streaming to be contained within a centralized headend or distributed throughout a regional network.

SDV Service Creation

The multicast control plane plays a vital role in the Video Data Center design. Figure 8 describes how the multicast control plane strings together modular resources to create video content for the SDV service. In our example, 300 SDV SPTS streams are mapped to unique IP multicast group addresses. Devices along each processing stage issue IGMPv3 JOIN messages to draw specific multicast streams to each device over the Layer 3 GE network. Redundant multi-service switches provide the routed control plane to manage the handoff of these streams between each processing stage.

Each of the major processing stages utilizes a unique multicast source address to identify primary and backup video sources and obtain the correct content. Source-Specific Multicast (SSM), a feature of IGMPv3, also allows each stream to maintain a single multicast group address (but varying source addresses) spanning all of the video processing stages.

Figure 8 – The 300-channel SDV service is created by stringing together processing elements, using the SSM multicast control plane.

In this example, the same multicast group address is used for each SPTS stream across the four processing stages. There is one group address "GP" mapped to the 300 primary video streams in the SDV tier, and a second group address "GS" mapped to the 300 secondary or backup streams. This SSM feature greatly reduces the number of multicast groups used throughout the Video Data Center, since a single group address (with varying source addresses along each stage of the delivery chain) can follow a stream throughout the Video Data Center.
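The (source, group) scheme described above can be modeled in a few lines. All IP addresses below are invented placeholders; the point of the sketch is that the group address follows the channel end to end, while each processing stage contributes its own source address:

```python
# Sketch of SSM (S, G) addressing in the SDV delivery chain.
# Every address here is a hypothetical illustration, not a value
# from the paper.

def primary_group(channel: int) -> str:
    """Stable 'GP'-style group address for SDV channel 1..300
    (hypothetical 232.1.0.0-based range)."""
    return f"232.1.{channel // 256}.{channel % 256}"

# One source address per processing stage (hypothetical):
STAGE_SOURCES = {
    "acquisition":  "10.0.1.1",
    "conditioning": "10.0.2.1",
    "ad_insertion": "10.0.3.1",
    "encryption":   "10.0.4.1",
}

def ssm_pair(stage: str, channel: int) -> tuple:
    """(S, G): the group identifies the stream, the source identifies
    which stage is handing it off."""
    return (STAGE_SOURCES[stage], primary_group(channel))
```

Because every stage reuses the same group for a given channel, a monitoring tool can follow one group address through the whole chain, which is the visibility property the text attributes to SSM.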
This technique also provides a clear network demarcation point between each stage and offers MPEG monitoring tools complete visibility into all SDV multicast streams.

Within this Video Data Center architecture, the SDV video path contains four processing stages. The IP Early Acquisition stage provides redundant access to all SDV video content. During this stage, SDV video content is acquired from redundant sources including satellite, off-air, and terrestrial links. SDV content is transcoded to an IP MPEG format at this early stage. Where necessary, video streams are converted from multi-program transport streams (MPTS) to single program transport streams (SPTS), as used by SDV.

The second stage provides Stream Conditioning of all SDV channels by redundant groomers. Since redundant copies of the SDV video streams are available, this stage performs a stream selection process in which each groomer independently selects the best available copy based on ETR 290 MPEG quality measurements. After stream selection, groomers rate-cap the resultant SPTS streams from Variable Bit Rate (VBR) to Constant Bit Rate (CBR), as used by SDV.

The third stage of processing provides Ad Insertion, delivered by ad servers and standards-based SCTE 30/35/130 MPEG insertion techniques.
In the final stage of SDV processing, two independent encryption devices bulk-encrypt all SDV SPTS channels. The redundant encryption devices independently create two instantiations of the SDV channel lineup. The complete SDV channel lineup is now available for use by remote hub sites. In this fully redundant system, 300 primary streams are identified by multicast group address "GP", in addition to 300 secondary video streams branded by group address "GS".

Throughout this data center design, processing elements compare stream quality between the primary and backup SDV copies. In many cases, redundancy mechanisms identify MPEG-level faults and perform a cutover to the backup stream to contain the fault to the data center. These techniques prevent the propagation of video faults throughout the video network, pre-empting QAM-level cutovers to backup streams in many cases.

Source-Specific Multicast (SSM) Load Balancing

A key element of the enterprise data center is the ability to load balance millions of flows across hundreds and potentially thousands of servers. These complex load balancing techniques typically use a creative combination of L2 VLANs, virtual address schemes, and server pools to evenly distribute requests across thousands of servers. This design is driven in part by the commoditization of server hardware, in which enterprise-class servers are being replaced with thousands of low-cost blade servers. With the assistance of distributed computing and distributed systems management, load balancing across thousands of server elements is both efficient and reliable in the traditional data center design.

By contrast, most elements within the video headend remain specialized and far from commoditization. These specialized processors require their own sophisticated session and resource management systems. These management systems string together a number of video resources to generate each properly encoded, conditioned, ad-spliced, and encrypted video stream.
Fortunately, for broadcast video services, the scale of today's Video Data Center is typically much smaller than its enterprise counterpart with respect to the number of elements. Given these differences, Video Data Center designs continue to employ resource managers and the multicast control plane to provide flow control and load balancing for most broadcast services.

For the SDV design, the SSM multicast address scheme provides a stable and predictable load balancing technique, suitable to the size and scale of the SDV service. Specific SDV streams are mapped to individual GE ports via IGMPv3 JOINs. Figure 9 illustrates how a subset of multicast group addresses is mapped to specific GE ports within the Stream Conditioning stage of the Video Data Center. In this example, 300 SDV SPTS streams are evenly distributed across 3 GE ports. The streams are divided into 3 groups of 100 streams (e.g., G1-100, G101-200, G201-300). SSM allows the same group address to be reused across different processing stages, simplifying stream management. This processing stage employs different multicast source addresses to acquire content (from S3) and hand off the groomed streams (from S5) to the next stage.

Figure 9 – SSM multicast provides a well-defined load balancing technique to distribute 300 SDV video streams across 3 GE ports.
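The block mapping of Figure 9 can be sketched directly. The channel counts come from the text; the mapping function and the per-stream bitrates are hypothetical illustrations (the rates are chosen to be consistent with the 4.5 Gbps figure the paper cites for the doubled 600-channel tier):

```python
# 300 SDV multicast groups distributed across 3 GE ports in contiguous
# blocks (channels 1-100 -> port 0, 101-200 -> port 1, 201-300 -> port 2).

def port_for_channel(channel: int, total_channels: int = 300,
                     ports: int = 3) -> int:
    """Contiguous block assignment of channels to GE ports."""
    block_size = total_channels // ports
    return (channel - 1) // block_size

# Each port carries 100 of the 300 streams:
load = [sum(1 for ch in range(1, 301) if port_for_channel(ch) == p)
        for p in range(3)]

# With assumed CBR rates (illustrative; doubling this tier to 600 channels
# gives 2 x 2.25 = 4.5 Gbps, matching the paper's capacity figure), and
# assuming the HD/SD mix is spread evenly, per-port load stays under 1 Gbps:
HD_MBPS, SD_MBPS = 15.0, 3.75
tier_gbps = (100 * HD_MBPS + 200 * SD_MBPS) / 1000   # 2.25 Gbps for 300 ch
per_port_gbps = tier_gbps / 3                        # 0.75 Gbps per GE port
```

The contiguous-block assignment keeps the mapping stable and predictable, which is what distinguishes this scheme from the dynamic per-flow balancing of the enterprise data center.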
This technique achieves a well-defined load balancing of SDV video flows. At each processing stage, multicast groups associated with video programs are joined to predefined GE ports. Each processing element is assigned a workload of multicast groups by the resource management system.

Gaining Control over Massive Scaling

The SDV system should be able to scale to a larger number of channels without impact to the current SDV deployment. Figure 10 provides the estimated bandwidth if we doubled the number of SDV channels. Based on these calculations, the total bandwidth required to deliver 600 channels of SD and HD content in the SDV tier is 4.5 Gbps.

Figure 10 – Bandwidth required if the SDV service were doubled in capacity to include 200 HD and 400 SD programs.

To accommodate this additional capacity, extra video resources are required by the Video Data Center. Figure 11 highlights this growth in capacity, without interfering with the currently deployed SDV service. At each stage, additional multicast group addresses are assigned to the extra SDV channels. These new SDV multicast groups are drawn through the different switch ports, groomers, ad insertion, and encryption systems, ultimately providing an efficient and scalable technique to double the SDV service. Since each video processing stage is modular and utilizes clear network demarcation points, video resources within each stage can scale independently. The multicast control plane and resource management system simply direct new SDV streams to the new processing elements.

Figure 11 – Modular video resources and extra multicast groups are simply added to each processing stage to accommodate 600 SDV video channels.

4.0 IPTV INSERTION INTO THE VIDEO DATA CENTER

IPTV Insertion Strategy

The power and modularity of the Video Data Center are highlighted with the insertion of a future IPTV service. As described in Figure 12, IPTV video streams may require unique stream conditioning, ad insertion, and bulk encryption.
To accomplish this unique processing, a separate set of multicast group addresses (G-IPTV) is used to draw the new IPTV streams through additional video resource elements. As with other cable services, a single group address per IPTV stream follows that stream throughout the video network. The combination of resource modularity, network-based demarcations, and SSM-based flow control allows this insertion technique to proceed with minimal disruption to existing video services.
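One way to picture the separate G-IPTV address range is as a per-service block allocator, so new IPTV streams can never collide with groups already joined by deployed services. The specific address blocks and sizes below are assumptions for illustration only:

```python
# Hypothetical per-service SSM group allocator. Each service owns a
# disjoint block of group addresses; block bases/sizes are illustrative.
import ipaddress

SERVICE_BLOCKS = {
    "SDV":  ("232.1.0.0", 1024),   # existing service keeps its block
    "IPTV": ("232.2.0.0", 1024),   # separate G-IPTV block for the new service
}

def group_for(service, stream_index):
    """Return the SSM group address for a given stream of a given service."""
    base, size = SERVICE_BLOCKS[service]
    if not 0 <= stream_index < size:
        raise ValueError("stream index outside the service's group block")
    return str(ipaddress.ip_address(base) + stream_index)
```

Because the blocks are disjoint, drawing IPTV groups through their own groomers, ad inserters, and encryptors leaves the SDV and broadcast groups untouched.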
Figure 12 – The Video Data Center provides an IPTV insertion strategy with minimal disruption to already deployed services.

The IP early acquisition stage provides a common source of content for all cable services, including IPTV. As resource and element managers continue to improve, next-generation session-resource management systems can support both traditional cable services and newer IPTV systems, enabling further consolidation. This IPTV insertion strategy delivers a manageable insertion process while minimizing the impact to existing cable video services.

IP Early Acquisition

The next-generation Video Data Center will acquire large amounts of video content, sourced from diverse locations in a wide range of formats. Much of this content will be shared across multiple video services. The "IP Early Acquisition" model ensures that this diverse content is made available to all services in an IP-encapsulated format, at the earliest point in the acquisition process.

The goal of the IP Early Acquisition stage is to convert all streams acquired from diverse sources to an IP MPEG SPTS format. Figure 13 shows a typical headend video acquisition process in which content is acquired from satellite, off-air, and terrestrial video sources. RF video content from satellite and off-air sources is processed by a series of encoders and groomers that convert RF, SDI, and ASI video into IP MPEG SPTS streams.
Similarly, terrestrial video from remote sources is converted to an IP MPEG SPTS format. In this example, common MPEG video streams acquired from diverse sources are available for linear broadcast, SDV, VOD, and IPTV services.

Figure 13 – IP early content acquisition collects and transcodes video content to an IP MPEG format for use by Broadcast, SDV, VOD, and IPTV services.

5.0 ON-DEMAND SERVICES USING VIDEO DATA CENTER TECHNOLOGIES

Second-generation video-on-demand systems take advantage of data center content caching techniques and distributed video streaming to more efficiently absorb the massive scale and unpredictable loading of on-demand services. Figure 14 provides an example of an advanced on-demand network.

The intelligent IP infrastructure allows VOD content to be stored in large vault arrays at strategic locations across the network. On-demand vaults can provide a centralized and consolidated repository for large quantities of content, acquired from a wide range of sources to support both live and on-demand applications [5]. The ability to grow content storage independently from content streaming
is a critical enhancement, inspired by Web caching architectures.

Figure 14 – The VOD content caching model is similar to Web-based caching systems in which the most popular video content is efficiently cached near the network edge and delivered to the user via multi-format streamers.

In this distributed caching model, on-demand titles are distributed upon request using a tiered caching scheme. Cached video makes its way through the intelligent transport to video streamers located near the edge of the access network. As content streamers move closer to the edge, providers benefit from improved scaling, better video quality, and reduced transport costs. This distributed caching model has been shown to reduce transport network bandwidth by as much as 95%, since the most popular content is delivered once, cached, and reused across many requests for the same title.

Multi-format video streamers in the hub sites handle traditional MPEG video formats, but also support internet streaming formats such as Flash or Windows Media. These multi-format streaming improvements, coupled with in-home gateway devices, enable a single video infrastructure to support multiple streaming services to various in-home devices. Backup video streamers located within the regional headend can be positioned to handle unexpected peak periods when streamers in a specific hub site are saturated with requests.

When combined with a national network, an on-demand caching scheme allows for a significant consolidation of content ingest points. Advanced caching protocols enable real-time ingest and the delivery of content throughout a national footprint within a 250 ms period. HD bit rates are also supported at massive scale. Based on these data center design improvements, terrestrial content distribution will likely supplant the traditional on-demand "pitcher/catcher" distribution techniques of today.
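The bandwidth-saving claim rests on skewed title popularity: when a small head of the catalog accounts for most requests, a modest edge cache absorbs the bulk of the streaming load. A rough, self-contained model illustrates this; the catalog size, cache sizes, and Zipf exponent below are assumptions, not figures from the paper.

```python
# Rough illustration (assumed parameters): under Zipf-distributed title
# popularity, a hub cache holding only the most popular titles serves
# most on-demand requests, so far fewer streams cross the transport core.
def zipf_hit_ratio(catalog, cached, s=1.0):
    """Fraction of requests served from a cache holding the `cached`
    most popular of `catalog` titles, assuming Zipf(s) popularity."""
    weights = [1.0 / rank ** s for rank in range(1, catalog + 1)]
    return sum(weights[:cached]) / sum(weights)

# Hit ratio climbs steeply at the head of the popularity curve: caching
# even a few percent of a 10,000-title library captures most requests.
for pct in (1, 5, 20):
    cached = 10_000 * pct // 100
    print(f"{pct}% cached -> hit ratio {zipf_hit_ratio(10_000, cached):.2f}")
```

The exact savings depend on how closely real request patterns follow the assumed distribution, but the shape of the curve is why a 95%-class transport reduction is plausible for highly popular content.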
This networked infrastructure also provides a flexible platform for robust resiliency schemes, in which multiple copies of content are stored at, and accessed from, distributed locations. All of these elements work together through an intelligent network to more efficiently deliver the next massive wave of on-demand content.

Figure 15 – Advanced content caching protocols enable real-time ingest and real-time distribution of video content across a national footprint.

SUMMARY

Enterprise computing is in the midst of another evolutionary transformation, as data center architectures give way to "cloud computing" systems such as Amazon Dynamo [6]. Advances in this area are driving investments in advanced, highly scalable networking and distributed computing architectures. The expected increase in interactive video traffic is placing similar demands on cable
headends, which must deploy similarly scalable video delivery systems that adapt easily to growth in scale and performance. From this standpoint, the evaluation and application of data center technologies and practices can greatly benefit service providers' efforts to scale and grow their video delivery systems.

REFERENCES

[1] "Datacenter Reference Guide," Sun Microsystems, July 2007.

[2] "Cisco Data Center Infrastructure 2.5 Design Guide," Cisco Systems, 2007.

[3] A. Greenberg, P. Lahiri, D. Maltz, P. Patel, and S. Sengupta, "Towards a Next Generation Data Center Architecture: Scalability and Commoditization," Microsoft Research, 2008.

[4] S. Brin and L. Page, "The Anatomy of a Large-Scale Hypertextual Web Search Engine," 1998.

[5] S. Kotay (Comcast) and J. Pickens (Cisco), "Climbing Mount Everest," NCTA Technical Papers, 2008.

[6] G. DeCandia et al., "Dynamo: Amazon's Highly Available Key-Value Store," ACM Symposium on Operating Systems Principles, 2007.