
Rethinking Large Video Files in the Cloud: Strategies for Eliminating Bandwidth Bottlenecks


In March of this year, Zencoder, Aspera, Amazon Web Services and Netflix gathered in New York City to discuss cloud-based media workflow.

Netflix and Zencoder are built on AWS and use Aspera for high-speed file transfer to the cloud. All of these companies rely on the cloud for scalable storage and processing; however, for large files, the advantages of the cloud can be attenuated by sluggish transfer speeds over the open Internet.

This white paper presents four strategies for eliminating the bandwidth bottlenecks that crop up when using the cloud for transcoding. In the end, combining accelerated file transfer with parallel processing in the cloud results in a transcoding workflow that is up to 10x faster, and makes more efficient use of bandwidth, than on-premise solutions.


About Zencoder

Zencoder is the largest and fastest cloud-based encoding service in the world. Its products enable content providers to quickly transcode and publish video to consumers on virtually any Internet-connected device, including web, mobile, and TV. To learn more about Zencoder, visit http://zencoder.com or contact us at
Eliminating Bandwidth Bottlenecks

Bandwidth is a problem for high-bitrate video. Cloud-based transcoding has enormous advantages over on-premise transcoding: better ROI, faster speeds, and massive scalability. But professional video content is often stored at 30-100 Mbps (or more), resulting in very large files. Conventional wisdom holds that these files are too large to transfer over the public Internet.

Figure 1. Common professional video formats.

Format               Bitrate    Size (per hour)   Transfer time (lossy TCP)[1]
DNxHD 36             36 Mbps    15.8 GB           1.65 hours
ProRes 422, SD PAL   41 Mbps    18.0 GB           1.88 hours
AVC Intra 100        100 Mbps   43.9 GB           4.59 hours
DNxHD 220            220 Mbps   96.7 GB           10.09 hours
ProRes 4444 HD       330 Mbps   145.0 GB          15.14 hours

[1] Actual TCP transfer at 21.8 Mbps over a 1000 Mbps connection with 10 ms delay and 0.1% packet loss.

This problem becomes even worse when considering the size of an entire content library. If a publisher creates two hours of high-bitrate 50 Mbps video each day, they will have a library of 32,000 GB after two years. What happens if it becomes necessary to transcode the entire library for a new mobile device or a new resolution? Even though a scalable transcoding system can transcode 32,000 GB of content in just a few hours, moving that content over the public Internet at 100 Mbps would take over 30 days.

Fortunately, there are solutions to these problems, and major media organizations like Netflix and PBS are embracing cloud-based services. In this chapter of 12 Patterns of High Volume Video Encoding, we will discuss four techniques used by major publishers to eliminate these bandwidth bottlenecks and efficiently transcode video in the cloud.
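The figures above fall out of simple arithmetic. The sketch below (an illustrative check, using binary GB to match Figure 1's sizes) reproduces the AVC Intra 100 row and the two-year library estimate:

```python
def size_per_hour_gb(bitrate_mbps):
    """Storage for one hour of video at the given bitrate, in binary GB."""
    # Mbps * seconds -> megabits; / 8 -> megabytes; / 1024 -> GB
    return bitrate_mbps * 3600 / 8 / 1024

def transfer_hours(size_gb, throughput_mbps):
    """Hours to move size_gb over a link with the given effective throughput."""
    return size_gb * 1024 * 8 / throughput_mbps / 3600

# AVC Intra 100: ~43.9 GB per hour; ~4.59 hours over lossy TCP at 21.8 Mbps
print(round(size_per_hour_gb(100), 1))                         # 43.9
print(round(transfer_hours(size_per_hour_gb(100), 21.8), 2))   # 4.59

# Library growth: 2 hours/day of 50 Mbps video for two years (730 days)
library_gb = size_per_hour_gb(50) * 2 * 730
print(round(library_gb))  # 32080, i.e. roughly 32,000 GB
```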
1. Store video content close to video processing

The easiest way to eliminate bandwidth bottlenecks is to locate hosting and transcoding together. For example, if your transcoding system runs on Amazon EC2, and you archive your video with Amazon S3, you have free, near-instant transfer between storage and processing. (This isn't always possible, so if your storage and transcoding are in separate places, the next point will help.)

Fig 2. Time to Transfer 45 GB of Video (in hours): TCP 4.59; Cloud 0.10. Transfer time of 1 hour of 100 Mbit/s video. TCP achieves 22 Mbit/s transfer over a 1 Gbit/s line in typical network conditions (10 ms delay, 0.1% packet loss). In-cloud transfer represents tested speeds of 1 Gbit/s between Amazon S3 and Zencoder.

To eliminate bandwidth bottlenecks, store video close to transcoding.
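Figure 2's gap can be reproduced directly from its stated throughputs; a quick sketch (illustrative numbers from the figure, not a benchmark):

```python
# One hour of 100 Mbit/s video, expressed in megabits
megabits = 100 * 3600

tcp_hours = megabits / 21.8 / 3600    # lossy TCP over the public Internet
cloud_hours = megabits / 1000 / 3600  # tested in-cloud S3 -> transcoder rate

print(round(tcp_hours, 2))             # 4.59
print(round(cloud_hours, 2))           # 0.1
print(round(tcp_hours / cloud_hours))  # 46: co-location is ~46x faster
```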
2. Use accelerated file transfer

When transferring files over long distances, standard TCP transfer protocols like FTP and HTTP significantly under-utilize bandwidth. For example, a 100 Mbps connection may achieve only 10 Mbps of actual throughput over TCP, given a small amount of latency and packet loss. This is due to the structure of the TCP protocol, which scales back bandwidth utilization when it thinks the network is over-utilized. This is useful for general Internet traffic, because it ensures that everyone has fair access to limited bandwidth. But it is counter-productive when transferring large files over a dedicated connection.

When it is necessary to transfer high-bitrate content over the Internet, use accelerated file transfer technology. Aspera and other providers offer UDP-based transfer protocols, which perform significantly better than TCP under most network conditions.

If Aspera or other UDP-based file transfer technologies aren't an option, consider transferring files via multiple TCP connections to make up for some of the inefficiencies of TCP.

Fig 3. Time to Transfer 45 GB of Video (in hours): TCP 4.59; Aspera 0.20. Transfer time of 1 hour of 100 Mbit/s video. TCP achieves 22 Mbit/s transfer over a 1 Gbit/s line in typical network conditions (10 ms delay, 0.1% packet loss). Aspera achieves 509 Mbit/s transfer over the same network conditions.

To maximize bandwidth utilization, use file transfer technologies like Aspera, UDP, or multiple TCP connections.
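The multi-connection fallback can be sketched with HTTP range requests, where each worker pulls one slice of the file over its own TCP connection. This is a minimal illustration, not production code: the URL and file size would come from your own system, and the server is assumed to honor the `Range` header.

```python
import urllib.request
from concurrent.futures import ThreadPoolExecutor

def split_ranges(total_bytes, parts):
    """Divide [0, total_bytes) into contiguous (start, end) byte ranges."""
    step = -(-total_bytes // parts)  # ceiling division
    return [(i, min(i + step, total_bytes) - 1)
            for i in range(0, total_bytes, step)]

def fetch_range(url, start, end):
    # Each slice travels over its own TCP connection, so one lossy
    # connection's backoff no longer caps overall throughput.
    req = urllib.request.Request(url, headers={"Range": f"bytes={start}-{end}"})
    with urllib.request.urlopen(req) as resp:
        return resp.read()

def parallel_download(url, total_bytes, connections=8):
    with ThreadPoolExecutor(max_workers=connections) as pool:
        chunks = pool.map(lambda r: fetch_range(url, *r),
                          split_ranges(total_bytes, connections))
    return b"".join(chunks)
```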
3. Transfer once, encode many

For video to be viewable on multiple devices over various connection speeds, different video resolutions, bitrates, and codecs are needed. Many web and mobile publishers create 10-20 versions of each file. So when doing high-volume encoding, it is important that a file is only transferred once, and each transcode is then performed in parallel.

When using this approach, you can effectively divide the transfer time by the number of encodes to determine the net transfer time. For example, if transfer takes 6 minutes, but you perform 10 transcodes in the cloud, the net transfer required for each transcode is only 36 seconds.

Fig 4. Time to Transfer and Encode 10 Outputs (in hours): On-premise (serial); Cloud (TCP, parallel); Cloud (Aspera, parallel). Transfer and encoding time of 10 outputs of 1 hour of 50 Mbit/s video at 2x realtime encoding speed.

To achieve maximum efficiency, transfer a high quality file to the cloud only once, and then perform multiple encodes in parallel.
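The amortization above is simple enough to check directly; a sketch with the numbers from the text:

```python
def net_transfer_seconds(transfer_seconds, num_outputs):
    """Amortized transfer cost per output when one upload feeds every encode."""
    return transfer_seconds / num_outputs

# One 6-minute upload feeding 10 parallel transcodes
print(net_transfer_seconds(6 * 60, 10))  # 36.0 seconds per output
```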
4. Syndicate from the cloud after transcoding

Whether you transcode in the cloud or on-premise, some bandwidth is required. In one case, a high-bitrate mezzanine file is sent to the cloud for transcoding. In the other case, when transcoding on-premise, several transcoded files are sent directly to a CDN, publishing platform, or to partners like iTunes or Hulu. Both cases require outbound bandwidth, and in many cases, syndicating from the cloud requires less overall bandwidth than syndicating from an on-premise system.

For example, it is not uncommon for broadcast video to be syndicated at high bitrates. If a broadcaster uses a 100 Mbps mezzanine format, and then syndicates that content to five partners at 50 Mbps, it is clearly more efficient to send only the original file out of the network for transcoding, and let the transcoding system handle the other transfers.

Scenario A: high-bitrate syndication
  • Input file: 100 Mbps
  • Syndicated output: ∑(50 + 50 + 50 + 50 + 50) = 250 Mbps

In this scenario, 150 Mbps of egress bandwidth is saved by syndicating content from the cloud.

Fig 5. Comparing Bandwidth Requirements of On-Premise and Cloud Encoding
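The savings in Scenario A come from a one-line comparison: on-premise syndication pays egress for every output, while cloud syndication pays only for the mezzanine upload. A sketch:

```python
def egress_saved_mbps(input_mbps, output_mbps_list):
    """Egress saved by uploading one mezzanine instead of pushing all outputs."""
    on_premise = sum(output_mbps_list)  # every output leaves the building
    cloud = input_mbps                  # only the source file leaves
    return on_premise - cloud

# Scenario A: 100 Mbps mezzanine, five 50 Mbps syndication partners
print(egress_saved_mbps(100, [50] * 5))  # 150
```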
Not everyone syndicates high-bitrate content, of course. But even when encoding low-bitrate web and mobile video, multiple small files add up. The example below shows actual bitrates recommended for a major OTT video device, encoded to 10 bitrates, for both MP4 and HTTP Live Streaming.

Scenario B: low-bitrate syndication
  • Input file: 50 Mbps
  • Syndicated output: ∑(9 + 6 + 4.5 + 3.4 + 2.25 + 1.5 + 1.1 + 0.75 + 0.55 + 0.35 + 9 + 6 + 4.5 + 3.4 + 2.25 + 1.5 + 1.1 + 0.75 + 0.55 + 0.35) = 58.8 Mbps

Even in this scenario, sending a 50 Mbps file to the cloud requires less overall bandwidth than transcoding internally and delivering all 20 formats separately; and the original is maintained in the cloud for subsequent transcoding.

To save transfer bandwidth, syndicate content from the external encoding system.

Conclusion

While transferring high-bitrate video can be a challenge, the correct approach to cloud transcoding can mitigate these problems. High-volume publishers should follow these four basic guidelines:
  ‣ Store content in the cloud
  ‣ Use accelerated file transfer technology
  ‣ Ingest each file once to a parallel cloud transcoding system
  ‣ Syndicate directly from the cloud

By implementing these recommendations, media companies of all types can offload video processing to the cloud, and realize the benefits of scale, flexibility, and ROI provided by cloud transcoding.
Appendix: Bandwidth Growth

There is one important fundamental driver that is helping to solve the bandwidth problem: cheaper and wider bandwidth. Nielsen's Law of Internet Bandwidth has tracked accurately from 1983 to the present, and states that high-end Internet connection speeds increase by 50% per year. Video bitrates are growing at a slower rate, so sending high-bitrate content over the Internet will become less of a problem over time.

Fig 6. Average Internet Connectivity and Sample Streaming Bitrates

But it isn't enough to wait for Internet bandwidth to improve. The right architecture, covered in the body of this document, is still required to efficiently transcode high-bitrate content and large libraries.
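Nielsen's 50%-per-year figure compounds like interest; a sketch projecting a connection speed forward (illustrative numbers, not a forecast):

```python
def projected_mbps(current_mbps, years, annual_growth=0.5):
    """Nielsen's Law: high-end connection speed grows ~50% per year."""
    return current_mbps * (1 + annual_growth) ** years

# A 100 Mbps connection, five years out
print(round(projected_mbps(100, 5), 1))  # 759.4
```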