This session looks at the technical feasibility of using terrestrial delivery via the AWS cloud as an alternative to satellite transport. The key questions addressed include: can it be implemented without affecting the underlying media layers, and can it be architected for low cost at scale? The presentation includes sample architectures and design considerations for achieving content distribution within latency parameters comparable to satellite.
13. In a non-Cloud Solution …
Multiple datacenter footprints
High speed, costly IP transit
Local ops staff to manage infrastructure
A massive cap-ex outlay
Development staff to build this out
15. Edge Locations
Availability Zone
Region
Dallas (2)
St. Louis
Miami
Jacksonville
Los Angeles (2)
Seattle
Ashburn (3)
Newark
New York (3)
Dublin
London (2)
Amsterdam (2)
Stockholm
Frankfurt (2)
Paris (2)
Singapore (2)
Hong Kong (2)
Tokyo (2)
Sao Paulo
South Bend
San Jose
Palo Alto
Hayward
Osaka
Milan
Sydney
Madrid
Seoul
Mumbai
Chennai
Global Distribution Footprint
16. Massively Scalable Compute
Compute Intensive
Intel Xeon E5-2666 v3 (Haswell), optimized specifically for EC2
Memory Intensive
Lowest price point per GiB of RAM
GPUs
1,536 CUDA cores
4GB of video memory
Enhanced Networking
Higher PPS, lower network jitter, lower latency
IO Intensive
SSD Storage, EBS Optimized
High Storage
24 x 2000 GiB per instance
AMI
EBS / Instance Store
Amazon EC2 Instance
Size instance by application need
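"Size instance by application need" can be sketched as a small lookup that pairs media-workflow stages with the instance families named elsewhere in the deck (c4, g2, m3). The stage-to-family pairing itself is an illustrative assumption, not a prescription from the deck:

```python
# Hypothetical sizing helper: map a media-workflow stage to an EC2
# instance family. Families come from the deck; the mapping is a sketch.
STAGE_TO_FAMILY = {
    "egress":        "c4",  # compute intensive: high-capacity egress
    "gpu_transcode": "g2",  # 1,536 CUDA cores, 4 GB video memory
    "proxy":         "m3",  # general purpose: low-bitrate proxy encode
}

def pick_instance(stage, size="xlarge"):
    """Return an instance type string such as 'c4.xlarge'."""
    family = STAGE_TO_FAMILY[stage]
    return f"{family}.{size}"
```

A workflow definition can then name stages rather than hard-coding instance types, and sizes can be swapped per environment.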
17. Launch a CloudFormation stack with all the infrastructure resources for a specific project
Autoscale the stack as appropriate
Automated Infrastructure
AMI
CloudFormation Deploy Template
CloudFormation Terminate Template
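The deploy/terminate template pair can be sketched as a minimal CloudFormation template body built in Python. The resource name, parameter, and stack name below are hypothetical, and a real project template would also carry the Auto Scaling resources the slide mentions:

```python
import json

# Minimal CloudFormation template body as a Python dict (a sketch, not
# a full project stack). The AMI is passed in as a parameter so the
# same template serves multiple baked images.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Parameters": {
        "AmiId": {"Type": "String", "Description": "AMI baked for this project"}
    },
    "Resources": {
        "IngestInstance": {
            "Type": "AWS::EC2::Instance",
            "Properties": {"ImageId": {"Ref": "AmiId"}, "InstanceType": "c4.xlarge"},
        }
    },
}

template_body = json.dumps(template)
# boto3's CloudFormation client would deploy and later terminate the stack:
#   cfn = boto3.client("cloudformation")
#   cfn.create_stack(StackName="project-poc", TemplateBody=template_body)
#   cfn.delete_stack(StackName="project-poc")
```

Keeping deploy and terminate driven from the same template is what makes the stack transient: the whole footprint comes and goes as a unit.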
18. AWS Ecosystem (License included in hourly* pricing)
INGEST · STORE · MANAGE · SECURE · PROCESS · CREATE · MONETIZE · INTEGRATE · DELIVER
20. What if we evolved the second hop?
First Hop → Second Hop
Field Source / Encoder → Headend / Processing → Affiliate Spoke / Decoder
21. What if we evolved the second hop?
Approach
Up/downlink: dedicated and internet-based IP links
Direct Connect
For ‘uplink’
For stream consumption
Concerns
FEC
~500 ms + RTT latency
Second Hop
Headend / Processing → Affiliate Spoke / Decoder
Direct Connect
Secure VPN
Route53
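The "~500 ms + RTT" concern can be put into a back-of-envelope budget. The FEC buffer figure comes from the slide; the RTT used in the usage note is illustrative:

```python
# One-way delivery latency estimate for the evolved second hop:
# FEC buffering (from the deck's ~500 ms figure) plus network RTT.
def second_hop_latency_ms(rtt_ms, fec_buffer_ms=500.0):
    """Rough one-way latency budget in milliseconds."""
    return fec_buffer_ms + rtt_ms
```

For example, a 60 ms cross-country RTT yields a roughly 560 ms budget, which frames the comparison against the satellite second hop discussed later.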
22. Looking at Bandwidth & Transport
Satellite
$3,000–5,000/MHz/mo* (~$30k per 20 Mbit/s*), plus spoke costs
Fixed bandwidth ceiling
AWS
Bandwidth to deliver an HD stream: ~$500/mo*
Pay as you go model
FEC
Can be implemented at the UDP layer
ARQ, SRT, LD for jitter/latency/reliability
Sub-1 Gbps Direct Connect (100 Mbps)
Highly available stream ingest (1:1, 1:N)
Second Hop
Headend / Processing
Affiliate Spoke / Decoder
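A minimal sketch of the packet-level FEC idea that can ride on a UDP transport: one XOR parity packet per group lets the receiver rebuild a single lost packet. This assumes equal-length packets; production schemes (e.g. SMPTE 2022-1, or SRT's ARQ mentioned above) are considerably richer:

```python
# Toy XOR-parity FEC: one parity packet per group recovers one loss.
def xor_parity(packets):
    """XOR equal-length packets together to form a parity packet."""
    parity = bytearray(len(packets[0]))
    for pkt in packets:
        for i, b in enumerate(pkt):
            parity[i] ^= b
    return bytes(parity)

def recover(received, parity):
    """Rebuild at most one missing packet (marked None) from the parity."""
    missing = [i for i, p in enumerate(received) if p is None]
    if not missing:
        return list(received)  # nothing lost
    assert len(missing) == 1, "XOR parity recovers only a single loss"
    # XOR of all surviving packets plus the parity equals the lost packet.
    rebuilt = xor_parity([p for p in received if p is not None] + [parity])
    out = list(received)
    out[missing[0]] = rebuilt
    return out
```

The trade-off this illustrates is exactly the slide's latency concern: the receiver must buffer a full group before it can repair a loss, which is where the FEC portion of the delay budget comes from.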
23. Multi-hop Distribution with AWS
First Hop
Field Source / Encoder → Headend / Processing → Affiliate Spoke
Ingest
Fan Out
Egress
Scale Out
Multi-Region, Multi-AZ
Cellular
Internet
Direct Connect
Secure VPN
Internet
S3
Glacier
Route53
24. Multi-hop Distribution with AWS
Affiliate Spoke
Ingest
Fan Out
Egress
Multi-Region, Multi-AZ
Direct Connect
Secure VPN
S3
Glacier
Route53
Additional Workflows
Transient infrastructure
Templatize environments for quick POCs
Cloud bursting (utilizing on-prem)
Additional Regions
26. 10 Gbps Network, Placement Groups
Capacity plan for hundreds of live HD streams and contribution silos
Low latency high throughput
Combine with regional replication and Route53 for true nearest-neighbor latency
Highly Scalable Infrastructure
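A rough capacity check for the 10 Gbps placement-group network: how many HD streams one instance's NIC can carry. The 6 Mbps stream bitrate matches the POC described later; the 70% headroom factor is an assumption:

```python
# Back-of-envelope NIC capacity check. Bitrate per the deck's POC
# (6 Mbps 1080p60); headroom factor is an illustrative assumption.
def streams_per_nic(nic_gbps=10.0, stream_mbps=6.0, headroom=0.7):
    """Usable stream count after reserving (1 - headroom) for overhead."""
    return int(nic_gbps * 1000 * headroom // stream_mbps)
```

At 70% utilisation a single 10 Gbps NIC carries on the order of a thousand 6 Mbps streams, so "hundreds of live HD streams" fits on a small fleet with room for contribution silos.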
27. c4, g2, m3
High Capacity Egress
GPU Transcode
Ingest Encoder
Broadcast
Decode
Low Bitrate
Proxy
Fan out / fan in
Size workflow to compute
Flexible multi-format
HLS w/ CloudFront CDN
MPEG-UDP w/FEC
Dedicated Pipe
Multi-Path Distribution
28. Amazon Glacier (Life Cycle Policies)
Amazon S3
Segment media into S3
Periodically archive to Glacier
Time-windowed hot content with infinite cold store
Store/retrieve to local edit stations via high-speed partner appliances
Affiliates can make use of storage infrastructure (transcode)
Media Lifecycle Management
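"Periodically archive to Glacier" is expressed as an S3 lifecycle rule: segments stay hot in S3 for a window, then transition to cold storage. The 30-day window, prefix, and bucket name below are illustrative; the dict matches the shape boto3's `put_bucket_lifecycle_configuration` expects:

```python
# Sketch of a lifecycle policy for time-windowed hot content with an
# effectively infinite cold store. Window and prefix are assumptions.
lifecycle = {
    "Rules": [
        {
            "ID": "archive-media-segments",
            "Status": "Enabled",
            "Filter": {"Prefix": "segments/"},
            # After 30 days, segments move from S3 to Glacier.
            "Transitions": [{"Days": 30, "StorageClass": "GLACIER"}],
        }
    ]
}

# Applied with:
#   s3 = boto3.client("s3")
#   s3.put_bucket_lifecycle_configuration(
#       Bucket="media-archive", LifecycleConfiguration=lifecycle)
```

Because the policy runs inside S3, no fleet capacity is spent on archival, and affiliates reading from the same bucket see a consistent hot window.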
30. Proof of Concept
Deployed in one afternoon into AWS VPC
Coordinated cross-country by a team of 3: headend operations, en/decoder manufacturer, and AWS
6 Mbps 1080p60 MPEG-UDP w/ FEC (SRT) stream
Distribution over public internet
200 ms encoder-to-AWS and AWS-to-decoder latency
Lower measured latency than existing satellite 2nd hop
40 day ingress uptime with no video dropouts