A difficult problem for Amazon S3 users who work with large-scale data is how to consistently transfer ultra-large files and large sets of files at high speed over the WAN. Although a number of network transfer tools for S3 exploit its multipart APIs, most have practical limitations when transferring very large files, or large sets of very small files, to and from remote regions. Transfers can be slow, degrade unpredictably, and for the largest sizes fail altogether. Additional complications include resume, encryption at rest, encryption in transit, and efficient updates for synchronization.
Aspera has expertise and experience in tackling these problems and has created a suite of transport, synchronization, monitoring, and collaboration software that can transfer and store both ultra-large files (up to the 5 TB limit of an S3 object) and large numbers of very small files (millions of files under 100 KB) consistently fast, regardless of region.
In this session, technical leaders from Aspera explain how to achieve very large file WAN transfers and integrate them into mission-critical workflows across multiple industries. EVS, a media services provider to the 2014 FIFA World Cup Brazil, explains how it used Aspera solutions for high-speed, live video transport, moving real-time video data from sports matches in Brazil to Europe for AWS-based transcoding, live streaming, and file delivery. Sponsored by Aspera.
2. PRESENTERS
Michelle Munson
Co-founder and CEO, Aspera
michelle@asperasoft.com
Jay Migliaccio
Director, Cloud Services, Aspera
jay@asperasoft.com
Stephane Houet
Product Manager, EVS
s.houet@evs.com
AGENDA
• Quick Intro to Aspera
• Technology Challenges
• Aspera Direct-to-Cloud Solution
• Demos
• FIFA Live Streaming Use Case
• Q & A
8. (Diagram) Object storage over HTTP: objects are stored as key/value pairs and accessed over HTTP; a hash of the object URL, H(URL), maps each object to its replicas (R1, R2, R3) on the data nodes, while a master database maps each object ID to its file data replicas and stores the metadata. (A placement sketch follows below.)
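A minimal sketch of this kind of key-to-replica mapping, assuming a simple hash-ring placement; the node names and hashing scheme are illustrative and not the actual implementation of S3 or any particular object store:

```python
import hashlib
from bisect import bisect_right

class ReplicaPlacement:
    """Map an object URL to N replica data nodes via a hash ring (illustrative)."""

    def __init__(self, nodes, replicas=3, vnodes=64):
        self.replicas = replicas
        self.ring = sorted(
            (self._hash(f"{node}#{i}"), node)
            for node in nodes
            for i in range(vnodes)           # virtual nodes smooth the key distribution
        )

    @staticmethod
    def _hash(key):
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def locate(self, url):
        """Return the data nodes holding replicas R1..Rn for this object URL."""
        h = self._hash(url)                  # H(URL)
        idx = bisect_right([k for k, _ in self.ring], h) % len(self.ring)
        chosen, i = [], idx
        while len(chosen) < self.replicas:
            node = self.ring[i % len(self.ring)][1]
            if node not in chosen:           # pick distinct nodes for R1, R2, R3
                chosen.append(node)
            i += 1
        return chosen

placement = ReplicaPlacement(["node-a", "node-b", "node-c", "node-d"])
print(placement.locate("bucket/videos/match-final.mxf"))  # e.g. ['node-c', 'node-a', 'node-d']
```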
15. TRANSFER DATA TO CLOUD OVER WAN: EFFECTIVE THROUGHPUT
Multi-part HTTP (typical internet conditions: 50–250 ms latency, 0.1–3% packet loss)
• 15 parallel HTTP streams
• <10 to 100 Mbps depending on distance
Aspera FASP transfer over WAN to cloud
• Up to 1 Gbps*
• 10 TB transferred per 24 hours
* Per EC2 Extra Large instance, independent of distance
16. LOCATION AND AVAILABLE BANDWIDTH: AWS ENHANCED UPLOADER vs. ASPERA FASP
Montreal to AWS East (100 Mbps shared internet connection)
• AWS Enhanced Uploader: 30 minutes (7–10 Mbps)
• Aspera FASP: 3.7 minutes (80 Mbps)
• 9X speed-up
Rackspace in Dallas to AWS East (600 Mbps shared internet connection)
• AWS Enhanced Uploader: 7.5 minutes (38 Mbps)
• Aspera FASP: 0.5 minutes (600 Mbps)
• 15X speed-up
Other pains: the "Enhanced Bucket Uploader" requires a Java applet, very large transfers time out, there is no good resume for interrupted transfers, and no downloads.
17. EFFECTIVE THROUGHPUT & TRANSFER TIME FOR 4.4 GB / 15,691 FILES (AVERAGE SIZE 300 KB)
Location and available bandwidth: AWS HTTP multipart vs. Aspera ascp
New York to AWS East Coast (1 Gbps shared connection)
• AWS HTTP multipart: 334 seconds (113 Mbps)
• Aspera ascp: 107 seconds (353 Mbps)
• 3.3X speed-up
New York to AWS West Coast (1 Gbps shared connection)
• AWS HTTP multipart: 1,032 seconds (36 Mbps)
• Aspera ascp: 110 seconds (353 Mbps)
• 9.4X speed-up
EFFECTIVE THROUGHPUT & TRANSFER TIME FOR 8.7 GB / 18,995 FILES (AVERAGE SIZE 9.6 MB)
Location and available bandwidth: AWS HTTP multipart vs. Aspera ascp
New York to AWS East Coast (1 Gbps shared connection)
• AWS HTTP multipart: 477 seconds (156 Mbps)
• Aspera ascp: 178 seconds (420 Mbps)
• 2.7X speed-up
New York to AWS West Coast (1 Gbps shared connection)
• AWS HTTP multipart: 967 seconds (77 Mbps)
• Aspera ascp: 177 seconds (420 Mbps)
• 5.4X speed-up
21.
• Maximum-speed single-stream transfer
• Support for large files and directory sizes in a single transfer
• Network and disk congestion control provides automatic adaptation of transmission speed to avoid congestion and overdrive
• Automatic retry and checkpoint resume of any transfer from the point of interruption
• Built-in over-the-wire encryption and encryption at rest (AES-128)
• Support for authenticated Aspera docroots using private cloud credentials and platform-specific role-based access control, including Amazon IAM
• Seamless fallback to HTTP(S) in restricted network environments
• Concurrent transfer support scaling up to ~50 concurrent transfers per VM instance
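A minimal sketch of driving such a transfer from Python by shelling out to the ascp command-line client, assuming it is installed and an SSH key is authorized on the transfer server; the host, paths, ports, and rate shown are illustrative assumptions, and the exact options available should be checked against the installed ascp version:

```python
import subprocess

# Hedged sketch: upload a directory with ascp using checkpoint resume and a capped rate.
# All values below (key path, ports, rate, host, docroot) are illustrative assumptions.
cmd = [
    "ascp",
    "-i", "/home/user/.ssh/aspera_key",         # SSH private key used for authentication
    "-P", "33001",                               # TCP port for session setup
    "-O", "33001",                               # UDP port for FASP data transfer
    "-l", "500m",                                # target transfer rate (500 Mbps)
    "-k", "2",                                   # resume policy: resume partial transfers
    "/data/match-footage/",                      # local source directory
    "xfer@cloud-transfer.example.com:/uploads",  # remote user@host:docroot (hypothetical)
]
subprocess.run(cmd, check=True)                  # raises CalledProcessError on failure
```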
22. (Diagram) Auto-scaling transfer cluster: new clients connect to the "available" pool, existing client transfers continue on their current nodes, and a node whose utilization exceeds the high watermark no longer accepts new clients.
Console
• Collect / aggregate transfer data
• Transfer activity / reporting (UI, API)
Shares
• User management
• Storage access control
KEY COMPONENTS
• Cluster Manager for auto-scale and scaled DB
• Console management UI + reporting API
• Enhanced client for Shares authorizations
• Unified access to files/directories (browser, GUI, command line, SDK)
Scaling Parameters (see the sketch below)
• Min/max number of transfer servers
• Utilization low/high watermark
• Min number of transfer servers in the "available" pool
• Min number of idle transfer servers in the "available" pool
Management and Reporting
Cluster Manager
• Monitor cluster nodes
• Determine eligibility for transfer scale up / down
• Create / remove DB with replicas
• Add / remove nodes
Scale DB persistence layer
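A minimal sketch of the scale-up / scale-down decision implied by these parameters, assuming per-node utilization is reported as a 0–1 fraction; the names and thresholds are illustrative and not the actual Cluster Manager logic:

```python
from dataclasses import dataclass

@dataclass
class ScalingParams:
    min_nodes: int = 2            # min/max number of transfer servers
    max_nodes: int = 20
    low_watermark: float = 0.30   # utilization low/high watermark
    high_watermark: float = 0.80
    min_available: int = 1        # min transfer servers in the "available" pool
    min_idle_available: int = 1   # min idle transfer servers in the "available" pool

def plan_scaling(utilization: dict[str, float], p: ScalingParams) -> str:
    """Return 'scale_up', 'scale_down', or 'hold' for one evaluation cycle."""
    available = [n for n, u in utilization.items() if u < p.high_watermark]
    idle = [n for n, u in utilization.items() if u < p.low_watermark]

    # Scale up when too few nodes can still accept new clients.
    if (len(available) < p.min_available or len(idle) < p.min_idle_available) \
            and len(utilization) < p.max_nodes:
        return "scale_up"

    # Scale down when the pool is larger than needed and mostly idle.
    if len(idle) > p.min_idle_available and len(utilization) > p.min_nodes:
        return "scale_down"

    return "hold"

# Example: three nodes, one of them above the high watermark
print(plan_scaling({"node-1": 0.95, "node-2": 0.55, "node-3": 0.10}, ScalingParams()))
```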
26.
• Near-live experiences have highly bursty processing and distribution requirements
• Transcoding alone is expected to generate hundreds of varieties of bitrates and formats for a multitude of target devices
• Audiences peak at millions of concurrent streams and die off shortly after the event
• Near "zero delay" in the video experience is expected
• "Second screen" depends on near-instant access / instant replay, which requires reducing end-to-end delay
• Linear transcoding approaches simply cannot meet demand (and are too expensive for short-term use!)
• Parallel, "cloud" architectures are essential
• Investing in on-premises bandwidth for distribution is also impractical
• Millions of streams equals terabits per second (for example, 5 million concurrent streams at 2 Mbps is already 10 Tbps)
27. (Diagram) On-demand pipeline:
• Multi-screen capture and distribution by EVS
• Scale-out high-speed transfer (FASP) by Aspera
• Scale-out transcoding by Elemental
28.
• Belgian company
• +90% market share of sports OB trucks
• 21 offices
• +500 employees (+50% in R&D)
32. LIVE STREAMING: A REAL-TIME CONSTRAINT!
Live streams:
• 6 feeds @ 10 Mbps = 60 Mbps
• × 2 games at the same time
• × 2 for safety
= 240 Mbps
VOD multicam near-live replays:
• Up to 24 clips @ 10 Mbps = 240 Mbps
• × 2 games at the same time
= 480 Mbps
But a single TCP stream over this WAN is limited to:
Maximum throughput (bps) = TCP window size (bits) / latency (s)
= (65,535 bytes × 8) / 0.2 s = 2,621,400 bps ≈ 2.62 Mbps
WE NEED A SOLUTION! (A worked sketch of this calculation follows below.)
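A small worked sketch of this limit, assuming the default 64 KB TCP window with no window scaling and a 200 ms round-trip time, compared against the streaming requirement above:

```python
WINDOW_BYTES = 65_535        # default TCP receive window without window scaling
RTT_SECONDS = 0.2            # ~200 ms round-trip time between Brazil and Europe

# A single TCP stream can have at most one window of data in flight per round trip.
tcp_ceiling_bps = WINDOW_BYTES * 8 / RTT_SECONDS
print(f"Single TCP stream ceiling: {tcp_ceiling_bps / 1e6:.2f} Mbps")   # ~2.62 Mbps

# Requirement from the slide: live feeds plus near-live multicam replays.
live_bps = 6 * 10e6 * 2 * 2   # 6 feeds @ 10 Mbps, x2 games, x2 for safety = 240 Mbps
vod_bps = 24 * 10e6 * 2       # up to 24 clips @ 10 Mbps, x2 games         = 480 Mbps

streams_needed = (live_bps + vod_bps) / tcp_ceiling_bps
print(f"Plain TCP streams needed to keep up: {streams_needed:.0f}")      # ~275 streams
```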
34. 6 live streams
• HLS streaming of 6 HD streams to tablets & mobiles per match
+20 replay cameras
• On-demand replays of selected events from 20+ cameras on the field
+4,000 VoD elements
• Exclusive on-demand multimedia edits
35. (Diagram) On-demand pipeline (recap):
• Multi-screen capture and distribution by EVS
• Scale-out high-speed transfer (FASP) by Aspera
• Scale-out transcoding by Elemental
36. KEY METRICS: +27 TB of video data
(Total over 62 games / average per game)
• Transfer time (hours): 13,857 / 216
• GB transferred: 27,237 / 426
• Number of transfers: 14,073 / 220
• Files transferred: 2,706,922 / 42,296
< 14,000 hours of video transferred, over a WAN with 200 ms of latency and 10% packet loss.
37. Live streams: 660,000 minutes
Transcoded output: × 4.3 = 2.8 million minutes
Delivered streams: × 321 = 15 million hours
35 million unique viewers