
Protocols for Fast Delivery of Large Data Volumes


Protocols for Fast Delivery of Large Data Volumes. The latency-bandwidth tradeoff in the Internet; TCP, parallel TCP, and overlay networks.

Published in: Engineering


  1. Protocols for Fast Delivery of Large Data Volumes
     CS4482 High Performance Networking
     Dilum Bandara, Dilum.Bandara@uom.lk
     Some slides extracted from Dr. Dan Massey's CS557 class at Colorado State University
  2. High Data Volume Applications
     • High-definition video streaming
     • Ultra-high-definition video
     • Sensor networks
     • Video surveillance
     • Radar networks
     • Arrays of radio telescopes
     • Data transfer between grids/clouds
     • Data transfer from CERN
     • Virtual reality
     • Holographic 3D displays
     Some applications require ordered delivery; others don't.
     Sources: NAIC/Arecibo Obs/NSF; www.idrshare.com
  3. Latency-Bandwidth Tradeoff
     • Bandwidth keeps increasing: 10-100 Gbps networks
     • Latency is not decreasing: speed-of-light limitation
     • Small transfers are latency limited: telnet, ssh, chat messages, small file transfers
     • Large transfers are still bandwidth limited: bulk transfer of files
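The tradeoff on this slide follows from a first-order model of transfer time, roughly latency + size/bandwidth. A small sketch (the 100 ms RTT and 10 Gbps link are illustrative assumptions, and slow start and loss are ignored):

```python
# First-order transfer-time model: time ~ latency + size / bandwidth.
# Link parameters below are illustrative assumptions, not measurements.

def transfer_time(size_bytes, rtt_s, bandwidth_bps):
    """Bulk transfer time, ignoring slow start and packet loss."""
    return rtt_s + (size_bytes * 8) / bandwidth_bps

RTT = 0.1  # assumed 100 ms round-trip time

# A 1 KB interactive message: the latency term dominates.
small = transfer_time(1_000, RTT, 10e9)
# A 100 GB bulk dataset on the same 10 Gbps link: the bandwidth term dominates.
large = transfer_time(100e9, RTT, 10e9)

print(f"1 KB:   {small:.7f} s (latency term = {RTT} s)")
print(f"100 GB: {large:.1f} s (serialization term = {100e9 * 8 / 10e9:.1f} s)")
```

Upgrading the link speed barely helps the 1 KB transfer (its time is almost all RTT), while halving it would double the 100 GB transfer time; that is the asymmetry the slide describes.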
  4. Bulk Transfer
     Source: N. Tolia et al., "An Architecture for Internet Data Transfer," NSDI '06, 2006
  5. TCP Design Assumptions
     • Low physical link error rates, so packet loss = congestion signal
     • No packet reordering at the network (IP) level, so packet reordering = congestion signal
     • These design assumptions are challenged today:
       • Parallel networking hardware causes packet reordering
       • Dedicated links or reservations mean no congestion
       • Bulk transfers do not need streaming
  6. Parallel TCP
     • Create many parallel TCP connections to aggregate bandwidth
     Source: www.codeproject.com/Articles/28788/Distributed-Computing-in-Small-and-Medium-Sized-Of
  7. Parallel TCP (Cont.)
     [Figure: N parallel AIMD connections sharing a bottleneck of capacity c; Xi = send rate of connection i]
     Source: E. Altman, "Parallel TCP Sockets: Simple Model, Throughput and Validation," IEEE Infocom, 2006
  8. Parallel TCP – Pros & Cons
     Pros:
     • Aggregated bandwidth
     • More resilient to network-layer packet losses: only one of the k streams may experience a timeout
     • More aggressive behavior:
       • Slow start is faster: k × MSS
       • Recovery is faster than a single stream with a giant window: only one stream may experience a loss, so the multiplicative decrease is effectively 1/(2k) rather than 1/2
     • Can work around the maximum TCP buffer size limitation: k × buffer size
  9. Parallel TCP – Pros & Cons (Cont.)
     Cons:
     • Ideally, each connection should use a different path
     • Exploits TCP's fairness and can become unfair to other flows
     • Requires changes to applications to support parallel streams
     • May perform worse if the loss is due to congestion, and may add to the congestion
     • Selecting the optimum buffer size and number of streams is hard
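The "effective multiplicative decrease of 1/(2k)" claimed above can be checked with a toy model: k equal AIMD streams, where a single loss event halves exactly one stream's window (the model and its parameters are illustrative, not a protocol implementation):

```python
# Toy model: k parallel TCP streams, each with window W, sharing one transfer.
# A single loss halves one stream's window (standard multiplicative decrease);
# the other k-1 streams are untouched.

def aggregate_after_one_loss(k, window_per_stream):
    windows = [window_per_stream] * k
    windows[0] /= 2                     # one unlucky stream halves its window
    return sum(windows)

k, W = 8, 100.0
before = k * W
after = aggregate_after_one_loss(k, W)
decrease = (before - after) / before    # fractional drop of the aggregate
print(f"aggregate drops by {decrease:.4f}; 1/(2k) = {1 / (2 * k):.4f}")
```

With k = 8 the aggregate window shrinks by only 1/16 per loss instead of 1/2, which is exactly why parallel TCP is more aggressive than a single flow, and why it can be unfair when the loss was congestion affecting everyone.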
  10. Parallel TCP Performance
      Source: www.codeproject.com/Articles/28788/Distributed-Computing-in-Small-and-Medium-Sized-Of
  11. Scalable TCP (STCP; not to be confused with SCTP, the Stream Control Transmission Protocol)
      • Modifies TCP's congestion control algorithm
      • Each packet loss decreases the congestion window by a factor of 1/8 instead of standard TCP's 1/2
      • When packet loss stops, the rate is ramped up by adding one packet every 100 successful ACKs
      • Standard TCP increases by the inverse of the congestion window, so very large windows take a long time to recover
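The two update rules can be compared with a simple per-ACK/per-loss model (the 1/8 decrease and 0.01-per-ACK increase are the constants cited for Scalable TCP; this is a model, not a kernel implementation):

```python
# Congestion-window update rules: standard TCP (AIMD) vs. Scalable TCP (MIMD).
# All windows are in MSS-sized segments.

def std_tcp_ack(cwnd):
    return cwnd + 1.0 / cwnd        # congestion avoidance: +1 MSS per RTT

def std_tcp_loss(cwnd):
    return cwnd / 2                 # multiplicative decrease: halve

def stcp_ack(cwnd):
    return cwnd + 0.01              # Scalable TCP: +1 packet per 100 ACKs

def stcp_loss(cwnd):
    return cwnd * (1 - 1 / 8)       # Scalable TCP: shrink by only 1/8

def acks_to_recover(ack_fn, loss_fn, cwnd):
    """ACKs needed to climb back to `cwnd` after a single loss."""
    target, w, acks = cwnd, loss_fn(cwnd), 0
    while w < target:
        w, acks = ack_fn(w), acks + 1
    return acks

# Even at a modest 1,000-segment window, standard TCP needs far more ACKs
# to recover than Scalable TCP; the gap widens with window size.
print("standard TCP:", acks_to_recover(std_tcp_ack, std_tcp_loss, 1_000))
print("scalable TCP:", acks_to_recover(stcp_ack, stcp_loss, 1_000))
```

The standard-TCP recovery count grows roughly with the square of the window (it must add 1/w per ACK over hundreds of segments), while Scalable TCP's grows only linearly, which is the "very large windows take a long time to recover" point above.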
  12. TCP-friendly Rate Adaptation Based On Loss (TRABOL)
      • UDP: fast, best effort, insensitive to congestion
      • TCP: slow, reliable, sensitive to congestion
      • TRABOL: fast, best effort, sensitive to congestion
      • Depends on end-application/user feedback; the application/user specifies two rates:
        • Target Rate (TR)
        • Minimum Rate (MR)
      • Congestion control is similar to TCP's AIMD
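The idea on the slide, AIMD on the sending rate clamped between the user-specified MR floor and TR ceiling, can be sketched as follows (this is an illustrative sketch under those assumptions, not the paper's exact algorithm; the step size and rates are made up):

```python
# Sketch of TRABOL-style rate adaptation (illustrative, not the published
# algorithm): AIMD on the UDP sending rate, clamped to [MR, TR].

def adapt_rate(rate, loss_seen, mr, tr, step=1.0):
    if loss_seen:
        rate /= 2               # multiplicative decrease on loss feedback
    else:
        rate += step            # additive increase back toward the target
    return max(mr, min(rate, tr))   # never below MR, never above TR

MR, TR = 10.0, 100.0            # Mbps, illustrative values
rate = TR
for loss in [False, False, True, False, True, True, False]:
    rate = adapt_rate(rate, loss, MR, TR)
print(f"final rate: {rate:.2f} Mbps")
```

The MR clamp is what separates this from plain TCP behavior: the application keeps at least its minimum usable rate (e.g. for live radar data) instead of backing off toward zero, while the TR clamp stops it from exceeding what the application can use.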
  13. TRABOL (Cont.)
      Source: A. Trimmer et al., "Performance of High-Bandwidth TRABOL Protocol for Radar Data Streaming," IEEE Region 5 TPS Conference, 2006
  14. TRABOL Performance
  15. Application-aWare Overlay Networks (AWON)
      • Packets are marked based on application requirements
      • Packets are dropped in an application-aware manner
      • Multicast nodes send aggregated requests to source nodes
      Source: T. Banka et al., "An Architecture and a Programming Interface for Application-Aware Data Dissemination Using Overlay Networks," COMSWARE '07, 2007
  16. Application-Specific Data Sample Selection
  17. Content-Based Packet Marking
      ADU = Application Data Unit
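As a sketch of content-based marking and application-aware dropping (the field names and priority scheme are assumptions for illustration, not the AWON programming interface): the sending application marks each ADU with a priority, and an overlay node over budget drops the least important packets first:

```python
# Illustrative sketch of application-aware dropping at an overlay node.
# Names and priority levels are assumptions, not the AWON API.
from dataclasses import dataclass

@dataclass
class Packet:
    adu_id: int
    priority: int   # 0 = most important, assigned by the sending application

def drop_application_aware(queue, budget):
    """Keep at most `budget` packets, preferring low priority numbers."""
    keep = sorted(queue, key=lambda p: p.priority)[:budget]
    return sorted(keep, key=lambda p: p.adu_id)   # restore original order

queue = [Packet(i, prio) for i, prio in enumerate([0, 2, 1, 2, 0, 1])]
survivors = drop_application_aware(queue, budget=4)
print([p.adu_id for p in survivors])   # the two priority-2 packets are dropped
```

Contrast this with a normal router queue, which drops whatever arrives when the buffer is full; here the application's marking decides which ADUs survive, so a radar stream, for example, can lose redundant samples before critical ones.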
  18. On-The-Fly Data Selection
      • Compensation for lost packets
      • Select a packet from a higher rate
  19. AWON Performance
      Measurements on PlanetLab
  20. Other Solutions
      • XTP (Xpress Transport Protocol): fast and lightweight
      • RBUDP (Reliable Blast UDP): high-bandwidth, reliable
      • Tsunami: an improvement on RBUDP
      Source: http://cloudcomputingseminar.wordpress.com/2012/06/16/unit-2-grid-computing/
