A-exam talk

Slide notes
  • TCP is one of the core protocols of the Internet, providing reliable communication. Since Van Jacobson’s influential congestion control proposal in the late 1980s, network capacity has grown by orders of magnitude, from megabit links to fat gigabit links, yet the transport layer has not evolved with it and shows limited success on today’s networks.
  • In particular, TCP’s performance in long fat networks has been widely analyzed and criticized over the past decade. These are networks in which the product of bandwidth and delay is high. This product measures the maximum amount of data that can be in the pipe in one round-trip time, and any transport protocol should be able to handle such large amounts of outstanding data. Typical examples include the Abilene backbone for Internet2, the TeraGrid experimental facility, and National LambdaRail.
  • Before we discuss why TCP underperforms in such networks, it is important to review some basics. TCP provides reliable, in-order communication between end hosts. TCP is also congestion-aware: it uses slow start to conservatively estimate the available bandwidth, starting its window size from 1 segment; it uses additive increase to implicitly probe the network for additional bandwidth, growing its window by 1 segment per RTT; and on a loss it cuts back drastically to half the current window size. Though very conservative in probing for additional bandwidth, this protocol is known to converge to a fair and efficient throughput share across the flows in the network.
  • The graph shown here is the congestion window evolution of a TCP flow. It shows the slow start, AI, and MD phases of TCP. One big problem with TCP is its sensitivity to packet losses. Depending on the number of packet losses and the loss pattern, there could be multiple cutbacks at the same time or, worse, a timeout, after which TCP has to begin all over again from slow start. Transient congestion in the form of sudden packet bursts, or random losses possibly due to channel errors in the network, could also lead to TCP cutting back before it can saturate the pipe.
  • These problems get aggravated in LFNs. With window sizes as high as hundreds of MB, transient congestion and random losses result in early cutbacks, and ramping back up to the maximum capacity can take thousands of RTTs. To give you an idea, a TCP flow on a 10 Gbps link with 100 ms delay would yield only about 1/100th of the bandwidth as throughput even at a loss rate of 10^-6. As an aside, another problem with TCP is that its slow start mechanism is conservative, so even short web transfers of 25 to 30 segments may take three to four round trips to complete. Researchers in the late 90s came up with analytical proofs that showed TCP’s limitations in such networks and paved the way for a slew of solutions.
  • Let’s take a quick look at the existing solution space. This taxonomy is based on the kind of congestion signal delivered to the end host. The signal can be explicit, coming from the network, as in XCP, where end hosts get exact information from routers about how much to increase the congestion window. The signal can be implicit, in the form of the loss of one or more packets, as in CUBIC or HS-TCP. Or the signal can be an increase in the round-trip delay, as in FAST. Most of these protocols try to find a growth curve that is better than AIMD.
  • Primarily they try to ramp up quickly to fill the pipe while at the same time remaining friendly to TCP and other flows. But they still have issues: they can be quite unstable under transient congestion, which makes achieving both goals at the same time difficult. The explicit-feedback protocol XCP requires new router hardware, which makes practical deployment difficult.
  • Clearly we still have a problem to solve: we want near-100% utilization all the time without ever giving up TCP-friendliness. And this balance between aggression and fairness should be struck without any network support beyond what is already available.
  • We achieve this balance by splitting a flow into two subflows that handle these two goals separately: one carries the legacy TCP traffic and handles fairness, while the other sends packets into the network so as to fill the pipe quickly, with the extra requirement that the aggressive stream must not affect the TCP traffic in the network.
  • This requirement can be met if the legacy TCP subflow is strictly prioritized over the aggressive subflow 2. If this is ensured, the aggressive component will not affect the legacy traffic. This is easy to see from the congestion window evolution of a legacy TCP flow shown here. If there is only one flow in the network, the saturated region is where cwnd oscillates between W, the pipe capacity, and W+B, where B is the bottleneck buffer size. Strict prioritization of the aggressive flow does not disturb this congestion evolution; it only fills the troughs in the graph that dip below W. This prioritization can be achieved through DiffServ, already available in today’s routers.
  • Most of the benefits of this protocol are easy to identify. It is TCP-friendly, as I have already mentioned, and a PLT flow is at least as fair as TCP to other PLT flows. Another important advantage is that PLT overcomes TCP’s conservative slow start: the low-priority subflow can start with a window size much greater than 1, so it completes short flows faster than a regular TCP flow. It requires no new network support. Finally, the idea is independent of the congestion control scheme used in the high-priority subflow and can simply supplement that scheme’s throughput.
  • Let us now look at the PLT design briefly. The PLT sender consists of a scheduler that assigns packets to the respective subflows, which are handled by the high-priority and low-priority congestion modules, denoted HCM and LCM respectively. The LCM handles a lossy channel, since its packets can get lost or starved when the HCM saturates the pipe. Packets that cannot be retransmitted at the LCM are eventually retransmitted at the HCM. The sender gets exact information about which packets were lost and which were received.
  • The first design question one would ask is whether the LCM needs a congestion protocol at all. A simple no-holds-barred approach of blasting the channel would not only lead to congestion collapse in the network but also waste bandwidth at the links before the bottleneck. Over-aggression would also require maintaining huge outstanding windows. Hence some form of congestion control is necessary.
  • The LCM follows a simple loss-based congestion control scheme. In keeping with its aggressive nature, the LCM follows an MIMD approach. It also uses loss-rate-based control, which is more aggressive than simple loss-based congestion control: the LCM keeps ramping up as long as it incurs tolerable loss rates. Based on the average loss rate it derives from the ACKs from the receiver, and on a parameter that sets the maximum tolerable loss rate, the LCM decides whether to ramp up or bring down its congestion window.
  • PLT has this magic number mu: in effect, the maximum loss rate up to which the LCM remains aggressive. The choice of mu matters. Too high a value can waste bandwidth on the non-bottleneck links before the bottleneck link; too low a value makes the LCM less aggressive. A value typical of current loss rates in the network could be chosen. In an implementation, this value could of course be decided by kernel tuning.
  • Before continuing further, I would like to point out how the congestion control works the way we had originally wanted it to. The two graphs show the sender’s throughput evolution at the HCM and the LCM of a flow on a 250 Mbps bottleneck link. We see that whenever legacy TCP SACK drops its window, the LCM ramps up quickly and saturates the bottleneck. And when the HCM ramps back up, the low-priority queues starve and the LCM’s timeouts ensure that its cwnd ramps down to 0.
  • We have conducted ns-2 simulations to compare our protocol against TCP, FAST, and XCP. We use a bottleneck bandwidth of 250 Mbps in most of the simulations, with an RTT of 80 ms, giving a window size of 2500. We have run experiments with higher window sizes as well, but some of them did not scale, so we present only this case for completeness.
  • FAST is a delay-based congestion protocol that increases its congestion window multiplicatively as long as the delay does not build up, and reduces the increase factor when the delay rises. It converges very well as long as its parameter alpha is predicted well, but the tuning is a little tricky and depends on the number of flows in the network.
  • XCP, developed at MIT, requires explicit notification from the routers to the end hosts about the cwnd increment, with routers specially equipped to provide this functionality in a stateless manner. Its TCP-friendliness, however, is restricted to providing dynamic weighted fair queuing in routers, and its convergence is questionable under heavy transient congestion.
  • Having said this, let’s get back to our simulation study. The first topology that we consider is multiple flows sharing a single bottleneck of 250 Mbps.
  • This graph shows the effect of random losses on the throughput of a single flow. We find that PLT sustains 100% goodput as long as the loss rate in the network is below the target loss threshold. The other protocols, on the other hand, underperform, achieving less than 50% of the throughput at non-zero random loss rates.
  • The next graph is the frequency distribution of the completion times of short flows, which constitute a major part of Internet traffic. The flow sizes are Pareto-distributed with a mean of 25 packets. We find that most PLT flows finish within 1 or 2 RTTs, while the other protocols take at least 2 RTTs to complete their flows.
  • The next graph shows how PLT reacts to flows entering and leaving the network. There are 3 flows in the network whose arrival and exit times are shown in the diagram. The graph shows the bottleneck utilization and we find that a PLT flow is able to ramp up quickly as and when another flow leaves the network.
  • The next case that we consider is a complex topology with multiple bottlenecks. There are three flows for each bottleneck, and each link has a bandwidth of 250 Mbps. Transient congestion takes the form of on-off bursty UDP traffic on the central link.
  • The graph plots the aggregate goodput of the flows that cross the link carrying the UDP traffic. We find that PLT yields superior aggregate goodput compared to the other protocols.
  • An immediate application is in Virtual Private Networks, where service providers could offer the necessary prioritization. If that were indeed possible, then PEPs could be installed at the edges of subnets, running PLT connections between them and thereby improving performance. We would also like to explore this area with different applications.
  • In the wide area, PLT needs to be disabled if no priority queuing is available. This has to be detected in an end-to-end manner with negligible false negatives and positives.
  • One issue that I mentioned earlier when talking about aggressiveness was outstanding windows. Since the LCM is lossy and aggressive, it needs larger outstanding windows than TCP. We even found in our simulations that a restricted receiver window could sometimes prevent the HCM from expanding its window, in which case the LCM should cut back and let the HCM continue sending packets normally.
  • Another interesting aspect that I would like to point out is fairness. Strict prioritization gives us TCP-friendliness, but we also examined whether LCM subflows are fair to each other. Interestingly, as long as HCM traffic can saturate the bottleneck at some point in time, the LCM can provide coarse-grained fairness.
  • In the late 90s, researchers came up with analytical proofs of TCP’s flaws in long fat networks, which resulted in a slew of proposals for modified transport protocols.

Transcript

  • 1. Priority Layered Approach to Transport Protocol for Long Fat Networks. Vidhyashankar Venkataraman, Cornell University
  • 2. TCP: Transmission Control Protocol
    • TCP: ubiquitous end-to-end protocol for reliable communication
    • Networks have evolved over the past two decades
      • TCP has not
    • TCP is inadequate for current networks
    (Figure: NSFNet, 1991, 1.5 Mbps vs. the Abilene backbone, 2007, 10 Gbps)
  • 3. Long Fat Networks (LFNs)
    • Bandwidth delay product
      • BW X Delay = Max. amount of data ‘in the pipe’
      • Max. data that can be sent in one round trip time
    • High value in long fat networks
      • Optical networks, e.g., Abilene/I2
      • Satellite networks
      • E.g., two satellites with a 0.5 s RTT and a 10 Gbps radio link can have up to 625 MB in flight per RTT
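    As a quick check of the satellite example above, a small Python sketch (illustrative only) computes the bandwidth-delay product:

        # Bandwidth-delay product: the maximum data 'in the pipe' per RTT.
        def bdp_bytes(bandwidth_bps, rtt_seconds):
            """Return the bandwidth-delay product in bytes."""
            return bandwidth_bps * rtt_seconds / 8

        bdp = bdp_bytes(10e9, 0.5)            # 10 Gbps link, 0.5 s RTT
        print(f"{bdp / 1e6:.0f} MB per RTT")  # -> 625 MB, as on the slide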
  • 4. TCP: Basics
    • Reliability, in-order delivery
    • Congestion-aware:
      • Slow Start (SS): Increase window size (W) from 1 segment
      • Additive Increase Multiplicative Decrease (AIMD)
      • AI: Conservative increase by 1 segment/RTT
      • MD: Drastic cutback of window by half with loss
      • AIMD ensures fair throughput share across network flows
    (Figure: window size vs. time, showing the SS, AI, and MD phases)
  • 5. TCP’s AIMD revisited (adapted from Nick McKeown’s slides). Only W packets may be outstanding.
    • Rule for adjusting W
      • AI : If an ACK is received: W ← W+1/W
      • MD : If a packet is lost: W ← W/2
    (Figure: source and destination across a bottleneck, with a window-size trace showing SS, AI, and MD, an early cutback, multiple cutbacks, and a timeout; a code sketch of these rules follows)
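    A minimal sketch (not the authors’ code) of the window rules above, in Python; on_ack and on_loss are hypothetical per-event hooks:

        class AimdWindow:
            """TCP-style window: slow start, AI of 1 segment/RTT, MD of 1/2 on loss."""
            def __init__(self):
                self.cwnd = 1.0                # slow start begins at 1 segment
                self.ssthresh = float("inf")

            def on_ack(self):
                if self.cwnd < self.ssthresh:
                    self.cwnd += 1.0               # SS: +1 segment per ACK
                else:
                    self.cwnd += 1.0 / self.cwnd   # AI: W <- W + 1/W per ACK

            def on_loss(self):
                self.cwnd = max(self.cwnd / 2.0, 1.0)  # MD: W <- W/2
                self.ssthresh = self.cwnd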
  • 6. TCP’s inadequacies in LFNs
    • W ~ 10^5 KB or more in LFNs
    • Two problems
      • Sensitivity to transient congestion and random losses
      • Ramping back up to a high W takes a long time (AI)
    • Detrimental to TCP’s throughput
      • Example: 10 Gbps link, 100 ms RTT; a loss rate of 10^-5 yields only 10 Mbps throughput!
    • Another problem: Slow start: Short flows take longer time to complete
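    To make “ramping back up takes a long time” concrete, a rough back-of-the-envelope sketch (my own illustration, assuming roughly 1 KB segments):

        # How long additive increase needs to refill the pipe after one cutback.
        link_bps  = 10e9     # 10 Gbps bottleneck
        rtt       = 0.1      # 100 ms
        seg_bytes = 1000     # assumed segment size (~1 KB)

        pipe_segments = link_bps * rtt / (8 * seg_bytes)   # ~125,000 segments
        # After a loss, cwnd drops to half the pipe; AI adds 1 segment per RTT,
        # so refilling the other half takes about pipe_segments / 2 RTTs.
        recovery_rtts = pipe_segments / 2
        print(f"{recovery_rtts:.0f} RTTs = {recovery_rtts * rtt / 60:.0f} minutes")
        # -> 62500 RTTs, i.e. well over an hour to recover from a single cutback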
  • 7. Alternate Transport Solutions: congestion control in LFNs
    • Taxonomy based on the congestion signal delivered to the end host
      • Explicit notification from routers: XCP
      • Implicit, end-to-end (like TCP):
        • Loss as the congestion signal: CUBIC, HS-TCP, STCP
        • RTT increase as the congestion signal (queue builds up): FAST
    • General idea: a window growth curve ‘better’ than AIMD
  • 8. Problems with existing solutions
    • These protocols strive to achieve both:
      • Aggressiveness : Ramping up quickly to fill pipe
      • Fairness : Friendly to TCP and other flows of same protocol
    • Issues
      • Unstable under frequent transient congestion events
      • Achieving both goals at the same time is difficult
      • Slow start problems still exist in many of the protocols
      • Example:
        • XCP: Needs new router hardware
        • FastTCP, HS-TCP: Stability is scenario-dependent
  • 9. A new transport protocol
    • Need : “good” aggressiveness without loss in fairness
      • “good”: Near-100% bottleneck utilization
    • Strike this balance without requiring any new network support
  • 10. Our approach: Priority Layered Transport (PLT)
    • Separate aggressiveness and fairness: Split flow into 2 subflows
    • Send TCP (SS/AIMD) packets over subflow 1 (fair)
    • Blast packets to fill the pipe over subflow 2 (aggressive)
    • Requirement : Aggressive stream ‘shouldn’t affect’ TCP streams in network
    (Figure: Src1 to Dst1 across a bottleneck, with subflow 1 carrying legacy TCP and subflow 2 the aggressive stream)
  • 11. Prioritized Transfer
    • Sub-flow 1 strictly prioritized over sub-flow 2
    • Meaning: Sub-flow 2 fills pipe whenever 1 cannot and does that quickly
    • Routers can support strict priority queuing : DiffServ
      • Deployment issues discussed later
    (Figure: window size vs. time; subflow 2 fills the troughs below W, the pipe capacity, up to W+B, where B is the bottleneck buffer. A sketch of the priority marking follows.)
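    PLT does not mandate a particular marking scheme, but as an illustration, an end host on Linux could place the two subflows into different DiffServ classes through the standard socket API; the codepoints below are placeholders that depend on the operator’s configuration:

        import socket

        # Hypothetical DSCP codepoints; actual values depend on how the network
        # operator configures its priority queues. The ToS byte is DSCP << 2.
        TOS_HIGH = 0x2E << 2   # e.g. Expedited Forwarding for subflow 1
        TOS_LOW  = 0x08 << 2   # e.g. a low-priority class for subflow 2

        def make_subflow_socket(tos):
            """Create a UDP socket whose packets carry the given ToS/DSCP byte."""
            s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
            s.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, tos)  # Linux
            return s

        high_prio = make_subflow_socket(TOS_HIGH)   # subflow 1 (legacy TCP-like)
        low_prio  = make_subflow_socket(TOS_LOW)    # subflow 2 (aggressive LCM)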
  • 12. Evident Benefits from PLT
    • Fairness
      • Inter protocol fairness: TCP friendly
      • Intra protocol fairness: As fair as TCP
    • Aggression
      • Overcomes TCP’s limitations with slow start
    • Requires no new network support
    • Congestion control independence at subflow 1
      • Sub flow 2 supplements performance of sub flow 1
  • 13. PLT Design
    • Scheduler assigns packets to sub-flows
      • High-priority Congestion Module (HCM): TCP
        • Module handling subflow 1
      • Low-priority Congestion Module (LCM)
        • Module handling subflow 2
    • LCM is lossy
      • Packets could get lost or starved when HCM saturates pipe
      • LCM Sender knows packets lost and received from receiver
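    A minimal sketch of this design, under my own assumptions about the interfaces (the talk gives no code); hcm and lcm are hypothetical objects exposing can_send/send:

        from collections import deque

        class PltSender:
            """Toy PLT sender: the HCM gets packets first, the LCM fills the rest of the pipe."""
            def __init__(self, hcm, lcm):
                self.input_buffer = deque()   # packets from the application
                self.hcm = hcm                # high-priority module (legacy TCP behaviour)
                self.lcm = lcm                # low-priority, lossy module

            def enqueue(self, packet):
                self.input_buffer.append(packet)

            def schedule(self):
                # Hand packets to the HCM up to its congestion window...
                while self.input_buffer and self.hcm.can_send():
                    self.hcm.send(self.input_buffer.popleft())
                # ...and let the LCM push the remainder at low priority.
                while self.input_buffer and self.lcm.can_send():
                    self.lcm.send(self.input_buffer.popleft())

            def on_lcm_give_up(self, packet):
                # Packets the LCM cannot retransmit are eventually re-sent via the HCM.
                self.hcm.send(packet)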
  • 14. The LCM
    • Is naïve no-holds-barred sending enough?
      • No! Can lead to congestion collapse
      • Wastage of Bandwidth in non-bottleneck links
      • Outstanding windows could get large and simply cripple flow
    • Congestion control is necessary…
  • 15. Congestion control at LCM
    • Simple, Loss-based, aggressive
      • Multiplicative increase Multiplicative Decrease (MIMD)
    • Loss-rate based:
      • Sender keeps ramping up if it incurs tolerable loss rates
      • More robust to transient congestion
    • LCM sender monitors loss rate p periodically
      • Max. tolerable loss rate μ
      • p < μ  =>  cwnd ← α · cwnd  (MI, α > 1)
      • p >= μ  =>  cwnd ← β · cwnd  (MD, β < 1)
      • Timeout also results in MD
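    A small sketch of this rule; the MI/MD factors α and β and the starting window are illustrative values of my own, since the slides do not fix them:

        class LcmControl:
            """Loss-rate-based MIMD control for the low-priority subflow (LCM)."""
            def __init__(self, mu=0.01, alpha=2.0, beta=0.5):
                self.mu = mu         # maximum tolerable loss rate μ
                self.alpha = alpha   # MI factor (> 1), illustrative
                self.beta = beta     # MD factor (< 1), illustrative
                self.cwnd = 64.0     # the LCM may start well above 1 segment

            def on_loss_rate_sample(self, p):
                """Called periodically with the measured loss rate p."""
                if p < self.mu:
                    self.cwnd *= self.alpha   # ramp up while losses stay tolerable
                else:
                    self.cwnd *= self.beta    # back off once losses exceed μ

            def on_timeout(self):
                self.cwnd *= self.beta        # a timeout also triggers MD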
  • 16. Choice of μ
    • Too High: Wastage of bandwidth
    • Too Low : LCM is less aggressive, less robust
    • Decide from expected loss rate over Internet
      • Preferably kernel tuned in the implementation
      • Predefined in simulations
  • 17. Sender throughput in HCM and LCM. (Figure: throughput traces showing that the LCM fills the pipe in the desired manner, and that the LCM cwnd drops to 0 when the HCM saturates the pipe.)
  • 18. Simulation study
    • Simulation study of PLT against TCP, FAST and XCP
    • 250 Mbps bottleneck
    • Window size: 2500
    • Drop Tail policy
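    For reference, the window size of 2500 is just the bandwidth-delay product of the simulated bottleneck, assuming 1000-byte packets and the 80 ms RTT mentioned in the notes (the packet size is my assumption):

        bottleneck_bps = 250e6   # 250 Mbps
        rtt            = 0.08    # 80 ms
        pkt_bytes      = 1000    # assumed packet size

        window_pkts = bottleneck_bps * rtt / (8 * pkt_bytes)
        print(window_pkts)       # -> 2500.0 packets, matching the slide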
  • 19. FAST TCP
    • Delay-based congestion control for LFNs: Popular
      • Congestion signal: Increase in delay
    • Ramps up much faster than AI
      • If queuing delay builds up, the increase factor is reduced
    • Uses a parameter (alpha) to decide the reduction of the increase factor
      • Its ideal value depends on the number of flows in the network
    • TCP-friendliness scenario-dependent
      • Though equilibrium exists, difficult to prove convergence
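    For context, the published FAST TCP update has roughly the form below (my paraphrase of the FAST papers, not code from this talk); alpha is the parameter mentioned above and gamma a smoothing factor:

        def fast_window_update(w, base_rtt, avg_rtt, alpha, gamma=0.5):
            """One periodic FAST TCP window update (approximate form)."""
            target = (base_rtt / avg_rtt) * w + alpha   # aims to keep ~alpha packets queued
            return min(2 * w, (1 - gamma) * w + gamma * target)

        # With no queuing delay yet, the window grows quickly (bounded by 2x per update).
        print(fast_window_update(w=100, base_rtt=0.080, avg_rtt=0.080, alpha=200))  # -> 200.0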
  • 20. XCP: Baseline
    • Requires explicit feedback from routers
    • Routers are equipped to provide the cwnd increment
    • Converges quite fast
    • TCP-friendliness requires extra router support
  • 21. Single bottleneck topology
  • 22. Effect of random loss. (Figure: goodput vs. random loss rate. PLT sustains near-100% goodput as long as the loss rate is below μ; TCP, FAST, and XCP underperform at high loss rates.)
  • 23. Short PLT flows. (Figure: frequency distribution of flow completion times; flow sizes are Pareto-distributed with a maximum of 5 MB. Most PLT flows finish within 1 or 2 RTTs.)
  • 24. Effect of flow dynamics. (Figure: bottleneck utilization with 3 flows. When flows 1 and 2 leave, the remaining flow ramps up quickly; another flow’s arrival causes congestion in the LCM.)
  • 25. Effect of cross traffic
  • 26. Effect of Cross traffic
    • Aggregate goodput of flows
    • FAST yields poor goodputs even with low UDP bursts
    • PLT yields 90% utilization even with 50 Mbps bursts
  • 27. Conclusion
    • PLT: layered approach to transport
      • Prioritize fairness over aggressiveness
      • Supplements aggression to a legacy congestion control
    • Simulation results are promising
      • PLT robust to random losses and transient congestion
      • We have also tested PLT-Fast and results are promising!
  • 28. Issues and Challenges ahead
    • Deployability Challenges
      • PEPs in VPNs
      • Applications over PLT
      • PLT-shutdown
    • Other issues
      • Fairness issues
      • Receiver Window dependencies
  • 29. Future Work: Deployment (figure adapted from Nick McKeown’s slides)
    • How could PLT be deployed?
      • In VPNs, wireless networks
      • Performance Enhancing Proxy boxes sitting at the edge
    • Different applications?
      • LCM traffic could be a little jittery
      • Performance of streaming protocols/ IPTV
    (Figure: PEP boxes at the subnet edges with a PLT connection between them)
  • 30. Deployment: PLT-SHUTDOWN
    • In the wide area, PLT should be disabled if no priority queuing
      • Unfriendly to fellow TCP flows otherwise!
    • We need methods to detect priority queuing at bottleneck in an end-to-end manner
    • To be implemented and tested on the real internet
  • 31. Receive Window dependency
    • PLT needs larger outstanding windows
      • LCM is lossy: Aggression & Starvation
      • Waiting time for retransmitting lost LCM packets
    • Receive window could be bottleneck
      • LCM should cut back if HCM is restricted
      • Should be explored more
  • 32. Fairness considerations
    • Inter-protocol fairness: TCP friendliness
    • Intra-protocol fairness: HCM fairness
    • Is LCM fairness necessary?
      • LCM is more dominant in loss-prone networks
      • Can provide relaxed fairness
      • Effect of queuing disciplines
  • 33. Extra slides
  • 34. Analyses of TCP in LFNs
    • Some known analytical results
      • At loss rate p, p · (BW · RTT)^2 > 1  =>  small throughput
      • Throughput ∝ 1/RTT
      • Throughput ∝ 1/√p
    • (Padhye et al. and Lakshman et al.)
    • Several solutions proposed for modified transport
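    Plugging the 1/(RTT·√p) relation into the earlier 10 Gbps example gives the right order of magnitude (a rough Mathis-style estimate with an assumed 1500-byte MSS; the fuller Padhye model cited above typically yields a lower figure):

        import math

        def tcp_throughput_bps(mss_bytes, rtt, p, c=1.22):
            """Classic loss-based TCP throughput estimate: rate = C * MSS / (RTT * sqrt(p))."""
            return c * mss_bytes * 8 / (rtt * math.sqrt(p))

        rate = tcp_throughput_bps(mss_bytes=1500, rtt=0.1, p=1e-5)
        print(f"{rate / 1e6:.0f} Mbps out of a 10 Gbps pipe")   # a few tens of Mbps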
  • 35. Fairness
    • Average goodputs of PLT and TCP flows in small buffers
    • Confirms that PLT is TCP-friendly
  • 36. PLT Architecture. (Figure: sender and receiver applications above a socket interface; the PLT sender contains an input buffer, the HCM, the LCM, and an LCM retransmit buffer; the PLT receiver contains HCM-R and LCM-R; arrows show HCM and LCM packets, HCM ACKs, strong ACKs, and dropped packets.)
  • 37. Other work: Chunkyspread
    • Bandwidth-sensitive peer-to-peer multicast for live-streaming
    • Scalable solution:
      • Robustness to churn, latency and bandwidth
      • Heterogeneity-aware Random graph
      • Multiple trees provided: robustness to churn
    • Balances load across peers
    • IPTPS ’06, ICNP ’06