A-exam talk.


  • TCP is one of the core protocols of the Internet, providing reliable communication. Ever since its inception, with Van Jacobson’s influential late-80s paper proposing congestion control, network capacity has evolved by orders of magnitude from megabit links to fat gigabit links, but the transport layer still has not evolved, showing limited success on today’s networks.
  • In particular, TCP’s performance in long fat networks has been widely analyzed and criticized over the past decade. These are networks in which the product of bandwidth and delay is high. This product measures the maximum amount of data that can be sent into the pipe in one round-trip time, and any transport protocol should be able to handle such large amounts of outstanding data. Typical examples include the Abilene backbone for Internet2, the TeraGrid experimental facility, and National LambdaRail.
  • Before we discuss why TCP underperforms in such networks, it is important to review some basics. TCP provides reliable, in-order communication between end hosts. TCP is also congestion-aware: it uses slow start to conservatively estimate the bandwidth available in the network, starting its window size from one segment; it uses additive increase to implicitly probe the network for additional bandwidth, growing its window by one segment per RTT; and on a loss, it cuts back drastically to half the current window size. Though very conservative in probing for additional bandwidth, this protocol is known to converge to a fair and efficient throughput share across flows in the network.
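The slow-start and AIMD rules above can be sketched in a few lines. This is an illustrative toy model (function names and the per-ACK granularity are my own, not from the talk or a real TCP stack), with the window measured in segments:

```python
def on_ack(cwnd: float, ssthresh: float) -> float:
    """Grow the congestion window on each ACK received."""
    if cwnd < ssthresh:
        return cwnd + 1.0        # slow start: window doubles every RTT
    return cwnd + 1.0 / cwnd     # additive increase: +1 segment per RTT

def on_loss(cwnd: float) -> float:
    """Multiplicative decrease: cut the window to half on a loss."""
    return max(cwnd / 2.0, 1.0)
```

The +1/W per-ACK increment is what yields the +1 segment per RTT additive increase, since roughly W ACKs arrive per RTT.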
  • The graph shown here is the congestion-window evolution of a TCP flow. It shows the slow-start, AI, and MD phases of TCP. One big problem with TCP is its sensitivity to packet losses. Depending on the number of packet losses and the loss pattern, there could be multiple cutbacks at the same time or, worse, a timeout, where TCP has to begin all over again from slow start. Transient congestion in the form of sudden packet bursts, or random losses possibly due to channel errors, could also lead TCP to cut back before it can saturate the pipe.
  • These problems are aggravated in LFNs. With window sizes as high as hundreds of MB, transient congestion and random losses result in early cutbacks, and ramping back up to the maximum capacity can take thousands of RTTs. To give you an idea, a TCP flow on a 10 Gbps link with 100 ms delay would yield only 1/100th of the bandwidth as throughput, even at a loss rate of 10^(-6). As an aside, another problem with TCP is that its slow-start mechanism is conservative: even short web transfers of 25 to 30 segments may take three to four round trips to complete. Researchers in the late 90s produced analytical proofs of TCP’s limitations in such networks, paving the way for a slew of solutions.
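The 10 Gbps example can be sanity-checked with the well-known steady-state TCP throughput estimate (the "1/sqrt(p)" law referenced in the extra slides); the exact constant varies by model, so treat the result as an order-of-magnitude figure:

```python
from math import sqrt

def mathis_throughput_bps(mss_bytes: int, rtt_s: float, p: float) -> float:
    """Steady-state TCP throughput estimate (Mathis et al. form):
    rate ~ (MSS / RTT) * sqrt(3/2) / sqrt(p), in bits per second."""
    return (mss_bytes * 8 / rtt_s) * sqrt(1.5 / p)

# 10 Gbps link, 100 ms RTT, 1500-byte segments, loss rate 1e-6:
rate = mathis_throughput_bps(1500, 0.1, 1e-6)
# on the order of 100 Mbps, i.e. roughly 1/100th of the 10 Gbps capacity
```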
  • Let’s take a quick look at the existing solution space. This taxonomy is based on the kind of congestion signal delivered to the end host. The signal can be explicit from the network, as in XCP, where end hosts get exact congestion-window increments from routers. It can be implicit, in the form of loss of one or more packets, as in CUBIC or HS-TCP. Or it can be an increase in round-trip delay, as in FAST. Most of these protocols try to find a window growth curve that is better than AIMD.
  • Primarily, they try to ramp up quickly to fill the pipe while remaining friendly to TCP and other flows. But these protocols are quite unstable under transient congestion, which makes achieving both goals at the same time difficult. The explicit-congestion protocol XCP requires new router hardware, making practical deployment difficult.
  • Clearly, we still have a problem to solve: we want near-100% utilization all the time without sacrificing TCP-friendliness in any scenario. And this balance between aggression and fairness should be struck without any network support that is not already available.
  • We achieve this balance by splitting each flow into two subflows that handle these two goals separately: one carries legacy TCP traffic and handles fairness, while the other sends packets so as to fill the pipe quickly, with the extra requirement that the aggressive stream must not affect TCP traffic in the network.
  • This requirement can be met if the legacy TCP subflow is strictly prioritized over the aggressive subflow 2. If this is ensured, the aggressive component will not affect the legacy traffic. This can be seen in the congestion-window evolution of a legacy TCP flow shown here. If there is only one flow in the network, the saturated capacity is the region where cwnd oscillates between W, the pipe capacity, and W+B, where B is the bottleneck buffer size. Strict prioritization of the aggressive subflow does not disturb this evolution; it only fills the troughs in the graph that dip below W. This prioritization can be achieved through DiffServ, already available in today’s routers.
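The strict-priority behavior that routers must provide can be stated very compactly: low-priority packets are served only when the high-priority queue is empty. A minimal sketch (the function name is illustrative; real DiffServ schedulers operate per output link in the forwarding path):

```python
from collections import deque

def strict_priority_dequeue(high_q: deque, low_q: deque):
    """Pick the next packet to transmit: the low-priority queue
    is served only when the high-priority queue is empty."""
    if high_q:
        return high_q.popleft()
    if low_q:
        return low_q.popleft()
    return None  # link idle
```

Because the high-priority queue always drains first, legacy TCP traffic sees the same congestion-window evolution it would see without the aggressive subflow.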
  • Most of the benefits of this protocol are easy to identify. It is TCP-friendly, as I have already mentioned, and a PLT flow is at least as fair as TCP to other PLT flows. Another important advantage is that PLT overcomes TCP’s conservative slow start: the low-priority subflow can start with a window size much greater than one, completing short flows faster than a regular TCP flow. It requires no new network support. Finally, the idea is independent of the congestion control scheme used on the high-priority subflow and can simply supplement that scheme’s throughput.
  • Let us now look at the PLT design briefly. The PLT sender consists of a scheduler that assigns packets to the respective subflows, which are handled by the high-priority and low-priority congestion modules, denoted HCM and LCM respectively. The LCM handles a lossy channel, since its packets can get lost or starved when the HCM saturates the pipe. Packets that cannot be retransmitted at the LCM are eventually retransmitted at the HCM. The sender gets exact information about which packets were lost and which were received.
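One plausible way the sender-side scheduler could split packets is to fill the high-priority subflow up to its free window first and spill the remainder onto the low-priority subflow. This is a hypothetical sketch (the function and parameter names are mine; the talk does not specify the scheduling policy):

```python
def schedule(packets: list, hcm_slots: int, lcm_slots: int):
    """Hypothetical PLT scheduler: give the high-priority (TCP)
    subflow up to hcm_slots packets, then spill up to lcm_slots
    of the remainder onto the low-priority subflow."""
    hcm_pkts = packets[:hcm_slots]
    lcm_pkts = packets[hcm_slots:hcm_slots + lcm_slots]
    return hcm_pkts, lcm_pkts
```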
  • The first design question one would ask is whether the LCM needs a congestion protocol at all. A simple no-holds-barred approach of blasting the channel would not only lead to congestion collapse in the network but also waste bandwidth on the links before the bottleneck. Over-aggression would also require maintaining huge outstanding windows. Hence some form of congestion control is necessary.
  • The LCM follows a simple loss-based congestion control scheme. Given its aggressive nature, the LCM takes an MIMD approach. It also uses loss-rate-based control, which is more aggressive than simple loss-based control: the LCM keeps ramping up as long as it incurs tolerable loss rates. Based on the average loss rate derived from the receiver’s ACKs and a parameter that sets the maximum tolerable loss rate, the LCM decides whether to ramp its congestion window up or down.
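The loss-rate-based MIMD rule can be sketched as follows. The constants MU, ALPHA, and BETA are illustrative placeholders, not values from the talk:

```python
MU = 0.01      # maximum tolerable loss rate (the parameter mu)
ALPHA = 1.5    # multiplicative-increase factor, > 1
BETA = 0.5     # multiplicative-decrease factor, < 1

def lcm_update(cwnd: float, loss_rate: float, timeout: bool = False) -> float:
    """Ramp the LCM window up multiplicatively while the measured
    loss rate stays below mu; back off multiplicatively otherwise
    (a timeout also triggers multiplicative decrease)."""
    if timeout or loss_rate >= MU:
        return max(cwnd * BETA, 1.0)
    return cwnd * ALPHA
```

Tolerating a bounded loss rate, rather than reacting to every loss, is what keeps the LCM robust to the starvation losses it inevitably suffers when the HCM saturates the pipe.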
  • PLT has a key parameter, mu: in effect, the maximum loss rate up to which the LCM remains aggressive. The choice of mu matters. Too high a value wastes bandwidth on the non-bottleneck links before the bottleneck; too low a value makes the LCM less aggressive. A value typical of current loss rates in the network could be chosen; in an implementation, it could be set by kernel tuning.
  • Before continuing further, I would like to point out that the congestion control works the way we originally wanted it to. The two graphs show the sender’s throughput evolution at the HCM and the LCM of a flow on a 250 Mbps bottleneck link. Whenever legacy TCP SACK drops its window, the LCM ramps up quickly and saturates the bottleneck. And when the HCM ramps back up, the low-priority queues starve, and the LCM’s timeouts ensure its cwnd ramps down to zero.
  • We have conducted ns-2 simulations comparing our protocol against TCP, FAST, and XCP. Most simulations use a bottleneck bandwidth of 250 Mbps with an RTT of 80 ms, giving a window size of 2500 packets. We have run experiments with higher window sizes as well, but some did not scale, so we present only this case for completeness.
  • FAST is a delay-based congestion protocol that increases its congestion window multiplicatively as long as the delay does not build up, reducing the increase factor as delay rises. It converges very well as long as its parameter alpha is chosen well, but tuning alpha is a little tricky and depends on the number of flows in the network.
  • XCP, developed at MIT, requires explicit notification from routers to end hosts about the cwnd increment; routers are specially equipped to provide this functionality in a stateless manner. But its TCP-friendliness is restricted to providing dynamic weighted fair queuing in routers, and its convergence is questionable under high transient congestion.
  • Having said this, let’s get back to our simulation study. The first topology we consider is multiple flows sharing a single 250 Mbps bottleneck.
  • This graph shows the effect of random losses on the throughput of a single flow. We find that PLT sustains near-100% goodput as long as the loss rate in the network is below the target loss threshold. The other protocols, on the other hand, underperform, with less than 50% of the throughput at non-zero random loss rates.
  • The next graph is the frequency distribution of the completion times of short flows, which constitute a major part of Internet traffic. The flow sizes are Pareto-distributed with a mean of 25 packets. We find that most PLT flows finish within one or two RTTs, while the other protocols take at least two RTTs to complete their flows.
  • The next graph shows how PLT reacts to flows entering and leaving the network. There are three flows in the network, whose arrival and exit times are shown in the diagram. The graph shows bottleneck utilization, and we find that a PLT flow ramps up quickly whenever another flow leaves the network.
  • The next case we consider is a complex topology with multiple bottlenecks. There are three flows per bottleneck, and each link has a bandwidth of 250 Mbps. Transient congestion takes the form of on-off bursty UDP traffic on the central link.
  • The graph plots the aggregate goodput of the flows that cross the link carrying the UDP traffic. We find that PLT yields superior aggregate goodput compared to the other protocols.
  • An immediate application is in Virtual Private Networks, where service providers could provide the necessary prioritization. If that were possible, Performance Enhancing Proxies (PEPs) could be installed at the edges of subnets, running PLT connections between them and thereby improving performance. We would also like to explore this area with different applications.
  • In the wide area, PLT needs to be disabled if no priority queuing is available. This has to be detected in an end-to-end manner with negligible false negatives and false positives.
  • One issue I mentioned earlier when talking about aggressiveness was outstanding windows. Since the LCM is lossy and aggressive, it definitely needs larger outstanding windows than TCP. We even found in our simulations that a restricted receive window could sometimes prevent the HCM from expanding its window, in which case the LCM should cut back and let the HCM continue sending packets normally.
  • Another interesting aspect I would like to point out is fairness. Strict prioritization enables TCP-friendliness, but we also examined whether LCM subflows are fair to each other. Interestingly, as long as HCM traffic can saturate the bottleneck at some point in time, the LCM can provide coarse-grained fairness.
  • In the late 90s, researchers came up with analytical proofs of TCP’s flaws in long fat networks, which resulted in a slew of proposals for modified transport protocols.

    1. Priority Layered Approach to Transport Protocol for Long Fat Networks. Vidhyashankar Venkataraman, Cornell University
    2. TCP: Transmission Control Protocol
       • TCP: ubiquitous end-to-end protocol for reliable communication
       • Networks have evolved over the past two decades; TCP has not
       • TCP is inadequate for current networks
       • NSFNet: 1991 (1.5 Mbps); Abilene backbone: 2007 (10 Gbps)
    3. Long Fat Networks (LFNs)
       • Bandwidth-delay product: BW × Delay = max amount of data ‘in the pipe’, i.e. the max data that can be sent in one round-trip time
       • High value in long fat networks: optical (e.g. Abilene/I2) and satellite networks
       • E.g. two satellites, 0.5 s RTT, 10 Gbps radio link: can send up to 625 MB/RTT
    4. TCP: Basics
       • Reliability, in-order delivery
       • Congestion-aware:
         • Slow Start (SS): increase window size (W) from 1 segment
         • Additive Increase Multiplicative Decrease (AIMD)
         • AI: conservative increase by 1 segment/RTT
         • MD: drastic cutback of window by half on loss
         • AIMD ensures fair throughput share across network flows
       (Figure: window vs. time, showing the SS, AI, and MD phases)
    5. TCP’s AIMD revisited (adapted from Nick McKeown’s slide)
       • Only W packets may be outstanding
       • Rule for adjusting W:
         • AI: if an ACK is received: W ← W + 1/W
         • MD: if a packet is lost: W ← W/2
       (Figure: source-to-destination bottleneck; window-size trace showing SS, AI, MD, an early cutback, multiple cutbacks, and a timeout)
    6. TCP’s inadequacies in LFNs
       • W ~ 10^5 KB or more in LFNs
       • Two problems: sensitivity to transient congestion and random losses; ramping back up to a high W takes a long time (AI)
       • Detrimental to TCP’s throughput. Example: 10 Gbps link, 100 ms RTT; a loss rate of 10^-5 yields only 10 Mbps throughput!
       • Another problem: slow start; short flows take longer to complete
    7. Alternate Transport Solutions
       • Taxonomy based on the congestion signal delivered to the end host
       • Explicit: explicit notification from routers (XCP)
       • Implicit, loss-based: loss as the signal for congestion (CUBIC, HS-TCP, STCP)
       • Implicit, delay-based: RTT increase as the signal for congestion, as queues build up (FAST); end-to-end, like TCP
       • General idea: a window growth curve ‘better’ than AIMD
    8. Problems with existing solutions
       • These protocols strive to achieve both:
         • Aggressiveness: ramping up quickly to fill the pipe
         • Fairness: friendly to TCP and other flows of the same protocol
       • Issues:
         • Unstable under frequent transient congestion events
         • Achieving both goals at the same time is difficult
         • Slow-start problems still exist in many of the protocols
         • Examples: XCP needs new router hardware; FAST TCP and HS-TCP stability is scenario-dependent
    9. A new transport protocol
       • Need: “good” aggressiveness without loss in fairness (“good”: near-100% bottleneck utilization)
       • Strike this balance without requiring any new network support
    10. Our approach: Priority Layered Transport (PLT)
       • Separate aggressiveness and fairness: split the flow into 2 subflows
       • Send TCP (SS/AIMD) packets over subflow 1 (fair)
       • Blast packets to fill the pipe over subflow 2 (aggressive)
       • Requirement: the aggressive stream ‘shouldn’t affect’ TCP streams in the network
       (Figure: Src1 to Dst1 across a bottleneck; subflow 1 is legacy TCP, subflow 2 is the aggressive stream)
    11. Prioritized Transfer
       • Sub-flow 1 strictly prioritized over sub-flow 2
       • Meaning: sub-flow 2 fills the pipe whenever sub-flow 1 cannot, and does so quickly
       • Routers can support strict priority queuing: DiffServ (deployment issues discussed later)
       (Figure: window-size trace; sub-flow 2 fills the troughs between W, the pipe capacity, and W+B, pipe plus bottleneck buffer)
    12. Evident Benefits from PLT
       • Fairness: inter-protocol fairness (TCP-friendly); intra-protocol fairness (as fair as TCP)
       • Aggression: overcomes TCP’s limitations with slow start
       • Requires no new network support
       • Congestion control independence at subflow 1: sub-flow 2 supplements the performance of sub-flow 1
    13. PLT Design
       • Scheduler assigns packets to sub-flows:
         • High-priority Congestion Module (HCM): TCP; handles subflow 1
         • Low-priority Congestion Module (LCM): handles subflow 2
       • LCM is lossy: packets can get lost or starved when the HCM saturates the pipe
       • The LCM sender learns from the receiver which packets were lost and which were received
    14. The LCM
       • Is naive no-holds-barred sending enough? No!
         • Can lead to congestion collapse
         • Wastes bandwidth on non-bottleneck links
         • Outstanding windows could get large and simply cripple the flow
       • Congestion control is necessary
    15. Congestion control at LCM
       • Simple, loss-based, aggressive: Multiplicative Increase Multiplicative Decrease (MIMD)
       • Loss-rate based: the sender keeps ramping up if it incurs tolerable loss rates; more robust to transient congestion
       • LCM sender monitors loss rate p periodically, with max tolerable loss rate μ:
         • p < μ ⇒ cwnd = α·cwnd (MI, α > 1)
         • p ≥ μ ⇒ cwnd = β·cwnd (MD, β < 1)
         • A timeout also results in MD
    16. Choice of μ
       • Too high: wastage of bandwidth
       • Too low: LCM is less aggressive, less robust
       • Decide from the expected loss rate over the Internet: preferably kernel-tuned in an implementation; predefined in simulations
    17. Sender Throughput in HCM and LCM
       (Figure: LCM fills the pipe in the desired manner; LCM cwnd = 0 when the HCM saturates the pipe)
    18. Simulation study
       • Simulation study of PLT against TCP, FAST, and XCP
       • 250 Mbps bottleneck
       • Window size: 2500
       • Drop-tail policy
    19. FAST TCP
       • Delay-based congestion control for LFNs: popular; congestion signal is an increase in delay
       • Ramps up much faster than AI; if queuing delay builds up, the increase factor reduces
       • Uses a parameter to decide the reduction of the increase factor; the ideal value depends on the number of flows in the network
       • TCP-friendliness is scenario-dependent: though an equilibrium exists, convergence is difficult to prove
    20. XCP: Baseline
       • Requires explicit feedback from routers
       • Routers equipped to provide the cwnd increment
       • Converges quite fast
       • TCP-friendliness requires extra router support
    21. Single bottleneck topology
       (Figure: simulation topology)
    22. Effect of Random loss
       • PLT: near-100% goodput if the loss rate < μ
       • TCP, FAST, and XCP underperform at high loss rates
    23. Short PLT flows
       • Frequency distribution of flow completion times
       • Most PLT flows finish within 1 or 2 RTTs
       • Flow sizes Pareto-distributed (max size = 5 MB)
    24. Effect of flow dynamics
       • 3 flows in the network
       • Flows 1 and 2 leave; the remaining flow ramps up quickly
       • Congestion in the LCM due to another flow’s arrival
    25. Effect of cross traffic
       (Figure: multiple-bottleneck topology with bursty UDP cross traffic)
    26. Effect of cross traffic
       • Aggregate goodput of flows
       • FAST yields poor goodputs even with low UDP bursts
       • PLT yields 90% utilization even with 50 Mbps bursts
    27. Conclusion
       • PLT: a layered approach to transport
         • Prioritize fairness over aggressiveness
         • Supplements a legacy congestion control with aggression
       • Simulation results are promising
         • PLT is robust to random losses and transient congestion
         • We have also tested PLT-FAST and the results are promising!
    28. Issues and Challenges ahead
       • Deployability challenges: PEPs in VPNs; applications over PLT; PLT-shutdown
       • Other issues: fairness; receive-window dependencies
    29. Future Work: Deployment (figure adapted from Nick McKeown’s slides)
       • How could PLT be deployed? In VPNs and wireless networks; Performance Enhancing Proxy (PEP) boxes sitting at the edge, with a PLT connection between PEPs
       • Different applications? LCM traffic could be a little jittery; performance of streaming protocols/IPTV
    30. Deployment: PLT-SHUTDOWN
       • In the wide area, PLT should be disabled if there is no priority queuing; unfriendly to fellow TCP flows otherwise!
       • We need methods to detect priority queuing at the bottleneck in an end-to-end manner
       • To be implemented and tested on the real Internet
    31. Receive Window dependency
       • PLT needs larger outstanding windows: the LCM is lossy (aggression and starvation); waiting time for retransmitting lost LCM packets
       • The receive window could be the bottleneck: the LCM should cut back if the HCM is restricted; to be explored further
    32. Fairness considerations
       • Inter-protocol fairness: TCP friendliness
       • Intra-protocol fairness: HCM fairness
       • Is LCM fairness necessary? The LCM is more dominant in loss-prone networks; can provide relaxed fairness; effect of queuing disciplines
    33. EXTRA SLIDES
    34. Analyses of TCP in LFNs
       • Some known analytical results (Padhye et al. and Lakshman et al.):
         • At loss rate p, p·(BW·RTT)² > 1 ⇒ small throughputs
         • Throughput ∝ 1/RTT
         • Throughput ∝ 1/√p
       • Several solutions proposed for modified transport
    35. Fairness
       • Average goodputs of PLT and TCP flows with small buffers
       • Confirms that PLT is TCP-friendly
    36. PLT Architecture
       (Figure: sender and receiver apps over a socket interface; PLT sender with input buffer, HCM, LCM, and LCM retransmit buffer; PLT receiver with HCM-R and LCM-R; HCM packets, LCM packets, strong ACKs, dropped-packet reports, and HCM ACKs)
    37. Other work: Chunkyspread
       • Bandwidth-sensitive peer-to-peer multicast for live streaming
       • Scalable solution: robust to churn, latency, and bandwidth; heterogeneity-aware random graph; multiple trees for robustness to churn
       • Balances load across peers
       • IPTPS ’06, ICNP ’06