This paper was presented as part of the main technical program at IEEE INFOCOM 2011

Adaptive Delay-based Congestion Control for High Bandwidth-Delay Product Networks

Hyungsoo Jung∗, Shin-gyu Kim†, Heon Y. Yeom†, Sooyong Kang‡, Lavy Libman∗
∗School of Information Technologies, University of Sydney, NSW 2006, Australia
†School of Computer Science & Engineering, Seoul National University, Seoul, Korea
‡Division of Computer Science & Engineering, Hanyang University, Seoul, Korea
Email: {hyungsoo.jung, lavy.libman}, {sgkim, yeom}

978-1-4244-9920-5/11/$26.00 ©2011 IEEE

Abstract—The design of an end-to-end Internet congestion control protocol that could achieve high utilization, fair sharing of bottleneck bandwidth, and fast convergence while remaining TCP-friendly is an ongoing challenge that continues to attract considerable research attention. This paper presents ACP, an Adaptive end-to-end Congestion control Protocol that achieves the above goals in high bandwidth-delay product networks where TCP becomes inefficient. The main contribution of ACP is a new form of congestion window control, combining the estimation of the bottleneck queue size and a measure of fair sharing. Specifically, upon detecting congestion, ACP decreases the congestion window size by the exact amount required to empty the bottleneck queue while maintaining high utilization, while the increases of the congestion window are based on a "fairness ratio" metric of each flow, which ensures fast convergence to a fair equilibrium. We demonstrate the benefits of ACP using both ns-2 simulation and experimental measurements of a Linux prototype implementation. In particular, we show that the new protocol is TCP-friendly and allows TCP and ACP flows to coexist in various circumstances, and that ACP indeed behaves more fairly than other TCP variants under heterogeneous round-trip times (RTT).

I. INTRODUCTION

It is well recognized that the Additive Increase Multiplicative Decrease (AIMD) [1] congestion control algorithm employed by TCP [2]–[4] is ill-suited to High Bandwidth-Delay Product (HBDP) networks. As advances in network technology increase the prevalence of HBDP networks in the Internet, the design of an efficient alternative congestion control mechanism gains in importance. A good congestion control protocol aims to achieve both high utilization and fairness while maintaining low bottleneck queue length and minimizing the congestion-induced packet drop rate.

There have been many research efforts that have proposed a variety of protocols and algorithmic techniques to approach this goal, each with its own merits and shortcomings. These protocols can be classified into two categories: router-supported and end-to-end approaches. Router-supported congestion control schemes, like XCP [5], VCP [6], MLCP [7], and CLTCP [8], generally show excellent performance and efficient convergence in HBDP networks. However, the incremental rollout of such router-supported protocols remains a significant challenge, as it requires backward compatibility with legacy TCP flows. Accordingly, end-to-end congestion control algorithms are more attractive since they do not require any special support from routers. However, they still have an important requirement of TCP-friendliness; since a large portion of Internet traffic is generated by TCP flows, any new protocol should not gain an unfair advantage by leaving less bandwidth to other TCP flows than TCP itself would.

A. Related Work

Previous research efforts in end-to-end congestion control can be divided into two categories: Delay-based Congestion Control (DCC) and Packet loss-based Congestion Control (PCC). PCC performs congestion control reactively by considering extreme events (packet drops) only, while DCC attempts to make proactive decisions based on variations in RTT. We list the most prominent proposals from both categories below.

Jain first proposed DCC in [9]. In 1994, TCP-Vegas was proposed with the claim of achieving throughput improvement ranging from 37% to 71% compared with TCP-Reno [10], [11]. An innovative idea in Vegas is that it detects congestion by observing the change of a throughput rate and prevents packet losses proactively. Vegas increases or decreases the congestion window by a fixed amount in every control interval, regardless of the degree of congestion. FAST [12] adopts the minimum RTT to detect the network condition. Because RTT reflects the bottleneck queuing delay, this mechanism is effective in determining the network congestion status. However, use of the minimum of all measured RTTs creates a fairness problem [13], [14]. Moreover, as shown in [15], [16], the correlation between RTT increase and congestion that later leads to packet loss events can be weak; in particular, the RTT probe performed by TCP is too coarse to correctly predict congestion events [17].

PCC uses the loss of a packet as a clear indication that the network is highly congested and the bottleneck queue is full. Most TCP variants belong to this class, and control the congestion window based on the AIMD policy. Retaining the AIMD policy guarantees TCP-friendliness. However, the pure additive increase policy significantly degrades utilization in HBDP networks. To improve performance in this environment, many solutions have been proposed. HSTCP [18] extends standard TCP by adaptively setting the increase/decrease parameters according to the congestion window size. HTCP [19] employs a similar control policy, but modifies the increase parameter based on the elapsed time since the last congestion event. STCP [20] has a Multiplicative Increase Multiplicative Decrease (MIMD) control policy to ensure that the congestion window can be doubled in a fixed number of RTTs.
BIC [21] and CUBIC [22] focus on RTT fairness properties by adding a binary search and a curve-fitting algorithm to the additive increase and multiplicative decrease phases. LTCP [23] modifies the TCP flow to behave as a collection of virtual flows for efficient bandwidth probing, while retaining the AIMD features.

We point out that there exist alternative approaches and standards for Internet congestion control, such as DCCP [24] and TFRC [25], using equation-based methods that break away from TCP's concept of a congestion window altogether. However, our work focuses on proposing an improved yet backward-compatible congestion control protocol, and a detailed discussion of the pros and cons of equation- vs window-based congestion control is beyond the scope of this paper.

B. Motivation

Unlike router-supported approaches, both delay- and packet loss-based end-to-end approaches have a fundamental limitation in quantitatively recognizing the load status of a bottleneck link, which makes it difficult for them to achieve the goal of high utilization with fairness and fast convergence. The lack of detailed link information forces existing end-to-end protocols to take the following philosophy on congestion control: (1) probe spare bandwidth by increasing the congestion window until congestion occurs; and (2) decrease the window in response to an indication of congestion. We proceed to describe our motivation for possible improvements in each of these phases.

1) Acquiring available bandwidth: When there is no congestion event, end-to-end protocols actively probe and acquire available bandwidth. Ideally, we would like bandwidth to be probed as quickly as possible; this is especially important to a new flow entering the network and starting to compete for its share of the bottleneck capacity. On the other hand, a flow must not be too greedy in utilizing the spare bandwidth of competing flows, which means that flows already consuming their fair share of bandwidth should increase their congestion window slowly even when there are no packet drops.

From this perspective, TCP with its Additive Increase (AI) probing is very slow compared with other non-AI based protocols such as HSTCP, STCP, and CUBIC. Multiplicative Increase (MI) seems to be a more attractive method, because of its fast increase rate; however, the MI policy in many cases carries the hazard of throughput instability (i.e., large drops near the saturation point). The fundamental reason that probing speed and stability are hard to achieve simultaneously is the difficulty of measuring the instantaneous fairness among flows whose data rates change dynamically. In router-assisted approaches, such as XCP and VCP, routers continuously measure the degree of fairness for each flow and reflect it in the amount of positive feedback conveyed to each flow. This observation motivates us to apply the same approach in an end-to-end protocol. Specifically, we propose to use the Fairness Ratio (FR) metric (the ratio between the current bandwidth share of a flow and its fair share in equilibrium) to adjust the congestion window management; if a flow with a small FR value increases its congestion window more aggressively than one with a large FR, then the protocol will achieve fast convergence to a fair equilibrium.

2) Releasing bandwidth when congestion occurs: When informed of congestion, a flow should release its bandwidth by decreasing the congestion window. PCC protocols generally adopt the Multiplicative Decrease (MD) policy, reducing the window to a fraction (e.g. half) of its size upon a congestion event. This leads to under-utilization of the bottleneck link when the capacity of its buffer is not large enough (e.g. less than 100% of the BDP). DCC protocols fare little better, due to the slow responsiveness of the RTT as an indicator of the bottleneck buffer length. Clearly, if one can accurately estimate the number of packets in the bottleneck queue, then the congestion window size can be decreased by the exact amount necessary when the queue starts to grow (even before packets are lost) and lead to a perfect utilization. Motivated by this observation, we propose to base the congestion window management during the decreasing phase on the gap between the sending rate (throughput) and receiving rate (goodput) of the flow, or, in other words, the difference between the number of packets sent and received in the duration of an RTT. This control variable provides much more timely and fine-grained information about the buffer status than merely variations of the RTT itself, and can be implemented with little cost by leveraging unused fields in the TCP header.

C. Contributions of This Work

In this paper, we describe a complete congestion control algorithm based on the idea of using the gap between receiving and sending rates, or the goodput-throughput (G-T) difference, as a control variable for management of the congestion window. The G-T difference has been introduced in our earlier work [26], which proposed the TCP-GT protocol and showed its advantages in terms of high utilization and fast convergence. However, TCP-GT did not concern itself with the issues of TCP-friendliness and fairness, especially among flows with heterogeneous RTT values. In this paper, we build upon and significantly extend our previous work to show how the goodput and throughput information can be used to estimate the Fairness Ratio (FR) of the flow, leading to a fast and precise estimation of its fair equilibrium bandwidth share. Consequently, we design a novel end-to-end adaptive congestion control protocol, ACP, which achieves the goals of high utilization and fast convergence to a fair equilibrium, and can be readily implemented in practice by leveraging the existing TCP's optional header fields. We demonstrate the superior performance of ACP under a wide range of scenarios, using both simulation and experimental measurements from its implementation in Linux.¹ In particular, we show that ACP flows and TCP flows share the bandwidth fairly if their RTTs are comparable, and even with different RTTs, ACP flows exhibit a fairer behavior towards TCP flows than other protocols in the literature.

¹We emphasize that the source code of the ACP simulation and Linux implementation is openly available.
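The goodput-throughput gap described above is simple enough to illustrate in a few lines. The following is our own toy sketch, not the authors' implementation; the function and variable names are invented for clarity:

```python
# Toy sketch of the goodput-throughput (G-T) difference that ACP uses as a
# congestion signal. Illustrative only; names are ours, not the ACP source.

def gt_difference(pkts_sent, pkts_acked, epoch_len):
    """Return Phi = goodput - throughput over one epoch, in packets/sec.

    pkts_sent:  packets the sender transmitted during the epoch.
    pkts_acked: packets the receiver reported as delivered during the epoch.
    epoch_len:  epoch duration in seconds.
    """
    throughput = pkts_sent / epoch_len   # sending rate
    goodput = pkts_acked / epoch_len     # receiving rate
    return goodput - throughput

# 60 of 1000 packets were absorbed by the bottleneck queue this epoch,
# so the signal is negative: congestion is building up.
phi = gt_difference(pkts_sent=1000, pkts_acked=940, epoch_len=0.2)
assert phi < 0
```

A negative value means the epoch's packets entered the bottleneck faster than they left it, i.e. the queue grew; a positive value means the queue drained.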
II. PROTOCOL DESCRIPTION

In this section, we provide a detailed description of ACP. We begin by briefly revisiting the measurement process defined in [26], and then describe the estimation of two values at the core of the ACP design: the queue growth and the fairness ratio. Importantly, these are designed to be independent of RTT heterogeneity, which is the root cause of the RTT unfairness problem. We then describe how these values are used in the various states/phases of the protocol.

A. Goodput, throughput and delay variation measurement

ACP measures the sending and receiving rate (throughput and goodput, respectively) by counting the number of packets within a predetermined time duration at the sender, which we call an epoch. In addition, an ACP sender keeps track of variations in the forward packet delay, t_d, during the epoch. All the above information is obtained with the help of the receiver, which timestamps each received packet and conveys the timestamps back to the sender, piggybacked in the acknowledgments.

As we shall explain below, the forward delay variations are instrumental in the estimation of the bottleneck queue length, while the difference between goodput and throughput in an epoch (denoted by Φ, Φ = goodput − throughput) indicates the level of congestion in the network. Indeed, in a congested network, the bottleneck link receives packets at a rate higher than its capacity, which is the limit on the maximum goodput at the receiving side. Therefore, when a network gets congested, throughput becomes larger than goodput (Φ < 0), and the forward delay t_d increases. On the other hand, when congestion is being relieved and packets are emptied from the bottleneck buffer at a higher rate than new packets arrive, goodput becomes larger than throughput (Φ > 0). The absolute value of Φ corresponds to the severity of congestion. For further details on the goodput, throughput and delay variation measurement process and its properties and benefits, see [26].

B. Estimation of Link Information

We use the delay variation measurement to estimate two types of link information: the queue growth at the bottleneck and the flow's degree of fairness in sharing the bandwidth. All the estimations are conducted using a common control period independent of the flow's RTT. We denote this common control period as t_c.

1) Queue Growth Estimation: The estimation of the number of buffered packets in a bottleneck queue is essential for achieving the goal of high link utilization during periods of congestion, since it enables the sender to reduce the congestion window as much as possible without allowing the queue to underflow. This is similar to the efficiency control in XCP, which is achieved with the help of explicit router feedback [5].

Consider, initially, a network with a single bottleneck link and a single flow with a round trip delay of RTT. Suppose that, during a period of congestion (i.e. the bottleneck queue is non-empty and the link is transmitting at its maximum capacity), the sender increases the congestion window cwnd to cwnd + ∆cwnd over a period of one RTT, corresponding to a throughput of (cwnd + ∆cwnd)/RTT. Since the goodput is limited by the bottleneck capacity, the queueing delay must grow by some amount, ∆RTT, so as to limit the goodput to (cwnd + ∆cwnd)/(RTT + ∆RTT).² Thus, from ∆cwnd (which is known) and ∆RTT (which is measured by the sender), one can obtain Q_RTT, the queue growth (i.e. number of additional packets enqueued) during the RTT period, as follows:

    Q_RTT = (cwnd + ∆cwnd)/(RTT + ∆RTT) · ∆RTT    (1)

We further generalize equation (1) to allow the estimation of queue growth during an arbitrary period T, denoted by Q_T. This involves using the goodput and delay variation (on the right-hand side of (1)) over a period of T, instead of over a single RTT. Denote the goodput measured over a period T by G_T, the total number of packets sent during T by P_T, and the corresponding increase in forward queuing delay at the bottleneck by ∆t_d. The quantities P_T and ∆t_d are obtained by the ACP measurement process, and from them we can find G_T and the value of Q_T as follows:

    G_T = P_T/(T + ∆t_d),    Q_T = G_T · ∆t_d    (2)

2) Fairness Ratio Estimation: The purpose of the fairness ratio estimation is to speed up the convergence to a fair bandwidth allocation, by having flows increase their congestion window more aggressively when their current bandwidth share is below its fair level. We first define the real fairness ratio as follows.

Definition 1: Suppose there are n flows in a bottleneck link whose capacity is C. Then the real fairness ratio of flow i, denoted by F_i, is the ratio of the throughput of flow i at time t, w_i(t), to its fair share of link capacity, C/n:

    F_i = n · w_i(t) / C

We now illustrate the intuition behind the fairness ratio estimation. Consider a single bottleneck link traversed by n flows with the same RTT that is equal to t_c. At the onset of congestion, each flow i has a throughput of w_i(t) = cwnd_i(t)/t_c. If any flow increases its congestion window, the excess packets will join the queue, causing an increase in the queuing delay. Suppose that all flows increase the congestion window by the same amount ∆cwnd at the same time; then, at the end of one RTT, the bottleneck queue will have n · ∆cwnd packets queued up. However, the actual number of queued packets belonging to each flow will not be the same unless all flows had the same congestion window size originally. Specifically, the number of new packets each flow contributes to the queue is proportional to w_i(t) / Σ_{j=1}^{n} w_j(t). This observation can be used to estimate the fairness ratio. If a flow i increases the congestion window when the link is fully utilized, it expects to see an increase in the queuing delay; if the link is shared perfectly fairly, this increase should be the same as if the flow had been alone in a bottleneck link of capacity equal to w_i(t).

²For simplicity, we assume that the entire increase of ∆RTT corresponds to the forward (data) queuing delay only, while the reverse (ACK) queuing delay, if any, is stable and does not change during congestion.
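The queue-growth estimate of equations (1)-(2) can be sketched in a few lines. This is a hypothetical illustration under the paper's assumptions, not the ACP source code; names are ours:

```python
# Illustrative sketch of ACP's queue-growth estimation, equations (1)-(2).

def queue_growth(pkts_sent, period, delta_td):
    """Estimate Q_T, the packets this flow added to the bottleneck queue.

    pkts_sent: P_T, packets sent during the measurement period.
    period:    T, the measurement period in seconds.
    delta_td:  increase in the forward (one-way) queuing delay over the
               period, obtained from receiver timestamps.
    """
    goodput = pkts_sent / (period + delta_td)  # G_T = P_T / (T + delta_td)
    return goodput * delta_td                  # Q_T = G_T * delta_td

# Example: 5000 packets sent over T = 0.2 s while the forward delay grew
# by 10 ms; roughly 238 packets are estimated to be sitting in the queue.
q_t = queue_growth(pkts_sent=5000, period=0.2, delta_td=0.01)
```

Note that with ∆t_d = 0 the estimate is zero, matching the intuition that a stable forward delay means the queue is not growing.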
Therefore, by comparing the actual delay increase with the expected one, the flow can deduce the status of the link. If the observed delay increase is greater than expected, the flow is currently using more than its fair share, and conversely a smaller than expected delay increase indicates a throughput below the fair share of the flow.

Based on the above intuition, we now define the estimated fairness ratio for flow i as the ratio of the queue growth to the window increase:

    F̂_i = Q_tc^i / ∆cwnd    (3)

where Q_tc^i is the queue growth during a single t_c for flow i, estimated as described by equation (2). The validity of the estimation F̂_i is established by the following theorem.³

Theorem 1: Assume that a bottleneck link is fully saturated with traffic from ACP flows (and no other traffic), and that all ACP flows increase their window by the same amount in a control period t_c. Then, for every ACP flow i, the estimated fairness ratio is equal to the real fairness ratio during the control period: F̂_i = F_i.

Thus, a sender whose current sending rate is relatively slow will find, upon increasing its congestion window, that the corresponding Q_tc is smaller than ∆cwnd, so the ratio Q_tc/∆cwnd is less than 1. We discuss how increasing the congestion window is based on the fairness ratio in the next section.

³Due to space constraints, the proofs of all theorems are omitted from this paper and can be found in [27].

C. Flow States and the Congestion Window Control Policy

As explained above, the congestion window control policy of ACP is based on two parameters. The first is the goodput-throughput difference Φ, the sign of which indicates whether or not the network is congested; this is similar to the load factor congestion signal proposed in [28] and computed by the routers in XCP/VCP, except that here we estimate it using end-to-end measurements only. The second is the fairness ratio estimate of the flow, F̂. We now proceed to describe in detail the actions of ACP under different combinations of these parameters. Overall, we define six possible states the flow can be in; these are summarized in Table I. The first three states are for the increase phase and the remaining three states are for the decrease phase.

TABLE I
FLOW STATES & CONTROL POLICY

Congestion State | Fairness State | Loss Event | Policy
Φ ≥ 0            | N/A            | No         | AI
Φ < 0            | F̂ ≥ 1          | No         | AI
Φ < 0            | F̂ < 1          | No         | AI
Φ < 0            | Q ≥ γ · cwnd   | No         | AD
Φ < 0            | φ > 0          | No         | AD
N/A              | N/A            | Yes        | AD

Because ACP is adaptively adjusting the congestion window based on these state combinations, we refer to the congestion control policy of ACP as an Adaptive Increase Adaptive Decrease (AIAD) algorithm. We express the congestion window and the estimated number of queued packets as a function of time t, denoted respectively by cwnd(t) and Q(t). An ACP sender applies one of the following steps based on the flow state combination, as shown in Table I:

    AI: cwnd(t + t_c) = cwnd(t) + f_AI(t)
    AD: cwnd ← cwnd(t) − Q(t)

where f_AI(t) > 0 is defined further below. Thus, while window increases happen with every control period t_c, window decreases are made as soon as the flow is detected to enter an AD state. The window decrease amount is always the estimated amount of the flow's queued packets from the start of the epoch, leading to a fast drop of the excess congestion while maintaining a high utilization of the bottleneck link.

1) Setting the AI parameters: Fast convergence to the fair equilibrium is one of the unique characteristics of ACP. This is achieved by the f_AI(t) function that governs the congestion window increases in the different states. The key requirements for the choice of this function are as follows:

• If Φ ≥ 0, increase the congestion window to acquire spare bandwidth quickly (fast probing).
• If Φ < 0 and F̂ ≥ 1, increase the congestion window by a constant amount (humble increase).
• If Φ < 0 and 0 ≤ F̂ < 1, increase the congestion window according to F̂ so that the window approaches the fairness point quickly (fairness claiming).

Accordingly, we choose the AI function as follows:

    f_AI(t) = α · ⌊(t − t_0)/t_c⌋     if Φ ≥ 0
    f_AI(t) = α                       if Φ < 0, F̂ ≥ 1
    f_AI(t) = α + κ · (1 − F̂)²       if Φ < 0, 0 ≤ F̂ < 1

where α > 0, κ > 0, and t_0 denotes the time at which Φ ≥ 0 is first satisfied.

The goal of the first requirement is to allow a flow to acquire spare bandwidth in a non-congested network quickly. If Φ ≥ 0 (i.e., the network is not congested), we increase ∆cwnd by α segments per control period t_c until the congestion signal is detected. Hence this is called the fast probing phase.

When Φ < 0 (i.e., network congestion is imminent), the function f_AI(t) depends on the fairness ratio. The humble increase phase is entered when F̂ ≥ 1. In this case, we increase cwnd by α segments; throughout this paper we set α = 1 so that ACP behaves similarly to TCP-NewReno during this phase. Otherwise, when 0 ≤ F̂ < 1, the flow moves to a fairness claiming phase, in which it increases its congestion window with greater steps that depend on F̂ (becoming larger as F̂ approaches 0). The fairness convergence time is primarily determined by the fairness claiming parameter κ. If κ is too small, convergence will take a long time, while setting κ too large may cause unstable fluctuations. For a moderate and stable convergence, we choose to set κ so that κ · (1 − F̂)² = 1 when F̂ = 0.8, which leads to κ = 25. Thus, when a flow reaches 80% of its fair share, its fairness claiming window increase rate will be double that of TCP-NewReno. Since an ACP sender keeps measuring F̂ in every control period t_c, the difference between the flows' window increase steps is reduced as F̂ approaches 1, ensuring a fast and smooth convergence.
TABLE II
ACP PARAMETER SETTING

Para | Value | Meaning
t_c  | 200ms | the control interval
α    | 1     | the AI parameter when F̂ ≥ 1
κ    | 25    | the parameter for f_AI(t) when F̂ < 1
γ    | 0.2   | the threshold parameter for the early control

Algorithm 1 The Congestion Control Algorithm of ACP
1:  Input: Φ, φ, F̂, Q(t), and cwnd(t)
2:  if DUPACKs then
3:    cwnd(t) = cwnd(t) − Q(t);
4:  else
5:    if Φ ≥ 0 then
6:      /* Fast probing */
7:      cwnd(t) = cwnd(t) + α · ⌊(t − t_0)/t_c⌋;
8:    else if Φ < 0 then
9:      if (φ > 0) or (Q(t) > γ · cwnd(t)) then
10:       /* Early control */
11:       cwnd(t) = cwnd(t) − Q(t);
12:     else if φ < 0 then
13:       if F̂ ≥ 1 then
14:         /* Humble increase */
15:         cwnd(t) = cwnd(t) + α;
16:       else if 0 ≤ F̂ < 1 then
17:         /* Fairness claiming */
18:         cwnd(t) = cwnd(t) + α + κ · (1 − F̂)²;
19:       end if
20:     end if
21:   end if
22: end if
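The overall control decision can be paraphrased as a runnable sketch. This is our own rendition, not the authors' code: DUPACK detection and the measurement of Φ, φ, F̂ and Q(t) are abstracted into inputs, and all names are illustrative:

```python
# Runnable paraphrase of ACP's AIAD control decision (one step).

def acp_update(cwnd, q, phi_epoch, phi_period, f_hat,
               t=0.0, t0=0.0, alpha=1.0, kappa=25.0,
               tc=0.2, gamma=0.2, dupacks=False):
    """Return the congestion window after one control decision.

    cwnd:       current congestion window, cwnd(t), in segments.
    q:          Q(t), estimated packets this flow holds in the bottleneck queue.
    phi_epoch:  Phi, goodput-throughput difference over the epoch.
    phi_period: phi, the same difference over one control period t_c.
    f_hat:      estimated fairness ratio.
    """
    if dupacks:                                   # loss: drain our queued packets
        return cwnd - q
    if phi_epoch >= 0:                            # fast probing
        return cwnd + alpha * int((t - t0) // tc)
    # Phi < 0: congestion is imminent
    if phi_period > 0 or q > gamma * cwnd:        # early control
        return cwnd - q
    if phi_period < 0:
        if f_hat >= 1:                            # humble increase
            return cwnd + alpha
        return cwnd + alpha + kappa * (1.0 - f_hat) ** 2  # fairness claiming
    return cwnd
```

Note that the early-control branch fires either when the flow's own queued share exceeds γ · cwnd(t), or when a positive per-period φ under a negative epoch-level Φ indicates that another flow has just backed off.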
2) Early control state for TCP-friendliness: Generally, it is very difficult to achieve a high link utilization while being friendly to TCP, a protocol that responds to packet loss events by giving up a significant amount of link capacity. In order to maintain TCP-friendliness, we introduce a state with the aim of reducing the congestion window and yielding a certain portion of occupied bandwidth to TCP before a packet loss occurs. We call this state an early control state. The early control is invoked in two cases. The first case is when a flow detects that Q reaches a predefined ratio of the current congestion window, γ · cwnd, where γ > 0. The second case, designed to synchronize the bandwidth yielding across all ACP flows, occurs when a flow detects that other flows have commenced a window reduction. When another flow reduces its congestion window, the total bottleneck queue level drops suddenly. This is reflected by a sudden increase in goodput, leading to a positive goodput-throughput difference over a control period t_c (which we denote by φ, as opposed to Φ, which is the difference over a longer epoch); therefore, a combination of positive φ while Φ < 0 is an indication that another flow must have reduced its window.

The choice of the parameter γ is influenced by two factors: the size of the congestion window and the size of router buffers. If an ACP flow has a long RTT, then it has a large window, raising the threshold of γ · cwnd(t). This implies that a flow with a short RTT usually has more chance to invoke the early control. When a bottleneck buffer capacity is smaller than γ · cwnd(t), the window will be decreased not by the early control but by a packet loss event. To make an ACP flow reduce the congestion window by the early control, γ · cwnd(t) should be less than the queue capacity. However, if γ is too small, the early control becomes sensitive to the delay variation. Because TCP adopts a window backoff factor of 1/2, this should be an upper bound of γ as well. In our simulations and experiments, our choice of γ is 0.2, which operates well in a variety of network environments.

Finally, we set the common control period to be t_c = 200 ms, which is a conservatively long interval. Similarly to XCP, choosing a conservative control period involves a tradeoff of a somewhat slower response for greater stability of estimates; we are able to afford this choice considering that the ACP congestion control algorithm employs many other techniques to bring about a very fast convergence anyway.

Table II summarizes the set of ACP parameters and their typical values, and the overall congestion control algorithm of ACP is summarized in Algorithm 1. We emphasize that the same parameter values are used in all the simulations reported below, which highlights the robustness of the congestion control algorithm in a wide variety of scenarios.

To conclude this section, we state the following TCP-friendliness property of the ACP algorithm, which is proved analytically in the extended version of the paper [27].

Theorem 2: Assume that one TCP flow and one ACP flow share a network with a bottleneck capacity of C, and their round trip delays are RTT_tcp and RTT_acp, where RTT_tcp, RTT_acp ≤ t_c. Let w_tcp(t) and w_acp(t) denote the throughput of TCP and ACP at time t. Then the (real) fairness ratio of the ACP flow, F(t) = 2 · w_acp(t)/C, converges to one of the following values:

Case 1: If the buffer is sufficiently large for the window to be reduced by early control (avoiding packet losses):

    F(t) → min{ 2 · (α·RTT_tcp/t_c + 1 − √(α·RTT_tcp/t_c + 1)) / (α·RTT_tcp/t_c + 1), 1 };

Case 2: If the buffer is too small for early control and the window is reduced by a packet loss indication:

    F(t) → 2 · (α·RTT_tcp/t_c) / (α·RTT_tcp/t_c + 1).

III. PERFORMANCE EVALUATION

We now conduct an extensive simulation study to compare the performance of ACP with that of different TCP flavours both in conventional and high bandwidth-delay environments. In particular, we pay special attention to the fairness ACP achieves in scenarios with different RTTs. Experimental measurement results with our Linux implementation of ACP will be presented in the last subsection.

Our simulations use the popular packet-level simulator ns-2 [29], which we have extended with an ACP module. We compare ACP with TCP-NewReno, FAST, CUBIC, and HSTCP over the drop tail queuing discipline. Unless stated otherwise, we use a dumbbell topology with the bottleneck queue size set to be equal to 100% of the BDP of the shortest RTT and link capacity.

A. The Dynamics of ACP

This section presents the short term dynamics of ACP.
  6. 6. 350 2000 100 Utilization (%) Flow 1 (RTT=20ms) Flow 1 (RTT=20ms) Flow 4 (RTT=155ms) 80 Bottleneck Congestion Window (pkts) 300 Flow 2 (RTT=65ms) Flow 2 (RTT=65ms) Flow 5 (RTT=200ms) Flow 3 (RTT=110ms) Flow 3 (RTT=110ms) 60 Throughput (Mbps) 250 Flow 4 (RTT=155ms) 1500 40 Flow 5 (RTT=200ms) 20 200 1000 0 800 Queue size: 750 Queue (pkts) 150 Bottleneck 600 100 500 400 50 200 0 0 0 0 200 400 600 800 1000 1200 1400 1600 1800 0 200 400 600 800 1000 1200 1400 1600 1800 0 200 400 600 800 1000 1200 1400 1600 1800 Time (sec) Time (sec) Time (sec) Fig. 1. ACP convergence, utilization and fairness with delayed start of flows with wide range of RTT (20-200 ms). 1000 7000 100 Queue size: 6250 Congestion Window (packets) RTT=62ms Bottleneck Queue (packets) RTT=77ms Bottleneck Utilization (%) 6000 800 RTT=92ms 80 RTT=107ms 5000 RTT=122ms 600 60 4000 400 3000 40 2000 200 20 1000 0 0 0 0 100 200 300 400 500 600 0 100 200 300 400 500 600 0 100 200 300 400 500 600 Time (sec) Time (sec) Time (sec)Fig. 2. ACP is robust against sudden changes in traffic demands. We started 50 FTP flows sharing a bottleneck. At t = 200s, we started 150 additionalflows. At t = 400s, these 150 flows were suddenly stopped and the original 50 flows were left to stabilize again. Bottleneck Utilization (%)any of the existing end-to-end protocols has ever achieved. 100Therefore, the average behavior presented in the section above 80 60is highly representative of the general behavior of ACP. ACP 100Mbps ACP 1Gbps 40 FAST 100Mbps FAST 1Gbps Convergence Behavior: To study the convergence of ACP, NewReno 100Mbps NewReno 1Gbps 20 HSTCP 100Mbps HSTCP 1Gbpswe conducted a simulation with a single bottleneck link with 0 CUBIC 100Mbps CUBIC 1Gbpsa bandwidth of 300Mbps in which we introduced 5 flows 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1 Bottleneck Queue Size (Fraction of BDP)into the system, joining the network 300s apart from each Fig. 3. Aggregate throughput of two competing flows with 100Mbps andother. 
The RTT values of the five flows are spread evenly between 20ms and 200ms. Figure 1 clearly shows that each ACP flow reacts accurately to the changing circumstances, with only small and temporary disturbance to fairness and high link utilization, despite the high bandwidth and very large differences of RTT.

Sudden Demand Change: In this simulation, we examine performance as traffic demands and dynamics vary considerably. We start the simulation with 50 long-lived FTP flows sharing a 1Gbps bottleneck with RTTs evenly spread between 50ms and 200ms. Then, we add WWW background traffic consisting of 10 users, 200 sessions, a page inter-arrival time of 4s, and 10 objects per page with an inter-arrival time of 0.01s. The transfer size of each object is drawn from a Pareto distribution with an average of 3000 bytes. At t=200s, we start 150 new FTP flows, again with RTTs spread evenly in the range [50ms,200ms], and let them stabilize. Finally, at t=400s, we terminate these 150 flows and leave only the original 50 flows in the system. Figure 2 shows that ACP adapts well to these sudden traffic changes, and quickly reaches high utilization and fair allocation with different RTTs.

B. Efficiency

To evaluate the efficiency of the various protocols, we consider two flows with the same propagation delay (RTT=50ms) and measure average throughput over a simulation time of 500s, for various buffer capacities ranging from 10% to 100% of the bandwidth-delay product. Figure 3 shows the results for a 100Mbps and a 1Gbps link. For both link capacities, at lower buffer sizes ACP and CUBIC achieved greater link utilization than TCP-NewReno, FAST, and HSTCP. This is a result of the accurate window downsizing of ACP, as opposed to the backoff factor of 0.5 used by TCP-NewReno and HSTCP. In addition, we note the inability of FAST to achieve a reasonable utilization with small buffers, i.e. around 20% of BDP or less, although the cause of that effect requires further investigation.

C. RTT Fairness

1) Dumbbell topology: Two Flows: In this experiment, we measure the RTT fairness of two competing flows that use the same protocol with a 200Mbps bottleneck. We fix the RTT of one flow to 200ms and vary the RTT of the other flow in the range [20ms,180ms] with a 10ms step. Figure 4 displays the throughput ratio between the flows, showing that ACP and FAST are the most fair; among the other protocols, CUBIC is better than TCP-NewReno and HSTCP because of its linear RTT fairness feature.

Fig. 4. Ratio of throughputs of two competing flows as the propagation delay of the second flow is varied.

Multiple Flows: This experiment tests the fairness of ACP and other protocols with multiple competing flows with different RTTs. We have 20 long-lived FTP flows sharing a single 1Gbps bottleneck. We conduct three sets of simulations. In the first set, all flows have a common round-trip propagation delay of 50 ms. In the second set, the flows have different RTTs in the range [20ms,115ms] (evenly spaced at increments of 5ms). In the third set, the flows again have different RTTs in a wider range of [20ms,210ms] (increments of 10ms).
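The figures report per-flow throughput directly; a common scalar summary for multi-flow fairness experiments like these is Jain's fairness index, sketched below. This is an illustration only, not a metric plotted in the figures:

```python
# Jain's fairness index: a standard scalar summary for multi-flow
# fairness runs such as the 20-flow experiments above. (Illustrative
# only; the figures plot per-flow throughput rather than this index.)

def jain_index(throughputs):
    """Returns 1.0 for perfectly even shares, and approaches 1/n as a
    single flow monopolizes the bottleneck."""
    n = len(throughputs)
    return sum(throughputs) ** 2 / (n * sum(x * x for x in throughputs))

even_shares = [50.0] * 20          # 20 flows, each at a 50 Mbps fair share
skewed = [240.0] + [10.0] * 19     # one short-RTT flow dominating

print(jain_index(even_shares))     # 1.0
print(jain_index(skewed))          # well below 1
```

An even per-flow distribution, as ACP exhibits in Figure 5, corresponds to an index near 1 regardless of the RTT spread.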
In all the scenarios, the buffer capacity is set to 100% of the BDP of the flow with the lowest RTT.

As presented in [30], multiple TCP flows with heterogeneous RTTs generally achieve a bandwidth allocation proportional to 1/RTT^z, where 1 ≤ z ≤ 2. CUBIC [22] improves this ratio by featuring a linear RTT fairness mechanism. We indeed observe the fairer bandwidth distribution of CUBIC in Figure 5. However, ACP avoids the RTT fairness problem altogether and consistently achieves an even distribution of capacity among the competing flows, even when those flows have significantly different RTTs.

Fig. 5. Bandwidth share among multiple competing flows with either equal or different RTTs: (a) equal RTT, (b) different RTT (20-115ms), (c) very different RTT (20-210ms).

2) A more complex topology: We now test the various protocols using the 9-link topology shown in Figure 6. Here, link 5 has the lowest capacity of 100Mbps, whereas all the others are 200Mbps links. Every link has the same round-trip delay of 20ms. There is one flow traversing all 9 hops with an RTT of 180ms, and nine additional cross-flows (denoted by the small dashed arrows) that only traverse one individual link each. Figure 6 shows the average utilization and throughput of the all-hop flow in this scenario. Here, ACP and FAST are the only protocols that guarantee a fair throughput to the all-hop flow; with all other protocols, the all-hop flow suffers from a low bandwidth share due to packet losses on multiple links.

Fig. 6. A string of multiple congested queues, shared by an all-hop flow and separate flows over individual links. Link 5 has a lower capacity than the rest.

D. TCP-Friendliness of ACP

Figure 7 shows the throughput obtained by competing TCP and ACP flows, with various combinations of numbers of flows of each type. Here, the bottleneck capacity was set at 500Mbps and the round-trip propagation delay was 200ms for all flows. For convenience, the throughput of each flow in all cases was shown normalized to the fair share value, 500Mbps divided by the number of flows. This simulation demonstrates that ACP is as TCP-friendly as the other Internet congestion control protocols under consideration. Moreover, Figure 7(b) verifies an additional desirable effect when the bottleneck buffer is smaller than the bandwidth-delay product. In that case, TCP flows cannot utilize the link in full (especially when the number of TCP flows grows), due to the overly aggressive decrease of the congestion window in response to packet losses; consequently, ACP flows consume the additional bandwidth and thereby retain a high utilization of the link, without adversely affecting the amount that TCP flows would have achieved competing on their own.

Fig. 7. TCP-friendliness of ACP: (a) queue size 100% of BDP, (b) queue size 50% of BDP.

Finally, we extend the TCP-friendliness evaluation by varying the RTT of a TCP flow in the range [20ms,200ms] and the capacity of the bottleneck buffer in the range [20%,100%] of the respective BDP. The TCP flow competes with one ACP flow with an RTT equal to 200ms. Figure 8 shows the ratio of the TCP flow throughput to the fair share (i.e. one half of the bandwidth). This demonstrates the TCP-friendliness resulting from the early control mechanism of ACP when competing with a TCP flow, as indicated by the TCP-friendliness theorem stated earlier.

E. Linux Implementation

We have implemented a prototype of ACP by modifying the congestion window management functionality in the Linux kernel, and conducted several experiments on a dummynet testbed [31], as shown in Figure 9, to compare the performance of ACP and the Linux TCP implementation, focusing mainly on ACP's TCP-friendliness and RTT fairness. The results of these experiments are presented below.
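For intuition, the bottleneck that dummynet emulates for such a testbed is essentially a rate-limited drop-tail FIFO. A toy model with the testbed's settings (1 MB buffer, 200 Mbps link) might look like the following; the class and method names are ours, purely for illustration:

```python
# Toy model of the bottleneck dummynet emulates for this testbed:
# a rate-limited FIFO with drop-tail losses. Buffer and rate mirror
# the testbed configuration (1 MB, 200 Mbps); the class itself is
# illustrative, not part of any real dummynet API.

class DropTailLink:
    def __init__(self, buf_bytes, rate_bps):
        self.buf_bytes = buf_bytes    # fixed queue capacity
        self.rate_bps = rate_bps      # bottleneck drain rate
        self.queued = 0               # bytes currently buffered
        self.dropped = 0              # packets lost to drop-tail

    def enqueue(self, pkt_bytes):
        if self.queued + pkt_bytes > self.buf_bytes:
            self.dropped += 1         # no room: the packet is lost
        else:
            self.queued += pkt_bytes

    def drain(self, dt_s):
        # The link empties the buffer at rate_bps / 8 bytes per second.
        self.queued = max(0, self.queued - int(self.rate_bps / 8 * dt_s))

link = DropTailLink(buf_bytes=1_000_000, rate_bps=200_000_000)
for _ in range(800):                  # a 1.2 MB burst of 1500 B packets
    link.enqueue(1500)
print(link.dropped > 0)               # True: the burst overflows 1 MB
link.drain(0.04)                      # 40 ms drains 1 MB at 200 Mbps
print(link.queued)                    # 0
```

The fixed buffer and drain rate are exactly what make loss-based senders overshoot (fill the buffer, lose packets, back off by half), which is the behavior the experiments below probe.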
Fig. 8. Normalized throughput of TCP.

Fig. 9. Our dummynet testbed.

We have configured the following options in the ACP Linux implementation:
• TCP Segmentation Offload (TSO): We disabled the TSO option because a TSO-enabled sender often generates packets whose size is smaller than the Maximum Transfer Unit (MTU). This causes inaccuracy in estimating the fairness ratio.
• TCP-SACK: We implemented ACP with the TCP-SACK option to recover efficiently from multiple packet losses.
• RTTM: We implemented the Round-Trip Time Measurement (RTTM) option, to allow delay measurements to be taken with high precision (microsecond resolution).

1) Testbed setup: Our testbed consists of two senders and two receivers running Linux kernel 2.6.24, connected via an emulated router running dummynet under FreeBSD-7.0. Each testbed machine has a single Intel Core2 Quad 2.83GHz CPU, 8 GB of main memory, and an Intel PRO/1000 Gigabit Ethernet interface. We configured dummynet to have a fixed buffer size of 1MB and a 200Mbps bottleneck link in all experiments. We used iperf for bandwidth measurement; the bandwidth in all plots is the running average over a time window of 4 seconds. We also used the TcpProbe kernel module to trace the congestion window of a TCP connection.

2) TCP-friendliness: To assess ACP's TCP-friendliness, we conduct three sets of experiments, all of which have one ACP flow joined by one TCP flow after 30 seconds. The ACP flow has a 40ms RTT in all experiments, while TCP is set to have an RTT of 20ms, 40ms, and 160ms in the three experiments, respectively. Figure 10 shows three pairs of graphs, namely, the congestion windows and throughputs of the TCP and ACP flows in each of the three experiments.

We observe that in the first experiment (Figure 10(a)), rather than being throttled by the TCP flow, the ACP flow tries to converge to the fair share through the aggressive adaptive increase (fairness claiming) of ACP. Figure 10(b) shows that when both flows had the same RTT in the second experiment, they coexisted well, as expected. Finally, Figure 10(c) shows the most interesting result. Even though TCP has a much longer RTT in this experiment, ACP does not take advantage of that fact. Instead, ACP continuously yields its bandwidth portion to TCP in order to redistribute bandwidth for fair sharing. When TCP eventually reduces its congestion window, making the network underutilized, ACP acquires the spare bandwidth quickly by entering its fast probing phase. Note that the overall convergence time of the system is rather slow in this case, which is entirely due to the slow window increase of the TCP flow.

3) RTT fairness: This experiment measures the RTT fairness between two competing ACP flows. We fix the RTT of one flow to 150ms, and vary the RTT of the other in the range [40ms,120ms] in 10ms increments. Figure 11(a) shows that the normalized throughput, i.e. the ratio between the throughputs achieved by both flows, is close to 1 in all cases considered. Figure 11(b) shows a sample of the throughput evolution over time for a case with a large RTT difference, namely 50ms and 150ms. We observe that the flow with the shorter RTT keeps releasing its bandwidth portion whenever it exceeds its fair share (i.e., the early control phase), while the flow with a longer RTT claims its fair share based on the fairness ratio. These figures support the simulation outcomes and illustrate the fairness properties of ACP in a real implementation.

IV. CONCLUSION AND FUTURE WORK

End-to-end congestion control in the Internet has many challenges that include high utilization, fair sharing, RTT fairness, and TCP-friendliness. We have described an adaptive end-to-end congestion control protocol, ACP, that deals successfully with these challenges without requiring any router support. ACP uses two measurements that estimate important link information: queue growth estimation is used to downsize the congestion window so as to empty the bottleneck queue and retain high link utilization, and fairness ratio estimation accomplishes fast convergence to a fair equilibrium by allowing flows to increase their window more aggressively when their share is below the fair level. To resolve the RTT unfairness problem, all estimations and window increases are performed with a fixed control period independent of the RTT, while the early control phase of ACP releases a portion of bandwidth before packet losses occur to maintain fair sharing with TCP flows.

Our extensive simulations and experiments demonstrate the superior characteristics of ACP in comparison with TCP-NewReno, FAST, CUBIC, and HSTCP under a drop-tail queuing discipline. In most scenarios, ACP was able to retain its desirable characteristics as the per-flow delay-bandwidth product became large, whereas TCP variants suffered severely. We therefore believe that ACP is a very attractive end-to-end congestion control mechanism for HBDP flows that are becoming increasingly prevalent in the Internet. Further theoretical modeling efforts, as well as more extensive evaluation in real networks, are the subject of ongoing work.