Impact of file arrivals and departures on buffer sizing in core routers
Ashvin Lakshmikantha (Department of Electrical and Computer Engineering and Coordinated Science Laboratory, University of Illinois, Urbana-Champaign, Email: lkshmknt@uiuc.edu), R. Srikant (Department of Electrical and Computer Engineering and Coordinated Science Laboratory, University of Illinois, Urbana-Champaign, Email: rsrikant@uiuc.edu), Carolyn Beck (Department of Industrial and Enterprise Systems Engineering and Coordinated Science Laboratory, University of Illinois, Urbana-Champaign, Email: beck3@uiuc.edu)

Abstract—Traditionally, it had been assumed that the efficiency requirements of TCP dictate that the buffer size at the router must be of the order of the bandwidth (C)-delay (RTT) product. Recently this assumption was questioned in a number of papers, and the rule was shown to be conservative for certain traffic models. In particular, by appealing to statistical multiplexing, it was shown that on a router with N long-lived connections, buffers of size O(C × RTT/√N) or even O(1) are sufficient. In this paper, we reexamine the buffer size requirements of core routers when flows arrive and depart. Our conclusion is as follows: if the core-to-access speed ratio is large, then O(1) buffers are sufficient at the core routers; otherwise, larger buffer sizes do improve the flow-level performance of the users. From a modeling point of view, our analysis offers two new insights. First, it may not be appropriate to derive buffer-sizing rules by studying a network with a fixed number of users. In fact, depending upon the core-to-access speed ratio, the buffer size itself may affect the number of flows in the system, so these two parameters (buffer size and number of flows in the system) should not be treated as independent quantities. Second, in the regime where the core-to-access speed ratio is large, we note that O(1) buffer sizes are sufficient for good performance and that no loss of utilization results, as previously believed.

I. INTRODUCTION

Traditionally, a buffer size of C × RTT was considered necessary to maintain high utilization (here C denotes the capacity of the router and RTT is the round trip time) for TCP-type sources [21]. This buffer sizing rule implies that if there are N persistent connections, each requiring a throughput of c (C = Nc), then the buffer size should be Nc × RTT; in other words, the buffers should be scaled linearly with the number of flows, i.e., O(N) or O(C). This traditional view of buffer sizing was questioned in [2], [13], [22], [20], [11] and was shown to be outdated. By appealing to statistical multiplexing, it was shown that buffer sizes that are scaled as O(√N) or O(C/√N) are sufficient to maintain high link utilization. Another extension to the above work shows that buffer sizes can be reduced to even O(1) by smoothing the arrival process to the core [13]. In other words, according to [13], buffer sizes can be chosen independent of the link capacity and RTT, as long as the network operator is willing to sacrifice some link utilization. In particular, it was shown that a buffer size of about 20 packets is largely sufficient to maintain nearly 80% link utilization (independent of the core router capacity).

All of the above results were obtained under the assumption that there are N long-lived flows in the network. The number of long-lived flows in the network was not allowed to vary with time. In reality, flows arrive and depart, making the number of flows in the network random and time varying. The question we ask in this paper is the following: "Can the buffers on the core routers be significantly reduced even when there are flow arrivals and departures, without compromising network performance?"

The performance metric that we use to study the impact of buffer sizing on end-user performance is the average flow completion time (AFCT). When there are file arrivals and departures, AFCT is a better metric to use than link utilization, which is the commonly used metric when there is a fixed number of flows. For example, in an M/G/1 queue, small changes in traffic intensity can lead to large changes in mean delays (AFCTs) when the traffic intensity is close to 1. To see this, note that the mean delay is proportional to ρ/(β − ρ), where ρ is the offered traffic and β is the effective link capacity [5]. In the context of TCP-type flows, the effective capacity is determined by the extent to which TCP can utilize the link with a given buffer size. Suppose ρ = 0.95. Then a change in β from 1 to 0.96 can increase the AFCT by a factor of 5.
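The factor-of-5 sensitivity is easy to check numerically. A minimal sketch (the proportionality constant of the delay formula is irrelevant for the ratio):

```python
# Mean-delay sensitivity in an M/G/1-type model: delay is proportional
# to rho / (beta - rho), with offered load rho and effective capacity beta.

def relative_delay(rho: float, beta: float) -> float:
    """Mean delay up to a constant factor, proportional to rho / (beta - rho)."""
    assert beta > rho, "queue must be stable: effective capacity must exceed load"
    return rho / (beta - rho)

rho = 0.95
slowdown = relative_delay(rho, 0.96) / relative_delay(rho, 1.0)
print(slowdown)  # ~5: shrinking beta from 1 to 0.96 inflates mean delay five-fold
```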
A. Main contributions

Our main contributions can be summarized as follows.

• We first study the impact of flow arrivals and departures in networks that have been the motivation of recent buffer sizing results. Such networks are characterized by a vast disparity in the operating speeds of access routers and core routers (roughly three to four orders of magnitude). When there are flow arrivals and departures, we show that the core routers are rarely congested even at high loads of 98%. Since there is no congestion on the core router, the flows are largely limited by the access speeds.
Thus, the AFCT seen by an end user does not change significantly with the core router buffer size. While we arrive at our results from different considerations, our results agree with [13] in that, on such networks, the core router buffers should be scaled as O(log(Ca × RTT)), where Ca denotes the capacity of the access router. (In [13], the authors study a single link with N long-lived flows. They assume no access speed limitations but impose a maximum window size constraint on TCP. Their result is that buffers on the core router should be scaled as O(log(Wmax)), where Wmax denotes the maximum window size of TCP. It is easy to see that our result is equivalent to this result.) However, unlike [13], we further find that we do not have to sacrifice link utilization to allow such small buffer sizes.

• We study the impact of small buffers on a single congested link where the access limitations are absent. It is rather well known that TCP approximates processor sharing [15] when the file sizes are large. Therefore, at any time, very few active flows are present in the network even at significantly high loads (for example, under processor sharing, even at 90% loading, the probability that more than 50 flows are active is about 0.005). Therefore, the assumption that a large number of users exist in the system does not hold, and the large reductions in buffer sizes due to statistical multiplexing effects reported in previous work do not apply here. In fact, reducing buffer sizes in these networks would result in dramatic degradation of overall performance. We have observed an order of magnitude increase in the AFCT due to the use of small buffers on such links. It turns out that one would require buffers of size C × RTT, or O(C), to obtain good end-user performance.

• All of the above conclusions can be obtained from a single unifying model which is applicable to a large class of traffic scenarios. In particular, we argue that, given a particular access to core router speed ratio, there exists a threshold operating load below which small buffers seem to be sufficient. Above this threshold, one would require buffers of size O(C × RTT).
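The processor-sharing figure quoted in the second bullet can be verified directly: for an M/M/1 processor-sharing queue at load ρ, the stationary number of flows is geometric, so Prob{N > n} = ρ^(n+1). A minimal sketch:

```python
# Tail of the number of active flows in an M/M/1 processor-sharing queue.
# The stationary distribution is geometric: P(N = i) = (1 - rho) * rho**i,
# hence P(N > n) = rho**(n + 1).

def prob_more_than(n: int, rho: float) -> float:
    return rho ** (n + 1)

# Even at 90% load, more than 50 simultaneously active flows is rare:
p = prob_more_than(50, 0.90)
print(round(p, 4))  # ~0.005, matching the figure quoted in the text
```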
It is important to note that the results based on a fixed number of flows consider models in which access speed limitations and maximum window size limitations are removed, thus giving the impression that the results are valid even if the core-to-access speed ratio is small. In other words, a model that assumes a fixed number of flows would indicate that buffer sizes can be reduced to 1% of the bandwidth-delay product if N = 10,000, independent of the maximum window size limitations and access speed limitations. However, our results indicate that when access speed limitations and maximum window size limitations are removed, the number of active flows in the system will be quite small (typically in the tens). Since there are very few flows in the network, one cannot present an argument based on statistical multiplexing to reduce buffer size. If we were to design for the typical case in such networks, we would require very large buffers to ensure good performance for the end users. This subtlety can be noticed only when we study the system with file arrivals and departures.

We briefly comment on the similarities and differences between our work and the results in [10]. In [10], it was shown that on routers without any access speed limitations, O(C × RTT) buffering is required when there are file arrivals and departures. The authors also suggest that backbone routers are lightly loaded due to over-provisioning, and thus one would require small buffers on such routers. While we also note that O(C × RTT) buffering is required when there are no access speed limitations, our approach and conclusions are different from [10] in many respects. We show that, even if the core router is heavily loaded (up to 98%), we can still operate with very small buffers if the core-to-access speed ratio is large. Further, we have developed a single unifying model and also point out the key fact that the buffer size and the typical number of users in the system are not independent quantities. This last observation seems to be the fundamental reason why one should consider flow arrivals and departures in sizing router buffers.

II. CORE ROUTERS IN ACCESS LIMITED NETWORKS REQUIRE O(1) BUFFERING

In this section, we study networks where the core router speeds are several orders of magnitude larger than the access router speeds. Before we model arrivals and departures, we first derive buffer requirements for a network with a fixed number of long-lived flows, using link utilization as the performance metric. Using AFCT as the performance metric, we then consider file arrivals and departures and show that small buffers do not increase the AFCT significantly, unless the traffic load is close to the instability region of the network. The reason is that, with access speed limitations, the core router is not congested (i.e., the number of packets dropped at the core is an order of magnitude smaller than the number dropped at the access routers) unless the offered load is very close to the instability region of the network. Since the core router does not get congested, the core router buffer size has no significant effect on the AFCT of the flows.

In prior literature, the congestion at the core is often measured using link utilization, which we believe is incorrect. Our model indicates that even at very high levels of link utilization, the core is not congested, in the sense that core routers will not be able to control the transmission rates of the end users. This is due to the fact that packet drops are so infrequent on core routers that they contribute very little to the overall packet loss probability.

Fig. 1. Access limited networks (a source connected to its destination through an access link of capacity Ca feeding a core link of capacity C).

Let us first consider a fixed number of flows N accessing the
Internet via an access router and a core router. The capacity of the access router is Ca packets/sec and the core router capacity is C packets/sec. Let β(B, N, K) denote the core router link utilization, which is a function of the buffer size B, the number of flows N and the core-to-access speed ratio K = C/Ca. Let γC denote the mean packet arrival rate at the core router and pc denote the mean packet loss probability at the core router. It is straightforward to show that

    β = γ(1 − pc).    (1)

Assume that N·Ca ≤ γC. In this case, by our assumption that Ca ≪ C, N can be fairly large. When N is large, by standard results in stochastic processes, the arrival process can be well approximated by a Poisson process [6]. Further assuming an M/M/1/B model for the queueing process at the core router, where B is the buffer size at the core, we can compute the packet loss probability at the core router to be [5]

    pc = γ^B (1 − γ) / (1 − γ^(B+1)).    (2)

This formula can be used to size the buffer, i.e., to obtain an upper bound on B. To do this, we first need a specification of the desired pc. Due to the fact that N·Ca ≤ γC, it is clear that the network is access speed limited, and therefore we should design the core buffer size such that it does not induce significant packet loss compared to the access router. This is due to the fact that TCP throughput is approximately given by

    X = √1.5 / (RTT · √(pa + pc)),    (3)

where X denotes TCP throughput and pa denotes the packet loss probability on the access router [19]. Here, we have used the approximation 1 − (1 − pa)(1 − pc) ≈ pa + pc. Suppose we design the buffer size such that pc = 0.1·pa, and the buffers on the access link are sized such that the access link is fully utilized; then we get

    Ca = (1/RTT) · √(1.5/(pa + pc)) = (1/RTT) · √(1.5/(11·pc)).

Substituting for pc from the above formula in (2), we get the desired buffer size to be O(log_{1/γ}(Ca × RTT)). To illustrate the importance of the above result, we consider the following example.

Example 1: Consider a core router which is accessed via access routers of capacity 2 Mbps. Let the packet size be 1000 bytes. If RTT = 50 ms, then to achieve a transmission rate of 2 Mbps, using (3), the loss probability on the access router (pa) can be no more than 0.01. To ensure that the core router does not affect the throughput of a flow, we choose the buffer size on the core router such that pc = 0.1·pa = 10^−3. The amount of buffering required to achieve a certain loss probability is given in Figure 2, which is plotted using (2). Suppose we require 90% link utilization (i.e., β(B, N, K) ≈ γ = 0.9 and N·Ca < 0.9·C). From Fig. 2, it is clear that no more than 40 packets are required to maintain a loss probability of pc = 10^−3. Even at 95% link utilization, we need no more than about 80 packets of buffering at the core router. The most important thing to note is that this result is independent of the core router capacity.

Fig. 2. Packet loss probability on an uncongested link, as a function of buffer size (in packets), for β = 0.80, 0.90 and 0.95.
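The buffer size in Example 1 can be recomputed directly from (2); a sketch that searches for the smallest B meeting a target loss probability (parameter values taken from the example):

```python
# Loss probability of an M/M/1/B queue at utilization gamma, eq. (2),
# and the smallest buffer B meeting a target loss probability.

def mm1b_loss(gamma: float, B: int) -> float:
    return gamma**B * (1 - gamma) / (1 - gamma**(B + 1))

def min_buffer(gamma: float, target_pc: float) -> int:
    B = 1
    while mm1b_loss(gamma, B) > target_pc:
        B += 1
    return B

# Example 1: p_c = 0.1 * p_a = 1e-3.
print(min_buffer(0.90, 1e-3))  # 44: close to the ~40 packets read off Fig. 2
print(min_buffer(0.95, 1e-3))  # 77: consistent with the ~80 packets quoted
```

Note that neither answer depends on the core capacity C, which is the point of the example.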
Thus, even at very high utilization levels at the core router, small buffers seem to be sufficient. Next, we consider the case where pc ≫ pa. This situation would arise if N·Ca ≫ C. (When N·Ca ≈ C, there would be drops both on the access router and on the core router. This scenario is particularly tricky to handle and we do not do so in this paper. As our simulations demonstrate, such a detailed model is not required for the calculation of AFCT.) In this case, the per-flow throughput is approximately given by

    X ≈ √1.5 / (RTT · √pc).    (4)

As before, since N is large, we can model the packet arrival process at the core router by a Poisson process; therefore, the packet loss probability on the core router is still given by (2). If the per-flow throughput is X, then

    γ = N·X / C.    (5)

Given a buffer size B, equations (1), (2), (4) and (5) reduce to a set of fixed point equations. These equations can be solved to obtain β. We consider the following example.

Example 2: Consider a congested core of capacity C = 100 Mbps accessed via access routers of capacity Ca = 2 Mbps. Let the packet size be 1000 bytes and the RTT be 50 ms. The bandwidth-delay product is C × RTT = 625 packets. Suppose there are 60 flows in the system. In this case, the core router becomes the bottleneck and hence it is the main source of congestion feedback. The amount of buffering required to obtain a certain link utilization β is given in Fig. 3; this has been obtained by solving the set of fixed point equations described above. As seen from Fig. 3, even if we were to operate at 95% efficiency, we require no more than 40 packets of buffering at the core router. As we increase the core router capacity, the amount of buffering required to maintain the same efficiency increases, but not by a significant amount.
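The fixed-point system (1), (2), (4), (5) can be solved numerically, e.g., by bisection on γ; a sketch using Example 2's parameters (60 flows, C = 100 Mbps, 1000-byte packets, RTT = 50 ms):

```python
import math

# Solve the fixed point of eqs. (1), (2), (4), (5) for a congested core:
#   p_c   = gamma^B (1 - gamma) / (1 - gamma^(B+1))   -- M/M/1/B loss, eq. (2)
#   X     = sqrt(1.5) / (RTT * sqrt(p_c))             -- TCP throughput, eq. (4)
#   gamma = N * X / C                                 -- offered load, eq. (5)
#   beta  = gamma * (1 - p_c)                         -- utilization, eq. (1)

C = 100e6 / (1000 * 8)   # core capacity in packets/sec (100 Mbps, 1000-byte pkts)
RTT, N = 0.050, 60

def loss(gamma, B):
    return gamma**B * (1 - gamma) / (1 - gamma**(B + 1))

def utilization(B, lo=0.5, hi=0.999999):
    # The right-hand side N*X(p_c(gamma))/C is decreasing in gamma, so
    # bisection converges to the fixed point (or to the upper bracket
    # when the link saturates).
    for _ in range(100):
        mid = (lo + hi) / 2
        rhs = N * math.sqrt(1.5) / (RTT * math.sqrt(loss(mid, B))) / C
        if rhs > mid:
            lo = mid
        else:
            hi = mid
    gamma = (lo + hi) / 2
    return gamma * (1 - loss(gamma, B))

print(round(utilization(40), 3))  # ~0.96: 40 packets already give >95% efficiency
```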
When the link capacity of both the core router and the access router is increased by a factor of 100, the increase in the required buffer size is only a factor of 5 (the increase in buffer size appears to be logarithmic in the core router capacity C).

Fig. 3. Amount of buffering required to maintain a target link utilization (curves for C × RTT = 625, 6250 and 62500 packets).

We have now provided a simple derivation of buffer-size requirements for a network where the number of long-lived flows is fixed, using link utilization as the metric. However, as mentioned in Section I, when one considers arrivals and departures of flows, link utilization is not the appropriate metric. We now study the impact of small buffers on the AFCT of flows, when flows arrive and depart.

Consider the following model of an access limited network, which is similar to the model in [9]. Suppose that files arrive according to a Poisson process of rate λ and the file sizes are drawn from an arbitrary distribution with mean 1/µ. Recalling the definition of the core link utilization β(B, N, K), we note that whenever there are N users in the system, the per-flow throughput is β(B, N, K)·C/N. Due to the insensitivity property of our model to the file-size distribution, N(t) can be represented as a Markov chain, specifically a birth-death process [17]. It follows from elementary analysis of birth-death processes that λ/(µC) < lim_{n→∞} β(B, n, K) is necessary for the Markov chain to be stable (i.e., positive recurrent). Under this assumption, the exact stationary distribution of this Markov chain can be characterized. For details on calculating the stationary distribution we refer the reader to [17, Theorem 3.8].

The stationary distribution of the Markov chain is given by

    πi = Prob{N = i} = ρ^i Π_{j=0}^{i} (1/βj)  /  Σ_{k=0}^{∞} ρ^k Π_{j=0}^{k} (1/βj),    (6)

where βj = β(B, j, K) and ρ = λ/(µC).

Since the exact stationary distribution is known, the AFCT can be easily characterized. By Little's law it follows that AFCT = E[N]/λ, and therefore

    AFCT = (1/λ) Σ_{i=1}^{∞} i·πi.

While the above expression provides a closed-form solution for the AFCT, it provides very little intuition on the dependence of the AFCT on the buffer size at the core router. To get more insight, we rewrite the AFCT as follows:

    AFCT = (1/λ) [ Σ_{i=0}^{γK} i·πi + Σ_{i=γK+1}^{∞} i·πi ].    (7)

Recall that in our design we have assumed that the router is congested if N ≥ γK + 1. If πi is small at i = γK + 1 and decreases exponentially for i > γK + 1, then it is clear that the system rarely experiences congestion and therefore the AFCT is predominantly determined by the access router speed.

If K is sufficiently large (K should be large enough to remove synchronization effects as studied in [2]), then β is an increasing function of N in the region [γK, ∞). From (6), it follows that if i > γK then

    Prob(N = i) ≤ Prob(N = γK) · (ρ/β_{γK+1})^(i−γK),

which means that Prob(N = i) decreases at least exponentially for i > γK. Therefore, if Prob(N = γK) is sufficiently small (and consequently Prob(N > γK) is also very small), then

    AFCT = 1/(µCa).

The system is rarely in congestion and therefore the AFCT is dictated by the access speed limitations. In the following example, we verify that even at very high loads, Prob(N > γK) is quite small.

To evaluate Prob{N > γK} using (6), we need the value of β(B, N, K) for all values of N. Note that

    β(B, N, K) = N·Ca/C    if N·Ca ≤ γC,

where B is chosen to achieve a link utilization of γ. On the other hand, if N·Ca > γC and K is sufficiently large, then, as described earlier, β(B, N, K) is an increasing function of N in the region [γK, ∞). Therefore, we can determine an upper bound on Prob{N·Ca > γC} (i.e., Prob{N > γK}) by replacing β(B, N, K) by β(B, γK, K) for all values of N > γK. Based on this approximation, we present some numerical results in the following example.

Example 3: Consider again a core router with capacity C = 100 Mbps being accessed by flows via access routers with capacity Ca = 1 Mbps. Therefore, K = 100. The total propagation delay is assumed to be 50 ms. If the mean packet size is 1000 bytes, it is clear that we require C × RTT = 625 packets to achieve 100% link utilization. On the other hand, to achieve 95% link utilization, we need only about 40 packets of buffering.

The fraction of time the system is congested is plotted as a function of the offered load in Fig. 4. As seen from Fig. 4, the core router is congested less than 10% of the time even at 85% load. As the ratio K increases, the amount of time spent in congestion further decreases. For example, if
K = 500 (i.e., C = 500 Mbps and Ca = 1 Mbps), even at 90% load, the core router is congested less than 1% of the time! Since the system spends a very small amount of time in congestion, the AFCT should not increase significantly with smaller buffers. Simulations presented in Section IV show that this is indeed the case. In other words, the core router buffer size can be chosen independently of the core router link capacity in networks with a large disparity between access speeds and core router speeds.

Fig. 4. Fraction of time the system spends in congestion, as a function of offered load, for K = 100, 200, 300 and 500.

Remark 1: In today's Internet, there exist networks where core routers operate about 1000-10000 times faster than the access routers. For example, consider a DSL user base with each user accessing the network with 1 Mbps access bandwidth and the aggregation point switching packets at 10 Gbps (K = 10000). Further suppose that the RTT of flows is about 50 ms. The traditional buffer provisioning guidelines suggest that to achieve 100% link utilization a buffer of size B = C × RTT = 62.5 MB is required. On the other hand, our analysis indicates that we can achieve a link utilization of β = 0.999 using a buffer of size 250 KB. Assuming a buffer size of 250 KB, the probability that the core router is congested at an extremely high load of 98% is given by

    Prob{N > 9990} ≈ 0.007!

Thus, in such a network, even at 98% load, there is very little congestion on the core router. This implies that the AFCT does not increase when small buffers are used, even when the system is operating at 98% load. Therefore buffer sizes can be reduced dramatically, without degrading end-user performance.

Our results in this section depend on the fact that the relevant performance metric is the AFCT. As mentioned earlier, the AFCT is insensitive to the file-size distribution; thus the results apply to heavy-tailed file-size distributions as well. However, heavy-tailed file-size distributions may affect the time required by the system to reach its steady state. In particular, if the network enters a heavily congested state, then it may persist in this transient state for a long period of time. Nevertheless, we note that Example 2 suggests that even when the core is heavily congested, small buffers may be sufficient. Defining a performance metric and studying the system in more detail under such worst-case transient scenarios is an area for future research that we do not undertake in this paper.

III. CORE ROUTERS WITH O(C) BUFFERING

When there is a large disparity between the access speeds and the core router speeds, buffering is required primarily to absorb the small variability of the Poisson arrival process, and the system is rarely in congestion. Therefore, small buffers are sufficient in these networks. However, as a network designer, it is important to study the impact of small buffers in networks that get congested very often. Typically, these are networks where there are no access speed limitations and each flow can potentially use a large fraction of the capacity of the link.

In this section we study the impact of small buffers in networks without access speed limitations. Our approach, as before, is based on time-scale separation. We first construct a detailed packet-level model of a single congested link, assuming N long-lived flows access the link. Using this model we characterize the link utilization β(N, B) as a function of the buffer size B and the number of long-lived flows N. Unlike in the previous section, note that the parameter β does not depend on K, the core-to-access speed ratio, since in this section there are no access speed limitations. We then study the dynamics of flow-level arrivals and departures. The only parameter that we require from our packet-level model to carry out the flow-level analysis is the link utilization β(N, B), since we neglect the impact of other TCP dynamics and packet-level dynamics such as slow start, fast retransmission, time-outs, etc. A more detailed model may increase the numerical accuracy of our results, but we would lose our ability to obtain qualitative insight into the congestion phenomenon at the core routers. As our simulations demonstrate, our simple model is quite accurate in predicting the AFCT of flows.

We consider a single link of capacity C packets/sec accessed by N long-lived flows. The round trip time of flow i is denoted by RTTi. In our earlier model of access-limited networks, we did not explicitly model the RTT, since we could use very small buffers at the core; the impact of RTT on TCP throughput was irrelevant, as we had designed the system so that the throughput of TCP was roughly equal to the access speed. Now, since the access speed limitations are no longer present, the queueing delay at the core router affects the overall throughput of a flow. Therefore, we explicitly break up the RTT into a propagation delay τp and a queueing delay τq. We assume that the propagation delay τp^i of a user i is uniformly distributed on [a, b]. The maximum window size of TCP is denoted by MWS. The packet loss probability at the core router is denoted by pc as before.

The average rate at which flow i will transmit data is given by

    xi = (1/RTTi) · min( √(1.5/(b·p)), MWS ),    (8)

where b denotes the number of packets acknowledged per TCP ack [19].
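Averaging (8) over the uniformly distributed propagation delay yields the text's closed form, x̄ = log((b + τq)/(a + τq))/(b − a) · min(√(1.5/(b·p)), MWS). A sketch comparing it against direct numerical integration (the parameter values below are illustrative, not taken from the paper's experiments):

```python
import math

# Average of eq. (8) over propagation delays uniform on [a, b]:
#   xbar = log((b + tq) / (a + tq)) / (b - a) * min(sqrt(1.5 / (acks * p)), MWS)
# `acks` is the number of packets acknowledged per ACK (called b in the text;
# renamed here to avoid clashing with the interval endpoint b).

a, b = 0.040, 0.060     # propagation delay range in seconds (illustrative)
tq = 0.005              # queueing delay in seconds (illustrative)
p, acks, MWS = 1e-3, 1, 64

win = min(math.sqrt(1.5 / (acks * p)), MWS)  # window size in packets
closed_form = math.log((b + tq) / (a + tq)) / (b - a) * win

# Midpoint-rule check of (1/(b-a)) * integral of win/(tau + tq) over [a, b]:
n = 100000
numeric = sum(win / (a + (i + 0.5) * (b - a) / n + tq) for i in range(n)) / n

print(round(closed_form, 2), round(numeric, 2))  # the two agree
```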
In current TCP implementations, b is either 1 or 2. In our analysis and in our simulations, we assume that b = 1. Let τq denote the average queueing delay seen by each user, so that RTTi = τp^i + τq. Taking expectations over all users, we find that the average rate of transmission is

    x̄ = E[xi] = (1/(b − a)) ∫_a^b min( √(1.5/(b·p)), MWS ) / (x + τq) dx
              = (1/(b − a)) · log( (b + τq)/(a + τq) ) · min( √(1.5/(b·p)), MWS ).

The average packet arrival rate at the core router is

    λc = N·x̄.    (9)

To complete the setup, we require a model for the packet loss probability pc as a function of the arrival process. We present such a model here. In Section II, we had assumed that the arrival process to the core router is Poisson. This is a valid assumption when the access speeds are very small compared to the core router speeds and a large number of flows are required to congest the core router. At very high access speeds, it takes only a few flows to cause congestion on the core router. Therefore the packet arrival process at the core router tends to be bursty, and one cannot use the Poisson approximation that was used earlier. In this case, we have to model the packet arrival process using a stochastic process with a larger inter-arrival time variance. We use a diffusion approximation to study the resulting queueing process. Let the load on the core router queue be ρc = λc/C. Further, we denote the SCV (squared coefficient of variation, i.e., variance divided by the square of the mean) of the inter-arrival times of the arrival process by c_a^2. Then, according to [4], the loss probability is given by

    pc = ( θ·e^(θB) / (e^(θB) − 1) ) · ( (1 + c_a^2)·ρc / 2 ),    (10)

where θ = 2(ρc − 1)/(ρc·c_a^2 + 1). We use simulations to estimate c_a^2 and use this estimated value in (10).

With small buffers, the queueing delay is negligible compared to the propagation delay. As the buffer size increases, the round trip time increases due to an increase in the queueing delay. To model this, we calculate the average number of packets in the queue. According to [4], the average number of packets in the queue is given by

    q̄ = B/(1 − e^(−θB)) − 1/θ.

Therefore, the average queueing delay τq is given by

    τq = q̄ / C.    (11)

The set of fixed point equations (9)-(11) can be solved using standard fixed-point equation solvers. Then the overall link utilization is given by

    β(N, B) = ρc·(1 − pc).
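A sketch of the diffusion-approximation formulas (10)-(11) as reconstructed above; the load value is illustrative, while the coupling c_a^2 = B^0.63 follows the empirical SCV model described later in this section:

```python
import math

# Diffusion approximation for the loss probability and mean queue length at
# a bursty core router (a reconstruction of eqs. (10)-(11)); rho is the
# offered load and ca2 the SCV of packet inter-arrival times.

def theta(rho, ca2):
    return 2 * (rho - 1) / (rho * ca2 + 1)

def loss_prob(rho, ca2, B):
    th = theta(rho, ca2)
    return th * math.exp(th * B) / (math.exp(th * B) - 1) * (1 + ca2) * rho / 2

def avg_queue(rho, ca2, B):
    th = theta(rho, ca2)
    return B / (1 - math.exp(-th * B)) - 1 / th

rho = 0.90
for B in (10, 40, 160):
    ca2 = B ** 0.63   # empirical SCV model: burstiness grows with buffer size
    print(B, round(loss_prob(rho, ca2, B), 4), round(avg_queue(rho, ca2, B), 1))
# Loss falls only slowly with B (burstiness grows too), while the mean queue,
# and hence the queueing delay qbar / C of eq. (11), keeps growing.
```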
We now study the impact of small buffers on the AFCT when flows arrive and depart. We consider a single congested link of capacity C being accessed by many flows. Flows arrive according to a Poisson process of rate λ. Each flow seeks to transfer a file whose size is drawn from an exponential distribution with mean 1/µ. Then the number of active flows in the system, N(t), forms a Markov chain. Note that this is the same Markov chain as in Section II, except that the expression for β is different. A similar analysis has been carried out in [15], where the authors justify the use of processor-sharing queues to describe TCP flow-level performance. However, in [15] it is assumed that buffers are large, and consequently β is chosen to be unity. Our work can be considered a generalization which takes into account the impact of the buffer size. Define the load on the system as

    ρ = λ/(µC).

Assuming that ρ < lim_{N→∞} β(B, N) = β*, the distribution of the number of flows in the system at equilibrium is given by

    Prob(N = i) = (1/M) · ρ^i / Π_{j=1}^{i} β(j, B),

where M is a normalization constant given by

    M = Σ_{i=0}^{∞} ρ^i / Π_{j=1}^{i} β(j, B).

Using these equations, we can calculate Navg and the AFCT (using Little's law) as

    Navg = Σ_{i=0}^{∞} i·Prob{N = i},    AFCT = Navg/λ.
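The flow-level distribution and the AFCT can be computed by truncating the sums; a sketch with a hypothetical utilization profile β(j, B) (any profile works; with β ≡ 1 the model collapses to the M/M/1 processor-sharing formula AFCT = 1/(µC − λ)):

```python
# Flow-level birth-death model: P(N = i) proportional to
# rho^i / prod_{j<=i} beta(j), with Navg and AFCT from Little's law.
# Sums are truncated at a large i_max.

def afct(lam, mu, C, beta, i_max=2000):
    rho = lam / (mu * C)
    weights, w = [1.0], 1.0
    for i in range(1, i_max + 1):
        w *= rho / beta(i)
        weights.append(w)
    M = sum(weights)
    n_avg = sum(i * w for i, w in enumerate(weights)) / M
    return n_avg / lam

# Sanity check against processor sharing (beta == 1): AFCT = 1/(mu*C - lam).
lam, mu, C = 0.5, 1.0, 1.0
print(afct(lam, mu, C, lambda i: 1.0))  # 2.0 = 1/(1.0*1.0 - 0.5)

# A hypothetical buffer-limited profile: utilization saturates at 0.9,
# so the AFCT is strictly worse than under ideal processor sharing.
print(afct(lam, mu, C, lambda i: min(0.3 * i, 0.9)))
```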
We consider the following numerical example.

Example 4: Consider a core router of capacity 100 Mbps. Flows use TCP for data transfer with an MWS of 64. The RTT of each flow is chosen to be uniformly distributed between 40 and 60 ms. The mean flow size is assumed to be 1.1 MB. We assume that each packet is 1000 bytes. For this problem, C × RTT = 625 packets.

As we mentioned earlier, it is difficult to determine the SCV of the arrival process analytically when there are very few flows. Thus, we use simulations to determine the SCV of the arrival process. Our simulations indicate that, as the buffer size B increases, the SCV varies as B^0.63. In our analysis we therefore use the following empirical expression for the SCV:

    c_a^2 = B^0.63.

Using the theory developed earlier in this section, we calculate the AFCT as a function of the buffer size at various loads. These are plotted in Fig. 9. As indicated in these plots, it is clear that in networks with no access speed limitations it is impossible to reduce the buffer size without seriously degrading performance. At a modest 80% load, our analysis indicates
that the AFCT increases by nearly an order of magnitude when small buffers are used! Even when the load is small, say 50%, the overall AFCT doubles with the use of small buffers in the network.

Thus we conclude that, whenever core routers are severely congested, it is not possible to use small buffers at the routers. In fact, we require O(C × RTT) buffers in order to maintain good performance for the end users. Note that the fact that we have an empirical value for c_a^2 from simulations is not a serious limitation of our model. The model primarily offers qualitative insight and allows us to compute the appropriate order for the buffer size. In practice, precise buffer sizing rules might have to be obtained using simulations, but the model offers important insight into the physics of the congestion phenomenon.

Remark 2: Our conclusions are based on the assumption that TCP remains the protocol of choice even in networks where the core-to-access speed ratio is smaller than in today's networks. For example, if the access speed were to become larger than what TCP's current MWS can support, then we implicitly assume that the MWS is increased correspondingly to support the access rate. However, one could argue that when access speeds increase, we may use protocols other than TCP which are more efficient in large access-speed regimes (e.g., RCP [12], FAST TCP [16], Scalable TCP [18], BIC-TCP [23]). In such cases, a similar analysis can be carried out as before, although the values of β should be modified to reflect the efficiency of the new protocol. The conclusions depend on how the protocol efficiency (i.e., β) varies with the number of flows and with the amount of buffering.

IV. SIMULATION RESULTS

Our objective in this section is to test the accuracy of the various models described in Sections II and III. We have conducted detailed packet-level simulations using ns-2 [1]. We consider a dumb-bell shaped network topology as in Fig. 5. File transfer requests arrive according to a Poisson process. These flows access the network via the access routers, transmit data and then leave the system once all the ack packets have been received by the sender. The core router capacity is varied from 100 Mbps to 500 Mbps depending on the regime that we would like to study. The access router capacity is varied from 2 Mbps to 10 Mbps when we study the case with limited access capacities, and is set to 30 Mbps when we study high access speed networks. The link delay at the core router is 10 ms. The access links have delays that are uniformly distributed between 10 ms and 20 ms. Thus, the two-way propagation delay τp is uniformly distributed between 40 ms and 60 ms. We fix the packet size to be 1000 bytes.

Fig. 5. Network Topology (dumb-bell: sources and sinks connected via 30 Mbps access links, with 15 ms average delay, to a 100 Mbps core link with 10 ms delay).

There has been a lot of work on the traffic characterization of the Internet [8], [7], [14]. While the exact numbers vary from time to time and from link to link, it is largely believed that Internet traffic is heavy-tailed (see, for example, [7], [8]); that is, most of the files are very short, and a few files tend to send large amounts of data. Usually, about 70-90% of the flows are short and they contribute about 10-30% of the overall load. The long flows, which are about 10-30% of the flows, make up nearly 70% of the overall traffic.

In our simulations, we neglect the effects of short flows. Short flows have very small transmission times. Since TCP starts data transmission with a small window size, it is very likely that short flows do not last long enough to utilize the access speed capacity fully. As such, short flows do not cause congestion either on the access router or on the core router. Therefore the presence (or absence) of short flows does not affect our buffer sizing results, since buffer sizing is based on the flows that cause congestion on the router.

It has been suggested [3] that a bounded Pareto (b.p.) distribution can be used to capture the heavy-tailed property of Internet traffic. A b.p. distribution B(c, d, α) has the following c.d.f.:

    Prob{X < x} = (c^−α − x^−α) / (c^−α − d^−α).

The b.p. distribution has the following property:

    Prob{X < x | X > y} = (y^−α − x^−α) / (y^−α − d^−α).

We refer to all flows whose size is greater than a particular value y as long flows. Then the above observation indicates that the distribution of long flows will also be a b.p. distribution. Therefore, in all our simulations, we assume that the long flows are distributed according to a b.p. distribution. The long flows arrive according to a Poisson process with rate λ, and the b.p. parameters are α = 1.1, y = 200 KB and d = 200 MB. With these parameters, the mean flow size is 1.1 MB. By varying λ, we can change the overall load on the system.
The access router capacity is varied from overall load on the system.to 2 Mbps-10 Mbps when we study the case with limitedaccess capacities and is set to 30 Mbps when we study high A. Access limited Networksaccess speed networks. The link delay at the core router is 10 In these simulations, we study the effect of buffer sizingms. The access links have delays that are uniformly distributed in networks where the access link capacity is very smallbetween 10 ms-20 ms. Thus, the two-way propagation delay compared to the core router capacity. We assume that TCPτp is uniformly distributed between 40 ms-60 ms. We fix the has a MWS of 64 packets. This is consistent with the currentpacket size to be 1000 bytes. implementations of TCP. The access link capacity is assumed There has been a lot of work on traffic characterization of to be 2 Mbps. The core router can switch packets at the ratethe Internet [8], [7], [14]. While the exact numbers vary from of 96 Mbps (We assume that the router can switch packetstime to time and from link to link, it is largely believed that at a raw bit rate of 100 Mbps. However, each packet has athe Internet traffic is heavy-tailed (see for example, [7], [8]); 40 byte TCP header and therefore the maximum good put is 1000that is most of the files are very short and a few files tend to 1040 × 100 = 96 Mbps.). We chose the access link buffer sizesend large amounts of data. Usually, about 70 − 90% of the to be 13 packets, mainly to ensure that the overall utilizationflows are short and they contribute to about 10 − 30% of the of the access link is close to unity. The core router buffer size
is varied from 20 packets to 1000 packets. The load on the system is varied from 0.6 to 0.8.

The results of the simulations and the corresponding theoretical predictions are presented in Fig. 6. As seen from the figures, our model is quite accurate and predicts the results with less than 10% error. Further, even when the external load is 80%, there is little degradation in throughput with small buffers. Why is this so? According to our model, when the access speeds are small, the core router will experience very little congestion. Therefore, very small buffers suffice. We now need to find out whether our reasoning is correct.

Fig. 6. AFCT under access-limited networks: theory and simulations. (Scenario: C = 100 Mbps, C_a = 2 Mbps, avg. RTT = 50 ms. Both panels, simulations and theoretical results, plot the AFCT (sec) against the core router buffer size (pkts) for ρ = 0.6, 0.7, and 0.8.)

To justify our claim, we plot the packet drop probability at the core router and at the access router in Fig. 8. As the figure demonstrates, losses on the core router are several orders of magnitude smaller than the losses on the access router. Since the transmission rate of TCP is inversely proportional to the total loss probability (i.e., p_a + p_c), the packet loss probability on the core router is too small to influence the transmission rate of the end-users. In other words, the core router is not congested.

Our theoretical analysis also indicates that the core router buffer size can be chosen independently of its capacity. To verify this claim, the performance of the system was studied at different core router speeds without changing the traffic parameters or the access speed limitations. The external load was chosen to be 0.8. The core router buffer size was varied from 20 packets to 1000 packets. The results of the simulations are presented in Fig. 7. From the figure, it is quite clear that the flows do not suffer any performance degradation with small buffers. Our analysis and simulations strongly suggest that in access-limited networks the core router buffer size can be reduced to about 100 packets without affecting performance.

Fig. 7. Demonstration of O(1) buffering in access-limited networks. (AFCT (sec) against the core router buffer size (pkts) at ρ = 0.8, for C = 200 Mbps, 500 Mbps, and 1000 Mbps.)

B. Networks with very fast edge routers

We study a scenario in which the edge routers do not limit the transmission rate of TCP. In this simulation we set the access speed to 30 Mbps. Since current implementations of TCP have an MWS of 64 KB (which translates to 64 packets in our simulations), the maximum throughput achievable by TCP is

x_max = 64/RTT ≈ 12.8 Mbps.

Therefore, it is quite clear that in this setting, access speed limitations do not limit TCP throughput. Furthermore, due to the window flow control mechanism, a TCP connection cannot transmit more than 64 packets within an RTT. To prevent the edge router from imposing any kind of restriction on TCP, the edge router buffer size was set to 64 packets. Similar to the previous exercise, the core router buffer size was varied from 20 packets to 1000 packets. The load on the system was varied from 0.5 to 0.8.

To validate the theoretical results of Section III using the simulation results, we have to know how the SCV of the arrival process (c_a^2) varies with the buffer size. As discussed in Example 4, our simulations indicate that c_a^2 varies with the buffer size B roughly as B^0.63. We use this value of c_a^2 in our model. Using the theoretical models developed in Section III, we were able to predict the performance of the system for different buffer sizes. The simulation results and the theoretical predictions are presented in Fig. 9. Our theoretical results match the simulation results consistently at all buffer sizes and at all loads, thereby validating our theoretical model. Additionally, as seen from these results, small buffers in such networks degrade the performance significantly. For example, when the system load is 0.8, the AFCT can be decreased by nearly 85% by increasing the buffer size from 20 packets to 1000 packets. Similarly, as seen in the simulations, the average throughput increases by about 400% with the increase in the buffer size.

V. CONCLUSIONS

In this paper, we have developed simple models to provide buffer sizing guidelines for today's high-speed routers. Our analysis points out that the core-to-access speed ratio is the key parameter which determines the buffer sizing guidelines.
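The arithmetic quoted in Section IV can be cross-checked numerically. Below is a minimal sketch (Python; the helper names bp_cdf, bp_mean, and window_limited_bps are ours, not the paper's) that verifies the 1.1 MB mean of the B(y, d, α) flow-size distribution, the truncation property of the bounded Pareto c.d.f., the 96 Mbps goodput figure, and the window-limited throughput bound x_max = MWS/RTT.

```python
# Sanity checks for the simulation parameters quoted in Section IV.
# Helper names below are illustrative, not from the paper.

def bp_cdf(x, c, d, alpha):
    """C.d.f. of a bounded Pareto B(c, d, alpha) on [c, d]."""
    return (c**-alpha - x**-alpha) / (c**-alpha - d**-alpha)

def bp_mean(c, d, alpha):
    """Closed-form mean of B(c, d, alpha), valid for alpha != 1."""
    norm = 1.0 - (c / d)**alpha
    return (c**alpha / norm) * alpha / (alpha - 1.0) * (c**(1.0 - alpha) - d**(1.0 - alpha))

def window_limited_bps(window_pkts, pkt_bytes, rtt_s):
    """Throughput of a window-limited TCP flow: at most one window per RTT."""
    return window_pkts * pkt_bytes * 8.0 / rtt_s

alpha, y, d = 1.1, 200e3, 200e6           # flow-size parameters from Section IV
print(bp_mean(y, d, alpha) / 1e6)          # ~1.1 (MB): the quoted mean flow size

# Truncation property: flows larger than a cutoff t are again bounded Pareto B(t, d, alpha).
t, x = 1e6, 5e6
cond = (bp_cdf(x, y, d, alpha) - bp_cdf(t, y, d, alpha)) / (1.0 - bp_cdf(t, y, d, alpha))
print(abs(cond - bp_cdf(x, t, d, alpha)))  # ~0: matches Prob{X < x | X > t}

print(1000.0 / 1040.0 * 100.0)             # ~96 (Mbps): goodput after the 40-byte header
print(window_limited_bps(64, 1000, 0.040) / 1e6)  # 12.8 (Mbps): 64-pkt MWS at 40 ms RTT
```

With the 40-60 ms round-trip delays used in the simulations, the window-limited bound stays between roughly 8.5 and 12.8 Mbps, well below the 30 Mbps access links of Section IV-B, which is consistent with the paper's claim that the MWS, not the access link, caps per-flow throughput in that regime.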
In particular, this parameter along with the buffer size determines the typical number of flows in the network. Thus, an important message of this paper is that the number of flows and the buffer size should not be treated as independent parameters in deriving buffer sizing guidelines. Further, we also point out that link utilization is not a good measure of the congestion level at a router. In fact, we show that even at 98% utilization, the core router may contribute very little to the overall packet loss probability seen by a source if the core-to-access speed ratio is large.

Fig. 8. Packet loss probability at core and access routers. (Scenario: C = 100 Mbps, C_a = 2 Mbps, avg. RTT = 50 ms. Packet loss probability against the core router buffer size (pkts) for ρ = 0.6, 0.7, and 0.8, at the access router (roughly 0.02-0.05) and at the core router (below 0.002).)

Fig. 9. Impact of the core router buffer size in networks with fast edge routers: theory and simulations. (Scenario: C = 100 Mbps, C_a = 30 Mbps, avg. RTT = 50 ms. Both panels, simulations and theoretical results, plot the AFCT (sec) against the core router buffer size (pkts) for ρ = 0.6, 0.7, and 0.8.)

VI. ACKNOWLEDGMENTS

We would like to thank Dr. Damon Wischik for his suggestions and comments on an earlier version of this paper. The research reported here was supported by NSF grants ECS 04-01125 and CCF 06-34891.

REFERENCES

[1] The network simulator: NS-2. Available at http://www.isi.edu/nsnam/ns.
[2] G. Appenzeller, I. Keslassy, and N. McKeown. Sizing router buffers. In ACM SIGCOMM, 2004.
[3] N. Bansal and M. Harchol-Balter. Analysis of SRPT scheduling: Investigating unfairness. In Proceedings of the 2001 ACM SIGMETRICS International Conference on Measurement and Modeling of Computer Systems, 2001.
[4] A. Berger and W. Whitt. Brownian motion approximations for the rate-controlled throttle and the G/G/1/C queue. Journal of Discrete-Event Dynamic Systems, 2:7-60, 1992.
[5] D. Bertsekas and R. Gallager. Data Networks. Prentice Hall, Englewood Cliffs, NJ, 1992.
[6] J. Cao and K. Ramanan. A Poisson limit for buffer overflow probabilities. In Proceedings of IEEE INFOCOM, June 2002.
[7] M. Crovella, M. Taqqu, and A. Bestavros. A Practical Guide to Heavy Tails: Statistical Techniques for Analyzing Heavy-Tailed Distributions. Birkhauser, 1998.
[8] M. E. Crovella and A. Bestavros. Self-similarity in World Wide Web traffic: Evidence and possible causes. IEEE/ACM Transactions on Networking, pages 835-846, 1997.
[9] A. Das and R. Srikant. Diffusion approximations for models of congestion control in high-speed networks. IEEE Transactions on Automatic Control, pages 1783-1799, October 2000.
[10] A. Dhamdhere and C. Dovrolis. Open issues in router buffer sizing. ACM SIGCOMM Computer Communication Review, pages 87-92, January 2006.
[11] A. Dhamdhere, H. Jiang, and C. Dovrolis. Buffer sizing for congested Internet links. In Proceedings of IEEE INFOCOM, March 2005.
[12] N. Dukkipati and N. McKeown. Processor sharing flows in the Internet. High Performance Networking Group Technical Report TR04-HPNG-061604, June 2004.
[13] M. Enachescu, Y. Ganjali, A. Goel, T. Roughgarden, and N. McKeown. Part III: Routers with very small buffers. ACM SIGCOMM Computer Communication Review, 35(3):7, July 2005.
[14] C. Fraleigh, S. Moon, B. Lyles, C. Cotton, M. Khan, D. Moll, R. Rockell, T. Seely, and C. Diot. Packet-level traffic measurements from the SPRINT IP backbone. IEEE Network, 17(6):6-16, November-December 2003.
[15] S. Ben Fredj, T. Bonald, A. Proutiere, G. Regnie, and J. W. Roberts. Statistical bandwidth sharing: a study of congestion at flow level. In Proceedings of ACM SIGCOMM, August 2001.
[16] C. Jin, D. X. Wei, S. H. Low, G. Buhrmaster, J. Bunn, D. H. Choe, R. L. A. Cottrell, J. C. Doyle, W. Feng, O. Martin, H. Newman, F. Paganini, S. Ravot, and S. Singh. FAST TCP: From theory to experiments. IEEE Network, 19(1):4-11, January/February 2005.
[17] F. P. Kelly. Reversibility and Stochastic Networks. John Wiley, New York, NY, 1976.
[18] T. Kelly. Scalable TCP: Improving performance in highspeed wide area networks. Computer Communication Review, 32(2), April 2003.
[19] J. Padhye, V. Firoiu, D. Towsley, and J. Kurose. Modeling TCP throughput: A simple model and its empirical validation. IEEE/ACM Transactions on Networking, 8(2), April 2000.
[20] G. Raina, D. Towsley, and D. Wischik. Part II: Control theory for buffer sizing. ACM SIGCOMM Computer Communication Review, pages 79-82, July 2005.
[21] C. Villamizar and C. Song. High performance TCP in ANSNET. ACM Computer Communications Review, 24(5):45-60, 1994.
[22] D. Wischik and N. McKeown. Part I: Buffer sizes for core routers. ACM SIGCOMM Computer Communication Review, pages 75-78, July 2005.
[23] L. Xu, K. Harfoush, and I. Rhee. Binary increase congestion control for fast long-distance networks. In Proceedings of IEEE INFOCOM, 2004.