1026 IEEE/ACM TRANSACTIONS ON NETWORKING, VOL. 18, NO. 4, AUGUST 2010

A Machine Learning Approach to TCP Throughput Prediction

Mariyam Mirza, Joel Sommers, Member, IEEE, Paul Barford, and Xiaojin Zhu

Abstract—TCP throughput prediction is an important capability for networks where multiple paths exist between data senders and receivers. In this paper, we describe a new lightweight method for TCP throughput prediction. Our predictor uses Support Vector Regression (SVR); prediction is based on both prior file transfer history and measurements of simple path properties. We evaluate our predictor in a laboratory setting where ground truth can be measured with perfect accuracy. We report the performance of our predictor for oracular and practical measurements of path properties over a wide range of traffic conditions and transfer sizes. For bulk transfers in heavy traffic using oracular measurements, TCP throughput is predicted within 10% of the actual value 87% of the time, representing nearly a threefold improvement in accuracy over prior history-based methods. For practical measurements of path properties, predictions can be made within 10% of the actual value nearly 50% of the time, approximately a 60% improvement over history-based methods, and with much lower measurement traffic overhead. We implement our predictor in a tool called PathPerf, test it in the wide area, and show that PathPerf predicts TCP throughput accurately over diverse wide area paths.

Index Terms—Active measurements, machine learning, support vector regression, TCP throughput prediction.

I. INTRODUCTION

The availability of multiple paths between sources and receivers enabled by content distribution, multihoming, and overlay or virtual networks suggests the need for the ability to select the "best" path for a particular data transfer. A common starting point for this problem is to define "best" in terms of the throughput that can be achieved over a particular path between two end-hosts for a given sized TCP transfer. In this case, the fundamental challenge is to develop a technique that provides an accurate TCP throughput forecast for arbitrary and possibly highly dynamic end-to-end paths.

Prior work on the problem of TCP throughput prediction has largely fallen into two categories: those that investigate formula-based approaches and those that investigate history-based approaches. Formula-based methods, as the name suggests, predict throughput using mathematical expressions that relate a TCP sender's behavior to path and end-host properties such as RTT, packet loss rate, and receive window size. In this case, different measurement tools can be used to gather the input data that is then plugged into the formula to generate a prediction. However, well-known network dynamics and limited instrumentation access complicate the basic task of gathering timely and accurate path information, and the ever-evolving set of TCP implementations means that a corresponding set of formula-based models must be maintained.

History-based TCP throughput prediction methods are conceptually straightforward. They typically use some kind of standard time series forecasting based on throughput measurements derived from prior file transfers. In recent work, He et al. show convincingly that history-based methods are generally more accurate than formula-based methods. However, the authors carefully outline the conditions under which history-based prediction can be effective [11]. Also, history-based approaches described to date remain relatively inaccurate and potentially heavyweight processes focused on bulk transfer throughput prediction.

Our goal is to develop an accurate, lightweight tool for predicting end-to-end TCP throughput for arbitrary file sizes. We investigate the hypothesis that the accuracy of history-based predictors can be improved and their impact on a path reduced by augmenting the predictor with periodic measurements of simple path properties. The questions addressed in this paper include: 1) which path properties or combination of path properties increase the accuracy of TCP throughput prediction the most? and 2) what is a minimum set of file sizes required to generate history-based throughput predictors for arbitrary file sizes? Additional goals for our TCP throughput prediction tool are: 1) to make it robust to "level shifts" (i.e., when path properties change significantly), which He et al. show to be a challenge in history-based predictors; and 2) to include a confidence value with predictions—a metric with little treatment in prior history-based throughput predictors.

We use Support Vector Regression (SVR), a powerful machine learning technique that has shown good empirical performance in many domains, for throughput prediction. SVR has several properties that make it well suited for our study: 1) It can accept multiple inputs (i.e., multivariate features) and will use all of these to generate the throughput prediction. 2) SVR does not commit to any particular parametric form, unlike formula-based approaches. Instead, SVR models are flexible based on their use of so-called nonlinear kernels. This expressive power is the reason why SVR has potential to be more accurate than prior methods. 3) SVR is computationally efficient, which makes it attractive for inclusion in a tool that can be deployed and used in the wide area. For our application, we extend the basic SVR predictor with a confidence interval estimator based on the assumption that prediction errors are normally distributed, an assumption that we test in our laboratory experiments. Estimation of confidence intervals is critical for online prediction since retraining can be triggered if measured throughput falls outside a confidence interval computed through previous measurements.

We begin by using laboratory-based experiments to investigate the relationship between TCP throughput and measurements of path properties including available bandwidth (AB), queuing delay (Q), and packet loss (L). The lab environment enables us to gather highly accurate passive measurements of throughput and all path properties and develop and test our SVR-based predictor over a range of realistic traffic conditions. Our initial experiments focus on bulk transfers and compare actual throughput of TCP flows to predictions generated by our SVR-based tool. Our results show that throughput predictions can be improved by as much as a factor of 3 when including path properties in the SVR-based tool versus a history-based predictor. For example, our results show that the SVR-based predictions are within 10% of actual 87% of the time for bulk transfers under heavy traffic conditions (90% average utilization on the bottleneck link). Interestingly, we find that the path properties that provide the most improvement to the SVR-based predictor are Q and L, respectively, and that including AB provides almost no improvement to the predictor.

Next, we expand the core SVR-based tool in three ways. First, the initial tests were based entirely on passive traffic measurements, which are unavailable in the wide area, so we tested our SVR-based approach using measurements of L and Q provided by the BADABING tool [25]. The reduction in accuracy of active versus passive measurements resulted in a corresponding reduction in accuracy of SVR-based throughput predictions for bulk transfers under heavy traffic conditions on the order of about 35%—still a significant improvement on history-based estimates. Also, throughput prediction based on training plus lightweight active measurements results in a dramatically lower network probe load than prior history-based methods using long-lived TCP transfers and heavyweight probe-based estimates of available bandwidth such as described in [11]. We quantify this difference in Section VII. Second, unlike prior work that focuses only on bulk transfers, we enabled SVR to predict throughput accurately for a wide range of file sizes. We found that a training set of only three file sizes results in accurate throughput predictions for a wide range of file sizes. Third, He et al. showed that "level shifts" in path conditions pose difficulties for throughput prediction [11], suggesting the need for adaptivity. To accomplish this, we augmented the basic SVR predictor with a confidence interval estimator as a mechanism for triggering a retraining process. We show in Section VI-C1 that our technique is able to adapt to level shifts quickly and to maintain high accuracy on paths where level shifts occur. We implement these capabilities in a tool called PathPerf (footnote 1: PathPerf will be openly available for download at http://www.wail.cs.wisc.edu/waildownload.py) that we tested in the wide area. We present results from 18 diverse wide area paths in the RON testbed [3] and show that PathPerf predicts throughput accurately under a broad range of conditions in real networks.

(Manuscript received December 02, 2008; revised July 15, 2009; approved by IEEE/ACM TRANSACTIONS ON NETWORKING Editor C. Dovrolis. First published January 12, 2010; current version published August 18, 2010. This work was supported in part by NSF grants CNS-0347252, CNS-0646256, CNS-0627102, and CCR-0325653. Any opinions, findings, conclusions, or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the NSF. An earlier version of this paper appeared in SIGMETRICS 2007. M. Mirza, P. Barford, and X. Zhu are with the Department of Computer Sciences, University of Wisconsin–Madison, Madison, WI 53706 USA (e-mail: mirza@cs.wisc.edu; pb@cs.wisc.edu; jerryzhu@cs.wisc.edu). J. Sommers is with the Department of Computer Science, Colgate University, Hamilton, NY 13346 USA (e-mail: jsommers@colgate.edu). Digital Object Identifier 10.1109/TNET.2009.2037812. 1063-6692/$26.00 © 2010 IEEE.)

II. RELATED WORK

Since seminal work by Jacobson and Karels established the basic mechanisms for modern TCP implementations [13], it has been well known that many factors affect TCP throughput, such as the TCP implementation, the underlying network structure, and the dynamics of the traffic sharing the links on the path between two hosts. A number of studies have taken steps toward understanding TCP behavior, including [1] and [2], which developed stochastic models for TCP based on packet loss characteristics. A series of studies develop increasingly detailed mathematical expressions for TCP throughput based on modeling the details of the TCP congestion control algorithm and measurements of path properties [4], [8], [10], [17], [18]. While our predictor also relies on measurement of path properties, the SVR-based approach is completely distinguished from prior formula-based models.

A large number of empirical studies of TCP file transfer and throughput behavior have provided valuable insight into TCP performance. Paxson conducted one of the most comprehensive studies of TCP behavior [19], [20]. While that work exposed a plethora of issues, it provided some of the first empirical data on the characteristics of packet delay, queuing, and loss within TCP file transfers. Barford and Crovella's application of critical path analysis to TCP file transfers provides an even more detailed perspective on how delay, queuing, and loss relate to TCP performance [6]. In [5], Balakrishnan et al. studied throughput from the perspective of a large Web server and showed how it varied depending on end-host and time-of-day characteristics. Finally, detailed studies of throughput variability over time and correlations between throughput and flow size can be found in [31] and [32], respectively. These studies inform our work in terms of the basic characteristics of throughput that must be considered when building our predictor.

Past studies of history-based methods for TCP throughput prediction are based on standard time series forecasting methods. Vazhkudai et al. compare several different simple forecasting methods to estimate TCP throughput for transfers of large files and find similar performance across predictors [30]. A well-known system for throughput prediction is the Network Weather Service [28]. That system makes bulk transfer forecasts by attempting to correlate measurements of prior large TCP file transfers with periodic small (64 kB) TCP file transfers (referred to as "bandwidth probes"). The DualPats system for TCP throughput prediction is described in [16]. That system makes throughput estimates based on an exponentially weighted moving average of larger size bandwidth probes (1.2 MB total). Similar to our work, Lu et al. found that prediction errors generally followed a normal distribution. As mentioned earlier, He et al. extensively studied history-based predictors using three different time series forecasts [11]. Our SVR-based method includes information from prior transfers for training, but otherwise only requires measurements from lightweight probes and is generalized for all file sizes, not just bulk transfers.

Many techniques have been developed to measure path properties (see CAIDA's excellent summary page for examples [9]). Prior work on path property measurement directs our selection of lightweight probe tools to collect data for our predictor. Recent studies have focused on measuring available bandwidth (AB) on a path. AB is defined informally as the minimum unused capacity on an end-to-end path, which is a conceptually appealing property with respect to throughput prediction. A number of studies have described techniques for measuring AB, including [14], [26], and [27]. We investigate the ability of AB measurement as well as other path properties to enhance TCP throughput predictions.

Finally, machine learning techniques have not been widely applied to network measurement. One notable exception is in network intrusion detection (e.g., [12]). The only other application of SVR that we know of is to the problem of using IP address structure to predict round-trip time latency [7].

III. A MULTIVARIATE MACHINE LEARNING TOOL

The main hypothesis of this work is that history-based TCP throughput prediction can be improved by incorporating measurements of end-to-end path properties. The task of throughput prediction can be formulated as a regression problem, i.e., predicting a real-valued number based on multiple real-valued input features. Each file transfer is represented by a feature vector $x$ of dimension $p$. Each dimension is an observed feature, e.g., the file size, proximal measurements of path properties such as queuing delay, loss, available bandwidth, etc. Given $x$, we want to predict the throughput $y$. This is achieved by training a regression function $f$ and applying $f$ to $x$. The function $f$ is trained using training data, i.e., historical file transfers with known features and the corresponding measured throughput.

The analytical framework that we apply to this problem is Support Vector Regression (SVR), a state-of-the-art machine learning tool for multivariate regression. SVR is the regression version of the popular Support Vector Machines [29]. It has a solid theoretical foundation and is favored in practice for its good empirical performance. We briefly describe SVR below and refer readers to [21] and [23] for details and to [15] as an example of an SVR software package.

To understand SVR, we start from a linear regression function $f(x) = w^\top x + b$. Assume we have a training set of $n$ file transfers $\{(x_1, y_1), \ldots, (x_n, y_n)\}$. Training involves estimating the $p$-dimensional weight vector $w$ and offset $b$ so that $f(x_i)$ is close to the truth $y_i$ for all training examples $i = 1, \ldots, n$. There are many ways to measure "closeness". The traditional measure used in SVR is the $\epsilon$-insensitive loss, defined as

$$L_\epsilon(f(x), y) = \begin{cases} |f(x) - y| - \epsilon & \text{if } |f(x) - y| \geq \epsilon \\ 0 & \text{otherwise.} \end{cases} \qquad (1)$$

This loss function measures the absolute error between prediction and truth, but with a tolerance of $\epsilon$. The value of $\epsilon$ is application-dependent in general, and in our experiments we set it to zero. Other loss functions (e.g., the squared loss) are possible, too, and often give similar performance. They are not explored in this paper.

It might seem that the appropriate way to estimate the parameters $w, b$ is to minimize the overall loss on the training set $\sum_{i=1}^{n} L_\epsilon(f(x_i), y_i)$. However, if $p$ is large compared to the number of training examples $n$, one can often fit the training data perfectly. This is dangerous because the truth $y$ in training data actually contains random fluctuations, and $f$ is partly fitting the noise. Such an $f$ will generalize poorly, i.e., cause bad predictions on future test data. This phenomenon is known as overfitting. To prevent overfitting, one can reduce the degrees of freedom in $f$ by selecting a subset of features, thus reducing $p$. An implicit but more convenient alternative is to require $f$ to be smooth (footnote 2: If $w$ are the coefficients of a polynomial function, the function will tend to be smooth, i.e., change slowly, if $\|w\|$ is small, or noisy if $\|w\|$ is large), defined as having a small parameter norm $\|w\|^2$. Combining loss and smoothness, we estimate the parameters $w, b$ by solving the following optimization problem:

$$\min_{w, b} \; C \sum_{i=1}^{n} L_\epsilon(f(x_i), y_i) + \|w\|^2 \qquad (2)$$

where $C$ is a weight parameter to balance the two terms. The value of $C$ is usually selected by a procedure called cross validation, where the training set is randomly split into two parts, then regression functions with different $C$ are trained on one part and their performance measured on the other part, and finally one selects the $C$ value with the best performance. In our experiments, we selected $C$ using cross validation. The optimization problem can be solved using a quadratic program.

Nonetheless, a linear function $f(x) = w^\top x + b$ is fairly restrictive and may not be able to describe the true function. A standard mathematical trick is to augment the feature vector $x$ with nonlinear bases derived from $x$. For example, if $x = (x_1)$, one can augment it with $\phi(x) = (x_1, x_1^2, x_1^3)$. The linear regressor $f(x) = w^\top \phi(x) + b$ in the augmented feature space then produces a nonlinear fit in the original feature space. Note $w$ has more dimensions than before. The more dimensions $\phi(x)$ has, the more expressive $f$ becomes.

In the extreme (and often beneficial) case, $\phi(x)$ can even have infinite dimensions. It seems computationally impossible to estimate the corresponding infinite-dimensional parameter $w$. However, if we convert the primal optimization problem (2) into its dual form, one can show that the number of dual parameters is actually $n$ instead of the dimension of $\phi(x)$. Furthermore, the dual problem never uses the augmented feature $\phi(x)$ explicitly. It only uses the inner product between pairs of augmented features $\phi(x_i)^\top \phi(x_j)$. The function $K(x_i, x_j) = \phi(x_i)^\top \phi(x_j)$ is known as the kernel and can be computed from the original feature vectors $x_i, x_j$. For instance, the Radial Basis Function (RBF) kernel $K(x_i, x_j) = \exp(-\gamma \|x_i - x_j\|^2)$ implicitly corresponds to an infinite-dimensional feature space. In our experiments, we used an RBF kernel whose width $\gamma$ was again selected by cross validation. The dual problem can still be efficiently solved using a quadratic program.

SVR therefore works as follows. For training, one collects a training set $\{(x_1, y_1), \ldots, (x_n, y_n)\}$ and specifies a kernel $K$. SVR solves the dual optimization problem, which equivalently finds the potentially very high-dimensional parameters $w$ and $b$ in the augmented feature space defined by $K$. This produces a potentially highly nonlinear prediction function $f(x)$. The function $f(x)$ can then be applied to arbitrary test cases and produces a prediction. In our case, test cases are the file size for which a prediction is to be made and current path properties based on active measurements.

IV. EXPERIMENTAL ENVIRONMENT AND METHODOLOGY

This section describes the laboratory environment and experiments that we used to evaluate our throughput predictor.
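The train-then-predict workflow described above can be sketched with an off-the-shelf SVR implementation. The snippet below is a minimal illustration, not the authors' PathPerf code: the synthetic feature names and data, the parameter grids, and the toy throughput function are all hypothetical, but the setup mirrors the paper's choices of an RBF kernel, $\epsilon = 0$, cross-validated $C$ and $\gamma$, and mutually exclusive training and test sets.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVR

rng = np.random.default_rng(0)

# Synthetic "file transfers": columns stand in for file size,
# queuing delay (Q), and loss (L), all scaled to [0, 1].
X = rng.uniform(0.0, 1.0, size=(200, 3))
# Hypothetical ground truth: throughput (Mb/s) falls with loss and queuing delay.
y = 20.0 * (1.0 - X[:, 2]) / (1.0 + X[:, 1]) + rng.normal(0.0, 0.5, 200)

# epsilon=0 mirrors the paper's choice; C and the RBF width gamma are
# chosen by cross validation, as in Section III.
grid = GridSearchCV(
    SVR(kernel="rbf", epsilon=0.0),
    param_grid={"C": [1, 10, 100, 1000], "gamma": [0.1, 1.0, 10.0]},
    cv=5,
)
grid.fit(X[:100], y[:100])    # mutually exclusive training set...
pred = grid.predict(X[100:])  # ...and test set
```

The fitted model plays the role of the trained function $f(x)$: at prediction time one feeds it the file size plus current path-property measurements and reads off a throughput estimate.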
Fig. 1. Laboratory testbed. Cross traffic flowed across one of two routers at hop B, while probe traffic flowed through the other. Optical splitters connected Endace DAG 3.5 and 3.8 passive packet capture cards to the testbed between hops B and C and hops C and D. Measurement traffic (file transfers, loss probes, and available bandwidth probes) flowed from left to right. Congestion in the testbed occurred at hop C.

A. Experimental Environment

The laboratory testbed used in our experiments is shown in Fig. 1. It consisted of commodity end-hosts connected to a dumbbell-like topology of Cisco GSR 12000 routers. Both measurement and background traffic were generated and received by the end-hosts. Traffic flowed from the sending hosts on separate paths via Gigabit Ethernet to separate Cisco GSRs (hop B in the figure), where it was forwarded on OC12 (622 Mb/s) links. This configuration was created in order to accommodate a precision passive measurement system. Traffic from the OC12 links was then multiplexed onto a single OC3 (155 Mb/s) link (hop C in the figure), which formed the bottleneck where congestion took place. We used an AdTech SX-14 hardware-based propagation delay emulator on the OC3 link to add 25 ms of delay in each direction for all experiments and configured the bottleneck queue to hold approximately 50 ms of packets. Packets exited the OC3 link via another Cisco GSR 12000 (hop D in the figure) and passed to receiving hosts via Gigabit Ethernet.

The measurement hosts and traffic generation hosts were identically configured workstations running FreeBSD 5.4. The workstations had 2-GHz Intel Pentium 4 processors with 2 GB of RAM and Intel Pro/1000 network cards. They were also dual-homed, so all management traffic was on a separate network from that depicted in Fig. 1. We disabled the TCP throughput history caching feature in FreeBSD 5.4, controlled by the variable net.inet.tcp.inflight.enable, to allow TCP throughput to be determined by current path properties rather than throughput history.

A key aspect of our testbed was the measurement system used to establish the true path properties for our evaluation. Optical splitters were attached to both the ingress and egress links at hop C, and Endace DAG 3.5 and 3.8 passive monitoring cards were used to capture traces of all packets entering and leaving the bottleneck node. By comparing packet headers, we were able to identify which packets were lost at the congested output queue during experiments and accurately measure available bandwidth on the congested link. Furthermore, the fact that the measurements of packets entering and leaving hop C were synchronized at a very fine granularity (i.e., a single microsecond) enabled us to precisely measure queuing delays through the congested router.

B. Experimental Protocol

We generated background traffic by running the Harpoon IP traffic generator [24] between up to four pairs of traffic generation hosts as illustrated in Fig. 1. Harpoon produced open-loop self-similar traffic using a heavy-tailed file size distribution, mimicking a mix of application traffic such as Web and peer-to-peer applications common in today's Internet. Harpoon was configured to produce average offered loads ranging from approximately 60% to 105% on the bottleneck link (the OC3 between hops C and D).

As illustrated in Fig. 2, measurement traffic in the testbed consisted of file transfers and active measurements of AB, L, and Q. We also measured all three metrics passively using packet traces. All measurements are aggregates for all flows on the path. Measurements were separated by 30-s intervals. For the measurement traffic hosts, we increased the TCP receive window size to 128 kB (with the window scaling option turned on). In receive-window-limited transfers, file transfer throughput was approximately 21 Mb/s. That is, if the available bandwidth on the bottleneck link was 21 Mb/s or more, the flow was receive window (rwnd) limited; otherwise, it was congestion window (cwnd) limited. We increased the receive window size because we wanted file transfers to be cwnd-limited so we could evaluate our predictor thoroughly; rwnd-limited transfers have constant throughput for a given rwnd size, so any reasonable predictor performs well on rwnd-limited transfers. For this reason, we do not report any results for rwnd-limited flows.

Series of measurements were collected and then split up into mutually exclusive training and test sets for SVR. Notions of separate training and test sets are not required for history-based methods; rather, predictions are made over the continuous notion of distant and recent history. In our evaluation of history-based methods, we use the final prediction of the training set as the starting point for the test set.

Fig. 2 shows that we gather three types of measurements for each experiment: Oracular Passive Measurements (OPM), Practical Passive Measurements (PPM), and Active Measurements (AM). Oracular measurements, taken during the file transfer, give us perfect information about network conditions, so we can establish the best possible accuracy of our prediction mechanism. In practice, this oracular information is not available for making predictions. AM, taken before the file transfer, are available in practice and can thus be used for prediction.
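As a sanity check on the 21 Mb/s figure quoted above: a receive-window-limited TCP flow can carry at most one receive window of data per round-trip time. With the 128-kB window and the 2 x 25 ms of emulated one-way delay used in the testbed (ignoring queuing and header overhead), the bound works out as follows:

```python
# Back-of-the-envelope bound for an rwnd-limited flow: at most one
# receive window of data in flight per round-trip time.
window_bits = 128 * 1024 * 8   # 128 kB receive window, in bits
rtt_s = 2 * 0.025              # 25 ms of emulated delay in each direction
throughput_mbps = window_bits / rtt_s / 1e6
print(round(throughput_mbps, 1))  # prints 21.0
```

This matches the approximately 21 Mb/s observed for rwnd-limited transfers, and explains why such transfers have essentially constant throughput for a given window size.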
    • 1030 IEEE/ACM TRANSACTIONS ON NETWORKING, VOL. 18, NO. 4, AUGUST 2010 introduced in [11] to evaluate the accuracy of an individual throughput prediction. Relative prediction error is defined as In what follows, we use the distribution of the absolute value of to compare different prediction methods. V. BUILDING A ROBUST PREDICTOR This section describes how we developed, calibrated, and evaluated our prediction mechanism through an extensive set of tests conducted in our lab testbed. A. Calibration and Evaluation in the High-Traffic Scenario The first step in developing our SVR-based throughput pre- dictor is to find the combination of training features that lead to the most accurate predictions over a wide variety of path con- ditions. We trained the predictor using a feature vector for eachFig. 2. Measurement traffic protocol and oracular and practical path measure-ments used for SVR-based throughput prediction. test that contained a different combination of our set of target path measurements (AB, Q, L) and the measured throughput. The trained SVR model is then used to predict throughput forPPM, passive measurements taken at the same time AM are a feature vector containing the corresponding sets of networktaken, show the best possible accuracy of our prediction mech- measurements, and we compare the prediction accuracy for theanism with practical measurements, or show how much better different combinations.SVR would perform if the active measurements had perfect We also compare the accuracy of SVR to the exponentiallyaccuracy. weighted moving average (EWMA) History-Based Predictor AB is defined as the spare capacity on the end-to-end path. (HB) described in [11], . We useYAZ estimates available bandwidth using a relatively low-over- an value of 0.3 because it is used in [11].head, iterative method similar to PATHLOAD [14]. 
The time re- For most of the results reported, we generated an average ofquired to produce a single AB estimate can vary from a few 140 Mb/s of background traffic to create high utilization on ourseconds to tens of seconds. The passive measurement value of OC3 (155 Mb/s) bottleneck link. We used one set of 100 ex-AB is computed by subtracting the observed traffic rate on the periments for training and another set of 100 experiments forlink from the realizable OC3 link bandwidth. testing. An 8-MB file was transferred in each experiment. We use BADABING for active measurements of L and Q. Fig. 3(a)–(h) show scatter plots comparing the actual andBADABING requires the sender and receiver to be time-syn- predicted throughput using different prediction methods as dis-chronized. To accommodate our wide area experiments, the cussed below. A point on the diagonal represents perfect pre-BADABING receiver was modified to reflect probes back to the diction accuracy; the farther a point is from the diagonal, thesender, where they were timestamped and logged as on the greater the prediction error.original receiver. Thus, the sender clock was used for all probe 1) Using Path Measurements From an Oracle: Fig. 3(a)timestamps. BADABING reports loss in terms of loss episode shows the prediction accuracy scatter plot for the HB method.frequency and loss episode duration, defined in [25]. Although Fig. 3(b)-(g) show the prediction error with SVR using Orac-we refer to loss frequency and loss duration together as L or ular Passive Measurements (OPM) for different combinationsloss for expositional ease, these two metrics are two different of path measurements in the feature vector. For example,features for SVR. Q is defined as the difference between the SVR-OPM-Queue means that only queuing delay measure-RTT of the current BADABING probe and the minimum probe ments were used to train and test, while SVR-OPM-Loss-QueueRTT seen on the path. 
We set the BADABING probe probability means that both loss and queuing delay measurements wereparameter to 0.3. Other parameters were set according to used to train and test.[25]. Passive values of Q and L are computed using the same Table I shows relative prediction errors for HB forecastingdefinitions as above, except that they are applied to all TCP and for SVM-OPM-based predictions. Values in the table in-traffic on the bottleneck link instead of BADABING probes. dicate the fraction of predictions for a given method within a For experiments in the wide area, we created a tool, PathPerf. given accuracy level. For example, the first two columns of theThis tool, designed to run between a pair of end-hosts, initiates first row in Table I mean that 32% of HB predictions have rel-TCP file transfers and path property measurements (using our ative prediction errors of 10% or smaller, while 79% of SVR-modified version of BADABING) and produces throughput esti- OPM-AB predictions have relative prediction errors of 10% ormates using our SVR-based method. It can be configured to gen- smaller. We present scatter plots in addition to tabular data toerate arbitrary file size transfers for both training and testing and provide insight into how different path properties contribute toinitiates retraining when level shifts are identified as described throughput prediction in the SVR method.in Section VII. From Fig. 3(a), we can see that the predictions for the HB predictor are rather diffusely scattered around the diagonal, andC. Evaluating Prediction Accuracy that predictions in low-throughput conditions tend to have large We denote the actual throughput by and the predicted relative error. Fig. 3(b)-(d) show the behavior of the SVR pre-throughput by . We use the metric relative prediction error dictor using a single measure of path properties in the feature
vector—AB, L, and Q, respectively. The AB and Q graphs have a similar overall trend: Predictions are accurate (i.e., points are close to the diagonal) for high actual throughput values, but far from the diagonal, almost in a horizontal line, for lower values of actual throughput. The L graph has the opposite trend: Points are close to the diagonal for lower values of actual throughput and form a horizontal line for higher values of actual throughput.

Fig. 3. Comparison of prediction accuracy of HB, SVR-OPM, and SVR-AM in high background traffic conditions. (a) HB. (b) SVR-OPM-AB. (c) SVR-OPM-loss. (d) SVR-OPM-queue. (e) SVR-OPM-AB-loss. (f) SVR-OPM-loss-queue. (g) SVR-OPM-AB-loss-queue. (h) SVR-AM-loss-queue.

TABLE I
RELATIVE ACCURACY OF HISTORY-BASED (HB) THROUGHPUT PREDICTION AND SVR-BASED PREDICTORS USING DIFFERENT TYPES OF ORACULAR PASSIVE PATH MEASUREMENTS (SVR-OPM) IN THE FEATURE VECTOR. TABLE VALUES INDICATE THE FRACTION OF PREDICTIONS WITHIN A GIVEN ACCURACY LEVEL

The explanation for these trends lies in the fact that file transfers with low actual throughput experience loss, while file transfers with high actual throughput do not experience any loss. When loss occurs, the values of AB and Q for the path are nearly constant. AB is almost zero, and Q is the maximum possible value (which depends on the amount of buffering available at the bottleneck link in the path). In this case, throughput depends on the value of L. Hence, L appears to be a good predictor when there is loss on the path, and AB and Q, being constants in this case, have no predictive power, resulting in horizontal lines, i.e., a single value of predicted throughput. On the other hand, when there is no loss, L is a constant with value zero, so L has no predictive power, while AB and Q are able to predict throughput quite accurately.

Fig. 3(e) and (f) show improvements in prediction accuracy obtained by using more than one path property in the SVR feature vector. We can see that when L is combined with AB or Q, the horizontal lines on the graphs are replaced by points much closer to the diagonal. Combining AB or Q with L allows SVR to predict accurately in both lossy and lossless conditions. Since both AB and Q help predict throughput in lossless conditions, do we really need both AB and Q, or can we use just one of the two and still achieve the same prediction accuracy? To answer this question, we compared AB-Loss and Loss-Queue predictions with each other and with AB-Loss-Queue predictions [i.e., Figs. 3(e)–(g)]. The general trend in all three cases is the same: The horizontal line of points is reduced or eliminated, suggesting that prediction from nonconstant-value measurements is occurring for both lossy and lossless network conditions. If we compare the AB-Loss and Loss-Queue graphs more closely, we observe two things. First, in the lossless prediction case, the points are closer to the diagonal in the Loss-Queue case than in the AB-Loss case. Second, in the Loss-Queue case, the transition in the prediction from the lossless to the lossy case is smooth, i.e., there is no horizontal line of points, while in the AB-Loss case there is still a horizontal line of points in the actual throughput range of 11–14 Mb/s. This suggests that Q is a more accurate predictor than AB in the lossless case. The relative prediction error data of Table I supports this: SVR with a feature vector containing Loss-Queue information predicts throughput within 10% of actual for 87% of transfers, while a feature vector containing AB-Loss measurements predicts with the same accuracy level for 78% of transfers. Finally, there is no difference in accuracy (either qualitatively or quantitatively) between Loss-Queue and AB-Loss-Queue.

The above discussion suggests that AB measurements are not required for highly accurate throughput prediction and that a combination of L and Q is sufficient. This observation is not only surprising, but rather good news. Prior work has shown that accurate measurements of AB require at least moderate amounts of probe traffic [22], [26], and some formula-based TCP throughput estimation schemes take as a given that AB measurements are necessary for accurate throughput prediction [11]. In contrast, measurements of L and Q can be very lightweight probe processes [25]. We discuss measurement overhead further in Section VII.

2) Using Practical Passive and Active Path Measurements: So far, we have considered the prediction accuracy of SVR based on only Oracular Passive Measurements (OPM). This gives us the best-case accuracy achievable with SVR and also provides insight into how SVR uses different path properties for prediction. Table I shows that HB predicts 32% of transfers within 10% of actual, while SVR-OPM-Loss-Queue predicts 87%, an almost threefold improvement. In practice, however, perfect measurements of path properties are not available, so in what follows we assess SVR using measurements that can be obtained in practice in the wide area.

TABLE II
RELATIVE ACCURACY OF HISTORY-BASED (HB) THROUGHPUT PREDICTION AND SVR-BASED PREDICTORS USING TRACE-BASED PASSIVE PATH MEASUREMENTS (PPM) OR ACTIVE PATH MEASUREMENTS (AM). TABLE VALUES INDICATE THE FRACTION OF PREDICTIONS WITHIN A GIVEN ACCURACY LEVEL

Table II presents relative prediction error data for HB, SVR-PPM, and SVR-AM. Due to space limitations, we present only Loss-Queue and AB-Loss-Queue results for SVR. We choose these because we expect AB-Loss-Queue to have the highest accuracy as it has the most information about path properties, and Loss-Queue because it is very lightweight and has accuracy equal to AB-Loss-Queue for SVR-OPM. We wish to examine three issues: first, whether our finding that Loss-Queue has the same prediction accuracy as AB-Loss-Queue from the SVR-OPM case holds for the SVR-PPM and SVR-AM case; second, whether SVR-PPM and SVR-AM have the same accuracy; and third, how SVR-AM accuracy compares with HB prediction accuracy.

All prediction accuracy results in the SVR-PPM and SVR-AM columns in Table II are very similar. AB-Loss-Queue has approximately the same accuracy as Loss-Queue for both SVR-PPM and SVR-AM. This is encouraging because it is consistent with the observation from SVR-OPM, i.e., that we can achieve good prediction accuracy without having to measure AB. SVR-AM has accuracy similar to SVR-PPM, i.e., using active measurement tools to estimate path properties yields predictions almost as accurate as having ground-truth passive measurements. This is important because in real wide-area paths, instrumentation is generally not available for collecting accurate passive measurements.

Finally, we compare HB with SVR-AM. Although Fig. 3(a) and (h) are qualitatively similar, SVR-AM has a tighter cluster of points around the diagonal for high actual throughput than HB. Thus, SVR-AM appears to have higher accuracy than HB. As Table II shows, SVR-AM-Loss-Queue predicts throughput within 10% of actual 49% of the time, while HB does so only 32% of the time. Hence, for high-traffic scenarios, SVR-AM-Loss-Queue, the practically deployable lightweight version of the SVR-based prediction mechanism, significantly outperforms HB prediction.

The above observations raise the question of why SVR performs so much better than HB at high accuracy levels. HB is a weighted average that works on the assumption that transfers close to each other in time will have similar throughputs. HB loses accuracy when there are rapid and large changes in throughput because in that case there is a large difference between the weighted average and individual throughputs. SVR predicts throughput by constructing a function mapping path properties to throughput values. This function is able to capture complex nonlinear relationships between the path properties and throughput. As long as the mapping between path properties and resulting throughput is consistent, SVR can predict throughput accurately, regardless of the variation between the throughput of neighboring transfers. SVR accuracy breaks down only when the mapping between path properties and throughput becomes inconsistent, e.g., when path conditions are so volatile that they change between the active path measurement and the file transfer, or when an aggregate path measure does not hold for the transfer, such as when there is loss on the bottleneck link, but the file transfer experiences more or less loss than the aggregate traffic. Thus, SVR outperforms HB because: 1) it has access to additional information, in the form of path properties such as AB, L, and Q, while HB has access only to past throughputs; and 2) it constructs a much more sophisticated functional mapping of path properties to throughput compared to the simple time-series analysis that HB uses. The fact that SVR's supremacy over HB is limited to high accuracy levels, and HB catches up with SVR for lower accuracy levels (predicted throughput within 40%–90% of actual, Table II), is a feature of the distribution of throughputs in our test set. We can see from the graphs in Fig. 3(a)–(h) that the average throughput of the test set is between 10 and
15 Mb/s. We expect the HB prediction to be around the average throughput. Fig. 3(a) shows that this is indeed the case: All HB predictions are between 10 and 15 Mb/s. All but one of the transfers have actual throughput between 5 and 18 Mb/s, so the relative error is 100% or less for low throughputs (all predictions within a factor of two of actual) and 80% or less for high throughputs. If the range of actual throughputs had been wider but the average had been the same, HB would have been less able to match SVR accuracy for relative error values of 40%–90%.

A remaining question is why Q is a better predictor of throughput than AB. When we compare AB and Q prediction behavior in Fig. 3(b) and (d), we see that as throughput decreases, i.e., as the network moves from lossless to lossy conditions, AB loses its predictive power sooner than Q does, around 15 Mb/s instead of 11 Mb/s for Q. As soon as the link experiences loss, AB is zero. While Q eventually reaches a maximum value under sufficiently high congestion, under light loss it retains its resolution because Q will be different depending on the percentage of packets that experience the maximum possible queuing delay and those that experience less than the maximum. Hence, Q is a better predictor of throughput than AB most likely because it can be measured with better resolution for a wider range of network conditions.

3) The Nature of Prediction Error: Lu et al. [16] observed in their study that throughput prediction errors were approximately normal in distribution. As the authors noted, normality would justify standard computations of confidence intervals. We examined the distribution of errors in our experiments and also found evidence suggestive of normality.

Fig. 4. Normal Q-Q plots for prediction errors with (a) oracular measurements (SVR-OPM) and (b) active measurements (SVR-AM) of Loss-Queue.

Fig. 4 shows two normal quantile-quantile (Q-Q) plots for SVR-OPM-Loss-Queue [Fig. 4(a)] and SVR-AM-Loss-Queue [Fig. 4(b)]. Samples that are consistent with a normal distribution form approximately a straight line in the graph, particularly toward the center. In Fig. 4(a) and (b), we see that the throughput prediction samples in each case form approximately straight lines. These observations are consistent with normality in the distribution of prediction errors. Error distributions from other experiments were also consistent with the normal distribution but are not shown due to space limitations. We further discuss the issues of retraining and of detecting estimation problems in Section VII.

B. Evaluation of Prediction Accuracy for Different File Sizes

Thus far, we have predicted throughput only for bulk transfers of 8 MB. However, our goal is accurate prediction for a range of file sizes, not just bulk transfers, so in this section we evaluate prediction accuracy for a broad range of file sizes, something not considered in prior HB prediction studies.

We conducted experiments using background traffic at an average offered load of 135 Mb/s. We used nine different training sets, each of size 100, consisting of between one and nine unique file sizes. The file sizes for the training sets were between 32 kB (2^15 bytes) and 8 MB (2^23 bytes). The first training set consisted of 100 8-MB files; the second of 50 32-kB files and 50 8-MB files; the third of 33 each of 32-kB (2^15 bytes), 512-kB (2^19 bytes), and 8-MB (2^23 bytes) files; the fourth of 25 files each of four sizes; and so on, dividing the range of the exponent, 15 to 23, into more numerous but smaller intervals for each subsequent training set.

Test sets for our experiments consist of 100 file sizes between 2 kB (2^11 bytes) and 8 MB (2^23 bytes). To generate file sizes for the test set, uniformly distributed random numbers x between 11 and 23 were generated, and the file size was set to 2^x bytes, rounded to the closest integer; note that 2^11 bytes = 2 kB and 2^23 bytes = 8 MB. This log-based distribution of file sizes contains a much greater number of smaller files than a uniform linear distribution of file sizes from 2 kB to 8 MB would. The bias toward smaller file sizes is important for proper evaluation of predictor performance for small files because it generates a more even distribution between files that spend all their transfer time in slow-start, those that spend some time in slow-start and some in congestion avoidance, and those that spend a negligible amount of time in slow-start. If the uniform linear distribution had been used, most of the file sizes generated would have spent a negligible amount of time in slow-start. We also use a wider range of small files in the test set compared to the training set—the test set starts with 2-kB files, while the training set starts with 32 kB—because we want to see how well our predictor extrapolates for unseen ranges of small file sizes.

TABLE III
RELATIVE ACCURACY OF SVR-BASED PREDICTOR USING ORACULAR PASSIVE MEASUREMENTS (OPM) AND TRAINING SETS CONSISTING OF ONE, TWO, THREE, SIX, OR EIGHT DISTINCT FILE SIZES

Tables III and IV present prediction accuracy for training sets consisting of one, two, three, six, or eight distinct file sizes for SVR-OPM and SVR-AM. Graphs in Fig. 5 present SVR-AM results for one, two, and three file sizes in the training set. We do not include graphs for remaining training sets due to space limitations: They are similar to those for two and three file sizes in the training set. The first observation is that for the single file size in training, the prediction error is very high. This inaccuracy is expected because the predictor has been given no information about the relationship between size and throughput.
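The log-based test-set construction described above is easy to reproduce. The snippet below is an illustrative reconstruction, not the authors' harness: the random seed and the exact rounding of 2^x are assumptions, but it shows how drawing the exponent uniformly biases the sample toward small files.

```python
import random

def log_distributed_sizes(n, lo_exp=11, hi_exp=23, seed=1):
    # Draw x uniformly in [lo_exp, hi_exp]; the file size is 2**x bytes,
    # rounded to the nearest integer, so sizes span 2 kB to 8 MB.
    rng = random.Random(seed)
    return [round(2 ** rng.uniform(lo_exp, hi_exp)) for _ in range(n)]

sizes = log_distributed_sizes(100)
small = sum(s < 512 * 1024 for s in sizes)  # files under 512 kB
print(min(sizes) >= 2048, max(sizes) <= 2 ** 23, small)
```

Because the exponent (not the size) is uniform, roughly two-thirds of the draws land below 512 kB, whereas a linear-uniform draw from 2 kB to 8 MB would put almost all files above that point.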
The second observation is that for more than one
Fig. 5. Scatter plots for the SVR-based predictor using one, two, or three distinct file sizes in the training set. All results shown use active measurements to train the predictor. Testing is done using a range of 100 file sizes from 2 kB to 8 MB. (a) SVR-AM with file size of 8 MB in training set. (b) SVR-AM with file sizes of 32 kB and 8 MB in training set. (c) SVR-AM with file sizes of 32 kB, 512 kB, and 8 MB in training set.

TABLE IV
RELATIVE ACCURACY OF SVR-BASED PREDICTOR USING ACTIVE MEASUREMENTS (AM) AND TRAINING SETS CONSISTING OF ONE, TWO, THREE, SIX, OR EIGHT DISTINCT FILE SIZES

file size, prediction becomes dramatically more accurate, i.e., the predictor is able to successfully extrapolate from a handful of sizes in training to a large number of sizes in testing. The third observation is that relative error is low for large file sizes (corresponding to high actual throughput), while it is higher for small files (low actual throughput); in other words, predicting throughput accurately for small files is more difficult than for large files. For small files, throughput increases rapidly with file size during TCP's slow-start phase due to the doubling of the window every RTT. Also, packet loss during the transfer of a small file has a larger relative impact on throughput. Due to the greater variability of throughput for small files, predicting throughput accurately is more difficult. The fourth observation is that for small file sizes (i.e., small actual throughput), the error is always that of overprediction. The smallest file in the training set is 32 kB, while the smallest file in the test set is 2 kB. This difference is the cause of overprediction errors: Given the high variability of throughput for small file sizes due to slow-start effects and losses having a greater relative impact on throughput, without a broader training set, the SVR mechanism is unable to extrapolate accurately for unseen ranges of small file sizes.

A final observation is that prediction accuracy reaches a maximum at three file sizes in the training set, and there is no clear trend for four to nine file sizes in the training set. The number of transfers in our training set is always constant at 100, so for the single training size, there are 100 8-MB transfers; for the two training sizes, there are 50 32-kB transfers and 50 8-MB transfers; and so on. We believe that accuracy is maximum at three training sizes in our experiments because there is a tradeoff between capturing a diversity of file sizes and the number of samples for a single file size. We would not see maximum accuracy occurring at three file sizes and would instead see an increase in accuracy with increasing number of file sizes in the training set if we kept the number of samples of a single file size constant at 100 in the training set and allowed the size of the training set to increase from 100 to 200, 300, etc., as we increase the number of file sizes in the training set.

VI. WIDE AREA EXPERIMENTS

To further evaluate our SVR-based TCP throughput prediction method, we created a prototype tool called PathPerf that can generate measurements and make forecasts on wide area paths. We used PathPerf to conduct experiments over a set of paths in the RON testbed [3]. This section presents the results of our wide area experiments.

A. The RON Experimental Environment

The RON wide area experiments were conducted in January 2007 over 18 paths between seven different node locations. Two nodes were in Europe (in Amsterdam, The Netherlands, and London, U.K.), and the remainder were located at universities in the continental United States (Cornell University, Ithaca, NY; University of Maryland, College Park; University of New Mexico, Albuquerque; New York University, New York; and University of Utah, Salt Lake City). Of the 18 paths, two are trans-European, nine are trans-Atlantic, and seven are transcontinental U.S. The RON testbed has a significantly larger number of available nodes and paths, but two considerations limited the number of nodes and paths that we could use. The first consideration was that the nodes should have little or no other CPU or network load while our experiments were running; this is required for BADABING to measure loss accurately. The second issue was that we could not use any nodes running FreeBSD 5.x because the TCP throughput history caching feature in FreeBSD 5.x, controlled by the variable net.inet.tcp.inflight.enable, is on by default and interfered with our experiments. Consequently, we were restricted to using nodes running FreeBSD 4.7, which does not have the throughput history caching feature.

We use the following convention for path names: A–B means that A is the TCP sender and B is the receiver, i.e., the TCP data flow direction is from A to B, and TCP ack flow is in the reverse direction. A–B and B–A are considered two different paths because routing and background traffic level asymmetry between the forward and reverse directions can lead to major differences in throughput.
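Throughout the evaluation, both laboratory and wide-area accuracy are reported as the fraction of predictions whose relative error falls within a threshold such as 10%. A small helper can compute that statistic from prediction/actual pairs. This is an illustrative sketch, not the authors' tooling, and the symmetric form of the relative error below (normalizing by the smaller of the two values) is an assumption; the paper adopts the metric of [11], whose exact formula is not reproduced in this excerpt.

```python
def relative_error(predicted, actual):
    # Symmetric relative prediction error (assumed form): positive when
    # we overpredict, negative when we underpredict.
    return (predicted - actual) / min(predicted, actual)

def fraction_within(predictions, actuals, threshold):
    # Fraction of predictions with |relative error| <= threshold,
    # i.e., one cell of an accuracy table such as Table I.
    errors = [abs(relative_error(p, a)) for p, a in zip(predictions, actuals)]
    return sum(e <= threshold for e in errors) / len(errors)

actual = [10.0, 12.0, 14.0, 5.0]      # Mb/s, hypothetical transfers
predicted = [10.5, 11.0, 14.2, 9.0]   # hypothetical predictions
print(fraction_within(predicted, actual, 0.10))  # 3 of 4 within 10%
```

With this convention, overpredicting a 5-Mb/s transfer as 9 Mb/s yields an 80% error, which is why the low-throughput points in the scatter plots show large relative errors even when their absolute errors are small.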
Fig. 6. Wide area results for HB and SVR predictions. The HB and SVR columns present the fraction of predictions with relative error of 10% or less. Training and test sets consisted of 100 samples each. Two megabytes of data were transferred. Labels use node initials; see Section V for complete node names.

TABLE V
MINIMUM RTTS FOR THE 18 RON PATHS USED FOR WIDE-AREA EXPERIMENTS

The wide area measurement protocol was the following: 1) run BADABING for 30 s; 2) transfer a 2-MB file; 3) sleep for 30 s; and 4) repeat the above 200 times.

In Section V, we showed that available bandwidth measurements are not required for accurate TCP throughput prediction, so we omit running YAZ in the wide area. Thus, the wide area prediction mechanism is the SVR-AM-Loss-Queue protocol from Section V. We reduced the size of the file transfers from 8 MB in the lab experiments to 2 MB in the wide area experiments because the typical throughput in the wide area was an order of magnitude lower compared to the typical throughput in the lab (1–2 Mb/s versus 10–15 Mb/s).

The measurements were carried out at different times of the day for different paths depending upon when the nodes were available, i.e., when the nodes had little or no other CPU and network activity in progress. Depending again on node availability, we ran a single set of experiments on some paths and multiple sets on others. We can only conduct active measurements in the wide area because the infrastructure required for passive measurements is not present.

Table V lists the minimum round-trip times observed on the wide area paths over all BADABING measurements taken on each path. The shortest paths have a minimum RTT of 8 ms, and the longest a minimum RTT of 145 ms. In Table V, we ignore path directions, e.g., we list Amsterdam–London and London–Amsterdam as a single entry because minimum RTT is the same in both directions.

B. Wide Area Results

Fig. 6 compares the accuracy of the SVR and HB throughput predictions for the 18 wide area paths. The results in Fig. 6 are obtained by dividing the 200 measurements gathered for each path into two consecutive sets of 100, using the first 100 as the training set and the second 100 as the test set. Some paths feature more than once because node availability allowed us to repeat experiments on those paths.

We observe two major trends from Fig. 6. First, for the majority of experiments, prediction accuracy is very high for both HB and SVR: Most paths have greater than 85% of predictions within 10% of actual throughput. Second, for five out of the 26 experiments—Cornell–Amsterdam, London–Maryland–1, London–Utah–2, London–Utah–5, and NYU–Amsterdam—the SVR prediction accuracy is quite low, as low as 25% for London–Utah–2. The reasons for this poor accuracy will be analyzed in detail in Section VI-C.

Next, we take a more detailed approach to assessing prediction accuracy. We divide the 200 samples into sets of x and 200 − x for values of x from 1 to 199. The first x samples are used for training, and the remaining 200 − x samples are used for testing. This allows us to understand the tradeoff between training set size and prediction accuracy, which is important for a practical online prediction system where it is desirable to start generating predictions as soon as possible. This analysis also allows us to identify the points in a trace where an event that has an impact on prediction accuracy, such as a level shift, occurs and whether retraining helps maintain prediction accuracy in the face of changing network conditions. We present this data for SVR only; for HB, there is no division of data into training and test sets because retraining occurs after every measurement.

Fig. 7 presents the SVR prediction accuracy for x = 1, 5, and 10 for those experiments in Fig. 6 that had high prediction accuracy for x = 100. For all but three of the experiments in Fig. 7, there is little difference between prediction accuracy for training set sizes of 1, 5, 10, and 100. This is because there is little variation in the throughput observed during the experiments. A path with little or no variation in observed throughput over the course of an experimental run is the easy case for both SVR and HB throughput predictors, so these experiments will not be discussed any further in this paper. For three experiments in Fig. 7, Amsterdam–London, London–Utah–1, and Utah–Cornell, the prediction accuracy for x values of 1, 5, and 10 is significantly lower compared to that for x = 100. The reason for this poor accuracy will be discussed in detail in Section VI-C.

C. Detailed Analysis of Wide Area Results

In this section, we analyze the reasons for poor SVR prediction accuracy for five paths from Fig. 6 (Cornell–Amsterdam,
Fig. 7. Effect of training set size on SVR prediction accuracy in the wide area. For paths with high prediction accuracy in Fig. 6, this table shows how large a training set size was needed to achieve high accuracy. Labels use node initials; see Section V for complete node names.

Fig. 8. London–Utah–5: An example of a wide area path with level shifts. Time on the x-axis is represented in terms of file transfer number. (a) Throughput profile. (b) Prediction accuracy.

London–Maryland–1, London–Utah–2, London–Utah–5, and NYU–Amsterdam) and three paths from Fig. 7 (Amsterdam–London, London–Utah–1, and Utah–Cornell). We find that there are two dominant reasons for poor SVR prediction accuracy: background traffic level shifts and changes in background traffic on small timescales. We find that retraining can improve prediction accuracy for level shifts but not for changes in network conditions on small timescales.

In the analysis that follows, we use a pair of graphs per experiment to illustrate the details of each experiment. Consider Fig. 8(a) and (b) for London–Utah–5. The first graph, the throughput profile, is a time series representation of the actual throughput observed during the experiment. The second graph, prediction accuracy, is a more detailed version of the bar graphs of Fig. 7, incorporating all possible values of x. At an x value of n, the first n of the 200 samples are used as the training set, and the remaining 200 − n samples as the test set. The y value is the fraction of the 200 − n samples in the test set whose predicted throughput is within 10% of the actual throughput. As the value of x increases, the training set becomes larger compared to the test set because the total number of samples is fixed at 200. For values of x close to 200, the test set is very small, and thus the prediction accuracy values, whether they are high or low, are not as meaningful.

1) Level Shifts: Level shifts were responsible for poor prediction accuracy in four experiments: Utah–Cornell, London–Utah–1, London–Utah–2, and London–Utah–5. We discuss London–Utah–5 in detail as a representative example.

Fig. 8(a) and (b) are the throughput profile and prediction accuracy graphs for London–Utah–5. Fig. 8(a) shows that a level shift from a throughput of 1.4 to 0.6 Mb/s occurs after 144 file transfers, and a level shift from 0.6 back to 1.4 Mb/s occurs after 176 transfers. Fig. 8(b) shows that prediction accuracy decreases from 0.84 at one file transfer to a global minimum of 0.40 at 144 file transfers, and then increases very rapidly to 1.00 at 145 file transfers. This result can be explained by the fact that the level shift is not included in the training data. Thus, the prediction function will generate samples at the first throughput level accurately and those at the second throughput level inaccurately. Prediction accuracy decreases as x goes from 1 to 144 because there are relatively fewer samples at the first level and more at the second level in the test set as the value of x increases, so there are relatively more inaccurate predictions, leading to a decreasing trend in prediction accuracy. If the division into training and test sets is at the level shift boundary, in this case the first 144 file transfers, the training set consists only of measurement samples before the level shift and the test set of samples after the level shift. All predictions will be inaccurate (assuming the level shift changes the throughput by more than 10%, our threshold) because the prediction function has never encountered the new throughput level. Hence, we observe minimum prediction accuracy at the level shift boundary, i.e., 144 transfers. If the division into training and test sets includes a level shift, some samples with the new network conditions and resultant new throughput are included in the training set. The prediction function is now aware of the new network conditions and is able to make better predictions in the new conditions. In our example, including only one sample from after the level shift in the training set, the 145th sample, is sufficient to allow all throughputs at the lower level to be predicted accurately. That is, the SVR predictor needs the minimum possible training set size (a single sample) for the new network conditions before it can generate accurate predictions.

Fig. 9(a) and (b) compare the behavior of SVR and HB predictors for a level shift. The training set for SVR consisted of the first 145 samples, i.e., 144 samples at the first throughput level and one sample at the second throughput level. The test set consisted of the remaining 55 samples. For HB, recall that there is no separation into training and test sets, and retraining occurs after every measurement sample. Comparing Fig. 9(a) (SVR) and (b) (HB), we see that the SVR-predicted throughput follows the actual throughput very closely, while the HB-predicted throughput takes some time to catch up with actual throughput after a level shift. If the SVR predictor has
    • MIRZA et al.: A MACHINE LEARNING APPROACH TO TCP THROUGHPUT PREDICTION 1037 Recall that we measure network conditions using BADABING for 30 s and then transfer a file. In this experiment, after the 60th transfer, the network conditions vary so rapidly that they change between the BADABING measurement and the file transfer. Con- sequently, there is no consistent mapping between the measured network conditions and the transfer throughput, so a correct SVR predictor cannot be constructed, leading to poor predic- tion accuracy. The HB predictor also performs poorly in these conditions.Fig. 9. London–Utah–5: Comparison of SVR and HB prediction accuracy As shown in Fig. 6, for 100 training samples for London–Mary-for background traffic with level shifts. The x-axis is the file transfer timeline land–1, HB has a prediction accuracy of 59%, while SVR hasstarting at 145. The y-axis is throughput. (a) SVR prediction accuracy. (b) HB an accuracy of 55%. When network conditions vary as rapidlyprediction accuracy. as in the above example, it is not possible to predict throughput accurately using either SVR or HB because of the absence of a consistent relationship in network conditions just before and during a file transfer. One difference we observed in experiments with level shifts versus experiments with throughput changes on small timescales was that level shifts occurred under lossless network conditions, while throughput changes on small timescales occurred under conditions of sustained loss. Thus, if only the average queuing delay on a network path changes, we observe a level shift; if a network goes from a no-loss state to a lossy state, we observe throughput changes on small timescales. A possibleFig. 10. London–Maryland–1: An example of a wide area path with throughput explanation is that if the network is in a lossy state, a particularchanges on small timescales. (a) Throughput profile. (b) prediction accuracy. TCP flow may or may not experience loss. 
knowledge of the level shift, its prediction accuracy is much better than that of HB. After the second level shift (176 samples), no further training of the SVR predictor is required to predict the remaining 23 samples correctly. The predicted throughput in Fig. 9(a) follows the actual throughput closely at the level shift after 176 transfers even though the training set consists of only the first 146 samples.

The above example shows that the SVR predictor has two advantages over the HB predictor. First, it can adapt instantaneously, i.e., after a single training sample, to a level shift, while the HB predictor takes longer. Second, it shows that unlike the HB predictor, the SVR predictor needs to be trained only once for a given set of conditions. The results for the other three wide-area experiments that contained level shifts are similar to those of the London–Utah–5 experiment (omitted due to space considerations).

2) Changes Over Small Timescales: Network condition changes over small timescales reduced SVR prediction accuracy for Amsterdam–London, Cornell–Amsterdam, London–Maryland–1, and NYU–Amsterdam. We consider London–Maryland–1 as a representative example.

Fig. 10(a) and (b) present the throughput profile and prediction accuracy of the London–Maryland–1 experiment. Fig. 10(a) shows that until about the 60th file transfer, throughput is fairly steady around 2.7 Mb/s, after which it starts to vary widely in the range between approximately 1.2 and 2.7 Mb/s. Fig. 10(b) shows that prediction accuracy is at a maximum of 65% at one file transfer, decreases between one and 60 transfers, and varies between 50% and 60% between 60 and 180 transfers.

Unlike for level shifts, after the throughput profile changes and measurement samples of the new network conditions are included in the training set, prediction accuracy does not improve. Since TCP backs off aggressively after a loss, flows that do experience loss will have significantly lower throughput than those that do not experience loss, leading to large differences in throughput across flows. However, if only the average queuing delay on a path changes, every flow on the path (unless it is extremely short) experiences the change in queuing delay, leading to a change in throughput of all flows, i.e., a level shift, on the path.

VII. DISCUSSION

This section addresses two key issues related to running PathPerf in operational settings.

Network Load Introduced by PathPerf: Traffic introduced by active measurement tools is a concern because it can skew the network property being measured. Furthermore, network operators and engineers generally wish to minimize any impact measurement traffic may have on customer traffic.

For history-based TCP throughput estimation methods, the amount of traffic introduced depends on the measurement protocol. For example, two approaches may be taken. The first method is to periodically transfer fixed-size files; the second is to allow a TCP connection to transfer data for a fixed time period. The latter approach was taken in the history-based evaluation of He et al. [11]. To estimate the overhead of a history-based predictor using fixed-duration transfers, assume that: 1) the TCP connection is not window-limited; 2) the fixed duration of data transfer is 50 s (as in [11]); 3) throughput measurements are initiated every 5 min; and 4) throughput closely matches available bandwidth. For this example, assume that the average available bandwidth for the duration of the experiment is approximately 50 Mb/s. Over a 30-min period, nearly 2 GB of measurement traffic is produced, resulting in an average bandwidth of about 8.3 Mb/s.

In the case of PathPerf, measurement overhead in training consists of file transfers and queuing/loss probe measurements.
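Both of these back-of-the-envelope estimates reduce to a few lines of arithmetic. The sketch below reproduces the history-based figure just computed and the PathPerf figures detailed next; the byte-unit conventions (binary kB/MB for file sizes, decimal MB for probe traffic) are our assumption, chosen to match the quoted values.

```python
# Probe-traffic overhead: history-based fixed-duration transfers vs. PathPerf.
# Constants follow the assumptions stated in the text; the kB/MB conventions
# are assumptions chosen to reproduce the quoted figures.

MBPS = 1e6  # bits per second in one Mb/s

# History-based predictor: a 50 s transfer every 5 min at ~50 Mb/s, for 30 min.
n_transfers = (30 * 60) // (5 * 60)                       # 6 transfers
hb_bits = n_transfers * 50 * 50 * MBPS                    # 50 s at 50 Mb/s each
hb_avg_bps = hb_bits / (30 * 60)                          # averaged over 30 min
print(f"history-based: {hb_bits / 8e9:.2f} GB, {hb_avg_bps / MBPS:.1f} Mb/s")
# -> history-based: 1.88 GB, 8.3 Mb/s ("nearly 2 GB", "about 8.3 Mb/s")

# PathPerf: 15 min of training (file transfers, each preceded by a BADABING
# probe) followed by 15 min of testing with only a probe every 3 min.
file_bytes = 10 * (32 * 2**10 + 512 * 2**10 + 8 * 2**20)  # 10 each of 32 kB/512 kB/8 MB
train_bytes = file_bytes + 30 * 1.5e6                     # plus 30 probes at ~1.5 MB each
test_bytes = 5 * 1.5e6                                    # 5 probes in 15 min
train_bps = train_bytes * 8 / (15 * 60)
test_bps = test_bytes * 8 / (15 * 60)
overall_bps = (train_bytes + test_bytes) * 8 / (30 * 60)
print(f"PathPerf: {train_bps / MBPS:.1f} Mb/s training, "
      f"{test_bps / 1e3:.0f} kb/s testing, {overall_bps / 1e3:.0f} kb/s overall")
# -> PathPerf: 1.2 Mb/s training, 67 kb/s testing, 631 kb/s overall
#    (the text rounds these to 1.2 Mb/s, 66 kb/s, and 633 kb/s)
```

The overall PathPerf figure in the text (633 kb/s) is the average of the rounded per-phase rates; exact arithmetic gives roughly 631 kb/s, the same order-of-magnitude gap of more than 10x versus the history-based approach.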
In testing, overhead consists solely of loss measurements. Assume that we have a 15-min training period followed by a 15-min testing period. Assume that file sizes of 32 kB, 512 kB, and 8 MB are transferred during the training period, using 10 samples of each file, and that each file transfer is preceded by a 30-s BADABING measurement. With a probe probability of 0.3, BADABING traffic for each measurement is about 1.5 MB. For testing, only BADABING measurements must be ongoing. Assume that a 30-s BADABING measurement is initiated every 3 min. Thus, over the 15-min training period, about 130 MB of measurement traffic is produced, resulting in an average bandwidth of about 1.2 Mb/s for the first 15 min. For the testing period, a total of 7.5 MB of measurement traffic is produced, resulting in a rate of about 66 kb/s. Overall, PathPerf produces 633 kb/s on average over the 30-min measurement period, dramatically different from a standard history-based measurement approach. Even if more conservative assumptions are made for the history-based approach, the differences in overhead are significant. Again, the reason for the dramatic savings is that once the SVR predictor has been trained, only lightweight measurements are required for accurate predictions.

Detecting Problems in Estimation: An important capability for throughput estimation in live deployments is to detect when there are significant estimation errors. Such errors could be indicative of a change in network routing, causing an abrupt change in delay, loss, and throughput. They could also signal a pathological network condition, such as an ongoing denial-of-service attack leading to endemic network loss along a path. On the other hand, they may simply be measurement outliers with no network-based cause.

As discussed in Section V-A3, normality allows us to use standard statistical machinery to compute confidence intervals (i.e., using the measured variance of prediction error). We show that prediction errors are consistent with a normal distribution and further propose using confidence intervals as a mechanism for triggering retraining of the SVR as follows. Assume that we have trained the SVR predictor over n measurement periods (i.e., we have n throughput samples and n samples of Q and L). Assume that we then collect m additional throughput samples, making predictions for each sample and recording the error. We therefore have m error samples between what was predicted and what was subsequently measured. Given a confidence level, e.g., 95%, we can calculate confidence intervals on the sample error distribution. We can then, with low frequency, collect additional throughput samples to test whether the prediction error exceeds the interval bounds. (Note that these additional samples may be application traffic for which predictions are used.) If so, we may decide that retraining the SVR predictor is appropriate. A danger in triggering an immediate retraining is that such a policy may be too sensitive to outliers regardless of the confidence interval chosen. More generally, we can consider a threshold of k consecutive prediction errors that exceed the computed confidence interval bounds as a trigger for retraining.

Predictor Portability: In this paper, since we have only considered predictors trained and tested on the same path, a number of features, such as the identity of a path and its MTU, are implicit. This makes SVR simpler to use for path-specific prediction because many path details that would be required by other approaches, such as formula-based prediction, can safely be hidden from the user. If two paths are similar in all features, implicit and explicit, then a predictor trained on one path using only Q and L (and possibly AB) can be used directly for the other path. If paths differ in implicit features, then those features will have to be added explicitly to the predictor to allow the predictor to be portable. Fortunately, SVR extends naturally to include new features. If a complete set of features affecting TCP throughput can be compiled, and a wide enough set of consistent training data is available, then a universal SVR predictor that works on just about any wireline Internet path might be a possibility. A potential challenge here is identifying and measuring all factors (path properties, TCP flavors, operating system parameters) that might affect throughput. However, this challenge can be surmounted to a large extent by active path measurements, using TCP header fields, and parsing operating system configuration files to include operating parameters as SVR features.

ACKNOWLEDGMENT

The authors would like to thank D. Andersen for providing access to the RON testbed and A. Bizzaro for help with the WAIL lab testbed. They also thank the anonymous reviewers for their helpful suggestions.

REFERENCES

[1] A. Abouzeid, S. Roy, and M. Azizoglu, "Stochastic modeling of TCP over lossy links," in Proc. IEEE INFOCOM, Tel-Aviv, Israel, Mar. 2000, vol. 3, pp. 1724–1733.
[2] E. Altman, K. Avrachenkov, and C. Barakat, "A stochastic model of TCP/IP with stationary random loss," in Proc. ACM SIGCOMM, Aug. 2000, pp. 231–242.
[3] D. G. Andersen, H. Balakrishnan, M. Frans Kaashoek, and R. Morris, "Resilient overlay networks," in Proc. 18th ACM SOSP, Banff, AB, Canada, Oct. 2001, pp. 131–145.
[4] M. Arlitt, B. Krishnamurthy, and J. Mogul, "Predicting short-transfer latency from TCP arcana: A trace-based validation," in Proc. ACM IMC, Berkeley, CA, Oct. 2005, p. 19.
[5] H. Balakrishnan, S. Seshan, M. Stemm, and R. Katz, "Analyzing stability in wide-area network performance," in Proc. ACM SIGMETRICS, Seattle, WA, Jun. 1997, pp. 2–12.
[6] P. Barford and M. Crovella, "Critical path analysis of TCP transactions," in Proc. ACM SIGCOMM, Stockholm, Sweden, Aug. 2000, pp. 127–138.
[7] R. Beverly, K. Sollins, and A. Berger, "SVM learning of IP address structure for latency prediction," in Proc. SIGCOMM Workshop Mining Netw. Data, Pisa, Italy, Sep. 2006, pp. 299–304.
[8] N. Cardwell, S. Savage, and T. Anderson, "Modeling TCP latency," in Proc. IEEE INFOCOM, Tel-Aviv, Israel, Mar. 2000, vol. 3, pp. 1742–1751.
[9] "Cooperative association for Internet data analysis," 2006 [Online]. Available: http://www.caida.org/tools
[10] M. Goyal, R. Guerin, and R. Rajan, "Predicting TCP throughput from non-invasive network sampling," in Proc. IEEE INFOCOM, New York, Jun. 2002, vol. 1, pp. 180–189.
[11] Q. He, C. Dovrolis, and M. Ammar, "On the predictability of large transfer TCP throughput," in Proc. ACM SIGCOMM, Philadelphia, PA, Aug. 2005, pp. 145–156.
[12] K. Ilgun, R. Kemmerer, and P. Porras, "State transition analysis: A rule-based intrusion detection approach," IEEE Trans. Softw. Eng., vol. 21, no. 3, pp. 181–199, Mar. 1995.
[13] V. Jacobson and M. Karels, "Congestion avoidance and control," in Proc. ACM SIGCOMM, Palo Alto, CA, Aug. 1988, pp. 314–329.
[14] M. Jain and C. Dovrolis, "End-to-end available bandwidth: Measurement methodology, dynamics, and relation with TCP throughput," IEEE/ACM Trans. Networking, vol. 11, no. 4, pp. 537–549, Aug. 2003.
[15] T. Joachims, "Making large-scale SVM learning practical," in Advances in Kernel Methods—Support Vector Learning, B. Schölkopf, C. Burges, and A. Smola, Eds. Cambridge, MA: MIT Press, 1999.
[16] D. Lu, Y. Qiao, P. Dinda, and F. Bustamante, "Characterizing and predicting TCP throughput on the wide area network," in Proc. 25th IEEE ICDCS, Columbus, OH, Jun. 2005, pp. 414–424.
[17] M. Mathis, J. Semke, J. Mahdavi, and T. Ott, "The macroscopic behavior of the TCP congestion avoidance algorithm," Comput. Commun. Rev., vol. 27, no. 3, pp. 67–82, Jul. 1997.
[18] J. Padhye, V. Firoiu, D. Towsley, and J. Kurose, "Modeling TCP throughput: A simple model and its empirical validation," in Proc. ACM SIGCOMM, Vancouver, BC, Canada, Sep. 1998, pp. 303–314.
[19] V. Paxson, "End-to-end Internet packet dynamics," in Proc. ACM SIGCOMM, Cannes, France, Sep. 1997, pp. 139–152.
[20] V. Paxson, "Measurements and analysis of end-to-end Internet dynamics," Ph.D. dissertation, Univ. California, Berkeley, 1997.
[21] B. Schölkopf and A. J. Smola, Learning With Kernels. Cambridge, MA: MIT Press, 2002.
[22] A. Shriram, M. Murray, Y. Hyun, N. Brownlee, A. Broido, M. Fomenkov, and K. Claffy, "Comparison of public end-to-end bandwidth estimation tools on high-speed links," in Proc. PAM, 2005, pp. 306–320.
[23] A. J. Smola and B. Schölkopf, "A tutorial on support vector regression," Stat. Comput., vol. 14, pp. 199–222, 2004.
[24] J. Sommers and P. Barford, "Self-configuring network traffic generation," in Proc. ACM IMC, Taormina, Italy, Oct. 2004, pp. 68–81.
[25] J. Sommers, P. Barford, N. Duffield, and A. Ron, "Improving accuracy in end-to-end packet loss measurements," in Proc. ACM SIGCOMM, Philadelphia, PA, Aug. 2005, pp. 157–168.
[26] J. Sommers, P. Barford, and W. Willinger, "A proposed framework for calibration of available bandwidth estimation tools," in Proc. IEEE ISCC, Jun. 2006, pp. 709–718.
[27] J. Strauss, D. Katabi, and F. Kaashoek, "A measurement study of available bandwidth tools," in Proc. ACM IMC, Miami, FL, Nov. 2003, pp. 39–44.
[28] M. Swany and R. Wolski, "Multivariate resource performance forecasting in the network weather service," in Proc. Supercomput., Baltimore, MD, Nov. 2002, pp. 1–10.
[29] V. N. Vapnik, The Nature of Statistical Learning Theory, 2nd ed. New York: Springer, 1995.
[30] S. Vazhkudai, J. Wu, and I. Foster, "Predicting the performance of wide area data transfers," in Proc. IEEE IPDPS, Fort Lauderdale, FL, Apr. 2002, pp. 34–43.
[31] Y. Zhang, L. Breslau, V. Paxson, and S. Shenker, "On the characteristics and origins of Internet flow rates," in Proc. ACM SIGCOMM, Pittsburgh, PA, Aug. 2002, pp. 309–322.
[32] Y. Zhang, N. Duffield, V. Paxson, and S. Shenker, "On the constancy of Internet path properties," in Proc. Internet Meas. Workshop, San Francisco, CA, Oct. 2001, pp. 197–211.

Mariyam Mirza received the B.S.E. degree in computer science from Princeton University, Princeton, NJ, in 2002. She is a graduate student of computer science at the University of Wisconsin–Madison. Her research interests include network measurement and performance evaluation.

Joel Sommers (M'09) received the Ph.D. degree in computer science from the University of Wisconsin–Madison in 2007. He is an Assistant Professor of computer science at Colgate University, Hamilton, NY. His current research focus is the measurement and analysis of networked systems and network traffic.

Paul Barford received the B.S. degree in electrical engineering from the University of Illinois at Urbana-Champaign in 1985, and the Ph.D. degree in computer science from Boston University, Boston, MA, in December 2000. He is an Associate Professor of computer science at the University of Wisconsin–Madison. He is the Founder and Director of the Wisconsin Advanced Internet Laboratory, and his research interests are in the measurement, analysis, and security of wide-area networked systems and network protocols.

Xiaojin Zhu received the B.Sc. degree in computer science from Shanghai Jiao Tong University, Shanghai, China, in 1993, and the Ph.D. degree in language technologies from Carnegie Mellon University, Pittsburgh, PA, in 2005. He is an Assistant Professor of computer science at the University of Wisconsin–Madison. From 1996 to 1998, he was a Research Member with IBM China Research Laboratory, Beijing, China. His research interests are statistical machine learning and its applications to natural language processing.