Stênio Fernandes

Performance Evaluation for Network Services, Systems and Protocols

Foreword by Professor Antonio Pescapè
ISBN 978-3-319-54519-6    ISBN 978-3-319-54521-9 (eBook)
DOI 10.1007/978-3-319-54521-9
Library of Congress Control Number: 2017932749
© Springer International Publishing AG 2017
This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of
the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation,
broadcasting, reproduction on microfilms or in any other physical way, and transmission or information
storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology
now known or hereafter developed.
The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication
does not imply, even in the absence of a specific statement, that such names are exempt from the relevant
protective laws and regulations and therefore free for general use.
The publisher, the authors and the editors are safe to assume that the advice and information in this book
are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the
editors give a warranty, express or implied, with respect to the material contained herein or for any errors
or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims
in published maps and institutional affiliations.
Printed on acid-free paper
This Springer imprint is published by Springer Nature
The registered company is Springer International Publishing AG
The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland
Stênio Fernandes
Centro de Informática
Universidade Federal de Pernambuco
Recife, Pernambuco, Brazil
Foreword
The advancement of computer networks in recent years has been remarkable. The processes that were commonly used to operate these networks have quickly become obsolete, giving way to faster and better technology and, ultimately, better computer networks. The increased use of better virtualization technologies has brought about a collaborative effort to improve the operation of computer network systems, the services they render, and other important protocols. The result is very impressive, meeting the expectations of consumers in terms of reliability and speed. The changes experienced in network virtualization (NV), software-defined networking (SDN), network functions virtualization (NFV), and other similar fields have made it important to focus more on the means through which actual performance can be determined and evaluated for newer innovations. These performance evaluations are crucial to service providers so they can design and plan their future networks.
I can confirm the importance of this book to the entire computer networking
research and industrial community. I have played an active role in the research com-
munity of network monitoring and measurement since 2000. I have made useful
contributions in the fields of traffic analysis and modeling, traffic classification,
traffic generation, performance monitoring, network security, and also cloud and
SDN monitoring. I am a full professor at the University of Naples Federico II, where
I lecture students on computer networks and analysis of Internet performance. I
have coauthored over 180 research papers published in international journals (e.g.,
IEEE/ACM Transactions on Networking, Communications of the ACM, IEEE TPDS,
IEEE TNSM, Computer Networks, etc.) and conferences (e.g., SIGCOMM, NSDI,
Infocom, IMC, PAM, Globecom, ICC, etc.). I have been honored with a Google
Faculty Award, several best paper awards, Microsoft and Amazon research grants,
and two IRTF (Internet Research Task Force) ANRP (Applied Networking Research
Prize) awards.
Prof. Stênio Fernandes has a solid record of research publications in the field of computer communications and networks. He has published over 120 research papers in international peer-reviewed conferences and journals. His research interests cover the crucial aspects of performance evaluation of network and communication systems and Internet traffic measurement, modeling, and analysis. I can affirm that this book is a reflection of his experience with academic and industrial research projects related to the performance evaluation of computer networks. In this book, Prof. Stênio Fernandes gives a comprehensive perspective on the methods that are used to accurately evaluate the performance of modern computer networks. The crucial and advanced features of performance evaluation techniques are clearly explained, in a way that shows the reader how to conduct the right evaluation plans. Taking excerpts from the scientific literature, this book addresses the most relevant aspects of experimentation, simulation, and analytical modeling of modern networks. Readers will gain a better understanding of applied statistics in computer networking and of how theory and best practices in the field intersect. This book also identifies the current challenges that industrial and academic researchers face in their work, as well as the potential for further innovation in this field.
Antonio Pescapè
University of Naples Federico II
Naples, Italy
Acknowledgments
I have nursed the dream of writing this book for a very long time. My position as a
member of technical program committees serving for a large number of important
scientific conferences in the computer networking field has given me the opportu-
nity to witness in wonder the number of excellently written papers that have been
rejected due to lapses and lack of rigor in their performance evaluation and analysis.
It is common to see authors come up with brilliant ideas, but they fail to scientifi-
cally prove the validity of these ideas. A poor performance evaluation casts doubt on a paper's claims about its contributions and relevance to the field.
The case is the same for scientific journals; I have been privileged to act as a referee
for many important journals in the field of computer networks and communications.
Going through the exhibition area during a scientific conference, I met Susan
Lagerstrom-Fife, an editor (Computer Science) at Springer, USA. After the usual
pleasantries, I asked her about the requirements needed to write a book for Springer.
I got some useful information and took action, and I can happily say that this book
is the result of that productive conversation. I would like to thank Susan and her
assistant Jennifer Malat for guiding me along this long road.
Writing this book was an interesting and difficult experience. I often experienced what writers call "writer's block." Now I know how real it is, and I can
confirm that it is not a very happy experience. I was able to overcome this challenge
by reading good books on focus and productivity. I owe a lot of my success in over-
coming this challenge to Barbara Oakley whose course “Learning How to Learn”
on Coursera played a vital role in helping me develop my mind and sharpen my
skills at a higher level. I was very happy to have the opportunity to thank her in
person when she came to give a talk at Carleton University, Ottawa in Canada, in
May 2016. I will not stop expressing my sincere gratitude to her for putting out all
that useful information for free.
Communicating your ideas to a diverse audience is not a very easy task. Writing
a book chapter that entails the reviews of essential concepts of statistics was difficult
to organize and deliver. I would like to thank Alexey Medvedev, who has a PhD in
mathematics (2016) from Central European University, for assessing all the equa-
tions and mathematical concepts in that chapter.
I would also like to thank all my colleagues from universities around the world,
most especially from the Universidade Federal de Pernambuco (Brazil), the
University of Ottawa (Canada), and Carleton University (Canada) for the encour-
agements and kind support that helped me finish this book. I send special thanks to
my former supervisor Professor Ahmed Karmouch (University of Ottawa) and my
colleague Professor Gabriel Wainer (Carleton University). I also would like to
extend my sincere gratitude to many network engineers I met at the Internet
Engineering Task Force meetings between 2014 and 2017; I would like to thank
them for the support and tips they offered me while I was writing this book.
Finally, I would like to thank my family and friends for showing their concerns
with the regular question – "How's the book writing going?" Many of the challenges I faced while writing this book sometimes made me unavailable and
impatient. I promise to catch up with you all over coffee, wine, music concerts, and
physical activities. This book would not have been possible without the love, sup-
port, and appreciation of my work expressed by my wife Nina and my children
Victor and Alice. I also wish to dedicate this book to my mother Penha and my
father (in memoriam) Fernando.
Contents

1  Principles of Performance Evaluation of Computer Networks ....  1
   1.1  Motivation: Why Do We Need to Assess the Performance of Computer Networks? ....  1
   1.2  Classical and Modern Scenarios: Examples from Research Papers ....  2
        1.2.1  Performance Evaluation in Classical Scenarios ....  3
        1.2.2  Performance Evaluation in Modern Scenarios ....  16
   1.3  The Pillars of Performance Evaluation of Networking and Communication Systems ....  30
        1.3.1  Experimentation/Prototyping, Simulation/Emulation, and Modeling ....  30
        1.3.2  Supporting Strategies: Measurements ....  39
   References ....  40

2  Methods and Techniques for Measurements in the Internet ....  45
   2.1  Passive vs. Active vs. Hybrid Measurements ....  47
   2.2  Traffic Measurements: Packets, Flow Records, and Aggregated Data ....  51
   2.3  Sampling Techniques for Network Management ....  54
   2.4  Internet Topology: Measurements, Modeling, and Analysis ....  57
        2.4.1  Internet Topology Resolution ....  59
        2.4.2  Internet Topology Discovery: Tools, Techniques, and Datasets ....  61
   2.5  Challenges for Traffic Measurements and Analyses in Virtual Environments ....  64
        2.5.1  Cloud Computing Environments ....  64
        2.5.2  Virtualization at Network Level ....  67
   2.6  Bandwidth Estimation Methods ....  69
   References ....  70

3  A Primer on Applied Statistics in Computer Networking ....  75
   3.1  Statistics and Computational Statistics ....  75
   3.2  I'm All About That Data ....  78
   3.3  Essential Concepts and Terminology ....  80
   3.4  Descriptive Statistics ....  82
        3.4.1  I Mean It (Or Measures of Centrality) ....  83
        3.4.2  This Is Dull (Or Measures of Dispersion) ....  83
        3.4.3  Is It Paranormally Distributed? (Or Measures of Asymmetry and Tailedness) ....  86
   3.5  Inferential Statistics ....  89
        3.5.1  Parameter Estimation: Point vs. Interval ....  90
        3.5.2  Estimators and Estimation Methods ....  95
   3.6  The Heavy-Tailed Phenomenon ....  99
        3.6.1  Outlier Detection ....  100
        3.6.2  Heavy-Tailed Distributions and Its Variations (Subclasses) ....  102
        3.6.3  Evidence of Heavy-Tailedness in Computer Networks ....  107
   References ....  111

4  Internet Traffic Profiling ....  113
   4.1  Traffic Analysis ....  114
        4.1.1  Identification and Classification ....  114
        4.1.2  Techniques, Tools, and Systems for Traffic Profiling ....  118
   4.2  Industrial Approach for Traffic Profiling: Products and Services ....  130
   4.3  Traffic Models in Practice ....  132
        4.3.1  Workload Generators ....  133
   4.4  Simulation and Emulation ....  137
        4.4.1  Discrete-Event Simulation and Network Simulation Environments ....  140
        4.4.2  Practical Use of Network Simulators and Traffic Profiles ....  147
   References ....  148

5  Designing and Executing Experimental Plans ....  153
   5.1  Designing Performance Evaluation Plans: Fundamentals ....  153
   5.2  Design of Experiments (DoE) ....  156
        5.2.1  The DoE Jargon ....  159
        5.2.2  To Replicate or to Slice? ....  161
   5.3  DOE Options: Choosing a Proper Design ....  164
        5.3.1  Classification of DOE Methods ....  165
        5.3.2  Notation ....  166
   5.4  Experimental Designs ....  166
        5.4.1  2^k Factorial Designs (a.k.a. Coarse Grids) ....  166
        5.4.2  2^(k-p) Fractional Factorial Designs ....  167
        5.4.3  m^k Factorial Designs (a.k.a. Finer Grids) ....  167
   5.5  Test, Validation, Analysis, and Interpretation of DOE Results ....  167
   5.6  DOEs: Some Pitfalls and Caveats ....  168
   5.7  DOE in Computer Networking Problems ....  169
        5.7.1  General Guidelines ....  169
        5.7.2  Hands-On ....  174
   References ....  174
Chapter 1
Principles of Performance Evaluation of Computer Networks
In this chapter, I give you an overview of the technical aspects necessary for a comprehensive performance evaluation of computer networks. First, I provide several examples of how an accurate performance evaluation plan is essential for discovering and revealing new phenomena in every nuance of the software and hardware components of a given network scenario. Then, I give an overview of the pillars of performance evaluation for both simple and more sophisticated networking scenarios. I discuss research and development challenges regarding measurements, experimentation and prototyping, simulation and emulation, and modeling and analysis.
1.1  Motivation: Why Do We Need to Assess the Performance of Computer Networks?
Performance evaluation on the Internet is a daunting challenge. There are too many
elements to observe, from concrete components (e.g., communication links,
switches, routers, and servers) to more abstract ones (e.g., packets, protocols, and
virtual machines). And within most of those elements, there will be mechanisms,
deployed as pieces of codes, that might utilize resources unpredictably. When a
researcher or engineer in the networking field needs to assess the performance of a
certain system, she possibly has a clue of what needs to be assessed. In most cases, she only needs to design a proper performance evaluation plan to obtain meaningful
results and derive insights or answer questions from them. However, designing a
sound performance evaluation plan will likely raise a number of questions, such as:
Why do we need to assess the performance of this networked system? What to mea-
sure? Where – system wise – to measure? When to measure? Which time scale is
more appropriate for a particular measurement and analysis? Where  – network
wise – to place the measurement points? Will such measurement points interfere in
the collected metrics? How about the proper workload? How do we generate it?
Do we need to repeat the experiments? If so, how many times? Is sampling
acceptable for the given case? Do we need to derive an analytical model from the
measurements? Can we use such a derived analytical model to predict the behavior
of the networked system? If so, how far in the future? Will this performance analysis
be based on experimentation or simulation? Why? Which one is better for the given
scenario? Should we consider active measurements? Are passive measurements suf-
ficient? You’ve got the point.
It is easy to see that an accurate performance evaluation of any computing system must be carefully designed and undertaken. For a general performance evaluation of computer systems, Raj Jain's classic The Art of Computer Systems Performance Analysis: Techniques for Experimental Design, Measurement, Simulation, and Modeling [1] has become the main source of information over the last two decades. If you are more mathematically inclined, Yves Le Boudec's book [2]
would add value to your performance evaluation design and analyses. In this book,
we stay in the middle by balancing between a more pragmatic approach (from the
academia and industry point of view) and stepping on some solid ground (from the
statistical perspective).
1.2  Classical and Modern Scenarios: Examples from Research Papers
In this section, we give some examples of well-conducted performance evaluation
studies in the computer networking field. By classical (traditional) we mean topics that were deeply explored in the past and currently attract little attention from researchers. This is the case when there is almost no room left for improvement or for new contributions to the topic. For instance, congestion control mechanisms
and protocols were deeply explored in the past. Thousands of research papers have already been published, but there are still some new challenging scenarios that need further investigation. Similarly, peer-to-peer networking protocols and
mechanisms gained lots of attention in the past decades. They both can surely be
considered traditional or classical scenarios. Modern scenarios (or advanced)
would include studies with a strong paradigm shift on networking, such as the ones
related to network virtualization (NV), cloud computing, software-defined net-
working (SDN), network functions virtualization (NFV) and service function
chaining (SFC), and the like. The following research papers highlight the impor-
tance of a good performance evaluation design as well as the results presentation.
We bring the reader examples from top conferences and journals and show the authors' possible rationale for coming up with sound performance analyses. We hope those examples will make it clear that the chances of publishing in good scientific conferences and journals are higher when the paper has a sound and convincing performance evaluation. Network engineers and designers will also benefit from those examples, since they might be required to bring a performance analysis
of the networks they manage.
1.2.1  Performance Evaluation in Classical Scenarios

1.2.1.1  Application Layer
In this subsection, we present a couple of examples of well-designed performance evaluation plans for application-layer protocols. We also present in detail how the
experiments were conducted and some selected results to highlight performance
metrics, parameterization (factors and levels), as well as precise forms of results
presentation.
In the paper Can SPDY Really Make the Web Faster? [3], Elkhatib, Tyson, and
Welzl provide a thorough experimental-based performance evaluation of a recent
protocol called SPDY [4], which served as the starting point for the development of
the HTTP/2.0 [5]. The whole Internet community (e.g., users, developers, research-
ers) had been claiming for improvements in the web surfing experience, due to the
ever-increasing development of new services and applications that require timely
communications (i.e., low latencies). The authors started the paper arguing that the
ever-increasing complexity of the web pages is likely to affect their retrieval times.
Some subset of users is more tolerant to delay than others, especially the ones
engaged in online shopping. The fundamental research questions the authors were
trying to answer was if the proposed new version of the HTTP protocol is a leap-­
forward technology or just “yet another protocol” with small performance improve-
ments. In their words, does it offer a fundamental improvement or just further
tweaking? Additional questions were raised in the paper as their experiments
showed some network parameterization severely affecting the protocol behavior
and performance. There were two essential arguments for conducting such a study, namely: (i) previous work only gave a shallow understanding of SPDY performance, and (ii) the only conclusions so far were that SPDY performance is highly dependent on several factors and is highly variable. Therefore, in order to gain in-depth knowledge of SPDY performance, they conducted experiments in real uncontrolled environments, i.e., SPDY client software (e.g., Chromium¹) and real deployed servers, such as YouTube and Twitter, as well as in controlled environments, using open-source released versions of SPDY servers (e.g., Apache with a mod_spdy² module installed). Details of the experiments can be found in the origi-
nal paper [3]. As a complementary evaluation, in the paper Performance Analysis of
SPDY Protocol in Wired and Mobile Networks [77], the authors evaluated SPDY performance in several wireless environments, namely 3G, WiBro, and WLAN networks, as well as with different web browsers (Chromium-based and Firefox). SPDY performance varies with factors such as the network access technology (e.g., 3G, 802.11).

¹ http://www.chromium.org/.
² https://code.google.com/archive/p/mod-spdy/.
Both experiment design rationales highlight the importance of making the right
decisions to get sound and meaningful results for further analysis. First, in [3], the
authors selected an appropriate set of measurement tools to facilitate further analysis. Then, a discussion of the adequate performance metric was conducted, in which
they suggested the use of a new metric (i.e., time of wire (ToW), captured at the
network level) to avoid the inclusion of web browser processing times. In other
words, the ToW performance metric means the time between the departure of the
first HTTP request and the arrival of the last packet from the web server. Second, in
the measurement methodology, the authors decided to separate the experiments into
two classes, namely, wild and controlled. When dealing with real protocols, it is
important to evaluate how they would behave in the actual environment where the
experimenter would not be able to control most of its parameters. However, such
limitations impose severe restrictions on the concluding remarks that can be drawn,
since a number of assumptions might be wrong. In this particular case, it is clear that the controlled set of experiments was needed, since variations in network conditions (e.g., server load, path available bandwidth, path delay, and packet loss ratios) could not be precisely monitored. In the Live Tests experiments, the authors sampled the most accessed websites, selecting the most representative ones (i.e., the top eight websites from Alexa³). They also collected enough samples to ensure statisti-
cal significance of results. For instance, they collected over one million HTTP GET
requests from one site for 3 days. In the Controlled Tests, they deployed the usual
Linux network emulator (NetEm) and Linux traffic control (tc) in a high-speed local
network environment. Both tools were used to control delays and packet loss ratios,
as well as to shape the sending rate, thus mimicking the control of the available
bandwidth. There are some comments on the use of sampling strategies, but no
further details were given. In [77], the authors used the usual page download time.
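To make the ToW metric concrete, the sketch below shows how it could be computed from a parsed packet trace. This is only an illustration under stated assumptions, not the authors' tooling: it assumes packets have already been parsed into (timestamp, kind) tuples, and the tiny trace is synthetic.

```python
# Minimal ToW computation sketch (illustrative; not the measurement tools
# used in [3]). Each packet is a (timestamp, kind) tuple: kind is "request"
# for outgoing HTTP requests, "response" for packets from the web server.

def time_of_wire(packets):
    """ToW = arrival of the last server packet minus departure of the first request."""
    first_request = min(t for t, kind in packets if kind == "request")
    last_response = max(t for t, kind in packets if kind == "response")
    return last_response - first_request

# A tiny synthetic trace (timestamps in seconds).
trace = [(0.000, "request"), (0.045, "response"), (0.051, "request"),
         (0.140, "response"), (0.322, "response")]
print(f"ToW = {time_of_wire(trace):.3f} s")  # -> ToW = 0.322 s
```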
Preliminary analysis of the results in both papers shows that deploying SPDY does not necessarily imply performance gains. There are some cases where performance deteriorates, as Fig. 1.1 shows.
As there is no way to understand why this is happening, in [3] the authors have
strong arguments to conduct controlled experiments. Therefore, a series of experi-
ments were presented to show the effect of network conditions on the performance
of SPDY. They used the major performance factors, namely, delay, available band-
width, and loss. Levels of the factors were set as follows: (i) delay, from 10 to 490 ms; (ii) available bandwidth, from 64 kbps to 8 Mbps; and (iii) packet loss ratio (PLR), from 0% to 3%. To isolate the effects of the other factors, in each experiment varying a particular factor, they kept the other factors' levels fixed. For instance, when conducting experiments to understand the impact of bandwidth on SPDY performance, they fixed RTT at 150 ms and PLR at 0%. It is worth
emphasizing that a combination of levels for each factor could be taken for a more
detailed and extensive experimentation. This is essentially a design decision that the experimenter must clearly state and provide reasonable arguments for.
Sometimes there is simply a lack of space, as in the case of research papers with a limited number of pages. In other cases, it might make no sense to run a full factorial experiment. In statistics, full factorial means the use of all possible combinations of the factors' levels. In the case of real SPDY deployment scenarios, it is highly unlikely that a combination of high bandwidth, high PLR, and high delay will make any sense.

³ www.alexa.com.
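The one-factor-at-a-time design described above is easy to express programmatically. The following sketch (my illustration, not the authors' scripts) enumerates the runs and contrasts their count with a full factorial design; the exact levels are assumptions based on the ranges quoted above, and the netem command template only indicates how such conditions are typically emulated on Linux.

```python
# Illustrative experiment-plan enumeration for the controlled SPDY tests.
from itertools import product

delays_ms  = [10, 50, 150, 250, 490]        # RTT levels (10-490 ms range)
bandwidths = ["64kbit", "1mbit", "8mbit"]   # available-bandwidth levels
plr_pct    = [0.0, 0.5, 1.0, 3.0]           # packet loss ratio levels (0-3%)
baseline   = {"delay": 150, "bw": "8mbit", "plr": 0.0}

# One-factor-at-a-time: vary a single factor, keep the others at the baseline.
runs = ([{**baseline, "delay": d} for d in delays_ms]
        + [{**baseline, "bw": b} for b in bandwidths]
        + [{**baseline, "plr": p} for p in plr_pct])

# A full factorial design would instead take every combination of levels.
factorial = [{"delay": d, "bw": b, "plr": p}
             for d, b, p in product(delays_ms, bandwidths, plr_pct)]
print(len(runs), "one-factor-at-a-time runs vs", len(factorial), "factorial runs")

def netem_cmd(run, dev="eth0"):
    """Delay/loss part of a Linux netem setup; rate shaping would be
    added separately (e.g., with tc tbf)."""
    return f"tc qdisc add dev {dev} root netem delay {run['delay']}ms loss {run['plr']}%"
```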
The next three figures (Figs. 1.2, 1.3, and 1.4) illustrate the effect of RTT, band-
width, and PLR on the ToW reduction, respectively. The ToW reduction perfor-
mance metric means the percentage of improvement in the performance metric
ToW of SPDY over HTTPS.
The concluding remarks from these results are: (i) in the RTT experiments, SPDY always performs better than HTTP, especially in high-delay environments; (ii) SPDY performs better in mid- to low-bandwidth scenarios; and (iii) an increase in PLR severely impacts SPDY performance. The authors provided detailed justifications for the behavior of SPDY in all scenarios. The main conclusion is that SPDY may not perform that well in mobile settings. They go a step
further in their experiments by showing the impact of the server-side infrastructure
(e.g., domain sharding) on the ToW reduction.
In conclusion, it is important to understand why this paper is sound. First, the
authors presented several clear arguments for the decisions they made regarding the
experimental plan design, which included performance metrics, factors, and levels.
Second, they carefully selected the measurement tools as well as made sure they
would get enough data for the sake of statistical significance. They also used a variety of ways to present results, such as tables, XY (scatter) plots, empirical cumulative distribution function (ECDF) plots, bar plots, and heat maps. Such an approach makes the paper more exciting (or less boring, if you will) to read. Last, but not least, they drew conclusions solely from the results, along with reasonable qualitative explanations.
Fig. 1.1  Time of wire (ToW) – SPDY websites (Source: Elkhatib et al. [3])
Fig. 1.2  Effect of RTT on ToW reduction (Source: Elkhatib et al. [3])
Fig. 1.3  Effect of bandwidth on ToW reduction (Source: Elkhatib et al. [3])
1.2.1.2  Transport Layer
Now we present an example of a careful and accurate performance evaluation study
for transport-layer protocol design and analysis in cellular networks. We present
details of how the authors provide solid arguments to support the development of a
new TCP-like protocol, even though the network research community has been doing
this for decades and has produced hundreds of research papers. We also show how
they design their experiments to validate the new protocol and present some selected
results. We highlight the performance metrics they used, as well as factors and levels.
In the thesis Adaptive Congestion Control for Unpredictable Cellular Networks
[6], Thomas Pötsch proposes and shows the rationale of Verus, a delay-based end-to-end congestion control protocol that is quick and accurate enough to cope with highly variable network conditions in cellular networks. He used a mix of real prototyping and simulations to evaluate Verus' performance in a variety of scenarios.
Pötsch argues that most TCP flavors, employing different congestion control mechanisms, fail to show good performance in cellular networks, mainly due to their inability to cope well with highly variable available bandwidth, varying queuing delays, and non-congestion-related stochastic packet losses. The main causes of such variability at the network level lie in the underlying link and physical layers. He highlights the four main causes as (i) the state of the cellular channel, (ii) the frame scheduling algorithms, (iii) device mobility, and, surprisingly, (iv) competing traffic. Some of
these causes have a different impact on channel characteristics, as some are more prone to affect delays, whereas others might cause burstiness in the perceived channel capacity at the receiver. It is indeed tough to develop precise models, algorithms, and mechanisms to track short- and long-term channel dynamics [7]. Therefore, he developed a simple yet efficient delay profile model to correlate the congestion control variable "sending window size" with the measured end-to-end delay. In essence, with a small modification of the additive-increase (AI) portion of the additive-increase/multiplicative-decrease (AIMD) mechanism present in most TCP flavors, Verus is able to quickly adapt to changing channel conditions at several timescales.

Fig. 1.4  Effect of packet loss on ToW reduction (Source: Elkhatib et al. [3])
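To make the AIMD baseline concrete, the sketch below shows the textbook window update shared by most TCP flavors. This is not Verus' code: Verus replaces the fixed additive step with a window chosen from its learned delay profile.

```python
# Textbook AIMD congestion window update (a baseline illustration, not Verus).
def aimd_update(cwnd, loss_detected, alpha=1.0, beta=0.5):
    if loss_detected:
        return max(1.0, cwnd * beta)   # multiplicative decrease on loss
    return cwnd + alpha                # additive increase per RTT

cwnd = 10.0
for rtt, loss in enumerate([False, False, True, False]):
    cwnd = aimd_update(cwnd, loss)
    print(f"after RTT {rtt}: cwnd = {cwnd}")  # 11.0, 12.0, 6.0, 7.0
```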
He used essential performance metrics to study the overall performance of Verus
against some TCP flavors, such as Cubic [8], New Reno [9], Vegas [10], and Sprout
[11] (a recent TCP-like proposal for wireless environments). Of these TCP flavors, Sprout is the only one explicitly designed for cellular networks. Pötsch kept the most popular TCP flavors currently deployed on the Internet (New Reno and Cubic) and discarded all the other legacy ones. In addition, he brings arguments for keeping out of the evaluation those protocols that either need or rely on explicit feedback from the network layer, such as the use of Explicit Congestion Notification (ECN) [12].
In [75], Fabini et al. show how an HSPA downlink presents high delay variability (cf. Figs. 1.5 and 1.6). Thomas Pötsch [6] also shows that 3G and LTE networks do not isolate channels properly (cf. Fig. 1.7).
One interesting finding here is related to cellular channel isolation. The authors
discuss that the assumption of channel isolation by means of queue isolation does
not hold in the case of high traffic demands (as Fig. 1.6 shows). The authors also
show channel unpredictability in different timescales. It is worth emphasizing that
in small timescales, the variability effect is more prominent (cf. Fig. 1.7) [76].
Fig. 1.5  Burstiness on latency in cellular networks (Source: Pötsch [6])
Fig. 1.6  Impact of user traffic on packet delay (Source: Pötsch [6])
Fig. 1.7  Traffic received from a 3G downlink (100 and 20 ms windows) (Source: Pötsch [6])
We will not give details of the development of the Verus protocol and will only
focus on performance evaluation. However, we give the reader a glimpse of the
Verus design rationale.
The delay profile is the main component of the congestion control mechanism. It is built on four basic components, namely, the delay estimator, delay profiler, window estimator, and loss handler. The delay estimator just keeps track of the received
packet delays in a given timeframe, whereas the delay profiler resembles a regres-
sion model for correlating the sending window to the delay estimate (cf. Fig. 1.8).
The window estimator is a bit more complex, and it aims at providing information
for calculation of the number of outstanding packets in the network in the following
timeframe (called epoch). The loss handler tracks losses to be used in the loss
recovery phase as in any legacy TCP mechanism.
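The delay-profile idea can be pictured with a toy sketch: collect (window, delay) samples, fit a simple curve, and invert it to pick the largest window whose predicted delay stays below a target. This is a deliberate simplification for illustration, not Pötsch's actual estimator; the polynomial fit and the synthetic data are assumptions.

```python
# Toy delay-profile sketch (simplified illustration; not the Verus profiler).
import numpy as np

samples = []  # (sending window, measured delay in ms) pairs from past epochs

def record(window, delay_ms):
    samples.append((window, delay_ms))

def window_for_delay(target_delay_ms, degree=2):
    """Fit delay = f(window) by least squares, then return the largest
    window whose predicted delay stays below the target."""
    w = np.array([s[0] for s in samples], dtype=float)
    d = np.array([s[1] for s in samples], dtype=float)
    coeffs = np.polyfit(w, d, degree)
    candidates = np.arange(w.min(), w.max() + 1)
    predicted = np.polyval(coeffs, candidates)
    feasible = candidates[predicted <= target_delay_ms]
    return int(feasible.max()) if feasible.size else int(w.min())

# Synthetic profile: delay grows roughly quadratically with the window.
for win in range(2, 40, 2):
    record(win, 20 + 0.05 * win ** 2)
print(window_for_delay(target_delay_ms=60))  # -> 28
```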
The authors make an important step for a sound performance evaluation, namely,
parameter sensitivity analysis. As Verus' internal mechanisms comprise several parameters, it is important to understand its robustness across a variety of application scenarios. To do so, the authors executed simulation-based evaluations in
different scenarios, with the focus on three major parameters, namely, epoch, delay
profile update interval, and delta increment.
Two types of experiments were conducted. They use real experiments and simu-
lations to show Verus performance improvements over New Reno, Cubic, and
Sprout, in both 3G and LTE networks. Figure 1.9 shows an example of performance
comparison in LTE networks. For a trace-driven simulation, they evaluate the effect
of mobility as a performance factor.
They used throughput, delay, and Jain's fairness index [13] as performance metrics. The selected factors included the number of devices and flows, downlink and uplink data rates, the number of competing flows, the user speed profile (in the case of mobility), the arrival of new flows, RTT, etc.

Fig. 1.8  Verus delay profile (Source: Pötsch [6])
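Jain's fairness index [13] has a simple closed form, J = (sum of x_i)^2 / (n * sum of x_i^2): it equals 1 when all n flows get the same throughput and tends toward 1/n as a single flow monopolizes the resource. A direct implementation:

```python
# Jain's fairness index: J = (sum x_i)^2 / (n * sum x_i^2).
def jain_fairness(throughputs):
    n = len(throughputs)
    total = sum(throughputs)
    return total * total / (n * sum(x * x for x in throughputs))

print(jain_fairness([10.0, 10.0, 10.0]))  # 1.0  (perfectly fair)
print(jain_fairness([30.0, 0.0, 0.0]))    # ~0.33 (one flow takes everything)
```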
Concluding remarks from these results are that Verus (i) adapts to both rapidly
changing cellular conditions and to competing traffic, (ii) achieves higher through-
put than TCP Cubic while maintaining a dramatically lower end-to-end delay, and
(iii) outperforms very recent congestion control protocols for cellular networks like
Sprout under rapidly changing network conditions.
Again, it is important to understand why this paper is sound. First, the authors presented clear arguments for the need for a new transport protocol for cellular networks. As far as the experimental plan design is concerned, they provided details on all performance metrics, factors, and levels. Second, they carefully selected the evaluation environments, which included prototyping and simulation tools. Finally, as expected for an ACM SIGCOMM paper, they also used a good variety of ways to present their results, such as tables, scatter plots, probability distribution function (PDF) plots, and time series plots. Please recall that variety means more exciting (or less boring) material to read. Last, but not least, they drew conclusions solely from the results of real-world measurements, with additional support from the simulation results.

Fig. 1.9  Throughput vs delay (LTE network) (Source: Pötsch [6])
1.2.1.3  Network Layer
When it comes to the performance evaluation of protocols, systems, and mechanisms at the network level, computer-networking researchers and engineers are likely to be flooded with the massive amount of research and products developed in the last decades. Every time a new trend arises (e.g., think of ATM networks in the 1980s), there is often a "gold rush" to investigate the performance of the new technology in a variety of scenarios. Network operators usually want to understand whether a wide deployment of a given technology will bring operational expenditure (OPEX) or capital expenditure (CAPEX) savings in the long run.
The particular cases of QoS provisioning or its counterpart network traffic throt-
tling have become an arena for a dispute between network operators and users [14].
The rise of a number of technologies, such as peer-to-peer (P2P), VoIP, deep packet
inspection, etc., has set this arena and has triggered interminable debate on network
neutrality [15], "premium services," and the like. In the paper "Identifying Traffic Differentiation in Mobile Networks," Kakhki et al. [16] focused on understanding the current practices of Internet service providers (ISPs) when performing traffic differentiation in mobile environments (if any). The generic term differentiation means that a certain ISP can provide either better (e.g., QoS provisioning) or worse (e.g., bandwidth throttling) service for the user. Their main motivation is that although the debate is immense, the availability of open data to support reasonable discussion is virtually nonexistent. In addition, they argue that regulatory agencies have dealt with such issues only marginally, making it difficult for end users to understand ISPs' management policies and how these might affect their applications' performance. It is clear to some advanced users that ISPs have been deploying throttling mecha-
nisms for some time now, and this causes performance degradation of certain appli-
cations. Back in October 2005, an interview published in the IEEE Spectrum
Magazine revealed that mediation of VoIP traffic was in use by several telephone
companies that provided Internet services. In this context, mediation means traffic
differentiation. Although at that time there were some regulations (in the US) that
could prevent carriers from “blocking potentially competitive services” [14], the
article quotes the words of the vice-president of product marketing for a software
company that provided VoIP traffic identification and classification, as follows:
But there’s nothing that keeps a carrier in the United States from introducing jitter, so the
quality of the conversation isn’t good,” … “You can deteriorate the service, introduce
latency, and also offer a premium to improve it. (Excerpt from Cherry [14])
In [16], the authors present the design of a system to identify traffic differentia-
tion in wireless networks for any application. They address important technological
challenges, such as (i) performance testing of any application class, (ii) understand-
ing how differentiation occurs in the network, and (iii) wireless network measure-
ments from the user devices. They designed and implemented a system and validated it in a controlled environment using a commodity device (e.g., a common smartphone). They focus only on traffic-shaping middleboxes rather than on the whole range of traffic differentiation. They assume that traffic differentiation might be triggered by one or more factors, such as application signatures in the packets' payload, the current application throughput, and the like. Other factors were worth further investigation, such as users' location and the time of day. My assumption here is that ISPs can virtually apply different policies for different users and applications at
different times of the day. It is common to find voice plans that give users unlimited calls in the evenings and on weekends [79]. Therefore, it is very plausible that differentiation can be applied as well. Issues like traffic blocking or content modification were out of the scope of their work. The main steps of their methodology for the traffic differentiation analysis, which is based on a general trace record-and-replay approach, are:
(i) To record a packet trace from the given application
(ii) To extract the communication profile between the end systems (see the sketch after this list)
(iii) To replay the trace over the targeted network with and without the use of a VPN channel
(iv) To perform statistical tests to identify whether traffic differentiation occurred for that particular application
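Step (ii) can be pictured with a small sketch that reduces a recorded trace to a sequence of packet directions, sizes, and inter-packet gaps. This is only my simplified illustration of what a communication profile might capture, not the authors' replay system, and the record layout is an assumption.

```python
# Simplified communication-profile extraction (illustrative only).
# A packet here is a (timestamp_s, direction, size_bytes) tuple.

def communication_profile(packets):
    profile, prev_ts = [], None
    for ts, direction, size in packets:
        gap = 0.0 if prev_ts is None else ts - prev_ts
        profile.append({"dir": direction, "size": size, "gap": round(gap, 3)})
        prev_ts = ts
    return profile

trace = [(0.00, "up", 420), (0.08, "down", 1460), (0.09, "down", 1460)]
for entry in communication_profile(trace):
    print(entry)
```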
Some interesting intermediate findings were that server IP addresses are not used for differentiation purposes, that high port numbers might be classified as P2P applications, that few packets (i.e., fewer than ten) are necessary to trigger the traffic differentiation mechanism, and that HTTP encryption is not an obstacle for the traffic shapers.
The proposed detection mechanism is based on the well-known two-sample Kolmogorov-Smirnov (K-S) test (cf. Fig. 1.10) in conjunction with an Area Test statistic. The testbed environment for the Controlled Tests uses an off-the-shelf (OTS) traffic shaper, a mobile device (replay client), and a server (replay server). The performance factors are the shaping rate (from 9% to 300% of the application's peak traffic) and packet losses (from 0% to 1.45%, controlled by the Linux tc and netem tools). The selected applications were YouTube and Netflix (TCP-based) and Skype and Hangouts (UDP-based). Performance metrics for the calibration studies included overall accuracy and resilience to noise, whereas for the real measurements, they collected throughput, latency, jitter, and loss rate.
Fig. 1.10  Illustration of the well-known nonparametric K-S Test
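The detection idea can be sketched as follows: collect per-interval throughput samples for the exposed replay and for the VPN-tunneled replay, then run the two-sample K-S test, whose statistic is D = sup_x |F1(x) - F2(x)| over the two empirical CDFs. The code below is a hedged illustration with made-up numbers, not the authors' pipeline.

```python
# Two-sample K-S test on replay throughput samples (illustrative sketch).
from scipy.stats import ks_2samp

throughput_plain = [1.9, 2.1, 2.0, 1.8, 2.2, 2.1, 1.7, 2.0]  # Mbps, exposed
throughput_vpn   = [4.8, 5.1, 5.0, 4.9, 5.3, 5.2, 4.7, 5.0]  # Mbps, via VPN

stat, p_value = ks_2samp(throughput_plain, throughput_vpn)
if p_value < 0.05:
    print(f"distributions differ (D={stat:.2f}, p={p_value:.4f}): "
          "possible traffic differentiation")
else:
    print("no statistically significant difference detected")
```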
After some calibration studies, the authors conducted real measurements in a number of operational networks for multimedia streaming applications as well as voice/video conferencing systems. All experiments were repeated several times to ensure the statistical significance of the results. They reported success in detecting middleboxes' behavior, such as traffic shaping, content modification, and the use of proxies. One consequence of a low shaping rate on TCP performance is that, in some cases, the protocol's congestion control mechanism might not even be able to leave the slow-start phase. One of the main conclusions is that a high number of middleboxes in the selected mobile operators break the network neutrality (or the Internet's end-to-end) principle, thus directly affecting the performance of some applications.
As usual, we highlight the importance of this paper in terms of performance evaluation at the network level. First, the authors presented clear arguments for the need to know whether traffic differentiation exists in wireless networks and which traffic features trigger such mechanisms. They implemented and tested their system as a real prototype and conducted experiments in a controlled testbed (for calibration) and in production networks (for validation). Their experimental plan design provided details on all performance metrics, factors, and levels. Finally, the results presentation included tables, scatter plots, and time series plots. Last, but not least, they made both code and datasets available for the sake of reproducibility of their research.
1.2.1.4  Link Layer
Performance evaluation in wireless and mobile environments has always been in
high demand. There are several operational challenges that are related to some
uncontrolled factors, such as channel conditions and user mobility. Moreover, the widespread adoption of short- and mid-range wireless communication technologies (e.g., WiFi) alongside cellular networks brings great opportunities for improving network performance, while at the same time raising a number of design and deployment issues, such as the optimization of coverage and channel allocation. Telecom vendors have recently been offering heterogeneous network (HetNet) solutions for traffic offloading from the macro to the micro network. Such solutions can boost customer experience by offering high performance (e.g., high data rates or low latencies) from either the macro or the micro cell. The challenges of designing HetNets are tremendous, since conflicting requirements are always in place. On the one hand, it is necessary to cut the total cost of ownership (TCO) and avoid over-dimensioning. On the other hand, operators must improve performance for the end users and optimize coverage.
In the paper When Cellular Meets WiFi in Wireless Small Cell Networks, Bennis et al. [17] tackle some challenges of heterogeneous wireless networking design by addressing the integration of WiFi and cellular radio access technologies in small cell base stations (SCBS). Figure 1.11 shows a common deployment scenario for HetNets. The authors emphasized HetNets as a key solution for dealing with performance issues in macrocellular-only infrastructures (macrocell base stations – MBS), since multimode SCBS can bring complementary benefits from the seamless integration point of view. They also pointed out some concerns about the lack of control of the
quality of services in unlicensed bands (i.e., in WiFi networks), which can severely
degrade performance. They argue that offloading some of the traffic from the unli-
censed to the licensed (and well-managed) network can improve performance,
which motivated them to propose an intelligent distributed offloading framework.
Instead of using the usual strategy of offloading from macro to micro cells, they
argue that SCBS could simultaneously manage traffic between cellular and WiFi
radio access technologies according to the traffic profile and network conditions
(e.g., QoS requirements, network load, interference levels). In addition, they discuss
that SCBS could run a long-term optimization process by keeping track of the net-
work’s optimal transmission strategy over licensed/unlicensed bands. As examples,
they mentioned that delay-tolerant applications could use unlicensed bands, whereas delay-sensitive applications could be offloaded to the licensed channels.
The authors discussed some design challenges for HetNets deployment, as clas-
sical offloading (i.e., from macro to micro cell or WiFi) might not be the best
approach. They argued that fine-grained offloading strategies should be deployed in
order to make performance-aware optimized traffic-steering decisions. Other net-
work conditions, such as backhaul congestion and channel interference, must be
taken into account when enforcing a certain offloading policy. They proposed a reinforcement learning (RL)-based solution [18] to this complex problem.
Fig. 1.11  Common deployment scenario for HetNets (Source: Bennis et al. [17])
They provided a basic RL model with realistic assumptions for network parameterization, including the existence of multimode SCBS. One particular model formulation considers joint interference management and traffic offloading. The basic idea behind the RL-based modeling is that the SCBS can make autonomous decisions to optimize the given objective function. Specifically, the goal is to devise an intelligent online learning mechanism that optimizes licensed spectrum transmission while leveraging WiFi by offloading delay-tolerant traffic. They aim to accomplish this goal through two basic framework components, namely, subband selection and proactive scheduling. The macro behavior of the proposal is simple yet promising. Once each SCBS makes its decision on which subband to use, a scheduling procedure starts, which takes into account users' requirements and network conditions.
The authors consider an LTE-A/WiFi offload case study in a multi-sector MBS
scenario integrated with multimode SCBS. In a simulation environment, the user
device (or user equipment  – UE) has a traffic mix profile unevenly distributed
between best-effort, interactive, streaming, real-time, and interactive real-time
applications. They consider four scenarios for benchmarking, as follows:
(i) Macro-only: the MBS serves the UEs that use licensed bands only.
(ii) HetNet: the MBS and SCBS serve the UEs; the SCBS is single mode (licensed bands only).
(iii) HetNet + WiFi: the MBS and SCBS serve the UEs; the SCBS is multimode (i.e., licensed and unlicensed bands).
(iv) HetNet + WiFi with an access method based on received power.
Performance metrics include average UE throughput, total SCBS throughput,
total cell throughput, and total cell-edge throughput. As performance factors, they use the number of UEs, the subband selection strategy, and the number of SCBS (i.e., small cell densification). Figures 1.12 and 1.13 show some performance results. One can clearly observe that a well-designed HetNet strategy can boost network and end-user performance when compared to an MBS-only scenario.
As a magazine paper, it is not expected to provide detailed discussions of the problem formulation, the solutions, and the performance evaluation. The authors did a good job balancing the content between engineering design discussions and results that support the claim that their framework is promising for improving performance in wireless HetNet environments. They carefully selected performance metrics, as well as factors and levels, to conduct sound simulation-based experiments.

Fig. 1.12  Aggregate cell throughput vs. # of users for two traffic scheduling strategies (Source: Bennis et al. [17])
Fig. 1.13  Aggregate cell throughput for different offloading strategies (Source: Bennis et al. [17])
1.2.2  Performance Evaluation in Modern Scenarios

1.2.2.1  Virtualization and Cloud Computing
It might be a surprise for some, but virtualization concepts and technologies are not
new. In fact, a brief look at the history of virtualization of computing resources
reveals that the main concept, some proofs of concept, and real products date back five decades. IBM and Bell Labs were the main players in the 1960s. Fast-forwarding to the late 1990s, one can see that Sun's Java was already gaining widespread adoption, with VMWare technologies among the main players. Meanwhile, computer scientists and engineers were developing the concept of offering computing resources as general services. When looking back 50 years for information on virtualization and computing as services, it is difficult to pinpoint which concept came first. But it is safe to say that when hardware virtualization gained momentum in the late 1990s, the technology world changed forever, and Amazon.com (founded in 1994) might be considered the first heavy user of virtualization technologies. About 10 years later (early to mid-2000s), Amazon.com started to offer virtual computing resource services, such as the web (Amazon Web Services, 2002)⁴, storage (Amazon Simple Storage Service, 2006)⁵, and infrastructure (Amazon Elastic Compute Cloud, 2006)⁶.

⁴ Amazon Web Services (2002) - "Overview of Amazon Web Services", White Paper, December 2015, available at https://d0.awsstatic.com/whitepapers/aws-overview.pdf.
⁵ Amazon Simple Storage Service (2006) - "AWS Storage Services Overview: A Look at Storage Services Offered by AWS", White Paper, November 2015, available at https://d0.awsstatic.com/whitepapers/AWS%20Storage%20Services%20Whitepaper-v9.pdf.
⁶ Amazon Elastic Compute Cloud (2006) - Varia, J., "Architecting for the Cloud: Best Practices", White Paper, May 2010, available at http://jineshvaria.s3.amazonaws.com/public/cloudbestpractices-jvaria.pdf.
The main point of interest here is how virtualization-based technologies and ser-
vices perform in real environments. In a closer look at the current virtualization
technologies and services, one will find that a number of new concepts and services
have arisen and evolved in the last 10 years. Virtualization is real and has spread into
hardware, desktop, storage, applications, platforms, infrastructure, networks, and
the like. Layers of virtualization components are now the norm, which brings a
number of questions, such as how they impact the performance seen by the end
users, how to optimize performance in distributed data centers, how to accurately
measure new performance metrics (e.g., elasticity, as it is specific to cloud comput-
ing environments), etc. Adequate tools and methodologies for measurements and
analysis in virtualized environments are currently under discussion in standardiza-
tion bodies, such as in the IETF’s IPPM and BMWG working groups [19, 20].
In the paper CloudNet: Dynamic Pooling of Cloud Resources by Live WAN
Migration of Virtual Machines [21], Wood et al. discuss that while virtualization
technologies have provided precise performance scaling of applications within data
centers, the support of advanced management processes (e.g., VM resizing and migration) in geographically distributed data centers is still a challenge. Such a feature would provide users and developers with an abstract view of the (distributed) resources of a cloud computing provider as a single unified pool. They highlight some of the hard challenges for dynamic cloud resource scaling in WAN environments, as follows:
1. Minimization of application downtime: If the given application handles a massive amount of data, migration to a data center over WAN connections might introduce huge latencies as a result of the data copying process. Moreover, it might be necessary to keep disk and memory states consistent.
2. Minimization of network configurations: In a LAN environment, VM migration might require only a few tricks in the underlying Ethernet layer (e.g., using transparent ARP/RARP quick reconfigurations). In cases where the IP address space changes, the migration might cause connectivity disruption from the application point of view.
3. Management of WAN links: It is obvious that link capacities and latencies in WANs are very different from their LAN counterparts. Mainly due to costs, WAN capacities are not comparable to those of local networks. And even if they were, high utilization of links for long periods of VM migration is not good network management practice. In addition, link latency is highly unlikely to be a controllable factor. Therefore, the challenges are to provide migration techniques over distributed clouds that (i) operate efficiently over low-bandwidth links and (ii) optimize the data transfer volume to reduce the migration latency and cost.
The authors then propose CloudNet, a platform designed to address the above challenges, targeting live migration of applications across distributed data centers. The paper describes the design rationale along with a prototype implementation and an extensive performance evaluation in a real environment.
Figure 1.14 illustrates the concept of the virtual cloud pool (VCP), which can be seen as an abstraction of the distributed cloud resources into a single view. The VCP aims at connecting cloud resources in a secure and transparent way. It also allows precise coordination of the hypervisors' states (e.g., memory, disks) to ease the replication and migration process. The CloudNet architecture is composed of two major controllers, namely, the Cloud Manager and the Network Manager. The former comprises other architectural components, such as the Migration Optimizer and the Monitoring and Control Agents. The latter is responsible for VPN resource management. For VM migration between geographically distributed data centers, CloudNet works as follows (cf. Fig. 1.15):
(i) It establishes connectivity between VCP endpoints.
(ii) It transfers the hypervisor's state (memory and disk).
(iii) It pauses the VM to transfer the processor state along with final memory state updates.
The authors emphasize that disk state migration takes the majority of the overall VM migration time. This is because disk sizes are on the order of tens or hundreds of gigabytes, compared to a few gigabytes for memory; for example, moving 100 GB over a 500 Mb/s WAN link takes roughly 27 min, versus about 1 min for 4 GB of memory state. Once these phases are completed, network connections must be redirected; they deploy an efficient mechanism based on virtual private LAN service (VPLS) bridges. They also propose some optimizations to improve CloudNet performance (a sketch of the first one follows the list), such as:

(i) Content-based redundancy
(ii) Using page or subpage block deltas
(iii) Smart stop and copy (SSC) algorithm
(iv) Synchronized arrivals (an extension of the SSC algorithm)
(v) Deployment on Xen's Dom-0
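To make the first optimization concrete, here is a minimal sketch (my own illustration, not CloudNet's implementation) of content-based redundancy elimination: a block whose content has already been shipped is replaced by a short digest reference, which pays off when disk images contain many duplicate blocks. The block size and digest choice are assumptions for the example.

```python
# A minimal sketch (illustration only, not CloudNet's code) of content-based
# redundancy elimination for disk/memory state transfer: a block whose
# content was already sent is replaced by its short digest.
import hashlib

BLOCK = 4096  # bytes per block (assumed granularity)

def dedup_stream(image: bytes):
    """Yield ('data', block) for new content, ('ref', digest) for repeats."""
    seen = set()
    for off in range(0, len(image), BLOCK):
        block = image[off:off + BLOCK]
        digest = hashlib.sha1(block).digest()
        if digest in seen:
            yield ("ref", digest)      # 20 bytes instead of 4096
        else:
            seen.add(digest)
            yield ("data", block)

# Toy "disk image": three identical zero blocks, one unique block, one more zero block.
image = b"\x00" * (BLOCK * 3) + bytes(range(256)) * 16 + b"\x00" * BLOCK
sent = sum(len(p) if kind == "data" else 20 for kind, p in dedup_stream(image))
print(f"sent {sent} of {len(image)} bytes")  # duplicates collapse to 20-byte refs
```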
Fig. 1.14  Illustration of the virtual cloud pool (VCP) concept (Source: Wood et al. [21])

Fig. 1.15  Phases of resources migration on CloudNet (Source: Wood et al. [21])
CloudNet's implementation is based on Xen,7 the Distributed Replicated Block Device (DRBD),8 and off-the-shelf (OTS) routers that implement VPLS. Performance evaluation of CloudNet was mainly conducted across three interconnected data centers in the USA. The main goal was to understand the performance of different applications on top of CloudNet under realistic network conditions. They also performed some tests on a testbed in order to have more control over the network conditions (i.e., controlling link capacity and latency). As applications, they used a Java server benchmark (SPECjbb 2005), a development platform (kernel compile), and a Web benchmark (TPC-W). Performance metrics include bandwidth utilization, total migration time, data sent, and application response time. Figure 1.16 shows an example of the benefit of deploying CloudNet as compared to Xen's default strategy.
Figure 1.17 depicts the performance of a TPC-W application under varying bandwidth conditions. CloudNet clearly reduces the impact of low-bandwidth conditions on migration time.

Fig. 1.17  Performance of the TPC-W in varying bandwidth conditions (Xen vs. CloudNet) (Source: Wood et al. [21])

Also, the amount of data transmitted is significantly reduced for both TPC-W and SPECjbb, as Fig. 1.18 depicts.

Fig. 1.18  Transmitted data (TPC-W and SPECjbb) (Source: Wood et al. [21])
It is worth emphasizing that tackling most research and development challenges in virtualized environments requires careful design of the system architecture to uncover the subtleties they bring. The authors of the abovementioned paper did an excellent job of realistically accounting for the most important performance factors (and their corresponding levels) that might prevent network managers and engineers from deploying live migration of applications in a WAN environment. The presented results are convincing enough to encourage other researchers to conduct more advanced studies on the topic. All performance metrics were carefully selected to support the benefits of CloudNet.
7 http://www.xenproject.org/.
8 http://drbd.linbit.com/home/what-is-drbd/.

Fig. 1.16  Comparison of response time: Xen vs. CloudNet (Source: Wood et al. [21])
1.2.2.2  Software-Defined Networking
Software-defined networking (SDN) concepts and technologies have attracted a great deal of attention from both industry and academia [22]. Large-scale adoption and deployment are yet to come, mainly because the overall performance of SDN's major elements (i.e., the controller and its underlying software components) is not clearly understood. A number of research papers address common performance issues in the SDN realm, such as the ability of the SDN controller to deal with the arrival of new flows at a fast pace. There is some evidence of both good and poor performance in specific scenarios [23, 24], but in general, most experiments have been conducted over short time frames. For the particular case of benchmarking OPNFV switches, Tahhan, O'Mahony, and Morton [25] suggest at least 72 h of experiments for platform validation and for assessing the base performance for maximum forwarding rate and latency. Although the recommendation is for NFV switches, it serves SDN environments well, since it allows the experimenter to clearly separate transient from steady-state performance. Along with studying the long-term performance of SDN controllers in search of software malfunctioning (e.g., software aging) [26], there is a need to understand the interactions of the software components in popular controllers in order to establish the ground truth.
In On the Performance of SDN Controllers: A Reality Check, Zhao, Iannone, and Riguidel [27] conducted controlled and extensive experiments to understand in depth how SDN controllers should be selected and configured for deployment in real scenarios. They begin by showing that the need for a comprehensive performance evaluation stems from the variety of system implementations: each implementation of an SDN controller might perform well in a particular scenario, whereas it might suffer in a different setting. They focused on the most popular centralized SDN controller implementations, namely, Ryu [28], Pox [29], Nox [30], Floodlight [31], and Beacon [32]. Figure 1.19 depicts the basic testing setup for the experiments. They rely on a one-server setup with CBench9 playing the important role of emulating a configurable number of switches.

9 CBench, https://github.com/andi-bigswitch/oflops/tree/master/cbench.
Fig. 1.19  Testbed setup for experiments with SDN controllers (Source: Zhao et al. [27])
Performance metrics were kept simple: latency, throughput, and fairness (in a slightly different context). Experiments were replicated several times in order to guarantee the statistical significance of the results. After discussing accuracy issues for latency measurements in Open vSwitches, the paper evaluates the impact of selected factors on the performance metrics, as follows:

1. Factor #1: the type of interpreter for Python-based controllers, with CPython and PyPy as levels.
2. Factor #2: the use of multiple threads along with the hyper-threading (HT) feature of the processor. The question here is whether enabling or disabling HT (the two levels) impacts any performance metric, and whether multiple threads bring any performance advantage.
3. Factor #3: the number of switches, varying from 1 to 256.
4. Factor #4: the number of threads, from 1 to 7.
It is worth emphasizing that not all possible combinations of factors and levels were set for testing; the authors clearly explained the subset of levels chosen for each scenario. The sketch below illustrates how quickly a full factorial design grows.
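Here is a minimal sketch (my own illustration, not the authors' test harness) that enumerates the full factorial design for the four factors above; the level sets for switches are an assumed subset of the 1-to-256 range.

```python
# A minimal sketch of why testing every factor/level combination is rarely
# practical: enumerating the full factorial design for the four factors above.
from itertools import product

factors = {
    "interpreter": ["CPython", "PyPy"],       # Factor #1
    "hyper_threading": ["on", "off"],         # Factor #2
    "num_switches": [1, 4, 16, 64, 256],      # Factor #3 (assumed subset of 1..256)
    "num_threads": list(range(1, 8)),         # Factor #4
}

runs = list(product(*factors.values()))
print(f"full factorial: {len(runs)} scenarios")  # 2 * 2 * 5 * 7 = 140
# With, say, 30 replications each for statistical significance, that is
# 4200 experimental runs -- hence the authors' well-argued subset of levels.
```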
Let’s check some of the results from this paper. For performance factor #1 (the
type of the Python language interpreter), Tables 1.1 and 1.2 show that PyPy has
clearly better performance for both latency and throughput metrics.
The impact of the number of network switches (factor #3) is evaluated for both single- and multiple-thread cases (factor #2). Figure 1.20 presents the impact on the SDN-switch latency as the number of switches increases. Individual latencies rise from microseconds to milliseconds, depending on the type of controller. In this particular case, Beacon seems to suffer the least performance degradation as the number of switches increases, whereas Ryu has the worst overall performance. The authors do not investigate further what exactly causes this performance issue, although they suggest that the round-robin policy implemented in each controller might be the underlying cause.
There are a number of other important results and discussions in the paper, but one is of particular interest. Following a strategy similar to [26], the authors wanted to understand how SDN controllers perform under a heavy workload.
Table 1.1  Impact of Python interpreter (latency – ms)

Controller   CPython   PyPy    Ratio (CPython/PyPy)
Pox          0.156     0.042   3.75
Ryu          0.143     0.037   3.86

Source: Zhao et al. [27]

Table 1.2  Impact of Python interpreter (throughput – responses/ms)

Controller   CPython   PyPy   Ratio (PyPy/CPython)
Pox          11.2      105    9.38
Ryu          24.1      106    4.40

Source: Zhao et al. [27]
The experiments show results similar to [26]: latency under heavy load (cf. Table 1.3) grows to thousands of times higher than under a light workload.
Their concluding remarks show that Beacon performed better than the other controllers in almost all tested scenarios, which is in agreement with previous studies.
We see that the authors carefully selected the most important performance factors and levels, bringing strong qualitative arguments to the table. As expected, the presented results are solid enough to encourage other researchers to conduct more advanced studies on the topic, and they are also aligned with some of the previous studies.
Fig. 1.20  Impact on the SDN-switch latency as the number of switches increases (Source: Zhao et al. [27])
Table 1.3  Latency comparison

Controller   Empty buffer (ms)   Full buffer (ms)   Ratio
Pox          0.042               5.26               126
Ryu          0.039               2.63               68
Nox          0.018               149.00             8358
Floodlight   0.022               76.90              3461
Beacon       0.016               50.00              3050

Source: Zhao et al. [27]
1.2.2.3  Network Functions Virtualization
The ever-growing interest in virtualization technologies has recently brought a new player to the field. Network functions virtualization (NFV) can be seen as the latest paradigm aiming to virtualize traditional network functions currently implemented on dedicated platforms [34]. Within the scope of NFV, virtual network functions (VNF) can be orchestrated (or chained, if you will), creating a new concept, namely, service function chaining (SFC). This new paradigm, along with other virtualization concepts and technologies (e.g., SDN and virtual switches), promises to profoundly change the way networks are managed. The main concern of the telecommunications industry, as NFV's main target, is, of course, performance. In particular, as VNF or SFC implementations will typically be deployed in VMs or containers, it is important to understand their performance overhead. Clearly, multiple instances of a VNF attached to a virtual switch might have an impact on the overall SFC performance. Therefore, performance evaluation of NFV and its related technologies is one of the main concerns of network operators, and it has attracted the interest of standardization bodies, such as ETSI [33]. In Assessing the Performance of Virtualization Technologies for NFV: A Preliminary Benchmarking, Bonafiglia et al. [35] argued that while performance issues in traditional virtualization are mostly related to processing tasks, in an NFV network, I/O is the main concern. They presented a preliminary benchmarking of SFCs (as chains of VNFs) built on common virtualization technologies.
First, they set up a controlled environment and implemented VNF chains deployed as virtual machines (using KVM) or containers (using Linux Docker). Performance metrics were throughput and latency, and all experiments were replicated to ensure the statistical significance of the results. As for factors, they evaluated the impact on end-to-end performance as the number of VNFs in the SFC increases. They also investigated whether the technology used to interconnect the VNFs in the virtual switch, namely, either Open vSwitch (OvS) or the Intel Data Plane Development Kit (DPDK)-based OvS, has any impact on the performance metrics. Figure 1.21 shows the network components for a VM-based implementation of NFV, whereas Fig. 1.22 shows a similar architecture in a container-based approach.
The testbed for the experimental runs was configured as depicted in Fig. 1.23. It is important to emphasize that they did not evaluate the impact of processing overheads for particular virtualized functions. In other words, they implemented a "dummy" function that simply forwards packets from one interface to another (a sketch follows below). A comprehensive performance evaluation might take the processing costs of the given functions (or classes of functions) into account. Figure 1.24 shows an example of the impact of the VNFs in a single chain on the end-to-end throughput, for both VM- and container-based implementations in the virtual switch. Similarly, Fig. 1.25 shows the latency introduced by the VNFs implemented with different technologies (e.g., OvS vs. DPDK) as the SFC length increases.
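For concreteness, here is a minimal sketch (my own assumption of what such a function looks like, not the authors' code) of a "dummy" VNF that forwards raw Ethernet frames between two interfaces, so that any measured overhead comes from the virtualization I/O path rather than from function processing. It is Linux-only, requires root, and the interface names are placeholders.

```python
# A minimal sketch of a "dummy" VNF: receive a frame on one interface and
# retransmit it unchanged on another, adding essentially zero processing cost.
import socket

ETH_P_ALL = 0x0003  # capture every protocol

def open_raw(ifname: str) -> socket.socket:
    s = socket.socket(socket.AF_PACKET, socket.SOCK_RAW, socket.htons(ETH_P_ALL))
    s.bind((ifname, 0))
    return s

def forward(in_if: str = "eth0", out_if: str = "eth1") -> None:
    rx, tx = open_raw(in_if), open_raw(out_if)
    while True:
        frame = rx.recv(65535)  # blocking read of one frame
        tx.send(frame)          # forward as-is: no per-packet processing

if __name__ == "__main__":
    forward()
```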
Fig. 1.21  Network components for a VM-based implementation of NFV (Source: Bonafiglia et al. [35])

Fig. 1.22  Container-based approach for NFV implementation (Source: Bonafiglia et al. [35])

Fig. 1.23  Testbed setup for testing NFV (Source: Bonafiglia et al. [35])

Fig. 1.24  An example of the impact of the VNFs in a single chain on the end-to-end throughput (Source: Bonafiglia et al. [35])

Fig. 1.25  An example of the impact of the VNFs in a single chain on the RTT (Source: Bonafiglia et al. [35])

I agree with the authors when they state that this paper provides a preliminary benchmarking of VNF chains. I just want to highlight that, from the point of view of performance evaluation, they quickly realized the importance of having some initial results to fill the gap in this topic. Sometimes you don't need an extensive performance evaluation to gain a good understanding of the factors that impact system performance. An initial and correct (yet simple) set of hypotheses and research questions might be enough to carry out a set of experiments. Validation of hypotheses does not need to be fancy.
1.3  The Pillars of Performance Evaluation of Networking and Communication Systems
When dealing with performance evaluation, we need to decide among experimentation, simulation, emulation, and modeling. Collecting samples properly from the experiments (a.k.a. measurements) is a special aspect, since it supports all performance evaluation methods. If you had to choose among these options, which one(s) would you prefer? There is no precise scientific approach to help researchers decide on their strategies. The best ones should be chosen on a case-by-case basis, depending on the availability of resources as well as one's ability to deal with a particular strategy. Let's say one needs to evaluate whether a certain transport-layer network protocol has better overall performance than a traditional one (e.g., TCP SACK vs. TCP Cubic). A number of decisions must be made for a proper performance evaluation and analysis, but the first step is always deciding on the adequate approach. Should one select real experimentation or simulation? Should one try to develop analytical models for both protocols? Is emulation possible for the envisaged scenarios? Would results from a single strategy be convincing enough? These are the types of questions we would like to pose, along with possible avenues to answer them.
1.3.1  Experimentation/Prototyping, Simulation/Emulation, and Modeling
It is common sense that, when possible, the best general approach is to work with at least two strategies [1]. The limitations of an individual approach are somewhat overcome when you choose and work with at least two different ones. This strategy also helps minimize criticism of your experimental work (e.g., when it is submitted to peer review or to dissertation/thesis committees). It is clear that real implementation approaches generally suffer from scalability limitations. For instance, it is costly to deploy large-scale scenarios to experiment with either ad hoc sensor networks or the Internet of things (IoT). Even if one has hundreds of devices deployed, there is always the criticism that the validation would be limited at a larger scale (i.e., with thousands or millions of devices). Therefore, one strategy can support the other, when properly argued. In this example of performance evaluation of transport protocols, one can choose pairs such as experimentation and simulation, simulation and modeling, experimentation and modeling, and the like.
For example, experimentation can validate a certain mechanism the experimenters are proposing, whereas simulations could demonstrate its effectiveness at a large scale, and analytical modeling could provide a general understanding of the phenomenon under investigation.
1.3.1.1  Network Experimentation and Prototyping
Engineers are more inclined to see real implementations of network protocols running on devices or OSes. Simulations might give them a first impression of how to proceed, but in the end, they need to implement and test their ideas in real environments. Academics are more flexible and might rely on any of the performance evaluation approaches, as long as it gives them meaningful results that provide evidence of the scientific contributions of their work. In any case, committing to experimental work is a tough decision for either group of experimenters, since it generally requires analysis of cost, scalability, time to completion, the learning curve for the particular environment, and the like.
Experimental work might involve a number of preliminary tests to make sure the experimenter will be working in a suitable environment. There are a number of components in the protocol stack, and corresponding implementations, that might have an impact on the performance of the given mechanism under analysis. Let's say one wants to develop and test a new architecture for a network traffic generator [36, 37]. Even if theoretical or simulation analysis shows that it has outstanding performance compared to the results found in the literature, the real implementation of the packet-handling software layers in common OSes will have a profound impact on the results. Deploying such an architecture on top of libpcap10 or PF_RING11 will give different results.
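As a minimal sketch (my own illustration, not from the book) of how the packet-handling layer can dominate measurements, the snippet below times the per-packet receive cost through a plain AF_PACKET socket; capturing the same traffic through a libpcap- or PF_RING-based path would typically report different per-packet costs. Linux-only, requires root, and the interface name is a placeholder; it assumes traffic is already flowing on that interface.

```python
# Time the per-packet receive cost through the plain-socket capture path.
import socket
import time

ETH_P_ALL = 0x0003
N = 10_000  # packets to time

s = socket.socket(socket.AF_PACKET, socket.SOCK_RAW, socket.htons(ETH_P_ALL))
s.bind(("eth0", 0))  # placeholder interface name

t0 = time.perf_counter()
for _ in range(N):
    s.recv(65535)  # one syscall per frame: the copy/wakeup cost being measured
elapsed = time.perf_counter() - t0
print(f"{N / elapsed:.0f} packets/s through the plain-socket path")
```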
For large-scale experimentation, there are platforms that help researchers overcome the scalability issues of real prototyping. PlanetLab (PL) [46] was one of the first worldwide experimental platforms available to researchers in the computer networking field. Such platforms are also known as Slice-based Federation Architectures (SFA). Recent SFA platforms include GENI12 and OneLab.13 SFAs mostly have three key elements, namely, components, slivers, and slices. A component is the atomic block of the SFA and can come in the form of an end host or a router. Components offer virtualized resources that can be grouped into aggregates. A slice is a platform-wide set of computer and network resources (a.k.a. slivers) ready to run an experiment. Figure 1.26 illustrates the essential concepts of component, resource, sliver, and slice in an SFA-based platform.

Fig. 1.26  Building blocks of an SFA-based platform
10 Libpcap – www.tcpdump.org/.
11 PF_RING – www.ntop.org.
12 http://www.geni.net/.
13 https://onelab.eu.
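The component/sliver/slice terminology maps naturally onto a small data model. Here is a minimal sketch (my own illustration, not an actual SFA API): components expose resources, a sliver is one slice's share of one component, and a slice groups slivers platform-wide. All names and resource keys are made-up placeholders.

```python
# A toy data model of the SFA concepts described above.
from dataclasses import dataclass, field

@dataclass
class Component:             # atomic block: an end host or a router
    name: str
    resources: dict          # e.g., {"cpu": 8, "mem_gb": 32}

@dataclass
class Sliver:                # virtualized share of one component
    component: Component
    share: dict              # e.g., {"cpu": 2, "mem_gb": 4}

@dataclass
class Slice:                 # platform-wide set of slivers running one experiment
    owner: str
    slivers: list = field(default_factory=list)

node = Component("planetlab-node-1", {"cpu": 8, "mem_gb": 32})
experiment = Slice(owner="alice")
experiment.slivers.append(Sliver(node, {"cpu": 2, "mem_gb": 4}))
print(f"slice of {experiment.owner} spans {len(experiment.slivers)} sliver(s)")
```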
1.3.1.2  Network Simulation and Emulation
Network simulation is an inexpensive and reliable way to develop and test ideas for problems where there is no need to rely on either analytical models or experimental approaches. Several network simulation environments have been tested and validated by the research community over the last decades, and most simulation engines are efficient enough to provide fast results even for large-scale scenarios. Barcellos et al. [49] discuss the pros and cons of conducting performance evaluation through simulation versus real prototyping. Simulation engines are usually based on discrete-event approaches. For instance, ns-2 [38] and ns-3 [39], OMNET++ [40], OPNET [41], and EstiNet [42] have their simulation cores based on discrete-event methods. Specific network simulation environments, such as Mininet [43], Artery [44], and CloudSim [45], follow suit.
Emulation is the process of imitating the external behavior of a certain networked application, protocol, or even an entire network; I will come back to this in more detail in Chap. 4. Performance evaluation that includes emulated components helps the experimenter deploy more realistic scenarios that simulation, experimental evaluation, or analytical modeling alone cannot capture. Emulation fits particularly well in scenarios where the experimenter needs to understand the behavior of real networked applications in controlled networking environments. Of course, one can control network parameterization in real environments, but in general, large-scale experimental platforms are not available to all. When the performance of a certain system in the wild (i.e., on the Internet) is known, but only at the big-picture level, the researcher/engineer might want to understand its behavior under specific network conditions. Recall that roughly 10–15 years ago, the performance of VoIP applications on the Internet was not clear. Different codecs and other system eccentricities would yield different user-perceived quality of experience (QoE). Therefore, if one needed an in-depth view of such VoIP applications under given conditions (i.e., restricted available bandwidth, limited latency, or a given packet loss rate), resorting to emulation would be the answer. In modern scenarios, there are a number of studies trying to understand the behavior of video applications in controlled environments [47, 48]. NetEm [50] is a network emulator widely used in the networking research community. It has the flexibility to change a number of parameters, giving the user the possibility to mimic a number of large networking scenarios without the associated costs. Figure 1.27 depicts a typical usage of NetEm in emulation-based experiments.
Fig. 1.27  A typical usage of NetEm in emulation-based experiments
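To make a setup like Fig. 1.27 concrete, here is a minimal sketch (an assumed configuration, not from the book) that drives NetEm from Python to emulate a constrained WAN path: 50 ms delay with 5 ms jitter, 1% packet loss, and a 10 Mbit/s rate limit. It requires Linux, the iproute2 "tc" tool, and root privileges; the interface name and parameter values are placeholders.

```python
# Apply and remove a NetEm-emulated WAN profile on a network interface.
import subprocess

IFACE = "eth0"  # placeholder interface name

def run(cmd: str) -> None:
    print("+", cmd)
    subprocess.run(cmd.split(), check=True)

def emulate_wan() -> None:
    # delay 50ms +/- 5ms jitter, 1% random loss, 10 Mbit/s rate limit
    run(f"tc qdisc add dev {IFACE} root netem delay 50ms 5ms loss 1% rate 10mbit")

def reset() -> None:
    run(f"tc qdisc del dev {IFACE} root")

if __name__ == "__main__":
    emulate_wan()
    # ... run the experiment against the impaired path, then call reset()
```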
1.3.1.3  Analytical Modeling
All models are wrong, but some are useful. (George Box)
The quote in this subsection comes from the well-known statistician George Box, and his statement is profound. Due to the need for an abstract view of the target problem, and to minimize complexity, analytical models sometimes suffer from limited usage in practical situations. But some are useful!
A formal definition of modeling close to this book's context is "a system of postulates, data, and inferences presented as a mathematical description of an entity or state of affairs."14 It is very true that analytical models have limitations, and they are somewhat hard to develop, since they mostly require a strong mathematical background. It is generally much easier to come up with an idea for a protocol without any profound mathematical analysis. We have seen a number of examples in the computer networking field where formal mathematical models only came after a certain protocol or service had been adopted and used for a long time. One clear case is some of the mechanisms behind TCP. It took years of research to provide evidence that the congestion control mechanisms of TCP would be fair in a number of scenarios, even though several TCP flavors had been active and running on the Internet for quite some time [51]. Network engineers have developed models for the purposes of planning, designing, dimensioning, performance forecasting, and the like. Models can be used to get an overview of general behavior or to evaluate detailed mechanisms or architectural components.
Analytical modeling does not need to be complex. One can simply fit a particular probability distribution function, or a mix of them, to measured data; the sketch below shows what such a fitting exercise looks like. Alternatively, one can look at user behavior and try to derive simple models from it to build traffic generators [52]. There are indeed a number of analytical models available to meet most practical requirements of network engineers and researchers. They come not only in the form of research papers but also as books [53, 54] and book chapters [55]. Therefore, it is almost impossible to cover the body of knowledge on analytical models in computer networking in a single book.
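Here is a minimal sketch (illustrative only, not from the book) of simple model fitting: fitting an exponential and a Pareto distribution to a sample of "flow sizes" and comparing goodness of fit. The data here is synthetic, standing in for real measurements.

```python
# Fit candidate distributions to measured data and compare their fit.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
samples = rng.pareto(1.5, 10_000) + 1.0  # stand-in for measured flow sizes

for name, dist in [("expon", stats.expon), ("pareto", stats.pareto)]:
    params = dist.fit(samples)                      # maximum-likelihood fit
    ks = stats.kstest(samples, name, args=params)   # Kolmogorov-Smirnov test
    print(f"{name}: params={params}, KS statistic={ks.statistic:.3f}")
# The smaller KS statistic indicates the better-fitting candidate model.
```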
In order to give you a glimpse of what a model actually looks like, I present some relevant examples of analytical models derived for different layers of the TCP/IP reference model. The goal here is to highlight the potential of developing powerful mathematical models that can serve as the basis for advanced performance evaluation. It is worth recalling that a good performance evaluation plan should include at least two strategies. Therefore, for researchers more inclined to mathematical and statistical studies, developing analytical models would be the first step, to be later validated by simulation or experimental work.

14 http://www.merriam-webster.com/dictionary/modelling/.
Video Game Characterization
Suppose you need to design a mechanism for improving the performance of online users of a particular video game, and you come up with the idea of a transport/network cross-layer approach. Also, due to budget constraints, you are not able to perform large-scale measurements in real environments. You are now restricted to simulation environments, where you can quickly deploy your strategy. One important question here is: how can I generate synthetic traffic that mimics the application-level behavior of the game? Modeling application-level systems for use in simulation environments is an active area, since new applications come into play very frequently. Let's have a look at one recent example of modeling for game traffic generation. In [56], Cricenti and Branch proposed a skewed mixture distribution for modeling the packet payload lengths of some well-known first-person shooter (FPS) games. They argued that a combination of different PDFs could lead to more precise models, and proposed the ex-Gaussian distribution (also known as the exponentially modified Gaussian distribution, EMG) as a model for FPS traffic. They showed, through empirical validation, that the ex-Gaussian distribution is able to capture the underlying process of an FPS player well. They also discussed how the model would be useful for building efficient traffic generators. The ex-Gaussian PDF has the following representation:

$$ f(x;\mu,\sigma,\lambda) = \lambda \exp\!\left(\lambda(\mu - x) + \frac{\lambda^{2}\sigma^{2}}{2}\right)\Phi\!\left(\frac{x-\mu}{\sigma} - \lambda\sigma\right), $$

where exp is the exponential function, Φ is the Gaussian (normal) cumulative distribution function, and μ, σ, and λ are the mean, standard deviation, and rate parameters, respectively.
Figure 1.28 shows one result of the validation of this model, in a scenario with four players using Counter-Strike. One can see that the model predicts the packet size distribution well when compared to the empirical data.

Fig. 1.28  Analytical model validation for packet size (Counter-Strike) (Source: Cricenti and Branch [56])
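Here is a minimal sketch (illustrative only, not the code or fitted parameters from [56]) of using the ex-Gaussian model as a traffic generator: an EMG variate is simply the sum of a Gaussian and an exponential component, so sampling packet payload lengths is straightforward. The parameter values below are made-up placeholders.

```python
# Sample synthetic packet payload lengths from an ex-Gaussian (EMG) model.
import numpy as np

rng = np.random.default_rng(7)
mu, sigma, lam = 60.0, 8.0, 0.05  # mean, std dev, rate (placeholder values)

def exgaussian_payloads(n: int) -> np.ndarray:
    # An EMG variate is Normal(mu, sigma) + Exponential(rate=lam).
    return rng.normal(mu, sigma, n) + rng.exponential(1.0 / lam, n)

sizes = exgaussian_payloads(100_000).clip(min=1).round()
print(f"mean = {sizes.mean():.1f} B (theory: mu + 1/lam = {mu + 1/lam:.1f} B)")
```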
You can find traffic models like the previous one for virtually any type of Internet application. Many of them are implemented in network simulation environments (e.g., ns-2, ns-3, OMNET++, OPNET).
TCP Throughput
As 95% of Internet traffic is carried by TCP, it is no surprise that its characterization (i.e., modeling) has gained attention over the last decades [51]. One of the most important studies on modeling TCP was developed by Padhye, Firoiu, Towsley, and Kurose [57]. Their goal was to develop an analytic characterization of the steady-state throughput of a bulk-transfer TCP flow, as a function of loss rate and round-trip time. By bulk transfer, they mean a long-lived TCP flow. This is one of the most cited papers in the TCP throughput modeling context. After seven pages discussing and providing detailed information on how to build a precise model, they proposed an approximation in a single equation, as follows:

$$ B(p) \approx \min\!\left( \frac{W_{\max}}{RTT},\; \frac{1}{RTT\sqrt{\frac{2bp}{3}} + T_{0}\,\min\!\left(1,\,3\sqrt{\frac{3bp}{8}}\right) p\,(1+32p^{2})} \right), $$
where B(p) is the TCP throughput, W_max is the maximum TCP congestion window size, b is the number of packets acknowledged by a received ACK, p is the estimated packet loss probability, T_0 is the timeout period, and RTT is the round-trip time of the end-to-end connection. Padhye's model is a simple yet very effective TCP model that has passed the test of time.
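Here is a minimal sketch (my transcription of the equation above, not the authors' code) that computes the approximate steady-state throughput in packets per second. The default window size assumes a 64 KB window and a 1460-byte MSS, both of which are illustrative assumptions.

```python
# Padhye's approximate steady-state TCP throughput, B(p), in packets/s.
from math import sqrt

def padhye_throughput(p: float, rtt: float, t0: float,
                      w_max: float = 65535 / 1460,  # window in packets (assumed MSS)
                      b: int = 2) -> float:
    """p: loss probability; rtt, t0 in seconds; b: packets per ACK."""
    denom = (rtt * sqrt(2 * b * p / 3)
             + t0 * min(1.0, 3 * sqrt(3 * b * p / 8)) * p * (1 + 32 * p ** 2))
    return min(w_max / rtt, 1.0 / denom)

# Example: 1% loss, 100 ms RTT, 1 s timeout, delayed ACKs (b = 2).
print(f"{padhye_throughput(0.01, 0.1, 1.0):.1f} packets/s")  # ~70 packets/s
```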
Recently, Loiseau et al. [58] proposed a new model for TCP throughput following the steps of Padhye's approach, but with different goals. They argue that Padhye's model might be of limited use since it is not able to capture TCP's throughput fluctuations at smaller time scales. Therefore, they aimed at characterizing the variations of TCP throughput around the mean and at smaller time scales, to provide a complementary model to Padhye's seminal work. The proposed model is based on classical Markov chain theory, where states and transitions are mapped to TCP's features, such as congestion window size, the AIMD mechanism, the packet loss process, and the like. Their main contribution relies on a new method to describe deviations of TCP's throughput around the almost-sure mean. As it is a somewhat more complex model, I encourage the interested reader to see its details in Section 3 of their 2010 paper [58].
Aggregate Background Traffic
All network simulations need some kind of traffic between end systems, right? If you are investigating the performance of a particular new mechanism (let's say HTTP/2.0), you just set up the traffic sources and sinks (either in a client-server or a peer-to-peer configuration), start generating simulated or experimental traffic, and collect the measurement results. But how about the noisy background traffic, the kind you know will be there in real networks? On the real Internet, there is plenty of uncontrolled traffic (e.g., cross-traffic) that might have a severe effect on the performance of the system under test. Sometimes you just need a single TCP or UDP stream competing for resources (e.g., bandwidth) in the background. If you want more precise models of noisy background traffic, you need to consider the self-similar (or long-range-dependent) nature of actual Internet traffic. Research on the fractal nature of Internet traffic gained a great deal of attention for more than a decade, between the early 1990s and the mid-2000s; I refer the interested reader to the seminal work of Leland et al. [59] and to the details in the book [60]. For the purpose of this section, I just need to present a model (the simpler, the better) for background traffic that captures the essence of fractal behavior on the Internet. In the paper Self-Similarity Through High-Variability: Statistical Analysis of Ethernet LAN Traffic at the Source Level, Willinger et al. [61] provide some explanations for the occurrence of fractal traffic in local networks. They showed that a superposition of ON-OFF traffic sources, each generating packet trains, yields self-similar traffic. Their very important findings paved the way to building an efficient (i.e., precise and parsimonious) and realistic model for synthetic traffic generation.
Here are some basic concepts first. An ON-OFF traffic source generates traffic by alternating between ON periods (i.e., sending traffic) and OFF periods (i.e., staying silent), where the period lengths are independent and identically distributed (i.i.d.), and the sequences of ON and OFF periods are independent of each other. One important aspect of Willinger's model is the use of an infinite-variance distribution, well represented by heavy-tailed distributions such as Pareto; previous models had assumed finite-variance distributions, such as the exponential. In a nutshell, modeling self-similar traffic with ON-OFF models is straightforward: a superposition of n ON-OFF sources whose periods follow a heavy-tailed distribution yields self-similar traffic. There are basically only a few parameters to consider, namely, the number of sources, n, and the Hurst parameter, which characterizes the long tail of the PDF of each individual source. The Hurst parameter H is a number between 0.5 and 1, well known as the self-similarity index: the closer H is to 1, the more self-similar the traffic. For n, simulation results indicate that 20 sources would be enough [62]; the sketch below puts these pieces together.
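Here is a minimal sketch (an illustrative assumption, not Willinger's original code) of generating approximately self-similar background traffic by superposing n ON-OFF sources with Pareto-distributed period lengths. The tail index alpha and slot granularity are placeholder choices; for such sources, H is approximately (3 − alpha)/2, so alpha = 1.4 targets H ≈ 0.8.

```python
# Superpose n heavy-tailed ON-OFF sources to approximate self-similar traffic.
import numpy as np

rng = np.random.default_rng(1)

def on_off_source(slots: int, alpha: float = 1.4) -> np.ndarray:
    """One source: 1 packet/slot while ON, 0 while OFF; Pareto period lengths.
    alpha in (1, 2) gives infinite variance; H ~ (3 - alpha) / 2."""
    signal, on, t = np.zeros(slots, dtype=int), True, 0
    while t < slots:
        length = int(np.ceil(rng.pareto(alpha) + 1.0))  # heavy-tailed period
        if on:
            signal[t:t + length] = 1
        t += length
        on = not on
    return signal

n = 20  # number of superposed sources; [62] suggests ~20 suffices
traffic = sum(on_off_source(100_000) for _ in range(n))
print(f"mean load: {traffic.mean():.2f} packets/slot across {n} sources")
```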
How are Lilac French Bulldogs Beauty Charming the World and Capturing Hearts....How are Lilac French Bulldogs Beauty Charming the World and Capturing Hearts....
How are Lilac French Bulldogs Beauty Charming the World and Capturing Hearts....
Lacey Max
 
Authentically Social by Corey Perlman - EO Puerto Rico
Authentically Social by Corey Perlman - EO Puerto RicoAuthentically Social by Corey Perlman - EO Puerto Rico
Authentically Social by Corey Perlman - EO Puerto Rico
Corey Perlman, Social Media Speaker and Consultant
 
Company Valuation webinar series - Tuesday, 4 June 2024
Company Valuation webinar series - Tuesday, 4 June 2024Company Valuation webinar series - Tuesday, 4 June 2024
Company Valuation webinar series - Tuesday, 4 June 2024
FelixPerez547899
 
HOW TO START UP A COMPANY A STEP-BY-STEP GUIDE.pdf
HOW TO START UP A COMPANY A STEP-BY-STEP GUIDE.pdfHOW TO START UP A COMPANY A STEP-BY-STEP GUIDE.pdf
HOW TO START UP A COMPANY A STEP-BY-STEP GUIDE.pdf
46adnanshahzad
 
Best Forex Brokers Comparison in INDIA 2024
Best Forex Brokers Comparison in INDIA 2024Best Forex Brokers Comparison in INDIA 2024
Best Forex Brokers Comparison in INDIA 2024
Top Forex Brokers Review
 
Organizational Change Leadership Agile Tour Geneve 2024
Organizational Change Leadership Agile Tour Geneve 2024Organizational Change Leadership Agile Tour Geneve 2024
Organizational Change Leadership Agile Tour Geneve 2024
Kirill Klimov
 
Digital Marketing with a Focus on Sustainability
Digital Marketing with a Focus on SustainabilityDigital Marketing with a Focus on Sustainability
Digital Marketing with a Focus on Sustainability
sssourabhsharma
 
Top mailing list providers in the USA.pptx
Top mailing list providers in the USA.pptxTop mailing list providers in the USA.pptx
Top mailing list providers in the USA.pptx
JeremyPeirce1
 
BeMetals Investor Presentation_June 1, 2024.pdf
BeMetals Investor Presentation_June 1, 2024.pdfBeMetals Investor Presentation_June 1, 2024.pdf
BeMetals Investor Presentation_June 1, 2024.pdf
DerekIwanaka1
 
Call 8867766396 Satta Matka Dpboss Matka Guessing Satta batta Matka 420 Satta...
Call 8867766396 Satta Matka Dpboss Matka Guessing Satta batta Matka 420 Satta...Call 8867766396 Satta Matka Dpboss Matka Guessing Satta batta Matka 420 Satta...
Call 8867766396 Satta Matka Dpboss Matka Guessing Satta batta Matka 420 Satta...
bosssp10
 
Satta Matka Dpboss Matka Guessing Kalyan Chart Indian Matka Kalyan panel Chart
Satta Matka Dpboss Matka Guessing Kalyan Chart Indian Matka Kalyan panel ChartSatta Matka Dpboss Matka Guessing Kalyan Chart Indian Matka Kalyan panel Chart
Satta Matka Dpboss Matka Guessing Kalyan Chart Indian Matka Kalyan panel Chart
➒➌➎➏➑➐➋➑➐➐Dpboss Matka Guessing Satta Matka Kalyan Chart Indian Matka
 
Industrial Tech SW: Category Renewal and Creation
Industrial Tech SW:  Category Renewal and CreationIndustrial Tech SW:  Category Renewal and Creation
Industrial Tech SW: Category Renewal and Creation
Christian Dahlen
 
Dpboss Matka Guessing Satta Matta Matka Kalyan Chart Satta Matka
Dpboss Matka Guessing Satta Matta Matka Kalyan Chart Satta MatkaDpboss Matka Guessing Satta Matta Matka Kalyan Chart Satta Matka
Dpboss Matka Guessing Satta Matta Matka Kalyan Chart Satta Matka
➒➌➎➏➑➐➋➑➐➐Dpboss Matka Guessing Satta Matka Kalyan Chart Indian Matka
 
Lundin Gold Corporate Presentation - June 2024
Lundin Gold Corporate Presentation - June 2024Lundin Gold Corporate Presentation - June 2024
Lundin Gold Corporate Presentation - June 2024
Adnet Communications
 
Understanding User Needs and Satisfying Them
Understanding User Needs and Satisfying ThemUnderstanding User Needs and Satisfying Them
Understanding User Needs and Satisfying Them
Aggregage
 
Unveiling the Dynamic Personalities, Key Dates, and Horoscope Insights: Gemin...
Unveiling the Dynamic Personalities, Key Dates, and Horoscope Insights: Gemin...Unveiling the Dynamic Personalities, Key Dates, and Horoscope Insights: Gemin...
Unveiling the Dynamic Personalities, Key Dates, and Horoscope Insights: Gemin...
my Pandit
 

Recently uploaded (20)

一比一原版新西兰奥塔哥大学毕业证(otago毕业证)如何办理
一比一原版新西兰奥塔哥大学毕业证(otago毕业证)如何办理一比一原版新西兰奥塔哥大学毕业证(otago毕业证)如何办理
一比一原版新西兰奥塔哥大学毕业证(otago毕业证)如何办理
 
The 10 Most Influential Leaders Guiding Corporate Evolution, 2024.pdf
The 10 Most Influential Leaders Guiding Corporate Evolution, 2024.pdfThe 10 Most Influential Leaders Guiding Corporate Evolution, 2024.pdf
The 10 Most Influential Leaders Guiding Corporate Evolution, 2024.pdf
 
Building Your Employer Brand with Social Media
Building Your Employer Brand with Social MediaBuilding Your Employer Brand with Social Media
Building Your Employer Brand with Social Media
 
The Genesis of BriansClub.cm Famous Dark WEb Platform
The Genesis of BriansClub.cm Famous Dark WEb PlatformThe Genesis of BriansClub.cm Famous Dark WEb Platform
The Genesis of BriansClub.cm Famous Dark WEb Platform
 
How are Lilac French Bulldogs Beauty Charming the World and Capturing Hearts....
How are Lilac French Bulldogs Beauty Charming the World and Capturing Hearts....How are Lilac French Bulldogs Beauty Charming the World and Capturing Hearts....
How are Lilac French Bulldogs Beauty Charming the World and Capturing Hearts....
 
Authentically Social by Corey Perlman - EO Puerto Rico
Authentically Social by Corey Perlman - EO Puerto RicoAuthentically Social by Corey Perlman - EO Puerto Rico
Authentically Social by Corey Perlman - EO Puerto Rico
 
Company Valuation webinar series - Tuesday, 4 June 2024
Company Valuation webinar series - Tuesday, 4 June 2024Company Valuation webinar series - Tuesday, 4 June 2024
Company Valuation webinar series - Tuesday, 4 June 2024
 
HOW TO START UP A COMPANY A STEP-BY-STEP GUIDE.pdf
HOW TO START UP A COMPANY A STEP-BY-STEP GUIDE.pdfHOW TO START UP A COMPANY A STEP-BY-STEP GUIDE.pdf
HOW TO START UP A COMPANY A STEP-BY-STEP GUIDE.pdf
 
Best Forex Brokers Comparison in INDIA 2024
Best Forex Brokers Comparison in INDIA 2024Best Forex Brokers Comparison in INDIA 2024
Best Forex Brokers Comparison in INDIA 2024
 
Organizational Change Leadership Agile Tour Geneve 2024
Organizational Change Leadership Agile Tour Geneve 2024Organizational Change Leadership Agile Tour Geneve 2024
Organizational Change Leadership Agile Tour Geneve 2024
 
Digital Marketing with a Focus on Sustainability
Digital Marketing with a Focus on SustainabilityDigital Marketing with a Focus on Sustainability
Digital Marketing with a Focus on Sustainability
 
Top mailing list providers in the USA.pptx
Top mailing list providers in the USA.pptxTop mailing list providers in the USA.pptx
Top mailing list providers in the USA.pptx
 
BeMetals Investor Presentation_June 1, 2024.pdf
BeMetals Investor Presentation_June 1, 2024.pdfBeMetals Investor Presentation_June 1, 2024.pdf
BeMetals Investor Presentation_June 1, 2024.pdf
 
Call 8867766396 Satta Matka Dpboss Matka Guessing Satta batta Matka 420 Satta...
Call 8867766396 Satta Matka Dpboss Matka Guessing Satta batta Matka 420 Satta...Call 8867766396 Satta Matka Dpboss Matka Guessing Satta batta Matka 420 Satta...
Call 8867766396 Satta Matka Dpboss Matka Guessing Satta batta Matka 420 Satta...
 
Satta Matka Dpboss Matka Guessing Kalyan Chart Indian Matka Kalyan panel Chart
Satta Matka Dpboss Matka Guessing Kalyan Chart Indian Matka Kalyan panel ChartSatta Matka Dpboss Matka Guessing Kalyan Chart Indian Matka Kalyan panel Chart
Satta Matka Dpboss Matka Guessing Kalyan Chart Indian Matka Kalyan panel Chart
 
Industrial Tech SW: Category Renewal and Creation
Industrial Tech SW:  Category Renewal and CreationIndustrial Tech SW:  Category Renewal and Creation
Industrial Tech SW: Category Renewal and Creation
 
Dpboss Matka Guessing Satta Matta Matka Kalyan Chart Satta Matka
Dpboss Matka Guessing Satta Matta Matka Kalyan Chart Satta MatkaDpboss Matka Guessing Satta Matta Matka Kalyan Chart Satta Matka
Dpboss Matka Guessing Satta Matta Matka Kalyan Chart Satta Matka
 
Lundin Gold Corporate Presentation - June 2024
Lundin Gold Corporate Presentation - June 2024Lundin Gold Corporate Presentation - June 2024
Lundin Gold Corporate Presentation - June 2024
 
Understanding User Needs and Satisfying Them
Understanding User Needs and Satisfying ThemUnderstanding User Needs and Satisfying Them
Understanding User Needs and Satisfying Them
 
Unveiling the Dynamic Personalities, Key Dates, and Horoscope Insights: Gemin...
Unveiling the Dynamic Personalities, Key Dates, and Horoscope Insights: Gemin...Unveiling the Dynamic Personalities, Key Dates, and Horoscope Insights: Gemin...
Unveiling the Dynamic Personalities, Key Dates, and Horoscope Insights: Gemin...
 

ibook.pub-performance-evaluation-for-network-services-systems-and-protocols.pdf

I have coauthored over 180 research papers published in international journals (e.g., IEEE/ACM Transactions on Networking, Communications of the ACM, IEEE TPDS, IEEE TNSM, Computer Networks) and conferences (e.g., SIGCOMM, NSDI, Infocom, IMC, PAM, Globecom, ICC). I have been honored with a Google Faculty Award, several best paper awards, Microsoft and Amazon research grants, and two IRTF (Internet Research Task Force) ANRP (Applied Networking Research Prize) awards.
Prof. Stênio Fernandes has a solid record of research publications in the field of computer communications and networks. He has published over 120 research papers in international peer-reviewed conferences and journals. His research interests cover the crucial aspects of performance evaluation of network and communication systems, as well as Internet traffic measurement, modeling, and analysis. I can affirm that this book is a reflection of his experience with academic and industrial research projects related to the performance evaluation of computer networks. In this book, Prof. Stênio Fernandes gives a comprehensive perspective on the methods used to accurately evaluate the performance of modern computer networks. The crucial and advanced features of performance evaluation techniques are explained clearly, so that readers will understand how to devise the right evaluation plans. Drawing on excerpts from the scientific literature, this book addresses the most relevant aspects of experimentation, simulation, and analytical modeling of modern networks. Readers will gain a better understanding of applied statistics in computer networking and of how theory and best practice in the field intersect. The book also identifies the current challenges that industrial and academic researchers face in their work, as well as the potential for further innovation in this field.

University of Naples Federico II, Naples, Italy
Antonio Pescapè
Acknowledgments

I have nursed the dream of writing this book for a very long time. My position as a member of technical program committees for a large number of important scientific conferences in the computer networking field has given me the opportunity to witness, in wonder, the number of excellently written papers that have been rejected due to lapses and a lack of rigor in their performance evaluation and analysis. It is common to see authors come up with brilliant ideas but fail to scientifically prove the validity of those ideas. A poor performance evaluation will cast doubt on any paper's claims about its contributions and relevance to the field. The case is the same for scientific journals; I have been privileged to act as a referee for many important journals in the field of computer networks and communications.
Going through the exhibition area during a scientific conference, I met Susan Lagerstrom-Fife, an editor (Computer Science) at Springer, USA. After the usual pleasantries, I asked her about the requirements for writing a book for Springer. I got some useful information and took action, and I can happily say that this book is the result of that productive conversation. I would like to thank Susan and her assistant Jennifer Malat for guiding me along this long road.
It was an interesting and difficult experience writing this book. I often experienced what writers call "writer's block." Now I know how real it is, and I can confirm that it is not a very happy experience. I was able to overcome this challenge by reading good books on focus and productivity. I owe much of my success in overcoming it to Barbara Oakley, whose course "Learning How to Learn" on Coursera played a vital role in helping me develop my mind and sharpen my skills at a higher level. I was very happy to have the opportunity to thank her in person when she came to give a talk at Carleton University, Ottawa, Canada, in May 2016. I will not stop expressing my sincere gratitude to her for putting out all that useful information for free.
Communicating your ideas to a diverse audience is not an easy task. Writing a book chapter that reviews essential concepts of statistics was difficult to organize and deliver. I would like to thank Alexey Medvedev, who holds a PhD in mathematics (2016) from Central European University, for assessing all the equations and mathematical concepts in that chapter.
I would also like to thank all my colleagues from universities around the world, most especially from the Universidade Federal de Pernambuco (Brazil), the University of Ottawa (Canada), and Carleton University (Canada), for the encouragement and kind support that helped me finish this book. I send special thanks to my former supervisor, Professor Ahmed Karmouch (University of Ottawa), and my colleague Professor Gabriel Wainer (Carleton University). I would also like to extend my sincere gratitude to the many network engineers I met at the Internet Engineering Task Force meetings between 2014 and 2017; I thank them for the support and tips they offered me while I was writing this book.
Finally, I would like to thank my family and friends for showing their concern with the regular question, "How's the book writing going?" Many of the challenges I faced while writing this book sometimes made me unavailable and impatient. I promise to catch up with you all over coffee, wine, music concerts, and physical activities. This book would not have been possible without the love, support, and appreciation of my work expressed by my wife Nina and my children Victor and Alice. I also wish to dedicate this book to my mother Penha and my father (in memoriam) Fernando.
Contents

1 Principles of Performance Evaluation of Computer Networks
  1.1 Motivation: Why Do We Need to Assess the Performance of Computer Networks?
  1.2 Classical and Modern Scenarios: Examples from Research Papers
    1.2.1 Performance Evaluation in Classical Scenarios
    1.2.2 Performance Evaluation in Modern Scenarios
  1.3 The Pillars of Performance Evaluation of Networking and Communication Systems
    1.3.1 Experimentation/Prototyping, Simulation/Emulation, and Modeling
    1.3.2 Supporting Strategies: Measurements
  References

2 Methods and Techniques for Measurements in the Internet
  2.1 Passive vs. Active vs. Hybrid Measurements
  2.2 Traffic Measurements: Packets, Flow Records, and Aggregated Data
  2.3 Sampling Techniques for Network Management
  2.4 Internet Topology: Measurements, Modeling, and Analysis
    2.4.1 Internet Topology Resolution
    2.4.2 Internet Topology Discovery: Tools, Techniques, and Datasets
  2.5 Challenges for Traffic Measurements and Analyses in Virtual Environments
    2.5.1 Cloud Computing Environments
    2.5.2 Virtualization at Network Level
  2.6 Bandwidth Estimation Methods
  References

3 A Primer on Applied Statistics in Computer Networking
  3.1 Statistics and Computational Statistics
  3.2 I'm All About That Data
  3.3 Essential Concepts and Terminology
  3.4 Descriptive Statistics
    3.4.1 I Mean It (Or Measures of Centrality)
    3.4.2 This Is Dull (Or Measures of Dispersion)
    3.4.3 Is It Paranormally Distributed? (Or Measures of Asymmetry and Tailedness)
  3.5 Inferential Statistics
    3.5.1 Parameter Estimation: Point vs. Interval
    3.5.2 Estimators and Estimation Methods
  3.6 The Heavy-Tailed Phenomenon
    3.6.1 Outlier Detection
    3.6.2 Heavy-Tailed Distributions and Its Variations (Subclasses)
    3.6.3 Evidence of Heavy-Tailedness in Computer Networks
  References

4 Internet Traffic Profiling
  4.1 Traffic Analysis
    4.1.1 Identification and Classification
    4.1.2 Techniques, Tools, and Systems for Traffic Profiling
  4.2 Industrial Approach for Traffic Profiling: Products and Services
  4.3 Traffic Models in Practice
    4.3.1 Workload Generators
  4.4 Simulation and Emulation
    4.4.1 Discrete-Event Simulation and Network Simulation Environments
    4.4.2 Practical Use of Network Simulators and Traffic Profiles
  References

5 Designing and Executing Experimental Plans
  5.1 Designing Performance Evaluation Plans: Fundamentals
  5.2 Design of Experiments (DoE)
    5.2.1 The DoE Jargon
    5.2.2 To Replicate or to Slice?
  5.3 DoE Options: Choosing a Proper Design
    5.3.1 Classification of DoE Methods
    5.3.2 Notation
  5.4 Experimental Designs
    5.4.1 2^k Factorial Designs (a.k.a. Coarse Grids)
    5.4.2 2^(k-p) Fractional Factorial Designs
    5.4.3 m^k Factorial Designs (a.k.a. Finer Grids)
  5.5 Test, Validation, Analysis, and Interpretation of DoE Results
  5.6 DoEs: Some Pitfalls and Caveats
  5.7 DoE in Computer Networking Problems
    5.7.1 General Guidelines
    5.7.2 Hands-On
  References
Chapter 1
Principles of Performance Evaluation of Computer Networks

In this chapter, I give you an overview of the technical aspects necessary for a comprehensive performance evaluation of computer networks. First, I provide several examples of how an accurate performance evaluation plan is essential for discovering and revealing new phenomena in every nuance of the software and hardware components of a given network scenario. Then, I give an overview of the pillars of performance evaluation for both simple and more sophisticated networking scenarios, and I discuss research and development challenges regarding measurements, experimentation and prototyping, simulation and emulation, and modeling and analysis.

1.1 Motivation: Why Do We Need to Assess the Performance of Computer Networks?

Performance evaluation on the Internet is a daunting challenge. There are too many elements to observe, from concrete components (e.g., communication links, switches, routers, and servers) to more abstract ones (e.g., packets, protocols, and virtual machines). And within most of those elements there are mechanisms, deployed as pieces of code, that might utilize resources unpredictably. When researchers or engineers in the networking field need to assess the performance of a certain system, they usually have a clue about what needs to be assessed. In most cases, they only need to design a proper performance evaluation plan to obtain meaningful results and to derive insights or answer questions from them. However, designing a sound performance evaluation plan will likely raise a number of questions, such as: Why do we need to assess the performance of this networked system? What do we measure? Where, system-wise, do we measure? When do we measure? Which time scale is more appropriate for a particular measurement and analysis? Where, network-wise, do we place the measurement points? Will such measurement points interfere with the collected metrics? What is a proper workload, and how do we generate it?
Do we need to repeat the experiments? If so, how many times? Is sampling acceptable for the given case? Do we need to derive an analytical model from the measurements? Can we use such a derived analytical model to predict the behavior of the networked system? If so, how far into the future? Will this performance analysis be based on experimentation or simulation? Why? Which one is better for the given scenario? Should we consider active measurements? Are passive measurements sufficient? You get the point.
It is straightforward to notice that an accurate performance evaluation of any computing system must be carefully designed and undertaken. For a general performance evaluation of computer systems, Raj Jain's classic The Art of Computer Systems Performance Analysis: Techniques for Experimental Design, Measurement, Simulation, and Modeling [1] has been the main source of information for the last two decades. If you are more mathematically inclined, Yves Le Boudec's book [2] will add value to your performance evaluation design and analyses. In this book, we stay in the middle, balancing a pragmatic approach (from the academia and industry point of view) with solid statistical ground.

1.2 Classical and Modern Scenarios: Examples from Research Papers

In this section, we give some examples of well-conducted performance evaluation studies in the computer networking field. By classical (traditional) we mean topics that were deeply explored in the past and currently attract little attention from researchers. This is the case when there is almost no room left for improvement or for new contributions to the topic. For instance, congestion control mechanisms and protocols were deeply explored in the past. Thousands of research papers have already been published, although there are still some new challenging scenarios that need further investigation. Similarly, peer-to-peer networking protocols and mechanisms gained lots of attention in past decades. Both can surely be considered traditional (classical) scenarios. Modern (or advanced) scenarios include studies with a strong networking paradigm shift, such as those related to network virtualization (NV), cloud computing, software-defined networking (SDN), network functions virtualization (NFV) and service function chaining (SFC), and the like. The following research papers highlight the importance of a good performance evaluation design as well as of the presentation of results. We bring the reader examples from top conferences and journals and show the authors' likely rationale for arriving at sound performance analyses. We hope these examples will make it clear that the chances of publishing in good scientific conferences and journals are higher when the paper has a sound and convincing performance evaluation. Network engineers and designers will also benefit from these examples, since they might be required to deliver a performance analysis of the networks they manage.
1.2.1 Performance Evaluation in Classical Scenarios

1.2.1.1 Application Layer

In this subsection, we present a couple of examples of well-designed performance evaluation plans for an application-layer protocol. We also present in detail how the experiments were conducted, along with some selected results, to highlight performance metrics and parameterization (factors and levels), as well as precise forms of presenting results.
In the paper Can SPDY Really Make the Web Faster? [3], Elkhatib, Tyson, and Welzl provide a thorough experiment-based performance evaluation of a recent protocol called SPDY [4], which served as the starting point for the development of HTTP/2.0 [5]. The whole Internet community (e.g., users, developers, researchers) had been calling for improvements in the web surfing experience, due to the ever-increasing development of new services and applications that require timely communications (i.e., low latencies). The authors start the paper by arguing that the ever-increasing complexity of web pages is likely to affect their retrieval times. Some users are more tolerant of delay than others, especially those engaged in online shopping. The fundamental research question the authors were trying to answer was whether the proposed new version of the HTTP protocol is a leap-forward technology or just "yet another protocol" with small performance improvements. In their words, does it offer a fundamental improvement or just further tweaking? Additional questions were raised in the paper as their experiments showed some network parameterizations severely affecting the protocol's behavior and performance. There were several essential arguments for conducting such a study, namely: (i) previous work only gave a shallow understanding of SPDY performance, and (ii) the only conclusions so far were that SPDY performance is highly dependent on several factors and is highly variable. Therefore, in order to gain in-depth knowledge of SPDY performance, they conducted experiments in real uncontrolled environments, i.e., with SPDY client software (e.g., Chromium, http://www.chromium.org/) and real deployed servers, such as YouTube and Twitter, as well as in controlled environments, using open-source released versions of SPDY servers (e.g., Apache with the mod_spdy module, https://code.google.com/archive/p/mod-spdy/, installed). Details of the experiments can be found in the original paper [3]. As a complementary evaluation, in the paper Performance Analysis of SPDY Protocol in Wired and Mobile Networks [77], the authors evaluated SPDY in several wireless environments, namely 3G, WiBro, and WLAN networks, as well as with different web browsers (Chromium-based and Firefox). SPDY performance varies with factors such as the network access technology (e.g., 3G, 802.11).
Both experiment design rationales highlight the importance of making the right decisions to obtain sound and meaningful results for further analysis. First, in [3], the authors selected an appropriate set of measurement tools to ease further analysis. Then, they discussed the adequate performance metric, suggesting a new one, time of wire (ToW), captured at the network level, to avoid the inclusion of web browser processing times. In other words, the ToW performance metric is the time between the departure of the first HTTP request and the arrival of the last packet from the web server. Second, in the measurement methodology, the authors decided to separate the experiments into two classes, namely wild and controlled. When dealing with real protocols, it is important to evaluate how they behave in the actual environment, where the experimenter is not able to control most of the parameters. However, such limitations impose severe restrictions on the concluding remarks that can be drawn, since a number of assumptions might be wrong. In this particular case, it is clear that the controlled set of experiments was needed, since variations in network conditions (e.g., server load, path available bandwidth, path delay, and packet loss ratios) could not be precisely monitored. In the live tests, the authors sampled the most accessed websites, selecting the most representative ones (i.e., the top eight websites from Alexa, www.alexa.com). They also collected enough samples to ensure the statistical significance of the results. For instance, they collected over one million HTTP GET requests from one site over 3 days. In the controlled tests, they deployed the usual Linux network emulator (NetEm) and Linux traffic control (tc) in a high-speed local network environment. Both tools were used to control delays and packet loss ratios, as well as to shape the sending rate, thus mimicking control of the available bandwidth. There are some comments on the use of sampling strategies, but no further details were given. In [77], the authors used the usual page download time.
Preliminary analysis of the results in both papers shows that deployment of SPDY does not necessarily imply performance gains. There are some cases where performance deteriorates, as Fig. 1.1 shows.
As there is no way to understand why this happens from the wild tests alone, the authors of [3] had strong arguments for conducting controlled experiments. Therefore, a series of experiments was presented to show the effect of network conditions on the performance of SPDY. They used the major performance factors, namely delay, available bandwidth, and loss. Levels of the factors were set as follows: (i) delay, from 10 to 490 ms; (ii) available bandwidth, from 64 kbps to 8 Mbps; and (iii) packet loss ratio (PLR), from 0% to 3%. To isolate the effects of the other factors, in each experiment varying a particular factor they kept the other factors' levels fixed. For instance, when conducting experiments to understand the impact of bandwidth on SPDY performance, they fixed the RTT at 150 ms and the PLR at 0%. It is worth emphasizing that a combination of levels for each factor could be taken for a more detailed and extensive experimentation. This is essentially a design decision that the experimenter must clearly state and provide reasonable arguments for. Sometimes space is simply lacking, as in research papers with a limited number of pages. In other cases, it might make no sense to run a full factorial experiment. In statistics, full factorial means the use of all possible combinations of the factors' levels. In the case of real scenarios for SPDY deployments, it is highly unlikely that a combination of high bandwidth, high PLR, and high delay will make any sense.
The next three figures (Figs. 1.2, 1.3, and 1.4) illustrate the effect of RTT, bandwidth, and PLR on the ToW reduction, respectively. The ToW reduction performance metric is the percentage of improvement in ToW of SPDY over HTTPS. The concluding remarks from these results are: (i) in the RTT experiments, SPDY always performs better than HTTP, especially in high-delay environments; (ii) SPDY performs better in mid- to low-bandwidth scenarios; and (iii) an increase in PLR severely impacts SPDY performance. The authors provided detailed justifications for the behavior of SPDY in all scenarios. The main conclusion is that SPDY may not perform that well in mobile settings. They go a step further in their experiments by showing the impact of the server-side infrastructure (e.g., domain sharding) on the ToW reduction.
In conclusion, it is important to understand why this paper is sound. First, the authors presented several clear arguments for the decisions they made regarding the experimental plan design, which included performance metrics, factors, and levels. Second, they carefully selected the measurement tools and made sure they would get enough data for the sake of statistical significance. They also used a variety of ways to present results, such as tables, XY (scatter) plots, empirical cumulative distribution functions (ECDF), bar plots, and heat maps. Such an approach makes the paper more exciting (or less boring, if you will) to read. Last, but not least, they drew conclusions solely from the results, along with reasonable qualitative explanations.

Fig. 1.1 Time of wire (ToW) – SPDY websites (Source: Elkhatib et al. [3])
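As a rough illustration, the controlled conditions described above can be scripted end to end. The sketch below is our own, not the authors' actual tooling: it assumes a Linux host with root privileges and an interface named eth0, and the level grids are abbreviated. It enumerates a full factorial design over the three factors and applies each combination with tc/netem:

```python
import itertools
import subprocess

# Abbreviated factor levels, patterned after the SPDY controlled tests
# (not the exact grid from the paper).
delays_ms = [10, 150, 490]      # RTT
rates_kbit = [64, 1000, 8000]   # available bandwidth
loss_pct = [0, 1, 3]            # packet loss ratio

def apply_netem(iface, delay_ms, rate_kbit, loss):
    """Emulate one network condition on `iface` with Linux netem."""
    subprocess.run(
        ["tc", "qdisc", "replace", "dev", iface, "root", "netem",
         "delay", f"{delay_ms}ms", "rate", f"{rate_kbit}kbit",
         "loss", f"{loss}%"],
        check=True)

# Full factorial: every combination of levels. To isolate one factor,
# iterate over its levels while holding the other two fixed instead.
for delay, rate, loss in itertools.product(delays_ms, rates_kbit, loss_pct):
    apply_netem("eth0", delay, rate, loss)
    # ... fetch the page set over HTTPS and SPDY, record ToW, repeat ...
```

Note how the design decision discussed above shows up directly in the loop: a full factorial run is just itertools.product over all levels, whereas a one-factor-at-a-time design fixes two of the three lists to a single value.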
Fig. 1.2 Effect of RTT on ToW reduction (Source: Elkhatib et al. [3])
Fig. 1.3 Effect of bandwidth on ToW reduction (Source: Elkhatib et al. [3])
Fig. 1.4 Effect of packet loss on ToW reduction (Source: Elkhatib et al. [3])
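For concreteness, here is a minimal sketch of how the ToW metric and the ToW reduction plotted in Figs. 1.2, 1.3, and 1.4 could be computed from a decoded packet trace. The trace format (a list of dicts with 'ts', 'direction', and 'is_http_request' fields) is a hypothetical abstraction of what a pcap parser would produce; it is not the authors' code:

```python
def time_of_wire(packets):
    """ToW: time from the first HTTP request leaving the client to the
    last packet arriving from the web server (browser time excluded)."""
    first_request = min(p["ts"] for p in packets
                        if p["direction"] == "out" and p["is_http_request"])
    last_arrival = max(p["ts"] for p in packets if p["direction"] == "in")
    return last_arrival - first_request

def tow_reduction(tow_https, tow_spdy):
    """Percentage improvement of SPDY over HTTPS, as in Figs. 1.2-1.4."""
    return 100.0 * (tow_https - tow_spdy) / tow_https
```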
1.2.1.2 Transport Layer

Now we present an example of a careful and accurate performance evaluation study for transport-layer protocol design and analysis in cellular networks. We present details of how the author provides solid arguments to support the development of a new TCP-like protocol, even though the network research community has been doing this for decades and has produced hundreds of research papers. We also show how the experiments were designed to validate the new protocol and present some selected results. We highlight the performance metrics used, as well as the factors and levels.
In the thesis Adaptive Congestion Control for Unpredictable Cellular Networks [6], Thomas Pötsch proposes and explains the rationale of Verus, a delay-based end-to-end congestion control protocol that is quick and accurate enough to cope with highly variable network conditions in cellular networks. He used a mix of real prototyping and simulations to evaluate Verus' performance in a variety of scenarios. Pötsch argues that most TCP flavors that employ different congestion control mechanisms fail to show good performance in cellular networks, mainly due to their inability to cope well with highly variable bandwidth availability, varying queuing delays, and non-congestion-related stochastic packet losses. The main causes of such variability at the network level lie in the underlying link and physical layers. He highlights the four main causes as (i) the state of a cellular channel, (ii) the frame scheduling algorithms, (iii) device mobility, and, surprisingly, (iv) competing traffic. These causes have different impacts on channel characteristics: some are more prone to affect delays, whereas others might cause burstiness in the perceived channel capacity at the receiver. It is indeed tough to develop precise models, algorithms, and mechanisms to track short- and long-term channel dynamics [7]. Therefore, he developed a simple yet efficient delay profile model to correlate the congestion control variable (the sending window size) with the measured end-to-end delay. In essence, with a small modification of the additive-increase (AI) portion of the additive-increase/multiplicative-decrease (AIMD) mechanism present in most TCP flavors, Verus is able to adapt quickly to changing channel conditions at several timescales.
He used essential performance metrics to study the overall performance of Verus against several TCP flavors, such as Cubic [8], New Reno [9], Vegas [10], and Sprout [11] (a recent TCP-like proposal for wireless environments). Of these TCP flavors, Sprout is the only one explicitly designed for cellular networks. Pötsch kept the most popular TCP flavors currently deployed on the Internet (New Reno and Cubic) and discarded the other legacy ones. In addition, he gives some arguments for keeping out of the evaluation those protocols that either need or rely on explicit feedback from the network layer, such as the use of Explicit Congestion Notification (ECN) [12].
In [75], Fabini et al. show how an HSPA downlink presents high delay variability (cf. Figs. 1.5 and 1.6). Thomas Pötsch [6] also shows that 3G and LTE networks do not isolate channels properly (cf. Fig. 1.7). One interesting finding here relates to cellular channel isolation: the assumption of channel isolation by means of queue isolation does not hold in the case of high traffic demands (as Fig. 1.6 shows). The author also shows channel unpredictability at different timescales. It is worth emphasizing that at small timescales the variability effect is more prominent (cf. Fig. 1.7) [76].

Fig. 1.5 Burstiness on latency in cellular networks (Source: Pötsch [6])
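To make the delay-based AIMD idea concrete, the toy sketch below scales the additive-increase step by the headroom between a delay target and the measured delay. The constants and the scaling rule are ours, purely for illustration; this is not Verus' actual algorithm:

```python
class DelayAimdWindow:
    """Toy AIMD variant whose additive increase shrinks as the measured
    end-to-end delay approaches a target. Loosely inspired by delay-based
    congestion control; NOT the actual Verus mechanism."""

    def __init__(self, target_delay_ms=50.0, beta=0.5):
        self.cwnd = 1.0                # congestion window (packets)
        self.target = target_delay_ms  # delay the sender tries to keep
        self.beta = beta               # multiplicative-decrease factor

    def on_ack(self, measured_delay_ms):
        # Additive increase, scaled by delay headroom: large headroom
        # gives near-classic +1/cwnd growth per ACK, while delay above
        # the target makes the window shrink gently.
        headroom = (self.target - measured_delay_ms) / self.target
        self.cwnd = max(1.0, self.cwnd + headroom / self.cwnd)

    def on_loss(self):
        # Classic multiplicative decrease on packet loss.
        self.cwnd = max(1.0, self.cwnd * self.beta)
```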
Fig. 1.6 Impact of user traffic on packet delay (Source: Pötsch [6])
Fig. 1.7 Traffic received from a 3G downlink (100 and 20 ms windows) (Source: Pötsch [6])
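The timescale effect visible in Fig. 1.7 is easy to reproduce on any packet trace: aggregate the received bytes into windows of different sizes and compare the resulting throughput series. A minimal sketch, assuming a hypothetical trace of (timestamp, packet size) pairs such as one parsed from a pcap:

```python
from collections import defaultdict

def throughput_series(trace, window_s):
    """Aggregate (timestamp_s, size_bytes) pairs into Mbit/s per window."""
    buckets = defaultdict(int)
    for ts, size in trace:
        buckets[int(ts / window_s)] += size
    return {k * window_s: b * 8 / window_s / 1e6
            for k, b in sorted(buckets.items())}

# Toy trace: the same packets viewed at 100 ms and 20 ms windows; the
# finer timescale typically shows a much higher peak-to-mean ratio.
trace = [(0.001, 1500), (0.004, 1500), (0.090, 1500), (0.150, 1500)]
print(throughput_series(trace, 0.100))  # 100 ms windows
print(throughput_series(trace, 0.020))  # 20 ms windows
```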
We will not give details of the development of the Verus protocol; we will focus only on the performance evaluation. However, we give the reader a glimpse of the Verus design rationale. The delay profile is the main component of the congestion control mechanism. It is built on four basic components, namely the delay estimator, the delay profiler, the window estimator, and the loss handler. The delay estimator simply keeps track of the received packet delays in a given timeframe, whereas the delay profiler resembles a regression model correlating the sending window with the delay estimate (cf. Fig. 1.8). The window estimator is a bit more complex; it aims at providing information for calculating the number of outstanding packets in the network in the following timeframe (called an epoch). The loss handler tracks losses to be used in the loss recovery phase, as in any legacy TCP mechanism.
The author takes an important step toward a sound performance evaluation, namely parameter sensitivity analysis. As Verus' internal mechanisms comprise several parameters, it is important to understand its robustness across varying application scenarios. To do so, the author executed simulation-based evaluations in different scenarios, focusing on three major parameters, namely the epoch, the delay profile update interval, and the delta increment.

Fig. 1.8 Verus delay profile (Source: Pötsch [6])

Two types of experiments were conducted: real experiments and simulations, showing Verus' performance improvements over New Reno, Cubic, and Sprout in both 3G and LTE networks. Figure 1.9 shows an example of a performance comparison in LTE networks. For a trace-driven simulation, the effect of mobility was evaluated as a performance factor. Throughput, delay, and Jain's fairness index [13] were used as performance metrics. The selected factors included the number of devices and flows, downlink and uplink data rates, the number of competing flows, the user speed profile (in the case of mobility), the arrival of new flows, RTT, etc.

Fig. 1.9 Throughput vs delay (LTE network) (Source: Pötsch [6])

The concluding remarks from these results are that Verus (i) adapts both to rapidly changing cellular conditions and to competing traffic, (ii) achieves higher throughput than TCP Cubic while maintaining a dramatically lower end-to-end delay, and (iii) outperforms very recent congestion control protocols for cellular networks, like Sprout, under rapidly changing network conditions.
Again, it is important to understand why this work is sound. First, the author presented clear arguments for the need for a new transport protocol for cellular networks. As far as the experimental plan design is concerned, details were provided on all performance metrics, factors, and levels. Second, the evaluation environments, which included prototyping and simulation tools, were carefully selected. Finally, as expected for an ACM SIGCOMM paper, a good variety of ways to present the results was used, such as tables, scatter plots, probability distribution functions (PDF), and time series plots. Please recall that variety means more exciting (or less boring) material to read. Last, but not least, the conclusions were drawn solely from the results of real-world measurements, with additional support from the simulation results.

1.2.1.3 Network Layer

When it comes to performance evaluation of protocols, systems, and mechanisms at the network level, computer networking researchers and engineers are likely to be flooded with the massive amount of research and products developed over the last decades.
Every time a new trend arises (e.g., think of ATM networks in the 80s), there will often be a "gold rush" to investigate the performance of the new technology in a variety of scenarios. Network operators usually want to understand whether a wide deployment of a given technology will bring operational expenditure (OPEX) or capital expenditure (CAPEX) savings in the long run.
The particular cases of QoS provisioning, and its counterpart, network traffic throttling, have become an arena for disputes between network operators and users [14]. The rise of a number of technologies, such as peer-to-peer (P2P), VoIP, and deep packet inspection, has set this arena and has triggered an interminable debate on network neutrality [15], "premium services," and the like. In the paper Identifying Traffic Differentiation in Mobile Networks, Kakhki et al. [16] focused on understanding the current practices of Internet service providers (ISPs) when performing traffic differentiation in mobile environments (if any). The generic term differentiation means that a certain ISP can provide either better (e.g., QoS provisioning) or worse (e.g., bandwidth throttling) service for the user. Their main motivation is that, although the debate is immense, the availability of open data to support reasonable discussions is virtually nonexistent. In addition, they argue that regulatory agencies have been dealing with such issues only marginally, thus making it difficult for end users to understand ISPs' management policies and how these might affect their applications' performance. It is clear to some advanced users that ISPs have been deploying throttling mechanisms for some time now, and that this causes performance degradation for certain applications. Back in October 2005, an interview published in IEEE Spectrum magazine revealed that mediation of VoIP traffic was in use by several telephone companies that provided Internet services. In this context, mediation means traffic differentiation. Although at that time there were some regulations (in the US) that could prevent carriers from "blocking potentially competitive services" [14], the article quotes the vice-president of product marketing of a software company that provided VoIP traffic identification and classification, as follows:

"But there's nothing that keeps a carrier in the United States from introducing jitter, so the quality of the conversation isn't good. … You can deteriorate the service, introduce latency, and also offer a premium to improve it." (Excerpt from Cherry [14])

In [16], the authors present the design of a system to identify traffic differentiation in wireless networks for any application. They address important technological challenges, such as (i) performance testing of any application class, (ii) understanding how differentiation occurs in the network, and (iii) wireless network measurements from the user devices. They designed and implemented a system and validated it in a controlled environment using a commodity device (e.g., a common smartphone). They focus only on traffic-shaping middleboxes instead of the full range of traffic differentiation. They assume that traffic differentiation might be triggered by one or more factors, such as application signatures in the packets' payload, current application throughput, and the like. Other factors were worth further investigation, such as the users' location and the time of day. My assumption here is that ISPs can virtually apply different policies for different users and applications at different times of the day. It is common to find voice plans that give users unlimited calls in the evenings and on weekends [79]; therefore, it is very plausible that differentiation can be applied there as well. Issues like traffic blocking or content modification were out of the scope of their work. The main steps of their methodology for traffic differentiation analysis, which is based on a general trace record-replay methodology, are:
(i) To record a packet trace from the given application
(ii) To extract the communication profile between the end systems
(iii) To replay the trace over the targeted network with and without the use of a VPN channel
(iv) To perform statistical tests to identify whether traffic differentiation occurred for that particular application
Some interesting intermediate findings were that server IP addresses are not used for differentiation purposes, that high port numbers might be classified as P2P applications, that few packets (i.e., fewer than ten) are necessary to trigger the traffic differentiation mechanism, and that encryption in HTTP is not an issue for the traffic shapers. The proposed detection mechanism is based on the well-known two-sample Kolmogorov-Smirnov (K-S) test (cf. Fig. 1.10) in conjunction with an Area Test statistic. The testbed environment for the controlled tests uses an off-the-shelf (OTS) traffic shaper, a mobile device (replay client), and a server (replay server). Performance factors are the shaping rate (from 9% to 300% of the application's peak traffic) and packet losses (from 0% to 1.45%, controlled by the Linux tc and netem). The selected applications were YouTube and Netflix (TCP-based) and Skype and Hangouts (UDP-based). Performance metrics for the calibration studies included overall accuracy and resilience to noise, whereas for the real measurements they collected throughput, latency, jitter, and loss rate.

Fig. 1.10 Illustration of the well-known nonparametric K-S test
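The core of such a detector is easy to prototype. The sketch below applies the two-sample K-S test to per-interval throughput samples from the exposed and VPN replays; the synthetic numbers and the 0.05 threshold are ours for illustration (the actual system also combines the K-S test with an area test and careful calibration):

```python
import numpy as np
from scipy.stats import ks_2samp

# Hypothetical per-interval throughput samples (Mbps) from the two replay
# runs: one exposed to the network, one tunneled through a VPN.
rng = np.random.default_rng(7)
exposed = rng.normal(loc=2.0, scale=0.4, size=200)  # looks shaped
vpn = rng.normal(loc=3.1, scale=0.5, size=200)      # looks unshaped

stat, p_value = ks_2samp(exposed, vpn)
# A small p-value means the two throughput distributions differ more
# than chance would explain, consistent with traffic differentiation.
if p_value < 0.05:
    print(f"Differentiation suspected (KS={stat:.3f}, p={p_value:.4f})")
else:
    print(f"No significant difference (KS={stat:.3f}, p={p_value:.4f})")
```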
After some calibration studies, the authors conducted real measurements in a number of operational networks for multimedia streaming applications as well as voice/video conferencing systems. All experiments were repeated several times to ensure the statistical significance of the results. They reported success in detecting middlebox behavior such as traffic shaping, content modification, and the use of proxies. One consequence of a low shaping rate limit on TCP performance is that, in some cases, the congestion control mechanism of the protocol might not even be able to leave the slow-start phase. One of the main conclusions is that a high number of middleboxes in the selected mobile operators break the network neutrality (or the Internet's end-to-end) principle, thus directly affecting the performance of some applications.
As usual, we highlight the importance of this paper in terms of performance evaluation at the network level. First, the authors presented clear arguments for the need to know whether traffic differentiation exists in wireless networks and which traffic features trigger such mechanisms. They implemented and tested their system as a real prototype and conducted experiments in a controlled testbed (for calibration) and in production networks (for validation). Their experimental plan design provided details on all performance metrics, factors, and levels. Finally, the presentation of results included tables, scatter plots, and time series plots. Last, but not least, they made both code and datasets available for the sake of the reproducibility of their research.

1.2.1.4 Link Layer

Performance evaluation in wireless and mobile environments has always been in high demand. There are several operational challenges related to uncontrolled factors, such as channel conditions and user mobility. Moreover, the widespread adoption of short- and mid-range wireless communication technologies (e.g., WiFi) together with cellular networks brings great opportunities for improving network performance, while at the same time bringing a number of design and deployment issues, such as the optimization of coverage and channel allocation. Telecom vendors have recently been offering heterogeneous network (HetNet) solutions for traffic offloading from the macro to the micro network. Such solutions can boost the customer experience by offering high performance (e.g., high data rates or low latencies) from either the macro or the micro cell. The challenges of designing HetNets are tremendous, since conflicting requirements are always in place. On the one hand, it is necessary to cut the total cost of ownership (TCO) and avoid over-dimensioning. On the other hand, operators must improve performance for the end users and optimize coverage.
In the paper When Cellular Meets WiFi in Wireless Small Cell Networks, Bennis et al. [17] tackle some challenges of heterogeneous wireless networking design by addressing the integration of WiFi and cellular radio access technologies in small cell base stations (SCBS). Figure 1.11 shows a common deployment scenario for HetNets.
The authors emphasized HetNets as a key solution for dealing with performance issues in macrocellular-only infrastructures (macrocell base stations, MBS), since multimode SCBS can bring complementary benefits from a seamless-integration point of view.
point of view. They also pointed out some concerns about the lack of control of the quality of service in unlicensed bands (i.e., in WiFi networks), which can severely degrade performance. They argue that offloading some of the traffic from the unlicensed to the licensed (and well-managed) network can improve performance, which motivated them to propose an intelligent distributed offloading framework. Instead of using the usual strategy of offloading from macro to micro cells, they argue that SCBS could simultaneously manage traffic between cellular and WiFi radio access technologies according to the traffic profile and network conditions (e.g., QoS requirements, network load, interference levels). In addition, they discuss that SCBS could run a long-term optimization process by keeping track of the network's optimal transmission strategy over licensed/unlicensed bands. As examples, they mentioned that delay-tolerant applications could use unlicensed bands, whereas delay-sensitive applications could be offloaded to the licensed channels.
The authors discussed some design challenges for HetNet deployment, as classical offloading (i.e., from macro to micro cell or WiFi) might not be the best approach. They argued that fine-grained offloading strategies should be deployed in order to make performance-aware, optimized traffic-steering decisions. Other network conditions, such as backhaul congestion and channel interference, must be taken into account when enforcing a certain offloading policy. They proposed a reinforcement learning (RL)-based solution [18] to this complex problem.
Fig. 1.11  Common deployment scenario for HetNets (Source: Bennis et al. [17])
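To fix ideas before looking at their model, here is a minimal, single-state Q-learning sketch of subband selection. Everything in it (the reward model, the parameter values, the bandit-style simplification) is my own illustrative assumption; the paper's actual formulation, with joint interference management and offloading, is considerably richer.

```python
# Epsilon-greedy Q-learning toy: an SCBS repeatedly picks a subband and
# learns which one yields the best observed reward (e.g., throughput minus
# an interference penalty). All names and values are hypothetical.
import random

N_SUBBANDS = 4
EPSILON, ALPHA, GAMMA = 0.1, 0.2, 0.9
q = [0.0] * N_SUBBANDS          # one state only: a bandit-style Q-table

def measured_reward(subband: int) -> float:
    """Stand-in for a measured reward; subband 0 is best on average."""
    return [1.0, 0.6, 0.8, 0.4][subband] + random.gauss(0, 0.1)

for _ in range(10_000):
    if random.random() < EPSILON:                     # explore
        a = random.randrange(N_SUBBANDS)
    else:                                             # exploit
        a = max(range(N_SUBBANDS), key=q.__getitem__)
    r = measured_reward(a)
    q[a] += ALPHA * (r + GAMMA * max(q) - q[a])       # single-state update

print("Learned subband preferences:", [round(v, 2) for v in q])
```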
They provided a basic RL model, with realistic assumptions for network parameterization, including the existence of multimode SCBS. One particular model formulation considers joint interference management and traffic offloading. The basic idea behind the RL-based modeling is that the SCBS can make autonomous decisions to optimize the given objective function. Specifically, they set the goal to devise an intelligent and online learning mechanism that optimizes licensed spectrum transmission and, at the same time, leverages WiFi by offloading delay-tolerant traffic. They aim to accomplish this goal through two basic framework components, namely, subband selection and proactive scheduling. The macro behavior of the proposal is simple yet promising. Once each SCBS makes its decision on which subband to use, a scheduling procedure starts, which takes into account the user's requirements and the network conditions.
The authors consider an LTE-A/WiFi offload case study in a multi-sector MBS scenario integrated with multimode SCBS. In a simulation environment, the user device (or user equipment – UE) has a traffic mix profile unevenly distributed between best-effort, interactive, streaming, real-time, and interactive real-time applications. They consider four scenarios for benchmarking, as follows:
(i) Macro-only: MBS serves the UEs that use licensed bands only.
(ii) HetNet: MBS and SCBS serve the UEs. SCBS is single mode (licensed bands only).
(iii) HetNet + WiFi: MBS and SCBS serve the UEs. SCBS is multimode (e.g., licensed and unlicensed bands).
(iv) HetNet + WiFi with access method based on received power.
Performance metrics include average UE throughput, total SCBS throughput, total cell throughput, and total cell-edge throughput. As performance factors, they use the number of UEs, the subband selection strategy, and the number of SCBS (i.e., small cell densification). Figures 1.12 and 1.13 show some performance results. One can clearly observe that a well-designed HetNet strategy can boost network and end-user performance when compared to an MBS-only scenario.
As a magazine paper, it is not expected to include detailed discussions of the problem formulation, the solutions, and the performance evaluation. The authors did a good job balancing the content between engineering design discussions and results that support the claim that their framework is promising for improving performance in wireless HetNet environments. They carefully selected performance metrics as well as factors and levels to conduct sound simulation-based experiments.
1.2.2  Performance Evaluation in Modern Scenarios
1.2.2.1  Virtualization and Cloud Computing
It might be a surprise for some, but virtualization concepts and technologies are not new. In fact, a brief look at the history of virtualization of computing resources reveals that the main concept, some proofs of concept, and real products span back
Fig. 1.12  Aggregate cell throughput vs. # of users for two traffic scheduling strategies (Source: Bennis et al. [17])
Fig. 1.13  Aggregate cell throughput for different offloading strategies (Source: Bennis et al. [17])
five decades ago. IBM and Bell Labs were the main players in the 1960s. Fast-forwarding to the late 1990s, one can see that Sun's Java was already gaining widespread adoption, along with VMWare technologies, as the main players. Meanwhile, computer scientists and engineers were developing the concept of the use of computing resources as general services. When looking back 50 years for information on virtualization and computing as services, it is difficult to pinpoint which concept came first. But it is safe to say that when hardware virtualization gained momentum in the late 1990s, the technology world changed forever, where Amazon.com (founded in 1994) might be considered the first heavy user of virtualization technologies. About 10 years later (early to mid-2000s), Amazon.com started to offer virtual computing resource services, such as the web (Amazon Web Services, 2002),4 storage (Amazon Simple Storage Service, 2006),5 and infrastructure (Amazon Elastic Compute Cloud, 2006).6
The main point of interest here is how virtualization-based technologies and services perform in real environments. On a closer look at the current virtualization technologies and services, one will find that a number of new concepts and services have arisen and evolved in the last 10 years. Virtualization is real and has spread into hardware, desktop, storage, applications, platforms, infrastructure, networks, and the like. Layers of virtualization components are now the norm, which brings a number of questions, such as how they impact the performance seen by the end users, how to optimize performance in distributed data centers, how to accurately measure new performance metrics (e.g., elasticity, as it is specific to cloud computing environments), etc. Adequate tools and methodologies for measurements and analysis in virtualized environments are currently under discussion in standardization bodies, such as the IETF's IPPM and BMWG working groups [19, 20].
In the paper CloudNet: Dynamic Pooling of Cloud Resources by Live WAN Migration of Virtual Machines [21], Wood et al. discuss that while virtualization technologies have provided precise performance scaling of applications within data centers, the support of advanced management processes (e.g., VM resizing and migration) in geographically distributed data centers is still a challenge. Such a feature would provide users and developers an abstract view of the (distributed) resources of a cloud computing provider as a single unified pool. They highlight some of the hard challenges for dynamic cloud resource scaling in WAN environments, as follows:
1. Minimization of application downtime: If the given application handles a massive amount of data, migration to a data center over WAN connections might introduce huge latencies as a result of the data copying process. Moreover, it might be necessary to keep disk and memory states consistent.
4  Amazon Web Services (2002) - "Overview of Amazon Web Services", White Paper, December 2015, available at https://d0.awsstatic.com/whitepapers/aws-overview.pdf.
5  Amazon Simple Storage Service (2006) - "AWS Storage Services Overview: A Look at Storage Services Offered by AWS", White Paper, November 2015, available at https://d0.awsstatic.com/whitepapers/AWS%20Storage%20Services%20Whitepaper-v9.pdf.
6  Amazon Elastic Compute Cloud - Varia, J., "Architecting for the Cloud: Best Practices", White Paper, May 2010, available at http://jineshvaria.s3.amazonaws.com/public/cloudbestpractices-jvaria.pdf.
2. Minimization of network configurations: In a LAN environment, VM migration might require only a few tricks in the underlying Ethernet layer (e.g., using transparent ARP/RARP quick reconfigurations). In the case where the IP address space changes, such a migration might cause disruption in connectivity from the application's point of view.
3. Management of WAN links: Link capacities and latencies in a WAN are very different from their LAN counterparts. Mainly due to costs, WAN capacities are not comparable to those of local networks. And even if they were, high utilization of links for long periods of VM migration is not good network management practice. In addition, link latency is highly unlikely to be a controllable factor. Therefore, the challenges are to provide migration techniques over distributed clouds that (i) operate efficiently over low-bandwidth links and (ii) optimize the data transfer volume to reduce the migration latency and cost.
The authors then propose CloudNet, a platform designed to address the above challenges, targeting live migration of applications in distributed data centers. The paper describes the design rationale along with a prototype implementation and an extended performance evaluation in a real environment. Figure 1.14 illustrates the concept of virtual cloud pools (VCP), which can be seen as an abstraction of the distributed cloud resources into a single view. VCP aims at connecting cloud resources in a secure and transparent way. VCP also allows a precise coordination of the hypervisors' states (e.g., memory, disks) to ease the replication and migration process. The CloudNet architecture is composed of two major controllers, namely, the Cloud Manager and the Network Manager. The former comprises other architectural components, such as the Migration Optimizer and the Monitoring and Control Agents. The latter is responsible for VPN resource management. In the case of VM migration between geographically distributed data centers, CloudNet works as follows (cf. Fig. 1.15):
(i) It establishes connectivity between VCP endpoints.
(ii) It transfers the hypervisor's states (memory and disk).
(iii) It pauses the VM to transfer the processor state along with memory state updates.
The authors emphasize that disk state migration takes the majority of the overall VM migration time. This is because disk states are on the order of tens or hundreds of gigabytes, as compared to a few gigabytes for memory sizes. Once these phases are completed, network connections must be redirected. They deploy an efficient mechanism based on virtual private LAN service (VPLS) bridges. They also propose some optimizations to improve CloudNet performance, such as:
(i) Content-based redundancy (sketched below)
(ii) Using page or subpage block deltas
(iii) Smart stop and copy (SSC) algorithm
(iv) Synchronized arrivals (an extension of the SSC algorithm)
(v) Deployment on Xen's Dom-0
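Optimization (i) is easy to grasp with a sketch: hash fixed-size blocks of the disk or memory image and ship only blocks the remote site has not already stored. The block size and the in-memory cache below are my own illustrative choices, not CloudNet's actual design parameters.

```python
# Content-based redundancy sketch: send a block across the WAN only if its
# hash has not been seen; otherwise a short digest reference would suffice.
import hashlib

BLOCK = 4096  # bytes; an assumed block size

def blocks_to_send(image: bytes, seen: set) -> list:
    out = []
    for off in range(0, len(image), BLOCK):
        chunk = image[off:off + BLOCK]
        digest = hashlib.sha256(chunk).hexdigest()
        if digest not in seen:       # new content: the full block must cross
            out.append(chunk)
            seen.add(digest)
        # else: only the digest reference is transmitted (elided here)
    return out

seen = set()
disk_state = b"\x00" * 16384 + b"changed!" + b"\x00" * 16384
print(len(blocks_to_send(disk_state, seen)), "blocks on the first pass")
print(len(blocks_to_send(disk_state, seen)), "blocks when nothing changed")
```

Note how the zero-filled regions collapse to a single unique block, which is exactly why redundancy elimination pays off for large, sparse disk images.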
Fig. 1.14  Illustration of the virtual cloud pool (VCP) concept (Source: Wood et al. [21])
Fig. 1.15  Phases of resource migration in CloudNet (Source: Wood et al. [21])
The CloudNet implementation is based on Xen,7 the Distributed Replicated Block Device (DRBD),8 and off-the-shelf (OTS) routers that implement VPLS. The performance evaluation of CloudNet was mainly conducted in three interconnected data centers in the USA. The main goal was to understand the performance of different applications on top of CloudNet under realistic network conditions. They also performed some tests on a testbed in order to have more control of the network conditions (i.e., controlling link capacity and latency). As applications, they used a Java server benchmark (SPECjbb 2005), a development platform (Kernel Compile), and a web benchmark (TPC-W). Performance metrics include bandwidth utilization, total migration time, data sent, and application response time. Figure 1.16 shows an example of the benefit of deploying CloudNet as compared to Xen's default strategy.
Figure 1.17 depicts the performance of a TPC-W application under varying bandwidth conditions. It is clear that CloudNet is able to reduce the impact on migration time in low-bandwidth conditions. Also, the amount of data transmitted is significantly reduced for both TPC-W and SPECjbb, as Fig. 1.18 depicts.
It is worth emphasizing that tackling most research and development challenges in virtualized environments requires careful design of the system architecture to uncover the subtleties they bring. The authors of the abovementioned paper did an excellent job of realistically taking into account the most important performance factors (and their corresponding levels) that might prevent network managers and engineers from deploying live migration of applications in a WAN environment. The presented results are convincing enough to bring other researchers to conduct more advanced studies on the topic. All performance metrics were carefully selected to support the benefits of CloudNet.
7  http://www.xenproject.org/.
8  http://drbd.linbit.com/home/what-is-drbd/.
Fig. 1.16  Comparison of response time: Xen vs. CloudNet (Source: Wood et al. [21])
Fig. 1.17  Performance of TPC-W under varying bandwidth conditions (Xen vs. CloudNet) (Source: Wood et al. [21])
1.2.2.2  Software-Defined Networking
Software-defined networking (SDN) concepts and technologies have attracted a great deal of attention from both industry and academic communities [22]. Large-scale adoption and deployment are yet to come, mainly because the overall performance of the major SDN elements (i.e., the controller and its underlying software components) is not clearly understood. There are a number of research papers that address common performance issues in the SDN realm, such as the ability of the
SDN controller to deal with the arrival of new flows at a fast pace. There is some evidence of both good and poor performance in specific scenarios [23, 24], but in general, most experiments have been conducted over short time frames.
Fig. 1.18  Transmitted data (TPC-W and SPECjbb) (Source: Wood et al. [21])
For the particular case of benchmarking OPNFV switches, Tahhan, O'Mahony, and Morton [25] suggest at least 72 h of experiments for platform validation and for assessing the base performance for maximum forwarding rate and latency. Although the recommendation
is for NFV switches, it serves SDN environments well, as the experimenter would be able to clearly separate transient and steady-state performance. Along with studying the long-term performance of SDN controllers in search of software malfunctioning (e.g., software aging) [26], there is a need to understand the interactions of the software components in popular controllers in order to establish the ground truth.
In On the Performance of SDN Controllers: A Reality Check, Zhao, Iannone, and Riguidel [27] conducted controlled and extended experiments to deeply understand how SDN controllers should be selected and configured for deployment in real scenarios. They start by showing that the need for a comprehensive performance evaluation stems from the variety of system implementations. In general, each implementation of an SDN controller might perform well in a particular scenario, whereas it might suffer in a different setting. They focused on the most popular centralized SDN controller implementations, namely, Ryu [28], Pox [29], Nox [30], Floodlight [31], and Beacon [32]. Figure 1.19 depicts the basic testing setup for the experiments. They rely on a one-server setup with CBench9 playing the important role of emulating a configurable number of switches.
9  Cbench, https://github.com/andi-bigswitch/oflops/tree/master/cbench.
Fig. 1.19  Testbed setup for experiments with SDN controllers (Source: Zhao et al. [27])
Performance metrics were kept simple: latency, throughput, and fairness (in a slightly different context). Experiments were replicated several times in order to guarantee the statistical significance of the results. After discussing accuracy issues for latency measurements in Open vSwitch, the paper evaluates the impact of selected factors on the performance metrics, as follows:
1. Factor #1: the type of interpreter for Python-based controllers, with CPython and PyPy as levels.
2. Factor #2: the use of multiple threads along with the hyper-threading (HT) feature of the processor. The question here is whether enabling or disabling HT (the levels) impacts any performance metric. They also needed to evaluate whether multiple threads bring any performance advantage.
3. Factor #3: the number of switches, varying from 1 to 256.
4. Factor #4: the number of threads, from 1 to 7.
It is worth emphasizing that not all possible combinations of factors and levels were set for testing. The authors clearly explained the subset of levels for each scenario.
Let's check some of the results from this paper. For performance factor #1 (the type of the Python language interpreter), Tables 1.1 and 1.2 show that PyPy has clearly better performance for both the latency and throughput metrics.
Table 1.1  Impact of Python interpreter (latency – ms)
Controller | CPython | PyPy  | PyPy/CPython
Pox        | 0.156   | 0.042 | 3.75
Ryu        | 0.143   | 0.037 | 3.86
Source: Zhao et al. [27]
Table 1.2  Impact of Python interpreter (throughput – responses/ms)
Controller | CPython | PyPy | PyPy/CPython
Pox        | 11.2    | 105  | 9.38
Ryu        | 24.1    | 106  | 4.40
Source: Zhao et al. [27]
The impact of the number of network switches (factor #3) is evaluated for both the single- and multiple-thread cases (factor #2). Figure 1.20 presents the impact on the SDN-switch latency as the number of switches increases. Individual latencies rise from microseconds to milliseconds, depending on the type of controller. In this particular case, Beacon seems to suffer less performance degradation as the number of switches increases, whereas Ryu has the worst overall performance. There is no further investigation of what exactly causes this performance issue, although the authors suggest that the round-robin policy implemented in each controller might be the underlying cause.
There are a number of other important results and discussions in the paper, but one is of particular interest. Using a strategy similar to that in [26], the authors wanted to understand how SDN controllers perform in a heavy workload scenario.
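Since both this study and the OPNFV recommendation above stress replication and the separation of transient from steady-state behavior, here is a small sketch of the corresponding analysis step: trim a warm-up prefix from each replication, then report the mean with a 95% confidence interval. The warm-up fraction and the synthetic samples are assumptions for illustration.

```python
# Discard the initial transient of each replication, then compute a 95%
# confidence interval for the steady-state mean across replications.
import numpy as np
from scipy import stats

def steady_state_mean(samples, warmup_frac=0.1):
    start = int(len(samples) * warmup_frac)   # drop the warm-up prefix
    return float(np.mean(samples[start:]))

rng = np.random.default_rng(7)
# Ten replications of a latency experiment (ms); early samples are inflated.
reps = [np.r_[rng.normal(5, 1, 50), rng.normal(2, 0.3, 450)]
        for _ in range(10)]
means = np.array([steady_state_mean(r) for r in reps])

low, high = stats.t.interval(0.95, df=len(means) - 1,
                             loc=means.mean(), scale=stats.sem(means))
print(f"steady-state mean = {means.mean():.3f} ms, "
      f"95% CI = ({low:.3f}, {high:.3f}) ms")
```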
The experiments show results similar to those in [26]: under a heavy workload, latency (cf. Table 1.3) grows to thousands of times its light-workload value. Their concluding remarks from the experiments show that Beacon performed better than the other controllers in almost all tested scenarios, which is in agreement with previous studies.
We see that the authors carefully selected the most important performance factors and levels, bringing strong qualitative arguments to the table. As expected, the presented results are solid enough to bring other researchers to conduct more advanced studies on the topic, and they are also aligned with some of the previous studies.
Fig. 1.20  Impact on the SDN-switch latency as the number of switches increases (Source: Zhao et al. [27])
Table 1.3  Latency comparison (ms)
Controller | Empty buffer | Full buffer | Ratio
Pox        | 0.042        | 5.26        | 126
Ryu        | 0.039        | 2.63        | 68
Nox        | 0.018        | 149.00      | 8358
Floodlight | 0.022        | 76.90       | 3461
Beacon     | 0.016        | 50.00       | 3050
Source: Zhao et al. [27]
1.2.2.3  Network Functions Virtualization
The ever-growing interest in virtualization technologies has very recently brought a new player to the field. Network functions virtualization (NFV) can be seen as the latest paradigm that aims at virtualizing traditional network functions currently implemented on dedicated platforms [34]. Within the scope of NFV, virtual network functions (VNF) can be orchestrated (or chained, if you will), thus creating a new concept, namely, Service Function Chaining (SFC). This new paradigm, along with other virtualization concepts and technologies (e.g., SDN and virtual switches), promises to profoundly change the way networks are managed. The main concern of the telecommunications industry, as NFV's main target, is, of course, related to performance. In particular, as implementations of VNFs or SFCs will typically be deployed in VMs or containers, it is important to understand their performance overhead. It is clear that multiple instances of a VNF deployed on a virtual switch might have an impact on the overall SFC performance. Therefore, performance evaluation of NFV and its related technologies is one of the main concerns of network operators. It has also attracted the interest of standardization bodies, such as ETSI [33].
In Assessing the Performance of Virtualization Technologies for NFV: A Preliminary Benchmarking, Bonafiglia et al. [35] argued that while performance issues in traditional virtualization are mostly related to processing tasks, in an NFV network I/O is the main concern. They presented a preliminary benchmarking of SFC (as a chain of VNFs) using common virtualization technologies. First, they set up a controlled environment and implemented VNF chains deployed as virtual machines (using KVM) or containers (Linux Docker). Performance metrics were throughput and latency. All experiments were replicated to ensure the statistical significance of the results. For the factors, they simply evaluated the impact on end-to-end performance as the number of VNFs in the SFC increases. They also investigated whether the technology used to interconnect the VNFs in the virtual switch, namely, either Open vSwitch (OvS) or the Intel Data Plane Development Kit (DPDK)-based OvS, has any impact on the performance metrics. Figure 1.21 shows the network components for a VM-based implementation of NFV, whereas Fig. 1.22 shows a similar architecture in a container-based approach. The testbed for the experimental runs was configured as depicted in Fig. 1.23.
It is important to emphasize that they did not evaluate the impact of processing overheads for particular virtualized functions. In other words, they implemented a "dummy" function that simply forwards packets from one interface to another (a minimal sketch of such a forwarder appears after the figures below). A comprehensive performance evaluation might take the processing costs of the given functions (or classes of functions) into account. Figure 1.24 shows an example of the impact of the VNFs in a single chain on the end-to-end throughput, for both the VM- and container-based implementations in the virtual switch. Similarly, Fig. 1.25 shows the latency introduced by the VNFs implemented with different technologies (e.g., OvS vs. DPDK-based OvS) as the SFC length increases.
Fig. 1.21  Network components for a VM-based implementation of NFV (Source: Bonafiglia et al. [35])
Fig. 1.22  Container-based approach for NFV implementation (Source: Bonafiglia et al. [35])
Fig. 1.23  Testbed setup for testing NFV (Source: Bonafiglia et al. [35])
Fig. 1.24  An example of the impact of the VNFs in a single chain on the end-to-end throughput (Source: Bonafiglia et al. [35])
Fig. 1.25  An example of the impact of the VNFs in a single chain on the RTT (Source: Bonafiglia et al. [35])
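The "dummy" VNF mentioned above can be as small as a loop that copies frames between two interfaces. Here is a minimal sketch; it is Linux-only (AF_PACKET raw sockets), requires root privileges, and the interface names are my own hypothetical choices, not those of the paper's testbed.

```python
# A "dummy" forwarding VNF: receive a frame on one interface and send it,
# untouched, out another. No parsing or processing, so the measured cost is
# pure I/O overhead.
import socket

IN_IF, OUT_IF = "veth0", "veth1"    # hypothetical virtual interfaces
ETH_P_ALL = 0x0003                  # capture every Ethernet protocol

rx = socket.socket(socket.AF_PACKET, socket.SOCK_RAW, socket.htons(ETH_P_ALL))
tx = socket.socket(socket.AF_PACKET, socket.SOCK_RAW, socket.htons(ETH_P_ALL))
rx.bind((IN_IF, 0))
tx.bind((OUT_IF, 0))

while True:
    frame = rx.recv(65535)
    tx.send(frame)
```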
I agree with the authors when they state that this paper provides a preliminary benchmarking of VNF chains. I just want to highlight that, from the point of view of performance evaluation, they quickly realized the importance of having some initial results to fill the gap in this topic. Sometimes you don't need an extensive performance evaluation to have a good understanding of the factors that impact a system's performance. An initial and correct (yet simple) set of hypotheses and research questions might be enough to carry out a set of experiments. Validation of hypotheses does not need to be fancy.
1.3  The Pillars of Performance Evaluation of Networking and Communication Systems
When dealing with performance evaluation, we need to decide among experimentation, simulation, emulation, and modeling. Collecting samples properly from the experiments (a.k.a. measurements) is a particular aspect since it gives support to all performance evaluation methods. If you had to choose among these options, which one(s) would you prefer? There is no precise scientific approach to help researchers decide on their strategies. The best ones should be analyzed on a case-by-case basis, depending on the availability of resources as well as one's ability to deal with a particular strategy. Let's say one needs to evaluate whether a certain transport-layer network protocol has better overall performance than a traditional one (e.g., TCP SACK vs. TCP Cubic). A number of decisions should be made for a proper performance evaluation and analysis, but the first step is always deciding on the adequate approach. Should one select real experimentation or simulation? Should one try to develop analytical models for both protocols? Is emulation possible for the envisaged scenarios? Would results from a single strategy be convincing enough? These are the types of questions we would like to pose, along with presenting possible avenues to answer them.
1.3.1  Experimentation/Prototyping, Simulation/Emulation, and Modeling
It is common sense that, when possible, the best general approach is to work with at least two strategies [1]. The limitations of an individual approach are somewhat overcome when you choose and work with at least two different ones. This strategy also helps minimize criticism of your experimental work (e.g., when it is submitted to peer review or to dissertation/thesis committees). It is clear that real implementation approaches generally suffer from scalability limitations. For instance, it is costly to deploy large-scale scenarios to experiment with either ad hoc sensor networks or the Internet of things (IoT). Even if one has hundreds of devices deployed, there is always the criticism that the validation would be limited at a larger scale (i.e., with thousands or millions of devices). Therefore, one strategy can give support to the other, when properly argued. In this example of performance evaluation of transport protocols, one can choose some pairs, such as experimentation-simulation, simulation-modeling, experimentation-modeling, and the like.
For example, experimentation can validate a certain mechanism the experimenters are proposing, whereas simulations could demonstrate its effectiveness at a large scale, and analytical modeling could provide a general understanding of the behavior of the phenomenon under investigation.
1.3.1.1  Network Experimentation and Prototyping
Engineers are more inclined to see real implementations of network protocols running on devices or OSes. Simulations might give them a first impression on how to proceed further, but in the end, they need to implement and test their ideas in real environments. Academics are more flexible and might rely on any of the performance evaluation approaches, as long as they yield meaningful results that provide evidence of the scientific contributions of their work. In any case, doing experimental work is a tough decision for either group of experimenters since it generally requires analysis of cost, scalability, time to completion, the learning curve for the particular environment, and the like.
Experimental work might involve a number of preliminary tests to make sure the experimenter will be working in a suitable environment. There are a number of components in the protocol stack, and their corresponding implementations might have an impact on the performance of the given mechanism under analysis. Let's say one wants to develop and test a new architecture for a network traffic generator [36, 37]. Even if the theoretical or simulation analysis shows that it has outstanding performance as compared to the results found in the literature, the real implementation of the packet-handling software layers in common OSes will have a profound impact on the results. Deploying such an architecture on top of libpcap10 or PF_RING11 will give different results.
For dealing with large-scale experimentation, there are some experimentation platforms that help researchers overcome the scalability issues of real prototyping. PlanetLab (PL) [46] was one of the first worldwide experimental platforms available to researchers in the computer networking field. Such platforms are also known as Slice-based Federation Architectures (SFA). Recent SFA platforms include Geni12 and OneLab.13 SFAs mostly have three key elements, namely, components, slivers, and slices. A component is the atomic block of the SFA and can come in the form of an end host or a router. Components offer virtualized resources that can be grouped into aggregates. A slice is a platform-wide set of computer and network resources (a.k.a. slivers) ready to run an experiment. Figure 1.26 illustrates the essential concepts of component, resource, sliver, and slice in an SFA-based platform.
Fig. 1.26  Building blocks of an SFA-based platform
10  Libpcap – www.tcpdump.org/.
11  PF_RING – www.ntop.org.
12  http://www.geni.net/.
13  https://onelab.eu.
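For readers who think better in code, the SFA vocabulary maps naturally onto a small data model. The field names below are my own illustrative choices, not the actual SFA schema.

```python
# Hypothetical data model for the SFA concepts of Fig. 1.26.
from dataclasses import dataclass, field

@dataclass
class Component:                 # atomic block: an end host or a router
    hostname: str
    resources: dict = field(default_factory=dict)   # e.g., {"vcpu": 8}

@dataclass
class Sliver:                    # a virtualized share of one component
    component: Component
    share: dict                  # e.g., {"vcpu": 2}

@dataclass
class Slice:                     # platform-wide set of slivers for an experiment
    name: str
    slivers: list = field(default_factory=list)

node = Component("node1.example.org", {"vcpu": 8, "mem_gb": 16})
experiment = Slice("tcp-study", [Sliver(node, {"vcpu": 2, "mem_gb": 4})])
print(experiment.name, "uses", len(experiment.slivers), "sliver(s)")
```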
1.3.1.2  Network Simulation and Emulation
Network simulation is an inexpensive and reliable way to develop and test ideas for those problems where there is no need to rely on either analytical models or experimental approaches. Several network simulation environments have been tested and validated by the research community over the last decades, and most simulation engines are efficient enough to provide fast results even for large-scale scenarios. Barcellos et al. [49] discuss the pros and cons of conducting performance evaluation through simulation and real prototyping.
Simulation engines are usually based on discrete-event approaches. For instance, ns-2 [38] and ns-3 [39], OMNET++ [40], OPNET [41], and EstiNet [42] have their simulation cores based on discrete-event methods. Specific network simulation environments, such as Mininet [43], Artery [44], and CloudSim [45], follow suit.
Emulation is the process of imitating the outside behavior of a certain networked application, protocol, or even an entire network. I come back to this in more detail in Chap. 4. Performance evaluation that includes emulated components helps the experimenter deploy more realistic scenarios that simulation, experimental evaluation, or analytical modeling alone cannot capture. Emulation fits particularly well in scenarios where the experimenter needs to understand the behavior of real networked applications in controlled networking environments. Of course, one can control network parameterization in real environments, but in general, large-scale experimental platforms are not available to all. When the performance of a certain system in the wild (i.e., on the Internet)
is known, but only at the big-picture level, the researcher/engineer might want to understand its behavior under specific network conditions. Recall that, roughly 10–15 years ago, the performance of VoIP applications on the Internet was not clear. Different CODECs and other system eccentricities would yield different qualities of experience (QoE) as perceived by the user. Therefore, if one needed an in-depth view of such VoIP applications under certain conditions (i.e., restricted available bandwidth, limited latency, or packet loss), resorting to emulation would be the answer. In modern scenarios, there are a number of studies trying to understand the behavior of video applications in controlled environments [47, 48]. NetEm [50] is a network emulator widely used in the networking research community. It has the flexibility to change a number of parameters, thus giving the user the possibility to mimic a number of large networking scenarios without the associated costs. Figure 1.27 depicts a typical usage of NetEm in emulation-based experiments.
Fig. 1.27  A typical usage of NetEm in emulation-based experiments
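In practice, driving NetEm is a couple of tc invocations. The sketch below wraps them in Python; the device name and the impairment levels are my own assumptions, and it requires root on a Linux host with iproute2 installed.

```python
# Impose delay, jitter, and loss on an egress interface via tc/netem, run
# the application under test, then remove the qdisc.
import subprocess

DEV = "eth0"  # hypothetical egress interface

def set_netem(delay_ms=50, jitter_ms=10, loss_pct=0.5):
    subprocess.run(["tc", "qdisc", "replace", "dev", DEV, "root", "netem",
                    "delay", f"{delay_ms}ms", f"{jitter_ms}ms",
                    "loss", f"{loss_pct}%"], check=True)

def clear_netem():
    subprocess.run(["tc", "qdisc", "del", "dev", DEV, "root"], check=True)

set_netem()        # emulate a 50 ms +/- 10 ms path with 0.5% loss
# ... run the VoIP or video experiment here ...
clear_netem()
```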
1.3.1.3  Analytical Modeling
All models are wrong, but some are useful. (George Box)
The quote in this subsection comes from the well-known statistician George Box. His statement is profound. Due to the need for an abstract view of the target problem, and to minimize complexity, analytical models sometimes suffer from limited usage in practical situations. But some are useful!
A formal definition of modeling close to this book's context is a system of postulates, data, and inferences presented as a mathematical description of an entity or state of affairs.14 It is very true that analytical models have limitations, and they are somewhat hard to develop since they mostly require a strong mathematical background. It is generally much easier to come up with an idea for a protocol without any profound mathematical analysis. We have seen a number of examples in the computer networking field where formal mathematical models only came after a certain protocol or service had been adopted and used for a long time. One clear case is some of the mechanisms behind TCP. It took years of research studies to provide evidence that the congestion control mechanisms of TCP would be fair in a number of scenarios, although several TCP flavors had been active and running on the Internet for quite some time [51].
Network engineers have developed models for the purposes of planning, designing, dimensioning, performance forecasting, and the like. Models can be used to get an overview of general behavior or to evaluate detailed mechanisms or architectural components. Analytical modeling does not need to be complex. One can simply perform a model fit to a particular probability distribution function or a mixture of them. Alternatively, one can look at user behavior and try to derive simple models from it to build traffic generators [52]. There are indeed a number of analytical models available to meet most requirements of network engineers and researchers in terms of practical models. They come not only in the form of research papers but also as books [53, 54] and book chapters [55]. Therefore, it is almost impossible to cover the body of knowledge on analytical models in computer networking in a single book.
In order to give you a glimpse of what exactly a model looks like, I am presenting some relevant examples of analytical models derived for different layers of the TCP/IP reference model. The goal here is to highlight the potential of developing powerful mathematical models that can serve as the basis for advanced performance evaluation. It is worth recalling that a good performance evaluation plan should include at least two strategies. Therefore, for researchers more inclined to mathematical and statistical studies, developing analytical models would be the first step, which could later be validated by simulation or experimental work.
14  http://www.merriam-webster.com/dictionary/modelling/.
Video Game Characterization
Suppose you need to design a mechanism for improving the performance of online users of a particular video game, and you come up with the idea of a transport/network cross-layer approach. Also, due to budget constraints, you are not able to perform large-scale measurements in real environments. You are now restricted to simulation environments, where you can quickly deploy your strategy. One important question here is: how can I generate synthetic traffic that mimics the application-level behavior of the game? Modeling application-level systems for use in simulation environments is an active area, since new applications come into play very frequently. Let's have a look at one recent example of modeling for game traffic generation. In [56], Cricenti and Branch proposed a skewed mixture distribution for modeling the packet payload lengths of some well-known first-person shooter (FPS) games. They argued that a combination of different PDFs could lead to more precise models. They proposed the use of the ex-Gaussian distribution (also known as the exponentially modified Gaussian distribution – EMG) as a model for FPS traffic. They showed, through empirical validation, that the ex-Gaussian distribution is able to capture the underlying process of an FPS player well. Also, they discussed how the model would be useful for building efficient traffic generators. The ex-Gaussian PDF has the following representation:

f(x; \mu, \sigma, \lambda) = \lambda \exp\left(\lambda(\mu - x) + \frac{\lambda^2 \sigma^2}{2}\right) \Phi\left(\frac{x - \mu}{\sigma} - \lambda\sigma\right),

where \Phi is the Gaussian distribution function, \mu and \sigma are the mean and standard deviation of the Gaussian component, and \lambda is the rate of the exponential component. Figure 1.28 shows one result of the validation of this model, in a scenario with four players using Counter-Strike. One can see that the model predicts the packet size distribution well when compared to the empirical data.
You can find traffic models like this one for virtually any type of Internet application. Many of them are implemented in network simulation environments (e.g., ns-2, ns-3, OMNET++, OPNET).
Fig. 1.28  Analytical model validation for packet size (Counter-Strike) (Source: Cricenti and Branch [56])
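Sampling from this model is trivial, since an ex-Gaussian variate is just the sum of a Gaussian and an exponential variate. The parameter values below are made-up illustrations, not fitted values from [56].

```python
# Generate synthetic FPS packet payload lengths from an ex-Gaussian (EMG)
# model: Normal(mu, sigma) plus Exponential(rate=lam).
import numpy as np

def ex_gaussian(mu, sigma, lam, n, rng=np.random.default_rng()):
    return rng.normal(mu, sigma, n) + rng.exponential(1.0 / lam, n)

# Hypothetical parameters; clip to plausible payload sizes in bytes.
sizes = np.clip(ex_gaussian(mu=60, sigma=8, lam=0.05, n=10_000), 1, 1500)
print(f"mean = {sizes.mean():.1f} B, 99th percentile = "
      f"{np.percentile(sizes, 99):.0f} B")
```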
TCP Throughput
As 95% of the Internet traffic is carried by TCP, it is obvious that its characterization (i.e., modeling) has gained attention in the last decades [51]. One of the most important studies on modeling TCP was developed by Padhye, Firoiu, Towsley, and Kurose [57]. Their goal was to develop an analytic characterization of the steady-state throughput of a bulk-transfer TCP flow as a function of loss rate and round-trip time. By bulk transfer, they mean a long-lived TCP flow. This is one of the most cited papers in the TCP throughput modeling context. After seven pages discussing and providing detailed information on how to build a precise model, they proposed the approximation model in a single equation, as follows:

B(p) \approx \min\left(\frac{W_{\max}}{RTT},\; \frac{1}{RTT\sqrt{\frac{2bp}{3}} + T_0 \min\left(1,\, 3\sqrt{\frac{3bp}{8}}\right) p \left(1 + 32p^2\right)}\right),

where B(p) is the TCP throughput, Wmax is the maximum TCP congestion window size, b is the number of packets acknowledged by a received ACK, p is the estimated packet loss probability, T0 is the time-out period, and RTT is the round-trip time of the end-to-end connection. Padhye's model is a simple yet very effective TCP model that has passed the test of time.
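The formula is also a two-liner in code, which makes it handy as a sanity check against measured throughput. The sketch below is a direct transcription of the equation above; the input values are examples, not measurements.

```python
# Padhye et al.'s approximation for steady-state TCP throughput, in
# packets per second.
from math import sqrt

def tcp_throughput(p, rtt, wmax, b=2, t0=1.0):
    if p <= 0:
        return wmax / rtt                     # no loss: window-limited
    denom = rtt * sqrt(2 * b * p / 3) + \
        t0 * min(1.0, 3 * sqrt(3 * b * p / 8)) * p * (1 + 32 * p ** 2)
    return min(wmax / rtt, 1.0 / denom)

# Example: 1% loss, 100 ms RTT, Wmax = 42 packets, delayed ACKs (b = 2).
print(f"{tcp_throughput(p=0.01, rtt=0.1, wmax=42):.0f} packets/s")
```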
Recently, Loiseau et al. [58] proposed a new model for TCP throughput, following the steps of Padhye's approach but with different goals. They argue that Padhye's model might be of limited use since it is not able to capture TCP's throughput fluctuations at smaller time scales. Therefore, they aimed at characterizing the variations of TCP throughput around the mean and at smaller time scales to provide a complementary model to Padhye's seminal work. The proposed model is based on classical Markov chain theory, where states and transitions are mapped to TCP's features, such as the congestion window size, the AIMD mechanism, the packet loss process, and the like. Their main contribution relies on a new method to describe deviations of TCP's throughput around the almost-sure mean. As it is a somewhat more complex model, I encourage the interested reader to see its details in Section 3 of their 2010 paper [58].
Aggregate Background Traffic
All network simulations need some kind of traffic between end systems, right? If you are investigating the performance of a particular new mechanism (let's say HTTP/2.0), you just set up the traffic sources and sinks (either in a client-server or a peer-to-peer configuration), start generating simulated or experimental traffic, and collect the measurement results. But how about the noisy background traffic? The traffic that you know will be there in real networks! On the real Internet, there is a great deal of uncontrolled traffic (e.g., cross-traffic) that might have a severe effect on the performance of the system under test. Sometimes you just need a single TCP or UDP traffic stream competing for resources (e.g., bandwidth) in the background. If you want more precise models to represent noisy background traffic, you need to consider the self-similar (or long-range dependent) nature of actual Internet traffic. Research on the fractal nature of Internet traffic gained much attention for more than a decade, between the early 1990s and the mid-2000s. I refer the interested reader to the seminal work of Leland et al. [59] and the detailed treatment in the book [60]. For the purpose of this section, I just need to present a model (the simpler, the better) for background traffic that captures the essence of fractal behavior on the Internet.
In the paper Self-Similarity Through High-Variability: Statistical Analysis of Ethernet LAN Traffic at the Source Level, Willinger et al. [61] provide some explanations for the occurrence of fractal traffic in local networks. They showed that a superposition of ON-OFF traffic sources yields self-similar traffic. ON-OFF sources generate packet trains. These very important findings paved the way to building an efficient (i.e., precise and parsimonious) and realistic model for synthetic traffic generation.
Here are some basic concepts first. An ON-OFF traffic source generates traffic by alternating between ON periods (i.e., sending traffic) and OFF periods (i.e., silent), where the durations of both periods are independent and identically distributed (i.i.d.). Also, the sequences of ON and OFF periods are independent of each other. One important aspect of Willinger's model is the use of an infinite-variance distribution, which is well represented by heavy-tailed distributions such as the Pareto. It is worth emphasizing that previous models assumed finite-variance distributions, such as an exponential PDF. In a nutshell, modeling self-similar traffic with ON-OFF models is straightforward: a superposition of n ON-OFF sources whose periods follow a heavy-tailed distribution yields self-similar traffic. There are basically a few parameters to consider, namely, the number of sources, n, and the Hurst parameter, which characterizes the long tail of the PDF of each individual source. The Hurst parameter, H, is a number between 0.5 and 1 and is well known as the self-similarity index. The closer H is to 1, the more self-similar the traffic is. For n, simulated results indicate that 20 sources would be enough [62].
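Putting those ingredients together, here is a minimal background traffic generator. The mapping alpha = 3 - 2H between the Pareto shape and the Hurst parameter is the standard one for heavy-tailed ON-OFF models; the rate and slot granularity are my own illustrative choices.

```python
# Superposition of n Pareto ON-OFF sources: each source alternates
# heavy-tailed ON and OFF periods; the aggregate per-slot packet count
# exhibits the bursty, self-similar behavior discussed above.
import numpy as np

def on_off_aggregate(n_sources=20, hurst=0.8, slots=10_000, rate=10,
                     rng=np.random.default_rng(1)):
    alpha = 3 - 2 * hurst                 # Pareto shape from H
    agg = np.zeros(slots)
    for _ in range(n_sources):
        t, on = 0, rng.random() < 0.5     # random initial state
        while t < slots:
            period = int(rng.pareto(alpha) + 1)   # heavy-tailed duration
            if on:
                agg[t:t + period] += rate         # packets per slot while ON
            t += period
            on = not on
    return agg

series = on_off_aggregate()
print(f"mean = {series.mean():.1f}, peak = {series.max():.0f} packets/slot")
```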