This document provides an overview and introduction to performance evaluation of computer networks. It begins with examples of performance evaluation in classical scenarios like queuing theory and modern scenarios involving network virtualization, software-defined networking, and network functions virtualization. It then discusses the key pillars of performance evaluation, including experimentation/prototyping, simulation/emulation, and analytical modeling, as well as supporting measurement strategies.
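As a minimal illustration of the classical queueing-theory scenario mentioned above, the following Python sketch computes the textbook steady-state metrics of an M/M/1 queue. This is an illustrative assumption on my part, not code or notation taken from the book itself; the function name and parameters are hypothetical.

# A minimal sketch, assuming the textbook M/M/1 formulas (lam = arrival rate,
# mu = service rate); an illustration only, not code from the book.

def mm1_metrics(lam: float, mu: float) -> dict:
    """Steady-state metrics of an M/M/1 queue; requires lam < mu for stability."""
    if lam >= mu:
        raise ValueError("Unstable queue: arrival rate must be below service rate.")
    rho = lam / mu                # server utilization
    L = rho / (1.0 - rho)         # mean number of customers in the system
    W = 1.0 / (mu - lam)          # mean time in system (Little's law: L = lam * W)
    Lq = rho * rho / (1.0 - rho)  # mean number waiting in the queue
    Wq = rho / (mu - lam)         # mean waiting time in the queue
    return {"utilization": rho, "L": L, "W": W, "Lq": Lq, "Wq": Wq}

# Example: packets arrive at 80 per second on a link that serves 100 per second.
print(mm1_metrics(lam=80.0, mu=100.0))
# -> utilization 0.8, L = 4 packets, W = 50 ms, Lq = 3.2, Wq = 40 ms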
Foreword
The advancement of computer networks in recent times has been remarkable. The processes that were commonly used to facilitate the operations of these networks have quickly become obsolete, giving way to quicker and better technology and ultimately better computer networks. The increased use of better virtualization technologies has brought about a collaborative effort to improve the operations of computer network systems, the services rendered, and other important protocols. The result is impressive, meeting the expectations of consumers in terms of reliability and speed. The changes experienced in network virtualization (NV), software-defined networking (SDN), network functions virtualization (NFV), and other similar fields have made it important to focus more on the means through which actual performance can be determined and evaluated for newer innovations. These performance evaluations are crucial to service providers so that they can design and plan their future networks.
I can confirm the importance of this book to the entire computer networking
research and industrial community. I have played an active role in the research com-
munity of network monitoring and measurement since 2000. I have made useful
contributions in the fields of traffic analysis and modeling, traffic classification,
traffic generation, performance monitoring, network security, and also cloud and
SDN monitoring. I am a full professor at the University of Naples Federico II, where
I lecture students on computer networks and analysis of Internet performance. I
have coauthored over 180 research papers published in international journals (e.g., IEEE/ACM Transactions on Networking, Communications of the ACM, IEEE TPDS, IEEE TNSM, and Computer Networks) and conferences (e.g., SIGCOMM, NSDI, Infocom, IMC, PAM, Globecom, and ICC). I have been honored with a Google Faculty Award, several best paper awards, Microsoft and Amazon research grants, and two IRTF (Internet Research Task Force) ANRP (Applied Networking Research Prize) awards.
Prof. Stênio Fernandes has a solid record of research publications in the field of computer communications and networks. He has published over 120 research papers in a number of international peer-reviewed conferences and journals. His research interests cover the crucial aspects of performance evaluation of network and communication systems and Internet traffic measurement, modeling, and analysis. I can affirm that this book is a reflection of his experiences with academic and industrial research projects related to performance evaluation of computer networks. In this book, Prof. Stênio Fernandes gives a comprehensive
perspective of the methods that are used to accurately evaluate the performance of modern computer networks. The crucial and advanced features of performance evaluation techniques are clearly explained in a way that helps the reader understand how to conduct the right evaluation plans. Drawing excerpts from the scientific literature, the book addresses the most relevant aspects of experimentation, simulation, and analytical modeling of modern networks. Readers will gain a better understanding of applied statistics in computer networking and of how theory and best practices in the field intersect. The book also identifies the current challenges that industrial and academic researchers face in their work, as well as the potential for better innovations in this field.
Antonio Pescapè
University of Naples Federico II, Naples, Italy
Acknowledgments
I have nursed the dream of writing this book for a very long time. My position as a member of technical program committees serving a large number of important scientific conferences in the computer networking field has given me the opportunity to witness in wonder the number of excellently written papers that have been rejected due to lapses and a lack of rigor in their performance evaluation and analysis. It is common to see authors come up with brilliant ideas but fail to scientifically prove the validity of these ideas. A poor performance evaluation will cast doubt on any paper's claims about its contributions and relevance to the field.
The case is the same for scientific journals; I have been privileged to act as a referee
for many important journals in the field of computer networks and communications.
Going through the exhibition area during a scientific conference, I met Susan
Lagerstrom-Fife, an editor (Computer Science) at Springer, USA. After the usual
pleasantries, I asked her about the requirements needed to write a book for Springer.
I got some useful information and took action, and I can happily say that this book
is the result of that productive conversation. I would like to thank Susan and her
assistant Jennifer Malat for guiding me along this long road.
Writing this book was an interesting and difficult experience. I often experienced what writers call "writer's block." Now I know how real it is, and I can confirm that it is not a very happy experience. I was able to overcome this challenge by reading good books on focus and productivity. I owe a lot of my success in overcoming this challenge to Barbara Oakley, whose course "Learning How to Learn" on Coursera played a vital role in helping me develop my mind and sharpen my skills at a higher level. I was very happy to have the opportunity to thank her in person when she came to give a talk at Carleton University in Ottawa, Canada, in May 2016. I will never stop expressing my sincere gratitude to her for putting out all that useful information for free.
Communicating your ideas to a diverse audience is not a very easy task. Writing a book chapter that entails reviews of essential statistics concepts was difficult to organize and deliver. I would like to thank Alexey Medvedev, who has a PhD in mathematics (2016) from Central European University, for assessing all the equations and mathematical concepts in that chapter.
I would also like to thank all my colleagues from universities around the world, most especially from the Universidade Federal de Pernambuco (Brazil), the University of Ottawa (Canada), and Carleton University (Canada), for the encouragement and kind support that helped me finish this book. I send special thanks to my former supervisor Professor Ahmed Karmouch (University of Ottawa) and my colleague Professor Gabriel Wainer (Carleton University). I would also like to extend my sincere gratitude to the many network engineers I met at the Internet Engineering Task Force meetings between 2014 and 2017; I would like to thank them for the support and tips they offered me while I was writing this book.
Finally, I would like to thank my family and friends for showing their concern with the regular question, "How's the book writing going?" Many of the challenges I faced while writing this book sometimes made me unavailable and impatient. I promise to catch up with you all over coffee, wine, music concerts, and physical activities. This book would not have been possible without the love, support, and appreciation of my work expressed by my wife Nina and my children Victor and Alice. I also wish to dedicate this book to my mother Penha and my father (in memoriam) Fernando.
Contents
1 Principles of Performance Evaluation of Computer Networks
1.1 Motivation: Why Do We Need to Assess the Performance of Computer Networks?
1.2 Classical and Modern Scenarios: Examples from Research Papers
1.2.1 Performance Evaluation in Classical Scenarios
1.2.2 Performance Evaluation in Modern Scenarios
1.3 The Pillars of Performance Evaluation of Networking and Communication Systems
1.3.1 Experimentation/Prototyping, Simulation/Emulation, and Modeling
1.3.2 Supporting Strategies: Measurements
References
2 Methods and Techniques for Measurements in the Internet
2.1 Passive vs. Active vs. Hybrid Measurements
2.2 Traffic Measurements: Packets, Flow Records, and Aggregated Data
2.3 Sampling Techniques for Network Management
2.4 Internet Topology: Measurements, Modeling, and Analysis
2.4.1 Internet Topology Resolution
2.4.2 Internet Topology Discovery: Tools, Techniques, and Datasets
2.5 Challenges for Traffic Measurements and Analyses in Virtual Environments
2.5.1 Cloud Computing Environments
2.5.2 Virtualization at Network Level
2.6 Bandwidth Estimation Methods
References
3 A Primer on Applied Statistics in Computer Networking
3.1 Statistics and Computational Statistics
3.2 I'm All About That Data
3.3 Essential Concepts and Terminology
3.4 Descriptive Statistics
3.4.1 I Mean It (Or Measures of Centrality)
3.4.2 This Is Dull (Or Measures of Dispersion)
3.4.3 Is It Paranormally Distributed? (Or Measures of Asymmetry and Tailedness)
3.5 Inferential Statistics
3.5.1 Parameter Estimation: Point vs. Interval
3.5.2 Estimators and Estimation Methods
3.6 The Heavy-Tailed Phenomenon
3.6.1 Outlier Detection
3.6.2 Heavy-Tailed Distributions and Their Variations (Subclasses)
3.6.3 Evidence of Heavy-Tailedness in Computer Networks
References
4 Internet Traffic Profiling
4.1 Traffic Analysis
4.1.1 Identification and Classification
4.1.2 Techniques, Tools, and Systems for Traffic Profiling
4.2 Industrial Approach for Traffic Profiling: Products and Services
4.3 Traffic Models in Practice
4.3.1 Workload Generators
4.4 Simulation and Emulation
4.4.1 Discrete-Event Simulation and Network Simulation Environments
4.4.2 Practical Use of Network Simulators and Traffic Profiles
References
5 Designing and Executing Experimental Plans
5.1 Designing Performance Evaluation Plans: Fundamentals
5.2 Design of Experiments (DoE)
5.2.1 The DoE Jargon
5.2.2 To Replicate or to Slice?
5.3 DoE Options: Choosing a Proper Design
5.3.1 Classification of DoE Methods
5.3.2 Notation
5.4 Experimental Designs
5.4.1 2^k Factorial Designs (a.k.a. Coarse Grids)
5.4.2 2^(k−p) Fractional Factorial Designs
5.4.3 m^k Factorial Designs (a.k.a. Finer Grids)
5.5 Test, Validation, Analysis, and Interpretation of DoE Results
5.6 DoEs: Some Pitfalls and Caveats
5.7 DoE in Computer Networking Problems
5.7.1 General Guidelines
5.7.2 Hands-On
References
Do we need to repeat the experiments? If so, how many times? Is sampling
acceptable for the given case? Do we need to derive an analytical model from the
measurements? Can we use such a derived analytical model to predict the behavior
of the networked system? If so, how far in the future? Will this performance analysis
be based on experimentation or simulation? Why? Which one is better for the given
scenario? Should we consider active measurements? Are passive measurements suf-
ficient? You’ve got the point.
It is straightforward to see that an accurate performance evaluation of any computing system must be carefully designed and undertaken. For a general performance evaluation of computer systems, Raj Jain's classic The Art of Computer Systems Performance Analysis: Techniques for Experimental Design, Measurement, Simulation, and Modeling [1] has been the main source of information for the last two decades. If you are more mathematically inclined, Yves Le Boudec's book [2] would add value to your performance evaluation design and analyses. In this book, we stay in the middle, balancing a more pragmatic approach (from the academia and industry points of view) with solid statistical ground.
1.2 Classical and Modern Scenarios: Examples from Research Papers
In this section, we give some examples of well-conducted performance evaluation studies in the computer networking field. By classical (traditional) we mean topics that were deeply explored in the past and currently attract little attention from researchers. This is the case when there is almost no room left for improvement or for new contributions to the topic. For instance, congestion control mechanisms and protocols were deeply explored in the past. Thousands of research papers have already been published, although there are still some new challenging scenarios that need further investigation. Similarly, peer-to-peer networking protocols and mechanisms gained lots of attention in the past decades. Both can surely be considered traditional or classical scenarios. Modern (or advanced) scenarios would include studies with a strong paradigm shift in networking, such as the ones related to network virtualization (NV), cloud computing, software-defined networking (SDN), network functions virtualization (NFV) and service function chaining (SFC), and the like. The following research papers highlight the importance of a good performance evaluation design as well as of the results presentation. We bring the reader examples from top conferences and journals and show the possible rationale the authors used to come up with sound performance analyses. We hope these examples will make it clear that the chances of publishing in good scientific conferences and journals are higher when the paper has a sound and convincing performance evaluation. Network engineers and designers will also benefit from these examples, since they might be required to provide a performance analysis of the networks they manage.
1.2.1 Performance Evaluation in Classical Scenarios
1.2.1.1 Application Layer
In this subsection, we present a couple of examples of well-designed performance evaluation plans for an application-layer protocol. We also present in detail how the experiments were conducted, along with some selected results, to highlight performance metrics, parameterization (factors and levels), and precise forms of results presentation.
In the paper Can SPDY Really Make the Web Faster? [3], Elkhatib, Tyson, and Welzl provide a thorough experimental performance evaluation of a recent protocol called SPDY [4], which served as the starting point for the development of HTTP/2.0 [5]. The whole Internet community (e.g., users, developers, researchers) had been calling for improvements in the web surfing experience, due to the ever-increasing development of new services and applications that require timely communications (i.e., low latencies). The authors started the paper by arguing that the ever-increasing complexity of web pages is likely to affect their retrieval times. Some users are more tolerant of delay than others, especially the ones engaged in online shopping. The fundamental research question the authors were trying to answer was whether the proposed new version of the HTTP protocol is a leap-forward technology or just "yet another protocol" with small performance improvements. In their words, does it offer a fundamental improvement or just further tweaking? Additional questions were raised in the paper as their experiments showed some network parameterizations severely affecting the protocol's behavior and performance. There were several essential arguments for conducting such a study, namely: (i) previous work only gave a shallow understanding of SPDY performance, and (ii) the only conclusions so far were that SPDY performance is highly dependent on several factors and is highly variable. Therefore, in order to gain in-depth knowledge of SPDY performance, they conducted experiments in real uncontrolled environments, i.e., SPDY client software (e.g., Chromium, http://www.chromium.org/) and real deployed servers, such as YouTube and Twitter, as well as in controlled environments, using open-source released versions of SPDY servers (e.g., Apache with the mod_spdy module installed, https://code.google.com/archive/p/mod-spdy/). Details of the experiments can be found in the original paper [3]. As a complementary evaluation, in the paper Performance Analysis of SPDY Protocol in Wired and Mobile Networks [77], the authors evaluated SPDY performance in several wireless environments: in 3G, WiBro, and WLAN networks, as well as in different web browsers, namely, Chromium-based and Firefox. SPDY performance varies with some factors, such as the network access technology (e.g., 3G, 802.11).
Both experiment design rationales highlight the importance of making the right decisions to get sound and meaningful results for further analysis. First, in [3], the authors selected an appropriate set of measurement tools to make further analysis easier. Then, they discussed the adequate performance metric and suggested a new one, time of wire (ToW), captured at the network level to avoid including web browser processing times. In other
words, the ToW performance metric means the time between the departure of the
first HTTP request and the arrival of the last packet from the web server. Second, in
the measurement methodology, the authors decided to separate the experiments into
two classes, namely, wild and controlled. When dealing with real protocols, it is
important to evaluate how they would behave in the actual environment where the
experimenter would not be able to control most of its parameters. However, such
limitations impose severe restrictions on the concluding remarks that can be drawn,
since a number of assumptions might be wrong. In this particular case, it is clear to
see that the controlled set of experiments was needed since variations in network conditions (e.g., server load, path available bandwidth, path delay, and packet loss ratios) could not be precisely monitored. In the Live Tests experiments, the authors sampled the most accessed websites, selecting the most representative ones (i.e., the top eight websites from Alexa, www.alexa.com). They also collected enough samples to ensure statistical significance of the results. For instance, they collected over one million HTTP GET requests from one site over 3 days. In the Controlled Tests, they deployed the usual Linux network emulator (NetEm) and Linux traffic control (tc) in a high-speed local network environment. Both tools were used to control delays and packet loss ratios, as well as to shape the sending rate, thus mimicking control of the available bandwidth. There are some comments on the use of sampling strategies, but no further details were given. In [77], the authors used the usual page download time.
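To give a flavor of how such a controlled setup can be scripted, here is a minimal sketch that drives tc/NetEm from Python via subprocess. The interface name and parameter values are our placeholders, not the settings used in [3], and NetEm's rate option requires a reasonably recent kernel:

```python
import subprocess

def run(cmd: str) -> None:
    """Run a tc command, raising on failure (requires root privileges)."""
    subprocess.run(cmd.split(), check=True)

def set_conditions(iface: str, delay_ms: int, loss_pct: float, rate_mbit: int) -> None:
    """Emulate a WAN path on a LAN link: fixed one-way delay, random
    packet loss, and a rate cap (NetEm 'rate' needs kernel >= 3.3)."""
    run(f"tc qdisc replace dev {iface} root netem "
        f"delay {delay_ms}ms loss {loss_pct}% rate {rate_mbit}mbit")

def clear_conditions(iface: str) -> None:
    run(f"tc qdisc del dev {iface} root")

if __name__ == "__main__":
    # Illustrative levels only: 75 ms one-way delay (~150 ms RTT when
    # applied in both directions), 0.5% loss, 8 Mbit/s cap.
    set_conditions("eth0", delay_ms=75, loss_pct=0.5, rate_mbit=8)
```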
Preliminary analysis of the results in both papers shows that deploying SPDY does not necessarily imply performance gains. There are some cases where performance deteriorates, as Fig. 1.1 shows.
Since the live tests alone cannot explain why this happens, the authors in [3] had strong arguments to conduct controlled experiments. Therefore, a series of experiments were presented to show the effect of network conditions on the performance of SPDY. They used the major performance factors, namely, delay, available bandwidth, and loss. Levels of the factors were set as follows: (i) delay, from 10 to 490 ms; (ii) available bandwidth, from 64 kbps to 8 Mbps; and (iii) packet loss ratio (PLR), from 0% to 3%. To isolate the effects of the other factors, for each experiment varying a particular factor, they kept the other factors' levels fixed. For instance, when conducting experiments to understand the impact of bandwidth on SPDY performance, they fixed the RTT at 150 ms and the PLR at 0%. It is worth emphasizing that a combination of levels for each factor could be taken for a more detailed and extensive experimentation. This is essentially a design decision that the experimenter must clearly state and justify. Sometimes it is simply a lack of space, in the case of research papers with a limited number of pages. In other cases, it might make no sense to run a full factorial experiment. In statistics, full factorial means the use of all possible combinations of
the factors' levels. In the case of real SPDY deployment scenarios, it is highly unlikely that a combination of high bandwidth, high PLR, and high delay will make any sense.
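To make the full factorial notion concrete, the sketch below enumerates all combinations of the three factors; the level values are illustrative placeholders (not the exact grids from [3]), and the pruning step drops a corner that would be implausible in practice:

```python
from itertools import product

# Illustrative levels for each factor (placeholders, not the paper's grids).
delays_ms = [10, 150, 490]        # RTT levels
bandwidths_mbps = [0.064, 1, 8]   # available bandwidth levels
plr_pct = [0.0, 1.0, 3.0]         # packet loss ratio levels

# Full factorial design: every combination of levels (3 x 3 x 3 = 27 runs).
design = list(product(delays_ms, bandwidths_mbps, plr_pct))

# Pruning implausible combinations keeps the experiment budget realistic.
pruned = [(d, bw, p) for (d, bw, p) in design
          if not (bw == 8 and d == 490 and p == 3.0)]

print(f"full factorial: {len(design)} runs; after pruning: {len(pruned)} runs")
```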
The next three figures (Figs. 1.2, 1.3, and 1.4) illustrate the effect of RTT, bandwidth, and PLR on the ToW reduction, respectively. The ToW reduction metric means the percentage of improvement in ToW of SPDY over HTTPS.
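A plausible formalization of this metric, consistent with the definition above (the function name is ours):

```python
def tow_reduction(tow_spdy: float, tow_https: float) -> float:
    """Percentage improvement in time of wire (ToW) of SPDY over HTTPS.
    Positive values mean SPDY is faster; negative values, slower."""
    return 100.0 * (tow_https - tow_spdy) / tow_https

# Example: HTTPS takes 2.0 s, SPDY takes 1.5 s -> 25.0% ToW reduction.
print(tow_reduction(1.5, 2.0))
```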
The concluding remarks from these results are: (i) in the RTT experiments, SPDY always performs better than HTTP, especially in high-delay environments; (ii) SPDY performs better in mid- to low-bandwidth scenarios; and (iii) an increase in PLR severely impacts SPDY performance. The authors provided detailed justifications for the behavior of SPDY in all scenarios. The main conclusion is that SPDY may not perform that well in mobile settings. They go a step further in their experiments by showing the impact of the server-side infrastructure (e.g., domain sharding) on the ToW reduction.
In conclusion, it is important to understand why this paper is sound. First, the authors presented several clear arguments for the decisions they made regarding the experimental plan design, which included performance metrics, factors, and levels. Second, they carefully selected the measurement tools and made sure they would get enough data for the sake of statistical significance. They also used a variety of ways to present results, such as tables, XY (scatter) plots, empirical cumulative distribution function (ECDF) plots, bar plots, and heat maps. Such an approach makes the paper more exciting (or less boring, if you will) to read. Last, but not least, they drew conclusions based solely on the results, along with reasonable qualitative explanations.
Fig. 1.1 Time of wire (ToW) – SPDY websites (Source: Elkhatib et al. [3])
Fig. 1.2 Effect of RTT on ToW reduction (Source: Elkhatib et al. [3])
Fig. 1.3 Effect of bandwidth on ToW reduction (Source: Elkhatib et al. [3])
1.2.1.2 Transport Layer
Now we present an example of a careful and accurate performance evaluation study for transport-layer protocol design and analysis in cellular networks. We present details of how the authors provide solid arguments to support the development of a new TCP-like protocol, even though the network research community has been working on this for decades and has produced hundreds of research papers. We also show how they designed their experiments to validate the new protocol and present some selected results. We highlight the performance metrics they used, as well as the factors and levels.
In the thesis Adaptive Congestion Control for Unpredictable Cellular Networks [6], Thomas Pötsch proposes and shows the rationale of Verus, a delay-based end-to-end congestion control protocol that is quick and accurate enough to cope with highly variable network conditions in cellular networks. He used a mix of real prototyping and simulations to evaluate Verus' performance in a variety of scenarios.
Pötsch argues that most TCP flavors that employ different congestion control mechanisms fail to show good performance in cellular networks, mainly due to their inability to cope well with highly variable available bandwidth, varying queuing delays, and non-congestion-related stochastic packet losses. The main causes for such variability at the network level lie in the underlying link and physical layers. He highlights the four main causes as (i) the state of a cellular channel, (ii) the frame scheduling algorithms, (iii) device mobility, and, surprisingly, (iv) competing traffic. Some of
these causes have a different impact on channel characteristics, as some are more prone to affect delays, whereas others might cause burstiness in the perceived channel capacity at the receiver. It is very true that it is tough to develop precise models, algorithms, and mechanisms to track short- and long-term channel dynamics [7]. Therefore, he developed a simple yet efficient delay profile model to correlate the congestion control variable "sending window size" with the measured end-to-end delay. In essence, with a small modification of the additive-increase (AI) portion of the additive-increase/multiplicative-decrease (AIMD) mechanism present in most TCP flavors, Verus is able to quickly adapt to changing channel conditions at several timescales.
He used essential performance metrics to study the overall performance of Verus against some TCP flavors, such as Cubic [8], New Reno [9], Vegas [10], and Sprout [11] (a recent TCP-like proposal for wireless environments). Of these TCP flavors, Sprout is the only one explicitly designed for cellular networks. Pötsch kept the most popular TCP flavors currently deployed on the Internet (New Reno and Cubic) and discarded all the other legacy ones. In addition, he brings some arguments for keeping out of the evaluation those protocols that either need or rely on explicit feedback from the network layer, such as the use of Explicit Congestion Notification (ECN) [12].
In [75], Fabini et al. show how an HSPA downlink presents high delay variability (cf. Figs. 1.5 and 1.6). Thomas Pötsch [6] also shows that 3G and LTE networks do not isolate channels properly (cf. Fig. 1.7).
One interesting finding here is related to cellular channel isolation. The authors discuss that the assumption of channel isolation by means of queue isolation does not hold in the case of high traffic demands (as Fig. 1.6 shows). The authors also show channel unpredictability at different timescales. It is worth emphasizing that at small timescales, the variability effect is more prominent (cf. Fig. 1.7) [76].
Fig. 1.5 Burstiness on latency in cellular networks (Source: Pötsch [6])
Fig. 1.6 Impact of user traffic on packet delay (Source: Pötsch [6])
Fig. 1.7 Traffic received from a 3G downlink (100 and 20 ms windows) (Source: Pötsch [6])
We will not give details of the development of the Verus protocol and will only
focus on performance evaluation. However, we give the reader a glimpse of the
Verus design rationale.
The delay profile is the main component of the congestion control mechanism. It is built on four basic building blocks, namely, the delay estimator, the delay profiler, the window estimator, and the loss handler. The delay estimator keeps track of the received packet delays in a given timeframe, whereas the delay profiler resembles a regression model correlating the sending window with the delay estimate (cf. Fig. 1.8). The window estimator is a bit more complex; it aims at providing information for calculating the number of outstanding packets in the network in the following timeframe (called an epoch). The loss handler tracks losses to be used in the loss recovery phase, as in any legacy TCP mechanism.
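The toy sketch below captures the flavor of a delay-profile-driven window update. It is emphatically not Verus itself (whose profiler, epoch handling, and loss recovery are more elaborate); the linear profile fit and all names and constants are our assumptions:

```python
import numpy as np

class DelayProfile:
    """Toy delay profiler: fit delay as a linear function of the sending
    window from recent (window, delay) samples, then invert the fit to
    pick the window expected to achieve a target delay."""

    def __init__(self):
        self.samples = []                    # (window, delay_ms) pairs

    def observe(self, window: float, delay_ms: float) -> None:
        self.samples.append((window, delay_ms))
        self.samples = self.samples[-200:]   # sliding history

    def window_for(self, target_delay_ms: float, fallback: float) -> float:
        if len(self.samples) < 10:
            return fallback
        w, d = np.array(self.samples).T
        slope, intercept = np.polyfit(w, d, 1)   # delay ~ slope*w + intercept
        if slope <= 0:
            return fallback
        return max(1.0, (target_delay_ms - intercept) / slope)

def next_window(profile: DelayProfile, window: float, delay_ms: float,
                delay_min_ms: float, delta: float = 1.0,
                beta: float = 0.5, tolerance: float = 2.0) -> float:
    """AIMD-like update: additively increase while the measured delay stays
    near its minimum; when delay inflates, fall back to the smaller of a
    multiplicative decrease and the profile's estimate for a target delay."""
    profile.observe(window, delay_ms)
    if delay_ms < tolerance * delay_min_ms:
        return window + delta                # additive increase
    return min(window * beta,                # multiplicative decrease
               profile.window_for(1.5 * delay_min_ms, window * beta))
```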
The authors take an important step toward a sound performance evaluation, namely, parameter sensitivity analysis. As Verus' internal mechanisms comprise several parameters, it is important to understand its robustness across a variety of application scenarios. To do that, the authors executed simulation-based evaluations in different scenarios, focusing on three major parameters, namely, the epoch, the delay profile update interval, and the delta increment.
Two types of experiments were conducted. They used real experiments and simulations to show Verus' performance improvements over New Reno, Cubic, and Sprout in both 3G and LTE networks. Figure 1.9 shows an example of a performance comparison in LTE networks. For a trace-driven simulation, they evaluated the effect of mobility as a performance factor.
Fig. 1.8 Verus delay profile (Source: Pötsch [6])
They used throughput, delay, and Jain's fairness index [13] as performance metrics. The selected factors included the number of devices and flows, downlink and uplink data rates, the number of competing flows, the user speed profile (in the case of mobility), the arrival of new flows, RTT, etc.
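Jain's fairness index [13] has a simple closed form: for n flows with throughputs x1, ..., xn, J = (Σ xi)² / (n · Σ xi²), which ranges from 1/n (one flow takes everything) to 1 (perfectly equal shares). A direct implementation:

```python
def jain_fairness(throughputs):
    """Jain's fairness index: (sum x_i)^2 / (n * sum x_i^2)."""
    n = len(throughputs)
    total = sum(throughputs)
    return total * total / (n * sum(x * x for x in throughputs))

print(jain_fairness([5, 5, 5, 5]))   # 1.0: perfectly fair allocation
print(jain_fairness([20, 0, 0, 0]))  # 0.25 = 1/n: one flow starves the rest
```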
Concluding remarks from these results are that Verus (i) adapts to both rapidly
changing cellular conditions and to competing traffic, (ii) achieves higher through-
put than TCP Cubic while maintaining a dramatically lower end-to-end delay, and
(iii) outperforms very recent congestion control protocols for cellular networks like
Sprout under rapidly changing network conditions.
Again, it is important to understand why this paper is sound. First, the authors presented clear arguments for the need for a new transport protocol for cellular networks. As for the experimental plan design, they provided details on all performance metrics, factors, and levels. Second, they carefully selected the evaluation environments, which included prototyping and simulation tools. Finally, as expected for an ACM SIGCOMM paper, they also used a good variety of ways to present their results, such as tables, scatter plots, probability distribution function (PDF) plots, and time series plots. Please recall that variety means more exciting (or less boring) material to read. Last, but not least, they drew conclusions based solely on the results from real-world measurements, with additional support from the simulation results.
1.2.1.3 Network Layer
When it comes to performance evaluation at the network level of protocols, systems,
and mechanisms, computer-networking researchers and engineers are likely to be
flooded with the massive amount of research and products developed in the last
decades. Every time a new trend arises (e.g., think of ATM networks in the 1980s), there will often be a "gold rush" to investigate the performance of the new technology in a variety of scenarios. Network operators usually want to understand if a wide deployment of a given technology will bring operational expenditure (OPEX) or capital expenditure (CAPEX) savings in the long run.
Fig. 1.9 Throughput vs delay (LTE network) (Source: Pötsch [6])
The particular cases of QoS provisioning or its counterpart network traffic throt-
tling have become an arena for a dispute between network operators and users [14].
The rise of a number of technologies, such as peer-to-peer (P2P), VoIP, deep packet
inspection, etc., has set this arena and has triggered interminable debate on network
neutrality [15], “premium services,” and the like. In the paper “Identifying Traffic
Differentiation in Mobile Networks,” Kakhki et al. [16] focused on the understanding
of current practices of Internet service providers (ISP) when performing traffic dif-
ferentiation in mobile environments (if any). The generic term differentiation means
that a certain ISP can either provide better (e.g., QoS provisioning) or worse (e.g.,
bandwidth throttling) services for the user. Their main motivation is that although the debate is immense, the availability of open data to support reasonable discussions is virtually nonexistent. In addition, they argue that regulatory agencies have been dealing only marginally with such issues, thus making it difficult for end users to understand ISPs' management policies and how these might affect their applications' performance. It is clear to some advanced users that ISPs have been deploying throttling mechanisms for some time now, and this causes performance degradation of certain applications. Back in October 2005, an interview published in the IEEE Spectrum
Magazine revealed that mediation of VoIP traffic was in use by several telephone
companies that provided Internet services. In this context, mediation means traffic
differentiation. Although at that time there were some regulations (in the US) that
could prevent carriers from “blocking potentially competitive services” [14], the
article quotes the words of the vice-president of product marketing for a software
company that provided VoIP traffic identification and classification, as follows:
"But there's nothing that keeps a carrier in the United States from introducing jitter, so the quality of the conversation isn't good. … You can deteriorate the service, introduce latency, and also offer a premium to improve it." (Excerpt from Cherry [14])
In [16], the authors present the design of a system to identify traffic differentia-
tion in wireless networks for any application. They address important technological
challenges, such as (i) performance testing of any application class, (ii) understand-
ing how differentiation occurs in the network, and (iii) wireless network measure-
ments from the user devices. They designed and implemented a system and validated it in a controlled environment using a commodity device (e.g., a common smartphone). They focus only on traffic-shaping middleboxes instead of the whole range of traffic differentiation. They assume that traffic differentiation might be triggered by one or more factors, such as application signatures in the packets' payload, the current application throughput, and the like. Other factors, such as users' location and the time of day, are worth further investigation. My assumption here is that ISPs can virtually apply different policies for different users and applications at different times of the day. It is common to find voice plans that give users unlimited calls in the evenings and on weekends [79]; therefore, it is quite plausible that differentiation can be applied as well. Issues like traffic blocking or content modification were out of the scope of their work. The main steps of their methodology to perform the traffic differentiation analysis, which is based on a general trace record-replay methodology, are:
(i) To record a packet trace from the given application
(ii) To extract the communication profile between the end systems
(iii) To replay the trace over the targeted network with and without the use of a
VPN channel
(iv) To perform some statistical tests to identify if traffic differentiation occurred
for that particular application.
Some interesting intermediate findings were that server IP addresses are not used for differentiation purposes, high-numbered ports might be classified as P2P applications, few packets (i.e., fewer than ten) are necessary to trigger the traffic differentiation mechanism, and encryption of HTTP traffic is not an issue for the traffic shapers.
The proposed detection mechanism is based on the well-known two-sample Kolmogorov-Smirnov (K-S) test (cf. Fig. 1.10) in conjunction with an Area Test statistic. The testbed environment for the Controlled Tests uses an off-the-shelf (OTS) traffic shaper, a mobile device (replay client), and a server (replay server). Performance factors are the shaping rate (from 9% to 300% of the application's peak traffic) and packet losses (from 0% to 1.45%, controlled by the Linux tc and netem). Selected applications were YouTube and Netflix (TCP-based) and Skype and Hangouts (UDP-based). Performance metrics for the calibration studies included the overall accuracy and resilience to noise, whereas for the real measurements, they collected throughput, latency, jitter, and loss rate.
Fig. 1.10 Illustration of the well-known nonparametric K-S Test
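The intuition behind the two-sample K-S test in this setting is simple: replay the same traffic with and without the VPN tunnel and compare the two empirical throughput distributions; a large K-S statistic (small p-value) suggests the exposed traffic is treated differently. A minimal sketch with SciPy, where the samples and significance level are placeholders and the authors' actual pipeline adds the Area Test statistic and careful calibration:

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)

# Placeholder per-interval throughput samples (Mbps) from replaying the
# same application trace directly and through a VPN tunnel.
direct = rng.normal(loc=2.0, scale=0.3, size=200)   # possibly shaped path
via_vpn = rng.normal(loc=4.0, scale=0.5, size=200)  # baseline via VPN

stat, p_value = ks_2samp(direct, via_vpn)
alpha = 0.01  # illustrative significance level
if p_value < alpha:
    print(f"distributions differ (KS={stat:.3f}, p={p_value:.2g}): "
          "traffic differentiation suspected")
else:
    print("no significant difference detected")
```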
After some calibration studies, the authors conducted real measurements in a number of operational networks for multimedia streaming applications as well as voice/video conference systems. All experiments were repeated several times to ensure statistical significance of the results. They reported success in detecting middleboxes' behavior, such as traffic shaping, content modification, and the use of proxies. The consequence of a low shaping rate limit on TCP performance is that, in some cases, the congestion control mechanism of the protocol might not even be able to leave the slow-start phase. One of the main conclusions is that a high number of middleboxes in the selected mobile operators break the network neutrality (or the Internet's end-to-end) principle, thus directly affecting the performance of some applications.
As usual, we highlight the importance of this paper in terms of performance evaluation at the network level. First, the authors presented clear arguments for the need to know whether traffic differentiation exists in wireless networks and which traffic features trigger such mechanisms. They implemented and tested their system as a real prototype and conducted experiments in a controlled testbed (for calibration) and in production networks (for validation). Their experimental plan design provided details on all performance metrics, factors, and levels. Finally, the results presentation included tables, scatter plots, and time series plots. Last, but not least, they made both code and datasets available for the sake of reproducibility of their research.
1.2.1.4 Link Layer
Performance evaluation in wireless and mobile environments has always been in high demand. There are several operational challenges related to uncontrolled factors, such as channel conditions and user mobility. Moreover, the widespread adoption of short- and mid-range wireless communication technologies (e.g., WiFi) together with cellular networks brings great opportunities for improving network performance, while at the same time raising a number of design and deployment issues, such as optimization of coverage and channel allocation. Telecom vendors have recently been offering heterogeneous network (HetNet) solutions for traffic offloading from the macro to the micro network. Such solutions can boost customer experience by offering high performance (e.g., high data rates or low latencies) from either the macro or the micro cell. The challenges of designing HetNets are tremendous since conflicting requirements are always in place. On one hand, it is necessary to cut the total cost of ownership (TCO) and avoid over-dimensioning. On the other hand, operators must improve performance for the end users and optimize coverage.
In the paper When Cellular Meets WiFi in Wireless Small Cell Networks, Bennis et al. [17] tackle some challenges of heterogeneous wireless networking design by addressing the integration of WiFi and cellular radio access technologies in small cell base stations (SCBS). Figure 1.11 shows a common deployment scenario for HetNets.
The authors emphasized HetNets as a key solution for dealing with performance issues in macrocellular-only infrastructures (macrocell base stations – MBS) since multimode SCBS can bring complementary benefits from the seamless integration point of view. They also pointed out some concerns about the lack of control of the
quality of services in unlicensed bands (i.e., in WiFi networks), which can severely
degrade performance. They argue that offloading some of the traffic from the unli-
censed to the licensed (and well-managed) network can improve performance,
which motivated them to propose an intelligent distributed offloading framework.
Instead of using the usual strategy of offloading from macro to micro cells, they
argue that SCBS could simultaneously manage traffic between cellular and WiFi
radio access technologies according to the traffic profile and network conditions
(e.g., QoS requirements, network load, interference levels). In addition, they discuss
that SCBS could run a long-term optimization process by keeping track of the net-
work’s optimal transmission strategy over licensed/unlicensed bands. As examples,
they mentioned that delay-tolerant applications could be using unlicensed bands,
whereas delay-sensitive applications could be offloaded to the licensed channels.
The authors discussed some design challenges for HetNets deployment, as clas-
sical offloading (i.e., from macro to micro cell or WiFi) might not be the best
approach. They argued that fine-grained offloading strategies should be deployed in
order to make performance-aware optimized traffic-steering decisions. Other net-
work conditions, such as backhaul congestion and channel interference, must be
taken into account when enforcing a certain offloading policy. They proposed a
reinforcement learning (RL)-based [18] solution to this complex problem.
Fig. 1.11 Common deployment scenario for HetNets (Source: Bennis et al. [17])
They provided a basic RL model with realistic assumptions for network parameterization, including the existence of multimode SCBS. One particular model formulation considers joint interference management and traffic offloading. The basic idea behind the RL-based modeling is that the SCBS can make autonomous decisions to optimize a given objective function. Specifically, they set its goal to devise an intelligent and online learning mechanism to optimize its licensed spectrum transmission, while at the same time leveraging WiFi by offloading delay-tolerant traffic. They aim to accomplish this goal through two basic framework components, namely, subband selection and proactive scheduling. The macro behavior of the proposal is simple yet promising. Once each SCBS makes its decision on which subband to use, a scheduling procedure starts, which takes into account users' requirements and network conditions.
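As a rough illustration of what an RL-driven subband selection loop could look like, here is a tabular Q-learning toy. It is emphatically not the authors' formulation (their model couples interference management, scheduling, and offloading); the states, actions, and reward below are invented for the example:

```python
import numpy as np

rng = np.random.default_rng(0)

N_STATES = 3    # e.g., low/medium/high cell load (our discretization)
N_ACTIONS = 4   # e.g., three licensed subbands plus offload to WiFi

Q = np.zeros((N_STATES, N_ACTIONS))
alpha, gamma, eps = 0.1, 0.9, 0.1   # learning rate, discount, exploration

def environment_step(state, action):
    """Placeholder environment returning (reward, next_state); a real
    study would plug a system-level simulator in here."""
    base = [1.0, 1.2, 0.8, 1.5][action]          # invented per-action payoff
    reward = base - 0.3 * state + rng.normal(0, 0.1)
    return reward, int(rng.integers(N_STATES))

state = 0
for _ in range(10_000):
    # epsilon-greedy action selection
    if rng.random() < eps:
        action = int(rng.integers(N_ACTIONS))
    else:
        action = int(np.argmax(Q[state]))
    reward, nxt = environment_step(state, action)
    # standard Q-learning update
    Q[state, action] += alpha * (reward + gamma * Q[nxt].max() - Q[state, action])
    state = nxt

print("greedy action per load state:", Q.argmax(axis=1))
```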
The authors consider an LTE-A/WiFi offload case study in a multi-sector MBS
scenario integrated with multimode SCBS. In a simulation environment, the user
device (or user equipment – UE) has a traffic mix profile unevenly distributed
between best-effort, interactive, streaming, real-time, and interactive real-time
applications. They consider four scenarios for benchmarking, as follows:
(i) Macro-only: MBS serves the UEs that use licensed bands only.
(ii) HetNet: MBS and SCBS serve the UEs. SCBS is single mode (licensed bands
only).
(iii) HetNet + WiFi: MBS and SCBS serve the UEs. SCBS is multimode (e.g.,
licensed and unlicensed bands).
(iv) HetNet + WiFi with access method based on received power.
Performance metrics include average UE throughput, total SCBS throughput, total cell throughput, and total cell-edge throughput. As performance factors, they use the number of UEs, the subband selection strategy, and the number of SCBS (i.e., small cell densification). Figures 1.12 and 1.13 show some performance results. One can clearly observe that a well-designed HetNet strategy can boost network and end-user performance when compared to an MBS-only scenario.
As a magazine paper, it is not expected to include detailed discussions of the problem formulation, the solutions, and the performance evaluation. The authors did a good job balancing the content between engineering design discussions and results that support the claim that their framework is promising for improving performance in wireless HetNet environments. They carefully selected performance metrics as well as factors and levels to conduct sound simulation-based experiments.
Fig. 1.12 Aggregate cell throughput vs. # of users for two traffic scheduling strategies (Source: Bennis et al. [17])
Fig. 1.13 Aggregate cell throughput for different offloading strategies (Source: Bennis et al. [17])
1.2.2 Performance Evaluation in Modern Scenarios
1.2.2.1 Virtualization and Cloud Computing
It might be a surprise for some, but virtualization concepts and technologies are not
new. In fact, a brief look at the history of virtualization of computing resources
reveals that the main concept, some proofs of concept, and real products span back five decades. IBM and Bell Labs were the main players in the 1960s. Fast
forwarding to the late 1990s, one can see that Sun's Java was already gaining widespread adoption, along with VMWare technologies, as the main players. Meanwhile, computer scientists and engineers were developing the concept of using computing resources as general services. When looking back 50 years for information on virtualization and computing as services, it is difficult to pinpoint which concept came first. But it is safe to say that when hardware virtualization gained momentum in the late 1990s, the technology world changed forever, where Amazon.com (founded in 1994) might be considered the first heavy user of virtualization technologies. About 10 years later (early to mid-2000s), Amazon.com started to offer virtual computing resource services, such as the web (Amazon Web Services, 2002), storage (Amazon Simple Storage Service, 2006), and infrastructure (Amazon Elastic Compute Cloud, 2006). (See the AWS white papers "Overview of Amazon Web Services", December 2015, https://d0.awsstatic.com/whitepapers/aws-overview.pdf; "AWS Storage Services Overview", November 2015, https://d0.awsstatic.com/whitepapers/AWS%20Storage%20Services%20Whitepaper-v9.pdf; and J. Varia, "Architecting for the Cloud: Best Practices", May 2010, http://jineshvaria.s3.amazonaws.com/public/cloudbestpractices-jvaria.pdf.)
The main point of interest here is how virtualization-based technologies and ser-
vices perform in real environments. A closer look at current virtualization technologies and services reveals that a number of new concepts and services have arisen and evolved in the last 10 years. Virtualization is real and has spread into
hardware, desktop, storage, applications, platforms, infrastructure, networks, and
the like. Layers of virtualization components are now the norm, which brings a
number of questions, such as how they impact the performance seen by the end
users, how to optimize performance in distributed data centers, how to accurately
measure new performance metrics (e.g., elasticity, as it is specific to cloud comput-
ing environments), etc. Adequate tools and methodologies for measurements and
analysis in virtualized environments are currently under discussion in standardiza-
tion bodies, such as in the IETF’s IPPM and BMWG working groups [19, 20].
In the paper CloudNet: Dynamic Pooling of Cloud Resources by Live WAN
Migration of Virtual Machines [21], Wood et al. discuss that while virtualization
technologies have provided precise performance scaling of applications within data
centers, the support of advanced management processes (e.g., VM resizing and migration) in geographically distributed data centers is still a challenge. Such a feature
would provide users and developers an abstract view of the (distributed) resources of
a cloud computing provider as a single unified pool. They highlight some of the hard
challenges for dynamic cloud resource scaling in WAN environments, as follows:
1. Minimization of application downtime: If the given application handles a massive amount of data, migration to a data center over WAN connections might introduce huge latencies as a result of the data copying process. Moreover, it might require keeping disk and memory states consistent.
2. Minimization of network configurations: In a LAN environment, VM migration
might require only a few tricks in the underlying Ethernet layer (e.g., using trans-
parent ARP/RARP quick reconfigurations). In the case where the IP address
space changes, such a migration might cause disruption in connectivity from the
application point of view.
3. Management of WAN links: It is obvious that link capacities and latencies in WANs are very different from their LAN counterparts. Mainly due to costs, WAN capacities are not comparable to those of local networks. And even if they were, high utilization of links for long periods of VM migration is not a good network management practice. In addition, link latency is highly unlikely to be a controlled factor. Therefore, the challenges are to provide migration techniques over distributed clouds that (i) operate efficiently over low-bandwidth links and (ii) optimize the data transfer volume to reduce the migration latency and cost.
The authors then propose CloudNet, a platform designed to address the above
challenges, targeting live migration of applications in distributed data centers. The
paper describes the design rationale along with a prototype implementation and an extended performance evaluation in a real environment.
Figure 1.14 illustrates the concept of the virtual cloud pools (VCP), which can be
seen as an abstraction of the distributed cloud resources into a single view. VCP
aims at connecting cloud resources in a secure and transparent way. VCP also allows
a precise coordination of the hypervisors’ states (e.g., memory, disks) to ease the
replication and migration process. CloudNet architecture is composed of two major
controllers, namely, Cloud Manager and Network Manager. The former is composed
of other architectural components, such as Migration Optimizer and Monitoring
and Control Agents. The latter is responsible for VPN resource management. In the
case of VM migration between geographically distributed data centers, CloudNet
works as follows (cf. Fig. 1.15):
(i) It establishes connectivity between VCP endpoints.
(ii) It transfers the hypervisor's state (memory and disk).
(iii) It pauses the VM to transfer the processor state along with memory state updates.
The authors emphasize that disk state migration takes the majority of the overall VM migration time. This is because disk sizes are on the order of tens or hundreds of gigabytes, compared to a few gigabytes for memory. Once these phases are completed, network connections must be redirected. They deploy an efficient mechanism based on virtual private LAN service (VPLS) bridges. They also propose some optimizations to improve CloudNet performance, such as:
(i) Content-based redundancy (see the sketch after this list)
(ii) Using page or subpage block deltas
(iii) A smart stop-and-copy (SSC) algorithm
(iv) Synchronized arrivals (an extension of the SSC algorithm)
(v) Deployment on Xen's Dom-0
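A minimal sketch of the idea behind content-based redundancy elimination: split the memory/disk stream into fixed-size blocks, hash each block, and send only a short hash reference when identical content has already been transferred. The block size, hash choice, and framing are our assumptions, not CloudNet's actual implementation:

```python
import hashlib

BLOCK = 4096  # bytes; real systems tune this (our placeholder)

def migrate_stream(data: bytes, seen: set):
    """Yield ('ref', digest) for blocks already transferred and
    ('raw', block) otherwise, updating the shared digest cache."""
    for off in range(0, len(data), BLOCK):
        block = data[off:off + BLOCK]
        digest = hashlib.sha256(block).digest()
        if digest in seen:
            yield ("ref", digest)   # 32 bytes on the wire instead of 4096
        else:
            seen.add(digest)
            yield ("raw", block)

# Example: a memory image with many identical (zero) pages dedupes heavily.
image = bytes(BLOCK) * 100 + b"unique-data" * 500
msgs = list(migrate_stream(image, set()))
raw = sum(1 for kind, _ in msgs if kind == "raw")
print(f"{len(msgs)} blocks, {raw} sent raw, {len(msgs) - raw} deduplicated")
```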
Fig. 1.15 Phases of resource migration in CloudNet (Source: Wood et al. [21])
Fig. 1.14 Illustration of the virtual cloud pool (VCP) concept (Source: Wood et al. [21])
The CloudNet implementation is based on Xen (http://www.xenproject.org/), the Distributed Replicated Block Device (DRBD, http://drbd.linbit.com/home/what-is-drbd/), and off-the-shelf (OTS) routers that implement VPLS. The performance evaluation of CloudNet was mainly conducted across three interconnected data centers in the USA. The main goal was to understand the performance of different applications on top of CloudNet under realistic network conditions. They also performed some tests on a testbed in order to have more control over the network conditions (i.e., controlling link capacity and latency). As applications, they used a Java server benchmark (SPECjbb 2005), a development platform (Kernel Compile), and a Web benchmark (TPC-W). Performance metrics include bandwidth utilization, total migration time, data sent, and application response time. Figure 1.16 shows an example of the benefit of deploying CloudNet as compared to Xen's default strategy.
Figure 1.17 depicts the performance of a TPC-W application in varying band-
width conditions. It is clear that CloudNet is able to reduce the impact on migration
time in low-bandwidth conditions.
Also, the amount of data transmitted is significantly reduced for both cases of
TPC-W and SPECjbb, as Fig. 1.18 depicts.
It is worth emphasizing that tackling most research and development challenges in virtualized environments requires careful design of the system architecture to uncover the subtleties they bring. The authors of the abovementioned paper did an excellent job by realistically taking into account the most important performance factors (and their corresponding levels) that might impede network managers and engineers from deploying live migration of applications in a WAN environment. The presented results are convincing enough to bring other researchers to conduct more advanced studies on the topic. All performance metrics were carefully selected to support the benefits of CloudNet.
Fig. 1.16 Comparison of response time: Xen vs. CloudNet (Source: Wood et al. [21])
Fig. 1.17 Performance of the TPC-W in varying bandwidth conditions (Xen vs. CloudNet) (Source: Wood et al. [21])
1.2.2.2 Software-Defined Networking
Software-defined networking (SDN) concepts and technologies have attracted a great deal of attention from both the industry and academic communities [22]. Large-scale adoption and deployment are yet to come, mainly because the overall performance of the major SDN elements (i.e., the controller and its underlying software components) is not clearly understood. There are a number of research papers that address common performance issues in the SDN realm, such as the ability of the SDN controller to deal with the arrival of new flows at a fast pace. There is some
evidence of both good and poor performance in specific scenarios [23, 24], but in
general, most experiments have been conducted in short time frames. For the particu-
lar case of benchmarking of OPNFV switches, Tahhan, O’Mahony, and Morton [25]
suggest at least 72 h of experiments for platform validation and to assess the base
performanceformaximumforwardingrateandlatency.Althoughtherecommendation
Fig. 1.18 Transmitted data (TPC-W and Specjbb) (Source: Wood et al. [21])
1.2
Classical and Modern Scenarios: Examples from Research Papers
35. 24
is for NFV switches, it serves well for SDN environments, so the experimenter would
be able to separate transient and steady-state performance clearly. Along with the
long-term performance of SDN controllers in search for software malfunctioning
(e.g., software aging) [26], there is a need for understanding the interactions of the
software components in popular controllers in order to set the ground truth.
In On the Performance of SDN Controllers: A Reality Check, Zhao, Iannone, and
Riguidel [27] conducted controlled and extended experiments to deeply understand
how SDN controllers should be selected and configured to be deployed in real sce-
narios. They start by showing that the need for a comprehensive performance evaluation stems from the variety of system implementations. In general, each implementation
of an SDN controller might perform well in a particular scenario, whereas it might
suffer in a different setting. They focused on the most popular centralized SDN
controller implementations, namely, Ryu [28], Pox [29], Nox [30], Floodlight [31],
and Beacon [32]. Figure 1.19 depicts the basic testing setup for the experiments.
They rely on a one-server setup, with CBench9 playing the important role of emulating a configurable number of switches.
9 CBench, https://github.com/andi-bigswitch/oflops/tree/master/cbench.
Fig. 1.19 Testbed setup for experiments with SDN controllers (Source: Zhao et al. [27])
Performance metrics were kept simple: latency, throughput, and fairness (the latter in a slightly different context). Experiments were replicated several times in order to guarantee the statistical significance of results. After discussing accuracy issues for latency measurements in Open vSwitches, the paper evaluates the impact of selected factors on the performance metrics, as follows:
1. Factor #1: the type of the interpreter for Python-based controllers, with CPython
and PyPy as levels.
2. Factor #2: the use of multiple threads along with the hyper-threading (HT) feature of the processor. The goal here is to understand whether enabling or disabling HT (i.e., the levels) would impact any performance metrics. They also evaluate whether multiple threads bring any performance advantages.
3. Factor #3: the number of switches, varying from 1 to 256.
4. Factor #4: the number of threads, from 1 to 7.
It is worth emphasizing that not all possible combinations of factors and levels were
set for testing. The authors clearly explained the subset of levels for each scenario.
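As a side note, it is easy to see why: the sketch below (using the factors and levels reported by Zhao et al. [27], with illustrative names) enumerates the full-factorial design, which quickly becomes too large once replications are added.

    from itertools import product

    # Factors and levels roughly as reported by Zhao et al. [27].
    factors = {
        "interpreter": ["CPython", "PyPy"],
        "hyper_threading": ["on", "off"],
        "switches": [1, 4, 16, 64, 256],
        "threads": [1, 2, 3, 4, 5, 6, 7],
    }

    # Full-factorial design: every combination of levels.
    runs = list(product(*factors.values()))
    print(len(runs), "combinations")          # 2 * 2 * 5 * 7 = 140

    # With, say, 30 replications per combination for statistical confidence:
    print(len(runs) * 30, "experiment runs in total")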
Let’s check some of the results from this paper. For performance factor #1 (the
type of the Python language interpreter), Tables 1.1 and 1.2 show that PyPy has
clearly better performance for both latency and throughput metrics.
The impact of the number of network switches (factor #3) is evaluated for both the single- and multiple-thread cases (factor #2). Figure 1.20 presents the impact on the SDN-switch latency as the number of switches increases. Individual latencies rise from microseconds to milliseconds, depending on the type of the controller. In this particular case, Beacon seems to suffer the least performance degradation as the number of switches increases, whereas Ryu has the worst overall performance. There is no further investigation into what exactly causes this performance issue, although the authors suggest that the round-robin policy implemented in each controller might be the underlying cause.
There are a number of other important results and discussions in the paper, but one is of particular interest. Using a strategy similar to that in [26], the authors wanted to understand how SDN controllers perform in a heavy workload scenario. The experiments
Table 1.1 Impact of Python interpreter (latency, in ms)
Controller   CPython   PyPy    CPython/PyPy
Pox          0.156     0.042   3.75
Ryu          0.143     0.037   3.86
Source: Zhao et al. [27]
Table 1.2 Impact of Python interpreter (throughput, in responses/ms)
Controller   CPython   PyPy   PyPy/CPython
Pox          11.2      105    9.38
Ryu          24.1      106    4.40
Source: Zhao et al. [27]
show results similar to those in [26]: under a heavy workload, latency (cf. Table 1.3) grows by up to thousands of times compared to a light workload. Their concluding remarks from the experiments show that Beacon performed better than the other controllers in almost all tested scenarios, which is in agreement with previous studies.
We see that the authors carefully selected the most important performance factors and levels, bringing strong qualitative arguments to the table. As expected, the presented results are solid enough to encourage other researchers to conduct more advanced studies on the topic, and they are also aligned with some of the previous studies.
Fig. 1.20 Impact on the SDN-switch latency as the number of switches increases (Source: Zhao et al. [27])
Table 1.3 Latency comparison
Controller   Empty buffer (ms)   Full buffer (ms)   Ratio
Pox          0.042               5.26                126
Ryu          0.039               2.63                68
Nox          0.018               149.00              8358
Floodlight   0.022               76.90               3461
Beacon       0.016               50.00               3050
Source: Zhao et al. [27]
1.2.2.3 Network Functions Virtualization
The ever-growing interest in virtualization technologies has very recently brought a new player into the field. Network functions virtualization (NFV) can be seen as the
latest paradigm that aims at virtualizing traditional network functions currently
implemented in dedicated platforms [34]. Within the scope of NFV, virtual network
functions (VNF) can be orchestrated (or chained, if you will), thus creating a new
concept, namely, Service Function Chaining (SFC). This new paradigm, along with
other virtualization concepts and technologies (e.g., SDN and virtual switches),
promises to profoundly change the way networks are managed. The main concern of the telecommunications industry, as NFV’s main target, is, of course, related to performance. In particular, as VNF or SFC implementations will typically be deployed in VMs or containers, it is important to understand their performance overhead. It is clear that multiple instances of a VNF deployed in a virtual switch might
have an impact on the overall SFC performance. Therefore, performance evaluation
of NFV and its related technologies is one of the main concerns of network opera-
tors. It has attracted the interest of standardization bodies, such as ETSI [33]. In
Assessing the Performance of Virtualization Technologies for NFV: A Preliminary
Benchmarking, Bonafiglia et al. [35] argued that while performance issues in tradi-
tional virtualization are mostly related to processing tasks, in an NFV network, I/O
is the main concern. They presented a preliminary benchmarking of SFC (as a chain
of VNFs) that uses common virtualization technologies.
First, they set up a controlled environment and implemented VNF chains deployed as virtual machines (using KVM) or containers (using Linux Docker).
Performance metrics were throughput and latency. All experiments were replicated
to ensure statistical significance of results. For the factors, they simply evaluated the
impact on the end-to-end performance as the number of VNFs in the SFC increases.
They also investigated if the technology used to interconnect the VNFs in the virtual
switch, namely, either the Open vSwitch (OvS) or the Intel Data Plane Development
Kit (DPDK)-based OvS, has any impact on the performance metrics. Figure 1.21
shows the network components for a VM-based implementation of NFV, whereas
Fig. 1.22 shows a similar architecture in a container-based approach.
The testbed for the experimental runs was configured as depicted in Fig. 1.23.
It is important to emphasize that they did not evaluate the impact of processing
overheads for particular virtualized functions. In other words, they implemented a
“dummy” function that simply forwards packets from one interface to another. A
comprehensive performance evaluation might take the processing costs of the given
functions (or classes of functions) into account. Figure 1.24 shows an example of the impact of the VNFs in a single chain on the end-to-end throughput, for both VM- and container-based implementations in the virtual switch.
Similarly, Fig. 1.25 shows the latency introduced by the VNFs implemented in
different technologies (e.g., OvS vs. DPDK) as the SFC length increases.
I agree with the authors when they state that this paper provides a preliminary benchmarking of VNF chains. I just want to highlight that, from the point of view of performance evaluation, they quickly realized the importance of having some initial results to fill the gap in this topic. Sometimes you don’t need an extensive performance evaluation to gain a good understanding of the factors that impact a system’s performance. An initial and correct (yet simple) set of hypotheses and research questions might be enough to carry out a set of experiments. Validation of hypotheses does not need to be fancy.
Fig. 1.21 Network components for a VM-based implementation of NFV (Source: Bonafiglia et al. [35])
Fig. 1.22 Container-based approach for NFV implementation (Source: Bonafiglia et al. [35])
Fig. 1.23 Testbed setup for testing NFV (Source: Bonafiglia et al. [35])
Fig. 1.24 An example of the impact of the VNFs in a single chain on the end-to-end throughput (Source: Bonafiglia et al. [35])
Fig. 1.25 An example of the impact of the VNFs in a single chain on the RTT (Source: Bonafiglia et al. [35])
1.3 The Pillars of Performance Evaluation of Networking and Communication Systems
When dealing with performance evaluation, we need to decide among experimentation, simulation, emulation, and modeling. Collecting samples properly from the experiments (a.k.a. measurements) is a crosscutting aspect, since it gives support to all performance evaluation methods. If you have to choose among these options, which one(s) would you prefer? There is no precise scientific approach to help researchers decide on their strategies. The best strategy should be chosen on a case-by-case basis, depending on the availability of resources as well as one’s ability to deal with a particular strategy. Let’s say one needs to evaluate whether a certain transport-layer network
protocol has better overall performance than a traditional one (e.g., TCP SACK vs.
TCP Cubic). A number of decisions should be made for a proper performance evalu-
ation and analysis, but the first step is always deciding the adequate approach. Should
one select real experimentation or simulation? Should one try to develop analytical
models for both protocols? Is emulation possible for the envisaged scenarios? Would
results from a single strategy be convincing enough? These are the types of questions
we would like to pose along with presenting possible avenues to answer them.
1.3.1 Experimentation/Prototyping, Simulation/Emulation, and Modeling
It is common sense that, when possible, the best general approach would be work-
ing with at least two strategies [1]. Limitations of an individual approach would be
somewhat overcome when you choose and work with at least two different ones.
This strategy would also help minimize criticism of your experimental work (e.g., when it is submitted to peer review or to dissertation/thesis committees). It is
clear that real implementation approaches generally suffer from scalability limita-
tions. For instance, it is costly to deploy large-scale scenarios to experiment with either ad hoc sensor networks or the Internet of things (IoT). Even if one has hundreds
of devices deployed, there is always criticism that the validation would be limited in
a larger scale (i.e., with thousands or millions of devices). Therefore, one strategy can give support to the other, when properly argued. In the example of performance evaluation of transport protocols, one can choose pairs of strategies, such as experimentation and simulation, simulation and modeling, or experimentation and modeling.
For example, experimentation can validate a certain mechanism the experimenters are proposing, whereas simulations could demonstrate its effectiveness at a large scale, and analytical modeling could provide a general understanding of the behavior of the phenomenon under investigation.
1.3.1.1 Network Experimentation and Prototyping
Engineers are more inclined to see real implementations of network protocols running in devices or OSes. Simulations might give them a first impression on how to
proceed further, but in the end, they need to implement and test their ideas in real
environments. Academics are more flexible and might rely on any of the perfor-
mance evaluation approaches, as long as they give them some meaningful results
that provide evidence of the scientific contributions of their work. In any case, doing
experimental work is a tough decision for either group of experimenters since it
generally requires analysis of cost, scalability, time to completion, learning curve
for the particular environment, and the like.
Experimentation work might involve a number of preliminary tests to make sure
the experimenter will be working in a suitable environment. There are a number of
components in the protocol stack and their corresponding implementations that
might have an impact on the performance of the given mechanism under perfor-
mance analysis. Let’s say one wants to develop and test a new architecture for a
network traffic generator [36, 37]. Even if the theoretical or simulation analysis
shows that it has outstanding performance as compared to the results found in the
literature, the real implementation of packet handling software layers in common
OSes will have a profound impact on the results. Deploying such an architecture on
top of libpcap10 or PF_RING11 will yield different results.
For dealing with large-scale experimentation, there are some experimentation
platforms that help researchers to overcome scalability issues of real prototyping.
PlanetLab (PL) [46] was one of the first worldwide experimental platforms avail-
able to researchers in the computer networking field. Such platforms are also known as Slice-based Federation Architectures (SFA). Recent SFA platforms include GENI12 and OneLab.13 SFAs mostly have three key elements, namely, components, slivers, and
slices. A component is the atomic block of the SFA and can come in the form of an
end host or router. Components offer virtualized resources that can be grouped into
aggregates. A slice is a platform-wide set of computer and network resources (a.k.a.
slivers) ready to run an experiment. Figure 1.26 illustrates the essential concepts of
component, resource, sliver, and slice in an SFA-based platform.
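A minimal sketch, with hypothetical class names, of how these concepts relate in code: a component exposes resources, a sliver is one slice’s share of a component, and a slice aggregates slivers platform-wide.

    from dataclasses import dataclass, field

    @dataclass
    class Component:
        """Atomic SFA block, e.g., an end host or a router."""
        name: str
        cpus: int
        bandwidth_mbps: int

    @dataclass
    class Sliver:
        """A slice's share of one component's virtualized resources."""
        component: Component
        cpus: int
        bandwidth_mbps: int

    @dataclass
    class Slice:
        """Platform-wide set of slivers, ready to run one experiment."""
        name: str
        slivers: list = field(default_factory=list)

        def add_sliver(self, component, cpus, bandwidth_mbps):
            self.slivers.append(Sliver(component, cpus, bandwidth_mbps))

    node = Component("pl-node-01", cpus=8, bandwidth_mbps=1000)
    experiment = Slice("tcp-study")
    experiment.add_sliver(node, cpus=2, bandwidth_mbps=100)  # reserve a share
    print(len(experiment.slivers), "sliver(s) in slice", experiment.name)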
10 Libpcap – www.tcpdump.org/.
11 PF_RING – www.ntop.org.
12 http://www.geni.net/.
13 https://onelab.eu.
1.3.1.2 Network Simulation and Emulation
Network simulation is an inexpensive and reliable way to develop and test ideas for
those problems where there is no need to rely on either analytical models or experi-
mental approaches. Several network simulation environments have been tested and validated by the research community in the last decades, and most simulation engines are efficient enough to provide fast results even for large-scale scenarios.
Barcellos et al. [49] discuss the pros and cons of conducting performance evaluation
through simulation and real prototyping.
Simulation engines are usually based on discrete-event approaches. For instance,
ns-2 [38] and ns-3 [39], OMNET++ [40], OPNET [41], and EstiNet [42] have their
simulation core based on discrete-event methods. Specific network simulation envi-
ronments, such as Mininet [43], Artery [44], and CloudSim [45], follow suit.
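The discrete-event core these engines share fits in a few lines: a simulated clock, a priority queue of timestamped events, and a loop that always processes the earliest event next. The toy sketch below (mine, not code from any of the simulators above) schedules a handful of packet arrivals, each followed by a fixed-time departure.

    import heapq
    import random

    random.seed(42)
    events = []   # priority queue of (time, sequence_number, kind)
    seq = 0       # tie-breaker so equal-time events never compare payloads

    def schedule(t, kind):
        global seq
        heapq.heappush(events, (t, seq, kind))
        seq += 1

    # Seed the simulation with 5 packet arrivals (exponential inter-arrivals).
    t = 0.0
    for _ in range(5):
        t += random.expovariate(1.0)
        schedule(t, "arrival")

    # Event loop: the clock jumps straight to the next event.
    while events:
        now, _, kind = heapq.heappop(events)
        print(f"t={now:6.3f}  {kind}")
        if kind == "arrival":
            schedule(now + 0.5, "departure")  # simplification: service starts at once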
Emulation is the process of imitating the outward behavior of a certain networked application, protocol, or even an entire network. I come back to this in more detail in Chap. 4. Performance evaluation that includes emulated components helps the experimenter to deploy realistic scenarios that simulation, experimental evaluation, and analytical modeling alone cannot capture. Emulation fits particularly well in scenarios where the experimenter needs to
understand the behavior of real networked applications in controlled networking
environments. Of course, one can control network parameterization in real envi-
ronments, but in general, large-scale experimental platforms are not available to
all. When the performance of a certain system in the wild (i.e., on the Internet)
is known but only gives the big picture, the researcher/engineer might want to understand its behavior under specific network conditions. Recall that, roughly 10–15 years ago, the performance of VoIP applications on the Internet was not clear. Different CODECs and other system eccentricities would yield a different perceived quality of experience (QoE) for the user. Therefore, if one needed an in-depth view of such VoIP applications under certain conditions (i.e., restricted available bandwidth, limited latencies, or a given packet loss rate), resorting to emulation would be the answer. In modern scenarios, there are a number of studies trying to understand the behavior of video applications in controlled environments [47, 48]. NetEm [50] is a network emulator widely used in the networking research community. It has the flexibility to change a number of parameters, thus giving the user the possibility to mimic a number of large networking scenarios without the associated costs. Figure 1.27 depicts a typical usage of NetEm in emulation-based experiments.
Fig. 1.26 Building blocks of an SFA-based platform
Fig. 1.27 A typical usage of NetEm in emulation-based experiments
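For reference, a parameterization like the one in Fig. 1.27 boils down to a couple of tc commands; the sketch below wraps them in Python (it must run as root, and eth0 and the chosen values are placeholders).

    import subprocess

    DEV = "eth0"  # placeholder network interface

    def tc(cmd: str):
        print("+", cmd)
        subprocess.run(cmd.split(), check=True)

    # Emulate a constrained WAN path: 100 ms delay with 10 ms jitter,
    # 1% packet loss, and a 5 Mbit/s rate limit.
    tc(f"tc qdisc add dev {DEV} root netem delay 100ms 10ms loss 1% rate 5mbit")

    # ... run the measurements against the emulated link here ...

    # Restore the interface to its default queuing discipline.
    tc(f"tc qdisc del dev {DEV} root")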
1.3.1.3 Analytical Modeling
All models are wrong, but some are useful. (George Box)
The quote in this subsection comes from the well-known statistician George Box. His statement is profound. Due to the need for an abstract view of the target problem, and to minimize complexity, analytical models sometimes suffer from limited usage in practical situations. But some are useful!
A formal definition of modeling closer to this book’s context is a system of pos-
tulates, data, and inferences presented as a mathematical description of an entity or
state of affairs.14
It is very true that analytical models have limitations, and they are
somewhat hard to develop since they mostly require a strong mathematical back-
ground. It is generally much easier to come up with an idea for a protocol without any profound mathematical analysis. We have seen a number of examples in the computer networking field where formal mathematical models only came after a certain protocol or service had been adopted and used for a long time. One clear case is some of the mechanisms behind TCP. It took years of research studies
to provide evidence that the congestion control mechanisms of TCP would be fair
in a number of scenarios, although several TCP flavors were active and running on
the Internet for quite some time [51]. Network engineers have developed models for
the purpose of planning, designing, dimensioning, performance forecasting, and the
like. Models can be used to have an overview of general behavior or to evaluate
detailed mechanisms or architectural components.
Analytical modeling does not need to be complex. One can simply perform a
model fitting for a particular probability distribution function or a mix of them.
Alternatively, one can look at user behavior and try to derive simple models from it
to build traffic generators [52]. There are indeed a number of analytical models avail-
able to meet most requirements from network engineers and researchers in terms of
practical models. They come not only in the form of research papers but also as books
[53, 54] and book chapters [55]. Therefore, it is almost impossible to cover the body
of knowledge on analytical models in computer networking in a single book.
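As a small example of the simple model fitting mentioned above, the snippet below fits an exponential PDF to a sample of inter-arrival times with SciPy and checks the fit with a Kolmogorov-Smirnov test (the "measurements" here are synthetic placeholders).

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(7)

    # Placeholder measurements: packet inter-arrival times, in seconds.
    samples = rng.exponential(scale=0.02, size=5000)

    # Maximum-likelihood fit of an exponential PDF (location pinned to 0).
    loc, scale = stats.expon.fit(samples, floc=0)
    print(f"fitted rate: {1 / scale:.1f} packets/s")

    # Goodness of fit: Kolmogorov-Smirnov test against the fitted model.
    stat, pvalue = stats.kstest(samples, "expon", args=(loc, scale))
    print(f"KS statistic = {stat:.4f}, p-value = {pvalue:.3f}")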
In order to give you a glimpse of what exactly a model looks like, I am presenting
some relevant examples of derived analytical models from different layers of the
TCP/IP reference model. The goal here is to highlight the potential of developing
powerful mathematical models that would serve as the basis for advanced perfor-
mance evaluation. It is worth recalling that a good performance evaluation plan
should include at least two strategies. Therefore, for the researchers more inclined
to mathematical and statistical studies, developing analytical models would be the
first step that could be later validated by simulation or experimental work.
14 http://www.merriam-webster.com/dictionary/modelling/.
Video Game Characterization
Suppose you need to design a mechanism for improving the performance of online
users of a particular video game and you come up with an idea of a transport/net-
work cross-layer approach. Also, due to budget constraints, you are not able to per-
form large-scale measurements in real environments. You are now restricted to
simulation environments, where you can quickly deploy your strategy. One impor-
tant question here is: how can I generate synthetic traffic that mimics the application-
level behavior of the game? Modeling application-level systems for use in simulation
environments is an active area since new applications come into play very fre-
quently. Let’s have a look at one recent example of modeling for game traffic gen-
eration. In [56], Cricenti and Branch proposed a skewed mixture distribution for
modeling the packet payload lengths of some well-known first-person shooter (FPS) games. They argued that a combination of different PDFs could lead to more
precise models. They proposed the use of the ex-Gaussian distribution (also known
as exponentially modified Gaussian distribution – EMG) as a model for FPS traffic.
They showed, through empirical validation, that the ex-Gaussian distribution captures the underlying process of an FPS player well. Also, they discussed how
the model would be useful for building efficient traffic generators. The ex-Gaussian
PDF has the following representation:
$$f(x;\mu,\sigma,\lambda)=\lambda\,\exp\!\left(\lambda(\mu-x)+\frac{\lambda^{2}\sigma^{2}}{2}\right)\Phi\!\left(\frac{x-\mu-\lambda\sigma^{2}}{\sigma}\right),$$
where Φ is the Gaussian cumulative distribution function, μ and σ are the mean and standard deviation of the Gaussian component, and λ is the rate of the exponential component.
Figure 1.28 shows one result for the validation of this model, in a scenario with
four players using Counter-Strike. One can see that the model predicts the packet
size distribution well when compared to the empirical data.
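One practical appeal of this model is how cheap it is to sample from: an ex-Gaussian variate is simply the sum of a Gaussian and an exponential variate. The sketch below generates synthetic payload sizes this way; the parameter values are illustrative, not the ones fitted in [56].

    import numpy as np

    rng = np.random.default_rng(1)

    # Illustrative ex-Gaussian parameters (not the fitted values from [56]).
    mu, sigma = 80.0, 10.0   # Gaussian component, in bytes
    lam = 1 / 40.0           # rate of the exponential component

    # EMG = Normal(mu, sigma) + Exponential(rate=lam), so sampling is direct.
    sizes = rng.normal(mu, sigma, 10_000) + rng.exponential(1 / lam, 10_000)
    sizes = np.clip(sizes, 1, 1500).round()   # keep payloads within the MTU

    # The exponential tail skews the distribution toward larger packets.
    print(f"mean = {sizes.mean():.1f} B, median = {np.median(sizes):.1f} B")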
You can find traffic models like this one for virtually any type of Internet application. Many of them are implemented in network simulation environments (e.g., ns-2, ns-3, OMNET++, OPNET).
TCP Throughput
As 95% of the Internet traffic is carried by TCP, it is obvious that its characterization
(i.e., modeling) has gained attention in the last decades [51]. One of the most impor-
tant studies on modeling TCP was developed by Padhye, Firoiu, Towsley, and
Kurose [57]. Their goal was to develop an analytic characterization of the steady-state throughput of a bulk-transfer TCP flow, as a function of the loss rate and round-trip time. By bulk transfer, they mean a long-lived TCP flow. This is one of the most
cited papers in the TCP throughput modeling context. After seven pages discussing
and providing detailed information on how to build a precise model, they proposed
the approximation model in a single equation, as follows:
$$B(p)\approx\min\!\left(\frac{W_{\max}}{RTT},\ \frac{1}{RTT\sqrt{\frac{2bp}{3}}+T_{0}\,\min\!\left(1,\,3\sqrt{\frac{3bp}{8}}\right)p\left(1+32p^{2}\right)}\right),$$
where B(p) is the TCP throughput, W_max is the maximum TCP congestion window size, b is the number of packets acknowledged by a received ACK, p is the estimated packet loss probability, T_0 is the time-out period, and RTT is the round-trip time of the end-to-end connection. Padhye’s model is a simple yet very effective TCP model that has passed the test of time.
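Transcribing the approximation into code is straightforward. The helper below is a minimal sketch of the equation above (the function name and the example numbers are mine).

    from math import sqrt

    def padhye_throughput(p, rtt, t0, w_max, b=2):
        """Approximate steady-state TCP throughput, in packets per second.

        p: loss probability, rtt: round-trip time (s), t0: time-out (s),
        w_max: maximum congestion window (packets), b: packets per ACK.
        """
        # Triple-duplicate-ACK term plus the time-out term of the denominator.
        denom = (rtt * sqrt(2 * b * p / 3)
                 + t0 * min(1, 3 * sqrt(3 * b * p / 8)) * p * (1 + 32 * p ** 2))
        return min(w_max / rtt, 1 / denom)

    # Example: 1% loss, 100 ms RTT, 1 s time-out, 64-packet window cap.
    bw = padhye_throughput(p=0.01, rtt=0.1, t0=1.0, w_max=64)
    print(f"{bw:.0f} packets/s (~{bw * 1500 * 8 / 1e6:.2f} Mbit/s at 1500 B)")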
Recently, Loiseau et al. [58] proposed a new model for the TCP throughput following the steps of Padhye’s approach, but with different goals. They argue that Padhye’s model might be of limited use since it is not able to capture TCP’s throughput fluctuations at smaller time scales. Therefore, they aimed at characterizing the variations of TCP throughput around the mean and at smaller time scales to provide a complementary model to Padhye’s seminal work. The proposed model is based on classical Markov chain theory, where states and transitions are mapped to TCP’s features, such as the congestion window size, the AIMD mechanism, the packet loss process, and the like. Their main
Fig. 1.28 Analytical model validation for packet size (Counter-Strike) (Source: Cricenti and Branch [56])
contribution lies in a new method to describe deviations of the TCP throughput around the almost-sure mean. As it is a somewhat more complex model, I encourage the interested reader to see its details in Section 3 of their 2010 paper [58].
Aggregate Background Traffic
All network simulations need some kind of traffic between end systems, right? If you
are investigating the performance of a particular new mechanism (let’s say HTTP/2.0), you just set up the traffic sources and sinks (either in a client-server or peer-to-peer configuration), start generating simulated or experimental traffic, and collect the measurement results. But how about the noisy background traffic? The one that you know will be there in real networks! In the real Internet, there is a great deal of uncontrolled traffic (e.g., cross-traffic) that might have a severe effect on the performance of the system under test. Sometimes you just need a single TCP or UDP
traffic stream competing for resources (e.g., bandwidth) in the background. If you
want more precise models to represent a noisy background traffic, you need to con-
sider the self-similar (or long-range dependent) nature of the actual traffic in the
Internet. Research on the fractal nature of the Internet traffic gained lots of attention
for more than a decade, between the early 1990s and the mid-2000s. I refer the interested reader to the seminal work of Leland et al. [59] and to the many details in the book [60]. For
the purpose of this section, I just need to present you a model (the simpler, the better)
for background traffic that captures the essence of fractal behavior in the Internet. In
the paper, Self-Similarity Through High-Variability: Statistical Analysis of Ethernet
LAN Traffic at the Source Level, Willinger et al. [61] provide some explanations for
the occurrence of fractal traffic in local networks. They showed that a superposition of ON-OFF traffic sources yields self-similar traffic. ON-OFF sources generate packet trains. Their very important findings paved the way to building an efficient (i.e., precise
and parsimonious) and realistic model for synthetic traffic generation.
Here are some basic concepts first. An ON-OFF traffic source generates traffic by alternating between ON periods (i.e., sending traffic) and OFF periods (i.e., remaining silent), where the durations of both periods are independent and identically distributed (i.i.d.). Also, the sequences of ON and OFF periods are independent
from each other. One important aspect of Willinger’s model is that they use an infi-
nite variance distribution, which is well represented by heavy-tailed distributions,
such as Pareto. It is worth emphasizing that previous models assumed finite vari-
ance distributions, such as an exponential PDF. In a nutshell, modeling a self-similar
traffic with ON-OFF models is straightforward. A superposition of n ON-OFF
sources that follows a heavy-tailed distribution yields self-similar traffic. There are
basically a few parameters to consider, namely, the number of sources, n, and the
Hurst parameter to characterize the long tail of the PDF of each individual source.
The Hurst parameter is a number between 0.5 and 1, and it is well known as the
self-similarity index. The closer H is to 1, the more self-similar it is. For n, simulated
results indicate that 20 sources would be enough [62].
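Following Willinger’s recipe, the sketch below superposes n Pareto ON-OFF sources into one aggregate activity series. For ON/OFF periods with tail index α, the Hurst parameter is H = (3 − α)/2, so α = 1.4 targets H = 0.8; everything else (function names, slot granularity) is my own illustration.

    import numpy as np

    rng = np.random.default_rng(3)

    def onoff_source(slots, alpha=1.4, min_period=1.0):
        """0/1 activity per time slot, with Pareto ON and OFF period lengths."""
        out, on = [], True
        while len(out) < slots:
            period = int(np.ceil((rng.pareto(alpha) + 1) * min_period))
            out.extend([1 if on else 0] * period)
            on = not on
        return np.array(out[:slots])

    n, slots = 20, 10_000   # [62] suggests that about 20 sources are enough
    aggregate = sum(onoff_source(slots) for _ in range(n))

    # alpha = 1.4 gives H = (3 - 1.4) / 2 = 0.8: strongly self-similar traffic.
    print("mean number of active sources per slot:", aggregate.mean())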