This document introduces the concept of quality attenuation (ΔQ) in data networks. ΔQ refers to the inevitable increase in delay and potential for packet loss that occurs as data passes through network elements. The total ΔQ is conserved as it propagates through the network. ΔQ can be represented and measured to understand its sources and ensure it is sufficiently bounded for a given application. The document examines how ΔQ composes across network elements and can be allocated a "budget" to help manage quality of experience for end users.
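The way ΔQ composes across network elements can be sketched numerically: if each element's contribution is modelled as a discrete delay distribution plus a loss probability, the end-to-end delay distribution is the convolution of the per-hop distributions, and survival probabilities multiply. A minimal illustrative sketch in Python (the two hops and their values are invented for illustration, not taken from the deck):

```python
import numpy as np

def compose(hop_a, hop_b):
    """Compose two ΔQ contributions: convolve delay PMFs, combine loss."""
    delays_a, loss_a = hop_a
    delays_b, loss_b = hop_b
    # End-to-end delay distribution is the convolution of the per-hop ones.
    delays = np.convolve(delays_a, delays_b)
    # A packet arrives only if it survives every hop.
    loss = 1.0 - (1.0 - loss_a) * (1.0 - loss_b)
    return delays, loss

# Each hop: (delay PMF over 1 ms bins, loss probability) -- illustrative values.
access   = (np.array([0.7, 0.2, 0.1]), 0.001)  # mostly 0-2 ms
backhaul = (np.array([0.5, 0.3, 0.2]), 0.002)

delays, loss = compose(access, backhaul)
print("end-to-end delay PMF:", delays.round(3))
print("end-to-end loss probability:", round(loss, 6))
```

Comparing the composed distribution against an application's delay/loss "budget" is then a direct check of whether ΔQ is sufficiently bounded for that application.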
Introduction to ΔQ and Network Performance Science (extracts), by Martin Geddes
Introduction and summary sections from a long slide deck (165 slides) on network performance science and the associated mathematical breakthrough that makes it possible.
Essential science for broadband regulation, by Martin Geddes
Is 'net neutrality' an objectively measurable thing? The scientific report recently commissioned by Ofcom (the UK telecoms regulator) on Traffic Management Detection says 'no'. Furthermore, 'neutrality' isn't even what we want! This presentation is an annotated version from a webinar that summarises the report and suggests a way out of the 'neutrality' quagmire.
This presentation shares some of the latest advances in network performance science. What are the problems with how people today measure, model and manage performance? What are the state of the art approaches? A case study from Kent Public Service Network will illustrate the scope for improvements in all network operators.
"Our vision is to enable telecom operators to transform their business model and service delivery through innovation. Working with innovators like Connectem, our telecom technology team aims to help our carrier
partners transform their networks to handle signaling storms and future network loads through the use of our virtualization enablers."
— Patrick P. Gelsinger, Chief Executive Office, VMware
The Connectem Virtual Core for Mobile (VCM) solution uses network
functions virtualization (NFV) to enable mobile operators to address
challenges related to packet core networking, such as managing peaks and
troughs of load on the EPC for both control plane and data plane surges.
The VCM solution uses a unified, centrally-managed, elastic, scalable, and
robust virtualization platform that is powered by VMware®
. This document describes the Connectem VCM solution, powered by VMware vSphere®, and the extremely promising results it has achieved. In lab tests, VMware and Connectem were able to demonstrate that the optimized performance of LTE Evolved Packet Core (EPC) control plane processing on standard, commercial-grade servers
and virtualization software is greater than what is
available today on purpose-built systems and
software.
Quantifying QoS Requirements of Network Services: A Cheat-Proof Framework, by Academia Sinica
Despite all the efforts devoted to improving the QoS of networked multimedia services, the baseline for such improvements has yet to be defined. In other words, although it is well recognized that better network conditions generally yield better service quality, the exact minimum level of network QoS required to ensure satisfactory user experience remains an open question.
In this paper, we propose a general, cheat-proof framework that enables researchers to systematically quantify the minimum QoS needs for real-time networked multimedia services. Our framework has two major features: 1) it measures the quality of a service that users find intolerable via intuitive responses, and therefore reduces the burden on experiment participants; and 2) it is cheat-proof because it supports systematic verification of the participants' inputs. Via a pilot study involving 38 participants, we verify the efficacy of our framework by showing that even inexperienced participants can easily produce consistent judgments. In addition, by cross-application and cross-service comparative analysis, we demonstrate the usefulness of the derived QoS thresholds. Such knowledge will serve as an important reference in the evaluation of competitive applications, application recommendation, network planning, and resource arbitration.
The goal of this presentation is to share exemplars of important broadband Internet access performance phenomena. In particular, we highlight the critical role of stationarity.
When networks exhibit non-stationarity, they are useless for most applications. We show real-world examples of both stationarity and non-stationarity, and discuss the implications for broadband stakeholders.
These phenomena are only visible when using state-of-the-art high-fidelity metrics and measures that capture instantaneous flow.
Do you run an MPLS network to some or all of your branches? If so, you are likely wasting MPLS capacity backhauling Internet traffic.
For many organizations, much of their traffic is Internet-bound due to increased cloud usage. Backhauling Internet traffic over an expensive MPLS service adds latency and puts pressure on limited, costly MPLS capacity.
Kaoru Yano
Chairman of the Board
NEC Corporation
Today’s Agenda
1. Introduction to NEC
2. What is SDN?
3. Real-world deployments
4. Closing
ONS2015: http://bit.ly/ons2015sd
ONS Inspire! Webinars: http://bit.ly/oiw-sd
Watch the talk (video) on ONS Content Archives: http://bit.ly/ons-archives-sd
Power consumption in cloud centers is increasing rapidly due to the popularity of cloud computing. High power consumption not only leads to high operational cost; it also leads to high carbon emissions, which are not environmentally friendly. Cloud centers containing thousands of physical machines are becoming commonplace. In many instances, some physical machines host very few active virtual machines; migrating these virtual machines so that lightly loaded physical machines can be shut down, thereby reducing the power consumed, has been extensively studied in the literature. However, recent studies have demonstrated that migration of virtual machines is usually associated with excessive cost and delay. Hence, a new technique was recently proposed that balances load in cloud centers by migrating the extra tasks of overloaded virtual machines. The effectiveness of this task migration technique with respect to server consolidation has not been properly studied in the literature. In this work, the virtual machine task migration technique is extended to address the server consolidation issue. Empirical results reveal that the proposed technique is highly effective in reducing the power consumed in cloud centers.
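The consolidation goal in the abstract above (pack work onto as few physical machines as possible so the rest can be shut down) can be illustrated with a generic greedy first-fit-decreasing bin-packing sketch. This is not the paper's task migration algorithm, just a minimal illustration of the consolidation idea, with invented integer loads expressed as percent of machine capacity:

```python
def consolidate(vm_loads, capacity=100):
    """First-fit-decreasing placement: pack VM loads (percent of capacity)
    onto as few physical machines as possible, so idle machines can be
    shut down to save power. Returns the per-machine summed loads."""
    machines = []  # summed load on each powered-on physical machine
    for load in sorted(vm_loads, reverse=True):
        for i, used in enumerate(machines):
            if used + load <= capacity:  # fits on an already-on machine
                machines[i] += load
                break
        else:
            machines.append(load)        # power on a new machine
    return machines

# Ten VMs that would naively occupy ten machines fit on three.
loads = [60, 50, 40, 30, 30, 20, 20, 20, 20, 10]
machines = consolidate(loads)
print(len(machines), "physical machines needed")  # 3
```

Real consolidation must additionally weigh the migration cost and delay that the abstract highlights; this sketch only shows the packing objective.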
Planning for a (Mostly) Hassle-Free Cloud Migration | VTUG 2016 Winter Warmer, by Joe Conlin
There is no "one right way" when it comes to a cloud migration or cloud transformation, and in this 2016 VTUG talk I explore some of the methods that have proven successful in my experience.
How to get the most out of the synergy between Kemp LoadMaster and VMware products
http://vinfrastructure.it/it/2015/10/webinar-kemp-lm-e-vmware-vsphere-la-sinergia-perfetta/
For more discussions and topics around Service Providers, please visit our SP Community: http://cisco.com/go/serviceprovidercommunity
Download the full PDF report here: https://communities.cisco.com/docs/DOC-37834
When we get water, electricity, or gas delivered to our home or place of work we expect it to have predictable quality. Why isn't this also true of broadband? The answer is we don't (yet) have the "glue" to integrate performance in digital supply chains.
Migrating Into the Cloud: The Brownfield vs. Greenfield Opportunity, by Julia Smith
The IT world is a complex space, and companies may not have the money to completely replace all of their systems; we therefore need solutions that optimize what we already have. This white paper examines the differences between greenfield and brownfield environments, particularly as they pertain to cloud migrations.
Performance Testing: Putting Cloud Customers Back in the Driver’s Seat, by Compuware APM
Many businesses wrongly assume they will enjoy Google.com- and Amazon.com-like performance and consistency when they enlist cloud computing services from these and other major cloud providers.
The truth is that businesses must conduct due diligence and insist on business-relevant performance guarantees in their service level agreements (SLAs). The key to success in working with cloud providers lies in understanding exactly why the business is using the cloud and in testing performance levels from the realistic perspective of application end users, both before and after a cloud service provider is enlisted.
The issue of quality in networks has long been troublesome, resulting in endless deferral. ‘Quality’ and ‘QoS’ were hard issues for the pioneers to deal with, as the underlying mathematics was insufficient to support their ambitions. We have now filled in a significant part of the missing mathematical foundations. The culmination of that work is the ∆Q framework.
As a by-product of this framework, a new approach to sharing quality has become possible: a polyservice network. We believe that this is a significant conceptual and practical advance. However, we have (until now) lacked industry standard terminology to describe it.
This short presentation introduces the idea of a polyservice network, and contrasts it with pre-existing approaches to ‘priority QoS’.
These are the subject slides for the module MMS2401 - Multimedia System and Communication, taught at Shepherd College of Media Technology, affiliated with Purbanchal University.
"Our vision is to enable telecom operators to transform their business model and service delivery through innovation. Working with innovators like Connectem, our telecom technology team aims to help our carrier
partners transform their networks to handle signaling storms and future network loads through the use of our virtualization enablers."
— Patrick P. Gelsinger, Chief Executive Office, VMware
The Connectem Virtual Core for Mobile (VCM) solution uses network
functions virtualization (NFV) to enable mobile operators to address
challenges related to packet core networking, such as managing peaks and
troughs of load on the EPC for both control plane and data plane surges.
The VCM solution uses a unified, centrally-managed, elastic, scalable, and
robust virtualization platform that is powered by VMware®
. This document describes the Connectem VCM solution, powered by VMware vSphere®, and the extremely promising results it has achieved. In lab tests, VMware and Connectem were able to demonstrate that the optimized performance of LTE Evolved Packet Core (EPC) control plane processing on standard, commercial-grade servers
and virtualization software is greater than what is
available today on purpose-built systems and
software.
Quantifying QoS Requirements of Network Services: A Cheat-Proof FrameworkAcademia Sinica
Despite all the efforts devoted to improving the QoS of networked multimedia services, the baseline for such improvements has yet to be defined. In other words, although it is well recognized that better network conditions generally yield better service quality, the exact minimum level of network QoS required to ensure satisfactory user experience remains an open question.
In this paper, we propose a general, cheat-proof framework that enables researchers to systematically quantify the minimum QoS needs for real-time networked multimedia services. Our framework has two major features: 1) it measures the quality of a service that users find intolerable by intuitive responses and therefore reduces the burden on experiment participants; and 2) it is cheat-proof because it supports systematic verification of the participants' inputs. Via a pilot study involving 38 participants, we verify the efficacy of our framework by proving that even inexperienced participants can easily produce consistent judgments. In addition, by cross-application and cross-service comparative analysis, we demonstrate the usefulness of the derived QoS thresholds. Such knowledge will serve important reference in the evaluation of competitive applications, application recommendation, network planning, and resource arbitration.
The goal of this presentation is to share exemplars of important broadband Internet access performance phenomena. In particular, we highlight the critical role of stationarity.
When they have non-stationarity, networks are useless for most applications. We show real-world examples of both stationarity and non-stationarity, and discuss the implications for broadband stakeholders.
These phenomena are only visible when using state-of-the-art high-fidelity metrics and measures that capture instantaneous flow.
Do you run an MPLS network to some or all of your branches? If so, you are likely wasting MPLS capacity backhauling Internet traffic.
For many organizations, a lot of the traffic is Internet-bound due to increased cloud-usage. Backhauling Internet traffic over an expensive MPLS service adds latency and puts pressure on limited and expensive MPLS capacity.
Kaoru Yano
Chairman of the Board
NEC Corporation
Today’s Agenda
1. Introduction to NEC
2. What is SDN?
3. Real-world deployments
4. Closing
ONS2015: http://bit.ly/ons2015sd
ONS Inspire! Webinars: http://bit.ly/oiw-sd
Watch the talk (video) on ONS Content Archives: http://bit.ly/ons-archives-sd
Power Consumption in cloud centers is increasing
rapidly due to the popularity of Cloud Computing. High power
consumption not only leads to high operational cost, it also leads
to high carbon emissions which is not environment friendly.
Thousands of Physical Machines/Servers inside Cloud Centers
are becoming a commonplace. In many instances, some of the
Physical Machines might have very few active Virtual Machines,
migration of these Virtual Machines, so that, less loaded Physical
Machines can be shutdown, which in-turn aids in reduction of
consumed power has been extensively studied in the literature.
However, recent studies have demonstrated that, migration of
Virtual Machines is usually associated with excessive cost and
delay. Hence, recently, a new technique in which the load
balancing in cloud centers by migrating the extra tasks of
overloaded Virtual Machines was proposed. This task migration
technique has not been properly studied for its effectiveness
w.r.t. Server Consolidation in the literature. In this work, the
Virtual Machine task migration technique is extended to address
the Server Consolidation issue. Empirical results reveal excellent
effectiveness of the proposed technique in reducing the power
consumed in Cloud Centers.
Planning for a (Mostly) Hassle-Free Cloud Migration | VTUG 2016 Winter WarmerJoe Conlin
There is no "one right way" when it comes to a cloud migration or cloud transformation, and in this 2016 VTUG talk I explore some of the methods that have proven successful in my experience.
Come sfruttare al meglio la sinergia tra Kemp LoadMaster e il prodotti VMware
http://vinfrastructure.it/it/2015/10/webinar-kemp-lm-e-vmware-vsphere-la-sinergia-perfetta/
For more discussions and topics around Service Providers, please visit our SP Community: http://cisco.com/go/serviceprovidercommunity
Download the full PDF report here: https://communities.cisco.com/docs/DOC-37834
When we get water, electricity, or gas delivered to our home or place of work we expect it to have predictable quality. Why isn't this also true of broadband? The answer is we don't (yet) have the "glue" to integrate performance in digital supply chains.
Migrating Into the Cloud: The Brownfield vs. Greenfield OpportunityJulia Smith
The IT world is a complex space and companies may not have the money to completely replace all of their systems. Therefore we need solutions to optimize what we already have. This white paper examines the differences between Greenfield and Brownfield environments – particularly as it pertains to cloud migrations.
Performance Testing: Putting Cloud Customers Back in the Driver’s SeatCompuware APM
Many businesses wrongly assume they will enjoy Google.com- and Amazon.com-like performance and consistency when they enlist cloud computing services from these and other major cloud providers.
The truth is that businesses must conduct due diligence and insist on business-relevant performance guarantees in their service level agreements (SLAs). The keys for businesses success in working with cloud providers lies in understanding exactly why businesses are using the cloud and in testing performance levels from the realistic perspective of application end-users--both before and after a cloud service provider is enlisted.
The issue of quality in networks has been long being troublesome, resulting in endless deferral. It was a hard issue for the pioneers to deal with ‘quality’ and ‘QoS’ as the underlying mathematics was insufficient to support their ambitions. We have now filled in a significant part of the missing mathematical foundations. The culmination of that work is the ∆Q framework.
As a by-product of this framework, a new approach to sharing quality has become possible: a polyservice network. We believe that this is a significant conceptual and practical advance. However, we have (until now) lacked industry standard terminology to describe it.
This short presentation introduces the idea of a polyservice network, and contrasts it with pre-existing approaches to ‘priority QoS’.
This is the subject slides for the module MMS2401 - Multimedia System and Communication taught in Shepherd College of Media Technology, Affiliated with Purbanchal University.
ACS Seminar: Components & perceptions of SerVal in B2B cloud computing, by Roland Padilla
This file was presented to IT practitioners of the Australian Computer Society, particularly the cloud computing SIG (Special Interest Group). The research project determined five components of service value in a B2B context of cloud computing: service quality, service equity, confidence benefits, perceived sacrifices, and cloud service governance. Finally, the perceptions of cloud customers based on these components were measured, and their significance established through PLS-SEM (partial least squares structural equation modelling).
Digital supply chain quality management, by Martin Geddes
We've figured out how to send physical goods around the world: aggregate them into containers. We're still struggling with how to do the same for digital goods, which we disaggregate into packets. Here's the answer.
The Cloud Computing China Congress (CCCC http://www.cloudcomputingchina.org ) is specially designed for senior IT and line of business executives evaluating and making purchasing decisions in the areas of on-demand infrastructure and software services.
CIS 524 Discussion 1 post responses, by sleeperharwell
CIS 524 Discussion 1 post responses.
Respond to the colleagues posts regarding:
"Quality of Service" Please respond to the following:
• Your design team presents a project to you, in which most inputs seem to have about a 1.5-second delay before a response. The lead designer has decided this response is acceptable. Analyze response-time models and decide if the response time in the presented project is acceptable. Explain why it is or is not.
• Evaluate the importance quality of service has to designers. Choose two areas discussed in the textbook you would focus your attention to ensure quality of service for a team of designers that you were managing. Justify your choices.
NM’s post states the following:
• Your design team presents a project to you, in which most inputs seem to have about a 1.5-second delay before a response. The lead designer has decided this response is acceptable. Analyze response-time models and decide if the response time in the presented project is acceptable. Explain why it is or is not.
Hello Classmates and Professor,
Already halfway through the week.
You have to remember that quality of service refers to any technology that manages data traffic to reduce packet loss, latency, and jitter on the network; it controls and manages network resources by setting priorities for specific types of data on the network.
Providing sufficient quality of service (QoS) across IP networks is an increasingly important demand in today’s IT enterprise infrastructures. QoS is necessary for voice and video streaming over the network and for overall Internet usage.
Some applications that run on the network are sensitive to delay. These applications commonly use the UDP protocol as opposed to the TCP protocol. QoS helps manage packet loss, delay, and jitter on your network infrastructure.
Organizations can achieve QoS by using certain tools and techniques, such as jitter buffers and traffic shaping. For many organizations, QoS is included in the service-level agreement (SLA) with their network service provider to guarantee a certain level of performance.
https://www.networkcomputing.com/networking/basics-qos
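Traffic shaping of the kind mentioned above is commonly implemented with a token bucket: tokens accumulate at the permitted average rate up to a burst limit, and a packet may be sent only if enough tokens are available. A minimal illustrative sketch (the rate and burst values are invented for the example):

```python
class TokenBucket:
    """Token-bucket traffic shaper: tokens accrue at `rate` per second,
    capped at `burst`; sending a packet of n bytes consumes n tokens."""
    def __init__(self, rate, burst):
        self.rate = rate      # tokens (bytes) added per second
        self.burst = burst    # bucket capacity in tokens
        self.tokens = burst   # start with a full bucket
        self.last = 0.0       # time of the last conformance check

    def allow(self, now, nbytes):
        # Refill according to elapsed time, capped at the burst size.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if nbytes <= self.tokens:
            self.tokens -= nbytes
            return True   # conforms: send now
        return False      # exceeds the rate: queue or drop

# 1 KB/s average rate with a 2 KB burst: two back-to-back 1 KB packets pass,
# a third must wait until the bucket refills.
tb = TokenBucket(rate=1000, burst=2000)
print(tb.allow(0.0, 1000))  # True
print(tb.allow(0.0, 1000))  # True
print(tb.allow(0.0, 1000))  # False (bucket empty)
print(tb.allow(1.0, 1000))  # True (1 s of refill = 1000 tokens)
```

The shaper smooths bursts to the contracted rate, which is exactly the kind of behavior an SLA of the sort described above would pin down.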
Response time is the time between the submission of a request and the completion of the response. Your design must take this into consideration: after the user presses their option, it is critical that user expectations are met with fast task completion and little to no errors or extra handling processes and procedures. This affects the user’s interest and quality assessment; you want to keep the user from having frustrating experiences with your design, especially if they have previous experience with, and low tolerance for, delays. Your design should also be scalable: a scalable system is one that can handle increasing numbers of requests without adversely affecting response time and throughput.
Lastly, your design should be tested for the following:
* service time.
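The relationship between response time and throughput discussed in the post above can be made concrete with Little's law, N = X · R: the average number of requests in flight equals throughput times mean response time. A quick worked check with invented numbers, using the post's 1.5-second delay:

```python
# Little's law: in_flight (N) = throughput (X) * response_time (R).
throughput = 40.0     # requests per second (illustrative)
response_time = 1.5   # seconds, the delay from the design under discussion

in_flight = throughput * response_time
print(in_flight)      # 60.0 requests concurrently in the system

# Rearranged, R = N / X bounds the response time a capacity-limited
# system can deliver: with room for only 30 concurrent requests at the
# same throughput, the mean response time cannot exceed 0.75 s.
max_concurrency = 30
print(max_concurrency / throughput)  # 0.75
```

This gives a designer a quantitative way to decide whether a 1.5-second response is acceptable at the expected load, rather than relying on intuition alone.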
Tech Talk: Leverage the combined power of CA Unified Infrastructure Managemen..., by CA Technologies
Take the guesswork out of your infrastructure environment by combining CA Unified Infrastructure Management, CA Network Flow Analysis, and CA Application Delivery Analysis. Learn how to optimize your infrastructure by combining IT monitoring, network traffic monitoring, and application response time monitoring solutions to give you enhanced end-to-end visibility. This session will review the power of the three solutions and explain how you can easily combine them to give you the information you need.
For more information, please visit http://cainc.to/Nv2VOe
Cloud Lock-in vs. Cloud Interoperability - Indicthreads cloud computing conf..., by IndicThreads
Session presented at the 2nd IndicThreads.com Conference on Cloud Computing held in Pune, India on 3-4 June 2011.
http://CloudComputing.IndicThreads.com
Abstract: As cloud adoption increases, there is a growing concern about the lock-in of customers into the various cloud platforms. This session will discuss the major cloud platforms, the type of lock-in customers face on each of these platforms, and what each customer can do to minimize their lock-in.
Key takeaways for audience are:
Understand what is cloud lock-in
Types of cloud vendor lock-ins
What is cloud interoperability
Major initiatives around cloud interoperability standards
Goals, differences and players/proponents of these major standards
Steps to minimize cloud lock-in for your customers
Speaker: Ashwin Waknis is a Sr. IT professional with 15 years in the industry. Ashwin is currently head of the Cloud Professional Services business at Persistent Systems. Before that, Ashwin was a Sr. Product Manager at Cisco Systems, where he led major initiatives around knowledge management, enterprise portals, Web 2.0/social software, and enterprise search. For the last 2 years, Ashwin has been involved in cloud computing initiatives, first at Cisco and then at Persistent Systems. Ashwin has spoken at many customer workshops and events organized for educational institutes.
Broadband is a relatively new technology, and its underlying science is still being developed. We have long understood the 'right' units in other engineering disciplines: mass, length, hardness, etc. What is the 'right' unit for supply and demand for broadband?
This presentation discusses the need for having the right metric. This means solving two problems: the 'abstraction' gap, and the 'inference' gap. ∆Q is the ideal metric because it fills both gaps.
PhD completion seminar: SerVal in B2B cloud computing, by Roland Padilla
Cloud computing services are used in many businesses. However, little is known about the measurement of service value in B2B cloud computing services from a customer perspective. In the B2C service literature, service value has four components: customer perceptions of the quality, equity, benefits, and sacrifices for the delivered service. There is also a relationship between service value and customer satisfaction, and the intention to repurchase the service. This research project asks whether this model applies for B2B cloud computing services. To answer this question, we followed a two-phase design: contextualizing interviews (N=21) followed by a survey (N=328). The interviews involved managers responsible for managing services and for decision to repurchase services. The contextualizing interviews resulted in confirmation of the existing four components, and evidence for a fifth component we called “cloud service governance”. We then developed a 30-item survey instrument measuring the service value model. Users of B2B cloud services were then surveyed. We used an empirical technique called partial least squares structural equation modeling (PLS-SEM) to estimate the measurement and structural model which linked the perceptions of these service value components to repurchase intentions and customer satisfaction. We found broad support for the extended service value model in a B2B cloud computing context. Importantly, we found empirical support for the extra component called “cloud service governance”. Further research will explore whether this applies to general B2B services. However, we didn't find support for the component called “service equity”. This may be because of the lack of maturity in the cloud computing service market. As the market matures, re-testing may find this component also applies. However, it may also be that service equity is not as important in B2B services compared with B2C services. Both require future research. 
This research advances the literature by extending the established B2C service value model to the B2B context of cloud computing. Knowing this will enable customers to better understand the components of service value. Vendors will be able to measure how well their services are leading to value and satisfaction for their customers.
Internet Path Selection on Video QoE Analysis and Improvements, by IJTET Journal
Abstract: We systematically study a large number of Internet paths between popular video destinations and clients to create an empirical understanding of the location, existence, and repetition of failures. We investigate ways to lower a provider's costs for real-time Internet protocol television services through an IPTV architecture and through intelligent destination-shifting of selected services, and we explore ways to recover from quality of experience degradation. Using live television and video on demand as examples, we can take advantage of the different deadlines associated with each service to deliver these services effectively. We design and implement a prototype packet forwarding module called source initiated frame restoration, deploy it on nodes, and compare its performance to default Internet routing. We found that source initiated frame restoration outperforms IP path selection by providing higher on-screen perceptual quality. Failures are mapped to the desired video quality by reconstructing video clips and conducting user surveys. We then examine recovery from quality of experience degradation by choosing one-hop detour paths that preserve application-specific policies. A path-ranking methodology is used to find paths that deliver high-quality videos at low cost while occupying very little memory; by ranking videos according to their quality, size, and cost, the top-ranking videos can be retrieved by the client.
Test Your Cloud Maturity Level: A Practical Guide to Self Assessment, by David Resnic
Organizations start down the path to cloud, but with multiple approaches you can easily go down the wrong one. The keys to cloud success are understanding how it changes process, people, and technology. These slides, which CA Technologies VP of Product Marketing Andi Mann presented at Gartner ITxpo, take you through a self-assessment to determine where you are today and the right next steps for the future.
Future of Broadband workshop presentation - ITU Telecom World 2013, by Martin Geddes
Is "bandwidth" the right resource model for broadband? This presentation suggests that the telecoms industry is in a death spiral because it has fundamentally misunderstood the nature of the resource it offers. In its place it offers a "quality" model that has the properties we desire, and enables us to properly match supply to demand.
Gomez Blazing Fast Cloud Best Practices, by Compuware APM
Are you planning to deploy Web applications in the cloud? Will their performance be acceptable? What will you do to make sure?
There are a lot of good reasons to deploy applications in a cloud environment — but they are all forgotten if your application is slow or has poor availability. Poor performance results in unhappy, lost customers. Traditional data center techniques for monitoring, measuring, and optimizing Web application performance won’t work in the cloud. There are a new set of best practices that you need to learn to optimize the performance of your cloud-based Web applications.
Cloud Computing Roadmap Public Vs Private Vs Hybrid And SaaS Vs PaaS Vs IaaS ..., by SlideTeam
Incorporate the How Project Quality Is Managed PowerPoint presentation slides to determine how quality will be managed throughout a project via its processes and procedures. Analyze the quality-related concerns of the firm by using this effective PPT slideshow, and showcase the quality standards defined to manage overall quality. Provide detailed information about product development, design, and testing with the help of the quality management plan slideshow. Showcase various quality-related initiatives, a product quality assurance checklist, quality control initiatives, and quality assurance details using these project management PPT themes. Explain the control log, quality control, and assurance-issues reporting plan, and present the project inspection checklist. Present testing techniques used to evaluate materials and component properties in order to determine defects and discontinuities with the project quality assurance slides. The deck also lets you present key quality management tools and weekly quality defect occurrence with a check sheet. https://bit.ly/3gpFPdy
Network performance optimisation using high-fidelity measuresMartin Geddes
Communications service providers are seeking to increase their profitability and return on assets Predictable Network Solutions Ltd has the capability to support optimisation beyond traditional approaches to network data analytics. This capability is built around a robust scientific method. CSPs can benefit greatly from enhancing the fidelity of their measurements of critical aspects of network performance. Standard techniques fail to capture enough resolution. We have the missing leading-edge measurement capabilities that all CSPs need.
Similar to The Properties and Mathematics of Data Transport Quality (20)
This presentation outlines a methodology for managing timeliness and resource consumption hazards in complex distributed systems with statistically shared resources
The PEnDAR project investigated the application of stochastic engineering techniques to verification and validation of complex and cyber-physical systems
First webinar from the PEnDAR project, outlining the distributed system design challenge, and begin to investigate the application of advanced system engineering techniques for all phases of system design to ensure that critical cost and performance targets can be met. The goal is to make expensive performance failures and cost overruns a thing of the past!
GridMate - End to end testing is a critical piece to ensure quality and avoid...ThomasParaiso2
End to end testing is a critical piece to ensure quality and avoid regressions. In this session, we share our journey building an E2E testing pipeline for GridMate components (LWC and Aura) using Cypress, JSForce, FakerJS…
Climate Impact of Software Testing at Nordic Testing DaysKari Kakkonen
My slides at Nordic Testing Days 6.6.2024
Climate impact / sustainability of software testing discussed on the talk. ICT and testing must carry their part of global responsibility to help with the climat warming. We can minimize the carbon footprint but we can also have a carbon handprint, a positive impact on the climate. Quality characteristics can be added with sustainability, and then measured continuously. Test environments can be used less, and in smaller scale and on demand. Test techniques can be used in optimizing or minimizing number of tests. Test automation can be used to speed up testing.
The Art of the Pitch: WordPress Relationships and SalesLaura Byrne
Clients don’t know what they don’t know. What web solutions are right for them? How does WordPress come into the picture? How do you make sure you understand scope and timeline? What do you do if sometime changes?
All these questions and more will be explored as we talk about matching clients’ needs with what your agency offers without pulling teeth or pulling your hair out. Practical tips, and strategies for successful relationship building that leads to closing the deal.
Sudheer Mechineni, Head of Application Frameworks, Standard Chartered Bank
Discover how Standard Chartered Bank harnessed the power of Neo4j to transform complex data access challenges into a dynamic, scalable graph database solution. This keynote will cover their journey from initial adoption to deploying a fully automated, enterprise-grade causal cluster, highlighting key strategies for modelling organisational changes and ensuring robust disaster recovery. Learn how these innovations have not only enhanced Standard Chartered Bank’s data infrastructure but also positioned them as pioneers in the banking sector’s adoption of graph technology.
Securing your Kubernetes cluster_ a step-by-step guide to success !KatiaHIMEUR1
Today, after several years of existence, an extremely active community and an ultra-dynamic ecosystem, Kubernetes has established itself as the de facto standard in container orchestration. Thanks to a wide range of managed services, it has never been so easy to set up a ready-to-use Kubernetes cluster.
However, this ease of use means that the subject of security in Kubernetes is often left for later, or even neglected. This exposes companies to significant risks.
In this talk, I'll show you step-by-step how to secure your Kubernetes cluster for greater peace of mind and reliability.
Removing Uninteresting Bytes in Software FuzzingAftab Hussain
Imagine a world where software fuzzing, the process of mutating bytes in test seeds to uncover hidden and erroneous program behaviors, becomes faster and more effective. A lot depends on the initial seeds, which can significantly dictate the trajectory of a fuzzing campaign, particularly in terms of how long it takes to uncover interesting behaviour in your code. We introduce DIAR, a technique designed to speedup fuzzing campaigns by pinpointing and eliminating those uninteresting bytes in the seeds. Picture this: instead of wasting valuable resources on meaningless mutations in large, bloated seeds, DIAR removes the unnecessary bytes, streamlining the entire process.
In this work, we equipped AFL, a popular fuzzer, with DIAR and examined two critical Linux libraries -- Libxml's xmllint, a tool for parsing xml documents, and Binutil's readelf, an essential debugging and security analysis command-line tool used to display detailed information about ELF (Executable and Linkable Format). Our preliminary results show that AFL+DIAR does not only discover new paths more quickly but also achieves higher coverage overall. This work thus showcases how starting with lean and optimized seeds can lead to faster, more comprehensive fuzzing campaigns -- and DIAR helps you find such seeds.
- These are slides of the talk given at IEEE International Conference on Software Testing Verification and Validation Workshop, ICSTW 2022.
GraphSummit Singapore | The Art of the Possible with Graph - Q2 2024Neo4j
Neha Bajwa, Vice President of Product Marketing, Neo4j
Join us as we explore breakthrough innovations enabled by interconnected data and AI. Discover firsthand how organizations use relationships in data to uncover contextual insights and solve our most pressing challenges – from optimizing supply chains, detecting fraud, and improving customer experiences to accelerating drug discoveries.
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -...DanBrown980551
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions) and sensitivity analyses;
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
Communications Mining Series - Zero to Hero - Session 1DianaGray10
This session provides introduction to UiPath Communication Mining, importance and platform overview. You will acquire a good understand of the phases in Communication Mining as we go over the platform with you. Topics covered:
• Communication Mining Overview
• Why is it important?
• How can it help today’s business and the benefits
• Phases in Communication Mining
• Demo on Platform overview
• Q/A
Dr. Sean Tan, Head of Data Science, Changi Airport Group
Discover how Changi Airport Group (CAG) leverages graph technologies and generative AI to revolutionize their search capabilities. This session delves into the unique search needs of CAG’s diverse passengers and customers, showcasing how graph data structures enhance the accuracy and relevance of AI-generated search results, mitigating the risk of “hallucinations” and improving the overall customer journey.
zkStudyClub - Reef: Fast Succinct Non-Interactive Zero-Knowledge Regex ProofsAlex Pruden
This paper presents Reef, a system for generating publicly verifiable succinct non-interactive zero-knowledge proofs that a committed document matches or does not match a regular expression. We describe applications such as proving the strength of passwords, the provenance of email despite redactions, the validity of oblivious DNS queries, and the existence of mutations in DNA. Reef supports the Perl Compatible Regular Expression syntax, including wildcards, alternation, ranges, capture groups, Kleene star, negations, and lookarounds. Reef introduces a new type of automata, Skipping Alternating Finite Automata (SAFA), that skips irrelevant parts of a document when producing proofs without undermining soundness, and instantiates SAFA with a lookup argument. Our experimental evaluation confirms that Reef can generate proofs for documents with 32M characters; the proofs are small and cheap to verify (under a second).
Paper: https://eprint.iacr.org/2023/1886
Pushing the limits of ePRTC: 100ns holdover for 100 daysAdtran
At WSTS 2024, Alon Stern explored the topic of parametric holdover and explained how recent research findings can be implemented in real-world PNT networks to achieve 100 nanoseconds of accuracy for up to 100 days.
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo...James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. Constant focus on speed to release software to market, along with the traditional slow and manual security checks has caused gaps in continuous security as an important piece in the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their applications supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with PASSION for technology and making things work along with a knack for helping others understand how things work. He comes with around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations in CI/CD and application security integrated in software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
In his public lecture, Christian Timmerer provides insights into the fascinating history of video streaming, starting from its humble beginnings before YouTube to the groundbreaking technologies that now dominate platforms like Netflix and ORF ON. Timmerer also presents provocative contributions of his own that have significantly influenced the industry. He concludes by looking at future challenges and invites the audience to join in a discussion.
A tale of scale & speed: How the US Navy is enabling software delivery from l...sonjaschweigert1
Rapid and secure feature delivery is a goal across every application team and every branch of the DoD. The Navy’s DevSecOps platform, Party Barge, has achieved:
- Reduction in onboarding time from 5 weeks to 1 day
- Improved developer experience and productivity through actionable findings and reduction of false positives
- Maintenance of superior security standards and inherent policy enforcement with Authorization to Operate (ATO)
Development teams can ship efficiently and ensure applications are cyber ready for Navy Authorizing Officials (AOs). In this webinar, Sigma Defense and Anchore will give attendees a look behind the scenes and demo secure pipeline automation and security artifacts that speed up application ATO and time to production.
We will cover:
- How to remove silos in DevSecOps
- How to build efficient development pipeline roles and component templates
- How to deliver security artifacts that matter for ATO’s (SBOMs, vulnerability reports, and policy evidence)
- How to streamline operations with automated policy checks on container images
A tale of scale & speed: How the US Navy is enabling software delivery from l...
The Properties and Mathematics of Data Transport Quality
1. The Properties and Mathematics of Data Transport Quality
A Brief Introduction to ’Quality’ in Data Networks; its Interaction with End User Experience, its Conservation, Propagation, and how it can be Traded, Costed and Managed.
Neil Davies
Predictable Network Solutions Limited
neil.davies@pnsol.com
Ofcom, Riverside House
5th February 2009
Neil Davies, Properties of Data Transport Quality – an Introduction. (c) 2009 Predictable Network Solutions Ltd
2. Outline
1 Delivering “Quality”
   Layered Viewpoint
   “Would you Like Quality with that, Sir?”
   Relationship with End User Experience
2 Quality Attenuation
   Fundamental Properties
   Representation and Measurement
   Compositional Properties
3 Exploiting the Understanding
   Applying it to the Application(s)
   Applying it to the Network(s)
   Applying it to the Economics
4. Delivering Quality
Layered View
For an end-user to achieve a certain quality of experience, an application interacts (with a server or another application) across the network.
For any particular application, the quality the user experiences will depend on how quickly the application can interact (with the remote peer) across the network.
5. Delivering Quality
Not Just Quantity – Some Frequently Asked Questions:
1 Doesn’t it depend on the specific application? Yes and no. Badly designed or written applications can make things worse; however, the delivered end-to-end quality now typically dominates the delivered quality of experience.
2 Isn’t more bandwidth more quality? No. It doesn’t matter how much bandwidth you deliver: if the delay is large (or rapidly varying) enough, or the loss rate is high enough, then the application will fail.
3 So why do people keep on talking about adding bandwidth as the answer? Adding more resources may resolve some issues under limited circumstances. We’ll return to this point later.
Any given application’s effectiveness depends on end-to-end quality being available in sufficient quantity – no more, no less.
7. The ’Just Add Quality’ Myth
What have Silence, Cold, Dark and Quality got in common?
You can no more ’add quality’ to a network than you can ’add silence’ to a noisy room.
Just as silence is the absence of noise, what is colloquially called ’quality’ in data networking is really the absence of something.
Every network element attenuates the quality – introduces delay and (the potential for) loss – every transmission line, switch, router, etc.
People may talk about quality and even desire it, but quality attenuation is the physical property we have to work with.
This is a key concept – having introduced quality attenuation we can start re-framing the issues in a coherent framework.
8. Delivering Quality ≡ Bounding Quality Attenuation
Introduction to Properties of Quality Attenuation (∆Q)
In data networks, ’quality of service’ is achieved when the delivered quality attenuation, over the end-to-end path, is suitably bounded.
We use the concept of quality attenuation so frequently that we refer to it as ’∆Q’ – think of it as the change in quality.
This inevitable ∆Q comes in two forms: immutable – fixed by physics – and mutable, which can be managed and traded.
10. Examples – 1
What Bounded Quality Attenuation Delivers
You want to assure some average performance for typical (10kb) HTTP web page access – what ∆Q should you aspire to deliver?
What is the dependency on the one-way delay?
What is the dependency on the loss rate?
The dependency is on both delay and loss, not delay or loss.
11. Examples – 2
Applies to Real-time Services As Well
You want to assure some perceived quality for a G.711 VoIP call – what ∆Q should you aspire to deliver?
What is the dependency on the one-way delay?
What is the dependency on the loss rate?
The dependency is on both delay and loss, not delay or loss.
13. Quality Attenuation
Properties of ∆Q
It is Conserved:
it only ever increases, and can’t be ’destroyed’; you can’t ’un-delay’ packets or ’un-lose’ them.
Hence it is monotonically increasing – it ’adds’, but not by simple arithmetic.
It manifests itself in two different ways:
1 ∆Q associated with the data transport for a single user or application instance — an application’s viewpoint.
2 ∆Q associated with a network element (for example a switch/router where multiplexing occurs), applying to all the streams of data that are flowing through that point — a network operations viewpoint.
The total ∆Q at that network element is still conserved – but can be ’traded’ through differential allocation amongst the individual data streams.
15. Representing Quality Attenuation – 1
Observing and Predicting Outcomes
From a performance / ∆Q point of view, the interest is in outcomes. If event A should lead to B occurring, the measures are:
how frequently B actually occurs;
the time interval between A and B.
We can express both the aspiration (here 50% of outcomes occurring within 3s, 95% within 10s – the blue stepped line) and what was delivered (the black line).
As the delivered curve is always to the left of and above the aspiration curve, the aspiration was met and ’quality’ delivered.
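The aspiration/delivered comparison on this slide can be sketched in a few lines of code. This is a minimal illustration, not PNSOL tooling: the function names and sample values are invented. Delivered outcomes form an improper CDF (lost outcomes never complete, so the curve never reaches 1), and the aspiration is met when the delivered curve lies on or above every point of the stepped line.

```python
# Sketch: delivered ∆Q as an empirical improper CDF, checked against
# an aspiration expressed as (time, probability) points.

def improper_cdf(samples):
    """samples: one entry per attempted outcome; a float is the
    observed A-to-B time, None means the outcome was lost.
    Returns F(t) = fraction of ALL attempts completed within t.
    Because lost outcomes never complete, F tends to 1 - loss_rate,
    not 1: an improper CDF."""
    n = len(samples)
    delays = sorted(d for d in samples if d is not None)
    def F(t):
        # count completed outcomes no slower than t, over all attempts
        return sum(1 for d in delays if d <= t) / n
    return F

def meets_aspiration(F, aspiration):
    """aspiration: list of (t, p) points of the stepped line; met when
    the delivered curve is on or above every point."""
    return all(F(t) >= p for t, p in aspiration)
```

For example, with six outcomes in 1s, three in 5s and one loss, the 50%-within-3s point is met but a 95%-within-10s point is not, since only 90% of attempts ever complete.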
16. Representing Quality Attenuation – 2
Focusing on the ’Tail’
It is the tail of distributions that is of most interest¹.
These graphs represent the same outcome: 50% delivered within 54ms; 90% within 82ms; 99% within 91ms; with 0.5% packet loss.
¹ Maths Note: This is a Cumulative Distribution Function (CDF); technically they are improper CDFs, as P does not tend to 1 as t → ∞.
17. Representing Quality Attenuation – 3
Comparing What is Actually Delivered
Compare the previous slide’s delivered ∆Q:
50% within 54ms; 90% within 82ms; 99% within 91ms; 105ms – 0.5% loss.
with this graph:
50% within 43ms; 90% within 60ms; 95% within 70ms; 2000ms – 2% loss.
Same ISP, same application, same two end points – just a different time of day.
18. Quality Attenuation as the Representation Measure
Focusing on the quality attenuation (∆Q) – especially when the upper bound on ∆Q that an application can tolerate is known – is the key.
Importantly, all application requirements can be reduced to this form – it says: “Within this quality attenuation from A to B, deliver to me this (minimum) rate.”
We have a ’budget’ (a bound on ∆Q) to work within!
How can this budget be divided? How is it allocated across the network elements on the end-to-end path? What is a reasonable expectation on, for example, the access network?
20. Quality Attenuation Budgets
How Aspects of the End-to-End Path Contribute
The contribution of any network element can be broken down into three components:
G – dependent on geographical and other fixed factors.
S – dependent on the packet size and the transmission media.
V – the variability; dependent on many factors, see below.
1 G is a constant for a given path; it incorporates factors such as propagation delay and residual error rates for transmission media. It is immutable.
2 S is fixed for a given packet size over a particular path (given that the path is fixed) – it captures the delay of processing packets. It is immutable.
3 V is the effect of the rest of the network on this traffic – this is mutable and often highly variable – it is this component that requires management.
These ’sum’ (convolve) component-wise for each network element traversed.
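The ’sum by convolution’ point above can be made concrete. In this hedged sketch (function names and bin values are invented, not from the slides), each element’s delay is a discrete probability mass function over equal-width time bins, plus a loss probability; composing two elements convolves the delay PMFs and combines loss via survival probabilities:

```python
# Sketch: component-wise composition of per-element ∆Q.

def convolve(p, q):
    """Convolve two discrete delay PMFs (lists indexed by time bin):
    the distribution of the sum of two independent delays."""
    out = [0.0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def compose(elem1, elem2):
    """Each element is (delay_pmf, loss_prob). Delays add by
    convolution; a packet survives only if it survives both elements,
    so losses combine multiplicatively on the survival side."""
    pmf1, loss1 = elem1
    pmf2, loss2 = elem2
    return convolve(pmf1, pmf2), 1.0 - (1.0 - loss1) * (1.0 - loss2)
```

Note that loss does not ’add’ arithmetically – two elements losing 1% and 2% yield 2.98% end-to-end, not 3% – which is one sense in which ∆Q ’adds, but not by simple arithmetic’.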
21. Composing ∆Q
If it helps, you can think of:
G as being the time for a packet of zero length to get from A to B (a packet that pays no serialisation/de-serialisation overhead but has to gain access to the transmission medium). For ADSL that would be [0–1.5ms]² (256k) + propagation time; for UMTS it would be [0–10ms] + propagation time.
S as being the time to transmit a packet of a given size; this is dependent on the packet size and the level-2 networking technology overheads (e.g. quantisation for ATM, frame transmission time in wireless) and incorporates any time that it takes the transmission medium to become available for the next packet/frame (inter-frame gap). This gap is 0 for ADSL and UMTS but a fixed 9.6µs for 10Mbps Ethernet.
² Uniform distribution between the bounds.
22. Measuring G, S and V
Beginning to See the Art of the Possible
Here we have taken a sample run and grouped the times by packet size (in this case the number of ATM cells). From this we can deduce:
G ≈ 8.2ms
S ≈ 2.3ms per cell
V ∈ [0 … 20ms]
It is the magnitude of V that determines the customer experience.
Both ends are 256/512k ADSL tails using IP Stream, one in BA6 and the other in CT2, going via an ISP based in Telehouse North – the Central link was not being used for anything else. There was ≈ 0.5% packet loss.
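The deduction on this slide can be sketched as a simple fit. This is an illustrative reconstruction, not PNSOL’s actual measurement tooling: the function name and sample data are invented. For each packet size, the minimum observed delay approximates the queue-free baseline G + S·size; a least-squares fit on those minima gives G (intercept) and S (slope), and V is the spread of observations above that baseline:

```python
# Sketch: estimating G, S and the extent of V from timed probes
# grouped by packet size.

def estimate_g_s_v(samples):
    """samples: dict {size_in_cells: [observed one-way delays in ms]}.
    Returns (G, S, max_V): intercept, slope, and the largest residual
    above the fitted G + S*size baseline."""
    sizes = sorted(samples)
    minima = [min(samples[s]) for s in sizes]   # least-queued probes
    n = len(sizes)
    mx = sum(sizes) / n
    my = sum(minima) / n
    # least-squares slope and intercept over the per-size minima
    s_hat = (sum((x - mx) * (y - my) for x, y in zip(sizes, minima))
             / sum((x - mx) ** 2 for x in sizes))
    g_hat = my - s_hat * mx                     # delay of a zero-size packet
    v_max = max(d - (g_hat + s_hat * s)
                for s in sizes for d in samples[s])
    return g_hat, s_hat, v_max
```

With synthetic probes built around G = 8.2ms and S = 2.3ms per cell, the fit recovers those constants and reports the queueing spread as V.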
23. Compositional Properties – Taking Stock
1 We have the tools to measure and analyse where ∆Q is accruing – or, alternatively, to divide up a ∆Q budget and allocate it to the elements of the network.
2 We know that high-quality services are feasible (there can be a reasonable bound on ∆Q in access networks).
3 The key to delivering quality is managing and controlling V. We can’t eliminate V; it comes with using statistical multiplexing.
How do we ’tame’ V? We need to look a little more deeply into some other properties of ∆Q.
24. Why ∆Q, not ’Delay and Loss’
Two Degrees of Freedom
Every queue has two degrees of freedom:
Fix two parameters and you’ve fixed the third.
Fix one parameter and you establish a relationship between the other two.
You can’t choose all three values arbitrarily.
For a fixed load, if you want to reduce loss you have to increase delay; for a fixed loss, as load increases delay must increase; and so on.
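The two-degrees-of-freedom trade-off can be illustrated with a textbook finite-buffer queue. This is an assumed M/M/1/K model, chosen only for illustration (the slides do not specify a queueing model): with the offered load held fixed, enlarging the buffer reduces loss but increases delay, so all three quantities cannot be chosen independently.

```python
# Sketch: loss/delay trade-off in an M/M/1 queue with room for K
# packets, service rate 1 (delays are in units of service time).

def mm1k(rho, K):
    """Return (loss_prob, mean_delay) for offered load rho != 1."""
    # probability an arriving packet finds the buffer full (is lost)
    loss = (1 - rho) * rho ** K / (1 - rho ** (K + 1))
    # mean number of packets in the system
    L = rho / (1 - rho) - (K + 1) * rho ** (K + 1) / (1 - rho ** (K + 1))
    # Little's law over the accepted (non-lost) traffic
    W = L / (rho * (1 - loss))
    return loss, W
```

At rho = 0.9, growing K from 5 to 20 cuts the loss by roughly an order of magnitude while the mean delay roughly trebles: the same fixed load, traded between the two remaining degrees of freedom.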
25. Delivering “Quality”
Quality Attenuation
Exploiting the Understanding
Fundamental Properties
Representation and Measurement
Compositional Properties
That 'Question'
Why do People See Bandwidth as the Answer?
“So why do people keep on talking about adding bandwidth as the answer?”
1 Providers are not managing V; they are taking whatever 'emerges' from the day-to-day operation of their network.
We'll come to what that means for the consumer and the operator shortly.
2 The V they deliver to their customers is an arbitrary and unmanaged relationship between delay and loss.
Two degrees of freedom, along with the offered load creeping up day by day.
3 Their customers complain because their applications are not delivering sufficient quality of experience.
The delay and loss, ∆Q, being delivered to customers is too high (over the application's implicit budget).
4 They increase the capacity of links/network to reduce the offered load, while the physics allows and they can afford it.
5 Because of (1) they return to step (2) and iterate.
Hence ISP — the Internet (in-)Solvency Problem.
Two Degrees of Freedom ⇒ a Trading Space with Two Dimensions
Trading in Loss and Delay – ∆Q as a Partial Order
This is how trading within a given ∆Q (or at least its V component) can be visualised. Individual data streams can be given different loss and delay characteristics, so that where contention for resources occurs (which is where the queues are in a network) the resulting ∆Q can be differentially distributed.
For example: traffic in B2 gets lower loss than traffic in C2, but equal delay; lower delay than B3, but equal loss; and both lower delay and loss than C3.
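The partial order can be made concrete with a small sketch (my illustration; the class labels follow the slide's grid, the numeric bounds are invented). A class dominates another only if it is no worse in both loss and delay; classes that are better on one axis and worse on the other are incomparable.

```python
# Illustrative sketch of ∆Q quality classes as a partial order.
# Each class is a (loss bound, delay bound) pair; lower is better on both axes.
# The numeric bounds are invented for illustration.

classes = {
    "B2": (0.01, 20.0),   # (loss fraction, delay ms)
    "C2": (0.05, 20.0),   # same delay as B2, higher loss
    "B3": (0.01, 50.0),   # same loss as B2, higher delay
    "C3": (0.05, 50.0),   # worse than B2 on both axes
}

def at_least_as_good(a, b):
    """True if class a is no worse than class b in BOTH loss and delay."""
    (la, da), (lb, db) = classes[a], classes[b]
    return la <= lb and da <= db

assert at_least_as_good("B2", "C3")      # B2 dominates C3
assert not at_least_as_good("C3", "B2")
# C2 and B3 are incomparable: each is better on one axis only.
assert not at_least_as_good("C2", "B3") and not at_least_as_good("B3", "C2")
print("partial order checks passed")
```

The incomparable pairs are the point: ∆Q is a partial order, not a single 'quality' number, which is why "better" and "worse" alone cannot describe differentiated treatment.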
Quality Trading in Data Links
More Properly: Quality Attenuation Trading
The properties described above have many interesting consequences for what is possible and, more valuably, for what is not possible with data networks.
One of the more interesting consequences is that any 'pipe' (a path over which data can be delivered within a bounded ∆Q) can carry multiple, differentiated, data transport services, even though the 'pipe' itself doesn't support differentiation.
Alternatively, given multiple data streams, the characteristics of the 'pipe' needed can be calculated, so that the collected set of traffic can be carried with all of their individual ∆Q constraints met.
This means that a 'differentiated service' network can be built on top of an existing 'single service' network, if you understand the characteristics and constraints properly.
This offers an incremental (and hopefully lower-cost) route to delivering differentiated services, which is useful, as differentiated services are essential for the long-term economic viability of data networks.
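Because ∆Q composes along a path — delays add (the delay distributions convolve) and delivery probabilities multiply — the 'pipe' needed for a set of streams can be checked numerically. The sketch below is my illustration with invented numbers: it convolves discrete per-hop delay distributions and composes loss to test a stream's end-to-end ∆Q budget.

```python
# Illustrative composition of ∆Q along a path.
# Each hop contributes (loss probability, discrete delay distribution).
# End-to-end: delivery probabilities multiply, delay PMFs convolve.
# All numbers are invented for illustration.

def convolve(pmf_a, pmf_b):
    """Convolve two delay PMFs given as {delay_ms: probability}."""
    out = {}
    for da, pa in pmf_a.items():
        for db, pb in pmf_b.items():
            out[da + db] = out.get(da + db, 0.0) + pa * pb
    return out

hops = [
    (0.001,  {1: 0.7, 5: 0.2, 10: 0.1}),   # access hop
    (0.0005, {1: 0.9, 3: 0.1}),            # metro hop
    (0.0001, {1: 0.95, 2: 0.05}),          # core hop
]

e2e_delivery = 1.0
e2e_delay = {0: 1.0}
for loss, pmf in hops:
    e2e_delivery *= (1.0 - loss)           # losses compose multiplicatively
    e2e_delay = convolve(e2e_delay, pmf)   # delays compose by convolution

# Check a stream's budget: what fraction of delivered packets arrive
# within a 12 ms end-to-end delay bound?
within = sum(p for d, p in e2e_delay.items() if d <= 12)
print(f"end-to-end loss {1 - e2e_delivery:.4f}, "
      f"P(delay <= 12 ms) = {within:.3f}")
```

The same machinery run in reverse — start from the streams' budgets and solve for acceptable per-hop contributions — is the 'calculate the pipe needed' direction described above.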
Outline
1 Delivering “Quality”
Layered Viewpoint
“Would you Like Quality with that, Sir?”
Relationship with End User Experience
2 Quality Attenuation
Fundamental Properties
Representation and Measurement
Compositional Properties
3 Exploiting the Understanding
Applying it to the Application(s)
Applying it to the Network(s)
Applying it to the Economics
Delivering Quality
There is No Quality in Averages
Averages are dangerous: people do not remember 'averages', they remember extremes. So delivering quality to users is about making bad experiences rare.
This is the graph of the 95th centile of the time to complete the same 10Kb HTTP transfer presented earlier. Working at this centile imposes more stringent limitations on ∆Q.
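The point is easy to demonstrate numerically (my illustration, with synthetic data): a small fraction of very slow completions barely moves the mean but dominates the 95th centile, which is what users actually experience and remember.

```python
# Illustrative: means hide the extremes that users remember.
# Synthetic completion times: mostly fast, with a 5% tail of slow outliers.
import statistics

times = [0.30] * 95 + [3.00] * 5   # 95 fast transfers, 5 slow ones

mean = statistics.fmean(times)
p95 = statistics.quantiles(times, n=20)[-1]   # 95th-centile cut point

print(f"mean = {mean:.3f} s, 95th centile = {p95:.3f} s")
```

The mean (0.435 s) looks respectable; the 95th centile sits near the 3-second tail. An operator managing to the average would never see the problem.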
Establishing the Relationship Between QoE and ∆Q
So what is the quantity of quality that is needed to achieve some task? There are three basic ways of establishing this (see the note below):
1 Emulate ∆Q: connect the parts of the application together through a suitable 'network degrader'.
Expensive and tedious, and it can be difficult to reproduce faults; however, this should be part of any validation process.
2 Simulate both the application and the network (simulating everything).
Expensive and often restricted by computation; the supplied libraries (for network protocols) often don't behave the same way as real implementations.
3 Analytically: mathematically model the behaviour and ∆Q, then solve analytically or numerically.
Cheaper; used to show feasibility and trends. Can be used to formulate hypotheses to be tested by method (1) or (2).
Note for the unwary: most of the tools out there do not work properly. They will introduce loss and delay, but not in the same way a real network will.
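A minimal 'network degrader' of the kind method (1) needs can be sketched in a few lines. This is an illustrative toy, not a real tool, and it exhibits exactly the weakness the note warns about: it samples loss and delay independently, whereas a real network produces correlated, load-dependent behaviour.

```python
# Toy 'network degrader': applies random loss and delay to a packet stream.
# Illustrative only -- real networks produce correlated, load-dependent
# loss and delay, which this independent-sampling toy does not capture.
import random

def degrade(packets, loss_prob, min_delay_ms, max_delay_ms, seed=1):
    """Yield (packet, delay_ms) for delivered packets; drop the rest."""
    rng = random.Random(seed)             # seeded for reproducibility
    for pkt in packets:
        if rng.random() < loss_prob:
            continue                      # packet lost
        yield pkt, rng.uniform(min_delay_ms, max_delay_ms)

delivered = list(degrade(range(1000), loss_prob=0.02,
                         min_delay_ms=15.0, max_delay_ms=25.0))
print(f"delivered {len(delivered)} of 1000 packets")
```

Placing something like this between the components of an application under test lets the QoE be observed as a function of the imposed ∆Q.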
An Example
Loading Google’s Front Page
This is a DNS lookup followed by a small HTTP transfer, with an allowance for server response times.
It would have a median/75th-centile/95th-centile time to complete of 0.73s, 0.76s and 0.81s, given a round-trip time in the range 125ms to 200ms.
This rises to 2.33s, 2.69s and 3.17s if the round-trip range is 125ms to 1000ms.
The downstream rates needed to support this 'quality' vary from 34.3kbps to 10.8kbps: the same amount of data over a longer time.
This shows there can be an advantage (to the provider) in giving 'bad' quality, as it reduces the instantaneous offered load; conversely, 'good' quality can increase both the peak offered load and its variability.
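The quoted rates follow directly from spreading the same payload over different completion times. Working backwards from the slide's figures, the transfer is roughly 25 kbit (about 3.1 kB); that payload size is my inference, not stated on the slide.

```python
# The slide's rates fall out of: rate = payload / completion time.
# A ~25 kbit payload (inferred from the quoted figures, not stated
# on the slide) reproduces both numbers to within rounding.
PAYLOAD_KBIT = 25.0

for completion_s in (0.73, 2.33):
    rate_kbps = PAYLOAD_KBIT / completion_s
    print(f"complete in {completion_s:.2f} s -> "
          f"needs {rate_kbps:.1f} kbps downstream")
```

The slower completion needs roughly a third of the instantaneous downstream rate, which is the provider's perverse incentive described above.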
Nature of ’The Service’ in Current Networks
Specifically Access Networks
Current access networks only offer a single service, and the service is not one that they 'specified': it is what 'emerges' during operation. In this service an application's data traffic has:
1 No isolation from the effects of other traffic flowing to/from that end user.
2 No isolation from the effects of other traffic flowing to/from other end users (or even other ISPs).
Which leads to:
1 People shutting down all their applications and disconnecting other computers so that they can play an interactive game, make a VoIP call or stream some video.
2 Real annoyance, as there is nothing end users can do about other users' traffic.
Access network providers do try to do something about (2): BT uniformly shares bandwidth at its BRASes, and Comcast buckets 'heavy' users into a constrained service class.
What consumers need is assured bounds on quality attenuation for some portions of their traffic — then the applications they wish to use will deliver what they require.
The UK is Well Positioned
Though More by Accident than Design (or: Good Engineering Principles Win Through)
These comments are specifically about IPStream. It is the only access network with sufficient data about its design and operation in the public domain to allow reliable conclusions to be drawn.
BT's planning rule for capacity between a BRAS and a DSLAM is that an end user should be able to achieve 2Mbps during the busy period, 90% of the time (the same planning rules have been proposed for 21CN).
This is the specification of an outcome and, as you will now know, there must be an associated delivered ∆Q. The equivalent ∆Q corresponds to delivering 97%+ of packets with a low delay variation (15ms to 20ms).
Thus, in the UK, over the national data infrastructure, we already have 'pipes' with sufficiently known, and good, properties into which multiple differentiated services can be multiplexed.
Why Single Service Networks Are Bad News
Why, to Remain Tenable, Broadband Needs to Move to Multiple Services
1 Citizens, consumers, commerce and the Government want to get more out of broadband.
2 Many of the services people want will require stronger upper bounds on quality attenuation, for example video conferencing or highly interactive applications.
3 But a single-service network can, at best, offer only one quality attenuation bound — therein lies the problem.
No industry can afford to structure its business to deliver all of its services at the cost point only a few would be willing to pay. Having data traffic with differing quality requirements is needed to make optimal use of the infrastructure, with the savings that implies.
Differentiated Service Access Network
So, Who Gets to Decide Which Traffic Gets Treated Which Way?
Simple: the end user. Only they know how important an application's quality of experience is to their requirements. The same application can take on different roles, requiring different bounds on quality attenuation at different times.
There should be a differential price (though that may not mean a differential charge) for different qualities. This creates the appropriate economic feedback to make a rational market.
Most important is the need for a 'scavenger'-style class: one with no published bounds on ∆Q, a 'below normal' service whose delivered rate can be reduced to a trickle during peak periods. Such a class would make a substantial difference to the economics of broadband delivery. The price/charge differential would need to be reasonably high to persuade end users to engage.
Costing Differential Services
Exploiting the Two-Dimensional Nature of the Trading Space
The two-dimensional nature of the ∆Q trading space has one other interesting property: it can be used to calculate the cost of delivering quality, using an opportunity-cost argument.
1 At any point (network element) in the network there is some ∆Q, and that ∆Q is conserved. Giving 'less' ∆Q to some traffic means that the remaining traffic must experience proportionately more.
E.g. more traffic in the A1 box means that B2 traffic must experience a proportionately greater ∆Q.
2 So the capacity available to B2 traffic is not just reduced by the volume in A1: because some of the ∆Q budget has already been 'consumed', even less traffic can be carried in B2 while still meeting the budget.
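The conservation claim matches the classical queueing conservation law: under any work-conserving scheduling discipline, the load-weighted sum of mean waiting times is invariant, so reducing one class's delay necessarily increases another's. A textbook two-class M/M/1 illustration (my example with invented loads, not from the slides):

```python
# Kleinrock's conservation law illustrated for an M/M/1 queue with two
# Poisson classes (service rate mu = 1). Under any work-conserving
# discipline, sum(rho_i * Wq_i) is the same; priority merely moves
# waiting time between classes. Textbook formulas; illustrative numbers.

mu = 1.0
lam1, lam2 = 0.3, 0.4            # arrival rates: class 1 (high prio), class 2
rho1, rho2 = lam1 / mu, lam2 / mu
rho = rho1 + rho2
W0 = (lam1 + lam2) / mu**2       # mean residual service (exponential service)

# FIFO: both classes see the same mean queueing delay
Wq_fifo = W0 / (1 - rho)

# Non-preemptive priority: class 1 waits less, class 2 waits more
Wq1 = W0 / (1 - rho1)
Wq2 = W0 / ((1 - rho1) * (1 - rho))

fifo_sum = rho * Wq_fifo
prio_sum = rho1 * Wq1 + rho2 * Wq2
print(f"FIFO: Wq = {Wq_fifo:.3f};  priority: Wq1 = {Wq1:.3f}, Wq2 = {Wq2:.3f}")
print(f"load-weighted sums: FIFO {fifo_sum:.4f}, priority {prio_sum:.4f}")
```

The two load-weighted sums agree: the scheduler redistributed the waiting time but could not destroy it, which is the conservation property the opportunity-cost argument rests on.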
Conclusions
We've come at the 'quality' issue in several different ways — all those 'quality' issues can be represented in terms of ∆Q, Quality Attenuation.
∆Q, its conservation and its two degrees of freedom are the underlying physical properties of statistically multiplexed data networks. Any policy, regulation, service specification, network design, application design, and so on has to work within their constraints.
This is good news: it helps define the 'Art of the Possible', partially by showing what is not possible, and partially by bringing a quantitative basis to many of the contentious issues that surround data networking today, such as how to specify requirements, predict performance, manage large-scale networks through creating ∆Q budgets, describe service agreements, and cost proposed services.
∆Q has been (and is being) used to design new network elements, create novel services over existing infrastructure, and make distributed computer systems safer and more reliable.