MSC DATA COMMUNICATION SYSTEMS
DEPARTMENT OF ELECTRONIC & COMPUTER
ENGINEERING
BRUNEL UNIVERSITY
INTRACOM S.A.
IMPLEMENTING QUALITY OF SERVICE
IN IP NETWORKS
by
Nikolaos Tossiou
supervised by
Mr. Nasheter Bhatti
September 2002
A dissertation submitted in partial fulfillment of the
requirements for the degree of Master of Science
Copyright © 2002 by
Nikolaos Tossiou & Intracom S.A.
To my parents, Olga and Vassilios Tossiou
TABLE OF CONTENTS
ABSTRACT .......... 1
ACKNOWLEDGEMENTS .......... 3
CHAPTER 1: IP QUALITY OF SERVICE: AN OVERVIEW .......... 4
1.1. INTRODUCTION .......... 4
1.2. THE INTERNET: A BIG PICTURE .......... 8
1.3. DEFINITION OF QOS .......... 9
1.4. APPLICATIONS REQUIRING QOS .......... 15
1.5. QOS MECHANISMS .......... 16
1.5.1. CLASSIFICATION/MARKING .......... 17
1.5.2. POLICING/SHAPING .......... 19
1.5.3. SIGNALING .......... 22
1.5.4. QUEUE SCHEDULING/CONGESTION MANAGEMENT .......... 23
1.5.5. QUEUE MANAGEMENT/CONGESTION AVOIDANCE .......... 26
1.5.6. TRAFFIC ENGINEERING .......... 30
1.6. QOS ARCHITECTURES .......... 31
1.6.1. INTEGRATED SERVICES .......... 31
1.6.2. DIFFERENTIATED SERVICES .......... 32
1.7. MULTI-PROTOCOL LABEL SWITCHING .......... 36
1.8. IP QOS: A CRITICAL EVALUATION .......... 37
CHAPTER 2: CASE STUDY: DESIGN AND IMPLEMENTATION OF A QOS-ENABLED NETWORK .......... 44
2.1. IDENTIFICATION OF APPLICATIONS .......... 46
2.2. DEFINITION OF QOS CLASSES .......... 47
2.3. ROLES OF INTERFACES .......... 49
2.4. CASE STUDY NETWORK TOPOLOGY .......... 53
2.5. LABORATORY NETWORK TOPOLOGY .......... 55
2.6. NETWORK DEVICES .......... 56
2.7. PROTOCOLS AND QOS MECHANISMS .......... 60
2.7.1. QOS MECHANISMS FOR EDGE INTERFACES .......... 61
2.7.2. QOS MECHANISMS FOR CORE INTERFACES .......... 70
2.8. CONFIGURATION OF NETWORK DEVICES .......... 70
CHAPTER 3: MEASUREMENT INFRASTRUCTURE FOR VALIDATING QOS OF A NETWORK .......... 84
3.1. TRAFFIC GENERATION AND MEASUREMENT TOOLS .......... 84
3.2. DESIGN OF VALIDATION TESTS .......... 88
3.3. COLLECTION AND ANALYSIS OF RESULTS .......... 91
CHAPTER 4: CONCLUSIONS .......... 99
4.1. SUMMARY .......... 99
4.2. CONCLUSIONS .......... 101
4.3. RECOMMENDATIONS FOR FURTHER STUDY .......... 103
BIBLIOGRAPHY .......... 105
REFERENCES .......... 109
NOTATION .......... 113
APPENDIX .......... 115
PART I – INTERIM REPORT .......... 115
PART II – INTERIM REPORT GANTT CHART .......... 133
PART III – FINAL PROJECT TIME PLAN .......... 134
PART IV – FINAL PROJECT GANTT CHART .......... 135
PART V – IETF RFC SPECIFICATIONS .......... 136
LIST OF FIGURES

FIGURE 1-1: THE PROCESS OF QOS .......... 14
FIGURE 1-2: IP PRECEDENCE AND DSCP IN THE TOS BYTE .......... 18
FIGURE 1-3: EXAMPLE OF TRAFFIC RATE .......... 20
FIGURE 1-4: POLICING .......... 21
FIGURE 1-5: SHAPING .......... 22
FIGURE 1-6: RSVP PROCESS .......... 23
FIGURE 1-7: GLOBAL SYNCHRONIZATION .......... 27
FIGURE 1-8: WRED ALGORITHM .......... 29
FIGURE 1-9: TOS BYTE .......... 33
FIGURE 1-10: DS BYTE .......... 33
FIGURE 2-1: EDGE AND CORE INTERFACES .......... 51
FIGURE 2-2: SIMPLIFIED ROUTER SCHEMATIC .......... 52
FIGURE 2-3: INTERFACE INGRESS SCHEMATIC .......... 52
FIGURE 2-4: INTERFACE EGRESS SCHEMATIC .......... 53
FIGURE 2-5: CASE STUDY NETWORK TOPOLOGY .......... 54
FIGURE 2-6: LABORATORY NETWORK TOPOLOGY .......... 55
FIGURE 2-7: QPM MAIN SCREEN .......... 72
FIGURE 2-8: DEVICE PROPERTIES PAGE .......... 72
FIGURE 2-9: DEVICE CONFIGURATION LISTING .......... 73
FIGURE 2-10: INTERFACE PROPERTIES PAGE .......... 73
FIGURE 2-11: POLICY EDITOR .......... 74
FIGURE 2-12: CREATED POLICIES .......... 79
FIGURE 2-13: POLICY COMMANDS .......... 80
FIGURE 2-14: QDM MAIN SCREEN .......... 82
FIGURE 3-1: NETMEETING SCREENSHOT .......... 85
FIGURE 3-2: IP TRAFFIC SCREENSHOT .......... 86
FIGURE 3-3: SNIFFER PRO SCREENSHOT .......... 87
LIST OF TABLES
TABLE 1-1: APPLICATIONS REQUIRING QOS .......... 15
TABLE 1-2: IP PRECEDENCE VALUES, BITS AND NAMES .......... 19
TABLE 1-3: DSCP CLASS SELECTORS .......... 34
TABLE 1-4: AF PHB .......... 35
TABLE 1-5: DSCP TO IP PRECEDENCE MAPPINGS .......... 35
TABLE 2-1: CASE STUDY CLASSES OF SERVICE .......... 48
TABLE 2-2: NETWORK DEVICES, DESCRIPTIONS AND IOS VERSIONS .......... 58
TABLE 2-3: INTERFACES, IP ADDRESSES AND INTERFACE LINK SPEEDS .......... 59
TABLE 2-4: ROUTER LOOPBACK INTERFACE IP ADDRESSES .......... 59
TABLE 2-5: RECOMMENDED CAR NB AND EB SETTINGS .......... 67
TABLE 3-1: FIRST TEST: TCP BEST-EFFORT .......... 91
TABLE 3-2: FIRST TEST: TCP BEST-EFFORT & TCP ASSURED .......... 92
TABLE 3-3: FIRST TEST: ALL TRAFFIC FLOWS WITHOUT VIDEOCONFERENCE .......... 92
TABLE 3-4: FIRST TEST: ALL TRAFFIC FLOWS WITH VIDEOCONFERENCE .......... 93
TABLE 3-5: FIRST TEST: ALL TRAFFIC FLOWS WITHOUT VIDEOCONFERENCE AFTER FIRST POLICY CHANGE .......... 94
TABLE 3-6: FIRST TEST: ALL TRAFFIC FLOWS WITH VIDEOCONFERENCE AFTER FIRST POLICY CHANGE .......... 94
TABLE 3-7: FIRST TEST: ALL TRAFFIC FLOWS WITH VIDEOCONFERENCE AFTER SECOND POLICY CHANGE .......... 95
TABLE 3-8: SECOND TEST: ALL TRAFFIC FLOWS WITH VIDEOCONFERENCE .......... 96
ABSTRACT
Traditionally the Internet has provided users with a “best-effort” service
without guarantees as to whether or when data is delivered. Data is processed as
quickly as possible regardless of its nature. With the advent of new technologies
and developments, user demands as well as network and Internet applications
have become more complex, thus requiring better services than “best-effort”.
Quality of Service (QoS) is the specification of a level of service that is
required from the network. It is defined by specifying certain service parameters,
such as delay, loss, throughput and jitter. QoS is implemented by a set of
architectures that tackle this problem by providing the means necessary to reserve
resources, differentiate between different traffic types that require different
servicing properties and manage the flow of data from source to destination based
on specific characteristics, thus ensuring actual as well as timely delivery,
according to requirements. These architectures achieve this by utilizing various
mechanisms for traffic classification, policing, signaling, queue scheduling and
queue management. All these mechanisms will be presented and discussed in
detail.
The two Internet QoS architectures that will be examined are Integrated
Services (IntServ) and Differentiated Services (DiffServ).
The main function of IntServ is to reserve resources (such as bandwidth or
maximum delay) along the path of a flow. A flow can be defined as a stream of
data from a source to a destination with specific characteristics such as source and
destination address, source and destination port numbers and protocol identifier.
The main purpose of DiffServ is to divide flows into different classes and
treat them according to their pre-specified requirements as well as enforce specific
policies.
MPLS (Multi-Protocol Label Switching) is a transport technology for IP
networks that possesses certain features, which may assist the provision of QoS.
MPLS labels packets in a way that makes the analysis of the IP header at
intermediate points between source and destination unnecessary, thus reducing
processing overhead greatly and significantly speeding up traffic flow. Also,
MPLS may be used for implementing explicit routing of certain traffic flows
through an IP network, thus realizing traffic engineering, which is a mechanism
for improving service quality.
Implementation, analysis, experiments and derived results will be based
mainly on the DiffServ architecture. The relevant mechanisms will be
implemented exclusively on Cisco® Systems network equipment and all results
will be relevant to Cisco® configurations and QoS implementations.
This dissertation aims to investigate the mechanisms, algorithms, protocols
and technologies that have been defined for implementing QoS in IP networks; to
identify the current state-of-the-art availability and operation of QoS features and
capabilities in commercial network devices; and, finally, to experiment with such
capabilities in a networking laboratory environment by designing, implementing
and validating a network provided with QoS mechanisms, in order to derive
conclusions as to the usefulness and feasibility of QoS in modern IP networks.
ACKNOWLEDGEMENTS
This dissertation has been completed with the help of a lot of people.
Without their input, comments and guidance it would have been impossible.
First and foremost I would like to thank my academic supervisor, Mr.
Nasheter Bhatti, who has provided me with useful comments and guided me in the
right direction throughout the duration of this dissertation.
Secondly, I would like to thank Mr. Evangelos Vayias, my industrial
supervisor and mentor at Intracom S.A., who has helped me with lots of technical
details and provided me with a variety of technical documents. He was also eager
to comment on the dissertation and guide me in the right direction in the practical
part of the dissertation. This dissertation would have been impossible without his
support.
I would also like to thank all the people at Intracom S.A., especially the
Cisco® Professional Services team, who have all been supportive throughout my
stay at the company and helped me out whenever I had a problem or needed
guidance and assistance.
Also, I would like to thank the Chachalos family, who generously
provided me with a home to stay in while in Athens. I owe them a lot.
Last but not least I would like to wholeheartedly thank my parents.
Without them neither my studies nor this dissertation would have been possible.
They were always by my side, providing me with any kind of support I required.
May they always be healthy.
CHAPTER 1: IP QUALITY OF SERVICE: AN OVERVIEW
1.1. INTRODUCTION
The exponential growth of the Internet, networks in general and network
users around the globe, has led to the emergence of a significant number of new
user requirements and demands. New applications have emerged to meet the
users’ needs and provide them with new services. Such applications include e-
commerce, Voice-over-IP (VoIP), streaming media (audio and video),
teleconferencing and other bandwidth-intensive applications. This situation has
led to high demand for a network architecture that will provide appropriate
facilities and mechanisms to ensure proper operation of these new applications
[1].
Traditionally, the Internet does not have the ability to distinguish between
different service classes, therefore being unable to cope with different
requirements for different applications. All traffic flows are treated as “best-
effort”, which means that there is no mechanism to ensure actual delivery, nor is it
possible to ensure timely delivery of specific traffic flows that would require it.
There is no doubt that the dominant protocol used in networks nowadays –
not least in the Internet itself – is the Internet Protocol (IP). Fundamentally, it is a
best-effort protocol; that is why it cannot guarantee the timely or lossless delivery
of data packets. Reliable delivery of data packets at the destination is the
responsibility of the Transmission Control Protocol (TCP), which is located just
above the IP in the well-known seven-layer Open Systems Interconnection (OSI)
reference model. In other words, traffic – no matter what its type and requirements
– is processed as quickly as possible and there is no guarantee as to timely or
actual delivery [2].
This lack of guarantees together with the new user demands and
requirements has led to the emergence of Quality of Service (QoS) network
architectures and mechanisms in today’s IP networks.
The objective of this dissertation is to investigate these architectures and
mechanisms by analyzing the way they operate and distinguish their usability
under different conditions and for different scenarios, to identify current
availability and operation of QoS architectures and mechanisms in current
commercial network devices, such as routers and switches, and to experiment
with the QoS capabilities and features of such devices in a networking laboratory
environment by designing, implementing and validating a network that utilizes
QoS mechanisms. This will lead to a conclusion as to which QoS mechanisms are
better for different kinds of traffic flows and at which point they should be
applied. Finally, it will lead to a conclusion as to which QoS architecture is more
feasible, viable and useful for today’s IP networks and why.
The investigation of QoS architectures and mechanisms is a vast subject,
which is of very high interest. QoS in the Internet is a hotly debated issue, because
several frameworks have been proposed to provide different service classes and
different levels of guarantees to different users, according to their needs. In other
words, QoS is most certainly going to become one of the basic and most widely
used sets of mechanisms and architectures for any network infrastructure because
the Internet is establishing itself as a global multiservice infrastructure [3].
This dissertation will investigate the main QoS mechanisms such as queue
management, queue scheduling, traffic policing/shaping and signaling and the
QoS architectures that utilize these mechanisms (DiffServ and IntServ). These
will be applied and implemented on current state-of-the-art network equipment in
order to investigate their operation, feasibility and usefulness.
The dissertation will not deal with specialized mechanisms that are
available only for specific traffic flows and on specific network equipment.
Furthermore, it will not investigate economic aspects of QoS that arise, i.e.
different pricing policies from Internet Service Providers (ISPs) for different
service offerings. These will be mentioned, but not analyzed in detail. Such issues
are beyond the scope of the dissertation.
At the end of this dissertation the reader should be able to distinguish
between the different QoS architectures and to judge which mechanism should be
used for which traffic flows under what circumstances, i.e. which mechanism is
most appropriate for a given kind of traffic flow and type of requirements.
Furthermore, by comparing the two different QoS architectures, the reader should
be able to tell which one is more scalable or appropriate for today’s modern IP
networks, although – as we will see later on – both architectures have their
purpose and field of usage. Nevertheless, the tests that will be carried out will lead
to the conclusion that one of the two architectures is preferable for any kind of
network, and the reasons for this will be discussed.
The starting point will be to present the current state of the Internet to
justify the existence and need for QoS by identifying the various ways the Internet
is used today. We will then proceed by defining the term “Quality of Service”,
identifying applications that require QoS and discussing the various QoS
mechanisms and architectures available. Chapter 1 will be concluded with the
author’s critical evaluation of the two QoS architectures. In Chapter 2 we will
present and discuss a case study of a network topology on which the tests will be
carried out and results will be collected. In Chapter 3 we will present the
experiments and tests that have been carried out at the laboratory as well as
analyze all collected results. Finally, we will conclude in the last chapter by
summarizing the results and linking them with the theoretical part as well as by
giving direction for further works on this topic.
The whole project has been carried out at Intracom S.A. in Peania, Greece,
at the Department of Information Systems, Cisco® Professional Services team.
Intracom is the largest company in the Telecommunications and Information
Technology industry in Greece with a significant international presence. It has
over 4000 employees and is a certified Cisco® Golden Partner and Professional
Services partner. The Cisco® Professional Services team consists of
Cisco-certified network engineers (CCNP level and above). The networking
laboratory is fully equipped with latest state-of-the-art networking technology
and is available for the conduct of experiments and simulations, both in the
scope of commercial projects, as well as for research and development purposes.
The project was carried out in the context of a study on the subject
according to the company’s requirements. The practical part – topology design
etc. – was based on a larger project carried out by the company for a customer. As a
result, network equipment in the laboratory was not always available for testing
and this was the main limitation of the dissertation.
One further limitation was the subject itself. Because it is so vast, it was
impossible to cover all its aspects in such a short time, especially in the practical
part. However, the author tried to give an overview and a critical evaluation of all
relevant material.
All tests and results were based on a case study which was implemented in
the networking laboratory. It was carried out solely by the writer with technical
help and guidance by the industrial mentor, Mr. Evangelos Vayias, who helped
mostly with the practical part of this dissertation, i.e. router configurations,
topology design, test execution and collection of results.
1.2. THE INTERNET: A BIG PICTURE
The Internet consists of many interconnected networks of different types,
shapes and sizes, which use different technologies. Local Area Networks (LANs),
Metropolitan Area Networks (MANs) and Wide Area Networks (WANs) are
interconnected by a backbone. LANs and MANs are usually corporate, campus or
citywide networks; WANs are usually ISP or large-scale corporate networks [4].
As stated above, all these networks are interconnected using different protocols,
mechanisms and link layer technologies. However, when referring to the Internet,
the common denominator of these networks is the Internet Protocol (IP) suite that
is uniform for all these network types. Today, IP forms the foundation for a
universal and global network. However, this raises several issues for ISPs as well
as enterprise and corporate networks. IP’s problem is that it is a connectionless
technology and therefore cannot differentiate between different traffic types or
make certain guarantees for different types of applications [5]. QoS architectures
attempt to resolve this issue. This dissertation is based on the IP protocol suite and
investigates QoS mechanisms based upon IP, i.e. it will deal only with QoS
provision at Layers 2 and 3 and will not go beyond these layers.
With the growth of the Internet new applications emerged that required
specific service quality and together with additional user demands it became
apparent that best-effort service is not enough and that certain mechanisms were
needed to distinguish between different traffic classes and provide QoS according
to user requirements. However, there is an opinion which claims that new
technologies, such as optical fiber and Dense Wavelength Division Multiplexing
(DWDM) will make bandwidth so abundant and cheap, that QoS will become
unnecessary, i.e. there will be no need for mechanisms that provide QoS because
bandwidth will be almost “infinite” [6]. The reply to this opinion is that no matter
how much bandwidth will be made available, new applications will be developed
to utilize it and once again make the implementation of QoS mechanisms and
architectures necessary. Also, high bandwidth availability cannot be ensured in all
areas throughout a network. Bottleneck points will always exist, where QoS
mechanisms will have to be applied.
The main driving forces of QoS are basically different user demands for
different traffic classes and the ability of ISPs to offer different service levels –
and, as a result, different pricing policies – to their clients according to
agreements. These agreements are called Service Level Agreements (SLAs),
which specify what type of service will be provided to the client (i.e. premium,
assured or best-effort) and how the client will have access to the bandwidth he/she
needs. In other words, it specifies the QoS level the client is to receive and the ISP
is to offer.
1.3. DEFINITION OF QOS
Firstly, there is the need to define the term “Quality of Service” and what
is meant by it. The new generation of applications, such as real-time voice and
video, has specific bandwidth and timing requirements, such as delay tolerance
and higher end-to-end delivery guarantees than regular best-effort data [7]. These
new types of data streams cannot operate properly with the traditional best-effort
service. QoS mechanisms attempt to resolve this issue in various ways that will be
discussed later on.
There are no agreed quantifiable measures that unambiguously define QoS
as perceived by a user. Terms such as "better", "worse", "high", "medium", "low",
"good", "fair" and "poor" are typically used, but these are subjective and therefore
cannot always be translated precisely into network-level parameters that can
subsequently be used for network design. Furthermore, QoS is also heavily
dependent upon factors such as compression algorithms, coding schemes, the
presence of higher layer protocols for security, data recovery, retransmission and
the ability of applications to adapt to network congestion or their requirements for
synchronization [8].
However, network providers need performance metrics that they can agree
upon with their peers (when exchanging traffic), and with service providers
buying resources from them with certain performance guarantees. The following
five network performance metrics are considered the most important in defining
the QoS required from the network [9]:
 Availability: Ideally, a network should be available 100% of the time, i.e.
it should be up and running without any failures. Even a high-sounding
figure such as 99.8% translates into about an hour and a half of downtime per
month, which may be unacceptable for a large enterprise. Serious carriers
strive for 99.9999% availability, which they refer to as "six nines" and
which translates into a downtime of 2.6 seconds per month (these figures
are reproduced in a short sketch after this list).
 Throughput: This is the effective data transfer rate measured in bits per
second. It is not the same as the maximum capacity (maximum
bandwidth), or wire speed, of the network. Sharing a network lowers the
throughput that can be achieved by a user, as does the overhead imposed
by the extra bits included in every packet for identification and other
purposes. A minimum rate of throughput is usually guaranteed by a
service provider (who needs to have a similar guarantee from the network
provider). A traffic flow, such as Voice-over-IP (VoIP), requires a
minimum throughput capacity in order to operate correctly and efficiently,
otherwise it becomes virtually unusable.
 Packet loss ratio: Packet loss specifies the percentage of packets lost
during transport [10]. Network devices, such as switches and routers,
sometimes have to hold data packets in buffered queues when a link gets
congested. If the link remains congested for too long, the buffered queues
will overflow and data will be lost, since the devices will have to drop
packets at the end of the queue, also known as tail drop. The lost packets
must be retransmitted, adding to the total transmission time. In a well-
managed network, packet loss will typically be less than 1% averaged over
a month. It is obvious that real-time applications may be very sensitive to
packet loss, since any drop would require retransmission of the lost
packets and would interfere with the real-time transmission. Such
applications demand packet loss guarantees from the network.
 Delay (latency): The elapsed time for a packet to travel from the sender,
through the network, to the receiver is known as delay. The end-to-end
delay consists of serialization delay, propagation delay and switching
delay. Serialization delay (also called transmission delay) is the time it
takes for a network device to transmit a packet at the given output rate and
depends on link bandwidth as well as packet size. Propagation delay is the
time it takes for a transmitted bit to reach its destination and it is limited
by the speed of light. Finally, switching delay refers to the time it takes for
a network device to start transmitting a packet after its reception [11].
Unless satellites are involved, the end-to-end delay of a 5000km voice call
carried by a circuit-switched telephone network is about 25ms. For the
Internet, a voice call may easily exceed 150ms of delay because of signal
processing (digitizing and compressing the analog voice input) and
congestion (queuing). For a real-time application, such as voice
communication, packets have to be delivered within a specific delay
bound, otherwise they become useless for the receiver.
 Jitter (delay variation): Jitter refers to the variation of the delay on
individual packets of the same flow, i.e. one packet may be delayed more
or less than another. In other words, it is the variation in end-to-end transit
delay. This has many causes, including variations in queue length,
variations in the processing time needed to reorder packets that arrived out
of order because they traveled over different paths and variations in the
processing time needed to reassemble packets that were segmented by the
source before being transmitted. Jitter may affect applications that need a
constant flow of data from transmitter to receiver in order to maintain
smooth operation. Any variations in this flow may cause the application
performance to degrade up to a point that it becomes unusable.
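The availability and delay figures quoted in the list above can be reproduced with a short calculation. The following Python sketch is purely illustrative; the 5000 km propagation figure assumes a signal speed of roughly 2x10^8 m/s in the medium, and the serialization term assumes a 1500-byte packet on a 2 Mbit/s link (neither assumption comes from this dissertation):

    # Illustrative sketch: reproducing the downtime and delay figures quoted above.
    SECONDS_PER_MONTH = 30 * 24 * 3600              # a 30-day month, for simplicity

    def downtime_per_month(availability):
        """Seconds of downtime per month for a given availability fraction."""
        return (1.0 - availability) * SECONDS_PER_MONTH

    print(downtime_per_month(0.998) / 60)           # ~86 minutes, about an hour and a half
    print(downtime_per_month(0.999999))             # ~2.6 seconds ("six nines")

    # End-to-end delay = serialization + propagation + switching delay.
    packet_bits = 1500 * 8                          # assumed packet size
    link_bps = 2_000_000                            # assumed link bandwidth
    distance_m = 5_000_000                          # the 5000 km example above
    propagation_mps = 2e8                           # assumed signal speed in the medium
    switching_s = 50e-6                             # assumed 50 microseconds per hop

    serialization_s = packet_bits / link_bps        # 6 ms on a 2 Mbit/s link
    propagation_s = distance_m / propagation_mps    # 25 ms over 5000 km
    print((serialization_s + propagation_s + switching_s) * 1000, "ms")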
It is apparent that newly emerged applications require specific guarantees
regarding timeliness and actual delivery. QoS is realized and implemented by a
set of technologies, architectures, mechanisms and methods that attempt to
provide the resources the applications require without compromising average
throughput and network performance in general. The main purpose of these
mechanisms is to deliver guaranteed and differentiated services to any IP-based
network. Since the Internet is made up of many different link technologies and
physical media, end-to-end QoS in IP-based networks is mainly attained at Layer
3, the network layer.
A general overview of QoS is illustrated below in Figure 1-1. Applications
requiring QoS (mission-critical data, audio/video teleconferencing etc.), which are
time- and delay-sensitive, real-time, jitter- and loss-intolerant, will utilize one of
the two specific QoS architectural models, IntServ or DiffServ. These
architectures in turn apply specific mechanisms to provide QoS. These
mechanisms deal with traffic classification, traffic shaping and policing, resource
allocation (signaling), queue scheduling/congestion management and queue
management/congestion avoidance. It is possible, although not necessary, to apply
more than one mechanism or a specific combination thereof to provide better
QoS. This depends mostly on the type of traffic that will pass through the network
as well as network equipment and topology. Ultimately, it is the network
manager’s decision to define which mechanisms should be applied at which
network locations to provide efficient QoS.
For realizing end-to-end QoS, such mechanisms are implemented at the
Network Layer. However, they may be supported by relevant functions at lower
layers. Furthermore, MPLS is employed at this stage (if chosen). This is where
possible packet header switching takes place. All the above mechanisms and
architectures will be discussed in detail later on.
Finally, the traffic is transmitted by using any type of Link Layer
technology, such as Asynchronous Transfer Mode (ATM), Digital Subscriber
Line (DSL) or Ethernet.
In order to provide end-to-end QoS, all network devices along the route
must support QoS functions, otherwise incoming traffic will be serviced using
traditional best-effort service until the next hop along the route that supports QoS.
Furthermore, the mechanisms that should be employed in a QoS-enabled network
differ depending on the router's location, i.e. whether it is an edge or a core router
and whether the interface is an edge or a core interface. These details will be discussed
in chapter 2.
Figure 1-1 below illustrates the QoS process:
[Figure: a layered schematic. At the top, applications (Voice-over-IP, videoconferencing, mission-critical data, e-mail, FTP, VPN) with time-sensitive, real-time, delay-, loss- and jitter-intolerant requirements. These map onto the QoS architectures (IntServ, DiffServ), which employ the QoS mechanisms: classification/marking (IP Precedence, DSCP), policing/shaping, signaling (RSVP), queue scheduling/congestion management (WFQ, CBWFQ, MDRR), queue management/congestion avoidance (RED, WRED) and traffic engineering. Below these sit MPLS, the Internet Protocol (IP) and the transmission mechanisms of the physical layer (ATM, DSL, SONET, Ethernet, modem, other).]
Fig. 1-1: The Process of QoS
The above schematic gives a basic idea of how QoS is employed in a
network, from application level to the physical layer. The first step is to identify
the applications that require QoS, which will be discussed below.
1.4. APPLICATIONS REQUIRING QOS
A vast number of network applications exist nowadays and are used on a
daily basis by home or business users around the world. While most applications
work more or less satisfactory with the traditional best-effort service, the new
generation of applications requires QoS in order to operate correctly and
efficiently.
In general, such applications are real-time audio/video applications (e.g.
teleconferencing, streaming audio/video), e-commerce, transaction and mission-
critical applications, such as database and client/server applications. All these
applications have in common the fact that they are sensitive to a smaller or greater
extent to the four QoS metrics mentioned above (throughput, loss, delay and
jitter). Table 1-1 illustrates different kinds of applications and their sensitivities
regarding QoS [12].
Traffic Type                     Throughput     Loss    Delay     Jitter
E-mail (MIME)                    Low to High    -       Low       Low
Real-time Voice (e.g. VoIP)      Low            High    High      High
Real-time Video                  High           High    High      High
Telnet                           Low            -       Medium    Low
File transfer (FTP)              High           -       Low       Low
E-commerce, transactions         Low            -       High      Medium
Casual browsing                  Low            -       Medium    Low
Serious browsing                 Medium         -       High      Low
Table 1-1: Applications Requiring QoS
The transport protocol used by all of the above applications, except real-time voice and video, is
TCP. TCP handles losses with retransmissions, which eventually increases delay.
Thus, what is perceived by applications when packets are lost is an increase in
end-to-end delay. Moreover, the throughput sensitivity of e-mail ranges from low
to high, because it depends on what kind of e-mail is being sent. If it is a simple
text message, then throughput sensitivity is low, but if a large attachment (e.g.
5MB) is to be sent, then throughput sensitivity is high.
Note that network availability is a prerequisite for all of the above factors
regardless of the application, therefore it is not presented in the table, as it is equal
for every application.
After observing the above table, it becomes apparent that some
applications are more sensitive than others and therefore require increased QoS
guarantees so as to perform satisfactorily. For example, e-mail is sensitive only to
packet loss, which is, moreover, handled by the reliable transfer offered by TCP,
whereas real-time video is very sensitive to throughput, delay and jitter and
sensitive to packet loss. This shows that every application has different QoS
requirements and must be treated differently if it is to operate within its acceptable
limits. The QoS mechanisms that match each application’s requirements are
discussed below.
1.5. QOS MECHANISMS
As mentioned above, there are many different QoS mechanisms that
perform different tasks under different circumstances. These mechanisms can be
implemented either separately or in combination. It always depends on the
objective and on what purpose they should serve as well as what kind of
applications will be used. Furthermore, it should be noted that no single transport
or network-layer mechanism will provide QoS for all flow types and that a QoS-
enabled network has to deploy a number of mechanisms in order to meet the
broad range of requirements [13].
The main QoS mechanisms are:
 Packet classification/marking: Multi-Field (MF) or Behavior-Aggregate (BA)
classification (IP precedence or DiffServ Code Point (DSCP) marking).
 Traffic conditioning: Policing, shaping.
 Resource Reservation/Signaling: Resource Reservation Protocol (RSVP).
 Queue Scheduling/Congestion Management: Weighted Fair Queuing (WFQ),
Class-Based WFQ (CBWFQ), Modified Deficit Round Robin (MDRR).
 Queue Management/Congestion Avoidance: Random Early Detection (RED),
Weighted RED (WRED).
Each one of these mechanisms serves a specific purpose as will be made
clear later on and should be employed under specific circumstances. In other
words, a mechanism would provide better QoS than another for a specific type of
application. The mechanisms deal with the four mentioned QoS metrics and each
one “specializes” in a specific area and performs different functions. The
mechanisms have to be set up in every network device (router or switch) along the
traffic route and either at the incoming and/or at the outgoing interface so as to
provide end-to-end QoS. The QoS architectures that will be discussed later refer
to the QoS of a network as a whole. A discussion and explanation of the QoS
mechanisms follows.
1.5.1. CLASSIFICATION/MARKING
Packet classification is a mechanism that assigns packets to a certain pre-
specified class based on one or more fields in that packet. This procedure is also
called packet marking or coloring, where certain fields in the packet are marked,
specifying a class for that packet [14]. There are two different types of
classification for IP QoS, which include the following:
 IP flow identification based on source IP address, destination IP address,
IP protocol field, source port number and destination port number.
 Classification based on IP precedence bits or DSCP field.
Packets can be marked to indicate their traffic class. The first method is a
general and common approach for identifying traffic flows so that they can be
marked and assigned to a specific traffic class. The IP precedence and/or DSCP
fields are a more effective basis for identifying, classifying and marking traffic
flows and enable a much wider range of QoS functions.
The IP precedence field is located in the packet’s IP header and indicates
the relative priority with which the packet should be handled, ranging from
“routine” (best-effort) to “critical” (highest priority). Furthermore, there are
another two classes, which are used for network and internetwork control
messages. The IP precedence field is made up of three bits in the IP header’s Type
of Service (ToS) byte, while DSCP uses six bits. Figure 1-2 shows the ToS byte
structure and illustrates IP precedence and DSCP locations within the ToS byte:
[Figure: the eight bits (0-7) of the ToS byte; the three most significant bits (0-2) form the IP Precedence field, while the six most significant bits (0-5) form the DSCP field.]
Fig. 1-2: IP Precedence and DSCP in the ToS Byte
Table 1-2 shows the different IP precedence values, bits and names [15]:
IP Precedence Value    IP Precedence Bits    IP Precedence Names
0                      000                   Routine
1                      001                   Priority
2                      010                   Immediate
3                      011                   Flash
4                      100                   Flash Override
5                      101                   Critical
6                      110                   Internetwork Control
7                      111                   Network Control
Table 1-2: IP Precedence Values, Bits and Names
Packet marking can be performed either by the application that sends the
traffic or by a node in the network. The ToS byte has been specified by the IETF
in [16].
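As a small illustration of how these fields sit inside the ToS byte, the following Python sketch (not part of the dissertation) reads and writes the IP precedence and DSCP values by simple bit manipulation, using the layout of Figure 1-2, where the precedence occupies the three most significant bits and the DSCP the six most significant bits:

    # Illustrative sketch: IP precedence and DSCP inside the ToS byte.
    def get_precedence(tos: int) -> int:
        return (tos >> 5) & 0x07                    # top 3 bits

    def get_dscp(tos: int) -> int:
        return (tos >> 2) & 0x3F                    # top 6 bits

    def set_precedence(tos: int, prec: int) -> int:
        return (tos & 0x1F) | ((prec & 0x07) << 5)

    def set_dscp(tos: int, dscp: int) -> int:
        return (tos & 0x03) | ((dscp & 0x3F) << 2)

    tos = set_dscp(0, 46)                           # an example DSCP value
    print(tos, get_dscp(tos), get_precedence(tos))  # 184 46 5 (precedence "Critical")

Note that because the two fields overlap, setting a DSCP value implicitly determines the IP precedence bits as well; this overlap is the basis of the DSCP-to-IP-precedence mappings discussed later.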
DSCP is discussed in detail in section 1.6.2 because it is the basis of
DiffServ.
1.5.2. POLICING/SHAPING
Service providers and network users agree on a profile for the traffic that
is to receive certain QoS. This traffic profile usually specifies a maximum rate for
the traffic which is to be admitted into the network at a certain QoS level, as well
as some degree of the traffic’s burstiness.
The user’s traffic transmitted to the network has to conform to this profile
in order to receive the agreed QoS. Non-conforming traffic is not guaranteed to
receive this QoS and may be dropped.
Traffic conditioning is a set of mechanisms that permit, on the one hand, the
service provider to monitor and police conformance to the traffic profile and, on the
other hand, the user to shape transmitted traffic to this profile.
Traffic conditioning is usually employed at network boundaries, i.e. where
traffic first enters or exits the network. Traffic conditioning is implemented in
network access devices, i.e. routers and switches that support QoS mechanisms
[17].
As the traffic enters the network, it has a rate that fluctuates over time.
Figure 1-3 illustrates this:
[Figure: a traffic rate curve fluctuating over time, rising above and falling below the profile limit.]
Fig. 1-3: Example of Traffic Rate
In order to provide QoS, traffic has to conform to the pre-specified and
pre-agreed (between customer and ISP) traffic profile.
Policing is the mechanism that allows the network to monitor traffic
behavior (e.g. burstiness) and manage throughput by dropping out-of-profile
packets. In essence, policing discards misbehaving traffic in order to avoid
network resources being exhausted or other QoS classes being starved. This will
ensure that network resources, as allocated, will provide the agreed QoS to the
agreed traffic. The policing function in a router is often called a dropper, because
it essentially drops out-of-profile packets, i.e. packets that exceed the agreed
policy throughput. Figure 1-4 illustrates the policing mechanism:
[Figure: the same traffic rate curve with the portions exceeding the profile limit cut off, i.e. dropped.]
Fig. 1-4: Policing
Policing is used not only to drop out-of-profile packets, but also to re-mark
them and indicate to dropping mechanisms downstream that they should be
dropped ahead of the in-profile packets.
Shaping, on the other hand, allows the network user to enforce bandwidth
policies by buffering excess traffic and smoothing bursts. This increases link
utilization efficiency, as packets are not dropped, but in a way “spread” across the
available bandwidth. Shaping, as the word itself implies, shapes traffic in order to
meet a pre-defined profile, i.e. to be in-profile. Figure 1-5 shows how shaping
works:
[Figure: the traffic rate curve smoothed so that bursts above the profile limit are buffered and released later, keeping the transmitted rate at or below the limit.]
Fig. 1-5: Shaping
Shaping is commonly used where speed-mismatches exist (e.g. going from
a HQ site with a T1/E1 connection to a Frame Relay network, down to a remote
site with a 128Kbps connection).
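Both policing and shaping are commonly described in terms of a token bucket that fills at the profile rate up to a burst allowance. The following Python sketch is a simplified illustration (not a description of any particular router implementation): the same bucket is used either to drop or re-mark out-of-profile packets (policing) or to delay them until enough tokens accumulate (shaping); the rate and burst values are arbitrary examples.

    import time

    class TokenBucket:
        """Illustrative token bucket: 'rate_bps' of traffic with a burst allowance in bytes."""

        def __init__(self, rate_bps, burst_bytes):
            self.rate = rate_bps / 8.0              # token fill rate in bytes per second
            self.burst = burst_bytes                # bucket depth
            self.tokens = burst_bytes
            self.last = time.monotonic()

        def _refill(self):
            now = time.monotonic()
            self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
            self.last = now

        def police(self, packet_len):
            """Policing: out-of-profile packets are dropped (or could be re-marked)."""
            self._refill()
            if packet_len <= self.tokens:
                self.tokens -= packet_len
                return True                         # in profile: forward unchanged
            return False                            # out of profile: drop or re-mark

        def shape(self, packet_len):
            """Shaping: out-of-profile packets are held back (here, delayed) until conformant."""
            self._refill()
            if packet_len > self.tokens:
                time.sleep((packet_len - self.tokens) / self.rate)
                self._refill()
            self.tokens -= packet_len               # always forwarded, possibly later

    bucket = TokenBucket(rate_bps=128_000, burst_bytes=8_000)   # e.g. a 128 kbit/s profile
    print(bucket.police(1500))                      # True while the burst allowance lasts

The difference in behavior mirrors the figures above: the policer cuts off the excess, while the shaper spreads it out over time at the cost of added delay and buffer space.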
1.5.3. SIGNALING
The signaling mechanism in an IP context essentially refers to the
Resource Reservation Protocol (RSVP), which is an integral part of the IntServ
architecture, discussed in section 1.6.1. RSVP has been initially proposed in [18]
and defined in [19].
RSVP was specified as a signaling protocol for applications to reserve
resources. The signaling process is illustrated in Figure 1-6. The sender sends a
“PATH” message to the receiver (1) specifying the characteristics of the traffic.
Every intermediate router along the path forwards the “PATH” message to the
next hop (2) determined by the routing protocol. Upon receiving a “PATH”
message (3), the receiver responds with a “RESV” message (4) to request
resources for the flow. Every intermediate router along the path can reject or
accept the request of the “RESV”, depending on resource availability (5). If the
request is rejected, the router will send an error message to the receiver, and the
signaling process will terminate. If the request is accepted, link bandwidth and
buffer space are allocated for the flow (6) and the related flow state information
will be installed in the router [20].
[Figure: a sender and a receiver connected across an RSVP cloud; PATH messages (1) to (3) travel hop by hop from sender to receiver, and RESV messages (4) to (6) travel back from receiver to sender.]
Fig. 1-6: RSVP Process
There are two problems with RSVP. Firstly, the amount of state information
increases proportionally with the number of flows, which places a huge storage
and processing overhead on the routers; therefore RSVP does not scale well in the
Internet core. Secondly, the requirements on routers are high, because they must
all implement and support RSVP, admission control, MF classification and packet
scheduling. These problems are further discussed in section 1.6.1.
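As a rough illustration of the admission decision made for each "RESV" message, the following Python sketch (not an actual RSVP implementation; the router names and bandwidth figures are hypothetical) shows each router on the path either installing per-flow state or rejecting the reservation, and also hints at why per-flow state becomes a burden as the number of flows grows:

    # Illustrative sketch of RSVP-style admission control along a path.
    class Router:
        def __init__(self, name, free_kbps):
            self.name = name
            self.free_kbps = free_kbps
            self.flow_state = {}                    # per-flow state grows with every reservation

        def admit(self, flow_id, kbps):
            if kbps <= self.free_kbps:
                self.free_kbps -= kbps
                self.flow_state[flow_id] = kbps
                return True
            return False

    def resv(path, flow_id, kbps):
        """Process a RESV request hop by hop; any hop may reject it."""
        for router in path:
            if not router.admit(flow_id, kbps):
                return "error: reservation rejected at " + router.name
        return "reservation installed on all hops"

    path = [Router("R3", 512), Router("R2", 256), Router("R1", 1024)]   # hypothetical routers
    print(resv(path, "voice-1", 128))               # accepted on every hop
    print(resv(path, "video-1", 384))               # rejected at R2: insufficient capacity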
1.5.4. QUEUE SCHEDULING/CONGESTION MANAGEMENT
Packet delay control is an important purpose of Internet QoS. Packet delay
consists of three parts: propagation, transmission and queuing delay. Propagation
delay is given by the distance, the medium and the speed of light. The per-hop
transmission delay is given by the packet size divided by the link bandwidth. The
queuing delay is the waiting time a packet spends in a queue before it is
transmitted. This delay is determined mainly by the scheduling policy.
Besides delay control, link sharing is another important purpose of queue
scheduling. The aggregate bandwidth of a link can be shared among multiple
entities, such as different organizations, multiple protocols (TCP, UDP (User
Datagram Protocol)) or multiple services (FTP, telnet, real-time streams). An
overloaded link should be shared in a controlled way, while an idle link can be
used in any proportion.
Although delay and rate guarantee provisioning is crucial for queue
scheduling, it still needs to be kept simple because it must be performed at packet
arrival rates. Scheduling can be performed on a per-flow or a per-traffic-class
basis. The result of combining the two is hierarchical scheduling [21].
The most important scheduling mechanisms, besides traditional first-come first-
served scheduling, are the following:
 Weighted Fair Queuing (WFQ): WFQ is a sophisticated queuing
process that requires little configuration, because it dynamically
detects traffic flows between applications and automatically manages
separate queues for those flows. WFQ is described in [22] and [23]. In
WFQ terms flows are called conversations. A conversation could be a
Telnet session, an FTP transfer, a video stream or some other TCP or
UDP flow between two hosts. WFQ considers packets part of the same
conversation if they contain the same source and destination IP address
and port numbers and IP protocol identifier. When WFQ detects a
conversation and determines that packets belonging to that
conversation need to be queued, it automatically creates a queue for
that conversation. If there are many conversations passing through the
interface, WFQ manages multiple conversation queues, one for each
unique conversation and sorts packets into the appropriate queues
based on the conversation identification information. Because WFQ
creates queues and sorts packets into the queues automatically, there is
no need for manually configured classification lists [24].
 Class-Based Weighted Fair Queuing (CBWFQ): CBWFQ is a
variation of WFQ. It supports the concept of user-defined traffic
classes. Instead of queuing on a per-conversation basis, CBWFQ uses
predefined classes of traffic and then divides the link’s bandwidth
among these classes in order to control their QoS. CBWFQ is not
automatic like WFQ, but it provides more scalable traffic queuing and
bandwidth allocation [25].
 Modified Deficit Round Robin (MDRR): MDRR is a variant of the
well-known Round-Robin scheduling scheme adapted to the
requirements of scheduling variable-length units, such as IP packets.
MDRR is an advanced scheduling algorithm. Within a MDRR
scheduler, each queue has an associated quantum value, which is the
average number of bytes served in each round, and a deficit counter
initialized to the quantum value. Each non-empty flow queue is
serviced in a round-robin fashion by transmitting at most quantum
bytes in each round. Packets in a queue are serviced as long as the
deficit counter is greater than zero. Each serviced packet decreases the
deficit counter by a value equal to its length in bytes. A queue can no
longer be served after the deficit counter becomes zero or negative. In
each new round, each nonempty queue’s deficit counter is incremented
by its quantum value [26].
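To give a feel for the deficit-round-robin idea underlying MDRR (and, loosely, for the per-conversation queues that WFQ maintains), the following Python sketch serves per-flow queues as described above: each non-empty queue is credited with its quantum per round and is served while its deficit counter remains positive. It is illustrative only; it omits MDRR's priority-queue extension and WFQ's weighting, and the flow key, quantum and packet sizes are assumed values.

    from collections import defaultdict, deque

    queues = defaultdict(deque)                     # flow key -> queued packet lengths (bytes)
    deficit = defaultdict(int)                      # flow key -> deficit counter (bytes)
    QUANTUM = 1500                                  # bytes credited per round (assumed)

    def enqueue(src, dst, sport, dport, proto, length):
        # Flows are identified by the same 5-tuple WFQ uses for its conversations.
        queues[(src, dst, sport, dport, proto)].append(length)

    def serve_one_round():
        """One round: credit each non-empty queue, then serve it while its deficit is positive."""
        sent = []
        for flow in list(queues):
            deficit[flow] += QUANTUM
            q = queues[flow]
            while q and deficit[flow] > 0:
                length = q.popleft()
                deficit[flow] -= length             # may go negative, as described above
                sent.append((flow, length))
            if not q:
                del queues[flow]
                deficit[flow] = 0                   # idle flows do not accumulate credit
        return sent

    enqueue("10.0.0.1", "10.0.0.2", 5004, 5004, "udp", 200)    # e.g. a voice-like flow
    enqueue("10.0.0.3", "10.0.0.4", 1029, 20, "tcp", 1500)     # e.g. an FTP-like flow
    print(serve_one_round())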
1.5.5. QUEUE MANAGEMENT/CONGESTION AVOIDANCE
Another important aspect of QoS is to control packet loss. This is achieved
mainly through queue management. Packets get lost for two reasons: either they
become corrupted in transit or are dropped at network nodes – such as routers –
due to network congestion, i.e. because of overflowing buffers. Loss due to
damage is rare (<<0.1%), therefore packet loss is often a sign of network
congestion. To control and avoid network congestion, certain mechanisms must
be employed both at network end-points and at intermediate routers. At network
endpoints, the TCP protocol, which uses adaptive algorithms such as slow start,
additive increase and multiplicative decrease, performs this task well. Inside
routers, queue management must be used so as to ensure that high priority packets
are not dropped and arrive at their destination properly.
Traditionally, packets are dropped only when the queue is full. Either
arriving packets are dropped (tail drop), the packets that have been in the queue
for the longest time period are dropped (drop front) or a randomly chosen packet
is discarded from the queue. TCP stacks react to multiple packet drops by severely
reducing their window sizes, thus causing severe interruption to higher layer
applications. In addition, multiple packet drops can lead to a phenomenon called
“global synchronization” whereby TCP stacks become globally synchronized
resulting in a wave-like traffic flow. Figure 1-7 shows the effect of global
synchronization:
[Figure: throughput of three TCP flows over time; whenever queue utilization reaches 100% a tail drop occurs, all flows back off at once and then ramp up again, producing a wave-like pattern.]
Fig. 1-7: Global Synchronization
Three TCP traffic flows start at different times at low data rates. As time
passes, TCP increases the data rate. When the queue buffer fills up, several tail
drops occur and all three traffic flows drop their throughput abruptly. This is
highly inefficient, because the more TCP flows there are, the sooner the buffers
will fill up and thus the sooner a tail drop will occur, disrupting all TCP flows.
Another problem is that after the first tail drop it is very likely that all three TCP
flows will start almost simultaneously, which will cause the buffers to fill up
quickly and lead to another tail drop.
Furthermore, there are two drawbacks with drop-on-full, namely lock-out
and full queues. Lock-out describes the problem that a single connection or a few
flows monopolize queue space, preventing other connections from getting room in
the queue. The “full queue” problem refers to the tendency of drop-on-full
policies to keep queues at or near maximum occupancy for long periods. Lock-out
causes unfairness of resource usage while steady-state large queues result in
longer delay times.
To avoid these problems, active queue management is required, which
drops packets before a queue becomes full. It allows routers to control when,
which and how many packets to drop [27]. The two most important queue
management mechanisms are the following:
 Random Early Detection (RED): RED randomly drops packets queued on
an interface. As a queue reaches its maximum capacity, RED drops
packets more aggressively to avoid a tail drop. It throttles back flows and
takes advantage of TCP slow start. Rather than tail-dropping all packets
when the queue is full, RED manages queue depth by randomly dropping
some packets as the queue fills past a certain threshold. As packets drop,
the applications associated with the dropped packets slow down and go
through TCP slow start, reducing the traffic destined for the link and
providing relief to the queue system. If the queue continues to fill, RED
drops more packets to slow down additional applications in order to avoid
a tail drop. The result is that RED prevents global synchronization and
increases overall utilization of the line, eliminating the sawtooth utilization
pattern [28]. RED is described in [29].
 Weighted Random Early Detection (WRED): Weighted RED (WRED) is
a variant of RED that attempts to influence the selection of packets to be
discarded. There are many other variants of RED, mechanisms that try to
prevent congestion before it occurs by dropping packets according to
specific criteria (traffic conforms/does not conform), thus ensuring that
mission-critical data is not dropped and gets priority. WRED is a
congestion avoidance and control mechanism whereby packets will be
randomly dropped when the average class queue depth reaches a certain
minimum threshold. As congestion increases, packets will be randomly
dropped (with a rising drop probability) until a second, maximum threshold is
reached, where the drop probability equals the reciprocal of the mark probability
denominator (i.e. one out of every "denominator" packets is dropped). Above the
maximum threshold, packets are tail-dropped.
Figure 1-8 below depicts the WRED algorithm:
[Figure: packet drop probability plotted against average queue size for two WRED profiles; the probability rises from 0 at the minimum threshold to the configured mark probability at the maximum threshold, beyond which packets are tail-dropped. Example values: min=20, max=50, mark probability denominator=1; and min=55, max=70, mark probability denominator=10.]
Fig. 1-8: WRED Algorithm
WRED will selectively instruct TCP stacks to back-off by dropping
packets. Obviously, WRED has no influence on UDP based applications (besides
the fact that their packets will be dropped equally).
The average queue depth is calculated using the following formula:
new_average = (old_average * (1 - 2^(-e))) + (current_queue_depth * 2^(-e))
The “e” is the “exponential weighting constant”. The larger this constant,
the slower the WRED algorithm will react. The smaller this constant, the faster
the WRED algorithm will react, i.e. the faster it will begin dropping packets after
the specified threshold has been reached.
The exponential weighting constant can be set on a per-class basis. The
min-threshold, max-threshold and mark probability denominator can be set on a
per precedence or per DSCP basis.
The mark probability denominator should always be set to 1 (100 % drop
probability at max-threshold).
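The following Python sketch (illustrative only; the thresholds are the example values from Figure 1-8 and the weighting constant is an assumed value, not a recommendation) shows how the averaging formula above drives the WRED drop decision for a single precedence or DSCP profile:

    import random

    E = 3                                           # exponential weighting constant (assumed)
    MIN_TH, MAX_TH = 20, 50                         # thresholds from the Figure 1-8 example
    MARK_PROB_DENOM = 1                             # 1 => 100% drop probability at MAX_TH

    average = 0.0

    def update_average(current_queue_depth):
        """new_average = old_average * (1 - 2^-e) + current_queue_depth * 2^-e"""
        global average
        w = 2.0 ** -E
        average = average * (1.0 - w) + current_queue_depth * w

    def drop_packet():
        """No drops below MIN_TH, random early drops up to MAX_TH, tail drop above it."""
        if average < MIN_TH:
            return False
        if average >= MAX_TH:
            return True
        drop_prob = ((average - MIN_TH) / (MAX_TH - MIN_TH)) / MARK_PROB_DENOM
        return random.random() < drop_prob

    for depth in range(0, 80, 5):                   # the instantaneous queue filling up
        update_average(depth)
        print(round(average, 1), drop_packet())

Because the average lags behind the instantaneous queue depth, a larger weighting constant makes WRED slower to react, exactly as described above.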
It should be stressed that tuning QoS parameters is never a straightforward
process and the results depend on a large number of factors, including the
offered traffic load and profile, the ratio of load to available capacity, the behavior
of end-system TCP stacks in the event of packet drops etc. Therefore, it is
strongly recommended to test these settings in a testbed environment using
expected customer traffic profiles and to tune them, if required.
1.5.6. TRAFFIC ENGINEERING
Traffic engineering is not an actual mechanism, but can be considered the process of designing and configuring a network’s control mechanisms (e.g. routing) in such a way that all network resources are used effectively: engineering the network paths, balancing traffic load, creating redundancy and so on. In other words, it refers to the logical configuration of the network. The task of traffic engineering is to relieve congestion and to prevent the concentration of high-priority traffic. It can essentially provide good QoS even without the implementation of any other mechanism.
Effective traffic engineering provides QoS by recognizing congested or
highly loaded network paths and redirecting traffic through idle network paths,
thus avoiding congestion, optimizing network load balancing and effectively
utilizing the available network resources.
1.6. QOS ARCHITECTURES
QoS architectures are architectural models proposed and established by the
Internet Engineering Task Force (IETF) that utilize the above mentioned
mechanisms (or combinations thereof) to provide the required services.
1.6.1. INTEGRATED SERVICES
The IntServ architecture was an effort by the IETF to expand the Internet’s
service model in order to meet the requirements of emerging applications, such as
voice and video. It is the earliest of the procedures defined by the IETF for
supporting QoS.
Its main purpose is to define a new enhanced Internet service model as
well as provide the means for applications to express end-to-end resource
requirements with support mechanisms in network devices [30].
IntServ assigns a specific flow of data to a so-called “traffic class”, which
defines a certain level of service. It may, for example, require only “best-effort”
delivery, or else it might impose some limits on the delay. Two services are
defined for this purpose: guaranteed and controlled load. Guaranteed service,
defined in [31], provides deterministic delay guarantees, whereas controlled load
service, defined in [32], provides a network service similar to that provided by a
regular best-effort network under lightly loaded conditions [33].
An integral part of IntServ is RSVP. Once a class has been assigned to the
data flow, a “PATH” message is forwarded to the destination to determine
whether the network has the available resources (transmission capacity, buffer
space etc.) needed to support that specific Class of Service (CoS). If all devices
along the path are found capable of providing the required resources, the receiver
generates a “RESV” message and returns it to the source indicating that the latter
may start transmitting its data. The procedure is repeated continually to verify that
the necessary resources remain available. If the required resources are not
available, the receiver sends an RSVP error message to the sender. The IETF
specified RSVP as the signaling protocol for the IntServ architecture. RSVP
enables applications to signal per-flow QoS requirements to the network [34].
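The following minimal sketch illustrates the admission-control idea behind the PATH/RESV exchange described above. It is a toy model, not an RSVP implementation: message names are reduced to return values and only bandwidth is checked.

```python
# Toy sketch of the PATH/RESV admission idea: each hop on the path checks
# whether it can supply the requested bandwidth; only if all hops succeed is
# the per-flow reservation installed and a "RESV" returned to the sender.

def rsvp_reserve(path_routers, required_kbps):
    for router in path_routers:
        if router["available_kbps"] < required_kbps:
            return "RESV_ERROR"                      # refused somewhere on the path
    for router in path_routers:
        router["available_kbps"] -= required_kbps    # install per-flow state
    return "RESV"                                    # sender may start transmitting

path = [{"name": "R1", "available_kbps": 512},
        {"name": "R2", "available_kbps": 256}]
print(rsvp_reserve(path, required_kbps=128))         # RESV
print(rsvp_reserve(path, required_kbps=512))         # RESV_ERROR
```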
Theoretically, this continuous checking of available resources means that
the network resources are used very efficiently. When the resource availability
reaches a minimum threshold, services with strict QoS requirements will not
receive a “RESV” message and will know that the QoS is not guaranteed.
However, although IntServ has some attractive aspects, it does have its
problems. It has no means of ensuring that the necessary resources will be
available when required. Another problem is that it reserves network resources on
a per-flow basis. If multiple flows from an aggregation point all require the same
resources, the flows will nevertheless all be treated individually. The “RESV”
message must be sent separately for each flow. This means that as the number of flows increases, so does the RSVP overhead, and thus the service level degrades. In other words, IntServ does not scale well and wastes network resources [35].
1.6.2. DIFFERENTIATED SERVICES
DiffServ is an architecture that addresses QoS requirements in a
connectionless environment. DiffServ’s specification can be found in [36]. Its
main purpose is to standardize a set of QoS building blocks with which providers
can implement QoS enhanced IP services. DiffServ QoS is meant to be
implemented at the network edge by access devices and then supported across the
backbone by DiffServ-capable routers. Since it operates purely at Layer 3,
DiffServ can be deployed on any Layer 2 infrastructure. DiffServ and non-
DiffServ routers and services can be mixed in the same environment.
DiffServ eliminates much of the complexity that developed from IntServ’s
use of the Resource Reservation Protocol (RSVP) to set up and tear down QoS
reservations. DiffServ eliminates the need for RSVP and instead makes a fresh
start with the existing Type of Service (ToS) field found in the IPv4 header (and its IPv6 counterpart, the Traffic Class field), as illustrated below. IP precedence resides in bits P2-P0, the ToS bits are T3-T0 and the last bit is reserved for future use (currently unused).
P2 P1 P0 T3 T2 T1 T0 CU
Fig. 1-9: ToS Byte
The ToS field was originally intended by IP’s designers to support
capabilities much like QoS – allowing applications to specify high or low delay,
reliability, and throughput requirements – but was never used in a well-defined,
global manner. But since the field is a standard element of IP headers, DiffServ is
reasonably compatible with existing IP equipment and can be implemented to a
large extent by software/firmware upgrades.
DiffServ takes the ToS field (eight bits), renames it the DS (DiffServ) byte, specified in [37], and restructures it so that its six most significant bits form the DSCP used to define IP QoS parameters.
Figure 1-10 illustrates the DS byte, where the DSCP is made up of bits DS5-DS0
and the last two bits are currently unused:
DS5 DS4 DS3 DS2 DS1 DS0 CU CU
Fig. 1-10: DS Byte
The DSCPs defined thus far by the IETF are the following (please note
that class selector DSCPs are defined to be backward compatible with IP
precedence) [38]:
Class Selectors DSCP Value
Default 000000
Precedence 1 001000
Precedence 2 010000
Precedence 3 011000
Precedence 4 100000
Precedence 5 101000
Precedence 6 110000
Precedence 7 111000
Table 1-3: DSCP Class Selectors
The DSCP will determine how routers handle a packet. Instead of trying
to identify and manage IP traffic flows, as RSVP must do, DiffServ brings QoS
consideration to the normal IP practice of packet-by-packet, hop-by-hop routing.
Service decisions are made according to parameters defined by the DS byte, and
are manifested in Per-Hop Behavior (PHB). For example, the default PHB in a
DiffServ network is traditional best-effort service, using first-in first-out (FIFO)
queuing. Other PHBs are defined for higher CoSs. One example is expedited
forwarding (EF). The EF PHB has been defined in [39]. EF defines premium
service and the recommended DSCP value is 101110. When EF packets enter a
DiffServ router, they are meant to be handled in short queues and quickly serviced
with top priority to maintain low latency, packet loss, and jitter. Another PHB,
one that allows variable priority but still ensures that packets arrive in the proper
order, is assured forwarding (AF). The specification of AF can be found in [40].
AF defines four service levels, with each service level having three drop
precedence levels. As a result, AF PHB recommends 12 code points, as shown in
Table 1-4 [41]:
Drop Precedence Class 1 Class 2 Class 3 Class 4
Low 001010 010010 011010 100010
Medium 001100 010100 011100 100100
High 001110 010110 011110 100110
Table 1-4: AF PHB
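The AF code points in Table 1-4 follow a simple bit pattern: the class number occupies the three most significant bits and the drop precedence the next two, so the DSCP value is 8 × class + 2 × drop precedence. The short sketch below reproduces the table entries from that rule.

```python
# Sketch reproducing the AF code points of Table 1-4: for AF class c (1-4) and
# drop precedence d (1 = low, 2 = medium, 3 = high) the DSCP is 8*c + 2*d.

def af_dscp(af_class, drop_precedence):
    return (af_class << 3) | (drop_precedence << 1)

for c in range(1, 5):
    print("Class", c, [format(af_dscp(c, d), "06b") for d in (1, 2, 3)])
# Class 1 -> ['001010', '001100', '001110'], matching the entries of Table 1-4
```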
DiffServ standardizes PHB classification, marking and queuing
procedures, but it leaves equipment vendors and service providers free to develop
PHB flavors of their own. Router vendors will define parameters and capabilities
they think will furnish effective QoS controls and service providers can devise
combinations of PHB to differentiate their qualities of service [42].
As mentioned above, DSCP is backward-compatible with IP precedence.
Table 1-5 illustrates DSCP to IP precedence mappings:
DSCP IP Precedence
0-7 0
8-15 1
16-23 2
24-31 3
32-39 4
40-47 5
48-55 6
56-63 7
Table 1-5: DSCP to IP Precedence Mappings
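Because the class selector code points place the precedence value in the three most significant bits of the DSCP, the mapping of Table 1-5 can be expressed as a simple bit shift, as the following sketch shows.

```python
# Small sketch of the backward-compatibility rule behind Table 1-5: the IP
# precedence value is simply the three most significant bits of the DSCP.

def dscp_to_precedence(dscp):
    return dscp >> 3                           # e.g. EF (46) -> precedence 5

def class_selector_dscp(precedence):
    return precedence << 3                     # e.g. precedence 3 -> DSCP 24 (011000)

print(dscp_to_precedence(46))                  # 5
print(format(class_selector_dscp(3), "06b"))   # 011000
```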
1.7. MULTI-PROTOCOL LABEL SWITCHING
MPLS is a strategy for streamlining the backbone transport of IP packets
across a Layer 3/Layer 2 network. Although it does involve QoS issues, that is not
its main purpose. MPLS is focused mainly on improving Internet scalability
through better Traffic Engineering. MPLS will help to build backbone networks
that better support QoS traffic.
MPLS is essentially a hybrid of the network-layer (Layer 3) and data-link-layer (Layer 2) structures, and may represent an entirely new way of building IP backbone
networks. MPLS is rooted in the IP switching and tag switching technologies that
were developed to bring circuit switching concepts to IP’s connectionless routing
environment.
In the current draft specification [43], MPLS edge devices add a 4-byte (32-bit) label to the header of IP packets. The label is basically an IP encapsulation scheme that provides forwarding information, allowing packets to be steered over the
backbone using pre-established paths. These paths function at Layer 3 or can even
be mapped directly to Layer 2 transport such as ATM or Frame Relay. MPLS
improves Internet scalability by eliminating the need for each router and switch in
a packet’s path to perform conventional address lookups and route calculation.
MPLS also permits explicit backbone routing, which specifies in advance
the hops that a packet will take across the network. Explicit routing will give IP
traffic a semblance of end-to-end connections over the backbone. This should
allow more deterministic, or predictable, performance that can be used to
guarantee QoS.
The MPLS definition of IP QoS parameters is limited. Out of 32 bits total,
an MPLS label reserves just three bits for specifying QoS. Label-switching routers
will examine these bits and forward packets over paths that provide the
appropriate QoS levels. But the exact values and functions of these so-called "experimental bits" remain to be defined.
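For illustration, the sketch below packs and unpacks the 32-bit MPLS shim header (20-bit label, the 3 experimental/QoS bits discussed above, a bottom-of-stack bit and an 8-bit TTL); the field values are arbitrary examples.

```python
# Illustrative packing of the 32-bit MPLS shim header: 20-bit label,
# 3 experimental (QoS) bits, 1 bottom-of-stack bit, 8-bit TTL.

def pack_mpls_header(label, exp, bottom_of_stack, ttl):
    return (label << 12) | (exp << 9) | (bottom_of_stack << 8) | ttl

def exp_bits(header):
    return (header >> 9) & 0b111          # the three "experimental" QoS bits

header = pack_mpls_header(label=1027, exp=5, bottom_of_stack=1, ttl=64)
print(exp_bits(header))                   # 5
```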
MPLS is focused more on backbone network architecture and traffic
engineering than on QoS definition. In an ATM network using cell-based MPLS,
MPLS tags can be mapped directly to ATM virtual circuits (VCs) that provide
ATM’s strong QoS and class of service capabilities. For example, the MPLS label
could specify whether traffic requires constant bit rate (CBR) or variable bit rate
(VBR) service, and the ATM network will ensure that guarantees are met [44].
1.8. IP QOS: A CRITICAL EVALUATION
Up to now we have defined the term QoS, we have analyzed and discussed
mechanisms that provide QoS and we have given an overview of the two QoS
frameworks. The questions that arise at this point are the following: Is QoS really
needed in today’s networks? Does the overprovisioning of bandwidth not solve
the problem? Which framework is more viable in today’s IP networks – mainly
the Internet – and why? The comparison and critical view of the two frameworks presented here is based on the literature survey, which means that the conclusions are theoretical; the comparison reflects the author’s view on this issue.
One might think that any issues, such as network congestion, can be
overcome by simply upgrading the existing lines to provide more bandwidth.
While this is essentially the first step to provide QoS, it is the author’s view that
bandwidth overprovisioning might not always be possible, firstly because the
customer might not want to pay more and secondly because there still might be
applications that would use the whole bandwidth, such as peer-to-peer file sharing
applications, high-definition streaming video and so on. In essence, no matter how
much bandwidth is available, there will always be applications that will make use
of this bandwidth and therefore there will always be the need for mechanisms to
provide QoS for the various types of applications.
Of course, simply upgrading bandwidth is the easiest way to provide some sort of QoS. Implementing mechanisms and frameworks that provide QoS means adding complexity to the network, making it more difficult and challenging to design, configure, operate and monitor. So, is it worth adding this complexity to the network? Wouldn’t it be more efficient to increase the bandwidth every time the network reaches its limit? In the author’s opinion, the answer is no, because – as mentioned above – the customer may not always be willing to pay, even if bandwidth becomes abundant and cheap.
Furthermore, any network is bound to grow in size, which means more users will
access it and as a result bandwidth will be consumed more quickly. It is obvious that at some point, even if all available mechanisms and frameworks are employed in the most efficient way (which is obviously very difficult) to provide QoS, the bandwidth will eventually have to be increased, not drastically, but gradually. One can say that the frameworks that provide QoS simply enhance and
prolong the lifespan of an existing network, without the need for upgrading
existing lines in very short time periods. Therefore, these frameworks and
mechanisms are essential for designing and deploying a modern multi-service
network, even if it increases its complexity. However, the most fundamental
problem of any complex network is that it is very difficult for network managers
to specify the parameters needed in order to provide QoS and that these
parameters must be monitored constantly and adjusted to keep up with the traffic
shifts. Nevertheless, the architectures and mechanisms that provide QoS are
beneficial for both customers and service providers, therefore complexity is not a
strong enough reason not to implement such architectures. Besides, a combination
of slight capacity overprovisioning and a “light” implementation of QoS
mechanisms keeps a balance between network complexity and effectiveness and
is the ideal solution.
Another issue is of course that by implementing QoS mechanisms and
frameworks, ISPs are able to offer different service levels to their customers,
ranging from best-effort to premium. This opens a whole new world of
opportunity for both ISPs and customers, because – to put it simply – the
customer is able to get what he/she wants or needs and the ISP is able to offer the
customer the desired service level. Without these frameworks, this would not be
possible, even by just increasing bandwidth, because – as mentioned earlier – IP
does not have the ability to distinguish between different classes of service and
therefore ISPs would not be able to provide assurance to their customers as to the
offered service quality. The economic aspect of QoS should not be underestimated
in the modern world, as customers’ requirements in today’s Internet are very
diverse. This, in the author’s opinion, is another reason why QoS is essential for
today’s IP networks.
The next question is obviously which framework is more viable for
today’s IP networks. In order to be able to answer this, one must first consider
each framework’s advantages and disadvantages. In the author’s opinion,
DiffServ is the architecture for the future, but why?
When IntServ was first introduced, it was initially considered to be the
framework that would provide efficient QoS in the Internet. However, several
characteristics of this framework made this assumption difficult and IntServ soon
became somewhat “obsolete” for large-scale IP networks.
Firstly, IntServ treats each traffic flow individually, which means that all
routers along a path must keep current flow-state information for each individual
flow in their memory. This approach does not scale well in the Internet core,
because there are thousands, even millions of individual flows. As a result, processing overhead and flow-state information increase in proportion to the number of flows, which in the Internet is unacceptable and highly inefficient. Such per-flow treatment would essentially render any effort to provide QoS meaningless, because the resources needed for reservations would not be enough for every flow. In short, IntServ is not able to cope with a large number of flows at high speeds.
Secondly, resource reservation in the Internet requires the cooperation of
multiple service providers, i.e. the service providers must make agreements for
accounting and settlement information. Such infrastructures do not exist in
today’s Internet and it is highly unlikely that they will be implemented in the
future. We have to keep in mind that there are simply too many ISPs for such a
solution to be viable, because when multiple ISPs are involved in a reservation,
they have to agree on the charges for carrying traffic from other ISPs’ customers
and settle these charges between them. It is true that a number of ISPs are bound by bilateral agreements, but extending these agreements to an Internet-wide policy would be impossible because of the sheer number of ISPs.
Moreover, IntServ’s usefulness was further compromised by the fact that RSVP remained in the experimental stage for a very long period. As a result, its
implementation in end systems was fairly limited and even today there are very
few applications that support it. What is the best technology worth (at least on
paper), if there are no applications that support it? The answer is obvious and this
is also a severe disadvantage of the IntServ architecture.
Finally, IntServ focused mainly on multimedia applications, leaving
mission-critical ones out. The truth is that even though multimedia applications
(videoconferencing or VoIP) are important for today’s corporate networks,
mission-critical applications and their efficient operation are still the most
important ones. After all, that is what their name implies: “mission-critical”,
critical for the company’s growth, expansion and survival. IntServ did not offer
the versatility to effectively support both types of application on a large-scale
level.
At this point one might ask where and how the IntServ framework should
be used and implemented. The answer is that IntServ might be viable in small-
scale corporate networks because of their limited size and – as a result – limited
number of traffic flows. Furthermore, the above mentioned settlement issues
disappear. However, the author’s opinion on this matter is that IntServ might be a
good solution for such types of networks, but what if these networks extend
beyond a single site and require WAN access, which is the case in most corporate
networks nowadays? Then we have to face the fact that IntServ does not scale
well and a WAN means an increased number of users and traffic flows. The
IntServ architecture simply was not designed for today’s networks and therefore is
not a viable solution. IntServ looks good on paper, i.e. in its specification, but in
the real world its implementation range is very limited.
Since the author considers IntServ a viable solution for today’s Internet only in a few cases, the answer to the question “which architecture is more
viable and effective” is obvious: DiffServ is the architecture that will be able to
provide effective and sufficient QoS in the future. But what are DiffServ’s strong
points and advantages over IntServ?
DiffServ relies on provisioning to provide resource assurance. Users’
traffic is divided into pre-defined traffic classes. These classes are specified in the
SLAs between customer and ISP. The ISP is therefore able to adjust the level of
resource provisioning and control the degree of resource assurance to the users.
What this means in essence is that each traffic flow is not treated separately, like
in IntServ, but as an aggregate. This of course makes the DiffServ architecture
extremely scalable, especially in a network where there are several thousands of
traffic flows, because DiffServ classifies each traffic flow into one of (usually)
three traffic classes and treats them according to pre-specified parameters. This
gives ISPs greater control of traffic handling and enables them to protect the
network from misbehaving sources. The downside is that because of the dynamic
nature of traffic flows, exact provisioning is extremely difficult, both for the
customer and the ISP. In order to be able to offer reliable services and
deterministic guarantees, the ISP must assign proper classification characteristics
for each traffic class. This, on the other hand, requires that the customer knows
exactly what applications will be used and what kind of traffic will enter the
network. As a result, network design and implementation is generally more
difficult, but in the author’s opinion it is worth the effort because once resources
have been organized efficiently, ISPs are able to deliver their commitments
towards the customers and minimize the cost. The customer can rest assured that
he/she will get what he/she wants for the agreed price. This means that cost-
effectiveness of services is increased and both ISPs and customers can be satisfied
with the result. The DiffServ model therefore is a more viable solution for today’s
IP networks, even small-scale ones.
The author’s view is that DiffServ’s treatment of flows as aggregates
rather than individually fully justifies its superiority over IntServ, at least for the
large majority of today’s IP networks. IntServ’s capabilities, compared to the ones
of DiffServ, are fairly limited and can be employed only for specific cases.
DiffServ is extremely flexible and scalable. It is true that DiffServ is more
difficult to implement than IntServ because of its nature, but once it has been
set up properly, it is beneficial for all the parties involved. Additionally, it enables
ISPs to develop new services over the Internet and attract more customers to
increase their return. Thus, DiffServ’s economical model is more viable,
profitable and flexible than IntServ’s and this is – in the author’s opinion – a very
important advantage.
The conclusion to all this is that good network design, simplicity and
protection are the keys for providing QoS in a large-scale IP network. Good
network design plus a certain degree of overprovisioning network capacity not
only makes a network more robust against failure, but also eliminates the need for
overly complex mechanisms designed to solve these problems. Implementing the
DiffServ architecture keeps the network simple because in most cases three traffic
classes (premium, assured, best-effort) are sufficient to meet almost every
customer’s requirements under various network conditions. As a result, the
network is utilized more efficiently and the balance between complexity and
providing QoS is kept at its optimum level.
We will proceed by presenting a case study, on which the practical part of
the dissertation will be based. We will discuss the problems and issues faced
during the design and implementation stage and how they were overcome.
CHAPTER 2: CASE STUDY: DESIGN AND IMPLEMENTATION
OF A QOS-ENABLED NETWORK
In order to test and observe QoS mechanisms, a network that supports QoS
functions must be designed and set up. For this purpose, a case study will be used
that involves specific applications that require QoS as well as network devices
(routers) that support QoS functions and protocols.
In our case study we will assume that a customer wants to build a network
that connects four different LANs located at different sites. The budget is limited
and as a result most connections will have limited bandwidth. The customer wants
to use VoIP, videoconferencing, mission-critical, Web and E-mail applications.
Therefore, it is imperative to implement a framework that provides QoS for these
applications. The reason for this is that if all traffic were treated as best-effort, real-time as well as mission-critical applications would suffer greatly and could even become unusable because of network congestion at various nodes in the network.
Another reason is that the available bandwidth is fairly limited and must therefore
be provisioned as efficiently as possible.
Since we already know the customer’s requirements, i.e. the applications
that will be used as well as the network topology design (see section 2.4), the first
issue regarding network design and QoS implementation is to decide which
framework to use in order to provide QoS and why.
The QoS framework to be used will be the DiffServ framework. DiffServ,
as mentioned above, is a standardized, scalable (treating aggregate rather than
individual flows) solution for providing QoS in a large-scale IP network like the
case study network topology. DiffServ is based on the following mechanisms:
• Classification: each packet is classified into one of several pre-defined CoSs, either according to the values in its IP/TCP/UDP headers or according to an already marked DSCP.
• Marking: after classification, each packet is marked with the DSCP value corresponding to its CoS (if not already marked).
• Policing: control of the amount of traffic offered in each CoS.
• Scheduling and Queue Management: each CoS is assigned to a queue on a router’s egress interface. Prioritization is applied among these queues, so that each CoS gets the desired performance level. To improve throughput when queues are filling up, queue management mechanisms, such as RED, can be applied to these queues.
But why DiffServ? Since the network is a large-scale corporate network,
we have to assume that there will be a large number of traffic flows, especially
during peak times. As discussed in section 1.8, IntServ does not scale well and
would not be an efficient solution for the customer’s needs. DiffServ’s scalability
also suggests that it is “future-proof”, i.e. it can be more easily upgraded,
expanded and adapted to the customer’s needs.
Another reason is that the applications used do not support RSVP and
hence implementation of the IntServ architecture would not be the best solution.
In the author’s opinion, DiffServ’s scalability and flexibility is the main reason
why it is more appropriate in the context of the case study. A complete
justification for why the DiffServ architecture is preferable has been given above
in section 1.8. We will assume the same for our case study.
2.1. IDENTIFICATION OF APPLICATIONS
After deciding on the architecture, the next step is to identify the
applications the customer will use in the network. This is necessary in order to be
able to appropriately classify the traffic into CoSs.
The applications that will be used in the case study will be the following:
• VoIP
• Videoconference
• Streaming video
• Mission-critical data (e.g. database, client/server applications)
• Internet (e-mail, web browsing etc.)
VoIP and videoconference are both real-time streaming applications that
use UDP and therefore require high levels of QoS (premium services). For
mission-critical traffic, which uses TCP, proper and timely delivery must be
ensured, thus this type of traffic will require assured services. Finally, the rest of
the traffic (such as web browsing or e-mail) will be treated as best-effort and
allocated the remaining bandwidth.
Policing has to be applied so that the first two CoSs (premium and
assured) do not claim all the bandwidth for themselves, thus leaving none for the
best-effort traffic.
Here we see the difficulty of properly designing and implementing CoSs.
Every aspect of the network traffic needs to be thoroughly considered so as to
ensure proper operation of all CoSs. In the author’s opinion, this poses the
greatest challenge in QoS implementation. It is also essential that the customer knows his/her traffic requirements exactly, so that the service provider is
able to design precise CoSs according to the customer’s needs.
2.2. DEFINITION OF QOS CLASSES
The foundation of the DiffServ architecture is the definition of the Classes
of Service (CoSs) to be supported by the network and the mapping of each CoS to
a DSCP value. This part of QoS implementation is probably the most important
one, because if CoSs are not chosen and designed properly and precisely, QoS
provision will be inefficient and will not provide the expected service level.
Therefore, it is essential to carefully plan and design the CoSs that will be used
and assign traffic appropriately.
We have chosen to classify traffic with DSCP rather than with IP
precedence because DSCP offers more flexibility (i.e. more granular treatment of
traffic) than IP precedence and is therefore more appropriate for the customer’s
requirements in our case study. It is also the author’s opinion that DSCP is far
more effective than IP precedence in a DiffServ-capable network like the one that
will be used in the case study.
Knowing that real-time traffic has very specific requirements and is very
sensitive to the four network performance parameters discussed in section 1.3, we
can classify such traffic with the EF DSCP (46), which receives the highest-priority treatment. Furthermore,
real-time traffic must get strict priority over all other traffic. This will ensure
proper operation of the applications. Thus, all real-time traffic will be classified as
premium.
Mission-critical traffic will receive assured services (AF), because it is not
as delay- and jitter-sensitive as real-time traffic. Loss rate for this kind of traffic
must still be kept at a minimum possible.
Finally, the rest of the traffic (Web, e-mail) will be treated as best-effort,
but we still must ensure that some bandwidth remains for this type of traffic, as
mentioned in the previous section.
After the identification of applications, careful consideration of the above
factors and with regard to the general application requirements outlined in Table
1-1, the CoSs that have been defined for our case study are presented below:
Class of Service   Applications                              Performance Level                             DSCP Value
Premium            VoIP, videoconference, video streaming    Low delay, low jitter, low loss rate          101110 (46)
Assured            Mission-critical data                     Controlled delay and jitter, low loss rate    001010 (10)
Best-effort        Rest of traffic                           Uncontrolled (best-effort)                    000000 (0)
Table 2-1: Case Study Classes of Service
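To illustrate how the classes of Table 2-1 translate into a classification and marking step, the sketch below maps packets to a CoS and DSCP. The protocol and port criteria are assumptions made for the sketch only; they are not the actual case-study configuration.

```python
# Illustrative classifier/marker for the CoSs of Table 2-1. The port numbers
# below (SIP, H.323, RTSP, SQL Server) are assumed examples, not the
# case-study configuration.

EF, AF11, BE = 46, 10, 0                  # DSCP values from Table 2-1

def classify(packet):
    """Return (class_name, dscp) for a packet described as a small dict."""
    if packet["protocol"] == "udp" and packet["dst_port"] in (5060, 1720, 554):
        return "premium", EF              # VoIP / videoconference / streaming video
    if packet["protocol"] == "tcp" and packet["dst_port"] == 1433:
        return "assured", AF11            # mission-critical client/server traffic
    return "best-effort", BE              # web, e-mail and everything else

print(classify({"protocol": "udp", "dst_port": 5060}))   # ('premium', 46)
print(classify({"protocol": "tcp", "dst_port": 80}))     # ('best-effort', 0)
```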
At implementation stage it is very important to identify what protocol each
application uses. Real-time applications are UDP-based whereas mission-critical
applications are TCP-based. This fact will have an impact on how the router will
handle the traffic at interface egress and will be discussed later on. Moreover,
“Controlled delay” implies delay similar to that of an uncongested network.
In the network testbed, all DiffServ mechanisms will be implemented in
the network’s routers. No mechanism is to be implemented in the end systems, i.e.
no host-based marking will be implemented. It is obvious that proper
configuration of network devices is also very important and will be discussed later
on.
2.3. ROLES OF INTERFACES
It is very important to understand that a router’s interface in a network can
have one of two possible roles. It can either be an edge or a core interface.
Depending on the role of a router interface, different mechanisms have to be
applied at that interface, because traffic has different characteristics when entering
an edge interface than when entering a core interface. These differences are
discussed below.
This difference also plays an important role at configuration stage. Which
mechanisms need to be applied at what point and why? These are all issues that
we have to face when implementing and configuring the network.
• Edge Interface:
These are interfaces receiving/sending traffic from/to end-systems, i.e.
there is no other layer 3 device, such as a router, between the interface and the
end-systems. End-systems will send unmarked traffic through these interfaces,
thus ingress traffic at these interfaces will be classified, marked with a DSCP
value and policed.
Policing on ingress is necessary for the Premium traffic. This traffic will
have absolute priority in the scheduling schemes applied in the network, therefore
it is necessary to control the amount of Premium traffic offered to the network in
order to avoid starvation of the other CoSs. Also, although not necessary, Assured
traffic will be policed as well in order to guarantee a minimum amount of
bandwidth for the Best-effort traffic.
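A single-rate token bucket is one common way to realize the ingress policing described above. The following is a minimal sketch with illustrative rate and burst values; it simply drops out-of-profile packets.

```python
# Minimal single-rate token-bucket policer sketch for the "police on ingress"
# behaviour described above; rate and burst values are illustrative only.

class TokenBucket:
    def __init__(self, rate_bps, burst_bytes):
        self.rate = rate_bps / 8.0            # token refill rate in bytes per second
        self.burst = burst_bytes
        self.tokens = burst_bytes
        self.last = 0.0

    def conforms(self, packet_bytes, now):
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if packet_bytes <= self.tokens:
            self.tokens -= packet_bytes
            return True                       # forward (in profile)
        return False                          # drop (out of profile)

policer = TokenBucket(rate_bps=25600, burst_bytes=3200)   # ~20% of a 128 kbps link
print(policer.conforms(200, now=0.0), policer.conforms(4000, now=0.01))
```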
At egress, these interfaces will classify packets according to their DSCP
values and will apply scheduling and queue management mechanisms. In
particular, Premium traffic, which is UDP-based, will receive strict queue priority
over all other traffic; Assured traffic will be treated with WRED, so as to
maximize link utilization and avoid TCP global synchronization, since Assured
traffic is TCP-based. The rest of the traffic will be treated as best effort.
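The egress behaviour just described can be pictured as a strict-priority queue for Premium traffic in front of the remaining classes. The sketch below assumes a simple 2:1 weighted round-robin between Assured and Best-effort once the Premium queue is empty; the actual ratios would be tuned during testing.

```python
# Simplified egress scheduler sketch: strict priority for Premium, then an
# assumed 2:1 weighted round-robin between Assured and Best-effort.

from collections import deque
from itertools import cycle

queues = {"premium": deque(), "assured": deque(), "best-effort": deque()}
wrr_order = cycle(["assured", "assured", "best-effort"])   # assumed 2:1 weighting

def dequeue():
    if queues["premium"]:                 # strict priority for real-time traffic
        return queues["premium"].popleft()
    for _ in range(3):                    # at most one full WRR cycle
        cos = next(wrr_order)
        if queues[cos]:
            return queues[cos].popleft()
    return None

queues["best-effort"].append("web"); queues["premium"].append("voip")
print(dequeue(), dequeue())               # voip web
```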
At this point we cannot exactly specify bandwidth requirements for all
three CoSs, because we must first test their behavior. An initial estimation is 20%
for Premium traffic, 35% for Assured traffic and 20% for the rest of the traffic.
The remaining unused 25% will be used for overhead and other unpredictable
traffic that may occur. This is to minimize the possibility of link saturation and
network congestion. If all 100% of the link bandwidth were allocated, any traffic unaccounted for would push the offered load beyond the link’s capacity and thus congestion would occur. This 25% can be considered a “safety net” for the link.
These numbers are initial estimations and cannot be predicted beforehand
because of the heterogeneous nature of the network, i.e. the variety of link speeds.
For example, at a 128Kbps link 20% of the bandwidth for Premium traffic might
not be enough and may have to be increased. In that case, bandwidth for Assured
traffic might also have to be increased, but the remaining bandwidth may not be
enough for the best-effort traffic, i.e. the link will be too saturated for the best-
effort traffic to come through. These are design issues that are difficult to
overcome, therefore we must design the mechanisms as efficiently as possible.
One solution would be to upgrade the link to a higher speed, but we are assuming
that the customer does not want to spend money on such an upgrade.
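As a worked example of these initial estimates, the following snippet applies the 20/35/20/25 split to the slowest and fastest serial links in the case study.

```python
# Worked example of the initial per-class bandwidth estimates discussed above,
# applied to the 128 kbps, 256 kbps and 1984 kbps serial links of the case study.

shares = {"premium": 0.20, "assured": 0.35, "best-effort": 0.20, "headroom": 0.25}

for link_kbps in (128, 256, 1984):
    allocation = {cos: round(link_kbps * share, 1) for cos, share in shares.items()}
    print(link_kbps, "kbps:", allocation)

# On the 128 kbps links Premium receives only ~25.6 kbps, which, as noted above,
# may have to be increased at the expense of the other classes.
```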
• Core Interface:
These are interfaces receiving/sending traffic from/to other routers. Ingress
traffic at these interfaces will already be marked with a DSCP. Therefore, for
these interfaces there is just a need to classify traffic at egress according to DSCP
values and apply scheduling and queue management mechanisms. Premium traffic
will again get strict queue priority; Assured traffic will be serviced with WRED
with an exponential weighting constant of 1 so as to avoid TCP global
synchronization and maximize link utilization. Best-effort will be serviced with a
regular tail-drop mechanism.
Once again we cannot precisely specify bandwidth percentages for the
three CoSs, therefore we will estimate the same numbers as for the edge
interfaces. The precise numbers will have to be worked out during the testing
phase.
Figure 2-1 illustrates an example topology with core and edge interfaces
and which mechanisms will be applied at each:
[Figure: example topology in which the edge interface performs classification, marking, policing and queuing, while the core interfaces perform queuing only.]
Fig. 2-1: Edge and Core Interfaces
The figures that follow point out the differences between interface ingress
and interface egress of a router. Figure 2-2 is a highly simplified design of a
router’s operation. It should be noted that an interface’s ingress in one direction
acts simultaneously as the interface’s egress in the other direction and vice versa.
For reasons of simplicity only one traffic direction is shown:
[Figure: data enters at the interface ingress, passes through the routing and forwarding functions and leaves via the interface egress.]
Fig. 2-2: Simplified Router Schematic
A router consists therefore of three logical blocks. Figure 2-3 illustrates
the QoS actions that may be taken at the router’s interface ingress:
[Figure: at ingress, incoming traffic is classified, marked and policed under a continuous metering function before being passed on.]
Fig. 2-3: Interface Ingress Schematic
As traffic enters the interface, it is examined and classified. The packets are then marked according to the pre-defined policies (in our case with a DSCP value) and policed before they exit the ingress block. Throughout all these stages the traffic is “measured” by the metering function, i.e. identified according to the above-mentioned criteria (source/destination IP address and port number, protocol, DSCP value). The traffic is then forwarded to the routing and forwarding logical block of the router.
Figure 2-4 illustrates the QoS functions that may take place at interface
egress. As traffic enters, it can be re-classified according to its destination. It then
may be remarked and either policed or shaped. The metering function remains
active until the packet reaches the queue, where it is scheduled and finally exits
the interface and is sent on its way.
It should be noted that usually only queue management and scheduling
mechanisms are employed at interface egress, since the other functions have been
performed at interface ingress. Nevertheless, the figure shows that there is a
possibility for classification, marking and policing at egress in case it is needed.
Finally, as stated above, the location of the interface, i.e. whether it is an edge or a core interface, is also crucial to the decision of which QoS mechanisms to employ.
[Figure: at egress, traffic may be classified, marked, and policed or shaped under the metering function, then scheduled/queued before leaving the interface.]
Fig. 2-4: Interface Egress Schematic
2.4. CASE STUDY NETWORK TOPOLOGY
The customer’s network consists of several different routers linked
together at different link speeds and types. The network interconnects four
separate LANs at different locations. The topology figure shows the applications that will be used at each LAN as appropriate icons, as well as the customer’s specifications for the link speeds and types.
The first observation is that there are several 128Kbps links, a fact that
will complicate efficient design for QoS provisioning because of the very limited
bandwidth.
Figure 2-5 below shows the topology proposed by the customer:
[Figure: four LANs, each with workstations, a VoIP telephone and videoconferencing equipment (one LAN also hosting a database server), attached over 100 Mbit Fast Ethernet to a core network of serial links at 1984 kbps, 1544 kbps, 256 kbps and 128 kbps.]
Fig. 2-5: Case Study Network Topology
2.5. LABORATORY NETWORK TOPOLOGY
The actual testbed topology differs from the case study topology in that two workstations are used instead of four LANs.
Figure 2-6 shows the actual topology that will be used to simulate the case
study topology and execute all tests:
[Figure: laboratory testbed of five Cisco routers (R7206, R3640, R2620, R2621, R2612) interconnected by 1984 kbps, 256 kbps and 128 kbps serial links, with two workstations (PC 1 and PC 2) attached over 100 Mbit Fast Ethernet and running NetMeeting, IP Traffic, Sniffer and QPM; router, interface and IP address designations are given in the figure.]
Fig. 2-6: Laboratory Network Topology
The figure also shows router and interface designations, workstation IP
addresses as well as applications that will be used at each workstation. These
applications will be presented in chapter 3, except for QPM, which will be
presented and discussed in section 2.8. Router configurations will also be
discussed in section 2.8.
2.6. NETWORK DEVICES
As mentioned previously, all network devices are Cisco® Systems routers.
Several routers from different product families will be used for the laboratory
topology testbed, ranging from large-scale enterprise routers (Cisco 7206) to
medium-scale branch office routers (Cisco 2612). In order to gain a better
understanding of the capabilities of the different routers, a short description of
each product family follows:
 2600 Series:
The Cisco 2600 series is a family of modular multiservice access routers,
providing LAN and WAN configurations, multiple security options and a range of
high performance processors.
Cisco 2600 modular multiservice routers offer versatility, integration, and
power. With over 70 network modules and interfaces, the modular architecture of
the Cisco 2600 series easily allows interfaces to be upgraded to accommodate
network expansion.
Cisco 2600 series routers are also capable of delivering a broad range of
end-to-end IP and Frame Relay-based packet telephony solutions.
Implementing QoS in IP Networks - Nikolaos Tossiou
Implementing QoS in IP Networks - Nikolaos Tossiou
Implementing QoS in IP Networks - Nikolaos Tossiou
Implementing QoS in IP Networks - Nikolaos Tossiou
Implementing QoS in IP Networks - Nikolaos Tossiou
Implementing QoS in IP Networks - Nikolaos Tossiou
Implementing QoS in IP Networks - Nikolaos Tossiou
Implementing QoS in IP Networks - Nikolaos Tossiou
Implementing QoS in IP Networks - Nikolaos Tossiou
Implementing QoS in IP Networks - Nikolaos Tossiou
Implementing QoS in IP Networks - Nikolaos Tossiou
Implementing QoS in IP Networks - Nikolaos Tossiou
Implementing QoS in IP Networks - Nikolaos Tossiou
Implementing QoS in IP Networks - Nikolaos Tossiou
Implementing QoS in IP Networks - Nikolaos Tossiou
Implementing QoS in IP Networks - Nikolaos Tossiou
Implementing QoS in IP Networks - Nikolaos Tossiou
Implementing QoS in IP Networks - Nikolaos Tossiou
Implementing QoS in IP Networks - Nikolaos Tossiou
Implementing QoS in IP Networks - Nikolaos Tossiou
Implementing QoS in IP Networks - Nikolaos Tossiou
Implementing QoS in IP Networks - Nikolaos Tossiou
Implementing QoS in IP Networks - Nikolaos Tossiou
Implementing QoS in IP Networks - Nikolaos Tossiou
Implementing QoS in IP Networks - Nikolaos Tossiou
Implementing QoS in IP Networks - Nikolaos Tossiou
Implementing QoS in IP Networks - Nikolaos Tossiou
Implementing QoS in IP Networks - Nikolaos Tossiou
Implementing QoS in IP Networks - Nikolaos Tossiou
Implementing QoS in IP Networks - Nikolaos Tossiou
Implementing QoS in IP Networks - Nikolaos Tossiou
Implementing QoS in IP Networks - Nikolaos Tossiou
Implementing QoS in IP Networks - Nikolaos Tossiou
Implementing QoS in IP Networks - Nikolaos Tossiou
Implementing QoS in IP Networks - Nikolaos Tossiou
Implementing QoS in IP Networks - Nikolaos Tossiou
Implementing QoS in IP Networks - Nikolaos Tossiou
Implementing QoS in IP Networks - Nikolaos Tossiou
Implementing QoS in IP Networks - Nikolaos Tossiou
Implementing QoS in IP Networks - Nikolaos Tossiou
Implementing QoS in IP Networks - Nikolaos Tossiou
Implementing QoS in IP Networks - Nikolaos Tossiou
Implementing QoS in IP Networks - Nikolaos Tossiou
Implementing QoS in IP Networks - Nikolaos Tossiou
Implementing QoS in IP Networks - Nikolaos Tossiou
Implementing QoS in IP Networks - Nikolaos Tossiou
Implementing QoS in IP Networks - Nikolaos Tossiou
Implementing QoS in IP Networks - Nikolaos Tossiou
Implementing QoS in IP Networks - Nikolaos Tossiou
Implementing QoS in IP Networks - Nikolaos Tossiou
Implementing QoS in IP Networks - Nikolaos Tossiou
Implementing QoS in IP Networks - Nikolaos Tossiou
Implementing QoS in IP Networks - Nikolaos Tossiou
Implementing QoS in IP Networks - Nikolaos Tossiou
Implementing QoS in IP Networks - Nikolaos Tossiou
Implementing QoS in IP Networks - Nikolaos Tossiou
Implementing QoS in IP Networks - Nikolaos Tossiou
Implementing QoS in IP Networks - Nikolaos Tossiou
Implementing QoS in IP Networks - Nikolaos Tossiou
Implementing QoS in IP Networks - Nikolaos Tossiou
Implementing QoS in IP Networks - Nikolaos Tossiou
Implementing QoS in IP Networks - Nikolaos Tossiou
Implementing QoS in IP Networks - Nikolaos Tossiou
Implementing QoS in IP Networks - Nikolaos Tossiou
Implementing QoS in IP Networks - Nikolaos Tossiou
Implementing QoS in IP Networks - Nikolaos Tossiou
Implementing QoS in IP Networks - Nikolaos Tossiou
Implementing QoS in IP Networks - Nikolaos Tossiou
Implementing QoS in IP Networks - Nikolaos Tossiou
Implementing QoS in IP Networks - Nikolaos Tossiou
Implementing QoS in IP Networks - Nikolaos Tossiou
Implementing QoS in IP Networks - Nikolaos Tossiou
Implementing QoS in IP Networks - Nikolaos Tossiou
Implementing QoS in IP Networks - Nikolaos Tossiou
Implementing QoS in IP Networks - Nikolaos Tossiou
Implementing QoS in IP Networks - Nikolaos Tossiou
Implementing QoS in IP Networks - Nikolaos Tossiou
Implementing QoS in IP Networks - Nikolaos Tossiou
Implementing QoS in IP Networks - Nikolaos Tossiou
Implementing QoS in IP Networks - Nikolaos Tossiou
Implementing QoS in IP Networks - Nikolaos Tossiou
Implementing QoS in IP Networks - Nikolaos Tossiou

More Related Content

What's hot

Integrating wind and solar energy in india for a smart grid platform
Integrating wind and solar energy in india for a smart grid platformIntegrating wind and solar energy in india for a smart grid platform
Integrating wind and solar energy in india for a smart grid platformFarhan Beg
 
Systems Analysis And Design Methodology And Supporting Processes
Systems Analysis And Design Methodology And Supporting ProcessesSystems Analysis And Design Methodology And Supporting Processes
Systems Analysis And Design Methodology And Supporting ProcessesAlan McSweeney
 
Oreilly cisco ios access lists
Oreilly   cisco ios access listsOreilly   cisco ios access lists
Oreilly cisco ios access listsFadel Abbas
 
Introduction to Solution Architecture Book
Introduction to Solution Architecture BookIntroduction to Solution Architecture Book
Introduction to Solution Architecture BookAlan McSweeney
 
Web application security the fast guide
Web application security the fast guideWeb application security the fast guide
Web application security the fast guideDr.Sami Khiami
 
Aidan_O_Mahony_Project_Report
Aidan_O_Mahony_Project_ReportAidan_O_Mahony_Project_Report
Aidan_O_Mahony_Project_ReportAidan O Mahony
 
Ibm system z in a mobile world providing secure and timely mobile access to...
Ibm system z in a mobile world   providing secure and timely mobile access to...Ibm system z in a mobile world   providing secure and timely mobile access to...
Ibm system z in a mobile world providing secure and timely mobile access to...bupbechanhgmail
 
Implementing IBM InfoSphere BigInsights on System x
Implementing IBM InfoSphere BigInsights on System xImplementing IBM InfoSphere BigInsights on System x
Implementing IBM InfoSphere BigInsights on System xIBM India Smarter Computing
 
S.r0141-0_v1.0_m2_m_study_report
  S.r0141-0_v1.0_m2_m_study_report  S.r0141-0_v1.0_m2_m_study_report
S.r0141-0_v1.0_m2_m_study_reportjoehsmith
 
Data Privatisation, Data Anonymisation, Data Pseudonymisation and Differentia...
Data Privatisation, Data Anonymisation, Data Pseudonymisation and Differentia...Data Privatisation, Data Anonymisation, Data Pseudonymisation and Differentia...
Data Privatisation, Data Anonymisation, Data Pseudonymisation and Differentia...Alan McSweeney
 
ISS Docking Standards
ISS Docking StandardsISS Docking Standards
ISS Docking StandardsBill Duncan
 
Ibm tivoli security solutions for microsoft software environments redp4430
Ibm tivoli security solutions for microsoft software environments redp4430Ibm tivoli security solutions for microsoft software environments redp4430
Ibm tivoli security solutions for microsoft software environments redp4430Banking at Ho Chi Minh city
 
I Series System Security Guide
I Series System Security GuideI Series System Security Guide
I Series System Security GuideSJeffrey23
 
Sun GlassFish Web Space Server 100 Installation Guide
Sun GlassFish Web Space Server 100 Installation GuideSun GlassFish Web Space Server 100 Installation Guide
Sun GlassFish Web Space Server 100 Installation Guidewebhostingguy
 
Set Up Security and Integration with DataPower XI50z
Set Up Security and Integration with DataPower XI50zSet Up Security and Integration with DataPower XI50z
Set Up Security and Integration with DataPower XI50zSarah Duffy
 
Arduino bộ vi điều khiển cho tất cả chúng ta part 1
Arduino bộ vi điều khiển cho tất cả chúng ta part 1Arduino bộ vi điều khiển cho tất cả chúng ta part 1
Arduino bộ vi điều khiển cho tất cả chúng ta part 1tungdientu
 
Nvidia cuda programming_guide_0.8.2
Nvidia cuda programming_guide_0.8.2Nvidia cuda programming_guide_0.8.2
Nvidia cuda programming_guide_0.8.2Piyush Mittal
 

What's hot (20)

Integrating wind and solar energy in india for a smart grid platform
Integrating wind and solar energy in india for a smart grid platformIntegrating wind and solar energy in india for a smart grid platform
Integrating wind and solar energy in india for a smart grid platform
 
Systems Analysis And Design Methodology And Supporting Processes
Systems Analysis And Design Methodology And Supporting ProcessesSystems Analysis And Design Methodology And Supporting Processes
Systems Analysis And Design Methodology And Supporting Processes
 
Oreilly cisco ios access lists
Oreilly   cisco ios access listsOreilly   cisco ios access lists
Oreilly cisco ios access lists
 
Introduction to Solution Architecture Book
Introduction to Solution Architecture BookIntroduction to Solution Architecture Book
Introduction to Solution Architecture Book
 
Web application security the fast guide
Web application security the fast guideWeb application security the fast guide
Web application security the fast guide
 
Aidan_O_Mahony_Project_Report
Aidan_O_Mahony_Project_ReportAidan_O_Mahony_Project_Report
Aidan_O_Mahony_Project_Report
 
Ibm system z in a mobile world providing secure and timely mobile access to...
Ibm system z in a mobile world   providing secure and timely mobile access to...Ibm system z in a mobile world   providing secure and timely mobile access to...
Ibm system z in a mobile world providing secure and timely mobile access to...
 
Implementing IBM InfoSphere BigInsights on System x
Implementing IBM InfoSphere BigInsights on System xImplementing IBM InfoSphere BigInsights on System x
Implementing IBM InfoSphere BigInsights on System x
 
S.r0141-0_v1.0_m2_m_study_report
  S.r0141-0_v1.0_m2_m_study_report  S.r0141-0_v1.0_m2_m_study_report
S.r0141-0_v1.0_m2_m_study_report
 
Data Privatisation, Data Anonymisation, Data Pseudonymisation and Differentia...
Data Privatisation, Data Anonymisation, Data Pseudonymisation and Differentia...Data Privatisation, Data Anonymisation, Data Pseudonymisation and Differentia...
Data Privatisation, Data Anonymisation, Data Pseudonymisation and Differentia...
 
IBM Streams - Redbook
IBM Streams - RedbookIBM Streams - Redbook
IBM Streams - Redbook
 
ISS Docking Standards
ISS Docking StandardsISS Docking Standards
ISS Docking Standards
 
Ibm tivoli security solutions for microsoft software environments redp4430
Ibm tivoli security solutions for microsoft software environments redp4430Ibm tivoli security solutions for microsoft software environments redp4430
Ibm tivoli security solutions for microsoft software environments redp4430
 
I Series System Security Guide
I Series System Security GuideI Series System Security Guide
I Series System Security Guide
 
Sun GlassFish Web Space Server 100 Installation Guide
Sun GlassFish Web Space Server 100 Installation GuideSun GlassFish Web Space Server 100 Installation Guide
Sun GlassFish Web Space Server 100 Installation Guide
 
Set Up Security and Integration with DataPower XI50z
Set Up Security and Integration with DataPower XI50zSet Up Security and Integration with DataPower XI50z
Set Up Security and Integration with DataPower XI50z
 
Idl basics
Idl basicsIdl basics
Idl basics
 
Arduino bộ vi điều khiển cho tất cả chúng ta part 1
Arduino bộ vi điều khiển cho tất cả chúng ta part 1Arduino bộ vi điều khiển cho tất cả chúng ta part 1
CHAPTER 3: MEASUREMENT INFRASTRUCTURE FOR VALIDATING QOS OF A NETWORK ......... 84
   3.1. TRAFFIC GENERATION AND MEASUREMENT TOOLS ............................. 84
   3.2. DESIGN OF VALIDATION TESTS ........................................... 88
   3.3. COLLECTION AND ANALYSIS OF RESULTS ................................... 91
CHAPTER 4: CONCLUSIONS ....................................................... 99
   4.1. SUMMARY ............................................................... 99
   4.2. CONCLUSIONS ........................................................... 101
   4.3. RECOMMENDATIONS FOR FURTHER STUDY ..................................... 103
BIBLIOGRAPHY ................................................................. 105
REFERENCES ................................................................... 109
NOTATION ..................................................................... 113
APPENDIX ..................................................................... 115
   PART I – INTERIM REPORT .................................................... 115
   PART II – INTERIM REPORT GANTT CHART ....................................... 133
   PART III – FINAL PROJECT TIME PLAN ......................................... 134
   PART IV – FINAL PROJECT GANTT CHART ........................................ 135
   PART V – IETF RFC SPECIFICATIONS ........................................... 136
LIST OF FIGURES

FIGURE 1-1: THE PROCESS OF QOS ............................................... 14
FIGURE 1-2: IP PRECEDENCE AND DSCP IN THE TOS BYTE ........................... 18
FIGURE 1-3: EXAMPLE OF TRAFFIC RATE .......................................... 20
FIGURE 1-4: POLICING ......................................................... 21
FIGURE 1-5: SHAPING .......................................................... 22
FIGURE 1-6: RSVP PROCESS ..................................................... 23
FIGURE 1-7: GLOBAL SYNCHRONIZATION ........................................... 27
FIGURE 1-8: WRED ALGORITHM ................................................... 29
FIGURE 1-9: TOS BYTE ......................................................... 33
FIGURE 1-10: DS BYTE ......................................................... 33
FIGURE 2-1: EDGE AND CORE INTERFACES ......................................... 51
FIGURE 2-2: SIMPLIFIED ROUTER SCHEMATIC ...................................... 52
FIGURE 2-3: INTERFACE INGRESS SCHEMATIC ...................................... 52
FIGURE 2-4: INTERFACE EGRESS SCHEMATIC ....................................... 53
FIGURE 2-5: CASE STUDY NETWORK TOPOLOGY ...................................... 54
FIGURE 2-6: LABORATORY NETWORK TOPOLOGY ...................................... 55
FIGURE 2-7: QPM MAIN SCREEN .................................................. 72
FIGURE 2-8: DEVICE PROPERTIES PAGE ........................................... 72
FIGURE 2-9: DEVICE CONFIGURATION LISTING ..................................... 73
FIGURE 2-10: INTERFACE PROPERTIES PAGE ....................................... 73
FIGURE 2-11: POLICY EDITOR ................................................... 74
FIGURE 2-12: CREATED POLICIES ................................................ 79
FIGURE 2-13: POLICY COMMANDS ................................................. 80
FIGURE 2-14: QDM MAIN SCREEN ................................................. 82
FIGURE 3-1: NETMEETING SCREENSHOT ............................................ 85
FIGURE 3-2: IP TRAFFIC SCREENSHOT ............................................ 86
FIGURE 3-3: SNIFFER PRO SCREENSHOT ........................................... 87
LIST OF TABLES

TABLE 1-1: APPLICATIONS REQUIRING QOS ........................................ 15
TABLE 1-2: IP PRECEDENCE VALUES, BITS AND NAMES .............................. 19
TABLE 1-3: DSCP CLASS SELECTORS .............................................. 34
TABLE 1-4: AF PHB ............................................................ 35
TABLE 1-5: DSCP TO IP PRECEDENCE MAPPINGS .................................... 35
TABLE 2-1: CASE STUDY CLASSES OF SERVICE ..................................... 48
TABLE 2-2: NETWORK DEVICES, DESCRIPTIONS AND IOS VERSIONS .................... 58
TABLE 2-3: INTERFACES, IP ADDRESSES AND INTERFACE LINK SPEEDS ................ 59
TABLE 2-4: ROUTER LOOPBACK INTERFACE IP ADDRESSES ............................ 59
TABLE 2-5: RECOMMENDED CAR NB AND EB SETTINGS ................................ 67
TABLE 3-1: FIRST TEST: TCP BEST-EFFORT ....................................... 91
TABLE 3-2: FIRST TEST: TCP BEST-EFFORT & TCP ASSURED ......................... 92
TABLE 3-3: FIRST TEST: ALL TRAFFIC FLOWS WITHOUT VIDEOCONFERENCE ............. 92
TABLE 3-4: FIRST TEST: ALL TRAFFIC FLOWS WITH VIDEOCONFERENCE ................ 93
TABLE 3-5: FIRST TEST: ALL TRAFFIC FLOWS WITHOUT VIDEOCONFERENCE AFTER
           FIRST POLICY CHANGE ............................................... 94
TABLE 3-6: FIRST TEST: ALL TRAFFIC FLOWS WITH VIDEOCONFERENCE AFTER
           FIRST POLICY CHANGE ............................................... 94
TABLE 3-7: FIRST TEST: ALL TRAFFIC FLOWS WITH VIDEOCONFERENCE AFTER
           SECOND POLICY CHANGE .............................................. 95
TABLE 3-8: SECOND TEST: ALL TRAFFIC FLOWS WITH VIDEOCONFERENCE ............... 96
ABSTRACT

Traditionally the Internet has provided users with a "best-effort" service without guarantees as to whether or when data is delivered. Data is processed as quickly as possible regardless of its nature. With the advent of new technologies and developments, user demands as well as network and Internet applications have become more complex, thus requiring better services than "best-effort".

Quality of Service (QoS) is the specification of a level of service, which is required from the network. It is defined by specifying certain service parameters, such as delay, loss, throughput and jitter. QoS is implemented by a set of architectures that tackle this problem by providing the means necessary to reserve resources, differentiate between different traffic types that require different servicing properties and manage the flow of data from source to destination based on specific characteristics, thus ensuring actual as well as timely delivery, according to requirements. These architectures achieve this by utilizing various mechanisms for traffic classification, policing, signaling, queue scheduling and queue management. All these mechanisms will be presented and discussed in detail.

The two Internet QoS architectures that will be examined are Integrated Services (IntServ) and Differentiated Services (DiffServ). The main function of IntServ is to reserve resources (such as bandwidth or maximum delay) along the path of a flow. A flow can be defined as a stream of data from a source to a destination with specific characteristics such as source and destination address, source and destination port numbers and protocol identifier. The main purpose of DiffServ is to divide flows into different classes and treat them according to their pre-specified requirements as well as enforce specific policies.
MPLS (Multi-Protocol Label Switching) is a transport technology for IP networks that possesses certain features which may assist the provision of QoS. MPLS labels packets in a way that makes the analysis of the IP header at intermediate points between source and destination unnecessary, thus reducing processing overhead greatly and significantly speeding up traffic flow. MPLS may also be used for implementing explicit routing of certain traffic flows through an IP network, thus realizing traffic engineering, which is a mechanism for improving service quality.

Implementation, analysis, experiments and derived results will be based mainly on the DiffServ architecture. The relevant mechanisms will be implemented exclusively on Cisco® Systems network equipment and all results will be relevant to Cisco® configurations and QoS implementations.

This dissertation aims to investigate the mechanisms, algorithms, protocols and technologies that have been defined for implementing QoS in IP networks; to identify the state-of-the-art availability and operation of QoS features and capabilities in current commercial network devices; and finally to experiment with such capabilities in a networking laboratory environment by designing, implementing and validating a network provided with QoS mechanisms, in order to derive conclusions as to the usefulness and feasibility of QoS in modern IP networks.
ACKNOWLEDGEMENTS

This dissertation has been completed with the help of a lot of people. Without their input, comments and guidance it would have been impossible.

First and foremost I would like to thank my academic supervisor, Mr. Nasheter Bhatti, who has provided me with useful comments and guided me in the right direction throughout the duration of this dissertation.

Secondly, I would like to thank Mr. Evangelos Vayias, my industrial supervisor and mentor at Intracom S.A., who has helped me with lots of technical details and provided me with a variety of technical documents. He was also eager to comment on the dissertation and guide me in the right direction in the practical part of the dissertation. This dissertation would have been impossible without his support.

I would also like to thank all the people at Intracom S.A., especially the Cisco® Professional Services team, who have all been supportive throughout my stay at the company and helped me out whenever I had a problem or needed guidance and assistance.

Also, I would like to thank the Chachalos family, who have generously provided me with a home to stay in while in Athens. I owe them a lot.

Last but not least I would like to wholeheartedly thank my parents. Without them neither my studies nor this dissertation would have been possible. They were always by my side, providing me with any kind of support I required. May they always be healthy.
CHAPTER 1: IP QUALITY OF SERVICE: AN OVERVIEW

1.1. INTRODUCTION

The exponential growth of the Internet, networks in general and network users around the globe has led to the emergence of a significant number of new user requirements and demands. New applications have emerged to meet the users' needs and provide them with new services. Such applications include e-commerce, Voice-over-IP (VoIP), streaming media (audio and video), teleconferencing and other bandwidth-intensive applications. This situation has led to high demand for a network architecture that will provide appropriate facilities and mechanisms to ensure proper operation of these new applications [1].

Traditionally, the Internet does not have the ability to distinguish between different service classes, therefore being unable to cope with different requirements for different applications. All traffic flows are treated as "best-effort", which means that there is no mechanism to ensure actual delivery, nor is it possible to ensure timely delivery of specific traffic flows that would require it.

There is no doubt that the dominating protocol used in networks nowadays – besides the Internet of course – is the Internet Protocol (IP). Fundamentally, it is a best-effort protocol; that is why it cannot guarantee the timely or lossless delivery of data packets. Reliable delivery of data packets at the destination is the responsibility of the Transmission Control Protocol (TCP), which is located just above IP in the well-known seven-layer Open Systems Interconnection (OSI) reference model. In other words, traffic – no matter what its type and requirements – is processed as quickly as possible and there is no guarantee as to timely or actual delivery [2].
This lack of guarantees, together with the new user demands and requirements, has led to the emergence of Quality of Service (QoS) network architectures and mechanisms in today's IP networks.

The objective of this dissertation is to investigate these architectures and mechanisms by analyzing the way they operate and distinguishing their usability under different conditions and for different scenarios; to identify the current availability and operation of QoS architectures and mechanisms in current commercial network devices, such as routers and switches; and to experiment with the QoS capabilities and features of such devices in a networking laboratory environment by designing, implementing and validating a network that utilizes QoS mechanisms. This will lead to a conclusion as to which QoS mechanisms are better for different kinds of traffic flows and at which point they should be applied. Finally, it will lead to a conclusion as to which QoS architecture is more feasible, viable and useful for today's IP networks and why.

The investigation of QoS architectures and mechanisms is a vast subject, which is of very high interest. QoS in the Internet is a hotly debated issue, because several frameworks have been proposed to provide different service classes and different levels of guarantees to different users, according to their needs. In other words, QoS is most certainly going to become one of the basic and most widely used sets of mechanisms and architectures for any network infrastructure, because the Internet is establishing itself as a global multiservice infrastructure [3].

This dissertation will investigate the main QoS mechanisms, such as queue management, queue scheduling, traffic policing/shaping and signaling, and the QoS architectures that utilize these mechanisms (DiffServ and IntServ). These will be applied and implemented on current state-of-the-art network equipment in order to investigate their operation, feasibility and usefulness.
The dissertation will not deal with specialized mechanisms that are available only for specific traffic flows and on specific network equipment. Furthermore, it will not investigate the economic aspects of QoS that arise, i.e. different pricing policies from Internet Service Providers (ISPs) for different service offerings. These will be mentioned, but not analyzed in detail. Such issues are beyond the scope of the dissertation.

At the end of this dissertation the reader should be able to distinguish between the different QoS architectures and tell which mechanism should be used for different traffic flows under what circumstances, i.e. which mechanism is more appropriate for what kind of traffic flow and what type of requirements. Furthermore, by comparing the two different QoS architectures, the reader should be able to tell which one is more scalable or appropriate for today's modern IP networks, although – as we will see later on – both architectures have their purpose and field of usage. Nevertheless, the tests that will be carried out will lead to the conclusion that one of the two architectures is preferable for any kind of network, and the reasons for this will be discussed.

The starting point will be to present the current state of the Internet to justify the existence of and need for QoS by identifying the various ways the Internet is used today. We will then proceed by defining the term "Quality of Service", identifying applications that require QoS and discussing the various QoS mechanisms and architectures available. Chapter 1 will be concluded with the author's critical evaluation of the two QoS architectures. In Chapter 2 we will present and discuss a case study of a network topology on which the tests will be carried out and results will be collected. In Chapter 3 we will present the experiments and tests that have been carried out at the laboratory as well as analyze all collected results. Finally, we will conclude in the last chapter by summarizing the results and linking them with the theoretical part as well as by giving directions for further work on this topic.
The whole project has been carried out at Intracom S.A. in Peania, Greece, at the Department of Information Systems, Cisco® Professional Services team. Intracom is the largest company in the Telecommunications and Information Technology industry in Greece, with a significant international presence. It has over 4000 employees and is a certified Cisco® Golden Partner and Professional Services partner. The Cisco® Professional Services team consists of Cisco-certified network engineers (CCNP level and above). The networking laboratory is fully equipped with the latest state-of-the-art networking technology and is available for the conduct of experiments and simulations, both in the scope of commercial projects as well as for research and development purposes.

The project was carried out in the context of a study on the subject according to the company's requirements. The practical part – topology design etc. – was based on a larger project carried out by the company for a customer. As a result, network equipment in the laboratory was not always available for testing, and this was the main limitation of the dissertation. One further limitation was the subject itself: because it is so vast, it was impossible to cover all its aspects in such a short time, especially in the practical part. However, the author has tried to give an overview and a critical evaluation of all relevant material.

All tests and results were based on a case study which was implemented in the networking laboratory. It was carried out solely by the author, with technical help and guidance from the industrial mentor, Mr. Evangelos Vayias, who helped mostly with the practical part of this dissertation, i.e. router configurations, topology design, test execution and collection of results.
1.2. THE INTERNET: A BIG PICTURE

The Internet consists of many interconnected networks of different types, shapes and sizes, which use different technologies. Local Area Networks (LANs), Metropolitan Area Networks (MANs) and Wide Area Networks (WANs) are interconnected by a backbone. LANs and MANs are usually corporate, campus or citywide networks; WANs are usually ISP or large-scale corporate networks [4]. As stated above, all these networks are interconnected using different protocols, mechanisms and link layer technologies. However, when referring to the Internet, the common denominator of these networks is the Internet Protocol (IP) suite, which is uniform for all these network types.

Today, IP forms the foundation for a universal and global network. However, this raises several issues for ISPs as well as enterprise and corporate networks. IP's problem is that it is a connectionless technology and therefore cannot differentiate between different traffic types or make certain guarantees for different types of applications [5]. QoS architectures attempt to resolve this issue. This dissertation is based on the IP protocol suite and investigates QoS mechanisms based upon IP, i.e. it will deal only with QoS provision at Layers 2 and 3 and will not go beyond these layers.

With the growth of the Internet, new applications emerged that required specific service quality, and together with additional user demands it became apparent that best-effort service is not enough and that certain mechanisms were needed to distinguish between different traffic classes and provide QoS according to user requirements. However, there is one opinion which claims that new technologies, such as optical fiber and Dense Wavelength Division Multiplexing (DWDM), will make bandwidth so abundant and cheap that QoS will become unnecessary, i.e. there will be no need for mechanisms that provide QoS because bandwidth will be almost "infinite" [6]. The reply to this opinion is that no matter how much bandwidth is made available, new applications will be developed to utilize it and once again make the implementation of QoS mechanisms and architectures necessary.
Also, high bandwidth availability cannot be ensured in all areas throughout a network. Bottleneck points will always exist, where QoS mechanisms will have to be applied.

The main driving forces of QoS are basically different user demands for different traffic classes and the ability of ISPs to offer different service levels – and, as a result, different pricing policies – to their clients according to agreements. These agreements are called Service Level Agreements (SLAs), which specify what type of service will be provided to the client (i.e. premium, assured or best-effort) and how the client will have access to the bandwidth he/she needs. In other words, an SLA specifies the QoS level the client is to receive and the ISP is to offer.

1.3. DEFINITION OF QOS

Firstly, there is the need to define the term "Quality of Service" and what is meant by it. The new generation of applications, such as real-time voice and video, has specific bandwidth and timing requirements, such as delay tolerance and higher end-to-end delivery guarantees than regular best-effort data [7]. These new types of data streams cannot operate properly with the traditional best-effort service. QoS mechanisms attempt to resolve this issue in various ways that will be discussed later on.

There are no agreed quantifiable measures that unambiguously define QoS as perceived by a user. Terms such as "better", "worse", "high", "medium", "low", "good", "fair" and "poor" are typically used, but these are subjective and therefore cannot always be translated precisely into network-level parameters that can subsequently be used for network design.
Furthermore, QoS is also heavily dependent upon factors such as compression algorithms, coding schemes, the presence of higher layer protocols for security, data recovery and retransmission, and the ability of applications to adapt to network congestion or their requirements for synchronization [8]. However, network providers need performance metrics that they can agree upon with their peers (when exchanging traffic) and with service providers buying resources from them with certain performance guarantees. The following five network performance metrics are considered the most important in defining the QoS required from the network [9]:

• Availability: Ideally, a network should be available 100% of the time, i.e. it should be up and running without any failures. Even a figure as high-sounding as 99.8% translates into about an hour and a half of downtime per month, which may be unacceptable for a large enterprise. Serious carriers strive for 99.9999% availability, which they refer to as "six nines" and which translates into a downtime of 2.6 seconds per month.

• Throughput: This is the effective data transfer rate measured in bits per second. It is not the same as the maximum capacity (maximum bandwidth), or wire speed, of the network. Sharing a network lowers the throughput that can be achieved by a user, as does the overhead imposed by the extra bits included in every packet for identification and other purposes. A minimum rate of throughput is usually guaranteed by a service provider (who needs to have a similar guarantee from the network provider). A traffic flow, such as Voice-over-IP (VoIP), requires a minimum throughput capacity in order to operate correctly and efficiently, otherwise it becomes virtually unusable.
• Packet loss ratio: Packet loss specifies the percentage of packets lost during transport [10]. Network devices, such as switches and routers, sometimes have to hold data packets in buffered queues when a link gets congested. If the link remains congested for too long, the buffered queues will overflow and data will be lost, since the devices will have to drop packets at the end of the queue, also known as tail drop. The lost packets must be retransmitted, adding to the total transmission time. In a well-managed network, packet loss will typically be less than 1% averaged over a month. It is obvious that real-time applications may be very sensitive to packet loss, since any drop would require retransmission of the lost packets and would interfere with the real-time transmission. Such applications demand packet loss guarantees from the network.

• Delay (latency): The elapsed time for a packet to travel from the sender, through the network, to the receiver is known as delay. The end-to-end delay consists of serialization delay, propagation delay and switching delay. Serialization delay (also called transmission delay) is the time it takes for a network device to transmit a packet at the given output rate and depends on link bandwidth as well as packet size. Propagation delay is the time it takes for a transmitted bit to reach its destination and it is limited by the speed of light. Finally, switching delay refers to the time it takes for a network device to start transmitting a packet after its reception [11]. Unless satellites are involved, the end-to-end delay of a 5000km voice call carried by a circuit-switched telephone network is about 25ms. For the Internet, a voice call may easily exceed 150ms of delay because of signal processing (digitizing and compressing the analog voice input) and congestion (queuing). For a real-time application, such as voice communication, packets have to be delivered within a specific delay bound, otherwise they become useless for the receiver.
• Jitter (delay variation): Jitter refers to the variation of the delay on individual packets of the same flow, i.e. one packet may be delayed more or less than another. In other words, it is the variation in end-to-end transit delay. This has many causes, including variations in queue length, variations in the processing time needed to reorder packets that arrived out of order because they traveled over different paths, and variations in the processing time needed to reassemble packets that were segmented by the source before being transmitted. Jitter may affect applications that need a constant flow of data from transmitter to receiver in order to maintain smooth operation. Any variations in this flow may cause the application performance to degrade up to a point where it becomes unusable.

It is apparent that newly emerged applications require specific guarantees regarding timeliness and actual delivery. QoS is realized and implemented by a set of technologies, architectures, mechanisms and methods that attempt to provide the resources the applications require without compromising average throughput and network performance in general. The main purpose of these mechanisms is to deliver guaranteed and differentiated services to any IP-based network. Since the Internet is made up of many different link technologies and physical media, end-to-end QoS in IP-based networks is mainly attained at Layer 3, the network layer.

A general overview of QoS is illustrated below in Figure 1-1. Applications requiring QoS (mission-critical data, audio/video teleconferencing etc.), which are time- and delay-sensitive, real-time, jitter- and loss-intolerant, will utilize one of the two specific QoS architectural models, IntServ or DiffServ. These architectures in turn apply specific mechanisms to provide QoS.
These mechanisms deal with traffic classification, traffic shaping and policing, resource allocation (signaling), queue scheduling/congestion management and queue management/congestion avoidance. It is possible, although not necessary, to apply more than one mechanism or a specific combination thereof to provide better QoS. This depends mostly on the type of traffic that will pass through the network as well as on network equipment and topology. Ultimately, it is the network manager's decision to define which mechanisms should be applied at which network locations to provide efficient QoS.

For realizing end-to-end QoS, such mechanisms are implemented at the Network Layer. However, they may be supported by relevant functions at lower layers. Furthermore, MPLS is employed at this stage (if chosen). This is where possible packet header switching takes place. All the above mechanisms and architectures will be discussed in detail later on. Finally, the traffic is transmitted by using any type of Link Layer technology, such as Asynchronous Transfer Mode (ATM), Digital Subscriber Line (DSL) or Ethernet.

In order to provide end-to-end QoS, all network devices along the route must support QoS functions, otherwise incoming traffic will be serviced using the traditional best-effort service until the next hop along the route that supports QoS. Furthermore, which mechanisms should be employed in a QoS-enabled network depends on the router's location, i.e. whether it is an edge or a core router and whether the interface is an edge or a core interface. These details will be discussed in Chapter 2. Figure 1-1 below illustrates the QoS process:
[Figure 1-1 is a layered schematic. Applications (Voice-over-IP, e-mail, mission-critical data, videoconference, VPN, FTP), characterized as time-sensitive, real-time, delay-, loss- and jitter-intolerant, feed into the QoS architectures (IntServ, DiffServ). These apply the QoS mechanisms over the Internet Protocol (IP): classification/marking (IP Precedence, DSCP), policing/shaping, signaling (RSVP), queue scheduling/congestion management (WFQ, CBWFQ, MDRR), queue management/congestion avoidance (RED, WRED), traffic engineering and MPLS. The traffic is then carried by the physical-layer transmission mechanisms (ATM, DSL, SONET, Ethernet, modem, other).]

Fig. 1-1: The Process of QoS

The above schematic gives a basic idea of how QoS is employed in a network, from the application level to the physical layer. The first step is to identify the applications that require QoS, which will be discussed below.
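As a quick numerical illustration of two of the metrics defined in this section, the short Python sketch below converts an availability percentage into downtime per month (reproducing the 99.8% and "six nines" figures quoted above, under the assumption of a 30-day month) and estimates jitter from a list of one-way delay samples; the delay values are invented purely for illustration.

```python
# Back-of-the-envelope checks for two QoS metrics (illustrative only).

SECONDS_PER_MONTH = 30 * 24 * 3600  # assuming a 30-day month


def monthly_downtime_seconds(availability_percent: float) -> float:
    """Downtime per month implied by a given availability percentage."""
    return (1.0 - availability_percent / 100.0) * SECONDS_PER_MONTH


def mean_jitter(delays_ms: list) -> float:
    """Average absolute difference between consecutive one-way delays (ms)."""
    diffs = [abs(b - a) for a, b in zip(delays_ms, delays_ms[1:])]
    return sum(diffs) / len(diffs)


print(monthly_downtime_seconds(99.8) / 60)   # ~86.4 minutes, about an hour and a half
print(monthly_downtime_seconds(99.9999))     # ~2.6 seconds ("six nines")

# Hypothetical per-packet one-way delays of a single flow, in milliseconds:
samples = [150.0, 152.5, 149.0, 160.0, 151.0]
print(mean_jitter(samples))                  # 6.5 ms of jitter on average
```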
1.4. APPLICATIONS REQUIRING QOS

A vast number of network applications exist nowadays and are used on a daily basis by home or business users around the world. While most applications work more or less satisfactorily with the traditional best-effort service, the new generation of applications requires QoS in order to operate correctly and efficiently.

In general, such applications are real-time audio/video applications (e.g. teleconferencing, streaming audio/video), e-commerce, transaction and mission-critical applications, such as database and client/server applications. All these applications have in common the fact that they are sensitive, to a smaller or greater extent, to the four QoS metrics mentioned above (throughput, loss, delay and jitter). Table 1-1 illustrates different kinds of applications and their sensitivities regarding QoS [12].

Traffic Type                   Sensitivity to:
                               Throughput     Loss    Delay     Jitter
E-mail (MIME)                  Low to High    -       Low       Low
Real-time Voice (e.g. VoIP)    Low            High    High      High
Real-time Video                High           High    High      High
Telnet                         Low            -       Medium    Low
File transfer (FTP)            High           -       Low       Low
E-commerce, transactions       Low            -       High      Medium
Casual browsing                Low            -       Medium    Low
Serious browsing               Medium         -       High      Low

Table 1-1: Applications Requiring QoS

The transport protocol used by all these applications, except real-time voice and video, is TCP. TCP handles losses with retransmissions, which eventually increases delay.
Thus, what is perceived by applications when packets are lost is an increase in end-to-end delay. Moreover, the throughput sensitivity of e-mail ranges from low to high, because it depends on what kind of e-mail is being sent. If it is a simple text message, then throughput sensitivity is low, but if a large attachment (e.g. 5MB) is to be sent, then throughput sensitivity is high. Note that network availability is a prerequisite for all of the above factors regardless of the application, therefore it is not presented in the table, as it is equal for every application.

After observing the above table, it becomes apparent that some applications are more sensitive than others and therefore require increased QoS guarantees so as to perform satisfactorily. For example, e-mail is sensitive only to packet loss, which is, moreover, handled by the reliable transfer offered by TCP, whereas real-time video is very sensitive to throughput, delay and jitter and sensitive to packet loss. This shows that every application has different QoS requirements and must be treated differently if it is to operate within its acceptable limits. The QoS mechanisms that match each application's requirements are discussed below.

1.5. QOS MECHANISMS

As mentioned above, there are many different QoS mechanisms that perform different tasks under different circumstances. These mechanisms can be implemented either separately or in combination. It always depends on the objective and on what purpose they should serve as well as what kind of applications will be used. Furthermore, it should be noted that no single transport or network-layer mechanism will provide QoS for all flow types and that a QoS-enabled network has to deploy a number of mechanisms in order to meet the broad range of requirements [13].
The main QoS mechanisms are:

• Packet classification/marking: Multi-Field (MF) or Behavior-Aggregate (BA) classification (IP precedence or DiffServ Code Point (DSCP) marking).
• Traffic conditioning: Policing, shaping.
• Resource Reservation/Signaling: Resource Reservation Protocol (RSVP).
• Queue Scheduling/Congestion Management: Weighted Fair Queuing (WFQ), Class-Based WFQ (CBWFQ), Modified Deficit Round Robin (MDRR).
• Queue Management/Congestion Avoidance: Random Early Detection (RED), Weighted RED (WRED).

Each one of these mechanisms serves a specific purpose, as will be made clear later on, and should be employed under specific circumstances. In other words, a mechanism would provide better QoS than another for a specific type of application. The mechanisms deal with the four mentioned QoS metrics and each one "specializes" in a specific area and performs different functions. The mechanisms have to be set up in every network device (router or switch) along the traffic route, either at the incoming and/or at the outgoing interface, so as to provide end-to-end QoS. The QoS architectures that will be discussed later refer to the QoS of a network as a whole. A discussion and explanation of the QoS mechanisms follows.

1.5.1. CLASSIFICATION/MARKING

Packet classification is a mechanism that assigns packets to a certain pre-specified class based on one or more fields in that packet. This procedure is also called packet marking or coloring, where certain fields in the packet are marked, specifying a class for that packet [14].
There are two different types of classification for IP QoS, which include the following:

• IP flow identification based on source IP address, destination IP address, IP protocol field, source port number and destination port number.
• Classification based on IP precedence bits or the DSCP field. Packets can be marked to indicate their traffic class.

The first method is a general and common approach for identifying traffic flows so as to mark them and assign them to a specific traffic class. The IP precedence and/or DSCP fields are more effective methods for identifying, classifying and marking traffic flows and enable a much wider range of QoS functions. The IP precedence field is located in the packet's IP header and indicates the relative priority with which the packet should be handled, ranging from "routine" (best-effort) to "critical" (highest priority). Furthermore, there are another two classes, which are used for network and internetwork control messages. The IP precedence field is made up of three bits in the IP header's Type of Service (ToS) byte, while DSCP uses six bits. Figure 1-2 shows the ToS byte structure and illustrates the IP precedence and DSCP locations within the ToS byte:

[Figure: the eight bit positions 0 to 7 of the ToS byte, with the Precedence field occupying the first three bits and the DSCP field the first six bits.]

Fig. 1-2: IP Precedence and DSCP in the ToS Byte
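To make the bit layout of Figure 1-2 concrete, the following Python sketch encodes and decodes the ToS byte. It simply reflects the fact that IP precedence occupies the three most significant bits and the DSCP the six most significant bits of the byte; it is a generic illustration, not tied to any particular vendor implementation.

```python
# IP precedence occupies bits 0-2 (the three most significant bits) of the
# ToS byte; the DSCP occupies bits 0-5 (the six most significant bits).

def tos_from_precedence(precedence: int) -> int:
    """Build a ToS byte carrying an IP precedence value (0-7)."""
    return (precedence & 0x07) << 5


def tos_from_dscp(dscp: int) -> int:
    """Build a ToS byte carrying a DSCP value (0-63)."""
    return (dscp & 0x3F) << 2


def precedence_of(tos: int) -> int:
    return (tos >> 5) & 0x07


def dscp_of(tos: int) -> int:
    return (tos >> 2) & 0x3F


tos = tos_from_precedence(5)                       # precedence 5 ("critical")
print(hex(tos), precedence_of(tos))                # 0xa0 5

tos = tos_from_dscp(46)                            # an example DSCP value
print(hex(tos), dscp_of(tos), precedence_of(tos))  # 0xb8 46 5
```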
Table 1-2 shows the different IP precedence values, bits and names [15]:

IP Precedence Value    IP Precedence Bits    IP Precedence Names
0                      000                   Routine
1                      001                   Priority
2                      010                   Immediate
3                      011                   Flash
4                      100                   Flash Override
5                      101                   Critical
6                      110                   Internetwork Control
7                      111                   Network Control

Table 1-2: IP Precedence Values, Bits and Names

Packet marking can be performed either by the application that sends the traffic or by a node in the network. The ToS byte has been specified by the IETF in [16]. DSCP is discussed in detail in section 1.6.2 because it is the basis of DiffServ.

1.5.2. POLICING/SHAPING

Service providers and network users agree on a profile for the traffic that is to receive certain QoS. This traffic profile usually specifies a maximum rate for the traffic which is to be admitted into the network at a certain QoS level, as well as some degree of the traffic's burstiness. The user's traffic transmitted to the network has to conform to this profile in order to receive the agreed QoS. Non-conforming traffic is not guaranteed to receive this QoS and may be dropped.
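A traffic profile of this kind (a committed rate plus an allowance for burstiness) is commonly expressed as a token bucket. The following Python sketch is a generic illustration of the idea rather than a description of any particular router's policer; the rate and burst values are arbitrary examples. It decides, packet by packet, whether traffic is in-profile or out-of-profile:

```python
class TokenBucket:
    """Generic token-bucket conformance check (rate in bytes/s, burst in bytes)."""

    def __init__(self, rate_bytes_per_s: float, burst_bytes: float):
        self.rate = rate_bytes_per_s   # token refill rate
        self.burst = burst_bytes       # bucket depth, i.e. the allowed burst size
        self.tokens = burst_bytes      # the bucket starts full
        self.last_time = 0.0

    def conforms(self, arrival_time: float, packet_bytes: int) -> bool:
        # Refill tokens for the time elapsed since the last packet, capped at
        # the bucket depth, then check whether this packet still fits.
        elapsed = arrival_time - self.last_time
        self.tokens = min(self.burst, self.tokens + elapsed * self.rate)
        self.last_time = arrival_time
        if packet_bytes <= self.tokens:
            self.tokens -= packet_bytes
            return True                # in-profile: forward as agreed
        return False                   # out-of-profile: drop or re-mark


# Example: a 1 Mbit/s committed rate (125,000 bytes/s) with an 8 kB burst allowance.
policer = TokenBucket(rate_bytes_per_s=125_000, burst_bytes=8_000)
for t, size in [(0.00, 1500), (0.01, 1500), (0.02, 9000)]:
    verdict = "in-profile" if policer.conforms(t, size) else "out-of-profile"
    print(f"t={t:.2f}s  {size} bytes  {verdict}")
```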
Traffic conditioning is a set of mechanisms that permit, on the one hand, the service provider to monitor and police conformance to the traffic profile and, on the other hand, the user to shape transmitted traffic to this profile. Traffic conditioning is usually employed at network boundaries, i.e. where traffic first enters or exits the network. Traffic conditioning is implemented in network access devices, i.e. routers and switches that support QoS mechanisms [17]. As the traffic enters the network, it has a rate that fluctuates over time. Figure 1-3 illustrates this:

[Figure: traffic rate plotted against time, fluctuating above and below the profile limit.]

Fig. 1-3: Example of Traffic Rate

In order to provide QoS, traffic has to conform to the pre-specified and pre-agreed (between customer and ISP) traffic profile. Policing is the mechanism that allows the network to monitor traffic behavior (e.g. burstiness) and manage throughput by dropping out-of-profile packets. In essence, policing discards misbehaving traffic in order to avoid network resources being exhausted or other QoS classes being starved. This will ensure that network resources, as allocated, will provide the agreed QoS to the agreed traffic.
The policing function in a router is often called a dropper, because it essentially drops out-of-profile packets, i.e. packets that exceed the agreed policy throughput. Figure 1-4 illustrates the policing mechanism:

[Figure: traffic rate plotted against time, with the portion of traffic exceeding the profile limit cut off (dropped).]

Fig. 1-4: Policing

Policing is used not only to drop out-of-profile packets, but also to re-mark them and indicate to dropping mechanisms downstream that they should be dropped ahead of the in-profile packets.

Shaping, on the other hand, allows the network user to enforce bandwidth policies by buffering excess traffic and smoothing bursts. This increases link utilization efficiency, as packets are not dropped, but in a way "spread" across the available bandwidth. Shaping, as the word itself implies, shapes traffic in order to meet a pre-defined profile, i.e. to be in-profile. Figure 1-5 shows how shaping works:
[Figure: traffic rate plotted against time, with traffic exceeding the profile limit buffered and released later, so that the transmitted rate stays at or below the limit.]

Fig. 1-5: Shaping

Shaping is commonly used where speed mismatches exist (e.g. going from a HQ site with a T1/E1 connection to a Frame Relay network, down to a remote site with a 128Kbps connection).
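By contrast with policing, the sketch below illustrates the shaping idea: excess packets are not discarded but held back and released at the profile rate, so bursts are smoothed at the cost of added delay. It assumes an idealized shaper with an unlimited buffer and is not a model of any specific device.

```python
def shape(arrivals, rate_bytes_per_s):
    """Return the departure time of each packet from an idealized shaper.

    arrivals: list of (arrival_time_s, packet_bytes) tuples in arrival order.
    Excess traffic is buffered, never dropped, and sent at the profile rate.
    """
    departures = []
    shaper_free_at = 0.0
    for arrival_time, size in arrivals:
        start = max(arrival_time, shaper_free_at)   # wait for earlier packets
        finish = start + size / rate_bytes_per_s    # serialize at the profile rate
        departures.append(round(finish, 5))
        shaper_free_at = finish
    return departures


# A burst of three 1500-byte packets arriving almost back to back, shaped to a
# 128 kbit/s (16,000 bytes/s) profile: the first leaves quickly, the rest are delayed.
print(shape([(0.0, 1500), (0.001, 1500), (0.002, 1500)], 16_000))
# -> [0.09375, 0.1875, 0.28125]
```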
1.5.3. SIGNALING

The signaling mechanism in an IP context essentially refers to the Resource Reservation Protocol (RSVP), which is an integral part of the IntServ architecture, discussed in section 1.6.1. RSVP was initially proposed in [18] and defined in [19]. RSVP was specified as a signaling protocol for applications to reserve resources.

The signaling process is illustrated in Figure 1-6. The sender sends a "PATH" message to the receiver (1), specifying the characteristics of the traffic. Every intermediate router along the path forwards the "PATH" message to the next hop (2) determined by the routing protocol. Upon receiving a "PATH" message (3), the receiver responds with a "RESV" message (4) to request resources for the flow. Every intermediate router along the path can reject or accept the request of the "RESV", depending on resource availability (5). If the request is rejected, the router will send an error message to the receiver, and the signaling process will terminate. If the request is accepted, link bandwidth and buffer space are allocated for the flow (6) and the related flow state information will be installed in the router [20].

[Figure: a sender and a receiver connected across an RSVP cloud; PATH messages (1) to (3) travel hop by hop from sender to receiver, and RESV messages (4) to (6) return along the same path.]

Fig. 1-6: RSVP Process

There are two problems with RSVP. Firstly, the amount of state information increases proportionally with the number of flows, which places a huge storage and processing overhead on the routers; therefore it does not scale well in the Internet core. Secondly, the requirements on routers are high, because they all must implement and support RSVP, admission control, MF classification and packet scheduling. These problems are further discussed in section 1.6.1.
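The hop-by-hop admission decision at the heart of this process can be sketched as follows. This is a deliberately simplified illustration of the "RESV" admission logic described above (the reservation succeeds only if every router on the path has enough spare bandwidth); the router names and capacities are invented, and real RSVP of course carries considerably more state and message types.

```python
def resv_admission(path_routers, requested_bps):
    """Walk a RESV request along the path and try to reserve bandwidth per hop.

    path_routers: list of dicts with 'name', 'capacity_bps' and 'reserved_bps',
    in the order the RESV message visits them (receiver towards sender).
    Returns True if every hop accepts; otherwise rolls back and returns False.
    """
    accepted = []
    for router in path_routers:
        available = router["capacity_bps"] - router["reserved_bps"]
        if requested_bps > available:
            for r in accepted:                      # reject: undo partial reservations
                r["reserved_bps"] -= requested_bps
            return False                            # an error would be signaled here
        router["reserved_bps"] += requested_bps     # install flow state at this hop
        accepted.append(router)
    return True


path = [  # hypothetical 2 Mbit/s links with some bandwidth already reserved
    {"name": "R3", "capacity_bps": 2_000_000, "reserved_bps": 500_000},
    {"name": "R2", "capacity_bps": 2_000_000, "reserved_bps": 1_800_000},
    {"name": "R1", "capacity_bps": 2_000_000, "reserved_bps": 0},
]
print(resv_admission(path, 300_000))   # False: R2 only has 200 kbit/s left
print(resv_admission(path, 150_000))   # True: all three hops accept the flow
```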
1.5.4. QUEUE SCHEDULING/CONGESTION MANAGEMENT

Packet delay control is an important purpose of Internet QoS. Packet delay consists of three parts: propagation, transmission and queuing delay. Propagation delay is given by the distance, the medium and the speed of light. The per-hop transmission delay is given by the packet size divided by the link bandwidth. The queuing delay is the waiting time a packet spends in a queue before it is transmitted. This delay is determined mainly by the scheduling policy.

Besides delay control, link sharing is another important purpose of queue scheduling. The aggregate bandwidth of a link can be shared among multiple entities, such as different organizations, multiple protocols (TCP, UDP (User Datagram Protocol)) or multiple services (FTP, telnet, real-time streams). An overloaded link should be shared in a controlled way, while an idle link can be used in any proportion. Although delay and rate guarantee provisioning is crucial for queue scheduling, it still needs to be kept simple because it must be performed at packet arrival rates. Scheduling can be performed on a per-flow or a per-traffic-class basis. The result of a combination of these two is hierarchical scheduling [21]. The most important scheduling mechanisms, besides traditional first-come first-served scheduling, are the following:

• Weighted Fair Queuing (WFQ): WFQ is a sophisticated queuing process that requires little configuration, because it dynamically detects traffic flows between applications and automatically manages separate queues for those flows. WFQ is described in [22] and [23]. In WFQ terms, flows are called conversations. A conversation could be a Telnet session, an FTP transfer, a video stream or some other TCP or UDP flow between two hosts. WFQ considers packets part of the same conversation if they contain the same source and destination IP addresses and port numbers and IP protocol identifier. When WFQ detects a conversation and determines that packets belonging to that conversation need to be queued, it automatically creates a queue for that conversation. If there are many conversations passing through the interface, WFQ manages multiple conversation queues, one for each unique conversation, and sorts packets into the appropriate queues based on the conversation identification information. Because WFQ creates queues and sorts packets into the queues automatically, there is no need for manually configured classification lists [24].
• Class-Based Weighted Fair Queuing (CBWFQ): CBWFQ is a variation of WFQ. It supports the concept of user-defined traffic classes. Instead of queuing on a per-conversation basis, CBWFQ uses predefined classes of traffic and then divides the link's bandwidth among these classes in order to control their QoS. CBWFQ is not automatic like WFQ, but it provides more scalable traffic queuing and bandwidth allocation [25].

• Modified Deficit Round Robin (MDRR): MDRR is a variant of the well-known Round-Robin scheduling scheme adapted to the requirements of scheduling variable-length units, such as IP packets. MDRR is an advanced scheduling algorithm. Within an MDRR scheduler, each queue has an associated quantum value, which is the average number of bytes served in each round, and a deficit counter initialized to the quantum value. Each non-empty flow queue is serviced in a round-robin fashion by transmitting at most quantum bytes in each round. Packets in a queue are serviced as long as the deficit counter is greater than zero. Each serviced packet decreases the deficit counter by a value equal to its length in bytes. A queue can no longer be served after the deficit counter becomes zero or negative. In each new round, each non-empty queue's deficit counter is incremented by its quantum value [26]. A minimal sketch of this deficit mechanism follows the list.
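The quantum and deficit-counter bookkeeping described in the last bullet can be illustrated with the small Python sketch below. It shows only the basic deficit round-robin service discipline as described above (the additional details that make MDRR "modified" are left out), and the queue contents are invented:

```python
from collections import deque


def deficit_round_robin(queues, quantum_bytes, rounds):
    """Serve per-class queues of packet sizes (bytes), one quantum per round each.

    queues: dict mapping queue name -> deque of packet lengths in bytes.
    Returns the transmission order as (queue name, packet length) tuples.
    """
    deficit = {name: 0 for name in queues}
    sent = []
    for _ in range(rounds):
        for name, q in queues.items():
            if not q:
                deficit[name] = 0            # an empty queue keeps no credit
                continue
            deficit[name] += quantum_bytes   # add this round's quantum
            # Serve packets while the counter is positive; it may go negative,
            # in which case the queue has to wait for later rounds.
            while q and deficit[name] > 0:
                pkt = q.popleft()
                deficit[name] -= pkt
                sent.append((name, pkt))
    return sent


queues = {
    "voice": deque([200, 200, 200]),   # small packets
    "bulk":  deque([1500, 1500]),      # large packets
}
print(deficit_round_robin(queues, quantum_bytes=600, rounds=3))
# The voice queue drains in round 1; the second bulk packet must wait until
# round 3, because its queue's deficit counter went negative in round 1.
```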
1.5.5. QUEUE MANAGEMENT/CONGESTION AVOIDANCE

Another important aspect of QoS is to control packet loss. This is achieved mainly through queue management. Packets get lost for two reasons: either they become corrupted in transit or they are dropped at network nodes – such as routers – due to network congestion, i.e. because of overflowing buffers. Loss due to damage is rare (<<0.1%), therefore packet loss is often a sign of network congestion. To control and avoid network congestion, certain mechanisms must be employed both at network end-points and at intermediate routers. At network endpoints, the TCP protocol, which uses adaptive algorithms such as slow start, additive increase and multiplicative decrease, performs this task well. Inside routers, queue management must be used so as to ensure that high priority packets are not dropped and arrive at their destination properly.

Traditionally, packets are dropped only when the queue is full. Either arriving packets are dropped (tail drop), the packets that have been in the queue for the longest time period are dropped (drop front), or a randomly chosen packet is discarded from the queue. TCP stacks react to multiple packet drops by severely reducing their window sizes, thus causing severe interruption to higher layer applications. In addition, multiple packet drops can lead to a phenomenon called "global synchronization", whereby TCP stacks become globally synchronized, resulting in a wave-like traffic flow. Figure 1-7 shows the effect of global synchronization:
[Figure: throughput of three TCP flows plotted against time; each time queue utilization reaches 100% a tail drop occurs and all flows back off simultaneously, producing a wave-like (sawtooth) pattern.]

Fig. 1-7: Global Synchronization

Three TCP traffic flows start at different times at low data rates. As time passes, TCP increases the data rate. When the queue buffer fills up, several tail drops occur and all three traffic flows drop their throughput abruptly. This is highly inefficient, because the more TCP flows there are, the sooner the buffers will fill up and thus the sooner a tail drop will occur, disrupting all TCP flows. Another problem is that after the first tail drop it is very likely that all three TCP flows will start almost simultaneously, which will cause the buffers to fill up quickly and lead to another tail drop.

Furthermore, there are two drawbacks with drop-on-full, namely lock-out and full queues. Lock-out describes the problem that a single connection or a few flows monopolize queue space, preventing other connections from getting room in the queue. The "full queue" problem refers to the tendency of drop-on-full policies to keep queues at or near maximum occupancy for long periods. Lock-out causes unfairness of resource usage, while steady-state large queues result in longer delay times.
To avoid these problems, active queue management is required, which drops packets before a queue becomes full. It allows routers to control when, which and how many packets to drop [27]. The two most important queue management mechanisms are the following:

• Random Early Detection (RED): RED randomly drops packets queued on an interface. As a queue reaches its maximum capacity, RED drops packets more aggressively to avoid a tail drop. It throttles back flows and takes advantage of TCP slow start. Rather than tail-dropping all packets when the queue is full, RED manages queue depth by randomly dropping some packets as the queue fills past a certain threshold. As packets drop, the applications associated with the dropped packets slow down and go through TCP slow start, reducing the traffic destined for the link and providing relief to the queue system. If the queue continues to fill, RED drops more packets to slow down additional applications in order to avoid a tail drop. The result is that RED prevents global synchronization and increases overall utilization of the line, eliminating the sawtooth utilization pattern [28]. RED is described in [29].

• Weighted Random Early Detection (WRED): Weighted RED (WRED) is a variant of RED that attempts to influence the selection of packets to be discarded. There are many other variants of RED, mechanisms that try to prevent congestion before it occurs by dropping packets according to specific criteria (traffic conforms/does not conform), thus ensuring that mission-critical data is not dropped and gets priority. WRED is a congestion avoidance and control mechanism whereby packets will be randomly dropped when the average class queue depth reaches a certain minimum threshold. As congestion increases, packets will be randomly dropped with a rising drop probability, until a second threshold where packets will be dropped with the drop probability determined by the mark probability denominator. Above the max-threshold, packets are tail-dropped.
• 37. Implementing Quality of Service in IP Networks 29 dropped (with a rising drop probability) until a second, maximum threshold is reached, at which point packets are dropped with the maximum drop probability, equal to the inverse of the mark probability denominator. Above the max-threshold, packets are tail-dropped. Figure 1-8 below depicts the WRED algorithm:

Fig. 1-8: WRED Algorithm (packet drop probability plotted against average queue size for two example drop profiles: min-threshold 20, max-threshold 50, mark probability denominator 1; and min-threshold 55, max-threshold 70, mark probability denominator 10)

WRED selectively instructs TCP stacks to back off by dropping some of their packets. WRED has no throttling effect on UDP-based applications, although their packets are subject to the same random drops. The average queue depth is calculated using the following formula:

new_average = old_average * (1 - 2^(-e)) + current_queue_depth * 2^(-e)

where “e” is the “exponential weighting constant”. The larger this constant, the slower the WRED algorithm will react; the smaller this constant, the faster the WRED algorithm will react, i.e. the faster it will begin dropping packets after the specified threshold has been reached.
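To make the above concrete, the following Python sketch reproduces the averaging formula and the linear drop-probability ramp between the two thresholds. It is a simplified illustration only: the function and parameter names are not router configuration syntax, and a real implementation updates the average on every packet arrival.

    import random

    def update_average(old_average, current_queue_depth, e):
        # Exponentially weighted moving average of the queue depth.
        # A larger exponential weighting constant "e" makes the average
        # (and therefore WRED) react more slowly to queue changes.
        weight = 2 ** -e
        return old_average * (1 - weight) + current_queue_depth * weight

    def wred_drop(average, min_threshold, max_threshold, mark_prob_denominator):
        # No drops below min_threshold; tail drop above max_threshold;
        # in between, the drop probability rises linearly up to
        # 1/mark_prob_denominator (100% when the denominator is 1).
        if average < min_threshold:
            return False
        if average >= max_threshold:
            return True
        max_prob = 1.0 / mark_prob_denominator
        drop_prob = max_prob * (average - min_threshold) / (max_threshold - min_threshold)
        return random.random() < drop_prob

With the first drop profile of Figure 1-8 (min-threshold 20, max-threshold 50, denominator 1), the drop probability climbs from 0 to 100% as the average queue size grows from 20 to 50 packets.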
• 38. Implementing Quality of Service in IP Networks 30 The exponential weighting constant can be set on a per-class basis. The min-threshold, max-threshold and mark probability denominator can be set on a per precedence or per DSCP basis. The mark probability denominator should always be set to 1 (100% drop probability at max-threshold). It should be stressed that tuning QoS parameters is never a straightforward process and the results depend on a large number of factors, including the offered traffic load and profile, the ratio of load to available capacity, the behavior of end-system TCP stacks in the event of packet drops etc. Therefore, it is strongly recommended to test these settings in a testbed environment using expected customer traffic profiles and to tune them, if required.

1.5.6. TRAFFIC ENGINEERING

Traffic engineering is not an actual mechanism, but can be considered as the process of designing and setting up a network’s control mechanisms (e.g. routing) in such a way that all network resources are used effectively, i.e. the effective design and engineering of the network paths, traffic load balancing, creating redundancy etc. In other words, it refers to the logical configuration of the network. The task of traffic engineering is to relieve congestion and to prevent concentration of high priority traffic. Done well, it can provide good QoS even before any other QoS mechanism is implemented. Effective traffic engineering provides QoS by recognizing congested or highly loaded network paths and redirecting traffic through idle network paths, thus avoiding congestion, optimizing network load balancing and effectively utilizing the available network resources.
  • 39. Implementing Quality of Service in IP Networks 31 1.6. QOS ARCHITECTURES QoS architectures are architectural models proposed and established by the Internet Engineering Task Force (IETF) that utilize the above mentioned mechanisms (or combinations thereof) to provide the required services. 1.6.1. INTEGRATED SERVICES The IntServ architecture was an effort by the IETF to expand the Internet’s service model in order to meet the requirements of emerging applications, such as voice and video. It is the earliest of the procedures defined by the IETF for supporting QoS. Its main purpose is to define a new enhanced Internet service model as well as provide the means for applications to express end-to-end resource requirements with support mechanisms in network devices [30]. IntServ assigns a specific flow of data to a so-called “traffic class”, which defines a certain level of service. It may, for example, require only “best-effort” delivery, or else it might impose some limits on the delay. Two services are defined for this purpose: guaranteed and controlled load. Guaranteed service, defined in [31], provides deterministic delay guarantees, whereas controlled load service, defined in [32], provides a network service similar to that provided by a regular best-effort network under lightly loaded conditions [33]. An integral part of IntServ is RSVP. Once a class has been assigned to the data flow, a “PATH” message is forwarded to the destination to determine whether the network has the available resources (transmission capacity, buffer space etc.) needed to support that specific Class of Service (CoS). If all devices along the path are found capable of providing the required resources, the receiver generates a “RESV” message and returns it to the source indicating that the latter
  • 40. Implementing Quality of Service in IP Networks 32 may start transmitting its data. The procedure is repeated continually to verify that the necessary resources remain available. If the required resources are not available, the receiver sends an RSVP error message to the sender. The IETF specified RSVP as the signaling protocol for the IntServ architecture. RSVP enables applications to signal per-flow QoS requirements to the network [34]. Theoretically, this continuous checking of available resources means that the network resources are used very efficiently. When the resource availability reaches a minimum threshold, services with strict QoS requirements will not receive a “RESV” message and will know that the QoS is not guaranteed. However, although IntServ has some attractive aspects, it does have its problems. It has no means of ensuring that the necessary resources will be available when required. Another problem is that it reserves network resources on a per-flow basis. If multiple flows from an aggregation point all require the same resources, the flows will nevertheless all be treated individually. The “RESV” message must be sent separately for each flow. This means that as the number of flows increases, so does the RSVP overhead and thus the service level is degrading. In other words, IntServ does not scale well, and so wastes network resources [35]. 1.6.2. DIFFERENTIATED SERVICES DiffServ is an architecture that addresses QoS requirements in a connectionless environment. DiffServ’s specification can be found in [36]. Its main purpose is to standardize a set of QoS building blocks with which providers can implement QoS enhanced IP services. DiffServ QoS is meant to be implemented at the network edge by access devices and then supported across the backbone by DiffServ-capable routers. Since it operates purely at Layer 3,
• 41. Implementing Quality of Service in IP Networks 33 DiffServ can be deployed on any Layer 2 infrastructure. DiffServ and non-DiffServ routers and services can be mixed in the same environment. DiffServ eliminates much of the complexity that developed from IntServ’s use of the Resource Reservation Protocol (RSVP) to set up and tear down QoS reservations. DiffServ eliminates the need for RSVP and instead makes a fresh start with the existing Type of Service (ToS) field found in the header of IPv4 and IPv6 packets, as illustrated below. IP precedence resides in bits P2-P0, ToS is bits T3-T0 and the last bit is reserved for future use (currently unused).

P2 P1 P0 T3 T2 T1 T0 CU
Fig. 1-9: ToS Byte

The ToS field was originally intended by IP’s designers to support capabilities much like QoS – allowing applications to specify high or low delay, reliability, and throughput requirements – but was never used in a well-defined, global manner. But since the field is a standard element of IP headers, DiffServ is reasonably compatible with existing IP equipment and can be implemented to a large extent by software/firmware upgrades. DiffServ takes the ToS field (eight bits), renames it as the DS (DiffServ) byte, specified in [37], or DSCP and restructures it to define IP QoS parameters. Figure 1-10 illustrates the DS byte, where the DSCP is made up of bits DS5-DS0 and the last two bits are currently unused:

DS5 DS4 DS3 DS2 DS1 DS0 CU CU
Fig. 1-10: DS Byte
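Because the DS byte simply reinterprets the same eight bits, both the IP precedence and the DSCP can be read from one octet with a shift. The short Python sketch below illustrates the bit layout only; the helper names are illustrative and not part of any standard API.

    def ip_precedence(tos_byte):
        # IP precedence occupies the three most significant bits (P2-P0).
        return (tos_byte >> 5) & 0b111

    def dscp(ds_byte):
        # The DSCP occupies the six most significant bits (DS5-DS0);
        # the last two bits (CU) are ignored.
        return (ds_byte >> 2) & 0b111111

    # Example: the EF code point 101110 (decimal 46) placed in the DS byte
    ds_byte = 46 << 2                     # 0b10111000
    assert dscp(ds_byte) == 46
    assert ip_precedence(ds_byte) == 5    # EF occupies the old precedence-5 space

The class selector code points presented below follow the same pattern, with the precedence value shifted into the top three bits of the DSCP.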
• 42. Implementing Quality of Service in IP Networks 34 The DSCPs defined thus far by the IETF are the following (please note that class selector DSCPs are defined to be backward compatible with IP precedence) [38]:

Class Selectors    DSCP Value
Default            000000
Precedence 1       001000
Precedence 2       010000
Precedence 3       011000
Precedence 4       100000
Precedence 5       101000
Precedence 6       110000
Precedence 7       111000
Table 1-3: DSCP Class Selectors

The DSCP will determine how routers handle a packet. Instead of trying to identify and manage IP traffic flows, as RSVP must do, DiffServ brings QoS consideration to the normal IP practice of packet-by-packet, hop-by-hop routing. Service decisions are made according to parameters defined by the DS byte, and are manifested in Per-Hop Behavior (PHB). For example, the default PHB in a DiffServ network is traditional best-effort service, using first-in first-out (FIFO) queuing. Other PHBs are defined for higher CoSs. One example is expedited forwarding (EF). The EF PHB has been defined in [39]. EF defines premium service and the recommended DSCP value is 101110. When EF packets enter a DiffServ router, they are meant to be handled in short queues and quickly serviced with top priority to maintain low latency, packet loss, and jitter. Another PHB, one that allows variable priority but still ensures that packets arrive in the proper order, is assured forwarding (AF). The specification of AF can be found in [40].
• 43. Implementing Quality of Service in IP Networks 35 AF defines four service levels, with each service level having three drop precedence levels. As a result, AF PHB recommends 12 code points, as shown in Table 1-4 [41]:

Drop Precedence   Class 1   Class 2   Class 3   Class 4
Low               001010    010010    011010    100010
Medium            001100    010100    011100    100100
High              001110    010110    011110    100110
Table 1-4: AF PHB

DiffServ standardizes PHB classification, marking and queuing procedures, but it leaves equipment vendors and service providers free to develop PHB flavors of their own. Router vendors will define parameters and capabilities they think will furnish effective QoS controls and service providers can devise combinations of PHB to differentiate their qualities of service [42]. As mentioned above, DSCP is backward-compatible with IP precedence. Table 1-5 illustrates DSCP to IP precedence mappings:

DSCP     IP Precedence
0-7      0
8-15     1
16-23    2
24-31    3
32-39    4
40-47    5
48-55    6
56-63    7
Table 1-5: DSCP to IP Precedence Mappings
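Both Table 1-4 and Table 1-5 follow directly from the bit layout of the DS field, as the short illustrative Python sketch below shows (the function names are arbitrary and used only for this illustration):

    def af_dscp(af_class, drop_precedence):
        # AF code point: class (1-4) in bits DS5-DS3, drop precedence (1-3)
        # in bits DS2-DS1, and bit DS0 left at zero.
        return (af_class << 3) | (drop_precedence << 1)

    def precedence_of(dscp):
        # Backward compatibility: the IP precedence is the top three bits
        # of the DSCP, which yields the ranges of Table 1-5.
        return dscp >> 3

    assert af_dscp(1, 1) == 0b001010      # AF class 1, low drop precedence
    assert af_dscp(4, 3) == 0b100110      # AF class 4, high drop precedence
    assert precedence_of(46) == 5         # EF (46) lies in the 40-47 range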
  • 44. Implementing Quality of Service in IP Networks 36 1.7. MULTI-PROTOCOL LABEL SWITCHING MPLS is a strategy for streamlining the backbone transport of IP packets across a Layer 3/Layer 2 network. Although it does involve QoS issues, that is not its main purpose. MPLS is focused mainly on improving Internet scalability through better Traffic Engineering. MPLS will help to build backbone networks that better support QoS traffic. MPLS is essentially a hybrid of the network (Layer 3) and transport (Layer 2) structure, and may represent an entirely new way of building IP backbone networks. MPLS is rooted in the IP switching and tag switching technologies that were developed to bring circuit switching concepts to IP’s connectionless routing environment. In the current draft specification [43], MPLS edge devices add a 4-byte (32 bits) label to the header of IP packets. An IP encapsulation scheme, basically, the label provides routing information that allows packets to be steered over the backbone using pre-established paths. These paths function at Layer 3 or can even be mapped directly to Layer 2 transport such as ATM or Frame Relay. MPLS improves Internet scalability by eliminating the need for each router and switch in a packet’s path to perform conventional address lookups and route calculation. MPLS also permits explicit backbone routing, which specifies in advance the hops that a packet will take across the network. Explicit routing will give IP traffic a semblance of end-to-end connections over the backbone. This should allow more deterministic, or predictable, performance that can be used to guarantee QoS. The MPLS definition of IP QoS parameters is limited. Out of 32 bits total, an MPLS label reserves just three bits for specifying QoS. Label-switching routers will examine these bits and forward packets over paths that provide the
  • 45. Implementing Quality of Service in IP Networks 37 appropriate QoS levels. But the exact values and functions of these so called “experimental bits” remain yet to be defined. MPLS is focused more on backbone network architecture and traffic engineering than on QoS definition. In an ATM network using cell-based MPLS, MPLS tags can be mapped directly to ATM virtual circuits (VCs) that provide ATM’s strong QoS and class of service capabilities. For example, the MPLS label could specify whether traffic requires constant bit rate (CBR) or variable bit rate (VBR) service, and the ATM network will ensure that guarantees are met [44]. 1.8. IP QOS: A CRITICAL EVALUATION Up to now we have defined the term QoS, we have analyzed and discussed mechanisms that provide QoS and we have given an overview of the two QoS frameworks. The questions that arise at this point are the following: Is QoS really needed in today’s networks? Does the overprovisioning of bandwidth not solve the problem? Which framework is more viable in today’s IP networks – mainly the Internet – and why? This comparison and critical view of the two frameworks is based on the literature survey, which means that the conclusion is based on theory. A critical comparison of the two frameworks will display the author’s view on this issue. One might think that any issues, such as network congestion, can be overcome by simply upgrading the existing lines to provide more bandwidth. While this is essentially the first step to provide QoS, it is the author’s view that bandwidth overprovisioning might not always be possible, firstly because the customer might not want to pay more and secondly because there still might be applications that would use the whole bandwidth, such as peer-to-peer file sharing applications, high-definition streaming video and so on. In essence, no matter how
  • 46. Implementing Quality of Service in IP Networks 38 much bandwidth is available, there will always be applications that will make use of this bandwidth and therefore there will always be the need for mechanisms to provide QoS for the various types of applications. Of course, just upgrading bandwidth is the simplest way to provide some sort of QoS. By implementing schemes and mechanisms, the complexity of the network infrastructure will increase. Implementing mechanisms and frameworks that provide QoS in a network means adding complexity to the network, making it more difficult and challenging to design, configure, operate and monitor. So, is it worth it to add complexity to the network? Wouldn’t it be more efficient to increase the bandwidth every time it reaches the network’s limit? In the author’s opinion, the answer is no, because – as mentioned above – the customer may not be always willing to pay, even if bandwidth becomes abundant and cheap. Furthermore, any network is bound to grow in size, which means more users will access it and as a result bandwidth will be eaten up more quickly. It is obvious that at some point, even if all available mechanisms and frameworks will be employed in the most efficient way (which obviously is very difficult) to provide QoS, the bandwidth will have to be increased eventually, not drastically, but gradually. One can say that the frameworks that provide QoS simply enhance and prolong the lifespan of an existing network, without the need for upgrading existing lines in very short time periods. Therefore, these frameworks and mechanisms are essential for designing and deploying a modern multi-service network, even if it increases its complexity. However, the most fundamental problem of any complex network is that it is very difficult for network managers to specify the parameters needed in order to provide QoS and that these parameters must be monitored constantly and adjusted to keep up with the traffic shifts. Nevertheless, the architectures and mechanisms that provide QoS are
  • 47. Implementing Quality of Service in IP Networks 39 beneficial for both customers and service providers, therefore complexity is not a strong enough reason not to implement such architectures. Besides, a combination of slight capacity overprovisioning and a “light” implementation of QoS mechanisms keeps a balance between network complexity and effectiveness and is the ideal solution. Another issue is of course that by implementing QoS mechanisms and frameworks, ISPs are able to offer different service levels to their customers, ranging from best-effort to premium. This opens a whole new world of opportunity for both ISPs and customers, because – to put it simply – the customer is able to get what he/she wants or needs and the ISP is able to offer the customer the desired service level. Without these frameworks, this would not be possible, even by just increasing bandwidth, because – as mentioned earlier – IP does not have the ability to distinguish between different classes of service and therefore ISPs would not be able to provide assurance to their customers as to the offered service quality. The economic aspect of QoS should not be underestimated in the modern world, as customer’s requirements in today’s Internet are very diverse. This, in the author’s opinion, is another reason why QoS is essential for today’s IP networks. The next question is obviously which framework is more viable for today’s IP networks. In order to be able to answer this, one must first consider each framework’s advantages and disadvantages. In the author’s opinion, DiffServ is the architecture for the future, but why? When IntServ was first introduced, it was initially considered to be the framework that would provide efficient QoS in the Internet. However, several characteristics of this framework made this assumption difficult and IntServ soon became somewhat “obsolete” for large-scale IP networks.
  • 48. Implementing Quality of Service in IP Networks 40 Firstly, IntServ treats each traffic flow individually, which means that all routers along a path must keep current flow-state information for each individual flow in their memory. This approach does not scale well in the Internet core, because there are thousands, even millions of individual flows. As a result, processing overhead and flow-state information increases proportionally to the number of flows. In the Internet, such behavior is unacceptable and highly inefficient because of the very large number of flows. This treatment would essentially make any efforts for providing QoS “non-existent”, because the available resources that are needed for a reservation would not be enough for every flow and as a result it would not be possible to provide QoS for every flow. IntServ is not able to cope with a large number of flows at high speeds. Secondly, resource reservation in the Internet requires the cooperation of multiple service providers, i.e. the service providers must make agreements for accounting and settlement information. Such infrastructures do not exist in today’s Internet and it is highly unlikely that they will be implemented in the future. We have to keep in mind that there are simply too many ISPs for such a solution to be viable, because when multiple ISPs are involved in a reservation, they have to agree on the charges for carrying traffic from other ISPs’ customers and settle these charges between them. It is true that a number of ISPs is bound by bilateral agreements, but to extend these agreements to an Internet-wide policy would be impossible because of the sheer number of ISPs. Moreover, IntServ’s usefulness was further compromised by the fact that RSVP was for a very long period in experimental stage. As a result, its implementation in end systems was fairly limited and even today there are very few applications that support it. What is the best technology worth (at least on
  • 49. Implementing Quality of Service in IP Networks 41 paper), if there are no applications that support it? The answer is obvious and this is also a severe disadvantage of the IntServ architecture. Finally, IntServ focused mainly on multimedia applications, leaving mission-critical ones out. The truth is that even though multimedia applications (videoconferencing or VoIP) are important for today’s corporate networks, mission-critical applications and their efficient operation are still the most important ones. After all, that is what their name implies: “mission-critical”, critical for the company’s growth, expansion and survival. IntServ did not offer the versatility to effectively support both types of application on a large-scale level. At this point one might ask where and how the IntServ framework should be used and implemented. The answer is that IntServ might be viable in small- scale corporate networks because of their limited size and – as a result – limited number of traffic flows. Furthermore, the above mentioned settlement issues disappear. However, the author’s opinion on this matter is that IntServ might be a good solution for such types of networks, but what if these networks extend beyond a single site and require WAN access, which is the case in most corporate networks nowadays? Then we have to face the fact that IntServ does not scale well and a WAN means an increased number of users and traffic flows. The IntServ architecture simply was not designed for today’s networks and therefore is not a viable solution. IntServ looks good on paper, i.e. in its specification, but in the real world its implementation range is very limited. Since the author limited IntServ as a viable solution for today’s Internet only for a few occasions, the answer to the question “which architecture is more viable and effective” is obvious: DiffServ is the architecture that will be able to
  • 50. Implementing Quality of Service in IP Networks 42 provide effective and sufficient QoS in the future. But what are DiffServ’s strong points and advantages over IntServ? DiffServ relies on provisioning to provide resource assurance. Users’ traffic is divided into pre-defined traffic classes. These classes are specified in the SLAs between customer and ISP. The ISP is therefore able to adjust the level of resource provisioning and control the degree of resource assurance to the users. What this means in essence is that each traffic flow is not treated separately, like in IntServ, but as an aggregate. This of course makes the DiffServ architecture extremely scalable, especially in a network where there are several thousands of traffic flows, because DiffServ classifies each traffic flow into one of (usually) three traffic classes and treats them according to pre-specified parameters. This gives ISPs greater control of traffic handling and enables them to protect the network from misbehaving sources. The downside is that because of the dynamic nature of traffic flows, exact provisioning is extremely difficult, both for the customer and the ISP. In order to be able to offer reliable services and deterministic guarantees, the ISP must assign proper classification characteristics for each traffic class. This, on the other hand, requires that the customer knows exactly what applications will be used and what kind of traffic will enter the network. As a result, network design and implementation is generally more difficult, but in the author’s opinion it is worth the effort because once resources have been organized efficiently, ISPs are able to deliver their commitments towards the customers and minimize the cost. The customer can rest assured that he/she will get what he/she wants for the agreed price. This means that cost- effectiveness of services is increased and both ISPs and customers can be satisfied with the result. The DiffServ model therefore is a more viable solution for today’s IP networks, even small-scale ones.
• 51. Implementing Quality of Service in IP Networks 43 The author’s view is that DiffServ’s treatment of flows as aggregates rather than as individual flows fully justifies its superiority over IntServ, at least for the large majority of today’s IP networks. IntServ’s capabilities, compared to those of DiffServ, are fairly limited and can be employed only in specific cases. DiffServ is extremely flexible and scalable. It is true that DiffServ is more difficult to implement than IntServ because of its nature, but once it has been set up properly, it is beneficial for all the parties involved. Additionally, it enables ISPs to develop new services over the Internet and attract more customers to increase their return. Thus, DiffServ’s economic model is more viable, profitable and flexible than IntServ’s and this is – in the author’s opinion – a very important advantage. The conclusion to all this is that good network design, simplicity and protection are the keys to providing QoS in a large-scale IP network. Good network design plus a certain degree of overprovisioning of network capacity not only makes a network more robust against failure, but also eliminates the need for overly complex mechanisms designed to solve these problems. Implementing the DiffServ architecture keeps the network simple because in most cases three traffic classes (premium, assured, best-effort) are sufficient to meet almost every customer’s requirements under various network conditions. As a result, the network is utilized more efficiently and the balance between complexity and providing QoS is kept at its optimum level. We will proceed by presenting a case study, on which the practical part of the dissertation will be based. We will discuss the problems and issues faced during the design and implementation stage and how they were overcome.
• 52. Implementing Quality of Service in IP Networks 44

CHAPTER 2: CASE STUDY: DESIGN AND IMPLEMENTATION OF A QOS-ENABLED NETWORK

In order to test and observe QoS mechanisms, a network that supports QoS functions must be designed and set up. For this purpose, a case study will be used that involves specific applications that require QoS as well as network devices (routers) that support QoS functions and protocols. In our case study we will assume that a customer wants to build a network that connects four different LANs located at different sites. The budget is limited and as a result most connections will have limited bandwidth. The customer wants to use VoIP, videoconferencing, mission-critical, Web and E-mail applications. Therefore, it is imperative to implement a framework that provides QoS for these applications. The reason for this is that if all traffic were treated as best-effort, real-time as well as mission-critical applications would suffer greatly and even become unusable because of network congestion at various nodes in the network. Another reason is that the available bandwidth is fairly limited and must therefore be provisioned as efficiently as possible. Since we already know the customer’s requirements, i.e. the applications that will be used as well as the network topology design (see section 2.4), the first issue regarding network design and QoS implementation is to decide which framework to use in order to provide QoS and why. The QoS framework to be used will be the DiffServ framework. DiffServ, as mentioned above, is a standardized, scalable (treating flows as aggregates rather than individually) solution for providing QoS in a large-scale IP network like the case study network topology. DiffServ is based on the following mechanisms:
• 53. Implementing Quality of Service in IP Networks 45
• Classification: each packet is classified into one of several pre-defined CoSs, either according to the values of the IP/TCP/UDP header or according to its already marked DSCP.
• Marking: after classification, each packet is marked with the DSCP value corresponding to its CoS (if not already marked).
• Policing: control of the amount of traffic offered at each CoS.
• Scheduling and Queue Management: each CoS is assigned to a queue of a router’s egress interface. Prioritization is applied among these queues, so that each CoS gets the desired performance level (a minimal illustrative scheduler is sketched below). To improve throughput, in case queues are filling up, queue management mechanisms, like RED, can be applied to these queues.

But why DiffServ? Since the network is a large-scale corporate network, we have to assume that there will be a large number of traffic flows, especially during peak times. As discussed in section 1.8, IntServ does not scale well and would not be an efficient solution for the customer’s needs. DiffServ’s scalability also suggests that it is “future-proof”, i.e. it can be more easily upgraded, expanded and adapted to the customer’s needs. Another reason is that the applications used do not support RSVP and hence implementation of the IntServ architecture would not be the best solution. In the author’s opinion, DiffServ’s scalability and flexibility is the main reason why it is more appropriate in the context of the case study. A complete justification for why the DiffServ architecture is preferable has been given above in section 1.8. We will assume the same for our case study.
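The sketch below gives a minimal, illustrative picture of the scheduling building block listed above: the Premium queue is always served first (strict priority), while the Assured and Best-effort queues share the remaining capacity in a simple weighted round-robin. The class names and the 2:1 weighting are assumptions made purely for illustration; the mechanisms and parameters actually used in the testbed are discussed later in this chapter.

    from collections import deque

    class EgressScheduler:
        # Strict priority for the Premium queue; weighted round-robin
        # between the Assured and Best-effort queues.
        def __init__(self):
            self.queues = {"premium": deque(), "assured": deque(), "best_effort": deque()}
            self.weights = {"assured": 2, "best_effort": 1}   # 2:1 share of what Premium leaves
            self.credits = dict(self.weights)

        def enqueue(self, cos, packet):
            self.queues[cos].append(packet)

        def dequeue(self):
            if self.queues["premium"]:                  # Premium always goes first
                return self.queues["premium"].popleft()
            for _ in range(2):                          # at most one credit refill per call
                for cos in ("assured", "best_effort"):
                    if self.queues[cos] and self.credits[cos] > 0:
                        self.credits[cos] -= 1
                        return self.queues[cos].popleft()
                self.credits = dict(self.weights)       # start a new round
            return None                                 # nothing left to send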
  • 54. Implementing Quality of Service in IP Networks 46 2.1. IDENTIFICATION OF APPLICATIONS After deciding on the architecture, the next step is to identify the applications the customer will use in the network. This is necessary in order to be able to appropriately classify the traffic into CoSs. The applications that will be used in the case study will be the following:  VoIP  Videoconference  Streaming video  Mission-critical data (e.g. database, client/server applications)  Internet (e-mail, web browsing etc.) VoIP and videoconference are both real-time streaming applications that use UDP and therefore require high levels of QoS (premium services). For mission-critical traffic, which uses TCP, proper and timely delivery must be ensured, thus this type of traffic will require assured services. Finally, the rest of the traffic (such as web browsing or e-mail) will be treated as best-effort and allocated the remaining bandwidth. Policing has to be applied so that the first two CoSs (premium and assured) do not claim all the bandwidth for themselves, thus leaving none for the best-effort traffic. Here we see the difficulty of properly designing and implementing CoSs. Every aspect of the network traffic needs to be thoroughly considered so as to ensure proper operation of all CoSs. In the author’s opinion, this poses the greatest challenge in QoS implementation. It is also very essential that the customer knows exactly his/her traffic requirements, so that the service provider is able to design precise CoSs according to the customer’s needs.
  • 55. Implementing Quality of Service in IP Networks 47 2.2. DEFINITION OF QOS CLASSES The foundation of the DiffServ architecture is the definition of the Classes of Service (CoSs) to be supported by the network and the mapping of each CoS to a DSCP value. This part of QoS implementation is probably the most important one, because if CoSs are not chosen and designed properly and precisely, QoS provision will be inefficient and will not provide the expected service level. Therefore, it is essential to carefully plan and design the CoSs that will be used and assign traffic appropriately. We have chosen to classify traffic with DSCP rather than with IP precedence because DSCP offers more flexibility (i.e. more granular treatment of traffic) than IP precedence and is therefore more appropriate for the customer’s requirements in our case study. It is also the author’s opinion that DSCP is far more effective than IP precedence in a DiffServ-capable network like the one that will be used in the case study. Knowing that real-time traffic has very specific requirements and is very sensitive to the four network performance parameters discussed in section 1.3, we can classify such traffic with the highest DSCP, which is EF (46). Furthermore, real-time traffic must get strict priority over all other traffic. This will ensure proper operation of the applications. Thus, all real-time traffic will be classified as premium. Mission-critical traffic will receive assured services (AF), because it is not as delay- and jitter-sensitive as real-time traffic. Loss rate for this kind of traffic must still be kept at a minimum possible. Finally, the rest of the traffic (Web, e-mail) will be treated as best-effort, but we still must ensure that some bandwidth remains for this type of traffic, as mentioned in the previous section.
• 56. Implementing Quality of Service in IP Networks 48 After the identification of applications, careful consideration of the above factors and with regard to the general application requirements outlined in Table 1-1, the CoSs that have been defined for our case study are presented below:

Class of Service   Applications                              Performance Level                                    DSCP Value
Premium            VoIP, videoconference, video streaming    Low delay, low jitter, low loss rate                 101110 (46)
Assured            Mission-critical data                     Controlled delay, controlled jitter, low loss rate   001010 (10)
Best-effort        Rest of traffic                           Uncontrolled (best-effort)                           000000 (0)
Table 2-1: Case Study Classes of Service

At implementation stage it is very important to identify what protocol each application uses. Real-time applications are UDP-based whereas mission-critical applications are TCP-based. This fact will have an impact on how the router will handle the traffic at interface egress and will be discussed later on. Moreover, “Controlled delay” implies delay similar to that of an uncongested network. In the network testbed, all DiffServ mechanisms will be implemented in the network’s routers. No mechanism is to be implemented in the end systems, i.e. no host-based marking will be implemented. It is obvious that proper configuration of network devices is also very important and will be discussed later on.
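As a simple illustration of how the classes of Table 2-1 translate into classification and marking decisions, the Python sketch below maps a packet to a CoS and writes the corresponding DSCP into the DS field. The port numbers are purely illustrative assumptions (an RTP-style UDP port range for voice and video, a single TCP port for the database application); the classification criteria actually used in the testbed are discussed in the following sections.

    # DSCP values from Table 2-1
    DSCP = {"premium": 46, "assured": 10, "best_effort": 0}

    def classify(protocol, dst_port):
        # Map a packet to a CoS. The port numbers below are assumptions
        # made only for this illustration.
        if protocol == "udp" and 16384 <= dst_port <= 32767:
            return "premium"            # VoIP / videoconferencing / streaming video
        if protocol == "tcp" and dst_port == 1433:
            return "assured"            # mission-critical client/server traffic
        return "best_effort"            # web, e-mail and everything else

    def mark(packet):
        # Write the DSCP of the packet's CoS into the DS field (bits DS5-DS0).
        cos = classify(packet["protocol"], packet["dst_port"])
        packet["ds_byte"] = DSCP[cos] << 2
        return packet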
  • 57. Implementing Quality of Service in IP Networks 49 2.3. ROLES OF INTERFACES It is very important to understand that a router’s interface in a network can have one of two possible roles. It can either be an edge or a core interface. Depending on the role of a router interface, different mechanisms have to be applied at that interface, because traffic has different characteristics when entering an edge interface than when entering a core interface. These differences are discussed below. This difference also plays an important role at configuration stage. Which mechanisms need to be applied at what point and why? These are all issues that we have to face when implementing and configuring the network.  Edge Interface: These are interfaces receiving/sending traffic from/to end-systems, i.e. there is no other layer 3 device, such as a router, between the interface and the end-systems. End-systems will send unmarked traffic through these interfaces, thus ingress traffic at these interfaces will be classified, marked with a DSCP value and policed. Policing on ingress is necessary for the Premium traffic. This traffic will have absolute priority in the scheduling schemes applied in the network, therefore it is necessary to control the amount of Premium traffic offered to the network in order to avoid starvation of the other CoSs. Also, although not necessary, Assured traffic will be policed as well in order to guarantee a minimum amount of bandwidth for the Best-effort traffic. At egress, these interfaces will classify packets according to their DSCP values and will apply scheduling and queue management mechanisms. In particular, Premium traffic, which is UDP-based, will receive strict queue priority
  • 58. Implementing Quality of Service in IP Networks 50 over all other traffic; Assured traffic will be treated with WRED, so as to maximize link utilization and avoid TCP global synchronization, since Assured traffic is TCP-based. The rest of the traffic will be treated as best effort. At this point we cannot exactly specify bandwidth requirements for all three CoSs, because we must first test their behavior. An initial estimation is 20% for Premium traffic, 35% for Assured traffic and 20% for the rest of the traffic. The remaining unused 25% will be used for overhead and other unpredictable traffic that may occur. This is to minimize the possibility of link saturation and network congestion. If all 100% of the link bandwidth were to be used, traffic unaccounted for would cause the link bandwidth to exceed the limit and thus congestion would occur. These 25% can be considered a “safety net” for the link. These numbers are initial estimations and cannot be predicted beforehand because of the heterogeneous nature of the network, i.e. the variety of link speeds. For example, at a 128Kbps link 20% of the bandwidth for Premium traffic might not be enough and may have to be increased. In that case, bandwidth for Assured traffic might also have to be increased, but the remaining bandwidth may not be enough for the best-effort traffic, i.e. the link will be too saturated for the best- effort traffic to come through. These are design issues that are difficult to overcome, therefore we must design the mechanisms as efficiently as possible. One solution would be to upgrade the link to a higher speed, but we are assuming that the customer does not want to spend money on such an upgrade.  Core interface: These are interfaces receiving/sending traffic from/to other routers. Ingress traffic at these interfaces will already be marked with a DSCP. Therefore, for these interfaces there is just a need to classify traffic at egress according to DSCP
• 59. Implementing Quality of Service in IP Networks 51 values and apply scheduling and queue management mechanisms. Premium traffic will again get strict queue priority; Assured traffic will be serviced with WRED with an exponential weighting constant of 1 so as to avoid TCP global synchronization and maximize link utilization. Best-effort will be serviced with a regular tail-drop mechanism. Once again we cannot precisely specify bandwidth percentages for the three CoSs, therefore we will estimate the same numbers as for the edge interfaces. The precise numbers will have to be worked out during the testing phase. Figure 2-1 illustrates an example topology with core and edge interfaces and shows which mechanisms will be applied at each:

Fig. 2-1: Edge and Core Interfaces (edge interfaces perform classification, marking, policing and queuing; core interfaces perform queuing only)

The figures that follow point out the differences between interface ingress and interface egress of a router. Figure 2-2 is a highly simplified design of a router’s operation. It should be noted that an interface’s ingress in one direction
• 60. Implementing Quality of Service in IP Networks 52 acts simultaneously as the interface’s egress in the other direction and vice versa. For reasons of simplicity only one traffic direction is shown:

Fig. 2-2: Simplified Router Schematic (data enters at the interface ingress, passes through the routing and forwarding functions and leaves through the interface egress)

A router consists therefore of three logical blocks. Figure 2-3 illustrates the QoS actions that may be taken at the router’s interface ingress:

Fig. 2-3: Interface Ingress Schematic (incoming traffic passes through classification, marking and policing, with the metering function active throughout)

As the traffic enters the interface, it is examined and classified according to the router’s metering function. Then, the packets are marked according to the pre-defined policies (in our case by their DSCP value) and then policed before they exit the ingress. Throughout all these stages the traffic is “measured”, i.e. identified according to the above mentioned criteria (source/destination IP and port number, protocol, DSCP value). The traffic then is forwarded to the routing and forwarding logical block of the router.
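The policing step mentioned above is typically realized with a token bucket. The sketch below is a minimal single-rate policer, included only to illustrate the principle; the rate and burst values are arbitrary examples and do not anticipate the testbed configuration.

    import time

    class TokenBucketPolicer:
        # Single-rate token bucket: packets that find enough tokens are
        # in profile (transmit); the rest are out of profile (drop or remark).
        def __init__(self, rate_bps, burst_bytes):
            self.rate = rate_bps / 8.0      # token refill rate in bytes per second
            self.burst = burst_bytes        # bucket depth in bytes
            self.tokens = burst_bytes
            self.last = time.monotonic()

        def conforms(self, packet_length):
            now = time.monotonic()
            self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
            self.last = now
            if packet_length <= self.tokens:
                self.tokens -= packet_length
                return True                 # in profile: forward unchanged
            return False                    # out of profile: drop (or remark)

    # Example: police a Premium class to 256 kbps with an 8 kbyte burst allowance
    premium_policer = TokenBucketPolicer(rate_bps=256000, burst_bytes=8000)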
• 61. Implementing Quality of Service in IP Networks 53 Figure 2-4 illustrates the QoS functions that may take place at interface egress. As traffic enters, it can be re-classified according to its destination. It then may be remarked and either policed or shaped. The metering function remains active until the packet reaches the queue, where it is scheduled and finally exits the interface and is sent on its way. It should be noted that usually only queue management and scheduling mechanisms are employed at interface egress, since the other functions have been performed at interface ingress. Nevertheless, the figure shows that there is a possibility for classification, marking and policing at egress in case it is needed. Finally, as stated above, the location of the interface – i.e. whether it is an edge or a core interface – is also crucial to the decision of which QoS mechanisms to employ.

Fig. 2-4: Interface Egress Schematic (incoming traffic may pass through classification, marking and policing/shaping before being scheduled and queued for transmission on the outgoing interface)

2.4. CASE STUDY NETWORK TOPOLOGY

The customer’s network consists of several different routers linked together at different link speeds and types. The network interconnects four separate LANs at different locations. The applications that will be used at each LAN are shown as appropriate icons in the topology figure, which also shows the customer’s specifications for the link speeds and types.
• 62. Implementing Quality of Service in IP Networks 54 The first observation is that there are several 128Kbps links, a fact that will complicate efficient design for QoS provisioning because of the very limited bandwidth. Figure 2-5 below shows the topology proposed by the customer:

Fig. 2-5: Case Study Network Topology (four LANs – 100 Mbps Fast Ethernet segments with VoIP telephones, video and data workstations and a database server – interconnected through a core network by serial links of 128 Kbps, 256 Kbps, 1544 Kbps and 1984 Kbps)
• 63. Implementing Quality of Service in IP Networks 55

2.5. LABORATORY NETWORK TOPOLOGY

The actual topology testbed that will be used differs in that we will not use LANs, but workstations, and not four, but two. Figure 2-6 shows the actual topology that will be used to simulate the case study topology and execute all tests:

Fig. 2-6: Laboratory Network Topology (two workstations, PC 1 and PC 2, running NetMeeting, IP Traffic and Sniffer, attached via 100 Mbps Fast Ethernet to Cisco 2620, 2612, 2621 and 7206 routers interconnected by 128 Kbps, 256 Kbps and 1984 Kbps serial links, with a Cisco 3640 router and QPM also attached to the core)
  • 64. Implementing Quality of Service in IP Networks 56 The figure also shows router and interface designations, workstation IP addresses as well as applications that will be used at each workstation. These applications will be presented in chapter 3, except for QPM, which will be presented and discussed in section 2.8. Router configurations will also be discussed in section 2.8. 2.6. NETWORK DEVICES As mentioned previously, all network devices are Cisco® Systems routers. Several routers from different product families will be used for the laboratory topology testbed, ranging from large-scale enterprise routers (Cisco 7206) to medium-scale branch office routers (Cisco 2612). In order to gain a better understanding of the capabilities of the different routers, a short description of each product family follows:  2600 Series: The Cisco 2600 series is a family of modular multiservice access routers, providing LAN and WAN configurations, multiple security options and a range of high performance processors. Cisco 2600 modular multiservice routers offer versatility, integration, and power. With over 70 network modules and interfaces, the modular architecture of the Cisco 2600 series easily allows interfaces to be upgraded to accommodate network expansion. Cisco 2600 series routers are also capable of delivering a broad range of end-to-end IP and Frame Relay-based packet telephony solutions.