TAMPERE UNIVERSITY OF TECHNOLOGY
Department of Information Technology
83390 Advanced Topics in Broadband Network
Seminar report, 1.12.2000
Marko Immonen, 150192
CPU Central Processing Unit
IP Internet Protocol
QoS Quality of Service
RSVP Resource ReSerVation Protocol
TCP Transmission Control Protocol
VoIP Voice over IP
UDP User Datagram Protocol
ISA Integrated Service Architecture
DiffServ Differentiated Service
LAN Local Area Network
IPV4/V6 Internet Protocol Version 4/ Version 6
Quality of Service (QoS) was largely ignored in the initial design of IP. IP was
built and optimised to transport data, not voice or video, and the only 'quality of
service' required was that data be delivered at all. To make it feasible to
transport real-time data over an IP network, it therefore becomes extremely
important to be able to characterise and control jitter and transit delays in the
network. Jitter matters mainly for real-time applications, which must
maintain worst-case buffers to allow timely delivery of the packets. Jitter
and latency are described in more detail in Chapter 2.
RSVP is a setup protocol for Internet resource reservations; its purpose is to
create flow-specific resource Reservation State in routers and hosts. RSVP is
a component of the QoS extensions to the Internet architecture known as
integrated services. RSVP was designed to provide a robust, efficient, flexible,
and extensible reservation service for multicast and unicast data flows.
RSVP's design goal was to enable real-time data transfer, such as voice and
video, in the Internet.
Differentiated Services mechanisms allow providers to allocate different levels
of service to different users of the Internet, supporting various types of
applications through a relatively simple, lightweight mechanism that does not
depend entirely on per-flow resource reservation. While in Integrated Services
every network element holds state information for every individual data flow, in
Differentiated Services each IP packet is handled independently. Differentiated
Services pushes the complexity to the edge routers and keeps the core routers
as simple as possible.
RSVP and Differentiated Services are two alternatives for implementing QoS.
These are described in more detail in Chapters 3, 4 and 5.
2 VoIP QoS
The way in which a network handles voice traffic can be characterised through
a set of metrics such as latency, jitter, bandwidth etc. (See Table 1.)
          Bandwidth   Hold time   Burstiness   Latency       Jitter
          needs       (session)                sensitivity   sensitivity
Voice     Low         Low         Low          High          Medium

Table 1: Voice and the network services it requires
The main metrics characterising QoS in a packet network are therefore:
Latency and Jitter.
The way traffic is handled can be characterised by a number of parameters,
such as round-trip delay, variance in delay (jitter), packet loss, peak
throughput, and so on. The main reason that inelastic applications (like voice)
don't generally work well over IP-based networks is latency, and the changes in
latency that traffic queuing can produce.
2.1 Latency
There are three components of latency: delay introduced by the internetwork,
delay introduced by the protocol, and delay introduced by the turnaround of
information at the server.
2.1.1 Internetwork delay
Internetwork delay is constant when the capacity of the network exceeds the
information across the network, it is the aggregate of link delays and device
delays. In order to improve this best-case delay preferentially (for example to
improve performance of videoconferencing) an alternate path across the
network must be used. It can be called the selection of paths based on traffic
need "QoS – Based" routing.
Internetwork delay grows when the traffic introduced into the network exceeds
the network's capacity. The network can be thought of as a storage buffer that
holds excess traffic until it can be transmitted. A properly built network has
more capacity than the expected average data rate in order to clear bursts of
congestion quickly. Building networks so that they can handle the maximum burst
rate is called over-engineering, and it is generally unaffordable.
The change in delay that an internetwork introduces when traffic load exceeds
capacity is directly related to the fullness of the buffers on the various devices
a packet must cross. But simply knowing buffer size won't tell you what you
need to know about delay, since different kinds of traffic get through the
buffers at different rates.
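The link between buffer fullness and delay can be illustrated with a simple calculation: everything already queued ahead of a packet must be serialized onto the link before that packet departs. The buffer and link figures below are hypothetical.

```python
def queueing_delay_ms(queued_bytes, link_rate_bps):
    """Worst-case drain time of a FIFO buffer: the bytes already
    queued must be serialized before a new arrival can depart."""
    return queued_bytes * 8 / link_rate_bps * 1000.0

# The same 64 KB backlog queued ahead of a packet on a 1.5 Mbit/s
# link versus a 100 Mbit/s link.
print(queueing_delay_ms(64 * 1024, 1_500_000))    # hundreds of ms
print(queueing_delay_ms(64 * 1024, 100_000_000))  # a few ms
```

The same buffer occupancy therefore translates into very different delays depending on the link rate, which is why buffer size alone does not predict delay.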
2.1.2 Protocol Delay
Different protocols have inherent delays, either because of the way they work
or because of the way they interact with the network.
• Compression or encryption schemes introduce computational delay.
• TCP/IP detects dropped or delayed packets and infers congestion, causing
it to slow down.
The specific ways in which protocols interact with queuing and discard
mechanisms in the network as well as the layers above and below them make
bandwidth management somewhat of a black art.
2.1.3 Information turnaround
The delay that occurs between receipt of a network request and effective
response, the delay that a server's internal processes introduce into total
round-trip delay. The total response time experience by end users is the one
that shapes their opinion of the network's performance. Information turnaround
delays are generally a function of server load, which comes from many
• The serve's CPU may be occupied by other processing and unable to
service all queries promptly.
• The request may require computationally intensive processing, such as
running interpreted scripts in real time.
• The server may be able to service and process the query promptly, but a
back-end system such as a database lookup may take a long time.
Load-balancing systems provide some relief from server delays, for example a
sophisticated load balancer that shares the load equitably across many
servers. Such devices can also improve the availability of servers in case of
failure, and can redirect clients to a geographically closer server to reduce
network delay.
2.2 Jitter sensitivity
At first glance, a network that can offer the needed capacity with an
acceptable delay would seem to be suitable for a given application. In practice,
it is the variability of the delay (jitter) that matters most for many time-
sensitive applications such as voice.
Now consider a network with variable delay. Sometimes the voice bits arrive
rapidly, with only 20 milliseconds' delay; sometimes they take an entire 2
seconds. What should the wise application developer do?
Conventional wisdom holds that a buffer sufficient to cover the largest
possible delay for the duration of the session will allow the application to
insulate the listener from the network. This buffer must be accompanied by
clever protocol mechanisms, which might signal congestion, throttle the
sender's output, and discard frames in order to stay in sync with the
transmitting end. A predictable delay reduces software overhead and improves
performance by reducing the "worst-case" buffers.
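The buffering trade-off above can be sketched in Python: instead of sizing the playout buffer for the absolute worst case, an application might choose a delay covering a given percentile of observed one-way delays. The delay samples and the percentile policy here are hypothetical, not from this report.

```python
def playout_delay(delays_ms, percentile=99):
    """Pick a playout (de-jitter) delay covering `percentile` percent
    of observed one-way delays; later packets would be discarded."""
    ordered = sorted(delays_ms)
    # Index of the chosen percentile, clamped to the sample range.
    index = min(len(ordered) - 1,
                max(0, round(len(ordered) * percentile / 100) - 1))
    return ordered[index]

# Mostly ~30 ms with one 2-second outlier: a 99th-percentile buffer
# stays small instead of covering the absolute worst case.
samples = [30] * 98 + [45, 2000]
print(playout_delay(samples, percentile=99))   # 45
print(playout_delay(samples, percentile=100))  # 2000 (worst case)
```

The design choice is explicit: accepting the loss of a small fraction of very late packets keeps the buffer, and hence the added mouth-to-ear delay, small.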
2.2.1 Physical delay variance
Most networks provide a constant level of delay. Older systems might vary
transmission speeds over very long lengths, but modern networks include
better error correction and shielding, preventing physical variance.
2.2.2 Access delay variability
Access variability is the change in the delay with which an application is able
to gain the right to transmit. Reducing the variability in media access is one
reason to deploy switching to the desktop in anticipation of voice-over-IP
2.2.3 Network delay variability
When a network is congested, queue depth and retransmissions due to discards
are two major sources of delay. Queue depths change with the level of
congestion. Retransmissions due to discarded packets increase as network
devices attempt to signal transmitters about congestion. Consequently, the
variability in network delay is the major cause of jitter in networks.
2.2.4 Session-establishment variability
When an application creates a session across a network, a series of setup
acknowledgements must take place. Depending on factors such as server
load or discard rate, this sequence may experience an unpredictable delay to
the start of the network service. Session-establishment variability is seldom an
issue for the timely delivery of real-time traffic once the session is
established.
3 Differentiated Services
The DiffServ architecture offers a framework within which service providers can
offer each customer a range of network services. Customers request a specific
performance level on a packet-by-packet basis, by marking the DS field of each
packet with a specific value. This value specifies the Per-Hop Behavior (PHB) to be
experienced by the packet within the provider's network. Typically, the customer
and provider negotiate a profile describing the rate at which traffic can be submitted
at each service level. A noteworthy feature of DiffServ is its scalability, which allows
it to be deployed in very large networks.
The DS field is a replacement header field for the existing IPv4 TOS octet and the
IPv6 Traffic Class octet.
3.1 Traffic Class
The Traffic Class is an 8-bit field in IP version 6. There have been several
suggestions on how to use these bits, but none of them has been fully accepted.
Thus, the structure for this field is adopted straight from the DS field structure. DS
Field, presented in Figure 1, is an eight bit field out of which the first six bits are
used as Differentiated Services CodePoint (DSCP) to select the PHB a packet
experiences at each node. The values of the remaining two bits, grouped as
Currently Unused (CU), are ignored. DS-compliant nodes select PHBs by matching
against the entire 6-bit DSCP field, e.g. by treating the value of the field as
a table index used to select a particular packet-handling mechanism.
  0   1   2   3   4   5   6   7
+---+---+---+---+---+---+---+---+
|          DSCP         |  CU   |
+---+---+---+---+---+---+---+---+
DSCP: differentiated services codepoint
CU: currently unused
Figure 1: The DS field structure
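The DSCP selection described above amounts to simple bit manipulation on the DS octet; a minimal Python sketch follows. The EF codepoint value 101110 used in the example is the recommended one; the surrounding code is illustrative.

```python
DSCP_MASK = 0b11111100  # top six bits of the DS field

def dscp(ds_field):
    """Extract the 6-bit DSCP from the 8-bit DS field (the IPv4 TOS
    octet or the IPv6 Traffic Class octet); the two CU bits are
    ignored."""
    return (ds_field & DSCP_MASK) >> 2

# EF (Expedited Forwarding) has recommended codepoint 101110 = 46;
# the CU bits do not change the selected PHB.
print(dscp(0b10111000))  # 46
print(dscp(0b10111011))  # 46 as well, despite different CU bits
```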
3.2 Codepoint space
The DSCP field is capable of conveying 64 distinct codepoints. The codepoint
space is divided into three pools for the purpose of codepoint assignment and
management. Pool 1 consists of 32 recommended codepoints to be assigned by
Standards Action. Pool 2 includes 16 codepoints to be reserved for experimental or
local use. Finally, a third pool of 16 codepoints is initially available for
experimental or local use, but should be preferentially utilized for
standards actions if Pool 1 is ever exhausted.
Pool   Codepoint space   Assignment Policy
 1     xxxxx0            Standards Action
 2     xxxx11            Experimental / Local use
 3     xxxx01            Experimental / Local use
Table 2: Codepoint space usage
3.3 Per-Hop Behaviours
A per-hop behavior is the externally observable forwarding behavior applied to a
collection of packets made distinct by the DSCP field. This collection of packets
with the same codepoint crossing a link in a particular direction is called a behavior
aggregate or just aggregate. PHBs are implemented in nodes by means of some
buffer management and packet scheduling mechanisms, and they are defined in
terms of behavior characteristics rather than particular implementation
mechanisms. Thus, a single PHB may be implemented in several different ways.
A PHB group is a set of one or more PHBs that can only be meaningfully specified
and implemented simultaneously. A PHB group that includes several PHBs is
assigned several codepoints. It is likely that more than one PHB group may be
implemented on a node and utilized within a domain.
A PHB is selected at a node by a mapping of the DS codepoint in a received packet.
Standardized PHBs have a recommended codepoint, allocated out of Pool 1. In a
codepoint to PHB mapping table, multiple codepoints may map to the same PHB,
but all codepoints must be mapped to some PHB. Packets with an unrecognized
codepoint are mapped to Default behavior (best-effort).
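The codepoint-to-PHB mapping with its best-effort fallback can be sketched as a lookup table. The table contents below are illustrative; only the EF and AF11 codepoint values are the recommended ones, the rest is an assumption for the example.

```python
DEFAULT_PHB = "default"  # the best-effort behavior

# Illustrative mapping table for one node (hypothetical contents);
# EF = 101110 and AF11 = 001010 are the recommended codepoints.
PHB_TABLE = {
    0b101110: "EF",    # Expedited Forwarding
    0b001010: "AF11",  # Assured Forwarding class 1, low drop precedence
    0b000000: DEFAULT_PHB,
}

def select_phb(dscp):
    """Match the entire 6-bit DSCP; unrecognized codepoints fall
    back to the Default (best-effort) behavior."""
    return PHB_TABLE.get(dscp, DEFAULT_PHB)

print(select_phb(0b101110))  # EF
print(select_phb(0b111111))  # default (unrecognized codepoint)
```

Note that several codepoints may legitimately map to the same PHB; the dictionary representation allows that directly.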
Although the PHBs are the heart of differentiated services, it has to be underlined
that they are only building blocks for the services in which customers are
interested. There are different types of PHBs, such as the Default PHB, the
Expedited Forwarding (EF) PHB and the Assured Forwarding (AF) PHB group.
3.4 Traffic classification and conditioning entities
[Figure: two hosts communicating across three DS domains; edge nodes (EN) sit
at each domain boundary, with an SLS agreed at each boundary.]
EN: Edge Node = edge router; IN: Interior Node = core router
Figure 2: Differentiated Services Network Architecture
In order to provide reliable and fair service to users, a differentiated services
provider has to make a service level specification (SLS), formerly known as a
service level agreement (SLA), with each of its customers. The SLS is a
high-level agreement, which defines the overall features and performance
expected by the customer. Figure 2 depicts the network architecture of DiffServ
as a whole.
A traffic conditioning specification (TCS) (formerly known as TCA = traffic
conditioning agreement) is a subset of the SLS. The TCS specifies packet
classifying and conditioning rules. The TCS information can be a static
(pre-defined) set of tables, or it can be dynamically modified by the signaling
functions. The logical entities which the TCS affects are presented in Figure 3.
Figure 3: Logical blocks of the traffic conditioning
It is proposed in [DS_router] that the TCS would consist of a constraint part
and a fine-grain part. The function of the constraint TCS would be to protect
the resources of the differentiated services network. Using the fine-grain TCS,
the service provider could offer per-flow value-added services to the customer.
3.5 Traffic classification
A packet classifier selects packets from the data stream and steers them to the
traffic conditioning entity. Two different ways of classifying packets have
been specified:
− The behavior aggregate (BA) classifier makes its decisions based only on the
DS codepoint.
− The multi-field (MF) classifier uses information of several header fields (e.g.,
source/destination address, source/destination TCP/UDP port) to make its
classification decision. The integrated services use the MF classification.
3.6 Traffic conditioning
The traffic conditioning consists of traffic meter, marker, shaper and dropper
elements. Not all elements have to be implemented in the traffic conditioner.
The traffic meter measures the temporal properties of the packet stream coming
out of the traffic classifier and compares the stream to the traffic profile
definitions agreed in the TCS. The meter indicates whether a particular packet
is in-profile or out-of-profile.
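A common way to implement such a meter is a token bucket; the following sketch flags each packet as in-profile or out-of-profile. The rate and bucket size values are hypothetical, and this is only one possible metering mechanism.

```python
class TokenBucketMeter:
    """Token-bucket traffic meter: rate in bytes/s, bucket size in
    bytes. A packet is in-profile if enough tokens have accumulated."""

    def __init__(self, rate, bucket_size):
        self.rate = rate
        self.bucket = bucket_size       # start with a full bucket
        self.bucket_size = bucket_size
        self.last_time = 0.0

    def meter(self, arrival_time, packet_bytes):
        # Refill tokens for the time elapsed since the last packet,
        # capped at the bucket size.
        self.bucket = min(self.bucket_size,
                          self.bucket + (arrival_time - self.last_time) * self.rate)
        self.last_time = arrival_time
        if packet_bytes <= self.bucket:
            self.bucket -= packet_bytes
            return "in-profile"
        return "out-of-profile"

m = TokenBucketMeter(rate=10_000, bucket_size=1_500)  # 10 kB/s, one-MTU burst
print(m.meter(0.00, 1_500))  # in-profile: the bucket starts full
print(m.meter(0.01, 1_500))  # out-of-profile: only ~100 bytes refilled
```

The bucket size bounds the permitted burst, while the rate bounds the sustained throughput; both would come from the traffic profile agreed in the TCS.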
There are two alternatives for handling in-profile packets when they enter a
DS domain:
− In-profile packets are delivered further without any modifications.
− The DS codepoint of an in-profile packet may be changed, either when it is
set to a non-default value for the first time or when the DS codepoint to PHB
group mapping policy differs from that of the previous DS domain.
The out-of-profile packets may be
− shaped, i.e., queued until they are back in-profile,
− policed, i.e., discarded,
− re-marked with the new DS codepoint,
− forwarded unchanged (some accounting procedures may be applied).
The packet marker may mark all incoming packets to a single codepoint, or to
one of a set of codepoints in order to select the appropriate PHB according to
the state of the traffic meter. The traffic shaper delays the out-of-profile
packets, and the dropper discards packets.
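The four treatments of out-of-profile packets listed above can be sketched as a dispatch function. The packet representation, the verdict strings and the policy selection are all assumptions made for this illustration.

```python
def condition(packet, profile_state, action="shape"):
    """Dispatch one of the four out-of-profile treatments.
    `packet` is a dict with a 'dscp' key (hypothetical representation);
    returns a (verdict, packet) pair."""
    if profile_state == "in-profile":
        return "forward", packet
    if action == "shape":
        # Shaping: hold the packet until it is back in-profile.
        return "queue-until-in-profile", packet
    if action == "police":
        # Policing: discard the packet outright.
        return "drop", None
    if action == "re-mark":
        # Re-marking: demote the packet to the Default PHB codepoint.
        packet = dict(packet, dscp=0b000000)
        return "forward", packet
    # Forward unchanged; accounting procedures may still be applied.
    return "forward", packet

verdict, pkt = condition({"dscp": 0b101110}, "out-of-profile", action="re-mark")
print(verdict, bin(pkt["dscp"]))  # forward 0b0
```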
4 Integrated Services Architecture
4.1 ISA Approach
In ISA, each packet can be associated with a flow, which is a distinguishable
stream of related datagrams that results from a single user activity and requires the
same QoS. A flow is unidirectional, and there can be more than one recipient of
a flow (multicast).
The Integrated Services model defines four mechanisms that comprise the traffic
control functions:
• Packet scheduler. Manages the forwarding of different packet streams using a
set of queues and perhaps other mechanisms, such as timers.
• Packet classifier. Maps each incoming packet to a specific class so that these
classes may be acted on individually to deliver traffic differentiation.
• Admission control. Determines whether a flow can be granted the requested
QoS without affecting other established flows in the network.
• Resource reservation. A resource reservation protocol (e.g. RSVP) is necessary
to set up flow state in the requesting end-systems as well as each router along
the end-to-end traffic path.
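As a minimal illustration of the admission control mechanism above, a parameter-based test might admit a new flow only while the sum of reserved rates stays below a fraction of link capacity. The headroom factor and the rate figures are assumptions for the sketch, not part of the ISA specification.

```python
def admit(requested_rate, reserved_rates, link_capacity, headroom=0.9):
    """Parameter-based admission control sketch: grant the new flow
    only if the total reserved bandwidth stays below a fraction of
    link capacity, so established flows keep their QoS."""
    return sum(reserved_rates) + requested_rate <= headroom * link_capacity

reserved = [2_000_000, 3_000_000]  # two established flows, bits/s
print(admit(1_000_000, reserved, 10_000_000))  # True: 6 Mb/s fits under 9 Mb/s
print(admit(5_000_000, reserved, 10_000_000))  # False: 10 Mb/s exceeds 9 Mb/s
```

Real admission control may also be measurement-based, using observed load instead of declared rates; the declared-rate variant is simply the easiest to sketch.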
Figure 4 is a general depiction of the implementation architecture for ISA within a
router. In the lower part of the picture are the forwarding functions, which are
executed for each packet. In the upper part of the router are background functions
that create data structures used by the forwarding functions.
[Figure: background functions (routing protocol, reservation protocol,
admission control) sit above the per-packet forwarding path, in which the
classifier and scheduler feed the QoS queueing and best-effort queueing
mechanisms.]
Figure 4: Integrated services architecture implemented in a router
4.2 ISA Services
The Integrated Services model defines three categories of service: Guaranteed
Service, Controlled Load and Best Effort.
4.2.1 Guaranteed Service
The key elements of the Guaranteed Service are as follows:
1. The service provides an assured capacity level, or data rate.
2. There is a specified upper bound on the queuing delay through the network.
3. There are no queuing losses. That is, no packets are lost due to buffer overflow;
packets may be lost due to failures in the network or changes in routing paths.
The Guaranteed Service is the most demanding service provided by ISA. This
service is designed for applications that require a precisely known quality of service,
such as some audio and video "play-back" applications.
4.2.2 Controlled Load Service
The key elements of the controlled load service are:
1. The service tightly approximates the behaviour visible to applications receiving
Best Effort service under unloaded conditions.
2. There is no specified upper bound on the end-to-end queuing delay. However
the service ensures that a very high percentage of the packets do not
experience delays that greatly exceed the minimum transit delay (i.e. the delay
due to propagation time plus router processing time with no queuing delays).
3. A very high percentage of transmitted packets will be successfully delivered.
This service is intended to support the broad class of applications that have
been developed for use in today's Internet but are highly sensitive to
overloaded conditions. Important members of this class are the "adaptive
real-time" applications.
4.2.3 Best Effort Service
The Best Effort delivery model means that the network will do its best without
making any guarantees of service.
5 Resource Reservation Protocol (RSVP)
RSVP is an IETF proposed standard, a resource reservation setup protocol
designed for the integrated services Internet. It is used by applications to request
specific quality of service (QoS) from the network for particular application data
streams or flows. RSVP is a simplex protocol, i.e. it reserves resources in one
direction, and the receiver(s) of a data flow are responsible for the initiation of
resource reservation. RSVP operates on top of the Internet Protocol (IPv4 or IPv6)
and is a control protocol similar to ICMP and routing protocols. RSVP control
messages are sent as raw IP datagrams using protocol number 46, but UDP
encapsulation is also possible.
5.1 General RSVP Operation
An RSVP sender sends Path messages downstream toward an RSVP receiver.
Path messages are used to store path information in each node along the traffic
path. Each node maintains this path state characterisation for the specified
sender's flow, as indicated in the sender's TSpec parameter. After receiving
the Path message, the receiver sends Resv (reservation request) messages back
upstream to the sender along the same hop-by-hop traffic path the Path
messages had traveled. Because the receiver is responsible for requesting the
desired QoS service, the Resv messages specify the desired QoS and set up the
reservation state in each node in the traffic path. After successfully receiving
the Resv message, the sender begins sending its data. Figure 5 shows a
simplified view of RSVP in hosts and routers.
[Figure: in a host, the application process and an RSVP process interact with
policy control and the packet classifier and scheduler; in a router, the
routing process plays the application's role. Resv and Path messages flow
between the RSVP processes of adjacent nodes.]
Figure 5: RSVP in hosts and routers
5.2 RSVP Messages and Objects
An RSVP message contains a message-type field in the header that indicates the
function of the message. There are two fundamental RSVP message types: the
Resv and Path messages, which provide the basic operation of RSVP as described
earlier. Every RSVP message contains a session object, which identifies a
specific data flow. A session is defined by the triple: destination IP address,
destination port and IP protocol identifier.
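Treating the session triple as a lookup key is a natural way to hold per-session state in a node; a minimal sketch follows, with hypothetical addresses and ports.

```python
from collections import namedtuple

# An RSVP session is identified by the triple
# (destination address, IP protocol, destination port).
Session = namedtuple("Session", "dst_addr protocol dst_port")

state = {}  # per-session reservation state in a node (sketch)

# Two flows to the same host but different ports are distinct sessions.
a = Session("10.0.0.5", 17, 5004)  # a UDP voice stream (hypothetical)
b = Session("10.0.0.5", 17, 5006)
state[a] = "reserved"
print(a == b, a in state, b in state)  # False True False
```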
5.2.1 Path Message
The RSVP Path message, which is constructed by the RSVP sender, includes at a
minimum the IP address of each Previous Hop in the traffic path. This is used to
determine the path along which the subsequent Resv messages will be forwarded.
The Path message also contains additional information objects, called the
Sender Template, the Sender TSpec and the AdSpec. Figure 6 depicts the
structure of the Path message.
• The Sender Template describes the format of the data traffic the sender will
generate.
• The Sender TSpec characterizes the traffic flow the sender will generate, and it
is transported unchanged through the network.
• The AdSpec contains both parameters describing the properties of the data path
and parameters required by specific control services to operate correctly.
AdSpec objects can be used and updated inside the network before being
delivered to receiving applications.
[Figure: objects conveyed in a PATH message: SESSION (Dest Port), Previous
Hop, Logical Interface Handle, Refresh Period (R), SENDER TEMPLATE (SrcPort /
Flow Label), SENDER TSPEC (Token Bucket Rate r, Token Bucket Size b, Peak Data
Rate p, Minimum Policed Unit m, Maximum Packet Size M), ADSPEC (Global Break,
IS hop count, Path BW estimate, Minimum path latency).]
Figure 6: RSVP Objects Conveyed in a PATH Message
5.2.2 Resv Message
The Resv message, which is sent by the RSVP receiver, contains information about
the reservation style, the next hop's address, the appropriate Flowspec object
and the Filter Spec. Figure 7 depicts the fields of a Resv message originating
at a receiver requesting a fixed-filter style Guaranteed Service. For Controlled
Load Service the RSpec part is empty.
• A reservation request includes a set of options that are collectively called the
reservation style. Three reservation styles are defined in RSVP: wild-card filter
(WF), fixed-filter (FF) and shared-explicit (SE) styles.
• The Flowspec consists of Service Class, RSpec and TSpec. The service class is
an identifier of a type of service being requested. The other two parameters are
sets of numeric values. The RSpec (R for reserve) defines the desired quality of
service, and the TSpec (T for traffic) describes the data flow. When the receiver
requests controlled load service, RSpec parameter is not used.
• The Filter Spec defines the set of packets for which a reservation is requested.
Data that does not match any of the Filter Specs is treated as Best Effort traffic.
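Filter Spec matching with its best-effort fallback can be sketched as follows; the field names, the dictionary packet representation and the wildcard convention are assumptions made for this illustration.

```python
def matches(filter_spec, packet):
    """One Filter Spec as a dict of required field values, e.g. sender
    address and source port (fixed-filter style). Fields set to None
    act as wildcards, loosely mirroring the wildcard-filter style."""
    return all(value is None or packet.get(field) == value
               for field, value in filter_spec.items())

def classify(filter_specs, packet):
    """Packets matching no Filter Spec receive Best Effort treatment."""
    if any(matches(spec, packet) for spec in filter_specs):
        return "reserved"
    return "best-effort"

specs = [{"src": "10.0.0.1", "src_port": 5004}]  # hypothetical reservation
print(classify(specs, {"src": "10.0.0.1", "src_port": 5004}))  # reserved
print(classify(specs, {"src": "10.0.0.9", "src_port": 5004}))  # best-effort
```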
[Figure: objects conveyed in a RESV message: SESSION (Dest Port), Next Hop
Address, Logical Interface Handle, Refresh Period (R), FLOWSPEC (Token Bucket
Rate r, Token Bucket Size b, Peak Data Rate p, Minimum Policed Unit m, Maximum
Packet Size M, Rate R, Slack Term S), FILTER SPEC (SrcPort / Flow Label).]
Figure 7: RSVP Objects Conveyed in a RESV Message
5.2.3 Additional Message Types
Aside from the Resv and Path messages, the remaining RSVP message types
concern path and reservation errors (PathErr and ResvErr), path and reservation
teardown (PathTear and ResvTear), and confirmation of a requested reservation
(ResvConf).
6 Conclusions
QoS is the most important factor for VoIP to be commercially feasible. Packet
data by its nature is not meant for guaranteed, lossless, real-time traffic. The
elastic properties of the system have to be somehow reconciled with the
non-elastic requirements of real-time applications. Voice has a latency
tolerance threshold above which it becomes very annoying for users, and the use
of voice over a packet data system then becomes questionable. The latency in
the existing network has to be reduced to an acceptable level. Latency can be
added in various sections of the network in the end-to-end transmission, e.g.
routers, buffers, LANs, gateways, coders, decoders, bridges, etc. The various
techniques studied so far show ways of improving delay and jitter. Each
technique has its own unique advantages and disadvantages; a technique that is
good for one environment or condition might not be suitable for another.
Adaptive, hybrid and robust techniques that can withstand various circumstances
are definitely the demand of the present day.