The document discusses various techniques for providing quality of service in computer networks, including:
1. Token bucket filtering which characterizes bandwidth requirements using token buckets with different rates and depths.
2. Admission control which decides if a new flow's requested service can be provided without impacting existing flows.
3. Reservation protocols like RSVP which establish resource reservations along the path between sender and receiver.
4. Differentiated services which allocates resources to a small number of traffic classes using packet marking and per-hop behaviors.
The figure above illustrates how a token bucket can be used to characterize a flow's
bandwidth requirements.
For simplicity, assume that each flow can send data as individual bytes, rather than as
packets. Flow A generates data at a steady rate of 1 MBps, so it can be described by a token
bucket filter with a rate r = 1 MBps and a bucket depth of 1 byte.
This means that it receives tokens at a rate of 1 MBps but that it cannot store more than 1
token—it spends them immediately. Flow B also sends at a rate that averages out to 1 MBps
over the long term, but does so by sending at 0.5 MBps for 2 seconds and then at 2 MBps for
1 second. Since the token bucket rate r is, in a sense, a long-term average rate, flow B can be
described by a token bucket with a rate of 1 MBps.
Unlike flow A, however, flow B needs a bucket depth B of at least 1 MB, so that it can store
up tokens while it sends at less than 1 MBps to be used when it sends at 2 MBps. For the first
2 seconds in this example, it receives tokens at a rate of 1 MBps but spends them at only 0.5
MBps, so it can save up 2 × 0.5 = 1 MB of tokens, which it then spends in the third second
(along with the new tokens that continue to accrue in that second) to send data at 2 MBps.
At the end of the third second, having spent the excess tokens, flow B starts to save them
up again by returning to its 0.5-MBps sending rate.
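The arithmetic above can be checked with a small simulation. This is a sketch under simplifying assumptions (time advances in one-second steps, and tokens arriving during a second can be spent in that same second, so a burst can draw on the stored bucket plus that second's new tokens):

```python
def feasible(rate, depth, sends):
    """Return True if a token bucket with the given rate (MB/s) and
    depth (MB) can cover the per-second send amounts in `sends`."""
    tokens = 0.0
    for want in sends:
        # A burst can spend stored tokens plus this second's new tokens.
        if want > tokens + rate:
            return False
        # Leftover tokens are stored, but never beyond the bucket depth.
        tokens = min(tokens + rate - want, depth)
    return True

# Flow B: 0.5 MBps for 2 seconds, then 2 MBps for 1 second.
pattern = [0.5, 0.5, 2.0]
assert feasible(rate=1.0, depth=1.0, sends=pattern)       # 1 MB depth suffices
assert not feasible(rate=1.0, depth=0.5, sends=pattern)   # a shallower bucket fails
assert feasible(rate=1.0, depth=0.0, sends=[1.0, 1.0, 1.0])  # flow A needs ~no depth
```

This matches the text: flow B saves 2 × 0.5 = 1 MB over the first two seconds and spends it, together with the third second's new tokens, to send 2 MB.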
Admission Control
The idea behind admission control is simple: When some new flow wants to receive a
particular level of service, admission control looks at the TSpec and RSpec of the flow and
tries to decide if the desired service can be provided to that amount of traffic, given the
currently available resources, without causing any previously admitted flow to receive worse
service than it had requested.
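In the simplest parameter-based form, this check can be sketched as a sum-of-rates test. The function below is an illustration only (real admission control reasons about the full TSpec and RSpec, not just an average rate):

```python
def admit(link_capacity, admitted_rates, requested_rate):
    """Admit the new flow only if the sum of already-reserved rates
    plus the new request still fits within the link capacity."""
    return sum(admitted_rates) + requested_rate <= link_capacity

reservations = [30.0, 40.0]                  # Mbps already reserved on a 100 Mbps link
assert admit(100.0, reservations, 25.0)      # 70 + 25 <= 100: admit
assert not admit(100.0, reservations, 35.0)  # 70 + 35 > 100: reject
```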
Reservation Protocol
Two nice features of RSVP:
Soft state—in contrast to the hard state found in connection-oriented networks—does
not need to be explicitly deleted when it is no longer needed. Instead, it times out after
some fairly short period (say, a minute) if it is not periodically refreshed.
Receiver-oriented reservation: since each receiver periodically sends refresh
messages to keep the soft state in place, it is easy for a receiver to send a new
reservation that asks for a new level of resources.
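The soft-state idea can be sketched as a table of timers. The class below is an assumed simplification (real RSVP state holds much more than an expiry time), showing only the key property: state that is not refreshed simply times out, with no explicit teardown message required:

```python
class SoftStateTable:
    """Soft state: each reservation expires unless refreshed within
    `lifetime` seconds of its last refresh."""
    def __init__(self, lifetime):
        self.lifetime = lifetime
        self.expiry = {}            # flow id -> absolute expiry time

    def refresh(self, flow, now):
        # Installing and refreshing are the same operation: reset the timer.
        self.expiry[flow] = now + self.lifetime

    def active(self, now):
        # Reservations whose timers have lapsed silently disappear.
        self.expiry = {f: t for f, t in self.expiry.items() if t > now}
        return set(self.expiry)

table = SoftStateTable(lifetime=60)
table.refresh("flowA", now=0)
table.refresh("flowB", now=0)
table.refresh("flowA", now=50)            # only flowA keeps getting refreshed
assert table.active(now=70) == {"flowA"}  # flowB has timed out
```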
Initially, consider the case of one sender and one receiver trying to get a reservation for
traffic flowing between them.
There are two things that need to happen before a receiver can make the reservation.
First, the receiver needs to know what traffic the sender is likely to send so that it can
make an appropriate reservation. That is, it needs to know the sender’s TSpec.
Second, it needs to know what path the packets will follow from sender to receiver, so
that it can establish a resource reservation at each router on the path. Both of these
requirements can be met by sending a message from the sender to the receiver that
contains the TSpec.
Obviously, this gets the TSpec to the receiver. In addition, each
router looks at this message (called a PATH message) as it goes past and records
the reverse path that will be used to carry reservations from the receiver back to the
sender, so that the reservation reaches each router on the path.
Having received a PATH message, the receiver sends a reservation back “up” the
multicast tree in a RESV message.
This message contains the sender’s TSpec and an RSpec describing the requirements of
this receiver. Each router on the path looks at the reservation request and tries to allocate
the necessary resources to satisfy it. If the reservation can be made, the RESV request is
passed on to the next router.
If not, an error message is returned to the receiver who made the request. If all goes well,
the correct reservation is installed at every router between the sender and the receiver.
As long as the receiver wants to retain the reservation, it sends the same RESV message
about once every 30 seconds.
Making reservations on a multicast tree.
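The hop-by-hop PATH/RESV exchange can be sketched for a single sender-receiver path. This is an assumed simplification (routers, capacities, and the rollback-on-failure behavior are illustrative, not the protocol's wire details): the PATH message establishes the route, and the RESV message walks it in reverse, reserving capacity at each router or reporting an error back to the receiver:

```python
def rsvp_reserve(path, capacities, rate):
    """Walk the reverse of `path`, reserving `rate` at each router.
    On failure, release any partial reservations and return False."""
    reserved = []
    for router in reversed(path):        # RESV travels receiver -> sender
        if capacities[router] < rate:
            # Admission fails at this hop: an error would go back to the
            # receiver; undo the reservations made so far.
            for r in reserved:
                capacities[r] += rate
            return False
        capacities[router] -= rate
        reserved.append(router)
    return True

caps = {"R1": 10.0, "R2": 5.0, "R3": 10.0}
assert rsvp_reserve(["R1", "R2", "R3"], caps, rate=4.0)      # fits everywhere
assert not rsvp_reserve(["R1", "R2", "R3"], caps, rate=4.0)  # R2 now has only 1.0 left
```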
Packet Classifying and Scheduling
Once we have described our traffic and our desired network service and have installed a
suitable reservation at all the routers on the path, the only thing that remains is for the
routers to actually deliver the requested service to the data packets.
There are two things that need to be done:
Associate each packet with the appropriate reservation so that it can be handled correctly,
a process known as classifying packets.
Manage the packets in the queues so that they receive the service that has been
requested, a process known as packet scheduling.
The first part is done by examining up to five fields in the packet: the source address,
destination address, protocol number, source port, and destination port. (In IPv6, it is
possible that the FlowLabel field in the header could be used to enable the lookup to be
done based on a single, shorter key.)
Based on this information, the packet can be placed in the appropriate class. For example,
it may be classified into the controlled-load class, or it may be part of a guaranteed flow
that needs to be handled separately from all other guaranteed flows.
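The five-field lookup can be sketched as a dictionary keyed on the 5-tuple. The field names and the service labels below are illustrative, not from any standard API:

```python
def classify(packet, reservations):
    """Map a packet's 5-tuple to its reserved service class, falling
    back to best effort when no reservation matches."""
    key = (packet["src"], packet["dst"], packet["proto"],
           packet["sport"], packet["dport"])
    return reservations.get(key, "best-effort")

reservations = {
    ("10.0.0.1", "10.0.0.2", 17, 5004, 5005): "guaranteed",
    ("10.0.0.3", "10.0.0.2", 6, 4000, 80): "controlled-load",
}
pkt = {"src": "10.0.0.1", "dst": "10.0.0.2", "proto": 17,
       "sport": 5004, "dport": 5005}
assert classify(pkt, reservations) == "guaranteed"
```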
The details of packet scheduling ideally should not be specified in the service model.
Instead, this is an area where implementers can try to do creative things to realize the
service model efficiently.
In the case of guaranteed service, it has been established that a weighted fair queuing
discipline, in which each flow gets its own individual queue with a certain share of the
link, will provide a guaranteed end-to-end delay bound that can readily be calculated.
For controlled load, simpler schemes such as FIFO may be used.
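The weighted-fair-queuing idea can be sketched with virtual finish times. This is a simplified model (the virtual clock here advances per flow rather than tracking the real link rate, which a production WFQ scheduler must do): each packet is stamped with a finish time F = F_prev + size/weight for its flow, and packets are served in finish-time order, so a flow with twice the weight gets roughly twice the service:

```python
def wfq_order(packets, weights):
    """Return the transmission order of (flow, size) packets under a
    simplified weighted fair queuing discipline."""
    finish = {flow: 0.0 for flow in weights}
    stamped = []
    for seq, (flow, size) in enumerate(packets):
        # Virtual finish time: previous finish plus size scaled by weight.
        finish[flow] += size / weights[flow]
        stamped.append((finish[flow], seq, flow))
    # Serve in order of virtual finish time (ties broken by arrival order).
    return [flow for _, _, flow in sorted(stamped)]

# Flow "a" has twice the weight of flow "b"; with equal-size packets
# it is served about twice as often.
arrivals = [("a", 100), ("b", 100), ("a", 100), ("b", 100), ("a", 100)]
assert wfq_order(arrivals, {"a": 2.0, "b": 1.0}) == ["a", "b", "a", "a", "b"]
```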
Scalability Issues
While the Integrated Services architecture and RSVP represented a significant
enhancement of the best-effort service model of IP, many Internet service providers felt
that it was not the right model for them to deploy.
The reason for this reticence relates to one of the fundamental design goals of IP:
scalability. In the best-effort service model, routers in the Internet store little or no state
about the individual flows passing through them.
Thus, as the Internet grows, the only thing routers have to do to keep up with that growth
is to move more bits per second and to deal with larger routing tables.
But RSVP raises the possibility that every flow passing through a router might have a
corresponding reservation.
Each of those reservations needs some amount of state that needs to be stored in memory
and refreshed periodically.
The router needs to classify, police, and queue each of those flows. Admission control
decisions need to be made every time such a flow requests a reservation.
And some mechanisms are needed to “push back” on users so that they don’t make
arbitrarily large reservations for long periods of time.
Differentiated Services (EF, AF)
The Differentiated Services model (often called DiffServ for short) allocates resources to a
small number of classes of traffic.
In fact, some proposed approaches to DiffServ simply divide traffic into two classes.
Suppose that we have decided to enhance the best-effort service model by adding just one
new class, which we’ll call “premium”.
Clearly, we will need some way to figure out which packets are premium and which are
regular old best effort.
Rather than using a protocol like RSVP to tell all the routers that some flow is sending
premium packets, it would be much easier if the packets could just identify themselves to
the router when they arrive.
This could obviously be done by using a bit in the packet header—if that bit is a 1, the
packet is a premium packet; if it’s a 0, the packet is best effort.
In fact, the Differentiated Services working group of the IETF is standardizing a set of
router behaviors to be applied to marked packets.
These are called “per-hop behaviors” (PHBs), a term that indicates that they define the
behavior of individual routers rather than end-to-end services.
Because there is more than one new behavior, there is also a need for more than one bit in
the packet header to tell the routers which behavior to apply.
The IETF decided to take the old TOS byte from the IP header, which had not been
widely used, and redefine it. Six bits of this byte have been allocated for DiffServ code
points (DSCPs), where each DSCP is a 6-bit value that identifies a particular PHB to be
applied to a packet.
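Since the DSCP occupies the six high-order bits of the old TOS byte (the low two bits were later assigned to ECN), extracting it is a two-bit right shift:

```python
def dscp_from_tos(tos_byte):
    """Recover the 6-bit DSCP from the redefined TOS byte: the DSCP
    sits in the six high-order bits, so shift out the low two bits."""
    return (tos_byte & 0xFF) >> 2

# The standard code point for expedited forwarding (EF) is 46
# (binary 101110); in the TOS byte it appears as 46 << 2 = 0xB8.
assert dscp_from_tos(0xB8) == 46
```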
One of the simplest PHBs to explain is known as “expedited forwarding” (EF). Packets
marked for EF treatment should be forwarded by the router with minimal delay and loss.
Another PHB is known as “assured forwarding” (AF). This behavior has its roots in an
approach known as “RED with In and Out” (RIO) or “Weighted RED,” both of which are
enhancements to the basic RED algorithm.
RED with “in” and “out” drop probabilities: “RED with In and Out” (RIO)
As in basic RED, drop probability (on the y-axis) increases as the average queue length
grows along the x-axis. But now, for our two classes of traffic, we have two separate
drop-probability curves.
RIO calls the two classes “in” and “out” for reasons that will become clear shortly.
Because the “out” curve has a lower MinThreshold than the “in” curve, it is clear that,
under low levels of congestion, only packets marked “out” will be discarded by the RED
algorithm.
If the congestion becomes more serious, a higher percentage of “out” packets are
dropped; then, if the average queue length exceeds the “in” curve’s MinThreshold
(Min_in), RED starts to drop “in” packets as well.
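The two-curve behavior can be sketched directly. The threshold and probability values below are illustrative assumptions, not taken from any standard; what matters is the shape: the "out" curve's lower MinThreshold means "out" packets start being dropped before "in" packets do:

```python
def rio_drop_probability(avg_qlen, marked_in,
                         min_out=20, max_out=40, p_out=0.5,
                         min_in=40, max_in=70, p_in=0.1):
    """RIO sketch: pick the RED curve for the packet's class and apply
    the standard linear ramp between the two thresholds."""
    lo, hi, pmax = (min_in, max_in, p_in) if marked_in else (min_out, max_out, p_out)
    if avg_qlen < lo:
        return 0.0
    if avg_qlen >= hi:
        return 1.0
    # Linear ramp from 0 to pmax between MinThreshold and MaxThreshold.
    return pmax * (avg_qlen - lo) / (hi - lo)

# Under mild congestion only "out" packets see any drop probability.
assert rio_drop_probability(30, marked_in=False) > 0
assert rio_drop_probability(30, marked_in=True) == 0.0
```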
Weighted RED
That is, we have effectively reserved 20% of the link for premium packets.
ATM Quality of Service
In many respects, the QoS capabilities that are provided in ATM networks are similar to
those provided in an IP network using Integrated Services.
However, the ATM standards bodies came up with a total of five service classes
compared to the IETF’s three. The five ATM service classes are
constant bit rate (CBR)
variable bit rate—real-time (VBR-rt)
variable bit rate—non-real-time (VBR-nrt)
available bit rate (ABR)
unspecified bit rate (UBR)
Comparison of RSVP and ATM
Equation-Based Congestion Control
TCP itself is not appropriate for real-time applications. One reason is that TCP is a
reliable protocol, and real-time applications often cannot afford the delays introduced by
retransmission.
Specifically, several so-called TCP-friendly congestion-control algorithms have been
proposed. These algorithms have two main goals. One is to slowly adapt the congestion
window. This is done by adapting over relatively longer time periods (e.g., an RTT)
rather than on a per-packet basis. This smooths out the transmission rate.
The second is to be TCP-friendly in the sense of being fair to competing TCP flows. This
property is often enforced by ensuring that the flow’s behavior adheres to an equation
that models TCP’s behavior. Hence, this approach is sometimes called equation-based
congestion control.
The transmission rate must be inversely proportional to the round-trip time (RTT) and to
the square root of the loss rate (ρ); that is, rate ∝ 1 / (RTT × √ρ).
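The proportionality can be made concrete with the simple TCP throughput model. The function below is a sketch (the constant c is commonly taken as √(3/2) ≈ 1.22 in the simple model; more detailed equations also account for timeouts):

```python
from math import sqrt

def tcp_friendly_rate(mss, rtt, loss_rate, c=1.22):
    """Simple TCP throughput model: rate = (MSS / RTT) * c / sqrt(rho),
    i.e., inversely proportional to RTT and to the square root of the
    loss rate rho."""
    return (mss / rtt) * c / sqrt(loss_rate)

# Doubling the RTT halves the sustainable rate ...
r1 = tcp_friendly_rate(mss=1460, rtt=0.1, loss_rate=0.01)
r2 = tcp_friendly_rate(mss=1460, rtt=0.2, loss_rate=0.01)
assert abs(r1 / r2 - 2.0) < 1e-9
# ... while quadrupling the loss rate also halves it.
r3 = tcp_friendly_rate(mss=1460, rtt=0.1, loss_rate=0.04)
assert abs(r1 / r3 - 2.0) < 1e-9
```

A flow that keeps its sending rate at or below this value consumes no more bandwidth than a TCP connection would under the same conditions, which is what makes it "TCP-friendly."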