1. “It’s one small step for man, one
giant leap for mankind.”
- Neil Armstrong
2. NETWORK LAYER DESIGN ISSUES
• This section surveys issues the designers of the network layer must grapple with, including the service provided to the transport layer and the internal design of the network:
• 1 Store-and-Forward Packet Switching
• 2 Services Provided to the Transport Layer
• 3 Implementation of Connectionless Service
• 4 Implementation of Connection-Oriented Service
• 5 Comparison of Virtual-Circuit and Datagram Networks
3. Store-and-Forward Packet Switching
• The major components of the network are the ISP’s equipment (routers connected by transmission lines),
shown inside the shaded oval, and the customers’ equipment, shown outside the oval.
• A host with a packet to send transmits it to the nearest router, either on its own LAN or over a point-to-
point link to the ISP.
• The packet is stored there until it has fully arrived and the checksum has been verified. It is then forwarded to the next router along the path until it reaches the destination host, where it is delivered.
4. Services Provided to the Transport Layer
• The services need to be carefully designed with the following goals in mind:
• 1. The services should be independent of the router technology.
• 2. The transport layer should be shielded from the number, type, and topology of the routers present.
• 3. The network addresses made available to the transport layer should use a uniform numbering plan, even across
LANs and WANs
5. Implementation of connectionless service
• If connectionless service is offered, packets are injected into the network
individually and routed independently of each other. No advance setup is
needed. In this context, the packets are frequently called datagrams (in
analogy with telegrams) and the network is called a datagram network.
• Let us assume for this example that the message is four times longer than
the maximum packet size, so the network layer has to break it into four
packets, 1, 2, 3, and 4, and send each of them in turn to router A.
• Every router has an internal table telling it where to send packets for each of the possible destinations. Each table entry is a pair (destination, outgoing line). Only directly connected lines can be used.
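The per-destination table lookup described above can be sketched in a few lines of Python; the table entries and line names here are hypothetical examples, not taken from the slides:

```python
# Hypothetical datagram forwarding sketch: each router keeps a table
# mapping destination -> outgoing line, and handles each packet independently.
forwarding_table = {   # assumed example entries for one router
    "B": "line_1",
    "C": "line_2",
    "D": "line_2",
}

def forward(packet_dest):
    """Look up the outgoing line for a destination; None means no route."""
    return forwarding_table.get(packet_dest)

line = forward("C")    # the router would queue the packet on this line
```

Because each packet is looked up independently, two datagrams for the same destination may follow different routes if the table changes between them.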
6. Implementation of connectionless service
The algorithm that manages the tables and makes the routing decisions is called the routing algorithm.
7. Implementation of connection-oriented service
• If connection-oriented service is used, a path from the source router all
the way to the destination router must be established before any data
packets can be sent. This connection is called a VC (virtual circuit), and
the network is called a virtual-circuit network.
• When a connection is established, a route from the source machine to
the destination machine is chosen as part of the connection setup and
stored in tables inside the routers. That route is used for all traffic
flowing over the connection, exactly the same way that the telephone
system works. When the connection is released, the virtual circuit is
also terminated.
10. Routing Algorithms
• The main function of NL (Network Layer) is routing packets from the source
machine to the destination machine.
• There are two processes inside router:
• a) One of them handles each packet as it arrives, looking up the outgoing line to
use for it in the routing table. This process is forwarding.
• b) The other process is responsible for filling in and updating the routing tables.
That is where the routing algorithm comes into play. This process is routing.
• Regardless of whether routes are chosen independently for each packet or only when new connections are established, certain properties are desirable in a routing algorithm: correctness, simplicity, robustness, stability, fairness, and optimality.
11. Routing Algorithms
• Routing algorithms can be grouped into two major classes:
• 1) nonadaptive (Static Routing)
• 2) adaptive. (Dynamic Routing)
• Nonadaptive algorithms do not base their routing decisions on measurements or estimates of the current traffic and topology. This procedure is sometimes called static routing.
• Adaptive algorithms, in contrast, change their routing decisions to reflect changes in the topology, and usually the traffic as well. This procedure is called dynamic routing.
12. Different Routing Algorithms
Optimality principle
Shortest path algorithm
Flooding
Distance vector routing
Link state routing
Hierarchical Routing
13. The Optimality Principle
• One can make a general statement about optimal routes
without regard to network topology or traffic. This statement
is known as the optimality principle.
• It states that if router J is on the optimal path from router I to router K, then the optimal path from J to K also falls along the same route.
• As a direct consequence of the optimality principle, we can
see that the set of optimal routes from all sources to a given
destination form a tree rooted at the destination. Such a tree
is called a sink tree. If we allow all of the possible paths to be chosen, the tree becomes a more
general structure called a DAG (Directed Acyclic Graph). DAGs have no loops.
14. Shortest Path Algorithm
• The idea is to build a graph of the subnet, with each node of the graph representing a router and
each arc of the graph representing a communication line or link.
• To choose a route between a given pair of routers, the algorithm just finds the shortest path between
them on the graph
• 1. Start with the local node (router) as the root of the tree. Assign a cost of 0 to this node and make it
the first permanent node.
• 2. Examine each neighbor of the node that was the last permanent node.
• 3. Assign a cumulative cost to each node and make it tentative
• 4. Among the list of tentative nodes
• a. Find the node with the smallest cost and make it Permanent
• b. If a node can be reached from more than one route then select the route with the shortest
cumulative cost.
• 5. Repeat steps 2 to 4 until every node becomes permanent
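Steps 1–5 above amount to Dijkstra's algorithm. A minimal Python sketch follows; the example graph is an assumed illustration, not one of the slide figures:

```python
import heapq

def shortest_path(graph, source, dest):
    """Dijkstra's algorithm on an adjacency dict {node: {neighbor: cost}}.
    Returns (total_cost, path) or (inf, []) if dest is unreachable."""
    dist = {source: 0}                     # step 1: root costs 0
    prev = {}
    pq = [(0, source)]
    permanent = set()
    while pq:
        d, node = heapq.heappop(pq)
        if node in permanent:
            continue
        permanent.add(node)                # step 4a: make it permanent
        if node == dest:
            break
        for nbr, cost in graph.get(node, {}).items():   # step 2
            nd = d + cost                  # step 3: cumulative cost
            if nd < dist.get(nbr, float("inf")):        # step 4b
                dist[nbr] = nd
                prev[nbr] = node
                heapq.heappush(pq, (nd, nbr))
    if dest not in dist:
        return float("inf"), []
    path, n = [dest], dest
    while n != source:                     # walk the prev links back
        n = prev[n]
        path.append(n)
    return dist[dest], path[::-1]

graph = {"A": {"B": 2, "C": 5}, "B": {"C": 1, "D": 4}, "C": {"D": 1}, "D": {}}
```

Here `shortest_path(graph, "A", "D")` finds the route A→B→C→D at cost 4, cheaper than the direct-looking A→B→D at cost 6.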
17. Flooding
• Another static algorithm is flooding, in which every incoming packet is sent out on every outgoing line except the one it arrived on.
• Flooding obviously generates vast numbers of duplicate packets, so some measure is needed to damp the process.
• One such measure is to have a hop counter contained in the header of each packet, which is decremented at each hop, with the packet being discarded when the counter reaches zero.
• A variation of flooding that is slightly more practical is selective flooding.
• Flooding is not practical in most applications.
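The hop-counter damping measure can be sketched as follows; the packet and line representations are hypothetical, not from a real router stack:

```python
# Flooding with a hop counter (assumed dict-based packet representation).
def flood(packet, arrival_line, router_lines):
    """Forward a copy on every line except the one the packet arrived on,
    discarding the packet once its hop counter is exhausted."""
    packet["hops"] -= 1                # decremented at each hop
    if packet["hops"] <= 0:
        return []                      # counter reached zero: discard
    return [line for line in router_lines if line != arrival_line]

pkt = {"hops": 3, "payload": "hello"}
out = flood(pkt, "line_0", ["line_0", "line_1", "line_2"])
```

Ideally the counter is initialized to the length of the path from source to destination; if that is unknown, the full network diameter is a safe worst case.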
18. Distance Vector Routing
• Computer networks utilize dynamic routing algorithms, such as distance vector routing and link state routing, which are more complex than static algorithms but find the shortest paths for the current topology. This slide focuses on the former.
• A distance vector routing algorithm operates by having each router maintain a table (i.e., a vector)
giving the best known distance to each destination and which link to use to get there. These tables are
updated by exchanging information with the neighbors. Eventually, every router knows the best link to
reach each destination.
• The distance vector routing algorithm, also known as the Bellman-Ford routing algorithm, was the
original ARPANET routing algorithm and was used in the Internet under the name RIP
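The table-update rule at the heart of Bellman-Ford can be sketched in Python; the table format (destination → (cost, next hop)) and the example costs are illustrative assumptions:

```python
def dv_update(my_table, link_cost, neighbor, neighbor_vector):
    """One distance vector step: for each destination the neighbor
    advertises, adopt the route via that neighbor if (cost to reach the
    neighbor) + (its advertised distance) beats the current entry.
    Returns True if any entry changed."""
    changed = False
    for dest, d in neighbor_vector.items():
        new_cost = link_cost + d
        cost, _ = my_table.get(dest, (float("inf"), None))
        if new_cost < cost:
            my_table[dest] = (new_cost, neighbor)   # best link so far
            changed = True
    return changed

table = {"B": (2, "B")}                    # we reach B directly at cost 2
dv_update(table, 2, "B", {"C": 3, "D": 7}) # B advertises its own vector
```

Repeating this exchange at every router eventually converges, which is exactly where the count-to-infinity problem discussed on the next slide arises when a link goes down.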
20. Link State Routing
• The primary problem that led to the demise of distance vector routing was that the algorithm often took too long to converge after the network topology changed (due to the count-to-infinity problem).
Consequently, it was replaced by an entirely new algorithm, now called link state routing. Variants of
link state routing called IS-IS and OSPF are the routing algorithms that are most widely used inside
large networks and the Internet today.
• The idea behind link state routing is fairly simple and can be stated as five parts. Each router must do
the following things to make it work:
• 1. Discover its neighbors and learn their network addresses.
• 2. Set the distance or cost metric to each of its neighbors.
• 3. Construct a packet telling all it has just learned.
• 4. Send this packet to and receive packets from all other routers.
• 5. Compute the shortest path to every other router.
21. Learning about the Neighbors
• A router learns its neighbors by sending a special HELLO packet on each point-to-point line, and the
router on the other end sends a reply with its name.
• Setting Link Costs:
• The link state routing algorithm requires each link to have a distance or cost metric for finding shortest
paths. The cost to reach neighbors can be set automatically, or configured by the network operator
• Building Link State Packets : Once the information needed for the exchange has been collected, the
next step is for each router to build a packet containing all the data. The packet starts with the identity
of the sender, followed by a sequence number and age (to be described later) and a list of neighbors.
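The link state packet layout described above (sender, sequence number, age, neighbor list) can be sketched as a small constructor; the field values and the 60-unit default age are illustrative assumptions:

```python
import itertools

_seq = itertools.count(1)    # per-router sequence number source

def build_lsp(router_id, neighbors, max_age=60):
    """Assemble a link state packet: the sender's identity, a sequence
    number (so receivers can discard stale duplicates), an age field
    (decremented as the packet floods; 0 means discard), and the
    (neighbor, cost) list learned from the HELLO exchanges."""
    return {
        "sender": router_id,
        "seq": next(_seq),
        "age": max_age,
        "neighbors": dict(neighbors),    # e.g. {"B": 4, "C": 2}
    }

lsp = build_lsp("A", {"B": 4, "C": 2})
```

Once every router has flooded such a packet and received everyone else's, each one holds the full topology and can run Dijkstra locally (step 5 of the algorithm).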
22. 4. RIP (6): Table processing
• RIP routing tables managed by application-level process called
route-d (daemon)
• advertisements sent in UDP packets, periodically repeated
(Figure: two routers, each running the routed daemon above the transport (UDP) layer; advertisements travel down through the network (IP), link, and physical layers and update each router's forwarding table.)
23. 4. OSPF (1) (Open Shortest Path First)
• “open”: publicly available
• Uses Link State algorithm
• LS packet dissemination
• Topology map at each node
• Route computation using Dijkstra’s algorithm
• OSPF advertisement carries one entry per
neighbor router
• Advertisements disseminated to entire AS (via
flooding)
– Carried in OSPF messages directly over IP (rather than TCP or UDP)
24. Hierarchical Routing
• As networks grow, router routing tables grow proportionally, consuming more memory, CPU time, and
bandwidth. When the network becomes too large for every router to have an entry for every other router,
hierarchical routing is used.
• Routers are divided into regions, each knowing how to route packets within its own region but not about the internal structure of other regions. This allows interconnected networks to treat each region as a separate area, allowing routers to avoid knowing the topological structure of other networks.
• For large networks, a two-level hierarchy may not be sufficient; it may be necessary to group regions into clusters, clusters into zones, and zones into groups. For example, a packet from Berkeley to Malindi would be routed through a multilevel hierarchy.
27. 4. Hierarchical OSPF (3)
• Two-level hierarchy: local area, backbone.
• Link-state advertisements only in area
• Each node has detailed area topology; nodes only know the direction (shortest path) to nets in other areas.
• Area border routers: “summarize” distances to nets in own area, advertise to other area border routers.
• Backbone routers: run OSPF routing limited to backbone.
• Boundary routers: connect to other AS’s.
28. Broadcast Routing
Broadcasting is the process of sending a packet to all destinations simultaneously, such as weather reports or stock market updates.
One method is to send a distinct packet to each destination; this requires no special network features, but it wastes bandwidth and requires the source to have a complete list of all destinations.
Multidestination routing is an improvement in network communication where each packet contains a list of
destinations or a bit map indicating the desired destinations.
When a packet arrives at a router, it checks all the destinations to determine the set of output lines needed.
The router generates a new copy of the packet for each output line and includes only those destinations that
will use the line. This partitions the destination set among the output lines, resulting in more efficient
network bandwidth usage.
• Flooding is a better broadcast routing technique that efficiently uses links with a simple decision rule at
routers.
• Reverse path forwarding is an elegant and simple idea that checks if a broadcast packet arrived on the
link normally used for sending packets toward the source
29. • Reverse path forwarding is a network routing algorithm in which, for example, router I sends packets to previously unvisited routers on the first hop. The second hop generates eight packets, two by each router that received a packet on the first hop.
• All eight packets arrive at previously unvisited routers, with five along the preferred line. The third hop generates six packets, with only three arriving on the preferred path.
• After five hops and 24 packets, broadcasting terminates, compared to four hops and 14 packets if the sink tree were followed exactly.
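The reverse path check itself is a one-line decision at each router: accept and reforward only if the broadcast arrived on the line this router would itself use to reach the source. A Python sketch, with hypothetical line names:

```python
def reverse_path_forward(source, arrival_line, unicast_table, all_lines):
    """Reforward a broadcast copy on all other lines only if it arrived
    on the line normally used for sending packets toward the source;
    otherwise it is a likely duplicate and is dropped."""
    preferred = unicast_table.get(source)   # line used to reach source
    if arrival_line != preferred:
        return []                            # duplicate: discard
    return [l for l in all_lines if l != arrival_line]

table = {"S": "line_1"}                      # assumed unicast table entry
copies = reverse_path_forward("S", "line_1", table,
                              ["line_1", "line_2", "line_3"])
```

The appeal of the rule is that it needs only the ordinary unicast forwarding table, with no per-broadcast state at the routers.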
30. Multicast Routing
• Multicasting: to send messages to well-defined
groups that are numerically large in size but small
compared to the network as whole.
• Group management: Some way is needed to create
and destroy groups, and to allow processes to join
and leave groups.
• Computing a spanning tree covering all other
routers.
• Multicast routing prunes the spanning tree.
• When a process sends a multicast packet to a group, the first router examines its spanning tree and prunes it, removing all links that do not lead to members of the group.
31. • (a) A network.
• (b) A spanning tree for the leftmost router.
• (c) A multicast tree for group 1.
• (d) A multicast tree for group 2
34. Routing for Mobile Hosts
• The model of the world that we will consider is one in which all hosts are assumed to have a
permanent home location that never changes
• The routing goal in systems with mobile hosts is to make it possible to send packets to mobile hosts
using their fixed home addresses and have the packets efficiently reach them wherever they may be
• The basic idea used for mobile routing in the Internet and cellular networks is for the mobile host to tell a host at the home location where it is now. This host is called the home agent.
• Once it knows where the mobile host is currently located, it can forward packets so that they are
delivered.
35. Fig. 5-19 shows mobile routing in action. A sender in the northwest city of Seattle wants to send a
packet to a host normally located across the United States in New York.
The case of interest to us is when the mobile host is not at home. Instead, it is temporarily in San
Diego.
36. 6. Routing in Ad Hoc Networks
• Possibilities when the routers are mobile:
• Military vehicles on battlefield.
– No infrastructure.
• A fleet of ships at sea.
– All moving all the time
• Emergency workers at an earthquake site.
– The infrastructure destroyed.
• A gathering of people with notebook computers.
– In an area lacking 802.11.
37. 6. Routing in Ad Hoc Networks: Route Discovery
• (a) Range of A's broadcast.
• (b) After B and D have received A's broadcast.
• (c) After C, F, and G have received A's broadcast.
• (d) After E, H, and I have received A's broadcast.
• Shaded nodes are new recipients. Arrows show possible reverse routes.
38. 6. Routing in Ad Hoc Networks: Route Discovery
• Format of a ROUTE REQUEST packet.
39. 6. Routing in Ad Hoc Networks: Route Discovery
• Format of a ROUTE REPLY packet.
40. 6. Routing in Ad Hoc Networks (5): Route Maintenance
• (a) D's routing table before G
goes down.
• (b) The graph after G has gone
down.
41. CONGESTION CONTROL ALGORITHMS
• Too many packets present in (a part of) the network cause packet delay and loss that degrade performance. This situation is called congestion.
• The network and transport layers share the responsibility for handling congestion.
• Unless the network is well designed, it may experience a congestion collapse, in which
performance plummets as the offered load increases beyond the capacity
42. Approaches to Congestion Control
• The presence of congestion means that the load is (temporarily) greater than the resources (in a part of
the network) can handle.
• Two solutions come to mind: increase the resources or decrease the load.
43. Congestion Control Algorithms
• Approaches to congestion control
• Traffic-aware routing
• Admission control
• Traffic throttling
• Load shedding
44. • More often, links and routers that are regularly heavily utilized are upgraded at the earliest opportunity.
This is called provisioning
• To make the most of the existing network capacity, routes can be tailored to traffic patterns that change
during the day as network users wake and sleep in different time zones. This is called traffic-aware
routing
• In a virtual-circuit network, new connections can be refused if they would cause the network to become congested. This is called admission control.
• Two difficulties with this approach are how to identify the onset of congestion, and how to inform the
source that needs to slow down.
• To tackle the first issue, routers can monitor the average load, queueing delay, or packet loss.
• To tackle the second issue, routers must participate in a feedback loop with the sources.
• Finally, when all else fails, the network is forced to discard packets that it cannot deliver. The general
name for this is load shedding.
45. Traffic-Aware Routing
• The goal in taking load into account when computing routes is to shift traffic away from hotspots that
will be the first places in the network to experience congestion.
• The most direct way to do this is to set the link weight to be a function of the (fixed) link bandwidth
and propagation delay plus the (variable) measured load or average queuing delay.
46. Admission Control
• The idea is simple: do not set up a new virtual circuit unless the network can carry the added traffic without
becoming congested.
47. Congestion Control in Virtual-Circuit
Subnets: Admission control
(a) A congested subnet. (b) A redrawn subnet that eliminates the congestion, plus a virtual circuit from A to B.
49. Congestion Control in Datagram Subnets: Warning Bit
The old DECNET and frame relay networks:
A warning bit is sent back in the ack to the source in case of congestion. Every router on the path can set the warning bit.
Each router monitors its utilization u based on the instantaneous utilization f (either 0 or 1), using the update u_new = a · u_old + (1 − a) · f, where a is a forgetting factor.
If u is above a threshold, a warning state is reached.
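The utilization update is an exponentially weighted moving average; a short Python sketch (the forgetting factor 0.9 and the 0.3 threshold are assumed example values):

```python
def update_utilization(u_old, f, a=0.9):
    """EWMA of line utilization: u_new = a * u_old + (1 - a) * f,
    where f is the instantaneous sample (0 or 1) and a is the
    forgetting factor (larger a = slower to forget history)."""
    return a * u_old + (1 - a) * f

u = 0.0
for f in [1, 1, 1, 1]:          # the line is busy on four samples in a row
    u = update_utilization(u, f)
warning = u > 0.3               # router enters the warning state
```

Because a is close to 1, a single busy sample barely moves u; only sustained load pushes it past the threshold, which is exactly the smoothing the warning-bit scheme relies on.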
50. Hop-by-Hop Choke Packets
(in high speed nets)
(a) A choke packet that affects only
the source.
(b) A choke packet that affects
each hop it passes through.
51. Dropping packets
Load shedding: wine vs. milk
• Wine: drop new packets (keep old); good for file transfer.
• Milk: drop old packets (keep new); good for multimedia.
Random Early Detection (RED)
• When the average queue length exceeds a threshold, packets are picked at random from the queue and discarded.
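The RED drop decision can be sketched as follows; the thresholds and maximum drop probability are assumed example parameters, and real implementations apply this to an averaged (smoothed) queue length:

```python
import random

def red_drop(avg_queue_len, min_th=5, max_th=15, max_p=0.1):
    """Random Early Detection sketch: below min_th never drop, at or
    above max_th always drop, and in between drop with probability
    rising linearly toward max_p."""
    if avg_queue_len < min_th:
        return False
    if avg_queue_len >= max_th:
        return True
    p = max_p * (avg_queue_len - min_th) / (max_th - min_th)
    return random.random() < p
```

Dropping a few packets early, before the buffer is actually full, signals sources such as TCP to slow down while there is still room to absorb the traffic in flight.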
52. Quality of Service
• Application requirements
• Traffic shaping
• Packet scheduling
• Admission control
• Integrated services
• Differentiated services
53. QOS
• Previous sections are designed to reduce congestion and improve
network performance.
Overprovisioning
• An easy solution for ensuring good quality of service is to build a
network with sufficient capacity to handle any anticipated
traffic.
• Allows the network to carry application traffic with minimal loss
and low latency, assuming a decent routing scheme.
• The result is optimal performance
• Example - telephone system
54. Drawbacks of Overprovisioning
• Its main drawback is high cost: more resources must be invested than are normally needed.
• Quality of service mechanisms offer an alternative by
enabling a network with less capacity to meet application
requirements at a lower cost.
• Overprovisioning relies on expected traffic, and significant
issues may arise if traffic patterns change unexpectedly.
55. Four design issues for QOS
• Four issues must be addressed to ensure quality of
service:
– What applications need from the network.
– How to regulate the traffic that enters the network.
– How to reserve resources at routers to guarantee
performance.
– Whether the network can safely accept more traffic.
57. Application requirements
• Jitter and its Impact:
– Jitter, the variation in delay or packet arrival times, is crucial.
– Email, audio, and file transfer are generally not sensitive to
jitter.
– Remote login may be affected by jitter, causing bursts of
updates on the screen.
– Video and audio are extremely sensitive to jitter; even small
variations can have a significant impact.
59. Categories of QoS and Examples
• Networks may support different categories of QoS.
– Constant bit rate (e.g., telephony).
– Real-time variable bit rate (e.g., compressed
video conferencing).
– Non-real-time variable bit rate (e.g., watching a
movie on demand).
– Available bit rate (e.g., file transfer).
60. Traffic Shaping
• Traffic shaping is a technique that regulates the average
rate and burstiness of data flow entering the network.
– It allows applications to transmit diverse traffic patterns while
providing a simple and useful way to describe these patterns to
the network.
– The customer and provider agree on a traffic pattern (shape)
for a flow, often outlined in a Service Level Agreement (SLA) for
long-term commitments.
61. SLA and Traffic Monitoring:
• The Service Level Agreement (SLA) outlines the
agreed-upon traffic patterns between the customer and
provider.
• Traffic shaping, when adhered to by the customer, helps
reduce congestion and ensures the network can fulfill its
promises.
• Traffic policing involves monitoring a traffic flow to
verify adherence to the agreement, potentially dropping
excess packets or marking them with lower priority.
62. Traffic Shaping - Leaky bucket and Token bucket
• (a) Shaping packets. (b) A leaky bucket. (c) A token
bucket
63. Leaky Bucket
• The leaky bucket algorithm is used to control the rate of traffic in a network.
• It is implemented as a single-server queue with constant service time.
• If the bucket (buffer) overflows, packets are discarded.
• It enforces a constant output rate regardless of the burstiness of the input, and does nothing when the input is idle.
64. Leaky Bucket
• The host injects one packet per clock tick onto the network.
• This results in a uniform flow of packets, smoothing out bursts and reducing congestion.
• If all packets are the same size, counting packets per clock tick is fine; for variable-length packets, the number of bytes per tick is counted instead.
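The two slides above describe a bounded queue drained at one packet per tick; a minimal Python sketch, with an assumed toy capacity:

```python
from collections import deque

class LeakyBucket:
    """Single-server queue with constant service: one packet leaves per
    clock tick regardless of how bursty the arrivals are; arrivals that
    find the buffer full are discarded."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.queue = deque()

    def arrive(self, packet):
        if len(self.queue) >= self.capacity:
            return False               # bucket overflow: drop
        self.queue.append(packet)
        return True

    def tick(self):
        """One clock tick: emit one packet onto the network, if any."""
        return self.queue.popleft() if self.queue else None

lb = LeakyBucket(capacity=2)
accepted = [lb.arrive(p) for p in ["p1", "p2", "p3"]]   # burst of three
```

The burst of three arrivals overflows the two-slot bucket, so the third packet is dropped, while the survivors drain out at exactly one per tick.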
65. Token Bucket
• Allows the output rate to vary depending on the size of the burst.
• The bucket holds tokens.
• To transmit a packet, the host must capture and destroy one token.
• Tokens are generated by a clock at the rate of one token every Δt seconds.
• An idle host can capture and save up tokens (up to the maximum size of the bucket) in order to send a larger burst later.
66. Traffic Shaping (2)
•(a) Traffic from a host. Output shaped by a token bucket of
rate 200 Mbps and capacity (b) 9600 KB, (c) 0 KB.
67. Traffic Shaping (3)
• Token bucket level for shaping with rate 200 Mbps and capacity (d) 16000 KB, (e) 9600 KB, and (f) 0 KB.
68. Packet Scheduling (1)
•In a network, packets are accumulated and queued in the memory buffers of routers and switches.
•The most common way to arrange packets is FIFO.
•Other methods may be used to prioritize packets and ensure all are delivered without blocking resources.
•Buffer overflow is commonly handled by tail drop.
•The packet scheduling algorithm determines the order in which backlogged packets are transmitted on an output link. The scheduler:
–Allocates output bandwidth
–Controls packet delay
69. First-Come-First Served (FIFO)
• Packets are transmitted in the order of their arrival
• Advantage:
– Very simple to implement
• Disadvantage:
– Cannot give different service to different types of connections
– Each flow (even with low data rate) can experience long delays
70. Tail Drop in FIFO Routers:
• FIFO routers often drop newly arriving packets when the
queue is full.
• This behaviour is known as tail drop, where the newly
arrived packet is dropped since it would have been placed
at the end of the full queue.
• Various scheduling algorithms create different
opportunities for deciding which packet to drop when
buffers are full.
71. Packet Scheduling
• Packets from different flows arrive at a switch or router for
processing.
• A good scheduling technique treats the different flows in a fair and
appropriate manner.
• Several scheduling techniques are designed to improve the
quality of service.
– Three of them are:
• 1. FIFO queuing,
• 2. priority queuing,
• 3. weighted fair queuing
72. Packet Scheduling - fair queueing algorithm
• Round-robin Fair
Queuing
• When the line becomes idle, the router scans the
queues round-robin
• It then takes the first packet on the next queue. In
this way, with n hosts competing for the output line,
each host gets to send one out of every n packets.
• It is fair in the sense that all flows get to send
packets at the same rate. Sending more packets
will not improve this rate.
• flaw:
• it gives more bandwidth to hosts that use large
packets than to hosts that use small packets.
• improvement:
• the round-robin is done in such a way as to
simulate a byte-by-byte round-robin, instead of
a packet-by-packet round-robin.
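The byte-by-byte improvement is usually implemented by giving each packet a virtual finish time and sending packets in finish-time order. A simplified Python sketch (it assumes all packets are already queued and ignores arrival times, which real fair queueing must track):

```python
import heapq

def fair_queue_order(flows):
    """Byte-by-byte fair queueing sketch: each packet's finish time is
    the previous finish time on its flow plus its length in bytes, and
    packets are transmitted in order of finish time, so large packets
    no longer grab extra bandwidth."""
    heap = []
    for flow, lengths in flows.items():
        finish = 0
        for i, length in enumerate(lengths):
            finish += length                  # finish time on this flow
            heapq.heappush(heap, (finish, flow, i))
    order = []
    while heap:
        _, flow, i = heapq.heappop(heap)
        order.append((flow, i))
    return order

# Assumed example: flow "a" queues two 100-byte packets, flow "b" one
# 60-byte packet; the short packet finishes first under fair queueing.
sched = fair_queue_order({"a": [100, 100], "b": [60]})
```

Packet-by-packet round-robin would have sent a's first 100-byte packet before b's 60-byte one; ordering by byte-level finish times removes that large-packet advantage.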
74. Admission Control (1)
• Admission control refers to the mechanism used by a router, or a
switch, to accept or reject a flow based on predefined parameters
called flow specifications.
• Before a router accepts a flow for processing, it checks the flow specifications to see if its capacity (in terms of bandwidth, buffer size, CPU speed, etc.) and its previous commitments to other flows can handle the new flow.
75. Admission control
• Reservations and Path Considerations
• Complexity in Acceptance or Rejection Decision
• Negotiation of Flow Parameters
• Importance of Accurate Flow Description
• Flow Specification
• Establishment of Parameters
• Queueing Delay and Burst Size
• Path Through Multiple Routers
77. Integrated Services
• "integrated services," addressing both unicast and
multicast applications.
• Unicast example: a single user streaming a video clip
from a news site.
• Multicast example: digital television stations
broadcasting their programs as IP packet streams to
many receivers at various locations.
78. Multicast
• Dynamic Group Membership in Multicast
• Challenges with Bandwidth Reservation for Dynamic
Groups
79. RSVP—The Resource ReSerVation Protocol
• RSVP is responsible for making reservations in the
network
– It supports multiple senders transmitting to multiple groups of
receivers.
– Allows individual receivers to switch channels freely.
– Optimizes bandwidth utilization and simultaneously eliminates
congestion issues.
80. Integrated Services (1)
• (a) A network. (b) The multicast spanning tree for host 1.
(c) The multicast spanning tree for host 2.
81. Integrated Services (2)
•(a) Host 3 requests a channel to host 1. (b) Host 3 then requests a second channel, to host 2.
(c) Host 5 requests a channel to host 1.
• In first-in, first-out (FIFO) queuing, packets wait in a buffer (queue) until the node (router or switch) is
ready to process them. If the average arrival rate is higher than the average processing rate, the queue
will fill up and new packets will be discarded.
• In priority queuing, packets are first assigned to a priority class. Each priority class has its own queue.
The packets in the highest-priority queue are processed first. Packets in the lowest-priority queue are
processed last. Note that the system does not stop serving a queue until it is empty.
• In weighted fair queuing technique, the packets are still assigned to different classes and admitted to
different queues. The queues, however, are weighted based on the priority of the queues; higher
priority means a higher weight. The system processes packets in each queue in a round-robin fashion
with the number of packets selected from each queue based on the corresponding weight.
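The weighted round-robin selection described above can be sketched in Python; the class names, queues, and weights are assumed example values:

```python
def wfq_round(queues, weights):
    """Weighted fair queueing sketch: visit each class queue in
    round-robin order, taking a number of packets proportional to the
    queue's weight (higher priority = higher weight)."""
    sent = []
    for name, q in queues.items():
        for _ in range(weights.get(name, 1)):
            if q:                       # queue may drain mid-round
                sent.append(q.pop(0))
    return sent

queues = {"high": ["h1", "h2", "h3"], "low": ["l1", "l2"]}
weights = {"high": 2, "low": 1}
one_round = wfq_round(queues, weights)
```

With weight 2 versus 1, the high-priority class sends two packets for every one from the low-priority class, yet the low class is never starved, unlike strict priority queuing.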