Packet Scheduling
for QoS management
Part II
TELCOM2321 – CS2520
Wide Area Networks
Dr. Walter Cerroni
University of Bologna – Italy
Visiting Assistant Professor at SIS, Telecom Program
Slides partly based on Dr. Znati’s material
Readings
• Textbook Chapter 17, Section 2
Max-Min weighted fair share
• Max-Min fair share assumes that all flows have equal
right to resources
• Different flows might have different rights due to different
QoS requirements
– e.g. voice and video flows require different bandwidth levels
– customers of a video service are willing to pay more to get the
required bandwidth
– they must have the same chance as voice customers to get what
they need
• Associate a weight to each demand to take into account
different requirements
– normalize demands with corresponding weight
– use weights to compute fair share
Max-Min weighted fair share: Example
• Demands: d1 = d2 = 10, d3 = d4 = 25, d5 = 50
• Bandwidth: C = 100
• Max-Min satisfies d1, d2, d3, and d4 while d5 gets 30
• If d5 has the same right as the others, use weighted
Max-Min:
1. Weights: w1 = w2 = w3 = w4 = 1, w5 = 5
2. Weighted demands: d’1 = d’2 = 10, d’3 = d’4 = 25, d’5 = 50 / 5 = 10
3. FS = C / (sum of weights) = 100 / 9 = 11.11
4. d’1, d’2 and d’5 are satisfied → flow 5 gets its full demand of 50
5. Sum of weights of unsatisfied demands: M = w3 + w4 = 2
6. Bandwidth left: R = 100 – (10 + 10 + 50) = 30
7. FS = 30 / 2 = 15
8. d’3 and d’4 still exceed the fair share → flows 3 and 4 get 15 each
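The procedure in steps 1–8 can be sketched in Python (a minimal illustration, not from the slides; function and variable names are mine):

```python
def weighted_max_min(demands, weights, capacity):
    """Weighted Max-Min fair share: normalized demands d_i / w_i
    compete for a fair share FS = remaining capacity / sum of weights."""
    alloc = [0.0] * len(demands)
    unsat = set(range(len(demands)))        # indices of unsatisfied demands
    remaining = capacity
    while unsat:
        fs = remaining / sum(weights[i] for i in unsat)
        done = {i for i in unsat if demands[i] / weights[i] <= fs}
        if not done:
            # no normalized demand fits: split what is left by weight
            for i in unsat:
                alloc[i] = weights[i] * fs
            return alloc
        for i in done:                      # satisfied flows get their demand
            alloc[i] = demands[i]
            remaining -= demands[i]
        unsat -= done
    return alloc
```

On the slide's example, `weighted_max_min([10, 10, 25, 25, 50], [1, 1, 1, 1, 5], 100)` yields `[10, 10, 15, 15, 50]`, matching steps 1–8.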
Fairness in Work Conserving Schedulers
• If a scheduling policy guarantees fair resource sharing
(e.g. Max-Min), it is also protective
– a misbehaving source does not get more than its fair share
• Resources should be assigned based on QoS
requirements
– Max-Min weighted fair share
– weights limited by conservation law
• Fair work conserving scheduling policies
– Fair Queuing
– Processor Sharing
– Generalized Processor Sharing
– Weighted Fair Queuing
Fair Queuing
• A router maintains multiple queues
– separate queues for different traffic flows or classes
– queues are serviced by a round-robin scheduler
– empty queues are skipped over
• Greedy connections face long delays, but other
connections are not affected by them
• Drawbacks
– no service differentiation
– the scheme penalizes flows with short packets
(figure: K per-flow queues, numbered 1 to K, serviced by a round-robin scheduler)
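The round-robin behavior and its short-packet penalty can be seen in a few lines of Python (an illustrative sketch; names are mine):

```python
from collections import deque

def fq_round_robin(queues):
    """Packet-by-packet round robin over per-flow queues; empty queues
    are skipped. Returns the (flow, packet_size) service order."""
    order = []
    while any(queues):
        for i, q in enumerate(queues):
            if q:
                order.append((i, q.popleft()))
    return order
```

`fq_round_robin([deque([1500, 1500]), deque([100, 100])])` returns `[(0, 1500), (1, 100), (0, 1500), (1, 100)]`: one packet per flow per round, so the 1500-byte flow receives 15× the bandwidth of the 100-byte flow.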
Processor Sharing
• The server is able to serve multiple customers at
the same time
• Fluid model
– traffic is infinitely divisible
– service can be provided to infinitesimal amounts of
data
– the packet size does not count anymore
– if N customers are served by a server with capacity C,
the capacity seen by each customer is C / N
– ideal model to provide an upper bound on the fairness
of other schemes
Head-of-Queue Processor Sharing
• Separate queues for different traffic flows or classes
– head-of-queue packets are served from nonempty queues
– following packets wait to be serviced
– a nonempty queue is always served
• Each queue is assigned a fair share of server capacity
– when some flows become silent, their capacity is equally split
among the other flows → Max-Min fair share
– independently of packet size
(figure: K per-flow queues, with the head-of-queue packet of each nonempty queue in service)
Processor Sharing: Virtual time
• 0 ≤ N(t) ≤ K, number of nonempty queues at time t
• r(t) = C / N(t), capacity available to each nonempty queue
when N(t) > 0
• tk,i, actual arrival time of packet i to queue k
• Lk,i, length of packet i in queue k
• Actual service time depends on r(t)
– actual service time stretches or shrinks based on arrival and
departure events
• It is useful to define a virtual time scale v(t)
– continuous piecewise linear function of t with slope 1/N(t)
• Now it is possible to define
– sk,i, virtual starting transmission time of packet i in queue k
– fk,i, virtual ending transmission time of packet i in queue k
Processor Sharing: Example
Arrivals (server capacity C = 1):
• Queue 1: t1,1 = 0, L1,1 = 3; t1,2 = 2, L1,2 = 1
• Queue 2: t2,1 = 1, L2,1 = 1; t2,2 = 2, L2,2 = 4
• Queue 3: t3,1 = 3, L3,1 = 3
(figure: packets drawn per queue on an arrival-time axis, with height proportional to packet size)
Processor Sharing: Example
t    | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12
N(t) | 1 | 2 | 2 | 3 | 3 | 3 | 3 | 3 | 3 | 2 | 2  | 1  | 0
v(t) | 0 | 1 |   | 2 |   |   | 3 |   |   | 4 |    | 5  | 6
(figure: per-queue fluid service over time; head-of-queue departures occur from queue 2 at t = 3, queue 1 at t = 6 and t = 9, queue 3 at t = 11, and queue 2 at t = 12)
Virtual starting and ending times
Recursive definition
• fk,i = sk,i + Lk,i/C
• sk,i = max [ fk,i–1 , v(tk,i) ]
In our example
• s1,1 = 0, f1,1 = 3
• s2,1 = 1, f2,1 = 2
• s2,2 = 2, f2,2 = 6
• s3,1 = 2, f3,1 = 5
• s1,2 = 3, f1,2 = 4
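The whole construction — simulating N(t) to obtain v(t), then applying the recursion — can be reproduced in Python (a sketch of the slides' fluid model; function and variable names are mine):

```python
from collections import deque

def ps_virtual_times(packets, C=1.0):
    """Fluid Processor Sharing: simulate v(t) (slope 1/N(t)) and apply
    the recursion s = max(f_prev, v(t_arrival)), f = s + L/C.
    packets: list of (arrival_time, length, queue_id)."""
    packets = sorted(packets)
    queues = {}          # queue_id -> deque of remaining packet lengths
    last_f = {}          # queue_id -> virtual finish of its previous packet
    t, v, i, out = 0.0, 0.0, 0, []
    while i < len(packets) or any(queues.values()):
        n = sum(1 for q in queues.values() if q)
        t_arr = packets[i][0] if i < len(packets) else float("inf")
        if n == 0:
            t = t_arr                    # server idle: jump to next arrival
        else:
            # next head-of-queue departure at per-queue rate C/n
            t_dep = min(t + q[0] * n / C for q in queues.values() if q)
            t_next = min(t_arr, t_dep)
            served = (t_next - t) * C / n
            v += (t_next - t) / n        # virtual time advances at slope 1/n
            for q in queues.values():
                if q:
                    q[0] -= served
                    if q[0] <= 1e-9:
                        q.popleft()      # head packet completes
            t = t_next
        while i < len(packets) and packets[i][0] <= t + 1e-9:
            _, L, k = packets[i]
            s = max(last_f.get(k, 0.0), v)
            f = s + L / C
            last_f[k] = f
            out.append((k, s, f))
            queues.setdefault(k, deque()).append(L)
            i += 1
    return out
```

Fed with the example's arrivals, it returns exactly the (queue, s, f) triples listed above.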
Packet-based Processor Sharing
• Real-life schedulers are not able to serve infinitesimal
amounts of data
• A packet-based scheduling policy that approximates the
PS scheme can be implemented as follows
– compute virtual starting and ending times as in the ideal PS
– when a packet finishes its service, next packet to be serviced is
the one with smallest virtual finishing time
– In our example, packets end up being serviced in the order:
s1,1 = 0, f1,1 = 3
s2,1 = 1, f2,1 = 2
s1,2 = 3, f1,2 = 4
s3,1 = 2, f3,1 = 5
s2,2 = 2, f2,2 = 6
(figure: timeline comparing the ideal PS schedule with the packet-based PS schedule)
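Given the virtual finishing times from the ideal PS computation, the packet-based policy can be sketched as follows (names are mine; the virtual finish values in the usage example are taken from the slides' example):

```python
import heapq

def packet_ps_order(packets, C=1.0):
    """Packet-based PS: packets are (arrival_t, length, flow, virtual_finish).
    Whenever the server frees up, transmit the queued packet with the
    smallest virtual finishing time."""
    pkts = sorted(packets)
    order, heap, t, i = [], [], 0.0, 0
    while i < len(pkts) or heap:
        if not heap and pkts[i][0] > t:
            t = pkts[i][0]                  # server idle until next arrival
        while i < len(pkts) and pkts[i][0] <= t:
            _, L, k, f = pkts[i]
            heapq.heappush(heap, (f, k, L))
            i += 1
        f, k, L = heapq.heappop(heap)
        order.append((k, f))
        t += L / C                          # serve the whole packet
    return order
```

For the example — packets (0, 3, 1, f=3), (1, 1, 2, f=2), (2, 1, 1, f=4), (2, 4, 2, f=6), (3, 3, 3, f=5) — the resulting flow service order is 1, 2, 1, 3, 2, as on the slide.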
Comparison with FIFO and FQ
• Load = capacity
• PS gives preference to short packets → average delay of
flows 2 and 3 is smaller than with FIFO
Comparison with FIFO and FQ
• Load > capacity → queue builds up
• FQ: flow 2 is penalized, although it has the same load as flow 1
• PS gives preference to short packets and it’s fair
Generalized Processor Sharing
• PS and its packet-level equivalent provide Max-Min
fairness but do not differentiate resource allocation
• Generalized Processor Sharing (GPS) is capable of
differentiation by associating a weight to each flow
• Each nonempty queue is serviced for an infinitesimal
amount of data in proportion to the associated weight
– server capacity is split among nonempty queues according to
weights
• GPS provides an exact Max-Min weighted fair share
allocation
GPS: Formal Definition
• GPS is a work conserving scheduler operating at fixed
capacity C over K flows
• Each flow k is characterized by a positive real weight wk
• Wk(t1, t2) is the amount of traffic from flow k serviced in the
interval (t1, t2]
• A GPS server is defined as a server for which
Wk(t1, t2) / Wj(t1, t2) ≥ wk / wj , j = 1, 2, …, K
for any flow k that is backlogged (i.e. has a nonempty
queue) throughout (t1, t2]
GPS: Flow Rate
• Summing the GPS inequality over all flows j gives
Wk(t1, t2) ≥ wk / (Σj wj) · C · (t2 − t1)
• Each backlogged flow thus receives a service rate of at least
gk = wk · C / Σj wj
• If a source rate is limited to gk, then such rate is always
guaranteed, independently of the other flows
• B(t): set of backlogged queues at time t
• The service rate seen by flow k when B(t) ≠ ∅ is
rk(t) = wk · C / Σj∈B(t) wj
with the sum taken over all backlogged queues
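The instantaneous rate formula translates directly into code (a trivial sketch; names are mine):

```python
def gps_rates(weights, backlogged, C):
    """Instantaneous GPS service rates: capacity C is split among the
    backlogged flows in proportion to their weights; idle flows get 0."""
    w_active = sum(w for w, b in zip(weights, backlogged) if b)
    return [w * C / w_active if b else 0.0
            for w, b in zip(weights, backlogged)]
```

With weights 1, 2, 5 and only the first and third flows backlogged, `gps_rates([1, 2, 5], [True, False, True], 6.0)` splits the capacity 6 as `[1.0, 0.0, 5.0]`.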
GPS: Virtual Time
• Now the virtual time definition must consider weights
– only backlogged queues contribute to the virtual time computation
– the sum of the weights of backlogged queues may change at
arrival and departure times only
• v(t): continuous piecewise linear function of t with slope
1 / Σj∈B(t) wj
• The actual arrival time of packet i of flow k is tk,i
• Its length is Lk,i
• Virtual starting and ending transmission times are
sk,i = max [ fk,i–1 , v(tk,i) ] , fk,i = sk,i + Lk,i / (wk · C)
GPS Emulation: Weighted Fair Queuing
• It is a packet-based GPS
– compute virtual starting and ending times as in the
ideal GPS
– when a packet finishes its service, next packet to be
serviced is the one with smallest virtual finishing time
• Increased computational complexity, since the
scheduler
– must keep track of the backlogged flows and their
weights
– must compute virtual finishing times and sort them
– must update virtual time at arrival and departure
events
Comparison with FIFO
(figure: WFQ vs FIFO packet schedules)
• Flow 1 now sends packets at its guaranteed rate of 0.5
Weighted Fair Queuing: Delay Bound
• WFQ approximates GPS at packet level
– Max-Min weighted fair share achieved
– weights allow service differentiation
• WFQ also keeps queuing delay bounded for
well-behaved flows
• Assume each flow k is shaped with a token bucket scheme, with
token bucket parameters (σk, ρk)
• Choose weights proportional to the token rates ρk
• Assume all buckets are full of tokens when transmission
starts from all flows (worst case)
– each flow sends a burst σk and then keeps transmitting at rate ρk
Weighted Fair Queuing: Delay Bound
• In case of GPS, each queue is filled up to σk and is
both serviced and fed at rate ρk
• Since the queue size cannot exceed σk, the maximum delay
experienced by packets of flow k is σk / ρk
• In case of WFQ, servicing whole packets increases
the delay, but it is still bounded
• The end-to-end queuing delay of flow k through H WFQ
hops is actually bounded
Weighted Fair Queuing: Delay Bound
Dk ≤ σk / ρk + (H − 1) · Lk,max / ρk + Σh=1..H Lmax / Ch
• Lk,max: maximum packet size of flow k
• Lmax: maximum packet size over all flows
• Ch: server capacity at node h
– σk / ρk: maximum delay at the first node due to burst
transmission (same as GPS)
– (H − 1) · Lk,max / ρk: maximum delay when a packet arrives at
each node just after its supposed starting time and must wait
for the preceding packet
– Σh Lmax / Ch: maximum delay due to packet-level service:
each arriving packet must wait for the one currently in service
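Plugging hypothetical numbers into the bound gives a feel for the three terms (all values below are made up for illustration, not from the slides):

```python
# Evaluate the WFQ end-to-end delay bound
#   D_k <= sigma_k/rho_k + (H-1)*L_k_max/rho_k + sum_h L_max/C_h
sigma_k = 16_000          # flow k burst size [bits]
rho_k = 1_000_000         # flow k token rate [bit/s]
L_k_max = 12_000          # largest packet of flow k [bits]
L_max = 12_000            # largest packet of any flow [bits]
C = [10_000_000] * 3      # capacity of each of the H = 3 hops [bit/s]
H = len(C)

D_k = sigma_k / rho_k + (H - 1) * L_k_max / rho_k + sum(L_max / c for c in C)
print(f"worst-case queuing delay: {D_k * 1000:.1f} ms")
# prints: worst-case queuing delay: 43.6 ms
```

The burst term (16 ms) and the per-hop packet-wait term (24 ms) dominate; the store-and-forward term adds only 3.6 ms on fast links.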
GPS Emulation: Weighted Round Robin
• Each flow is characterized by its
– weight
– average packet size
• The server divides each flow's weight by its mean packet
size to obtain a normalized set of weights
• Normalized weights are used to determine the number of
packets serviced from each connection in each round
• Simpler scheduler than WFQ
• Mean packet size may not be known a priori
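The weight normalization can be sketched as follows (an illustrative sketch; names and the rounding choice are mine):

```python
def wrr_packets_per_round(weights, mean_pkt_sizes):
    """Divide each flow's weight by its mean packet size, then scale the
    normalized weights to (roughly) integer packet counts per round."""
    norm = [w / s for w, s in zip(weights, mean_pkt_sizes)]
    scale = 1.0 / min(norm)   # smallest flow serves one packet per round
    return [round(n * scale) for n in norm]
```

Two flows with equal weight but mean packet sizes of 1000 and 500 bytes get 1 and 2 packets per round respectively, equalizing their bandwidth shares.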
GPS Emulation: Deficit Round Robin
• DRR modifies WRR to handle variable packet sizes
without knowing the mean packet size of each flow in
advance
– DRR associates each flow with a “deficit counter”
– in each turn, the server attempts to serve one “quantum" worth of
bits from each visited flow
• A packet is serviced if the sum of the deficit counter and
the quantum is at least the packet size
– if the packet receives service, the deficit counter becomes the
old counter plus the quantum, minus the packet size
– if the packet does not receive service, the quantum is simply
added to the deficit counter, saving credit for the next round
• During a visit, if the connection is idle, its deficit counter is
reset to 0
– prevents a connection from accumulating service credit while idle
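A minimal DRR sketch in Python (names are mine; the quantum and packet sizes in the usage example are arbitrary):

```python
from collections import deque

def drr(flows, quantum):
    """Deficit Round Robin: flows is a list of deques of packet sizes.
    Returns the (flow, packet_size) service order."""
    deficit = [0] * len(flows)
    order = []
    while any(flows):
        for i, q in enumerate(flows):
            if not q:
                deficit[i] = 0          # idle flow: no credit accumulation
                continue
            deficit[i] += quantum       # one quantum per visit
            while q and q[0] <= deficit[i]:
                pkt = q.popleft()
                deficit[i] -= pkt       # pay for the serviced packet
                order.append((i, pkt))
    return order
```

With quantum 500 and flows holding packets [600, 400], [1000] and [200], the first round serves only the 200-byte packet (the other heads exceed one quantum); in the second round the saved deficits cover 600, 400 and 1000.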
Scheduling and admission control
• Given a set of currently admitted connections and a
descriptor for a new connection request, the network
must verify whether it is able to meet performance
requirements of new connection
• It is also desirable that the admission policy does not
lead to network underutilization
• A scheduler defines its “schedulable region” as the set of
all possible combinations of performance bounds that it
can simultaneously meet
– resources are finite, so a server can only provide performance
bounds to a finite number of connections
• New connection requests are matched with the
schedulable region of relevant nodes
Scheduling and admission control
• Schedulable region is a technique for efficient admission
control and a way to measure the efficiency of a
scheduling discipline
– given a schedulable region, admission control consists of
checking whether the resulting combination of connection
parameters lies within the schedulable region or not
– the larger the schedulable region, the more efficient the
scheduling policy
(figure: schedulable region in the plane of number of class-1 requests vs number of class-2 requests, bounded by the points (C1MAX, 0), (0, C2MAX) and (C1, C2))

More Related Content

What's hot

Delays in packet switch network
Delays in packet switch networkDelays in packet switch network
Delays in packet switch networkShanza Sohail
 
Traffic Characterization
Traffic CharacterizationTraffic Characterization
Traffic CharacterizationIsmail Mukiibi
 
IJCER (www.ijceronline.com) International Journal of computational Engineerin...
IJCER (www.ijceronline.com) International Journal of computational Engineerin...IJCER (www.ijceronline.com) International Journal of computational Engineerin...
IJCER (www.ijceronline.com) International Journal of computational Engineerin...ijceronline
 
Qos Quality of services
Qos   Quality of services Qos   Quality of services
Qos Quality of services HayderThary
 
Congestion control and quality of services
Congestion control and quality of servicesCongestion control and quality of services
Congestion control and quality of servicesJawad Ghumman
 
NZNOG 2020: Buffers, Buffer Bloat and BBR
NZNOG 2020: Buffers, Buffer Bloat and BBRNZNOG 2020: Buffers, Buffer Bloat and BBR
NZNOG 2020: Buffers, Buffer Bloat and BBRAPNIC
 
Leaky Bucket & Tocken Bucket - Traffic shaping
Leaky Bucket & Tocken Bucket - Traffic shapingLeaky Bucket & Tocken Bucket - Traffic shaping
Leaky Bucket & Tocken Bucket - Traffic shapingVimal Dewangan
 
Congetion Control.pptx
Congetion Control.pptxCongetion Control.pptx
Congetion Control.pptxNaveen Dubey
 
Congestion Control in Networks
Congestion Control in NetworksCongestion Control in Networks
Congestion Control in Networksrapatil
 
QoS (quality of service)
QoS (quality of service)QoS (quality of service)
QoS (quality of service)Sri Safrina
 
Quality of service(qos) by M.BILAL.SATTI
Quality of service(qos) by M.BILAL.SATTIQuality of service(qos) by M.BILAL.SATTI
Quality of service(qos) by M.BILAL.SATTIMuhammad Bilal Satti
 

What's hot (20)

Congestion control and quality of service
Congestion control and quality of serviceCongestion control and quality of service
Congestion control and quality of service
 
Delays in packet switch network
Delays in packet switch networkDelays in packet switch network
Delays in packet switch network
 
Traffic Characterization
Traffic CharacterizationTraffic Characterization
Traffic Characterization
 
IJCER (www.ijceronline.com) International Journal of computational Engineerin...
IJCER (www.ijceronline.com) International Journal of computational Engineerin...IJCER (www.ijceronline.com) International Journal of computational Engineerin...
IJCER (www.ijceronline.com) International Journal of computational Engineerin...
 
Quality of Service
Quality  of  ServiceQuality  of  Service
Quality of Service
 
Qos Quality of services
Qos   Quality of services Qos   Quality of services
Qos Quality of services
 
Chap24
Chap24Chap24
Chap24
 
Quality of service
Quality of serviceQuality of service
Quality of service
 
Congestion Control
Congestion ControlCongestion Control
Congestion Control
 
Congestion control and quality of services
Congestion control and quality of servicesCongestion control and quality of services
Congestion control and quality of services
 
Quality of Service
Quality of ServiceQuality of Service
Quality of Service
 
NZNOG 2020: Buffers, Buffer Bloat and BBR
NZNOG 2020: Buffers, Buffer Bloat and BBRNZNOG 2020: Buffers, Buffer Bloat and BBR
NZNOG 2020: Buffers, Buffer Bloat and BBR
 
Leaky Bucket & Tocken Bucket - Traffic shaping
Leaky Bucket & Tocken Bucket - Traffic shapingLeaky Bucket & Tocken Bucket - Traffic shaping
Leaky Bucket & Tocken Bucket - Traffic shaping
 
Dcn lecture 3
Dcn lecture 3Dcn lecture 3
Dcn lecture 3
 
Congetion Control.pptx
Congetion Control.pptxCongetion Control.pptx
Congetion Control.pptx
 
Congestion control in TCP
Congestion control in TCPCongestion control in TCP
Congestion control in TCP
 
Congestion Control in Networks
Congestion Control in NetworksCongestion Control in Networks
Congestion Control in Networks
 
QoS (quality of service)
QoS (quality of service)QoS (quality of service)
QoS (quality of service)
 
Quality of service
Quality of serviceQuality of service
Quality of service
 
Quality of service(qos) by M.BILAL.SATTI
Quality of service(qos) by M.BILAL.SATTIQuality of service(qos) by M.BILAL.SATTI
Quality of service(qos) by M.BILAL.SATTI
 

Viewers also liked

Multicasting and multicast routing protocols
Multicasting and multicast routing protocolsMulticasting and multicast routing protocols
Multicasting and multicast routing protocolsAbhishek Kesharwani
 
Presentation, Firewalls
Presentation, FirewallsPresentation, Firewalls
Presentation, Firewallskkkseld
 
Multicast vs unicast diagram
Multicast vs unicast diagramMulticast vs unicast diagram
Multicast vs unicast diagraminternetstreams
 
Distance vector and link state routing protocol
Distance vector and link state routing protocolDistance vector and link state routing protocol
Distance vector and link state routing protocolCCNAStudyGuide
 
Routing Information Protocol (RIP)
Routing Information Protocol(RIP)Routing Information Protocol(RIP)
Routing Information Protocol (RIP)waqasahmad1995
 
Classful and classless addressing
Classful and classless addressingClassful and classless addressing
Classful and classless addressingSourav Jyoti Das
 
Classless addressing
Classless addressingClassless addressing
Classless addressingIqra Abbas
 
IPSec Overview
IPSec OverviewIPSec Overview
IPSec Overviewdavisli
 
Simple mail transfer protocol
Simple mail transfer protocolSimple mail transfer protocol
Simple mail transfer protocolAnagha Ghotkar
 
Protocolos- SMTP, POP3 e IMAP4
Protocolos- SMTP, POP3 e IMAP4Protocolos- SMTP, POP3 e IMAP4
Protocolos- SMTP, POP3 e IMAP4Felipe Weizenmann
 
Unicasting , Broadcasting And Multicasting New
Unicasting , Broadcasting And Multicasting NewUnicasting , Broadcasting And Multicasting New
Unicasting , Broadcasting And Multicasting Newtechbed
 
Unicast multicast & broadcast
Unicast multicast & broadcastUnicast multicast & broadcast
Unicast multicast & broadcastNetProtocol Xpert
 

Viewers also liked (20)

Firewalls
FirewallsFirewalls
Firewalls
 
Ip addressing classful
Ip addressing classfulIp addressing classful
Ip addressing classful
 
Multicasting and multicast routing protocols
Multicasting and multicast routing protocolsMulticasting and multicast routing protocols
Multicasting and multicast routing protocols
 
Presentation, Firewalls
Presentation, FirewallsPresentation, Firewalls
Presentation, Firewalls
 
Firewall
FirewallFirewall
Firewall
 
Multicast vs unicast diagram
Multicast vs unicast diagramMulticast vs unicast diagram
Multicast vs unicast diagram
 
Firewalls
FirewallsFirewalls
Firewalls
 
Distance vector and link state routing protocol
Distance vector and link state routing protocolDistance vector and link state routing protocol
Distance vector and link state routing protocol
 
Routing Information Protocol (RIP)
Routing Information Protocol(RIP)Routing Information Protocol(RIP)
Routing Information Protocol (RIP)
 
Smtp, pop3, imapv 4
Smtp, pop3, imapv 4Smtp, pop3, imapv 4
Smtp, pop3, imapv 4
 
Classful and classless addressing
Classful and classless addressingClassful and classless addressing
Classful and classless addressing
 
Firewall
FirewallFirewall
Firewall
 
Classless addressing
Classless addressingClassless addressing
Classless addressing
 
IPSec Overview
IPSec OverviewIPSec Overview
IPSec Overview
 
Ipsec
IpsecIpsec
Ipsec
 
Simple mail transfer protocol
Simple mail transfer protocolSimple mail transfer protocol
Simple mail transfer protocol
 
Protocolos- SMTP, POP3 e IMAP4
Protocolos- SMTP, POP3 e IMAP4Protocolos- SMTP, POP3 e IMAP4
Protocolos- SMTP, POP3 e IMAP4
 
Classless subnetting
Classless subnettingClassless subnetting
Classless subnetting
 
Unicasting , Broadcasting And Multicasting New
Unicasting , Broadcasting And Multicasting NewUnicasting , Broadcasting And Multicasting New
Unicasting , Broadcasting And Multicasting New
 
Unicast multicast & broadcast
Unicast multicast & broadcastUnicast multicast & broadcast
Unicast multicast & broadcast
 

Similar to Schedulling

Congestion control
Congestion controlCongestion control
Congestion controlAman Jaiswal
 
MULTIMEDIA COMMUNICATION & NETWORKS
MULTIMEDIA COMMUNICATION & NETWORKSMULTIMEDIA COMMUNICATION & NETWORKS
MULTIMEDIA COMMUNICATION & NETWORKSKathirvel Ayyaswamy
 
RIPE 80: Buffers and Protocols
RIPE 80: Buffers and ProtocolsRIPE 80: Buffers and Protocols
RIPE 80: Buffers and ProtocolsAPNIC
 
integrated and diffrentiated services
 integrated and diffrentiated services integrated and diffrentiated services
integrated and diffrentiated servicesRishabh Gupta
 
RIPE 76: TCP and BBR
RIPE 76: TCP and BBRRIPE 76: TCP and BBR
RIPE 76: TCP and BBRAPNIC
 
CN Module 5 part 2 2022.pdf
CN Module 5 part 2 2022.pdfCN Module 5 part 2 2022.pdf
CN Module 5 part 2 2022.pdfMayankRaj687571
 
AusNOG 2019: TCP and BBR
AusNOG 2019: TCP and BBRAusNOG 2019: TCP and BBR
AusNOG 2019: TCP and BBRAPNIC
 
Module 2.pptx.............sdvsdcdssdfsdf
Module 2.pptx.............sdvsdcdssdfsdfModule 2.pptx.............sdvsdcdssdfsdf
Module 2.pptx.............sdvsdcdssdfsdfShivakrishnan18
 
An Energy Aware QOS Routing Protocol
An Energy Aware QOS Routing ProtocolAn Energy Aware QOS Routing Protocol
An Energy Aware QOS Routing Protocoljaimin_m_raval
 
An energy aware qos routing protocol
An energy aware qos routing protocolAn energy aware qos routing protocol
An energy aware qos routing protocoljaimin_m_raval
 
UNIT-3 network security layers andits types
UNIT-3 network security layers andits typesUNIT-3 network security layers andits types
UNIT-3 network security layers andits typesgjeyasriitaamecnew
 

Similar to Schedulling (20)

QoSintro.PPT
QoSintro.PPTQoSintro.PPT
QoSintro.PPT
 
qos-f05 (2).ppt
qos-f05 (2).pptqos-f05 (2).ppt
qos-f05 (2).ppt
 
qos-f05 (3).ppt
qos-f05 (3).pptqos-f05 (3).ppt
qos-f05 (3).ppt
 
qos-f05.pdf
qos-f05.pdfqos-f05.pdf
qos-f05.pdf
 
qos-f05.ppt
qos-f05.pptqos-f05.ppt
qos-f05.ppt
 
Congestion control
Congestion controlCongestion control
Congestion control
 
MULTIMEDIA COMMUNICATION & NETWORKS
MULTIMEDIA COMMUNICATION & NETWORKSMULTIMEDIA COMMUNICATION & NETWORKS
MULTIMEDIA COMMUNICATION & NETWORKS
 
RIPE 80: Buffers and Protocols
RIPE 80: Buffers and ProtocolsRIPE 80: Buffers and Protocols
RIPE 80: Buffers and Protocols
 
Qo s 09-integrated and red
Qo s 09-integrated and redQo s 09-integrated and red
Qo s 09-integrated and red
 
integrated and diffrentiated services
 integrated and diffrentiated services integrated and diffrentiated services
integrated and diffrentiated services
 
SGSGS
SGSGSSGSGS
SGSGS
 
9_Network.ppt
9_Network.ppt9_Network.ppt
9_Network.ppt
 
RIPE 76: TCP and BBR
RIPE 76: TCP and BBRRIPE 76: TCP and BBR
RIPE 76: TCP and BBR
 
CN Module 5 part 2 2022.pdf
CN Module 5 part 2 2022.pdfCN Module 5 part 2 2022.pdf
CN Module 5 part 2 2022.pdf
 
AusNOG 2019: TCP and BBR
AusNOG 2019: TCP and BBRAusNOG 2019: TCP and BBR
AusNOG 2019: TCP and BBR
 
Multimedia networks
Multimedia networksMultimedia networks
Multimedia networks
 
Module 2.pptx.............sdvsdcdssdfsdf
Module 2.pptx.............sdvsdcdssdfsdfModule 2.pptx.............sdvsdcdssdfsdf
Module 2.pptx.............sdvsdcdssdfsdf
 
An Energy Aware QOS Routing Protocol
An Energy Aware QOS Routing ProtocolAn Energy Aware QOS Routing Protocol
An Energy Aware QOS Routing Protocol
 
An energy aware qos routing protocol
An energy aware qos routing protocolAn energy aware qos routing protocol
An energy aware qos routing protocol
 
UNIT-3 network security layers andits types
UNIT-3 network security layers andits typesUNIT-3 network security layers andits types
UNIT-3 network security layers andits types
 

More from Abhishek Kesharwani

Unit 1 web technology uptu slide
Unit 1 web technology uptu slideUnit 1 web technology uptu slide
Unit 1 web technology uptu slideAbhishek Kesharwani
 
Unit1 Web Technology UPTU UNIT 1
Unit1 Web Technology UPTU UNIT 1 Unit1 Web Technology UPTU UNIT 1
Unit1 Web Technology UPTU UNIT 1 Abhishek Kesharwani
 
Mtech syllabus computer science uptu
Mtech syllabus computer science uptu Mtech syllabus computer science uptu
Mtech syllabus computer science uptu Abhishek Kesharwani
 

More from Abhishek Kesharwani (20)

uptu web technology unit 2 html
uptu web technology unit 2 htmluptu web technology unit 2 html
uptu web technology unit 2 html
 
uptu web technology unit 2 html
uptu web technology unit 2 htmluptu web technology unit 2 html
uptu web technology unit 2 html
 
uptu web technology unit 2 html
uptu web technology unit 2 htmluptu web technology unit 2 html
uptu web technology unit 2 html
 
uptu web technology unit 2 html
uptu web technology unit 2 htmluptu web technology unit 2 html
uptu web technology unit 2 html
 
uptu web technology unit 2 html
uptu web technology unit 2 htmluptu web technology unit 2 html
uptu web technology unit 2 html
 
uptu web technology unit 2 Css
uptu web technology unit 2 Cssuptu web technology unit 2 Css
uptu web technology unit 2 Css
 
uptu web technology unit 2 Css
uptu web technology unit 2 Cssuptu web technology unit 2 Css
uptu web technology unit 2 Css
 
uptu web technology unit 2 Xml2
uptu web technology unit 2 Xml2uptu web technology unit 2 Xml2
uptu web technology unit 2 Xml2
 
uptu web technology unit 2 Xml2
uptu web technology unit 2 Xml2uptu web technology unit 2 Xml2
uptu web technology unit 2 Xml2
 
uptu web technology unit 2 Xml2
uptu web technology unit 2 Xml2uptu web technology unit 2 Xml2
uptu web technology unit 2 Xml2
 
uptu web technology unit 2 Xml2
uptu web technology unit 2 Xml2uptu web technology unit 2 Xml2
uptu web technology unit 2 Xml2
 
Unit 1 web technology uptu slide
Unit 1 web technology uptu slideUnit 1 web technology uptu slide
Unit 1 web technology uptu slide
 
Unit1 Web Technology UPTU UNIT 1
Unit1 Web Technology UPTU UNIT 1 Unit1 Web Technology UPTU UNIT 1
Unit1 Web Technology UPTU UNIT 1
 
Unit1 2
Unit1 2 Unit1 2
Unit1 2
 
Web Technology UPTU UNIT 1
Web Technology UPTU UNIT 1 Web Technology UPTU UNIT 1
Web Technology UPTU UNIT 1
 
Mtech syllabus computer science uptu
Mtech syllabus computer science uptu Mtech syllabus computer science uptu
Mtech syllabus computer science uptu
 
Wi max tutorial
Wi max tutorialWi max tutorial
Wi max tutorial
 
Virtual lan
Virtual lanVirtual lan
Virtual lan
 
Virtual lan
Virtual lanVirtual lan
Virtual lan
 
Tcp traffic control and red ecn
Tcp traffic control and red ecnTcp traffic control and red ecn
Tcp traffic control and red ecn
 

Schedulling

  • 1. Packet Scheduling for QoS management Part II TELCOM2321 – CS2520 Wide Area Networks Dr. Walter Cerroni University of Bologna – Italy Visiting Assistant Professor at SIS, Telecom Program Slides partly based on Dr. Znati’s material
  • 3. 28 Max-Min weighted fair share • Max-Min fair share assumes that all flows have equal right to resources • Different flows might have different rights due to different QoS requirements – e.g. voice and video flows require different bandwidth levels – customers of video service are willing to pay more to get the required bandwidth – they must have the same chances as voice customer to get what they need • Associate a weight to each demand to take into account different requirements – normalize demands with corresponding weight – use weights to compute fair share
  • 4. 29 Max-Min weighted fair share: Example • Demands: d1 = d2 = 10, d3 = d4 = 25, d5 = 50 • Bandwidth: C = 100 • Max-Min satisfies d1, d2, d3, and d4 while d5 gets 30 • If d5 has the same right as the others, use weighted Max-Min: 1. Weights: w1 = w2 = w3 = w4 = 1, w5 = 5 2. Weighted demands: d’1 = d’2 = 10, d’3 = d’4 = 25, d’5 = 50 / 5 = 10 3. FS = C / (sum of weights) = 100 / 9 = 11.11 4. d’1, d’2 and d’5 satisfied, d5 gets 50 5. Sum of weights of unsatisfied demands: M = w3 + w4 = 2 6. Bandwidth left: R = 100 – (10 + 10 + 50) = 30 7. FS = 30 / 2 = 15 8. d’3 and d’4 not satisfied they get 15
  • 5. 30 Fairness in Work Conserving Schedulers • If a scheduling policy guarantees fair resource sharing (e.g. Max-Min) it is also protective – a misbehaving source do not get more than fair share • Resources should be assigned based on QoS requirements – Max-Min weighted fair share – weights limited by conservation law • Fair work conserving scheduling policies – Fair Queuing – Processor Sharing – Generalized Processor Sharing – Weighted Fair Queuing
  • 6. 31 Fair Queuing • A router maintains multiple queues – separate queues for different traffic flows or classes – queues are serviced by a round-robin scheduler – empty queues are skipped over • Greedy connections face long delays, but other connections are not affected by it • Drawbacks – no service differentiation – the scheme penalizes flows with short packets ... 1 2 K
  • 7. 32 Processor Sharing • The server is able to serve multiple customers at the same time • Fluid model – traffic is infinitely divisible – service can be provided to infinitesimal amounts of data – the packet size does not count anymore – if N customer are served by server with capacity C, capacity seen by each customer is C / N – ideal model to provide an upper bound on the fairness of other schemes
  • 8. 33 Head-of-Queue Processor Sharing • Separate queues for different traffic flows or classes – head-of-queue packets are served from nonempty queues – following packets wait to be serviced – a nonempty queue is always served • Each queue is assigned a fair share of server capacity – when some flows become silent, their capacity is equally split among the other flows Max-Min fair share – independently of packet size ... 1 2 K 3 in service
  • 9. 34 Processor Sharing: Virtual time • 0 ≤ N(t) ≤ K, number of nonempty queues at time t • r(t) = C / N(t), capacity available to each nonempty queue when N(t) > 0 • tk,i, actual arrival time of packet i to queue k • Lk,i, length of packet i in queue k • Actual service time depends on r(t) – actual service time stretches or shrinks based on arrival and departure events • It is useful to define a virtual time scale v(t) – continuous piecewise linear function of t with slope 1/N(t) • Now it is possible to define – sk,i, virtual starting transmission time of packet i in queue k – fk,i, virtual ending transmission time of packet i in queue k
  • 10. 35 Processor Sharing: Example t2,2 = 2 L2,2 = 4 t1,2 = 2 L1,2 = 1 t3,1 = 3 L3,1 = 3 t2,1 = 1 L2,1 = 1 t1,1 = 0 L1,1 = 3 Queue 3Queue 2Queue 1 0 1 2 3 arrival time packet size 1 3 4 C = 1
  • 11. 36 Processor Sharing: Example t = N(t) = v(t) = 0 1 0 1 2 1 2 2 3 3 2 4 3 5 3 6 3 3 7 3 8 3 9 2 4 10 2 11 1 5 12 0 6 queue 1 queue 2 queue 3 t
  • 12. 37 Virtual starting and ending times Recursive definition • fk,i = sk,i + Lk,i/C • sk,i = max [ fk,i–1 , v(tk,i) ] In our example • s1,1 = 0, f1,1 = 3 • s2,1 = 1, f2,1 = 2 • s2,2 = 2, f2,2 = 6 • s3,1 = 2, f3,1 = 5 • s1,2 = 3, f1,2 = 4
  • 13. 38 Packet-based Processor Sharing • Real-life schedulers are not able to serve infinitesimal amounts of data • A packet-based scheduling policy that approximates the PS scheme can be implemented as follows – compute virtual starting and ending times as in the ideal PS – when a packet finishes its service, next packet to be serviced is the one with smallest virtual finishing time – In our example: s1,1 = 0, f1,1 = 3 s2,1 = 1, f2,1 = 2 s1,2 = 3, f1,2 = 4 s3,1 = 2, f3,1 = 5 s2,2 = 2, f2,2 = 6 ideal PS packet-based PS
  • 14. 39 Comparison with FIFO and FQ • Load = capacity • PS gives preference to short packets average delay of flows 2 and 3 smaller than FIFO
  • 15. 40 Comparison with FIFO and FQ • Load > capacity queue builds up • FQ: flow 2 is penalized, although has same load as flow 1 • PS gives preference to short packets and it’s fair
  • 16. 41 Generalized Processor Sharing • PS and its packet-level equivalent provide Max-Min fairness but do not differentiate resource allocation • Generalized Processor Sharing (GPS) is capable of differentiation by associating a weight to each flow • Each nonempty queue is serviced for an infinitesimal amount of data in proportion to the associated weight – server capacity is split among nonempty queues according to weights • GPS provides an exact Max-Min weighted fair share allocation
  • 17. 42 GPS: Formal Definition • GPS is a work conserving scheduler operating at fixed capacity C over K flows • Each flow is characterized by a positive real weight • is the amount of traffic from flow serviced in the interval • GPS server is defined as a server for which for any flow that is backlogged (i.e. has nonempty queue) throughout
  • 18. 43 GPS: Flow Rate • Summing over all flows • Each backlogged flow receives a service rate of • If a source rate is then such rate is always guaranteed independently of the other flows • number of backlogged queues at time • The service rate seen by flow when is with the sum taken over all backlogged queues
  • 19. 44 GPS: Virtual Time • Now virtual time definition must consider weights – only backlogged queues contribute to the virtual time computation – the sum of the weights of backlogged queues may change at arrival and departure times only • continuous piecewise linear function of t with slope • Actual arrival time of packet of flow is • Its length is • Virtual starting and ending transmission times are
45
GPS Emulation: Weighted Fair Queuing
• WFQ is a packet-based GPS
– compute virtual starting and ending times as in the ideal GPS
– when a packet finishes its service, the next packet to be serviced is the one with the smallest virtual finishing time
• Increased computational complexity since the scheduler
– must keep track of the backlogged flows and their weights
– must compute virtual finishing times and sort them
– must update virtual time at arrival and departure events
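The per-packet bookkeeping can be sketched as follows. This is a simplification: the virtual time V at each arrival is taken as an input rather than being integrated over backlog changes, and the function name is illustrative. The rule encoded is the weighted one: start = max(V at arrival, previous finish of the same flow), finish = start + length / weight.

```python
# WFQ virtual-time bookkeeping sketch (V at arrival given as input)
def virtual_times(V_arrival, length, weight, prev_finish=0.0):
    """Return (virtual start, virtual finish) of a packet."""
    start = max(V_arrival, prev_finish)
    return start, start + length / weight

# two back-to-back packets of the same flow with weight 2
s1, f1 = virtual_times(0.0, 100, 2)
s2, f2 = virtual_times(10.0, 100, 2, prev_finish=f1)
print(s1, f1, s2, f2)
```

Note how the second packet's virtual start is its predecessor's virtual finish, not its own arrival time: a backlogged flow's packets are stamped back to back in virtual time.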
47
Comparison with FIFO
• Flow 1 now sends packets at the guaranteed rate (0.5)
48
Weighted Fair Queuing: Delay Bound
• WFQ approximates GPS at packet level
– Max-Min weighted fair share achieved
– weights allow service differentiation
• WFQ also keeps queuing delay bounded for well-behaved flows
• Assume each flow is shaped with a token bucket scheme, with token bucket parameters (σi, ρi) for flow i
• Choose weights proportional to token rates
• Assume all buckets are full of tokens when transmission starts from all flows (worst case)
– each flow sends a burst σi and then keeps transmitting at rate ρi
49
Weighted Fair Queuing: Delay Bound
• In case of GPS, each queue is filled up to σi and is both serviced and fed at rate ρi
• Since the queue size cannot exceed σi, the maximum delay experienced by packets of flow i is
Di ≤ σi / ρi
• In case of WFQ, the effect of servicing packets increases the delay, but it is still bounded
• The end-to-end queuing delay of flow i through H WFQ hops is actually bounded
50
Weighted Fair Queuing: Delay Bound
Di ≤ σi / ρi + (H – 1) · Li / ρi + Σm=1..H Lmax / Cm
• Li: maximum packet size of flow i
• Lmax: maximum packet size across all flows
• Cm: server capacity at node m
• σi / ρi: maximum delay at the first node due to burst transmission (same as GPS)
• (H – 1) · Li / ρi: maximum delay when a packet arrives at each node just after its supposed starting time and must wait for the preceding packet
• Σ Lmax / Cm: maximum delay due to packet-level service: each arriving packet must wait for the one currently in service
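Plugging illustrative numbers into the bound above gives a feel for the three terms. All parameter values here are assumptions for the sake of the example: a voice-like flow with a 10 kbit burst and 1 Mbit/s token rate, 1500-byte packets, crossing three hops.

```python
# End-to-end WFQ delay bound sketch (illustrative numbers)
sigma, rho = 10_000.0, 1_000_000.0   # token bucket: bits, bits/s
L_i = L_max = 1_500 * 8              # max packet sizes, bits
H = 3                                # number of WFQ hops
C = [10e6, 10e6, 100e6]              # per-node capacities, bits/s

D = (sigma / rho                       # burst drain at the first node
     + (H - 1) * L_i / rho             # per-hop wait behind own packets
     + sum(L_max / C_m for C_m in C))  # packet in service at each node
print(round(D * 1e3, 2), "ms")
```

With these numbers the burst term (10 ms) and the (H – 1)·Li/ρi term (24 ms) dominate; the packet-in-service term contributes only a few milliseconds because the links are fast relative to a single packet.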
51
GPS Emulation: Weighted Round Robin
• Each flow is characterized by its
– weight
– average packet size
• The server divides each flow's weight by its mean packet size to obtain a normalized set of weights
• Normalized weights are used to determine the number of packets serviced from each connection in each round
• Simpler scheduler than WFQ
• Mean packet size may not be known a priori
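The normalization step can be sketched as follows, with illustrative flow names, weights, and mean packet sizes: dividing each weight by the mean packet size and rescaling gives the number of packets each flow gets per round.

```python
# WRR normalized weights sketch (illustrative flows)
weights = {"voice": 1, "video": 4, "data": 2}
mean_pkt = {"voice": 50, "video": 500, "data": 1500}  # bytes

# normalize by mean packet size, then scale so the smallest is 1 packet
norm = {f: weights[f] / mean_pkt[f] for f in weights}
base = min(norm.values())
per_round = {f: round(norm[f] / base) for f in norm}
print(per_round)
```

Without the normalization, a flow with large packets would receive more bits per round than its weight entitles it to; dividing by the mean packet size converts a per-packet schedule back into an (approximate) per-bit one.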
52
GPS Emulation: Deficit Round Robin
• DRR modifies WRR to handle variable packet sizes without knowing the mean packet size of each flow in advance
– DRR associates each flow with a "deficit counter"
– in each round, the server attempts to serve one "quantum" worth of bits from each visited flow
• A packet is serviced if the sum of the deficit counter and the quantum is larger than the packet size
– if the packet receives service, the deficit counter is reduced by the packet size
– if the packet does not receive service, a quantum is added to the deficit counter
• During a visit, if a connection is idle, its deficit counter is reset to 0
– prevents a connection from accumulating service credit while idle
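The round structure above can be sketched as a short simulation. The quantum and queue contents are illustrative; the loop implements the standard DRR visit: add a quantum to a backlogged flow's deficit counter, serve head-of-line packets while they fit, and reset the counter of an idle flow.

```python
# One-quantum-per-visit DRR sketch (illustrative queues, sizes in bytes)
from collections import deque

quantum = 500
queues = {"f1": deque([200, 700, 300]), "f2": deque([1000])}
deficit = {f: 0 for f in queues}

def drr_round(order):
    sent = []
    for f in order:
        if not queues[f]:
            deficit[f] = 0            # idle flow loses its credit
            continue
        deficit[f] += quantum         # one quantum per visit
        while queues[f] and queues[f][0] <= deficit[f]:
            pkt = queues[f].popleft()
            deficit[f] -= pkt         # pay for the serviced packet
            sent.append((f, pkt))
    return sent

round1 = drr_round(["f1", "f2"])
round2 = drr_round(["f1", "f2"])
print(round1, round2)
```

In round 1 only f1's 200-byte packet fits; f2's 1000-byte packet waits, but its unused 500-byte deficit carries over, so the packet goes out in round 2: large packets are delayed, not starved.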
53
Scheduling and admission control
• Given a set of currently admitted connections and a descriptor for a new connection request, the network must verify whether it is able to meet the performance requirements of the new connection
• It is also desirable that the admission policy does not lead to network underutilization
• A scheduler defines its "schedulable region" as the set of all possible combinations of performance bounds that it can simultaneously meet
– resources are finite, so a server can only provide performance bounds to a finite number of connections
• New connection requests are matched against the schedulable region of the relevant nodes
54
Scheduling and admission control
• The schedulable region is a technique for efficient admission control and a way to measure the efficiency of a scheduling discipline
– given a schedulable region, admission control consists of checking whether the resulting combination of connection parameters lies within the schedulable region or not
– the larger the schedulable region, the more efficient the scheduling policy
(figure: schedulable region in the plane of class-1 vs. class-2 request counts, delimited by the points (C1MAX, 0) and (0, C2MAX) and containing the operating point (C1, C2))