Copyright (c) 2020, Katsushi Kobayashi. All rights reserved.

A DRAM-friendly priority queue Internet packet scheduler implementation and its effects on TCP

Katsushi Kobayashi (ikob@acm.org)
1. Introduction
2. Hardware-based packet scheduler implementation
3. TCP behaviors with real end-systems
4. Deployment issues
5. Related work
6. Conclusion
Packet buffer on router

• Has a significant impact on application QoE.

• The preferred buffer size depends on the application:

• Throughput-centric flows: large buffers, up to the BDP.

• Legacy FTP, video streaming buffering at start.

• Latency-sensitive flows: as small as possible.

• VoIP, interactive Web, gaming.

• An ordinary FIFO cannot satisfy all of them:

• If large, the built-up queue worsens latency-sensitive applications, a.k.a. bufferbloat.

• If small, it suppresses rapid cwnd growth and reduces throughput.
AQMs against bufferbloat

Int-/Diff-serv among ISPs

• CoDel and PIE strike a compromise between throughput-centric and latency-sensitive applications while accepting TCP slow-start.

• Transient queue build-up still blocks latency-sensitive flows.

• The limit of a "one-size-fits-all" approach.

• Deploying Int-/Diff-serv demands an economic infrastructure in addition to updated network facilities [RFC5290].

• DetNet focuses on closed, controlled networks, NOT on the Internet.

• The best-effort service model should be used.
Latency Awareness on a Future Internet

• Satisfy various buffer-latency requirements within the best-effort service.

• Architecture: ends and networks work together:

• Applications indicate a latency limit in the IP header, e.g., ToS or DSCP.

• Routers schedule packets in an Earliest Deadline First (EDF) manner.

• Challenge: no resource management in the best-effort service.

• When a priority queue is congested, packets whose deadlines have elapsed block the entire queue and cause unbounded delays.
[Figure: Packets from flows with different latency limits (VoIP, interactive Web, OS update) sharing a queue; latency = t3 − t1.]
EDF with reneging (EDFR) Scheduler

• Operation:

• Dequeue the earliest-deadline packet.

• Forward it if its deadline has NOT elapsed; otherwise, discard it.

• No queue build-up, even when congested.

• Loss properties similar to a finite FIFO:

1. The overall loss rate vs. traffic intensity is similar to that of a finite FIFO whose size corresponds to the mean deadline [Kruk].

2. Loss-rate distributions are almost flat, except for very short deadlines.

➡ No significant impact expected on TCP flows.

• Confirmed by ns-2 simulations.
Kobayashi, K. "LAWIN: A Latency-AWare InterNet architecture for latency support
on best-effort networks." HPSR 2015.
Kruk, Łukasz, et al. "Heavy traffic analysis for EDF queues with reneging." The
Annals of Applied Probability 21.2 (2011).
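The dequeue rule above can be sketched in software. The heap here is for clarity only (the hardware implementation later avoids exactly this kind of random-access structure on DRAM); all names are illustrative.

```python
import heapq

class EDFRScheduler:
    """Earliest Deadline First with reneging (EDFR): a software sketch."""

    def __init__(self):
        self._heap = []  # entries: (absolute deadline, seq, packet)
        self._seq = 0    # tie-breaker to keep FIFO order within a deadline

    def enqueue(self, deadline, packet):
        heapq.heappush(self._heap, (deadline, self._seq, packet))
        self._seq += 1

    def dequeue(self, now):
        # Pop earliest-deadline packets; forward the first one whose
        # deadline has not elapsed, reneging (discarding) the rest.
        while self._heap:
            deadline, _, packet = heapq.heappop(self._heap)
            if deadline >= now:
                return packet
        return None
```

Reneging happens lazily at dequeue time, so the queue never builds up behind expired packets even under congestion.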
[Figure: Fraction of reneged packets vs. deadline for EDF with reneging in an M/M/1 system at ρ = 0.98, with deadlines drawn from U(1,B) and U(5,B) with mean D = 200. The system total matches the theoretical blocking rate of an M/M/1/N queue, (1−ρ)ρ^N / (1−ρ^(N+1)).]
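The M/M/1/N blocking rate quoted in the figure can be evaluated directly; a minimal sketch:

```python
def mm1n_blocking(rho: float, n: int) -> float:
    """Blocking probability of an M/M/1/N queue.

    rho is the offered load (lambda/mu) and n the system capacity in
    packets; the formula is (1 - rho) * rho**n / (1 - rho**(n + 1)),
    with the rho == 1 case taken as its limit, 1 / (n + 1).
    """
    if rho == 1.0:
        return 1.0 / (n + 1)
    return (1.0 - rho) * rho**n / (1.0 - rho**(n + 1))
```

For example, mm1n_blocking(0.98, 200) gives the system-total loss level that the theory curve in the figure predicts for a mean deadline of 200.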
Objectives

• Present the feasibility of a latency-aware Internet:

• Hardware-based EDFR packet scheduler implementations able to support 100 Gbps or more.

• Investigate how TCP behaviors change under EDFR, using real end-systems.
EDFR implementation

• DRAM is the only choice for the packet buffer:

• 1.25 GB for a 100 ms buffer on a 100 Gbps link.

• BW: 460 GB/s @HBM2, 20 GB/s @DDR4.

• Random-access latency: about 100 ns.

• We need a priority queue that regards the remaining time to the deadline as the priority.

• Many efficient priority-queue packet schedulers exist: Heap (O(log n)), Calendar queue (O(1)), ....

• All are incompatible with DRAM due to their random-access nature: a 64-byte packet arrives every 5 ns on a 100 Gbps link, far below DRAM's ~100 ns random-access latency.
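The sizing figures above follow directly from the line rate; a quick check:

```python
LINK_BPS = 100e9  # 100 Gbps link

# Buffer sizing: 100 ms of traffic at line rate.
buffer_bytes = LINK_BPS * 0.100 / 8      # 1.25e9 bytes = 1.25 GB

# Arrival budget for a minimum-size packet: 64 bytes at line rate.
per_64b_ns = 64 * 8 / LINK_BPS * 1e9     # ~5 ns, vs. ~100 ns DRAM random access
```

The ~20x gap between the 5 ns per-packet budget and DRAM's random-access latency is what rules out pointer-chasing priority queues in DRAM.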
Priority Queue: Multiple ring buffers + Priority encoder

• A naive, but pragmatic, implementation.

• The number of deadline classes is small (< 256):

• 8 bits for ToS, 6 bits for DSCP.

• Able to represent up to 256 ms with 1 ms granularity.

• Ring-buffer FIFO:

• Brings out DRAM bandwidth thanks to its sequential-access nature.

• Compatible with variable packet sizes.

• Priority encoder:

• Regards the per-packet deadline as the priority.

• Under EDFR, dropped packets consume memory bandwidth in addition to forwarded ones.
[Figure: Per-class packet buffers (ring buffers, Class 0 ... Class n) with wr_ptr/rd_ptr; received packets are enqueued by class, and the dequeue side either sends or drops each packet.]
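A minimal software model of the structure above, assuming for simplicity that the class index directly encodes urgency (smaller index = earlier deadline). Python stands in for the RTL, and a deque stands in for each DRAM ring buffer:

```python
from collections import deque

class MultiClassQueue:
    """Per-deadline-class FIFOs plus a priority encoder (sketch).

    Class i holds packets of deadline class i; the priority encoder is
    simply the index of the first nonempty FIFO, so the most urgent
    class is served first while each class keeps FIFO order.
    """

    def __init__(self, n_classes=256):
        self.fifos = [deque() for _ in range(n_classes)]

    def enqueue(self, deadline_class, packet):
        self.fifos[deadline_class].append(packet)

    def dequeue(self):
        # Priority encoder: pick the first nonempty class.
        for fifo in self.fifos:
            if fifo:
                return fifo.popleft()
        return None
```

Each class sees only sequential access, which is what lets the DRAM version sustain bandwidth despite the priority behavior.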
Skip-FIFO

• Two FIFOs, to reduce bandwidth wastage by dropped packets:

1. Packet data: a ring buffer with wr_ptr / rd_ptr.

2. Timestamp + pointer: updated at each skip interval:

• Input: enqueue (wr_ptr, T_now + deadline) at each skip_interval.

• Output: dequeue (elapsed_ptr, T_deadline) if T_deadline < T_now; then, if elapsed_ptr > rd_ptr, set rd_ptr = elapsed_ptr.

• Skip-FIFO + priority encoder:

• An approximate implementation of EDFR, whose accuracy depends on the skip interval.

• 12 μs with a 4K-word FIFO for 200 ms of buffer capacity,

<< the 10-15 ms update intervals of modern AQMs.
[Figure: Timestamp FIFO alongside the per-class packet ring buffers (Class 0 ... Class n). At each skip_interval, enqueue (wr_ptr, T_now + deadline); on dequeue of (elapsed_ptr, T_deadline) with the deadline elapsed, if elapsed_ptr > rd_ptr then rd_ptr = elapsed_ptr. Dequeued packets are either sent or dropped.]
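The pointer-skipping rule can be modeled in a few lines. This is a behavioral sketch only (names follow the slides; the ring buffer itself is elided, and only the pointers are tracked):

```python
from collections import deque

class SkipFIFO:
    """Skip-FIFO sketch: a small timestamp FIFO lets rd_ptr jump past
    regions of the packet ring buffer whose deadlines have elapsed, so
    dropped packets cost no DRAM read bandwidth.
    """

    def __init__(self):
        self.wr_ptr = 0
        self.rd_ptr = 0
        self.ts_fifo = deque()  # entries: (ptr, absolute deadline)

    def enqueue_mark(self, now, deadline):
        # Called once per skip interval: record where writes have
        # reached and when data up to that point expires.
        self.ts_fifo.append((self.wr_ptr, now + deadline))

    def advance(self, now):
        # Skip: pop marks whose deadline has elapsed, dragging rd_ptr
        # forward past the stale region without reading it.
        while self.ts_fifo and self.ts_fifo[0][1] < now:
            elapsed_ptr, _ = self.ts_fifo.popleft()
            if elapsed_ptr > self.rd_ptr:
                self.rd_ptr = elapsed_ptr
```

Because marks are written only once per skip interval, the approximation error is bounded by that interval (12 μs in the implementation above).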
DRAM-based EDFR on FPGAs

• Implementations:

• For TCP behaviors with real ends:

• Kintex-7 (28 nm), NetFPGA-CML: 512 MB DDR3, 4 ports x GbE.

• For throughputs:

• Xilinx Virtex UltraScale+ (16 nm):

• Alveo U280-ES: 8 GB HBM DRAM.

• AWS F1: 64 GB DDR4 DRAM.

• Consumes only 20% more LUTs than an ordinary ring-buffer FIFO.
FIFO controllers' resource utilization with 64-bit data width and 512-byte burst size:

Scheduler               BRAM (SRAM)   LUT    FF
Skip-FIFO               10            1746   655
FIFO                    2             1428   437
Virtual FIFO (Xilinx)   4             1169   1938
Skip-FIFO throughputs with SRAM, DDR4, HBM

• Constant regardless of packet size (including metadata).

• Increases with larger transactions (AXI-MM burst length):

• HBM: 39 Gbps @4KB burst, 1.8 Gbps @64B.

• DDR4: 60 Gbps @4KB, 2.7 Gbps @64B.

• Entire system: HBM >> DDR4, while a single HBM2 channel < DDR4:

• HBM: 1.2 Tbps @4KB (76% of theoretical max.).

• DDR4: 240 Gbps @4KB.
[Figure: Skip-FIFO bandwidth throughputs. (a) Single-channel throughputs; the edged bar areas eliminate metadata overhead. (b) Entire-system throughputs aggregating the available memory channels.]
TCP behaviors with EDFR on real end systems

• Emulation system:

• Network switch: NetFPGA-CML as a 4-port switch with an EDFR scheduler supporting 3 delay classes.

• Hosts: Ubuntu 18.04.

• Link delay: Linux NetEm.

• Traffic generator: Flowgrind.

• 3 evaluation scenarios:

1. Confirm per-deadline scheduling on EDFR.

2. Loss-throughput with Web-like traffic.

3. Throughputs of competing flows requesting different deadlines.

Follows the TCP evaluation suite [draft-irtf-iccrg-tcpeval-01].
1. Per-packet deadline support on EDFR

• Generate two 3x3 long-lived TCP CUBIC flow groups.

• Each 3x3 flow group has a different deadline, e.g., 30-100 ms.

• In the FIFO cases, buffer capacities are 100 ms.

• Figures show CDFs of queueing delay aggregated per deadline.

• Confirmed deadline support on EDFR.

• Note: (c) the 100-100 ms deadline pair is similar to (f) a shared FIFO rather than (e) dedicated FIFOs.
[Figure: A 6x6 dumbbell topology comprising two 3x3 flow groups, i.e., (T1...T6) and (R1...R6), with access-link delays of 0-75 ms. All links have a capacity of 1 Gbps.]
2. Loss and throughput with moderate load

• Generate two 3x3 flow groups.

• 3GPP HTTP model traffic instead of a real traffic trace.

• Consumes 80-90% of the bottleneck BW.

• Throughputs: no significant differences were found for any deadline combination.

• Loss: longer-deadline flows have slightly higher loss than shorter-deadline ones.

• This disagrees with the ns-2 simulations and EDFR's nature, but is not significant.
3. Flow Completion Time (FCT) with competing flows

• Generate two flows requesting different deadlines.

• Long-lived traffic.

• The 2nd flow starts after 100 s.

• Consumes 100% of the bottleneck BW.

• Upper bars:

• FCTs of 1.5 GB transfers with two flows of different deadlines.

• FCTs are almost equal for all deadline combinations.

• Lower plots:

• All FCTs grew steadily, as with FIFO.
[Figure: A dumbbell topology with two pairs of nodes, (T1, T2) and (T2, T3); all links have a capacity of 1 Gbps. FCT plots of 1.5 GB data transfers with two competing TCP CUBIC flows: (a) a 60-80 ms deadline pair on EDFR, and (b) on FIFO.]
TCP behaviors summary

• Existing TCP stacks will work well if ordinary FIFO schedulers are replaced with EDFR.

• Most properties observed with CUBIC were fully retained with Reno as well.
Latency-aware Internet deployment

• Applications:

• Can adopt latency support by simply calling the existing socket API to set the IP ToS field.

• Routers:

• Expected to reduce packet buffer size as a result of explicit per-flow deadline declarations.

• Economic infrastructure is NOT required, since it is a best-effort service.
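The application-side change can be this small. The deadline-to-DSCP encoding below (4 ms granularity in the 6-bit DSCP field) is a hypothetical illustration; the slides only state that the latency limit is carried in the ToS/DSCP field, not how it is encoded:

```python
import socket

def open_socket_with_deadline(deadline_ms: int) -> socket.socket:
    """Open a UDP socket whose packets carry a latency limit via DSCP.

    Hypothetical encoding: deadline in 4 ms steps, capped at the 6-bit
    DSCP maximum (63), shifted past the two ECN bits of the ToS byte.
    """
    dscp = min(deadline_ms // 4, 63)
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, dscp << 2)
    return s
```

No new API is needed: setsockopt with IP_TOS has been part of the standard sockets interface for decades, which is the point of the deployment argument.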
Related work

• CoDel and PIE reduce packet-buffer latency.

• The target delay is fixed.

• They allow transient queue build-up.

• The Least Slack Time First (LSTF) scheduler takes into account only buffered delay, as our architecture does.

• However, LSTF considers the cumulative buffered delay across hops, unlike our per-hop basis.
Conclusion

Future work

• Feasibility of a latency-aware Internet:

• A DRAM-friendly EDFR packet scheduler able to support 1 Tbps or more.

• TCP behaviors are almost unchanged under EDFR, as shown with real end-systems.

• Work together with emerging UDP-based transports:

• HTTP priorities can map to deadlines.

More Related Content

What's hot

Simulation and Performance Analysis of AODV using NS-2.34
Simulation and Performance Analysis of AODV using NS-2.34Simulation and Performance Analysis of AODV using NS-2.34
Simulation and Performance Analysis of AODV using NS-2.34Shaikhul Islam Chowdhury
 
Understanding DPDK algorithmics
Understanding DPDK algorithmicsUnderstanding DPDK algorithmics
Understanding DPDK algorithmicsDenys Haryachyy
 
Spanning Tree Protocol
Spanning Tree ProtocolSpanning Tree Protocol
Spanning Tree ProtocolManoj Gharate
 
Lec14 Computer Architecture by Hsien-Hsin Sean Lee Georgia Tech --- Coherence
Lec14 Computer Architecture by Hsien-Hsin Sean Lee Georgia Tech --- CoherenceLec14 Computer Architecture by Hsien-Hsin Sean Lee Georgia Tech --- Coherence
Lec14 Computer Architecture by Hsien-Hsin Sean Lee Georgia Tech --- CoherenceHsien-Hsin Sean Lee, Ph.D.
 
Example problems
Example problemsExample problems
Example problemsdeepakps22
 
Lec13 Computer Architecture by Hsien-Hsin Sean Lee Georgia Tech -- SMP
Lec13 Computer Architecture by Hsien-Hsin Sean Lee Georgia Tech -- SMPLec13 Computer Architecture by Hsien-Hsin Sean Lee Georgia Tech -- SMP
Lec13 Computer Architecture by Hsien-Hsin Sean Lee Georgia Tech -- SMPHsien-Hsin Sean Lee, Ph.D.
 
Difference b/w STP RSTP PVST & MSTP
Difference b/w STP RSTP PVST & MSTPDifference b/w STP RSTP PVST & MSTP
Difference b/w STP RSTP PVST & MSTPNetwax Lab
 
B2 b fc credits performance deadlocks
B2 b fc credits performance deadlocksB2 b fc credits performance deadlocks
B2 b fc credits performance deadlocksBarry Wright
 
RSTP (rapid spanning tree protocol)
RSTP (rapid spanning tree protocol)RSTP (rapid spanning tree protocol)
RSTP (rapid spanning tree protocol)Netwax Lab
 
All-Reduce and Prefix-Sum Operations
All-Reduce and Prefix-Sum Operations All-Reduce and Prefix-Sum Operations
All-Reduce and Prefix-Sum Operations Syed Zaid Irshad
 
Juniper mpls best practice part 1
Juniper mpls best practice   part 1Juniper mpls best practice   part 1
Juniper mpls best practice part 1Febrian ‎
 
CS4344 09/10 Lecture 10: Transport Protocol for Networked Games
CS4344 09/10 Lecture 10: Transport Protocol for Networked GamesCS4344 09/10 Lecture 10: Transport Protocol for Networked Games
CS4344 09/10 Lecture 10: Transport Protocol for Networked GamesWei Tsang Ooi
 
STP Protection
STP ProtectionSTP Protection
STP ProtectionNetwax Lab
 
3b multiple access
3b multiple access3b multiple access
3b multiple accesskavish dani
 

What's hot (20)

Tuning 17 march
Tuning 17 marchTuning 17 march
Tuning 17 march
 
Ns2
Ns2Ns2
Ns2
 
Introduction to MPLS - NANOG 61
Introduction to MPLS - NANOG 61Introduction to MPLS - NANOG 61
Introduction to MPLS - NANOG 61
 
Simulation and Performance Analysis of AODV using NS-2.34
Simulation and Performance Analysis of AODV using NS-2.34Simulation and Performance Analysis of AODV using NS-2.34
Simulation and Performance Analysis of AODV using NS-2.34
 
Alp Stp
Alp StpAlp Stp
Alp Stp
 
Understanding DPDK algorithmics
Understanding DPDK algorithmicsUnderstanding DPDK algorithmics
Understanding DPDK algorithmics
 
Spanning Tree Protocol
Spanning Tree ProtocolSpanning Tree Protocol
Spanning Tree Protocol
 
Lec14 Computer Architecture by Hsien-Hsin Sean Lee Georgia Tech --- Coherence
Lec14 Computer Architecture by Hsien-Hsin Sean Lee Georgia Tech --- CoherenceLec14 Computer Architecture by Hsien-Hsin Sean Lee Georgia Tech --- Coherence
Lec14 Computer Architecture by Hsien-Hsin Sean Lee Georgia Tech --- Coherence
 
Example problems
Example problemsExample problems
Example problems
 
Lec13 Computer Architecture by Hsien-Hsin Sean Lee Georgia Tech -- SMP
Lec13 Computer Architecture by Hsien-Hsin Sean Lee Georgia Tech -- SMPLec13 Computer Architecture by Hsien-Hsin Sean Lee Georgia Tech -- SMP
Lec13 Computer Architecture by Hsien-Hsin Sean Lee Georgia Tech -- SMP
 
Difference b/w STP RSTP PVST & MSTP
Difference b/w STP RSTP PVST & MSTPDifference b/w STP RSTP PVST & MSTP
Difference b/w STP RSTP PVST & MSTP
 
B2 b fc credits performance deadlocks
B2 b fc credits performance deadlocksB2 b fc credits performance deadlocks
B2 b fc credits performance deadlocks
 
RSTP (rapid spanning tree protocol)
RSTP (rapid spanning tree protocol)RSTP (rapid spanning tree protocol)
RSTP (rapid spanning tree protocol)
 
All-Reduce and Prefix-Sum Operations
All-Reduce and Prefix-Sum Operations All-Reduce and Prefix-Sum Operations
All-Reduce and Prefix-Sum Operations
 
Ns2pre
Ns2preNs2pre
Ns2pre
 
Juniper mpls best practice part 1
Juniper mpls best practice   part 1Juniper mpls best practice   part 1
Juniper mpls best practice part 1
 
Week3.1
Week3.1Week3.1
Week3.1
 
CS4344 09/10 Lecture 10: Transport Protocol for Networked Games
CS4344 09/10 Lecture 10: Transport Protocol for Networked GamesCS4344 09/10 Lecture 10: Transport Protocol for Networked Games
CS4344 09/10 Lecture 10: Transport Protocol for Networked Games
 
STP Protection
STP ProtectionSTP Protection
STP Protection
 
3b multiple access
3b multiple access3b multiple access
3b multiple access
 

Similar to A DRAM-friendly priority queue Internet packet scheduler implementation and its effects on TCP

LAWIN: a Latency-AWare InterNet Architecture for Latency Support on Best-Effo...
LAWIN: a Latency-AWare InterNet Architecture for Latency Support on Best-Effo...LAWIN: a Latency-AWare InterNet Architecture for Latency Support on Best-Effo...
LAWIN: a Latency-AWare InterNet Architecture for Latency Support on Best-Effo...Katsushi Kobayashi
 
PLNOG 13: Alexis Dacquay: Handling high-bandwidth-consumption applications in...
PLNOG 13: Alexis Dacquay: Handling high-bandwidth-consumption applications in...PLNOG 13: Alexis Dacquay: Handling high-bandwidth-consumption applications in...
PLNOG 13: Alexis Dacquay: Handling high-bandwidth-consumption applications in...PROIDEA
 
Computer network (5)
Computer network (5)Computer network (5)
Computer network (5)NYversity
 
Theta and the Future of Accelerator Programming
Theta and the Future of Accelerator ProgrammingTheta and the Future of Accelerator Programming
Theta and the Future of Accelerator Programminginside-BigData.com
 
Current state of IEEE 802.1 Time-Sensitive Networking Task Group Norman Finn,...
Current state of IEEE 802.1 Time-Sensitive Networking Task Group Norman Finn,...Current state of IEEE 802.1 Time-Sensitive Networking Task Group Norman Finn,...
Current state of IEEE 802.1 Time-Sensitive Networking Task Group Norman Finn,...Jörgen Gade
 
Multi protocol label switching (mpls)
Multi protocol label switching (mpls)Multi protocol label switching (mpls)
Multi protocol label switching (mpls)Online
 
6TiSCH @Telecom Bretagne 2015
6TiSCH @Telecom Bretagne 20156TiSCH @Telecom Bretagne 2015
6TiSCH @Telecom Bretagne 2015Pascal Thubert
 
RxNetty vs Tomcat Performance Results
RxNetty vs Tomcat Performance ResultsRxNetty vs Tomcat Performance Results
RxNetty vs Tomcat Performance ResultsBrendan Gregg
 
Designing TCP-Friendly Window-based Congestion Control
Designing TCP-Friendly Window-based Congestion ControlDesigning TCP-Friendly Window-based Congestion Control
Designing TCP-Friendly Window-based Congestion Controlsoohyunc
 
RIPE 80: Buffers and Protocols
RIPE 80: Buffers and ProtocolsRIPE 80: Buffers and Protocols
RIPE 80: Buffers and ProtocolsAPNIC
 
MEDIUM ACCESS CONTROL Sublayer IN CN.ppt
MEDIUM ACCESS CONTROL Sublayer IN CN.pptMEDIUM ACCESS CONTROL Sublayer IN CN.ppt
MEDIUM ACCESS CONTROL Sublayer IN CN.pptssuser35e92d
 
3-MACSublayer.ppt
3-MACSublayer.ppt3-MACSublayer.ppt
3-MACSublayer.pptDigiPlexus
 
lec 3 4 Core Delays Thruput Net Arch.ppt
lec 3 4 Core Delays Thruput Net Arch.pptlec 3 4 Core Delays Thruput Net Arch.ppt
lec 3 4 Core Delays Thruput Net Arch.pptMahamKhurram4
 
High-performance 32G Fibre Channel Module on MDS 9700 Directors:
High-performance 32G Fibre Channel Module on MDS 9700 Directors:High-performance 32G Fibre Channel Module on MDS 9700 Directors:
High-performance 32G Fibre Channel Module on MDS 9700 Directors:Tony Antony
 
100G Networking Berlin.pdf
100G Networking Berlin.pdf100G Networking Berlin.pdf
100G Networking Berlin.pdfJunZhao68
 
A Study on MPTCP for Tolerating Packet Reordering and Path Heterogeneity in W...
A Study on MPTCP for Tolerating Packet Reordering and Path Heterogeneity in W...A Study on MPTCP for Tolerating Packet Reordering and Path Heterogeneity in W...
A Study on MPTCP for Tolerating Packet Reordering and Path Heterogeneity in W...Communication Systems & Networks
 

Similar to A DRAM-friendly priority queue Internet packet scheduler implementation and its effects on TCP (20)

LAWIN: a Latency-AWare InterNet Architecture for Latency Support on Best-Effo...
LAWIN: a Latency-AWare InterNet Architecture for Latency Support on Best-Effo...LAWIN: a Latency-AWare InterNet Architecture for Latency Support on Best-Effo...
LAWIN: a Latency-AWare InterNet Architecture for Latency Support on Best-Effo...
 
PLNOG 13: Alexis Dacquay: Handling high-bandwidth-consumption applications in...
PLNOG 13: Alexis Dacquay: Handling high-bandwidth-consumption applications in...PLNOG 13: Alexis Dacquay: Handling high-bandwidth-consumption applications in...
PLNOG 13: Alexis Dacquay: Handling high-bandwidth-consumption applications in...
 
Computer network (5)
Computer network (5)Computer network (5)
Computer network (5)
 
Corralling Big Data at TACC
Corralling Big Data at TACCCorralling Big Data at TACC
Corralling Big Data at TACC
 
Theta and the Future of Accelerator Programming
Theta and the Future of Accelerator ProgrammingTheta and the Future of Accelerator Programming
Theta and the Future of Accelerator Programming
 
Current state of IEEE 802.1 Time-Sensitive Networking Task Group Norman Finn,...
Current state of IEEE 802.1 Time-Sensitive Networking Task Group Norman Finn,...Current state of IEEE 802.1 Time-Sensitive Networking Task Group Norman Finn,...
Current state of IEEE 802.1 Time-Sensitive Networking Task Group Norman Finn,...
 
QoSintro.PPT
QoSintro.PPTQoSintro.PPT
QoSintro.PPT
 
Multi protocol label switching (mpls)
Multi protocol label switching (mpls)Multi protocol label switching (mpls)
Multi protocol label switching (mpls)
 
ch5-network.ppt
ch5-network.pptch5-network.ppt
ch5-network.ppt
 
6TiSCH @Telecom Bretagne 2015
6TiSCH @Telecom Bretagne 20156TiSCH @Telecom Bretagne 2015
6TiSCH @Telecom Bretagne 2015
 
RxNetty vs Tomcat Performance Results
RxNetty vs Tomcat Performance ResultsRxNetty vs Tomcat Performance Results
RxNetty vs Tomcat Performance Results
 
Designing TCP-Friendly Window-based Congestion Control
Designing TCP-Friendly Window-based Congestion ControlDesigning TCP-Friendly Window-based Congestion Control
Designing TCP-Friendly Window-based Congestion Control
 
RIPE 80: Buffers and Protocols
RIPE 80: Buffers and ProtocolsRIPE 80: Buffers and Protocols
RIPE 80: Buffers and Protocols
 
MEDIUM ACCESS CONTROL Sublayer IN CN.ppt
MEDIUM ACCESS CONTROL Sublayer IN CN.pptMEDIUM ACCESS CONTROL Sublayer IN CN.ppt
MEDIUM ACCESS CONTROL Sublayer IN CN.ppt
 
3-MACSublayer.ppt
3-MACSublayer.ppt3-MACSublayer.ppt
3-MACSublayer.ppt
 
lec 3 4 Core Delays Thruput Net Arch.ppt
lec 3 4 Core Delays Thruput Net Arch.pptlec 3 4 Core Delays Thruput Net Arch.ppt
lec 3 4 Core Delays Thruput Net Arch.ppt
 
High-performance 32G Fibre Channel Module on MDS 9700 Directors:
High-performance 32G Fibre Channel Module on MDS 9700 Directors:High-performance 32G Fibre Channel Module on MDS 9700 Directors:
High-performance 32G Fibre Channel Module on MDS 9700 Directors:
 
Advanced networking - scheduling and QoS part 1
Advanced networking - scheduling and QoS part 1Advanced networking - scheduling and QoS part 1
Advanced networking - scheduling and QoS part 1
 
100G Networking Berlin.pdf
100G Networking Berlin.pdf100G Networking Berlin.pdf
100G Networking Berlin.pdf
 
A Study on MPTCP for Tolerating Packet Reordering and Path Heterogeneity in W...
A Study on MPTCP for Tolerating Packet Reordering and Path Heterogeneity in W...A Study on MPTCP for Tolerating Packet Reordering and Path Heterogeneity in W...
A Study on MPTCP for Tolerating Packet Reordering and Path Heterogeneity in W...
 

Recently uploaded

Hot Service (+9316020077 ) Goa Call Girls Real Photos and Genuine Service
Hot Service (+9316020077 ) Goa  Call Girls Real Photos and Genuine ServiceHot Service (+9316020077 ) Goa  Call Girls Real Photos and Genuine Service
Hot Service (+9316020077 ) Goa Call Girls Real Photos and Genuine Servicesexy call girls service in goa
 
FULL ENJOY Call Girls In Mayur Vihar Delhi Contact Us 8377087607
FULL ENJOY Call Girls In Mayur Vihar Delhi Contact Us 8377087607FULL ENJOY Call Girls In Mayur Vihar Delhi Contact Us 8377087607
FULL ENJOY Call Girls In Mayur Vihar Delhi Contact Us 8377087607dollysharma2066
 
DDoS In Oceania and the Pacific, presented by Dave Phelan at NZNOG 2024
DDoS In Oceania and the Pacific, presented by Dave Phelan at NZNOG 2024DDoS In Oceania and the Pacific, presented by Dave Phelan at NZNOG 2024
DDoS In Oceania and the Pacific, presented by Dave Phelan at NZNOG 2024APNIC
 
How is AI changing journalism? (v. April 2024)
How is AI changing journalism? (v. April 2024)How is AI changing journalism? (v. April 2024)
How is AI changing journalism? (v. April 2024)Damian Radcliffe
 
Call Girls South Delhi Delhi reach out to us at ☎ 9711199012
Call Girls South Delhi Delhi reach out to us at ☎ 9711199012Call Girls South Delhi Delhi reach out to us at ☎ 9711199012
Call Girls South Delhi Delhi reach out to us at ☎ 9711199012rehmti665
 
Chennai Call Girls Porur Phone 🍆 8250192130 👅 celebrity escorts service
Chennai Call Girls Porur Phone 🍆 8250192130 👅 celebrity escorts serviceChennai Call Girls Porur Phone 🍆 8250192130 👅 celebrity escorts service
Chennai Call Girls Porur Phone 🍆 8250192130 👅 celebrity escorts servicesonalikaur4
 
Call Girls In Defence Colony Delhi 💯Call Us 🔝8264348440🔝
Call Girls In Defence Colony Delhi 💯Call Us 🔝8264348440🔝Call Girls In Defence Colony Delhi 💯Call Us 🔝8264348440🔝
Call Girls In Defence Colony Delhi 💯Call Us 🔝8264348440🔝soniya singh
 
Best VIP Call Girls Noida Sector 75 Call Me: 8448380779
Best VIP Call Girls Noida Sector 75 Call Me: 8448380779Best VIP Call Girls Noida Sector 75 Call Me: 8448380779
Best VIP Call Girls Noida Sector 75 Call Me: 8448380779Delhi Call girls
 
On Starlink, presented by Geoff Huston at NZNOG 2024
On Starlink, presented by Geoff Huston at NZNOG 2024On Starlink, presented by Geoff Huston at NZNOG 2024
On Starlink, presented by Geoff Huston at NZNOG 2024APNIC
 
Call Girls Service Chandigarh Lucky ❤️ 7710465962 Independent Call Girls In C...
Call Girls Service Chandigarh Lucky ❤️ 7710465962 Independent Call Girls In C...Call Girls Service Chandigarh Lucky ❤️ 7710465962 Independent Call Girls In C...
Call Girls Service Chandigarh Lucky ❤️ 7710465962 Independent Call Girls In C...Sheetaleventcompany
 
VIP Kolkata Call Girl Salt Lake 👉 8250192130 Available With Room
VIP Kolkata Call Girl Salt Lake 👉 8250192130  Available With RoomVIP Kolkata Call Girl Salt Lake 👉 8250192130  Available With Room
VIP Kolkata Call Girl Salt Lake 👉 8250192130 Available With Roomishabajaj13
 
VIP Call Girls Kolkata Ananya 🤌 8250192130 🚀 Vip Call Girls Kolkata
VIP Call Girls Kolkata Ananya 🤌  8250192130 🚀 Vip Call Girls KolkataVIP Call Girls Kolkata Ananya 🤌  8250192130 🚀 Vip Call Girls Kolkata
VIP Call Girls Kolkata Ananya 🤌 8250192130 🚀 Vip Call Girls Kolkataanamikaraghav4
 
'Future Evolution of the Internet' delivered by Geoff Huston at Everything Op...
'Future Evolution of the Internet' delivered by Geoff Huston at Everything Op...'Future Evolution of the Internet' delivered by Geoff Huston at Everything Op...
'Future Evolution of the Internet' delivered by Geoff Huston at Everything Op...APNIC
 
VIP Kolkata Call Girls Salt Lake 8250192130 Available With Room
VIP Kolkata Call Girls Salt Lake 8250192130 Available With RoomVIP Kolkata Call Girls Salt Lake 8250192130 Available With Room
VIP Kolkata Call Girls Salt Lake 8250192130 Available With Roomgirls4nights
 
Gram Darshan PPT cyber rural in villages of india
Gram Darshan PPT cyber rural  in villages of indiaGram Darshan PPT cyber rural  in villages of india
Gram Darshan PPT cyber rural in villages of indiaimessage0108
 
₹5.5k {Cash Payment}New Friends Colony Call Girls In [Delhi NIHARIKA] 🔝|97111...
₹5.5k {Cash Payment}New Friends Colony Call Girls In [Delhi NIHARIKA] 🔝|97111...₹5.5k {Cash Payment}New Friends Colony Call Girls In [Delhi NIHARIKA] 🔝|97111...
₹5.5k {Cash Payment}New Friends Colony Call Girls In [Delhi NIHARIKA] 🔝|97111...Diya Sharma
 
VIP Kolkata Call Girl Kestopur 👉 8250192130 Available With Room
VIP Kolkata Call Girl Kestopur 👉 8250192130  Available With RoomVIP Kolkata Call Girl Kestopur 👉 8250192130  Available With Room
VIP Kolkata Call Girl Kestopur 👉 8250192130 Available With Roomdivyansh0kumar0
 

Recently uploaded (20)

Hot Service (+9316020077 ) Goa Call Girls Real Photos and Genuine Service
Hot Service (+9316020077 ) Goa  Call Girls Real Photos and Genuine ServiceHot Service (+9316020077 ) Goa  Call Girls Real Photos and Genuine Service
Hot Service (+9316020077 ) Goa Call Girls Real Photos and Genuine Service
 
FULL ENJOY Call Girls In Mayur Vihar Delhi Contact Us 8377087607
FULL ENJOY Call Girls In Mayur Vihar Delhi Contact Us 8377087607FULL ENJOY Call Girls In Mayur Vihar Delhi Contact Us 8377087607
FULL ENJOY Call Girls In Mayur Vihar Delhi Contact Us 8377087607
 
Dwarka Sector 26 Call Girls | Delhi | 9999965857 🫦 Vanshika Verma More Our Se...
Dwarka Sector 26 Call Girls | Delhi | 9999965857 🫦 Vanshika Verma More Our Se...Dwarka Sector 26 Call Girls | Delhi | 9999965857 🫦 Vanshika Verma More Our Se...
Dwarka Sector 26 Call Girls | Delhi | 9999965857 🫦 Vanshika Verma More Our Se...
 
DDoS In Oceania and the Pacific, presented by Dave Phelan at NZNOG 2024
DDoS In Oceania and the Pacific, presented by Dave Phelan at NZNOG 2024DDoS In Oceania and the Pacific, presented by Dave Phelan at NZNOG 2024
DDoS In Oceania and the Pacific, presented by Dave Phelan at NZNOG 2024
 
How is AI changing journalism? (v. April 2024)
How is AI changing journalism? (v. April 2024)How is AI changing journalism? (v. April 2024)
How is AI changing journalism? (v. April 2024)
 
Rohini Sector 26 Call Girls Delhi 9999965857 @Sabina Saikh No Advance
Rohini Sector 26 Call Girls Delhi 9999965857 @Sabina Saikh No AdvanceRohini Sector 26 Call Girls Delhi 9999965857 @Sabina Saikh No Advance
Rohini Sector 26 Call Girls Delhi 9999965857 @Sabina Saikh No Advance
 
Call Girls South Delhi Delhi reach out to us at ☎ 9711199012
Call Girls South Delhi Delhi reach out to us at ☎ 9711199012Call Girls South Delhi Delhi reach out to us at ☎ 9711199012
Call Girls South Delhi Delhi reach out to us at ☎ 9711199012
 
Chennai Call Girls Porur Phone 🍆 8250192130 👅 celebrity escorts service
Chennai Call Girls Porur Phone 🍆 8250192130 👅 celebrity escorts serviceChennai Call Girls Porur Phone 🍆 8250192130 👅 celebrity escorts service
Chennai Call Girls Porur Phone 🍆 8250192130 👅 celebrity escorts service
 
Call Girls In Defence Colony Delhi 💯Call Us 🔝8264348440🔝
Call Girls In Defence Colony Delhi 💯Call Us 🔝8264348440🔝Call Girls In Defence Colony Delhi 💯Call Us 🔝8264348440🔝
Call Girls In Defence Colony Delhi 💯Call Us 🔝8264348440🔝
 
Best VIP Call Girls Noida Sector 75 Call Me: 8448380779

  • 1. A DRAM-friendly priority queue Internet packet scheduler implementation and its effects on TCP. Katsushi Kobayashi (ikob@acm.org). Copyright (c) 2020, Katsushi Kobayashi. All rights reserved.
  • 2. 1. Introduction 2. Hardware-based packet scheduler implementation 3. TCP behaviors with real end-systems 4. Deployment issues 5. Related work 6. Conclusion
  • 3. Packet buffer on router • Has a significant impact on applications' QoE. • The preferred buffer size depends on the application: • Throughput-centric flows want buffers as large as the BDP: legacy FTP, video streaming buffering at start. • Latency-sensitive flows want buffers as small as possible: VoIP, interactive Web, gaming. • An existing FIFO cannot satisfy all of them: • If large, the built-up queue worsens latency-sensitive applications (aka bufferbloat). • If small, suppressing rapid cwnd growth reduces throughput.
  • 4. AQMs against bufferbloat / Int-/Diff-Serv among ISPs • CoDel and PIE strike a compromise between throughput-centric and latency-sensitive applications while accepting TCP slow-start. • Transient queue build-up still blocks latency-sensitive flows. • This is the limit of the "one-size-fits-all" approach. • Deploying Int-/Diff-Serv demands an economic infrastructure in addition to updated network facilities [RFC5290]. • DetNet focuses on closed, controlled networks, NOT on the Internet. • The best-effort service model should be used.
  • 5. Latency awareness on a future Internet • Satisfy various buffer-latency requirements within the best-effort service. • Architecture: ends and networks work together: • Applications indicate a latency limit in the IP header, e.g., ToS or DSCP. • Routers schedule packets in an Earliest Deadline First (EDF) manner. • Challenge: there is no resource management in the best-effort service. • When the priority queue is congested, packets whose deadlines have elapsed block the entire queue and cause unbounded delays. [Figure: packets labeled with latency limits t1, t2, t3 for VoIP, interactive Web, and OS update traffic.]
  • 6. EDF with reneging (EDFR) scheduler • Works as follows: • Dequeue the earliest-deadline packet. • Forward it if its deadline has NOT elapsed. • Otherwise, discard it. • No queue build-up, even when congested. • Loss properties similar to a size-limited FIFO: 1. The overall loss rate vs. traffic intensity resembles that of a limited FIFO whose size corresponds to the mean deadline [Kruk]. 2. The loss-rate distribution over deadlines is almost flat, except for very short deadlines. ➡ No significant impact expected on TCP flows. • Confirmed by ns-2 simulations. Kobayashi, K. "LAWIN: A Latency-AWare InterNet architecture for latency support on best-effort networks." HPSR 2015. Kruk, Łukasz, et al. "Heavy traffic analysis for EDF queues with reneging." The Annals of Applied Probability 21.2 (2011). [Figure: fraction of reneged packets vs. deadline for EDF with reneging on an M/M/1 queue, ρ = 0.98, deadlines drawn from U(1,B) and U(5,B), compared with the theoretical M/M/1/N blocking rate (1-ρ)ρ^N / (1-ρ^(N+1)).]
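The dequeue rule above can be sketched in software with a heap keyed by absolute deadline (a simplification of the hardware design described later; all names here are illustrative):

```python
import heapq

class EDFRScheduler:
    """Earliest-Deadline-First scheduler with reneging (software sketch)."""

    def __init__(self):
        self._heap = []  # entries: (absolute_deadline, seq, packet)
        self._seq = 0    # tie-breaker so heap ordering never compares packets

    def enqueue(self, packet, now, latency_limit):
        heapq.heappush(self._heap, (now + latency_limit, self._seq, packet))
        self._seq += 1

    def dequeue(self, now):
        """Return the earliest-deadline packet whose deadline has not
        elapsed; packets with elapsed deadlines are discarded (reneging)."""
        while self._heap:
            deadline, _, packet = heapq.heappop(self._heap)
            if deadline >= now:   # deadline not elapsed: forward it
                return packet
            # else: deadline elapsed, drop it and try the next packet
        return None
```

Because elapsed packets are dropped at dequeue time, the queue never builds up ahead of live packets, which is the property the slide relies on.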
  • 7. Objectives • Present the feasibility of a latency-aware Internet: • Hardware-based EDFR packet scheduler implementations able to support 100 Gbps or more. • Investigate how TCP behavior changes under EDFR, using real end-systems.
  • 8. 1. Introduction 2. Hardware-based packet scheduler implementation 3. TCP behaviors with real end-systems 4. Deployment issues 5. Related work 6. Conclusion
  • 9. EDFR implementation • DRAM is the only choice for the packet buffer: • 1.25 GB is needed for a 100 ms buffer on a 100 Gbps link. • BW: 460 GB/s @ HBM2, 20 GB/s @ DDR4. • Random-access latency: about 100 ns. • What we need is a priority queue that treats the remaining time to the deadline as the priority. • Many efficient priority-queue packet schedulers exist: heap (O(log n)), calendar queue (O(1)), .... • But they are incompatible with DRAM due to their random-access nature: only about 5 ns is available per 64-byte packet on a 100 Gbps link.
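The sizing and timing figures on this slide follow directly from the link rate; a quick arithmetic check (values as stated on the slide):

```python
LINK_BPS = 100e9  # 100 Gbps link

# Buffer capacity needed to hold 100 ms of line-rate traffic.
buffer_bytes = LINK_BPS / 8 * 100e-3   # ≈ 1.25 GB, as on the slide

# Time budget per 64-byte packet arriving at line rate.
per_pkt_ns = 64 * 8 / LINK_BPS * 1e9   # ≈ 5.12 ns, i.e. the ~5 ns budget
```

The ~100 ns DRAM random-access latency is roughly 20x this per-packet budget, which is why a random-access priority queue cannot keep up.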
  • 10. Priority queue: multiple ring buffers + priority encoder • A naive but pragmatic implementation. • The number of deadline classes is small (< 256): • 8 bits in ToS, 6 bits in DSCP. • Able to represent up to 256 ms with 1 ms granularity. • Ring-buffer FIFO: • Can extract full DRAM bandwidth thanks to its sequential-access nature. • Compatible with variable packet sizes. • Priority encoder: • Treats the per-packet deadline as the priority. • On EDFR, dropped packets consume memory bandwidth in addition to forwarded ones. [Figure: per-class ring buffers (Class 0 ... Class n), each with wr_ptr/rd_ptr; packets are enqueued on receive, and a priority encoder selects the class to dequeue, sending or dropping each packet.]
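A software sketch of this per-class design, assuming one FIFO per deadline class and a priority encoder that picks the first non-empty class (lower class index = shorter deadline; class names and the class-to-deadline mapping are illustrative, not the hardware's exact encoding):

```python
from collections import deque

class ClassedPriorityQueue:
    """Per-deadline-class FIFOs plus a priority encoder (software sketch).
    Class i holds packets whose deadline is i ms; lower class = more urgent."""

    def __init__(self, num_classes=256):
        # One sequential-access FIFO per deadline class, mirroring the
        # per-class DRAM ring buffers on the slide.
        self._fifos = [deque() for _ in range(num_classes)]

    def enqueue(self, packet, deadline_class):
        self._fifos[deadline_class].append(packet)

    def dequeue(self):
        # Priority encoder: scan the non-empty flags and pick the
        # most urgent (lowest-index) non-empty class.
        for fifo in self._fifos:
            if fifo:
                return fifo.popleft()
        return None
```

Each FIFO is only ever appended to at the tail and popped from the head, which is what lets the DRAM version use purely sequential bursts.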
  • 11. Skip-FIFO • Two FIFOs, to reduce the bandwidth wasted on dropped packets: 1. Packet data: a ring buffer with wr_ptr / rd_ptr. 2. Timestamp + pointer, updated at each skip interval: • Input: (wr_ptr, T_now + deadline), enqueued once per skip_interval. • Output: (elapsed_ptr, T_deadline), dequeued if T_deadline < T_now; • if elapsed_ptr > rd_ptr, then rd_ptr = elapsed_ptr. • Skip-FIFO + priority encoder: • An approximate implementation of EDFR whose accuracy depends on the skip interval. • 12 μs with a 4K-word FIFO for 200 ms of buffer capacity, far below the 10-15 ms update intervals of modern AQMs. [Figure: a timestamp FIFO alongside the packet ring buffer; enqueue(wr_ptr, T_now + deadline) at each skip_interval; dequeue(elapsed_ptr, T_deadline), and if elapsed_ptr > rd_ptr then rd_ptr = elapsed_ptr.]
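The skip-FIFO pointer bookkeeping above can be sketched as follows (an illustrative model only: times are in arbitrary units, the packet ring buffer is reduced to pointer values, and all names are mine):

```python
from collections import deque

class SkipFIFO:
    """Timestamp side of the skip-FIFO (software sketch).
    Once per skip_interval, record (wr_ptr, T_now + deadline); later,
    advance rd_ptr past buffer regions whose recorded deadline elapsed,
    so dropped packets are skipped without reading them from DRAM."""

    def __init__(self, skip_interval):
        self.skip_interval = skip_interval
        self.rd_ptr = 0
        self._last_stamp = None
        self._ts_fifo = deque()  # entries: (wr_ptr_snapshot, absolute_deadline)

    def on_enqueue(self, wr_ptr, t_now, deadline):
        # Record at most one (pointer, deadline) pair per skip interval.
        if self._last_stamp is None or t_now - self._last_stamp >= self.skip_interval:
            self._ts_fifo.append((wr_ptr, t_now + deadline))
            self._last_stamp = t_now

    def advance(self, t_now):
        # Dequeue every timestamp whose deadline elapsed (T_deadline < T_now)
        # and jump rd_ptr forward past that region of the ring buffer.
        while self._ts_fifo and self._ts_fifo[0][1] < t_now:
            elapsed_ptr, _ = self._ts_fifo.popleft()
            if elapsed_ptr > self.rd_ptr:
                self.rd_ptr = elapsed_ptr
        return self.rd_ptr
```

Because only one timestamp is kept per interval, the skip is approximate: reneging happens at skip-interval granularity (the 12 μs figure on the slide) rather than per packet.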
  • 12. DRAM-based EDFR on FPGAs • Implementations: • For TCP behavior with real ends: Kintex-7 (28 nm), NetFPGA-CML, 512 MB DDR3, 4 ports x GbE. • For throughput: Xilinx Virtex UltraScale+ (16 nm); Alveo U280-ES with 8 GB HBM DRAM; AWS F1 with 64 GB DDR4 DRAM. • Consumes only 20% more LUTs than an ordinary ring-buffer FIFO. Table: FIFO controllers' resource utilization with 64-bit data width and 512-byte burst size: Skip-FIFO: 10 BRAM (SRAM), 1746 LUT, 655 FF; FIFO: 2 BRAM, 1428 LUT, 437 FF; Virtual FIFO (Xilinx): 4 BRAM, 1169 LUT, 1938 FF.
  • 13. Skip-FIFO throughputs with SRAM, DDR4, HBM • Constant regardless of packet size (including metadata). • Increases with larger transactions (AXI-MM burst length): • HBM: 39 Gbps @ 4 KB burst, 1.8 Gbps @ 64 B. • DDR4: 60 Gbps @ 4 KB, 2.7 Gbps @ 64 B. • For the entire system, HBM >> DDR4, although a single HBM2 channel < a DDR4 channel: • HBM: 1.2 Tbps @ 4 KB (76% of the theoretical max). • DDR4: 240 Gbps @ 4 KB. [Figure: Skip-FIFO throughputs. (a) Single-channel throughputs; the edged bar areas exclude metadata overhead. (b) Entire-system throughputs aggregating the available memory channels.]
  • 14. 1. Introduction 2. Hardware-based packet scheduler implementation 3. TCP behaviors with real end-systems 4. Deployment issues 5. Related work 6. Conclusion
  • 15. TCP behaviors with EDFR on real end-systems • Emulation system: • Network switch: NetFPGA-CML as a 4-port switch with an EDFR scheduler supporting 3 delay classes. • Hosts: Ubuntu 18.04. • Link delay: Linux NetEm. • Traffic generator: Flowgrind. • 3 evaluation scenarios: 1. Confirm deadline-aware scheduling on EDFR. 2. Loss and throughput with Web-like traffic. 3. Throughput of competing flows requesting different deadlines. • Follows the TCP evaluation suite [draft-irtf-iccrg-tcpeval-01]. [Figure: dumbbell emulation topologies with per-link delays of 0-75 ms.]
  • 16. 1. Per-packet deadline support on EDFR • Generate two 3×3 groups of long-lived TCP CUBIC flows. • Each 3×3 flow group has a different deadline, e.g., 30-100 ms. • In the FIFO case, buffer capacity is 100 ms. • Figures show the CDF of queueing delay aggregated per deadline. • Deadline support on EDFR is confirmed. • Note: the (c) 100-100 ms case is similar to (f) a shared FIFO rather than (e) dedicated FIFOs. [Figure: a 6×6 dumbbell topology comprising two 3×3 flow groups, (T1...T6) and (R1...R6); all links have a capacity of 1 Gbps.]
  • 17. 2. Loss and throughput under moderate load • Generate two 3×3 flow groups. • 3GPP HTTP model traffic instead of a real traffic trace. • Consumes 80-90% of the bottleneck BW. • Throughput: no significant differences were found for any deadline combination. • Loss: longer-deadline flows have slightly higher loss rates than shorter-deadline ones. • This disagrees with the ns-2 simulations and with the nature of EDFR, but is not significant. [Figure: the same 6×6 dumbbell topology comprising two 3×3 flow groups, (T1...T6) and (R1...R6); all links have a capacity of 1 Gbps.]
  • 18. 3. Flow completion time (FCT) with competing flows • Generate two flows requesting different deadlines: • Long-lived traffic; the 2nd flow starts after 100 s. • Consumes 100% of the bottleneck BW. • Upper plots: FCTs of 1.5 GB transfers with two flows of different deadlines; FCTs are almost equal for all deadline combinations. • Lower plots: all FCTs grew steadily, as with FIFO. [Figures: a dumbbell with two node pairs, (T1,T2) and (T2,T3), all links 1 Gbps; FCT of 1.5 GB transfers competing against different-deadline flows; FCT plots of two TCP CUBIC flows competing with (a) a 60-80 ms deadline pair on EDFR and (b) on FIFO.]
  • 19. TCP behaviors summary • Existing TCP stacks will work well if ordinary FIFO schedulers are replaced with EDFR. • Most properties observed with CUBIC were fully retained with Reno as well.
  • 20. 1. Introduction 2. Hardware-based packet scheduler implementation 3. TCP behaviors with real end-systems 4. Deployment issues 5. Related work 6. Conclusion
  • 21. Latency-aware Internet deployment • Applications: • Can adopt latency support simply by calling the existing socket API to set the IP ToS field. • Routers: • Expected to reduce packet buffer sizes as a result of explicit per-flow deadline declarations. • An economic infrastructure is NOT required, since this is a best-effort service.
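The "existing socket API" path mentioned here can be exercised today; a minimal sketch setting the IP ToS byte on a socket (the mapping of a deadline in ms to the ToS value is this architecture's convention as I read the slides, not an IETF standard):

```python
import socket

# Assumed convention from the talk: the ToS byte carries the latency
# limit with 1 ms granularity, e.g. 32 for a 32 ms deadline.
DEADLINE_MS = 32

s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# Standard sockets API call; no kernel or application changes needed
# beyond choosing a value for the ToS field.
s.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, DEADLINE_MS)
tos = s.getsockopt(socket.IPPROTO_IP, socket.IP_TOS)
s.close()
```

This is the whole application-side change the deployment story requires: the deadline rides in a header field that routers already parse.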
  • 22. 1. Introduction 2. Hardware-based packet scheduler implementation 3. TCP behaviors with real end-systems 4. Deployment issues 5. Related work 6. Conclusion
  • 23. Related work • CoDel and PIE reduce packet buffer latency, but: • The target delay is fixed. • Transient queue build-up is allowed. • The Least Slack Time First (LSTF) scheduler accounts only for buffered delay, as our architecture does. • But LSTF considers the cumulative buffered delay, unlike our per-hop basis.
  • 24. Conclusion / Future work • Feasibility of a latency-aware Internet: • A DRAM-friendly EDFR packet scheduler able to support 1 Tbps or more. • TCP behavior is almost unchanged under EDFR, shown with real end-systems. • Future work: cooperate with emerging UDP-based transports: • HTTP priorities can map to deadlines.