Investigating the Use of
Synchronized Clocks in TCP
    Congestion Control
         Michele Weigle
       Dissertation Defense
          May 14, 2003

      Advisor: Kevin Jeffay
Research Question


Can the use of exact timing
information improve TCP
congestion control?


                              2
Claim
         synchronized clocks

       exact timing information

      early congestion detection

  less packet loss and shorter queues

  better overall network performance

                                        3
Outline
•   Background
•   Related Work
•   Thesis Statement
•   Sync-TCP
•   Evaluation
•   Conclusions
•   Future Work


                       4
Background
  Queuing
• Router queues are FIFO and finite
   – the longer the queue, the longer a packet at the end of the
     queue is delayed
   – if the queue is full, incoming packets are dropped

   [Figure: a FIFO router queue; an arriving packet (X) is dropped when the queue is full]

• Most queues are drop-tail
   – incoming packets are dropped only when the queue is full
                                                                   5
Background
    Congestion
• A sustained period during which the incoming
  rate exceeds the service rate

• Leads to increased
  queuing delays

• Leads to packet loss
   – increased latency for TCP flows
   – low throughput



                                                6
Background
      TCP Data Transfer
   [Figure: timeline of one data/ACK exchange. The sender transmits data 1;
    after one OTT the receiver returns an ACK, which arrives one OTT later,
    completing the RTT. With congestion window size (cwnd) = 1, one segment
    is sent per RTT, so throughput = cwnd / RTT.]
                                                                        7
Background
      TCP Congestion Window
   [Figure: timeline with cwnd = 3. The sender transmits data 1-3 in one
    burst; each returning ACK (2, 3, 4) releases a new segment (data 4-6).
    Throughput = cwnd / RTT.]
                                                                    8
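The throughput relation on the last two slides (throughput = cwnd / RTT) can be checked with a quick sketch. This is illustrative only; the `mss_bytes` parameter is an assumption, not something from the slides:

```python
def tcp_throughput_bps(cwnd_segments, mss_bytes, rtt_seconds):
    """Throughput = cwnd / RTT: with cwnd segments of mss_bytes bytes
    outstanding per round trip, this many bits per second are delivered."""
    return cwnd_segments * mss_bytes * 8 / rtt_seconds

# With cwnd = 3 segments of 1460 bytes and a 100 ms RTT:
# 3 * 1460 * 8 / 0.1 = 350,400 bits/s
```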
Background
   TCP Congestion Control
• Available network bandwidth is unknown

• TCP probes the network by increasing the
  congestion window when ACKs return

• TCP backs off by reducing the congestion
  window when loss is detected


                                             9
Background
  TCP Reno Loss Detection
• 3 duplicate ACKs
   – reduce congestion window by 50%

• Retransmission timeout
   – reduce congestion window to 1 packet

   [Figure: congestion window vs. time. The window is halved after three
    duplicate ACKs (x) and collapses to 1 packet after a timeout (x).
    Throughput = cwnd / RTT.]
                                                                             10
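The two Reno reactions just described can be sketched as a single helper (a hypothetical illustration, not the simulator's code): three duplicate ACKs halve the window, while a retransmission timeout resets it to one packet.

```python
def reno_window_after_loss(cwnd, event):
    """TCP Reno's reaction to its two loss signals (illustrative sketch)."""
    if event == "triple_dup_ack":
        return max(1, cwnd // 2)  # fast retransmit: cut cwnd by 50%
    if event == "timeout":
        return 1                  # retransmission timeout: back to 1 packet
    return cwnd                   # no loss signal: window unchanged
```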
Background
TCP Reno Data Recovery
   [Figure: timeline of TCP Reno data recovery. Data 1-5 are sent and data 2
    is lost (X). Each later segment triggers a duplicate ACK 2; after three
    duplicate ACKs the sender retransmits data 2, while new data (data 6)
    continues to flow.]
                                                    11
The Problem
  TCP Congestion Control
• Overflows queues in its search for more
  resources

• Uses packet loss as its only indicator of
  congestion
  – relies on a binary congestion signal




                                              12
The Problem
   Congestion Control
• TCP Reno: react to packet loss
   – reduce sending rate only when packets are lost
   – perform congestion control only when it is time to retransmit lost
     packets

   [Figure: congestion window vs. time for TCP Reno, a sawtooth driven by
    duplicate ACKs (x) and a timeout (x)]

• Goal: react to congestion early and avoid losses
   – congestion occurs before packets are lost
   – decouple congestion control and retransmission

   [Figure: congestion window vs. time for the goal behavior, with smoother
    adjustments and no losses]
                                                                                           13
Related Work
 Congestion Control
• End-to-End
   – TCP Reno is the problem

   [Figure: adaptation at both end hosts; the routers inside the Internet
    are unchanged]

• Router-based
   – drop-tail queues are the problem
   – active queue management (AQM)

   [Figure: adaptation at the routers inside the Internet]
                                                                    14
Related Work
  Congestion Control
• End-to-End
  –   Delay-based congestion control [R. Jain, 1989]
  –   TCP Vegas [Brakmo, O’Malley, Peterson, 1994]
  –   TCP Santa Cruz [Parsa, Garcia-Luna-Aceves, 1999]
  –   TCP Westwood [Mascolo, Casetti, Gerla, Sanadidi, Wang, 2001]
  –   TCP Peach [Akyildiz, Morabito, Palazzo, 2001]
  –   Binomial algorithms [Bansal, Balakrishnan, 2001]
• Router-based
  –   DECbit [Ramakrishnan, R. Jain, 1990]
  –   Random Early Detection (RED) [Floyd, Jacobson, 1993]
  –   Explicit Congestion Notification (ECN) [Floyd, 1994]
  –   Adaptive RED [Floyd, Gummadi, Shenker, 2001]


                                                                     15
Thesis Statement
Precise knowledge of one-way transit
times can be used to improve the
performance of TCP congestion
control.




                                       16
Thesis Statement
Precise knowledge of one-way transit
times can be used to improve the
performance of TCP congestion
control.
         • network-level metrics: packet loss and
         average queue sizes at congested routers

         • application-level metrics: HTTP
         response times and goodput per HTTP
         response
                                                    17
Thesis Statement
Precise knowledge of one-way transit
times can be used to improve the
performance of TCP congestion
control.
         • provide lower packet loss and lower
         queue sizes than TCP Reno

         • provide lower HTTP response time and
         higher goodput per HTTP response than
         TCP Reno
                                                  18
My Approach
1.   Exchange exact timing information
2.   Detect congestion
3.   React to congestion
4.   Sync-TCP congestion control
5.   Evaluate Sync-TCP vs. TCP Reno




                                         19
Sync-TCP
 Synchronized Clocks
• Allow measurement of OTT
• Methods of synchronization
  – Global Positioning System (GPS)
  – Network Time Protocol (NTP)

  [Figure: hosts across the Internet with synchronized clocks]

                                    20
Sync-TCP
  TCP Header Option
• New option in the TCP header
   – 14 bytes: type, length, OTT (ms), timestamp, echo reply

   [Figure: TCP header layout (source/dest ports, sequence and acknowledgment
    numbers, header length, flags, receiver window size, checksum, urgent
    pointer), with the Sync-TCP option carried in the variable-length options
    field]
                                                                    21
Sync-TCP
       Example
Each segment carries [OTT, timestamp, echo reply]. In the example, the sender
transmits at t = 1 and t = 5, and the receiver ACKs at t = 3 and t = 8:

   t = 1: sender sends  [-1, 1, -1]   (no OTT or echo yet)
   t = 3: receiver ACKs [1, 3, 1]     (data took OTT = 1)
   t = 5: sender sends  [1, 5, 3]
   t = 8: receiver ACKs [2, 8, 5]     (data took OTT = 2)

Sender's Calculations

   time data received = time data sent (echo reply) + OTT

   time ACK delayed = time ACK sent (timestamp) - time data received

   queuing delay = OTT - minimum OTT
                                                                      22
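The sender's calculations on this slide can be applied to one received header directly. This is a hypothetical helper for illustration, not the dissertation's implementation:

```python
def sender_calculations(ott, timestamp, echo_reply, min_ott):
    """Apply the three Sync-TCP calculations to one [OTT, timestamp,
    echo reply] header: when the data arrived, how long the ACK was
    delayed, and the current queuing-delay estimate."""
    time_data_received = echo_reply + ott        # data send time + its OTT
    time_ack_delayed = timestamp - time_data_received
    queuing_delay = ott - min_ott                # OTT above the minimum observed
    return time_data_received, time_ack_delayed, queuing_delay

# For the ACK [2, 8, 5] with minimum-observed OTT = 1:
# data received at 5 + 2 = 7, ACK delayed 8 - 7 = 1, queuing delay 2 - 1 = 1
```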
Sync-TCP
    Congestion Detection
• 50% of maximum-observed queuing delay
    (queuing delay = OTT – minimum-observed OTT)
•   50% of minimum-observed OTT
•   Average queuing delay
•   Trend analysis of queuing delays
•   Trend analysis of the average queuing delay



                                                   23
Sync-TCP
  Trend Analysis of Average Queuing Delay
• Trend analysis for available bandwidth estimation
  adapted from [Jain and Dovrolis, 2002]

• Operation:
   –   compute 9 average queuing delay samples
   –   split into 3 groups of 3 samples each
   –   compute median, mi , of each group
   –   trend is relationship of m1, m2, m3




                                                      24
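The four-step operation above might look like the following sketch. It follows the slide's description; the handling of non-monotone medians ("no trend") is an assumption:

```python
from statistics import median

def queuing_delay_trend(samples):
    """Split 9 average-queuing-delay samples into 3 groups of 3, take each
    group's median (m1, m2, m3), and classify their relationship."""
    assert len(samples) == 9
    m1, m2, m3 = (median(samples[i:i + 3]) for i in (0, 3, 6))
    if m1 < m2 < m3:
        return "increasing"
    if m1 > m2 > m3:
        return "decreasing"
    return "no trend"
```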
Sync-TCP
  Trend Analysis of Average Queuing Delay
• On every arriving ACK, compute the smoothed
  average queuing delay from the OTT
• Compute the trend of the average queuing delay
  – after the first 9 ACKs
  – afterwards, every 3 ACKs
• Calculate the average queuing delay as a
  percentage of the maximum-observed
  queuing delay
  – divide into 25% increments
                                                          25
Sync-TCP
                  Queuing Delay at Router
   [Figure: queuing delay at the router (ms) vs. time (s) over 260-265 s;
    the delay fluctuates between 0 and 100 ms]
                                                                         26
Sync-TCP
         Trend Analysis of Average Queuing Delay
   [Figure: the same queuing-delay trace with the computed average queuing
    delay overlaid; segments are marked as increasing or decreasing trend,
    shown against the 0%, 25%, 50%, and 75% levels of the maximum-observed
    delay]
                                                                         27
Sync-TCP
  Congestion Reaction
• Decrease congestion window by 50% upon
  congestion notification
  – same reaction as TCP Reno to packet loss
• Increase and decrease congestion window
  according to congestion signal
  – intended to be used with trend analysis of average
    queuing delay congestion detection
  – operates the same as TCP Reno until 9 ACKs
    have been received

                                                         28
Sync-TCP
          Congestion Window Adjustment

  average queuing delay   increasing trend       decreasing trend
  (% of maximum)
  75-100%                 decrease 50%           no change
  50-75%                  decrease 25%           increase 10% per RTT
  25-50%                  decrease 10%           increase 25% per RTT
  0-25%                   increase 1 packet      increase 50% per RTT
                          per RTT
                                                                   29
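The adjustment table above could be applied per congestion signal roughly as follows. This is an illustrative sketch; the behavior exactly at the 25/50/75% boundaries is an assumption:

```python
def sync_tcp_adjust(cwnd, avg_delay, max_delay, trend):
    """Adjust the congestion window from the average queuing delay
    (as a fraction of the maximum observed) and its trend.
    Percentage increases are applied once per RTT."""
    frac = avg_delay / max_delay
    if trend == "increasing":
        if frac >= 0.75:
            return cwnd * 0.50   # decrease 50%
        if frac >= 0.50:
            return cwnd * 0.75   # decrease 25%
        if frac >= 0.25:
            return cwnd * 0.90   # decrease 10%
        return cwnd + 1          # below 25%: increase 1 packet per RTT
    # decreasing trend
    if frac >= 0.75:
        return cwnd              # no change
    if frac >= 0.50:
        return cwnd * 1.10       # increase 10% per RTT
    if frac >= 0.25:
        return cwnd * 1.25       # increase 25% per RTT
    return cwnd * 1.50           # increase 50% per RTT
```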
Sync-TCP
  Congestion Control
Congestion Detection
• Trend analysis of smoothed average queuing delay
• 50% of maximum queuing delay
• 50% of minimum OTT
• Smoothed average queuing delay
• Trend analysis of queuing delays

Congestion Reaction
• Increase and decrease congestion window according to congestion signal
• Decrease congestion window by 50% upon congestion notification
                                                   30
Evaluation
  Experiment Plan
• NS-2 network simulator
  – assume synchronized clocks
• FTP bulk-transfer traffic
  – examine the steady-state operation of the
    mechanisms
• HTTP traffic
  – integrate traffic model developed at Bell Labs
    into NS-2
     • main parameter is average number of HTTP requests
       per second
  – calibrate HTTP request rate to desired load level
                                                           31
Evaluation
  HTTP Simulation Environment
• Sync-TCP and TCP Reno flows do not compete
• Two-way traffic
   – measure performance in one direction only
• 70-150 new HTTP requests generated per second
• 45-2,500 HTTP connections active simultaneously
• 250,000 HTTP request-response pairs completed

   [Figure: dumbbell topology; web clients and servers on both sides of a
    10 Mbps bottleneck exchange HTTP request/response pairs]
                                                                   32
Evaluation
   HTTP Experiment Space
• Sync-TCP congestion control mechanism
   – 50% max queuing delay detection and reduce by 50% reaction
   – trend analysis of average queuing delay detection and adjust
     according to signal reaction
• TCP for comparison
   – TCP Reno, TCP SACK
• Queuing method for comparison
   – drop-tail, Adaptive RED, Adaptive RED with ECN
• End-to-end load (% of link capacity)
   – 50%, 60%, 70%, 80%, 85%, 90%, 95%, 100%, 105%
• Number of congested links
   – 1, 2 (75% total load, 90% total load, 105% total load)


                                                                    33
Evaluation
  Evaluating HTTP Performance
• Network-level Metrics
  – packet loss at bottleneck router
  – queue size at bottleneck router

• Application-level Metrics
  – goodput per HTTP response
     • bytes received per second at web client
  – HTTP response times
     • time between sending the request and receiving the
       entire response

                                                            34
Evaluation
            Average Packet Loss at Bottleneck
   [Figure: packet loss (%) vs. offered load (50%-95%) for TCP Reno and
    Sync-TCP. Loss grows with load and is consistently lower for Sync-TCP;
    bar annotations give the number of dropped packets, from near 0 at
    50%-60% load up to hundreds of thousands (e.g., 400 K vs. 275 K) at
    95% load]
                                                                         35
Evaluation
            Average Queue Size at Bottleneck
   [Figure: average queue size (packets, 0-80) vs. offered load (50%-95%)
    for TCP Reno and Sync-TCP; Sync-TCP maintains a shorter queue]
                                                                         36
Evaluation
            Average Goodput per Response
   [Figure: goodput (kbps, 0-160) vs. offered load (50%-95%) for TCP Reno
    and Sync-TCP; Sync-TCP achieves higher goodput per HTTP response]
                                                                         37
Response Time CDF
            Example
   [Figure: cumulative probability (%) vs. HTTP response time (0-1400 ms).
    In this example, ~75% of the responses completed in 400 ms or less]
                                                                         38
Response Time CDF
            50% Load
   [Figure: HTTP response time CDFs at 50% load; no large difference
    between the uncongested and congested cases]
                                                                         39
Response Time CDF
            70% Load
   [Figure: HTTP response time CDFs at 70% load; Sync-TCP performs
    slightly better than TCP Reno]
                                                                         40
Response Time CDF
            80% Load
   [Figure: HTTP response time CDFs at 80% load; Sync-TCP performs
    better than both TCP Reno and AQM]
                                                                         41
Response Time CDF
            85% Load
   [Figure: HTTP response time CDFs at 85% load; Sync-TCP performs
    better than both TCP Reno and AQM]
                                                                         42
Evaluation
  Early Congestion Detection
• Sync-TCP early congestion detection only
  operates after 9 ACKs have been received
  – HTTP responses > 25 KB

• Only 7-8% of HTTP responses > 25 KB

• HTTP responses < 25 KB do not use Sync-
  TCP early congestion detection
  – they fall back to TCP Reno behavior

                                             43
Evaluation
            85% Load, 48 MB Response
   [Figure: congestion window (packets, 0-68) vs. time (900-1600 s) for a
    single 48 MB response. Top: TCP Reno (17 ms base RTT), 952 packet
    drops (x). Bottom: Sync-TCP (47 ms base RTT), 190 packet drops (x)]
                                                                         44
Conclusions
• Sync-TCP performs better than TCP Reno
  –   packet loss
  –   average queue size
  –   goodput per HTTP response
  –   HTTP response time
• Sync-TCP has comparable performance to “best”
  TCP and AQM combination
• Limitations of delay-based congestion control
  – may not compete well with TCP Reno on the same network
  – with many congested links, decrease in one queue could
    mask increase in another queue

                                                             45
Summary
       synchronized clocks

      one-way transit times

    early congestion detection

less packet loss and shorter queues

better overall network performance

Taking advantage of synchronized clocks in TCP can result in better
network performance.

                                                             46
My Contributions
• Method for measuring a flow's OTT and returning
  this exact timing information to the sender

• Comparison of several methods for using OTTs to
  detect congestion

• Sync-TCP: a family of end-to-end congestion
  control mechanisms based on using OTTs for
  congestion detection

                                                    47
Supporting Work
• Study of standards-track TCP congestion control and
  error recovery mechanisms in the context of HTTP
  traffic
   – Weigle, Jeffay, and Smith, “Quantifying the Effects of Recent
     Protocol Improvements to Standards-Track TCP,” in submission.
• Additions to NS-2
   – integrated a state-of-the-art random number generator
   – integrated Bell Labs’ HTTP traffic model
   – developed a module for delaying and dropping packets on
     a per-flow basis according to a given distribution
• Heuristics for determining appropriate run length for
  HTTP simulations
                                                                     48
Future Work
• Further Analysis
   – accuracy of clock synchronization
   – multiple congested links
   – Sync-TCP with router support
• Extensions to Sync-TCP
   –   improve congestion detection and reaction
   –   ACK compression
   –   ACK congestion control
   –   improve fairness
• Uses for synchronized clocks in TCP
   – statistics for time-critical applications
   – wireless devices
                                                   49
Thank You
• Committee Members
  Kevin Jeffay        Don Smith
  Ketan Mayer-Patel   Sanjoy Baruah
  Bert Dempsey        Jasleen Kaur
• UNC Department of Computer Science
• My parents, Mike & Jean Clark
• My husband, Chris


                                       50

LF_OVS_17_OVS/OVS-DPDK connection tracking for Mobile usecasesLF_OVS_17_OVS/OVS-DPDK connection tracking for Mobile usecases
LF_OVS_17_OVS/OVS-DPDK connection tracking for Mobile usecases
 
Designing TCP-Friendly Window-based Congestion Control
Designing TCP-Friendly Window-based Congestion ControlDesigning TCP-Friendly Window-based Congestion Control
Designing TCP-Friendly Window-based Congestion Control
 
Improving Distributed TCP Caching for Wireless Sensor Networks
Improving Distributed TCP Caching for Wireless Sensor NetworksImproving Distributed TCP Caching for Wireless Sensor Networks
Improving Distributed TCP Caching for Wireless Sensor Networks
 
Cvc2009 Moscow Repeater+Ica Fabian Kienle Final
Cvc2009 Moscow Repeater+Ica  Fabian Kienle FinalCvc2009 Moscow Repeater+Ica  Fabian Kienle Final
Cvc2009 Moscow Repeater+Ica Fabian Kienle Final
 
DCTcp
DCTcpDCTcp
DCTcp
 
Mobile Transpot Layer
Mobile Transpot LayerMobile Transpot Layer
Mobile Transpot Layer
 
features of tcp important for the web
features of tcp  important for the webfeatures of tcp  important for the web
features of tcp important for the web
 
Toward an Understanding of the Processing Delay of Peer-to-Peer Relay Nodes
Toward an Understanding of the Processing Delay of Peer-to-Peer Relay NodesToward an Understanding of the Processing Delay of Peer-to-Peer Relay Nodes
Toward an Understanding of the Processing Delay of Peer-to-Peer Relay Nodes
 
Network and TCP performance relationship workshop
Network and TCP performance relationship workshopNetwork and TCP performance relationship workshop
Network and TCP performance relationship workshop
 
05 ergeg mmergm maergergcongergeestion.ppt
05 ergeg mmergm maergergcongergeestion.ppt05 ergeg mmergm maergergcongergeestion.ppt
05 ergeg mmergm maergergcongergeestion.ppt
 
05compuernetworkscongestioncontrolalgo.ppt
05compuernetworkscongestioncontrolalgo.ppt05compuernetworkscongestioncontrolalgo.ppt
05compuernetworkscongestioncontrolalgo.ppt
 
Mobile comn.pptx
Mobile comn.pptxMobile comn.pptx
Mobile comn.pptx
 

More from Michele Weigle

Comparing the Archival Rate of Arabic, English, Danish, and Korean Language W...
Comparing the Archival Rate of Arabic, English, Danish, and Korean Language W...Comparing the Archival Rate of Arabic, English, Danish, and Korean Language W...
Comparing the Archival Rate of Arabic, English, Danish, and Korean Language W...
Michele Weigle
 
WS-DL’s Work towards Enabling Personal Use of Web Archives
WS-DL’s Work towards Enabling Personal Use of Web ArchivesWS-DL’s Work towards Enabling Personal Use of Web Archives
WS-DL’s Work towards Enabling Personal Use of Web Archives
Michele Weigle
 
Intro to Web Archiving
Intro to Web ArchivingIntro to Web Archiving
Intro to Web Archiving
Michele Weigle
 
Enabling Personal Use of Web Archives
Enabling Personal Use of Web ArchivesEnabling Personal Use of Web Archives
Enabling Personal Use of Web Archives
Michele Weigle
 
Visualizing Webpage Changes Over Time
Visualizing Webpage Changes Over TimeVisualizing Webpage Changes Over Time
Visualizing Webpage Changes Over Time
Michele Weigle
 
How to Write an Academic Paper
How to Write an Academic PaperHow to Write an Academic Paper
How to Write an Academic Paper
Michele Weigle
 
How to Prepare and Give and Academic Presentation
How to Prepare and Give and Academic PresentationHow to Prepare and Give and Academic Presentation
How to Prepare and Give and Academic Presentation
Michele Weigle
 
My Academic Story via Internet Archive
My Academic Story via Internet ArchiveMy Academic Story via Internet Archive
My Academic Story via Internet Archive
Michele Weigle
 
A Retasking Framework For Wireless Sensor Networks
A Retasking Framework For Wireless Sensor NetworksA Retasking Framework For Wireless Sensor Networks
A Retasking Framework For Wireless Sensor Networks
Michele Weigle
 
Strategies for Sensor Data Aggregation in Support of Emergency Response
Strategies for Sensor Data Aggregation in Support of Emergency ResponseStrategies for Sensor Data Aggregation in Support of Emergency Response
Strategies for Sensor Data Aggregation in Support of Emergency Response
Michele Weigle
 
Detecting Off-Topic Web Pages at #CUWARC
Detecting Off-Topic Web Pages at #CUWARCDetecting Off-Topic Web Pages at #CUWARC
Detecting Off-Topic Web Pages at #CUWARC
Michele Weigle
 
Energy Harvesting-aware Design for Wireless Nanonetworks
Energy Harvesting-aware Design for Wireless NanonetworksEnergy Harvesting-aware Design for Wireless Nanonetworks
Energy Harvesting-aware Design for Wireless Nanonetworks
Michele Weigle
 
2015-capwic-gradschool
2015-capwic-gradschool2015-capwic-gradschool
2015-capwic-gradschool
Michele Weigle
 
2015-odu-ece-tools-for-past-web
2015-odu-ece-tools-for-past-web2015-odu-ece-tools-for-past-web
2015-odu-ece-tools-for-past-web
Michele Weigle
 
Tools for Managing the Past Web
Tools for Managing the Past WebTools for Managing the Past Web
Tools for Managing the Past Web
Michele Weigle
 
Archive What I See Now - 2014 NEH ODH Overview
Archive What I See Now - 2014 NEH ODH OverviewArchive What I See Now - 2014 NEH ODH Overview
Archive What I See Now - 2014 NEH ODH Overview
Michele Weigle
 
Bits of Research
Bits of ResearchBits of Research
Bits of Research
Michele Weigle
 
Telling Stories with Web Archives
Telling Stories with Web ArchivesTelling Stories with Web Archives
Telling Stories with Web Archives
Michele Weigle
 
"Archive What I See Now" - NEH ODH overview
"Archive What I See Now" - NEH ODH overview"Archive What I See Now" - NEH ODH overview
"Archive What I See Now" - NEH ODH overview
Michele Weigle
 
TDMA Slot Reservation in Cluster-Based VANETs
TDMA Slot Reservation in Cluster-Based VANETsTDMA Slot Reservation in Cluster-Based VANETs
TDMA Slot Reservation in Cluster-Based VANETs
Michele Weigle
 

More from Michele Weigle (20)

Comparing the Archival Rate of Arabic, English, Danish, and Korean Language W...
Comparing the Archival Rate of Arabic, English, Danish, and Korean Language W...Comparing the Archival Rate of Arabic, English, Danish, and Korean Language W...
Comparing the Archival Rate of Arabic, English, Danish, and Korean Language W...
 
WS-DL’s Work towards Enabling Personal Use of Web Archives
WS-DL’s Work towards Enabling Personal Use of Web ArchivesWS-DL’s Work towards Enabling Personal Use of Web Archives
WS-DL’s Work towards Enabling Personal Use of Web Archives
 
Intro to Web Archiving
Intro to Web ArchivingIntro to Web Archiving
Intro to Web Archiving
 
Enabling Personal Use of Web Archives
Enabling Personal Use of Web ArchivesEnabling Personal Use of Web Archives
Enabling Personal Use of Web Archives
 
Visualizing Webpage Changes Over Time
Visualizing Webpage Changes Over TimeVisualizing Webpage Changes Over Time
Visualizing Webpage Changes Over Time
 
How to Write an Academic Paper
How to Write an Academic PaperHow to Write an Academic Paper
How to Write an Academic Paper
 
How to Prepare and Give and Academic Presentation
How to Prepare and Give and Academic PresentationHow to Prepare and Give and Academic Presentation
How to Prepare and Give and Academic Presentation
 
My Academic Story via Internet Archive
My Academic Story via Internet ArchiveMy Academic Story via Internet Archive
My Academic Story via Internet Archive
 
A Retasking Framework For Wireless Sensor Networks
A Retasking Framework For Wireless Sensor NetworksA Retasking Framework For Wireless Sensor Networks
A Retasking Framework For Wireless Sensor Networks
 
Strategies for Sensor Data Aggregation in Support of Emergency Response
Strategies for Sensor Data Aggregation in Support of Emergency ResponseStrategies for Sensor Data Aggregation in Support of Emergency Response
Strategies for Sensor Data Aggregation in Support of Emergency Response
 
Detecting Off-Topic Web Pages at #CUWARC
Detecting Off-Topic Web Pages at #CUWARCDetecting Off-Topic Web Pages at #CUWARC
Detecting Off-Topic Web Pages at #CUWARC
 
Energy Harvesting-aware Design for Wireless Nanonetworks
Energy Harvesting-aware Design for Wireless NanonetworksEnergy Harvesting-aware Design for Wireless Nanonetworks
Energy Harvesting-aware Design for Wireless Nanonetworks
 
2015-capwic-gradschool
2015-capwic-gradschool2015-capwic-gradschool
2015-capwic-gradschool
 
2015-odu-ece-tools-for-past-web
2015-odu-ece-tools-for-past-web2015-odu-ece-tools-for-past-web
2015-odu-ece-tools-for-past-web
 
Tools for Managing the Past Web
Tools for Managing the Past WebTools for Managing the Past Web
Tools for Managing the Past Web
 
Archive What I See Now - 2014 NEH ODH Overview
Archive What I See Now - 2014 NEH ODH OverviewArchive What I See Now - 2014 NEH ODH Overview
Archive What I See Now - 2014 NEH ODH Overview
 
Bits of Research
Bits of ResearchBits of Research
Bits of Research
 
Telling Stories with Web Archives
Telling Stories with Web ArchivesTelling Stories with Web Archives
Telling Stories with Web Archives
 
"Archive What I See Now" - NEH ODH overview
"Archive What I See Now" - NEH ODH overview"Archive What I See Now" - NEH ODH overview
"Archive What I See Now" - NEH ODH overview
 
TDMA Slot Reservation in Cluster-Based VANETs
TDMA Slot Reservation in Cluster-Based VANETsTDMA Slot Reservation in Cluster-Based VANETs
TDMA Slot Reservation in Cluster-Based VANETs
 

Recently uploaded

Goodbye Windows 11: Make Way for Nitrux Linux 3.5.0!
Goodbye Windows 11: Make Way for Nitrux Linux 3.5.0!Goodbye Windows 11: Make Way for Nitrux Linux 3.5.0!
Goodbye Windows 11: Make Way for Nitrux Linux 3.5.0!
SOFTTECHHUB
 
Essentials of Automations: The Art of Triggers and Actions in FME
Essentials of Automations: The Art of Triggers and Actions in FMEEssentials of Automations: The Art of Triggers and Actions in FME
Essentials of Automations: The Art of Triggers and Actions in FME
Safe Software
 
Unlock the Future of Search with MongoDB Atlas_ Vector Search Unleashed.pdf
Unlock the Future of Search with MongoDB Atlas_ Vector Search Unleashed.pdfUnlock the Future of Search with MongoDB Atlas_ Vector Search Unleashed.pdf
Unlock the Future of Search with MongoDB Atlas_ Vector Search Unleashed.pdf
Malak Abu Hammad
 
20240605 QFM017 Machine Intelligence Reading List May 2024
20240605 QFM017 Machine Intelligence Reading List May 202420240605 QFM017 Machine Intelligence Reading List May 2024
20240605 QFM017 Machine Intelligence Reading List May 2024
Matthew Sinclair
 
Uni Systems Copilot event_05062024_C.Vlachos.pdf
Uni Systems Copilot event_05062024_C.Vlachos.pdfUni Systems Copilot event_05062024_C.Vlachos.pdf
Uni Systems Copilot event_05062024_C.Vlachos.pdf
Uni Systems S.M.S.A.
 
Full-RAG: A modern architecture for hyper-personalization
Full-RAG: A modern architecture for hyper-personalizationFull-RAG: A modern architecture for hyper-personalization
Full-RAG: A modern architecture for hyper-personalization
Zilliz
 
Artificial Intelligence for XMLDevelopment
Artificial Intelligence for XMLDevelopmentArtificial Intelligence for XMLDevelopment
Artificial Intelligence for XMLDevelopment
Octavian Nadolu
 
Generative AI Deep Dive: Advancing from Proof of Concept to Production
Generative AI Deep Dive: Advancing from Proof of Concept to ProductionGenerative AI Deep Dive: Advancing from Proof of Concept to Production
Generative AI Deep Dive: Advancing from Proof of Concept to Production
Aggregage
 
Pushing the limits of ePRTC: 100ns holdover for 100 days
Pushing the limits of ePRTC: 100ns holdover for 100 daysPushing the limits of ePRTC: 100ns holdover for 100 days
Pushing the limits of ePRTC: 100ns holdover for 100 days
Adtran
 
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024Encryption in Microsoft 365 - ExpertsLive Netherlands 2024
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024
Albert Hoitingh
 
A tale of scale & speed: How the US Navy is enabling software delivery from l...
A tale of scale & speed: How the US Navy is enabling software delivery from l...A tale of scale & speed: How the US Navy is enabling software delivery from l...
A tale of scale & speed: How the US Navy is enabling software delivery from l...
sonjaschweigert1
 
Introduction to CHERI technology - Cybersecurity
Introduction to CHERI technology - CybersecurityIntroduction to CHERI technology - Cybersecurity
Introduction to CHERI technology - Cybersecurity
mikeeftimakis1
 
Building RAG with self-deployed Milvus vector database and Snowpark Container...
Building RAG with self-deployed Milvus vector database and Snowpark Container...Building RAG with self-deployed Milvus vector database and Snowpark Container...
Building RAG with self-deployed Milvus vector database and Snowpark Container...
Zilliz
 
GraphSummit Singapore | The Future of Agility: Supercharging Digital Transfor...
GraphSummit Singapore | The Future of Agility: Supercharging Digital Transfor...GraphSummit Singapore | The Future of Agility: Supercharging Digital Transfor...
GraphSummit Singapore | The Future of Agility: Supercharging Digital Transfor...
Neo4j
 
“I’m still / I’m still / Chaining from the Block”
“I’m still / I’m still / Chaining from the Block”“I’m still / I’m still / Chaining from the Block”
“I’m still / I’m still / Chaining from the Block”
Claudio Di Ciccio
 
Introducing Milvus Lite: Easy-to-Install, Easy-to-Use vector database for you...
Introducing Milvus Lite: Easy-to-Install, Easy-to-Use vector database for you...Introducing Milvus Lite: Easy-to-Install, Easy-to-Use vector database for you...
Introducing Milvus Lite: Easy-to-Install, Easy-to-Use vector database for you...
Zilliz
 
みなさんこんにちはこれ何文字まで入るの?40文字以下不可とか本当に意味わからないけどこれ限界文字数書いてないからマジでやばい文字数いけるんじゃないの?えこ...
みなさんこんにちはこれ何文字まで入るの?40文字以下不可とか本当に意味わからないけどこれ限界文字数書いてないからマジでやばい文字数いけるんじゃないの?えこ...みなさんこんにちはこれ何文字まで入るの?40文字以下不可とか本当に意味わからないけどこれ限界文字数書いてないからマジでやばい文字数いけるんじゃないの?えこ...
みなさんこんにちはこれ何文字まで入るの?40文字以下不可とか本当に意味わからないけどこれ限界文字数書いてないからマジでやばい文字数いけるんじゃないの?えこ...
名前 です男
 
Large Language Model (LLM) and it’s Geospatial Applications
Large Language Model (LLM) and it’s Geospatial ApplicationsLarge Language Model (LLM) and it’s Geospatial Applications
Large Language Model (LLM) and it’s Geospatial Applications
Rohit Gautam
 
How to use Firebase Data Connect For Flutter
How to use Firebase Data Connect For FlutterHow to use Firebase Data Connect For Flutter
How to use Firebase Data Connect For Flutter
Daiki Mogmet Ito
 
Cosa hanno in comune un mattoncino Lego e la backdoor XZ?
Cosa hanno in comune un mattoncino Lego e la backdoor XZ?Cosa hanno in comune un mattoncino Lego e la backdoor XZ?
Cosa hanno in comune un mattoncino Lego e la backdoor XZ?
Speck&Tech
 

Recently uploaded (20)

Goodbye Windows 11: Make Way for Nitrux Linux 3.5.0!
Goodbye Windows 11: Make Way for Nitrux Linux 3.5.0!Goodbye Windows 11: Make Way for Nitrux Linux 3.5.0!
Goodbye Windows 11: Make Way for Nitrux Linux 3.5.0!
 
Essentials of Automations: The Art of Triggers and Actions in FME
Essentials of Automations: The Art of Triggers and Actions in FMEEssentials of Automations: The Art of Triggers and Actions in FME
Essentials of Automations: The Art of Triggers and Actions in FME
 
Unlock the Future of Search with MongoDB Atlas_ Vector Search Unleashed.pdf
Unlock the Future of Search with MongoDB Atlas_ Vector Search Unleashed.pdfUnlock the Future of Search with MongoDB Atlas_ Vector Search Unleashed.pdf
Unlock the Future of Search with MongoDB Atlas_ Vector Search Unleashed.pdf
 
20240605 QFM017 Machine Intelligence Reading List May 2024
20240605 QFM017 Machine Intelligence Reading List May 202420240605 QFM017 Machine Intelligence Reading List May 2024
20240605 QFM017 Machine Intelligence Reading List May 2024
 
Uni Systems Copilot event_05062024_C.Vlachos.pdf
Uni Systems Copilot event_05062024_C.Vlachos.pdfUni Systems Copilot event_05062024_C.Vlachos.pdf
Uni Systems Copilot event_05062024_C.Vlachos.pdf
 
Full-RAG: A modern architecture for hyper-personalization
Full-RAG: A modern architecture for hyper-personalizationFull-RAG: A modern architecture for hyper-personalization
Full-RAG: A modern architecture for hyper-personalization
 
Artificial Intelligence for XMLDevelopment
Artificial Intelligence for XMLDevelopmentArtificial Intelligence for XMLDevelopment
Artificial Intelligence for XMLDevelopment
 
Generative AI Deep Dive: Advancing from Proof of Concept to Production
Generative AI Deep Dive: Advancing from Proof of Concept to ProductionGenerative AI Deep Dive: Advancing from Proof of Concept to Production
Generative AI Deep Dive: Advancing from Proof of Concept to Production
 
Pushing the limits of ePRTC: 100ns holdover for 100 days
Pushing the limits of ePRTC: 100ns holdover for 100 daysPushing the limits of ePRTC: 100ns holdover for 100 days
Pushing the limits of ePRTC: 100ns holdover for 100 days
 
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024Encryption in Microsoft 365 - ExpertsLive Netherlands 2024
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024
 
A tale of scale & speed: How the US Navy is enabling software delivery from l...
A tale of scale & speed: How the US Navy is enabling software delivery from l...A tale of scale & speed: How the US Navy is enabling software delivery from l...
A tale of scale & speed: How the US Navy is enabling software delivery from l...
 
Introduction to CHERI technology - Cybersecurity
Introduction to CHERI technology - CybersecurityIntroduction to CHERI technology - Cybersecurity
Introduction to CHERI technology - Cybersecurity
 
Building RAG with self-deployed Milvus vector database and Snowpark Container...
Building RAG with self-deployed Milvus vector database and Snowpark Container...Building RAG with self-deployed Milvus vector database and Snowpark Container...
Building RAG with self-deployed Milvus vector database and Snowpark Container...
 
GraphSummit Singapore | The Future of Agility: Supercharging Digital Transfor...
GraphSummit Singapore | The Future of Agility: Supercharging Digital Transfor...GraphSummit Singapore | The Future of Agility: Supercharging Digital Transfor...
GraphSummit Singapore | The Future of Agility: Supercharging Digital Transfor...
 
“I’m still / I’m still / Chaining from the Block”
“I’m still / I’m still / Chaining from the Block”“I’m still / I’m still / Chaining from the Block”
“I’m still / I’m still / Chaining from the Block”
 
Introducing Milvus Lite: Easy-to-Install, Easy-to-Use vector database for you...
Introducing Milvus Lite: Easy-to-Install, Easy-to-Use vector database for you...Introducing Milvus Lite: Easy-to-Install, Easy-to-Use vector database for you...
Introducing Milvus Lite: Easy-to-Install, Easy-to-Use vector database for you...
 
みなさんこんにちはこれ何文字まで入るの?40文字以下不可とか本当に意味わからないけどこれ限界文字数書いてないからマジでやばい文字数いけるんじゃないの?えこ...
みなさんこんにちはこれ何文字まで入るの?40文字以下不可とか本当に意味わからないけどこれ限界文字数書いてないからマジでやばい文字数いけるんじゃないの?えこ...みなさんこんにちはこれ何文字まで入るの?40文字以下不可とか本当に意味わからないけどこれ限界文字数書いてないからマジでやばい文字数いけるんじゃないの?えこ...
みなさんこんにちはこれ何文字まで入るの?40文字以下不可とか本当に意味わからないけどこれ限界文字数書いてないからマジでやばい文字数いけるんじゃないの?えこ...
 
Large Language Model (LLM) and it’s Geospatial Applications
Large Language Model (LLM) and it’s Geospatial ApplicationsLarge Language Model (LLM) and it’s Geospatial Applications
Large Language Model (LLM) and it’s Geospatial Applications
 
How to use Firebase Data Connect For Flutter
How to use Firebase Data Connect For FlutterHow to use Firebase Data Connect For Flutter
How to use Firebase Data Connect For Flutter
 
Cosa hanno in comune un mattoncino Lego e la backdoor XZ?
Cosa hanno in comune un mattoncino Lego e la backdoor XZ?Cosa hanno in comune un mattoncino Lego e la backdoor XZ?
Cosa hanno in comune un mattoncino Lego e la backdoor XZ?
 

Investigating the Use of Synchronized Clocks in TCP Congestion Control

Background
  TCP Congestion Window
• [sender/receiver timeline: with cwnd = 3, the sender transmits
  data 1–3, the receiver returns ACKs 2–4, and the sender sends
  data 4–6 in the next RTT]
• throughput = cwnd / RTT
                                                                   8
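The throughput relationship on this slide can be sketched as a small function. The 1500-byte packet size and the unit choices are assumptions for the example, not part of the slide.

```python
# Illustration of throughput = cwnd / RTT: a window of cwnd packets
# is delivered once per round-trip time. Packet size (1500 bytes)
# and bits-per-second units are assumptions for this example.

def throughput_bps(cwnd_packets: int, rtt_s: float, packet_bytes: int = 1500) -> float:
    """Steady-state TCP throughput in bits per second."""
    return cwnd_packets * packet_bytes * 8 / rtt_s

# e.g. cwnd = 3 packets and RTT = 100 ms, as in the slide's timeline
rate = throughput_bps(3, 0.1)  # 360000.0 bps = 360 kbps
```

Doubling cwnd (or halving RTT) doubles the rate, which is why TCP controls its sending rate entirely through the congestion window.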
Background
  TCP Congestion Control
• Available network bandwidth is unknown
• TCP probes the network by increasing the congestion window
  when ACKs return
• TCP backs off by reducing the congestion window when loss is
  detected
                                                                   9
Background
  TCP Reno Loss Detection
• 3 duplicate ACKs
  – reduce congestion window by 50%
• Retransmission timeout
  – reduce congestion window to 1 packet
• [figure: congestion window (throughput = cwnd / RTT) over time,
  showing packet drops (x), duplicate ACKs, and a timeout]
                                                                  10
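The two Reno reactions on this slide can be sketched as a single function. This is only the window-adjustment rule, not Reno's full state machine (no fast recovery or slow start is modeled).

```python
# Sketch of TCP Reno's two loss reactions from this slide:
# 3 duplicate ACKs halve the congestion window; a retransmission
# timeout collapses it to 1 packet. cwnd is in packets.

def reno_react(cwnd: int, event: str) -> int:
    if event == "3_dup_acks":
        return max(1, cwnd // 2)  # reduce congestion window by 50%
    if event == "timeout":
        return 1                  # reduce congestion window to 1 packet
    return cwnd                   # other events leave cwnd unchanged

# e.g. a 10-packet window after 3 duplicate ACKs vs. after a timeout
after_dup = reno_react(10, "3_dup_acks")  # 5
after_to = reno_react(10, "timeout")      # 1
```

The asymmetry (halving vs. collapsing to 1) reflects how much evidence of congestion each event provides: duplicate ACKs show data is still flowing, while a timeout suggests the pipe has drained.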
Background
  TCP Reno Data Recovery
• [sender/receiver timeline: data 1–5 sent and data 2 is lost (X);
  each later segment triggers a duplicate ACK 2; the sender
  retransmits data 2 and continues with data 6]
                                                                  11
The Problem
  TCP Congestion Control
• Overflows queues in search for more resources
• Uses packet loss as its only indicator of congestion
  – relies on a binary signal of congestion
                                                                  12
The Problem
  Congestion Control
• TCP Reno: React to packet loss
  – reduce sending rate only when packets are lost
  – perform congestion control only when it is time to retransmit
    lost packets
• Goal: React to congestion early and avoid losses
  – congestion occurs before packets are lost
  – decouple congestion control and retransmission
• [figures: congestion window over time for each approach, with
  packet drops (x), duplicate ACKs, and a timeout]
                                                                  13
Related Work
  Congestion Control
• End-to-End
  – TCP Reno is the problem
  – [figure: adaptation at the end hosts across the Internet]
• Router-based
  – drop-tail queues are the problem
  – active queue management (AQM)
  – [figure: adaptation at the routers]
                                                                  14
Related Work
  Congestion Control
• End-to-End
  – Delay-based congestion control [R. Jain, 1989]
  – TCP Vegas [Brakmo, O’Malley, Peterson, 1994]
  – TCP Santa Cruz [Parsa, Garcia-Luna-Aceves, 1999]
  – TCP Westwood [Mascolo, Casetti, Gerla, Sanadidi, Wang, 2001]
  – TCP Peach [Akyildiz, Morabito, Palazzo, 2001]
  – Binomial algorithms [Bansal, Balakrishnan, 2001]
• Router-based
  – DECbit [Ramakrishnan, R. Jain, 1990]
  – Random Early Detection (RED) [Floyd, Jacobson, 1993]
  – Explicit Congestion Notification (ECN) [Floyd, 1994]
  – Adaptive RED [Floyd, Gummadi, Shenker, 2001]
                                                                  15
Thesis Statement

Precise knowledge of one-way transit times can be used to
improve the performance of TCP congestion control.
                                                                  16
Thesis Statement

Precise knowledge of one-way transit times can be used to
improve the performance of TCP congestion control.

• network-level metrics: packet loss and average queue sizes at
  congested routers
• application-level metrics: HTTP response times and goodput per
  HTTP response
                                                                  17
Thesis Statement

Precise knowledge of one-way transit times can be used to
improve the performance of TCP congestion control.

• provide lower packet loss and lower queue sizes than TCP Reno
• provide lower HTTP response times and higher goodput per HTTP
  response than TCP Reno
                                                                  18
My Approach
1. Exchange exact timing information
2. Detect congestion
3. React to congestion
4. Sync-TCP congestion control
5. Evaluate Sync-TCP vs. TCP Reno
                                                                  19
Sync-TCP
  Synchronized Clocks
• Allow measurement of OTT
• Methods of synchronization
  – Global Positioning System (GPS)
  – Network Time Protocol (NTP)
                                                                  20
Sync-TCP
  TCP Header Option
• New option in the TCP header
  – 14 bytes
  – fields: OTT (ms), timestamp, echo reply
• [figure: 32-bit-wide TCP header layout — source/dest ports,
  sequence and acknowledgment numbers, header length, flags,
  receiver window size, checksum, urgent pointer, options
  (variable length), application data — with the Sync-TCP option
  carried in the options field]
                                                                  21
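A minimal sketch of packing the 14-byte option described on this slide. The slide gives only the total length and the three fields (OTT in ms, timestamp, echo reply); the exact layout below (1-byte kind, 1-byte length, three signed 32-bit fields) and the experimental option-kind value are assumptions for illustration, not the dissertation's wire format.

```python
import struct

# Hypothetical Sync-TCP option layout: kind (1 byte), length (1 byte),
# then OTT (ms), timestamp, and echo reply as signed 32-bit integers
# in network byte order -- 14 bytes total, matching the slide.
# Signed fields allow the -1 "not yet known" values seen in the
# example slide that follows.

SYNC_TCP_KIND = 253  # experimental TCP option kind (assumed value)
SYNC_TCP_LEN = 14

def pack_sync_tcp_option(ott_ms: int, timestamp: int, echo_reply: int) -> bytes:
    return struct.pack("!BBiii", SYNC_TCP_KIND, SYNC_TCP_LEN,
                       ott_ms, timestamp, echo_reply)

def unpack_sync_tcp_option(raw: bytes):
    kind, length, ott_ms, timestamp, echo = struct.unpack("!BBiii", raw)
    assert kind == SYNC_TCP_KIND and length == SYNC_TCP_LEN
    return ott_ms, timestamp, echo

# e.g. the [2, 8, 5] option from the example slide
opt = pack_sync_tcp_option(2, 8, 5)
```

Round-tripping through `unpack_sync_tcp_option` recovers the three fields, which is all a Sync-TCP sender needs from each ACK.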
Sync-TCP
  Example
• [sender/receiver timeline annotated with [OTT, timestamp, echo
  reply] values, e.g. [-1, 1, -1], [1, 3, 1], [1, 5, 3], [2, 8, 5]]
• Sender’s calculations:
  – time data received = time data sent (echo reply) + OTT
  – time ACK delayed = time ACK sent (timestamp) - time data
    received
  – queuing delay = OTT - minimum OTT
                                                                  22
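The sender's three calculations on this slide translate directly into code. All times are in the same units (ms in the slide), and `min_ott` is the minimum-observed OTT for the flow.

```python
# The three sender-side calculations from this slide.

def time_data_received(time_data_sent: int, ott: int) -> int:
    # time data received = time data sent (echo reply) + OTT
    return time_data_sent + ott

def ack_delay(time_ack_sent: int, t_data_received: int) -> int:
    # time ACK delayed = time ACK sent (timestamp) - time data received
    return time_ack_sent - t_data_received

def queuing_delay(ott: int, min_ott: int) -> int:
    # queuing delay = OTT - minimum-observed OTT
    return ott - min_ott

# Using the slide's last option [OTT=2, timestamp=8, echo reply=5]:
received = time_data_received(5, 2)   # data arrived at time 7
delay = ack_delay(8, received)        # ACK was held for 1 time unit
qdelay = queuing_delay(2, 1)          # 1 ms above the 1 ms minimum OTT
```

Subtracting the ACK delay lets the sender attribute the remaining time to the network, and the queuing-delay estimate is what drives every Sync-TCP detection mechanism on the following slides.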
Sync-TCP
  Congestion Detection
• 50% of maximum-observed queuing delay
  (queuing delay = OTT - minimum-observed OTT)
• 50% of minimum-observed OTT
• Average queuing delay
• Trend analysis of queuing delays
• Trend analysis of the average queuing delay
                                                                  23
Sync-TCP
  Trend Analysis of Average Queuing Delay
• Trend analysis for available bandwidth estimation adapted from
  [Jain and Dovrolis, 2002]
• Operation:
  – compute 9 average queuing delay samples
  – split into 3 groups of 3 samples each
  – compute the median, mi, of each group
  – trend is the relationship of m1, m2, m3
                                                                  24
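The four operation steps above can be sketched as one function. The slide says only that the trend is "the relationship of m1, m2, m3"; classifying anything other than a strictly increasing or strictly decreasing ordering as "no trend" is a simplification assumed here.

```python
from statistics import median

# Sketch of the trend test on this slide (adapted from Jain and
# Dovrolis): take 9 average-queuing-delay samples, split them into
# 3 groups of 3, take each group's median, and classify the trend
# from the ordering of m1, m2, m3.

def queuing_delay_trend(samples: list) -> str:
    assert len(samples) == 9, "the test uses exactly 9 samples"
    m1, m2, m3 = (median(samples[i:i + 3]) for i in (0, 3, 6))
    if m1 < m2 < m3:
        return "increasing"
    if m1 > m2 > m3:
        return "decreasing"
    return "no trend"   # mixed orderings: simplified here
```

Using medians of small groups, rather than the raw samples, makes the test robust to a single delayed or early ACK within a group.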
Sync-TCP
  Trend Analysis of Average Queuing Delay
• On every arriving ACK, compute the smoothed average queuing
  delay from the OTT
• Compute the trend of the average queuing delay
  – after the first 9 ACKs
  – afterwards, every 3 ACKs
• Calculate the average queuing delay as a percentage of the
  maximum-observed queuing delay
  – divide into 25% increments
                                                                  25
Sync-TCP
  Queuing Delay at Router
• [figure: queuing delay at the router (0–100 ms) vs. time
  (260–265 s)]
                                                                  26
Sync-TCP
  Trend Analysis of Average Queuing Delay
• [figure: queuing delay at the router with the computed average
  queuing delay overlaid, marking increasing and decreasing
  trends and the 0%, 25%, 50%, 75%, and max delay levels]
                                                                  27
Sync-TCP
  Congestion Reaction
• Decrease congestion window by 50% upon congestion notification
  – same reaction as TCP Reno to packet loss
• Increase and decrease congestion window according to the
  congestion signal
  – intended to be used with trend analysis of average queuing
    delay congestion detection
  – operates the same as TCP Reno until 9 ACKs have been received
                                                                  28
Sync-TCP
  Congestion Window Adjustment

  average queuing delay   increasing trend        decreasing trend
  75% – max               decrease 50%            no change
  50% – 75%               decrease 25%            increase 10% per RTT
  25% – 50%               decrease 10%            increase 25% per RTT
  0% – 25%                increase 1 packet/RTT   increase 50% per RTT

  (average queuing delay measured as a percentage of the
  maximum-observed queuing delay)
                                                                  29
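The adjustment table on this slide can be sketched as a lookup function. The pairing of trend direction with each action is one reading of the slide's two-column layout (cautious when delay is rising, aggressive when it is falling), so treat the exact cells as an assumption rather than the dissertation's definitive rule.

```python
# Sketch of the Sync-TCP congestion window adjustment, under one
# reading of this slide's table. delay_frac is the smoothed average
# queuing delay as a fraction of the maximum-observed queuing delay;
# increasing says whether the trend test reported a rising delay.
# Returns the new congestion window in packets.

def adjust_cwnd(cwnd: float, delay_frac: float, increasing: bool) -> float:
    if delay_frac >= 0.75:
        return cwnd * 0.5 if increasing else cwnd          # -50% / no change
    if delay_frac >= 0.50:
        return cwnd * 0.75 if increasing else cwnd * 1.10  # -25% / +10% per RTT
    if delay_frac >= 0.25:
        return cwnd * 0.90 if increasing else cwnd * 1.25  # -10% / +25% per RTT
    return cwnd + 1 if increasing else cwnd * 1.5          # +1 pkt / +50% per RTT
```

Unlike Reno's binary loss signal, this maps a graded congestion signal (delay level plus trend) onto graded reactions, which is the core idea of the slide.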
Sync-TCP
  Congestion Control
• Congestion Detection
  – Trend analysis of smoothed average queuing delay
  – 50% of maximum queuing delay
  – 50% of minimum OTT
  – Smoothed average queuing delay
  – Trend analysis of queuing delays
• Congestion Reaction
  – Increase and decrease congestion window according to
    congestion signal
  – Decrease congestion window by 50% upon congestion
    notification
                                                                  30
Evaluation
  Experiment Plan
• NS-2 network simulator
  – assume synchronized clocks
• FTP bulk-transfer traffic
  – examine the steady-state operation of the mechanisms
• HTTP traffic
  – integrate traffic model developed at Bell Labs into NS-2
    • main parameter is average number of HTTP requests per second
  – calibrate HTTP request rate to desired load level
                                                                  31
Evaluation
  HTTP Simulation Environment
• Sync-TCP and TCP Reno flows do not compete
• Two-way traffic
  – measure performance in one direction only
• 70–150 new HTTP requests generated per second
• 45–2,500 HTTP connections active simultaneously
• 250,000 HTTP request-response pairs completed
• [figure: web clients and web servers on both sides of a
  10 Mbps bottleneck, exchanging requests and responses]
                                                                  32
Evaluation
  HTTP Experiment Space
• Sync-TCP congestion control mechanism
  – 50% max queuing delay detection and reduce-by-50% reaction
  – trend analysis of average queuing delay detection and
    adjust-according-to-signal reaction
• TCP for comparison
  – TCP Reno, TCP SACK
• Queuing method for comparison
  – drop-tail, Adaptive RED, Adaptive RED with ECN
• End-to-end load (% of link capacity)
  – 50%, 60%, 70%, 80%, 85%, 90%, 95%, 100%, 105%
• Number of congested links
  – 1, 2 (75% total load, 90% total load, 105% total load)
                                                                  33
Evaluation
  Evaluating HTTP Performance
• Network-level metrics
  – packet loss at bottleneck router
  – queue size at bottleneck router
• Application-level metrics
  – goodput per HTTP response
    • bytes received per second at web client
  – HTTP response times
    • time between sending the request and receiving the entire
      response
                                                                  34
Evaluation
  Average Packet Loss at Bottleneck
• [bar chart: packet loss % (0–8%) vs. offered load (50–95%) for
  TCP Reno and Sync-TCP, annotated with total drop counts
  ranging from 0–300 at the lowest loads up to 400 K (TCP Reno)
  vs. 275 K (Sync-TCP) at the highest loads]
                                                                  35
Evaluation
  Average Queue Size at Bottleneck
• [bar chart: queue size in packets (0–80) vs. offered load
  (50–95%) for TCP Reno and Sync-TCP]
                                                                  36
Evaluation
  Average Goodput per Response
• [bar chart: goodput in kbps (0–160) vs. offered load (50–95%)
  for TCP Reno and Sync-TCP]
                                                                  37
Response Time CDF
  Example
• [figure: cumulative probability vs. HTTP response time
  (0–1400 ms); in this example, ~75% of the responses completed
  in 400 ms or less]
                                                                  38
Response Time CDF
  50% Load
• No large difference between uncongested and congested
• [figure: CDFs of HTTP response times, 0–1400 ms]
                                                                  39
Response Time CDF
  70% Load
• Sync-TCP performs slightly better than TCP Reno
• [figure: CDFs of HTTP response times, 0–1400 ms]
                                                                  40
Response Time CDF
  80% Load
• Sync-TCP performs better than both TCP Reno and AQM
• [figure: CDFs of HTTP response times, 0–1400 ms]
                                                                  41
• 42. Response Time CDF, 85% Load
  [CDF plot of HTTP response times; takeaway: Sync-TCP performs better than both TCP Reno and AQM]
• 43. Evaluation: Early Congestion Detection
  • Sync-TCP early congestion detection operates only after 9 ACKs have been received
    – in practice, HTTP responses > 25 KB
  • Only 7–8% of HTTP responses are > 25 KB
  • HTTP responses < 25 KB do not use Sync-TCP early congestion detection
    – they use standard TCP Reno behavior
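The gating described above can be sketched as follows. This is a hypothetical illustration (class and attribute names are invented): a Sync-TCP sender only consults its early-detection machinery once 9 ACKs have arrived, so short flows, i.e. most HTTP responses, never leave Reno-like behavior.

```python
class SyncTCPSender:
    """Sketch of the warm-up gate for Sync-TCP early congestion detection."""

    ACKS_BEFORE_DETECTION = 9  # per the slide: detection starts after 9 ACKs

    def __init__(self):
        self.acks_received = 0

    def on_ack(self):
        self.acks_received += 1

    def uses_early_detection(self):
        """False for short flows: they behave like plain TCP Reno."""
        return self.acks_received >= self.ACKS_BEFORE_DETECTION
```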
• 44. Evaluation: 85% Load, 48 MB Response
  [time-series plot, 900–1600 s: congestion window (0–68 packets) with packet drops marked, for TCP Reno (17 ms base RTT, 952 drops) vs. Sync-TCP (47 ms base RTT, 190 drops)]
• 45. Conclusions
  • Sync-TCP performs better than TCP Reno in terms of
    – packet loss
    – average queue size
    – goodput per HTTP response
    – HTTP response time
  • Sync-TCP has performance comparable to the "best" TCP and AQM combination
  • Limitations of delay-based congestion control
    – may not compete well with TCP Reno on the same network
    – with many congested links, a decrease in one queue could mask an increase in another
• 46. Summary
  Taking advantage of synchronized clocks in TCP can result in better network performance: synchronized clocks yield exact one-way transit times, which enable early congestion detection, which leads to less packet loss and shorter queues, and thus better overall network performance.
• 47. My Contributions
  • A method for measuring a flow's OTT and returning this exact timing information to the sender
  • A comparison of several methods for using OTTs to detect congestion
  • Sync-TCP: a family of end-to-end congestion control mechanisms based on using OTTs for congestion detection
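The first contribution rests on a simple observation that a short sketch can make concrete. Assuming sender and receiver clocks are synchronized, the sender timestamps each segment, the receiver computes the one-way transit time and echoes it back in the ACK, and the OTT above the minimum observed OTT is attributable to queuing. The function names and fields here are illustrative, not the actual header format used in the dissertation.

```python
def receiver_compute_ott(send_timestamp, recv_clock_now):
    """OTT = receive time minus send time; valid only with synced clocks."""
    return recv_clock_now - send_timestamp

def queuing_delay(ott, min_ott):
    """Delay beyond the propagation floor, attributable to router queues."""
    return ott - min_ott
```

Without synchronized clocks only the round-trip time is measurable, which conflates delay on the forward (data) and reverse (ACK) paths; the OTT isolates the direction where congestion actually occurs.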
• 48. Supporting Work
  • Study of standards-track TCP congestion control and error recovery mechanisms in the context of HTTP traffic
    – Weigle, Jeffay, and Smith, "Quantifying the Effects of Recent Protocol Improvements to Standards-Track TCP," in submission.
  • Additions to NS-2
    – integrated a state-of-the-art random number generator
    – integrated Bell Labs' HTTP traffic model
    – developed a module for delaying and dropping packets on a per-flow basis according to a given distribution
  • Heuristics for determining an appropriate run length for HTTP simulations
• 49. Future Work
  • Further analysis
    – accuracy of clock synchronization
    – multiple congested links
    – Sync-TCP with router support
  • Extensions to Sync-TCP
    – improve congestion detection and reaction
    – ACK compression
    – ACK congestion control
    – improve fairness
  • Uses for synchronized clocks in TCP
    – statistics for time-critical applications
    – wireless devices
• 50. Thank You
  • Committee members: Kevin Jeffay, Don Smith, Ketan Mayer-Patel, Sanjoy Baruah, Bert Dempsey, Jasleen Kaur
  • UNC Department of Computer Science
  • My parents, Mike & Jean Clark
  • My husband, Chris