My PhD defense
May 14, 2003
University of North Carolina, Chapel Hill
Investigating the Use of Synchronized Clocks in TCP Congestion Control
Advisor: Kevin Jeffay
This document summarizes key concepts about congestion control in TCP including:
- TCP uses additive increase multiplicative decrease (AIMD) to dynamically adjust the congestion window size and maintain efficiency and fairness.
- TCP has slow start and congestion avoidance states that govern how the congestion window is adjusted in response to acknowledgements.
- TCP responds to packet loss through fast retransmit, fast recovery, and halving the congestion window size to reduce congestion according to protocols like Tahoe, Reno, and New Reno.
- TCP uses congestion control and avoidance to prevent network congestion collapse. It operates in a distributed manner without centralized control.
- TCP's congestion control is based on additive increase, multiplicative decrease (AIMD) and uses a congestion window and packet pacing to smoothly increase and decrease transmission rates in response to packet loss as a signal of congestion.
- The key mechanisms are slow start for initial rapid ramp up, congestion avoidance for gradual increase, fast retransmit for quick recovery from single losses, and timeout for recovery from multiple losses or ack losses. These mechanisms work together to keep TCP stable and efficient under different network conditions.
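The AIMD, slow-start, and loss-response behavior listed above can be sketched as a toy congestion-window update. This is an illustrative Reno-style simplification (function and event names are ours, not from any real TCP stack):

```python
def update_cwnd(cwnd, ssthresh, event, mss=1):
    """Toy Reno-style congestion window update (units: segments).

    event is one of "ack", "dupack3" (triple duplicate ACK), "timeout".
    """
    if event == "ack":
        if cwnd < ssthresh:
            cwnd += mss               # slow start: doubles per RTT
        else:
            cwnd += mss * mss / cwnd  # congestion avoidance: ~1 segment per RTT
    elif event == "dupack3":
        ssthresh = max(cwnd / 2, 2)   # multiplicative decrease
        cwnd = ssthresh               # fast recovery (simplified)
    elif event == "timeout":
        ssthresh = max(cwnd / 2, 2)
        cwnd = 1                      # severe signal: restart in slow start
    return cwnd, ssthresh
```

Note the asymmetry the summaries describe: a triple duplicate ACK halves the window, while a timeout drops it all the way back to one segment.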
CUBIC is a TCP congestion control algorithm that uses a cubic window growth function to help TCP scale in high bandwidth-delay product networks. It aims to improve scalability, stability, and fairness compared to its predecessor, BIC-TCP. Its window growth is independent of round-trip time and becomes nearly zero around the last maximum window size, which aids stability. CUBIC also incorporates features that keep it friendly to standard TCP implementations.
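The cubic growth function mentioned above can be written down directly. This follows the RFC 8312 form with the conventional constants C = 0.4 and beta = 0.7; the parameter names are ours:

```python
def cubic_window(t, w_max, c=0.4, beta=0.7):
    """CUBIC window growth W(t) = C*(t - K)^3 + w_max, where K is the
    time needed to regrow to w_max after a loss (RFC 8312 form).
    t is the time in seconds since the last congestion event."""
    k = (w_max * (1 - beta) / c) ** (1 / 3)
    return c * (t - k) ** 3 + w_max
```

Right after a loss (t = 0) the window restarts at beta * w_max; growth flattens to nearly zero around t = K, the plateau near the previous maximum that the summary refers to; and only beyond K does the window probe aggressively again.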
CS4344 09/10 Lecture 10: Transport Protocol for Networked Games, by Wei Tsang Ooi
The document discusses transport protocols for networked games and compares TCP and UDP. While TCP provides reliable delivery, it has higher latency than UDP. UDP has lower overhead but is unreliable. The document examines why certain popular games use TCP or UDP and outlines strategies to make TCP perform better for games, such as reducing delays, retransmitting bundles of data, and combining thin streams. It suggests the Stream Control Transmission Protocol (SCTP) as a potentially ideal transport for games since it allows flexibility in reliability and ordering of messages.
TCP uses congestion control to prevent network congestion collapse. It uses additive increase multiplicative decrease (AIMD) where the sending rate is increased slowly but cut in half after a loss. TCP paces packets using a congestion window that limits unacknowledged data. It uses slow start to quickly reach bandwidth and congestion avoidance to increase the window by 1 packet per RTT. This models TCP behavior and shows throughput is related to window size, loss rate, and RTT.
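The closing claim above, that throughput is a function of window size, loss rate, and RTT, is usually captured by the simple "inverse square root of p" model. This is the Mathis et al. formula, one common simple model, and may not be the exact one the source uses:

```python
import math

def tcp_throughput_bps(mss_bytes, rtt_s, loss_rate):
    """Simple steady-state TCP throughput model:
       B ≈ (MSS / RTT) * sqrt(3 / (2 p)), returned in bits per second."""
    return 8 * (mss_bytes / rtt_s) * math.sqrt(1.5 / loss_rate)
```

The model makes the trade-offs concrete: halving the loss rate raises throughput by a factor of sqrt(2), while doubling the RTT halves it.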
This document provides an overview of TCP performance modeling and network simulation using the ns-2 simulator. It begins with background on TCP congestion control algorithms like slow start, congestion avoidance, fast retransmit, and fast recovery. Two analytical models for TCP throughput - a simple model and a more complex model - are described. The document then provides instructions on installing and using the ns-2 network simulator and Otcl scripting language. It explains how to create network topologies in ns-2 including nodes, links, agents and applications. Tracing, monitoring and running simulations are also covered. The document concludes with an example simulation study comparing TCP throughput models to ns-2 results.
The document discusses TCP congestion control algorithms. It describes the Additive Increase Multiplicative Decrease (AIMD) approach where the congestion window (cwnd) is increased linearly but reduced by half when packet loss is detected. Slow start is used to quickly ramp up cwnd initially through exponential growth. Fast retransmit detects lost packets using duplicate ACKs to retransmit earlier. Fast recovery then resumes increasing cwnd after a retransmit. The document also examines algorithms for adaptive retransmission timeouts based on mean and variance of measured round-trip times.
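The mean-and-variance RTO scheme mentioned at the end is the Jacobson/Karels estimator, standardized in RFC 6298. A compact sketch (the class name is ours; the gains, the 4x variance term, and the 1-second floor follow the RFC; units are seconds):

```python
class RtoEstimator:
    """Adaptive retransmission timeout from the smoothed mean (SRTT)
    and mean deviation (RTTVAR) of RTT samples (RFC 6298)."""
    ALPHA, BETA = 1 / 8, 1 / 4

    def __init__(self):
        self.srtt = None
        self.rttvar = None

    def sample(self, r):
        if self.srtt is None:  # first measurement seeds both estimators
            self.srtt, self.rttvar = r, r / 2
        else:
            self.rttvar = (1 - self.BETA) * self.rttvar \
                + self.BETA * abs(self.srtt - r)
            self.srtt = (1 - self.ALPHA) * self.srtt + self.ALPHA * r
        return max(self.srtt + 4 * self.rttvar, 1.0)  # RTO, floored at 1 s
```

Steady RTTs shrink the variance term and hence the timeout; jittery RTTs inflate it, which is exactly what keeps spurious retransmissions rare.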
XPDS13: On Paravirtualizing TCP: Congestion Control on Xen VMs, by Luwei Cheng (The Linux Foundation)
While datacenters are increasingly adopting VMs to provide elastic cloud services, they still rely on traditional TCP for congestion control. In this talk, I will first show that VM scheduling delays can heavily contaminate RTTs sensed by VM senders, preventing TCP from correctly learning the physical network condition. Focusing on the incast problem, which is commonly seen in large-scale distributed data processing such as MapReduce and web search, I find that the solutions that have been developed for *physical* clusters fall short in a Xen *virtual* cluster. Second, I will provide a concrete understanding of the problem, and reveal that the situations that when the sending VM is preempted versus when the receiving VM is preempted, are different. Third, I will introduce my recent attempts on paravirtualizing TCP to overcome the negative effect caused by VM scheduling delays.
The document summarizes performance tests comparing the eXtreme TCP (XTCP) protocol to standard TCP and other TCP variants. Automated tests transferred a 64MB file between servers located around the world via FTP. XTCP consistently achieved download rates 5-13 times faster than standard TCP and showed small performance gains over TCP variants like Vegas, Cubic, and HTCP in most test scenarios. XTCP was able to better detect and utilize available network bandwidth, especially over high latency connections.
Transmission Control Protocol (TCP) is a fundamental protocol of the Internet Protocol Suite. TCP complements the Internet Protocol (IP), which is why the suite is commonly referred to as TCP/IP. TCP detects errors, packet loss, and out-of-order delivery; it requests retransmissions, reorders data, and helps manage network congestion.
Several congestion control algorithms have been developed over the years to improve TCP's performance across various technologies and network conditions.
The purpose of this assignment is to present TCP, network congestion, and congestion control algorithms, and to simulate different algorithms under different network conditions to measure their performance. OPNET IT Guru Academic Edition was used to reproduce previously published projects and obtain the expected results.
The document discusses TCP congestion control algorithms. It describes the Additive Increase Multiplicative Decrease (AIMD) approach where the congestion window (cwnd) is increased linearly but reduced by half when packet loss is detected. Slow start is used to quickly ramp up cwnd initially through exponential growth. Fast retransmit detects lost packets using duplicate ACKs to retransmit earlier. Fast recovery then resumes increasing cwnd after a retransmit. The document also examines algorithms for adaptive retransmission timeouts based on mean and variance of measured round-trip times.
TCP uses congestion control algorithms to dynamically adjust the transmission rate depending on network conditions. It uses three main algorithms:
1. Slow start exponentially increases the congestion window when no congestion is detected.
2. Congestion avoidance additively increases the window once it passes the slow-start threshold, slowing growth as the network nears capacity.
3. Fast recovery allows additive increases when duplicate ACKs are received, indicating a lost packet but not severe congestion.
TCP detects congestion through timeouts or duplicate ACKs and multiplicatively decreases the window size by half in response to avoid worsening congestion. It transitions between these algorithms depending on congestion signs to maximize throughput while avoiding network overload.
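The transitions just described can be tabulated explicitly. This is a toy state machine (state and event names are ours) that mirrors the Reno-style behavior in the summary:

```python
# Transition table among the three states described above.
# States: "slow_start", "congestion_avoidance", "fast_recovery".
TRANSITIONS = {
    ("slow_start", "cwnd>=ssthresh"): "congestion_avoidance",
    ("slow_start", "3_dup_acks"): "fast_recovery",
    ("congestion_avoidance", "3_dup_acks"): "fast_recovery",
    ("fast_recovery", "new_ack"): "congestion_avoidance",
    # A timeout from any state falls back to slow start.
    ("slow_start", "timeout"): "slow_start",
    ("congestion_avoidance", "timeout"): "slow_start",
    ("fast_recovery", "timeout"): "slow_start",
}

def next_state(state, event):
    """Return the next congestion-control state; stay put on other events."""
    return TRANSITIONS.get((state, event), state)
```

Tabulating the transitions makes the two loss signals easy to contrast: duplicate ACKs route through fast recovery, while a timeout always resets to slow start.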
The document describes the GPRS Tunnelling Protocol (GTP) used in 2G and 3G mobile networks. It discusses GTP interfaces and tunnels, message formats including the GTP header, and message groups. The key points are:
1. GTP is used between GPRS Support Nodes (GSNs) and between SGSN and RNC to tunnel user data packets and control signaling messages.
2. The GTP header contains fields for version, message type, length, TEID, and optional fields for sequence number and N-PDU number.
3. GTP messages are grouped into path management messages for path verification, tunnel management messages for context creation/deletion, and location/mobility management messages.
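The fixed part of the header described in point 2 can be packed in a few lines. This sketch covers the 8-byte mandatory GTPv1 header (version, PT, flag bits, message type, length, TEID) plus the optional 4-byte extension carrying the sequence number; treat the exact bit layout as our reading of 3GPP TS 29.060, not authoritative:

```python
import struct

def pack_gtpv1_header(msg_type, length, teid, seq=None):
    """Minimal GTPv1 header sketch: flags, message type, length, TEID,
    plus the optional sequence-number field when seq is given."""
    s_flag = 1 if seq is not None else 0
    flags = (1 << 5) | (1 << 4) | (s_flag << 1)  # version=1, PT=1 (GTP)
    hdr = struct.pack("!BBHI", flags, msg_type, length, teid)
    if seq is not None:
        # When any optional flag is set, the 4-byte extension follows:
        # sequence number, N-PDU number, next extension header type.
        hdr += struct.pack("!HBB", seq, 0, 0)
    return hdr
```

The length field, per the spec, counts the payload after the mandatory 8 bytes, so the optional extension itself is included in it; a real encoder would account for that.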
The document summarizes congestion control versus bufferbloat. It discusses TCP congestion control types like Reno, Vegas, CUBIC, and HTCP. Problems with traditional TCP Reno include slow recovery times at high bandwidths. Delay-based congestion control approaches like Vegas can be too sensitive. Hybrid approaches that combine loss-based and delay-based signals may help address these issues but need further refinement. More real-world testing of congestion control algorithms is needed.
The document describes a tool called TCP Congestion Avoidance Algorithm Identification (CAAI) that was proposed to identify the TCP congestion avoidance algorithm of remote web servers. CAAI works in three steps: 1) it gathers TCP window size traces from web servers in emulated network environments, 2) it extracts features like the multiplicative decrease parameter and window growth function from the traces, and 3) it uses these features to classify the TCP algorithm. Testing CAAI on over 30,000 web servers, it was able to identify the default algorithms used by major operating systems, like RENO for Windows and BIC/CUBIC for Linux, as well as some non-default algorithms.
This document provides an overview of TCP congestion control algorithms. It describes the basic additive increase/multiplicative decrease approach and key mechanisms like slow start, fast retransmit, and fast recovery. It also discusses algorithms for setting the retransmission timeout value and adaptations made in protocols like New Reno and Cubic.
This document discusses several TCP congestion control algorithms: TCP Tahoe, Reno, New Reno, SACK, and Vegas. It provides details on how each algorithm handles slow start, congestion avoidance, fast retransmit, and congestion detection. TCP Vegas is highlighted as being superior to the other algorithms because it can detect and retransmit lost packets faster, has fewer retransmissions, more efficiently measures bandwidth availability, and experiences less congestion overall through proactive congestion detection and modified slow start and congestion avoidance.
This document summarizes key aspects of TCP traffic control as covered in Chapter 12. It discusses TCP flow and congestion control, including how the transmission rate is determined by incoming ACKs. It then covers traffic control fields in the TCP header, credit allocation mechanisms, and the impact of window size on throughput. The document also summarizes TCP congestion control techniques like slow start, congestion avoidance, fast retransmit, fast recovery, and limited transmit. It notes the differentiating impact of "mice vs. elephant" flows on network congestion.
Computer networks have experienced an explosive growth over the past few years and with that growth have come severe congestion problems. For example, it is now common to see internet gateways drop 10% of the incoming packets because of local buffer overflows. Our investigation of some of these problems has shown that much of the cause lies in transport protocol implementations (not in the protocols themselves): The ‘obvious’ ways to implement a window-based transport protocol can result in exactly the wrong behavior in response to network congestion. We give examples of ‘wrong’ behavior and describe some simple algorithms that can be used to make right things happen. The algorithms are rooted in the idea of achieving network stability by forcing the transport connection to obey a ‘packet conservation’ principle. We show how the algorithms derive from this principle and what effect they have on traffic over congested networks.
In October of ’86, the Internet had the first of what became a series of ‘congestion collapses’. During this period, the data throughput from LBL to UC Berkeley (sites separated by 400 yards and two IMP hops) dropped from 32 Kbps to 40 bps. We were fascinated by this sudden factor-of-thousand drop in bandwidth and embarked on an investigation of why things had gotten so bad. In particular, we wondered if the 4.3BSD (Berkeley UNIX) TCP was mis-behaving or if it could be tuned to work better under abysmal network conditions. The answer to both of these questions was “yes”.
TCP-FIT: An Improved TCP Congestion Control Algorithm and its Performance, by Kevin Tong
The document discusses TCP-FIT, a new TCP congestion control algorithm inspired by parallel TCP. TCP-FIT aims to improve TCP performance in scenarios with high bandwidth-delay products (BDP) or over wireless networks. It classifies existing congestion control algorithms and discusses their limitations. TCP-FIT adapts concepts from parallel TCP systems such as GridFTP to achieve high utilization while maintaining compatibility and fairness. Experimental results show TCP-FIT performs well in high-BDP and wireless scenarios and achieves both inter-fairness and RTT-fairness. However, its bandwidth estimation model is simplistic compared to FAST TCP's, resulting in lower performance on networks with large bandwidth variations.
TCP provides reliable data transmission through mechanisms like the three-way handshake, congestion control using AIMD, and fast retransmit. However, it is vulnerable to attacks like RST injection to terminate connections or FIN scans to detect open ports. Defenses include randomizing sequence numbers, stateful firewalls to validate packets, and intrusion detection systems to detect scanning behaviors.
This document summarizes a presentation on congestion control in TCP/IP networks. It discusses basics of congestion and how it can be catastrophic if not handled. It then describes the basic strategies used by TCP to combat congestion, including slow start, congestion avoidance, detection, and illustration of algorithms like fast retransmit and recovery. Issues with wireless networks and variants of TCP like New Reno, Vegas, and Westwood are also summarized. The presentation proposes a new congestion control algorithm and discusses plans to simulate and test it.
The document describes the signaling flow for an originating 3G-UMTS call. It involves the setup of radio bearers between the UE and RNC, as well as signaling sessions between the RNC and core network to authenticate the user, setup a voice bearer, and connect the call. Key events include radio resource control (RRC) connection establishment, core network authentication, security mode command, radio access bearer (RAB) assignment, and call alerting, connection, and release signaling.
TCP uses congestion control to determine how much capacity is available in the network and regulate how many packets can be in transit. It uses additive increase/multiplicative decrease (AIMD) where the congestion window is increased slowly with each ACK but halved upon timeout. Slow start is used initially and after idle periods to grow the window exponentially until congestion is detected. Fast retransmit and fast recovery help detect and recover from packet loss without requiring a timeout.
The document discusses several algorithms used for congestion control in TCP/IP networks, including slow start, congestion avoidance, fast retransmit, fast recovery, random early discard (RED), and traffic shaping using leaky bucket and token bucket algorithms. Slow start and congestion avoidance control the transmission rate by adjusting the congestion window size. Fast retransmit and fast recovery allow quicker retransmission of lost packets without waiting for timeouts. RED proactively discards packets before buffer overflow. Leaky bucket and token bucket algorithms shape traffic flow through use of buffers and tokens to smooth bursts and control transmission rates.
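Of the shaping algorithms listed above, the token bucket is the easiest to show in code. A minimal sketch (class name and the byte-based accounting are ours):

```python
class TokenBucket:
    """Token-bucket traffic shaper: tokens accrue at `rate` units per
    second up to `capacity`; a packet of `size` units may be sent only
    when that many tokens are available."""

    def __init__(self, rate, capacity):
        self.rate, self.capacity = rate, capacity
        self.tokens = capacity  # start full, so an initial burst is allowed
        self.last = 0.0

    def allow(self, now, size):
        # Refill tokens for the elapsed time, capped at the bucket depth.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if size <= self.tokens:
            self.tokens -= size
            return True
        return False
```

This captures the burst-smoothing property the summary mentions: the bucket depth bounds the largest instantaneous burst, while the refill rate bounds the long-term average.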
This document discusses various techniques for congestion control in computer networks. It describes:
1. The difference between congestion control, which deals with overall traffic levels across a network, and flow control, which regulates traffic between two endpoints.
2. Common congestion control techniques like leaky bucket and token bucket algorithms, which shape traffic to prevent bursts that could cause congestion.
3. Other approaches like choke packets, where routers notify sources to reduce their transmission rates if a link becomes congested, and load shedding as a last resort if congestion cannot be avoided.
Lecture 19-22: Transport Protocol for Ad Hoc Networks, by Chandra Meena
This document discusses transport layer protocols for mobile ad hoc networks (MANETs). It begins with an introduction to MANETs and the need for new network architectures and protocols to support new types of networks. It then provides an overview of TCP/IP and how TCP works, including congestion control mechanisms. The document discusses challenges for TCP over wireless networks, where packet losses are often due to errors rather than congestion. It covers different versions of TCP and their approaches to congestion control. The goal is to design transport layer protocols that can address the unreliable links and frequent topology changes in MANETs.
Here are the key steps of reverse path broadcasting/multicasting using the example network:
1. Router S sends the multicast packet and all routers know the shortest path to S is directly through S.
2. Router directly connected to S forwards the packet to all other ports except the port it arrived on (parent port).
3. Subsequent routers forward the packet to all ports except the parent port, following the shortest path to S in reverse.
4. This process continues until all destinations are reached, with each router forwarding only once.
The end result is that an efficient multicast distribution tree is built to reach all destinations.
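The per-router decision in steps 2-3 reduces to one check. A minimal sketch of the reverse-path forwarding rule (the port representation is ours):

```python
def rpf_decision(arrival_port, parent_port, all_ports):
    """Reverse-path forwarding at one router: accept and flood only a
    packet that arrives on the parent port (the port on this router's
    shortest path back to the source); forward it on every other port.

    Returns the list of output ports, empty if the packet is dropped.
    """
    if arrival_port != parent_port:
        return []  # not on the reverse path: drop, preventing loops
    return [p for p in all_ports if p != parent_port]
```

Because each router floods a packet only when it arrives via the shortest path back to the source, every router forwards each packet at most once, which is what keeps the resulting distribution tree loop-free.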
1) Congestion occurs when there are too many sources sending too much data too fast for the network to handle, leading to lost packets and long delays. TCP uses congestion control to address this problem.
2) TCP uses Additive Increase Multiplicative Decrease (AIMD) congestion control, where it slowly increases the transmission rate and halves it upon detecting packet loss, exhibiting a sawtooth pattern.
3) Key TCP congestion control mechanisms include slow start for initial exponential increase, fast retransmit to quickly retransmit lost packets based on duplicate ACKs, and fast recovery to resume transmission after a fast retransmit.
Here is a strategy the prisoners could employ:
1. On the first day, the prisoner who visits the switch room toggles one of the switches to the ON position.
2. On subsequent days, the prisoner toggles the other switch if it is in the OFF position, or says "all prisoners have visited" if both switches are in the ON position.
3. This strategy guarantees that after 31 days, both switches will be in the ON position, allowing the prisoner to correctly say "all prisoners have visited" and ensure all prisoners are set free.
The document summarizes performance tests comparing the eXtreme TCP (XTCP) protocol to standard TCP and other TCP variants. Automated tests transferred a 64MB file between servers located around the world via FTP. XTCP consistently achieved download rates 5-13 times faster than standard TCP and showed small performance gains over TCP variants like Vegas, Cubic, and HTCP in most test scenarios. XTCP was able to better detect and utilize available network bandwidth, especially over high latency connections.
Transmission Control Protocol (TCP) is a fundamental protocol of the Internet Protocol Suite. TCP complements the Internet Protocol (IP), therefore it is common to refer to the internet protocol suit as TCP/IP. TCP is used for error detection, detection of packet loss or out of order delivery of data. TCP requests retransmission, rearranges data and helps with network congestion.
Several congestion control algorithms have been developed, over the last years, to improve TCP's performance over various technologies and network conditions.
The purpose of this assignment is to present TCP, network congestion, congestion algorithms and simulate different algorithms in different network conditions to measure their performance. For this assignment's needs, OPNET IT Guru Academic Edition software was used to accomplish the reproduction of projects that have been already published and gave the wanted results.
The document discusses TCP congestion control algorithms. It describes the Additive Increase Multiplicative Decrease (AIMD) approach where the congestion window (cwnd) is increased linearly but reduced by half when packet loss is detected. Slow start is used to quickly ramp up cwnd initially through exponential growth. Fast retransmit detects lost packets using duplicate ACKs to retransmit earlier. Fast recovery then resumes increasing cwnd after a retransmit. The document also examines algorithms for adaptive retransmission timeouts based on mean and variance of measured round-trip times.
TCP uses congestion control algorithms to dynamically adjust the transmission rate depending on network conditions. It uses three main algorithms:
1. Slow start exponentially increases the congestion window when no congestion is detected.
2. Congestion avoidance additively increases the window when congestion is detected to slow growth.
3. Fast recovery allows additive increases when duplicate ACKs are received, indicating a lost packet but not severe congestion.
TCP detects congestion through timeouts or duplicate ACKs and multiplicatively decreases the window size by half in response to avoid worsening congestion. It transitions between these algorithms depending on congestion signs to maximize throughput while avoiding network overload.
The document describes the GPRS Tunnelling Protocol (GTP) used in 2G and 3G mobile networks. It discusses GTP interfaces and tunnels, message formats including the GTP header, and message groups. The key points are:
1. GTP is used between GPRS Support Nodes (GSNs) and between SGSN and RNC to tunnel user data packets and control signaling messages.
2. The GTP header contains fields for version, message type, length, TEID, and optional fields for sequence number and N-PDU number.
3. GTP messages are grouped into path management messages for path verification, tunnel management messages for context creation/deletion, and location/mobility management messages
The document summarizes congestion control versus bufferbloat. It discusses TCP congestion control types like Reno, Vegas, CUBIC and HTCP. Problems with traditional TCP Reno include slow recovery times at high bandwidths. Delay based congestion control approaches like Vegas can be too sensitive. Hybrid approaches that combine loss-based and delay-based may help address issues but need further refinement. More real-world testing of congestion control algorithms is needed.
The document describes a tool called TCP Congestion Avoidance Algorithm Identification (CAAI) that was proposed to identify the TCP congestion avoidance algorithm of remote web servers. CAAI works in three steps: 1) it gathers TCP window size traces from web servers in emulated network environments, 2) it extracts features like the multiplicative decrease parameter and window growth function from the traces, and 3) it uses these features to classify the TCP algorithm. Testing CAAI on over 30,000 web servers, it was able to identify the default algorithms used by major operating systems, like RENO for Windows and BIC/CUBIC for Linux, as well as some non-default algorithms.
This document provides an overview of TCP congestion control algorithms. It describes the basic additive increase/multiplicative decrease approach and key mechanisms like slow start, fast retransmit, and fast recovery. It also discusses algorithms for setting the retransmission timeout value and adaptations made in protocols like New Reno and Cubic.
This document discusses several TCP congestion control algorithms: TCP Tahoe, Reno, New Reno, SACK, and Vegas. It provides details on how each algorithm handles slow start, congestion avoidance, fast retransmit, and congestion detection. TCP Vegas is highlighted as being superior to the other algorithms because it can detect and retransmit lost packets faster, has fewer retransmissions, more efficiently measures bandwidth availability, and experiences less congestion overall through proactive congestion detection and modified slow start and congestion avoidance.
This document summarizes key aspects of TCP traffic control as covered in Chapter 12. It discusses TCP flow and congestion control, including how the transmission rate is determined by incoming ACKs. It then covers traffic control fields in the TCP header, credit allocation mechanisms, and the impact of window size on throughput. The document also summarizes TCP congestion control techniques like slow start, congestion avoidance, fast retransmit, fast recovery, and limited transmit. It notes the differentiating impact of "mice vs. elephant" flows on network congestion.
Computer networks have experienced an explosive growth over the past few years and with
that growth have come severe congestion problems. For example, it is now common to see
internet gateways drop 10% of the incoming packets because of local buffer overflows.
Our investigation of some of these problems has shown that much of the cause lies in
transport protocol implementations (
not
in the protocols themselves): The ‘obvious’ ways
to implement a window-based transport protocol can result in exactly the wrong behavior
in response to network congestion. We give examples of ‘wrong’ behavior and describe
some simple algorithms that can be used to make right things happen. The algorithms are
rooted in the idea of achieving network stability by forcing the transport connection to obey
a ‘packet conservation’ principle. We show how the algorithms derive from this principle
and what effect they have on traffic over congested networks.
In October of ’86, the Internet had the first of what became a series of ‘congestion col-
lapses’. During this period, the data throughput from LBL to UC Berkeley (sites separated
by 400 yards and two IMP hops) dropped from 32 Kbps to 40 bps. We were fascinated by
this sudden factor-of-thousand drop in bandwidth and embarked on an investigation of why
things had gotten so bad. In particular, we wondered if the 4.3BSD (Berkeley UNIX) TCP
was mis-behaving or if it could be tuned to work better under abysmal network conditions.
The answer to both of these questions was “yes”.
TCP-FIT: An Improved TCP Congestion Control Algorithm and its Performance - Kevin Tong
The document discusses TCP-FIT, a new TCP congestion control algorithm inspired by parallel TCP. TCP-FIT aims to improve TCP performance in scenarios with high bandwidth-delay products (BDP) or wireless networks. It classifies congestion control algorithms and discusses their limitations. TCP-FIT adapts concepts from parallel TCP like GridFTP to achieve high utilization while maintaining compatibility and fairness. Experimental results show TCP-FIT performs well in BDP and wireless scenarios and achieves inter-fairness and RTT-fairness. However, its bandwidth estimation model is simplistic compared to FAST TCP, resulting in lower performance on networks with large bandwidth variations.
TCP provides reliable data transmission through mechanisms like the three-way handshake, congestion control using AIMD, and fast retransmit. However, it is vulnerable to attacks like RST injection to terminate connections or FIN scans to detect open ports. Defenses include randomizing sequence numbers, stateful firewalls to validate packets, and intrusion detection systems to detect scanning behaviors.
This document summarizes a presentation on congestion control in TCP/IP networks. It discusses basics of congestion and how it can be catastrophic if not handled. It then describes the basic strategies used by TCP to combat congestion, including slow start, congestion avoidance, detection, and illustration of algorithms like fast retransmit and recovery. Issues with wireless networks and variants of TCP like New Reno, Vegas, and Westwood are also summarized. The presentation proposes a new congestion control algorithm and discusses plans to simulate and test it.
The document describes the signaling flow for an originating 3G-UMTS call. It involves the setup of radio bearers between the UE and RNC, as well as signaling sessions between the RNC and core network to authenticate the user, setup a voice bearer, and connect the call. Key events include radio resource control (RRC) connection establishment, core network authentication, security mode command, radio access bearer (RAB) assignment, and call alerting, connection, and release signaling.
TCP uses congestion control to determine how much capacity is available in the network and regulate how many packets can be in transit. It uses additive increase/multiplicative decrease (AIMD) where the congestion window is increased slowly with each ACK but halved upon timeout. Slow start is used initially and after idle periods to grow the window exponentially until congestion is detected. Fast retransmit and fast recovery help detect and recover from packet loss without requiring a timeout.
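The AIMD and slow-start behavior described above can be sketched as per-RTT window updates, with cwnd and ssthresh in units of segments (a simplified model, not a full TCP implementation):

```python
def aimd_on_ack(cwnd, ssthresh):
    """Window growth per RTT: exponential in slow start, additive after."""
    if cwnd < ssthresh:
        return cwnd * 2        # slow start: double each RTT
    return cwnd + 1            # congestion avoidance: +1 segment per RTT

def aimd_on_loss(cwnd):
    """Multiplicative decrease: halve the window on a loss signal."""
    return max(cwnd // 2, 1)
```

Iterating these two rules produces the familiar sawtooth: exponential ramp-up to ssthresh, linear probing beyond it, and a halving each time loss signals congestion.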
The document discusses several algorithms used for congestion control in TCP/IP networks, including slow start, congestion avoidance, fast retransmit, fast recovery, random early discard (RED), and traffic shaping using leaky bucket and token bucket algorithms. Slow start and congestion avoidance control the transmission rate by adjusting the congestion window size. Fast retransmit and fast recovery allow quicker retransmission of lost packets without waiting for timeouts. RED proactively discards packets before buffer overflow. Leaky bucket and token bucket algorithms shape traffic flow through use of buffers and tokens to smooth bursts and control transmission rates.
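The RED behavior mentioned above — discarding packets before the buffer actually overflows — follows a simple drop-probability curve over the average queue length. A sketch with illustrative thresholds:

```python
def red_drop_prob(avg_queue, min_th, max_th, max_p=0.1):
    """RED: drop probability rises linearly between the two thresholds."""
    if avg_queue < min_th:
        return 0.0             # light load: never drop
    if avg_queue >= max_th:
        return 1.0             # severe congestion: drop every arrival
    return max_p * (avg_queue - min_th) / (max_th - min_th)
```

Because the early drops are randomized, different TCP flows back off at different times, avoiding the global synchronization that tail-drop queues can cause.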
This document discusses various techniques for congestion control in computer networks. It describes:
1. The difference between congestion control, which deals with overall traffic levels across a network, and flow control, which regulates traffic between two endpoints.
2. Common congestion control techniques like leaky bucket and token bucket algorithms, which shape traffic to prevent bursts that could cause congestion.
3. Other approaches like choke packets, where routers notify sources to reduce their transmission rates if a link becomes congested, and load shedding as a last resort if congestion cannot be avoided.
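The token bucket shaping in item 2 can be sketched as follows (a toy model with rate in tokens per second; parameter names are illustrative):

```python
class TokenBucket:
    """Token-bucket shaper: tokens accrue at `rate` up to `capacity`;
    a packet is sent only if enough tokens are available, so bursts
    are bounded by the bucket size."""
    def __init__(self, rate, capacity):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity      # start with a full bucket

    def tick(self, seconds=1):
        """Advance time, refilling tokens up to the bucket capacity."""
        self.tokens = min(self.capacity, self.tokens + self.rate * seconds)

    def try_send(self, size):
        """Consume tokens for a packet of `size` if possible."""
        if self.tokens >= size:
            self.tokens -= size
            return True
        return False                # not enough tokens: queue or drop
```

Unlike a leaky bucket, which emits at a fixed rate, the token bucket lets an idle source save up credit and send a bounded burst later.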
Lecture 19-22: Transport Protocol for Ad-hoc Networks - Chandra Meena
This document discusses transport layer protocols for mobile ad hoc networks (MANETs). It begins with an introduction to MANETs and the need for new network architectures and protocols to support new types of networks. It then provides an overview of TCP/IP and how TCP works, including congestion control mechanisms. The document discusses challenges for TCP over wireless networks, where packet losses are often due to errors rather than congestion. It covers different versions of TCP and their approaches to congestion control. The goal is to design transport layer protocols that can address the unreliable links and frequent topology changes in MANETs.
Here are the key steps of reverse path broadcasting/multicasting using the example network:
1. Router S sends the multicast packet and all routers know the shortest path to S is directly through S.
2. Router directly connected to S forwards the packet to all other ports except the port it arrived on (parent port).
3. Subsequent routers forward the packet to all ports except the parent port, following the shortest path to S in reverse.
4. This process continues until all destinations are reached, with each router forwarding only once.
The end result is that an efficient multicast distribution tree is built to reach all destinations.
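The per-router decision in steps 2-3 reduces to a reverse-path check; a minimal sketch (the port numbering is hypothetical):

```python
def rpf_forward(arrival_port, parent_port, ports):
    """Reverse-path forwarding: accept a multicast packet only if it
    arrived on the port this router uses on its shortest path back to
    the source (the parent port); then flood to every other port."""
    if arrival_port != parent_port:
        return []                  # off the reverse path: drop (no loops)
    return [p for p in ports if p != arrival_port]
```

Dropping packets that arrive off the reverse path is what guarantees each router forwards a given packet only once, yielding the loop-free distribution tree described above.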
1) Congestion occurs when there are too many sources sending too much data too fast for the network to handle, leading to lost packets and long delays. TCP uses congestion control to address this problem.
2) TCP uses Additive Increase Multiplicative Decrease (AIMD) congestion control, where it slowly increases the transmission rate and halves it upon detecting packet loss, exhibiting a sawtooth pattern.
3) Key TCP congestion control mechanisms include slow start for initial exponential increase, fast retransmit to quickly retransmit lost packets based on duplicate ACKs, and fast recovery to resume transmission after a fast retransmit.
Here is a strategy the prisoners could employ:
1. On the first day, the prisoner who visits the switch room toggles one of the switches to the ON position.
2. On subsequent days, the prisoner toggles the other switch if it is in the OFF position, or says "all prisoners have visited" if both switches are in the ON position.
3. This strategy guarantees that after 31 days, both switches will be in the ON position, allowing the prisoner to correctly say "all prisoners have visited" and ensure all prisoners are set free.
TCP provides reliable data transfer over unreliable packet networks by using acknowledgments, retransmissions, and adaptive congestion control. It works with IP to transfer data through routers that may drop packets. While TCP ensures reliable delivery, it must control its transmission rate to avoid overwhelming network capacity and causing congestion collapse. This is achieved through additive-increase, multiplicative-decrease of the congestion window and techniques like active queue management.
TCP uses congestion control algorithms like AIMD (Additive Increase Multiplicative Decrease) to adjust the congestion window size (cwnd) in response to indications of congestion from dropped or marked packets. Cwnd is increased linearly but decreased multiplicatively in half upon timeouts. Slow start is used initially and after timeouts to exponentially increase cwnd through doubling. Fast retransmit and fast recovery improve on timeouts by using duplicate ACKs to trigger early retransmits. Adaptive retransmission algorithms factor in the variability of measured RTTs.
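The adaptive retransmission rule mentioned above smooths measured RTTs and their variability into a retransmission timeout; a sketch using the standard Jacobson/Karels update (gains and the 4x variance term as in RFC 6298, times in seconds):

```python
def update_rto(srtt, rttvar, sample, alpha=0.125, beta=0.25):
    """One RTT sample's update of smoothed RTT, RTT variance, and RTO."""
    rttvar = (1 - beta) * rttvar + beta * abs(srtt - sample)  # track spread
    srtt = (1 - alpha) * srtt + alpha * sample                # track mean
    rto = srtt + 4 * rttvar                                   # timeout margin
    return srtt, rttvar, rto
```

Folding the variance into the timeout is what keeps RTO tight on stable paths yet conservative when measured RTTs fluctuate.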
This document discusses features of TCP (Transmission Control Protocol):
- TCP is a widely used transport layer protocol that provides reliable, ordered, and error-checked delivery of data between applications running on hosts communicating over an IP network.
- Key TCP features include segment numbering, flow control using sliding windows, error control using checksums and acknowledgements, congestion control, and connection-oriented data transfer.
- TCP guarantees delivery of all bytes in the correct order through mechanisms like retransmission of lost or corrupted segments, discarding of duplicate segments, and temporary buffering of out-of-order segments.
LF_OVS_17_OVS/OVS-DPDK connection tracking for Mobile use cases - LF_OpenvSwitch
1) Mobile networks today handle a large number of simultaneous short duration flows, with high call rates of 100k-200k connections per second. Statistics like call duration and bandwidth usage need to be tracked for each flow for billing purposes.
2) Testing was conducted injecting a 10Gbps mobile traffic profile of 1 million flows into OVS, with 200k flows created and destroyed per second. Key metrics measured were maximum throughput, latency, and jitter at different flow table sizes and core counts.
3) Conntrack performance was tested for OVS kernel and DPDK versions. For 100k flows, OVS kernel achieved 152k pps for 4-tuple matching while OVS-DPDK achieved
TFWC is a proposed window-based congestion control algorithm that is designed to be TCP-friendly for real-time multimedia applications, while addressing some issues with the standard rate-based TFRC algorithm. TFWC uses a TCP-like acknowledgment clock and window sizing equation to achieve smooth throughput similar to TFRC, but provides better fairness when competing with TCP traffic and is simpler to implement without needing to measure round-trip times. Analysis shows that TFWC provides fairness comparable to TFRC, smoothness on par with TFRC, and faster responsiveness to changes in available bandwidth.
Improving Distributed TCP Caching for Wireless Sensor Networks - Ahmed Ayadi
The document proposes an enhanced distributed TCP caching (EDTC) approach to improve TCP performance over wireless sensor networks. EDTC improves upon distributed TCP caching (DTC) by detecting and handling TCP acknowledgment losses, disabling unnecessary retransmissions, and using a smoothed retransmission timeout value. Simulation results show that EDTC reduces energy consumption and transfer duration compared to DTC and TCP, especially in high packet loss networks.
Cvc2009 Moscow Repeater+ICA Fabian Kienle Final - Liudmila Li
Citrix Repeater 5.0 introduces new ICA acceleration capabilities that optimize ICA traffic between branch offices and central data centers. It does this by caching common data at the branch repeater and avoiding sending redundant information over the WAN link. This improves performance for activities like printing, file sharing, and using common Microsoft Office documents across multiple users. The new ICA acceleration is most effective for applications with identical window contents and redundant data, and less so for applications like Adobe and CAD that have little common data. It requires XenApp 5.0 servers and supported Citrix appliances.
DCTCP is an enhancement to TCP for data center networks that leverages ECN to provide more granular congestion feedback than standard TCP. It estimates the fraction of marked packets to react to the extent of congestion rather than just its presence. This finer control allows DCTCP to maintain low buffer occupancy while achieving high throughput. The algorithm adapts the window size based on the marked packet ratio in each RTT to be more responsive to congestion than standard TCP. DCTCP addresses the key requirements for data center networks of handling bursty traffic, low latency, and high throughput.
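The DCTCP window adaptation summarized above can be sketched as a per-RTT update, where the window cut scales with the fraction of ECN-marked packets (g is the commonly used gain of 1/16; a simplified model):

```python
def dctcp_update(cwnd, alpha, marked, total, g=1/16):
    """One RTT of DCTCP: estimate congestion extent, cut proportionally."""
    f = marked / total                  # fraction of packets marked this RTT
    alpha = (1 - g) * alpha + g * f     # EWMA of congestion extent
    if marked:
        cwnd = cwnd * (1 - alpha / 2)   # mild congestion -> small cut
    return cwnd, alpha
```

When only a few packets are marked, alpha stays small and the cut is gentle; when every packet is marked, the update converges to the standard TCP halving — which is exactly the graded response the summary describes.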
This document discusses various transport layer protocols for mobile networks. It begins by describing TCP and its mechanisms for congestion avoidance, flow control, slow start, and retransmission. It then covers several TCP variants including Tahoe, Reno, and Vegas. It also discusses indirect TCP, Snoop TCP, and Mobile TCP which aim to optimize TCP for wireless networks by handling retransmissions locally or splitting the connection. The document provides details on the algorithms and functioning of these different protocols.
This document discusses transport protocols and how they have been optimized for large data transfers but are not as well suited for the small file transfers that now dominate web traffic. It describes several key aspects of TCP including flow control using a sliding window, congestion control algorithms like slow start and congestion avoidance, and mechanisms for detecting and responding to packet loss like fast retransmit. It notes how TCP was adapted over time, including additions like fast recovery, and alternatives like TCP Vegas which aims to avoid rather than just respond to congestion. The document provides historical context and details on TCP implementations.
Toward an Understanding of the Processing Delay of Peer-to-Peer Relay Nodes - Academia Sinica
Peer-to-peer relaying is commonly used in realtime applications to cope with NAT and firewall restrictions and provide better quality network paths. As relaying is not natively supported by the Internet, it is usually implemented at the application layer. Also, in a modern operating system, the processor is shared, so the receive-process-forward process for each relay packet may take a considerable amount of time if the host is busy handling some other tasks. Thus, if we happen to select a loaded relay node, the relaying may introduce significant delays to the packet transmission time and even degrade the application performance.
In this work, based on an extensive set of Internet traces, we pursue an understanding of the processing delays incurred at relay nodes and their impact on the application performance. Our contribution is three-fold: 1) we propose a methodology for measuring the processing delays at any relay node on the Internet; 2) we characterize the workload patterns of a variety of Internet relay nodes; and 3) we show that, serious VoIP quality degradation may occur due to relay processing, thus we have to monitor the processing delays of a relay node continuously to prevent the application performance from being degraded.
Network and TCP performance relationship workshop - Kae Hsu
The document discusses TCP performance factors and techniques to improve TCP performance in network environments. It covers TCP operation principles, factors that impact TCP performance like packet loss, out-of-order packets, and congestion. It also discusses approaches to improve performance through the network like reducing packet loss and congestion, and through appliances like TCP offloading and optimization to reduce system resource usage.
This document discusses various techniques for congestion control and quality of service (QoS) in computer networks. It covers queuing disciplines like FIFO and fair queuing. It also discusses TCP congestion control algorithms like additive increase/multiplicative decrease (AIMD) and slow start. The document outlines router-based approaches like DECbit and Random Early Detection (RED) gateways as well as host-based approaches such as TCP Vegas. It also discusses integrated services and differentiated services frameworks for providing QoS.
This document discusses various techniques for congestion control and quality of service (QoS) in computer networks. It covers queuing disciplines like FIFO and fair queuing, as well as transport layer protocols like TCP that use additive increase/multiplicative decrease to control congestion. The document also discusses router-based approaches like DECbit and RED for avoiding congestion, as well as the TCP Vegas protocol that monitors round-trip times to detect impending congestion. Finally, it discusses QoS approaches for real-time applications like guaranteed and controlled-load service classes.
This document discusses various TCP flavors and congestion control mechanisms. It begins with an overview of TCP functions like connection orientation, flow control, retransmission, and congestion control. It then covers retransmission mechanisms including timeout-based retransmission and fast retransmission. TCP congestion control mechanisms like slow start and additive increase multiplicative decrease are explained. The document summarizes TCP Tahoe, Reno, New Reno, Vegas, and Freeze flavors and how they implement congestion control algorithms. It provides state diagrams and examples to illustrate the differences between these TCP variants.
Similar to Investigating the Use of Synchronized Clocks in TCP Congestion Control (20)
Comparing the Archival Rate of Arabic, English, Danish, and Korean Language W... - Michele Weigle
Based on work published in ACM Transactions on Information Systems (TOIS), 36(1), July 2017 by Lulwah Alkwai, Michael L. Nelson, and Michele C. Weigle
Presented at ACM SIGIR 2019 on July 24, 2019 by Michele C. Weigle
WS-DL’s Work towards Enabling Personal Use of Web Archives - Michele Weigle
Talk given at Library of Congress by Michele C. Weigle (@weiglemc)
December 18, 2018
Web Science and Digital Libraries (WS-DL) Research Group (@WebSciDL)
Old Dominion University
Norfolk, VA
This document provides an introduction to web archiving presented by Dr. Michele Weigle from Old Dominion University's Web Sciences and Digital Libraries Group. It discusses how webpages can disappear from the live web, the importance of archiving webpages to preserve history, and several tools developed by the group like Mink, #icanhazmemento, and ArchiveNow that make web archiving easier. It also introduces Memento, a system that allows accessing archived versions of webpages from multiple archives.
Keynote talk presented at Web Archiving and Digital Libraries (WADL) 2018
June 6, 2018 - Fort Worth, TX
Michele C. Weigle (@weiglemc)
Web Science and Digital Libraries (WS-DL) Research Group (@WebSciDL)
Old Dominion University
Norfolk, VA
This document provides guidance on writing academic papers. It discusses what a PhD program entails and emphasizes communication skills. It outlines the typical structure of academic papers, including an introduction, related work, approach, evaluation, future work and conclusions. It covers citations, references, and the writing process. Effective organization, structure, and attention to detail are important. The writing process should begin with outlining before writing full sentences. Telling the story in a clear way takes significant time and effort.
How to Prepare and Give an Academic Presentation - Michele Weigle
The document provides tips for preparing and delivering an academic presentation. It begins by discussing what a PhD program entails and emphasizing the importance of communication skills. It then outlines how to structure a presentation like telling a story, including setting the scene, presenting the problem, highlighting the approach, showing results, and concluding with a summary. The document concludes by offering concrete tips, such as considering the audience, using visuals effectively, and helping the audience, as well as things to avoid such as walls of text. It emphasizes speaking clearly, facing the audience, practicing, and planning beginnings and endings.
The document describes the speaker's career journey through computer science by summarizing highlights captured on the Internet Archive from 1997 to 2013, including her undergraduate studies, teaching positions, research, marriage, graduation, faculty roles, and advising her first PhD student. She notes it was interesting but not always easy to piece her story together from archived web pages. Today social media would also contribute to one's story but posts are not saved by the Internet Archive.
A Retasking Framework For Wireless Sensor Networks - Michele Weigle
The document discusses re-tasking wireless sensor networks using the Deluge framework. Key points:
1. The goals were to analyze Deluge, implement selective re-tasking of specific nodes or groups, and design a GUI for monitoring and re-tasking.
2. Deluge allows distributing program binaries over sensor networks but only supports re-tasking all nodes.
3. The author implemented selective re-tasking using a node ID hash and group ID to target specific nodes or groups for re-tasking. Collection was also added to gather network status from nodes.
4. A Deluge visualizer GUI was created to issue commands and monitor node status. The changes increased code size
Strategies for Sensor Data Aggregation in Support of Emergency Response - Michele Weigle
Presented by Xianping Wang
Military Communications Conference (MILCOM)
October 6-8, 2014
Baltimore, MD
Xianping Wang, Aaron Walden, Michele C. Weigle and Stephan Olariu, "Strategies for Sensor Data Aggregation in Support of Emergency Response," In Proceedings of the Military Communications Conference (MILCOM). Baltimore, MD, October 2014.
Presented by Michele C. Weigle, June 4, 2015
Columbia University Web Archiving Collaboration: New Tools and Models
Work by Yasmin AlNoamany, Michele C. Weigle, and Michael L. Nelson
What's Grad School All About?
Capital Region Celebration of Women in Computing (CAPWIC), Harrisonburg, VA
February 27, 2015
Presented by Michele Weigle
The document summarizes Dr. Michele Weigle's presentation on tools for managing the past web. It discusses how webpages can disappear quickly from the live web and may only be accessible through web archives. It then describes several tools and projects from Old Dominion University's Web Sciences and Digital Libraries group that make archived web content more accessible and useful, such as tools that integrate the live and archived web, detect damage in archived pages, summarize collections of archived pages, and enable personal web archiving.
The document summarizes tools developed by the Web Sciences and Digital Libraries Group for managing archived web content. It describes WARCreate, a Chrome extension that archives the current state of web pages; WAIL, which loads archived web pages (WARC files) into a local Wayback instance for viewing; and Mink, a Chrome extension that displays archived versions of visited pages. It also discusses techniques for assessing damage in archived pages, generating thumbnail summaries of archive collections, and detecting off-topic pages within archives. The tools are intended to make web archiving more accessible and help curate archived web collections.
Archive What I See Now - 2014 NEH ODH Overview - Michele Weigle
"Archive What I See Now": Bringing Institutional Web Archiving Tools to the Individual Researcher
Slides from 2014 NEH ODH Project Directors' Meeting
September 15, 2014
Michele C. Weigle, Michael L. Nelson, Liza Potts
The document discusses potential summer research projects in the areas of digital preservation, web archiving, and information visualization. It describes the Web Sciences and Digital Libraries research group's work in web archiving and efforts to make archived web content more accessible and usable. Potential summer projects outlined include developing a visualization of health data from the Blue Button initiative, visualizing aggregate health data using public datasets, and exploring tools for analyzing large collections of academic documents.
Dr. Michele Weigle gave a presentation on telling stories using web archives. She discussed defining a story's timeline and key events, identifying relevant archived web pages, and visualizing the assembled story. Her research group is exploring how to help others reconstruct personal and historical narratives from archived web content, as pages on the live web often disappear over time.
"Archive What I See Now" - NEH ODH overview - Michele Weigle
"Archive What I See Now": Bringing Institutional Web Archiving Tools to the Individual Researcher
Slides from shutdown-cancelled NEH ODH Project Directors' Meeting (originally scheduled for Oct 4, 2013)
Michele C. Weigle and Michael L. Nelson
TDMA Slot Reservation in Cluster-Based VANETs - Michele Weigle
Mohammad Almalag's PhD Defense Slides
Department of Computer Science
Old Dominion University
April 3, 2013
Note: You may need to download the file to see all of the animations.
Partecipate alla presentazione per immergervi in una storia di interoperabilità, standard e formati aperti, per poi discutere del ruolo importante che i contributori hanno in una comunità open source sostenibile.
BIO: Sostenitrice del software libero e dei formati standard e aperti. È stata un membro attivo dei progetti Fedora e openSUSE e ha co-fondato l'Associazione LibreItalia dove è stata coinvolta in diversi eventi, migrazioni e formazione relativi a LibreOffice. In precedenza ha lavorato a migrazioni e corsi di formazione su LibreOffice per diverse amministrazioni pubbliche e privati. Da gennaio 2020 lavora in SUSE come Software Release Engineer per Uyuni e SUSE Manager e quando non segue la sua passione per i computer e per Geeko coltiva la sua curiosità per l'astronomia (da cui deriva il suo nickname deneb_alpha).
3. Claim
synchronized clocks
→ exact timing information
→ early congestion detection
→ less packet loss and shorter queues
→ better overall network performance
4. Outline
• Background
• Related Work
• Thesis Statement
• Sync-TCP
• Evaluation
• Conclusions
• Future Work
5. Background
Queuing
• Router queues are FIFO and finite
– the longer the queue, the longer a packet at the end of the queue is delayed
– if the queue is full, incoming packets are dropped
• Most queues are drop-tail
– incoming packets are only dropped when the queue is full
6. Background
Congestion
• Sustained period where the incoming rate is greater than the service rate
• Leads to increased queuing delays
• Leads to packet loss
– leads to increased latency for TCP flows
– leads to low throughput
7. Background
TCP Data Transfer
[Timeline diagram: with a congestion window size (cwnd) of 1, the sender transmits one data packet; the data packet takes one one-way transit time (OTT) to reach the receiver, and the ACK takes one OTT to return, so RTT = 2 OTT]
throughput = cwnd / RTT
8. Background
TCP Congestion Window
[Timeline diagram: with cwnd = 3, the sender transmits data packets 1–3 back to back; as ACKs 2–4 return one RTT later, the sender transmits data packets 4–6]
throughput = cwnd / RTT
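The throughput relation on this slide can be made concrete with a small sketch. The segment size `MSS` and the function name are illustrative assumptions, not part of the slides:

```python
# Illustrative sketch of the relation throughput = cwnd / RTT,
# expressed in bits per second. MSS is an assumed segment payload size.

MSS = 1460  # assumed TCP segment payload size in bytes

def throughput_bps(cwnd_packets: int, rtt_seconds: float) -> float:
    """Approximate steady-state TCP throughput in bits per second."""
    return cwnd_packets * MSS * 8 / rtt_seconds

# cwnd = 3 packets, RTT = 100 ms
print(round(throughput_bps(3, 0.1)))  # → 350400
```

With a fixed RTT, throughput grows linearly with the congestion window, which is why TCP adjusts cwnd to control its sending rate.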
9. Background
TCP Congestion Control
• Available network bandwidth is unknown
• TCP probes the network by increasing the congestion window when ACKs return
• TCP backs off by reducing the congestion window when loss is detected
10. Background
TCP Reno Loss Detection
• 3 duplicate ACKs
– reduce the congestion window by 50%
• Retransmission timeout
– reduce the congestion window to 1 packet
[Figure: congestion window over time (throughput = cwnd / RTT), with losses marked at a duplicate-ACK event and at a timeout event]
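The two Reno loss reactions described above can be sketched as follows. This is a minimal illustration of the window updates only, not the actual Reno implementation; the function names are made up for this example:

```python
# Minimal sketch of TCP Reno's two loss reactions (names illustrative).

def on_triple_duplicate_ack(cwnd: float) -> float:
    """Fast retransmit signal: halve the congestion window (min 1 packet)."""
    return max(1.0, cwnd / 2)

def on_retransmission_timeout(cwnd: float) -> float:
    """Timeout: collapse the congestion window back to 1 packet."""
    return 1.0

print(on_triple_duplicate_ack(16.0))    # → 8.0
print(on_retransmission_timeout(16.0))  # → 1.0
```

The asymmetry reflects what each signal implies: duplicate ACKs show that packets are still getting through, while a timeout suggests more severe congestion.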
11. Background
TCP Reno Data Recovery
[Timeline diagram: the sender transmits data packets 1–5; packet 2 is lost. Each subsequent packet triggers a duplicate ACK for packet 2. After three duplicate ACKs, the sender retransmits packet 2 and continues with packet 6]
12. The Problem
TCP Congestion Control
• Overflows queues in its search for more resources
• Uses packet loss as its only indicator of congestion
– relies on a binary signal of congestion
13. The Problem
Congestion Control
• TCP Reno: react to packet loss
– reduce the sending rate only when packets are lost
– perform congestion control only when it is time to retransmit lost packets
• Goal: react to congestion early and avoid losses
– congestion occurs before packets are lost
– decouple congestion control and retransmission
[Figures: congestion window over time for each approach — Reno shows sawtooth drops at duplicate-ACK and timeout events; the early-reaction approach adjusts before losses occur]
14. Related Work
Congestion Control
• End-to-End
– TCP Reno is the problem
[Diagram: adaptation at the end hosts on both sides of the Internet]
• Router-based
– drop-tail queues are the problem
– active queue management (AQM)
[Diagram: adaptation at the routers inside the Internet]
15. Related Work
Congestion Control
• End-to-End
– Delay-based congestion control [R. Jain, 1989]
– TCP Vegas [Brakmo, O'Malley, Peterson, 1994]
– TCP Santa Cruz [Parsa, Garcia-Luna-Aceves, 1999]
– TCP Westwood [Mascolo, Casetti, Gerla, Sanadidi, Wang, 2001]
– TCP Peach [Akyildiz, Morabito, Palazzo, 2001]
– Binomial algorithms [Bansal, Balakrishnan, 2001]
• Router-based
– DECbit [Ramakrishnan, R. Jain, 1990]
– Random Early Detection (RED) [Floyd, Jacobson, 1993]
– Explicit Congestion Notification (ECN) [Floyd, 1994]
– Adaptive RED [Floyd, Gummadi, Shenker, 2001]
17. Thesis Statement
Precise knowledge of one-way transit times can be used to improve the performance of TCP congestion control.
• network-level metrics: packet loss and average queue sizes at congested routers
• application-level metrics: HTTP response times and goodput per HTTP response
18. Thesis Statement
Precise knowledge of one-way transit times can be used to improve the performance of TCP congestion control.
• provide lower packet loss and smaller queue sizes than TCP Reno
• provide lower HTTP response times and higher goodput per HTTP response than TCP Reno
19. My Approach
1. Exchange exact timing information
2. Detect congestion
3. React to congestion
4. Sync-TCP congestion control
5. Evaluate Sync-TCP vs. TCP Reno
20. Sync-TCP
Synchronized Clocks
• Allow measurement of OTT
• Methods of synchronization
– Global Positioning System (GPS)
– Network Time Protocol (NTP)
21. Sync-TCP
TCP Header Option
• New option in the TCP header
– 14 bytes: OTT (ms), timestamp, echo reply
[Diagram: standard TCP header (source/destination ports, sequence and acknowledgment numbers, header length, flags, receiver window size, checksum, urgent pointer), with the Sync-TCP fields carried in the variable-length options area, followed by the application data]
22. Sync-TCP
Example
[Timeline diagram: each ACK carries [OTT, timestamp, echo reply] — e.g. [-1, 1, -1], [1, 3, 1], [1, 5, 3], [2, 8, 5] — as data and ACKs flow between sender and receiver]
Sender's calculations:
• time data received = time data sent (echo reply) + OTT
• time ACK delayed = time ACK sent (timestamp) − time data received
• queuing delay = OTT − minimum OTT
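The sender-side calculations on this slide can be sketched directly. The function name is illustrative, and the example values are taken from the slide's timeline (OTT = 2, receiver timestamp = 8, echo reply = 5, minimum-observed OTT = 1, all in ms):

```python
# Sketch of the Sync-TCP sender's calculations from one ACK's
# [OTT, timestamp, echo reply] fields (all times in ms).
# Names are illustrative, not taken from the Sync-TCP code.

def sender_calculations(ott, timestamp, echo_reply, min_ott):
    time_data_received = echo_reply + ott          # when the data arrived
    time_ack_delayed = timestamp - time_data_received  # receiver's ACK delay
    queuing_delay = ott - min_ott                  # delay beyond propagation
    return time_data_received, time_ack_delayed, queuing_delay

received, ack_delay, qdelay = sender_calculations(2, 8, 5, 1)
print(received, ack_delay, qdelay)  # → 7 1 1
```

Subtracting the minimum-observed OTT strips out the propagation component, so what remains approximates the queuing delay along the forward path.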
23. Sync-TCP
Congestion Detection
• 50% of maximum-observed queuing delay (queuing delay = OTT − minimum-observed OTT)
• 50% of minimum-observed OTT
• Average queuing delay
• Trend analysis of queuing delays
• Trend analysis of the average queuing delay
24. Sync-TCP
Trend Analysis of Average Queuing Delay
• Trend analysis for available bandwidth estimation adapted from [Jain and Dovrolis, 2002]
• Operation:
– compute 9 average queuing delay samples
– split into 3 groups of 3 samples each
– compute the median, mi, of each group
– the trend is the relationship of m1, m2, m3
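The operation above can be sketched as follows. The slide only says the trend is "the relationship of m1, m2, m3", so the strict ordering test below is an assumption about how that relationship is evaluated, and the function name is illustrative:

```python
# Sketch of the median-based trend test described above, assuming a
# window of 9 average-queuing-delay samples. The strict-ordering rule
# is an assumed interpretation of "relationship of m1, m2, m3".
from statistics import median

def queuing_delay_trend(samples):
    """Split 9 samples into 3 groups of 3, take each group's median,
    and report whether the medians are rising or falling."""
    assert len(samples) == 9
    m1, m2, m3 = (median(samples[i:i + 3]) for i in (0, 3, 6))
    if m1 < m2 < m3:
        return "increasing"
    if m1 > m2 > m3:
        return "decreasing"
    return "no trend"

print(queuing_delay_trend([1, 2, 1, 3, 4, 3, 5, 6, 5]))  # → increasing
```

Using group medians rather than raw samples makes the test robust to individual noisy delay measurements.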
25. Sync-TCP
Trend Analysis of Average Queuing Delay
• On every arriving ACK, compute the smoothed average queuing delay from the OTT
• Compute the trend of the average queuing delay
– after the first 9 ACKs
– afterwards, every 3 ACKs
• Calculate the average queuing delay as a percentage of the maximum-observed queuing delay
– divide into 25% increments
26. Sync-TCP
Queuing Delay at Router
[Figure: queuing delay at the router (0–100 ms) vs. time, over the interval 260–265 s]
27. Sync-TCP
Trend Analysis of Average Queuing Delay
[Figure: queuing delay at the router vs. time (260–265 s), overlaid with the computed average queuing delay; increasing and decreasing trend segments are marked, with horizontal reference lines at 0%, 25%, 50%, 75%, and 100% (max) of the maximum-observed queuing delay]
28. Sync-TCP
Congestion Reaction
• Decrease the congestion window by 50% upon congestion notification
– same reaction as TCP Reno to packet loss
• Increase and decrease the congestion window according to the congestion signal
– intended to be used with the trend analysis of average queuing delay congestion detection
– operates the same as TCP Reno until 9 ACKs have been received
29. Sync-TCP
Congestion Window Adjustment

average queuing delay   increasing trend            decreasing trend
75% – max               decrease 50%                no change
50% – 75%               decrease 25%                increase 10% per RTT
25% – 50%               decrease 10%                increase 25% per RTT
0% – 25%                increase 1 packet per RTT   increase 50% per RTT
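The adjustment rules on this slide can be sketched as a lookup: given the smoothed average queuing delay as a fraction of the maximum-observed delay and the trend direction, return the window change. The column pairings are reconstructed from the slide's layout, and the function name and string encodings are illustrative, not the Sync-TCP source:

```python
# Sketch of the Sync-TCP congestion window adjustment table
# (names and return-value encodings are illustrative).

def window_adjustment(delay_fraction: float, trend: str) -> str:
    """Map (avg queuing delay / max-observed delay, trend) to an action."""
    increasing = trend == "increasing"
    if delay_fraction >= 0.75:
        return "decrease 50%" if increasing else "no change"
    if delay_fraction >= 0.50:
        return "decrease 25%" if increasing else "increase 10% per RTT"
    if delay_fraction >= 0.25:
        return "decrease 10%" if increasing else "increase 25% per RTT"
    return "increase 1 packet per RTT" if increasing else "increase 50% per RTT"

print(window_adjustment(0.8, "increasing"))  # → decrease 50%
print(window_adjustment(0.1, "decreasing"))  # → increase 50% per RTT
```

The design is graduated: the higher the queuing delay and the stronger the evidence it is rising, the more aggressively the window shrinks; when the delay is low and falling, the window can grow much faster than Reno's one packet per RTT.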
30. Sync-TCP
Congestion Control
Congestion Detection:
• Trend analysis of smoothed average queuing delay
• 50% of maximum queuing delay
• 50% of minimum OTT
• Smoothed average queuing delay
• Trend analysis of queuing delays
Congestion Reaction:
• Increase and decrease the congestion window according to the congestion signal
• Decrease the congestion window by 50% upon congestion notification
31. Evaluation
Experiment Plan
• NS-2 network simulator
– assume synchronized clocks
• FTP bulk-transfer traffic
– examine the steady-state operation of the mechanisms
• HTTP traffic
– integrate the traffic model developed at Bell Labs into NS-2
• the main parameter is the average number of HTTP requests per second
– calibrate the HTTP request rate to the desired load level
32. Evaluation
HTTP Simulation Environment
• Sync-TCP and TCP Reno flows do not compete
• Two-way traffic
– measure performance in one direction only
• 70–150 new HTTP requests generated per second
• 45–2,500 HTTP connections active simultaneously
• 250,000 HTTP request-response pairs completed
[Diagram: web clients and web servers on both sides of a 10 Mbps bottleneck link, exchanging requests and responses in both directions]
33. Evaluation
HTTP Experiment Space
• Sync-TCP congestion control mechanism
– 50% max queuing delay detection with reduce-by-50% reaction
– trend analysis of average queuing delay detection with adjust-according-to-signal reaction
• TCP for comparison
– TCP Reno, TCP SACK
• Queuing method for comparison
– drop-tail, Adaptive RED, Adaptive RED with ECN
• End-to-end load (% of link capacity)
– 50%, 60%, 70%, 80%, 85%, 90%, 95%, 100%, 105%
• Number of congested links
– 1, 2 (75% total load, 90% total load, 105% total load)
34. Evaluation
Evaluating HTTP Performance
• Network-level metrics
– packet loss at the bottleneck router
– queue size at the bottleneck router
• Application-level metrics
– goodput per HTTP response
• bytes received per second at the web client
– HTTP response times
• time between sending the request and receiving the entire response
35. Evaluation
Average Packet Loss at Bottleneck
[Bar chart: packet loss (%) at the bottleneck router vs. offered load (50%–95%) for TCP Reno and Sync-TCP. Loss grows with load, approaching 6–8% at the highest loads; bars are annotated with dropped-packet counts ranging from 0 at the lowest loads up to roughly 400 K, with Sync-TCP dropping far fewer packets than TCP Reno at every load level]
38. Response Time CDF
Example
[Figure: cumulative probability (%) vs. HTTP response time (0–1400 ms); reading the curve: ~75% of the responses completed in 400 ms or less]
39. Response Time CDF
50% Load
[Figure: HTTP response time CDFs; no large difference between the uncongested and congested cases]
40. Response Time CDF
70% Load
[Figure: HTTP response time CDFs; Sync-TCP performs slightly better than TCP Reno]
41. Response Time CDF
80% Load
[Figure: HTTP response time CDFs; Sync-TCP performs better than both TCP Reno and AQM]
42. Response Time CDF
85% Load
[Figure: HTTP response time CDFs; Sync-TCP performs better than both TCP Reno and AQM]
43. Evaluation
Early Congestion Detection
• Sync-TCP early congestion detection only operates after 9 ACKs have been received
– in practice, HTTP responses > 25 KB
• Only 7–8% of HTTP responses are > 25 KB
• HTTP responses < 25 KB do not use Sync-TCP early congestion detection
– they use TCP Reno congestion control
44. Evaluation
85% Load, 48 MB Response
[Figure: congestion window (packets) vs. time (900–1600 s) for a 48 MB response. Top: TCP Reno (17 ms base RTT), 952 packet drops. Bottom: Sync-TCP (47 ms base RTT), 190 packet drops]
45. Conclusions
• Sync-TCP performs better than TCP Reno
– packet loss
– average queue size
– goodput per HTTP response
– HTTP response time
• Sync-TCP has comparable performance to the "best" TCP and AQM combination
• Limitations of delay-based congestion control
– may not compete well with TCP Reno on the same network
– with many congested links, a decrease in one queue could mask an increase in another queue
46. Summary
Taking advantage of one-way transit times in TCP can result in better network performance.
synchronized clocks
→ early congestion detection
→ less packet loss and shorter queues
→ better overall network performance
47. My Contributions
• Method for measuring a flow's OTT and returning this exact timing information to the sender
• Comparison of several methods for using OTTs to detect congestion
• Sync-TCP: a family of end-to-end congestion control mechanisms based on using OTTs for congestion detection
48. Supporting Work
• Study of standards-track TCP congestion control and error recovery mechanisms in the context of HTTP traffic
– Weigle, Jeffay, and Smith, "Quantifying the Effects of Recent Protocol Improvements to Standards-Track TCP," in submission.
• Additions to NS-2
– integrated a state-of-the-art random number generator
– integrated Bell Labs' HTTP traffic model
– developed a module for delaying and dropping packets on a per-flow basis according to a given distribution
• Heuristics for determining the appropriate run length for HTTP simulations
49. Future Work
• Further Analysis
– accuracy of clock synchronization
– multiple congested links
– Sync-TCP with router support
• Extensions to Sync-TCP
– improve congestion detection and reaction
– ACK compression
– ACK congestion control
– improve fairness
• Uses for synchronized clocks in TCP
– statistics for time-critical applications
– wireless devices
50. Thank You
• Committee Members
– Kevin Jeffay, Don Smith, Ketan Mayer-Patel, Sanjoy Baruah, Bert Dempsey, Jasleen Kaur
• UNC Department of Computer Science
• My parents, Mike & Jean Clark
• My husband, Chris