Different high-speed transport layer protocols have been designed and proposed in the literature to improve the performance of standard TCP on high-BDP links. They differ mainly in the increase and decrease rules of their respective congestion control algorithms. Most of these high-speed protocols treat every packet drop in the network as an indication of congestion and immediately reduce their congestion window. Under noisy channel conditions, this approach usually leads to underutilization of the available bandwidth. We take CUBIC as a test case and compare its performance under normal and noisy channel conditions. The throughput of CUBIC degraded drastically, from 50 Mbps to 0.5 Mbps, when we introduced random packet drops with probability 0.001; as the drop probability increases, throughput falls further. Existing congestion control algorithms therefore need to be complemented with an intelligent mechanism that can determine whether a given packet drop is caused by congestion or by channel error, and thus avoid unnecessary window reductions. To distinguish between these cases, we developed a k-NN based module that predicts whether packet drops are due to congestion or to other causes. After integrating this module with the CUBIC algorithm, we observed a significant performance improvement.
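The drop-classification idea above can be sketched with a tiny k-NN voter. This is an illustrative sketch only: the paper does not spell out its features here, so the two features used below (RTT at the drop relative to the minimum RTT, and packets since the previous drop) and the synthetic training data are assumptions, not the authors' design.

```python
import math

def knn_classify(history, event, k=3):
    """Label a drop 'congestion' or 'channel' by majority vote of the
    k nearest labelled drop events in feature space."""
    dists = sorted(
        (math.dist(features, event), label) for features, label in history
    )
    votes = [label for _, label in dists[:k]]
    return max(set(votes), key=votes.count)

# Hypothetical labelled history: (rtt_ratio, pkts_since_last_drop) -> label.
# Congestion drops tend to follow RTT inflation and cluster together;
# random channel-error drops arrive with near-baseline RTT, far apart.
history = [
    ((2.1, 40), "congestion"), ((1.9, 55), "congestion"),
    ((2.4, 30), "congestion"), ((1.0, 900), "channel"),
    ((1.1, 700), "channel"), ((1.05, 1100), "channel"),
]

print(knn_classify(history, (2.0, 45)))   # near the congestion cluster
print(knn_classify(history, (1.02, 800))) # near the channel-error cluster
```

A real integration would update `history` online from measured drop events and only trigger CUBIC's window reduction when the vote says "congestion".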
This document summarizes a survey and analysis of various host-to-host congestion control proposals for TCP data transmission. It discusses the basic principles that underlie current host-to-host algorithms, including probing available network resources, estimating congestion through packet loss or delay, and quickly detecting packet losses. The document then analyzes specific algorithms like slow start, congestion avoidance, and fast recovery. It also examines calculating retransmission timeout and round-trip time, congestion avoidance and packet recovery techniques, and data transmission in TCP. The overall goal of these proposals is to control congestion in a distributed manner without relying on explicit network notifications.
PERFORMANCE EVALUATION OF SELECTED E2E TCP CONGESTION CONTROL MECHANISM OVER ... (ijwmn)
TCP is one of the main protocols governing Internet traffic today. However, it suffers significant performance degradation over wireless links. Since wireless networks now lead communication technologies, it is imperative to introduce effective TCP congestion control mechanisms for such networks. This research discusses four end-to-end TCP implementations: TCP Westwood, Hybla, Highspeed, and NewReno. The performance of these variants is compared in an emulated LTE environment in terms of throughput, delay, and fairness, using the ns-3 simulator to simulate the LTE network environment. The simulation results show that TCP Highspeed achieves the best throughput. Although TCP Westwood recorded the lowest latency values compared to the others, it behaved unfairly among different traffic flows. Moreover, TCP Hybla demonstrated the best fairness behaviour among the TCP variants.
International Journal of Engineering Research and Applications (IJERA) is a team of researchers, not a publication service or a private publication running journals for monetary benefit; we are an association of scientists and academics focused solely on supporting authors who want to publish their work. The articles published in our journal can be accessed online, and all articles are archived for real-time access.
Our journal system primarily aims to bring out the research talent and the work done by scientists, academics, engineers, practitioners, scholars, and postgraduate students of engineering and science. The journal aims to cover scientific research in a broad sense rather than publishing in a niche area, enabling researchers from various verticals to publish their papers. It also aims to provide a platform for researchers to publish in a shorter time, enabling them to continue further work. All published articles are freely available to scientific researchers in government agencies, to educators, and to the general public. We are making serious efforts to promote our journal across the globe in various ways, and we are confident it will act as a scientific platform for all researchers publishing their work online.
Improvement of Congestion window and Link utilization of High Speed Protocols... (IOSR Journals)
This document summarizes a research paper that proposes using a k-nearest neighbors (k-NN) algorithm to help high-speed transport layer protocols like CUBIC better distinguish between packet drops due to network congestion versus other factors like noise. The k-NN algorithm would analyze patterns in packet drop history to classify new drops, helping protocols avoid unnecessary window size reductions when drops are not actually due to congestion. The document provides background on high-speed protocols, issues like underutilization from treating all drops as congestion, and how incorporating k-NN classification could improve protocols' performance in noisy network conditions.
This document discusses various transport layer protocols for mobile networks. It begins with an overview of TCP and UDP, and then describes several strategies for improving TCP performance over mobile networks, including indirect TCP (I-TCP), snooping TCP, and Mobile TCP. It also discusses congestion control strategies like slow start and fast retransmit. Overall, the document analyzes how TCP can be optimized through techniques like connection splitting, buffering, and selective retransmission to better accommodate the characteristics of wireless networks.
TCP uses congestion control to determine how much capacity is available in the network and regulate how many packets can be in transit. It uses additive increase/multiplicative decrease (AIMD) where the congestion window is increased slowly with each ACK but halved upon timeout. Slow start is used initially and after idle periods to grow the window exponentially until congestion is detected. Fast retransmit and fast recovery help detect and recover from packet loss without requiring a timeout.
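The AIMD rule summarized above can be sketched in a few lines. This is a toy model in units of MSS, not a real stack (which works in bytes and adds slow start, timers, and recovery logic):

```python
# Minimal AIMD sketch: additive increase per ACK, multiplicative
# decrease on loss, as described in the summary above.
class AimdWindow:
    def __init__(self, cwnd=1.0):
        self.cwnd = cwnd  # congestion window, in MSS units

    def on_ack(self):
        # Additive increase: +1 MSS per full window of ACKs,
        # i.e. cwnd += 1/cwnd per individual ACK.
        self.cwnd += 1.0 / self.cwnd

    def on_loss(self):
        # Multiplicative decrease: halve the window, never below 1 MSS.
        self.cwnd = max(1.0, self.cwnd / 2.0)

w = AimdWindow(cwnd=8.0)
for _ in range(8):       # one window's worth of ACKs
    w.on_ack()
print(round(w.cwnd, 2))  # just under 9 MSS: roughly +1 MSS per RTT
w.on_loss()
print(round(w.cwnd, 2))  # halved after a loss
```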
Analytical Research of TCP Variants in Terms of Maximum Throughput (IJLT EMAS)
This paper presents a comparative throughput analysis of the TCP variants New Reno, Westwood, and Highspeed, examining the outcomes in a simulated environment using the NS-3 (version 3.25) simulator with reference to multiple varying network parameters, including network simulation time, router bandwidth, and the number of traffic sources, to observe which TCP variant performs best in different scenarios. The analysis used a dumbbell topology to determine the comparative maximum throughput of the TCP variants. The results show that NewReno performs well at low bandwidths, while Highspeed performs better than Westwood at large bandwidths. Network traffic flow was observed in the NetAnim tool.
EFFICIENT ADAPTATION OF FUZZY CONTROLLER FOR SMOOTH SENDING RATE TO AVOID CON... (ijcsit)
ABSTRACT
This paper proposes a fuzzy-logic-based sending rate adaptation scheme named FSR (Fuzzy Sending Rate), intended to improve the smoothness of TCP-Friendly Multicast Congestion Control (TFMCC). To mitigate sending rate fluctuation at the TFMCC sender, FSR defines five actions, takes link utilization into account when tuning the sending rate, and uses a fuzzy controller to decide which action should be applied according to the feedback information from the CLR (current limiting receiver). Asymmetrical membership functions and biased fuzzy inference rules keep FSR as friendly to TCP flows as TFMCC. Simulation results show that FSR achieves exceptional smoothness and good TCP friendliness.
This document presents an overview of computer network congestion and congestion control techniques. It defines congestion as occurring when too many packets are present in a network link, causing queues to overflow and packets to drop. It then discusses factors that can cause congestion as well as the costs. It outlines open-loop and closed-loop congestion control approaches. Specific algorithms covered include leaky bucket, token bucket, choke packets, hop-by-hop choke packets, and load shedding. The document concludes by noting the importance of efficient congestion control techniques with room for improvement.
Connection Establishment & Flow and Congestion Control (Adeel Rasheed)
On these slides I describe the details of connection establishment and of flow and congestion control. For more detail visit: https://chauhantricks.blogspot.com/
The document proposes RAMPTCP, a receiver-assisted extension to MPTCP for edge clouds. RAMPTCP aims to improve MPTCP performance in edge-to-edge networks by having the receiver send network condition information to help the sender make better scheduling decisions. Preliminary ns3 simulations show RAMPTCP achieves around 19% higher throughput and 58% fewer retransmissions compared to default MPTCP in a scenario where one network path experiences packet loss. Future work includes incorporating different access technologies and developing effective RAMPTCP control actions.
IMPACT OF CONTENTION WINDOW ON CONGESTION CONTROL ALGORITHMS FOR WIRELESS ADH... (cscpconf)
The TCP congestion control mechanism is highly dependent on MAC-layer backoff algorithms that predict the optimal Contention Window size to increase TCP performance in wireless ad-hoc networks. This paper critically examines the impact of the Contention Window on TCP congestion control approaches. The modified TCP congestion control method stabilizes the congestion window, providing higher throughput and shorter delay than traditional TCP. Various backoff algorithms used to adjust the Contention Window are simulated using NS2 along with the modified TCP, and their performance is analyzed to depict the influence of the Contention Window on TCP performance, considering metrics such as throughput, delay, packet loss, and end-to-end delay.
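The Contention Window adjustment the abstract refers to is, in the classic case, binary exponential backoff. The sketch below assumes IEEE 802.11-style behaviour; the CW_MIN/CW_MAX values are illustrative, not taken from the paper:

```python
import random

# Illustrative bounds; real values depend on the PHY in use.
CW_MIN, CW_MAX = 15, 1023

def next_cw(cw, collided):
    """Double the contention window (capped) after a collision,
    reset to the minimum after a successful transmission."""
    if collided:
        return min(2 * (cw + 1) - 1, CW_MAX)  # 15 -> 31 -> 63 -> ...
    return CW_MIN

def backoff_slots(cw):
    """Pick a uniform random backoff count in [0, cw] slots."""
    return random.randint(0, cw)

cw = CW_MIN
for _ in range(3):          # three consecutive collisions
    cw = next_cw(cw, True)
print(cw)                   # 127
print(next_cw(cw, False))   # back to 15 after a success
```

Because TCP's RTT estimate absorbs these MAC-layer backoff delays, an unstable CW feeds directly into TCP's timers, which is the coupling the paper examines.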
Analysis of Link State Resource Reservation Protocol for Congestion Managemen... (ijgca)
With the wide spread of WiFi hotspots, concentrated traffic workloads on the Smart Web (SW) can slow down network performance. This paper presents a congestion management strategy that considers real-time activities in today's smart web. In the SW context, cooperative packet recovery using a resource reservation procedure for TCP flows was adapted to mitigate packet losses and maintain data consistency between the various access points of a smart web hotspot. Using a real-world scenario, it was confirmed that generic TCP cannot handle traffic congestion in a SW hotspot network; with TCP in scalable workload environments, continuous packet drops during congestion remain obvious, which is unacceptable for mission-critical domains. An enhanced Link State Resource Reservation Protocol (LS-RSVP), which serves as a dynamic feedback mechanism in smart web hotspots, is presented, and its contextual behaviour is contrasted with the generic TCP model. For LS-RSVP, a simulation experiment on a TCP connection between servers at the remote core layer and the access layer was carried out using selected benchmark metrics. The results show that, under realistic workloads, TCP LS-RSVP achieved a steady-state throughput response of about 3650 bits/s, compared with the generic TCP plots in a previous study. Network service availability was found to depend on the fault-tolerance of the hotspot network; the study observed a high peak threshold of 0.009 (i.e. 90%), showing fairly acceptable service availability behaviour compared with existing TCP schemes. For packet drop effects, an analysis of the network behaviour under LS-RSVP yielded a drop response of about 0.000106 bits/s, much lower than the generic TCP case of over 0.38 bits/s. The latency profile of the average FTP download response was found to be 0.030 s, while the FTP upload response yielded about 0.028 s. The results from the study demonstrate efficiency and optimality for realistic loads in Smart Web contexts.
Transmission Control Protocol (TCP) is a fundamental protocol of the Internet Protocol Suite. TCP complements the Internet Protocol (IP), which is why the suite is commonly referred to as TCP/IP. TCP handles error detection, detection of packet loss and out-of-order delivery, requests retransmission, reorders data, and helps manage network congestion.
Several congestion control algorithms have been developed over the years to improve TCP's performance across various technologies and network conditions.
The purpose of this assignment is to present TCP, network congestion, and congestion algorithms, and to simulate different algorithms under different network conditions to measure their performance. For this assignment, OPNET IT Guru Academic Edition software was used to reproduce projects that had already been published and produced the expected results.
The document discusses several algorithms used for congestion control in TCP/IP networks, including slow start, congestion avoidance, fast retransmit, fast recovery, random early discard (RED), and traffic shaping using leaky bucket and token bucket algorithms. Slow start and congestion avoidance control the transmission rate by adjusting the congestion window size. Fast retransmit and fast recovery allow quicker retransmission of lost packets without waiting for timeouts. RED proactively discards packets before buffer overflow. Leaky bucket and token bucket algorithms shape traffic flow through use of buffers and tokens to smooth bursts and control transmission rates.
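The token bucket shaper mentioned above can be sketched in a few lines; the rate and burst values below are arbitrary illustrations:

```python
# Minimal token-bucket shaper: tokens accrue at a fixed rate up to a
# burst cap; a packet may be sent only if enough tokens are available.
class TokenBucket:
    def __init__(self, rate, burst):
        self.rate = rate        # tokens (e.g. bytes) added per second
        self.burst = burst      # bucket capacity (max burst size)
        self.tokens = burst     # start with a full bucket
        self.last = 0.0

    def allow(self, size, now):
        # Refill for the elapsed time, capped at the bucket size.
        self.tokens = min(self.burst,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= size:
            self.tokens -= size
            return True
        return False

tb = TokenBucket(rate=100.0, burst=200.0)
print(tb.allow(150, now=0.0))  # True: burst credit covers it
print(tb.allow(150, now=0.0))  # False: only 50 tokens left
print(tb.allow(150, now=2.0))  # True: 2 s of refill tops the bucket up
```

A leaky bucket differs only in that it drains at a constant rate regardless of input bursts, smoothing output rather than permitting bounded bursts.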
Iaetsd an effective approach to eliminate tcp incast (Iaetsd Iaetsd)
This document proposes an Incast Congestion Control for TCP (ICTCP) scheme to eliminate TCP incast collapse in datacenter environments. TCP incast collapse occurs when multiple synchronized servers send data to the same receiver in parallel, overwhelming the switch buffer and causing packet loss. ICTCP is a receiver-side approach that proactively adjusts the TCP receive window size of connections to control their aggregate burstiness and prevent switch buffer overflow before packet loss occurs. It estimates available bandwidth and uses this as a quota to coordinate receive-window increases. For each connection, the receive window is adjusted based on the ratio of the difference between measured and expected throughput. This allows adaptive tuning of receive windows to meet sender throughput needs while avoiding congestion.
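The receive-window adjustment described above can be sketched as a threshold rule on the throughput-difference ratio. This is only an illustration of the idea: the threshold values, step size, and minimum window below are guesses, not the values from the ICTCP paper.

```python
MSS = 1500  # illustrative segment size in bytes

def adjust_rwnd(rwnd, expected_bps, measured_bps,
                gamma1=0.1, gamma2=0.5):
    """Grow the advertised window when the connection uses nearly all
    of it; shrink when measured throughput falls far short of what the
    current window could carry. Thresholds gamma1/gamma2 are assumed."""
    ratio = (expected_bps - measured_bps) / expected_bps
    if ratio <= gamma1:                 # window fully used: try one MSS more
        return rwnd + MSS
    if ratio >= gamma2:                 # badly underused: give one MSS back
        return max(2 * MSS, rwnd - MSS)
    return rwnd                         # in between: hold steady

print(adjust_rwnd(16 * MSS, expected_bps=10e6, measured_bps=9.8e6))  # grows
print(adjust_rwnd(16 * MSS, expected_bps=10e6, measured_bps=3e6))    # shrinks
```

The key design point is that the receiver, which can see all incast flows converging on it, throttles before the switch buffer overflows, rather than reacting to loss after the fact.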
This document discusses computer networks and congestion control techniques. It provides information on routing algorithms, causes of congestion, effects of congestion, and open-loop and closed-loop congestion control methods. Specifically, it describes the leaky bucket algorithm and token bucket algorithm for traffic shaping, and how they regulate data flow rates to prevent network congestion.
This document summarizes a presentation on congestion control in TCP/IP networks. It discusses basics of congestion and how it can be catastrophic if not handled. It then describes the basic strategies used by TCP to combat congestion, including slow start, congestion avoidance, detection, and illustration of algorithms like fast retransmit and recovery. Issues with wireless networks and variants of TCP like New Reno, Vegas, and Westwood are also summarized. The presentation proposes a new congestion control algorithm and discusses plans to simulate and test it.
This document discusses congestion in computer networks and how TCP controls congestion. It begins by listing group members and an agenda covering learning about congestion, network performance, and TCP congestion control. It then defines congestion as occurring when network load exceeds capacity. Various causes of and issues resulting from congestion are described. The document outlines open-loop and closed-loop congestion control techniques before focusing on TCP, explaining how it uses a congestion window and the slow start, congestion avoidance, and congestion detection algorithms to control transmission rates and respond to lost packets or congestion.
Congestion occurs when routers receive packets faster than they can forward them, causing their queues to fill up. There are two ways routers deal with congestion - by preventing additional packets from entering the congested region until packets can be processed, or by discarding queued packets to make room for new ones. Congestion control techniques like warning bits, choke packets, and load shedding help detect and recover from congestion on a global scale across an entire subnet, while flow control operates on a point-to-point basis between individual senders and receivers.
This document discusses congestion control in computer networks. It begins by defining congestion as occurring when the number of packets entering a network exceeds its carrying capacity. Several open-loop and closed-loop techniques for congestion control are then described, including leaky bucket algorithms, token bucket algorithms, choke packets, and slow start. The effects of congestion like reduced throughput and increased delay are also covered. The document provides an in-depth look at strategies for detecting and mitigating congestion in computer networks.
This document discusses congestion control in computer networks. It begins by defining common traffic descriptors like average data rate and peak data rate. It then explains different traffic profiles such as constant bit rate and variable bit rate traffic. The document discusses how congestion occurs when network load exceeds capacity. It describes open-loop and closed-loop congestion control techniques, including choke packets and hop-by-hop choke packets. The document also discusses the Internet Control Message Protocol and how it is used for error reporting and source quenching to control congestion.
Comparison of TCP congestion control mechanisms Tahoe, Newreno and Vegas (IOSR Journals)
The widely used reliable transport protocol TCP is an end-to-end protocol designed for wireline networks characterized by negligible random packet losses. This paper presents an exploratory study of TCP congestion control principles and mechanisms. Modern implementations of TCP contain four intertwined algorithms: slow start, congestion avoidance, fast retransmit, and fast recovery. In addition to the standard algorithms used in common implementations of TCP, this paper also describes some of the more common proposals developed by researchers over the years. We also study, through extensive simulations, the performance characteristics of three representative TCP schemes, namely TCP Tahoe, New Reno, and Vegas, under varying bottleneck link capacities in a wired network.
Comparative Analysis of Different TCP Variants in Mobile Ad-Hoc Network (partha pratim deb)
The document analyzes the performance of different TCP variants (New Reno, Reno, Tahoe) with MANET routing protocols (AODV, DSR, TORA) through simulation. It finds that in scenarios with 3 and 5 nodes, AODV has better throughput than DSR and TORA for all TCP variants. Throughput decreases for all variants as node count increases. New Reno provides multiple packet loss recovery and is the best choice for AODV in MANETs due to its consistent performance with changes in node count. Further analysis of additional protocols and TCP variants is recommended.
Recital Study of Various Congestion Control Protocols in wireless network (iosrjce)
IOSR Journal of Computer Engineering (IOSR-JCE) is a double blind peer reviewed International Journal that provides rapid publication (within a month) of articles in all areas of computer engineering and its applications. The journal welcomes publications of high quality papers on theoretical developments and practical applications in computer technology. Original research papers, state-of-the-art reviews, and high quality technical notes are invited for publications.
This document discusses and compares several congestion control protocols for wireless networks, including TCP, RCP, and RCP+. It implemented an enhanced version of RCP+ in the NS-2 simulator. Simulation results showed that the proposed approach achieved higher throughput and packet delivery ratio than TCP and RCP+ in a wireless network with 10-50 nodes, with performance degrading as the number of nodes increased beyond 20 due to increased congestion. The paper analyzes the mechanisms and equations of each protocol and argues the proposed approach combines benefits of improved AIMD and RCP+ to address their individual shortcomings.
NetWork Design Question 2.) How does TCP prevent Congestion? Discuss.pdf (optokunal1)
Network Design Question
2.) How does TCP prevent congestion? Discuss the information identifying congestion in the network as well as the mechanisms for reducing congestion.
Solution
Congestion is a problem that occurs on shared networks when multiple users contend for access
to the same resources (bandwidth, buffers, and queues).
Transmission Control Protocol (TCP) uses a network congestion-avoidance algorithm that
includes various aspects of an additive increase/multiplicative decrease (AIMD) scheme, with
other schemes such as slow-start to achieve congestion avoidance.
The TCP congestion-avoidance algorithm is the primary basis for congestion control in the
Internet.
Congestion typically occurs where multiple links feed into a single link, such as where internal
LANs are connected to WAN links. Congestion also occurs at routers in core networks where
nodes are subjected to more traffic than they are designed to handle.
TCP/IP networks such as the Internet are especially susceptible to congestion because of their basic connectionless nature. There are no virtual circuits with guaranteed bandwidth. Packets are injected by any host at any time, and those packets are variable in size, which makes predicting traffic patterns and providing guaranteed service impossible. While connectionless networks have advantages, quality of service is not one of them.
Shared LANs such as Ethernet have their own congestion control mechanisms in the form of
access controls that prevent multiple nodes from transmitting at the same time.
Identifying:
Congestion is primarily perceived by the user as slowness. This reflects a change in the network's effective flow, that is, the time required to transmit a complete piece of data from one point to another. The effective flow does not exist as such; it consists in reality of three separate indicators:
* Latency: the effective flow is inversely proportional to the latency.
* Jitter: the variation of latency over time, which impacts the flow by influencing its latency.
* Loss rate: the theoretical bandwidth is inversely proportional to the square root of the loss rate.
These congestion symptoms allow us to rely on objective indicators to characterize congestion.
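The inverse-square-root relation between bandwidth and loss rate quoted above matches the well-known Mathis et al. steady-state TCP model, throughput ≈ (MSS/RTT)·(C/√p). A small sketch of that estimate (the constant C ≈ 1.22 applies to the periodic-loss model; actual throughput depends on many other factors):

```python
import math

def mathis_throughput(mss_bytes, rtt_s, loss_rate, c=1.22):
    """Steady-state TCP throughput estimate in bits/s:
    (MSS / RTT) * (C / sqrt(p)) per the Mathis et al. model."""
    return (mss_bytes * 8 / rtt_s) * (c / math.sqrt(loss_rate))

bw1 = mathis_throughput(1460, 0.1, 0.0001)   # p = 0.01 %, RTT = 100 ms
bw2 = mathis_throughput(1460, 0.1, 0.0004)   # 4x the loss rate
print(bw1 / 1e6, "Mbit/s")
print(bw1 / bw2)  # quadrupling the loss rate halves throughput: 2.0
```

This is exactly why the loss rate is a usable congestion indicator: a measurable change in p maps to a predictable change in achievable bandwidth.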
Mechanism to reduce congestion:
TCP implementations today include four standard congestion control algorithms in common use, whose usefulness has passed the test of time. The four algorithms, Slow Start, Congestion Avoidance, Fast Retransmit, and Fast Recovery, are described below.
(a) Slow Start
Slow Start, a requirement for TCP software implementations, is a mechanism used by the sender to control the transmission rate, otherwise known as sender-based flow control. This is accomplished through the return rate of acknowledgements from the receiver: the rate at which the receiver returns acknowledgements determines the rate at which the sender can transmit data. When a TCP connection first begins, the Slow Start algorithm initializes a congestion window for the sender.
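The Slow Start behaviour described above can be sketched per round trip: each ACK adds one segment to the window, which doubles it every RTT until the slow-start threshold is reached (a simplification; real stacks also leave slow start on loss):

```python
def slow_start_rounds(ssthresh, cwnd=1):
    """Return cwnd (in segments) at the start of each RTT until the
    slow-start threshold is reached. One extra segment per ACK means
    the window doubles each round trip."""
    sizes = [cwnd]
    while cwnd < ssthresh:
        cwnd = min(cwnd * 2, ssthresh)
        sizes.append(cwnd)
    return sizes

print(slow_start_rounds(ssthresh=16))  # [1, 2, 4, 8, 16]
```

Once cwnd reaches ssthresh, the sender switches to the linear growth of Congestion Avoidance.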
Towards Seamless TCP Congestion Avoidance in Multiprotocol Environments (IDES Editor)
In this paper we explore the area of congestion avoidance in computer networks. We provide a brief overview of the current state of the art in congestion avoidance and describe our extension to the TCP congestion avoidance mechanism. This extension was previously published in an international forum; in this paper we describe an improved version which allows multiprotocol support. We present preliminary results obtained in a simulation environment.
The newly introduced approach, called the Advanced Notification Congestion System (ACNS), allows TCP flow prioritization based on the TCP flow age and the priority carried in the header of the network layer protocol. The aim of this approach is to provide more bandwidth for young and highly prioritized TCP flows by penalizing old, greedy flows with a low priority. Using ACNS, a substantial network performance increase can be achieved.
Computer networks have experienced an explosive growth over the past few years and with that growth have come severe congestion problems. For example, it is now common to see internet gateways drop 10% of the incoming packets because of local buffer overflows. Our investigation of some of these problems has shown that much of the cause lies in transport protocol implementations (not in the protocols themselves): the 'obvious' ways to implement a window-based transport protocol can result in exactly the wrong behavior in response to network congestion. We give examples of 'wrong' behavior and describe some simple algorithms that can be used to make right things happen. The algorithms are rooted in the idea of achieving network stability by forcing the transport connection to obey a 'packet conservation' principle. We show how the algorithms derive from this principle and what effect they have on traffic over congested networks.
In October of ’86, the Internet had the first of what became a series of ‘congestion col-
lapses’. During this period, the data throughput from LBL to UC Berkeley (sites separated
by 400 yards and two IMP hops) dropped from 32 Kbps to 40 bps. We were fascinated by
this sudden factor-of-thousand drop in bandwidth and embarked on an investigation of why
things had gotten so bad. In particular, we wondered if the 4.3
BSD
(Berkeley U
NIX
)
TCP
was mis-behaving or if it could be tuned to work better under abysmal network conditions.
The answer to both of these questions was “yes”.
This document presents an overview of computer network congestion and congestion control techniques. It defines congestion as occurring when too many packets are present in a network link, causing queues to overflow and packets to drop. It then discusses factors that can cause congestion as well as the costs. It outlines open-loop and closed-loop congestion control approaches. Specific algorithms covered include leaky bucket, token bucket, choke packets, hop-by-hop choke packets, and load shedding. The document concludes by noting the importance of efficient congestion control techniques with room for improvement.
Connection Establishment & Flow and Congestion ControlAdeel Rasheed
On these slides I describe all the details about Connection Establishment & Flow and Congestion Control. For more detail visit: https://chauhantricks.blogspot.com/
The document proposes RAMPTCP, a receiver-assisted extension to MPTCP for edge clouds. RAMPTCP aims to improve MPTCP performance in edge-to-edge networks by having the receiver send network condition information to help the sender make better scheduling decisions. Preliminary ns3 simulations show RAMPTCP achieves around 19% higher throughput and 58% fewer retransmissions compared to default MPTCP in a scenario where one network path experiences packet loss. Future work includes incorporating different access technologies and developing effective RAMPTCP control actions.
IJERA (International Journal of Engineering Research and Applications) is an international, online, ... peer-reviewed journal. For more detail or to submit your article, please visit www.ijera.com
IMPACT OF CONTENTION WINDOW ON CONGESTION CONTROL ALGORITHMS FOR WIRELESS ADH...cscpconf
The TCP congestion control mechanism is highly dependent on MAC-layer backoff algorithms that predict the optimal contention window size to increase TCP performance in wireless ad hoc networks. This paper critically examines the impact of the contention window on TCP congestion control approaches. The modified TCP congestion control method stabilizes the congestion window, which provides higher throughput and shorter delay than traditional TCP. Various backoff algorithms used to adjust the contention window are simulated using NS2 along with the modified TCP, and their performance is analyzed to depict the influence of the contention window on TCP performance, considering metrics such as throughput, delay, packet loss and end-to-end delay.
Analysis of Link State Resource Reservation Protocol for Congestion Managemen...ijgca
With the widespread deployment of WiFi hotspots, concentrated traffic workloads on the Smart Web (SW) can slow down network performance. This paper presents a congestion management strategy that considers real-time activities in today's smart web. In the SW context, cooperative packet recovery using a resource reservation procedure for TCP flows was adapted to mitigate packet losses and maintain data consistency between the various access points of a smart web hotspot. Using a real-world scenario, it was confirmed that generic TCP cannot handle traffic congestion in a SW hotspot network: with TCP in scalable workload environments, continuous packet drops at the onset of congestion remain obvious, which is unacceptable for mission-critical domains. An enhanced Link State Resource Reservation Protocol (LS-RSVP), which serves as a dynamic feedback mechanism in smart web hotspots, is presented, and its contextual behaviour is contrasted with the generic TCP model. For LS-RSVP, a simulation experiment on a TCP connection between servers at the remote core layer and the access layer was carried out using selected benchmark metrics. From the results, under realistic workloads, TCP LS-RSVP achieved a steady-state throughput response of about 3650 bits/s compared with the generic TCP plots in a previous study. Network service availability was found to depend on the fault tolerance of the hotspot network; a high peak threshold of 0.009 (i.e. 90%) was observed, showing fairly acceptable service-availability behaviour compared with existing TCP schemes. For packet-drop effects, an analysis of the network behaviour with respect to LS-RSVP yielded a drop response of about 0.000106 bits/s, much lower than the over 0.38 bits/s observed with generic TCP. The latency profile of the average FTP download response was found to be 0.030 s, while the FTP upload response yielded about 0.028 s. The results from the study demonstrate efficiency and optimality for realistic loads in Smart Web contexts.
Transmission Control Protocol (TCP) is a fundamental protocol of the Internet Protocol Suite. TCP complements the Internet Protocol (IP), so it is common to refer to the suite as TCP/IP. TCP is used for error detection and for detecting packet loss or out-of-order delivery of data. TCP requests retransmission, rearranges data and helps with network congestion.
Several congestion control algorithms have been developed, over the last years, to improve TCP's performance over various technologies and network conditions.
The purpose of this assignment is to present TCP, network congestion and congestion algorithms, and to simulate different algorithms under different network conditions to measure their performance. For this assignment's needs, the OPNET IT Guru Academic Edition software was used to reproduce projects that have already been published and obtain the expected results.
The document discusses several algorithms used for congestion control in TCP/IP networks, including slow start, congestion avoidance, fast retransmit, fast recovery, random early discard (RED), and traffic shaping using leaky bucket and token bucket algorithms. Slow start and congestion avoidance control the transmission rate by adjusting the congestion window size. Fast retransmit and fast recovery allow quicker retransmission of lost packets without waiting for timeouts. RED proactively discards packets before buffer overflow. Leaky bucket and token bucket algorithms shape traffic flow through use of buffers and tokens to smooth bursts and control transmission rates.
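The token-bucket shaping summarized above can be sketched in a few lines. This is a minimal illustration of the general technique, not any specific implementation; the `rate` and `capacity` values in the usage line are arbitrary examples.

```python
import time

class TokenBucket:
    """Admit a packet only when enough tokens (in bytes) are available;
    tokens refill at `rate` bytes/s up to `capacity` (the burst size)."""

    def __init__(self, rate, capacity):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity           # start with a full bucket
        self.last = time.monotonic()

    def allow(self, packet_bytes):
        now = time.monotonic()
        # Refill tokens for the elapsed time, capped at the bucket capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= packet_bytes:
            self.tokens -= packet_bytes  # spend tokens, forward the packet
            return True
        return False                     # shape: delay or drop the packet

bucket = TokenBucket(rate=125_000, capacity=3_000)  # ~1 Mbit/s, 2-packet burst
print(bucket.allow(1500), bucket.allow(1500), bucket.allow(1500))
# → True True False (the third packet exceeds the burst allowance)
```

A leaky bucket differs only in that it drains at a constant rate regardless of burstiness, so it smooths output instead of permitting bursts up to `capacity`.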
Iaetsd an effective approach to eliminate tcp incastIaetsd Iaetsd
This document proposes an Incast Congestion Control for TCP (ICTCP) scheme to eliminate TCP incast collapse in datacenter environments. TCP incast collapse occurs when multiple synchronized servers send data to the same receiver in parallel, overwhelming the switch buffer and causing packet loss. ICTCP is a receiver-side approach that proactively adjusts the TCP receive window size of connections to control their aggregate burstiness and prevent switch buffer overflow before packet loss occurs. It estimates available bandwidth and uses this as a quota to coordinate receive window increases. For each connection, the receive window is adjusted based on the ratio of the difference between measured and expected throughput. This allows adaptive tuning of receive windows to meet sender throughput needs while avoiding congestion.
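The receiver-side adjustment summarized above compares measured and expected per-connection throughput. The sketch below is a loose illustration of that idea only: the threshold values `gamma1`/`gamma2` and the exact grow/shrink rules are assumptions for the example, not the paper's algorithm.

```python
def adjust_rwnd(rwnd, mss, measured_bps, expected_bps, quota_bytes,
                gamma1=0.1, gamma2=0.5):
    """Tune the advertised receive window from the ratio of the gap
    between measured and expected throughput (illustrative thresholds):
    grow within the bandwidth quota when throughput meets expectation,
    shrink toward a 2-MSS floor when it lags badly, otherwise hold."""
    ratio = abs(expected_bps - measured_bps) / max(expected_bps, 1)
    if ratio <= gamma1 and quota_bytes >= mss:
        return rwnd + mss                 # close to expectation: grow
    if ratio >= gamma2:
        return max(2 * mss, rwnd - mss)   # large gap: shrink
    return rwnd                           # in between: hold

print(adjust_rwnd(rwnd=11680, mss=1460,
                  measured_bps=9.5e6, expected_bps=10e6,
                  quota_bytes=4096))      # → 13140 (grown by one MSS)
```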
This document discusses computer networks and congestion control techniques. It provides information on routing algorithms, causes of congestion, effects of congestion, and open-loop and closed-loop congestion control methods. Specifically, it describes the leaky bucket algorithm and token bucket algorithm for traffic shaping, and how they regulate data flow rates to prevent network congestion.
This document summarizes a presentation on congestion control in TCP/IP networks. It discusses basics of congestion and how it can be catastrophic if not handled. It then describes the basic strategies used by TCP to combat congestion, including slow start, congestion avoidance, detection, and illustration of algorithms like fast retransmit and recovery. Issues with wireless networks and variants of TCP like New Reno, Vegas, and Westwood are also summarized. The presentation proposes a new congestion control algorithm and discusses plans to simulate and test it.
This document discusses congestion in computer networks and how TCP controls congestion. It begins by listing group members and an agenda covering learning about congestion, network performance, and TCP congestion control. It then defines congestion as occurring when network load exceeds capacity. Various causes of and issues resulting from congestion are described. The document outlines open-loop and closed-loop congestion control techniques before focusing on TCP, explaining how it uses a congestion window and the slow start, congestion avoidance, and congestion detection algorithms to control transmission rates and respond to lost packets or congestion.
Congestion occurs when routers receive packets faster than they can forward them, causing their queues to fill up. There are two ways routers deal with congestion - by preventing additional packets from entering the congested region until packets can be processed, or by discarding queued packets to make room for new ones. Congestion control techniques like warning bits, choke packets, and load shedding help detect and recover from congestion on a global scale across an entire subnet, while flow control operates on a point-to-point basis between individual senders and receivers.
This document discusses congestion control in computer networks. It begins by defining congestion as occurring when the number of packets entering a network exceeds its carrying capacity. Several open-loop and closed-loop techniques for congestion control are then described, including leaky bucket algorithms, token bucket algorithms, choke packets, and slow start. The effects of congestion like reduced throughput and increased delay are also covered. The document provides an in-depth look at strategies for detecting and mitigating congestion in computer networks.
This document discusses congestion control in computer networks. It begins by defining common traffic descriptors like average data rate and peak data rate. It then explains different traffic profiles such as constant bit rate and variable bit rate traffic. The document discusses how congestion occurs when network load exceeds capacity. It describes open-loop and closed-loop congestion control techniques, including choke packets and hop-by-hop choke packets. The document also discusses the Internet Control Message Protocol and how it is used for error reporting and source quenching to control congestion.
Comparison of TCP congestion control mechanisms Tahoe, Newreno and VegasIOSR Journals
The widely used reliable transport protocol TCP, is an end to end protocol designed for the wireline
networks characterized by negligible random packet losses. This paper represents exploratory study of TCP
congestion control principles and mechanisms. Modern implementations of TCP contain four intertwined
algorithms: slow start, congestion avoidance, fast retransmit, and fast recovery. In addition to the standard
algorithms used in common implementations of TCP, this paper also describes some of the more common
proposals developed by researchers over the years. We also study, through extensive simulations, the
performance characteristics of four representative TCP schemes, namely TCP Tahoe, New Reno and Vegas
under the network conditions of bottleneck link capacities for wired network
Comparative Analysis of Different TCP Variants in Mobile Ad-Hoc Network partha pratim deb
The document analyzes the performance of different TCP variants (New Reno, Reno, Tahoe) with MANET routing protocols (AODV, DSR, TORA) through simulation. It finds that in scenarios with 3 and 5 nodes, AODV has better throughput than DSR and TORA for all TCP variants. Throughput decreases for all variants as node count increases. New Reno provides multiple packet loss recovery and is the best choice for AODV in MANETs due to its consistent performance with changes in node count. Further analysis of additional protocols and TCP variants is recommended.
Recital Study of Various Congestion Control Protocols in wireless networkiosrjce
IOSR Journal of Computer Engineering (IOSR-JCE) is a double blind peer reviewed International Journal that provides rapid publication (within a month) of articles in all areas of computer engineering and its applications. The journal welcomes publications of high quality papers on theoretical developments and practical applications in computer technology. Original research papers, state-of-the-art reviews, and high quality technical notes are invited for publications.
This document discusses and compares several congestion control protocols for wireless networks, including TCP, RCP, and RCP+. It implemented an enhanced version of RCP+ in the NS-2 simulator. Simulation results showed that the proposed approach achieved higher throughput and packet delivery ratio than TCP and RCP+ in a wireless network with 10-50 nodes, with performance degrading as the number of nodes increased beyond 20 due to increased congestion. The paper analyzes the mechanisms and equations of each protocol and argues the proposed approach combines benefits of improved AIMD and RCP+ to address their individual shortcomings.
NetWork Design Question2.) How does TCP prevent Congestion Dicuss.pdfoptokunal1
NetWork Design Question
2.) How does TCP prevent congestion? Discuss the indicators identifying congestion in the network as well as the mechanisms for reducing congestion.
Solution
Congestion is a problem that occurs on shared networks when multiple users contend for access
to the same resources (bandwidth, buffers, and queues).
Transmission Control Protocol (TCP) uses a network congestion-avoidance algorithm that
includes various aspects of an additive increase/multiplicative decrease (AIMD) scheme, with
other schemes such as slow-start to achieve congestion avoidance.
The TCP congestion-avoidance algorithm is the primary basis for congestion control in the
Internet.
Congestion typically occurs where multiple links feed into a single link, such as where internal
LANs are connected to WAN links. Congestion also occurs at routers in core networks where
nodes are subjected to more traffic than they are designed to handle.
TCP/IP networks such as the Internet are especially susceptible to congestion because of their basic connectionless nature. There are no virtual circuits with guaranteed bandwidth. Packets are injected by any host at any time, and those packets are variable in size, which makes predicting traffic patterns and providing guaranteed service impossible. While connectionless networks have advantages, quality of service is not one of them.
Shared LANs such as Ethernet have their own congestion control mechanisms in the form of
access controls that prevent multiple nodes from transmitting at the same time.
Identifying congestion:
Congestion is primarily perceived by the user as slowness. This reflects a change in the network's effective flow, that is, the time required to transmit a complete data set from one point to another. The effective flow does not exist as a single quantity; in reality it consists of three separate indicators:
*Latency: the effective flow is inversely proportional to the latency.
*Jitter: the variation of latency over time, which affects the flow by influencing latency.
*Loss rate: the theoretical bandwidth is inversely proportional to the square root of the loss rate.
These congestion symptoms allow us to rely on objective indicators to characterize it.
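The inverse-square-root relationship between bandwidth and loss rate cited above is the well-known Mathis et al. approximation, throughput ≈ (MSS/RTT) · C/√p with C ≈ √(3/2). A quick calculation (the MSS, RTT and loss values are illustrative):

```python
import math

def mathis_throughput(mss_bytes, rtt_s, loss_rate, c=math.sqrt(3.0 / 2.0)):
    """Approximate steady-state TCP throughput in bytes/s from the
    Mathis model: rate ~ (MSS / RTT) * C / sqrt(p)."""
    return (mss_bytes / rtt_s) * c / math.sqrt(loss_rate)

# Example: 1460-byte MSS, 100 ms RTT, 0.1% loss rate
rate = mathis_throughput(1460, 0.100, 0.001)
print(f"{rate * 8 / 1e6:.1f} Mbit/s")  # → 4.5 Mbit/s
```

Note how halving the loss rate only improves throughput by √2, which is why a noisy link caps TCP throughput far below the raw link capacity.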
Mechanisms to reduce congestion:
Standard TCP implementations today rely on four congestion control algorithms that are in common use and whose usefulness has passed the test of time. The four algorithms, Slow Start, Congestion Avoidance, Fast Retransmit and Fast Recovery, are described below.
(a) Slow Start
Slow Start, a requirement for TCP software implementations, is a mechanism used by the sender to control the transmission rate, otherwise known as sender-based flow control. This is accomplished through the return rate of acknowledgements from the receiver: the rate of acknowledgements returned by the receiver determines the rate at which the sender can transmit data. When a TCP connection first begins, the Slow Start algorithm initializes a congestion window of one segment, the maximum segment size (MSS) announced by the receiver.
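The per-ACK window growth of Slow Start, and the congestion-avoidance growth that takes over at the slow-start threshold, can be sketched as follows (window measured in segments; a minimal illustration, not a full TCP implementation):

```python
def grow_window(cwnd, ssthresh, acks):
    """Per-ACK congestion window growth: below ssthresh, Slow Start adds
    one segment per ACK (doubling per RTT); at or above ssthresh,
    Congestion Avoidance adds ~1/cwnd per ACK (one segment per RTT)."""
    for _ in range(acks):
        if cwnd < ssthresh:
            cwnd += 1            # slow start: exponential growth
        else:
            cwnd += 1 / cwnd     # congestion avoidance: linear growth
    return cwnd

print(grow_window(cwnd=1, ssthresh=8, acks=7))  # → 8
```

After seven acknowledgements the window has grown from 1 to 8 segments; further ACKs would then add only a fraction of a segment each, the additive-increase phase.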
Towards Seamless TCP Congestion Avoidance in Multiprotocol EnvironmentsIDES Editor
In this paper we explore the area of congestion
avoidance in computer networks. We provide a brief overview
of the current state of the art in congestion avoidance and also
list our extension to the TCP congestion avoidance mechanism.
This extension was previously published on an international
forum and in this paper we describe an improved version which
allows multiprotocol support. We list preliminary results
carried out in a simulation environment.
New introduced approach called Advanced Notification
Congestion System (ACNS) allows TCP flows prioritization
based on the TCP flow age and priority carried in the header
of the network layer protocol. The aim of this approach is to
provide more bandwidth for young and high prioritized TCP
flows by means of penalizing old greedy flows with a low
priority. Using ACNS, substantial network performance
increase can be achieved.
Computer networks have experienced an explosive growth over the past few years and with
that growth have come severe congestion problems. For example, it is now common to see
internet gateways drop 10% of the incoming packets because of local buffer overflows.
Our investigation of some of these problems has shown that much of the cause lies in
transport protocol implementations (
not
in the protocols themselves): The ‘obvious’ ways
to implement a window-based transport protocol can result in exactly the wrong behavior
in response to network congestion. We give examples of ‘wrong’ behavior and describe
some simple algorithms that can be used to make right things happen. The algorithms are
rooted in the idea of achieving network stability by forcing the transport connection to obey
a ‘packet conservation’ principle. We show how the algorithms derive from this principle
and what effect they have on traffic over congested networks.
In October of ’86, the Internet had the first of what became a series of ‘congestion col-
lapses’. During this period, the data throughput from LBL to UC Berkeley (sites separated
by 400 yards and two IMP hops) dropped from 32 Kbps to 40 bps. We were fascinated by
this sudden factor-of-thousand drop in bandwidth and embarked on an investigation of why
things had gotten so bad. In particular, we wondered if the 4.3BSD (Berkeley UNIX) TCP was misbehaving or if it could be tuned to work better under abysmal network conditions. The answer to both of these questions was “yes”.
Abstract - The Transmission Control Protocol (TCP) is a connection-oriented, reliable, end-to-end protocol that supports flow and congestion control. With the evolution and rapid growth of the internet and the emergence of the Internet of Things (IoT), flow and congestion have a clear impact on network performance. In this paper we study the congestion control mechanisms Tahoe, Reno, NewReno, SACK and Vegas, which were introduced to control network utilization and increase throughput. In the performance evaluation we measure metrics such as throughput, packet loss and delivery ratio, and reveal the impact of the cwnd. The results show that SACK performed better than NewReno in terms of number of packets sent, throughput and delivery ratio, while Vegas showed the best performance of all.
Performance Evaluation of High Speed Congestion Control ProtocolsIOSR Journals
This document evaluates and compares the performance of seven high-speed congestion control protocols: Bic TCP, Cubic TCP, Hamilton TCP, HighSpeed TCP, Illinois TCP, Scalable TCP and YeAH TCP. The protocols are simulated using NS-2 with multiple flows and their efficiency, fairness, and overall performance are calculated and compared for different numbers of flows. Key metrics like throughput, link utilization, and fairness index are analyzed for the protocols under varying parameters like RTT, queue size, and number of flows. Cubic TCP and Bic TCP generally showed better performance than the other protocols based on the simulations.
Lecture 19 22. transport protocol for ad-hoc Chandra Meena
This document discusses transport layer protocols for mobile ad hoc networks (MANETs). It begins with an introduction to MANETs and the need for new network architectures and protocols to support new types of networks. It then provides an overview of TCP/IP and how TCP works, including congestion control mechanisms. The document discusses challenges for TCP over wireless networks, where packet losses are often due to errors rather than congestion. It covers different versions of TCP and their approaches to congestion control. The goal is to design transport layer protocols that can address the unreliable links and frequent topology changes in MANETs.
ANALYSIS AND EXPERIMENTAL EVALUATION OF THE TRANSMISSION CONTROL PROTOCOL CON...IRJET Journal
This document analyzes and experimentally evaluates several TCP congestion control algorithms (variants) - TCP cubic, TCP hybla, TCP scalable, TCP Vegas, and TCP Westwood - in a wireless multihop environment. It aims to understand the throughput performance of each variant as the number of nodes increases. The analysis provides insights into how well different variants can adapt to dynamic multihop wireless networks. It experimentally tests the variants in a simulation using Network Simulator 2 and compares their throughput performance under varying node counts. The goal is to help develop more robust TCP algorithms that can effectively manage congestion in challenging wireless network conditions.
This document summarizes TCP congestion control. It discusses how TCP uses a congestion window and detects congestion via timeouts or duplicate ACKs to dynamically control transmission rates. TCP employs slow start, additive increase, and fast recovery policies. Slow start exponentially increases the window size initially. Additive increase slowly increases the window by 1/cwnd after each ACK. Fast recovery allows quick recovery from single losses. TCP switches between these policies and calculates throughput using additive increase, multiplicative decrease of the window over time.
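The additive-increase/multiplicative-decrease dynamics summarized above can be illustrated with a short per-RTT simulation; the loss schedule and the α/β constants below are the textbook defaults, chosen for illustration:

```python
def aimd_trace(rounds, loss_rounds, cwnd=1.0, alpha=1.0, beta=0.5):
    """Evolve a congestion window per RTT round: additive increase of
    `alpha` segments per round, multiplicative decrease by `beta`
    in rounds where a loss is detected. Returns the window trace."""
    trace = []
    for r in range(rounds):
        cwnd = cwnd * beta if r in loss_rounds else cwnd + alpha
        trace.append(cwnd)
    return trace

trace = aimd_trace(10, loss_rounds={5})
# The window grows 2,3,4,5,6, halves to 3 at the loss, then resumes +1.
```

Repeating this pattern produces the familiar sawtooth: linear climbs toward capacity punctuated by halvings at each loss event.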
Improving Performance of TCP in Wireless Environment using TCP-PIDES Editor
Improving the performance of the Transmission Control Protocol (TCP) in wireless environments has been an active research area. The main reason behind TCP's performance degradation is its inability to detect the actual cause of packet losses in a wireless environment. In this paper, we provide simulation results for TCP-P (TCP-Performance). TCP-P is an intelligent protocol for wireless environments which is able to distinguish the actual reasons for packet losses and applies an appropriate solution to each loss.
TCP-P deals with three main issues: congestion in the network, disconnection in the network, and random packet losses. TCP-P consists of a congestion avoidance algorithm and a disconnection detection algorithm, with some changes to the TCP header. If congestion occurs in the network, the congestion avoidance algorithm is applied: TCP-P counts the number of sent packets and received acknowledgements and sets a sending buffer value accordingly, so that it can prevent the system from entering congestion. In the disconnection detection algorithm, TCP-P senses the medium continuously to detect a disconnection occurring in the network. TCP-P modifies the TCP packet header so that a lost packet can itself notify the sender that it was lost. This paper describes the design of TCP-P and presents results from experiments using the NS-2 network simulator. Simulations show that TCP-P is 4% more efficient than TCP-Tahoe, 5% more efficient than TCP-Vegas, 7% more efficient than TCP-SACK, and equal in performance to TCP-Reno and TCP-NewReno. However, TCP-P can be considered more efficient than TCP-Reno and TCP-NewReno since it is able to solve more issues of TCP in wireless environments.
A Survey of Different Approaches for Differentiating Bit Error and Congestion...IJERD Editor
TCP provides reliable communication, including over wireless links. Packet losses occur in wireless networks during data transmission, and these losses are always classified as congestion losses, even though packets are also lost due to random bit errors. Traditional TCP always assumes a packet was lost due to congestion and reduces its congestion window; thus, TCP gives poor performance over wireless links. Many TCP variants have been proposed for congestion control, but they cannot distinguish whether a loss is due to congestion or to bit error, so they reduce the congestion window every time, even though there is no need to reduce the transmission rate when the loss is caused by a bit error. In this survey, the general approaches taken for differentiating congestion losses from bit-error losses are discussed.
IJCER (www.ijceronline.com) International Journal of computational Engineerin...ijceronline
1) The document discusses improving transport layer performance in data communication networks by delaying transmissions. It proposes a Queue Length Based Pacing (QLBP) algorithm that delays packets based on the length of the local packet buffer to reduce burstiness.
2) It reviews several related works on using pacing, small buffers, and forward error correction to improve performance in small buffer networks and optical packet switched networks.
3) The document analyzes problems like packet loss from bit errors and congestion that degrade transport layer performance, and how QLBP aims to reduce packet loss through traffic pacing.
This document summarizes an Active Congestion Control (ACC) mechanism that uses active networking technology to make feedback congestion control more responsive to network congestion. ACC includes programs in data packets that allow routers to react to congestion without incurring round trip delay. When congestion is detected, the congested router calculates and sends a new congestion control state to endpoints to synchronize distributed states. Simulations show ACC TCP can achieve up to 18% higher throughput than standard TCP under bursty traffic conditions by reacting more quickly to congestion. ACC and TCP perform comparably under stable network conditions.
This document proposes a new dynamic approach to congestion control in TCP that determines congestion based on round trip time rather than packet loss. It suggests using the estimated round trip time as a threshold and increasing the congestion window linearly as long as the sample round trip time is below the threshold. If the sample round trip time exceeds the estimated time, the congestion window is halved, and if it exceeds the estimated time plus four times the deviation round trip time, the window is reset to one MSS. This approach aims to overcome issues with TCP Reno and Tahoe like the bottleneck problem by avoiding an exponential increase in window size. The document discusses advantages like preventing network congestion and disadvantages like ambiguity in measured sample round trip
This document proposes a new dynamic approach to congestion control in TCP that determines congestion based on round trip time rather than packet loss. It suggests using the estimated round trip time as a threshold and increasing the congestion window linearly as long as the sample round trip time is below the threshold. If the sample round trip time exceeds the estimated time, the congestion window would be halved, and if it exceeds the estimated time plus four times the deviation round trip time, the window would be reset to one MSS. This approach aims to avoid the bottleneck problem of TCP by not including a slow start phase and responding earlier to potential congestion based on network delays.
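The reaction rules of the RTT-based proposal summarized above can be sketched as a single decision function. This is a minimal illustration under stated assumptions (integer window units, one decision per RTT sample); the estimate and deviation would come from the usual EWMA RTT estimators.

```python
def rtt_react(cwnd, sample_rtt, est_rtt, dev_rtt):
    """Window reaction in the RTT-threshold scheme: linear growth while
    the sampled RTT stays below the estimate, halving once it exceeds
    the estimate, and a reset to one MSS beyond est + 4*dev
    (the classic RTO-style margin)."""
    if sample_rtt > est_rtt + 4 * dev_rtt:
        return 1                  # severe delay: back to one MSS
    if sample_rtt > est_rtt:
        return max(1, cwnd // 2)  # early congestion signal: halve
    return cwnd + 1               # below threshold: linear increase

print(rtt_react(10, 0.08, 0.10, 0.01))  # → 11 (grow)
print(rtt_react(10, 0.12, 0.10, 0.01))  # → 5  (halve)
print(rtt_react(10, 0.20, 0.10, 0.01))  # → 1  (reset)
```

Because the signal is delay rather than loss, the sender reacts before queues overflow, which is the scheme's answer to the bottleneck problem of loss-based Reno and Tahoe.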
This document summarizes a survey on congestion control mechanisms. It discusses how congestion control plays an important role in computer networks and modern telecommunications to prevent network congestion. It categorizes major congestion control mechanisms and algorithms, including black box approaches like TCP that rely only on binary feedback and grey/green box approaches that use more network information. Common goals for congestion control algorithms are discussed like efficiency, fairness and smooth convergence.
TCP INCAST AVOIDANCE BASED ON CONNECTION SERIALIZATION IN DATA CENTER NETWORKSIJCNCJournal
In distributed file systems, a well-known congestion collapse called TCP incast (briefly, Incast) occurs because many servers send data to the same client almost simultaneously, and the resulting packets overflow the port buffer of the link connecting to the client. Incast leads to throughput degradation in the network. In this paper, we propose three methods to avoid Incast, based on the fact that the bandwidth-delay product is small in current data center networks. The first method completely serializes connection establishments; by this serialization, the number of packets in the port buffer becomes very small, which leads to Incast avoidance. The second and third methods overlap the slow-start period of the next connection with the currently established connection to improve on the throughput of the first method. Numerical results from extensive simulation runs show the effectiveness of our three proposed methods.
A Packet Drop Guesser Module for Congestion Control Protocols for High speed Networks
International Journal of Computer Science, Engineering and Information Technology (IJCSEIT), Vol.3, No.3, June 2013
DOI : 10.5121/ijcseit.2013.3303
A Packet Drop Guesser Module for Congestion
Control Protocols for High speed Networks
Ghani-ur-rehman¹, Muhammad Asif², Israrullah³
¹ Department of Computer Science, Khushal Khan Khattak University Karak, Pakistan
Ghani84kk@yahoo.com
² Department of Computer Science, International Islamic University, Islamabad, Pakistan
Asif253026@yahoo.com
³ Department of Computer Science, National University of Computer & Emerging Sciences, Pakistan
Israrullahkk@yahoo.com
ABSTRACT
Different high-speed transport-layer protocols have been designed and proposed in the literature to
improve the performance of standard TCP on high-BDP links. They differ mainly in the increase
and decrease formulas of their respective congestion control algorithms. Most of these high-speed
protocols treat every packet drop in the network as an indication of congestion and immediately
reduce their congestion window. Such an approach usually results in under-utilization of the
available bandwidth under noisy channel conditions. We take CUBIC as a test case and compare its
performance under normal and noisy channel conditions. The throughput of CUBIC degraded
drastically, from 50 Mbps to 0.5 Mbps, when we introduced random packet drops with probability
0.001; as the drop probability increases further, throughput decreases accordingly. Existing
congestion control algorithms therefore need to be complemented with an intelligent mechanism
that can determine whether a given packet drop is due to congestion or to channel error, and thus
avoid unnecessary window reductions. To distinguish between packet drops, we have developed a
k-NN based module that guesses whether a drop is due to congestion or to other causes. After
integrating this module with the CUBIC algorithm, we observed a significant performance improvement.
KEYWORDS
Congestion, Traditional TCP, Throughput, High-speed Protocols, CUBIC TCP, k-NN Technique.
1. INTRODUCTION
High-speed networks [1] are networks with a high data transmission rate, such as high-speed LANs
and Ethernet. The rate may vary from a few megabits per second (Mbps) to gigabits per second
(Gbps). Common applications of high-speed networks include telemedicine, videoconferencing,
and weather simulation. A high-speed network performs best when the end systems regulate their
data flows so that the available network resources are used efficiently without overloading the
system; excessive load leads to congestion and throughput collapse. Simply put, high-speed
networks are built to send large amounts of data quickly. Network congestion is the situation in
which the number of packets sent to a network exceeds its capacity; it can be thought of as the
load on the network. Keeping this load below the total capacity of the network is called
congestion control. Congestion is possible in any system that involves waiting. In a network,
congestion occurs because routers and switches have queues that store packets. Congestion control
is easily understood through the example of TCP. TCP [2] is an acknowledgement-based protocol in
which the receiver must send an acknowledgement (ACK) to the sender after receiving a packet, and
the sender may only send new packets as ACKs arrive. One of the important duties of TCP is
congestion control, which TCP handles in three steps: slow start, congestion avoidance, and
congestion detection. In slow start the window grows exponentially. Growth is dictated by the
congestion window (cwnd), the maximum number of packets that can be in flight at a particular
time, which starts at one maximum segment size (MSS), the maximum amount of data a segment can
hold; the MSS is negotiated during connection setup using the option of the same name. Each time
an acknowledgement is received for a sent segment, cwnd is increased by 1 MSS. As the name
implies, the window starts small and increases exponentially. For example, the sender starts with
a congestion window of one MSS, so it can send only one segment. Suppose there are seven
segments: when the ACK for segment 1 is received, the window is incremented by 1, so cwnd is now
2 and two more segments can be sent. When the ACKs for those two segments arrive, the window is
incremented by one MSS for each, giving cwnd = 4. When all seven segments are acknowledged, the
congestion window equals 8. With delayed acknowledgements, the window grows by less than a power
of 2 per round trip. Slow start cannot continue indefinitely; it stops when the window reaches a
threshold. To avoid congestion beyond that point, TCP defines the congestion avoidance algorithm,
which uses additive increase: when cwnd reaches the threshold, slow start stops and cwnd is
incremented by 1 per round trip until congestion is detected. If congestion occurs, cwnd must be
decreased. The sender learns that congestion has occurred when it must retransmit a segment,
which happens for one of two reasons: the retransmission timer times out, or three duplicate ACKs
are received. In either case the threshold is reduced to one half of the current window
(multiplicative decrease). Thus there are two cases for TCP:
1) If a time-out occurs, the probability of congestion is very high, and TCP reacts strongly:
• Set the threshold to cwnd/2.
• Set cwnd to one segment.
• Begin slow start again.
2) If three duplicate ACKs are received, congestion is less likely: one segment may have been
dropped, but not all of them, because the three duplicate ACKs show that the segments sent
after it arrived safely. This case is handled by fast retransmission and fast recovery,
which require the congestion window to be at least four packets. Here TCP reacts less
strongly:
• Set the threshold to cwnd/2.
• Set cwnd to the threshold.
• Begin the congestion avoidance phase.
Congestion detection therefore triggers one of two responses:
(1) After a timeout, a new slow start phase begins.
(2) After three duplicate ACKs, a new congestion avoidance phase begins.
The increase and decrease formulas for TCP are given below.
1. Increase when the whole window has been acknowledged (fully acknowledged):
cwnd = cwnd + 1
2. Decrease when an acknowledgement is not received, i.e. a segment is lost:
cwnd = cwnd / 2
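The slow-start and AIMD behaviour described above can be sketched in a few lines. This is an illustrative simulation, not the paper's code: the window is counted in segments, and the initial threshold value of 16 is an arbitrary choice.

```python
# Illustrative sketch of TCP's congestion phases described above.
# Window is in segments; the initial ssthresh of 16 is arbitrary.

def on_window_acked(cwnd, ssthresh):
    """Whole window acknowledged: exponential growth below ssthresh
    (slow start), additive increase above it (congestion avoidance)."""
    if cwnd < ssthresh:
        return cwnd * 2, ssthresh   # slow start doubles each round trip
    return cwnd + 1, ssthresh       # congestion avoidance adds 1 per round trip

def on_timeout(cwnd, ssthresh):
    """Strong reaction: ssthresh = cwnd/2, cwnd back to one segment."""
    return 1, max(cwnd // 2, 2)

def on_three_dup_acks(cwnd, ssthresh):
    """Milder reaction (fast recovery): ssthresh = cwnd/2, cwnd = ssthresh."""
    half = max(cwnd // 2, 2)
    return half, half

cwnd, ssthresh = 1, 16
trace = []
for _ in range(8):                  # eight round trips without loss
    trace.append(cwnd)
    cwnd, ssthresh = on_window_acked(cwnd, ssthresh)
print(trace)                        # [1, 2, 4, 8, 16, 17, 18, 19]
```

The trace shows the exponential phase reaching the threshold of 16 and the additive phase taking over from there.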
Figure 1: This shows how throughput of CUBIC is affected with random packet drop probability of 0.001
Because TCP is not well suited to high-speed networks, various high-speed protocols, also called
TCP variants, have been proposed, including BIC, CUBIC, FAST, XCP, TCP Vegas, Scalable TCP, and
HSTCP. The main problem with these high-speed protocols is that they treat every packet drop as
an indication of network congestion. Under normal network conditions the performance of the CUBIC
protocol, in terms of throughput, congestion window, and link utilization, is fine; but when
packets drop randomly, CUBIC's performance degrades and it does not achieve the desired
bandwidth. With an error rate of 0.001, the performance of CUBIC decreased drastically, as shown
in Figure 4. We therefore incorporate the k-NN technique into the CUBIC protocol, which improves
performance considerably.
The rest of the paper is organized as follows: related work is presented in Section II; Section
III contains the proposed work; results are discussed in Section IV; and we conclude the paper in
Section V.
2. RELATED WORKS
Conventional TCP [1] is not a good candidate for high bandwidth-delay-product links: its
congestion control algorithm takes a long time to exploit large bandwidth and to recover from
loss. TCP has two main drawbacks. The first is that it attributes every packet drop to network
congestion: when a packet is lost, it cuts the window in half, and halves it again on any further
loss, even though not all drops are caused by congestion; some may be due to channel noise. The
second drawback of TCP is that when the RTT from source to
the destination and back increases, it operates slower and slower. A number of high-speed
protocols [1], [2] have been proposed to improve traditional TCP's performance on high-speed
networks. These protocols can be classified into two types, loss-based and delay-based:
loss-based protocols rely on packet drops for congestion detection, while delay-based protocols
rely on queuing delay to gather information about congestion; FAST is the only delay-based
protocol. Loss-based protocols can be further divided into those that take the previous
congestion window into account when calculating the next one, and those that are time-based,
using the time since the last packet drop to calculate the next congestion window.

The driving force behind BIC TCP was to improve TCP's effectiveness on links with a high
bandwidth-delay product. BIC TCP uses two window-control schemes: additive increase and binary
search increase. The binary search increase scheme treats congestion control as a searching
problem in which packet loss provides 'yes' or 'no' feedback. The search has two starting points.
The first is the current minimum window size, Wmin (the window just after the last reduction), at
which the window is free of losses. The second is the maximum window size, Wmax (the window just
before the last reduction), at which the window utilizes the available link bandwidth. Binary
search increase repeatedly computes the midpoint between Wmax and Wmin, called the target window
size, sets it as the current size, and watches the loss feedback. If the feedback is 'yes', a
packet was lost and the midpoint becomes the new Wmax; if the feedback is 'no', there was no loss
and the midpoint becomes the new Wmin. This process continues until the difference between Wmax
and Wmin falls below a preset threshold called the minimum increment, Smin. Binary search
increase is aggressive at the start, when the difference between the current and target window
sizes is large, and less aggressive as that difference shrinks. An important property of the
protocol is that its increase function is logarithmic, which also reduces the chance of packet
loss; the main advantage of binary search increase is that it yields a concave response function
[3], [4]. Mixed with additive increase, binary search increase achieves faster convergence and
RTT-fairness. When the distance from the current minimum to the midpoint is large, jumping
straight to the midpoint would place more stress on the network, so when the distance from the
current window to the target exceeds a maximum increment, Smax, the window is increased by Smax
instead. This is done until the distance becomes smaller than Smax, at which point the window
increases to the target after a large amount of window reduction.
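The binary search increase and the Smax cap described above can be sketched as follows. The constants Smin and Smax, the factor β = 0.875 taken from the text, and the handling of the converged case are illustrative assumptions rather than values fixed by this paper.

```python
# Sketch of BIC's window update (binary search increase + Smax cap).
# Constants are illustrative; real BIC adds further phases (max probing).

S_MIN, S_MAX = 0.1, 16.0   # minimum and maximum increments (assumed values)
BETA = 0.875               # multiplicative decrease factor from the text

def bic_on_loss(cwnd):
    """Loss: current window becomes Wmax, the reduced window becomes Wmin."""
    reduced = cwnd * BETA
    return reduced, reduced, cwnd          # (cwnd, w_min, w_max)

def bic_on_no_loss(cwnd, w_min, w_max):
    """No loss in the last round: move toward the midpoint of [w_min, w_max],
    but never by more than S_MAX; the midpoint becomes the new w_min."""
    if w_max - w_min < S_MIN:
        return cwnd + S_MIN, cwnd, w_max   # search has converged: creep up
    target = (w_min + w_max) / 2
    step = min(target - cwnd, S_MAX)       # additive increase far from target
    return cwnd + step, cwnd + step, w_max
```

Starting from a loss at cwnd = 100, the window drops to 87.5 and then climbs back through 93.75, 96.875, and so on toward 100, which is the logarithmic approach described in the text.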
In the beginning this policy increases the window linearly, and then logarithmically. The
combination of binary search increase and additive increase is called binary increase. Combined
with a multiplicative decrease policy, binary increase behaves like additive increase when the
window is large: a large window means a large multiplicative reduction and hence a long additive
increase period, while a small window behaves approximately like binary search increase, with a
shorter additive increase period. In the multiplicative decrease step, a packet loss reduces the
congestion window by a multiplicative factor β, commonly 0.875 as used by other protocols. Among
the many advantages of BIC TCP, the main one [6] is that its binary increase policy uses the
bandwidth in a near-optimal way.

CUBIC can be regarded as an extension of BIC: it is a BIC variant with many new features that add
to BIC's usefulness, and it further simplifies BIC's window control. Although BIC offers good
stability, fairness, and scalability among current high-speed TCP variants, its growth function
is not well suited to TCP under small RTTs or on low-speed networks, and the different phases of
its window control make it too complex to understand
the protocol easily. To discuss the CUBIC protocol [5] in detail, the following terms are used
below: cwnd, the value of the congestion window; C, a scaling constant; T, the time since the
last congestion event; Wmax, the congestion window size at the last reduction; and β, the
multiplicative decrease factor, normally 0.8.
K = (Wmax(1 − β)/C)^(1/3)
When an acknowledgement arrives:
cwnd = C(T − K)³ + Wmax
When a packet is lost (no acknowledgement is received):
cwnd = β · Wmax
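The window function can be checked numerically. C = 0.4 is a commonly used scaling constant but is an assumption here, and K is derived so that the cubic curve agrees with the decrease rule cwnd = β·Wmax at T = 0:

```python
# Numerical check of the CUBIC window function described above.
# C = 0.4 is an assumed scaling constant; BETA = 0.8 as in the text.

C = 0.4
BETA = 0.8

def cubic_K(w_max):
    """Time for the cubic curve to climb back to w_max after a loss;
    chosen so that cwnd(0) = BETA * w_max."""
    return (w_max * (1.0 - BETA) / C) ** (1.0 / 3.0)

def cubic_cwnd(T, w_max):
    """Congestion window T seconds after the last congestion event."""
    return C * (T - cubic_K(w_max)) ** 3 + w_max

w_max = 100.0
# Just after the loss the window equals beta * w_max = 80 ...
assert abs(cubic_cwnd(0.0, w_max) - BETA * w_max) < 1e-9
# ... and at T = K it is back at the plateau w_max = 100.
assert abs(cubic_cwnd(cubic_K(w_max), w_max) - w_max) < 1e-9
```

Beyond T = K the same cubic term turns positive, giving the slow-then-fast probing above Wmax described in the next paragraph.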
The window growth function of CUBIC is a cubic function whose shape is very similar to BIC's
growth function. Starting from the last reduction, the window grows very fast, slows as it
approaches Wmax, and around Wmax the increment becomes zero. Above Wmax, CUBIC starts probing for
more available bandwidth: the window grows slowly at first and then faster as it moves away from
Wmax. The slow growth near the plateau improves the stability of the protocol, while the fast
growth ensures its scalability. The cubic function also ensures intra-protocol fairness among
different flows of the same protocol, and because the growth rate is dominated by the elapsed
time T, it provides a good RTT-fairness property [3], [5], ensuring linear RTT fairness between
competing flows. To further improve fairness and stability, the window increment is limited to at
most Smax per second, so the window grows linearly when it is far from Wmax. Under small RTTs,
CUBIC is fairer than other high-speed protocols. Among its other advantages [6], [7], [8], CUBIC
improves the fairness characteristics of BIC while keeping its scalability and stability intact.
Its main disadvantage is that it is slow to increase its congestion window enough to fully occupy
the link.

FAST continuously monitors a flow's RTT to gauge the level of congestion in the network. FAST
updates the congestion window every other RTT, based on the observed RTT and the number of
packets it tries to maintain in the router queues.
FAST does not perform well when the router queue is smaller than a certain size, and it increases
the congestion window aggressively as long as the observed minimum RTT does not change. In FAST,
TCP congestion control is composed of four components that are independent of each other and can
be designed separately. The data control component decides which packet to transmit, the window
control component decides how many packets to transmit, and the burstiness control component
decides when to transmit them. The estimation component provides the information on which the
other three components base their decisions, supplying two types of information for each packet
sent. The data control component chooses the next packet to send from three options: a new
packet, a negatively acknowledged packet, or a packet for which no acknowledgement has yet been
received. Window control paces packets in order at the RTT timescale. In the FAST model, the
network consists of a set of resources with limited capacity, such as transmission links,
processing units, and memory.
The network is shared by different unicast flows, identified by their sources. The main problems
faced by this protocol are unfairness and instability in small-buffer or long-delay networks [9].

HS-TCP was proposed by Sally Floyd [10] especially for large congestion windows. It addresses
standard TCP's difficulty in sustaining a large congestion window when the packet loss rate is
not too high, by slightly changing the increment and decrement parameters of standard TCP. It
also attempts to improve the loss recovery time of traditional TCP by modifying its AIMD
algorithm; the enhanced algorithm affects only higher congestion windows. For the
congestion window below a threshold value, called low window, the standard AIMD algorithm is
applied; otherwise the high-speed AIMD algorithm is used. The important and demanding task of
HS-TCP is to make its flows more aggressive without requiring any response from the receiver or
the routers on the path. HS-TCP is specifically designed for low loss rates in high-bandwidth
environments and tries to be more aggressive than standard TCP.

Scalable TCP (STCP) was proposed by Tom Kelly [11], [12] as an enhanced version of HighSpeed TCP
(HS-TCP). Its basic goal is to improve loss recovery time, which standard TCP does not cater for.
Being an enhancement of HS-TCP, it takes its main idea from that protocol. Comparing STCP with
standard TCP and HS-TCP: in standard TCP and HS-TCP connections, the packet loss recovery time
depends on both the RTT and the size of the connection window, whereas in scalable TCP it depends
only on the RTT. Despite all the other changes and modifications, standard TCP's slow start phase
is left unmodified in scalable TCP [8]. Scalable TCP increases its congestion window more quickly,
and decreases it more slowly, than standard TCP. As in HS-TCP, STCP sets a threshold window size:
whenever cwnd exceeds this threshold, STCP is used, otherwise standard TCP is used; the default
threshold is 16 segments. STCP can be deployed incrementally, and for congestion windows below
the threshold it behaves exactly like standard TCP. It can double its sending rate, from any
rate, in about 70 RTTs, hence the name scalable TCP.

TCP Vegas was proposed by Habibullah Jamal and Kiran Sultan; it uses packet delay instead of
packet loss to determine the size of the congestion window [13], [14]. Unlike congestion control
algorithms that take measures only after a packet drop has occurred, TCP Vegas is sensitive to
increases in the RTT. Vegas extends TCP's retransmission mechanism in the following ways. (a)
With the transmission of each segment, the system clock is read and recorded; when the ACK
arrives, the clock is read again and the RTT is calculated from the timestamp recorded for the
relevant segment. This more accurate RTT is used to compute the retransmission timeout. (b) When
a duplicate acknowledgement is received, the algorithm checks whether the new RTT (current time
minus recorded timestamp) is greater than the RTO; if so, Vegas retransmits the segment without
waiting for a third duplicate acknowledgement. (c) On receiving a non-duplicate acknowledgement,
whether the first or second one after a retransmission, Vegas again checks whether the RTT
exceeds the RTO and, if it does, retransmits the segment. Thus certain ACKs help Vegas decide
whether a timeout should occur.

The Explicit Control Protocol (XCP) is a feedback-based congestion control scheme that uses
direct, precise router feedback to avoid congestion, and was developed with scalability and
generality in mind. An Explicit Congestion Notification capable router immediately informs the
sender about congestion in the network, and the protocol performs well on high
bandwidth-delay-product networks. XCP divides the resource allocation function between a
congestion controller and a fairness controller: the duty of the congestion controller is to
ensure that flows use all the available capacity, while the fairness controller allocates that
capacity fairly among flows. Most congestion control schemes cannot perform this division; in
XCP, it is this separation that makes the two resource allocation functions implementable and
explainable.
3. PROPOSED SOLUTION
The proposed solution is based on k-NN: we have developed a packet drop guesser module that
considers only two parameters, time and congestion window. Several other parameters are of course
possible, but they are not taken into account in this work, as the two chosen parameters are
considered sufficient and the most relevant to the problem at hand. We get information about
network status in the form of two feedback events: (a) when we receive an ACK packet, it
indicates non-congestion, as the data is getting through; (b) when a timer expires or a Not-ACK
packet is received, it indicates that the network may be congested. Our module therefore keeps a
record of these two types of events (ACK and drop events), recording the congestion window along
with the time instant of each event. This history helps us match patterns that distinguish random
packet drops caused by noisy channels from genuine drops due to network congestion. Packet drops
caused by reasons other than congestion will be randomly distributed, so there will be many ACK
events in the neighbourhood of such drop events, whereas drops due to congestion will show a
consistent pattern, with the congestion window almost the same for all of them. Thus a simple
k-NN technique can distinguish congestion drops from random drops: k-NN maintains a history of
previous instances and, on the basis of a majority among them, estimates whether a packet drop
was due to congestion or not. The k-NN algorithm is presented below.
3.1 Packet Drop Guesser Module
Our packet drop guesser module works as follows: its input is an event and its output is true or
false. If the event type is ACK, the event is simply recorded and the procedure exits. If the
event is a drop, it is also recorded; then, if the current history size is smaller than one third
of the maximum history size, the module assumes there is no congestion and returns false,
otherwise it returns the result of the k-NN prediction algorithm.
3.2 K-NN Prediction Algorithm
When the history is large enough, the k-NN prediction algorithm given below is invoked; it
returns true when the drop is judged to be due to congestion.
Packet Drop Guesser Module
Input: Event
Output: Return TRUE/FALSE
Procedure:
1. If Event_Type= ACK
a. Record Event and Exit.
2. Else
a. Record Event
b. If MAX_SIZE/3 > CUR_SIZE
i. Return FALSE // No congestion
c. Else
i. Result= KNN_Prediction_Algorithm()
ii. Return Result
KNN_Prediction_Algorithm
Input: Drop_Event, Event_Record, Range
Output: Return TRUE/FALSE
Procedure:
1. TRUE_Count=0
2. FALSE_Count=0
3. For each Event in Event_Record
If Event is within Range of Drop_Event
i. If Event.Flag = TRUE
TRUE_Count++
ii. Else
FALSE_Count++
4. End For
5. If TRUE_Count < FALSE_Count
Return FALSE
6. Else
Return TRUE
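The two procedures above can be made runnable as a short Python sketch. The history capacity, the neighbourhood Range, and the tuple representation of events are illustrative choices not fixed by the paper.

```python
# Runnable sketch of the packet drop guesser and k-NN prediction above.
# MAX_SIZE, RANGE, and the event tuples are illustrative assumptions.

MAX_SIZE = 30      # history capacity (illustrative)
RANGE = 1.0        # time neighbourhood for the k-NN vote, in seconds

history = []       # event records: (time, cwnd, is_ack)

def record_event(time, cwnd, is_ack):
    """Keep a bounded history of ACK and drop events."""
    history.append((time, cwnd, is_ack))
    if len(history) > MAX_SIZE:
        history.pop(0)

def knn_prediction(drop_time):
    """Majority vote over events near the drop in time: a neighbourhood
    dominated by ACKs suggests a random (non-congestion) drop."""
    ack_count = drop_count = 0
    for time, cwnd, is_ack in history:
        if abs(time - drop_time) <= RANGE:
            if is_ack:
                ack_count += 1
            else:
                drop_count += 1
    return drop_count >= ack_count         # True -> congestion

def packet_drop_guesser(time, cwnd, is_ack):
    """Returns True when a drop event is judged to be due to congestion."""
    record_event(time, cwnd, is_ack)
    if is_ack:
        return False                       # ACK: just record and exit
    if len(history) < MAX_SIZE // 3:       # too little history yet
        return False
    return knn_prediction(time)
```

With a dense run of ACKs around a drop the module returns False (random drop), while a run of drops in the same neighbourhood makes it return True (congestion).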
4. EMPIRICAL EVALUATION
4.1 Simulation Parameters
This section compares the performance of the proposed scheme with the traditional CUBIC protocol.
All simulations were performed in NS-2 using the TCP/Linux code available at [15]; Table 1 shows
the parameters used. We incorporated our module into the CUBIC protocol as a test case: CUBIC was
selected because its performance degrades drastically when random packet drops are introduced
[6]. The topology consists of two nodes, a source node X and a destination node Y, connected by a
100 Mbps link. The simulation parameters are shown in the table below.
TABLE 1. Simulation Parameters
S.No  Parameter           Value
01    No. of flows        1
02    Link delay          64 msec
03    Packet size         1448 bytes
04    Buffer size         220 packets
05    Simulation time     50 sec
06    Minimum bandwidth   100 Mbps
4.2 Performance Metrics
The following performance metrics are used to compare the protocols.
• Throughput: the main performance parameter of this work; it is the rate at which a device sends
data successfully, expressed in Mbps.
• Link utilization: the percentage of the link bandwidth currently in use.
• Congestion window (cwnd): the maximum number of packets that can be in flight at a particular
time, read at particular instants from the CUBIC protocol. The congestion window is a variable
that shrinks when the network is noisy.
In this paper, however, only throughput is taken into account as the performance metric for the
different error rates.
5. RESULTS AND EXPERIMENTS
This section compares the performance of the CUBIC protocol under normal network conditions,
CUBIC without k-NN, and CUBIC with the k-NN technique. Throughput results were obtained in three
environments with random packet drop rates of 0.1, 0.01, and 0.001. Figure 2 shows the throughput
when the random drop rate is 0.1, i.e. one packet dropped out of every ten, which is an extremely
high error rate. With no errors (normal CUBIC) the throughput is good, as shown. The results of
CUBIC without k-NN are, as expected, almost zero at every point, while CUBIC with the k-NN
technique does considerably better, reaching an average throughput of almost 20 Mbps. Because
these values fluctuate sharply, a moving average over the 5 previous values is also plotted to
give a smoother graph. Figure 3 shows the throughput at an error rate of 0.01, i.e. one packet
dropped out of every hundred, a comparatively more stable environment than the 0.1 case. Again,
with no errors the throughput is good, but at an error rate of 0.01 the throughput of plain CUBIC
collapses almost to 0 Mbps.
Figure 2. This shows how throughput of CUBIC is affected with random packet drop probability of 0.1
Incorporating the k-NN module restores an acceptable level of throughput, averaging 30 Mbps.
Similarly, Figure 4 below shows the throughput at an error rate of 0.001, i.e. one packet dropped
out of every thousand. With no errors the throughput is very good, but at an error rate of 0.001
the throughput of plain CUBIC deteriorates badly, to slightly more than 5 Mbps. Incorporating the
k-NN module again brings the throughput up to an acceptable level, which is
almost 40 Mbps. Overall, the throughput improves as the random packet drop rate decreases.
Figure 3. This shows how throughput of CUBIC is affected with random packet drop probability of 0.01
Figure 4. This shows how the throughput of CUBIC is affected by a random packet drop probability of 0.001
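The noisy-channel conditions in Figures 2-4 can be reproduced with a simple Bernoulli drop model, in which each packet is lost independently with probability p (a generic sketch of the experimental setup, not the exact simulator configuration used for the figures):

```python
import random

def bernoulli_channel(num_packets, drop_prob, seed=1):
    """Count how many of `num_packets` are dropped when each packet is
    lost independently with probability `drop_prob`."""
    rng = random.Random(seed)  # seeded for a reproducible run
    return sum(rng.random() < drop_prob for _ in range(num_packets))

# Out of 10,000 packets we expect roughly 1000, 100, and 10 drops
# for the three error rates used in the experiments.
for p in (0.1, 0.01, 0.001):
    print(p, bernoulli_channel(10_000, p))
```

Because the drops are independent of queue occupancy, a loss-based sender that treats every one of them as congestion keeps halving its window, which is exactly the collapse the figures show for plain CUBIC.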
6. CONCLUSIONS
In this paper, the performance of the CUBIC protocol was significantly improved by
incorporating a k-NN module. CUBIC was first evaluated in an error-free environment, where
its throughput was very good and comparable to that of other high-speed protocols. The same
experiments were then repeated in noisy environments with error rates of 0.1, 0.01 and 0.001,
where its performance degraded severely; it gradually recovered as the environment became less
noisy and the error rate decreased, and in every noisy scenario the k-NN module restored a
substantial fraction of the lost throughput. Further modifications and extensions can still be
made to obtain an even better protocol.
ACKNOWLEDGEMENTS
All members of International Islamic University have contributed towards the various
developments reported in this paper. In particular, we would like to thank Dr. Sher and
Dr. Arif for their contributions in improving the paper.
REFERENCES
[1] Junsoo Lee, Stephan Bohacek, Joao P. Hespanha, and Katia Obraczka, "A Study of TCP Fairness in
High-Speed Networks", University of Southern California, white paper, 2007.
[2] S. Molnár, B. Sonkoly, and T. A. Trinh, "A comprehensive TCP fairness analysis in high speed
networks", Computer Communications, Vol. 32, No. 13-14, pp. 1460-1484, Aug. 2009.
[3] L. Xu, K. Harfoush, and I. Rhee, "Binary Increase Congestion Control for Fast Long-Distance
Networks", in Proceedings of IEEE INFOCOM, Hong Kong, March 2004.
[4] M. Jehan, G. Radhamani, and T. Kalakumari, "Experimental Evaluation of TCP BIC and Vegas in
MANETs", International Journal of Computer Applications (0975-8887), Vol. 16, No. 1, 2010.
[5] Pankaj Sharma, "Performance Analysis of High-Speed Transport Control Protocols", Master of
Science (Computer Science) Thesis, Clemson University, August 2006.
[6] K. Satyanarayan Reddy and Lokanatha C. Reddy, "A Survey on Congestion Control Protocols for
High Speed Networks", IJCSNS International Journal of Computer Science and Network Security,
Vol. 8, No. 7, July 2008.
[7] S. Ha, I. Rhee, and L. Xu, "CUBIC: A New TCP-Friendly High-Speed TCP Variant", SIGOPS
Operating Systems Review, Vol. 42, No. 5, pp. 64-74, 2008.
[8] Injong Rhee and Lisong Xu, "CUBIC: A New TCP-Friendly High-Speed TCP Variant", Department
of Computer Science, North Carolina State University, Raleigh, NC 27695-7534, USA, 2006.
[9] C. Jin, D. X. Wei, S. H. Low, and S. Hegde, "FAST TCP: Motivation, Architecture, Algorithms, and
Performance", IEEE/ACM Transactions on Networking, Vol. 14, No. 6, pp. 1246-1259, Dec. 2006.
[10] S. Floyd, "HighSpeed TCP for Large Congestion Windows", Internet Draft
draft-floyd-tcp-highspeed-01.txt, February 2003.
[11] Tom Kelly, "Scalable TCP: Improving Performance in High-Speed Wide Area Networks", 2004.
[12] Tom Kelly, "Scalable TCP: Improving Performance in High-Speed Wide Area Networks", 2005.
[13] Habibullah Jamal and Kiran Sultan, "Performance Analysis of TCP Congestion Control Algorithms",
International Journal of Computers and Communications, Vol. 2, Issue 1, 2008.
[14] D. M. Lopez-Pacheco and C. Pham, "Robust Transport Protocol for Dynamic High-Speed Networks:
enhancing the XCP approach", in Proceedings of IEEE MICCICON, 2005.
[15] http://netlab.caltech.edu/projects/ns2tcplinux/ns2linux/.
AUTHORS PROFILE
Mr. Ghani-ur-Rehman is working as a Lecturer in the Department of Computer Science at
Khushal Khan Khattak University, Karak. He has more than 2 years of experience teaching
high-speed networks. He received his BS (CS) from Kohat University of Science & Technology,
Kohat, in 2006 and his MS (CS) from International Islamic University, Islamabad, in 2012.
He is also a research scholar at Khushal Khan Khattak University, Karak.
Mr. Asif is working as a Lecturer at Government Degree College Ahmad Abad, Karak. He has
more than 2 years of experience teaching networking subjects. He received his MS (CS) degree
from IIU, Islamabad. He is also a research scholar at Government Degree College Ahmad Abad,
Karak.
Mr. Israrullah is working as a Lecturer at the Virtual University of Pakistan. He has more
than 4 years of experience teaching networking subjects. He received his MS (CS) degree from
the National University of Computer & Emerging Sciences, Islamabad. He is also a research
scholar at the National University of Computer & Emerging Sciences.