This document summarizes research on enhancing the performance of HTTP web traffic using updated transport layer techniques. It describes how standard TCP behavior is unsuitable for bursty HTTP traffic. The Congestion Window Validation (CWV) method was proposed to address this, but had drawbacks. A new method called newCWV was designed to estimate available path capacity more accurately and set the congestion window appropriately. The paper discusses implementing newCWV in the Linux TCP/IP stack and experimental results showing a 50% improvement in web browsing speed over conventional TCP in an uncongested network.
International Journal of Engineering Research and Applications (IJERA) is a team of researchers, not a publication service or a private publisher running journals for monetary benefit; we are an association of scientists and academics focused solely on supporting authors who want to publish their work. The articles published in our journal can be accessed online, and all articles are archived for real-time access.
Our journal system primarily aims to bring out the research talent and work of scientists, academics, engineers, practitioners, scholars, and postgraduate students of engineering and science. The journal aims to cover scientific research in a broad sense rather than publishing only a niche area, enabling researchers from various verticals to publish their papers. It also aims to provide a platform for researchers to publish in a shorter time, enabling them to continue their work. All published articles are freely available to scientific researchers in government agencies, to educators, and to the general public. We are making serious efforts to promote our journal across the globe, and we are confident it will serve as a scientific platform for all researchers to publish their work online.
Abstract - The Transmission Control Protocol (TCP) is a connection-oriented, reliable, end-to-end protocol that supports flow and congestion control. With the evolution and rapid growth of the Internet and the emergence of the Internet of Things (IoT), flow and congestion have a clear impact on network performance. In this paper we study the congestion control mechanisms Tahoe, Reno, NewReno, SACK, and Vegas, which were introduced to control network utilization and increase throughput. In the performance evaluation we measure metrics such as throughput, packet loss, and delivery ratio, and we reveal the impact of the congestion window (cwnd). The results show that SACK performs better than NewReno in terms of number of packets sent, throughput, and delivery ratio, while Vegas shows the best performance of all.
This document presents a formal modeling and verification approach for TCP congestion control implementations using colored Petri nets. It describes a verification environment that includes two TCP endpoints with an intermediary acting as a "bad" router to control packet flow. A colored Petri net model of TCP congestion control is developed based on this environment. The model encodes controllable and observable events to allow verification by searching the model's occurrence graph using a randomized algorithm.
This document summarizes a review paper on congestion control approaches for real-time streaming applications on the Internet. It discusses how TCP is not well-suited for real-time streaming due to its reliance on packet loss and variable bitrates. The paper reviews different end-to-end and active queue management approaches for congestion control that aim to reduce latency and jitter. It covers issues with single and shared bottlenecks on the Internet that can lead to congestion and the need for new transport protocols and congestion control for real-time media streaming.
Recital Study of Various Congestion Control Protocols in Wireless Networks - iosrjce
IOSR Journal of Computer Engineering (IOSR-JCE) is a double-blind peer-reviewed international journal that provides rapid publication (within a month) of articles in all areas of computer engineering and its applications. The journal welcomes publication of high-quality papers on theoretical developments and practical applications in computer technology. Original research papers, state-of-the-art reviews, and high-quality technical notes are invited for publication.
This document discusses and compares several congestion control protocols for wireless networks, including TCP, RCP, and RCP+. It implemented an enhanced version of RCP+ in the NS-2 simulator. Simulation results showed that the proposed approach achieved higher throughput and packet delivery ratio than TCP and RCP+ in a wireless network with 10-50 nodes, with performance degrading as the number of nodes increased beyond 20 due to increased congestion. The paper analyzes the mechanisms and equations of each protocol and argues the proposed approach combines benefits of improved AIMD and RCP+ to address their individual shortcomings.
Effective Router Assisted Congestion Control for SDN IJECEIAES
This document proposes a new congestion control method called PACEC (Path Associativity Centralized Congestion Control) that works within the Software Defined Networking (SDN) framework. PACEC aims to overcome weaknesses of traditional Router Assisted Congestion Control (RACC) methods by utilizing global network information available in SDN. It calculates an aggregate rate for the entire data path rather than individual links. The controller collects switch utilization data and uses it to determine the path rate (Rp), updating it each control period. Simulation results show PACEC achieves better efficiency and fairness than TCP and RCP.
IRJET - Simulation Analysis of a New Startup Algorithm for TCP New Reno - IRJET Journal
This document presents a simulation analysis of a new startup algorithm for TCP New Reno to improve responsiveness for short-lived applications. The proposed TCP SYN Loss (TSL) startup algorithm uses a less conservative congestion response than standard TCP when connection setup packets are lost. Simulations are conducted using the ns-2 network simulator to evaluate the performance of TSL variants under different levels of congestion. The main results show that TSL variants can achieve an average latency gain of 15 round-trip times compared to standard TCP at up to 90% link utilization with a packet loss rate of 1%.
Improving Performance of TCP in Wireless Environment using TCP-P - IDES Editor
Improving the performance of the Transmission Control Protocol (TCP) in wireless environments has been an active research area. The main reason behind TCP's performance degradation is its inability to detect the actual cause of packet losses in a wireless environment. In this paper, we provide simulation results for TCP-P (TCP-Performance). TCP-P is an intelligent protocol for wireless environments that is able to distinguish the actual causes of packet losses and apply an appropriate solution to each.
TCP-P deals with three main issues: congestion in the network, disconnection in the network, and random packet losses. TCP-P consists of a congestion avoidance algorithm and a disconnection detection algorithm, together with some changes to the TCP header. If congestion occurs in the network, the congestion avoidance algorithm is applied: TCP-P counts the number of packets sent and acknowledgements received and sets a sending buffer value accordingly, so that it can prevent congestion from occurring. In the disconnection detection algorithm, TCP-P senses the medium continuously to detect a disconnection in the network as it happens. TCP-P also modifies the TCP packet header so that a lost packet can itself notify the sender that it is lost. This paper describes the design of TCP-P and presents results from experiments using the NS-2 network simulator. The simulations show that TCP-P is 4% more efficient than TCP-Tahoe, 5% more efficient than TCP-Vegas, 7% more efficient than TCP-SACK, and equally efficient as TCP-Reno and TCP-New Reno. Even so, TCP-P can be considered more effective than TCP-Reno and TCP-New Reno, since it is able to solve more of TCP's issues in a wireless environment.
IMPACT OF CONTENTION WINDOW ON CONGESTION CONTROL ALGORITHMS FOR WIRELESS ADH... - cscpconf
The TCP congestion control mechanism is highly dependent on MAC-layer backoff algorithms that predict the optimal contention window size to increase TCP performance in wireless ad hoc networks. This paper critically examines the impact of the contention window on TCP congestion control approaches. The modified TCP congestion control method stabilizes the congestion window, which provides higher throughput and shorter delay than traditional TCP. Various backoff algorithms used to adjust the contention window are simulated using NS2 along with the modified TCP, and their performance is analyzed to depict the influence of the contention window on TCP performance, considering metrics such as throughput, delay, packet loss, and end-to-end delay.
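As background for the contention-window discussion above, the following is a minimal sketch of the binary exponential backoff (BEB) rule that 802.11-style MAC layers use to grow and reset the contention window. The CW_MIN/CW_MAX values follow common 802.11 DCF defaults, and the function names are illustrative assumptions, not taken from the paper.

```python
import random

# Illustrative binary exponential backoff (BEB) for an 802.11-style MAC.
# The contention window doubles (up to CWmax) after a collision and
# resets to CWmin after a successful transmission; a node then waits a
# uniform number of slots in [0, CW] before retrying.

CW_MIN, CW_MAX = 31, 1023  # typical 802.11 DCF values (assumption)

def next_cw(cw, collision):
    """Return the next contention window after an attempt."""
    if collision:
        return min(2 * (cw + 1) - 1, CW_MAX)  # 31 -> 63 -> 127 -> ... -> 1023
    return CW_MIN                             # success: reset to CWmin

def backoff_slots(cw, rng=random):
    """Draw a uniform backoff count in [0, cw]."""
    return rng.randint(0, cw)
```

Under heavy contention the window ratchets upward quickly, which is exactly the interaction with TCP's own congestion window that the paper's simulations examine.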
PERFORMANCE EVALUATION OF SELECTED E2E TCP CONGESTION CONTROL MECHANISM OVER ... - ijwmn
TCP is one of the main protocols that govern Internet traffic nowadays. However, it suffers significant performance degradation over wireless links. Since wireless networks have recently become the leading communication technologies, it is imperative to introduce effective solutions for TCP congestion control mechanisms over such networks. In this research four end-to-end TCP implementations are discussed: TCP Westwood, Hybla, Highspeed, and NewReno. The performance of these variants is compared in an emulated LTE environment in terms of throughput, delay, and fairness, using the ns-3 simulator. The simulation results showed that TCP Highspeed achieves the best throughput. Although TCP Westwood recorded the lowest latency values compared to the others, it behaved unfairly among different traffic flows. Moreover, TCP Hybla demonstrated the best fairness behaviour among the TCP variants.
Proposition of an Adaptive Retransmission Timeout for TCP in 802.11 Wireless ... - IJERA Editor
The Transport Control Protocol (TCP) is used to establish and control a session between two endpoints. The problem is that in 802.11 wireless environments TCP always assumes that packet loss is caused by network congestion. However, in these networks packet losses are usually caused by the high bit error rate and by wireless link failures. Researchers have found that TCP performance in wireless networks can be greatly enhanced if the causes of packet loss can be identified, so that appropriate measures can be applied dynamically during an established TCP session to adjust the session parameters. This paper proposes an end-to-end adaptive mechanism that allows a TCP session to dynamically adjust its RTO (Retransmission Timeout); the server adjusts its timers based on feedback from clients. Feedback is piggybacked in the TCP Options header field of the ACK (Acknowledgment) messages; each feedback value is an approximation of the time needed by the wireless channel to get its errors fixed. The mechanism has been validated using numerical analysis and simulations, and then compared to the original TCP protocol. Simulation results show better performance in terms of the number of retransmissions at the server side, due to the decrease in the number of timeouts, and thus lower congestion on the wireless access point.
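The adaptive-RTO proposal above builds on TCP's standard retransmission-timeout estimate. As a reference point, here is a minimal sketch of the baseline SRTT/RTTVAR computation from RFC 6298 (the Jacobson/Karels algorithm); the class name and structure are illustrative assumptions, and the paper's client-feedback adjustment is not modeled.

```python
# Baseline retransmission-timeout estimation per RFC 6298: SRTT and
# RTTVAR are exponentially smoothed from RTT samples, and
# RTO = SRTT + 4 * RTTVAR, floored at a configurable minimum.

class RtoEstimator:
    def __init__(self, min_rto=1.0):
        self.srtt = None      # smoothed round-trip time (seconds)
        self.rttvar = None    # round-trip time variance estimate
        self.min_rto = min_rto

    def update(self, rtt_sample):
        if self.srtt is None:
            # First measurement (RFC 6298, section 2.2).
            self.srtt = rtt_sample
            self.rttvar = rtt_sample / 2
        else:
            # Subsequent measurements with beta = 1/4, alpha = 1/8.
            self.rttvar = 0.75 * self.rttvar + 0.25 * abs(self.srtt - rtt_sample)
            self.srtt = 0.875 * self.srtt + 0.125 * rtt_sample
        return self.rto()

    def rto(self):
        return max(self.min_rto, self.srtt + 4 * self.rttvar)
```

A mechanism like the paper's would bias these timers further using the channel-repair-time feedback carried in the ACK options.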
Towards Seamless TCP Congestion Avoidance in Multiprotocol Environments - IDES Editor
In this paper we explore the area of congestion avoidance in computer networks. We provide a brief overview of the current state of the art in congestion avoidance and also present our extension to the TCP congestion avoidance mechanism. This extension was previously published in an international forum; in this paper we describe an improved version which adds multiprotocol support. We present preliminary results obtained in a simulation environment.
The newly introduced approach, called the Advanced Notification Congestion System (ACNS), allows TCP flow prioritization based on the TCP flow's age and the priority carried in the header of the network layer protocol. The aim of this approach is to provide more bandwidth for young and highly prioritized TCP flows by penalizing old, greedy flows with a low priority. Using ACNS, a substantial network performance increase can be achieved.
Comparative Analysis of Different TCP Variants in Mobile Ad-Hoc Network partha pratim deb
The document analyzes the performance of different TCP variants (New Reno, Reno, Tahoe) with MANET routing protocols (AODV, DSR, TORA) through simulation. It finds that in scenarios with 3 and 5 nodes, AODV has better throughput than DSR and TORA for all TCP variants. Throughput decreases for all variants as node count increases. New Reno provides multiple packet loss recovery and is the best choice for AODV in MANETs due to its consistent performance with changes in node count. Further analysis of additional protocols and TCP variants is recommended.
An Effective Approach to Eliminate TCP Incast - Iaetsd
This document proposes an Incast Congestion Control for TCP (ICTCP) scheme to eliminate TCP incast collapse in datacenter environments. TCP incast collapse occurs when multiple synchronized servers send data to the same receiver in parallel, overwhelming the switch buffer and causing packet loss. ICTCP is a receiver-side approach that proactively adjusts the TCP receive window size of connections to control their aggregate burstiness and prevent switch buffer overflow before packet loss occurs. It estimates available bandwidth and uses this as a quota to coordinate receive window increases. For each connection, the receive window is adjusted based on the ratio of the difference between measured and expected throughput. This allows adaptive tuning of receive windows to meet sender throughput needs while avoiding congestion.
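The receive-window tuning idea summarized above can be sketched roughly as follows. This is not ICTCP's actual algorithm: the thresholds on the throughput ratio, the per-MSS window step, and the quota bookkeeping are illustrative assumptions of this sketch, not the published constants.

```python
# Rough, illustrative receiver-side window adjustment in the spirit of
# ICTCP: grow a connection's advertised receive window only while its
# measured throughput is close to the expected throughput and the
# spare-bandwidth quota permits; shrink it when it falls far short.

def adjust_rwnd(rwnd, measured_bps, expected_bps, quota_bps, mss=1460):
    """Return (new_rwnd, new_quota) for one control interval."""
    ratio = (expected_bps - measured_bps) / max(expected_bps, 1)
    if ratio <= 0.1 and quota_bps >= mss * 8:
        return rwnd + mss, quota_bps - mss * 8  # near expectation: grow
    if ratio >= 0.5:
        return max(rwnd - mss, mss), quota_bps  # far below: shrink
    return rwnd, quota_bps                      # otherwise: hold
```

The key design point carried over from the summary is that the receiver, which sees all incast flows, throttles window growth before the switch buffer overflows rather than reacting to loss afterwards.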
Network Design Question 2.) How does TCP prevent Congestion? Discuss.pdf - optokunal1
NetWork Design Question
2.) How does TCP prevent congestion? Discuss the information identifying congestion in the
network as well as the mechanism for reducing congestion?
Solution
Congestion is a problem that occurs on shared networks when multiple users contend for access
to the same resources (bandwidth, buffers, and queues).
Transmission Control Protocol (TCP) uses a network congestion-avoidance algorithm that
includes various aspects of an additive increase/multiplicative decrease (AIMD) scheme, with
other schemes such as slow-start to achieve congestion avoidance.
The TCP congestion-avoidance algorithm is the primary basis for congestion control in the
Internet.
Congestion typically occurs where multiple links feed into a single link, such as where internal
LANs are connected to WAN links. Congestion also occurs at routers in core networks where
nodes are subjected to more traffic than they are designed to handle.
TCP/IP networks such as the Internet are especially susceptible to congestion because of their basically connectionless nature. There are no virtual circuits with guaranteed bandwidth. Packets are injected by any host at any time, and those packets are variable in size, which makes predicting traffic patterns and providing guaranteed service impossible. While connectionless networks have advantages, quality of service is not one of them.
Shared LANs such as Ethernet have their own congestion control mechanisms in the form of
access controls that prevent multiple nodes from transmitting at the same time.
Identifying:
Congestion is primarily reflected in a conventional user's experience as slowness. This reflects a change in the network's effective flow, that is, the time required to transmit data in its entirety from one point to another. The effective flow does not exist as such; it consists in reality of three separate indicators:
*Latency: the effective flow is inversely proportional to the latency.
*Jitter: the variation of latency over time, which impacts the flow through the latency.
*Loss rate: the theoretical bandwidth is inversely proportional to the square root of the loss rate.
These congestion symptoms allow us to rely on objective indicators to characterize it.
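The inverse-square-root relationship between bandwidth and loss rate mentioned above is captured by the well-known Mathis approximation, throughput ≈ C · MSS / (RTT · √p). A small sketch follows; the constant C = √(3/2) assumes the periodic-loss model and is an assumption of this sketch.

```python
import math

# Mathis et al. approximation for steady-state TCP throughput:
#   throughput ~= C * MSS / (RTT * sqrt(p))
# This is the source of the rule of thumb that bandwidth is inversely
# proportional to the square root of the loss rate p.

def mathis_throughput(mss_bytes, rtt_s, loss_rate, c=math.sqrt(1.5)):
    """Approximate achievable throughput in bytes per second."""
    return (c * mss_bytes) / (rtt_s * math.sqrt(loss_rate))

# Halving the loss rate improves throughput by sqrt(2), not by 2x:
t_1pct = mathis_throughput(1460, 0.1, 0.01)   # 1% loss, 100 ms RTT
t_half = mathis_throughput(1460, 0.1, 0.005)  # 0.5% loss, same RTT
```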
Mechanism to reduce congestion:
The standard fare in TCP implementations today has four standard congestion control algorithms
that are now in common use. Their usefulness has passed the test of time.
The four algorithms (Slow Start, Congestion Avoidance, Fast Retransmit, and Fast Recovery) are described below.
(a) Slow Start
Slow Start, a requirement for TCP software implementations, is a mechanism used by the sender to control the transmission rate, otherwise known as sender-based flow control. This is accomplished through the return rate of acknowledgements from the receiver: the rate of acknowledgements returned by the receiver determines the rate at which the sender can transmit data. When a TCP connection first begins, the Slow Start algorithm initializes a congestion window of one segment.
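The interplay of Slow Start and the AIMD congestion-avoidance phase described above can be sketched as a per-RTT window trace. This is a deliberate simplification (real TCP counts bytes and updates per ACK), and it models a Reno-style halving on loss rather than Tahoe's reset to one segment; all names are illustrative.

```python
# Sketch of congestion-window growth (in segments), one step per RTT
# round: exponential growth in slow start until ssthresh, additive
# increase afterwards, multiplicative decrease on loss.

def evolve_cwnd(loss_rounds, total_rounds, ssthresh=16):
    cwnd, trace = 1, []
    for rnd in range(total_rounds):
        trace.append(cwnd)
        if rnd in loss_rounds:
            ssthresh = max(cwnd // 2, 2)  # multiplicative decrease
            cwnd = ssthresh               # Reno-style fast recovery
        elif cwnd < ssthresh:
            cwnd *= 2                     # slow start: exponential growth
        else:
            cwnd += 1                     # congestion avoidance: +1 per RTT
    return trace

# No losses: exponential growth to ssthresh, then linear.
# evolve_cwnd(set(), 6) -> [1, 2, 4, 8, 16, 17]
```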
The document discusses two papers on the Stream Control Transmission Protocol (SCTP). The first paper provides an overview of SCTP, describing its main features like multihoming and multistreaming. It also discusses SCTP's state in research, products available, and technical challenges. The second paper analyzes key SCTP features compared to TCP and UDP, and how SCTP benefits HTTP transfers through multihoming and multistreaming. It outlines SCTP's association setup, data transfer process, and the two socket models supported.
This document analyzes and compares the congestion window behavior of three TCP variants (HS-TCP, Full-TCP, and TCP-Linux) over a Long Term Evolution (LTE) network model using network simulation. It first reviews related work that has analyzed TCP performance but with assumptions like equal window sizes or only considering uploads/downloads. The document then describes the topology and parameters used to simulate the LTE network in NS-2. Simulation results are presented that analyze the slow-start and congestion avoidance phases of each TCP variant individually, and also compare their congestion window behavior over the full simulation period.
ANALYSIS AND EXPERIMENTAL EVALUATION OF THE TRANSMISSION CONTROL PROTOCOL CON... - IRJET Journal
This document analyzes and experimentally evaluates several TCP congestion control algorithms (variants) - TCP cubic, TCP hybla, TCP scalable, TCP Vegas, and TCP Westwood - in a wireless multihop environment. It aims to understand the throughput performance of each variant as the number of nodes increases. The analysis provides insights into how well different variants can adapt to dynamic multihop wireless networks. It experimentally tests the variants in a simulation using Network Simulator 2 and compares their throughput performance under varying node counts. The goal is to help develop more robust TCP algorithms that can effectively manage congestion in challenging wireless network conditions.
This document discusses several approaches to improving TCP performance over mobile networks:
Traditional TCP uses slow start and congestion control mechanisms that reduce efficiency over mobile networks. Indirect TCP segments connections and uses a proxy to improve performance. Snooping TCP buffers packets to enable fast retransmissions without changing endpoints. Mobile TCP freezes the sender's window on disconnection to avoid unnecessary retransmissions. These methods aim to isolate wireless losses from congestion responses and enable fast recovery from errors and handovers.
A novel token based approach towards packet loss control - eSAT Journals
This document summarizes a research paper that proposes a novel congestion control mechanism called Stable Token-Limited Congestion Control (STLCC). STLCC monitors inter-domain traffic rates and limits the number of tokens to control congestion and improve network performance. The authors implemented STLCC in a prototype application and found that it was effective at controlling packet loss and improving network performance compared to other congestion control methods. They concluded that STLCC can automatically measure and reduce congestion to allocate network resources stably.
IJRET: International Journal of Research in Engineering and Technology is an international peer-reviewed online journal published by eSAT Publishing House for the enhancement of research in various disciplines of engineering and technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching, and research in the fields of engineering and technology. We bring together scientists, academicians, field engineers, scholars, and students of related fields of engineering and technology.
The document discusses various approaches to improving TCP for mobile networks. It begins by describing traditional TCP and its mechanisms for congestion control and reliable data transmission. However, TCP was designed for fixed networks and faces challenges in mobile environments due to higher error rates, mobility-induced packet loss, and inefficient slow start behavior after losses. Several TCP improvements are then outlined, including Indirect TCP which segments connections and uses a proxy at the access point to isolate the wireless portion. This allows specialized TCP variants to be used for the wireless link without changing the fixed network. Finally, socket migration during handover is discussed to maintain connections as the mobile host moves.
Transmission Control Protocol (TCP) is a fundamental protocol of the Internet Protocol Suite. TCP complements the Internet Protocol (IP), and it is therefore common to refer to the internet protocol suite as TCP/IP. TCP is used for error detection and for detecting packet loss or out-of-order delivery of data. TCP requests retransmission, rearranges data, and helps with network congestion.
Several congestion control algorithms have been developed, over the last years, to improve TCP's performance over various technologies and network conditions.
The purpose of this assignment is to present TCP, network congestion, and congestion control algorithms, and to simulate different algorithms under different network conditions to measure their performance. For this assignment's needs, the OPNET IT Guru Academic Edition software was used to reproduce projects that have already been published and produced the desired results.
A Survey of Different Approaches for Differentiating Bit Error and Congestion... - IJERD Editor
TCP provides reliable communication over wireless links. Packet loss occurs in wireless networks during data transmission, and these losses are conventionally classified as congestion losses, even though packets are also lost due to random bit errors. Traditional TCP always assumes a packet was lost due to congestion and reduces its congestion window; thus, TCP gives poor performance over wireless links. Many TCP variants have been proposed for congestion control, but they cannot distinguish whether a loss is due to congestion or to bit error, so they reduce the congestion window every time, even though when there is a bit error there is no need to reduce the transmission rate. In this survey, the general approaches taken for differentiating congestion from bit error are discussed.
TCP INCAST AVOIDANCE BASED ON CONNECTION SERIALIZATION IN DATA CENTER NETWORKS - IJCNCJournal
In distributed file systems, a well-known congestion collapse called TCP incast (briefly, Incast) occurs because many servers send data to the same client almost simultaneously, and the resulting packets overflow the port buffer of the link connecting to the client. Incast leads to throughput degradation in the network. In this paper, we propose three methods to avoid Incast, based on the fact that the bandwidth-delay product is small in current data center networks. The first method completely serializes connection establishments; through this serialization the number of packets in the port buffer becomes very small, which leads to Incast avoidance. The second and third methods overlap the slow start period of the next connection with the currently established connection, to improve on the throughput of the first method. Numerical results from extensive simulation runs show the effectiveness of the three proposed methods.
This document proposes a new dynamic approach to congestion control in TCP that detects congestion based on round-trip time rather than packet loss. It suggests using the estimated round-trip time as a threshold and increasing the congestion window linearly as long as the sampled round-trip time stays below that threshold. If the sampled round-trip time exceeds the estimate, the congestion window is halved; if it exceeds the estimate plus four times the round-trip-time deviation, the window is reset to one MSS. By omitting an exponential slow start phase and responding to rising network delays earlier, the approach aims to overcome issues with TCP Reno and Tahoe such as the bottleneck problem. The document discusses advantages, such as preventing network congestion, and disadvantages, such as ambiguity in the measured sample round-trip time.
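The RTT-threshold policy described above can be restated as a small decision rule. The function and parameter names are assumptions of this sketch; the paper specifies the policy, not this code.

```python
# Illustrative restatement of the RTT-based adjustment: linear growth
# while the sampled RTT stays below the estimate, halving when the
# sample exceeds the estimate, and a reset to one MSS when it exceeds
# the estimate plus four deviations (all window units in MSS).

def adjust_cwnd(cwnd, sample_rtt, est_rtt, dev_rtt, mss=1):
    if sample_rtt > est_rtt + 4 * dev_rtt:
        return mss                  # severe delay: back to one MSS
    if sample_rtt > est_rtt:
        return max(cwnd // 2, mss)  # delay rising: halve the window
    return cwnd + 1                 # below threshold: linear increase
```

Because the trigger is delay rather than loss, the sender can back off before queues overflow, which is the stated advantage over Reno/Tahoe-style loss-driven control.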
International Journal of Computer Networks & Communications (IJCNC) - ---- Sc... - IJCNCJournal
International Journal of Computer Networks & Communications (IJCNC)
Citations, h-index, i10-index of IJCNC
---- Scopus, ERA Listed, WJCI Indexed ----
Scopus CiteScore 2022: 1.8
https://airccse.org/journal/ijcnc.html
IJCNC is listed in ERA 2023 as per the Australian Research Council (ARC) Journal Ranking
Scope & Topics
The International Journal of Computer Networks & Communications (IJCNC) is a bimonthly open-access peer-reviewed journal that publishes articles contributing new results in all areas of computer networks and communications. The journal focuses on all technical and practical aspects of computer networks and data communications. Its goal is to bring together researchers and practitioners from academia and industry to focus on advanced networking concepts and to establish new collaborations in these areas.
Authors are solicited to contribute to this journal by submitting articles that illustrate research results, projects, surveying works and industrial experiences that describe significant advances in the Computer Networks & Communications.
Topics of Interest
• Network Protocols & Wireless Networks
• Network Architectures
• High speed networks
• Routing, switching and addressing techniques
• Next Generation Internet
• Next Generation Web Architectures
• Network Operations & management
• Adhoc and sensor networks
• Internet and Web applications
• Ubiquitous networks
• Mobile networks & Wireless LAN
• Wireless Multimedia systems
• Wireless communications
• Heterogeneous wireless networks
• Measurement & Performance Analysis
• Peer to peer and overlay networks
• QoS and Resource Management
• Network Based applications
• Network Security
• Self-Organizing Networks and Networked Systems
• Optical Networking
• Mobile & Broadband Wireless Internet
• Recent trends & Developments in Computer Networks
Paper Submission
Authors are invited to submit papers for this journal through E-mail: ijcnc@airccse.org or through Submission System. Submissions must be original and should not have been published previously or be under consideration for publication while being evaluated for this Journal.
Important Dates
• Submission Deadline : June 30, 2024
• Notification : July 29, 2024
• Final Manuscript Due : August 05, 2024
• Publication Date : Determined by the Editor-in-Chief
Contact Us
Here's where you can reach us: ijcnc@airccse.org or ijcnc@aircconline.com
For other details please visit - http://airccse.org/journal/ijcnc.html
Particle Swarm Optimization–Long Short-Term Memory based Channel Estimation w...IJCNCJournal
Paper Title
Particle Swarm Optimization–Long Short-Term Memory based Channel Estimation with Hybrid Beam Forming Power Transfer in WSN-IoT Applications
Authors
Reginald Jude Sixtus J and Tamilarasi Muthu, Puducherry Technological University, India
Abstract
Non-Orthogonal Multiple Access (NOMA) helps to overcome various difficulties in future wireless communication technologies. When NOMA is utilized with millimeter-wave multiple-input multiple-output (MIMO) systems, however, channel estimation becomes extremely difficult, and effective channel estimation is required to reap the benefits of the NOMA and mm-Wave combination. In this paper, we propose an enhanced particle swarm optimization based long short-term memory estimator network (PSO-LSTMEstNet), a neural network model that can be employed to forecast the bandwidth required in the mm-Wave MIMO network. The prime advantage of the LSTM is its capability to adapt dynamically to the behavior of a fluctuating channel state, and the LSTM stage with adaptive coding and modulation improves the BER. The PSO algorithm is employed to optimize the input weights of the LSTM network. The modified algorithm splits the power according to the channel condition of every single user. Users are first sorted into distinct groups depending upon their respective channel conditions, using a hybrid beamforming approach. The network characteristics are then fine-estimated using PSO-LSTMEstNet after a rough approximation of channel parameters derived from the received data.
Keywords
Signal to Noise Ratio (SNR), Bit Error Rate (BER), mm-Wave, MIMO, NOMA, deep learning, optimization.
Volume URL: https://airccse.org/journal/ijc2022.html
Abstract URL: https://aircconline.com/abstract/ijcnc/v14n5/14522cnc05.html
Pdf URL: https://aircconline.com/ijcnc/V14N5/14522cnc05.pdf
More Related Content
Similar to Enhancing HTTP Web Protocol Performance with Updated Transport Layer Techniques
Improving Performance of TCP in Wireless Environment using TCP-PIDES Editor
Improving the performance of the Transmission Control Protocol (TCP) in wireless environments has been an active research area. The main reason for TCP's performance degradation is its inability to detect the actual cause of packet losses in a wireless environment. In this paper, we provide simulation results for TCP-P (TCP-Performance). TCP-P is an intelligent protocol for wireless environments that can distinguish the actual causes of packet loss and apply an appropriate remedy to each.
TCP-P deals with three main issues: network congestion, network disconnection, and random packet losses. It consists of a congestion avoidance algorithm and a disconnection detection algorithm, together with some changes to the TCP header. If congestion occurs in the network, the congestion avoidance algorithm is applied: TCP-P counts the packets sent and the acknowledgements received and sets a sending buffer value accordingly, so that congestion is prevented. In the disconnection detection algorithm, TCP-P senses the medium continuously to detect a network disconnection. TCP-P also modifies the TCP packet header so that a lost packet can itself notify the sender that it was lost. This paper describes the design of TCP-P and presents results from experiments using the NS-2 network simulator. Simulations show that TCP-P is 4% more efficient than TCP-Tahoe, 5% more efficient than TCP-Vegas, 7% more efficient than TCP-SACK, and as efficient as TCP-Reno and TCP-NewReno. TCP-P can nevertheless be considered more effective than TCP-Reno and TCP-NewReno, since it resolves more of TCP's issues in wireless environments.
IMPACT OF CONTENTION WINDOW ON CONGESTION CONTROL ALGORITHMS FOR WIRELESS ADH...cscpconf
The TCP congestion control mechanism depends heavily on MAC-layer backoff algorithms, which predict the optimal contention window size and thereby determine TCP performance in wireless ad hoc networks. This paper critically examines the impact of the contention window on TCP congestion control approaches. A modified TCP congestion control method stabilizes the congestion window, providing higher throughput and shorter delay than traditional TCP. Various backoff algorithms used to adjust the contention window are simulated in NS-2 alongside the modified TCP, and their performance is analyzed to show the influence of the contention window on TCP performance using metrics such as throughput, delay, packet loss, and end-to-end delay.
PERFORMANCE EVALUATION OF SELECTED E2E TCP CONGESTION CONTROL MECHANISM OVER ...ijwmn
TCP is one of the main protocols governing Internet traffic today. However, it suffers significant performance degradation over wireless links. Since wireless networks now lead communication technologies, it is imperative to introduce effective solutions for TCP congestion control over such networks. This research discusses four end-to-end TCP implementations: TCP Westwood, Hybla, HighSpeed, and NewReno. The performance of these variants is compared in an emulated LTE environment, simulated with ns-3, in terms of throughput, delay, and fairness. The simulation results show that TCP HighSpeed achieves the best throughput. Although TCP Westwood recorded the lowest latency of the group, it behaved unfairly among different traffic flows. TCP Hybla demonstrated the best fairness behaviour among the variants.
Proposition of an Adaptive Retransmission Timeout for TCP in 802.11 Wireless ...IJERA Editor
The Transport Control Protocol (TCP) is used to establish and control a session between two endpoints. The problem is that in 802.11 wireless environments TCP always assumes that packet loss is caused by network congestion, whereas in these networks losses are usually caused by high bit error rates and wireless link failures. Researchers have found that TCP performance in wireless networks can be greatly enhanced if the causes of packet loss can be identified, so that appropriate measures can be applied dynamically to adjust the parameters of an established TCP session. This paper proposes an end-to-end adaptive mechanism that allows a TCP session to dynamically adjust its RTO (Retransmission Timeout): the server adjusts its timers based on feedback from clients. Feedback is piggybacked in the TCP Options header field of ACK (acknowledgment) messages, and each feedback value approximates the time the wireless channel needs to recover from errors. The mechanism has been validated using numerical analysis and simulations and compared to the original TCP protocol. Simulation results show better performance in terms of the number of retransmissions at the server side, due to the decrease in the number of timeouts, and thus lower congestion at the wireless access point.
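The paper's core idea, lengthening the retransmission timeout by a client-supplied estimate of wireless recovery time, can be sketched on top of the standard RFC 6298 RTO computation. How the feedback term combines with the classic formula is an assumption here; only the additive extension is illustrated:

```python
# Sketch: RFC 6298 RTO computation extended with a wireless feedback term.
# The additive combination below is an assumption for illustration.
ALPHA, BETA = 1 / 8, 1 / 4  # RFC 6298 smoothing gains

def update_rto(srtt, rttvar, sample_rtt, feedback):
    """Update smoothed RTT state from one sample and return the new RTO,
    lengthened by the client's estimate of wireless error-recovery time."""
    rttvar = (1 - BETA) * rttvar + BETA * abs(srtt - sample_rtt)
    srtt = (1 - ALPHA) * srtt + ALPHA * sample_rtt
    rto = srtt + 4 * rttvar + feedback
    return srtt, rttvar, rto
```

A positive feedback value stretches the timer during channel-error episodes, so the server retransmits less aggressively while the wireless link recovers.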
Towards Seamless TCP Congestion Avoidance in Multiprotocol EnvironmentsIDES Editor
In this paper we explore congestion avoidance in computer networks. We provide a brief overview of the current state of the art and describe our extension to the TCP congestion avoidance mechanism. This extension was previously published at an international forum; here we describe an improved version that adds multiprotocol support, and we list preliminary results obtained in a simulation environment.
The newly introduced approach, called Advanced Notification Congestion System (ACNS), allows TCP flows to be prioritized based on flow age and the priority carried in the network-layer protocol header. The aim of this approach is to provide more bandwidth to young and highly prioritized TCP flows by penalizing old, greedy flows with low priority. Using ACNS, a substantial increase in network performance can be achieved.
Comparative Analysis of Different TCP Variants in Mobile Ad-Hoc Network partha pratim deb
The document analyzes the performance of different TCP variants (New Reno, Reno, Tahoe) with MANET routing protocols (AODV, DSR, TORA) through simulation. It finds that in scenarios with 3 and 5 nodes, AODV has better throughput than DSR and TORA for all TCP variants. Throughput decreases for all variants as node count increases. New Reno provides multiple packet loss recovery and is the best choice for AODV in MANETs due to its consistent performance with changes in node count. Further analysis of additional protocols and TCP variants is recommended.
Iaetsd an effective approach to eliminate tcp incastIaetsd Iaetsd
This document proposes an Incast Congestion Control for TCP (ICTCP) scheme to eliminate TCP incast collapse in datacenter environments. TCP incast collapse occurs when multiple synchronized servers send data to the same receiver in parallel, overwhelming the switch buffer and causing packet loss. ICTCP is a receiver-side approach that proactively adjusts the TCP receive window size of connections to control their aggregate burstiness and prevent switch buffer overflow before packet loss occurs. It estimates available bandwidth and uses this as a quota to coordinate receive window increases. For each connection, the receive window is adjusted based on the ratio of the difference between measured and expected throughput. This allows adaptive tuning of receive windows to meet sender throughput needs while avoiding congestion.
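The per-connection receive-window rule can be sketched as follows. The threshold values (gamma1, gamma2) and all names are illustrative assumptions, not taken verbatim from the ICTCP paper:

```python
# Sketch of ICTCP-style receiver-side window tuning.
# Thresholds and names are illustrative assumptions.
def adjust_rwnd(rwnd, measured_bps, expected_bps, mss, gamma1=0.1, gamma2=0.5):
    """Grow or shrink the advertised receive window based on the ratio of
    the difference between expected and measured throughput."""
    if expected_bps <= 0:
        return rwnd
    diff_ratio = (expected_bps - measured_bps) / expected_bps
    if diff_ratio <= gamma1:
        return rwnd + mss                # window limits throughput: increase
    if diff_ratio >= gamma2:
        return max(2 * mss, rwnd - mss)  # window over-provisioned: decrease
    return rwnd                          # within band: hold steady
```

Because the adjustment happens at the receiver, no sender-side TCP changes are needed, which is what makes the scheme deployable in a datacenter.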
NetWork Design Question2.) How does TCP prevent Congestion Dicuss.pdfoptokunal1
Network Design Question
2.) How does TCP prevent congestion? Discuss the information identifying congestion in the network as well as the mechanisms for reducing congestion.
Solution
Congestion is a problem that occurs on shared networks when multiple users contend for access to the same resources (bandwidth, buffers, and queues).
The Transmission Control Protocol (TCP) uses a network congestion-avoidance algorithm that combines aspects of an additive increase/multiplicative decrease (AIMD) scheme with other mechanisms such as slow start. The TCP congestion-avoidance algorithm is the primary basis for congestion control on the Internet.
Congestion typically occurs where multiple links feed into a single link, such as where internal LANs are connected to WAN links. It also occurs at routers in core networks when nodes are subjected to more traffic than they are designed to handle. TCP/IP networks such as the Internet are especially susceptible to congestion because of their basically connectionless nature: there are no virtual circuits with guaranteed bandwidth, and packets of variable size are injected by any host at any time, which makes predicting traffic patterns and providing guaranteed service impossible. While connectionless networks have advantages, quality of service is not one of them. Shared LANs such as Ethernet have their own congestion control mechanisms in the form of access controls that prevent multiple nodes from transmitting at the same time.
Identifying congestion:
Congestion is primarily perceived by the user as slowness. This reflects a change in the network's effective flow, that is, the time required to transmit data from one point to another. The effective flow does not exist as a single quantity; it is composed of three separate indicators:
* Latency: the effective flow is inversely proportional to the latency.
* Jitter: the variation of latency over time, which affects the flow through its influence on latency.
* Loss rate: the theoretical bandwidth is inversely proportional to the square root of the loss rate.
These congestion symptoms give us objective indicators with which to characterize it.
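The loss-rate indicator above is the well-known Mathis bound on steady-state TCP throughput, which makes the inverse-square-root relationship concrete (the constant C ≈ 1.22 holds under the bound's standard assumptions):

```python
from math import sqrt

def mathis_throughput(mss_bytes, rtt_s, loss_rate):
    """Steady-state TCP throughput bound: rate <= (MSS / RTT) * C / sqrt(p),
    where p is the packet loss rate. Returns bytes per second."""
    C = 1.22  # constant for periodic loss under the bound's assumptions
    return (mss_bytes / rtt_s) * C / sqrt(loss_rate)
```

For example, quadrupling the loss rate halves the achievable rate, since sqrt(4p) = 2 sqrt(p).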
Mechanisms to reduce congestion:
Standard TCP implementations today include four congestion control algorithms in common use, whose usefulness has passed the test of time: Slow Start, Congestion Avoidance, Fast Retransmit, and Fast Recovery.
(a) Slow Start
Slow Start, a requirement for TCP software implementations, is a mechanism used by the sender to control the transmission rate, otherwise known as sender-based flow control. It works through the return rate of acknowledgements from the receiver: the rate at which acknowledgements return determines the rate at which the sender can transmit data. When a TCP connection first begins, the Slow Start algorithm initializes a congestion window of one segment.
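The interplay of Slow Start and Congestion Avoidance described in this answer can be sketched as a per-RTT window update. This is a Tahoe-style simplification with illustrative names, not any particular stack's implementation:

```python
# Per-RTT congestion window update (Tahoe-style simplification).
# cwnd and ssthresh are in segments; names are illustrative.
def next_cwnd(cwnd, ssthresh, loss):
    """Return (new_cwnd, new_ssthresh) after one round trip."""
    if loss:
        return 1, max(2, cwnd // 2)   # loss: halve ssthresh, restart at 1
    if cwnd < ssthresh:
        return cwnd * 2, ssthresh     # slow start: exponential growth
    return cwnd + 1, ssthresh         # congestion avoidance: linear growth
```

Fast Retransmit and Fast Recovery (not shown) refine the loss branch: Reno-style TCP resumes from ssthresh after a triple duplicate ACK instead of restarting from one segment.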
The document discusses two papers on the Stream Control Transmission Protocol (SCTP). The first paper provides an overview of SCTP, describing its main features like multihoming and multistreaming. It also discusses SCTP's state in research, products available, and technical challenges. The second paper analyzes key SCTP features compared to TCP and UDP, and how SCTP benefits HTTP transfers through multihoming and multistreaming. It outlines SCTP's association setup, data transfer process, and the two socket models supported.
This document analyzes and compares the congestion window behavior of three TCP variants (HS-TCP, Full-TCP, and TCP-Linux) over a Long Term Evolution (LTE) network model using network simulation. It first reviews related work that has analyzed TCP performance but with assumptions like equal window sizes or only considering uploads/downloads. The document then describes the topology and parameters used to simulate the LTE network in NS-2. Simulation results are presented that analyze the slow-start and congestion avoidance phases of each TCP variant individually, and also compare their congestion window behavior over the full simulation period.
ANALYSIS AND EXPERIMENTAL EVALUATION OF THE TRANSMISSION CONTROL PROTOCOL CON...IRJET Journal
This document analyzes and experimentally evaluates several TCP congestion control algorithms (variants), TCP CUBIC, TCP Hybla, TCP Scalable, TCP Vegas, and TCP Westwood, in a wireless multihop environment. It aims to understand the throughput performance of each variant as the number of nodes increases, providing insight into how well the different variants adapt to dynamic multihop wireless networks. The variants are tested in a simulation using Network Simulator 2, and their throughput is compared under varying node counts. The goal is to help develop more robust TCP algorithms that can effectively manage congestion in challenging wireless network conditions.
This document discusses several approaches to improving TCP performance over mobile networks:
Traditional TCP uses slow start and congestion control mechanisms that reduce efficiency over mobile networks. Indirect TCP segments connections and uses a proxy to improve performance. Snooping TCP buffers packets to enable fast retransmissions without changing endpoints. Mobile TCP freezes the sender's window on disconnection to avoid unnecessary retransmissions. These methods aim to isolate wireless losses from congestion responses and enable fast recovery from errors and handovers.
A novel token based approach towards packet loss controleSAT Journals
This document summarizes a research paper that proposes a novel congestion control mechanism called Stable Token-Limited Congestion Control (STLCC). STLCC monitors inter-domain traffic rates and limits the number of tokens to control congestion and improve network performance. The authors implemented STLCC in a prototype application and found that it was effective at controlling packet loss and improving network performance compared to other congestion control methods. They concluded that STLCC can automatically measure and reduce congestion to allocate network resources stably.
IJRET : International Journal of Research in Engineering and Technology is an international peer reviewed, online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together Scientists, Academician, Field Engineers, Scholars and Students of related fields of Engineering and Technology.
The document discusses various approaches to improving TCP for mobile networks. It begins by describing traditional TCP and its mechanisms for congestion control and reliable data transmission. However, TCP was designed for fixed networks and faces challenges in mobile environments due to higher error rates, mobility-induced packet loss, and inefficient slow start behavior after losses. Several TCP improvements are then outlined, including Indirect TCP which segments connections and uses a proxy at the access point to isolate the wireless portion. This allows specialized TCP variants to be used for the wireless link without changing the fixed network. Finally, socket migration during handover is discussed to maintain connections as the mobile host moves.
Transmission Control Protocol (TCP) is a fundamental protocol of the Internet Protocol Suite. TCP complements the Internet Protocol (IP), therefore it is common to refer to the internet protocol suit as TCP/IP. TCP is used for error detection, detection of packet loss or out of order delivery of data. TCP requests retransmission, rearranges data and helps with network congestion.
Several congestion control algorithms have been developed, over the last years, to improve TCP's performance over various technologies and network conditions.
The purpose of this assignment is to present TCP, network congestion, and congestion algorithms, and to simulate different algorithms under different network conditions to measure their performance. For this assignment, OPNET IT Guru Academic Edition was used to reproduce previously published projects and obtain the expected results.
A Survey of Different Approaches for Differentiating Bit Error and Congestion...IJERD Editor
TCP provides reliable communication, but in wireless networks packets are lost during data transmission and these losses are always classified as congestion losses, even though packets are also lost due to random bit errors. Traditional TCP assumes every loss is due to congestion and reduces its congestion window, and therefore performs poorly over wireless links. Many TCP variants have been proposed for congestion control, but they cannot distinguish losses caused by congestion from those caused by bit errors, so the congestion window is reduced every time; when a loss is due to a bit error, there is no need to reduce the transmission rate. This survey discusses the general approaches taken for differentiating congestion losses from bit-error losses.
June 2024 - Top 10 Read Articles in Computer Networks & CommunicationsIJCNCJournal
Enhanced Traffic Congestion Management with Fog Computing - A Simulation-Base...IJCNCJournal
Abstract: Accurate latency computation is essential for the Internet of Things (IoT), since connected devices generate vast amounts of data that are processed on cloud infrastructure. However, the cloud alone is not an optimal solution. To overcome this, fog computing enables processing at the edge while still allowing communication with the cloud. Many applications rely on fog computing, including traffic management. In this paper, an Intelligent Traffic Congestion Mitigation System (ITCMS) is proposed to address traffic congestion in heavily populated smart cities. The proposed system is implemented using fog computing and tested in crowded Cairo. The results indicate that the execution time of the simulation is 4,538 seconds and the delay in the application loop is 49.67 seconds. The paper also addresses CPU usage, heap memory usage, throughput, and total average delay, which are essential for evaluating the performance of the ITCMS. The system model is compared with other models using two parameters, throughput and total average delay, against IOV (Internet of Vehicles) and STL (Seasonal-Trend decomposition based on LOESS). The results confirm that the proposed system outperforms the others in terms of higher accuracy, lower latency, and improved traffic efficiency.
Important Dates
· Submission Deadline : June 22, 2024
· Notification : July 22, 2024
· Final Manuscript Due : July 29, 2024
· Publication Date : Determined by the Editor-in-Chief
Contact Us
Here's where you can reach us: ijcnc@airccse.org or ijcnc@aircconline.com
For other details please visit - http://airccse.org/journal/ijcnc.html
Rendezvous Sequence Generation Algorithm for Cognitive Radio Networks in Post...IJCNCJournal
Recent natural disasters have inflicted tremendous damage on humanity, with their scale progressively increasing and leading to numerous casualties. Events such as earthquakes can trigger secondary disasters, such as tsunamis, further complicating the situation by destroying communication infrastructures. This destruction impedes the dissemination of information about secondary disasters and complicates post-disaster rescue efforts. Consequently, there is an urgent demand for technologies capable of substituting for these destroyed communication infrastructures. This paper proposes a technique for generating rendezvous sequences to swiftly reconnect communication infrastructures in post-disaster scenarios. We compare the time required for rendezvous using the proposed technique against existing methods and analyze the average time taken to establish links with the rendezvous technique, discussing its significance. This research presents a novel approach enabling rapid recovery of destroyed communication infrastructures in disaster environments through Cognitive Radio Network (CRN) technology, showcasing the potential to significantly improve disaster response and recovery efforts. The proposed method reduces the time for the rendezvous compared to existing methods, suggesting that it can enhance the efficiency of rescue operations in post-disaster scenarios and contribute to life-saving efforts.
Blockchain Enforced Attribute based Access Control with ZKP for Healthcare Se...IJCNCJournal
The relationship between doctors and patients is reinforced through the expanded communication channels provided by remote healthcare services, resulting in heightened patient satisfaction and loyalty. Nonetheless, the growth of these services is hampered by security and privacy challenges they confront. Additionally, patient electronic health records (EHR) information is dispersed across multiple hospitals in different formats, undermining data sovereignty. It allows any service to assert authority over their EHR, effectively controlling its usage. This paper proposes a blockchain enforced attribute-based access control in healthcare service. To enhance the privacy and data-sovereignty, the proposed system employs attribute-based access control, zero-knowledge proof (ZKP) and blockchain. The role of data within our system is pivotal in defining attributes. These attributes, in turn, form the fundamental basis for access control criteria. Blockchain is used to keep hospital information in public chain but EHR related data in private chain. Furthermore, EHR provides access control by using the attributed based cryptosystem before they are stored in the blockchain. Analysis shows that the proposed system provides data sovereignty with privacy provision based on the attributed based access control.
EECRPSID: Energy-Efficient Cluster-Based Routing Protocol with a Secure Intru...IJCNCJournal
A revolutionary idea that has gained significance in technology for Internet of Things (IoT) networks backed by WSNs is the " Energy-Efficient Cluster-Based Routing Protocol with a Secure Intrusion Detection" (EECRPSID). A WSN-powered IoT infrastructure's hardware foundation is hardware with autonomous sensing capabilities. The significant features of the proposed technology are intelligent environment sensing, independent data collection, and information transfer to connected devices. However, hardware flaws and issues with energy consumption may be to blame for device failures in WSN-assisted IoT networks. This can potentially obstruct the transfer of data. A reliable route significantly reduces data retransmissions, which reduces traffic and conserves energy. The sensor hardware is often widely dispersed by IoT networks that enable WSNs. Data duplication could occur if numerous sensor devices are used to monitor a location. Finding a solution to this issue by using clustering. Clustering lessens network traffic while retaining path dependability compared to the multipath technique. To relieve duplicate data in EECRPSID, we applied the clustering technique. The multipath strategy might make the provided protocol more dependable. Using the EECRPSID algorithm, will reduce the overall energy consumption, minimize the End-to-end delay to 0.14s, achieve a 99.8% Packet Delivery Ratio, and the network's lifespan will be increased. The NS2 simulator is used to run the whole set of simulations. The EECRPSID method has been implemented in NS2, and simulated results indicate that comparing the other three technologies improves the performance measures.
Analysis and Evolution of SHA-1 Algorithm - Analytical TechniqueIJCNCJournal
A 160-bit (20-byte) hash value, sometimes called a message digest, is generated using the SHA-1 (Secure Hash Algorithm 1) hash function in cryptography. This value is commonly represented as 40 hexadecimal digits. It is a Federal Information Processing Standard in the United States and was developed by the National Security Agency. Although it has been cryptographically cracked, the technique is still in widespread usage. In this work, we conduct a detailed and practical analysis of the SHA-1 algorithm's theoretical elements and show how they have been implemented through the use of several different hash configurations.
Optimizing CNN-BiGRU Performance: Mish Activation and Comparative AnalysisIJCNCJournal
Deep learning is currently extensively employed across a range of research domains. The continuous advancements in deep learning techniques contribute to solving intricate challenges. Activation functions (AF) are fundamental components within neural networks, enabling them to capture complex patterns and relationships in the data. By introducing non-linearities, AF empowers neural networks to model and adapt to the diverse and nuanced nature of real-world data, enhancing their ability to make accurate predictions across various tasks. In the context of intrusion detection, the Mish, a recent AF, was implemented in the CNN-BiGRU model, using three datasets: ASNM-TUN, ASNM-CDX, and HOGZILLA. The comparison with Rectified Linear Unit (ReLU), a widely used AF, revealed that Mish outperforms ReLU, showcasing superior performance across the evaluated datasets. This study illuminates the effectiveness of AF in elevating the performance of intrusion detection systems.
An Hybrid Framework OTFS-OFDM Based on Mobile Speed EstimationIJCNCJournal
The Future wireless communication systems face the challenging task of simultaneously providing high-quality service (QoS) and broadband data transmission, while also minimizing power consumption, latency, and system complexity. Although Orthogonal Frequency Division Multiplexing (OFDM) has been widely adopted in 4G and 5G systems, it struggles to cope with a significant delay and Doppler spread in high mobility scenarios. To address these challenges, a novel waveform named Orthogonal Time Frequency Space (OTFS). Designers aim to outperform OFDM by closely aligning signals with the channel behaviour. In this paper, we propose a switching strategy that empowers operators to select the most appropriate waveform based on an estimated speed of the mobile user. This strategy enables the base station to dynamically choose the waveform that best suits the mobile user’s speed. Additionally, we suggest retaining an Integrated Sensing and Communication (ISAC) radar approach for accurate Doppler estimation. This provides precise information to facilitate the waveform selection procedure. By leveraging the switching strategy and harnessing the Doppler estimation capabilities of an ISAC radar.Our proposed approach aims to enhance the performance of wireless communication systems in high mobility cases. Considering the complexity of waveform processing, we introduce an optimized hybrid system that combines OTFS and OFDM, resulting in reduced complexity while still retaining performance benefits.This hybrid system presents a promising solution for improving the performance of wireless communication systems in higher mobility.The simulation results validate the effectiveness of our approach, demonstrating its potential advantages for future wireless communication systems. The effectiveness of the proposed approach is validated by simulation results as it will be illustrated.
Enhanced Traffic Congestion Management with Fog Computing - A Simulation-Base...IJCNCJournal
Accurate latency computation is essential for the Internet of Things (IoT) since the connected devices generate a vast amount of data that is processed on cloud infrastructure. However, the cloud is not an optimal solution. To overcome this issue, fog computing is used to enable processing at the edge while still allowing communication with the cloud. Many applications rely on fog computing, including traffic management. In this paper, an Intelligent Traffic Congestion Mitigation System (ITCMS) is proposed to address traffic congestion in heavily populated smart cities. The proposed system is implemented using fog computing and tested in a crowdedCairo city. The results obtained indicate that the execution time of the simulation is 4,538 seconds, and the delay in the application loop is 49.67 seconds. The paper addresses various issues, including CPU usage, heap memory usage, throughput, and the total average delay, which are essential for evaluating the performance of the ITCMS. Our system model is also compared with other models to assess its performance. A comparison is made using two parameters, namely throughput and the total average delay, between the ITCMS, IOV (Internet of Vehicle), and STL (Seasonal-Trend Decomposition Procedure based on LOESS). Consequently, the results confirm that the proposed system outperforms the others in terms of higher accuracy, lower latency, and improved traffic efficiency.
Rendezvous Sequence Generation Algorithm for Cognitive Radio Networks in Post...IJCNCJournal
Recent natural disasters have inflicted tremendous damage on humanity, with their scale progressively increasing and leading to numerous casualties. Events such as earthquakes can trigger secondary disasters, such as tsunamis, further complicating the situation by destroying communication infrastructures. This destruction impedes the dissemination of information about secondary disasters and complicates post-disaster rescue efforts. Consequently, there is an urgent demand for technologies capable of substituting for these destroyed communication infrastructures. This paper proposes a technique for generating rendezvous sequences to swiftly reconnect communication infrastructures in post-disaster scenarios. We compare the time required for rendezvous using the proposed technique against existing methods and analyze the average time taken to establish links with the rendezvous technique, discussing its significance. This research presents a novel approach enabling rapid recovery of destroyed communication infrastructures in disaster environments through Cognitive Radio Network (CRN) technology, showcasing the potential to significantly improve disaster response and recovery efforts. The proposed method reduces the time for the rendezvous compared to existing methods, suggesting that it can enhance the efficiency of rescue operations in post-disaster scenarios and contribute to life-saving efforts.
Vehicle Ad Hoc Networks (VANETs) have become a viable technology to improve traffic flow and safety on the roads. Due to its effectiveness and scalability, the Wingsuit Search-based Optimised Link State Routing Protocol (WS-OLSR) is frequently used for data distribution in VANETs. However, the selection of MultiPoint Relays (MPRs) plays a pivotal role in WS-OLSR's performance. This paper presents an improved MPR selection algorithm tailored to WS-OLSR, designed to enhance the overall routing efficiency and reduce overhead. The analysis found that the current OLSR protocol has problems such as redundancy of HELLO and TC message packets or failure to update routing information in time, so a WS-OLSR routing protocol based on improved-MPR selection algorithm was proposed. Firstly, factors such as node mobility and link changes are comprehensively considered to reflect network topology changes, and the broadcast cycle of node HELLO messages is controlled through topology changes. Secondly, a new MPR selection algorithm is proposed, considering link stability issues and nodes. Finally, evaluate its effectiveness in terms of packet delivery ratio, end-to-end delay, and control message overhead. Simulation results demonstrate the superior performance of our improved MR selection algorithm when compared to traditional approaches.
May 2024, Volume 16, Number 3 - The International Journal of Computer Network...IJCNCJournal
The International Journal of Computer Networks & Communications (IJCNC) is a bi monthly open access peer-reviewed journal that publishes articles which contribute new results in all areas of Computer Networks & Communications. The journal focuses on all technical and practical aspects of Computer Networks & data Communications. The goal of this journal is to bring together researchers and practitioners from academia and industry to focus on advanced networking concepts and establishing new collaborations in these areas.
Vehicle Ad Hoc Networks (VANETs) have become a viable technology to improve traffic flow and safety on the roads. Due to its effectiveness and scalability, the Wingsuit Search-based Optimised Link State Routing Protocol (WS-OLSR) is frequently used for data distribution in VANETs. However, the selection of MultiPoint Relays (MPRs) plays a pivotal role in WS-OLSR's performance. This paper presents an improved MPR selection algorithm tailored to WS-OLSR, designed to enhance the overall routing efficiency and reduce overhead. The analysis found that the current OLSR protocol has problems such as redundancy of HELLO and TC message packets or failure to update routing information in time, so a WS-OLSR routing protocol based on improved-MPR selection algorithm was proposed. Firstly, factors such as node mobility and link changes are comprehensively considered to reflect network topology changes, and the broadcast cycle of node HELLO messages is controlled through topology changes. Secondly, a new MPR selection algorithm is proposed, considering link stability issues and nodes. Finally, evaluate its effectiveness in terms of packet delivery ratio, end-to-end delay, and control message overhead. Simulation results demonstrate the superior performance of our improved MR selection algorithm when compared to traditional approaches.
A Novel Medium Access Control Strategy for Heterogeneous Traffic in Wireless ...IJCNCJournal
So far, Wireless Body Area Networks (WBANs) have played a pivotal role in driving the development of intelligent healthcare systems with broad applicability across various domains. Each WBAN consists of one or more types of sensors that can be embedded in clothing, attached directly to the body, or even implanted beneath an individual's skin. These sensors typically serve asingle application. However, the traffic generated by each sensor may have distinct requirements. This diversity necessitates a dual approach: tailored treatment based on the specific needs of each traffic typeand the fulfillment of application requirements, such asreliability and timeliness. Never the less, the presence of energy constraints and the unreliable nature of wireless communications make QoS provisioning under such networks a non-trivial task. In this context, the current paper introduces a novel Medium AccessControl (MAC) strategy for the regular traffic applications of WBANs, designed to significantly enhance efficiency when compared to the established MAC protocols IEEE 802.15.4 and IEEE 802.15.6, with a particular focus on improving reliability, timeliness, and energy efficiency.
May_2024 Top 10 Read Articles in Computer Networks & Communications.pdfIJCNCJournal
The International Journal of Computer Networks & Communications (IJCNC) is a bi monthly open access peer-reviewed journal that publishes articles which contribute new results in all areas of Computer Networks & Communications. The journal focuses on all technical and practical aspects of Computer Networks & data Communications. The goal of this journal is to bring together researchers and practitioners from academia and industry to focus on advanced networking concepts and establishing new collaborations in these areas.
A Topology Control Algorithm Taking into Account Energy and Quality of Transm...IJCNCJournal
The efficient use of energy in wireless sensor networks is critical for extending node lifetime. The network topology is one of the factors that have a significant impact on the energy usage at the nodes and the quality of transmission (QoT) in the network. We propose a topology control algorithm for software-defined wireless sensor networks (SDWSNs) in this paper. Our method is to formulate topology control algorithm as a nonlinear programming (NP) problem with the objective to optimizing two metrics, maximum communication range, and desired degree. This NP problem is solved at the SDWSN controller by employing the genetic algorithm (GA) to determine the best topology. The simulation results show that the proposed algorithm outperforms the MaxPower algorithm in terms of average node degree and energy expansion ratio.
Multi-Server user Authentication Scheme for Privacy Preservation with Fuzzy C...IJCNCJournal
The integration of artificial intelligence technology with a scalable Internet of Things (IoT) platform facilitates diverse smart communication services, allowing remote users to access services from anywhere at any time. The multi-server environment within IoT introduces a flexible security service model, enabling users to interact with any server through a single registration. To ensure secure and privacy preservation services for resources, an authentication scheme is essential. Zhao et al. recently introduced a user authentication scheme for the multi-server environment, utilizing passwords and smart cards, claiming resilience against well-known attacks. This paper conducts cryptanalysis on Zhao et al.'s scheme, focusing on denial of service and privacy attacks, revealing a lack of user-friendliness. Subsequently, we propose a new multi-server user authentication scheme for privacy preservation with fuzzy commitment over the IoT environment, addressing the shortcomings of Zhao et al.'s scheme. Formal security verification of the proposed scheme is conducted using the ProVerif simulation tool. Through both formal and informal security analyses, we demonstrate that the proposed scheme is resilient against various known attacks and those identified in Zhao et al.'s scheme.
Andreas Schleicher presents PISA 2022 Volume III - Creative Thinking - 18 Jun...EduSkills OECD
Andreas Schleicher, Director of Education and Skills at the OECD presents at the launch of PISA 2022 Volume III - Creative Minds, Creative Schools on 18 June 2024.
Level 3 NCEA - NZ: A Nation In the Making 1872 - 1900 SML.pptHenry Hollis
The History of NZ 1870-1900.
Making of a Nation.
From the NZ Wars to Liberals,
Richard Seddon, George Grey,
Social Laboratory, New Zealand,
Confiscations, Kotahitanga, Kingitanga, Parliament, Suffrage, Repudiation, Economic Change, Agriculture, Gold Mining, Timber, Flax, Sheep, Dairying,
Elevate Your Nonprofit's Online Presence_ A Guide to Effective SEO Strategies...TechSoup
Whether you're new to SEO or looking to refine your existing strategies, this webinar will provide you with actionable insights and practical tips to elevate your nonprofit's online presence.
Philippine Edukasyong Pantahanan at Pangkabuhayan (EPP) CurriculumMJDuyan
(𝐓𝐋𝐄 𝟏𝟎𝟎) (𝐋𝐞𝐬𝐬𝐨𝐧 𝟏)-𝐏𝐫𝐞𝐥𝐢𝐦𝐬
𝐃𝐢𝐬𝐜𝐮𝐬𝐬 𝐭𝐡𝐞 𝐄𝐏𝐏 𝐂𝐮𝐫𝐫𝐢𝐜𝐮𝐥𝐮𝐦 𝐢𝐧 𝐭𝐡𝐞 𝐏𝐡𝐢𝐥𝐢𝐩𝐩𝐢𝐧𝐞𝐬:
- Understand the goals and objectives of the Edukasyong Pantahanan at Pangkabuhayan (EPP) curriculum, recognizing its importance in fostering practical life skills and values among students. Students will also be able to identify the key components and subjects covered, such as agriculture, home economics, industrial arts, and information and communication technology.
𝐄𝐱𝐩𝐥𝐚𝐢𝐧 𝐭𝐡𝐞 𝐍𝐚𝐭𝐮𝐫𝐞 𝐚𝐧𝐝 𝐒𝐜𝐨𝐩𝐞 𝐨𝐟 𝐚𝐧 𝐄𝐧𝐭𝐫𝐞𝐩𝐫𝐞𝐧𝐞𝐮𝐫:
-Define entrepreneurship, distinguishing it from general business activities by emphasizing its focus on innovation, risk-taking, and value creation. Students will describe the characteristics and traits of successful entrepreneurs, including their roles and responsibilities, and discuss the broader economic and social impacts of entrepreneurial activities on both local and global scales.
A Free 200-Page eBook ~ Brain and Mind Exercise.pptxOH TEIK BIN
(A Free eBook comprising 3 Sets of Presentation of a selection of Puzzles, Brain Teasers and Thinking Problems to exercise both the mind and the Right and Left Brain. To help keep the mind and brain fit and healthy. Good for both the young and old alike.
Answers are given for all the puzzles and problems.)
With Metta,
Bro. Oh Teik Bin 🙏🤓🤔🥰
Enhancing HTTP Web Protocol Performance with Updated Transport Layer Techniques
International Journal of Computer Networks & Communications (IJCNC) Vol.15, No.4, July 2023
DOI: 10.5121/ijcnc.2023.15401
ENHANCING HTTP WEB PROTOCOL PERFORMANCE WITH UPDATED TRANSPORT LAYER TECHNIQUES

Ziaul Hossain¹ and Gorry Fairhurst²
¹Department of Computing, University of Fraser Valley, Abbotsford, BC, Canada
²School of Engineering, University of Aberdeen, UK
ABSTRACT
Popular Internet applications such as web browsing and web video download use the HTTP protocol over the standard Transmission Control Protocol (TCP). Traditional TCP behaviour is unsuitable for this style of application because its transmission rate and traffic pattern differ from those of conventional bulk transfer applications. Previous work analysed the interaction of these applications with the congestion control algorithms in TCP and proposed Congestion Window Validation (CWV) as a solution. However, this method was incomplete and has been shown to present drawbacks. This paper focuses on 'newCWV', which was designed to address these drawbacks. NewCWV provides a practical mechanism to estimate the available path capacity and suggests a more appropriate congestion control behaviour. This paper describes how the algorithm was implemented in the Linux TCP/IP stack and tested by experiments, where results indicate that, with newCWV, web browsing can be 50% faster in an uncongested network.
KEYWORDS
Network Protocols, HTTP, TCP, Congestion Control, newCWV, Bursty TCP traffic
1. INTRODUCTION
With the development of the Internet, many applications have gained enormous popularity. Email, VoIP applications, file sharing, and others each take a share of the total Internet traffic, but the largest share currently belongs to web browsing, with almost 70% of the total traffic across the Internet [1]. Web traffic traditionally uses the TCP and HTTP protocols [2][3] for the request and delivery of web page content. There have already been numerous developments across these protocols aimed at improving performance without replacing the standards themselves.
Many of these updates to TCP focus on congestion control, the technique that defines how much data can be transferred from sender to receiver for an application flow. This ensures that the sending rate of a flow is comparatively safe for the other flows that share the same bottleneck along the path between sender and receiver. Many TCP improvements were designed to improve bulk file transfers. These modifications are not suitable for HTTP traffic, which is 'bursty' (variable-rate traffic with irregular intervals) in nature. This problem has been reported before, and several attempts have been made to find a solution [4][5]. Unfortunately, the existing solutions were still conservative and lacked a proper measurement of the available path capacity for setting the congestion window (cwnd) – the most important parameter of the congestion control method. This shortcoming limits the performance of bursty applications such as HTTP.
A newer method has been developed, termed 'newCWV' [6]. When sending bursty or rate-limited traffic, this method allows a sender to estimate the path capacity more accurately and set the cwnd to an appropriate value. The rationale for newCWV is presented briefly and the algorithm is explained in [6], but there remains a gap in validating its arguments and in measuring the expected application performance improvement.
This paper explains in detail the motivation behind developing newCWV and then analyses web traffic transfer durations to measure improvements. It explores an implementation and integration of newCWV into a Linux stack, with running-code experiments to investigate the protocol behaviour of the method. Through these experiments, this paper shows that when HTTP-like traffic uses newCWV, there is a significant gain in performance compared to conventional TCP: web browsing can proceed approximately 50% faster in an uncongested network.
Section 2 of this paper explains the bursty pattern of the HTTP traffic that we consider and the basics of TCP congestion control, setting the background and the state of the art. Section 3 then explains the modification specified in [6]. Section 4 summarises the experiments and presents the results with discussion. Finally, section 5 concludes the findings.
2. BACKGROUND
To understand the problem of transporting HTTP-like traffic over unmodified TCP, the behaviour of both protocols needs to be examined. This section explores the bursty characteristics of the HTTP protocol and the conventional congestion control of TCP, and explains the interactions when the two are used together.
2.1. Nature of HTTP Traffic
HTTP web traffic is naturally bursty. Burstiness can be defined as a property of an application whose traffic is generated at different rates over its running time, characterised as periods of inactivity separated by periods in which chunks of data are downloaded. [7] showed that popular HTTP applications such as web video (YouTube), maps (Google Maps), and remote control (LogMeIn) all send data downstream at variable rates, with peaks up to 400 KB/s separated by periods of no activity. This burstiness is caused by the HTTP request pattern in the client application. Besides application behaviour, small-scale burstiness can also be caused by TCP itself.
[8] showed that TCP self-clocking, combined with network queuing delay (due to packets of the same flow or cross traffic), can shape the packet inter-arrivals of a TCP flow into an ON-OFF pattern. Modelling the inactivity (OFF) periods of web clients, [9] showed that the OFF duration ranges from a few seconds (with a probability of about 80%) to many tens of seconds (about 10%), which causes burstiness of a TCP connection when requesting content from the server. The cited papers all agree that burstiness has become a common pattern for HTTP traffic. A simple experiment captured packets while accessing a web page from a browser to record this bursty traffic characteristic.
Figure 1 shows the resulting burstiness. In this capture, the chunks of data are the results of HTTP GET requests made by the client. Considerable inactive periods are visible between consecutive bursts.
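The ON-OFF behaviour described above can be sketched with a toy traffic model. This is an illustrative sketch only, not the experiment used in the paper: the function name, burst durations, and the 80/20 weighting are our assumptions, loosely based on the figures cited from [9].

```python
import random

def on_off_trace(n_bursts, seed=1):
    """Toy ON-OFF model of HTTP-like traffic (illustrative only).

    Each burst (ON) downloads a chunk of data and is followed by an
    idle think-time (OFF). Most OFF periods last a few seconds, with
    a minority lasting tens of seconds, loosely mirroring the
    distribution reported in [9]."""
    random.seed(seed)
    trace, t = [], 0.0
    for _ in range(n_bursts):
        on = random.uniform(0.2, 1.0)                  # chunk download time (s)
        off = random.choices(
            [random.uniform(1, 5),                     # short think time (common)
             random.uniform(10, 40)],                  # long idle period (rare)
            weights=[0.8, 0.2])[0]
        trace.append((t, t + on))                      # (burst start, burst end)
        t += on + off
    return trace
```

Plotting such a trace reproduces the qualitative shape of Figure 1: short active bursts separated by much longer idle gaps.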
Figure 1. Bursty traffic pattern of HTTP web traffic for a single web page
2.2. TCP Congestion Control
First standardised in 1981 by the Internet Engineering Task Force (IETF), TCP has been enriched by a series of developments to meet the numerous challenges arising in the underlying network. [10] provides a roadmap describing many of these changes.
The basic operating procedure of TCP is explained in the remainder of this subsection.
A TCP sender uses a parameter called the congestion window, or cwnd, which is initialised to the Initial Window (IW) size. It determines the amount of data that can be sent to the receiver before an acknowledgement is received. The value of the cwnd is important, as it ultimately dictates the transfer rate and hence the response time of an HTTP connection. TCP sets the value of the cwnd using four congestion control algorithms specified in RFC 2001, RFC 2581, and RFC 5681 [11][12][13]: Slow Start, Congestion Avoidance, Fast Retransmit, and Fast Recovery.
Slow Start: In the Slow Start phase, a TCP sender sends data limited by the cwnd value and waits for Acknowledgement (ACK) packets from the receiver. Upon receiving an ACK, the value of the cwnd is increased by one segment. So, if a sender first sends 4 segments (because cwnd = 4) and then receives 4 ACKs for these segments, after increasing the cwnd for each ACK the final cwnd value will be 8, and 8 segments can be sent. As a result of this cumulative increase, the cwnd grows exponentially. This continues until it reaches the Slow Start threshold (ssthresh) or the sender discovers congestion or encounters a loss.
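The per-ACK arithmetic of Slow Start can be sketched in a few lines of Python. This is an illustrative sketch (the function name is ours), not the paper's Linux implementation:

```python
def slow_start(cwnd, acks):
    """Slow Start growth: every ACK adds one segment to cwnd,
    so the window doubles each round trip (exponential growth)."""
    for _ in range(acks):
        cwnd += 1
    return cwnd

# With cwnd = 4, the 4 ACKs for the first flight raise cwnd to 8;
# the next flight of 8 segments raises it to 16, and so on,
# until ssthresh is reached or a loss is detected.
```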
Congestion Avoidance (CA): When the cwnd reaches the ssthresh, a limit is imposed on the
increase of the cwnd. After this point, the size of the cwnd is only increased by one segment in
one RTT. For example, if the cwnd is 8 and all 8 segments sent in an RTT are acknowledged, the
cwnd becomes 9. This corresponds to slower, linear growth of the cwnd.
Fast Retransmit: When a packet is lost, the subsequent packets are received out of order at the
receiver. When this happens, the receiver sends duplicate ACK packets when each segment is
received. All the ACK packets acknowledge the same sequence number. Upon receiving the first
duplicate ACK, the sender does not immediately act but waits to see if this is a re-ordering issue
or a packet loss. When it receives a series of duplicate ACKs equal to the DupACK threshold (3,
4. International Journal of Computer Networks & Communications (IJCNC) Vol.15, No.4, July 2023
4
as currently standardised), the sender TCP retransmits the segment, and resets the congestion
state.
Fast Recovery: When a segment is lost, rather than setting the cwnd to the lowest value and then
sending packets in sequence, it is assumed that a better approach would be to start from an
intermediate value so that the flow is not badly affected. So, after a lost segment has been
successfully retransmitted, CA is performed instead of Slow Start. The cwnd is set to (ssthresh
+ DupACK threshold) segments, which virtually inflates the cwnd. Since the packets acknowledged by
DupACKs have been received, these specific packets have left the network (i.e. were
received successfully). With each further DupACK, the cwnd is incremented by one
segment. When an ACK is received that acknowledges all the data, the cwnd is reset to ssthresh
and CA is resumed.
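The window arithmetic of Fast Retransmit and Fast Recovery described above can be sketched as follows. This is a simplified RFC 5681-style model with invented segment counts; it is not the actual TCP implementation.

```python
# Simplified Fast Retransmit / Fast Recovery window arithmetic, in
# segments only. flight_size=20 is an invented example value.

DUPACK_THRESHOLD = 3  # the standard duplicate-ACK threshold

def on_loss_detected(flight_size):
    """Third dupACK: halve ssthresh, then inflate cwnd by the dupACKs seen."""
    ssthresh = max(flight_size // 2, 2)
    cwnd = ssthresh + DUPACK_THRESHOLD
    return cwnd, ssthresh

cwnd, ssthresh = on_loss_detected(flight_size=20)  # cwnd=13, ssthresh=10
cwnd += 1          # each further dupACK: one more segment left the network
cwnd = ssthresh    # Full ACK: deflate cwnd back to ssthresh, resume CA
print(cwnd, ssthresh)  # prints: 10 10
```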
Selective ACK (SACK): SACK acknowledges the reception of out-of-sequence packets. This
helps avoid the retransmission of already received packets. Using SACK, the receiver appends a
TCP option in the ACK packet that contains a range of non-contiguous data that have been
received. This allows the sender to resend only the data that was missing from the flow. Support
for SACK is negotiated at the beginning of a TCP connection; it can be used if both ends support
the SACK option. [14] showed that NewReno with SACK enabled requires fewer packet
transmissions in the Fast Recovery phase, reduces unnecessary duplicate transmissions, and
avoids unnecessary waiting time.
2.3. TCP Variants
Different variants of TCP have evolved using combinations of these algorithms and with
modifications to control the data flow and to improve response to network congestion. When a
loss is detected, the TCP sender takes measures to control the flow of further packets by reducing
the sending rate. Different TCP variants such as Tahoe, Reno, and NewReno, act differently in
response to detected congestion.
Tahoe used Slow Start, Congestion Avoidance, and Fast Retransmit. A problem with Tahoe is
that it restarted from the initial cwnd value after each packet loss. This limited the throughput. To
deal with this, Reno implemented Fast Recovery. This effectively recovered a single packet loss
within a window. If two or more packets were dropped in the same window, the sender was
forced to timeout and restart in Slow Start. To overcome this problem, NewReno uses a modified
Fast Retransmission phase based on the research [15][16]. This starts when a packet is lost and
ends when a Full ACK is received, which means that all the packets transmitted between the lost
packet and the last packet have been successfully received. However, if there are multiple packet
drops, then the sender will acknowledge a packet that has a lower sequence number than the last
transmitted packet. This is a Partial ACK, and in this case, the lost packet is retransmitted
immediately without waiting for receiving duplicate ACKs. This avoids a possible timeout. This
ensures better performance than Reno but may need to restart after a timeout if many packets are
dropped from the same window.
When there are multiple losses, SACK provides better performance by enabling the receiver to
inform the sender when there are multiple packet losses. A SACK block indicates a contiguous
block of data that has been successfully received. The segment just before the first block and the
gap between any two consecutive blocks denote lost segments (more accurately, these are
segments for which there is no acknowledgment). When the sender receives a SACK option, it
can find out which segments may be lost and retransmits them. SACK is widely implemented in
the current Internet, usually in combination with NewReno [10]. Therefore, NewReno with
SACK is considered a standard congestion control for TCP.
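The gap inference described above can be sketched as follows. This is a hypothetical helper (not a real kernel or socket API) that derives unacknowledged byte ranges from SACK blocks; the sequence numbers are invented for the example.

```python
# Illustrative sketch: inferring missing segments from SACK blocks.
# Blocks are (start, end) byte ranges the receiver reports as held.

def missing_ranges(cum_ack, sack_blocks):
    """Return byte ranges above cum_ack for which no acknowledgment
    exists (candidate retransmissions), given the SACKed blocks."""
    gaps = []
    edge = cum_ack
    for start, end in sorted(sack_blocks):
        if start > edge:
            gaps.append((edge, start))  # hole before this SACKed block
        edge = max(edge, end)
    return gaps

# Receiver holds bytes 2000-3000 and 4000-5000; the cumulative ACK is 1000.
print(missing_ranges(1000, [(2000, 3000), (4000, 5000)]))
# -> [(1000, 2000), (3000, 4000)]
```

The sender would retransmit only these two ranges rather than everything from byte 1000 onward.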
Figure 2 shows the cwnd evolution for different variants with a 50ms path delay. It can be
noticed in this figure that after some variation during the early stage (about 3s), all the variants
reach a steady state with similar behavior. At time 1s, all variants start in the Slow Start phase
and increase the cwnd exponentially until the cwnd reaches ssthresh or the sender suffers a loss.
When this happens, Tahoe reduces its cwnd to an initial value (IW) and resumes in Slow Start.
Once it reaches ssthresh, the sender enters the CA phase (a linear increase of cwnd). Other
variants enter Fast Retransmission and Fast Recovery phase, where the cwnd progresses almost
linearly.
Figure 2. Congestion Window evolution with time during the simulation
Whenever packet loss occurs (at 1.8s, 5s, and 8.5s for Tahoe) Tahoe enters the Fast Retransmit
phase and retransmits the lost packet. The cwnd is set to the restart window (RW) (normally 1
segment) and continues in the Slow Start phase.
NewReno and SACK enter the Fast Recovery stage (at 1.8s for the first loss) where the ssthresh
is set to a new value that is half the size of the unacknowledged data and then cwnd is set to
ssthresh. At 2s, more losses cause NewReno to fail to recover, and eventually the sender is forced
to enter the Slow Start phase. SACK was successful in recovering and eventually followed the
CA until it faced another loss. Later, during the simulation, NewReno successfully recovered
using Fast Retransmit and Fast Recovery. Then it moves to the CA phase.
2.4. TCP CWV
Standard TCP congestion control required that when an application is idle for a period greater
than the Retransmission Timeout (RTO), the cwnd is reset to a small value. So, the next burst of
data requires the sender to re-enter the Slow Start phase from this small value. Several RTTs may
be consumed before the previous sending rate is again achieved. This approach is too
conservative, in that it fails to use available capacity. For a bursty application, this scenario is
quite common where each burst is separated by an idle period. As a result, the application
performance suffers from this conservative behavior of TCP.
Figure 3 explains the situation as a diagram. After RTO, the cwnd (the solid bold red line) drops
to the RW. Then, it takes time to grow back for the next burst. So, the next burst is unnecessarily
delayed while the path capacity might have been enough to transmit the burst in shorter RTTs.
Figure 3. Reducing the cwnd to a low value of RW makes it overly conservative for idle periods.
Figure 4. Increasing the cwnd during the application limited period makes it invalid.
On the other hand, during an application-limited period, a Standard TCP sender continues to
grow the cwnd for every received acknowledged packet (ACK), allowing the cwnd to reach an
arbitrarily large value. However, in this case, the packet probes along the transmission path are
sent at a lower rate than permitted by cwnd, so the reception of an ACK does not actually provide
evidence that the network path was able to sustain the transmission rate reflected by the current
cwnd. The cwnd is then marked as ‘invalid’. Figure 4 illustrates such a scenario. The actual path
capacity may be significantly lower than the cwnd.
If an application with an invalid cwnd were to suddenly increase its transmission rate, the sender
would be allowed to immediately inject a significant volume of additional traffic into the
network. This could lead to severe network congestion, potentially harming other flows that share
a common bottleneck.
TCP Congestion Window Validation (TCP-CWV), which was first specified in RFC 2861 [4],
was proposed as an experimental standard by the IETF. The intention was to find a remedy for
the problems imposed by TCP when used by a bursty application. TCP-CWV changed how the
cwnd is updated and used during an idle or application-limited period.
TCP-CWV modified the congestion control algorithm of standard TCP during an application-
limited period when the cwnd had not been fully utilised for a period larger than an RTO.
During an idle period, which is greater than one RTO, TCP-CWV reduced cwnd by half for every
RTO period. This is equivalent to exponentially decaying cwnd during the idle period compared
to reducing the cwnd in a single step with standard TCP. This is a common traffic pattern for a
bursty application, which can have an idle period in the order of seconds – which could be larger
than a few RTOs worth of time. As a result, TCP-CWV ultimately reduces the cwnd to the RW,
resulting in performance problems.
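The idle-decay rule above can be sketched as follows. This is an illustrative model of the RFC 2861 behavior with invented values for cwnd, the idle time, and the RTO.

```python
# Sketch of the RFC 2861 idle-decay rule: halve the cwnd once per RTO of
# idle time, never below the restart window (RW). Values are examples.

def cwv_idle_cwnd(cwnd, idle_time, rto, rw=1):
    """cwnd (segments) after an idle period, halved once per elapsed RTO."""
    elapsed_rtos = int(idle_time // rto)
    for _ in range(elapsed_rtos):
        cwnd = max(cwnd // 2, rw)
    return cwnd

# A 3 s idle period with a 500 ms RTO spans 6 RTOs:
# 32 -> 16 -> 8 -> 4 -> 2 -> 1 -> 1
print(cwv_idle_cwnd(cwnd=32, idle_time=3.0, rto=0.5))  # 1
```

This illustrates the problem described above: an idle period of a few seconds is enough to decay the cwnd all the way down to the RW.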
Another recommendation of CWV was to set the cwnd according to (w_used + cwnd)/2 for each
RTO period that does not utilise the full cwnd, where w_used is the maximum amount of the
window that has been used since the sender was last network-limited. This avoids growth of the
cwnd to an invalid value, but it can cause the cwnd to reduce to a value that is close to the current
application rate.
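The (w_used + cwnd)/2 rule can be illustrated numerically as follows; the segment counts are invented for the example.

```python
# Sketch of the RFC 2861 application-limited rule: for each RTO period
# in which the full cwnd went unused, the cwnd moves halfway toward
# w_used. cwnd=40 and w_used=8 are invented example values.

def cwv_app_limited_cwnd(cwnd, w_used):
    """New cwnd (segments) after one application-limited RTO period."""
    return (w_used + cwnd) // 2

cwnd, w_used = 40, 8          # application used only 8 of 40 segments
for _ in range(3):            # three consecutive app-limited RTO periods
    cwnd = cwv_app_limited_cwnd(cwnd, w_used)
print(cwnd)  # 40 -> 24 -> 16 -> 12
```

After a few RTO periods the cwnd converges toward the recent application rate rather than the path capacity, which is exactly the first problem identified below.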
This results in two problems:
First, the cwnd should reflect the network capacity for a flow and control the amount of data that
the network could sustain. However, CWV tends to set the cwnd according to the traffic pattern
and application rate - only seeking to be conservative in the use of network capacity. As a result,
the cwnd is set to a lower value that is more conservative than when using standard TCP, which
would have allowed larger bursts.
Secondly, CWV used w_used, the amount of data that has been sent by the application, but not
yet acknowledged. In an application-limited period where the application is not using the
allowed path capacity, w_used does not reveal the available capacity. According to this approach,
the cwnd is set to a value that is determined by the application’s sending rate in the last RTO
period (last few RTTs), rather than the network capacity. This impacts application
performance, since subsequent bursts are rate-limited and take longer to complete.
So, the cwnd should not be regarded as the available path capacity for the TCP flow.
In summary, when TCP-CWV was specified in 2000, it identified a need to change the way that
TCP reacted to the traffic of bursty applications, but failed to offer a complete solution.
3. TCP NEWCWV: MODIFICATION FOR HTTP-LIKE TRAFFIC
When newCWV was standardised in 2015, it introduced a variable called ‘pipeACK’ that was
used to measure the acknowledged size of the network pipe. The pipeACK variable is considered
a safe bound for the capacity available to the sender since this represents the actual amount of
data that was successfully transmitted in an RTT from the sender to the receiver. This variable
can be computed by measuring the volume of data that has been acknowledged by the receiver
within the last RTT.
The pipeACK is used to determine if the sender has validated the cwnd. As specified in RFC
7661, the sender enters the non-validated phase when pipeACK is less than half the cwnd
(pipeACK < (1/2) × cwnd).
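This validation test (the pipeACK < (1/2) × cwnd condition of RFC 7661) can be sketched as follows, with illustrative numbers.

```python
# Sketch of the RFC 7661 validation test: pipeACK is the volume of data
# acknowledged over the last RTT; the cwnd counts as validated only
# while pipeACK covers at least half of it. Values are examples.

def is_validated(pipe_ack, cwnd):
    """True if the cwnd is validated (the sender is using enough of it)."""
    return pipe_ack >= cwnd / 2

print(is_validated(pipe_ack=30, cwnd=40))  # True  (30 >= 20)
print(is_validated(pipe_ack=10, cwnd=40))  # False (10 < 20): non-validated phase
```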
NewCWV also defined a new phase, the Non-Validated Period (NVP), during which a sender is
allowed to use the preserved cwnd for a period of up to 5 minutes. The cwnd is stored for several
minutes because this matches the default server timeout for a TCP connection.
A pipeACK algorithm that keeps a history of samples, as explained in [6], tends to express the
available path capacity more accurately.
Figure 5. newCWV behavior for bursty traffic (a) pipeACK is calculated every RTT (b) pipeACK is
calculated as the maximum of the last 5 samples; this result is more stable
Figure 5 shows the result, demonstrating that the pipeACK (the black line with squares) is more
stable in (b) because of the kept history. Using the filter, the cwnd increased more rapidly to a
higher value than in the first case (i.e., 60 segments compared to 50 segments in Figure 5). This is
a general behavior when a maximum filter is used to calculate the pipeACK variable (case 2)
because pipeACK retains a larger cwnd for longer and enters the non-validated phase later than
when using a method without a filter. This benefits applications by transmitting bursts quickly,
but at the same time it can send larger bursts. TCP pacing could mitigate this problem.
In summary, newCWV brought stability for both rate-limited and application-limited periods for
HTTP-like traffic. An algorithm was proposed and implemented as a Linux kernel module, which
is used to verify the effectiveness of this modification in the next section.
4. EXPERIMENT, RESULTS & DISCUSSION
This section first describes the network emulation used to explore the behavior of newCWV. It
then presents the findings in different scenarios, with possible explanations for the observed
behavior.
4.1. Experimental Setup
A network emulation method was chosen to conduct the experiments because this enables an
implementation of network protocols to be tested in a controlled environment. The test bed used a
dumbbell topology representing a single network path bottleneck (refer to Figure 6).
Client 1 and Server 1 were used to benchmark the newCWV behavior for the main traffic (either
HTTP web or HTTP streaming content). Another server (client 2, server 2) was used to inject
cross-traffic (in this case a large file transfer using FTP) into a shared network bottleneck. All
servers ran Linux kernel version 3.12, and the clients were running 3.8 or greater.
Figure 6. Experiment topology: the bottleneck router imposed a fixed bandwidth and delay between the
client and server
Server 1 acts as a web server or streaming server that used standard TCP (NewReno with SACK).
It was installed with the newCWV Loadable Kernel Modules (LKM) for Linux, the traffic
generators, and iproute2 utilities (enabling pacing when required) allowing this to be chosen at
the start of each experiment.
The comparison plots, shown in the results section, present the improvement in burst transfer
time (less time required for transmission) when newCWV is used compared to using a standard
TCP (NewReno with SACK). The performance gain in transfer time (% improvement) is
calculated by taking an average of the transfer gain over all bursts. The transfer gain was
calculated by the following formula, where Tr is the time taken using NewReno/SACK and Tc is
the time taken for the same burst with newCWV: Gain (%) = (Tr − Tc) / Tr × 100.
A positive value indicates a benefit (a burst is transmitted faster); a negative value indicates an
impact where a particular HTTP response takes longer to complete due to the additional time for
loss and recovery. A positive average of all these values indicates an overall gain in
performance: the higher the value, the better.
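The gain calculation above can be written out directly; the (Tr, Tc) pairs below are invented example values, not measurements from the paper.

```python
# The per-burst gain formula from the text: Tr is the NewReno/SACK burst
# transfer time, Tc the newCWV time; per-burst gains are then averaged.

def transfer_gain(tr, tc):
    """Percentage improvement of newCWV over NewReno for one burst."""
    return (tr - tc) / tr * 100

# Invented (Tr, Tc) pairs; the third burst is slower with newCWV.
bursts = [(2.0, 1.0), (1.6, 0.8), (1.0, 1.2)]
gains = [transfer_gain(tr, tc) for tr, tc in bursts]
average = sum(gains) / len(gains)
print(round(average, 1))  # gains of 50, 50, -20 -> average 26.7
```

A negative per-burst gain (the third pair) models a burst delayed by loss recovery, yet the average can still show an overall improvement.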
The table below (Table 1) summarizes the experiment parameters:
Table 1. Experiment Parameters
Parameter Value
TCP Initial Window (IW) 3 Segments
Ssthresh sharing NO
Bottleneck Bandwidth 2 Mbps
Delay / RTT 200 ms
HTTP Generator Tmix tool
Linux Kernel 3.12
No of HTTP connections 3151
Total Data analysed 7.68 GB
Average Transfer rate 700 kbps
Iterations with same parameters 5
The experiments were run across a range of idle periods (the intervals between HTTP response
bursts) and burst sizes (HTTP responses larger than a particular value following an idle period).
The results obtained from multiple iterations of these experiments were averaged to measure the
completion time of the HTTP/TCP connections for different combinations of idle periods and
burst sizes.
4.2. Comparing Performance Over an Uncongested Path
To understand the effect of newCWV in a non-congested scenario, experiments were run with no
bandwidth limit at the bottleneck router; only a link delay of 200 ms was applied. There was no
cross-traffic and no rate limit was applied. Figure 7 below presents the performance improvement
of newCWV compared to NewReno, plotted for different burst sizes and idle periods.
The improvement is visible in this figure (Figure 7). A newCWV sender transfers a burst in 37-
62% less time than NewReno. Larger improvements are achieved for the higher burst sizes, as
expected; about 10% more improvement is achieved for bursts of 80KB (60%) than 5KB bursts
(50%). While standard TCP reduces its cwnd after an idle period, newCWV retains a larger cwnd
and can transfer the burst in less time, saving several RTTs – an approximate average of 50%
improvement suggests newCWV requires half the RTTs compared to NewReno.
Figure 7. Performance Improvement of HTTP traffic shown when newCWV is used instead of NewReno
over different burst sizes and idle periods.
For a given burst size, for example 40 KB, the performance improvement is almost 20% higher
when the idle period is larger. This shows that for real-life web browsing traffic, even when the
idle period is long, newCWV supports more traffic than conventional TCP.
In short, for an uncongested scenario (as may be expected in a LAN), newCWV shows improved
performance over standard TCP.
4.3. Comparing Performance in a Congested Path
To test the effectiveness of newCWV in an Internet context, a bottleneck of 2 Mbps was set with
a finite router buffer (30 KB). The Path MTU was 1500 B, which allowed a maximum of 20
segments to be queued in the buffer. The newCWV method still shows improvement over
standard TCP, which now varies from 10-35% over the idle period / burst size domain (shown in
Figure 8).
Figure 8. HTTP performance improvement in percentage is shown in a congested scenario when newCWV
is used instead of NewReno over different burst sizes and idle periods.
Figure 9. Loss plot comparing New Reno and newCWV; newCWV suffers more loss on average than
standard TCP.
In the previous scenario, where there was no other traffic, the performance improved more for
higher burst sizes. In this congested scenario, the trend is the opposite: higher burst sizes offer
less improvement. For the idle period, the similarity remains: a larger idle period increases the
improvement, as in the previous non-congested scenario.
Total losses: newCWV = 3219; NewReno = 2504.
For bursts larger than 5 KB (larger than the IW of about 4 KB), a transfer following an idle
period takes about 25-33% less time on average. The largest improvement with newCWV is
around 35%, but the advantage diminishes for larger burst sizes (40 KB or 80 KB) because these
encounter higher loss. The newCWV sender is prone to a higher loss rate for larger
bursts. These bursts can appear either at the beginning of the TCP connection or after an idle
period. Figure 9 confirms that the number of losses is higher when using newCWV.
A newCWV sender allows larger bursts into the network for a large HTTP response. A router
with a finite network buffer will increase the probability of (burst) loss and queuing delay for this
flow and other flows that share a common bottleneck (e.g., higher packet loss and jitter for
concurrent real-time applications).
Although newCWV continues to show better performance in delivering bursts faster in a
congested scenario, higher burst losses for larger bursts may degrade the overall average
improvement in burst transfer times that could have been possible for HTTP flows.
4.4. Effect of other Applications on HTTP with newCWV
At first, the HTTP workload was run with newCWV without pacing. Table 2 shows that, for
HTTP responses with a size of 5 KB or more, there was an improvement in transfer times of 18-
20%, although this is lower than in the previous cases without any cross-traffic.
For large responses, such as 80 KB or more, the performance of newCWV reduces compared to
standard TCP. The newCWV sender was observed to take about 10-15% more time to complete
the bursts than NewReno. The negative values show that with newCWV, the performance is
negatively impacted. The large level of packet loss (and therefore delay) caused by large bursts
being injected into the network eliminated the benefit of newCWV. This indicates that some burst
mitigation technique is desirable.
Table 2. Performance Improvement in Percentages when HTTP runs with newCWV against NewReno. A
negative value means performance degradation.
Burst Size (KB)      Idle Period:  1 s      3 s      5 s      7 s      10 s     15 s
5 17.9 % 18.3 % 18.4 % 18.9 % 19.1 % 19.5 %
10 10.2 % 10.6 % 10.7 % 11.2 % 11.4 % 11.7 %
20 6.7 % 6.7 % 6.9 % 7.1 % 7.3 % 7.3 %
40 4.2 % 4.4 % 4.4 % 4.6 % 4.9 % 5.2 %
80     -10.2 %  -12.5 %  -13.4 %  -13.9 %  -15.1 %  -15.3 %
In summary, when using a very congested bottleneck shared with other applications, newCWV
needs to be combined with pacing – sending the burst in regular intervals – to ensure
performance improvement. Otherwise, it can lead to significant loss and induce a delay for the
applications using the bottleneck.
4.5. Solving the Loss Problem with Pacing
Pacing [17] may be used to mitigate the impact of bursty application traffic. The use of pacing
would ensure that these bursts of segments are spaced apart in time without delaying the total
transmission time. The effect of this is shown in Figure 10.
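The pacing idea can be sketched as a departure schedule: instead of releasing a whole burst at once, segments are spaced so the burst spreads across roughly one RTT. This is an illustrative calculation, not the Linux FQ implementation; departure times are computed rather than slept so the example is self-contained.

```python
# Illustrative pacing schedule: space n segments evenly over one RTT
# instead of injecting them back-to-back. The 200 ms RTT matches the
# testbed; the segment count is an invented example.

def pacing_schedule(n_segments, rtt):
    """Departure times (seconds) for n segments paced evenly over one RTT."""
    interval = rtt / n_segments
    return [i * interval for i in range(n_segments)]

times = pacing_schedule(n_segments=10, rtt=0.2)
print(round(times[1] - times[0], 3))  # 0.02 s between segments
print(round(times[-1], 3))            # last segment departs at 0.18 s, within the RTT
```

In Linux, an equivalent effect is obtained with the FQ qdisc used in the experiments, which enforces a per-flow pacing rate at the scheduler rather than in the application.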
Figure 10: The effect of pacing using the FQ module in Linux for a RTT of 200ms; the inclined green lines
show a burst of segments transmitted over 200 ms.
In Figure 10, the red ‘+’ marks show the times at which segments were transmitted using
newCWV, and the green ‘x’ marks show the same for newCWV with pacing enabled. The
gradient of segments vs. time confirms that with pacing the segments were transmitted over a
period of the order of an RTT, rather than as an abrupt injection of data with newCWV alone
(the vertical red line).
When pacing was enabled for the HTTP traffic, HTTP was able to regain the advantage of
newCWV over NewReno. Table 3 shows that the improvement in transfer times varies between
10-20% across the burst size / idle period plane. For comparison, 80 KB bursts in the previous
scenario actually performed worse with newCWV; with TCP pacing, Table 3 shows that
newCWV achieves a 13-16% performance improvement for these larger bursts as well.
Table 3. Performance Improvement in Percentages when HTTP runs with paced newCWV against
NewReno. A negative value means performance degradation.
Burst Size (KB)      Idle Period:  1 s      3 s      5 s      7 s      10 s     15 s
5 20.6 % 20.9 % 21.2 % 21.4 % 21.6 % 21.8 %
10 19.1 % 19.2 % 19.2 % 19.4 % 19.5 % 19.7 %
20 17.3 % 17.5 % 17.7 % 17.9 % 18.1 % 18.1 %
40 15.1 % 15.4 % 16.1 % 16.6 % 17.1 % 17.4 %
80 13.4 % 13.9 % 14.1% 15.6 % 15.7 % 16.1 %
Even on a very congested link sharing with another bulk TCP flow, newCWV continues to show
better performance than NewReno when the pacing is enabled at the newCWV sender.
4.6. Fairness of newCWV
Whenever a new TCP technique is proposed, there is a possibility that with the improved
performance and aggression, it may cause starvation for the other flows that share the same
bottleneck. Every congestion control should ensure fairness and no harm to other co-existing
flows. To assess the fairness of newCWV, an FTP application (running NewReno) shared the
bottleneck with an HTTP workload that used different algorithms: New Reno, newCWV, and
paced newCWV.
Figure 11 shows that for the whole period of the experiment (about an hour), the FTP application
competed with the HTTP traffic for a share of the capacity of the 2 Mbps bottleneck. Fluctuations
in FTP throughput were observed as it shared the bottleneck with the variable rate HTTP web
traffic. FTP did not suffer from starvation when the other TCP was using the newCWV method.
The curves for the performance of newCWV follow the curve for NewReno with hardly any
differences. Though newCWV is more aggressive after an idle period than standard TCP
(NewReno), which helps a bursty sending application, it reacted to congestion appropriately
while sharing the bottleneck with another long-lived TCP flow. This demonstrates the
friendliness of newCWV towards other TCP applications.
Figure 11. FTP cross-traffic throughput; no significant differences in background application performance
while the HTTP traffic was running different algorithms.
4.7. Discussion
In summary, newCWV demonstrated improved performance for HTTP traffic in both congested
and uncongested scenarios. It is recommended that newCWV is used in combination with pacing,
to smooth out the burst and hence also to reduce losses. NewCWV is also fair in the sense that it
does not pose a significant threat (aggressiveness or starvation) to other co-existing TCP flows.
NewCWV avoids this suboptimal performance by defining a new way to use the cwnd and
ssthresh during a rate-limited interval, and it specifies how to update these parameters after
congestion has been detected. The method defined in RFC 7661 is considered safe to use even
when the cwnd is greater than the receive window [18], because it validates the cwnd based on
the amount of data acknowledged by the network in an RTT, which implicitly accounts for the
allowed receive window.
The paper evaluated a working version of this algorithm in Linux. Since newCWV was published
as an experimental specification in the RFC-series as RFC 7661, it has been implemented in
some production endpoint TCP stacks. It is also referenced in TCP Control Block
Interdependence (RFC 9040).
The traffic patterns that newCWV seeks to accommodate are also important for other transports
that do not use TCP. It is referenced in the IETF QUIC [19] transport specification: QUIC Loss
Detection and Congestion Control, (RFC 9002), which provides an alternative transport for
HTTP. It is also referenced in a range of other IETF specifications, including Self-Clocked
Rate Adaptation for Multimedia (RFC 8298), Model-Based Metrics for Bulk Transport Capacity
(RFC 8337), and Operational Considerations for Streaming Media (RFC 9317).
5. CONCLUSION
Web-based traffic is the dominant type of traffic in today’s Internet. When web browsers use
HTTP/2, this uses TCP as the underlying protocol. It is important that the transport is designed
for efficient browsing with arbitrary traffic patterns. A set of problems have been identified in
earlier research works when a bursty HTTP application uses traditional TCP congestion control.
Although solutions have been proposed, these were limited and did not appropriately address the
key requirements. Instead, newCWV seeks to address these congestion control problems, and
this paper shows that it is implementable.
This paper found that the newCWV method benefits applications that send at varying rates in
both their rate-limited and idle periods. The paper shows that newCWV can result in up to 50%
faster completion for HTTP-based traffic, which is important for web browsing, smoother web-
based video, etc. Moreover, it is designed to avoid inducing harm to other traffic that shares a
common network bottleneck.
The key advantage of using newCWV is that application designers do not have to worry about
the underlying transport support for bursty application traffic, since the transport can
accommodate a wide range of traffic variation. This provides application developers more
freedom when developing new applications and encourages the development of next-generation
Internet applications. For future work, it would be useful to compare the performance when using
newCWV with current TCP and with modern QUIC implementations, and to extend the study to
evaluate a variety of network conditions.
CONFLICTS OF INTEREST
The authors declare no conflict of interest.
ACKNOWLEDGMENTS
The author acknowledges the Electronics Research Group of the University of Aberdeen, UK, for
all the support in conducting these experiments. This research was a part of the University of
Aberdeen dot.rural project (EP/G066051/1).
ACRONYMS
ACK Acknowledgement
cwnd congestion window
CA Congestion Avoidance
CWV Congestion Window Validation
DASH Dynamic Adaptive Streaming over HTTP
FTP File Transfer Protocol
HTTP Hyper-Text Transfer Protocol
IETF Internet Engineering Task Force
IP Internet Protocol
IW Initial Window
LKM Loadable Kernel Module
MTU Maximum Transfer Unit
NVP Non-Validated Period
TCP Transmission Control Protocol
RFC Request for Comments
RTO Retransmission Time Out
RTT Round Trip Time
RW Restart Window
SACK Selective ACKnowledgement
ssthresh Slow Start Threshold
REFERENCES
[1] Sandvine Resources, (2022), “Global Internet Phenomena Report”, Whitepaper, Sandvine
Corporations.
[2] Fielding R. et al (1999), “Hypertext Transfer Protocol -- HTTP/1.1”, RFC 2616, IETF
[3] ISI (1981), “Transmission Control Protocol”, RFC 793, IETF.
[4] Handley, M., Padhye, J. and Floyd, S. (2000), “TCP Congestion window Validation”, RFC 2861,
IETF.
[5] Biswas, I. (2011), “Internet congestion control for variable rate TCP traffic”, PhD Thesis, School of
Engineering, University of Aberdeen.
[6] Fairhurst, G., Sathiaseelan, A. & Secchi, R. (2015), “Updating TCP to Support Rate-Limited Traffic”,
RFC 7661, IETF.
[7] Augustin, B. and Mellouk, A. (2011), “On Traffic Patterns of HTTP Applications”, Proceedings of
IEEE Globecomm, Houston, USA.
[8] Jiang, H. and Dovrolis, C. (2005), “Why is the internet traffic bursty in short time scales?”,
Proceedings of the ACM SIGMETRICS international conference on Measurement and Modeling of
Computer Systems, Banff, Canada.
[9] Casilari, E. et. al. (2004), “Modelling of Individual and Aggregate Web Traffic”, Proceedings of 7th
IEEE conference in High Speed Networks and Multimedia Communications, Toulouse, France.
[10] Duke, M., et al (2015) A Roadmap for Transmission Control Protocol (TCP) Specification
Documents, RFC 7414, IETF
[11] Stevens, W. (1997), "TCP Slow Start, Congestion Avoidance, Fast Retransmit, and Fast Recovery
Algorithms", RFC 2001, IETF.
[12] Allman, M., Paxson, V. and Stevens, W. (1999), "TCP Congestion Control", RFC 2581, IETF.
[13] Allman, M., Paxson, V. and Blanton E. (2009), “TCP Congestion Control”, RFC 5681, IETF.
[14] Fall, K. and Floyd, S. (1996), "Simulation-based Comparisons of Tahoe, Reno and SACK TCP",
ACM SIGCOMM Computer Communication Review, vol. 26, no. 3, pp. 5-21.
[15] Hoe, J. (1996), “Improving the Start-up Behavior of a Congestion Control Scheme for TCP”,
Proceedings of the ACM SIGCOMM.
[16] Floyd, S., Henderson, T. and Gurtov, A. (2004), “The NewReno Modification to TCP's Fast
Recovery Algorithm”, RFC 3782, IETF.
[17] Zhang, L., Shenker, S. , and Clark, D. (1991), “Observations on the Dynamics of a Congestion
Control Algorithm: The Effects of Two Way Traffic”, Proceedings of the ACM SIGCOMM, New
York, USA.
[18] Xu, L., et al, CUBIC for Fast and Long-Distance Networks, draft-ietf-tcpm-rfc8312bis, Work-In-
Progress, TCPM Working Group, IETF.
[19] Iyengar, J. & Thomson, M. (2021), “QUIC: A UDP-Based Multiplexed and Secure Transport”, RFC 9000,
IETF.
AUTHORS
Dr Ziaul Hossain is a Network Researcher, and his interest lies in making the web
faster. He holds a BSc in Computer Science from BUET (Bangladesh) and an MSc from
Queen Mary, University of London (UK). He received a
PhD in Engineering from University of Aberdeen (UK), where his research focused on
the performance of different applications over satellite platforms. He has taught at several
universities and is currently working as a Lecturer at the University of the Fraser Valley,
British Columbia, Canada.
Dr Gorry Fairhurst is a Professor in the School of Engineering at the University of
Aberdeen, Scotland, UK. His research is in Internet Engineering and broadband service
provision via satellite. He is an active member of the Internet Engineering Task Force
(IETF) where he chairs the Transport Area working group (TSVWG) and contributes to
the IETF Internet area. He has authored more than 30 RFCs with over 150 citations
within the RFC-series, and currently is contributing to groups including: QUIC, 6MAN,
TSVWG, and MAPRG.