FASTER STARTUP PHASE OF A TCP CONNECTION IN RELATION TO
FIBONACCI TCP
by
Bongomin Charles Anyek
B.Sc (MUK)
2007/HD18/9388U
Department of Networks
School of Computing and Informatics Technology
College of Computing and Information Sciences
Makerere University
Email: cbongoley@gmail.com, Mob: +256-782-274116
A Project Report Submitted to the College of Computing and Information Sciences
in Partial Fulfillment of the Requirements for the Award of Master of Science in
Data Communication and Software Engineering Degree of Makerere University
Option: Network and System Administration
November 2011
Declaration
I, Bongomin Charles Anyek, do hereby declare that this project report is original and has not
been published and/or submitted for any other degree award to this or any other universities
before.
Signature.........................................Date..........................
Bongomin Charles Anyek
B.Sc (CSC,ZOO)
Department of Networks
School of Computing and Informatics Technology
College of Computing and Information Sciences
Makerere University
Approval
The project report titled “Faster Startup Phase of a TCP Connection in Relation to Fibonacci
TCP” has been submitted for examination with my approval.
Signature...........................................Date............................................
Dr. Julianne Sansa Otim, PhD.
Supervisor
School of Computing and Informatics Technology
College of Computing and Information Sciences
Makerere University
Dedication
This work is dedicated to my mother, Mary Anyek, for her commitment towards educating me
despite her financial constraints; and to my wife Susan Amuge, daughter Charlotte Agenorwot
and Aunt Irene Amal Yubu, who missed me a lot during the course of the study.
Acknowledgement
Sincere gratitude and heartfelt thanks go to my Supervisor, Dr. Julianne Sansa Otim,
for her valuable time, flexibility, encouragement, guidance and supervision during the study.
Without her, this book would not have been what it is. Many thanks for attending to me.
My special acknowledgement goes to my workmates at Centenary Bank: Martin Mugisha
(Manager, Credit Services) and Susan Itamba (Credit Officer), for understanding my busy sched-
ules at the University, and to Bernadette Nakayiza (ATM Administrator), who accommodated
me and shielded me, particularly when I had just joined the BT Division.
To Abdul Sserwada, who offered me some assistance on the use of ns-2 and MATLAB. To the
M.Sc DCSE class of 2007 and Fote Antonia from Cameroon, it is great knowing you.
Finally, to the Almighty God for His spiritual guidance, love, blessing and giving me hope.
Contents
Declaration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . i
Approval . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ii
Dedication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . iii
Acknowledgement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . iv
Table of Contents . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . vii
List of Figures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . viii
List of Tables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ix
List of Acronyms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . x
1 Introduction 1
1.1 Background of the Study . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.2 Statement of the Problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.3 Objectives of the Study . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.3.1 General Objectives of the Study . . . . . . . . . . . . . . . . . . . . . . 3
1.3.2 Specific Objectives of the Study . . . . . . . . . . . . . . . . . . . . . . 4
1.4 Significance of the Study . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.5 Scope of the Study . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
2 Literature Review 6
2.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
2.2 TCP/IP-Based Transport Layer Protocol . . . . . . . . . . . . . . . . . . . . . 6
2.3 Phases of TCP Connection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
2.4 Categories and Properties of TCP’s Startup Schemes . . . . . . . . . . . . . . 8
2.4.1 Slow-Start . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
2.4.2 Swift-Start . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
2.4.3 Paced-Start (PaSt) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
2.4.4 Quick Start . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
2.4.5 The Congestion Manager (CM) . . . . . . . . . . . . . . . . . . . . . . 13
2.4.6 TCP Fast Start . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
2.4.7 Related Work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
2.5 Effects of TCP’s Startup Algorithm on Flow Throughput . . . . . . . . . . . . 19
2.6 Advantages of Increasing TCP’s Initial Window Size . . . . . . . . . . . . 19
2.7 TCP’s Congestion Avoidance (CA) Algorithms . . . . . . . . . . . . . . . . . . 20
2.8 Fibonacci TCP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
2.9 Research Question . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
3 Methodology 23
3.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
3.2 Simulation Tool . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
3.3 Selection of the Faster Startup Scheme . . . . . . . . . . . . . . . . . . . . . . 24
3.4 Simulation of Chosen Startup Schemes . . . . . . . . . . . . . . . . . . . . . . 25
3.4.1 Topology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
3.4.2 Analytical Metrics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
3.5 Details of Experiments Conducted . . . . . . . . . . . . . . . . . . . . . . . . . 29
3.5.1 Simulation of One TCP Connection . . . . . . . . . . . . . . . . . . . . 29
3.5.2 Simulation of Many TCP Connections . . . . . . . . . . . . . . . . . . 30
3.5.3 Simulation of Fibonacci TCP with Selected Faster Startup Scheme and
Slow Start . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
4 Results 33
4.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
4.2 RTT, Packet Loss Ratio and Throughput for One TCP Flow . . . . . . . . . . 33
4.2.1 RTT for Single TCP Flow . . . . . . . . . . . . . . . . . . . . . . . . . 33
4.2.2 Packet Loss for One Regular TCP Flow . . . . . . . . . . . . . . . . . . 38
4.2.3 Throughput from Single Flow . . . . . . . . . . . . . . . . . . . . . . . 39
4.3 Multiple TCP Flows implementing Fast start . . . . . . . . . . . . . . . . . . . 43
4.3.1 RTT for Two and Four TCP Flows . . . . . . . . . . . . . . . . . . . . 43
4.3.2 Packet Loss Rate for Two and Four TCP Flows . . . . . . . . . . . . . 46
4.3.3 Throughput for Two and Four TCP Flows . . . . . . . . . . . . . . . . 47
4.4 Investigation of the effect of faster startup of TCP Connection on the Perfor-
mance of Fibonacci TCP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
4.4.1 RTT for Fibonacci TCP . . . . . . . . . . . . . . . . . . . . . . . . . . 50
4.4.2 Packet Loss for Fibonacci TCP . . . . . . . . . . . . . . . . . . . . . . 53
4.4.3 Throughput for Fibonacci TCP . . . . . . . . . . . . . . . . . . . . . . 55
4.5 Discussion of the Fibonacci-Faster Start TCP Results . . . . . . . . . . . . . . 58
4.5.1 Fibonacci TCP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
4.5.2 Fast Start . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
4.5.3 Buffer Size and Queue Management . . . . . . . . . . . . . . . . . . . . 59
5 Conclusion and Recommendation 60
5.1 Summary of Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
5.2 Recommendation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
5.3 Proposed Areas for Further Study . . . . . . . . . . . . . . . . . . . . . . . . . 62
List of Figures
3.1 Dumbbell Topology used for the Experiments . . . . . . . . . . . . . . . . . . 25
4.1 RTT for one regular TCP Flow with maximum cwnd of 50 pkts . . . . . . . . 34
4.2 RTT for one regular TCP Flow with maximum cwnd of 100 pkts . . . . . . . . 34
4.3 RTT for one regular TCP Flow with maximum cwnd of 200 pkts . . . . . . . . 34
4.4 Throughput for one Regular TCP Flow with maximum cwnd of 50 packets . . 40
4.5 Throughput for one Regular TCP Flow with maximum cwnd of 100 packets . 40
4.6 Throughput for one Regular TCP Flow with maximum cwnd of 200 packets . 40
4.7 RTT for 2 TCP flows with Fast start . . . . . . . . . . . . . . . . . . . . . . . 43
4.8 RTT for 4 TCP flows with Fast start . . . . . . . . . . . . . . . . . . . . . . . 43
4.9 Throughput for Two TCP flows with Fast start . . . . . . . . . . . . . . . . . 47
4.10 Throughput for Four TCP flows with Fast start . . . . . . . . . . . . . . . . . 47
4.11 RTT for One Fibonacci TCP flow with 100 packets . . . . . . . . . . . . . 50
4.12 RTT for Two Fibonacci TCP flows with 100 packets . . . . . . . . . . . . . 50
4.13 RTT for Four Fibonacci TCP flows with 50 packets . . . . . . . . . . . . . 50
4.14 Throughput for one Fibonacci TCP flow with maximum cwnd of 100 packets . 55
4.15 Throughput for two Fibonacci TCP flows with maximum cwnd of 100 packets . 55
4.16 Throughput for Four Fibonacci TCP flows with maximum cwnd of 50 packets . 55
List of Tables
2.1 Comparison of TCP Startup Schemes . . . . . . . . . . . . . . . . . . . . . . . 15
3.1 Maximum cwnd for One (1) TCP Flow Simulation . . . . . . . . . . . . . . . 30
3.2 Maximum Window for Two (2) TCP Flows Simulation . . . . . . . . . . . . . 31
3.3 Maximum cwnd for Four (4) TCP Flows Simulation . . . . . . . . . . . . . . . 31
4.1 Comparative Analysis of RTT for One Regular TCP Flow . . . . . . . . . . . 37
4.2 Packet Loss Details for one Regular TCP Flow . . . . . . . . . . . . . . . . . . 39
4.3 Throughput Analysis for One Regular TCP Flow . . . . . . . . . . . . . . . . 42
4.4 RTT Analysis for Two and Four TCP Flows . . . . . . . . . . . . . . . . . . . 45
4.5 Packet Loss Details for Fast Start with two and four flows using standard TCP 46
4.6 Throughput Analysis for Two and Four TCP Flows . . . . . . . . . . . . . . . 48
4.7 Comparative analysis of RTT for One Fibonacci TCP Flow . . . . . . . . . . . 52
4.8 Packet Loss for one and Multiple Fibonacci TCP Flows . . . . . . . . . . . . . 54
4.9 Throughput Analysis for One, two and four Fibonacci TCP Flows . . . . . . . 57
List of Acronyms
ACK Acknowledgment
BDP Bandwidth Delay Product
CA Congestion Avoidance
CE Capacity Estimation
cwnd Congestion Window
DACK Delayed Acknowledgement
DHCP Dynamic Host Configuration Protocol
DOS Denial of Service attack
FTP File Transfer Protocol
IP Internet Protocol
IPD Inter-Packet Delay
LFN Long Fat Network (a network with very high BDP)
MSS Maximum Segment Size
ns-2 Network Simulator Version 2
PaSt Paced Start
rcwnd Receiver Advertised Congestion Window
RTT Round Trip Time
RWIN Receiver Window
ssthresh Slow Start Threshold
TCP Transmission Control Protocol
TCP-FS TCP Fast Start (TCP with fast start)
TFTP Trivial File Transfer Protocol
UDP User Datagram Protocol
Abstract
The performance of the Internet is heavily influenced by the behaviour of TCP, because
TCP is the most commonly used transport protocol. TCP performance deteriorates
right from the startup phase of a connection. The optimum initial window size for sending
data is determined during the slow start phase, while the congestion avoidance phase manages
the steady behaviour of the TCP connection. Slow start is a poor performer
in all networks, including wireless networks and the high speed networks referred to as Long Fat
Networks (LFNs), because it is very slow at connection setup and its exponential
increase leads to a high packet loss rate. Numerous faster startup schemes have been developed
to overcome this shortfall of slow start and improve performance during TCP connection setup,
but regular TCP itself has never been efficient. Fibonacci TCP was developed to address
this weakness of regular TCP. However, how these faster startup schemes impact on Fibonacci
TCP was not well known.
Furthermore, in an attempt to be effective, each startup scheme has its own way
of determining the initial window during the setup of a TCP connection. In this study we explored and
evaluated the performance of slow start and two other faster startup schemes, namely quick start
and fast start, and chose the best performing startup scheme. The results show that fast start
performs better than both slow start and quick start. We then evaluated the robustness of fast
start under different network settings. In seeking to understand how a faster startup impacts
on high speed TCP, we simulated Fibonacci TCP with fast start as well as Fibonacci TCP
with slow start and compared the performance. Our main interest in the study is not in the
startup itself but in how a faster startup influences the performance of Fibonacci TCP. The study shows
that fast start is a better performer than slow start. Hence, fast start influences Fibonacci TCP
positively, since it boosts throughput during both the startup phase and the CA phase. We also conclude
that using a faster startup scheme together with a high speed TCP improves the performance of
TCP during both the startup phase and the CA phase in LFNs.
Chapter 1
Introduction
Advances in communication technology and proliferation of processes that use little network
resources in distributed systems [1] are making the Internet a common choice for communi-
cation. To date, TCP is the main transport protocol responsible for transmission of Internet
traffic. The main objectives of TCP are to adapt the transmission rate of packets to the
available bandwidth, to avoid congestion in the network, and to create a reliable connection.
For example, ACKs regulate the transmission rate of TCP by ensuring that packets are trans-
mitted only when previous packets have been acknowledged (or have left the network),
and they render the connection reliable by carrying the information the sender needs to
retransmit lost packets.
1.1 Background of the Study
As the network size increases, congestion builds in the entire network system due to con-
tention by in-flight network packets from users or processes from IP nodes which might cause
a deadlock. It is also noted that as the network system gets congested, the delay in the sys-
tem increases [2]. The latency/delay causes degradation of the overall performance especially
in the absence of proper congestion control management. This is because traffic congestion
is influenced mainly by the behaviour of the congestion avoidance algorithms. A good un-
derstanding of the relationship between congestion and delay is very essential for designing
effective congestion control algorithms. Some researchers mentioned congestion [3] as a prob-
able cause of collapse of the Internet besides other causes such as Denial of Service (DOS)
attack [4], [5].
Some of the causes of congestion and delay are a result of the algorithms used (or not used)
in TCP. For instance, explicit feedback involving Optimistic Acknowledgment
(opt-ack) [4] can be exploited by non-legitimate users to launch a DOS attack, yet opt-ack was
meant to improve end-to-end performance. In situations where an alternative TCP startup
scheme such as Swift-Start [6] is used, such an attack would not take place.
TCP congestion control algorithms can be integrated with active measurement and active
packet queue management to prevent packet loss due to buffer overflow at the edge communi-
cating peers. This integration during startup phase of TCP connection creates an opportunity
for early congestion notification to the sender to reduce the transmission rate before the queue
overflows and packet loss is sustained. Much as several researchers focused on faster startup
schemes, other studies proposed the use of CA algorithms such as Fibonacci TCP [7] and
Exponential TCP [8] to achieve good performance in the overall TCP session in high speed
networks. For instance, Swift-Start has a shorter startup period compared to the long startup
period experienced by the Slow-Start [9] scheme. Swift-Start therefore enables full utilization
of a network path with a large bandwidth delay product (BDP) without causing degradation
in user-perceived performance.
Besides the numerous congestion control and congestion avoidance algorithms, performance
still remains an issue in TCP implementations, with varying percentages of bandwidth usage,
packet loss rates and recovery mechanisms.
1.2 Statement of the Problem
Traffic dynamics in the Internet are heavily influenced by the behaviour of TCP since it
is the most commonly used transport protocol. A TCP connection has two phases, namely the
slow start and congestion avoidance phases [10], [11]. The slow start phase determines the
optimal window size, while the congestion avoidance phase manages the steady
behaviour of the TCP connection under conditions of minimum packet loss.
Slow-start does not perform well in wireless [12], [6], satellite [13] and high speed networks
referred to as Long Fat Networks (LFNs) [14]. This is because slow-start is very slow at
connection setup, causing an unnecessarily long startup, and its exponential increase charac-
teristic leads to a very large window towards the end of the slow start phase. This can easily cause
interruption to other flows and high packet loss within one congestion window.
Many researchers have proposed several schemes to address TCP’s slow startup problems
mentioned above. These include; TCP Fast Start [15], swift-start [6], [16], the Congestion
Manager [17], Quick start [4], [5], [18], Paced-Start [10] and SARBE [19]. These schemes
vary in the way in which the initial window size is chosen. However, there has been no
clear indication of how the startup phase of a TCP connection affects the consequent data
transmission during the congestion avoidance phase.
In this project, the behaviour of a TCP connection with and without a faster startup scheme
in relation to the congestion avoidance phase was studied. Particular interest was the impacts
of faster startup on Fibonacci TCP [7] in terms of achieved throughput. A review of Fibonacci
TCP is in Section 2.8.
1.3 Objectives of the Study
1.3.1 General Objectives of the Study
To determine the relationship between the faster startup and the congestion avoidance phase
in TCP connections with emphasis on high speed Fibonacci TCP.
1.3.2 Specific Objectives of the Study
i. To analyze some of TCP’s proposed faster startup schemes using one flow and choose
the most effective one.
ii. To use the chosen TCP’s faster startup scheme to initiate several TCP connections
under various network settings in terms of topology, number of flows and flow sizes with
particular interest to study the robustness of this scheme.
iii. To investigate performance of the congestion avoidance algorithm especially Fibonacci
TCP when the TCP’s faster startup scheme is used in comparison to performance when
slow-start is used.
1.4 Significance of the Study
The existing startup schemes of TCP connections that aid estimation of the available
bandwidth do not show clear impacts on the algorithms used in the congestion avoidance
phase. That is, after the transition from the startup phase of a TCP connection, the impact
of a given initial window size on the congestion avoidance mode is not well known. The
findings from the study show the performance of TCP when each startup scheme
is implemented independently to set up a TCP connection. The study has made it possible to
predict the performance of TCP with an increase in initial window size and number of TCP
flows. This makes it possible to choose between slow start and a faster startup scheme for
setting up a TCP connection if a high speed CA algorithm such as Fibonacci TCP is to be used
in network communication.
1.5 Scope of the Study
The study was limited to simulation of TCP connections using standard TCP and Fibonacci
TCP, each implementing both slow start and faster startup schemes independently to deter-
mine the overall performance in terms of RTT, packet loss and throughput. The study does
not attempt to replace any TCP startup scheme or the TCP control used in the CA phase.
Chapter 2
Literature Review
2.1 Introduction
In this chapter, we give a brief account of transport layer protocols, the phases of a TCP
connection, TCP algorithms, TCP startup schemes, the relationship between the startup algorithm
and flow throughput, and the advantages of a larger initial window.
2.2 TCP/IP-Based Transport Layer Protocol
The fundamental and significant components of Transmission Control Protocol (TCP) that
support connection-oriented services include the TCP flow and TCP congestion control al-
gorithms. These TCP flow and congestion control algorithms by necessity rely on remote
feedback to determine the rate at which packets should be sent in either cooperative or non-
cooperative environment [21] using active measurement as exemplified in Section 2.4. The
feedback comes from either the network as available bandwidth or directly from the receiver
as a positive or negative ACK.
Other than TCP, the transport layer may also use User Datagram Protocol (UDP) in con-
nectionless services such as DHCP, TFTP, traditional VoIP [12], etc. When UDP is used,
congestion control algorithms implicitly assume that the remote entity generated correct
feedback. UDP degrades performance and leads to low throughput since it does not have the
features for congestion management.
Since most Internet traffic uses TCP, the next Section 2.3 elaborates further on TCP.
2.3 Phases of TCP Connection
The TCP phases are categorized into the slow start and congestion avoidance phases. The slow
startup phase is used to determine the optimal window size using schemes such as [9], [5], while
the congestion avoidance phase maintains the steady behaviour of TCP under conditions of
minimum packet loss using congestion avoidance algorithms such as Fibonacci TCP [7].
In [11] the TCP phases are categorized into slow start, window recalculation and constant
phases, and each of these phases is explained as follows: (i) the slow-start phase determines the
optimal window size and establishes self-clocking, (ii) the window-recalculation phase is when the
maximum window size is reduced to a minimum window by multiplicative decrease (halving
the maximum window size), and (iii) the constant phase maintains a threshold value to ensure
minimum packet loss.
During an existing TCP connection, the sender maintains three windows [9], each playing
a particular role. These windows are the receiver advertised congestion window (rcwnd), the congestion
window (cwnd) and the threshold window (ssthresh). The rcwnd is granted/advertised by the
receiver. The cwnd is the sender-side window that limits the amount of unacknowledged data in flight.
The ssthresh is the value at which TCP switches between the slow startup and CA phases.
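In standard TCP, the amount of unacknowledged data the sender may have in flight at any time is
bounded by both the congestion window and the receiver advertised window; in our own notation (a
restatement, not a formula taken from [9]):

\[
W_{send} = \min(cwnd,\ rcwnd).
\]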
Other windows are categorised based on the way TCP uses the startup scheme. For example,
TCP uses slow start in three (3) different ways, and each way uses a particular window which is
different from the ones in the above paragraph. First, TCP uses slow start for TCP connection
setup using the initial window. Secondly, it uses slow start for restarting transmission after a long
idle time using the restart window. Thirdly, TCP uses slow start to start retransmission after
packet loss using the loss window. Fast start [15] differs from slow start and uses the restart window
for starting retransmission after both a long idle time and packet loss. The size of the loss window
is 1 MSS and the size of the restart window is the same as the last optimum cwnd used. Hence,
when the loss window is used, bandwidth will be under-utilised, whereas the restart window will not
increase the state of congestion and maintains good link utilisation.
The behaviour of the TCP phases is influenced by the different congestion control and
congestion avoidance algorithms. The congestion control algorithms control congestion after
packet loss, while congestion avoidance algorithms are meant to control congestion before
packet loss. For the purpose of this study, the TCP algorithms will be divided into two
categories and each category will be implemented in a specific phase of a TCP connection.
These categories are TCP’s startup algorithms, exemplified in Section 2.4, and congestion
avoidance algorithms, exemplified in Section 2.7.
2.4 Categories and Properties of TCP’s Startup Schemes
The various proposed faster TCP’s startup schemes are passive and active bandwidth probing
models that quantify the overall performance improvement in comparison to the default TCP’s
slow-start. The startup schemes that have been reviewed are further summarised in Table
2.1.
The bandwidth probing by TCP flow and TCP congestion control may be categorized into
two categories: Packet Rate Management (PRM) and the Packet Gap Method [10], [22], while
in [23] bandwidth probing mechanisms are put into three (3) broad principles: (i) bandwidth
estimation technique without consuming bandwidth, e.g. swift-start [22], PaSt TCP [10],
etc, (ii) sharing congestion state information between peer applications, e.g. The Conges-
tion Manager, and (iii) explicit feedback from intermediate and/or receiver IP nodes, e.g.
Quick-start [4], [19]. Some schemes do not carry out bandwidth estimation; that is, data
transmission is initiated with an arbitrary window without probing the available bandwidth. For ex-
ample, TCP’s Jump-Start [23] scheme selects an arbitrary window and starts sending data without
knowledge of the available bandwidth. Recently developed multimedia systems [13] use
control data packets to control TCP flow. These packets are used in the control protocol of
IP-telephony, video conferencing, H.323 networks, etc. The packets are released to the network
after the three-way handshake and are used in the media transmission phase of VoIP networks
(but not in the signaling phase) to adapt the transmission rate to the available bandwidth.
Some of the TCP’s startup schemes will be reviewed in the next section. The effect of a
given startup scheme on a TCP flow will be reviewed in Section 2.5. The general advantage of
increasing initial window size is reviewed in Section 2.6.
2.4.1 Slow-Start
Slow-start [9] is the scheme used in the standard TCP congestion control algorithm. It is only
able to determine the window size and establish self-clocking. Transition into the congestion avoidance
phase is triggered by packet loss or when the congestion window has reached a statically config-
ured threshold. That is, in the event of packet loss, the current congestion window (cwnd) is
halved, that half is saved as the slow start threshold (ssthresh), and slow start begins again
from its initial cwnd. Once the cwnd reaches the ssthresh, TCP goes into the congestion
avoidance (CA) phase. When a loss occurs again, TCP goes back to the slow startup phase. The
use of the slow-start algorithm can lead to inefficient use of bandwidth during the congestion
avoidance phase. This is because packet loss during the congestion avoidance phase triggers the slow-
start algorithm, which in turn halves the current cwnd. This event leads to underutilization of
bandwidth.
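The window dynamics described above can be captured in a minimal sketch. The Python fragment
below is only an illustrative model of the per-ACK growth and the timeout reaction (windows counted
in segments, fast retransmit and recovery ignored); it is not the ns-2 implementation used in this
study, and the function and variable names are our own.

```python
# Simplified model of the slow start and congestion avoidance window updates
# described above. Windows are counted in segments; loss is detected by timeout.

def on_ack(cwnd, ssthresh):
    """Grow cwnd when a new ACK arrives."""
    if cwnd < ssthresh:
        return cwnd + 1            # slow start: cwnd roughly doubles every RTT
    return cwnd + 1.0 / cwnd       # congestion avoidance: about one segment per RTT

def on_timeout(cwnd, initial_window=1):
    """React to a loss detected by timeout."""
    ssthresh = max(cwnd / 2.0, 2)    # half the current window becomes the new threshold
    return initial_window, ssthresh  # slow start begins again from the initial cwnd
```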
According to [24], when using TCP slow-start [9], the number of packets being released into
the network increases exponentially in large bursts during the slow start phase. This can cause
a build-up of queues in the bottleneck routers. In networks with a high bandwidth delay product
(BDP), these routers' queues may be smaller than the maximum TCP window. According
to [22], a large queue may lead to buffer overflows, resulting in packet loss and a degradation in
overall performance.
Another notable weakness in TCP’s slow-start scheme is its inability to estimate an accurate initial
cwnd size in wireless networks. This is because wireless losses are due to signal fad-
ing, random errors and handoff processes [11], [6] and not network congestion. The congestion
control algorithm cannot rely on timeouts to determine the optimal cwnd size since the increase
in RTT is due to a fading signal and not congestion. An attempt to reduce the impact of wireless loss
has been suggested by [8], where ssthresh is predefined.
A deficiency of TCP’s slow-start scheme has also been witnessed in Voice-over-IP applications.
This is because the slow startup of TCP causes low throughput, which is
unable to meet the bit-rate requirement of the VoIP codec. TCP Fast-Startup (fsTCP) [12] has
been proposed to determine the initial size of the sliding window for a TCP connection by adjusting
the congestion window parameters before transmitting IP data. These parameters include the
data rate of the VoIP codec and the connection RTT. The fsTCP scheme also solves the problem of tradi-
tional VoIP connections, where voice packets were delivered by UDP. UDP
degrades performance due to the fact that it has no features for congestion control and flow
control. However, fsTCP has a longer startup period since it goes through four steps (three-way
handshake, parameter determination, connection setting, and starting the TCP connection with the
initial window size derived during stage 3). This condition may lead to underutilization of
bandwidth during connection startup, leading to low throughput.
Some recent research suggested the use of combined features of some of the TCP’s startup
schemes to improve performance. For instance, TCP-Adaptive Westwood [25] was designed
based on the features of TCP Westwood and TCP Adaptive Reno. The benefits of such a
combination are efficient use of bandwidth, fair RTT and sufficient throughput required for
quality TCP connection startup.
2.4.2 Swift-Start
Swift-start [6], [22] is a variant of slow-start and is one of the faster startup schemes. It uses
packet pacing and packet pairs, and needs only a few RTTs to determine the window size. When
packet loss occurs, it uses fast recovery. Some researchers observed that swift-start sends
more packets than slow-start during the startup phase. This is because the estimated bottleneck
bandwidth defines the number of packets to be sent in the second RTT. Swift-start does not
need intermediate routers and does not rely on explicit feedback from the network or receiver.
Its ability to employ both packet pacing and packet pairs offers good throughput.
However, a recent study by [16] identified some drawbacks in the original TCP Swift-start
scheme. These problems concern the use of swift-start in combination with Delayed ACK
(DACK) and ACK compression. DACK affects the packet pair algorithm of swift-
start because ACKs would not be sent promptly. Such a delay might not be due to
congestion but to processing within the receiver. In this case, the sender would not be able to correctly
estimate the available bandwidth. Besides, if the arrival time between data segments (reported
by the receiver) is less than the maximum ACK delay time, the receiver will send only the second
ACK and the sender would not calculate the RTT, but would act as TCP’s slow-start [9]. Secondly,
ACK compression would decrease the time gap between the ACKs of individual packets. This
would lead to over-estimation of the available bandwidth.
To solve the problem introduced by DACK in the original swift-start [22], a modification was
made to the packet pair algorithm [16] such that cwnd is equal to four (4) segments so that
the packets are sent in pairs. The modification was done such that RTT is receiver based,
that is, the time difference can be determined by the receiver, which in turn sends it back to
the sender within the IP header option. The problem introduced by ACK
compression can be solved by adopting the procedures used in [10], where estimation of the available
bandwidth uses the ACKs of the entire train to get a fair RTT.
2.4.3 Paced-Start (PaSt)
PaSt [10] is an active bandwidth probing startup scheme which does not need explicit feedback
from the receiver, hence no flooding of the network path. Compared to slow-start, PaSt does not use
self-clocking during startup, but controls the inter-packet gap between the packets in the train to
determine the turning point, an optimal congestion window in a multiplexed flow with no
congestion. This means that during startup, PaSt does not transmit the next train until all the
ACKs for the previous train have been received. This shows that PaSt is less aggressive
than slow-start. This is because PaSt trains are more stretched out than the corresponding
slow-start train, and the spacing between paced start trains is larger than that between the
slow-start trains. PaSt iteratively calculates an estimate for the congestion window of the path
and then uses that estimate to transition into the congestion avoidance phase. However, if packet
loss occurs during that period, PaSt transitions into the congestion avoidance phase in exactly the
same way as slow-start. Thus, PaSt differs from slow-start in two (2) ways: how it sends trains
and how it transitions into the congestion avoidance phase. It solves the problem in the packet pair
technique used in standard TCP. This is because packet pair would estimate the bottleneck
link capacity, not the available bandwidth. This is beneficial only if the competing traffic is
low. But if the traffic increases, using packet pair can overestimate the available bandwidth,
and thus the initial congestion window. This can easily result in traffic congestion and
significant packet loss.
2.4.4 Quick Start
Quick-Start [1], [26] is not a variant of slow-start TCP. It incorporates active measurement
tools with a quick-start request in a cooperative environment to estimate the available band-
width. This scheme is known to have a shorter startup period before transitioning into the congestion
avoidance phase. That is, the explicit feedback avoids the time consuming capacity probing
of TCP’s slow-start and is beneficial when bandwidth is underutilized [4]. The TCP receiver
therefore should advertise an rcwnd which is big enough to allow efficient utilization of a
connection path with a large BDP. A TCP receiver with a high number of TCP connections
should also optimize buffer and memory usage in order to be able to serve the maximum pos-
sible number of TCP connections at the same instant. On Quick-start failure/packet loss,
the algorithm reverts to the slow-start phase. This mechanism has been supported by [5] because
the failure/packet loss means the current cwnd would not be valid due to sudden changes in
traffic load, a misbehaving receiver, etc. Other advantages realized by this are reduced queueing
delay, better performance in terms of link capacity utilization, good transition during handoff
and suitability between TCP nodes with different characteristics. However, Quick-Start has a
vulnerability to fabricated bandwidth information from the bottleneck link such as DOS at-
tack [3]. Explicit feedback can also suffer from rate limiting [18] in the case of probing packets such
as ICMP, etc., which are used in a controlled manner for security reasons at the receiver. Recent
research [5] supported the use of mobility signaling and a nonce in the Quick-start request to
counteract these attacks [27], [3]. According to the study by [1], another problem in Quick-start is
that, when the request has not been approved, the Slow-Start (default congestion control)
scheme [9] is used. When a packet loss occurs, quick start assumes slow start and uses the
default initial window for transmitting the remaining data, in the same way as when the quick start
request has not been approved.
2.4.5 The Congestion Manager (CM)
The CM [17] is an end-to-end module and therefore works at both peers’ application level. The
CM incorporates an API (Application Programming Interface), which is a non-standard protocol.
The advantage the CM has over Slow-Start is a faster startup on a link which has been
used before, because of the available aggregate information, i.e. it can leverage information on previous TCP
connections. The CM's weakness is that for a connection on a link which has not been used, it falls back to
TCP's slow-start scheme [9], which is the default congestion control with many shortcomings
as noted in Section 1.2.
2.4.6 TCP Fast Start
TCP Fast Start [15] caches network parameters such as the RTT and cwnd from a previous TCP
connection and then uses this information to estimate the available bandwidth. The only disad-
vantage would be a wrong estimate when the cached information becomes stale. The scheme
protects against this consequence by preventing a fast start connection after a packet drop has
occurred, using fast recovery instead, and packets sent during fast start are assigned a higher drop
priority than other packets. This mechanism is good because it avoids the slow startup penalty
each time there is a new TCP connection.
Table 2.1 is a tabulated summary comparing the TCP startup schemes. The basis of the
comparison includes the number of round trip times required to probe the available bandwidth,
the accuracy of the estimated value of the available bandwidth, the need for intermediate
routers (i.e. whether estimation is explicit/online or implicit), whether the scheme is susceptible
to security threats, and how each scheme responds to packet loss.
Slow-Start [9], [11]: many round trip times to probe the available bandwidth, because of its
slow nature; inaccurate estimates in wireless networks, since it cannot distinguish the different
causes of packet loss; no middle node required; no security threat; on packet loss it reduces to
1 MSS, giving inefficient use of bandwidth.

Swift-Start [22]: few round trips, because it acknowledges packets and not a whole train; less
accurate compared to Paced-Start; no middle node required; no security threat; uses fast
recovery on packet loss.

Paced-Start [10]: many round trips, because it acknowledges a whole train, hence similar to
slow start; accurate; no middle node required; no security threat; response to packet loss not
well known.

Quick-Start [1], [3], [26]: few round trips, because it uses an option in the IP header during the
three-way handshake; accuracy depends on the intermediate node, since (i) biased routers do
not support bursty traffic and can cause early packet drops with a higher initial window, and
(ii) misbehaving routers may report wrong available bandwidth when there is low or high
traffic induced by an attack; middle node required; vulnerable to DOS attack; on packet loss
it returns to slow start, using the default initial window in the same way as if the quick start
request was not approved.

The Congestion Manager [17]: few round trips on a previously used link, many on new links;
accuracy depends on the probing nature of slow start on new links; no middle node required;
no security threat; uses the previous cwnd that was sent successfully, giving stable and efficient
bandwidth usage.

Fast Start [15]: many round trips on a new link, few on used links; accurate on a new link,
while on a used link accuracy depends on whether the cached parameters are stale; no middle
node required; no security threat; uses the restart window with fast recovery, does not enter
the slow start phase, and uses bandwidth efficiently at startup.

Table 2.1: Comparison of TCP Startup Schemes
Slow start has the longest startup time, followed by Paced start, then Swift start. Quick
start assumes slow start when the Quick start request is not approved; otherwise, it is faster than
Paced start and Swift start. The Congestion manager and fast start assume slow start on a
new link, but both are faster than Slow start.
Only quick start requires intermediate routers, hence it may not be a good choice, since an
approved quick start request can be misleading in estimating the available bandwidth, in such
a way that biased routers will not support bursty traffic and misbehaving routers may
report wrong available bandwidth, i.e. either underestimated or overestimated. Dependency
on explicit feedback makes quick start susceptible to security risks such as DOS attacks.
The procedures used by slow start and quick start for recovery after packet loss lead to under-
utilisation of bandwidth. Fast start regains high link utilisation faster than all the other
schemes because it uses the restart window and not the loss window.
Fast start takes a smaller number of round trip times to estimate the available bandwidth since
it uses implicit information from the acknowledgment of the first packets. The congestion
manager assumes slow start on new links, quick start assumes slow start when a request is
not approved, swift start uses paired acknowledgments, and paced start acknowledges a whole
train; hence these take more round trip times to estimate the available bandwidth compared to
fast start.
In summary, fast start is a better TCP startup scheme since it takes few round trip times
to estimate the available bandwidth, uses a higher initial window during connection setup and
uses a high restart window after a long idle time or packet loss, rendering high TCP performance
during the congestion avoidance phase.
2.4.7 Related Work
The study in [7] involved the use of slow start to compare the performance of standard TCP
and Fibonacci TCP; Fibonacci TCP performs better than the regular TCP. In our study we
compare the performance of various startup schemes, including slow start, and select the best
scheme for simulation with Fibonacci TCP.
Similar findings in [37] show that TCP variants present self-similar behaviour over time
scales. This means changing network settings causes only a slight variation in the behaviour of
the traffic pattern. During our study, we look at the characteristics of other TCP variants,
namely quick start and fast start, in comparison to the slow start scheme in terms of RTT, packet
loss and throughput when we simulate each scheme with the regular TCP. The previous study also
shows that whenever there are multiple TCP flows, they are all synchronised. That is, all TCP
flows that passed through the router tended to lose packets at the same time and reduce their
sending rates at the same time. This is because all the TCP flows are summed up into one
TCP window and appear as a single flow through the bottleneck link. When TCP flows are
synchronised [38], a bigger buffer is a better choice in order to achieve high utilisation.
One of the studies [40] involved an investigation of the effect of packet size on the maximum
cwnd when there is more than one TCP flow sharing the link, each with different
packet sizes. The study shows that different packet sizes perform differently. In our study,
we do not compare the performance of competing TCP flows but the performance of the tagged flow
under different startup schemes, using an equal number of packets in the maximum cwnd for each
simulation with one TCP flow.
In the study by [41], throughput is used as the main metric to investigate the performance of TCP in
terms of throughput collapse in cluster-based storage systems. In our study, throughput is
also used as the main metric to determine the performance of the selected startup schemes
for setting up a TCP connection and, thereafter, the impact of a faster startup on Fibonacci TCP
in terms of throughput variation. Using Fibonacci TCP with a faster startup scheme in cluster-
based storage systems may cause erratic throughput due to an increase in congestion and RTT
as a result of the large amount of data being sent, a situation that may lead to TCP pause.
TCP Pause
When TCP begins sending data in blocks at connection setup, it causes sudden congestion
and TCP pause [42]. TCP pause is the process by which a TCP connection sends a large block of
data and pauses before sending the next block. A significant TCP pause that lasts for some duration
of time accounts for throughput collapse. In such Internet applications, using Fibonacci TCP
implementing a faster startup scheme may lead to data being sent in blocks, because of the high
restart window used after idle time or packet loss coupled with the high factor of 1.618034 used
by Fibonacci TCP to increase the congestion window. TCP pause occurs if there is a throttling
process restricting data flow; the throttling process includes erratic RTT and congestion.
This erratic pattern of TCP, that is, the stop-start pumping of data, means that
throughput would be reduced to a low value, degrading the link performance. This erratic
pattern may not affect services such as e-mail, but will affect multimedia applications such as
video and voice. TCP pause can also result in under-utilisation of bandwidth. This is a
similar situation to that of throughput collapse in cluster-based storage systems [41].
In a data communication network, congestion occurs when a link or node is sending
so much data that its throughput deteriorates. The throughput deterioration is attributed
to queuing delay, packet loss and blocking of new connections. The consequence of packet
loss and blocking of new connections is that any incremental data leads either to only a small
increase in network throughput or to an actual decrease in network throughput. A network
protocol such as Fibonacci TCP, coupled with the high restart window used in fast start, may
be aggressive in retransmission of data to compensate for lost packets, and this keeps the
link in a state of congestion even after the initial data amount has been reduced to a level
which would not have induced network congestion. Therefore, using a faster startup that has a
high restart window with a high speed TCP in a cluster-based storage system may behave like
protocols with aggressive retransmission that exhibit two stable states under the same level of
offered data. The stable state with low throughput is a congestive/throughput collapse, where
the total incoming bandwidth exceeds the outgoing bandwidth. Meanwhile, the stable state with
high throughput is when there is no TCP pause.
2.5 Effects of TCP’s Startup Algorithm on Flow Through-
put
The impact that the startup of a TCP connection has on the (flow) throughput is determined
by the flow length [10]. For instance, very short TCP flows never get out of the startup phase,
that is, transmission ends in the startup phase, and so their throughput is determined by the
startup scheme. For long TCP flows, the startup scheme has negligible impact on the total
throughput, since most of the data is transmitted in the congestion avoidance phase. For
intermediate-length flows, the impact of the startup scheme depends on how long the flow
spends in the congestion avoidance phase. However, it is not well known how starting a TCP
connection with a high initial window influences the congestion avoidance phase. Since the startup
algorithm used determines the size of the initial window, we briefly discuss the advantage of
a large initial window in the next section.
2.6 Advantages of Increasing TCP’s Initial Window Size
A large initial window is advantageous especially where the connection is established for
transmission of a small amount of data, because all the data may be transmitted in a single
window. For instance, e-mail and web page transfers are short flows, and a larger initial
window can reduce the data transfer time. In many variants of slow-start, such as [24], [28],
connections that are able to use large congestion windows eliminate up to three RTTs. This is
a benefit for high-bandwidth paths with large propagation delay, such as TCP connections over satellite
links and LFNs.
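As a rough, back-of-envelope illustration (our own, ignoring delayed ACKs and losses): if the initial
window is $w_0$ segments and the window doubles every RTT, about $w_0(2^k - 1)$ segments have been
sent after $k$ round trips, so a short transfer of $N$ segments completes in roughly

\[
k \approx \left\lceil \log_2\!\left(\frac{N}{w_0} + 1\right)\right\rceil \ \text{RTTs}.
\]

For a 15-segment transfer this gives about 4 RTTs with $w_0 = 1$ but only 3 RTTs with $w_0 = 4$.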
Using the scenarios of [23], [18] over an underutilized network, the TCP sender would be
able to transmit much of its data in the initial congestion window, as much as the available
bandwidth can absorb, and can complete the data transfer in half or less of the time required
by [9]. After the slow startup phase, TCP transitions into the CA phase. The next section contains
the review of the algorithms used in the CA phase of a TCP connection.
2.7 TCP’s Congestion Avoidance (CA) Algorithms
In the previous section we discovered that TCP uses various startup algorithms during the
startup phase of a TCP connection. When the TCP window reaches the slow start threshold
value, it goes into congestion avoidance (CA) mode. For the purpose of this study, the CA
algorithms will be sub-divided into:
i. Standard congestion avoidance algorithm,
ii. Exponential Algorithms, and
iii. High speed algorithms.
The standard congestion avoidance algorithm is used in the congestion avoidance phase of stan-
dard TCP. In this category, a timeout triggers TCP's slow-start [9] algorithm, causing the
current window size to be halved. This is because the timeout is assumed to be due to
packet loss as a result of congestion in the link. The disadvantages of this TCP behaviour
were already discussed in the previous section. The exponential algorithms, such as [8], do
not halve the current window size after packet loss, but provide a multiplicative decrease
using an exponential of the current window size (not a 0.5 factor) and later an additive increase
using the inverse of the current window size to gain maximum utilization of the available
bandwidth. Meanwhile, high speed algorithms use the inverse ratio of a sequence of numbers
(such as the Fibonacci series) as the multiplicative factor to reduce the current window in the event
of packet loss, and increase the window size by the coefficient. An example of these algorithms is
the proposed Fibonacci TCP [7], which is further reviewed in Section 2.8.
The performance of CA algorithms can be further improved by implementation of inter-
packet delay (IPD). The TCP-Friendly Rate Controller (TFRC) scheme [29] applies IPD to
ensure rate control in terms of the buffer level at the mobile device (receiver). IPD is effective
because it uses the buffer level at the mobile device and sets it as the sending rate in video-on-
demand applications. This is achieved by implicit prediction of the current buffer level based
on the receiver (which has a low playback rate), hence reducing the possibility of packet loss
due to overflow at the receiver. The prediction of the buffer level is done continuously and is
based on RTT and packet loss, hence it does not flood the network path.
Additionally, a faster startup such as the Quick-Start mechanism [19], [5] can be useful in
sustaining performance during the congestion avoidance phase after a long idle period [19] and
after a handoff process between a mobile node and the access point [5], respectively. This
is possible because transmission will continue using the previous optimal cwnd from before the
idle period or handoff process. In asymmetric networks, performance can be improved by
manipulating the frequency of RTT. For example, Formosa TCP [30] has been found to have
advantages over other TCP variants when used in asymmetric networks. This is because
it has high throughput and low delay variation per connection, and its RTT estimation can
identify the direction of the congestion, hence it would not suffer performance degradation
during the CA phase.
2.8 Fibonacci TCP
Fibonacci TCP [7] is a particular CA algorithm proposed to increase the utilization of available
bandwidth in high speed networks.
How Fibonacci TCP controls the steady behaviour of the CA phase is based on Fibonacci num-
bers. The Fibonacci numbers are a sequence defined by a recurrence relation over terms
ranging from 0 to n. The principle of using Fibonacci numbers in the CA phase was
borrowed from Computer Science, where error-correction code implementations are based on
varying Fibonacci numbers (or series) to increase the information reliability of a communication
system.
In a computer network system, the nth term is noted when packet loss occurs. As n tends to infinity,
the golden ratio (the multiplicative factor used to increase the window in the absence of congestion)
tends to 1.618034, rather than an increase of only 1 MSS [9]; and the golden ratio inverse used to reduce the
cwnd size when packet loss occurs will be 0.618034, and not 0.5 as in the standard CA algorithm. This
means a high initial window size is expected to influence the term n at which the first
packet loss takes place, and hence the overall performance of Fibonacci TCP. But how the
high initial window will impact on Fibonacci TCP is not yet well known.
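Restating the above in symbols (our own notation), the Fibonacci recurrence and the golden ratio
used by Fibonacci TCP are:

\[
F_0 = 0,\quad F_1 = 1,\quad F_n = F_{n-1} + F_{n-2},\qquad
\lim_{n\to\infty}\frac{F_{n+1}}{F_n} = \varphi \approx 1.618034,\qquad
\frac{1}{\varphi} = \varphi - 1 \approx 0.618034,
\]

so the window is increased by a factor that approaches the golden ratio in the absence of congestion
and is reduced by a factor of about 0.618 (instead of 0.5) when a loss occurs.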
From the study by [7], the use of the golden ratio in a high speed TCP such as Fibonacci TCP
presents two advantages. First, in the absence of congestion, Fibonacci TCP will increase
the cwnd size faster and utilize bandwidth more than the standard CA algorithm. The second
advantage of Fibonacci TCP is that the reduction of the current window size after packet loss
is the least compared to the other two algorithm categories mentioned above.
2.9 Research Question
Using the proposed faster startup scheme, the study considered performance metrics which
included RTT variation, throughput variation and packet loss rate. The study questions were:
i. Which TCP startup scheme could be used?
ii. What would be the effect of a faster startup scheme (that is, using a higher initial window)
on Fibonacci TCP?
Chapter 3
Methodology
3.1 Introduction
This section describes the tools, parameters, detailed procedures and experiments used to
achieve each objective stated in Section 1.3.2. The simulation involves only three startup
schemes, chosen due to limited time and the relative advantages shown in Table 2.1, and because
these are the ones which have their code available.
3.2 Simulation Tool
The study involved the use of the ns-2 simulator [20] running on SUSE Linux Enterprise
Desktop 11 SP1. We chose to use ns-2 because it is easy to define the various simulated
objects such as applications, protocols, network types and traffic models, and can allow the
study to achieve faster execution time and efficiency. The simulation results can be verified
by analysing the trace files in comparison with existing theoretical values. It would be very
difficult to run the experiments with multiple nodes in a real-life test environment due to the high
expense and unavailability of hardware components.
MATLAB was chosen for the analysis because it is a high level mathematical
language which is good for developing the prototype and generating output that can be
sufficient for early insights into the investigated TCP flows. After the analysis, MATLAB was
also used to plot various graphs from the processed data.
We also used awk scripts to filter the required data from the data traces. This made analysis
easier and saved space on the storage drive.
In the simulation of the various TCP startup schemes, procedures for computing three
analytical metrics, namely RTT, throughput and packet loss ratio, were used. RTT is the
time interval between the time a packet is sent from the TCP sender node and the time its corresponding
ACK packet is received at the TCP sender node. Packet loss ratio is the number of dropped
packets divided by the number of sent packets. Throughput is the amount of data received
over the network per second.
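In symbols (our own notation, restating the definitions above), for a packet i sent at time
t_send(i) whose ACK returns at time t_ack(i):

\[
RTT_i = t_{ack}(i) - t_{send}(i),\qquad
\text{Packet loss ratio} = \frac{N_{dropped}}{N_{sent}},\qquad
\text{Throughput} = \frac{\text{bytes received} \times 8}{\text{measurement interval (s)}}\ \text{bits/s}.
\]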
To compare the performance of the TCP startup schemes, we used MS Excel to compute
averages, variance, standard deviation and coefficient of variance using sample data. The
average, variance, standard deviation and coefficient of variance can also be computed using
the equations in [31].
The coefficient of variance is the standard deviation divided by the average, expressed as a
percentage [31]. We used the coefficient of variance to enable us to know whether the throughput
values are closely concentrated around the average value or not and, hence, how stable a scheme is.
The coefficient of variance is often used when comparing data sets from different units or different
environments with widely different means/averages. In such cases, we did not rely on standard
deviation alone, since we got sample data from different categories of startup schemes.
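Expressed as a formula (restating the definition above):

\[
CV = \frac{\sigma}{\mu} \times 100\%,
\]

where $\sigma$ is the sample standard deviation and $\mu$ the sample mean of the metric being compared.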
3.3 Selection of the Faster Startup Scheme
Three (3) different startup schemes, namely slow start, quick start and fast start, were used
for the simulation with single flows. Based on the results from the single-flow simulation, we
simulated multiple (two and four) TCP flows using fast start.
3.4 Simulation of Chosen Startup Schemes
We simulated data transmission in TCP connections having: (i) one TCP flow, (ii) two
TCP flows and (iii) four TCP flows, using a dumbbell topology. We describe the parameters
in Sections 3.5.1 and 3.5.2. Simulations were done for regular TCP as well as Fibonacci TCP
while varying the startup schemes. The various simulations were done using the same topology
shown in Figure 3.1.
3.4.1 Topology
Figure 3.1 represents the dumbbell topology that was used for all simulations. A dumbbell
topology is a network setup where TCP nodes 1 to n are connected to a single router (router-1), and
router-1 is connected to router-2 over a single slow bottleneck link.
Figure 3.1: Dumbbell Topology used for the Experiments
Bandwidth Delay Product (BDP) and TCP Receive Window(RWIN)
BDP determines the optimal amount of data that should be in transit in the network. It is
directly related to the optimum TCP receive window value in an existing TCP connection.
Essentially, the BDP is the bandwidth multiplied by the delay value of a link. We
employed equations (3.1) and (3.2) [32] to compute the link specifications and parameters.
BDP = Bandwidth × Delay    (3.1)
RWIN = BDP    (3.2)
where BDP is expressed in packets and RWIN is the receiver advertised window.
Since the bottleneck bandwidth is 5 Mbps and the minimum RTT is 160 ms, using equation (3.1) the optimum TCP window consists of 100 packets, assuming a packet size of 1000 bytes. That is, the number of packets that can fully utilise the link is 100. In addition, the send and receive buffers should be optimised to allow full network utilisation.
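As a check on the 100-packet figure, the arithmetic with the values above (5 Mbps bottleneck, 160 ms minimum RTT, 1000-byte packets) is:

    \mathrm{BDP} = 5 \times 10^{6}\,\mathrm{bit/s} \times 0.160\,\mathrm{s}
                 = 800{,}000\ \mathrm{bits} = 100{,}000\ \mathrm{bytes},
    \qquad
    \mathrm{RWIN} = \frac{100{,}000\ \mathrm{bytes}}{1000\ \mathrm{bytes/packet}} = 100\ \mathrm{packets}.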
Buffer
The requirement for optimum send and receive buffers arises as follows. If the buffers are too large, more packets sit in the queue and latency increases; moreover, since TCP congestion and flow control determine their own effective congestion and receive windows, the remaining buffer space is unused and therefore wasted. On the other hand, if the buffer is too small, TCP congestion and flow control effectively reduce the sender's window to the same small size, slowing down the network. Additionally, in this study there was a need to know how much outstanding (unacknowledged) data can be in flight between the sender and the receiver, as that determines how large the send buffer should be.
To determine an adequate buffer size, we used the known bandwidth and network latency, based on the fact that in TCP the sender cannot flush data from its buffer until the receiver has acknowledged it. Therefore, RTT times bandwidth puts a lower bound on what the buffer size and TCP window should be. We used equation (3.3) [32] to derive the buffer size.
Buffer = BDP (3.3)
From equations (3.2) and (3.3), it follows that:
Buffer = RWIN    (3.4)
The buffer size would therefore be 100 packets. This is our recommendation based on the above arguments, not a strict rule.
The access link capacities are 100 Mbps and their delays are 1 ms. The bottleneck link capacity is 5 Mbps and its delay is 79 ms. The access links are given the lower delay of 1 ms because their bandwidth is dedicated to only one TCP flow, whereas the bottleneck link is given a higher delay because it is shared by many TCP flows. Drop Tail queuing [33] is used throughout the simulations.
These link specifications were kept constant for all the simulations. We varied the parameters, including the maximum window sizes and the number of flows, as shown in Tables 3.1, 3.2 and 3.3. For each simulation, we generated traffic in one direction. The different parameters and methods used in the simulations are explained in Section 3.5.
3.4.2 Analytical Metrics
In this section, we describe the methods for deriving the metrics used to analyse the simulation traces. These metrics are RTT, throughput and packet loss ratio.
Round Trip Time (RTT)
We computed RTT by first extracting the required data from the trace files using an awk script. We extracted the sequence numbers and corresponding times of both data and ACK packets of the tagged flow. Two separate files were created: the first for data packets (consisting of sequence numbers and the times at which they were de-queued from TCP node-1) and the second for ACK packets (consisting of sequence numbers and the times at which they were received at TCP node-1).
Cases of duplicate data and ACK packets were handled with a MATLAB script. The script searches through the list sequentially from the beginning of the file to the end; for data and ACK packets that appear more than once, it records only the time of the last duplicate seen.
The data and ACK files had the same number of records. Each data-packet sequence number in the data file had a corresponding sequence number in the ACK file, since the ACK packets carry the sequence numbers of the packets received at the sink.
The RTT of packet (i) was then computed from equation (3.5).
RTT(i) = ACKtime(i) − Datatime(i) (3.5)
where Datatime(i) is the time when data packet (i) was de-queued, ACKtime(i) is the time when the associated ACK packet was received at TCP node-1, and RTT is in seconds.
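A minimal sketch of this matching step is given below. It assumes the standard ns-2 trace format (event, time, from-node, to-node, packet type, size, flags, flow id, source, destination, sequence number, unique id), that the tagged flow uses flow id 1 and that TCP node-1 is trace node 0; the actual processing used separate awk and MATLAB scripts as described above.

    # rtt.awk -- sketch: per-packet RTT from an ns-2 trace (run as: awk -f rtt.awk out.tr)
    # De-queue ("-") of a data packet at the sender node (node 0) of the tagged flow (fid 1);
    # overwriting keeps the time of the last duplicate, as in the MATLAB script.
    $1 == "-" && $3 == 0 && $5 == "tcp" && $8 == 1 { send_time[$11] = $2 }
    # Receipt ("r") of the corresponding ACK back at the sender node, as in equation (3.5).
    $1 == "r" && $4 == 0 && $5 == "ack" && $8 == 1 && ($11 in send_time) {
        rtt[$11] = $2 - send_time[$11]
    }
    END { for (seq in rtt) printf "%s %.6f\n", seq, rtt[seq] }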
Packet Loss Ratio
This section describes how packet loss ratio was derived.
Using awk code, the numbers of received and lost events were counted from the full trace file. Any packet belonging to the tagged flow was counted as lost if it was registered as dropped at any of the nodes. Lost packets is the total count of dropped packets that were sent out by the tagged TCP node, denoted dp. Received packets is the count of packets received at the TCP sink, denoted rp. The total number of packets sent is the sum of dp and rp. Packet loss ratio is computed analogously to delivery ratio; we therefore used equation (3.6) [34] shown below.
Loss ratio = dp / (dp + rp)    (3.6)
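A sketch of this counting in awk, under the same trace-format assumptions as before (tagged flow with flow id 1, sink at trace node 3), would be:

    # loss.awk -- sketch: loss ratio of the tagged flow from the full ns-2 trace
    $1 == "d" && $8 == 1                            { dp++ }   # tagged-flow packet dropped at any node
    $1 == "r" && $4 == 3 && $5 == "tcp" && $8 == 1  { rp++ }   # data packet received at the sink
    END { printf "dropped=%d received=%d loss ratio=%.6f\n", dp, rp, dp / (dp + rp) }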
Data Throughput
The data for evaluating throughput of the tagged flow was extracted during simulation with
the help of a TCL script and later exported to MATLAB for graphical representation of the
throughput. The study emphasised on throughput during startup phase and when TCP had
to recalculate congestion window size when new flows were just added to the link.
Throughput (Mbps) = (ByteRcvd / 0.5) × (8 / 1 000 000)    (3.7)
where ByteRcvd is the number of bytes received in each 0.5 s interval.
We used equation (3.7) [35] to compute the throughput at regular intervals of 0.5 s. We periodically read the sink's byte counter to obtain the number of bytes received at the sink, converted the value to Mbps and wrote the result into an output file. The interval between readings was 0.5 s, and the counter was reset to zero after each reading so that it would not return the accumulated number of bytes since the start of the tagged TCP connection.
To investigate the variability of throughput at different traffic loads, we varied the number of flows and the window sizes (packet distribution) of the various flows.
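Although the study computed throughput on-line in the simulation script, the same per-interval values can also be recovered off-line from the trace. A sketch in awk, under the same trace-format assumptions as before (sink at trace node 3, tagged flow with flow id 1, 0.5 s bins), is:

    # thr.awk -- sketch: per-0.5 s throughput (Mbps) of the tagged flow from an ns-2 trace
    $1 == "r" && $4 == 3 && $5 == "tcp" && $8 == 1 {
        bin = int($2 / 0.5)              # index of the 0.5 s interval
        bytes[bin] += $6                 # field 6 = packet size in bytes
        if (bin > last) last = bin
    }
    END {
        for (b = 0; b <= last; b++)      # equation (3.7) applied to each interval
            printf "%.1f %.4f\n", (b + 1) * 0.5, (bytes[b] / 0.5) * 8 / 1e6
    }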
3.5 Details of Experiments Conducted
3.5.1 Simulation of One TCP Connection
This section describes the simulation in which only one TCP sender sends unidirectional data. The entire bottleneck link is dedicated to data packets from a single TCP sender and ACK packets from a single corresponding TCP sink. Various simulations are done with the startup scheme set to each of the three schemes in turn. For each scheme, the maximum congestion window is set to 50, 100 or 200 packets. The study considered these window sizes because the optimal window size had been calculated to be 100 packets from equation (3.1). In all cases, the buffer size was set to 100 packets following equation (3.3). The different parameters are summarised in Table 3.1.
Startup Schemes maximum Window (in packets)
Slow Start 50, 100 and 200
Fast Start 50, 100 and 200
Quick Start 50, 100 and 200
Table 3.1: Maximum cwnd for One (1) TCP Flow Simulation
3.5.2 Simulation of Many TCP Connections
This section describes the network settings when there are multiple TCP flows and all the flows share a single bottleneck link. The study of the tagged flow was divided into three categories: (i) when TCP sender 1 presents a smaller maximum cwnd, that is, when the maximum cwnd of the tagged TCP flow is smaller than that of the other flows; (ii) when the maximum cwnd of all the TCP flows is equal; and (iii) when the maximum cwnd of the tagged flow is higher than that of the additional flows, as illustrated in Tables 3.2 and 3.3 for two flows and four flows respectively. In each simulation, the starting times for traffic generation are randomly distributed starting from 0.1 s, with the tagged flow being the first TCP sender, and traffic from the additional flows is generated at intervals of 3.1 s. Each simulation was set to last 50 s. The results of the experiments are found in Chapter 4.
The simulation of multiple flows is done with the view that, when network resources are shared by multiple TCP connections, the tagged flow is affected by the other flows, which is closer to real networks than the single-flow simulation. In that regard, the other flows are additional traffic that may cause long queues and congestion, resulting in more queuing delay and packet loss that can reduce throughput. However, each connection must get a fair share of the resources in terms of bandwidth. The bottleneck of 5 Mbps therefore supports a single flow or many TCP flows with varying maximum congestion window sizes. If bandwidth is shared equally, each TCP connection's share is a fraction of the bottleneck bandwidth, because TCP congestion control algorithms aim at a fair share of the bandwidth among competing flows.
Window Category   Maximum Window for Tagged Flow (packets)   Maximum Window for Second Flow (packets)
Lower             50                                          150
Equal             100                                         100
Higher            150                                         50
Table 3.2: Maximum Window for Two (2) TCP Flows Simulation
Further simulations were done with four (4) flows to investigate the variation of RTT, packet drop and throughput when the number of flows is doubled. Table 3.3 shows the parameters considered in the four-flow simulations.
Window Category   Maximum Window for Tagged Flow (packets)   Maximum Window for each Additional Flow (packets)
Smaller           20                                          60
Equal             50                                          50
Bigger            80                                          40
Table 3.3: Maximum cwnd for Four (4) TCP Flows Simulation
3.5.3 Simulation of Fibonacci TCP with Selected Faster Startup Scheme and Slow Start
Further simulations are done using fibonacci TCP in the CA phase while considering two startup schemes, namely slow start and fast start, to set up the TCP connections. The objective is to evaluate and compare the performance of fibonacci TCP in terms of RTT, packet loss ratio and throughput when it uses the slow startup and the faster startup mechanism independently to set up a TCP connection. Hence, the simulations inform the choice between slow start and fast start for fibonacci TCP to achieve good performance. This is done for a single fibonacci TCP flow as well as for two and four fibonacci TCP flows.
Chapter 4
Results
4.1 Introduction
This chapter presents the performance of slow start, fast start and quick start in terms of RTT, throughput and packet loss ratio. The analysis and discussion of the results are found in Section 4.2 for the single TCP flow simulations, Section 4.3 for multiple TCP flows and Section 4.4 for the simulations of Fibonacci TCP.
4.2 RTT, Packet Loss Ratio and Throughput for One TCP Flow
4.2.1 RTT for Single TCP Flow
Recall from Section 3.4.1 that the minimum RTT is 160 ms. Figures 4.1, 4.2 and 4.3 show
RTT for one regular TCP flow with various maximum congestion windows of 50, 100 and 200
packets.
Figure 4.1: RTT for one regular TCP flow with maximum cwnd of 50 packets (RTT (s) against time (s) for quick start, slow start and fast start).
Figure 4.2: RTT for one regular TCP flow with maximum cwnd of 100 packets (RTT (s) against time (s) for quick start, slow start and fast start).
Figure 4.3: RTT for one regular TCP flow with maximum cwnd of 200 packets (RTT (s) against time (s) for quick start, slow start and fast start).
There is a sharp increase in RTT during the first 2.5 s of the simulation time for all maximum congestion windows in each of the startup schemes. This is due to the sudden traffic buffered at the router, which increases queuing time and is reflected in increased RTT. The sharp increase in RTT within the first 2.5 s of the simulation time is highest when the TCP flow has a maximum cwnd bigger than the optimum window size of the link.
RTT variation increases with increasing maximum cwnd. This is seen in Figures 4.2 and 4.3, where the RTT variation for the TCP flow with the bigger maximum cwnd is higher than for the smaller maximum cwnd. In TCP flows with a maximum cwnd higher than or equal to the optimum congestion window size, queuing delay is a significant component of RTT and contributes to higher variation and RTT values well above 0.24 s. For the TCP flow with a maximum congestion window lower than the optimum window, the increase in RTT is due to packet processing time at the receiver and link delay, not queuing delay, since the buffer is always empty. Comparing all the startup schemes, fast start remained the most stable as the maximum congestion window increased.
The oscillation of RTT above 0.16 s limits the optimum throughput that can be achieved by one TCP flow with any value of the maximum congestion window.
Since the RTT values increase with window size, RTT appears to be proportional to traffic volume. Likewise, RTT increases as the difference between the TCP window size and the link's BDP decreases, and it increases further when the maximum window is well above the optimum queue length and optimum window size. To obtain the number of round trips needed for capacity estimation (CE), we used the minimum time to send data and receive the ACK, as stated in equation (4.1).
1 RTT = 0.16 s    (4.1)
1 s ≈ 6 RTT    (4.2)
From equation (4.1) we derived equation (4.2), which gives the number of round trips within 1 s of simulation time. Equation (4.1) is based on the fact that the minimum RTT is 160 ms, and equation (4.2) is used to obtain the number of RTTs for capacity estimation for each scheme. To get the actual number of round trips during the simulation, we computed the product of the time taken for the RTT to reach a stable state and the number of round trips in 1 s given by equation (4.2). We present the analysis next.
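Written compactly, this procedure is simply

    N_{\mathrm{CE}} \;=\; T_{\mathrm{stable}} \times 6\,\mathrm{RTT/s}

where T_stable is the time taken for the RTT to reach a stable state (the "time to adjust window" column of Table 4.1) and N_CE is the number of round trips for capacity estimation.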
Table 4.1 shows a comparative analysis of RTT in terms of the average, variance, standard deviation, coefficient of variance (COV), the time interval needed to estimate capacity and the number of round trips needed to estimate the available bandwidth.
Each startup scheme has its own way of determining the available bandwidth and the initial window. The common feature is that RTT increases with window size, while the number of round trips to determine the available bandwidth remains roughly the same across schemes. However, the round trip time is lower whenever the maximum cwnd of the flow is less than the optimum window size, or BDP, of the link.
From the RTT analysis in Table 4.1, all schemes perform uniformly when there is only one TCP flow with a maximum cwnd lower than the optimum window size. The number of round trips for capacity estimation increases with congestion, but the RTT does not grow without bound as the maximum cwnd increases. Fast start maintains a lower average RTT and a shorter time to adjust the window (fewer RTTs for capacity estimation) than both slow start and quick start; hence fast start performs better than the other two schemes.
Max. cwnd   Scheme       RTT Avg (s)  Variance  Std Dev  COV    Time to adjust  RTTs for
(packets)                                                        window (s)      CE
50          Slow start   0.18         0.001     0.039    0.22   0.5             3
            Quick start  0.18         0.001     0.039    0.22   0.5             3
            Fast start   0.18         0.001     0.039    0.22   0.5             3
            Remarks: all schemes had the same coefficient of variance and number of
            round trips for capacity estimation.
100         Slow start   0.71         0.90      0.95     1.34   2.4             15
            Quick start  0.47         0.29      0.54     1.15   2.2             14
            Fast start   0.52         0.40      0.63     1.21   2.0             12
            Remarks: round trips for capacity estimation and coefficient of variance
            increased with window size; fast start had the least number of RTTs.
200         Slow start   0.84         0.90      1.18     1.41   2.4             15
            Quick start  0.75         0.99      0.99     1.32   2.2             14
            Fast start   0.61         0.60      0.77     1.28   2.0             12
            Remarks: coefficient of variance increased further with window size; fast
            start had the least coefficient of variance and round trips for capacity
            estimation.
Table 4.1: Comparative Analysis of RTT for One Regular TCP Flow
4.2.2 Packet Loss for One Regular TCP Flow
Table 4.2 shows the dropped and received packet counts and the computed loss ratio for each experiment. Packet loss ratio increases with increasing maximum cwnd. For the simulation with a maximum cwnd of 50 packets, the entire window is in transit without causing congestion because the TCP window size is much less than the BDP of the link. No queue builds up at the router, hence there is no packet drop. However, for the flow with a maximum cwnd of 100 packets, there is a burst of traffic that leads to link congestion and hence packet loss. Slow start experienced the highest packet loss ratio of 19 × 10^-4, followed by quick start and fast start at the same loss ratio of 18 × 10^-4.
When the flow's maximum cwnd is doubled to 200 packets (twice the optimum window size), the loss ratio increases for all three schemes. Slow start maintains the highest loss ratio of 88 × 10^-4, followed by quick start and fast start with loss ratios of 74 × 10^-4 and 56 × 10^-4 respectively. The largest increase in loss ratio with increasing window size is witnessed in slow start, from 19 × 10^-4 to 88 × 10^-4, an absolute increase of 0.69%. The increases for quick start and fast start, from 18 × 10^-4 to 74 × 10^-4 and 56 × 10^-4, are 0.56% and 0.38% respectively. This means fast start performs better than the other two schemes in terms of packet drop as the maximum congestion window increases. On the other hand, slow start is the worst performer as the congestion window size increases, partly due to the exponential increase of its cwnd. A scheme with a high packet loss rate suffers in routers that are biased against bursty traffic, such as Drop Tail routers. In such a case, a TCP connection whose cwnd can grow to a very large size might experience unnecessary retransmissions because the router cannot absorb small bursts, which could result in an unnecessary retransmit timeout. In such an environment, fast start is the best option, as supported by the analysis in Table 4.2.
Max. cwnd   Scheme       Dropped   Received  Total     Loss Ratio
(packets)                packets   packets   packets   (× 10^-4)
50          Slow start   0         15389     15389     0
            Quick start  0         15389     15389     0
            Fast start   0         15389     15389     0
100         Slow start   50        26276     26323     19
            Quick start  50        27874     27924     18
            Fast start   50        28098     28148     18
200         Slow start   199       22520     22719     88
            Quick start  199       26762     26961     74
            Fast start   152       27071     27223     56
Table 4.2: Packet Loss Details for One Regular TCP Flow
4.2.3 Throughput from Single Flow
Figures 4.4, 4.5 and 4.6 show the throughput for one TCP flow with a maximum congestion window of 50, 100 and 200 packets respectively. When the maximum congestion window is less than the optimum window size, all the startup schemes achieve the same optimum throughput at the same rate. When the maximum congestion window is equal to the optimum window size, all the startup schemes achieve the optimum throughput of the link; however, fast start reaches it after about 10 s, slightly faster than both slow start and quick start. When the maximum congestion window is higher than the optimum window, all the startup schemes again reach the optimum throughput, but after different times, in a similar way to when the congestion window equals the optimum window. Fast start attains the throughput at almost the same speed as quick start but is more stable.
The average throughput achieved when the maximum cwnd is smaller than the optimum window is approximately 2.5 Mbps, even though the bottleneck is 5 Mbps. This is because the flows were limited to a smaller maximum cwnd so as to represent TCP connections that are ultimately limited to a small cwnd by the receiver advertised window rather than by link congestion.
Figure 4.4: Throughput for one regular TCP flow with maximum cwnd of 50 packets (throughput (Mbps) against time (s) for slow start, fast start and quick start).
Figure 4.5: Throughput for one regular TCP flow with maximum cwnd of 100 packets (throughput (Mbps) against time (s) for slow start, fast start and quick start).
Figure 4.6: Throughput for one regular TCP flow with maximum cwnd of 200 packets (throughput (Mbps) against time (s) for slow start, fast start and quick start).
The graphs overlap because the throughput values are the same at all points for all the schemes. In addition, the curves for slow start and fast start cannot be distinguished because they are plotted first and quick start is plotted last, over them.
As the maximum congestion window increases, fast start takes a shorter time to achieve the optimum throughput in comparison to slow start and quick start. This shows that fast start determines the available bandwidth faster than both slow start and quick start, which is consistent with the fewer round trips observed while setting up the TCP connection with fast start.
Even though each scheme eventually attains the optimum link throughput, the rate at which each scheme reaches it differs. Table 4.3 analyses the performance of the startup schemes with different maximum congestion windows in terms of maximum achieved throughput, average throughput, variance, standard deviation and coefficient of variance. Throughput values increase with increasing maximum cwnd. All startup schemes become less stable as congestion increases with a higher maximum cwnd. Fast start is more robust than slow start and quick start, since it achieves a higher average throughput and its COV is lower than that of slow start and quick start.
Max. cwnd   Scheme       Max. throughput   Average  Variance  Std Dev  COV
(packets)                attained (Mbps)   (Mbps)
50          Slow start   2.52              2.5      0.0009    0.03     0.012
            Quick start  2.52              2.5      0.0009    0.03     0.012
            Fast start   2.52              2.5      0.0009    0.03     0.012
            Remarks: all schemes performed at the same level; there was no congestion
            to verify how each scheme would react to congestion.
100         Slow start   5                 4.36     2.1       1.43     0.33
            Quick start  5                 4.6      0.8       0.89     0.19
            Fast start   5                 4.7      0.45      0.67     0.14
            Remarks: fast start had the highest average but the least standard deviation,
            resulting in the least coefficient of variance, hence it performs close to its
            average throughput.
200         Slow start   4.9               3.74     2.55      1.6      0.43
            Quick start  4.8               4.50     0.71      0.84     0.19
            Fast start   4.9               4.54     0.65      0.80     0.18
            Remarks: fast start had the least COV; slow start has the highest deviation
            because of halving the window after packet loss and reverting to slow start.
Table 4.3: Throughput Analysis for One Regular TCP Flow
We consider throughput the key metric for selecting the best scheme, so we use both the average throughput and the standard deviation in our analysis. We also use the coefficient of variance because we are using sample data from dissimilar schemes. A robust scheme attains throughput that does not deviate much from the average throughput value.
Based on the results of the study carried out on the various startup schemes using one TCP flow, fast start is the best performer. We therefore selected it for simulating many TCP flows, with the objective of investigating its stability and robustness with varying maximum cwnd.
4.3 Multiple TCP Flows implementing Fast start
Since the results of the single-flow simulations showed that the fast start scheme behaved best, the multiple-flow simulations are run considering only fast start.
4.3.1 RTT for Two and Four TCP Flows
Figures 4.7 and 4.8 show the RTT for different flow sizes, that is, when the tagged flow presents a smaller, equal or bigger maximum congestion window than the untagged TCP flows.
Figure 4.7: RTT for two TCP flows with fast start (RTT (s) against time (s) for smaller, equal and higher cwnd of the tagged flow).
Figure 4.8: RTT for four TCP flows with fast start (RTT (s) against time (s) for smaller, equal and higher cwnd of the tagged flow).
In the two-flow simulation, regardless of the maximum cwnd of the tagged flow, the RTT starts at the minimum value of 0.16 s and increases steadily to a peak value of 0.98 s, 0.65 s or 0.3 s when the tagged flow's cwnd is lower than, equal to or higher than that of the other flow respectively. After the peak, the RTT drops to the minimum value and the pattern repeats. A bigger flow (i.e. a flow with a higher maximum congestion window) outcompetes smaller flows and continues to send more data for a longer time before it starts experiencing competition from the smaller flows. To verify the effect of doubling the number of flows, we simulated four TCP flows with various maximum congestion window sizes. The maximum RTT value is about 0.33 s for all the flow sizes and never drops back to the minimum RTT value, as was the case in the two-flow simulation. This pattern is repeated, as shown in Figure 4.8. This means both the number of TCP flows and the maximum congestion window size affect RTT.
Table 4.4 shows the analysis of RTT for multiple flows in terms of the number of flows sharing the link, RTT average, variance, standard deviation and coefficient of variance. Generally, the average RTT increases with the number of TCP flows. The average RTT for two TCP flows with a maximum cwnd of 50 packets is higher due to the very high RTT during the first 5 s of the simulation time shown in Figure 4.7; this is attributed to the competing flow's larger maximum cwnd outcompeting the tagged flow. As the number of flows increases, bigger flows cause more congestion and experience more delay, hence higher RTT values. The COV of four flows is lower than that of two flows, which shows that an increase in the maximum congestion window size has more effect on RTT than an increase in the number of TCP flows. Overall, the performance of fast start is relatively steady for all the network settings.
Number of   Max. cwnd  RTT Avg  Variance  Std Dev  COV
TCP flows   (packets)  (s)
Two         50         0.39     0.070     0.26     0.67
            100        0.26     0.004     0.06     0.23
            150        0.24     0.003     0.05     0.21
            Remarks: the highest average RTT and COV are registered for the smaller maximum
            cwnd, followed by the equal and bigger maximum cwnd.
Four        20         0.28     0.002     0.04     0.14
            50         0.29     0.003     0.06     0.21
            80         0.30     0.004     0.06     0.20
            Remarks: COV is lower than in two flows because the maximum cwnd values used in
            four flows are smaller than those in two flows; RTT values increase with the
            number of TCP flows.
Table 4.4: RTT Analysis for Two and Four TCP Flows
Number of   Max. cwnd  Dropped  Received  Total    Loss Ratio
TCP flows   (packets)  packets  packets   packets  (× 10^-4)
Two         50         0        10182     10182    0
            100        53       12141     12194    44
            150        102      18942     19044    54
Four        20         0        12824     12824    0
            50         0        16664     16664    0
            80         30       19055     19085    16
Table 4.5: Packet Loss Details for Fast Start with Two and Four Flows using Standard TCP
4.3.2 Packet Loss Rate for Two and Four TCP Flows
This section presents the counts of received and dropped packets generated by the tagged flow
for each simulation with different maximum cwnd. We analysed the performance in terms of
loss ratio for different number of flows with various maximum cwnd. We used formula (3.7)
to compute the loss ratio. The details of the values are shown in Table 4.5.
The packet loss ratio for two flows was higher than the loss ratio for four TCP flows for all the flow sizes (smaller, equal and bigger flow). The decrease in packet loss is attributed to the maximum congestion window sizes, not the number of flows: the maximum congestion window sizes in the four-flow simulations were smaller than those in the two-flow simulations, and the buffer is sufficient to handle most packets. This implies that when the maximum congestion window size increased, the packet loss ratio increased, because the queue grows and congestion increases, leading to further delay.
Comparing the performance in terms of window sizes (smaller, equal or bigger flow), the simulations with a smaller maximum cwnd had a lower packet loss ratio than those with a larger maximum cwnd. This is because the flow with a smaller maximum cwnd sends fewer data packets than the one with a higher maximum cwnd; hence, in case of buffer overflow, the latter flow has many more packets in the queue and loses more packets.
4.3.3 Throughput for Two and Four TCP Flows
Figure 4.9: Throughput for two TCP flows with fast start (throughput (Mbps) against time (s) for smaller, equal and higher cwnd of the tagged flow).
Figure 4.10: Throughput for four TCP flows with fast start (throughput (Mbps) against time (s) for smaller, equal and higher cwnd of the tagged flow).
Figure 4.9 shows the throughput for two TCP flows when the tagged flow presents a smaller, equal or bigger maximum cwnd than the untagged flow. Figure 4.10 shows the throughput for four TCP flows when the tagged flow presents a smaller, equal or bigger maximum cwnd than the untagged flows. As the number of flows increases, the bigger flow performs better than the smaller flows, because a flow with a bigger maximum congestion window outcompetes flows with smaller ones. Besides, TCP congestion and flow control effectively sums all the windows into a single BDP, and flows with a bigger maximum cwnd contribute a bigger share of the packets in the overall TCP window.
We further analyse the throughput achieved by each flow size in terms of average throughput,
standard deviation and coefficient of variance to establish the robustness of fast start with
varying number of TCP flows and varying maximum congestion window sizes. The analysis
is presented in Table 4.6.
Number of   Max. cwnd  Max. throughput  Average  Variance  Std Dev  COV
TCP flows   (packets)  attained (Mbps)  (Mbps)
Two         50         2                1.58     0.13      0.35     0.22
            100        2.25             1.62     0.91      0.95     0.59
            200        4                3.30     0.36      0.60     0.18
            Remarks: the bigger flow gained the highest average throughput with the least
            standard deviation, hence the least coefficient of variance.
Four        20         1                0.61     0.07      0.27     0.44
            50         2.5              1.56     0.28      0.53     0.34
            80         2.75             2.38     0.1       0.32     0.14
            Remarks: the bigger flow still gained the highest throughput and the lowest COV
            compared to both the equal and smaller flows.
Table 4.6: Throughput Analysis for Two and Four TCP Flows
From Table 4.6, the number of TCP flows has less effect on performance than the maximum congestion window; that is, the variation in throughput is clearer when the maximum congestion window is changed. This conforms to the way TCP manages congestion by adding all the congestion windows from the various flows into one BDP. Hence, the bandwidth utilisation of each flow depends on its share of packets in the overall TCP window. The bigger maximum cwnd achieves the highest average throughput with the least standard deviation, hence the least coefficient of variance. In particular, whenever the number of TCP flows increases, fast start is more stable with a bigger maximum cwnd than with a smaller one, and it achieves a higher average throughput whenever its maximum cwnd is higher than those of the competing flows. Hence, the fast start mechanism is robust and can withstand varying numbers of flows and congestion windows.
Under fair sharing of the link, each flow would utilise 2.5 Mbps and 1.25 Mbps when there are two and four flows respectively. This study shows that the bandwidth utilisation of the bigger flow is well above this fair share in both the two-flow and four-flow cases, which implies that the bigger flow is aggressive and unfair in terms of bandwidth sharing.
Having seen how multiple flows behave when fast start is implemented, we next simulate the behaviour of fast start when implemented in fibonacci TCP.
4.4 Investigation of the Effect of Faster Startup of a TCP Connection on the Performance of Fibonacci TCP
This section presents the results of the fibonacci TCP simulations, which include RTT, packet loss and throughput for one, two and four fibonacci TCP flows. In the single-flow simulation, the maximum cwnd is equal to the optimal window, while in the multiple-flow simulations the maximum cwnd of the tagged flow is equal to that of the other flows. The discussion and analysis of the results are found in Sections 4.4.1, 4.4.2 and 4.4.3.
4.4.1 RTT for Fibonacci TCP
In this section we present the results and discussion of RTT.
Figure 4.11: RTT for one Fibonacci TCP flow with maximum cwnd of 100 packets (RTT (s) against time (s) for slow start and fast start).
Figure 4.12: RTT for two Fibonacci TCP flows with maximum cwnd of 100 packets (RTT (s) against time (s) for slow start and fast start).
Figure 4.13: RTT for four Fibonacci TCP flows with maximum cwnd of 50 packets (RTT (s) against time (s) for slow start and fast start).
Figures 4.11, 4.12 and 4.13 show the RTT for one TCP flow with a maximum cwnd of 100 packets, two flows with a maximum cwnd of 100 packets and four flows with a maximum cwnd of 50 packets respectively. For the two- and four-flow simulations, the RTT shown is for the case where the tagged flow has the same maximum cwnd as the untagged flows.
There is variation in the RTT value for both startup schemes throughout the simulation time. RTT also increases as the simulation progresses. This is attributed to the increasing volume of data, which leads to a bigger queue and more link congestion; queuing delay and congestion are the main contributors to RTT here. The increase in RTT is higher for slow start than for fast start. The RTT also increases with the number of fibonacci TCP flows, and the higher increase is witnessed in slow start. We further examine the effect of increasing the number of flows in terms of RTT average, variance, standard deviation (STDEV) and coefficient of variance (COV) in Table 4.7.
Based on the RTT analysis in Table 4.7, the standard deviation (STDEV) for both startup schemes increases with the number of fibonacci TCP flows. The average RTT for fast start is lower than that of slow start and, although its COV is sometimes higher, its RTT varies around values much lower than those of slow start. This shows that fibonacci TCP is more stable with fast start than with slow start. The higher COV is due to the aggressive manner in which fast start saturates the link with data.
Max. cwnd     Scheme      RTT Avg  Variance  Std Dev  COV
(packets)                 (s)
100           Slow start  0.654    0.064     0.243    0.372
(one flow)    Fast start  0.586    0.002     0.040    0.068
              Comparison: fast start has the least STDEV, resulting in a lower COV than slow
              start; the exponential increase has more effect on slow start when the maximum
              cwnd of only one TCP flow constitutes the overall TCP window.
100           Slow start  1.855    0.027     0.165    0.089
(two flows)   Fast start  1.535    0.096     0.310    0.202
              Comparison: COV is higher than in the single flow, reflecting the increase in
              STDEV; fast start has a lower RTT average but a higher STDEV, leading to a COV
              higher than that of slow start.
50            Slow start  1.835    0.963     0.981    0.535
(four flows)  Fast start  1.509    0.767     0.876    0.580
              Comparison: as the number of flows increases, there is a bigger increase in
              STDEV than in average RTT, leading to a much higher COV; fast start has a
              higher COV but maintains a lower RTT average than slow start.
Table 4.7: Comparative Analysis of RTT for One, Two and Four Fibonacci TCP Flows
4.4.2 Packet Loss for Fibonacci TCP
Table 4.8 shows the maximum cwnd, the numbers of dropped and received packets, the total number of packets and the loss ratio. A single fibonacci TCP flow has fewer packet drops than multiple flows. Comparing packet drops in the multiple-flow simulations, two flows have more packet drops than four flows, because the maximum cwnd in the two-flow simulation is higher than in the four-flow simulation. This is attributed to the fact that a flow with a smaller maximum cwnd contributes fewer packets to the queue; hence, as soon as the buffer overflows and packets are dropped, the smaller flow loses fewer packets than the flow with the bigger maximum cwnd. Further, at the TCP startup phase it is the router's buffer that determines the action taken on bursty traffic. A higher maximum cwnd can produce a more abrupt increase in traffic, and routers that are biased against bursty traffic drop packets as soon as such an abrupt increase occurs; this can also serve as a mechanism to prevent certain network security risks.
Based on the analysis in Table 4.8, the packet loss ratio is lower for one TCP connection than for multiple fibonacci TCP flows, for both slow start and fast start, because there is no contention for resources when there is only one fibonacci TCP flow. Whenever there are multiple fibonacci TCP flows, the packet loss ratio is higher for the flow with the bigger maximum cwnd. In terms of packet loss ratio, the maximum cwnd has more effect than the number of fibonacci TCP flows. The high packet loss ratio in slow start is due to the exponential increase of its cwnd. In all simulations, fast start maintains a lower packet loss ratio than slow start.
Max. cwnd     Scheme      Dropped  Received  Total    Loss Ratio
(packets)                 packets  packets   packets  (× 10^-4)
100           Slow start  106      10577     10683    99
(one flow)    Fast start  114      12549     12663    90
              Remarks: there is no competing flow; fast start has a lower loss ratio than
              slow start.
100           Slow start  834      6836      7670     1087
(two flows)   Fast start  547      6316      6863     797
              Remarks: packet loss ratio is higher than for the single flow because there is
              competition and the maximum cwnd is doubled.
50            Slow start  614      8911      9525     644
(four flows)  Fast start  380      7424      7804     486
              Remarks: packet loss ratio is lower than for two flows because the maximum cwnd
              for four flows is lower than in two flows; fast start maintains a lower packet
              loss ratio than slow start in all simulations.
Table 4.8: Packet Loss for One and Multiple Fibonacci TCP Flows
4.4.3 Throughput for Fibonacci TCP
Figure 4.14: Throughput for one Fibonacci TCP flow with maximum cwnd of 100 packets (throughput (Mbps) against time (s) for fast start and slow start).
Figure 4.15: Throughput for two Fibonacci TCP flows with maximum cwnd of 100 packets (throughput (Mbps) against time (s) for fast start and slow start).
Figure 4.16: Throughput for four Fibonacci TCP flows with maximum cwnd of 50 packets (throughput (Mbps) against time (s) for fast start and slow start).
Figure 4.14 shows the throughput for one fibonacci TCP flow. There is more throughput fluctuation in slow start than in fast start, as shown by the smoother curve for fast start.
Figure 4.15 shows the throughput for two fibonacci TCP flows. Throughput decreases as the number of flows increases, because the tagged and untagged flows share the same bottleneck link and no single flow can take up the whole link. Figure 4.16 shows a further decrease in throughput when the number of fibonacci TCP flows is doubled to four, coupled with a decrease in the maximum cwnd.
We analyse the throughput and present the values in Table 4.9 for one and multiple fibonacci TCP flows in terms of maximum achieved throughput, average throughput, variance, standard deviation and coefficient of variance (COV).
Based on the analysis in Table 4.9, when there is only one TCP flow both startup schemes achieve the optimum throughput, but the throughput values decrease as the simulation time increases. The maximum achievable throughput decreases as the number of fibonacci TCP flows increases. A decrease in throughput is also observed with a decrease in the advertised congestion window, represented by the lower maximum cwnd in the four fibonacci TCP flows. Fast start performs better than slow start in terms of achieved average throughput. Better throughput is achieved when the maximum cwnd is equal to the optimum window size in the presence of a single fibonacci TCP flow, because there is no competition and the maximum cwnd is not restricted to a value lower than the optimum TCP window. Further, whenever the number of TCP flows increases, the bigger flow outcompetes the smaller flows and achieves a higher average throughput. Among TCP flows with various maximum cwnd, the higher maximum cwnd is more aggressive in gaining bandwidth and yields a higher average throughput than the smaller maximum cwnd. In all the simulations, fast start performs better than slow start in terms of throughput; hence it is more robust and a better startup scheme for TCP connections implementing a high-speed TCP such as fibonacci TCP.
Max. cwnd     Scheme      Max. throughput  Average  Variance  STDEV  COV
(packets)                 (Mbps)           (Mbps)
100           Slow start  5                4.7      0.140     0.37   0.08
(one flow)    Fast start  5                4.9      0.002     0.05   0.01
              Remarks: each scheme attains the optimum throughput; fast start has a higher
              average throughput and lower STDEV, which leads to a lower COV than slow start.
100           Slow start  2.6              2.3      0.040     0.20   0.09
(two flows)   Fast start  2.8              2.5      0.027     0.16   0.06
              Remarks: throughput for both schemes is lower than the optimal value; fast start
              attains a higher average throughput than slow start; COV is higher than for one
              flow.
50            Slow start  1.6              1.2      0.050     0.23   0.19
(four flows)  Fast start  1.6              1.3      0.021     0.15   0.12
              Remarks: both schemes attain the same maximum throughput; fast start maintains a
              higher average throughput than slow start; COV increases with the number of
              fibonacci TCP flows, and the robust nature of fast start gives it a lower COV
              than slow start, hence it is more stable.
Table 4.9: Throughput Analysis for One, Two and Four Fibonacci TCP Flows
4.5 Discussion of the Fibonacci-Faster Start TCP Results
Considering the analysis in Section 4.4.3, the performance exhibited by Fibonacci TCP when it implements fast start could be due to a number of reasons.
4.5.1 Fibonacci TCP
In the CA phase, fibonacci TCP reduces the congestion window to 0.618034 times the current TCP window [7] and uses the re-start window value (set by fast start) as the current cwnd after a packet loss, so it continues transmitting a high volume of data without entering the slow start phase. This ability of fibonacci TCP to keep the cwnd high after a loss results in high throughput and hence good link utilisation.
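In equation form, and using the halving of regular TCP noted in the remarks of Table 4.3 for comparison, the window reduction after a packet loss is:

    \mathrm{cwnd}_{\mathrm{new}} =
    \begin{cases}
      0.618034 \times \mathrm{cwnd}_{\mathrm{current}}, & \text{Fibonacci TCP [7]}\\
      0.5 \times \mathrm{cwnd}_{\mathrm{current}}, & \text{regular TCP}
    \end{cases}

so after a loss fibonacci TCP resumes from a larger window than regular TCP would.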
4.5.2 Fast Start
The TCP implementation uses fast start in two ways, with two different categories of windows. Fast start is used to start a connection with a high initial window, and to re-start transmission after a packet loss or a long idle period using a high re-start window; it does not fall back to the loss window as the slow start mechanism does. Hence, fast start sustains a high data transmission rate.
Fast start has its own mechanism for setting the optimum initial window, so the window-size changes specified in this study affect only the value of the maximum cwnd. During the congestion avoidance phase, TCP implementing fast start sets the re-start window to the better of the value used for the maximum cwnd and the current window. Hence, using a large value for the re-start window alone would not increase the congestion window, but fibonacci TCP does so through its congestion avoidance behaviour.
58
M.Sc_Dissertation_Bongomin
M.Sc_Dissertation_Bongomin
M.Sc_Dissertation_Bongomin
M.Sc_Dissertation_Bongomin
M.Sc_Dissertation_Bongomin
M.Sc_Dissertation_Bongomin
M.Sc_Dissertation_Bongomin
M.Sc_Dissertation_Bongomin
M.Sc_Dissertation_Bongomin

More Related Content

What's hot

ID3 Algorithm - Reference Manual
ID3 Algorithm - Reference ManualID3 Algorithm - Reference Manual
ID3 Algorithm - Reference ManualMichel Alves
 
Perl <b>5 Tutorial</b>, First Edition
Perl <b>5 Tutorial</b>, First EditionPerl <b>5 Tutorial</b>, First Edition
Perl <b>5 Tutorial</b>, First Editiontutorialsruby
 
Machine learning-a-z-q-a
Machine learning-a-z-q-aMachine learning-a-z-q-a
Machine learning-a-z-q-aLokesh Modem
 
Embedded linux barco-20121001
Embedded linux barco-20121001Embedded linux barco-20121001
Embedded linux barco-20121001Marc Leeman
 
Implementation of coarse-grain coherence tracking support in ring-based multi...
Implementation of coarse-grain coherence tracking support in ring-based multi...Implementation of coarse-grain coherence tracking support in ring-based multi...
Implementation of coarse-grain coherence tracking support in ring-based multi...ed271828
 
Windump
WindumpWindump
Windumpjk847
 
Maxime Javaux - Automated spike analysis
Maxime Javaux - Automated spike analysisMaxime Javaux - Automated spike analysis
Maxime Javaux - Automated spike analysisMaxime Javaux
 
DCMPL_PROJECT_BOOK_SHAY_ITAMAR
DCMPL_PROJECT_BOOK_SHAY_ITAMARDCMPL_PROJECT_BOOK_SHAY_ITAMAR
DCMPL_PROJECT_BOOK_SHAY_ITAMARshay rubinstain
 
Wireless m-bus-quick-start-guide
Wireless m-bus-quick-start-guideWireless m-bus-quick-start-guide
Wireless m-bus-quick-start-guide봉조 김
 
Modbus protocol reference guide
Modbus protocol reference guideModbus protocol reference guide
Modbus protocol reference guidePanggih Supraja
 
Circuitikzmanual (1)
Circuitikzmanual (1)Circuitikzmanual (1)
Circuitikzmanual (1)Geraldo Silva
 

What's hot (16)

MS_Thesis
MS_ThesisMS_Thesis
MS_Thesis
 
ID3 Algorithm - Reference Manual
ID3 Algorithm - Reference ManualID3 Algorithm - Reference Manual
ID3 Algorithm - Reference Manual
 
Perl 5 guide
Perl 5 guidePerl 5 guide
Perl 5 guide
 
Perl <b>5 Tutorial</b>, First Edition
Perl <b>5 Tutorial</b>, First EditionPerl <b>5 Tutorial</b>, First Edition
Perl <b>5 Tutorial</b>, First Edition
 
Machine learning-a-z-q-a
Machine learning-a-z-q-aMachine learning-a-z-q-a
Machine learning-a-z-q-a
 
Embedded linux barco-20121001
Embedded linux barco-20121001Embedded linux barco-20121001
Embedded linux barco-20121001
 
Implementation of coarse-grain coherence tracking support in ring-based multi...
Implementation of coarse-grain coherence tracking support in ring-based multi...Implementation of coarse-grain coherence tracking support in ring-based multi...
Implementation of coarse-grain coherence tracking support in ring-based multi...
 
Windump
WindumpWindump
Windump
 
CPanel User Guide
CPanel User GuideCPanel User Guide
CPanel User Guide
 
MS-Thesis
MS-ThesisMS-Thesis
MS-Thesis
 
Manual
ManualManual
Manual
 
Maxime Javaux - Automated spike analysis
Maxime Javaux - Automated spike analysisMaxime Javaux - Automated spike analysis
Maxime Javaux - Automated spike analysis
 
DCMPL_PROJECT_BOOK_SHAY_ITAMAR
DCMPL_PROJECT_BOOK_SHAY_ITAMARDCMPL_PROJECT_BOOK_SHAY_ITAMAR
DCMPL_PROJECT_BOOK_SHAY_ITAMAR
 
Wireless m-bus-quick-start-guide
Wireless m-bus-quick-start-guideWireless m-bus-quick-start-guide
Wireless m-bus-quick-start-guide
 
Modbus protocol reference guide
Modbus protocol reference guideModbus protocol reference guide
Modbus protocol reference guide
 
Circuitikzmanual (1)
Circuitikzmanual (1)Circuitikzmanual (1)
Circuitikzmanual (1)
 

Similar to M.Sc_Dissertation_Bongomin

Extending TFWC towards Higher Throughput
Extending TFWC towards Higher ThroughputExtending TFWC towards Higher Throughput
Extending TFWC towards Higher Throughputstucon
 
LTE_from_Theory_to_Practise.pdf
LTE_from_Theory_to_Practise.pdfLTE_from_Theory_to_Practise.pdf
LTE_from_Theory_to_Practise.pdfATEC3
 
Meterpreter in Metasploit User Guide
Meterpreter in Metasploit User GuideMeterpreter in Metasploit User Guide
Meterpreter in Metasploit User GuideKhairi Aiman
 
Improved kernel based port-knocking in linux
Improved kernel based port-knocking in linuxImproved kernel based port-knocking in linux
Improved kernel based port-knocking in linuxdinomasch
 
A Push-pull based Application Multicast Layer for P2P live video streaming.pdf
A Push-pull based Application Multicast Layer for P2P live video streaming.pdfA Push-pull based Application Multicast Layer for P2P live video streaming.pdf
A Push-pull based Application Multicast Layer for P2P live video streaming.pdfNuioKila
 
Distributed Decision Tree Learning for Mining Big Data Streams
Distributed Decision Tree Learning for Mining Big Data StreamsDistributed Decision Tree Learning for Mining Big Data Streams
Distributed Decision Tree Learning for Mining Big Data StreamsArinto Murdopo
 
VoLTE and ViLTE.pdf
VoLTE and ViLTE.pdfVoLTE and ViLTE.pdf
VoLTE and ViLTE.pdfAsitSwain5
 
Uni v e r si t ei t
Uni v e r si t ei tUni v e r si t ei t
Uni v e r si t ei tAnandhu Sp
 
A Study of Traffic Management Detection Methods & Tools
A Study of Traffic Management Detection Methods & ToolsA Study of Traffic Management Detection Methods & Tools
A Study of Traffic Management Detection Methods & ToolsMartin Geddes
 
Ibm flex system and pure flex system network implementation with cisco systems
Ibm flex system and pure flex system network implementation with cisco systemsIbm flex system and pure flex system network implementation with cisco systems
Ibm flex system and pure flex system network implementation with cisco systemsEdgar Jara
 
ComputerNetworks.pdf
ComputerNetworks.pdfComputerNetworks.pdf
ComputerNetworks.pdfMeetMiyatra
 
Performance assessment of the MASQUE extension for proxying scenarios in the ...
Performance assessment of the MASQUE extension for proxying scenarios in the ...Performance assessment of the MASQUE extension for proxying scenarios in the ...
Performance assessment of the MASQUE extension for proxying scenarios in the ...AlessandroNuzzi1
 
Master Thesis - A Distributed Algorithm for Stateless Load Balancing
Master Thesis - A Distributed Algorithm for Stateless Load BalancingMaster Thesis - A Distributed Algorithm for Stateless Load Balancing
Master Thesis - A Distributed Algorithm for Stateless Load BalancingAndrea Tino
 
Challenges in VoIP Systems - Mostafa Ahmed Mostafa El Beheiry - First Draft F...
Challenges in VoIP Systems - Mostafa Ahmed Mostafa El Beheiry - First Draft F...Challenges in VoIP Systems - Mostafa Ahmed Mostafa El Beheiry - First Draft F...
Challenges in VoIP Systems - Mostafa Ahmed Mostafa El Beheiry - First Draft F...Mostafa El-Beheiry
 
BE Project Final Report on IVRS
BE Project Final Report on IVRSBE Project Final Report on IVRS
BE Project Final Report on IVRSAbhishek Nadkarni
 

Similar to M.Sc_Dissertation_Bongomin (20)

Report
ReportReport
Report
 
Extending TFWC towards Higher Throughput
Extending TFWC towards Higher ThroughputExtending TFWC towards Higher Throughput
Extending TFWC towards Higher Throughput
 
LTE_from_Theory_to_Practise.pdf
LTE_from_Theory_to_Practise.pdfLTE_from_Theory_to_Practise.pdf
LTE_from_Theory_to_Practise.pdf
 
Meterpreter in Metasploit User Guide
Meterpreter in Metasploit User GuideMeterpreter in Metasploit User Guide
Meterpreter in Metasploit User Guide
 
Improved kernel based port-knocking in linux
Improved kernel based port-knocking in linuxImproved kernel based port-knocking in linux
Improved kernel based port-knocking in linux
 
A Push-pull based Application Multicast Layer for P2P live video streaming.pdf
A Push-pull based Application Multicast Layer for P2P live video streaming.pdfA Push-pull based Application Multicast Layer for P2P live video streaming.pdf
A Push-pull based Application Multicast Layer for P2P live video streaming.pdf
 
trex_astf.pdf
trex_astf.pdftrex_astf.pdf
trex_astf.pdf
 
Distributed Decision Tree Learning for Mining Big Data Streams
Distributed Decision Tree Learning for Mining Big Data StreamsDistributed Decision Tree Learning for Mining Big Data Streams
Distributed Decision Tree Learning for Mining Big Data Streams
 
VoLTE and ViLTE.pdf
VoLTE and ViLTE.pdfVoLTE and ViLTE.pdf
VoLTE and ViLTE.pdf
 
Liebman_Thesis.pdf
Liebman_Thesis.pdfLiebman_Thesis.pdf
Liebman_Thesis.pdf
 
Uni v e r si t ei t
Uni v e r si t ei tUni v e r si t ei t
Uni v e r si t ei t
 
A Study of Traffic Management Detection Methods & Tools
A Study of Traffic Management Detection Methods & ToolsA Study of Traffic Management Detection Methods & Tools
A Study of Traffic Management Detection Methods & Tools
 
Ibm flex system and pure flex system network implementation with cisco systems
Ibm flex system and pure flex system network implementation with cisco systemsIbm flex system and pure flex system network implementation with cisco systems
Ibm flex system and pure flex system network implementation with cisco systems
 
ComputerNetworks.pdf
ComputerNetworks.pdfComputerNetworks.pdf
ComputerNetworks.pdf
 
Performance assessment of the MASQUE extension for proxying scenarios in the ...
Performance assessment of the MASQUE extension for proxying scenarios in the ...Performance assessment of the MASQUE extension for proxying scenarios in the ...
Performance assessment of the MASQUE extension for proxying scenarios in the ...
 
Master Thesis - A Distributed Algorithm for Stateless Load Balancing
Master Thesis - A Distributed Algorithm for Stateless Load BalancingMaster Thesis - A Distributed Algorithm for Stateless Load Balancing
Master Thesis - A Distributed Algorithm for Stateless Load Balancing
 
Challenges in VoIP Systems - Mostafa Ahmed Mostafa El Beheiry - First Draft F...
Challenges in VoIP Systems - Mostafa Ahmed Mostafa El Beheiry - First Draft F...Challenges in VoIP Systems - Mostafa Ahmed Mostafa El Beheiry - First Draft F...
Challenges in VoIP Systems - Mostafa Ahmed Mostafa El Beheiry - First Draft F...
 
BE Project Final Report on IVRS
BE Project Final Report on IVRSBE Project Final Report on IVRS
BE Project Final Report on IVRS
 
Deploying IBM Flex System into a Cisco Network
Deploying IBM Flex System into a Cisco NetworkDeploying IBM Flex System into a Cisco Network
Deploying IBM Flex System into a Cisco Network
 
Ns doc
Ns docNs doc
Ns doc
 

M.Sc_Dissertation_Bongomin

  • 1. FASTER STARTUP PHASE OF A TCP CONNECTION IN RELATION TO FIBONACCI TCP by Bongomin Charles Anyek B.Sc (MUK) 2007/HD18/9388U Department of Networks School of Computing and Informatics Technology College of Computing and Information Sciences Makerere University Email: cbongoley@gmail.com, Mob: +256-782-274116 A Project Report Submitted to the College of Computing and Information Sciences in Partial Fulfillment of the Requirements for the Award of Master of Science in Data Communication and Software Engineering Degree of Makerere University Option: Network and System Administration November 2011
  • 2. Declaration I, Bongomin Charles Anyek, do hereby declare that this project report is original and has not been published and/or submitted for any other degree award to this or any other universities before. Signature.........................................Date.......................... Bongomin Charles Anyek B.Sc (CSC,ZOO) Department of Networks School of Computing and Informatics Technology College of Computing and Information Sciences Makerere University i
  • 3. Approval The project report titled ”Faster Startup Phase of TCP Connection in Relation to Fibonacci TCP,” has been submitted for examination with my approval. Signature...........................................Date............................................ Dr. Julianne Sansa Otim, PhD. Supervisor School of Computing and Informatics Technology College of Computing and Information Sciences Makerere University ii
  • 4. Dedication This work is dedicated to my mother, Mary Anyek for her commitment towards educating me, despite her financial constraints. My wife Susan Amuge and daughter Charlotte Agenorwot and Aunt Irene Amal Yubu who missed me alot during the course of the study. iii
  • 5. Acknowlegement Sincere gratitude and heartfelt thanks go to my Supervisor, Dr. Dr. Julianne Sansa Otim for her valuable time, flexibility, encouragement, guidance and supervision during the study. Without her, this book would not have been what it is. Many thanks for attending to me. My special acknowledgement goes to my workmates at Centenary Bank. Martin Mugisha, Manager Credit Services and Susan Itamba (Credit Officer) for understanding my busy sched- ules at the University and, to Bernadette Nakayiza (ATM Administrator) who accomodated me and shielded me particularly when I had just joined BT Division. To Abdul Sserwada who offered some assistance to me on the use of ns-2 and MatLab. To M.Sc DCSE class of 2007 and Fote Antonia from Cameroom, it is great knowing you. Finally, to the Almighty God for His spiritual guidance, love, blessing and giving me hope. iv
  • 6. Contents Declaration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . i Approval . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ii Dedication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . iii Acknowledgement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . iv Table of Contents . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . vii List of Figures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . viii List of Tables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ix List of Acronyms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . x 1 1.0 Introduction 1 1.1 Background of the Study . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1 1.2 Statement of the Problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2 1.3 Objectives of the Study . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3 1.3.1 General Objectives of the Study . . . . . . . . . . . . . . . . . . . . . . 3 1.3.2 Specific Objectives of the Study . . . . . . . . . . . . . . . . . . . . . . 4 1.4 Significance of the Study . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4 1.5 Scope of the Study . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4 2 Literature Review 6 2.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6 2.2 TCP/IP-Based Transport Layer Protocol . . . . . . . . . . . . . . . . . . . . . 6 2.3 Phases of TCP Connection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7 2.4 Categories and Properties of TCP’s Startup Schemes . . . . . . . . . . . . . . 8 v
2.4.1 Slow-Start  9
2.4.2 Swift-Start  10
2.4.3 Paced-Start (PaSt)  11
2.4.4 Quick Start  12
2.4.5 The Congestion Manager (CM)  13
2.4.6 TCP Fast Start  13
2.4.7 Related Work  16
2.5 Effects of TCP's Startup Algorithm on Flow Throughput  19
2.6 Advantages of Increasing TCP's Initial Windows Size  19
2.7 TCP's Congestion Avoidance (CA) Algorithms  20
2.8 Fibonacci TCP  21
2.9 Research Question  22

3 Methodology  23
3.1 Introduction  23
3.2 Simulation Tool  23
3.3 Selection of the Faster Startup Scheme  24
3.4 Simulation of Chosen Startup Schemes  25
3.4.1 Topology  25
3.4.2 Analytical Metrics  27
3.5 Details of Experiments Conducted  29
3.5.1 Simulation of One TCP Connection  29
3.5.2 Simulation of Many TCP Connections  30
3.5.3 Simulation of Fibonacci TCP with Selected Faster Startup Scheme and Slow Start  32

4 Results  33
4.1 Introduction  33
4.2 RTT, Packet Loss Ratio and Throughput for One TCP Flow  33
4.2.1 RTT for Single TCP Flow  33
4.2.2 Packet Loss for One Regular TCP Flow  38
4.2.3 Throughput from Single Flow  39
4.3 Multiple TCP Flows implementing Fast start  43
4.3.1 RTT for Two and Four TCP Flows  43
4.3.2 Packet Loss Rate for Two and Four TCP Flows  46
4.3.3 Throughput for Two and Four TCP Flows  47
4.4 Investigation of the effect of faster startup of TCP Connection on the Performance of Fibonacci TCP  49
4.4.1 RTT for Fibonacci TCP  50
4.4.2 Packet Loss for Fibonacci TCP  53
4.4.3 Throughput for Fibonacci TCP  55
4.5 Discussion of the Fibonacci-Faster Start TCP Results  58
4.5.1 Fibonacci TCP  58
4.5.2 Fast Start  58
4.5.3 Buffer Size and Queue Management  59

5 Conclusion and Recommendation  60
5.1 Summary of Results  60
5.2 Recommendation  61
5.3 Proposed Areas for Further Study  62
List of Figures

3.1 Dumbbell Topology used for the Experiments  25
4.1 RTT for one regular TCP Flow with maximum cwnd of 50 pkts  34
4.2 RTT for one regular TCP Flow with maximum cwnd of 100 pkts  34
4.3 RTT for one regular TCP Flow with maximum cwnd of 200 pkts  34
4.4 Throughput for one Regular TCP Flow with maximum cwnd of 50 packets  40
4.5 Throughput for one Regular TCP Flow with maximum cwnd of 100 packets  40
4.6 Throughput for one Regular TCP Flow with maximum cwnd of 200 packets  40
4.7 RTT for 2 TCP flows with Fast start  43
4.8 RTT for 4 TCP flows with Fast start  43
4.9 Throughput for Two TCP flows with Fast start  47
4.10 Throughput for Four TCP flows with Fast start  47
4.11 RTT for One Fibonacci TCP flow with 100 packets  50
4.12 RTT for Two Fibonacci TCP flows with 100 packets  50
4.13 RTT for Four Fibonacci TCP flows with 50 packets  50
4.14 Throughput for one Fibonacci TCP flow with maximum cwnd of 100 packets  55
4.15 Throughput for two Fibonacci TCP flows with maximum cwnd of 100 packets  55
4.16 Throughput for four Fibonacci TCP flows with maximum cwnd of 50 packets  55
List of Tables

2.1 Comparison of TCP Startup Schemes  15
3.1 Maximum cwnd for One (1) TCP Flow Simulation  30
3.2 Maximum Window for Two (2) TCP Flows Simulation  31
3.3 Maximum cwnd for Four (4) TCP Flows Simulation  31
4.1 Comparative Analysis of RTT for One Regular TCP Flow  37
4.2 Packet Loss Details for one Regular TCP Flow  39
4.3 Throughput Analysis for One Regular TCP Flow  42
4.4 RTT Analysis for Two and Four TCP Flows  45
4.5 Packet Loss Details for Fast Start with two and four flows using standard TCP  46
4.6 Throughput Analysis for Two and Four TCP Flows  48
4.7 Comparative analysis of RTT for One Fibonacci TCP Flow  52
4.8 Packet Loss for one and Multiple Fibonacci TCP Flows  54
4.9 Throughput Analysis for One, two and four Fibonacci TCP Flows  57
List of Acronyms

ACK  Acknowledgment
BDP  Bandwidth Delay Product
CA  Congestion Avoidance
CE  Capacity Estimation
cwnd  Congestion Window
DACK  Delayed Acknowledgement
DHCP  Dynamic Host Configuration Protocol
DOS  Denial of Service attack
FTP  File Transfer Protocol
IP  Internet Protocol
IPD  Inter-Packet Delay
LFN  Long Fat Network (a network with very high BDP)
MSS  Maximum Segment Size
ns-2  Network Simulator Version 2
PaSt  Paced Start
rcwnd  Receiver Advertised Congestion Window
RTT  Round Trip Time
RWIN  Receiver Window
ssthresh  Slow Start Threshold
TCP  Transmission Control Protocol
TCP-FS  TCP Fast Start (TCP with fast start)
TFTP  Trivial File Transfer Protocol
UDP  User Datagram Protocol
Abstract

The performance of the Internet is heavily influenced by the behaviour of TCP, because TCP is the most commonly used transport protocol. TCP performance deteriorates right from the startup phase of a connection. The optimum initial window size for sending data is determined during the slow start phase, while the congestion avoidance phase manages the steady behaviour of the TCP connection. Slow start performs poorly in all networks, including wireless networks and the high speed networks referred to as Long Fat Networks (LFN), because it is very slow at connection setup and its exponential increase leads to a higher packet loss rate. Numerous faster startup schemes have been developed to overcome the shortfalls of slow start and improve performance during TCP connection setup, but regular TCP itself has never been efficient. Fibonacci TCP was developed to address the disadvantages of regular TCP. However, how these faster startup schemes impact Fibonacci TCP was not well known. Furthermore, in an attempt to be effective, each startup scheme has its own way of determining the initial window during the setup of a TCP connection.

In this study we explored and evaluated the performance of slow start and two other faster startup schemes, namely quick start and fast start, and chose the best performing startup scheme. The results show that fast start performs better than both slow start and quick start. We then evaluated the robustness of fast start under different network settings. In seeking to understand how faster startup impacts high speed TCP, we simulated Fibonacci TCP with fast start as well as Fibonacci TCP with slow start and compared the performance. Our main interest in the study is not the startup itself but how faster startup influences the performance of Fibonacci TCP.

The study shows that fast start is a better performer than slow start. Hence, fast start influences Fibonacci TCP positively, since it boosts throughput during both the startup phase and the CA phase. We also conclude that using a faster startup scheme together with a high speed TCP improves the performance of TCP during both the startup phase and the CA phase in LFNs.
Chapter 1

1.0 Introduction

Advances in communication technology and the proliferation of processes that use little network resources in distributed systems [1] are making the Internet a common choice for communication. To date, TCP is the main transport protocol responsible for the transmission of Internet traffic. The main objectives of TCP are to adapt the transmission rate of packets to the available bandwidth, to avoid congestion in the network and to create a reliable connection. For example, ACKs regulate the transmission rate of TCP by ensuring that packets can be transmitted only when the previous packets have been acknowledged (or have left the network), and they provide connection reliability by carrying the information the sender needs in order to retransmit lost packets.

1.1 Background of the Study

As the network size increases, congestion builds up in the entire network system due to contention by in-flight network packets from users or processes on IP nodes, which might cause a deadlock. It is also noted that as the network system gets congested, the delay in the system increases [2]. The latency/delay degrades the overall performance, especially in the absence of proper congestion control management, because traffic congestion is influenced mainly by the behaviour of the congestion avoidance algorithms. A good understanding of the relationship between congestion and delay is essential for designing
effective congestion control algorithms. Some researchers mentioned congestion [3] as a probable cause of collapse of the Internet, besides other causes such as Denial of Service (DOS) attacks [4], [5]. Some of the causes of congestion and delay are a result of the algorithms used (or not used) in TCP. For instance, explicit feedback that involves Optimistic Acknowledgment (opt-ack) [4] can be used by non-legitimate users to launch a DOS attack, yet opt-ack was meant to improve end-to-end performance. In situations where an alternative TCP startup scheme such as Swift-Start [6] is used, such an attack would not take place.

TCP congestion control algorithms can be integrated with active measurement and active packet queue management to prevent packet loss due to buffer overflow at the communicating edge peers. This integration during the startup phase of a TCP connection creates an opportunity for early congestion notification to the sender, so that the transmission rate is reduced before the queue overflows and packet loss is sustained.

Much as several researchers focused on faster startup schemes, other studies proposed the use of CA algorithms such as Fibonacci TCP [7] and Exponential TCP [8] to achieve good performance over the whole TCP session in high speed networks. For instance, Swift-Start has a shorter startup period compared to the long startup period experienced by the Slow-Start [9] scheme. Swift-Start therefore enables full utilization of a network path with a large bandwidth delay product (BDP) without causing degradation in user-perceived performance. Despite the numerous congestion control and congestion avoidance algorithms, performance still remains an issue in TCP implementations, with varying percentages of bandwidth usage, packet loss rates and recovery mechanisms.

1.2 Statement of the Problem

Traffic dynamics in the Internet are heavily influenced by the behaviour of TCP since it is the most commonly used transport protocol. A TCP connection has two phases, namely
the slow start and congestion avoidance phases [10], [11]. The slow start phase determines the optimal window size, while the congestion avoidance phase manages the steady behaviour of the TCP connection under a condition of minimum packet loss.

Slow-start does not perform well in wireless [12], [6], satellite [13] and high speed networks referred to as Long Fat Networks (LFN) [14]. This is because slow-start is very slow at connection setup, causing an unnecessarily long startup, and its exponential increase characteristic leads to a very large window towards the end of the slow start phase. This can easily cause interruption to other flows and high packet loss within one congestion window.

Many researchers have proposed schemes to address the slow startup problems of TCP mentioned above. These include TCP Fast Start [15], swift-start [6], [16], the Congestion Manager [17], Quick start [4], [5], [18], Paced-Start [10] and SARBE [19]. These schemes vary in the way in which the initial window size is chosen. However, there has been no clear indication of how the startup phase of a TCP connection affects the consequent data transmission during the congestion avoidance phase. In this project, the behaviour of a TCP connection with and without a faster startup scheme in relation to the congestion avoidance phase was studied. Of particular interest was the impact of faster startup on Fibonacci TCP [7] in terms of achieved throughput. A review of Fibonacci TCP is given in Section 2.8.

1.3 Objectives of the Study

1.3.1 General Objectives of the Study

To determine the relationship between the faster startup and the congestion avoidance phase in TCP connections, with emphasis on high speed Fibonacci TCP.
1.3.2 Specific Objectives of the Study

i. To analyze some of TCP's proposed faster startup schemes using one flow and choose the most effective one.

ii. To use the chosen faster startup scheme to initiate several TCP connections under various network settings in terms of topology, number of flows and flow sizes, with particular interest in studying the robustness of this scheme.

iii. To investigate the performance of the congestion avoidance algorithm, especially Fibonacci TCP, when the faster startup scheme is used, in comparison to its performance when slow-start is used.

1.4 Significance of the Study

The existing startup schemes of TCP connections that aid estimation of the available bandwidth do not show clear impacts on the algorithms used in the congestion avoidance phase. That is, after the transition from the startup phase of a TCP connection, the impact of a given initial window size on the congestion avoidance mode is not well known. The findings from the study show the performance of TCP when each startup scheme is implemented independently to set up a TCP connection. The study has made it possible to predict the performance of TCP with increases in initial window size and number of TCP flows. This makes it possible to choose between slow start and a faster startup scheme for setting up a TCP connection if a high speed CA algorithm such as Fibonacci TCP is to be used in network communication.

1.5 Scope of the Study

The study was limited to simulation of TCP connections using standard TCP and Fibonacci TCP, each implementing both slow start and faster startup schemes independently, to determine the overall performance in terms of RTT, packet loss and throughput. The study does
not attempt to replace any TCP startup scheme or any TCP congestion control used in the CA phase.
Chapter 2

Literature Review

2.1 Introduction

In this chapter, we give a brief account of transport layer protocols, the phases of a TCP connection, TCP's algorithms, TCP's startup schemes, the relationship between the startup algorithm and flow throughput, and the advantages of a larger initial window.

2.2 TCP/IP-Based Transport Layer Protocol

The fundamental and significant components of the Transmission Control Protocol (TCP) that support connection-oriented services include the TCP flow and congestion control algorithms. These flow and congestion control algorithms by necessity rely on remote feedback to determine the rate at which packets should be sent, in either cooperative or non-cooperative environments [21], using active measurement as exemplified in Section 2.4. The feedback comes either from the network as available bandwidth or directly from the receiver as a positive or negative ACK.

Other than TCP, the transport layer may also use the User Datagram Protocol (UDP) in connectionless services such as DHCP, TFTP, traditional VoIP [12], etc. When UDP is used, congestion control algorithms implicitly assume that the remote entity generated correct
feedback. UDP degrades performance and leads to low throughput since it does not have features for congestion management. Since most Internet traffic uses TCP, Section 2.3 elaborates further on TCP.

2.3 Phases of TCP Connection

The TCP phases are categorized into the slow start and congestion avoidance phases. The slow startup phase is used to determine the optimal window size using schemes such as [9], [5], while the congestion avoidance phase maintains the steady behaviour of TCP under a condition of minimum packet loss using congestion avoidance algorithms such as Fibonacci TCP [7]. In [11] the TCP phases have been categorized into slow start, window recalculation and constant phases, explained as follows: (i) the slow-start phase determines the optimal window size and establishes the self-clocking system; (ii) the window-recalculation phase is when the maximum window size is reduced to the minimum window by multiplicative decrease (halving the maximum window size); and (iii) the constant phase maintains a threshold value to ensure minimum packet loss.

During an existing TCP connection, the sender maintains three windows [9], each playing a particular role. These windows are the receiver advertised congestion window (rcwnd), the congestion window (cwnd) and the threshold window (ssthresh). The rcwnd is granted/advertised by the receiver. The cwnd is the sender-side congestion window, which limits the amount of unacknowledged data in flight. The ssthresh is the value at which TCP switches between the slow startup and CA phases.

Other windows are categorised based on the way TCP uses the startup scheme. TCP uses slow start in three different ways, and each way uses a particular window which is different from the ones in the paragraph above. First, TCP uses slow start for TCP connection setup using the initial window. Secondly, it uses slow start to restart transmission after a long idle time using the restart window. Thirdly, it uses slow start to start retransmission after packet loss using the loss window. Fast start [15] differs from slow start and uses the restart window
for starting retransmission after both a long idle time and packet loss. The size of the loss window is 1 MSS, while the size of the restart window is the same as the last optimum cwnd used. Hence, when the loss window is used, bandwidth will be underutilised, whereas the restart window will not increase the state of congestion and maintains good link utilisation.

The behaviour of the TCP phases is influenced by the different congestion control and congestion avoidance algorithms. Congestion control algorithms control congestion after packet loss, while congestion avoidance algorithms are meant to control congestion before packet loss occurs. For the purpose of this study, the TCP algorithms are divided into two categories, each implemented in a specific phase of a TCP connection: TCP startup algorithms, exemplified in Section 2.4, and congestion avoidance algorithms, exemplified in Section 2.7.

2.4 Categories and Properties of TCP's Startup Schemes

The various proposed faster TCP startup schemes are passive and active bandwidth probing models that quantify the overall performance improvement in comparison to TCP's default slow-start. The startup schemes that have been reviewed are summarised further in Table 2.1. The bandwidth probing performed by TCP flow and congestion control may be categorized into two approaches, Packet Rate Management (PRM) and the Packet Gap Method [10], [22], while in [23] bandwidth probing mechanisms are put into three broad principles: (i) bandwidth estimation techniques that do not consume bandwidth, e.g. swift-start [22] and PaSt TCP [10]; (ii) sharing congestion state information between peer applications, e.g. the Congestion Manager; and (iii) explicit feedback from intermediate and/or receiver IP nodes, e.g. Quick-Start [4], [19]. Some schemes do not carry out bandwidth estimation at all; that is, data transmission is initiated with an arbitrary window without probing the available bandwidth. For example, the TCP Jump-Start [23] scheme selects a window and starts sending data without knowledge of the available bandwidth. Recently developed multimedia systems [13] use
control data packets to control TCP flow. These packets are used in the control protocols of IP-telephony, video conferencing, H.323 networks, etc. The packets are released to the network after the three-way handshake and are used in the media transmission phase of VoIP networks (but not in the signaling phase) to adapt the transmission rate to the available bandwidth.

Some of the TCP startup schemes are reviewed in the next section. The effect of a given startup scheme on flow throughput is reviewed in Section 2.5. The general advantage of increasing the initial window size is reviewed in Section 2.6.

2.4.1 Slow-Start

Slow-start [9] is the scheme used in the standard TCP congestion control algorithm. It is only able to determine the window size and the self-clocking system. Transition into the congestion avoidance phase is triggered by packet loss or when the congestion window has reached a statically configured threshold. That is, in the event of packet loss, the current congestion window (cwnd) is halved, the half is saved as the slow start threshold (ssthresh), and slow start begins again from its initial cwnd. Once the cwnd reaches the ssthresh, TCP goes into the congestion avoidance (CA) phase. When a loss occurs again, TCP goes back to the slow startup phase.

The use of the slow-start algorithm can lead to inefficient use of bandwidth during the congestion avoidance phase. This is because packet loss during the congestion avoidance phase triggers the slow-start algorithm, which in turn halves the current cwnd. This event leads to underutilization of bandwidth. According to [24], when using TCP slow-start [9], the number of packets released into the network increases exponentially in large bursts during the slow start phase. This can cause a build-up of queues in the bottleneck routers. In networks with a high bandwidth delay product (BDP), these router queues may be smaller than the maximum TCP window. According to [22], a large queue may lead to buffer overflows, resulting in packet loss and a degradation in overall performance.
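A minimal Python sketch of the window dynamics just described (an illustration of the textbook behaviour, not the study's simulation code; window sizes are in MSS units and the loss pattern is hypothetical):

```python
def evolve_cwnd(loss_per_rtt, init_cwnd=1, ssthresh=64):
    """Toy model of standard TCP window growth.
    loss_per_rtt: one boolean per RTT, True when a loss was detected."""
    cwnd = init_cwnd
    history = []
    for loss in loss_per_rtt:
        if loss:
            ssthresh = max(cwnd // 2, 2)   # half of the current window becomes ssthresh
            cwnd = init_cwnd               # slow start restarts from the initial window
        elif cwnd < ssthresh:
            cwnd *= 2                      # slow start: exponential growth per RTT
        else:
            cwnd += 1                      # congestion avoidance: additive increase
        history.append(cwnd)
    return history

# Eight loss-free RTTs, one loss, then four more RTTs.
print(evolve_cwnd([False] * 8 + [True] + [False] * 4))
```

The printed sequence makes the two problems named above visible: the window grows in ever larger bursts during slow start, and a single loss throws the connection back to a one-segment window.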
Another notable weakness of the TCP slow-start scheme is its inability to estimate an accurate initial cwnd size in wireless networks. This is because wireless losses are caused by signal fading, random errors and handoff processes [11], [6], not by network congestion. The congestion control algorithm cannot rely on timeouts to determine the optimal cwnd size, since the increase in RTT is due to the fading signal and not congestion. An attempt to reduce the impact of wireless loss has been suggested by [8], where ssthresh is predefined.

The deficiency of the TCP slow-start scheme has also been witnessed in Voice-over-IP applications, because the slow startup of TCP causes low throughput that is unable to meet the bit-rate requirement of the VoIP codec. TCP Fast-Startup (fsTCP) [12] has been proposed to determine the initial size of the sliding window for a TCP connection by adjusting the congestion window parameters before transmitting IP data. These parameters include the data rate of the VoIP codec and the connection RTT. The fsTCP scheme also solves the problem of traditional VoIP connections, where voice packets were delivered by UDP; UDP degrades performance because it has no features for congestion control and flow control. However, fsTCP has a longer startup period, since it goes through four steps (three-way handshake, parameter determination, connection setting, and starting the TCP connection with the initial window size derived during stage 3). This condition may lead to underutilization of bandwidth during connection startup, leading to low throughput.

Some recent research suggested combining features of some of the TCP startup schemes to improve performance. For instance, TCP-Adaptive Westwood [25] was designed based on the features of TCP Westwood and TCP Adaptive Reno. The benefits of such a combination are efficient use of bandwidth, fair RTT and sufficient throughput for a quality TCP connection startup.

2.4.2 Swift-Start

Swift-start [6], [22] is a variant of slow-start and one of the faster startup schemes. It uses packet pacing and packet pairs, and needs only a few RTTs to determine the window size. When packet loss occurs, it uses fast recovery. Some researchers observed that swift-start sends
more packets than slow-start during the startup phase. This is because the estimated bottleneck bandwidth defines the number of packets to be sent in the second RTT. Swift-start does not need intermediate routers and does not rely on explicit feedback from the network or the receiver. Its ability to employ both packet pacing and packet pairs offers good throughput.

However, a recent study by [16] identified some drawbacks in the original swift-start scheme. These problems arise from the use of swift-start in combination with Delayed ACKs (DACK) and ACK compression. DACK affects the packet pair algorithm of swift-start because ACKs are not sent promptly. Such a delay might not be due to congestion but internal to the receiver; in this case, the sender is not able to estimate the available bandwidth correctly. Besides, if the arrival time between data segments (reported by the receiver) is less than the maximum ACK delay time, the receiver will send only the second ACK, and the sender will not be able to calculate the RTT but will act like TCP slow-start [9]. Secondly, ACK compression decreases the time gap between the ACKs of individual packets, which leads to overestimation of the available bandwidth.

To solve the problem introduced by DACK in the original swift-start [22], a modification was made to the packet pair algorithm [16] such that cwnd is equal to four (4) segments so that the packets are sent in pairs. The modification makes the RTT receiver based; that is, the time difference is determined by the receiver, which in turn sends it back to the sender within the IP header option. The problem introduced by ACK compression can be solved by adopting the procedure used in [10], where estimation of the available bandwidth uses the ACKs of the entire train to obtain a fair RTT.
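The packet-pair idea underlying swift-start can be illustrated with simple arithmetic (a sketch only; the function name and example values are assumptions, and the estimate is reliable only when the ACK spacing is not distorted by delayed ACKs or ACK compression, as discussed above):

```python
def packet_pair_estimate(ack_time_1, ack_time_2, packet_size_bytes):
    """Estimate the bottleneck bandwidth (bits/s) from the spacing of the ACKs
    of two back-to-back segments; the gap approximates the bottleneck's
    per-packet service time."""
    gap_s = ack_time_2 - ack_time_1
    return packet_size_bytes * 8 / gap_s

# 1000-byte segments whose ACKs arrive 1.6 ms apart suggest a bottleneck of
# roughly 5e6 bits/s, i.e. the 5 Mbps bottleneck used later in this study.
print(packet_pair_estimate(0.2000, 0.2016, 1000))
```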
2.4.3 Paced-Start (PaSt)

PaSt [10] is an active bandwidth probing startup scheme which does not need explicit feedback from the receiver, and hence does not flood the network path. Compared to slow-start, PaSt does not use self-clocking during startup, but controls the inter-packet gap within the train to determine the turning point, an optimal congestion window in a multiplexed flow with no congestion. This means that during startup, PaSt does not transmit the next train until all the ACKs for the previous train have been received. This shows that PaSt is less aggressive than slow-start: PaSt trains are more stretched out than the corresponding slow-start trains, and the spacing between paced-start trains is larger than that between the slow-start trains. PaSt iteratively calculates an estimate for the congestion window of the path and then uses that estimate to transition into the congestion avoidance phase. However, if packet loss occurs during that period, PaSt transitions into the congestion avoidance phase in exactly the same way as slow-start. Thus, PaSt differs from slow-start in two ways: how it sends trains and how it transitions into the congestion avoidance phase. It solves the problem of the packet pair technique used in standard TCP, which estimates the bottleneck link capacity rather than the available bandwidth. That is acceptable only if the competing traffic is low; if the traffic increases, using packet pairs can overestimate the available bandwidth, and thus the initial congestion window, which can easily result in traffic congestion and significant packet loss.

2.4.4 Quick Start

Quick-Start [1], [26] is not a variant of slow-start. It incorporates active measurement tools with a quick-start request in a cooperative environment to estimate the available bandwidth. This scheme is known to have a shorter startup period before transitioning into the congestion avoidance phase; the explicit feedback avoids the time-consuming capacity probing of TCP slow-start and is beneficial on underutilized bandwidth [4]. The TCP receiver should therefore advertise an rcwnd which is big enough to allow efficient utilization of a connection path with a large BDP. A TCP receiver with a high number of TCP connections should also optimize buffer and memory usage in order to be able to serve the maximum possible number of TCP connections at the same instant. On Quick-Start failure or packet loss, the algorithm reverts to the slow-start phase. This mechanism has been supported by [5] because the failure or packet loss means the current cwnd would no longer be valid due to sudden changes in traffic load, a misbehaving receiver, etc. Other advantages realized by this are reduced queueing delay, better performance in terms of link capacity utilization, good transition during handoff
and suitability between TCP nodes with different characteristics. However, Quick-Start is vulnerable to fabricated bandwidth information from the bottleneck link, such as in a DOS attack [3]. Explicit feedback can also suffer from rate limiting [18] when probing packets such as ICMP are used in a controlled manner for security reasons at the receiver. Recent research [5] supported the use of mobility signaling and a nonce in the quick-start request to counteract these attacks [27], [3]. According to the study by [1], another problem with Quick-Start is that when the request has not been approved, the Slow-Start (default congestion control) scheme [9] is used. When a packet loss occurs, Quick-Start assumes slow start and uses the default initial window for transmitting the remaining data, in the same way as when a quick-start request has not been approved.

2.4.5 The Congestion Manager (CM)

The CM [17] is an end-to-end module and therefore works at both peers' application level. The CM incorporates an API (Application Programming Interface) which is a non-standard protocol. The advantage CM has over Slow-Start is that it has a faster startup on a link which has been used before, because aggregate information is available, i.e. it can leverage information from previous TCP connections. The CM's weakness is that for a connection on a link which has not been used, it falls back to the TCP slow-start scheme [9], the default congestion control with the many shortcomings noted in Section 1.2.

2.4.6 TCP Fast Start

TCP Fast Start [15] caches network parameters such as the RTT and cwnd from a previous TCP connection and then uses this information to estimate the available bandwidth. The only disadvantage would be a wrong estimate when the cached information becomes stale. The scheme protects against this consequence by preventing a fast start connection after a packet drop has occurred, using fast recovery instead, and packets sent during fast start are assigned a higher drop priority than other packets. This mechanism is good because it avoids the slow startup penalty each time there is a new TCP connection.
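The caching idea can be sketched as follows (an illustration of the concept only; the cache structure, keys and values shown are assumptions, not the actual TCP Fast Start implementation of [15]):

```python
# Hypothetical per-destination cache of the parameters a fast-start sender reuses.
connection_cache = {}   # destination address -> {"cwnd": ..., "ssthresh": ..., "srtt": ...}

def save_on_close(dest, cwnd, ssthresh, srtt):
    """Remember the state of a connection that completed successfully."""
    connection_cache[dest] = {"cwnd": cwnd, "ssthresh": ssthresh, "srtt": srtt}

def initial_window(dest, default_cwnd=1):
    """Start a new connection from cached state when available; otherwise
    fall back to the conventional (slow-start) initial window."""
    cached = connection_cache.get(dest)
    return cached["cwnd"] if cached else default_cwnd

save_on_close("10.0.0.2", cwnd=48, ssthresh=32, srtt=0.16)
print(initial_window("10.0.0.2"))   # 48: reuse of the last good window
print(initial_window("10.0.0.3"))   # 1: new path, conventional start
```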
Table 2.1 is a tabulated summary comparing the TCP startup schemes. The basis of the comparison includes the number of round trip times required to probe the available bandwidth, the accuracy of the estimated value of the available bandwidth, the need for intermediate routers (i.e. whether estimation is explicit/online or implicit), whether the scheme is susceptible to security threats, and how each scheme responds to packet loss.
Slow-Start [9], [11]
Round trips to probe available bandwidth: many, because of its slow nature.
Accuracy of bandwidth estimation: inaccurate in wireless networks, since it cannot differentiate the causes of packet loss.
Needs an intermediate node: No. Security threat: No.
Response to packet loss: window reduced to 1 MSS; inefficient use of bandwidth.

Swift-Start [22]
Round trips: few, because it acknowledges a packet, not a whole train.
Accuracy: less accurate compared to Paced-Start.
Needs an intermediate node: No. Security threat: No.
Response to packet loss: uses fast recovery.

Paced-Start [10]
Round trips: many; it acknowledges a whole train, hence similar to slow start.
Accuracy: accurate.
Needs an intermediate node: No. Security threat: No.
Response to packet loss: not well known.

Quick-Start [1], [3], [26]
Round trips: few, because it uses an option in the IP header during the three-way handshake.
Accuracy: depends on the intermediate nodes: (i) biased routers do not support bursty traffic, hence they can cause early packet drops with a higher initial window; (ii) misbehaving routers may report a wrong available bandwidth when there is low or high traffic induced by an attack.
Needs an intermediate node: Required. Security threat: vulnerable to DOS attacks.
Response to packet loss: returns to slow start, using the default initial window in the same way as when a quick-start request was not approved.

The Congestion Manager [17]
Round trips: few on previously used links, many on new links.
Accuracy: depends on the probing nature of slow start on new links.
Needs an intermediate node: No. Security threat: No.
Response to packet loss: uses the previous cwnd sent successfully; stable and efficient bandwidth usage.

Fast Start [15]
Round trips: many on a new link, few on used links.
Accuracy: accurate on a new link; on a used link, accuracy depends on whether the cached parameters are stale.
Needs an intermediate node: No. Security threat: No.
Response to packet loss: uses the restart window with fast recovery; does not enter the slow-start phase; efficient bandwidth usage at startup.

Table 2.1: Comparison of TCP Startup Schemes
Slow start has the longest startup time, followed by Paced start, then Swift start. Quick start assumes slow start when the quick-start request is not approved; otherwise, it is faster than Paced start and Swift start. The Congestion Manager and Fast start assume slow start on a new link, but both are faster than Slow start.

Only Quick start requires intermediate routers, hence it may not be a good choice, since even an approved quick-start request can be misleading when estimating the available bandwidth: biased routers will not support bursty traffic, and a misbehaving router may report a wrong available bandwidth, either underestimated or overestimated. The dependency on explicit feedback also makes Quick start susceptible to security risks such as DOS attacks. The procedures used by Slow start and Quick start for recovery after packet loss lead to underutilisation of bandwidth. Fast start gains high link utilisation faster than all the other schemes because it uses the restart window rather than the loss window.

Fast start takes a smaller number of round trip times to estimate the available bandwidth since it uses implicit information from the acknowledgments of the first packets. The Congestion Manager assumes slow start on new links, Quick start assumes slow start when a request is not approved, Swift start uses paired acknowledgments, and Paced start acknowledges a whole train; hence these take more round trip times to estimate the available bandwidth compared to Fast start.

In summary, Fast start is a better TCP startup scheme, since it takes few round trip times to estimate the available bandwidth, uses a higher initial window during connection setup, and uses a high restart window after a long idle time or packet loss, rendering high TCP performance during the congestion avoidance phase.

2.4.7 Related Work

The study in [7] involved the use of slow start to compare the performance of standard TCP and Fibonacci TCP; Fibonacci TCP performs better than regular TCP. In our study we
compare the performance of various startup schemes, including slow start, and select the best scheme for simulation with Fibonacci TCP.

Similar findings in [37] show that TCP variants present self-similar behaviour over time scales. This means that changing the network settings causes only a slight variation in the behaviour of the traffic pattern. In our study, we look at the characteristics of other TCP variants, namely quick start and fast start, in comparison to the slow start scheme in terms of RTT, packet loss and throughput when we simulate each scheme with regular TCP. The previous work also shows that whenever there are multiple TCP flows, all the flows are synchronised. That is, all TCP flows that pass through the router tend to lose packets at the same time and reduce their sending rates at the same time, because all the TCP flows are summed into one TCP window and appear as a single flow through the bottleneck link. When TCP flows are synchronised [38], a bigger buffer is a better choice in order to achieve high utilisation.

One of the studies [40] investigated the effect of packet size on the maximum cwnd when more than one TCP flow shares the link, each with a different packet size. The study shows that different packet sizes perform differently. In our study, we do not compare the performance of competing TCP flows, but the performance of the tagged flow under different startup schemes, using an equal number of packets in the maximum cwnd for each single-flow simulation.

In the study by [41], throughput is used as the main metric to investigate the performance of TCP in terms of throughput collapse in cluster-based storage systems. In our study, throughput is also used as the main metric to determine the performance of the selected startup schemes for setting up a TCP connection and, thereafter, the impact of faster startup on Fibonacci TCP in terms of throughput variation. Using Fibonacci TCP with a faster startup scheme in a cluster-based storage system may cause erratic throughput due to increases in congestion and RTT as a result of the large amount of data being sent, a situation that may lead to TCP pause.
TCP Pause

When TCP begins sending data in blocks at connection setup, it causes sudden congestion and TCP pause [42]. TCP pause is the process by which a TCP connection sends a large block of data and pauses before sending the next block. Significant TCP pauses that last for some duration account for throughput collapse. In such Internet applications, using Fibonacci TCP with a faster startup scheme may lead to data being sent in blocks, because of the high restart window used after idle time or packet loss, coupled with the high factor of 1.618034 used by Fibonacci TCP to increase the congestion window. TCP pause occurs if there is a throttling process restricting data flow; throttling processes include erratic RTT and congestion. This erratic pattern of TCP, that is, the stop-start pumping of data, can also impact the data, and throughput would be reduced to a low value, degrading the link performance. The erratic pattern may not affect services such as e-mail, but it will affect multimedia applications such as video and voice. TCP pause can also result in underutilisation of bandwidth. This is a similar situation to that of throughput collapse in cluster-based storage systems [41].

In a data communication network, network congestion occurs when a link or node is carrying so much data that its throughput deteriorates. The throughput deterioration is attributed to queuing delay, packet loss and the blocking of new connections. The consequence of packet loss and blocking of new connections is that any incremental data leads either to only a small increase in network throughput or to an actual decrease in network throughput. A network protocol such as Fibonacci TCP, coupled with the high restart window used in fast start, may be aggressive in retransmitting data to compensate for lost packets, and this keeps the link in a state of congestion even after the initial data amount has been reduced to a level which would not have induced network congestion. Therefore, using a faster startup scheme that has a high restart window together with a high speed TCP in a cluster-based storage system may behave like protocols with aggressive retransmission that exhibit two stable states under the same level of offered data. The stable state with low throughput is a congestive/throughput collapse, where the total incoming bandwidth exceeds the outgoing bandwidth; the stable state with high throughput is when there is no TCP pause.
2.5 Effects of TCP's Startup Algorithm on Flow Throughput

The impact that the startup of a TCP connection has on the flow throughput is determined by the flow length [10]. For instance, very short TCP flows never get out of the startup phase; transmission ends in the startup phase, so their throughput is determined by the startup scheme. For long TCP flows, the startup scheme has negligible impact on the total throughput, since most of the data is transmitted in the congestion avoidance phase. For intermediate-length flows, the impact of the startup scheme depends on how long the flow spends in the congestion avoidance phase. However, it is not well known how starting a TCP connection with a high initial window influences the congestion avoidance phase. Since the startup algorithm used determines the size of the initial window, we briefly discuss the advantage of a large initial window in the next section.

2.6 Advantages of Increasing TCP's Initial Windows Size

A large initial window is advantageous especially where the connection is established for the transmission of a small amount of data, because all the data may be transmitted in a single window. For instance, e-mail and web page transfers are short flows, and a larger initial window can reduce the data transfer time. In many variants of slow-start such as [24], [28], connections that are able to use large congestion windows eliminate up to three RTTs. This is a benefit for high-bandwidth paths with large propagation delays, such as TCP connections over satellite links and LFNs. Using the scenario of [23], [18] over an underutilized network, the TCP sender would be able to transmit as much of its data in the initial congestion window as the available bandwidth can absorb, and can complete data remittance in half or less of the time required by [9].
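The "up to three RTTs" saving can be checked with a short calculation (illustrative only; it assumes the window simply doubles every RTT and ignores delayed ACKs, so the exact figures in [24], [28] may differ):

```python
import math

def rtts_to_reach(target_window, initial_window):
    """Number of RTTs slow start needs to grow from the initial window to the
    target window when the window doubles once per RTT."""
    return math.ceil(math.log2(target_window / initial_window))

# Growing to a 64-packet window: starting from 8 packets saves 3 RTTs
# compared with starting from 1 packet.
print(rtts_to_reach(64, 1) - rtts_to_reach(64, 8))   # 3
```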
After the slow startup phase, TCP transitions into the CA phase. The next section contains a review of the algorithms used in the CA phase of a TCP connection.

2.7 TCP's Congestion Avoidance (CA) Algorithms

In the previous section we saw that TCP uses various startup algorithms during the startup phase of a TCP connection. When the TCP window reaches the slow start threshold value, TCP goes into congestion avoidance (CA) mode. For the purpose of this study, the CA algorithms are sub-divided into:

i. standard congestion avoidance algorithms,
ii. exponential algorithms, and
iii. high speed algorithms.

The standard congestion avoidance algorithm is used in the congestion avoidance phase of standard TCP. In this category, a timeout triggers the TCP slow-start [9] algorithm, causing the current window size to be halved, because the timeout is assumed to be due to packet loss as a result of congestion on the link. The disadvantages of this behaviour were already discussed in the previous section.

Exponential algorithms such as [8] do not halve the current window size after packet loss, but provide multiplicative decrease using an exponential of the current window size (not a 0.5 factor) and later additive increase using the inverse of the current window size to gain maximum utilization of the available bandwidth. Meanwhile, high speed algorithms use the ratio inverse of a sequence of numbers (such as the Fibonacci series) as the multiplicative factor to reduce the current window in the event of packet loss, and increase the window size by the corresponding coefficient. An example of these algorithms is the proposed Fibonacci TCP [7], which is further reviewed in Section 2.8.

The performance of CA algorithms can be further improved by the implementation of inter-packet delay (IPD). The TCP-Friendly Rate Controller (TFRC) scheme [29] applies IPD to ensure rate control in terms of the buffer level at the mobile device (receiver). IPD is effective
because it uses the buffer level at the mobile device and sets it as the sending rate in video-on-demand applications. This is achieved by implicit prediction of the current buffer level at the receiver (which has a low playback rate), hence reducing the possibility of packet loss due to overflow at the receiver. The prediction of the buffer level is done continuously and is based on RTT and packet loss, hence it does not flood the network path. Additionally, faster startup mechanisms such as Quick-Start [19] and [5] can be useful in sustaining performance during the congestion avoidance phase after a long idle period [19] and after a handoff between a mobile node and the access point [5], respectively. This is possible because transmission continues using the previous optimal cwnd from before the idle period or the handoff.

In asymmetric networks, performance can be improved by manipulating the frequency of RTT measurement. For example, Formosa TCP [30] has been found to have advantages over other TCP variants when used in asymmetric networks, because it has high throughput and low delay variation per connection, and its RTT estimation can identify the direction of the congestion; hence it does not suffer performance degradation during the CA phase.

2.8 Fibonacci TCP

Fibonacci TCP [7] is a particular CA algorithm proposed to increase the utilization of the available bandwidth in high speed networks. How Fibonacci TCP controls the steady behaviour of the CA phase is based on Fibonacci numbers. The Fibonacci numbers are a sequence defined by a recurrence relation over terms 0 to n. The principle of using Fibonacci numbers in the CA phase was borrowed from Computer Science, where error-correction code implementations based on varying Fibonacci numbers (or series) increase the information reliability of a communication system. In a computer network system, the nth term is noted when packet loss occurs. As n tends to infinity,
the golden ratio (the multiplicative factor used to increase the window in the absence of congestion) will tend to 1.618034, rather than an increase of only 1 MSS [9]; and the golden ratio inverse used to reduce the cwnd size when packet loss occurs will be 0.618034, rather than 0.5 as in the standard CA algorithm. This means the high initial window size is expected to influence the term n at which the first packet loss takes place, and hence the overall performance of Fibonacci TCP. But how a high initial window impacts Fibonacci TCP is not yet well known.

From the study by [7], the use of the golden ratio in a high speed TCP such as Fibonacci TCP presents two advantages. First, in the absence of congestion, Fibonacci TCP increases the cwnd size faster and utilizes bandwidth better than the standard CA algorithm. Second, the reduction of the current window size after packet loss is the smallest compared to the other two categories of algorithms mentioned above.
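A minimal sketch of the window rules just described (illustrative only; it reproduces the golden-ratio factors from [7] as stated above, not the actual Fibonacci TCP implementation):

```python
GOLDEN_RATIO = 1.618034           # limiting ratio of consecutive Fibonacci numbers
GOLDEN_RATIO_INVERSE = 0.618034

def fibonacci_update(cwnd, loss_detected):
    """One congestion-avoidance step: grow the window by the golden ratio while
    there is no congestion; on packet loss, keep 0.618 of it instead of halving."""
    if loss_detected:
        return cwnd * GOLDEN_RATIO_INVERSE
    return cwnd * GOLDEN_RATIO

cwnd = 10.0
for loss in [False, False, False, True]:
    cwnd = fibonacci_update(cwnd, loss)
print(round(cwnd, 1))   # grows to about 42.4, then drops to about 26.2 after the loss
```

The gentler reduction on loss (factor 0.618 rather than 0.5) is what lets Fibonacci TCP keep the bottleneck link better utilized after a congestion event.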
2.9 Research Question

Using the proposed faster startup scheme, the study considered performance metrics which included RTT variation, throughput variation and packet loss rate. The study questions were:

i. Which TCP startup scheme should be used?

ii. What would be the effect of a faster startup scheme (that is, a higher initial window) on Fibonacci TCP?

Chapter 3

Methodology

3.1 Introduction

This chapter describes the tools, parameters, detailed procedures and experiments used to achieve each objective stated in Section 1.3.2. The simulation involves only three startup schemes, due to limited time, the related advantages shown in Table 2.1, and the availability of their code.

3.2 Simulation Tool

The study used the ns-2 simulator [20] running on SUSE Linux Enterprise Desktop 11 SP1. We chose ns-2 because it makes it easy to define the various simulated objects such as applications, protocols, network types and traffic models, and it allows the study to achieve fast execution time and efficiency. The simulation results can be verified by analysing the trace files against existing theoretical values. It would be very difficult to run the experiments with multiple nodes in a real-life test environment due to the high expense and unavailability of hardware components.

MATLAB was chosen for the analysis because it is a high-level mathematical language which is good for developing the prototype and generating output that can be
sufficient for early insights into the investigated TCP flows. After analysis, MATLAB was also used to plot various graphs from the processed data. We also used awk scripts to filter the required data from the trace files; this made analysis easier and saved space on the storage drive.

In the simulation of the various TCP startup schemes, procedures for computing three analytical metrics, namely RTT, throughput and packet loss ratio, were used. RTT is the time interval between the time a packet is sent from the TCP sender node and the time its corresponding ACK packet is received at the TCP sender node. Packet loss ratio is the number of dropped packets divided by the number of sent packets. Throughput is the amount of data received over the network per second.

To compare the performance of the TCP startup schemes, we used MS Excel to compute averages, variance, standard deviation and coefficient of variance from the sample data. Alternatively, the average, variance, standard deviation and coefficient of variance can be computed using the equations in [31]. The coefficient of variance is the standard deviation divided by the average, expressed as a percentage [31]. We used the coefficient of variance to establish whether the throughput samples are closely concentrated around the average value or not, and how stable a scheme is. The coefficient of variance is often used when comparing data sets from different units or different environments with widely different means; in such cases, we did not rely on the standard deviation alone, since we obtained sample data from different categories of startup schemes.

3.3 Selection of the Faster Startup Scheme

Three different startup schemes, namely slow start, quick start and fast start, were used for the simulations with single flows. Based on the results from the single-flow simulations, we simulated multiple (two and four) TCP flows using fast start.
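For reference, the statistics named above can also be reproduced with a few lines of Python instead of a spreadsheet (an illustrative sketch with hypothetical sample values; the study itself used MS Excel):

```python
import statistics

def coefficient_of_variance(samples):
    """Standard deviation divided by the mean, expressed as a percentage; used to
    judge how tightly throughput samples cluster around their average."""
    return statistics.stdev(samples) / statistics.mean(samples) * 100

throughput_mbps = [4.2, 4.6, 4.4, 3.9, 4.5]   # hypothetical 0.5 s throughput samples
print(round(coefficient_of_variance(throughput_mbps), 1), "%")
```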
3.4 Simulation of Chosen Startup Schemes

We simulated data transmission over TCP connections having (i) one TCP flow, (ii) two TCP flows and (iii) four TCP flows, using a dumbbell topology. The parameters are described in Sections 3.5.1 and 3.5.2. Simulations were done for regular TCP as well as Fibonacci TCP while varying the startup schemes. The various simulations used the same topology, described in Figure 3.1.

3.4.1 Topology

Figure 3.1 represents the dumbbell topology that was used for all simulations. A dumbbell topology is a network setup where 1 to n TCP nodes are connected to a single router-1, and router-1 is connected to router-2 over a single slow bottleneck link.

Figure 3.1: Dumbbell Topology used for the Experiments

Bandwidth Delay Product (BDP) and TCP Receive Window (RWIN)

BDP determines the optimal amount of data that should be in transit in the network. It is directly related to the optimum TCP receive window value in an existing TCP connection. Essentially, BDP is the bandwidth multiplied by the delay of a link. We employed equations (3.1) and (3.2) [32] to compute the link specification and parameters.
BDP = Bandwidth × Delay    (3.1)

RWIN = BDP    (3.2)

where BDP is in packets and RWIN is the receiver advertised window.

Since the bottleneck bandwidth is 5 Mbps and the minimum RTT is 160 ms, using equation (3.1) the optimum TCP window consists of 100 packets, assuming a packet size of 1000 bytes. That is, the number of packets that can fully utilise the link is 100. In addition, the send and receive buffers should be optimised to allow full network utilization.

Buffer

The requirement to have optimum send and receive buffers is due to the fact that if the buffers are too large, there will be more buffering, implying more packets in the queue and an increase in latency; yet TCP congestion and flow control would determine their own effective congestion and receive windows, so the remaining buffer would be unused, hence wasted. On the other hand, if the buffer is too small, TCP congestion and flow control would effectively reduce the send buffer at the sender to the same small size, hence slowing down the network. Additionally, in this study, there was a need to know how much outstanding (unacknowledged) data can be in flight between the sender and receiver, as that determines how large the send buffer should be.

To determine an adequate buffer size, we used the known bandwidth and network latency, based on the fact that in TCP the sender cannot flush data from its buffer until the receiver has acknowledged it. Therefore, the RTT times the bandwidth puts a lower bound on what the buffer size and TCP window should be. We used equation (3.3) [32] to derive the buffer size.

Buffer = BDP    (3.3)

From equations (3.2) and (3.3), it follows that:

Buffer = RWIN    (3.4)

The buffer size would therefore be 100 packets. This is our recommendation based on the above arguments, but not a rule.
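As a quick check of the figures above (illustrative arithmetic only), the 100-packet window follows directly from the link parameters:

```python
bandwidth_bps = 5_000_000      # 5 Mbps bottleneck
rtt_ms = 160                   # minimum round-trip time
packet_size_bytes = 1000

bdp_bytes = bandwidth_bps // 8 * rtt_ms // 1000   # equation (3.1): bandwidth x delay
bdp_packets = bdp_bytes // packet_size_bytes      # express the BDP in packets

print(bdp_bytes, bdp_packets)   # 100000 bytes -> 100 packets = RWIN = buffer size
```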
The access link capacities are 100 Mbps and their delays are 1 ms. The bottleneck link capacity is 5 Mbps and its delay is 79 ms. The access links are given the lower delay of 1 ms because their bandwidth is dedicated to only one TCP flow, whereas the bottleneck link is given a higher delay because it is shared by many TCP flows. Drop Tail queuing [33] is used throughout the simulation. These link specifications were kept constant for all the simulations. We varied the parameters, including the initial window sizes and the number of flows, as shown in Tables 3.1, 3.2 and 3.3. For each simulation, we generated traffic in one direction. The different parameters and methods used in the simulation are explained in Section 3.5.

3.4.2 Analytical Metrics

In this section, we describe the methods for obtaining the metrics used for analysing the simulation traces. These metrics are RTT, throughput and packet loss rate.

Round Trip Time (RTT)

We computed RTT by first extracting the required data from the traces using an awk script. We extracted the sequence numbers and corresponding times of both the data and ACK packets of the tagged flow. Two separate files were created; the first file was for data packets (consisting of sequence numbers and the corresponding times when they were dequeued from TCP
node-1) and the second file was for ACK packets (consisting of sequence numbers and the corresponding times when they were received at TCP node-1).

Cases of duplicate data and ACK packets were handled by a MATLAB script. The script searches through the list sequentially from the beginning of the file towards the end; for data and ACK packets which appear more than once, it records only the corresponding time of the last duplicate seen. The data and ACK files were of equal size in terms of rows; that is, the numbers of records in the data and ACK files were equal. Each sequence number of a data packet in the data file has a corresponding sequence number in the ACK file, since the ACK packets contain the sequence numbers of the packets received at the sink. The RTT of packet (i) was then computed from equation (3.5).

RTT(i) = ACKtime(i) − Datatime(i)    (3.5)

where Datatime(i) is the time when data packet (i) was dequeued, ACKtime(i) is the time when the associated ACK packet was received at TCP node-1, and RTT is in seconds.

Packet Loss Ratio

This section describes how the packet loss ratio was derived. Using awk code, the numbers of received and lost events were counted from the full trace file. Any packet belonging to the tagged flow was counted as lost if it was registered as dropped at any of the nodes. Lost packets is the total count of dropped packets which were sent out by the tagged TCP node, denoted dp. Received packets is the count of packets received at the TCP sink, denoted rp. The total number of packets sent is the sum of dp and rp. The packet loss ratio can be computed in the same way as the delivery ratio. We therefore used equation (3.6) [34] to compute the packet loss ratio.

Loss ratio = dp / (dp + rp)    (3.6)
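A condensed Python sketch of the post-processing described above (illustrative only; the study used awk and MATLAB, and the two-column input format, file names and example counts assumed here are hypothetical):

```python
def read_times(path):
    """Read 'sequence_number time' pairs; keeping only the last occurrence of each
    sequence number handles duplicate data packets and duplicate ACKs."""
    times = {}
    with open(path) as f:
        for line in f:
            seq, t = line.split()
            times[int(seq)] = float(t)
    return times

data_t = read_times("data_packets.txt")   # time each data packet left TCP node-1
ack_t = read_times("ack_packets.txt")     # time the matching ACK returned to node-1

# Equation (3.5): per-packet RTT for every data packet that was acknowledged.
rtt = {seq: ack_t[seq] - data_t[seq] for seq in data_t if seq in ack_t}

dp, rp = 120, 11880                       # hypothetical counts of dropped/received packets
loss_ratio = dp / (dp + rp)               # equation (3.6)
print(sum(rtt.values()) / len(rtt), loss_ratio)
```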
Packet Loss Ratio

This section describes how the packet loss ratio was derived. Using awk code, the numbers of received and dropped events were counted from the full trace file. Any packet belonging to the tagged flow was counted as lost if it was registered as dropped at any of the nodes. Lost packets is the total count of dropped packets sent out by the tagged TCP node, denoted dp; received packets is the count of packets received at the TCP sink, denoted rp; and the total number of packets sent is the sum of dp and rp. The packet loss ratio can be computed in the same way as a delivery ratio, so we used equation (3.6) [34]:

Loss ratio = dp / (dp + rp) (3.6)

Data Throughput

The data for evaluating the throughput of the tagged flow was extracted during simulation with the help of a TCL script and later exported to MATLAB for graphical representation. The study emphasised throughput during the startup phase and at the points where TCP had to recalculate its congestion window because new flows had just been added to the link. We used equation (3.7) [35] to compute the throughput at regular intervals of 0.5 s:

Throughput (Mbps) = (ByteRcvd × 8) / (0.5 × 10^6) (3.7)

where ByteRcvd is the number of bytes received in each 0.5 s interval. We periodically called the sink's byte-count method to obtain the bytes received at the sink, converted the value to Mbps and wrote the result to an output file. The interval between calls was 0.5 s, and before each new call the counter was reset to zero so that the method would not return the accumulated number of bytes since the start of the tagged TCP connection. To investigate the variability of throughput under different traffic loads, we varied the number of flows and the window sizes (packet distribution) of the various flows.
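The two metrics above were produced by an awk script and a periodically scheduled TCL procedure in the original study; the Python sketch below simply mirrors equations (3.6) and (3.7) under the stated assumptions (0.5 s sampling interval, counter reset each interval). Function names and the example figures in the main block are illustrative.

```python
# Illustrative sketch of equations (3.6) and (3.7).

INTERVAL_S = 0.5  # sampling period used in the study

def loss_ratio(dropped_pkts: int, received_pkts: int) -> float:
    """Equation (3.6): fraction of the tagged flow's packets that were dropped."""
    return dropped_pkts / (dropped_pkts + received_pkts)

def interval_throughput_mbps(bytes_rcvd_in_interval: int) -> float:
    """Equation (3.7): throughput over one 0.5 s interval, in Mbps."""
    return (bytes_rcvd_in_interval * 8) / (INTERVAL_S * 1_000_000)

if __name__ == "__main__":
    # e.g. the cwnd = 200 slow-start run in Table 4.2: 199 drops, 22520 received
    print(f"loss ratio = {loss_ratio(199, 22520):.4f}")          # ~0.0088 = 88 x 10^-4
    # e.g. 312500 bytes received in one 0.5 s window -> 5 Mbps (link fully utilised)
    print(f"throughput = {interval_throughput_mbps(312_500):.1f} Mbps")
```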
3.5 Details of Experiments Conducted

3.5.1 Simulation of One TCP Connection

This section describes the simulation in which only one TCP sender transmits unidirectional data. The entire bottleneck link is dedicated to the data packets of a single TCP sender and the ACK packets of the single corresponding TCP sink. Simulations are run with the startup scheme set to each of the three schemes in turn. For each scheme, the maximum congestion window is set to 50, 100 or 200 packets; these window sizes were chosen because the optimal window size had been calculated as 100 packets from equation (3.1). In all cases, the buffer size was set to 100 packets following equation (3.3). The parameters are summarised in Table 3.1.

Startup scheme   Maximum window (packets)
Slow Start       50, 100 and 200
Fast Start       50, 100 and 200
Quick Start      50, 100 and 200

Table 3.1: Maximum cwnd for One (1) TCP Flow Simulation

3.5.2 Simulation of Many TCP Connections

This section describes the network settings when there are multiple TCP flows, all sharing a single bottleneck link. The study of the tagged flow was divided into three categories: (i) the tagged flow presents a smaller maximum cwnd than the other flows, (ii) the maximum cwnd of all TCP flows is equal, and (iii) the maximum cwnd of the tagged flow is higher than that of the additional flows, as illustrated in Table 3.2 and Table 3.3 for two and four flows respectively. In each simulation, the start of traffic generation is randomly distributed from 0.1 s, traffic from additional flows is generated at intervals of 3.1 s, and the tagged flow is the first TCP sender. Each simulation lasts 50 s. The results of these experiments are found in Chapter 4. Multiple flows are simulated because, when network resources are shared by several TCP connections, the tagged flow is affected by the other flows, which is closer to real networks than the single-flow simulation.
In that regard, the other flows are additional traffic that may build long queues and congestion, resulting in more queuing time, increased delay across the link, and packet loss that can reduce throughput. However, each connection must get a fair share of the resources in terms of bandwidth. The 5 Mbps bottleneck therefore supports a single flow or many TCP flows with varying maximum congestion window sizes. If the bandwidth is shared equally, each TCP connection's window is only a fraction of the bottleneck's bandwidth-delay product, because TCP congestion control algorithms aim at a fair share of the bandwidth among competing flows (a worked example of this equal share is given after Table 3.3).

Window category   Maximum window for tagged flow (packets)   Maximum window for second flow (packets)
Lower             50                                          150
Equal             100                                         100
Higher            150                                         50

Table 3.2: Maximum Window for Two (2) TCP Flows Simulation

Further simulations were done with four (4) flows to investigate the variation of RTT, packet drops and throughput when the number of flows is doubled. Table 3.3 shows the parameters used in the four-flow simulations.

Window category   Maximum window for tagged flow (packets)   Maximum window for each additional flow (packets)
Smaller           20                                          60
Equal             50                                          50
Bigger            80                                          40

Table 3.3: Maximum cwnd for Four (4) TCP Flows Simulation
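As a back-of-envelope illustration of the equal-share argument above (these figures follow from the link parameters in Section 3.4.1, not from the simulation results themselves):

5 Mbps ÷ 2 flows = 2.5 Mbps per flow, i.e. about 50 of the 100 packets in the link's BDP;
5 Mbps ÷ 4 flows = 1.25 Mbps per flow, i.e. about 25 of the 100 packets in the link's BDP.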
3.5.3 Simulation of Fibonacci TCP with the Selected Faster Startup Scheme and Slow Start

Further simulations use Fibonacci TCP in the congestion avoidance (CA) phase together with two startup schemes, slow start and fast start, to set up the TCP connections. The objective is to evaluate and compare the performance of Fibonacci TCP in terms of RTT, packet loss ratio and throughput when it uses the slow startup and the faster startup mechanism independently to set up a connection. The simulations therefore support the choice between slow start and fast start as the startup scheme for Fibonacci TCP. This is done for a single Fibonacci TCP flow as well as for two and four Fibonacci TCP flows.
Chapter 4

Results

4.1 Introduction

This chapter presents the performance of slow start, fast start and quick start in terms of RTT, throughput and packet loss ratio. The analysis and discussion of the results are found in Section 4.2 for single TCP flow simulations, Section 4.3 for multiple TCP flows and Section 4.4 for simulations of Fibonacci TCP.

4.2 RTT, Packet Loss Ratio and Throughput for One TCP Flow

4.2.1 RTT for a Single TCP Flow

Recall from Section 3.4.1 that the minimum RTT is 160 ms. Figures 4.1, 4.2 and 4.3 show the RTT of one regular TCP flow with maximum congestion windows of 50, 100 and 200 packets. There is a sharp increase in RTT during the first 2.5 s of simulation time for all maximum congestion windows under each of the startup schemes. This is due to the sudden burst of traffic buffered at the router, which increases queuing time and is reflected in increased RTT.
Figure 4.1: RTT for one regular TCP flow with maximum cwnd of 50 packets (RTT (s) against time (s); quick start, slow start, fast start)
Figure 4.2: RTT for one regular TCP flow with maximum cwnd of 100 packets (RTT (s) against time (s); quick start, slow start, fast start)
Figure 4.3: RTT for one regular TCP flow with maximum cwnd of 200 packets (RTT (s) against time (s); quick start, slow start, fast start)

The sharp increase in RTT within the first 2.5 s of simulation time is highest when the TCP flow has a maximum cwnd larger than the optimum window size of the link. RTT variation grows as the maximum cwnd increases; this is seen in Figures 4.2 and 4.3, where the RTT variation of the flow with the bigger maximum cwnd is higher than that of the smaller one. Additionally, for flows with a maximum cwnd greater than or equal to the optimum congestion window, queuing delay is a significant component of RTT and pushes both the variation and the RTT value well above 0.24 s, whereas for a flow with a maximum congestion window below the optimum window, the increase in RTT is due to packet processing time at the receiver and link delay rather than queuing delay, since the buffer is always empty.
Comparing the startup schemes, fast start remained the most stable as the maximum congestion window increased. The oscillation of RTT above 0.16 s limits the optimum throughput that a single TCP flow can achieve for any value of the maximum congestion window. Since the RTT values increase with window size, RTT appears to be proportional to traffic volume. Likewise, RTT grows as the difference between the TCP window size and the link's BDP shrinks, and when the maximum window is well above the optimum queue length and optimum window size, RTT increases further.

To obtain the number of round trips used for capacity estimation (CE), we used the minimum time to send data and receive an ACK, as stated in equation (4.1):

1 RTT = 0.16 s (4.1)
1 s = 6 RTT (4.2)

From equation (4.1) we derived equation (4.2), which gives the number of round trips within 1 s of simulation time. Equation (4.1) is based on the fact that the minimum RTT is 160 ms, and equation (4.2) is used to obtain the number of round trips available for capacity estimation for each scheme. To get the actual number of round trips used during a simulation, we multiplied the time taken for the RTT to reach a stable state by the number of round trips per second from equation (4.2); for example, a scheme whose RTT stabilises after about 0.5 s has used roughly 0.5 × 6 ≈ 3 round trips for capacity estimation, as reported in Table 4.1. We present the analysis next.

Table 4.1 shows a comparative analysis of RTT in terms of average, variance, standard deviation, coefficient of variation (COV), the time interval to estimate capacity, and the number of round trips to estimate the available bandwidth. Each startup scheme has its own way of determining the available bandwidth and the initial window. The common feature is that RTT increases with window size, while the number of round trips needed to determine the available bandwidth remains the same.
However, the round trip time is lower whenever the flow's maximum cwnd is less than the optimum window size, i.e. the BDP of the link. From the RTT analysis in Table 4.1, all schemes perform identically when there is only one TCP flow with a maximum cwnd below the optimum window size. The number of round trips for capacity estimation increases with congestion, but the RTT does not grow without bound as the maximum cwnd increases. Fast start maintains a lower average RTT and a shorter window-adjustment time for capacity estimation than both slow start and quick start, hence fast start performs better than the other two schemes.
Maximum window (packets)   Scheme        RTT average (s)   RTT variance   RTT std deviation   RTT COV   Time to adjust window (s)   No. of RTT for CE
50                         Slow start    0.18              0.001          0.039               0.22      0.5                          3
                           Quick start   0.18              0.001          0.039               0.22      0.5                          3
                           Fast start    0.18              0.001          0.039               0.22      0.5                          3
                           (All schemes had the same coefficient of variation and number of round trips for capacity estimation.)
100                        Slow start    0.71              0.90           0.95                1.34      2.4                          15
                           Quick start   0.47              0.29           0.54                1.15      2.2                          14
                           Fast start    0.52              0.40           0.63                1.21      2.0                          12
                           (Round trips for capacity estimation and coefficient of variation increased with window size; fast start needed the fewest round trips.)
200                        Slow start    0.84              0.90           1.18                1.41      2.4                          15
                           Quick start   0.75              0.99           0.99                1.32      2.2                          14
                           Fast start    0.61              0.60           0.77                1.28      2.0                          12
                           (Coefficient of variation increased further with window size; fast start had the lowest COV and the fewest round trips for capacity estimation.)

Table 4.1: Comparative Analysis of RTT for One Regular TCP Flow
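The statistics in Table 4.1 were computed in MATLAB, and the exact script is not reproduced in the report. The Python sketch below only illustrates how the tabulated quantities can be obtained from an RTT series; the population-variance formula, the rounding-up of the round-trip count, and the sample values are assumptions for illustration.

```python
import math

RTTS_PER_SECOND = 6  # equation (4.2): 1 s is taken as about 6 RTT (minimum RTT = 0.16 s)

def rtt_statistics(rtt_samples):
    """Mean, variance, standard deviation and coefficient of variation of an RTT series."""
    n = len(rtt_samples)
    mean = sum(rtt_samples) / n
    var = sum((r - mean) ** 2 for r in rtt_samples) / n   # population variance (assumption)
    std = math.sqrt(var)
    return {"average": mean, "variance": var, "stdev": std, "cov": std / mean}

def round_trips_for_ce(time_to_stabilise_s: float) -> int:
    """Round trips used for capacity estimation: stabilisation time x round trips per second."""
    return math.ceil(time_to_stabilise_s * RTTS_PER_SECOND)  # rounding up is an assumption

if __name__ == "__main__":
    # A scheme whose RTT settles after 0.5 s uses about 3 RTTs; one that settles
    # after 2.4 s uses about 15, matching the entries in Table 4.1.
    print(round_trips_for_ce(0.5), round_trips_for_ce(2.4))
```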
4.2.2 Packet Loss for One Regular TCP Flow

Table 4.2 shows the packet drops, packets received and the computed loss ratio for each experiment. The packet loss ratio increases with the maximum cwnd. For the simulation with a maximum cwnd of 50 packets, the entire window is in transit without causing congestion because the TCP window is much smaller than the BDP of the link; no queue builds up at the router and there are no packet drops. For the flow with a maximum cwnd of 100 packets, however, there is a burst of traffic that leads to link congestion and hence packet loss. Slow start experienced the highest packet loss ratio of 19 x 10^-4, followed by quick start and fast start with the same loss ratio of 18 x 10^-4. When the flow's maximum cwnd is doubled to 200 packets (twice the optimum window size), the loss rate increases for all three schemes. Slow start again has the highest loss ratio of 88 x 10^-4, followed by quick start and fast start with loss ratios of 74 x 10^-4 and 56 x 10^-4 respectively.

The largest increase in loss ratio with increasing window size is seen in slow start, from 19 x 10^-4 to 88 x 10^-4, an increase of 0.69%. The loss ratio of quick start and fast start rises from 18 x 10^-4 to 74 x 10^-4 and 56 x 10^-4, increases of 0.56% and 0.38% respectively. This means fast start handles an increasing maximum congestion window better than the other two schemes in terms of packet drops. Slow start, on the other hand, is the worst performer as the congestion window grows, partly because of the exponential increase of its cwnd.

A scheme with a high packet loss rate suffers in routers that are biased against bursty traffic, such as Drop Tail routers. In such a case, a TCP connection whose cwnd can grow to a very large value may experience unnecessary retransmissions because the router cannot absorb even small bursts, which can lead to an unnecessary retransmit timeout. In such an environment, fast start is the best option, as supported by the analysis in Table 4.2.
Maximum cwnd (packets)   Scheme        Dropped packets   Received packets   Total packets   Loss ratio (10^-4)
50                       Slow start    0                 15389              15389           0
                         Quick start   0                 15389              15389           0
                         Fast start    0                 15389              15389           0
100                      Slow start    50                26276              26323           19
                         Quick start   50                27874              27924           18
                         Fast start    50                28098              28148           18
200                      Slow start    199               22520              22719           88
                         Quick start   199               26762              26961           74
                         Fast start    152               27071              27223           56

Table 4.2: Packet Loss Details for One Regular TCP Flow

4.2.3 Throughput from a Single Flow

Figures 4.4, 4.5 and 4.6 show the throughput of one TCP flow with maximum congestion windows of 50, 100 and 200 packets respectively. When the maximum congestion window is smaller than the optimum window size, all startup schemes achieve the same window-limited throughput at the same rate. When the maximum congestion window equals the optimum window size, all startup schemes achieve the optimum throughput of the link; however, fast start reaches it after about 10 s, slightly sooner than both slow start and quick start. When the maximum congestion window is larger than the optimum window, all the startup schemes again reach the optimum throughput, but after different times, much as in the case where the congestion window equals the optimum window. Fast start attains its throughput at almost the same speed as quick start but is more stable.

The average throughput achieved when the maximum cwnd is smaller than the optimum window is approximately 2.5 Mbps even though the bottleneck is 5 Mbps. This is because the flows were limited to a smaller maximum cwnd so as to represent TCP connections that are ultimately limited by the receiver advertised window rather than by link congestion.
Figure 4.4: Throughput for one regular TCP flow with maximum cwnd of 50 packets (throughput (Mbps) against time (s); slow start, fast start, quick start)
Figure 4.5: Throughput for one regular TCP flow with maximum cwnd of 100 packets (throughput (Mbps) against time (s); slow start, fast start, quick start)
Figure 4.6: Throughput for one regular TCP flow with maximum cwnd of 200 packets (throughput (Mbps) against time (s); slow start, fast start, quick start)

The graphs overlap where the throughput values are the same at every point for all schemes; the curves for slow start and fast start cannot be seen there because they are drawn first and the quick start curve is drawn last, on top of them. As the maximum congestion window increases, fast start takes a shorter time to reach the optimum throughput than slow start and quick start, showing that fast start determines the available bandwidth faster than both. This agrees with the smaller number of round trips observed when a TCP connection is set up with fast start. Even though each scheme eventually attains the optimum link throughput, the rate at which each scheme gets there differs.
We analyse the performance of the startup schemes with different maximum congestion windows in Table 4.3, in terms of maximum achieved throughput, average throughput, variance, standard deviation and coefficient of variation. Throughput values increase with the maximum cwnd. All startup schemes become less stable as congestion increases with higher maximum cwnd. Fast start is more robust than slow start and quick start, since it achieves a higher average throughput and its COV is lower than that of the other two schemes.
Maximum cwnd (packets)   Scheme        Maximum throughput attained (Mbps)   Average (Mbps)   Variance   Std deviation   COV
50                       Slow start    2.52                                  2.5              0.0009     0.03            0.012
                         Quick start   2.52                                  2.5              0.0009     0.03            0.012
                         Fast start    2.52                                  2.5              0.0009     0.03            0.012
                         (All schemes performed at the same level; there was no congestion to verify how each scheme would react.)
100                      Slow start    5                                     4.36             2.1        1.43            0.33
                         Quick start   5                                     4.6              0.8        0.89            0.19
                         Fast start    5                                     4.7              0.45       0.67            0.14
                         (Fast start had the highest average but the lowest standard deviation, giving the lowest coefficient of variation, so it performs close to its average throughput.)
200                      Slow start    4.9                                   3.74             2.55       1.6             0.43
                         Quick start   4.8                                   4.50             0.71       0.84            0.19
                         Fast start    4.9                                   4.54             0.65       0.80            0.18
                         (Fast start had the lowest COV; slow start has the highest deviation because it halves the window after packet loss and reverts to slow start.)

Table 4.3: Throughput Analysis for One Regular TCP Flow
We consider throughput the key metric for selecting the best scheme, so we use both the average throughput and the standard deviation in our analysis. We also use the coefficient of variation because we are comparing sample data from dissimilar schemes: a robust scheme attains throughput that does not deviate much from its average value. Based on the results of the single-flow study of the various startup schemes, fast start is the best performer. We therefore selected it for the multiple-flow simulations, with the objective of investigating its stability and robustness under varying maximum cwnd.

4.3 Multiple TCP Flows Implementing Fast Start

Since the single-flow results showed that the fast start scheme behaved best, the multiple-flow simulations are run with fast start only.

4.3.1 RTT for Two and Four TCP Flows

Figures 4.7 and 4.8 show the RTT for the different flow sizes, that is, when the tagged flow presents a smaller, equal or bigger maximum congestion window than the untagged TCP flows.

Figure 4.7: RTT for 2 TCP flows with fast start (RTT (s) against time (s); smaller, equal and higher cwnd)
Figure 4.8: RTT for 4 TCP flows with fast start (RTT (s) against time (s); smaller, equal and higher cwnd)
In the two-flow simulation, regardless of the maximum cwnd of the tagged flow, the RTT starts at the minimum value of 0.16 s and rises steadily to a peak of 0.98 s, 0.65 s or 0.3 s when the tagged flow's cwnd is smaller than, equal to, or larger than that of the other flow, respectively. After the peak, the RTT drops back to the minimum value and the pattern repeats. Bigger flows (i.e. flows with a higher maximum congestion window) outcompete smaller flows and continue to send more data for longer before they start experiencing competition from the smaller flows.

To verify the effect of doubling the number of flows, we simulated four TCP flows with various maximum congestion window sizes. The maximum RTT is about 0.33 s for all flow sizes and, unlike in the two-flow simulation, it never drops back to the minimum RTT; this pattern repeats as shown in Figure 4.8. This means that both the number of TCP flows and the maximum congestion window size affect RTT.

Table 4.4 shows the analysis of RTT for multiple flows in terms of the number of flows sharing the link, RTT average, variance, standard deviation and coefficient of variation. Generally, the average RTT increases with the number of TCP flows. The average RTT for two TCP flows with a maximum cwnd of 50 packets is higher because of the very high RTT during the first 5 s of the simulation, as shown in Figure 4.7; this is attributed to the flow with the larger maximum cwnd outcompeting it. As the number of flows increases, bigger flows cause more congestion and experience more delay, hence higher RTT values. The COV for four flows is lower than that for two flows, which shows that an increase in maximum congestion window size affects RTT more than an increase in the number of TCP flows. Overall, the performance of fast start is relatively steady for all the network settings.
Number of TCP connections   Maximum cwnd (packets)   RTT average (s)   Variance   Std deviation   COV
Two                         50                       0.39              0.070      0.26            0.67
                            100                      0.26              0.004      0.06            0.23
                            150                      0.24              0.003      0.05            0.21
                            (The highest average RTT and COV are registered with the smaller maximum cwnd, followed by the equal and the bigger maximum cwnd.)
Four                        20                       0.28              0.002      0.04            0.14
                            50                       0.29              0.003      0.06            0.21
                            80                       0.30              0.004      0.06            0.20
                            (COV is lower than with two flows because the maximum cwnd values used with four flows are smaller than with two flows; RTT values increase with the number of TCP flows.)

Table 4.4: RTT Analysis for Two and Four TCP Flows
4.3.2 Packet Loss Rate for Two and Four TCP Flows

This section presents the counts of received and dropped packets generated by the tagged flow in each simulation with a different maximum cwnd. We analysed the performance in terms of loss ratio for the different numbers of flows and the various maximum cwnd values, using equation (3.6) to compute the loss ratio. The values are shown in Table 4.5.

Number of TCP connections   Maximum cwnd (packets)   Dropped packets   Received packets   Total packets   Loss ratio (10^-4)
Two                         50                       0                 10182              10182           0
                            100                      53                12141              12194           44
                            150                      102               18942              19044           54
Four                        20                       0                 12824              12824           0
                            50                       0                 16664              16664           0
                            80                       30                19055              19085           16

Table 4.5: Packet Loss Details for Fast Start with Two and Four Flows using Standard TCP

The packet loss ratios with two flows were higher than those with four TCP flows for all flow sizes (smaller, equal and bigger). The decrease in packet loss is attributed to the maximum congestion window sizes, not to the number of flows: the maximum congestion window sizes used with four flows were smaller than those with two flows, and the buffer was sufficient to handle most packets. This implies that when the maximum congestion window size increased, the packet loss ratio increased, because the queue grows and congestion increases, leading to further delay.

Comparing the performance across window sizes (smaller, equal or bigger flow), the simulations with a smaller maximum cwnd had a lower packet loss ratio than those with a larger maximum cwnd. This is because a flow with a smaller maximum cwnd sends fewer data packets than one with a higher maximum cwnd; hence, in case of buffer overflow, the latter flow has many more packets in the queue and loses more of them.
4.3.3 Throughput for Two and Four TCP Flows

Figure 4.9: Throughput for two TCP flows with fast start (throughput (Mbps) against time (s); smaller, equal and higher cwnd)
Figure 4.10: Throughput for four TCP flows with fast start (throughput (Mbps) against time (s); smaller, equal and higher cwnd)

Figure 4.9 shows the throughput for two TCP flows when the tagged flow presents a smaller, equal or bigger maximum cwnd than the untagged flow, and Figure 4.10 shows the corresponding throughput for four TCP flows. As the number of flows increases, the bigger flow performs better than the smaller flows, because the flow with the larger maximum congestion window outcompetes those with smaller ones. Moreover, TCP congestion and flow control effectively fit all the windows into a single BDP, so flows with a bigger maximum cwnd contribute a bigger share of the packets in the overall TCP window.

We further analyse the throughput achieved by each flow size in terms of average throughput, standard deviation and coefficient of variation, to establish the robustness of fast start with a varying number of TCP flows and varying maximum congestion window sizes. The analysis is presented in Table 4.6.
Number of TCP connections   Maximum cwnd (packets)   Maximum throughput attained (Mbps)   Average (Mbps)   Variance   Std deviation   COV
Two                         50                       2                                     1.58             0.13       0.35            0.22
                            100                      2.25                                  1.62             0.91       0.95            0.59
                            200                      4                                     3.30             0.36       0.60            0.18
                            (The bigger flow gained the highest average throughput with the lowest standard deviation, hence the lowest coefficient of variation.)
Four                        20                       1                                     0.61             0.07       0.27            0.44
                            50                       2.5                                   1.56             0.28       0.53            0.34
                            80                       2.75                                  2.38             0.1        0.32            0.14
                            (The bigger flow still gained the highest throughput and the lowest COV compared to both the equal and the smaller flow.)

Table 4.6: Throughput Analysis for Two and Four TCP Flows
From Table 4.6, the number of TCP flows has less effect on performance than the maximum congestion window; that is, the variation in throughput is more pronounced when the maximum congestion window is changed. This is consistent with the way TCP manages congestion, fitting the congestion windows of the various flows into one BDP: the bandwidth utilisation of each flow depends on its share of packets in the overall TCP window. A bigger maximum cwnd achieves the highest average throughput with the lowest standard deviation, hence the lowest coefficient of variation. In particular, as the number of TCP flows increases, fast start is more stable with a bigger maximum cwnd than with a smaller one, and it achieves a higher average throughput whenever its maximum cwnd is higher than those of the competing flows. Hence, the fast start mechanism is robust and can withstand varying numbers of flows and congestion windows.

Under fair sharing of the link, each flow would utilise 2.5 Mbps and 1.25 Mbps when there are two and four flows respectively. This study shows that the bandwidth utilisation of the bigger flow is well above this fair share in both the two- and four-flow cases, which implies that the bigger flow is aggressive and unfair in terms of bandwidth sharing. Having seen how multiple flows behave when fast start is implemented, we next simulate the behaviour of fast start when implemented in Fibonacci TCP.

4.4 Investigation of the Effect of Faster Startup of a TCP Connection on the Performance of Fibonacci TCP

This section presents the results of the Fibonacci TCP simulations, covering RTT, packet loss and throughput for one, two and four Fibonacci TCP flows. In the single-flow simulation the maximum cwnd is equal to the optimal window, while in the multiple-flow simulations the maximum cwnd of the tagged flow is equal to that of the other flows. The discussion and analysis of the results are found in Sections 4.4.1, 4.4.2 and 4.4.3.
4.4.1 RTT for Fibonacci TCP

This section presents the results and discussion of RTT.

Figure 4.11: RTT for one Fibonacci TCP flow with maximum cwnd of 100 packets (RTT (s) against time (s); slow start, fast start)
Figure 4.12: RTT for two Fibonacci TCP flows with maximum cwnd of 100 packets (RTT (s) against time (s); slow start, fast start)
Figure 4.13: RTT for four Fibonacci TCP flows with maximum cwnd of 50 packets (RTT (s) against time (s); slow start, fast start)

Figures 4.11, 4.12 and 4.13 show the RTT for one flow with a maximum cwnd of 100 packets, two flows with a maximum cwnd of 100 packets, and four flows with a maximum cwnd of 50 packets, respectively. For the two- and four-flow simulations, the RTT shown is for the case where the tagged flow has the same maximum cwnd as the untagged flows. The RTT varies for both startup schemes throughout the simulation time, and it also increases as the simulation progresses.
This is attributed to the increasing volume of data, which leads to a bigger queue and more link congestion; queuing delay and link congestion are the main components of RTT. The increase in RTT is higher for slow start than for fast start. The RTT also increases with the number of Fibonacci TCP flows, and again the increase is higher for slow start than for fast start. We further examine the effect of increasing the number of flows in terms of RTT average, variance, standard deviation (STDEV) and coefficient of variation (COV) in Table 4.7.

From the RTT analysis in Table 4.7, the standard deviation for both startup schemes increases with the number of Fibonacci TCP flows. The average RTT for fast start is lower than that of slow start and, although its COV is higher than that of slow start, its RTT varies within values that are much lower than those of slow start. This shows that Fibonacci TCP is more stable with fast start than with slow start. The higher COV is due to the aggressive way fast start saturates the link with data.
Maximum cwnd (packets)   Scheme       RTT average (s)   RTT variance   RTT std deviation   RTT COV
100 (one flow)           Slow start   0.654             0.064          0.243               0.372
                         Fast start   0.586             0.002          0.040               0.068
                         (Fast start has the lowest STDEV, giving a lower COV than slow start; the exponential increase affects slow start more when the maximum cwnd of a single flow constitutes the whole TCP window.)
100 (two flows)          Slow start   1.855             0.027          0.165               0.089
                         Fast start   1.535             0.096          0.310               0.202
                         (COV is higher than for a single flow, reflecting the increase in STDEV; fast start has a lower RTT average but a higher STDEV, so its COV exceeds that of slow start.)
50 (four flows)          Slow start   1.835             0.963          0.981               0.535
                         Fast start   1.509             0.767          0.876               0.580
                         (As the number of flows increases, STDEV grows faster than the average RTT, giving a much higher COV; fast start has a higher COV but maintains a lower RTT average than slow start.)

Table 4.7: Comparative Analysis of RTT for One, Two and Four Fibonacci TCP Flows
4.4.2 Packet Loss for Fibonacci TCP

Table 4.8 shows the maximum cwnd, the numbers of dropped and received packets, the total number of packets and the loss ratio. A single Fibonacci TCP flow has fewer packet drops than multiple flows. Comparing the multiple-flow cases, two flows suffer more packet drops than four flows because the maximum cwnd used with two flows is higher than that used with four flows: a flow with a smaller maximum cwnd contributes fewer packets to the queue, so when the buffer overflows and packets are dropped, the smaller flow loses fewer packets than the flow with the bigger maximum cwnd. Furthermore, during the TCP startup phase it is the router's buffer that determines how bursty traffic is handled. A higher maximum cwnd can produce a more abrupt increase in traffic, and routers that are biased against bursty traffic drop packets as soon as such an abrupt increase occurs; this can also act as a mechanism against certain network security risks.

From the analysis in Table 4.8, the packet loss ratio is lower for one TCP connection than for multiple Fibonacci TCP flows under both slow start and fast start, because there is no contention for resources when there is only one flow. With multiple Fibonacci TCP flows, the packet loss ratio is higher for the flow with the bigger maximum cwnd; in terms of packet loss ratio, the maximum cwnd has more effect than the number of Fibonacci TCP flows. The high packet loss ratio of slow start is due to the exponential increase of its cwnd. In all simulations, fast start maintains a lower packet loss ratio than slow start.
Maximum cwnd (packets)   Scheme       Dropped packets   Received packets   Total packets   Loss ratio (10^-4)
100 (one flow)           Slow start   106               10577              10683           99
                         Fast start   114               12549              12663           90
                         (There is no competing flow; fast start has a lower loss ratio than slow start.)
100 (two flows)          Slow start   834               6836               7670            1087
                         Fast start   547               6316               6863            797
                         (The packet loss ratio is higher than for the single flow because there is competition and the aggregate maximum cwnd is doubled.)
50 (four flows)          Slow start   614               8911               9525            644
                         Fast start   380               7424               7804            486
                         (The packet loss ratio is lower than for two flows because the maximum cwnd for four flows is lower than for two flows; fast start maintains a lower packet loss ratio than slow start in all simulations.)

Table 4.8: Packet Loss for One and Multiple Fibonacci TCP Flows
4.4.3 Throughput for Fibonacci TCP

Figure 4.14: Throughput for one Fibonacci TCP flow with maximum cwnd of 100 packets (throughput (Mbps) against time (s); fast start, slow start)
Figure 4.15: Throughput for two Fibonacci TCP flows with maximum cwnd of 100 packets (throughput (Mbps) against time (s); fast start, slow start)
Figure 4.16: Throughput for four Fibonacci TCP flows with maximum cwnd of 50 packets (throughput (Mbps) against time (s); fast start, slow start)

Figure 4.14 shows the throughput for one Fibonacci TCP flow. There is more throughput fluctuation with slow start than with fast start, as shown by the smoother fast start curve. Figure 4.15 shows the throughput for two Fibonacci TCP flows: throughput decreases as the number of flows increases, because the tagged and untagged flows share the same bottleneck link and no single flow can take up the whole link. Figure 4.16 shows a further decrease in throughput when the number of Fibonacci TCP flows is doubled to four, coupled with a decrease in maximum cwnd.
We analyse the throughput and present the values in Table 4.9 for one and multiple Fibonacci TCP flows, in terms of maximum achieved throughput, average throughput, variance, standard deviation and coefficient of variation (COV).

From the analysis in Table 4.9, when there is only one TCP flow both startup schemes achieve the optimum throughput, but the throughput values decrease as the simulation time increases. The maximum achievable throughput decreases as the number of Fibonacci TCP flows grows. A decrease in throughput is also observed with a decrease in the advertised congestion window, represented by the lower maximum cwnd used with four Fibonacci TCP flows. Fast start performs better than slow start in terms of achieved average throughput. The best throughput is achieved when the maximum cwnd equals the optimum window size and there is a single Fibonacci TCP flow, because there is no competition and the maximum cwnd is not restricted to a value below the optimum TCP window. Furthermore, whenever the number of TCP flows increases, the bigger flow outcompetes the smaller flows and achieves a higher average throughput; among flows with different maximum cwnd values, the flow with the higher maximum cwnd is more aggressive in acquiring bandwidth and yields a higher average throughput than the one with the smaller maximum cwnd. In all the simulations, fast start performs better than slow start in terms of throughput, hence it is more robust and the better startup scheme for connections implementing a high-speed TCP such as Fibonacci TCP.
Maximum cwnd (packets)   Scheme       Maximum throughput (Mbps)   Average (Mbps)   Variance   STDEV   COV
100 (one flow)           Slow start   5                            4.7              0.140      0.37    0.08
                         Fast start   5                            4.9              0.002      0.05    0.01
                         (Each scheme attains the optimum throughput; fast start has a higher average and a lower STDEV, giving a lower COV than slow start.)
100 (two flows)          Slow start   2.6                          2.3              0.040      0.20    0.09
                         Fast start   2.8                          2.5              0.027      0.16    0.06
                         (Throughput for both schemes is below the optimal value; fast start attains a higher average throughput than slow start; COV is higher than with one flow.)
50 (four flows)          Slow start   1.6                          1.2              0.050      0.23    0.19
                         Fast start   1.6                          1.3              0.021      0.15    0.12
                         (Both schemes attain the same maximum throughput; fast start maintains a higher average throughput than slow start; COV increases with the number of Fibonacci TCP flows, and the robust nature of fast start gives it a lower COV than slow start, hence more stability.)

Table 4.9: Throughput Analysis for One, Two and Four Fibonacci TCP Flows
4.5 Discussion of the Fibonacci-Faster Start TCP Results

Considering the analysis in Section 4.4.3, the performance exhibited by Fibonacci TCP when it implements fast start could be due to a number of factors.

4.5.1 Fibonacci TCP

A congestion avoidance (CA) phase implementing Fibonacci TCP reduces the congestion window to 0.618034 times the current TCP window [7] and uses the re-start window value (set by fast start) as the current cwnd after packet loss, so it continues transmitting a large amount of data without entering the slow start phase. This ability of Fibonacci TCP to keep the cwnd at a high value after a loss results in high throughput and hence good link utilisation.

4.5.2 Fast Start

The TCP implementation uses fast start in two ways, with two different categories of window. Fast start is used to start a connection with a high initial window and to re-start transmission after packet loss or a long idle period using a high re-start window; it does not fall back to the loss window as the slow start mechanism does. Hence, fast start sustains a high data transmission rate. Fast start has its own mechanism for setting the optimum initial window, so the changes of window sizes specified in this study affect only the value of the maximum cwnd. During the congestion avoidance phase, TCP implementing fast start sets the re-start window to the optimum of the maximum cwnd value and the current window; hence, using a large value for the re-start window alone would not increase the congestion window, but Fibonacci TCP does so.
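To make the window-reduction rule in Section 4.5.1 concrete, the sketch below contrasts the multiplicative decrease of Fibonacci TCP (factor 0.618034, as cited from [7]) with the standard halving of the window. It is an illustrative model only, not the ns-2 implementation used in the study, and it deliberately omits the re-start window handling described above.

```python
# Illustrative comparison of window reduction after packet loss (not the ns-2 code
# used in the study). Fibonacci TCP cuts cwnd to 0.618034 x cwnd [7]; standard TCP
# congestion avoidance halves it.

GOLDEN_RATIO_FACTOR = 0.618034

def fibonacci_tcp_after_loss(cwnd: float) -> float:
    """Fibonacci TCP: reduce the congestion window to 0.618034 of its current value."""
    return GOLDEN_RATIO_FACTOR * cwnd

def standard_tcp_after_loss(cwnd: float) -> float:
    """Standard multiplicative decrease: halve the window."""
    return 0.5 * cwnd

if __name__ == "__main__":
    cwnd = 100.0  # packets, the optimum window of the simulated link
    print(f"Fibonacci TCP after loss: {fibonacci_tcp_after_loss(cwnd):.1f} packets")
    print(f"Standard TCP after loss:  {standard_tcp_after_loss(cwnd):.1f} packets")
    # The larger post-loss window (about 62 packets instead of 50) is one reason why
    # Fibonacci TCP with fast start sustains higher throughput in Table 4.9.
```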