COMPUTER NETWORKS
USHA BARAD
Assistant Professor
CHAPTER 3
TRANSPORT LAYER
Outlines
• Introduction and transport layer services
• Multiplexing and Demultiplexing
• Connection less transport (UDP)
• Principles of reliable data transfer
• Connection oriented transport (TCP)
• Congestion control
Introduction
• The transport layer is the core of the internet model.
• The application layer programs interact with each other
using the services of the transport layer.
• Transport layer provides services to the application
layer and takes services from the network layer.
[Figure: the transport layer (layer 4) sits between the application layer (layer 5) and the network layer (layer 3); it provides services to the application layer and takes services from the network layer.]
Transport Layer Concept
Transport Layer and Data Link Layer
Role of Transport Layer
• Application layer
– Communication for specific applications
– E.g., Hypertext Transfer Protocol (HTTP), File Transfer
Protocol (FTP), Network News Transfer Protocol (NNTP)
• Transport layer
– Communication between processes (e.g., socket)
– Relies on network layer and serves the application layer
– E.g., TCP and UDP
• Network layer
– Logical communication between nodes
– Hides details of the link technology
– E.g., IP
The protocols of the layer provide host-to-host communication
services for applications.
It provides services such as connection-oriented data
stream support, reliability, flow control, and multiplexing.
• Transport layer services are conveyed to an application via a
programming interface to the transport layer protocols. The
services may include the following features:
• Connection-oriented communication:
– It is normally easier for an application to interpret a connection
as a data stream rather than having to deal with the underlying
connection-less models, such as the datagram model of
the User Datagram Protocol (UDP) and of the Internet
Protocol (IP).
• Same order delivery:
– The network layer doesn't generally guarantee that packets of
data will arrive in the same order that they were sent, but often
this is a desirable feature.
– This is usually done through the use of segment numbering,
with the receiver passing them to the application in order.
• Reliability:
– Packets may be lost during transport due to network
congestion and errors.
– By means of an error detection code, such as a checksum,
the transport protocol may check that the data is not
corrupted, and verify correct receipt by sending
an ACK or NACK message to the sender.
– Automatic repeat request schemes may be used to
retransmit lost or corrupted data.
• Flow control:
– The rate of data transmission between two nodes must
sometimes be managed to prevent a fast sender from
transmitting more data than can be supported by the
receiving data buffer, causing a buffer overrun.
– This can also be used to improve efficiency by
reducing buffer underrun.
• Congestion avoidance:
– Congestion control can control traffic entry into a
telecommunications network, so as to avoid congestive collapse by
attempting to avoid oversubscription of any of the processing
or link capabilities of the intermediate nodes and networks and
taking resource reducing steps, such as reducing the rate of sending
packets.
– For example, automatic repeat requests may keep the network in a
congested state; this situation can be avoided by adding congestion
avoidance to the flow control, including slow-start.
– This keeps the bandwidth consumption at a low level in the beginning
of the transmission, or after packet retransmission.
• Multiplexing:
– Ports can provide multiple endpoints on a single node. For example,
the name on a postal address is a kind of multiplexing, and
distinguishes between different recipients of the same location.
– Computer applications will each listen for information on their own
ports, which enables the use of more than one network service at the
same time.
– It is part of the transport layer in the TCP/IP model, but of the session
layer in the OSI model.
Transport services and protocols
• provide logical communication
between app processes running on
different hosts
• transport protocols run in end
systems
– send side: breaks app messages
into segments, passes to
network layer
– rcv side: reassembles segments
into messages, passes to app
layer
• more than one transport protocol
available to apps
– Internet: TCP and UDP
[Figure: sender and receiver end systems, each with the full five-layer stack: application, transport, network, data link, physical.]
Transport vs. network layer
• Network layer:
–logical communication between hosts
• Transport layer:
–logical communication between processes
–relies on, enhances, network layer services
Internet transport-layer protocols
• reliable, in-order
delivery (TCP)
– congestion control
– flow control
– connection setup
• unreliable, unordered
delivery: UDP
• services not available:
– delay guarantees
– bandwidth guarantees
[Figure: transport protocols run only in the end systems (full application / transport / network / data link / physical stacks); intermediate routers implement only the network, data link and physical layers.]
Multiplexing and Demultiplexing
• Multiplexing and de-multiplexing are two very important functions
performed by the transport layer.
• The transport layer at the sender side receives data from different
applications, encapsulates each packet with a transport-layer header
and passes it on to the underlying network layer.
• At the receiver's side, the transport layer gathers the data,
examines its socket (destination port number) and passes the data to
the correct application.
• This is known as De-Multiplexing.
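As a concrete illustration of demultiplexing by destination port, the
sketch below is a minimal Python example (not from the slides; the
loopback address and ports 9001/9002 are arbitrary): two UDP sockets
are bound on the same host, and the OS delivers each datagram to the
socket whose port matches.

import socket

# Two application processes on the same host, each bound to its own UDP port.
# The transport layer demultiplexes incoming datagrams by destination port.
sock_a = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock_a.bind(("127.0.0.1", 9001))          # "application A"

sock_b = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock_b.bind(("127.0.0.1", 9002))          # "application B"

# A third socket acts as the sender; the destination port selects the receiver.
sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"for A", ("127.0.0.1", 9001))
sender.sendto(b"for B", ("127.0.0.1", 9002))

print(sock_a.recvfrom(1024))   # only the datagram addressed to port 9001
print(sock_b.recvfrom(1024))   # only the datagram addressed to port 9002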
• Suppose that there are two houses, one in India and the other in
America. In the house in India lives a person named James along with
his 5 children.
• In the house in America lives a person named Steve along with his 4
children. Every Sunday, each of James's 5 children writes a letter to
each of Steve's 4 children.
• The total number of letters is therefore 20. All the children write
their letters, put them in envelopes and hand them over to James.
• James then writes the source house address and the destination house
address on the envelopes and gives them to the postal service of India.
• The postal service of India adds whatever addressing it needs for the
destination country and hands the letters to the American postal
service.
• The American postal service reads the destination address on the
envelopes and delivers the 20 letters to Steve's house.
• Steve collects the letters from the postman and, looking at the name
of the respective child on each envelope, gives each letter to the
right child.
• In this example we have processes and the layers. Let
me explain.
• Processes = children
• Application Layer messages = envelopes
• Hosts = The two Houses
• Transport Layer Protocol = James and Steve
• Network Layer protocol = Postal Service
• When James collects all the letters from his children, he multiplexes
them: he gathers them together, with the respective child's name on
each letter and the house address on the envelope, and gives them to
the Indian postal service.
On the receiving side, Steve collects all the letters from the postal
service of America and de-multiplexes them: he checks which letter is
for which child and delivers each one accordingly.
Formal Definition
• Multiplexing (or muxing) is a way of sending
multiple signals or streams of information
over a communications link at the same time
in the form of a single, complex signal; the
receiver recovers the separate signals, a
process called demultiplexing (or demuxing).
Multiplexing
• Multiplexing is sending more than one signal on a
carrier.
• There are two standard types of multiplexing.
– Frequency-Division Multiplexing (FDM): the
medium carries a number of signals, which have
different frequencies; the signals are carried
simultaneously.
– Time-Division Multiplexing (TDM): different
signals are transmitted over the same medium but they
do so at different times – they take turns.
Multiplexing
Multiplexing allows one to select one of the many possible
sources.
Continue…
• There are several data inputs and one of them is
routed to the output (possibly the shared
communication channel).
– Like selecting a television channel (although that example
is FDM).
• In addition to data inputs, there must be select inputs.
– The select inputs determine which data input gets through.
• How many select pins are needed?
– Depends on number of data inputs.
Addresses
• All of the (data) inputs at hand are assigned
addresses.
• The address of the data input is used to select
which data input is placed on the shared channel.
• So in addition to the collection of data inputs,
there are selection (or address) inputs that pick
which of the data inputs gets through.
Connection Oriented Versus
Connectionless service
1. Reservation of resources: necessary for connection-oriented; not necessary for connectionless.
2. Utilization of resources: less for connection-oriented; good for connectionless.
3. State information: a lot of information must be stored for connection-oriented; not much for connectionless.
4. Guarantee of service: guaranteed for connection-oriented; no guarantee for connectionless.
5. Connection: must be established for connection-oriented; need not be established for connectionless.
6. Delays: more for connection-oriented; less for connectionless.
7. Overheads: less for connection-oriented; more for connectionless.
8. Packet travel: sequential for connection-oriented; random for connectionless.
9. Congestion due to overloading: not possible for connection-oriented; very much possible for connectionless.
How does demultiplexing work?
Reliable Vs. Unreliable
• If the application-layer program needs reliability, a reliable
transport-layer protocol is used, implemented by adding flow and error
control at the transport layer.
• But this service will be slower and more complex.
• But if the application layer program does not
need reliability because it uses its own flow and
error control mechanism, then an unreliable
service is used.
• UDP is a connectionless, unreliable protocol, while TCP is a
connection-oriented, reliable protocol.
UDP: User Datagram Protocol [RFC 768]
• What is a connectionless protocol?
– The device sending a message simply sends it
addressed to the intended recipient.
– If there are problems with the transmission, it may
be necessary to resend the data several times.
– The Internet Protocol (IP) and User
Datagram Protocol (UDP) are connectionless
protocols.
UDP: User Datagram Protocol [RFC 768]
• The User Datagram Protocol offers only a
minimal transport service -- non-guaranteed
datagram delivery -- and gives applications
direct access to the datagram service of the IP
layer.
• UDP is almost a null protocol; the only services it provides over IP
are checksumming of data and multiplexing by port number.
• UDP is a standard protocol with STD number 6.
• An application program running over UDP must deal
directly with end-to-end communication problems
that a connection-oriented protocol would have
handled -- e.g., retransmission for reliable delivery,
packetization and reassembly, flow control,
congestion avoidance, etc., when these are required.
• The service provided by UDP is an unreliable service
that provides no guarantees for delivery and no
protection from duplication.
• Compared to other transport protocols, UDP and
its UDP-Lite variant are unique in that they do not
establish end-to-end connections between
communicating end systems.
The UDP header contains four fields:
a 16-bit source port
a 16-bit destination port
a 16-bit length field
a 16-bit checksum
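To make these four fields concrete, here is a small Python sketch (an
illustration, not a production implementation) that packs a UDP header
with the struct module; the checksum is left at 0, which RFC 768
permits over IPv4 to mean "no checksum computed", and the ports and
payload are made up.

import struct

def build_udp_header(src_port: int, dst_port: int, payload: bytes) -> bytes:
    """Pack the 8-byte UDP header: source port, destination port, length, checksum."""
    length = 8 + len(payload)        # header (8 bytes) + payload, in bytes
    checksum = 0                     # 0 = checksum not computed (allowed over IPv4)
    return struct.pack("!HHHH", src_port, dst_port, length, checksum)

header = build_udp_header(5000, 53, b"example query")
print(struct.unpack("!HHHH", header))   # (5000, 53, 21, 0)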
• UDP is basically an application interface to IP. It adds no
reliability, flow-control, or error recovery to IP. It simply
serves as a multiplexer/demultiplexer for sending and
receiving datagrams, using ports to direct the
datagrams.
• UDP provides a mechanism for one application to send a
datagram to another.
• Be aware that UDP and IP do not provide guaranteed
delivery, flow-control, or error recovery, so these must
be provided by the application.
• Standard applications using UDP include:
– Trivial File Transfer Protocol (TFTP)
– Domain Name System (DNS) name server
– Remote Procedure Call (RPC), used by the Network
File System (NFS)
– Simple Network Management Protocol (SNMP)
– Lightweight Directory Access Protocol (LDAP)
Message Header Format for UDP
• Source Port: UDP packets from a client use this as a service access
point (SAP) to indicate the session on the local client that originated
the packet. UDP packets from a server carry the server SAP in this
field.
• Destination Port: UDP packets from a client use this as a service
access point (SAP) to indicate the service required from the remote
server. UDP packets from a server carry the client SAP in this field.
• UDP length: The number of bytes comprising the combined UDP header
information and payload data.
• UDP Checksum: A checksum to verify that the end-to-end data has not
been corrupted by routers or bridges in the network or by the
processing in an end system.
UDP checksum
• Goal: detect “errors” (e.g., flipped bits) in
transmitted segment
Sender:
• treat segment contents as
sequence of 16-bit
integers
• checksum: addition (1’s
complement sum) of
segment contents
• sender puts checksum
value into UDP checksum
field
Receiver:
• compute checksum of
received segment
• check if computed checksum
equals checksum field value:
– NO - error detected
– YES - no error detected.
But maybe errors
nonetheless? More later
….
Internet Checksum Example
• Note
– When adding numbers, a carryout from the
most significant bit needs to be added to the
result
• Example: add two 16-bit integers
    1110011001100110
  + 1101010101010101
  -------------------
  1 1011101110111011   (raw sum; the carry out of the MSB wraps around)
  wraparound sum:  1011101110111100
  checksum:        0100010001000011   (1's complement of the sum)
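The same arithmetic can be checked in code. This is a minimal sketch of
the 16-bit one's complement sum used above (assuming an even number of
bytes; real implementations also pad odd-length data before summing).

def ones_complement_sum16(data: bytes) -> int:
    """Add the data as 16-bit words, folding any carry back in (wraparound)."""
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)   # wrap the carry around
    return total

def internet_checksum(data: bytes) -> int:
    return ~ones_complement_sum16(data) & 0xFFFF   # 1's complement of the sum

words = bytes.fromhex("e666") + bytes.fromhex("d555")   # the two example integers
print(f"{internet_checksum(words):016b}")               # 0100010001000011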
Fields and descriptions:
• iph_ver: 4 bits, the version of IP currently used; the IP version is
4 (the other version is IPv6).
• iph_ihl: 4 bits, the IP header length in 32-bit words, pointing to
the beginning of the data. The minimum value for a correct header is 5,
so a value of 5 means a 20-byte header (5 * 4).
• iph_tos: 8 bits, type of service; controls the priority of the packet.
• iph_len: 16 bits, the total length of the IP datagram (header and
data) in bytes. This includes the IP header, the ICMP/TCP/UDP header
and the payload size in bytes.
• iph_ident: a sequence number mainly used for reassembly of fragmented
IP datagrams.
• iph_flag: a 3-bit field of which the two low-order (least-significant)
bits control fragmentation. The low-order bit specifies whether the
packet may be fragmented; the middle bit specifies whether the packet
is the last fragment in a series of fragmented packets. The third
(high-order) bit is not used.
• iph_offset: the fragment offset, used for reassembly of fragmented
datagrams. The first 3 bits are the fragment flags: the first is always
0, the second is the do-not-fragment bit (set by iph_offset = 0x4000)
and the third is the more-fragments-following bit (iph_offset = 0x2000).
• iph_ttl: 8 bits, time to live; the number of hops (routers to pass)
before the packet is discarded and an ICMP error message is returned.
The maximum is 255.
• iph_protocol: 8 bits, the transport-layer protocol. It can be TCP (6),
UDP (17), ICMP (1), or whatever protocol follows the IP header.
• iph_chksum: 16 bits, a checksum computed over the IP header only.
• iph_source: 32 bits, the source IP address, converted to long format.
• iph_dest: 32 bits, the destination IP address, converted to long
format, e.g. by inet_addr().
• Options: variable length.
• Padding: variable length; the internet header padding is used to
ensure that the internet header ends on a 32-bit boundary.
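As an illustration of how these fields are laid out, the sketch below
unpacks the fixed 20-byte IPv4 header with Python's struct module. The
dictionary keys mirror the iph_* names above; the sample header and its
values are made up, and options are not parsed.

import struct

def parse_ipv4_header(raw: bytes) -> dict:
    """Unpack the fixed 20-byte IPv4 header (options, if any, are ignored)."""
    (ver_ihl, tos, total_len, ident, flags_off,
     ttl, proto, chksum, src, dst) = struct.unpack("!BBHHHBBH4s4s", raw[:20])
    return {
        "iph_ver": ver_ihl >> 4,            # version (4 for IPv4)
        "iph_ihl": ver_ihl & 0x0F,          # header length in 32-bit words
        "iph_tos": tos,
        "iph_len": total_len,               # header + payload, in bytes
        "iph_ident": ident,
        "iph_flag": flags_off >> 13,        # 3 flag bits
        "iph_offset": flags_off & 0x1FFF,   # 13-bit fragment offset
        "iph_ttl": ttl,
        "iph_protocol": proto,              # 6 = TCP, 17 = UDP, 1 = ICMP
        "iph_chksum": chksum,
        "iph_source": ".".join(str(b) for b in src),
        "iph_dest": ".".join(str(b) for b in dst),
    }

# A hand-built sample header: IPv4, IHL 5, TTL 64, protocol 17 (UDP).
sample = struct.pack("!BBHHHBBH4s4s", 0x45, 0, 28, 1, 0, 64, 17, 0,
                     bytes([192, 168, 0, 1]), bytes([192, 168, 0, 2]))
print(parse_ipv4_header(sample))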
Ports and sockets
• This section introduces the concepts of port and socket, which are
needed to determine which local process at a given host actually
communicates with which process, at which remote host, using
which protocol. If this sounds confusing, consider the following:
– An application process is assigned a process identifier number
(process ID), which is likely to be different each time that process is
started.
– Process IDs differ between operating system platforms, hence they
are not uniform.
– A server process can have multiple connections to multiple clients at
a time, hence simple connection identifiers would not be unique.
• The concept of ports and sockets provides a way to uniformly and
uniquely identify connections and the programs and hosts that are
engaged in them, irrespective of specific process IDs.
• Ports:
• Each process that wants to communicate with another process
identifies itself to the TCP/IP protocol suite by one or more
ports.
• A port is a 16-bit number, used by the host-to-host protocol to
identify to which higher level protocol or application program
(process) it must deliver incoming messages. There are two
types of port:
– Well-known:
• Well-known ports belong to standard servers, for example,
Telnet uses port 23. Well-known port numbers range
between 1 and 1023.
• Well-known port numbers are typically odd, because early
systems using the port concept required an odd/even pair of
ports for duplex operations.
– Ephemeral:
• Clients do not need well-known port numbers because they
initiate communication with servers and the port number
they are using is contained in the UDP datagram sent to the
server.
• Ephemeral port numbers have values greater than 1023,
normally in the range 1024 to 65535. A client can use any
number allocated to it, as long as the combination of
<transport protocol, IP address, port number> is unique.
• Sockets:
– The socket interface is one of several application programming
interfaces (APIs) to the communication protocols. Designed to be a
generic communication programming interface, it was first introduced
by 4.2 BSD.
– 4.2 BSD allowed two different communication domains: Internet and
UNIX.
– 4.3 BSD added the Xerox Network System (XNS) protocols, and 4.4 BSD
added an extended interface to support the ISO OSI protocols.
– A socket address is the triple:
<protocol, local-address, local-process>
– For example, in the TCP/IP suite:
<tcp, 193.44.234.3, 12345>
Principles of Reliable data transfer
• Making sure that the packets sent by the sender are
correctly and reliably received by the receiver amid
network errors, i.e., corrupted/lost packets.
– Can be implemented at the link, network or transport layer (LL, NL or TL) of the protocol stack.
• When and why should this be used?
– Link Layer
• Rarely done over twisted-pair or fiber-optic links
• Usually done over lossy point-to-point links (e.g., wireless), for
performance improvement rather than correctness
– Network/Transport Layers
• Necessary if the application requires the data to be reliably delivered
to the receiver, e.g., file transfer
Reliable Delivery: Service Model
– Reliable, In-order delivery
• Typically done when reliability is implemented at the
transport layer, e.g., TCP
• Example application: File transfer
Reliable, In-order Delivery
[Figure: a sending-side and a receiving-side Reliable Data Transfer protocol sit above an unreliable channel; the interfaces are RDT_Send and Deliver_Data towards the application, and UDT_Send and RDT_Receive towards the channel.]
Reliable Delivery: Assumptions
We’ll:
• Consider only unidirectional data transfer
– A sender sending packets to a receiver
– Bidirectional communication is a simple extension,
where there are 2 sender/receiver pairs
• Start with a simple protocol and make it more complex as we continue
RDT over Unreliable Channel
• Channel may flip bits in packets/lose packets
– The received packet may have been corrupted during
transmission, or dropped at an intermediate router due to
buffer overflow
• The question: how to recover from errors?
• ACKs, NACKs, Timeouts… Next
[Figure: the same sender-side and receiver-side RDT protocols, now operating over a channel that may corrupt or lose packets.]
RDT over Unreliable Channel
• Two fundamental mechanisms to accomplish
reliable delivery over Unreliable Channels
– Acknowledgements (ACK), Negative ACK (NACK)
• Small control packets (a header without any data) that a protocol
sends back to its peer, saying either that it has received an earlier
packet (positive ACK) or that it has not received a packet (NACK).
Sent by the receiver to the sender.
– Timeouts
• Set by the sender for each transmitted packet
• If an ACK is received before the timer expires, then the
packet has made it to the receiver
• If the timeout occurs, the sender assumes that the packet is
lost (corrupted) and retransmits the packet
ARQ
• Automatic repeat request (ARQ) is a protocol for
error control in data transmission. When the
receiver detects an error in a packet, it automatically
requests the transmitter to resend the packet.
• The general strategy of using ACKs (NACKs) and
timeouts to implement reliable delivery is called
Automatic Repeat reQuest (ARQ)
• 3 ARQ Mechanisms for Reliable Delivery
– Stop and Wait
– Concurrent Logical Channels
– Sliding Window
Stop and Wait
• Simplest ARQ protocol
• Sender:
– Send a packet
– Stop and wait until an ACK
arrives
– If received ACK, send the
next packet
– If timeout, Retransmit the
same packet
• Receiver:
– When you receive a packet
correctly, send an ACK
[Figure: stop-and-wait timeline between sender and receiver, with the timeout interval marked.]
• Stop and wait with ARQ: Automatic Repeat reQuest
(ARQ), an error control method, is incorporated with stop
and wait flow control protocol.
– If an error is detected by the receiver, it discards the frame and
sends a negative ACK (NAK), causing the sender to re-send the frame
– In case a frame never reaches the receiver, the sender has a timer:
each time a frame is sent, the timer is set
→ If no ACK or NAK is received during the timeout period, it
re-sends the frame
– Timer introduces a problem: Suppose timeout and sender
retransmits a frame but receiver actually received the
previous transmission → receiver has duplicated copies
– To avoid receiving and accepting two copies of the same frame, frames
and ACKs are alternately labeled 0 and 1: ACK0 acknowledges frame 1 and
ACK1 acknowledges frame 0 (the ACK number names the frame expected next)
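The sketch below simulates this alternating-bit stop-and-wait logic in
Python over an in-memory "channel". It is a teaching simplification
under stated assumptions: only frame loss is modeled (ACK loss is not),
the loss probability and seed are arbitrary, and the ACK here simply
echoes the sequence number of the frame just accepted.

import random

def stop_and_wait_send(frames, loss_prob=0.3, seed=1):
    """Simulate stop-and-wait ARQ with alternating 0/1 sequence numbers."""
    rng = random.Random(seed)
    seq, expected, delivered = 0, 0, []
    for frame in frames:
        while True:                              # retransmit until ACKed
            if rng.random() < loss_prob:
                print(f"frame {seq} lost, timeout, retransmit")
                continue                         # timeout: resend same frame
            if seq == expected:                  # receiver: new frame, accept
                delivered.append(frame)
                expected ^= 1
            print(f"frame {seq} received, ACK{seq}")
            break
        seq ^= 1                                 # sender moves to next frame
    return delivered

print(stop_and_wait_send(["a", "b", "c"]))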
An important link parameter is defined by
  a = (d / V) / (L / R) = R d / (V L)
where:
 R is the data rate (bps), d is the link distance (m), V is the
propagation velocity (m/s) and L is the frame length (bits)
 In the error-free case, the efficiency or maximum link utilization of
stop-and-wait with ARQ is:
  U = 1 / (1 + 2a)
Recovering from Error
• Does this protocol work?
• When an ACK is lost or an early timeout occurs, how does the receiver
know whether a received packet is a retransmission or a new packet?
– Use sequence numbers on both packets and ACKs
[Figure: three failure scenarios for stop-and-wait: packet lost, ACK lost, and an early (premature) timeout.]
Stop & Wait with Seq #s
[Figure: the same three scenarios (packet lost, ACK lost, early timeout) handled correctly once packets and ACKs carry sequence numbers.]
Performance of Stop and Wait
• Can only send one packet per round trip
• network protocol limits use of physical resources!
[Figure: stop-and-wait timing. The first packet bit is transmitted at
t = 0 and the last at t = L / R; the last bit arrives at the receiver,
which sends an ACK; the ACK arrives and the next packet is sent at
t = RTT + L / R.]
  U_sender = (L / R) / (RTT + L / R) = 0.008 / 30.008 = 0.00027
i.e. about 0.027% utilization (here L / R = 0.008 ms and RTT = 30 ms).
Pipelining: Increasing Utilization
[Figure: pipelined timing. The sender transmits three packets
back-to-back starting at t = 0; the ACK for the first packet arrives at
t = RTT + L / R, by which time all three packets have been sent.]
  U_sender = (3 * L / R) / (RTT + L / R) = 0.024 / 30.008 = 0.0008
Utilization increases by a factor of 3.
• Pipelining: sender allows multiple, “in-flight”, yet-to-be-
acknowledged pkts without waiting for first to be ACKed to keep
the pipe full
– Capacity of the Pipe = RTT * BW
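The two utilization figures above can be reproduced with a few lines of
Python, assuming (as the numbers imply) L = 1000 bytes, R = 1 Gbps and
RTT = 30 ms.

L = 8000          # packet size in bits (1000 bytes)
R = 1e9           # link rate in bits per second (1 Gbps)
RTT = 0.030       # round-trip time in seconds (30 ms)

transmit = L / R                                   # 8 microseconds
u_stop_and_wait = transmit / (RTT + transmit)
u_pipelined_3 = 3 * transmit / (RTT + transmit)    # 3 packets per RTT

print(f"stop-and-wait utilization: {u_stop_and_wait:.5f} ({u_stop_and_wait:.3%})")
print(f"pipelined (3 packets)    : {u_pipelined_3:.5f} ({u_pipelined_3:.3%})")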
Sliding Window Protocols
• Reliable, in-order delivery of packets
• Sender can send “window” of up to N,
consecutive unack’ed packets
• Receiver makes sure that the packets are
delivered in-order to the upper layer
• 2 Generic Versions
– Go-Back-N
– Selective Repeat
• For large link parameter a, stop and wait protocol is inefficient.
• A universally accepted flow control procedure is the sliding
window protocol.
– Frames and acknowledgements are numbered using sequence
numbers
– Sender maintains a list of sequence numbers (frames) it is allowed
to transmit, called sending window
– Receiver maintains a list of sequence numbers it is prepared to
receive, called receiving window
– A sending window of size N means that sender can send up to N
frames without the need for an ACK
– A window size of N implies buffer space for N frames
– For an n-bit sequence number we have 2^n numbers: 0, 1, ..., 2^n - 1,
but the maximum window size is N = 2^n - 1 (not 2^n)
– ACK3 means that receiver has received frame 0 to frame 2
correctly, ready to receive frame 3 (and rest of N frames within
window)
• In the error-free case, the efficiency or maximum link utilization of
the sliding window protocol is:
  U = 1                  if N >= 2a + 1
  U = N / (1 + 2a)       if N < 2a + 1
• Thus it is able to maintain efficiency for a large link parameter a:
just use a large window size N.
• Note that U = 1 means that the link has no idle time: there is always
something on it, either data frames or ACKs.
Continue…
• What should be the size of pipeline?
• How do we handle errors:
– Sender and receiver maintain buffer space
– Receiver window = 1,
– Sender window = n
• Consider the case of 3-bit sequence number
with maximum window size N = 7.
• This illustration shows that Sending and
receiving windows can shrink or grow during
operation.
• The receiver does not need to acknowledge every frame.
• If both sending and receiving window sizes are N
= 1, the sliding window protocol reduces to the
stop-and-wait.
• In practice, error control must be incorporated
with flow control, and we next discuss two
common error control mechanisms.
Sliding Window:
Generic Sender/Receiver States
[Figure: generic sender and receiver window state. Sender side: packets
are Sent & Acked, Sent but Not Acked, OK to Send, or Not Usable,
bounded by Last ACK Received (LAR), Last Packet Sent (LPS) and the
Sender Window Size. Receiver side: packets are Received & Acked,
Acceptable, or Not Usable, bounded by Next Packet Expected (NPE), Last
Packet Acceptable (LPA) and the Receiver Window Size.]
Sliding Window- Sender Side
• The sender maintains 3 variables
– Sender Window Size (SWS)
• Upper bound on the number of in-flight packets
– Last Acknowledgement Received (LAR)
– Last Packet Sent (LPS)
– We want LPS – LAR <= SWS
Sliding Window- Receiver Side
• The receiver maintains 3 variables
– Receiver Window Size (RWS)
• Upper bound on the number of buffered packets
– Last Packet Acceptable (LPA)
– Next Packet Expected (NPE)
– We want LPS – NPE + 1 <= RWS
Go-back-n ARQ
• The basic idea of go-back-n error control is: If
frame i is damaged, receiver requests
retransmission of all frames starting from frame i
• An example:
• Notice that all possible cases of damaged frame
and ACK / NAK must be taken into account
• For an n-bit sequence number, the maximum window size is N = 2^n - 1,
not N = 2^n
• Consider n = 3; if N = 8, the following may happen:
• Suppose that sender transmits frame 0 and gets an ACK1
• It then transmits frames 1,2,3,4,5,6,7,0 (this is allowed, as
they are within the sending window of size 8) and gets
another ACK1
• This could mean that all eight frames were received
correctly
• It could also mean that all eight frames were lost, and
receiver is repeating its previous ACK1
• With N = 7, this confusing situation is avoided
[Figure: Go-Back-N window state. Sender: SWS = N, bounded by LAR and
LPS as before. Receiver: RWS = 1 packet, tracking only the Next Packet
Expected (NPE).]
• SWS = N: sender can send up to N consecutive unACKed packets
• RWS = 1: receiver has buffer for just 1 packet
• Receiver always sends an ACK for the correctly received packet with
the highest in-order sequence number (cumulative ACK)
• Out-of-order packet: discard it and re-ACK the packet with the
highest in-order sequence number
• Timeout: retransmit ALL packets that have been previously sent but
not yet ACKed; hence the name Go-Back-N
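A compact Go-Back-N sender might keep the state below. This Python
sketch is a teaching simplification under stated assumptions: a single
timer, cumulative ACKs, an in-memory send buffer, and no channel model.

class GoBackNSender:
    def __init__(self, sws: int):
        self.SWS = sws
        self.base = 0            # oldest unACKed sequence number
        self.next_seq = 0
        self.buffer = {}         # seq -> packet, kept until ACKed

    def send(self, packet) -> bool:
        if self.next_seq - self.base >= self.SWS:
            return False                     # window full
        self.buffer[self.next_seq] = packet  # keep a copy for retransmission
        self.next_seq += 1
        return True

    def on_ack(self, ack: int) -> None:
        """Cumulative ACK: everything up to and including ack is delivered."""
        for seq in range(self.base, ack + 1):
            self.buffer.pop(seq, None)
        self.base = max(self.base, ack + 1)

    def on_timeout(self) -> list:
        """Go back N: retransmit every packet sent but not yet ACKed."""
        return [self.buffer[seq] for seq in range(self.base, self.next_seq)]

g = GoBackNSender(sws=4)
for p in "abcd":
    g.send(p)
g.on_ack(1)                  # packets 0 and 1 acknowledged
print(g.on_timeout())        # ['c', 'd']: packets 2 and 3 go back out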
GBN in action (SWS = 4)
• Why use GBN?
– Very simple receiver
• Why NOT use GBN?
– Throwing away out-of-order packets at the receiver
results in extra transmissions, thus lowering the
channel utilization:
• The channel may become full of retransmissions of old
packets rather than useful new packets
– Can we do better?
• Yes: Buffer out-of-order packets at the receiver and do
Selective Repeat (Retransmissions) at the sender
Selective-Reject ARQ
• In selective-reject ARQ error control, the only frames retransmitted
are those that receive a NAK or that time out.
• An illustrative example:
• Selective-reject would appear to be more
efficient than go-back-n, but it is harder to
implement and less used
• The window size is also more restrictive: for an n-bit sequence
number, the maximum window size is N = 2^n / 2 = 2^(n-1), to avoid
possible confusion
• Go-back-n and selective-reject can be seen as
trade-offs between link bandwidth (data rate)
and data link layer buffer space.
– If link bandwidth is large but buffer space is scarce,
go-back-n is preferred
– If link bandwidth is small but buffer space is plentiful,
selective-reject is preferred
[Figure: Selective Repeat window state. Sender: SWS = N, bounded by LAR
and LPS. Receiver: RWS = N, bounded by NPE and LPA, so up to N packets
can be buffered.]
• SWS = RWS = N consecutive packets: Sender can send up to N
consecutive unack’ed pkts, Receiver can buffer up to N consecutive packets
• Receiver individually acknowledges all correctly received pkts
– buffers pkts, as needed, for eventual in-order delivery to upper layer
• Sender only resends pkts for which ACK not received
– sender timer for each unACKed pkt
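The corresponding receiver-side behaviour, buffering out-of-order
packets for eventual in-order delivery, can be sketched as below in
Python (again a simplification: receive() just returns the individual
ACK, and the window size of 4 is an arbitrary illustration value).

class SelectiveRepeatReceiver:
    def __init__(self, rws: int):
        self.RWS = rws
        self.NPE = 0          # next packet expected
        self.buffer = {}      # out-of-order packets waiting for the gap to fill
        self.delivered = []   # handed to the upper layer, in order

    def receive(self, seq: int, packet):
        if not (self.NPE <= seq < self.NPE + self.RWS):
            return seq                      # outside the window: just re-ACK
        self.buffer[seq] = packet           # buffer, possibly out of order
        while self.NPE in self.buffer:      # deliver any in-order run
            self.delivered.append(self.buffer.pop(self.NPE))
            self.NPE += 1
        return seq                          # individual (selective) ACK

r = SelectiveRepeatReceiver(rws=4)
for seq, pkt in [(0, "a"), (2, "c"), (3, "d"), (1, "b")]:   # packet 1 is late
    r.receive(seq, pkt)
print(r.delivered)    # ['a', 'b', 'c', 'd'] delivered in order despite reordering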
Selective repeat in action
Transmission Control Protocol (TCP)
• The Transmission Control Protocol (TCP) is a connection-
oriented reliable protocol.
• It provides a reliable transport service between pairs of
processes executing on End Systems (ES) using the network
layer service provided by the IP protocol.
• The Transmission Control Protocol (TCP) was initially
defined in RFC 793.
• TCP is a standard protocol with STD number 7.
• TCP is used by a large number of applications,
including :
– Email (SMTP, POP, IMAP)
– World wide web ( HTTP, ...)
– Most file transfer protocols ( ftp, peer-to-peer file
sharing applications , ...)
– remote computer access : telnet, ssh, X11, VNC, ...
– non-interactive multimedia applications : flash
Connection-Oriented Service
• In a connection oriented service, a connection
is established between source and
destination.
• Then the data is transferred and at the end
the connection is released.
Connection Establishment
Three protocol scenarios for establishing a connection using a three-way handshake.
CR denotes CONNECTION REQUEST.
(a) Normal operation,
(b) Old CONNECTION REQUEST appearing out of nowhere.
(c) Duplicate CONNECTION REQUEST and duplicate ACK.
Handshaking Protocol
• handshaking is an automated process of negotiation that
dynamically sets parameters of a communications channel
established between two entities before normal
communication over the channel begins.
• It follows the physical establishment of the channel and
precedes normal information transfer.
• The handshaking process usually takes place in order to
establish rules for communication when a computer sets about
communicating with a foreign device.
• When a computer communicates with another device like a
modem, printer, or network server, it needs to handshake with
it to establish a connection.
• A simple handshaking protocol might only involve
the receiver sending a message meaning "I received
your last message and I am ready for you to send me
another one."
• A more complex handshaking protocol might allow
the sender to ask the receiver if it is ready to receive
or for the receiver to reply with a negative
acknowledgement meaning "I did not receive your
last message correctly, please resend it"
Two Way Handshaking Protocol
Two Army Problem
• Protocol 1: two way handshake:
• Chief of blue army 1 sends a message to the chief of
blue army 2 stating “I propose we attack in the morning
of January 1. Is it OK with you?”.
• Suppose that the messenger reaches blue army 2 and the chief of blue
army 2 agrees with the idea.
• He sends the message "yes" back to army 1, and assume that this
message also gets through.
• This process is called two way handshake and it is shown
in fig.(a).
• The question is will the attack take place? The answer is
probably not because the chief of blue army 2 does not
know whether his reply got through.
[Figure: (a) two-way handshake: blue army 1 sends message 1 and blue
army 2 returns an acknowledgement. (b) three-way handshake: messages 1,
2 and 3 are exchanged between the two armies.]
Three Way Handshaking Protocol
• Now let us improve the two way handshake protocol
by making it a three way handshake.
• Assuming no messages are lost, blue army 2 will get the
acknowledgement.
• But now the chief of blue army 1 will hesitate, because he does not
know whether the last message he sent got through.
• So we can make a four way handshake. But it also
does not help, because in every protocol, the
uncertainty after last handshake message always
remains.
• A three-way-handshake is a method used in a TCP/IP
network to create a connection between a local host/client
and server.
Event sequence:
1. Host A sends a TCP SYNchronize packet to Host B
2. Host B receives A's SYN
3. Host B sends a SYNchronize-ACKnowledgement
4. Host A receives B's SYN-ACK
5. Host A sends an ACKnowledgement
6. Host B receives the ACK; the TCP socket connection is ESTABLISHED
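The same event sequence, expressed as a tiny Python walkthrough of the
state names involved. This is purely illustrative: real TCP also tracks
sequence numbers, timers and retransmissions, all omitted here.

def three_way_handshake():
    """Log the state each host moves through during SYN, SYN-ACK, ACK."""
    return [
        "A sends SYN            -> A enters SYN_SENT",
        "B receives SYN, sends SYN-ACK -> B enters SYN_RCVD",
        "A receives SYN-ACK, sends ACK -> A enters ESTABLISHED",
        "B receives ACK         -> B enters ESTABLISHED",
    ]

for step in three_way_handshake():
    print(step)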
• TCP can be characterized by the following
facilities it provides for the applications using it:
– Stream Data Transfer:
– From the application's viewpoint, TCP transfers a
contiguous stream of bytes through the network.
– The application does not have to bother with
chopping the data into basic blocks or datagrams.
– TCP does this by grouping the bytes in TCP segments,
which are passed to IP for transmission to the
destination.
– Reliability:
– TCP assigns a sequence number to each byte
transmitted and expects a positive acknowledgment
(ACK) from the receiving TCP.
– If the ACK is not received within a timeout interval,
the data is retransmitted.
– Since the data is transmitted in blocks (TCP
segments), only the sequence number of the first
data byte in the segment is sent to the destination
host.
– The receiving TCP uses the sequence numbers to
rearrange the segments when they arrive out of
order, and to eliminate duplicate segments.
– Flow Control:
– The receiving TCP, when sending an ACK back to the
sender, also indicates to the sender the number of
bytes it can receive beyond the last received TCP
segment, without causing overrun and overflow in its
internal buffers.
– Multiplexing:
– Achieved through the use of ports, just as with UDP.
– Logical Connections:
– The reliability and flow control mechanisms described
above require that TCP initializes and maintains
certain status information for each data stream.
– The combination of this status, including sockets,
sequence numbers and window sizes, is called a
logical connection.
– Each connection is uniquely identified by the pair of
sockets used by the sending and receiving processes.
– Full Duplex:
– TCP provides for concurrent data streams in both
directions.
TCP segment format
• Source Port: The 16-bit source port number, used by the receiver to
reply.
• Destination Port: The 16-bit destination port number.
• Sequence Number: The sequence number of the first data byte in
this segment. If the SYN control bit is set, the sequence number is the
initial sequence number (n) and the first data byte is n+1.
• Acknowledgment Number: If the ACK control bit is set, this field
contains the value of the next sequence number that the receiver is
expecting to receive.
• Data Offset: The number of 32-bit words in the TCP header. It
indicates where the data begins.
• Reserved: Six bits reserved for future use; must be zero.
• URG: Indicates that the urgent pointer field is significant in this
segment.
• ACK: Indicates that the acknowledgment field is significant in this
segment.
• PSH: Push function.
• RST: Resets the connection.
• SYN: Synchronizes the sequence numbers.
• FIN: No more data from sender.
• Window: Used in ACK segments. It specifies the number of data bytes,
beginning with the one indicated in the acknowledgment number field,
that the receiver is willing to accept.
• Checksum: The 16-bit one's complement of the one's
complement sum of all 16-bit words in a pseudo-header, the
TCP header, and the TCP data.
• Urgent Pointer: Points to the first data octet following the
urgent data. Only significant when the URG control bit is set.
• Options: Just as in the case of IP datagram options, options
can be either:
– A single byte containing the option number
– A variable length option
• Padding: All zero bytes are used to fill up the TCP header to a
total length that is a multiple of 32 bits.
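A sketch of how these fields map onto the 20-byte fixed TCP header,
using Python's struct module. Flags are simplified to a 6-bit value,
options are omitted and the checksum is left at zero, so this is an
illustration rather than a transmittable segment; the ports, sequence
number and window are made up.

import struct

def build_tcp_header(src_port, dst_port, seq, ack, flags, window):
    """Pack the 20-byte fixed TCP header (no options)."""
    data_offset = 5                              # header length in 32-bit words
    offset_flags = (data_offset << 12) | flags   # 4-bit offset, reserved, flag bits
    checksum, urgent_ptr = 0, 0                  # checksum omitted in this sketch
    return struct.pack("!HHIIHHHH", src_port, dst_port, seq, ack,
                       offset_flags, window, checksum, urgent_ptr)

SYN, ACK = 0x02, 0x10
header = build_tcp_header(40000, 80, seq=1000, ack=0, flags=SYN, window=65535)
print(len(header), struct.unpack("!HHIIHHHH", header))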
The window principle
• A trivial transport protocol is:
– send a packet and then wait for an ACK from the
receiver before sending the next packet;
– if the ACK is not received within a certain amount of
time, retransmit the packet.
• While this mechanism ensures reliability, it only
uses a part of the available network bandwidth.
• Now, consider a protocol where the sender
groups its packets to be transmitted:
– the sender can send all packets within the window
without receiving an ACK, but must start a timeout
timer for each of them;
– the receiver must acknowledge each packet received,
indicating the sequence number of the last well-
received packet;
– the sender slides the window on each ACK received.
The window principle
• If packet 2 is lost, the
receiver does not
acknowledge the
reception of
subsequent data
messages.
• The sender re-
transmits
unacknowledged
messages after a
timeout expires.
• The window principle is used in TCP, but:
–the window principle is used at the byte level,
that is, the segments sent and ACKs received
carry byte-sequence numbers and the window
size is expressed as a number of bytes;
–the window size is determined by the receiver
when the connection is established and is
variable during the data transfer.
The window principle
Congestion control
• The TCP congestion control scheme was initially
proposed by Van Jacobson in [Jacobson1988].
• Essential strategy :: The TCP host sends packets into the
network without a reservation and then the host reacts
to observable events.
• Originally TCP assumed FIFO queuing.
• Basic idea :: each source determines how much capacity
is available to a given flow in the network.
• ACKs are used to ‘pace’ the transmission of packets such
that TCP is “self-clocking”.
• TCP relies on Additive Increase and Multiplicative
Decrease (AIMD).
• To implement AIMD, a TCP host must be able to control
its transmission rate.
Standard TCP Congestion Control
Algorithms
• One of the most common implementations of
TCP is called Reno, and combines four
different mechanisms :
– Slow start
– Congestion avoidance
– Fast retransmit
– Fast recovery
Slow start
• Slow Start, a requirement for TCP software implementations, is a
mechanism used by the sender to control the transmission rate,
otherwise known as sender-based flow control.
• The rate of acknowledgements returned by the
receiver determine the rate at which the sender can
transmit data.
• When a TCP connection first begins, the Slow Start
algorithm initializes a congestion window to one
segment, which is the maximum segment size (MSS)
initialized by the receiver during the connection
establishment phase.
• When acknowledgements are returned by the receiver, the
congestion window increases by one segment for each
acknowledgement returned.
• Thus, the sender can transmit the minimum of the congestion
window and the advertised window of the receiver, which is
simply called the transmission window.
• For example, the first successful transmission and
acknowledgement of a TCP segment increases the window to
two segments.
• After successful transmission of these two segments and
acknowledgements completes, the window is increased to four
segments.
• Then eight segments, then sixteen segments and so on,
doubling from there on out up to the maximum window size
advertised by the receiver or until congestion finally does occur.
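The doubling described above is easy to see in a few lines of Python
(a toy model under stated assumptions: one ACK per segment, no loss,
the congestion window counted in segments, and an arbitrary advertised
window of 64 segments).

cwnd, rwnd = 1, 64          # congestion window and advertised window, in segments

for rtt in range(6):
    window = min(cwnd, rwnd)            # transmission window
    print(f"RTT {rtt}: send {window} segment(s)")
    cwnd += window                      # +1 per ACK received -> doubles each RTT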
• At some point the congestion window may
become too large for the network or network
conditions may change such that packets may be
dropped.
• Packets lost will trigger a timeout at the sender.
• When this happens, the sender goes into
congestion avoidance mode as described in the
next section.
Congestion Avoidance
• During the initial data transfer phase of a TCP
connection the Slow Start algorithm is used.
• However, there may be a point during Slow Start
that the network is forced to drop one or more
packets due to overload or congestion.
• If this happens, Congestion Avoidance is used to
slow the transmission rate.
• However, Slow Start is used in conjunction with
Congestion Avoidance as the means to get the data
transfer going again so it doesn’t slow down and stay
slow.
• In the Congestion Avoidance algorithm a retransmission
timer expiring or the reception of duplicate ACKs can
implicitly signal the sender that a network congestion
situation is occurring.
• The sender immediately sets its transmission window to
one half of the current window size (the minimum of the
congestion window and the receiver’s advertised
window size), but to at least two segments.
• If congestion was indicated by a timeout, the congestion
window is reset to one segment, which automatically
puts the sender into Slow Start mode.
• If congestion was indicated by duplicate ACKs, the Fast
Retransmit and Fast Recovery algorithms are invoked
• As data is received during Congestion Avoidance, the
congestion window is increased.
• However, Slow Start is only used up to the halfway
point where congestion originally occurred.
• This halfway point was recorded earlier as the new
transmission window.
• After this halfway point, the congestion window is
increased by one segment for all segments in the
transmission window that are acknowledged.
• This mechanism will force the sender to more slowly
grow its transmission rate, as it will approach the
point where congestion had previously been
detected.
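Putting these reactions together, here is a toy Python model of how the
sender adjusts its congestion window and the halfway point (ssthresh)
on the two congestion signals. Values are in segments; this follows the
description above as a sketch, not any particular TCP implementation.

def on_congestion(cwnd, ssthresh, signal):
    """Reaction to a congestion signal, per the text above."""
    ssthresh = max(cwnd // 2, 2)      # record half the window, at least 2 segments
    if signal == "timeout":
        cwnd = 1                      # back to Slow Start
    elif signal == "dup_acks":
        cwnd = ssthresh               # Fast Recovery: resume at the halved window
    return cwnd, ssthresh

def on_ack(cwnd, ssthresh):
    """Growth: exponential below ssthresh (Slow Start), linear above it."""
    return cwnd + 1 if cwnd < ssthresh else cwnd + 1 / cwnd

print(on_congestion(cwnd=32, ssthresh=64, signal="timeout"))    # (1, 16)
print(on_congestion(cwnd=32, ssthresh=64, signal="dup_acks"))   # (16, 16)
print(on_ack(4, 16), on_ack(20, 16))                            # 5  20.05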
Fast Retransmit
• When a duplicate ACK is received, the sender does
not know if it is because a TCP segment was lost or
simply that a segment was delayed and received out
of order at the receiver.
• If the receiver can re-order segments, it should not
be long before the receiver sends the latest
expected acknowledgement.
• Typically no more than one or two duplicate ACKs
should be received when simple out of order
conditions exist.
• If however more than two duplicate ACKs are received
by the sender, it is a strong indication that at least one
segment has been lost.
• The TCP sender will assume enough time has lapsed for
all segments to be properly re-ordered by the fact that
the receiver had enough time to send three duplicate
ACKs.
• When three or more duplicate ACKs are received, the
sender does not even wait for a retransmission timer to
expire before retransmitting the segment.
• This process is called the Fast Retransmit algorithm.
• Immediately following Fast Retransmit is the Fast
Recovery algorithm.
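The triggering condition itself is just a counter on duplicate ACKs, as
in this small Python fragment (the threshold of three duplicates
follows the text above; the ACK stream is an invented example).

def should_fast_retransmit(acks, threshold=3):
    """Return the ACK value once `threshold` duplicates of it have arrived."""
    dup_count, last_ack = 0, None
    for ack in acks:
        dup_count = dup_count + 1 if ack == last_ack else 0
        last_ack = ack
        if dup_count >= threshold:
            return ack      # retransmit the segment the receiver keeps asking for
    return None

print(should_fast_retransmit([5, 6, 7, 7, 7, 7]))   # 7: three duplicate ACKs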
Fast Recovery
• Since the Fast Retransmit algorithm is used when
duplicate ACKs are being received, the TCP sender
has implicit knowledge that there is data still flowing
to the receiver.
• Why? The reason is that duplicate ACKs can only be generated when a
segment is received.
• This is a strong indication that serious network
congestion may not exist and that the lost segment
was a rare event.
• So instead of reducing the flow of data abruptly by going
all the way into Slow Start, the sender only enters
Congestion Avoidance mode.
• Rather than start at a window of one segment as
in Slow Start mode, the sender resumes
transmission with a larger window, incrementing
as if in Congestion Avoidance mode.
• Congestion Control is concerned with efficiently
using a network at high load.
• Several techniques can be employed. These include:
– Warning bit
– Choke packets
– Load shedding
– Random early discard
– Traffic shaping
• The first 3 deal with congestion detection and
recovery. The last 2 deal with congestion avoidance.
Warning Bit
• A special bit in the packet header is set by the
router to warn the source when congestion is
detected.
• The bit is copied and piggy-backed on the ACK
and sent to the sender.
• The sender monitors the number of ACK
packets it receives with the warning bit set
and adjusts its transmission rate accordingly.
Choke Packets
• A more direct way of telling the source to
slow down.
• A choke packet is a control packet
generated at a congested node and
transmitted to restrict traffic flow.
• The source, on receiving the choke packet
must reduce its transmission rate by a
certain percentage.
• An example of a choke packet is the ICMP
Source Quench Packet.
Hop-by-Hop Choke Packets
• Over long distances or at high speeds choke
packets are not very effective.
• A more efficient method is to send the choke packets hop-by-hop.
• This requires each hop to reduce its transmission even before the
choke packet arrives at the source.
Load Shedding
• When buffers become full, routers simply discard
packets.
• Which packet is chosen to be the victim depends on
the application and on the error strategy used in the
data link layer.
• For a file transfer, for example, we cannot discard older packets,
since this would cause a gap in the received data.
• For real-time voice or video it is probably better to
throw away old data and keep new packets.
• Get the application to mark packets with discard
priority.
Random Early Discard (RED)
• This is a proactive approach in which the
router discards one or more packets before
the buffer becomes completely full.
• Each time a packet arrives, the RED
algorithm computes the average queue
length, avg.
• If avg is lower than some lower threshold,
congestion is assumed to be minimal or
non-existent and the packet is queued.
RED, cont.
• If avg is greater than some upper
threshold, congestion is assumed to be
serious and the packet is discarded.
• If avg is between the two thresholds, this
might indicate the onset of congestion. The
probability of congestion is then calculated.
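A minimal Python sketch of the RED decision described above (the
thresholds and the maximum drop probability are arbitrary illustration
values, not recommended settings).

import random

def red_should_drop(avg_queue, min_th=5, max_th=15, max_p=0.1):
    """Random Early Discard: drop probabilistically between the two thresholds."""
    if avg_queue < min_th:
        return False                      # little or no congestion: enqueue
    if avg_queue >= max_th:
        return True                       # serious congestion: drop
    p = max_p * (avg_queue - min_th) / (max_th - min_th)   # onset of congestion
    return random.random() < p

print([red_should_drop(q) for q in (3, 10, 20)])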
Traffic Shaping
• Another method of congestion control is to
“shape” the traffic before it enters the
network.
• Traffic shaping controls the rate at which
packets are sent (not just how many).
• Used in ATM and Integrated Services
networks.
• At connection set-up time, the sender and
carrier negotiate a traffic pattern (shape).
• Two traffic shaping algorithms are:
– Leaky Bucket
– Token Bucket
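For instance, a token bucket shaper can be sketched in a few lines of
Python (the rate and bucket depth are made-up parameters, and time is
advanced manually for clarity).

class TokenBucket:
    """Tokens accumulate at `rate` per second; each packet sent consumes one."""

    def __init__(self, rate: float, depth: int):
        self.rate, self.depth = rate, depth
        self.tokens, self.last = depth, 0.0

    def allow(self, now: float) -> bool:
        self.tokens = min(self.depth, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1      # consume one token for this packet
            return True
        return False              # no token: the packet must wait or be dropped

tb = TokenBucket(rate=2, depth=4)                  # 2 tokens/s, burst of up to 4
print([tb.allow(t) for t in (0.0, 0.1, 0.2, 0.3, 0.4, 2.0)])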
Piggybacking
• Piggybacking is a bi-directional data
transmission technique in the network layer (OSI
model).
• In all practical situations, the transmission of data needs to be
bi-directional. This is called full-duplex transmission.
• We can achieve this full-duplex transmission by having two separate
channels: one for the forward data transfer and the other for the
reverse transfer, i.e., for acknowledgements.
• A better solution is to use each channel (forward and reverse) to
transmit frames both ways, with both channels having the same capacity,
so that data and acknowledgements share each link.
• If A and B are two users, then the data frames from A to B are
intermixed with the acknowledgements (for frames received from B)
travelling from A to B.
• One more improvement that can be made is
piggybacking.
• The concept is explained as follows:
• In two way communication, Whenever a data
frame is received, the received waits and does
not send the control frame (acknowledgement)
back to the sender immediately.
• The receiver waits until its network layer passes
in the next data packet. The delayed
acknowledgement is then attached to this
outgoing data frame.
• This technique of temporarily delaying the
acknowledgement so that it can be hooked with
next outgoing data frame is known as
piggybacking.
• The major advantage of piggybacking is better
use of available channel bandwidth.
• The disadvantages of piggybacking are:
1. Additional complexity.
2. If the data link layer waits too long before
transmitting the acknowledgement, then
retransmission of frame would take place.
VIP Call Girls Service Hitech City Hyderabad Call +91-8250192130
 
(ANVI) Koregaon Park Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
(ANVI) Koregaon Park Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...(ANVI) Koregaon Park Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
(ANVI) Koregaon Park Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
 
Call Girls Service Nagpur Tanvi Call 7001035870 Meet With Nagpur Escorts
Call Girls Service Nagpur Tanvi Call 7001035870 Meet With Nagpur EscortsCall Girls Service Nagpur Tanvi Call 7001035870 Meet With Nagpur Escorts
Call Girls Service Nagpur Tanvi Call 7001035870 Meet With Nagpur Escorts
 
CCS335 _ Neural Networks and Deep Learning Laboratory_Lab Complete Record
CCS335 _ Neural Networks and Deep Learning Laboratory_Lab Complete RecordCCS335 _ Neural Networks and Deep Learning Laboratory_Lab Complete Record
CCS335 _ Neural Networks and Deep Learning Laboratory_Lab Complete Record
 
Call for Papers - African Journal of Biological Sciences, E-ISSN: 2663-2187, ...
Call for Papers - African Journal of Biological Sciences, E-ISSN: 2663-2187, ...Call for Papers - African Journal of Biological Sciences, E-ISSN: 2663-2187, ...
Call for Papers - African Journal of Biological Sciences, E-ISSN: 2663-2187, ...
 
(PRIYA) Rajgurunagar Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
(PRIYA) Rajgurunagar Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...(PRIYA) Rajgurunagar Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
(PRIYA) Rajgurunagar Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
 
Introduction to IEEE STANDARDS and its different types.pptx
Introduction to IEEE STANDARDS and its different types.pptxIntroduction to IEEE STANDARDS and its different types.pptx
Introduction to IEEE STANDARDS and its different types.pptx
 
(SHREYA) Chakan Call Girls Just Call 7001035870 [ Cash on Delivery ] Pune Esc...
(SHREYA) Chakan Call Girls Just Call 7001035870 [ Cash on Delivery ] Pune Esc...(SHREYA) Chakan Call Girls Just Call 7001035870 [ Cash on Delivery ] Pune Esc...
(SHREYA) Chakan Call Girls Just Call 7001035870 [ Cash on Delivery ] Pune Esc...
 
Introduction to Multiple Access Protocol.pptx
Introduction to Multiple Access Protocol.pptxIntroduction to Multiple Access Protocol.pptx
Introduction to Multiple Access Protocol.pptx
 
DJARUM4D - SLOT GACOR ONLINE | SLOT DEMO ONLINE
DJARUM4D - SLOT GACOR ONLINE | SLOT DEMO ONLINEDJARUM4D - SLOT GACOR ONLINE | SLOT DEMO ONLINE
DJARUM4D - SLOT GACOR ONLINE | SLOT DEMO ONLINE
 
Extrusion Processes and Their Limitations
Extrusion Processes and Their LimitationsExtrusion Processes and Their Limitations
Extrusion Processes and Their Limitations
 
Call for Papers - Educational Administration: Theory and Practice, E-ISSN: 21...
Call for Papers - Educational Administration: Theory and Practice, E-ISSN: 21...Call for Papers - Educational Administration: Theory and Practice, E-ISSN: 21...
Call for Papers - Educational Administration: Theory and Practice, E-ISSN: 21...
 
The Most Attractive Pune Call Girls Budhwar Peth 8250192130 Will You Miss Thi...
The Most Attractive Pune Call Girls Budhwar Peth 8250192130 Will You Miss Thi...The Most Attractive Pune Call Girls Budhwar Peth 8250192130 Will You Miss Thi...
The Most Attractive Pune Call Girls Budhwar Peth 8250192130 Will You Miss Thi...
 
Top Rated Pune Call Girls Budhwar Peth ⟟ 6297143586 ⟟ Call Me For Genuine Se...
Top Rated  Pune Call Girls Budhwar Peth ⟟ 6297143586 ⟟ Call Me For Genuine Se...Top Rated  Pune Call Girls Budhwar Peth ⟟ 6297143586 ⟟ Call Me For Genuine Se...
Top Rated Pune Call Girls Budhwar Peth ⟟ 6297143586 ⟟ Call Me For Genuine Se...
 
OSVC_Meta-Data based Simulation Automation to overcome Verification Challenge...
OSVC_Meta-Data based Simulation Automation to overcome Verification Challenge...OSVC_Meta-Data based Simulation Automation to overcome Verification Challenge...
OSVC_Meta-Data based Simulation Automation to overcome Verification Challenge...
 
(RIA) Call Girls Bhosari ( 7001035870 ) HI-Fi Pune Escorts Service
(RIA) Call Girls Bhosari ( 7001035870 ) HI-Fi Pune Escorts Service(RIA) Call Girls Bhosari ( 7001035870 ) HI-Fi Pune Escorts Service
(RIA) Call Girls Bhosari ( 7001035870 ) HI-Fi Pune Escorts Service
 
UNIT-III FMM. DIMENSIONAL ANALYSIS
UNIT-III FMM.        DIMENSIONAL ANALYSISUNIT-III FMM.        DIMENSIONAL ANALYSIS
UNIT-III FMM. DIMENSIONAL ANALYSIS
 
UNIT-V FMM.HYDRAULIC TURBINE - Construction and working
UNIT-V FMM.HYDRAULIC TURBINE - Construction and workingUNIT-V FMM.HYDRAULIC TURBINE - Construction and working
UNIT-V FMM.HYDRAULIC TURBINE - Construction and working
 
SPICE PARK APR2024 ( 6,793 SPICE Models )
SPICE PARK APR2024 ( 6,793 SPICE Models )SPICE PARK APR2024 ( 6,793 SPICE Models )
SPICE PARK APR2024 ( 6,793 SPICE Models )
 

of the transmission, or after packet retransmission. • Multiplexing: – Ports can provide multiple endpoints on a single node. For example, the name on a postal address is a kind of multiplexing, and distinguishes between different recipients of the same location. – Computer applications will each listen for information on their own ports, which enables the use of more than one network service at the same time. – Multiplexing is part of the transport layer in the TCP/IP model, but of the session layer in the OSI model.
  • 12. Transport services and protocols • provide logical communication between app processes running on different hosts • transport protocols run in end systems – send side: breaks app messages into segments, passes to network layer – rcv side: reassembles segments into messages, passes to app layer • more than one transport protocol available to apps – Internet: TCP and UDP (figure: the end-system protocol stacks, application / transport / network / data link / physical)
  • 13. Transport vs. network layer • Network layer: –logical communication between hosts • Transport layer: –logical communication between processes –relies on, enhances, network layer services
  • 14. Internet transport-layer protocols • reliable, in-order delivery (TCP) – congestion control – flow control – connection setup • unreliable, unordered delivery: UDP • services not available: – delay guarantees – bandwidth guarantees (figure: the transport layer runs only in the end systems; intermediate routers implement only the network, data link and physical layers)
  • 15. Multiplexing and Demultiplexing • Multiplexing and de-multiplexing are two very important functions performed by the Transport Layer. • The transport layer at the sender side receives data from different applications, encapsulates every packet with a Transport Layer header and passes it on to the underlying Network Layer. • This job of the transport layer is known as multiplexing. • At the receiver's side, the transport layer gathers the data, examines its socket and passes the data to the correct application. • This is known as de-multiplexing.
  • 16. • Suppose that there are two houses. One is in India and the other is in America. In the house in India lives a person, James, along with his 5 children. • And in the house in America lives a person, Steve, along with his 4 children. Now all 5 children of James write a letter to every child of Steve every Sunday. • Therefore the total number of letters will be 20. All the children write their letters, put them in envelopes and hand them over to James. • Then James writes the source house address and the destination house address on each envelope and gives it to the postal service of India. • Now the postal service of India puts some other addresses corresponding to the country and delivers it to the American postal service. • The American postal service sees the destination address on the envelopes and delivers those 20 letters to Steve's house. • Steve collects the letters from the postman and, after reading the name of the respective child on each envelope, gives the letter to each of them.
  • 17. • In this example we have processes and the layers. Let me explain. • Processes = children • Application Layer messages = envelopes • Hosts = The two Houses • Transport Layer Protocol = James and Steve • Network Layer protocol = Postal Service
  • 18. • When James collects all the letters from his children, he multiplexes them all, encapsulating each with the respective child's name and the house address, and gives the batch to the Indian postal service. On the receiving side, Steve collects all the letters from the postal service of America and de-multiplexes them to see which letter is for which child, delivering each accordingly.
  • 19. Formal Definition • Multiplexing (or muxing) is a way of sending multiple signals or streams of information over a communications link at the same time in the form of a single, complex signal; the receiver recovers the separate signals, a process called demultiplexing (or demuxing).
  • 20.
  • 21. Multiplexing • Multiplexing is sending more than one signal on a carrier. • There are two standard types of multiplexing. – Frequency-Division Multiplexing (FDM): the medium carries a number of signals, which have different frequencies; the signals are carried simultaneously. – Time-Division Multiplexing (TDM): different signals are transmitted over the same medium but they do so at different times – they take turns.
  • 22. Multiplexing • Multiplexing allows one to select one of the many possible sources.
  • 23. Continue… • There are several data inputs and one of them is routed to the output (possibly the shared communication channel). – Like selecting a television channel (although that example is FDM). • In addition to data inputs, there must be select inputs. – The select inputs determine which data input gets through. • How many select pins are needed? – Depends on number of data inputs.
  • 24. Addresses • All of the (data) inputs at hand are assigned addresses. • The address of the data input is used to select which data input is placed on the shared channel. • So in addition to the collection of data inputs, there are selection (or address) inputs that pick which of the data inputs gets through.
  • 26. Connection oriented vs. connectionless service:
  1. Reservation of resources – connection oriented: necessary; connectionless: not necessary
  2. Utilization of resources – connection oriented: less; connectionless: good
  3. State information – connection oriented: a lot of information required; connectionless: not much information needs to be stored
  4. Guarantee of service – connection oriented: guaranteed; connectionless: no guarantee
  5. Connection – connection oriented: connection needs to be established; connectionless: connection need not be established
  6. Delays – connection oriented: more; connectionless: less
  7. Overheads – connection oriented: less; connectionless: more
  8. Packet travel – connection oriented: sequential; connectionless: random
  9. Congestion due to overloading – connection oriented: not possible; connectionless: very much possible
  • 28.
  • 29.
  • 30.
  • 31.
  • 32. Reliable Vs. Unreliable • If the application layer program needs reliability, a reliable transport layer protocol is used, implementing flow and error control at the transport layer. • But this service will be slower and more complex. • If the application layer program does not need reliability, because it uses its own flow and error control mechanism, then an unreliable service is used. • UDP is a connectionless and unreliable protocol, while TCP is a connection-oriented and reliable protocol.
  • 33. UDP: User Datagram Protocol [RFC 768] • What is a connectionless protocol? – The device sending a message simply sends it addressed to the intended recipient. – If there are problems with the transmission, it may be necessary to resend the data several times. – The Internet Protocol (IP) and User Datagram Protocol (UDP) are connectionless protocols.
  • 34. UDP: User Datagram Protocol [RFC 768] • The User Datagram Protocol offers only a minimal transport service -- non-guaranteed datagram delivery -- and gives applications direct access to the datagram service of the IP layer. • UDP is almost a null protocol; the only services it provides over IP are checksumming of data and multiplexing by port number. • UDP is a standard protocol with STD number 6.
  • 35. • An application program running over UDP must deal directly with end-to-end communication problems that a connection-oriented protocol would have handled -- e.g., retransmission for reliable delivery, packetization and reassembly, flow control, congestion avoidance, etc., when these are required. • The service provided by UDP is an unreliable service that provides no guarantees for delivery and no protection from duplication. • Compared to other transport protocols, UDP and its UDP-Lite variant are unique in that they do not establish end-to-end connections between communicating end systems.
  • 36.
  • 37. The UDP header contains four fields: a 16-bit source port, a 16-bit destination port, a 16-bit length field and a 16-bit checksum.
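To make the layout concrete, here is a small sketch that packs and unpacks these four 16-bit fields with Python's struct module (the port numbers, length and checksum below are made-up example values):

    import struct

    def parse_udp_header(header: bytes):
        # The fixed UDP header is 8 bytes: four 16-bit fields in network (big-endian) order.
        src_port, dst_port, length, checksum = struct.unpack("!HHHH", header[:8])
        return {"source_port": src_port, "destination_port": dst_port,
                "length": length, "checksum": checksum}

    # Hypothetical datagram: client port 53000 to DNS port 53, 8 header bytes + 24 payload bytes.
    sample = struct.pack("!HHHH", 53000, 53, 32, 0x1C46)
    print(parse_udp_header(sample))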
  • 38. • UDP is basically an application interface to IP. It adds no reliability, flow-control, or error recovery to IP. It simply serves as a multiplexer/demultiplexer for sending and receiving datagrams, using ports to direct the datagrams. • UDP provides a mechanism for one application to send a datagram to another. • Be aware that UDP and IP do not provide guaranteed delivery, flow-control, or error recovery, so these must be provided by the application.
  • 39. • Standard applications using UDP include: – Trivial File Transfer Protocol (TFTP) – Domain Name System (DNS) name server – Remote Procedure Call (RPC), used by the Network File System (NFS) – Simple Network Management Protocol (SNMP) – Lightweight Directory Access Protocol (LDAP)
  • 41. UDP header fields:
  Source Port – UDP packets from a client use this as a service access point (SAP) to indicate the session on the local client that originated the packet. UDP packets from a server carry the server SAP in this field.
  Destination Port – UDP packets from a client use this as a service access point (SAP) to indicate the service required from the remote server. UDP packets from a server carry the client SAP in this field.
  UDP Length – The number of bytes comprising the combined UDP header information and payload data.
  UDP Checksum – A checksum to verify that the end-to-end data has not been corrupted by routers or bridges in the network or by the processing in an end system.
  • 42. UDP checksum • Goal: detect “errors” (e.g., flipped bits) in transmitted segment Sender: • treat segment contents as sequence of 16-bit integers • checksum: addition (1’s complement sum) of segment contents • sender puts checksum value into UDP checksum field Receiver: • compute checksum of received segment • check if computed checksum equals checksum field value: – NO - error detected – YES - no error detected. But maybe errors nonetheless? More later ….
  • 43. Internet Checksum Example • Note – When adding numbers, a carryout from the most significant bit needs to be added to the result • Example: add two 16-bit integers
      1110011001100110
    + 1101010101010101
    -----------------------
    1 1011101110111011   (carry out of the most significant bit)
    wraparound: 1011101110111011 + 1 = 1011101110111100   (sum)
    checksum  = 0100010001000011   (one's complement of the sum)
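The same arithmetic in a short Python sketch (the operand values are the two 16-bit integers from the example above; the helper name is my own):

    def ones_complement_sum16(words):
        total = 0
        for w in words:
            total += w
            # Fold any carry out of bit 15 back into the low 16 bits (the "wraparound").
            total = (total & 0xFFFF) + (total >> 16)
        return total

    words = [0b1110011001100110, 0b1101010101010101]
    s = ones_complement_sum16(words)
    checksum = ~s & 0xFFFF                         # one's complement of the sum
    print(format(s, "016b"), format(checksum, "016b"))
    # The receiver sums all words plus the checksum; 0xFFFF means no error was detected.
    print(hex(ones_complement_sum16(words + [checksum])))   # 0xffff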
  • 44.
  • 45. IP header fields:
  iph_ver – 4 bits, the version of IP currently used; the IP version here is 4 (the other version is IPv6).
  iph_ihl – 4 bits, the IP header (datagram) length in 32-bit words, pointing to the beginning of the data. The minimum value for a correct header is 5, i.e. 20 bytes (5 * 4).
  iph_tos – 8 bits, type of service; controls the priority of the packet.
  iph_len – 16 bits, the total length of the IP datagram (header and data) in bytes. This includes the IP header, the ICMP/TCP/UDP header and the payload size.
  iph_ident – a sequence number, mainly used for reassembly of fragmented IP datagrams.
  iph_flag – a 3-bit field of which the two low-order (least-significant) bits control fragmentation. The low-order bit specifies whether the packet can be fragmented. The middle bit specifies whether the packet is the last fragment in a series of fragmented packets. The third, high-order bit is not used.
  • 46. IP header fields (continued):
  ihp_offset – the fragment offset, used for reassembly of fragmented datagrams. The first 3 bits are the fragment flags: the first is always 0, the second is the do-not-fragment bit (set by ihp_offset = 0x4000) and the third is the more-fragments-following bit (ihp_offset = 0x2000).
  iph_ttl – 8 bits, time to live: the number of hops (routers to pass) before the packet is discarded and an ICMP error message is returned. The maximum is 255.
  iph_protocol – 8 bits, the transport layer protocol. It can be TCP (6), UDP (17), ICMP (1), or whatever protocol follows the IP header.
  iph_chksum – 16 bits, a checksum on the header only of the IP datagram.
  iph_source – 32 bits, source IP address, converted to long format.
  iph_dest – 32 bits, destination IP address, converted to long format, e.g. by inet_addr().
  Options – variable.
  Padding – variable. The internet header padding is used to ensure that the internet header ends on a 32-bit boundary.
  • 47. Ports and sockets • This section introduces the concepts of port and socket, which are needed to determine which local process at a given host actually communicates with which process, at which remote host, using which protocol. If this sounds confusing, consider the following: – An application process is assigned a process identifier number (process ID), which is likely to be different each time that process is started. – Process IDs differ between operating system platforms, hence they are not uniform. – A server process can have multiple connections to multiple clients at a time, hence simple connection identifiers would not be unique. • The concept of ports and sockets provides a way to uniformly and uniquely identify connections and the programs and hosts that are engaged in them, irrespective of specific process IDs.
  • 48. • Ports: • Each process that wants to communicate with another process identifies itself to the TCP/IP protocol suite by one or more ports. • A port is a 16-bit number, used by the host-to-host protocol to identify to which higher level protocol or application program (process) it must deliver incoming messages. There are two types of port: – Well-known: • Well-known ports belong to standard servers, for example, Telnet uses port 23. Well-known port numbers range between 1 and 1023. • Well-known port numbers are typically odd, because early systems using the port concept required an odd/even pair of ports for duplex operations.
  • 49. – Ephemeral: • Clients do not need well-known port numbers because they initiate communication with servers and the port number they are using is contained in the UDP datagram sent to the server. • Ephemeral port numbers have values greater than 1023, normally in the range 1024 to 65535. A client can use any number allocated to it, as long as the combination of <transport protocol, IP address, port number> is unique.
  • 50. • Sockets: – The socket interface is one of several application programming interfaces (APIs) to the communication protocols. Designed to be a generic communication programming interface, it was first introduced by 4.2 BSD. – 4.2 BSD allowed two different communication domains: Internet and UNIX. – 4.3 BSD added the Xerox Network System (XNS) protocols, and 4.4 BSD added an extended interface to support the ISO OSI protocols. – A socket address is the triple: <protocol, local-address, local-process> – For example, in the TCP/IP suite: <tcp, 193.44.234.3, 12345>
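A minimal sketch of the <protocol, local-address, port> idea using Python's standard socket API (the loopback address and port 12345 are arbitrary example values):

    import socket

    # Server side: bind a UDP socket to a fixed local port.
    server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    server.bind(("127.0.0.1", 12345))           # socket address: <udp, 127.0.0.1, 12345>

    # Client side: the OS assigns an ephemeral port (> 1023) automatically.
    client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    client.sendto(b"hello", ("127.0.0.1", 12345))

    data, (addr, port) = server.recvfrom(1024)
    print(data, addr, port)                     # port is the client's ephemeral port
    client.close()
    server.close()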
  • 51. Principles of Reliable data transfer • Making sure that the packets sent by the sender are correctly and reliably received by the receiver amid network errors, i.e., corrupted/lost packets. – Can be implemented at LL, NL or TL of the protocol stack. • When and why should this be used? – Link Layer • Rarely done over twisted-pair or fiber optic links • Usually done over lossy links (wireless) for performance improvement (versus correctness) in P2P links – Network/Transport Layers • Necessary if the application requires the data to be reliably delivered to the receiver, e.g., file transfer
  • 52. Reliable Delivery: Service Model – Reliable, in-order delivery • Typically done when reliability is implemented at the transport layer, e.g., TCP • Example application: file transfer (figure: the reliable data transfer protocol, sender side with RDT_Send/UDT_Send and receiver side with RDT_Receive/Deliver_Data, running over an unreliable channel)
  • 53. Reliable Delivery: Assumptions • We will: – Consider only unidirectional data transfer • A sender sending packets to a receiver • Bidirectional communication is a simple extension, where there are 2 sender/receiver pairs – Start with a simple protocol and make it more complex as we continue
  • 54. RDT over Unreliable Channel • Channel may flip bits in packets / lose packets – The received packet may have been corrupted during transmission, or dropped at an intermediate router due to buffer overflow • The question: how to recover from errors? • ACKs, NACKs, Timeouts… covered next
  • 55. RDT over Unreliable Channel • Two fundamental mechanisms to accomplish reliable delivery over unreliable channels – Acknowledgements (ACK), Negative ACK (NACK) • Small control packets (a header without any data) that a protocol sends back to its peer saying that it has received an earlier packet (positive ACK) or that it has not received a packet (NACK); these are sent by the receiver to the sender. – Timeouts • Set by the sender for each transmitted packet • If an ACK is received before the timer expires, then the packet has made it to the receiver • If the timeout occurs, the sender assumes that the packet is lost (or corrupted) and retransmits the packet
  • 56. ARQ • Automatic repeat request (ARQ) is a protocol for error control in data transmission. When the receiver detects an error in a packet, it automatically requests the transmitter to resend the packet. • The general strategy of using ACKs (NACKs) and timeouts to implement reliable delivery is called Automatic Repeat reQuest (ARQ) • 3 ARQ Mechanisms for Reliable Delivery – Stop and Wait – Concurrent Logical Channels – Sliding Window
  • 57. Stop and Wait • Simplest ARQ protocol • Sender: – Send a packet – Stop and wait until an ACK arrives – If the ACK is received, send the next packet – If a timeout occurs, retransmit the same packet • Receiver: – When a packet is received correctly, send an ACK
  • 58. • Stop and wait with ARQ: Automatic Repeat reQuest (ARQ), an error control method, is incorporated with the stop-and-wait flow control protocol. – If an error is detected by the receiver, it discards the frame and sends a negative ACK (NAK), causing the sender to re-send the frame – In case a frame never reaches the receiver, the sender has a timer: each time a frame is sent, the timer is set → if no ACK or NAK is received during the timeout period, it re-sends the frame – The timer introduces a problem: suppose a timeout occurs and the sender retransmits a frame, but the receiver actually received the previous transmission → the receiver has duplicate copies – To avoid receiving and accepting two copies of the same frame, frames and ACKs are alternately labeled 0 or 1: ACK0 for frame 1, ACK1 for frame 0
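A toy sender loop for this alternating-bit, stop-and-wait behaviour; the channel functions below are stand-ins that simulate an unreliable link rather than a real one, and all names are my own:

    import random

    def channel_send(seq, payload):
        print(f"send frame {seq}: {payload!r}")

    def ack_received_within(timeout):
        # Stand-in for waiting on the link: pretend the ACK arrives 80% of the time.
        return random.random() < 0.8

    def stop_and_wait_send(frames, timeout=1.0):
        seq = 0                                   # 1-bit sequence number
        for payload in frames:
            while True:
                channel_send(seq, payload)
                if ack_received_within(timeout):  # ACK before the timer expires
                    break
                # Timeout (or NAK): retransmit the same frame with the same sequence bit.
            seq ^= 1                              # alternate 0/1 for the next frame

    stop_and_wait_send([b"frame-A", b"frame-B", b"frame-C"])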
  • 59. An important link parameter is a = (propagation time) / (transmission time) = (d / V) / (L / R) = R d / (V L), where R is the data rate (bps), d is the link distance (m), V is the propagation velocity (m/s) and L is the frame length (bits). In the error-free case, the efficiency or maximum link utilization of stop-and-wait with ARQ is U = 1 / (1 + 2a).
  • 60. Recovering from Error • Does this protocol work? • When an ACK is lost or an early timeout occurs, how does the receiver know whether the packet is a retransmission or a new packet? – Use sequence numbers on both packets and ACKs (figure: packet-lost, ACK-lost and early-timeout scenarios)
  • 61. Stop & Wait with Seq #s (figure: the packet-lost, ACK-lost and early-timeout scenarios, now handled with 0/1 sequence numbers on packets and ACKs)
  • 62. Performance of Stop and Wait • Can only send one packet per round trip • the protocol limits the use of the physical resources! • Timeline: first packet bit transmitted at t = 0; last packet bit transmitted at t = L / R; the last packet bit arrives at the receiver and the ACK is sent; the ACK arrives and the next packet is sent at t = RTT + L / R • U_sender = (L / R) / (RTT + L / R) = 0.008 / 30.008 = 0.00027, i.e. about 0.027% utilization (L / R = 8 microseconds of transmission time against a 30 millisecond RTT)
  • 63. Pipelining: Increasing Utilization • Pipelining: the sender allows multiple, “in-flight”, yet-to-be-acknowledged pkts, without waiting for the first to be ACKed, to keep the pipe full – Capacity of the pipe = RTT * BW • With 3 packets in flight per round trip: U_sender = 3 (L / R) / (RTT + L / R) = 0.024 / 30.008 = 0.0008 – increase utilization by a factor of 3!
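These utilization figures are easy to reproduce; the sketch below assumes an 8000-bit packet on a 1 Gbps link with a 30 ms RTT, which is what the 0.008 / 30.008 ratio (in milliseconds) implies:

    L = 8000            # packet length in bits (assumed)
    R = 1e9             # link rate in bits per second (assumed)
    RTT = 0.030         # round-trip time in seconds

    t_tx = L / R                                    # transmission time: 8 microseconds
    u_stop_and_wait = t_tx / (RTT + t_tx)           # one packet per round trip
    u_pipelined_3 = 3 * t_tx / (RTT + t_tx)         # three packets per round trip

    print(f"stop-and-wait: {u_stop_and_wait:.5f}")  # ~0.00027
    print(f"pipelined x3:  {u_pipelined_3:.5f}")    # ~0.00080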
  • 64. Sliding Window Protocols • Reliable, in-order delivery of packets • Sender can send “window” of up to N, consecutive unack’ed packets • Receiver makes sure that the packets are delivered in-order to the upper layer • 2 Generic Versions – Go-Back-N – Selective Repeat
  • 65. • For a large link parameter a, the stop-and-wait protocol is inefficient. • A universally accepted flow control procedure is the sliding window protocol. – Frames and acknowledgements are numbered using sequence numbers – The sender maintains a list of sequence numbers (frames) it is allowed to transmit, called the sending window – The receiver maintains a list of sequence numbers it is prepared to receive, called the receiving window – A sending window of size N means that the sender can send up to N frames without the need for an ACK – A window size of N implies buffer space for N frames – For an n-bit sequence number, we have 2^n numbers: 0, 1, · · · , 2^n − 1, but the maximum window size is N = 2^n − 1 (not 2^n) – ACK3 means that the receiver has received frames 0 to 2 correctly and is ready to receive frame 3 (and the rest of the N frames within its window)
  • 66. • In the error-free case, the efficiency or maximum link utilization of the sliding window protocol is U = 1 when N ≥ 2a + 1, and U = N / (1 + 2a) when N < 2a + 1. • Thus it is able to maintain efficiency for a large link parameter a: just use a large window size N. • Note that U = 1 means that the link has no idle time: there is always something on it, either data frames or ACKs.
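A tiny helper that evaluates this utilization formula, under the same error-free assumption (the example link parameter a = 10 is made up):

    def sliding_window_utilization(N, a):
        # U = 1 when the window covers the whole round trip (N >= 2a + 1), else N / (1 + 2a).
        return 1.0 if N >= 2 * a + 1 else N / (1 + 2 * a)

    # Example: a link with a = 10. Stop-and-wait (N = 1) wastes the link;
    # a window of 21 or more keeps it fully utilized.
    for N in (1, 7, 21):
        print(N, round(sliding_window_utilization(N, a=10), 3))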
  • 67. Continue… • What should be the size of pipeline? • How do we handle errors: – Sender and receiver maintain buffer space – Receiver window = 1, – Sender window = n
  • 68. • Consider the case of a 3-bit sequence number with maximum window size N = 7. • This illustration shows that sending and receiving windows can shrink or grow during operation. • The receiver does not need to acknowledge every frame. • If both sending and receiving window sizes are N = 1, the sliding window protocol reduces to stop-and-wait. • In practice, error control must be incorporated with flow control, and we next discuss two common error control mechanisms.
  • 69.
  • 70. Sliding Window: Generic Sender/Receiver States (figure) • Sender buffer regions: Sent & Acked, Sent Not Acked, OK to Send, Not Usable – bounded by Last ACK Received (LAR), Last Packet Sent (LPS) and the Sender Window Size • Receiver buffer regions: Received & Acked, Acceptable Packet, Not Usable – bounded by Next Packet Expected (NPE), Last Packet Acceptable (LPA) and the Receiver Window Size
  • 71. Sliding Window – Sender Side • The sender maintains 3 variables – Sender Window Size (SWS): upper bound on the number of in-flight packets – Last Acknowledgement Received (LAR) – Last Packet Sent (LPS) – Invariant: LPS – LAR <= SWS
  • 72. Sliding Window – Receiver Side • The receiver maintains 3 variables – Receiver Window Size (RWS): upper bound on the number of buffered packets – Last Packet Acceptable (LPA) – Next Packet Expected (NPE) – Invariant: LPA – NPE + 1 <= RWS
  • 73. Go-back-n ARQ • The basic idea of go-back-n error control is: if frame i is damaged, the receiver requests retransmission of all frames starting from frame i • An example: • Notice that all possible cases of damaged frames and ACKs / NAKs must be taken into account • For an n-bit sequence number, the maximum window size is N = 2^n − 1, not N = 2^n; with N = 2^n the following confusion can arise:
  • 74. • Consider n = 3; if N = 8, this is what may happen: • Suppose that the sender transmits frame 0 and gets an ACK1 • It then transmits frames 1,2,3,4,5,6,7,0 (this is allowed, as they are within the sending window of size 8) and gets another ACK1 • This could mean that all eight frames were received correctly • It could also mean that all eight frames were lost, and the receiver is repeating its previous ACK1 • With N = 7, this confusing situation is avoided
  • 75. Go-Back-N (figure: sender window SWS = N with Sent & Acked / Sent Not Acked / OK to Send / Not Usable regions bounded by LAR and LPS; receiver window RWS = 1 bounded by NPE and LPA) • SWS = N: sender can send up to N consecutive unACKed pkts • RWS = 1: receiver has buffer for just 1 packet • Receiver always sends a cumulative ACK for the correctly received pkt with the highest in-order seq # • Out-of-order pkt: discard it and re-ACK the pkt with the highest in-order seq # • Timeout: retransmit ALL packets that have been previously sent but not yet ACKed – therefore the name Go-Back-N
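A simplified Go-Back-N sender loop, just to make the window bookkeeping concrete; the channel, timer and ACK functions are faked stand-ins and the names are my own:

    SWS = 4
    packets = [f"pkt-{i}" for i in range(8)]
    base = 0           # sequence number of the oldest unACKed packet
    next_seq = 0       # sequence number of the next packet to send

    def send(i):
        print("send", packets[i])

    def timeout_occurred():
        return False                 # stand-in: pretend nothing is lost

    def latest_cumulative_ack():
        return next_seq - 1          # stand-in: everything in flight gets ACKed

    while base < len(packets):
        # Keep the window full: at most SWS packets sent but not yet ACKed.
        while next_seq < len(packets) and next_seq - base < SWS:
            send(next_seq)
            next_seq += 1
        if timeout_occurred():
            # Go back N: retransmit every packet sent but not yet ACKed.
            for i in range(base, next_seq):
                send(i)
        else:
            base = latest_cumulative_ack() + 1   # cumulative ACK slides the window forward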
  • 76. GBN in action (SWS = 4)
  • 77. • Why use GBN? – Very simple receiver • Why NOT use GBN? – Throwing away out-of-order packets at the receiver results in extra transmissions, thus lowering the channel utilization: • The channel may become full of retransmissions of old packets rather than useful new packets – Can we do better? • Yes: Buffer out-of-order packets at the receiver and do Selective Repeat (Retransmissions) at the sender
  • 78. Selective-Reject ARQ • In selective-reject ARQ error control, the only frames retransmitted are those that receive a NAK or that time out. • An illustrative example: • Selective-reject would appear to be more efficient than go-back-n, but it is harder to implement and less used • The window size is also more restrictive: for an n-bit sequence number, the maximum window size is N = 2^n / 2 = 2^(n−1) to avoid possible confusion
  • 79. • Go-back-n and selective-reject can be seen as trade-offs between link bandwidth (data rate) and data link layer buffer space. – If link bandwidth is large but buffer space is scarce, go-back-n is preferred – If link bandwidth is small but buffer space is plentiful, selective-reject is preferred
  • 80. Selective Repeat (figure: sender and receiver windows, both of size N, with the same state regions bounded by LAR/LPS on the sender and NPE/LPA on the receiver) • SWS = RWS = N: sender can send up to N consecutive unACKed pkts; receiver can buffer up to N consecutive packets • Receiver individually acknowledges all correctly received pkts – buffers pkts, as needed, for eventual in-order delivery to the upper layer • Sender only resends pkts for which an ACK was not received – the sender keeps a timer for each unACKed pkt
  • 82. Transmission Control Protocol (TCP) • The Transmission Control Protocol (TCP) is a connection- oriented reliable protocol. • It provides a reliable transport service between pairs of processes executing on End Systems (ES) using the network layer service provided by the IP protocol. • The Transmission Control Protocol (TCP) was initially defined in RFC 793. • TCP is a standard protocol with STD number 7.
  • 83. • TCP is used by a large number of applications, including : – Email (SMTP, POP, IMAP) – World wide web ( HTTP, ...) – Most file transfer protocols ( ftp, peer-to-peer file sharing applications , ...) – remote computer access : telnet, ssh, X11, VNC, ... – non-interactive multimedia applications : flash
  • 84.
  • 85. Connection-Oriented Service • In a connection oriented service, a connection is established between source and destination. • Then the data is transferred and at the end the connection is released.
  • 86. Connection Establishment Three protocol scenarios for establishing a connection using a three-way handshake. CR denotes CONNECTION REQUEST. (a) Normal operation, (b) Old CONNECTION REQUEST appearing out of nowhere. (c) Duplicate CONNECTION REQUEST and duplicate ACK.
  • 87. Handshaking Protocol • handshaking is an automated process of negotiation that dynamically sets parameters of a communications channel established between two entities before normal communication over the channel begins. • It follows the physical establishment of the channel and precedes normal information transfer. • The handshaking process usually takes place in order to establish rules for communication when a computer sets about communicating with a foreign device. • When a computer communicates with another device like a modem, printer, or network server, it needs to handshake with it to establish a connection.
  • 88. • A simple handshaking protocol might only involve the receiver sending a message meaning "I received your last message and I am ready for you to send me another one." • A more complex handshaking protocol might allow the sender to ask the receiver if it is ready to receive or for the receiver to reply with a negative acknowledgement meaning "I did not receive your last message correctly, please resend it"
  • 91.
  • 92.
  • 93.
  • 94. • Protocol 1: two-way handshake: • The chief of blue army 1 sends a message to the chief of blue army 2 stating “I propose we attack in the morning of January 1. Is it OK with you?”. • Suppose that the messenger reaches blue army 2 and the chief of blue army 2 agrees with the idea. • He sends the message “yes” back to army 1, and assume that this message also gets through. • This process is called a two-way handshake and it is shown in fig. (a). • The question is: will the attack take place? The answer is probably not, because the chief of blue army 2 does not know whether his reply got through.
  • 95. (figure: (a) two-way handshake – message 1 followed by an acknowledgement; (b) three-way handshake – messages 1, 2 and 3 exchanged over time between blue army 1 and blue army 2)
  • 96. Three Way Handshaking Protocol • Now let us improve the two-way handshake protocol by making it a three-way handshake. • Assuming no messages are lost, blue army 2 will get the acknowledgement. • But now the chief of blue army 1 will hesitate, because he does not know whether the last message he sent got through. • So we could make it a four-way handshake, but that does not help either, because in every such protocol the uncertainty after the last handshake message always remains.
  • 97.
  • 98. • A three-way handshake is a method used in a TCP/IP network to create a connection between a local host/client and a server. The sequence of events is:
  1. Host A sends a TCP SYNchronize packet to Host B.
  2. Host B receives A's SYN.
  3. Host B sends a SYNchronize-ACKnowledgement.
  4. Host A receives B's SYN-ACK.
  5. Host A sends an ACKnowledgement.
  6. Host B receives the ACK. The TCP socket connection is ESTABLISHED.
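In the standard Python socket API this exchange is performed inside connect() and accept(), before any application data is exchanged; a minimal loopback sketch (the address and port are arbitrary example values):

    import socket, threading

    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 54321))
    srv.listen(1)

    def accept_one():
        conn, peer = srv.accept()        # SYN arrives; the OS replies SYN-ACK; the final ACK completes the handshake
        print("server: connection established with", peer)
        conn.close()

    t = threading.Thread(target=accept_one)
    t.start()

    cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    cli.connect(("127.0.0.1", 54321))    # client sends SYN, waits for SYN-ACK, sends the final ACK
    print("client: connected from", cli.getsockname())
    cli.close()
    t.join()
    srv.close()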
  • 99.
  • 100. • TCP can be characterized by the following facilities it provides for the applications using it: – Stream Data Transfer: – From the application's viewpoint, TCP transfers a contiguous stream of bytes through the network. – The application does not have to bother with chopping the data into basic blocks or datagrams. – TCP does this by grouping the bytes in TCP segments, which are passed to IP for transmission to the destination.
  • 101. – Reliability: – TCP assigns a sequence number to each byte transmitted and expects a positive acknowledgment (ACK) from the receiving TCP. – If the ACK is not received within a timeout interval, the data is retransmitted. – Since the data is transmitted in blocks (TCP segments), only the sequence number of the first data byte in the segment is sent to the destination host. – The receiving TCP uses the sequence numbers to rearrange the segments when they arrive out of order, and to eliminate duplicate segments.
  • 102. – Flow Control: – The receiving TCP, when sending an ACK back to the sender, also indicates to the sender the number of bytes it can receive beyond the last received TCP segment, without causing overrun and overflow in its internal buffers. – Multiplexing: – Achieved through the use of ports, just as with UDP.
  • 103. – Logical Connections: – The reliability and flow control mechanisms described above require that TCP initializes and maintains certain status information for each data stream. – The combination of this status, including sockets, sequence numbers and window sizes, is called a logical connection. – Each connection is uniquely identified by the pair of sockets used by the sending and receiving processes. – Full Duplex: – TCP provides for concurrent data streams in both directions.
  • 105. • Source Port: The 16-bit source port number, used by the receiver to reply. • Destination Port: The 16-bit destination port number. • Sequence Number: The sequence number of the first data byte in this segment. If the SYN control bit is set, the sequence number is the initial sequence number (n) and the first data byte is n+1. • Acknowledgment Number: If the ACK control bit is set, this field contains the value of the next sequence number that the receiver is expecting to receive. • Data Offset: The number of 32-bit words in the TCP header. It indicates where the data begins. • Reserved: Six bits reserved for future use; must be zero. • URG: Indicates that the urgent pointer field is significant in this segment. • ACK: Indicates that the acknowledgment field is significant in this segment. • PSH: Push function.
  • 106. • RST: Resets the connection. • SYN: Synchronizes the sequence numbers. • FIN: No more data from the sender. • Window: Used in ACK segments. It specifies the number of data bytes, beginning with the one indicated in the acknowledgment field, that the receiver is willing to accept. • Checksum: The 16-bit one's complement of the one's complement sum of all 16-bit words in a pseudo-header, the TCP header, and the TCP data. • Urgent Pointer: Points to the first data octet following the urgent data. Only significant when the URG control bit is set. • Options: Just as in the case of IP datagram options, options can be either: – A single byte containing the option number – A variable length option • Padding: All zero bytes are used to fill up the TCP header to a total length that is a multiple of 32 bits.
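A small sketch that unpacks a 20-byte TCP header along these lines with Python's struct module (field and variable names are mine; the example segment is made up):

    import struct

    def parse_tcp_header(hdr: bytes):
        (src, dst, seq, ack, off_reserved, flags, window,
         checksum, urgent) = struct.unpack("!HHIIBBHHH", hdr[:20])
        data_offset = off_reserved >> 4                 # number of 32-bit words in the header
        names = ["FIN", "SYN", "RST", "PSH", "ACK", "URG"]
        set_flags = [n for i, n in enumerate(names) if flags & (1 << i)]
        return {"src": src, "dst": dst, "seq": seq, "ack": ack,
                "header_bytes": data_offset * 4, "flags": set_flags,
                "window": window}

    # Example: a hypothetical SYN segment from port 50000 to port 80.
    syn = struct.pack("!HHIIBBHHH", 50000, 80, 1000, 0, 5 << 4, 0b000010, 65535, 0, 0)
    print(parse_tcp_header(syn))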
  • 107. The window principle • A trivial transport protocol is: – send a packet and then wait for an ACK from the receiver before sending the next packet; – if the ACK is not received within a certain amount of time, retransmit the packet. • While this mechanism ensures reliability, it only uses a part of the available network bandwidth.
  • 108.
  • 109. • Now, consider a protocol where the sender groups its packets to be transmitted: – the sender can send all packets within the window without receiving an ACK, but must start a timeout timer for each of them; – the receiver must acknowledge each packet received, indicating the sequence number of the last well- received packet; – the sender slides the window on each ACK received.
  • 110.
  • 111. The window principle • If packet 2 is lost, the receiver does not acknowledge the reception of subsequent data messages. • The sender re- transmits unacknowledged messages after a timeout expires.
  • 112. • The window principle is used in TCP, but: –the window principle is used at the byte level, that is, the segments sent and ACKs received carry byte-sequence numbers and the window size is expressed as a number of bytes; –the window size is determined by the receiver when the connection is established and is variable during the data transfer.
  • 114.
  • 115. Congestion control • The TCP congestion control scheme was initially proposed by Van Jacobson in [Jacobson1988]. • Essential strategy :: The TCP host sends packets into the network without a reservation and then the host reacts to observable events. • Originally TCP assumed FIFO queuing. • Basic idea :: each source determines how much capacity is available to a given flow in the network. • ACKs are used to ‘pace’ the transmission of packets such that TCP is “self-clocking”. • TCP relies on Additive Increase and Multiplicative Decrease (AIMD). • To implement AIMD, a TCP host must be able to control its transmission rate.
  • 116.
  • 117. Standard TCP Congestion Control Algorithms • One of the most common implementations of TCP is called Reno, and combines four different mechanisms : – Slow start – Congestion avoidance – Fast retransmit – Fast recovery
  • 118. Slow start • Slow Start, a requirement for TCP software implementations, is a mechanism used by the sender to control the transmission rate, otherwise known as sender-based flow control. • The rate of acknowledgements returned by the receiver determines the rate at which the sender can transmit data. • When a TCP connection first begins, the Slow Start algorithm initializes the congestion window to one segment, which is the maximum segment size (MSS) announced by the receiver during the connection establishment phase.
  • 119. • When acknowledgements are returned by the receiver, the congestion window increases by one segment for each acknowledgement returned. • Thus, the sender can transmit the minimum of the congestion window and the advertised window of the receiver, which is simply called the transmission window. • For example, the first successful transmission and acknowledgement of a TCP segment increases the window to two segments. • After successful transmission of these two segments and acknowledgements completes, the window is increased to four segments. • Then eight segments, then sixteen segments and so on, doubling from there on out up to the maximum window size advertised by the receiver or until congestion finally does occur.
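A toy calculation of this doubling, counting the congestion window in segments and assuming an advertised window of 64 segments (an arbitrary example value):

    # Each returning ACK adds one segment to cwnd, so over a full round trip
    # the window doubles: 1, 2, 4, 8, ... up to the receiver's advertised window.
    cwnd = 1
    advertised_window = 64           # assumed value
    history = []
    while cwnd <= advertised_window:
        history.append(cwnd)
        acks_this_rtt = cwnd         # one ACK per segment sent in this round trip
        cwnd += acks_this_rtt        # +1 segment per ACK
    print(history)                   # [1, 2, 4, 8, 16, 32, 64]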
  • 120. • At some point the congestion window may become too large for the network or network conditions may change such that packets may be dropped. • Packets lost will trigger a timeout at the sender. • When this happens, the sender goes into congestion avoidance mode as described in the next section.
  • 121. Congestion Avoidance • During the initial data transfer phase of a TCP connection the Slow Start algorithm is used. • However, there may be a point during Slow Start that the network is forced to drop one or more packets due to overload or congestion. • If this happens, Congestion Avoidance is used to slow the transmission rate. • However, Slow Start is used in conjunction with Congestion Avoidance as the means to get the data transfer going again so it doesn’t slow down and stay slow.
  • 122. • In the Congestion Avoidance algorithm a retransmission timer expiring or the reception of duplicate ACKs can implicitly signal the sender that a network congestion situation is occurring. • The sender immediately sets its transmission window to one half of the current window size (the minimum of the congestion window and the receiver’s advertised window size), but to at least two segments. • If congestion was indicated by a timeout, the congestion window is reset to one segment, which automatically puts the sender into Slow Start mode. • If congestion was indicated by duplicate ACKs, the Fast Retransmit and Fast Recovery algorithms are invoked
  • 123. • As data is received during Congestion Avoidance, the congestion window is increased. • However, Slow Start is only used up to the halfway point where congestion originally occurred. • This halfway point was recorded earlier as the new transmission window. • After this halfway point, the congestion window is increased by one segment for all segments in the transmission window that are acknowledged. • This mechanism will force the sender to more slowly grow its transmission rate, as it will approach the point where congestion had previously been detected.
  • 124. Fast Retransmit • When a duplicate ACK is received, the sender does not know if it is because a TCP segment was lost or simply that a segment was delayed and received out of order at the receiver. • If the receiver can re-order segments, it should not be long before the receiver sends the latest expected acknowledgement. • Typically no more than one or two duplicate ACKs should be received when simple out of order conditions exist.
  • 125. • If however more than two duplicate ACKs are received by the sender, it is a strong indication that at least one segment has been lost. • The TCP sender will assume enough time has lapsed for all segments to be properly re-ordered by the fact that the receiver had enough time to send three duplicate ACKs. • When three or more duplicate ACKs are received, the sender does not even wait for a retransmission timer to expire before retransmitting the segment. • This process is called the Fast Retransmit algorithm. • Immediately following Fast Retransmit is the Fast Recovery algorithm.
  • 126. Fast Recovery • Since the Fast Retransmit algorithm is used when duplicate ACKs are being received, the TCP sender has implicit knowledge that there is data still flowing to the receiver. • Why? The reason is because duplicate ACKs can only be generated when a segment is received. • This is a strong indication that serious network congestion may not exist and that the lost segment was a rare event. • So instead of reducing the flow of data abruptly by going all the way into Slow Start, the sender only enters Congestion Avoidance mode.
  • 127. • Rather than start at a window of one segment as in Slow Start mode, the sender resumes transmission with a larger window, incrementing as if in Congestion Avoidance mode.
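The four Reno mechanisms can be summarized as event handlers acting on the congestion window; this is only a schematic sketch (cwnd and ssthresh counted in segments, initial values assumed), not a faithful Reno implementation:

    class RenoSketch:
        def __init__(self):
            self.cwnd = 1.0          # congestion window, in segments (slow start)
            self.ssthresh = 64.0     # assumed initial threshold

        def on_new_ack(self):
            if self.cwnd < self.ssthresh:
                self.cwnd += 1                      # slow start: exponential growth
            else:
                self.cwnd += 1.0 / self.cwnd        # congestion avoidance: ~+1 segment per RTT

        def on_timeout(self):
            self.ssthresh = max(self.cwnd / 2, 2)   # remember half the current window...
            self.cwnd = 1.0                         # ...and fall back to slow start

        def on_triple_dup_ack(self):
            self.ssthresh = max(self.cwnd / 2, 2)   # fast retransmit of the missing segment
            self.cwnd = self.ssthresh               # fast recovery: resume in congestion avoidance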
  • 128. • Congestion Control is concerned with efficiently using a network at high load. • Several techniques can be employed. These include: – Warning bit – Choke packets – Load shedding – Random early discard – Traffic shaping • The first 3 deal with congestion detection and recovery. The last 2 deal with congestion avoidance.
  • 129. 129 Warning Bit • A special bit in the packet header is set by the router to warn the source when congestion is detected. • The bit is copied and piggy-backed on the ACK and sent to the sender. • The sender monitors the number of ACK packets it receives with the warning bit set and adjusts its transmission rate accordingly.
  • 130. 130 Choke Packets • A more direct way of telling the source to slow down. • A choke packet is a control packet generated at a congested node and transmitted to restrict traffic flow. • The source, on receiving the choke packet must reduce its transmission rate by a certain percentage. • An example of a choke packet is the ICMP Source Quench Packet.
  • 131. 131 Hop-by-Hop Choke Packets • Over long distances or at high speeds, choke packets are not very effective. • A more efficient method is to send choke packets hop-by-hop. • This requires each hop to reduce its transmission even before the choke packet arrives at the source.
  • 132. 132 Load Shedding • When buffers become full, routers simply discard packets. • Which packet is chosen to be the victim depends on the application and on the error strategy used in the data link layer. • For a file transfer, for example, we cannot discard older packets, since this would cause a gap in the received data. • For real-time voice or video it is probably better to throw away old data and keep new packets. • Get the application to mark packets with a discard priority.
  • 133. 133 Random Early Discard (RED) • This is a proactive approach in which the router discards one or more packets before the buffer becomes completely full. • Each time a packet arrives, the RED algorithm computes the average queue length, avg. • If avg is lower than some lower threshold, congestion is assumed to be minimal or non-existent and the packet is queued.
  • 134. 134 RED, cont. • If avg is greater than some upper threshold, congestion is assumed to be serious and the packet is discarded. • If avg is between the two thresholds, this might indicate the onset of congestion. The probability of congestion is then calculated.
  • 135. 135 Traffic Shaping • Another method of congestion control is to “shape” the traffic before it enters the network. • Traffic shaping controls the rate at which packets are sent (not just how many). • Used in ATM and Integrated Services networks. • At connection set-up time, the sender and carrier negotiate a traffic pattern (shape). • Two traffic shaping algorithms are: – Leaky Bucket – Token Bucket
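A bare-bones token bucket shaper to illustrate the second of these algorithms (the rate and bucket capacity below are arbitrary example values):

    import time

    class TokenBucket:
        def __init__(self, rate, capacity):
            self.rate = rate               # tokens (bytes) added per second
            self.capacity = capacity       # maximum burst size, in tokens
            self.tokens = capacity
            self.last = time.monotonic()

        def allow(self, packet_size):
            now = time.monotonic()
            # Refill tokens for the time that has passed, up to the bucket capacity.
            self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= packet_size:
                self.tokens -= packet_size # conforming packet: spend tokens and send it
                return True
            return False                   # non-conforming: queue or drop the packet

    bucket = TokenBucket(rate=1000, capacity=500)   # 1000 bytes/s, 500-byte bursts
    print(bucket.allow(400), bucket.allow(400))     # first burst passes, second must wait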
  • 136. Piggybacking • Piggybacking is a bi-directional data transmission technique in the network layer (OSI model). • In all practical situations, the transmission of data needs to be bi-directional. This is called full-duplex transmission. • We can achieve this full-duplex transmission by having two separate channels – one for forward data transfer and the other for the reverse transfer, i.e. for acknowledgements.
  • 137. • A better solution would be to use each channel (forward & reverse) to transmit frames both ways, with both channels having the same capacity. • Suppose A and B are two users. Then the data frames from A to B are intermixed with the acknowledgements from A to B. • One more improvement that can be made is piggybacking. • The concept is explained as follows:
  • 138. • In two-way communication, whenever a data frame is received, the receiver waits and does not send the control frame (acknowledgement) back to the sender immediately. • The receiver waits until its network layer passes in the next data packet. The delayed acknowledgement is then attached to this outgoing data frame. • This technique of temporarily delaying the acknowledgement so that it can be hooked onto the next outgoing data frame is known as piggybacking.
  • 139. • The major advantage of piggybacking is better use of available channel bandwidth. • The disadvantages of piggybacking are: 1. Additional complexity. 2. If the data link layer waits too long before transmitting the acknowledgement, then retransmission of frame would take place.