The document discusses transport layer protocols. It begins by explaining that the transport layer sits between the application and network layers, providing services to applications and receiving services from the network layer. It then notes that the three major transport layer protocols are UDP, TCP, and SCTP. UDP provides a simple, unreliable connectionless service. TCP provides a reliable, connection-oriented service through mechanisms like flow and error control. SCTP combines features of UDP and TCP, providing both connection-oriented and connectionless services.
Inter-process communication (IPC) allows processes to communicate and synchronize. Common IPC methods include pipes, message queues, shared memory, semaphores, and mutexes. Pipes provide unidirectional communication while message queues allow full-duplex communication through message passing. Shared memory enables processes to access the same memory region. Direct IPC requires processes to explicitly name communication partners while indirect IPC uses shared mailboxes.
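The unidirectional pipe described above can be seen in a few lines of Python; a minimal single-process sketch (the message bytes are arbitrary, and in practice the two ends would usually sit on opposite sides of a fork):

```python
import os

# Create a unidirectional pipe: a read-end and a write-end file descriptor.
read_fd, write_fd = os.pipe()

# The writer pushes bytes into one end...
os.write(write_fd, b"hello from the writer")
os.close(write_fd)

# ...and they come out of the other end, in order.
message = os.read(read_fd, 1024)
os.close(read_fd)
print(message.decode())
```

Closing the write end signals end-of-stream to the reader, which is how pipe consumers know the producer is done.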
Fast Ethernet increased the bandwidth of standard Ethernet from 10 Mbps to 100 Mbps. It used the same CSMA/CD access method and frame format as standard Ethernet but with some changes to address the higher speed. Fast Ethernet was implemented over twisted pair cables using 100BASE-TX or over fiber optic cables using 100BASE-FX. The increased speed enabled Fast Ethernet to compete with other high-speed LAN technologies of the time like FDDI.
RANDOM ACCESS PROTOCOL IN COMMUNICATION, by Amogha A K
In random access, each station has the right to send data. However, if more than one station tries to send at the same time, a collision occurs; protocols were developed to avoid or resolve these collisions.
In the random access method, no station is superior to another and none is assigned control over the others.
When a station has data to send, it follows a procedure defined by the protocol to decide whether to send.
This document provides an overview of congestion control presented by a group. It defines congestion as occurring when there is too much traffic on a subnet, causing router buffers to overflow. The main causes of congestion are identified as insufficient memory, slow processors, high packet arrival rates, and low-bandwidth lines. Several principles for preventing congestion are discussed, including load shedding, choke packets, traffic shaping using leaky bucket and token bucket algorithms, and random early detection. The presentation concludes with the group welcoming questions.
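The token bucket shaper mentioned above is small enough to sketch directly; this is an illustrative model (the class name and tick-based clock are my own simplification, not code from the document):

```python
class TokenBucket:
    """Token bucket traffic shaper: tokens accumulate at `rate` per tick,
    up to `capacity`; a packet of size n may be sent only if n tokens are
    available, and sending consumes them."""

    def __init__(self, rate, capacity):
        self.rate = rate
        self.capacity = capacity
        self.tokens = 0

    def tick(self):
        # Called once per time unit: add tokens, clip at bucket capacity.
        self.tokens = min(self.capacity, self.tokens + self.rate)

    def try_send(self, size):
        # A packet passes only if enough tokens have accumulated.
        if size <= self.tokens:
            self.tokens -= size
            return True
        return False
```

Unlike the leaky bucket, which forces a constant output rate, the token bucket lets an idle sender save up credit and then emit a burst of up to `capacity` at once.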
This document discusses the Transmission Control Protocol (TCP) which provides reliable, connection-oriented data transmission over the internet. TCP establishes a virtual connection between endpoints, ensuring reliable delivery through mechanisms like positive acknowledgement and retransmission. It uses a sliding window algorithm to guarantee reliable and in-order delivery while enforcing flow control between sender and receiver. Key aspects of TCP include connection establishment and termination, port numbers, segments, headers, and addressing end-to-end issues over heterogeneous networks.
This document summarizes key topics related to data link control and protocols. It discusses framing methods like fixed-size and variable-size framing. It also covers flow control, error control, and protocols for both noiseless and noisy channels. Specific protocols described include the Simplest Protocol, Stop-and-Wait Protocol, Stop-and-Wait ARQ, Go-Back-N ARQ, and Selective Repeat ARQ. The document provides details on their design, algorithms, and flow diagrams to illustrate how each protocol handles framing, flow control, and error control.
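The Stop-and-Wait ARQ flow described above (send one frame, wait for its acknowledgement, retransmit on loss) can be simulated in a few lines; the channel model and function name below are my own illustration, not from the document:

```python
import random

def stop_and_wait(frames, loss_prob=0.3, seed=1):
    """Simulate Stop-and-Wait ARQ over a lossy channel: send one frame,
    wait for its ACK, retransmit on simulated loss. Sequence numbers
    alternate 0/1, which is all this protocol needs."""
    rng = random.Random(seed)
    delivered, transmissions, seq = [], 0, 0
    for frame in frames:
        while True:
            transmissions += 1
            if rng.random() < loss_prob:    # frame or ACK lost: timeout
                continue                    # ...so retransmit the same frame
            delivered.append((seq, frame))  # receiver accepts it, ACKs
            seq ^= 1                        # toggle the 1-bit sequence number
            break
    return delivered, transmissions
```

The transmission count exceeding the frame count measures the cost of the lossy channel; Go-Back-N and Selective Repeat exist precisely to keep the pipe fuller than one outstanding frame.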
The document discusses MAC layer protocols, specifically CSMA/CD and CSMA/CA.
CSMA/CD is used for wired networks and works by having nodes listen to check if the medium is free before transmitting. If a collision is detected, transmission stops and resumes after a backoff time.
CSMA/CA is used for wireless networks and aims to avoid collisions through the use of request to send, clear to send, and acknowledgement frames exchanged between nodes, rather than detecting collisions.
Both protocols reduce collisions compared to simple CSMA, but CSMA/CA is less efficient and cannot completely eliminate collisions in wireless networks because of issues like the hidden terminal problem.
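The backoff step in CSMA/CD is worth making concrete: after the n-th consecutive collision, a station waits a random number of slot times drawn from an exponentially growing range. A minimal sketch (the cap of 10 matches classic Ethernet; the function name is mine):

```python
import random

def backoff_slots(attempt, rng=random):
    """Binary exponential backoff: after the `attempt`-th consecutive
    collision, wait a random number of slot times drawn uniformly from
    0 .. 2**min(attempt, 10) - 1."""
    k = min(attempt, 10)
    return rng.randrange(2 ** k)
```

After the first collision a station waits 0 or 1 slots; after the second, 0 to 3; and so on, which spreads contending stations apart in time quickly without any coordination.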
The transport layer provides logical communication between application processes running on different hosts. It implements protocols like TCP and UDP that provide services such as multiplexing, reliable data transfer, and flow control. TCP is connection-oriented and provides reliable, ordered delivery of streams of bytes. UDP is connectionless and offers low-latency communications at the cost of reliability. The transport layer addresses ensure end-to-end delivery of data packets from source to destination applications.
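The UDP side of this is visible in a few lines of Python; a minimal loopback sketch (ports and payload are arbitrary, and the two sockets would normally live in different processes):

```python
import socket

# Two UDP sockets on the loopback interface; the OS assigns a free port.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))        # port 0: let the OS pick one
receiver.settimeout(5)
port = receiver.getsockname()[1]       # this port number is what the
                                       # transport layer demultiplexes on
sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"ping", ("127.0.0.1", port))   # no connection setup at all

data, addr = receiver.recvfrom(1024)   # one self-contained datagram
sender.close(); receiver.close()
print(data)
```

Note there is no handshake, no acknowledgement, and no ordering guarantee; the port number alone routes the datagram to the right application.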
IPC allows processes to communicate and share resources. There are several common IPC mechanisms, including message passing, shared memory, semaphores, files, signals, sockets, message queues, and pipes. Message passing involves establishing a communication link and exchanging fixed or variable sized messages using send and receive operations. Shared memory allows processes to access the same memory area. Semaphores are used to synchronize processes. Files provide durable storage that outlives individual processes. Signals asynchronously notify processes of events. Sockets enable two-way point-to-point communication between processes. Message queues allow asynchronous communication where senders and receivers do not need to interact simultaneously. Pipes create a pipeline between processes by connecting standard streams.
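The asynchronous-message-queue pattern described above (senders and receivers need not interact simultaneously) can be sketched with Python's thread-safe queue; real OS message queues are kernel objects, so this thread-level analogy is an illustration only:

```python
import queue
import threading

mq = queue.Queue()   # FIFO message queue, safe for concurrent access

def producer():
    # The sender enqueues messages and moves on; no rendezvous needed.
    for i in range(3):
        mq.put(f"msg-{i}")

t = threading.Thread(target=producer)
t.start()
t.join()

# The receiver drains the queue later, at its own pace.
received = [mq.get() for _ in range(3)]
print(received)
```

The decoupling is the point: the producer had already exited before the consumer read a single message.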
CSMA/CD is a media access control method used in early Ethernet technology that uses carrier sensing to detect other signals while transmitting. It improves on plain CSMA by terminating transmission as soon as a collision is detected, shortening the time before a resend. There are three persistence strategies for CSMA: 1-persistent, non-persistent, and p-persistent. A CSMA/CD network can detect a collision within twice the propagation delay, so colliding transmissions can be aborted promptly. It was used in older Ethernet variants and is still supported for backwards compatibility.
This document discusses deadlock avoidance techniques. It explains the concepts of safe and unsafe states when allocating resources to processes. The resource allocation graph algorithm uses claim and assignment edges to model potential resource requests. Banker's algorithm requires processes to declare maximum resource needs upfront. It uses an allocation matrix and need matrix to determine if allocating resources to a process will result in an unsafe state. An example demonstrates tracking available resources and determining if processes can safely obtain requested resources without causing deadlock.
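The safety check at the heart of Banker's algorithm is short enough to show in full; this is a generic sketch of the textbook procedure (the function name and matrix encoding are mine):

```python
def is_safe(available, allocation, need):
    """Banker's algorithm safety check: return True iff some ordering
    exists in which every process can obtain its remaining `need` and
    then release its `allocation`. All arguments are lists of per-
    resource-type vectors (allocation[i] and need[i] belong to P_i)."""
    work = list(available)
    finished = [False] * len(allocation)
    progressed = True
    while progressed:
        progressed = False
        for i, done in enumerate(finished):
            if not done and all(n <= w for n, w in zip(need[i], work)):
                # Process i can run to completion and free its resources.
                work = [w + a for w, a in zip(work, allocation[i])]
                finished[i] = True
                progressed = True
    return all(finished)
```

A resource request is granted only if the state that would result still passes this check; otherwise the process waits even though the resources are physically free.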
Distributed deadlock detection algorithms allow sites in a distributed system to collectively detect deadlocks by maintaining and analyzing wait-for graphs (WFGs) that model process-resource dependencies. There are several approaches:
1. Centralized algorithms have a single control site that maintains the global WFG but are inefficient due to congestion.
2. Ho-Ramamoorthy algorithms improve this by having each site send periodic status reports to detect differences indicative of deadlocks.
3. Distributed algorithms avoid a single point of failure by having sites detect cycles in parallel through techniques like path-pushing, edge-chasing, and diffusion-based computations across the distributed WFG.
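In every variant above, "deadlock" means a cycle in the wait-for graph, so the core operation is cycle detection. A centralized DFS version is the simplest to show (distributed edge-chasing effectively pushes a probe along these same edges); the graph encoding is my own:

```python
def has_cycle(wfg):
    """Detect a cycle in a wait-for graph given as
    {process: [processes it waits for]}. A cycle means deadlock."""
    WHITE, GREY, BLACK = 0, 1, 2   # unvisited / on current path / done
    color = {}

    def dfs(p):
        color[p] = GREY
        for q in wfg.get(p, []):
            c = color.get(q, WHITE)
            if c == GREY:          # back edge: we reached a process
                return True        # already on the current wait chain
            if c == WHITE and dfs(q):
                return True
        color[p] = BLACK
        return False

    return any(color.get(p, WHITE) == WHITE and dfs(p) for p in wfg)
```

In the distributed setting no site holds the whole `wfg` dict; that is exactly the gap that path-pushing (ship graph fragments) and edge-chasing (ship probe messages) close.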
This document discusses interprocess communication and distributed systems. It covers several key topics:
- Application programming interfaces (APIs) for internet protocols like TCP and UDP, which provide building blocks for communication protocols.
- External data representation standards for transmitting objects between processes on different machines.
- Client-server communication models like request-reply that allow processes to invoke methods on remote objects.
- Group communication using multicast to allow a message from one client to be sent to multiple server processes simultaneously.
This document provides an overview of data link control (DLC) and data link layer protocols. It discusses the key functions of DLC including framing, flow control, and error control. Framing involves encapsulating data frames with header information like source and destination addresses. Flow control manages the flow of data between nodes while error control handles detecting and correcting errors. Common data link layer protocols described include simple protocol, stop-and-wait protocol, and High-Level Data Link Control (HDLC). HDLC is a bit-oriented protocol that supports full-duplex communication over both point-to-point and multipoint links. It uses three types of frames: unnumbered, information, and supervisory frames.
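Because HDLC is bit-oriented, its framing relies on bit stuffing: after five consecutive 1s in the payload, the sender inserts a 0 so the data can never mimic the 01111110 flag. A small sketch of both directions (list-of-bits encoding is my own simplification):

```python
def bit_stuff(bits):
    """Sender side: insert a 0 after any run of five consecutive 1s."""
    out, run = [], 0
    for b in bits:
        out.append(b)
        run = run + 1 if b == 1 else 0
        if run == 5:
            out.append(0)   # the stuffed bit
            run = 0
    return out

def bit_unstuff(bits):
    """Receiver side: drop the 0 that follows any run of five 1s."""
    out, run, skip = [], 0, False
    for b in bits:
        if skip:            # this bit is the stuffed 0: discard it
            skip = False
            run = 0
            continue
        out.append(b)
        run = run + 1 if b == 1 else 0
        if run == 5:
            skip = True
    return out
```

Round-tripping any payload through stuff-then-unstuff returns it unchanged, while guaranteeing the stuffed stream never contains six 1s in a row.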
The document discusses deadlocks in computer systems. It defines deadlock, presents examples, and describes four conditions required for deadlock to occur. Several methods for handling deadlocks are discussed, including prevention, avoidance, detection, and recovery. Prevention methods aim to ensure deadlocks never occur, while avoidance allows the system to dynamically prevent unsafe states. Detection identifies when the system is in a deadlocked state.
Transport layer protocols provide services like reliable data transfer and connection establishment between applications on networked devices. They address this need through protocols like TCP and UDP. TCP provides reliable, ordered data streams using mechanisms like three-way handshake, sequence numbers, acknowledgments, retransmissions, flow control via sliding windows, and connection termination handshaking. UDP provides simple datagram transmissions without reliability or flow control.
This document provides an overview of distributed operating systems. It discusses the motivation for distributed systems including resource sharing, reliability, and computation speedup. It describes different types of distributed operating systems like network operating systems where users are aware of multiple machines, and distributed operating systems where users are not aware. It also covers network structures, topologies, communication structures, protocols, and provides an example of networking. The objectives are to provide a high-level overview of distributed systems and discuss the general structure of distributed operating systems.
This document presents an overview of computer network congestion and congestion control techniques. It defines congestion as occurring when too many packets are present in a network link, causing queues to overflow and packets to drop. It then discusses factors that can cause congestion as well as the costs. It outlines open-loop and closed-loop congestion control approaches. Specific algorithms covered include leaky bucket, token bucket, choke packets, hop-by-hop choke packets, and load shedding. The document concludes by noting the importance of efficient congestion control techniques with room for improvement.
This document summarizes an introduction to MPI lecture. It outlines the lecture topics which include models of communication for parallel programming, MPI libraries, features of MPI, programming with MPI, using the MPI manual, compilation and running MPI programs, and basic MPI concepts. It provides examples of "Hello World" programs in C, Fortran, and C++. It also discusses what was learned in the lecture which includes processes, communicators, ranks, and the default communicator MPI_COMM_WORLD. The document concludes with noting the general MPI program structure involves initialization, communication/computation, and finalization steps. For homework, it asks to modify the previous "Hello World" program to also print the processor name executing each process using MPI_
TCP uses congestion control to determine how much capacity is available in the network and regulate how many packets can be in transit. It uses additive increase/multiplicative decrease (AIMD) where the congestion window is increased slowly with each ACK but halved upon timeout. Slow start is used initially and after idle periods to grow the window exponentially until congestion is detected. Fast retransmit and fast recovery help detect and recover from packet loss without requiring a timeout.
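The AIMD rule alone is easy to trace; the sketch below deliberately omits slow start, fast retransmit, and fast recovery to show just the additive-increase/multiplicative-decrease skeleton (the event encoding is my own):

```python
def aimd(events, mss=1, initial=1):
    """Trace an AIMD congestion window. `events` is a sequence of
    'ack' (one full window acknowledged, i.e. one RTT of progress)
    or 'loss' (a timeout or duplicate-ACK loss signal)."""
    cwnd = initial
    trace = [cwnd]
    for e in events:
        if e == "ack":
            cwnd += mss                   # additive increase: +1 MSS per RTT
        else:
            cwnd = max(mss, cwnd // 2)    # multiplicative decrease: halve
        trace.append(cwnd)
    return trace
```

The resulting sawtooth (grow linearly, halve on loss, repeat) is the signature shape of a long-lived TCP flow's window over time.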
S. Vijayalakshmi, M.Sc. (CS), discusses Media Access Control and multiple access protocols. The main task of MAC protocols is to minimize collisions and utilize bandwidth by determining when nodes can access the shared channel, what to do when the channel is busy, and how to handle collisions. Early protocols like Aloha and slotted Aloha were inefficient at high loads due to many collisions. CSMA protocols reduce collisions by having nodes listen first before transmitting, but collisions are still possible due to propagation delays.
This document discusses various techniques for process synchronization. It begins by defining process synchronization as coordinating access to shared resources between processes to maintain data consistency. It then discusses critical sections, where shared data is accessed, and solutions like Peterson's algorithm and semaphores to ensure only one process accesses the critical section at a time. Semaphores use wait and signal operations on a shared integer variable to synchronize processes. The document covers binary and counting semaphores and provides an example of their use.
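The counting-semaphore wait/signal pattern can be demonstrated with the classic bounded-buffer arrangement; a thread-based sketch (buffer size and item count are arbitrary choices):

```python
import threading

# Counting semaphores guarding a bounded buffer of capacity 2.
items = []
empty = threading.Semaphore(2)   # counts free slots
full = threading.Semaphore(0)    # counts filled slots
lock = threading.Lock()          # mutual exclusion on the list itself

def producer():
    for i in range(4):
        empty.acquire()          # wait(empty): block if buffer is full
        with lock:
            items.append(i)
        full.release()           # signal(full): an item is available

consumed = []
def consumer():
    for _ in range(4):
        full.acquire()           # wait(full): block if buffer is empty
        with lock:
            consumed.append(items.pop(0))
        empty.release()          # signal(empty): a slot is free again

t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer)
t1.start(); t2.start()
t1.join(); t2.join()
print(consumed)
```

The two semaphores handle synchronization (who must wait), while the lock handles the critical section (mutual exclusion on the shared list); neither alone suffices.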
This document discusses implementing a parallel merge sort algorithm using MPI (Message Passing Interface). It describes the background of MPI and how it can be used for communication between processes. It provides details on the dataset used, MPI functions for initialization, communication between processes, and summarizes the results which show a decrease in runtime when increasing the number of processors.
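The per-process building block of that parallel sort is an ordinary merge: in an mpi4py-style implementation each rank sorts its slice locally and the root merges the received slices. The sketch below shows only that sequential kernel, with no MPI calls (the function names are mine):

```python
def merge(left, right):
    """Merge two sorted lists into one sorted list; this is the step
    the root rank repeats on the slices it gathers from workers."""
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    return out + left[i:] + right[j:]   # append whichever side remains

def merge_sort(xs):
    """Plain recursive merge sort; the MPI version replaces the two
    recursive calls with work handed to other processes."""
    if len(xs) <= 1:
        return xs
    mid = len(xs) // 2
    return merge(merge_sort(xs[:mid]), merge_sort(xs[mid:]))
```

The speedup reported in the document comes from running the recursive halves on separate processors; the merge itself stays sequential, which is why scaling eventually flattens.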
Interprocess communication (IPC) is a set of programming interfaces that allow a programmer to coordinate activities among different program processes that can run concurrently in an operating system. This allows a program to handle many user requests at the same time. Since even a single user request may result in multiple processes running in the operating system on the user's behalf, the processes need to communicate with each other. The IPC interfaces make this possible. Each IPC method has its own advantages and limitations, so it is not unusual for a single program to use all of the IPC methods.
IPC methods include pipes and named pipes; message queueing; semaphores; shared memory; and sockets.
Carrier-sense multiple access with collision detection (CSMA/CD) is a media access control method used most notably in early Ethernet technology for local area networking. It uses carrier sensing to defer transmissions until no other stations are transmitting.
Independent processes operate concurrently without affecting each other, while cooperating processes can impact one another. Inter-process communication (IPC) allows processes to share information, improve computation speed, and share resources. The two main types of IPC are shared memory and message passing. Shared memory uses a common memory region for fast communication, while message passing involves establishing communication links and exchanging messages without shared variables. Key considerations for message passing include direct vs indirect communication and synchronous vs asynchronous messaging.
This document discusses protocol layering in communication networks. It introduces the need for protocol layering when communication becomes complex. Protocol layering involves dividing communication tasks across different layers, with each layer having its own protocol. The document then discusses two principles of protocol layering: 1) each layer must support bidirectional communication and 2) the objects under each layer must be identical at both sites. It provides an overview of the OSI 7-layer model and describes the basic functions of each layer.
The document discusses the Transmission Control Protocol (TCP) and User Datagram Protocol (UDP). It provides details on:
- UDP is a connectionless protocol that provides unreliable datagram delivery. It has less overhead than TCP but also less features.
- TCP is a connection-oriented protocol that provides reliable, ordered delivery of streams of bytes. It uses three-way handshake for connection establishment, acknowledgments, and network congestion/flow control.
- Both protocols use port numbers to identify applications on hosts. TCP segments carry sequence numbers and acknowledgment numbers to support reliability.
The document provides in-depth explanations of features like multiplexing, error/flow control, and congestion control.
This document discusses transport layer protocols. It begins by introducing the three main transport layer protocols in TCP/IP - UDP, TCP, and SCTP. It then focuses on UDP and TCP, explaining their packet formats, features, and how they provide different types of services. For UDP, it describes how it is a simple connectionless protocol suited for applications that require low latency. For TCP, it explains how it provides reliable, in-order byte streams using connection establishment and maintenance features like flow control, congestion control, and error recovery. The document contains examples and diagrams to illustrate these concepts.
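The UDP packet format mentioned above is simple enough to parse by hand: an 8-byte header of four 16-bit big-endian fields (source port, destination port, length, checksum) followed by the payload. A small sketch using Python's `struct` module (the sample values are arbitrary):

```python
import struct

def parse_udp_header(segment):
    """Parse the fixed 8-byte UDP header: source port, destination
    port, length, and checksum, each a 16-bit big-endian field."""
    src, dst, length, checksum = struct.unpack("!HHHH", segment[:8])
    return {"src_port": src, "dst_port": dst,
            "length": length, "checksum": checksum,
            "payload": segment[8:]}

# Build a sample segment: port 5000 -> 53, total length 8 + 4, checksum 0.
sample = struct.pack("!HHHH", 5000, 53, 12, 0) + b"data"
hdr = parse_udp_header(sample)
print(hdr["dst_port"], hdr["length"])
```

The tiny fixed header is the whole story for UDP; contrast this with TCP, whose header adds sequence and acknowledgment numbers, window size, and flags to support the connection-oriented features the document describes.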
UDP and TCP Protocol & Encrytion and its algorithmAyesha Tahir
The document discusses the TCP/IP protocol suite and the UDP and TCP transport layer protocols. UDP is a connectionless, unreliable protocol that provides basic process-to-process communication with minimal overhead. TCP is a connection-oriented, reliable protocol that establishes virtual connections between processes, provides reliable in-order data delivery through flow and error control mechanisms, and allows processes to communicate via data streams. Both protocols use port numbers to identify communicating processes and encapsulate data in IP datagrams for transmission.
This document summarizes key concepts about the transport layer in computer networks. It discusses:
1. The transport layer is responsible for process-to-process delivery of data across a network. This involves delivering packets from one process to another, often using a client-server model.
2. There are two main transport layer protocols - UDP, which is a connectionless and unreliable protocol, and TCP, which establishes connections and provides reliable data delivery.
3. TCP and UDP use port numbers along with IP addresses to uniquely identify processes. TCP also implements flow and error control to ensure reliable data transfer.
UDP is a connectionless transport protocol that does not guarantee packet delivery or order. It is faster than TCP but does not ensure reliability. UDP packets have a header containing source and destination port numbers as well as length fields. The checksum field allows detecting errors but packets are not retransmitted if errors occur. UDP is suitable for real-time applications where speed is critical and packet loss can be tolerated.
TCP and UDP are transport layer protocols that package and deliver data between applications. TCP provides reliable, ordered delivery through connection establishment and packet sequencing. UDP provides faster, unreliable datagram delivery without connections. Common applications using TCP include HTTP, FTP, and SMTP. Common UDP applications include DNS, DHCP, and streaming media.
The document discusses the transport layer in networking. It describes two main transport protocols:
1) UDP is a connectionless protocol that provides best-effort delivery of datagrams across IP networks. It uses port numbers for demultiplexing but does not provide reliability.
2) TCP is a connection-oriented protocol that provides reliable, in-order delivery of streams of bytes between applications over unreliable IP networks. It uses a three-way handshake to establish connections and provides flow control and error checking.
The transport layer chapter discusses process-to-process delivery and the transport layer protocols TCP and UDP. TCP provides reliable, connection-oriented data transfer using sequencing, acknowledgements and retransmissions. UDP provides simpler, connectionless delivery without reliability. Well-known ports are assigned for standard services like DNS, HTTP, FTP. TCP uses sliding windows and congestion control to prevent overwhelming the receiver. Reliability and flow control are implemented end-to-end rather than just link-by-link.
The document provides an overview of transport layer protocols including UDP, TCP, and SCTP. It describes the key services each protocol provides such as reliable vs unreliable data delivery and connection-oriented vs connectionless communication. TCP is discussed in more depth including how it provides reliable data transfer using sequence numbers, acknowledgments, flow control, error control, and congestion control mechanisms.
The document discusses the User Datagram Protocol (UDP). It provides the following key points:
- UDP is an alternative to TCP that offers a limited connectionless datagram service for delivery of messages between devices on an IP network. It does not guarantee delivery, order of packets, or duplicate protection like TCP.
- UDP is commonly used for applications that require low latency and minimal processing time like DNS, SNMP, and streaming media. These applications can tolerate some data loss since reliability is not critical.
- The UDP header is only 8 bytes, containing source/destination port numbers and length fields. It provides an optional checksum for error detection but no other reliability mechanisms.
The transport layer provides process-to-process communication and utilizes three main protocols: UDP, TCP, and SCTP. UDP is a connectionless protocol that does not guarantee delivery, while TCP provides reliable, ordered delivery through a connection-oriented approach. SCTP also provides reliable delivery with the added capability of multiple streams. Key aspects of these protocols include port numbers, packet/segment formatting, and connection establishment handshaking.
Unit 4-Transport Layer Protocols-3.pptxDESTROYER39
The document discusses transport layer protocols. It covers the User Datagram Protocol (UDP), Transmission Control Protocol (TCP), and Stream Control Transmission Protocol (SCTP). For UDP, it describes its connectionless and unreliable nature, datagram format, services, and applications. For TCP, it outlines its connection-oriented and reliable design with features like congestion control, flow control, error checking, and segment structure. It also briefly introduces SCTP and its services.
The document discusses transport layer protocols. It covers User Datagram Protocol (UDP), Transmission Control Protocol (TCP), and Stream Control Transmission Protocol (SCTP). UDP is described as a connectionless protocol that does not provide reliability, flow control, or error checking. TCP is connection-oriented and provides reliable in-order delivery through features like sequencing, acknowledgements, retransmissions, flow control, and congestion control. TCP establishes connections using a three-way handshake and transmits data in segments. SCTP is also described as a reliable transport layer protocol providing some features of both TCP and UDP.
The transport layer provides end-to-end communication between processes on different machines. Two main transport protocols are TCP and UDP. TCP provides reliable, connection-oriented data transmission using acknowledgments and retransmissions. UDP provides simpler, connectionless transmission but without reliability. Both protocols use port numbers to identify processes and negotiate quality of service options during connection establishment.
IV B.Tech I Sem CSE&IT JNTUK R10 regulation students have Mobile computing paper. This slides especially contains UNIT - 5 total material required for end exams
IJERA (International journal of Engineering Research and Applications) is International online, ... peer reviewed journal. For more detail or submit your article, please visit www.ijera.com
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
The document discusses transport layer protocols TCP and UDP. It provides an overview of process-to-process communication using transport layer protocols. It describes the roles, services, requirements, addressing, encapsulation, multiplexing, and error control functions of the transport layer. It specifically examines TCP and UDP, comparing their connection-oriented and connectionless services, typical applications, and segment/datagram formats.
Recital Study of Various Congestion Control Protocols in wireless networkiosrjce
IOSR Journal of Computer Engineering (IOSR-JCE) is a double blind peer reviewed International Journal that provides rapid publication (within a month) of articles in all areas of computer engineering and its applications. The journal welcomes publications of high quality papers on theoretical developments and practical applications in computer technology. Original research papers, state-of-the-art reviews, and high quality technical notes are invited for publications.
This document discusses and compares several congestion control protocols for wireless networks, including TCP, RCP, and RCP+. It implemented an enhanced version of RCP+ in the NS-2 simulator. Simulation results showed that the proposed approach achieved higher throughput and packet delivery ratio than TCP and RCP+ in a wireless network with 10-50 nodes, with performance degrading as the number of nodes increased beyond 20 due to increased congestion. The paper analyzes the mechanisms and equations of each protocol and argues the proposed approach combines benefits of improved AIMD and RCP+ to address their individual shortcomings.
Similar to Computer Communication Networks- TRANSPORT LAYER PROTOCOLS (20)
This document provides an overview of regular expressions in Python. It defines regular expressions as sequences of characters used to search for patterns in strings. The re module allows using regular expressions in Python programs. Metacharacters like [], ., ^, $, *, + extend the matching capabilities of regular expressions beyond basic text. Examples demonstrate using re functions like search and special characters to extract lines from files based on patterns.
The document discusses Python dictionaries. Some key points:
- A dictionary in Python is an unordered collection of key-value pairs where keys must be unique and immutable, while values can be any data type.
- Dictionaries are created using curly braces {} and keys are separated from values with colons.
- Elements can be accessed, added, updated, and deleted using keys. Nested dictionaries are also supported.
- Common operations include creating, accessing, modifying dictionaries as well as nested dictionaries. User input can also be used to update dictionary values.
This document provides an overview of lists in Python. It defines lists as ordered, mutable sequences that can contain elements of different data types. Key features covered include: lists allow duplicates, are indexed and sliced, can be modified via assignment, support common operations like membership testing and iteration. Examples are provided for list construction, accessing/replacing items by index, slicing subsets, checking if an item exists, and looping through lists.
This document discusses tuples in Python. It begins with definitions of tuples, noting that they are ordered, indexed and immutable sequences. It then provides examples of creating tuples using parentheses or not, and explains that a single element tuple requires a trailing comma. The document discusses tuple operations like slicing, comparison, assignment and using tuples as function return values or dictionary keys. It also covers built-in tuple methods and functions.
The document provides information about strings in Python. Some key points include:
- Strings are immutable sequences of characters that can be accessed using indexes. Common string methods allow operations like uppercase, lowercase, counting characters, etc.
- Strings support slicing to extract substrings, and various string formatting methods allow combining strings with variables or other strings.
- Loops can be used to iterate through strings and perform operations on individual characters. Built-in string methods do not modify the original string.
- Examples demonstrate various string operations like indexing, slicing, checking substrings, string methods, formatting and parsing strings. Loops are used to count characters in examples.
This document discusses file handling in Python. It begins by explaining that files allow permanent storage of data, unlike standard input/output which is volatile. It then covers opening files in different modes, reading files line-by-line or as a whole, and modifying the file pointer position using seek(). Key points include opening files returns a file object, reading can be done line-by-line with for loops or using read()/readlines(), and seek() allows changing the file pointer location.
Computer Communication Networks- Introduction to Transport layerKrishna Nanda
The document summarizes the key functions and services of the transport layer:
1) The transport layer provides process-to-process communication between applications on different hosts using addressing that identifies processes via port numbers in addition to IP addresses.
2) It implements services like multiplexing/demultiplexing, flow control, error control, and reliable data transfer to ensure reliable end-to-end delivery of data between applications.
3) Transport layer protocols encapsulate and decapsulate messages, adding headers that include sequence numbers and acknowledgments to implement functions like error detection and retransmission.
The document discusses network layer protocols and IPv4 specifically. It provides three key points:
1) IPv4 is the main network layer protocol in the Internet that provides "best effort" delivery of packets called datagrams from source to destination through various networks in a connectionless manner.
2) IPv4 packets, or datagrams, contain a header with fields that provide routing information and a payload section for data. The header fields include source and destination addresses, identification information, flags for fragmentation, and more.
3) IPv4 supports fragmentation of large datagrams into smaller pieces to accommodate the size constraints of different networks. The fragmentation process and header fields related to fragmentation are described.
COMPUTER COMMUNICATION NETWORKS-R-Routing protocols 2Krishna Nanda
The document discusses unicast routing protocols used in the Internet, focusing on the Routing Information Protocol (RIP). It provides details on:
1) How the Internet uses hierarchical routing with interior gateway protocols (IGPs) like RIP within autonomous systems (ASes) and exterior gateway protocols like BGP between ASes.
2) Key aspects of RIP including using hop count as the routing metric, periodic routing updates, timers that control route expiration and garbage collection, and its distance-vector algorithm.
3) RIP's scalability is limited by only allowing up to 15 hops within an AS, but it has simple message formats and local updating between neighboring routers.
Computer Communication Networks-Routing protocols 1Krishna Nanda
This document provides an overview of routing protocols in computer networks. It discusses:
1) The goal of routing is to deliver data packets from source to destination(s) using forwarding tables updated by routing protocols. Routing can be unicast (one-to-one) or multicast (one-to-many).
2) Common routing algorithms include distance-vector routing (Bellman-Ford algorithm) and link-state routing (Dijkstra's algorithm). Distance-vector routing uses distance vectors exchanged between neighbors to calculate least-cost paths.
3) Issues with distance-vector routing include slow convergence and counting to infinity when link costs increase.
Computer Communication Networks-Wireless LANKrishna Nanda
Wireless LANs allow hosts to connect to a network without being physically connected via cables. They use radio waves to transmit data through the air. Some key differences between wired and wireless LANs include the mobility of hosts in wireless LANs and the use of access points to connect wireless LANs to wired networks. Wireless LANs also face challenges from signal attenuation, interference, and multipath propagation that wired LANs do not. The IEEE 802.11 standard defines the specifications for wireless LANs, including using basic service sets and extended service sets to connect multiple wireless networks, and employing carrier sense multiple access with collision avoidance for medium access control.
Computer Communication Networks-Network LayerKrishna Nanda
The document discusses the network layer in computer networks. It describes that the network layer is responsible for packetizing data by encapsulating it and adding headers, routing packets from source to destination by determining the best path, and forwarding packets through routers along the path. It explains the two main approaches used at the network layer - connectionless datagram service where each packet is routed independently, and connection-oriented virtual circuit service where a connection is established and packets follow the same path.
This document provides an overview of arrays and strings in C programming. It discusses:
- Arrays can store a collection of like-typed data and each element is accessed via an index. Common array types include one-dimensional and multi-dimensional arrays.
- Strings in C are arrays of characters that are null-terminated. Functions like printf, scanf, gets and puts can be used to output and input strings.
- Linear and binary search algorithms are described for finding a value within an array. Sorting techniques like bubble, insertion and selection sorts are also mentioned.
Structures allow grouping of different data types under one name. A structure defines a template for storing multiple data items of different types together. Structure variables can then be declared based on this template to store actual data. Structure members are accessed using the dot operator. Arrays of structures can be used to store information about multiple objects of the same type. Structures can also be nested by defining a structure as a member of another structure. Structures can be passed to functions by value or by reference using pointers.
This document discusses file operations in C including opening, reading, and writing to files. It covers:
- Using FILE * pointers to access files and opening files with fopen()
- Standard files stdin, stdout, stderr that are opened for input/output
- Reading/writing files using formatted I/O functions like fscanf() and fprintf() as well as lower level functions to get/put characters and lines
- Binary reading/writing entire blocks of memory with fread() and fwrite()
- Closing files, flushing buffers, and detecting the end of file
Pointers are variables that hold the memory address of another variable. A pointer variable contains the address of the variable it points to. Pointer variables must be declared with an asterisk and can be used to access and modify the value of the variable being pointed to using dereferencing operator. Pointers allow passing by reference in functions and dynamically allocating memory using functions like malloc and free. Pointer arithmetic allows treating pointers like arrays for accessing memory locations.
Using recycled concrete aggregates (RCA) for pavements is crucial to achieving sustainability. Implementing RCA for new pavement can minimize carbon footprint, conserve natural resources, reduce harmful emissions, and lower life cycle costs. Compared to natural aggregate (NA), RCA pavement has fewer comprehensive studies and sustainability assessments.
6th International Conference on Machine Learning & Applications (CMLA 2024)ClaraZara1
6th International Conference on Machine Learning & Applications (CMLA 2024) will provide an excellent international forum for sharing knowledge and results in theory, methodology and applications of on Machine Learning & Applications.
Understanding Inductive Bias in Machine LearningSUTEJAS
This presentation explores the concept of inductive bias in machine learning. It explains how algorithms come with built-in assumptions and preferences that guide the learning process. You'll learn about the different types of inductive bias and how they can impact the performance and generalizability of machine learning models.
The presentation also covers the positive and negative aspects of inductive bias, along with strategies for mitigating potential drawbacks. We'll explore examples of how bias manifests in algorithms like neural networks and decision trees.
By understanding inductive bias, you can gain valuable insights into how machine learning models work and make informed decisions when building and deploying them.
Electric vehicle and photovoltaic advanced roles in enhancing the financial p...IJECEIAES
Climate change's impact on the planet forced the United Nations and governments to promote green energies and electric transportation. The deployments of photovoltaic (PV) and electric vehicle (EV) systems gained stronger momentum due to their numerous advantages over fossil fuel types. The advantages go beyond sustainability to reach financial support and stability. The work in this paper introduces the hybrid system between PV and EV to support industrial and commercial plants. This paper covers the theoretical framework of the proposed hybrid system including the required equation to complete the cost analysis when PV and EV are present. In addition, the proposed design diagram which sets the priorities and requirements of the system is presented. The proposed approach allows setup to advance their power stability, especially during power outages. The presented information supports researchers and plant owners to complete the necessary analysis while promoting the deployment of clean energy. The result of a case study that represents a dairy milk farmer supports the theoretical works and highlights its advanced benefits to existing plants. The short return on investment of the proposed approach supports the paper's novelty approach for the sustainable electrical system. In addition, the proposed system allows for an isolated power setup without the need for a transmission line which enhances the safety of the electrical network
DEEP LEARNING FOR SMART GRID INTRUSION DETECTION: A HYBRID CNN-LSTM-BASED MODELgerogepatton
As digital technology becomes more deeply embedded in power systems, protecting the communication
networks of Smart Grids (SG) has emerged as a critical concern. Distributed Network Protocol 3 (DNP3)
represents a multi-tiered application layer protocol extensively utilized in Supervisory Control and Data
Acquisition (SCADA)-based smart grids to facilitate real-time data gathering and control functionalities.
Robust Intrusion Detection Systems (IDS) are necessary for early threat detection and mitigation because
of the interconnection of these networks, which makes them vulnerable to a variety of cyberattacks. To
solve this issue, this paper develops a hybrid Deep Learning (DL) model specifically designed for intrusion
detection in smart grids. The proposed approach is a combination of the Convolutional Neural Network
(CNN) and the Long-Short-Term Memory algorithms (LSTM). We employed a recent intrusion detection
dataset (DNP3), which focuses on unauthorized commands and Denial of Service (DoS) cyberattacks, to
train and test our model. The results of our experiments show that our CNN-LSTM method is much better
at finding smart grid intrusions than other deep learning algorithms used for classification. In addition,
our proposed approach improves accuracy, precision, recall, and F1 score, achieving a high detection
accuracy rate of 99.50%.
A SYSTEMATIC RISK ASSESSMENT APPROACH FOR SECURING THE SMART IRRIGATION SYSTEMSIJNSA Journal
The smart irrigation system represents an innovative approach to optimize water usage in agricultural and landscaping practices. The integration of cutting-edge technologies, including sensors, actuators, and data analysis, empowers this system to provide accurate monitoring and control of irrigation processes by leveraging real-time environmental conditions. The main objective of a smart irrigation system is to optimize water efficiency, minimize expenses, and foster the adoption of sustainable water management methods. This paper conducts a systematic risk assessment by exploring the key components/assets and their functionalities in the smart irrigation system. The crucial role of sensors in gathering data on soil moisture, weather patterns, and plant well-being is emphasized in this system. These sensors enable intelligent decision-making in irrigation scheduling and water distribution, leading to enhanced water efficiency and sustainable water management practices. Actuators enable automated control of irrigation devices, ensuring precise and targeted water delivery to plants. Additionally, the paper addresses the potential threat and vulnerabilities associated with smart irrigation systems. It discusses limitations of the system, such as power constraints and computational capabilities, and calculates the potential security risks. The paper suggests possible risk treatment methods for effective secure system operation. In conclusion, the paper emphasizes the significant benefits of implementing smart irrigation systems, including improved water conservation, increased crop yield, and reduced environmental impact. Additionally, based on the security analysis conducted, the paper recommends the implementation of countermeasures and security approaches to address vulnerabilities and ensure the integrity and reliability of the system. 
By incorporating these measures, smart irrigation technology can revolutionize water management practices in agriculture, promoting sustainability, resource efficiency, and safeguarding against potential security threats.
Technical Drawings introduction to drawing of prisms
Computer Communication Networks- TRANSPORT LAYER PROTOCOLS
1. Computer Communication Networks-17EC64 Module 5
Edited by: Prof. Krishnananda L, Dept of ECE, Govt SKSJTI, Bengaluru Page 1
Chapter 24: TRANSPORT LAYER PROTOCOLS
❑ The transport layer in the TCP/IP suite is located between the application layer
and the network layer. It provides services to the application layer and receives
services from the network layer.
❑ The transport layer acts as a liaison between a client program and a server
program, a process-to-process connection.
❑ The transport layer has 3 major protocols: UDP, TCP and SCTP.
24.1 INTRODUCTION
Fig.24.1 shows the position of these protocols in the TCP/IP protocol suite.
24.1.1 Services
Each protocol provides a different type of service and should be used
appropriately.
1. UDP (User Datagram Protocol)
UDP is an unreliable connectionless transport-layer protocol used for its
simplicity and efficiency in applications where error control can be provided by
the application-layer process.
2. TCP (Transmission Control Protocol)
TCP is a reliable connection-oriented protocol that can be used in any application
where reliability is important.
3. SCTP (Stream Control Transmission Protocol)
SCTP is a new transport-layer protocol that combines the features of UDP and
TCP.
24.1.2 Port Numbers
A transport-layer protocol provides process-to-process communication using port
addresses. Port numbers provide end-to-end addresses at the transport layer and
allow multiplexing and demultiplexing at this layer.
24.2 USER DATAGRAM PROTOCOL (UDP)
It is a connectionless, unreliable transport-layer protocol, providing
process-to-process communication.
UDP is a very simple protocol using a minimum of overhead. If a process wants
to send a small message and reliability is not an issue, it can use UDP. Sending a
small message using UDP takes much less interaction between the sender and
receiver than using TCP.
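This "send a small message with minimal interaction" exchange can be sketched with Python's standard socket module. The loopback address, payload, and OS-assigned port below are illustrative choices, not part of the original text:

```python
import socket

# Receiver: bind a UDP socket to an OS-assigned port on loopback.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))     # port 0 lets the OS pick a free port
addr = receiver.getsockname()       # socket address = (IP address, port number)

# Sender: no connection establishment; a single sendto() is enough.
sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"hello", addr)

data, src = receiver.recvfrom(1024)  # one independent user datagram
print(data)                          # the 5-byte message

sender.close()
receiver.close()
```

Note there is no handshake anywhere in the sketch: the sender transmits immediately, which is exactly the low-interaction property the text attributes to UDP.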
24.2.1 User Datagram
UDP packets, called user datagrams, have a fixed-size header of 8 bytes. Fig.24.2
shows the format of user datagram.
The header has a fixed size of 8 bytes made of 4 fields, each of 2 bytes. The fields
are:
i) Source port number (16 bits) ii) Destination port number (16 bits)
iii) Total length (16 bits) – defines the size of the user datagram (header plus
data). Maximum 65,535 bytes, but usually much smaller than this
iv) Checksum (16 bits)
Example 24.1 The following is the content of a UDP header in hexadecimal format.
CB84000D001C001C
a. What is the source port number?
b. What is the destination port number?
c. What is the total length of the user datagram?
d. What is the length of the data?
e. Is the packet directed from a client to a server or vice versa?
f. What is the client process?
Solution
a. The source port number is the first four hex digits: (CB84)16 = 52100, i.e., the
source port number is 52100.
b. The destination port number is the second four hex digits: (000D)16 = 13, i.e.,
the destination port number is 13.
c. The third four hex digits, (001C)16, define the length of the whole UDP packet.
So, the length is (001C)16 = 28 bytes.
d. Length of data = total length − header length = 28 − 8 = 20 bytes.
e. Since the destination port number is 13 (a well-known port), the packet is from
the client to the server.
f. The client process is the Daytime client (see Table 24.1).
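The worked solution above can be checked mechanically. A small sketch using Python's struct module to unpack the four 16-bit big-endian header fields named in the format description:

```python
import struct

# UDP header from Example 24.1, as raw bytes.
header = bytes.fromhex("CB84000D001C001C")

# Four 16-bit big-endian fields: source port, destination port,
# total length, checksum.
src_port, dst_port, total_len, checksum = struct.unpack("!4H", header)

print(src_port)        # 52100 (part a)
print(dst_port)        # 13, the well-known Daytime port (parts b, e, f)
print(total_len)       # 28 bytes (part c)
print(total_len - 8)   # 20 bytes of data: total minus the 8-byte header (part d)
```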
24.2.2 UDP Services
Process-to-Process Communication
UDP provides process-to-process communication using socket addresses, a
combination of IP addresses and port numbers.
Connectionless Services
UDP provides a connectionless service: each user datagram sent by UDP is an
independent datagram, even when the datagrams come from the same source
process and go to the same destination program.
The user datagrams are not numbered. Also, unlike TCP, there is no connection
establishment and no connection termination. This means that each user
datagram can travel on a different path.
Flow Control
UDP is a very simple protocol. There is no flow control, and hence no window
mechanism. The receiver may overflow with incoming messages.
Error Control
There is no error control mechanism in UDP except for the checksum, i.e., the
sender does not know if a message has been lost or duplicated.
When the receiver detects an error through the checksum, the user datagram
is discarded.
Checksum
UDP checksum calculation includes three sections: A pseudoheader, the UDP
header, and the data coming from the application layer.
The pseudoheader is the part of the header of the IP packet in which the user
datagram is to be encapsulated, with some fields filled with 0s.
The protocol field is added to ensure that the packet belongs to UDP, and not
to TCP.
The value of the protocol field for UDP is 17. If this value is changed during
transmission, the checksum calculation at the receiver will detect it and UDP
drops the packet. It is not delivered to the wrong protocol
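The three-section calculation can be sketched as follows (a minimal illustration assuming IPv4 addresses supplied as 4-byte values; the function names are ours):

```python
import struct

def ones_complement_sum(data: bytes) -> int:
    """16-bit one's-complement sum used by the Internet checksum."""
    if len(data) % 2:
        data += b"\x00"                           # pad to an even length
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)  # fold the carry back in
    return total

def udp_checksum(src_ip: bytes, dst_ip: bytes, udp_segment: bytes) -> int:
    """Checksum over pseudoheader + UDP header + data; protocol field = 17."""
    pseudo = src_ip + dst_ip + struct.pack("!BBH", 0, 17, len(udp_segment))
    return ~ones_complement_sum(pseudo + udp_segment) & 0xFFFF
```

Recomputing the checksum at the receiver over the same three sections, with the transmitted checksum in place, yields zero for an intact datagram.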
Congestion Control
Since UDP is a connectionless protocol, it does not provide congestion control.
UDP assumes that the packets sent are small and sporadic and cannot create
congestion in the network. (However, when UDP is used for interactive real-
time transfer of audio and video, it may lead to congestion.)
Encapsulation and Decapsulation
To send a message from one process to another, the UDP protocol
encapsulates and decapsulates messages.
Queuing
In UDP, queues are associated with ports. At the client site, when a process
starts, it requests a port number from the operating system. We may
create both an incoming and an outgoing queue associated with each
process.
24.2.3 UDP Applications
UDP Features:
Connectionless Service
UDP is a connectionless protocol. Each UDP packet is independent of other
packets sent by the same application program, i.e., UDP does not recognize
any relationship between the datagrams.
UDP can be used, for example, when a client application needs to send a short
request to a server and receive a short response. If the request and response can
each fit in a single user datagram, a connectionless service may be preferable.
The connectionless service provides less delay. If providing a fast response is
more important than reliable operation, UDP is preferred.
A client-server application like DNS uses the services of UDP.
UDP is not suitable for applications such as email and file transfer.
Lack of Error Control
UDP does not provide error control; it provides an unreliable service.
Lack of Congestion Control
UDP does not provide congestion control. However, since there is no
retransmission, UDP does not add to the congestion in the network. So, it is an
advantage in an error-prone network.
Typical Applications
The following shows some typical applications that can benefit more from the
services of UDP than from those of TCP.
❑ UDP is suitable for a process that requires simple request-response
communication with little concern for flow and error control. It is not
usually used for a process such as FTP that needs to send bulk data.
❑ UDP is suitable for a process which has built-in flow- and error-control
mechanisms. For example, the Trivial File Transfer Protocol (TFTP) uses UDP.
❑ UDP is a suitable transport protocol for multicasting. Multicasting capability is
embedded in the UDP software but not in the TCP software.
❑ UDP is used for management processes such as SNMP
❑ UDP is used for some route updating protocols such as RIP
❑ UDP is normally used for interactive real-time applications that cannot tolerate
uneven delay between sections of a received message.
24.3 Transmission Control Protocol (TCP)
Transmission Control Protocol (TCP) is a connection-oriented, reliable
protocol.
TCP explicitly defines connection establishment, data transfer, and
connection teardown phases to provide a connection-oriented service.
TCP uses a combination of GBN and SR protocols to provide reliability.
24.3.1 TCP Services
1. Process-to-Process communication
TCP provides process-to-process communication using port numbers.
2. Stream Delivery Service
TCP is a stream-oriented protocol (UDP is not).
TCP allows the sending process to deliver data as a stream of bytes and allows
the receiving process to obtain data as a stream of bytes.
TCP creates an environment in which the two processes seem to be connected
by an imaginary “tube” that carries their bytes across the Internet. (Fig. 24.4)
Sending process produces (writes to) the stream and receiving process
consumes (reads from) the stream.
3. Sending and Receiving Buffers
The sending and the receiving processes may not write or read data at the
same rate. So, TCP needs two buffers (sending and receiving) for storage.
These buffers are also necessary for flow- and error-control mechanisms used
by TCP.
One way to implement a buffer is to use a circular array of 1-byte locations, as
shown in Fig. 24.5 (for example, two buffers of 20 bytes each). The two buffers
need not be of the same size.
The figure shows the movement of the data in one direction. At the sender, the
buffer has three types of chambers. The white section contains empty
chambers that can be filled by the sending process (producer). The yellow
region contains bytes to be sent by the sending TCP. The blue region holds
bytes that have been sent but not yet acknowledged.
After the bytes in the blue region are acknowledged, the chambers are
recycled and available for use by the sending process.
At the receiver, the circular buffer is divided into two areas. The white area
contains empty chambers to be filled by bytes received from the network. The
colored section contains received bytes that can be read by the receiving
process.
When a byte is read by the receiving process, the chamber is recycled and
added to the pool of empty chambers.
Fig.24.5 Sending and Receiving Buffers
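The circular array of 1-byte chambers in Fig. 24.5 can be sketched as follows (an illustrative model, not TCP code; the class and method names are ours):

```python
class CircularBuffer:
    """A circular array of 1-byte locations: the producer fills empty
    chambers, the consumer empties and recycles them."""
    def __init__(self, size: int):
        self.buf = bytearray(size)
        self.size, self.head, self.count = size, 0, 0

    def write(self, data: bytes) -> int:
        """Store as many bytes as there are empty chambers; return that count."""
        n = min(len(data), self.size - self.count)
        for i in range(n):
            self.buf[(self.head + self.count + i) % self.size] = data[i]
        self.count += n
        return n

    def read(self, n: int) -> bytes:
        """Consume up to n bytes, recycling their chambers for reuse."""
        n = min(n, self.count)
        out = bytes(self.buf[(self.head + i) % self.size] for i in range(n))
        self.head = (self.head + n) % self.size
        self.count -= n
        return out
```

Because the array is circular, chambers freed by `read` are reused by later `write` calls, just as acknowledged or consumed chambers are recycled in the figure.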
4. TCP Segments
The network layer, as a service provider for TCP, needs to send data in
packets, not as a stream of bytes. At the transport layer, TCP groups a
number of bytes together into a packet called a segment.
TCP adds a header to each segment and delivers the segment to the network
layer for transmission.
The segments are encapsulated in an IP datagram and transmitted. These IP
datagrams (containing TCP segments) may follow different physical paths
through the network, may be received out of order, lost or corrupted, and
resent.
These issues are handled by the receiving TCP. Since TCP provides a stream-
oriented service, the receiving TCP ensures that the bytes are delivered in order
to the application layer. Fig. 24.6 shows how segments are created from the
bytes in the buffers. Segments need not all contain the same number of bytes.
5. Full-Duplex Communication
TCP offers full-duplex service, where data can flow in both directions at the same
time. Each TCP endpoint then has its own sending and receiving buffer, and
segments move in both directions.
6. Multiplexing and De-multiplexing
Like UDP, TCP performs multiplexing at the sender and demultiplexing at the
receiver. However, a connection needs to be established for each pair of
processes.
7. Connection-Oriented Service
TCP is a connection-oriented protocol. When a process at site A wants to send
to and receive data from another process at site B, the following three phases
occur:
1. The two TCPs establish a logical connection between them.
2. Data are exchanged in both directions.
3. The connection is terminated.
8. Reliable service
TCP is a reliable transport protocol. It uses an acknowledgment mechanism
to check the safe and sound arrival of data.
24.3.2 TCP Features
Numbering Systems
Although the TCP software keeps track of the segments being transmitted
or received, there is no field for a segment number value in the segment
header.
Instead, there are two fields: the sequence number and the ACK number.
These two fields refer to a byte number and not a segment number.
1. Byte Number
TCP numbers all data bytes (octets) that are transmitted in a
connection. Numbering is independent in each direction.
When TCP receives bytes of data from a process, TCP stores them in the
sending buffer and numbers them. TCP chooses an arbitrary number
between 0 and 2^32 − 1 for the number of the first byte.
2. Sequence Number
After the bytes have been numbered, TCP assigns a sequence number to each
segment. The sequence number, in each direction, is defined as follows:
i. The sequence number of the first segment is the ISN (initial sequence
number), which is a random number (need not be zero).
ii. The sequence number of any other segment is the sequence number of the
previous segment plus the number of bytes (real or imaginary) carried by the
previous segment.
Note: Communication in TCP is full-duplex, i.e., both parties can send and receive
at the same time. The sequence number in each direction shows the number of
the first byte carried by the segment.
3. Acknowledgment Number
Each party uses an ACK number to confirm the bytes it has received.
The ACK number defines the number of the next byte that the party expects
to receive.
The ACK number is cumulative, which means that the party takes the
number of the last byte that it has received, adds 1 to it and this sum is the
ACK number.
The term cumulative means that if a party uses 4567 as an ACK number, it
has received all bytes from the beginning up to 4566.
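The two numbering rules can be illustrated with a short sketch (the ISN and segment sizes below are hypothetical):

```python
def segment_sequence_numbers(isn: int, segment_sizes: list[int]) -> list[int]:
    """Sequence number of each segment: the ISN for the first, then the
    previous sequence number plus the bytes the previous segment carried."""
    seqs, seq = [], isn
    for size in segment_sizes:
        seqs.append(seq)
        seq = (seq + size) % 2**32       # byte numbers wrap at 2^32
    return seqs

# Three segments of 1000, 1000, and 500 bytes with ISN 10010:
print(segment_sequence_numbers(10010, [1000, 1000, 500]))  # [10010, 11010, 12010]
# The cumulative ACK after all 2500 bytes arrive is last byte + 1 = 12510.
```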
24.3.3 TCP Segment Format
A packet in TCP is called a segment. The segment consists of a header of 20 to 60
bytes, followed by data from the application layer.
Fig.24.7 shows TCP segment format.
The fields of the TCP Header are:
i. Source Port Address (16 bits): defines the port number of the application
program of the source.
ii. Destination Port Address (16 bits): defines the port number of the application
program of the destination.
iii. Sequence Number (32-bits): defines the number assigned to the first byte of
data contained in this segment. Each byte to be sent is numbered by the
source.
iv. Acknowledgment Number (32 bits): defines the byte number that the
receiver is expecting to receive next. If the receiver has received byte number x
from the sender, it sends an ACK with number x + 1.
v. HLEN (4 bits): indicates the TCP header length as a multiple of 4 bytes. The
minimum HLEN value is 5 and the maximum is 15. Header length in bytes =
HLEN value × 4.
vi. Control field (6 bits): six different control flags, which enable flow control,
connection establishment and termination, the mode of operation, etc.
vii. Window Size (16 bits): defines the window size of the sending TCP in bytes.
The maximum size of the window is 2^16 − 1, i.e., 65,535 bytes. It is normally
referred to as the receiving window (rwnd) and is determined by the receiver.
viii. Checksum (16 bits): the use of the checksum in TCP is mandatory. It uses a
pseudoheader, as in UDP.
ix. Urgent Pointer (16 bits): this field is valid only if the URG flag is set. It is used
when the segment contains urgent data.
x. Options (0 to 40 bytes): optional information
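A sketch of decoding the fixed 20-byte part of the header, with the fields in the order listed above (an illustration, not part of the original notes; the function name is ours):

```python
import struct

def parse_tcp_header(raw: bytes) -> dict:
    """Decode the fixed 20-byte TCP header; options, if any, follow it."""
    src, dst, seq, ack, off_flags, window, checksum, urgent = \
        struct.unpack("!HHIIHHHH", raw[:20])
    hlen = off_flags >> 12                  # 4-bit HLEN, in 4-byte words
    return {
        "source_port": src, "destination_port": dst,
        "sequence_number": seq, "ack_number": ack,
        "header_length": hlen * 4,          # bytes = HLEN x 4
        "flags": off_flags & 0x3F,          # URG, ACK, PSH, RST, SYN, FIN bits
        "window_size": window, "checksum": checksum,
        "urgent_pointer": urgent,
    }
```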
Control field in detail:
Checksum using Pseudoheader:
24.3.4 TCP Connection
TCP is connection-oriented, creating a logical path between source and
destination. All the segments belonging to a message are then sent over this
logical path.
This helps in getting ACKs as well as retransmission of damaged or lost
segments.
If segments arrive out-of-order, receiver TCP holds them until all segments
arrive, rearranges them in order and then only delivers to the application
process.
In TCP, communication requires 3 phases.
1. Connection Establishment (using three-way handshaking).
2. Data Transfer.
3. Connection Termination (using three-way handshaking).
1. Connection Establishment using Three-Way Handshaking
TCP transmits data in full-duplex mode, i.e., each party is able to send segments to
the other simultaneously. So, each party must initialize communication and get
approval from the other party before any data are transferred.
Three-Way Handshaking
The connection establishment in TCP is called three-way handshaking. Consider
the scenario: an application program, called the client, wants to make a connection
with another application program, called the server, using TCP as the transport-
layer protocol.
The process starts with the server. The server program tells its TCP that it is
ready to accept a connection. This request is called a passive open. However, the
server cannot make the connection itself.
A client that wishes to connect to an open server tells its TCP to connect to a
particular server and issues a request for an active open.
TCP can now start the three-way handshaking process, as shown in Fig.24.10.
Note: Each segment has values for all its header fields. However, we show only a few
of them: the sequence number, the ACK number, the control flags (only those that
are set), and the window size, if relevant.
The three steps in this phase are:
i. The client sends the first segment, a SYN segment (only the SYN flag is set),
for synchronization of sequence numbers. The client chooses a random number
as the first sequence number and sends this number to the server. This
sequence number is called the initial sequence number (ISN). The SYN
segment is a control segment and carries no data. However, it consumes one
sequence number because it needs to be acknowledged. We can say that the
SYN segment carries one imaginary byte.
ii. The server sends the second segment, a SYN + ACK segment with two flag bits
set as: SYN and ACK. It serves a dual purpose. a) The server uses this SYN
segment to initialize a sequence number for numbering the bytes to be sent
from the server to the client. b) The server also acknowledges the receipt of
the SYN segment from the client by setting the ACK flag and displaying the
next sequence number it expects to receive from the client. Because the
segment contains an ACK, it also needs to define the receive window size,
rwnd (to be used by the client). This SYN+ACK segment consumes one
sequence number.
iii. The client sends the third segment. This is just an ACK segment. It
acknowledges the receipt of the second segment with the ACK flag and ACK
number field. ACK segment does not consume any sequence numbers if it
does not carry data.
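The numbers exchanged in the three steps can be sketched with hypothetical ISNs (an illustration; the function name is ours):

```python
def three_way_handshake(client_isn: int, server_isn: int) -> list:
    """Sequence and ACK numbers of the three handshake segments.
    SYN and SYN+ACK each consume one (imaginary) sequence number."""
    return [
        {"flags": "SYN",     "seqNo": client_isn},
        {"flags": "SYN+ACK", "seqNo": server_isn,
         "ackNo": client_isn + 1},            # the client's SYN consumed one number
        {"flags": "ACK",     "seqNo": client_isn + 1,
         "ackNo": server_isn + 1},            # carries no data: consumes nothing
    ]

for segment in three_way_handshake(8000, 15000):
    print(segment)
```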
2. Data Transfer
After connection is established, bidirectional data transfer takes place. The client
and server can send data and ACK in both directions. The ACK is piggybacked
with data.
Consider that the client sends 2000 bytes of data in two segments and the
server then sends 2000 bytes in one segment. The client then sends one more
segment.
The first 3 segments carry both data and ACK, but the last segment carries only
an ACK because it has no more data to be sent. Fig.24.11 shows the scenario.
Generally, TCP uses buffering at both the ends to provide flow control and flexibility.
However, due to buffering, data transmission and delivery may be delayed,
which is not suitable for interactive communication. In such a case, application
at the sender can request a Push operation.
So, the client TCP creates a segment, sets the PSH (push) flag, and sends it
immediately. The server TCP can then deliver the data to the server process as
soon as they are received.
TCP is a stream-oriented protocol, i.e., data is a stream of bytes and each byte has a
position in the stream. However, an application may require some bytes to be
treated in a special way. In that case, the sending (client) TCP creates a segment
with the URG (urgent) flag set and inserts the urgent data at the start of the
segment. The “urgent pointer” field in the header defines the end of the urgent
data. The receiving TCP informs the application program about the beginning and
end of the urgent data.
Fig.24.11 Data Transfer using TCP
3. Connection Termination
Either of the two parties (client or server) can close the connection. This can be
done in two ways: three-way handshaking, or four-way handshaking with a
half-close option.
Three-Way Handshaking
Three-way handshaking for connection termination is shown in Figure 24.12.
1. The client TCP, after receiving a close command from the client process, sends
a FIN segment with the FIN flag set. A FIN segment can include the last chunk of
data sent by the client, or it can be just a control segment. The FIN segment
consumes one sequence number if it does not carry data.
2. The server TCP, after receiving the FIN segment, sends a FIN + ACK segment to
confirm the receipt of the FIN segment from the client and, at the same time, to
announce the closing of the connection in the other direction. This segment
can also contain the last chunk of data from the server. If it does not carry
data, it consumes only one sequence number.
3. The client TCP sends the last segment, an ACK segment, to confirm the receipt
of the FIN segment from the TCP server. This segment contains the ACK
number, which is one more than the sequence number received in the FIN
segment from the server. This segment cannot carry data and consumes no
sequence numbers.
Half-close Operation:
One end (client or server) can stop sending data while still receiving data. This
situation is called a half-close. For example, after sending all the data to a server for
computation, the client can half-close the connection (i.e., in the client-to-server
direction) by sending a FIN segment. The server accepts the half-close by sending an
ACK. Now, the client cannot send any more data to the server.
After the computation is over, the server can send the results to the client. After
sending all the processed data to the client, the server sends a FIN segment, which is
acknowledged by an ACK from the client.
The half-close operation is depicted in Fig. 24.13.
Connection Reset
TCP at one end may deny a connection request, may abort an existing connection, or
may terminate an idle connection. All of these are done with the RST (reset) flag.
SYN Flooding Attack
The connection establishment procedure in TCP is susceptible to a serious security
problem called SYN flooding attack. This happens when one or more malicious
attackers send a large number of SYN segments to a server pretending that each of
them is coming from a different client by faking the source IP addresses in the
datagrams.
The server, assuming that the clients are issuing an active open, allocates the
necessary resources, such as creating transfer control block (TCB) tables and setting
timers. The TCP server then sends the SYN + ACK segments to the fake clients,
which are lost.
If the number of SYN segments is large, the server may run out of resources and
may be unable to accept connection requests from valid clients. This SYN flooding
attack belongs to a category known as a denial-of-service (DoS) attack, in which an
attacker monopolizes a system with so many service requests that the system
overloads and denies service to valid requests.
Solution:
1) Restrict the number of connection requests during a specified period of time.
2) Filter out datagrams coming from unwanted source addresses.
3) Postpone resource allocation until the server can verify that the connection
request is coming from a valid IP address, by using what is called a cookie. SCTP
uses this strategy.
24.3.5 State Transition Diagram
TCP is specified as a finite state machine (FSM).
The rounded-corner rectangles represent the states.
Directed lines represent the state transitions. Each line has two strings
separated by a slash: the first string is the input (what TCP receives); the
second is the output (what TCP sends).
Dotted lines represent transitions done by the server; solid lines represent
transitions done by the client.
Colored lines show special situations.
Table 24.2 shows the states of TCP.
TCP state diagram and transitions can be explained taking an example of Half-close
scenario, as shown in Fig. 24.15
Explanation:
Step 1: The server process issues a passive open command. The server TCP goes to
the LISTEN state and remains there passively until it receives a SYN segment from
client.
Step 2: The client process issues an active open command to its TCP to request a
connection to a specific socket address. Client TCP sends a SYN segment and moves
to the SYN-SENT state.
Step 3: The server TCP sends a SYN + ACK segment and goes to the SYN-RCVD state,
waiting for the client to send an ACK segment.
Step 4: After receiving the SYN + ACK segment, client TCP sends an ACK segment
and goes to the ESTABLISHED state.
Step 5: After receiving the ACK segment, server TCP also goes to the ESTABLISHED
state. Now, data are transferred, possibly in both directions, and acknowledged.
Step 6: When the client process has no more data to send, it issues a command
called an active close. The client TCP sends a FIN segment and goes to the FIN-
WAIT-1 state.
Step 7: The server, upon receiving the FIN segment, sends an ACK segment to the
client and goes to the CLOSE-WAIT state.
Step 8: When client receives the ACK segment, it goes to the FIN-WAIT-2 state.
Step 9: After receiving the passive close command from its application layer, the
server sends a FIN segment to the client and goes to the LAST-ACK state, waiting for
the final ACK from the client.
Step 10: When the client receives a FIN segment, it sends an ACK segment and goes
to the TIME-WAIT state. The client remains in this state for 2 MSL (maximum
segment lifetime) seconds. When the corresponding timer expires, the client goes
to the CLOSED state.
Step 11: When the ACK segment is received from the client, the server goes to the
CLOSED state.
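The client-side transitions of this scenario can be written as a small table mapping (state, input/output) pairs to the next state (a sketch of the diagram, not an implementation; the names are ours):

```python
# Client-side transitions for the half-close scenario, as
# (current state, "input / output") -> next state.
CLIENT_FSM = {
    ("CLOSED",      "active open / SYN"):  "SYN-SENT",
    ("SYN-SENT",    "SYN+ACK / ACK"):      "ESTABLISHED",
    ("ESTABLISHED", "close / FIN"):        "FIN-WAIT-1",
    ("FIN-WAIT-1",  "ACK / -"):            "FIN-WAIT-2",
    ("FIN-WAIT-2",  "FIN / ACK"):          "TIME-WAIT",
    ("TIME-WAIT",   "2MSL timeout / -"):   "CLOSED",
}

def run(fsm: dict, state: str, events: list) -> str:
    """Follow a sequence of events through the transition table."""
    for event in events:
        state = fsm[(state, event)]
    return state
```

Running all six events in order carries the client from CLOSED back to CLOSED, matching Steps 2 through 10 above.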
24.3.6 WINDOWS IN TCP
▪ TCP uses two windows (a send window and a receive window) for each direction
of data transfer, so there are four windows for bidirectional communication.
▪ Let us assume unidirectional communication (say, from client to server).
1) Send Window:
Assume a window size of, say, 100 bytes.
Fig.24.17 shows how a send window opens, closes, or shrinks.
The send window in TCP is similar to the window in SR protocol with some
differences:
1. Window size in TCP is the number of bytes. The variables that control the
window are expressed in bytes.
2. We assume that the sending TCP is capable of sending segments of data as
soon as it receives them from its process (no buffering)
3. The TCP protocol uses only one timer.
Fig.24.17 Send Window in TCP
2) Receive Window
Fig. 24.18 shows an example of a receive window. The window size is 100
bytes.
There are two differences between the receive window in TCP and the one we
used for SR.
1. TCP allows the receiving process to pull data whenever it needs that data.
The receiver buffer may contain bytes that have been received and acknowledged
but are waiting to be pulled by the receiving process. The receive window
size is then smaller than or equal to the buffer size, as shown in the figure. The
receive window size, normally called rwnd, can be determined as:
rwnd = buffer size − number of bytes waiting to be pulled
2. An ACK in SR is selective, defining the uncorrupted packets that have been
received. The major ACK mechanism in TCP is a cumulative ACK
announcing the next expected byte to receive. Newer versions of TCP,
however, use both cumulative and selective acknowledgments.
Fig.24.18 Receive window in TCP
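The rwnd relation above, as a one-line sketch with hypothetical numbers:

```python
def receive_window(buffer_size: int, bytes_waiting: int) -> int:
    """rwnd = buffer size - number of bytes waiting to be pulled."""
    return buffer_size - bytes_waiting

# A 100-byte receive buffer with 20 bytes acknowledged but not yet pulled:
print(receive_window(100, 20))  # 80
```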
24.3.7 FLOW CONTROL
Flow control balances the rate at which a producer creates data with the rate
at which a consumer can use the data.
Assume an error-free channel between the sending and receiving TCPs.
Fig. 24.19 shows unidirectional data transfer between a sender and a receiver:
data travel from the sending process to the sending TCP, from the sending TCP
to the receiving TCP, and from the receiving TCP to the receiving process
(paths 1, 2, and 3).
Flow-control feedback travels from the receiving TCP to the sending TCP and
from the sending TCP up to the sending process (paths 4 and 5).
The receiving TCP controls the sending TCP; the sending TCP controls the
sending process.
1) Opening and Closing Windows
To achieve flow control, TCP forces the sender and the receiver to adjust their
window sizes. The receive window closes (moves its left wall to the right)
when more bytes arrive from the sender; it opens (moves its right wall to the
right) when more bytes are pulled by the process.
The opening, closing, and shrinking of the send window is controlled by the
receiver. The send window closes (moves its left wall to the right) when a new
ACK allows it to do so. The send window opens (its right wall moves to the
right) when the receive window size (rwnd) advertised by the receiver allows
it to do so. (new ackNo + new rwnd > last ackNo + last rwnd)
2) Silly Window Syndrome
A serious problem arises in the sliding window operation when either the
sending application program creates data slowly or the receiving application
program consumes data slowly, or both. Any of these situations results in the
sending of data in very small segments, which reduces the efficiency of the
operation. This problem is called the silly window syndrome.
a) Syndrome Created by the Sender
The sending TCP may create a silly window syndrome if the application
program creates data slowly. For example, let the application program write 1 byte
at a time into the buffer of the sending TCP. If TCP sends segments containing
only 1 byte of data, it means that a 41-byte datagram (20 bytes of TCP header
and 20 bytes of IP header) transfers only 1 byte of user data. The overhead is
41/1: just to send 1 byte of user data, we actually send 41 bytes on the network,
using the capacity of the network very inefficiently. The inefficiency is even
worse if we account for the data-link-layer and physical-layer overhead.
One solution is for the sending TCP to wait and collect data from the application
so that it can send data in larger blocks instead of a few bytes at a time. Waiting
too long, however, may delay the process, while waiting too little means TCP
sends only small segments. To strike a balance, Nagle proposed the following:
Nagle’s Algorithm:
1. The sending TCP sends the first piece of data it receives from the sending
application program even if it is only 1 byte.
2. After sending the first segment, the sending TCP accumulates data in the
output buffer and waits until either the receiving TCP sends an ACK or until
enough data have accumulated to fill a maximum-size segment. At this time,
the sending TCP can send the segment.
3. Step 2 is repeated for the rest of the transmission. Segment 3 is sent
immediately if an ACK is received for segment 2, or if enough data have
accumulated to fill a maximum-size segment
The algorithm accounts for both the speed of the application program and the speed of the network.
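Nagle's three steps can be sketched as follows (an illustrative model, not a real TCP implementation; the class name, mss parameter, and send callback are ours):

```python
class NagleSender:
    """A minimal sketch of Nagle's algorithm."""
    def __init__(self, mss: int, send):
        self.mss, self.send = mss, send      # max segment size, send callback
        self.buffer = b""
        self.unacked = False                 # is a segment still unacknowledged?

    def write(self, data: bytes):
        self.buffer += data
        # Send right away if nothing is in flight, or a full segment is ready.
        while self.buffer and (not self.unacked or len(self.buffer) >= self.mss):
            chunk, self.buffer = self.buffer[:self.mss], self.buffer[self.mss:]
            self.send(chunk)
            self.unacked = True

    def on_ack(self):
        self.unacked = False
        if self.buffer:                      # flush what accumulated meanwhile
            self.write(b"")
```

The first byte goes out immediately (step 1); after that, bytes accumulate until an ACK arrives or a maximum-size segment is full (steps 2 and 3).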
b) Syndrome Created by the Receiver
The receiving TCP may create a silly window syndrome if it is serving an application
program that consumes data more slowly than they arrive. Suppose that the sending
application program creates data in blocks of 1 kilobyte, but the receiving application
program consumes data 1 byte at a time. As each byte is consumed, the receiver
advertises only 1 byte of window space, and a segment carrying 1 byte of data is
sent. Again we have an efficiency problem and the silly window syndrome.
Solution:
1) Clark’s solution is to send an ACK as soon as the data arrive, but to announce a
window size of zero until either there is enough space to accommodate a segment of
maximum size or until at least half of the receive buffer is empty.
2) The second solution is to delay sending the ACK. i.e., when a segment arrives, it is
not acknowledged immediately. The receiver waits until there is a decent amount of
space in its incoming buffer before acknowledging the arrived segments. The
delayed ACK prevents the sending TCP from sliding its window. After the sending
TCP has sent the data in the window, it stops. This kills the syndrome. Delayed ACK
also has another advantage: it reduces traffic. The receiver does not have to
acknowledge each segment. However, a delayed ACK may cause the sender to
retransmit unacknowledged segments unnecessarily. To prevent this, a threshold is
defined: for example, the ACK should not be delayed by more than 500 ms.
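Clark's rule for the advertised window can be sketched as follows (an illustration; the function name and parameters are ours):

```python
def advertised_window(free_space: int, buffer_size: int, mss: int) -> int:
    """Advertise zero until there is room for a maximum-size segment
    or at least half of the receive buffer is empty (Clark's solution)."""
    if free_space >= mss or free_space >= buffer_size // 2:
        return free_space
    return 0

print(advertised_window(10, 1000, 500))   # 0: too little room, advertise nothing
print(advertised_window(600, 1000, 500))  # 600: a full segment now fits
```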
24.3.8 ERROR CONTROL
Error control in TCP is achieved using three simple tools: the checksum,
acknowledgments (cumulative ACK and selective ACK), and time-out.
a) Checksum
Each segment includes a checksum field, which is used to check for a corrupted
segment. If a segment is corrupted, as detected by an invalid checksum, the segment
is discarded by the destination TCP and is considered as lost. TCP uses a 16-bit
checksum that is mandatory in every segment.
b) Acknowledgment
TCP uses acknowledgments to confirm the receipt of data segments. Control
segments that carry no data, but consume a sequence number, are also
acknowledged. ACK segments do not consume sequence numbers and are not
acknowledged.
Acknowledgment Type
TCP may use either cumulative ACK or selective ACK.
1) Cumulative Acknowledgment (ACK) TCP was originally designed to
acknowledge receipt of segments cumulatively. The receiver advertises the
next byte it expects to receive. This is also referred to as positive cumulative
ACK, or ACK. No feedback is provided for discarded, lost, or duplicate
segments. The 32-bit ACK field in the TCP header is used for cumulative
acknowledgments, and its value is valid only when the ACK flag bit is set to 1.
2) Selective Acknowledgment (SACK) Newer implementations are using
another type of ACK called selective ACK, or SACK. A SACK does not replace an
ACK. A SACK reports a block of bytes that is out of order, and also a block of
bytes that is duplicated, i.e., received more than once. SACK is implemented as
an option at the end of the TCP header.
c) Retransmission
The heart of the error-control mechanism is the retransmission of segments.
When a segment is sent, a copy of it is stored in a queue until it is
acknowledged. When the retransmission timer expires, or when the sender receives
3 duplicate ACKs for the first segment in the queue, that segment is retransmitted.
1) Retransmission after RTO
The sender TCP maintains one retransmission time-out (RTO) timer for each
connection. When the timer expires, TCP resends the segment at the front of the
queue (the one with the smallest sequence number) and restarts the timer. The RTO
is dynamic and is updated based on the round-trip time (RTT) of segments.
RTT is the time needed for a segment to reach a destination and for an ACK to
be received.
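The RTT-based RTO update is standardized in RFC 6298. A sketch of one smoothing step, with variable names following the RFC (SRTT is the smoothed RTT, RTTVAR the RTT variance; the 1-second floor is the RFC's recommended minimum):

```python
def update_rto(srtt, rttvar, rtt_sample, alpha=1/8, beta=1/4):
    """One RFC 6298 smoothing step; all times in seconds.
    Returns the updated (srtt, rttvar, rto)."""
    rttvar = (1 - beta) * rttvar + beta * abs(srtt - rtt_sample)
    srtt = (1 - alpha) * srtt + alpha * rtt_sample
    rto = srtt + 4 * rttvar            # timer set well above the smoothed RTT
    return srtt, rttvar, max(rto, 1.0) # RFC 6298 recommends a 1 s minimum
```

Each measured RTT sample nudges SRTT toward the current network delay while RTTVAR tracks how much the samples fluctuate, so the timer adapts to changing conditions.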
2) Retransmission after 3 Duplicate ACK
Because the RTO may be large, most implementations today also follow the
3-duplicate-ACK rule. This feature is called Fast Retransmission: if 3 duplicate
ACKs (the original ACK plus 3 identical copies) arrive for a segment, the missing
segment is retransmitted immediately without waiting for the time-out.
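The duplicate-ACK counting described above can be sketched as a small counter (`FastRetransmit` is an illustrative name, not a real API): a retransmission is signalled on the third duplicate, i.e., the fourth ACK carrying the same number.

```python
class FastRetransmit:
    """Counts duplicate ACKs; signals fast retransmit on the 3rd duplicate."""
    def __init__(self):
        self.last_ack = None
        self.dup_count = 0

    def on_ack(self, ack_no):
        if ack_no == self.last_ack:
            self.dup_count += 1
            return self.dup_count == 3      # retransmit the missing segment now
        self.last_ack, self.dup_count = ack_no, 0   # new ACK resets the count
        return False
```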
d) Out-of-Order Segments
TCP accepts out-of-order segments at the receiver. TCP implementations do
not discard out-of-order segments. They store them temporarily and flag
them as out-of-order segments until the missing segments arrive. So, TCP
guarantees that no out-of-order data are delivered to the application process
at the destination.
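The receiver-side buffering of out-of-order segments can be sketched like this (a simplified model that ignores overlapping segments): arriving data is held until the missing bytes appear, and only the in-order prefix is released to the application.

```python
class ReorderBuffer:
    """Stores out-of-order segments; delivers data to the application in order."""
    def __init__(self):
        self.next_seq = 0
        self.pending = {}            # seq -> data, flagged as out of order

    def receive(self, seq, data):
        self.pending[seq] = data
        delivered = b""
        while self.next_seq in self.pending:   # release the in-order prefix
            chunk = self.pending.pop(self.next_seq)
            delivered += chunk
            self.next_seq += len(chunk)
        return delivered             # bytes handed to the application (may be empty)
```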
Generating Acknowledgments:
Many TCP implementations follow some general rules to reduce traffic congestion
and improve efficiency.
Rule 1 :
When end A sends a data segment to end B, it must include (piggyback) an ACK
giving the next sequence number it expects to receive. This reduces traffic.
Rule 2 :
The receiver needs to delay sending an ACK segment if there is only one outstanding
in-order segment. This rule reduces ACK segments.
Rule 3:
There should not be more than two in-order unacknowledged segments at any
time. This prevents the unnecessary retransmission of segments and reduces
congestion.
Rule 4:
When a segment arrives with an out-of-order sequence number that is higher
than expected, the receiver immediately sends an ACK segment announcing the
sequence number of the next expected segment. This leads to fast retransmission
of missing segments.
Rule 5:
When a missing segment arrives, the receiver sends an ACK segment to announce
the next sequence number expected.
Rule 6:
If a duplicate segment arrives, the receiver discards it but immediately
sends an ACK indicating the next in-order segment expected. This solves some
problems that arise when an ACK segment itself is lost.
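Rules 2, 3, 4, and 6 above amount to a small decision procedure at the receiver. A rough sketch (the mapping of rules to branches is a simplification, and `ack_decision` is a hypothetical helper, not a real API):

```python
def ack_decision(seq, expected, unacked_inorder):
    """Decide how to acknowledge an arriving segment.
    seq             -- sequence number of the arriving segment
    expected        -- next in-order sequence number the receiver expects
    unacked_inorder -- in-order segments received but not yet ACKed
    Returns 'delay', 'ack-now', or 'dup-ack'."""
    if seq != expected:
        return "dup-ack"    # Rules 4 and 6: ACK immediately with the expected seq
    if unacked_inorder >= 1:
        return "ack-now"    # Rule 3: never leave two in-order segments unACKed
    return "delay"          # Rule 2: briefly wait, hoping to ACK two at once
```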
24.3.9 TCP Congestion Control
TCP uses different policies to handle congestion in the network. Congestion
may happen anywhere in the network, but IP does not take care of it, so it is
the job of the transport layer to handle congestion.
Congestion may lead to loss of segments, which requires retransmission, and
retransmission in turn creates more congestion.
To control the number of segments to transmit, TCP uses another variable
called the congestion window (cwnd), whose size is governed by the level of
congestion in the network.
The two variables cwnd and rwnd together define the size of the send window
in TCP:
Actual window size = minimum (rwnd, cwnd).
A TCP sender uses the occurrence of two events as a sign of congestion in the
network: a time-out and the receipt of 3 duplicate ACKs. The lack of regular,
timely receipt of ACKs, which results in a time-out, is the sign of strong
congestion.
Note: Since TCP does not know whether a duplicate ACK is caused by a lost
segment or just by reordering of segments, it waits for a small number of
duplicate ACKs to be received.
Congestion Handling Policies (3 Algorithms)
TCP’s general policy for handling congestion is based on 3 Algorithms.
1. Slow-Start: Exponential Increase Algorithm
2. Congestion Avoidance Algorithm
3. Fast Recovery Algorithm
1. Slow- Start: Exponential Increase Algorithm
The size of the congestion window (cwnd) starts at one maximum segment size
(MSS) and increases by one MSS each time an acknowledgment arrives; i.e., the
algorithm starts slowly but grows exponentially.
Fig. 24.29 shows the idea. The assumptions are: i) rwnd is much larger than cwnd,
so that the send window size always equals cwnd; ii) each segment is of the same
size and carries MSS bytes; iii) an ACK is sent for each segment received.
Fig 24.29: Slow start exponential increase
The sender starts with cwnd=1; i.e., the sender can send only one segment.
After receiving the first ACK, the size of the congestion window is increased
by 1, so cwnd=2. After sending 2 segments and receiving two individual ACKs,
cwnd becomes 4, and it continues to grow as a power of 2.
So, the size of the congestion window in this algorithm is a function of the
number of ACKs that have arrived:
If an ACK arrives, cwnd = cwnd + 1.
In terms of round-trip times (RTT), the growth of cwnd is therefore exponential
in each RTT.
If this continued, cwnd would keep growing exponentially and could itself cause
congestion, so we need to set a threshold. A variable named ssthresh (slow-start
threshold) is used: when the size of the congestion window reaches this
threshold, the algorithm stops and a new phase starts.
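The exponential growth can be simulated in a few lines (cwnd measured in MSS units; assumes one ACK per segment and a loss-free path, as in the figure):

```python
def slow_start(ssthresh, rtts):
    """Trace cwnd (in MSS) over `rtts` round trips of slow start.
    cwnd grows by 1 per ACK, so it doubles each RTT until ssthresh."""
    cwnd, history = 1, [1]
    for _ in range(rtts):
        for _ in range(cwnd):          # one ACK arrives per segment sent
            if cwnd < ssthresh:
                cwnd += 1              # +1 MSS per ACK -> doubling per RTT
        history.append(cwnd)
    return history
```

With ssthresh = 8, the trace over four RTTs is 1, 2, 4, 8 and then flat, matching the power-of-2 growth described above.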
2. Congestion-Avoidance: Additive increase Algorithm
TCP defines another algorithm, called congestion avoidance, which
increases the cwnd additively instead of exponentially.
When the size of the congestion window reaches the slow-start threshold
(i.e., when cwnd = ssthresh), the slow-start phase stops and the additive
phase starts.
In this algorithm, each time the complete “window” of segments is
acknowledged, cwnd is increased by one. A window here is the number of
segments sent during one RTT. Fig. 24.30 illustrates this idea.
Fig 24.30 Congestion avoidance, additive increase
The sender starts with cwnd=4. After four ACKs arrive, cwnd is increased by 1.
After sending 5 segments and getting 5 ACKs, cwnd=6, and so on.
So, the size of the congestion window is a function of the number of ACKs that
have arrived: if an ACK arrives, cwnd = cwnd + (1/cwnd);
i.e., the window increases by only a 1/cwnd portion of the MSS (in bytes). So,
all segments in the previous window must be acknowledged before the window
grows by 1 MSS.
Also, from the figure we can see that the growth rate of cwnd is linear in terms
of RTT. This is much better than the slow-start strategy, where the growth rate
was exponential.
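The additive phase can be simulated the same way (cwnd in MSS units, one ACK per segment; the rounding is only to keep the floating-point trace readable):

```python
def congestion_avoidance(cwnd, rtts):
    """Trace cwnd (in MSS) over `rtts` round trips of additive increase.
    Each ACK adds 1/cwnd MSS, so a full window of ACKs adds ~1 MSS per RTT."""
    history = [cwnd]
    for _ in range(rtts):
        window = round(cwnd)           # segments sent this RTT
        for _ in range(window):        # one ACK per outstanding segment
            cwnd += 1 / window         # each ACK adds a 1/cwnd fraction of MSS
        history.append(round(cwnd, 3))
    return history
```

Starting from cwnd = 4, the trace is 4, 5, 6, 7, ...: linear growth of one MSS per RTT, in contrast to the doubling of slow start.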
3. Fast Recovery
The fast-recovery algorithm is optional in TCP.
It starts when three duplicate ACKs arrive, which is interpreted as light congestion
in the network. This algorithm increases the size of the congestion window when a
duplicate ACK arrives (after the three duplicate ACKs that trigger the use of this
algorithm). We can say: if a duplicate ACK arrives, cwnd = cwnd + (1 / cwnd).
24.3.10 Additional topics in Congestion Control
TCP moved from one policy to another through three versions of TCP.
1. Tahoe TCP
2. Reno TCP
3. New Reno TCP
1. Tahoe TCP
It uses 2 algorithms[Slow Start (SS) and Congestion Avoidance(CA)].
Fig 24.31 :FSM for Tahoe TCP
When the connection is established, TCP starts the slow-start algorithm (state 1)
and then moves between the SS state (state 1) and the CA state (state 2).
State 1 (slow start): three events may occur in this state.
Event 1: time-out or 3 duplicate ACKs received.
Action: ssthresh = cwnd/2, cwnd = 1; remain in the same state.
Event 2: cwnd >= ssthresh.
Action: when the window size reaches ssthresh, move to the CA state.
Event 3: an ACK arrives.
Action: cwnd = cwnd + 1 (which increases exponentially per RTT).
State 2 (congestion avoidance): two events may occur in this state.
Event 1: an ACK arrives.
Action: cwnd = cwnd + (1/cwnd); i.e., the window size increases additively.
Event 2: time-out or 3 duplicate ACKs received.
Action: set ssthresh = cwnd/2, reset cwnd = 1, and move to the slow-start state.
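The two states and their events condense into a small transition function. This is an illustrative sketch of the FSM above (cwnd in MSS units; 'loss' stands for either a time-out or 3 duplicate ACKs):

```python
def tahoe_event(state, cwnd, ssthresh, event):
    """One Tahoe FSM transition. States: 'SS' (slow start), 'CA'
    (congestion avoidance). Events: 'ack', 'loss'.
    Returns the new (state, cwnd, ssthresh)."""
    if event == "loss":                # same action in either state
        return "SS", 1, cwnd / 2       # halve ssthresh, restart slow start
    if state == "SS":
        cwnd += 1                      # exponential growth per RTT
        if cwnd >= ssthresh:
            state = "CA"               # threshold reached: switch phase
    else:
        cwnd += 1 / cwnd               # additive growth
    return state, cwnd, ssthresh
```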
2. Reno TCP
It uses 3 algorithms: SS, CA, and FR (fast recovery).
When a connection is established, it starts in the SS state (state 1) and moves
among SS (state 1), CA (state 2), and FR (state 3).
Fig.24.33 shows the FSM for Reno TCP.
Fig.24.33 FSM for Reno TCP
Comparison of Tahoe TCP, Reno TCP, and New Reno TCP:

Tahoe TCP:
- Uses only the slow-start and congestion-avoidance strategies; it treats both
an RTO timeout and 3 duplicate ACKs as packet-loss events.
- Handles 3 duplicate ACKs (i.e., four ACKs acknowledging the same packet,
which are not piggybacked on data and do not change the receiver's advertised
window) the same way as a timeout: it performs a fast retransmit, sets the
slow-start threshold to half of the current congestion window, reduces cwnd to
1 MSS, and resets to the slow-start state.
- If an ACK times out (RTO timeout), slow start is used and the congestion
window is reduced to 1 MSS.

Reno TCP:
- The successor to Tahoe; along with slow start and congestion avoidance, it
adds a new mechanism called fast recovery.
- If 3 duplicate ACKs are received, Reno performs a fast retransmit but skips
the slow-start phase: it halves the congestion window (instead of setting it to
1 MSS like Tahoe), sets the slow-start threshold equal to the new congestion
window, and enters a state called fast recovery. For each successive duplicate
ACK (fourth, fifth, sixth, ...), cwnd increases by 1, and Reno continues in this
state as long as more duplicate ACKs arrive.
- Once the receiver finally acknowledges the missing packet, TCP moves to
congestion avoidance; upon an RTO timeout it falls back to slow start with the
congestion window reduced to 1 MSS.

New Reno TCP:
- Defined by RFC 6582; a modified version of TCP Reno that improves
retransmission during the fast-recovery phase.
- When 3 duplicate ACKs arrive, it checks whether more than one segment was
lost in the current window and keeps retransmitting lost segments until a new
(non-duplicate) ACK arrives.
- If the ACK number defines a position between the retransmitted segment and
the end of the window, the segment defined by that ACK may also be lost; New
Reno retransmits this segment to avoid receiving more duplicate ACKs for it.
- New Reno can also send new packets at the end of the congestion window during
fast recovery.
3. Additive Increase; Multiplicative Decrease
It has been observed that, most of the time, congestion is detected and handled
by observing the three duplicate ACKs. In other words, in a long TCP connection,
if we ignore the slow-start states and the short exponential growth during fast
recovery, the congestion window changes as cwnd = cwnd + (1 / cwnd) when an ACK
arrives (congestion avoidance) and cwnd = cwnd / 2 when congestion is detected.
The first is called additive increase; the second is called multiplicative
decrease. This means that the congestion window size, after it passes the
initial slow-start state, follows a saw-tooth pattern called additive increase,
multiplicative decrease (AIMD), as shown in Figure 24.35.
Fig 24.35: Additive Increase, Multiplicative Decrease(AIMD)
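The saw-tooth pattern is easy to reproduce with a per-RTT trace. An illustrative sketch (cwnd in MSS units, +1 MSS per loss-free RTT; `loss_at` marks the RTTs at which congestion is assumed to be detected):

```python
def aimd(cwnd, rtts, loss_at):
    """Per-RTT AIMD trace: additive increase of 1 MSS per RTT,
    multiplicative decrease (halving) at each congestion event."""
    trace = [cwnd]
    for t in range(1, rtts + 1):
        cwnd = cwnd // 2 if t in loss_at else cwnd + 1
        trace.append(cwnd)
    return trace
```

For example, starting at cwnd = 4 with a congestion event at RTT 4, the trace climbs 4, 5, 6, 7, drops to 3, and climbs again: the saw tooth of Fig. 24.35.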
4. TCP Throughput
The throughput of TCP can easily be found if cwnd is a constant (flat-line)
function of RTT. With this assumption, throughput = cwnd / RTT: TCP sends cwnd
bytes of data and receives the acknowledgment for them in one RTT.
However, as shown in Fig. 24.35, the behaviour of TCP is like a saw tooth, with
many minima and maxima. If each tooth were exactly the same, we could say that
throughput = [(maximum + minimum) / 2] / RTT. However, we know that the value of
the maximum is twice the value of the minimum, because at each congestion
detection the value of cwnd is set to half of its previous value. So the
throughput can be better calculated as
throughput = (0.75) Wmax / RTT
where Wmax is the average window size when congestion occurs.
Example:
If the maximum segment size (MSS) = 10 KB and RTT = 100 ms, what is the
throughput?
Solution (referring to Fig 24.35):
Wmax = (10 + 12 + 10 + 8 + 8)/5 = 9.6 MSS
Throughput = 0.75 Wmax / RTT = (0.75 x 9.6 x 10 KB) / 100 ms
= 720 KB/s ≈ 5.76 Mbps
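A quick check of the arithmetic (the window samples are the cwnd peaks read from Fig. 24.35, in MSS units, as assumed in the example):

```python
window_samples = [10, 12, 10, 8, 8]     # cwnd peaks from Fig. 24.35, in MSS
mss_bytes = 10_000                      # MSS = 10 KB
rtt = 0.1                               # RTT = 100 ms

w_max = sum(window_samples) / len(window_samples)    # average peak, in MSS
throughput_bytes = 0.75 * w_max * mss_bytes / rtt    # bytes per second
print(round(w_max, 2), "MSS,", round(throughput_bytes * 8 / 1e6, 2), "Mbps")
```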