This document describes slides for a chapter on the transport layer. It states that the slides can be freely used and modified for educational purposes with proper attribution. It asks users to mention the source if the slides are used for a class and to note any adaptation if slides are posted online. The document also provides copyright information for the material.
This document discusses and compares two routing protocols: distance vector routing and link state routing. Distance vector routing involves each node sharing its routing table only with its neighbors, while link state routing involves each node having knowledge of the entire network topology. The document outlines the working principles, drawbacks like count to infinity, and pros and cons of each approach.
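The distance-vector update step at the heart of such protocols can be sketched as follows (a minimal illustration, not taken from the slides; node names and link costs are invented):

```python
# Sketch of one distance-vector exchange: each node keeps a cost table
# and merges a neighbor's advertised table into it (a Bellman-Ford
# relaxation). Nodes and costs here are invented for the example.

INF = float("inf")

def dv_update(my_table, neighbor_table, link_cost):
    """Merge a neighbor's distance vector into my_table.

    my_table / neighbor_table: dict mapping destination -> cost.
    Returns True if any entry improved (which would trigger the node
    to re-advertise its own vector to its neighbors).
    """
    changed = False
    for dest, cost in neighbor_table.items():
        candidate = link_cost + cost
        if candidate < my_table.get(dest, INF):
            my_table[dest] = candidate
            changed = True
    return changed

# A's view before hearing from B, and B's advertised vector:
a = {"A": 0, "B": 1}
b_vector = {"A": 1, "B": 0, "C": 2}

dv_update(a, b_vector, link_cost=1)
print(a)  # A now reaches C via B at cost 3
```

The count-to-infinity drawback mentioned above arises because this update trusts the neighbor's costs blindly, even when the neighbor's path runs back through the updating node itself.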
ZRP divides routing into intrazone and interzone routing. Intrazone routing uses a proactive approach to route packets within a node's routing zone. Interzone routing uses a reactive approach where the source node sends route requests to peripheral nodes when the destination is outside its zone. The optimal zone radius depends on factors like mobility and query rates, with smaller radii preferred for higher mobility. ZRP aims to reduce routing overhead through techniques like restricting floods and maintaining multiple routes.
The document discusses network layer concepts including packet switching, IP addressing, and fragmentation. It provides details on:
- Packet switching breaks data into packets that are routed independently and reassembled at the destination. This allows for more efficient use of bandwidth compared to circuit switching.
- IP addresses in IPv4 are 32-bit numbers that identify devices on the network. Addresses are expressed in decimal notation like 192.168.1.1. Fragmentation breaks packets larger than the MTU into smaller fragments for transmission.
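The 32-bit addressing described above can be illustrated with a small standard-library conversion between dotted-decimal notation and the underlying integer (a sketch, not from the slides):

```python
# Convert between dotted-decimal IPv4 notation and the underlying
# 32-bit integer, using only the standard library.
import socket
import struct

def ip_to_int(dotted):
    """'192.168.1.1' -> 32-bit integer in network byte order."""
    return struct.unpack("!I", socket.inet_aton(dotted))[0]

def int_to_ip(value):
    """32-bit integer -> dotted-decimal string."""
    return socket.inet_ntoa(struct.pack("!I", value))

n = ip_to_int("192.168.1.1")
print(n)             # 3232235777
print(int_to_ip(n))  # 192.168.1.1
```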
Fragmentation is the process of splitting large packets into smaller fragments to accommodate networks with smaller maximum transmission unit sizes. This allows packets to travel across multiple networks without being dropped. There are two types of fragmentation: transparent, where fragments are reassembled at exit gateways, and non-transparent, where fragments are treated as individual packets and reassembled only at the destination host, as used in the Internet Protocol. Key challenges include ensuring all fragments are received for reassembly and minimizing overhead from fragmenting and reassembling packets.
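The non-transparent (IP-style) case can be sketched numerically: fragment offsets are carried in 8-byte units, so every fragment except the last must carry a payload that is a multiple of 8 bytes. The header size and MTU below follow the common IPv4 textbook example; the function itself is an illustration, not code from the slides:

```python
# Sketch of IPv4-style fragmentation: split a payload so each fragment
# fits the MTU after a 20-byte header, with offsets in 8-byte units.

HEADER = 20  # bytes, minimal IPv4 header

def fragment(payload_len, mtu):
    """Return a list of (offset_in_8_byte_units, payload_bytes, more_flag)."""
    max_data = (mtu - HEADER) // 8 * 8  # per-fragment payload, multiple of 8
    frags, offset = [], 0
    while offset < payload_len:
        size = min(max_data, payload_len - offset)
        more = offset + size < payload_len  # MF bit: more fragments follow
        frags.append((offset // 8, size, more))
        offset += size
    return frags

# A 4000-byte payload crossing a 1500-byte-MTU link:
for f in fragment(4000, 1500):
    print(f)
# (0, 1480, True), (185, 1480, True), (370, 1040, False)
```

The reassembly challenge noted above follows directly: the destination needs every (offset, length) pair, plus a final fragment with the more-fragments flag clear, before it can rebuild the original packet.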
This document discusses TCP flow and congestion control in high speed networks. It covers topics such as TCP flow control using a credit allocation scheme, TCP header fields for flow control, credit allocation flexibility, effects of window size, complicating factors, retransmission strategy using timers, adaptive retransmission timer algorithms, implementation policy options, congestion control difficulties, slow start, dynamic window sizing, fast retransmit, fast recovery, limited transmit, performance of TCP over ATM networks using UBR service, effects of switch buffer size, observations, partial packet discard techniques, and TCP over ABR service.
The transport layer provides efficient, reliable, and cost-effective process-to-process delivery by making use of network layer services. The transport layer works through transport entities to achieve its goal of reliable delivery between application processes. It provides an interface for applications to access its services.
TCP uses congestion control to determine how much capacity is available in the network and regulate how many packets can be in transit. It uses additive increase/multiplicative decrease (AIMD) where the congestion window is increased slowly with each ACK but halved upon timeout. Slow start is used initially and after idle periods to grow the window exponentially until congestion is detected. Fast retransmit and fast recovery help detect and recover from packet loss without requiring a timeout.
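The AIMD behavior can be traced with a few lines of code (an illustrative model only, with an invented loss pattern; it omits slow start and counts the window in MSS units per RTT):

```python
# Illustrative AIMD trace: additive increase of 1 MSS per RTT,
# multiplicative decrease (halving) on a loss event.

def aimd(rounds, losses, cwnd=1):
    """losses: set of RTT indices at which a loss is detected."""
    trace = []
    for rtt in range(rounds):
        if rtt in losses:
            cwnd = max(1, cwnd // 2)  # multiplicative decrease
        else:
            cwnd += 1                 # additive increase
        trace.append(cwnd)
    return trace

print(aimd(8, losses={4}))  # [2, 3, 4, 5, 2, 3, 4, 5]
```

The resulting sawtooth (growth, halving, growth) is the characteristic shape of TCP's congestion window over time.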
This document discusses integrated services architecture (ISA) and differentiated services (DS) for providing quality of service (QoS) in computer networks. It describes the components and functions of ISA, including reservation protocol, admission control, routing, queuing disciplines, and services. It also covers traffic classification, scheduling, and dropping policies implemented in routers. Random early detection (RED) is presented as a proactive packet discard mechanism for congestion management. Differentiated services is introduced as a simpler alternative to ISA that uses traffic classes in packet headers to provide different performance levels.
This document discusses protocols for quality of service (QoS) support in computer networks. It introduces the Resource Reservation Protocol (RSVP), which allows network nodes to reserve resources for particular data flows to support QoS. RSVP is receiver-initiated and uses soft state to maintain reservations in the face of changing network conditions or routes. The document also covers Multiprotocol Label Switching (MPLS), which assigns labels to data flows to enable QoS support and traffic engineering within an MPLS domain.
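The per-hop MPLS operation can be sketched as a simple label-table lookup (table entries and interface names below are invented for illustration):

```python
# Sketch of MPLS label switching at one router: look up the incoming
# label in a label forwarding table and swap it for the outgoing
# label/interface pair. Entries are hypothetical.

lfib = {  # in_label -> (out_interface, out_label)
    17: ("eth1", 22),
    22: ("eth2", 30),
}

def forward(in_label):
    """Return (outgoing interface, swapped label) for a labeled packet."""
    out_if, out_label = lfib[in_label]
    return out_if, out_label

print(forward(17))  # ('eth1', 22)
```

Because forwarding is a fixed-label lookup rather than a longest-prefix match, operators can pin flows to explicit paths, which is what enables the traffic engineering mentioned above.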
The document summarizes key aspects of the transport layer. It discusses how the transport layer provides logical communication between application processes running on different hosts by abstracting physical network details. It then describes the services provided by the transport layer including connection-oriented and connectionless services. It also discusses topics like quality of service, transport service primitives, addressing, connection establishment and release, flow control, multiplexing, and crash recovery for the transport layer.
This document provides an overview of data link control (DLC) and data link layer protocols. It discusses the key functions of DLC including framing, flow control, and error control. Framing involves encapsulating data frames with header information like source and destination addresses. Flow control manages the flow of data between nodes while error control handles detecting and correcting errors. Common data link layer protocols described include simple protocol, stop-and-wait protocol, and High-Level Data Link Control (HDLC). HDLC is a bit-oriented protocol that supports full-duplex communication over both point-to-point and multipoint links. It uses three types of frames: unnumbered, information, and supervisory frames.
The document summarizes key topics related to transport layer protocols:
- It describes the services provided by the transport layer, including addressing, connection establishment and release, flow control, and multiplexing.
- It provides details on common transport protocols like TCP and UDP, including their packet headers, connection management, congestion control, and performance issues at high speeds.
- It also presents an example transport protocol and uses finite state machines to model its operation and connection management.
Mobile transport layer - traditional TCP (Vishal Tandel)
This document summarizes several mechanisms proposed to improve TCP performance in wireless networks. It discusses approaches like indirect TCP and mobile TCP, which split the TCP connection to isolate the wireless link, and snooping TCP, which buffers and locally retransmits segments at the base station. It also covers fast retransmit/recovery techniques, transmission freezing, and selective retransmission to handle packet losses due to mobility more efficiently. While each approach aims to address TCP issues in wireless networks, they often do so by mixing layers or requiring changes to the basic TCP protocol stack.
IGMP (Internet Group Management Protocol) allows multicast routers to track group memberships across multicast networks. It has three message types - query, membership report, and leave report. Upon receiving a query, hosts send membership reports to the router to join or leave groups. The router uses these reports to maintain a list of members for each multicast group on that network segment. IGMP messages are encapsulated in IP datagrams and Ethernet frames for transmission.
This document defines TCP, IP, and UDP. TCP provides reliable, ordered transmission of data and is connection-oriented. It is used for applications like web browsing. IP is connectionless and routes packets to the correct destination. UDP sends short, unreliable datagrams and is used for applications like video games that prioritize speed over reliability. The key difference between TCP and UDP is that TCP provides ordered, error-checked delivery while UDP is faster but unreliable.
Global State Routing (GSR) maintains a global knowledge of network topology like link state routing but avoids inefficient flooding. Each node periodically exchanges its link state table only with neighbors, not the entire network. This reduces overhead compared to link state routing. GSR nodes use Dijkstra's algorithm on the accumulated link state information to compute optimal paths locally without global flooding.
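The local shortest-path step a GSR node performs can be sketched with a standard Dijkstra implementation over its accumulated link-state table (the topology below is invented for illustration):

```python
# Dijkstra's algorithm over a locally accumulated link-state table,
# as a GSR node would run it. Uses only the standard library.
import heapq

def dijkstra(graph, source):
    """graph: dict node -> {neighbor: cost}. Returns dict node -> distance."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry, already found a shorter path
        for v, w in graph.get(u, {}).items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

topo = {"A": {"B": 1, "C": 4}, "B": {"C": 2, "D": 5}, "C": {"D": 1}, "D": {}}
print(dijkstra(topo, "A"))  # {'A': 0, 'B': 1, 'C': 3, 'D': 4}
```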
The document discusses the application layer in computer networks. It describes several key application layer protocols including HTTP, FTP, email, telnet, SSH, DNS, and SNMP. It explains the client-server and peer-to-peer paradigms used in application layer communication. It also provides details about the World Wide Web including components like browsers, servers, documents and protocols like HTTP, HTML, and URLs.
1) Computer networks allow communication and sharing of resources between computer systems and devices through communication channels. There are several types of networks including LANs, WANs, and MANs.
2) For communication between systems, both must agree on a protocol which sets rules for data transmission. The two main protocol stacks are OSI and TCP/IP.
3) The network layer is responsible for delivering packets from source to destination. It uses services from the data link layer and provides services to the transport layer. Common network layer protocols are IP (Internet Protocol) for connectionless service and MPLS for connection-oriented service.
The document discusses transport layer protocols TCP and UDP. It provides an overview of process-to-process communication using transport layer protocols. It describes the roles, services, requirements, addressing, encapsulation, multiplexing, and error control functions of the transport layer. It specifically examines TCP and UDP, comparing their connection-oriented and connectionless services, typical applications, and segment/datagram formats.
The application layer allows users to interface with networks through application layer protocols like HTTP, SMTP, POP3, FTP, Telnet, and DHCP. It provides the interface between applications on different ends of a network. Common application layer protocols include DNS for mapping domain names to IP addresses, HTTP for transferring web page data, and SMTP/POP3 for sending and receiving email messages. The client/server and peer-to-peer models describe how requests are made and fulfilled over the application layer.
The document contains descriptions and figures about stop-and-wait, sliding window, and selective reject transmission protocols. Stop-and-wait uses acknowledgments to ensure frames are received correctly one at a time, while sliding window protocols allow multiple unacknowledged frames to be sent by keeping a window of outstanding frames. The figures demonstrate examples of how these protocols handle damaged frames, lost frames, and lost acknowledgments to ensure reliable data transmission.
The document discusses the leaky bucket algorithm for traffic shaping. It begins with an introduction to congestion and traffic shaping. It then provides an overview of the leaky bucket algorithm, describing it as a method that converts bursty traffic into fixed-rate traffic: packets accumulate in a virtual "leaky bucket" and drain into the network at a set rate. The document includes an example demonstrating how the algorithm works with different sized packets in a queue. It discusses parameters like burst rate and average rate and describes uses of the algorithm in networks, including shaping bursty traffic to a sustained cell flow rate.
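A minimal leaky-bucket shaper can be sketched as follows (a fluid-style model with invented sizes and rates, not the slides' own example):

```python
# Leaky-bucket traffic shaper sketch: arriving packets fill the bucket
# (dropped on overflow); the bucket drains into the network at a fixed
# leak rate, smoothing bursts. Units are bytes per tick.

def leaky_bucket(arrivals, rate, capacity):
    """arrivals: bytes arriving at each tick. Returns (sent_per_tick, dropped)."""
    level, sent, dropped = 0, [], 0
    for a in arrivals:
        if level + a <= capacity:
            level += a          # packet fits in the bucket
        else:
            dropped += a        # bucket overflow: packet discarded
        out = min(level, rate)  # drain at the fixed leak rate
        level -= out
        sent.append(out)
    return sent, dropped

# A bursty arrival pattern smoothed to at most 400 bytes/tick:
sent, dropped = leaky_bucket([1000, 0, 800, 0, 0], rate=400, capacity=1200)
print(sent, dropped)  # [400, 400, 400, 400, 200] 0
```

The output stream never exceeds the configured rate regardless of how bursty the input is, which is exactly the shaping behavior described above.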
The document discusses the network layer in computer networking. It describes how the network layer is responsible for routing packets from their source to destination. It covers different routing algorithms like distance vector routing and link state routing. It also compares connectionless and connection-oriented services, as well as datagram and virtual circuit subnets. Key aspects of routing algorithms like optimality, stability, and fairness are defined.
The document discusses various topics related to flow and error control in computer networks, including stop-and-wait ARQ, sliding window protocols, and selective reject ARQ. Stop-and-wait ARQ allows transmission of one frame at a time, while sliding window protocols allow multiple outstanding frames using sequence numbers and acknowledgments. Go-back-N ARQ requires retransmission of frames from the lost frame onward, while selective reject ARQ only retransmits the lost frame to minimize retransmissions.
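The retransmission contrast between the two ARQ schemes can be made concrete with a toy sketch (frame numbers invented; real implementations track windows and sequence-number wraparound, which this omits):

```python
# Contrast in what gets retransmitted after one lost frame:
# Go-Back-N resends the lost frame and everything sent after it;
# selective reject resends only the lost frame.

def go_back_n_retransmit(sent, lost):
    """All frames from the lost one onward must be resent."""
    return [f for f in sent if f >= lost]

def selective_reject_retransmit(sent, lost):
    """Only the lost frame is resent."""
    return [lost]

sent = [0, 1, 2, 3, 4, 5]
print(go_back_n_retransmit(sent, lost=2))         # [2, 3, 4, 5]
print(selective_reject_retransmit(sent, lost=2))  # [2]
```

The trade-off is buffer complexity: selective reject saves retransmissions but requires the receiver to buffer and reorder out-of-sequence frames.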
1) Computer networks allow computers to communicate and share resources by connecting them through communication channels. There are several types of networks including LANs, WANs, and MANs.
2) For communication between computers on a network, both sides must agree on protocols which are sets of rules that govern data transmission. The two main protocol stacks are OSI and TCP/IP.
3) The network layer is responsible for delivering packets from source to destination by choosing appropriate paths through routers. It provides connectionless and connection-oriented services to the transport layer above it.
The document describes the Internet Protocol version 4 (IPv4). It discusses the IPv4 datagram format including the header fields, fragmentation, and options. It also covers how IPv4 provides an unreliable datagram delivery service and is typically paired with TCP when reliable delivery is required. The document discusses security issues with IPv4 like packet sniffing, modification, and spoofing, and how IPSec can provide protection against these attacks.
TCP & UDP (Transmission Control Protocol and User Datagram Protocol) (Kruti Niranjan)
This document provides information about the Transport Layer protocols TCP and UDP. It describes:
1) TCP is a connection-oriented protocol that provides reliable, in-order delivery of data through features like flow control, error control, and congestion control. UDP is a connectionless protocol that does not guarantee delivery or order of packets.
2) The TCP header contains fields for source/destination ports, sequence numbers, acknowledgement numbers, flags, window size, checksum, and options. The UDP header contains fields for source/destination ports, length, and checksum.
3) The main differences between TCP and UDP are that TCP is connection-oriented, provides error control and flow control, and supports full-duplex communication, while UDP is connectionless and provides none of these guarantees.
Wireless Application Protocol (WAP) allows devices to access the Internet over wireless networks. There are three main categories of protocols for managing shared access to wireless networks: fixed assignment, demand assignment, and random assignment. Fixed assignment divides resources like frequency bands or time slots and allocates them exclusively. Demand assignment allocates resources only to nodes that need them. Random assignment does not preallocate resources and relies on collision detection and retransmission to manage shared access. Common protocols that fall under these categories include FDMA, TDMA, CDMA, ALOHA, and CSMA.
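For the random-assignment schemes mentioned, the classic throughput formulas can be computed directly (an illustrative calculation, not from the slides): pure ALOHA achieves S = G·e^(-2G), peaking near 18%, while slotted ALOHA achieves S = G·e^(-G), peaking near 37%.

```python
# Classic ALOHA throughput S as a function of offered load G
# (frames per frame time).
import math

def pure_aloha(G):
    """Vulnerable period is two frame times."""
    return G * math.exp(-2 * G)

def slotted_aloha(G):
    """Slotting halves the vulnerable period to one frame time."""
    return G * math.exp(-G)

print(round(pure_aloha(0.5), 3))    # 0.184  (maximum, at G = 0.5)
print(round(slotted_aloha(1.0), 3)) # 0.368  (maximum, at G = 1.0)
```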
This document discusses the link layer and provides an overview of its services and context. It describes how the link layer is implemented in network interface cards and how these cards encapsulate datagrams into frames. It also outlines the topics to be covered, including error detection and correction, multiple access protocols, local area networks, and link virtualization.
The document discusses the key elements and characteristics of wireless networks. It describes the different components of a wireless network including wireless hosts, base stations, wireless links, and the infrastructure and ad hoc modes. It then covers characteristics of wireless links such as decreased signal strength, interference, and multipath propagation that make wireless communication more difficult than wired networks. It also discusses wireless network issues like the hidden terminal problem and signal attenuation.
This document discusses the network layer and IP protocol. It begins by explaining the key functions of the network layer, including forwarding, routing, and connection setup in some network architectures. It then explains the differences between virtual circuit and datagram networks, as well as the forwarding and routing processes. The document outlines the chapter and describes the IP datagram format and functions of the IP, ICMP, and routing protocols. It also provides details about router architecture and functions.
The document provides an overview of Chapter 3 from the textbook "Computer Networking: A Top Down Approach" by Jim Kurose and Keith Ross. It discusses the goals and outline of the chapter which covers transport layer services, multiplexing and demultiplexing, UDP, principles of reliable data transfer, TCP, and congestion control. Specifically, it describes transport layer services, multiplexing and demultiplexing of data between applications, UDP as a connectionless transport protocol, and outlines the topics to be covered related to reliable data transfer and TCP.
The document discusses network management and the Internet standard framework. It describes the key components of the framework, including the Structure of Management Information (SMI) which defines management objects, the Management Information Base (MIB) which stores the managed objects, and the Simple Network Management Protocol (SNMP) used to communicate between managing and managed devices. The framework also includes security and administration capabilities added in SNMPv3.
This document discusses public key cryptography and the RSA algorithm. RSA allows two parties to communicate securely without having to share a secret key beforehand. It works by having each party generate both a public and private key. The public key can be used to encrypt messages, while only the private key can decrypt them. RSA relies on the difficulty of factoring large numbers to be secure. The document provides an example of how RSA key generation and encryption/decryption work in practice.
The document discusses the application layer and HTTP protocol. It provides an overview of the HTTP protocol, including that it uses a client-server model with browsers as clients and web servers as servers. It describes how HTTP messages are exchanged over TCP connections and covers non-persistent and persistent HTTP connections. It also provides details on the format of HTTP requests and responses.
The document discusses slides that are being made freely available for use in teaching networking concepts. It states that the slides can be modified as needed but requests that their source is credited if used for teaching and their copyright is noted if posted online. The slides are from the textbook "Computer Networking: A Top Down Approach" by Jim Kurose and Keith Ross.
This document discusses fundamentals and link layer concepts in computer networks including:
- Network requirements, layering, protocols, and Internet architecture
- Link layer services such as framing, error detection, and flow control
- Performance metrics including bandwidth, latency, and the relationship between delay and bandwidth
- Common communication patterns like client-server and examples of TCP and UDP socket programming in C
This document discusses the use and copyright of slides from the textbook "Computer Networking: A Top Down Approach" by Jim Kurose and Keith Ross. It states that the slides are being made freely available for educational use provided that the source is cited and the copyright is acknowledged if posted online. The document asks users to mention the source of the slides if used in a class and to note the authors' copyright if posted on a website.
This document provides an overview of the transport layer chapter from the textbook "Computer Networking: A Top Down Approach". It discusses the goals of understanding transport layer services like multiplexing and demultiplexing. It also covers the two main Internet transport protocols - UDP which provides connectionless unreliable data transfer, and TCP which provides connection-oriented reliable data transfer. The document outlines the rest of the chapter which will discuss TCP and UDP in more detail as well as principles of reliable data transfer and congestion control.
The document provides an overview of the transport layer chapter from the textbook "Computer Networking: A Top Down Approach". It begins with goals of understanding transport layer services like multiplexing and demultiplexing. It then outlines the chapter sections which will cover transport layer protocols UDP and TCP, principles of reliable data transfer, and congestion control. Transport layer protocols provide end-to-end communication between application processes on different hosts.
The document provides an overview of the transport layer and outlines the goals and key topics to be covered in the chapter, including transport-layer services, multiplexing and demultiplexing, connectionless transport using UDP, principles of reliable data transfer, connection-oriented transport using TCP, and principles of congestion control. It also includes slides on specific transport layer topics like UDP and TCP segment structure, and multiplexing and demultiplexing.
The document provides an overview of transport layer services and protocols:
1. It discusses the goals of the transport layer including multiplexing, demultiplexing, reliable data transfer, flow control, and congestion control.
2. It describes the two main Internet transport protocols - UDP which provides connectionless unreliable data transfer, and TCP which provides connection-oriented reliable data transfer.
3. It outlines the chapter which covers transport layer services, multiplexing and demultiplexing, UDP, principles of reliable data transfer, TCP, and congestion control.
TRANSPORT LAYER_DATA STRUCTURE LEARNING MATERIAL, jainyshah20
This document provides an overview of the transport layer and outlines the goals and key topics to be covered in the chapter. It introduces the transport layer services and protocols, including multiplexing, demultiplexing, UDP as a connectionless transport, and TCP as a connection-oriented reliable transport. It also outlines the principles of reliable data transfer and congestion control that will be discussed.
The document provides an overview of the transport layer chapter from the textbook "Computer Networking: A Top Down Approach". It outlines the key topics that will be covered, including multiplexing and demultiplexing, connectionless transport with UDP, reliable data transfer, connection-oriented transport with TCP, and principles of congestion control. Examples are given of how multiplexing and demultiplexing work in both connectionless and connection-oriented transport.
computer network having transport layer.ppt, HarisKhan35881
This document provides an overview of slides for a computer networking course. It states that the slides can be freely used and modified with attribution given to the authors and their book. It also notes the copyright of the material.
This document provides an overview of slides for a chapter on transport layer networking. It states that the slides can be freely used and modified with proper attribution given. It also notes the copyright of the material.
Module 3: Transport layer
Transport layer services, Multiplexing and demultiplexing, User datagram protocol, Transmission control protocol: connection, features, segment, Round-Trip Time estimation and timeout, Flow control, Congestion control, SCTP
The document provides an overview of slides for a chapter on transport layer networking. It states that the slides can be freely used and modified with attribution given to the authors and copyright noted. It also provides brief descriptions of the goals and outline of the chapter, which covers transport layer services, multiplexing, UDP, TCP, and congestion control.
This document provides an outline and overview of key topics in the transport layer chapter of a computer networking textbook. It introduces the goals of understanding transport layer services like multiplexing and demultiplexing, reliable data transfer, and congestion control. It summarizes the two main Internet transport protocols - UDP, which provides a connectionless unreliable service, and TCP, which provides connection-oriented reliable delivery along with congestion control. It also outlines the process of developing a reliable data transfer protocol incrementally to handle an unreliable channel.
The document discusses the transport layer and its goals of providing reliable data transfer between application processes running on different hosts. It covers the key transport layer protocols UDP and TCP. UDP provides a basic unreliable datagram service, while TCP enables reliable in-order delivery of data through mechanisms like congestion control, flow control, and connection setup. The document also examines concepts like multiplexing, demultiplexing, and how reliable data transfer is achieved even over unreliable network channels through the use of sequence numbers, acknowledgments, and retransmissions.
The document provides information about a chapter on the transport layer from the textbook "Computer Networking: A Top-Down Approach". It contains notes for slides on transport layer services and protocols. Key points include:
- The transport layer provides logical communication between application processes on different hosts using protocols like TCP and UDP.
- TCP provides reliable, in-order delivery of data through mechanisms like sequencing, acknowledgements, flow control, and congestion control.
- UDP is a simpler "best effort" protocol that does not guarantee delivery or order but has lower overhead.
- Multiplexing and demultiplexing allow multiple applications to share transport layer ports to communicate through a single connection.
Computer networks Module 3 Transport layer, claudle200415
The document discusses the transport layer and provides an overview of key concepts. It introduces transport layer services including multiplexing, demultiplexing, and reliable data transfer. It describes the two main Internet transport protocols - UDP which provides connectionless unreliable data transfer, and TCP which provides connection-oriented reliable transfer. The document outlines the concepts that will be covered in more detail, including multiplexing, demultiplexing, the transport protocols UDP and TCP, and congestion control.
The document discusses the transport layer and key protocols TCP and UDP. It outlines the chapter which covers transport layer services like multiplexing and demultiplexing, connectionless transport with UDP, principles of reliable data transfer, connection-oriented transport with TCP including segment structure, reliable data transfer, flow control, and connection management, and principles of congestion control including TCP congestion control. It provides details on multiplexing and demultiplexing to direct segments to the appropriate socket, UDP using port numbers but providing unreliable delivery, and TCP using a 4-tuple to identify connections and providing reliable in-order byte stream delivery with congestion control and flow control.
The document provides an overview of the transport layer and its goals and protocols. It discusses the principles of transport layer services including multiplexing, demultiplexing, reliable data transfer, flow control, and congestion control. It describes the two main Internet transport layer protocols: UDP, which provides connectionless and unreliable data transfer, and TCP, which provides connection-oriented and reliable data transfer with congestion control. It then goes on to discuss in detail the principles and mechanisms behind reliable data transfer, including error detection, acknowledgements, negative acknowledgements, sequencing, and handling duplicate packets.
The document summarizes key aspects of the transport layer from Chapter 3 of the textbook "Computer Networking: A Top Down Approach". It discusses the goals of the transport layer, including providing multiplexing/demultiplexing, reliable data transfer, congestion control and flow control. It then describes the two main Internet transport protocols - UDP (connectionless) and TCP (connection-oriented), focusing on their differences and how TCP provides reliability.
The document discusses transport layer protocols and services including:
- TCP provides reliable, in-order delivery through congestion control, flow control, and connection setup. UDP provides unreliable, unordered delivery with no connection.
- Transport protocols multiplex and demultiplex data between applications using port numbers. TCP uses a 4-tuple of IP addresses and port numbers to identify each connection.
- UDP is useful for streaming multimedia since it is loss tolerant but rate sensitive, while TCP provides reliability through congestion control and retransmissions.
The transport layer provides end-to-end communication between processes on different machines. Two main transport protocols are TCP and UDP. TCP provides reliable, connection-oriented data transmission using acknowledgments and retransmissions. UDP provides simpler, connectionless transmission but without reliability. Both protocols use port numbers to identify processes and negotiate quality of service options during connection establishment.
Chapter 3 v6.01
1. Chapter 3
Transport Layer
A note on the use of these ppt slides:
We're making these slides freely available to all (faculty, students, readers). They're in PowerPoint form so you can see the animations, and you can add, modify, and delete slides (including this one) and slide content to suit your needs. They obviously represent a lot of work on our part. In return for use, we only ask the following:
- If you use these slides (e.g., in a class), that you mention their source (after all, we'd like people to use our book!)
- If you post any slides on a www site, that you note that they are adapted from (or perhaps identical to) our slides, and note our copyright of this material.
Thanks and enjoy! JFK/KWR

Computer Networking: A Top Down Approach, 6th edition
Jim Kurose, Keith Ross
Addison-Wesley, March 2012
All material copyright 1996-2013, J.F. Kurose and K.W. Ross, All Rights Reserved
Transport Layer 3-1
2. Chapter 3: Transport Layer
our goals:
- understand principles behind transport layer services:
  - multiplexing, demultiplexing
  - reliable data transfer
  - flow control
  - congestion control
- learn about Internet transport layer protocols:
  - UDP: connectionless transport
  - TCP: connection-oriented reliable transport
  - TCP congestion control
3. Chapter 3 outline
3.1 transport-layer services
3.2 multiplexing and demultiplexing
3.3 connectionless transport: UDP
3.4 principles of reliable data transfer
3.5 connection-oriented transport: TCP
  - segment structure
  - reliable data transfer
  - flow control
  - connection management
3.6 principles of congestion control
3.7 TCP congestion control
4. Transport services and protocols
- provide logical communication between app processes running on different hosts
- transport protocols run in end systems
  - send side: breaks app messages into segments, passes to network layer
  - rcv side: reassembles segments into messages, passes to app layer
- more than one transport protocol available to apps
  - Internet: TCP and UDP
[figure: logical end-end transport between two hosts, each drawn with the application / transport / network / data link / physical stack]
5. Transport vs. network layer
- network layer: logical communication between hosts
- transport layer: logical communication between processes
  - relies on, enhances, network layer services
household analogy: 12 kids in Ann's house sending letters to 12 kids in Bill's house:
- hosts = houses
- processes = kids
- app messages = letters in envelopes
- transport protocol = Ann and Bill, who demux to in-house siblings
- network-layer protocol = postal service
6. Internet transport-layer protocols
- reliable, in-order delivery (TCP)
  - congestion control
  - flow control
  - connection setup
- unreliable, unordered delivery: UDP
  - no-frills extension of "best-effort" IP
- services not available:
  - delay guarantees
  - bandwidth guarantees
[figure: logical end-end transport across a network of routers; the end hosts carry the full application / transport / network / data link / physical stack, while intermediate routers implement only the network, data link, and physical layers]
7. Chapter 3 outline
3.1 transport-layer services
3.2 multiplexing and demultiplexing
3.3 connectionless transport: UDP
3.4 principles of reliable data transfer
3.5 connection-oriented transport: TCP
  - segment structure
  - reliable data transfer
  - flow control
  - connection management
3.6 principles of congestion control
3.7 TCP congestion control
8. Multiplexing/demultiplexing
- multiplexing at sender: handle data from multiple sockets, add transport header (later used for demultiplexing)
- demultiplexing at receiver: use header info to deliver received segments to correct socket
[figure: three hosts with processes P1-P4 and their sockets at the application layer; segments pass down and up the transport / network / link / physical layers]
9. How demultiplexing works
- host receives IP datagrams
  - each datagram has source IP address, destination IP address
  - each datagram carries one transport-layer segment
  - each segment has source, destination port number
- host uses IP addresses & port numbers to direct segment to appropriate socket

TCP/UDP segment format (32 bits wide):
  | source port #       | dest port # |
  | other header fields               |
  | application data (payload)        |
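The port lookup described above can be sketched as a tiny table keyed by destination port number. This is a minimal illustration, not code from the slides: the class and socket names are invented, and a real stack would hold socket objects rather than strings.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of UDP-style demultiplexing: the receiving host keeps a table
// from destination port number to the socket bound on that port, and
// each arriving segment is delivered by looking up its dest port.
public class UdpDemux {
    private final Map<Integer, String> socketByPort = new HashMap<>();

    public void bind(int port, String socketName) {
        socketByPort.put(port, socketName);
    }

    // Returns the socket that receives a segment with this dest port, or
    // null if nothing is bound (real hosts reply with ICMP port unreachable).
    public String deliver(int destPort) {
        return socketByPort.get(destPort);
    }

    public static void main(String[] args) {
        UdpDemux host = new UdpDemux();
        host.bind(12534, "mySocket1");   // port from the slides' example
        host.bind(9157, "mySocket2");
        System.out.println(host.deliver(9157));   // mySocket2
        System.out.println(host.deliver(6428));   // null: no socket bound
    }
}
```

Note that only the destination port matters here; the connection-oriented case later in the chapter uses all four tuple values.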
10. Connectionless demultiplexing
- recall: created socket has host-local port #:
    DatagramSocket mySocket1 = new DatagramSocket(12534);
- recall: when creating datagram to send into UDP socket, must specify
  - destination IP address
  - destination port #
- when host receives UDP segment:
  - checks destination port # in segment
  - directs UDP segment to socket with that port #
- IP datagrams with same dest. port #, but different source IP addresses and/or source port numbers, will be directed to same socket at dest
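The slide's DatagramSocket calls can be exercised end to end on one machine. The following runnable sketch is my own, not from the slides: the receiver binds an ephemeral port (rather than the slide's fixed 12534) so it runs anywhere, and the sender supplies the destination IP address and port exactly as the slide requires.

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.nio.charset.StandardCharsets;

// Two UDP sockets on localhost: the sender specifies dest IP + dest port,
// and the receiving socket gets every datagram addressed to its port,
// regardless of the sender's source address or port.
public class ConnectionlessDemo {
    public static String roundTrip() throws Exception {
        try (DatagramSocket receiver = new DatagramSocket(0);  // ephemeral port
             DatagramSocket sender = new DatagramSocket()) {   // ephemeral source port
            byte[] payload = "hello".getBytes(StandardCharsets.UTF_8);
            DatagramPacket out = new DatagramPacket(
                payload, payload.length,
                InetAddress.getLoopbackAddress(),              // destination IP address
                receiver.getLocalPort());                      // destination port #
            sender.send(out);

            byte[] buf = new byte[1024];
            DatagramPacket in = new DatagramPacket(buf, buf.length);
            receiver.setSoTimeout(2000);   // UDP is best-effort: don't block forever
            receiver.receive(in);          // demuxed to this socket by dest port
            return new String(in.getData(), 0, in.getLength(), StandardCharsets.UTF_8);
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(roundTrip());   // hello
    }
}
```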
11. Connectionless demux: example
Three hosts, each with a UDP socket:
    DatagramSocket mySocket2 = new DatagramSocket(9157);
    DatagramSocket serverSocket = new DatagramSocket(6428);
    DatagramSocket mySocket1 = new DatagramSocket(5775);
[figure: three hosts running processes P1, P3, and P4, each stack drawn as transport / network / link / physical; segments flow between the client sockets and the server socket]
- segment with source port 9157, dest port 6428: client to server
- segment with source port 6428, dest port 9157: server back to that client
- the remaining exchange is shown with source port "?" and dest port "?" as an exercise
12. Connection-oriented demux
TCP socket identified
by 4-tuple:
source IP address
source port number
dest IP address
dest port number
demux: receiver uses
all four values to direct
segment to appropriate
socket
server host may support
many simultaneous TCP
sockets:
each socket identified by
its own 4-tuple
web servers have
different sockets for
each connecting client
non-persistent HTTP will
have different socket for
each request
Transport Layer 3-12
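The 4-tuple rule can be sketched the same way. Again an illustrative model with invented names, not a real TCP implementation — the point is only that all four values pick the socket:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch: TCP demultiplexing keyed on the full 4-tuple, so each
// connecting client gets its own socket even on the same server port.
class TcpDemux {
    private final Map<String, String> socketByTuple = new HashMap<>();

    static String key(String srcIp, int srcPort, String dstIp, int dstPort) {
        return srcIp + ":" + srcPort + "->" + dstIp + ":" + dstPort;
    }

    void register(String srcIp, int srcPort, String dstIp, int dstPort,
                  String socketName) {
        socketByTuple.put(key(srcIp, srcPort, dstIp, dstPort), socketName);
    }

    String deliver(String srcIp, int srcPort, String dstIp, int dstPort) {
        return socketByTuple.get(key(srcIp, srcPort, dstIp, dstPort));
    }

    public static void main(String[] args) {
        TcpDemux demux = new TcpDemux();
        // Two clients of the same web server port 80 on server B:
        demux.register("A", 9157, "B", 80, "socketForA");
        demux.register("C", 5775, "B", 80, "socketForC");
        System.out.println(demux.deliver("A", 9157, "B", 80)); // socketForA
        System.out.println(demux.deliver("C", 5775, "B", 80)); // socketForC
    }
}
```

Contrast with the UDP case: same destination IP and port, but different sources now reach different sockets.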
14. Connection-oriented demux: example
threaded server
application
application
P3
transport
network
network
network
link
link
link
physical
physical
server: IP
address B
source IP,port: B,80
dest IP,port: A,9157
source IP,port: A,9157
dest IP, port: B,80
P3
P2
transport
transport
host: IP
address A
application
P4
physical
source IP,port: C,5775
dest IP,port: B,80
host: IP
address C
source IP,port: C,9157
dest IP,port: B,80
Transport Layer 3-14
15. Chapter 3 outline
3.1 transport-layer
services
3.2 multiplexing and
demultiplexing
3.3 connectionless
transport: UDP
3.4 principles of reliable
data transfer
3.5 connection-oriented
transport: TCP
segment structure
reliable data transfer
flow control
connection management
3.6 principles of congestion
control
3.7 TCP congestion control
Transport Layer 3-15
16. UDP: User Datagram Protocol [RFC 768]
“no frills,” “bare bones”
Internet transport
protocol
“best effort” service, UDP
segments may be:
lost
delivered out-of-order
to app
connectionless:
no handshaking
between UDP sender,
receiver
each UDP segment
handled independently
of others
UDP use:
streaming multimedia
apps (loss tolerant, rate
sensitive)
DNS
SNMP
reliable transfer over
UDP:
add reliability at
application layer
application-specific error
recovery!
Transport Layer 3-16
17. UDP: segment header
UDP segment format (32 bits wide):
  source port # | dest port #
  length | checksum
  application data (payload)

length: in bytes of UDP segment, including header

why is there a UDP?
no connection
establishment (which can
add delay)
simple: no connection
state at sender, receiver
small header size
no congestion control:
UDP can blast away as
fast as desired
Transport Layer 3-17
18. UDP checksum
Goal: detect “errors” (e.g., flipped bits) in transmitted
segment
sender:
treat segment contents,
including header fields,
as sequence of 16-bit
integers
checksum: addition
(one’s complement sum)
of segment contents
sender puts checksum
value into UDP
checksum field
receiver:
compute checksum of
received segment
check if computed
checksum equals checksum
field value:
NO - error detected
YES - no error detected.
But maybe errors
nonetheless? More later
….
Transport Layer 3-18
19. Internet checksum: example
example: add two 16-bit integers
                  1110011001100110
                + 1101010101010101
    carry out → 1 1011101110111011
    wraparound    1011101110111100  (sum)
    checksum      0100010001000011
Note: when adding numbers, a carryout from the most
significant bit needs to be added to the result
Transport Layer 3-19
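The example above can be checked mechanically. A minimal Java sketch of the one's-complement sum with end-around carry (class name invented; real UDP also sums a pseudo-header, omitted here):

```java
// One's-complement 16-bit checksum over an array of 16-bit words,
// with end-around carry ("wraparound"), as in the slide's example.
class InternetChecksum {
    static int checksum(int[] words) {
        int sum = 0;
        for (int w : words) {
            sum += w & 0xFFFF;
            // add any carry out of the 16th bit back into the sum
            sum = (sum & 0xFFFF) + (sum >> 16);
        }
        return (~sum) & 0xFFFF; // one's complement of the sum
    }

    public static void main(String[] args) {
        // the slide's two 16-bit integers
        int[] words = {0b1110011001100110, 0b1101010101010101};
        System.out.printf("checksum = 0x%04X%n", checksum(words)); // 0x4443
        // receiver check: summing data plus checksum yields all 1s,
        // so the checksum over all three words is 0
        int[] withCk = {0b1110011001100110, 0b1101010101010101, 0x4443};
        System.out.println(checksum(withCk)); // 0
    }
}
```

The receiver-side check in `main` is the "YES - no error detected" branch from the previous slide.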
20. Chapter 3 outline
3.1 transport-layer
services
3.2 multiplexing and
demultiplexing
3.3 connectionless
transport: UDP
3.4 principles of reliable
data transfer
3.5 connection-oriented
transport: TCP
segment structure
reliable data transfer
flow control
connection management
3.6 principles of congestion
control
3.7 TCP congestion control
Transport Layer 3-20
21. Principles of reliable data
transfer
important in application, transport, link layers
top-10 list of important networking topics!
characteristics of unreliable channel will determine
complexity of reliable data transfer protocol (rdt)
Transport Layer 3-21
24. Reliable data transfer: getting started
send side:
  rdt_send(): called from above (e.g., by app.); passes data to deliver to receiver upper layer
  udt_send(): called by rdt, to transfer packet over unreliable channel to receiver
receive side:
  rdt_rcv(): called when packet arrives on rcv-side of channel
  deliver_data(): called by rdt to deliver data to upper layer
Transport Layer 3-24
25. Reliable data transfer: getting started
we’ll:
incrementally develop sender, receiver sides of
reliable data transfer protocol (rdt)
consider only unidirectional data transfer
but control info will flow on both directions!
use finite state machines (FSM) to specify sender,
receiver
state: when in this “state”, next state uniquely determined by next event
transition notation: event causing state transition / actions taken on state transition

[figure: state 1 →(event / actions)→ state 2]
Transport Layer 3-25
26. rdt1.0: reliable transfer over a reliable channel
underlying channel perfectly reliable
no bit errors
no loss of packets
separate FSMs for sender, receiver:
sender sends data into underlying channel
receiver reads data from underlying channel
sender FSM — state “Wait for call from above”:
  rdt_send(data): packet = make_pkt(data); udt_send(packet)

receiver FSM — state “Wait for call from below”:
  rdt_rcv(packet): extract(packet,data); deliver_data(data)
Transport Layer 3-26
27. rdt2.0: channel with bit errors
underlying channel may flip bits in packet
checksum to detect bit errors
the question: how to recover from errors:
acknowledgements (ACKs): receiver explicitly tells sender
that pkt received OK
negative acknowledgements (NAKs): receiver explicitly tells
sender that pkt had errors
sender retransmits pkt on receipt of NAK
How do humans recover from “errors” during conversation?
new mechanisms in rdt2.0 (beyond rdt1.0):
  error detection
  receiver feedback: control msgs (ACK, NAK) from receiver to sender
Transport Layer 3-27
29. rdt2.0: FSM specification
sender FSM:
  state “Wait for call from above”:
    rdt_send(data): sndpkt = make_pkt(data, checksum); udt_send(sndpkt)
  state “Wait for ACK or NAK”:
    rdt_rcv(rcvpkt) && isNAK(rcvpkt): udt_send(sndpkt)
    rdt_rcv(rcvpkt) && isACK(rcvpkt): Λ  (back to “Wait for call from above”)

receiver FSM:
  state “Wait for call from below”:
    rdt_rcv(rcvpkt) && corrupt(rcvpkt): udt_send(NAK)
    rdt_rcv(rcvpkt) && notcorrupt(rcvpkt): extract(rcvpkt,data); deliver_data(data); udt_send(ACK)
Transport Layer 3-29
30. rdt2.0: operation with no errors
(animation of the rdt2.0 FSM from slide 29, highlighting the no-error path: sender sends a checksummed pkt; receiver finds it not corrupt, delivers the data, and sends an ACK; sender returns to waiting for data)
Transport Layer 3-30
31. rdt2.0: error scenario
(animation of the rdt2.0 FSM from slide 29, highlighting the error path: receiver detects corruption and sends a NAK; on receiving the NAK, the sender retransmits sndpkt)
Transport Layer 3-31
32. rdt2.0 has a fatal flaw!
what happens if
ACK/NAK corrupted?
sender doesn’t know
what happened at
receiver!
can’t just retransmit:
possible duplicate
handling duplicates:
sender retransmits
current pkt if ACK/NAK
corrupted
sender adds sequence
number to each pkt
receiver discards (doesn’t
deliver up) duplicate pkt
stop and wait
sender sends one packet,
then waits for receiver
response
Transport Layer 3-32
33. rdt2.1: sender, handles garbled ACK/NAKs
sender FSM:
  state “Wait for call 0 from above”:
    rdt_send(data): sndpkt = make_pkt(0, data, checksum); udt_send(sndpkt)
  state “Wait for ACK or NAK 0”:
    rdt_rcv(rcvpkt) && ( corrupt(rcvpkt) || isNAK(rcvpkt) ): udt_send(sndpkt)
    rdt_rcv(rcvpkt) && notcorrupt(rcvpkt) && isACK(rcvpkt): Λ
  state “Wait for call 1 from above”:
    rdt_send(data): sndpkt = make_pkt(1, data, checksum); udt_send(sndpkt)
  state “Wait for ACK or NAK 1”:
    rdt_rcv(rcvpkt) && ( corrupt(rcvpkt) || isNAK(rcvpkt) ): udt_send(sndpkt)
    rdt_rcv(rcvpkt) && notcorrupt(rcvpkt) && isACK(rcvpkt): Λ
Transport Layer 3-33
35. rdt2.1: discussion
sender:
seq # added to pkt
two seq. #’s (0,1) will
suffice. Why?
must check if received
ACK/NAK corrupted
twice as many states
state must “remember”
whether “expected”
pkt should have seq #
of 0 or 1
receiver:
must check if received
packet is duplicate
state indicates whether
0 or 1 is expected pkt
seq #
note: receiver cannot know if its last ACK/NAK was received OK at sender
Transport Layer 3-35
36. rdt2.2: a NAK-free protocol
same functionality as rdt2.1, using ACKs only
instead of NAK, receiver sends ACK for last pkt
received OK
receiver must explicitly include seq # of pkt being ACKed
duplicate ACK at sender results in same action as
NAK: retransmit current pkt
Transport Layer 3-36
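The duplicate-ACK-as-NAK rule can be sketched in a few lines of Java (an illustrative model with invented names, not the textbook's FSM code):

```java
// Sketch of the rdt2.2 sender rule: an ACK carrying the *other*
// sequence number (a duplicate ACK) triggers the same action as a NAK.
class Rdt22Sender {
    int currentSeq = 0;      // seq # of the packet in flight (0 or 1)
    int retransmissions = 0;

    // Returns true when the sender may move on to the next packet.
    boolean onAck(boolean corrupt, int ackedSeq) {
        if (corrupt || ackedSeq != currentSeq) {
            retransmissions++;       // duplicate ACK acts as NAK: resend
            return false;
        }
        currentSeq = 1 - currentSeq; // advance to the other seq #
        return true;
    }
}
```

A duplicate ACK for seq 1 while packet 0 is in flight causes a retransmission; an ACK for seq 0 lets the sender advance.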
37. rdt2.2: sender, receiver fragments
sender FSM fragment:
  state “Wait for call 0 from above”:
    rdt_send(data): sndpkt = make_pkt(0, data, checksum); udt_send(sndpkt)
  state “Wait for ACK 0”:
    rdt_rcv(rcvpkt) && ( corrupt(rcvpkt) || isACK(rcvpkt,1) ): udt_send(sndpkt)
    rdt_rcv(rcvpkt) && notcorrupt(rcvpkt) && isACK(rcvpkt,0): Λ

receiver FSM fragment:
  state “Wait for 0 from below”:
    rdt_rcv(rcvpkt) && (corrupt(rcvpkt) || has_seq1(rcvpkt)): udt_send(sndpkt)  (resend last ACK)
  entering “Wait for 0 from below” (from the seq-1 state):
    rdt_rcv(rcvpkt) && notcorrupt(rcvpkt) && has_seq1(rcvpkt): extract(rcvpkt,data); deliver_data(data); sndpkt = make_pkt(ACK1, chksum); udt_send(sndpkt)
Transport Layer 3-37
38. rdt3.0: channels with errors and loss
new assumption:
underlying channel can
also lose packets
(data, ACKs)
checksum, seq. #,
ACKs, retransmissions
will be of help … but
not enough
approach: sender waits
“reasonable” amount of
time for ACK
retransmits if no ACK
received in this time
if pkt (or ACK) just delayed
(not lost):
retransmission will be
duplicate, but seq. #’s
already handles this
receiver must specify seq
# of pkt being ACKed
requires countdown timer
Transport Layer 3-38
42. Performance of rdt3.0
rdt3.0 is correct, but performance stinks
e.g.: 1 Gbps link, 15 ms prop. delay, 8000 bit packet:

  Dtrans = L / R = 8000 bits / 10^9 bits/sec = 8 microsecs

U sender: utilization – fraction of time sender busy sending

  U sender = (L/R) / (RTT + L/R) = 0.008 / 30.008 = 0.00027

if RTT = 30 msec, 1KB pkt every 30 msec: 33kB/sec thruput over 1 Gbps link
network protocol limits use of physical resources!
Transport Layer 3-42
43. rdt3.0: stop-and-wait operation
[timing diagram: sender and receiver]
  sender: first packet bit transmitted, t = 0; last packet bit transmitted, t = L/R
  receiver: first packet bit arrives; last packet bit arrives, send ACK
  sender: ACK arrives, send next packet, t = RTT + L/R

  U sender = (L/R) / (RTT + L/R) = 0.008 / 30.008 = 0.00027
Transport Layer 3-43
44. Pipelined protocols
pipelining: sender allows multiple, “in-flight”, yet-to-be-acknowledged pkts
  range of sequence numbers must be increased
  buffering at sender and/or receiver
two generic forms of pipelined protocols: Go-Back-N, selective repeat
Transport Layer 3-44
45. Pipelining: increased utilization
[timing diagram: sender and receiver]
  sender: first packet bit transmitted, t = 0; last bit transmitted, t = L/R
  receiver: last bit of 1st, 2nd, 3rd packet arrives; an ACK is sent for each
  sender: ACK arrives, send next packet, t = RTT + L/R

3-packet pipelining increases utilization by a factor of 3!

  U sender = (3L/R) / (RTT + L/R) = 0.024 / 30.008 = 0.00080
Transport Layer 3-45
46. Pipelined protocols: overview
Go-back-N:
  sender can have up to N unacked packets in pipeline
  receiver only sends cumulative ack
    doesn’t ack packet if there’s a gap
  sender has timer for oldest unacked packet
    when timer expires, retransmit all unacked packets

Selective Repeat:
  sender can have up to N unack’ed packets in pipeline
  rcvr sends individual ack for each packet
  sender maintains timer for each unacked packet
    when timer expires, retransmit only that unacked packet
Transport Layer 3-46
47. Go-Back-N: sender
k-bit seq # in pkt header
“window” of up to N, consecutive unack’ed pkts allowed
ACK(n): ACKs all pkts up to, including seq # n - “cumulative
ACK”
may receive duplicate ACKs (see receiver)
timer for oldest in-flight pkt
timeout(n): retransmit packet n and all higher seq # pkts in
window
Transport Layer 3-47
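The GBN sender's bookkeeping can be sketched compactly. This is an in-memory model (invented names, no timers or channel) of exactly the three rules above — window of N, cumulative ACK, retransmit-all on timeout:

```java
// Minimal Go-Back-N sender bookkeeping: window of N consecutive pkts,
// cumulative ACKs, timeout retransmits everything from base onward.
class GbnSender {
    final int N;
    int base = 0;     // oldest unacked seq #
    int nextSeq = 0;  // next seq # to use

    GbnSender(int n) { N = n; }

    boolean send() {                 // data from above
        if (nextSeq < base + N) { nextSeq++; return true; }
        return false;                // window full: refuse data
    }

    void onAck(int n) {              // cumulative: ACK(n) covers all <= n
        if (n >= base) base = n + 1;
    }

    int[] onTimeout() {              // resend all in-flight pkts
        int[] resend = new int[nextSeq - base];
        for (int i = 0; i < resend.length; i++) resend[i] = base + i;
        return resend;
    }
}
```

For example, with N = 4 the sender can send pkts 0–3, refuses a fifth, and after ACK(1) its window slides to admit pkt 4; a timeout then resends 2, 3, 4.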
51. Selective repeat
receiver individually acknowledges all correctly
received pkts
buffers pkts, as needed, for eventual in-order delivery
to upper layer
sender only resends pkts for which ACK not
received
sender timer for each unACKed pkt
sender window
N consecutive seq #’s
limits seq #s of sent, unACKed pkts
Transport Layer 3-51
53. Selective repeat
sender:
  data from above: if next available seq # in window, send pkt
  timeout(n): resend pkt n, restart timer
  ACK(n) in [sendbase, sendbase+N]:
    mark pkt n as received
    if n smallest unACKed pkt, advance window base to next unACKed seq #

receiver:
  pkt n in [rcvbase, rcvbase+N-1]:
    send ACK(n)
    out-of-order: buffer
    in-order: deliver (also deliver buffered, in-order pkts), advance window to next not-yet-received pkt
  pkt n in [rcvbase-N, rcvbase-1]: ACK(n)
  otherwise: ignore
Transport Layer 3-53
55. Selective repeat: dilemma
example:
  seq #’s: 0, 1, 2, 3
  window size = 3

receiver sees no difference in two scenarios!
  duplicate data accepted as new in (b)

(a) no problem: pkt0, pkt1, pkt2 are delivered and ACKed; both windows advance; sender sends pkt3 (lost, X) and then a new pkt0; receiver will accept the packet with seq number 0 — correctly, as new data
(b) oops!: pkt0, pkt1, pkt2 are delivered but all ACKs are lost (X); on timeout the sender retransmits the old pkt0; the receiver, whose window has advanced, will accept the packet with seq number 0 — wrongly, as new data

receiver can’t see sender side; receiver behavior identical in both cases! something’s (very) wrong!

Q: what relationship between seq # size and window size to avoid problem in (b)?
Transport Layer 3-55
56. Chapter 3 outline
3.1 transport-layer
services
3.2 multiplexing and
demultiplexing
3.3 connectionless
transport: UDP
3.4 principles of reliable
data transfer
3.5 connection-oriented
transport: TCP
segment structure
reliable data transfer
flow control
connection management
3.6 principles of congestion
control
3.7 TCP congestion control
Transport Layer 3-56
57. TCP: Overview
RFCs: 793,1122,1323, 2018, 2581
point-to-point:
  one sender, one receiver
reliable, in-order byte stream:
  no “message boundaries”
pipelined:
  TCP congestion and flow control set window size
full duplex data:
  bi-directional data flow in same connection
  MSS: maximum segment size
connection-oriented:
  handshaking (exchange of control msgs) inits sender, receiver state before data exchange
flow controlled:
  sender will not overwhelm receiver
Transport Layer 3-57
58. TCP segment structure
[TCP segment format, 32 bits wide]
  source port # | dest port #
  sequence number (counting by bytes of data, not segments!)
  acknowledgement number
  head len | not used | U A P R S F | receive window (# bytes rcvr willing to accept)
  checksum (Internet checksum, as in UDP) | urg data pointer
  options (variable length)
  application data (variable length)

flag bits:
  URG: urgent data (generally not used)
  ACK: ACK # valid
  PSH: push data now (generally not used)
  RST, SYN, FIN: connection estab (setup, teardown commands)
Transport Layer 3-58
59. TCP seq. numbers, ACKs
sequence numbers:
byte stream “number” of
first byte in segment’s
data
acknowledgements:
seq # of next byte
expected from other side
cumulative ACK
Q: how receiver handles
out-of-order segments
A: TCP spec doesn’t say,
- up to implementor
[figure: outgoing segment from sender and incoming segment to sender, each showing source port #, dest port #, sequence number, acknowledgement number, rwnd, checksum, urg pointer]

sender sequence number space (window size N):
  sent & ACKed | sent, not-yet ACKed (“in-flight”) | usable but not yet sent | not usable
Transport Layer 3-59
60. TCP seq. numbers, ACKs
simple telnet scenario (Host A ↔ Host B):
  User types ‘C’: A sends Seq=42, ACK=79, data = ‘C’
  host B ACKs receipt of ‘C’, echoes back ‘C’: Seq=79, ACK=43, data = ‘C’
  host A ACKs receipt of echoed ‘C’: Seq=43, ACK=80
Transport Layer 3-60
61. TCP round trip time, timeout
Q: how to set TCP
timeout value?
Q: how to estimate RTT?
longer than RTT
but RTT varies
too short: premature
timeout, unnecessary
retransmissions
too long: slow reaction
to segment loss
SampleRTT: measured
time from segment
transmission until ACK
receipt
ignore retransmissions
SampleRTT will vary, want
estimated RTT “smoother”
average several recent
measurements, not just
current SampleRTT
Transport Layer 3-61
62. TCP round trip time, timeout
EstimatedRTT = (1 - α)*EstimatedRTT + α*SampleRTT
  exponential weighted moving average
  influence of past sample decreases exponentially fast
  typical value: α = 0.125

[plot: RTT (milliseconds) vs. time (seconds), gaia.cs.umass.edu to fantasia.eurecom.fr — SampleRTT fluctuates roughly between 100 and 350 ms; EstimatedRTT is the smoother curve]
Transport Layer 3-62
63. TCP round trip time, timeout
timeout interval: EstimatedRTT plus “safety margin”
  large variation in EstimatedRTT -> larger safety margin
estimate SampleRTT deviation from EstimatedRTT:
  DevRTT = (1-β)*DevRTT + β*|SampleRTT - EstimatedRTT|
  (typically, β = 0.25)

TimeoutInterval = EstimatedRTT + 4*DevRTT
                  (estimated RTT) (“safety margin”)
Transport Layer 3-63
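The two EWMA updates combine into a few lines of Java. The class name and initial values are illustrative assumptions (not from the slides); DevRTT is updated against the previous EstimatedRTT, the order RFC 6298 uses:

```java
// EWMA RTT estimation and timeout, per the slides' formulas
// (alpha = 0.125, beta = 0.25).
class RttEstimator {
    double estimatedRtt = 100.0; // ms; initial values are assumptions
    double devRtt = 5.0;         // ms

    void sample(double sampleRtt) {
        // DevRTT = (1-beta)*DevRTT + beta*|SampleRTT - EstimatedRTT|
        devRtt = 0.75 * devRtt + 0.25 * Math.abs(sampleRtt - estimatedRtt);
        // EstimatedRTT = (1-alpha)*EstimatedRTT + alpha*SampleRTT
        estimatedRtt = 0.875 * estimatedRtt + 0.125 * sampleRtt;
    }

    double timeoutInterval() {
        return estimatedRtt + 4 * devRtt; // estimate plus safety margin
    }
}
```

One SampleRTT of 120 ms moves EstimatedRTT only from 100 to 102.5 ms — the smoothing in action — while the deviation term widens the timeout.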
64. Chapter 3 outline
3.1 transport-layer
services
3.2 multiplexing and
demultiplexing
3.3 connectionless
transport: UDP
3.4 principles of reliable
data transfer
3.5 connection-oriented
transport: TCP
segment structure
reliable data transfer
flow control
connection management
3.6 principles of congestion
control
3.7 TCP congestion control
Transport Layer 3-64
65. TCP reliable data transfer
TCP creates rdt service on top of IP’s unreliable service
  pipelined segments
  cumulative acks
  single retransmission timer
retransmissions triggered by:
  timeout events
  duplicate acks
let’s initially consider simplified TCP sender:
  ignore duplicate acks
  ignore flow control, congestion control
Transport Layer 3-65
66. TCP sender events:
data rcvd from app:
create segment with
seq #
seq # is byte-stream
number of first data
byte in segment
start timer if not
already running
think of timer as for
oldest unacked
segment
expiration interval:
TimeOutInterval
timeout:
retransmit segment
that caused timeout
restart timer
ack rcvd:
if ack acknowledges
previously unacked
segments
update what is known
to be ACKed
start timer if there are
still unacked segments
Transport Layer 3-66
67. TCP sender (simplified)
Λ (init): NextSeqNum = InitialSeqNum; SendBase = InitialSeqNum

wait for event:

data received from application above:
  create segment, seq. #: NextSeqNum
  pass segment to IP (i.e., “send”)
  NextSeqNum = NextSeqNum + length(data)
  if (timer currently not running)
    start timer

timeout:
  retransmit not-yet-acked segment with smallest seq. #
  start timer

ACK received, with ACK field value y:
  if (y > SendBase) {
    SendBase = y
    /* SendBase–1: last cumulatively ACKed byte */
    if (there are currently not-yet-acked segments)
      start timer
    else stop timer
  }
Transport Layer 3-67
68. TCP: retransmission scenarios
lost ACK scenario (Host A ↔ Host B):
  A sends Seq=92, 8 bytes of data; B’s ACK=100 is lost (X)
  timeout: A resends Seq=92, 8 bytes of data; B replies ACK=100; SendBase=100

premature timeout (Host A ↔ Host B):
  A sends Seq=92 (8 bytes) and Seq=100 (20 bytes); B sends ACK=100, then ACK=120; SendBase=100, then 120
  timeout fires before the ACKs arrive: A resends Seq=92, 8 bytes of data
  B replies ACK=120 (cumulative); SendBase=120
Transport Layer 3-68
69. TCP: retransmission scenarios
cumulative ACK scenario (Host A ↔ Host B):
  A sends Seq=92 (8 bytes) and Seq=100 (20 bytes)
  B’s ACK=100 is lost (X), but ACK=120 arrives before timeout
  ACK=120 cumulatively acknowledges both segments — no retransmission needed
Transport Layer 3-69
70. TCP ACK generation
[RFC 1122, RFC 2581]
event at receiver → TCP receiver action

arrival of in-order segment with expected seq #; all data up to expected seq # already ACKed
  → delayed ACK: wait up to 500 ms for next segment; if no next segment, send ACK

arrival of in-order segment with expected seq #; one other segment has ACK pending
  → immediately send single cumulative ACK, ACKing both in-order segments

arrival of out-of-order segment with higher-than-expected seq. #; gap detected
  → immediately send duplicate ACK, indicating seq. # of next expected byte

arrival of segment that partially or completely fills gap
  → immediately send ACK, provided that segment starts at lower end of gap
Transport Layer 3-70
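The first three table rows can be sketched as a small decision function. This is a simplified illustrative model (invented names; it tracks only the expected seq # and ignores out-of-order buffering, so the gap-filling row is not modeled):

```java
// Sketch of the receiver's ACK rules: delayed ACK for the first
// in-order segment, immediate cumulative ACK for the second,
// immediate duplicate ACK when a gap is detected.
class AckPolicy {
    int expectedSeq = 0;
    boolean ackPending = false;

    // Returns the action for an arriving segment (seq #, length in bytes).
    String onSegment(int seq, int len) {
        if (seq == expectedSeq) {              // in-order arrival
            expectedSeq += len;
            if (ackPending) {
                ackPending = false;
                return "send cumulative ACK " + expectedSeq;
            }
            ackPending = true;
            return "delay ACK (up to 500 ms)";
        }
        if (seq > expectedSeq)                 // gap detected
            return "send duplicate ACK " + expectedSeq;
        return "send ACK " + expectedSeq;      // old data: re-ACK
    }
}
```

Two back-to-back in-order segments produce one cumulative ACK; a segment beyond the expected byte produces the duplicate ACK that fast retransmit (next slide) relies on.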
71. TCP fast retransmit
time-out period often relatively long:
  long delay before resending lost packet
detect lost segments via duplicate ACKs:
  sender often sends many segments back-to-back
  if segment is lost, there will likely be many duplicate ACKs

TCP fast retransmit:
  if sender receives 3 duplicate ACKs for same data (“triple duplicate ACKs”), resend unacked segment with smallest seq #
  likely that unacked segment lost, so don’t wait for timeout
Transport Layer 3-71
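The trigger itself is a small counter. A minimal sketch (invented names; a real sender would also reset the count on retransmission):

```java
// Fast retransmit trigger: count duplicate ACKs; on the third duplicate
// (the fourth ACK carrying the same number), retransmit without
// waiting for the timeout.
class FastRetransmit {
    int lastAck = -1;
    int dupCount = 0;

    // Returns true when the sender should fast-retransmit.
    boolean onAck(int ackNum) {
        if (ackNum == lastAck) {
            dupCount++;
            return dupCount == 3; // triple duplicate ACK
        }
        lastAck = ackNum;         // new ACK: reset duplicate count
        dupCount = 0;
        return false;
    }
}
```

Four ACK=100s in a row — the original plus three duplicates — fire the retransmission of the segment starting at byte 100.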
72. TCP fast retransmit
fast retransmit scenario (Host A ↔ Host B):
  A sends Seq=92 (8 bytes) and Seq=100 (20 bytes); the second segment is lost (X)
  B sends ACK=100; each subsequent arrival triggers another duplicate ACK=100
  on receipt of the triple duplicate ACK, A resends Seq=100, 20 bytes of data — before the timeout expires
Transport Layer 3-72
73. Chapter 3 outline
3.1 transport-layer
services
3.2 multiplexing and
demultiplexing
3.3 connectionless
transport: UDP
3.4 principles of reliable
data transfer
3.5 connection-oriented
transport: TCP
segment structure
reliable data transfer
flow control
connection management
3.6 principles of congestion
control
3.7 TCP congestion control
Transport Layer 3-73
74. TCP flow control
application may remove data from TCP socket buffers … slower than TCP receiver is delivering (sender is sending)

flow control: receiver controls sender, so sender won’t overflow receiver’s buffer by transmitting too much, too fast

[figure: receiver protocol stack — application process reading from OS-level TCP socket receiver buffers; TCP code and IP code below; segments arriving from sender]
Transport Layer 3-74
75. TCP flow control
receiver “advertises” free buffer space by including rwnd value in TCP header of receiver-to-sender segments
  RcvBuffer size set via socket options (typical default is 4096 bytes)
  many operating systems autoadjust RcvBuffer
sender limits amount of unacked (“in-flight”) data to receiver’s rwnd value
  guarantees receive buffer will not overflow

[figure: receiver-side buffering — RcvBuffer holds buffered data plus free buffer space (rwnd); TCP segment payloads enter, application process reads out]
Transport Layer 3-75
76. Chapter 3 outline
3.1 transport-layer
services
3.2 multiplexing and
demultiplexing
3.3 connectionless
transport: UDP
3.4 principles of reliable
data transfer
3.5 connection-oriented
transport: TCP
segment structure
reliable data transfer
flow control
connection management
3.6 principles of congestion
control
3.7 TCP congestion control
Transport Layer 3-76
77. Connection Management
before exchanging data, sender/receiver “handshake”:
agree to establish connection (each knowing the other willing
to establish connection)
agree on connection parameters
[figure: client and server applications, each holding connection state ESTAB and connection variables: seq #s client-to-server and server-to-client, rcvBuffer size at server, client]

client: Socket clientSocket = new Socket("hostname","port number");
server: Socket connectionSocket = welcomeSocket.accept();
Transport Layer 3-77
78. Agreeing to establish a connection
2-way handshake:
  [figure: “Let’s talk” → “OK”; both sides reach ESTAB]
  [figure: client chooses x, sends req_conn(x); server replies acc_conn(x); both reach ESTAB]

Q: will 2-way handshake always work in network?
  variable delays
  retransmitted messages (e.g. req_conn(x)) due to message loss
  message reordering
  can’t “see” other side
Transport Layer 3-78
79. Agreeing to establish a connection
2-way handshake failure scenarios:

half open connection (no client!):
  client chooses x, sends req_conn(x); on timeout it retransmits req_conn(x); server ESTABs and sends acc_conn(x)
  client terminates, connection x completes, server forgets x
  the retransmitted req_conn(x) then arrives: server ESTABs again — a half open connection with no client

duplicate data accepted:
  client retransmits req_conn(x) and data(x+1); server accepts the data; client terminates, connection x completes, server forgets x
  the retransmitted req_conn(x) and data(x+1) then arrive: server ESTABs and accepts data(x+1) a second time
Transport Layer 3-79
80. TCP 3-way handshake
client state / server state (server starts in LISTEN):

  client: choose init seq num, x; send TCP SYN msg — SYNbit=1, Seq=x → SYNSENT
  server: choose init seq num, y; send TCP SYNACK msg, acking SYN — SYNbit=1, Seq=y, ACKbit=1, ACKnum=x+1 → SYN RCVD
  client: received SYNACK(x) indicates server is live; send ACK for SYNACK — ACKbit=1, ACKnum=y+1 (this segment may contain client-to-server data) → ESTAB
  server: received ACK(y) indicates client is live → ESTAB
Transport Layer 3-80
81. TCP 3-way handshake: FSM
server:
  closed →(Λ / Socket connectionSocket = welcomeSocket.accept())→ listen
  listen →(SYN(seq=x) / SYNACK(seq=y, ACKnum=x+1); create new socket for communication back to client)→ SYN rcvd
  SYN rcvd →(ACK(ACKnum=y+1) / Λ)→ ESTAB
client:
  closed →(Socket clientSocket = new Socket("hostname","port number") / SYN(seq=x))→ SYN sent
  SYN sent →(SYNACK(seq=y, ACKnum=x+1) / ACK(ACKnum=y+1))→ ESTAB
Transport Layer 3-81
82. TCP: closing a connection
client, server each close their side of connection
send TCP segment with FIN bit = 1
respond to received FIN with ACK
on receiving FIN, ACK can be combined with own FIN
simultaneous FIN exchanges can be handled
Transport Layer 3-82
83. TCP: closing a connection
client state / server state (both start in ESTAB):

  client: clientSocket.close(); send FINbit=1, seq=x → FIN_WAIT_1 (can no longer send but can receive data)
  server: reply ACKbit=1, ACKnum=x+1 → CLOSE_WAIT (can still send data)
  client: → FIN_WAIT_2 (wait for server close)
  server: send FINbit=1, seq=y → LAST_ACK (can no longer send data)
  client: reply ACKbit=1, ACKnum=y+1 → TIMED_WAIT (timed wait for 2*max segment lifetime) → CLOSED
  server: → CLOSED
Transport Layer 3-83
84. Chapter 3 outline
3.1 transport-layer
services
3.2 multiplexing and
demultiplexing
3.3 connectionless
transport: UDP
3.4 principles of reliable
data transfer
3.5 connection-oriented
transport: TCP
segment structure
reliable data transfer
flow control
connection management
3.6 principles of congestion
control
3.7 TCP congestion control
Transport Layer 3-84
85. Principles of congestion control
congestion:
informally: “too many sources sending too much
data too fast for network to handle”
different from flow control!
manifestations:
lost packets (buffer overflow at routers)
long delays (queueing in router buffers)
a top-10 problem!
Transport Layer 3-85
86. Causes/costs of congestion: scenario 1
two senders, two receivers
one router, infinite buffers
output link capacity: R
no retransmission

[figure: Hosts A and B send original data λin into one router with unlimited shared output link buffers; throughput λout emerges]
[graphs: throughput λout vs. λin — maximum per-connection throughput: R/2; delay vs. λin — large delays as arrival rate λin approaches capacity]
Transport Layer 3-86
87. Causes/costs of congestion: scenario 2
one router, finite buffers
sender retransmission of timed-out packet
  application-layer input = application-layer output: λin = λout
  transport-layer input includes retransmissions: λ'in ≥ λin

[figure: Host A sends original data λin, plus retransmitted data (λ'in), into finite shared output link buffers; Host B receives λout]
Transport Layer 3-87
88. Causes/costs of congestion: scenario 2
idealization: perfect knowledge
  sender sends only when router buffers available (“free buffer space!”)

[graph: λout vs. λin — output equals input up to R/2]
Transport Layer 3-88
89. Causes/costs of congestion: scenario 2
Idealization: known loss
  packets can be lost, dropped at router due to full buffers (“no buffer space!”)
  sender only resends if packet known to be lost
Transport Layer 3-89
90. Causes/costs of congestion: scenario 2
Idealization: known loss (continued)
  packets can be lost, dropped at router due to full buffers
  sender only resends if packet known to be lost

[graph: λout vs. λin — when sending at R/2, some packets are retransmissions but asymptotic goodput is still R/2 (why?)]
Transport Layer 3-90
91. Causes/costs of congestion: scenario 2
Realistic: duplicates
  packets can be lost, dropped at router due to full buffers
  sender times out prematurely, sending two copies, both of which are delivered

[graph: λout vs. λin — when sending at R/2, some packets are retransmissions, including duplicates that are delivered!]
Transport Layer 3-91
92. Causes/costs of congestion: scenario 2
Realistic: duplicates (continued)
  packets can be lost, dropped at router due to full buffers
  sender times out prematurely, sending two copies, both of which are delivered

[graph: λout vs. λin — when sending at R/2, some packets are retransmissions, including duplicates that are delivered!]
“costs” of congestion:
more work (retrans) for given “goodput”
unneeded retransmissions: link carries multiple copies of pkt
decreasing goodput
Transport Layer 3-92
93. Causes/costs of congestion: scenario 3
four senders
multihop paths
timeout/retransmit

Q: what happens as λin and λ'in increase?
A: as red λ'in increases, all arriving blue pkts at upper queue are dropped, blue throughput → 0

[figure: Hosts A–D send original data λin, plus retransmitted data (λ'in), across routers with finite shared output link buffers]
Transport Layer 3-93
94. Causes/costs of congestion: scenario 3
[graph: λout vs. λ'in — throughput rises toward C/2, then collapses as λ'in grows]

another “cost” of congestion:
  when a packet is dropped, any upstream transmission capacity used for that packet was wasted!
Transport Layer 3-94
95. Approaches towards congestion control
two broad approaches towards congestion control:
end-end congestion
control:
no explicit feedback
from network
congestion inferred
from end-system
observed loss, delay
approach taken by
TCP
network-assisted
congestion control:
routers provide
feedback to end systems
single bit indicating
congestion (SNA,
DECbit, TCP/IP ECN,
ATM)
explicit rate for
sender to send at
Transport Layer 3-95
96. Case study: ATM ABR congestion control
ABR: available bit rate:
“elastic service”
if sender’s path
“underloaded”:
sender should use
available bandwidth
if sender’s path
congested:
sender throttled to
minimum guaranteed
rate
RM (resource management)
cells:
sent by sender, interspersed
with data cells
bits in RM cell set by
switches (“network-assisted”)
NI bit: no increase in rate
(mild congestion)
CI bit: congestion
indication
RM cells returned to sender
by receiver, with bits intact
Transport Layer 3-96
97. Case study: ATM ABR congestion control
RM cell
data cell
two-byte ER (explicit rate) field in RM cell
  congested switch may lower ER value in cell
  sender’s send rate thus set to maximum supportable rate on path
EFCI bit in data cells: set to 1 in congested switch
  if data cell preceding RM cell has EFCI set, receiver sets CI bit in returned RM cell
Transport Layer 3-97
98. Chapter 3 outline
3.1 transport-layer
services
3.2 multiplexing and
demultiplexing
3.3 connectionless
transport: UDP
3.4 principles of reliable
data transfer
3.5 connection-oriented
transport: TCP
segment structure
reliable data transfer
flow control
connection management
3.6 principles of congestion
control
3.7 TCP congestion control
Transport Layer 3-98
99. TCP congestion control: additive increase, multiplicative decrease
approach: sender increases transmission rate (window size), probing for usable bandwidth, until loss occurs
additive increase: increase cwnd by 1 MSS every RTT until loss detected
multiplicative decrease: cut cwnd in half after loss
AIMD sawtooth behavior: probing for bandwidth
cwnd: TCP sender congestion window size
[figure: cwnd vs. time; additively increase window size … until loss occurs (then cut window in half)]
Transport Layer 3-99
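The AIMD sawtooth above can be sketched as a toy trace (a simplification: cwnd counted in MSS units, loss times supplied by hand rather than modeled):

```python
# Toy AIMD trace (not a full TCP model): +1 MSS per RTT,
# halve cwnd on loss.
MSS = 1  # count cwnd in MSS units for clarity

def aimd_trace(rtts, loss_at):
    """Return cwnd after each RTT; losses occur at the RTTs in loss_at."""
    cwnd, trace = 1, []
    for t in range(rtts):
        if t in loss_at:
            cwnd = max(cwnd // 2, 1)   # multiplicative decrease
        else:
            cwnd += MSS                # additive increase
        trace.append(cwnd)
    return trace

print(aimd_trace(8, {5}))  # sawtooth: ramps up, halves at RTT 5
```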
100. TCP Congestion Control: details
[figure: sender sequence number space: last byte ACKed | sent, not-yet ACKed (“in-flight”) | last byte sent; cwnd covers the in-flight bytes]
sender limits transmission:
LastByteSent - LastByteAcked ≤ cwnd
cwnd is dynamic, function of perceived network congestion
TCP sending rate:
roughly: send cwnd bytes, wait RTT for ACKs, then send more bytes
rate ≈ cwnd / RTT bytes/sec
Transport Layer 3-100
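A back-of-envelope instance of the rate ≈ cwnd / RTT approximation (the numbers are illustrative, not from the slides):

```python
# Approximate TCP sending rate: roughly cwnd bytes every RTT.

def sending_rate(cwnd_bytes, rtt_sec):
    """Approximate sending rate in bytes/sec: cwnd bytes per RTT."""
    return cwnd_bytes / rtt_sec

# 10 segments of 1500 bytes, 100 ms RTT:
# ~150,000 bytes/sec (~1.2 Mbps)
print(sending_rate(10 * 1500, 0.100))
```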
101. TCP Slow Start
when connection begins, increase rate exponentially until first loss event:
initially cwnd = 1 MSS
double cwnd every RTT
done by incrementing cwnd for every ACK received
summary: initial rate is slow but ramps up exponentially fast
[figure: Host A sends one segment, then two, then four, one RTT apart]
Transport Layer 3-101
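The per-ACK increment described above is what yields doubling per RTT: a window of cwnd segments produces cwnd ACKs, and each ACK adds 1 MSS. A minimal sketch (idealized: one ACK per segment, no losses):

```python
# Slow-start sketch: cwnd += 1 MSS per ACK received,
# which doubles cwnd every RTT.

def slow_start_cwnd(rtts):
    """cwnd (in MSS) after the given number of loss-free RTTs."""
    cwnd = 1
    for _ in range(rtts):
        acks = cwnd          # one ACK per segment sent this RTT
        cwnd += acks         # +1 MSS per ACK => cwnd doubles
    return cwnd

print([slow_start_cwnd(r) for r in range(4)])  # [1, 2, 4, 8]
```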
102. TCP: detecting, reacting to loss
loss indicated by timeout:
cwnd set to 1 MSS;
window then grows exponentially (as in slow start) to threshold, then grows linearly
loss indicated by 3 duplicate ACKs: TCP Reno
dup ACKs indicate network capable of delivering some segments
cwnd is cut in half; window then grows linearly
TCP Tahoe always sets cwnd to 1 MSS (timeout or 3 duplicate ACKs)
Transport Layer 3-102
103. TCP: switching from slow start to CA
Q: when should the exponential increase switch to linear?
A: when cwnd gets to 1/2 of its value before timeout.
Implementation:
variable ssthresh
on loss event, ssthresh is set to 1/2 of cwnd just before loss event
Transport Layer 3-103
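The two loss reactions and the ssthresh rule from the last two slides can be condensed into one small function (a simplification: real TCP state machines have more detail, e.g. Reno’s fast recovery):

```python
# Sketch of TCP's reaction to loss, in MSS units.
# ssthresh is set to half of cwnd just before the loss;
# Reno halves cwnd on 3 dup ACKs, Tahoe (or any timeout)
# restarts slow start from 1 MSS.

def on_loss(cwnd, kind, variant="reno"):
    """Return (new_cwnd, ssthresh) after a loss event."""
    ssthresh = max(cwnd // 2, 1)       # half of cwnd before loss
    if kind == "timeout" or variant == "tahoe":
        return 1, ssthresh             # restart slow start
    return ssthresh, ssthresh          # Reno, 3 dup ACKs: halve

print(on_loss(16, "dupacks"))            # (8, 8)
print(on_loss(16, "timeout"))            # (1, 8)
print(on_loss(16, "dupacks", "tahoe"))   # (1, 8)
```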
105. TCP throughput
avg. TCP throughput as function of window size, RTT?
ignore slow start, assume always data to send
W: window size (measured in bytes) where loss occurs
avg. window size (# in-flight bytes) is 3/4 W
avg. throughput is 3/4 W per RTT
avg. TCP throughput = (3/4) W / RTT bytes/sec
[figure: sawtooth window oscillating between W/2 and W]
Transport Layer 3-105
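A worked instance of the formula above (illustrative numbers): the window oscillates between W/2 and W, so the average window is 3W/4 and the average throughput is that much per RTT.

```python
# Average TCP throughput under the sawtooth model:
# window oscillates between W/2 and W, so avg window is 3W/4.

def avg_tcp_throughput(W_bytes, rtt_sec):
    """Average throughput in bytes/sec for loss window W and given RTT."""
    return 0.75 * W_bytes / rtt_sec

# W = 100,000 bytes, RTT = 100 ms: ~750,000 bytes/sec
print(avg_tcp_throughput(100_000, 0.1))
```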
106. TCP Futures: TCP over “long, fat pipes”
example: 1500 byte segments, 100 ms RTT, want 10 Gbps throughput
requires W = 83,333 in-flight segments
throughput in terms of segment loss probability, L [Mathis 1997]:
TCP throughput = (1.22 · MSS) / (RTT · √L)
➜ to achieve 10 Gbps throughput, need a loss rate of L = 2·10^-10 – a very small loss rate!
new versions of TCP needed for high-speed
Transport Layer 3-106
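The slide’s two numbers can be reproduced directly: the required window follows from throughput × RTT, and solving the Mathis formula for L gives the required loss rate.

```python
from math import sqrt

# Reproduce the slide's numbers for TCP over a "long, fat pipe":
# throughput = 1.22 * MSS / (RTT * sqrt(L))  [Mathis 1997]
mss = 1500 * 8        # bits per 1500-byte segment
rtt = 0.100           # seconds
target = 10e9         # 10 Gbps

# Required in-flight window, in segments: throughput * RTT / MSS.
W = target * rtt / mss
# Required loss rate: solve the Mathis formula for L.
L = (1.22 * mss / (rtt * target)) ** 2

print(round(W))   # ~83,333 segments
print(L)          # ~2e-10
```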
107. TCP Fairness
fairness goal: if K TCP sessions share same bottleneck link of bandwidth R, each should have average rate of R/K
[figure: TCP connections 1 and 2 sharing a bottleneck router of capacity R]
Transport Layer 3-107
108. Why is TCP fair?
two competing sessions:
additive increase gives slope of 1, as throughput increases
multiplicative decrease decreases throughput proportionally
[figure: connection 1 throughput vs. connection 2 throughput, each bounded by R; the trajectory alternates between additive increase (slope 1) and halving at loss, converging toward the equal bandwidth share line]
Transport Layer 3-108
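The convergence argument above can be checked with a toy model: two AIMD flows share a link of capacity R, both halve when their sum exceeds R (synchronized loss), and both add 1 otherwise. Additive increase preserves the gap between the flows while each halving shrinks it, so the shares equalize.

```python
# Toy model of two AIMD flows sharing a link of capacity R.
# Synchronized losses: when the sum exceeds R, both flows halve;
# otherwise both add 1. The gap |r1 - r2| shrinks toward zero.

def simulate(r1, r2, R, steps):
    for _ in range(steps):
        if r1 + r2 > R:
            r1, r2 = r1 / 2, r2 / 2   # multiplicative decrease
        else:
            r1, r2 = r1 + 1, r2 + 1   # additive increase
    return r1, r2

r1, r2 = simulate(1.0, 60.0, 100, 500)
print(r1, r2)   # nearly equal shares despite unequal start
```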
109. Fairness (more)
Fairness and UDP
multimedia apps often do not use TCP
do not want rate throttled by congestion control
instead use UDP: send audio/video at constant rate, tolerate packet loss
Fairness, parallel TCP connections
application can open multiple parallel connections between two hosts
web browsers do this
e.g., link of rate R with 9 existing connections:
new app asks for 1 TCP, gets rate R/10
new app asks for 11 TCPs, gets R/2
Transport Layer 3-109
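The share arithmetic in the example above follows from the idealized equal-share-per-connection model: opening n connections alongside K existing ones gives the application n/(K+n) of the link.

```python
# Idealized per-connection fair-share model: each TCP connection on
# a link of rate R gets an equal share, so an app opening `opened`
# connections next to `existing` ones gets R * opened/(existing+opened).

def app_share(R, existing, opened):
    return R * opened / (existing + opened)

R = 1.0
print(app_share(R, 9, 1))    # 0.1: R/10
print(app_share(R, 9, 11))   # 0.55: just over R/2
```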
110. Chapter 3: summary
principles behind transport layer services:
multiplexing, demultiplexing
reliable data transfer
flow control
congestion control
instantiation, implementation in the Internet: UDP, TCP
next:
leaving the network “edge” (application, transport layers)
into the network “core”
Transport Layer 3-110
Editor's Notes
Kurose and Ross forgot to say anything about wrapping the carry and adding it to the low-order bit