This document provides an overview of the User Datagram Protocol (UDP). It discusses UDP's attributes that make it suited for certain applications like streaming media. It describes UDP's packet structure including header fields like source/destination ports and checksum. It also compares UDP to the Transmission Control Protocol (TCP), noting that UDP does not guarantee delivery or ordering while TCP provides reliability. The document provides examples of applications that commonly use UDP like DNS and VoIP.
TCP and UDP are transport layer protocols that package and deliver data between applications. TCP provides reliable, ordered delivery through connection establishment and packet sequencing. UDP provides faster, unreliable datagram delivery without connections. Common applications using TCP include HTTP, FTP, and SMTP. Common UDP applications include DNS, DHCP, and streaming media.
The document discusses the Internet Control Message Protocol (ICMP). ICMP provides error reporting, congestion reporting, and first-hop router redirection. It uses IP to carry its data end-to-end and is considered an integral part of IP. ICMP messages are encapsulated in IP datagrams and are used to report errors in IP datagrams, though some errors may still result in datagrams being dropped without a report. ICMP defines various message types including error messages like destination unreachable and informational messages like echo request and reply.
This document discusses the User Datagram Protocol (UDP) which provides a connectionless mode of communication between applications on hosts in an IP network. It describes the format of UDP packets, how UDP checksums are calculated, and UDP's operation including encapsulation, queuing, and demultiplexing. Examples are provided to illustrate how a UDP control block table and queues are used to handle incoming and outgoing UDP packets. The document also discusses when UDP is an appropriate protocol to use compared to TCP.
The document discusses the Transport layer protocols TCP and UDP. It describes TCP as a connection-oriented protocol that provides reliable, ordered delivery of streams of data through mechanisms like sequencing, acknowledgment, flow control and error checking. UDP is described as a simpler connectionless protocol that provides best-effort delivery without checking for errors or lost packets. The key concepts of ports, sockets, multiplexing and demultiplexing are also covered, as well as the header formats and functions of TCP and UDP.
1) TCP and UDP are the two main Internet protocols for transferring data.
2) TCP is connection-oriented, reliable, and ensures packets are delivered in order. UDP is connectionless and packets may arrive out of order or not at all.
3) TCP is used for applications like web browsing that require reliable data transfer, while UDP is used for real-time applications like streaming video that prioritize speed over reliability.
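The contrast above can be seen directly in code. Below is a minimal sketch of UDP's connectionless exchange using Python's standard socket module on the loopback interface; the port is chosen by the OS and no handshake takes place before data flows:

```python
import socket

# Server: bind a UDP socket; no listen/accept step is needed,
# because UDP has no connection establishment.
server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))          # port 0: let the OS pick a free port
server.settimeout(5)
server_addr = server.getsockname()

# Client: send a datagram immediately, without any handshake.
client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.settimeout(5)
client.sendto(b"ping", server_addr)

# Server receives the datagram and replies to the sender's address.
data, client_addr = server.recvfrom(1024)
server.sendto(b"pong", client_addr)

reply, _ = client.recvfrom(1024)
print(reply)                            # b'pong'

client.close()
server.close()
```

On a real network (rather than loopback) the reply could be lost or arrive out of order, which is exactly the unreliability the summaries above describe.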
This document discusses the Internet Protocol (IP) version 4 and 6. It describes the key tasks of IP including addressing computers and fragmenting packets. IP version 4 uses 32-bit addresses while IP version 6 uses 128-bit addresses and has improvements like larger address space and better security. The document also covers IP address classes, private addressing, subnetting, Classless Inter-Domain Routing (CIDR), and address blocks.
The document compares the OSI model and the TCP/IP model. The OSI model has 7 layers - physical, data link, network, transport, session, presentation and application layers. It defines a generic standard for network communication. The TCP/IP model has 4 layers - network access, internet, transport and application layers. It is based on specific TCP and IP protocols and is used widely for internet working. The key difference is that OSI model is protocol-independent while TCP/IP model supports specific protocols for internet.
This document contains 12 questions and answers about transport layer protocols like UDP and TCP. It discusses topics like the maximum size of UDP and TCP packets, examples of when UDP is preferable to TCP, how port numbers allow processes to be uniquely identified, and why TCP must handle out-of-order data even though IP handles fragmentation and reassembly. The document provides technical details about transport layer protocols in response to questions about their specifications, capabilities, and how they address reliability compared to the underlying IP layer.
This document discusses the Transmission Control Protocol (TCP) which provides reliable, connection-oriented data transmission over the internet. TCP establishes a virtual connection between endpoints, ensuring reliable delivery through mechanisms like positive acknowledgement and retransmission. It uses a sliding window algorithm to guarantee reliable and in-order delivery while enforcing flow control between sender and receiver. Key aspects of TCP include connection establishment and termination, port numbers, segments, headers, and addressing end-to-end issues over heterogeneous networks.
The document provides an overview of the OSI model and TCP/IP networking model. It describes the seven layers of the OSI model from the physical layer to the application layer and their responsibilities in networking. It also discusses the four layers of the TCP/IP model and compares it to the OSI model. Key protocols like TCP, UDP, IP, Ethernet, and HTTP are explained in their respective layers along with functions like encapsulation and data flow between layers. Network analysis tools like Wireshark are also mentioned.
ARP resolves IP addresses to MAC addresses for local network delivery. It broadcasts requests for a MAC address on the local link and receives unicast replies. Proxy ARP allows routers to answer for hosts on remote networks during a subnet transition. RARP and Inverse ARP work in the reverse direction, resolving MAC addresses to IP addresses.
TCP and UDP are transport layer protocols used for data transfer in the OSI model. TCP is connection-oriented, requiring a three-way handshake to establish a connection that maintains data integrity. It guarantees data will reach its destination without duplication but is slower than UDP. UDP is connectionless and used for applications requiring fast transmission like video calls, but does not ensure packet delivery and order. Both protocols add headers to packets with TCP focused on reliability and UDP on speed.
In this presentation, we discuss in detail the TCP/IP framework, the backbone of every e-business.
The document discusses the differences between packets and frames, and provides details on the transport layer. It explains that the transport layer is responsible for process-to-process delivery and uses port numbers for addressing. Connection-oriented protocols like TCP use three-way handshaking for connection establishment and termination, and implement flow and error control using mechanisms like sliding windows. Connectionless protocols like UDP are simpler but unreliable, treating each packet independently.
This document provides an overview of the IPv6 header based on Chapter 4 of the book "Understanding IPv6, Third Edition". It describes the components of an IPv6 packet including the IPv6 header, extension headers, and upper-layer protocol data unit. The IPv6 header is a fixed size of 40 bytes and contains fields for version, traffic class, flow label, payload length, next header, hop limit, source address, and destination address. Extension headers can be added after the IPv6 header and are used to expand IPv6's capabilities. The IPv6 header was designed to be more efficient than IPv4 by reducing the number of required fields and moving seldom-used fields to extension headers.
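The fixed 40-byte layout described above can be sketched with Python's struct module. The field values and the 2001:db8:: documentation addresses below are illustrative, not taken from the source document:

```python
import socket
import struct

# Pack a minimal IPv6 header (illustrative field values).
version, traffic_class, flow_label = 6, 0, 0
payload_length = 0
next_header = 59          # "No Next Header" per RFC 8200
hop_limit = 64
src = socket.inet_pton(socket.AF_INET6, "2001:db8::1")
dst = socket.inet_pton(socket.AF_INET6, "2001:db8::2")

# First 32 bits: 4-bit version, 8-bit traffic class, 20-bit flow label.
first_word = (version << 28) | (traffic_class << 20) | flow_label
header = struct.pack("!IHBB", first_word, payload_length,
                     next_header, hop_limit) + src + dst

print(len(header))        # 40, the fixed IPv6 header size
```

The two 128-bit addresses account for 32 of the 40 bytes, which is why moving optional machinery into extension headers keeps the base header so compact.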
Overview of IP routing protocols, packet forwarding and proxy ARP.
The principle of IP routing has proved very flexible and scalable as the Internet and TCP/IP-based networks have grown.
IP routing refers to protocols, such as RIP, OSPF, and BGP, that exchange reachability information for ranges of IP addresses.
In contrast to IP routing, IP packet forwarding collectively means all functions performed when an IP router receives a packet and forwards it over the output interface indicated by an IP route in the routing table.
When an IP router performs a route lookup, it calculates a route decision based on different properties like prefix (mask) length, route precedence and metrics.
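The core of that decision is longest-prefix match: among all routes covering the destination, the most specific (longest mask) wins. A sketch in Python using the standard ipaddress module; the table entries and next-hop names are invented:

```python
import ipaddress

# A toy routing table: (prefix, next hop). Entries are illustrative.
routes = [
    (ipaddress.ip_network("0.0.0.0/0"),   "gw-default"),
    (ipaddress.ip_network("10.0.0.0/8"),  "gw-corp"),
    (ipaddress.ip_network("10.1.0.0/16"), "gw-branch"),
]

def lookup(dest: str) -> str:
    """Return the next hop of the longest (most specific) matching prefix."""
    addr = ipaddress.ip_address(dest)
    matches = [(net, hop) for net, hop in routes if addr in net]
    net, hop = max(matches, key=lambda m: m[0].prefixlen)
    return hop

print(lookup("10.1.2.3"))   # gw-branch (the /16 beats the /8 and the /0)
print(lookup("10.9.9.9"))   # gw-corp
print(lookup("8.8.8.8"))    # gw-default
```

Real routers also break ties using route precedence (administrative distance) and metrics, which this sketch omits.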
Routing protocols for exchanging route information can be coarsely classified as distance vector and link state protocols. Distance vector protocols like RIP (Routing Information Protocol) exchange information about the path cost to specific targets (IP address ranges). Routers that talk distance vector protocols receive reachability information about all sub-networks indirectly from neighboring routers.
In contrast to distance vector protocols, link state protocols like OSPF disseminate information about the link state of each router link in a network to all routers in the network. Thus link state protocols tend to converge faster to topology changes since all routers have firsthand information of the topology of the network.
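The distance-vector update described above can be sketched as a toy Python function in the spirit of RIP: a router merges a neighbor's advertised costs with the cost of the link to that neighbor, keeping a route only if it improves on what it already knows. Router and network names here are invented:

```python
def dv_update(my_table, neighbor, link_cost, neighbor_table):
    """Return an updated {destination: (cost, next_hop)} table."""
    updated = dict(my_table)
    for dest, cost in neighbor_table.items():
        candidate = link_cost + cost
        if dest not in updated or candidate < updated[dest][0]:
            updated[dest] = (candidate, neighbor)
    return updated

table = {"netA": (1, "direct")}
# Neighbor R2 (reachable at link cost 1) advertises netB at cost 2
# and netA at cost 5.
table = dv_update(table, "R2", 1, {"netB": 2, "netA": 5})

print(table["netB"])   # (3, 'R2'): learned indirectly via R2
print(table["netA"])   # (1, 'direct'): the existing cheaper route wins
```

The "indirect" nature of the learned netB route is exactly why distance-vector protocols converge more slowly than link-state protocols, which flood firsthand topology information instead.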
Proxy ARP can be a convenient way to add subnets without adding routes to routers and hosts. A proxy-ARP-enabled router answers ARP requests on behalf of the targeted hosts, mimicking local network access for the requesting host.
UDP is a connectionless transport layer protocol that runs over IP. It provides an unreliable best-effort service where packets may be lost, delivered out of order, or duplicated. UDP has a small 8-byte header and is lightweight, with no connection establishment or guarantee of delivery. This makes it fast and low overhead, suitable for real-time applications like streaming media where resending lost packets would cause delay.
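The 8-byte header mentioned above can be packed and unpacked with Python's struct module. The port numbers and payload below are illustrative; a checksum of zero means "no checksum" for UDP over IPv4:

```python
import struct

# The UDP header has four 16-bit fields: source port, destination port,
# length (header + payload), and checksum.
payload = b"hello"
src_port, dst_port = 53000, 53
length = 8 + len(payload)
checksum = 0                      # 0 = checksum not computed (IPv4 only)

header = struct.pack("!HHHH", src_port, dst_port, length, checksum)
print(len(header))                # 8

sp, dp, ln, ck = struct.unpack("!HHHH", header)
print((sp, dp, ln))               # (53000, 53, 13)
```

Compare this with TCP's 20-byte minimum header; the difference is most of UDP's "low overhead" advantage.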
Packet radio protocols allow multiple subscribers to access a shared channel for transmitting data packets. They use contention-based random access techniques like ALOHA. Pure ALOHA protocol has low efficiency due to partial packet collisions. Slotted ALOHA synchronizes transmissions to time slots to prevent partial collisions, improving efficiency. Performance is evaluated using metrics like throughput, which is highest at optimal channel load and drops off above and below this point.
The document discusses the key features and mechanisms of the Transmission Control Protocol (TCP). It begins with an introduction to TCP's main goals of reliable, in-order delivery of data streams between endpoints. It then covers TCP's connection establishment and termination processes, flow and error control techniques using acknowledgments and retransmissions, and congestion control methods like slow start, congestion avoidance, and detection.
This document provides an overview of the Transmission Control Protocol (TCP). It discusses TCP services like reliable data delivery and connection-oriented communication. The document explains TCP features such as flow control, error control, and congestion control. It describes TCP segments, the three-way handshake for connection establishment, and the TCP state transition diagram. Examples are provided to illustrate TCP windows, acknowledgments, retransmissions, and timers.
TCP/IP is a set of protocols that allows computers to communicate over a network. It includes IP for routing packets between hosts and TCP and UDP for transporting data between processes. TCP provides reliable, connection-oriented delivery while UDP provides simpler, connectionless delivery. The protocols originated in the 1970s from research funded by the US military and became the standard for internet communication, allowing different computer platforms to interconnect globally through a common protocol.
IGMP (Internet Group Management Protocol) allows multicast routers to track group memberships across multicast networks. It has three message types - query, membership report, and leave report. Upon receiving a query, hosts send membership reports to the router to join or leave groups. The router uses these reports to maintain a list of members for each multicast group on that network segment. IGMP messages are encapsulated in IP datagrams and Ethernet frames for transmission.
The document discusses the Internet Control Message Protocol (ICMP) which is used to report errors in IP packets and for network debugging tools. ICMP provides error messages, query messages, and is used by tools like ping and traceroute. It describes the different types of ICMP messages including error messages like destination unreachable and source quench, query messages like echo request/reply, and deprecated messages. It also explains how ICMP messages are encapsulated in IP packets and how the checksum is calculated for ICMP headers.
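The checksum calculation mentioned above is the standard Internet checksum of RFC 1071: a one's-complement sum of 16-bit words, complemented. A sketch in Python, applied to an illustrative ICMP echo request header:

```python
import struct

def internet_checksum(data: bytes) -> int:
    """RFC 1071 Internet checksum: one's-complement sum of 16-bit words."""
    if len(data) % 2:             # pad odd-length data with a zero byte
        data += b"\x00"
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)   # fold carry back in
    return (~total) & 0xFFFF

# An ICMP echo request header: type 8, code 0, checksum 0 (while
# computing), identifier 1, sequence 1.
header = struct.pack("!BBHHH", 8, 0, 0, 1, 1)
ck = internet_checksum(header)

# Verify: re-checksumming the header with the checksum filled in
# yields 0, the standard receiver-side validity check.
checked = struct.pack("!BBHHH", 8, 0, ck, 1, 1)
print(internet_checksum(checked))   # 0
```

The same algorithm is used for the IPv4 header checksum and (over a pseudo-header) for TCP and UDP.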
INTERNET PROTOCOL (IP)
- Datagram Format
- Fragmentation
- Options
- Security of IPv4 Datagrams
ICMPv4
- Messages
- Debugging Tools
- ICMP Checksum
MOBILE IP
- Addressing
- Agents
- Three Phases
- Inefficiency in Mobile IP
An IP address is divided into a network and host part, with a class A address using the first 8 bits for the network and the last 24 bits for the host. A subnet mask, also consisting of 32 bits, uses 1s to represent the network part and 0s to represent the host part, allowing a computer to determine the network and host parts of an IP address. For example, an IP address of 10.0.0.1 with a default class A subnet mask of 255.0.0.0 would mean any IP address starting with 10 would be in the same network, ranging from 10.0.0.0 to 10.255.255.255.
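The example above can be checked with Python's standard ipaddress module:

```python
import ipaddress

# Split 10.0.0.1 with the default class A mask 255.0.0.0 (/8) into
# its network and host parts, as described above.
iface = ipaddress.ip_interface("10.0.0.1/255.0.0.0")
net = iface.network

print(net)                      # 10.0.0.0/8
print(net.network_address)      # 10.0.0.0
print(net.broadcast_address)    # 10.255.255.255

# Any address starting with 10 falls inside the same network.
print(ipaddress.ip_address("10.200.3.4") in net)   # True
```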
The document discusses the TCP/IP protocol stack and the headers used at each layer.
It describes how TCP divides application data into packets and delivers them to end hosts, while IP handles routing packets through networks. The TCP header includes fields like source/destination port numbers, sequence numbers, flags, and checksums. The IP header treats the TCP header plus data as a datagram and adds its own header fields like version, length, identification, flags, time to live, and source/destination addresses.
An Authentication Header can also be added for security purposes to authenticate senders and protect against modification of packets.
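The layering described above, application data wrapped in a TCP header, with the resulting segment carried as the payload of an IP datagram, can be sketched with placeholder header values. This is not a working stack, just the nesting and the lengths:

```python
import struct

app_data = b"GET / HTTP/1.0\r\n\r\n"

# Simplified 20-byte TCP header: ports, sequence and ack numbers,
# data offset + flags, window, checksum, urgent pointer.
tcp_header = struct.pack("!HHIIHHHH",
                         49152, 80,      # source and destination ports
                         0, 0,           # sequence and ack numbers
                         5 << 12,        # data offset (5 words), no flags
                         65535, 0, 0)    # window, checksum, urgent ptr
segment = tcp_header + app_data

# Simplified 20-byte IPv4 header; total length covers header + segment.
total_len = 20 + len(segment)
ip_header = struct.pack("!BBHHHBBH4s4s",
                        0x45, 0,         # version/IHL, TOS
                        total_len, 0, 0, # total length, ident, flags/frag
                        64, 6, 0,        # TTL, protocol 6 = TCP, checksum
                        bytes([192, 0, 2, 1]),   # source (doc range)
                        bytes([192, 0, 2, 2]))   # destination
datagram = ip_header + segment

print(len(tcp_header), len(ip_header))   # 20 20
print(len(datagram) == total_len)        # True
```

Checksums are left at zero here; in a real packet both headers carry valid checksums, and an Authentication Header, if used, would be inserted between the IP header and the TCP segment.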
IP multicasting allows for efficient one-to-many and many-to-many communication on the internet. It uses multicast groups and protocols like IGMP for group management and PIM for multicast routing. PIM supports both source-based trees using flood-and-prune and core-based trees with a rendezvous point to deliver multicast data.
Overview of UDP protocol.
UDP (User Datagram Protocol) is a simple extension of the Internet Protocol's services. It provides a simple packet transport service without any quality-of-service functions.
Unlike TCP, UDP is connectionless and message-based. Application PDUs (application packets) sent over a UDP socket are delivered to the receiving application as complete messages; UDP preserves message boundaries rather than presenting a byte stream, although IP may still fragment large datagrams in transit.
UDP is mostly used by applications with simple request-response communication patterns like DNS, DHCP, RADIUS, RIP or RPC.
Since UDP does not provide any error recovery, such as retransmission of lost packets, application protocols have to handle these situations themselves.
User Datagram Protocol (UDP) is a connectionless protocol that provides datagram socket services. It is simpler than TCP with less overhead but does not guarantee delivery or order of packets. The Java API provides the DatagramSocket and DatagramPacket classes to send and receive data packets. A MulticastSocket subclass of DatagramSocket allows sending data to multiple recipients by joining them to a multicast group.
TCP guarantees reliable delivery of data packets in the correct order, while UDP does not provide these guarantees. TCP is commonly used for applications that require reliable data transfer like HTTP and FTP. UDP is used for applications that prioritize speed over reliability, such as media streaming, VoIP, and online games. While TCP ensures error-free transmission, it introduces more overhead and latency than UDP. The choice between TCP and UDP depends on an application's requirements for reliability versus speed.
TCP is a connection-oriented, reliable transport protocol that provides stream delivery between two endpoints. It uses sequence numbers, acknowledgment numbers, and features like flow control, error control, and congestion control to deliver data reliably. A TCP connection involves three phases: connection establishment using a three-way handshake, reliable data transfer with acknowledgments, and connection termination using a three- or four-way handshake with an optional half-close. TCP works well on both low- and high-speed networks.
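The sequence-number arithmetic of the three-way handshake can be sketched as a toy exchange. The initial sequence numbers (ISNs) below are invented; a real stack chooses them unpredictably:

```python
client_isn, server_isn = 1000, 5000

# 1) SYN: the client sends its initial sequence number.
syn = {"flags": {"SYN"}, "seq": client_isn}

# 2) SYN-ACK: the server acknowledges client_isn + 1 (the SYN itself
#    consumes one sequence number) and sends its own ISN.
syn_ack = {"flags": {"SYN", "ACK"},
           "seq": server_isn,
           "ack": syn["seq"] + 1}

# 3) ACK: the client acknowledges server_isn + 1; the connection is
#    now established on both sides.
ack = {"flags": {"ACK"},
       "seq": syn["seq"] + 1,
       "ack": syn_ack["seq"] + 1}

print(syn_ack["ack"])   # 1001
print(ack["ack"])       # 5001
```

The same "acknowledge the next byte expected" convention then governs all data transfer on the established connection.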
The document discusses the Domain Name System (DNS) which maps domain names to IP addresses. It describes how DNS works hierarchically with a root server at the top level, below which are generic, country-specific and other domain levels. DNS servers store and distribute this mapping information across multiple computers to avoid a single point of failure. Primary DNS servers store and update zone files mapping domain names to IP addresses, while secondary servers transfer this information from primary servers.
The document discusses Telnet, a protocol that allows users to access remote computers. It provides an overview of Telnet's history and requirements, steps for connecting to a remote computer using Telnet, tips for its use, current applications, advantages of accessing systems remotely, and disadvantages of the text-based interface. The presentation concludes with thanks and references.
This document provides an overview of the Domain Name System (DNS). It discusses what DNS is, why names are used instead of IP addresses, and the history and development of DNS. It describes the hierarchical name space and domain system. It also explains different DNS record types like A, CNAME, MX, and NS records. The document discusses recursive and iterative queries, legal users of domains, and security issues with the traditional DNS system. It provides an overview of how DNSSEC aims to address some of these security issues through digital signing of DNS records.
Overview of the Domain Name System (DNS).
In the early days of the Internet, hosts had a fixed IP address.
Reaching a host required to know its numeric IP address.
With the growing number of hosts this scheme became quickly awkward and difficult to use.
DNS was introduced to give hosts human readable names that would be translated into a numeric IP addresses on the fly when a requesting host tried to reach another host.
To facilitate a distributed administration of the domain names, a hierarchic scheme was introduced where responsibility to manage domain names is delegated to organizations which can further delegate management of sub-domains.
Due to its importance in the operation of the Internet, domain name servers are usually operated redundantly. The databases of both servers are periodically synchronized.
Transport layer protocols provide services like reliable data transfer and connection establishment between applications on networked devices. They address this need through protocols like TCP and UDP. TCP provides reliable, ordered data streams using mechanisms like three-way handshake, sequence numbers, acknowledgments, retransmissions, flow control via sliding windows, and connection termination handshaking. UDP provides simple datagram transmissions without reliability or flow control.
The document is a presentation on DNS (Domain Name System) given by Mauood Hamidi for his dissertation. It covers definitions of DNS, different types of DNS servers, tools used for DNS queries, DNS records, how DNS works to resolve domain names to IP addresses, and components of the DNS system like zones, name servers, and security considerations. It aims to provide an overview of the key concepts and functioning of DNS.
The document discusses how the Domain Name System (DNS) works by translating domain names to IP addresses. It involves the following steps:
1) A user enters a domain name in their browser. Their computer first checks its local DNS cache for the IP address.
2) If not found locally, the computer queries a recursive DNS server, typically provided by the user's Internet Service Provider.
3) If the recursive DNS server doesn't have the IP address, it queries the root name servers which direct the query to the authoritative name servers for the top-level domain (e.g. .com, .org).
4) The authoritative name servers for the specific domain (e.g. ut
This document discusses different types of errors that can occur during data transmission and various error detection and correction techniques. It describes single-bit errors where one bit is changed and burst errors where multiple consecutive bits are changed. It then explains techniques like two-dimensional parity, checksums, and cyclic redundancy checks which add redundant bits to detect errors by checking for discrepancies between transmitted and received data. The document provides examples of how internet checksums and cyclic redundancy checks work to detect errors.
The document discusses several TCP/IP protocols used for communication over the internet including SMTP, HTTP, FTP, TFTP, NNTP, SNMP, POP, IMAP, and Telnet. It describes the basic functions and workflows of each protocol.
This document summarizes a presentation given on criticality benchmarks for various annular core configurations of Japan's High Temperature Engineering Test Reactor (HTTR). It describes the objectives of benchmarking the HTTR, including developing models to support validation of very high temperature reactor designs. It provides details on the HTTR design specifications, fuel loading patterns for different core configurations, and results of uncertainty and sensitivity analyses. Calculated eigenvalues for different core designs were found to be within 1% of experimental values.
The document provides an overview of the Stream Control Transmission Protocol (SCTP). SCTP is a connection-oriented transport layer protocol that offers reliable data transfer over IP networks. It supports features like multihoming for network fault tolerance, multi-streaming to minimize delay, and congestion control. The document discusses SCTP's architecture, features, security mechanisms, and error handling. It is intended to help application developers write programs using SCTP socket APIs.
This document summarizes a benchmark analysis of start-up physics tests performed at the High Temperature Engineering Test Reactor (HTTR). The analysis evaluated cold critical configurations, excess reactivity measurements, shutdown margins, axial reaction rates, and isothermal temperature coefficients. Some challenges included limitations in available public data and conflicting reported values. Overall, there was generally good agreement between benchmark measurements and calculations, though calculations were approximately 2% higher, likely due to uncertainties in graphite composition and cross sections. Completed benchmarks from this analysis will be published in the International Handbook of Evaluated Reactor Physics Benchmark Experiments.
The transport layer provides efficient, reliable, and cost-effective process-to-process delivery by making use of network layer services. The transport layer works through transport entities to achieve its goal of reliable delivery between application processes. It provides an interface for applications to access its services.
The document discusses the TCP/IP protocol stack and the headers used at each layer.
It describes that TCP works to divide files into packets and send them to workstations, while IP handles routing packets through networks. The TCP header includes fields like source/destination port numbers, sequence numbers, flags, and checksums. The IP header treats the TCP header+data as a datagram and adds its own header fields like version, length, identification, flags, time to live, and source/destination addresses.
An Authentication Header can also be added for security purposes to authenticate senders and protect against modification of packets.
UDP and TCP Protocol & Encrytion and its algorithmAyesha Tahir
The document discusses the TCP/IP protocol suite and the UDP and TCP transport layer protocols. UDP is a connectionless, unreliable protocol that provides basic process-to-process communication with minimal overhead. TCP is a connection-oriented, reliable protocol that establishes virtual connections between processes, provides reliable in-order data delivery through flow and error control mechanisms, and allows processes to communicate via data streams. Both protocols use port numbers to identify communicating processes and encapsulate data in IP datagrams for transmission.
TCP & UDP ( Transmission Control Protocol and User Datagram Protocol)Kruti Niranjan
This document provides information about the Transport Layer protocols TCP and UDP. It describes:
1) TCP is a connection-oriented protocol that provides reliable, in-order delivery of data through features like flow control, error control, and congestion control. UDP is a connectionless protocol that does not guarantee delivery or order of packets.
2) The TCP header contains fields for source/destination ports, sequence numbers, acknowledgement numbers, flags, window size, checksum, and options. The UDP header contains fields for source/destination ports, length, and checksum.
3) The main differences between TCP and UDP are that TCP is connection-oriented, provides error control and flow control, and supports full duplex communication
This document provides an overview of the TCP/IP model created by the Department of Defense (DoD) and compares it to the OSI reference model. The DoD model consists of four layers - Process/Application, Host-to-Host, Internet, and Network Access - which correspond to a condensed version of the seven-layer OSI model. The document describes the functions of each layer and some of the key protocols that operate at each layer, such as TCP, IP, ARP, and Ethernet. It also covers topics like IP addressing, private vs public addresses, broadcast vs unicast traffic, and network access technologies.
1) TCP provides reliable data transmission over unreliable networks like the Internet by establishing connections between endpoints, sequencing packets, detecting and retransmitting lost packets.
2) TCP connections are established through a 3-way handshake process where both sides negotiate sequence numbers to synchronize packet transmission.
3) TCP connections can be closed through a 4-step process where each side sends a FIN packet to gracefully close the connection in both directions.
The document discusses transport layer protocols and their functions. Transport layer protocols like TCP and UDP provide services to applications to allow communication over an internetwork. They are responsible for establishing and maintaining connections between services on different machines and act as a bridge between the needs of applications and the underlying network layer protocols. Transport layer protocols are tightly tied to and designed to work with the specific network layer protocol below them.
The document discusses transport layer protocols TCP and UDP. It provides an overview of process-to-process communication using transport layer protocols. It describes the roles, services, requirements, addressing, encapsulation, multiplexing, and error control functions of the transport layer. It specifically examines TCP and UDP, comparing their connection-oriented and connectionless services, typical applications, and segment/datagram formats.
This document provides an overview of the transport layer and transport layer protocols. It discusses the functions of the transport layer including process-to-process communication using port numbers, multiplexing and demultiplexing, and reliable data transfer. It describes two main transport layer protocols: UDP, which provides connectionless and unreliable data transfer, and TCP, which provides connection-oriented and reliable data transfer. The document outlines key aspects of UDP and TCP including packet formats, connection establishment processes, and services provided.
PPT Slides explains about OSI layer, Internet Protocol(IP), Transmission Control Protocol (TCP), User Datagram Protocol (UDP) & Internet Control Message Protocol(ICMP). It focuses on Protocol Headers and the interpretation of various header fields.
PPT describes about how to detect malicious datagrams, packet filtering systems behaviors & anomalies causing due to fragmentation.
IP datagrams are forwarded across the internet through a process of encapsulation and forwarding. Routers along the path encapsulate each IP datagram within a link layer frame and forward it based on the destination address. If a datagram is larger than the maximum transmission unit of the outgoing link, routers fragment it into smaller pieces that are reassembled by the destination host. Forwarding tables allow routers to determine the next hop for each datagram using longest prefix matching.
The document provides an overview of Packet over SONET/SDH (PoS) and related technologies. It discusses the OSI model and various internet protocols like TCP, UDP, IP, and how they relate to PoS. PoS allows efficient transport of IP traffic over SONET/SDH networks. It offers benefits like utilizing existing infrastructure while efficiently transporting various data, voice, and video traffic with less overhead than alternative protocols. The document also covers applications and measurements of PoS performance and connectivity.
This document discusses the TCP/IP and UDP protocols. It begins with an introduction comparing the TCP/IP model to the OSI model. The TCP/IP model has four layers compared to seven in the OSI model. It then describes the two main host-to-host layer protocols in TCP/IP - TCP and UDP. TCP is connection-oriented and provides reliable, ordered delivery. It uses segments with a header containing fields like sequence numbers. UDP is connectionless and provides fast but unreliable delivery. It uses simpler segments with fewer header fields. The document concludes by explaining the end-to-end delivery process for packets using these protocols as they are transmitted between hosts via routers.
TCP/IP is a set of communication protocols that enable data transmission across networks and between devices. It involves two main protocols: TCP and IP. TCP establishes reliable connections and ensures reliable delivery of data packets. IP handles addressing, routing packets between networks, and fragmentation/reassembly of packets. Key features of TCP/IP include logical addressing, routability, name resolution, multiplexing, and interoperability. TCP/IP operates on four layers - network interface, internet, transport, and application - with each layer building on the services of the layer below.
UDP is a connectionless transport protocol that does not guarantee packet delivery or order. It is faster than TCP but does not ensure reliability. UDP packets have a header containing source and destination port numbers as well as length fields. The checksum field allows detecting errors but packets are not retransmitted if errors occur. UDP is suitable for real-time applications where speed is critical and packet loss can be tolerated.
IRJET- Assessment of Network Protocol Packet Analysis in IPV4 and IPV6 on Loc...IRJET Journal
This document discusses using the TCPDUMP command to analyze network protocol packets for IPv4 and IPv6 on a local area network. TCPDUMP is used to capture network packets and display information like timestamps, source/destination IP addresses, and source/destination MAC addresses. Network administrators can use packet analysis to monitor network activity and traffic, troubleshoot problems, and improve network performance and efficiency. The methodology section describes how TCPDUMP can be used to analyze IPv4 and IPv6 packets and perform tasks like protocol analysis, identifying top network users, analyzing network activity by time or port number, and reconstructing communication between devices.
The document discusses IPv4 and IPv6 addressing and protocols. It provides:
1) IPv4 uses 32-bit addresses represented in dotted decimal notation, consisting of a network and node identifier. IPv6 uses 128-bit addresses to allow for more networks and devices.
2) IPv4 is a connectionless protocol that does not guarantee delivery, while IPv6 includes improvements like larger addresses, better header format, new options, and more security.
3) Transition technologies like dual stack, NAT-PT, 6to4, and 4to6 allow migration from IPv4 to IPv6 networks.
The Internet Protocol (IP) is the fundamental protocol that defines how data is sent between computers on the Internet. IP addresses uniquely identify each computer and data is sent in packets that contain the source and destination addresses. Packets can take different routes and arrive out of order, with TCP ensuring proper ordering. IP is connectionless and sends each packet independently. The most common versions are IPv4 and the newer IPv6. The IP datagram structure includes a header with fields like version, length, checksum, and source/destination addresses, followed by the data. Large data can be fragmented into multiple packets for transmission.
This document describes a custom network protocol designed to improve throughput performance compared to traditional TCP/IP protocols. The custom protocol uses a simplified 8-byte header containing only essential fields like source/destination addresses and port numbers, and sequence number. Tests of the custom protocol transferring a 10MB file between nodes achieved throughputs up to 902kbps, significantly higher than when using smaller packet sizes. By removing unnecessary TCP/IP header fields and processing, the custom protocol reduces overhead and improves throughput.
This document provides an agenda and overview of topics related to the transport layer and networking essentials. The agenda includes discussions of the transport layer, UDP overview, TCP communication process, the socket API, and tools and utilities. Specific topics that will be covered include the role and functions of the transport layer, UDP features and headers, TCP reliability mechanisms like connection establishment and termination, sequence numbers and acknowledgments, window sliding, and data loss/retransmission. The document also provides brief overviews and usage examples for common networking tools like ifconfig, nmcli, route, ping, traceroute, netstat, dig, ncat, nmap, tcpdump, and wireshark.
2. Report submitted to:
Department of Computer Science, GURU RAMDAS KHALSA
INSTITUTE OF SCIENCE & TECHNOLOGY
JABALPUR (M.P.)
SUBMITTED BY:-
ISHTDEEP SINGH HORA
0202CS131006
SESSION
2014-2015
3. Acknowledgement
Apart from my own efforts, the success of any seminar report depends largely on the
encouragement and guidance of many others. I take this opportunity to express my gratitude to
the people who have been instrumental in the successful completion of this report.
I would like to show my greatest appreciation to Ms. Shaziya Tabassum. I cannot thank her
enough for her tremendous support and help. I feel motivated and encouraged every time I
attend her meetings. Without her encouragement and guidance this report would not have
materialized.
The guidance and support received from all the members who contributed, and who are still
contributing, to this report was vital for its success. I am grateful for their constant
support and help.
5. User Datagram Protocol
The User Datagram Protocol (UDP) is one of the core members of the Internet protocol suite.
The protocol was designed by David P. Reed in 1980 and formally defined in RFC 768.
UDP uses a simple connectionless transmission model with a minimum of protocol mechanism.
It has no handshaking dialogues, and thus exposes any unreliability of the underlying network
protocol to the user's program. There is no guarantee of delivery, ordering, or duplicate
protection. UDP provides checksums for data integrity, and port numbers for addressing
different functions at the source and destination of the datagram.
With UDP, computer applications can send messages, in this case referred to as datagrams, to
other hosts on an Internet Protocol (IP) network without prior communications to set up special
transmission channels or data paths. UDP is suitable for purposes where error checking and
correction is either not necessary or is performed in the application, avoiding the overhead of
such processing at the network interface level. Time-sensitive applications often use UDP
because dropping packets is preferable to waiting for delayed packets, which may not be an
option in a real-time system. If error correction facilities are needed at the network interface
level, an application may use the Transmission Control Protocol (TCP) or Stream Control
Transmission Protocol (SCTP) which are designed for this purpose.
Attributes
A number of UDP's attributes make it especially suited for certain applications.
• It is transaction-oriented, suitable for simple query-response protocols such as the
Domain Name System or the Network Time Protocol.
• It provides datagrams, suitable for modeling other protocols such as in IP tunneling or
Remote Procedure Call and the Network File System.
• It is simple, suitable for bootstrapping or other purposes without a full protocol stack,
such as the DHCP and Trivial File Transfer Protocol.
• It is stateless, suitable for very large numbers of clients, such as in streaming media
applications, for example IPTV.
• The lack of retransmission delays makes it suitable for real-time applications such as
Voice over IP, online games, and many protocols built on top of the Real Time
Streaming Protocol.
• It works well in unidirectional communication, suitable for broadcast information such as
in many kinds of service discovery, and for shared information such as broadcast time or
the Routing Information Protocol.
Service ports
Applications use datagram sockets to establish host-to-host communications. An application
binds a socket to its endpoint of data transmission, which is a combination of an IP address and
a service port. A port is a software structure identified by the port number, a 16-bit integer
value, allowing for port numbers between 0 and 65535. Port 0 is reserved, but is a permissible
source port value if the sending process does not expect messages in response.
The Internet Assigned Numbers Authority (IANA) has divided port numbers into three ranges.[2]
Port numbers 0 through 1023 are used for common, well-known services. On Unix-like
operating systems, using one of these ports requires superuser permission. Port
numbers 1024 through 49151 are the registered ports used for IANA-registered services. Ports
49152 through 65535 are dynamic ports that are not officially designated for any specific
service, and may be used for any purpose. They also are used as ephemeral ports, from which
software running on the host may randomly choose a port in order to define itself. In effect, they
are used as temporary ports primarily by clients when communicating with servers.
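The datagram-socket and ephemeral-port behavior described above can be sketched in Python. The loopback address and port 0 here are illustrative choices, not requirements; binding to port 0 asks the operating system to assign an ephemeral port.

```python
import socket

# Create a datagram (UDP) socket and bind it to a local endpoint.
# Port 0 tells the OS to pick an ephemeral port for us.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("127.0.0.1", 0))
ip, port = sock.getsockname()   # the endpoint: (IP address, assigned port)
sock.close()
```

After the bind, `port` holds a 16-bit value typically drawn from the host's ephemeral range.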
Packet structure
UDP is a minimal message-oriented Transport Layer protocol that is documented in IETF RFC
768.
UDP provides no guarantees to the upper layer protocol for message delivery and the UDP layer
retains no state of UDP messages once sent. For this reason, UDP sometimes is referred to as
Unreliable Datagram Protocol.
UDP provides application multiplexing (via port numbers) and integrity verification (via
checksum) of the header and payload.[4] If transmission reliability is desired, it must be
implemented in the user's application.
The UDP header consists of 4 fields, each of which is 2 bytes (16 bits).[1] The use of the fields
"Checksum" and "Source port" is optional in IPv4. In IPv6 only the source port is optional (see
below).
Source port number
This field identifies the sender's port when meaningful and should be assumed to be the
port to reply to if needed. If not used, then it should be zero. If the source host is the
client, the port number is likely to be an ephemeral port number. If the source host is the
server, the port number is likely to be a well-known port number.
Destination port number
This field identifies the receiver's port and is required. Similar to source port number, if
the client is the destination host then the port number will likely be an ephemeral port
number and if the destination host is the server then the port number will likely be a
well-known port number.
Length
A field that specifies the length in bytes of the UDP header and UDP data. The minimum
length is 8 bytes because that is the length of the header. The field size sets a theoretical
limit of 65,535 bytes (8 byte header + 65,527 bytes of data) for a UDP datagram. The
practical limit for the data length which is imposed by the underlying IPv4 protocol is
65,507 bytes (65,535 − 8 byte UDP header − 20 byte IP header).
In IPv6 Jumbograms it is possible to have UDP packets of size greater than 65,535 bytes.[5]
RFC 2675 specifies that the length field is set to zero if the length of the UDP header plus
UDP data is greater than 65,535.
Checksum
The checksum field is used for error-checking of the header and data. If no checksum is
generated by the transmitter, the field uses the value all-zeros. This field is mandatory in
IPv6.
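The four 16-bit fields described above can be packed into the 8-byte UDP header with a few lines of Python; the function name and the sample port numbers are illustrative, not part of any standard API.

```python
import struct

def build_udp_header(src_port: int, dst_port: int, payload_len: int,
                     checksum: int = 0) -> bytes:
    """Pack the four 16-bit UDP header fields in network byte order.
    Length covers the 8-byte header plus the data; a checksum of 0
    means 'not computed', which is permitted in IPv4 only."""
    return struct.pack("!HHHH", src_port, dst_port, 8 + payload_len, checksum)

# e.g. an ephemeral source port sending a 4-byte payload to DNS (port 53)
header = build_udp_header(50000, 53, 4)
```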
Checksum computation
The method used to compute the checksum is defined in RFC 768:
Checksum is the 16-bit one's complement of the one's complement sum of a pseudo
header of information from the IP header, the UDP header, and the data, padded with
zero octets at the end (if necessary) to make a multiple of two octets.[6]
In other words, all 16-bit words are summed using one's complement arithmetic: add the 16-bit
values up, and each time a carry-out (17th bit) is produced, wrap that bit around and add it
back into the least significant bit.[8] The sum is then one's complemented to yield the value
of the UDP checksum field.
If the checksum calculation results in the value zero (all 16 bits 0) it should be sent as the one's
complement (all 1s).
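The wrap-around summation and final complement described above can be sketched directly from RFC 768's definition; this is a minimal illustration, not an optimized implementation.

```python
def ones_complement_sum(data: bytes) -> int:
    """One's-complement sum of big-endian 16-bit words. Each carry-out
    (17th bit) is wrapped around and added back into the least
    significant bit."""
    if len(data) % 2:
        data += b"\x00"          # pad with a zero octet to a multiple of two
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)   # fold the carry back in
    return total

def udp_checksum(pseudo_header: bytes, udp_segment: bytes) -> int:
    """One's complement of the sum; a computed value of zero is
    transmitted as all ones (0xFFFF)."""
    s = ones_complement_sum(pseudo_header + udp_segment)
    return ((~s) & 0xFFFF) or 0xFFFF
```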
The difference between IPv4 and IPv6 is in the data used to compute the checksum.
IPv4 Pseudo Header
When UDP runs over IPv4, the checksum is computed using a "pseudo header" that contains
some of the same information from the real IPv4 header. The pseudo header is not the real IPv4
header used to send an IP packet, it is used only for the checksum calculation.
The source and destination addresses are those in the IPv4 header. The protocol is that for UDP
(see List of IP protocol numbers): 17 (0x11). The UDP length field is the length of the UDP
header and data.
UDP checksum computation is optional for IPv4. If a checksum is not used it should be set to
the value zero.
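The 12-byte IPv4 pseudo header described above can be assembled as follows; the addresses are documentation examples and the function name is illustrative.

```python
import socket
import struct

def ipv4_pseudo_header(src_ip: str, dst_ip: str, udp_length: int) -> bytes:
    """Build the IPv4 pseudo header used only for the checksum:
    source address, destination address, a zero byte, the UDP
    protocol number 17 (0x11), and the UDP length (header + data)."""
    return struct.pack("!4s4sBBH",
                       socket.inet_aton(src_ip),
                       socket.inet_aton(dst_ip),
                       0, 17, udp_length)

ph = ipv4_pseudo_header("192.0.2.1", "192.0.2.2", 8)  # 8 = empty datagram
```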
IPv6 Pseudo Header
When UDP runs over IPv6, the checksum is mandatory. The method used to compute it is
changed as documented in RFC 2460:
Any transport or other upper-layer protocol that includes the addresses from the IP
header in its checksum computation must be modified for use over IPv6 to include the
128-bit IPv6 addresses.
When computing the checksum, again a pseudo header is used that mimics the real IPv6 header:
The source address is the one in the IPv6 header. The destination address is the final destination;
if the IPv6 packet does not contain a Routing header, that will be the destination address in the
IPv6 header; otherwise, at the originating node, it will be the address in the last element of the
Routing header, and, at the receiving node, it will be the destination address in the IPv6 header.
The value of the Next Header field is the protocol value for UDP: 17. The UDP length field is
the length of the UDP header and data.
Reliability and congestion control solutions
Lacking reliability, UDP applications must generally be willing to accept some loss, errors or
duplication. Some applications, such as TFTP, may add rudimentary reliability mechanisms into
the application layer as needed.[2]
Most often, UDP applications do not employ reliability mechanisms and may even be hindered
by them. Streaming media, real-time multiplayer games and voice over IP (VoIP) are examples
of applications that often use UDP. In these particular applications, loss of packets is not usually
a fatal problem. If an application requires a high degree of reliability, a protocol such as the
Transmission Control Protocol may be used instead.
In VoIP, for example, latency and jitter are the primary concerns. The use of TCP would cause
jitter if any packets were lost, as TCP does not provide subsequent data to the application
while it is requesting retransmission of the missing data. If using UDP, the end-user
application must provide any necessary handshaking, such as real-time confirmation that the
message has been received.
Applications
Numerous key Internet applications use UDP, including: the Domain Name System (DNS),
where queries must be fast and only consist of a single request followed by a single reply
packet, the Simple Network Management Protocol (SNMP), the Routing Information Protocol
(RIP)[1] and the Dynamic Host Configuration Protocol (DHCP).
Voice and video traffic is generally transmitted using UDP. Real-time video and audio
streaming protocols are designed to handle occasional lost packets, so only slight degradation in
quality occurs, rather than large delays if lost packets were retransmitted. Because both TCP and
UDP run over the same network, many businesses are finding that a recent increase in UDP
traffic from these real-time applications is hindering the performance of applications using TCP,
such as point of sale, accounting, and database systems. When TCP detects packet loss, it will
throttle back its data rate usage. Since both real-time and business applications are important to
businesses, developing quality of service solutions is seen as crucial by some.[10]
Some VPN systems such as OpenVPN may use UDP while implementing reliable connections
and error checking at the application level.
Transmission Control Protocol
The Transmission Control Protocol (TCP) is a core protocol of the Internet Protocol Suite. It
originated in the initial network implementation in which it complemented the Internet
Protocol (IP). Therefore, the entire suite is commonly referred to as TCP/IP. TCP provides
reliable, ordered, and error-checked delivery of a stream of octets between applications
running on hosts communicating over an IP network. TCP is the protocol that major
Internet applications such as the World Wide Web, email, remote administration and file
transfer rely on. Applications that do not require reliable data stream service may use the
User Datagram Protocol (UDP), which provides a connectionless datagram service that
emphasizes reduced latency over reliability.
Network function
The Transmission Control Protocol provides a communication service at an intermediate level
between an application program and the Internet Protocol. It provides host-to-host connectivity
at the Transport Layer of the Internet model. An application does not need to know the
particular mechanisms for sending data via a link to another host, such as the required packet
fragmentation on the transmission medium. At the transport layer, the protocol handles all
handshaking and transmission details and presents an abstraction of the network connection to
the application.
At the lower levels of the protocol stack, due to network congestion, traffic load balancing, or
other unpredictable network behavior, IP packets may be lost, duplicated, or delivered out of
order. TCP detects these problems, requests retransmission of lost data, rearranges out-of-order
data, and even helps minimize network congestion to reduce the occurrence of the other
problems. If the data still remains undelivered, its source is notified of this failure. Once the
TCP receiver has reassembled the sequence of octets originally transmitted, it passes them to the
receiving application. Thus, TCP abstracts the application's communication from the underlying
networking details.
TCP is utilized extensively by many popular applications carried on the Internet, including the
World Wide Web (WWW), E-mail, File Transfer Protocol, Secure Shell, peer-to-peer file
sharing, and many streaming media applications.
TCP is optimized for accurate delivery rather than timely delivery, and therefore, TCP
sometimes incurs relatively long delays (on the order of seconds) while waiting for out-of-order
messages or retransmissions of lost messages. It is not particularly suitable for real-time
applications such as Voice over IP. For such applications, protocols like the Real-time Transport
Protocol (RTP) running over the User Datagram Protocol (UDP) are usually recommended
instead.[2]
TCP is a reliable stream delivery service that guarantees that all bytes received will be identical
with bytes sent and in the correct order. Since packet transfer over many networks is not
reliable, a technique known as positive acknowledgment with retransmission is used to
guarantee reliability of packet transfers. This fundamental technique requires the receiver to
respond with an acknowledgment message as it receives the data. The sender keeps a record of
each packet it sends. The sender also maintains a timer from when the packet was sent, and
retransmits a packet if the timer expires before the message has been acknowledged. The timer
is needed in case a packet gets lost or corrupted.
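The positive-acknowledgment-with-retransmission loop described above can be sketched as follows. Here `await_ack` is a hypothetical stand-in for "transmit the packet and wait on the acknowledgment timer"; it is not a real socket API.

```python
def send_with_retransmission(packet, await_ack, max_attempts=5):
    """Resend the packet until await_ack reports that an ACK arrived
    before the timer expired, or give up after max_attempts tries."""
    for attempt in range(1, max_attempts + 1):
        if await_ack(packet):        # transmit; True means ACK beat the timer
            return attempt           # number of transmissions that were needed
    raise TimeoutError("packet was never acknowledged")
```

For example, on a simulated channel that loses the first two transmissions, the sender succeeds on the third attempt.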
While IP handles actual delivery of the data, TCP keeps track of the individual units of data
transmission, called segments, that a message is divided into for efficient routing through the
network. For example, when an HTML file is sent from a web server, the TCP software layer of
that server divides the sequence of octets of the file into segments and forwards them
individually to the IP software layer (Internet Layer). The Internet Layer encapsulates each TCP
segment into an IP packet by adding a header that includes (among other data) the destination IP
address. When the client program on the destination computer receives them, the TCP layer
(Transport Layer) reassembles the individual segments and ensures they are correctly ordered
and error free as it streams them to an application.
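The division of an octet stream into segments, as described above, can be sketched in a simplified form; real TCP stacks negotiate the maximum segment size (MSS), so the 1460-byte default here is only a common illustrative value.

```python
def segment_stream(data: bytes, mss: int = 1460):
    """Split an octet stream into MSS-sized chunks, tagging each chunk
    with the sequence number of its first byte (a simplified picture
    of how TCP divides a stream into segments)."""
    return [(seq, data[seq:seq + mss]) for seq in range(0, len(data), mss)]
```

Rejoining the chunks in sequence-number order reconstructs the original stream, which is what the receiving TCP layer does before handing data to the application.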
TCP segment structure
Transmission Control Protocol accepts data from a data stream, divides it into chunks, and adds
a TCP header creating a TCP segment. The TCP segment is then encapsulated into an Internet
Protocol (IP) datagram, and exchanged with peers.[3]
The term TCP packet appears in both informal and formal usage, whereas in more precise
terminology segment refers to the TCP Protocol Data Unit (PDU), datagram to the IP PDU, and
frame to the data link layer PDU:
Processes transmit data by calling on the TCP and passing buffers of data as arguments. The
TCP packages the data from these buffers into segments and calls on the internet module [e.g.
IP] to transmit each segment to the destination TCP.
A TCP segment consists of a segment header and a data section. The TCP header contains 10
mandatory fields, and an optional extension field (Options).
The data section follows the header. Its contents are the payload data carried for the application.
The length of the data section is not specified in the TCP segment header.
Protocol operation
A Simplified TCP State Diagram. See TCP EFSM diagram for a more detailed state diagram
including the states inside the ESTABLISHED state.
TCP protocol operations may be divided into three phases. Connections must be properly
established in a multi-step handshake process (connection establishment) before entering the
data transfer phase. After data transmission is completed, the connection termination closes
established virtual circuits and releases all allocated resources.
A TCP connection is managed by an operating system through a programming interface that
represents the local end-point for communications, the Internet socket. During the lifetime of a
TCP connection the local end-point undergoes a series of state changes:[11]
LISTEN
(server) represents waiting for a connection request from any remote TCP and port.
SYN-SENT
(client) represents waiting for a matching connection request after having sent a
connection request.
SYN-RECEIVED
(server) represents waiting for a confirming connection request acknowledgment after
having both received and sent a connection request.
ESTABLISHED
(both server and client) represents an open connection, data received can be delivered to
the user. The normal state for the data transfer phase of the connection.
FIN-WAIT-1
(both server and client) represents waiting for a connection termination request from the
remote TCP, or an acknowledgment of the connection termination request previously
sent.
FIN-WAIT-2
(both server and client) represents waiting for a connection termination request from the
remote TCP.
CLOSE-WAIT
(both server and client) represents waiting for a connection termination request from the
local user.
CLOSING
(both server and client) represents waiting for a connection termination request
acknowledgment from the remote TCP.
LAST-ACK
(both server and client) represents waiting for an acknowledgment of the connection
termination request previously sent to the remote TCP (which includes an
acknowledgment of its connection termination request).
TIME-WAIT
(either server or client) represents waiting for enough time to pass to be sure the remote
TCP received the acknowledgment of its connection termination request. [According to
RFC 793 a connection can stay in TIME-WAIT for a maximum of four minutes, known as
two MSL (maximum segment lifetime).]
CLOSED
(both server and client) represents no connection state at all.
Connection establishment
To establish a connection, TCP uses a three-way handshake. Before a client attempts to connect
with a server, the server must first bind to and listen at a port to open it up for connections: this
is called a passive open. Once the passive open is established, a client may initiate an active
open. To establish a connection, the three-way (or 3-step) handshake occurs:
1. SYN: The active open is performed by the client sending a SYN to the server. The client
sets the segment's sequence number to a random value A.
2. SYN-ACK: In response, the server replies with a SYN-ACK. The acknowledgment
number is set to one more than the received sequence number i.e. A+1, and the sequence
number that the server chooses for the packet is another random number, B.
3. ACK: Finally, the client sends an ACK back to the server. The sequence number is set
to the received acknowledgement value i.e. A+1, and the acknowledgement number is
set to one more than the received sequence number i.e. B+1.
At this point, both the client and server have received an acknowledgment of the connection.
The steps 1, 2 establish the connection parameter (sequence number) for one direction and it is
acknowledged. The steps 2, 3 establish the connection parameter (sequence number) for the
other direction and it is acknowledged. With these, a full-duplex communication is established.
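The sequence/acknowledgment arithmetic of the three steps above can be sketched without any real sockets; the dictionaries below just model the relevant header fields of each packet.

```python
import random

def three_way_handshake():
    """Model only the seq/ack numbers exchanged in the 3-way handshake."""
    a = random.randrange(2 ** 32)          # client's random initial seq number A
    syn = {"seq": a}                       # 1. SYN
    b = random.randrange(2 ** 32)          # server's random initial seq number B
    syn_ack = {"seq": b, "ack": a + 1}     # 2. SYN-ACK: acknowledges A+1
    ack = {"seq": a + 1, "ack": b + 1}     # 3. ACK: seq = A+1, acknowledges B+1
    return syn, syn_ack, ack
```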
Connection termination
The connection termination phase uses a four-way handshake, with each side of the connection
terminating independently. When an endpoint wishes to stop its half of the connection, it
transmits a FIN packet, which the other end acknowledges with an ACK. Therefore, a typical
tear-down requires a pair of FIN and ACK segments from each TCP endpoint. After both
FIN/ACK exchanges are concluded, the side that sent the first FIN before receiving one waits
for a timeout before finally closing the connection, during which time the local port is
unavailable for new connections; this prevents confusion due to delayed packets being delivered
during subsequent connections.
A connection can be "half-open", in which case one side has terminated its end, but the other
has not. The side that has terminated can no longer send any data into the connection, but the
other side can. The terminating side should continue reading the data until the other side
terminates as well.
It is also possible to terminate the connection by a 3-way handshake, when host A sends a FIN
and host B replies with a FIN & ACK (merely combining two steps into one) and host A replies
with an ACK.[12] This is perhaps the most common method.
Some host TCP stacks may implement a half-duplex close sequence, as Linux or HP-UX do. If
such a host actively closes a connection but still has not read all the incoming data the stack
already received from the link, this host sends a RST instead of a FIN (Section 4.2.2.13 in RFC
1122). This allows a TCP application to be sure the remote application has read all the data the
former sent, by waiting for the FIN from the remote side when it actively closes the connection.
However, the remote TCP stack cannot distinguish between a Connection Aborting RST and a Data
Loss RST; both cause the remote stack to lose all the data it has received.
Some application protocols may violate the OSI model layers by using the TCP open/close
handshaking for the application protocol open/close handshaking; these may encounter the RST
problem on an active close. As an example:
s = socket.create_connection(remote)  # active open to the remote endpoint
s.sendall(data)                       # hand the buffer to the TCP stack
s.close()                             # active close, possibly before the peer has read
For a usual program flow like the above, such a TCP/IP stack does not guarantee that all the
data arrives at the other application.
Resource usage
Most implementations allocate an entry in a table that maps a session to a running operating
system process. Because TCP packets do not include a session identifier, both endpoints identify
the session using the client's address and port. Whenever a packet is received, the TCP
implementation must perform a lookup on this table to find the destination process. Each entry
in the table is known as a Transmission Control Block or TCB. It contains information about the
endpoints (IP and port), status of the connection, running data about the packets that are being
exchanged and buffers for sending and receiving data.
The number of sessions in the server side is limited only by memory and can grow as new
connections arrive, but the client must allocate a random port before sending the first SYN to
the server. This port remains allocated during the whole conversation, and effectively limits the
number of outgoing connections from each of the client's IP addresses. If an application fails to
properly close unrequired connections, a client can run out of resources and become unable to
establish new TCP connections, even from other applications.
Both endpoints must also allocate space for unacknowledged packets and received (but unread)
data.
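The TCB lookup described above amounts to a table keyed by the connection 4-tuple. The sketch below is a toy model; the field names and addresses are illustrative and not taken from any real TCP stack.

```python
# TCB table keyed by (local ip, local port, remote ip, remote port).
tcb_table = {}

def demultiplex(packet):
    """Find the TCB for an incoming packet: its destination is our local
    endpoint, its source is the remote endpoint. Returns None when no
    connection matches (a real stack would answer with a RST)."""
    key = (packet["dst_ip"], packet["dst_port"],
           packet["src_ip"], packet["src_port"])
    return tcb_table.get(key)

# An established connection and a packet arriving for it:
tcb_table[("10.0.0.1", 80, "10.0.0.2", 51515)] = {"state": "ESTABLISHED"}
pkt = {"src_ip": "10.0.0.2", "src_port": 51515,
       "dst_ip": "10.0.0.1", "dst_port": 80}
```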
Comparison of UDP and TCP
Transmission Control Protocol is a connection-oriented protocol, which means that it requires
handshaking to set up end-to-end communications. Once a connection is set up, user data may
be sent bi-directionally over the connection.
• Reliable – TCP manages message acknowledgment, retransmission and timeout.
Multiple attempts to deliver the message are made. If it gets lost along the way, the
server will re-request the lost part. In TCP, there's either no missing data, or, in case of
multiple timeouts, the connection is dropped.
• Ordered – If two messages are sent over a connection in sequence, the first message will
reach the receiving application first. When data segments arrive in the wrong order, TCP
buffers delay the out-of-order data until all data can be properly re-ordered and delivered
to the application.
• Heavyweight – TCP requires three packets to set up a socket connection, before any user
data can be sent. TCP handles reliability and congestion control.
• Streaming – Data is read as a byte stream, no distinguishing indications are transmitted
to signal message (segment) boundaries.
User Datagram Protocol is a simpler message-based connectionless protocol. Connectionless
protocols do not set up a dedicated end-to-end connection. Communication is achieved by
transmitting information in one direction from source to destination without verifying the
readiness or state of the receiver.
• Unreliable – When a UDP message is sent, it cannot be known if it will reach its
destination; it could get lost along the way. There is no concept of acknowledgment,
retransmission, or timeout.
• Not ordered – If two messages are sent to the same recipient, the order in which they
arrive cannot be predicted.
• Lightweight – There is no ordering of messages, no tracking of connections, etc. It is a
small transport layer designed on top of IP.
• Datagrams – Packets are sent individually and are checked for integrity only if they
arrive. Packets have definite boundaries which are honored upon receipt, meaning a read
operation at the receiver socket will yield an entire message as it was originally sent.
• No congestion control – UDP itself does not avoid congestion; congestion control
measures, if needed, must be implemented at the application level.
• Broadcasts - being connectionless, UDP can broadcast - sent packets can be addressed to
be receivable by all devices on the subnet.
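The datagram-boundary property from the list above can be demonstrated over the loopback interface: two separate sends arrive as two separate reads, each returning exactly one whole message. The loopback address and OS-chosen ports are illustrative.

```python
import socket

# Receiver: a UDP socket on an OS-assigned port.
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(("127.0.0.1", 0))
addr = rx.getsockname()

# Sender: two independent datagrams to the receiver.
tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
tx.sendto(b"first message", addr)
tx.sendto(b"second", addr)

# Each read yields one entire datagram, never a partial or merged one.
msg1, _ = rx.recvfrom(4096)
msg2, _ = rx.recvfrom(4096)
tx.close()
rx.close()
```

A TCP stream offers no such guarantee: the same two sends could arrive as one merged read or several partial reads.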
Conclusion
UDP offers a minimal, connectionless transport: no handshaking, no guarantee of delivery,
ordering, or duplicate protection, but checksums for data integrity and port numbers for
addressing applications on a host. This makes it well suited to time-sensitive, query-response,
and streaming applications, where dropping a packet is preferable to waiting for a
retransmission. TCP, by contrast, provides a reliable, ordered byte stream at the cost of
connection setup, acknowledgment, and retransmission overhead. The choice between the two comes
down to whether an application needs reliability handled by the transport layer (TCP) or can
tolerate, or handle in the application itself, occasional loss (UDP).
References
RFC references
• RFC 768 – User Datagram Protocol
• RFC 2460 – Internet Protocol, Version 6 (IPv6) Specification
• RFC 2675 – IPv6 Jumbograms
• RFC 4113 – Management Information Base for the UDP
• RFC 5405 – Unicast UDP Usage Guidelines for Application Designers
External links
• IANA Port Assignments
• The Trouble with UDP Scanning (PDF)
• Breakdown of UDP frame
• UDP on MSDN Magazine Sockets and WCF
• UDP connections