Web Protocol Future
NGUYEN Hoang Minh
1 INTRODUCTION
The objective of this project is to explore various evolutions of the TCP/IP protocol suite towards better support of data byte-streams. This paper is organized as follows. Section 2 describes the background of SPDY, HTTP/2 and QUIC, gives a comparison of them, and explains how SPDY, HTTP/2 and QUIC reduce page load latency by making more efficient use of TCP. Section 3 describes two of the major proposals to change TCP so as to support multipath, namely SCTP and MPTCP, with a full description of each proposal, a point-by-point comparison, their congestion control, mobility and multihoming, and how HTTP/2 can benefit from multipath TCP.
Keywords: TCP/IP, HTTP/2, SPDY, QUIC, SCTP, MPTCP, Web, Browser
2 SPDY, HTTP/2, QUIC
2.1 SPDY
The SPDY protocol is designed to fix the well-known issues of HTTP [1]. The protocol operates in the application layer on top of TCP. The framing layer of SPDY is optimized for HTTP-like request-response streams, enabling web applications that run on HTTP to run on SPDY with little or no modification. The key improvements offered by SPDY are described below.
Figure 2.1: Streams in HTTP, SPDY
● Multiplexed streams over a single TCP connection to a domain, as shown in Figure 2.1. There is no limit to the number of requests that can be handled concurrently within the same SPDY connection (called a SPDY session). These requests create streams in the session, which are bidirectional flows of data. This multiplexing is a much more fine-grained solution than HTTP pipelining. It helps reduce SSL (Secure Sockets Layer) overhead, avoid network congestion, and improve server efficiency. Streams can be created on either the server or the client side, can concurrently send data interleaved with other streams, and are identified by a stream ID, a 31-bit integer value: odd if the stream is initiated by the client, even if initiated by the server [1] (a minimal sketch of this numbering rule follows the list below).
● Request prioritization. The client is allowed to specify a priority level for each object, and the server then schedules the transfer of the objects accordingly. This helps avoid the situation where the network channel is congested with non-critical resources while high-priority requests, for example JavaScript code or style sheets, are left waiting.
● Server push is also included in SPDY, so servers can send data before an explicit request from the client. Without this feature, the client must first download the primary document, and only then can it request the secondary resources. Server push is designed to improve latency when loading embedded objects, but it can also reduce the efficiency of caching when the objects are already cached on the client's side, so the optimization of this mechanism is still in progress.
● HTTP header compression. SPDY compresses request and response HTTP headers, resulting in fewer packets and fewer bytes transmitted.
● Furthermore, SPDY provides an advanced feature, server-initiated streams. Server-initiated streams can be used to deliver content to the client without the client needing to ask for it. This option is configurable by the web developer in two ways: server push, where the server sends the resource to the client directly, and server hint, where the server merely suggests resources that the client should request.
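As noted in the multiplexing item above, streams are identified by 31-bit IDs whose parity encodes the initiator. Here is a minimal sketch of that numbering rule (our own illustration, not code from the SPDY specification):

    def initiator(stream_id: int) -> str:
        """Classify a stream by the parity of its 31-bit ID."""
        if not 0 < stream_id < 2**31:
            raise ValueError("stream IDs are positive 31-bit integers")
        return "client" if stream_id % 2 == 1 else "server"

    class StreamIdAllocator:
        """Hands out monotonically increasing IDs of the right parity."""
        def __init__(self, is_client: bool):
            self.next_id = 1 if is_client else 2  # clients odd, servers even

        def allocate(self) -> int:
            stream_id, self.next_id = self.next_id, self.next_id + 2
            return stream_id

    client_ids = StreamIdAllocator(is_client=True)
    assert [client_ids.allocate() for _ in range(3)] == [1, 3, 5]
    assert initiator(4) == "server"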
2.2 HTTP/2
HTTP/2 is the next evolution of HTTP. Based on Google's SPDY, the new protocol is presented in a formal, openly available specification, while maintaining compatibility with SPDY and the current version of HTTP. A brief overview of the protocol follows.
Binary framing layer:
At the core of all performance enhancements of HTTP/2 is the new binary framing layer, which
dictates how the HTTP messages are encapsulated and transferred between the client and server.
The HTTP semantics, such as verbs, methods, and headers, are unaffected, but the way they are
encoded while in transit is different. All HTTP/2 communication is split into smaller messages
and frames, each of which is encoded in binary format.
Figure 2.2: Binary Framing Layer
Streams, Messages, and Frames
Now let us look at how data is exchanged between the client and server under the new binary framing mechanism. Before starting, let us define some HTTP/2 terminology:
● Stream: A bidirectional flow of bytes within an established connection, which may carry
one or more messages.
● Message: A complete sequence of frames that map to a logical request or response
message.
● Frame: The smallest unit of communication in HTTP/2, each containing a frame header,
which at a minimum identifies the stream to which the frame belongs.
The key points of the mechanism are:
● All communication is performed over a single TCP connection that can carry any number
of bidirectional streams.
● Each stream has a unique identifier and optional priority information that is used to carry
bidirectional messages.
● Each message is a logical HTTP message, such as a request, or response, which consists
of one or more frames.
● The frame is the smallest unit of communication that carries a specific type of data - e.g.,
HTTP headers, message payload, and so on. Frames from different streams may be
interleaved and then reassembled via the embedded stream identifier in the header of
each frame.
HTTP/2 breaks down the HTTP protocol communication into an exchange of binary-encoded
frames, which are then mapped to messages that belong to a particular stream, all of which are
multiplexed within a single TCP connection. This is the foundation that enables all other features
and performance optimizations provided by the HTTP/2 protocol.
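To make the framing concrete, the sketch below (our own, based on the 9-byte frame header layout in RFC 7540: a 24-bit payload length, an 8-bit type, an 8-bit flags field, then a reserved bit and a 31-bit stream identifier) unpacks one frame header:

    import struct

    FRAME_HEADER_LEN = 9  # bytes: 24-bit length, type, flags, R + stream ID

    def parse_frame_header(buf: bytes):
        """Unpack one HTTP/2 frame header from the start of buf."""
        if len(buf) < FRAME_HEADER_LEN:
            raise ValueError("need 9 bytes for a frame header")
        len_hi, len_lo, ftype, flags, raw_id = struct.unpack(">BHBBI", buf[:9])
        length = (len_hi << 16) | len_lo  # 24-bit payload length
        stream_id = raw_id & 0x7FFFFFFF   # drop the reserved top bit
        return length, ftype, flags, stream_id

    # A HEADERS frame (type 0x1) with END_HEADERS (0x4) on stream 1:
    hdr = b"\x00\x00\x0c\x01\x04\x00\x00\x00\x01"
    assert parse_frame_header(hdr) == (12, 0x1, 0x4, 1)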
Figure 2.3: Streams, Messages, and Frames
Request and response multiplexing
In HTTP/1.x, if a client wants to improve performance, it makes multiple requests over parallel TCP connections; however, this is a root cause of head-of-line blocking and of inefficient use of the underlying TCP connections. The new binary framing layer in HTTP/2 resolves that problem by breaking an HTTP message down into independent frames, interleaving them, and reassembling them on the other end, which eliminates the need for multiple connections to enable parallel processing. As a result, applications become faster, simpler, and cheaper to deploy.
Figure 2.4: Request and response multiplexing
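A toy illustration of the interleave-and-reassemble step described above (a simplification of ours, not the protocol's actual scheduler): frames from several streams arrive in arbitrary order on one connection, and the receiver regroups them into per-stream messages using the stream ID carried in every frame:

    from collections import defaultdict

    # Each frame carries (stream_id, payload); the wire order is arbitrary.
    wire = [(1, b"GET /index"), (3, b"GET /style"), (1, b".html"), (3, b".css")]

    def reassemble(frames):
        """Group interleaved frames back into per-stream messages."""
        messages = defaultdict(bytes)
        for stream_id, payload in frames:
            messages[stream_id] += payload
        return dict(messages)

    assert reassemble(wire) == {1: b"GET /index.html", 3: b"GET /style.css"}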
Stream prioritization
Once an HTTP message can be split into many individual frames, and we allow for frames from
multiple streams to be multiplexed, the order in which the frames are interleaved and delivered
both by the client and server becomes a critical performance consideration. To facilitate this, the
HTTP/2 standard allows each stream to have an associated weight and dependency:
● Each stream may be assigned an integer weight between 1 and 256.
● Each stream may be given an explicit dependency on another stream.
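As an illustration of how these weights translate into resource allocation, the snippet below computes the proportional bandwidth shares of sibling streams; the helper function and the example weights (12 and 4) are our own sketch, not anything mandated by the specification.

    def bandwidth_shares(weights):
        # Sibling streams should receive resources in proportion to their weights.
        total = sum(weights.values())
        return {stream: w / total for stream, w in weights.items()}

    # Two streams with weights 12 and 4: the first should get 3/4 of the resources.
    print(bandwidth_shares({"A": 12, "B": 4}))  # {'A': 0.75, 'B': 0.25}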
Server push
Another powerful new feature of HTTP/2 is the ability of the server to send multiple responses
for a single client request. That is, in addition to the response to the original request, the server
can push additional resources to the client (Figure 2.5), without the client having to request each
one explicitly.
Figure 2.5: HTTP/2 Server Push
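As a rough illustration of how a server might issue a push, the sketch below uses the third-party Python h2 package (hyper-h2); the resource path, authority, and response body are made-up examples, and conn/stream_id are assumed to come from an already-established server-side connection.

    # Sketch only: `conn` is an established server-side h2.connection.H2Connection,
    # `stream_id` is the stream carrying the client's original request.
    def push_stylesheet(conn, stream_id):
        promised_id = conn.get_next_available_stream_id()  # server-initiated streams are even
        # PUSH_PROMISE frame: announce the request we are answering preemptively.
        conn.push_stream(stream_id, promised_id, [
            (":method", "GET"), (":path", "/style.css"),
            (":scheme", "https"), (":authority", "example.com"),
        ])
        # Respond on the promised stream as if the client had asked for it.
        conn.send_headers(promised_id, [(":status", "200"), ("content-type", "text/css")])
        conn.send_data(promised_id, b"body { margin: 0 }", end_stream=True)
        return conn.data_to_send()  # raw bytes to write to the socket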
Header Compression
Each HTTP transfer carries a set of headers that describe the transferred resource and its
properties. In HTTP/1.x, this metadata is always sent as plain text and adds anywhere from
500–800 bytes of overhead per transfer, and sometimes kilobytes more if HTTP cookies are
being used. To reduce this overhead and improve performance, HTTP/2 compresses request and
response header metadata (see Figure 2.6) using the HPACK compression format that uses two
simple but powerful techniques:
● It allows the transmitted header fields to be encoded via a static Huffman code, which
reduces their individual transfer size.
● It requires that both the client and server maintain and update an indexed list of
previously seen header fields, which is then used as a reference to efficiently encode
previously transmitted values.
Huffman coding allows the individual values to be compressed when transferred, and the
indexed list of previously transferred values allows us to encode duplicate values by transferring
index values that can be used to efficiently look up and reconstruct the full header keys and
values.
Figure 2.6: HTTP/2 Header Compression
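The effect of the indexed table is easy to observe with the standalone Python hpack package (the codec extracted from the hyper project), assuming it is installed; the header list here is an arbitrary example.

    from hpack import Encoder, Decoder

    encoder, decoder = Encoder(), Decoder()
    headers = [(":method", "GET"), (":path", "/"), ("user-agent", "demo")]

    first = encoder.encode(headers)   # Huffman-coded literals, added to the table
    second = encoder.encode(headers)  # repeated fields become small table indices
    print(len(first), len(second))    # the second encoding is noticeably smaller

    # HPACK is stateful: the decoder must see the blocks in order.
    assert decoder.decode(first) == headers
    assert decoder.decode(second) == headers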
Although HTTP/2 is built on SPDY, it introduces some important new changes [3].
Table 1: Comparison of SPDY with HTTP/2
● SSL: SPDY requires SSL; in order to use the protocol and get the speed benefits,
connections must be encrypted. HTTP/2 does not require SSL; even though the IETF does
not mandate it, many popular browsers do require it.
● Encrypted connections: SPDY does not use the ALPN (Application Layer Protocol
Negotiation) extension that HTTP/2 uses. HTTP/2’s new ALPN extension lets browsers and
servers determine which application protocol to use during the initial connection instead of
after it, giving faster encrypted connections.
● Multiplexing: in SPDY, multiplexing happens on one host at a time; in HTTP/2,
multiplexing happens on different hosts at the same time.
● Compression: SPDY’s current compression method (DEFLATE) leaves a small space for
vulnerabilities. HTTP/2 introduces HPACK, a compression format designed specifically for
shortening headers and preventing vulnerabilities.
● Prioritization: prioritization is available with SPDY, but HTTP/2’s implementation is more
flexible, friendlier to proxies, and lets web browsers determine how and when to download a
web page’s content more efficiently.
2.3 QUIC
QUIC stands for Quick UDP Internet Connections. It is an experimental web protocol from
Google that is an extension of the research evident in SPDY and HTTP/2. QUIC is premised on
the belief that SPDY performance problems are mainly TCP problems and that it is infeasible to
update TCP due to its pervasive nature. QUIC sidesteps those problems by operating over UDP
instead. Although QUIC works on UDP ports 80 and 443, in practice it has not encountered firewall
problems. QUIC is a multiplexing protocol for exchanging requests and responses over the
Internet with lower latency and faster recovery from errors than HTTP/2 over TLS/TCP. QUIC
contains some features not present in SPDY such as roaming between different types of
networks.
QUIC provides connection establishment with zero round-trip time overhead. It also promises to
remove head-of-line blocking on multiplexed streams. In SPDY/HTTP2.0, if a packet is lost in
one stream, the whole set of streams is delayed due to the underlying TCP behavior; no stream
on the TCP connection can progress until the lost packet is retransmitted. In QUIC if a single
packet is lost only one stream is affected [4].
● Multiplexing, Prioritization and Dependency of Streams: QUIC multiplexes multiple
streams over a single pair of UDP endpoints. This is not obligatory, and on the web it
rarely happens anyway, because pages draw resources from several domains. QUIC uses
the same prioritization and dependency mechanisms as SPDY.
● Congestion control: UDP lacks congestion control, so in order to be TCP-fair, QUIC
offers a pluggable congestion control algorithm; currently this is TCP CUBIC.
● Security: QUIC provides an ad-hoc encryption protocol named “QUIC Crypto” which is
compatible with TLS/SSL. The handshake process is more efficient than TLS:
handshakes in QUIC require zero round trips before sending payload, whereas TLS on
top of TCP needs between one and three RTTs. QUIC aligns cryptographic block
boundaries with packet boundaries. The protocol has protection against IP spoofing,
packet reordering and replay attacks [5].
● Forward Error Correction (FEC): QUIC includes a packet-level FEC mechanism
inspired by RAID-4. If one packet in a group is lost, it can be recovered from the FEC
packet sent for that group (a toy example follows this list).
● Connection Migration: QUIC connections are identified by a randomly generated 64-bit
CID (Connection Identifier) rather than the traditional 5-tuple of protocol, source
address, source port, destination address and destination port. In TCP, whenever a client
changes any of these attributes, the connection is no longer valid. In contrast, QUIC
allows users to roam between different types of connections (for example, changing from
WiFi to 3G).
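A toy version of such RAID-4-style recovery, with made-up four-byte packets, shows the idea: the FEC packet is the XOR of the group, so any single missing packet is the XOR of the survivors and the parity.

    from functools import reduce

    def xor_parity(packets):
        # XOR parity over equal-length packets, as in RAID-4-style FEC.
        return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), packets)

    group = [b"AAAA", b"BBBB", b"CCCC"]
    fec = xor_parity(group)

    # If exactly one packet of the group is lost, it can be rebuilt from the
    # remaining packets plus the FEC packet, with no retransmission.
    recovered = xor_parity([group[0], group[2], fec])
    assert recovered == group[1]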
The table below shows the differences between QUIC and HTTP/2.
Table 2: Comparison of QUIC with HTTP/2
● Transport: QUIC runs over UDP; HTTP/2 runs over TCP (ports 80, 443).
● Multiplexing: QUIC multiplexes multiple requests/responses over one UDP
pseudo-connection per domain; HTTP/2 multiplexes them over one TCP connection per
domain.
● Head-of-line blocking: QUIC promises to solve it at the transport layer (where it is caused
by TCP behaviour); HTTP/2 solves it at the application layer (where it is caused by
HTTP/1.1 pipelining).
● Best case: with QUIC, on repeat connections the client can send data immediately (zero
round trips); with HTTP/2, 1 to 3 round trips are needed for TCP and/or TLS connection
establishment.
● Round-trip reduction: QUIC gains it from protocol features such as multiplexing over one
connection; HTTP/2 gains it, compared to HTTP/1.x, from features such as multiplexing
over one connection and server push.
● Layering: HTTP/2 or SPDY can layer on top of QUIC, and all SPDY features are supported
in QUIC; HTTP/2 or SPDY can also layer on top of TCP.
● Error correction: QUIC uses packet-level Forward Error Correction; HTTP/2 relies on
TCP’s selective-reject ARQ.
● Connection migration: a QUIC feature; not available over TCP.
● Security: QUIC’s is TLS-like but with a more efficient handshake; HTTP/2’s is provided by
the underlying TLS.
● Congestion control: QUIC uses a TCP-CUBIC-based scheme; HTTP/2 relies on the
underlying TCP.
2.4 How SPDY/HTTP2 Reduces the Page Load Latency
● Reducing latency with multiplexing: In SPDY/HTTP2, multiple asset requests can reuse
a single TCP connection. Unlike HTTP/1.1 requests using the Keep-Alive header, the
request and response binary frames in SPDY/HTTP2 are interleaved, so head-of-line
blocking does not happen [6]. The cost of the connection-establishment three-way
handshake, 1 RTT, is paid only once per host. Besides that, multiplexing is especially
beneficial for secure connections because of the performance cost of multiple TLS
negotiations.
● A single TCP connection reduces its congestion window upon loss more aggressively
than a set of parallel connections would [6].
● Header compression reduces the bandwidth used and eliminates unnecessary headers.
● Servers can push responses proactively into client caches instead of waiting for a new
request for each resource. Server push potentially allows the server to avoid a round trip
of delay per resource by pushing the responses it expects the client to need into the
client’s cache [7].
2.5 How QUIC Reduces the Page Load Latency
● QUIC uses UDP as its transport protocol, which removes the round-trip time spent on
TCP’s three-way handshake and on TLS authentication and key exchange. Figure 2.7
shows the connection-establishment flow of each protocol, and Table 3 compares the
connection RTTs (Round Trip Times) of the TCP, TLS and QUIC protocols; QUIC
reduces the RTT to 0 for repeat connections. A back-of-envelope calculation of this
saving follows this list.
Figure 2.7: Connection Round Trip Times in TCP, TLS and QUIC protocols
Table 3: Connection Round Trip Times in TCP, TLS and QUIC protocols
TCP TCP/TLS QUIC
First Connection 1 RTT 3 RTT 1 RTT
Repeat Connection 1 RTT 2 RTT 0 RTT
● Additionally, UDP decreases bandwidth usage because its header is shorter than the TCP
header. Another benefit of using UDP is that multiplexing streams avoids head-of-line
blocking: each stream frame can be dispatched to its stream immediately on arrival, so
streams without loss can continue to be reassembled and make forward progress in the
application.
Figure 2.8: Streams in QUIC protocols
● QUIC introduces Forward Error Correction, which is used to reconstruct lost packets
instead of requesting them again. The cost is that redundant data has to be sent (see Figure 2.8).
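As promised above, here is a back-of-envelope calculation of what Table 3 means for page load latency; the 50 ms network round-trip time is an assumed, purely illustrative figure.

    RTT_MS = 50  # assumed round-trip time; purely illustrative

    # Handshake round trips before the first request byte (repeat connections, Table 3).
    handshake_rtts = {"TCP": 1, "TCP+TLS": 2, "QUIC": 0}

    for proto, n in handshake_rtts.items():
        print(f"{proto}: {n * RTT_MS} ms of setup delay before the request is sent")
    # TCP: 50 ms, TCP+TLS: 100 ms, QUIC: 0 ms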
3 MPTCP, SCTP
3.1 Multipath TCP (MPTCP)
MPTCP is currently an experimental protocol defined in RFC 6824 [8]. Its stated goal is to exist
alongside TCP and to “do no harm” to existing TCP connections, while providing the extensions
necessary so that additional paths can be discovered and utilized. Multipath TCP starts and
maintains additional TCP connections and runs them as subflows underneath the main TCP
connection. See Figure 3.1 for a quick visualization of this:
Figure 3.1: Comparison of Standard TCP and MPTCP Protocol Stacks
The IP addresses for these additional subflows are discovered in one of two ways: implicitly,
when a host with a free port connects to a known port on the other host, or explicitly, using an
in-band message. Each subflow is treated as an individual TCP connection with its own set of
congestion control variables. Subflows can also be designated as backup subflows, which do not
transfer data immediately but activate when primary flows fail [9].
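On recent Linux kernels (5.6 and later) this machinery is exposed through an ordinary socket created with IPPROTO_MPTCP; the sketch below, a minimal Python listener, assumes such a kernel, and the port number is arbitrary.

    import socket

    # IPPROTO_MPTCP is 262 on Linux; older Python versions may lack the named constant.
    IPPROTO_MPTCP = getattr(socket, "IPPROTO_MPTCP", 262)

    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM, IPPROTO_MPTCP)
    srv.bind(("0.0.0.0", 8080))
    srv.listen()
    # Accepted connections are used like ordinary TCP sockets; discovering
    # addresses and opening extra subflows is done by the kernel path manager.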
Research has shown that simply running standard TCP congestion control (RFC 5681) on each
subflow does not result in fairness with standard TCP connections when two flows of an
MPTCP connection pass through the same bottleneck link. As such, there is a great deal of
ongoing research on alternative congestion control schemes designed specifically for multipath
protocols [10].
3.2 Stream Control Transmission Protocol (SCTP)
SCTP is a transport layer protocol in the TCP/IP stack (similar to TCP and UDP). It is message
oriented like UDP, but also ensure reliable, insequence transport of messages with congestion
control like TCP. It achieves this by using multihoming to establish multiple redundant paths
between two hosts. Init’s current specification, SCTP is designed to transfer data on one pair of
IP addresses at a time while the redundant pairs are used for failover and path health or control
messages. [12] However, significant research is being done to allow SCTP to use multiple
concurrent paths at once as needed [11].
SCTP requires that endpoint IP addresses be provided to the protocol at initialization; it does not
include any way for endpoints to communicate other possible paths to each other afterwards.
Ports must also be chosen such that no port on either host is used more than once for the
connection. SCTP is currently not in widespread use, and as such routers and firewalls may not
route SCTP packets properly. In the absence of native SCTP support in operating systems, it is
possible to tunnel SCTP over UDP, as well as to map TCP API calls to SCTP ones.
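Where the OS does support SCTP natively (e.g., Linux with the sctp kernel module loaded), a one-to-one-style SCTP socket can be created much like a TCP one; the sketch below is a hedged Python example with a placeholder address, and multihoming-specific calls such as sctp_bindx would additionally require a helper library like pysctp.

    import socket

    # One-to-one style SCTP socket; socket.IPPROTO_SCTP exists only on
    # platforms whose headers define it.
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM, socket.IPPROTO_SCTP)
    s.connect(("192.0.2.10", 5000))  # documentation address (RFC 5737), example port
    s.sendall(b"hello over SCTP")
    s.close()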
3.3 MPTCP and SCTP Comparison
A. Handshakes
Multipath TCP uses a 3-way handshake to initialize a new flow the same way as basic TCP.
SCTP, however, follows a 4-way handshake for its connection setup, as shown in Figure 3.2.
As such, SCTP places greater importance on authentication, using explicit verification tags.
This is crucial in protecting systems against SYN flooding attacks, which are a persistent
problem in TCP-based communications.
Figure 3.2: TCP Handshake, MPTCP Handshake and SCTP Handshake.
B. Congestion Control
On a subflow to subflow basis, MPTCP and SCTP both act either identically or similarly to TCP
and utilize slow start algorithms and congestion windows for end to end flow control on a path.
Additionally, MPTCP and CMT-SCTP both couple all subflow congestion windows together
under a global congestion window. Load balancing decisions on which subflow to use using
these parameters are a constant subject of research and are not trivial.
However, MPTCP can have significantly more flows to manage, since it allows fully meshed
connections, which even CMT-SCTP does not. See Figure 3.3 for an example of a fully meshed
connection in MPTCP as opposed to the parallel connections in SCTP.
Figure 3.3: Connections established in SCTP vs MPTCP
In this picture, each host has two ports, but the protocols set up connections between the two
ports in different ways. In SCTP, these connection pairs may be explicitly defined, while in
MPTCP it is up to the protocol to detect and use the correct ones. As such, choosing efficient
port pairs ahead of time is crucial to the operation of SCTP, and unfortunately this is neither
trivial nor done automatically in most implementations. On the plus side, SCTP’s connection
scheme means
that it does not suffer from the unfairness problem mentioned in the background section on
MPTCP. As currently defined, SCTP is not designed for concurrent multipath transfer the same
way that MPTCP is. Instead, SCTP uses only one path at a time, and it switches to another path
only after the current path fails. There has been a fair amount of academic work on an SCTP
extension to provide concurrent multipath transmission (CMT-SCTP) [11].
Finding a suitable Congestion Control mechanism able to handle multiple paths is nontrivial [9].
Simply adopting the mechanisms used for the single-path protocols neither guarantees an
appropriate throughput [9] nor achieves a fair resource allocation when dealing with multipath
transfer [12]. To solve the fairness issue, Resource Pooling has been adopted for both MPTCP
and CMT-SCTP. In the context of Resource Pooling, multiple resources (in this case paths) are
considered to be a single, pooled resource, and the Congestion Control focuses on the complete
network instead of only a single path. As a result, the complete multipath connection (i.e., all
paths) is throttled even though congestion occurs only on one path. This avoids the bottleneck
problem described earlier and shifts traffic from more congested to less congested paths.
Releasing resources on a congested path decreases the loss rate and improves the stability of the
whole network. Three design goals have been set for Resource Pooling-based multipath
Congestion Control for a TCP-friendly Internet deployment [9]. These rules are:
● Improve throughput: A multipath flow should perform at least as well as a singlepath
flow on the best path.
● Do not harm: A multipath flow should not take more capacity on any one of its paths
than a singlepath flow using only that path.
● Balance congestion: A multipath flow should move as much traffic as possible off its
most congested paths.
The Congestion Control proposed for MPTCP was designed with these goals in mind. The
Congestion Control of the original CMT-SCTP proposal did not use Resource Pooling, but an
algorithm has since been proposed for CMT-SCTP which uses Resource Pooling and fulfills the
requirements [15]. This algorithm behaves slightly differently from the MPTCP Congestion
Control; therefore, the MPTCP Congestion Control has also been adapted to SCTP, called
“MPTCP-like” in the following. While both mechanisms are still candidates for CMT-SCTP in
the IETF discussion, only the MPTCP-like algorithm is used here to get an unbiased comparison
with MPTCP. The MPTCP and MPTCP-like Congestion Controls treat each path as a
self-contained congestion area and reduce just the congestion window of the path experiencing
congestion. In order to avoid an unfair overall bandwidth allocation, the congestion-window
growth behavior is adapted: a per-flow aggressiveness factor is used to bring the increase and
decrease of the congestion window into equilibrium. A sketch of this coupled increase rule
follows.
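The following sketch implements the coupled increase rule of RFC 6356’s Linked Increases Algorithm, on which the MPTCP (and “MPTCP-like”) Congestion Control is based; congestion windows are counted in packets for simplicity, and the subflow values in the example are our own.

    def lia_alpha(subflows):
        # subflows: list of (cwnd, rtt) pairs, one per path (RFC 6356, Section 4).
        cwnd_total = sum(c for c, _ in subflows)
        best_path = max(c / (r * r) for c, r in subflows)
        return cwnd_total * best_path / (sum(c / r for c, r in subflows) ** 2)

    def increase_per_ack(subflows, i):
        # Growth of subflow i's cwnd per ACK in congestion avoidance, capped by
        # the uncoupled TCP increase so no path is more aggressive than plain TCP.
        alpha = lia_alpha(subflows)
        cwnd_total = sum(c for c, _ in subflows)
        return min(alpha / cwnd_total, 1.0 / subflows[i][0])

    print(increase_per_ack([(10, 0.05), (20, 0.10)], 0))  # 0.025 packets per ACK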
The MPTCP Congestion Control is based on counting bytes, as TCP and MPTCP are
byte-oriented protocols. SCTP, however, is a message-oriented protocol, and its Congestion
Control is based on counting messages, which are limited in size by the Maximum Transmission
Unit (MTU). The limit for the calculation is defined by the Maximum Segment Size (MSS) for
TCP and SCTP. It is, e.g., 1,460 bytes for TCP or 1,452 bytes for SCTP using IPv4 over an
Ethernet interface with a typical MTU of 1,500 bytes.
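These figures follow directly from the header sizes; here is a quick check using the standard 20-byte IPv4 and TCP headers and SCTP’s 12-byte common header plus 16-byte DATA chunk header.

    MTU = 1500          # typical Ethernet MTU
    IPV4_HDR = 20       # IPv4 header without options
    TCP_HDR = 20        # TCP header without options
    SCTP_COMMON = 12    # SCTP common header
    SCTP_DATA_HDR = 16  # DATA chunk header

    print(MTU - IPV4_HDR - TCP_HDR)                      # 1460 bytes for TCP
    print(MTU - IPV4_HDR - SCTP_COMMON - SCTP_DATA_HDR)  # 1452 bytes for SCTP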
C. Path Management
Figure 3.4: Path combinations
Path Management in MPTCP: An MPTCP connection consists, in principle, of several TCP-like
connections (called subflows) using the different network paths available. An MPTCP connection
between Peer A (PA) and Peer B (PB) (see Figure 3.4(a)) is initiated by setting up a regular
TCP connection between the two endpoints via one of the available paths, e.g., IPA1 to IPB1.
During the connection setup, the new TCP option MP_CAPABLE is used to signal the intention
to use multiple paths to the remote peer [13]. Once the initial connection is established,
additional sub-connections are added. This is done similarly to regular TCP connection
establishment, by performing a three-way handshake with the new TCP option MP_JOIN present
in the segment headers. By default, MPTCP uses all available address combinations to set up
subflows, resulting in a full mesh over all available paths between the endpoints. The option
ADD_ADDR is used in the Linux implementation to announce an additional IP address to the
remote host. In the case of Figure 3.4(a), the MPTCP connection is first set up between IPA1
and IPB1. Both hosts then include all additional IP addresses in an ADD_ADDR option, since
they are both multi-homed. After that, an additional subflow is started between IPA2 and IPB1
by sending a SYN packet including the MP_JOIN option. The same is done with two additional
sub-connections between IPA2 and IPB2 as well as IPA1 and IPB2. The result of these
operations is the use of 4 subflows using direct as well as cross paths: PA1-B1, PA1-B2, PA2-B1,
and PA2-B2.
Path Management in CMT-SCTP: CMT-SCTP is based on SCTP as defined in [14]. Standard
SCTP already provides multi-homing capabilities which are directly usable for CMT-SCTP. An
SCTP packet is composed of an SCTP header and multiple information elements called Chunks,
which can carry control information (Control Chunks) or user data (DATA Chunks). A
connection, denoted as an Association in SCTP, is initiated by a 4-way handshake and is started
by sending an INITIATION (INIT) chunk. With this first message, the initiating host PA informs
the remote host PB about all IP addresses available on PA. Once PB has received the INIT
chunk, it answers with an INITIATION-ACKNOWLEDGMENT (INIT-ACK) chunk. The
INIT-ACK also includes a list of all the IP addresses available on PB.
When PA initiates an SCTP connection to PB, it uses the primary IP addresses of both hosts,
IPA1 and IPB1, as source and destination address, respectively. This creates a first path between
these two addresses, denoted as PA1-B1 in Figure 3.4(b), which is designated as the “Primary
Path”. In standard SCTP this is the only path used for the exchange of user data; the others are
only used to provide robustness in case of network failures. SCTP, and consequently also
CMT-SCTP, uses all additional IP addresses to create additional paths. In contrast to MPTCP,
each secondary IP address is only used for a single additional path, in an attempt to make the
established paths disjoint. In the example, the secondary path PA2-B2 is established.
As a result, while MPTCP creates a full mesh of possible network paths among the available
addresses, CMT-SCTP only uses pairs of addresses to set up communication paths. CMT-SCTP
only determines the specific source address to specify which path is to be used (source address
selection) and leaves it to the IP layer to select the route to the next hop. MPTCP, however,
maintains a table in the transport layer identifying all possible combinations of local and remote
addresses and uses this table to predefine the network path to be used. The sketch below
contrasts the two pairing strategies.
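Using the address names from Figure 3.4, a few lines of Python make the difference concrete: MPTCP’s full mesh is a cross product of the address sets, while CMT-SCTP’s disjoint paths correspond to pairing them one-to-one.

    from itertools import product

    addrs_a = ["IPA1", "IPA2"]
    addrs_b = ["IPB1", "IPB2"]

    mptcp_subflows = list(product(addrs_a, addrs_b))  # full mesh: 4 subflows
    sctp_paths = list(zip(addrs_a, addrs_b))          # pairwise: 2 disjoint paths

    print(mptcp_subflows)  # [('IPA1', 'IPB1'), ('IPA1', 'IPB2'), ('IPA2', 'IPB1'), ('IPA2', 'IPB2')]
    print(sctp_paths)      # [('IPA1', 'IPB1'), ('IPA2', 'IPB2')]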
3.4 HTTP/2 Benefits from Multipath TCP
● Multipath TCP should be backward compatible. That means HTTP/2 should be able to
run over MPTCP, and if for any reason a successful Multipath TCP connection cannot be
set up, it must always fall back to a normal TCP connection (a sketch of this fallback
pattern appears after this list).
● MPTCP will increase the bandwidth, because two connection links with two separate
paths are used in a single connection. If, due to congestion, one path is providing only a
small percentage of its bandwidth, the other path can be utilized as well; hence the total
bandwidth of an MPTCP connection is the combined bandwidth of both paths. HTTP/2
over MPTCP has clear benefits compared to HTTP/1.0 over MPTCP, since there are
fewer transport connections and these carry more data, giving the MPTCP subflows time
to correctly utilise the available paths.
● MPTCP provides better redundancy: the connection is not affected even if one link goes
down. As an example, suppose you are downloading a file with HTTP/2 multistreaming
over a WiFi connection. Even if you walk out of WiFi range, the file transfer should not
be affected, because MPTCP automatically stops sending data through the WiFi link and
continues over the cellular network alone.
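Here is a hedged sketch of the fallback pattern from the first bullet, reusing the Linux IPPROTO_MPTCP constant introduced earlier; host and port are placeholders supplied by the caller.

    import socket

    def connect_mptcp_or_tcp(host, port):
        proto = getattr(socket, "IPPROTO_MPTCP", 262)
        try:
            s = socket.socket(socket.AF_INET, socket.SOCK_STREAM, proto)
            s.connect((host, port))
            return s  # MPTCP socket; the protocol itself also degrades to plain
                      # TCP if the peer does not answer the MP_CAPABLE option
        except OSError:
            # Kernel without MPTCP support: fall back to an ordinary TCP connection.
            return socket.create_connection((host, port))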
Figure 3.5: Optimization across layers
In the detailed workflow, HTTP/2 uses its multiplexing mechanism to maintain a single
long-lived connection between hosts, over which many requests and responses are sent and
received; at the application layer, this one connection carries multiple messages. Multipath TCP
then works at the transport layer: the data is divided into segments that are delivered over the
multiple subflows generated by MPTCP’s inverse multiplexer, and the subflows are merged
again by MPTCP’s demultiplexer at the destination host. Finally, HTTP/2 handles the data for
the applications’ requests and responses.
4 CONCLUSIONS AND RELATED WORK
This report described QUIC, SPDY and HTTP/2 and compared these protocols. HTTP/2 is the
next evolution of HTTP. Based on Google’s SPDY, the new protocol is presented in a formal,
openly available specification, and it maintains compatibility with SPDY and the current version
of HTTP. Although HTTP/2 is built on SPDY, it introduces some important changes; the main
difference between HTTP/2 and SPDY comes from their header compression algorithms:
HTTP/2 uses the HPACK algorithm for header compression, while SPDY uses DEFLATE.
QUIC is a recent protocol developed by Google in 2013 for the efficient transfer of web pages.
QUIC aims to improve performance compared to SPDY and HTTP by multiplexing web objects
in one stream over the UDP protocol instead of traditional TCP.
Additionally, the report presented two of the major proposals to change TCP to support
multipath, SCTP and MPTCP, and compared them on path management, connection
establishment and congestion control, as well as the benefits HTTP/2 gains from these proposals.
Multipath TCP allows existing TCP applications to achieve better performance and robustness
over today’s networks, and it has been standardized at the IETF. Multipath support is now very
important: mobile devices have multiple wireless interfaces, data centers have many redundant
paths between servers, and multihoming has become the norm for big server farms. TCP is
essentially a single-path protocol: a TCP connection is bound to the IP addresses of the two
endpoints when it is established, and if one of these addresses changes the connection will fail.
In fact, a TCP connection cannot even be load-balanced across more than one path within the
network, because this results in packet reordering, and TCP misinterprets this reordering as
congestion and slows down. For example, if a smartphone’s WiFi loses signal, the TCP
connections associated with it stall; there is no way to migrate them to other working interfaces,
such as 3G. This makes mobility a frustrating experience for users. Modern data centers are
another example: many paths are available between two endpoints, yet multipath routing
randomly picks one for a particular TCP connection.
We survey related work on two topics: (i) Multipath QUIC and (ii) Optimized Cooperation of
HTTP/2 and Multipath TCP.
(i) Multipath QUIC is an extension to the QUIC protocol that enables hosts to exchange data
over multiple networks within a single connection. End hosts are now often equipped with
several network interfaces, and users expect to be able to switch seamlessly from one to another,
or to use them simultaneously to aggregate bandwidth. Multipath QUIC also enables QUIC
flows to cope with events affecting the network path, such as NAT rebinding or IP address
changes.
(ii) Optimized Cooperation of HTTP/2 and Multipath TCP: HTTP/2 is the next evolution of
HTTP, and Multipath TCP allows existing TCP applications to achieve better performance and
robustness. Optimizing how HTTP/2 runs over MPTCP has the potential to make applications
faster, simpler, and more robust [16].
5 REFERENCES
1. SPDY Protocol - Draft 3. Accessed May 16, 2018.
http://www.chromium.org/spdy/spdy-protocol/spdy-protocol-draft3
2. Introduction to HTTP/2, Ilya Grigorik, Surma, Accessed May 16, 2018.
https://developers.google.com/web/fundamentals/performance/http2/
3. Shifting from SPDY to HTTP/2, Justin Dorfman. Accessed May 16, 2018
https://blog.stackpath.com/spdy-to-http2
4. QUIC Protocol Official Website. Available at: https://www.chromium.org/quic.
5. QUIC Crypto. Accessed May 16, 2018.
https://docs.google.com/document/d/1g5nIXAIkN_Y-7XJW5K45IblHd_L2f5LTaDUDw
vZ5L6g/edit.
6. How Speedy is SPDY, Xiao Sophia Wang, Aruna Balasubramanian, USENIX, 2014
7. HTTP/2 Frequently Asked Questions, Accessed May 16, 2018 https://http2.github.io/faq/
8. Ford, et al., RFC 6824 TCP Extensions for Multipath Operation with Multiple
Addresses., RFC 6824, January 1, 2013. Accessed May 16, 2018
http://tools.ietf.org/html/rfc6824.
9. Ford, et al., RFC 6182 Architectural Guidelines for Multipath TCP Development, RFC
6182. March 2011. Accessed May 16, 2018 http://tools.ietf.org/html/rfc6182
10. Singh, et al. Enhancing Fairness and Congestion Control in Multipath TCP, 6th Joint
IFIP Wireless and Mobile Networking Conference, 2013
11. Iyengar, J. R. et al. Concurrent Multipath Transfer Using SCTP Multihoming, SPECTS,
2004
12. Stewart, et al., RFC 4960 Stream Control Transmission Protocol, RFC 4960, September
2007. Accessed May 16, 2018. http://tools.ietf.org/html/rfc4960
13. A. Ford, C. Raiciu, M. Handley, S. Barré, and J. R. Iyengar, Architectural Guidelines for
Multipath TCP Development, IETF, Informational RFC 6182, Mar. 2011, ISSN
2070-1721.
14. R. R. Stewart, Stream Control Transmission Protocol, IETF, Standards Track RFC 4960,
Sept. 2007, ISSN 2070-1721.
15. Martin Becke, Fu Fa, Comparison of Multipath TCP and CMT-SCTP based on
Intercontinental Measurements, IEEE 12 June 2014, ISSN: 1930-529X
16. Maximilian Weller, Optimized Cooperation of HTTP/2 and Multipath TCP, May 1, 2017
17. Slashroot, How does MULTIPATH in TCP work, Accessed May 17, 2018
https://www.slashroot.in/what-tcp-multipath-and-how-does-multipath-tcp-work