Why HTTP/1.1 needs to be phased out, and how to leverage HTTP/2 features to improve web development practices. Also, the state of Ruby, and how to use the new protocol even though support is still limited.
This document summarizes the evolution of HTTP including versions 0.9, 1.0, 1.1, and the development of HTTP/2. It outlines limitations of HTTP/1.1 like concurrent connection limits and head-of-line blocking. It then describes how SPDY was developed by Google to address these limitations and optimize HTTP, and how HTTP/2 was later standardized based on SPDY incorporating features like request multiplexing, header compression, server push, and stream prioritization to improve performance.
The document introduces HTTP/2 and discusses limitations of HTTP 1.1 including head of line blocking, TCP slow start, and latency issues. It describes key features of HTTP/2 such as multiplexing requests over a single TCP connection, header compression, and server push to reduce page load times. The presentation includes demos of HTTP/2 in Chrome dev tools and Wireshark to troubleshoot HTTP/2 connections.
HTTP/2 is a new version of the HTTP network protocol that makes web content delivery faster and more efficient. It introduces features like multiplexing, header compression, and server push that fix limitations in HTTP/1.1 like head-of-line blocking and slow start. HTTP/2 is now supported in all major browsers and servers and provides performance improvements over HTTP/1.1 without requiring workarounds. The presentation provided an overview of HTTP/2 concepts and how to troubleshoot using developer tools.
HTTP/2 aims to address issues with HTTP/1.x such as head-of-line blocking and wasted bandwidth through duplicate requests. It uses a binary format for multiplexing requests, server push, header compression, stream prioritization and flow control. Major browsers now support HTTP/2 over TLS, though server implementations are still in development. While preserving the HTTP/1.1 API, HTTP/2 provides advantages like cheaper requests and more efficient use of network resources and server capacity.
This document introduces HTTP/2, describing its goals of improving on HTTP 1.1 by allowing multiple requests to be sent over a single TCP connection through request multiplexing and header compression. It outlines issues with HTTP 1.1 like head-of-line blocking and slow start that cause latency. HTTP/2 aims to address these by sending requests concurrently in interleaved frames and compressing headers. The document demonstrates these concepts and how to troubleshoot HTTP/2 connections using the Chrome network console and Wireshark.
This document discusses HTTP/2, including a brief history of HTTP 1.x, the development of SPDY which became the basis for HTTP/2, the key features of HTTP/2 like binary framing, streams, header compression and server push, considerations for transitioning from HTTP 1.x to HTTP/2, and strategies for optimizing performance with HTTP/2. It recommends benchmarking optimizations and transitioning first internal APIs, then public APIs and CDNs, followed by front-end applications and proxies.
O'Reilly Fluent Conference: HTTP/1.1 vs. HTTP/2 - Load Impact
HTTP/2 is a new version of the HTTP network protocol that can improve page load performance over HTTP/1.1. The document discusses the history and limitations of HTTP/1.1, how HTTP/2 addresses these through features like multiplexing and header compression, and the results of an experiment that found HTTP/2 reduced page load times by 50-70% compared to HTTP/1.1. Real-world performance benefits may be less since HTTP/2 and site optimizations for it are still maturing, but initial experiments show promise of HTTP/2 significantly improving load times.
HTTP by Hand: Exploring HTTP/1.0, 1.1 and 2.0 - Cory Forsyth
This document summarizes the evolution of HTTP from versions 0.9 to 2. It discusses key aspects of HTTP/1.0 and HTTP/1.1 such as persistent connections and pipelining. It also covers how these features were abused to optimize page load performance. Finally, it provides an overview of HTTP/2 and how it differs from previous versions through the use of binary format, header compression, and multiplexing requests over a single TCP connection.
An overview of the HTTP/2 protocol, including a comparison with the previous version, a deeper look at the protocol enhancements, a compatibility matrix for the internet ecosystem, and a set of online demos that show the performance optimizations.
The HTTP/2 protocol is the latest evolution of the HTTP protocol addressing the issue of HTTP/TCP impedance mismatch. Web applications have been working around this problem for years employing techniques like concatenation or css spriting to reduce page load time and improve user experience. HTTP/2 is also a game changer on the server enabling increased concurrency. This talk will focus on the impact HTTP/2 will have on the server and examine how particularly well adapted the Vert.x concurrency model is to serve HTTP/2 applications.
HTTP has evolved over time to address efficiency and performance issues. HTTP 1.1 was released in 1999 to improve on HTTP 1.0 by allowing multiple requests and responses per connection, requiring Host headers, and adding caching headers. SPDY was introduced in 2009 by Google to address mobile network latency and content size issues by interleaving requests and responses. HTTP/2 was standardized in 2015, based on SPDY but with improved header compression and stronger security requirements. HTTP/2 uses a binary format instead of text, so HTTP 1.1 and HTTP/2 are not compatible, requiring infrastructure to support both.
The document discusses SPDY and HTTP/2, which aim to improve upon HTTP/1.1 by allowing multiple requests to be sent concurrently over a single TCP connection through header compression and multiplexing. It notes that SPDY is now supported by major browsers but not Internet Explorer, while HTTP/2 is still not widely adopted. The document also describes how protocols like NPN and ALPN enable negotiation of the application protocol during the TLS handshake, and how encryption protects traffic from interference by intermediaries.
- HTTP/2 aims to reduce HTTP response times by improving bandwidth efficiency and reducing the number of connections and messages needed. It allows requests to be multiplexed over a single connection.
- While it can't reduce latency at the packet level, it aims to reduce overall response times through features like header compression, server push, and priority hints.
- HTTP/2 is currently supported by major browsers and servers. Implementations so far show response time reductions of 5-60% compared to HTTP/1.1.
HTTP/2: why upgrading the web? - apidays Paris - Quentin Adam
This document discusses HTTP/2 and why it is an improvement over HTTP/1.1. Some key points covered include:
- HTTP/2 uses a binary format for faster processing and includes features like header compression, multiplexing of requests over a single connection, and push capabilities from servers.
- It was developed by the HTTPbis working group building off the SPDY protocol draft.
- HTTP/2 promises performance improvements by removing hacks used in HTTP/1.1 and enabling new possibilities. The author believes it will improve the user experience.
- Support is growing among browsers and companies like Google, Twitter, and Akamai. The author's company, Clever Cloud, is working to support HTTP/2.
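The binary format these summaries mention replaces HTTP/1.1's text lines with fixed-layout frames. A minimal sketch of parsing the 9-octet HTTP/2 frame header described in RFC 7540 (the constant and function names below are ours, for illustration):

```ruby
# Each HTTP/2 frame starts with a fixed 9-octet header:
# 24-bit payload length, 8-bit type, 8-bit flags, 1 reserved bit + 31-bit stream id.
FRAME_TYPES = { 0x0 => :data, 0x1 => :headers, 0x4 => :settings }.freeze

def parse_frame_header(bytes)
  raise ArgumentError, "need 9 octets" unless bytes.bytesize == 9
  hi, lo, type, flags, stream = bytes.unpack("CnCCN") # big-endian pieces
  {
    length:    (hi << 16) | lo,        # reassemble the 24-bit length
    type:      FRAME_TYPES.fetch(type, type),
    flags:     flags,
    stream_id: stream & 0x7fff_ffff    # clear the reserved bit
  }
end

# A HEADERS frame header: length 16, type 0x1, flags END_STREAM|END_HEADERS, stream 1.
parse_frame_header(["000010010500000001"].pack("H*"))
# => { length: 16, type: :headers, flags: 5, stream_id: 1 }
```

Because every frame declares its own length and stream id, a receiver can interleave frames from many streams on one connection, which is exactly what makes multiplexing possible.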
HTTP has a new specification (two, actually) and has received a major overhaul of some of its internals. While the protocol itself has not changed much, the transfer mechanism and other underlying systems have been completely reworked. Adrian will expand on what has and has not changed, how to make the best use of it, and how to transition to the new standard if you need to.
This document provides an overview of the Hypertext Transfer Protocol (HTTP) by explaining its key components and concepts. It describes the main parts of an HTTP request, including the request line, headers, and body. It also covers HTTP responses, status codes, and common methods like GET and POST. The document discusses how HTTP enables communication on the web and APIs through its stateless request/response model and standardized methods, headers, and status codes. It concludes by mentioning newer developments like HTTP/2 and SPDY that aim to improve web performance.
A technical description of http2, including the background of HTTP, what has been problematic with it, and how http2 and its features improve the web.
See the "http2 explained" document with the complete transcript and more: http://daniel.haxx.se/http2/
(Updated version to slides shown on April 13th, 2016)
This document discusses attacking HTTP/2 implementations and summarizes findings from fuzzing tests. It introduces HTTP/2 and describes challenges in implementing it securely due to its large attack surface. A fuzzer called http2fuzz is presented that was used to find vulnerabilities in Apache Traffic Server, Firefox, and NodeJS HTTP/2 implementations. Specific issues discovered include a memory overflow in Firefox from a malformed header frame and an integer underflow in Firefox from a malformed push promise frame. The conclusion is that HTTP/2 implementations must carefully validate all frame fields and values to avoid security issues.
Stuart Larsen: Attacking HTTP/2 Implementations, rev1 - PacSecJP
This document discusses attacking HTTP/2 implementations by fuzzing them with malformed requests. It introduces http2fuzz, an open source fuzzer for HTTP/2 written in Go. Using http2fuzz, the authors discovered crashes in the Apache Traffic Server and Firefox HTTP/2 implementations by sending unexpected or invalid frame types and payloads. Fuzzing uncovered vulnerabilities like out-of-bounds memory accesses, integer overflows, and denial of service issues. The document emphasizes that HTTP/2 implementations must carefully validate all frame fields and control states to avoid security issues.
HTTP/2 provides improvements over HTTP/1.1 such as multiplexed requests, header compression and priority hints from browsers that can reduce latency. While it shows benefits in testing, real-world impacts may be more modest depending on server and client configurations. Further optimizations are still needed and HTTP/2 opens up new possibilities around features like server pushing and progressive content delivery that could enhance performance.
HTTP request smuggling occurs when HTTP requests pass through multiple entities that parse requests differently, allowing an attacker to smuggle a request to one entity without the other being aware. There are two main causes - using HTTP connection modifications like Keep-Alive and Pipeline, which allow multiple requests over a single connection, and message body transfer encodings like chunked encoding, which enable streaming of unknown sizes. Attackers can exploit the different ways front-end and back-end servers handle parameters like Content-Length and Transfer-Encoding to smuggle requests.
HTTP request smuggling occurs when HTTP requests pass through multiple entities that parse requests differently, allowing an attacker to smuggle a request to one entity without the other being aware. There are two main causes - using HTTP connection modifications like Keep-Alive and Pipeline, which allow multiple requests over a single connection, and message body transfer encodings like chunked encoding, which send content in chunks. An attacker can craft requests that one entity parses one way based on headers like Content-Length, while the other entity parses differently based on Transfer-Encoding, allowing a request to be smuggled through.
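The Content-Length vs. Transfer-Encoding discrepancy described above can be illustrated with two toy body parsers (a simplified sketch, not an exploit tool; the function names are ours):

```ruby
# A front end that honors Content-Length consumes the full declared body.
def body_via_content_length(headers, raw_body)
  raw_body.byteslice(0, headers["Content-Length"].to_i)
end

# A back end that honors Transfer-Encoding: chunked stops at the 0-size chunk;
# whatever follows is treated as the START of the next request on the connection.
def split_via_chunked(raw_body)
  body = +""
  rest = raw_body
  loop do
    size_line, rest = rest.split("\r\n", 2)
    size = size_line.to_i(16)
    if size.zero?
      rest = rest.sub(/\A\r\n/, "") # consume the terminating CRLF
      break
    end
    body << rest.byteslice(0, size)
    rest = rest.byteslice(size + 2, rest.bytesize) # skip chunk data + its CRLF
  end
  [body, rest]
end

headers = { "Content-Length" => "13", "Transfer-Encoding" => "chunked" }
raw     = "0\r\n\r\nSMUGGLED" # 13 bytes

body_via_content_length(headers, raw) # => "0\r\n\r\nSMUGGLED"  (all 13 bytes consumed)
split_via_chunked(raw)                # => ["", "SMUGGLED"]      (8 bytes left over)
```

Those 8 leftover bytes are exactly what gets prepended to the next request on the shared connection, which is the smuggling primitive both summaries describe.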
The document provides an overview of Tomcat and JBoss, open-source servlet containers. It discusses the origins and frameworks of Tomcat and JBoss, how to get started with Tomcat configuration, deployment, security, and load balancing of Tomcat instances with Apache HTTP Server. Key configuration files for Tomcat are also summarized.
This document discusses smuggling TCP traffic through HTTP by leveraging HTTP upgrades. It proposes a new project called Purr that implements a TCP "smuggling" server in Ruby using Rack and a client-side proxy. Purr aims to allow anything TCP-based to be tunneled through HTTP, controlled by a browser extension using native messaging and accessible from web apps via a JS library. The incomplete implementation has a server and basic client-side proxy functionality, but more work is needed for distribution, libraries, HTTPS support, and testing.
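The HTTP/1.1 Upgrade mechanism that Purr leverages can be sketched with nothing but the Ruby standard library (no Rack): the client asks to switch protocols, the server answers 101, and from then on the socket is a raw byte pipe. The `purr` protocol token below is illustrative, not part of Purr's actual API:

```ruby
require "socket"

server = TCPServer.new("127.0.0.1", 0)
port   = server.addr[1]

srv = Thread.new do
  conn = server.accept
  conn.gets("\r\n\r\n") # read and discard the client's HTTP request head
  conn.write "HTTP/1.1 101 Switching Protocols\r\n" \
             "Connection: Upgrade\r\nUpgrade: purr\r\n\r\n"
  conn.write conn.readpartial(1024).upcase # no HTTP framing anymore: echo, upcased
  conn.close
end

sock = TCPSocket.new("127.0.0.1", port)
sock.write "GET /tunnel HTTP/1.1\r\nHost: localhost\r\n" \
           "Connection: Upgrade\r\nUpgrade: purr\r\n\r\n"
status = sock.gets("\r\n\r\n") # the 101 response head
sock.write "hello"
echoed = sock.read(5)          # raw tunneled bytes come back
srv.join
sock.close
server.close
```

After the 101, neither side frames anything as HTTP, which is what lets an arbitrary TCP protocol ride on an existing HTTP port.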
Java EE 8: What Servlet 4.0 and HTTP/2 mean to you - Alex Theedom
The goal of HTTP/2 is to increase the perceived performance of the web browsing experience. This is achieved by multiplexing over TCP and Server Push among other techniques. What implications does this have for developers? How does Servlet 4.0 embrace HTTP/2 and what support is there in JDK 9? We will see, with code examples, what the future of developing with HTTP/2 might look like.
HTTP/2 is here! And why the web needs it - IndicThreads
The document summarizes the evolution of HTTP from versions 0.9 to 2.0. It outlines the limitations of HTTP/1.1 for modern web pages with many dependent resources. HTTP/2 aims to address these limitations through features like multiplexing, header compression, server push and priority to reduce latency. It discusses implementations of HTTP/2 and the impact on developers. The document also briefly mentions upcoming protocols like QUIC that build on HTTP/2 to further optimize performance.
Java EE 8: What Servlet 4.0 and HTTP2 mean to you - Alex Theedom
Servlet 4.0 and HTTP/2 aim to improve web performance. HTTP/2 allows requests and responses to be multiplexed over a single connection, avoiding head-of-line blocking. It also includes header compression and server push. Servlet 4.0 leverages these features, allowing servers to proactively push resources to clients using push builders. Major servers and frameworks are adding support to take advantage of these new capabilities.
JDKIO: Java EE 8, what Servlet 4 and HTTP2 mean to you - Alex Theedom
The goal of HTTP/2 is to increase the perceived performance of the web browsing experience. This is achieved by multiplexing over TCP and Server Push among other techniques. What implications does this have for developers? How does Servlet 4.0 embrace HTTP/2? We will see, with code examples, what the future of developing with HTTP/2 might look like.
Java EE 8: What Servlet 4 and HTTP2 Mean - Alex Theedom
The goal of HTTP/2 is to increase the perceived performance of the web browsing experience. This is achieved by multiplexing over TCP and Server Push among other techniques. What implications does this have for developers? How does Servlet 4.0 embrace HTTP/2? We will see, with code examples, what the future of developing with HTTP/2 might look like.
After 16 years of solid use, the HTTP protocol finally got a major update this year. HTTP is the standard that defines how computers communicate over the Internet, and had not changed since 1999. The modern web, however, has become much more complex and HTTP/2 helps to address this brave new world.
Watch the webinar on demand: https://www.nginx.com/resources/webinars/whats-new-in-http2/
HTTP is one of the most widely used protocols in the world.
HTTP/1.1, the version still in use today, was specified 18 years ago, in 1999.
With the increasing complexity of web applications, the capabilities of HTTP/1.1 are no longer sufficient to meet growing demands on performance and responsiveness.
So, to meet these new requirements, HTTP must evolve. HTTP/2 is designed to make web applications faster, simpler, and more reliable.
In this report I will cover:
- the drawbacks of HTTP/1.1 and why we need a new version of HTTP;
- the advantages HTTP/2 offers in comparison with the previous version;
- how the new protocol affected the new Servlet 4.0 specification and how we can use it.
The web has dramatically evolved over the last 20+ years, yet HTTP - the workhorse of the Web - has not. Web developers have worked around HTTP's limitations, but:
--> Performance still falls short of full bandwidth utilization
--> Web design and maintenance are more complex
--> Resource consumption increases for client and server
--> Cacheability of resources suffers
HTTP/2 attempts to solve many of the shortcomings and inflexibilities of HTTP/1.1.
HTTP/2 for Developers: How It Changes Developer's Life?
by Svetlin Nakov (SoftUni) - http://www.nakov.com
jProfessionals Conference - Sofia, 22-Nov-2015
Key new features in HTTP/2
- Multiplexing: multiple streams over a single connection
- Header compression: reuse headers from previous requests
- Server push: multiple parallel responses for a single request
- Prioritization and flow control: resources have priorities
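The header-compression idea in the list above, reusing headers from previous requests, can be sketched as a toy delta encoder. Real HTTP/2 uses HPACK (indexed static/dynamic tables plus Huffman coding), and this toy cannot even express a removed header; it only shows the intuition, and the field values are examples:

```ruby
# Toy version of "reuse headers from previous requests": on a persistent
# connection, send only the fields that changed since the last request.
def delta_encode(previous, current)
  current.reject { |name, value| previous[name] == value }
end

def delta_decode(previous, delta)
  previous.merge(delta)
end

req1 = { ":method" => "GET", ":path" => "/", "user-agent" => "demo", "cookie" => "id=42" }
req2 = req1.merge(":path" => "/style.css")

delta = delta_encode(req1, req2)  # => { ":path" => "/style.css" }  (1 field on the wire, not 4)
delta_decode(req1, delta) == req2 # the peer reconstructs the full header set
```

Since cookies and user-agent strings rarely change between requests for the same page, most of each request's header bytes never need to be resent.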
A New Internet? Introduction to HTTP/2, QUIC and DOH - APNIC
This document discusses recent changes and improvements to core internet protocols like HTTP, DNS, and TCP. It introduces HTTP/2, which improves performance over HTTP/1.1 by allowing multiple requests per connection and header compression. It also discusses the development of QUIC, an experimental UDP-based protocol that aims to improve latency compared to TCP. Additionally, it covers DNS over HTTPS (DOH) which aims to increase privacy and censorship resistance by encrypting DNS queries over HTTPS. The document concludes that these protocols help accelerate the web by reducing round trips and blocking while securing more internet traffic.
HTTP/2 and QUIC protocols: Optimizing the Web stack for the HTTP/2 era - peychevi
The new HTTP/2 protocol, which is going to replace HTTP 1.1, was finalized in February. Together with it, QUIC is being developed rapidly. Discover why they are so important for the Web and how they will influence the way we optimize the Web stack for the HTTP/2 era.
HTTP/2 Comes to Java: Servlet 4.0 and what it means for the Java/Jakarta EE e... - Edward Burns
Servlet is easily the most important standard in server-side Java. The much-awaited HTTP/2 standard, fifteen years in the making, is now complete and promises to radically speed up the entire web through a series of fundamental protocol optimizations.
In this session we will take a detailed look at the changes in HTTP/2 and discuss how it may change the Java ecosystem including the foundational Servlet 4 specification included in Java/Jakarta EE 8.
HTTP is an application-layer protocol for transmitting hypermedia documents across the internet. It is a stateless protocol that can be used on any reliable transport layer. HTTP uses requests and responses between clients and servers, with common methods including GET, POST, PUT, DELETE. It supports features like caching, cookies, authentication, and more to enable the web as we know it.
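The request structure described above (request line, headers, body) can be pulled apart with a few lines of Ruby. This is a simplified sketch that ignores edge cases such as obsolete folded headers:

```ruby
# Split a raw HTTP/1.1 request into its three parts: request line, headers, body.
def parse_request(raw)
  head, body = raw.split("\r\n\r\n", 2)
  request_line, *header_lines = head.split("\r\n")
  method, target, version = request_line.split(" ", 3)
  headers = header_lines.to_h { |line| line.split(": ", 2) }
  { method: method, target: target, version: version, headers: headers, body: body }
end

req = parse_request("POST /login HTTP/1.1\r\nHost: example.com\r\nContent-Length: 9\r\n\r\nuser=ruby")
# req[:method]          => "POST"
# req[:headers]["Host"] => "example.com"
# req[:body]            => "user=ruby"
```

The same stateless request/response shape carries over to HTTP/2; only the wire encoding changes from text lines to binary frames.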
A new Internet? Intro to HTTP/2, QUIC, DoH and DNS over QUIC - APNIC
The document discusses recent changes to internet protocols that are aimed at improving performance and security. It describes the evolution from HTTP/1.1 to HTTP/2, which introduced features like multiplexing and header compression. It also covers the development of QUIC and DOH - UDP-based protocols that can enhance performance by avoiding head-of-line blocking and enable new use cases. QUIC is being deployed to carry HTTP and DNS traffic, while DOH standardizes encoding of DNS queries over HTTPS to prevent discrimination of DNS resolution. These protocol changes are driving more internet traffic to use HTTP, HTTPS and soon DNS over secure and optimized transports.
HTTP/1.1 is an obsolete and inefficient protocol for web communication. HTTP/2 provides major improvements such as header compression, multiplexing of requests and responses, and server push that can reduce webpage loading times by over 50%. HTTP/3 is expected in 2019 and will replace TCP with the UDP-based QUIC protocol, addressing limitations of TCP like head-of-line blocking and allowing for faster 0-RTT connections. Organizations should adopt HTTP/2 now and prepare for HTTP/3 to improve performance of their web applications and infrastructure.
This document provides an overview of HTTP caching and content distribution networks. It begins with a review of HTTP and persistent connections. It then discusses how caching works in HTTP, including cache validation via If-Modified-Since headers and ETags. It describes how web proxies and content delivery networks can be used for caching. Finally, it explains how content distribution networks like Akamai replicate and distribute content to edge servers close to users for improved performance.
Similar to In a HTTP/2 World - DeccanRubyConf 2017 (20)
Climate Impact of Software Testing at Nordic Testing DaysKari Kakkonen
My slides at Nordic Testing Days 6.6.2024
Climate impact / sustainability of software testing discussed on the talk. ICT and testing must carry their part of global responsibility to help with the climat warming. We can minimize the carbon footprint but we can also have a carbon handprint, a positive impact on the climate. Quality characteristics can be added with sustainability, and then measured continuously. Test environments can be used less, and in smaller scale and on demand. Test techniques can be used in optimizing or minimizing number of tests. Test automation can be used to speed up testing.
Full-RAG: A modern architecture for hyper-personalizationZilliz
Mike Del Balso, CEO & Co-Founder at Tecton, presents "Full RAG," a novel approach to AI recommendation systems, aiming to push beyond the limitations of traditional models through a deep integration of contextual insights and real-time data, leveraging the Retrieval-Augmented Generation architecture. This talk will outline Full RAG's potential to significantly enhance personalization, address engineering challenges such as data management and model training, and introduce data enrichment with reranking as a key solution. Attendees will gain crucial insights into the importance of hyperpersonalization in AI, the capabilities of Full RAG for advanced personalization, and strategies for managing complex data integrations for deploying cutting-edge AI solutions.
Removing Uninteresting Bytes in Software FuzzingAftab Hussain
Imagine a world where software fuzzing, the process of mutating bytes in test seeds to uncover hidden and erroneous program behaviors, becomes faster and more effective. A lot depends on the initial seeds, which can significantly dictate the trajectory of a fuzzing campaign, particularly in terms of how long it takes to uncover interesting behaviour in your code. We introduce DIAR, a technique designed to speedup fuzzing campaigns by pinpointing and eliminating those uninteresting bytes in the seeds. Picture this: instead of wasting valuable resources on meaningless mutations in large, bloated seeds, DIAR removes the unnecessary bytes, streamlining the entire process.
In this work, we equipped AFL, a popular fuzzer, with DIAR and examined two critical Linux libraries -- Libxml's xmllint, a tool for parsing xml documents, and Binutil's readelf, an essential debugging and security analysis command-line tool used to display detailed information about ELF (Executable and Linkable Format). Our preliminary results show that AFL+DIAR does not only discover new paths more quickly but also achieves higher coverage overall. This work thus showcases how starting with lean and optimized seeds can lead to faster, more comprehensive fuzzing campaigns -- and DIAR helps you find such seeds.
- These are slides of the talk given at IEEE International Conference on Software Testing Verification and Validation Workshop, ICSTW 2022.
Unlocking Productivity: Leveraging the Potential of Copilot in Microsoft 365, a presentation by Christoforos Vlachos, Senior Solutions Manager – Modern Workplace, Uni Systems
GraphSummit Singapore | The Future of Agility: Supercharging Digital Transfor...Neo4j
Leonard Jayamohan, Partner & Generative AI Lead, Deloitte
This keynote will reveal how Deloitte leverages Neo4j’s graph power for groundbreaking digital twin solutions, achieving a staggering 100x performance boost. Discover the essential role knowledge graphs play in successful generative AI implementations. Plus, get an exclusive look at an innovative Neo4j + Generative AI solution Deloitte is developing in-house.
Goodbye Windows 11: Make Way for Nitrux Linux 3.5.0!SOFTTECHHUB
As the digital landscape continually evolves, operating systems play a critical role in shaping user experiences and productivity. The launch of Nitrux Linux 3.5.0 marks a significant milestone, offering a robust alternative to traditional systems such as Windows 11. This article delves into the essence of Nitrux Linux 3.5.0, exploring its unique features, advantages, and how it stands as a compelling choice for both casual users and tech enthusiasts.
Threats to mobile devices are more prevalent and increasing in scope and complexity. Users of mobile devices desire to take full advantage of the features
available on those devices, but many of the features provide convenience and capability but sacrifice security. This best practices guide outlines steps the users can take to better protect personal devices and information.
Best 20 SEO Techniques To Improve Website Visibility In SERPPixlogix Infotech
Boost your website's visibility with proven SEO techniques! Our latest blog dives into essential strategies to enhance your online presence, increase traffic, and rank higher on search engines. From keyword optimization to quality content creation, learn how to make your site stand out in the crowded digital landscape. Discover actionable tips and expert insights to elevate your SEO game.
Essentials of Automations: The Art of Triggers and Actions in FMESafe Software
In this second installment of our Essentials of Automations webinar series, we’ll explore the landscape of triggers and actions, guiding you through the nuances of authoring and adapting workspaces for seamless automations. Gain an understanding of the full spectrum of triggers and actions available in FME, empowering you to enhance your workspaces for efficient automation.
We’ll kick things off by showcasing the most commonly used event-based triggers, introducing you to various automation workflows like manual triggers, schedules, directory watchers, and more. Plus, see how these elements play out in real scenarios.
Whether you’re tweaking your current setup or building from the ground up, this session will arm you with the tools and insights needed to transform your FME usage into a powerhouse of productivity. Join us to discover effective strategies that simplify complex processes, enhancing your productivity and transforming your data management practices with FME. Let’s turn complexity into clarity and make your workspaces work wonders!
Driving Business Innovation: Latest Generative AI Advancements & Success StorySafe Software
Are you ready to revolutionize how you handle data? Join us for a webinar where we’ll bring you up to speed with the latest advancements in Generative AI technology and discover how leveraging FME with tools from giants like Google Gemini, Amazon, and Microsoft OpenAI can supercharge your workflow efficiency.
During the hour, we’ll take you through:
Guest Speaker Segment with Hannah Barrington: Dive into the world of dynamic real estate marketing with Hannah, the Marketing Manager at Workspace Group. Hear firsthand how their team generates engaging descriptions for thousands of office units by integrating diverse data sources—from PDF floorplans to web pages—using FME transformers, like OpenAIVisionConnector and AnthropicVisionConnector. This use case will show you how GenAI can streamline content creation for marketing across the board.
Ollama Use Case: Learn how Scenario Specialist Dmitri Bagh has utilized Ollama within FME to input data, create custom models, and enhance security protocols. This segment will include demos to illustrate the full capabilities of FME in AI-driven processes.
Custom AI Models: Discover how to leverage FME to build personalized AI models using your data. Whether it’s populating a model with local data for added security or integrating public AI tools, find out how FME facilitates a versatile and secure approach to AI.
We’ll wrap up with a live Q&A session where you can engage with our experts on your specific use cases, and learn more about optimizing your data workflows with AI.
This webinar is ideal for professionals seeking to harness the power of AI within their data management systems while ensuring high levels of customization and security. Whether you're a novice or an expert, gain actionable insights and strategies to elevate your data processes. Join us to see how FME and AI can revolutionize how you work with data!
AI 101: An Introduction to the Basics and Impact of Artificial IntelligenceIndexBug
Imagine a world where machines not only perform tasks but also learn, adapt, and make decisions. This is the promise of Artificial Intelligence (AI), a technology that's not just enhancing our lives but revolutionizing entire industries.
“An Outlook of the Ongoing and Future Relationship between Blockchain Technologies and Process-aware Information Systems.” Invited talk at the joint workshop on Blockchain for Information Systems (BC4IS) and Blockchain for Trusted Data Sharing (B4TDS), co-located with with the 36th International Conference on Advanced Information Systems Engineering (CAiSE), 3 June 2024, Limassol, Cyprus.
How to Get CNIC Information System with Paksim Ga.pptxdanishmna97
Pakdata Cf is a groundbreaking system designed to streamline and facilitate access to CNIC information. This innovative platform leverages advanced technology to provide users with efficient and secure access to their CNIC details.
HCL Notes and Domino License Cost Reduction in the World of DLAUpanagenda
Webinar Recording: https://www.panagenda.com/webinars/hcl-notes-and-domino-license-cost-reduction-in-the-world-of-dlau/
The introduction of DLAU and the CCB & CCX licensing model caused quite a stir in the HCL community. As a Notes and Domino customer, you may have faced challenges with unexpected user counts and license costs. You probably have questions on how this new licensing approach works and how to benefit from it. Most importantly, you likely have budget constraints and want to save money where possible. Don’t worry, we can help with all of this!
We’ll show you how to fix common misconfigurations that cause higher-than-expected user counts, and how to identify accounts which you can deactivate to save money. There are also frequent patterns that can cause unnecessary cost, like using a person document instead of a mail-in for shared mailboxes. We’ll provide examples and solutions for those as well. And naturally we’ll explain the new licensing model.
Join HCL Ambassador Marc Thomas in this webinar with a special guest appearance from Franz Walder. It will give you the tools and know-how to stay on top of what is going on with Domino licensing. You will be able lower your cost through an optimized configuration and keep it low going forward.
These topics will be covered
- Reducing license cost by finding and fixing misconfigurations and superfluous accounts
- How do CCB and CCX licenses really work?
- Understanding the DLAU tool and how to best utilize it
- Tips for common problem areas, like team mailboxes, functional/test users, etc
- Practical examples and best practices to implement right away
Let's Integrate MuleSoft RPA, COMPOSER, APM with AWS IDP along with Slackshyamraj55
Discover the seamless integration of RPA (Robotic Process Automation), COMPOSER, and APM with AWS IDP enhanced with Slack notifications. Explore how these technologies converge to streamline workflows, optimize performance, and ensure secure access, all while leveraging the power of AWS IDP and real-time communication via Slack notifications.
Let's Integrate MuleSoft RPA, COMPOSER, APM with AWS IDP along with Slack
In a HTTP/2 World - DeccanRubyConf 2017
1. IN A HTTP/2 WORLD
DOUGLAS VAZ, EQUAL EXPERTS
DECCAN RUBYCONF 2017
2. 1. CURRENT STATE
2. PROBLEMS WITH HTTP/1.x
3. HTTP/2 (H2) FEATURES
4. RETHINKING CURRENT PRACTICES
5. ADOPTION, AND THE STATE OF RUBY
3. HTTP - A brief history
1989 - 1996 - HTTP/1.0
1997 - RFC 2068 (HTTP/1.1 first draft)
1999 - RFC 2616 (standard for HTTP/1.1)
2014 - RFC 7230 (6 part spec to revise HTTP/1.1)
1. Current State
7. Problem 1: HTTP/1.x only allowed sequential request/response
HTTP/1.x wasn’t designed for async requests.
Pipelining allowed async requests, but responses had to be consumed in order. A slow
response would block all later responses and reduce overall performance, i.e. head-of-line blocking
2. Problems with HTTP/1.x
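The head-of-line blocking above can be sketched with a toy timing model (all names and numbers below are hypothetical, not measurements from the talk):

```ruby
# Illustrative timing model in milliseconds: with HTTP/1.1 pipelining,
# responses must be delivered in request order, so one slow response
# delays everything queued behind it.
service_ms = { "a.css" => 100, "slow.html" => 2000, "b.js" => 100 }

# Pipelined: each response is delivered only after every earlier one.
elapsed = 0
pipelined = service_ms.each_with_object({}) do |(name, t), h|
  elapsed += t
  h[name] = elapsed
end

# Multiplexed (HTTP/2-style): each response completes independently,
# after its own service time.
multiplexed = service_ms

puts pipelined["b.js"]    # => 2200 (blocked behind slow.html)
puts multiplexed["b.js"]  # => 100
```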
9. Since only one request per connection is serviced at a given time, increasing bandwidth
doesn’t reduce latency.
Browsers open multiple connections for parallel requests, but are restricted to a
max number per domain
Problem 2: More bandwidth doesn’t mean lower latency
http://httparchive.org/
2. Problems with HTTP/1.x
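A back-of-envelope sketch of why the per-domain connection cap puts a latency floor on page loads that extra bandwidth cannot remove (asset count and RTT are hypothetical):

```ruby
# With at most `per_domain` connections and one request in flight per
# connection, requests go out in waves of `per_domain` at a time.
assets     = 100  # resources on the page
per_domain = 6    # e.g. Chrome's connection limit per host
rtt_ms     = 50   # hypothetical round-trip time

waves = (assets.to_f / per_domain).ceil
puts waves * rtt_ms  # => 850 -- a latency floor more bandwidth can't shrink
```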
12. Workaround:
Respond immediately with an intermediate state (e.g. a static view) and serve the remaining content when it’s ready
2. Problems with HTTP/1.x
Session data stored in cookies is transferred as uncompressed headers and can add several kilobytes per request
Problem 4: Protocol overhead due to headers
2. Problems with HTTP/1.x
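A small sketch of the header-overhead problem: because HTTP is stateless, the same uncompressed header block rides along on every request. All header values below are made up for illustration:

```ruby
# Hypothetical request headers; HTTP/1.x repeats them verbatim,
# uncompressed, on every single request.
headers = {
  "Host"       => "shop.example.com",
  "User-Agent" => "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36",
  "Accept"     => "text/html,application/xhtml+xml",
  "Cookie"     => "session=" + "x" * 400,  # session state in a cookie
}

# "Name: value\r\n" per field: name + value + 4 framing bytes.
per_request = headers.sum { |k, v| k.bytesize + v.bytesize + 4 }
puts per_request        # bytes of header overhead on one request
puts per_request * 100  # ...repeated across 100 requests for page assets
```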
17. H2 Key Features
• Parallel request streams on a single connection
• Binary protocol
• Server push
• Header compression
• Stream prioritisation
3. H2 Features
19. Parallel requests with HTTP/1.1
- Open multiple connections (ex. 6 in Chrome)
- High request queue time
3. H2 Features
20. Parallel requests with HTTP/2
“Designed to reduce perceived latency”
- Request multiple files in parallel on same connection
- All requests are served immediately
Based on Go’s HTTP/2 Demo
3. H2 Features
29. Available Tools
5. Adoption, Implementation, Ruby
• Chrome Dev Tools (inspect sessions and streams)
• Wireshark (inspect frames and compressed headers)
• nghttp2 (C library plus helpful binaries)
• curl (needs to be built from source)
31. Rack is not HTTP/2 compatible!
• Rack is designed for request/response cycles
• Communication with backend servers is not bi-directional or message-oriented
5. Adoption, Implementation, Ruby
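The Rack limitation is visible in its entire contract, which is a single `call(env)` returning a status/headers/body triple (the app below is a hypothetical minimal example):

```ruby
# Rack's whole contract: one request in, one response out.
app = lambda do |env|
  [200, { "Content-Type" => "text/plain" }, ["hello"]]
end

status, headers, body = app.call({ "PATH_INFO" => "/" })
# Nothing in [status, headers, body] gives the server a channel to
# initiate a second, server-pushed message of its own -- the API is
# strictly request/response, which is why H2 streams don't fit.
puts status  # => 200
```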
32. Option 1: igrigorik/http-2
gem install http-2
Limitations:
1. Not Rack compatible, hence can’t be used with Rails
2. Does not negotiate a fallback to HTTP/1.x
Pure Ruby implementation of HTTP/2
5. Adoption, Implementation, Ruby
33. Option 2: H2 Enabled Proxy + Ruby backend
Limitations:
1. Multiplexing won’t work
2. Server push requires additional configuration
Proxy client requests via H20, nghttpx, Apache or nginx
Enables header compression!
5. Adoption, Implementation, Ruby
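A minimal nginx sketch of Option 2, assuming nginx 1.9.5+ built with the HTTP/2 module; the certificate paths and backend port are placeholders:

```nginx
server {
    listen 443 ssl http2;                  # H2 on the browser-facing leg
    ssl_certificate     /etc/ssl/example.crt;
    ssl_certificate_key /etc/ssl/example.key;

    location / {
        proxy_pass http://127.0.0.1:3000;  # Ruby backend speaks HTTP/1.1
    }
}
```

Browsers get header compression and multiplexing on the H2 leg, while the proxy-to-backend hop stays plain HTTP/1.1, which is exactly the limitation listed above.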
34. Option 3: Server push via a CDN
Hinted push: Use Link headers in the response
Link: </css/styles.css>; rel=preload; as=style
5. Adoption, Implementation, Ruby
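The hinted-push pattern can be sketched as a small Rack middleware that attaches the `Link` header, which an H2-aware CDN or proxy in front can turn into a server push. The middleware and asset path below are hypothetical:

```ruby
# Hypothetical middleware: adds a preload hint to every response.
class PreloadHint
  def initialize(app, asset)
    @app, @asset = app, asset
  end

  def call(env)
    status, headers, body = @app.call(env)
    headers["Link"] = "<#{@asset}>; rel=preload; as=style"
    [status, headers, body]
  end
end

app = PreloadHint.new(->(env) { [200, {}, ["<html>"]] }, "/css/styles.css")
_, headers, _ = app.call({})
puts headers["Link"]  # => </css/styles.css>; rel=preload; as=style
```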
35. Option 4: Server push via an edge proxy
Manual server push by configuring the edge proxy
5. Adoption, Implementation, Ruby
36. Caveats
1. Server might push files that are already cached
2. Server might push files not present on page
3. Stream prioritisation and cancelling might be affected by OS level TCP buffers
5. Adoption, Implementation, Ruby
38. References
• Risks of pipelining: https://www.chromium.org/developers/design-documents/network-stack/http-pipelining
• Design and technical goals: https://hpbn.co/http2/#design-and-technical-goals
• Study on bandwidth vs latency: https://docs.google.com/a/chromium.org/viewer?a=v&pid=sites&srcid=Y2hyb21pdW0ub3JnfGRldnxneDoxMzcyOWI1N2I4YzI3NzE2
• Starting point for HTTP/2: https://http2.github.io/
• HTTP/2 for Ruby: https://www.speedshop.co/2016/01/07/what-http2-means-for-ruby-developers.html
Editor's Notes
First used in 1989, along with other protocols such as Gopher. By 1995, HTTP had become the de facto application layer protocol.
This led to standardisation by the Internet Engineering Task Force, and HTTP/1.1 was published in ‘97
In 2014, HTTP/1.1 was further clarified with a 6 part draft, with regard to the use of certain headers (Content-*, Referer, Location)
While HTTP continued to remain the same, the internet was not the same playing field. The number and size of assets on a web page increased nearly exponentially. Almost a 10x increase from 2000 to 2016. It was clear that the protocol imposed restrictions that could no longer be dealt with at the application layer.
How HTTP/1.x behaves.
Starts with a 3 way TCP handshake to establish a connection. Browser requests an HTML page, parses it, and then requests for images, Javascript files, CSS files and other assets (one at a time over a connection)
Typical HTTP/1.1 request and response format.
Method, followed by path followed by protocol, followed by mandatory and optional headers.
Response is similar. Starts with a status line, followed by a newline, headers and then data
NOTE: TEXT BASED and human readable which makes it easy to construct by hand or debug
HTTP/1.0 needed one connection per asset, HTTP/1.1 introduced keep-alive, but requests were synchronous.
It also introduced pipelining, which enabled sending multiple requests without waiting for a response, but responses needed to be consumed in order. If the first response is slow, all later responses are blocked. Pipelining was also badly implemented by many proxies.
(The option to enable pipelining has been removed from Chrome, as there are known crashing bugs and known front-of-queue blocking issues. There are also a large number of servers and middleboxes that behave badly and inconsistently when pipelining is enabled. )
Data bandwidth today is much greater than in early-2000s. Yet we couldn't take advantage of this because of the protocol limitations.
Both transfer sizes and number of assets per page are growing (ex. CNN has 157 resources)
Connection limit: could exhaust server and client limit
Common practices to get around browser limits to make assets download faster, bundling assets together (JS, CSS, images) to avoid multiple round trips
The network connection is blocked when the server is processing a request (browsers can’t request anymore on the connection, servers can’t switch to sending alternate files in the meantime)
Return a static view, which can then fetch the remaining content when it’s ready (how most SPAs work)
Since, HTTP is stateless, every request needs to convey the context. This is useful but can also be a huge overhead due to duplication of data on the wire
However, the fact that all HTTP headers are transferred in plain text (without any compression), can lead to high overhead costs for each and every request, which can be a serious bottleneck for some applications and devices with limited resources.
RFC 2616 (HTTP 1.1) does not define any limit on the size of the HTTP headers.
in practice, many servers and proxies will try to enforce either an 8 KB or a 16 KB limit.
In May 2015, HTTP/2 became a standard. A lot of vendors who had already supported SPDY due to Google’s influence quickly migrated to this spec
Metaphor for HTTP/2’s design. Instead of placing an order one item at a time, you simply request for the entire meal upfront. You can also specify priorities such as wanting wine before the appetiser.
A single TCP connection can have several logical connections, each transporting its own data. This is done by splitting packets into typed, interleaved frames, each having a unique identifier to indicate which stream it belongs to.
H2 is a binary protocol, so not human readable and harder to debug. But much more efficient for a machine to process (think IoT with resource constraints)
Server push is a feature where the server can send out data without an explicit request. This also indicated that streams are bidirectional in nature
Headers are now sent in a frame at the start of a stream and the context is maintained for all frames in that stream. Additionally, the headers are compressed with a new algorithm called HPACK
Clients can assign a priority weight to streams to fetch the more important files first (ex. CSS before images and fonts). Server doesn’t have to comply or can decide a default priority for different request types
HTTP connection with 3 active streams. The colours indicate grouping of frames into a stream. Open a connection with HEADERS frame, send data with DATA frame. Streams can also be cancelled using a RST_STREAM header.
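That framing model can be sketched as a toy demultiplexer (the frame struct and payloads are illustrative, not the real wire format):

```ruby
# Toy model of H2 framing: typed frames tagged with a stream id are
# interleaved on one connection, then demultiplexed back into streams.
Frame = Struct.new(:stream_id, :type, :payload)

wire = [
  Frame.new(1, :HEADERS, ":path: /index.html"),
  Frame.new(3, :HEADERS, ":path: /app.css"),
  Frame.new(1, :DATA,    "<html>..."),
  Frame.new(3, :DATA,    "body { ... }"),
]

streams = wire.group_by(&:stream_id)
puts streams[3].map(&:type).inspect  # => [:HEADERS, :DATA]
```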
How much of a difference does this make?
A demonstration first constructed by the Golang team: the logo is tiled into 256 image tags on a web page. On page load, the browser requests all 256 images in parallel. However, only one asset can be downloaded per connection, and Chrome opens up 6 connections. That’s only 6 x 2 kb (12 kbps) of a 2 Mbps connection.
In contrast let’s see how HTTP/2 performs.
All 256 tiles served at once.
Less than a second on a 2 Mbps connection
Header compression to reduce duplication with every request. HPACK is based on Huffman encoding with both a static and dynamic table. Header-value pairs sorted in descending order of frequency. The most frequently occurring header-value pair was given the smallest bit value.
Apart from this static table, a connection can also negotiate a dynamic table which is used for lookups specific to that connection
When I tested this on popular H2 enabled websites, the results were very positive. google.com shows an amazing 87% reduction in header size, which translates to lower bandwidth requirements.
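The table-lookup idea behind HPACK can be sketched in a few lines. This is only an illustration of the concept, not the real algorithm: real HPACK uses the RFC 7541 static table, a negotiated dynamic table, and Huffman coding, and the table entries below are made up:

```ruby
# Toy illustration of the HPACK idea: header fields already in a shared
# table are replaced by a small index on the wire; everything else is
# sent as a literal (a real encoder would also add literals to the
# dynamic table for later requests).
table = [[":method", "GET"], [":scheme", "https"], [":path", "/"]]

def encode(header, table)
  idx = table.index(header)
  idx ? [:indexed, idx] : [:literal, header]
end

raw     = [[":method", "GET"], [":path", "/"], ["x-custom", "abc"]]
encoded = raw.map { |h| encode(h, table) }
puts encoded.inspect
# => [[:indexed, 0], [:indexed, 2], [:literal, ["x-custom", "abc"]]]
```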
So should we continue with our existing best practices now that HTTP/2 is around
Domain sharding is a practice of distributing assets among a set of domains to increase the number of connections to download in parallel. This is no longer needed cause parallel downloads can now be done with a single connection. Domain sharding is expensive cause of the DNS lookup plus handshake for each connection.
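The cost of sharding can be sketched with back-of-envelope arithmetic (the RTT, shard count, and one-RTT-per-handshake assumption are all hypothetical simplifications):

```ruby
# Each extra asset domain pays its own DNS lookup + TCP handshake +
# TLS handshake before the first byte of content; assume ~1 RTT each.
rtt_ms     = 50  # hypothetical round-trip time
shards     = 4   # asset domains used for sharding
setup_rtts = 3   # DNS + TCP + TLS, roughly one round trip apiece

puts shards * setup_rtts * rtt_ms  # => 600 ms of setup across shards
puts setup_rtts * rtt_ms           # => 150 ms once, on a single H2 connection
```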
Not necessary as a web optimisation. Separation makes cache consistency easier to maintain: stable vendor libs and frequently changing business code can be kept apart so that every small change does not invalidate the cache. Some bundling might still be a good idea for easy maintainability.
Common web servers, application server, CDNs Akamai and Cloudflare
Chrome allows filtering on all active HTTP/2 sessions. Useful for viewing stream data
Wireshark, network protocol analyser, has some support like decoding HPACK (but decoding streams is not straightforward)
nghttp - client to connect to H2 servers via HTTP Upgrade or ALPN
nghttpx - multi-threaded reverse proxy for HTTP/2, SPDY and HTTP/1.1 (mruby support available)
Popular command line tool for fetching data from a server.
Most OS distributions ship builds that aren’t compiled with H2 capabilities (run with -V and look for HTTP2 under features)
Turns out that Ruby has VERY limited support for HTTP/2. One of the reasons is that almost all Ruby web frameworks depend on Rack at the CGI level to talk to the web server
The problem is, Rack is not HTTP/2 compatible. The architecture makes assumptions and hence cannot easily accommodate the new message oriented, bi-directional stream communication. There are still ways to use HTTP/2 with Ruby
(don’t use in production, but good POC)
Preload is a web optimisation technique to indicate assets which need an early fetch. Usually used via the <link> HTML tag. Some servers look for the LINK header in the application’s response and sends a PUSH_PROMISE frame followed by a DATA frame with actual content.
Server can reject the push promise with a RST_STREAM frame
Note: the server can only hint at additional resources belonging to the same origin.
Similar to the CDN option, this relies on a configurable external proxy server to handle HTTP/2 requests. Advantage: Assets can be served via a server push while request is forwarded to application server for processing
Cache digests is a proposed implementation to use a Bloom filter to inform the server of cached files
Usually when CDN config and rendered page are out of sync. Managed by better engineering practices
Not usually a worry but good to keep in mind