The latest version of the TLS protocol, TLS 1.3, was just released in August 2018. TLS 1.3 is faster and more secure than TLS 1.2. In this webinar, we cover what’s new in TLS 1.3 and how to use it with NGINX, plus other new features in NGINX Open Source and NGINX Plus.
Join this webinar to learn:
- What’s new in TLS 1.3 and why it's faster and more secure than TLS 1.2
- How to use TLS 1.3 with NGINX Plus and NGINX Open Source
- About two-stage rate limiting, simplified OpenID Connect, and 2x faster NGINX and ModSecurity WAF performance
- More with a live demo of TLS 1.3 in action
Watch On-demand: https://www.nginx.com/resources/webinars/tls-1-3-new-features-nginx-plus-r17-nginx-open-source/
6. TLS 1.3 Overview
• Published as RFC 8446 in August 2018
• Ten years since TLS 1.2. Numerous vulnerabilities:
◦ FREAK
◦ Heartbleed
◦ Poodle
◦ ROBOT
◦ SLOTH
• TLS 1.3 is faster and more secure than TLS 1.2
• Not supported by F5 BIG-IP
7. FREAK
• With FREAK, a man-in-the-middle could downgrade the cipher to something weaker
• TLS 1.3 removes all the weaker Export ciphers and signs the entire key exchange
12. TLS 1.3 Support
• Requires OpenSSL 1.1.1
• Supported OS: Ubuntu 18.10, FreeBSD 12.0, Alpine 3.9
◦ Debian 10 will have OpenSSL 1.1.1 when released later this year
• Supported browsers: Chrome 70, Firefox 63
◦ Not supported by Safari yet
◦ Latest status info: caniuse.com/#feat=tls1-3
13. TLS 1.3 NGINX Config
• We recommend including TLSv1.2 because not all browsers support TLS 1.3
• NGINX uses TLS 1.3 if the client supports it, and TLS 1.2 if not
• ssl_early_data enables 0-RTT mode
• Use $ssl_early_data to have the backend server drop potential replay packets
server {
    listen 443 ssl;
    ssl_certificate     /etc/ssl/my_site_cert.pem;
    ssl_certificate_key /etc/ssl/my_site_key.pem;
    ssl_protocols       TLSv1.2 TLSv1.3;
    ssl_early_data      on;  # Enable 0-RTT (TLS 1.3)

    location / {
        proxy_pass http://my_backend;
        proxy_set_header Early-Data $ssl_early_data;
    }
}
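The negotiation behavior above can be mirrored on the client side. For instance, with Python's `ssl` module you can offer the same TLS 1.2–1.3 range and let the handshake select the highest mutual version (a minimal sketch; TLS 1.3 support requires that Python is linked against OpenSSL 1.1.1 or later):

```python
import ssl

# Offer TLS 1.2 through TLS 1.3, mirroring "ssl_protocols TLSv1.2 TLSv1.3;"
# in the server config above; the handshake picks the highest mutual version.
context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_2
context.maximum_version = ssl.TLSVersion.TLSv1_3

# ssl.HAS_TLSv1_3 reports whether the underlying OpenSSL build supports TLS 1.3
print("TLS 1.3 available:", ssl.HAS_TLSv1_3)
```

A client built from this context connecting to the server configured above will negotiate TLS 1.3 when both sides support it, and fall back to TLS 1.2 otherwise.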
15. Rate Limiting in NGINX
• Follows the leaky bucket algorithm
• If the rate at which water is poured in exceeds the rate at which it leaks, the bucket overflows
• What to do with excessive requests?
• More info: nginx.com/blog/rate-limiting-nginx/
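The leaky-bucket behavior described above can be sketched in a few lines of Python (illustrative only; this is not NGINX's implementation, and the rate and capacity values are made up):

```python
import time

class LeakyBucket:
    """Leaky-bucket sketch: the bucket drains at a fixed rate, and a
    request that would overflow the bucket is rejected."""

    def __init__(self, rate_per_sec, capacity):
        self.rate = rate_per_sec   # leak rate (requests per second)
        self.capacity = capacity   # bucket size (burst tolerance)
        self.level = 0.0           # current "water" level
        self.last = None           # timestamp of the previous request

    def allow(self, now=None):
        now = time.monotonic() if now is None else now
        if self.last is not None:
            # Drain the bucket for the time elapsed since the last request
            self.level = max(0.0, self.level - (now - self.last) * self.rate)
        self.last = now
        if self.level + 1 <= self.capacity:
            self.level += 1
            return True
        return False  # bucket would overflow: reject the request
```

For example, `LeakyBucket(rate_per_sec=5, capacity=2)` admits two back-to-back requests, rejects a third immediate one, and admits again once enough time has passed for the bucket to drain.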
16. Rate Limiting in NGINX
• Choices:
  ◦ Drop them immediately
  ◦ Queue and service them later
  ◦ Queue but service immediately
  ◦ Queue, service immediately up to a point, then delay and service later
19. NGINX Plus JWT Authentication
Support timeline:
• R10 -- Initial support for native JWT authentication added
• R12 -- Support for custom fields
• R14 -- Support for nested claims
• R15 -- Support for OpenID Connect SSO. Link to Okta, OneLogin, PingIdentity, etc.
• R17 -- Support for fetching JWKs from a URL
JWT Authentication and OpenID Connect SSO are exclusive to NGINX Plus
20. NGINX Plus JWT Config
• auth_jwt_key_request initiates a subrequest to fetch the JWKs from the server
• Responses are cached
• You can use NGINX cache tuning tricks such as proxy_cache_use_stale, overriding expiration headers, etc.
# Create directory to cache keys from IdP
proxy_cache_path /var/cache/nginx/jwk levels=1
                 keys_zone=jwk:1m max_size=10m;

server {
    listen 80;  # Use SSL/TLS in production

    location / {
        auth_jwt             "closed site";
        auth_jwt_key_request /_jwks_uri;
        proxy_pass           http://my_backend;
    }

    location = /_jwks_uri {
        internal;
        proxy_cache jwk;  # Cache responses
        proxy_pass  https://idp.example.com/oauth2/keys;
    }
}
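To see how the fetched keys are used: a JWT's protected header carries a `kid` (key ID) that selects the matching key from the JWK set. A minimal sketch of that lookup follows (the token and key set are made-up examples, and no signature verification is performed):

```python
import base64
import json

def b64url_decode(part):
    # JWT segments are base64url-encoded without padding
    return base64.urlsafe_b64decode(part + "=" * (-len(part) % 4))

def find_jwk(token, jwk_set):
    """Pick the JWK whose 'kid' matches the token's header (no verification here)."""
    header = json.loads(b64url_decode(token.split(".")[0]))
    for key in jwk_set["keys"]:
        if key.get("kid") == header.get("kid"):
            return key
    return None

# Made-up example: a token header naming "key-1", and a two-key JWK set
header = base64.urlsafe_b64encode(
    json.dumps({"alg": "RS256", "kid": "key-1"}).encode()
).rstrip(b"=").decode()
token = header + ".payload.signature"
jwks = {"keys": [{"kid": "key-0"}, {"kid": "key-1", "kty": "RSA"}]}

print(find_jwk(token, jwks))  # the key with kid "key-1"
```

In NGINX Plus this matching and the signature check happen internally; the sketch only shows why caching the JWK response (as in the config above) saves a round trip to the IdP on every request.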
24. NGINX Ingress Controller for Kubernetes 1.4.0
New features:
• TCP/UDP load balancing
• Extended Prometheus support
• Easy development of custom annotations
• Random with Two Choices load balancing algorithm
Enterprise-grade application delivery for Kubernetes
25. Additional features
• TCP Keepalives to Upstreams -- New proxy_socket_keepalive directive toggles TCP keepalives between NGINX and the proxied server.
• Upstream HTTP Keepalive Timeout and Request Cap -- New keepalive_timeout directive sets the maximum idle time for a keepalive connection between NGINX and the proxied server.
• Finite Upstream UDP Session Size -- New proxy_requests directive sets the maximum number of UDP packets sent from NGINX to the proxied server before a new UDP “session” is created.
• Enhancement to Cluster State Sharing -- When using state sharing in a cluster, you can now do server name verification, using SNI to pass the server name when connecting to cluster nodes. (NGINX Plus exclusive)
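As an illustrative sketch, the UDP session and TCP keepalive directives above might appear in a stream block like this (addresses and ports are placeholders):

```nginx
stream {
    upstream dns_backend { server 192.168.1.10:53; }
    upstream tcp_backend { server 192.168.1.20:12345; }

    server {
        listen 53 udp;
        proxy_pass dns_backend;
        proxy_requests 1;            # start a new UDP "session" after 1 packet
    }

    server {
        listen 12345;
        proxy_pass tcp_backend;
        proxy_socket_keepalive on;   # TCP keepalives toward the upstream
    }
}
```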
28. Summary
• New support for TLS 1.3 improves security and performance
• TLS 1.3 currently supported on Ubuntu 18.10, FreeBSD 12.0, and Alpine 3.9
• New two-stage rate limiting allows excess requests to be serviced with no delay up to a point, then delayed, then dropped
• NGINX Plus can now fetch JSON Web Keys from the IdP, making for easier OpenID Connect configuration
• NGINX WAF with ModSecurity 3.0 delivers 2x faster performance
• NGINX Ingress Controller for Kubernetes 1.4.0 adds TCP/UDP load balancing, extended Prometheus support, and additional new features
29. Q & A
Try NGINX Plus and NGINX WAF free for 30 days: nginx.com/free-trial-request
Editor's Notes
NGINX Plus gives you all the tools you need to deliver your application reliably.
Web Server
NGINX is a fully featured web server that can directly serve static content. NGINX Plus can scale to handle hundreds of thousands of clients simultaneously, and serve hundreds of thousands of content resources per second.
Application Gateway
NGINX handles all HTTP traffic, and forwards requests in a smooth, controlled manner to PHP, Ruby, Java, and other application types, using FastCGI, uWSGI, and Linux sockets.
Reverse Proxy
NGINX is a reverse proxy that you can put in front of your applications. NGINX can cache both static and dynamic content to improve overall performance, as well as load balance traffic enabling you to scale-out.
TLS 1.3 supports session resumption, which makes connection establishment faster by eliminating the overhead of repeating the TLS handshake when a client returns to a previously visited site. This is also called 0‑RTT (zero round trip time) resumption, because no handshake messages have to go back and forth between client and server for the resumed session. Session resumption is implemented by creating a shared secret during the original session and storing it in a session ticket. When the client returns, it presents the session ticket along with its request, which is encrypted with the shared secret that’s in the ticket.
Using 0‑RTT opens up the risk of a replay. In this scenario, the attacker re‑sends a packet that results in a state change, such as a request to transfer money between two bank accounts.
To protect against replay attacks, the only HTTP request type that clients should send in the 0‑RTT data (the data encrypted with the shared secret) is GET.
You might not want to enable 0‑RTT resumption when deploying NGINX Plus as an API gateway, however, because for API traffic resumed TLS sessions are more likely to contain non‑idempotent request types.
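On the backend, the Early-Data header set by the earlier proxy configuration can be used to reject potentially replayed requests. One common pattern (a sketch; the listen port is a placeholder) returns 425 Too Early (RFC 8470), telling the client to retry once the handshake has completed:

```nginx
server {
    listen 8080;  # backend receiving proxied requests

    location / {
        # Reject requests that arrived as 0-RTT early data
        # until the TLS handshake has completed.
        if ($http_early_data) {
            return 425;  # 425 Too Early (RFC 8470)
        }
        # ... normal request handling ...
    }
}
```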
The configuration allows bursts of up to 12 requests, the first 8 of which are processed without delay. A delay is added after 8 excessive requests to enforce the 5 r/s limit. After 12 excessive requests, any further requests are rejected.
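Those numbers correspond to a two-stage limit_req configuration along these lines (the zone name and backend are placeholders; the rate, burst, and delay values are the ones described above):

```nginx
limit_req_zone $binary_remote_addr zone=ip:10m rate=5r/s;

server {
    listen 80;

    location / {
        # Allow bursts of up to 12 requests: serve the first 8 excessive
        # requests without delay, delay the next 4 to enforce 5 r/s,
        # and reject anything beyond the burst of 12.
        limit_req zone=ip burst=12 delay=8;
        proxy_pass http://my_backend;
    }
}
```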
Even when you understand security, it is difficult to create secure applications, especially when working under the pressures so common in today’s enterprise.
The NGINX Web Application Firewall (WAF) protects applications against sophisticated Layer 7 attacks that might otherwise lead to systems being taken over by attackers, loss of sensitive data, and downtime. The NGINX WAF is based on the widely used ModSecurity open source software.
Support for TCP and UDP load balancing – Enables efficiencies by using the same Ingress routing tier for all protocols, not just HTTP
Extended Prometheus support – Introduces support for stub_status metrics with NGINX Open Source, and extended TCP and UDP metrics with NGINX Plus
Easy development of custom Annotations – Makes it simpler to configure more NGINX load‑balancing features for your applications
Support for a “power of two choices” load‑balancing algorithm – Enables the new Random with Two Choices algorithm, which is very well suited for distributed environments with multiple load balancers, as the default algorithm
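The "power of two choices" idea is simple: sample two backends at random and send the request to the less loaded of the pair. A minimal sketch (illustrative only; the server names and connection counts are made up):

```python
import random

def two_choices(servers, load, rng=random):
    """Random with Two Choices (sketch): sample two servers at random
    and route to the one with fewer active connections."""
    a, b = rng.sample(servers, 2)
    return a if load[a] <= load[b] else b

# Illustrative use with made-up active-connection counts
load = {"s1": 10, "s2": 3, "s3": 7}
choice = two_choices(list(load), load)
```

Because each decision needs only two load lookups rather than a global view, the algorithm behaves well when several load balancers are routing to the same backends concurrently, which is why it suits distributed environments.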