View full webinar on demand at http://nginx.com/resources/webinars/installing-tuning-nginx/
NGINX Installation and Tuning
1. NGINX Installation and Tuning
Introduced by Andrew Alexeev
Presented by Owen Garrett
NGINX, Inc.
2. About this webinar
If you’re ready to make your applications more responsive, scalable, fast, and secure, then it’s time to get started with NGINX. In this webinar, you will learn how to install NGINX from a package or from source onto a Linux host. We’ll then look at some common operating system tunings you can make to ensure your NGINX install is ready for prime time.
3. Agenda
• Installing NGINX
– Installation source, NGINX features
• Tuning NGINX
– Operating System tuning
– NGINX software tuning
• Benchmarking NGINX
We’re covering a lot of material.
Please feel free to take screenshots
and read up afterwards.
5. What can NGINX do for you?
• Web Server – serve content from disk
• Application Gateway – FastCGI, uWSGI, Passenger…
• Proxy – HTTP traffic: caching, load balancing…
• Application Acceleration – SSL and SPDY termination
• Performance Monitoring
• High Availability
• Advanced features: bandwidth management, content-based routing, request manipulation, response rewriting, authentication, video delivery, mail proxy, GeoLocation
6. Deployment Plan
Determine the functionality you’ll need
from NGINX:
• Authentication
• Proxy to API gateways
• GZIP
• GeoIP
• etc. etc.
Modules list at nginx.org
7. Three questions before installing NGINX
1. What functionality do you require?
• Standard modules
• NGINX Plus functionality
• Optional NGINX and third-party modules
2. What branch do you want to track?
• Mainline (1.7)
• Stable (1.6)
• Something older?
http://nginx.com/blog/nginx-1-6-1-7-released/
3. How do you want to install?
• “Official” NGINX packages (nginx.org)
• Build from source
• From operating system repository
• From Amazon AWS Marketplace
8. Recommended Install
1. Standard modules (nginx.org) or NGINX Plus
2. Mainline (1.7)
3. Install from nginx.org or nginx-plus repository
nginx.org builds do not include:
• Modules with complex 3rd-party dependencies:
• GeoIP, Image_Filter, Perl, XSLT
• Modules that are part of NGINX Plus
• Third-party modules e.g. Lua, Phusion Passenger
http://nginx.com/products/technical-specs/
9. Difference between NGINX and NGINX Plus
http://nginx.com/products/feature-matrix/
NGINX
• High-performance, open source web server and accelerating proxy.
• Community support through mailing lists on nginx.org, Stack Overflow, subject experts, etc.
NGINX Plus
• Adds enterprise load balancing and application delivery features.
• Full support and updates from NGINX, Inc., the team who built and manage NGINX.
18. Tuning the operating system
• Basic tunables:
– Backlog queue: limits the number of pending connections
– File descriptors: limit the number of active connections
– Ephemeral ports: limit the number of upstream connections
19. Configuring Tunables - HOWTO
• /proc (takes effect immediately, but does not persist across reboots):
# echo "1" > /proc/sys/net/ipv4/tcp_syncookies
• sysctl.conf (persistent; apply with sysctl -p):
# vi /etc/sysctl.conf
# Protect against the common 'SYN flood' attack
net.ipv4.tcp_syncookies = 1
# sysctl -p
20. The Backlog Queue
• What happens when a connection is received?
– SYN / SYNACK [syn_backlog queue] or syncookie
– ACK [listen backlog queue] / NGINX:accept()
– net.ipv4.tcp_max_syn_backlog
– net.ipv4.tcp_syncookies
– net.core.somaxconn
• NGINX: listen backlog=1024
– net.core.netdev_max_backlog
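The backlog tunables above can be collected into a sysctl.conf fragment. The numeric values below are illustrative starting points, not recommendations from the slide; check your distribution’s defaults before changing them:

```
# /etc/sysctl.conf -- backlog tuning (illustrative values)
net.ipv4.tcp_max_syn_backlog = 4096   # pending SYNs awaiting the final ACK
net.ipv4.tcp_syncookies = 1           # fall back to syncookies if the SYN queue overflows
net.core.somaxconn = 1024             # cap on any listening socket's accept backlog
net.core.netdev_max_backlog = 4096    # packets queued from the NIC before the kernel drops them

# nginx.conf -- the listen backlog must fit under somaxconn:
#   listen 80 backlog=1024;
```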
21. File Descriptors
• What happens when a connection is processed?
File descriptors are the key resource – estimate 2 per connection (one for the client side, one for the upstream side when proxying).
– fs.file-max
– /etc/security/limits.conf
– worker_rlimit_nofile 200000;
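The “two descriptors per connection” estimate above turns into a quick sizing calculation. The 100,000-connection target below is a hypothetical figure chosen for illustration:

```shell
# Estimate file descriptors for a proxying NGINX: roughly one fd for the
# client side and one for the upstream side of each connection.
target_connections=100000          # hypothetical peak concurrency
fds_per_connection=2               # downstream + upstream
required_fds=$((target_connections * fds_per_connection))
echo "worker_rlimit_nofile should be at least ${required_fds}"   # -> 200000
# Then make sure the matching limits allow it:
#   fs.file-max (sysctl), nofile in /etc/security/limits.conf, worker_rlimit_nofile
```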
22. Ephemeral Ports
• What happens when NGINX proxies connections?
Each TCP connection requires a unique 4-tuple:
[src_ip:src_port, dst_ip:dst_port]
Ephemeral port range and lifetime:
– net.ipv4.ip_local_port_range
– net.ipv4.tcp_fin_timeout
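A rough back-of-the-envelope check follows from the two sysctls above: the ephemeral port range bounds the number of simultaneous connections to a single upstream ip:port, and dividing by the FIN timeout gives the sustainable rate of new connections. The range values below are common Linux defaults, used only for illustration:

```shell
# Common Linux defaults (check with: sysctl net.ipv4.ip_local_port_range)
port_low=32768
port_high=61000
fin_timeout=60     # net.ipv4.tcp_fin_timeout: seconds a closed socket lingers

ports=$((port_high - port_low))
echo "concurrent connections to one upstream ip:port = ${ports}"     # -> 28232
echo "sustainable new connections per second = $((ports / fin_timeout))"  # -> 470
# Widening the range and lowering tcp_fin_timeout raise these ceilings;
# so does re-using connections with upstream keepalives (slide 27).
```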
25. Tuning NGINX
#1: You don’t need to “tune” very much
#2: Don’t tune just for a benchmark
#3: Use our Professional Services team to help
26. Common tunings
worker_processes auto; – set to ‘auto’ or higher
worker_connections – set to less than file descriptor
count.
accept_mutex: disable for busy services
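The three tunings above could look like this in nginx.conf; the worker_connections value is a hypothetical one, chosen only to illustrate keeping it below the per-process file descriptor limit:

```
# nginx.conf -- illustrative values, not universal recommendations
worker_processes auto;            # one worker per CPU core by default
worker_rlimit_nofile 200000;      # per-worker file descriptor limit

events {
    worker_connections 100000;    # keep a little below worker_rlimit_nofile
    accept_mutex off;             # reduce accept latency on busy services
}
```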
27. The proxy should use keepalives
Without keepalives, every request pays for opening a TCP connection (three-way handshake), writing the HTTP request and reading the HTTP response, then a wait (timeout) until NGINX or the server closes the connection (two-way handshake). With keepalives, NGINX re-uses the connection for another request.
server {
    listen 80;
    location / {
        proxy_pass http://backend;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
    }
}
upstream backend {
    # max_conns and queue are NGINX Plus-only directives
    server webserver1 max_conns=256;
    server webserver2 max_conns=256;
    queue 4096 timeout=15s;
    # maintain a maximum of 20 idle connections to each upstream server
    keepalive 20;
}
31. In conclusion:
• Install from the nginx repo
– NGINX or NGINX Plus
• Basic tuning and configuration
– dmesg / kern.log
• Benchmark / stress test
http://nginx.com/
• NGINX Professional Services and Training
NGINX does a lot of things and can sit at the center of your web infrastructure, so it is worthwhile building a deployment plan.
The deployment plan will identify how many instances you need, where they are installed, and what features are needed, and it will help you construct the configuration.
It’s a mess: when I run apt-cache search nginx on Ubuntu 14.04 with the nginx repo, I get 30 hits, 14 of which are NGINX installation candidates.
Only two of these are the ‘official’ nginx binaries.
accept_mutex is on by default; it should be off to reduce the delay in accepting connections.
worker_processes: always set to auto (the default is 1). For large amounts of disk I/O, set it to larger than the number of CPUs – e.g. consider the wa column in vmstat, but be aware of other workloads on the host.
keepalive_timeout: 75 seconds (check TCP keepalive).
keepalive (the keepalive connection cache): how many simultaneous connections can the backend support?
worker_connections must be less than the number of open files per process; if exceeded, you will see the message “worker_connections are not enough” in the error log. It should be a little less than the number of file descriptors per process.
The configuration shown in blue on the slide (max_conns, queue) is NGINX Plus only.
Answer: stress-test to determine where the problems are, and address them with additional tuning where possible.
You can’t rely on benchmark results to indicate real-world performance.