On-demand recording: https://www.nginx.com/resources/webinars/microservices-ama-load-balancing-service-discovery/
Speakers:
Charles Pretzer, Technical Architect, NGINX, Inc.
Floyd Smith, Director of Content Marketing, NGINX, Inc.
About the webinar:
Load balancing and service discovery are key to effective microservices implementations. In this webinar, we describe how NGINX Plus supports service discovery and load balancing in the NGINX Microservices Reference Architecture (MRA), related approaches, and converting monolithic apps to microservices.
Watch this webinar to learn:
- Get the latest answers to your questions about implementing microservices
- Learn about the challenges others are facing in development and deployment
- Ask questions about converting monolithic apps to microservices
- Get an update on the NGINX Microservices Reference Architecture
3. MORE INFORMATION AT NGINX.COM
Who Are We?
Charles Pretzer
Technical Architect
Formerly:
- Software architecture consultant
- Engineering lead at Zinio, StyleHive, and others
Floyd Smith
Technical Marketing Writer
Formerly:
- Apple, Alta Vista, Google, and more
- Author of multiple books on technology, including web, marketing, and usability
4. Job Titles
Analyst
Architect
CEO, CIO, CTO
Consultant
cool guy
Cloud Architect, Engineer
Developer
DevOps
Engineer
IT Lead, Manager
Security Architect
Systems Analyst
Tech Lead
Organization Types
Airlines
Computer technology
Consultancy
Ecommerce
Finance
Freelance
Government
Higher education
Mobile phones
Networking technology
Press
SaaS
Telecoms
Reasons for Attending
“Build ms platforms”
“Client proposals”
“Compare the three models to our solution”
“Create API Gateway”
“Developing microservices”
“DevOps”
“Microservices strategies”
“Migrate from F5”
“Move from .NET”
“Move from Apache”
“Moving from monolith”
“Playing with microservices”
Who Attends?
7. About NGINX, Inc.
• NGINX open source project started in the early 2000s
• Company founded in 2011
• NGINX Plus first released in 2013
• VC-backed by enterprise software industry leaders
• Offices in San Francisco, Sunnyvale, Cork, Cambridge, and Moscow
• 1,000+ commercial customers
• 100+ employees
8. >50% of the top 100,000 busiest websites
Source: W3Techs Web Technology Survey
10. Where NGINX Plus fits
Internet
Web Server
Serve content from disk
Application Gateway
FastCGI, uWSGI, Passenger…
Reverse Proxy
Caching, load balancing…
HTTP traffic
11. NGINX Plus works in all environments
Public/Private/Hybrid
Cloud
Bare Metal Containers
12. NGINX and Microservices
• Two-thirds of surveyed developers are using or investigating microservices
• Microservices is #1 topic on our website
• Chris Richardson series, Introduction to Microservices
• Chris Stetson series, NGINX MRA**
• Three Models webinar
• MRA Training
• NGINX Professional Services – creators of the MRA
…and much more
** = backup for this presentation
13. Load Balancing and More
• Load balancing is #1 application for NGINX and NGINX Plus
• Load balancing ebook
• NGINX vs. F5 comparison
• NGINX vs. Citrix comparison
• nginx.conf 2017 keynotes, with NGINX Controller, NGINX Unit, and more; blog posts and NGINX channel on YouTube
• NGINX Plus free trial, contact Sales, or call (800) 915-9122
15. Load Balancing Considerations
• Coupled with Service Discovery and Monitoring
• Must be able to detect dynamic changes
• When new services are added, the load balancer must be able to detect the service and distribute requests to each of its instances
• When a service is scaled, the load balancer must add the new instance(s) to the load balancer pool for request distribution
• Some applications/services require session persistence
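One way NGINX Plus handles such dynamic changes is its on-the-fly reconfiguration API. A minimal sketch, with placeholder addresses and names not taken from the webinar:

```nginx
# Sketch: enabling on-the-fly upstream changes in NGINX Plus.
# Addresses and zone name below are illustrative placeholders.
upstream backend {
    zone backend 64k;          # shared-memory zone makes the group dynamically updatable
    server 10.0.0.10:8080;
}

server {
    listen 8080;
    location /api {
        api write=on;          # NGINX Plus REST API; new servers can be POSTed to
                               # /api/<version>/http/upstreams/backend/servers
    }
}
```

A service-discovery tool watching the registry can then add or remove upstream servers through this API without a configuration reload.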
18. Proxy Model
• NGINX Plus load balances requests only to specific upstreams
• Interprocess communication is left to the services themselves
• Good for starting the migration process, but this is not a true service mesh
19. Router Mesh Model
• Inbound routing through reverse proxy
• Centralized load balancing through a separate load balancing service
• Represents the first NGINX architecture for microservices that implements a service mesh
20. Fabric Model
• Routing is done at the container level
• Services connect to each other as needed, forming a robust service mesh
• NGINX Plus acts as the forward and reverse proxy for all requests
21. Kubernetes Load Balancing
• Implemented by Services as internal or external load balancers
• Ingress and Ingress Controllers are more robust forms of load balancing in Kubernetes
• The NGINX Ingress Controller provides all the load-balancing features of NGINX within Kubernetes
• https://github.com/nginxinc/kubernetes-ingress
22. Istio Load Balancing
• To discover services, Istio assumes the presence of a service registry
• NGINX has built the nginmesh repository, which provides an implementation of a sidecar proxy for Istio using NGINX
• https://github.com/nginmesh/nginmesh
24. NGINX Professional Services
• Developers of the NGINX MRA
• Highlights: NGINX Plus Quick Start and Microservices Architecture Strategy and Consultation
• Find the Fabric Model on GitHub
• MRA Training
• Contact NGINX Sales to discuss
Half of the top 10,000 busiest websites run NGINX. We’re now the number one web server for the top 100,000 as well, and climbing fast in every category.
NGINX Plus extends NGINX with advanced features such as health monitoring, session persistence, and an advanced monitoring interface. NGINX Plus is a complete application delivery platform.
NGINX Plus gives you all the tools you need to deliver your application reliably.
Web Server
NGINX is a fully featured web server that can directly serve static content. NGINX Plus can scale to handle hundreds of thousands of clients simultaneously, and serve hundreds of thousands of content resources per second.
Application Gateway
NGINX handles all HTTP traffic, and forwards requests in a smooth, controlled manner to PHP, Ruby, Java, and other application types, using FastCGI, uWSGI, and Linux sockets.
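As a rough illustration of this application-gateway role, a minimal FastCGI configuration might look like the following; the socket path and document root are hypothetical:

```nginx
# Sketch: forwarding PHP requests to a FastCGI backend (paths are illustrative)
server {
    listen 80;
    root /var/www/app;

    location ~ \.php$ {
        include fastcgi_params;                       # standard FastCGI variables
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass unix:/run/php/php-fpm.sock;      # hypothetical PHP-FPM socket
    }
}
```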
Reverse Proxy
NGINX is a reverse proxy that you can put in front of your applications. NGINX can cache both static and dynamic content to improve overall performance, as well as load balance traffic, enabling you to scale out.
Being software, NGINX Plus can operate in any environment, from bare metal to VMs to containers.
We don’t need to QA and qualify every environment: if you can run Linux, you can run NGINX, and it will just work.
Not just across infrastructure, but the same NGINX software that runs in production can also run in staging and development environments without incurring additional capital costs.
Keeping the different environments in sync as much as possible is an industry best practice and helps to reduce issues where it worked in dev but broke in production.
With NGINX Plus enterprises can easily eliminate this potential gap in the deployment process.
In its simplest form, load balancing involves configuring the load balancer with the locations of each of the servers to which requests should be distributed; in NGINX, this is accomplished by configuring upstreams.
The default load-balancing algorithm is round robin; NGINX also includes least connections, hash, and IP hash. NGINX Plus adds the least time algorithm, based on header or last_byte.
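The upstream mechanism and algorithm choice can be sketched as follows; server names are placeholders, and least_time is an NGINX Plus feature:

```nginx
# Sketch: an upstream group with an explicit load-balancing algorithm
upstream app_servers {
    least_time header;              # NGINX Plus; open source NGINX could use least_conn
    server app1.example.com:8080;   # hypothetical instances
    server app2.example.com:8080;
}

server {
    listen 80;
    location / {
        proxy_pass http://app_servers;
    }
}
```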
A more advanced system can dynamically detect changes and update the load balancer configuration accordingly
Load Balancing in Microservices means evenly distributing requests to pools or instances of services within a microservices architecture
One of the features of NGINX Plus is that configuration can be reloaded without killing the master process.
Session persistence is an exception to even load balancing: algorithms must be aware of scenarios where a request must be routed to a specific server or instance.
Once you know where the services are, you need to distribute traffic to them
Load balancing is simple in its most basic form, round robin, and more complicated in its sophisticated forms.
Policy should be set by the developer with regard to load-balancing type and session persistence.
NGINX Plus has sophisticated algorithms for load balancing
This diagram shows equal distribution to a single service, however, fine-grained control to each service is possible using NGINX
For example, in our car service app, the passenger request service can use a different load balancing algorithm than the billing service
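Fine-grained control like this could be expressed with one upstream block per service, each with its own algorithm; the service names and addresses below are hypothetical:

```nginx
# Sketch: different load-balancing algorithms per microservice
upstream passenger_request {
    least_conn;                 # favor the least-busy instance
    server 10.0.1.11:8000;
    server 10.0.1.12:8000;
}

upstream billing {
    ip_hash;                    # keep a given client on the same instance
    server 10.0.2.11:9000;
    server 10.0.2.12:9000;
}
```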
When migrating to microservices from a monolith, the strangler pattern is very popular
Name taken from the strangler fig
Features of the monolith are refactored out of the application to a separate service
As more features become services, the monolith becomes smaller
Eventually, the monolith is “strangled” away by the microservices created from its features
This model focuses entirely on inbound traffic and ignores the interprocess communication problem. Basically, think of it as putting NGINX on a public-facing server and letting the associated services on the private network fend for themselves.
The good thing is that:
You get all the benefits of HTTP traffic management that you normally get with NGINX:
SSL termination
Traffic shaping and security
Caching
With NGINX Plus you get robust load balancing and service discovery
This model works well for a simple and flat API or a monolith with some basic microservices attached.
For Kubernetes we have an open source Ingress Controller that allows you to easily implement this system using our OSS or commercial version
NGINX Plus gives you dynamic upstreams, active health checks, and robust monitoring
NGINX Plus can also act as a Web Application Firewall using the ModSecurity module
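The dynamic upstreams and active health checks mentioned above might be sketched like this; the resolver address and hostname are placeholders, and health_check and the service parameter are NGINX Plus features:

```nginx
# Sketch: DNS-driven dynamic upstreams plus active health checks (NGINX Plus)
resolver 10.0.0.2 valid=10s;               # hypothetical internal DNS / service registry

upstream backend {
    zone backend 64k;
    server app.internal.example service=http resolve;   # re-resolve DNS SRV records
}

server {
    listen 80;
    location / {
        proxy_pass http://backend;
        health_check interval=5s fails=3 passes=2;      # actively probe each instance
    }
}
```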
Like the proxy model, it has NGINX running in front of the system to manage in bound traffic and gives you all of the benefits of the proxy model
Where it differs is in the introduction of a centralized load balancer between the services
When services need to communicate with other services, they route through this centralized load balancer, and the traffic is distributed to other instances
The Deis Router with NGINX/NGINX Plus work in this manner
Service discovery through DNS and monitoring the service event stream in the registry
But it exacerbates the performance problem by adding another hop in the network connection, requiring another SSL handshake to make it work.
So instead of a 9-step SSL handshake, you need an 18-step SSL handshake.
The final model is what we call the fabric model.
Like the other two models you have a public proxy in front of the system to handle incoming HTTP traffic
Where it differs from other models is that
Instead of a centralized router, each container has an instance of NGINX Plus running in the container
This system acts as a local reverse and forward proxy for all HTTP traffic
Using this system you get service discovery, robust load balancing, and most importantly, high performance, encrypted networking
Services default to acting as internal load balancers across pods
Services defined as NodePort or LoadBalancer types will act as load balancers for cloud providers that support them: GKE/AWS
Ingress and Ingress Controllers are more robust implementations of load balancers for kubernetes
NGINX provides an open source Ingress Controller with instructions for building and an example
This Ingress Controller uses NGINX Plus upstreams to route requests to different internal services which, in turn, balance requests to the associated pods: https://github.com/nginxinc/kubernetes-ingress
The service registry required by Istio is usually provided by Kubernetes or Mesos
The nginmesh repository plays an integral role in the load balancing within Istio, thereby providing a service mesh
- After querying the registry for the locations of the services, Pilot routes requests to the NGINX Sidecar
- Requests are load balanced to the pods which provide the services in the application architecture