Microservices and containers are at the top of everyone's mind these days, but how can you really apply these emerging trends to your work today? This webinar covers:
- Best practices with microservices architecture and approach
- How to connect and deploy your microservices in the cloud and in container environments
- How NGINX can make the process of connecting, scaling, and securing these deployments easy and painless
- New scripting, configuration, and performance capabilities coming in upcoming releases of NGINX
View full webinar on demand at https://www.nginx.com/resources/webinars/connecting-and-deploying-microservices-at-scale/
2. About me
Nick Shadrin
Technical Solutions Architect
Located in SF, CA
Used nginx since 2007
nick@nginx.com
3. Agenda
Intro to microservices (again)
The use of nginx for microservices
Containers or no containers
Nice old features
Shiny new features
Bits of nginx roadmap
4. Building a great application
is only half the battle,
delivering the application
is the other half.
8. Microservices Architecture
[Diagram: NGINX in front of the web tier, application tier, and database]
Microservices enable you to break away from siloed departments (tiers) to a flexible architecture that improves performance, scalability, and manageability.
18. Our Dockerfile
FROM debian:jessie
MAINTAINER NGINX Docker Maintainers "docker-maint@nginx.com"
RUN apt-key adv --keyserver hkp://pgp.mit.edu:80 --recv-keys 573BFD6B3D8FBC641079A6ABABF5BD827BD9BF62
RUN echo "deb http://nginx.org/packages/mainline/debian/ jessie nginx" >> /etc/apt/sources.list
ENV NGINX_VERSION 1.9.3-1~jessie
RUN apt-get update && \
    apt-get install -y ca-certificates nginx=${NGINX_VERSION} && \
    rm -rf /var/lib/apt/lists/*
# forward request and error logs to docker log collector
RUN ln -sf /dev/stdout /var/log/nginx/access.log
RUN ln -sf /dev/stderr /var/log/nginx/error.log
VOLUME ["/var/cache/nginx"]
EXPOSE 80 443
CMD ["nginx", "-g", "daemon off;"]
See more at https://registry.hub.docker.com/_/nginx/
19. Extending your Dockerfile
root@linux# docker run --name mynginx1 -P -d nginx
root@linux# docker run --name mynginx2 -v /var/www:/usr/share/nginx/html:ro \
  -v /var/nginx/conf:/etc/nginx:ro -P -d nginx
Dockerfile:
FROM nginx
RUN rm /etc/nginx/conf.d/default.conf
RUN rm /etc/nginx/conf.d/example_ssl.conf
COPY static-html-directory /usr/share/nginx/html
COPY nginx.conf /etc/nginx/nginx.conf
See more at https://blog.docker.com/2015/04/tips-for-deploying-nginx-official-image-with-docker/
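Once you have an extended Dockerfile like the one above, building and running it follows the standard Docker workflow. A minimal sketch (the `mynginx` tag and container name are just examples, and this assumes the Dockerfile, nginx.conf, and static-html-directory sit in the current directory):

```shell
# Build the customized image from the Dockerfile above
docker build -t mynginx .

# Run it, publishing all exposed ports (80, 443) to random host ports
docker run --name mynginx3 -P -d mynginx

# Find out which host port was mapped to container port 80
docker port mynginx3 80
```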
20. A/B testing
upstream a {
    server web.backend.com:9000;
}
upstream b {
    server staging.web.backend.com:9000;
}
split_clients "${arg_token}" $dynamic {
    97% a;
    *   b;
}
server {
    listen 80;
    location / {
        fastcgi_pass $dynamic;
        # ... other settings ...
    }
}
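The key property of `split_clients` is that the split is deterministic: nginx hashes the source string (here the `token` request argument) with MurmurHash2 into the range [0, 2^32) and assigns the first 97% of that range to upstream `a`, so a given token always lands in the same group. The sketch below simulates that bucketing in Python; it substitutes `zlib.crc32` for MurmurHash2 purely for illustration, so the exact bucket per token will differ from nginx's.

```python
import zlib

SPLIT = 0.97  # matches "97% a;" in the split_clients block

def bucket(token: str) -> str:
    # 32-bit hash of the token; nginx uses MurmurHash2 here,
    # we use crc32 only to illustrate the mechanism
    h = zlib.crc32(token.encode()) & 0xFFFFFFFF
    return "a" if h < SPLIT * 2**32 else "b"

# Deterministic: the same token always maps to the same upstream,
# which keeps a user's A/B group stable across requests.
tokens = [f"user-{i}" for i in range(10_000)]
share_a = sum(bucket(t) == "a" for t in tokens) / len(tokens)
print(f"fraction routed to 'a': {share_a:.3f}")  # typically close to 0.97
```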
22. Stream module
Originally released in the commercial version
Open source since nginx 1.9.0
Used to connect non-HTTP services
23. Stream module
Use it for:
- Reverse proxy
- Load balancing
- SSL offload / reencryption
- Additional security
24. TCP Proxy with stream module
server {
    listen 127.0.0.1:12345;
    proxy_pass 127.0.0.1:8080;
}
server {
    listen 12345;
    proxy_connect_timeout 1s;
    proxy_timeout 1m;
    proxy_pass example.com:12345;
}
server {
    listen [::1]:12345;
    proxy_pass unix:/tmp/stream.socket;
}
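One detail the slide omits: these `server` blocks belong in the top-level `stream {}` context of nginx.conf, not the usual `http {}` context. A minimal sketch of a complete config for one of the examples above:

```nginx
# nginx.conf (abridged) -- stream proxying lives in its own
# top-level context, separate from http {}
events {}

stream {
    server {
        listen 12345;
        proxy_connect_timeout 1s;
        proxy_timeout 1m;
        proxy_pass example.com:12345;
    }
}
```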
25. Stream module - Load Balancing
upstream backend {
    hash $remote_addr consistent;
    server backend1.example.com:12345 weight=5;
    server backend2.example.com:12345;
    server unix:/tmp/backend3;
    server backup1.example.com:12345 backup;
    server backup2.example.com:12345 backup;
}
server {
    listen 12346;
    proxy_pass backend;
}
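The `consistent` parameter on the `hash` directive selects ketama-style consistent hashing, which matters when servers come and go: only the keys that were mapped to a removed server get reassigned, instead of nearly all keys as with a plain modulo hash. The sketch below is a simplified re-implementation of that idea (virtual nodes on a hash ring), not nginx's exact algorithm, with made-up backend names.

```python
import bisect
import hashlib

POINTS_PER_SERVER = 160  # virtual nodes smooth the key distribution

def _hash(s: str) -> int:
    # any well-mixed 32-bit hash works for the illustration
    return int(hashlib.md5(s.encode()).hexdigest()[:8], 16)

def build_ring(servers):
    # each server contributes many points on the ring
    return sorted(
        (_hash(f"{srv}-{i}"), srv)
        for srv in servers
        for i in range(POINTS_PER_SERVER)
    )

def pick(ring, key: str):
    # a key maps to the next ring point clockwise from its hash
    idx = bisect.bisect(ring, (_hash(key),)) % len(ring)
    return ring[idx][1]

servers = ["backend1:12345", "backend2:12345", "backend3:12345"]
ring_full = build_ring(servers)
ring_less = build_ring(servers[:-1])  # backend3 removed

keys = [f"10.0.{i // 256}.{i % 256}" for i in range(5000)]
moved = sum(pick(ring_full, k) != pick(ring_less, k) for k in keys)
print(f"keys remapped after losing one server: {moved / len(keys):.0%}")
# Only backend3's keys move (roughly a third); a plain modulo
# hash would remap about two thirds of all keys.
```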
In reality, microservices architectures look more like this. Here we show an aggregation layer at the front. This layer takes a single client request, makes multiple service requests, and aggregates the responses before returning to the client. This is especially useful for mobile apps: because of the lower bandwidth and higher latency of mobile devices, bundling multiple requests can have a large impact on performance. Aggregation can also be used for non-mobile applications.
Then once the aggregation layer makes its service requests, each service can make requests to other services.
Since the aggregation layer and the services layer can scale independently, you need something to distribute the traffic, and this is where NGINX comes in. It can handle the client requests to the aggregation layer, load balancing them across the available aggregation servers, and then handle the requests from the aggregation layer to the services as well as the requests from one service to another. In all cases it makes sure to route traffic to healthy services, using the NGINX Plus health checks. Services can be easily scaled using tools like docker-compose, Kubernetes, Swarm, the NGINX Plus dynamic configuration API, and other automation infrastructure, allowing for intelligent routing based on factors such as URLs and headers, and letting you do A/B testing, etc.
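As a concrete illustration of the front of that picture, here is a minimal http-context sketch of NGINX load balancing client traffic across aggregation servers with an active health check. The hostnames are hypothetical, and `health_check` (with its required shared-memory `zone`) is an NGINX Plus feature:

```nginx
upstream aggregation {
    zone aggregation 64k;          # shared memory, required for health checks
    server aggr1.example.com:8080;
    server aggr2.example.com:8080;
}

server {
    listen 80;
    location / {
        proxy_pass http://aggregation;
        health_check;              # route only to healthy aggregation servers
    }
}
```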
You build a self-contained service that does something like recommending similar widgets. It may use the same data and data store, but it is a separate logical service, a group of programs with a well-defined interface (APIs, anyone?) and an SLA that other programs can query.
This also simplifies things like scaling if these services are isolated in cloud instances, containers, or VMs.