APIs are fueling innovation and digital transformation initiatives. With the explosive growth in APIs, developers and architects are employing different kinds of architectures to process API traffic. Attend this session to learn about commonly deployed API Management architectures.
Approach 1: Centralized API lifecycle management, where the data plane and control plane are tightly coupled.
Approach 2: “Hybrid” architectural approach that involves some processing at the edge by microgateways to process API calls between microservices.
Approach 3: Decoupled data plane and control plane resulting in no need for microgateways or databases to process API calls.
A Tour of Different API Management Architectures
1. Deployment Patterns for API Gateways
(a tour of API management architectures)
Liam Crilly
Director, Product Management, NGINX
3. “When I started NGINX, I focused on a very specific problem – how to handle more customers per a single server.”
– Igor Sysoev, NGINX creator and founder
4. #1 – “Most websites use NGINX” (Source: Netcraft April 2019 Web Server Survey)
40% of NGINX deployments are as an API gateway (Source: NGINX User Survey 2017, 2018, 2019)
5. Gateway vs Management
API Management
• Policy management
• Analytics & monitoring
• Developer documentation
API Gateway
• Request routing
• Authentication
• Rate limiting
• Exception handling
6. API Gateway Essential Functions
• TLS termination
• Client authentication
• Fine-grained access control
• Request routing
• Rate limiting
• Load balancing
• Service discovery of backends
• Request/response manipulation
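Several of these essential functions can be illustrated in a single NGINX configuration fragment. This is a minimal sketch, not a production setup: the hostname, certificate paths, backend addresses, and rate-limit values are all hypothetical, and the fragment is assumed to sit inside an `http {}` block.

```nginx
# Sketch of an edge API gateway (hypothetical names and addresses).
# Defines a per-client rate-limit zone keyed on the client address.
limit_req_zone $binary_remote_addr zone=perclient:10m rate=10r/s;

upstream api_a_backend {
    least_conn;                       # load balancing across instances of API A
    server 10.0.0.11:8080;
    server 10.0.0.12:8080;
}

server {
    listen 443 ssl;                   # TLS termination
    server_name api.example.com;
    ssl_certificate     /etc/ssl/api.crt;
    ssl_certificate_key /etc/ssl/api.key;

    location /api/a/ {                # request routing by URI prefix
        limit_req zone=perclient burst=20;   # rate limiting
        proxy_pass http://api_a_backend;
    }
}
```

Client authentication, fine-grained access control, and request/response manipulation would layer on top of this skeleton with additional directives.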
9. Edge Gateway
[Diagram: clients → edge API gateway → APIs A, B, C, backed by services D–H]
• TLS termination
• Client authentication
• Authorization
• Request routing
• Rate limiting
• Load balancing
• Request/response manipulation
• Façade routing
14. Sidecar Gateway
[Diagram: sidecar gateways co-located with each instance of services D, E, F]
Sidecar Gateway
• Outbound load balancing
• Service discovery integration
• Authentication
• Authorization?
Edge / Security Gateway
• TLS termination
• Client authentication
• Centralized logging
• Tracing injection
18. Deployment Pattern Options
Edge Gateway
+ Monoliths with centralized governance
- Resists frequent changes, DevOps antipattern
Two-Tier Gateway
+ Good flexibility, independent scaling of functions
- Difficult to delegate control to multiple teams
Microgateway
+ DevOps friendly, suits high-frequency updates
- Hard to achieve consistency, no central security control
Sidecar Gateway
+ Policy-based E/W, strict authentication requirements
- Significant control plane complexity
1993 – my first site
1994 – my first web app
2001 – first gen internet starts to struggle
2001 – Igor’s idea for nginx
Solved C10K problem
The busier the site...
We conservatively estimate that at least half of the world’s internet traffic passes through NGINX
So first, just as a level-setting exercise
Let's cover the 8 essential functions of an API gateway
Look at the most common API gateway deployment patterns
Starting with … the Edge Gateway
Here we have our perfectly polished APIs – A, B and C
Follows the classic Application Delivery Controller architecture
Clients talk to the API gateway, deployed at the edge of the application infrastructure
The API gateway does things
As we scale the APIs the API gateway also load balances traffic intelligently across the available instances of each API, and
It’s the traditional pattern, and it works very well for monoliths
What happens when you start decomposing those monoliths, or introducing microservices?
Façade routing allows us to publish a single API that is comprised of multiple backend services.
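Façade routing can be sketched in NGINX configuration. The service names and URI prefixes here are hypothetical; the point is that one published API path space fronts several backend services, with the longest matching prefix winning.

```nginx
# Façade routing sketch: one public /api/store API composed of
# multiple backend services (names are hypothetical).
location /api/store/orders {
    proxy_pass http://order_service;
}
location /api/store/inventory {
    proxy_pass http://inventory_service;
}
```

From the client's point of view this is a single API; the decomposition into services behind the façade is invisible.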
It’s fine if we just want to expose more APIs directly, for example service E
But when API-A needs to make an API call to service D …
We have no security controls, no authentication, no traffic management, no load balancing
Another common pattern is the two-tier gateway: a security gateway in front of a routing gateway.
Now when API A calls service D, it goes through the routing gateway
Here we have a nice separation of concerns – independently scaled
Notice how the security gateway doesn’t do much in the way of, what you would call, API gateway work.
Compatible with existing application delivery controllers, or simple cloud platform load balancers
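The separation of concerns in the two-tier pattern can be sketched as follows. This security-tier fragment is an assumption-laden sketch: the certificate paths, the `routing_tier` and `auth_service` upstreams, and the `/validate` endpoint are all hypothetical. It uses NGINX's `auth_request` module to delegate client authentication before handing everything to the routing tier.

```nginx
# Security-tier sketch: terminate TLS, authenticate the client,
# then pass all traffic to the routing gateway (names hypothetical).
server {
    listen 443 ssl;
    ssl_certificate     /etc/ssl/edge.crt;
    ssl_certificate_key /etc/ssl/edge.key;

    location / {
        auth_request /_auth;             # authentication subrequest
        proxy_pass http://routing_tier;  # routing tier does per-API work
    }

    location = /_auth {
        internal;
        proxy_pass http://auth_service/validate;
        proxy_pass_request_body off;     # auth only needs headers
        proxy_set_header Content-Length "";
    }
}
```

The routing tier then carries the per-API configuration (routing, rate limiting, load balancing), so each tier can be scaled and changed independently.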
As you get deeper into microservices deployments – when these things are in containers, deployed through automated CI/CD pipelines, and owned by multiple DevOps teams the centralized gateway (1-tier or 2-tier) can become an administrative bottleneck. Typically, these centralized gateways are owned by a different team, whose change control processes are incompatible with DevOps-style, high-frequency, automated deployments.
Works well with cloud deployments because you can utilize the simple, native load balancer
E consumes F in the same way as an external client
Particularly good for widely distributed deployments where service E and service F are deployed far away from one another
Be careful with authN per API – because you’ll end up with at least as many authN methods as you have APIs
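One way to avoid a different authN scheme per API is to consolidate on a single mechanism at the gateway. A minimal sketch, assuming API-key authentication via a request header (the key values and header name are placeholders, and the `map` block belongs in the `http` context):

```nginx
# Consolidated API-key check shared by all APIs (keys are placeholders).
map $http_apikey $api_client_name {
    default       "";
    "key-abc-123" "client_one";
    "key-def-456" "client_two";
}

server {
    listen 8080;
    location /api/ {
        if ($api_client_name = "") {
            return 401;                  # unknown or missing key
        }
        proxy_pass http://backends;
    }
}
```

One scheme, enforced in one place, rather than one scheme per API.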
But there’s one more thing to consider
Our application is part of a wider ecosystem - a wider economy - that requires East/West traffic to be successful.
Notice that with microservices we expect a lot more service-to-service communication
That's simply a function of their size
As components get smaller they necessarily depend on other components
The microgateway pattern is an effective approach for service-to-service traffic but it has some drawbacks:
Service E needs to be written or configured to know that service F is available through microgateway F – and that changes for each deployment environment
Also needs to be configured with or dynamically obtain authentication credentials
And what if each service implements a different authentication scheme?!
Difficult to reliably determine whether service E is allowed to call service F – who owns the policy for that
If the microgateway is now responsible for all of the essential functions of an API gateway then it’s not really “micro” any more. Beware of Microgateway solutions that do not deliver the complete feature set.
To address this we can look at the sidecar pattern
Whereby we co-locate the microgateway alongside each service - you can imagine these in a single container.
Outbound load balancing is interesting
What load balancing algorithm do you use for outbound load balancing?
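The algorithm choice matters here because every instance of service E balances independently through its own sidecar; no single sidecar sees the whole picture. A sketch of the outbound side of a sidecar, with hypothetical service names and addresses:

```nginx
# Outbound load-balancing sketch for a sidecar co-located with service E.
upstream service_f {
    least_conn;              # round-robin is the default; least_conn can
                             # suit long-lived API calls
    server f1.internal:8080;
    server f2.internal:8080;
}

server {
    listen 127.0.0.1:9000;   # local listener: service E calls F via its sidecar
    location / {
        proxy_pass http://service_f;
    }
}
```

With many independent sidecars, algorithms such as `random two least_conn` are designed to behave better than plain least-connections, because each balancer only has a partial view of backend load.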
How can you be sure that each sidecar for service E has exactly the same configuration
Where do you put the configuration for who can talk to who?
Does the sidecar gateway co-located with service E know which other services it may request?
Or does the sidecar co-located with service F know which other services may make requests?
Or both?
That’s a lot to keep track of!
Istio?
Now your essential API gateway functions no longer have to fit in a standalone microgateway – they have to fit in each service container.
All of the DevOps teams share that control plane, using RBAC to avoid making changes to someone else’s application.
So you are delegating the sidecar gateway configuration to the control plane. It promises a lot.
With that in mind, let's revisit the 2-tier gateway
It can manage
But it's a classic hub and spoke architecture
It's going to be a bottleneck, right?
We might have hundreds, even thousands of individual services, talking to other services.
But if we imagine these services as external clients
Internet clients
It's really not a big deal
Can we handle 10,000 concurrent connections on a single server? That problem was solved 15 years ago.
So what have we learnt?
Each approach has merits… GOVERNANCE IS KING