To get there, we’re first going to discuss some context to set the stage on why serverless architectures on AWS are the next evolution of application design.
Then we’ll cover the services that enable that transformation.
Then I’ll give a live demonstration of creating a serverless architecture, and discuss some other example serverless application patterns.
And finally we’ll discuss some best practices for you to keep in mind as you move forward with designing your own serverless applications.
First, let’s backtrack a little bit to provide some context about how, even though the value of building with AWS Lambda is revolutionary, it’s actually a very natural next step in application design patterns.
One application pattern that perhaps many of you have experienced, and hopefully have put in your past, is the monolithic architecture.
Here, a holistic application is deployed as a single unit, on shared infrastructure. Its tightly coupled components perform many different functions that are maintained by many different development teams. Deployments are extremely risky and disruptive. If any component’s deployment fails, the whole deployment must be rolled back. So deployments rarely occur, and they require an intense amount of centralized approval and coordination.
By now, many organizations have recognized the negative impact that this application pattern has on the speed with which development teams can deliver new functionality.
So moving on from the monolithic pattern, one of the most popular architecture patterns for many years has been the Service Oriented Architecture (SOA) approach. Here, components are decoupled and communicate with each other via web service APIs. This picture shows one of the most common examples of a service oriented architecture: the multi-tiered, or n-tier, architecture. In it, the user experience is delivered on the front end of the application by a presentation tier. The core business logic resides on the server side as part of a logic tier. And any data that needs to be persisted by the application resides inside the data tier.
This pattern allows for a couple really important things, among others.
Infrastructure is decoupled, so deployments can occur independently, and scaling and resource consumption can be managed independently. Development teams can be specialized: teams can be filled with developers whose skillsets are well suited to the area of the application their team owns. What you’re left with is smaller areas of ownership and deeper, narrower technical skills. So faster development processes and smaller, more stable, more frequent deployments can be achieved.
Finally, a pattern that’s become very popular in the last few years is the Microservices Architecture. Here, the same tenets that apply to a Service Oriented Architecture are still valid. But components are broken down into smaller, single-function modular web services so that the benefits of SOA are amplified throughout the architecture.
If you’re working in a centralized organization today that’s happy supporting a monolithic application, this might look a bit complex to support. But remember that in order to support a move to SOA or microservices, there will have to be a similar breaking apart of the deployment and development processes to reap the full benefits.
Because things have been broken apart, you may have a small development team whose responsibility is limited to just these pieces of the application.
The SOA pattern has been a staple for so long, with millions of applications built on it, that the tools for implementing and supporting SOA applications are vast and proven. Many of those same tools lend themselves well to the microservices approach too.
AWS has also created many features and services that make building and supporting web services-based applications easier. ‘Web Services’ is even in our name!
Of course, it’s servers! And all of the complexity and responsibility that servers introduce into your environment.
There are tools and whole industries out there whose entire value proposition is about addressing just one of these questions. You, as an application owner, have to have answers for every single one of them.
That’s not to say there aren’t tools and strategies for answering all of these questions. There are, and I think running on top of AWS gives you the most and best tools for answering them well.
If you could use everything you’ve already learned about designing service-based applications without the need to manage the server-based infrastructure, that would be pretty compelling, right?
You can have your application’s operations be fully managed: no provisioning, high availability built in, no patching or monitoring of operating systems.
Also, when creating web services, a lot of the code that your development team will be responsible for writing is relevant to the web services paradigm itself. Running a web server, exposing an API, marshalling requests/responses, etc. By architecting to be serverless, your developers can focus on the core business logic that matters.
And finally, the serverless applications you build will have their scaling managed for you, no matter what that scale is.
I’ll explain the service by describing the four different components that I think about when building a Lambda-based application. We’ll jump into each of these components now.
First and foremost, is The Lambda Function. This is the heart of AWS Lambda. A Lambda function is the code that you write, the AWS security wrapped around that code, and the resources required to execute your code.
Next, is the AWS service access that your code function will have within your AWS account. This is defined by assigning an IAM role to your Lambda function that dictates which AWS services and resources your code is allowed to integrate with. If you’re familiar with IAM Roles for EC2 instances, this is the same. We’ll talk more about the power this holds during the demo.
Finally, is the amount of resources that your function needs to execute, as defined by the amount of memory you allocate. The amount of memory will dictate the amount of CPU time and Network bandwidth allocated to your code as well.
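Concretely, the code part of a Lambda function is just a plain function that receives the event and a context object. Here is a minimal Python sketch; the handler name and event field are made up for illustration, and the IAM role and memory allocation live in configuration rather than in the code:

```python
import json

def handler(event, context):
    # event: a dict deserialized from the invocation payload
    # context: runtime metadata (request ID, remaining time, memory limit)
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": "Hello, {}!".format(name)}),
    }
```

Invoked locally for testing, `handler({"name": "serverless"}, None)` returns the same structure Lambda would hand back to a caller.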
Next is the event source. For the code that you’ve written and would like to have executed, the event source defines how and when that occurs. There are a number of different event sources available today, and that list continues to grow at a rapid pace.
Each event source type defines what data and metadata are passed to your function so that it can process with all the context your application needs. For example, if you would like your code to execute whenever a new object lands inside an S3 bucket and you choose S3 as an event source… when your Lambda function is triggered, it will be provided metadata like the userIdentity that uploaded the object, the bucket the object was created in, and the key and size of that new object.
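As a sketch, pulling that S3 metadata out of the event might look like the following in Python. The record layout follows the S3 event notification structure; the helper name is mine:

```python
# Extract the fields mentioned above (uploader identity, bucket,
# key, size) from an S3 event notification payload.
def extract_s3_metadata(event):
    records = []
    for record in event.get("Records", []):
        s3 = record["s3"]
        records.append({
            "uploader": record.get("userIdentity", {}).get("principalId"),
            "bucket": s3["bucket"]["name"],
            "key": s3["object"]["key"],
            "size": s3["object"]["size"],
        })
    return records

def handler(event, context):
    for obj in extract_s3_metadata(event):
        print("New object {key} ({size} bytes) in {bucket}".format(**obj))
```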
And if, for example, you choose Amazon API Gateway as your event source, your Lambda function will receive all of the HTTPS request details it needs to process that API request.
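A hedged sketch of the handler side of that: here I assume the request details arrive in fields like `httpMethod` and `queryStringParameters`, as in API Gateway’s proxy-style event; with custom mapping templates, the event contains whatever you map into it:

```python
import json

def handler(event, context):
    # Pull the HTTP request details API Gateway passed in the event.
    method = event.get("httpMethod", "GET")
    params = event.get("queryStringParameters") or {}
    body = {"method": method, "echo": params.get("q", "")}
    # Return a response API Gateway can translate back into HTTP.
    return {"statusCode": 200, "body": json.dumps(body)}
```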
Next is the AWS Lambda Service itself.
This is where the magic happens. The Lambda service is responsible for taking the code that you’ve uploaded, and provisioning it onto the Lambda infrastructure (that you don’t manage), and providing you an API should you ever need to directly invoke your Lambda function.
The service is responsible for making sure that your code is executed for each and every event that you’ve configured as an event source for your function. Whether that be once a week when a new report lands inside an S3 bucket, every time somebody speaks your application’s Invocation Name to Amazon Alexa, or thousands of requests a second that your public API receives.
Finally, it provides you some out-of-the-box operational capabilities, like monitoring your function in CloudWatch and a Logger object that streams any log statements your code makes to CloudWatch Logs.
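For example, in the Python runtime, anything you emit through the standard logging module ends up in CloudWatch Logs without you running any log-shipping infrastructure. A small sketch:

```python
import logging

# Configure the runtime-provided root logger; Lambda captures this
# output and streams it to CloudWatch Logs.
logger = logging.getLogger()
logger.setLevel(logging.INFO)

def handler(event, context):
    logger.info("processing event with %d keys", len(event))
    return len(event)
```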
Last, is the function networking environment. This defines what your code has connectivity to. We provide two broad options here.
First is a default networking environment, where your function has access to the internet but no private connectivity to any resources running inside of your VPCs.
Second, your Lambda functions can be provisioned within your own customer-created Virtual Private Cloud. Here you’ll choose which subnets your functions will execute within. Your Lambda functions will consume private IP space within your subnets, and their connectivity will follow the route rules you have configured in your VPC and the security groups you’ve assigned to your functions. Public internet access happens via NAT; we encourage the use of our NAT Gateway service for this.
It’s important to note that this affects the runtime environment where your code is executed, but invoking your function still occurs via the public AWS Lambda APIs or via the event sources that you’ve configured – regardless of whether the function is deployed inside a VPC or the default networking environment.
Because of these factors, if your function does not require any private network connectivity to VPC deployed resources and internet access is permissible, the default option is a good one. You can change this configuration for an existing Lambda function later if needed.
“Wait, wait, wait. Hold on.” you might be saying “Serverless might be a new(ish) phrase in IT, but there are lots of ways for me to build applications where I don’t have to manage a server already. Aren’t those ‘serverless’?”
Sure, there are lots of ways for you to build applications without using servers today.
Lambda allows you to work within a model that provides an amazing balance between abstraction and control. You get to be abstracted away from all the undifferentiated heavy lifting of infrastructure, and you get full control over the code required to run your application. All of the practices and tools your developers are using for code creation and management can still be used before deploying to Lambda.
Security is AWS’s #1 priority and always will be. Using Lambda means you get native integration with AWS features and services like IAM and VPC that make implementing security best practices easier. Lambda is already part of many mission-critical applications for AWS customers today.
You pay per function execution. When you’ve provisioned a server that no users are interacting with, you’re still paying for that unused capacity. Not with Lambda. No concept of paying for idle capacity, no commitments required. And there is a gigantic free tier available. The first 1 million function executions per month are free with Lambda. And the Lambda free tier does not expire after 12 months like some other AWS free tiers.
There is already a booming community around Lambda, ready to support you, and it has documented answers to a lot of the questions you may run into when starting out.
Your function is colocated on the AWS platform with all of the other services at your function’s fingertips. You could write a simple API and a simple code function, deployed and managed by AWS Lambda, that directly (and securely) integrates with a single relational database that can grow and scale up to 64TB with Amazon Aurora. That’s insane! Not to mention the Support, Solutions Architect, and Partner organizations that are here to help make you successful.
Compute is one small piece of a serverless architecture. A full application needs many different capabilities, and AWS provides a number of services that are fully managed with zero need for you to manage any servers.
Let’s step through an innovative example of an AWS customer running Lambda at scale in their environment.
This is a real production architecture from the AWS customer PlayOn! Sports. They gave a great re:Invent talk on their Lambda architecture if you’d like to hear more details.
In brief, end users stream live video from laptop encoders up to S3 via HLS. As those chunks land in S3 buckets, a cascade of Lambda functions executes to transcode the video chunks into various formats and store them in a separate S3 bucket, from which viewers can stream the video in near real time.
This is a fully serverless architecture that would have required polling mechanisms or message queues managed by a set of processing servers before Lambda. But now, not only does the compute happen on top of Lambda without the need for owning any servers, but the workflow is entirely event driven and managed by Lambda itself.
You might be saying… that example is awesome, and very interesting. Once you have a firm grasp on the power of Lambda, you should consider transforming the way you architect applications to take advantage of the value event-driven architectures provide…
Or you might be saying… gaining internal approval for, and an internal understanding of, those previous applications would be a nightmare with my company’s architecture review processes… my teams and developers are already set up to build multi-tier SOA applications.
Great! So let’s build those.
Amazon API Gateway is a fully managed service for hosting HTTPS APIs on top of AWS.
You can create APIs:
• Support for standard HTTP methods
• Console, API, CLI support
• Swagger import/export
• Custom domains
Configure:
• Choose what your APIs integrate with: AWS Lambda, AWS service APIs, or any other accessible web service
• Add an optional managed cache layer
• Stage variables for dynamic routing
Publish:
• Test new API versions pre-release
• Click-button or single-API-call deployment
• Create multiple versions and stages of your API
• Start letting developers integrate via mock responses
Maintain:
• Managed scaling
• Usage-based pricing
• Ability to create and require API keys for developer integration
• Generate client SDKs programmatically
Monitor:
• Native CloudWatch metrics and CloudWatch Logs integration
• CloudTrail integration to track changes to your API
Secure:
• Native integration with IAM and AWS SigV4 to authorize access to APIs
• Custom authorization
• Mutual SSL with backend web services
• Integration with Amazon CloudFront for DDoS protection
• Throttle and monitor requests to protect your backend
This example looks at using AWS Lambda and Amazon API Gateway to build a dynamic voting application, which receives votes via SMS, aggregates the totals into Amazon DynamoDB, and uses Amazon Simple Storage Service (Amazon S3) to display the results in real time.
The source code and other details can be found here: https://github.com/awslabs/lambda-refarch-webapp
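To make the aggregation step concrete, here is a toy sketch of the vote-tallying logic. The real reference architecture writes counters to DynamoDB; an in-memory Counter stands in here so the logic is visible, and the candidate names are invented:

```python
from collections import Counter

VALID_VOTES = {"red", "green", "blue"}  # example candidates
totals = Counter()  # stand-in for a DynamoDB table of vote counters

def record_vote(sms_body):
    """Normalize an incoming SMS body and tally it if it is a valid vote."""
    vote = sms_body.strip().lower()
    if vote not in VALID_VOTES:
        return False
    # Real code would issue a DynamoDB UpdateItem with an ADD expression
    # so concurrent Lambda invocations increment atomically.
    totals[vote] += 1
    return True
```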
Limit your function sizes. Don’t let your function uploads be the new “monolith”. Remember that the first invocation can take time. A great solution here is to use a scheduled Lambda function to run every 5 minutes to keep a function container alive.
Your Lambda container MIGHT be reused. This can be GREAT for your performance, but your code can’t assume it; it should be stateless. Your code SHOULD take advantage of reuse when it happens (loading configuration, keeping connections alive, in-memory caches, etc.).
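A sketch of that balance: cache expensive initialization at module scope so warm invocations skip it, while staying correct on a cold start. `expensive_init()` is a hypothetical stand-in for loading configuration or opening a connection:

```python
call_count = {"init": 0}  # instrumentation to show init runs only once

def expensive_init():
    # Hypothetical stand-in for slow work: fetching config, opening a
    # database connection, warming a cache, etc.
    call_count["init"] += 1
    return {"db_host": "example.internal"}

CONFIG = None  # survives across invocations within one reused container

def handler(event, context):
    global CONFIG
    if CONFIG is None:            # cold start: do the expensive work
        CONFIG = expensive_init()
    return CONFIG["db_host"]      # warm invocations reuse the cache
```

Calling the handler repeatedly simulates warm invocations: the expensive work runs only on the first call.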
You’ve got scratch space available on disk. This could open up new use cases for you or reduce your cost. Not everything needs to live in memory all the time.
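For instance, a function can cache a computed artifact on disk under /tmp so that warm invocations skip the work. The file path and cached content below are purely illustrative:

```python
import os

CACHE_PATH = "/tmp/report-cache.txt"  # /tmp is the function's scratch space

def handler(event, context):
    # Compute once per container, then serve from disk instead of memory.
    if not os.path.exists(CACHE_PATH):
        with open(CACHE_PATH, "w") as f:
            f.write("expensive result")
    with open(CACHE_PATH) as f:
        return f.read()
```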
Create custom metrics using the AWS SDK and CloudWatch. Think of error scenarios, metrics that your business would love to aggregate and report on, dependency response times, or intra-function response times. A simple call to push metrics to CloudWatch, split on the dimensions important to your application or business, is all you need for very valuable dashboards to be generated.
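A sketch of what such a metric push can look like. The payload shape below matches what CloudWatch’s PutMetricData API expects; the namespace and dimension names are examples, and in a real function you would hand the payload to a boto3 CloudWatch client:

```python
def build_metric(name, value, function_name, unit="Milliseconds"):
    # Build a PutMetricData-shaped payload, split on a dimension
    # (FunctionName here) so dashboards can slice per function.
    return {
        "Namespace": "MyApp/Lambda",  # example namespace
        "MetricData": [{
            "MetricName": name,
            "Dimensions": [{"Name": "FunctionName", "Value": function_name}],
            "Value": value,
            "Unit": unit,
        }],
    }

# Inside a handler you would then (assuming boto3 is available):
#   cloudwatch = boto3.client("cloudwatch")
#   cloudwatch.put_metric_data(**build_metric("DependencyLatency", 42.0, "my-func"))
```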
Mock APIs can let you create experiments/POCs that feel end-to-end with very little work. And sometimes the flaws in your API design won’t be identified until after a client tries to integrate for the first time.
Cognito is a user identity repository that’ll give you the ability to create dynamic IAM policies specific to each user (segregate S3 buckets by user, DynamoDB table/record access, etc).
Stage variables are a great way for the metadata that’s relevant to your API to be injected into your Lambda functions. Can be used for things like A/B testing.
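A sketch of reading stage variables inside a function, assuming API Gateway delivers them in the event under a `stageVariables` key; the variable names here are examples:

```python
def handler(event, context):
    # Stage variables let the same function behave differently per
    # API stage (dev/prod backends, A/B experiment routing, ...).
    stage_vars = event.get("stageVariables") or {}
    table = stage_vars.get("tableName", "votes-dev")  # per-stage backend
    variant = stage_vars.get("experiment", "A")       # A/B routing
    return {"table": table, "variant": variant}
```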
Mapping templates don’t just serve as a mechanism for transformation; they also decouple backend responses from API responses. This will make identifying backward-incompatible changes faster, protecting your end users.
Naming conventions – make your function names programmatically consumable (think of EC2 tags as example values you might include). Use those naming conventions to drive automation (CI/CD, metrics/monitoring, reports, etc)
By externalizing your security posture to IAM, your code and application can be agnostic of it. Code flaws don’t impact security. You’ll get all of the audit and tracking capabilities that IAM provides through things like CloudTrail and Config.
Create separate IAM roles for everything. Least privilege can’t be least if multiple functions/APIs share the same role and you eventually need to make permission changes for one of your functions/APIs but not all. Make this easier by dynamically building your IAM roles by merging CloudFormation templates programmatically.
Externalize the configuration of your functions. Environment variables, log levels, etc. You can “deploy” these configuration changes on the fly. DynamoDB is great for this.
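A minimal sketch of externalized configuration read from environment variables with defaults; the variable names are examples, and the same pattern works with a DynamoDB config table in place of the environment:

```python
import os

def load_config(env=None):
    # Read tunables from the environment so changing LOG_LEVEL or
    # BATCH_SIZE requires no code deploy. env is injectable for tests.
    env = env if env is not None else os.environ
    return {
        "log_level": env.get("LOG_LEVEL", "INFO"),
        "batch_size": int(env.get("BATCH_SIZE", "25")),
    }
```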
Be aware of scaling. These services will manage scaling events for you, but there are some initial levels where throttling could occur. If you plan on adopting Lambda and/or API Gateway with production scale, engage AWS Support or your Solutions Architect to make them aware.
We are hiring across all services at AWS. If you are interested in hearing more about opportunities with AWS, please send us an email to firstname.lastname@example.org We will connect you with a recruiter from the service that you express interest in! There are 2 recruiters in attendance as well that can answer questions. (ask them to raise their hands)
Getting Started with Serverless Architectures
Chuck Meyer, Security Solutions Architect, AWS
September 8, 2016
• Getting Started with Amazon API Gateway
• Serverless Architecture Patterns
• Serverless Best Practices
How serverless architecture patterns with AWS Lambda are the next evolution of application design
Tools to help this pattern are VAST:
• Web Service/Application Frameworks
• Configuration Management Tools
• API Management Platforms
• Etc. Etc. Etc.
AWS has helped too!
• AWS Elastic Load Balancer
• AWS Trusted Advisor
• AWS Elastic Beanstalk
• AWS EC2 Container Service
• Etc. Etc. Etc.
…but many of these tools and innovations are still coupled to a shared dependency…
• What size servers are right for my application?
• How many users create too much load for my servers?
• How much remaining capacity do my servers have?
• How can I detect if a server has failed?
• How many servers should I budget for?
• Which OS should my servers run?
• Which users should have access to my servers?
• How can I control access from my servers?
• How will I keep my server OS patched?
• How will new code be deployed to my servers?
• How can I increase utilization of my servers?
• When should I decide to scale out my servers?
• What size server is right for my performance?
• Should I tune OS settings to optimize my application?
• Which packages should be baked into my server images?
• When should I decide to scale up my servers?
• How will the application handle server hardware failure?
Architect to be Serverless
• No provisioning
• Zero administration
• High availability
• Focus on the code that matters
• Innovate rapidly
• Reduce time to market
• Scale up and scale down
Serverless, event-driven compute service
Lambda = microservice without servers
Components of Lambda
• A Lambda Function (that you write)
• An Event Source
• The AWS Lambda Service
• The Function Networking Environment
The Lambda Function
• Your code (Java, NodeJS, Python)
• The IAM role that code assumes during execution
• The amount of memory allocated to your code (affects CPU and network allocation)
Together: a valid, complete Lambda function
An Event Source
• When should your function execute?
• Many AWS services can be an event source today:
  – Config Rules
  – Amazon Echo
  – …and Amazon API Gateway (more later)
The AWS Lambda Service
• Runs your function code without you managing or provisioning any infrastructure.
• Provides an API to trigger the execution of your function.
• Ensures your function is executed when triggered, in parallel, regardless of scale.
• Provides additional capabilities for your function (monitoring, logging).
The Function Networking Environment
Default – a default network environment within VPC is provided for you.
• Access to the internet is always permitted to your function.
• No access to VPC-deployed assets.
Customer VPC – your function executes within the context of your own VPC.
• Privately communicate with other resources within your VPC.
• Familiar configuration and behavior:
  – Elastic Network Interfaces (ENIs)
  – EC2 Security Groups
  – VPC Route Tables
  – NAT Gateway
Lots of existing ways to abstract away servers
What’s unique about Lambda?
• Abstraction at the code/function level (arbitrary, flexible code)
• The security model (IAM, VPC)
• The pricing model
• Integration with the AWS service ecosystem!
Many Serverless Options on AWS
Compute • Content Delivery • Messaging and Queues • Security • User Management • Monitoring & Logging • Internet of Things
AWS Lambda Best Practices
1. Limit your function size – especially for Java (starting the JVM takes time)
2. Node – remember execution…
3. Don’t assume function container reuse – but take advantage of it when it does occur
4. Don’t forget about disk (500MB /tmp directory provided to each function)
5. Use function Aliases
6. Use the included logger (log statements stream to CloudWatch Logs)
7. Create custom metrics
Amazon API Gateway Best Practices
1. Use Mock integrations
2. Combine with Cognito for managed end-user-based access
3. Use stage variables (inject API config values into Lambda functions for logging, A/B testing, etc.)
4. Use request/response mappings everywhere within reason, not just for transformation
5. Take ownership of HTTP…
6. Use Swagger import/export for cross-account sharing
Additional Best Practices
1. Use strategic, consumable naming conventions (Lambda function names, IAM roles, API names, API stage names, etc.)
2. Use naming conventions and versioning to drive automation (CI/CD, metrics/monitoring, reports)
3. Externalize authorization to IAM roles whenever possible
4. Least privilege and separate IAM roles for everything
5. Externalize configuration – DynamoDB is great for this
6. Contact AWS Support before known large scaling events
7. Be aware of service throttling; engage AWS Support if needed