4. Serverless means…
• Simple but usable primitives
• Scales with usage
• Never pay for idle
• Availability and fault tolerance built in
5. Serverless application
EVENT SOURCE: SERVICES (ANYTHING)
• Changes in data state
• Requests to endpoints
• Changes in resource state
FUNCTION
• Node.js, Python, Java, or C#
6. Event sources that trigger AWS Lambda
DATA STORES: Amazon S3, Amazon DynamoDB, Amazon Kinesis, Amazon Cognito
ENDPOINTS: Amazon API Gateway, Amazon Alexa, AWS IoT, AWS Step Functions
CONFIGURATION REPOSITORIES: AWS CloudFormation, AWS CloudTrail, AWS CodeCommit
EVENT/MESSAGE SERVICES: Amazon CloudWatch, Amazon SNS, Amazon SES, cron events
… and the list will continue to grow!
7. Common use cases
Web applications
• Static websites
• Complex web apps
• Packages for Flask and Express
Data processing
• Real time
• MapReduce
• Batch
Chatbots
• Powering chatbot logic
Backends
• Apps & services
• Mobile
• IoT
Amazon Alexa
• Powering voice-enabled apps
• Alexa Skills Kit
Autonomous IT
• Policy engines
• Extending AWS services
• Infrastructure management
9. What is ALM?
Application Lifecycle Management: developers push changes through a delivery pipeline (source → build → test → production) out to customers, and a feedback loop (monitor → react) runs from customers back to developers.
10. Release processes have four major phases
Source
• Check in source code, such as .java files
• Peer review new code
Build
• Compile code
• Unit tests
• Style checkers
• Code metrics
• Create container images
Test
• Integration tests with other systems
• Load testing
• UI tests
• Penetration testing
Production
• Deployment to production environments
13. Managing continuous delivery
Source → Build → Test → Production
• Source: Amazon S3, AWS CodeCommit, GitHub
• Build/Test: Jenkins (scripts/plugins), AWS CodeBuild
• Production: AWS Lambda
Orchestrate the pipeline with AWS CodePipeline … OR … do it yourself (DIY)
14. AWS CodeBuild
Fully managed build service that compiles source code, runs tests, and produces software packages
Scales continuously and processes multiple builds concurrently
You can provide custom build environments suited to your needs via Docker images
Pay by the minute, only for the compute resources you use
Launched with CodePipeline and Jenkins integration
16. buildspec.yml example
version: 0.1
environment_variables:
  plaintext:
    "INPUT_FILE": "saml.yaml"
    "S3_BUCKET": ""
phases:
  install:
    commands:
      - npm install
  pre_build:
    commands:
      - eslint *.js
  build:
    commands:
      - npm test
  post_build:
    commands:
      - aws cloudformation package --template $INPUT_FILE --s3-bucket $S3_BUCKET --output-template post-saml.yaml
artifacts:
  type: zip
  files:
    - post-saml.yaml
    - beta.json
• Variables to be used by the phases of the build
• Examples of what you can do in the phases of a build:
  • Install packages or run commands to prepare your environment in "install"
  • Run syntax checking and other commands in "pre_build"
  • Execute your build tool/command in "build"
  • Test your app further, or ship a container image to a repository, in "post_build"
• Create and store an artifact in S3
17. Building a deployment package
Node.js & Python
• A .zip file consisting of your code and any dependencies
• Use npm/pip to install libraries
• All dependencies must be at root level
Java
• Either a .zip file with all code/dependencies, or a standalone .jar
• Use Maven / Eclipse IDE plugins
• Compiled class & resource files at root level, required jars in the /lib directory
C# (.NET Core)
• Either a .zip file with all code/dependencies, or a standalone .dll
• Use NuGet / Visual Studio plugins
• All assemblies (.dll) at root level
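To make the Python case concrete, here is a minimal sketch of those packaging steps expressed as buildspec.yml phases, in the style of the earlier example (the file names are illustrative):

phases:
  install:
    commands:
      # Install dependencies at the root level of the package, as Lambda requires
      - pip install -r requirements.txt -t .
  build:
    commands:
      # Zip your code plus the installed dependencies into the deployment package
      - zip -r function.zip . -x function.zip
artifacts:
  type: zip
  files:
    - function.zip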
18. Example:
1. Start with a repository (github.com/awslabs/aws-codedeploy-sample-tomcat)
2. Add buildspec.yml
3. Create a CodePipeline pipeline with a Source and a Build stage
4. Do a build
5. Add a Deploy stage
6. Do a full execution of the pipeline
29. Testing Your Code
• Testing is both a science and an art form!
• Goals for testing your code:
  • Confirm desired functionality
  • Catch programming syntax errors
  • Standardize code patterns and format
  • Reduce bugs due to unintended application usage and logic failures
  • Make applications more secure
31. What service and release step corresponds with which tests?
Build: AWS CodeBuild
Test: third-party tooling
32. AWS CodePipeline
Continuous delivery service for fast and reliable application updates
Model and visualize your software release process
Builds, tests, and deploys your code every time there is a code change
Integrates with third-party tools and AWS
36. Function versioning and aliases
• Versions = immutable copies of code + configuration
• Aliases = mutable pointers to versions
• Develop against the $LATEST version
• Each version/alias gets its own ARN
• Enables rollbacks, staged promotions, and "locked" behavior for clients
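As a rough sketch of how you might drive this from code, the boto3 calls below publish a version and manage an alias (the function name, alias name, and version numbers are illustrative):

import boto3

lam = boto3.client("lambda")

# Publish an immutable version from the current $LATEST code + configuration
version = lam.publish_version(FunctionName="myLambdaFunction")["Version"]

# Point a mutable alias at that version
lam.create_alias(FunctionName="myLambdaFunction", Name="prod",
                 FunctionVersion=version)

# Later, promote a new version (or roll back) by repointing the same alias;
# "7" here is just an example version number
lam.update_alias(FunctionName="myLambdaFunction", Name="prod",
                 FunctionVersion="7")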
37. Lambda Environment Variables
• Key-value pairs that you can dynamically pass to your function
• Available via standard environment variable APIs, such as process.env for Node.js or os.environ for Python
• Can optionally be encrypted via KMS
  – Allows you to specify in IAM which roles have access to the keys to decrypt the information
• Useful for creating environments per stage (e.g., dev, testing, production)
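For instance, a Python function might read its per-stage configuration like this (a minimal sketch; the variable names are illustrative):

import os

# Values are set on the function's configuration and read at runtime
table_name = os.environ["TABLE_NAME"]
stage = os.environ.get("STAGE", "dev")  # default if the variable is unset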
38. API Gateway Stage Variables
• Stage variables act like environment variables
• Use stage variables to store configuration values
• Stage variables are available in the $context object
• Values are accessible from most fields in API Gateway:
  • Lambda function ARN
  • HTTP endpoint
  • Custom authorizer function name
  • Parameter mappings
39. Stage variables and Lambda aliases for stages
Using stage variables in API Gateway together with Lambda function aliases helps you manage a single API configuration and Lambda function for multiple stages.
Example: myLambdaFunction has versions 1 through 8, with aliases pointing at them: prod = version 3, beta = version 6, dev = version 8. "My First API" defines a stage variable, lambdaAlias, set per stage: Prod (lambdaAlias = prod), Beta (lambdaAlias = beta), Dev (lambdaAlias = dev).
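In the API's integration definition, the stage variable selects the alias at request time. A sketch of what that looks like in a Swagger/OpenAPI integration (the region and account ID are placeholders):

x-amazon-apigateway-integration:
  type: aws_proxy
  httpMethod: POST
  # ${stageVariables.lambdaAlias} resolves to prod, beta, or dev per stage
  uri: arn:aws:apigateway:us-east-1:lambda:path/2015-03-31/functions/arn:aws:lambda:us-east-1:123456789012:function:myLambdaFunction:${stageVariables.lambdaAlias}/invocations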
40. Manage multiple versions and stages of your APIs
Works like a source repository: clone your API to create a new version.
Example: API 1 (v1) has a dev stage and a prod stage; cloning it creates API 2 (v2) with its own dev stage.
41. AWS CloudFormation
Create templates of your infrastructure
CloudFormation provisions AWS resources based on dependency needs
Version control, replicate, and update templates like code
Integrates with development, CI/CD, and management tools
JSON and YAML supported
46. AWS commands: package & deploy
Package
• Creates a deployment package (.zip file)
• Uploads the deployment package to an Amazon S3 bucket
• Adds a CodeUri property with the S3 URI
Deploy
• Calls the CloudFormation CreateChangeSet API
• Calls the CloudFormation ExecuteChangeSet API
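Put together, the two steps might look like the following buildspec-style commands, continuing the earlier example (the stack name is illustrative):

post_build:
  commands:
    # Zip, upload to S3, and rewrite CodeUri in one step
    - aws cloudformation package --template $INPUT_FILE --s3-bucket $S3_BUCKET --output-template post-saml.yaml
    # Create and execute a change set against the stack
    - aws cloudformation deploy --template-file post-saml.yaml --stack-name my-serverless-app --capabilities CAPABILITY_IAM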
47. Deploy via CodePipeline
Pipeline flow:
• Package in CodeBuild
• Use CloudFormation actions in CodePipeline to create or update stacks via SAM templates (see the sketch after this list)
• Optional: make use of change sets
• Make use of stage/environment-specific parameter files to pass in Lambda variables
• Test your application between stages/environments
• Optional: make use of manual approvals
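A sketch of what such a CloudFormation action could look like in a pipeline definition (the stack, change set, artifact, and role names are all illustrative):

- Name: CreateChangeSet
  ActionTypeId:
    Category: Deploy
    Owner: AWS
    Provider: CloudFormation
    Version: '1'
  Configuration:
    ActionMode: CHANGE_SET_REPLACE        # create or replace a change set
    StackName: my-serverless-app
    ChangeSetName: my-change-set
    TemplatePath: BuildOutput::post-saml.yaml
    TemplateConfiguration: BuildOutput::beta.json   # per-stage parameter file
    Capabilities: CAPABILITY_IAM
    RoleArn: arn:aws:iam::123456789012:role/CloudFormationDeployRole
  InputArtifacts:
    - Name: BuildOutput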
50. Metrics and logs
CloudWatch metrics
• Default (free) metrics: invocations, duration, throttles, errors
• Create custom metrics for health and status tracking
CloudWatch Logs
• Every invocation generates START, END, and REPORT entries in CloudWatch Logs
• Emit your own log entries
• Use third-party tools for aggregation and visualization
52. Tracing and tracking
Integration with AWS X-Ray (coming soon!)
• Collects data about requests that your application serves
• Visibility into the AWS Lambda service (dwell time, number of retries, latency, and errors)
• Detailed breakdown of your function's performance, including calls made to downstream services and endpoints
Integration with AWS CloudTrail
• Captures calls made to the AWS Lambda API; delivers log files to Amazon S3
• Tracks the request made to AWS Lambda, the source IP address from which the request was made, who made the request, and when it was made
• All control plane APIs can be tracked (not the versioning/aliasing and Invoke APIs)
53. AWS X-Ray
• Identify performance bottlenecks and errors
• Pinpoint issues to specific service(s) in your application
• Identify the impact of issues on users of the application
• Visualize the service call graph of your application
58. Next steps
• Explore the AWS SAM specification on GitHub
• Visit the Lambda console, download a blueprint, and get started with AWS SAM
• Send us your questions, comments, and feedback on the AWS Lambda forums
The serverless computing approach that Lambda brings about isn't just about "not having to manage servers." Serverless means having a simple but usable primitive, your code as a Lambda function, with nothing that looks like a container or server; the programming model and APIs are all oriented around dealing with functions.
Serverless means you only pay for work done, not for provisioned capacity. You don't have to worry about utilization, because you never pay for idle: you pay only for compute time, that is, the time your function takes to run, in units of 100 ms. Most customers get excited thinking about what paying 21 microcents for 100 ms of compute can do for their costs. For example, Nordstrom tells us switching to Lambda reduced the cost of their analytics pipeline by two orders of magnitude, and a publishing company from Singapore tells us they save over $30,000 per month by switching from a proprietary image processing solution to one built on Lambda that processes millions of images a day.
Which brings me to the third aspect: serverless means scaling is built in; you can never overprovision or underprovision. Since your code runs in response to events, Lambda automatically spins up as many instances of your function as required to handle any incoming event rate. Let me repeat that: any event rate. We have customers running backends handling in excess of 100,000 TPS at peak, and others, like AdRoll, who are processing over 55 billion ad impressions a day through Lambda.
And last but not least, serverless means that functions come with high availability and, depending on the workload, fault tolerance built in. Offloading these responsibilities can have a significant impact on the way you own and operate applications running in the cloud. For example, Vidroll tells us what used to take them 10 engineers now takes them two, while handling twice the scale.
But first, let's make sure we are all on the same page as to what a serverless application is.
A serverless application usually starts with an event. That event can be a write to a DynamoDB table, a PutObject to an S3 bucket, an HTTP call, or a host of other Lambda-supported event sources.
That event then triggers your Lambda function, which can be written in Node.js, Python, Java, or C#. Remember, this is your code, and you can program it to do whatever you'd like: call other downstream services to continue processing, return a result, or write metadata to a database.
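A minimal sketch of that pattern in Python, assuming an S3 PutObject event and a hypothetical DynamoDB table named image-metadata:

import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("image-metadata")  # hypothetical table name

def handler(event, context):
    # S3 event notifications deliver one or more records per invocation
    for record in event["Records"]:
        obj = record["s3"]["object"]
        # Write metadata about the new object to DynamoDB
        table.put_item(Item={
            "objectKey": obj["key"],
            "bucket": record["s3"]["bucket"]["name"],
            "size": obj["size"],
        })
    return {"processed": len(event["Records"])}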
Now, what would the lifecycle of such an application look like?
Recap the common use cases for serverless
Web Applications: By combining AWS Lambda with other AWS services, developers can build powerful web applications that automatically scale up and down and run in a highly available configuration across multiple data centers – with zero administrative effort required for scalability, back-ups or multi-data center redundancy.
Mention Flask and Express
Backends: You can build serverless backends using AWS Lambda, Amazon API Gateway, and Amazon DynamoDB to handle web, mobile, and Internet of Things (IoT) requests.
Data Processing: You can build a variety of real-time data processing systems using AWS Lambda, Amazon Kinesis, Amazon S3, and Amazon DynamoDB.
What companies are using serverless?
Quick lambda how-to example in the console
I want to take a moment to talk about different release processes.
Each team’s release process takes a different shape to accommodate the needs of each team.
Nearly all release processes can be simplified down to four stages: source, build, test, and production. Each phase of the process provides increased confidence that the code being made available to customers will work in the way that was intended.
During the source phase, developers check changes into a source code repository. Many teams require peer feedback on code changes before shipping code into production. Some teams use code reviews to provide peer feedback on the quality of code changes. Others use pair programming as a way to provide real-time peer feedback.
During the build phase, an application's source code is built and the quality of the code is tested on the build machine. The most common type of quality check is automated tests that do not require a server in order to execute and can be initiated from a test harness. Some teams extend their quality tests to include code metrics and style checks. There is an opportunity for automation any time a human is needed to make a decision on the code.
The goal of the test phase is to perform tests that cannot be done during the build phase and require the software to be deployed to production-like stages. Often these tests include testing integration with other live systems, load testing, UI testing, and penetration testing. At Amazon we have many different pre-production stages we deploy to. A common pattern is for engineers to deploy builds to a personal development stage, where an engineer can poke and prod their software running in a miniature production-like stage to check that their automated tests are working correctly. Teams deploy to pre-production stages where their application interacts with other systems to ensure that the newly changed software works in an integrated environment.
Finally code gets deployed to production. Different teams have different deployment strategies though we all share a goal of reducing risk when deploying new changes and minimizing the impact if a bad change does get out to production.
Each of these steps can be automated without the entire release process being automated. There are several levels of release automation that I’ll step through.
Continuous Integration
Continuous Integration is the practice of checking your code into the mainline continuously and verifying each change with an automated build and test process. Over the past 10 years, Continuous Integration has gained popularity in the software community. In the past, developers worked in isolation for extended periods of time and only attempted to merge their changes into the mainline of their code once their feature was completed. Batching up changes to merge back into the mainline made not only merging the business logic hard, but also merging the test logic. Continuous Integration practices have made teams more productive and allowed them to develop new features faster. Continuous Integration requires teams to write automated tests which, as we learned, improve the quality of the software being released and reduce the time it takes to validate that the new version of the software is good.
There are different definitions of Continuous Integration, but the one we hear from our customers is that CI stops at the build stage, so I’m going to use that definition.
Continuous Delivery
Continuous Delivery extends Continuous Integration to include deploying to production-like stages and running verification testing against those deployments. Continuous Delivery may extend all the way to a production deployment, but teams keep some form of manual intervention between a code check-in and when that code is available for customers to use.
Continuous Delivery is a big step forward over Continuous Integration, allowing teams to gain a greater level of certainty that their software will work in production.
Continuous Deployment
Continuous Deployment extends Continuous Delivery and is the automated release of software to customers, from check-in through to production, without human intervention. Many of the teams at Amazon have reached a state of Continuous Deployment. Continuous Deployment reduces the time it takes for your customers to get value from the code your team has just written, and your team gets faster feedback on the changes you've made. This fast customer feedback loop allows you to iterate quickly and deliver more valuable software to your customers sooner.
Some of the most common pieces of feedback we get around ALM and tooling, are the 4 you see up on the screen.
First, since a serverless application is essentially a collection of services and resources, how do I configure and manage them as one unit? As an application?
Once I’ve defined and constructed an application, what’s the best way to consistently deploy the same application across different environments or accounts, with minimum effort?
Once I know how to construct and deploy my serverless application, how do I automate that process? How do I set up a release process to automatically build, test and deploy my application to multiple environments?
We are going to use the next hour or so to introduce best practices, involving both new and existing services, that provide answers to the challenges above.
The effort you put into the testing triangle should not be evenly distributed! Many experts in the industry recommend a 70/20/10 mix. (will need sources)
Let's take a look at an example pipeline. I've created a simple three-stage pipeline to talk through my example.
Source actions are special actions. They continuously poll the source providers, such as GitHub and S3, in order to detect changes. Once a change is detected, a new pipeline run is created and the pipeline begins executing. The source actions retrieve a copy of the source information and place it into a customer-owned S3 bucket.
Once the source action is completed, the Source stage is marked as successful and we transition to the Build stage.
In the Build stage we have one action, Jenkins. Jenkins was integrated into CodePipeline as a CustomAction and has the same lifecycle as all custom actions. Talk through interaction
Once the build action is completed, the Build stage is marked as successful and we transition to the Deploy stage.
The Deploy stage contains one action, an AWS Elastic Beanstalk deployment action. The Beanstalk action retrieves the build artifact from the customer's S3 bucket and deploys it to the Elastic Beanstalk web container.
Demo of CodePipeline + CodeBuild off of a repo (could be github or CodeCommit).
AWS SAM is a new specification that extends CloudFormation, and is optimized for serverless.
It allows you to define three resource types commonly used in serverless applications in a simpler and cleaner way: Lambda functions, API Gateway APIs, and DynamoDB tables.
It's worth noting that a SAM template is, at its core, a CloudFormation template. That means you can define any CloudFormation resource in your SAM template to go along with your serverless resources.
This is how that template would look as plain CloudFormation: it is five times longer than the SAM template and defines 8 separate resources. This is what you had to write before SAM existed.
Let’s go over a SAM template to understand the specification better:
First, we are defining a serverless function, which is transformed into a Lambda function under the covers.
The first property specified is CodeUri. This property receives a URI that points to an S3 object. When CloudFormation creates my Lambda function it refers to this URI to retrieve the function’s deployment package.
The next property I'd like you to pay attention to is Policies. The managed policies that you specify here will be included in the execution role that CloudFormation generates for your Lambda function.
Next, we are defining the function’s event source, which in this case is an API. Notice that I don’t need to explicitly define an API as a separate resource. Specifying an API as my function’s event source is sufficient for CloudFormation to generate an API with the specified characteristics for me.
Lastly, I'm defining a DynamoDB table using the SimpleTable resource type. This shortcut generates a DynamoDB table with a single-attribute primary key and a provisioned throughput of 5.
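Putting the walkthrough together, a minimal SAM template sketch might look like the following (the handler, runtime, bucket, path, and policy are illustrative):

AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Resources:
  MyFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: index.handler
      Runtime: nodejs4.3
      CodeUri: s3://my-artifact-bucket/function.zip  # points to the deployment package
      Policies: AmazonDynamoDBReadOnlyAccess         # folded into the generated execution role
      Events:
        GetItems:
          Type: Api                                  # generates the API for you
          Properties:
            Path: /items
            Method: get
  MyTable:
    Type: AWS::Serverless::SimpleTable               # single-attribute key, throughput of 5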
The key piece that makes all of this possible is the transform capability CloudFormation introduced.
When you specify the serverless transform, then under the covers, CloudFormation turns this template into a regular CloudFormation template and uses that template to generate my resources.
After writing your template and defining your resources, you would have to point the CodeUri property to a deployment package located in S3. In order to do that, you would have to perform three steps:
First, you would need to generate your deployment package, which includes your code. Second, you would need to upload that deployment package to S3. Third, you would fill in the CodeUri. Now, if you had 50 Lambda functions defined in your template, you would have to follow these steps for each of these functions.
Luckily for us, the package command takes care of all these steps for us, and outputs a new SAM template that is identical to the old one, except it now has an updated CodeUri property that points to a deployment package in S3.
After we have the new template, we need to provide it to CloudFormation so the service can provision the specified resources in our account. To do that, we can call the deploy command. The deploy command wraps two CloudFormation APIs. The first one is CreateChangeSet, which, unsurprisingly, creates a change set. A change set is essentially the delta between your existing stack and the template you are looking to deploy. If you are creating a new stack, then obviously the change set is going to include everything that's defined in your template. After your change set has been created, the ExecuteChangeSet API will be called. This API simply applies the updates in your change set to your CloudFormation stack.
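A rough sketch of what deploy does under the covers, expressed as the two boto3 calls (the stack, bucket, and change set names are illustrative):

import boto3

cfn = boto3.client("cloudformation")

# Step 1: create a change set describing the delta against the existing stack
change_set = cfn.create_change_set(
    StackName="my-serverless-app",
    TemplateURL="https://s3.amazonaws.com/my-artifact-bucket/post-saml.yaml",
    ChangeSetName="my-change-set",
    ChangeSetType="UPDATE",           # use CREATE for a brand-new stack
    Capabilities=["CAPABILITY_IAM"],
)

# Step 2: once the change set is ready, apply it to the stack
cfn.execute_change_set(ChangeSetName=change_set["Id"])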
Add CloudFormation deploy of application.
Today, you can monitor your application with CloudWatch metrics. Lambda functions come with four metrics out of the box: number of invocations, average duration of your function, number of throttles, and number of errors (4XXs specifically).
If you wish, you could always create custom metrics to track the health and status of your application.
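For example, a function could publish its own metric through the CloudWatch API; a minimal sketch in Python (the namespace and metric name are illustrative):

import boto3

cloudwatch = boto3.client("cloudwatch")

# Publish a custom metric data point from inside the function
cloudwatch.put_metric_data(
    Namespace="MyApp",
    MetricData=[{
        "MetricName": "OrdersProcessed",
        "Value": 1,
        "Unit": "Count",
    }],
)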
For your debugging needs, Lambda integrates with CloudWatch Logs. Every invocation generates START and END entries, as well as a REPORT entry that includes information such as function duration, billed duration, and the amount of memory used by your function. In addition, customers can emit as many log entries as they want to track their function's operations.
Log-based debugging is a great solution that provides visibility into your function's operations, but it doesn't provide any visibility into the Lambda service itself and the overhead it introduces.
In addition, with logs, there is only so much information that you can get for downstream calls that your function made.
First, X-Ray is going to provide you with something called a service map. A service map is a visual representation of how a request flows through your application.
In this example, you see three nodes. Each node includes three data points: average latency, number of requests per minute, and errors. The first two are shown inside the node, while errors are represented by the node's color: 200s will always be green, while 400s and 500s will be displayed in different colors.
Here you see a Lambda function that made a downstream call to a DynamoDB table. The first node is the Lambda service; the timing info inside represents the time the request spent in the Lambda service, from the time it hits our front end (FE) to the time it leaves Lambda. The second node represents your function's execution, and its timing info represents the function's execution time. The third node represents a downstream call to DynamoDB, and the timing info shows the time that passed from when your function made the call until it received a response.
Looking at a visualization such as this one allows you to gain insights in a matter of seconds:
Where are latency issues or errors coming from? Are they caused by the Lambda service, by my function, or by a specific downstream call?
Now, let's say I've looked at the service map and identified a latency issue. I could quickly conclude that the problem stems from the Lambda service itself. But which part exactly? That's when you would want to switch to the trace view.
The trace view allows you to zoom in on your request and see where it spent time in Lambda.
This request is the same request we saw in the service map – an async request that makes a call to DynamoDB.
The first segment you see represents the entire time spent in Lambda, from hitting the FE until leaving the service.
The dwell time segment shows you how much time was spent in the Lambda queue. All async requests to Lambda are put into a queue before being processed.
Next, you see Lambda's attempts to invoke your function. You could see up to three attempts for an async request, as Lambda retries your function before it fails.
As you can see, this request succeeded on the first attempt, and returned a 200.
Next, you can see the actual time it took your function to execute. The difference between attempt 1 and your function's segment, the gap over there, is the time it took Lambda to start your function. Next, you can see the downstream call your function made: in this case, a PutItem to test-table, which took 31 ms and returned a 200.
If any of these calls returned an error, you would be able to access your error information from the trace view.
With X-Ray, you now have visibility into the Lambda service that you never had before.
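When the integration arrives, one way to surface those downstream calls as subsegments would be the X-Ray SDK; a minimal Python sketch, assuming the aws-xray-sdk package and the test-table from the trace above:

from aws_xray_sdk.core import patch_all
import boto3

patch_all()  # patch boto3 (among others) so AWS calls are recorded as subsegments

table = boto3.resource("dynamodb").Table("test-table")

def handler(event, context):
    # This PutItem appears as a downstream DynamoDB subsegment in the trace view
    table.put_item(Item={"id": event.get("id", "unknown")})
    return {"status": 200}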