
Serverless and Containers, AWS Federal Pop-Up Loft


Join this workshop to learn the basics of working with microservices and Amazon ECS. Discover how to prepare two microservice container images, set up the initial Amazon ECS cluster, and deploy the containers with traffic routed through an ALB. You'll deploy a simple web application that enables users to request unicorn rides from the Wild Rydes (http://wildrydes.com/) fleet. The application architecture uses AWS Lambda (https://aws.amazon.com/lambda/), Amazon API Gateway (https://aws.amazon.com/api-gateway/), Amazon S3 (https://aws.amazon.com/s3/), Amazon DynamoDB (https://aws.amazon.com/dynamodb/), Amazon Cognito (https://aws.amazon.com/cognito/), and AWS Amplify Console (https://aws.amazon.com/amplify/console/). Amplify Console hosts static web resources including HTML, CSS, JavaScript, and image files which are loaded in the user's browser via Amazon S3. JavaScript executed in the browser sends and receives data from a public backend API built using AWS Lambda and Amazon API Gateway. Amazon Cognito provides user management and authentication functions to secure the backend API. Finally, DynamoDB provides a persistence layer where data can be stored by the API's AWS Lambda function.



  1. 1. © 2019, Amazon Web Services, Inc. or its Affiliates. All rights reserved. Amazon ECR & ECS Xiang Shen Specialist Solutions Architect 9/16/2019
  2. 2. $ vi Dockerfile $ docker build -t mykillerapp:0.0.1 . $ docker run -it mykillerapp:0.0.1 Running containers in development is easy…
  3. 3. Moving to production is hard [diagram: racks of servers and guest OSes repeated across AZ 1, AZ 2, and AZ 3]
  4. 4. AWS native container stack. MANAGEMENT: the API interface you use to launch applications; tracks application state and connects applications to other resources like load balancers. HOSTING: containers run on demand, with no capacity planning needed, on automatically updated and patched infrastructure. IMAGE REGISTRY: stores your Docker container images in the same Region where you will run them.
  5. 5. Amazon Elastic Container Registry
  6. 6. Amazon Elastic Container Registry. Fully managed: tight integration with Amazon ECS, integration with the Docker toolset, Management Console and AWS CLI support. Highly available: backed by Amazon S3, with regional endpoints. Secure: IAM resource-based policies, AWS CloudTrail audit logs, and images encrypted in transit and at rest.
  7. 7. ECS tl;dr: it can be totally managed, or you can customize resource usage, networking, task placement, etc. to fit your application's needs. Shared responsibility with AWS (because it is a managed service).
  8. 8. Amazon Elastic Container Service (ECS) is a highly scalable, high-performance container orchestration service that supports Docker containers and allows you to run and scale containerized applications on AWS.
  9. 9. Amazon ECS benefits: a fully managed elastic service (you don't need to run anything, and the service scales as your microservices architecture grows); shared-state, optimistic scheduling; and deep integration with other AWS services, including Elastic Load Balancing, Amazon Elastic Block Store, Amazon Virtual Private Cloud, Amazon CloudWatch, AWS Identity and Access Management, and AWS CloudTrail.
  10. 10. Amazon ECS [architecture diagram: EC2 instances running the ECS agent and tasks, an agent communication service, the Amazon ECS API, a cluster management engine with a key/value store, and load balancers fronting traffic from the internet]
  11. 11. Cluster of hosts on Amazon EC2 [same diagram, highlighting the EC2 instances]
  12. 12. Lightweight agent on each host [same diagram, highlighting the ECS agent]
  13. 13. API for launching containers on the cluster [same diagram, highlighting the Amazon ECS API]
  14. 14. Container task is placed on a host [same diagram, highlighting task placement]
  15. 15. Traffic is sent to your host [same diagram, highlighting the load balancer]
  16. 16. Code snippet { "containerDefinitions": [ { "name": "simple-app", "image": "httpd:2.4", "cpu": 10, "memory": 300, "portMappings": [ { "hostPort": 80, "containerPort": 80, "protocol": "tcp" } ], "essential": true, "mountPoints": [ { "containerPath": "/usr/local/apache2/htdocs", "sourceVolume": "my-vol" } ] }, { "name": "busybox", "image": "busybox", "cpu": 10, "memory": 200, "volumesFrom": [ { "sourceContainer": "simple-app" } ], "command": [ "/bin/sh -c \"...\"" ], "essential": false } ], "volumes": [ { "name": "my-vol" } ] }
  17. 17. Code snippet (first container) { "containerDefinitions": [ { "name": "simple-app", "image": "httpd:2.4", "cpu": 10, "memory": 300, "portMappings": [ { "hostPort": 80, "containerPort": 80, "protocol": "tcp" } ], "essential": true, "mountPoints": [ { "containerPath": "/usr/local/apache2/htdocs", "sourceVolume": "my-vol" } ] }, Annotations: create and mount volumes; essential to our task; expose port 80 in the container on port 80 of the host; 10 CPU units (1024 is a full CPU); 300 MB of memory
  18. 18. Code snippet (second container) { "name": "busybox", "image": "busybox", "cpu": 10, "memory": 200, "volumesFrom": [ { "sourceContainer": "simple-app" } ], "command": [ "/bin/sh -c \"...\"" ], "essential": false } ], "volumes": [ { "name": "my-vol" } ] } Annotations: image from Docker Hub; mount volume from the other container; command to exec; volumes
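The fragments above reassemble into one well-formed task definition. A minimal sketch in Python (the busybox command string is elided on the slide and left elided here):

```python
import json

# Reassembly of the two-container task definition shown on the slides.
task_definition = {
    "containerDefinitions": [
        {
            "name": "simple-app",
            "image": "httpd:2.4",
            "cpu": 10,        # 10 CPU units (1024 units = one full CPU)
            "memory": 300,    # memory limit in MB
            "portMappings": [
                {"hostPort": 80, "containerPort": 80, "protocol": "tcp"}
            ],
            "essential": True,   # the task stops if this container stops
            "mountPoints": [
                {"containerPath": "/usr/local/apache2/htdocs",
                 "sourceVolume": "my-vol"}
            ],
        },
        {
            "name": "busybox",
            "image": "busybox",  # pulled from Docker Hub
            "cpu": 10,
            "memory": 200,
            "volumesFrom": [{"sourceContainer": "simple-app"}],
            "command": ["/bin/sh -c \"...\""],  # "..." is elided on the slide
            "essential": False,  # the task keeps running if this one stops
        },
    ],
    "volumes": [{"name": "my-vol"}],
}

print(json.dumps(task_definition, indent=2))
```

Dumping the dict through `json.dumps` is a quick way to catch the stray smart quotes and unescaped strings that crept into the slide version.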
  19. 19. Networking Modes
  20. 20. Bridge mode [diagram: default/root namespace with eth0 and the docker0 bridge (172.17.0.1/16); containers attach via veth pairs with 172.17.0.x addresses]. Containers share the same network interface as the instance. Each container gets a private IP and uses the Docker bridge for any communication. Multiple tasks use the same ENI.
  21. 21. Task networking. 1. Pre-ENI attachment: the primary ENI (eth0) is in the default namespace. 2. ENI attachment: the new ENI (eth1) is in the default namespace. 3. ENI provisioned: the ECS agent invokes CNI plugins to move the new ENI into a new namespace and configure its addresses and routes.
  22. 22. Scheduling
  23. 23. Deploying containers on ECS: choose a scheduler. Batch jobs use the ECS task scheduler: run tasks once, via RunTask (random placement) or StartTask (targeted placement). Long-running apps use the ECS service scheduler: health management, scale-up and scale-down, AZ awareness, grouped containers.
  24. 24. Amazon ECS - Scheduling
  25. 25. Amazon ECS - Scheduling
  26. 26. Amazon ECS - Scheduling
  27. 27. Amazon ECS - Scheduling
  28. 28. Logging and Monitoring
  29. 29. Logging with Amazon CloudWatch Logs. Supported Docker logging drivers: json-file, syslog, journald, gelf, fluentd, awslogs. stdout/stderr output is automatically sent by the driver. awslogs sends logs to Amazon CloudWatch Logs, with a log group for a specific service and a log stream for each container. From Amazon CloudWatch Logs you can search, filter, export to Amazon S3, or send to Amazon Kinesis, AWS Lambda, or Amazon Elasticsearch Service.
  30. 30. Configuring logging in the task definition { "family":"hello-world", "containerDefinitions":[ { "logConfiguration":{ "logDriver":"awslogs", "options":{ "awslogs-group":"awslogs-test", "awslogs-region":"us-east-1", "awslogs-stream-prefix":"hello-world" } }, "name":"hello-world", "image":"aws_account_id.dkr.ecr.us-east-1.amazonaws.com/hello-world", "cpu":10, "memory":500, ...
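With that configuration, the awslogs driver names each container's log stream from the configured prefix, the container name, and the ECS task ID. A small helper illustrating the convention (the task ID below is hypothetical):

```python
def awslogs_stream_name(prefix: str, container_name: str, task_id: str) -> str:
    """Log stream name produced by the awslogs driver when
    awslogs-stream-prefix is set: prefix/container-name/task-id."""
    return f"{prefix}/{container_name}/{task_id}"

# Hypothetical ECS task ID, for illustration only.
print(awslogs_stream_name("hello-world", "hello-world",
                          "9781c248-0edd-4cdb-9a93-f63cb662a5d3"))
```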
  31. 31. Monitoring with Amazon CloudWatch. Metric data is sent to CloudWatch in 1-minute periods and recorded for two weeks. Available metrics: CPUReservation, MemoryReservation, CPUUtilization, MemoryUtilization.
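As a back-of-the-envelope illustration (my own arithmetic, not CloudWatch's exact implementation), CPUReservation compares the CPU units reserved by running tasks against the units registered by the cluster's instances:

```python
def cpu_reservation_pct(reserved_units: int, registered_units: int) -> float:
    # Percentage of the cluster's registered CPU units that tasks have reserved.
    return 100.0 * reserved_units / registered_units

# Illustrative numbers: two instances registering 1,024 units each,
# ten tasks reserving 10 units each.
print(round(cpu_reservation_pct(10 * 10, 2 * 1024), 2))  # 4.88
```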
  32. 32. Monitoring with Amazon CloudWatch
  33. 33. ECS Workflow
  34. 34. Typical user workflow: I have a Docker image, and I want to run the image on a cluster.
  35. 35. Typical user workflow: push image(s) to Amazon ECR.
  36. 36. Push image to ECR: $ docker push my-account.dkr.ecr.us-east-1.amazonaws.com/my-image (ECR repository names must be lowercase)
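The registry URI above follows ECR's fixed format. A small helper that composes it (the account ID and repository name are placeholders):

```python
def ecr_image_uri(account_id: str, region: str, repository: str,
                  tag: str = "latest") -> str:
    # ECR image URIs have the form
    # <account>.dkr.ecr.<region>.amazonaws.com/<repository>:<tag>.
    # Repository names must be lowercase, so normalize defensively.
    return f"{account_id}.dkr.ecr.{region}.amazonaws.com/{repository.lower()}:{tag}"

# Placeholder account ID and repository, for illustration.
print(ecr_image_uri("123456789012", "us-east-1", "my-image", "0.0.1"))
```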
  37. 37. Typical user workflow: create a task definition in Amazon ECS to declare resource requirements.
  38. 38. Register the task definition: $ aws ecs register-task-definition --cli-input-json file://task-definition.json
  39. 39. Task definition { "taskDefinition": { "status": "INACTIVE", "family": "curler", "volumes": [], "taskDefinitionArn": "arn:aws:ecs:us-west-2:123456778901:task-definition/curler:1", "containerDefinitions": [ { "environment": [], "name": "curler", "mountPoints": [], "image": "curl:latest", "cpu": 100, "portMappings": [], "entryPoint": [], "memory": 256, "command": [ "curl -v http://example.com/" ], "essential": true, "volumesFrom": [] } ], "revision": 1 } }
  40. 40. Typical user workflow: run EC2 instances. Use a custom AMI with Docker support and the ECS agent; instances will register with the default cluster.
  41. 41. Create the cluster: $ aws ecs create-cluster --cluster-name "my_cluster"
  42. 42. Typical user workflow: run a task or create a service in Amazon ECS, using the task definition created above.
  43. 43. Create a service: $ aws ecs create-service --service-name ecs-simple-service --task-definition ecs-demo --desired-count 10
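Conceptually, the service scheduler keeps reconciling the number of running tasks toward --desired-count, replacing tasks that stop or fail health checks. A toy sketch of that loop (not the real scheduler):

```python
def reconcile(desired_count: int, running: int) -> str:
    # Toy model of the ECS service scheduler's reconciliation step:
    # converge the running task count toward the desired count.
    if running < desired_count:
        return f"start {desired_count - running} task(s)"
    if running > desired_count:
        return f"stop {running - desired_count} task(s)"
    return "steady state"

# After `create-service ... --desired-count 10`, with 7 tasks running:
print(reconcile(10, 7))  # start 3 task(s)
```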
  44. 44. Thank you!
  45. 45. Wild Rydes: Build a full stack serverless ride-sharing app on AWS. Xiang Shen, Specialist Solutions Architect, 9/16/2019
  46. 46. Computing evolution—A paradigm shift [diagram: level of abstraction vs. focus on business logic]. Physical machines: requires "guess" planning; lives for years on-premises; heavy investments (capital expenditure [CapEx]); low innovation factor; deploy in months.
  47. 47. Computing evolution—A paradigm shift. Virtual machines: hardware independence; faster provisioning speed (minutes/hours); trade CapEx for operating expense; more scale; elastic resources; faster speed and agility; reduced maintenance.
  48. 48. Computing evolution—A paradigm shift. Containerization: platform independence; consistent runtime environment; higher resource utilization; easier and faster deployments; isolation and sandboxing; start speed (deploy in seconds).
  49. 49. Computing evolution—A paradigm shift. Serverless (AWS Fargate, AWS Lambda): continuous scaling; built-in fault tolerance; event-driven; pay per usage; zero maintenance.
  50. 50. Serverless tenets on Amazon Web Services: no server management, flexible scaling, no idle capacity, and high availability.
  51. 51. Snowballing adoption of microservices. Key trends: don't code it yourself (the rise of managed services); stream processing and "embarrassingly parallel" computing; serverless, event-driven architectures.
  52. 52. Lambda—how it works. Bring your own code: Node.js, Java, Python, C#, Go, Ruby, or bring your own language, plus your own libraries. Flexible invocation paths: Event or RequestResponse invoke options, and existing integrations with various AWS services. Simple resource model: select memory from 128 MB to 3 GB in 64-MB steps; CPU and network are allocated proportionately to RAM; actual usage is reported. Fine-grained permissions: uses an AWS Identity and Access Management role for Lambda execution permissions and a resource policy for AWS event sources.
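The memory slider described above can be sanity-checked with a few lines (limits as stated on the slide, where "3 GB" was the 3,008 MB cap of the time; current Lambda limits are higher):

```python
def valid_lambda_memory(mb: int) -> bool:
    # Memory settings per the slide: 128 MB up to 3,008 MB ("3 GB"),
    # in 64-MB increments; CPU and network scale proportionally to RAM.
    return 128 <= mb <= 3008 and (mb - 128) % 64 == 0

print([m for m in (128, 192, 300, 3008, 3072) if valid_lambda_memory(m)])
# [128, 192, 3008]
```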
  53. 53. Amazon API Gateway
  54. 54. API Gateway capabilities: host multiple versions and stages of your APIs; create and distribute API keys to developers; leverage Signature Version 4 to authorize access to APIs; throttle and monitor requests to protect your backend; use Lambda as a backend, or route to EC2 or a load balancer.
  55. 55. An API call flow [diagram: mobile apps, websites, and services call API Gateway over the internet; API Gateway, with its cache, invokes Lambda functions, endpoints on Amazon EC2/AWS Elastic Beanstalk, or any other publicly accessible endpoint, with Amazon CloudWatch monitoring]
  56. 56. Different APIs. Regional APIs: intended for APIs and clients in the same region. Private APIs: run APIs inside a virtual private cloud, i.e. internal microservices. Edge-optimized APIs: package Amazon API Gateway with Amazon CloudFront and distribute to global PoPs.
  57. 57. Additional benefits of API Gateway: a managed cache to store API responses; reduced latency and distributed-denial-of-service protection through CloudFront; SDK generation for iOS, Android, and JavaScript; Swagger support; request and response data transformation.
  58. 58. Amazon DynamoDB
  59. 59. DynamoDB: a fast and flexible NoSQL database service for any scale. Dead simple: GetItem(primaryKey), PutItem(item). Robust depth: fine-grained access control, streams, triggers, cross-region replication, DynamoDB Local, free-text search, Titan graph database integration, strong consistency option, atomic counters.
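The "dead simple" surface on the slide is plain key-value access. An in-memory caricature of the two core calls (the table and attribute names are invented for illustration):

```python
class TinyTable:
    """In-memory caricature of DynamoDB's core calls from the slide:
    PutItem stores an item, GetItem fetches it by primary key."""

    def __init__(self, key_attr):
        self.key_attr = key_attr
        self._items = {}

    def put_item(self, item):
        # Index the item by its primary-key attribute.
        self._items[item[self.key_attr]] = item

    def get_item(self, primary_key):
        # Returns the stored item, or None if the key is absent.
        return self._items.get(primary_key)

# Hypothetical Wild Rydes table keyed on RideId.
rides = TinyTable("RideId")
rides.put_item({"RideId": "r-001", "UnicornName": "Bucephalus"})
print(rides.get_item("r-001"))
```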
  60. 60. Amazon Cognito
  61. 61. Amazon Cognito identity. Federated user identities: your users can sign in through social identity providers such as Facebook, Twitter, and SAML providers, and you can control access to AWS resources from your app. Cognito user pools: you can easily and securely add sign-up and sign-in functionality to your mobile and web apps with a fully managed service that scales to support hundreds of millions of users. [diagram: guest access, your own auth, SAML]
  62. 62. Amazon Cognito user pools. Serverless authentication and user management: add user sign-up and sign-in easily to your mobile and web apps without worrying about server infrastructure. Enhanced security features: verify phone numbers and email addresses and offer multi-factor authentication. Managed user directory: a simple, secure, low-cost, and fully managed service to create and maintain a user directory that scales to hundreds of millions of users.
  63. 63. Comprehensive support for identity use cases
  64. 64. Scenario: Wild Rydes (www.wildrydes.com)
  65. 65. Help Wild Rydes disrupt transportation! So how does this magic work?
  66. 66. Wild Rydes is backed by leading investors: The Barn Accelerator, Tenderloin Capital, Penglai Communications and Post, New Century Technology Corp. Limited
  67. 67. Your task: build the Wild Rydes website. Welcome to Wild Rydes, Inc., employee #3!
  68. 68. Scenario: Wild Rydes. The Wild Rydes Serverless Web Application Workshop introduces the basics of building web applications using serverless infrastructure.
  69. 69. Lab 1: Static website hosting. Objective: configure AWS Amplify Console to host the static resources for your web application.
  70. 70. Lab 2: User management. Objective: allow visitors to register as a new user on Wild Rydes by providing and validating their email address. Amazon Cognito will be used to manage the user pool for Wild Rydes.
  71. 71. Lab 3: Serverless service backend. Objective: create a service backend using Lambda and DynamoDB to handle requests from your frontend static website content.
  72. 72. Lab 4: Create a RESTful API. Objective: use API Gateway to expose the Lambda function you built in the previous module as a RESTful API.
  73. 73. Thank you!
  74. 74. © 2019 Amazon Web Services, Inc. and its affiliates. All rights reserved. May not be copied, modified, or distributed in whole or in part without the express consent of Amazon Web Services, Inc. AWS Serverless & Container Workshop: Lab 1
  75. 75. 2 Overview of lab. This lab introduces the basics of working with microservices and ECS, including preparing two microservice container images, setting up the initial ECS cluster, and deploying the containers with traffic routed through an ALB. You'll need a working AWS account to use this lab. (Note: you can skip this step if you are provided an AWS account.)
  76. 76. 3 1. Verify the VPC. For this lab, we will use the default VPC in the region; it should have at least 3 public subnets. Go to the AWS VPC console and verify the default VPC has a CIDR block such as 172.31.0.0/16. Click the Subnets link on the left side and verify there are subnets in the VPC.
  77. 77. 4 2. Setting up the IAM user and roles. In order to work with ECS from our workstation, we need the appropriate permissions for our developer workstation instance. Go to the IAM console, then Roles > Create Role > AWS Service > EC2. We will later assign this role to our workstation instance. Click Next: Permissions. Enter AmazonEC2ContainerRegistryFullAccess in the Filter text field and select the policy. Click Next: Tags, then Next: Review. Enter ecslabworkstationprofile for the Role name and click Create Role.
  78. 78. 5 Use the same process to create another new role so that EC2 instances in the ECS cluster have the appropriate permissions to access the container registry, auto-scale, etc. We will later assign this role to the EC2 instances in our ECS cluster. In the Create Role screen, enter AmazonEC2ContainerServiceforEC2Role AmazonEC2ContainerServiceAutoscaleRole in the filter text field (without a comma) and select the two policies. In the Review screen, enter ecslabinstanceprofile for the Role name and click Create Role.
  79. 79. 6 Note: by default, the ECS first-run wizard creates ecsInstanceRole for you to use. However, it's a best practice to create a specific role for your own use so that you can add more policies to it later as needed. 3. Launching the cluster. Next, let's launch the ECS cluster that will host our container instances. We're going to put these instances in the public subnets since they're going to be hosting public microservices. Create a new security group by navigating to the EC2 console > Security Group and creating sgecslabpubliccluster; keep the defaults, and make sure the correct VPC is selected when creating the security group. Navigate to the ECS console, click Clusters in the top-left corner, then click Create Cluster. Choose the EC2 Linux + Networking cluster template and click Next Step.
  80. 80. 7 In the next screen, configure the cluster as follows. Cluster Name: EcsLabPublicCluster; Provisioning Model: On-Demand Instance; EC2 instance type: t2.micro; Number of instances: 2; EBS storage: 22; Keypair: none. Networking section (VPC: the default VPC; Subnets: pick 2 public subnets, e.g. us-east-1a and us-east-1b; Security Group: sgecslabpubliccluster; IAM Role: ecslabinstanceprofile). Click Create. It will take a few minutes to create the cluster.
  83. 83. 10 4. Launching the Cloud9 environment. Next, let's launch our developer environment. Think of this as the developer's machine, which runs Docker and has access to our Git repository. Navigate to AWS Cloud9 > Create environment. Provide a name such as lab-env, click the Next step button, and use the default values. Click Next step again, then Create environment. Once the environment is running, you should have something similar to the following. Note: you can resize the different panels, and the bottom one is a terminal in which you can run Linux commands such as docker info. Verify Docker is configured correctly:
  84. 84. 11 $ docker info Containers: 0 Running: 0 Paused: 0 Stopped: 0 Images: 0 Server Version: 17.03.1-ce Storage Driver: overlay2 Backing Filesystem: extfs Supports d_type: true Native Overlay Diff: true Logging Driver: json-file Cgroup Driver: cgroupfs Plugins: Volume: local Network: bridge host macvlan null overlay Swarm: inactive Runtimes: runc Default Runtime: runc Init Binary: docker-init containerd version: (expected: 4ab9917febca54791c5f071a9d1f404867857fcc) runc version: N/A (expected: 54296cf40ad8143b62dbcaa1d90e520a2136ddfe) init version: N/A (expected: 949e6facb77383876aeff8a6944dde66b3089574) Security Options: seccomp Profile: default Kernel Version: 4.9.32-15.41.amzn1.x86_64 Operating System: Amazon Linux AMI 2017.03 … We now have a working developer workspace. 5. Prepping the Docker images At this point, we're going to pretend that we're the developers of both the web and api microservices, and we will get the latest from our source repo. In this case we will just be using the plain
  85. 85. 12 old curl, but just pretend you're using git (Note: please ignore the errors/warnings from the tar command): curl -O http://workshop.summit.awsdemo.me/ecs-lab-code.tar.gz tar -xvf ecs-lab-code.tar.gz Our first step is to build and test our containers locally. If you've never worked with Docker before, there are a few basic commands that we'll use in this workshop, but you can find a more thorough list in the Docker "Getting Started" documentation. To build your first container, go to the web directory. This folder contains our web Python Flask microservice: cd aws-microservices-ecs-bootcamp-v2/web Notice there is a Dockerfile under the directory and you can view the file using: cat Dockerfile To build the container: docker build -t ecs-lab-web . This should output steps that look something like this: Sending build context to Docker daemon 4.096 kB Sending build context to Docker daemon Step 0 : FROM ubuntu:latest ---> 6aa0b6d7eb90 Step 1 : MAINTAINER widha@amazon.com ---> Using cache ---> 3f2b91d4e7a9 If the container builds successfully, the output should end with something like this: Removing intermediate container d2cd523c946a
  86. 86. 13 Successfully built ec59b8b825de To view the image that was just built: $ docker images REPOSITORY TAG IMAGE ID CREATED SIZE ecs-lab-web latest 2b849343f6be 13 seconds ago 452MB ubuntu latest 113a43faa138 12 days ago 81.2MB To run your container: docker run -d -p 8080:3000 ecs-lab-web This command runs the image in daemon mode and maps the Docker container port 3000 to the host (in this case our workstation) port 8080. We're doing this so that we can run both microservices on a single host without port conflicts. To check if your container is running: docker ps This should return a list of all the currently running containers. In this example, it should just return a single container, the one that we just started: CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 7b0d04f4502c ecs-lab-web "python app.py" 9 seconds ago Up 9 seconds 0.0.0.0:8080->3000/tcp eloquent_noether To test the actual container output: curl localhost:8080/web This should return some html text like the following: <html><head>...</head><body>hi! i'm served via Python + Flask. i'm a web endpoint. ...</body></html>
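A recurring stumbling block here is that -p takes HOST_PORT:CONTAINER_PORT, in that order. A tiny parser just to make the direction explicit (illustrative only):

```python
def parse_publish(spec: str):
    # Docker's -p flag is HOST_PORT:CONTAINER_PORT, so "8080:3000"
    # publishes container port 3000 on workstation port 8080.
    host_port, container_port = spec.split(":")
    return int(host_port), int(container_port)

host, container = parse_publish("8080:3000")
print(f"http://localhost:{host} -> container port {container}")
```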
  87. 87. 14 Repeat the same steps with the api microservice. Change directory to /api and repeat the same steps above: cd ../api docker build -t ecs-lab-api . docker images docker run -d -p 8000:8000 ecs-lab-api curl localhost:8000/api The API container should return: { "response" : "hi! i'm ALSO served via Python + Flask. i'm an API." } We now have two working microservice containers. 6. Creating container registries with ECR Once images are built, it’s useful to share them and this is done by pushing the images to a container registry. Let’s create two repositories in Amazon EC2 Container Registry (ECR). Navigate to the ECR console, and click Create repository. Name your first repository ecs-lab-web:
  88. 88. 15 Once you've created the repository, click the repository name and then the View push commands button. It will display the push commands. Take note of these, as you'll need them in the next step. The push commands should look something like this:
  89. 89. 16 Once you've created the ecs-lab-web repository, repeat the process for the ecs-lab-api repository. Take note of the push commands for this second repository. Push commands are unique per repository. 7. Configuring the AWS CLI In our Cloud9 environment, we will use the AWS CLI to push images to ECR. You can confirm that your CLI is set up correctly by running the command to obtain an ECR authentication token. aws ecr get-login
  90. 90. 17 This should output something like: docker login -u AWS -p AQECAHhwm0YaISJeRtJm5n1G6uqeekXuoXXPe5UFce9Rq8/14wAAAy0wggMpBgkq hkiG9w0BBwagggMaMIIDFgIBADCCAw8GCSqGSIb3DQEHATAeBglghkgBZQMEAS4w EQQM+76slnFaYrrZwLJyAgEQgIIC4LJKIDmvEDtJyr7jO661//6sX6cb2jeD/RP0 IA03wh62YxFKqwRMk8gjOAc89ICxlNxQ6+cvwjewi+8/W+9xbv5+PPWfwGSAXQJS Hx3IWfrbca4WSLXQf2BDq0CTtDc0+payiDdsXdR8gzvyM7YWIcKzgcRVjOjjoLJp XemQ9liPWe4HKp+D57zCcBvgUk131xCiwPzbmGTZ+xtE1GPK0tgNH3t9N5+XA2BY YhXQzkTGISVGGL6Wo1tiERz+WA2aRKE+Sb+FQ7YDDRDtOGj4MwZ3/uMnOZDcwu3u UfrURXdJVddTEdS3jfo3d7yVWhmXPet+3qwkISstIxG+V6IIzQyhtq3BXW/I7pwZ B9ln/mDNlJVRh9Ps2jqoXUXg/j/shZxBPm33LV+MvUqiEBhkXa9cz3AaqIpc2gXy XYN3xgJUV7OupLVq2wrGQZWPVoBvHPwrt/DKsNs28oJ67L4kTiRoufye1KjZQAi3 FIPtMLcUGjFf+ytxzEPuTvUk4Xfoc4A29qp9v2j98090Qx0CHD4ZKyj7bIL53jSp eeFDh9EXubeqp6idIwG9SpIL9AJfKxY7essZdk/0i/e4C+481XIM/IjiVkh/ZsJz uAPDIpa8fPRa5Gc8i9h0bioSHgYIpMlRkVmaAqH/Fmk+K00yG8USOAYtP6BmsFUv kBqmRtCJ/Sj+MHs+BrSP7VqPbO1ppTWZ6avl43DM0blG6W9uIxKC9SKBAqvPwr/C Kz2LrOhyqn1WgtTXzaLFEd3ybilqhrcNtS16I5SFVI2ihmNbP3RRjmBeA6/QbreQ sewQOfSk1u35YmwFxloqH3w/lPQrY1OD+kySrlGvXA3wupq6qlphGLEWeMC6CEQQ KSiWbbQnLdFJazuwRUjSQlRvHDbe7XQTXdMzBZoBcC1Y99Kk4/nKprty2IeBvxPg +NRzg+1e0lkkqUu31oZ/AgdUcD8Db3qFjhXz4QhIZMGFogiJcmo= -e none https://<account_id>.dkr.ecr.us-east-1.amazonaws.com To register ECR as your Docker repository, copy and paste that output or run: $(aws ecr get-login --no-include-email --region us-east-1) Your shell will execute the output of that command and respond: Login Succeeded If you are unable to login to ECR, check your IAM permissions. 8. Pushing our tested images to ECR Now that we've tested our images locally, we need to tag and push them to ECR. This will allow us to use them in Task Definitions that can be deployed to an ECS cluster.
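Note: `aws ecr get-login` was removed in AWS CLI v2. On newer CLI versions you can authenticate with `aws ecr get-login-password` instead; a sketch, where the account ID is a placeholder for your own value:

```shell
# Retrieve a temporary ECR password and pipe it straight to docker login
# (replace <account_id> with your AWS account ID).
aws ecr get-login-password --region us-east-1 | \
  docker login --username AWS --password-stdin <account_id>.dkr.ecr.us-east-1.amazonaws.com
```

This avoids ever printing the token to your terminal, which is why it is the recommended form on current CLI versions.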
  91. 91. 18 You'll need your push commands that you saw during registry creation. You can find them again by going back to the repository (ECR Console > Repositories > Select the Repository you want to see the commands for > View Push Commands). To tag and push to the web repository (you can copy/paste the commands #3 and #4 from the View Push Commands output): cd ~/environment/aws-microservices-ecs-bootcamp-v2/web docker tag ecs-lab-web:latest <account_id>.dkr.ecr.us-east-1.amazonaws.com/ecs-lab-web:latest docker push <account_id>.dkr.ecr.us-east-1.amazonaws.com/ecs-lab-web:latest This should return something like this: The push refers to a repository [<account_id>.ecr.us-east- 1.amazonaws.com/ecs-lab-web] (len: 1) ec59b8b825de: Image already exists 5158f10ac216: Image successfully pushed 860a4e60cdf8: Image successfully pushed 6fb890c93921: Image successfully pushed aa78cde6a49b: Image successfully pushed Digest: sha256:fa0601417fff4c3f3e067daa7e533fbed479c95e40ee96a24b3d63b24 938cba8 To tag and push to the api repository: cd ~/environment/aws-microservices-ecs-bootcamp-v2/api docker tag ecs-lab-api:latest <account_id>.dkr.ecr.us-east-1.amazonaws.com/ecs-lab-api:latest docker push <account_id>.dkr.ecr.us-east-1.amazonaws.com/ecs-lab-api:latest Note: why :latest? This is the actual image tag. In most production environments, you'd tag images for different schemes, for example, you might tag the most up-to-date image with :latest, and all other versions of the same container with a commit SHA from a CI job. If you push an image without a specific tag, it will default to :latest, and untag the previous
  92. 92. 19 image with that tag. For more information on Docker tags, see the Docker documentation. You can see your pushed images by viewing the repository in the ECR Console. Alternatively, you can use the CLI: $ aws ecr list-images --repository-name=ecs-lab-api { "imageIds": [ { "imageTag": "latest", "imageDigest": "sha256:f0819d27f73c7fa6329644efe8110644e23c248f2f3a9445cbbb6c84a01e108f" } ] } You have successfully completed Lab 1. Keep all the infrastructure you have built running; you will be building on it in Lab 2.
  93. 93. © 2019 Amazon Web Services, Inc. and its affiliates. All rights reserved. May not be copied, modified, or distributed in whole or in part without the express consent of Amazon Web Services, Inc. AWS Serverless & Container Workshop: Lab 2 Lab 2 builds on Lab 1. 9. Creating the ALB Now that we've pushed our images, we need an Application Load Balancer (ALB) to route traffic to our endpoints. An ALB lets you direct traffic between different endpoints; in this lab, we'll use two separate endpoints: /web and /api. To create the ALB, navigate to the EC2 Console, and select Load Balancers from the left-hand menu. Choose Create Load Balancer. Create an Application Load Balancer: Name your ALB EcsLabAlb and add an HTTP listener on port 80:
  94. 94. 2 Note: in a production environment, you should also have a secure listener on port 443. This will require an SSL certificate, which can be obtained from AWS Certificate Manager, or from your registrar/CA. For the purposes of this lab, we will only create the insecure HTTP listener. DO NOT RUN THIS IN PRODUCTION. Next, select your VPC and at least two subnets in different Availability Zones, which are required for high availability. Make sure to choose the VPC and subnets that were used in Lab 1. Click Next, and create a new security group (sgecslabloadbalancer) with the following rule: Ports Protocol Source 80 tcp 0.0.0.0/0
  95. 95. 3 Continue to the next step: Configure Routing. For this initial setup, we're just adding a dummy health check on /. We'll add specific health checks for our service endpoints when we register them with the ALB. Click through the "Next:Register Targets" step, and continue to the Review step. If your values look correct, click Create. Important Note: If you created your own security group for the ECS Cluster (sgecslabpubliccluster), and only added a rule for port 80, you'll need to add one more. Edit your security group and add a rule to allow your ALB security group (sgecslabloadbalancer) to access the port range used by ECS for dynamic port mapping (0-65535). You will see the security group appear when you start typing “sg-” in the Source textbox for the All TCP rule.
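If you prefer the CLI, the same rule can be added with `aws ec2 authorize-security-group-ingress`. This is a sketch — the group IDs are placeholders for the IDs of sgecslabpubliccluster and sgecslabloadbalancer in your account:

```shell
# Allow the ALB security group to reach the ECS container instances
# on the full port range used for dynamic port mapping.
aws ec2 authorize-security-group-ingress \
  --group-id sg-CLUSTER_ID \
  --protocol tcp \
  --port 0-65535 \
  --source-group sg-ALB_ID
```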
  96. 96. 4 We now have the following security group setup: 10. Creating the Task Definitions We need to create a service in ECS, but before that can be done, the container needs to be part of a Task Definition. Task
  97. 97. 5 Definitions define things like environment variables, the container image you wish to use, and the resources you want to allocate to the service (port, memory, CPU). To create a Task Definition, choose Task Definitions from the ECS console menu. Then, choose Create new Task Definition. For launch type compatibility, select EC2, Next Step. Scroll down and leave the default values for the “Task execution IAM role” and “Task size” sections. Click on the Add Container button. Use ecs-lab-web for Container name. In the Image textbox, paste the Image URI that you used to push the web image to ECR from the previous lab. You can also find the web URI in the ECR web repo (look for the value for Repository URI). For Memory Limit, use a value of 128.
  98. 98. 6 A few things to note here: We've specified a specific container image, including the :latest tag. Although it's not important for this lab, in a production environment where you're creating Task Definitions programmatically from a CI/CD pipeline, Task Definitions could include a specific SHA hash, or a more accurate tag. Under Port Mappings, we've specified a Container Port (3000), but left Host Port as 0. This is required to facilitate dynamic port allocation. This means that we don't need to map the Container Port to a specific Host Port in our Container Definition; instead, we can let ECS allocate an ephemeral port during task placement. To learn more about port allocation, check out the ECS documentation here. Once you've specified your Port Mappings, scroll down and add a log driver. There are a few options here, but for this lab, choose awslogs:
  99. 99. 7 For this web container, make sure the Auto-configure CloudWatch Logs is checked in the Log configuration section. Once you've added your log driver, save the Container Definition by clicking Add, and click on Create to complete the Task Definition. Repeat the Task Definition creation process with the API container, taking care to use the api container image registry, and the correct port (8000) for the Container Port option. For the log driver, make sure Auto-configure CloudWatch Logs is checked.
  100. 100. 8 Don’t forget to click on the Create button to complete the Task Definition.
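The same Task Definition can also be registered from the CLI. The JSON below is a sketch of what the console produces for the web container — the log group name, stream prefix, and the account-specific image URI are placeholders/assumptions:

```shell
# Register the web task definition: EC2 launch type, dynamic host port
# (hostPort 0), and the awslogs log driver. Replace <account_id> with
# your own AWS account ID.
cat > web-task-def.json <<'EOF'
{
  "family": "ecs-lab-web",
  "requiresCompatibilities": ["EC2"],
  "containerDefinitions": [
    {
      "name": "ecs-lab-web",
      "image": "<account_id>.dkr.ecr.us-east-1.amazonaws.com/ecs-lab-web:latest",
      "memory": 128,
      "portMappings": [
        { "containerPort": 3000, "hostPort": 0 }
      ],
      "logConfiguration": {
        "logDriver": "awslogs",
        "options": {
          "awslogs-group": "ecs-lab",
          "awslogs-region": "us-east-1",
          "awslogs-stream-prefix": "web"
        }
      }
    }
  ]
}
EOF
aws ecs register-task-definition --cli-input-json file://web-task-def.json
```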
  101. 101. 9 11. Creating the Services Next, we're going to create the service based on our Task Definition. A service is a group of tasks (which are containers). You can define how many tasks you want to run simultaneously, specify load balancing and auto scaling, and configure many other options. First, we need to create an IAM role for this Service. Navigate to IAM > Roles > Create role: Click Next: Permissions
  102. 102. 10 Click Next: Tags Click Next: Review. In the Review page, use EcsLabServiceRole for the role name and click the Create Role button. Navigate back to the ECS console, and choose the cluster that you created. This should be named EcsLabPublicCluster. From the cluster detail page, choose Services > Create. Make sure the launch type is EC2 (not Fargate) and configure the service as follows:
  103. 103. 11 Choose the web Task Definition you created in the previous section. For the purposes of this lab, we'll only start one copy of each task. In a production environment, you will always want more than one copy of each task running for reliability and availability. You can keep the default AZ Balanced Spread for the Task Placement Policy. To learn more about the different Task Placement Policies, see the documentation, or this blog post. Click Next step to configure load balancing. Choose Application Load Balancer and configure as follows:
  104. 104. 12 Select the web container, choose Add to load balancer and configure load balancing.
  105. 105. 13 Service discovery is not used in this lab, so please uncheck the checkbox for Enable service discovery integration. When we created our ALB, we only added a listener for HTTP:80. Select this from the dropdown as the value for Listener. For Target Group Name, enter a value that will make sense to you later, like ecs-lab-web. For Path Pattern, the value should be /web*. This is the route that we specified in our Python application.
  106. 106. 14 If the values look correct, click Next Step, click through the optional Auto Scaling page, and click Create Service. Repeat this process for the api microservice and task definition. Don't forget to adjust the target group name, path pattern, evaluation order, and health check path accordingly.
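For reference, the equivalent `aws ecs create-service` call might look like the following sketch — the target group ARN is a placeholder you would copy from the EC2 console after the ALB setup:

```shell
# Create the web service behind the ALB target group (placeholder ARN),
# using the IAM service role created above.
aws ecs create-service \
  --cluster EcsLabPublicCluster \
  --service-name ecs-lab-web \
  --task-definition ecs-lab-web \
  --desired-count 1 \
  --role EcsLabServiceRole \
  --load-balancers "targetGroupArn=<target_group_arn>,containerName=ecs-lab-web,containerPort=3000"
```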
  107. 107. 15 12. Testing our service deployments from the console and the ALB You can see service level events from the ECS console. This includes deployment events. You can test that both of your services are deployed and registered properly with the ALB by looking at the service's Events tab:
  108. 108. 16 We can also test from the ALB itself. To find the DNS A record for your ALB, navigate to the EC2 Console > Load Balancers > Select your Load Balancer. Under Description, you can find details about your ALB, including a section for DNS Name. You can enter this value in your browser, and append the endpoint of your service, to see your ALB and ECS Cluster in action: The ALB routes traffic appropriately based on the paths we specified when we registered the containers: /web* requests go to our web service, and /api* requests go to our API service.
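The same test can be run from the Cloud9 terminal; a sketch, assuming the ALB was named EcsLabAlb as above:

```shell
# Look up the ALB's DNS name, then hit both service endpoints.
ALB_DNS=$(aws elbv2 describe-load-balancers --names EcsLabAlb \
  --query 'LoadBalancers[0].DNSName' --output text)
curl "http://${ALB_DNS}/web"
curl "http://${ALB_DNS}/api"
```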
  109. 109. 17 13. More in-depth logging with CloudWatch When we created our Container Definitions, we also added the awslogs driver, which sends logs to CloudWatch. You can see more detailed logs for your services by going to the CloudWatch console, selecting our log group ecs-lab, and then choosing an individual stream: That's a wrap! Congratulations! You've deployed an ECS Cluster with two working endpoints. Clean up (Optional) Don't forget to do the following after you're finished with the lab: Delete the ecs-lab stack. Go to CloudWatch Console > Logs and delete the Log Group ecs-lab. Go to the ECS Console, delete the cluster, deregister the 2 task definitions, and delete the 2 created repositories. Go to the EC2 Console and terminate the ecs-lab-workstation EC2 Instance, the Application Load Balancer, and the 3 Target Groups. Go to the IAM console and delete the 2 roles EcslabInstanceRole and EcsWorkstationRole.
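Some of the clean-up steps can also be scripted; a sketch (service names are the ones used in this lab — adjust to match what you actually created):

```shell
# Scale both services to zero, then delete them.
aws ecs update-service --cluster EcsLabPublicCluster --service ecs-lab-web --desired-count 0
aws ecs update-service --cluster EcsLabPublicCluster --service ecs-lab-api --desired-count 0
aws ecs delete-service --cluster EcsLabPublicCluster --service ecs-lab-web
aws ecs delete-service --cluster EcsLabPublicCluster --service ecs-lab-api
# Delete the ECR repositories (--force removes the images too).
aws ecr delete-repository --repository-name ecs-lab-web --force
aws ecr delete-repository --repository-name ecs-lab-api --force
# Delete the CloudWatch log group.
aws logs delete-log-group --log-group-name ecs-lab
```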
  110. 110. 18 Find the above a little boring? Here are some ideas to make it more interesting: Try to migrate the tasks to Fargate The development team refactored our api and now it requires a host with GPU. Deploy the api containers to EC2 P2 GPU instances by defining a Task Placement Constraint.
